The Economics of Gambling

Against a background of extraordinary growth in the popularity of betting and gaming across many countries of the world, there has never been a greater need for a study into gambling’s most important factor – its economics. This collection of original contributions drawn from such leading experts as David Peel, Raymond Sauer, Stephen Creigh-Tyte and Donald Siegel covers a rich variety of interesting and topical themes, including:

• betting on the horses
• over–under betting in football games
• national lotteries and lottery fatigue
• demand for gambling
• economic impact of casino gambling

This timely and comprehensive book covers all the bases of the economics of gambling and is a valuable and important contribution to the ongoing and growing debates. The Economics of Gambling will be of use to academics and students of applied, business and industrial economics, as well as being vital reading for those involved or interested in the gambling industry. Leighton Vaughan Williams is Professor of Economics and Finance and Director of the Betting Research Unit at Nottingham Trent University, UK.

Leighton Vaughan Williams

The Economics of Gambling

Edited by Leighton Vaughan Williams

First published 2003 by Routledge, 11 New Fetter Lane, London EC4P 4EE
Simultaneously published in the USA and Canada by Routledge, 29 West 35th Street, New York, NY 10001
Routledge is an imprint of the Taylor & Francis Group
This edition published in the Taylor & Francis e-Library, 2005.
“To purchase your own copy of this or any of Taylor & Francis or Routledge’s collection of thousands of eBooks please go to www.eBookstore.tandf.co.uk.”

© 2003 Leighton Vaughan Williams for selection and editorial matter; individual contributors their chapters

All rights reserved. No part of this book may be reprinted or reproduced or utilised in any form or by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying and recording, or in any information storage or retrieval system, without permission in writing from the publishers.

British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library

Library of Congress Cataloging in Publication Data
Vaughan-Williams, Leighton.
The economics of gambling / Leighton Vaughan-Williams.
p. cm.
Includes bibliographical references and index.
1. Gambling. 2. Gambling – Economic aspects. I. Title.
HV6710 .V38 2002
338.4'7795 – dc21    2002031818

ISBN 0-203-98693-8 (Master e-book ISBN)

ISBN 0-415-26091-4 (Print Edition)

Contents

List of figures  vii
List of tables  ix
List of contributors  xiii

1  Introduction  1
   LEIGHTON VAUGHAN WILLIAMS
2  The favourite–longshot bias and the Gabriel and Marsden anomaly: an explanation based on utility theory  2
   MICHAEL CAIN, DAVID PEEL AND DAVID LAW
3  Is the presence of inside traders necessary to give rise to a favorite–longshot bias?  14
   ADI SCHNYTZER AND YUVAL SHILONY
4  Pari-mutuel place betting in Great Britain and Ireland: an extraordinary opportunity  18
   DAVID JACKSON AND PATRICK WALDRON
5  Betting at British racecourses: a comparison of the efficiency of betting with bookmakers and at the Tote  30
   JOHN PEIRSON AND PHILIP BLACKBURN
6  Breakage, turnover, and betting market efficiency: new evidence from Japanese horse tracks  43
   W. DAVID WALLS AND KELLY BUSCHE
7  The impact of tipster information on bookmakers’ prices in UK horse-race markets  67
   MICHAEL A. SMITH
8  On the marginal impact of information and arbitrage  80
   ADI SCHNYTZER, YUVAL SHILONY AND RICHARD THORNE
9  Covariance decompositions and betting markets: early insights using data from French trotting  95
   JACK DOWIE
10  A competitive horse-race handicapping algorithm based on analysis of covariance  106
   DAVID EDELMAN
11  Efficiency in the handicap and index betting markets for English rugby league  114
   ROBERT SIMMONS, DAVID FORREST AND ANTHONY CURRAN
12  Efficiency of the over–under betting market for National Football League games  135
   JOSEPH GOLEC AND MAURRY TAMARKIN
13  Player injuries and price responses in the point spread wagering market  145
   RAYMOND D. SAUER
14  Is the UK National Lottery experiencing lottery fatigue?  165
   STEPHEN CREIGH-TYTE AND LISA FARRELL
15  Time-series modelling of Lotto demand  182
   DAVID FORREST
16  Reconsidering the economic impact of Indian casino gambling  204
   GARY C. ANDERS
17  Investigating betting behaviour: a critical discussion of alternative methodological approaches  224
   ALISTAIR BRUCE AND JOHNNIE JOHNSON
18  The demand for gambling: a review  247
   DAVID PATON, DONALD S. SIEGEL AND LEIGHTON VAUGHAN WILLIAMS

Index  265

Figures

2.1   Range of the indifference map in (µ, p) space for Markowitz Utility Function  6
4.1   Potential operator loss as a function of the fraction of the pool (fmax) bet on the favourite  23
4.2   Expected operator loss for three values for the probability of the favourite being placed as a function of the fraction of the pool (fmax) bet on the favourite  24
8.1   The dynamics of the favourite–longshot bias  89
9.1   Vincennes winter meeting 1997/98  97
9.2   Winning proportion (y) against probability assigned (x), fifty-seven odds ranges  97
13.1  (A) Score differences; (B) point spreads; (C) forecast errors  149
13.2  Distribution of forecast errors PS − PSPLAY  158
14.1  National Lottery on-line weekly ticket sales  166
14.2  The halo effect for the case of the UK National Lottery  168
14.3  Thunderball sales  171
14.4  Lottery Extra sales  173
14.5  Instants sales  174
16.1  A model of the Arizona economy with Indian casinos  211
17.1  Predicted win probabilities  241

Tables

2.1   Simulated Tote and bookmaker returns  9
2.2   Pari-mutuel and bookmaker pay-outs for winning bets (1978): cumulative  10
2.3   Mean bookmaker returns at starting price odds  10
2.4   Pari-mutuel and bookmaker pay-outs for winning bets (1978): non-cumulative  10
2.5   Estimated Tote pay-out moments  11
5.1   Average winning pay-outs per £1 bet  34
5.2   Average winning pay-outs per £1 bet  34
5.3   Opening prices, SPs and mean drift (expressed as percentage of drift from opening price)  36
5.4   Average drift (expressed as percentage of drift from opening price)  38
5.5   Average place pay-outs per £1 bet for all placed horses  39
5.6   Average pay-outs for £1 each-way bets on all placed horses (including win and place)  39
6.1   z-statistics for Japanese horse tracks ordered by turnover  46
6.2   z-statistics grouped by index of breakage  52
6.3   Distribution of returns for all Japanese races  55
6.4   Distribution of returns from all JRA races  56
6.5   Distribution of returns from all NAR races  57
6.6   Unconditional power and cubic utility function estimates  58
6.7   Returns for all Japanese races conditional on a heavy favorite  59
6.8   Returns from JRA races conditional on a heavy favorite  60
6.9   Returns from NAR races conditional on a heavy favorite  61
6.10  Utility function estimates conditional on a heavy favorite  62
7.1   Classification of racehorses by incidence of tips  69
7.2   Null and alternative hypotheses related to price movements  70
7.3   Mean price movements from early morning to SP, measured in percentage probabilities (max-early and mean-early baseline)  71
7.4   Significance of differences in means of horses categorised by tipping status  72
7.5   Returns to proportional stakes inclusive of transaction costs by tipping status, in per cent  73
7.6   Comparison of rates of return in the current and Crafts datasets, by direction and magnitude of price movement  75
7.7   Returns to a level stake of £1 per bet, current dataset, by price movement and tip status  76
7.8   Significant price movers as a percentage of total runners in tip categories  77
8.1   Regression of mean win frequency against mean subjective probability  90
8.2   Basic statistics on the flow of useful information (per minute during the relevant time interval)  91
9.1   Decompositions of PMH and PMU probability scores  102
11.1A OLS estimation of actual points differences in handicap betting with twenty trials  123
11.1B OLS estimation of actual points differences in index betting  123
11.2  Example of index betting from a match with point spread (8–11)  127
11.3A Simulation results from handicap betting on all home teams  128
11.3B Simulation results from index betting on all home teams  128
11.4A Simulated win rates from betting on favourites or underdogs in the handicap betting market  129
11.4B Simulated returns from betting on favourites or underdogs in the index betting market  129
11.5A Simulated returns from various betting strategies applied to lowest handicaps  130
11.5B Simulated returns from various betting strategies applied to best quotes in the index market  131
12.1  Summary statistics for NFL point spread and over–under bets during the 1993–2000 seasons  137
12.2  Regression estimates for tests of market efficiency for NFL over–under bets during the 1993–2000 seasons  139
12.3  Market efficiency tests for NFL over–under bets during the 1993–2000 seasons adjusted for overtime games and point spread  140
12.4  Over–under betting strategies winning percentages: the profitability of over–under betting strategies for National Football League games over the 1993 through 2000 seasons, for combined totals and for individual years  142
12.5  Favorite–underdog point spread betting strategies using the over–under line: the profitability of point spread betting strategies for National Football League games over the 1993 through 2000 seasons  143
13.1  Score differences and point spreads for NBA games  150
13.2  Injury spell durations and hazard rates  153
13.3  Forecast errors of point spreads by game missed  159
14.1  Ways to win at Thunderball  171
14.2  Big Draw 2001  172
14.3  National Lottery stakes (£ million)  175
14.4  Trends in betting and gaming expenditure  176
14.5  Trends in betting and gaming expenditure relative to total consumer spending  177
15.1  Elasticity estimates in the UK Lotto  191
16.1  Arizona Indian tribal population and gaming capacity  209
16.2  Per capita slot machine revenue, unemployment rates, welfare and transfer payments for Arizona Indian reservations  210
16.3  Variables and sources of data  214
16.4  Results of state TPT regressions using quarterly data  215
16.5  Results of state TPT regressions using monthly data  217
16.6  Results of county TPT regressions  217
16.7  City hotel and bed taxes  219
16.8  Estimated impact of an increase in slot machines  220
17.1  Comparison of bettors’ aggregate subjective probability judgements and horses’ observed (objective) probability of success  238
18.1  Key empirical studies of demand characteristics and substitution effects for various types of gambling activity  252

Contributors

Gary C. Anders is Professor of Economics at Arizona State University West. He received his PhD from Notre Dame University. He has written extensively on the economic impact of casino gambling and Native American economic development.

Philip Blackburn is Senior Economist at Laing and Buisson, a leading health and social care analysis firm. He was previously an economist for the Office for National Statistics. After gaining his MA in Economics at the University of Kent in 1994, he researched into various racetrack betting markets.

Alistair Bruce is Deputy Director of Nottingham University Business School and Professor of Decision and Risk Analysis. He has published widely in economics, management and psychology journals in the area of decision making under uncertainty, with particular reference to horse-race betting markets.

Kelly Busche’s research has been concentrated on the economics of horse-race betting. He is now retired and continues to work on betting.

Michael Cain is Reader in Management Science at the University of Wales, Bangor. He has published in a number of journals, including the Journal of the American Statistical Association, Journal of Risk and Uncertainty, Naval Research Logistics, the American Statistician, and Annals of the Institute of Statistical Mathematics.

Stephen Creigh-Tyte is Chief Economist at the Department for Culture, Media and Sport, and Visiting Professor in Economics in the Department of Economics and Finance at the University of Durham. He has authored over 100 books, articles and research papers, covering labour, small-business, cultural sector and gambling economics.

Anthony Curran is a recent graduate in Business Economics with Gambling Studies at the University of Salford. He is presently a freelance researcher.

Jack Dowie is a Health Economist and Decision Analyst, who recently took up the newly created chair in Health Impact Analysis at the London School of Hygiene and Tropical Medicine. He retains a long-established interest in betting markets, and is actively involved in French trotting.

David Edelman is Associate Professor of Finance at the University of Wollongong (Australia). He has published widely in the areas of finance, data mining, and statistical theory, as well as on horse-race betting. He is an avid racegoer and jazz pianist.

Lisa Farrell completed her PhD, The economics of lotteries, in 1997. Lisa’s research area is applied microeconomics, with a focus on lotteries and gambling. Her work spans the theoretical and microeconometric aspects of these issues. She is currently employed as a Senior Lecturer, Department of Economics, University of Melbourne.

David Forrest is Lecturer in Economics in the University of Salford. He has published extensively in his fields of current interest, notably the economics of gambling, economics of sport and valuation issues in cost–benefit analysis.

Joseph Golec is Associate Professor of Finance at the University of Connecticut. He has published on the efficiency of gambling markets, mutual fund compensation practices and healthcare services in various finance and economics journals.

David Jackson is a Research Fellow in Statistics at Trinity College, Dublin. His sports statistics publications include papers related to gambling and others investigating psychological momentum in contests that are decided by a series of trials.

Johnnie Johnson is Professor of Decision and Risk Analysis and Director of the Centre for Risk Research in the School of Management at the University of Southampton. His research focuses on risk perception, risk management and decision making under uncertainty, particularly in relation to decisions in betting markets.

David Law is Lecturer in Economics at the School of Business, University of Wales, Bangor. His research interests are in financial and gambling markets, and economic development. He has published articles in Economica, Journal of Forecasting and Journal of Risk and Uncertainty.

David Paton is Head of the Economics Division at Nottingham University Business School. He has published widely on subjects including betting markets, the economics of advertising and the economics of fertility. He is married to Paula and has three children, Stanley, Archie and Sadie.

David Peel is Professor of Economics at Cardiff Business School. His research interests are in macroeconomics, forecasting, nonlinear systems and gambling markets. He has published extensively in a variety of journals, including the American Economic Review, Journal of Political Economy, International Economic Review and the European Economic Review.

John Peirson is Director of the Energy Economics Research Group at the University of Kent. He has researched into various aspects of betting and the economics of uncertainty. He is particularly interested in the efficiency of different betting markets.

Raymond D. Sauer is Professor of Economics at Clemson University. His studies of wagering markets supplement his interest in the economics of regulation and industrial organization. His papers have appeared in numerous journals, including the American Economic Review, Journal of Finance, and Journal of Political Economy.

Adi Schnytzer is Professor of Economics at Bar Ilan University and has published widely in the areas of comparative economics, public choice and the economics of gambling.

Yuval Shilony is at the economics department of Bar Ilan University. His areas of research are economic theory, industrial organization, markets of contingent claims and the economics of insurance.

Donald S. Siegel is Professor of Economics at Rensselaer Polytechnic Institute. He received his bachelor’s, master’s, and doctoral degrees from Columbia University. He has taught at SUNY–Stony Brook, ASU, and the University of Nottingham, and served as an NBER Faculty Research Fellow and an ASA/NSF/BLS Senior Research Fellow.

Robert Simmons is Lecturer in Economics at the University of Salford. He has published widely on sports economics, the economics of gambling and labour economics.

Michael A. Smith is a Senior Lecturer in Economics at Bath Spa University College and has taught widely in higher education institutions in the UK. His research interests include the efficiency of fixed-odds horse-race betting markets, the operations of betting exchanges, and Internet betting media.

Maurry Tamarkin earned a PhD degree in Finance from Washington University in St Louis, USA. He is an Associate Professor at Clark University in Worcester, MA, USA. In addition to gambling, his research interests include discount rates and real options.

Richard Thorne is a biologist with a special interest in computer networks, the Internet and horse racing.

Leighton Vaughan Williams is Professor of Economics and Finance, Head of Economics Research, and Director of the Betting Research Unit at Nottingham Trent University. His research interests include risk, asymmetric information, and financial and betting markets. He has published extensively.

Patrick Waldron obtained his PhD in Finance from the Wharton School of the University of Pennsylvania and has been a Lecturer in Economics at Trinity College Dublin since 1992. His research interests are mainly in the economics of betting markets, with particular emphasis on horse racing and lotteries.

W. David Walls is Associate Professor of Economics at the University of Calgary. He has also held positions at the University of California-Irvine and the University of Hong Kong. In addition to horsetrack betting, his research focuses on transportation economics, energy markets, and the motion picture industry.

1

Introduction

Leighton Vaughan Williams

When I was asked to consider putting together an edited collection of readings on the theme of the ‘Economics of Gambling’, I was both excited and hesitant. I was excited because the field has grown so rapidly in recent years, and there is so much new material to draw upon. I was hesitant, however, because I knew that a book of this nature would not be truly satisfactory unless the papers included in it were new and hitherto unpublished. The pressures of time on academics have perhaps never been greater, and it was with this reservation in mind that I set out on the task of encouraging some of the leading experts in their fields to contribute to this venture.

In the event, I need not have worried. The camaraderie of academics working on the various aspects of gambling research is well known to those involved in the ‘magic’ circle, but the generosity of those whom I approached surpassed even my high expectations. The result is a collection of readings which draws on expertise across the spectrum of gambling research, and across the global village. The papers are not only novel and original, but also set the subject within the existing framework of literature. As such, this book should serve as a valuable asset for those who are coming fresh to the subject, as well as for those who are more familiar with the subject matter.

Topics covered include the efficiency of racetrack and sports betting markets, forecasting, lotteries, casinos and betting behaviour, as well as broad literature reviews. The twenty-nine contributors hail from nineteen academic institutions, as well as government service, from as far afield as the UK, USA, Australia, Canada, Israel and Ireland. In many cases, the contributions would, in my opinion, have gone on to be published in top-ranked journals, but the authors lent their support instead to the idea of a single volume that would help promote this field of research to a wider audience. In all cases, the authors have provided papers which are valuable and important, and which contribute something significant to the burgeoning growth of interest in this area.

It has been a joy to edit this book, and my deepest gratitude goes to all involved. Most of all, though, my thanks go to my wife, Julie, who continues to teach me that there is so much more to life than gambling.

2

The favourite–longshot bias and the Gabriel and Marsden anomaly: an explanation based on utility theory

Michael Cain, David Peel and David Law

Introduction

Research on gambling markets has focused on the discovery and explanation of anomalies that appear to be inconsistent with the efficient markets hypothesis; see Thaler and Ziemba (1988), Sauer (1998), and Vaughan Williams (1999) for comprehensive reviews of the salient literature. The best-known anomaly in the literature on horse-race gambling is the so-called ‘favourite–longshot bias’, where the return to bets on favourites exceeds that on longshots. This was first identified by Griffith (1949), and confirmed in the overwhelming majority of later empirical studies; see below for some further discussion.

A second apparent anomaly was reported by Gabriel and Marsden (1990 and 1991), who compared the returns to winning bets in the British pari-mutuel (Tote) market with those offered by bookmakers at starting prices. They reported the striking finding that Tote returns to winning bets during the 1978 British horse-racing season were higher, on average, than those offered by bookmakers; even though, they suggested, both betting systems involved similar risks and the pay-offs were widely reported. Consequently, they suggested that the British racetrack betting market is not efficient.

As noted by Sauer (1998) in his recent survey, the Gabriel and Marsden finding calls for explanation. That is one of the main purposes of this chapter. We will show that the relationship between Tote returns and bookmaker returns is more complicated than implied in the Gabriel and Marsden study. Whilst Tote pay-outs are higher than bookmakers’ for longshots, this is not the case for more favoured horses; also see Blackburn and Peirson (1995) for additional evidence consistent with this point. In addition, we argue that bets on the Tote are fundamentally different from bets with bookmakers since the bettor is uncertain of the pay-out.
Whilst bettors have some limited information on the pattern of on-course Tote betting via Tote boards, off-course bettors have no such information and the pay-out is determined by the total amount bet. If Tote bettors did have full information on pay-outs, then the fact that the Tote paid out £2,100 on winning bets of £1 in the Johnnie Walker handicap race at Lingfield on 12 May 1978, whilst the bookmaker SP odds were only 16 to 1, would in itself invalidate the usual economists’ notions of arbitrage processes and market efficiency. Assuming, then, that the Tote pay-out is uncertain whilst bookmaker returns are essentially certain, expected returns will be equalised only if the representative punter is risk-neutral, an assumption implicit in Gabriel and Marsden, and in previous analyses of the relationship between Tote and bookmaker returns; see, for example, Cain et al. (2001).

However, the assumption that the representative bettor is risk-neutral is not consistent with the stylised fact derived from empirical work on racetrack gambling, that there is a favourite–longshot bias: bets on longshots (low-probability bets) have low mean returns relative to bets on favourites, or high-probability bets. This has been documented by numerous authors for both the UK (bookmaker returns) and for the US pari-mutuel system (see, e.g., Weitzman, 1965; Dowie, 1976; Ali, 1977; Hausch et al., 1981 and Golec and Tamarkin, 1998). The standard explanation for this empirical finding has been that the representative punter is locally risk-loving; see, for example, Weitzman (1965) and Ali (1977). However, Golec and Tamarkin (1998) have recently shown for US pari-mutuel data that a cubic specification of the utility function, of the Friedman and Savage (1948) form, that admits all attitudes to risk over its range, provides a more parsimonious explanation of the data than a risk-loving power utility function with exponent greater than unity.

We will show that, if the representative bettor is not everywhere risk-neutral, an explanation of both the observed relationship between Tote and bookmaker returns and the favourite–longshot bias can still be provided. This is the second main aim of the chapter.

Theoretical analysis

Utility and the favourite–longshot bias

It is assumed that the representative bettor has utility function U(·) and total wealth w. With odds against winning of o and win probability p, the expected pay-out to a unit bet is µ = p(1 + o) + (1 − p)0 = p(1 + o), and hence o = (µ/p) − 1 = (µ − p)/p. If the punter stakes an amount s, the expected utility of return is

\[
E = E(U) = pU(w + so) + (1 - p)U(w - s) = pU\left(w + \frac{s(\mu - p)}{p}\right) + (1 - p)U(w - s) \tag{1}
\]

The optimal stake for the punter is such that ∂E/∂s = 0 and ∂²E/∂s² < 0, so that

\[
(\mu - p)\,U'\left(w + \frac{s(\mu - p)}{p}\right) = (1 - p)\,U'(w - s) \tag{2}
\]

and s = s(µ, p; w) {if E > U(w)}. Substituting s = s(µ, p) into equation (1) gives expected utility, E, as a function of µ and p, and hence we may obtain an indifference map in (µ, p) space. It is thus possible to differentiate equation (1) with respect to p and equate to zero in order to find the combinations of expected return, µ, and probability, p, between which the bettor is indifferent. This produces

\[
\frac{dE}{dp} = U\left(w + \frac{s(\mu - p)}{p}\right) - U(w - s) - \frac{s\mu}{p}\,U'\left(w + \frac{s(\mu - p)}{p}\right) + s\,U'\left(w + \frac{s(\mu - p)}{p}\right)\frac{d\mu}{dp} - \left[(1 - p)\,U'(w - s) - (\mu - p)\,U'\left(w + \frac{s(\mu - p)}{p}\right)\right]\frac{ds}{dp} = 0 \tag{3}
\]

and hence, in view of equation (2), equation (3) reduces to

\[
\frac{d\mu}{dp} = \frac{\mu}{p} - \frac{1}{s}\,\frac{U(w + s(\mu - p)/p) - U(w - s)}{U'(w + s(\mu - p)/p)} \tag{4}
\]
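The first-order condition (2) pins down the punter's optimal stake. As a minimal numerical sketch (ours, not the chapter's), the code below maximises the expected utility of equation (1) directly by ternary search; the square-root utility and the parameter values are illustrative assumptions, chosen so that the optimum has a simple closed form (s* = 1/6) to check against.

```python
import math

def expected_utility(s, p, mu, w, U):
    """E = p*U(w + s*o) + (1-p)*U(w - s), with odds o = (mu - p)/p, as in equation (1)."""
    o = (mu - p) / p
    return p * U(w + s * o) + (1 - p) * U(w - s)

def optimal_stake(p, mu, w, U, iters=300):
    """Ternary search for the maximiser of E over stakes s in [0, w].
    Valid when U is concave, so that E is concave in s."""
    lo, hi = 0.0, w
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if expected_utility(m1, p, mu, w, U) < expected_utility(m2, p, mu, w, U):
            lo = m1
        else:
            hi = m2
    return 0.5 * (lo + hi)

# Favourable bet (mu > 1) for a risk-averse punter with U(x) = sqrt(x):
# with p = 0.5, mu = 1.1, w = 1, solving the first-order condition (2)
# by hand gives s* = 1/6.
s_star = optimal_stake(p=0.5, mu=1.1, w=1.0, U=math.sqrt)
```

The betting condition E > U(w) in the text holds here: at s*, expected utility exceeds the utility of not betting.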

so that

\[
\frac{d\mu}{dp} = \frac{\mu}{p}\left(1 - \frac{A}{e}\right) - \frac{A(w - s)}{es} \tag{5}
\]

where

\[
e = \frac{(w + so)\,U'(w + so)}{U(w + so)} \quad\text{and}\quad A = 1 - \frac{U(w - s)}{U(w + so)}
\]

When w = 1 = s, the assumption made by Ali (1977) and Golec and Tamarkin (1998), equation (5) simplifies to

\[
\frac{d\mu}{dp} = \frac{\mu}{p}\left(1 - \frac{A}{e}\right) \tag{6}
\]

If U(0) = 0, then equation (6) reduces to

\[
\frac{d\mu}{dp} = \frac{\mu}{p}\left(1 - \frac{1}{e}\right) \tag{7}
\]

where e = e(X) = e(µ/p) is the elasticity of U(·) at X = µ/p. Observe from equation (7) that the slope of the equilibrium expected return–probability frontier will be positive (or negative) depending on whether the elasticity is greater than (or less than) unity. Clearly, with a power utility function which is everywhere risk-loving, the (µ, p) frontier will be everywhere upward sloping – the traditional favourite–longshot bias.
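The elasticity condition in equation (7) can be checked concretely. In a small sketch of ours (not from the chapter), a power utility U(x) = x^β has constant elasticity e = β, so the frontier ODE dµ/dp = (µ/p)(1 − 1/β) has the closed-form solution µ(p) = C·p^(1−1/β); the exponent β and the constant C below are illustrative.

```python
def frontier_mu(p, beta, C=1.0):
    """Closed-form solution of the frontier ODE in equation (7)
    for a constant-elasticity (power) utility: mu(p) = C * p**(1 - 1/beta)."""
    return C * p ** (1.0 - 1.0 / beta)

def ode_residual(p, beta, C=1.0, eps=1e-6):
    """Finite-difference check that frontier_mu satisfies
    d(mu)/dp = (mu/p) * (1 - 1/e) with e = beta."""
    lhs = (frontier_mu(p + eps, beta, C) - frontier_mu(p - eps, beta, C)) / (2 * eps)
    rhs = (frontier_mu(p, beta, C) / p) * (1.0 - 1.0 / beta)
    return lhs - rhs

# Elasticity above unity (risk-loving): frontier slopes upward, the
# favourite-longshot bias.  Elasticity below unity: it slopes downward.
slope_riskloving = frontier_mu(0.8, beta=2.0) - frontier_mu(0.2, beta=2.0)
slope_riskaverse = frontier_mu(0.8, beta=0.5) - frontier_mu(0.2, beta=0.5)
```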


A condition for the favourite–longshot bias is that dµ/dp > 0, in order that the mean return–probability relationship is not constant or declining throughout its range. It is perhaps surprising to find that this condition is consistent with a utility function that is not everywhere risk-loving over its range. As an illustration, consider the utility function proposed by Markowitz (1952), where agents are initially risk-loving for an increase in wealth above their customary or normal level of wealth, and then subsequently risk-averse. Conversely, for a decrease in wealth, they are initially risk-averse and then risk-loving.

The Markowitz utility function is more general than that proposed by Kahneman and Tversky (1979), which is everywhere risk-averse for ‘gains’ and everywhere risk-loving for ‘losses’. As a consequence, the Markowitz specification can explain the experimental observations set out in Kahneman and Tversky (1979). If we define the current level of wealth as w, and the level of utility associated with w as Ū, then the utility function

\[
U = \bar{U} + h\left(1 - e^{-b(x - w)} - b(x - w)e^{-b(x - w)}\right) \tag{8}
\]

defines utility for increases in wealth above w, where x is wealth measured from w to ∞; h and b are positive constants. From equation (8) the marginal utility and the second derivative for an increase in wealth are given by

\[
\frac{\partial U}{\partial x} = hb^{2}(x - w)e^{-b(x - w)} \tag{9}
\]

and

\[
\frac{\partial^{2} U}{\partial x^{2}} = hb^{3}e^{-b(x - w)}\left(\frac{1}{b} - (x - w)\right) \tag{10}
\]

From equation (9) the marginal utility of an increase in wealth is always positive, as required, and from equation (10) the agent is risk-loving when (1/b) > x − w, and risk-averse when (1/b) < x − w. Consequently, the utility function initially exhibits risk-loving behaviour and then risk-aversion for increases in wealth above current wealth. For a decrease in wealth below w, we define the utility function as

\[
U = \bar{U} - k\left(1 - e^{-a(w - x)} - a(w - x)e^{-a(w - x)}\right) \tag{11}
\]

where x is measured from 0 to w, and k and a are positive constants. The corresponding derivatives are

\[
\frac{\partial U}{\partial x} = ka^{2}(w - x)e^{-a(w - x)} \tag{12}
\]

\[
\frac{\partial^{2} U}{\partial x^{2}} = -ka^{3}e^{-a(w - x)}\left(\frac{1}{a} - (w - x)\right) \tag{13}
\]
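As a sketch of ours (not the authors' code), the two branches of the Markowitz specification in equations (8) and (11) can be implemented directly and the curvature pattern implied by equations (10) and (13) checked by finite differences. The parameters h = k = 1, b = 1, a = 0.1 match those used later for Figure 2.1; the wealth level w = 20 and the evaluation points are illustrative.

```python
import math

def markowitz_utility(x, w, h=1.0, k=1.0, b=1.0, a=0.1, ubar=0.0):
    """Markowitz utility: equation (8) on the gains branch (x >= w),
    equation (11) on the losses branch (x < w); U-bar normalised to 0."""
    if x >= w:
        d = x - w
        return ubar + h * (1 - math.exp(-b * d) - b * d * math.exp(-b * d))
    d = w - x
    return ubar - k * (1 - math.exp(-a * d) - a * d * math.exp(-a * d))

def second_derivative(f, x, step=1e-3):
    """Central finite-difference estimate of f''(x)."""
    return (f(x + step) - 2.0 * f(x) + f(x - step)) / step ** 2

w = 20.0
U = lambda x: markowitz_utility(x, w)

# Gains: risk-loving for x - w < 1/b = 1, risk-averse beyond (equation (10)).
convex_gain  = second_derivative(U, w + 0.5)   # should be > 0
concave_gain = second_derivative(U, w + 3.0)   # should be < 0
# Losses: risk-averse for w - x < 1/a = 10, risk-loving beyond (equation (13)).
concave_loss = second_derivative(U, w - 5.0)   # should be < 0
convex_loss  = second_derivative(U, w - 15.0)  # should be > 0
```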

From equations (12) and (13) we observe that the marginal utility of wealth is always positive, and that for decreases in wealth below current wealth, the function initially exhibits risk-aversion (w − x < 1/a), then risk-loving behaviour (w − x > 1/a). Employing equations (8) and (11) together, we have a mathematical form that accords with the Markowitz hypothesis.

[Figure 2.1 Range of the indifference map in (µ, p) space for Markowitz Utility Function.]

We consider the expected utility the agent derives from a gamble at odds o to 1, with a stake s of one unit, where the probability of the outcome occurring is p. The expected utility of this gamble is given by:

\[
E = p\left[\bar{U} + h\left(1 - e^{-bo} - bo\,e^{-bo}\right)\right] + (1 - p)\left[\bar{U} - k\left(1 - e^{-a} - a\,e^{-a}\right)\right] \tag{14}
\]

where E represents expected utility. In order to induce a person to gamble, expected utility has to be greater than or equal to the certain utility of not gambling, Ū. Consequently, in order for gambling to maximise utility we require that

\[
ph\left(1 - e^{-bo} - bo\,e^{-bo}\right) - (1 - p)k\left(1 - e^{-a} - a\,e^{-a}\right) \ge 0 \tag{15}
\]
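Condition (15) can be read as a participation constraint: for a given win probability p there is a minimum odds level at which the gamble becomes acceptable. The sketch below (ours, not from the chapter) bisects for that minimum, using the same illustrative parameters h = k = 1, b = 1, a = 0.1; the gain-side term is increasing in o, which is what justifies bisection. Note that because the stake is one unit, the loss-side term does not depend on o.

```python
import math

def gain_term(o, b=1.0):
    """Gain side of condition (15): 1 - exp(-b*o) - b*o*exp(-b*o), increasing in o."""
    return 1 - math.exp(-b * o) - b * o * math.exp(-b * o)

def loss_term(a=0.1):
    """Loss side of condition (15) for a unit stake: 1 - exp(-a) - a*exp(-a)."""
    return 1 - math.exp(-a) - a * math.exp(-a)

def min_acceptable_odds(p, h=1.0, k=1.0, b=1.0, a=0.1):
    """Smallest odds o with p*h*gain_term(o) = (1-p)*k*loss_term(a)."""
    target = (1 - p) * k * loss_term(a) / (p * h)
    lo, hi = 0.0, 1.0
    while gain_term(hi, b) < target:   # expand the bracket if needed
        hi *= 2.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if gain_term(mid, b) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

o_min = min_acceptable_odds(p=0.1)   # a 10-to-1 longshot probability
```

As expected, a higher win probability lowers the odds needed to make the gamble worthwhile.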

In Figure 2.1 we plot the indifference map in (µ, p) space for a range of probabilities observed in horse-racing markets: 0.007 ≤ p ≤ 0.9, where h = 1, k = 1, b = 1 and a = 0.1. We observe from Figure 2.1 that the favourite–longshot bias is consistent with a utility function that is not everywhere risk-loving.

Tote and bookmaker pay-outs

Because the Tote pay-out (return) is uncertain, the distribution of Tote returns has a different form than that of bookmaker returns. The mean, variance and skewness of the return from a unit bet with a bookmaker at starting price odds of o = X − 1 to 1 are

\[
\text{Mean:}\quad \mu = pX = p(1 + o)
\]
\[
\text{Variance:}\quad \sigma^{2} = p(1 - p)X^{2} = \frac{\mu^{2}(1 - p)}{p}
\]
\[
\text{Skewness:}\quad \mu_{3} = p(1 - p)(1 - 2p)X^{3} = \frac{\mu^{3}(1 - p)(1 - 2p)}{p^{2}}
\]

If the (uncertain) Tote pay-out, T, to a winning unit bet on a horse with probability p of winning, has mean E(T), variance V(T) and skewness S(T), the corresponding moments of the Tote return before the result of the race is known, are:

\[
\text{Mean:}\quad pE(T)
\]
\[
\text{Variance:}\quad p(1 - p)[E(T)]^{2} + pV(T)
\]
\[
\text{Skewness:}\quad p(1 - p)(1 - 2p)[E(T)]^{3} + 3p(1 - p)E(T)V(T) + pS(T)
\]

The ratio of skewness to variance of return for the bet with a bookmaker (at starting price) is X(1 − 2p) = µ(1 − 2p)/p, and the corresponding ratio for Tote returns is:

\[
\frac{\text{Skewness}}{\text{Variance}} = (1 - 2p)E(T) + \frac{pS(T) + p(2 - p)E(T)V(T)}{pV(T) + p(1 - p)[E(T)]^{2}}
\]

Consequently, assuming that the distribution of Tote returns to a winning bet exhibits positive skewness, the ratio of skewness to variance of return is always relatively higher for bets with the Tote than those with bookmakers; even if the mean Tote pay-out is the same as that at starting price. Clearly, this characteristic will be implicitly relevant when punters are choosing between bets with bookmakers and the Tote. Also, the perceived distribution assumed for Tote pay-outs will be relevant to the decision.

For the representative bettor to be indifferent between a bet with bookmakers and one with the Tote, we require that the expected utility from the bet with a bookmaker at starting prices and that with the Tote are the same. As bookmaker odds are known and the Tote odds are uncertain, this implies that

\[
pU(w + X - 1) = pE[U(w + T - 1)] \tag{16}
\]

When the bettor is risk-neutral, equation (16) reduces to X = E(T), and under the assumption that bettor expectations are rational, this yields the relationship

\[
T = X + \varepsilon \tag{17}
\]

where ε is a random error with mean zero.


M. Cain, D. Peel and D. Law

Gabriel and Marsden estimated the linear model that nests equation (17) and found a slope coefficient significantly greater than unity, and an intercept that was significantly negative. Clearly, one interpretation of their results is that the market is not necessarily inefficient, but rather that punters are not well described by the risk-neutral assumption. We note immediately that the assumption of risk-neutrality of the representative punter is inconsistent with the near universal empirical evidence for the favourite–longshot bias. If agents are everywhere risk-loving, Jensen's inequality implies that E[U(w + T − 1)] > U(E[w + T − 1]), and with equation (16), this implies that E(T) < X. If we assume that agents are risk-averse, Jensen's inequality implies that E[U(w + T − 1)] < U(E[w + T − 1]), and hence, from equation (16), that E(T) > X. The assumption that bettors are essentially everywhere risk-averse with utility functions such that (dµ/dp) > 0 would therefore be consistent with the favourite–longshot bias, and also with Tote odds exceeding bookmaker odds on average. However, the assumption of risk-aversion would not be consistent with starting price returns exceeding Tote returns for favourites, which may, in fact, be a feature of the data considered in the section on 'Empirical results'. Reconciliation is possible if we assume that bettors exhibit risk-loving behaviour for favourites and risk-averse behaviour for relative longshots, so that the utility function has the shape envisaged by Markowitz (1952). In this regard, it is interesting that Golec and Tamarkin (1998) suggested that the favourite–longshot bias is consistent with the existence of risk-averse bettors exhibiting a preference for skewness on longshots.1 This is also a reason offered to explain Lotto play; see, for example, Purfield and Waldron (1997).

A model for Tote odds

The Tote odds t = T − 1 can be regarded, for given p, as a non-negative positively skewed random variable and hence can be modelled as Γ(k, λ), a Gamma random variable with shape parameter k and scale parameter λ. For this distribution the probability density function is

f(t) = e^(−λt) λ^k t^(k−1) / Γ(k),    t > 0,

where Γ(·) is the Gamma function, and the first three moments are:

Mean: E(t) = E(T) − 1 = k/λ

Variance: V(t) = V(T) = k/λ²

Skewness: S(t) = S(T) = 2k/λ³

from which it follows that

S(t)/V(t) = 2V(t)/E(t)    (18)


or equivalently,

S(T)/V(T) = 2V(T)/[E(T) − 1]

Since the Tote deducts 16 per cent from the win pool in the UK, the mean Tote pay-out to a unit stake is 0.84 = p(1 + E(t)), and hence

E(T) = 1 + E(t) = 1 + k/λ = 1 + (0.84 − p)/p

For small p we would expect E(t) to be large and hence k large and/or λ small. Thus, we might take λ = β/(0.84 − p) and k = β/p for some constant β, to be estimated. We require to solve equation (16) or, equivalently,

U(w + o) ≡ U(w + X − 1) = E[U(w + T − 1)] ≡ E[U(w + t)]    (19)

for the particular utility function U, where o, t are odds and X, T pay-outs of bookmakers and Tote, respectively. For the Markowitz utility function of (8) and (11), we have that

E[U(w + t)] = Ū + h ∫₀^∞ [1 − e^(−bt) − bte^(−bt)] e^(−λt) λ^k t^(k−1) / Γ(k) dt
            = Ū + h{1 − [1 + (k + 1)b/λ](1 + b/λ)^(−(k+1))}

and equation (19) reduces to

(1 + bo)e^(−bo) = [1 + b(k + 1)/λ](1 + b/λ)^(−(k+1))    (20)

In general, equation (20) does not appear to be inconsistent with either o < (k/λ), o = (k/λ) or o > (k/λ), and which one of these occurs will depend critically on the values of the constant b of the utility function and k, λ of the Tote odds distribution. In the particular case of equation (20) with b = 1, and with the parameter β = 0.3, so that λ = 0.3/(0.84 − p) and k = 0.3/p for various values of the underlying probability p, it is found that o > E(t) if p > 0.46 but E(t) > o if p < 0.46. For example, Table 2.1 gives the solution, o, of equation (20) in this case, for a range of values of E(t) = k/λ generated by a range of values of p. Thus, we have shown how mean Tote returns in excess of starting price returns for longshots are compatible with an expected utility approach. Next, we re-examine the Gabriel and Marsden data set.

Table 2.1 Simulated Tote and bookmaker returns

p       E(t)    o
0.50    0.68    0.72
0.10    7.4     4
0.05    15.8    8.3
0.01    83      40.4
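Because the left-hand side of equation (20) is strictly decreasing in o for o > 0, the equation can be solved by simple bisection. The sketch below is our own code (not the authors'), using the chapter's parametrisation λ = β/(0.84 − p), k = β/p with β = 0.3 and b = 1; it reproduces the qualitative pattern of Table 2.1, with o exceeding E(t) only for high-probability horses.

```python
import math

def rhs(p: float, beta: float = 0.3, b: float = 1.0) -> float:
    """Right-hand side of equation (20), with lambda = beta/(0.84 - p), k = beta/p."""
    lam = beta / (0.84 - p)
    k = beta / p
    return (1 + b * (k + 1) / lam) * (1 + b / lam) ** (-(k + 1))

def solve_o(p: float, b: float = 1.0) -> float:
    """Solve (1 + b*o) * exp(-b*o) = rhs(p) for o by bisection.
    The left-hand side is strictly decreasing in o for o > 0."""
    target = rhs(p, b=b)
    lo, hi = 0.0, 200.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if (1 + b * mid) * math.exp(-b * mid) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for p in (0.50, 0.10, 0.05, 0.01):
    e_t = (0.84 - p) / p              # mean Tote odds E(t) = k/lambda
    print(f"p={p:.2f}  E(t)={e_t:7.2f}  o={solve_o(p):7.2f}")
```

The crossing near p = 0.46 noted in the text corresponds to the point where the computed o overtakes E(t).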


Empirical results

The data set used for comparison of Tote and bookmaker returns consists of 1,429 races from the 1978 racing season, and differs slightly from that employed by Gabriel and Marsden in our omission of races in Ireland, and a small number of races where horses were withdrawn whilst under starter's orders. The data were obtained from the Raceform Up-to-Date Form Book 1978 Flat Annual. Table 2.2 reports some summary statistics for the relative returns to winning bets in the form reported by Gabriel and Marsden. It appears from the table that Tote returns are higher than starting price returns, though Gabriel and Marsden do not report relative returns for more favoured horses. Columns five and six of Table 2.2 show that our data set has the same qualitative features as that employed by Gabriel and Marsden (GM). In Table 2.3, we report some summary statistics for returns from bookmakers obtained when betting all horses in a given range of odds. It is clear that these

Table 2.2 Pari-mutuel and bookmaker pay-outs for winning bets (1978): cumulative

Odds range    Number of       Pari-mutuel   Return at            (T − X)/X   GM (T − X)/X
of winner     observations    return (T)    bookmaker odds (X)   (%)         (%)
o < 10        1,208           3.831         3.531                6.9         8.9
o < 15        1,347           5.599         4.353                13.8        19.0
o < 20        1,375           7.445         4.591                23.8        26.6
All           1,429           8.652         5.313                25.5        28.7

Table 2.3 Mean bookmaker returns at starting price odds

Odds(s) range    Number of observations    Mean return (µ)

wi > wj ⇔ πi/πj < wi/wj because from the regression of the bettors' bias it follows that wi > wj ⇔ pi/wi < pj/wj. Thus, now there is a favorite–longshot bias even if z = 0. On the other hand, it is evident from equation (4) that, as z grows, so does the extent of the bias because the brackets decline in z. Of course, the disparity between bettors' actual behavior and Shin's assumption regarding their behavior also undermines his estimation of z as the extent of insider participation in the market.

Effect of inside traders


Notes

1 See, for example, Ali (1977), Asch and Quandt (1987), Thaler and Ziemba (1988), and Hurley and McDonough (1995).
2 In the case of grouping by p, the groups were of virtually equal size and hence regular OLS was run. It should be noted that, although in the reported regression there were forty groups, almost identical results were obtained when the data were divided into fifteen groups and when a regression was run on all 41,688 horses as individual groups.

References

Ali, M. (1977), "Probability and utility estimates for racetrack betting," Journal of Political Economy, 85, 803–15.
Asch, P. and Quandt, R. E. (1987), "Efficiency and profitability in exotic bets," Economica, 54, 289–98.
Cain, M., Law, D. and Peel, D. A. (1996), "Insider trading in the Greyhound betting market," Paper No. 96-01, Salford Papers in Gambling Studies, Center for the Study of Gambling and Commercial Gaming, University of Salford.
Hurley, W. and McDonough, L. (1995), "A note on the Hayek hypothesis and the favorite–longshot bias in pari-mutuel betting," American Economic Review, 85, 949–55.
Quandt, R. E. (1986), "Betting and equilibrium," Quarterly Journal of Economics, XCIX, 201–7.
Shin, H. S. (1991), "Optimal betting odds against insider traders," The Economic Journal, 101, 1179–85.
Shin, H. S. (1992), "Prices of state contingent claims with insider traders, and the favorite–longshot bias," The Economic Journal, 102, 426–35.
Shin, H. S. (1993), "Measuring the incidence of insider trading in a market for state-contingent claims," The Economic Journal, 103, 1141–53.
Thaler, R. H. and Ziemba, W. T. (1988), "Pari-mutuel betting markets: racetracks and lotteries," Journal of Economic Perspectives, 2, 161–74.

4

Pari-mutuel place betting in Great Britain and Ireland: an extraordinary opportunity

David Jackson and Patrick Waldron

The British/Irish method of calculating place dividends in pari-mutuel pools differs fundamentally from the older method that is used in the United States and elsewhere. The attraction of the newer method to pari-mutuel operators is that the predicted place dividends (the ‘will pays’) on each horse can be accurately displayed to punters before the race. We show that the British/Irish method can result in ‘minus pools’. We describe a simple overall betting strategy, which gives the punters, on aggregate, a substantial positive expected return. In a best case scenario from the punter’s point of view, the pari-mutuel operator can expect to lose over 50 per cent of the total place pool in certain races.

Pari-mutuel betting and the place pool

Horse Racing Ireland (HRI) (formerly the Irish Horse-racing Authority (IHA)) and the Horse-race Totalisator Board (the Tote) in Britain run pari-mutuel betting in the two countries respectively. We are concerned with an extraordinary anomaly, which has existed for over twenty years but has only recently attracted the attention of serious gamblers, in the way these two bodies run the place pool. The anomaly results directly from the method that the British and Irish pari-mutuel operators use to calculate the dividend for horses that are placed, that is, finish first or second in races of five to seven runners, or first, second or third in races of eight or more runners. The method, introduced in Britain in the mid-1970s and in Ireland in 1995, is fundamentally different from that used throughout most of the world. The new method allows the predicted place dividends to be displayed prior to the race in a manner similar to the predicted win dividends. In the standard method, the place dividend on any horse depends on which other horses are also placed and hence accurate predictions cannot be displayed prior to the race. The new method has a serious drawback, however, in that it can frequently lead to minus pools whether or not the operator pays a minimum profit (say 5 or 10 per cent of the stake) on short-odds placed horses. Since this anomaly was discovered in 1998, it has led to considerable losses in the place pool for the pari-mutuel operators in both countries. Indeed, the pari-mutuel operator can expect to lose money in the majority of races if punters, on aggregate, were to bet in the manner that we will describe. The strategy, however, results in an unstable equilibrium since individual


punters have an incentive to free ride by betting only on the horses yielding high expected returns.

Pari-mutuel: Pari-mutuel or Tote betting is pool betting. The punters bet against one another and not against the organisers of the pool. Exact dividends are not known until after the event. In theory, the operator should not be risking his own money.

Place pool: The pari-mutuel pool we are interested in is the place pool. In races of five, six or seven runners the punter bets on a horse to finish either 1st or 2nd. With eight or more runners a place bet is successful if the horse finishes 1st, 2nd or 3rd. Occasionally, in races with sixteen or more runners the operator also pays a dividend on the horse finishing fourth.

General principles for sharing out the place pool

• The operator retains a proportion of the pool to cover costs, etc.
• The remainder is divided among the successful punters.
• 'If you can't win you can't lose'.

The operator has a good deal of control over the first two of these general principles, namely how much he takes from the pool and the manner in which he divides the remainder among the successful punters. However, except when a dead heat occurs, he is bound by tradition (and possibly by fear of riots) to at least give 'money back', though not necessarily anything more, to successful punters.

The standard method – USA and most other places

The standard method in the place pool, described in more detail by Hausch and Ziemba (1990), Asch, Malkiel and Quandt (1984) and Asch and Quandt (1986), is to use the losing bets to pay the winning bets, with the operator taking a percentage from the losing bets (or in some jurisdictions from the total pool). Apart from the major exceptions, which are Britain, Ireland and Australia, this is basically the method which is used in countries where pari-mutuel place pools operate.

The standard method

Step 1. The operator deducts some fraction τ of the losing bets.
Step 2. The losing bets (after deductions) are divided into two or three equal portions according to the number of placed horses.
Step 3. Punters who backed each placed horse receive a pro rata share of one of these equal portions plus their stakes.

Disadvantages of the standard method

As Box 4.1 illustrates, the main disadvantage of the standard method is the existence of real dividend uncertainty for the punter. In general, the place dividend on


any horse depends on which other horses are placed. From the operator's point of view this means that unique pre-race predicted dividends (will pays) cannot be displayed, as, for example, they are displayed for the win pool. In addition, a minor irritant from the operator's point of view is that income is variable. However, it is clear that unless the operator has a policy of paying a minimum guaranteed profit on successful bets, income can never be negative even if there are no losing bets.

Box 4.1 Example: Standard method

• Five runners, two places
• £600 on favourite; £100 on each of the other four runners
• Total pool = £1,000; deductions τ = 20% of losing bets

(a) Favourite placed
Losing bets = £300; deductions = £60
Dividends (to a £1 stake) will be
  ∗ £1.20 on favourite
  ∗ £2.20 on the other placed horse

(b) Favourite unplaced
Losing bets = £800; deductions = £160
Dividends
  ∗ £4.20 on both placed horses
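The arithmetic of Box 4.1 can be expressed as a short routine. This is our own sketch of the standard method (the function name and data layout are ours, not the operators'):

```python
def standard_dividends(stakes, placed, tau=0.20):
    """Standard place-pool method: the losing bets, net of the operator's
    deduction tau, are split equally between the placed horses; backers of
    a placed horse also get their stakes back.  Dividends are per £1 stake."""
    losing = sum(amount for horse, amount in stakes.items() if horse not in placed)
    share = losing * (1 - tau) / len(placed)
    return {horse: 1 + share / stakes[horse] for horse in placed}

stakes = {"fav": 600, "h2": 100, "h3": 100, "h4": 100, "h5": 100}

# (a) favourite placed: £1.20 on the favourite, £2.20 on the other horse
a = standard_dividends(stakes, ["fav", "h2"])
assert abs(a["fav"] - 1.20) < 1e-9 and abs(a["h2"] - 2.20) < 1e-9

# (b) favourite unplaced: £4.20 on both placed horses
b = standard_dividends(stakes, ["h2", "h3"])
assert abs(b["h2"] - 4.20) < 1e-9 and abs(b["h3"] - 4.20) < 1e-9
```

Note that the dividend is never below £1, so the operator's income can never be negative under this method (absent a minimum-profit policy).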

The new method in Britain and Ireland

Under the new method introduced into Britain in the 1970s and more recently into Ireland, the pari-mutuel operator takes a deduction from the total pool, not just from the losing bets. This step is not radically different from the standard method but what happens next is.

The new method

Step 1. The operator deducts some fraction τ of the total pool.
Step 2. The total pool (after deductions) is divided into two or three equal portions according to the number of placed horses.
Step 3. Punters who backed each placed horse receive a pro rata share of one of these equal portions (with a minimum guarantee of money back).


We illustrate the new method with the same five-runner race that we used previously but with deductions of 16 per cent of the total pool, rather than the 20 per cent of losing bets in the standard method.

Box 4.2 Example: New method

• Five runners, two places
• £600 on favourite; £100 on each of the other four runners
• Total pool = £1,000; deductions τ = 16% of total pool

Total pool (after deductions) = £840
Calculated dividends (to a £1 stake) are

• £420/600 = 70 pence for the favourite
• £420/100 = £4.20 for other horses

Because the calculated dividend for the favourite in this example is less than £1, then, if the favourite is placed, the guarantee of at least money back to a successful punter comes into play. The pari-mutuel operator must subsidise the calculated dividend.

The possibility of a minus pool

If the favourite in this race is placed, the operator loses £20 overall. And this is merely giving 'money back' to those punters who backed the favourite. If the operator were to pay a minimum dividend of say £1.10 on the successful favourite he would lose £80 in this example. Of course, if the favourite is unplaced his pay-out is only £840 and he wins £160.

Predicted place pay-outs

The new method is an even simpler method than the standard method and the place dividend on any horse does not depend on what other horses are placed. This allows the operator to overcome the main disadvantage of the standard method and display the predicted place dividends for each horse before the race, in exactly the same manner as the predicted win dividends are displayed. As far as we can tell, this appears to be the main reason why the new method was adopted by some pari-mutuel operators, but as we have seen the disadvantage of the new method is the possibility of minus pools if large amounts of the pool are bet on one horse. We concentrate henceforth on the two-place case. The generalisation to three or four places is straightforward.
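A parallel sketch of the new method under the money-back guarantee (policy (1) below) shows the minus pool of Box 4.2 directly. Again, this is our own illustration, not the operators' code:

```python
def new_method_dividends(stakes, placed, tau=0.16):
    """British/Irish method: the *total* pool, net of the deduction tau, is
    split equally between the placed horses, and any calculated dividend
    below £1 is topped up to money back."""
    share = sum(stakes.values()) * (1 - tau) / len(placed)
    return {horse: max(1.0, share / stakes[horse]) for horse in placed}

def operator_profit(stakes, placed, tau=0.16):
    """Operator's profit: pool taken in minus total pay-out to placed horses."""
    dividends = new_method_dividends(stakes, placed, tau)
    payout = sum(dividends[h] * stakes[h] for h in placed)
    return sum(stakes.values()) - payout

stakes = {"fav": 600, "h2": 100, "h3": 100, "h4": 100, "h5": 100}

# Favourite placed: the 70p calculated dividend is topped up -- a minus pool.
assert abs(operator_profit(stakes, ["fav", "h2"]) - (-20)) < 1e-6
# Favourite unplaced: the operator keeps the full 16% deduction (£160).
assert abs(operator_profit(stakes, ["h2", "h3"]) - 160) < 1e-6
```

The contrast with the standard method is exactly the one the text draws: here the operator's income can go negative without any minimum-profit policy.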


Dividends under the new method

Let fi = fraction of the place pool bet on horse i. Then Σ fi = 1.

Calculated dividend ci:

ci = (1 − τ)/(2fi)

Since fi < 1, the calculated dividend ci is bounded below by (1 − τ)/2.

Declared dividend di and operator policy:

Policy (1): di = ci if ci > 1; di = 1 if ci ≤ 1.

Alternative policy for the pari-mutuel operator:

Policy (2): di = ci if ci > 1.1; di = 1.1 if c∗ ≤ ci ≤ 1.1; di = 1 if ci ≤ c∗,

where c∗ is the smallest calculated dividend for which the operator is prepared to round up the declared dividend to 1.1. Possible choices for c∗, with (1 − τ)/2 < c∗ < 1.1:

(a) Always pay 10 per cent profit;
(b) Always pay £1 if the calculated dividend is below £1.10;
(c) Sometimes pay £1.10.

In illustrating the new method, we will assume the simple policy (1) above whereby the dividend is either the calculated dividend exactly or money back. This simple policy ignores breakage (rounding dividends – usually down) but breakage is really not relevant to the anomalies that the new method throws up.

Minus pools

As we have seen, minus pools are a possibility and a sensible operator should be interested in how costly these minus pools can be. Let fmax = the fraction of the pool bet on the favourite. Then if the fraction bet on the favourite is large, specifically if fmax > (1 − τ)/2, then when the favourite is placed, since the calculated dividend is less than £1, the operator is going to have to subsidise the dividend. The total pay-out for the pari-mutuel operator when the favourite is placed is given below. Basically the pay-out is half the pool (after deductions) on one of the horses and perhaps a good deal more on the favourite if the fraction bet on the favourite is large.

• Pay-out when the favourite is placed = (1 − τ)/2 + max{(1 − τ)/2, fmax}
• Potential operator loss = pay-out − 1.
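The pay-out and loss formulas above can be checked at the break-points of Figure 4.1. The sketch below is our own encoding of the formulas (function names are ours):

```python
def potential_loss(f_max: float, tau: float = 0.20) -> float:
    """Operator loss per £1 of pool when the favourite is placed (two places):
    pay-out = (1 - tau)/2 + max((1 - tau)/2, f_max), loss = pay-out - 1."""
    payout = (1 - tau) / 2 + max((1 - tau) / 2, f_max)
    return payout - 1

# The three break-points of Figure 4.1 (tau = 20 per cent):
assert abs(potential_loss(0.40) - (-0.20)) < 1e-9   # operator keeps 20%
assert abs(potential_loss(0.60) - 0.0) < 1e-9       # break even at 60%
assert abs(potential_loss(1.00) - 0.40) < 1e-9      # worst case: 40% loss
```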

Figure 4.1 below plots the potential operator loss as a function of the fraction of the pool that is bet on the favourite. It illustrates, for deductions of 20 per cent, that when the fraction of the pool that is bet on the favourite is less than 40 per cent then the operator always retains 20 per cent of the pool. As the fraction rises above 40 per cent, the subsidy, when the favourite is placed, starts to eat into his profit, reaching break-even point when 60 per cent of the pool is bet on the favourite. The operator starts to incur losses when the fraction rises above this and in a worst case scenario can lose 40 per cent of the total pool when the fraction approaches 1.

• Worst case scenario (two places paid): the operator can lose 40 per cent, that is (1 − τ)/2, of the pool.
• In general, worst case scenario (k places paid, k = 2, 3, 4): the operator can lose ((k − 1)/k)(1 − τ) of the pool (e.g. 53⅓ per cent for k = 3 and τ = 20 per cent).

Figure 4.1 Potential operator loss as a function of the fraction of the pool (fmax) bet on the favourite.


Expected operator loss

The potential operator losses are substantial, but a loss can occur only if the favourite is placed. The expected losses depend on the true (unknown) probability of the favourite being placed as well as on the fraction of the pool that is bet on the favourite. Let pmax = probability that the favourite is placed. Then when fmax > (1 − τ)/2:

• Expected pay-out = pmax{fmax + (1 − τ)/2} + (1 − pmax)(1 − τ)

The worst case scenario becomes inevitable as pmax, fmax → 1:

• Expected loss → (1 − τ)/2, half the pool after deductions.
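A direct encoding of the expected pay-out formula confirms the limiting cases discussed around Figure 4.2. This is our own sketch, with the text's τ = 20 per cent:

```python
def expected_payout(p_max: float, f_max: float, tau: float = 0.20) -> float:
    """Expected operator pay-out per £1 of pool, valid when f_max > (1 - tau)/2:
    p_max * (f_max + (1 - tau)/2) + (1 - p_max) * (1 - tau)."""
    return p_max * (f_max + (1 - tau) / 2) + (1 - p_max) * (1 - tau)

# Limiting cases quoted in the text (tau = 20 per cent):
assert abs(expected_payout(1.0, 1.0) - 1.4) < 1e-9   # expected loss -> 40% of pool
assert expected_payout(5/6, 0.999) > 1.0             # likely-placed favourite: operator loses
assert expected_payout(1/3, 0.999) < 1.0             # p = 1/3: roughly break even
```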

Figure 4.2 is a chart of expected operator loss as the fraction bet on the favourite increases, for three values for the true probability of the favourite being placed.

1 For p = 1 – the favourite is certain to be placed and potential losses become certain losses.
2 For p = 5/6 – the favourite is very likely to be placed but occasionally the operator will win, even when practically the whole pool is bet on the favourite.
3 For p = 1/3 – this is a low value for the probability of a favourite being placed but the operator can still only expect to break even as fmax tends to 1.

Figure 4.2 Expected operator loss for three values for the probability of the favourite being placed as a function of the fraction of the pool (fmax) bet on the favourite.


Making the outsiders favourable bets

Of course, if a large fraction of the pool is bet on the favourite, then some or all of the other horses are likely to be favourable bets.

• Consider the aggregate of bets on all the outsiders.

If fmax is the fraction of the pool bet on the favourite then the aggregate of the total pool bet on outsiders = 1 − fmax. The aggregate pay-out on the outsiders is half the net pool when the favourite is placed and the total net pool when the favourite is unplaced. Hence

• The expected aggregate pay-out on outsiders is greater than the amount bet on outsiders if

  ((1 − τ)/2) pmax + (1 − τ)(1 − pmax) > 1 − fmax
  ⇔ fmax > ((1 − τ)/2) pmax + τ.    (1)

For example, for τ = 20 per cent the aggregate pay-out on the outsiders will always be greater than the total invested on them if fmax > 60 per cent.

• If even a small group of punters collude to keep fmax high enough then the aggregate of all outsiders will be favourable.
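Inequality (1) is a one-line test; the sketch below (our code) checks the 60 per cent figure quoted for τ = 20 per cent:

```python
def outsiders_favourable(f_max: float, p_max: float, tau: float = 0.20) -> bool:
    """Inequality (1): the aggregate of bets on the outsiders has positive
    expected profit (two places paid)."""
    return f_max > (1 - tau) / 2 * p_max + tau

assert outsiders_favourable(0.61, 1.0)     # f_max > 60% works even for p_max = 1
assert outsiders_favourable(0.45, 0.5)     # threshold here is 0.4*0.5 + 0.2 = 0.40
assert not outsiders_favourable(0.39, 0.5)
```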

Making all the outsiders favourable bets simultaneously

On aggregate, the outsiders are favourable bets if a large fraction of the pool is bet on the favourite. Indeed, all the outsiders individually can be favourable bets simultaneously.

• Assume that the fraction fi of the pool that is bet on each horse except the favourite is in proportion to its probability (pi) of being placed.

Since two places are being paid it follows that Σ pi = 2 and hence that

Σ(outsiders) pi = 2 − pmax

⇒ for the outsiders fi = pi(1 − fmax)/(2 − pmax).


• If an outsider is placed, the dividend di will be half the net pool divided by the fraction bet on that horse:

  di = (1 − τ)/(2fi) = (1 − τ)(2 − pmax)/(2pi(1 − fmax))

For the outsiders the dividend is inversely proportional to the probability of the horse being placed. When is a bet on an individual outsider a favourable bet? Since the expected value of a bet is the probability of the bet being successful multiplied by the dividend, it follows that

• The expected value of a bet on the outsider is

  di pi = (1 − τ)(2 − pmax)/(2(1 − fmax))

This expression is the same for each outsider and it follows that

• The expected value is greater than unity provided

  fmax > ((1 − τ)/2) pmax + τ    (1a)

This is exactly the same condition, see inequality (1), that applied for the aggregate of all outsiders to be favourable. Also, we see that the expected value of a bet on an outsider tends to infinity as fmax tends to one. Of course, as fmax tends to one the amounts bet on the outsiders are small, but nonetheless the expected value of these small bets is large.
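The common expected value of a bet on any outsider can be encoded directly. As claimed, it crosses unity exactly at the threshold of inequality (1a) and grows without bound as fmax → 1; the sketch below is our own check (names are ours):

```python
def outsider_ev(f_max: float, p_max: float, tau: float = 0.20) -> float:
    """Expected value per £1 bet on any outsider (two places paid), assuming
    the non-favourite money is spread in proportion to the outsiders'
    place probabilities, so d_i * p_i is the same for every outsider."""
    return (1 - tau) * (2 - p_max) / (2 * (1 - f_max))

p_max = 0.9
threshold = (1 - 0.20) / 2 * p_max + 0.20    # inequality (1a): 0.56 here
assert outsider_ev(threshold + 1e-9, p_max) > 1.0
assert outsider_ev(threshold - 1e-9, p_max) < 1.0
assert outsider_ev(0.99, p_max) > 40         # EV explodes as f_max -> 1
```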

Can the favourite be a favourable bet?

Yes, of course, if the fraction of the pool bet on it is small. But if the favourite is a favourable bet then, by definition, it must be paying better than money back and

(a) if the favourite pays better than money back then
(b) the percentage of the pool bet on the favourite < (1 − τ)/2 = 40% (τ = 20%), and
(c) we are in that area of Figure 4.1 where no subsidy is necessary.

The operator is guaranteed a profit of 20 per cent and the aggregate of punters are guaranteed to lose 20 per cent. It follows that all horses cannot be favourable bets simultaneously (see note).


Note: Suppose the operator has a liberal policy of always paying a minimum profit, say £1.10 instead of money back. This is not the assumption we have been making here, but if that is the case, and a large fraction of the pool is bet on a favourite who has a high probability of being placed, then, of course, all horses in that race can be favourable bets simultaneously.

Forming a favourable portfolio of bets

• When the fraction bet on the favourite is large, then the outsiders are favourable bets and the operator may be in an expected loss situation.
• If the operator expects to lose, then punters in the aggregate expect to win. How can we exploit this situation?
• Operator's expected pay-out per pound invested = pmax{fmax + (1 − τ)/2} + (1 − pmax)(1 − τ).
• The expected pay-out increases as fmax increases, but it also depends on pmax, the probability of the favourite being placed.
• Expected pay-out → 1 + (1 − τ)/2 as pmax, fmax → 1.
• Expected operator loss tends to half the net pool.
• The public controls fmax but not pmax, the probability that the favourite is actually placed.
• Critical condition (two places paid) for the expected pay-out to be greater than unity as fmax → 1 is

  pmax > 2τ/(1 + τ) = 1/3 for τ = 20%    (2)

• In general (k places paid) the expected pay-out is greater than unity if

  pmax > kτ/((k − 1) + τ) = 3/11 or 1/4 for k = 3, 4 respectively, for τ = 20%.    (3)

So the portfolio of bets which we are suggesting for the public as a body is to invest the vast majority of their funds on the favourite and minimal amounts on the outsiders. This will be a favourable portfolio in the two-place case as long as inequality (2) is satisfied, that is, pmax > 1/3 when deductions are 20 per cent. Of course, the larger pmax actually is, the more favourable the portfolio becomes, achieving returns of up to 40 per cent when the favourite is nearly certain to be placed. In the general case, with k places paid, such a portfolio is a favourable one as long as inequality (3) is satisfied.
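The critical place probabilities of inequalities (2) and (3) follow from a one-line function; the sketch below (our code) reproduces the 1/3, 3/11 and 1/4 thresholds quoted for τ = 20 per cent:

```python
def critical_p_max(k: int, tau: float = 0.20) -> float:
    """Inequality (3): minimum place probability of the favourite for the
    favourite-heavy portfolio to be favourable as f_max -> 1 (k places paid).
    The two-place condition (2) is the special case k = 2."""
    return k * tau / ((k - 1) + tau)

assert abs(critical_p_max(2) - 1 / 3) < 1e-12
assert abs(critical_p_max(3) - 3 / 11) < 1e-12
assert abs(critical_p_max(4) - 1 / 4) < 1e-12
```

Note that the threshold falls as more places are paid, which is why the strategy becomes easier to satisfy in bigger fields.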

Conclusions

• The new method for calculating the place dividends as it was used in Britain until the beginning of 1999, and in Ireland until the beginning of 2000, was fundamentally flawed.
• When the total fraction of the pool bet on the favourite is 'large' then the operator should expect to lose. If the public bets as we are suggesting then the favourite must be placed for the operator to lose. However, in nearly every race that is run, the true probability of the favourite being placed is high enough for the operator to expect to lose if the public, as a body, bets in the proposed manner.
• The existence of these minus pools does not depend on the operator paying a minimum guaranteed profit on short-odds horses, which under the standard method is a necessary condition for the existence of minus pools.
• In many races, where the authors have been using the method since 1998, the pools are sufficiently small that a single large investor can dominate the betting and form a favourable portfolio of bets. He does this by investing most of his money on the favourite and reasonable amounts on the outsiders (amounts similar to the total amounts that the public bets on these horses) but need not rely on any collusion from the aggregate of the rest of the punters. Indeed, the greater his investment, the greater will be both the absolute expected profit and the percentage profit on his investment. For a pari-mutuel pool this is truly an extraordinary anomaly.

Acknowledgements

Our thanks are due to the Horse-race Totalisator Board and to the Irish Horse-racing Authority who have unwittingly supported this research.

Postscript

Sadly, good things come to an end and must then be written up. There were thirty coups in Britain in 1998, of which one was unsuccessful when the favourite was unplaced, and a slightly larger number in Ireland in 1998–99, with the favourite being placed on each occasion. Although we were conservative in choosing only races with a long odds-on favourite, we were fortunate in having only one favourite unplaced from approximately sixty-five races. Our model predicted 3–4 failures. Our offers to the IHA and the Tote in Britain to fix the problem by writing a little extra software were refused. However, both have now quietly introduced a change in how the place dividends are calculated when large amounts are bet on the favourite and the favourite is placed. Basically, they claw back money from the fraction of the pool allocated to the other placed horses in order to avoid subsidising the dividend on the favourite. They find it necessary to do this calculation manually after the race is over. It takes them a considerable time and means that in this situation the pre-race predicted place dividends for all horses apart from the favourite are grossly inflated. However, the basic method of dividing the total pool remains and predicted place dividends, which are accurate in the majority of races, are still displayed beforehand.


References

Asch, P. and Quandt, R. (1986) Racetrack Betting, Dover, MA: Auburn House.
Asch, P., Malkiel, B. and Quandt, R. (1984) 'Market efficiency in racetrack betting', Journal of Business, 57, 165–75.
Hausch, D. and Ziemba, W. (1990) 'Locks at the racetrack', Interfaces, 20, 41–8.

5

Betting at British racecourses: a comparison of the efficiency of betting with bookmakers and at the Tote

John Peirson and Philip Blackburn

It is shown that, at British racecourses, bookmakers offer more favourable odds on favourites, and less favourable odds on outsiders, than the Tote system of betting. This would seem to suggest semi-strong inefficiency between these parallel markets. However, the degree of inefficiency between the odds in these two markets falls as the market operates, and the structures of the two markets suggest that it is not efficient for the odds offered by the two markets to converge exactly. Though systematic differences exist in the odds offered by the two markets, the variation in the differences in Tote and bookmaker odds is great. This variation is likely to hinder adjustment between the two markets. It is noted that the differences between the two markets are compatible with profit maximisation by bookmakers and efficient behaviour by bettors.

Introduction

Gambling on horse racing is generally regarded as an important source of information for the study of decision-making under uncertainty. Betting markets are particularly good examples of contingent claims markets; see Shin (1992). The markets are simple and complete in that odds are offered on all horses. The return from a successful bet is clear and the uncertainty is resolved rapidly and at a known time. Economists have investigated and attempted to explain the evidence on betting on horse races and used their conclusions to consider more complicated financial markets; see, for example, Thaler and Ziemba (1988) and Shin (1991). Empirical interest has focused on the relation between the odds offered by bookmakers and pari-mutuel systems, and the probability of types of horses winning, in particular whether these odds are efficient and whether insider information exists; good examples are Dowie (1976), Ali (1979), Crafts (1985), Asch and Quandt (1987 and 1988), Dolbear (1993), Lo and Busche (1994) and Vaughan Williams and Paton (1997a). However, only the studies by Vaughan Williams and Paton (1997b), Gabriel and Marsden (1990)1 and Cain et al. (2001) have investigated the comparative efficiency of the two modes of betting available at British racecourses: with bookmakers and the Totalizator (hereafter the Tote).2 These parallel markets offer an opportunity to investigate the efficiency between two markets operating under uncertainty. The bettors, the so-called punters, would appear to
have access to the same information and, assuming semi-strong efficiency,3 one would expect the odds given by the bookmakers and Tote to be the same. Gabriel and Marsden (1990) concluded that the odds offered by the Tote are more generous and that the two markets are not semi-strong efficient. It was suggested that the difference in odds was caused by the presence of insider information and that the market did not adjust to the presence of this information. This conclusion would appear to be an important example of the Efficient Markets Hypothesis being directly refuted by empirical evidence. Cain et al. (2001) considered Gabriel and Marsden's propositions and evidence. They found that the difference between Tote and bookmaker returns on winning horses depends on the probability of the horse winning and the existence of insider information. The present analysis uses a longer data set and shows that a more complete investigation does not arrive at the same conclusions as Gabriel and Marsden (1990) and Cain et al. (2001). The systematic differences that we find in the data on odds given by bookmakers and the Tote are not consistent with their general conclusions; rather, they are compatible with efficient behaviour on the part of the bookmakers and punters. Given the short duration of racecourse betting markets and the imperfect information held by punters, we believe that price information is imperfect in these markets and that the two sets of prices are not exactly comparable. However, the evidence suggests that during the market the bookmakers' odds move substantially in the direction of the Tote odds, and this adjustment does not appear to be consistent with the existence of insider information. Thus, the market responses are more consistent with the Efficient Markets Hypothesis than has previously been suggested and are compatible with the view of Vaughan Williams and Paton (1997b).
The chapter is made up of five further sections. First, the important characteristics of British horse-racing betting markets are discussed in the section on 'British markets for racecourse betting on horse racing'. Second, the different notions of efficiency that are relevant to betting on horse racing are examined in the section on 'Efficiency and British betting markets'. Third, the empirical analysis is conducted in the section of that name. Fourth, the evidence is explained in terms of profit maximisation by bookmakers and efficient behaviour by punters in the section on 'Interpretation of the results'. Finally, a concluding section draws together the important arguments and findings.

British markets for racecourse betting on horse racing

British racecourse betting has two forms. Punters can bet at the Tote or with on-course bookmakers. The Tote is a pari-mutuel system: a proportionate deduction is made from the total sum bet and the remainder is returned to winning punters in proportion to their stakes. The winning pay-out is declared for a £1 stake.4 An important characteristic of betting on the Tote is that the punter is not guaranteed the odds. However, at the racecourse and Tote outlets, potential dividends are displayed on public electronic monitors. Thus, punters are informed about which horses are being supported at the Tote and which are not. For horses
that are not well supported, moderate-sized bets can significantly alter the winning pay-out. There are different types of Tote bet, for example 'win', 'place' and 'exotic' bets. A win bet is concerned solely with whether the horse wins. A place bet is for a horse to be placed in the first 2–4 runners and can only be made in conjunction with a win bet. This is an example of commodity tying, see Phlips (1989). The average pay-out for on-course Tote betting to win is 84 per cent and for place betting it is 76 per cent.

The setting of books on a horse race is different from the operation of a pari-mutuel system.5 The intention of bookmakers is to offer odds that attract bets and make a profit over a period of time. There is some confusion over whether bookmakers attempt to make the same profit whatever horse wins or are willing to take a risk to gain higher expected profits, see the discussion in the Royal Commission on Gambling (1978). Peirson (1988) showed that bookmakers would have to be perfectly risk-averse to wish to follow the former strategy. The anecdotal evidence appears to support the view of expected-profit-maximising bookmakers: 'if one of the fancied horses wins, the bookmakers lose, but if one of the outsiders win, they win', Royal Commission on Gambling (1978, p. 471). This systematic result is presumably the result of a conscious strategy.

Unlike the Tote, bookmakers are engaged in a complicated example of decision-making under uncertainty. They do not have perfect information on the demand for betting on different horses. About 15 minutes before the start of a race, bookmakers post opening prices, which are based on past form and anticipated demand. As bookmakers may have information that is inferior to that possessed by insiders, the opening show is usually regarded as being a conservative estimate of the final odds in the market – this is shown in Table 5.3.
Bookmakers then alter odds according to the ﬂow of betting, their subjective probabilities of horses winning and the odds offered by other bookmakers and, presumably, the Tote. At the racecourse, punters usually take the bookmakers’ odds offered at the time the bet is made. In this case, the punters are sure of the amount of a winning pay-out. The ﬁnal odds offered by bookmakers are the starting prices (SPs). The SPs are used to settle most off-course bets with bookmakers. The SPs are recorded by the Sporting Life and the Press Association.6 Betting on a horse being placed is also possible.7 However, such bets can only be made in conjunction with an equal win bet. The average pay-out with on-course bookmakers has been estimated at 90 per cent for win bets and for the place element of each way bets ‘it is certainly lower than on bets to win’, Royal Commission on Gambling (1978, p. 475).
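The relation between a set of bookmaker odds and the implied average pay-out can be sketched as follows. This is a minimal illustration, not taken from the chapter: the five fractional odds are invented, and the calculation is the standard one in which each price o/1 implies a win probability of 1/(o + 1) and the sum of these across all runners exceeds one by the bookmaker's margin (the over-round).

```python
from fractions import Fraction

def implied_payout(odds):
    """Pay-out rate implied by a book of fractional odds on one race.

    Each price o/1 implies a probability of 1 / (o + 1); the sum of these
    across all runners exceeds 1 by the bookmaker's over-round, and the
    reciprocal of the over-round is the expected pay-out per unit staked
    by a bettor whose stakes mirror the implied probabilities.
    """
    over_round = sum(Fraction(1, 1) / (o + 1) for o in odds)
    return float(1 / over_round)

# A hypothetical five-horse book: evens, 3/1, 4/1, 8/1 and 10/1
book = [Fraction(1), Fraction(3), Fraction(4), Fraction(8), Fraction(10)]
payout = implied_payout(book)   # ≈ 0.868
```

With these invented odds the implied pay-out is about 87 per cent, of the same order as the 90 per cent average win pay-out with on-course bookmakers reported above.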

Efficiency and British betting markets

Racecourse betting has been used as an example to test the Efficient Markets Hypothesis; see, for example, the studies by Dowie (1976), Snyder (1978), Tuckwell (1983), Asch et al. (1984) and Crafts (1985) and the discussion by Thaler and Ziemba (1988). These studies have considered the relation between offered odds and the probabilities of horses winning, and the possibility of profitable insider
information. These two types of studies test for weak and semi-strong efficiency, respectively. The market for betting on horses is similar to other financial markets, in which there is publicly available information, an uncertain outcome and insider information. Only the studies by Gabriel and Marsden (1990) and Cain et al. (2001) have considered the relation between British bookmaking and Tote betting markets. According to the assumption of semi-strong efficiency, see Fama (1970) and Malkiel (1987), current prices reflect historical information and, more importantly, all relevant publicly available information. At racecourses, Tote odds are broadcast on electronic monitors and bookmakers 'chalk up' and 'shout out' offered odds. Gabriel and Marsden (1990, pp. 879 and 883) suggested that 'tote payoffs were consistently higher than identical bets made at [bookmakers'] starting price odds' and that 'the market fails to satisfy semi-strong efficiency conditions'. Cain et al. (2001) suggest that the differences between Tote and bookmakers' winning returns depend on the probability of the horse winning and the existence of insider information. These conclusions are investigated in the empirical analysis of the following section.

The empirical analysis

Data from the whole of the 1993 season of flat racing was used to examine the efficiency of betting with the Tote and bookmakers. The empirical analysis consists of three tests: the differences in Tote and bookmakers' odds; drift in bookmakers' odds; and the pay-outs for place bets. The standard test uses a t-statistic for the difference in sample means and, where appropriate, a paired t-test is carried out. The analysis considers separately the Tote and bookmakers' winning returns for horses with low and high probabilities of winning. Data was collected from Sporting Life 1993 Flat Results and Raceform 1993 Flat Annual. A total of 3,388 races from March to November were included. Races with dead heats for first place were excluded. Data was recorded on opening prices, 'touched prices', SPs, position of horse, winning tote, place tote, number of runners in race, racecourse, date, age of runners in race and handicap grade. Data on potential Tote winning and place dividends on horses that lost or were not placed was not available, as it is not published and is only kept by the Horse-race Totalizator Board for three months.

Before the empirical analysis, it is appropriate to consider other relevant evidence:

1 The difference between the reported average winning pay-outs of the Tote and bookmakers, of 84 and 90 per cent respectively, is incompatible with the Tote consistently offering more favourable odds than bookmakers. For some odds at least, bookmakers must offer more favourable odds than the Tote. This simple evidence contradicts Gabriel and Marsden's general conclusion.
2 As the odds change, it is not possible to follow exactly the odds offered by bookmakers. These odds are literally chalked up and shouted out. Professional gamblers and bookmakers employ men to report these odds quickly. Average punters cannot easily compare Tote and bookmakers' odds directly, and different bookmakers offer different odds.
3 The odds on the Tote are continuous, whereas the odds offered by bookmakers are discrete and not all 'integer odds' are used; for example, between odds of 10/1 and 20/1, the used odds are 11/1, 12/1, 14/1 and 16/1.
4 Commonly, bookmakers have minimum bets of £5, whilst the smallest Tote bet is £2. Additionally, some bettors may find betting with bookmakers intimidating and prefer the Tote's friendlier counter-service.
5 On-course betting with the Tote is about 5 per cent of the turnover of on-course bookmakers (McCririck, 1991, p. 59).

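The standard test described above, a t-statistic on the mean difference between paired Tote and SP winning returns, can be sketched as below. The figures are invented for illustration; they are not the chapter's data.

```python
import math

def paired_t(x, y):
    """Paired t-statistic for the hypothesis mean(x - y) = 0, with x and y
    the Tote and SP winning returns recorded for the same horses."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    mean = sum(d) / n
    var = sum((v - mean) ** 2 for v in d) / (n - 1)   # sample variance
    return mean / math.sqrt(var / n)

# Invented winning returns for six horses (per £1 bet)
tote = [12.4, 6.1, 3.2, 1.7, 25.0, 0.9]
sp   = [10.0, 6.0, 3.5, 1.8, 16.0, 1.0]
t = paired_t(tote, sp)
```

With a real sample one would compare the statistic against the t distribution with n − 1 degrees of freedom; the pairing matters because each winning horse generates both a Tote and an SP return.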
Tote and starting price pay-outs

In Table 5.1, the average pay-out for winning £1 bets with the Tote and at bookmakers' SPs is given. As Gabriel and Marsden (1990) found, there is an apparent clear and statistically significant difference in favour of betting on the Tote. However, the average Tote pay-out is approximately 35 per cent lower than that previously reported, but the SP pay-out is about the same. Table 5.2 reports the differences between Tote odds and SPs for different ranges of SPs. All the differences are statistically significant. However, only for the first three ranges of SPs does the Tote offer more favourable returns. For the favourable Tote odds, the differences in means may appear large. However, the construct of equal £1 winning bets inflates the importance of these differences, as much less money is bet on outsiders than on fancied horses.

Table 5.1 Average winning pay-outs per £1 bet

Range of odds       Observations   Tote mean      SP mean       Difference    t-Value
All                 3,388          7.20 (10.86)   6.09 (5.97)   1.11 (6.62)   9.74

Note: Standard deviations are in parentheses.

Table 5.2 Average winning pay-outs per £1 bet

Range of odds       Observations   Tote mean       SP mean        Difference      % Difference   t-Value
SP ≥ 20/1           140            39.97 (29.17)   26.27 (8.96)   13.70 (24.80)    34             6.54
10/1 ≤ SP < 20/1    539            15.84 (8.13)    12.34 (2.15)    3.50 (7.24)     22            11.24
5/1 ≤ SP < 10/1     926             6.80 (2.85)     6.61 (1.26)    0.19 (2.36)      3             2.52
5/2 ≤ SP < 5/1      855             3.29 (1.12)     3.47 (0.65)   −0.18 (0.91)     −5            −5.96
Evens ≤ SP < 5/2    649             1.53 (0.55)     1.64 (0.40)   −0.11 (0.39)     −7            −6.90
SP < Evens          279             0.61 (0.26)     0.64 (0.22)   −0.03 (0.17)     −5            −2.70

Note: Standard deviations are in parentheses.


The pattern of more favourable odds on favourites and less favourable odds on outsiders offered by bookmakers compared to the Tote is compatible with the evidence of the Royal Commission on Gambling (1978) that the pay-outs on bets with bookmakers are greater than those for the Tote, and with the evidence of Cain et al. (2001). It contradicts the conclusion of Gabriel and Marsden (1990, p. 874) of 'persistently higher Tote returns'. Two issues arise from the results of Table 5.2. First, why should bookmakers choose to offer odds on the fancied horses that are more favourable than the Tote? Second, how can bookmakers attract betting on the less fancied horses when the odds offered appear to be so much less than those given by the Tote? The remainder of the empirical analysis and theoretical investigation examines the evidence of Table 5.2 and considers its implications.

The market in the returns for fancied horses

From Table 5.2, it is clear that, compared to the Tote and on average, bookmakers offer favourable SPs on the fancied horses, taken as being horses with SPs of 5/1 or less. Why should bookmakers wish to offer such favourable odds? The bookmakers dominate the market in racecourse betting, being responsible for 95 per cent of the turnover, and most of the volume of betting is on fancied horses.8 Presumably, they wish to achieve this outcome because the volume of betting creates greater expected profits than simply ensuring that they make the same profit whatever the outcome of the race and matching the returns offered by the Tote. However, to do this they must be offering a better product than the Tote. Betting with bookmakers is a superior product for three reasons. First, bets are made at known odds whilst, in the case of the Tote, the actual return on a winning bet is not known exactly until the end of betting and depends on the final amounts bet on the winning horse and on all horses.
Second, with the Tote, an additional bet on a winning horse reduces the average return, as the total pool has to be spread over a larger amount bet on the winning horse. Finally, bookmaker SP payments for the three groups of most fancied horses are about 6 per cent better than those with the Tote. However, there are two reasons why bets with the bookmaker may be less attractive than using the Tote. First, for the fancied horses, though on average the SPs are better than the Tote returns, the SPs are not always better. For the fancied runners, the Tote returns were greater than the SPs for 32 per cent of winning horses and worse for 68 per cent. Thus, for about a third of winning horses, a small bet placed with the Tote would secure a greater return than with the bookmakers. Second, bookmakers alter the odds offered on horses. Most on-course bets with bookmakers are struck at the currently offered odds, and these vary across the duration of the betting market. The volume of betting on different horses, the odds on other horses and, presumably, forecast Tote returns affect the odds offered on a particular horse, see Royal Commission on Gambling (1978). It is common to notice that the odds on outsiders drift out and the odds on clear favourites drift in. This drift can be interpreted as bookmakers protecting themselves from insider information on the likelihood of a horse winning. The degree of insider information is likely to be reflected in the volume of betting for particular horses.

Table 5.3 Opening prices, SPs and mean drift (expressed as percentage of drift from opening price)

Range of odds       Observations   Mean drift      OP/SP         Tote mean
SP ≥ 20/1           140            39.30 (38.42)   18.86/26.27   39.97
10/1 ≤ SP < 20/1    539            20.20 (27.52)   10.27/12.34   15.84
5/1 ≤ SP < 10/1     926            17.93 (28.98)    5.61/6.61     6.80
5/2 ≤ SP < 5/1      855            12.50 (30.33)    3.08/3.47     3.29
Evens ≤ SP < 5/2    649             5.43 (29.33)    1.55/1.64     1.53
SP < Evens          279            −4.81 (23.40)    0.67/0.64     0.61

Note: Standard deviations are in parentheses.

Table 5.3 measures the drift from opening prices of different SP odds for winning horses. The drift is measured by the difference of starting and opening prices divided by the latter. The major conclusion to be drawn from Table 5.3 is that, in each odds category, there is a large degree of variation in the drift (shown by the high standard deviation relative to the mean drift). So the direction and magnitude of the drift are varied, and the returns offered by bookmakers will not always be in excess of the Tote returns. For the most fancied horses, the average drift reduces the returns and brings the Tote and bookmaker returns closer. The drift for the second most fancied group of horses increases the difference in returns, but the average drift is a small proportion of the standard deviation. For the third most fancied group of horses, on average, the opening price is less than the Tote return and the SP exceeds the Tote return by nearly the same amount. This implies that the drift starts off taking the bookmaker odds in the direction of the final Tote return, but the adjustment, on average, overshoots and contains a lot of noise.

Market in the returns for unfancied horses

It appears that bookmakers offer, on less fancied horses (here taken as horses with SPs of 5/1 or more), less favourable odds than the Tote. The following analysis suggests that the simple conclusion to be drawn from Table 5.2 has to be strongly qualified, for four reasons. First, for the unfancied horses, the differences between Tote and bookmaker returns are heavily skewed, with a few very large differences.9 Such positive outliers tend to occur in small pools where there is very little support in the Tote betting market for these unfancied horses.
The consequence of the existence of these outliers is that the mean difference for the remaining horses is much less. The importance of these outliers can be seen in the absolute size of the standard deviations of the Tote samples in Table 5.2 relative to the difference of the Tote and SP means for the three groups of unfancied horses. Bookmakers refuse to match these outlier Tote returns, as doing so exposes them to the risk of a large loss; the reasons
for this behaviour are explained below and should not be regarded as evidence of semi-strong inefficiency. Second, betting on a horse with the Tote depresses the winning return. By comparison, bookmakers are expected to accept reasonable-sized bets at the posted odds. The same absolute bet will depress the winning return for an unfancied horse more than for a much-fancied horse because the pool of bets on the unfancied horse is smaller. Thus, bookmakers do not have to match predicted Tote returns on unfancied horses. For example, take the case of the total bet with the Tote on a race being £1,190 and the predicted return on a particular horse being twenty. A punter betting £10 on this horse will reduce the winning return and thus receives a return of £160.81 for his bet. This represents a reduction of 16 per cent from the predicted return. Thus, for quite moderate-sized bets and Tote pools that are not unduly small, bookmakers can offer substantially lower odds and still remain competitive.

Third, the drift in the odds of unfancied horses is on average in the direction of reducing the difference between Tote and bookmaker winning returns, see Table 5.3. As noted above, there is a large degree of variation in the percentage drift. The drift may be regarded as the response of bookmakers to the volume of betting on different horses. If a horse receives little support, its odds will drift out. If a horse receives an unexpected amount of support, this may be taken as representing betting supported by insider information. The bookmaker will protect their position by reducing the odds and so reducing support for the horse. Holders of such insider information will not use the Tote to place bets, as this automatically reduces the return the insider receives. Thus, where insider information is perceived to exist, bookmakers will contrive to force their SPs down and they will be much lower than the Tote returns.
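The pool-dilution arithmetic in the £1,190 example above can be sketched as follows. The 16 per cent win-pool deduction (matching the 84 per cent average Tote pay-out reported earlier) and the £50 assumed to be already riding on the horse are assumptions chosen so that the predicted dividend is roughly twenty; under them, a £10 bet cuts the punter's per-pound return by about 16 per cent, the reduction quoted in the text.

```python
def tote_dividend(total_pool, stake_on_horse, take=0.16):
    """Tote win dividend per £1 staked: the net pool (after the assumed
    16 per cent take) divided by the total money on the winning horse."""
    return total_pool * (1 - take) / stake_on_horse

# Pool of £1,190 with an assumed £50 on our outsider: dividend ≈ 20 per £1
predicted = tote_dividend(1190, 50)
# A £10 bet joins both the pool and the money on the horse
diluted = tote_dividend(1190 + 10, 50 + 10)
reduction = 1 - diluted / predicted   # ≈ 0.16, i.e. about 16 per cent
```

The smaller the existing stake on the horse, the sharper the dilution, which is why bookmakers need not match the very large predicted Tote returns on unsupported outsiders.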
It may not be appropriate to regard this as an example of semi-strong inefficiency, as bookmakers can be interpreted as responding to information contained in the volume of betting, see Vaughan Williams and Paton (1997b). Gabriel and Marsden (1990) and Cain et al. (2001) suggest that bookmakers protect themselves by reducing the odds on heavily supported horses. Shin (1991) has developed a famous theoretical model that shows how bookmakers protecting themselves against insider trading will offer odds with the well-known favourite–longshot bias. Unfortunately, the evidence of drift in bookmaker odds for winners and all other runners does not support this hypothesis. In Table 5.4, the drift from opening to starting prices is reported for the chosen six SP categories. In no case are the differences in drift statistically significant between winning and non-winning horses. Additionally, the differences are not quantitatively important, the largest difference being less than 2 per cent and all others being less than 1 per cent. Cain et al. (2001) provide evidence that estimates of a Shin measure of the degree of insider information are related to the discrepancy in the Tote and bookmaker winning returns. This proposition, to which the present author is sympathetic, would not appear to be compatible with the evidence of Table 5.4. Crafts (1985 and 1994) has provided evidence that there are horses on which the odds shortened significantly and which went on to win. However, these are a small proportion of all winning horses and may get lost in the large number of winning horses considered in Table 5.4.

Table 5.4 Average drift (expressed as percentage of drift from opening price)

                    Losing runners                 Winning runners
Range of odds       Observations    Mean drift     Observations   Mean drift   Difference   t-Value
SP ≥ 20/1           10,272 (51.32)  39.74          140 (38.42)    39.30         0.44         0.13
10/1 ≤ SP < 20/1     9,447 (30.28)  22.13          539 (27.52)    20.20         1.93         1.57
5/1 ≤ SP < 10/1      8,446 (28.64)  17.88          926 (28.98)    17.93        −0.05        −0.05
5/2 ≤ SP < 5/1       4,292 (30.34)  12.40          855 (30.33)    12.50        −0.10        −0.09
Evens ≤ SP < 5/2     1,849 (28.41)   4.84          649 (29.33)     5.43        −0.59        −0.44
SP < Evens             482 (23.98)  −5.61          279 (23.40)    −4.81        −0.80        −0.45

Note: Standard deviations of the drift are in parentheses.

At the aggregate level, it is difficult to identify winning horses as having their odds shortened more than other runners, but see Vaughan Williams and Paton (1997b). This idea is important to the Shin (1991) model and is commonly accepted; for example, see Vaughan Williams (1999).

As noted in the section on 'Efficiency and British betting markets', the odds offered by bookmakers are discrete. The drift in bookmakers' odds towards the Tote odds is likely to be restricted to a degree by the discrete odds bookmakers use. The effect of this can be considered by assuming that, when updating odds, bookmakers calculate the exact odds they wish to offer and only when these exceed an allowed odds category by a sufficient margin do they change to the new category. Thus, when odds drift out, on average, bookmakers will offer less than the Tote odds because of this ratchet effect. This risk-averse strategy applied to odds drifting out will produce Tote returns in excess of bookmakers' returns, but by an unknown margin.

Fourth, bookmakers' odds are used to settle bets on horses to be placed in a race. The SPs for place bets may be more or less favourable than the Tote returns. It is common to make place bets on outsiders, as they are more likely to be placed than to win a race. However, with bookmakers in the United Kingdom, it is only possible to make a place bet with a bet to win: bookmakers require that an equal win bet is also made, and this is called an each-way bet. The return to the place part of an each-way bet with a bookmaker is a certain fraction of the offered odds for a win bet, and the number of places depends on the number of runners and whether the race is a handicap.10 In Tote betting, it is possible to bet on horses to be placed only. The separate Tote place pool is the total of place bets minus a 24 per cent take. The pool is divided equally between the place positions.
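The each-way settlement rules summarised above (and listed in full in the notes) can be turned into a small settlement function. This is a sketch: the function name and interface are invented, fractional odds are passed as decimals (e.g. 20/1 as 20.0), and the rules are those given in the chapter's notes.

```python
def each_way_payout(odds, position, runners, handicap=False, stake=1.0):
    """Total pay-out (returned stakes included) on a £stake each-way bet:
    an equal win bet and place bet, settled under the rules in the notes."""
    if runners <= 4:                   # win only: place half settled as a win bet
        places, frac = 1, 1.0
    elif runners <= 7:                 # 1/4 of the odds for 1st and 2nd
        places, frac = 2, 0.25
    elif handicap and runners >= 16:   # 1/4 of the odds for the first four
        places, frac = 4, 0.25
    elif handicap:                     # 8-15 runner handicaps: 1/4 odds, first three
        places, frac = 3, 0.25
    else:                              # 8+ runner non-handicaps: 1/5 odds, first three
        places, frac = 3, 0.2
    win = stake * (1 + odds) if position == 1 else 0.0
    place = stake * (1 + odds * frac) if position <= places else 0.0
    return win + place

# £1 each way at 20/1, third in a 10-runner non-handicap:
# the win half loses, the place half returns 1 + 20/5 = £5
payout = each_way_payout(20.0, 3, 10)
```

This is why unfavourable win SPs on outsiders can be offset for each-way backers: the place half is settled at a fixed fraction of guaranteed odds rather than at an uncertain Tote place dividend.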
For each place, the dividend is calculated by dividing the allocated pool by the total bet on the placed horse. Table 5.5 gives the place pay-outs for Tote and bookmakers on unfancied horses.

Table 5.5 Average place pay-outs per £1 bet for all placed horses

Range of odds       Observations   Tote mean     SP mean       Difference     % Difference   t-Value
SP ≥ 20/1             702          6.56 (7.55)   6.48 (2.96)    0.08 (6.83)     1              0.30
10/1 ≤ SP < 20/1    1,935          2.50 (1.50)   2.90 (0.57)   −0.40 (1.39)   −14            −12.70
5/1 ≤ SP < 10/1     2,883          1.37 (4.42)   1.57 (0.34)   −0.20 (4.41)   −13             −2.44

Note: Standard deviations are in parentheses.

In two out of the three SP ranges, the bookmakers offer more favourable odds for place bets and, in the third case, the difference is very small and statistically insignificant. As the betting with bookmakers on outsiders is often in the form of each-way bets, the unfavourable SPs of Table 5.2 compared with the Tote odds are offset by the favourable bookmakers' odds for placed outsiders. A comparison of the returns on each-way bets on outsiders with bookmakers and at the Tote is given in Table 5.6.

Table 5.6 Average pay-outs for £1 each-way bets on all placed horses (including win and place)

Range of odds       Observations   Tote mean       SP mean        Difference     % Difference   t-Value
SP ≥ 20/1             702          13.69 (22.98)   10.84 (11.99)   2.85 (15.03)   26             5.02
10/1 ≤ SP < 20/1    1,935           6.18 (9.08)     5.60 (6.14)    0.58 (4.76)    10             5.39
5/1 ≤ SP < 10/1     2,883           2.84 (6.13)     2.97 (3.65)   −0.13 (4.71)    −4            −1.45

Note: Standard deviations are in parentheses.

This shows that the differences between Tote and bookmaker returns are reduced and, in one case, the sign of the difference is actually reversed.
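The Tote place-pool arithmetic described above — a 24 per cent take, the net pool split equally across the place positions, and each share divided by the money on that placed horse — can be sketched as follows. The pool and stake figures are invented for illustration.

```python
def place_dividends(place_pool, stakes_on_placed, take=0.24):
    """Tote place dividend per £1 for each placed horse: the net pool is
    divided equally between the place positions, and each share is then
    divided by the total staked on that placed horse."""
    share = place_pool * (1 - take) / len(stakes_on_placed)
    return [share / s for s in stakes_on_placed]

# Invented £950 place pool with three places and £200, £120 and £60
# staked on the three placed horses
divs = place_dividends(950, [200, 120, 60])
```

Because each placed horse's dividend depends only on its own share of the pool, thinly backed outsiders can throw up large place dividends in exactly the way the win pool does.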

Interpretation of the results

Bookmakers could attempt to set perfect books and/or match the returns offered by the Tote. The evidence presented here suggests that bookmakers attempt to do something different from this. It is also clear that punters will not always have incentives to equalise completely the returns between betting with the Tote and bookmakers. This evidence is summarised and interpreted below.

The most important point to make about the Tote/bookmaker betting market is that there is considerable variation in the returns offered. This has not been emphasised sufficiently in previous discussions of the market. The distribution of the differences in winning Tote and bookmaker returns shows great variation, and there is great variation in these differences during the operation of the market. Thus, it is not sufficient to consider only what happens to the average differences; it is appropriate to consider the distribution of differences as well.

It is suggested that bookmakers wish to maximise expected profit and that most bets are placed on horses with a high probability of winning. Thus, it is important that bookmakers offer better returns on fancied horses than the Tote to attract such betting. On average this is correct, but the favourable odds are only of the order of about 6 per cent better. Additionally, on-course bookmakers pay out on the odds listed at the time of the bet and, unlike the Tote, the effect of a substantial bet is not to reduce the predicted return to the punter. However, for an important minority of fancied horses the Tote returns are better than the SPs of the bookmakers.
Additionally, the bookmaker odds drift and, except for the most fancied group of horses, drift out. So, with these qualifications, the SPs are on average more favourable than the bookmaker odds offered earlier in the operation of the market. For two out of the three fancied groups of horses, the drift in odds reduces the average difference in Tote/bookmaker winning returns. Thus, the behaviour of bookmakers and punters is compatible with a more complicated idea of efficiency that, amongst other things, embraces the idea that the market is characterised by variation and uncertainty.

For betting on unfancied horses, the Tote on average gives a better return. However, this result is biased by the presence of unsupported horses with large Tote returns. Betting with the Tote on an unsupported horse will have an important effect on the winning return because of the proportionately large increase in the pool of bets on the winning horse. For all categories of unfancied horses, the bookmakers' odds drift in a manner that reduces the average Tote–bookmaker difference in winning returns. However, there is again a large variation in the drift of bookmaker odds. It is not yet possible to detect in aggregate data that bookmakers are efficient in reducing the odds on horses on which insider information is revealed by the volume of betting, though see Vaughan Williams and Paton (1997b). The bookmakers' odds are used to settle bets for horses to be placed. For unfancied horses and each-way bets, there is less difference than between Tote and bookmaker winning returns, and for one group of horses the bookmakers' returns are better.

Conclusion

The observed odds offered by bookmakers and the Tote can be used to test the hypothesis of semi-strong efficiency. Gabriel and Marsden (1990) concluded that the Tote offers more favourable odds than bookmakers, which implies semi-strong inefficiency. A more thorough empirical investigation shows that, at British racecourses, bookmakers offer more favourable odds on favourites and less favourable odds on outsiders compared to the Tote system of betting, a result that was also found by Cain et al. (2001) and Vaughan Williams and Paton (1997b). This and other evidence is compatible with efficient behaviour by bookmakers and punters operating in a market characterised by much variation and particular structural characteristics.

The conclusion that can be drawn from this study is that the two betting markets are not identical and perfect information on odds does not exist. The pattern of differences between the odds of the two markets is of a different and more complicated nature than that suggested by Gabriel and Marsden (1990) and Cain et al. (2001). However, there is a systematic movement in the odds offered by bookmakers towards those of the Tote. Thus, an efficient movement in prices in the two markets appears to exist, but it is difficult to conclude whether it is complete. The market structures and imperfect information suggest that the two markets would not be expected to offer exactly the same odds. The average differences in Tote and bookmakers' winning returns are small compared to the distributions of these differences. This indicates that most, if not close to all, of the feasible convergence between the two markets takes place.


Acknowledgements

We are very grateful for the advice and suggestions of Andrew Dickerson and Alan Carruth.

Notes

1 Gabriel and Marsden (1991) published a correction, the contents of which do not affect the present study.
2 Bird and McCrae (1994) consider some Australian evidence that suggests equalisation of prices in two different betting markets.
3 The semi-strong form of the Efficient Markets Hypothesis asserts that prices reflect all publicly available information, see Malkiel (1987).
4 The minimum dividend to a £1 stake is £1.10, implying a return of 10p.
5 Sidney (1976) contains a detailed discussion and description of on-course and off-course bookmaking.
6 To be exact, these prices are those of Rails and Tattersalls bookmakers, see Sidney (1976).
7 The rules used to settle each-way bets are: 2–4 runners – win only; 5–7 runners – 1/4 of the odds for 1st and 2nd; 8 or more runners – 1/5 of the odds for 1st, 2nd and 3rd (in non-handicap races); 8–15 runners – 1/4 of the odds for 1st, 2nd and 3rd (in handicap races); 16 or more runners – 1/4 of the odds for 1st, 2nd, 3rd and 4th (in handicap races).
8 See McCririck (1991), p. 59 and Cain et al. (2001), p. 203.
9 Cain et al. (2001) also make a similar observation.
10 See note 7 for details of the rules used to settle each-way bets with bookmakers.

References

Ali, M. M. (1979), 'Some evidence on the efficiency of a speculative market', Econometrica, 47, 387–92.
Asch, P., Malkiel, B. G. and Quandt, R. E. (1984), 'Market efficiency in racetrack betting', Journal of Business, 57, 165–75.
Asch, P. and Quandt, R. E. (1987), 'Efficiency and probability in exotic bets', Economica, 54, 289–98.
Asch, P. and Quandt, R. E. (1988), 'Betting bias in exotic bets', Economics Letters, 28, 215–19.
Bird, R. and McCrae, M. (1987), 'Tests of the efficiency of racetrack betting using bookmakers odds', Management Science, 33, 1552–62.
Bird, R. and McCrae, M. (1994), 'Efficiency of racetrack betting markets: Australian evidence', in D. B. Hausch, V. S. Y. Lo and W. T. Ziemba (eds), Efficiency of Racetrack Betting Markets, Academic Press, London, pp. 575–82.
Cain, M., Law, D. and Peel, D. A. (2001), 'The incidence of insider trading in betting markets and the Gabriel and Marsden anomaly', The Manchester School, 69, 197–207.
Crafts, N. (1985), 'Some evidence of insider knowledge in horse racing betting in Britain', Economica, 52, 295–304.
Crafts, N. F. R. (1994), 'Winning systems? Some further evidence on insiders and outsiders in British horse race betting', in D. B. Hausch, V. S. Y. Lo and W. T. Ziemba (eds), Efficiency of Racetrack Betting Markets, Academic Press, London, pp. 545–9.
Dolbear, F. T. (1993), 'Is racetrack betting on exactas efficient?', Economica, 60, 105–11.
Dowie, J. (1976), 'On the efficiency and equity of betting markets', Economica, 43, 139–50.

J. Peirson and P. Blackburn

Fama, E. (1970), 'Efficient capital markets: a review of theory and empirical work', Journal of Finance, 25, 383–417.
Gabriel, P. E. and Marsden, J. R. (1990), 'An examination of market efficiency in British racetrack betting', Journal of Political Economy, 98, 874–85.
Gabriel, P. E. and Marsden, J. R. (1991), 'An examination of market efficiency in British racetrack betting: errata and corrections', Journal of Political Economy, 99, 657–9.
Hausch, D. B., Ziemba, W. T. and Rubinstein, M. (1981), 'Efficiency of the market for racetrack betting', Management Science, 27, 1435–52.
Lo, V. S. Y. and Busche, K. (1994), 'How accurately do bettors bet in doubles?', in D. B. Hausch, V. S. Y. Lo and W. T. Ziemba (eds), Efficiency of Racetrack Betting Markets, Academic Press, London, pp. 465–8.
Malkiel, B. G. (1987), 'Efficient markets hypothesis', in J. Eatwell, M. Milgate and P. Newman (eds), The New Palgrave: Finance, Macmillan, London.
McCririck, J. (1991), World of Betting, McCririck, London.
Peirson, J. (1988), 'The economics of the setting of odds on horse races', Fourth International Conference on the Foundations and Applications of Utility, Risk and Decision Theory, Budapest, June 1988.
Phlips, L. (1989), The Economics of Price Discrimination, Cambridge University Press, Cambridge.
Raceform 1993 Flat Annual, Raceform, Newbury.
Royal Commission on Gambling (1978), vols I & II, Cmnd 7200, HMSO, London.
Shin, H. S. (1991), 'Optimal betting odds against insider traders', Economic Journal, 101, 1179–85.
Shin, H. S. (1992), 'Prices of contingent claims with insider traders and the favourite–longshot bias', Economic Journal, 102, 426–35.
Sidney, C. (1976), The Art of Legging, Maxline, London.
Snyder, W. W. (1978), 'Horse racing: testing the efficient markets model', The Journal of Finance, 33, 1109–18.
Sporting Life Flat Results 1993, Mirror Group Newspapers, London.
Thaler, R. H. and Ziemba, W. T. (1988), 'Anomalies: parimutuel betting markets: racetracks and lotteries', Journal of Economic Perspectives, 2, 161–74.
Tuckwell, R. H. (1983), 'The thoroughbred gambling market: efficiency, equity and related issues', Australian Economic Papers, 22, 106–18.
Vaughan Williams, L. and Paton, D. (1997a), 'Why is there a favourite–longshot bias in British racetrack betting markets?', Economic Journal, 107(1), 150–8.
Vaughan Williams, L. and Paton, D. (1997b), 'Does information efficiency require a perception of information inefficiency?', Applied Economics Letters, 4, 615–17.
Vaughan Williams, L. (1999), 'Information efficiency in betting markets: a survey', Bulletin of Economic Research, 53, 1–30.

6

Breakage, turnover, and betting market efficiency
New evidence from Japanese horse tracks

W. David Walls and Kelly Busche

In this research we analyze more than 13,000 races run at eighteen Japanese horse tracks. We examine the relationship between breakage (the rounding down of pay-outs to winning wagers), betting turnover (the dollar amounts bet), and betting market efficiency. The evidence across Japanese horse tracks indicates that tracks with high turnovers are more informationally efficient than tracks with low turnovers. We also find that breakage costs are systematically related to betting market efficiency. We investigate the possibility that bettors have preferences over the skewness of betting returns in addition to their level and variance, and we relate this to betting turnover as well. The new evidence leads us to reject the skewness–preference model at tracks with a high volume of betting; however, the skewness–preference model is consistent with betting behavior at tracks with low betting turnovers.

Introduction

A slew of empirical research on horse track betting finds that bettors do not behave in a way consistent with market efficiency. The results of Ali (1977), Fabricand (1977), Hausch et al. (1981), Asch and Quandt (1987), Asch et al. (1982, 1984), and other authors all point toward market inefficiency in horse wagering.1 Few published papers have found evidence consistent with market efficiency: Busche and Hall (1988), Busche (1994), and Busche and Walls (2000) are among those who have used the same empirical methods as previous researchers and obtained results consistent with optimizing behavior on the part of racetrack bettors. The most well-established market inefficiency – known in gambling parlance as the favorite–longshot bias – is that the favorite or low-odds horses are systematically underbet relative to the longshot or high-odds horses.2 Bettors appear not to be optimizing because they could reallocate bets from longshot horses to favorite horses in a way that would increase the expected returns for the same amount bet. Many explanations have been offered for the observed betting bias in wagering markets ranging from psychological explanations based on misperceptions (Slovic et al., 1982) to arguments that racetrack bettors have a love of risk (e.g. Asch and Quandt, 1990). Sauer (1998) states in a recent survey article on the economics of wagering markets that, "Work documenting the source of variation


in the favorite–longshot bias would be particularly useful” (p. 2048). In this paper we take an empirical stab at uncovering how the favorite–longshot bias is related to breakage and betting turnover, and also how it is related to bettor preferences over the moments of the returns distribution. In this mostly empirical chapter we analyze more than 13,000 races run at horse tracks across Japan in 1999 and 2000. The races come from horse tracks operating under the Japan Racing Association (JRA) and the National Association of Racing (NAR), the tracks differing primarily in the betting turnover: JRA tracks have an average daily turnover of about 3 million American dollars, while the NAR tracks have an average daily turnover of about 30,000 American dollars. Our sample of data is unique in that we have a large number of races across tracks that differ by several orders of magnitude in bet turnover, yet all venues are in the same country.3 We ﬁnd that betting market efﬁciency is systematically related to breakage costs – the cost associated with the rounding down of pay-outs on winning bets. We construct an index of breakage costs and ﬁnd that races with higher breakage costs are more likely to be measured as inefﬁcient. Betting behavior for races with very small breakage costs is consistent with market efﬁciency. Our results suggest that ignoring the effect of breakage may bias statistical tests toward rejection of the hypothesis of market efﬁciency. We examine market efﬁciency at each track and relate it to betting turnover. Finding that bettors at low-turnover tracks do not equalize betting returns across alternative bets, while bettors at high-turnover tracks do, is consistent with the hypothesis that bettors make non-optimizing decisions when the cost of such errors is small. But they do not make such errors when the cost is large. 
Bettors at high-turnover tracks bet as if they are maximizing betting returns, while bettors at low-turnover tracks may trade off returns for the consumption of a beer and a hot dog, and the excitement of occasionally hitting the longshot. Finally, we examine the skewness–preference hypothesis put forward by Golec and Tamarkin (1998). This hypothesis formalizes the "thrill of hitting the longshot" by including skewness explicitly in the representative bettor's utility function. The evidence in support of this hypothesis varies with the volume of betting. The skewness–preference, risk-aversion model is better supported with data from our low-volume tracks where returns are not equalized across horses of different win probabilities. At high-volume tracks where bettors' behavior seems consistent with equalizing expected returns, we find evidence of risk preference and skewness aversion! The following section discusses briefly the metric of betting market efficiency that has been commonly used in the literature. We then proceed to examine empirically turnover, breakage, and skewness preference in all the following sections except the final section that concludes the chapter.

Quantifying betting market efficiency

The most direct way to examine betting market efficiency is to test whether bettors allocate bets across horses to equalize returns. This is equivalent to testing if


bettors' subjective probabilities are equal to the objective probabilities. The method of grouping data and calculating statistics to test the market efficiency hypothesis was developed by Ali (1977). First, horses in each race are grouped by rank order of betting volume; the horse with the largest bet fraction is the first favorite, the horse with the second largest bet fraction is the second favorite, and so on.4 The fractions of money bet on horses in each rank are the subjective probabilities (Rosett, 1965) and these probabilities are compared to the objective probabilities (fractions of wins in each rank).5 Rosett (1965) showed that if risk-neutral bettors have unbiased expectations of win probabilities, then the proportion of money bet on a horse will equal the win probability. If the difference between subjective probability and objective probability is zero, the return from each horse will be equalized at the average loss due to the track's extraction of a portion of the betting pool. We can test the null hypothesis that the subjective probability (ψ) equals the objective probability (ζ) in each favorite position by treating the number of wins as a binomial statistic.6 For a sample of n observations, the statistic

z = (ψ − ζ)√(n/(ζ(1 − ζ)))    (1)

has a limiting normal distribution with mean zero and unit variance.7 Very large or small z-statistics, as compared with the upper or lower percentage points of the normal distribution, provide statistical evidence of overbetting or underbetting on horses in each favorite position. In the empirical work that follows below, data on bet volumes, odds, and race outcomes were obtained from the eighteen Japanese horse tracks listed in Table 6.1. Since we were able to obtain the exact bet volumes, we were able to compute the bet fractions for each horse in a race directly as opposed to imputing them from the odds.8
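As a concrete illustration, the grouping statistic of equation (1) can be sketched as follows; the function name and the numbers are purely hypothetical, not data from the chapter:

```python
import math

def z_statistic(subjective_p, objective_p, n):
    """Equation (1): tests whether the fraction of money bet on a favorite
    position (subjective probability psi) equals the fraction of wins in
    that position (objective probability zeta) over n races."""
    return (subjective_p - objective_p) * math.sqrt(
        n / (objective_p * (1 - objective_p)))

# Hypothetical example: first favorites attract 35% of the win pool
# but win only 30% of 900 races.
z = z_statistic(0.35, 0.30, 900)
print(round(z, 2))  # about 3.27: significant overbetting of favorites
```

A positive z in a favorite position indicates that more money is bet there than the win record justifies; values beyond the usual normal critical values signal overbetting or underbetting, as in Tables 6.1 and 6.2.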

The role of turnover in betting markets9

The divergences from market efficiency in our results reported here, and in the results of previous researchers, are inversely related to the betting turnover. This relationship has been hinted at by other authors, but it has only been confronted directly by Walls and Busche (1996) and Busche and Walls (2000).10 Evidence of non-optimizing behavior, in the data analyzed in this chapter and in all prior studies, is present only at racetracks with betting turnovers of a few thousand dollars per race. When the turnover is scaled up by orders of magnitude, to a few hundred thousand dollars per race, we find no significant deviations from market efficiency. Our findings provide further non-experimental support for the decision–cost theory. Also, because we only examine horse tracks in Japan, cultural factors can be ignored. Economists seem to have had a fascination with anomalies while ignoring the mantra of opportunity cost. It is fortunate that the mountain of evidence of non-optimizing behavior in gambling markets and in economic experiments has prompted some economists to re-think the representative consumer's optimization problem in terms of the cost of making decisions. Smith (1991) has challenged

Table 6.1 z-statistics for Japanese horse tracks ordered by turnover

Track name   Races   Turnover(b)   z-statistic by favorite position(a)
                                     1      2      3      4      5      6      7      8      9
Tokyo          477     457,866   −0.77  −0.42   0.17   0.42  −0.99  −0.12  −0.32  −0.21   0.34
Nakayama       286     425,726   −0.62  −1.51   1.41   0.12   1.55  −1.13   0.29  −0.57   0.71
Kyoto          288     346,535   −1.11   0.78   0.20  −0.76   1.04  −1.20  −0.10  −0.35  −0.03
Hanshin        288     295,186   −1.48  −0.25   0.38   1.20  −1.15   0.08  −0.09  −0.72   2.09
Kokura         192     205,636    0.78   0.56  −0.62  −0.03  −2.00  −0.18  −0.31   1.74  −0.88
Chukyo         192     163,556   −0.32   0.19  −0.64   0.04   0.50   0.20  −1.27  −0.39   0.09
Hakodate       192     150,141    1.11   0.02  −1.01  −1.24  −0.93  −0.43  −0.07   1.61  −0.01
Sapporo        192     148,343    0.28  −0.86  −0.49  −0.17  −0.84  −1.24   0.67   1.04   0.96
Fukushima      384     138,529    0.54   1.09   0.02  −0.37  −1.13  −0.88  −2.32   0.41   0.59
Urawa          969       9,477    1.84  −1.73   0.22   0.45   0.89  −0.43  −0.97   1.89   0.66
Kawasaki       858       9,427   −1.52   1.25   1.29   0.54  −0.92   0.93   0.70   0.40  −0.48
Mizusawa     1,350       4,252    0.85  −0.51  −0.06   0.49  −1.52   0.24   0.39   2.79   0.51
Sonoda       1,467       4,195    0.48  −0.72   0.06   1.05   1.07  −0.68   0.74   1.33   0.54
Nagoya       1,560       1,417   −1.10  −0.06   0.96   0.85   0.62  −0.38   0.33   1.29   2.38
Kamiyama     1,016       1,365   −0.54   0.24  −0.28   0.52   0.03   0.05   2.01   2.11  −0.62
Niigata        942         738   −1.39   0.78   0.79   1.63  −0.76  −1.10   1.30   4.15   0.32
Saga         1,288         515   −1.00  −0.08   0.36  −0.76   0.80   1.45   2.32   1.50   1.50
Arao         1,082         441   −0.37  −1.78   1.87   1.27   2.29   0.07  −0.43  −0.21   2.54

Notes
a z-statistics are listed by favorite position for each track.
b Turnover is listed in 10² Yen (approximately equal to US dollars at then-current exchange rates).


the interpretation placed upon evidence from experimental studies, and Smith and Walker (1993a,b) develop what they call a decision–cost theory in which the size of payoffs affects the efficiency of outcomes. Smith and Walker go on to review thirty-one experimental studies and find that the pattern of results is consistent with optimization theory: When the costs of optimization errors increase, the size of the errors decreases and risk aversion replaces risk seeking. Harrison (1989, 1992) argues that the anomalies observed in experimental economic markets simply reflect the fact that the opportunity cost of non-optimizing decisions is tiny.11 When the potential gain to participants in gambling experiments increases, the percentage who appear risk averse also increases (Hershey et al., 1982; Battalio et al., 1990). Horse tracks around the world use the pari-mutuel system of betting.12 In pari-mutuel betting markets, the track operator extracts a percentage of the betting pool and returns the remainder to winning bettors in proportion to their individual stakes on the outcome of the race. The net return per dollar from a bet on a particular horse i is given by

Ri = (1 − t)(w/xi) − 1    (2)

where t is the track take; xi is the total amount bet on horse i; and w = Σi xi is the total amount bet on all horses. As the returns on each bet depend on the total amount bet on all horses, the actual payoffs to bets are not determined until all bets have been made. If the proportion of the total betting pool bet on each horse were equal to each horse's win probability, returns across all horses would be equalized and the betting market could be considered efficient in that bettors have exploited all betting margins. However, if the pattern of betting resulted in a particular horse yielding a statistically larger expected return than another horse, this would be evidence against the hypothesis of market efficiency.
Suppose there were a single risk-neutral professional bettor who knew the horses' true win probabilities, and for simplicity also assume that there was a single underbet horse.13 The bettor's decision problem is to maximize expected returns

E(R) = π(1 − t)B((w + B)/(x + B)) − B    (3)

by choice of his bet B, where π is the horse's win probability. For the bettor, maximizing expected returns yields the optimal bet B*

B* = √(π(1 − t)(w − x)x/(1 − π(1 − t))) − x    (4)

The optimal bet is a function of the amount currently bet on that horse, the total bet on all other horses, the track take t, and the horse's win probability. If a horse is sufficiently underbet, the bettor could place a bet that would have a positive return in expectation. Suppose, for example, a horse with a 0.15


win probability has already attracted a 10 percent share of the win pool. If the track take is 18 percent, advertised odds will be 8.2 : 1, and a $1 bet on this horse would have an expected net return obtained by evaluating equation (2): π(1 − t)(w/x) − 1 = 0.15 × 0.82 × (1/0.1) − 1 = $0.23. The bettor's optimal bet on this horse depends upon the value of the total betting volume w, the initial amount bet on the horse, x, and the win probability π. If the original win pool were $10,000, with $1,000 already bet on the selected horse, the profit-maximizing bet for the professional bettor is $123.50. The professional's bet causes the odds to fall from 8.2 to 7.39 : 1 and the expected net return is $13.38. The professional's bet removes all further profitable bets in the example.14 When there are multiple professional bettors competing to make the profitable bets, the odds converge even more rapidly toward the level implied by market efficiency (Adams et al., 2002). From equation (3) the expected profit is 123.5 × (0.15)(0.82)(10,123.5/1,123.5) − 123.5 = 13.38. If the professional bettor made a bet of $262.26, the odds on this horse would be reduced to (1 − t) × 10,262.26/1,262.26 = 6.67 and this would drive the professional's return to zero. The profit-maximizing bettor drives the final track odds to between the initial odds of 8.2 and the zero-return odds of 6.67. Optimal bets and expected returns are scaled by the total pool size: If the win pool were two orders of magnitude larger ($1,000,000) then expected returns would also increase by two orders of magnitude ($1,338).15 The magnitude of returns effectively constrains the professional's research costs incurred in estimating horses' win probabilities: Research will be proportional to the size of the betting pool.
In the example given in the previous paragraph, if only one underbet horse could be found on each race day, the professional bettor with an alternative employment opportunity paying $100 per day could not proﬁtably spend any money on research at the racetrack with a $10,000 win pool. However, at the $1,000,000 track, the bettor could spend up to $1,238 per day on research before becoming indifferent between professional betting and his alternative employment.16 In the event that research costs and betting volume made it unproﬁtable for a professional to bet at the track, an outside investigator would observe returns across horses that reﬂect the risk preferences of the remaining non-professional bettors. With small betting pools, the reward to low-variance predictions about a horse’s win probabilities is small, so it is unlikely that a professional bettor would be willing to incur the research cost involved in ﬁnding underbet horses. At racetracks with small volumes of betting, the enjoyment of a day at the races is perhaps sufﬁcient to attract people who treat their day at the races as consumption of entertainment. The consumption involved in recreational betting may include accepting greater than minimum required average losses to achieve the occasional thrill of hitting the longshot.17 If that is the case, the examination of betting data from tracks with small betting volumes would be expected to show that longshots are overbet. At racetracks with large volumes of betting, some bettors may proﬁtably become professionals, researching horses’ win probabilities in order to ﬁnd horses sufﬁciently underbet to yield high returns in expectation.18
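The worked example above, built on equations (3) and (4), can be checked numerically; the function names below are ours, not the authors':

```python
import math

def optimal_bet(p, t, w, x):
    """Expected-return-maximizing bet B* of equation (4): p is the horse's
    true win probability, t the track take, w the total pool, and x the
    amount already bet on the horse."""
    return math.sqrt(p * (1 - t) * (w - x) * x / (1 - p * (1 - t))) - x

def expected_profit(B, p, t, w, x):
    """Expected net return of equation (3) from a bet of B."""
    return p * (1 - t) * B * (w + B) / (x + B) - B

# The chapter's example: p = 0.15, t = 0.18, $10,000 pool, $1,000 on the horse.
B = optimal_bet(0.15, 0.18, 10_000, 1_000)
print(round(B, 2))                                              # about 123.50
print(round(expected_profit(B, 0.15, 0.18, 10_000, 1_000), 2))  # about 13.38
print(round(0.82 * (10_000 + B) / (1_000 + B), 2))              # gross odds fall to about 7.39
```

The final line reproduces the fall in gross odds from 8.2 to roughly 7.39 caused by the professional's own bet.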


If racetracks are populated by both consumers who value thrills like the possibility of hitting the longshot by betting on extreme favorites, and professional investors who value only returns, we should expect that underbet horses will be rarer at racetracks with larger turnovers. Where rewards are sufficiently high, bettors will research more and markets will be measured as more efficient.19 A testable implication of this view is that large betting volume racetracks will be measured as more efficient than small volume tracks. We now confront this prediction of the decision–cost theory with empirical evidence across Japanese racetracks with a wide range of bet volumes. Table 6.1 shows the z-statistics for the null hypothesis that betting returns are equalized across horses for each track. Each horse track is shown as a separate row in the table, and the tracks are listed in decreasing order of betting turnover. The first nine tracks, members of the JRA, have betting turnover in the hundreds of thousands per race. Among JRA tracks, only Hanshin, Kokura, and Fukushima show any evidence that betting returns are not equalized across betting alternatives when testing at the 10 percent marginal significance level. This is not strong evidence of systematic betting market inefficiency: at the 10 percent marginal significance level we would expect to find 10 percent of the z-statistics in the critical region as a result of chance, but we find only four out of eighty-one, which is about half of what we would expect to find. The bottom nine rows of Table 6.1 consist of the NAR tracks, which have betting turnover from the hundreds to slightly less than ten thousand per race. Seven of these nine tracks show evidence that bettors have not equalized the returns across betting alternatives; only Sonoda and Kawasaki showed no evidence of underbetting or overbetting.
Testing again at the 10 percent marginal significance level, chance would lead us to expect about eight significant z-statistics for the nine tracks with nine-horse races. But we find thirteen significant z-statistics, more than half again above what we would expect from chance. The pattern that emerges from the raw z-statistics indicates that horse tracks with larger betting turnover are measured as being more informationally efficient. To relate the z-statistics in each row of Table 6.1 to the betting turnover requires the construction of a metric to quantify how the z values differ from zero as a group. If we treat each row of z-statistics as a nine-dimensional vector, an intuitive way to quantify the vector of z-statistics is to measure its deviation from the zero vector in terms of Euclidean distance. This is precisely the norm of the z-vector

normi = √(Σj=1…9 zij²),  i = 1, . . . , 18    (5)

where i indexes the eighteen individual horse tracks and j indexes the favorite position at each track. We regressed the norm of the z-vector on the betting turnover and obtained the following results

normi = 4.259 − 5.22e−6 × Turnoveri + residuali    (6)
        [0.565]  [2.01e−6]


where White's (1980) heteroskedasticity-consistent estimated standard errors are reported in brackets below the respective coefficient estimates.20 The R² for the regression was 0.24. The coefficient on turnover is negative and statistically different from zero at the 5 percent significance level. This is strong evidence that the z-statistics reported in analyses of racetrack betting markets are closer to zero when bet volumes are high. These empirical results show that the volume of betting is an important determinant of observed betting market efficiency across Japan's horse tracks.
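The Euclidean-norm metric of equation (5) is simple to compute; here it is applied to the z-vectors of the highest- and lowest-turnover tracks from Table 6.1:

```python
import math

def z_norm(z_vector):
    """Euclidean distance of a track's z-vector from the zero vector,
    equation (5): the closer to zero, the closer to equalized returns."""
    return math.sqrt(sum(z * z for z in z_vector))

# z-statistics by favorite position, from Table 6.1
tokyo = [-0.77, -0.42, 0.17, 0.42, -0.99, -0.12, -0.32, -0.21, 0.34]  # turnover 457,866
arao  = [-0.37, -1.78, 1.87, 1.27, 2.29, 0.07, -0.43, -0.21, 2.54]    # turnover 441

print(round(z_norm(tokyo), 2))  # about 1.49: close to efficiency
print(round(z_norm(arao), 2))   # about 4.51: far from efficiency
```

The contrast between the two norms is exactly the inverse turnover-efficiency relationship that regression (6) summarizes across all eighteen tracks.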

The role of breakage in pari-mutuel betting21

In betting markets, the gross return per dollar from a bet on a particular horse i is given by

Ri = (1 − t)(w/xi)    (7)

where t is the "track take" – the percentage that the racetrack operator extracts from the betting pool as the fee for coordinating the gambling market; w is the total amount bet on all horses; and xi is the total amount bet on horse i. The track take is the primary source of revenue for racetracks and it is often about 0.17 or 0.18, although it is as high as 0.26 at racetracks in Japan. The track take is removed from the pool before any other calculations or payoffs are made. We explain below how returns to bettors are functions of relative amounts bet across horses; the track take does not affect the allocation of wagers across horses, although it does reduce the amount bet by any individual bettor.22 As a secondary source of revenue, and to simplify pay-outs, race track operators typically round payoffs down – often to the nearest lower 20 cents for a $2 bet; the rounding down of payoffs is called breakage in betting industry parlance. Where the exact payoff corresponding to the advertised odds might indicate $12.77 or $12.67 winning payoffs to $2.00 bets, the actual payoffs will be $12.60 for each bet, those 17 and 7 cents, respectively, removed as breakage. The methodology employed by previous researchers was to add track take and breakage together. However, track take and breakage affect the behavior of bettors differently.23 Track take alters the returns from horses across win probabilities: Expected return of horse i with πi probability of winning is πi × Ri − 1. A horse that attracts $1,000 of a $10,000 win pool at a track with 16 percent take will have odds of (1 − 0.16) × 10,000/1,000 − 1 = 7.40, so the gross return from a winning bet will equal $8.40. A horse that attracts 50 percent of that win pool will have a gross return from a $1 winning ticket of $1.68. If the track take were increased to 18 percent, those same horses would have gross returns of $8.20 and $1.64. Percentage increases of track takes and gross returns are equal, so changes in the track take do not change the relative profitability of betting different horses; the results of an experimental betting market confirm this prediction (Hurley and McDonough, 1995).
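Breakage as described above, rounding a pay-out down to the nearest 20 cents on a $2 ticket, can be sketched like this (working in integer cents to avoid floating-point surprises; the function name is ours):

```python
def apply_breakage(payout_cents, increment_cents=20):
    """Round a winning pay-out down to the nearest increment (breakage)."""
    return (payout_cents // increment_cents) * increment_cents

# The chapter's example: exact pay-outs of $12.77 and $12.67 on a $2 bet
# are both paid as $12.60; 17 and 7 cents are kept as breakage.
for exact in (1277, 1267):
    paid = apply_breakage(exact)
    print(exact, paid, exact - paid)
```

Because the amount removed is bounded by the rounding increment regardless of the odds, the same few cents weigh more heavily on low-paying favorites than on high-paying longshots, which is the asymmetry the next section exploits.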


Breakage cost differentially affects the returns across horses. A bettor placing a bet on a horse with an anticipated 10 percent chance of winning would anticipate odds of 8.3 : 1 with a 17 percent track take, and breakage would reduce the pay-out by an expected 10 cents: Rather than getting paid $16.60 for a $2 winning ticket, which would be the pay-out if actual odds were between 8.3 and 8.4, if actual odds turn out to be 8.29 the payment would be reduced by 20 cents to $16.40. With breakage distributed uniformly between 0 and 20 cents, the expected reduction is 10 cents on a $2 winning ticket. Although the expected reduction on the payment of winning tickets is a constant 10 cents, the cost is borne more heavily by winning tickets on favorite horses since 10 cents on an even–odds winner paying $2 is more than the 10 cents on a longshot winner paying $100 for a $2 winning ticket. The expected cost to all participants is a function of the odds times the probability that purchasers of the tickets will become winners. An index of expected breakage cost can be constructed by examining the components of breakage cost. The first component can be approximated by the odds on the particular horse: horses with low odds will have relatively high breakage per dollar returned on a winning ticket. The second component is related to the bet fraction because it approximates the probability that the first component will be realized. A metric of breakage for a particular horse could be constructed by multiplying these two components. Consider the breakage for a particular horse that has 49 percent of the pool bet on it: With a 17 percent track take, the exact odds would be 1.59, but due to breakage a winning ticket would be paid only $1.50 upon winning. Since the win probability can be approximated by the bet fraction, this horse would add 0.779 [= 0.49 × 1.59] to the breakage index. A horse collecting only 5 percent of the win pool would add 0.053 [= 0.05 × (0.05/0.83 + 1)]. The expected breakage cost for a particular race can be approximated by summing the breakage over the individual horses: Σi (xi/w) × Oddsi, where i indexes horses within a race. We calculated the index of breakage for each of the races in our sample and sorted the races in decreasing order of the index, which ranged from 0.0795 to 0.3303. Then we divided the sorted races into twenty-six equal groups of approximately 500 races each. The z-statistics for the null hypothesis of equal returns across horses in each favorite position were calculated for each subgroup and they are displayed in Table 6.2. In the thirteen breakage groups shown in the top half of the table we find fourteen z-statistics that are significantly different from zero at the 10 percent marginal significance level. Since we are testing at the 10 percent level, we would only expect about twelve (10 percent × 13 × 9), so there is evidence that bettors are not equalizing returns across betting alternatives for the high-breakage groups. In the thirteen breakage groups comprising the bottom thirteen rows of the table, we find only eight z-statistics that are significant at the 10 percent marginal level compared to the twelve that we would expect to find as the result of sampling variation. Thus, the lower breakage groups appear to be consistent with the hypothesis of bettors equalizing returns across favorite positions, while the high breakage groups are inconsistent with this hypothesis. To relate the z-statistics in each row


Table 6.2 z-statistics grouped by index of breakage

Index(a)   z-statistic by favorite position(b)
             1      2      3      4      5      6      7      8      9
0.3303   −1.37   0.94   2.56  −2.02  −0.23  −0.43  −0.17   0.52  −0.48
0.2745   −1.17   0.34   1.72  −0.87   1.83   0.36  −0.02   0.64   0.67
0.2465   −0.12   1.50  −1.28  −1.71  −0.10   1.36   0.07   1.71   3.71
0.2257   −1.01  −1.13   1.06   1.09  −0.03   1.42   0.44   1.34   2.89
0.2116   −1.12   0.58   0.01   0.15  −0.49   1.44   0.12   1.35   1.34
0.1996   −1.53   0.87   1.53   1.44  −0.59  −1.35   2.04  −0.11   0.75
0.1900    0.14  −1.10  −0.19   1.08  −1.10   0.78   2.76   1.59   0.09
0.1806    0.42  −1.87   0.04   1.61   3.26   0.33  −0.18  −1.28   0.89
0.1731   −0.17   0.88  −1.09   1.63  −0.32  −1.57   1.70  −0.13   1.60
0.1666   −1.22   0.49   0.53   2.53  −0.35  −0.66   0.81   0.65  −0.41
0.1607   −1.58  −0.91   3.19   1.08   0.97  −0.61   0.89   0.64   0.13
0.1549    0.79  −0.66   0.22   1.26  −0.89  −0.78  −0.82   1.04   0.64
0.1494   −0.16  −1.21   1.13   1.00  −0.05   0.62   1.68   0.65  −0.89
0.1444    0.71  −0.00  −0.38   0.68  −0.09  −0.69   0.36   0.36  −0.47
0.1394   −0.11  −0.05   1.55   0.12  −0.67   0.40  −1.61   1.15  −0.03
0.1347    1.35  −1.61   0.07  −0.83   0.33  −0.49   1.22   1.38   0.23
0.1305    2.03  −0.62   0.17   0.36  −1.55  −1.39   1.47   0.41   1.19
0.1262   −0.21   0.13  −0.76   0.60   0.57  −0.70   0.56   1.70   0.06
0.1224   −0.91   1.15  −0.83  −0.22   0.01   1.54  −1.33   2.18   1.30
0.1183   −2.09   1.56   2.40   1.11  −1.74   1.04  −0.64   1.63  −1.41
0.1138    0.81   0.22  −1.56  −0.63   0.35   0.45   1.23   0.82   0.79
0.1092    1.13  −2.34   1.05   0.14   0.33  −1.06   0.68   1.25   0.17
0.1045   −0.54  −1.10   1.85   0.23  −0.69   0.08  −0.00   0.16   1.77
0.0987    0.46   0.13  −0.23  −0.55   1.84  −0.24  −1.41   0.43   0.10
0.0913    1.23  −0.75  −0.74  −0.47  −0.58  −0.21   0.45   0.98   1.59
0.0795   −1.47   1.08  −1.02   0.67   1.21  −0.84  −1.05   1.01   0.45

Notes
a The index of breakage is defined in the main text.
b z-statistics are listed by favorite position for each index grouping.

of Table 6.2 to the breakage cost requires the construction of a metric to quantify how the z values differ from zero as a group. If we treat (as we did in the section on "The role of turnover in betting markets") each row of z-statistics as a nine-dimensional vector, we can again quantify each vector of z-statistics in terms of its Euclidean distance from zero by taking the norm of the vector

norm_i = ( Σ_{j=1}^{9} z_ij² )^{1/2},   i = 1, . . . , 26        (8)

where i indexes the races grouped by breakage and j indexes the favorite position within each group. We regressed the norm of the z-vector on the index of breakage and obtained the following results:

norm_i = 2.437 + 5.272 × Breakage_i + residual_i        (9)
         [3.391]  [2.314]

Breakage, turnover, and betting market efﬁciency

53

where White's (1980) heteroskedasticity-consistent estimated standard errors are reported in brackets below the respective coefficient estimates.24 The R² for the regression was 0.15. The coefficient on breakage is positive and statistically different from zero at the 5 percent level. This is strong evidence that the z-statistics reported in analyses of racetrack betting markets are biased away from zero by ignoring breakage costs.
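As a rough sketch of equations (8) and (9), the norm in (8) can be computed directly from the rows of Table 6.2; the two z-vectors below are the two highest-breakage rows, and a full replication would stack all twenty-six groups before running the regression in (9):

```python
import numpy as np

# z-statistics for the two highest-breakage groups of Table 6.2,
# favorite positions 1-9 (a full replication would use all 26 rows)
z = np.array([
    [-1.37, 0.94, 2.56, -2.02, -0.23, -0.43, -0.17, 0.52, -0.48],  # index 0.3303
    [-1.17, 0.34, 1.72, -0.87, 1.83, 0.36, -0.02, 0.64, 0.67],     # index 0.2745
])
breakage = np.array([0.3303, 0.2745])

# Equation (8): Euclidean norm of each group's nine-dimensional z-vector
norms = np.sqrt((z ** 2).sum(axis=1))

# Equation (9) then regresses the 26 norms on the breakage index,
# e.g. slope, intercept = np.polyfit(breakage, norms, 1), with White
# heteroskedasticity-consistent standard errors computed separately.
```

The resulting norms (about 3.8 and 3.1 for these two rows) are on the same scale as the fitted values implied by the reported intercept and slope in (9).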

The skewness–preference hypothesis

Modeling bettors' utility functions

Two other ways of quantifying and testing betting behavior are based on alternative specifications of a representative bettor's utility function. Modeling bettors' utility is based primarily on the work of Ali (1977), where a representative bettor has utility function u(·). A bet on horse h returns Xh dollars if the horse wins and zero otherwise. The utility function is normalized so that the utility of a winning bet on the longest-odds horse is unity and the utility of any losing bet is zero. In this formulation, the utility of a winning bet on horse h is u(xh) = pH/ph, where pH is the objective win probability on the least-favorite horse and ph is the objective win probability on horse h.

Power utility

Ali (1977) fit a power function to approximate utility, u(xh) = α·xh^β, and estimated it using the logarithmic transformation

ln u(xh) = ln α + β ln xh + µ        (10)

In this model risk-neutrality is implied if the exponent β equals unity, risk preference is indicated if β is greater than unity, and risk aversion is indicated if β is less than unity. Modeling utility as a power function is arbitrary and it implies constant relative risk aversion. As an alternative, Golec and Tamarkin (1998) suggest using a cubic utility model.

Cubic utility

Golec and Tamarkin (1998) suggest approximating the unknown utility function u(xh) by a third-order Taylor series expansion.25 The Taylor series approximation results in the following cubic utility model that can be estimated using standard linear regression:

u(xh) = α + β1·x + β2·x² + β3·x³ + µ        (11)

In this model risk-neutrality is implied when β2 = 0, risk preference is implied when β2 > 0, and risk aversion is implied when β2 < 0. Skewness-neutrality is implied when β3 = 0, and skewness preference and aversion are implied when β3 > 0 or β3 < 0, respectively.
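Both specifications reduce to linear least-squares problems. A minimal sketch with synthetic data (the payouts, α = 0.01 and β = 1.15 are illustrative choices, not estimates from the chapter):

```python
import numpy as np

# Synthetic payout-utility pairs generated from a known power utility;
# these are illustrative numbers, not the Ali (1977) racetrack data.
x = np.array([2.0, 3.5, 5.0, 8.0, 12.0, 20.0, 35.0, 60.0])  # payouts x_h
true_beta = 1.15                                            # > 1: risk preference
u = 0.01 * x ** true_beta                                   # u(x_h) = alpha * x_h^beta

# Equation (10): ln u(x_h) = ln(alpha) + beta * ln(x_h), fit by least squares
beta_hat, ln_alpha_hat = np.polyfit(np.log(x), np.log(u), 1)

# Equation (11): u = alpha + b1*x + b2*x^2 + b3*x^3, also a linear regression
X = np.column_stack([np.ones_like(x), x, x ** 2, x ** 3])
alpha_c, b1, b2, b3 = np.linalg.lstsq(X, u, rcond=None)[0]
# b2 > 0 would indicate risk preference; b3 != 0, skewness non-neutrality
```

With noise-free data the log-log fit recovers β exactly; in practice both equations would be estimated on observed odds-utility pairs with an error term.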


(ME*/SP > 2 and FP/SP > 2, respectively) tend to be over-bet by some margin, hence the relatively high SP losses. Although a proportion of the price movements in these categories represent profitable arbitrage opportunities, a further proportion may represent unsuccessful attempts to follow the 'smart money'. A direct comparison cannot be made in respect of semi-strong and strong-form efficiency; whilst the former is a prime focus in the current study, Crafts was more interested in the latter. It is useful nonetheless to analyse the current data using Crafts' price movement categories, by tip status, as in Table 7.7. Table 7.8 then shows the number of runners moving significantly in the market by tip status as a percentage of all runners in each tip category. Much of the data in these tables has to be treated with caution because of the lack of statistical significance, but it still has value in being highly suggestive, given Crafts' findings. The data in Tables 7.7 and 7.8 show that, in principle, the knowledge of a horse being napped substantially improves the bettor's chances of exploiting high early prices relative to SP, especially in the case of Winsome tips, on which high returns could have been made at mean- and max-early, assuming these odds were available for real wagers, and at SP. This confirms the overall impression gained from Tables 7.3 and 7.5. Table 7.8 confirms that WAOT and WO status is a fair predictor of which horses would move most in the dataset: nearly one-third of WAOT horses contracted significantly, and one-fifth of WO horses. In addition, the knowledge that a horse is not napped at all is useful in that the NOT category horses are not only less likely to contract substantially; those horses that do contract in this category are also associated with negative returns (bar a modest max-early profit in the 1.5 to less than 2 category, amounting to £0.16 per £ bet, generated by only two high-priced winners, that is, outside Crafts' 10–1 division).
Furthermore, the average SP of NOT runners greatly

Table 7.6 Comparison of rates of return in the current and Crafts datasets, by direction and magnitude of price movement

[The table reports, for each price-movement category (1.5 ≤ ME*/SP < 2.0, ME*/SP ≥ 2.0, 1.5 ≤ SP/ME* < 2.0 and SP/ME* ≥ 2.0; each split into All runners and runners at ≤10/1): the numbers won and lost, the percentage of winners, and the average profit per £ bet at mean-early, max-early and starting price (current dataset) and at forecast price and starting price (Crafts dataset). The individual cell values are not reliably recoverable from this scan.]

Notes
* ME refers to the mean-early price, this being more representative of generally available prices.
1 Crafts used trade newspaper betting forecast prices as the baseline for measurement, as opposed to mean- and max-early fixed odds in this study (fixed odds markets were infrequent in 1978, the year from which data was drawn). As in the current study, SP was the destination price. Crafts claimed that the impact of insider information could be distinguished from that of publicly available information, which would be discounted by bookmakers by the time of the opening show. This distinction cannot easily be made in a study of fixed odds, as the two types of information work simultaneously on early morning fixed odds.
2 Crafts measured price movements by the ratio of newspaper forecast price (FP) to SP (odds contracting to SP), and the ratio of SP to FP (odds extending to SP), with classes of magnitude 1.5 to less than 2, and 2+.
3 Returns in Table 7.6 (and Table 7.7) are calculated to a £1 level stake per bet, as this was the staking used by Crafts, and ignore transaction costs.
4 Because of the characteristics of SP betting forecasts at the time the Crafts data refer to, he limited his study to horses with an FP and/or SP of 10–1 or less. For purposes of comparison, the same procedure was adopted in Table 7.6; this gives the added benefit of allowing a relative appraisal of the performance of long and short priced runners.5

Table 7.7 Returns to a level stake of £1 per bet, current dataset, by price movement and tip status

[For each tip status (WAOT, WO, OTO and NOT) the table reports the numbers won and lost, the percentage of winners, and the average profit per £ bet at mean-early, max-early and starting price, across the price-movement categories 1.5 ≤ ME*/SP < 2.0, ME*/SP ≥ 2.0, 1.5 ≤ SP/ME* < 2.0 and SP/ME* ≥ 2.0, each split into All and ≤10/1. The individual cell values are not reliably recoverable from this scan.]

Note
* ME refers to the mean-early price, this being more representative of generally available prices.

The impact of tipster information

77

Table 7.8 Significant price movers as a percentage of total runners in tip categories

Category   Total runners   Contracting (ME*/SP ≥ 1.5)   Extending (SP/ME* ≥ 1.5)
                           Number      %                Number      %
WAOT       174             53          30.46            1           0.57
WO         169             33          19.53            0           0
OTO        1,033           53          5.13             39          3.8
NOT        2,902           112         3.86             123         4.2
All        4,278           251         5.87             163         3.8

Note
* ME refers to the mean-early price, this being more representative of generally available prices.

overestimates their true chance of winning, as evidenced by substantial SP losses on these runners. The data on NOT runners in Tables 7.7 and 7.8 are difficult to square with the association claimed by Crafts between significant price movements, profitable arbitrage opportunities and insider activity; one would expect profitable insider arbitrage to be more apparent in this category, although Crafts does suggest this is particularly a feature of low-profile races. As it is, the most profitable potential arbitrage opportunities are to be found in the categories in which horses are napped, and hence with publicly available information.

Conclusions

Media tips appear to have a significant impact on prices from max/mean-early to SP, and this analysis suggests that knowledge of Winsome selections is a useful predictor of large contractions in price, with the prospect of potential arbitrage opportunities. The analysis of price movements confirms many of the outcomes of the Crafts study, although a question is raised regarding the strength of Crafts' conclusion regarding insider activity, owing to the poor performance in this study of horses that are not napped. There is some evidence of semi-strong inefficiencies in respect of media tips (OTO, WO and WAOT), based on this dataset. The above-average actual SP and nominal mean-early returns are not accounted for by the differential incidence of the favourite–longshot bias on tipped and non-tipped categories. The differences in returns, therefore, may reflect an inefficient use of tips. The rates of return are not significant by conventional statistical tests, but it is suggested that further work is required on the nature of the distribution of betting returns in general. Whether the additional returns advantage at max-early prices constitutes semi-strong inefficiency depends upon the extent of arbitrage opportunities, and warrants further study of the path of prices to SP. Do the abnormal Winsome profits over three years indicate the judgement of sophisticated bettors who assess this as an aberration, and expect reversion to the mean, or is this evidence of inefficient use of information? To answer this question

78

M. A. Smith

further extension of this study should use a larger sample that looks at the total naps record of each tipster individually, which would also reduce any bias caused by concentrating only on the type of race in which Winsome specialises.

Notes
1 Alternative names are used for the newspaper and journalist's column to maintain anonymity.
2 The Sporting Life was the authoritative trade paper at that time. It is important to note that betting forecasts offer estimates of prices – they are not available for actual bets.
3 The nap is the horse considered by the journalist to be the best bet of the day.
4 Crafts uses an alternative measure, FP/SP, which has the disadvantage of being unweighted for the amount of money needed to move the price.
5 This is an appropriate division because the favourite–longshot bias appears to become marked at odds of about 8–1 (Hausch et al., 1981).

References
Alexander, C. (2001), Market Models: A Guide to Financial Data Analysis. Wiley: Chichester.
Ali, M. M. (1979), 'Some evidence on the efficiency of a speculative market', Econometrica, 47, 387–392.
Ball, R. and Brown, P. (1968), 'An empirical evaluation of accounting income numbers', Journal of Accounting Research, Autumn, 159–178.
Conrad, J. and Kaul, G. (1993), 'The returns to long term winners and losers: bid-ask biases or biases in computed returns', Journal of Finance, 48(3), 39–63.
Crafts, N. F. R. (1985), 'Some evidence of insider knowledge in horse race betting in Britain', Economica, 52, 295–304.
Dissanaike, G. (1997), 'Do stock market investors overreact?', Journal of Business Finance and Accounting, 24(1), 27–49.
Fama, E. F. (1970), 'Efficient capital markets: a review of theory and empirical work', Journal of Finance, 25(2), 383–417.
Figlewski, S. (1979), 'Subjective information and market efficiency in a betting market', Journal of Political Economy, 87(1), 75–88.
Hausch, D. B., Ziemba, W. T. and Rubinstein, M. (1981), 'Efficiency of the market for racetrack betting', Management Science, 27(12), 1435–1452.
Kraus, A. and Stoll, H. (1972), 'Price impacts of block trading on the New York Stock Exchange', Journal of Finance, 27(2), 210–219.
Patell, J. M. and Wolfson, M. A. (1984), 'The intraday speed of adjustment of stock prices to earnings and dividend announcements', Journal of Financial Economics, 13, 223–252.
Shin, H. S. (1991), 'Optimal betting odds against insider traders', Economic Journal, 101, 1179–1185.
Snyder, W. W. (1978), 'Horse racing: testing the efficient markets model', The Journal of Finance, 33(4), 1109–1118.
Vaughan Williams, L. and Paton, D. (1997), 'Why is there a favourite–longshot bias in British racetrack betting markets?', Economic Journal, 107, 150–158.


Vaughan Williams, L. (1999), 'Information efficiency in betting markets: a survey', Bulletin of Economic Research, 53, 1–30.
Vaughan Williams, L. (2000), 'Can forecasters forecast successfully? Evidence from UK betting markets', Journal of Forecasting, 19, 505–513.
Zarowin, P. (1990), 'Size, seasonality and stock market overreaction', Journal of Financial and Quantitative Analysis, 25(1), 113–125.

8

On the marginal impact of information and arbitrage Adi Schnytzer, Yuval Shilony and Richard Thorne

Introduction

It is self-evident that information is valuable, even indispensable, for optimal decision making when investing in financial markets. A question which naturally arises is: at what point, if any, does the cost of additional information gathering exceed the benefits? The question is complicated by the fact that information is not a homogeneous commodity. This distinguishes our question from that posed by Stigler (1961) on the diminishing marginal returns to (homogeneous) searching for the lowest price of a commodity. Under certain conditions, Radner and Stiglitz (1984) showed that, for an expected utility maximisation problem under constraint, the marginal value of information at the point of no information is non-positive. For a similar result in a principal–agent setting see Singh (1985). These results suggest that information has a rising marginal value when information is first accumulated. Indeed, it is easy to find examples of particular scenarios where the marginal value of information1 is negative (see below) or not diminishing. The problem arises from the heterogeneity of information in financial markets and is complicated by the existence of both public and private information. On the other hand, it may be that, if investors gather information about a large number of stocks, the proposition of positive but diminishing returns to information at the successive market equilibrium points, which develop and change over time, holds true on average. The purpose of this chapter is to present a formal representation of the information accumulation process and to use this representation to formulate the testable hypothesis that, in a financial market, the marginal value of information is, on average, positive and diminishing. Using data from a horse-betting market, it will be shown that this hypothesis cannot be rejected, in spite of the fact that it does not hold for particular horses or races.
We show that the ﬂow of inside information to the market, when its gainful exploitation is permitted, positively impacts upon the market by eradicating remaining biases in prices and that this impact is diminishing. The choice of a horse-betting market is motivated by a number of factors. First, since the betting on each race represents a separate market, it is possible to obtain data on many separate markets. Second, the institutional framework within which betting takes place in our data facilitates the transmission of both public and private information to the market. Third, the acquisition of transmitted information is

Marginal impact of information and arbitrage

81

virtually costless. The costless availability of both public and (second-hand) inside information permits the focus of the chapter to be placed squarely upon the value of the information. Finally, in the context of horse betting, the marginal value of information is readily defined intuitively: additional information has value if it permits the generation of a more accurate estimate of the winning probabilities of the horses in the race than would be possible without that information. The question, then, is how may we use horse betting data to test the behaviour of the marginal value of information, on average? Pari-mutuel betting markets in the United States have received by far the most attention from researchers. A number of papers2 have shown that these markets are beset by what is known as the favourite–longshot bias. That is, bettors on the pari-mutuel systematically under-bet favourites and over-bet longshots relative to their winning frequencies. On the other hand, Busche and Hall (1988) have shown that the pari-mutuel market in Hong Kong is characterised by a reverse bias; that is, favourites are over-backed and longshots under-backed relative to their winning frequencies. We would argue that changes in the extent of any such bias in the market provide us with the appropriate measure. Using data on tote betting on harness races in Victoria, Australia at various times before the race, we show that in a betting market in which the pari-mutuel operates alongside bookmakers, betting by insiders with the latter provides valuable information to outsiders regarding the true winning probabilities of the horses. Outsiders use this information to update their expectations and the consequent change in their betting behaviour with the tote leads to an efficient, that is, unbiased, final equilibrium. In a second tote market considered, bettors bet on races taking place in a different state, where a different tote operates.
This latter tote is not available to the majority of bettors in Victoria, although prospective pay-out updates are available.3 In this case, local bettors receive information on the distant bookmaking market via a local on-course bookmaker who bets on out of state races. We show that this – less efﬁcient, because not all price changes are transmitted – information transmission mechanism leads to a signiﬁcant reduction in the extent of bias over time, but does not permit its complete removal. We use this comparison between the two markets to show that both markets are characterised by diminishing marginal value of information. We proceed as follows: The formal representation is provided in the section on ‘A formal representation’. Empirical results are presented and discussed in the section on ‘Empirical results’ while some conclusions are offered in the last section.

A formal representation There are three types of economic agent at the track. The betting public is composed of two disjoint segments – outsiders and insiders, while bookmakers add a third disjoint segment: (1) Outsiders, who have access only to public information of past records and current conditions. These are mainly pleasure bettors who bet relatively small

82

A. Schnytzer, Y. Shilony and R. Thorne

amounts with either the tote or the bookies. These bettors, when choosing horses to back, have a trade-off between favourites, that is, horses with a high probability of winning but small return, and longshots with a low probability of winning but a high return. The bettors' choices of horse on the tote affect the returns. The equilibrium in bettors' decisions has been analysed by Quandt (1986) under the assumption that the objective probabilities of winning, p, are known. He finds that if bettors are risk-loving, a favourite–longshot bias must be present. A consequence, easily proved by the same means, is that risk-aversion on the part of bettors implies the opposite bias. The argument follows even if, as we assume, bettors do not know p and employ instead expectations, e = Ep. In other words, on average, over many races, we should observe the implied bias. A bias in any direction may also be present owing to faulty probabilistic reasoning on the part of the public, such as that considered by Kahneman and Tversky (1979, 1984) or Henery (1985) or, for that matter, for any other reason.4 The existence of bias in many countries is widely documented; see Hausch et al. (1994). (2) Insiders, who are usually associated with owners, trainers, drivers and other members of the trade, and have access to useful private information. An insider who wishes to gainfully employ his superior information will seek a more attractive outlet for using the information than the tote, namely a bookmaking market, where he can secure a guaranteed return. The reason is that the bettor's mere backing of a horse in the tote reduces its return and all the more so if he has followers who try to imitate his actions. On the tote, the scope for heavy betting (plunging) is, therefore, limited and the final return subject to uncertainty.
We assume here that access to a market of bookmakers is available to the insider and that most plunging is carried out there.5 Of course, insiders may also bet with the tote, but if they do so it will be late in the betting, when uncertainty about the price is minimal. (3) Bookmakers, who sell bets at ﬁxed odds. In terms of the information at their disposal, bookmakers are assumed to be in a situation between that of outsiders and that of insiders, knowing more than the former and less than the latter. Thus, they will, on occasion, offer odds about a horse which, according to insiders, represent an unfair bet in the latter’s favour. It is under these conditions that a plunge occurs. Thus, the discrepancy between expected returns and prices, which gives rise to arbitrage opportunities, may derive from two sources. One is the bias discussed above, which is observed even by outsiders with public information. Could not a shrewd, risk-neutral bettor design a way to arbitrage the bias and make a proﬁt on average? In practice not, because the bias is not usually large enough relative to the tax on tote betting to warrant such activity.6 More important is another source for arbitrage, namely superior information held by insiders. An insider who observes a large enough gap between what the market ‘knows’ and his own more intimate knowledge may seize this opportunity and back an under-estimated horse. As noted above, this arbitrage activity will take place mostly in the bookmakers’ market, which is also composed of outsiders and insiders. If the consequent plunge is visible to outsiders in the tote market, the observers may learn something new about the horse from the plunge and follow suit.


Since the plunge has been carried out at fixed odds, insiders' returns are unaffected by any such following. We now turn to a formalisation of this information-driven arbitrage. Let Ω be our relevant sample space. Each element ω ∈ Ω is an elementary composite event, which, if known to have occurred, conveys the fullest possible information about the coming race. A typical ω includes a full description of the horses' and drivers' conditions, the owners' interests and stakes in winning or losing, track and weather conditions etc. Full information does not mean, of course, knowing the winner. Rather, define the Interpretation Function, I : Ω → Δ, which assigns to each elementary composite event a vector of random variables, namely the winning probability vector p = (p1, . . . , pn) for the n horses in the race, in the n-dimensional simplex Δ = {(p1, . . . , pn) : pi ≥ 0, i = 1, . . . , n, and p1 + · · · + pn = 1}. Of course, different people may have different interpretative faculties and therefore arrive at different conclusions regarding the winning probabilities. However, because we wish to concentrate on the informational element, we shall assume that all people are equally and perfectly astute in understanding racing events and have the same interpretation function, which gives the objective probabilities of winning. Thus, the most highly informed person can know at most the realisation of a particular elementary composite event, ω, which amounts to knowing p = I(ω). On Ω, there is an a priori probability measure µ which is a Lebesgue measure over Borel subsets of Ω. This a priori probability is common knowledge and is derived statistically from past frequencies and general knowledge by all keen observers. The difference between different bettors lies in the information at their disposal. Our formal description of information follows that developed in recent years in game theory; for example, see Osborne and Rubinstein (1994, ch. 5).
A bettor's information may be described as a partition R of Ω into a set of disjoint subsets of itself; that is, R = (R1, . . . , Rm) such that

Ri ∩ Rj = ∅ for i ≠ j,  and  R1 ∪ · · · ∪ Rm = Ω

The idea is that the bettor will know only which of the m possible events, R1, R2, . . . , Rm took place; that is, to which Ri the realised ω belongs. The more refined the partition, that is, the greater the number of (thus smaller) sets it contains, the more revealing and useful is her information. An outsider with no access to useful information beyond past records and readily ascertainable current weather and track conditions, that is, beyond µ, has the degenerate partition R = (Ω) and can do no better than estimate the winning probabilities by E(p|Ω) = ∫_Ω I(ω) dµ. A better informed bettor, that is, one with a more refined partition, R, knows which event Ri has occurred but not, of course, which ω ∈ Ri. To appraise the winning chances of the horses, she uses the information available to her to update the a priori distribution employing Bayes' rule to get

E(p|Ri) = ∫_{Ri} I(ω) dµ / µ(Ri)
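A discrete sketch of this updating rule (the six-point sample space, uniform µ and three-horse interpretation function below are illustrative, not from the chapter):

```python
import numpy as np

# Omega = {0,...,5}: six elementary composite events with uniform prior mu.
# I[omega] is the win-probability vector for a three-horse race.
I = np.array([
    [0.50, 0.30, 0.20],
    [0.45, 0.35, 0.20],
    [0.40, 0.30, 0.30],
    [0.20, 0.50, 0.30],
    [0.15, 0.55, 0.30],
    [0.10, 0.60, 0.30],
])
mu = np.full(6, 1 / 6)

# Outsider: degenerate partition (Omega,), so e = E(p | Omega)
e_outsider = (mu[:, None] * I).sum(axis=0)

# Better informed bettor: knows the realised omega lies in R1 = {0, 1, 2};
# Bayes' rule gives E(p | R1) = integral over R1 of I dmu, divided by mu(R1)
cell = [0, 1, 2]
e_insider = (mu[cell, None] * I[cell]).sum(axis=0) / mu[cell].sum()
```

Refining the partition shifts the estimate for horse 1 upward here, exactly the mechanism by which an observed plunge updates outsiders' expectations in the text.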


Bookmakers have a partition Q, such that for every Qj in Q there is, for each insider with partition R, a set Ri in R such that Ri ⊂ Qj ⊂ Ω. In other words, their partition is not so refined as that of insiders but is more refined than that of outsiders. They set their opening prices greater than their best estimate of the probabilities:7

E(p|Qj) = ∫_{Qj} I(ω) dµ / µ(Qj)

As the betting begins, outsider bettors make decisions based on their expected probability for each horse i to win, ei = E(pi|Ω). They will bet with both bookies and the tote, even though the former will never offer a price lower than ei for horse i. The price difference may be viewed as a premium for providing the market with initial odds, which is, in some degree, offset by the take-out of the tote. An insider, who usually specialises in a particular horse, may have a different estimate for horse j, say, E(pj|R) > ej, which is greater than the bookies' opening price, and therefore plunges the horse. If the plunge is visible to outsiders it refines their partition and reveals to them that ω is in a subset of Ω to which I assigns higher probabilities for j to win than ej, and thereby lowers the probabilities of other horses. Their estimation of horse j to win is updated upwards. Since, before the updating, outsiders were in equilibrium, which means indifference or absence of any incentive to switch horses, following the updating they have enhanced interest to back horse j, regardless of the direction of the initial bias, and doing so on the tote will lower its return. The typical outsider bets a small amount and can safely assume the odds given are not affected by his actions. Outsiders may also back the horse with the bookies, but they know that, since bookies have a more refined partition, they will have revised their price after the plunge to a point at which it is now greater than the expected winning probability.
Thus, again, outsiders will bet with bookies only if they are prepared to pay a premium for fixed odds. The insiders do not all act together. Some may choose to act later and some may come across late information, and so the process goes on. Suppose now that there is a plunge on horse h with the bookmakers. Alert observers get their partitions refined, directing their attention to subsets of the event they have known to occur where horse h is more highly appraised. That is, if a certain bettor knows that ω ∈ Rk, where Rk is one of his partition's sets, the bettor learns from the new plunge that ω ∈ A ⊂ Rk and would now like to bet more on horse h with the tote if E(ph|A) > E(ph|Rk), and the expected probabilities of other horses commensurably decline. The plunges may continue until racing time. In the process, information partitions get more and more refined and the expected probabilities get closer and closer to the true probabilities, p = I(ω). In summary, the prediction from our model is that the more visible is the incidence of insider trading via plunges, the more outsiders tend to imitate insiders, thereby driving the subjective probabilities, ei, towards the objective probabilities, pi. Note, we have assumed that all outsiders have access to plunges, whereas in


the Victorian market, there are bettors on- and off-course. However, all off-course tote agencies provide regular updates of provisional pay-outs, so that in practice outsiders on-course update their preferences, bet on the tote, and thus signal to those off-course the relevant information. Also, one can predict from our approach that in the absence of a bookmakers' market, insiders who have no choice but to bet with the tote will bet lesser amounts, will thereby transmit less information to others, and any extant bias will persist. Letting Ω and I have more structure, one can build and test more elaborate hypotheses. For example, is the marginal value of information positive and is it increasing or decreasing? Suppose bettor i has three levels of information at three points in time; formally, ω ∈ Qi ⊂ Ri ⊂ Si. When least informed, her ignorance can be measured by µ(Si) since this is the size of the set among whose points she is unable to distinguish. Thus, her information is the complement µ(Ω\Si) = 1 − µ(Si). The value of information at Si may be defined as V(Si) = 1 − |E(p|Si) − I(w)|, where w is the true (unknown) state and the absolute value is the error committed by relying on Si to estimate I(w). Note that gaining more information and moving to Ri ⊂ Si could, in principle, be detrimental, that is, V(Ri) < V(Si). This could happen if, for example, w is close to the boundary of Ri and therefore less representative of it than of Si, so that I(w) < E(p|Si) < E(p|Ri). An example is provided below. Suppose now that the marginal value of information is positive and that Ω and I and the three sets are such that, for the given true point, ω:

I (ω) dµ −

Ri

I (ω) dµ

µ(Ri ) − µ(Qi )

Va = −ϕa for ϕ(a, b) > I(w0), and Va = ϕa for ϕ(a, b) < I(w0)

where

−ϕa = [1/(b − a)²] [ (b − a) I(a) − ∫_a^b I(w) dw ]

For a rising I over [a, b], extra information helps if the estimate overshoots the true value and distorts if the estimate undershoots it. The same result follows for lowering b. Of course, globally information is beneficial as it drives the estimate toward the true value, that is, ϕ(a, b) → I(w0) as a, b → w0. Now we turn to the marginal value of information, where the same ambiguity holds. Because information is empirically useful, it stands to reason that we concentrate more on this issue.

Claim 2
1 The marginal value of information may be increasing or decreasing, depending on the sign of the slope of I, on the sign of the estimating error and on the sign of the updating information, that is, whether a is increased or b is decreased.
2 For a rising I, the marginal value of information is decreasing everywhere, whenever information is beneficial, if

(b − a) I(b) − ½(b − a)² I′(b) < ∫_a^b I(w) dw < (b − a) I(a) + ½(b − a)² I′(a)

Proof
Differentiating V again we get

Vaa = −ϕaa for ϕ(a, b) > I(w0), and Vaa = ϕaa for ϕ(a, b) < I(w0)
Vbb = −ϕbb for ϕ(a, b) > I(w0), and Vbb = ϕbb for ϕ(a, b) < I(w0)

which shows part 1 of the claim. Part 2 requires Vaa < 0 and Vbb > 0 where information helps for a rising I, that is, Va > 0 when ϕ < I, and Vb < 0 when ϕ > I. By working out the second derivatives one finds that Vaa < 0 and Vbb > 0 together imply the inequalities of part 2. Note that a necessary, but not sufficient, condition for these inequalities is I″(a) < 0, I″(b) > 0.

There are other possibilities and variations. For example, if I is rising (which would be the case in our example of health affecting the probability of winning) and concave throughout, then the marginal value of information is diminishing for positive information about the horse, or share, and increasing for negative information. To wit, horror tips accumulate in strength while good ones lose weight. The exact opposite is true if I is falling and concave throughout. Note that a plunge reveals positive information for a horse, while negative information does not have as direct and ready a way to make its presence felt in the market.
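The possibility that extra information distorts can be checked numerically. The following Python sketch (the rising function I and the interval endpoints are our own illustrative choices, not from the chapter) evaluates V(S) = 1 − |E(p|S) − I(w0)| for nested information sets and exhibits a case with I(w0) < E(p|S) < E(p|R) in which refinement lowers V:

```python
import numpy as np

def I(w):
    # Illustrative rising win-probability function (our assumption,
    # standing in for e.g. the horse's health)
    return w ** 2

def phi(a, b, n=100_001):
    # E(p | [a, b]): the bettor's estimate, the average of I over the set
    w = np.linspace(a, b, n)
    return I(w).mean()

def V(a, b, w0):
    # Value of the information set [a, b]: one minus the estimation error
    return 1.0 - abs(phi(a, b) - I(w0))

# True state w0 at the left boundary of the refined set R = [0.5, 0.6],
# so w0 is less representative of R than of the coarser S = [0.45, 0.6]:
# I(w0) < E(p|S) < E(p|R), and refining the partition lowers V.
w0 = 0.5
print(V(0.45, 0.6, w0))   # V(S), the coarser set
print(V(0.50, 0.6, w0))   # V(R) < V(S): more information, worse estimate
```

Here E(p|S) ≈ 0.2775 and E(p|R) ≈ 0.3033 against I(w0) = 0.25, so the refined estimate overshoots further.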


Empirical results

In summary, the general prediction from our model is that the more visible is the incidence of insider trading via plunges, the more outsiders tend to imitate insiders, thereby driving the subjective probabilities, ei, towards the objective probabilities, pi. However, the extent to which the process is completed depends critically upon the specific institutional environment. Further, the method by which we can determine whether there is diminishing marginal value of information also depends upon the institutional environment. Thus, a brief description of the two markets for which we have data is appropriate.

In the Victorian markets, there are bettors both on-course and off-course. The tote has a monopoly in off-course betting and competes with bookmakers at the track. There is, however, only one tote pool and pay-outs are identical across betting locations. All off-course betting locations provide regular updates of provisional pay-outs and live telecasts of the races. However, they provide no direct information on the odds offered by bookmakers. Thus, bettors off-course obtain plunging information second-hand, via the tote updates which reflect changes in the pattern of tote betting on-course. Since outsiders on-course are able to see most bookmakers' odds, they will, in practice, collect far more information than that shown by large plunges. In consequence, we would expect the final tote equilibrium to be unbiased.

The second market studied here is the inter-state market. In this market, Victorians bet on the Victorian tote on races which are run outside of Victoria. Thus, the bettors do not see bookmakers on-course, and neither insiders nor outsiders who are at the track at which the race is run can bet on the Victorian tote. There is, however, a transmission mechanism for bookmakers' price information from the track.
Virtually without exception, when a race meeting is held in either New South Wales, Queensland or South Australia – the states on whose races the Victorian tote typically offers bets – there will be a parallel meeting of some kind somewhere in Victoria. Since bookmakers are permitted to bet on races taking place at other tracks than the one at which they operate, there will always be at least one, if not more, bookmaker betting on the inter-state race. Before he sets his opening odds on the race, the bookie receives a list of the average opening prices of all horses in the race. This list is transmitted by phone and arrives via a loudspeaker which is set up in his vicinity. Thus, all interested bettors may hear the opening prices at the distant track. As betting there proceeds, there are further transmissions, now of odds changes. Thus, with one important exception, Victorian bettors on-course are provided with the relevant information regarding plunges. The exception is with respect to very late plunges. When such occur at the very end of the betting, there is insufﬁcient time for the information to be transmitted. Further, since only average market odds are reported, some important information may be missing. Finally, the information arrives in discrete bundles at irregular intervals, which implies that its transmission to projected tote payouts may be more discrete than the regular ﬂow of information provided in the Victorian market. In short, while any bias extant in the inter-state market should

88

A. Schnytzer, Y. Shilony and R. Thorne

also be diminished in extent over time, the extent of information flow may not be sufficiently complete to permit its eradication.

We are now in a position to outline an empirical test for the presence of diminishing marginal value of information. The institutional set-up of both markets should permit bettors to obtain an increasingly accurate estimate of the horses' true winning probabilities as race time approaches. One way to measure whether this is, indeed, the case on average is to measure the extent and manner in which any favourite–longshot bias diminishes over time in these markets. Diminishing marginal value of information could be inferred from a diminishing extent of eradication of the favourite–longshot bias over time, provided that information flows in these markets were more or less uniform over time. However, if, for example, more inside information is revealed at the start of betting, and the extent of revelation diminishes over time, then we would expect the extent of eradication of a bias also to diminish over time without any implication of diminishing marginal value of information.

It should be noted that the choice of harness racing for this study is not fortuitous. Unlike jockeys, who are not permitted to bet, drivers may bet on their own horses without legal limit. Consequently, our choice eliminates any potential principal–agent problem which may exist between jockeys and the horses' owners and/or trainers. For a detailed description of the data and the manner in which they were gathered, see the Appendix.

We use the following definitions: bhτ = the amount bet on the tote on horse h at time τ before the race, h = 1, ..., n, where n is the number of horses in the race; Bτ = the total amount bet on the race at time τ; and t = the tote take-out rate, not including breakage (14.25 per cent in the case of our data). Breakage, in the case of the Victorian tote, arises since pay-outs are rounded down to the nearest 10 cents.
Since rounding causes a larger percentage loss for small odds than for large odds, we follow Griffith (1949) and assume continuous pay-outs rather than pay-outs falling into 10-cent intervals. The easiest way to accomplish this is to assume that, for a sufficiently large number of observations, the mean pay-out before rounding will fall half-way between the actual pay-out and the next pay-out up. In practice, this amounts to adding 5 cents to the projected pay-outs at time τ, Phτ. Let πhτ = Phτ + 0.05. Then the adjusted pay-out is given by:

πhτ = Bτ(1 − t) / bhτ

and the bettors' subjective probability at time τ that horse h will win the race, phτ, is given by:

phτ = bhτ / Bτ = (1 − t) / πhτ
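The pay-out and probability formulas, together with the breakage adjustment, can be traced with a hypothetical pool (Python; the amounts bet are invented, while the 14.25 per cent take-out and the 10-cent rounding are as described in the text):

```python
import math

# Hypothetical pool: amounts bet on four horses at time tau (b_h_tau)
b = [500.0, 300.0, 150.0, 50.0]
B = sum(b)            # B_tau: total amount bet on the race
t = 0.1425            # take-out rate, not including breakage

# Continuous pay-outs implied by the pool: B_tau (1 - t) / b_h_tau
pi_raw = [B * (1 - t) / bh for bh in b]

# Posted pay-outs are rounded down to the nearest 10 cents (breakage) ...
P = [math.floor(x * 10) / 10 for x in pi_raw]

# ... so, following the Griffith (1949) adjustment, add 5 cents back
pi_adj = [x + 0.05 for x in P]

# Subjective win probabilities: p_h_tau = (1 - t) / pi_h_tau
p = [(1 - t) / x for x in pi_adj]

# Breakage means these need not sum to exactly one, so normalise
p_norm = [x / sum(p) for x in p]
print(pi_adj, [round(x, 4) for x in p_norm])
```

With these numbers the raw probabilities sum to about 0.99 before normalisation, which is exactly the breakage effect described below.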

Owing to breakage, the probabilities thus calculated did not sum to exactly one over each race and were thus normalised. All statistical calculations were performed on a data set screened to remove all races in which the data were not


complete or in which there were late scratchings, and in which any horse showed a pay-out of only $1 starting from 30 minutes before the posted race time until closing time.8 This reduced the number of observations for the data set to 2,054 races with 19,955 horses. The horses were sorted by the closing pay-outs and the sample then divided into 30 groups of as nearly as possible equal size. In addition to the pay-outs at the close of betting, data were available in viable quantities for the projected pay-outs 1, 2, 3, 5, 7, 9, 10 and 15 minutes before the actual start of the race, and 30 minutes before the official start time of the race. The latter case was chosen to obtain a set of pay-outs reflecting bettor behaviour before bookmakers had begun to offer fixed odds on-course. Sorting by prospective pay-outs at one time point only (the close) means that the pay-outs for the other time periods in each group reflect changing bettor evaluation of the same horses over time. The same procedure was adopted for races being run in Victoria as for those being run outside the state.

For each group at each time period, mean winning frequencies, w̄iτ, and mean subjective winning probabilities, p̄iτ, were calculated and the former regressed on the latter. A favourite–longshot bias is indicated by a negative intercept in the estimated regression. Figure 8.1 shows the intercepts for both markets over time. As the diagram makes clear, in the betting prior to the opening of the bookmakers' market, there is a significant bias in both markets. Much of this bias is eradicated as soon as tote bettors learn bookmakers' opening prices and/or the nature of early plunges.

[Figure 8.1 The dynamics of the favourite–longshot bias: the constant in the regression of the mean win frequency against the mean subjective probability, within Victoria and outside Victoria, from 30 minutes before the race to the close; the start of on-track betting and the no-bias level (0.00) are marked.]

From that point on, there is steady convergence to efficiency, a state achieved in the Victorian market by around 5 minutes before the start of the race. On the basis of this result, it may be concluded that most of the valuable information has found its way into the market by this time. In the second market, the trend is similar, although the bias is always more pronounced and has not been entirely removed even at the close of betting. In the case of the Victorian market, not only is the regression intercept highly insignificant (t-statistic = −0.411), but the point estimate is very low at −0.002. Table 8.1 shows the regression results consolidated as one regression for each market, with dummy variables for the intercepts and slopes of the different time periods. These results indicate the more discrete nature of the inter-state market, with all variables significant except the dummies for 1 minute before the close. The latter lends support to the hypothesis that, in this market, any important late changes in the bookmakers' market inter-state are not transmitted. On the other hand, in the Victorian market, there is a smooth, significant change in the regression line until around the 5-minute mark, at which point the market has appeared to reach

Table 8.1 Regression of mean win frequency against mean subjective probability

Variable       Coeff. (Victoria)  t-stat    P>t     Coeff. (other markets)  t-stat    P>t
Slope           1.019355          32.164    0.000    1.107853               41.001    0.000
Slope_1         0.0245735          0.542    0.589    0.0650679               1.654    0.099
Slope_2         0.0427793          0.934    0.351    0.117948                2.928    0.004
Slope_3         0.0604486          1.308    0.192    0.1649855               4.011    0.000
Slope_5         0.0785061          1.683    0.094    0.2534352               5.922    0.000
Slope_7         0.0939413          1.997    0.047    0.312916                7.121    0.000
Slope_9         0.1007083          2.133    0.034    0.3560262               7.947    0.000
Slope_10        0.1018986          2.156    0.032    0.3796904               8.386    0.000
Slope_15        0.1093001          2.301    0.022    0.4467271               9.581    0.000
Slope_30        0.3021055          5.791    0.000    0.8822636              15.739    0.000
Constant       −0.0020021         −0.411    0.681   −0.0111775              −2.708    0.007
Dummy_1        −0.0025288         −0.365    0.715   −0.0066958              −1.132    0.259
Dummy_2        −0.0044023         −0.633    0.527   −0.0121376              −2.029    0.043
Dummy_3        −0.0062212         −0.890    0.374   −0.016978               −2.809    0.005
Dummy_5        −0.0080795         −1.151    0.251   −0.02608                −4.230    0.000
Dummy_7        −0.0096684         −1.372    0.171   −0.0322008              −5.152    0.000
Dummy_9        −0.0103649         −1.468    0.143   −0.0366371              −5.802    0.000
Dummy_10       −0.0104871         −1.485    0.139   −0.0390722              −6.153    0.000
Dummy_15       −0.0112486         −1.589    0.113   −0.0459706              −7.124    0.000
Dummy_30       −0.0311057         −4.193    0.000   −0.0907896             −12.604    0.000
Adjusted R²     0.9717                               0.9825
No. of obs.     300                                  300

Note: Slope_x is a dummy variable for the slope of the regression at x minutes before the actual start of the race and Dummy_x is a dummy variable for the constant x minutes before the actual start of the race (except x = 30, which is 30 minutes before the official race start time).


something very close to its final equilibrium. The fact that the regression constant is generally greater in the inter-state market may indicate that Victorian bettors are, on average, less knowledgeable about inter-state markets than their own. This would lead to more uninformed betting, a known cause of a favourite–longshot bias.9 Indeed, although the bias is not removed in this market, it appears that the presence of bookmakers, as conveyors of information, is more important in this market than in the domestic market.

Two striking results are neatly captured in Table 8.1. First, for both markets, the size of the regression constant is monotonically rising, while the slopes are monotonically falling. Second, at any given point of time, the constant of the within-state regression line is consistently greater than that of the out-of-state line, while the slope is consistently lower. This change in each market and the comparison between them is highlighted by the 'lines' in Figure 8.1: (A) both lines manifest monotonically diminishing slopes and (B) the slope of the within-state line is consistently lower than that of the out-of-state line.

It could be argued that (A) is due to concentration of the flow of useful information in the early stages of the betting. However, this explanation is contradicted by (B), since the out-of-state bettors always enjoy less information, so they could not get more information to account for their larger slope. Further, we may check directly the hypothesis that more useful information arrives during the early stages of betting. In our representation, useful information is provided by plunges. Accordingly, define del_x_y as the change in a horse's subjective winning probability between y minutes before the race and x minutes before the race, if positive, zero otherwise. Table 8.2 shows the mean and standard deviation, per minute, for this variable in our data set.
Table 8.2 Basic statistics on the flow of useful information (per minute during the relevant time interval; 7,176 observations in Victoria, 12,779 in other markets)

Variable   Mean (Victoria)  Std deviation  Mean (other markets)  Std deviation
del1530    0.0007811        0.0021235      0.0012104             0.0029187
del1015    0.0008857        0.0023572      0.0013898             0.0040426
del910     0.0022199        0.0063435      0.0031131             0.0099793
del79      0.0015625        0.0042454      0.0022982             0.0070794
del57      0.0018640        0.0047649      0.0026814             0.0080292
del35      0.0020891        0.0057342      0.0032570             0.0089488
del23      0.0031813        0.0099417      0.0047984             0.0134331
del12      0.0035377        0.0089089      0.0058396             0.0159559
delc1      0.0040938        0.011325       0.006417              0.0180889

On the basis of the values shown in Table 8.2, there is evidence in support of an increase in the flow of useful information over time in the Victorian market, and an initial decrease followed by an eventual increase in the out-of-state market. We are, thus, unable to reject the hypothesis that there is, on average, diminishing marginal value of information in these markets. That is, even if in equation (1) the denominators were equal, which over time would imply


a constant ﬂow of information, the inequality is due to the numerators. The effect of a new piece of information is larger when information is scanty.
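The construction of del_x_y can be sketched as follows (Python; the probabilities are invented, whereas in the chapter they would come from normalised projected pay-outs):

```python
# Subjective win probabilities for one horse, keyed by minutes before the off
# (illustrative numbers only)
probs = {15: 0.10, 10: 0.12, 9: 0.11, 7: 0.15}

def del_xy(probs, x, y):
    # Change in the horse's subjective winning probability between
    # y minutes and x minutes before the race, if positive; zero otherwise
    return max(probs[x] - probs[y], 0.0)

def del_xy_per_minute(probs, x, y):
    # Per-minute flow of 'useful information', as reported in Table 8.2
    return del_xy(probs, x, y) / (y - x)

print(del_xy(probs, 10, 15))           # probability rose: a plunge
print(del_xy(probs, 9, 10))            # probability fell: counts as zero
print(del_xy_per_minute(probs, 7, 9))
```

Only positive moves count, which is consistent with the earlier observation that a plunge reveals positive information while negative information has no equally direct channel.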

Conclusions

In this work we addressed two related issues:

1 The eradication over time of an inefficiency bias in a market for contingent claims due to transactions made by insiders and the information flowing from them, and
2 The marginal impact of inside information on the market.

A model was built to describe information and its updating and accumulation over time through market revelations. The major prediction is that institutional environments that afford profitable market use of inside information and facilitate its transmission will sustain less of any market bias. Data on horse betting markets were utilised to test that hypothesis and were found supportive. It was also found that the flow of information over time is not skewed toward the beginning of the trading period and that, therefore, the marginal impact of information is, on average, declining.

Appendix

The data set was compiled from pre- and post-harness-race postings on the Victoria region TABCORP website (www.tabcorp.com.au) and comprises 3,430 races with 33,233 horses from June 1997 to the end of February 1998. Race data were obtained from the remote site using a command-driven http browser (LYNX) and PERL operating on the university's UNIX network. Starting between 4 and 7 hours before the start of the day's races, a list of available races was downloaded and start times for harness races extracted. Starting from 70 minutes prior to the posted start time, each race's information was then saved from the remote site in Victoria onto the local host at Bar Ilan University. Each file was updated periodically, so that any new information from 2 hours before the race to the final post-race results could usually be obtained.

Due to the dynamic nature of the data acquisition, disruptions in internet access caused by overload of either the local (Bar Ilan) or remote (Victoria) site resulted in loss of information to the data set. This loss was without any discernible pattern and therefore should have no systematic influence on the analysis. During the tabulation of the data from individual races downloaded into the final data set, updates were expressed according to their occurrence relative to the actual rather than the posted start time for each race, for posting times less than 30 minutes before the listed start time. This adjustment was necessary since in 20.4 per cent of the races the actual start time of the race was up to 10 minutes later than the listed start time displayed on the Victoria TABCORP web page. We assume that bettors on-course adjust their betting behaviour to delays in the start of a race.
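The re-timing of updates against the actual rather than the posted start can be sketched like this (Python; the function name and the minute-based time representation are ours):

```python
def minutes_before_start(post_time, listed_start, actual_start):
    # Express an update's posting time as minutes before the race.
    # Postings within 30 minutes of the listed start are measured against
    # the actual start (races went off up to 10 minutes late in 20.4 per
    # cent of cases); earlier postings stay relative to the listed start.
    # All arguments are in minutes from a common origin.
    if listed_start - post_time < 30:
        return actual_start - post_time
    return listed_start - post_time

# A race listed for t = 100 that actually started at t = 104:
print(minutes_before_start(95, 100, 104))   # late update -> 9 minutes before the off
print(minutes_before_start(60, 100, 104))   # early update -> 40 minutes
```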


Notes
1 This notion is formalised in the next section of the chapter.
2 See, for example, Snyder (1978). The one exception to the existence of a favourite–longshot bias known to us is provided by Swidler and Shaw (1995).
3 For most totes in Australia, contingent prices are available via the internet and even betting is sometimes possible. Victorian tote agencies also provide an updating service.
4 See Thaler and Ziemba (1988) for a discussion of different explanations for the favourite–longshot bias.
5 See Schnytzer and Shilony (1995) for evidence of insider trading in this market.
6 We know of no study which has found a bias of sufficient size to provide after-tax arbitrage opportunities.
7 For a detailed analysis of price setting by bookmakers, see Schnytzer and Shilony (1998).
8 This final adjustment is necessary since the above equations hold true only in cases where the amount bet on a horse is not so great that the tote could not return a mandatory minimum pay-out of $1.10 for winners and still obtain the take-out rate on the race. Where betting on one horse to such an extent occurs, there is no way to deduce the subjective probability on the basis of projected pay-outs.
9 See Thaler and Ziemba (1988).

References
Busche, K. and Hall, C. D. (1988), 'An exception to the risk preference anomaly', Journal of Business, 61, 337–46.
Copeland, T. E. and Friedman, D. (1992), 'The market value of information: some experimental results', Journal of Business, 65, 241–66.
Gandar, J. M., Dare, W. H., Brown, C. R. and Zuber, R. A. (1998), 'Informed traders and price variations in the betting market for professional basketball games', Journal of Finance, 53, 385–401.
Griffith, R. M. (1949), 'Odds adjustments by American horse-racing bettors', American Journal of Psychology, 62, 290–4.
Hausch, D. B., Lo, V. S. W. and Ziemba, W. T. (1994), Efficiency of Racetrack Betting Markets, Academic Press.
Henery, R. J. (1985), 'On the average probability of losing bets on horses with given starting price odds', Journal of the Royal Statistical Society (A), 148, Part 4, 342–9.
Kahneman, D. and Tversky, A. (1979), 'Prospect theory: an analysis of decision under risk', Econometrica, 47, 263–91.
Kahneman, D. and Tversky, A. (1984), 'Choices, values and frames', American Psychologist, 39, 341–50.
Osborne, M. J. and Rubinstein, A. (1994), A Course in Game Theory, MIT Press.
Quandt, R. E. (1986), 'Betting and equilibrium', Quarterly Journal of Economics, XCIX, 201–7.
Radner, R. and Stiglitz, J. E. (1984), 'A nonconcavity in the value of information', in Boyer, M. and Khilstrom, R. E. (eds), Bayesian Models in Economic Theory, North-Holland, Amsterdam.
Schnytzer, A. and Shilony, Y. (1995), 'Inside information in a betting market', Economic Journal, 105, 963–71.
Schnytzer, A. and Shilony, Y. (1998), 'Insider trading and bias in a market for state-contingent claims', mimeo.


Singh, N. (1985), 'Monitoring and hierarchies: the marginal value of information in a principal–agent model', Journal of Political Economy, 93, 599–609.
Snyder, W. W. (1978), 'Horse racing: testing the EFM', Journal of Finance, 33, 1109–18.
Stigler, G. J. (1961), 'The economics of information', Journal of Political Economy, 69, 213–25.
Swidler, S. and Shaw, R. (1995), 'Racetrack wagering and the uninformed bettor: a study of market efficiency', The Quarterly Review of Economics and Finance, 35, 305–14.
Thaler, R. H. and Ziemba, W. T. (1988), 'Anomalies – pari-mutuel betting markets: racetracks and lotteries', Journal of Economic Perspectives, 2, 161–74.

9 Covariance decompositions and betting markets
Early insights using data from French trotting

Jack Dowie

The literature on the economics of betting markets has developed largely independently of the part of the psychological literature on judgement and decision making that focuses on the evaluation of subjective probability assessments. The aim of this chapter is to indicate how a specific type of subjective probability evaluation can be applied to racetrack data and to note the additional insights that may thereby be gained. It is shown that one can both arrive at a summary measure of the overall quality of the betting on a set of races and establish the relative contributions to this overall achievement of different types of knowledge and skill, in particular the 'discrimination' and 'calibration' displayed by the market/s. Furthermore, one can carry out this analysis for both different sub-markets and different types of event. Accordingly, it becomes possible for serious bettors to identify where their activities might be most profitably deployed, and for the operators of betting services (who have access to data not available here) to determine, on the basis of concepts not previously employed, which particular bets and events will maximise their turnover.

The underlying data relate to horse racing at the Hippodrome Paris-Vincennes in France, where the races are trotting ones. Trotting is one of the two gaits in harness racing in the English-speaking world (North America, Australasia, Britain and Ireland), pacing being the other, but pacing races are outlawed in mainland Europe and 'harness racing' is therefore exclusively trotting. The data comprise all 663 races run during the 1997/98 'winter meeting' at Vincennes, which involves racing several days a week from early November to late February on an 'all-weather' (cinder) track.

In France there is a pari-mutuel betting monopoly and betting takes place off-course in PMU (Pari-Mutuel Urbain) outlets throughout the country up to 13.15 on raceday ('avant la réunion' – 'ALR').
These PMU outlets are located within cafes, tabacs or other types of shop. Afterwards ('pendant la réunion' – 'PLR') betting occurs either at the Hippodrome itself (PMH) or, increasingly, in specialist off-course betting shops called 'cafe-courses'. Since 1997/98 this has been extended to include betting via TV remote controls, but our data precede this. Betting in France is focused heavily on racing in the capital, Paris, and trotting at Vincennes is (astonishingly) responsible for about one-third of the annual total betting turnover in France. Of the total national betting of roughly 6 billion francs

96

J. Dowie

in 2001, just over half is on trotting. In 2001 almost 98 per cent of this turnover was off-track, with about a quarter of that taking place 'PLR', this proportion having grown very rapidly in recent years. Two sets of odds are accordingly available for analysis: the final PMU ones as at 13.15, which continue to be displayed alongside the latest updates as betting occurs (PLR) at the track and elsewhere; and those at the close of all betting (equivalent to 'starting prices'), which we will call the PMH odds (even though they incorporate the money bet PMU and PLR as well). From an analytical point of view one can therefore explore the difference between these two sets of odds and establish the overall effect, in size and direction, of the changes in the betting populations and the informational circumstances in which bets are placed.

In addition, trotting at Vincennes occurs in two distinct disciplines, attelé (in harness with sulky) and monté (in saddle); roughly a third of races are monté, and we can therefore also analyse the results by discipline. In fact, the data collected in this study also distinguish between races conducted on the 'Grande Piste' of 2,000 metres (day time) and the 'Petite Piste' of 1,300 metres (at night), between races for horses of different sex, age and class, and between the four or five major betting races of the week (the 'événements') and the rest. About half the betting and the vast majority of newspaper coverage occurs on exotic bets on these big races, which involve picking the first five, four or three in correct order (hence their name: 'tiercé-quarté-quinté'). They are usually events with large fields (minimum fourteen starters) of well-known older horses and are only occasionally 'classic' races.

Our main purpose here is to introduce the probability score and its covariance decomposition as tools for the analysis of horse race data such as these and to present some relevant illustrative data.
We concentrate on the PMU/PMH comparison and the attelé/monté breakdown, but also present results for ‘événements’ versus ‘non-événements’ even though the number of the former is undesirably small.

A favourite–longshot bias?

First we report on a conventional analysis of the aggregate PMH data to see whether a 'favourite–longshot bias' of the normal sort exists. The broad verdict (Figure 9.1) would have to be that it does not, at least not in any straightforward fashion. The figure is based on odds (unnormalised) grouped into fifty-seven ranges at intervals of 0.3 (up to 1), of 0.75 (up to 10), of 2 (up to 20), of 5 (up to 50) and of 10 (up to 80), with all runners at 80–1 or more forming a final interval. Around 69 per cent of stakes were returned to win pool bettors in this set of races. A five-interval moving average seems to be the simplest way to highlight the oscillating pattern that appears to be present. The pattern might be characterised as one in which:

• after the typical early high returns (c. 100 per cent) at the very shortest odds there is a gradual deterioration to c. 50–60 per cent around odds of 4 and 5/1
• a subsequent return to 'money back' somewhere around 6/1 is sustained until about 10/1
• there is then a rapid deterioration to about 80 per cent which then seems to persist until about 40/1
• finally there is a progressive deterioration of the typical sort in the upper ranges, falling to approximately 30 per cent in the 80/1 and over interval

This oscillation produces a remarkably flat regression line (y = −0.2684x + 91.352, but R² = 0.0215), which is confirmed when we normalise the odds and subject them to simple calibration analysis (Figure 9.2).
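The grouping and smoothing just described can be sketched as follows (Python with NumPy; the interval edges and data are supplied by the caller, since the text only summarises the fifty-seven range boundaries):

```python
import numpy as np

def pct_return_by_group(odds, won, edges):
    # Mean percentage return to a unit win stake within each odds group;
    # `edges` are the interval boundaries (the chapter uses fifty-seven
    # ranges with steps of 0.3, 0.75, 2, 5 and 10 over successive spans)
    odds = np.asarray(odds, dtype=float)
    won = np.asarray(won, dtype=bool)
    groups = np.digitize(odds, edges)
    returns = np.where(won, odds + 1.0, 0.0)   # gross return to a 1-unit stake
    return {int(g): 100.0 * returns[groups == g].mean() for g in np.unique(groups)}

def moving_average(x, k=5):
    # The five-interval moving average used to smooth the oscillating pattern
    return np.convolve(x, np.ones(k) / k, mode='valid')
```

For example, pct_return_by_group([2, 2, 9, 9], [True, False, False, True], edges=[1, 5, 10]) groups the 2/1 and 9/1 runners separately and returns percentage returns of 150 and 500 for these toy data.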

[Figure 9.1 Vincennes winter meeting 1997/98: percentage return by odds range, with a five-interval moving average of the percentage return.]

[Figure 9.2 Winning proportion (y) against probability assigned (x), fifty-seven odds ranges; fitted line y = 1.1346x − 0.0057, R² = 0.9081.]


What might we learn if we apply the covariance decomposition of the probability score to this data set? (All the necessary references for the following section are contained in Yates (1982, 1988, 1994), Yates and Curley (1985) and Yates et al. (1991).)

The probability score and its decompositions

If we normalise the odds on the horses in a pari-mutuel race we arrive at the proportions of the pool bet on each, and hence the collective 'subjective probability' of the betting population responsible. We can ask about the quality of these probabilities ('how good' they are) by the criterion of 'external correspondence', in the same way as we can seek to evaluate probabilistic forecasts made in relation to weather, politics or any other topic. Broadly, probabilistic forecasts are 'externally correspondent' to the extent that high probabilities are assigned to events that ultimately occur and low ones to events that do not occur. An alternative criterion of quality or goodness is 'internal coherence', which asks, for example, whether a set of posterior probabilities reflects the normatively correct revision of prior ones in the light of the likelihood of the new evidence according to Bayes' theorem. This alternative criterion is not considered here.

One well-known and widely accepted overall evaluation of the external correspondence of a set of probability forecasts is the 'Brier score'. This is simply the mean squared error, arrived at by taking the difference between the probability assigned to each event and 1 (if it occurs) or 0 (if it does not occur), squaring the resulting difference and summing the results over the set of judgements. This score is negatively oriented, so that 0 is the best possible score, arising when probability 1 was assigned to all events that occurred and probability zero was assigned to all those that did not: (1 − 1)² + (0 − 0)² = 0. The worst possible score is 2, arising when zero probability is assigned to all events that occurred and probability 1 assigned to all those that did not: (0 − 1)² + (1 − 0)² = 2. Such an overall quality score provides no insight into the reasons for any differences in 'goodness' between alternative sets of probability assessors or assessments.
Various decompositions have accordingly been developed to pursue greater understanding of the contribution of different skills and abilities to judgemental performance. We introduce two of the main decompositions of the Brier probability score (PS) and deﬁne them below, using horse races as the subject.

Murphy's decomposition

PS = U + C − DI

or, using the terms customarily applied,

PS = 'outcome uncertainty' + 'calibration' ('reliability') − 'discrimination' ('resolution')

Covariance decompositions and betting markets

99

where U = d(1 − d) and d is the base rate of the to-be-predicted event, in our case the proportion of starters that win, that is, the number of races divided by the number of starters. Note that this term increases (and hence PS worsens, other terms equal) as field size decreases. However, this is of no consequence in evaluation since this term is outside the control of the probability assessor and not the subject of judgemental or forecasting skill.

C is the Calibration index. Probability judgements (derived from normalised odds) are grouped into ranges (e.g. 0.100–0.149). The proportion of starters that win in a range (e.g. 0.170) is deducted from the range's midpoint value (0.125), the result (0.045) squared and multiplied by the number of starters in that range. The resulting numbers for each range are then summed and the total divided by the total number of starters to give the C index. The aim is to maximise calibration and therefore to minimise the C index.

DI is the Discrimination index. The same ranges are used. This time the proportion of starters that win in a range is deducted from the base rate proportion of winners (d), the result squared and multiplied by the number of starters in the range. The resulting numbers for each range are then summed and the total divided by the total number of starters to give the DI. The aim is to maximise discrimination and therefore to maximise the DI.
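A sketch of the Murphy decomposition follows (invented data and function name). Here forecasts are grouped by their exact value rather than by the chapter's odds ranges, which makes the identity PS = U + C − DI hold exactly; with range midpoints, as in the chapter, it holds only approximately:

```python
from collections import defaultdict

def murphy_decomposition(forecasts, outcomes):
    """Return (PS, U, C, DI) with PS = U + C - DI.

    U  = d(1-d): outcome uncertainty (base rate d of winners)
    C  : calibration index (0 is best)
    DI : discrimination index (larger is better)
    """
    n = len(forecasts)
    d = sum(outcomes) / n
    U = d * (1 - d)
    groups = defaultdict(list)
    for f, o in zip(forecasts, outcomes):
        groups[f].append(o)
    C = DI = 0.0
    for f, g in groups.items():
        win_rate = sum(g) / len(g)
        C += len(g) * (f - win_rate) ** 2      # miscalibration penalty
        DI += len(g) * (win_rate - d) ** 2     # discrimination reward
    C, DI = C / n, DI / n
    PS = sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / n
    return PS, U, C, DI

forecasts = [0.8] * 4 + [0.2] * 4
outcomes = [1, 1, 1, 0, 0, 0, 0, 1]
PS, U, C, DI = murphy_decomposition(forecasts, outcomes)
print(round(PS, 4), round(U + C - DI, 4))   # 0.19 0.19
```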

Yates' covariance decompositions

Yates was concerned about the nonindependence of the reliability and resolution terms in the Murphy decomposition and, for this and other reasons, suggested using conventional covariance decomposition principles to arrive at

PS = Variance d + Bias² + Minimum Variance f + Scatter f − 2(Slope × Variance d)

where d is, as above, the base rate of the to-be-predicted event (the proportion of starters that win), so that Variance d = d̄(1 − d̄), and f is the assigned probability (i.e. forecast).

Bias is the mean probability assigned to a starter winning minus the mean probability of a starter winning, and so is equivalent to f̄ − d̄. In pari-mutuel markets for which the odds have been normalised this should, in principle, be zero. The mean probability of a starter winning is simply 1 over the size of the field, irrespective of the distribution of betting, and in a single race this must be the same as the average probability assigned to a starter derived from the normalised odds. It will differ from zero only for reasons connected with the use of variable deductions from the pool according to the odds on the winners (the French 'prélèvement supplémentaire progressif', which involves higher deductions from the win payout when the winner is 30–1 or longer) or with the rounding of odds in their journey from pari-mutuel operator to publication in a newspaper, in our case 'Paris-Turf'.

100

J. Dowie

'Bias', thus defined, is regarded by Yates as a measure of 'calibration-in-the-large', as opposed to the more conventional concept of calibration (i.e. Murphy's) which Yates calls 'calibration-in-the-small' and which has no equivalent in his covariance decomposition.

Slope is the average probability assigned to winners (f1) minus the average probability assigned to non-winners (f0). The difference between these two conditional probabilities provides an alternative and intuitively more understandable measure of discrimination than Murphy's 'resolution' (DI). The slope may vary from 0 (no discrimination: average odds on winners same as average odds on non-winners) to 1 (perfect discrimination: hypothetical average probability of 1 assigned to all winners and of 0 assigned to all non-winners). We can interpret an increase in slope as a percentage improvement in discrimination. The aim is clearly to maximise slope. (The slope is in fact literally the slope of the regression line that results when probability assigned is regressed on winning proportion.)

Scatter f is an index of the overall 'noisiness' of the judgements and is the result of weighting the variance of the probabilities assigned to winners and the variance of the probabilities assigned to non-winners by the relative number of winners and non-winners. The aim is to minimise scatter, subject to exploiting any discriminatory power possessed.

Minimum Variance f is the minimum variance in f necessary to achieve the given slope and exploit this amount of discriminatory power. Like the Variance d and Bias this can be taken to be essentially outside the control of the forecaster (bettors in our case), given their discriminatory power, so that the evaluation of judgemental/forecasting skill can be focused on the final terms (Slope and Scatter). Minimum Variance f is equal to Slope² × Variance d.
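The five Yates terms can be sketched directly from these definitions (a minimal illustration with an invented function name; the identity PS = Variance d + Bias² + Minimum Variance f + Scatter − 2·Slope·Variance d can be checked numerically against the directly computed mean squared error):

```python
def yates_decomposition(forecasts, outcomes):
    """Return (PS, Var_d, Bias, Slope, MinVar_f, Scatter) per Yates.

    PS is reassembled from the five terms:
    PS = Var_d + Bias^2 + MinVar_f + Scatter - 2*Slope*Var_d
    """
    n = len(forecasts)
    d_bar = sum(outcomes) / n
    f_bar = sum(forecasts) / n
    var_d = d_bar * (1 - d_bar)
    bias = f_bar - d_bar
    f1 = [f for f, o in zip(forecasts, outcomes) if o == 1]
    f0 = [f for f, o in zip(forecasts, outcomes) if o == 0]
    slope = sum(f1) / len(f1) - sum(f0) / len(f0)
    min_var_f = slope ** 2 * var_d

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # Scatter: within-class variance of f, weighted by class sizes
    scatter = (len(f1) * var(f1) + len(f0) * var(f0)) / n
    ps = var_d + bias ** 2 + min_var_f + scatter - 2 * slope * var_d
    return ps, var_d, bias, slope, min_var_f, scatter
```

(The sketch assumes at least one winner and one non-winner in the data, so that both conditional means exist.)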
It is important to see that in a pari-mutuel market the odds may be perfectly calibrated (in Murphy's terms) – and hence the unit return at all odds the same – irrespective of the degree of discrimination ('knowledge'). To take the simplest example, imagine a set of two-horse races. If all horses were assigned either a 60 per cent chance or a 40 per cent chance and they won in proportion, the unit return would be the same at both odds. On the other hand, if all were assigned either 80 per cent or 20 per cent and won in proportion, the unit return would again be the same at both odds. However, we would clearly want to say that the market knew more – showed more ability to discriminate between winners and non-winners – in the latter case.
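The point can be checked with a small deterministic construction (invented data, not the chapter's): two sets of two-horse races, each perfectly calibrated because winners occur exactly in proportion to the assigned probabilities, yet with very different slopes:

```python
def market_slope(p_fav, n_races=100):
    """Two-horse races: favourite assigned p_fav, outsider 1 - p_fav,
    with winners occurring exactly in proportion (perfect calibration).
    Returns Yates' slope: mean forecast on winners minus non-winners."""
    wins_fav = round(n_races * p_fav)
    # (forecast, outcome) pairs for every starter
    starters = ([(p_fav, 1), (1 - p_fav, 0)] * wins_fav +
                [(p_fav, 0), (1 - p_fav, 1)] * (n_races - wins_fav))
    f1 = [f for f, o in starters if o == 1]
    f0 = [f for f, o in starters if o == 0]
    return sum(f1) / len(f1) - sum(f0) / len(f0)

print(market_slope(0.6))  # modest discrimination
print(market_slope(0.8))  # same (perfect) calibration, more 'knowledge'
```

Both markets have a calibration index of zero, but the 80/20 market's slope (0.36) is nine times the 60/40 market's (0.04).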

Results of analysis

The 663 races constituting the data set were the entire 'meeting d'hiver' at Vincennes, which ran from 3 November 1997 to 28 February 1998. While 9,484 horses ran in these 663 races, some were part of 'écuries' of two, three or even four horses, which meant they were coupled in the win betting and formed one betting opportunity in the win pool. While individual win odds are displayed for each horse in an écurie (because the coupling does not apply to more exotic bets and the separate odds are useful information for exotic bettors), one cannot actually


ask to bet the écurie, and the écurie dividend is the same whichever horse wins. We have deleted from the data set all écurie entries apart from the one that finished highest in place order (or the one that appeared first in the result if both/all were unplaced). The deleted number was 177, so the data set comprises 9,307 betting entries. We will often refer to these as 'the starters', even though this is not strictly correct.

Table 9.1 contains all the data referred to in the following section. Before moving to the decompositions, to help the reader get to grips with the table we can note that the mean probability assigned to the winner at the close of betting was 17.0 per cent (column f1 PMH = 0.1695) compared with 15.0 per cent at 13.15 offcourse (column f1 PMU = 0.1498). Also, the lowest mean probability (of the estimates provided here) was 13 per cent (0.1321) for the winners of 'événements' offcourse, and the highest 18 per cent for monté at close of betting (0.1807). ('Événements' are almost always attelé.)
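The 'subjective probabilities' throughout this analysis come from normalising the win odds within each race. A minimal sketch of that step (illustrative only; the chapter's own normalisation works from the pari-mutuel pool data, and the function name and odds here are invented):

```python
def normalised_probabilities(decimal_odds):
    """Convert one race's decimal win odds into probabilities summing to 1.

    Raw implied probabilities 1/odds sum to more than 1 because of the
    pool deduction; dividing by their total removes the takeout.
    """
    implied = [1.0 / o for o in decimal_odds]
    total = sum(implied)
    return [p / total for p in implied]

# A hypothetical five-horse field (odds include the deduction):
probs = normalised_probabilities([2.0, 4.0, 5.0, 10.0, 10.0])
print(round(sum(probs), 10))   # 1.0
```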

PMH versus PMU and attelé versus monté

Q: Do the final (PMH) odds show better calibration than the PMU ones and, if so, how much better?
A: The major feature of the calibration results is the very high overall calibration in all cases – confirming the conventional analysis presented earlier – except for the événements. The limited amount of data available on these may contribute to the much higher (i.e. poorer) Calibration index, but we believe there is a substantive effect as well (see below). On these limited data one could not support any claim that calibration is different between the PMU and PMH odds.

Q: Is this true of both monté and attelé races?
A: No, there is a definite suggestion that calibration improves on monté from PMU to PMH (0.0006 to 0.0004), but deteriorates on attelé (0.0003 to 0.0007). This prompts the speculation that those able to see the horses in action in the 10–20 minutes before the event (at the track or by picture transmitted into PLR locations) overrate their interpretative ability in attelé races relative to monté ones.

Q: Do the final PMH odds show better discrimination – more knowledge – than the PMU ones and, if so, how much better?
A: Yes, they do, and of the final PMH discrimination level about 20 per cent has been added since the PMU betting finished. Specifically, the final (PMH) odds for the complete data set (slope 0.1147) represent a 23.9 per cent increase in discrimination compared with the PMU odds (slope 0.0926). (The alternative – and less preferred – Murphy measure of discrimination gives a 25.9 per cent increase.) So one can say that they incorporate roughly 25 per cent more 'knowledge' than the PMU base, a formidable amount. Note that the scatter is also greater, even after taking account of the greater variance necessary

Table 9.1 Decompositions of PMH and PMU probability scores

                     N      PS       d        f        f0       f1
PMH (all)          9307   0.0593   0.0712   0.0794   0.0548   0.1695
  Attelé           6379   0.0579   0.0690   0.0776   0.0528   0.1639
  Monté            2928   0.0624   0.0762   0.0837   0.0594   0.1807
PMU (all)          9307   0.0607   0.0712   0.0787   0.0572   0.1498
  Attelé           6379   0.0593   0.0690   0.0771   0.0547   0.1448
  Monté            2928   0.0637   0.0762   0.0823   0.0627   0.1596
PMH Événement       962   0.0504   0.0582   0.0687   0.0419   0.1411
    Non-évén't     8345   0.0604   0.0727   0.0807   0.0564   0.1722
PMU Événement       962   0.0511   0.0582   0.0685   0.0427   0.1321
    Non-évén't     8345   0.0618   0.0727   0.0798   0.0589   0.1514

                   Murphy                        Yates
                   d(1−d)   CI       DI         Var d    Bias²    Min Var f  Slope    Scatter
PMH (all)          0.0661   0.0004   0.0073     0.0661   0.0001   0.0009     0.1147   0.0074
  Attelé           0.0642   0.0007   0.0070     0.0642   0.0001   0.0008     0.1111   0.0071
  Monté            0.0704   0.0004   0.0083     0.0704   0.0001   0.0010     0.1213   0.0080
PMU (all)          0.0661   0.0003   0.0058     0.0661   0.0001   0.0006     0.0926   0.0061
  Attelé           0.0642   0.0003   0.0053     0.0642   0.0001   0.0005     0.0901   0.0060
  Monté            0.0704   0.0006   0.0073     0.0704   0.0000   0.0007     0.0970   0.0063
PMH Événement      0.0548   0.0036   0.0080     0.0548   0.0001   0.0005     0.0991   0.0058
    Non-évén't     0.0674   0.0004   0.0075     0.0674   0.0001   0.0009     0.1158   0.0076
PMU Événement      0.0548   0.0021   0.0059     0.0548   0.0001   0.0004     0.0894   0.0055
    Non-évén't     0.0674   0.0004   0.0061     0.0674   0.0001   0.0006     0.0925   0.0062

Note: CI = Calibration index, DI = Discrimination index.


to exploit the greater discrimination (as indicated by the higher Minimum Variance f).

Q: Is this true of both monté and attelé races?
A: Yes, both show the same 23–25 per cent proportionate increase in discrimination from PMU to PMH. The scatter data are also parallel.

Q: Which do the betting markets know more about – monté or attelé races – and by how much?
A: It may be initially surprising to those who know that monté races have a very high relative rate of disqualification (for failing to maintain the correct trotting gait) that the answer is monté. The monté discrimination is about 9 per cent greater than that for attelé in the PMU odds, and, consistent with the previous answer, this superiority remains the same in the PMH odds. While monté fields are smaller (as evidenced by the larger d) this is supposedly dealt with in the decompositions by the incorporation of the Variance d term.

'Événements' and 'non-événements'

Q: Given the vastly greater public information and analysis applied to the 'tiercé-quarté-quinté' races compared with others, what do the data suggest on calibration?
A: The answer has to be offered with some caution because of the relatively limited number of événements in the data – they comprise only about 10 per cent of starters in our data set. (Note also that we are analysing the standard win pool odds on the horses in these races, not the unknown win odds assigned by those placing the exotic bets.) However, the data do suggest that calibration is significantly poorer. One could speculate that the 'professionals' are less interested in these races – purposely selected by the betting organisers for their likely difficulty and high returns – and hence fail to act so as to bring the out-of-line odds (and returns) into line. The implication, if this 'inefficiency' truly exists, is that there are profitable opportunities lurking in the win pool on événements.

Q: And what do the discrimination figures say in relation to this comparison?
A: Here the position is more confused. While all the discrimination figures increase from PMU to PMH, Murphy's discrimination increases more for 'événements' than 'non-événements' (36 per cent against 23 per cent), whereas Yates' slope increases less (11 per cent against 25 per cent). We need therefore to remind ourselves that these two concepts are not the same and are measuring different aspects of forecasting ability. Yates is the preferred measure, given the nonindependence of the Murphy elements, and so we support the implication of his decomposition, which is that much less knowledge becomes available late on (just before the race) in relation to these events than in relation to the ordinary ones.


Conclusions

Treating racecourse odds as subjective probability distributions means that we can draw on the various scoring principles and score decompositions developed in the judgement and decision-making literature. These decompositions enable us to distinguish, in particular, between the ability of the markets concerned to (a) discriminate between winners and non-winners and (b) assign appropriate odds to starters.

In France there seems, on the basis of this limited study, little evidence of an overall bias up through the odds range either in PMU or PMH. The difference ('inequity') between PMU bettors (betting before 13.15) and later bettors is almost certainly down to the greatly superior information of the latter, rather than either their superior ability to assign appropriate odds to the runners, given available information, or differences in utility functions (odds preferences).

There is some suggestion that both main aspects of 'external correspondence' are poorer for the win pools for the 'événements' on which over half French betting takes place (though most of this is exotic betting and the win pools on these events are not particularly above average). This prompts the speculation that the amount of information supplied about these races is overwhelming, even to the 'professional' bettors, who either perform no better than the rest in their betting on them or else choose to leave most of these races to the 'amateurs'. In many ways this result is a confirmation of the success of the betting promoters, in conjunction with the media, in providing highly uncertain races of high quality where 'inside information' plays little or no part and the market is therefore strongly as well as weakly efficient.
While these decompositional analyses may initially be of main interest to academic researchers they could prove a very useful monitoring tool for betting organisers wishing to establish what is happening between different pools in different areas and at different times. In particular, differences in slope between betting populations raise a priori ‘equity’ issues and the decomposition elements could be used as quantitative signs to be monitored and, if necessary, followed up. Such analysis, when combined with information on turnover, would also enable the links between the decompositional elements and betting activity to be established and exploited in the design of bets. Substantively, the tentative implication is that betting at Vincennes on trotting is fairly (weakly) efﬁcient but with the intriguing possibilities that there is ‘overbetting’ in the 4–5/1 range but plenty of value – and in fact almost ‘fair betting’ – in the 6–10/1 range, even given the high deductions which characterise this pari-mutuel monopoly. But of course this conclusion is based on just one small sample of races and much further work is needed to substantiate it and further explore the insights to be gained from this approach.

Acknowledgements

I am grateful to Frank Yates for making his Probability Analyser software available and to Dominique Craipeau of the PMU in Paris for assistance with the data on French betting patterns.


References

Yates, J. F. (1982), 'External correspondence: decompositions of the mean probability score', Organizational Behavior and Human Performance, 30: 132–156.
Yates, J. F. (1988), 'Analyzing the accuracy of probability judgments for multiple events – an extension of the covariance decomposition', Organizational Behavior and Human Decision Processes, 41: 281–299.
Yates, J. F. (1994), 'Subjective probability accuracy analysis', in G. Wright and P. Ayton (eds), Subjective Probability, Chichester: John Wiley and Sons, pp. 381–410.
Yates, J. F. and S. P. Curley (1985), 'Conditional distribution analyses of probabilistic forecasts', Journal of Forecasting, 4: 61–73.
Yates, J. F., L. S. McDaniel et al. (1991), 'Probabilistic forecasts of stock prices and earnings – the hazards of nascent expertise', Organizational Behavior and Human Decision Processes, 49: 60–79.

10 A competitive horse-race handicapping algorithm based on analysis of covariance

David Edelman

A model for empirically determining the Competitive Strength or Class of races in a historical database is presented. The method, based on Analysis of Variance, uses horses' successive runs and includes a necessary weight allowance. The resulting variable is applied out-of-sample to forecast the results of future races, with a case study carried out on a set of 1,309 Australian Metropolitan Sprint races, demonstrating significant added value in both a statistical and a financial sense.

Introduction

In recent years, the scientific study of horse-race handicapping methods has established itself alongside the more traditional literature relating to other financial markets and environments as a serious, multifaceted challenge, both practical and academic. From its origins as a pastime, horse-race betting has evolved into a set of highly complex international markets, in the sense that virtually anyone in the world with sufficient knowledge and means can bet on racing events taking place in any one of the hundreds of countries with organised markets for these events. Like any other international market, horse-race betting markets contain both rational and irrational investors, behavioural components, notions of efficiency, and the scope for Technical and Fundamental Analyses.

In analogy with the literature on the Financial Markets, the literature on horse-race betting markets is divided among several categories: investment systems (Asch and Quandt, 1986; Ziemba et al., 1987), textbook-style publications (Edelman, 2001), and academically-orientated journal articles, books, and collections (Hausch et al., 1994), with a moderate degree of overlap occurring from time to time.

However, there is one fundamental difference between horse-race betting markets and 'traditional' financial markets, which is that the tradition of the latter began with the notion of investment to either (i) enable or facilitate the production and/or delivery of goods and services, or (ii) underwrite or aggregate individual risk, both of these generally seen historically as being beneficial to mankind. The latter characteristic has meant therefore that the notion of such types of investment


has been historically encouraged and even exalted, in a Moral sense, by secular and religious institutions alike. By contrast, activities such as horse-race betting and gambling in general have been regarded in a negative light in varying degrees by both secular and religious institutions, there being no by-products seen as being beneficial to mankind; rather, they are viewed as attracting Capital (both human and financial) away from more 'worthwhile' uses. This stigma has meant that governments tacitly agree to treat horse-race betting markets in a fashion that resembles the manner in which they treat other activities or products judged to be 'destructive' (such as cigarettes), and regulate and tax them in such a way as to discourage their growth and prevalence.

One of the main effects of this is the fact that, in contrast to traditional financial markets in which an average investor can be expected to earn a return without any particular skill or knowledge, in horse-race betting markets average expected returns are decidedly negative, ranging by country from about −10 per cent to −40 per cent. When put together with the widely-held view that Markets are Efficient in general, the 'average negative expectation' property of race-betting tends to lead to its grouping with other gambling games (possibly excluding certain forms of Blackjack), where the expectation is always negative for every player, regardless of how skillful or informed that player may be.

Thus, the emphasis here will be the exploration of market inefficiency, which will be studied through probability forecasts produced from competitive ratings. It will be shown that the methods here lead to models which not only exhibit statistically significant added forecast value marginal to bookmakers' predictions, but which produce a clear out-of-sample profit.

Background

The assignment of probability forecasts to horse racing appears to have evolved universally into an approach generally known as handicapping. In the sport of racing, handicapping originally referred merely to a method whereby horses deemed to have better chances are weighted more heavily so as to make the chances of the various runners more even. But since this inherently involves the assessment of the chances of the various runners prior to the allocation of the weights, the term 'handicapping' has more commonly come to refer to the assessment step of this process.

It is of interest to note that handicapping has come to be universally carried out using an incremental assessment method. Logically, it is assumed first that in any given race, horses are handicapped correctly. From one race to the next, then, horses are handicapped based on their previous handicap ratings, plus any new relevant information since the last handicap assessment. The primary components of this change are the performance in the previous race, and the change in Class (grade) from the last handicapped race to the current one. Occasionally, there are other minor changes, such as a 'weight-for-age' improvement for horses in the earlier years of their racing careers.


The primary weakness in this approach is the difficulty of quantifying reliably the change in Class from one race to another. It is this weakness which the competitive ratings model proposed here seeks to address.

Before proceeding, a mention of several other accepted probability assessment methods is in order. Of these, the most demonstrably effective and widely accepted type is the Multinomial Logit Regression model proposed by Bolton and Chapman (1986), Benter (1994) and others, where various variables relating to runners' abilities and track record are regressed against actual race results. These methods have been generalised to Neural Network models (Drapkin and Forsyth, 1987). Other 'traditional' methods have involved probability assessments based on absolute Weight-Equivalence ratings (see Scott, 1982), Adjusted Time ratings (see Beyer, 1995; Mordin, 1998) or Pace ratings (see Brohamer, 1991).

While each of the approaches referred to above has been shown to have at least some degree of usefulness, there has yet to appear a systematic study of a concept which skillful bettors and handicappers often apply in practice, known as 'Collateral Form' – a method whereby the Class assessment or believed 'difficulty level' of a race may be amended post hoc based on the subsequent performance of the various runners in that race. There appears to be no published work making this concept precise prior to the results presented here. The approach taken in the following sections takes the Collateral Form concept to its logical limit by considering, at a given point in time, the complete network of all interrelationships between all races for which recorded information exists.

Methodology

We shall consider successive runs of the same horse and (rather than trying to estimate the strengths of the various horses) focus on an estimation of the overall strengths of the various races, as evidenced by the difference in (weight-corrected) performances of horses in successive races. To this end, we shall consider the differences Δ_ijk in adjusted beaten lengths for the same horse moving from race i to race j, where the index k allows for possibly more than one horse to have competed in these two events successively. As an additional variable, we shall use δw_ijk to denote the change in carried weight associated with Δ_ijk.

Next, let η_1, η_2, ..., η_n (with Σ η_i = 0) be parameters denoting the relative strengths of races 1, 2, ..., n, let c_w denote the coefficient associated with the Weight Change variable, and let c_0 be a constant. The model we shall consider is of the form

Δ_ijk = c_0 + c_w δw_ijk − η_i + η_j + ε_ijk

where ε_ijk denotes the error associated with factors extraneous to Class and Weight effects.


Rather than basing the estimation of the parameters η_i, c_w and c_0 on the minimisation of

Σ {Δ_ijk − (c_0 + c_w δw_ijk − η_i + η_j)}²

which would tend to overemphasise longer beaten lengths, we shall consider a weighted least-squares solution, with the weights

t_ijk = 1 / (bl(1)_ijk + bl(2)_ijk + 1)

and employ a Ridge stabilisation term Σ η_i², with a cross-validated constant coefficient K. Summarising, we seek to minimise

Σ {Δ_ijk − (c_0 + c_w δw_ijk − η_i + η_j)}² / (1 + bl(1)_ijk + bl(2)_ijk) + K Σ η_i²

over η, c_w and c_0.

For a fixed history of races 1, 2, ..., n this optimisation may be performed and applied to the handicapping of subsequent races. In order to analyse the performance of such a handicapping method, however, it is necessary to envision an expanding history (1, ..., n_1), (1, ..., n_2), ..., where the optimisation is performed anew as of each new day of race-meeting results and applied to the next raceday. As the number of races (and hence parameters) can be very large, an optimisation algorithm based on incremental optimisation (i.e. using the solutions as of day 1 as initial estimates of the solutions as of day 2) can be shown to save large amounts of computing time. It is also worth noting that in programming, the use of sparse matrix representations can help considerably in the conservation of memory. Such optimisations have been performed using SCILAB (an interactive software package from INRIA) for problems containing as many as 20,000 races (parameters), with the optimisation on a 1.2 GHz AMD Athlon with 0.5 Gb RAM taking approximately ten minutes to complete.

The adjustments applied to the beaten lengths going into analyses of the above type may be performed in various different ways. Since it is arguable that variation in beaten lengths beyond ten lengths or so may not contain much marginal information, a smooth truncation at approximately fifteen lengths is recommended:

bl_trunc = 15 tanh(bl_raw / 15)

Also, for races of varying distances, in order to account for the fact that longer races give rise to a greater variation in beaten lengths, a distance adjustment may


be considered, of a form similar to

bl_adj = bl_raw / (distance / 1000)^0.6

which has been found, statistically, to give roughly constant variation in beaten lengths across various distances.
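The adjustment formulas and the weighted ridge estimation above can be sketched as follows. This is a toy reconstruction, not the author's SCILAB implementation: the function names and simulated data are invented, the order in which distance scaling and truncation are composed is an assumption, and numpy's dense least-squares routine stands in for the incremental sparse optimiser described in the text:

```python
import numpy as np

def adjust_lengths(bl_raw, distance):
    """Distance-scale beaten lengths, then smoothly truncate near 15
    (one plausible composition of the two adjustments in the text)."""
    bl = bl_raw / (distance / 1000.0) ** 0.6
    return 15.0 * np.tanh(bl / 15.0)

def fit_race_strengths(pairs, n_races, K=1.0):
    """Weighted ridge least squares for the model
        delta = c0 + cw*dw - eta_i + eta_j + error.

    pairs: list of (i, j, delta, dw, bl1, bl2) successive-run records.
    Minimises  sum w*(delta - fit)^2 + K*sum(eta^2)
    with w = 1/(1 + bl1 + bl2), via an augmented least-squares system.
    Returns (c0, cw, eta).
    """
    m = len(pairs)
    X = np.zeros((m, 2 + n_races))
    y = np.zeros(m)
    w = np.zeros(m)
    for r, (i, j, delta, dw, bl1, bl2) in enumerate(pairs):
        X[r, 0] = 1.0          # intercept c0
        X[r, 1] = dw           # weight-change coefficient cw
        X[r, 2 + i] = -1.0     # -eta_i (race left behind)
        X[r, 2 + j] = +1.0     # +eta_j (race moved into)
        y[r] = delta
        w[r] = 1.0 / (1.0 + bl1 + bl2)
    sw = np.sqrt(w)
    # Ridge rows penalise the eta block only (c0, cw unpenalised)
    ridge = np.hstack([np.zeros((n_races, 2)),
                       np.sqrt(K) * np.eye(n_races)])
    A = np.vstack([X * sw[:, None], ridge])
    b = np.concatenate([y * sw, np.zeros(n_races)])
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coef[0], coef[1], coef[2:]
```

Because −η_i + η_j is unchanged by a constant shift of all η, the ridge term also serves to pin the estimates near zero mean, matching the Σ η_i = 0 constraint in the model statement.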

An experiment

As a specific test of these methods, we will study a set of races that occurred in Australia between January 1991 and July 1998 at the Metropolitan (significantly higher-grade than average) level, and at distances of between 1,000 m and 1,200 m, inclusive. We shall not attempt to forecast any races prior to 1994, but will use the entire history of race results starting from January 1991 and continuing up until the day preceding the day of each race to be forecast. Such forecasts will be carried out for 1,309 races, after which a test against the bookmakers' prices will be carried out using the multinomial logit model (see Bolton and Chapman, 1986; Benter, 1994, etc.), otherwise known as the Conditional Logistic Regression model, to see if the removal of the Competitive Form (CForm) variable from the model including both it and the bookmakers' prices significantly affects the Likelihood Score on the actual outcome, and (perhaps more importantly) whether a profitable betting strategy arises.
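The likelihood underlying such a conditional logit test can be sketched as follows (a minimal illustration with invented names, not the scox routine used below): within each race, win probability is a softmax of a linear index in the predictors, and the model with and without CForm is compared on the resulting log-likelihood.

```python
import math

def race_probabilities(index_values):
    """Conditional logit: P(horse h wins) = exp(v_h) / sum_r exp(v_r),
    where v is a linear index, e.g. beta1*log_odds + beta2*cform."""
    z = [math.exp(v) for v in index_values]
    s = sum(z)
    return [p / s for p in z]

def log_likelihood(races, beta):
    """races: list of (runners, winner_index) tuples, where runners is
    a list of (log_odds, cform) feature pairs, one per starter."""
    ll = 0.0
    for runners, winner in races:
        v = [beta[0] * lo + beta[1] * cf for lo, cf in runners]
        ll += math.log(race_probabilities(v)[winner])
    return ll
```

Maximising this over beta (e.g. with a generic optimiser), once with and once without the CForm term, reproduces the kind of likelihood-score comparison reported in the output below.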

Results

The results of a conditional logistic regression analysis over 1,390 races are shown below. It appears that the CForm variable is highly significant (T = 15) marginal to the log-odds variable. When this form variable is omitted from the model, the R-squared drops from approximately 21 per cent to less than 19 per cent. The model including the Form variable without the log-odds variable is highly statistically significant (T = 13), but at an R-squared of approximately 2 per cent is virtually worthless by itself.

-->scox([logodd,cform,stratum,indwin], 'x = [1,2], str = 3, stat = 4');
7373.849 1462.882 203.071 13.599 0.608 0.024 0.001

Coef.  Val.    S.E.   T-ratio
-----------------------------
1      -1.178  0.033  -35.788
2       0.715  0.048   14.923
-----------------------------
L = 4573.30   R-sq: 0.213


-->scox([logodd,cform,stratum,indwin], 'x = [1], str = 3, stat = 4');
6158.444 1357.574 180.593 11.714 0.516 0.020 0.001

Coef.  Val.    S.E.   T-ratio
-----------------------------
1      -1.141  0.032  -35.599
-----------------------------
L = 4728.04   R-sq: 0.186

-->scox([logodd,cform,stratum,indwin], 'x = [2], str = 3, stat = 4');
956.493 11.242 0.283 0.007 0.000

Coef.  Val.    S.E.   T-ratio
-----------------------------
2       0.673  0.051   13.298
-----------------------------
L = 5682.65   R-sq: 0.022

At any given timepoint, the mean of the Race Class parameters (on which the CForm variables are based) is near zero, with standard deviation approximately equal to 0.33. The standard deviation of the (centered) CForm variable is approximately 0.42, indicating that the characteristic variation in log-odds in the fitted composite model due to CForm is about 30 per cent, which is fairly strong in betting terms.

It is of interest to test the efficacy of betting runners with favourable values of the CForm variable overall, and to see if its effect differs over various odds ranges. We shall assume that betting is for a fixed gross return of 1 unit. For all runners in our sample, regardless of form history, the total outlay would be 1,696 units, for a return of 1,390, or a loss of about 18 per cent. For runners whose One-run form variable is larger than half of a standard deviation above average, the total outlay would be approximately 442 units, for a return of 540, or a profit of approximately 22 per cent. For runners of 2/1 or shorter, the outlay for all runners would be 348 units, for a return of 314, or a loss of 9.8 per cent, as compared to a loss of 20 per cent for runners longer than 2/1. Restricting to those runners with favourable values of the One-run form variable which are 2/1 or shorter, an outlay of 92 units results, for a return of 117, or a profit of approximately 27 per cent. For runners with favourable One-run form variable which are longer than 2/1, the

112

D. Edelman

proﬁt margin is approximately 21 per cent. Surprisingly, as further odds-range breakdowns are investigated, the proﬁt margins achieved using this criterion do not appear to vary signiﬁcantly from 21 per cent, suggesting that the variable has roughly the same degree of impact accross all odds ranges. In summary, it appears that even this simple version of CForm variable is highly effective in identifying value.
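The staking arithmetic above can be sketched in a few lines. This is a minimal illustration of the 'fixed gross return of 1 unit' rule, not the author's code, and the runner data are invented:

```python
# A minimal sketch (not the study's code) of betting to a fixed gross
# return of 1 unit: each bet stakes 1/(odds + 1), so every winning bet
# returns exactly 1 unit gross. Runner data below are illustrative only.
def evaluate_rule(runners):
    """runners: list of (odds_to_1, won) pairs, e.g. 2.0 means 2/1."""
    outlay = sum(1.0 / (odds + 1.0) for odds, _ in runners)
    gross_return = sum(1.0 for _, won in runners if won)
    margin = (gross_return - outlay) / outlay  # > 0 is profit, < 0 is loss
    return outlay, gross_return, margin

outlay, gross_return, margin = evaluate_rule(
    [(2.0, True), (5.0, False), (1.5, True), (9.0, False)]
)
```

With these invented runners the outlay is exactly 1 unit (1/3 + 1/6 + 2/5 + 1/10) and two winners return 2 units gross, a 100 per cent margin; the chapter's reported figures (for example, 442 units staked for a return of 540) are aggregates of exactly this calculation over the sample.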

Discussion

This model appears to function very well at determining Empirical Class ratings, giving rise to apparently reliable profitable betting strategies. There are several important extensions of this method which are under investigation.

In the above analysis, only Sprint (1,000–1,200 m) races were analysed, where the problem of quantifying horses' distance preferences was avoided. However, a much more powerful model would include a larger database of races over various distances, where (lifetime constant?) horse-specific distance preference models are simultaneously fitted along with the full optimisation. This greatly increases the computational complexity of the model, but preliminary results suggest that the gains could be worth the additional trouble and complexity.

It is believed that a significant improvement in estimation is possible by including at least one 'non-collateral' Class Ratings variable in the model as a predictor, changing the interpretation of the η's to that of a competitively determined Class Ratings adjustment. Other predictor variables can be added as well, giving rise in the end to an Index, which can then be used as an input to a final Multinomial Logit Regression to produce a probability forecast. To date, such models appear to be feasible and show at least some marginal improvement in studies currently in progress.
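The probability-forecast step described here could take, for a single race, the following schematic form. This is a sketch only, not the author's implementation; the index values and the coefficient `beta` are hypothetical:

```python
import math

# Hedged sketch of the final step described in the text: a composite Index
# value per runner fed into a multinomial logit to produce win-probability
# forecasts for one race. Index values and beta are invented for illustration.
def mnl_win_probabilities(index_values, beta=1.0):
    exp_scores = [math.exp(beta * v) for v in index_values]
    total = sum(exp_scores)
    return [s / total for s in exp_scores]

probs = mnl_win_probabilities([0.8, 0.1, -0.4, -0.5])
```

The probabilities sum to one within each race by construction, which is what makes the multinomial (conditional) logit the natural final stage for a race-level forecast.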

References

Asch, Peter and Quandt, Richard E. (1986) Racetrack Betting: The Professor's Guide to Strategies, Dover, MA: Auburn House.
Benter, William (1994) 'Computer-based horse race handicapping and wagering systems: a report', in Efficiency of Racetrack Betting Markets, San Diego: Academic Press, pp. 169–84.
Beyer, A. (1995) Beyer on Speed, New York: Houghton Mifflin.
Bolton, Ruth N. and Chapman, Randall G. (1986) 'Searching for positive returns at the track: a multinomial logit model for handicapping horseraces', Management Science, 32 (8), 1040–60.
Brohamer, T. (1991) Modern Pace Handicapping, New York: William Morrow Co., Inc.
Drapkin, T. and Forsyth, R. (1987) The Punter's Revenge, London: Chapman and Hall.
Edelman, David C. (2001) The Compleat Horseplayer, Sydney: De Mare Consultants.
Hausch, D., Lo, V. and Ziemba, W. (eds) (1994) Efficiency of Racetrack Betting Markets, San Diego: Academic Press.
Lo, Victor (1994) 'Application of logit models to racetrack data', in Efficiency of Racetrack Betting Markets, San Diego: Academic Press, pp. 307–14.
Mordin, N. (1998) On Time, Oswestry, UK: Rowton Press.
Scott, Donald (1982) The Winning Way, Sydney: Wentworth Press.
Snyder, Wayne N. (1978) 'Horseracing: testing the efficient markets model', Journal of Finance, XXXII, 1109–18.
Ziemba, William and Hausch, Donald B. (1987) Dr Z's Beat the Racetrack, New York: William Morrow and Co., Inc.

11 Efficiency in the handicap and index betting markets for English rugby league

Robert Simmons, David Forrest and Anthony Curran

This chapter examines the properties of two types of sports betting market: index betting and handicap betting. The former type of market has been particularly under-explored in the academic literature. Our speciﬁc application is to English rugby league matches over the 1999–2001 period. We test for market efﬁciency and for speciﬁc forms of bias in the setting of spreads and handicaps. Regression analysis suggests that favourite–underdog bias is absent in these markets. However, although we do not observe home–away bias in the index market it appears that bookmaker handicaps do not fully incorporate home advantage. Hence, the index market is found to be efﬁcient whereas the handicap market contains a particular home–away bias. We attempt to rationalise these divergent results. Simulation analysis suggests that a strategy of shopping around for lowest spreads and handicaps can improve betting returns in each market, even to the extent of delivering proﬁts in the handicap betting market.

Introduction

Sports have been played for many centuries as a means for people to satisfy (relatively) peacefully natural desires to challenge and compete against one another. Betting markets have emerged worldwide, both legally and illegally, in response to demands from people to make wagers on the outcomes of sporting contests. In the United States, there are very few jurisdictions where sports betting is legal and the dominant market is based at Las Vegas, Nevada. The typical form of betting market there, in the cases of American Football and basketball, is based upon the notion of a betting line, in which the bookmaker will quote a points spread. Suppose the betting line places Washington Redskins as favourites to beat Dallas Cowboys by six points. A bet on the Redskins minus six wins only if there is a Redskins victory by seven or more points. A bet on the Cowboys plus six wins only if the Cowboys do better than a six points defeat. A Redskins win by six points represents a push and the original stake is returned to the bettor. The typical bet will be struck at odds of 10 to 11, so the bettor must place $11 to win $10. The bettor will not make a profit by betting on each side of the line as the bookmaker attempts to achieve a balanced book with equal volumes of bets on either side of the points spread. The points spread is adjusted by the bookmaker, in the period before the game takes place, in response to flows of money either side of the line. For example, a large volume of money on the Redskins minus six may cause the bookmaker to revise the points spread, so that the Redskins are favoured to win by, say, eight points. Note that it is the spread which is adjusted and not the odds of 10 to 11. The spread observed at the end of the betting period will reflect interaction between the demand (bettors) and supply (bookmakers) sides of the market. The National Football League (NFL) betting market has been extensively analysed, inter alia, by Gandar et al. (1988), Lacey (1990), Golec and Tamarkin (1991), Dare and MacDonald (1996), Gray and Gray (1997), Vergin (1998), Vergin and Sosik (1999), Osborne (2001), and Woodland and Woodland (2000).

In Europe, most sports betting is based on fixed odds which are announced several days before a fixture takes place and which, generally, are immovable despite weight of money or announcements of new information about the teams. Odds are typically quoted on home win, draw and away win. A bettor who bets on all three outcomes simultaneously will in the case of UK soccer (the largest sports betting market in Europe) lose around 10.5 pence for a £1 stake. This loss represents the bookmaker's commission or over-round.

In this chapter, we examine two further types of sports betting market. First, traditional British bookmakers, with high street retail shops, offer handicap betting on rugby league. This betting market differs only in detail from the US NFL betting market. Second, by contrast, index betting is a radically different style of betting from anything found in the US.
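The fixed-odds over-round just described can be computed directly. The home/draw/away odds in this sketch are illustrative decimal odds, not quotes from any actual bookmaker:

```python
# Hedged sketch of the over-round in a fixed-odds home/draw/away market.
# Staking so that every outcome returns the same gross amount, the bettor's
# guaranteed loss per unit of total stake is 1 - 1/booksum. Odds illustrative.
def overround(decimal_odds):
    booksum = sum(1.0 / o for o in decimal_odds)  # exceeds 1 for a bookmaker
    guaranteed_loss = 1.0 - 1.0 / booksum         # per unit of total stake
    return booksum, guaranteed_loss

booksum, loss = overround([2.0, 3.0, 3.6])  # hypothetical home, draw, away odds
```

With these illustrative odds the book sums to about 1.11 and the guaranteed loss is 10 pence in the pound, close to the roughly 10.5 pence typical of UK soccer fixed-odds betting.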
The index betting market is relatively recent, covering many sports and a large variety of possible subjects for betting, from match scores to more specific features such as the number of cautions in a soccer match (see Harvey, 1998, for an entertaining layperson's account; Haigh (1999) and Henery (1999) provide technical expositions). Index bets, called 'spread bets' in the UK, are usually made by telephone to an index betting firm. A bettor can buy or sell points around the offered spread, which is quoted on the Internet and on television text services. Our application below is to English rugby league and we can take, as an example, a quote by an index firm for Wigan to beat Salford by eight to eleven points. A bettor can 'buy' Wigan at the top side of the margin, eleven points. The total won or lost equals the unit stake multiplied by the absolute deviation of the actual points difference of the match from the predicted point at which the bet was placed. Suppose Wigan actually wins by just three points and the unit stake is £5. Then the bettor loses £5 ∗ (11 − 3), which is £40. In contrast, if Wigan won by fourteen points the bettor wins £5 ∗ (14 − 11), which is £15. Alternatively, the bettor could 'sell' Wigan at the lower value of the spread, here eight points. This bettor believes that Salford will perform better than indicated by the spread. If Wigan wins by three points, then selling the spread at eight will return £25, or £5 ∗ (8 − 3). It is clear from this simple example that a modest unit stake can generate large gains and losses in this market, especially when compared to the likely gains and losses for a similar unit stake in the less risky fixed odds market. Index betting carries more risk than conventional betting because the magnitudes of potential gains and losses cannot be known in advance.
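The settlement arithmetic in the Wigan–Salford example can be captured in a short function. This is an illustrative sketch, not any index firm's actual settlement code:

```python
# Sketch of index ('spread') bet settlement, reproducing the Wigan-Salford
# example in the text: spread quoted at 8 (sell) to 11 (buy), unit stake £5.
def spread_pnl(side, level, actual_margin, unit_stake):
    """Profit (+) or loss (-) on a spread bet; side is 'buy' or 'sell'."""
    if side == 'buy':
        return unit_stake * (actual_margin - level)
    return unit_stake * (level - actual_margin)

buy_loses = spread_pnl('buy', 11, 3, 5)    # Wigan win by 3:  5 * (3 - 11) = -40
buy_wins  = spread_pnl('buy', 11, 14, 5)   # Wigan win by 14: 5 * (14 - 11) = 15
sell_wins = spread_pnl('sell', 8, 3, 5)    # sold at 8, win by 3: 5 * (8 - 3) = 25
```

The unbounded `actual_margin` is exactly why the magnitudes of gains and losses cannot be known in advance: the payoff is linear in the final score difference rather than capped at the stake.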


The handicap betting market is restricted to rugby league, as far as UK sports are concerned. It is organised around quotations of handicaps by bookmakers, who will usually offer a wide range of betting services on various sports, either by telephone accounts or in retail outlets. Again, an example will clarify what is involved. Suppose a bookmaker quotes the Wigan–Salford rugby league match at plus 10. A bet on Wigan, here the favourite to win, will be successful if Wigan beats Salford by more than ten points. A bet on Wigan loses if Wigan does only as well as or less well than the quote of 'plus 10' indicates (i.e. Wigan must win by at least eleven for the bettor to win; otherwise the bet loses). Note that there is no equivalent to the 'push' present in US sports, where stakes are returned to bettors. In contrast, the bettor could back the outsider, Salford. If Salford does better than lose by ten points then the bet is successful. In rugby league handicap betting, the bookmaker offers fixed odds of 5 to 6, so a winning bet offers a profit of £5 for every £6 wagered. If the stake is £6, a winning bet returns a total of £11 to the bettor. In rugby league betting, handicaps tend to be fixed in the build-up to matches whereas index betting quotes are allowed to vary.

In this chapter, we are concerned with the question of whether the index and handicap sports betting markets are efficient. By efficiency, we shall mean the absence of opportunity for bettors to obtain a positive expected value from a particular betting strategy. This absence of a profitable trading strategy is weak-form efficiency as defined by Thaler and Ziemba (1988). Economists are naturally interested in whether markets are efficient, and betting markets offer an excellent opportunity to study efficiency due to the precise nature of the events in the market. Unlike markets for shares and bonds, sports fixtures have well-defined termination points and clearly defined trading periods.
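The handicap settlement rule described above, at fixed odds of 5 to 6, can be sketched as follows. This follows the text's reading that there is no push: on a margin of exactly ten, the favourite's backer needs eleven or more and the underdog's backer needs nine or fewer, so both sides lose:

```python
# Sketch of rugby league handicap settlement at the fixed odds of 5 to 6
# described in the text, using the Wigan (plus 10) v. Salford example.
# Assumption drawn from the text: no 'push', so a margin of exactly ten
# points loses for backers of both the favourite and the underdog.
ODDS_NUM, ODDS_DEN = 5, 6  # win 5 for every 6 staked

def handicap_total_return(back_favourite, handicap, favourite_margin, stake):
    """Total returned to the bettor (stake plus winnings), or 0.0 on a loss."""
    if back_favourite:
        covered = favourite_margin > handicap
    else:
        covered = favourite_margin < handicap
    return stake * (1 + ODDS_NUM / ODDS_DEN) if covered else 0.0

wigan_covers = handicap_total_return(True, 10, 11, 6.0)   # Wigan by 11: 11.0 back
exact_margin = handicap_total_return(True, 10, 10, 6.0)   # exactly 10: bet loses
salford_covers = handicap_total_return(False, 10, 9, 6.0) # Wigan by only 9: 11.0
```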
If market inefﬁciency is observed in sports betting, we would then wish to discover whether traders can act proﬁtably upon this. In our rugby league betting context, there are two ways in which inefﬁciency may occur. First, it is possible that the variations in handicap, or spread midpoint in the index market, are not matched one-for-one by variations in actual points differences between teams. Then the handicap or spread midpoint would not be an unbiased predictor of game outcomes and there would be favourite–underdog bias. Even then, transactions costs such as commissions may be too great to permit proﬁtable trading and efﬁciency may be sustained. A further source of bias occurs when home-ﬁeld advantage is not fully reﬂected in the spread or handicaps that are set. This is home–away bias. Home-ﬁeld advantage, where home wins are a disproportionate share of match outcomes, is a common phenomenon in team sports. From North America, Schlenker et al. (1995, p. 632) report that ‘in several studies, covering amateur and professional baseball, (American) football, basketball, and ice hockey, home teams have been found to win more often than visiting teams, usually anywhere from 53% to 64% of the time’. In our case of rugby league, our sample reveals a 60 per cent win rate for home teams. In English League soccer, where draws (ties) are a frequent outcome, home teams win about 48 per cent of all games and away teams only about 24 per cent (Forrest and Simmons, 2000).


Reasons for home-ﬁeld advantage include familiarity of the home team with speciﬁc stadium and ground conditions, greater intensity of support from home fans compared to (usually much fewer) away fans, disruption to players’ off-ﬁeld routines and physical and mental fatigue associated with travelling to away grounds (Schwartz and Barsky, 1977; Courneya and Carron, 1992; Clarke and Norman, 1995). In addition, it has been alleged that home fans can exert inﬂuence on refereeing decisions in a match in favour of the home side (Courneya and Carron, 1991; Garicano et al., 2001). Our concern here is not whether this home-ﬁeld advantage exists (it clearly does) but whether it is correctly incorporated into betting markets via handicaps or spreads. If not, the question follows: can bettors take advantage of this bias to make abnormal proﬁts, which in turn violates market efﬁciency? A deeper question, which we are unable to answer here due to lack of data, is whether inefﬁciency can persist over time or whether rational arbitrageurs eliminate mispricing in the betting markets. The methods that will be used to consider these questions of betting market efﬁciency are, ﬁrst, the use of regression analysis to investigate existence and sources of bias (if any) and, second, the use of simulation to examine the proﬁtability of various betting strategies which may be guided by the results of the regression analysis. The remainder of this chapter is set out as follows. In the section on ‘Institutional background to English rugby league and data’, we outline the nature and structure of English rugby league and describe our data set. In the section, ‘A model of market efﬁciency’, we develop our empirical model, with particular attention to the identiﬁcation of home–away bias. Regression results reported in the section on ‘Tests for market efﬁciency using regression analysis’ show that index ﬁrms do incorporate home-ﬁeld advantage fully into their quoted spreads. 
In contrast, though, bookmakers fail to incorporate home-ﬁeld advantage fully into handicaps, to varying degrees according to choice of bookmaker. We attempt an explanation of the contrasting results from bookmaker and index betting markets. This motivates the attempt in the section on ‘Evidence from simulations of betting strategies’ to explore simulations of various betting strategies, including the use of ‘lowest quotes’ found by comparison of quotes across index ﬁrms and bookmakers. The ﬁnal section concludes.

Institutional background to English rugby league and data

English rugby league is a game which originated as a variation of 'rugby' in the nineteenth century. Until recently, it was played predominantly in the North of England, specifically in Lancashire and Yorkshire. A rugby league match consists of two teams of thirteen players chasing, running with, kicking and throwing an oval shaped ball on a rectangular turf pitch about 100 metres long. Throughout 80 minutes of play, one team employs speed, strength and aggression to try to
transport the ball to the opposite end of the pitch to earn a ‘try’, similar to a touchdown in American Football. The other team uses similar traits to try to stop them. Thousands of fans show their support by dressing up in team replica shirts, jumping up and down and bellowing words of encouragement, hindrance and reprimand at appropriate times. A referee and two ‘touch’ judges attempt to maintain a sense of order by intermittently blowing a whistle and waving their arms about. The sport is noted for its physical contact, with minimal protection for players from equipment or the laws of the game. Fans tend to regard this as a positive feature and often show disdain for the less physical, but more popular, game of soccer. Points are awarded in the match for scoring goals, tries and conversions. A goal is scored by kicking the ball over the crossbar of the opponent’s huge H-frame and is worth one point. A try is achieved by placing the ball on the ground at the opponent’s end of the pitch for which four points are given. On scoring a try, a team is given the opportunity to score a goal. This is known as a conversion and earns two further points. Team quality varies considerably. It is possible for a strong, dominant team to approach a score of 100 points in a match. Conversely, weak teams may score zero points although a nil–nil scoreline is extremely rare, unlike soccer. In our sample, the highest number of points recorded by one team in a match was ninety-six and the biggest points difference was eighty; but for 90 per cent of matches, supremacy was less than thirty-eight points. Scorelines tend to resemble those found in American Football. Our data relate to the English Rugby Super League over the period 1999–2001 (statistical details can be found on http://uk.rleague.com). Although previously a winter game, declining audiences exacerbated by the growth in popularity of soccer induced the rugby league authorities to reschedule the season from March to September. 
The Super League represents the highest level of rugby league played in the UK. Over our sample period, there were fourteen teams in the 1999 season, and twelve in the other seasons. The authorities allow very limited and discretionary promotion opportunities from the second tier, currently known as the Northern Ford Premiership, but in 2001 one team (Huddersﬁeld-Shefﬁeld) was relegated to be replaced by the Premiership champions, Widnes. Most of the teams come from a concentrated region in the North of England of mostly small towns, such as Castleford, Halifax, St Helens and Wigan. Currently, the dominant teams are Bradford, Leeds, St Helens and Wigan. Soccer and rugby league do not co-exist well and neither Liverpool nor Manchester, famous for their soccer teams, has a rugby league franchise, despite being located close to rugby league territory. London does have a Super League franchise but is the only southern based team. Some teams (Wigan, Halifax) share stadia with local soccer clubs but the cities of Bradford and Leeds, which each has a sizeable soccer club, have separate stadia. Each team played thirty games in 1999 and twenty-eight thereafter. Two points are won by the victor and, in the unusual event of a draw, each team receives one point. The top ﬁve ranking teams at the end of the season enter the play-offs. These consist of six knockout style matches culminating in the Grand Final to determine one deﬁnitive champion team. In 2001, it was Bradford who defeated Wigan to win this honour. This structure ensures a competitive atmosphere through the season
as a team needs only to be in the upper 40 per cent of the league ﬁnal placings for a chance to end the year as grand champions. Over three seasons, and for a maximum of 497 matches for which we had accessible records, we collected (a) the date and the teams involved; (b) which team was the favourite and at which team’s stadium the match was held; (c) the match outcome; and (d) index ﬁrm spreads and bookmaker handicaps. Four index ﬁrms’ point spreads were quoted in the daily betting newspaper, the Racing Post and we selected four bookmakers for whom handicaps were available. The selected bookmakers comprise the three biggest retailers (Corals, William Hill and Ladbrokes) in the UK plus a wholesale supplier of odds and handicaps to independent bookmakers, Super Soccer. These data were not available electronically and library archives were searched for the data. For some weeks, issues of the newspaper were unavailable and, where they were, not all spreads or handicaps were quoted by each index ﬁrm or bookmaker. This means that sample sizes for our regression analysis will vary according to which index ﬁrm or bookmaker is the object of attention. In particular, we only have information on index betting markets for 1999 and 2000 whereas we were able to obtain information on handicaps for the additional 2001 season. Compared to North American literature on sports betting, we have very small sample sizes which are an inevitable result of the immaturity of the markets which we are studying. This means that our conclusions will necessarily be tentative.

A model of market efficiency

One might test for market efficiency in rugby league by inspecting the coefficients from estimation of a regression model:

    Yh = αh + βxh + random error                (1)

where Yh denotes the home team's points scored minus the points scored by the away team, xh denotes the handicap or midpoint of the index firm spread awarded to the home team, and αh and β are coefficients to be estimated. A test of weak-form efficiency would be an F-test of the joint restrictions αh = 0, β = 1.¹ A departure of the constant term from zero partly captures some home–away bias. If β > 1 then we have favourite–underdog bias, where favoured teams are more likely to cover the index quote or handicap than the offered values suggest. If β < 1 then we have reverse favourite–underdog bias, where underdog teams (whose handicap and index quotes will be the exact opposite of those for the favoured teams) are more likely to cover their index quote or handicap than the offered values suggest.

It should be stressed that odds are fixed and invariant across matches in the handicap betting market for rugby league; all handicap bets are struck at the same odds of 5 to 6. This means that favourite–longshot bias, where favourites offer odds of better value than outsiders, cannot arise in rugby league betting. Since all bets have the same odds, the same range of bettors' wealth is covered and the potential for favourite–longshot bias is removed.
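As an illustration of the test implied by equation (1), the following sketch fits the regression on simulated data generated under the null of efficiency (αh = 0, β = 1). The sample size and error scale are invented, not taken from the chapter's data:

```python
import numpy as np

# Hedged sketch of the efficiency regression in equation (1): actual points
# difference regressed on the quoted handicap. Data are simulated under the
# null alpha_h = 0, beta = 1, so the estimates should sit near (0, 1).
rng = np.random.default_rng(0)
n = 500
handicap = rng.normal(0.0, 8.0, n)                 # quoted handicaps x_h (invented)
points_diff = handicap + rng.normal(0.0, 12.0, n)  # outcomes Y_h under efficiency

X = np.column_stack([np.ones(n), handicap])        # intercept plus handicap
coef, *_ = np.linalg.lstsq(X, points_diff, rcond=None)
alpha_hat, beta_hat = coef
```

Under the null the fitted intercept and slope should be statistically indistinguishable from 0 and 1; in practice the chapter tests this jointly with an F-test rather than by eyeballing the point estimates.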


However, favourite–underdog bias may remain. For instance, sentiment may encourage fans to bet on ‘their’ team to win the match. This represents an overspill of fan affection from the pitch to the betting market, where fans place a wager in order to enhance their ‘stake’ in their team’s match result. This sentimental behaviour could generate favourite–underdog bias if the favourite has stronger fan support than the underdog, where favourites tend to be larger clubs. In the Superleague, it is indeed the case that the top clubs in League rankings tend to have the greatest support. The literature on NFL betting, which has the closest North American resemblance to rugby league handicap betting, offers mixed conclusions regarding efﬁciency. Authors who ﬁnd evidence of inefﬁciency include, inter alia, Golec and Tamarkin (1991), Gandar et al. (1988) and Osborne (2001). Vergin and Sosik (1999) report home–away bias in NFL betting on ‘national focus’ games, regular season games that are nationally televised and playoff games.2 Sauer et al. (1988) could not reject efﬁciency. Gray and Gray (1997) found some evidence of inefﬁciency but also found that exploitable biases were removed over time. In the NFL, market efﬁciency must imply that points spreads offered in the handicap market are unbiased measures of the relative strengths of the competing teams. As suggested in our interpretation of equation (1), the points spread in the betting market should not be systematically greater or less than the actual difference between home and away team points. As pointed out by Golec and Tamarkin (1991) in their analysis of points-spread NFL betting, the application of the above test procedure for efﬁciency, embodied in equation (1) (or its probit counterpart used by Gray and Gray (1997)), is deﬁcient if home team betting quotes are used to predict match scores deﬁned as home team points minus away team points. 
Equation (1) is acceptable as a basis for estimation and testing of efﬁciency only if there is no speciﬁc bias in the market. The problem identiﬁed by Golec and Tamarkin (1991) is that the model in equation (1) masks offsetting biases. The constant term measures the average of biases that are invariant to the size of points spread. If half of the observations in the sample of matches have a positive bias in the constant term and the other half a negative bias of equal size, then the constant term is zero, yet biases exist. In NFL and rugby league, a bias in favour of home teams implies a bias against away teams. If offsetting biases are hidden, estimation of equation (1) produces a constant term of zero and also, as shown by Golec and Tamarkin (1991), a βh parameter that is biased towards one, since betting lines are distorted. Favourite–underdog bias would be incorporated into both the constant term and βh . Estimation fails to reject market efﬁciency, even though biases and inefﬁciency exist. The test of market efﬁciency requires some modiﬁcation to equation (1) so as to separate favourite–underdog bias from simultaneous home–away bias. The procedure recommended by Golec and Tamarkin (1991), which we will follow for the case of rugby league betting, is to select home and away teams randomly from our sample and to create a dummy variable, HOME, which takes the value of one
if the selected team is at home and zero if it is away. The model then becomes:

    Yi = αi + βi x + γi HOME + error                (2)
where the subscript i denotes the randomly selected team. This revised model allows us to test simultaneously for favourite–underdog bias and home–away bias. If γ > 0, then index quotes or handicaps for home teams are, on average, lower than actual points achieved by the home team relative to the away team. This holds regardless of the values of quotes or handicaps. Bettors would be predicted to be relatively more successful if they back home teams rather than away teams. Conversely, if γ < 0, then index quotes or handicaps for away teams are, on average, lower than actual points achieved by away teams relative to home teams and bettors would be relatively more successful if they back away teams. Why should a sports betting market exhibit home–away bias at all? Surely such a bias is indicative of irrationality on the part of traders, particularly as home advantage is well-known? Much depends on the type of bettor being observed. Following Terrell and Farmer (1996), we can usefully distinguish between ‘professional’ and ‘pleasure’ bettors. Professional bettors only bet when they perceive an expected proﬁt. These bettors utilise available information fully and undertake a bet as an investment. In contrast, ‘pleasure’ bettors consider a bet as a complement to the sporting activity which is the object of the wager. In our case of rugby league, a signiﬁcant proportion of potential bettors may be fans who would consider betting, conditional on value, to give themselves more of a stake in the outcome. The bet adds to the fun and excitement of watching, or following, a particular rugby league team in a match. Index ﬁrms and bookmakers would be expected, consistent with proﬁt-maximising calculations of index quotes and handicaps, to take account of how sensitive this segment of the market is to index quotes and handicaps. 
If the 'pleasure' bettors are primarily home fans who bet (if at all) on home teams, then we may detect home–away bias, reflected in a non-zero coefficient on γ in the estimation of equation (2).³

There are two possible outcomes for a non-zero value of γ. First, price-discriminating bookmakers seek to exploit the sentiment of home fans by offering particularly unfavourable quotes or handicaps to these fans. Home fans are perceived as having inelastic demand and, by taking advantage of this fact in setting index quotes or handicaps, the home–away bias generates a negative value of γ. The opposite case is where fans are perceived as having elastic demand. Bookmakers and index firms may then set especially favourable terms of bets in order to attract greater turnover from home fans. In this case, we would observe a positive value of γ. Hence, existence of home–away bias is not prima facie evidence of market irrationality but may reflect the utility that fans derive from supporting their team in the betting market and be an optimal discriminatory response by bookmakers or index firms to differing demand elasticities between groups of bettors. It is most likely that bookmakers are maximising expected profits over a number
of bets and not expected proﬁts per bet. Studies of US sports betting markets tend to assume that bookmakers operate a balanced book on any particular game. A balanced book is not a requirement for bookmakers to earn proﬁts in English rugby league (or soccer) betting. A policy of offering more favourable index quotes or handicaps to home fans may generate an elastic response from betting volume. Then, the more favourable odds for home fans may be consistent with both proﬁt-maximising behaviour, assuming some possibilities for price discrimination in an imperfectly competitive betting market, and home–away bias. Offering more favourable quotes or handicaps need not imply losses for index ﬁrms or bookmakers, so long as the bet remains unfair. There remains the possibility, though, that bettors (in the aggregate) do not accurately estimate home advantage, for sentimental or other reasons, which in turn results in bookmakers setting handicaps that are not consistent with market efﬁciency. However, following the distinction between ‘pleasure’ and ‘professional’ bettors developed by Terrell and Farmer (1996), we would predict that fan sentiment and associated home–away bias are more prevalent in the handicap betting market than the more ‘exclusive’ index betting market. The higher risks attached to returns in the latter case imply a greater premium on information processing and less room for fan sentiment.

Tests for market efficiency using regression analysis

Estimation of equation (2) is by Ordinary Least Squares (OLS), with White standard errors used to correct for heteroscedasticity. Simply estimating once with a randomised sample would not be adequate, as in addition to sampling matches from a population we would be further sampling a set of bets, namely those on teams picked out in the randomisation. A single set of estimates will not have reliable standard errors. Accordingly, for each index firm, and for each bookmaker, we repeat the randomisation procedure and estimation twenty times. The statistical significance of the coefficients can be examined using a procedure outlined in Snedecor and Cochran (1967). We count the number of cases in the twenty trials where a particular coefficient is significant at the 5 per cent level. A normal approximation can be used to test the null hypothesis that the 'true' proportion of cases where the coefficient is not equal to zero is 5 per cent. If the null is true, the observed proportion of rejections, R, is distributed normally with mean r and standard deviation s = √(r(1 − r)/n), where n is the number of trials, here twenty. The normal deviate, with a correction for continuity, is z = (|R − r| − 1/(2n))/s. The critical value for this test statistic is 2.33 at a conservative 1 per cent significance level. If there are four significant (at 5 per cent) coefficients out of twenty in our trials, the value of z is 2.57, which exceeds the 1 per cent critical value. Hence, where there are four or more significant coefficients amongst twenty trials, we conclude that the particular coefficient is significantly different from zero. In the case of the coefficient β, though, we are concerned with whether this is significantly different from one, and a similar procedure can be adopted for this case.
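The counting procedure is easy to reproduce. The following sketch computes the continuity-corrected normal deviate and recovers the value of 2.57 quoted in the text for four significant coefficients out of twenty:

```python
import math

# The Snedecor-Cochran counting check described in the text: with n = 20
# trials and r = 0.05, how unusual is a given number of coefficients found
# 'significant at the 5 per cent level'?
def count_test_z(num_significant, n=20, r=0.05):
    R = num_significant / n                   # observed rejection proportion
    s = math.sqrt(r * (1 - r) / n)            # null standard deviation
    return (abs(R - r) - 1 / (2 * n)) / s     # continuity-corrected deviate

z_four = count_test_z(4)  # four significant coefficients out of twenty
```

For four rejections out of twenty, z is about 2.57, above the 2.33 critical value, whereas a single rejection (the expected number under the null) gives a deviate well below it.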

Handicap and index betting markets

123

Equation (2) was estimated over twenty trials for each of four index firms offering spreads on rugby league matches and four bookmakers offering handicaps. In addition, for both index and handicap betting markets we report results from using the lowest spread or handicap available amongst the index firms and bookmakers, respectively. If fewer than four quotes were available, the lowest of those available was taken, and if just one quote was available that was selected. This represents the 'best' index spread or handicap that, in respect of the focus team, could be obtained by shopping amongst the index firms or bookmakers. Tables 11.1A and 11.1B report our results. The coefficients shown are mean values across twenty trials. The figures in parentheses indicate in how many trials the particular coefficient estimate was significantly different from the value specified by the null (zero or one). Where this number is four or more, we can reject the null hypothesis. The following sub-sections summarise the regression results reported in Tables 11.1A and 11.1B.
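A minimal sketch of one estimation trial on synthetic data (the data-generating values and variable names here are ours, purely illustrative): regress the points difference on the handicap and a home dummy, with White (HC0) heteroscedasticity-consistent standard errors computed by hand.

```python
import numpy as np

def white_ols(X, y):
    """OLS coefficients with White (HC0) heteroscedasticity-consistent SEs."""
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    cov = XtX_inv @ (X.T * resid**2) @ X @ XtX_inv   # sandwich estimator
    return beta, np.sqrt(np.diag(cov))

# Synthetic 'trial': true slope on the handicap is one (no favourite-underdog
# bias) and a four-point home advantage is left out of the handicap.
rng = np.random.default_rng(0)
n = 2000
handicap = rng.normal(0, 8, n)                 # quoted handicap for the focus team
home = rng.integers(0, 2, n).astype(float)     # 1 if the focus team is at home
points_diff = handicap + 4.0 * home + rng.normal(0, 13, n)

X = np.column_stack([np.ones(n), handicap, home])
beta, se = white_ols(X, points_diff)
# beta[1] should sit close to one and beta[2] close to four, mirroring the
# pattern of coefficients in Tables 11.1A and 11.1B.
```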

Table 11.1A OLS estimation of actual points differences in handicap betting with twenty trials

Variable       Bookmaker 1   Bookmaker 2   Bookmaker 3   Bookmaker 4   Lowest handicap
CONSTANT       −2.202 (9)    −2.298 (12)   −1.703 (4)    −1.701 (3)    0.117 (0)
HANDICAP       1.012 [0]     1.010 [0]     1.009 [0]     1.015 [0]     1.003 [0]
HOME           4.830 (20)    4.885 (20)    3.640 (20)    3.145 (20)    3.985 (20)
R² (average)   0.50          0.50          0.49          0.49          0.49
N              441           481           468           454           487

Dependent variable is points difference between randomly selected focus team i and its opponent.
Notes: Table shows mean coefficients across twenty trials; ( ) is number of cases where coefficient estimate is significantly different from zero; [ ] is number of cases where coefficient estimate is significantly different from one.

Table 11.1B OLS estimation of actual points differences in index betting

Variable       Firm 1        Firm 2        Firm 3        Firm 4        Best quote
CONSTANT       −1.288 (1)    −1.161 (1)    −0.490 (2)    −1.180 (3)    0.606 (1)
SPREAD         0.983 [0]     1.013 [0]     1.032 [0]     0.965 [0]     0.996 [0]
HOME           2.927 (0)     2.807 (0)     1.175 (0)     3.146 (0)     2.537 (0)
R² (average)   0.48          0.50          0.50          0.48          0.49
N              301           294           284           296           310

Dependent variable is points difference between randomly selected focus team i and its opponent.
Notes: SPREAD denotes midpoint of index firm's point spread; table shows mean coefficients across twenty trials; ( ) is number of cases where coefficient is significantly different from zero; [ ] is number of cases where coefficient is significantly different from one.

124

R. Simmons, D. Forrest and A. Curran

Neither index nor handicap betting markets exhibit favourite–underdog bias

In all 200 trials there is not a single case where the coefficient β is significantly different from one. Average point estimates are very close to one, for each index firm, for each bookmaker and for the minimum spread and handicap. It seems, from our limited sample sizes, that index spread midpoints and bookmaker handicaps are each accurate predictors of rugby league scorelines, in the specific sense that a unit increase in index firm spread midpoint or in handicap is reflected one-for-one in the actual points difference between teams in a match. This is true for all index firms and all bookmakers in our data set. However, it does not follow that other biases in the setting of index quotes and handicaps are absent.

Bookmaker handicaps do not fully incorporate home advantage in rugby league

The γ coefficients are positive and significant at the 5 per cent level in all twenty trials for each of the three main retail bookmakers and for the minimum handicap across the four bookmakers. For the specialist odds-setting firm (bookmaker 4), γ coefficients are positive and significant at the 10 per cent level in all twenty trials but significant at the 5 per cent level in only two cases, below the critical threshold of four. This gives a strong indication that handicaps on home teams under-predict actual points differences in favour of these teams. The scale of home–away bias can be discerned from the size of the coefficients. Across twenty trials these are 4.83, 4.89 and 3.64 for the retail bookmakers, 3.15 for the specialist odds-setter and 3.99 for the minimum handicap. Hence, a team playing at home earns an average of three to five points more than if it plays away, for any given handicap. We shall examine below whether this discrepancy of roughly four points between home and away teams can be utilised to make abnormal returns.
On first sight, one would expect that this home–away discrepancy, which could not have been revealed without randomisation, offers the potential for higher returns from backing home teams compared to away teams.

Index firms fully incorporate home advantage into their spreads; the index betting market is efficient

The results from inspection of γ coefficients for the four index firms are extremely clear. Although these coefficients are always positive, none is significant at the 5 per cent level for any of the index firms or the minimum quote across firms. Since the constant term is also insignificant under our criterion for evaluation of trials, we are left with the conclusion that the index betting market for rugby league is weak-form efficient. There are no biases revealed in this market for the bettor to exploit: the constant term and the coefficient on the home dummy are not significantly different from zero, and the coefficient on the spread midpoint is not significantly different from unity.


It may be the case that we have not fully explored all the possibilities for bias in either the index or handicap market. We extended our model in three further directions. First, we considered the possibility of semi-strong inefﬁciency (Vaughan Williams, 1999) with respect to ‘fundamental’ information. We added to equation (2) variables to represent cumulative (season to date) points ratios, deﬁned as points divided by maximum possible. F -tests showed that these did not add signiﬁcant explanatory power to our model. In contrast, adding index spread midpoint or handicaps did add signiﬁcantly to a model containing home dummy and cumulative points ratios as variables. We conclude that both the index and handicap betting markets are semi-strong efﬁcient. Second, following Forrest and Simmons (2001), we explored the notion that fan support could affect efﬁciency in the markets. In the context of rugby league, teams with large fan support (such as Bradford and Leeds) might deliver higher points differences in their matches beyond those predicted by handicaps or index spread midpoints. To capture this possibility, we created a variable to denote the difference in previous season’s average home attendance for any two teams in a match. The coefﬁcient on this ‘difference in attendance’ variable was found by Forrest and Simmons to be positive and signiﬁcant in most divisions and most seasons for English soccer. In rugby league the ‘difference in attendance’ variable was never signiﬁcant in any trial for either index betting or handicap betting markets. Also in soccer, Dobson and Goddard (2001) argue that the underestimation of home advantage may vary along the range of odds in ﬁxed-odds betting. They report an advantage to bettors (with superior returns in the 1998–99 English soccer season) from ‘betting long’ on away teams. 
For rugby league, our third extension was to test their proposition by including an interaction variable to denote the product of home dummy and either spread midpoint or handicap. This 'home times spread/handicap' variable was not significant in any trial.

This leaves us with a puzzle: why is the index betting market efficient relative to the bookmaker handicap market? Why do parallel betting markets deliver different outcomes, particularly in terms of the presence of home–away bias? One possible rationale for the appearance of home–away bias in the handicap market together with its absence in the index market may lie in the constituency of each market. The index betting market can be characterised as comprising bettors who are not committed to particular teams. Some of these are sophisticated, professional bettors who simply desire a positive expected return. Others gain pleasure from the betting activity per se, but are not experts. To bet with index firms these bettors must be creditworthy and must be prepared to accept a higher variance of returns in the index market compared to the markets supplied by bookmakers. This combination of high risk and high returns offers an incentive for bettors to acquire and process more information surrounding the bet. In contrast, many investors in handicap betting markets may be fans who see their outlay as part of a general financial and emotional stake in their team of allegiance. With a constituency populated largely by fans, the handicap betting market may be more prone than the index betting market to forces of bettor 'sentiment' (as termed by Avery and Chevalier (1999) in the context of NFL betting).


It is the absence of 'fan' bettors in the index market that is critical here. Index firms can choose a spread to maximise profit (on both sides of the quote). With efficiency, 'pleasure' bettor losses will be exactly offset by professional bettor gains, and index firms rely on the over-round to generate profit.4 In the handicap market, a bias in the direction of home fans is acceptable to bookmakers so long as the bias is not sufficient to generate a positive expected return to these bettors. Index firms, lacking the presence of 'fan' bettors, do not enjoy this counter-balance and must offer more efficient, unbiased quotes in order to avoid a loss. In an interesting parallel, some US economists, such as Golec and Tamarkin (1991), have pointed to the relative efficiency of college football betting markets compared to betting on the professional game. Their argument is that amateur or 'pleasure' bettors are prevalent in the NFL betting markets but are largely absent from the college game, which is less exposed to media coverage and publicity. According to Gandar et al. (1988), commenting on handicap betting in the NFL, which we have argued is a reasonably close approximation to handicap betting in rugby league: 'the pool of money wagered by the unsophisticated public dominates the pool of money wagered by knowledgeable bettors'. The contrasting composition of bettor types between handicap and index betting markets could help explain why bookmakers can afford to set less efficient quotes than index firms.

Evidence from simulations of betting strategies

The regression analysis points to efficiency in the index betting market and a positive home–away bias in the bookmaker handicap betting market. The purpose of this section is to investigate the profitability of various betting strategies in order to provide a check against the model. In the index betting market, we predict no success from a set of betting strategies. That is, returns should not be positive for sets of index bets placed with any of our four firms. Since home–away bias has been detected in the handicap betting market, we can determine whether a strategy of backing home teams would deliver positive profits for bettors. Ideally, the model of the rugby league betting market should be used to make out-of-sample forecasts rather than making predictions from the existing sample. Also, Sauer (1998) notes the tendency for allegedly successful trading rules for profitable betting to disappear over time (see Tryfos et al. (1984) and Sauer et al. (1988) for examples from the NFL). To track the persistence of profitable betting strategies would require, as Sauer (op. cit.) and Osborne (2001) note, a large number of observations covering many seasons. Unfortunately, in the case of rugby league betting, we are constrained by the small number of available observations to assess the profitability of betting strategies by simulation within the sample. What denotes successful betting in rugby league? In the handicap market, success is easily gauged as a win rate defined as the number of wins as a proportion of the total number of bets placed. A win rate above 54.55 per cent would indicate a profitable strategy over our sample period.5 In the point spread market, the overall return can be computed, which is the sum of winnings and losses from all bets made.
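The thresholds quoted here follow mechanically from the odds; a sketch of the arithmetic (the draw adjustment mirrors the calculation in note 6):

```python
# Breakeven win rate at handicap odds of 5-6: a £6 stake wins £5 profit,
# so bets must win 6/11 of the time just to break even.
stake, win_profit = 6.0, 5.0
breakeven = stake / (stake + win_profit)
print(round(100 * breakeven, 2))           # 54.55 per cent

# Expected return from random betting, allowing for the 2.35 per cent of
# matches that are drawn (all handicap bets then lose): ignoring draws a
# bettor wagers £109 to receive £100, and draws inflate the outlay further.
outlay = 109 * 1.0235                      # about £111.6 wagered per £100 returned
take_out = (outlay - 100) / outlay
print(round(100 * take_out, 1))            # 10.4 per cent expected loss
```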

Handicap and index betting markets

127

Table 11.2 Example of index betting from a match with point spread (8–11)

Bets taken        Return from    Return from    Return from
                  outcome = 3    outcome = 9    outcome = 16
(Buy at 11) £2    −16            −4             10
(Buy at 11) £3    −24            −6             15
(Sell at 8) £5    25             −5             −40
Total             −15            −15            −15
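The settlement rule behind Table 11.2 can be reproduced directly. We assume the standard spread-bet convention (not stated explicitly in the text): a buy at price B with stake s per point returns s·(outcome − B), and a sell at price S returns s·(S − outcome).

```python
def settle(side, price, stake, outcome):
    """Profit or loss on a per-point spread bet."""
    diff = outcome - price if side == "buy" else price - outcome
    return stake * diff

# The three bets of Table 11.2, struck against a point spread of (8-11).
bets = [("buy", 11, 2), ("buy", 11, 3), ("sell", 8, 5)]
for outcome in (3, 9, 16):
    returns = [settle(side, price, stake, outcome) for side, price, stake in bets]
    print(outcome, returns, sum(returns))
# Whatever the outcome, the bettors' aggregate loss is 15: five units staked
# on each side of a three-point-wide spread hand the firm 5 x 3 = 15.
```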

In an efficient market, the actual return from employing betting rules should not be significantly different from the expected return, given the over-round in the market, from betting at random. The expected return from random betting in the handicap market is −10.4 per cent.6 The computation of expected return in the index market is best seen using an example. The index firms' advantage lies in the margin between the buy and sell points. Whatever the match outcome, every pound of unit stake that is taken on both sides of the spread guarantees revenue to the index firm equal to the width of the spread, normally three pounds. Table 11.2 shows a simplified example relating to a match with point spread (8–11). When the firms are successful in equating betting volumes placed on either side of the spread, a single bet of £1 per unit placed at random will earn the firm £1.50. Allowing for rare cases where the spread width is four points, not three, the observed average width is 3.06 points. The over-round is then £1.53 per £1 unit stake. We proceed to evaluate the returns to some very simple betting rules applied to our sample, covering the 1999 and 2000 seasons for index betting and the 1999, 2000 and 2001 seasons for handicap betting.7 Initially at least, simulations proceed on the assumption that all bets are placed with the same bookmaker or index firm. This restriction is relaxed below, to permit shopping between bookmakers and index firms.

Bet on all home teams

The positive coefficient observed on the HOME variable in the previous section confirmed the existence of a bias in the home–away dimension. Results of betting on home teams are presented in Table 11.3A for bookmakers offering handicaps and Table 11.3B for index firms. Placing bets on the home team earns superior returns to those of random betting in both markets. A win rate of between 50.4 and 53.5 per cent is achieved in the handicap market, depending on choice of bookmaker.
In line with our regression results, t-tests show that the percentage losses at bookmakers 1 and 2 are significantly lower than random betting would offer, with p-values of 0.01 or less. Although the win rates at bookmakers 3 and 4 are above the 48 per cent rate indicated by random betting, they are not significantly higher than this figure.


Table 11.3A Simulation results from handicap betting on all home teams

                  Bookmaker 1   Bookmaker 2   Bookmaker 3   Bookmaker 4
Number of bets    441           481           468           454
Bets won          236           257           236           231
Win rate (%)      53.5          53.4          50.4          50.9
Profit            −8.3          −9.8          −35.3         −30.5
Profit (%)        −1.88∗        −2.04∗        −7.54         −6.72

Notes: Profit is the difference between returns from winning bets at odds of 5 to 6, including return of stake, and the value of bets placed, and assumes zero tax. ∗ denotes that a t-test of the null hypothesis that computed profit is −10.4 per cent, the value associated with random selection of bets, is rejected at the 1 per cent level.

Table 11.3B Simulation results from index betting on all home teams

                  Firm 1   Firm 2   Firm 3   Firm 4
Number of bets    301      294      284      296
Return            −63      −38.5    −232.5   −62.5
Return per bet    −0.21    −0.13    −0.82    −0.21

With odds of five to six, positive profits from a 'home only' strategy cannot be found at any bookmaker. From Table 11.3B, we see that the observed return per bet for betting only on the home team at each index firm is higher (less bad) than the expected −£1.53. Contrary to our regression results, for three of the index firms' spreads this home bias is proportionately greater than at any of the handicap firms shown in Table 11.3A, although again there are no opportunities for positive profits once the over-round is taken into account.

Bet on favourite or underdog

For these strategies all matches are again covered, since in a two-team contest designating one team as 'favourite' automatically implies that the other is the 'underdog'. Simulation results are summarised in Tables 11.4A (handicaps) and 11.4B (index betting). In the handicap market, betting only on the favourite is a bad strategy: at all four bookmakers the win rate is below the expected rate of 48 per cent, so this strategy is inferior to betting at random. The highest win rate is 46.8 per cent over the period – far short of the 54.55 per cent required for positive returns. The number of times the match result is equal to the handicap, so that all bets lose, is a relevant factor in the success of the strategy. This occurred several times with bookmaker 2 in particular, causing both strategies of betting on and against the favourite to deliver expected returns below normal. In contrast,


Table 11.4A Simulated win rates from betting on favourites or underdogs in the handicap betting market

                        Bookmaker 1   Bookmaker 2   Bookmaker 3   Bookmaker 4
Number of bets          417           451           443           435
Bet on all favourites
  Bets won              195           200           198           202
  Win rate (%)          46.8          44.7          44.7          46.4
Bet on all underdogs
  Bets won              206           224           223           221
  Win rate (%)          49.4          49.7          50.3          50.8

Table 11.4B Simulated returns from betting on favourites or underdogs in the index betting market

                               Firm 1   Firm 2   Firm 3   Firm 4
Number of bets                 302      295      285      297
Bets on all favourites (buy)
  Return                       −337     −339.5   −313.5   −458
  Return per bet               −1.12    −1.15    −1.10    −1.54
Bets on all underdogs (sell)
  Return                       −589     −572.5   −560.5   −442
  Return per bet               −1.95    −1.95    −1.97    −1.49

betting on underdogs delivers slightly higher win rates than would be expected from random play. In the index market, a loss of £1.53 per bet is expected when selection of buy or sell is random. Buying every spread at one firm over two seasons gives an average loss of £1.23, which is not as bad as would be predicted from random selection. A 'buy' strategy is clearly superior to selling all spreads; selling results in a loss of between £1.49 and £1.97 per bet. Placing wagers in the index market according to whether teams are favourites or underdogs cannot provide profit: betting £1 per unit results in a loss of several hundred pounds over the two-year period. Neither betting consistently on favourites nor on underdogs in either market is suggestive of the existence of bias. The small deviations from expected returns seem to be generated by random noise. We suspect that bias in the favourite–underdog dimension is not a source of inefficiency in the betting markets for Super League rugby.8

Shopping for 'best' quotes

The literature on sports betting tends to assume that all bookmakers publish similar odds and spreads and that arbitrage possibilities are absent. In North America, this is a reflection of the influence of a small number of major Las Vegas casinos on sports betting lines, and a remarkable consensus of these lines, which spreads to


both smaller bookmakers and illegal operators (Vergin and Sosik, 1999; Osborne, 2001). In English sports betting, there are more ﬁrms offering a greater diversity of betting products compared to the US. The simulations reported above suggest varying degrees of success depending on which bookmaker or index ﬁrm is selected for trade. In the Racing Post the handicaps and point spreads are published alongside one another for all ﬁrms. Given a particular strategy, it is easy to compare quotes and take advantage of any arbitrage possibilities. There is quite a lot of variance between handicaps and index quotes across ﬁrms. This variation tends to cancel out over time so no one ﬁrm sets quotes systematically too low or too high. Nevertheless, betting at the ‘best’ quote each time might signiﬁcantly improve returns. Already, we have seen from our regression results that a home team bias occurs when the lowest handicap is considered whereas this bias is absent from the handicaps offered by bookmaker 4. The rules tested above are reconsidered here. Each simulated bet is placed on the optimal quote, from the bettor’s point of view, from the choice of four index ﬁrms and, alternatively, four bookmakers. The relevant question is not whether this approach yields higher win rates (it must) but whether or not proﬁts can be earned. Simulation results from selection of optimal prices are shown in Tables 11.5A and 11.5B. Since the Racing Post does not display quotes for all index ﬁrms or bookmakers for all games, we deﬁne optimal quote to be the best quote out of all those available. If only two bookmakers or index ﬁrms offer quotes the better is selected. This is a more conservative selection procedure than disregarding any games where not all quotes are on offer. Clearly, use of optimal prices with selections of all home teams does deliver a proﬁt. The win rate in the handicap market is 57.5 per cent and the index market return per bet is £1.22. 
Shopping between firms delivers profits in each market. Of the other four strategies reported in Table 11.5A, betting on underdogs clearly outperforms betting on favourites, and betting on home underdogs delivers a greater win rate (over 60 per cent) and, unlike backing all underdogs, positive profits. This, again, is a reflection of home bias in the handicap market. Compared to using handicaps from single bookmakers, a selection across bookmakers reduces transactions costs

Table 11.5A Simulated returns from various betting strategies applied to lowest handicaps

                  Bet on      Bet on      Bet on      Bet on home   Bet on home
                  home team   favourite   underdog    favourite     underdog
Number of bets    486         484         484         321           176
Bets won          274         242         267         173           106
Win rate (%)      56.4        50.0        55.2        53.9          60.2
Return            502.0       443.7       489.5       317.2         194.3
Profit (%)        3.29        −8.33       1.14        −1.18         10.40


Table 11.5B Simulated returns from various betting strategies applied to best quotes in the index market

                  Bet on      Bet on      Bet on      Bet on home   Bet on home
                  home team   favourite   underdog    favourite     underdog
Number of bets    310         306         300         195           189
Return            379.5       169         −158        162.5         −278
Return per bet    1.22        0.55        −0.53       1.35          −1.47

of betting by cutting into the bookmaker over-round, with an opportunity for positive profit from backing home underdogs. In the index market, 'buying' favourites over the 1999 and 2000 seasons would have returned a profit of £169 (per £1 per point stake). This is an average profit of 55 pence for each bet, a significant improvement on buying favourites at individual index firms, where the average return was −£1.23. Confining wagers to just those favourite teams playing at home would have yielded a higher average return again: £1.35 for every £1 (per point) speculated. A strategy of selling home underdogs in the index market delivers a loss close to normal, whereas a strategy of backing home underdogs in the handicap market yields a positive profit. Hence, these two betting strategies offer differential returns, a feature that should be explored in further research to check for its robustness over time.
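The 'shopping' rule used throughout Tables 11.5A and 11.5B can be sketched as follows. Firm names, quotes and margins here are invented for illustration, and we assume the convention that a home bet wins when the actual margin beats the quoted handicap:

```python
def shop_and_settle(match_quotes, margins, odds=(5, 6)):
    """Back the home team at the lowest handicap on offer for each match.
    match_quotes: one dict per match, firm -> quoted handicap (None = no quote).
    margins: actual home winning margins. Returns (win rate, per cent profit)."""
    win_profit, stake = odds
    wins = bets = 0
    profit = 0.0
    for quotes, margin in zip(match_quotes, margins):
        available = [h for h in quotes.values() if h is not None]
        if not available:
            continue                      # no firm priced this match
        bets += 1
        if margin > min(available):       # beat the 'best' (lowest) handicap
            wins += 1
            profit += win_profit / stake  # at 5-6, £6 staked wins £5
        else:
            profit -= 1.0                 # on or under the handicap loses
    return wins / bets, 100 * profit / bets

# Three invented matches with actual home margins of 8, -2 and 12 points.
quotes = [{"B1": 6, "B2": 4}, {"B1": -1, "B2": None}, {"B1": 10, "B2": 9}]
rate, pct = shop_and_settle(quotes, [8, -2, 12])
# Two of three bets beat the best handicap, for a small positive profit.
```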

Conclusions

We have examined efficiency in the handicap and index betting markets for rugby league. We find that variations in quotes in each market are matched one-for-one by actual points differences observed in Super League fixtures. However, our regression results show a significant home bias in the handicap betting market which is absent in the index market. This bias, of the order of three or four points, implies that backing home teams should generate higher returns to bettors than backing away teams. The differences in home bias between the two types of market, and the contrasting efficiency properties, may be attributed to different constituencies. Handicap markets may attract 'pleasure' bettors, including fans who bet in order to add to their emotional stake in a team. In contrast, the index market may be dominated by 'professional' bettors with no sentimental team attachment who simply seek the best possible expected return from a wager. When simulations of betting strategies are conducted, the home bias is confirmed in that lower expected losses are found by backing home teams, compared to random selection, in the handicap betting market. But betting on home teams at particular bookmakers does not yield a profit in either betting market. Selection of 'best' handicaps or index quotes alters the simulation outcomes considerably. In the handicap market, backing all home teams or backing home underdogs delivers positive profits. In the index market, returns per bet are positive


when the 'best' quote on home teams is searched out. This was not apparent from the regression results, suggesting an anomaly that deserves to be resolved in further research. Generally, the implications for efficiency of searching for the lowest prices amongst bookmakers in sports betting markets have not been properly explored in the literature, in which arbitrage betting opportunities tend to be assumed absent. Unlike studies of North American sports betting, where sample sizes run into thousands, we are constrained in our study of rugby league betting by the immaturity of the markets and the consequent low number of observations for analysis. For index betting markets, sample sizes were as low as 281. With such small samples, the cliché that further work is needed is particularly apt, if only to reduce the chances of invalid inferences. At present our work must be seen as tentative and suggestive. There are three further directions that research could usefully take. One is to see whether the home bias revealed here for rugby league handicap betting persists over time. The second is to ascertain whether the profitable opportunities obtained by shopping for 'best' quotes remain in place in future seasons. This is an issue which deserves greater attention in other UK sports, including soccer. The third question, following work by Vergin and Sosik (1999) and Gandar et al. (2001) on US sports betting, is whether there is an additional source of home bias in higher-profile rugby league games, comprising nationally televised games on Friday nights and end-of-season playoff games.

Notes

1 An alternative approach is a probit model, where the dependent variable is the probability that a bet on a team beats the spread or handicap, that is, the probability that the bet is won. This approach is applied by Gray and Gray (1997) to NFL betting. We prefer not to use the probit model because it imposes a non-linear S-shape on the relationship between bet outcome and terms of the bet, which makes interpretation of the marginal effects of the terms of the bet, the probit equivalent of the β parameter, problematic. In the probit model, β will deviate from unity at extreme ends of the distribution of spread midpoint or handicap, by construction.
2 Gandar et al. (2001), however, found no such bias in a study of betting markets for baseball or basketball or in a relatively small out-of-sample test of the Vergin–Sosik NFL results.
3 An alternative hypothesis is that fans who are nervous about their team's prospects of winning take out a wager on the opponent to win. The disutility brought about by their team's defeat would be partially offset by the satisfaction of winning the bet. If this 'insurance' motive predominates then the coefficient γ is predicted to be negative, assuming home fans are the majority of bettors on the outcome of a particular match.
4 Without 'pleasure' bettors in the index market, professional bettors would not enter as they would lack an opportunity for positive profit.
5 By contrast, a win rate of 52.5 per cent is needed for profitable betting in the NFL handicap betting market. The difference comes from the superior odds (10–11) offered on NFL games relative to rugby league games (5–6).
6 At odds of 5–6, the bettor needs to wager £109 to receive £100, ignoring draws. But in the case of draws the bettor loses. Draws are 2.35 per cent of match results in


our sample. Given draws, the bettor must wager £109 × 1.0235 = £111.6. The over-round is 11.6 per cent, while the expected take-out, conditional on a balanced book, is 11.6/111.6 = 10.4 per cent.
7 Some literature on sports betting examines several more complex and sophisticated rules. See, for example, Cain et al. (2000) on exact-scores betting in English soccer and Lacey (1990) and Woodland and Woodland (2000) on NFL betting.
8 Simulations of betting on all home favourites or all home underdogs did not deliver a higher proportion of wins than betting on all home teams or all favourites or all underdogs.

References

Avery, C. and Chevalier, J. (1999), 'Identifying investor sentiment from price paths: the case of football betting', Journal of Business, 72: 493–521.
Cain, M., Law, D. and Peel, D. (2000), 'The favourite–longshot bias and market efficiency in UK football betting', Scottish Journal of Political Economy, 47: 25–36.
Clarke, S. and Norman, J. (1995), 'Home advantage of individual clubs in English soccer', The Statistician, 44: 509–521.
Courneya, K. and Carron, A. (1992), 'The home advantage in sports competitions: a literature review', Journal of Sports and Exercise Psychology, 14: 13–27.
Dare, W. and MacDonald, S. (1996), 'A generalised model for testing the home and favourite team advantage in point spread markets', Journal of Financial Economics, 40: 295–318.
Dobson, S. and Goddard, J. (2001), The Economics of Football. Cambridge: Cambridge University Press.
Forrest, D. and Simmons, R. (2000), 'Forecasting sport: the behaviour and performance of football tipsters', International Journal of Forecasting, 16: 317–331.
Forrest, D. and Simmons, R. (2001), 'Globalisation and efficiency in the fixed-odds soccer betting market', University of Salford, Centre for the Study of Gambling and Commercial Gaming.
Gandar, J., Zuber, R., O'Brien, T. and Russo, B. (1988), 'Testing rationality in the point spread betting market', Journal of Finance, 43: 995–1008.
Gandar, J., Zuber, R. and Lamb, R. (2001), 'The home field advantage revisited: a search for the bias in other sports betting markets', Journal of Economics and Business, 53: 439–453.
Garicano, L., Palacios-Huerta, I. and Prendergast, C. (2001), 'Favouritism under social pressure', National Bureau of Economic Research Working Paper 8376.
Golec, J. and Tamarkin, M. (1991), 'The degree of inefficiency in the football betting market', Journal of Financial Economics, 30: 311–323.
Gray, P. and Gray, S. (1997), 'Testing market efficiency: evidence from the NFL sports betting market', Journal of Finance, 52: 1725–1737.
Haigh, J. (1999), '(Performance) index betting and fixed odds', The Statistician, 48: 425–434.
Harvey, G. (1998), Successful Spread Betting. Harrogate: Take That Ltd.
Henery, R. (1999), 'Measures of over-round in performance index betting', The Statistician, 48: 435–439.
Lacey, N. (1990), 'An estimation of market efficiency in the NFL point spread betting market', Applied Economics, 22: 117–129.
Osborne, E. (2001), 'Efficient markets? Don't bet on it', Journal of Sports Economics, 2: 50–61.
Sauer, R., Brajer, V., Ferris, S. and Marr, M. (1988), 'Hold your bets: another look at the efficiency of the gambling market for National Football League games', Journal of Political Economy, 96: 206–213.
Sauer, R. (1998), 'The economics of wagering markets', Journal of Economic Literature, 36: 2021–2064.
Schlenker, B., Phillips, S., Bonieki, K. and Schlenker, D. (1995), 'Championship pressures: choking or triumphing in one's territory', Journal of Personality and Social Psychology, 68: 632–643.
Schwartz, B. and Barsky, S. (1977), 'The home advantage', Social Forces, 55: 641–661.
Snedecor, G. and Cochran, W. (1967), Statistical Methods, 6th edition. Ames, Iowa: The Iowa State University Press.
Terrell, D. and Farmer, A. (1996), 'Optimal betting and efficiency in parimutuel betting markets with information costs', Economic Journal, 106: 846–868.
Thaler, R. and Ziemba, W. (1988), 'Anomalies – parimutuel betting markets: racetracks and lotteries', Journal of Economic Perspectives, 2: 161–174.
Tryfos, P., Casey, S., Cook, S., Leger, G. and Pylpiak, B. (1984), 'The profitability of wagering on NFL games', Management Science, 24: 809–818.
Vaughan Williams, L. (1999), 'Information efficiency in betting markets: a survey', Bulletin of Economic Research, 51: 1–30.
Vergin, R. (1998), 'The NFL point spread market revisited: anomaly or statistical aberration?', Applied Economics Letters, 5: 175–179.
Vergin, R. and Sosik, J. (1999), 'No place like home: an examination of the home field advantage in gambling strategies in NFL football', Journal of Economics and Business, 51: 21–31.
Woodland, B. and Woodland, L. (2000), 'Testing contrarian strategies in the National Football League', Journal of Sports Economics, 1: 187–193.

12 Efficiency of the over–under betting market for National Football League games

Joseph Golec and Maurry Tamarkin

Introduction

Sports betting markets are recognized as good data sources to test market efficiency. Readily observable outcomes and a definite betting or investment horizon are features that make these markets attractive research candidates. Various studies have examined American football, baseball, basketball, and horse-racing markets. In the American football betting market, the efficiency tests have focused on whether bettors can use certain simple team features, such as being the home team or the underdog, to select bets that can generate statistically significant economic profits.

The most recent work on football betting focuses on econometric techniques that may improve the statistical tests of the efficiency of the football point spread betting market, or the forecasts from a betting model. For example, Gray and Gray (1997) extend the literature by using a discrete-choice probit model rather than the ordinary least squares regression methodology used previously by Golec and Tamarkin (1991). The basic approach to testing for market efficiency has been to regress game outcomes (difference in team scores) on the betting market's predicted point spread. Various studies extended the basic model by including other explanatory variables such as home–away and favorite–underdog variables (see Golec and Tamarkin, 1991). In addition to using probit regression, Gray and Gray add "streak" variables to the regression such as team record in the most recent four games and overall winning percentage. They find that some of the streak variables are significant, implying some market inefficiency.

In this chapter, we consider a different football bet. The most common football bet is the point spread bet, which tests one's ability to predict the difference in team scores, compared to the market's prediction. The next most common football bet is the over–under bet, which tests one's ability to predict the total number of points scored in a game. This chapter focuses on the over–under bet.
We know of no comprehensive study to date which tests the basic efficiency of the over–under market. In addition, we consider any differences in the statistical properties of point spreads and over–under totals and whether information in one market can be used to win bets in the other.

The chapter is organized as follows: the next section, "The football betting market: setting point spreads", briefly describes the football betting market and how point spreads are set; the section on "Testing football betting market efficiency" after that describes the data and presents the test results. The results are summarized in the conclusion.

The football betting market: setting point spreads

Jaffe and Winkler (1976) point out that football betting markets are analogous to securities markets: a gambler "invests" through a bookie (market-maker) at a market-determined point spread (price), which is the market's expectation of the number of points by which the favorite will outscore the underdog. The larger the spread, the larger the handicap the favorite must overcome. Those who bet on the favorite believe their team is underpriced; they speculate that the favorite will defeat the underdog by more than the point spread. In turn, those who bet on the underdog believe that the favorite is overpriced, that is, the favorite will either lose the game or win by less than the point spread.

Licensed sports books in Las Vegas dominate the organized football betting markets. They commence betting on the week's games at "opening" point spreads (the line) that reflect the expert opinions of a small group of professional spread forecasters. If new information on the relative strengths of opposing teams (e.g. a player injury) is announced during the week, the bookie may adjust the line. In addition, since the identity of the bettors is known, Las Vegas bookies may also change the line if professional gamblers place bets disproportionately on one team. Of course, once bets are placed at a specific point spread number, the bet stands at that number regardless of future changes in the point spread. Shortly before game time, the bookie stops taking bets at the "closing" point spread. Like securities prices at the end of trading, closing spreads are assumed to reflect an up-to-date aggregation of the information and, perhaps, biases of the market participants. In addition to point spreads, sports books also publish betting lines on the total points scored in each game.
The bettor tries to predict whether the total number of points scored in a football game will be over or under a published number, the so-called over–under. The over–under number varies depending on the two participants' offensive and defensive prowess and, to some extent, the weather forecast, as inclement weather can hold down scoring. The over–under number also may be adjusted by bookies until game time although, of course, once bets are placed at a specific over–under, the bet stands at that number regardless of future changes in the over–under number.

In Las Vegas and other markets for large bettors, winners of point spread betting, or of the over–under, receive two dollars for each dollar bet; losers forfeit the amount of their bets plus an additional 10 percent paid to the bookie as commission (this commission is called vigorish or juice). In the case of ties, typically all bets are canceled (a push) although some bookies treat this as a loss for the bettor. Thus, a betting strategy must win at least 52.4 percent of the bets to be profitable.

The fact that bookies can change the line (we are including point spread and over–under bets in the line) leads researchers to propose an optimal strategy for setting the line. Assuming the line is a good forecast of the outcomes, the line is an even bet and, over many bets, the bookie's expected return is the vigorish, regardless of how disproportionate the betting might be on any particular team in a game. Many researchers have assumed that bookies adjust the line to even out the betting on each game, essentially hedging their positions in each game. But the bookie manages a portfolio of mutually uncorrelated unsystematic risks. Thus, the risk can be diversified away over many games. The bookie wants simply to maximize the bets placed subject to the constraint that the line is determined so that each bet is an even gamble.

In conversation with bookies, we have found that they do not try to adjust the line to even out the betting.1 As one bookmaker put it, "Say I have a large difference in line of $8,000 on one team and only $2,000 on the other. Why should I try to change the line? I am laying $5,800 against $6,800 on what is essentially an even money proposition. I'll take those kinds of bets all day!"2 This is similar to the way casinos operate. In a roll of the dice in craps, there may be disproportionately more money bet on the "pass line" than on the "don't pass" line, but casinos do not care. What bookies do care about is increasing the amounts wagered, for their expected profit goes up with the amount wagered whether or not bets are evened out in any particular game. Thus, sports books have expanded their offerings of bets to attract more wagers. One such bet that has become more popular recently is the over–under bet. We focus on this gamble in the empirical work below.
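As a numerical aside, the bookmaker's arithmetic quoted above, and the 52.4 percent break-even rate it implies for bettors, can be sketched as follows. This is a minimal illustration of the standard 11-for-10 terms described in the text; the function name is ours.

```python
# Bookmaker's net position on one game under standard terms:
# losing bettors forfeit their stake plus 10 percent vigorish,
# winning bettors are paid even money.

def bookie_outcomes(bet_a: float, bet_b: float, vig: float = 0.10):
    """Bookie's profit if side A wins and if side B wins."""
    profit_if_a_wins = bet_b * (1 + vig) - bet_a  # collect from B's losers, pay A's winners
    profit_if_b_wins = bet_a * (1 + vig) - bet_b  # collect from A's losers, pay B's winners
    return profit_if_a_wins, profit_if_b_wins

# The example quoted in the text: $8,000 bet on one team, $2,000 on the other.
if_a_wins, if_b_wins = bookie_outcomes(8000, 2000)
print(if_a_wins, if_b_wins)  # roughly -5800 against +6800, as the bookmaker says

# A bettor risks 1.10 to win 1.00, so the break-even winning
# percentage solves p * 1.00 = (1 - p) * 1.10, i.e. p = 1.1 / 2.1.
print(round(1.1 / 2.1, 3))  # 0.524
```

Over many such games the imbalances are uncorrelated across games, which is the diversification argument made in the text.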

Testing football betting market efficiency

We use data from The Gold Sheet, a sports information publication that has been in business for forty-three years. The data consist of all National Football League (NFL) games from 1993 through 2000 for which there were published point spreads and over–under lines. Final scores and game dates were also obtained from this source. There is a total of 2,008 games.

First, we look at some summary statistics for the point spread (PS) and the over–under (OU) in Table 12.1. The point spread is defined as the number of points by which the favorite (underdog) is expected to win (lose). We note that for both betting lines, the actual outcomes are close to the predictions as given by the lines.

Table 12.1 Summary statistics for NFL point spread and over–under bets during the 1993–2000 seasons

Variable                    Mean     Median   Std dev   Skewness   Kurtosis
Point spread (PS)            5.64     5.00      3.58     −0.98       0.75
Margin of victory (MV)       5.17     4.00     13.50      0.02       0.30
[MV − PS]                    0.47     0.50     12.91      0.02       0.24
Over–under (OU)             40.15    40.00      4.13      0.72       1.20
Total points scored (TP)    41.22    41.00     14.22      0.35      −0.04
[TP − OU]                    1.07     0.00     13.76      0.36       0.06


They differ by about one point in the OU line and by one half point in the PS line. In both cases, the medians of the differences are at or near zero. This is an indication that the lines are good estimates of the outcomes. When we look at the differences, (MV − PS) and (TP − OU), we see two things. First, when we take differences to get (MV − PS), we reduce the standard deviation relative to MV alone proportionately more than when we do the same for (TP − OU). This shows that PS is more highly positively correlated with MV than OU is with TP. That is, PS explains more of MV than OU explains of TP, so that differencing eliminates more variance from (MV − PS). Indeed, the correlation between PS and MV is 0.29, versus 0.25 for OU and TP.

One interesting feature of the PS and OU lines is that they both exhibit skewness and kurtosis, with PS negatively skewed and OU positively skewed. But MV and TP are basically normally distributed, with little skewness or kurtosis. This is surprising because if the goal of the bookie is to set PS to mirror MV and OU to mirror TP, one might expect them to have similar distribution features. That is, if the realizations (MV and TP) are normally distributed, why are the expectations (PS and OU) not? This question is not answered here but it does bring up a related question for financial asset returns. Short-term asset returns have been shown to be approximately normally distributed, although long-term returns may be lognormally distributed. Are expected returns normally distributed, and what is the consequence for asset pricing models if there is a difference between the distributions of expectations and realizations?

For football betting, market efficiency implies that the closing over–under line is an unbiased measure of the total score in a game. In other words, the closing line should not be systematically higher or lower than the actual final game scores.
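This unbiasedness condition can be implemented as an ordinary least squares regression of total points on the closing line, with an F-test of the joint hypothesis that the intercept is zero and the slope is one. Below is a minimal sketch on simulated data standing in for the authors' Gold Sheet sample; the variable names and the data-generating process are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated stand-in for the data: an efficient market in which the
# over-under line is an unbiased forecast of total points scored.
n = 2000
ou = rng.normal(40, 4, n)       # closing over-under lines
tp = ou + rng.normal(0, 14, n)  # total points = line plus forecast error

# OLS of TP on a constant and OU: tp = b1 + b2 * ou + e
X = np.column_stack([np.ones(n), ou])
b, *_ = np.linalg.lstsq(X, tp, rcond=None)

# F-test of the joint hypothesis b1 = 0, b2 = 1, written as R b = r.
e = tp - X @ b
s2 = e @ e / (n - 2)            # residual variance
R = np.eye(2)
r = np.array([0.0, 1.0])
d = R @ b - r
cov = R @ np.linalg.inv(X.T @ X) @ R.T
F = d @ np.linalg.solve(cov, d) / (2 * s2)
print(b, F)  # slope near one and a small F when the line is unbiased
```

With real data one would reject unbiasedness when F exceeds the 5 percent critical value of an F(2, n − 2) distribution, which is the form of test reported in Table 12.2.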
This can be tested with the following ordinary least squares regression of total points scored on the over–under line:

TP = β1 + β2 (OU) + ε

where TP is total points scored, OU is the over–under line, β1 and β2 are regression coefficients and ε is an error term. The test of efficiency is an F-test of the joint hypothesis β1 = 0 and β2 = 1.

Table 12.2 presents the regression tests for the over–under National Football League betting market for the 1993 through 2000 seasons combined and for individual seasons. For the entire sample period, we find that there is a statistically significant bias in the over–under line. This result is largely driven by a fixed bias of about six points, as measured by the significantly positive intercept (β1). Under this condition, the slope estimate will be biased down. Indeed, β2 is smaller than one, but only marginally so. And given the bias imposed by the intercept, we can say that there is probably little bias that varies proportionately with the OU line.

Further evidence on the consistency of the bias can be found in the regressions for the data subset by year. Although six of the eight years have positive intercept estimates, only the 1995 estimate is significant. In fact, it appears that this year may be driving the rejection of efficiency for the overall sample. During the last three years of the sample period, the market appears to be relatively more efficient in the sense that the intercepts fluctuate closer to zero and the slope closer to one. Furthermore, the regression R-squareds are considerably larger. This means that the OU line explains a larger portion of the variation in TP. It can be inferred from the overall regression results, taken at face value, that it would have been better to bet the over and that, in 1995, this would have been a particularly profitable strategy.

One possible explanation for these results is that we have not accounted for overtime games. Out of 2,008 games, 108 are overtime games. If two teams are tied at the end of regulation play, they play a sudden-death period in which the first team to score wins. The probability of overtime may be a decreasing function of the spread: when team scores are predicted to be closer, regulation play is more likely to end in a tie. But when we ran a probit regression of overtime on the spread, there was only a weak negative relationship. Indeed, the correlation between the spread and a zero–one overtime indicator is only about 4 percent in absolute value. Therefore, overtime appears to be largely unpredictable.

Overtime may be unpredictable, but overtime games tend to result in larger point totals. This can be seen by redoing the regressions in Table 12.2 and including a dummy for overtime games and the spread to account for the slightly greater

Table 12.2 Regression estimates for tests of market efficiency for NFL over–under bets during the 1993–2000 seasons

Sample period  β1              β2            SER     R²      F (β1 = 0, β2 = 0)  F (β1 = 0, β2 = 1)  Obs.
1993–2000      5.96* (3.00)    0.88* (0.07)  13.75   0.065   139.34*             7.39*               2,008
1993           13.18 (9.57)    0.66* (0.25)  13.68   0.027   6.71*               1.14                242
1994           8.87 (8.66)     0.81* (0.21)  13.97   0.055   13.68*              1.53                236
1995           29.65* (11.02)  0.34 (0.27)   14.54   0.006   1.52                7.01*               252
1996           13.32 (11.77)   0.68* (0.29)  12.64   0.022   5.56*               0.85                252
1997           13.75 (11.90)   0.67* (0.29)  13.68   0.021   5.38*               0.78                252
1998           −3.85 (8.56)    1.13* (0.20)  13.16   0.105   29.66*              1.87                254
1999           6.41 (8.34)     0.87* (0.20)  14.08   0.065   18.08*              1.64                259
2000           −3.73 (6.31)    1.08* (0.15)  14.06   0.165   51.24*              0.18                261

Notes
Standard errors are in parentheses. SER is the standard error of the regression.
* Denotes statistical significance at least at the 5 percent level.

Table 12.3 Market efficiency tests for NFL over–under bets during the 1993–2000 seasons adjusted for overtime games and point spread

Sample period  β1              β2            β3            β4            R²      F (β1, β4 = 0, β2 = 1)  Obs.
1993–2000      5.64* (3.00)    0.87* (0.07)  3.73* (1.35)  0.06 (0.08)   0.069   3.52*                   2,008
1993           11.66 (9.56)    0.73* (0.26)  8.69 (4.63)   −0.21 (0.24)  0.044   0.85                    242
1994           10.40 (8.74)    0.71* (0.23)  −0.25 (3.63)  0.42 (0.26)   0.066   1.90                    236
1995           27.15* (11.11)  0.41 (0.28)   5.95 (3.31)   −0.17 (0.26)  0.021   3.58*                   252
1996           12.94 (11.79)   0.68* (0.29)  4.81 (3.48)   0.02 (0.24)   0.029   0.44                    252
1997           12.96 (11.97)   0.67* (0.29)  −1.37 (3.55)  0.23 (0.26)   0.025   0.82                    252
1998           0.53 (8.98)     0.97* (0.23)  2.10 (4.75)   0.40 (0.26)   0.115   1.95                    254
1999           5.34 (8.31)     0.91* (0.21)  10.82 (4.53)  −0.12 (0.26)  0.086   0.66                    259
2000           −4.13 (6.39)    1.08* (0.15)  1.40 (3.90)   0.09 (0.25)   0.166   0.18                    261

Notes
Standard errors are in parentheses.
* Denotes statistical significance at least at the 5 percent level.

tendency for low-spread games to end in overtime. Table 12.3 reports these results. It presents regression tests for the over–under National Football League betting market for the 1993 through 2000 seasons combined and for individual seasons. The regression is

TP = β1 + β2 (OU) + β3 (OT) + β4 (PS) + ε

where TP is total points scored, OU is the over–under line, OT equals 1 for an overtime game and 0 otherwise, PS is the point spread, β1, β2, β3, β4 are regression coefficients and ε is an error term. The test of efficiency is an F-test of the joint hypothesis β1 = β4 = 0 and β2 = 1.

For the entire sample period, we find that overtime games increase the total points scored by a statistically significant average of 3.73 points. This is reasonable because the first team to score in overtime wins, so most overtime games are settled by a three-point field goal, which is easier to score than a six-point touchdown. Indeed, the actual average difference in scores in overtime games is 3.75. This means that the OU line explains none of the effect of overtime on TP. Market efficiency is still rejected for the overall sample but the F-statistic is much smaller and less significant. Nevertheless, 1995 again drives the overall rejection of efficiency.

For individual years, taking account of overtime games has moved the intercepts somewhat closer to zero and OU coefficients closer to one, in most cases. Nevertheless, the point estimates of the intercepts are still very large in the first five years of the sample. These are also the years in which the R-squareds are low. This may indicate that certain betting strategies will be more profitable in these years.

Table 12.4 presents outcomes for betting strategies of both "over" and "under". Even though ties in Las Vegas are "pushes", that is, bets are returned, other bookies may treat pushes as losses, so we also show results for the case where ties lose.
These markets might exist in local areas where gambling is illegal, and bookies require a larger proﬁt margin because of the increased risk. Results for the full sample period show that betting the over is only marginally better than betting the under. Furthermore, the 50.1 percent winning percentage is nowhere near the 52.4 percent required to cover the vigorish paid to the bookie. Only in 1995 could one have made signiﬁcantly more than the required 52.4 percent. Of course, this is ex post, and we are not surprised to ﬁnd one proﬁtable strategy in eight sub-periods especially since we are considering both sides of the bet. Even in 1993 through 1997, where according to the intercept estimates there appear to have been relatively large ﬁxed biases, betting the over would not have yielded a proﬁt. Furthermore, in three of the ﬁve years, betting the under would have been as good as or better than betting the over. The results in Table 12.3 show that information impounded in the PS cannot be used systematically to predict the TP after the OU line is accounted for. Nevertheless, we considered two ways to use PS and OU in a more nonlinear fashion that could produce proﬁtable bets. First, when PS is larger than average and OU is smaller than average, we reasoned that it might be proﬁtable to bet the underdog.


Table 12.4 Over–under betting strategies winning percentages. The profitability of over–under betting strategies for National Football League games over the 1993 through 2000 seasons, for combined totals and for individual years

Sample period  Betting strategy  Number of bets  Bets won  Ties  Win % (ties push)  Win % (ties lose)
1993–2000      Over              2,008           988       36    0.501              0.492
1993–2000      Under             2,008           984       36    0.499              0.490
1993           Over              242             119       4     0.500              0.492
1993           Under             242             119       4     0.500              0.492
1994           Over              236             115       4     0.496              0.487
1994           Under             236             117       4     0.504              0.496
1995           Over              252             133       7     0.543*             0.528*
1995           Under             252             112       7     0.457              0.444
1996           Over              252             125       5     0.506              0.496
1996           Under             252             122       5     0.494              0.484
1997           Over              252             122       2     0.488              0.484
1997           Under             252             128       2     0.512              0.508
1998           Over              254             126       5     0.506              0.496
1998           Under             254             123       5     0.494              0.484
1999           Over              259             125       3     0.488              0.483
1999           Under             259             131       3     0.512              0.506
2000           Over              261             123       6     0.482              0.471
2000           Under             261             132       6     0.518              0.506

Note
* Denotes a statistically significant winning percentage. A push means that all bets are returned when the over–under betting line equals the total points scored in the corresponding game (a tie). Winning percentages are calculated assuming both that ties push and that ties lose.

When OU is small, the market is expecting fewer points to be scored, and this may make it more difficult for a favorite to beat an underdog by a large number of points. Second, when PS is larger than average and OU is also larger than average, we speculated that it might be profitable to bet the favorite. Here, the market expects a large number of points to be scored, so favorites might cover a large spread more easily.

Table 12.5 reports results for these strategies. First, for the full sample without any filter, betting the underdog was nearly a profitable strategy, with a 52.1 percent winning percentage. When we filtered by choosing only games where the point spread was above average (PS > 5.5) and the over–under was below average (OU < 41), betting on the underdog was more profitable, as predicted. Assuming ties push, such bets yielded a 55.8 percent winning percentage. This strategy is profitable in six of the eight sample years and has a winning percentage greater than 55 percent in four of those years. On the other hand, when we chose games where the point spread was above average (PS > 5.5) but the over–under was above average (OU > 40), our proposed strategy of betting on the favorite was still unprofitable. The winning percentage improved only marginally, from 47.9 to 48.5.
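The ties-push versus ties-lose accounting can be made explicit with a small helper, checked here against the filtered underdog figures quoted above (255 wins and 11 ties out of 468 bets); the function name is ours.

```python
def win_pct(won: int, bets: int, ties: int, ties_push: bool = True) -> float:
    """Winning percentage when ties are refunded (push) or counted as losses."""
    denom = bets - ties if ties_push else bets
    return won / denom

# Underdog bets passing the PS > 5.5, OU < 41 filter: 468 bets, 255 won, 11 ties.
print(round(win_pct(255, 468, 11, ties_push=True), 3))   # 0.558, as reported
print(round(win_pct(255, 468, 11, ties_push=False), 3))  # 0.545, as reported
```

Either figure is then compared against the 52.4 percent break-even rate to judge profitability.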


Table 12.5 Favorite–underdog point spread betting strategies using the over–under line. The profitability of point spread betting strategies for National Football League games over the 1993 through 2000 seasons

Filter             Betting strategy  Number of bets  Bets won  Ties  Win % (ties push)  Win % (ties lose)
None               Favourite         2,008           920       86    0.479              0.458
None               Underdog          2,008           1,002     86    0.521              0.499
PS > 5.5, OU < 41  Favourite         468             202       11    0.442              0.432
PS > 5.5, OU < 41  Underdog          468             255       11    0.558              0.545
PS > 5.5, OU > 40  Favourite         438             205       15    0.485              0.468
PS > 5.5, OU > 40  Underdog          438             218       15    0.515              0.498
PS > 8, OU < 38    Favourite         99              40        3     0.417              0.404
PS > 8, OU < 38    Underdog          99              56        3     0.583              0.566
PS > 8, OU > 43    Favourite         131             66        5     0.524              0.504
PS > 8, OU > 43    Underdog          131             60        5     0.476              0.458

Note
* Denotes a statistically significant winning percentage. A push means that all bets are returned when the point spread betting line equals the actual difference in points scored in the corresponding game (a tie). Winning percentages are calculated assuming both that ties push and that ties lose.

We also look at more extreme filters for the same strategies. When we filtered by choosing only games where the point spread was much above average (PS > 8) and the over–under was much below average (OU < 38), betting on the underdog was even more profitable. The winning percentage increased from 55.8 to 58.3. Furthermore, when we chose games where the point spread was much above average (PS > 8) but the over–under was much above average (OU > 43), betting on the favorite became barely profitable at 52.4 percent. Clearly, there are fewer games that pass the more restrictive filters; however, these more restrictive filters support both of our speculations. Indeed, the restrictive filter sharply increased the winning probability (from 47.9 to 52.4) for the strategy of betting the favorite, overcoming the strong overall tendency for the underdog to cover the spread.
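The joint PS/OU screens amount to simple boolean filters over per-game arrays. A sketch with hypothetical data follows; these six games are illustrative only, not drawn from the authors' sample.

```python
import numpy as np

# Hypothetical per-game arrays: point spread, over-under line, and
# whether the underdog covered the spread.
ps = np.array([9.5, 3.0, 8.5, 10.0, 6.5, 9.0])
ou = np.array([36.0, 42.0, 37.5, 44.0, 39.0, 35.5])
dog_covered = np.array([True, False, True, False, True, False])

# The restrictive filter: bet the underdog only when the spread is well
# above average (PS > 8) and the line is well below average (OU < 38).
mask = (ps > 8) & (ou < 38)
n_bets = int(mask.sum())
dog_win_pct = dog_covered[mask].mean()
print(n_bets, dog_win_pct)  # 3 bets selected from the 6 hypothetical games
```

Applied to real game data, the resulting win percentage would be compared against the 52.4 percent hurdle net of ties.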

Conclusion

The examination of the over–under betting line gives additional insight into the efficiency of the football betting market. We find that both PS and OU are good predictors of MV and TP respectively. In our data, neither the over nor the under is a profitable wager. There does appear, however, to be a statistically significant bias in OU, but the bias does not seem to vary proportionally with the line. One year, 1995, has a large fixed bias and may be driving this result. When we account for the tendency of overtime games to be higher scoring, the bias is reduced but still remains statistically significant.

Our more interesting result comes from using the OU in conjunction with the PS to concoct a winning strategy. We argue that games that have a low OU are likely to have a low total score and thus may prove to be more difficult for the favorites to cover the point spread. Similarly, games that have a high OU may prove easier for the favorites to cover the point spread. Our results partially bear out these conjectures. Using the average OU as a filter improves predictions for the underdog when the PS is above the average and OU is below the average. On the other hand, wagering on the favorites when the PS and OU are above average is not profitable. The adoption of more extreme filters improved the results and we were able to show profitable betting results for both of our theoretical strategies. Betting on the underdog in games that pass the filters produces the best profits. Betting on the favorites in games that pass the restrictive filters sharply increases the winning percentage, overcoming the strong overall tendency for the underdog to win.

The novel use of the over–under betting line that we employ shows that bettors can use information from one type of betting line to enhance their betting strategies in a different betting line. It is not known to what extent professional gamblers are aware of this. Future research in gambling can explore other combinations of betting lines.

Notes
1 Bookies will adjust the line if they notice that professional bettors are betting more heavily on one side. This is an indication to them that the original line may not be an accurate estimate of the mean of the distribution.
2 Conversation with Sonny Reizner, former sports book manager of the Rio Suite Hotel and Casino, Las Vegas.

References
Golec, Joseph and Maurry Tamarkin (1991), "The degree of inefficiency in the football betting markets," Journal of Financial Economics, 30: 311–323.
Gray, Philip K. and Stephen F. Gray (1997), "Testing market efficiency: evidence from the NFL sports betting market," Journal of Finance, 52: 1725–1737.
Jaffe, Jeffrey F. and Robert L. Winkler (1976), "Optimal speculation against an efficient market," Journal of Finance, 31: 49–61.

13 Player injuries and price responses in the point spread wagering market

Raymond D. Sauer

This chapter studies the response of point spreads to a readily observed event: the absence of a key player due to injury. The analysis is thus similar to an event study, with the added feature that the mean price response is compared with the mean effect of the injuries on actual outcomes (game scores). The analysis in this chapter can thus be viewed as a test of event study methods using a market where the simplicity of the financial contract makes such a test feasible. Yet, though the contract is simple, the injuries themselves create problems, since many of them are partially anticipated events. In the case of basketball injuries, an empirical model of the probability of player participation can be estimated and used in conjunction with a model of efficient pricing to interpret the relation between point spreads and scores. The pricing model yields numerous implications that are consistent with the data. Hence, the good news is that the relation between point spreads and scores during injury events is consistent with efficient pricing. The exercise tests, and lends credence to, partial anticipation as an important factor in interpreting abnormal returns when the ex ante probability of an event differs substantially from zero.

Introduction

This chapter studies the point spread wagering market for professional basketball games. Its primary concern is the wagering market's response to a series of events: injuries to star basketball players. In colloquial terms, the chapter seeks to determine if the absence of a Larry Bird or a Magic Johnson (perennial stars in this sample) is efficiently priced. Injuries were chosen for this study because the absence of a key player is arguably the single most important factor affecting game scores, and thus prices in this market.

Contracts in the point spread betting market are quite simple, in that the value of a wager is determined once and for all by the outcome of a single game. This contrasts with the value of most financial assets, which are affected by a continuum of events and anticipations at multiple horizons. The relative simplicity of wagering markets enables a sharper focus on the relation between events, market prices, and outcomes. For the most part, however, the literature on wagering markets has failed to exploit this possibility. There are many papers which evaluate the profitability of various betting rules, or that test for statistical biases in reduced form regressions, but few papers focus directly on the relation between events, prices, and outcomes.

The analysis of injury events in the wagering market enables us to address an important question: do changes in market prices precipitated by events accurately reflect changes in the distribution of outcomes? Event studies generally presume that the answer is yes: abnormal returns measured therein are used to draw inferences about the consequences of changes in regulation, corporate governance, and other factors. These studies are valuable precisely because direct measurement of the event's impact on earnings is difficult. But the difficulty of measurement also makes it difficult to directly test the event study method itself. This study uses a point spread pricing model to provide such a test based on injury events to star basketball players. A sample of 700 games missed by star players over a six-year period provides an opportunity to confront this model with a sequence of repeated events. We can therefore carefully evaluate the performance of the point spread market as a mechanism which puts a price on the event of interest.

In fact, the study shows that point spreads are biased predictors of game scores during injury events. The question then becomes whether this bias reflects inefficient pricing, or alternatively, a combination of the event-generating process and the empirical approach employed by the event study method. The definition of an injury event is guided by the central question of this study, namely, is the absence of a star basketball player efficiently priced in the point spread betting market? Hence, an injury event consists of a game missed by a star player. Although this appears to be a natural definition, it creates problems since many games missed due to injury are neither surprises nor perfectly anticipated.
Partial anticipation of player absences can create bias in point spread forecast errors, much in the way that estimates of value changes due to takeover activities contain a negative bias due to sample selection and partial anticipation (Malatesta and Thompson, 1985; Bhagat and Jefferis, 1991). A unique feature of this study is the means by which the bias problem is resolved. By studying the nature of injury spells to basketball players, we learn how to form a subsample free of selection bias. In addition, knowledge of the injury process can be incorporated into a simple pricing model. The model implies that biases in the primary sample will vary in predictable ways as an injury spell progresses. Finally, the pricing model can be used to extract the market's estimate of the participation probability of an injured player. This estimate is quite close to the estimate obtained from a duration analysis of the injuries.

In each case, we find that the point spread response to injury events is consistent with efficient pricing. Hence, the primary question addressed in the chapter is answered in the affirmative: price changes accurately reflect changes in the distribution of outcomes. Yet, proper interpretation of these price changes required detailed knowledge of the event-generating process. Without such knowledge, interpretations of event study returns can be misleading, as Malatesta and Thompson (1985) and Bhagat and Jefferis (1991) have argued.

The analysis begins with a brief description of the wagering market and the data. This is followed by a section that documents the essential facts on the nature of injury spells. These are used to construct a model of efficient point spreads in the section on 'Participation uncertainty and efficient point spreads surrounding injury spells'. The section that follows describes an estimation procedure which enhances our ability to test the model, and subsequently conducts the tests.

The point spread market for professional basketball games

Efficient point spreads

A point spread (PS) wager is defined by the following example. Suppose the Hawks are favored by 5 points over the Bulls. Let PS = 5 represent this 5 point spread, and define DP as the actual score difference, that is, points scored by the Hawks less points scored by the Bulls. A point spread wager is a bet on the sign of (DP − PS). Bets on the Hawks pay off only if DP − PS > 0, that is, if the Hawks outscore the Bulls by more than the 5 point spread. Bets on the Hawks lose if DP − PS < 0. Bets on the Bulls pay off/lose in the opposite circumstances, and bets on both teams are refunded if DP − PS = 0. Winning bets pay off at odds of 1 to (1 + t), where t can be thought of as a transactions cost which covers the bookmaker's costs of operation. A winning wager of $1 therefore returns $(1 + 1/(1 + t)). Standard terms in the Las Vegas market set t at 10 cents on the dollar.

Consider a strategy that places bets on a chosen team under a specific set of conditions. Without loss of generality, define the DP and PS ordering by subtracting the opponent's points from the points scored by the chosen team. The probability that a bet is a winner is p = prob(DP − PS > 0). In addition, the probability that the bet is a loser is p′ = prob(DP − PS < 0), and the probability of the bet being refunded is p0 = prob(DP − PS = 0) = 1 − p − p′. Efficient point spreads deny the existence of a profit opportunity to any strategy. Specifically, the expected return to a $1 wager must be non-positive:

p(1 + 1/(1 + t)) − (1 − p0) ≤ 0    (1)

A similar requirement holds for p′, the probability that the opposing team beats the spread. Combining these yields bounds for the probability of a winning wager:

(0.5 − p0/2)/(1 + t/2) ≤ p ≤ (0.5 − p0/2)(1 + t)/(1 + t/2)    (2)
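The bounds in equation (2) are easy to verify numerically. The sketch below (the helper name is chosen here for illustration) reproduces the no-profit interval (0.4762, 0.5238) cited later in the chapter for standard Las Vegas terms:

```python
# Check of the no-profit bounds in equation (2).
# t is the bookmaker's commission; p0 is the probability of a push (refund).
def win_prob_bounds(t, p0=0.0):
    lower = (0.5 - p0 / 2) / (1 + t / 2)
    upper = (0.5 - p0 / 2) * (1 + t) / (1 + t / 2)
    return lower, upper

lo, hi = win_prob_bounds(0.10)     # t = 10 cents on the dollar, no pushes
print(round(lo, 4), round(hi, 4))  # 0.4762 0.5238
```

At the upper bound the expected return in equation (1) is exactly zero: 0.5238 · (1 + 1/1.1) − 1 ≈ 0, so any win probability inside the interval denies a profit opportunity to bettors on either side.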

This result is simplified if the commission is assumed to be zero. Then p = 0.5 − p0/2. Since the probabilities sum to 1, p = p′. Hence prob(DP − PS > 0) = prob(DP − PS < 0), which is satisfied only if PS is the median of the distribution of DP. Provided that the ex ante distribution of DP is symmetric, then PS = E(DP) is also implied, and the point spread can be said to be an unbiased forecast of the score difference of the game. The ex post distribution of the forecast errors

148

R. D. Sauer

(DP − PS) will then be symmetric with a mean of zero. Hence the null hypothesis implied by efficient point spreads under these conditions is that the mean forecast error is zero:

H0: MFE(PS) = 0    (3)

which is a standard test that is often performed in point spread studies.1 These restrictions on efficient point spreads are weakened when non-zero transaction costs are recognized. Assuming that p0 = 0 for convenience and setting t = 0.10, p is bounded by p ∈ (0.4762, 0.5238), which restricts PS to being within a precise distance of the median of DP.2 Inspection of equation (2), above, indicates that the distance from the median allowed by the no-profit-opportunity condition shrinks to zero as t → 0. In sum, an efficient point spread is the median of the distribution of score differences in the absence of transaction costs. For symmetric distributions, equation (3) is implied, and efficient point spreads are the optimal forecast of the score difference. Given symmetry and positive transaction costs, failure to reject equation (3) is consistent with efficient pricing of point spreads.3

Point spreads and scores: the data

The data are based on a sample of 5,636 games played over six consecutive seasons beginning in 1982. The point spreads are those prevailing in the Las Vegas market at 5 pm Eastern time on the day the game is played.4 Define DPtij as the difference in the score of a game at t, and PStij as the point spread's prediction of this differential, where the ordering is obtained by subtracting the visiting team's (team j) points from the home team's (team i) points.5 Figure 13.1(A–C) depicts the distributions of the point differences, spreads, and forecast errors. A glance at the distributions shows no obvious asymmetry, and the data pass formal tests of the hypothesis that the distributions are symmetric.6 Since the symmetry property is accepted, tests based on expected values can be used to test the proposition that point spreads are efficient forecasts of the difference in scores of NBA games. Alternative ways of defining the score difference ordering exist.
Indeed, the home–visitor ordering is a simple transformation of the ordering displayed in daily newspapers, in which the point difference is defined by subtracting the points scored by the underdog from those of the favorite. A recent series of papers considers the implications of the score difference ordering (Golec and Tamarkin, 1991; Dare and McDonald, 1996; Sauer, 1998) for simple tests of efficient pricing. In light of this discussion, we examine these tests under all favorite–underdog and home–visitor partitions of the sample.

Table 13.1 Panel A examines the median condition for each sub-sample. The right-most column in Table 13.1 lists the proportion of winning bets realized by a strategy of betting on the team listed first in the score difference. Betting on the home team yields a winning percentage of 50.6 percent over the six-year period, whereas betting on the favorite wins 50.3 percent of the time. Only in the case of pick 'em games, in which betting on the home team wins just 47.7 percent of bets, is the proportion near the efficient bound (0.4762, 0.5238); hence this simple test is consistent with efficient pricing (test statistics are not applicable since the proportions are inside the bound).

Table 13.1 Panel B presents the sample means and standard deviations for the point differential (DP), the point spread (PS), and the forecast error (DP − PS) under each ordering. The right-most column lists the t-statistic for testing the hypothesis that the point spread is an unbiased forecast, that is, H0: E(DP − PS) = 0. In the case of home underdogs, it appears that the point spread is biased, as the mean of DP − PS = 0.96 (t = 2.91). Betting on home underdogs is not profitable, however, as seen in Table 13.1 Panel A (wins/bets = 0.508). Hence, one infers that this latter result is a violation of the symmetry condition for this sub-sample and not a rejection of efficiency.

[Figure 13.1: Point spreads and score differences in the NBA (1982–88). (A) Score differences; (B) point spreads; (C) forecast errors.]

Table 13.1 Score differences and point spreads for NBA games

A. Sample frequencies

Differencing method/sample partition    Games    Bets     Wins     Ties    Wins/bets
A1. Home–away
  All games                             5,636    5,510    2,789    126     0.506
  Home favorites                        4,341    4,243    2,148     98     0.506
  Home underdogs                        1,209    1,181      600     28     0.508
  Pick 'em games                           86       86       41      0     0.477
A2. Favorite–underdog                   5,550    5,424    2,729    126     0.503

B. Sample means and standard deviations

Differencing method/sample partition    DP              PS             DP − PS         t-stat
B1. Home–away
  All games                              4.62 (12.42)    4.38 (5.59)    0.24 (11.15)    1.62
  Home favorites                         6.87 (11.82)    6.81 (3.62)    0.06 (11.07)    0.37
  Home underdogs                        −3.09 (11.74)   −4.05 (2.30)    0.96 (11.45)    2.91
  Pick 'em games                        −0.91 (10.58)    0.00 (0.00)   −0.91 (10.58)   −0.79
B2. Favorite–underdog                    6.05 (11.83)    6.21 (3.56)   −0.16 (11.16)    1.06

Notes
Sample characteristics: the sample encompasses all regular season NBA games played in the six seasons from 1982–83 through 1987–88. Score differences were obtained from the annual editions of the Sporting News NBA Guide. Point spreads were obtained from The Basketball Scoreboard Book, and are those prevailing in the Las Vegas market about 2.5 hours prior to the start of play (5 pm Eastern time on a typical night). No point spread is reported for twenty-two games during this period, which reduces the sample from 5,658 (all games played) to 5,636 (all games with point spreads).
Panel A lists the number of games, the number of bets (games excluding those in which DP = PS, which are ties), and the number of bets won by wagering on the team in the first position of the score difference. Wins/bets is the sample estimate of p, the proportion of such bets won. Since this proportion always lies inside the bounds given by (2), no test statistic is required to evaluate this implication of efficient pricing.
Panel B: standard deviations are given in parentheses. The t-statistic tests the null hypothesis that the mean forecast error (DP − PS) is zero. Although the null is rejected in the case of home underdogs, the failure to reject efficient pricing in Panel A for this partition indicates that the rejection in B is caused by a departure from the symmetry assumption.
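The t-statistics in Table 13.1 Panel B can be recovered from the reported means, standard deviations, and sample sizes via t = mean/(sd/√n); because the tabulated inputs are rounded, the recomputed values match the table only to within rounding:

```python
from math import sqrt

# Recompute two of the Table 13.1 Panel B t-statistics from the reported
# (rounded) mean forecast errors, standard deviations, and game counts.
rows = {
    "All games":      (0.24, 11.15, 5636),   # mean, sd, n of DP - PS
    "Home underdogs": (0.96, 11.45, 1209),
}
t_stats = {name: m / (s / sqrt(n)) for name, (m, s, n) in rows.items()}
print({k: round(v, 2) for k, v in t_stats.items()})
```

The home-underdog statistic comes out at about 2.92 from the rounded inputs, versus 2.91 in the table, which is the expected level of agreement.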

Point spreads and injury spells: the data

The sample of injury games

We now examine the forecast errors of point spreads when important players are unable to participate in a contest. A sample of star players was compiled for each of the six seasons by reference to the previous year's performance standings in the Official NBA Guide. The top twenty leading scorers and rebounders were recorded, as were members of the All-Star team. The games missed by these players in the subsequent year constitute the sample of injury games for analysis. This procedure creates a sample of 273 injury spells encompassing 700 missed games.7

Bias in the forecasts

For this analysis, the score ordering is defined by subtracting the opponent's points from the points of the team with the injured player. The forecast error, then, is the observed score differential for the game less the point spread (similarly defined). The mean forecast error of the spread for the 700-game sample is −1.28 points (t = 2.87). Point spreads for injury games therefore contain significant bias. Teams with injured players do worse, by more than a point on average, than predicted by the point spread. As previously documented biases in point spreads go, this is quite large.

There are two possible explanations for this bias. A conjecture motivated by the behavioral school might go as follows. Bookmakers trade mostly with a relatively uninformed, unsophisticated clientele (since on average the clientele must lose). These bettors are not up to date on the status of injured players, so bookmakers do not fully adjust prices for injury games. Had one known that the player was destined to miss the game, betting against the team with the injured player would represent a profit opportunity. An alternative hypothesis is that the bias stems from (rational) partial anticipation of the player's absence from the game.
If there is some chance ex ante that the player might participate in the game, the mean forecast error of −1.28 points is affected by selection bias even if the point spread is efﬁcient. The following example illustrates the point. Suppose there is a 50 percent chance (ex ante) that the player will miss the game. With the player, team i is expected to win by 4 points; without him, by 2 points. If so, the efﬁcient point spread would be 3 points. On average then, his team wins by 2 points when he misses the game, but the spread is 3, which delivers the bias.
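The selection-bias argument can be illustrated with a small Monte Carlo sketch. The numbers are the hypothetical ones from the example above (a 50 percent chance the player sits, a 4-point expected margin with him, 2 points without, and an efficient spread of 3), not estimates from the data:

```python
import random

# Monte Carlo sketch of the selection-bias argument: the spread is set
# efficiently at 3 (= 0.5 * 4 + 0.5 * 2), yet conditioning on the games
# the player actually missed produces a mean forecast error near -1.
random.seed(42)
spread = 3.0
errors_when_missed = []
for _ in range(200_000):
    plays = random.random() < 0.5       # 50 percent chance he participates
    mean_dp = 4.0 if plays else 2.0     # expected margin with/without him
    dp = random.gauss(mean_dp, 12.0)    # game outcome with realistic noise
    if not plays:
        errors_when_missed.append(dp - spread)

mfe = sum(errors_when_missed) / len(errors_when_missed)  # close to -1
```

No bettor can exploit this ex post bias, because the spread of 3 is the correct price given the ex ante uncertainty about participation.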


There are, thus, two competing explanations for the point spread bias. The remainder of the chapter explores implications of the partial anticipation explanation, in which profit opportunities are absent. We begin by studying the injuries themselves.

Injury spells in the NBA

When the casual sports fan thinks of injured athletes, famous cases of incapacitating injuries, such as a broken leg (Tim Krumrie in the 1989 Super Bowl, Joe Theismann on Monday Night Football) or a broken ankle (Michael Jordan in 1985), come to mind immediately. Yet, these are relatively infrequent occurrences. Far more common are nagging injuries such as muscle pulls and ligament sprains, which could either heal or deteriorate between contests. Professional athletes continually play with taped ankles and thighs, knee braces, finger splints, wrist casts, flak jackets, etc. Indeed, many games at the professional level are contested by the "walking wounded."8 This is relevant to the analysis because it is nagging injuries which cause uncertainty over participation. This uncertainty exists not only for the general public, but also for team trainers and the players themselves. Indeed, the classic injury situation occurs when the team announces that a player is on a "day-to-day" basis. It is not uncommon for the player to announce that he is fit and ready to play, while team officials state otherwise. Whether or not an injured player will participate is often determined by his response during the pre-game warm-up, only moments before the game begins.9
The data on spell lengths suggest a fairly simple message: if a player hasn’t missed many games, his chances of returning in the next game are fairly high. On the other hand, if he has missed more than a few games, his chances of returning in the next game are quite low. This suggests two broad classes of injuries. In the much larger class the player may return to the lineup at any time. We classify these as nagging injuries that require an uncertain amount of rest for recuperation. More serious injuries can be incapacitating, completely ruling out a quick return to action. These injuries comprise the second, less common class. To take a closer look at nagging injuries, Table 13.2 Panel B tabulates spell lengths and return probabilities for spells lasting ﬁve games or less. Almost half of these spells terminate after just one game. Of spells that continue, slightly more than half terminate after the second game; the hazard is a bit greater following the third. On the assumption that the hazard rate for this sub-sample is constant, its maximum likelihood estimate is 0.513, with a standard error of 0.032.11 A rate of 0.50 would yield a cumulative return rate of 75 percent by the second game,


Table 13.2 Injury spell durations and hazard rates

A. Injury spell durations and hazard rates

Spell length   Frequency   Censored   At risk   Hazard   Cumulative return rate
 1             105         16         263       0.3992   0.3992
 2              51          2         142       0.3592   0.6316
 3              29          0          89       0.3258   0.7551
 4               6          2          60       0.1000   0.7796
 5               7          1          52       0.1346   0.8148
 6               8          1          44       0.1818   0.8512
 7               6          1          35       0.1714   0.8797
 8               5          0          28       0.1786   0.9042
 9               5          0          23       0.2174   0.9250
10               0          1          18       0.0000   0.9250
11               4          1          17       0.2353   0.9456
12               1          0          12       0.0833   0.9538
13               1          0          11       0.0909   0.9580
14               1          0          10       0.1000   0.9622
15               1          1           9       0.1111   0.9664
16               1          0           7       0.1429   0.9747
17               1          0           6       0.1667   0.9789
18               1          0           5       0.2000   0.9831
19               0          0           5       0.0000   0.9831
20               0          0           5       0.0000   0.9831

B. Hazard rates for nagging injuries

Spell length   Frequency   Censored   At risk   Hazard   Cumulative return rate
 1             105         16         219       0.4795   0.4795
 2              51          2          98       0.5204   0.7685
 3              29          0          45       0.6444   0.9204
 4               6          2          16       0.3750   0.9502
 5               7          1           8       0.8750   0.9950

Note
Frequency is the number of injury spells terminated after an absence of n games, where n is given in the spell-length column. Censored observations are terminated by the end of the season rather than by a return to action. The hazard is frequency/at risk, with censored spells dropped from the risk set in subsequent periods; the cumulative return rate incorporates censored spells in a similar manner.

87.5 percent by the third, and 93.75 percent by the fourth. These are quite close to the actual return rates of 76.9, 92.0, and 95.0 percent.12 Hence, for nagging injuries (spells of short duration), we assume the probability of returning in the n + 1st game, having missed n games thus far, is about 0.5.
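Under a constant-hazard assumption, the maximum likelihood estimate is simply total terminations over total games at risk, and a hazard of 0.5 reproduces the cumulative return rates quoted above. The frequencies and risk sets below are taken from Table 13.2 Panel B:

```python
# Constant-hazard MLE for the nagging-injury sub-sample (Table 13.2 Panel B):
# terminations divided by total spell-games at risk.
terminations = [105, 51, 29, 6, 7]   # spells ending after 1..5 missed games
at_risk = [219, 98, 45, 16, 8]       # spells still at risk at each length
hazard = sum(terminations) / sum(at_risk)
print(round(hazard, 3))  # 0.513

# A hazard of 0.5 implies cumulative return rates of 1 - 0.5**n:
cum = [1 - 0.5 ** n for n in (2, 3, 4)]
print(cum)  # [0.75, 0.875, 0.9375]
```

The estimate of 0.513 matches the figure reported in the text, and the geometric predictions line up with the observed cumulative return rates.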

Participation uncertainty and efficient point spreads surrounding injury spells

Explaining the ex post bias by appealing to participation uncertainty is straightforward. Testing this explanation yields insight into its credence, and can be achieved by imposing a simple probability structure on the injury process. This structure differs according to whether the injury is nagging (yielding a short spell) or incapacitating (yielding a long spell). We begin with the case of nagging injuries.


Assume that participation in games reveals information about the soundness of a player. Playing informs the market that the player is relatively sound, whereas not playing informs the market that the player is currently unsound. For simplicity, we assume that the participation probability, given that the player participated in the previous game, is 1.13 Based on the evidence in the section on "Point spreads and injury spells: the data", we assume that the probability of playing conditional on having missed the previous game is 0.5. These assumptions apply to nagging injuries; the onset of these spells is unexpected, and there is a positive probability of terminating the spell in each subsequent game. In contrast, incapacitating injuries are likely to be observable (e.g. a broken leg). Thus, the onset of long spells will be anticipated, and expected to continue for some time. Hence, for long spells we assume that the participation probability is 0.

The following notation is used to develop the model's implications:

p = probability (ex ante) that the player participates
DP = DPtij; the difference in points scored on date t between teams i and j, where the ordering defines team i as having the injured player
PS = PStij; the market point spread with the ordering defined as for DP
PSPLAY = PS|play; the point spread conditional on the player participating (p = 1)
PSNOT = PS|not play; PS conditional on p = 0.

In an efficient market, PSPLAY = E(DP|play), and PSNOT = E(DP|not play). The efficient unconditional point spread for injury games, PS∗, is thus given by:

PS∗ = p · PSPLAY + (1 − p) · PSNOT    (4)

A construction which will be important in testing some of the propositions that follow is the estimated "change" in the point spread. This is defined as the difference between the market point spread and the point spread that would be in effect assuming the injured player were to participate:

DIFF = PS − PSPLAY    (5)
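Equations (4) and (5) can be evaluated directly. The numbers below are the illustrative ones used earlier in the chapter (PSPLAY = 4, PSNOT = 2, p = 0.5), not estimates from the data:

```python
# Model quantities from equations (4)-(5), evaluated at the chapter's
# illustrative numbers: a 4-point expected margin with the player
# (PSPLAY = 4), 2 without him (PSNOT = 2), and p = 0.5.
def model(p, psplay, psnot):
    ps_star = p * psplay + (1 - p) * psnot  # equation (4)
    diff = ps_star - psplay                 # equation (5), with PS = PS*
    mfe_missed = psnot - ps_star            # expected error when he sits
    return ps_star, diff, mfe_missed

ps_star, diff, mfe = model(0.5, 4.0, 2.0)
print(ps_star, diff, mfe)  # 3.0 -1.0 -1.0
```

The expected forecast error of −1 point for missed games is exactly p · (PSNOT − PSPLAY), the half-of-LOSS bias that the propositions below formalize.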

We now use the model to describe the evolution of the point spread bias as an injury spell progresses. For the first game of short spells, p = 1. Thus, the player's initial absence is a surprise, which leads to the first proposition.
Proposition 1: For the first game of short spells, PS∗ = PSPLAY, and therefore DIFF = 0.
Given that the player does not participate in games during the spell, the expected outcome is E(DP|not play) = PSNOT. Since the efficient point spread incorporates a non-zero probability of participation, it is a biased forecast (ex post), which is proposition 2.
Proposition 2: The forecast errors of an efficient point spread are biased for games missed during short injury spells.


This can be tested by calculating MFE(PS), which is predicted to be the difference between the expected outcome and the point spread. Thus, MFE(PS) = PSNOT − PS∗ = p · [PSNOT − PSPLAY] < 0. This proposition explains the point spread bias which we have already documented, provided that p is non-zero. We can be more precise, however. The value of the player to the team is measured by the loss in output due to his absence. This defines LOSS = E(DP|not) − E(DP|play) = PSNOT − PSPLAY in an efficient market. Our study of injury durations indicated that prob(play in game t | missed game t − 1) = 0.5. Hence we can sharpen this proposition for games 2–n of an injury spell.
Proposition 3: In games 2–n of an injury spell, the point spread adjustment (DIFF) will equal half the value of the injured player. Hence the mean forecast error will be 50 percent of the value of the player: MFE(PS) = 0.5 · LOSS.
Obtaining a measure of LOSS along with MFE(PS) allows us to infer the market's estimate of p. The analysis used above is symmetric in the sense that it applies not only during the injury spell, but also in the game when the player returns to the lineup. Thus, when the player returns to the lineup, the bias is reversed, since the expected outcome given participation is PSPLAY, and p < 1.
Proposition 4: The bias for the return game is PSPLAY − PS∗ = (1 − p) · [PSPLAY − PSNOT] > 0.
If p = 0.5 and constant throughout the spell, the return game bias is simply the mirror image of the injury game bias. We also predict that DIFF < 0, as above. Once a player returns to the lineup after a short spell, p subsequently returns to 1.0. We thus have
Proposition 5: After the initial return game, PS∗ = PSPLAY. Hence, we expect DIFF = 0, and the absence of forecast bias.
Long spells involve more serious injuries. By distinguishing long from short spells, we develop two additional propositions. These stem from the assumption that the injury produces no uncertainty regarding the player's status (p = 0).
Proposition 6: For long spells, PS∗ = PSNOT, and thus DIFF = PSNOT − PSPLAY = LOSS.
Since p = 0, the expected forecast error is E(DP|not play) − PSNOT = 0. Efficient point spreads thus display no ex post bias for long spells. Recall the assumption that incapacitating injuries are observed by the market when they happen. Hence, p = 0 for the first game.
Proposition 7: For the first game of long spells, PS∗ = PSNOT.


Thus, DIFF = LOSS, and PS∗ is unbiased. This proposition stands in contrast to its counterpart for short spells.
The model in this section provides us with an array of predictions about the behavior of point spreads during injury spells. Not only does it imply the ex post bias, it predicts the magnitude of the bias, and differences in the bias over the duration of the spell and across different types of injuries. Testing some, though not all, of these predictions requires knowledge of E(DP|play), or PSPLAY in an efficient market. Since wagering opportunities on NBA games are normally offered only on the day of the game, PSPLAY is not observed in situations where injuries are involved. It turns out, however, that a simple statistical technique provides very accurate estimates of PSPLAY, enabling tests of all seven propositions.

Empirical analysis of the nagging injury hypothesis

The propositions are tested in the section titled "Empirical tests of the partial anticipation hypothesis". The basis for estimating PSPLAY is presented in the section that follows, and the next section examines its statistical properties. This analysis shows that PSPLAY can be estimated with considerable precision.

A method for estimating PSPLAY

The method we use is like that of an event study, which requires an estimate of the expected return in order to compute an abnormal return. The former can be calculated using regression estimates of the parameters in the market model. Brown and Warner (1985) have studied the statistical properties of this method, and conclude that its estimates of abnormal returns are reasonably precise, with desirable statistical properties. This is so despite the fact that the market model is a very poor conditional predictor of stock returns: in sample, the average R2 of Brown and Warner's market model regressions was 0.10. We can do much better with point spreads out of sample.
The technique we use is motivated by the following. Suppose that the outcome of a game – the difference in score – is determined by luck, the home court advantage, the relative ability of the two teams, and idiosyncratic factors. Then score differences can be thought of as being generated by the following:

DP = g(ci, Si, −Sj, e, w)    (6)

Si and Sj are measures of the ability of teams i and j at full strength, ci is the home court advantage of team i, and e and w are random components. Each variable is assumed to be calibrated in terms of points (scoring). We assume that w is “luck” that cannot be anticipated, whereas e includes idiosyncratic factors (matchup problems, length of road trip, injured players, etc.) that may be known. It is assumed that e and w are uncorrelated with the teams’ abilities. Based on the ﬁrst section, we assume that PS = E(DP), and further, that g is a simple additive function.


Thus

PS = ci + Si − Sj + e    (7)

Recall that the object of this exercise is to obtain an estimate of PSPLAY, the point spread that would be observed if the injured player was expected to play. Since Si and Sj are team abilities at full strength, PSPLAY can be constructed if they, along with ci, are known. We estimate them using the following regression:

PStij = Shi · dhi − Svj · dvj + B · Itij + e    (8)

The estimation procedure uses the twenty games for each team played prior to each injury spell.14 dhi is a dummy variable which takes on the value of 1 when team i is the home team, and dvj is 1 when team j is the visitor. Shi and Svj are the coefficients of the team dummies, and are interpreted as the ability indexes. Since Shi and Svi differ, this specification embeds a team-specific home court advantage (ci) in the model. Itij is the difference in the number of injured players. Since we have these data, they are included in the regression to keep the estimates of Shi and Svj free of omitted variable bias, which would otherwise occur if point spreads in the estimation period were affected by the absence of an injured player. e is the idiosyncratic error term which remains after account is taken of observable injuries. Out-of-sample estimates of PSPLAY can be obtained by subtracting the visiting strength of team j from the home strength of team i:

PSPLAY = Shi − Svj

The accuracy of PSPLAY

Before using PSPLAY, we check the model's ability to predict point spreads – for non-injury games – out of sample. There are three criteria:

1 What percentage of the variation in point spreads is explained by out-of-sample predictions from the model?
2 What are the characteristics of the distribution of the forecast errors PS − PSPLAY?
3 Is there a discernible difference between the ability of actual and estimated point spreads to predict game outcomes?

For each of the six seasons, the model was successively re-estimated using samples including the most recent twenty games for each team. Out-of-sample forecasts (PSPLAY) were then generated for the next ﬁve games. This procedure yielded 3,567 predicted point spreads for non-injury games over the six-year period. The variance of actual point spreads for these games is 31.4 points. The residual variance of the forecast error, PS − PSPLAY, is 4.2. Hence, out of sample, the model explains more than 85 percent of the variation in point spreads. This is almost an order of magnitude greater than what market models used in event studies achieve in sample.
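As a toy illustration of regression (8), the sketch below builds the dummy design and recovers full-strength ratings with ordinary least squares. The teams, spreads, and injury differentials are made up for the example; the chapter's actual estimation uses rolling twenty-game samples per team:

```python
import numpy as np

# Hypothetical data: (home team, visiting team, injury differential I_tij,
# observed point spread). None of these numbers come from the chapter.
teams = ["ATL", "BOS", "CHI", "DET"]
games = [
    ("ATL", "BOS", 0, 2.5), ("CHI", "DET", 0, -1.0), ("BOS", "CHI", 1, 4.0),
    ("DET", "ATL", 0, -3.0), ("ATL", "CHI", 0, 3.5), ("BOS", "DET", 0, 6.0),
    ("CHI", "ATL", -1, -2.0), ("DET", "BOS", 0, -5.5), ("ATL", "DET", 0, 4.5),
]
k = len(teams)
X = np.zeros((len(games), 2 * k + 1))
y = np.zeros(len(games))
for g, (h, v, inj, ps) in enumerate(games):
    X[g, teams.index(h)] = 1.0       # home-strength dummy d_hi
    X[g, k + teams.index(v)] = -1.0  # visitor-strength dummy, entering as -d_vj
    X[g, 2 * k] = inj                # injury differential I_tij
    y[g] = ps

# Rank-deficient design: lstsq returns the minimum-norm solution.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
S_home, S_vis = beta[:k], beta[k:2 * k]

# Full-strength prediction for a hypothetical ATL (home) vs BOS game:
psplay = S_home[teams.index("ATL")] - S_vis[teams.index("BOS")]
```

Although the individual strength coefficients are identified only up to a common shift, the home-minus-visitor difference that defines PSPLAY is insensitive to that indeterminacy.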

[Figure 13.2: Distribution of forecast errors PS − PSPLAY. (A) 3,567 non-injury games; (B) 700 injury games.]

Note
The horizontal axes of these distributions are the magnitude of the difference between the actual point spread and the predicted point spread of the statistical model. Figure 13.2A is constructed by using the statistical model (estimated on a twenty-game sample) to forecast five games ahead, sequentially updating for each of the six seasons. Figure 13.2B displays the distribution of the error in predicting point spreads when players are injured.

The distribution of the forecast errors PS − PSPLAY is depicted in Panel A of Figure 13.2. The mean of the distribution is −0.003, with a standard deviation of 2.06. Less than a quarter of the forecast errors are more than 2 points away from zero. The distribution is thus concentrated on a narrow interval around zero, as it must be if we are to use the model to predict what point spreads would be in the absence of an injury. In contrast, observe the distribution of PS − PSPLAY for games which players miss due to injury, in Panel B of Figure 13.2. This distribution is clearly shifted to the left of its non-injury counterpart. Evidently, the method is precise enough to portray changes in the point spread due to observable factors such as injuries.
Returning to non-injury games, since the actual and predicted point spreads are very close to each other, it follows that their ability to forecast game outcomes is similar. Indeed, for each of the six years, the mean forecast errors of PS and PSPLAY (and their variances) are virtually identical. PSPLAY is thus an accurate and unbiased predictor of point spreads. We can therefore employ it in tests of the injury bias model.

Empirical tests of the partial anticipation hypothesis

The model implies differences in point spread bias depending on whether the spell was long or short, and whether the game is the first game of the spell, in the middle of the spell, or upon the player's return to the lineup. In order to make a sharp distinction, long spells are defined as those lasting ten or more games, and short spells as those lasting five games or less.
Table 13.3 presents summary statistics on the forecast errors by game. Panel A lists the results for short spells, and Panel B for long spells. In addition, Panel C tabulates results for the five-game sequence (for all spells) beginning with the game when the player returns. Column 1 provides the mean forecast error of the actual point spread (PS), which is used to examine the market's ex post bias. The loss to

Table 13.3 Forecast errors of point spreads by game missed

               MFE(PS)         DP − PSPLAY (LOSS)   PS − PSPLAY (DIFF)   N

A. Injury spells of five games or less (absolute value of t-statistics in parentheses)
Game 1         −1.03 (1.18)    −2.06 (2.35)         −1.03 (6.08)         185
Games 2–n      −2.16 (2.25)    −3.83 (3.91)         −1.67 (9.38)         145

B. Injury spells of ten or more games
Game 1          3.08 (1.39)     0.92 (0.43)         −2.15 (3.22)          14
Games 2–n      −0.56 (0.70)    −2.20 (2.66)         −1.64 (9.53)         193

C. Forecast errors of point spreads upon return (all injury spells)
Game 1          0.73 (0.92)    −0.16 (0.43)         −0.89 (5.88)         207
Games 2–5      −0.01 (0.01)    −0.17 (0.40)         −0.17 (1.98)         778

Notes
This table calculates forecast errors according to the sequence of the games missed by the injured player. Game 1 is the first game of the injury spell, etc. Panel C tabulates the statistics for the first five games after completion of the injury spell. MFE(PS) is the mean forecast error of the market point spread. LOSS is the average of DP − PSPLAY, that is, a measure of the effect of the player's absence (or lack of it, in Panel C) on the game. DIFF is the average of PS − PSPLAY, that is, the point spread reaction in the market to the injury situation.


the team due to the absence of the injured player is LOSS = E(DP) − PSPLAY. The mean forecast error of PSPLAY is thus our estimate of LOSS, which is tabulated in column 2. In column 3 is DIFF, the difference between the actual spread and PSPLAY.
One can see immediately from inspection of Panel B that for long spells, the hypothesis of no bias cannot be rejected. For short spells, the model predicts bias for games 2–n, and furthermore that 0.5 · LOSS = MFE(PS). Indeed, MFE(PS) is negative, and is 0.56 · LOSS, quite close to the predicted value. Hence, the ex post bias in the point spread provides an estimate of the return probability (0.56) which closely approximates the conditional return probability observed in the sample.
The model fares less well in its implications for the first game of the injury spells. The point spread response for short spells, measured by DIFF, is significantly smaller in magnitude for game 1 than in subsequent games, as expected. Yet, the spread does drop by a point (t = 6.08), indicating that the model omits information that is factored into point spreads. Note, however, that teams with injured players suffer a significantly smaller LOSS (−2.06 vs −3.83 points) in game 1 than in subsequent games. This indicates that the surprise hypothesis has some merit, as the opposing team is unable to take complete advantage of the player's absence in the first game of the injury spell.15
When players return to the lineup, the forecast error of the first game is positive (0.73) but not significant (t = 0.92), and DIFF remains significantly negative (−0.89, t = 5.88). These results are in rough accord with proposition 4.16 The forecast errors of the point spread thereafter are not statistically different from zero (MFE(PS) = −0.01, t = 0.01), as implied by proposition 5. On three counts the model performs quite well. The point spread is unbiased for long injury spells, where the selection bias problem stemming from partial anticipation is not relevant.
In the middle of short injury spells, point spreads contain an estimate of a player’s return to action that is quite close to that implied by a duration analysis of the injury spells. Finally, point spreads for games played after the player’s return are unbiased. The model fares less well in the transition games surrounding the injury spells, most likely due to the stark probability structure that is assumed.
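The three statistics used above (MFE, LOSS, DIFF) can be sketched in a few lines. The game records below are synthetic and the field names are illustrative, not Sauer's data:

```python
# MFE, LOSS and DIFF as defined in the text (synthetic games; field names
# are illustrative, not from the original dataset).
# DP     = actual point difference in the game
# PS     = market point spread
# PSPLAY = predicted spread had the injured player been available
games = [
    {"DP": -5.0, "PS": -3.5, "PSPLAY": -1.0},
    {"DP": -2.0, "PS": -2.5, "PSPLAY": -0.5},
    {"DP": -6.0, "PS": -4.0, "PSPLAY": -2.0},
]

def mean(xs):
    return sum(xs) / len(xs)

mfe_ps = mean([g["DP"] - g["PS"] for g in games])      # forecast error of the spread
loss   = mean([g["DP"] - g["PSPLAY"] for g in games])  # effect of the player's absence
diff   = mean([g["PS"] - g["PSPLAY"] for g in games])  # the market's spread reaction

# By construction MFE(PS) = LOSS - DIFF: the spread is unbiased only when the
# market's reaction DIFF fully absorbs the loss from the player's absence.
```

The identity in the last comment is what drives the selection argument in the text: any gap between the market's reaction and the true loss shows up as ex post bias.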

Conclusion

This chapter began with what was expected to be a simple exercise: analyzing the response of market prices to a sample of repeated events. In the wagering market, this exercise encompasses not just events and price changes, but the outcomes themselves, enabling a comparison between price changes and outcomes that is generally infeasible with stock prices. Hence, the findings of this exercise provide a small piece of evidence on the efficiency of price responses to changes in information that is difficult to replicate in other settings.

Point spreads are biased predictors of game scores during injury events, which is potential evidence of inefficient pricing in the wagering market. An alternative

Player injuries and price responses

161

explanation is based on the idea that many injuries are partially anticipated. A pricing model which combines empirical features of player injuries with the selection method for determining injury games predicts variations in the ex post bias which are consistent with both the data and efficient pricing.

The bottom line is that price responses in the wagering market contain efficient estimates of the value of an injured player. Extracting this signal from a sample of injury games is, unfortunately, somewhat tricky. As with other events of interest in financial economics, one needs to develop knowledge specific to the class of event being studied before interpreting the forecast error. This creates a problem. Explanations of apparently inefficient pricing for a class of events reduce to story-telling exercises unless the stories can be tested. Sorting through the details that are potentially relevant to each class of event can take many papers and many years, as indicated by various takeover puzzles (Malatesta and Thompson, 1985; Bhagat and Jefferis, 1991).

In this chapter we develop such a story using facts about injury durations in the NBA. A model that makes use of these facts has testable implications for the behavior of the ex post forecast bias across and within injury spells. These implications are easily tested, and are generally consistent with the data. Although this is good news for event studies – price responses to injury events are efficient – these results highlight the problems involved in obtaining accurate estimates of value from price changes when the ex ante probability of an event is not well known.

Acknowledgments

I'm grateful to Jerry Dwyer, Ken French, Mason Gerety, Alan Kleidon, Mike Maloney, Robert McCormick, Harold Mulherin, Mike Ryngaert, and seminar participants at Clemson, Colorado, Indiana, Kentucky, Penn State and New Mexico for comments on earlier drafts.

Notes

1 The symmetry condition is rarely examined, however, and in some cases this is critical (baseball over/under wagers are an example). If the ex ante distribution is not symmetric, then the distribution of the forecast errors will be skewed. In this case the mean forecast error will not be zero even when PS = median(DP), and hence a test of equation (3) is inappropriate.
2 Tryfos et al. (1984) were the first to systematically examine this bound.
3 Rejection of equation (3) would motivate consideration of transaction costs. For example, simple betting rules proposed in Vergin and Scriabin (1978) are re-evaluated by Tryfos et al. (1984), who use statistical tests which explicitly recognize the transaction costs. The conclusion that these rules are profitable is overturned by these tests.
4 The point spread data are from Bob Livingston's The Basketball Scoreboard Book. There were 943 games played in each of the six seasons. Point spreads were not offered for some games. The scores were obtained from the Sporting News Official NBA Guide.
5 Henceforth subscripts are dropped except where needed.

6 The skewness coefficients m3/m2^(3/2) are 0.02 (0.05) and 0.10 (0.05) for the point difference and forecast error distributions, respectively (m3 and m2 are the third and second moments of the distribution, with standard errors of the coefficients in parentheses). This coefficient is zero for a normal distribution, which is the usual standard of comparison.
7 In one case, the player checked into a drug rehabilitation clinic and missed several games. Although this is not an injury in a precisely defined sense, these games were retained in the injury game sample for simplicity of definition: if a game is missed, it is assumed to be an injury game. Differences in injury severity and so on are not commonly divided by bright lines, so we adopt a simple definition here as well.
8 There is ample evidence in newspaper reports to support this. One example is the sequence of hamstring injuries to World B. Free in 1985. Free is quoted in the Cleveland Plain Dealer of 11/21/85: "I had the left one for about two weeks but I didn't say anything. Then I had put too much pressure on the right one and hurt it. . . . Things like that happen . . . ."
9 For example, consider the following remark of Clemson University's sports information director, referring to star running back Terry Allen: "It's the classic case of not knowing if he can play until he warms up" (Greenville News, Oct. 21, 1989). Allen suited up, but didn't play. He returned to the lineup later in the season.
10 Censored spells are those terminated by the end of the season rather than a return to action.
11 The technique is described in Kiefer (1988), especially pp. 662–3. Since the sample used was defined by excluding spells of six games or longer, this estimate is biased upward. This exclusion is the only practical means of separating incapacitating from nagging injuries. The bias induced is very slight, however, since only 0.0312 of the sample would be expected to incur spells of six games or more if the return probability were indeed 0.5. As a means of evaluating the bias, an estimate of the hazard was calculated by treating all five-game spells (there are only eight such games in the 219-game sample) as censored at four games, that is, as being of length >= four games rather than five. The estimate obtained is 0.505 (std error = 0.036), virtually the same as that reported in the text.
12 In contrast, the return rates implied by p = 0.4 {0.64, 0.784, 0.870} are consistently below that observed. The rates given by p = 0.6 {0.84, 0.914, 0.956} are slightly above that observed for the second game, but close to the mark for the third and fourth games.
13 This is obviously untrue, but is a convenient way of imposing the condition that this probability is the same for players on both teams. Nagging injuries are a factor on both sides of the score.
14 Diagnostic checks indicated that ten-game samples yielded accurate estimates as well. Hence, for injury spells commencing in the eleventh through twentieth game of each season, the maximum available number of pre-injury games was used. Summary statistics for regressions using samples of the first ten and twenty games of the 1982 season are presented in Appendix A.
15 Teams attempt to keep certain injuries secret for exactly this reason. A recent example is the New York Giants' secrecy concerning an injury to quarterback Phil Simms' throwing hand, suffered in practice prior to a 1990 NFL playoff game. Advance knowledge can suggest successful strategies to the opponent.
16 The magnitude of the bias (in absolute value) is less in the return game than during the spell, indicating that the return probability may increase with spell duration. The difference in bias can be traced to a decline in DIFF of 0.78 points (t = 3.23) in the return game relative to games 2–N of the spell. Note that this decline is sharply reduced (to 0.40 points, t = 1.37) if one compares the last game in a short spell to the return game. Recall also that the data on the actual injuries hint at an increasing hazard for nagging injuries, which seems reasonable.
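The discrete-time hazard estimation described in note 11 can be sketched as a simple life-table calculation in the spirit of Kiefer (1988). The spells below are synthetic, not the chapter's 219-game sample:

```python
# Discrete-time hazard (return probability) for injury-spell durations.
# Spells are synthetic illustrations. Each spell is (duration, censored):
# censored spells ended with the season rather than a return to the lineup.
spells = [(1, False), (1, False), (2, False), (2, False),
          (3, False), (2, True), (4, False), (1, False)]

def hazard(spells, t):
    """P(return after game t | still out at game t): censored spells count
    as at risk but never as returns."""
    at_risk = sum(1 for d, _ in spells if d >= t)
    returns = sum(1 for d, c in spells if d == t and not c)
    return returns / at_risk if at_risk else None

# A constant hazard p implies geometric durations, with cumulative return
# probability 1 - (1 - p)**t by game t -- e.g. p = 0.4 gives 0.64 by game 2,
# matching the rates quoted in note 12.
def cumulative_return(p, t):
    return 1 - (1 - p) ** t
```

Censoring enters only through the risk set, which is why spells cut short by the season's end bias the estimate little when the hazard is near 0.5.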


Appendix

1982 season. Observations: 230; R-squared: 0.969.

Variable    Estimate      t-value
HAWKS       −3.334359     −6.139491
CELTICS      2.910794      4.645186
BULLS       −6.229776    −11.396556
CAVS       −11.091149    −20.562050
MAVS        −4.196604     −7.130448
NUGGETS     −4.830953     −8.943902
PISTONS     −4.356497     −7.775327
WARRIORS    −6.065996     −9.978129
ROCKETS    −11.510297    −17.661852
PACERS      −6.846988    −11.348433
CLIPPERS    −8.742780    −15.109341
LAKERS       3.052869      5.209969
BUCKS        0.430284      0.773051
NETS        −3.712991     −6.807746
KNICKS      −6.676410    −11.700208
SIXERS       2.710749      5.012054
SUNS        −2.643456     −4.196462
BLAZERS     −2.265396     −4.150938
KINGS       −3.759032     −6.027770
SPURS       −2.050936     −3.536199
SONICS       1.049140      1.616020
JAZZ        −8.272100    −15.026895
BULLETS     −3.932458     −6.694430
HO-HAWKS     0.866018      1.428633
HO-CELTI     6.213943     11.137314
HO-BULLS    −1.819314     −3.059896
HO-CAVS     −9.438899    −16.256515
HO-MAVS     −2.065136     −3.323394
HO-NUGGE     0.512874      0.821924
HO-PISTO    −1.133018     −1.886866
HO-WARRI    −1.040707     −1.787460
HO-ROCKE    −8.786952    −15.468773
HO-PACER    −3.798072     −6.649878
HO-CLIPP    −5.959837    −10.205729
HO-LAKER     7.183994     12.348228
HO-BUCKS     3.482937      6.311453
HO-NETS      0.121674      0.197153
HO-KNICK    −3.169490     −5.575937
HO-SIXER     6.855646     11.335485
HO-SUNS      2.692941      4.853336
HO-BLAZE     0.825848      1.364113
HO-KINGS    −0.813232     −1.422055
HO-SPURS     1.410239      2.455259
HO-SONIC     4.058689      7.286932
HO-JAZZ     −5.331165     −8.976392
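The appendix regression can be sketched structurally: a dummy for each team plus a home ("HO-") dummy for each team, fitted by least squares. The games and the exact coding of the dummies below are assumptions for illustration, not the chapter's specification:

```python
# Structural sketch of a point-spread prediction regression with team dummies
# and home ("HO-") dummies. Data are synthetic; only the design structure
# follows the appendix.
import numpy as np

teams = ["HAWKS", "CELTICS", "BULLS"]
# (home team, away team, home score minus away score) -- synthetic games
games = [("HAWKS", "CELTICS", -4), ("BULLS", "HAWKS", 6),
         ("CELTICS", "BULLS", 8), ("HAWKS", "BULLS", -2),
         ("CELTICS", "HAWKS", 10), ("BULLS", "CELTICS", -3)]

def design_row(home, away):
    # away-team ability enters negatively, home-team ("HO-") dummies positively
    return ([-1.0 if t == away else 0.0 for t in teams] +
            [1.0 if t == home else 0.0 for t in teams])

X = np.array([design_row(h, a) for h, a, _ in games])
y = np.array([d for _, _, d in games], dtype=float)
# lstsq returns a minimum-norm solution even though overlapping dummy groups
# make the design rank deficient
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
fitted = X @ beta   # the model's predicted point differences (PSPLAY-style)
```

The fitted values play the role of PSPLAY in the text: a spread prediction built only from team abilities and home-court effects, with no injury information.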

References Bhagat, Sanjai and Richard H. Jefferis, 1991, “Voting power in the proxy process: the case of antitakeover charter amendments,” Journal of Financial Economics 20, 193–225.


Brown, Stephen J. and Jerold B. Warner, 1985, "Using daily stock returns: the case of event studies," Journal of Financial Economics 14, 3–31.
Dare, William H. and S. Scott McDonald, 1996, "A generalized model for testing the home and favorite team advantage in point spread markets," Journal of Financial Economics 40, 295–318.
Golec, Joseph and Maurry Tamarkin, 1991, "The degree of inefficiency in the football betting market: statistical tests," Journal of Financial Economics 30, 311–323.
Kiefer, Nicholas M., 1988, "Economic duration data and hazard functions," Journal of Economic Literature 26, 646–679.
Malatesta, Paul H. and Rex Thompson, 1985, "Partially anticipated events," Journal of Financial Economics 14, 237–250.
Sauer, Raymond, 1998, "The economics of wagering markets," Journal of Economic Literature, forthcoming in December.

14 Is the UK National Lottery experiencing lottery fatigue?
Stephen Creigh-Tyte and Lisa Farrell

In this chapter recent innovations to the UK National Lottery on-line lotto game are considered. We suggest that innovations are necessary to prevent players from becoming tired of the game and therefore to keep sales healthy. We also examine how the lottery operators have tried to stimulate the wider betting and gaming market and maintain interest in the on-line game through the introduction of peripheral games and products. In summary, we conclude that the UK lottery market has been stimulated and expanded, in line with the available evidence from lotteries elsewhere in the world.

Introduction

This chapter addresses the concept of lottery fatigue in the context of the UK National Lottery games, which were launched at the end of 1994. Creigh-Tyte (1997) provides an overview of the policy related to the UK National Lottery's introduction and Creigh-Tyte and Farrell (1998) give an initial overview of economic issues. Lottery fatigue is the phenomenon experienced by many state/national lotteries whereby players have been found to tire of lottery games (reflected in a downward trend in sales) and so require continual stimulation to entice them to play (see Clotfelter and Cook, 1990, for a discussion of the US experience up to the late 1980s). This is the usual explanation given for the diversification of lottery products. As the US National Gambling Impact Study Commission (1999) comments: 'Revenues typically expand dramatically after the lottery's introduction, then level off, and even begin to decline. This "boredom" factor has led to the constant introduction of new games to maintain or increase revenues.'

In this chapter we review the latest facts and figures pertaining to the sale of National Lottery games and the recent economic research on the lottery games. To date there has been no single empirical analysis of the impact of the launch of peripheral games on the main on-line game, so we draw what evidence we can from the available research. We begin by looking at the performance of the on-line game since its launch in November 1994. Then we look at the Thunderball game, consider the impact of special one-off draws and give an introduction to the latest lottery game, Lottery

166

S. Creigh-Tyte and L. Farrell

Extra. A brief introduction to the market for scratch cards is then provided, followed by the conclusions.

The on-line game

The on-line game is the central product in the National Lottery range. It was launched in November 1994 and has been running continually (on a once-weekly and more recently on a twice-weekly basis) ever since. Given that 2001 was the final year of operations under the initial licence, it is an appropriate time to review the game's performance. The on-line game is the largest National Lottery product in terms of weekly sales figures, at around £70–75 million in a normal week (i.e. a week with no rollover draws). Figure 14.1 shows the weekly sales figures by draw from the game's launch until 31 March, 2002. The spikes in the distribution represent weeks that contained at least one rollover or superdraw, and draw 117 is the first of the midweek draws. Whilst sales per draw have fallen since the introduction of the midweek draw, the weekly sales figures are higher than when there was just a single Saturday draw.

Figure 14.1 National Lottery on-line weekly ticket sales (£ per week, by week number from launch to 31 March 2002).

Conscious selection

One way to ensure the long-term success of the on-line game is to encourage players to use systems to select their numbers or to play the same number every week. This results in players getting locked into the game and makes them more likely to play regularly. Evidence that players do behave in this way can be seen from the startling feature of the on-line game that it exhibits many more rollovers than could have been generated by statistical chance, as can be seen from Figure 14.1. This can only arise from individuals choosing the numbers on the lottery tickets they buy in

a non-uniform way.1 That is, many more individuals choose the same combinations of numbers than would occur by chance if individuals selected their numbers uniformly. The result is that the probability distribution of numbers chosen does not follow a uniform distribution, whereby the probability of each number being chosen is one in forty-nine. Thus, the tickets sold cover a smaller set of possible combinations than would have been the case had individuals chosen their numbers in a uniform way, and there will be more occasions when there are no jackpot prize winners.2

The implications of this non-uniformity and (unintentional) co-ordination between players are important. If players realise that such non-uniformity is occurring then they will expect the return to holding a ticket to be smaller (for any given size of rollover) than it would be if individuals were choosing their numbers uniformly. Essentially, the non-uniformity increases the probability that there will be a rollover and this changes the behaviour of potential ticket purchasers (provided they are aware of it). Haigh (1996) presents evidence of conscious selection among players, and Farrell et al. (2000a) show that whilst conscious selection can be detected it has little impact on estimates of the price elasticity of demand for lottery tickets.

In contrast, most lotteries also offer a random number selector that players can use to pick their numbers. In the UK this is called 'Lucky Dip', but it is usually called 'Quick Pick' elsewhere. This is not, however, normally introduced until players have had a chance to develop a system for selecting their numbers, and so it may simply attract marginal players who do not want to invest much time in the purchase of a ticket, or those that have set numbers but who also try a Lucky Dip ticket. Simon (1999) argues that this is one reason why Camelot may have delayed introducing the Lucky Dip facility for a year: to 'entrap' players who feel they cannot stop buying tickets with a certain number combination because they have already played it for a long period. In the case of the UK game, the Lucky Dip was not introduced until March 1996 and represents the first innovation in the game intended to regenerate interest from players who might have been losing interest. It represents a new way for players to play the lottery.3
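The mechanism behind conscious selection can be illustrated with a toy simulation (a hypothetical 6-from-10 lotto, not the UK 6/49): concentrating ticket choices on popular combinations reduces the set of combinations covered and so raises the rollover rate.

```python
# Toy simulation of conscious selection (assumptions, not UK data): a 6-from-10
# lotto with 120 tickets per draw. When choices concentrate on "popular"
# combinations, fewer distinct combinations are covered, so draws with no
# jackpot winner (rollovers) are more frequent than under uniform choice.
import itertools
import random

random.seed(1)
combos = list(itertools.combinations(range(1, 11), 6))  # 210 combinations
n_tickets, n_draws = 120, 2000

def rollover_rate(weights):
    rollovers = 0
    for _ in range(n_draws):
        sold = set(random.choices(combos, weights=weights, k=n_tickets))
        if random.choice(combos) not in sold:   # the machine draws uniformly
            rollovers += 1
    return rollovers / n_draws

uniform = [1.0] * len(combos)
popular = [10.0 if i < 20 else 1.0 for i in range(len(combos))]  # 20 hot picks

rate_uniform = rollover_rate(uniform)
rate_popular = rollover_rate(popular)
# rate_popular comes out clearly above rate_uniform
```

The key point the simulation makes concrete is that the draw itself remains uniform; only the players' choices are skewed, yet that alone is enough to generate "too many" rollovers.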

The importance of rollovers

Rollovers are good for the game's stakes for two reasons: first, they attract high levels of sales, and second, the succeeding draws also see increased sales. Farrell et al. (2000b) show that the effect of a rollover on sales lasts for up to five draws following the rollover. They use a dynamic model specification to estimate both the long- and short-run elasticity of the demand for tickets. The short-run elasticity simply tells us how demand changes in a single period following the price change, whereas the long-run elasticity tells us the dynamic response of demand to the price change after all responses and lags have worked through the system. The size of the long-run elasticity is of interest as it can signal addiction among players. The general hypothesis is that the demand for an addictive good will be higher the higher demand was in the recent past.4 It is found that there is evidence of addiction among lottery players; the short-run (static) elasticity is smaller than the long-run

168

S. Creigh-Tyte and L. Farrell

(dynamic) elasticity. The long-run elasticity takes account of the fact that price changes have more than a single period effect and is found to be approximately unity. Moreover, since rollovers boost sales they may be a cause of addiction.

Sales following a rollover are higher than the level of sales prior to the rollover, and this is known in the industry as the 'halo' effect. Thus, rollovers have a greater impact than just increasing sales in the week in which they occur; there is a knock-on effect on the following draws' sales. Players are attracted by the rollover and either new players enter or existing players play more, or both; after the rollover those who entered remain and those who increased their purchases continue to purchase at the higher level. Shapira and Venezia (1992) find that demand for the Israeli lotto increased in rollover weeks, and this added enthusiasm for the lotto carried over to the following week's draw. In the UK, Farrell et al. (2000b) show that the halo decays within 5–6 draws, by which point sales have returned to their post-rollover level (Figure 14.2). However, a close succession of rollovers would have the effect of causing sales to ratchet upwards.

The effect of rollovers on the game is, therefore, very important and complex. Were it not for the presence of rollovers, sales would have a strong negative trend: players would soon tire of the game, experiencing lottery fatigue. Estimates by Farrell et al. (2000b) suggest that the half-life of the UK game would have been 150 draws if there were no rollovers. That is, sales would halve every three years (of weekly draws) were it not for the presence of rollovers in the game. Mikesell (1994) found that in the case of US lotteries, sales tend to have peaked after about ten years of operation. Rollovers are therefore essential for stimulating interest in the game, and this is reflected in the amount of advertising that the operators give to rollover draws and the fact that lottery operators even create artificial rollovers in the form of 'superdraws'.
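The halo decay, the 150-draw half-life and the short-run versus long-run elasticity gap can all be expressed in one simple geometric-decay sketch. The parameter values below are illustrative assumptions, not estimates from Farrell et al.:

```python
# Geometric (Koyck-type) sketch of the sales dynamics described above; all
# numbers are illustrative assumptions, not published estimates.
import math

# A 150-draw half-life for sales (absent rollovers) pins down a per-draw
# persistence factor b_trend satisfying b_trend**150 = 0.5:
b_trend = 0.5 ** (1 / 150)

# A rollover ("halo") shock decaying with per-draw persistence 0.6 falls to
# under 5 per cent of its initial size within six draws:
halo_left_after_6 = 0.6 ** 6

# In the same specification the long-run elasticity is the short-run
# elasticity scaled up by 1/(1 - persistence):
short_run, persistence = -0.62, 0.6       # made-up numbers
long_run = short_run / (1 - persistence)  # a -1.55-style long-run response
```

The design point is that even a modest per-draw persistence compounds: a small short-run response implies a much larger cumulative long-run response, which is exactly the addiction signature discussed above.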

Figure 14.2 The halo effect for the case of the UK National Lottery: mean sales as a proportion of rollover-week sales (actual and fitted decay) over weeks 1–7, where week 1 is the rollover week.

The choice of take-out rate

The 'price' elasticity of demand for lottery tickets shows how demand varies with the expected value of the return from a ticket, and it is this elasticity that is relevant in assessing the merits of the design of the lottery and the attractiveness of potential reforms to the design. That is, it tells us how demand would vary in response to changes in the design of the lottery – in particular, the tax rate on the lottery or the nature of the prizes. Lotteries are typically operated to maximise the resulting tax (or 'good causes') revenue, which is usually a fixed proportion of sales. Thus, knowledge of the price elasticity is central to choosing an appropriate take-out rate (see Appendix).

The methodology to estimate the price elasticity of demand for lottery tickets is relatively simple. Price variation is derived from the fact that lottery tickets are a better bet in rollover than in normal weeks. The Appendix to this chapter shows how the expected value of a lottery ticket is derived. Previous work (outside of the UK) has attempted to estimate this elasticity by looking at how demand varies in response to actual changes in lottery design across time or differences across states.5 However, such changes have been few and far between, and limited attempts have been made to control for other changes and differences that may have occurred. An important exception is Clotfelter and Cook (1990), who estimate the elasticity of sales with respect to the expected value of holding a ticket.6 The current estimates for the UK also exploit the changes in the return to a ticket induced by 'rollovers', which occur when the major prize (the jackpot) in one draw is not won and gets added to the jackpot prize pool for the subsequent draw. This changes the expected return to a ticket in a very specific way. In particular, the expected return rises in a way that cannot be arbitraged away by the behaviour of agents.

The elasticities generated by this method are published in Farrell et al. (2000) and Forrest et al. (2000). Farrell et al. report estimates of −1.05 (in the short run) and −1.55 (in the long run).7 Gulley and Scott (1989) report an estimate of −1.03.8 Although Europe Economics (2000) argues that 'studies using American data typically find a lower elasticity than for the UK, with an estimated elasticity closer to one, which is the revenue maximising level of elasticity', in fact, the UK results are broadly similar to those found for the US state lotteries. Gulley and Scott (1989) also use price variation arising from changes in the expected value caused by rollovers. They report elasticities of −1.15 and −1.2 for the Kentucky and Ohio lotteries, and an elasticity of −1.92 for the multi-state Massmillions lottery. The long-run elasticity given in Farrell et al. suggests that the take-out rate could be lowered to increase sales revenue and thus the money going to good causes. However, the short-run elasticity and the estimate of Forrest et al. are not statistically significantly different from one, suggesting that the current take-out rate is right.
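The link between the price elasticity and the revenue-maximising take-out rate can be sketched with a stylised iso-elastic demand (no rollovers; the numbers are assumptions, not UK estimates):

```python
# Stylised link between the price elasticity and the take-out rate.
# The effective price of a £1 ticket is the take-out t, i.e. one minus the
# expected value returned as prizes; tax/good-causes revenue is t * sales.
def revenue(t, elasticity=-1.05, scale=100.0):
    sales = scale * t ** elasticity   # iso-elastic demand in the effective price
    return t * sales

# With demand elastic (elasticity below -1), cutting the take-out raises
# revenue; at an elasticity of exactly -1, revenue is flat in the take-out.
r_low_takeout = revenue(0.40)
r_high_takeout = revenue(0.55)
r_unit_a, r_unit_b = revenue(0.40, -1.0), revenue(0.55, -1.0)
```

This is why an estimated elasticity close to −1 is read as evidence that the current take-out rate is about right: at that point small changes in the rate leave revenue essentially unchanged.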

The introduction of the midweek draw

Over a hundred lotteries worldwide run midweek draws, and the majority of these are held on a Wednesday. In general, innovations to games are an endogenous response


to flagging sales. The midweek draw has seen lower sales than the Saturday draw, but total weekly sales have risen (as can be seen from Figure 14.1). This second draw is a replica of the Saturday draw and therefore ensures that players who 'own' a set of Saturday night numbers will become locked into the midweek draw as well.

An interesting question is whether the price elasticity of demand across the two draws is the same, as this determines if the optimal take-out rate across each draw should be the same. Forrest et al. (2000) calculate that the Saturday elasticity is −1.04 and the Wednesday elasticity is −0.88, and find neither estimate to be statistically significantly different from one. Farrell et al. (1998) also test if players respond in the same way to price incentives across the lotto draws (i.e. Saturday and Wednesday). When considering the separate samples it appears that the demand on Wednesdays is less elastic than the demand on Saturdays. Examination of the associated standard errors reported in the paper, however, shows that the elasticities are not statistically significantly different from each other, and this explains why none of the interaction terms in their model, indicating a change in the slope of the demand curve over the two types of draw for the full sample regression, are significant. The significance of the Wednesday dummy in the full sample regression implies that there is a change of intercept: sales are significantly lower on Wednesdays than Saturdays. In general, the results suggest that the demand curve shifts backwards towards the origin on Wednesdays, but the elasticity of demand is unchanged. Furthermore, there is no evidence that players engage in inter-temporal substitution, given that fewer people play on Wednesday rollovers than on Saturday rollovers despite the higher expected return.

To date, lower sales on Wednesdays have been continually boosted through frequent topped-up jackpot 'superdraws', but it is important to remember that the greater the frequency of the 'superdraw', the quicker players will tire of this innovation. It is, therefore, clear that fewer people play on Wednesdays than Saturdays, but the introduction of the midweek draw has been successful in increasing the overall level of sales.

The question that naturally occurs is whether there is an optimal frequency of draws. To date there is no research on how the frequency of draws affects participation in the game. However, logically, the closer together the draws, the easier it is to inter-temporally substitute play across the draws. This could result in low levels of play in normal draws as players wait for rollovers to occur. Whilst low levels of play in normal draws will increase the number of rollovers, the size of each rollover will be small and so the effect of a rollover in attracting high sales diminishes.

Thunderball

The third innovation to the current on-line game has been the introduction of the Thunderball game. This numbers game is different in that it has the format of a typical lottery but is not pari-mutuel.9 An interesting feature of the paper by Forrest et al. (2000) is what the time trends reveal for the on-line game. The linear


trend in the model is positive, reflecting growing sales as the game was introduced and the boost to sales given by the introduction of the midweek draw. However, the quadratic trend term is more interesting (and is negative) and suggests that interest in the midweek draw fell around June 1998. Camelot's response to this falling interest was the introduction of the Thunderball game. Figure 14.3 shows the sales path for the Thunderball game since its first draw on Saturday 12th June, 1999: initial sales of £6.4 million per game have trended downwards to £5.2 million in March 2002.

Whilst the odds of winning Thunderball are much better than those for the on-line game, the former is still considerably less popular. This may in part be explained by the fact that the value of the top prize is relatively small compared to that offered by the on-line game (see Table 14.1). Current research by Walker and Young shows, in the context of the on-line game, that the skew in the prize distribution (which allows players to receive very large jackpots) is a key factor in the game's success. Research in the context of other betting markets by Golec and Tamarkin (1998) for the case of racetrack bettors and Garrett and Sobel (1999) for the case of US lottery games also shows that bettors like skewness in the distribution of prizes. The Thunderball game does have a skewed prize distribution, but it appears that it is not sufficiently skewed: the value of the top prize is not large enough to attract players to the game.

One of the important lessons learnt from the US state lotteries is that single large games (such as the multi-state lotteries) generate greater amounts of revenue than numerous small games. It is, therefore, not surprising to find that the Thunderball game attracts sales of only around £5 million a week, compared with around £70–75 million for the on-line game. Such games may pick up marginal sales, but care must be taken not to allow them simply to divert resources away from the main game, as this would be detrimental to the long-term future of the main game.

Figure 14.3 Thunderball sales (£ per draw, by draw number).

Table 14.1 Ways to win at Thunderball

Winning selection               Odds            Prize value (£)
Match 5 and the Thunderball     1 : 3,895,584   250,000
Match 5                         1 : 299,661     5,000
Match 4 and the Thunderball     1 : 26,866      250
Match 4                         1 : 2,067       100
Match 3 and the Thunderball     1 : 960         20
Match 3                         1 : 74          10
Match 2 and the Thunderball     1 : 107         10
Match 1 and the Thunderball     1 : 33          5

Source: http://www.national-lottery.co.uk
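The odds in Table 14.1 are reproduced exactly by a 5-from-34 main draw plus a 1-from-14 Thunderball. That design is inferred here from the quoted jackpot odds (C(34,5) × 14 = 3,895,584); it is not stated in the chapter itself:

```python
# Reproducing the Table 14.1 odds from an assumed 5-from-34 main draw plus a
# 1-from-14 Thunderball (inferred from the quoted jackpot odds, not stated in
# the chapter).
from math import comb

def one_in(matched, thunderball, n=34, k=5, tb=14):
    """1-in-x odds of matching `matched` main numbers, with or without the
    Thunderball."""
    ways = comb(k, matched) * comb(n - k, k - matched)  # hypergeometric count
    p = ways / comb(n, k)
    p *= (1 / tb) if thunderball else (tb - 1) / tb
    return 1 / p

jackpot = one_in(5, True)    # 3,895,584, as in the table
match4_tb = one_in(4, True)  # about 26,866
match3 = one_in(3, False)    # about 74
```

Every row of the table rounds to the value this hypergeometric calculation gives, which is what makes the 5/34 + 1/14 reading of the game's design plausible.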

Special one-off draws

Christmas 2000 and 2001 saw further innovations to the format of the standard on-line game. These came in the form of a pari-mutuel game where players paid £5 for entry into two draws. The idea was to have two very large jackpots that would generate extra interest and revive a general interest in lottery play. Big Draw 2000 had two draws: one on 31st December, 1999 and one on 1st January, 2000. Big Draw 2001 had two draws on 1st January, 2001: one at 12.12 a.m. and the second at 12.31 a.m. These games are in part copies of Italy's Christmas lottery draw, which attracts a huge number of players. Table 14.2 shows the number of winners and the size of the prizes for Big Draw 2001.

Table 14.2 Big Draw 2001

Category     Prize (£)   Winners   Total (£)   Percentages
Jackpot      0           0         0           0.0
4 + bonus    0           0         0           0.0
4 match      54,587      43        2,347,241   25.9
3 + bonus    2,650       103       272,950     3.0
3 match      260         3,212     835,120     9.2
2 + bonus    163         3,452     562,676     6.2
2 match      57          88,676    5,054,532   55.7
Total                    95,486    9,072,519   100.0

Source: http://www.national-lottery.co.uk
Notes
Game 1: winning years drawn at 12.12 a.m. GMT on Monday 1st January, 2001 (sorted order): 1909 1920 1931 1982 1992; Bonus: 1911.
Game 2: winning years drawn at 12.31 a.m. GMT on Monday 1st January, 2001: First year: 1,620; Second year: 2,438. Number of Game 2 jackpot (£1 million) winners: 5.
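The internal consistency of Table 14.2 can be checked directly: each row's total should equal winners times prize, and the row totals should sum to the grand total.

```python
# Consistency check on Table 14.2, using the prize values, winner counts and
# totals as given in the table.
rows = [
    ("Jackpot",   0,     0,     0),
    ("4 + bonus", 0,     0,     0),
    ("4 match",   54587, 43,    2347241),
    ("3 + bonus", 2650,  103,   272950),
    ("3 match",   260,   3212,  835120),
    ("2 + bonus", 163,   3452,  562676),
    ("2 match",   57,    88676, 5054532),
]
for name, prize, winners, total in rows:
    assert prize * winners == total, name   # each total = winners * prize

grand_total = sum(total for *_, total in rows)        # 9,072,519
total_winners = sum(winners for _, _, winners, _ in rows)  # 95,486
```

All rows check out against the published figures, which supports the pairing of prizes, winner counts and totals used in the table.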


It is interesting that, whilst total sales for the Big Draw were £24,739,425, the total number of tickets sold was just less than 5 million (because each ticket cost £5). This is around the number of marginal tickets that are sold in the Thunderball game. Moreover, given the small number of tickets sold, the probability of having no jackpot winners is large, and not surprisingly the game did not generate any winners of the top two prizes. This game illustrates the point that has been made throughout this chapter: the design of the game must match the size of the playing population. As a one-off game it did generate a large amount of revenue, but players are disheartened by games that are too hard to win (especially at such high ticket prices), and this lack of enthusiasm may have dangerous effects on the main on-line game. Luckily, draw 2 had a much higher probability of generating winners given the number of players, and successfully produced five millionaires.
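The claim that a small playing population makes a rollover likely follows from a Poisson approximation: with N independent, uniformly chosen tickets and per-ticket jackpot probability p, P(no jackpot winner) ≈ exp(−Np). The ticket count below follows from dividing the quoted £24,739,425 of sales by the £5 ticket price; the jackpot probability is an illustrative assumption, not the Big Draw's actual design odds.

```python
# Poisson approximation to the chance of no jackpot winner: the expected
# number of winners is N*p, so P(no winner) is about exp(-N*p).
import math

n_tickets = 24_739_425 // 5     # £24,739,425 of sales at £5 per ticket
p_jackpot = 1 / 50_000_000      # assumed jackpot odds, for illustration only
p_no_winner = math.exp(-n_tickets * p_jackpot)
```

With jackpot odds this long relative to the ticket base, a no-winner draw is the most likely outcome, which is exactly what the Big Draw experienced.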

Lottery Extra

This is the latest innovation to the on-line game. It exploits the fact that players like large jackpots by allowing a jackpot that is not won to roll over into the following week's jackpot prize pool. This continues until the jackpot grows to £50 million, at which point it is shared by the second-prize winners. Tickets cost £1 and the draw uses the same numbers as the on-line game. Players simply choose whether or not to enter the extra draw, and then whether to use the same numbers that they used for the main game or a lucky dip selection. Lottery Extra saw its first draw on Wednesday 15th November, 2000. Figure 14.4 shows that the level of sales for this new game is currently around £1.2 million on a Saturday and £0.8 million on a Wednesday. Again the game does not appear to have wide appeal but is picking up some sales. The key problem is

Figure 14.4 Lottery Extra sales (sales in £ plotted against draw number).

174

S. Creigh-Tyte and L. Farrell

whether these are new sales or simply expenditure substituted away from the main on-line game. If the innovations that take the form of peripheral games simply divert expenditure from the main game, total sales will not rise and the costs of launching the new games are lost. More importantly, we know there are economies of scale in lottery markets, and competition is detrimental to total market sales (even if that competition comes from games operated by the same company). Large jackpots attract players, so many small games effectively fragment the market. Innovation is necessary to stimulate interest, but too many peripheral games are a dangerous means of trying to regenerate it.

Instants

National Lottery scratch cards, called Instants, were launched at the end of March 1995 and cost £1 or £2 depending on the card. Sales to date suggest that the UK market for Instants is quite small. Figure 14.5 shows the weekly sales figures for scratch cards. When they were first released, sales peaked at just over £40 million per week in May 1995. Sales have since fallen to as little as £10.5 million a week. This revenue is small compared to that generated by the on-line game (although greater than the sales that Thunderball or Lottery Extra have generated). The challenging question is what potential there is to extend this market. There is very little analysis of the market for scratch cards within the UK, mainly owing to the poor quality of the available data. Surveys of scratch-card players persistently under-record the level of activity, and analysis of the aggregate data is hindered by the fact that there are many games, each offering different returns.

Figure 14.5 Instants sales (weekly sales in £ plotted against week number).


Table 14.3 National Lottery stakes (£ million)

Financial year      Lotto     Instants  Easy Play  Lottery Extra  Thunderball  Big Draw  Gross stake
1994–95           1,156.8        33.9        0           0             0           0        1,190.6
1995–96           3,693.7     1,523.3        0           0             0           0        5,217.0
1996–97           3,846.6       876.5        0           0             0           0        4,723.2
1997–98           4,712.7       801.0        0           0             0           0        5,513.8
1998–99           4,535.9       668.7       23.1         0             0           0        5,227.8
1999–00           4,257.0       560.8        1.3         0           194.1        80.6      5,093.8
2000–01           4,124.2       546.1        0          48.1         240.2        24.7      4,983.3
Total to 2000–01 26,326.9     5,010.3       24.4        48.1         434.3       105.3     31,949.3

Source: National Lottery Commission

New experiments offering cars and other goods, rather than cash prizes, are currently being tested in the market place. Camelot remains diligent in its attempts to stimulate and expand this market. This is important: since the innovations to the on-line game are limited, sustaining the value of contributions to good causes will over time depend increasingly on expanding other areas of the lottery product range.

Conclusion

All the indicators show that since 1994 Camelot has followed the pre-existing models of lottery operation. It has continually innovated in order to stimulate demand and prevent lottery fatigue from impacting on sales and the revenue for good causes. However, the potential dangers of too many peripheral games have been highlighted. Given the downward trend in lottery sales, we should expect to see a continued high level of innovation in the game. Camelot enjoys the advantage of a monopoly market; however, although J. R. Hicks once characterised a 'quiet life' as the greatest monopoly profit, the UK National Lottery is not an easy market. It is a demanding market with no room for complacency.
The Gambling Review Body (chaired by Sir Alan Budd) began work in April 2000 with the purpose of reviewing the 'current state' of the UK gambling industry; it published its findings in the Gambling Review Report in July 2001, including 176 recommendations. While consideration of the National Lottery was expressly excluded from its brief, the Report has clear implications for the Lottery (and the rest of the gambling industry). The most significant recommendations for the Lottery are:

a that betting on the UK National Lottery be permitted;
b that limits on the size of prizes and the maximum annual proceeds should be removed for societies' lotteries, and that rollovers should be permitted; and
c that there should be no statutory limits on the stakes and prizes in bingo games, and that rollovers should be permitted.

The thrust of the Report is to 'extend choice for adult gamblers' and simplify gambling regulation, while ensuring that children and other vulnerable persons are protected, permitted forms of gambling are kept crime-free, and players know what to expect. As such, the Budd Report will (all else being equal) increase the level of competition within the various gambling and leisure sectors for consumers'

Table 14.4 Trends in betting and gaming expenditure

Net betting and gaming expenditure   £ million   % change
Total betting and gaming
1991–92                                3,181       —
1992–93                                3,296        3.6
1993–94                                3,517        6.7
1994–95                                4,324       22.9
1995–96                                6,034       39.5
1996–97                                5,898       −2.3
1997–98                                6,414        8.7
1998–99                                6,550        2.1
1999–2000                              6,587        0.6
2000–01                                7,254       10.1
On lottery^a
1991–92                                    0       —
1992–93                                    0       —
1993–94                                    0       —
1994–95                                  660       —
1995–96                                2,719      312.0
1996–97                                2,425      −10.8
1997–98                                2,785       14.8
1998–99                                2,615       −6.1
1999–2000                              2,547       −2.6
2000–01                                2,492       −2.2
On other betting
1991–92                                3,181       —
1992–93                                3,296        3.6
1993–94                                3,517        6.7
1994–95                                3,664        4.2
1995–96                                3,315       −9.5
1996–97                                3,473        4.8
1997–98                                3,629        4.5
1998–99                                3,935        8.4
1999–2000                              4,040        2.7
2000–01                                4,762       17.9

Source: ONS
Note
a Calculated as 50 per cent of the National Lottery stake.


Table 14.5 Trends in betting and gaming expenditure relative to total consumer spending

Net betting and gaming expenditure   £ million   Share of total consumer expenditure (%)
On lottery^a
1991–92                                    0       0
1992–93                                    0       0
1993–94                                    0       0
1994–95                                  660       0.2
1995–96                                2,719       0.6
1996–97                                2,425       0.5
1997–98                                2,785       0.6
1998–99                                2,615       0.5
1999–2000                              2,547       0.5
2000–01                                2,492       0.5
On other betting
1991–92                                3,181       0.9
1992–93                                3,296       0.9
1993–94                                3,517       0.9
1994–95                                3,664       0.9
1995–96                                3,315       0.8
1996–97                                3,473       0.7
1997–98                                3,629       0.7
1998–99                                3,935       0.7
1999–2000                              4,040       0.7
2000–01                                4,762       0.7

Source: ONS
Note
a Calculated as 50 per cent of the National Lottery stake.

discretionary spending, and also the competitiveness of the gambling industry as a whole with other, non-gambling sectors. As shown in Table 14.3, total National Lottery ticket sales have reached almost £32 billion over the seven financial years to 2000–01. However, over 82 per cent of this total is attributable to the 6/49 lotto game, with over 15 per cent due to scratch cards. Preserving the core lotto game stake is clearly a priority in maintaining sales, and hence the good-causes funding streams. Moreover, the National Lottery exists within an increasingly competitive UK betting and gaming sector. As shown in Table 14.4, consumer expenditure on non-lottery betting and gaming has risen (almost) continually between 1994–95 and 2000–01. Although the share of such non-lottery betting and gaming in total consumer spending has fallen from 0.9 per cent in the year of the National Lottery's launch to 0.7 per cent in 2000–01, the overall share of all betting and gaming in consumers' expenditure has risen from 0.9 per cent in 1993–94 to 1.2 per cent in 2000–01 (see Table 14.5). Therefore the challenge facing the lottery in the near future is to remain adaptive and innovative in an increasingly competitive environment.
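The sales shares quoted here can be verified directly from the Table 14.3 totals (figures in £ million):

```python
# Cumulative sales to end 2000-01, from Table 14.3 (in £ million).
lotto_sales = 26_326.9
instants_sales = 5_010.3
gross_stake = 31_949.3

lotto_share = lotto_sales / gross_stake        # ≈ 0.824, "over 82 per cent"
instants_share = instants_sales / gross_stake  # ≈ 0.157, "over 15 per cent"
print(f"lotto: {lotto_share:.1%}, Instants: {instants_share:.1%}")
```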


Appendix: the expected value of a lottery ticket

The formal expression for the expected value of a lottery ticket was first derived in the work of Sprowls (1970) and has subsequently been adopted and refined by Lim (1995) and Scoggins (1995). Here we consider the case where players are assumed to select their numbers uniformly.10 The size of the jackpot is equal to the sales revenue times the proportion of ticket sales in draw t going to the jackpot prize pool, plus any rolled-over prize money from the previous draw. We denote by Ct the sales revenue (aggregate consumption) and by Rt the amount rolled over, which for most draws will be zero. Finally, π6t is the proportion of ticket sales in week t going to the jackpot prize pool. The size of the jackpot in draw t is thus expressed as

Jt(π6t, Rt; Ct) = Rt + π6t Ct    (1)

The probability that this jackpot is won, p6, is determined by the characteristics of the game. For the UK National Lottery, players must select six numbers (x) from forty-nine (m), and the jackpot is shared among those players who selected the winning six-number combination drawn at random without replacement.11 The probability of there being a rollover is equal to the probability that none of the players wins the jackpot, (1 − p6)^Ct. In the case of the UK National Lottery there are also smaller prizes awarded for matching any five, four or three of the main numbers, and a further prize pool for matching any five main numbers plus a seventh bonus ball (5 + b). The expected value of holding a lottery ticket, taking account of the smaller prizes, is therefore12

V(Rt, π6t, πjt, p6; Ct) = {[1 − (1 − p6)^Ct][π6t Ct + Rt] + Σj πjt Ct}/Ct    (2)

where j = 3, 4, 5, 5 + b; p6 is the probability of a single ticket winning the jackpot; pj is the probability of correctly selecting any j numbers; π6t is the proportion of ticket sales in draw t allocated to the jackpot prize pool; and πjt is the proportion of ticket sales going to the jth prize pool in draw t, so that Σj πjt + π6t = 1 − τ, j = 3, 4, 5, 5 + b, where τ represents the take-out. The take-out is the proportion of sales revenue not returned in the form of prizes, which covers the operator's costs, profits, tax and, in the UK, contributions to a number of good causes.13 It is straightforward (see Farrell and Walker, 1996) to show that VR > 0, Vp6 > 0 and Vτ < 0, where subscripts indicate partial derivatives. The effect of the level of sales, Ct, is more difficult. In the case where R = 0 it is simple to show that VCt > 0 and VCtCt < 0, but in general

VCt = [p6 Ct (1 − p6)^Ct ((1 − τ)Ct + Rt) − Rt(1 − (1 − p6)^Ct)]/Ct²    (3)

which is not necessarily monotonic; Figure 14.1 depicts the possibilities, together with the relationship for R = 0. V(·) always asymptotes towards (1 − τ), but for R > 0 it does so from above and at a slower rate, whereas for R = 0 it converges faster and from below. For R > 0 the relationship may attain a maximum for some finite Ct, but for sufficiently large R the relationship will be monotonically decreasing. V is always higher in rollover draws than in regular draws, irrespective of the level of sales. Thus it is impossible to arbitrage away the differences in V no matter what the variation in sales. This implies that there will always be some exogenous variation in V arising from the random incidence of rollovers. It is, indeed, possible in theory for the expected value to exceed unity, the cost of a ticket, so that the net expected return becomes positive.
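Equation (2) is straightforward to evaluate numerically. The sketch below uses assumed illustrative prize-pool shares (π6t = 0.16 for the jackpot and 0.34 for the smaller pools combined, so τ = 0.5; the chapter does not give these splits) to show V(·) converging on 1 − τ from below in regular draws and from above in rollover draws:

```python
from math import comb

P6 = 1 / comb(49, 6)  # single-ticket jackpot probability in a 6/49 game

def expected_value(C, R, pi6=0.16, pi_small=0.34):
    """Expected value of a 1-unit ticket per equation (2): the jackpot
    pool pi6*C plus any rollover R is paid out only if at least one of
    the C tickets wins; the smaller prize pools are assumed always won."""
    p_jackpot_won = 1.0 - (1.0 - P6) ** C
    return (p_jackpot_won * (pi6 * C + R) + pi_small * C) / C

for C in (5e6, 30e6, 100e6):
    print(C, expected_value(C, R=0), expected_value(C, R=10e6))
# For R = 0 every value lies below 1 - tau = 0.5; for R = 10 million
# every value here lies above it, and both converge on 0.5 as C grows.
```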

Acknowledgements

We are grateful to Sandra Stirling, Chris Jones, Stuart Poole and Christiane Radin for their help in preparing this chapter. Parts of this chapter draw on Lisa Farrell's research with colleagues at Keele University.

Notes

1 Assuming that the mechanism employed to generate the winning numbers generates those numbers uniformly. Cook and Clotfelter (1993) refer to this non-uniformity as 'conscious selection'.
2 There will also be more occasions when there are a large number of jackpot winners. That is, the variance in the number of jackpot winners will be higher under non-uniform choice.
3 Allowing the Lucky Dip will of course reduce the frequency of rollovers, as it increases the level of coverage of the possible combinations.
4 Explicit models of addiction stem from the work of Becker and Murphy (1988). They present and test empirically a theoretical model of addiction. The novelty of this approach is that the individual can behave rationally.
5 See Vrooman (1976) and DeBoer (1985).
6 Scott and Garen (1994) estimate a model of participation in a US scratch-card game using micro-data, but could not estimate a price elasticity since there are no rollovers in such games.
7 See a later section for the difference between the long- and short-run elasticity.
8 These results are based on an analysis of the sales time series. Using micro-data also enables a more precise estimation of the price elasticity of demand. Given that richer people may choose to play only in rollover weeks when the return is higher, we need to control for income variation between those individuals who play in normal weeks and those who play in rollover weeks. Simple time-series studies such as those mentioned above may obtain biased price elasticities due to the inability, within the time-series data, to control for income effects. Therefore it is important to check for any bias by comparing these results to the corresponding elasticity estimated using micro-data when controlling for the effects of income. Farrell and Walker (1999) find estimates of −1.7, but this estimate was based on price variation arising from a double rollover, and this event attracted a lot of publicity that may have led to an unusually large response from players.
9 The other fixed-odds game that the lottery launched was called 'Easy Play' and was based on the football pools. Vernon's Easy Play ran from Saturday 15th August, 1998 to Saturday 8th May, 1999 (thirty-nine weeks), and was then shut down.
10 Cook and Clotfelter (1993, pp. 636–7) speculate that the theoretical structure of the game is unchanged if individuals pick their numbers non-randomly (they call this 'conscious selection'). Farrell et al. (2000b) consider this more complex conscious-selection case and prove that the most important theoretical properties of the game are indeed unaffected by this generalisation. They also show that conscious selection has a minimal impact on the estimated elasticity.
11 The probability of winning in this case is, then, 1/13,983,816.
12 It will be assumed, for expositional convenience, that the smaller prizes do not roll over. Whilst it is possible for them to do so, in practice they never have, and the probability of them doing so is very small.
13 For the UK National Lottery, Treasury duty is 12 per cent of ticket sales, the retailer's commission is 5 per cent, operator's costs and profits are 5 per cent, and good causes get 28 per cent.
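Two of the figures in these notes can be reproduced in a few lines: the jackpot odds of note 11 and the per-£1 breakdown of note 13.

```python
from math import comb

# Note 11: number of six-number combinations from forty-nine.
combinations = comb(49, 6)
print(combinations)  # 13983816, so the jackpot odds are 1/13,983,816

# Note 13: non-prize shares of each £1 stake; the remainder funds prizes.
shares = {"duty": 0.12, "retailer": 0.05, "operator": 0.05, "good_causes": 0.28}
prize_share = 1.0 - sum(shares.values())
print(round(prize_share, 2))  # 0.5, i.e. 50p in the pound returned as prizes
```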

References

Becker, G. S. and Murphy, K. M. (1988), 'A theory of rational addiction', Journal of Political Economy, 96, 675–700.
Budd, A. (Chairman of the Gambling Review Body) (2001), 'The gambling review report', Department for Culture, Media and Sport.
Clotfelter, C. T. and Cook, P. J. (1990), 'On the economics of state lotteries', Journal of Economic Perspectives, 4(4), 105–119.
Cook, P. J. and Clotfelter, C. T. (1993), 'The peculiar scale economies of lotto', American Economic Review, 83, 634–643.
Creigh-Tyte, S. W. (1997), 'Building a National Lottery: reviewing British experience', Journal of Gambling Studies, 13(4), 321–341.
Creigh-Tyte, S. W. and Farrell, L. (1998), 'The economics of the National Lottery', Working Paper No. 190, University of Durham.
DeBoer, L. (1985), 'Lottery taxes may be too high', Journal of Policy Analysis and Management, 5, 594–596.
Europe Economics (2000), Review of the Economics Literature on Lotteries: A Report for the National Lottery Commission, London: Europe Economics.
Farrell, L., Lanot, G., Hartley, R. and Walker, I. (1998), 'It could be you: midweek draws and the demand for lottery tickets', Society for the Study of Gambling Newsletter, no. 32.
Farrell, L., Lanot, G., Hartley, R. and Walker, I. (2000a), 'The demand for lotto: the role of conscious selection', Journal of Business and Economic Statistics, April.
Farrell, L., Morgenroth, E. and Walker, I. (2000b), 'A time series analysis of UK lottery sales: the long-run price elasticity', Oxford Bulletin of Economics and Statistics, 62.
Farrell, L. and Walker, I. (1999), 'The welfare effects of lotto: evidence from the UK, 1997', Journal of Public Economics, 72.
Forrest, D., Simmons, R. and Chesters, N. (2000), 'Buying a dream: alternative models of the demand for Lotto', University of Salford, mimeo.
Garrett, T. A. and Sobel, R. S. (1999), 'Gamblers favor skewness, not risk: further evidence from United States' lottery games', Economics Letters, 63.
Golec, J. and Tamarkin, M. (1998), 'Bettors love skewness, not risk, at the horse tracks', Journal of Political Economy, 106.
Gulley, D. O. and Scott, F. A. (1989), 'Lottery effects on pari-mutuel tax revenues', National Tax Journal, 42(1), 89–93.
Haigh, J. (1996), 'Lottery – the first 57 draws', Royal Statistical Society News, 23(6), February, 1–2.
Hicks, J. R. (1935), 'Annual survey of economic theory: the theory of monopoly', Econometrica, 3(1), 1–20.


Lim, F. W. (1995), 'On the distribution of lotto', Australian National University Working Paper in Statistics, no. 282.
Mikesell, J. L. (1994), 'State lottery sales and economic activity', National Tax Journal, 47, 165–171.
Scoggins, J. F. (1995), 'The lotto and expected net revenue', National Tax Journal, 48, 61–70.
Scott, F. and Garen, J. (1994), 'Probability of purchase, amount of purchase and the demographic incidence of the lottery tax', Journal of Public Economics, 54, 121–143.
Shapira, Z. and Venezia, I. (1992), 'Size and frequency of prizes as determinants of the demand for lotteries', Organizational Behaviour and Human Decision Processes, 52, 307–318.
Simon, J. (1999), 'An analysis of the distribution of combinations chosen by the UK National Lottery players', Journal of Risk and Uncertainty, 17(3), 243–276.
Sprowls, C. R. (1970), 'On the terms of the New York State Lottery', National Tax Journal, 23, 74–82.
Vrooman, D. H. (1976), 'An economic analysis of the New York State Lottery', National Tax Journal, 29, 482–488.

15 Time-series modelling of Lotto demand

David Forrest

This chapter offers a critical review of attempts by British (and American) economists to model the demand for lottery products, in particular the on-line numbers game known as Lotto. Economists' focus has been on attempting to illuminate whether or not take-out rates and prize structures have been selected appropriately for the goal of maximizing tax revenue. It will be argued that, notwithstanding the ingenuity shown in modelling exercises, data limitations imply that one must remain fairly agnostic about whether or not variations in take-out or prize structure would, in fact, be capable of further raising the tax-take. However, it will be suggested that an injection of competition into the supply of lottery services could reveal more about the nature of demand and lead to greater expenditure on lottery tickets, with consequent increases in tax revenue.
In many jurisdictions, the public lottery is the most highly taxed of all consumer products. The situation in the UK is typical. Legislation mandates that 50 per cent of bettors' expenditure on lottery products as a whole (i.e. scratch cards as well as on-line games) should be returned to bettors. For most games, the face value of a ticket is £1. So, on average, 50 pence of this comes back to the bettor in prize money, leaving the lottery organization with the other 50 pence. One way of defining price is to identify it with this take-out: in this case the cost of participation in the lottery would be 50 pence (on average). This is, in the parlance of the literature, the 'effective price' of a lottery ticket.1 A very high proportion of the effective price is accounted for by tax. In the UK, 12 pence goes directly to general tax revenue as 'lottery duty'. A further 28 pence represents hypothecated tax,2 divided among a number of distribution agencies that fund projects in fields such as sports, arts, heritage and education.
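The take-out and tax arithmetic described here can be laid out explicitly, using only the figures given in the text:

```python
face_value = 1.00          # £1 ticket
prize_return = 0.50        # 50 per cent mandated return to bettors
effective_price = face_value - prize_return  # the 50p take-out

duty, good_causes = 0.12, 0.28
tax = duty + good_causes                 # 40p of each ticket is tax
pre_tax_price = effective_price - tax    # leaving a 10p pre-tax price

print(round(tax / pre_tax_price, 2))   # ≈ 4.0 -> 400 per cent on the pre-tax base
print(round(tax / effective_price, 2)) # ≈ 0.8 -> 80 per cent on the tax-inclusive base
```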
Collectively, this 28 pence is said by the government to go to 'Good Causes'.3 Altogether, then, 40 of the 50 pence effective price for participation in the lottery represents tax. With the pre-tax price at 10 pence and the tax-inclusive price at 50 pence, the rate of tax could be quoted as either 400 or 80 per cent, depending on the choice of base. It is striking that the pre-tax price of 10 pence corresponds to the take-out proposed by Keynes to the Royal Commission on Gambling of 1932–3. Keynes advocated a weekly lottery with 85–90 per cent of sales revenue returned in prizes. His rationale was that the provision of such a state
lottery would make it a more straightforward matter for public policy to curtail other forms of gambling. His recommendation as to take-out was therefore based on social policy considerations. By contrast, when a public lottery was eventually introduced in Britain over sixty years later, the legislation explicitly set the goal for the lottery as maximizing 'revenue for Good Causes' (i.e. tax-take). The resulting high take-out rate on the lottery4 implies a rate of tax of an order of magnitude comparable, in Britain, only with duties on petrol, alcohol and tobacco products. A welfare justification for such tax treatment appears fragile. Lotteries seem to be a fairly 'soft' form of gambling and are not associated with the serious externalities claimed (at least by government) to be linked with the use of petrol, alcohol or tobacco. Nor do estimates of demand functions indicate exceptional elasticity conditions in the lottery market that might make high taxation 'optimal'. Of course, the government has not purported to have any welfare justification for high lottery taxes. Rather, it argued for the introduction of the product itself solely as a vehicle for taxation, as evidenced by the remit of the legislation, which is to maximize tax revenue.5 Given the unusual emphasis on lotteries as tax vehicles, it is not surprising that most demand studies have been designed to answer questions concerning whether policy changes (in take-out or in odds- or prize-structures) would increase tax revenue for the government. The review here adopts the same perspective. But before proceeding, it is appropriate to underline that this is an unusual perspective for economists to adopt. The conventional mode of analysis in evaluating public policy is that provided by standard welfare economics, which gives due weight to the interests of consumers. But policy analysis in the area of lotteries nearly always gives an implicit weight of zero to changes in consumer surplus.
However, Mason et al. (1997) and Farrell and Walker (1999) calculated measures of excess burden for lottery taxes using estimates of Lotto demand curves for Florida and Britain, respectively. Mason et al. found, as was to be expected, that attributing some weight to consumer welfare would imply changes in lottery design, while Farrell and Walker demonstrated how potentially inefficient the 'Good Causes' tax was, in that it appeared to impose a deadweight loss equal to nearly 30 per cent of the revenue raised.6 However, there is little ground for believing that debate on the future of lotteries will include a refocusing that accepts them as just another consumer product; the debate in the literature, and here, therefore proceeds on the basis of discussing whether policy changes will or will not increase revenue.

The effective price model

Prior to the contribution of Gulley and Scott (1993), attempts to evaluate the sensitivity of Lotto demand to variations in take-out had been based primarily on comparing sales across jurisdictions offering different value in their lottery product. Results had been mixed, but it might have been optimistic to expect to detect a significant price effect where the range of 'prices' observed was narrow.7 The insight of the Gulley–Scott model was that estimation of elasticity from time-series modelling of Lotto demand in a single jurisdiction becomes possible once it
is realized that, while the mandated take-out might always be the same, it is only deﬁned over a period. The constancy of the take-out over time does not prevent there being signiﬁcant variations in effective price from drawing-to-drawing of the same lottery game. For example, when the grand (or jackpot) prize is not won in one draw, it is commonly ‘rolled over’ (i.e. added to the grand prize for the following draw) resulting in signiﬁcantly higher expected value (lower effective price) for bettors. In many jurisdictions, expected value has even, on occasions, been observed to be positive (i.e. effective price negative). By studying the response of sales to this draw-to-draw variation in value, it is argued that inferences can be drawn with respect to price elasticity of demand and hence about the take-out that would be optimal from the perspective of tax revenue maximization. The Gulley–Scott model and its strengths and weaknesses are highlighted here because it was in essence imitated by ﬁrst-generation studies of Lotto demand in the UK. Gulley and Scott studied Lotto games in Kentucky and Ohio and two games in Massachusetts. Data related to various periods over 1984–91 and the number of drawings observed for individual games varied between 120 and 569. The demand equation for each of the four games was estimated by regressing the log of sales on the log of effective price. Following Clotfelter and Cook (1987), effective price was identiﬁed as the difference between the nominal price of a ticket ($1) and its expected value. Variation in expected value across observations was provided primarily by rollovers augmenting the prize fund. Control variables were restricted to a trend and a dummy variable capturing the tendency of draws on Wednesday (as opposed to Saturday) to generate lower sales.8 Ordinary least squares estimation of such a demand function would yield biased results to the extent that effective price is necessarily endogenous. 
The authors note that their price variable will be influenced, in an arithmetic sense, by sales. The reason is that, as sales increase, the number of combinations covered by bettors will increase, making it less likely that the grand prize will remain unwon (and be rolled over to the next draw). Expected value to ticket holders therefore improves (and effective price falls) when sales increase. The authors were the first to graph this relationship between effective price and sales. For a 'regular' draw (no rollover from the previous drawing), effective price will decrease with sales, though at a decreasing rate, and will converge on the take-out rate (at very high levels of sales, it is likely that almost all number-combinations will be 'sold' and therefore that all of the advertised prize funds will be paid out to bettors). In a draw benefiting from a rollover from the preceding draw, the relationship is, however, different. The same 'economies of scale' effect as before occurs, but account must also be taken of the fact that bettors in such a draw benefit from 'free' money in the prize fund. The benefit of the free money is spread more thinly the greater the sales. The relationship between effective price and sales will thus converge on the take-out rate from below (at very high levels of sales, the benefit from the fixed rolled-over funds will become very small on a per-ticket evaluation). The classic response to an endogeneity problem is to employ instrumental variables, but the Gulley–Scott model was more properly represented as two-stage
least squares. This is because there is a related but separate issue. When a bettor considers the purchase of a ticket, he must, if he takes value into account, forecast the value of some price variable. An appropriate price variable according to the model is 'one minus the mathematically expected value of a ticket'. But this varies with sales, and sales are not known until after the betting period has closed. Bettors' decisions on whether, and how many, tickets to purchase can therefore be based only on some ex ante concept of price and expected value. The formation of expectations with respect to price was therefore modelled in a first-stage equation, and 'expected' effective price was then included as a regressor in the second-stage (demand) equation. Gulley and Scott obtained expected effective price by regressing actual effective price on the amount rolled over into the current draw and on the level and square of the size of jackpot announced or predicted (according to state) by the lottery agency. The actual effective price was one minus the expected value of a ticket as it could have been calculated in advance had the (true) number of tickets that were going to be sold been known. Note that the Stage 1 equation included among the regressors only information available to bettors in advance of the draw. Bettors were assumed to act as if they were able to process this information accurately,9 so expected price was taken as the fitted value from the first-stage equation. The structure of the Gulley–Scott model was thus:

Stage 1: P = f(rollover, rollover², jackpot, jackpot², trend, wed)
Stage 2: Q = g(P̂, trend, wed)

where P is effective price, P̂ is the fitted value of effective price retrieved from the Stage 1 estimation, rollover is the dollar amount added to the draw from a rollover, jackpot is the lottery agency's announced or predicted jackpot for the draw, wed is a dummy set equal to one for a midweek draw, and Q is dollar sales.
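The two-stage structure can be sketched on synthetic data. This is an illustrative reconstruction rather than the authors' code or data: the instruments, coefficients and noise levels below are all assumed, with a 'true' price elasticity of −1.2 built into the data-generating process.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Assumed draw-level data: rollovers occur in ~30% of draws, and the
# announced jackpot rises with the rollover amount.
rollover = rng.exponential(2.0, n) * (rng.random(n) < 0.3)
jackpot = 4.0 + 1.5 * rollover + rng.normal(0, 0.5, n)
wed = (np.arange(n) % 2).astype(float)
trend = np.arange(n) / n

# Assumed data-generating process: rollovers lower log effective price,
# and log sales respond to log price with elasticity -1.2.
log_p = np.log(0.5) - 0.05 * rollover + rng.normal(0, 0.02, n)
log_q = 2.0 - 1.2 * log_p - 0.3 * wed - 0.2 * trend + rng.normal(0, 0.1, n)

def ols(y, regressors):
    """OLS with an intercept; returns coefficients and the design matrix."""
    X = np.column_stack([np.ones(len(y))] + list(regressors))
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    return beta, X

# Stage 1: regress log price on the instruments and exogenous controls.
b1, X1 = ols(log_p, [rollover, rollover**2, jackpot, jackpot**2, trend, wed])
log_p_hat = X1 @ b1

# Stage 2: regress log sales on the *fitted* log price; its coefficient
# is the estimated price elasticity.
b2, _ = ols(log_q, [log_p_hat, trend, wed])
print("estimated elasticity:", b2[1])  # close to the built-in -1.2
```

Because both equations are log-linear, the Stage 2 coefficient on the fitted price is read directly as an elasticity.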
The price and quantity variables were in logs. In the estimated demand equation, trend was significantly negative in Ohio and Kentucky but of opposite signs for the two Massachusetts games (perhaps indicating a shift of consumers between the two over time). The coefficient on wed was always strongly significant and confirmed the lower popularity of midweek draws. But the focus of interest was the coefficient on P̂, which was always negative and very strongly significant. The motivation for the Gulley–Scott paper was to assess whether state authorities had set the terms of each lottery so as to maximize their own revenue from the operation. This was to be revealed by the estimate of the coefficient on P̂ which, given that the specification of the model was log-linear, was an estimated elasticity. What would be the value for elasticity that would be consistent with maximizing behaviour by the state? The answer depends on being able to estimate the marginal cost of a lottery ticket to the 'producer'. This can, in fact, be done reasonably precisely, though not exactly. For example, it is known that retailer commission on a lottery ticket is typically 5 cents. Total operating costs of the lottery agency are normally known, but some of these will be fixed (infrastructure) costs. Given the
available information, Gulley and Scott assumed, reasonably, that the marginal cost, inclusive of retailer commission, was 8 cents. Using the identity that mr = p(1 + 1/γ), where γ is elasticity of demand, and setting mr = mc to represent profit maximization, allowed them to estimate that, if the typical effective price was $0.50 and mc was $0.08, then γ must have been −1.19 if profit maximization was being achieved. The Gulley–Scott test of whether the choice of take-out had been consistent with profit maximization was therefore whether the estimated elasticity equalled −1.19. Elasticity with respect to effective price was indeed measured as extremely close to −1.19 for both Kentucky and Ohio; but estimates for the two Massachusetts games were, respectively, much more and much less elastic than −1.19, indicating that one game should have been made more, and the other less, generous to maximize state revenue. Implementation of the model may be criticized on some matters of detail. First, the constant-elasticity specification of demand (i.e. log-linearity) sits uneasily with the goal of the paper, which is to make recommendations to governments concerning lottery pricing. If demand really could be described as linear in logs, and if elasticity were, say, less elastic than −1, then take-out could be increased indefinitely, always increasing revenue, to the point where sales were close to zero. Of course, a log-linear specification may fit the data well within the range of effective prices actually observed; but since log-linearity is unlikely to characterize the whole demand curve, recommendations have then to be couched as (e.g.)
whether price should increase without it being possible to say by how much.10 Second, only a shift dummy represents the difference in demand conditions between Wednesday and Saturday, whereas one might speculate that if there are different bettors on the two nights, the restriction that the slope coefficients be equal in the two cases should at least be tested. One might also be sceptical regarding the authors' discussion of policy implications for one game, from the finding that demand was much more elastic than −1.19. Understandably, a reduction in take-out was said to be indicated. But, as an alternative, it was proposed that the game should be made 'easier' (e.g. changing the format so that there are fewer combinations of numbers from which to choose). This would lower the effective price in any one draw because there would be a lower probability of the grand prize remaining unwon. On the other hand, rollovers would be less frequent and the incidence of very low-priced draws would fall. The overall impact on sales over a period would, in fact, need to be simulated and would be sensitive to assumed functional form. Policy conclusions as to the structure of the odds are therefore much more problematic than the authors imply. These criticisms are matters of detail that could be, and to some extent were, resolved in later applications of this pioneering model. However, there are much more fundamental flaws inherent in the model, interpreted as a means of providing precise guidance on choice of take-out in state lotteries. These problems are just as relevant to the first-generation studies on the UK Lottery (which was launched in November 1994) for which Gulley and Scott effectively served as a template. Of course, it is a familiar situation to economists that limitations inherent in data mean that they have no alternative but to 'swallow hard' and proceed with estimation

Time-series modelling of Lotto demand

187

notwithstanding known problems. But appropriate caution has then to be exercised in making policy recommendations; for example, one may have to report that a model indicates that take-out should 'not be reduced' rather than that it should be increased to some specific level.
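The arithmetic behind these benchmark elasticities is easily reproduced. Setting mr = mc in the identity mr = p(1 + 1/γ) and solving for γ gives γ = p/(mc − p). A minimal sketch, using only the prices and marginal costs quoted in this chapter:

```python
def implied_elasticity(p, mc):
    """Elasticity gamma at which price p satisfies mr = p(1 + 1/gamma) = mc,
    i.e. the elasticity consistent with profit maximization at price p."""
    return p / (mc - p)

# Gulley-Scott benchmark: effective price $0.50, marginal cost $0.08.
print(round(implied_elasticity(0.50, 0.08), 2))  # -> -1.19

# UK figures discussed below: mean effective price 0.55 with mc 0.06,
# and mc 0.46 once the 40p duty and hypothecated tax are included.
print(round(implied_elasticity(0.55, 0.06), 2))  # -> -1.12
print(round(implied_elasticity(0.55, 0.46), 2))  # -> -6.11
```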

Limitations of the effective price model

The Gulley–Scott model, broadly followed in UK studies by Farrell et al. (1999) and Forrest et al. (2000b), has the potential to seriously mislead policy-makers for at least three reasons. All relate to the problem that the great bulk of variation in effective price is provided by the difference in prize funds between regular and rollover draws. This means that when one measures elasticity, the estimate is based largely on observing how sales have responded in the past to rollovers. The first problem with this is that rollovers generate media interest and the consequent free publicity for the lottery draw may also be supplemented by an active advertising campaign by the lottery agency itself. If extra prize money were to be made available regularly by an adjustment of take-out, the same degree of consumer response may not occur because, to some extent, high sales in rollover weeks may be explained by abnormal levels of publicity reminding bettors to purchase their tickets. Second, the observed response to rollovers relates to variations in effective price that are transient. Players may not respond in the same way to a permanent change in effective price achieved by varying the take-out for regular (non-rollover) draws. For example, some players may currently engage in inter-temporal substitution and concentrate their annual purchase of Lotto tickets in weeks when the game offers untypically good value. They would not necessarily wish to increase their annual purchase of lottery tickets if the game offered better value over the year as a whole. Essentially, rollovers are weeks when the lottery is offering a 'sale price'. Measuring increases in turnover during a sale is not likely to provide much information about what would happen if the product were available indefinitely at that special 'sale' price.
Third, rollovers deliver superior value to bettors but this is achieved solely by augmentation of the grand (or jackpot) prize pool. Whenever effective price has varied significantly from its normal level, there has also been simultaneous variation in the structure of prizes. Observed responses in turnover, then, may not have been primarily to effective price but to the size of the jackpot. Hence, if take-out were reduced permanently for future games and the benefit spread across all the prize pools, bettors may not necessarily respond as much as they have when the benefit has been focused on the grand prize. Estimated 'price' elasticity, therefore, provides no reliable guidance on whether it would be appropriate to raise or lower take-out: the estimate of elasticity is calculated from the estimated coefficient on effective price in the demand equation and this estimate will be subject to omitted variable bias (where the omitted variable is prize structure). For this not to be a problem, bettors would have to be assumed indifferent to prize structure, but this
would imply that they were risk-neutral; in this case, according to the standard expected utility theory, they would not be participants in the Lotto draw at all. Together, these problems imply that one is unlikely to be able to form very definitive views from the effective price model concerning what level of take-out a state should set to maximize tax revenue. However, results from effective price model studies are not entirely empty of policy relevance. This is because all the problems noted tend to bias estimates of elasticity in the same direction. All three point to the difficulty that, because of the way Lotto games work, whenever 'low' effective price is observed the demand curve for that particular draw lies to the right of the regular demand curve.11 The effective price model, therefore, identifies a spurious demand curve, which displays greater elasticity than the 'true' demand curves for regular and rollover draws. The effective price model may then be claimed to offer a lower-bound estimate of 'true' elasticity with respect to take-out. This may enable some (unambitious) policy guidance to be offered. For example, one study of a British Lotto game estimated elasticity of −0.88 on the basis of application of an effective price model. True elasticity may be viewed as likely to be closer to zero than this figure suggests. Given that the implication of an estimate of −0.88 is that take-out could be increased, one could then be more confident in the conclusion given known biases in the estimate of the demand equation. On the other hand, had elasticity been estimated at a level such as −1.50, one could not confidently recommend lower take-out because an estimate of −1.50 might correspond to a true value consistent with revenue for the state already being maximized.

First-generation UK studies

Britain was, famously, the last country in Europe to introduce a public lottery in the modern era.12 The product went on sale in November 1994. It is evidently the world's largest Lotto game in terms of turnover (though per capita sales are not out of line with other jurisdictions) (Wessberg, 1999). Sufficient data had been accumulated towards the end of the decade for the first demand studies to appear. This first generation of studies adopted the effective price model. Relevant papers are Farrell et al. (1999) and Forrest et al. (2000b). Forrest et al. (2002) proposed an alternative model but, as a point of comparison, also offered an estimation of the standard effective price model. Another study, Farrell and Walker (1999), estimated elasticity with respect to effective price but did so through the exploitation of cross-section data on the purchase of lottery tickets by four samples of individuals taken for four different lottery draws. One of the draws (by chance) featured a 'double rollover' and therefore a radically lower effective price than normal. The estimate of elasticity the authors produce is subject to the same qualifications as apply to time-series studies and is in fact yet more problematic because there is only one observation of 'price' different from the usual price and it is an extreme outlier (only six double-rollovers occurred in the first seven years of the National Lottery).

British studies differ in detail from American ones partly because of differences in institutional arrangements. In contrast to the norm in the US and other countries, Britain's lottery is franchised to a private operator. It is awarded a seven-year licence, the terms of which specify a mandated take-out (50 per cent) measured over all games and the whole franchise period. Camelot plc won the initial licence against fierce opposition; but the advantage of incumbency was such that it faced only one rival bidder for the second seven-year term. In fact, the second franchise was initially awarded to the rival bidder (The People's Lottery) but this was overturned after a legal battle and final arbitration by a committee headed by a prominent economist. This controversial episode is considered further below, because the final judgement depended substantially on the view taken about the nature of the demand function that the literature, currently under review, has been trying to identify. The immediate relevance of the lottery being operated privately, but with a mandated take-out, is that it alters the value of the elasticity measure that would be consistent with the state maximizing the financial benefit to itself. In America, the state normally runs the lottery on its own account and its interests, therefore, lie in profit maximization by the department operating the lottery. With a plausible assumption as to marginal cost, take-out is optimal when elasticity is −1.19. In Britain, by contrast, the government's gain from the lottery is maximized when the rules for the licence are set such that revenue-net-of-prizes is as high as possible because this is the base on which tax may be levied. Hence, the test as to the appropriateness of the mandated take-out now is that elasticity should have the value −1. Of course, the lottery operator would prefer to have the freedom to set its own price/prize levels.
Profit maximization that is subject only to the payment of 40 per cent of gross revenue to the government would imply a very different sort of lottery. There has been some confusion in the literature on this point. Farrell et al. (1999) take a well-informed guess that marginal cost to the operator is £0.06 for a ticket with a face value of £1 (this includes £0.05 retailer commission). Noting that mr = p(1 + 1/γ) and setting mr = mc, they conclude that profit maximization by the operator would imply an elasticity of −1.06. Of course, this is very close to the value of −1 for tax-revenue maximization and it would be impractical to hope to estimate elasticity so precisely as to distinguish whether take-out has been chosen to favour government interests or the interests of the private firm running the lottery. But this would not matter anyway because the proximity of −1.06 to −1 implies a near-coincidence of interest between the two. Unfortunately, the evaluation of −1.06 is incorrect. It is based on setting the value for p at £1. But this is the face value of a ticket. The authors' demand model is in terms of effective price and they measure elasticity with respect to effective price. Hence, p should be set equal to the mean effective price for the game, not to £1.13 On this basis, the elasticity consistent with profit-maximization would be −1.12. Further, marginal cost to the private operator should include the 40 pence per ticket it must pay to the government as lottery duty and hypothecated tax. With this taken into account, Camelot would like to be at a point on the demand
curve where γ = −6.11. With a linear demand curve with a gradient such as has been found in most studies, this would imply a nominal price for Lotto tickets of several pounds if the current amount of money were still paid out in prizes. Plainly the interests of the government and the operator are very divergent, underlining the necessity for a firm legislative framework if a lottery is franchised to private operation. To return to the differences between US and UK studies, British authors also have to take into account a feature of the British lottery known as the 'superdraw'. The operator's licence permits it to hold back some prize money to be added to the prizes for occasional, special promotional draws, sometimes supposedly marking a notable event in the calendar. The option is exercised several times each year. In all but one of these superdraws, the operator has put the extra money into the grand prize fund, making the effect akin to that of a rollover.14 Obviously all the British authors have had to build superdraws into their models. This has the advantage of giving greater variation in effective price (rollovers tend to be similarly sized and have much the same impact on effective price each time) though there is the caveat that superdraws are not likely to be truly exogenous events: the operator, one would expect, would use them on occasions where sales may otherwise flag and, indeed, when a new Wednesday drawing was introduced, a superdraw was declared for each of the first three weeks. An innovation in the British applications is the introduction of a lagged dependent variable into the demand equation. This implies, of course, that they produce both a short-run and a long-run estimate of elasticity, where the latter is relevant to the policy issue central to this literature.
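The link between the two estimates is worth making explicit. In a log-linear demand equation with a lagged dependent variable, ln Qt = α + β ln Pt + λ ln Qt−1 + …, β is the short-run elasticity, and once sales settle to a steady state the long-run elasticity is β/(1 − λ). A minimal sketch with purely illustrative coefficients (these are not estimates from any of the studies reviewed here):

```python
def long_run_elasticity(short_run, lagged_coef):
    """Long-run elasticity implied by a short-run elasticity and the
    coefficient on the lagged dependent variable (|lagged_coef| < 1)."""
    return short_run / (1.0 - lagged_coef)

# Illustrative values only: short-run elasticity -0.7, lag coefficient 0.3.
print(round(long_run_elasticity(-0.7, 0.3), 2))  # -> -1.0
```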
The employment of lagged dependent term(s) has successfully captured the role of habit in Lotto play and permits the UK models to account for the tendency of a rollover to benefit sales for some weeks beyond the rollover draw itself. This offered some hope that insight might be gained as to optimal game format in so far as design will determine frequency of rollover. Less promising is the interpretation of the significance of lagged dependent variables in Farrell et al. (1999). They take it as evidence of addiction in lottery play, applying the Becker–Murphy concept of myopic addiction (Becker and Murphy, 1988). But habit and addiction are not the same thing. Becker and Murphy define an addictive good as one where the utility from current consumption at any given level depends on the amount of the good consumed in the past. Hence, for an addictive good, a model of consumption should include a lagged dependent variable and this should be significant. However, there are other reasons why current purchases may depend on past purchases. For example, in the case of Lotto, sales for one draw may rise because there is a rollover and some of the marginal purchasers may take the opportunity of being at the sales booth to procure a ticket for the following draw at the same time. Some such transactions-cost mechanism is a more plausible explanation of the significance of lagged dependent terms than addiction, because lottery tickets do not obviously possess other characteristics that distinguish addictive goods according to the Becker–Murphy model: an addictive good, for instance, would be expected to have a large number of total abstainers and of heavy users but few light users; this sort of distribution of consumption
has not been found for lotteries.15 There is no firm evidence that Lotto is addictive.16

Table 15.1 Elasticity estimates in the UK Lotto

Study                    Period               Draws included        Observations   Estimate of elasticity
Farrell et al. (1999)    Nov 1994–Feb 1997    Saturday              116            −1.55
Forrest et al. (2000b)   Nov 1994–Oct 1997    Saturday/Wednesday    188            −1.03
Forrest et al. (2002)    Feb 1997–June 1999   Wednesday             127            −1.04
Forrest et al. (2002)    Feb 1997–June 1999   Saturday              127            −0.88

Table 15.1 summarizes the findings of UK studies that use the effective price model (but differ from each other in time periods covered, functional form and the number and types of control variables employed). In the first study, Farrell et al. (1999) rejected the hypothesis that take-out was consistent with net revenue maximization (and therefore with the government's stated goal) and recommended that prizes should be made more generous. No subsequent study has come to the same conclusion, later elasticity estimates all being close to −1. The outlying nature of the Farrell–Walker result could be attributed to the peculiar characteristics of the data period employed. They used the very first 116 draws in Lotto and this has the general problem that behaviour may have been untypical when bettors had to learn about this new and unfamiliar product and new and hitherto unexperienced phenomena such as rollovers and double rollovers. A particular problem was that in this early period there occurred a unique circumstance in the British lottery, namely two double rollovers in the space of a month (the first two of the six double rollovers that were to occur by the end of 2001). These double rollovers offered prizes at a level unprecedented in British gambling and the result was a media frenzy surrounding the two draws. The very high increase in sales may have been a response to the extraordinary level of publicity (which was not repeated for later large-jackpot draws as the concepts of Lotto became familiar), but these two outlying observations of price were very influential in the derivation of the high elasticity value. In any event, no such high elasticity has been found in studies that included later data. Forrest et al. (2000b) took the data period further and estimated elasticity very close to −1, indicating that the choice of take-out had been very precisely 'correct'.
But a flaw in their study is that the midweek draw had been added to the games portfolio, and they accounted for variation between levels of play on Wednesdays and Saturdays with only a shift dummy variable. Implicitly, they imposed the untested restriction that slope coefficients were equal in the determination of Wednesday and Saturday turnover. Forrest et al. (2002) estimated an effective price model for the period between the introduction of the midweek draw and the introduction of a third on-line game, Thunderball, in mid-1999. An F-test rejected equality of slope coefficients
between Wednesday and Saturday draws. Separate models to explain Wednesday and Saturday play yielded estimates of elasticity that were not statistically significantly different from −1. But the point estimates can be regarded as lower-bound estimates of elasticity given the biases likely in the effective price model. Thus, the calculated elasticity of −0.88 for the larger Saturday draw could be taken as suggestive that, if anything, there would be scope for increasing take-out on Saturdays. Walker and Young (2001) presented a more complex model, reviewed below, which nevertheless included effective price/expected value and similarly indicated scope for making the Lotto game 'meaner'. So a consensus appears to have emerged that the early Farrell–Walker finding that prizes should be increased was premature. Amongst other interesting findings in the UK studies, one may note the tendency of superdraw money to be less effective than rollover money in boosting turnover (Forrest et al., 2000b) and the tendency of interest in games to diminish with time after an initial upward trend (Forrest et al., 2000b; Walker and Young, 2001). The negative influence of trend, reflecting a tendency of bettors to become bored and disillusioned with games, appears to be a worldwide phenomenon and presumably accounts for the regular introduction of new games by Camelot and other lottery agencies. An under-explored issue is the extent to which these new games cannibalize existing sales, though Walker and Young (2002) find some negative effect on Saturday sales from the introduction of Thunderball and Forrest et al. (2001) attempt a more general modelling of substitution between Camelot products. Paton et al. (2001) made the first study of substitution between lottery games and an existing gambling medium (bookmaker betting).

Second-generation UK studies

Recent UK work – by Forrest et al. (2002) and Walker and Young (2001) – is motivated by scepticism of the potential of the effective price model to yield firm conclusions on lottery policy with regard to take-out and game design. Forrest et al. explore bettor preferences with a view to understanding why lottery players participate in the game and this is the basis for proposing an alternative model that appears to track lottery sales at least as well as the standard effective price approach. Walker and Young choose to extend the traditional analysis to include the variance and skewness, as well as the mean, of the probability distribution of prizes. A fundamental problem for the effective price model is that it ignores the possibility that bettors' behaviour may be explained by variations in prize structure as well as by the amount of money expected to be paid out in prizes. Implicitly, the model assumes risk-neutrality. But why would risk-neutral bettors accept such an unfair bet anyway? The resolution of this paradox must lie in bettors obtaining utility from the gambling process itself. Conlisk (1993) retains the conventional expected utility framework but adds a 'tiny utility of gambling' to the expected
utility function so that the purchase of gambling products becomes consistent with risk neutrality (or indeed risk aversion). This approach does not, however, rescue the effective price model of Lotto demand. For players to be indifferent to prize structure, it must be assumed that bettors are risk neutral and that the amount of utility attached to the ownership of a lottery ticket is invariant with respect to prize structure. If both these assumptions held, then a demand model could proceed on the basis that a lottery ticket was fun to hold and effective price was the price of fun; the cheaper the fun, the greater the quantity demanded. But why do lottery tickets impart utility? Clotfelter and Cook (1989) suggested that Lotto players are 'buying hope' and Forrest et al. echo this, and current sentiment in the lottery industry, by portraying them as 'buying a dream'. They suggest that lotteries represent a relatively non-stimulating mode of gambling and the fun is not in the process itself (number selection, etc.) but rather in dreaming about the lavish lifestyle that becomes available to the biggest winners. From this point of view, the price of a ticket – which is now the face value of £1 – buys the right to such daydreams. When rollovers occur the value of the grand prize increases and the dream becomes yet more vivid. Lotto play actually delivers more utility in a rollover draw and this, rather than any improvement in expected value, accounts for the observed increase in sales. According to this model, price is a constant but the demand curve shifts according to how much more enjoyment players receive when contemplating a larger jackpot prize. Players may not seriously expect to win; but they enjoy the dream and this dream may be related to the largest prize they could win, that is, to the prospective size of the pool for the grand prize (jackpot).
Note that this emphasis on the lottery ticket as a consumer good rather than a ﬁnancial asset would imply that sales (at the never-changing nominal price of £1) depend not on the expected value of the prize a ticket holder may receive nor even perhaps on the expected value of the jackpot prize itself (which will take account of the number of winners with whom the jackpot would have to be shared) but on the maximum prize the ticket holder could possibly win, that is, the size of the jackpot pool. For Saturday draws between February 1997 and July 1999, Forrest et al. estimate both an effective price model of demand and a jackpot pool model. The speciﬁcation in the two models is the same except that (expected) effective price is replaced in the second model by (expected) size of jackpot pool. The jackpot pool is instrumented on the same set of variables as effective price. The performance of the rival models is then compared by means of a Cox test (Cox, 1961, 1962; Pesaran, 1974). The ﬁrst hypothesis tested is that the effective price model comprises the correct set of regressors and the jackpot pool model does not. This is extremely decisively rejected (test statistic −17.2, critical value at 5 per cent level ±1.96). The second hypothesis tested is that the jackpot pool model comprises the correct set of regressors and the effective price model does not. This is also rejected (test statistic +4.19, critical value again ±1.96). What do these results tell us? The ﬁrst test result implies that including the size of the jackpot pool in the sales equation would raise explanatory power. The
failure to include it in past modelling means that existing elasticity estimates are based on coefficient estimates that are subject to omitted variable bias. Suggestions that take-out rates are close to optimal are therefore unreliable. Unfortunately, this problem with the effective price model cannot be practicably resolved: given that almost all 'price' variation comes from rollovers, effective price and jackpot pool will be highly correlated and inclusion of both in a sales equation would yield unreliable parameter estimates because of severe collinearity. The jackpot pool model proves as successful as the traditional model in terms of ability to track past sales; but it would also be a fragile basis on which to make policy recommendations. The result of the second part of the Cox test implies that effective price as well as the size of jackpot pool influences aggregate bettor behaviour. Perhaps the more decisive rejection of the effective price than the jackpot pool model is indicative that the size of jackpot pool is particularly important to bettors and this should be taken into account in formulating arrangements for the game. Although only one instance, the story of the lottery draw on 19 September 1998 points to the same conclusion. This was a Saturday when Camelot declared a superdraw, that is, it added free funds to the prize pool, offering bettors significantly better value than usual. In fact, effective price fell from the usual £0.55 to £0.28, equivalent to the impact of a substantial rollover. But, on this one occasion, Camelot experimented by augmenting the second prize pool not the jackpot pool.17 The experiment was disastrous. Sales actually fell compared with the regular draw the week before. In no other rollover draw or superdraw have sales ever failed to increase substantially. This event is consistent with the implication of the alternative model in Forrest et al.
that it is the size of the jackpot pool that is the driving force of Lotto sales and that the apparently ‘good’ performance of the effective price model relies on correlation between effective price and jackpot pool. Forrest et al. are cautiously agnostic in their conclusions: both take-out and prize structure are likely to matter but their relative importance is hard to assess when (with the one exception noted) effective price and prize structure have always moved closely together in the same direction. Walker and Young (2001), by contrast, attempt to provide precise policy guidance. They estimate a demand model employing data since the beginning of the lottery.18 Regressors include similar controls as employed in Forrest et al. (2000b) and the expected value of a ticket is still included (expected value equals one minus effective price); but, to the expected value (or mean) of the probability distribution of prize money, they add variance and skewness as regressors. The estimated coefﬁcients are positive on mean, negative on variance, positive on skewness. The positive sign on skewness appears to capture bettor interest in high prizes. Walker and Young use their estimated model to perform a series of simulations that predict the impact on sales of, ﬁrst, two possible variations in the split of total prize money across the various prize pools and, second, a change in the format of the game from 6/49 to 6/53.19 In the latter, Walker and Young predict that aggregate sales would fall by a little less than 10 per cent (if the current take-out were retained). This is a particularly relevant ﬁnding because, when the franchise for the second term of the UK lottery was awarded, the only substantive difference
between the two bids was that Camelot offered a 6/49 game, whereas The People’s Lottery proposed a change to a 6/53 format. The ﬁnal rejection of The People’s Lottery bid was based fairly explicitly on the perceived risk that the change in format might lower sales (National Lottery Commission, 2000, para. 16). The empirical model in Walker and Young appears, however, to provide a fragile foundation on which to settle the controversial battle between the two aspirant lottery organizations. One problem is that the demand model is estimated by ordinary least squares whereas the three moments of the prize distribution included as regressors are in fact endogenous: variance and skewness are dependent on sales in an arithmetic sense, similar to the mean (as discussed above). The inability to instrument mean, variance and skewness in the model will introduce biases of unknown magnitude into the parameter estimates. A second problem is that the estimated coefﬁcient on skewness was statistically insigniﬁcant in the demand equation,20 yet the point estimate is used, and is inﬂuential, in the simulation. Of course, it must be conceded that the point estimate of the coefﬁcient, rather than zero, is the ‘best’ estimate of the ‘true’ coefﬁcient but its failure to be signiﬁcant implies a high standard error and therefore imprecision in the forecasting exercise for the different prize structure and game format scenarios. In introducing skewness, Walker and Young were picking up a tradition that began with Francis (1975) who analyzed effects of skewness in returns in ﬁnancial markets. Golec and Tamarkin (1998) and Woodland and Woodland (1999) explored skewness in betting markets in horse racing and baseball, respectively. In unpublished work Purﬁeld and Waldron (1997) tested the attractiveness of skewness to players of the Irish lottery. 
Garrett and Sobel (1999) included skewness in a cross-section model of sales across 216 on-line lottery games offered in various American states in various periods. But their measure of skewness was ﬂawed: they measured it from a simpliﬁed probability distribution for each game in which each prize level was represented by the mean prize pay-out from that pool; but this gives a hypergeometric distribution for which a strictly deﬁned measure of skewness does not exist. The measurement of skewness in a Lotto game is in fact difﬁcult and problematic. Consider the probability distribution of prize pay-outs for a single ticket for the UK game. Over 98 per cent of players receive nothing at all. There is a ﬁxed pay-out of £10 for any player matching three of the six numbers; the probability of receiving £10 is 0.0175. Once the £10 prizes have been paid, the remainder of the prize pool is split in predetermined proportions between four prize pools: one for bettors matching four balls, one for bettors matching ﬁve balls, one for bettors matching ﬁve balls but also the ‘bonus ball’, and one for bettors who have a claim on the grand prize because they have matched all six of the main numbers drawn. This produces a distinctive probability distribution. There are large mass points (spikes) at zero and £10, which together account for over 0.99 of the distribution. Corresponding to the remaining prizes, there is a continuous distribution. In principle, even the winning jackpot pool could deliver a low prize (e.g. £1) depending on how many bettors have chosen the winning combination of numbers; but, essentially, the continuous part of the probability distribution consists of


D. Forrest

four components, each with a local maximum corresponding to the mean pay-out to a winning ticket in each of the four prize funds. This is a 'mixed distribution' for which interpretation of skewness is difficult. Consider the effect of a rollover on the prize probability distribution. The spikes at zero and £10 remain unaltered and the sections of the probability distribution corresponding to the lower prize pools remain virtually unaltered. The effect on measured skewness derives only from the translation to the right of the component corresponding to the jackpot pool. Given that most of the variation in skewness is, in fact, provided by rollovers, putting skewness into the demand equation is essentially equivalent to putting the jackpot into the demand equation, which is itself problematic given the correlation between expected value and jackpot. One is then only modelling a complex functional form of the effective price model and coefficient estimates on skewness may prove unreliable.21 In fact, the skewness model suffers from precisely the same underlying problem as the effective price model. For skewness, as for expected value/effective price, almost all the variation we can observe comes from rollovers. But rollovers shift skewness in a very specific way, by affecting only the jackpot pool. It cannot safely be assumed that bettors would respond in the same way to a change in the skewness measure that was generated by modifying the structure of the other prizes.22 These other prizes are important to consider: for example, 38 per cent of prize money is spent on £10 prizes, and it would be fair to ask whether this money could usefully be allocated to the other prize funds and, if so, in what proportions. Further, one cannot know how much of the extra sales attributed to variations in skewness when the jackpot is high represent inter-temporal substitution by lovers of skewness.
Permanent changes in the prize structure, or changes in the game design that altered rollover frequency, might not elicit the expected response to the extent that bettors favouring skewness in returns may currently concentrate their lottery expenditure on draws where extra skewness is available. Once again, it must be admitted that the econometric evidence would provide a ﬂimsy basis for strong policy recommendations on lottery reform.
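The 'mixed distribution' argument can be made concrete with a deliberately simplified discrete approximation of the single-ticket prize distribution — each pool collapsed to a stylized mean pay-out, precisely the kind of simplification criticized above — and by computing its standardized skewness with and without a rollover-enlarged jackpot. In the sketch below the match probabilities are exact for a 6/49 game, but the prize amounts are illustrative assumptions, not Camelot's actual figures.

```python
from math import comb

TOTAL = comb(49, 6)  # 13,983,816 possible 6/49 tickets

# Exact single-ticket match probabilities for a 6/49 draw with a bonus ball.
p3 = comb(6, 3) * comb(43, 3) / TOTAL   # three main numbers (fixed £10 prize)
p4 = comb(6, 4) * comb(43, 2) / TOTAL   # four main numbers
p5 = comb(6, 5) * 42 / TOTAL            # five main numbers, not the bonus ball
p5b = comb(6, 5) * 1 / TOTAL            # five main numbers plus the bonus ball
p6 = 1 / TOTAL                          # jackpot

def prize_dist(jackpot):
    """Simplified mixed distribution; non-£10 prize levels are assumed values."""
    dist = [(10, p3), (60, p4), (1_500, p5), (100_000, p5b), (jackpot, p6)]
    dist.append((0, 1 - sum(p for _, p in dist)))  # the >98% who win nothing
    return dist

def skewness(dist):
    """Standardized third central moment of a discrete distribution."""
    mean = sum(p * x for x, p in dist)
    var = sum(p * (x - mean) ** 2 for x, p in dist)
    m3 = sum(p * (x - mean) ** 3 for x, p in dist)
    return m3 / var ** 1.5

regular = skewness(prize_dist(2_000_000))   # typical draw (stylized jackpot)
rollover = skewness(prize_dist(5_000_000))  # rollover-enlarged jackpot
print(regular, rollover)
```

Only the jackpot entry differs between the two cases, so the whole movement in measured skewness comes from the translation of the jackpot component — the point made above about rollovers. Notably, with the jackpot probability essentially fixed, the standardized skewness moves only modestly, because for a distribution dominated by one rare prize it is pinned close to the reciprocal square root of that prize's probability.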

A proposal for restructuring the lottery industry

This review has taken a pessimistic view of the extent of practical use that has emerged from the time-series modelling of Lotto demand. Information on the importance of trend or on the impact of the introduction of a new game is worth having; but on central questions concerning take-out, game design and prize structure in Lotto itself, very firm conclusions have not emerged. This is not the fault of the economists who have supplied the studies to which extensive references have been made above. They face the inherent problem that arrangements for the Lotto game in the UK, as in many jurisdictions, have remained static. Underlying odds and prize structures have never been changed. Any variation, such as in effective price, has been transient in nature and nearly always of similar magnitude, so that no strong information is contained in data sets to allow conclusions to be drawn


on the consequences of various possible reforms. Even if Camelot were to change the rules of the game in the future, it would likely be a move prompted by faltering sales and, therefore, would not be the exogenous shock that would be required for bettor behaviour to be properly modelled.23 Walker and Young (2001) point out the obvious: ideally one would like an experiment in which some bettors are offered one variant of the Lotto product and other bettors another variant. But they dismiss this idea as impractical: the Lotto game is subject to peculiar economies of scale. Any variation of the game offered to sub-groups of bettors would be unattractive because of 'small' jackpot size, and conclusions on how the product would sell to the whole population could not therefore be drawn. It was Cook and Clotfelter (1993) who first drew attention to this often-cited phenomenon of the peculiar scale economies of Lotto: lotteries with similar take-out rates will generate different levels of per capita sales according to the size of the market in the jurisdictions they serve. Interest in the lottery will be greater in states with a large population because what captures the imagination of buyers is the absolute size of the jackpot pool. Lotto in small states will be relatively unattractive because it cannot hope to pool sufficient money to offer life-changing amounts as prizes on a regular basis; very large jackpots can be designed into a small-state game but emerge only with a game design that induces frequent rollovers, in which case bettors become bored because they do not expect anyone to win in any given draw. Cook and Clotfelter presented empirical evidence of the scale economies effect in a cross-section regression of per capita sales on population size (and control variables) in US states.
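The mechanics of that cross-section exercise amount to regressing log per-capita sales on log population plus controls. The sketch below reproduces only the estimation step on synthetic data — the sample, the noise and the coefficient of 0.3 are invented for illustration and bear no relation to the actual published estimates.

```python
import random

random.seed(0)

# Synthetic cross-section of 40 'states' (invented numbers, not the original
# data): log per-capita sales are generated to rise with log population.
log_pop = [random.uniform(13.0, 18.0) for _ in range(40)]
log_sales_pc = [0.5 + 0.3 * x + random.gauss(0.0, 0.05) for x in log_pop]

# Ordinary least squares for a one-regressor model: slope = cov(x, y) / var(x).
n = len(log_pop)
mx = sum(log_pop) / n
my = sum(log_sales_pc) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(log_pop, log_sales_pc))
         / sum((x - mx) ** 2 for x in log_pop))
intercept = my - slope * mx
print(slope)  # scale-economies coefficient, close to the assumed 0.3
```

A positive, significant coefficient on population in such a regression is what the scale-economies hypothesis predicts: larger markets sustain larger jackpots and therefore higher per-capita sales, not merely higher total sales.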
Ironically, their proposition, and its empirical veriﬁcation, considerably undermines the effective price model that they and Scott and Gulley constructed because it draws attention to the important, independent inﬂuence of the size of the jackpot pool. The policy response across the world, for example, in the Nordic countries and in Australia and in some American states, has been for separate jurisdictions to merge their jackpot pools from time to time to produce larger jackpots. So, for example, six countries join together in the Nordic Lottery so that, at the level of the grand prize, the lottery becomes supranational and is sold to a large population base. Peculiar scale economies appear effectively to give Lotto the status of a natural monopoly. Territorial exclusivity can be presented as indispensable since, otherwise, bettors’ money would fail to be concentrated sufﬁciently into a jackpot prize pool that offered the appealing possibility of an enormous pay-out to the winner of the ﬁrst prize. The force of the natural monopoly argument is demonstrated by the history of the football pools in Britain. Until the product innovation of 1946, known as the ‘treble chance’, there had been hundreds of pools companies in Britain; but once business switched from small stake/small prize betting to the new long-odds product, the number of ﬁrms rapidly contracted (to three by 1960). Bettors switched their weekly investments from small to large pools because only the latter could offer a genuinely life-changing level of ﬁrst prize and, as they switched, there was a dynamic process leading in the direction of monopoly (Forrest, 1999).


But over the last twenty-ﬁve years, competition has been introduced into many industries hitherto regarded as natural monopolies. Particularly in public utilities, processes of deregulation have proceeded on the basis of separating those parts of an industry where there is genuine natural monopoly from those parts where there are no scale economies sufﬁcient to justify the granting of exclusivity of supply. Thus, a national grid for the distribution of electricity or gas might constitute a genuine natural monopoly but competition can be introduced into the relevant energy market by permitting access to the distribution system by competing producers: the production side of the electricity or gas industry is not a natural monopoly and, with regulation of terms of access to the national grid, vigorous competition can emerge. Perhaps, then, competition can be introduced into the provision of a Lotto game. Competition is the normal framework in which consumers reveal their preferences because ﬁrms have the incentive to experiment with different product speciﬁcations in order to gain market share. National Lottery players have had limited opportunity to reveal their preferences and econometric modelling has therefore been limited in its ability to estimate the appeal of different take-outs and different prize structures. It is contended here that a measure of competition is entirely feasible in the supply of the main lottery product. Deregulatory reform could proceed from the recognition that the principal natural monopoly element of the lottery is the grand, or jackpot prize. Lotto players’ stakes need to be channelled towards one national jackpot prize fund for the game to survive as a mass-participation gaming activity built on ‘selling a dream’. This, though, would still be possible with the following alternative institutional arrangements. 
A 'National Lottery Commission' (state or privately operated) would organize the Wednesday and Saturday draws and provide appropriate publicity focusing on the size of the jackpot.24 The Commission would license fit and proper organizations to operate their own lotteries, affiliated to the National Lottery.25 All would be obliged to pay lottery taxes at the current rate or whatever rate the government sets in the future. All would be required to put into the National Lottery jackpot pool the current proportion of stake allocated to the grand prize. But they would be free to dispose of the remaining revenue as they thought appropriate. For example, they might allocate all the remaining revenue available for prizes to an extra 'grand prize' fund payable if one or more of their own clients won, or they might scrap only the fixed £10 prizes and reallocate that money to the four-ball prize pool. In the early stages, diversity in product offerings would be likely as firms sought market share. In the mature market, whether or not all the suppliers would offer a similar prize structure would depend on the heterogeneity of bettor preferences. If a standard product emerged, that could perhaps indicate something about whether the current prize structure is optimal. If product diversity were sustained, this would be indicative of heterogeneity of preferences for which the current Lotto game does not cater; in this case, the market for the Lotto product should increase, with revenue benefits for the government and good causes. In Britain at least, the betting industry is well established and there would be no shortage of organizations (e.g. the Pools industry, national bookmaking chains)


with the capability to enter the lottery market. Outlets such as post offices and supermarkets could be part of their own licensed operation or become outlets for lotteries offered by new entrants such as the pools companies. In the context of this chapter, the principal benefit of the new arrangements would be that competition would reveal bettor preferences and lead the industry towards an optimal prize structure. By appealing to buyers with different risk-return preferences, it should also enlarge the market. Of course, it could be argued that Camelot already offers alternative products to cater for heterogeneous preferences; for example, it introduced Thunderball, an on-line game offering less skewness in returns than Lotto. But it is likely that a monopoly operator is very cautious in experimentation for fear of cannibalizing the market for its own principal existing product. The interest of bettors in a choice of risk/return profiles is illustrated by the success of the twice-daily '49s' lottery game, organized by the bookmaking industry and sold only at licensed betting offices. It offers no jackpot prize for six 'correct' numbers but bettors can control variance in returns by betting on three, four or five correct numbers (at different odds). At present, this market flourishes outside, and is lost to, the National Lottery. Further, National Lottery players include many who only wish to gamble in the context of the national event that is the twice-weekly draw, and they may not bet at all if they do not like the current specifications of Camelot's Lotto game. Other advantages of deregulation would include the efficiency gains usually associated with the dismantling of monopoly. Further, reform would remove the serious problem of how to choose the monopoly operator for each seven-year licence.
The first licence renewal generated considerable and prolonged controversy, which can be attributed to the fact that there is unlikely to be much difference between serious candidates for the lottery franchise. If one of the candidates attempts to differentiate itself by proposing a change in the game design or prize structure, the Commission has no firm basis on which to predict whether the changes would raise or lower revenue. The award of the franchise is, therefore, always likely to be contentious; and disillusion with the process could in future lead to the incumbent never being challenged at all. It has been argued here that there may not, in fact, be a need for a third monopoly franchise to be awarded. An element of competition is feasible in the supply of lottery services notwithstanding the peculiar scale economies of Lotto.

Acknowledgements

I acknowledge the usefulness of discussions on lottery issues with Neil Chesters, David Gulley, David Percy, Robert Simmons and Mike Vanning.

Notes

1 Clotfelter and Cook (1987) adopted this definition of the price of a lottery ticket after a discussion of possible alternatives. It has been widely adopted in the subsequent literature. Mason et al. (1997), however, reported that an alternative measure, equal to one divided by expected value (rather than one minus expected value), gave more satisfactory results when employed as a regressor in a study of demand for the Florida lottery.
2 The 'one' here is the face value of the ticket.
3 Legislation stipulates that the hypothecated tax rate should increase beyond 28 pence if lottery sales exceed a certain amount during the course of the operator's licence, but sales have never been large enough to trigger this increase.
4 Early in the history of the lottery, Connolly and Bailey (1997) examined the extent to which the expenditure on 'Good Causes' represented new expenditure and the extent to which it just substituted for spending by other government programmes. Given changes in the areas of expenditure defined as 'Good Causes' and eligibility for lottery funding, further studies along these lines would now be timely.
5 The take-out rate is approximately twice as high as in the two next most popular British gambling media: slot machines and horse-race betting (Moore 1997, table 1).
6 The UK government was not alone in viewing the introduction of a lottery as a means of raising revenue rather than a means of delivering utility to consumers. Erekson et al. (1999) are amongst those who have demonstrated the importance of fiscal pressure in triggering take-up of lotteries by American states. In Canada, the first lottery was introduced to cope with the financial crisis in Quebec following huge unanticipated losses from the staging of the Olympic games. More recently, Spain justified a new lottery by its need to improve public finances to meet the conditions for membership of the new European Currency Zone.
7 This should perhaps be regarded as an upper-bound estimate to the extent that it was calculated using an early estimate of the UK Lotto demand curve that displayed greater elasticity than the consensus value from later studies. DeBoer (1986), however, had employed panel data for seven American states over ten years and found significant price elasticity (−1.19).
8 Subsequent UK studies tended to be much less spartan in terms of the number of controls. One may speculate that Gulley and Scott were concerned to estimate a similar equation for each game and incorporating one-off influences pertaining to particular states would undermine this.
9 Scott and Gulley (1995) for the US cases, and Forrest et al. (2000a) for the UK, tested and could not reject that bettors' behaviour is consistent with rational expectations in terms of the efficient processing of available information.
10 The authors report, but do not emphasize, alternative estimates for a linear specification of the demand curve. Notwithstanding that these are stated to be of the same magnitude as the log estimates, they appear (evaluated at the mean) to be quite different, for example, −2.50 rather than −1.20 for the Kentucky lottery. The policy implication would then change in that Kentucky would be advised to run a more generous lottery rather than be advised to continue with the current terms (an implication of the −1.20 estimate). No test for which functional form was more appropriate is reported but, commonly in Lotto demand studies, it is hard to choose between functional forms because effective price tends to cluster at two levels corresponding to rollover and regular draws.
11 It is assumed here that bettors view high jackpots (for given effective price) positively. Some plausibility is added to this assumption by an incident in the UK lottery when, as a one-off experiment, the lottery agency added reserve funds it held to the second-prize pool for a draw in 1999. In terms of effective price/expected value of a ticket, the effect was akin to a rollover. But sales for that draw did not respond at all to the reduction in effective price. In all draws when effective price has been lowered by rollover into the grand prize pool, sales have increased substantially.
12 Britain had had lotteries earlier in its history but they were finally legislated out of existence in 1826.
13 This is in fact above the licence take-out of £0.50 because the operator is permitted, and chooses, to price discriminate across its products and imposes a higher take-out on on-line than on scratch-card players.


14 Usually, the amount of 'free money' paid into the pool is not announced explicitly but rather Camelot guarantees a certain size of jackpot pool. However, the guarantee has always been binding and Camelot funds have been required to bring the jackpot up to the amount promised.
15 Particularly dubious is Farrell and Walker's assertion that lottery tickets are less addictive than cigarettes. This is based on a comparison of the coefficient on the lagged dependent variable in their study and that found in a cigarette demand equation estimated by Becker et al. (1994). The time period over which consumption of the two goods is defined is quite different between the two cases.
16 Camelot's other main product, scratch cards, provides an opportunity for rapidly chasing losses and is therefore regarded as a 'harder' form of gambling. Data limitations have so far prevented any economic studies of demand but one may suspect that, if suitable data became available, it would be worthwhile here to test for addiction.
17 In the UK lottery, six numbers are drawn from a set of forty-nine (without replacement) and the jackpot is then shared by bettors whose six numbers correspond exactly with those drawn. The draw also picks out a seventh number (the bonus ball). The second prize is shared by those whose entry comprises five of the main winning numbers plus the bonus ball number.
18 Wednesday and Saturday operations are both included, with different levels of demand accounted for only by a shift dummy.
19 If the players had to choose six numbers from fifty-three instead of forty-nine, the game would be harder to win and more rollovers would result. Impact on total sales requires simulation because mean-variance-skewness will be altered in both regular and rollover draws and the relative frequency of regular and rollover draws would change.
20 Five per cent level of significance.
21 It is of interest that Forrest et al. (2000b) reported that when skewness was added to their version of the expected price model, it attracted a t-statistic of only 0.37.
22 Walker and Young do not use the one observation when skewness was atypically affected by a superdraw: they employ a dummy variable for the draw on 19 September 1998 and thereby eliminate its influence on the sales-skewness relationship. This may be justified statistically but it is unfortunate not to consider information on bettor preferences that may be contained in the data from this episode.
23 Beenstock et al. (1999) were able to observe variations in lottery design in Israel but these, likewise, could be argued as being endogenous.
24 Possibly it could be self-financing since it could sell television rights for coverage of the draws.
25 A matter to be resolved would be whether these organizations would own their sales terminals or lease them from the Commission.

References

Becker, G. S. and Murphy, K. M. (1988), 'A theory of rational addiction', Journal of Political Economy, 96: 675–700.
Beenstock, M., Goldin, E. and Haitovsky, Y. (1999), 'What jackpot? The optimal lottery tax', working paper, Hebrew University of Jerusalem.
Clotfelter, C. T. and Cook, P. J. (1987), 'Implicit taxation in lottery finance', National Tax Journal, 40: 533–546.
Clotfelter, C. T. and Cook, P. J. (1989), Selling Hope: State Lotteries in America, Cambridge, MA: Harvard University Press.
Conlisk, J. (1993), 'The utility of gambling', Journal of Risk and Uncertainty, 6: 255–275.
Connolly, S. and Bailey, S. J. (1997), 'The National Lottery: a preliminary assessment of additionality', Scottish Journal of Political Economy, 44: 100–112.


Cook, P. J. and Clotfelter, C. T. (1993), 'The peculiar scale economies of Lotto', American Economic Review, 83: 634–643.
Cox, D. R. (1961), 'Tests of separate families of hypotheses', Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Vol. 1, Berkeley: University of California Press.
Cox, D. R. (1962), 'Further results on tests of separate families of hypotheses', Journal of the Royal Statistical Society, Series B, 24: 406–424.
DeBoer, L. (1986), 'Lottery taxes may be too high', Journal of Policy Analysis and Management, 5: 594–596.
Erekson, O. H., Platt, G., Whistler, C. and Ziegert, A. L. (1999), 'Factors influencing the adoption of state lotteries', Applied Economics, 31: 875–884.
Farrell, L., Morgenroth, E. and Walker, I. (1999), 'A time series analysis of UK lottery sales: long and short run price elasticities', Oxford Bulletin of Economics and Statistics, 61: 513–526.
Farrell, L. and Walker, I. (1999), 'The welfare effects of Lotto: evidence from the UK', Journal of Public Economics, 72: 92–120.
Forrest, D. (1999), 'The past and future of the British football pools', Journal of Gambling Studies, 15: 161–172.
Forrest, D., Gulley, O. D. and Simmons, R. (2000a), 'Testing for rational expectations in the UK National Lottery', Applied Economics, 32: 315–326.
Forrest, D., Gulley, O. D. and Simmons, R. (2000b), 'Elasticity of demand for UK National Lottery tickets', National Tax Journal, 53: 853–863.
Forrest, D., Gulley, O. D. and Simmons, R. (2001), 'Substitution between games in the UK National Lottery', working paper, University of Salford.
Forrest, D., Simmons, R. and Chesters, N. (2002), 'Buying a dream: alternative models of Lotto demand', Economic Inquiry, 40: 485–496.
Francis, J. C. (1975), 'Skewness and investors' decisions', Journal of Financial and Quantitative Analysis, 10: 163–172.
Garrett, T. A. and Sobel, R. S. (1999), 'Gamblers favour skewness not risk: further evidence from United States' lottery games', Economics Letters, 63: 85–90.
Golec, J. and Tamarkin, M. (1998), 'Bettors love skewness, not risk, at the horse track', Journal of Political Economy, 106: 205–225.
Gulley, O. D. and Scott, F. A. (1993), 'The demand for wagering on state-operated lottery games', National Tax Journal, 45: 13–22.
Mason, P. M., Steagall, J. W. and Fabritius, M. M. (1997), 'The elasticity of demand for Lotto tickets and the corresponding welfare effects', Public Finance Review, 25: 474–490.
Moore, P. G. (1997), 'Gambling and the UK National Lottery', Business Ethics: A European Review, 6: 153–158.
National Lottery Commission (2000), 'Commission announces its decision on the next lottery licence', News Release 24/00, December 19, 2000.
Paton, D., Siegel, D. and Vaughan Williams, L. (2001), 'A time-series analysis of the demand for gambling in the United Kingdom', working paper, Nottingham Trent University.
Pesaran, H. (1974), 'On the general problem of model selection', Review of Economic Studies, 41: 153–171.
Purfield, C. and Waldron, P. (1997), 'Extending the mean-variance framework to test the attractiveness of skewness in Lotto play', working paper, Trinity College, Dublin.


Scott, F. A. and Gulley, O. D. (1995), 'Rationality and efficiency in Lotto markets', Economic Inquiry, 33: 175–188.
Walker, I. and Young, J. (2001), 'An economist's guide to lottery design', Economic Journal, 111: F700–F722.
Wessberg, G. (1999), 'Around the world in 80 games', presentation to the Inter Toto Congress, Oslo.
Woodland, B. M. and Woodland, L. M. (1999), 'Expected utility, skewness and the baseball betting market', Applied Economics, 31: 337–346.

16 Reconsidering the economic impact of Indian casino gambling

Gary C. Anders

A brief history of Indian1 gaming

Native American casinos result from the 1988 Indian Gaming Regulatory Act (IGRA). The IGRA is a federal law stemming from the US Supreme Court's decision in the case of California v. Cabazon Band of Mission Indians. That decision held that, where a state has legalized any form of gaming, comparable gambling by Native American tribes is legal. There has since been a massive proliferation of Indian casinos throughout the country. Currently, 124 of the 557 federally recognized2 tribes operate gaming facilities. This industry of more than 120 casinos and 220 high-stakes bingo games sprang from a single bingo hall on the Seminole reservation in Florida. High-stakes gaming grew as other Florida and California Indian tribes began offering cash prizes greater than those allowed under state law. When the states threatened to close the operations, the tribes sued in federal court. In California v. Cabazon (1987), the Supreme Court upheld the right of the tribes as sovereign nations to conduct gaming on Indian lands. The court ruled that states had no authority to regulate gaming on Indian land if gaming is permitted for any other purpose.3 In light of the favorable Supreme Court decision, Congress passed P.L. 100-497, the IGRA, in 1988, recognizing Indian gaming rights. The IGRA faced strong opposition from Las Vegas and Atlantic City. States, however, lobbied for the legislation in an effort to establish some control over tribal gaming. Congress sought to balance Native American legal rights with the interests of the states and the gambling industry (Eadington, 1990). The IGRA allows any federally recognized tribe to negotiate a compact with its respective state government to engage in gambling activities. A tribal–state compact is a legal agreement that establishes the kinds of games offered, the size of the facility, betting limits, regulation, security, etc.
Compacts ensure that tribal governments are the sole owners and primary beneficiaries of gaming. These compacts define the various allowable types of Indian gambling activity according to three classes. Class I is defined as social games solely for prizes of minimal value or traditional forms of Native American gaming engaged in by individuals. Class II includes bingo and electronic bingo-like games, punch boards, pull-tabs,


as well as card games not explicitly prohibited by state law. Class III includes all other forms of gambling, including slot machines, casino games, and pari-mutuel betting. The IGRA created a framework for regulation and oversight of tribal gaming with four interdependent levels: tribal; state; federal, including the Department of Justice, the FBI, the Internal Revenue Service (IRS) and the Bureau of Indian Affairs (BIA); and, finally, the National Indian Gaming Commission (NIGC). Class I gaming is regulated solely by tribes. Class II gaming is regulated solely by tribes if they meet conditions set forth in the IGRA. Regulation of Class III gaming is governed by tribal–state compacts. In general, tribes enforce frontline gaming regulations. Tribes establish their own gaming commissions and operate tribal police forces and courts to combat crime. They adopt ordinances, set standards for internal controls, issue licenses for gaming operations, and provide security and surveillance measures. Tribes or management contractors also manage tribal gaming operations. States enforce the provisions of Class III gaming compacts, which include background checks of employees and management company personnel. Some states, such as Arizona, coordinate background checks and other security measures with tribes. At the federal level, the Department of the Interior determines which lands can be placed into reservation trusts, approves tribal–state compacts, rules on tribal gaming revenue allocation plans, and conducts audits of gaming operations. The Department of Justice enforces criminal violations of gaming laws, conducts background checks of key gaming employees, and conducts investigative studies. The FBI and BIA provide oversight on crimes committed on reservations. The NIGC approves tribal resolutions and gaming ordinances, and reviews the terms of Indian casino management contracts. The NIGC has the authority to enforce civil penalties, impose fines, and close an establishment.
The IGRA provides tribal gaming operations with an exemption from the Freedom of Information Act. Unless the tribe agrees, federal and state regulators cannot publicly release or disclose financial information. This protective measure also makes it nearly impossible to ascertain individual casino revenues. Furthermore, because this is primarily a cash-based business, law enforcement officers seeking a paper trail of records face problems in tracing the gaming activity of customers engaged in large-scale transactions and potential money-laundering activities (US General Accounting Office, 1997). Casinos range from the palatial Foxwoods casino in Connecticut to trailers in remote locations offering a few slot machines. Tribes do not have to pay taxes on their gaming revenues to the state or federal government. Some tribes have negotiated revenue-sharing agreements with their state government. All tribes are, however, legally required to withhold state and federal income tax and Federal Insurance Contributions Act (FICA) contributions from all non-Indian and non-resident Indian4 tribal employees, and to report payments to independent contractors. Additionally, Indian tribes must report gaming winnings to the IRS; withhold federal income taxes on statutorily defined gaming winnings and payments to non-resident aliens;

206

G. C. Anders

report per capita distributions of more than $600 to the IRS; and withhold federal income tax on distributions of $6,400 or more (Anders, 1998).

Introduction

Since it was legalized, Indian gambling has grown to account for approximately $9.9 billion in annual revenues (McKinnon, March 13, 2001). Policy makers in the United States have been relatively slow to grasp the significance of this development, in part because there has been little research on the economic and social impacts of gambling on both native and non-native economies. The purpose of this chapter is to provide an overview of the various policy issues related to Indian gambling in Arizona. It discusses three interrelated issues regarding Indian gaming: casino monopoly profits, community impacts, and tax revenue displacement. The chapter presents the results of regression analyses that confirm the displacement effects of casinos by economic sector. An attempt is made to extrapolate the economic impact of casino expansion on local government, using the number of slot machines as a measure of gambling activity to estimate the anticipated loss in tax revenue. Several years ago, my colleagues and I examined the fiscal impact of casinos in Arizona. We conducted a statistical test on Maricopa County Transaction Privilege Tax revenues from 1990-1996. The results of that test indicated a destabilization of county tax collections beginning in July 1993 (Anders et al., 1998). We argued that a displacement of taxable expenditures reduces potential state tax receipts, but that this leakage is masked by population and economic growth (Hogan and Rex, 1991). In response to our findings, The Economic Resource Group, Inc. of Cambridge, Massachusetts was contracted by Arizona gaming tribes to write a report on the social and economic benefits of Indian casinos (Taylor et al., 1999). While touting the positive impacts of Indian casinos, the authors attempted to discredit the validity of our work.

Researchers at Arizona State University West have advanced an Arizona-specific claim of off-reservation impacts resulting from Indian gaming . . .
Thus, the authors’ attempt to link tax collection shortfalls to the introduction of casino gaming cannot possibly be correct unless people withheld purchases of goods and services in anticipation that casinos would be opened in the future . . . . That said, the failure of Anders, et al., to pick up an effect of actual casinos capacity additions on State tax receipts suggest at a minimum that much more careful and controlled analyses must be undertaken if substitution claims are to be supported. (Taylor et al., 1999, 35–37) Taylor et al. (1999) did not provide an alternative explanation based on actual casino revenue data to counter our results. Instead, they advocated for Indian gaming using anecdotal examples of the use of casino proﬁts to help tribes. Before

Economic impact of Indian casino gambling

207

responding to the call for rigorous tests of the displacement hypothesis, I will raise three pertinent issues related to Indian casinos. These are: (1) that the IGRA has created highly profitable gambling monopolies; (2) that proponents overstate the positive benefits of gambling on communities; and (3) that casinos result in an increasing loss of public sector revenues because of tax displacement.

Three little understood aspects of Indian gaming

Good public policy should match the intended outcomes with the actual results. Indian gaming was a compromise effort designed to promote Native American economic development while defining acceptable limits to tribal sovereignty within the US legal and political framework. The results of the IGRA have been far different than its architects could have anticipated. Instead of promoting Native American self-sufficiency, gaming has made a small number of Indians rich while leaving many tribes mired in poverty. Most of us have been sensitized to the historical conquest and the harsh treatment of aboriginal peoples. In this respect, we feel compassion for the hardships inflicted upon Native Americans. Indians have lost valuable lands and continue to experience a number of health-related problems, including drug and alcohol abuse. The treatment of Native Americans evokes a profound sense of moral outrage. Still, we should not confuse the economy of casinos and resort hotels that benefit a few tribal members with restitution to an entire race for past injustices. There is no question that the economic development of Indian reservations should be a high national priority, but we should be careful about the type of economy that is being developed.

Indian casinos are highly profitable monopolies benefiting a small percentage of Native Americans

Since the IGRA was passed in 1988, Indian casinos have become far more successful than anyone could have imagined. Indian casinos now generate almost $10 billion, which is more than the annual revenues of all the casinos in Nevada combined (McKinnon, March 13, 2001). The basis for a profitable casino is a large market. Many of the casinos in Arizona border populated urban areas or are located on heavily traveled highways. At almost any time, day or night, the Arizona Indian casinos are full. Customers are often lined three or four deep waiting for their chance to play slot machines.
There are even special buses that pick up passengers and bring them to the casinos. Yet every time a person drops a dollar into a slot or video poker machine, on average they will get back only 83 cents, or less.5 Even accounting for an occasional large pay-out, Indian casinos are phenomenally profitable. Indian casino revenues are not publicly available because the IGRA specifically exempted tribes from the Freedom of Information Act. Without this financial information it is hard to know definitively, but a conservative estimate is that the nineteen Arizona casinos earn about $830 million in net revenues per year


(Cummings Associates, 2001). In reality, the actual amount is probably much larger.6 Table 16.1 presents a context for understanding the concentration of Indian gambling in Arizona. Data on the tribal population and the number of machines are presented. Aside from the Navajo tribe, with over 104,000 enrolled members, most Arizona Indian tribes are small. It is significant to note that about 36 percent of the total Native population controls gaming. Moreover, the profitability of gaming is highly skewed. As shown in Table 16.2, average annual net revenues from gambling range from about $4,000 to $260,000 per capita. Casinos generate huge profits for urban tribes, while the larger, more impoverished tribes have largely been excluded from the gambling windfall.7 To garner public support, various Indian groups sponsor studies on gaming. For example, researchers at the University of Arizona recently released a study (paid for by the Arizona Indian Gaming Association) on the economic impacts of Indian casinos (Cornell and Taylor, 2001). According to this study, Indian casinos generated 9,300 jobs and spent $254 million on goods and services in 2000. The authors argue that Arizona Indian casinos had a total impact of $468 million on the state economy (Mattern, 2001). A review of the literature on impact studies demonstrates that findings such as these should be viewed with caution. The abuse of economic impact models to exaggerate the benefits of various activities in order to gather public support is well known. Wang (1997), for example, explains how analysts inflate estimates to produce economic impacts that reflect a greater contribution to the community and therefore improve popular support and legitimacy. Using various multipliers derived from the literature, Wang demonstrates how the same activity can have a total economic impact ranging from $6.8 to $55.2 million.
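Wang's point is easy to illustrate in a few lines. The figures below are purely hypothetical (the direct-spending amount and the multipliers are assumptions for illustration, not Wang's actual inputs); the point is only that the reported "total impact" scales directly with whichever multiplier the analyst selects:

```python
# How multiplier choice drives a reported "economic impact". The $10M
# direct-spending figure and the multipliers are assumptions for
# illustration, not values from Wang (1997).
direct_spending = 10_000_000

for multiplier in (1.2, 2.5, 5.5):
    total_impact = direct_spending * multiplier
    print(f"multiplier {multiplier}: total impact ${total_impact:,.0f}")
```

The same direct expenditure reported under these three multipliers would be presented as a $12 million, $25 million, or $55 million "contribution" to the community.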
Harris's (1997) discussion of the "use and abuse" of economic impact multipliers in the tourism industry asserts that data quality and accuracy are critical considerations. Unless the economic model incorporates actual gambling revenue data, the estimated economic impacts of Indian casinos are likely to be inaccurate.8 The basis for Cornell and Taylor's asserted impacts is a complex Input-Output model of the state economy. This computer model portrays the economy in terms of a matrix of over 200 sectors that interact based upon assumed relationships between indirect and induced effects (i.e. jobs created, wages, sales, and output). Induced effects represent "new" employment opportunities that are created when an economic activity starts up and produces goods or services that are sold to consumers both in the state and outside it. Two types of data are used to drive the model: (1) tax rates, which are known, and (2) input projections, which are estimated. A critical assumption is that the economic activity in question represents a new expenditure stream that generates subsequent rounds of spending. The model also assumes that businesses in the same sector are homogeneous and that it does not matter which sector experiences the stimulus. Typically, economic impact studies of Indian gambling confuse first- and second-round expenditures (Gazel, 1998). Money coming to Arizona Indian casinos is not "new" money, but redirected money. For example, before the advent of

Table 16.1 Arizona Indian tribal population and gaming capacity

Reservation       2000         Slot and video      Slot and video   Number of     Card
                  population   poker, authorized   poker, in use    bingo seats   tables
Ak-Chin                742           475                 475             488        13
Cocopah              1,025           475                 468             350         0
Colorado River       7,466           475                 456             350         5
Fort McDowell          824           475                 475           1,700        45
Fort Mohave            773           475                 180               0         0
Gila River          11,257           900                 900           1,800        62
Havasupai*             503             0                   0               0         0
Hopi*                6,946             0                   0               0         0
Hualapai*            1,353           475                   0               0         0
Kaibab-Paiute*         196           475                   0               0         0
Navajo*            104,565             0                   0               0         0
Pascua Yaqui         3,315           900                 500             476         0
Quechan              2,500           475                 475             300         8
Salt River           6,405           700                 700               0        92
San Carlos           9,385           900                 500           1,000         6
Tohono O'Odham      10,787         1,400                 592               0        28
Tonto-Apache           132           475                 337             280         5
White Mountain      12,429           900                 496             200         5
Yavapai-Apache         743           475                 475               0         5
Yavapai                182           475                 475             150         6
Totals             179,064         9,925               7,504           7,094       280
Gaming tribes       65,697

Notes
* Indicates tribes without casinos even though a Class III gaming compact may have been signed with the State of Arizona.

Source: Arizona Department of Economic Security, 2001, and Arizona Department of Gaming, July 2001.

Table 16.2 Per capita slot machine revenue, unemployment rates, welfare and transfer payments for Arizona Indian reservations

Reservation       Slot machine revenue   Welfare cases+   Food stamps      Unemployment rate (%)
                  per capita** ($)       in FY 2000       in FY 2000 ($)   1990    2000
Ak-Chin                 64,016                 85              23,869       6.0     5.7
Cocopah                 45,659                 23               8,064      22.4    13.2
Colorado River           6,108              1,531             338,571       9.0     4.9
Fort McDowell           57,646                  2                 348      11.3     6.4
Fort Mohave             23,286                 98              26,143      13.9     7.8
Gila River               7,995              5,559           1,729,888      29.6    18.1
Havasupai*                   0                153              36,317      75.0     9.0
Hopi*                        0              2,904             860,911      55.0    15.2
Hualapai*                    0              1,080             262,798      37.0    20.2
Kaibab-Paiute*               0                120              22,718       n/a    12.5
Navajo*                      0             89,250          24,718,964      52.0    17.3
Pascua Yaqui            15,083              4,519           1,276,495      34.4    21.5
Quechan                 19,000                n/a                 n/a      43.0    33.3
Salt River              10,929              3,122             817,955      15.4     8.7
San Carlos               5,328             14,571           3,575,329      30.0    18.4
Tohono O'Odham           5,488             12,341           3,039,354      79.0    13.2
Tonto-Apache           255,303                n/a                 n/a      24.0     n/a
White Mountain           3,991             13,624           3,322,039       n/a    20.4
Yavapai-Apache          63,930                 17               1,034      12.5     0
Yavapai                260,989                n/a                 n/a      33.0     7.0
Total              750,400,000            148,999          40,060,797

Notes
* Indicates tribes currently operating without a casino though they may be a part of the compact currently signed with the state of Arizona.
** Using Cummings's $830,000,000 revenue estimate means that, on average, each of the 7,504 Indian casino slot and video poker machines earns over $100,000 per year. To calculate a conservative estimate of the per capita revenue from slot machines, I multiplied the number of machines by $100,000 and divided the total by the number of enrolled tribal members listed in the 2000 census.
+ Indicates the number of households currently receiving food stamps. The number of household residents varies.

Source: Arizona Department of Economic Security, July 2001 and US Census Bureau, Census 2000 Summary File. More recent data for this study was unavailable.
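The per capita revenue column in Table 16.2 follows the calculation described in the table's note; a minimal sketch, using the machine counts and 2000 populations from Table 16.1:

```python
# Per capita slot revenue, per the note to Table 16.2: machines in use
# x $100,000 per machine per year, divided by enrolled tribal members.
def per_capita_revenue(machines_in_use, enrollment, per_machine=100_000):
    return machines_in_use * per_machine / enrollment

# Machines in use and 2000 population from Table 16.1:
print(round(per_capita_revenue(475, 742)))   # Ak-Chin: 64016
print(round(per_capita_revenue(337, 132)))   # Tonto-Apache: 255303
print(round(per_capita_revenue(475, 182)))   # Yavapai: 260989
```

The calculation makes the skew visible at a glance: small tribes near urban markets (Tonto-Apache, Yavapai) show per capita figures two orders of magnitude above large rural tribes.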


Indian gaming a person might have gone to a movie or restaurant, but now instead goes to a casino. The money spent at the casino comes from one sector, where it diminishes sales, and goes to another, where it provides profits. There is nothing inherently wrong with this, except that it is incorrect to postulate positive economic impacts without considering the corresponding economic loss. Based upon the available evidence, there is a high probability that gambling confined to local markets only results in income redistribution and causes no net change in employment (Felsenstein et al., 1999). The reason for this is that casino profits constitute expenditure losses for competing businesses. Although some of the winnings are recaptured from gambling that would otherwise have been undertaken outside of the state, the fact is that Arizona Indian casinos derive almost all of their business from state residents. This is a significant difference that substantially reduces positive impacts (Rose, 2001). Furthermore, at least two casinos in Arizona, Ak-Chin and the Yavapai-Apache in Camp Verde, have management contracts with outside companies like Harrah's and Fitzgerald's, which result in up to 40 percent of the profits being remitted back to corporate headquarters outside Arizona. This means that the leakage is even greater because of reduced second-round expenditures. Figure 16.1 presents a model of the Arizona economy with casinos to demonstrate why gaming reduces potential multiplier effects. In Arizona, Indian gaming is a highly profitable monopoly that has been able to internalize the benefits and externalize the social costs. The nineteen casinos take in over $830 million in net revenue and, except for modest regulatory costs, none

Figure 16.1 A model of the Arizona economy with Indian casinos. Consumer expenditure flows to Indian casinos, which generate expenditure and income effects (direct employment, indirect employment, value added) alongside leakages: weak linkages with non-reservation businesses, a consumption multiplier lower than that of the private sector, and profits remitted to outside management companies. The State of Arizona experiences decreased tax revenues and increased liabilities.


of the profit is shared with the state. The Cornell study estimated that casinos spend only $254 million on goods and services, and have a total impact of $468 million. Comparing these estimates with the net revenue, it appears that Indian casinos are responsible for a drain of at least $362 million from the Arizona economy, and this does not consider other negative impacts.9

Claims of positive impacts of Indian casinos are overstated

Tribal leaders and the Arizona Indian Gaming Association (AIGA), an industry lobby, argue that Indian gambling generates spillover benefits in the form of jobs and taxable wage income. They point to the thousands of jobs created by casinos, and to the added purchasing power afforded to their employees and tribal members as a result of gambling. Defenders of Indian gaming argue that decreases in unemployment and a reduction in the number of families dependent upon welfare have reduced state and federal payments to tribes and thus saved money. At the same time, casinos are said to have been responsible for improved health care, substance-abuse programs, educational scholarships, and improvements to the housing stock and infrastructure of the reservation communities (Stern, June 18, 2001). While there is some truth in these assertions, they need to be considered in light of the evidence. It is clear that casinos have created jobs, albeit with a corresponding job loss in other sectors. However, the available evidence does not support the assertion that gaming has substantively reduced Native American unemployment. According to data from the Arizona Department of Employment Security and the BIA, there is no statistical difference in changes in unemployment between Arizona tribes with a casino and those without a casino. While individual tribes (e.g.
Cocopah) have experienced a dramatic decrease in unemployment, from 22.4 percent in 1990 to 13.1 percent in 2000, overall rates of unemployment for all tribes, including tribes without casinos, have shown a downward trend after peaking in 1994 (see Table 16.2). Furthermore, the rate of employee turnover in Indian casinos is high, and the residual level of permanent employment is much lower than one might assume.10 To be fair, there are numerous examples where tribes have used casino profits to improve the quality of life for tribal members. Infrastructure has been improved, housing has been built, and social services and health care have been expanded. Yet, relative to the per capita profits from gambling, there is still unexplained residual unemployment and continuing dependence on welfare and food stamps among tribes with casinos. For example, the Ak-Chin tribe has a total enrollment of 742 people. Out of a labor force of 209 adults, 5.7 percent were still unemployed in 2000. Despite the fact that the Ak-Chin casino generated net revenues in excess of $64,000 per capita, there were eighty-five individuals still receiving public assistance and food stamps. Numerous other gaming tribes, such as the Gila River, Pascua Yaqui, San Carlos, and White Mountain, experience similar anomalies with unemployment and welfare assistance. Contrary to widely promoted misconceptions, there is no evidence that gambling has significantly improved the quality of life for most Native Americans.


According to a report entitled Survey of Grant Giving by American Indian Foundations and Organizations recently released by Native Americans in Philanthropy (NAP), gaming on Indian reservations has yet to signiﬁcantly lower the high levels of poverty endemic to Indian people nationwide. The report found that poverty among Indians has actually risen during the past decade of the gaming boom, and now more than half of all reservation Indians lives below the poverty level more than four times the national average . . . Small tribes located near major urban areas have beneﬁted the most from the gaming boom. (Native Americas Magazine, 1997)

Displacement effects of Indian casinos are significant

My research with Donald Siegel has been directed towards understanding the fiscal impacts of commercial gambling. We found evidence to suggest that the opening of Indian casinos was related to a structural change in the state Transaction Privilege Tax (TPT)11 (Anders et al., 1998). In other words, Indian gaming profits come at the expense of other taxable sectors of the state's economy. This is because, on average, the State of Arizona collects about 5.5 percent of the revenue from taxable sales.12 Based on annual net revenues of approximately $830 million, on-reservation gambling reduces state tax receipts by approximately $47.3 million per year, depending upon the extent to which gambling is "exported" to tourists and seasonal residents.13 For the most part, economic growth and in-migration have masked these leakages. Unfortunately, due to the lack of Indian casino revenue data, our research approach required the use of fairly sophisticated statistical tests. These findings caused us to look closely at the question of whether or not Indian casinos have a negative impact on other forms of gambling (i.e. horse and dog racing, and lotteries) that contribute to state tax revenues. More recently, we empirically examined the relationship between the state lottery and Indian casinos. Using regression analysis with the number of slot machines as a proxy variable for casino revenues, we found that decreases in lottery sales are correlated with the growth in Indian gaming (Siegel and Anders, 2001). Thus, a consistent picture emerges from this stream of research: gambling on an Indian reservation by and large constitutes a leakage from the taxable economy. Economic impact studies written in support of tribal casinos typically use expenditure and employment multipliers to demonstrate that the casinos benefit regional economies through direct purchases and employment or through indirect multiplier effects.
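The displacement arithmetic is simple enough to check directly. A sketch using the figures cited in the text (note that 5.5 percent of $830 million comes to about $45.7 million, slightly below the chapter's $47.3 million estimate, which evidently rests on a somewhat larger revenue base):

```python
# Two headline figures from this chapter, recomputed from the cited inputs.
net_casino_revenue = 830_000_000   # Cummings Associates estimate ($/yr)
total_impact = 468_000_000         # Cornell and Taylor (2001) estimate
avg_tpt_rate = 0.055               # average TPT collection rate in the text

drain = net_casino_revenue - total_impact
displaced_tax = net_casino_revenue * avg_tpt_rate

print(f"net drain:     ${drain:,}")             # $362,000,000
print(f"displaced TPT: ${displaced_tax:,.0f}")  # ~$45.7 million
```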
Unlike Las Vegas or Atlantic City, which export gambling to residents of other states (Eadington, 1999), the casinos in Arizona rely almost exclusively on local traffic. About 94 percent of the patrons of Indian casinos are state residents (McKinnon, March 21, 2001), which greatly affects the way in which casinos impact other parts of the local economy. As a result, there is cannibalization of existing businesses and reduced tax revenue.


Now that these three issues have been addressed, the next section discusses a new, more extensive series of tests of the negative economic impact of Indian casinos.

Econometric analysis of displacement

The following explains a test of the sales tax revenue displacement that occurs when residents gamble at non-taxed Indian casinos. Displacement is the loss of government revenue as the result of an exogenous event. Empirically testing displacement as a function of Indian casinos is complicated by the favorable economic conditions and population growth of the last decade. Since the opening of Indian casinos in November of 1992, Arizona has experienced rapid demographic and economic growth. From 1990 to 2000 the state's population increased from 3.7 million to 5.1 million. As a result of increased personal spending, TPT collections grew dramatically, from $1.9 billion in 1991 to $3.6 billion in FY2000. The hypothesis tested here is that Indian casinos divert a potential tax revenue stream and reduce the amounts that governments can collect. In other words, potential taxable revenue was taken from the economy at a time when there was considerable growth and low unemployment. To test this hypothesis, it is necessary to perform a statistical analysis of TPT collections as a function of variables that are likely to explain their variation over time. Regression analysis is generally considered an appropriate statistical tool for testing economic hypotheses. This approach requires the specification of a formal model or equation that expresses the hypothesized relationships between dependent and independent variables. After testing a variety of variables and functional forms, an Ordinary Least Squares model was found to offer robust results with a parsimonious specification.14 Using data on TPT revenues, Arizona population, number of tourists, personal income, and the number of slot and video poker machines in Arizona Indian casinos, a series of regressions was conducted. Table 16.3 summarizes the variables and data sources used in this study.
Due to the fact that some of the data are kept on a monthly basis for a fiscal year beginning July 1, and others are kept on a quarterly basis for a calendar year, it was necessary to standardize the series on a quarterly basis.

Table 16.3 Variables and sources of data

Variable     Series used                                          Data source
TPT          Gross Transaction Privilege, Use and Severance      Arizona Department of Revenue
             Tax Collections
Population   Arizona Population                                  Center for Business Research, Arizona State University
Tourist      Estimated Visitor Count                             Arizona Office of Tourism
Slots        Machine Count                                       Arizona Department of Gaming

Since there are numerous

collection categories that would not be anticipated to have any interaction with casinos, the four sectors having the greatest likelihood of competition with casinos were tested. To accomplish this, the state TPT data were disaggregated to concentrate on four specific sectors: Restaurants/Bars, Amusements, Retail, and Hotels.15 The TPT collections from these four sectors are the dependent variables that were individually regressed against the independent variables: population, the number of tourists, and slot machines. The number of slot machines was used as a proxy variable. The equation also included a numeric trend term to capture growth, and a dummy for summer seasonal effects. The Ordinary Least Squares routine in the SPSS statistical program allowed for the specification of the following model:

TPT_i = a + β1 Pop + β2 Slots_{t-1} + β3 Trend + β4 Q1 + U

where i refers to the Transaction Privilege Tax collected from a specific sector; Pop is the population of the State of Arizona; Slots_{t-1} is the machine count in Arizona Indian casinos lagged by one period; Trend is a numerical term to account for growth; Q1 is a dummy variable for the first quarter of the fiscal year (July, August, and September); and U is a normally distributed error term. Each of these individual TPT collection classes was regressed against the set of independent variables noted above. The data set used ranged from the first quarter of FY 1992 to the fourth quarter of 2000. With four explanatory variables and thirty-four observations, the model has thirty degrees of freedom. Table 16.4 summarizes the results of the econometric tests. The explanatory power of a model is reflected in the R² statistic. In these regressions the R² statistics indicate that between 66 and 97 percent of the variation in TPT can be explained by variations in the independent variables. When the value of the Durbin-Watson (DW) statistic is close to 2, it can be assumed that there is no serial correlation in the residuals.
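The specification above can be illustrated end to end. The sketch below uses synthetic data (only the variable names follow the chapter; the series and coefficients are invented) to build the design matrix, intercept, Pop, lagged Slots, Trend, and the Q1 dummy, and recover known coefficients with a small hand-rolled OLS:

```python
# OLS sketch of the chapter's quarterly specification,
#   TPT_i = a + b1*Pop + b2*Slots_{t-1} + b3*Trend + b4*Q1 + u.
# All series here are synthetic; only the variable names follow the text.
import math

def ols(X, y):
    """Solve the normal equations (X'X) b = X'y by Gaussian elimination."""
    n, k = len(X), len(X[0])
    A = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(k)] for i in range(k)]
    b = [sum(X[r][i] * y[r] for r in range(n)) for i in range(k)]
    for col in range(k):                    # forward elimination with pivoting
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            A[r] = [arc - f * acc for arc, acc in zip(A[r], A[col])]
            b[r] -= f * b[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):          # back substitution
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return beta

T = 34                                                           # quarterly observations
pop = [3.7 + 0.04 * t + 0.2 * math.sin(t) for t in range(T)]     # millions (synthetic)
slots = [0.0 if t < 4 else 2.0 + 0.15 * t for t in range(T)]     # thousands (synthetic)
true = {"a": 50.0, "pop": 20.0, "slots": -10.0, "trend": 1.5, "q1": -5.0}

X, y = [], []
for t in range(1, T):                        # start at t = 1 to allow the lag
    q1 = 1.0 if t % 4 == 0 else 0.0          # stylized Jul-Sep dummy
    X.append([1.0, pop[t], slots[t - 1], t, q1])
    y.append(true["a"] + true["pop"] * pop[t] + true["slots"] * slots[t - 1]
             + true["trend"] * t + true["q1"] * q1)

beta = ols(X, y)    # recovers approximately [50, 20, -10, 1.5, -5]
```

Because the synthetic series are noiseless, the fitted coefficients match the generating values almost exactly; with real TPT data the same design matrix would be handed to a package routine (SPSS in the chapter).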
Table 16.4 Results of state TPT regressions using quarterly data

Dependent variable      Pop             Slots           Trend         Q1              R²     F     DW
TPT Restaurants/Bars    −0.39 (−0.85)   −0.19 (−1.3)    1.5 (2.7)*    −0.21 (−5.0)*   0.94   133   2.0
TPT Amusements          −2.5 (−2.1)*    −1.0 (−2.7)*    4.2 (2.9)*     0.09 (0.96)    0.66    14   1.8
TPT Retail              −0.09 (0.26)    −0.13 (−1.1)    1.2 (2.8)*    −0.09 (−2.9)*   0.97   227   2.0
TPT Hotels              −0.97 (−1.1)    −0.60 (−0.21)   1.70 (1.67)   −0.56 (−7.2)*   0.82    35   2.1

Notes: T statistics in parentheses. * Statistically significant at the 95 percent confidence level.

Of these models, the R² statistics for Restaurants and Bars and for Retail are the highest; however, only for the Amusements TPT is the T-statistic for the β2 Slots parameter greater than the critical value at the 95 percent confidence level. The most significant finding is that the number of slot machines

is negatively correlated with the TPT collections for each of the four sectors. The best fit occurred with the Slots variable lagged one quarter, which implies that every increase in the number of slot machines had a future negative impact on the Amusements TPT. The impact of seasonality is quite strong, as reflected by the T statistics for Trend and Q1. Stationarity, or changes in the underlying economic trend, is also a concern with economic time series, because the regression assumes a stable relationship between dependent and independent variables. To some extent this has been corrected by the use of a trend variable and a seasonal dummy. Ideally, there should be enough observations to decompose the data set to capture the different trends.16 To achieve a model capable of seasonal decomposition, I ran another set of regressions, this time using monthly data and the following specification (see Table 16.5):

TPT_i = a + β1 Slots + β2 Trend + β3 Q1 + β4 Q2 + β5 Q3 + U

where i refers to the Transaction Privilege Tax collected from a specific sector; Slots is the machine count in Arizona Indian casinos for that period; Trend is a numerical term to account for growth; Q1 is a dummy variable for the first quarter of the fiscal year (July, August, and September); Q2 is a dummy variable for the second quarter of the fiscal year; Q3 is a dummy variable for the third quarter of the fiscal year; and U is a normally distributed error term. The model has eighty-three degrees of freedom with five explanatory variables.17 These results clearly demonstrate that, on a statewide basis, the incidence of Indian casinos has had a negative impact on the four sectors of the Arizona economy. These results confirm, at the 95 percent confidence level, the hypothesis of a revenue displacement, particularly in the Amusements sector.
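The DW statistics reported in Tables 16.4-16.6 come from the standard formula applied to the regression residuals; a minimal implementation:

```python
# Durbin-Watson statistic: DW = sum_t (e_t - e_{t-1})^2 / sum_t e_t^2.
# Values near 2 suggest no first-order serial correlation; values near 0
# suggest positive autocorrelation, and values near 4 negative.
def durbin_watson(residuals):
    num = sum((residuals[t] - residuals[t - 1]) ** 2
              for t in range(1, len(residuals)))
    den = sum(e * e for e in residuals)
    return num / den

print(durbin_watson([1, -1, 1, -1, 1, -1]))  # alternating -> near 4
print(durbin_watson([1, 1, 1, 1, 1, 1]))     # persistent  -> 0.0
```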
The negative parameter indicates an inverse relationship between the growth of Indian casinos and horse and greyhound track revenues.18 Because population is concentrated in two urban areas, there is good reason to expect that these relationships will also be evident for Maricopa and Pima counties. In addition, these two counties have high concentrations of Indian casinos. Using the same format, regressions were performed on county TPT collections excluding the collections from Maricopa and Pima counties. Owing to changes in the Department of Revenue collection methodology, it was not possible to get a comparable time series for all four sectors. Instead, complete series were only available for two sectors: Retail, and Bars and Restaurants. Again, regressions were run using quarterly observations from the first quarter of 1991 to the fourth quarter of 2000. Table 16.6 presents a summary of the county results. These results similarly confirm the negative relationship between Indian gaming and TPT at the county level. It is interesting that the displacement effect is present in less populated counties. There are several possible explanations for this. First, there are fewer entertainment choices in rural areas. Second, Indian casinos are spread widely throughout the state. Finally, it could be that the most pronounced effects of gaming are in less populated areas, where the drain is less dampened by economic growth.

Table 16.5 Results of state TPT regressions using monthly data

Dependent variable      Slots           Trend           Q1               Q2              Q3               R²     F     DW
TPT Restaurants/Bars    0.29 (−8.6)*    −1.0 (−14.6)*   −0.39 (−11.7)*   −0.12 (3.9)*    −0.12 (−1.85)*   0.93   266   2.1
TPT Amusements          −0.44 (−4.4)*   0.88 (4.2)*     −0.15 (−1.5)*    −0.18 (−1.8)*   −0.36 (−1.7)*    0.44    14   1.6
TPT Retail              −0.13 (−1.9)*   −0.09 (−0.66)   −0.133 (−2.0)*   0.05 (0.80)     −0.90 (6.9)*     0.77    57   2.2
TPT Hotels              −0.42 (−5.2)*   0.49 (2.9)*     −0.73 (−9.0)*    −0.25 (−3.0)*   −0.008 (−0.47)   0.64    31   1.7

Notes: T statistics in parentheses. * Statistically significant at the 95 percent confidence level.

Table 16.6 Results of county TPT regressions

Dependent variable                            Pop             Tourist          Slots             Trend          R²     F    DW
TPT Retail less Maricopa and Pima             0.096 (1.41)    0.092 (0.162)    −0.272 (−1.36)    1.05 (1.53)    0.92   82   1.91
TPT Restaurants/Bars less Maricopa and Pima   0.031 (0.840)   −0.758 (−1.01)   −0.200 (−0.756)   1.92 (2.10)*   0.86   54   1.94

Notes: T statistics in parentheses. * Statistically significant at the 95 percent confidence level.


Impacts on city taxes

City tax collections are another important area that may be impacted by the construction of hotels and resorts on Indian reservations. At least three tribes in close proximity to the Phoenix metropolitan area are building resorts. The Ak-Chin Indian Community and Harrah's Entertainment, which operates the tribal casinos, have already opened a 146-room resort (McKinnon, March 13, 2001). The Gila River Indian Community is building a $125 million resort and spa south of Phoenix with 500 rooms, two 18-hole golf courses, and an industrial park (McKinnon, May 16, 2001). The Fort McDowell casino is building a 500-room resort, and the Salt River community will build a resort complex with 1,000 rooms. It is reasonable to anticipate that these developments will negatively impact cities that derive tax revenue from hotels and bed taxes (Schwartz, 2001). The logic for anticipating a dramatic revenue loss is based upon the following rationale.

1 The resort industry generates substantial taxes for cities, counties and the state. As shown in Table 16.7, Valley cities apply between a 3.8 and 6.5 percent tax on hotels and motels. The state and counties also receive revenue from taxes on hotels. If one includes property and state income taxes, the total tax contribution is large.
2 The economic downturn is already putting pressure on tourism, and thus taxes from resorts and other tourism-related sources decrease. At the same time, overbuilding may increase competition between existing hotels.
3 The tribes are planning posh resorts with large conference centers and amenities, including golf courses, that will draw business and tax dollars away from off-reservation properties. These will be first-class facilities with great locations in an established market.
4 Room prices can be subsidized with gambling profits the same way that Las Vegas casinos subsidize their meals and rooms in order to draw customers (Eadington, 1999).
5 Even if the tribes do not choose to subsidize room costs with gaming profits, their prices can be substantially lower simply because no local taxes will be charged.

Assuming a total increase of 2,146 hotel rooms from Indian resort construction, the anticipated tax loss would be approximately $1.48 million per year for city governments.19 There is also the possibility of further tax losses from golf clubs and other amusements that will be available at the casino properties.

Caveats and conclusions

Since 1992, approximately nineteen Indian casinos have been established in Arizona. These casinos have generated hundreds of millions of dollars in profits for tribal communities. Although halted by a recent Federal Court decision, compact negotiations between Governor Hull and Arizona's Indian tribes are under way. It has been publicized that, in return for revenue sharing that could result in $83 million per year for the State of Arizona, the tribes will be able to increase the number of

Economic impact of Indian casino gambling


Table 16.7 City hotel and bed taxes

City            Tax base  Bed    Total   Total amount       Number of    Tax per
                rate∗     tax∗∗  tax∗∗∗  collected in FY    hotel rooms  room (in $)
                                         1999–2000 (in $)
Carefree        2.0       3.0    5.0     520,051            409          1,271.52
Cave Creek      2.5       4.0    6.5     21,784             24           907.68
Fountain Hills  1.6       3.0    4.6     87,034             125          696.27
Mesa            1.5       2.5    4.0     1,365,447          4,961        275.24
Phoenix         1.8       3.0    4.8     21,289,336         22,375       951.48
Scottsdale      1.4       3.0    4.4     7,173,484          13,316       538.71
Tempe           1.8       2.0    3.8     1,635,517          5,452        299.98

Source: Individual cities' Tax Audit Departments, June 2001; individual cities' Convention and Visitors Bureaus, June 2001; Arizona Office of Tourism, June 2001; Northern Arizona University Office of Tourism and Research Library, June 2001.

Notes
∗ The base rate is the total sales tax charged by the city before the additional bed tax is applied. This does not include any state or county taxes.
∗∗ The bed tax is the additional rate that cities apply, on top of the sales tax, to hotel and motel rooms.
∗∗∗ The total tax is the sum of the base rate and the bed tax.
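The figures in Table 16.7 also underpin the $1.48 million estimate in note 19: the average room tax of roughly $688 is total FY 1999–2000 collections divided by the total number of hotel rooms. A minimal sketch of that arithmetic follows; the variable names are ours, not the chapter's:

```python
# FY 1999-2000 hotel/bed tax collections and room counts from Table 16.7,
# keyed by city as (total amount collected in $, number of hotel rooms).
collections = {
    "Carefree": (520_051, 409),
    "Cave Creek": (21_784, 24),
    "Fountain Hills": (87_034, 125),
    "Mesa": (1_365_447, 4_961),
    "Phoenix": (21_289_336, 22_375),
    "Scottsdale": (7_173_484, 13_316),
    "Tempe": (1_635_517, 5_452),
}

total_tax = sum(t for t, _ in collections.values())
total_rooms = sum(r for _, r in collections.values())

# Weighted average tax per room; note 19 rounds this to $688.
avg_room_tax = total_tax / total_rooms

# Note 19: anticipated annual city tax loss from 2,146 new on-reservation rooms.
new_rooms = 2_146
tax_loss = round(avg_room_tax) * new_rooms  # approximately $1.48 million
```

Multiplying the rounded $688 by 2,146 rooms gives $1,476,448, the "approximately $1.48 million" of note 19.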

slot machines to 14,675 and offer "house-backed" table games such as blackjack (Pitzl and Zoellner, 2002). At a time when the State of Arizona faces an increasingly large projected deficit, such an increase in revenues may look attractive. But the combined impact of increased gaming may be underestimated. Cummings Associates estimates that an increase in the number of slot machines in the Phoenix metro area will raise tribal gaming revenues to about $1.3 billion, or one-sixth of the annual state budget (MacEachern, 2001).

Table 16.8 demonstrates how an expansion in the number of slot machines would further erode state and local taxes. If we assume that the current average slot machine nets approximately $100,000 per year, then each additional 1,000 slot machines would generate a $100 million increase in gambling revenues. Assuming a combined tax loss of 9.07 percent to state, county, and city governments,20 we can anticipate that the annual displacement would grow from $68 million to $133 million per year.

The available statistical evidence clearly demonstrates that Indian casinos have a significant negative economic impact on the state economy. Furthermore, an expansion of Indian casinos and resorts will exacerbate the tax drain on local government. Certainly, Arizona public officials should consider these findings as they continue to grapple with the negotiation of new compacts for Indian gaming.

Nothing here is infallible. The data used are subject to economic cycles and structural changes that affect the reliability of the econometric results. There may also be inaccuracies in applying an average slot machine net revenue value to all casinos. But rather than relying on emotional appeal or casual reasoning, I have tried to use the existing data in a logical and well-reasoned

Table 16.8 Estimated impact of an increase in slot machines

Number of slot  Estimated annual   Estimated revenue
machines        revenues (in $)    displacement (in $)
7,504∗          750,400,000        68,061,280
10,925∗∗        1,092,500,000      99,089,750
14,675∗∗∗       1,467,500,000      133,102,250

Source: Arizona Department of Gaming. http://www.gm.state.az.us/.htm

Notes
∗ Current number of gaming devices in Arizona casinos.
∗∗ Number of gaming devices authorized by existing compacts with the State of Arizona.
∗∗∗ Maximum number of gaming devices limited by statute.
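The displacement figures in Table 16.8 follow directly from the two assumptions stated in the text: $100,000 of net revenue per machine per year, and a combined state, county and city tax loss of 9.07 percent (note 20). A quick check, with names of our own choosing:

```python
# Assumptions from the text: $100,000 net revenue per slot machine per year,
# and a 9.07 percent combined state/county/city tax loss (note 20).
REVENUE_PER_SLOT = 100_000
TAX_LOSS_RATE = 0.0907

def displacement(slots: int) -> tuple[int, int]:
    """Return (estimated annual revenues, estimated revenue displacement) in dollars."""
    revenue = slots * REVENUE_PER_SLOT
    return revenue, round(revenue * TAX_LOSS_RATE)

# The three scenarios in Table 16.8.
for slots in (7_504, 10_925, 14_675):
    print(slots, *displacement(slots))
```

Each scenario reproduces the corresponding row of the table, with the displacement growing from roughly $68 million at the current number of machines to $133 million at the statutory maximum.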

manner to inform the public. The point is that we should use whatever empirical information we have to weigh both the costs and the benefits of Indian gaming.

Acknowledgements

The author acknowledges the talented assistance of Christian Ulrich and Robyn Stout. Kathy Anders provided invaluable help in collecting the data and in sorting out the complexities of Arizona taxes. Special thanks to Don Siegel, David Paton and Roger Dunstan for their comments and suggestions. All remaining errors are mine.

Notes

1 The terms Native American and American Indian will be used interchangeably without intending offense.
2 Tribes can be recognized in only two ways: by an act of Congress, or through a lengthy and complex recognition process with the Department of the Interior, in which the Assistant Secretary for Indian Affairs makes the final determination on tribal recognition. Currently, over 109 groups are seeking recognition from the Department of the Interior for the purposes of establishing a reservation.
3 In two famous cases (Seminole Tribe v. Butterworth and California v. Cabazon Band of Mission Indians), the courts found that when a state permits a form of gambling, including bingo or "Las Vegas" nights, the tribes have the right to conduct gaming operations on their own land (Chase, 1995).
4 All Native Americans pay federal income, FICA and social security taxes; however, Indians who live and work on federally recognized reservations are exempt from paying state income and property taxes. Each tribe sets its own membership rules. In order to be eligible for federal benefits, most tribes require a person to have a one-fourth blood quantum of his tribe's blood to be an enrolled member. Some tribes have additional qualifications for membership.
5 The current compacts establish a minimum pay-out of 80 percent for slot machines and 83 percent for electronic poker or blackjack. Arizona Department of Gaming. http://www.gm.state.az.us/history2.htm.


6 The typical casino derives approximately 70 percent of its total revenues from slot and other electronic gaming machines (Eadington, 1999). Indian casinos derive a much higher percentage from slot machines.
7 This is also a national trend. The General Accounting Office's (1997) study of Indian gaming found that a small portion of Indian gambling facilities accounted for a large share of the aggregate gaming revenue. Similarly, the California Research Bureau estimates that 7 percent of the Indians in California are members of gaming tribes. Most California tribes are small, ranging from 50 to 400 people. The forty-one gaming tribes have about 18,000 members (Dunstan, 1998).
8 According to Leven (2000), when there are offsetting changes in local demand, as in the case of Indian casinos, the net multiplier can be less than one and even negative.
9 The basis for this conclusion is the difference between casino net revenues ($830 million) and the total economic impact ($468 million). If total spending instead of net revenues is used, the resulting difference is much greater.
10 This is based on several factors, including high job turnover for the low-wage hourly jobs, and the required educational level for jobs in the "back of the house" operations of casinos, which include accounting, financial management, marketing, and human resource management functions.
11 Technically, the State of Arizona does not have a "sales tax" paid by consumers for the goods they purchase. Instead, a tax on sales is paid by the vendor for the privilege of doing business in the state.
12 There is a complex series of taxes by industry class code. These rates ranged from 1 percent to 5.5 percent depending on the type of business. Effective June 1, 2001, the state TPT rates increased 0.06 percent as the result of Proposition 301. For more information see: http://www.revenue.state.az.
13 Since it is a net revenue figure, it does not reflect how much is actually spent at casinos. TPT taxes are based on spending, so the actual displacement effect would be even larger.
14 Parsimony is the use of the fewest number of explanatory variables and the simplest functional form in order to achieve an acceptable level of statistical significance.
15 This approach follows from previous findings regarding the sectoral impacts of casinos (Siegel and Anders, 1999).
16 Other functional forms (i.e. log linear and first differences) and other variables, including Personal Income, a data set maintained by the US Bureau of Economic Analysis, were also tried, with mixed results.
17 A dummy variable for the fourth quarter is not necessary.
18 "In 1993 horse and dog tracks pumped $8.5 million into the state's budget. That dropped to $3 million last year. Revenues at the state's four dog tracks' live races have plunged to $77 million from $116.3 million from 1993 to 1999. The story is similar at the state's live horse-racing tracks, where revenues slid to $46.5 million from $80.6 million in the same period" (Sowers and Trujillo, 2000).
19 This amount was computed by taking the average room tax ($688) times the number of new hotel rooms (2,146).
20 This is based on the sum of state and Maricopa County tax rates equaling 7.87 percent, plus a city tax of 1.8 percent.

References

Anders, Gary C., Siegel, Donald, and Yacoub, Munther (1998), "Does Indian casino gambling reduce state revenues: Evidence from Arizona." Contemporary Economic Policy, XVI(3), 347–355.
Anders, Gary C. (1998), "Indian gaming: Financial and regulatory issues." The Annals of the American Academy of Political and Social Science, March, 98–108.


Chase, Douglas W. (1995), "The Indian Gaming Regulatory Act and state income taxation of Indian casinos: Cabazon Band of Mission Indians v. Wilson and County of Yakima v. Yakima Indian Nation." Tax Lawyer, 49(1), 275–284.
Cornell, Stephen and Taylor, Jonathan (2001), "An analysis of the economic impacts of Indian gaming in the state of Arizona." Udall Center for Studies in Public Policy, June.
Cummings Associates (2001), "The revenue performance and impacts of Arizona's Native American casinos," February 16.
Dunstan, Roger (1998), Indian Casinos in California. Sacramento, CA: California Research Bureau.
Eadington, W. R. (1990), Native American Gaming and the Law. Reno, Nevada: Institute for the Study of Gambling.
Eadington, William R. (1999), "The economics of casino gambling." The Journal of Economic Perspectives, 13(3), 173–192.
Felsenstein, Daniel, Littlepage, Laura, and Klacik, Drew (1999), "Casino gambling as local growth generation: Playing the economic development game in reverse." Journal of Urban Affairs, 21(4), 409–421.
Gazel, Ricardo (1998), "The economic impacts of casino gambling at the state and local levels." The Annals of the American Academy of Political and Social Science, March, 66–85.
Harris, Percy (1997), "Limitation on the use of regional economic impact multipliers by practitioners: An application to the tourism industry." The Journal of Tourism Studies, 8(2), 50–61.
Hogan, Tim and Rex, Tom R. (1991), "Demographic trends and fiscal implications," in McGuire, Therese J. and Naimark, Dana Wolfe (eds) State and Local Finance for the 1990s: A Case Study of Arizona. School of Public Affairs, Arizona State University, Tempe, 37–44.
Leven, Charles L. (2000), "Net economic base multipliers and public policy." The Review of Regional Studies, 30(1), 57–60.
National Gambling Impact Study Commission (1999), National Gambling Impact Study Commission Report, http://www.ngisc.gov.
Mattern, Hal (2001), "Indian gaming: $468 mil impact." The Arizona Republic, June 21, D2.
MacEachern, Doug (2001), "Slots in the city." The Arizona Republic, April 29, V1–2.
McKinnon, Shaun (2001), "Ak-Chin open resort hotel." The Arizona Republic, March 13, D1, 6.
McKinnon, Shaun (2001), "Indian casinos savvy advertisers." The Arizona Republic, March 21, D1, 5.
McKinnon, Shaun (2001), "Tribes bet on future." The Arizona Republic, May 16, D1, 3.
Native Americas Magazine (1997), "Indian gaming having little effect on poverty," February 18, p. 3.
Pitzl, Mary Jo and Zoellner, Tom (2002), "Tribes OK gaming deals." The Arizona Republic, February 21, A1, A2.
Rose, Adam (2001), "The regional economic impacts of casino gambling," in Lahr, M. L. and Miller, R. E. (eds) Regional Science Perspectives in Economic Analysis. Elsevier Science, 345–378.
Siegel, Donald and Anders, Gary (1999), "Public policy and the displacement effects of casinos: A case study of riverboat gambling in Missouri." Journal of Gambling Studies, 15(2), 105–121.
Siegel, Donald and Anders, Gary (2001), "The impact of Indian casinos on state lotteries: A case study of Arizona." Public Finance Review, 29(2), 139–147.


Sowers, Carol and Trujillo, Laura (2000), "Gaming drains millions in potential taxes." The Arizona Republic, September 16, A1, 12.
Stern, Ray (2001), "Casino boom raises odds for addiction." Scottsdale Tribune, June 18, A1, 14.
Schwartz, David (2001), "Tribal casinos expand into resorts." Lasvegas.com Gaming Wire, May 25, http://www.lasvegas.com/gamingwire/terms.html.
Taylor, Jonathan, Grant, Kenneth, Jorgensen, Miriam, and Krepps, Matthew (1999), Indian Gaming in Arizona: Social and Economic Impacts on the State of Arizona. The Economic Resource Group, Inc., May 3.
US General Accounting Office (1997), A Profile of the Indian Gaming Industry. GAO/GGD96-148R.
Wang, Phillip (1997), "Economic impact assessment of recreation services and the use of multipliers: A comparative examination." Journal of Parks and Recreation Administration, 15(2), 32–43.

17 Investigating betting behaviour

A critical discussion of alternative methodological approaches

Alistair Bruce and Johnnie Johnson

Introduction

It is frequently observed that the investigation and analysis of betting behaviour occupies researchers from a wide range of disciplinary backgrounds, within and beyond the social sciences. Thus, aspects of betting activity, at the individual and collective levels, raise important questions for, inter alia, theoretical and applied economists, decision theorists, psychologists, those interested in organisational behaviour, risk researchers and sociologists. One consequence of this diverse research community is the sometimes sharp distinction in approach to the investigation of betting-related phenomena between researchers with different disciplinary affiliations. This creates a fertile environment for the comparative evaluation of alternative methodological traditions.

The aim of this contribution is to explore an important strand of the methodological debate by discussing the relative merits of laboratory-based and naturalistic research into betting behaviour. This involves comparing a methodological tradition, laboratory-based investigation, which has dominated the psychological approach to understanding betting, with the study of actual in-field betting activity, which is more closely associated with the recent economic analysis of betting phenomena.

It seems reasonable to suggest that the traditional emphasis on laboratory at the expense of naturalistic investigation probably owes much to the alleged benefits of the former. Thus, proponents of laboratory-based investigation have tended to stress its advantages in terms of cost, the ability to isolate the role of specific variables and the opportunity it affords for confirmation of results via replication of experiments under tightly controlled conditions. At the same time, naturalistic work is often criticised by laboratory advocates for its absence of control groups and its inability to control the number of observations in the categories of activity under scrutiny (see Keren and Wagenaar, 1985).

Whilst a central contention of this chapter is that there is a compelling case for more emphasis on naturalistic vis-à-vis laboratory-based work, this is not to deny that the co-existence of these quite different investigative techniques yields


benefits in terms of our overall understanding of the field. Indeed, as Baars (1990) observes:

Without naturalistic facts, experimental work may become narrow and blind: but without experimental research, the naturalistic approach runs the danger of being shallow and uncertain.

To some degree the legitimacy of these differing traditions reflects the fact that each offers unique insights into different aspects of betting. Thus, for example, laboratory-based work permits a richer understanding of the individual cognitive processes that lie behind betting decisions. Naturalistic research, by contrast, focuses on the investigation of observable decision outcomes in non-controlled settings. The distinction between process and outcome is significant here, in reflecting those features of betting that engage more closely the interest of psychologists and economists, respectively.

This chapter is structured in three main parts. First, we explain the particularly fertile opportunities for naturalistic research that are available to betting researchers, compared with naturalistic enquiry in other areas of behavioural analysis. A key feature here is the discussion of the richness of various aspects of the documentary material in relation to betting. The second part of the discussion addresses the particular difficulties associated with laboratory-based investigation of betting behaviour; three main areas of weakness associated with the laboratory setting are considered. Finally, the section on 'Calibration in naturalistic betting markets' reports significant empirical distinctions between observed naturalistic behaviour and behaviour in the laboratory, which serve to illustrate the limitations of laboratory work in this area.

Betting research: the opportunities for naturalistic inquiry

Whilst the discussion that follows relates to the advantages of naturalistic research in the specific context of horse-race betting in the UK, many of the issues raised apply equally in the context of other forms of betting and wagering in the UK, as well as to horse-race and other betting activity in non-UK settings. Naturalistic research into betting enjoys significant advantages, from a purely pragmatic perspective, over naturalistic research in other areas of decision-making under uncertainty. A key factor in the potential for naturalistic betting research is the existence of a particularly rich qualitative and quantitative documentary resource for the analysis of a range of betting-related phenomena.

Before describing the data in greater detail, it is instructive, given the focus of this chapter, to explain briefly the nature of horse-race betting in the UK. Essentially, for any horse-race in the UK, there are two parallel forms of betting market available to the bettor: the pari-mutuel market and the bookmaker market. The pari-mutuel market, whilst globally more significant, is very much a minority market in the UK,


relative to the bookmaker market, which accounts for around 90 per cent of horse-race betting activity. For each form of market there are, equally, two sub-markets: the on-course market, relating to bets placed at the racecourse, and the off-course market, where bets are placed in licensed betting offices. Whilst on- and off-course betting are clearly separate bodies of activity in terms of location, there are important institutional linkages between the two parts of the market, especially in relation to bookmaker markets. Thus, for example, off-course bookmakers may manage their potential liabilities by investing funds, via their agents, in the on-course market. This will affect the pattern of odds in the on-course market, which in turn affects the odds available off-course, given that the odds reported (and available to bettors) off-course are those obtaining at that time in the on-course market.

One of the appealing features of off-course bookmaker-based markets in particular, in data terms, is the fact that all betting decisions are individually recorded on betting slips, by the decision maker. Availability of samples of betting slips, therefore, immediately gives the researcher access to a set of key features relating to the betting decision, which permits insights into a range of issues. Thus, a betting slip relating to a bet placed in a betting office routinely carries information relating to:

1 The particular horse(s), race(s), race time(s) and race venue(s) selected, thereby offering explicit details of the individual decision and its context.
2 The stake, that is, the extent of the financial commitment to the decision.
3 The type of bet; this indicates, for example, (i) whether success is a function of a single correct decision (the 'single' bet), or of several simultaneously correct decisions ('multiple' bets) and (ii) whether or not the bet has 'insurance' features that generate some return if the horse is 'placed' as well as if it wins its race (e.g. 'each-way' bets) and so on.
4 Whether the bet was placed at Starting Price, Board Price or Early Price,1 thereby offering insights into the bettor's subjective evaluation of the 'value' inherent in the prices offered.
5 Whether tax2 was paid at the time of bet placement (prior to October 2001), a factor that can be held to indicate the bettor's level of confidence in a bet.
6 Exactly when the bet was placed, thus allowing insights into the value of information in evolving betting markets.
7 Exactly where the bet was placed, thereby facilitating cross-locational analysis.
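The seven items above amount to a well-defined record of each decision. As a sketch, a researcher's coding of a single slip might look like the following; the class and field names are our illustration, not an industry or Betting Research Unit format:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class BettingSlip:
    """Illustrative record of the data routinely carried on an off-course betting slip."""
    selections: list[str]  # horse(s), race(s), race time(s) and venue(s) selected
    stake: float           # financial commitment to the decision, in pounds
    bet_type: str          # e.g. 'single', 'multiple' or 'each-way'
    price_basis: str       # 'Starting Price', 'Board Price' or 'Early Price'
    tax_prepaid: bool      # whether tax was paid at bet placement (pre-October 2001)
    placed_at: datetime    # exactly when the bet was placed
    shop_location: str     # exactly where the bet was placed

# A single hypothetical decision, as it might be coded for analysis.
slip = BettingSlip(
    selections=["Example Runner, 3.30, Newmarket"],
    stake=10.0,
    bet_type="each-way",
    price_basis="Board Price",
    tax_prepaid=True,
    placed_at=datetime(2001, 6, 1, 15, 12),
    shop_location="Nottingham",
)
```

Structuring samples of slips this way makes the cross-sectional comparisons discussed below (by bet type, timing or location) straightforward.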

Clearly, the detail available from the betting slip has the potential to add significantly to our understanding of a range of aspects of the betting decision. Even with the comparatively high levels of electronic sophistication that exist in bookmaking organisations and betting offices in the UK today, the betting slip remains overwhelmingly dominant as the means by which a bet is registered by the consumer. As a basis for the clear identification of individual decisions, this is in marked contrast to other markets for state-contingent claims, such as markets for various financial instruments, where the data relating to individual decisions are elusive. Factors inhibiting the analysis of individual decisions in such contexts


include the employment of agents to conduct transactions and the fact that many decisions may simply result from the automatic implementation of predetermined trading rules.

Beyond the data detailed on the betting slip, but available in the public domain, the results of races allow an unequivocal insight into the performance of the betting decision. Again, the contrast with other financial markets is compelling. In most financial contexts, the market duration is not defined, so that unequivocal statements regarding decision performance are not feasible. A further characteristic of betting markets, which derives from their finite nature, is that the researcher has access to a very large, and continually expanding, set of 'completed' markets on which to base analysis.

Within the aggregate of betting markets, there is scope for distinguishing between (and hence scope for comparative analysis across) different types of horse race, according to a variety of criteria such as the class or grade of the race, the form of race (e.g. handicap vs non-handicap) or the number of runners in the race. All of these additional characteristics are readily accessible in the public domain. Thus, to a degree, the researcher has the opportunity to control for aspects of the decision setting (e.g. complexity; see e.g. Bruce and Johnson, 1996; Johnson and Bruce, 1997, 1998), which might be regarded as potentially influential in determining the nature of the decision, by comparing subsets of races comprising sufficient numbers to guarantee statistically meaningful results.

The pari-mutuel betting market in the UK, operated by the Horse-race Totalisator Board (the 'Tote'), offers a distinct set of data possibilities that reflects its different structure and mechanisms. The pari-mutuel market relies on an electronic system that obviates the need for individual bettors to complete betting slips, but which generates valuable aggregated information relating to the comparative betting activity of on-course, off-course and credit account bettors. Furthermore, the UK betting environment offers the near-unique opportunity to compare betting behaviour between two materially different market forms across a common set of betting events, thereby offering potentially valuable insights into the effect on betting of institutional peculiarities of market process and structure. This permits field-testing of issues that have emerged as central to the research agenda of the experimental economics school (see, e.g. Smith, 1989; Hey, 1991, 1992).

A further benefit associated with the use of naturalistic data for analysis of betting behaviour has increased in significance in recent years. Thus, both the 'civilising' of the betting office environment as a result of successive episodes of deregulation and the widening awareness of gambling promoted by the National Lottery have meant that bettors constitute a more representative sample of the aggregate population in terms of most demographic variables.

The factors discussed above serve to explain the particular appeal of naturalistic enquiry to betting researchers in a UK horse-racing context, in terms of data richness and volume, opportunities for comparative investigation and increasing representativeness. Beyond these particular advantages, it is worth noting, more generally, that horse-race betting markets feature a set of characteristics that


conform closely to an influential perspective on the essence of naturalistic decision-making proposed by Orasanu and Connolly (1993). Thus, the factors regarded as distinctive to naturalistic decision-making are as follows.

1 The existence of poorly-structured problems: horse races can be seen as poorly structured in that there are, for example, no rules regarding how the decision problem should be addressed, how to combine different forms of information or how to select a particular form of bet.
2 Uncertain dynamic environments: betting markets are inherently uncertain, dealing with conjecture regarding the unknown outcome of a future event, and dynamic, as evidenced by the fast-changing and turbulent nature of prices in betting markets.
3 Shifting, badly-defined or competing goals: the motivations of bettors are complex and individual. For example, bettors may value financial return, intellectual satisfaction or the social interaction associated with betting (see Bruce and Johnson, 1992). Equally, motivations may shift during, for example, the course of an afternoon's betting activity, depending on the bettor's experience or the emerging pattern of results.
4 Action/feedback loops: a central feature of the betting task is that it involves making decisions based on the analysis of various forms of information/feedback from, inter alia, previous horse-races and informed sources (e.g. trainers, owners and 'form' experts). Action/feedback loops are central to the continual adjustment and refinement of decision-making models as the information set evolves.
5 Time stress: the significant majority of betting activity on horse races takes place within a highly condensed time frame (typically around 20–30 minutes) prior to the start of the race, at which point the market is closed.
6 High stakes: whilst the level of stake is at the discretion of the bettor, a key factor in the context of this discussion is that the stake represents the bettor's own material resource.
7 Multiple players: whilst betting markets for individual horse races vary in terms of the number of participants, betting is a mass consumption activity in the UK.

A fuller discussion of the degree to which the horse-race betting environment captures these essential characteristics of the naturalistic decision setting is presented in Johnson and Bruce (2001).

Laboratory-based research: a critical perspective

The aim of this section is to provide a critical assessment of the potential and limitations of laboratory-based investigation in providing insights into aspects of decision-making behaviour. It is important to stress here that whilst the understanding of decision-making behaviour is a key theme of the betting research agenda, the analysis of betting behaviour offers a significant insight into wider


decision-making. This section provides the basis for a consideration, in the following section, of examples of empirically-observable distinctions between laboratory and naturalistic behaviour in particular relation to betting.

The discussion of the laboratory environment focuses on three main areas of concern: the nature of the decision task in which participants are required to engage, the nature of the participants themselves and the character of the environment in which laboratory-based tasks are performed. It is, of course, the case that consideration, respectively, of the individual, task and environment represents an artificial separation. Clearly, the behaviour observed reflects the simultaneous influence of and interaction between factors at each level. As a loose organisational device, however, such a separation offers advantages in terms of identifying different forms of influence.

The decision task

One of the more important shortcomings of the laboratory investigation of betting and decision performance in general is its tendency to characterise the betting decision task as a discrete and clearly-specified problem that generates a similarly discrete decision/betting 'event'. One reason for this is the need to present subjects with a comprehensible and unambiguous problem, which in turn allows the researcher to identify an unambiguous and discrete response by the subject for the purposes of analysis. However, it has been observed (Orasanu and Connolly, 1993) that real life decision tasks rarely take this form:

The decision maker will generally have to do considerable work to generate hypotheses about what is happening, to develop options that might be appropriate responses, or even to recognise that the situation is one in which choice is required or allowed.

At the same time, for other forms of decision, including betting tasks, the types of process described above may be wholly inappropriate, the decision response resulting perhaps from an intuitive rather than an analytical approach. The laboratory decision task, as generally specified, tends towards the 'simple analytical' form, in the interests of generating identifiable events and measurable effects. It thereby offers little by way of insight into more complex analytical or more intuition-based decision and betting problems.

Equally, it is frequently the case that laboratory decision tasks involve a choice between a prescribed menu of alternatives defined with unrealistic precision in what is essentially a 'one-shot game'. Where the laboratory attempts to explore the interactions between a succession of decisions and feedback from previous decisions in the sequence, there are dangers of over-structuring the decision-feedback-decision relationship. In particular, the practical realities of a laboratory experiment may tend to condense the time frame within which this process is allowed to operate, compared with the often more extended period within which interaction occurs in the naturalistic setting.


A. Bruce and J. Johnson

In contrast, within real world betting contexts, decision makers are often faced by tasks which are poorly structured, involving uncertain and dynamic events with action/feedback loops. It is not surprising that subjects' cognitive processes, which are effectively 'trained' in such real world contexts, become attuned to such tasks. Consequently, individuals develop strategies, such as changing hypotheses, which are designed to handle the often redundant and unreliable data associated with real world decision tasks (Anderson, 1990; Omodei and Wearing, 1995). These approaches can be functional in dynamic real world environments (Hogarth, 1980) but prove inappropriate when tackling the 'static' tasks provided in laboratory experiments, which often involve more reliable and diagnostic data. It is not surprising, therefore, that the consensus to emerge from experimental enquiry is one of poor quality decision-making resulting from a range of systematic biases. Subjects who are presented with misleading and artificial tasks in experiments, involving perhaps non-representative questions for which the cues they normally employ are invalid, are unsurprisingly likely to make errors. Those who point to the bias caused by the artificiality of the tasks presented in the laboratory highlight studies which demonstrate that small changes in experimental design can produce results that suggest good or poor judgement (Beach et al., 1987; Ayton and Wright, 1994). Eiser and van der Pligt (1988) summarise concern with the nature of the decision task experienced in laboratory experiments as follows:

experimental demonstrations of human 'irrationality' may depend to a large extent on the use of hypothetical problems that violate assumptions that people might reasonably make about apparently similar situations in everyday life.

In laboratory investigations subjects' risk taking and quality of decision-making are often assessed using tasks that have a pre-defined correct solution.
For example, in experiments designed to assess the accuracy of individuals' subjective probability estimates (calibration), typical general knowledge tests are often employed – where subjects may, for example, be asked to decide whether the Amazon or the Nile is the longer river and to assess the probability of their answer being correct. These effectively become tests of the individuals' assessments of the accuracy of their memories. However, in real world settings, particularly in a betting context, individuals are often required to make judgements about future events. Individuals appear to employ different cognitive processes when making judgements on memory accuracy compared with predictions about the future. These latter cognitive processes appear less subject to bias (e.g. Wright, 1982; Wright and Ayton, 1988). Consequently, laboratory experiments, which typically rely on tasks with pre-defined correct solutions, may significantly underestimate the ability of individuals to make judgements concerning the future in real world betting contexts. One of the advantages of laboratory investigations is the ability to isolate the effects of certain variables on the betting decision. However, to achieve this aim these studies are often confined to exploring the effects of a limited set of variables. The danger is that this oversimplifies the full richness and complexity of
the decision task faced in real betting environments. As a result the correlations observed may be spurious and miss the impact and interaction of unexpected variables on the bettor's decision. A related issue concerns measurability. Laboratory studies of betting behaviour often lack clear objective measures of the factors influencing betting decisions or of their consequences. Consequently, these experiments often rely on subjective measures of betting performance and of factors influencing the bettor's decisions, such as the degree of perceived risk. However, in real world betting environments, such as horse-racing tracks, the horse selected by the bettor can be compared with the winner of the race. This acts as an unequivocal, objective measure of performance. Similarly, the odds, the stake or type of bet selected (e.g. a 'single', which requires only one horse to win to collect a return, vs an 'accumulator', which requires several horses to win to be successful) act as objective measures of risk associated with the betting decision. Finally, in relation to the decision task, caution is urged in aggregating or comparing the results of various laboratory investigations, since these often employ a heterogeneous set of research designs. In contrast, the decision task in real world betting environments remains reasonably consistent and more confidence can be placed in aggregating results from this rather more homogeneous group of studies.

The laboratory subject

Concerns regarding material differences between decision tasks framed for the purposes of laboratory investigation and those that occur in the natural setting are mirrored by a concern that subjects taking decisions in laboratory experiments and those operating in the natural environment may be fundamentally dissimilar.
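As an aside, the objective risk measures mentioned above are easy to make concrete. The sketch below compares the payoff profiles of a 'single' and an 'accumulator'; the odds and stakes are hypothetical illustrations, not data from any study discussed here:

```python
# Sketch: objective risk measures for a 'single' vs an 'accumulator' bet.
# Odds are quoted UK-style as fractional odds (profit per unit staked);
# all figures below are invented for illustration.

def single_return(stake: float, odds: float, won: bool) -> float:
    """Return (stake plus profit) for a single bet; 0 if the horse loses."""
    return stake * (1 + odds) if won else 0.0

def accumulator_return(stake: float, legs: list[tuple[float, bool]]) -> float:
    """An accumulator rolls the full return of each leg onto the next;
    every leg must win for the bet to pay anything."""
    amount = stake
    for odds, won in legs:
        if not won:
            return 0.0
        amount *= 1 + odds
    return amount

# A 10-unit single at 2/1 returns 30 units if it wins.
print(single_return(10, 2.0, True))   # 30.0
# A 10-unit treble at 2/1, 3/1 and evens pays only if all three legs win.
print(accumulator_return(10, [(2.0, True), (3.0, True), (1.0, True)]))   # 240.0
print(accumulator_return(10, [(2.0, True), (3.0, False), (1.0, True)]))  # 0.0
```

The larger potential return of the accumulator is bought with a lower probability of any return at all, which is why the choice between the two bet types serves as an observable, objective proxy for risk appetite.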
The emphasis in this section is on the potential problems resulting from the fact that laboratory and naturalistic decision-making, respectively, are generally undertaken by individuals with different levels of expertise in the form of the decision task under scrutiny. Subjects employed in laboratory experiments are often asked to make judgements about issues outside their experience. There is an established tradition, in academic research into betting, of employing college students as subjects in laboratory studies, most of whom have little experience of betting in real world contexts, let alone expertise. Lack of expertise is likely to affect both the decision process employed and the quality of the resulting decisions. Interestingly, though perhaps unsurprisingly, whilst the majority of laboratory studies suggest that individuals' subjective probability judgements are not well calibrated (i.e. are not well correlated with corresponding objective probabilities), studies conducted in naturalistic environments have found that individuals with expertise in a particular domain are often well calibrated when making judgements associated with that domain (see, e.g. Hoerl and Fallin, 1974; Kabus, 1976; Murphy and Brown, 1985; Smith and Kida, 1991). These observed distinctions between novice and expert performance would appear to compromise the researcher's ability to generalise findings from
laboratory settings using novice subjects to real world contexts where decision makers are familiar with the decision task and environment. In particular, this suggests that the employment of naïve subjects in laboratory investigations of betting behaviour may produce misleading results. Further, the lack of familiarity of subjects with the decision task increases their susceptibility to misunderstanding or bias in the way that instructions are interpreted. Such problems of interpretation would be far less likely to apply to decision makers in their familiar natural setting. Experienced individuals might be expected to internalise, through observation, the validity of certain cues to make judgements associated with their particular task environment. This would, in general, allow them to make accurate judgements within their familiar decision domain, especially where decision tasks are repetitive in nature, where decision-relevant stimuli remain fairly constant or where probability assessments are involved (see, in this context, Phillips, 1987). A more general discussion of the interaction between expertise and form of task is offered in Shanteau (1992). Further, a number of studies demonstrate large differences between the decision strategies of experts and novices in terms of the way they think, the information employed, speed and accuracy of problem solving and the nature of decision models employed (e.g. Larkin et al., 1980). Crandall and Calderwood (1989) identify the particular ability of experts to interpret ambiguous cues, select and code information and understand causal models. Whilst the link between expertise and decision performance appears quite robust across a range of settings, it should be acknowledged that experienced decision makers in certain contexts still appear vulnerable to the use of heuristics, which result in decision biases (e.g. Northcraft and Neale, 1987; Smith and Kida, 1991). 
It is clear that, in general, expertise plays an important role in influencing decision quality. Laboratory experiments that employ naïve subjects are therefore unlikely to assess adequately the nature or quality of betting decisions made in real world contexts by 'experts' operating and receiving feedback in their familiar task domain. The section on 'Calibration in naturalistic betting markets' demonstrates, in the context of betting, and specifically in relation to calibration of subjective and objective probabilities, how real bettors in their natural environment generate levels of performance that are wholly inconsistent with laboratory predictions. Beyond the issue of expertise, a further concern with laboratory subjects relates to their necessary awareness that their behaviour in laboratory experiments is the subject of close scrutiny. Such scrutiny may, itself, materially affect that behaviour. Aspects of this problem, which is essentially a function of the interaction between the individual and the environment, are discussed more fully below. Finally, it is important to acknowledge that the individual subject is, in the context of participation in an experiment, likely to have a tightly-focused objective relating to the assigned decision task. The laboratory, as an abstraction from the subject's normal experience, is essentially a capsule that isolates the subject from the competing objectives and concerns which are present in the case of the naturalistic decision-maker. The multi-objective naturalistic decision-maker's behaviour
is likely, therefore, to be materially affected by the need to address conflict between objectives, resulting in tradeoffs and compromises.

The laboratory environment

The advantages of the laboratory setting in decision-making research are well established in terms of its ability to enable multiple experimentation under highly-controlled conditions. The essence of the naturalistic setting is, by contrast, its chaotic and uncontrolled character, which renders each decision episode unique in terms of the precise environmental conditions obtaining at the time the decision is made. This raises the immediate question: does the imposition of tight ecological control inevitably compromise the ability of laboratory simulations to shed useful insights into decision behaviour in the natural setting? If absence of control is a defining feature of the natural environment, is it disingenuous to contend that behaviour in closely prescribed settings reflects that which we observe in the field, in terms, inter alia, of motivation, decision characteristics or decision outcomes? Such fundamental reservations regarding laboratory investigation would curtail further discussion of its potential. It is, arguably, more fruitful to reflect in greater depth on the nature of the limitations that laboratory simulation of decision-making embodies. This section, therefore, considers three principal areas where the laboratory setting involves potentially damaging abstraction from naturally occurring conditions. This involves discussion of, respectively:

1 the laboratory as a consciously-scrutinised environment;
2 the oversimplification inherent in the laboratory setting; and
3 incentives and sanctions in laboratory and natural settings.

The laboratory as a consciously-scrutinised environment

The fact that the laboratory constitutes a consciously-scrutinised setting for investigating behaviour raises two forms of potentially distortive influence that might counsel caution in interpreting laboratory-generated results. First, there is the danger that subject awareness of scrutiny may materially influence behaviour. Subjects may, for example, be keen to project a particular image of themselves as decision-makers in general and gamblers in particular. This may, under scrutiny, lead to modifications to the structure of their decision processes, the manner in which they process information and/or the risk strategies and decisions that they adopt. Such 'observation effects' are a well-established concern. To a degree they may be mitigated by the particular design of the experiment: hence, for example, the real behavioural focus for scrutiny may be hidden if the experimenter contrives an alternative core decision problem. Subjects are, therefore, less sensitive to scrutiny of their behaviour in relation to the area that is genuinely under investigation. Of course, such diversionary tactics in experimental design may in themselves generate behaviour that is merely an artefact of the design, by encouraging subjects
to devote an inappropriate level of attention to the real problem under scrutiny. It is, of course, important to note that observation effects are not confined to the laboratory; naturally occurring behaviour may also be susceptible where scrutiny by investigators is evident. In many cases, however, the naturalistic research design can ensure that subjects are entirely unaware that their behaviour is under scrutiny and hence any distortive effects of observation can be ruled out. A related issue concerns the danger that the investigator may compromise the experimental 'purity' of the exercise via the way in which the laboratory setting is configured. This is, essentially, an extension of the argument that the task design may be over-influenced by consideration of the behavioural phenomena under investigation. There is a fine line, both at the task and environmental level, between an experiment that allows behavioural traits to be manifested against a neutral background and one that, consciously or otherwise, channels behaviour along particular lines anticipated by the investigator.

The oversimplification inherent in the laboratory setting

Apart from any biases which may be attributed to investigator or subject consciousness of their roles, the laboratory setting is constrained, from an operational point of view, in terms of the complexity of the designed environment. There are various layers to this argument. First, compared with the richness of many natural settings, the laboratory is necessarily limited to offering an environment that features only the basic or salient characteristics of the natural world: a significant degree of abstraction is inevitable. There is then a danger that, in attempting to isolate particular variables for scrutiny, investigators may inadvertently miss or modify critical variables that are influential in the natural setting.
Second, in a dynamic sense, laboratory characterisation of an evolving and uncontrolled real environment is, as noted above, a necessarily controlled process. To the extent that a degree of randomness may be designed into any investigation, the scope for random variation is limited by the parameters imposed by the experimental setting. Further, it should be acknowledged that the simple aggregation of individually identiﬁed relationships between pairs of variables in the laboratory in building an overall understanding of the decision process fails to capture the full richness of interdependence between variables. Hence, there may be a tendency to miss inﬂuences that are signiﬁcant in identifying the type of reasoning required in complex natural environments (Woods, 1998). The set of concerns discussed in this section highlights a general problem with the laboratory that, paradoxically, is frequently cited as a strength of this type of experimental approach; that is, the ability to isolate, manipulate and scrutinise the impact of a particular variable, whilst maintaining control over the wider environment. This neglects the fact that real decision environments are frequently characterised by simultaneous and unpredictable variation across a range of factors. Decisions are, therefore, invariably taken against a turbulent and chaotic background. Artiﬁcial isolation of individual variables denies the opportunity to observe interactive effects and (Cohen, 1993) runs the risk of, for example, amplifying the
significance of biases in the laboratory setting, which might not emerge against the 'noisier' background of the natural environment. As Eiser and van der Pligt (1988) note:

It is therefore precisely because many studies fail to simulate the natural context of judgement and action that 'errors' and 'biases' can be experimentally demonstrated with such relative ease.

The above points illustrate the general difficulties of capturing both static and dynamic naturalistic complexity in a synthetic environment. A further area of concern relates to the ability of the laboratory to replicate the particular protocols, customs and institutional peculiarities of real settings that are increasingly regarded as influential in explaining behaviour in real contexts. The work of the experimental economics school has been particularly influential in drawing attention to the importance of environmental idiosyncrasies in shaping behaviour in market contexts. Waller et al. (1999), for example, identify three forms of influence:

1 'institutional effects', the rules and conventions within which market activity takes place;
2 the nature of incentives; and
3 the existence of learning opportunities associated with information availability.

Clearly, from the standpoint of a laboratory investigator, these types of factors pose a particular challenge. An acknowledgement that speciﬁc details of setting or subtle nuances of process may materially affect outcomes imposes an additional burden on a medium of enquiry that, as noted above, must necessarily be limited to a relatively simple conﬁguration. A rather more fundamental concern, though, relates to the fact that the potentially inﬂuential aspects of institutional detail, convention, custom, process and protocol are each factors that emerge or evolve over time in the natural setting: they are, in other words, purely naturalistic phenomena in origin. As such, an attempt to transpose such factors into a laboratory environment may be regarded as wholly inappropriate. There is no obvious reason why factors that originate from, that serve to deﬁne and that are inﬂuential in, a particular naturalistic setting should carry a similar inﬂuence in a laboratory environment. Hence, the potential for laboratory-based work to further our understanding in this area might be regarded as highly limited. By contrast, in the particular context of UK horse-race betting, the coexistence of two distinct forms of betting market permits the naturalistic investigation of settings with signiﬁcantly differing institutional frameworks and differences in process and custom. The section on ‘Calibration in naturalistic betting markets’ demonstrates how naturalistic investigation is able to demonstrate the signiﬁcance of these factors in determining market outcomes.


Incentives and sanctions in laboratory settings

The material distinctions that exist between the nature of incentives and sanctions which characterise the laboratory vis-à-vis the natural environment constitute a further basis for circumspection in relation to the results of laboratory-based investigation. There are various aspects of this distinction that merit attention. First, there is the issue of participation, whereby subjects in laboratory simulations require positive incentives to take part in experiments. Incentives may take the form of, inter alia, payments, free entry into a prize draw or simply, in the case of cohorts of college students, for example, peer pressure. The important issue here is that subjects observed in the natural setting participate voluntarily in the activity under scrutiny. There would appear to be strong prima facie grounds for suggesting that those who participate voluntarily may be expected to behave quite differently from those whose participation requires incentives and who would not, ordinarily, have engaged in the activity under investigation. Most pertinently, perhaps, naturalistic subjects face a different incentives/sanctions structure in that, in the context of betting, they are investing their own resources with the associated prospect of material personal gain or loss. Laboratory subjects, by contrast, have no expectations of significant gain or loss associated with their participation. Any material 'rewards' provided tend to be trivial. Apart from any financial rewards or penalties, real bettors are likely to be subject to higher levels of arousal than those in laboratory simulations, with potentially material effects on behaviour. Brown (1988) observes that 'some form of arousal or excitement is a major, and possibly the major, reinforcer of gambling behaviour for regular gamblers'.
Clearly, even where there is an acknowledgement of the potential influence of arousal or stress in the natural setting, it is ethically problematic to submit laboratory subjects to influences that may be injurious to their health. Yates (1992), in questioning the ability of laboratory investigation to capture this feature of the natural environment, argues:

there is reason to suspect that the actual risk-taking behaviour observed in comfortable, low-stakes, laboratory settings differs in kind, not just in degree, from that which occurs in the often stressful, high-stakes, real-world context.

The discussion in the following section of the relative rates of calibration between subjective and objective probabilities in the laboratory and the naturalistic decision setting may be indicative of the relative potency of incentives and sanctions in the different contexts. This section has identified three areas of concern with aspects of the laboratory environment that might be expected to limit the usefulness of results derived in this type of setting. Together with the limitations relating to laboratory subjects and the specification of tasks in the laboratory, they invite the view that there are strong reasons for evaluating with caution the signals that emerge from laboratory enquiry vis-à-vis empirical, naturalistic research in relation to decision-making in general and betting in particular.


Calibration in naturalistic betting markets

The preceding section discussed a number of shortcomings of laboratory-based experiments in the understanding of decision processes and decision outcomes. This has important implications for the exploration of the behaviour of horse-race bettors in their naturalistic betting environments, either at the racetrack or in betting offices. It has been argued that differences in the nature of the real world betting task, the complexity and dynamic nature of the real world betting environment and the degree of expertise of seasoned bettors are likely to result in clear distinctions between results obtained from laboratory and naturalistic studies of betting behaviour. In order to illustrate such distinctions, this section will contrast the degree of calibration observed in subjective probability assessments in laboratory experiments with that observed in bets placed at the racetrack. Calibration describes the degree to which subjective and objective probability assessments are correlated. Its importance derives from its value as a key measure of decision quality and is reflected in its prominence as a research theme within the decision-making literature. In the context of betting, the issue of calibration is of particular importance since the success of betting decisions hinges on the quality of subjective probability assessments.

Calibration in laboratory studies

The clear conclusion that emerges from laboratory-based investigations is that individuals' subjective probability estimates are generally not well calibrated. Three main sources of underestimation have been observed. These are underestimation, respectively, of the subjective probability of events considered undesirable by the subject (e.g. Zakay, 1983), of events that are easy to discriminate (e.g. Suantak et al., 1996) and of tasks that have high base-rate probabilities (Ferrell, 1994).
Analogously, overestimation occurs for events considered desirable, for events that are hard to discriminate or for events to which low base-rate probabilities apply. These deviations from perfect calibration have been attributed to the limited cognitive capacity of decision makers, who rely on heuristics or rules of thumb to simplify the decision-relevant data associated with complex decision environments. These heuristics have been demonstrated to result in a range of systematic biases (e.g. Kahneman and Tversky, 1972; Cohen, 1993), leading, it is argued, to poor calibration.

Calibration of pari-mutuel bettors

To explore the extent to which these results were mirrored in real world contexts, the calibration of racetrack bettors was investigated (Johnson and Bruce, 2001). In particular, the staking behaviour of UK pari-mutuel horse-race bettors was examined for each of 19,396 horses in 2,109 races at forty-nine racetracks during 1996. It has been argued that the proportion of money placed on a given horse in a pari-mutuel market reflects the bettors' combined subjective view of its probability of success. If too little money were placed on a horse then the odds offered by the pari-mutuel operator would appear attractive and knowledgeable bettors would continue to bet such that the odds on a given horse reflect the market's best estimate of its true probability of winning (see Figlewski, 1979). To explore the degree of calibration, horses were grouped into categories based on the proportion of money bet on them in a race. This offered an indication of the bettors' subjective probability assessment concerning the horses' chances of success. The objective winning probability of horses in a particular category was calculated by dividing the total number of winners in that category by the total number of runners in that category over the period. Perfect calibration in a category would exist if the objective probability of a horse in that category winning matched its subjective probability of winning.

Table 17.1 Comparison of bettors' aggregate subjective probability judgements and horses' observed (objective) probability of success

Proportion of money staked on    Mean subjective    Mean objective         n
an individual horse in a race    probability        probability

0.0–0.1                          0.05               0.04              11,795
0.1–0.2                          0.14               0.13               4,590
0.2–0.3                          0.24               0.26               1,850
0.3–0.4                          0.34               0.32                 679
0.4–0.5                          0.44               0.47                 309
0.5–0.6                          0.54               0.68                 120
0.6–0.7                          0.64               0.82                  38
0.7–0.8                          0.73               0.69                  13
0.8–0.9                          0.83               1.00                   1
0.9–1.0                          0.97               1.00                   1
Total                                                                19,396

The results presented in Table 17.1 (see Johnson and Bruce, 2001) clearly indicate a close correspondence between objective and subjective probabilities and suggest that the staking patterns of bettors closely reflect horses' true probabilities of success. To formally test this observed effect a conditional logit model was employed (for a full derivation see Johnson and Bruce, 2001) to model the objective probability of horse i winning race j based on the bettors' subjective probability assessments. In particular, the following equation was developed relating the objective probability of horse i in race j, p_ij^o, in a race with n_j runners, to the subjective probability of that horse, p_ij^s (as per McFadden, 1974; Bacon-Shone et al., 1992):

p_ij^o = (p_ij^s)^β / Σ_{i=1}^{n_j} (p_ij^s)^β,   for i = 1, 2, . . . , n_j        (1)
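Equation (1) is simply a power-weighted renormalisation of the staked proportions within a race. A minimal sketch of the computation follows; the five-horse race and its staked proportions are invented for illustration and are not data from the study:

```python
# Sketch of equation (1): each horse's modelled objective win probability is
# its staked proportion raised to beta, renormalised over the race.
# beta = 1 corresponds to perfect calibration: staked proportions equal
# win probabilities.

def implied_objective_probs(subjective: list[float], beta: float) -> list[float]:
    """Map staked proportions p_ij^s in one race to modelled objective
    probabilities p_ij^o via equation (1)."""
    powered = [p ** beta for p in subjective]
    total = sum(powered)
    return [p / total for p in powered]

# Hypothetical five-horse race: proportions of the pool staked on each horse.
stakes = [0.40, 0.25, 0.15, 0.12, 0.08]
print(implied_objective_probs(stakes, beta=1.0))   # proportions returned unchanged (up to rounding)
print(implied_objective_probs(stakes, beta=1.08))  # beta > 1 mildly sharpens towards the favourite
```

A value of β above 1 shifts modelled probability towards heavily backed horses; a value below 1 flattens the distribution, so the estimate of β is a direct test of how well staking proportions track true win chances.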


The parameter ∃ is determined by maximising the joint probability of observing the results of the 2,109 races in the sample. In fact, the estimated value of ∃ was 1.0802, which was not signiﬁcantly different from 1. This implies, from equation (1), that the objective probability of horse i winning race j is not signiﬁcantly different from the proportion of money staked on horse i in race j ; that is, we observe almost perfect calibration amongst racetrack pari-mutuel bettors. This result is in sharp contrast to the generally poor calibration that is observed in laboratory studies. A number of reasons might be suggested for this. They are as follows: 1

2

3

The majority of laboratory studies employ naïve subjects with little taskspeciﬁc knowledge, and this may hinder achievement of good calibration. However, the majority of racetrack bettors have some experience of betting and this may aid their calibration, since previous research suggests that experienced bettors learn to take account of a wide range of race-speciﬁc factors (e.g. Neal, 1998). It appears that experience and domain-speciﬁc knowledge aids calibration. For example, good calibration has been observed in other naturalistic studies amongst experienced auditors (e.g. Smith and Kida, 1991), experienced weather forecasters (e.g. Murphy and Brown, 1985), in the prediction of R&D success by experienced managers and in the prediction of ﬁnishing position by experienced horse-race bettors (Hoerl and Fallin, 1974). However, experience alone does not guarantee good calibration, since some naturalistic studies have identiﬁed poor calibration; for example, amongst experienced estate agents (Northcraft and Neale, 1987) and physicians (e.g. Bennett, 1980). It appears that those experienced individuals not used to assessing uncertainty in terms of probabilities (e.g. estate agents, physicians) are less likely to be well calibrated (Ferrell, 1994), whereas those who routinely employ more probability concepts in their normal domain (e.g. weather forecasters) are more likely to be well calibrated. The latter description most aptly describes experienced horse-race bettors since an essential ingredient of betting is the assessment of value in a horse’s odds (which reﬂect its subjective probability of success). Horse-race bettors are clearly spurred by the prospect of ﬁnancial gains and by non-pecuniary beneﬁts (e.g. increased esteem amongst peer group) associated with a successful bet. This is unmistakably observed by witnessing the frenzy of excitement at the racetrack as the race reaches its climax! 
The incentives available to racetrack bettors may help to explain the accuracy of their subjective judgements since research suggests that calibration is improved when motivation exists for accurate judgements (e.g. Beach et al., 1987; Ashton, 1992). If incentives are given in laboratory experiments they are often small, offering little by way of real welfare implications for subjects – this is unlikely to aid good calibration. The naturalistic environment of the racetrack may aid calibration since bettors become aware of the types of data that should be employed: those which are irrelevant and the cues that are vital to their success. This is particularly
true in turbulent, fast-changing and complex environments such as betting markets. However, individuals who have developed skills in such naturalistic environments may not fare well in calibration experiments involving more static tasks with more reliable data, the type often conducted in the laboratory (e.g. McClelland and Bolger, 1994; Omodei and Wearing, 1995). In addition, research suggests that calibration is often better when individuals make predictions about the future (as is the case for horse-race bettors) than when they are required to assess the accuracy of their memory, which is typically required in laboratory calibration studies (e.g. Wright and Ayton, 1988). Bettors benefit from regular, unequivocal and timely feedback on their betting performance and this may aid them in appropriately adjusting their subsequent actions. Other groups that receive regular, timely feedback on their judgements (e.g. weather forecasters) have also been shown to be well calibrated (Murphy and Winkler, 1977). Poor calibration amongst physicians may be explained by the often long time-lags involved in receiving feedback on their judgements and the broad cross-section of conditions for which they are required to make judgements. Pari-mutuel bettors, on the other hand, repeatedly engage in a uniform activity, spread over a uniform and short time scale. Successive pari-mutuel betting markets are reasonably consistent in terms of information presentation and time frame. Bettors become familiar with the processes and rhythms of the market and receive regular, immediate feedback. It is likely that these conditions aid learning and improve calibration. The lack of regular feedback over the long term in laboratory studies may help to explain the poor calibration observed there.

Calibration of bettors in bookmaker markets

As noted above, in the UK two forms of betting market co-exist at racetracks: the pari-mutuel and the bookmaker markets. The main difference between these markets is that bettors can 'take a price' in bookmaker markets, whereby the odds offered by a bookmaker at the time the bet is struck will be the odds used to calculate returns if the horse wins. Consequently, returns to privileged information are insurable by 'taking a price'. In pari-mutuel markets, by contrast, other bettors who subsequently bet on the same horse can erode these returns. Given the close physical proximity of the parallel bookmaker and pari-mutuel markets at UK racetracks, it is interesting to compare the calibration of bettors' judgements in these markets and, in particular, to explore the impact of differences in the institutional characteristics of these markets on bettors' subjective judgements. Two models were developed using logistic regression (see Bruce and Johnson, 2000). One modelled the relationship between the starting price (in bookmaker markets) of a horse and its objective probability of success (based on the results of 2,109 races in 1996). The other modelled the relationship between the horse's pari-mutuel odds and its objective probability (as in the earlier study discussed above). Consequently, functions were developed to determine


[Figure 17.1 Predicted win probabilities. The figure plots ln(win probability) against ln(odds) for the bookmaker and Tote (pari-mutuel) markets, together with a reference line on which win probability = 1/(1 + odds).]

the objective probability of winning for horses with particular (a) bookmaker odds and (b) pari-mutuel odds. These functions are shown in Figure 17.1. The reference line in Figure 17.1 represents the situation where the odds perfectly reflect the horse's objective probability of success. For example, if horses with odds of 4/1 won 20 per cent of the races in which they ran (i.e. one in five), we could conclude that bettors' subjective judgements associated with such horses were perfectly calibrated. Consequently, the reference line in Figure 17.1 represents the situation where, for a horse at odds of a/b, the objective probability is given by 1/(1 + a/b). It is clear from Figure 17.1, as indicated above, that the judgements of pari-mutuel bettors are almost perfectly calibrated. However, in bookmaker markets there appears to be a strong tendency to overestimate the chance of outsiders and to marginally underestimate the chance of favourites – the so-called 'favourite–longshot' bias. For example, horses with odds of 50/1 actually win only 1 in 127 races, whereas horses with odds of 1/2 win seven of every ten races. These results are surprising given the close physical proximity of the pari-mutuel and bookmaker markets and the availability of computer screens at various locations displaying the latest odds in the two markets. This makes it relatively easy for bettors to compare odds in the two markets and to choose the market in which to bet. It is interesting to note that in the US, where no parallel bookmaker market exists, the subjective judgements of pari-mutuel bettors are not well calibrated, displaying the familiar favourite–longshot bias (Snyder, 1978). The presence of a parallel bookmaker market may help to improve the calibration


observed in pari-mutuel markets in the UK. In particular, the odds available in bookmaker markets, and their evolution, may provide a yardstick against which bettors can compare pari-mutuel odds. Those with privileged information are likely to bet with bookmakers, where their returns are insurable (by 'taking a price'). Consequently, the observation of price movements in these bookmaker markets may provide insights to pari-mutuel bettors concerning the existence of privileged information; their subsequent bets are likely to be more informed and hence better calibrated. It might be argued that bettors in bookmaker markets also have the opportunity to observe changes in bookmaker odds; however, if privileged insiders have already forced the odds to a position that reflects the horse's true chance of winning the race, subsequent 'follower' behaviour on the part of less-informed bettors will reduce the odds still further, leading to a starting price that is poorly calibrated. Pari-mutuel bettors also benefit from two further advantages over bettors in bookmaker markets:

1 Pari-mutuel markets operate in a more uniform and mechanical manner than bookmaker markets. The amounts staked and the prevailing odds in the pari-mutuel market are displayed and regularly updated on numerous computer screens at racetracks. Odds are determined solely by the relative amounts staked on each horse. In bookmaker markets, by contrast, the odds are determined partially by the relative amounts wagered on different horses but also by the opinions of the bookmakers themselves. Consequently, whilst market moves are readily identified in pari-mutuel markets, odds changes in bookmaker markets must be interpreted; they may, for example, represent a change in bookmakers' opinions or a 'false move' created by bookmakers to stimulate demand for certain horses. The uniformity of each pari-mutuel market enables bettors to become attuned to the market rhythm; it allows them to interpret market information and to focus more on the betting decision problem per se, resulting in better calibration than exists in bookmaker markets.
2 Bookmaker markets at racetracks are competitive, with a number of bookmakers competing for bettors' business. All bookmakers set their own odds, which reflect their opinion of each horse's chance of success, the relative weight of money on each horse and their desire to attract high betting turnover (ideally spread across all horses in the race). Consequently, bettors are faced with an array of odds in bookmaker markets and may spend considerable time searching for the best value available. This activity can distract them from the central decision task, with negative implications for calibration. Pari-mutuel bettors' calibration is unlikely to be adversely affected in this manner since they face a single odds value for each horse and can therefore focus directly on the task of selecting the horse on which to bet, without the distraction of searching for 'value'.
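The odds-to-probability arithmetic underlying this calibration comparison can be sketched in a few lines. The snippet below is purely illustrative (the helper name is my own); the empirical win rates for 50/1 and 1/2 horses are those quoted earlier in the chapter.

```python
def implied_prob(a: float, b: float) -> float:
    """Win probability implied by fractional odds a/b, i.e. 1 / (1 + a/b)."""
    return 1.0 / (1.0 + a / b)

# Perfect calibration: horses at 4/1 should win 20 per cent of their races.
print(implied_prob(4, 1))                 # 0.2

# Favourite-longshot bias in bookmaker starting prices:
# 50/1 shots win only 1 race in 127, well below the odds-implied chance ...
print(implied_prob(50, 1), 1 / 127)       # ~0.0196 vs ~0.0079
# ... while 1/2 favourites win 7 races in 10, above the implied 2/3.
print(implied_prob(1, 2), 7 / 10)         # ~0.667 vs 0.7
```

Binning actual race results by odds and comparing the empirical win rate in each bin with `implied_prob` is, in essence, what the logistic-regression curves in Figure 17.1 summarise.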

A further explanation for the differences in calibration observed in the two parallel markets is offered by Shin (1992, 1993) and Vaughan Williams and Paton


(1997). They suggest that bookmakers seek to protect themselves from informed bettors. It is argued that these bettors may have access to privileged information, which could suggest that a longshot has a significantly better chance of success than the odds indicate. Bets placed by privileged insiders on such longshots may have a major impact on bookmakers' profits. Consequently, Shin (1992, 1993) and Vaughan Williams and Paton (1997) argue (and provide evidence to support the view) that bookmakers artificially depress the odds on longshots to reduce the potential impact of privileged insiders. This would help to account for the poor correspondence between the subjective probabilities inherent in bookmakers' odds for longshots and the horses' objective probabilities of success. Whilst the extent to which bookmakers deliberately shorten odds is not clear, the existence of this practice suggests that the calibration of bettors in bookmaker markets is almost certainly significantly better than that indicated in Figure 17.1. In summary, investigation of calibration in real-world betting markets suggests that bettors' (certainly pari-mutuel bettors') subjective judgements are significantly better correlated with objective probabilities than those of subjects in laboratory experiments. In seeking to explain this discrepancy we identify the experience of racetrack bettors, their motivation for success, their familiarity with the environment and its attendant information cues, and the regular, unequivocal feedback they receive as key features that are often absent from laboratory experiments. Furthermore, in explaining the different degrees of calibration observed between bettors in the pari-mutuel and bookmaker markets, we highlight the importance of institutional features. Without naturalistic enquiry it would be difficult to predict in advance the influence that a market's structural characteristics, mode of information presentation and so on might have on calibration.

Conclusion

This chapter has advocated the increasing exploration of betting behaviour in naturalistic environments. A range of concerns has been identified with laboratory experiments, including the use of naïve subjects with little betting experience, the lack of appropriate incentives, the artificiality of the tasks presented to subjects and the sterility of the environments in which the betting tasks are performed. These concerns have been highlighted by exploring differences between the results of calibration studies conducted in the laboratory and those conducted in naturalistic betting environments. In particular, structural features of betting markets are identified which may be difficult to reproduce in the laboratory, but which appear to significantly influence behaviour. Clearly, in spite of the limitations discussed, laboratory investigation retains a number of features that allow it to contribute to an enriched understanding of betting behaviour. The contention of this chapter is not, therefore, that the laboratory should be abandoned; rather that the interests of the betting research agenda are best served by shifting the balance between laboratory-based and naturalistic research towards the latter, whilst acknowledging the complementary nature of the different approaches.


Notes

1 For many horseraces, bettors operating off-course in bookmaker markets have the option of nominating that their bet be settled at Starting Price (SP), Board Price or Early Price. SPs represent the odds available at the racetrack at the culmination of the betting market (the start of the race) and are the subject of independent adjudication. Board Prices are prices at the racetrack that change throughout the immediate pre-race period, depending on the relative weight of support for different selections. These evolving odds patterns are transmitted to betting offices during this pre-race or 'show' period. Early Prices are odds relating to future races that are offered by off-course bookmakers; they are generally available until a short period prior to the show period. Where bettors elect to have their bets settled according to Board or Early Prices, they are said to 'take' a price.
2 Betting tax, applicable to all off-course bookmaker market bets in the UK until its abolition in October 2001, was payable at the same rate (i.e. 9 per cent) either at the time of bet placement on the stake alone, or (in the event of a successful bet) on total returns.

References

Anderson, J. R. (1990). The Adaptive Character of Thought. Hillsdale, NJ: Erlbaum.
Ashton, R. H. (1992). 'Effects of justification and a mechanical aid on judgment performance'. Organizational Behavior and Human Decision Processes, 52, 292–306.
Ayton, P. and Wright, G. (1994). 'Subjective probability: What should we believe?' In G. Wright and P. Ayton (eds), Subjective Probability (pp. 163–183). Chichester: Wiley.
Baars, B. J. (1990). 'Eliciting predictable speech errors in the laboratory'. In V. Fromkin (ed.), Errors in Linguistic Performance: Slips of the Tongue, Ear, Pen and Hand. New York: Academic Press.
Bacon-Shone, J. H., Lo, V. S. Y. and Busche, K. (1992). Modelling Winning Probability. Research report, Department of Statistics, University of Hong Kong, 10.
Beach, L. R., Christensen-Szalanski, J. and Barnes, V. (1987). 'Assessing human judgement: Has it been done, can it be done, should it be done?' In G. Wright and P. Ayton (eds), Judgmental Forecasting (pp. 49–62). Chichester: Wiley.
Bennett, M. J. (1980). Heuristics and the Weighting of Base Rate Information in Diagnostic Tasks by Nurses. Unpublished doctoral dissertation, Monash University, Australia.
Brown, R. I. F. (1988). 'Arousal, reversal theory and subjective experience in the explanation of normal and addictive gambling'. International Journal of the Addictions, 21, 1001–1016.
Bruce, A. C. and Johnson, J. E. V. (1992). 'Toward an explanation of betting as a leisure pursuit'. Leisure Studies, 14, 201–218.
Bruce, A. C. and Johnson, J. E. V. (1996). 'Decision-making under risk: effect of complexity on performance'. Psychological Reports, 79, 67–76.
Bruce, A. C. and Johnson, J. E. V. (2000). 'Investigating the roots of the favourite–longshot bias: an analysis of decision making by supply- and demand-side agents'. Journal of Behavioral Decision Making, 13, 413–430.
Cohen, M. S. (1993). 'Three paradigms for viewing decision biases'. In G. A. Klein, J. Orasanu, R. Calderwood and C. E. Zsambok (eds), Decision Making in Action: Models and Methods (pp. 36–50). Norwood, NJ: Ablex.
Crandall, B. and Calderwood, R. (1989). Clinical Assessment Skills of Experienced Neonatal Intensive Care Nurses. Yellow Springs, OH: Klein Associates Inc.
Eiser, J. R. and van der Pligt, J. (1988). Attitudes and Decisions. London: Routledge.


Ferrell, W. R. (1994). 'Discrete subjective probabilities and decision analysis: elicitation, calibration and combination'. In G. Wright and P. Ayton (eds), Subjective Probability (pp. 410–451). Chichester: Wiley.
Figlewski, S. (1979). 'Subjective information and market efficiency in a betting market'. Journal of Political Economy, 87, 75–88.
Hey, J. D. (1991). Experiments in Economics. Oxford: Blackwell.
Hey, J. D. (1992). 'Experiments in economics – and psychology'. In S. E. G. Lee, P. Webley and B. M. Young (eds), New Directions in Economic Psychology – Theory, Experiment and Application. Aldershot: Edward Elgar.
Hoerl, A. E. and Fallin, H. K. (1974). 'Reliability of subjective evaluation in a high incentive situation'. Journal of the Royal Statistical Society, 137, 227–230.
Hogarth, R. M. (1980). Beyond Static Biases: Functional and Dysfunctional Aspects of Judgemental Heuristics. Chicago: University of Chicago, Graduate School of Business, Center for Decision Research.
Johnson, J. E. V. and Bruce, A. C. (1997). 'A probit model for estimating the effect of complexity on risk-taking'. Psychological Reports, 80, 763–772.
Johnson, J. E. V. and Bruce, A. C. (1998). 'Risk strategy under task complexity: A multivariate analysis of behaviour in a naturalistic setting'. Journal of Behavioral Decision Making, 11, 1–17.
Johnson, J. E. V. and Bruce, A. C. (2001). 'Calibration of subjective probability judgements in a naturalistic setting'. Organizational Behavior and Human Decision Processes, 85, 265–290.
Kabus, I. (1976). 'You can bank on uncertainty'. Harvard Business Review, May–June, 95–105.
Kahneman, D. and Tversky, A. (1972). 'Subjective probability: A judgement of representativeness'. Cognitive Psychology, 3, 430–454.
Keren, G. and Wagenaar, W. A. (1985). 'On the psychology of playing blackjack: Normative and descriptive considerations with implications for decision theory'. Journal of Experimental Psychology: General, 114(2), 133–158.
Larkin, J., McDermott, J., Simon, D. P. and Simon, H. A. (1980). 'Expert and novice performance in solving physics problems'. Science, 208, 1335–1342.
McClelland, A. G. R. and Bolger, F. (1994). 'The calibration of subjective probabilities: theories and models 1980–94'. In G. Wright and P. Ayton (eds), Subjective Probability (pp. 453–482). Chichester: Wiley.
McFadden, D. (1974). 'Conditional logit analysis of qualitative choice behaviour'. In P. Zarembka (ed.), Frontiers in Econometrics: Economic Theory and Mathematical Economics (pp. 105–142). New York: Academic Press.
Murphy, A. H. and Brown, B. G. (1985). 'A comparative evaluation of objective and subjective weather forecasts in the United States'. In G. Wright (ed.), Behavioral Decision Making (pp. 178–193). New York: Plenum.
Murphy, A. H. and Winkler, R. L. (1977). 'Can weather forecasters formulate reliable forecasts of precipitation and temperature?' National Weather Digest, 2, 2–9.
Neal, M. (1998). ' "You lucky punters!" A study of gambling in betting shops'. Sociology, 32, 581–600.
Northcraft, G. B. and Neale, M. A. (1987). 'Experts, amateurs and real estate: An anchoring-and-adjustment perspective on property pricing decisions'. Organizational Behavior and Human Decision Processes, 39, 84–97.


Omodei, M. M. and Wearing, A. J. (1995). 'Decision-making in complex dynamic settings – a theoretical model incorporating motivation, intention, affect and cognitive performance'. Sprache & Kognition, 14, 75–90.
Orasanu, J. and Connolly, T. (1993). 'The reinvention of decision making'. In G. A. Klein, J. Orasanu, R. Calderwood and C. E. Zsambok (eds), Decision Making in Action: Models and Methods. Norwood, NJ: Ablex.
Phillips, L. D. (1987). 'On the adequacy of judgmental forecasts'. In G. Wright and P. Ayton (eds), Judgmental Forecasting (pp. 11–30). Chichester: Wiley.
Shanteau, J. (1992). 'Competence in experts: the role of task characteristics'. Organizational Behavior and Human Decision Processes, 53, 252–266.
Shin, H. S. (1992). 'Prices of state-contingent claims with insider traders, and the favourite–longshot bias'. Economic Journal, 102, 426–435.
Shin, H. S. (1993). 'Measuring the incidence of insider trading in a market for state-contingent claims'. Economic Journal, 103, 1141–1153.
Smith, J. F. and Kida, T. (1991). 'Heuristics and biases: expertise and task realism in auditing'. Psychological Bulletin, 109, 472–485.
Smith, V. L. (1989). 'Theory, experiment and economics'. Journal of Economic Perspectives, 3, 151–169.
Snyder, W. (1978). 'Horse-racing: The efficient markets model'. Journal of Finance, 33, 1109–1118.
Suantak, L., Bolger, F. and Ferrell, W. R. (1996). 'The hard–easy effect in subjective probability calibration'. Organizational Behavior and Human Decision Processes, 67, 201–221.
Vaughan Williams, L. and Paton, D. (1997). 'Why is there a favourite–longshot bias in British racetrack betting markets?' Economic Journal, 107, 150–158.
Waller, W. S., Shapiro, B. and Sevcik, G. (1999). 'Do cost-based pricing biases persist in laboratory markets?' Accounting, Organizations and Society, 24, 717–739.
Woods, D. D. (1988). 'Coping with complexity: The psychology of human behaviour in complex systems'. In L. P. Goodstein, H. B. Anderson and S. E. Olsen (eds), Tasks, Errors and Mental Models. London: Taylor & Francis.
Wright, G. (1982). 'Changes in the realism and distribution of probability assessment as a function of question type'. Acta Psychologica, 52, 165–174.
Wright, G. and Ayton, P. (1988). 'Immediate and short-term judgmental forecasting: personologism, situationism or interactionism?' Personality and Individual Differences, 9, 109–120.
Yates, J. F. (ed.) (1992). Risk Taking Behaviour. Chichester: John Wiley.
Zakay, D. (1983). 'The relationship between the probability assessor and the outcomes of an event as a determiner of subjective probability'. Acta Psychologica, 53, 271–280.

18 The demand for gambling: A review
David Paton, Donald S. Siegel and Leighton Vaughan Williams

Introduction

A rapid global expansion in gambling turnover has heightened interest in identifying the 'optimal' level of regulation in the gambling industry. Key policy objectives in this regard include determining the ideal structure of gambling taxes, maximising the net social benefit of gambling and devising optimal responses to environmental changes, such as the growth of Internet gambling. Successful formulation of policy in these areas depends on the availability of good empirical evidence on demand characteristics and substitution patterns for various types of gambling activity. Policymakers in many countries are especially interested in assessing substitution effects for national and state lotteries, since they have become increasingly dependent on this source of revenue. They are also interested in determining how regulatory changes affect the demand for alcohol, tobacco, entertainment services and other consumer products that generate substantial tax revenue. The question of whether these products are substitutes or complements for gambling has important revenue implications when gambling regulations are modified. The purpose of this chapter is to review the available evidence on this topic. In assessing this evidence, we place significant weight on academic literature that has been subject to peer review. However, we also consider consultancy-based reports where we judge these particularly noteworthy. The following section begins with a discussion of forces that are likely to affect the demand for various gambling products. In the section 'Approaches to estimating gambling demand and substitution' we outline the standard methodological approach. In the section 'Review of the empirical literature' we provide a comprehensive review of work on this topic. We summarise our findings in the final section.

Substitutes and complements in gambling

A major competitive threat to firms in this industry comes from two demand-side factors: goods or services that constitute substitutes, and those that constitute complements. Economic theory predicts that a rise (decline) in the demand for complementary goods will increase (reduce) the demand for gambling. Close substitutes, on the other hand, can potentially reduce profitability by capturing market share and intensifying internal


rivalry. In particular, the introduction and expansion of lotteries in various countries may have reduced the demand for conventional gambling services. In the US, there is evidence of substitution between Indian casinos and lotteries (Siegel and Anders, 2001) and between riverboats and other businesses in the entertainment and amusement sector (Siegel and Anders, 1999). The proliferation of new products and services in the gambling industry (including the lottery), in conjunction with the rise of Internet gambling, increases the threat posed by substitutes. For example, the growth of offshore Internet betting had a significant impact on the recent decision by the UK Government to reduce betting taxation. However, only limited data are available on the price sensitivity of the demand for gambling as a whole, or for particular gambling activities. There are a number of reasons for this, most notably the difficulty of generating accurate estimates of price elasticity from existing data sources. In a number of countries, for example Australia, gambling was heavily restricted until recent years. There have also been significant changes in the range of gambling products and their relative market shares, but these have arguably been driven more by regulatory changes than by changes in price. Further, in many instances the effective price for the consumer is established via government regulation rather than by the actions of the market. For example, in the US the 'pay-out' rate on a state lottery is not established by market forces, but rather by the state legislature. However, economic theory suggests that most forms of gambling should be relatively insensitive to price, due to two factors:

1 Unlike the price of normal consumer goods, the price of gambling is not readily apparent to the buyer. Insofar as consumers are not aware of the 'true' price, or of changes in the price, they are likely to be less responsive to price changes than if they had full information. It is especially difficult for the consumer to determine the true price where pay-outs are infrequent or highly variable. One might also argue that the greater the probability of winning any particular bet, the more concerned gamblers will be about the odds, and hence the more responsive to tax or price changes.
2 There is some evidence of brand loyalty among gamblers to particular products (see, e.g. Monopolies and Mergers Commission, 1998), suggesting only limited substitution of one gambling form for another by consumers. The less substitutable a good is, in general, the less price responsive it is likely to be. For example, gambling machines have a significantly lower pay-out ratio (a higher price) than most casino table games, yet gambling machines remain very popular within casinos, indicating a lack of price-based substitution by these gamblers.
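The price comparison in the second point can be made concrete. A common definition, set out formally in the next section, treats the price of a unit gamble as one minus the expected pay-out; the pay-out ratios below are hypothetical values chosen only to illustrate the point, not data from the chapter.

```python
def gamble_price(payout_ratio: float) -> float:
    """Price of a unit gamble: expected loss per unit staked, 1 - pay-out ratio."""
    return 1.0 - payout_ratio

# Hypothetical pay-out ratios: machines return less per unit staked than tables.
machine_price = gamble_price(0.90)   # price of 0.10 per unit staked
table_price = gamble_price(0.985)    # price of 0.015 per unit staked

# Machines are several times more 'expensive' per unit staked, yet remain
# popular within casinos - the lack of price-based substitution noted above.
print(machine_price / table_price)
```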

It is also important to note that the overall (general) responsiveness of demand for a particular type of gambling activity can differ from its speciﬁc responsiveness as measured at any given price, or tax rate. In general, the higher the level of the price, or tax rate, the higher the price elasticity. Whatever the measurement difﬁculties, another potentially serious substitution threat is growth in the underground or shadow economy. In this context, we refer


to illegal gambling establishments, which do not pay taxes. Schneider (1998, 2000a,b) provides evidence of growth in the shadow economies of all OECD countries. In the UK, Schneider estimates that the percentage of GDP represented by the shadow economy rose from 9.6 per cent in 1989–1990 to 13 per cent in 1996–1997. Unfortunately, he cannot disaggregate these figures by type of activity, such as tobacco, alcohol, drugs, prostitution and gambling, so we cannot determine how much gambling activity has actually gone underground. More generally, though, Schneider attributes at least some of the rise in the shadow economy to increases in taxes on items such as alcohol and tobacco. It is important to note that alcohol and tobacco are often consumed while individuals are gambling; for some consumers, alcohol and tobacco are part of the gambling 'experience'. Many licensed premises have gambling and cigarette machines, and many individuals who frequent betting shops smoke on the premises. It is also worth noting that some of the same individuals or groups that smuggle alcohol and tobacco can potentially provide gambling services; that is certainly the case in the US. The bottom line is that higher taxes on alcohol and tobacco could also reduce the demand for gambling. Note that the notion of complementarity implies that a relaxation in gambling regulation (e.g. a reduction in taxes) may increase the demand for alcohol and tobacco. An interesting theoretical perspective is that gambling, alcohol consumption and smoking constitute three types of addictive behaviour, which can nonetheless be examined through the lens of rationality (see Becker and Murphy, 1988 and Becker et al., 1991, 1994).1 If our conjecture that gambling, smoking and drinking are net complements is true, there is a second potential threat to the profitability of the gambling industry – a decline in the demand for complementary goods.
Ultimately the question of whether gambling, alcohol and tobacco are indeed substitutes or complements is an empirical issue. Answering this question is the key to understanding the implications of regulatory changes on each of these commodities. Substitution and a decline in demand for (potentially) complementary goods may have quite serious impacts on the nature of gambling in the UK. An examination of recent economic trends indicates that the UK gambling industry is becoming more competitive (see Paton et al., 2001b, 2002). Recent proposals to liberalise regulation governing casinos and slot machine gambling in the UK will potentially have even more signiﬁcant impacts on the structure of the gambling industry. An appreciation of the direction and magnitude of price and substitution effects is crucial to understanding the impact of such changes. Thus, in the next two sections we provide a comprehensive review of the available empirical evidence in these areas.

Approaches to estimating gambling demand and substitution

The standard approach to estimating elasticity and substitution effects in the academic literature is to specify a demand equation such as the following:

Qit = a0 + a1 Pit + a2 Yt + Σj≠i βj Pjt + β′Zit + ut        (1)

250

D. Paton, D. S. Siegel and L. V. Williams

where Qi is a demand-based variable (such as turnover or tax revenue) for gambling sector i; Pi is the average price in gambling sector i; Y is income or a related variable; Z is a vector of other factors that affect demand in gambling sector i; u is a stochastic error or classical disturbance term; Pj is the average price in gambling sectors j that are rivals to i (j ≠ i); and the subscript t indicates a particular time period. We expect that a1 < 0, that is, an increase in price leads to a reduction in demand. The magnitude of a1 provides an estimate of the response of demand in sector i to a change in price. If the demand function is specified in logarithms, then a1 gives a direct estimate of the price elasticity. In this case, a1 < −1 implies that the product is price elastic and −1 < a1 < 0 implies the good is price inelastic. Similarly, a2 provides an estimate of the income elasticity of demand. If a2 > 0, then gambling in sector i can be considered a normal (rather than an inferior) good. If a2 > 1, then gambling is said to be a luxury good. Lastly, βj represents the cross-elasticity of demand in i with respect to the price of sector j: βj < 0 implies that sectors i and j are complements, whilst βj > 0 implies they are substitutes. The academic literature on the estimation of such models suggests two key methodological problems: the definition and measurement of prices, and the identification of the model. A common definition of the price of a unit gamble is one minus the expected value (see, e.g. Paton et al., 2001a). In a study of the demand for horse racing, Suits (1979) defines price as the pari-mutuel takeout rate, that is, the fraction of total wagers withheld by the state. With bookmaker betting, such data are generally not available.2 In this case, an alternative approach is to use changes in the tax rate as a proxy for changes in price (as used, e.g. by Europe Economics, 2000).
Several studies of lotteries (Vrooman, 1976; Vasche, 1985; Mikesell, 1987) also measure the price by the takeout rate. Gulley and Scott (1993), on the other hand, contend that the true price of lotteries should be based on a probabilistic calculation, related to the expected value of the bet. They calculated these 'pseudoprices' for lottery games in Massachusetts, Kentucky and Ohio and estimated demand equations. Obtaining enough information to compute prices for lotteries and (Native American) Indian casino games in the US (the focus of the analysis presented in Siegel and Anders, 2001) is impossible: the Native American tribes are not required to report the relevant data publicly and, furthermore, are reluctant to disclose any information about their casino operations. The second key methodological issue is the question of whether equation (1) is identified (econometrically). That is, the own-price and the prices of substitutes are all potentially endogenous to the quantity demanded, and estimation of equation (1) without taking account of this is likely to lead to biased estimates. A standard solution to this problem in the literature is the use of instrumental variables that do not enter into equation (1) but that are correlated with the endogenous variable. For example, Paton et al. (2001a) use tax rates to identify own-price in their betting equation. In the context of lotteries, it is common to use exogenous events such as rollovers or superdraws to identify the lottery price. These events increase the jackpot out

of proportion to the amount staked (see, e.g. Cook and Clotfelter, 1993; Gulley and Scott, 1993; Farrell et al., 1999; Farrell et al., 2000; Forrest et al., 2000a,b). In support of this methodology, Forrest et al. (2000b) ﬁnd evidence that participants in the UK National Lottery are able to efﬁciently process the information available to them. Speciﬁcally, they ﬁnd that players act as if they can, on average, forecast the level of sales for a given drawing. A complementary approach that has been used to identify substitution effects is to examine the impact on demand of speciﬁc events, such as a regulatory change permitting a rival form of gambling to operate. Examples of studies based on this approach include Anders et al. (1998) on the impact of Indian casinos, Siegel and Anders (2001) on substitution between lotteries and casinos, and Paton et al. (2001a) on the impact of the introduction of the National Lottery on betting demand in the UK.
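The identification logic can be illustrated with a deliberately minimal sketch: one endogenous regressor (price) and one instrument (an exogenous cost or tax shifter), in which case the instrumental-variables estimator reduces to a ratio of covariances. The data below are synthetic, and none of the studies cited estimate anything this simple:

```python
def iv_estimate(y, x, z):
    """Single-instrument IV estimator of dy/dx: cov(z, y) / cov(z, x)."""
    n = len(y)
    ybar, xbar, zbar = sum(y) / n, sum(x) / n, sum(z) / n
    num = sum((zi - zbar) * (yi - ybar) for zi, yi in zip(z, y))
    den = sum((zi - zbar) * (xi - xbar) for zi, xi in zip(z, x))
    return num / den

# Synthetic data: price x moves one-for-one with an exogenous tax-like
# instrument z, and demand y falls by 2 units per unit of price.
z = [0.0, 1.0, 2.0, 3.0]
x = [1.0, 2.0, 3.0, 4.0]
y = [10.0 - 2.0 * xi for xi in x]
beta = iv_estimate(y, x, z)  # recovers -2.0
```

Because the instrument affects demand only through price, the ratio isolates the causal price response even though price itself is endogenous.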

Review of the empirical literature

There have been several studies of the demand characteristics of gambling, the vast majority relating either to the US or the UK. The salient characteristics and key findings of the most important of these studies are summarised in Table 18.1. Our discussion of the evidence summarised in this table is organised as follows. First, we consider the evidence relating to own-price elasticity of demand for various gambling sectors. We then examine the more limited evidence relating to socio-economic factors – income, unemployment and so on. Finally, we discuss substitution among different gambling sectors and potential displacement of tax revenue.

Own-price elasticity of demand

Lotteries

One might expect, a priori, that lotteries – which are characterised by a low ticket-cost combined with a very low chance of winning – are likely to be highly insensitive to price across a broad range of prices. Indeed, it may be thought unlikely that lotteries could operate at their current levels under existing pay-out and tax rates if their demand were sensitive to price. This perception would, however, appear to contrast with the findings of some econometric studies. For instance, Farrell et al. (1999) find that the demand for the UK National Lottery is highly elastic. They report a short-run price elasticity that is close to unity (−1.05), but a long-run elasticity that exceeds unity (−1.55).3 This finding could be spurious, since it is based on data from the initial years of the UK National Lottery, when there was substantial media frenzy surrounding double rollovers (extremely large prizes).4 Studies based on data from subsequent years (Farrell et al., 2000; Forrest et al., 2000a,b, 2002) report elasticity close to unity. It is important to note that the magnitude of the elasticity has important policy implications. For instance, elasticity in excess of unity implies that the 'pricing' of lotteries

Table 18.1 Key empirical studies of demand characteristics and substitution effects for various types of gambling activity

Author(s) | Country | Type of gambling activity | Findings
Anders et al. (1998) | United States | Native American (Indian) casinos in the US state of Arizona | The establishment of Indian casinos destabilised the collection of sales tax revenue in Arizona
Anders and Siegel (1998) | United States | Native American (Indian) casinos in the US state of Arizona | The growth of Indian casinos is associated with the displacement of revenue from conventional establishments (which are subject to tax)
Siegel and Anders (1999) | United States | Riverboat casinos in the US state of Missouri | An expansion of riverboat casinos is associated with a decline in expenditure on other forms of entertainment and recreation
Siegel and Anders (2001) | United States | Lottery and Indian casino gambling in the US state of Arizona | An expansion in Indian casinos is associated with a decline in lottery revenues, especially for games offering big prizes
Paton et al. (2001a) | United Kingdom | Lottery and betting establishments in the UK | Introduction of National Lottery did not reduce conventional betting demand; strong evidence of substitution between the UK National Lottery and conventional betting establishments; demand for betting is elastic (estimates ranging from −1.19 to −1.25)
Suits (1979) | United States | Horse racing in the US | Demand for horse-race betting is moderately elastic (−1.59)
Thalheimer and Ali (1995) | United States | Horse racing in the US | Demand for horse-race betting is highly elastic (−2.85 to −3.09). The introduction of state lotteries reduced betting demand. Some evidence of price-induced substitution between betting and lotteries
Gulley and Scott (1993) | United States | Lottery in the US states of Massachusetts, Kentucky and Ohio | Demand for the lottery is moderately elastic (−1.15, −1.92 and −1.20, respectively)
Farrell et al. (1999) | United Kingdom | UK National Lottery | Short-run elasticity close to unity (−1.05); long-run elasticity exceeds unity (−1.55)
Farrell et al. (2000) | United Kingdom | UK National Lottery | The demand for the lottery has an elasticity that is close to unity (estimates range from −0.80 to −1.06)
Forrest et al. (2000a) | United Kingdom | UK National Lottery | The demand for the lottery has an elasticity that is close to unity (−1.03)
Forrest et al. (2002) | United Kingdom | UK National Lottery | The demand for the lottery has an elasticity that is close to unity (−1.04 and −0.88 for Wednesday and Saturday draws, respectively)
Europe Economics (1998, 1999, 2000) | United Kingdom | UK betting establishments | The demand for betting is relatively inelastic with respect to the betting tax (an estimate of −0.6 to −0.7)

is inconsistent with the stated goal of the regulator – revenue maximisation. That is, ofﬁcials could generate additional revenue by reducing the price or takeout rate and increasing the potential prize pool. Typical of the later results is that of Forrest et al. (2000a). They estimate the steady-state long-run price elasticity of demand for UK lottery tickets as −1.03, which is not statistically different from revenue-maximisation. Looking outside the UK, Clotfelter and Cook (1990) use cross-sectional data across states in the US and estimate an elasticity of sales with respect to the payout rate to be −2.55 for Lotto and −3.05 for ‘Numbers’ games. However, they admit that these estimates ‘are not very stable to alternative speciﬁcations’ (p. 114). In a study that is closer in execution to the UK based research reported above, Gulley and Scott (1993) use time series data and ﬁnd that the demand for lotteries in Massachusetts, Kentucky and Ohio was price elastic (−1.15, −1.92 and −1.20, respectively).5 A study conducted in 1997 by Business and Economic Research Limited (BERL), for the New Zealand Lotteries Commission, one of the few papers to estimate elasticity for a range of gambling sectors, estimated a price elasticity for New Zealand Lotto of −1.054, very close to the estimates for the UK. In sum, the evidence from the US suggests that price elasticity of the lottery is greater than one, while corresponding estimates from the UK are close to unity. The obvious explanation for this difference is that lotteries in the US tend to be operated by public institutions (individual states) whereas the UK National Lottery is privately run. In order to maximise government revenue, a state-run lottery should set prices so as to maximise proﬁts. To do this, the price should be such that marginal cost equals marginal revenue. Assuming that marginal cost is non-zero, then marginal revenue is also positive and elasticity is necessarily in excess of unity. 
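The link between pricing objective and estimated elasticity can be checked numerically. Assuming a stylised constant-elasticity demand curve Q = A·p^e (a textbook functional form, not one estimated in the studies above), revenue p·Q is flat in price exactly at unit elasticity:

```python
def revenue(price, scale=100.0, elasticity=-1.0):
    """Sales revenue p * Q(p) for constant-elasticity demand Q = scale * p**elasticity."""
    return price * scale * price ** elasticity

# At unit elasticity, revenue is invariant to price: marginal revenue is zero.
flat = [revenue(p, elasticity=-1.0) for p in (0.5, 1.0, 2.0)]  # all 100.0

# With inelastic demand (e = -0.5) a price rise raises revenue, so a
# profit-maximiser keeps raising price; with elastic demand (e = -1.5)
# a price rise lowers revenue.
rising = revenue(2.0, elasticity=-0.5) > revenue(1.0, elasticity=-0.5)
falling = revenue(2.0, elasticity=-1.5) < revenue(1.0, elasticity=-1.5)
```

This is why a revenue-maximising operator should be observed near unit elasticity, while a profit-maximiser facing positive marginal cost should sit on the elastic portion of the demand curve.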
In the UK, the Government taxes the National Lottery at a fixed proportion of revenue. To maximise tax receipts, price should be set so as to maximise sales revenue. This implies that marginal revenue is equal to zero which, in turn, implies unitary elasticity.6 Another possible contributory factor to the disparity in estimates is that there is a single national lottery in the UK, while numerous states in the US have lotteries. That is, there are more available substitutes in the US for lottery players. In fact, it is often quite easy for consumers to play the lottery in neighbouring states. This results in a situation where consumer demand is much more sensitive to price changes than in the UK. There are a number of possible explanations for the apparent difference between the econometric findings and the more qualitative assessment that demand for lotteries is likely to be insensitive to their price:

1 As mentioned above, a finding that demand for lotteries is sensitive at high prices – owing to current levels of taxes – does not mean that the demand is necessarily sensitive at lower prices and tax rates. In fact, faced with an inelastic demand curve, to maximise profits a producer will continue to raise prices until eventually demand becomes elastic. Elasticity increases because at high prices, substitutes may emerge that are not viable at lower prices.

2 Most quantitative studies estimate the responsiveness of demand to price using consumers' reaction to occasional big pay-outs, or superdraws, that are announced in advance and accompanied by advertising campaigns. It is unclear whether the consumer reaction to these occasional events is a good guide to how the demand for lotteries would change if tax reductions increased pay-outs on a permanent basis.

Gambling machines, casinos and betting

The evidence on the price elasticity of gambling machines, casinos and betting is more limited than that relating to lotteries. In part, this reflects the difficulties of obtaining price data for these sectors. Gambling machines may provide more feedback to the consumer on total returns than lotteries, in the sense that they are played repeatedly, and consumers will have some idea of the rate at which they lose. This in itself may imply that the demand for gambling machines is more price sensitive than that for lotteries. In fact, one of the few studies to provide direct evidence on own-price elasticity in this sector is the BERL (1997) paper referred to above, which estimated the elasticity of demand for gambling machines and casinos in New Zealand to be just −0.8 (i.e. somewhat inelastic to price). The earliest study of the demand for betting was Coate and Ross (1974), which examined the effect of off-track betting on racetrack wagering. However, owing to data deficiencies the authors were unable to provide an estimate of either the price or income elasticity of the demand for wagering. Other early studies include those of Gruen (1976), Suits (1979), Morgan and Vasche (1979, 1980, 1982) and Pescatrice (1980). These studies focus primarily on the price-elasticity of racetrack wagering demand in the US. They do not, however, consider substitute products such as state lotteries and spectator sports. Still, such work has provided elasticity estimates; Suits (1979), for example, estimates the demand for betting on horse racing to be quite elastic (−1.59). Thalheimer and Ali (1995) included substitute products such as a state lottery in the demand relationship specification. They found a particularly high own-price elasticity of demand for betting in their examination of pari-mutuel revenue at three racetracks using annual data for the period 1960–1987.
They conclude that the elasticity of turnover ('the handle') with respect to price (as measured by the takeout rate) at these racetracks is between −2.85 and −3.09. Unfortunately, this study makes no attempt to correct for nonstationarity of the variables. The reported values of the Durbin–Watson statistic indicate that this may be a problem, which would imply that the reported elasticities could be biased. In contrast, the 1997 BERL report estimated the price elasticity of demand for betting on racing in the pari-mutuel system of New Zealand as −0.720, significantly lower than the US estimates. BERL's findings suggest that the demand for betting on racing in New Zealand was less sensitive to price changes than that for gambling machines, casinos or lotteries. The relative importance of bookmakers and of off-course betting in the UK suggests that these results are unlikely to tell us much about betting demand in the UK. A series of industry-commissioned reports by Europe Economics

(1998, 1999, 2000) investigate the elasticity of betting turnover with respect to taxation rates (rather than total price) in the UK and in Ireland. They estimate this elasticity to be in the region of −0.6 to −0.7. Paton et al. (2001a) derive elasticity estimates using both taxation rates as a proxy for price and direct data on prices derived from bookmakers' takeout rates. Using monthly data between January 1990 and April 2000 inclusive, they find the elasticity of betting demand with respect to tax to be between −0.62 and −0.63 (confirming the Europe Economics estimates) and the price elasticity to lie between −1.19 and −1.25. The authors point out, however, that these estimates rely on only a limited number of changes to the tax rate and should be interpreted with caution. Specifically, policymakers should not rely on these findings to forecast the impact of larger changes in tax rates, since they are based on relatively small ones. The structural change in betting taxation implemented in the UK in October 2001 is likely to be much more informative about the nature of betting demand, but to date there is no academic evidence relating to this period. A related issue raised by Paton et al. (2001a) is whether the elasticity of demand is increasing over time as additional substitute products appear on the market. We are unaware of any evidence to date on this point.

Socio-economic factors

A number of studies examine the links between gambling expenditure (or growth in expenditure) and variables related to wider economic conditions, such as average earnings or unemployment. For Australia, Bird and McCrae (1994) showed that total gambling expenditure grew at an average of 15.5 per cent per year between 1973 and 1992, compared to a Consumer Price Index increase of 9.1 per cent. Betting on racing, however, increased at only 10.5 per cent.
Tuckwell (1984) isolated those factors that most strongly inﬂuenced the level of betting with the totalisator and bookmakers in Australia. The main inﬂuences on totalisator betting were the level of real wages, unemployment and lottery turnover. This suggests an association between the level of totalisator betting and the level of disposable income. For bookmakers, the ﬁndings were not as conclusive. Tuckwell found only a weak association between turnover and real wages. Bookmaker betting may be somewhat insulated from changes in the level of per capita disposable income by the higher preponderance of wealthy gamblers who use this betting medium. He also found a persistent decline in real per capita turnover over time. Thalheimer and Ali (1995) (introduced above) ﬁnd a strongly signiﬁcant positive effect of income on pari-mutuel betting in the US. The authors also present evidence suggesting that this relationship is non-linear. They ﬁnd the effect to be positive at low levels of income, but at higher levels, further increases in income are associated with reductions in the betting turnover. The authors attribute this quadratic relationship to the greater opportunity costs of attending racetracks in terms of lost income. An alternative explanation is that the correlation is spurious as discussed above. Certainly, Paton et al. (2001a) ﬁnd no evidence of such a
quadratic effect for the UK. They estimate that income (as measured by average earnings) has a significantly positive impact on betting demand. However, using a variety of specifications, they are unable to find any impact on lottery demand. Further, they find that the rate of unemployment has no additional impact on demand in either sector.

Substitution and revenue displacement

Lotteries and betting

The theoretical rationale for displacement is based on the economic principle of substitution, that is, money spent on gambling is money that could be spent on other goods and services. For example, the closest substitutes for sports betting are most likely to be other forms of gambling such as casinos, horse racing, bingo, Internet gambling and lotteries. Our discussion in the section on 'Substitutes and complements in gambling' suggests that the strength of substitution between the major forms of gambling has increased over time. In the US, state governments and operators of casinos (Native American tribes, riverboats, and casinos in Nevada and Atlantic City) have pursued aggressive marketing strategies to capture a larger share of the gambling market. In the UK, the establishment of the National Lottery in November 1994 and the introduction of various related games since then have posed an equal threat to the market share of betting. On the other hand, such trends may not only affect market shares; they may also increase the total market size for gambling. For example, Paton et al. (2001a) contend that the introduction of the National Lottery may have led to a climate in which gambling as a whole became more socially acceptable. Thus, it is possible that regulatory liberalisation in one sector may lead to both substitution and complementarities. This point is illustrated in Paton et al.
(2001a), who employ a series of structural stability tests to examine whether betting demand was significantly affected by the introduction of the National Lottery or any subsequent lottery game. Using a variety of econometric specifications, they conclude that, in fact, there was no significant impact on betting demand. In other words, although the lottery clearly captured market share from betting, this substitution effect was completely offset by the market expansion effect. The authors go on to demonstrate that, despite this expansion, once the lottery had been established, price changes did indeed induce significant substitution between sectors. The magnitude of the cross-price elasticity of betting demand with respect to the lottery price was estimated to be between +0.26 and +0.75, and the cross-price elasticity of lottery demand with respect to the betting price between +0.48 and +0.68. These findings are consistent with some recent evidence from the US. For example, Mobilia (1992) found that the existence of a state lottery led to a relatively small decrease in attendance at pari-mutuel tracks, but had no significant effect on real handle (gross revenues) per attendee. Similarly, Siegel and Anders (2001; discussed in more detail below) found no evidence of substitution between horse and dog racing and lotteries in Arizona. On the other hand, Thalheimer and Ali (1995)

report much stronger evidence of substitution between the state lottery and pari-mutuel betting. In particular, they estimated that over the period 1974–1987 the presence of the Ohio State Lottery resulted in a decrease in attendance-related revenue at the three pari-mutuel racetracks in the market area of 17.2 per cent, and a decline of 24 per cent in handle-related revenue.7

Casinos and lotteries

Anders et al. (1998) examined the impact of the introduction of Indian casino gambling on transaction privilege taxes (basically, sales taxes) in the US state of Arizona. This is a critical issue for policymakers in states with Indian casinos, since activity on Indian reservations, including casino gambling, is not subject to any state or federal taxes. The authors estimated the following time series regression:

LTPTt = β0 + β1 LEMPLt + β2 LRETAILt + ut   (1)

where LTPT is the logarithm of Transaction Privilege Taxes (basically, sales taxes); LEMPL is the logarithm of employment; LRETAIL is the logarithm of retail sales; u is a classical disturbance term; and the subscript t indexes month t. Brown–Durbin–Evans and Chow tests for structural stability of regression equations revealed that the expansion of Indian casinos induced a structural change (decline) in projected sales tax revenues. The authors also estimated regressions of the following form:

LTPTt = β0 + β1 LEMPLt + β2 LRETAILt + β3 CASINOt + ut   (2)

where CASINO is a dummy variable that is equal to one after the introduction of Indian casinos in June 1993, and zero otherwise. Consistent with the displacement hypothesis, they found that β3 is negative and statistically significant. A series of additional econometric tests revealed strong evidence of leakage from taxable sectors, such as restaurants and bars, to these non-taxable gambling establishments. The authors also argued that these displacement effects were currently being masked by strong economic growth and favourable demographic trends.8 Inevitably, a downturn in the local economy would force the state to take action to stem these leakages. This is exactly what transpired in the years since the paper was published. Another paper by Siegel and Anders (1999) examined revenue displacements from riverboat gambling in the US state of Missouri. Unlike Indian casinos, riverboats are subject to state taxation. Using county level data for the St Louis and Kansas City metropolitan areas (the two largest cities in Missouri), the authors estimated regressions of the following form:

LSALESTAXikt = βk0 + βi1 LSALESTAXjlt + βi2 LKCRIVt + βi3 LSTLRIVt + βi4 LOTHRIVt + βi5 YEARt + ut   (3)

where SALESTAX denotes sales taxes; KCRIV, STLRIV and OTHRIV are adjusted quarterly gross revenues generated by riverboats in Kansas City, St Louis and other parts of the state, respectively; i indexes five industries that could potentially experience the displacement effects; j indexes an industry classification (SIC 799) for miscellaneous forms of entertainment, which includes riverboat gambling; k denotes the eleven counties located within Kansas City and St Louis or within driving distance of the riverboats; l represents the (six) counties where riverboats are located; t is the time period (quarterly observations); the L prefix signifies that each variable is computed as a logarithmic change (from year to year); and u is a classical disturbance term. The authors found that in SIC 799, all of the coefficients on the riverboat variables (βi2, βi3 and βi4) are negative and significant. That is, the statistical evidence strongly suggests that an expansion in riverboat activity is associated with a decline in expenditures on other forms of entertainment and recreation. A third paper by Siegel and Anders (2001) finds strong evidence of revenue displacement from lotteries to casinos in the US state of Arizona. This is a major public policy concern in the US because states have become dependent on lotteries to fund educational programmes and other off-budget items. From the perspective of policymakers, lotteries are an attractive source of revenue because they are less painful and politically less risky than conventional tax increases. Using monthly data for the years 1993–1998, provided by Arizona officials, the authors estimate variants of the following equation:

Log LOTTt = α + δS Log NUMSLOTSt + δH Log HORSEt + δD Log DOGt + δy YEARt + ut   (4)

where LOTT denotes monthly lottery revenues, NUMSLOTS is the number of slot machines in Indian casinos, HORSE represents the racetrack handle, DOG is the greyhound track handle and YEAR is a dummy variable denoting the year. The authors use different lottery games, such as Lotto, Fantasy Five and Scratchers, as dependent variables. This enables them to assess the impact of Indian casinos on various types of lottery games.9 Note that δS can be interpreted as the elasticity of lottery revenues with respect to slot machines. The substitution hypothesis implies that δS < 0. Estimation of this parameter will help state policymakers predict the impact of casino expansion on state revenues. The evidence presented in the paper suggests that δS is indeed negative and statistically significant, that is, an expansion of slot machines is associated with a reduction in lottery revenues.10 As mentioned above, they did not find evidence of substitution between horse and dog racing and lotteries. The strongest displacement effects were found for the big-prize lottery games. Thus, the findings imply that, at least for Arizona, there is indeed a 'substitution effect'. Indeed, they found stronger evidence of substitution in Arizona than in Minnesota, where Steinnes (1998) reported that Indian casinos had a negative, but smaller, impact on the Minnesota lottery.
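The general shape of these displacement regressions (a log outcome regressed on a log covariate plus an event dummy, as in equation (2) above) can be sketched in pure Python. This is bare-bones OLS on synthetic data, not a re-implementation of the authors' estimation, which also involves structural stability tests and further controls:

```python
def ols3(y, x, d):
    """OLS coefficients for y = b0 + b1*x + b2*d, via the 3x3 normal equations."""
    n = len(y)
    cols = ([1.0] * n, x, d)  # design matrix columns: constant, x, dummy
    # Build X'X and X'y.
    A = [[sum(a * b for a, b in zip(ci, cj)) for cj in cols] for ci in cols]
    v = [sum(a * yi for a, yi in zip(ci, y)) for ci in cols]
    # Solve A b = v by Gaussian elimination with partial pivoting.
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[p], v[i], v[p] = A[p], A[i], v[p], v[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            A[r] = [arj - f * aij for arj, aij in zip(A[r], A[i])]
            v[r] -= f * v[i]
    b = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):  # back substitution
        b[i] = (v[i] - sum(A[i][j] * b[j] for j in range(i + 1, 3))) / A[i][i]
    return b

# Synthetic log revenue with a trend and a 5 per cent (-0.05 in logs)
# level drop once the casino dummy switches on.
x = [float(t) for t in range(8)]
d = [0.0] * 4 + [1.0] * 4
y = [0.2 + 0.1 * xi - 0.05 * di for xi, di in zip(x, d)]
b0, b1, b2 = ols3(y, x, d)  # approximately 0.2, 0.1 and -0.05
```

A significantly negative dummy coefficient is then read, as in the Arizona study, as evidence that the event displaced taxable activity.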

Further evidence of complementarities is provided by McMillen (1998), in a report commissioned by the New Zealand Casino Control Authority. The author argued that the introduction of casinos had not resulted in a reduction in spending on other forms of gambling, but had instead led to an expansion in total expenditure on gambling. In other words, the impact of casinos on the overall national gambling market was judged to be one of complementarity rather than substitution. The report further noted that casinos appeared to have been a catalyst for change in other forms of gambling. A survey of casino patrons conducted as part of the study found that, had the money not been spent on casino gambling, 37.5 per cent of respondents would have spent it on other forms of entertainment, 25.7 per cent on housing items, 8.7 per cent on other forms of gambling and 6 per cent would have saved it. Fifteen per cent of the respondents did not reply to the question.

In summary, there is strong evidence of positive cross-price elasticities across different forms of gambling. In other words, a decrease (an increase) in price in one sector will significantly decrease (increase) the demand in other sectors. However, the expansion of particular sectors due to a looser regulatory environment seems to have ambiguous effects. The extant literature suggests that this can have both a negative effect on existing forms of gambling due to reduced market share, and a positive effect due to market expansion. In the US, the introduction of Indian casinos seems to have had a net negative impact on lottery revenue. On the other hand, the introduction of the National Lottery in the UK probably had no overall impact on traditional betting.
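These cross-price estimates lend themselves to simple back-of-envelope predictions. Using the Paton et al. (2001a) range quoted above (+0.26 to +0.75 for the response of betting demand to the lottery price), a hypothetical 10 per cent cut in the effective lottery price would be predicted to reduce betting demand by roughly 2.6 to 7.5 per cent:

```python
def predicted_demand_change(cross_elasticity, pct_price_change):
    """Predicted % change in sector i demand from a % price change in rival sector j."""
    return cross_elasticity * pct_price_change

# Hypothetical 10% lottery price cut, under the +0.26 to +0.75 range:
low = predicted_demand_change(0.26, -10.0)   # about -2.6 per cent
high = predicted_demand_change(0.75, -10.0)  # -7.5 per cent
```

As the surrounding text stresses, such arithmetic only holds for price changes within the range of the underlying sample data.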

Conclusion

In this paper, we have presented a comprehensive review of empirical studies of the demand for gambling. Our purpose in this section is to briefly summarise some of the key stylised facts. The key conclusions are as follows:

1 The overwhelming majority of the evidence suggests that the long-run price-elasticity of demand for the UK lottery is close to unity, that is, the revenue-maximising level. Note that this result differs from the US findings, where authors typically find that the long-run price-elasticity of demand for state lotteries is greater than unity.

2 The disparity between the US and the UK is largely due to the fact that the private operators of the UK National Lottery set prices with the objective of maximising revenue, whereas in the US the public institutions that manage lotteries (individual state governments) set prices so as to maximise profits. Another likely determinant of the greater sensitivity of lottery demand to changes in price in the US is that while there is a single national lottery in the UK, most states in the US have lotteries. That is, there are more substitutes available in the US for lottery players.

3 Evidence on the price elasticity of other forms of betting is more mixed. Some studies indicate that the price elasticity of betting is fairly high, but this work is less authoritative than that pertaining to lotteries, due to more imprecise measures of 'true' prices. There is a strong need for research on the impact of the recent UK betting tax changes on the demand for betting.

4 There has been less systematic study of the income-elasticity of various forms of gambling, but the evidence tends to suggest that the elasticity of gambling with respect to income is positive, that is, gambling is a normal good.

5 There is mixed evidence on substitution effects between various forms of gambling, and between gambling and the availability of other leisure opportunities, although a number of studies have identified clear evidence of substitution between different leisure and gambling sectors.

6 There is contradictory evidence from the US and UK on the impact of a State or National Lottery on other forms of gambling, which may be related to the impact of regulatory changes. These changes do seem to have a significant market expansion impact (complementarities). At the same time, regulation and price changes tend to lead to significant substitution between sectors. Thus, the overall impact of liberalisation of casinos/machines in the UK is difficult to predict on current evidence.

7 There is a large amount of evidence from the US that expansion of the casino sector has a significant net negative impact on turnover and taxation revenues from state lotteries.

These conclusions need to be interpreted with some caution. In particular, own- and cross-price elasticity estimates are only relevant to the range of data contained within the sample. Thus, predicting the impact of a significant policy change that will have a major effect either on the price within a sector or on the competition it faces is extremely difficult. A further key unresolved issue is the question of the precise magnitudes of cross-price elasticities between gambling and its substitutes and complements, such as alcohol and tobacco.

Notes

1 See Orphanides and Zervos (1995) for an interesting extension of the Becker and Murphy model.
2 See Paton et al. (2001a) for an exception.
3 Farrell et al. (1999) also provide a test of Becker and Murphy's (1988) theory of rational addiction for lottery tickets. They find that lottery play is indeed characterised by addictive behaviour. Not surprisingly, however, gambling is found to be less physically addictive than other goods that may be physically addictive, such as cigarettes.
4 We are indebted to David Forrest for this observation.
5 The authors found that the demand for Massachusetts MegaBucks was inelastic (−0.19).
6 Again, we are grateful to David Forrest for clarification on this point.
7 See also earlier studies in the same vein by Simmons and Sharp (1987) and Gulley and Scott (1989).
8 Phoenix is one of the fastest growing cities in America and also has a large population of retirees. Three of the most profitable Arizona Indian casinos are Fort McDowell, Gila River and Ak-Chin, which are all located in the Phoenix metropolitan area. A fourth

The demand for gambling

261

casino offering table games has just opened on the Salt River reservation. A case before the Arizona Supreme Court will determine if they are also allowed to have slots. 9 Powerball, a very popular game, which was added in 1994, was not included in this dataset. 10 The actual level of displacement is difﬁcult to measure because of favourable economic and demographic factors that may have offset decreasing lottery sales.

References
Anders, Gary and Siegel, Donald (1998). 'An economic analysis of substitution between Indian casinos and the State Lottery', Gaming Law Review, 2(6): 609–613.
Anders, Gary, Siegel, Donald and Yacoub, Munther (1998). 'Does Indian casino gambling reduce state revenues? Evidence from Arizona', Contemporary Economic Policy, 16(3): 347–355.
Becker, Gary S. and Murphy, Kevin M. (1988). 'A theory of rational addiction', Journal of Political Economy, 96: 675–700.
Becker, Gary S., Grossman, Michael and Murphy, Kevin M. (1991). 'Rational addiction and the effect of price on consumption', American Economic Review, 81: 237–241.
Becker, Gary S., Grossman, Michael and Murphy, Kevin M. (1994). 'An empirical analysis of cigarette addiction', American Economic Review, 84: 396–418.
BERL (1997). 'Sensitivity analysis of gross win to price elasticities of demand'. In Responsible Gaming: A Commentary by the New Zealand Lotteries Commission on the Department of Internal Affairs' proposals for gaming and gambling. Contained in Gaming – A New Direction for New Zealand, and its Associated Impact Reports, New Zealand Lotteries Commission, Wellington.
Bird, Ron and McCrae, Michael (1994). 'The efficiency of racetrack betting markets: Australian evidence'. In Efficiency of Racetrack Betting Markets, Donald B. Hausch, Victor S. Y. Lo and William T. Ziemba (eds), London: Academic Press, pp. 575–582.
Clotfelter, Charles T. and Cook, Philip J. (1990). 'On the economics of state lotteries', Journal of Economic Perspectives, 4(4): 105–119.
Coate, D. and Ross, G. (1974). 'The effect of off-track betting in New York City on revenues to the city and state governments', National Tax Journal, 27: 63–69.
Cook, Philip J. and Clotfelter, Charles T. (1993). 'The peculiar scale economies of Lotto', American Economic Review, 83(3): 634–643.
Europe Economics (1998). 'The impact of the 1996 reduction in betting duty', A Report for Betting Offices Licensees Association, Ltd., November.
Europe Economics (1999). 'The potential impact of off-shore and Internet betting on government tax revenues', A Report for Betting Offices Licensees Association, Ltd., January.
Europe Economics (2000). 'The potential impact of off-shore and Internet betting on government tax revenues: an update to reflect new evidence', A Report for Betting Offices Licensees Association, Ltd.
Farrell, Lisa, Hartley, Roger, Lanot, Gauthier and Walker, Ian (2000). 'The demand for Lotto: the role of conscious selection', Journal of Business and Economic Statistics, 18(2): 226–241.
Farrell, Lisa, Morgenroth, Edgar and Walker, Ian (1999). 'A time series analysis of UK lottery sales: long and short run price elasticities', Oxford Bulletin of Economics and Statistics, 61(4): 513–526.

Forrest, David, Gulley, David O. and Simmons, Robert (2000a). 'Elasticity of demand for UK National Lottery tickets', National Tax Journal, 53(4), part 1: 853–864.
Forrest, David, Gulley, David O. and Simmons, Robert (2000b). 'Testing for rational expectations in the UK National Lottery', Applied Economics, 32: 315–326.
Forrest, David, Simmons, Robert and Chesters, Neil (2002). 'Buying a dream: alternative models of demand for Lotto', Economic Inquiry, 40(3): 485–496.
Gruen, A. (1976). 'An inquiry into the economics of racetrack gambling', Journal of Political Economy, 84: 169–177.
Gulley, O. David and Scott, Frank A. (1989). 'Lottery effects on pari-mutuel tax returns', National Tax Journal, 42: 89–93.
Gulley, O. David and Scott, Frank A. (1993). 'The demand for wagering on state-operated Lotto games', National Tax Journal, 46(1): 13–22.
McMillen, Jan (1998). Study on the Social and Economic Impacts of New Zealand Casinos, Australian Institute for Gambling Research.
Mikesell, John L. (1987). 'State lottery sales: separating the influence of markets and game structure', Journal of Policy Analysis and Management, 6: 251–253.
Mobilia, Pamela (1992). 'Trends in gambling: the pari-mutuel racing industry and effect of state lotteries, a new market definition', Journal of Cultural Economics, 16(2): 51–62.
Monopolies and Mergers Commission (1998). Ladbroke Group PLC and the Coral Betting Business: A Report on the Merger Situation, London: Monopolies and Mergers Commission.
Morgan, W. D. and Vasche, J. D. (1979). 'Horseracing demand, pari-mutuel taxation and state revenue potential', National Tax Journal, 32: 185–194.
Morgan, W. D. and Vasche, J. D. (1980). 'State revenue potential of pari-mutuel taxation: a comment', National Tax Journal, 33: 509–510.
Morgan, W. D. and Vasche, J. D. (1982). 'A note on the elasticity of demand for wagering', Applied Economics, 14: 469–474.
Orphanides, Athanasios and Zervos, David (1995). 'Rational addiction with learning and regret', Journal of Political Economy, 103: 739–758.
Paton, David, Siegel, Donald S. and Vaughan Williams, Leighton (2001a). 'A time series analysis of the demand for gambling in the United Kingdom', Nottingham University Business School Working Paper Series, 2001.II.
Paton, David, Siegel, Donald S. and Vaughan Williams, Leighton (2001b). 'Gambling taxation: a comment', Australian Economic Review, 34(4): 427–440.
Paton, David, Siegel, Donald S. and Vaughan Williams, Leighton (2002). 'A policy response to the e-commerce revolution: the case of betting taxation in the UK', Economic Journal, 112(480): 296–314.
Pescatrice, D. R. (1980). 'The inelastic demand for wagering', Applied Economics, 12: 1–10.
Schneider, Friedrich (1998). Further Empirical Results of the Size of the Shadow Economy of 17 OECD Countries. Paper presented at the 54th Congress of IIPF, Cordoba, Argentina and Discussion Paper, Economics Department, University of Linz, Austria.
Schneider, Friedrich and Enste, Dominik H. (2000a). 'Shadow economies: size, causes and consequences', Journal of Economic Literature, 38(1): 77–114.
Schneider, Friedrich (2000b). The Value Added of Underground Activities: Size and Measurement of the Shadow Economies and the Shadow Economy Labor Force all over the World, Discussion Paper, Economics Department, University of Linz, Austria.

Siegel, Donald S. and Anders, Gary (1999). 'Public policy and the displacement effects of casinos: a case study of riverboat gambling in Missouri', Journal of Gambling Studies, 15(2): 105–121.
Siegel, Donald S. and Anders, Gary (2001). 'The impact of Indian casinos on state lotteries: a case study of Arizona', Public Finance Review, 29(2): 139–147.
Simmons, S. A. and Sharp, R. (1987). 'State lottery effects on thoroughbred horse racing', Journal of Policy Analysis and Management, 6: 446–448.
Steinnes, Donald (1998). Have Indian Casinos Diminished Other Gambling in Minnesota? An Economic Assessment Based on Accessibility, Mimeo.
Suits, Daniel B. (1979). 'The elasticity of demand for gambling', Quarterly Journal of Economics, 93: 155–162.
Thalheimer, Richard and Ali, Mukhtar (1995). 'The demand for pari-mutuel horserace wagering and attendance with special reference to racing quality, and competition from state lottery and professional sports', Management Science, 45(1): 129–143.
Tuckwell, R. (1984). 'Determinants of betting turnover', Australian Journal of Management, December.
Vasche, Jon David (1985). 'Are lottery taxes too high?', Journal of Policy Analysis and Management, 4: 269–271.
Vrooman, David (1976). 'An economic analysis of the New York State Lottery', National Tax Journal, 29: 482–489.

Index

Adams, B. R. 63 Adjusted Time ratings 108 Alexander, C. 73 Ali, M. 43, 45, 53, 64, 254–6 Ali, M. M. 3, 30, 67 analysis of covariance 106 Anders, G. C. 206, 213, 248, 250, 256–8 Anderson, J. R. 230 arbitrage opportunities 82 Asch, P. 19, 30, 32, 43 Ashton, R. H. 239 asset pricing models 138 asset returns 138 attelé races 96, 103; versus monté races 101 Avery, C. 125 Ayton, P. 230, 240 Baars, B. J. 225 Bacon-Shone, J. H. 238 Barsky, S. 115 Beach, L. R. 239 Becker, G. S. 179, 190, 249 Becker–Murphy concept: of myopic addiction 190 Bennett, M. J. 239 Benter, B. 63–4 Benter, W. 108, 110 best case scenario 18 ‘best’ quotes 129–31 betting at the Tote 30–40 betting behaviour 224, 228; laboratory-based research 228–36; naturalistic research into 224–8 betting line 114 betting market efﬁciency 43–4; quantifying 44

betting markets 30, 45, 95, 195; role of turnover 45–50, 61; skewness 195 betting returns 43–4, 49; skewness 43 betting volume 48 betting with bookmakers 30–40, 254 bettor’s utility function 53 Beyer, A. 108 Bhagat, S. 146 bias 99–100; in the forecasts 151 Bird, R. 41, 255 Blackburn, P. 2 Bolger, F. 240 Bolton, R. N. 108, 110 bookmaker: betting 192, 250; betting market 39; handicaps 114, 124–5; markets 226, 240–2; odds 30, 38–40; returns 2–3, 6, 10 bookmakers 2, 32, 35, 82, 84, 87, 91, 121 breakage 43–4, 50, 61; costs 43–4, 51–2, 63 Brier probability score 98 British betting markets 31–2; efﬁciency and 32 British pari-mutuel (Tote) market 2 British racecourses 30; betting 31 Brohamer, T. 108 Brown, B. G. 231, 239 Brown, R. I. F. 236 Brown, S. J. 156 Bruce, A. C. 227–8, 237–8, 240 Bureau of Indian Affairs (BIA) 205 Busche, K. 30, 43, 45, 62, 81 ‘cafe-courses’ 95 Cain, M. 3, 15, 30–1, 33, 35, 41 Calderwood, R. 232 calibration index 99 California v. Cabazon 204

California v. Cabazon Band of Mission Indians 220 Carron, A. 115 casinos 206, 254, 257; ﬁscal impact of 206 Chapman, R. G. 108, 110 Chevalier, J. 125 city taxes: impacts on 218 Clarke, S. 115 Clotfelter, C. T. 169, 179, 184, 193, 199 Coate, D. 254 Cochran, W. 122 Cohen, M. S. 235, 237 ‘Collateral Form’ 108 Conditional Logistic Regression model see multinomial logit model Conlisk, J. 63, 192 Connolly, T. 228–9 constant-elasticity speciﬁcation 186 Cook, P. J. 169, 179, 184, 193, 199 Cornell, S. 208 corporate governance 146 Courneya, K. 115 covariance decompositions 95–6, 98 Cox, D. L. 193 Crafts, N. 30–32 Crafts, N. F. R. 68–73, 75 Craig, A. T. 63 Crandall, B. 232 Creigh-Tyte, S. 165 cross-price elasticity 260 cubic utility function 58 cubic utility model 53–9, 64 Curley, S. P. 98

Dare, W. H. 115, 148 Davidson, R. 64 DeBoer, L. 179 decision–cost theory 47–9 demand (bettors) 115 deregulatory reform 198 diminishing marginal returns 80 discrete-choice probit model 135 discrimination index 99 displacement 214–17; effects 257–8 dividends: under the new method 22 Dobson, S. 125 Dolbear, F. T. 30 ‘double rollover’ 188 Dowie, J. 30, 32 Drapkin, T. 108 Dunstan, R. 221

Durbin–Watson statistic 254 each-way bets 38–41 Eadington, W. R. 204, 218 economies of scale 184; in lottery markets 174 Efﬁcient Markets Hypothesis (EMH) 31, 41, 67; racetrack betting 67 Eiser, J. R. 235 elasticity: of betting turnover 255 English rugby league 115; matches 114 Erekson, O. H. 200 ‘exotic’ bets 32 expected operator loss 24, 27 expected payout 27 expected returns 3, 48, 63 expected utility 6–7 expected value (EV) 26 Fabricand, B. P. 43 Falein, H. K. 231, 239 Fama, E. 33 fancied horses 35 Farmer, A. 121–2 Farrell, L. 165, 167, 178–9, 183, 187–8, 191 favorite–underdog point spread 143 favourable bets 25–6 favourite–longshot anomaly 63 ‘favourite–longshot’ bias 2–8, 11–15, 43–4, 68, 70–3, 77, 81–2, 88, 91, 93, 96, 119, 241 favourite–underdog bias 114–15, 119–21, 124 Federal Insurance Contributions Act (FICA) 205 Felsenstein, D. 211 Ferrell, W. R. 239 Figlewski, S. 68, 238 football betting market 136–7 forecast errors 154–5, 157 forecast price 68 Forrest, D. 115, 125, 187–8, 197 Forsyth, R. 108 Francis, J. C. 195 Freedom of Information Act 207 French trotting 95 Friedman, M. 3 Gabriel, P. E. 2, 8, 10, 30–1, 33, 35, 37, 40 gambling 30, 247; demand characteristics 247; on horse racing 30; machines 254; substitutes and complements in 247–9

gambling markets: non-optimizing behavior 45 Gambling Review Report 175 Gandar, J. 115, 120, 126, 132 Garen, J. 179 Garicano, L. 115 Garrett, T. A. 172, 195 Gazel, R. 208 Goddard, J. 125 Golec, J. 3, 8, 44, 53–6, 64, 115, 120, 126, 135, 148, 172, 195 'Good Causes' tax 182–3, 200 Gray, P. 115, 120, 132, 135 Gray, S. 115, 120, 132, 135 Griffith, R. M. 2, 88 Gruen, A. 254 Gulley, O. D. 169, 183, 185, 250 Gulley–Scott model 183–5, 187 Gulley–Scott test 186 Haigh, J. 115, 167 Hall, C. D. 43, 62, 81 handicap betting 114, 127; markets 114–15, 124–6, 131–2 handicap betting market 129 handicapping 107 'harness racing' 95 Harrison, G. W. 47 Harris, P. 208 Harvey, G. 115 Hausch, D. B. 19, 43, 62–3, 67, 82 Henery, R. 115 Henery, R. J. 82 heterogeneity of information: in financial markets 80 heteroskedasticity-consistent estimated standard errors 50, 53 Hey, J. D. 227 high-stakes gaming 204 Hoerl, A. E. 231, 239 Hogan, T. 206 Hogarth, R. M. 230 Hogg, R. V. 63 home–away bias 114–15, 119–22, 124–6 home-field advantage 115 horse betting market 80 horse-race betting 106, 225; average expected returns 107; favourite–longshot bias 67; markets 67–8, 106–7 horse race: bettors 239; handicapping methods 106

Horse-race Totalisator Board (Tote) 18, 28, 33, 227 horses’ win probabilities 47–8 horse track betting 43 horse wagering: market inefﬁciency 43 Hurley, W. 50 IGRA 205, 207 incapacitating injuries 153–5 incremental optimisation 109 index betting 114–15, 123–4, 127; market 119, 122, 124–6, 129, 131 index ﬁrms 121, 124, 126–7, 130 Indian casino gambling: economic impact 204, 208 Indian casinos 208, 212–13; claims of positive impacts of 212; displacement effects 213; employee turnover in 212 Indian Gambling Regulatory Act (IGRA) 204 information: diminishing marginal value of 80, 85–8, 91; rising marginal value 80 information-driven arbitrage 83 injury spells 152–6 inside information 14–15, 31, 33, 36–7, 80, 92, 104; marginal impact of 92 insider trading 14 institutional background 117–19 Internal Revenue Service (IRS) 205 inter-state markets 87, 90–1 Irish Horse-racing Authority (IHA) 18, 28 Jaffe, J. F. 136 Japan Racing Association (JRA) 44, 49; tracks 54–5, 58 Jefferis, R. H. 146 jockeys 88 Johnson, J. E. V. 227–8, 237–8, 240 Kabus, I. 231 Kahneman, D. 4–5, 82, 237 Keren, G. 224 Kida, T. 231–2, 239 Kiefer, N. M. 162 Kimball, M. S. 64 Lacey, N. 115 Larkin, J. 232 Leven, C. L. 221 Lim, F. W. 178 Lo, V. S. Y. 30 lotteries 251, 257 lotteries and betting 256

‘lottery duty’ 182 lottery fatigue 165 lottery tickets 169, 178, 193; as a consumer good 193; expected value of 178; price elasticity of demand 169 Lotto demand 182–4; time-series modelling 182 Lotto play 8, 182 Lucky Dip 179 McClelland, A. G. R. 240 McCrae, M. 41, 255 McCririck, J. 41 McDonald, S. S. 115, 148 McDonough, L. 50 MacEachern, D. 219 McFadden, D. 238 MacKinnon, J. G. 64 McKinnon, S. 213 McMillen, J. 259 Malatesta, P. H. 146 Malkiel, B. G. 33 marginal value of information 81 market efﬁciency 44, 67 Markowitz, H. 4, 8 Markowitz utility function 5, 9 Marsden, J. R. 2, 8, 10, 30–3, 35, 37, 40 Mattern, H. 208 mean subjective probability 90 media tips: impact on prices 77 midweek draw 169 Mikesell, J. L. 168, 250 Minus pools 21–3, 28 Mobilia, P. 256 money laundering activities 205 monté 96; races 103 Moore, P. G. 200 Mordin, N. 108 Morgan, W. D. 254 multinomial logit model 108, 110 Murphy, A. H. 231, 239–40 Murphy, K. M. 179, 190, 249 Murphy’s decomposition 98 nagging injuries 152–4, 162; hazard rates for 153; hypothesis 156 National Football League (NFL) 115 National Indian Gaming Commission (NIGC) 205 National Lottery 177 National Lottery games 165 National Lottery scratch cards 174 National Association of Racing (NAR) 44; tracks 54–5, 58

naturalistic betting markets 237; calibration in 237–43 Neale, M. A. 232, 239 Neural Network models 108 New method 20–1 Norman, J. 115 Northcraft, G. B. 232, 239 objective probabilities 45 odds–arbitrage competition 64 off-course bettors 2, 87 Omodei, M. M. 230, 240 on-course bettors 87 on-line game 166 opening prices 33 opportunity cost 45, 47 optimal bets 48 optimisation algorithm 109 optimization theory 47 Orasanu, J. 228–9 Orphanides, A. 260 Osborne, E. 115, 120, 126 Osborne, M. J. 83 other tipsters only (OTO): classiﬁcation of racehorses 69, 77 out-of-state bettors 91 outsider bettors 81, 84, 87 overtime games 139, 141 over–under bets 135–7, 139 over–under betting: line 144; market 135; strategies 142 pace ratings 108 pacing races 95 pari-mutuel betting 18, 50; markets 47, 81, 240, 242; monopoly 95; and the place pool 18–19; role of breakage 50–3 pari-mutuel bettors 237, 240–3; calibration of 237–40 pari-mutuel operators 18, 20–2, 238 pari-mutuel systems 30–1 partial anticipation hypothesis 159 participation uncertainty 153–6 Paton, D. 62, 64, 67, 242–3, 249–50, 255 payout 22–3, 25 Peirson, J. 32 Pesaran, H. 193 Pescatrice 254 Phillips, L. D. 232 Phlips, L. 32 Pierson, J. 2 Pitzl, M. J. 219

place bets 32, 38 place pool 19 player injuries: and price responses 145 'pleasure' bettors 81, 121–2, 126, 131–2 PMH 95–6 PMU (Pari-Mutuel Urbain) outlets 95–6 point spread 115, 137, 148, 151, 157, 160; betting 135–6; bias 154; 'closing' 136; market-determined 136; and injury spells 151; and scores 148–51; wager 147 point spread betting market 146 point spread conditional on the player 154 potential operator loss 23–4 power utility 53; function 3–4; models 54–8 Pratt, J. W. 64 predicted place dividends 18, 21; the British/Irish method 18, 20–2 predicted place pay-outs 21 price elasticity 251, 253; of demand 259; of the lottery 253 price movements 69, 71, 75; analysis 71–2 probability score: and its decompositions 98 'professional' bettors 121–2, 131–2 Purfield, C. 8, 195

rollovers 167–8, 170, 179, 187, 250 Rose, A. 211 Rosett, R. N. 45, 64 Ross, G. 254 Rubinstein, A. 83 Sauer, R. D. 2, 43, 62, 120, 126, 148 Savage, L. 3 scale economies: of Lotto 197 Schlenker, B. 115 Schneider, F. 249 Schnytzer , A. 93 Schwartz, B. 115 Scoggins, J. F. 178 score differences ordering 148 Scott, D. 108 Scott, F. A. 169, 183, 185, 250 Seminole Tribe v. Butterworth 220 shadow economy 249 Shanteau, J. 232 Shaw, R. 93 Shilony, Y. 93 Shin, H. S. 14–15, 30, 37, 68, 242–3 Shin’s model 15, 38 Sidney, C. 41 Siegel, D. S. 213, 248, 250, 256–8 Simmons, R. 115, 125 Simon, J. 167 Singh, N. 80 skewed prize distribution 172 skewness 196; aversion 44; neutrality 53 skewness–preference hypothesis 44, 53 skewness–preference model 43 Smith, J. F. 231–2, 239 Smith, V. 45, 47 Smith, V. D. 227 Snedecor, G. 122 Snyder, W. 32, 72, 93, 241 Sobel, R. S. 172, 195 Sosik, J. 115, 120, 132 special one-off draws 172 sports betting markets 135 ‘spread bets’ 115 Sprowls, C. R. 178 standard method 19, 21: disadvantages 19–20; in the place pool 19 starting prices (SPs) 14–15, 32–6, 68–70, 72, 77; favourable 35; pay-outs 34 Steinnes, D. 258 Stern, R. 212 Stigler, G. J. 80 Stiglitz, J. E. 80 streak variables 135

Suantek, L. 237 subjective probabilities 45 substitution 249–51, 256; effect 258 Suits, D. B. 254 superdraws 168, 170, 190, 250 supply (bookmakers) 115 Swidler, S. 93 Tamarkin, M. 3, 8, 44, 53–4, 56, 64, 115, 120, 126, 135, 148, 172, 195 tax revenue displacement 206 Taylor, J. 206, 208 Taylor series approximation 53 Terrell, D. 121–2 Thaler, R. H. 30, 32, 62, 93, 115 Thalheimer, R. 254–6 the ‘Brier Score’ 98 the Gabriel and Marsden anomaly 2 The Gold Sheet 137 The Racing Rag 67, 70 The Sporting Life 78 The Super League 118 theoretical model of addiction 179 Thompson, R. 146 Thunderball game 170–1, 173, 199 ‘tiny utility’ model 63 tipster information: impact on bookmakers’ prices 67 total pool 20 Tote betting market 39 Tote odds 8, 30, 38 Tote payout 6–7, 34 Tote place pool 38 Tote returns 2, 6–10 touched prices 33 ‘track take’ 50–1 transaction costs 70, 73, 115, 130, 147–8, 161; mechanism 190 Transaction Privilege Tax (TPT) 213, 257 tribal–state compacts 204–5 true winning probabilities 15–16 Tuckwell, R. 255 Tuckwell, R. H. 32 Tversky, A. 4–5, 82, 237 UK Lotto 191; elasticity estimates 191–2 UK National Lottery 168, 175; halo effect 168 US National Gambling Impact Study Commission 165 US pari-mutuel system 3

US sports betting markets 122 utility 3 van der Plight, J. 235 Vasche, J. D. 250, 254 Vaughan Williams, L. 38, 62, 64, 67, 125, 242–3 Venezia 168 Vergin, R. 115, 120, 132 Victorian markets 87 volume of betting 50 Vrooman, D. H. 179, 250 Wagenaar, W. A. 224 Waldron, P. 8, 195 Walker, I. 178–9, 183, 188, 191–2, 195 Walker, J. M. 47 ‘walking wounded’ 152 Walls, W. D. 43, 45, 62 Wang, P. 208 Warner, J. B. 156 weak-form efﬁciency 68, 115, 119 Wearing, A. J. 230, 240 weight-equivalence ratings 108 Weitzman, M. 3 Wessberg, G. 188 White, H. 50, 53 win bet 32 Winkler, R. L. 136, 240 winning payout 32 winning probabilities 50–1, 81, 83, 88–9 Winsome and other tipsters (WAOT) 69, 71–2, 77; classiﬁcation of racehorses 69 Winsome only (WO) 69, 77; classiﬁcation of racehorses 69 Woodland, B. 115, 133 Woodland, L. 115, 133 Woodlands, B. M. 195 Woodlands, L. M. 195 Woods, D. D. 234 worst case scenario 23 Wright, G. 230, 240 Yates’ covariance decompositions 99–100 Yates, J. F. 98, 236 Young, J. 192, 195 Zackay, D. 237 Zeckhauser, R. J. 64 Zervos, D. 260 Ziemba, T. 30, 32 Ziemba, W. T. 19, 62–3, 93, 115 Zoellner, T. 219

© 2003 Leighton Vaughan Williams for selection and editorial matter; individual contributors their chapters

All rights reserved. No part of this book may be reprinted or reproduced or utilised in any form or by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying and recording, or in any information storage or retrieval system, without permission in writing from the publishers.

British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library

Library of Congress Cataloging in Publication Data
Vaughan-Williams, Leighton.
The economics of gambling / Leighton Vaughan-Williams.
p. cm.
Includes bibliographical references and index.
1. Gambling. 2. Gambling – Economic aspects. I. Title.
HV6710 .V38 2002
338.4′7795–dc21    2002031818

ISBN 0-203-98693-8 Master e-book ISBN

ISBN 0-415-26091-4 (Print Edition)

Contents

List of figures vii
List of tables ix
List of contributors xiii

1 Introduction 1
LEIGHTON VAUGHAN WILLIAMS

2 The favourite–longshot bias and the Gabriel and Marsden anomaly: an explanation based on utility theory 2
MICHAEL CAIN, DAVID PEEL AND DAVID LAW

3 Is the presence of inside traders necessary to give rise to a favorite–longshot bias? 14
ADI SCHNYTZER AND YUVAL SHILONY

4 Pari-mutuel place betting in Great Britain and Ireland: an extraordinary opportunity 18
DAVID JACKSON AND PATRICK WALDRON

5 Betting at British racecourses: a comparison of the efficiency of betting with bookmakers and at the Tote 30
JOHN PEIRSON AND PHILIP BLACKBURN

6 Breakage, turnover, and betting market efficiency: new evidence from Japanese horse tracks 43
W. DAVID WALLS AND KELLY BUSCHE

7 The impact of tipster information on bookmakers' prices in UK horse-race markets 67
MICHAEL A. SMITH

8 On the marginal impact of information and arbitrage 80
ADI SCHNYTZER, YUVAL SHILONY AND RICHARD THORNE

9 Covariance decompositions and betting markets: early insights using data from French trotting 95
JACK DOWIE

10 A competitive horse-race handicapping algorithm based on analysis of covariance 106
DAVID EDELMAN

11 Efficiency in the handicap and index betting markets for English rugby league 114
ROBERT SIMMONS, DAVID FORREST AND ANTHONY CURRAN

12 Efficiency of the over–under betting market for National Football League games 135
JOSEPH GOLEC AND MAURRY TAMARKIN

13 Player injuries and price responses in the point spread wagering market 145
RAYMOND D. SAUER

14 Is the UK National Lottery experiencing lottery fatigue? 165
STEPHEN CREIGH-TYTE AND LISA FARRELL

15 Time-series modelling of Lotto demand 182
DAVID FORREST

16 Reconsidering the economic impact of Indian casino gambling 204
GARY C. ANDERS

17 Investigating betting behaviour: a critical discussion of alternative methodological approaches 224
ALISTAIR BRUCE AND JOHNNIE JOHNSON

18 The demand for gambling: a review 247
DAVID PATON, DONALD S. SIEGEL AND LEIGHTON VAUGHAN WILLIAMS

Index 265

Figures

2.1 Range of the indifference map in (µ, p) space for Markowitz Utility Function 6
4.1 Potential operator loss as a function of the fraction of the pool (fmax) bet on the favourite 23
4.2 Expected operator loss for three values for the probability of the favourite being placed as a function of the fraction of the pool (fmax) bet on the favourite 24
8.1 The dynamics of the favourite–longshot bias 89
9.1 Vincennes winter meeting 1997/98 97
9.2 Winning proportion (y) against probability assigned (x) fifty-seven odds ranges 97
13.1 (A) Score differences; (B) point spreads; (C) forecast errors 149
13.2 Distribution of forecast errors PS − PSPLAY 158
14.1 National lottery on-line weekly ticket sales 166
14.2 The halo effect for the case of the UK National Lottery 168
14.3 Thunderball sales 171
14.4 Lottery Extra sales 173
14.5 Instants sales 174
16.1 A model of the Arizona economy with Indian casinos 211
17.1 Predicted win probabilities 241

Tables

2.1 Simulated Tote and bookmaker returns 9
2.2 Pari-mutuel and bookmaker pay-outs for winning bets (1978): cumulative 10
2.3 Mean bookmaker returns at starting price odds 10
2.4 Pari-mutuel and bookmaker pay-outs for winning bets (1978): non-cumulative 10
2.5 Estimated Tote pay-out moments 11
5.1 Average winning pay-outs per £1 bet 34
5.2 Average winning pay-outs per £1 bet 34
5.3 Opening prices, SPs and mean drift (expressed as percentage of drift from opening price) 36
5.4 Average drift (expressed as percentage of drift from opening price) 38
5.5 Average place pay-outs per £1 bet for all placed horses 39
5.6 Average pay-outs for £1 each-way bets on all placed horses (including win and place) 39
6.1 z-statistics for Japanese horse tracks ordered by turnover 46
6.2 z-statistics grouped by index of breakage 52
6.3 Distribution of returns for all Japanese races 55
6.4 Distribution of returns from all JRA races 56
6.5 Distribution of returns from all NAR races 57
6.6 Unconditional power and cubic utility function estimates 58
6.7 Returns for all Japanese races conditional on a heavy favorite 59
6.8 Returns from JRA races conditional on a heavy favorite 60
6.9 Returns from NAR races conditional on a heavy favorite 61
6.10 Utility function estimates conditional on a heavy favorite 62
7.1 Classification of racehorses by incidence of tips 69
7.2 Null and alternative hypotheses related to price movements 70
7.3 Mean price movements from early morning to SP, measured in percentage probabilities (max-early and mean-early baseline) 71
7.4 Significance of differences in means of horse categorised by tipping status 72
7.5 Returns to proportional stakes inclusive of transaction costs by tipping status in per cent 73
7.6 Comparison of rates of return in the current and Crafts datasets, by direction and magnitude of price movement 75
7.7 Returns to a level stake of £1 per bet, current dataset, by price movement and tip status 76
7.8 Significant price movers as a percentage of total runners in tip categories 77
8.1 Regression of mean win frequency against mean subjective probability 90
8.2 Basic statistics on the flow of useful information (per minute during the relevant time interval) 91
9.1 Decompositions of PMH and PMU probability scores 102
11.1A OLS estimation of actual points differences in handicap betting with twenty trials 123
11.1B OLS estimation of actual points differences in index betting 123
11.2 Example of index betting from a match with point spread (8–11) 127
11.3A Simulation results from handicap betting on all home teams 128
11.3B Simulation results from index betting on all home teams 128
11.4A Simulated win rates from betting on favourites or underdogs in the handicap betting market 129
11.4B Simulated returns from betting on favourites or underdogs in the index betting market 129
11.5A Simulated returns from various betting strategies applied to lowest handicaps 130
11.5B Simulated returns from various betting strategies applied to best quotes in the index market 131
12.1 Summary statistics for NFL point spread and over–under bets during the 1993–2000 seasons 137
12.2 Regression estimates for tests of market efficiency for NFL over–under bets during the 1993–2000 seasons 139
12.3 Market efficiency tests for NFL over–under bets during the 1993–2000 seasons adjusted for overtime games and point spread 140
12.4 Over–under betting strategies winning percentages. The profitability of over–under betting strategies for National Football League games over the 1993 through 2000 seasons, for combined totals and for individual years 142
12.5 Favorite–underdog point spread betting strategies using the over–under line. The profitability of point spread betting strategies for National Football League games over the 1993 through 2000 seasons 143
13.1 Score differences and point spreads for NBA games 150
13.2 Injury spell durations and hazard rates 153
13.3 Forecast errors of point spreads by game missed 159
14.1 Ways to win at Thunderball 171
14.2 Big Draw 2001 172
14.3 National Lottery stakes (£ million) 175
14.4 Trends in betting and gaming expenditure 176
14.5 Trends in betting and gaming expenditure relative to total consumer spending 177
15.1 Elasticity estimates in the UK Lotto 191
16.1 Arizona Indian tribal population and gaming capacity 209
16.2 Per capita slot machine revenue, unemployment rates, welfare and transfer payments for Arizona Indian reservations 210
16.3 Variables and sources of data 214
16.4 Results of state TPT regressions using quarterly data 215
16.5 Results of state TPT regressions using monthly data 217
16.6 Results of county TPT regressions 217
16.7 City hotel and bed taxes 219
16.8 Estimated impact of an increase in slot machines 220
17.1 Comparison of bettors' aggregate subjective probability judgements and horses' observed (objective) probability of success 238
18.1 Key empirical studies of demand characteristics and substitution effects for various types of gambling activity 252

Contributors

Gary C. Anders is Professor of Economics at Arizona State University West. He received his PhD from Notre Dame University. He has written extensively on the economic impact of casino gambling and Native American economic development.

Philip Blackburn is Senior Economist at Laing and Buisson, a leading health and social care analysis firm. He was previously an economist for the Office for National Statistics. After gaining his MA in Economics at the University of Kent in 1994, he researched into various racetrack betting markets.

Alistair Bruce is Deputy Director of Nottingham University Business School and Professor of Decision and Risk Analysis. He has published widely in economics, management and psychology journals in the area of decision making under uncertainty, with particular reference to horse-race betting markets.

Kelly Busche's research has been concentrated on the economics of horse-race betting. He is now retired and continues to work on betting.

Michael Cain is Reader in Management Science at the University of Wales, Bangor. He has published in a number of journals, including the Journal of the American Statistical Association, Journal of Risk and Uncertainty, Naval Research Logistics, the American Statistician, and Annals of the Institute of Statistical Mathematics.

Stephen Creigh-Tyte is Chief Economist at the Department for Culture, Media and Sport, and Visiting Professor in Economics in the Department of Economics and Finance at the University of Durham. He has authored over 100 books, articles and research papers, covering labour, small-business, cultural sector and gambling economics.

Anthony Curran is a recent graduate in Business Economics with Gambling Studies at the University of Salford. He is presently a freelance researcher.

Jack Dowie is a Health Economist and Decision Analyst, who recently took up the newly created chair in Health Impact Analysis at the London School of Hygiene

and Tropical Medicine. He retains a long-established interest in betting markets, and is actively involved in French trotting.

David Edelman is Associate Professor of Finance at the University of Wollongong (Australia). He has published widely in the areas of finance, data mining, and statistical theory, as well as on horse-race betting. He is an avid racegoer and jazz pianist.

Lisa Farrell completed her PhD, 'The economics of lotteries', in 1997. Lisa's research area is applied microeconomics, with a focus on lotteries and gambling. Her work spans the theoretical and microeconometric aspects of these issues. She is currently employed as a Senior Lecturer in the Department of Economics, University of Melbourne.

David Forrest is Lecturer in Economics at the University of Salford. He has published extensively in his fields of current interest, notably the economics of gambling, the economics of sport and valuation issues in cost–benefit analysis.

Joseph Golec is Associate Professor of Finance at the University of Connecticut. He has published on the efficiency of gambling markets, mutual fund compensation practices and healthcare services in various finance and economics journals.

David Jackson is a Research Fellow in Statistics at Trinity College, Dublin. His sports statistics publications include papers related to gambling and others investigating psychological momentum in contests that are decided by a series of trials.

Johnnie Johnson is Professor of Decision and Risk Analysis and Director of the Centre for Risk Research in the School of Management at the University of Southampton. His research focuses on risk perception, risk management and decision making under uncertainty, particularly in relation to decisions in betting markets.

David Law is Lecturer in Economics at the School of Business, University of Wales, Bangor. His research interests are in financial and gambling markets, and economic development. He has published articles in Economica, Journal of Forecasting and Journal of Risk and Uncertainty.

David Paton is Head of the Economics Division at Nottingham University Business School. He has published widely on subjects including betting markets, the economics of advertising and the economics of fertility. He is married to Paula and has three children, Stanley, Archie and Sadie.

David Peel is Professor of Economics at Cardiff Business School. His research interests are in macroeconomics, forecasting, nonlinear systems and gambling markets. He has published extensively in a variety of journals, including the American Economic Review, Journal of Political Economy, International Economic Review and the European Economic Review.

John Peirson is Director of the Energy Economics Research Group at the University of Kent. He has researched into various aspects of betting and the economics of uncertainty. He is particularly interested in the efficiency of different betting markets.

Raymond D. Sauer is Professor of Economics at Clemson University. His studies of wagering markets supplement his interest in the economics of regulation and industrial organization. His papers have appeared in numerous journals, including the American Economic Review, Journal of Finance, and Journal of Political Economy.

Adi Schnytzer is Professor of Economics at Bar Ilan University and has published widely in the areas of comparative economics, public choice and the economics of gambling.

Yuval Shilony is at the economics department of Bar Ilan University. His areas of research are economic theory, industrial organization, markets of contingent claims and the economics of insurance.

Donald S. Siegel is Professor of Economics at Rensselaer Polytechnic Institute. He received his bachelor's, master's, and doctoral degrees from Columbia University. He has taught at SUNY–Stony Brook, ASU, and the University of Nottingham, and served as an NBER Faculty Research Fellow and an ASA/NSF/BLS Senior Research Fellow.

Robert Simmons is Lecturer in Economics at the University of Salford. He has published widely on sports economics, the economics of gambling and labour economics.

Michael A. Smith is a Senior Lecturer in Economics at Bath Spa University College and has taught widely in higher education institutions in the UK. His research interests include the efficiency of fixed-odds horse-race betting markets, the operations of betting exchanges, and Internet betting media.

Maurry Tamarkin earned a PhD degree in Finance from Washington University in St Louis, USA. He is an Associate Professor at Clark University in Worcester, MA, USA. In addition to gambling, his research interests include discount rates and real options.

Richard Thorne is a biologist with a special interest in computer networks, the Internet and horse racing.

Leighton Vaughan Williams is Professor of Economics and Finance, Head of Economics Research, and Director of the Betting Research Unit at Nottingham Trent University. His research interests include risk, asymmetric information, and financial and betting markets. He has published extensively.

Patrick Waldron obtained his PhD in Finance from the Wharton School of the University of Pennsylvania and has been a Lecturer in Economics at Trinity

College Dublin since 1992. His research interests are mainly in the economics of betting markets, with particular emphasis on horse racing and lotteries.

W. David Walls is Associate Professor of Economics at the University of Calgary. He has also held positions at the University of California-Irvine and the University of Hong Kong. In addition to horsetrack betting, his research focuses on transportation economics, energy markets, and the motion picture industry.

1

Introduction

Leighton Vaughan Williams

When I was asked to consider putting together an edited collection of readings on the theme of the ‘Economics of Gambling’, I was both excited and hesitant. I was excited because the ﬁeld has grown so rapidly in recent years, and there is so much new material to draw upon. I was hesitant, however, because I knew that a book of this nature would not be truly satisfactory unless the papers included in it were new and hitherto unpublished. The pressures of time on academics have perhaps never been greater, and it was with this reservation in mind that I set out on the task of encouraging some of the leading experts in their ﬁelds to contribute to this venture. In the event, I need not have worried. The camaraderie of academics working on the various aspects of gambling research is well known to those involved in the ‘magic’ circle, but the generosity of those whom I approached surpassed even my high expectations. The result is a collection of readings which draws on expertise across the spectrum of gambling research, and across the global village. The papers are not only novel and original, but also set the subject within the existing framework of literature. As such, this book should serve as a valuable asset for those who are coming fresh to the subject, as well as for those who are more familiar with the subject matter. Topics covered include the efﬁciency of racetrack and sports betting markets, forecasting, lotteries, casinos, betting behaviour, as well as broad literature reviews. The twenty-nine contributors hail from nineteen academic institutions, as well as government service, from as far aﬁeld as the UK, USA, Australia, Canada, Israel and Ireland. In many cases, the contributions would, in my opinion, have gone on to be published in top-ranked journals, but the authors lent their support instead to the idea of a single volume that would help promote this ﬁeld of research to a wider audience. 
In all cases, the authors have provided papers which are valuable and important, and which contribute something significant to the burgeoning growth of interest in this area. It has been a joy to edit this book, and my deepest gratitude goes to all involved. Most of all, though, my thanks go to my wife, Julie, who continues to teach me that there is so much more to life than gambling.

2

The favourite–longshot bias and the Gabriel and Marsden anomaly
An explanation based on utility theory

Michael Cain, David Peel and David Law

Introduction Research on gambling markets has focused on the discovery and explanation of anomalies that appear to be inconsistent with the efﬁcient markets hypothesis; see Thaler and Ziemba (1988), Sauer (1998), and Vaughan Williams (1999) for comprehensive reviews of the salient literature. The best-known anomaly in the literature on horse-race gambling is the so-called ‘favourite–longshot bias’, where the return to bets on favourites exceeds that on longshots. This was ﬁrst identiﬁed by Grifﬁth (1949), and conﬁrmed in the overwhelming majority of later empirical studies; see below for some further discussion. A second apparent anomaly was reported by Gabriel and Marsden (1990 and 1991), who compared the returns to winning bets in the British pari-mutuel (Tote) market with those offered by bookmakers at starting prices. They reported the striking ﬁnding that Tote returns to winning bets during the 1978 British horseracing season were higher, on average, than those offered by bookmakers; even though, they suggested, both betting systems involved similar risks and the payoffs were widely reported. Consequently, they suggested that the British racetrack betting market is not efﬁcient. As noted by Sauer (1998) in his recent survey, the Gabriel and Marsden ﬁnding calls for explanation. That is one of the main purposes of this chapter. We will show that the relationship between Tote returns and bookmaker returns is more complicated than implied in the Gabriel and Marsden study. Whilst Tote pay-outs are higher than bookmakers for longshots, this is not the case for more favoured horses; also see Blackburn and Peirson (1995) for additional evidence consistent with this point. In addition, we argue that bets on the Tote are fundamentally different from bets with bookmakers since the bettor is uncertain of the pay-out. 
Whilst bettors have some limited information on the pattern of on-course Tote betting via Tote boards, off-course bettors have no such information and the pay-out is determined by the total amount bet. If Tote bettors did have full information on pay-outs, then the fact that the Tote paid out £2,100 on winning bets of £1 in the Johnnie Walker handicap race at Lingfield on 12 May 1978, whilst the bookmaker SP odds were only 16 to 1, would in itself invalidate the usual economists' notions of arbitrage processes and market efficiency. Assuming, then,


that the Tote pay-out is uncertain whilst bookmaker returns are essentially certain, expected returns will be equalised only if the representative punter is risk-neutral, an assumption implicit in Gabriel and Marsden, and in previous analyses of the relationship between Tote and bookmaker returns; see, for example, Cain et al. (2001). However, the assumption that the representative bettor is risk-neutral is not consistent with the stylised fact derived from empirical work on racetrack gambling, that there is a favourite–longshot bias; bets on longshots (low-probability bets) have low mean returns relative to bets on favourites, or high-probability bets. This has been documented by numerous authors for both the UK (bookmaker returns) and for the US pari-mutuel system (see, e.g., Weitzman, 1965; Dowie, 1976; Ali, 1977; Hausch et al., 1981; and Golec and Tamarkin, 1998). The standard explanation for this empirical finding has been that the representative punter is locally risk-loving; see, for example, Weitzman (1965) and Ali (1977). However, Golec and Tamarkin (1998) have recently shown for US pari-mutuel data that a cubic specification of the utility function, of the Friedman and Savage (1948) form, that admits all attitudes to risk over its range, provides a more parsimonious explanation of the data than a risk-loving power utility function with exponent greater than unity. We will show that, if the representative bettor is not everywhere risk-neutral, an explanation of both the observed relationship between Tote and bookmaker returns and the favourite–longshot bias can still be provided. This is the second main aim of the chapter.

Theoretical analysis

Utility and the favourite–longshot bias

It is assumed that the representative bettor has utility function U(·) and total wealth w. With odds against winning of o and win probability p, the expected pay-out to a unit bet is μ = p(1 + o) + (1 − p)·0 = p(1 + o), and hence o = (μ/p) − 1 = (μ − p)/p. If the punter stakes an amount s, the expected utility of return is

$$E = E(U) = pU(w + so) + (1-p)U(w-s) = pU\!\left(w + \frac{s(\mu-p)}{p}\right) + (1-p)U(w-s) \qquad (1)$$

The optimal stake for the punter is such that ∂E/∂s = 0 and ∂²E/∂s² < 0, so that

$$(\mu-p)\,U'\!\left(w + \frac{s(\mu-p)}{p}\right) = (1-p)\,U'(w-s) \qquad (2)$$

and s = s(μ, p; w) {if E > U(w)}. Substituting s = s(μ, p) into equation (1) gives expected utility, E, as a function of μ and p, and hence we may obtain an indifference map in (μ, p) space. It is thus possible to differentiate equation (1)


with respect to p and equate to zero in order to find the combinations of expected return, μ, and probability, p, between which the bettor is indifferent. This produces

$$\frac{dE}{dp} = U\!\left(w + \frac{s(\mu-p)}{p}\right) - U(w-s) - \frac{s\mu}{p}\,U'\!\left(w + \frac{s(\mu-p)}{p}\right) + s\,U'\!\left(w + \frac{s(\mu-p)}{p}\right)\frac{d\mu}{dp} - \left[(1-p)U'(w-s) - (\mu-p)U'\!\left(w + \frac{s(\mu-p)}{p}\right)\right]\frac{ds}{dp} = 0 \qquad (3)$$

and hence, in view of equation (2), equation (3) reduces to

$$\frac{d\mu}{dp} = \frac{\mu}{p} - \frac{1}{s}\cdot\frac{U(w + s(\mu-p)/p) - U(w-s)}{U'(w + s(\mu-p)/p)} \qquad (4)$$

so that

$$\frac{d\mu}{dp} = \frac{\mu}{p}\left(1 - \frac{A}{e}\right) - \frac{A(w-s)}{es} \qquad (5)$$

where

$$e = \frac{(w+so)\,U'(w+so)}{U(w+so)} \qquad\text{and}\qquad A = 1 - \frac{U(w-s)}{U(w+so)}$$

When w = 1 = s, the assumption made by Ali (1977) and Golec and Tamarkin (1998), equation (5) simplifies to

$$\frac{d\mu}{dp} = \frac{\mu}{p}\left(1 - \frac{A}{e}\right) \qquad (6)$$

If U(0) = 0, then equation (6) reduces to

$$\frac{d\mu}{dp} = \frac{\mu}{p}\left(1 - \frac{1}{e}\right) \qquad (7)$$

where e = e(X) = e(μ/p) is the elasticity of U(·) at X = μ/p. Observe from equation (7) that the slope of the equilibrium expected return–probability frontier will be positive (or negative) depending on whether the elasticity is greater than (or less than) unity. Clearly, with a power utility function which is everywhere risk-loving, the (μ, p) frontier will be everywhere upward sloping – the traditional favourite–longshot bias.
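For a concrete check of equation (7): with power utility U(x) = x^γ the elasticity is constant, e = γ, so dμ/dp = (μ/p)(1 − 1/γ) and the frontier has the closed form μ(p) = μ₀(p/p₀)^(1−1/γ). The sketch below (plain Python; γ = 1.2 and the starting point are arbitrary illustrative choices, not values from the text) integrates the ordinary differential equation and confirms the upward slope:

```python
# Euler integration of equation (7) for power utility U(x) = x**gamma,
# where the elasticity is constant, e = gamma, so d(mu)/dp = (mu/p)*(1 - 1/gamma).
# gamma = 1.2 and the starting point (p0, mu0) are arbitrary illustrative choices.

def frontier(gamma, p0, mu0, p1, steps=200_000):
    """Return (p, mu) pairs along the indifference frontier of equation (7)."""
    h = (p1 - p0) / steps
    p, mu = p0, mu0
    pts = [(p, mu)]
    for _ in range(steps):
        mu += h * (mu / p) * (1 - 1 / gamma)
        p += h
        pts.append((p, mu))
    return pts

gamma, p0, mu0 = 1.2, 0.01, 0.008
pts = frontier(gamma, p0, mu0, p1=0.9)
mu_exact = mu0 * (0.9 / p0) ** (1 - 1 / gamma)  # closed-form solution of the ODE
```

With γ < 1 (elasticity below unity) the same integration slopes downward, matching the sign condition stated after equation (7).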


A condition for the favourite–longshot bias is that dμ/dp > 0, in order that the mean return–probability relationship is not constant or declining throughout its range. It is perhaps surprising to find that this condition is consistent with a utility function that is not everywhere risk-loving over its range. As an illustration, consider the utility function proposed by Markowitz (1952), where agents are initially risk-loving for an increase in wealth above their customary or normal level of wealth, and then subsequently risk-averse. Conversely, for a decrease in wealth, they are initially risk-averse and then risk-loving. The Markowitz utility function is more general than that proposed by Kahneman and Tversky (1979), which is everywhere risk-averse for 'gains' and everywhere risk-loving for 'losses'. As a consequence, the Markowitz specification can explain the experimental observations set out in Kahneman and Tversky (1979). If we define the current level of wealth as w, and the level of utility associated with w as Ū, then the utility function

$$U = \bar{U} + h\left[1 - e^{-b(x-w)} - b(x-w)e^{-b(x-w)}\right] \qquad (8)$$

defines utility for increases in wealth above w, where x is wealth measured from w to ∞; h and b are positive constants. From equation (8) the marginal utility and the second derivative for an increase in wealth are given by

$$\frac{\partial U}{\partial x} = hb^2(x-w)\,e^{-b(x-w)} \qquad (9)$$

and

$$\frac{\partial^2 U}{\partial x^2} = hb^3\,e^{-b(x-w)}\left[\frac{1}{b} - (x-w)\right] \qquad (10)$$

From equation (9) the marginal utility of an increase in wealth is always positive, as required, and from equation (10) the agent is risk-loving when 1/b > x − w, and risk-averse when 1/b < x − w. Consequently, the utility function initially exhibits risk-loving behaviour and then risk-aversion for increases in wealth above current wealth. For a decrease in wealth below w, we define the utility function as

$$U = \bar{U} - k\left[1 - e^{-a(w-x)} - a(w-x)e^{-a(w-x)}\right] \qquad (11)$$

where x is measured from 0 to w, and k and a are positive constants. Then

$$\frac{\partial U}{\partial x} = ka^2(w-x)\,e^{-a(w-x)} \qquad (12)$$

and

$$\frac{\partial^2 U}{\partial x^2} = -ka^3\,e^{-a(w-x)}\left[\frac{1}{a} - (w-x)\right] \qquad (13)$$

From equations (12) and (13) we observe that the marginal utility of wealth is always positive, and that for decreases in wealth below current wealth, the function

initially exhibits risk-aversion (w − x < 1/a), then risk-loving behaviour (w − x > 1/a).

[Figure 2.1 Range of the indifference map in (μ, p) space for the Markowitz utility function.]

Employing equations (8) and (11) together, we have a mathematical form that accords with the Markowitz hypothesis. We consider the expected utility the agent derives from a gamble at odds o to 1, with a stake s of one unit, where the probability of the outcome occurring is p. The expected utility of this gamble is given by:

$$E = p\left[\bar{U} + h\left(1 - e^{-bo} - bo\,e^{-bo}\right)\right] + (1-p)\left[\bar{U} - k\left(1 - e^{-a} - a\,e^{-a}\right)\right] \qquad (14)$$

where E represents expected utility. In order to induce a person to gamble, expected utility has to be greater than or equal to the certain utility of not gambling, Ū. Consequently, in order for gambling to maximise utility we require that

$$ph\left(1 - e^{-bo} - bo\,e^{-bo}\right) - (1-p)k\left(1 - e^{-a} - a\,e^{-a}\right) \ge 0 \qquad (15)$$
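The equilibrium frontier implied by (15) can be traced numerically: at each p, solve the equality case for o by bisection (the utility gain from winning is strictly increasing in o) and set μ = p(1 + o). A minimal sketch under the parameter values used for Figure 2.1 (h = k = b = 1, a = 0.1); the function names are my own:

```python
import math

H, K, B, A = 1.0, 1.0, 1.0, 0.1   # parameter values used for Figure 2.1

def gain(o):
    # utility gain from winning a unit stake at odds o, from equation (8)
    return H * (1 - math.exp(-B * o) - B * o * math.exp(-B * o))

# disutility of losing the unit stake, from equation (11)
LOSS = K * (1 - math.exp(-A) - A * math.exp(-A))

def indifference_mu(p, lo=1e-12, hi=1000.0):
    """Solve p*gain(o) = (1 - p)*LOSS (equality in (15)) by bisection,
    then return the equilibrium expected return mu = p*(1 + o)."""
    target = (1 - p) * LOSS / p
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if gain(mid) < target:
            lo = mid
        else:
            hi = mid
    return p * (1 + 0.5 * (lo + hi))

probs = [0.01, 0.05, 0.1, 0.5, 0.9]
mus = [indifference_mu(p) for p in probs]
```

The computed μ rises with p over the whole range, which is the favourite–longshot bias exhibited by Figure 2.1 even though this utility function is not everywhere risk-loving.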

In Figure 2.1 we plot the indifference map in (μ, p) space for a range of probabilities observed in horse-racing markets, 0.007 ≤ p ≤ 0.9, where h = 1, k = 1, b = 1 and a = 0.1. We observe from Figure 2.1 that the favourite–longshot bias is consistent with a utility function that is not everywhere risk-loving.

Tote and bookmaker pay-outs

Because the Tote pay-out (return) is uncertain, the distribution of Tote returns has a different form than that of bookmaker returns. The mean, variance and skewness


of the return from a unit bet with a bookmaker at starting price odds of o = X − 1 to 1 are

Mean: $\mu = pX = p(1+o)$

Variance: $\sigma^2 = p(1-p)X^2 = \dfrac{\mu^2(1-p)}{p}$

Skewness: $\mu_3 = p(1-p)(1-2p)X^3 = \dfrac{\mu^3(1-p)(1-2p)}{p^2}$

If the (uncertain) Tote pay-out, T, to a winning unit bet on a horse with probability p of winning has mean E(T), variance V(T) and skewness S(T), the corresponding moments of the Tote return before the result of the race is known are:

Mean: $pE(T)$

Variance: $p(1-p)[E(T)]^2 + pV(T)$

Skewness: $p(1-p)(1-2p)[E(T)]^3 + 3p(1-p)E(T)V(T) + pS(T)$

The ratio of skewness to variance of return for the bet with a bookmaker (at starting price) is $X(1-2p) = \mu(1-2p)/p$, and the corresponding ratio for Tote returns is:

$$\frac{\text{Skewness}}{\text{Variance}} = (1-2p)E(T) + \frac{pS(T) + p(2-p)E(T)V(T)}{pV(T) + p(1-p)[E(T)]^2}$$

Consequently, assuming that the distribution of Tote returns to a winning bet exhibits positive skewness, the ratio of skewness to variance of return is always relatively higher for bets with the Tote than those with bookmakers, even if the mean Tote pay-out is the same as that at starting price. Clearly, this characteristic will be implicitly relevant when punters are choosing between bets with bookmakers and the Tote. Also, the perceived distribution assumed for Tote pay-outs will be relevant to the decision. For the representative bettor to be indifferent between a bet with bookmakers and one with the Tote, we require that the expected utility from the bet with a bookmaker at starting prices and that with the Tote are the same. As bookmaker odds are known and the Tote odds are uncertain, this implies that

$$pU(w + X - 1) = pE[U(w + T - 1)] \qquad (16)$$

When the bettor is risk-neutral, equation (16) reduces to X = E(T), and under the assumption that bettor expectations are rational, this yields the relationship

$$T = X + \varepsilon \qquad (17)$$

where ε is a random error with mean zero.
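These moment formulas are easy to check mechanically. The sketch below (the numerical values are my own illustrative choices) codes both sets of moments and verifies that, for a positively skewed pay-out distribution (S(T) > 0), the Tote bet's skewness-to-variance ratio exceeds the bookmaker's X(1 − 2p), even when the mean pay-outs coincide:

```python
def bookmaker_moments(p, X):
    """Moments of the return to a unit bet at certain odds o = X - 1."""
    mean = p * X
    var = p * (1 - p) * X ** 2
    skew = p * (1 - p) * (1 - 2 * p) * X ** 3
    return mean, var, skew

def tote_moments(p, ET, VT, ST):
    """Moments of the return to a unit Tote bet whose pay-out T is random,
    with mean ET, variance VT and third central moment (skewness) ST."""
    mean = p * ET
    var = p * (1 - p) * ET ** 2 + p * VT
    skew = (p * (1 - p) * (1 - 2 * p) * ET ** 3
            + 3 * p * (1 - p) * ET * VT
            + p * ST)
    return mean, var, skew

# Illustrative numbers: same mean pay-out, positively skewed Tote pay-out.
p, ET, VT, ST = 0.2, 4.0, 1.0, 2.0
mb, vb, sb = bookmaker_moments(p, ET)    # bookmaker paying X = ET for certain
mt, vt, st = tote_moments(p, ET, VT, ST)
```

The last assertion in the accompanying check reproduces, term by term, the decomposition of the Tote ratio displayed above.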


Gabriel and Marsden estimated the linear model that nests equation (17) and found a slope coefﬁcient signiﬁcantly greater than unity, and an intercept that was signiﬁcantly negative. Clearly, one interpretation of their results is that the market is not necessarily inefﬁcient, but rather that punters are not well-described by the riskneutral assumption. We note immediately that the assumption of risk-neutrality of the representative punter is inconsistent with the near universal empirical evidence for the favourite–longshot bias. If agents are everywhere risk-loving, Jensen’s inequality implies that E[U (w + T − 1)] > U (E[w + T − 1]), and with equation (16), this implies that E(T ) < X. If we assume that agents are risk-averse, Jensen’s inequality implies that E[U (w + T − 1)] < U (E[w + T − 1]), and hence, from equation (16), that E(T ) > X. The assumption that bettors are essentially everywhere risk-averse with utility functions such that (dµ/dp) > 0, would therefore be consistent with the favourite–longshot bias, and also with Tote odds exceeding bookmaker odds on average. However, the assumption of risk-aversion would not be consistent with starting price returns exceeding Tote returns for favourites, which may, in fact, be a feature of the data considered in the section on ‘Empirical results’. Reconciliation is possible if we assume that bettors exhibit risk-loving behaviour for favourites and riskaverse behaviour for relative longshots, so that the utility function has the shape envisaged by Markowitz (1952). In this regard, it is interesting that Golec and Tamarkin (1998) suggested that the favourite–longshot bias is consistent with the existence of risk-averse bettors exhibiting a preference for skewness on longshots.1 This is also a reason offered to explain Lotto play; see, for example, Purﬁeld and Waldron (1997).

A model for Tote odds

The Tote odds t = T − 1 can be regarded, for given p, as a non-negative positively skewed random variable, and hence can be modelled as Γ(k, λ), a Gamma random variable with shape parameter k and scale parameter λ. For this distribution the probability density function is

$$f(t) = \frac{e^{-\lambda t}\,\lambda^k\,t^{k-1}}{\Gamma(k)}, \qquad t > 0,$$

where Γ(·) is the Gamma function, and the first three moments are:

Mean: $E(t) = E(T) - 1 = \dfrac{k}{\lambda}$

Variance: $V(t) = V(T) = \dfrac{k}{\lambda^2}$

Skewness: $S(t) = S(T) = \dfrac{2k}{\lambda^3}$

from which it follows that

$$\frac{S(t)}{V(t)} = \frac{2V(t)}{E(t)} \qquad (18)$$
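Identity (18) is a standard property of the Gamma distribution and can be checked by simulation; the shape and rate parameters below are arbitrary illustrative values, not estimates from the chapter:

```python
import random

random.seed(42)
k, lam = 3.0, 0.5   # illustrative shape and rate parameters
# random.gammavariate takes (shape alpha, scale beta), so scale = 1/lambda
draws = [random.gammavariate(k, 1 / lam) for _ in range(200_000)]

n = len(draws)
mean = sum(draws) / n                            # estimates E(t) = k/lam
var = sum((t - mean) ** 2 for t in draws) / n    # estimates V(t) = k/lam**2
third = sum((t - mean) ** 3 for t in draws) / n  # estimates S(t) = 2k/lam**3
```

The ratio third/var should be close to 2·var/mean, as identity (18) requires.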


or equivalently,

$$\frac{S(T)}{V(T)} = \frac{2V(T)}{E(T) - 1}$$

Since the Tote deducts 16 per cent from the win pool in the UK, the mean Tote pay-out to a unit stake is 0.84 = p(1 + E(t)), and hence

$$E(T) = 1 + E(t) = 1 + \frac{0.84 - p}{p} = 1 + \frac{k}{\lambda}$$

For small p we would expect E(t) to be large, and hence k large and/or λ small. Thus, we might take λ = β/(0.84 − p) and k = β/p for some constant β, to be estimated. We require to solve equation (16) or, equivalently,

$$U(w + o) \equiv U(w + X - 1) = E[U(w + T - 1)] \equiv E[U(w + t)] \qquad (19)$$

for the particular utility function U, where o, t are odds and X, T pay-outs of bookmakers and Tote, respectively. For the Markowitz utility function of (8) and (11), we have that

$$E[U(w+t)] = \bar{U} + h\int_0^\infty \left(1 - e^{-bt} - bt\,e^{-bt}\right)\frac{e^{-\lambda t}\lambda^k t^{k-1}}{\Gamma(k)}\,dt = \bar{U} + h\left[1 - \left(1 + \frac{b}{\lambda}\right)^{-(k+1)}\left(1 + \frac{b(k+1)}{\lambda}\right)\right]$$

and equation (19) reduces to

$$(1 + bo)\,e^{-bo} = \left(1 + \frac{b(k+1)}{\lambda}\right)\left(1 + \frac{b}{\lambda}\right)^{-(k+1)} \qquad (20)$$

In general, equation (20) does not appear to be inconsistent with either o < k/λ, o = k/λ or o > k/λ, and which one of these occurs will depend critically on the values of the constant b of the utility function and of k, λ of the Tote odds distribution. In the particular case of equation (20) with b = 1, and with the parameter β = 0.3, so that λ = 0.3/(0.84 − p) and k = 0.3/p for various values of the underlying probability p, it is found that o > E(t) if p > 0.46 but E(t) > o if p < 0.46. For example, Table 2.1 gives the solution, o, of equation (20) in this case, for a range of values of E(t) = k/λ generated by a range of values of p. Thus, we have shown how mean Tote returns in excess of starting price returns for longshots are compatible with an expected utility approach. Next, we re-examine the Gabriel and Marsden data set.

Table 2.1 Simulated Tote and bookmaker returns

p       0.50    0.10    0.05    0.01
E(t)    0.68    7.4     15.8    83
o       0.72    4       8.3     40.4
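Equation (20) is straightforward to solve numerically. The sketch below (stdlib Python; the function names are my own) uses bisection, noting that (1 + bo)e^(−bo) is strictly decreasing in o > 0, and reproduces the pattern of Table 2.1: the certainty-equivalent bookmaker odds o fall short of the mean Tote odds E(t) = k/λ for longshots but exceed them for strong favourites:

```python
import math

BETA, B = 0.3, 1.0   # beta and b as in the illustration in the text

def tote_params(p):
    """Gamma parameters implied by the 16 per cent take: E(t) = k/lam = (0.84 - p)/p."""
    return BETA / p, BETA / (0.84 - p)          # (k, lam)

def bookmaker_odds(p, lo=1e-9, hi=200.0):
    """Solve equation (20) for the bookmaker odds o leaving the Markowitz
    bettor indifferent to the Tote bet; (1 + b*o)*exp(-b*o) decreases in o."""
    k, lam = tote_params(p)
    rhs = (1 + B * (k + 1) / lam) * (1 + B / lam) ** (-(k + 1))
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if (1 + B * mid) * math.exp(-B * mid) > rhs:
            lo = mid    # solution lies at larger o
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For p = 0.05 this yields o close to the 8.3 of Table 2.1, below E(t) = 15.8; for p = 0.5, o ≈ 0.72 exceeds E(t) = 0.68, matching the crossover near p = 0.46 reported in the text.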


Empirical results

The data set used for comparison of Tote and bookmaker returns consists of 1,429 races from the 1978 racing season, and differs slightly from that employed by Gabriel and Marsden in our omission of races in Ireland, and a small number of races where horses were withdrawn whilst under starter's orders. The data were obtained from the Raceform Up-to-Date Form Book 1978 Flat Annual. Table 2.2 reports some summary statistics for the relative returns to winning bets in the form reported by Gabriel and Marsden. It appears from the table that Tote returns are higher than starting price returns, though Gabriel and Marsden do not report relative returns for more favoured horses. Columns five and six of Table 2.2 show that our data set has the same qualitative features as that employed by Gabriel and Marsden (GM).

Table 2.2 Pari-mutuel and bookmaker pay-outs for winning bets (1978): cumulative

Odds range   Number of      Pari-mutuel   Return at            (T − X)/X   GM (T − X)/X
of winner    observations   return (T)    bookmaker odds (X)   %           %
o < 10       1,208          3.831         3.531                6.9         8.9
o < 15       1,347          5.599         4.353                13.8        19.0
o < 20       1,375          7.445         4.591                23.8        26.6
All          1,429          8.652         5.313                25.5        28.7

In Table 2.3, we report some summary statistics for returns from bookmakers obtained when betting all horses in a given range of odds.

Table 2.3 Mean bookmaker returns at starting price odds

Odds (s) range   Number of observations   Mean return (μ)

It is clear that these

$w_i > w_j \Leftrightarrow \pi_i/\pi_j < w_i/w_j$, because from the regression of the bettors' bias it follows that $w_i > w_j \Leftrightarrow p_i/w_i < p_j/w_j$. Thus, now there is a favorite–longshot bias even if $z = 0$. On the other hand, it is evident from equation (4) that, as $z$ grows, so does the extent of the bias, because the brackets decline in $z$. Of course, the disparity between bettors' actual behavior and Shin's assumption regarding their behavior also undermines his estimation of $z$ as the extent of insider participation in the market.


Notes

1 See, for example, Ali (1977), Asch and Quandt (1987), Thaler and Ziemba (1988), and Hurley and McDonough (1995).
2 In the case of grouping by p, the groups were of virtually equal size and hence regular OLS was run. It should be noted that, although in the reported regression there were forty groups, almost identical results were obtained when the data were divided into fifteen groups and when a regression was run on all 41,688 horses as individual groups.

References

Ali, M. (1977), "Probability and utility estimates for racetrack betting," Journal of Political Economy, 85, 803–15.
Asch, P. and Quandt, R. E. (1987), "Efficiency and profitability in exotic bets," Economica, 54, 289–98.
Cain, M., Law, D. and Peel, D. A. (1996), "Insider trading in the Greyhound betting market," Paper No. 96–01, Salford Papers in Gambling Studies, Center for the Study of Gambling and Commercial Gaming, University of Salford.
Hurley, W. and McDonough, L. (1995), "A note on the Hayek hypothesis and the favorite–longshot bias in pari-mutuel betting," American Economic Review, 85, 949–55.
Quandt, R. E. (1986), "Betting and equilibrium," Quarterly Journal of Economics, XCIX, 201–7.
Shin, H. S. (1991), "Optimal betting odds against insider traders," The Economic Journal, 101, 1179–85.
Shin, H. S. (1992), "Prices of state contingent claims with insider traders, and the favorite–longshot bias," The Economic Journal, 102, 426–35.
Shin, H. S. (1993), "Measuring the incidence of insider trading in a market for state-contingent claims," The Economic Journal, 103, 1141–53.
Thaler, R. H. and Ziemba, W. T. (1988), "Pari-mutuel betting markets: racetracks and lotteries," Journal of Economic Perspectives, 2, 161–74.

4

Pari-mutuel place betting in Great Britain and Ireland
An extraordinary opportunity

David Jackson and Patrick Waldron

The British/Irish method of calculating place dividends in pari-mutuel pools differs fundamentally from the older method that is used in the United States and elsewhere. The attraction of the newer method to pari-mutuel operators is that the predicted place dividends (the ‘will pays’) on each horse can be accurately displayed to punters before the race. We show that the British/Irish method can result in ‘minus pools’. We describe a simple overall betting strategy, which gives the punters, on aggregate, a substantial positive expected return. In a best case scenario from the punter’s point of view, the pari-mutuel operator can expect to lose over 50 per cent of the total place pool in certain races.

Pari-mutuel betting and the place pool

Horse Racing Ireland (HRI) (formerly the Irish Horse-racing Authority (IHA)) and the Horse-race Totalisator Board (the Tote) in Britain run pari-mutuel betting in the two countries respectively. We are concerned with an extraordinary anomaly, which has existed for over twenty years but has only recently attracted the attention of serious gamblers, in the way these two bodies run the place pool. The anomaly results directly from the method that the British and Irish pari-mutuel operators use to calculate the dividend for horses that are placed, that is, finish first or second in races of 5–7 runners or first, second or third in races of eight or more runners. The method introduced in Britain in the mid-1970s and in Ireland in 1995 is fundamentally different from that used throughout most of the world. The new method allows the predicted place dividends to be displayed prior to the race in a manner similar to the predicted win dividends. In the standard method, the place dividend on any horse depends on which other horses are also placed, and hence accurate predictions cannot be displayed prior to the race. The new method has a serious drawback, however, in that it can frequently lead to minus pools whether or not the operator pays a minimum profit (say 5 or 10 per cent of the stake) on short-odds placed horses. Since this anomaly was discovered in 1998, it has led to considerable losses in the place pool for the pari-mutuel operators in both countries. Indeed, the pari-mutuel operator can expect to lose money in the majority of races if punters, on aggregate, were to bet in the manner that we will describe. The strategy, however, results in an unstable equilibrium since individual


punters have an incentive to free ride by betting only on the horses yielding high expected returns.

Pari-mutuel: Pari-mutuel or Tote betting is pool betting. The punters bet against one another and not against the organisers of the pool. Exact dividends are not known until after the event. In theory, the operator should not be risking his own money.

Place pool: The pari-mutuel pool we are interested in is the place pool. In races of five, six or seven runners the punter bets on a horse to finish either 1st or 2nd. With eight or more runners a place bet is successful if the horse finishes 1st, 2nd or 3rd. Occasionally, in races with sixteen or more runners the operator also pays a dividend on the horse finishing fourth.

General principles for sharing out the place pool
• Operator retains a proportion of the pool to cover costs, etc.
• Divide the remainder among the successful punters.
• 'If you can't win you can't lose'.

The operator has a good deal of control over the ﬁrst two of these general principles, namely how much he takes from the pool and the manner in which he divides the remainder among the successful punters. However, except when a dead heat occurs, he is bound by tradition (and possibly by fear of riots) to at least give ‘money back’, though not necessarily anything more, to successful punters.

The standard method – USA and most other places

The standard method in the place pool, described in more detail by Hausch and Ziemba (1990), Asch et al. (1984) and Asch and Quandt (1986), is to use the losing bets to pay the winning bets, with the operator taking a percentage from the losing bets (or in some jurisdictions from the total pool). Apart from the major exceptions, which are Britain, Ireland and Australia, this is basically the method which is used in countries where pari-mutuel place pools operate.

The standard method
Step 1. Operator deducts some fraction τ of the losing bets.
Step 2. The losing bets (after deductions) are divided into two or three equal portions according to the number of placed horses.
Step 3. Punters who backed each placed horse receive a pro rata share of one of these equal portions plus their stakes.

Disadvantages of the standard method

As Box 4.1 illustrates, the main disadvantage of the standard method is the existence of real dividend uncertainty for the punter. In general, the place dividend on


any horse depends on which other horses are placed. From the operator's point of view this means that unique pre-race predicted dividends (will pays) cannot be displayed, as, for example, they are displayed for the win pool. In addition, a minor irritant from the operator's point of view is that income is variable. However, it is clear that unless the operator has a policy of paying a minimum guaranteed profit on successful bets, income can never be negative even if there are no losing bets.

Box 4.1 Example: Standard method
• Five runners; two places
• £600 on favourite; £100 on each of the other four runners
• Total pool = £1,000; deductions τ = 20% of losing bets

(a) Favourite placed
Losing bets = £300; deductions = £60
Dividends (to a £1 stake) will be
∗ £1.20 on favourite
∗ £2.20 on the other placed horse

(b) Favourite unplaced
Losing bets = £800; deductions = £160
Dividends
∗ £4.20 on both placed horses
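The three steps of the standard method can be sketched as follows (a minimal illustration in Python; the function name and stake figures simply mirror Box 4.1 and are not from any operator's software):

```python
def standard_place_dividends(stakes, placed, tau=0.20):
    """Standard method: the losing bets, less the operator's deduction tau,
    are split into equal portions, one per placed horse; backers of each
    placed horse share one portion pro rata and get their stakes back."""
    losing = sum(s for h, s in stakes.items() if h not in placed)
    portion = losing * (1 - tau) / len(placed)
    return {h: 1 + portion / stakes[h] for h in placed}  # dividend per £1 stake

stakes = {"fav": 600, "b": 100, "c": 100, "d": 100, "e": 100}
print(standard_place_dividends(stakes, {"fav", "b"}))  # favourite placed
print(standard_place_dividends(stakes, {"b", "c"}))    # favourite unplaced
```

Run on the Box 4.1 figures, this reproduces the £1.20 and £2.20 dividends when the favourite is placed, and £4.20 on both placed horses when it is not.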

The new method in Britain and Ireland

Under the new method, introduced into Britain in the 1970s and more recently into Ireland, the pari-mutuel operator takes a deduction from the total pool, not just from the losing bets. This step is not radically different from the standard method, but what happens next is.

The new method
Step 1. Operator deducts some fraction τ of the total pool.
Step 2. The total pool (after deductions) is divided into two or three equal portions according to the number of placed horses.
Step 3. Punters who backed each placed horse receive a pro rata share of one of these equal portions (with a minimum guarantee of money back).


We illustrate the new method with the same five-runner race that we used previously but with deductions of 16 per cent of the total pool, rather than the 20 per cent of losing bets in the standard method.

Box 4.2 Example: New method
• Five runners; two places
• £600 on favourite; £100 on each of the other four runners
• Total pool = £1,000; deductions τ = 16% of total pool

Total pool (after deductions) = £840
Calculated dividends (to a £1 stake) are
• £420/600 = 70 pence for the favourite
• £420/100 = £4.20 for other horses

Because the calculated dividend for the favourite in this example is less than £1, then, if the favourite is placed, the guarantee of at least money back to a successful punter comes into play. The pari-mutuel operator must subsidise the calculated dividend.

The possibility of a minus pool

If the favourite in this race is placed, the operator loses £20 overall. And this is merely giving 'money back' to those punters who backed the favourite. If the operator were to pay a minimum dividend of say £1.10 on the successful favourite he would lose £80 in this example. Of course, if the favourite is unplaced his pay-out is only £840 and he wins £160.

Predicted place pay-outs

The new method is even simpler than the standard method, and the place dividend on any horse does not depend on which other horses are placed. This allows the operator to overcome the main disadvantage of the standard method and display the predicted place dividends for each horse before the race, in exactly the same manner as the predicted win dividends are displayed. As far as we can tell, this appears to be the main reason why the new method was adopted by some pari-mutuel operators, but as we have seen the disadvantage of the new method is the possibility of minus pools if large amounts of the pool are bet on one horse. We concentrate henceforth on the two-place case. The generalisation to three or four places is straightforward.
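Both the Box 4.2 dividends and the resulting minus pool can be checked with a short sketch (the function and variable names are mine; the money-back guarantee is applied with max):

```python
def new_method_dividends(stakes, placed, tau):
    """New (British/Irish) method: deduct tau from the TOTAL pool, divide the
    net pool equally over the placed horses, and guarantee money back."""
    net_portion = sum(stakes.values()) * (1 - tau) / len(placed)
    return {h: max(1.0, net_portion / stakes[h]) for h in placed}

def operator_profit(stakes, placed, tau):
    """Pool taken in minus total pay-out; negative means a minus pool."""
    divs = new_method_dividends(stakes, placed, tau)
    return sum(stakes.values()) - sum(divs[h] * stakes[h] for h in placed)

stakes = {"fav": 600, "b": 100, "c": 100, "d": 100, "e": 100}
print(round(operator_profit(stakes, {"fav", "b"}, 0.16), 2))  # -20.0 (minus pool)
print(round(operator_profit(stakes, {"b", "c"}, 0.16), 2))    # 160.0
```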

22

D. Jackson and P. Waldron

Dividends under the new method

Let fi = fraction of the place pool bet on horse i. Then

    Σi fi = 1

Calculated dividend ci:

    ci = (1 − τ)/(2fi)

Since fi < 1, the calculated dividend ci is bounded below by (1 − τ)/2.

Declared dividend di and operator policy

Policy (1):
    di = ci if ci > 1
    di = 1 if ci ≤ 1

Alternative policy for the pari-mutuel operator, Policy (2):
    di = ci if ci > 1.1
    di = 1.1 if c∗ ≤ ci ≤ 1.1
    di = 1 if ci < c∗

where c∗ is the smallest calculated dividend for which the operator is prepared to round up the declared dividend to 1.1. Possible choices for c∗, with (1 − τ)/2 < c∗ < 1.1:
a Always pay 10 per cent profit;
b Always pay £1 if the calculated dividend is below £1.10;
c Sometimes pay £1.10.

In illustrating the new method, we will assume the simple policy (1) above whereby the dividend is either the calculated dividend exactly or money back. This simple policy ignores breakage (rounding dividends – usually down) but breakage is really not relevant to the anomalies that the new method throws up.
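Policies (1) and (2) amount to a small piecewise function (a sketch, ignoring breakage as the text does; c_star is the operator's chosen cut-off):

```python
def declared_dividend(f_i, tau=0.20, policy=1, c_star=1.0):
    """Declared place dividend per £1 stake under the new method.
    Policy 1: pay the calculated dividend, or money back if it falls below £1.
    Policy 2: dividends between c_star and 1.1 are rounded up to 1.1."""
    c = (1 - tau) / (2 * f_i)  # calculated dividend c_i
    if policy == 1:
        return max(1.0, c)
    if c > 1.1:
        return c
    return 1.1 if c >= c_star else 1.0

print(round(declared_dividend(0.6), 2))                         # 1.0 (money back)
print(round(declared_dividend(0.1), 2))                         # 4.0
print(round(declared_dividend(0.38, policy=2, c_star=0.9), 2))  # 1.1
```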

Minus pools

As we have seen, minus pools are a possibility and a sensible operator should be interested in how costly these minus pools can be. Let fmax = the fraction of the pool bet on the favourite. Then if the fraction bet on the favourite is large, specifically if fmax > (1 − τ)/2, then when the favourite is placed, since the calculated dividend is less than £1, the operator is going to have to subsidise the dividend. The total pay-out for the pari-mutuel operator when the favourite is placed is given below. Basically the pay-out is half the pool (after deductions) on one of the horses and perhaps a good deal more on the favourite if the fraction bet on the favourite is large.

• Pay-out when the favourite is placed = (1 − τ)/2 + Max{(1 − τ)/2, fmax}
• Potential operator loss = pay-out − 1.

Figure 4.1 below plots the potential operator loss as a function of the fraction of the pool that is bet on the favourite. It illustrates, for deductions of 20 per cent, that when the fraction of the pool that is bet on the favourite is less than 40 per cent the operator always retains 20 per cent of the pool. As the fraction rises above 40 per cent, the subsidy, when the favourite is placed, starts to eat into his profit, reaching break-even point when 60 per cent of the pool is bet on the favourite. The operator starts to incur losses when the fraction rises above this and in a worst case scenario can lose 40 per cent of the total pool when the fraction approaches 1.

• Worst case scenario (two places paid): the operator can lose 40 per cent, that is (1 − τ)/2, of the pool.
• In general, worst case scenario (k places paid, k = 2, 3, 4): the operator can lose ((k − 1)/k)(1 − τ) of the pool (e.g. 53 1/3 per cent for k = 3 and τ = 20 per cent).

Figure 4.1 Potential operator loss as a function of the fraction of the pool (fmax) bet on the favourite.
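The pay-out formula above generates the whole of Figure 4.1 in a couple of lines (a sketch; the k parameter is my generalisation to k places, matching the worst-case bullet):

```python
def potential_loss(f_max, tau=0.20, k=2):
    """Operator's loss as a fraction of the pool when the favourite is placed:
    (k-1)/k of the net pool goes to the other placed horses, and the favourite's
    backers receive the larger of their 1/k portion and money back (f_max)."""
    payout = (k - 1) / k * (1 - tau) + max((1 - tau) / k, f_max)
    return payout - 1

for f in (0.3, 0.6, 1.0):                  # tau = 20%, two places
    print(round(potential_loss(f), 2))     # -0.2, then 0.0, then 0.4
print(round(potential_loss(1.0, k=3), 4))  # 0.5333 (53 1/3 per cent)
```

The prints confirm the text: the operator keeps his 20 per cent take below fmax = 0.4, breaks even at fmax = 0.6, and can lose 40 per cent (two places) or 53 1/3 per cent (three places) in the worst case.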

24

D. Jackson and P. Waldron

Expected operator loss

The potential operator losses are substantial, but a loss can occur only if the favourite is placed. The expected losses depend on the true (unknown) probability of the favourite being placed as well as on the fraction of the pool that is bet on the favourite. Let pmax = probability that the favourite is placed. Then when fmax > (1 − τ)/2:

• Expected pay-out = pmax{fmax + (1 − τ)/2} + (1 − pmax)(1 − τ)

The worst case scenario becomes inevitable as pmax, fmax → 1:

• Expected loss → (1 − τ)/2, half the pool after deductions.

Figure 4.2 is a chart of expected operator loss as the fraction bet on the favourite increases, for three values of the true probability of the favourite being placed:
1 For p = 1 – the favourite is certain to be placed and potential losses become certain losses.
2 For p = 5/6 – the favourite is very likely to be placed but occasionally the operator will win, even when practically the whole pool is bet on the favourite.
3 For p = 1/3 – this is a low value for the probability of a favourite being placed but the operator can still only expect to break even as fmax tends to 1.


Figure 4.2 Expected operator loss for three values for the probability of the favourite being placed as a function of the fraction of the pool (fmax ) bet on the favourite.
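The expected pay-out expression can be evaluated directly for the three curves in Figure 4.2 (a sketch; the 30 per cent limit for p = 5/6 is computed here, not quoted from the text):

```python
def expected_payout(p_max, f_max, tau=0.20):
    """Operator's expected pay-out per pound staked (valid for f_max > (1-tau)/2):
    with probability p_max the favourite is placed and its backers are paid money
    back (f_max) while the other placed horse takes half the net pool; otherwise
    the whole net pool is paid out."""
    return p_max * (f_max + (1 - tau) / 2) + (1 - p_max) * (1 - tau)

# Expected operator loss as f_max -> 1, for the three probabilities in Figure 4.2
for p in (1.0, 5 / 6, 1 / 3):
    print(round(expected_payout(p, 1.0) - 1, 4))  # 0.4, then 0.3, then 0.0
```

As the text states, even the low value p = 1/3 leaves the operator only breaking even as fmax tends to 1.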

Pari-mutuel place betting in Britain and Ireland

25

Making the outsiders favourable bets

Of course, if a large fraction of the pool is bet on the favourite, then some or all of the other horses are likely to be favourable bets.

• Consider the aggregate of bets on all the outsiders.

If fmax is the fraction of the pool bet on the favourite then the fraction of the total pool bet on outsiders = 1 − fmax. The aggregate pay-out on the outsiders is half the net pool when the favourite is placed and the total net pool when the favourite is unplaced. Hence

• The expected aggregate pay-out on the outsiders is greater than the amount bet on the outsiders if

    ((1 − τ)/2)pmax + (1 − τ)(1 − pmax) > 1 − fmax
    ⇔ fmax > ((1 − τ)/2)pmax + τ.    (1)

For example, for τ = 20 per cent the aggregate pay-out on the outsiders will always be greater than the total invested on them if fmax > 60 per cent.

• If even a small group of punters collude to keep fmax high enough then the aggregate of all outsiders will be favourable.
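Inequality (1) can be verified numerically (a sketch; the function name is mine):

```python
def aggregate_outsider_edge(f_max, p_max, tau=0.20):
    """Expected aggregate pay-out to outsider backers minus their total stake
    (both as fractions of the pool); positive exactly when inequality (1) holds,
    i.e. when f_max > ((1 - tau)/2) * p_max + tau."""
    expected = p_max * (1 - tau) / 2 + (1 - p_max) * (1 - tau)
    return expected - (1 - f_max)

# tau = 20%: above f_max = 0.6 the outsiders are favourable whatever p_max is
print(aggregate_outsider_edge(0.70, 1.0) > 0)   # True
print(aggregate_outsider_edge(0.55, 1.0) > 0)   # False
```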

Making all the outsiders favourable bets simultaneously

On aggregate, the outsiders are favourable bets if a large fraction of the pool is bet on the favourite. Indeed, all the outsiders individually can be favourable bets simultaneously.

• Assume that the fraction fi of the pool that is bet on each horse except the favourite is in proportion to its probability (pi) of being placed.

Since two places are being paid it follows that Σi pi = 2 and hence that

    Σoutsiders pi = 2 − pmax

⇒ for the outsiders

    fi = (pi/(2 − pmax))(1 − fmax).


• If an outsider is placed, the dividend di will be half the net pool divided by the fraction bet on that horse:

    di = (1 − τ)/(2fi) = (1 − τ)(2 − pmax)/(2pi(1 − fmax))

For the outsiders the dividend is inversely proportional to the probability of the horse being placed. When is a bet on an individual outsider a favourable bet? Since the expected value of a bet is the probability of the bet being successful multiplied by the dividend, it follows that

• The expected value of a bet on the outsider is

    di pi = (1 − τ)(2 − pmax)/(2(1 − fmax))

This expression is the same for each outsider and it follows that

• The expected value is greater than unity provided

    fmax > ((1 − τ)/2)pmax + τ    (1a)

This is exactly the same condition, see inequality (1), that applied for the aggregate of all outsiders to be favourable. Also, we see that the expected value of a bet on an outsider tends to infinity as fmax tends to one. Of course as fmax tends to one the amounts bet on the outsiders are small, but nonetheless the expected value of these small bets is large.
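Under the proportionality assumption, the common expected value of the outsider bets can be checked numerically (a sketch; the place probabilities below are invented for illustration and sum to 2):

```python
tau, f_max = 0.20, 0.70
p = {"fav": 0.90, "b": 0.50, "c": 0.40, "d": 0.20}  # place probabilities, sum = 2
p_max = p["fav"]

# Outsider stake fractions in proportion to their place probabilities
f = {h: p[h] / (2 - p_max) * (1 - f_max) for h in p if h != "fav"}

for h in ("b", "c", "d"):
    d = (1 - tau) / (2 * f[h])  # calculated place dividend on outsider h
    print(h, round(d, 3), round(d * p[h], 3))  # every expected value is identical

# The favourite's calculated dividend is below £1 here, so it pays money back
print(round((1 - tau) / (2 * f_max), 3))
```

Each outsider's expected value works out to (1 − τ)(2 − pmax)/(2(1 − fmax)), about 1.467 per £1 with these illustrative numbers, since fmax = 0.7 exceeds the (1a) threshold of 0.56.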

Can the favourite be a favourable bet?

Yes, of course, if the fraction of the pool bet on it is small. But if the favourite is a favourable bet then, by definition:
a if the favourite pays better than money back, then
b the percentage of the pool bet on the favourite < (1 − τ)/2 = 40% (τ = 20%), and
c we are in that area of Figure 4.1 where no subsidy is necessary.

The operator is then guaranteed a profit of 20 per cent and the aggregate of punters is guaranteed to lose 20 per cent. It follows that all horses cannot be favourable bets simultaneously (see note).

Pari-mutuel place betting in Britain and Ireland

27

Note: Suppose the operator has a liberal policy of always paying a minimum profit, say £1.10 instead of money back. This is not the assumption we have been making here, but if a large fraction of the pool is bet on a favourite who has a high probability of being placed then, of course, all horses in that race can be favourable bets simultaneously.

Forming a favourable portfolio of bets
• When the fraction bet on the favourite is large, the outsiders are favourable bets and the operator may be in an expected loss situation.
• If the operator expects to lose, then punters in the aggregate expect to win. How can we exploit this situation?
• Operator's expected pay-out per pound invested = pmax{fmax + (1 − τ)/2} + (1 − pmax)(1 − τ)
• The expected pay-out increases as fmax increases, but it also depends on pmax, the probability of the favourite being placed.
• Expected pay-out → 1 + (1 − τ)/2 as pmax, fmax → 1.
• Expected operator loss tends to half the net pool.
• The public controls fmax but not pmax, the probability that the favourite is actually placed.
• The critical condition (two places paid) for the expected pay-out to be greater than unity as fmax → 1 is

    pmax > 2τ/(1 + τ) = 1/3 for τ = 20%    (2)

• In general (k places paid) the expected pay-out is greater than unity if

    pmax > kτ/((k − 1) + τ) = 3/11 or 1/4 for k = 3, 4 respectively, for τ = 20%.    (3)

So the portfolio of bets which we are suggesting for the public as a body is to invest the vast majority of their funds on the favourite and minimal amounts on the outsiders. This will be a favourable portfolio in the two-place case as long as inequality (2) is satisfied, that is, pmax > 1/3 when deductions are 20 per cent. Of course, the larger pmax actually is, the more favourable the portfolio becomes, achieving returns of up to 40 per cent when the favourite is nearly certain to be placed. In the general case, with k places paid, such a portfolio is a favourable one as long as inequality (3) is satisfied.
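Conditions (2) and (3) reduce to a one-line threshold, which exact rational arithmetic confirms (a sketch; the function name is mine):

```python
from fractions import Fraction

def p_max_threshold(tau, k=2):
    """Critical place probability for the favourite, above which the proposed
    portfolio (almost all on the favourite, token bets on the outsiders) is
    favourable as f_max tends to 1: p_max > k*tau / ((k - 1) + tau)."""
    return Fraction(k) * tau / (Fraction(k - 1) + tau)

tau = Fraction(1, 5)  # deductions of 20 per cent
for k in (2, 3, 4):
    print(k, p_max_threshold(tau, k))  # thresholds 1/3, 3/11 and 1/4
```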

Conclusions
• The new method for calculating the place dividends as it was used in Britain until the beginning of 1999, and in Ireland until the beginning of 2000, was fundamentally flawed.
• When the total fraction of the pool bet on the favourite is 'large' then the operator should expect to lose. If the public bets as we are suggesting then the favourite must be placed for the operator to lose. However, in nearly every race that is run, the true probability of the favourite being placed is high enough for the operator to expect to lose if the public, as a body, bets in the proposed manner.
• The existence of these minus pools does not depend on the operator paying a minimum guaranteed profit on short-odds horses, which under the standard method is a necessary condition for the existence of minus pools.
• In many races, where the authors have been using the method since 1998, the pools are sufficiently small that a single large investor can dominate the betting and form a favourable portfolio of bets. He does this by investing most of his money on the favourite and reasonable amounts on the outsiders (amounts similar to the total amounts that the public bets on these horses) but need not rely on any collusion from the aggregate of the rest of the punters. Indeed, the greater his investment, the greater will be both the absolute expected profit and the percentage profit on his investment.
• For a pari-mutuel pool this is truly an extraordinary anomaly.

Acknowledgements

Our thanks are due to the Horse-race Totalisator Board and to the Irish Horse-racing Authority, who have unwittingly supported this research.

Postscript

Sadly, good things come to an end and must then be written up. There were thirty coups in Britain in 1998, of which one was unsuccessful when the favourite was unplaced, and a slightly larger number in Ireland in 1998–99, with the favourite being placed on each occasion. Although we were conservative in choosing only races with a long odds-on favourite, we were fortunate in having only one favourite unplaced from approximately sixty-five races. Our model predicted 3–4 failures. Our offers to the IHA and the Tote in Britain to fix the problem by writing a little extra software were refused. However, both have now quietly introduced a change in how the place dividends are calculated when large amounts are bet on the favourite and the favourite is placed. Basically, they claw back money from the fraction of the pool allocated to the other placed horses in order to avoid subsidising the dividend on the favourite. They find it necessary to do this calculation manually after the race is over. It takes them a considerable time and means that in this situation the pre-race predicted place dividends for all horses apart from the favourite are grossly inflated. However, the basic method of dividing the total pool remains and predicted place dividends, which are accurate in the majority of races, are still displayed beforehand.


References

Asch, P. and Quandt, R. (1986) Racetrack Betting, Dover, MA: Auburn House.
Asch, P., Malkiel, B. and Quandt, R. (1984) 'Market efficiency in racetrack betting', Journal of Business, 57, 165–75.
Hausch, D. and Ziemba, W. (1990) 'Locks at the racetrack', Interfaces, 20, 41–8.

5

Betting at British racecourses
A comparison of the efficiency of betting with bookmakers and at the Tote

John Peirson and Philip Blackburn

It is shown that, at British racecourses, bookmakers offer more/less favourable odds on favourites/outsiders compared to the Tote system of betting. This would seem to suggest semi-strong inefficiency between these parallel markets. However, the degree of inefficiency between the odds in these two markets falls as the market operates, and the structures of the two markets suggest that it is not efficient for the odds offered by the two markets to converge exactly. Though systematic differences exist in the odds offered by the two markets, the variation in the differences in Tote and bookmaker odds is great. This variation is likely to hinder adjustment between the two markets. It is noted that the differences between the two markets are compatible with profit maximisation by bookmakers and efficient behaviour by bettors.

Introduction

Gambling on horse racing is generally regarded as an important source of information for the study of decision-making under uncertainty. Betting markets are particularly good examples of contingent claims markets, see Shin (1992). The markets are simple and complete in that odds are offered on all horses. The return from a successful bet is clear and the uncertainty is resolved rapidly and at a known time. Economists have investigated and attempted to explain the evidence on betting on horse races and used their conclusions to consider more complicated financial markets, for example, see Thaler and Ziemba (1988) and Shin (1991). Empirical interest has focused on the relation between the odds offered by bookmakers and pari-mutuel systems, and the probability of types of horses winning, in particular whether these odds are efficient and whether insider information exists; good examples are Dowie (1976), Ali (1979), Crafts (1985), Asch and Quandt (1987 and 1988), Dolbear (1993), Lo and Busche (1994) and Vaughan Williams and Paton (1997a). However, only the studies by Vaughan Williams and Paton (1997b), Gabriel and Marsden (1990)1 and Cain et al. (2001) have investigated the comparative efficiency of the two modes of betting available at British racecourses: with bookmakers and the Totalizator (hereafter the Tote).2 These parallel markets offer an opportunity to investigate the efficiency between two markets operating under uncertainty. The bettors, the so-called punters, would appear to


have access to the same information and, assuming semi-strong efficiency,3 one would expect the odds given by the bookmakers and Tote to be the same. Gabriel and Marsden (1990) concluded that the odds offered by the Tote are more generous and that the two markets are not semi-strong efficient. It was suggested that the difference in odds was caused by the presence of insider information and the market did not adjust to the presence of this information. This conclusion would appear to be an important example of where the Efficient Markets Hypothesis is directly refuted by empirical evidence. Cain et al. (2001) considered Gabriel and Marsden's propositions and evidence. They found that the difference between Tote and bookmaker returns on winning horses depends on the probability of the horse winning and the existence of insider information. The present analysis uses a longer data set and shows that a more complete investigation does not arrive at the same conclusions as Gabriel and Marsden (1990) and Cain et al. (2001). The systematic differences that we find in the data on odds given by bookmakers and the Tote are not consistent with the general conclusion of Gabriel and Marsden (1990) and Cain et al. (2001). The systematic differences that we observe are compatible with efficient behaviour on the part of the bookmakers and punters. Given the short duration of racecourse betting markets and the imperfect information held by punters, we believe that price information is imperfect in these markets and that the two sets of prices are not exactly comparable. However, the evidence suggests that during the market the bookmakers' odds move substantially in the direction of the Tote odds and this adjustment does not appear to be consistent with the existence of insider information. Thus, the market responses are more consistent with the Efficient Markets Hypothesis than has previously been suggested and are compatible with the view of Vaughan Williams and Paton (1997b).
The chapter is made up of ﬁve further sections. First, the important characteristics of British horse-racing betting markets are discussed in the section on ‘British markets for racecourse betting on horse racing’. Second, the different notions of efﬁciency that are relevant to betting on horse racing are examined in the section on ‘Efﬁciency and British betting markets’. Third, the empirical analysis is conducted in the section by the same name. Fourth, the evidence is explained in terms of proﬁt maximisation by bookmakers and efﬁcient behaviour by punters in the section on ‘Interpretation of the results’. Finally, a concluding section draws together the important arguments and ﬁndings.

British markets for racecourse betting on horse racing

British racecourse betting has two forms. Punters can bet at the Tote or with on-course bookmakers. The Tote is a pari-mutuel system, where there is a proportionate deduction from the total sum bet and the remainder is returned to winning punters in proportion to their stakes. The winning pay-out is declared for a £1 stake.4 An important characteristic of betting on the Tote is that the punter is not guaranteed the odds. However, at the racecourse and Tote outlets, potential dividends are displayed on public electronic monitors. Thus, punters are informed about which horses are being supported at the Tote and which are not. For horses


that are not well supported, moderate-sized bets can significantly alter the winning pay-out. There are different types of Tote bet, for example 'win', 'place' and 'exotic' bets. A win bet is concerned solely with whether the horse wins. A place bet is for a horse to be placed in the first 2–4 runners and can only be made in conjunction with a win bet. This is an example of commodity tying, see Phlips (1989). The average pay-out for on-course Tote betting to win is 84 per cent and for place betting it is 76 per cent.

The setting of books on a horse race is different from the operation of a pari-mutuel system.5 The intention of bookmakers is to offer odds that attract bets and make a profit over a period of time. There is some confusion over whether bookmakers attempt to make the same profit whatever horse wins or are willing to take a risk to gain higher expected profits, see the discussion in the Royal Commission on Gambling (1978). Peirson (1988) showed that bookmakers would have to be perfectly risk-averse to wish to follow the former strategy. The anecdotal evidence appears to support the view of expected profit maximising bookmakers as 'if one of the fancied horses wins, the bookmakers lose, but if one of the outsiders win, they win' (Royal Commission on Gambling, 1978, p. 471). This systematic result is presumably the result of a conscious strategy.

Unlike the Tote, bookmakers are engaged in a complicated example of decision-making under uncertainty. They do not have perfect information on the demand for betting on different horses. About 15 minutes before the start of a race, bookmakers post opening prices, which are based on past form and anticipated demand. As bookmakers may have information that is inferior to that possessed by insiders, the opening show is usually regarded as being a conservative estimate of the final odds in the market – this is shown in Table 5.3.
Bookmakers then alter odds according to the ﬂow of betting, their subjective probabilities of horses winning and the odds offered by other bookmakers and, presumably, the Tote. At the racecourse, punters usually take the bookmakers’ odds offered at the time the bet is made. In this case, the punters are sure of the amount of a winning pay-out. The ﬁnal odds offered by bookmakers are the starting prices (SPs). The SPs are used to settle most off-course bets with bookmakers. The SPs are recorded by the Sporting Life and the Press Association.6 Betting on a horse being placed is also possible.7 However, such bets can only be made in conjunction with an equal win bet. The average pay-out with on-course bookmakers has been estimated at 90 per cent for win bets and for the place element of each way bets ‘it is certainly lower than on bets to win’, Royal Commission on Gambling (1978, p. 475).
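The Tote's pari-mutuel pay-out described earlier is a one-line calculation (a sketch; the 16 per cent takeout here is illustrative only, not the Tote's actual deduction rate):

```python
def tote_win_dividend(stakes, winner, takeout=0.16):
    """Pari-mutuel win dividend per £1 stake: the pool, less the operator's
    proportionate deduction, divided by the money staked on the winner."""
    pool = sum(stakes.values())
    return pool * (1 - takeout) / stakes[winner]

# £1,000 pool, £500 of it on the winner
print(round(tote_win_dividend({"a": 500, "b": 300, "c": 200}, "a"), 2))  # 1.68
```

This also shows why the declared dividend cannot be fixed in advance: it depends on the final amounts bet on every horse, unlike a bookmaker's odds taken at the time of the bet.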

Efficiency and British betting markets

Racecourse betting has been used as an example to test the efficient markets hypothesis, see, for example, the studies by Dowie (1976), Snyder (1978), Tuckwell (1983), Asch et al. (1984) and Crafts (1985), and the discussion by Thaler and Ziemba (1988). These studies have considered the relation between offered odds and the probabilities of horses winning, and the possibility of profitable insider


information. These two types of studies are testing for weak and semi-strong efficiency, respectively. The market for betting on horses is similar to other financial markets, where there is publicly available information, an uncertain outcome and insider information. Only the studies by Gabriel and Marsden (1990) and Cain et al. (2001) have considered the relation between British bookmaking and Tote betting markets. According to the assumption of semi-strong efficiency, see Fama (1970) and Malkiel (1987), current prices reflect historical information and, more importantly, all relevant publicly available information. At racecourses, Tote odds are broadcast on electronic monitors and bookmakers 'chalk-up' and 'shout out' offered odds. Gabriel and Marsden (1990, pp. 879 and 883) suggested that 'tote payoffs were consistently higher than identical bets made at [bookmakers'] starting price odds' and 'the market fails to satisfy semi-strong efficiency conditions'. Cain et al. (2001) suggest that the differences between Tote and bookmakers' winning returns depend on the probability of the horse winning and the existence of insider information. These conclusions are investigated in the empirical analysis of the following section.

The empirical analysis

Data from the whole of the 1993 season of flat racing was used to examine the efficiency of betting with the Tote and bookmakers. The empirical analysis consists of three tests: the differences in Tote and bookmakers' odds; drift in bookmakers' odds; and the pay-outs for place bets. The standard test uses a t-statistic for the difference in sample means and where appropriate a paired t-test is carried out. The analysis considers separately the Tote and bookmakers' winning returns for horses with low and high probabilities of winning. Data was collected from Sporting Life 1993 Flat Results and Raceform 1993 Flat Annual. A total of 3,388 races from March to November were included. Races with dead heats for first place were excluded. Data was recorded on opening prices, 'touched prices', SPs, position of horse, winning tote, place tote, number of runners in race, racecourse, date, age of runners in race and handicap grade. Data on potential Tote winning and place dividends on horses that lost or were not placed was not available, as it is not published and is only kept by the Horse-race Totalizator Board for three months.

Before the empirical analysis, it is appropriate to consider other relevant evidence:

1 The difference between the reported average winning pay-outs of the Tote and bookmakers of 84 and 90 per cent, respectively, is incompatible with the Tote consistently offering more favourable odds than bookmakers. For some odds at least, bookmakers must offer more favourable odds than the Tote. This simple evidence contradicts Gabriel and Marsden's general conclusion.
2 As the odds change, it is not possible to follow exactly the odds offered by bookmakers. These odds are literally chalked-up and shouted out. Professional gamblers and bookmakers employ men to report these odds quickly. Average punters cannot easily directly compare Tote and bookmakers' odds.
3 It is the case that different bookmakers offer different odds. The odds on the Tote are continuous. The odds offered by bookmakers are discrete and not all 'integer odds' are used; for example, between odds of 10/1 and 20/1, the used odds are 11/1, 12/1, 14/1 and 16/1.
4 Commonly, bookmakers have minimum bets of £5, whilst the smallest Tote bet is £2. Additionally, some bettors may find betting with bookmakers intimidating and prefer the Tote's friendlier counter-service.
5 On-course betting with the Tote is about 5 per cent of the turnover of on-course bookmakers (McCririck, 1991, p. 59).

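The paired t-test described above can be sketched as follows; the pay-out figures used here are illustrative, not drawn from the chapter's sample:

```python
import math

def paired_t(tote, sp):
    """Paired t-statistic for the mean difference between Tote and SP
    winning returns on the same set of winning horses."""
    assert len(tote) == len(sp)
    diffs = [t - s for t, s in zip(tote, sp)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

# Illustrative winning pay-outs per £1 bet (hypothetical figures)
tote = [7.4, 6.8, 12.1, 3.2, 1.6, 9.0]
sp = [6.0, 6.5, 10.0, 3.5, 1.7, 8.0]
print(round(paired_t(tote, sp), 2))  # 1.93
```

A paired test is appropriate here because each winning horse yields one Tote return and one SP return, so differences can be taken horse by horse.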
Tote and starting price pay-outs

In Table 5.1, the average pay-out for winning £1 bets with the Tote and at bookmakers' SPs is given. As Gabriel and Marsden (1990) found, there is an apparently clear and statistically significant difference in favour of betting on the Tote. However, the average Tote pay-out is approximately 35 per cent lower than that previously reported, though the SP pay-out is about the same. Table 5.2 reports the differences between Tote odds and SPs for different ranges of SPs. All the differences are statistically significant. However, only for the first three ranges of SPs does the Tote offer more favourable returns. For the favourable Tote odds, the differences in means may appear large. However, the construct of equal £1 winning bets inflates the importance of these differences, as much less money is bet on outsiders than on fancied horses.

Table 5.1 Average winning pay-outs per £1 bet

Range of odds   Observations   Tote mean      SP mean       Difference    t-Value
All             3,388          7.20 (10.86)   6.09 (5.97)   1.11 (6.62)   9.74

Note: Standard deviations are in parentheses.

Table 5.2 Average winning pay-outs per £1 bet

Range of odds        Observations   Tote mean       SP mean        Difference      % Difference   t-Value
SP ≥ 20/1            140            39.97 (29.17)   26.27 (8.96)   13.70 (24.80)   34             6.54
10/1 ≤ SP < 20/1     539            15.84 (8.13)    12.34 (2.15)   3.50 (7.24)     22             11.24
5/1 ≤ SP < 10/1      926            6.80 (2.85)     6.61 (1.26)    0.19 (2.36)     3              2.52
5/2 ≤ SP < 5/1       855            3.29 (1.12)     3.47 (0.65)    −0.18 (0.91)    −5             −5.96
Evens ≤ SP < 5/2     649            1.53 (0.55)     1.64 (0.40)    −0.11 (0.39)    −7             −6.90
SP < Evens           279            0.61 (0.26)     0.64 (0.22)    −0.03 (0.17)    −5             −2.70

Note: Standard deviations are in parentheses.
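The % Difference column appears to be the Tote minus SP difference expressed as a percentage of the Tote mean; that reading of the column (an assumption, since the table does not define it) can be checked against the table's figures:

```python
# Tote and SP mean winning pay-outs per £1 bet, by odds range (Table 5.2)
tote_mean = [39.97, 15.84, 6.80, 3.29, 1.53, 0.61]
sp_mean = [26.27, 12.34, 6.61, 3.47, 1.64, 0.64]

pct_diff = [round((t - s) / t * 100) for t, s in zip(tote_mean, sp_mean)]
print(pct_diff)  # [34, 22, 3, -5, -7, -5] -- matches the % Difference column
```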

Betting at British racecourses


The pattern of more/less favourable odds on favourites/outsiders offered by bookmakers compared to the Tote is compatible with the evidence of the Royal Commission on Gambling (1978) that the pay-outs on bets with bookmakers are greater than those for the Tote, and with the evidence of Cain et al. (2001). It contradicts the conclusion of Gabriel and Marsden (1990, p. 874) of the 'persistently higher Tote returns'. Two issues arise from the results of Table 5.2. First, why should bookmakers choose to offer odds on the fancied horses that are more favourable than the Tote? Second, how can bookmakers attract betting on the less fancied horses when the odds offered appear to be so much less than those given by the Tote? The remainder of the empirical analysis and theoretical investigation examines the evidence of Table 5.2 and considers its implications.

The market in the returns for fancied horses

From Table 5.2, it is clear that, compared to the Tote and on average, bookmakers offer favourable SPs on the fancied horses, taken as being horses with SPs of 5/1 or less. Why should bookmakers wish to offer such favourable odds? The bookmakers dominate the market in racecourse betting, being responsible for 95 per cent of the turnover, and most of the volume of betting is on fancied horses.8 Presumably, they wish to achieve this outcome because the volume of betting creates greater expected profits than simply ensuring that they make the same profit whatever the outcome of the race and matching the returns offered by the Tote. However, to do this they must offer a better product than the Tote. Betting with bookmakers is a superior product for three reasons. First, bets are made at known odds whilst, in the case of the Tote, the actual return on a winning bet is not known exactly until the end of betting and depends on the final amounts bet on the winning horse and on all horses.
Second, with the Tote, an additional bet on a winning horse reduces the average return, as the total pool has to be spread over a larger amount bet on the winning horse. Finally, bookmaker SP payments for the three groups of most fancied horses are about 6 per cent better than those with the Tote. However, there are two reasons why bets with the bookmaker may be less attractive than using the Tote. First, for the fancied horses, though on average the SPs are better than the Tote returns, the SPs are not always better. For the fancied runners, the Tote returns were greater than the SPs for 32 per cent of winning horses and worse for 68 per cent. Thus, for about a third of winning horses, a small bet placed with the Tote would secure a greater return than with the bookmakers. Second, bookmakers alter the odds offered on horses. Most on-course bets with bookmakers are struck at the currently offered odds and these vary across the duration of the betting market. The volume of betting on different horses, the odds on other horses and presumably forecast Tote returns affect the odds offered on a particular horse; see Royal Commission on Gambling (1978). It is common to notice that the odds on outsiders drift out and the odds on clear favourites drift in. This drift can be interpreted as bookmakers protecting themselves from insider


Table 5.3 Opening prices, SPs and mean drift (expressed as percentage drift from opening price)

Range of odds        Observations   Mean drift       OP/SP          Tote mean
SP ≥ 20/1            140            39.30 (38.42)    18.86/26.27    39.97
10/1 ≤ SP < 20/1     539            20.20 (27.52)    10.27/12.34    15.84
5/1 ≤ SP < 10/1      926            17.93 (28.98)    5.61/6.61      6.80
5/2 ≤ SP < 5/1       855            12.50 (30.33)    3.08/3.47      3.29
Evens ≤ SP < 5/2     649            5.43 (29.33)     1.55/1.64      1.53
SP < Evens           279            −4.81 (23.40)    0.67/0.64      0.61

Note: Standard deviations are in parentheses.
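The drift measure, (SP − OP)/OP expressed as a percentage, can be computed as below. Applying it to the mean OP/SP pair for the longest-odds group happens to land close to the reported mean drift, though the table averages per-horse drifts, so the match is only approximate:

```python
def drift_pct(opening_price, starting_price):
    """Percentage drift from opening price (OP) to starting price (SP),
    both expressed as decimal odds against (e.g. 20/1 -> 20.0)."""
    return (starting_price - opening_price) / opening_price * 100

# Mean OP/SP pair for the SP >= 20/1 group in Table 5.3
print(round(drift_pct(18.86, 26.27), 1))  # 39.3
```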

information on the likelihood of a horse winning. The degree of insider information is likely to be reflected in the volume of betting for particular horses. Table 5.3 measures the drift from opening prices of different SP odds for winning horses. The drift is measured by the difference between starting and opening prices divided by the opening price. The major conclusion to be drawn from Table 5.3 is that, in each odds category, there is a large degree of variation in the drift (this is shown by the high standard deviation relative to the mean drift). So the direction and magnitude of the drift are varied and the returns offered by bookmakers will not always be in excess of the Tote returns. For the most fancied horses, the average drift reduces the returns and brings the Tote and bookmaker returns closer. The drift for the second most fancied groups of horses increases the difference in returns, but the average drift is a small proportion of the standard deviation. For the third most fancied group of horses, and on average, the opening price is less than the Tote return and the SP exceeds the Tote return by nearly the same amount. This implies that the drift starts off taking the bookmaker odds in the direction of the final Tote return, but the adjustment, on average, overshoots and contains a lot of noise.

Market in the returns for unfancied horses

It appears that bookmakers offer, on less fancied horses (here taken as horses with SPs of 5/1 or more), less favourable odds than the Tote. The following analysis suggests that the simple conclusion to be drawn from Table 5.2 has to be strongly qualified, for four reasons. First, for the unfancied horses, the differences between Tote and bookmaker returns are heavily skewed, with a few very large differences.9 Such positive outliers tend to occur in small pools where there is very little support in the Tote betting market for these unfancied horses.
The consequence of the existence of these outliers is that the mean difference for the remaining horses is much less. The importance of these outliers can be seen from the absolute size of the standard deviations of the Tote samples in Table 5.2 relative to the difference of the Tote and SP means for the three groups of unfancied horses. Bookmakers refuse to match these outlier Tote returns as doing so exposes them to the risk of a large loss; the reasons


for this behaviour are explained below and should not be regarded as evidence of semi-strong inefficiency. Second, betting on a horse with the Tote depresses the winning return. By comparison, bookmakers are expected to accept reasonably sized bets at the posted odds. The same absolute bet will depress the winning return for an unfancied horse more than for a much fancied horse, because the pool of bets on the unfancied horse is smaller. Thus, bookmakers do not have to match predicted Tote returns on unfancied horses. For example, take the case of the total bet with the Tote on a race being £1,190 and the predicted return on a particular horse being twenty. A punter betting £10 on this horse will reduce the winning return and, thus, receives a return of £168.06 for his bet. This represents a reduction of 16 per cent from the predicted return. Thus, for quite moderately sized bets and Tote pools that are not unduly small, bookmakers can offer substantially lower odds and still remain competitive. Third, the drift in the odds of unfancied horses is on average in the direction of reducing the difference between Tote and bookmaker winning returns; see Table 5.3. As noted above, there is a large degree of variation in the percentage drift. The drift may be regarded as the response of bookmakers to the volume of betting on different horses. If a horse receives little support, its odds will drift out. If a horse receives an unexpected amount of support, this may be taken as representing betting supported by insider information. The bookmakers will protect their position by reducing the odds and reducing support for the horse. Holders of such insider information will not use the Tote to place bets, as this automatically reduces the return the insider receives. Thus, where insider information is perceived to exist, bookmakers will contrive to force their SPs down and they will be much lower than the Tote returns.
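A sketch of this dilution arithmetic, assuming a 16 per cent win-pool deduction (the take is not stated in the chapter, so this rate is an assumption for illustration):

```python
def tote_return(pool, predicted_dividend, bet, take=0.16):
    """Winning return on a Tote bet after the bet itself dilutes the pool.

    `predicted_dividend` is the forecast dividend per £1 (including stake)
    before the new bet; `take` is an assumed win-pool deduction."""
    backed = pool * (1 - take) / predicted_dividend  # money already on the horse
    new_dividend = (pool + bet) * (1 - take) / (backed + bet)
    return bet * new_dividend

# The example in the text: £1,190 pool, predicted dividend of 20, £10 bet
ret = tote_return(1190, 20, 10)
print(round(ret, 2))                # about £168 under these assumptions
print(round((1 - ret / (10 * 20)) * 100))  # about a 16 per cent reduction
```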
It may not be appropriate to regard this as an example of semi-strong inefficiency, as bookmakers can be interpreted as responding to information contained in the volume of betting; see Vaughan Williams and Paton (1997b). Gabriel and Marsden (1990) and Cain et al. (2001) suggest that bookmakers protect themselves by reducing the odds on heavily supported horses. Shin (1991) has developed a well-known theoretical model that shows how bookmakers protecting themselves against insider trading will offer odds with the familiar favourite–longshot bias. Unfortunately, the evidence of drift in bookmaker odds for winners and all other runners does not support this hypothesis. In Table 5.4, the drift from opening to starting prices is reported for the chosen six SP categories. In no case are the differences in drift statistically significant between winning and non-winning horses. Additionally, the differences are not quantitatively important, the largest difference being less than 2 per cent and all others being less than 1 per cent. Cain et al. (2001) provide evidence that estimates of a Shin measure of the degree of insider information are related to the discrepancy between the Tote and bookmaker winning returns. This proposition, to which the present authors are sympathetic, would not appear to be compatible with the evidence of Table 5.4. Crafts (1985, 1994) has provided evidence that there are horses on which the odds shortened significantly and which went on to win. However, these are a small proportion of all winning horses and may get lost in the large number of winning horses considered


Table 5.4 Average drift (expressed as percentage drift from opening price)

                     Losing runners                Winning runners
Range of odds        Observations  Mean drift      Observations  Mean drift     Difference   t-Value
SP ≥ 20/1            10,272        39.74 (51.32)   140           39.30 (38.42)  0.44         0.13
10/1 ≤ SP < 20/1     9,447         22.13 (30.28)   539           20.20 (27.52)  1.93         1.57
5/1 ≤ SP < 10/1      8,446         17.88 (28.64)   926           17.93 (28.98)  −0.05        −0.05
5/2 ≤ SP < 5/1       4,292         12.40 (30.34)   855           12.50 (30.33)  −0.10        −0.09
Evens ≤ SP < 5/2     1,849         4.84 (28.41)    649           5.43 (29.33)   −0.59        −0.44
SP < Evens           482           −5.61 (23.98)   279           −4.81 (23.40)  −0.80        −0.45

Note: Standard deviations are in parentheses.

in Table 5.4. At the aggregate level, it is difficult to identify winning horses as having their odds shortened more than other runners, but see Vaughan Williams and Paton (1997b). This idea is important to the Shin (1991) model and is commonly accepted; for example, see Vaughan Williams (1999). As noted in the section on 'Efficiency and British betting markets', the odds offered by bookmakers are discrete. The drift in bookmakers' odds towards the Tote odds is likely to be restricted to a degree by the discrete odds bookmakers use. The effect of this can be considered by assuming that, when updating odds, bookmakers calculate the exact odds they wish to offer and only when these exceed an allowed odds category by a sufficient margin do they change to the new category. Thus, when odds drift out, on average, bookmakers will offer less than the Tote odds because of this ratchet effect. This risk-averse strategy applied to odds drifting out will produce Tote returns in excess of bookmakers' returns, but by an unknown margin. Fourth, bookmakers' odds are used to settle bets on horses to be placed in a race. The SPs for place bets may be more or less favourable than the Tote returns. It is common to make place bets on outsiders, as they are more likely to be placed than to win a race. However, with bookmakers in the United Kingdom, it is only possible to make a place bet together with a bet to win: bookmakers require that an equal win bet is also made, and this is called an each-way bet. The return to the place part of an each-way bet with a bookmaker is a certain fraction of the offered odds for a win bet, and the number of places depends on the number of runners and whether the race is a handicap.10 In Tote betting, it is possible to bet on horses to be placed only. The separate Tote place pool is the total of place bets minus a 24 per cent take. The pool is divided equally between the place positions.
For each place, the dividend is calculated by dividing the allocated pool by the total bet on the placed horse. Table 5.5 gives the place pay-outs for Tote and bookmakers on unfancied horses. In two out of the three SP ranges, the bookmakers offer more favourable odds for place bets, and in the third case the difference is very small and statistically insignificant. As betting with bookmakers on outsiders is often in the form of each-way bets, the unfavourable SPs of Table 5.2 compared with the Tote odds are offset


Table 5.5 Average place pay-outs per £1 bet for all placed horses

Range of odds        Observations   Tote mean     SP mean       Difference     % Difference   t-Value
SP ≥ 20/1            702            6.56 (7.55)   6.48 (2.96)   0.08 (6.83)    1              0.30
10/1 ≤ SP < 20/1     1,935          2.50 (1.50)   2.90 (0.57)   −0.40 (1.39)   −14            −12.70
5/1 ≤ SP < 10/1      2,883          1.37 (4.42)   1.57 (0.34)   −0.20 (4.41)   −13            −2.44

Note: Standard deviations are in parentheses.

Table 5.6 Average pay-outs for £1 each-way bets on all placed horses (including win and place)

Range of odds        Observations   Tote mean       SP mean        Difference      % Difference   t-Value
SP ≥ 20/1            702            13.69 (22.98)   10.84 (11.99)  2.85 (15.03)    26             5.02
10/1 ≤ SP < 20/1     1,935          6.18 (9.08)     5.60 (6.14)    0.58 (4.76)     10             5.39
5/1 ≤ SP < 10/1      2,883          2.84 (6.13)     2.97 (3.65)    −0.13 (4.71)    −4             −1.45

Note: Standard deviations are in parentheses.

by the favourable bookmakers' odds for placed outsiders. A comparison of the returns on each-way bets on outsiders with bookmakers and with the Tote is given in Table 5.6. This shows that the differences between Tote and bookmaker returns are reduced and that in one case the sign of the difference is actually reversed.
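The each-way settlement rules (listed in note 7) can be sketched as a small function; the 2–4 runner 'win only' case is simplified here to pay nothing on the place half, and the example figures are illustrative:

```python
def each_way_terms(runners, handicap):
    """Place terms from note 7: (fraction of win odds, places paid).
    Returns (None, 1) for 2-4 runners, where bets are settled win only."""
    if runners <= 4:
        return None, 1
    if runners <= 7:
        return 0.25, 2
    if handicap:
        return (0.25, 3) if runners <= 15 else (0.25, 4)
    return 0.2, 3  # 8 or more runners, non-handicap

def each_way_return(stake, win_odds, finish, runners, handicap):
    """Total return on an each-way bet: `stake` on the win part and the
    same again on the place part; `win_odds` are odds against
    (20.0 means 20/1), `finish` is the finishing position."""
    frac, places = each_way_terms(runners, handicap)
    ret = 0.0
    if finish == 1:
        ret += stake * (1 + win_odds)         # win part pays
    if frac is not None and finish <= places:
        ret += stake * (1 + win_odds * frac)  # place part pays
    return ret

# £1 each way at 20/1, finishing 3rd in a 12-runner non-handicap race:
print(each_way_return(1.0, 20.0, 3, 12, handicap=False))  # 5.0 (place part only)
```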

Interpretation of the results

Bookmakers could attempt to set perfect books and/or match the returns offered by the Tote. The evidence presented here suggests that bookmakers attempt to do something different from this. It is also clear that punters will not always have incentives to equalise completely the returns between betting with the Tote and bookmakers. This evidence is now summarised and interpreted. The most important point to make about the Tote/bookmaker betting market is that there is considerable variation in the returns offered. This has not been emphasised sufficiently in previous discussions of the market. The distribution of the differences in winning Tote and bookmaker returns shows great variation, and there is great variation in these differences during the operation of the market. Thus, it is not sufficient to consider only what happens to the average differences; it is appropriate to consider the distribution of differences as well. It is suggested that bookmakers wish to maximise expected profit and that most bets are placed on horses with a high probability of winning. Thus, it is important that bookmakers offer better returns on fancied horses than the Tote to attract such betting. On average this is correct, but the favourable odds are only of the order of about 6 per cent better. Additionally, on-course bookmakers pay out at the odds listed at the time of the bet and, unlike the Tote, the effect of a substantial bet is not to reduce the predicted return to the punter. However, for an important minority of fancied horses the Tote returns are better than the SPs of the bookmakers.


Additionally, the bookmaker odds drift and, except for the most fancied group of horses, drift out. So, with these qualifications, the SPs are on average more favourable than the bookmaker odds offered during the operation of the market. For two out of the three fancied groups of horses, the drift in odds reduces the average difference in Tote/bookmaker winning returns. Thus, the behaviour of bookmakers and punters is compatible with a more complicated idea of efficiency that, amongst other things, embraces the idea that the market is characterised by variation and uncertainty. For betting on unfancied horses, the Tote on average gives a better return. However, this result is biased by the presence of unsupported horses with large Tote returns. Betting with the Tote on an unsupported horse will have an important effect on the winning return because of the increased size of the pool of winning bets. For all categories of unfancied horses, the bookmakers' odds drift in a manner that reduces the average Tote/bookmaker difference in winning returns. However, there is again a large variation in the drift of bookmaker odds. It is not yet possible to detect in aggregate data that bookmakers are efficient in reducing the odds on horses on which insider information is revealed by the volume of betting, though see Vaughan Williams and Paton (1997b). The bookmakers' odds are used to settle bets for horses to be placed. For unfancied horses and each-way bets, there is less difference than between Tote and bookmaker winning returns, and for one group of horses the bookmakers' returns are better.

Conclusion

The observed odds offered by bookmakers and the Tote can be used to test the hypothesis of semi-strong efficiency. Gabriel and Marsden (1990) concluded that the Tote offers more favourable odds than bookmakers, which implies semi-strong inefficiency. A more thorough empirical investigation shows that, at British racecourses, bookmakers offer more/less favourable odds on favourites/outsiders compared to the Tote system of betting, a result that was also found by Cain et al. (2001) and Vaughan Williams and Paton (1997b). This and other evidence is compatible with efficient behaviour by bookmakers and punters operating in a market characterised by much variation and particular structural characteristics. The conclusion that can be drawn from this study is that the two betting markets are not identical and that perfect information on odds does not exist. The pattern of differences between the odds of the two markets is of a different and more complicated nature than that suggested by Gabriel and Marsden (1990) and Cain et al. (2001). However, there is a systematic movement in the odds offered by bookmakers towards those of the Tote. Thus, an efficient movement in prices in the two markets appears to exist, but it is difficult to conclude whether it is complete. The market structures and imperfect information suggest that the two markets would not be expected to offer exactly the same odds. The average differences in Tote and bookmakers' winning returns are small compared to the distributions of these differences. This indicates that most, if not all, of the feasible convergence between the two markets takes place.


Acknowledgements We are very grateful for the advice and suggestions of Andrew Dickerson and Alan Carruth.

Notes

1 Gabriel and Marsden (1991) published a correction, the contents of which do not affect the present study.
2 Bird and McCrae (1994) consider some Australian evidence that suggests equalisation of prices in two different betting markets.
3 The semi-strong form of the Efficient Markets Hypothesis asserts that prices reflect all publicly available information; see Malkiel (1987).
4 The minimum dividend to a £1 stake is £1.10, implying a return of 10p.
5 Sidney (1976) contains a detailed discussion and description of on-course and off-course bookmaking.
6 To be exact, these prices are those of Rails and Tattersalls bookmakers; see Sidney (1976).
7 The rules used to settle each-way bets are: 2–4 runners – win only; 5–7 runners – 1/4 of the odds for 1st and 2nd; 8 or more runners – 1/5 of the odds for 1st, 2nd and 3rd (in non-handicap races); 8–15 runners – 1/4 of the odds for 1st, 2nd and 3rd (in handicap races); 16 or more runners – 1/4 of the odds for 1st, 2nd, 3rd and 4th (in handicap races).
8 See McCririck (1991), p. 59 and Cain et al. (2001), p. 203.
9 Cain et al. (2001) make a similar observation.
10 See note 7 for details of the rules used to settle each-way bets with bookmakers.

References

Ali, M. M. (1979), 'Some evidence on the efficiency of a speculative market', Econometrica, 47, 387–92.
Asch, P., Malkiel, B. G. and Quandt, R. E. (1984), 'Market efficiency in racetrack betting', Journal of Business, 57, 165–75.
Asch, P. and Quandt, R. E. (1987), 'Efficiency and probability in exotic bets', Economica, 54, 289–98.
Asch, P. and Quandt, R. E. (1988), 'Betting bias in exotic bets', Economics Letters, 28, 215–19.
Bird, R. and McCrae, M. (1987), 'Tests of the efficiency of racetrack betting using bookmakers' odds', Management Science, 33, 1552–62.
Bird, R. and McCrae, M. (1994), 'Efficiency of racetrack betting markets: Australian evidence', in D. B. Hausch, V. S. Y. Lo and W. T. Ziemba (eds), Efficiency of Racetrack Betting Markets, Academic Press, London, pp. 575–82.
Cain, M., Law, D. and Peel, D. A. (2001), 'The incidence of insider trading in betting markets and the Gabriel and Marsden anomaly', The Manchester School, 69, 197–207.
Crafts, N. (1985), 'Some evidence of insider knowledge in horse racing betting in Britain', Economica, 52, 295–304.
Crafts, N. F. R. (1994), 'Winning systems? Some further evidence on insiders and outsiders in British horse race betting', in D. B. Hausch, V. S. Y. Lo and W. T. Ziemba (eds), Efficiency of Racetrack Betting Markets, Academic Press, London, pp. 545–9.
Dolbear, F. T. (1993), 'Is racetrack betting on exactas efficient?', Economica, 60, 105–11.
Dowie, J. (1976), 'On the efficiency and equity of betting markets', Economica, 43, 139–50.
Fama, E. (1970), 'Efficient capital markets: a review of theory and empirical work', Journal of Finance, 25, 383–417.
Gabriel, P. E. and Marsden, J. R. (1990), 'An examination of market efficiency in British racetrack betting', Journal of Political Economy, 98, 874–85.
Gabriel, P. E. and Marsden, J. R. (1991), 'An examination of market efficiency in British racetrack betting: errata and corrections', Journal of Political Economy, 99, 657–9.
Hausch, D. B., Ziemba, W. T. and Rubinstein, M. (1981), 'Efficiency of the market for racetrack betting', Management Science, 27, 1435–52.
Lo, V. S. Y. and Busche, K. (1994), 'How accurately do bettors bet in doubles?', in D. B. Hausch, V. S. Y. Lo and W. T. Ziemba (eds), Efficiency of Racetrack Betting Markets, Academic Press, London, pp. 465–8.
Malkiel, B. G. (1987), 'Efficient markets hypothesis', in J. Eatwell, M. Milgate and P. Newman (eds), The New Palgrave: Finance, Macmillan, London.
McCririck, J. (1991), World of Betting, McCririck, London.
Peirson, J. (1988), 'The economics of the setting of odds on horse races', Fourth International Conference on the Foundations and Applications of Utility, Risk and Decision Theory, Budapest, June 1988.
Phlips, L. (1989), The Economics of Price Discrimination, Cambridge University Press, Cambridge.
Raceform 1993 Flat Annual, Raceform, Newbury.
Royal Commission on Gambling (1978), Report, vols I and II, Cmnd 7200, HMSO, London.
Shin, H. S. (1991), 'Optimal betting odds against insider traders', Economic Journal, 101, 1179–85.
Shin, H. S. (1992), 'Prices of contingent claims with insider traders and the favourite–longshot bias', Economic Journal, 102, 426–35.
Sidney, C. (1976), The Art of Legging, Maxline, London.
Snyder, W. W. (1978), 'Horse racing: testing the efficient markets model', The Journal of Finance, 33, 1109–18.
Sporting Life Flat Results 1993, Mirror Group Newspapers, London.
Thaler, R. and Ziemba, W. T. (1988), 'Anomalies: parimutuel betting markets: racetracks and lotteries', Journal of Economic Perspectives, 2, 161–74.
Tuckwell, R. H. (1983), 'The thoroughbred gambling market: efficiency, equity and related issues', Australian Economic Papers, 22, 106–18.
Vaughan Williams, L. and Paton, D. (1997a), 'Why is there a favourite–longshot bias in British racetrack betting markets?', Economic Journal, 107, 150–8.
Vaughan Williams, L. and Paton, D. (1997b), 'Does information efficiency require a perception of information inefficiency?', Applied Economics Letters, 4, 615–17.
Vaughan Williams, L. (1999), 'Information efficiency in betting markets: a survey', Bulletin of Economic Research, 53, 1–30.

6 Breakage, turnover, and betting market efficiency

New evidence from Japanese horse tracks

W. David Walls and Kelly Busche

In this research we analyze more than 13,000 races run at eighteen Japanese horse tracks. We examine the relationship between breakage (the rounding down of pay-outs to winning wagers), betting turnover (the dollar amounts bet), and betting market efficiency. The evidence across Japanese horse tracks indicates that tracks with high turnovers are more informationally efficient than tracks with low turnovers. We also find that breakage costs are systematically related to betting market efficiency. We investigate the possibility that bettors have preferences over the skewness of betting returns in addition to their level and variance, and we relate this to betting turnover as well. The new evidence leads us to reject the skewness-preference model at tracks with a high volume of betting; however, the skewness-preference model is consistent with betting behavior at tracks with low betting turnovers.

Introduction

A slew of empirical research on horse track betting finds that bettors do not behave in a way consistent with market efficiency. The results of Ali (1977), Fabricand (1977), Hausch et al. (1981), Asch and Quandt (1987), Asch et al. (1982, 1984), and other authors all point toward market inefficiency in horse wagering.1 Few published papers have found evidence consistent with market efficiency: Busche and Hall (1988), Busche (1994), and Busche and Walls (2000) are among those who have used the same empirical methods as previous researchers and obtained results consistent with optimizing behavior on the part of racetrack bettors. The most well-established market inefficiency – known in gambling parlance as the favorite–longshot bias – is that the favorite or low-odds horses are systematically underbet relative to the longshot or high-odds horses.2 Bettors appear not to be optimizing because they could reallocate bets from longshot horses to favorite horses in a way that would increase the expected returns for the same amount bet. Many explanations have been offered for the observed betting bias in wagering markets, ranging from psychological explanations based on misperceptions (Slovic et al., 1982) to arguments that racetrack bettors have a love of risk (e.g. Asch and Quandt, 1990). Sauer (1998) states in a recent survey article on the economics of wagering markets that, “Work documenting the source of variation


in the favorite–longshot bias would be particularly useful” (p. 2048). In this paper we take an empirical stab at uncovering how the favorite–longshot bias is related to breakage and betting turnover, and also how it is related to bettor preferences over the moments of the returns distribution. In this mostly empirical chapter we analyze more than 13,000 races run at horse tracks across Japan in 1999 and 2000. The races come from horse tracks operating under the Japan Racing Association (JRA) and the National Association of Racing (NAR), the tracks differing primarily in their betting turnover: JRA tracks have an average daily turnover of about 3 million American dollars, while the NAR tracks have an average daily turnover of about 30,000 American dollars. Our sample of data is unique in that we have a large number of races across tracks that differ by several orders of magnitude in bet turnover, yet all venues are in the same country.3 We find that betting market efficiency is systematically related to breakage costs – the cost associated with the rounding down of pay-outs on winning bets. We construct an index of breakage costs and find that races with higher breakage costs are more likely to be measured as inefficient. Betting behavior for races with very small breakage costs is consistent with market efficiency. Our results suggest that ignoring the effect of breakage may bias statistical tests toward rejection of the hypothesis of market efficiency. We examine market efficiency at each track and relate it to betting turnover. Finding that bettors at low-turnover tracks do not equalize betting returns across alternative bets, while bettors at high-turnover tracks do, is consistent with the hypothesis that bettors make non-optimizing decisions when the cost of such errors is small, but not when the cost is large.
Bettors at high-turnover tracks bet as if they are maximizing betting returns, while bettors at low-turnover tracks may trade off returns for the consumption of a beer and a hot dog, and the excitement of occasionally hitting the longshot. Finally, we examine the skewness-preference hypothesis put forward by Golec and Tamarkin (1998). This hypothesis formalizes the “thrill of hitting the longshot” by including skewness explicitly in the representative bettor’s utility function. The evidence in support of this hypothesis varies with the volume of betting. The skewness-preference, risk-aversion model is better supported by data from our low-volume tracks, where returns are not equalized across horses of different win probabilities. At high-volume tracks, where bettors’ behavior seems consistent with equalizing expected returns, we find evidence of risk preference and skewness aversion! The following section discusses briefly the metric of betting market efficiency that has been commonly used in the literature. We then proceed to examine empirically turnover, breakage, and skewness preference in the sections that follow, except the final section, which concludes the chapter.
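Breakage can be illustrated with a toy rounding rule; quoting dividends per ¥100 stake and rounding down to ¥10 units is an assumption for this sketch, not a rule stated in the chapter:

```python
import math

def apply_breakage(dividend_per_100, unit=10):
    """Round a parimutuel dividend (per 100-yen stake) down to the nearest
    `unit` yen -- an illustrative breakage rule, assumed for this sketch."""
    return math.floor(dividend_per_100 / unit) * unit

def breakage_cost(dividend_per_100, unit=10):
    """Fraction of the fair dividend lost to rounding."""
    paid = apply_breakage(dividend_per_100, unit)
    return (dividend_per_100 - paid) / dividend_per_100

print(apply_breakage(378.4))           # pays 370
print(round(breakage_cost(378.4), 3))  # about 2.2% lost to breakage
```

Note that the relative cost of breakage is largest for small dividends, i.e. for heavily bet favorites, which is why it can interact with measured efficiency.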

Quantifying betting market efficiency

The most direct way to examine betting market efficiency is to test whether bettors allocate bets across horses to equalize returns. This is equivalent to testing if


bettors’ subjective probabilities are equal to the objective probabilities. The method of grouping data and calculating statistics to test the market efficiency hypothesis was developed by Ali (1977). First, horses in each race are grouped by rank order of betting volume; the horse with the largest bet fraction is the first favorite, the horse with the second largest bet fraction is the second favorite, and so on.4 The fractions of money bet on horses in each rank are the subjective probabilities (Rosett, 1965) and these probabilities are compared to the objective probabilities (the fractions of wins in each rank).5 Rosett (1965) showed that if risk-neutral bettors have unbiased expectations of win probabilities, then the proportion of money bet on a horse will equal the win probability. If the difference between subjective probability and objective probability is zero, the return from each horse will be equalized at the average loss due to the track's extraction of a portion of the betting pool. We can test the null hypothesis that the subjective probability (ψ) equals the objective probability (ζ) in each favorite position by treating the number of wins as a binomial statistic.6 For a sample of n observations, the statistic

    z = (ψ − ζ)√(n / ζ(1 − ζ))    (1)

has a limiting normal distribution with mean zero and unit variance.7 Very large or small z-statistics, as compared with the upper or lower percentage points of the normal distribution, provide statistical evidence of overbetting or underbetting on horses in each favorite position. In the empirical work that follows, data on bet volumes, odds, and race outcomes were obtained from the eighteen Japanese horse tracks listed in Table 6.1. Since we were able to obtain the exact bet volumes, we were able to compute the bet fractions for each horse in a race directly, as opposed to imputing them from the odds.8
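Equation (1) can be computed directly; the inputs below are illustrative rather than figures from the chapter's data:

```python
import math

def z_statistic(subjective, objective, n):
    """z-statistic of equation (1): tests whether the bet fraction
    (subjective probability) equals the win frequency (objective
    probability) over n observations in a favorite position."""
    return (subjective - objective) * math.sqrt(n / (objective * (1 - objective)))

# Hypothetical first-favorite rank: 35% of money bet, 32% win frequency,
# over 10,000 race observations
print(round(z_statistic(0.35, 0.32, 10_000), 2))  # 6.43 -> significant overbetting
```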

The role of turnover in betting markets9

The divergences from market efficiency in our results reported here, and in the results of previous researchers, are inversely related to the betting turnover. This relationship has been hinted at by other authors, but it has only been confronted directly by Walls and Busche (1996) and Busche and Walls (2000).10 Evidence of non-optimizing behavior, in the data analyzed in this chapter and in all prior studies, is present only at racetracks with betting turnovers of a few thousand dollars per race. When the turnover is scaled up by orders of magnitude, to a few hundred thousand dollars per race, we find no significant deviations from market efficiency. Our findings provide further non-experimental support for the decision–cost theory. Also, because we only examine horse tracks in Japan, cultural factors can be ignored. Economists seem to have had a fascination with anomalies while ignoring the mantra of opportunity cost. It is fortunate that the mountain of evidence of non-optimizing behavior in gambling markets and in economic experiments has prompted some economists to re-think the representative consumer's optimization problem in terms of the cost of making decisions. Smith (1991) has challenged

Table 6.1 z-statistics for Japanese horse tracks ordered by turnover

                                     Favorite position(a)
Track name   Races  Turnover(b)     1      2      3      4      5      6      7      8      9
Tokyo        477    457,866     −0.77  −0.42   0.17   0.42  −0.99  −0.12  −0.32  −0.21   0.34
Nakayama     286    425,726     −0.62  −1.51   1.41   0.12   1.55  −1.13   0.29  −0.57   0.71
Kyoto        288    346,535     −1.11   0.78   0.20  −0.76   1.04  −1.20  −0.10  −0.35  −0.03
Hanshin      288    295,186     −1.48  −0.25   0.38   1.20  −1.15   0.08  −0.09  −0.72   2.09
Kokura       192    205,636      0.78   0.56  −0.62  −0.03  −2.00  −0.18  −0.31   1.74  −0.88
Chukyo       192    163,556     −0.32   0.19  −0.64   0.04   0.50   0.20  −1.27  −0.39   0.09
Hakodate     192    150,141      1.11   0.02  −1.01  −1.24  −0.93  −0.43  −0.07   1.61  −0.01
Sapporo      192    148,343      0.28  −0.86  −0.49  −0.17  −0.84  −1.24   0.67   1.04   0.96
Fukushima    384    138,529      0.54   1.09   0.02  −0.37  −1.13  −0.88  −2.32   0.41   0.59
Urawa        969    9,477        1.84  −1.73   0.22   0.45   0.89  −0.43  −0.97   1.89   0.66
Kawasaki     858    9,427       −1.52   1.25   1.29   0.54  −0.92   0.93   0.70   0.40  −0.48
Mizusawa     1,350  4,252        0.85  −0.51  −0.06   0.49  −1.52   0.24   0.39   2.79   0.51
Sonoda       1,467  4,195        0.48  −0.72   0.06   1.05   1.07  −0.68   0.74   1.33   0.54
Nagoya       1,560  1,417       −1.10  −0.06   0.96   0.85   0.62  −0.38   0.33   1.29   2.38
Kamiyama     1,016  1,365       −0.54   0.24  −0.28   0.52   0.03   0.05   2.01   2.11  −0.62
Niigata      942    738         −1.39   0.78   0.79   1.63  −0.76  −1.10   1.30   4.15   0.32
Saga         1,288  515         −1.00  −0.08   0.36  −0.76   0.80   1.45   2.32   1.50   1.50
Arao         1,082  441         −0.37  −1.78   1.87   1.27   2.29   0.07  −0.43  −0.21   2.54

Notes
a z-statistics are listed by favorite position for each track.
b Turnover is listed in 10² yen (approximately equal to US dollars at then-current exchange rates).


the interpretation placed upon evidence from experimental studies, and Smith and Walker (1993a,b) develop what they call a decision–cost theory in which the size of payoffs affects the efficiency of outcomes. Smith and Walker go on to review thirty-one experimental studies and find that the pattern of results is consistent with optimization theory: when the costs of optimization errors increase, the size of the errors decreases and risk aversion replaces risk seeking. Harrison (1989, 1992) argues that the anomalies observed in experimental economic markets simply reflect the fact that the opportunity cost of non-optimizing decisions is tiny.11 When the potential gain to participants in gambling experiments increases, the percentage who appear risk averse also increases (Hershey et al., 1982; Battalio et al., 1990).

Horse tracks around the world use the pari-mutuel system of betting.12 In pari-mutuel betting markets, the track operator extracts a percentage of the betting pool and returns the remainder to winning bettors in proportion to their individual stakes on the outcome of the race. The net return per dollar from a bet on a particular horse i is given by

R_i = (1 − t)(w/x_i) − 1    (2)

where t is the track take; x_i is the total amount bet on horse i; and w = Σ_i x_i is the total amount bet on all horses. As the returns on each bet depend on the total amount bet on all horses, the actual payoffs to bets are not determined until all bets have been made. If the proportion of the total betting pool bet on each horse were equal to each horse's win probability, returns across all horses would be equalized and the betting market could be considered efficient in that bettors have exploited all betting margins. However, if the pattern of betting resulted in a particular horse yielding a statistically larger expected return than another horse, this would be evidence against the hypothesis of market efficiency.
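Equation (2) can be sketched as follows; the three-horse pool and the 18 percent take below are hypothetical illustrative values, and the function name is ours.

```python
# Net pari-mutuel return per dollar on each horse, equation (2):
# R_i = (1 - t) * w / x_i - 1, with w the total pool and x_i the bet on horse i.
def net_returns(bets, take):
    w = sum(bets)  # total amount bet on all horses
    return [(1 - take) * w / x - 1 for x in bets]

# Hypothetical three-horse pool with an 18 percent take.
R = net_returns([5_000, 3_000, 2_000], 0.18)  # ≈ [0.64, 1.73, 3.10]
```

Note that the returns depend only on the relative bet fractions and the take; equalized returns across horses would require bet fractions equal to win probabilities.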
Suppose there were a single risk-neutral professional bettor who knew the horses' true win probabilities, and for simplicity also assume that there was a single underbet horse.13 The bettor's decision problem is to maximize expected returns

E(R) = π(1 − t)B(w + B)/(x + B) − B    (3)

by choice of his bet B, where π is the horse's win probability. For the bettor, maximizing expected returns yields the optimal bet

B* = sqrt[ π(1 − t)(w − x)x / (1 − π(1 − t)) ] − x    (4)

The optimal bet is a function of the amount currently bet on that horse, the total bet on all other horses, the track take t, and the horse's win probability. If a horse is sufficiently underbet, the bettor could place a bet that would have a positive return in expectation. Suppose, for example, a horse with a 0.15


W. D. Walls and K. Busche

win probability has already attracted a 10 percent share of the win pool. If the track take is 18 percent, advertised odds will be 8.2 : 1, and a $1 bet on this horse would have an expected net return of π(1 − t)(w/x) − 1 = 0.15 × 0.82 × (1/0.1) − 1 = $0.23. The bettor's optimal bet on this horse depends upon the value of the total betting volume w, the initial amount bet on the horse, x, and the win probability π. If the original win pool were $10,000, with $1,000 already bet on the selected horse, the profit-maximizing bet for the professional bettor is $123.50. The professional's bet causes the odds to fall from 8.2 to 7.39 : 1 and the expected net return is $13.38. The professional's bet removes all further profitable bets in the example.14 When there are multiple professional bettors competing to make the profitable bets, the odds converge even more rapidly toward the level implied by market efficiency (Adams et al., 2002). From equation (3) the expected profit is 123.5(0.15)(0.82)(10,123.5/1,123.5) − 123.5 = 13.38. If the professional bettor made a bet of $262.26, the odds on this horse would be reduced to (1 − t) × 10,262.26/1,262.26 = 6.67 and this would drive the professional's return to zero. The profit-maximizing bettor drives the final track odds to between the initial odds of 8.2 and the zero-return odds of 6.67. Optimal bets and expected returns are scaled by the total pool size: if the win pool were two orders of magnitude larger ($1,000,000) then expected returns would also increase by two orders of magnitude ($1,338).15 The magnitude of returns effectively constrains the professional's research costs incurred in estimating horses' win probabilities: research will be proportional to the size of the betting pool.
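The worked example above can be checked numerically. This is an illustrative sketch of equations (3) and (4) using the text's figures (π = 0.15, t = 0.18, w = $10,000, x = $1,000); the function names are ours.

```python
import math

def optimal_bet(pi, t, w, x):
    """Expected-return-maximizing bet of a risk-neutral professional, equation (4)."""
    return math.sqrt(pi * (1 - t) * (w - x) * x / (1 - pi * (1 - t))) - x

def expected_profit(B, pi, t, w, x):
    """Expected profit of a bet of size B, equation (3)."""
    return pi * (1 - t) * B * (w + B) / (x + B) - B

B = optimal_bet(0.15, 0.18, 10_000, 1_000)              # ≈ 123.5 dollars
odds = (1 - 0.18) * (10_000 + B) / (1_000 + B)          # ≈ 7.39, down from 8.2
profit = expected_profit(B, 0.15, 0.18, 10_000, 1_000)  # ≈ 13.38 dollars
```

Scaling w and x up by a factor of 100 scales B and the expected profit by the same factor, which is the pool-size proportionality the text relies on.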
In the example given in the previous paragraph, if only one underbet horse could be found on each race day, the professional bettor with an alternative employment opportunity paying $100 per day could not proﬁtably spend any money on research at the racetrack with a $10,000 win pool. However, at the $1,000,000 track, the bettor could spend up to $1,238 per day on research before becoming indifferent between professional betting and his alternative employment.16 In the event that research costs and betting volume made it unproﬁtable for a professional to bet at the track, an outside investigator would observe returns across horses that reﬂect the risk preferences of the remaining non-professional bettors. With small betting pools, the reward to low-variance predictions about a horse’s win probabilities is small, so it is unlikely that a professional bettor would be willing to incur the research cost involved in ﬁnding underbet horses. At racetracks with small volumes of betting, the enjoyment of a day at the races is perhaps sufﬁcient to attract people who treat their day at the races as consumption of entertainment. The consumption involved in recreational betting may include accepting greater than minimum required average losses to achieve the occasional thrill of hitting the longshot.17 If that is the case, the examination of betting data from tracks with small betting volumes would be expected to show that longshots are overbet. At racetracks with large volumes of betting, some bettors may proﬁtably become professionals, researching horses’ win probabilities in order to ﬁnd horses sufﬁciently underbet to yield high returns in expectation.18


If racetracks are populated by both consumers who value thrills like the possibility of hitting the longshot by betting on extreme favorites, and professional investors who value only returns, we should expect that underbet horses will be rarer at racetracks with larger turnovers. Where rewards are sufficiently high, bettors will research more and markets will be measured as more efficient.19 A testable implication of this view is that large betting volume racetracks will be measured as more efficient than small volume tracks. We now confront this prediction of the decision–cost theory with empirical evidence across Japanese racetracks with a wide range of bet volumes. Table 6.1 shows the z-statistics for the null hypothesis that betting returns are equalized across horses for each track. Each horse track is shown as a separate row in the table, and the tracks are listed in decreasing order of betting turnover. The first nine tracks, members of the JRA, have betting turnover in the hundreds of thousands per race. Among JRA tracks, only Hanshin, Kokura, and Fukushima show any evidence that betting returns are not equalized across betting alternatives when testing at the 10 percent marginal significance level. This is not strong evidence of systematic betting market inefficiency: at the 10 percent marginal significance level we would expect to find 10 percent of the z-statistics in the critical region as a result of chance, but we find only four out of eighty-one, which is about half of what we would expect to find. The bottom nine rows of Table 6.1 consist of the NAR tracks, which have betting turnover from the hundreds to slightly less than ten thousand per race. Seven of these nine tracks show evidence that bettors have not equalized the returns across betting alternatives; only Sonoda and Kawasaki showed no evidence of underbetting or overbetting.
Testing again at the 10 percent marginal significance level, chance would lead us to expect about eight significant z-statistics for the nine tracks with nine-horse races. But we find thirteen significant z-statistics, more than half again above what we would expect from chance. The pattern that emerges from the raw z-statistics indicates that horse tracks with larger betting turnover are measured as being more informationally efficient. To relate the z-statistics in each row of Table 6.1 to the betting turnover requires the construction of a metric to quantify how the z values differ from zero as a group. If we treat each row of z-statistics as a nine-dimensional vector, an intuitive way to quantify the vector of z-statistics is to measure its deviation from the zero vector in terms of Euclidean distance. This is precisely the norm of the z-vector:

norm_i = sqrt( Σ_{j=1}^{9} z_ij² ),   i = 1, . . . , 18    (5)

where i indexes the eighteen individual horse tracks and j indexes the favorite position at each track. We regressed the norm of the z-vector on the betting turnover and obtained the following results:

norm_i = 4.259 − 5.22e−6 × Turnover_i + residual_i    (6)
        [0.565]  [2.01e−6]


where White's (1980) heteroskedasticity-consistent estimated standard errors are reported in brackets below the respective coefficient estimates.20 The R² for the regression was 0.24. The coefficient on turnover is negative and statistically different from zero at the 5 percent significance level. This is strong evidence that the z-statistics reported in analyses of racetrack betting markets are closer to zero when bet volumes are high. These empirical results show that the volume of betting is an important determinant of observed betting market efficiency across Japan's horse tracks.
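As an illustration of equation (5), the norm can be computed for the highest- and lowest-turnover rows of Table 6.1 (the z-values are taken from the table; the function name is ours).

```python
import math

# z-statistics by favorite position for the highest-turnover (Tokyo)
# and lowest-turnover (Arao) tracks in Table 6.1.
z_tokyo = [-0.77, -0.42, 0.17, 0.42, -0.99, -0.12, -0.32, -0.21, 0.34]
z_arao  = [-0.37, -1.78, 1.87, 1.27, 2.29, 0.07, -0.43, -0.21, 2.54]

def z_norm(z_row):
    """Euclidean distance of a track's z-vector from the zero vector, equation (5)."""
    return math.sqrt(sum(z * z for z in z_row))

# High-turnover Tokyo (norm ≈ 1.49) sits much closer to the zero vector
# than low-turnover Arao (norm ≈ 4.51), the pattern equation (6) captures.
```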

The role of breakage in pari-mutuel betting21

In betting markets, the gross return per dollar from a bet on a particular horse i is given by

R_i = (1 − t)(w/x_i)    (7)

where t is the "track take" – the percentage that the racetrack operator extracts from the betting pool as the fee for coordinating the gambling market; w is the total amount bet on all horses; and x_i is the total amount bet on horse i. The track take is the primary source of revenue for racetracks and it is often about 0.17 or 0.18, although it is as high as 0.26 at racetracks in Japan. The track take is removed from the pool before any other calculations or payoffs are made. We explain below how returns to bettors are functions of relative amounts bet across horses; the track take does not affect the allocation of wagers across horses, although it does reduce the amount bet by any individual bettor.22 As a secondary source of revenue, and to simplify pay-outs, racetrack operators typically round payoffs down – often to the nearest lower 20 cents for a $2 bet; the rounding down of payoffs is called breakage in betting industry parlance. Where the exact payoffs corresponding to the advertised odds might indicate $12.77 or $12.67 winning payoffs to $2.00 bets, the actual payoffs will be $12.60 for each bet, with the 17 and 7 cents, respectively, removed as breakage. The methodology employed by previous researchers was to add track take and breakage together. However, track take and breakage affect the behavior of bettors differently.23 Track take alters the returns from horses across win probabilities: the expected return of horse i with win probability π_i is π_i R_i − 1. A horse that attracts $1,000 of a $10,000 win pool at a track with a 16 percent take will have odds of (1 − 0.16) × 10,000/1,000 − 1 = 7.40, so the gross return from a $1 winning bet will equal $8.40. A horse that attracts 50 percent of that win pool will have a gross return from a $1 winning ticket of $1.68. If the track take were increased to 18 percent, those same horses would have gross returns of $8.20 and $1.64.
A change in the track take thus alters the gross returns on all horses by the same percentage, so changes in the track take do not change the relative profitability of betting different horses; the results of an experimental betting market confirm this prediction (Hurley and McDonough, 1995).
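The rounding rule described above (payoffs on a $2 ticket rounded down to the nearest 20 cents) can be sketched as follows; the function name is ours, and integer cents are used to avoid floating-point artifacts.

```python
def breakage_payout(exact_payout_dollars):
    """Round a winning payout down to the nearest 20 cents, as with a $2 ticket."""
    cents = int(round(exact_payout_dollars * 100))  # work in whole cents
    return (cents // 20) * 20 / 100                 # floor to a 20-cent increment

# The text's example: exact payoffs of $12.77 and $12.67 both pay $12.60,
# with 17 and 7 cents, respectively, removed as breakage.
```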


Breakage cost differentially affects the returns across horses. A bettor placing a bet on a horse with an anticipated 10 percent chance of winning would anticipate odds of 8.3 : 1 with a 17 percent track take, and breakage would reduce the pay-out by an expected 10 cents: rather than getting paid $16.60 for a $2 winning ticket, which would be the pay-out if actual odds were between 8.3 and 8.4, if actual odds turn out to be 8.29 the payment would be reduced by 20 cents to $16.40. With breakage distributed uniformly between 0 and 20 cents, the expected reduction is 10 cents on a $2 winning ticket. Although the expected reduction on the payment of winning tickets is a constant 10 cents, the cost is borne more heavily by winning tickets on favorite horses, since 10 cents on an even-odds winner paying $2 is proportionally more than the 10 cents on a longshot winner paying $100 for a $2 winning ticket. The expected cost to all participants is a function of the odds times the probability that purchasers of the tickets will become winners. An index of expected breakage cost can be constructed by examining the components of breakage cost. The first component can be approximated by the odds on the particular horse: horses with low odds will have relatively high breakage per dollar returned on a winning ticket. The second component is related to the bet fraction because it approximates the probability that the first component will be realized. A metric of breakage for a particular horse can be constructed by multiplying these two components. Consider the breakage for a particular horse that has 49 percent of the pool bet on it: with a 17 percent track take, the exact odds would be 1.59, but due to breakage a winning ticket would be paid only $1.50. Since the win probability can be approximated by the bet fraction, this horse would add 0.779 [= 0.49 × 1.59] to the breakage index.
A horse collecting only 5 percent of the win pool would add 0.053 [= 0.05 × (0.05/0.83 + 1)]. The expected breakage cost for a particular race can be approximated by summing over the individual horses: Σ_i (x_i/w) × Odds_i, where i indexes horses within a race. We calculated the index of breakage for each of the races in our sample and sorted the races in decreasing order of the index, which ranged from 0.0795 to 0.3303. Then we divided the sorted races into twenty-six equal groups of approximately 500 races each. The z-statistics for the null hypothesis of equal returns across horses in each favorite position were calculated for each subgroup and are displayed in Table 6.2. In the thirteen breakage groups shown in the top half of the table we find fourteen z-statistics that are significantly different from zero at the 10 percent marginal significance level. Since we are testing at the 10 percent level, we would only expect about twelve (10 percent × 13 × 9), so there is evidence that bettors are not equalizing returns across betting alternatives for the high-breakage groups. In the thirteen breakage groups comprising the bottom thirteen rows of the table, we find only eight z-statistics that are significant at the 10 percent marginal level, compared to the twelve that we would expect to find as the result of sampling variation. Thus, the lower breakage groups appear to be consistent with the hypothesis of bettors equalizing returns across favorite positions, while the high breakage groups are inconsistent with this hypothesis. To relate the z-statistics in each row


Table 6.2 z-statistics grouped by index of breakage

          Favorite position(b)
Index(a)     1      2      3      4      5      6      7      8      9
0.3303   −1.37   0.94   2.56  −2.02  −0.23  −0.43  −0.17   0.52  −0.48
0.2745   −1.17   0.34   1.72  −0.87   1.83   0.36  −0.02   0.64   0.67
0.2465   −0.12   1.50  −1.28  −1.71  −0.10   1.36   0.07   1.71   3.71
0.2257   −1.01  −1.13   1.06   1.09  −0.03   1.42   0.44   1.34   2.89
0.2116   −1.12   0.58   0.01   0.15  −0.49   1.44   0.12   1.35   1.34
0.1996   −1.53   0.87   1.53   1.44  −0.59  −1.35   2.04  −0.11   0.75
0.1900    0.14  −1.10  −0.19   1.08  −1.10   0.78   2.76   1.59   0.09
0.1806    0.42  −1.87   0.04   1.61   3.26   0.33  −0.18  −1.28   0.89
0.1731   −0.17   0.88  −1.09   1.63  −0.32  −1.57   1.70  −0.13   1.60
0.1666   −1.22   0.49   0.53   2.53  −0.35  −0.66   0.81   0.65  −0.41
0.1607   −1.58  −0.91   3.19   1.08   0.97  −0.61   0.89   0.64   0.13
0.1549    0.79  −0.66   0.22   1.26  −0.89  −0.78  −0.82   1.04   0.64
0.1494   −0.16  −1.21   1.13   1.00  −0.05   0.62   1.68   0.65  −0.89
0.1444    0.71  −0.00  −0.38   0.68  −0.09  −0.69   0.36   0.36  −0.47
0.1394   −0.11  −0.05   1.55   0.12  −0.67   0.40  −1.61   1.15  −0.03
0.1347    1.35  −1.61   0.07  −0.83   0.33  −0.49   1.22   1.38   0.23
0.1305    2.03  −0.62   0.17   0.36  −1.55  −1.39   1.47   0.41   1.19
0.1262   −0.21   0.13  −0.76   0.60   0.57  −0.70   0.56   1.70   0.06
0.1224   −0.91   1.15  −0.83  −0.22   0.01   1.54  −1.33   2.18   1.30
0.1183   −2.09   1.56   2.40   1.11  −1.74   1.04  −0.64   1.63  −1.41
0.1138    0.81   0.22  −1.56  −0.63   0.35   0.45   1.23   0.82   0.79
0.1092    1.13  −2.34   1.05   0.14   0.33  −1.06   0.68   1.25   0.17
0.1045   −0.54  −1.10   1.85   0.23  −0.69   0.08  −0.00   0.16   1.77
0.0987    0.46   0.13  −0.23  −0.55   1.84  −0.24  −1.41   0.43   0.10
0.0913    1.23  −0.75  −0.74  −0.47  −0.58  −0.21   0.45   0.98   1.59
0.0795   −1.47   1.08  −1.02   0.67   1.21  −0.84  −1.05   1.01   0.45

Notes
a The index of breakage is defined in the main text.
b z-statistics are listed by favorite position for each index grouping.

of Table 6.2 to the breakage cost requires the construction of a metric to quantify how the z values differ from zero as a group. If we treat (as we did in the section on "The role of turnover in betting markets") each row of z-statistics as a nine-dimensional vector, we can again quantify the vector of z-statistics in terms of its Euclidean distance from the zero vector by taking the norm of the vector:

norm_i = sqrt( Σ_{j=1}^{9} z_ij² ),   i = 1, . . . , 26    (8)

where i indexes the races grouped by breakage and j indexes the favorite position within each group. We regressed the norm of the z-vector on the index of breakage and obtained the following results:

norm_i = 2.437 + 5.272 × Breakage_i + residual_i    (9)
        [3.391]  [2.314]


where White's (1980) heteroskedasticity-consistent estimated standard errors are reported in brackets below the respective coefficient estimates.24 The R² for the regression was 0.15. The coefficient on breakage is positive and statistically different from zero at the 5 percent level. This is strong evidence that the z-statistics reported in analyses of racetrack betting markets are biased away from zero when breakage costs are ignored.

The skewness–preference hypothesis

Modeling bettors' utility functions

Two other ways of quantifying and testing betting behavior are based on alternative specifications of a representative bettor's utility function. Modeling bettors' utility is based primarily on the work of Ali (1977), where a representative bettor has utility function u(·). A bet on horse h returns X_h dollars if the horse wins and zero otherwise. The utility function is normalized so that the utility of a winning bet on the longest-odds horse is unity and the utility of any losing bet is zero. In this formulation, the utility of a winning bet on horse h is u(x_h) = p_H/p_h, where p_H is the objective win probability on the least-favorite horse and p_h is the objective win probability on horse h.

Power utility

Ali (1977) fit a power function to approximate utility, so u(x_h) = α x_h^β, and he estimated this using a logarithmic transformation:

ln u(x_h) = α + β ln x_h + µ    (10)

In this model risk-neutrality is implied if the exponent β equals unity, risk preference is indicated if β is greater than unity, and risk aversion is indicated if β is less than unity. Modeling utility as a power function is arbitrary and it implies constant relative risk aversion. As an alternative, Golec and Tamarkin (1998) suggest using a cubic utility model.

Cubic utility

Golec and Tamarkin (1998) suggest that we approximate the unknown utility function u(x_h) by expanding a third-order Taylor series approximation.25 The Taylor series approximation results in the following cubic utility model that can be estimated using standard linear regression:

u(x_h) = α + β1 x_h + β2 x_h² + β3 x_h³ + µ    (11)

In this model risk-neutrality is implied when β2 = 0, risk preference is implied when β2 > 0, and risk aversion is implied when β2 < 0. Skewness-neutrality is implied when β3 = 0, and skewness preference and aversion are implied when β3 > 0 and β3 < 0, respectively.

2 and FP/SP > 2, respectively) tend to be over-bet by some margin, hence the relatively high SP losses. Although a proportion of the price movements in these categories represent profitable arbitrage opportunities, a further proportion may represent unsuccessful attempts to follow the 'smart money'. A direct comparison cannot be made in respect of semi-strong and strong-form efficiency; whilst the former is a prime focus in the current study, Crafts was more interested in the latter. It is useful nonetheless to analyse the current data using Crafts' price-movement categories, by tip status, as in Table 7.7. Table 7.8 then shows the number of runners moving significantly in the market, by tip status, as a percentage of all runners in each tip category. Much of the data in these tables must be treated with caution because of the lack of statistical significance, but it is nonetheless highly suggestive, given Crafts' findings. The data in Tables 7.7 and 7.8 show that, in principle, the knowledge of a horse being napped substantially improves the bettor's chances of exploiting high early prices relative to SP, especially in the case of Winsome tips, on which high returns could have been made at mean- and max-early, assuming these odds were available to real wagers, and at SP. This confirms the overall impression gained from Tables 7.3 and 7.5. Table 7.8 confirms that WAOT and WO status is a fair predictor of which horses would move most in the dataset: nearly one-third of WAOT horses contracted significantly, and one-fifth of WO horses. In addition, the knowledge that a horse is not napped at all is useful, in that NOT-category horses are not only less likely to contract substantially; those horses that do contract in this category are also associated with negative returns (bar a modest max-early profit in the 1.5 to less than 2 category, amounting to £0.16 per £ bet, generated by only two high-priced winners, that is, outside the 10–1 Crafts division).
Furthermore, the average SP of NOT runners greatly

Table 7.6 Comparison of rates of return in the current and Crafts datasets, by direction and magnitude of price movement

                          1.5 ≤ ME*/SP < 2.0   ME*/SP ≥ 2.0      1.5 ≤ SP/ME* < 2.0   SP/ME* ≥ 2.0
                          All      ≤10/1       All      ≤10/1    All      ≤10/1       All      ≤10/1
Current dataset
 Won                      21       18          6        6        11       9           0        0
 Lost                     171      100         53       37       145      61          7        1
 % winners                10.94    15.25       10.17    13.95    7.05     12.86       0        0
 Average profit per £ bet, at
  Mean-early              0.53     0.73        0.18     0.62     −0.48    −0.39       −1.00    −1.00
  Max-early               0.78     0.97        0.30     0.78     −0.40    −0.32       −1.00    −1.00
  Starting price          −0.05    0.09        −0.47    −0.28    −0.20    −0.07       −1.00    −1.00
Crafts dataset
 Won                      N/A      145         N/A      68       N/A      54          N/A      20
 Lost                     N/A      567         N/A      329      N/A      804         N/A      297
 % winners                N/A      20.37       N/A      17.13    N/A      6.29        N/A      6.3
 Average profit per £ bet, at
  Forecast price          N/A      0.64        N/A      1.41     N/A      −0.63       N/A      −0.64
  Starting price          N/A      −0.01       N/A      −0.09    N/A      −0.38       N/A      −0.13

Notes
* ME refers to the mean-early price, this being more representative of generally available prices.
1 Crafts used trade newspaper betting forecast prices as the baseline for measurement, as opposed to mean- and max-early fixed odds in this study (fixed-odds markets were infrequent in 1978, the year from which data was drawn). As in the current study, SP was the destination price. Crafts claimed that the impact of insider information could be distinguished from that of publicly available information, which would be discounted by bookmakers by the time of the opening show. This distinction cannot easily be made in a study of fixed odds, as the impact of the two types of information works simultaneously on early morning fixed odds.
2 Crafts measured price movements by the ratio of newspaper forecast price (FP) to SP (odds contracting to SP), and the ratio of SP to FP (odds extending to SP), with classes of magnitude 1.5 to less than 2, and 2 or more.
3 Returns in Table 7.6 (and Table 7.7) are calculated to a £1 level stake per bet, as this was the staking used by Crafts, and ignore transaction costs.
4 Because of the characteristics of SP betting forecasts at the time the Crafts data refer to, he limited his study to horses with an FP and/or SP of 10–1 or less. For purposes of comparison, the same procedure was adopted in Table 7.6; this gives the added benefit of allowing a relative appraisal of the performance of long and short priced runners.5

Table 7.7 Returns to a level stake of £1 per bet, current dataset, by price movement and tip status

                          1.5 ≤ ME*/SP < 2.0   ME*/SP ≥ 2.0      1.5 ≤ SP/ME* < 2.0   SP/ME* ≥ 2.0
Tip status                All      ≤10/1       All      ≤10/1    All      ≤10/1       All      ≤10/1
WAOT
 Won                      8        8           2        2        0        0           0        0
 Lost                     27       22          16       14       0        0           1        1
 % winners                27.59    26.67       11.11    12.50    N/A      N/A         0        0
 Average profit per £ bet, at
  Mean-early              1.59     2.02        1.06     1.31     N/A      N/A         −1.00    −1.00
  Max-early               1.89     2.37        1.25     1.53     N/A      N/A         −1.00    −1.00
  Starting price          0.62     0.89        −0.23    0.13     N/A      N/A         −1.00    −1.00
WO
 Won                      3        3           2        2        0        0           0        0
 Lost                     18       10          10       9        0        0           0        0
 % winners                14.29    23.08       16.67    18.18    N/A      N/A         N/A      N/A
 Average profit per £ bet, at
  Mean-early              0.66     1.69        0.20     0.31     N/A      N/A         N/A      N/A
  Max-early               1.00     2.23        0.29     0.41     N/A      N/A         N/A      N/A
  Starting price          0.05     0.69        −0.33    −0.27    N/A      N/A         N/A      N/A
OTO
 Won                      6        5           1        1        8        7           0        0
 Lost                     36       29          10       8        29       24          2        0
 % winners                14.29    14.71       9.09     11.11    21.62    22.58       0        N/A
 Average profit per £ bet, at
  Mean-early              0.82     0.43        −0.36    −0.28    0.36     −0.05       −1.00    N/A
  Max-early               1.14     0.65        −0.28    −0.17    0.60     0.08        −1.00    N/A
  Starting price          0.14     −0.09       −0.67    −0.64    1.11     0.42        −1.00    N/A
NOT
 Won                      4        2           1        1        3        2           0        0
 Lost                     90       39          17       6        116      37          4        0
 % winners                4.26     4.88        5.56     14.29    2.52     5.13        0        N/A
 Average profit per £ bet, at
  Mean-early              −0.02    −0.26       −0.41    0.65     −0.74    −0.67       −1.00    N/A
  Max-early               0.16     −0.17       −0.32    0.86     −0.71    −0.63       −1.00    N/A
  Starting price          −0.40    −0.55       −0.70    −0.14    −0.61    −0.46       −1.00    N/A

Note
* ME refers to the mean-early price, this being more representative of generally available prices.

The impact of tipster information


Table 7.8 Significant price movers as a percentage of total runners in tip categories

Category   Total runners    Contracting (ME*/SP ≥ 1.5)   Extending (SP/ME* ≥ 1.5)
           in category      Number      %                Number      %
WAOT       174              53          30.46            1           0.57
WO         169              33          19.53            0           0
OTO        1,033            53          5.13             39          3.8
NOT        2,902            112         3.86             123         4.2
All        4,278            251         5.87             163         3.8

Note
* ME refers to the mean-early price, this being more representative of generally available prices.

overestimates their true chance of winning, evidenced by substantial SP losses on these runners. The data on NOT runners in Tables 7.7 and 7.8 are difficult to square with the association claimed by Crafts between significant price movements, profitable arbitrage opportunities, and insider activity; one would expect profitable insider arbitrage to be more apparent in this category, although Crafts does suggest this is particularly a feature of low-profile races. As it is, the most profitable potential arbitrage opportunities are to be found in the categories in which horses are napped, and hence associated with publicly available information.

Conclusions

Media tips appear to have a significant impact on prices from max-/mean-early to SP, and this analysis suggests that knowledge of Winsome selections is a useful predictor of large contractions in price, with the prospect of potential arbitrage opportunities. The analysis of price movements confirms many of the outcomes of the Crafts study, although a question is raised regarding the strength of Crafts' conclusion regarding insider activity, due to the poor performance in this study of horses that are not napped. There is some evidence of semi-strong inefficiencies in respect of media tips (OTO, WO and WAOT), based on this dataset. The above-average actual SP and nominal mean-early returns are not accounted for by the differential incidence of the favourite–longshot bias on tipped and non-tipped categories. The differences in returns, therefore, may reflect an inefficient use of tips. The rates of return are not significant by conventional statistical tests, but it is suggested that further work is required on the nature of the distribution of betting returns in general. Whether the additional returns advantage at max-early prices constitutes semi-strong inefficiency depends upon the extent of arbitrage opportunities, and warrants further study of the path of prices to SP. Do the abnormal Winsome profits over three years indicate the judgement of sophisticated bettors who assess this as an aberration, and expect reversion to the mean, or is this evidence of inefficient use of information? To answer this question

78

M. A. Smith

further extension of this study should use a larger sample that looks at the total naps record of each tipster individually, which would also reduce any bias caused by concentrating only on the type of race in which Winsome specialises.

Notes
1 Alternative names are used for the newspaper and journalist's column to maintain anonymity.
2 The Sporting Life was the authoritative trade paper at that time. It is important to note that betting forecasts offer estimates of prices – they are not prices at which real bets can be struck.
3 The nap is the horse considered by the journalist to be the best bet of the day.
4 Crafts uses an alternative measure, FP/SP, which has the disadvantage of being unweighted for the amount of money needed to move the price.
5 This is an appropriate division because the favourite–longshot bias appears to become marked at odds of about 8–1 (Hausch et al., 1981).

References
Alexander, C. (2001), Market Models: A Guide to Financial Data Analysis. Wiley: Chichester.
Ali, M. M. (1979), 'Some evidence on the efficiency of a speculative market', Econometrica, 47, 387–392.
Ball, R. and Brown, P. (1968), 'An empirical evaluation of accounting income numbers', Journal of Accounting Research, Autumn, 159–178.
Conrad, J. and Kaul, G. (1993), 'The returns to long term winners and losers: bid-ask biases or biases in computed returns', Journal of Finance, 48(3), 39–63.
Crafts, N. F. R. (1985), 'Some evidence of insider knowledge in horse race betting in Britain', Economica, 52, 295–304.
Dissanaike, G. (1997), 'Do stock market investors overreact?', Journal of Business Finance and Accounting, 24(1), 27–49.
Fama, E. F. (1970), 'Efficient capital markets: a review of theory and empirical work', Journal of Finance, 25(2), 383–417.
Figlewski, S. (1979), 'Subjective information and market efficiency in a betting market', Journal of Political Economy, 87(1), 75–88.
Hausch, D. B., Ziemba, W. T. and Rubinstein, M. (1981), 'Efficiency of the market for racetrack betting', Management Science, 27(12), 1435–1452.
Kraus, A. and Stoll, H. (1972), 'Price impacts of block trading on the New York Stock Exchange', Journal of Finance, 27(2), 210–219.
Patell, J. M. and Wolfson, M. A. (1984), 'The intraday speed of adjustment of stock prices to earnings and dividend announcements', Journal of Financial Economics, 13, 223–252.
Shin, H. S. (1991), 'Optimal betting odds against insider traders', Economic Journal, 101, 1179–1185.
Snyder, W. W. (1978), 'Horse racing: testing the efficient markets model', The Journal of Finance, 33(4), 1109–1118.
Vaughan Williams, L. and Paton, D. (1997), 'Why is there a favourite–longshot bias in British racetrack betting markets?', Economic Journal, 107, 150–158.

The impact of tipster information

79

Vaughan Williams, L. (1999), 'Information efficiency in betting markets: a survey', Bulletin of Economic Research, 53, 1–30.
Vaughan Williams, L. (2000), 'Can forecasters forecast successfully? Evidence from UK betting markets', Journal of Forecasting, 19, 505–513.
Zarowin, P. (1990), 'Size, seasonality and stock market overreaction', Journal of Financial and Quantitative Analysis, 25(1), 113–125.

8

On the marginal impact of information and arbitrage Adi Schnytzer, Yuval Shilony and Richard Thorne

Introduction

It is self-evident that information is valuable, even indispensable, for optimal decision making when investing in financial markets. A question which naturally arises is: at what point, if any, does the cost of additional information gathering exceed the benefits? The question is complicated by the fact that information is not a homogeneous commodity. This distinguishes our question from that posed by Stigler (1961) on the diminishing marginal returns to (homogeneous) searching for the lowest price of a commodity. Under certain conditions, Radner and Stiglitz (1984) showed that for an expected utility maximisation problem under constraint, the marginal value of information at the point of no information is non-positive. For a similar result in a principal–agent setting see Singh (1985). These results suggest that information has a rising marginal value when information is first accumulated. Indeed, it is easy to find examples of particular scenarios where the marginal value of information1 is negative (see below) or not diminishing. The problem arises from the heterogeneity of information in financial markets and is complicated by the existence of both public and private information. On the other hand, it may be that, if investors gather information about a large number of stocks, the proposition of positive but diminishing returns to information at the successive going market equilibrium points, which develop and change over time, holds true on average. The purpose of this chapter is to present a formal representation of the information accumulation process and to use this representation to formulate the testable hypothesis that, in a financial market, the marginal value of information is, on average, positive and diminishing. Using data from a horse-betting market, it will be shown that this hypothesis cannot be rejected, in spite of the fact that it does not hold for particular horses or races.
We show that the ﬂow of inside information to the market, when its gainful exploitation is permitted, positively impacts upon the market by eradicating remaining biases in prices and that this impact is diminishing. The choice of a horse-betting market is motivated by a number of factors. First, since the betting on each race represents a separate market, it is possible to obtain data on many separate markets. Second, the institutional framework within which betting takes place in our data facilitates the transmission of both public and private information to the market. Third, the acquisition of transmitted information is


virtually costless. The costless availability of both public and (second-hand) inside information permits the focus of the chapter to be placed squarely upon the value of the information. Finally, in the context of horse betting, the marginal value of information is readily defined intuitively: additional information has value if it permits the generation of a more accurate estimate of the winning probabilities of the horses in the race than would be possible without that information. The question, then, is how may we use horse betting data to test the behaviour of the marginal value of information, on average? Pari-mutuel betting markets in the United States have received by far the most attention from researchers. A number of papers2 have shown that these markets are beset by what is known as the favourite–longshot bias. That is, bettors on the pari-mutuel systematically under-bet favourites and over-bet longshots relative to their winning frequencies. On the other hand, Busche and Hall (1988) have shown that the pari-mutuel market in Hong Kong is characterised by a reverse bias; that is, favourites are over-backed and longshots under-backed relative to their winning frequencies. We would argue that changes in the extent of any such bias in the market provide us with the appropriate measure. Using data on tote betting on harness races in Victoria, Australia at various times before the race, we show that in a betting market in which the pari-mutuel operates alongside bookmakers, betting by insiders with the latter provides valuable information to outsiders regarding the true winning probabilities of the horses. Outsiders use this information to update their expectations and the consequent change in their betting behaviour with the tote leads to an efficient, that is, unbiased, final equilibrium. In a second tote market considered, bettors bet on races taking place in a different state, where a different tote operates.
This latter tote is not available to the majority of bettors in Victoria, although prospective pay-out updates are available.3 In this case, local bettors receive information on the distant bookmaking market via a local on-course bookmaker who bets on out of state races. We show that this – less efﬁcient, because not all price changes are transmitted – information transmission mechanism leads to a signiﬁcant reduction in the extent of bias over time, but does not permit its complete removal. We use this comparison between the two markets to show that both markets are characterised by diminishing marginal value of information. We proceed as follows: The formal representation is provided in the section on ‘A formal representation’. Empirical results are presented and discussed in the section on ‘Empirical results’ while some conclusions are offered in the last section.
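The bias measure used later in the chapter (grouping horses, then regressing mean win frequency on mean betting-implied probability and inspecting the intercept) can be sketched on synthetic data. Everything below is invented for illustration: subjective probabilities are generated deliberately flatter than the true ones, so longshots are over-bet, and the fitted line should show a slope above one and a negative intercept; none of the numbers come from the chapter's data.

```python
import random

random.seed(1)

# Synthetic horses: a true win probability p_true and a subjective
# (betting-implied) probability p_sub made deliberately flatter than the
# truth, so longshots are over-bet -- a built-in favourite-longshot bias.
horses = []
for _ in range(20000):
    p_true = random.uniform(0.02, 0.5)
    p_sub = 0.7 * p_true + 0.06
    won = 1 if random.random() < p_true else 0
    horses.append((p_sub, won))

# Sort by subjective probability and split into 30 near-equal groups,
# then take group means, mirroring the chapter's grouping by pay-outs.
horses.sort(key=lambda h: h[0])
size = len(horses) // 30
xs, ys = [], []
for g in range(30):
    grp = horses[g * size:(g + 1) * size]
    xs.append(sum(h[0] for h in grp) / len(grp))  # mean subjective probability
    ys.append(sum(h[1] for h in grp) / len(grp))  # mean win frequency

# Ordinary least squares of mean win frequency on mean subjective probability.
mx, my = sum(xs) / 30, sum(ys) / 30
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx
print(f"slope = {slope:.3f}, intercept = {intercept:.3f}")
# A negative intercept signals a favourite-longshot bias (here by construction).
```

With these synthetic numbers the recovered slope is roughly 1/0.7 and the intercept roughly −0.06/0.7, so the grouped regression uncovers the distortion that was built into the subjective probabilities.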

A formal representation

There are three types of economic agent at the track. The betting public is composed of two disjoint segments – outsiders and insiders – while bookmakers add a third disjoint segment: (1) Outsiders, who have access only to public information of past records and current conditions. These are mainly pleasure bettors who bet relatively small


amounts with either the tote or the bookies. These bettors, when choosing horses to back, have a trade-off between favourites, that is, horses with a high probability of winning but small return, and longshots with a low probability of winning, but a high return. The bettors' choices of horse on the tote affect the returns. The equilibrium in bettors' decisions has been analysed by Quandt (1986) under the assumption that the objective probabilities of winning, p, are known. He finds that if bettors are risk-loving, a favourite–longshot bias must be present. A consequence, easily proved by the same means, is that risk-aversion on the part of bettors implies the opposite bias. The argument follows even if, as we assume, bettors do not know p and employ instead expectations, e = Ep. In other words, on average, over many races, we should observe the implied bias. A bias in any direction may also be present owing to faulty probabilistic reasoning on the part of the public, such as that considered by Kahneman and Tversky (1979, 1984) or Henery (1985) or, for that matter, for any other reason.4 The existence of bias in many countries is widely documented, see Hausch et al. (1994). (2) Insiders, who are usually associated with owners, trainers, drivers and other members of the trade, and have access to useful private information. An insider who wishes to gainfully employ his superior information will seek a more attractive outlet for using the information than the tote, namely a bookmaking market, where he can secure a guaranteed return. The reason is that the bettor's mere backing of a horse in the tote reduces its return, and all the more so if he has followers who try to imitate his actions. On the tote, the scope for heavy betting (plunging) is, therefore, limited and the final return subject to uncertainty.
We assume here that access to a market of bookmakers is available to the insider and that most plunging is carried out there.5 Of course, insiders may also bet with the tote, but if they do so it will be late in the betting, when uncertainty about the price is minimal. (3) Bookmakers, who sell bets at ﬁxed odds. In terms of the information at their disposal, bookmakers are assumed to be in a situation between that of outsiders and that of insiders, knowing more than the former and less than the latter. Thus, they will, on occasion, offer odds about a horse which, according to insiders, represent an unfair bet in the latter’s favour. It is under these conditions that a plunge occurs. Thus, the discrepancy between expected returns and prices, which gives rise to arbitrage opportunities, may derive from two sources. One is the bias discussed above, which is observed even by outsiders with public information. Could not a shrewd, risk-neutral bettor design a way to arbitrage the bias and make a proﬁt on average? In practice not, because the bias is not usually large enough relative to the tax on tote betting to warrant such activity.6 More important is another source for arbitrage, namely superior information held by insiders. An insider who observes a large enough gap between what the market ‘knows’ and his own more intimate knowledge may seize this opportunity and back an under-estimated horse. As noted above, this arbitrage activity will take place mostly in the bookmakers’ market, which is also composed of outsiders and insiders. If the consequent plunge is visible to outsiders in the tote market, the observers may learn something new about the horse from the plunge and follow suit.


Since the plunge has been carried out at fixed odds, insiders' returns are unaffected by any such following. We now turn to a formalisation of this information-driven arbitrage. Let Ω be our relevant sample space. Each element ω ∈ Ω is an elementary composite event, which, if known to have occurred, conveys the fullest possible information about the coming race. A typical ω includes a full description of the horses' and drivers' conditions, the owners' interests and stakes in winning or losing, track and weather conditions etc. Full information does not mean, of course, knowing the winner. Rather, define the Interpretation Function, I : Ω → Δ, which assigns to each elementary composite event the winning probability vector p = (p1, . . . , pn) for the n horses in the race, a point in the n-dimensional simplex Δ = {(p1, . . . , pn) | pi ≥ 0, i = 1, . . . , n, and Σi pi = 1}. Of course, different people may have different interpretative faculties and therefore arrive at different conclusions regarding the winning probabilities. However, because we wish to concentrate on the informational element, we shall assume that all people are equally and perfectly astute in understanding racing events and have the same interpretation function, which gives the objective probabilities of winning. Thus, the most highly informed person can know at most the realisation of a particular elementary composite event, ω, which amounts to knowing p = I(ω). On Ω, there is an a priori probability measure µ which is a Lebesgue measure over Borel subsets of Ω. This a priori probability is common knowledge and is derived statistically from past frequencies and general knowledge by all keen observers. The difference between different bettors lies in the information at their disposal. Our formal description of information follows that developed in recent years in game theory; for example, see Osborne and Rubinstein (1994, ch. 5).
A bettor's information may be described as a partition R of Ω into a set of disjoint subsets of itself; that is, R = (R1, . . . , Rm) such that Ri ∩ Rj = ∅ for i ≠ j and R1 ∪ · · · ∪ Rm = Ω.

The idea is that the bettor will know only which of the m possible events, R1, R2, . . . , Rm, took place; that is, to which Ri the realised ω belongs. The more refined the partition, that is, the greater the number of (thus smaller) sets it contains, the more revealing and useful is her information. An outsider with no access to useful information beyond past records and readily ascertainable current weather and track conditions, that is, beyond µ, has the degenerate partition R = (Ω) and can do no better than estimate the winning probabilities by E(p|Ω) = ∫_Ω I(ω) dµ. A better-informed bettor, that is, one with a more refined partition R, knows which event Ri has occurred but not, of course, which ω ∈ Ri. To appraise the winning chances of the horses, she uses the information available to her to update the a priori distribution, employing Bayes' rule, to get

E(p|Ri) = ∫_{Ri} I(ω) dµ / µ(Ri)
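The updating rule E(p|Ri) = ∫_{Ri} I(ω) dµ / µ(Ri) can be illustrated with a toy discrete example. Everything here is invented: Ω is a six-point sample space with uniform µ, I maps each composite event to a win-probability vector for three horses, and the conditional estimate is the average of I over the cell of the partition containing the realised event.

```python
# Discrete stand-in for the model's (Omega, mu, I): six equally likely
# composite events, each mapped by I to a win-probability vector for
# three horses. All numbers are illustrative.
OMEGA = ["w1", "w2", "w3", "w4", "w5", "w6"]
I = {
    "w1": (0.50, 0.30, 0.20),
    "w2": (0.55, 0.25, 0.20),
    "w3": (0.40, 0.35, 0.25),
    "w4": (0.20, 0.50, 0.30),
    "w5": (0.15, 0.55, 0.30),
    "w6": (0.10, 0.60, 0.30),
}

def estimate(cell):
    """E(p | cell): average of I over the events in the cell (uniform mu)."""
    return tuple(sum(I[w][h] for w in cell) / len(cell) for h in range(3))

def update(partition, true_w):
    """Find the partition cell containing the realised event, then estimate p."""
    cell = next(c for c in partition if true_w in c)
    return estimate(cell)

# Outsider: the degenerate partition (Omega,) -- only the prior is usable.
outsider = [set(OMEGA)]
# A better-informed bettor: a finer partition with two cells.
insider = [{"w1", "w2", "w3"}, {"w4", "w5", "w6"}]

true_w = "w5"
print("outsider:", update(outsider, true_w))  # prior mean over Omega
print("insider: ", update(insider, true_w))   # mean over the realised cell
print("true p:  ", I[true_w])
```

The finer partition pins the estimate to the average over a smaller cell, so it sits closer to the true vector I(ω), which is exactly the sense in which a more refined partition is "more revealing and useful".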


Bookmakers have a partition Q, such that for every Qj in Q there is, for each insider with partition R, a set Ri in R such that Ri ⊂ Qj ⊂ Ω. In other words, their partition is not so refined as that of insiders but is more refined than that of outsiders. They set their opening prices greater than their best estimate of the probabilities:7

E(p|Qj) = ∫_{Qj} I(ω) dµ / µ(Qj)

As the betting begins, outsider bettors make decisions based on their expected probability for each horse i to win, ei = E(pi|Ω). They will bet with both bookies and the tote, even though the former will never offer a price lower than ei for horse i. The price difference may be viewed as a premium for providing the market with initial odds, which is, in some degree, offset by the take-out of the tote. An insider, who usually specialises in a particular horse, may have a different estimate for horse j, say, E(pj|R) > ej, which is greater than the bookies' opening price, and therefore plunges the horse. If the plunge is visible to outsiders it refines their partition and reveals to them that ω is in a subset of Ω to which I assigns higher probabilities for j to win than ej, and thereby lowers the probabilities of other horses. Their estimate of horse j's chance of winning is updated upwards. Since, before the updating, outsiders were in equilibrium, which means indifference or absence of any incentive to switch horses, following the updating they have an enhanced interest in backing horse j, regardless of the direction of the initial bias, and doing so on the tote will lower its return. The typical outsider bets a small amount and can safely assume the odds given are not affected by his actions. Outsiders may also back the horse with the bookies, but they know that, since bookies have a more refined partition, they will have revised their price after the plunge to a point at which it is now greater than the expected winning probability.
Thus, again, outsiders will bet with bookies only if they are prepared to pay a premium for fixed odds. The insiders do not all act together. Some may choose to act later and some may come across late information, and so the process goes on. Suppose now that there is a plunge on horse h with the bookmakers. Alert observers get their partitions refined, directing their attention to subsets of the event they have known to occur where horse h is more highly appraised. That is, if a certain bettor knows that ω ∈ Rk, where Rk is one of his partition's sets, the bettor learns from the new plunge that ω ∈ A ⊂ Rk and would now like to bet more on horse h with the tote if E(ph|A) > E(ph|Rk), while the expected probabilities of other horses commensurably decline. The plunges may continue until racing time. In the process, information partitions get more and more refined and the expected probabilities get closer and closer to the true probabilities, p = I(ω). In summary, the prediction from our model is that the more visible is the incidence of insider trading via plunges, the more outsiders tend to imitate insiders, thereby driving the subjective probabilities, ei, towards the objective probabilities, pi. Note, we have assumed that all outsiders have access to plunges, whereas in


the Victorian market, there are bettors on- and off-course. However, all off-course tote agencies provide regular updates of provisional pay-outs, so that in practice, outsiders on-course update their preferences, bet on the tote, and thus signal to those off-course the relevant information. Also, one can predict from our approach that in the absence of a bookmakers' market, insiders who have no choice but to bet with the tote will bet smaller amounts, will thereby transmit less information to others, and any extant bias will persist.

Letting Ω and I have more structure, one can build and test more elaborate hypotheses. For example, is the marginal value of information positive, and is it increasing or decreasing? Suppose bettor i has three levels of information at three points in time; formally, ω ∈ Qi ⊂ Ri ⊂ Si. When least informed, her ignorance can be measured by µ(Si), since this is the size of the set among whose points she is unable to distinguish. Thus, her information is the complement: µ(Ω\Si) = 1 − µ(Si). The value of information at Si may be defined as

V(Si) = 1 − |E(p|Si) − I(w)|

where w is the true (unknown) state and the absolute value is the error committed by relying on Si to estimate I(w). Note that gaining more information and moving to Ri ⊂ Si could, in principle, be detrimental; that is, V(Ri) < V(Si). This could happen if, for example, w is close to the boundary of Ri and therefore less representative of it than of Si, so that I(w) < E(p|Si) < E(p|Ri). An example is provided below. Suppose now that the marginal value of information is positive and that Ω and I and the three sets are such that, for the given true point ω, the change in the estimate per unit of information acquired in moving from Ri to Qi,

[∫_{Qi} I(ω) dµ − ∫_{Ri} I(ω) dµ] / [µ(Ri) − µ(Qi)]

is smaller than the corresponding change in moving from Si to Ri; that is, the marginal value of information is diminishing.

To construct such an example, let the state be one-dimensional, with the true point w0 known only to lie in an interval [a, b], and let µ be uniform, so that the estimate based on [a, b] is the interval mean ϕ(a, b) = [1/(b − a)] ∫_a^b I(w) dw and the value of information is V = 1 − |ϕ(a, b) − I(w0)|. Information accumulates by raising a or lowering b. Differentiating V with respect to a gives

Va = [1/(b − a)²] [(b − a)I(a) − ∫_a^b I(w) dw]   for ϕ(a, b) > I(w0)
Va = [1/(b − a)²] [∫_a^b I(w) dw − (b − a)I(a)]   for ϕ(a, b) < I(w0)

For a rising I over [a, b], extra information helps if the estimate overshoots the true value and distorts if the estimate undershoots it. The same result follows for lowering b. Of course, globally information is beneficial as it drives the estimate toward the true value; that is, ϕ(a, b) → I(w0) as a, b → w0. Now we turn to the marginal value of information, where the same ambiguity holds. Because information is empirically useful, it stands to reason that we concentrate more on this issue.

Claim 2

1 The marginal value of information may be increasing or decreasing depending on the sign of the slope of I, on the sign of the estimating error and on the sign of the updating information, that is, whether a is increased or b is decreased.
2 For a rising I, the marginal value of information is decreasing everywhere, whenever information is beneficial, if

(b − a)I(b) − (1/2)(b − a)² I′(b) < ∫_a^b I(w) dw < (b − a)I(a) + (1/2)(b − a)² I′(a)

Proof Differentiating V again we get

Vaa = −ϕaa and Vbb = −ϕbb   for ϕ(a, b) > I(w0)
Vaa = ϕaa and Vbb = ϕbb   for ϕ(a, b) < I(w0)

which shows part 1 of the claim. Part 2 requires Vaa < 0 and Vbb > 0 where information helps for a rising I; that is, Va > 0 when ϕ < I, and Vb < 0 when ϕ > I. By working out the second derivatives, one finds that Vaa < 0 and Vbb > 0 together imply the inequalities of part 2. Note that a necessary, but not sufficient, condition for these inequalities is I″(a) < 0, I″(b) > 0. There are other possibilities and variations. For example, if I is rising, which would be the case in our example of health affecting the probability of winning, and concave throughout, then the marginal value of information is diminishing for positive information about the horse, or share, and increasing for negative information. To wit, horror tips accumulate in strength while good ones lose weight. The exact opposite is true if I is falling and concave throughout. Note that a plunge reveals positive information for a horse, while negative information does not have as direct and ready a way to make its presence felt in the market.
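The possibility that refining an information set lowers its value is easy to reproduce numerically. The sketch below is our own illustration, not the chapter's data: it assumes the one-dimensional set-up in which the state w lies in an interval [a, b], µ is uniform, the estimate ϕ(a, b) is the mean of I over the interval, and V = 1 − |ϕ(a, b) − I(w0)|; the particular curve I, the intervals and the true state are invented.

```python
def I(w):
    # Illustrative rising interpretation curve: win probability as a
    # function of a scalar state w (e.g. the horse's health) in [0, 1].
    return 0.1 + 0.8 * w ** 2

def phi(a, b, n=20000):
    # Mean of I over [a, b] (midpoint rule): the estimate of I(w0)
    # available to a bettor who knows only that w0 lies in [a, b].
    step = (b - a) / n
    return sum(I(a + (k + 0.5) * step) for k in range(n)) / n

def V(a, b, w0):
    # Value of the information set [a, b]: one minus the estimation error.
    return 1.0 - abs(phi(a, b) - I(w0))

w0 = 0.30                 # true state, near the lower end of the interval
print(V(0.25, 0.95, w0))  # coarse information set
print(V(0.25, 0.60, w0))  # lowering b helps: the estimate moves toward I(w0)
print(V(0.40, 0.95, w0))  # raising a here HURTS: the estimate overshoots more
print(V(0.28, 0.32, w0))  # very fine information: V approaches 1
```

With w0 near the lower boundary and a convex, rising I, the interval mean overshoots I(w0); cutting the interval from below (raising a) pushes the estimate further away and lowers V, while cutting it from above raises V, so the marginal value of a given refinement can be of either sign, as the claim states.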


Empirical results

In summary, the general prediction from our model is that the more visible is the incidence of insider trading via plunges, the more outsiders tend to imitate insiders, thereby driving the subjective probabilities, ei, towards the objective probabilities, pi. However, the extent to which the process is completed depends critically upon the specific institutional environment. Further, the method by which we can determine whether there is diminishing marginal value of information also depends upon the institutional environment. Thus, a brief description of the two markets for which we have data is appropriate. In the Victorian markets, there are bettors both on-course and off-course. The tote has a monopoly in off-course betting and competes with bookmakers at the track. There is, however, only one tote pool and pay-outs are identical across betting locations. All off-course betting locations provide regular updates of provisional pay-outs and live telecasts of the races. However, they provide no direct information on the odds offered by bookmakers. Thus, bettors off-course obtain plunging information second-hand, via the tote updates which reflect changes in the pattern of tote betting on-course. Since outsiders on-course are able to see most bookmakers' odds, they will, in practice, collect far more information than that shown by large plunges. In consequence, we would expect the final tote equilibrium to be unbiased. The second market studied here is the inter-state market. In this market, Victorians bet on the Victorian tote on races which are run outside of Victoria. Thus, the bettors do not see bookmakers on-course and neither insiders nor outsiders, who are at the track at which the race is run, can bet on the Victorian tote. There is, however, a transmission mechanism for bookmakers' price information from the track.
Virtually without exception, when a race meeting is held in either New South Wales, Queensland or South Australia – the states on whose races the Victorian tote typically offers bets – there will be a parallel meeting of some kind somewhere in Victoria. Since bookmakers are permitted to bet on races taking place at other tracks than the one at which they operate, there will always be at least one, if not more, bookmaker betting on the inter-state race. Before he sets his opening odds on the race, the bookie receives a list of the average opening prices of all horses in the race. This list is transmitted by phone and arrives via a loudspeaker which is set up in his vicinity. Thus, all interested bettors may hear the opening prices at the distant track. As betting there proceeds, there are further transmissions, now of odds changes. Thus, with one important exception, Victorian bettors on-course are provided with the relevant information regarding plunges. The exception is with respect to very late plunges. When such occur at the very end of the betting, there is insufﬁcient time for the information to be transmitted. Further, since only average market odds are reported, some important information may be missing. Finally, the information arrives in discrete bundles at irregular intervals, which implies that its transmission to projected tote payouts may be more discrete than the regular ﬂow of information provided in the Victorian market. In short, while any bias extant in the inter-state market should


also be diminished in extent over time, the extent of information flow may not be sufficiently complete to permit its eradication. We are now in a position to outline an empirical test for the presence of diminishing marginal value of information. The institutional set-up of both markets should permit bettors to obtain an increasingly accurate estimate of the horses' true winning probabilities as race time approaches. One way to measure whether this is, indeed, the case on average, is to measure the extent and manner in which any favourite–longshot bias diminishes over time in these markets. Diminishing marginal value of information could be inferred from a diminishing extent of eradication of the favourite–longshot bias over time, provided that information flows in these markets were more or less uniform over time. However, if, for example, more inside information is revealed at the start of betting, and the extent of revelation diminishes over time, then we would expect the extent of eradication of a bias also to diminish over time without any implication of diminishing marginal value of information. It should be noted that the choice of harness racing for this study is not fortuitous. Unlike jockeys, who are not permitted to bet, drivers may bet on their own horses without legal limit. Consequently, our choice eliminates any potential principal–agent problem which may exist between jockeys and the horses' owners and/or trainers. For a detailed description of the data and the manner in which they were gathered, see Appendix. We use the following definitions: bhτ = the amount bet on the tote on horse h at time τ before the race, h = 1, . . . , n, where n is the number of horses in the race; Bτ = the total amount bet on the race at time τ; and t = the tote take-out rate, not including breakage (14.25 per cent in the case of our data). Breakage, in the case of the Victorian tote, arises since pay-outs are rounded down to the nearest 10 cents.
Since rounding causes a larger percentage loss for small odds than for large odds, we follow Griffith (1949) and assume continuous pay-outs rather than pay-outs falling into 10-cent intervals. The easiest way to accomplish this is to assume that, for a sufficiently large number of observations, the mean pay-out before rounding will fall half-way between the actual pay-out and the next pay-out up. In practice, this amounts to adding 5 cents to the projected pay-out at time τ, Phτ. Let πhτ = Phτ + 0.05. Then the adjusted pay-out is given by

πhτ = Bτ(1 − t)/bhτ

and the bettors' subjective probability at time τ that horse h will win the race, phτ, is given by

phτ = bhτ/Bτ = (1 − t)/πhτ
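The pay-out and probability definitions can be sketched as follows. The bet amounts are invented; the take-out rate of 14.25 per cent, the 10-cent breakage and the 5-cent continuity adjustment follow the text, as does the final normalisation of the probabilities.

```python
def subjective_probs(bets, t=0.1425):
    """Tote-implied subjective win probabilities from the amounts bet.

    bets: amount bet on each horse at time tau (b_h_tau); t: take-out
    rate (14.25 per cent here, as in the text). Pay-outs are rounded
    down to the nearest 10 cents (breakage); following the text, 5 cents
    are added back so pay-outs are treated as continuous on average.
    """
    B = sum(bets)                          # B_tau: total bet on the race
    probs = []
    for b in bets:
        payout = B * (1 - t) / b           # pay-out before breakage
        P = int(payout * 10) / 10.0        # round down to nearest 10 cents
        pi = P + 0.05                      # adjusted pay-out pi_h_tau
        probs.append((1 - t) / pi)         # p_h_tau = (1 - t) / pi_h_tau
    s = sum(probs)                         # breakage => sum is not exactly 1,
    return [p / s for p in probs]          # so normalise, as in the text

# Invented example: four horses, dollar amounts bet on the tote.
probs = subjective_probs([5000, 2500, 1500, 1000])
print([round(p, 4) for p in probs])        # favourite first, longshot last
```

Because breakage clips more from short-priced horses in percentage terms, the raw probabilities sum to slightly less than one before the normalisation step.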

Owing to breakage, the probabilities thus calculated did not sum to exactly one over each race and were thus normalised. All statistical calculations were performed on a data set screened to remove all races in which the data were not


complete, or in which there were late scratchings, or in which any horse showed a pay-out of only $1 starting from 30 minutes before the posted race time until closing time.8 This reduced the number of observations for the data set to 2,054 races with 19,955 horses. The horses were sorted by the closing pay-outs and the sample then divided into 30 groups of as nearly as possible equal size. In addition to the pay-outs at the close of betting, data were available in viable quantities for the projected pay-outs 1, 2, 3, 5, 7, 9, 10 and 15 minutes before the actual start of the race, and 30 minutes before the official start time of the race. The latter case was chosen to obtain a set of pay-outs reflecting bettor behaviour before bookmakers had begun to offer fixed odds on-course. Sorting by prospective pay-outs at one time only means that the pay-outs for these time periods in each group reflect changing bettor evaluation of the same horses over time. The same procedure was adopted for races being run in Victoria as for those being run outside the state. For each group at each time period, mean winning frequencies, w̄iτ, and mean subjective winning probabilities, p̄iτ, were calculated and the former regressed on the latter. A favourite–longshot bias is indicated by a negative intercept in the estimated regression. Figure 8.1 shows the intercepts for both markets over time. As the diagram makes clear, in the betting prior to the opening of the bookmakers' market, there is a significant bias in both markets. Much of this bias is eradicated as soon as tote bettors learn bookmakers' opening prices and/or the nature of early

[Figure 8.1 The dynamics of the favourite–longshot bias. The figure plots the constant in the regression of the mean win frequency against the mean subjective probability (zero indicating no bias), for betting within Victoria and outside Victoria, at 30, 15, 10, 9, 7, 5, 3, 2 and 1 minutes before the race and at the close of betting; the start of on-track betting is marked.]


plunges. From that point on, there is steady convergence to efﬁciency, a state achieved in the Victorian market by around 5 minutes before the start of the race. On the basis of this result, it may be concluded that most of the valuable information has found its way into the market by this time. In the second market, the trend is similar although the bias is always more pronounced and has not been entirely removed even at the close of betting. In the case of the Victorian market, not only is the regression intercept highly insigniﬁcant (t-statistic = 0.411), but the point estimate is very low at −0.002. Table 8.1 shows the regression results consolidated as one regression for each market, with dummy variables for the intercepts and slopes of the different time periods. These results indicate the more discrete nature of the inter-state market, with all variables signiﬁcant except the dummies for 1 minute before the close. The latter lends support to the hypothesis that, in this market, any important late changes in the bookmakers’ market inter-state are not transmitted. On the other hand, in the Victorian market, there is a smooth, signiﬁcant change in the regression line until around the 5-minute mark, at which point the market has appeared to reach

Table 8.1 Regression of mean win frequency against mean subjective probability

Variable | Coefficient in Victoria | t-statistic | P > t | Coefficient in other markets | t-statistic | P > t
Slope | 1.019355 | 32.164 | 0.000 | 1.107853 | 41.001 | 0.000
Slope_1 | 0.0245735 | 0.542 | 0.589 | 0.0650679 | 1.654 | 0.099
Slope_2 | 0.0427793 | 0.934 | 0.351 | 0.117948 | 2.928 | 0.004
Slope_3 | 0.0604486 | 1.308 | 0.192 | 0.1649855 | 4.011 | 0.000
Slope_5 | 0.0785061 | 1.683 | 0.094 | 0.2534352 | 5.922 | 0.000
Slope_7 | 0.0939413 | 1.997 | 0.047 | 0.312916 | 7.121 | 0.000
Slope_9 | 0.1007083 | 2.133 | 0.034 | 0.3560262 | 7.947 | 0.000
Slope_10 | 0.1018986 | 2.156 | 0.032 | 0.3796904 | 8.386 | 0.000
Slope_15 | 0.1093001 | 2.301 | 0.022 | 0.4467271 | 9.581 | 0.000
Slope_30 | 0.3021055 | 5.791 | 0.000 | 0.8822636 | 15.739 | 0.000
Constant | −0.0020021 | −0.411 | 0.681 | −0.0111775 | −2.708 | 0.007
Dummy_1 | −0.0025288 | −0.365 | 0.715 | −0.0066958 | −1.132 | 0.259
Dummy_2 | −0.0044023 | −0.633 | 0.527 | −0.0121376 | −2.029 | 0.043
Dummy_3 | −0.0062212 | −0.890 | 0.374 | −0.016978 | −2.809 | 0.005
Dummy_5 | −0.0080795 | −1.151 | 0.251 | −0.02608 | −4.230 | 0.000
Dummy_7 | −0.0096684 | −1.372 | 0.171 | −0.0322008 | −5.152 | 0.000
Dummy_9 | −0.0103649 | −1.468 | 0.143 | −0.0366371 | −5.802 | 0.000
Dummy_10 | −0.0104871 | −1.485 | 0.139 | −0.0390722 | −6.153 | 0.000
Dummy_15 | −0.0112486 | −1.589 | 0.113 | −0.0459706 | −7.124 | 0.000
Dummy_30 | −0.0311057 | −4.193 | 0.000 | −0.0907896 | −12.604 | 0.000
Adjusted R² | 0.9717 | | | 0.9825 | |
No. of obs. | 300 | | | 300 | |

Note: Slope_x is a dummy variable for the slope of the regression at x minutes before the actual start of the race and Dummy_x is a dummy variable for the constant x minutes before the actual start of the race (except x = 30, which is 30 minutes before the official race start time).
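A pooled regression of this form can be sketched as follows. This is our own illustration on simulated data, not the authors' code: the variable names, the simulated 'true' relationship and the noise level are all assumptions, chosen only so that the design matrix has the same shape as Table 8.1 (300 observations, one base constant and slope plus an intercept dummy and a slope dummy per pre-close period).

```python
# Sketch only (simulated data): pooled OLS of mean win frequency on mean
# subjective probability, with intercept and slope dummies for each time
# period before the close, mirroring the specification of Table 8.1.
import numpy as np

rng = np.random.default_rng(0)
periods = [1, 2, 3, 5, 7, 9, 10, 15, 30]       # minutes before the start

p_close = np.linspace(0.01, 0.25, 30)          # 30 odds-group probabilities
rows, y = [], []
for t in [0] + periods:                        # 0 denotes the close
    # Hypothetical 'true' relationship: the bias shrinks as the race nears.
    w = (-0.002 * (1 + t / 10) + (1.0 + 0.003 * t) * p_close
         + rng.normal(0, 1e-4, p_close.size))
    for pi, wi in zip(p_close, w):
        row = [1.0, pi]                        # base constant and base slope
        for tau in periods:                    # dummies are zero at the close
            row += [float(t == tau), pi * float(t == tau)]
        rows.append(row)
        y.append(wi)

X = np.array(rows)                             # 300 x 20, as in Table 8.1
beta, *_ = np.linalg.lstsq(X, np.array(y), rcond=None)
base_constant, base_slope = beta[0], beta[1]   # close-of-betting estimates
```

Because the dummies are zero at the close, the base constant and slope describe the closing market alone, and each Dummy_x/Slope_x coefficient measures the shift at x minutes out.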

Marginal impact of information and arbitrage


something very close to its ﬁnal equilibrium. The fact that the regression constant is generally greater in the inter-state market may indicate that Victorian bettors are, on average, less knowledgeable about inter-state markets than their own. This would lead to more uninformed betting, a known cause of a favourite–longshot bias.9 Indeed, although the bias is not removed in this market, it appears that the presence of bookmakers, as conveyors of information, is more important in this market than the domestic market. Two striking results are neatly captured in Table 8.1: First, for both markets, the size of the regression constant is monotonically rising, while the slopes are monotonically falling. Second, at any given point of time, the constant of the within-state regression line is consistently greater than that of the out-of-state line, while the slope is consistently lower. This change in each market and the comparison between them is highlighted by the ‘lines’ in Figure 8.1. (A) Both lines manifest monotonically diminishing slopes and (B) the slope of the within-state line is consistently lower than that of the out-of-state line. It could be argued that (A) is due to concentration of the ﬂow of useful information in the early stages of the betting. However, this explanation is contradicted by (B), since the out-of-state bettors always enjoy less information, so they could not get more information to account for their larger slope. Further, we may check directly the hypothesis that more useful information arrives during the early stages of betting. In our representation, useful information is provided by plunges. Accordingly, deﬁne del_x_y as the change in a horse’s subjective winning probability between y minutes before the race and x minutes before the race, if positive, zero otherwise. Table 8.2 shows the mean and standard deviation, per minute, for this variable in our data set. 
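The del_x_y construction just defined can be sketched in a few lines. The function name and the illustrative numbers below are ours; the per-minute scaling follows the way Table 8.2 reports the statistic.

```python
# Hypothetical sketch of del_x_y: the change in a horse's subjective win
# probability between y and x minutes before the race, floored at zero
# (only plunges count as useful information), expressed per minute.
import numpy as np

def del_x_y(p_at_x, p_at_y, x, y):
    """Positive part of the probability change, per minute of the interval."""
    change = np.maximum(np.asarray(p_at_x, float) - np.asarray(p_at_y, float), 0.0)
    return change / (y - x)

# Two illustrative horses: one plunged on (probability rises from 0.10 to
# 0.16 between 15 and 10 minutes out), one drifting out (0.20 to 0.18).
rate = del_x_y(p_at_x=[0.16, 0.18], p_at_y=[0.10, 0.20], x=10, y=15)
# Only the plunge registers: the drifter contributes zero.
```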
On the basis of the values shown in Table 8.2, there is evidence in support of an increase in the flow of useful information over time in the Victorian market and an initial decrease followed by an eventual increase in the out-of-state market. We are, thus, unable to reject the hypothesis that there is, on average, diminishing marginal value of information in these markets. That is, even if in equation (1) the denominators were equal, which over time would imply a constant flow of information, the inequality is due to the numerators. The effect of a new piece of information is larger when information is scanty.

Table 8.2 Basic statistics on the flow of useful information (per minute during the relevant time interval)

Variable | Mean in Victoria (7,176 observations) | Standard deviation | Mean in other markets (12,779 observations) | Standard deviation
del1530 | 0.0007811 | 0.0021235 | 0.0012104 | 0.0029187
del1015 | 0.0008857 | 0.0023572 | 0.0013898 | 0.0040426
del910 | 0.0022199 | 0.0063435 | 0.0031131 | 0.0099793
del79 | 0.0015625 | 0.0042454 | 0.0022982 | 0.0070794
del57 | 0.0018640 | 0.0047649 | 0.0026814 | 0.0080292
del35 | 0.0020891 | 0.0057342 | 0.0032570 | 0.0089488
del23 | 0.0031813 | 0.0099417 | 0.0047984 | 0.0134331
del12 | 0.0035377 | 0.0089089 | 0.0058396 | 0.0159559
delc1 | 0.0040938 | 0.011325 | 0.006417 | 0.0180889

Conclusions

In this work we addressed two related issues:

1 the eradication over time of an inefficiency bias in a market for contingent claims due to transactions made by insiders and the information flowing from them, and
2 the marginal impact of inside information on the market.

A model was built to describe information and its updating and accumulation over time through market revelations. The major prediction is that institutional environments that afford profitable market use of inside information and facilitate its transmission will sustain less of any market bias. Data on horse betting markets were utilised to test that hypothesis and were found supportive. It was also found that the flow of information over time is not skewed toward the beginning of the trading period and that, therefore, the marginal impact of information is, on average, declining.

Appendix

The data set was compiled from pre- and post-harness race postings onto the Victoria region TABCORP website (www.tabcorp.com.au) and comprises 3,430 races with 33,233 horses starting from June 1997 to the end of February 1998. Race data were obtained from the remote site using a command-driven http browser (Lynx) and Perl operating on the university's UNIX network. Starting between 4 and 7 hours before the start of the day's races, a list of available races was downloaded and start times for harness races extracted. Starting from 70 minutes prior to the posted start time, each race's information was then saved from the remote site in Victoria onto the local host at Bar Ilan University. Each file was updated periodically so that any new information, from 2 hours before the race to the final post-race results, could usually be obtained. Due to the dynamic nature of the data acquisition, disruptions in internet access caused by overload of either the local (Bar Ilan) or remote site (Victoria) resulted in loss of information to the data set. This loss was without any discernible pattern and therefore should have no systematic influence on the analysis. During the tabulation of the data from individual races downloaded into the final data set, updates were expressed according to their occurrence relative to the actual rather than the posted start time for each race, for posting times less than 30 minutes before the listed start time. This adjustment was necessary since in 20.4 per cent of the races the actual start time of the race was up to 10 minutes later than the listed start time displayed on the Victoria TABCORP web page. We assume that bettors on-course adjust their betting behaviour to delays in the start of a race.


Notes

1 This notion is formalised in the next section of the chapter.
2 See, for example, Snyder (1978). The one exception to the existence of a favourite–longshot bias known to us is provided by Swidler and Shaw (1995).
3 For most totes in Australia, contingent prices are available via the internet and even betting is sometimes possible. Victorian tote agencies also provide an updating service.
4 See Thaler and Ziemba (1988) for a discussion of different explanations for the favourite–longshot bias.
5 See Schnytzer and Shilony (1995) for evidence of insider trading in this market.
6 We know of no study which has found a bias of sufficient size to provide after-tax arbitrage opportunities.
7 For a detailed analysis of price setting by bookmakers, see Schnytzer and Shilony (1998).
8 This final adjustment is necessary since the above equations hold true only in cases where the amount bet on a horse is not so great that the tote could not return a mandatory minimum pay-out of $1.10 for winners and still obtain the take-out rate on the race. Where betting on one horse to such an extent occurs, there is no way to deduce the subjective probability on the basis of projected pay-outs.
9 See Thaler and Ziemba (1988).

References

Busche, K. and Hall, C. D. (1988), 'An exception to the risk preference anomaly', Journal of Business, 61, 337–46.
Copeland, T. E. and Friedman, D. (1992), 'The market value of information: some experimental results', Journal of Business, 65, 241–66.
Gandar, J. M., Dare, W. H., Brown, C. R. and Zuber, R. A. (1998), 'Informed traders and price variations in the betting market for professional basketball games', Journal of Finance, 53, 385–401.
Griffith, R. M. (1949), 'Odds adjustments by American horse-racing bettors', American Journal of Psychology, 62, 290–4.
Hausch, D. B., Lo, V. S. W. and Ziemba, W. T. (1994), Efficiency of Racetrack Betting Markets, Academic Press.
Henery, R. J. (1985), 'On the average probability of losing bets on horses with given starting price odds', Journal of the Royal Statistical Society (A), 148, Part 4, 342–9.
Kahneman, D. and Tversky, A. (1979), 'Prospect theory: an analysis of decision under risk', Econometrica, 47, 263–91.
Kahneman, D. and Tversky, A. (1984), 'Choices, values and frames', American Psychologist, 39, 341–50.
Osborne, M. J. and Rubinstein, A. (1994), A Course in Game Theory, MIT Press.
Quandt, R. E. (1986), 'Betting and equilibrium', Quarterly Journal of Economics, XCIX, 201–7.
Radner, R. and Stiglitz, J. E. (1984), 'A nonconcavity in the value of information', in Boyer, M. and Kihlstrom, R. E. (eds), Bayesian Models in Economic Theory, North-Holland, Amsterdam.
Schnytzer, A. and Shilony, Y. (1995), 'Inside information in a betting market', Economic Journal, 105, 963–71.
Schnytzer, A. and Shilony, Y. (1998), 'Insider trading and bias in a market for state-contingent claims', mimeo.


Singh, N. (1985), 'Monitoring and hierarchies: the marginal value of information in a principal–agent model', Journal of Political Economy, 93, 599–609.
Snyder, W. W. (1978), 'Horse racing: testing the efficient markets model', Journal of Finance, 33, 1109–18.
Stigler, G. J. (1961), 'The economics of information', Journal of Political Economy, 69, 213–25.
Swidler, S. and Shaw, R. (1995), 'Racetrack wagering and the uninformed bettor: a study of market efficiency', The Quarterly Review of Economics and Finance, 35, 305–14.
Thaler, R. H. and Ziemba, W. T. (1988), 'Anomalies – pari-mutuel betting markets: racetracks and lotteries', Journal of Economic Perspectives, 2, 161–74.

9 Covariance decompositions and betting markets

Early insights using data from French trotting

Jack Dowie

The literature on the economics of betting markets has developed largely independently of the part of the psychological literature on judgement and decision making that focuses on the evaluation of subjective probability assessments. The aim of this chapter is to indicate how a specific type of subjective probability evaluation can be applied to racetrack data and to note the additional insights that may thereby be gained. It is shown that one can both arrive at a summary measure of the overall quality of the betting on a set of races and establish the relative contributions to this overall achievement of different types of knowledge and skill, in particular the 'discrimination' and 'calibration' displayed by the market/s. Furthermore, one can carry out this analysis for both different sub-markets and different types of event. Accordingly, it becomes possible for serious bettors to identify where their activities might be most profitably deployed, and for the operators of betting services (who have access to data not available here) to determine, on the basis of concepts not previously employed, which particular bets and events will maximise their turnover.

The underlying data relate to horse racing at the Hippodrome Paris-Vincennes in France, where the races are trotting ones. Trotting is one of the two gaits in harness racing in the English-speaking world (North America, Australasia, Britain and Ireland), pacing being the other, but pacing races are outlawed in mainland Europe and 'harness racing' there is therefore exclusively trotting. The data comprise all 663 races run during the 1997/98 'winter meeting' at Vincennes, which involves racing several days a week from early November to late February on an 'all-weather' (cinder) track.

In France there is a pari-mutuel betting monopoly, and betting takes place off-course in PMU (Pari-Mutuel Urbain) outlets throughout the country up to 13.15 on raceday ('avant la réunion' – 'ALR'). These PMU outlets are located within cafés, tabacs or other types of shop. Afterwards ('pendant la réunion' – 'PLR') betting occurs either at the Hippodrome itself (PMH) or – and increasingly – in specialist off-course betting shops called 'café-courses'. Since 1997/98 this has been extended to include betting via TV remote controls, but our data precede this. Betting in France is focused heavily on racing in the capital, Paris, and trotting at Vincennes is (astonishingly) responsible for about one-third of the annual total betting turnover in France. Of the total national betting of roughly 6 billion francs


in 2001 just over half is on trotting. In 2001 almost 98 per cent of this turnover was off-track, with about a quarter of that taking place 'PLR', this proportion having grown very rapidly in recent years. Two sets of odds are accordingly available for analysis: the final PMU ones as at 13.15, which continue to be displayed alongside the latest updates as betting occurs (PLR) at the track and elsewhere; and those at the close of all betting (equivalent to 'starting prices'), which we will call the PMH odds (even though they incorporate the money bet PMU and PLR as well). From an analytical point of view one can therefore explore the difference between these two sets of odds and establish the overall effect – size and direction – of the changes in the betting populations and in the informational circumstances in which bets are placed.

In addition, trotting at Vincennes occurs in two distinct disciplines, attelé (in harness with sulky) and monté (in saddle) – roughly a third of races are monté – and we can therefore also analyse the results by discipline. In fact, the data collected in this study also distinguish between races conducted on the 'Grande Piste' of 2,000 metres (day time) and the 'Petite Piste' of 1,300 metres (at night), between races for horses of different sex, age and class, and between the four or five major betting races of the week (the 'événements') and the rest. About half the betting and the vast majority of newspaper coverage occurs on exotic bets on these big races, which involve picking the first five, four or three in correct order (hence their name: 'tiercé-quarté-quinté'). They are usually events with large fields – minimum fourteen starters – of well-known older horses and are only occasionally 'classic' races. Our main purpose here is to introduce the probability score and its covariance decomposition as tools for the analysis of horse race data such as these and to present some relevant illustrative data.
We concentrate on the PMU/PMH comparison and the attelé/monté breakdown, but also present results for ‘événements’ versus ‘non-événements’ even though the number of the former is undesirably small.

A favourite–longshot bias?

First we report on a conventional analysis of the aggregate PMH data to see whether a 'favourite–longshot bias' of the normal sort exists. The broad verdict (Figure 9.1) would have to be that it does not, at least not in any straightforward fashion. The figure is based on odds (unnormalised) grouped into fifty-seven ranges at intervals of 0.3 (up to 1), of 0.75 (up to 10), of 2 (up to 20), of 5 (up to 50) and of 10 (up to 80), with all runners at 80–1 or more forming a final interval. Around 69 per cent of stakes were returned to win pool bettors in this set of races. A five-interval moving average seems to be the simplest way to highlight the oscillating pattern that appears to be present. The pattern might be characterised as one in which

• after the typical early high returns (c. 100 per cent) at the very shortest odds there is a gradual deterioration to c. 50–60 per cent around odds of 4 and 5/1
• a subsequent return to 'money back' somewhere around 6/1 is sustained until about 10/1
• there is then a rapid deterioration to about 80 per cent which then seems to persist until about 40/1
• finally there is a progressive deterioration of the typical sort in the upper ranges, falling to approximately 30 per cent in the 80/1 and over interval

This oscillation produces a remarkably flat regression line (y = −0.2684x + 91.352; but R² = 0.0215), which is confirmed when we normalise the odds and subject them to simple calibration analysis (Figure 9.2).

[Figure 9.1 Vincennes winter meeting 1997/98: percentage return against odds over the fifty-seven odds ranges, with a five-interval moving average of the returns.]

[Figure 9.2 Winning proportion (y) against probability assigned (x), fifty-seven odds ranges; fitted line y = 1.1346x − 0.0057, R² = 0.9081.]


What might we learn if we apply the covariance decomposition of the probability score to this data set? (All the necessary references for the following section are contained in Yates, 1982, 1988, 1994; Yates and Curley, 1985; Yates et al., 1991.)

The probability score and its decompositions

If we normalise the odds on the horses in a pari-mutuel race we arrive at the proportions of the pool bet on each and hence the collective 'subjective probability' of the betting population responsible. We can ask about the quality of these probabilities – 'how good' they are – by the criterion of 'external correspondence', in the same way as we can seek to evaluate probabilistic forecasts made in relation to weather, politics or any other topic. Broadly, probabilistic forecasts are 'externally correspondent' to the extent that high probabilities are assigned to events that ultimately occur and low ones to events that do not occur. An alternative criterion of quality or goodness is 'internal coherence', which asks, for example, whether a set of posterior probabilities reflects the normatively correct revision of prior ones in the light of the likelihood of the new evidence according to Bayes' theorem. This alternative criterion is not considered here.

One well-known and widely accepted overall evaluation of the external correspondence of a set of probability forecasts is the 'Brier score'. This is simply the mean squared error, arrived at by taking the difference between the probability assigned to each event and 1 (if it occurs) or 0 (if it does not occur), squaring the resulting difference and summing the results over the set of judgements. This score is negatively oriented, so that 0 is the best possible score, arising when probability 1 was assigned to all events that occurred and probability zero was assigned to all those that didn't: (1 − 1)² + (0 − 0)² = 0. The worst possible score is 2, arising when zero probability is assigned to all events that occurred and probability 1 assigned to all those that did not: (0 − 1)² + (1 − 0)² = 2. Such an overall quality score provides no insight into the reasons for any differences in 'goodness' between alternative sets of probability assessors or assessments.
Various decompositions have accordingly been developed to pursue greater understanding of the contribution of different skills and abilities to judgemental performance. We introduce two of the main decompositions of the Brier probability score (PS) and deﬁne them below, using horse races as the subject.
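The two steps just described – normalising odds into subjective probabilities and scoring them against the 0/1 outcomes – can be sketched as follows. This is our own illustration, not the chapter's code; the function names and the four-horse race are hypothetical, and the score is computed per starter, which appears to be how the PS column of the chapter's results is scaled (the 0-to-2 range quoted above arises when each event and its complement are both scored).

```python
# Minimal sketch (hypothetical data): odds -> subjective probabilities,
# then the probability (Brier) score as mean squared error per starter.
import numpy as np

def implied_probabilities(decimal_odds):
    """Normalise reciprocal odds so each race's probabilities sum to one."""
    raw = 1.0 / np.asarray(decimal_odds, dtype=float)
    return raw / raw.sum()

def probability_score(probs, outcomes):
    """Mean squared error between assigned probability and 0/1 outcome."""
    probs = np.asarray(probs, float)
    outcomes = np.asarray(outcomes, float)
    return float(np.mean((probs - outcomes) ** 2))

# A hypothetical four-horse race in which the favourite wins.
p = implied_probabilities([2.0, 4.0, 8.0, 8.0])  # [0.5, 0.25, 0.125, 0.125]
ps = probability_score(p, [1, 0, 0, 0])
# Perfect foresight (probability 1 on the winner) would score 0.
```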

Murphy's decomposition

PS = U + C − D

or, using the terms customarily applied,

PS = 'outcome uncertainty' + 'calibration' ('reliability') − 'discrimination' ('resolution')

where U = d(1 − d) and d is the base rate of the to-be-predicted event, in our case the proportion of starters that win, that is, the number of races divided by the number of starters. Note that this term increases (and hence PS worsens, other terms equal) as field size decreases. However, this is of no consequence in evaluation, since this term is outside the control of the probability assessor and not the subject of judgemental or forecasting skill.

C is the Calibration index (CI). Probability judgements (derived from normalised odds) are grouped into ranges (e.g. 0.100–0.149). The proportion of starters that win in a range (e.g. 0.170) is deducted from the range's midpoint value (0.125), the result (0.045) squared and multiplied by the number of starters in that range. The resulting numbers for each range are then summed and the total divided by the total number of starters to give the C index. The aim is to maximise calibration and therefore to minimise the C index.

D is the Discrimination index (DI). The same ranges are used. This time the proportion of starters that win in a range is deducted from the base rate proportion of winners (d), the result squared and multiplied by the number of starters in the range. The resulting numbers for each range are then summed and the total divided by the total number of starters to give the DI. The aim is to maximise discrimination and therefore to maximise the DI.
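Murphy's decomposition can be sketched in code as follows. This is an assumed implementation on hypothetical data: the text forms each range's representative probability from the range midpoint, whereas for simplicity this sketch uses the within-range mean, under which the identity PS = U + C − D holds exactly whenever all judgements in a range share the same value.

```python
# Sketch of Murphy's decomposition PS = U + C - D (hypothetical data).
import numpy as np

def murphy_decomposition(f, won, bin_edges):
    f = np.asarray(f, float)
    won = np.asarray(won, float)
    n = f.size
    d = won.mean()                         # base rate: winners per starter
    U = d * (1.0 - d)                      # outcome uncertainty
    C = D = 0.0
    which = np.digitize(f, bin_edges)      # assign starters to ranges
    for k in np.unique(which):
        sel = which == k
        n_k = sel.sum()
        win_rate = won[sel].mean()         # proportion of winners in range
        f_k = f[sel].mean()                # representative probability
        C += n_k * (f_k - win_rate) ** 2   # calibration index: minimise
        D += n_k * (win_rate - d) ** 2     # discrimination index: maximise
    return U, C / n, D / n

# Thirty starters at three probability levels; winners occur in exact
# proportion, so this hypothetical market is perfectly calibrated (C = 0).
f = np.repeat([0.1, 0.2, 0.4], 10)
won = np.array([1] + [0] * 9 + [1] * 2 + [0] * 8 + [1] * 4 + [0] * 6, float)
U, C, D = murphy_decomposition(f, won, bin_edges=[0.15, 0.3])
ps = np.mean((f - won) ** 2)
# Here U + C - D reproduces the probability score exactly.
```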

Yates' covariance decompositions

Yates was concerned about the nonindependence of the reliability and resolution terms in the Murphy decomposition and, for this and other reasons, suggested using conventional covariance decomposition principles to arrive at

PS = Variance d + Bias² + Minimum Variance f + Scatter f − 2(Slope × Variance d)

where d is, as above, the base rate of the to-be-predicted event, the proportion of starters that win, so that Variance d = d̄(1 − d̄), and f is the assigned probability (i.e. forecast). Bias is the mean probability assigned to a starter winning minus the mean probability of a starter winning, and so is equivalent to f̄ − d̄. In pari-mutuel markets for which the odds have been normalised this should, in principle, be zero. The mean probability of a starter winning is simply 1 over the size of the field, irrespective of the distribution of betting, and in a single race this must be the same as the average probability assigned to a starter derived from the normalised odds. The bias will differ from zero only for reasons connected with the use of variable deductions from the pool according to the odds on the winners (the French 'prélèvement supplémentaire progressif', which involves higher deductions from the win payout when the winner is 30–1 or longer) or with the rounding of odds in their journey from pari-mutuel operator to publication in a newspaper, in our case 'Paris-Turf'.


'Bias', thus defined, is regarded by Yates as a measure of 'calibration in the large', as opposed to the more conventional concept of calibration (i.e. Murphy's), which Yates calls 'calibration-in-the-small' and which has no equivalent in his covariance decomposition. Slope is the average probability assigned to winners (f1) minus the average probability assigned to non-winners (f0). The difference between these two conditional probabilities provides an alternative and intuitively more understandable measure of discrimination than Murphy's 'resolution' (DI). The slope may vary from 0 (no discrimination: average odds on winners same as average odds on non-winners) to 1 (perfect discrimination: hypothetical average probability of 1 assigned to all winners and of 0 assigned to all non-winners). We can interpret an increase in slope as a percentage improvement in discrimination. The aim is clearly to maximise slope. (The slope is in fact literally the slope of the regression line that results when probability assigned is regressed on winning proportion.) Scatter f is an index of the overall 'noisiness' of the judgements and is the result of weighting the variance of the probabilities assigned to winners and the variance of the probabilities assigned to non-winners by the relative number of winners and non-winners. The aim is to minimise scatter, subject to exploiting any discriminatory power possessed. Minimum Variance f is the minimum variance in f necessary to achieve the given slope and exploit this amount of discriminatory power. Like the Variance d and Bias this can be taken to be essentially outside the control of the forecaster (bettors in our case), given their discriminatory power, so that the evaluation of judgemental/forecasting skill can be focused on the final terms (Slope and Scatter). Minimum Variance f is equal to Slope² × Variance d.
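The Yates terms just defined can be sketched as follows. This is our own implementation on the same hypothetical thirty-starter data used earlier, not code from the chapter; with population (unadjusted) variances the five terms recombine exactly into the probability score.

```python
# Sketch of Yates' covariance decomposition (hypothetical data):
# PS = Var d + Bias^2 + Min Var f + Scatter f - 2 * Slope * Var d.
import numpy as np

def yates_decomposition(f, won):
    f = np.asarray(f, float)
    won = np.asarray(won, float)
    d = won.mean()
    var_d = d * (1.0 - d)
    bias = f.mean() - d                        # 'calibration in the large'
    f1 = f[won == 1].mean()                    # mean probability on winners
    f0 = f[won == 0].mean()                    # mean probability on losers
    slope = f1 - f0                            # discrimination: maximise
    min_var_f = slope ** 2 * var_d             # variance needed for slope
    # Scatter: within-outcome variances of f, weighted by group sizes.
    scatter = (won.sum() * f[won == 1].var() +
               (1.0 - won).sum() * f[won == 0].var()) / f.size
    return var_d, bias ** 2, min_var_f, scatter, slope

f = np.repeat([0.1, 0.2, 0.4], 10)             # probabilities on 30 starters
won = np.array([1] + [0] * 9 + [1] * 2 + [0] * 8 + [1] * 4 + [0] * 6, float)
var_d, bias2, min_var_f, scatter, slope = yates_decomposition(f, won)
ps = np.mean((f - won) ** 2)
# The five terms recombine to give the probability score.
```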
It is important to see that in a pari-mutuel market the odds may be perfectly calibrated (in Murphy's terms) – and hence the unit return at all odds the same – irrespective of the degree of discrimination ('knowledge'). To take the simplest example, imagine a set of two-horse races. If all horses were assigned either a 60 per cent chance or a 40 per cent chance and they won in proportion, the unit return would be the same at both odds. On the other hand, if all were assigned either 80 per cent or 20 per cent and won in proportion, the unit return would again be the same at both odds. However, we would clearly want to say that the market knew more – showed more ability to discriminate between winners and non-winners – in the latter case.
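The two-horse-race example can be checked numerically. The data below are hypothetical, constructed so that winners occur in exact proportion to the assigned chances: both markets are perfectly calibrated, yet the Yates slope shows the 80/20 market discriminating far more than the 60/40 one.

```python
# Worked version of the two-horse-race example (hypothetical markets).
import numpy as np

def yates_slope(f, won):
    f = np.asarray(f, float)
    won = np.asarray(won, float)
    return f[won == 1].mean() - f[won == 0].mean()

# Ten two-horse races per market; winners occur in exact proportion to
# the assigned chances, so unit returns are equal at both prices.
f_6040 = np.array([0.6, 0.4] * 10)
won_6040 = np.array([1, 0] * 6 + [0, 1] * 4, float)   # 0.6 horse wins 6/10
f_8020 = np.array([0.8, 0.2] * 10)
won_8020 = np.array([1, 0] * 8 + [0, 1] * 2, float)   # 0.8 horse wins 8/10
# Slopes: roughly 0.04 for the 60/40 market, 0.36 for the 80/20 market.
```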

Results of analysis

The 663 races constituting the data set were the entire 'meeting d'hiver' at Vincennes, which ran from 3 November 1997 to 28 February 1998. While 9,484 horses ran in these 663 races, some were part of 'écuries' of two, three or even four horses, which meant they were coupled in the win betting and formed one betting opportunity in the win pool. While individual win odds are displayed for each horse in an écurie (because the coupling does not apply to more exotic bets and the separate odds are useful information for exotic bettors), one cannot actually


ask to bet the écurie, and the écurie dividend is the same whichever horse wins. We have deleted from the data set all écurie entries apart from the one that finished highest in place order (or the one that appeared first in the result if both/all were unplaced). The deleted number was 177, so the data set comprises 9,307 betting entries. We will often refer to these as 'the starters', even though it is not strictly correct. Table 9.1 contains all the data referred to in the following section. Before moving to the decompositions, to help the reader get to grips with the table we can note that the mean probability assigned to the winner at the close of betting was 17.0 per cent (column f1 PMH = 0.1695) compared with 15.0 per cent at 13.15 off-course (column f1 PMU = 0.1498). Also that the lowest mean probability (of the estimates provided here) was 13 per cent (0.1321) for the winners of 'événements' off-course and the highest 18 per cent for monté at close of betting (0.1807). ('Événements' are almost always attelé.)

PMH versus PMU and attelé versus monté

Q: Do the final (PMH) odds show better calibration than the PMU ones and, if so, how much better?
A: The major feature of the calibration results is the very high overall calibration in all cases – confirming the conventional analysis presented earlier – except for the événements. The limited amount of data available on these may contribute to the much higher (i.e. poorer) Calibration index, but we believe there is a substantive effect as well (see below). On these limited data one could not support any claim that calibration is different between the PMU and PMH odds.

Q: Is this true of both monté and attelé races?
A: No, there is a definite suggestion that calibration improves on monté from PMU to PMH (0.0006 to 0.0004), but deteriorates on attelé (0.0003 to 0.0007). This prompts the speculation that those able to see the horses in action in the 10–20 minutes before the event (at the track or by picture transmitted into PLR locations) overrate their interpretative ability in attelé races relative to monté ones.

Q: Do the final PMH odds show better discrimination – more knowledge – than the PMU ones and, if so, how much better?
A: Yes, they do, and of the final PMH discrimination level about 20 per cent has been added since the PMU betting finished. Specifically, the final (PMH) odds for the complete data set (slope 0.1147) represent a 23.9 per cent increase in discrimination compared with the PMU odds (slope 0.0926). (The alternative – and less preferred – Murphy measure of discrimination gives a 25.9 per cent increase.) So one can say that they incorporate roughly 25 per cent more 'knowledge' than the PMU base, a formidable amount. Note that the scatter is also greater, even after taking account of the greater variance necessary

Table 9.1 Decompositions of PMH and PMU probability scores

Market | N | PS | d | f | f0 | f1 | d(1−d) | CI | DI | Var d | Bias² | Min Var f | Slope | Scatter
PMH | 9307 | 0.0593 | 0.0712 | 0.0794 | 0.0548 | 0.1695 | 0.0661 | 0.0004 | 0.0073 | 0.0661 | 0.0001 | 0.0009 | 0.1147 | 0.0074
PMH Attelé | 6379 | 0.0579 | 0.0690 | 0.0776 | 0.0528 | 0.1639 | 0.0642 | 0.0007 | 0.0070 | 0.0642 | 0.0001 | 0.0008 | 0.1111 | 0.0071
PMH Monté | 2928 | 0.0624 | 0.0762 | 0.0837 | 0.0594 | 0.1807 | 0.0704 | 0.0004 | 0.0083 | 0.0704 | 0.0001 | 0.0010 | 0.1213 | 0.0080
PMU | 9307 | 0.0607 | 0.0712 | 0.0787 | 0.0572 | 0.1498 | 0.0661 | 0.0003 | 0.0058 | 0.0661 | 0.0001 | 0.0006 | 0.0926 | 0.0061
PMU Attelé | 6379 | 0.0593 | 0.0690 | 0.0771 | 0.0547 | 0.1448 | 0.0642 | 0.0003 | 0.0053 | 0.0642 | 0.0001 | 0.0005 | 0.0901 | 0.0060
PMU Monté | 2928 | 0.0637 | 0.0762 | 0.0823 | 0.0627 | 0.1596 | 0.0704 | 0.0006 | 0.0073 | 0.0704 | 0.0000 | 0.0007 | 0.0970 | 0.0063
PMH Événement | 962 | 0.0504 | 0.0582 | 0.0687 | 0.0419 | 0.1411 | 0.0548 | 0.0036 | 0.0080 | 0.0548 | 0.0001 | 0.0005 | 0.0991 | 0.0058
PMH Non-évén't | 8345 | 0.0604 | 0.0727 | 0.0807 | 0.0564 | 0.1722 | 0.0674 | 0.0004 | 0.0075 | 0.0674 | 0.0001 | 0.0009 | 0.1158 | 0.0076
PMU Événement | 962 | 0.0511 | 0.0582 | 0.0685 | 0.0427 | 0.1321 | 0.0548 | 0.0021 | 0.0059 | 0.0548 | 0.0001 | 0.0004 | 0.0894 | 0.0055
PMU Non-évén't | 8345 | 0.0618 | 0.0727 | 0.0798 | 0.0589 | 0.1514 | 0.0674 | 0.0004 | 0.0061 | 0.0674 | 0.0001 | 0.0006 | 0.0925 | 0.0062

Note: d(1−d), CI and DI belong to the Murphy decomposition; Var d, Bias², Min Var f, Slope and Scatter to the Yates decomposition.


to exploit the greater discrimination (as indicated by the higher Minimum Variance f).

Q: Is this true of both monté and attelé races?
A: Yes, both show the same 23–25 per cent proportionate increase in discrimination from PMU to PMH. The scatter data are also parallel.

Q: Which do the betting markets know more about – monté or attelé races – and by how much?
A: It may be initially surprising to those who know that monté races have a very high relative rate of disqualification (for failing to maintain the correct trotting gait) that the answer is monté. The monté discrimination is about 9 per cent greater than that for attelé in the PMU odds, and, consistent with the previous answer, this superiority remains the same in the PMH odds. While monté fields are smaller (as evidenced by the larger d), this is supposedly dealt with in the decompositions by the incorporation of the Variance d term.

'Événements' and 'non-événements'

Q: Given the vastly greater public information and analysis applied to the 'tiercé-quarté-quinté' races compared with others, what do the data suggest on calibration?
A: The answer has to be offered with some caution because of the relatively limited number of événements in the data – they comprise only about 10 per cent of starters in our data set. (Note also that we are analysing the standard win pool odds on the horses in these races, not the unknown win odds assigned by those placing the exotic bets.) However, the data do suggest that calibration is significantly poorer. One could speculate that the 'professionals' are less interested in these races – purposely selected by the betting organisers for their likely difficulty and high returns – and hence fail to act so as to bring the out-of-line odds (and returns) into line. The implication, if this 'inefficiency' truly exists, is that there are profitable opportunities lurking in the win pool on événements.

Q: And what do the discrimination figures say in relation to this comparison?
A: Here the position is more confused. While all the discrimination figures increase from PMU to PMH, Murphy's discrimination increases more for 'événements' than 'non-événements' (36 per cent against 23 per cent), whereas Yates' slope increases less (11 per cent against 25 per cent). We need therefore to remind ourselves that these two concepts are not the same and are measuring different aspects of forecasting ability. Yates is the preferred measure, given the nonindependence of the Murphy elements, and so we support the implication of his decomposition, which is that much less knowledge becomes available late on (just before the race) in relation to these events than in relation to the ordinary ones.


J. Dowie

Conclusions

Treating racecourse odds as subjective probability distributions means that we can draw on the various scoring principles and score decompositions developed in the judgement and decision-making literature. These decompositions enable us to distinguish, in particular, between the ability of the markets concerned to (a) discriminate between winners and non-winners and (b) assign appropriate odds to starters.

In France there seems, on the basis of this limited study, little evidence of an overall bias up through the odds range in either PMU or PMH. The difference (‘inequity’) between PMU bettors (betting before 13.15) and later bettors is almost certainly down to the greatly superior information of the latter, rather than either their superior ability to assign appropriate odds to the runners, given available information, or differences in utility functions (odds preferences).

There is some suggestion that both main aspects of ‘external correspondence’ are poorer for the win pools for the ‘événements’ on which over half of French betting takes place (though most of this is exotic betting and the win pools on these events are not particularly above average). This prompts the speculation that the amount of information supplied about these races is overwhelming, even to the ‘professional’ bettors, who either perform no better than the rest in their betting on them or else choose to leave most of these races to the ‘amateurs’. In many ways this result is a confirmation of the success of the betting promoters, in conjunction with the media, in providing highly uncertain races of high quality where ‘inside information’ plays little or no part and the market is therefore strongly as well as weakly efficient.
While these decompositional analyses may initially be of main interest to academic researchers they could prove a very useful monitoring tool for betting organisers wishing to establish what is happening between different pools in different areas and at different times. In particular, differences in slope between betting populations raise a priori ‘equity’ issues and the decomposition elements could be used as quantitative signs to be monitored and, if necessary, followed up. Such analysis, when combined with information on turnover, would also enable the links between the decompositional elements and betting activity to be established and exploited in the design of bets. Substantively, the tentative implication is that betting at Vincennes on trotting is fairly (weakly) efﬁcient but with the intriguing possibilities that there is ‘overbetting’ in the 4–5/1 range but plenty of value – and in fact almost ‘fair betting’ – in the 6–10/1 range, even given the high deductions which characterise this pari-mutuel monopoly. But of course this conclusion is based on just one small sample of races and much further work is needed to substantiate it and further explore the insights to be gained from this approach.

Acknowledgements

I am grateful to Frank Yates for making his Probability Analyser software available and to Dominique Craipeau of the PMU in Paris for assistance with the data on French betting patterns.


References

Yates, J. F. (1982), ‘External correspondence: decompositions of the mean probability score’, Organizational Behavior and Human Performance, 30: 132–156.
Yates, J. F. (1988), ‘Analyzing the accuracy of probability judgments for multiple events – an extension of the covariance decomposition’, Organizational Behavior and Human Decision Processes, 41: 281–299.
Yates, J. F. (1994), ‘Subjective probability accuracy analysis’, in G. Wright and P. Ayton (eds), Subjective Probability, Chichester: John Wiley and Sons, pp. 381–410.
Yates, J. F. and Curley, S. P. (1985), ‘Conditional distribution analyses of probabilistic forecasts’, Journal of Forecasting, 4: 61–73.
Yates, J. F., McDaniel, L. S. et al. (1991), ‘Probabilistic forecasts of stock prices and earnings – the hazards of nascent expertise’, Organizational Behavior and Human Decision Processes, 49: 60–79.

10 A competitive horse-race handicapping algorithm based on analysis of covariance

David Edelman

A model for empirically determining the Competitive Strength or Class of races in a historical database is presented. The method, based on Analysis of Variance, uses horses’ successive runs and includes a necessary weight allowance. The variable is applied out-of-sample to forecast the results of future races, with a case study carried out on a set of 1,309 Australian Metropolitan Sprint races, demonstrating significant added value in both a statistical and a financial sense.

Introduction

In recent years, the scientific study of horse-race handicapping methods has established itself alongside the more traditional literature relating to other financial markets and environments as a serious, multifaceted challenge, both practical and academic. From its origins as a pastime, horse-race betting has evolved into a set of highly complex international markets, in the sense that virtually anyone in the world with sufficient knowledge and means can bet on racing events taking place in any one of the hundreds of countries with organised markets for these events. Like any other international market, horse-race betting markets contain both rational and irrational investors, behavioural components, notions of efficiency, and the scope for Technical and Fundamental Analyses.

In analogy with the literature on the Financial Markets, the literature on horse-race betting markets is divided among several categories: investment systems (Asch and Quandt, 1986; Ziemba et al., 1987), textbook-style publications (Edelman, 2001), and academically-orientated journal articles, books, and collections (Hausch et al., 1994), with a moderate degree of overlap occurring from time to time. However, there is one fundamental difference between horse-race betting markets and ‘traditional’ financial markets, which is that the tradition of the latter began with the notion of investment to either (i) enable or facilitate the production and/or delivery of goods and services, or (ii) underwrite or aggregate individual risk, both of these generally seen historically as being beneficial to mankind. The latter characteristic has meant therefore that the notion of such types of investment


has been historically encouraged and even exalted, in a Moral sense, by secular and religious institutions alike. By contrast, activities such as horse-race betting and gambling, in general, have been regarded in a negative light in varying degrees by both secular and religious institutions, there being no by-products seen as being beneﬁcial to mankind, but perhaps rather being viewed as activity guilty of attracting Capital (both human and ﬁnancial) away from more ‘worthwhile’ uses. This stigma has meant that governments tacitly agree to treat horse-race betting markets in a fashion that resembles the manner in which they treat other activities or products judged to be ‘destructive’ (such as cigarettes), and regulate and tax them in such a way as to discourage their growth and prevalence. One of the main effects of this is the fact that, in contrast to traditional ﬁnancial markets in which an average investor can be expected to earn a return without any particular skill or knowledge, in horserace betting markets, average expected returns are decidedly negative, ranging by country from about −10 per cent to −40 per cent. When put together with the widely-held view that Markets are Efﬁcient in general, the ‘average negative expectation’ property of race-betting tends to lead to its grouping with other gambling games (possibly excluding certain forms of Blackjack), where the expectation is always negative for every player, regardless of how skillful or informed that player may be. Thus, the emphasis here will be the exploration of market inefﬁciency, which will be studied through probability forecasts produced from competitive ratings. It will be shown that the methods here lead to models which not only exhibit statistically signiﬁcant added forecast value marginal to bookmakers’ predictions, but which produce a clear out-of-sample proﬁt.

Background

The assignment of probability forecasts to horse racing appears to have evolved universally into an approach generally known as handicapping. In the sport of racing, handicapping originally referred merely to a method whereby horses deemed to have better chances are weighted more heavily so as to make the chances of the various runners more even. But since this inherently involves the assessment of the chances of the various runners prior to the allocation of the weights, the term ‘handicapping’ has more commonly come to refer to the assessment step of this process.

It is of interest to note that handicapping has come to be universally carried out using an incremental assessment method. Logically, it is assumed first that in any given race, horses are handicapped correctly. From one race to the next, then, horses are handicapped based on their previous handicap ratings, plus any new relevant information since the last handicap assessment. The primary components of this change are the performance in the previous race and the change in Class (grade) from the last handicapped race to the current one. Occasionally, there are other minor changes, such as a ‘weight-for-age’ improvement for horses in the earlier years of their racing careers.
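The incremental scheme described above can be written as a one-line update rule. The sketch below is a minimal illustration; the function and argument names are mine, not the chapter's, and all inputs are assumed to be expressed in rating points.

```python
def updated_handicap(previous_rating, performance_margin, class_change,
                     weight_for_age=0.0):
    """Incremental handicap assessment: the previous rating plus the new
    information since the last assessment (previous-race performance,
    change in Class, and any weight-for-age allowance)."""
    return previous_rating + performance_margin + class_change + weight_for_age
```

The difficulty, as the next paragraph notes, lies not in the update rule itself but in quantifying the `class_change` term reliably.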


The primary weakness in this approach is the difficulty in quantifying reliably the change in Class from one race to another. It is this weakness which the competitive ratings model proposed here seeks to address.

Before proceeding, a mention of several other accepted probability assessment methods is in order. Of these, the most demonstrably effective and widely accepted type is the Multinomial Logit Regression model proposed by Bolton and Chapman (1986), Benter (1994) and others, where various variables relating to runners’ abilities and track records are regressed against actual race results. These methods have been generalised to Neural Network models (Drapkin and Forsyth, 1987). Other ‘traditional’ methods have involved probability assessments based on absolute Weight-Equivalence ratings (see Scott, 1982), Adjusted Time ratings (see Beyer, 1995; Mordin, 1998) or Pace ratings (see Brohamer, 1991).

While each of the approaches referred to above has been shown to have usefulness of at least some degree, there has yet to appear a systematic study of a concept which skillful bettors and handicappers often apply in practice, known as ‘Collateral Form’: a method whereby the Class assessment or believed ‘difficulty level’ of a race may be amended post hoc based on the subsequent performance of the various runners in that race. There appears to be no published work making this concept precise prior to the results presented here. The approach taken in the following sections takes the Collateral Form concept to its logical limit by considering, at a given point in time, the complete network of interrelationships between all races for which recorded information exists.

Methodology

We shall consider successive runs of the same horse and (rather than trying to estimate the strengths of the various horses) focus on an estimation of the overall strengths of the various races, as evidenced by the difference in (weight-corrected) performances of horses in successive races. To this end, we shall consider the differences $\Delta_{ijk}$ in adjusted beaten lengths for the same horse moving from race $i$ to race $j$, where the index $k$ allows for possibly more than one horse to have competed in these two events successively. As an additional variable, we shall use $\delta w_{ijk}$ to denote the change in carried weight associated with $\Delta_{ijk}$. Next, let $\eta_1, \eta_2, \ldots, \eta_n$ (with $\sum_{i=1}^{n} \eta_i = 0$) be parameters denoting the relative strengths of races $1, 2, \ldots, n$, let $c_w$ denote the coefficient associated with the Weight Change variable, and let $c_0$ be a constant. The model we shall consider is of the form

$$\Delta_{ijk} = c_0 + c_w\,\delta w_{ijk} - \eta_i + \eta_j + \varepsilon_{ijk}$$

where $\varepsilon_{ijk}$ denotes the error associated with factors extraneous to Class and Weight effects.


Rather than basing the estimation of the parameters $\eta_i$, $c_w$, and $c_0$ on the minimisation of

$$\sum_{ijk} \bigl\{\Delta_{ijk} - (c_0 + c_w\,\delta w_{ijk} - \eta_i + \eta_j)\bigr\}^2$$

which would tend to overemphasise longer beaten lengths, we shall consider a weighted least-squares solution, with the weights

$$t_{ijk} = \frac{1}{1 + bl^{(1)}_{ijk} + bl^{(2)}_{ijk}}$$

and employ the use of a Ridge stabilisation term $\eta_i^2$, with a cross-validated constant coefficient $K$. Summarising, we seek to minimise

$$\sum_{ijk} \frac{\bigl\{\Delta_{ijk} - (c_0 + c_w\,\delta w_{ijk} - \eta_i + \eta_j)\bigr\}^2}{1 + bl^{(1)}_{ijk} + bl^{(2)}_{ijk}} + K \sum_i \eta_i^2$$

over $\eta$, $c_w$, and $c_0$. For a fixed history of races $1, 2, \ldots, n$ this optimisation may be performed and applied to the handicapping of subsequent races. In order to analyse the performance of such a handicapping method, however, it is necessary to envision an expanding history $(1, \ldots, n_1), (1, \ldots, n_2), \ldots$, where the optimisation is performed anew as of each new day of race-meeting results and applied to the next raceday. As the number of races (and hence parameters) can be very large, an optimisation algorithm based on incremental optimisation (i.e. using the solutions as of day 1 as initial estimates of the solutions as of day 2) can be shown to save large amounts of computing time. It is also worth noting that, in programming, the use of sparse matrix representations can help considerably in the conservation of memory. Such optimisations have been performed using SCILAB (an interactive software package from INRIA) for problems containing as many as 20,000 races (parameters), with the optimisation on a 1.2 GHz AMD Athlon with 0.5 Gb RAM taking approximately ten minutes to complete.

The adjustments applied to the beaten lengths going into analyses of the above type may be performed in various ways. Since it is arguable that variation in beaten lengths beyond ten lengths or so may not contain much marginal information, a smooth truncation at approximately fifteen lengths is recommended:

$$bl_{\mathrm{trunc}} = 15 \tanh(bl_{\mathrm{raw}}/15)$$

Also, for races of varying distances, in order to account for the fact that longer races give rise to greater variation in beaten lengths, a distance adjustment may be considered, of a form similar to

$$bl_{\mathrm{adj}} = bl_{\mathrm{raw}} \Big/ \left(\frac{\mathrm{distance}}{1000}\right)^{0.6}$$

which has been found, statistically, to give roughly constant variation in beaten lengths across various distances.
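The adjustments and the weighted ridge fit above can be sketched end-to-end in a few lines. This is an illustrative reconstruction, not the chapter's implementation: the data are synthetic, SciPy's general-purpose BFGS optimiser stands in for the incremental SCILAB procedure actually used, and the value of K is an arbitrary assumption rather than a cross-validated one.

```python
import numpy as np
from scipy.optimize import minimize

def adjust_beaten_lengths(bl_raw, distance_m):
    """Smooth truncation at ~15 lengths, then distance normalisation,
    following the two formulas in the text (function name is illustrative)."""
    bl_trunc = 15.0 * np.tanh(np.asarray(bl_raw, dtype=float) / 15.0)
    return bl_trunc / (distance_m / 1000.0) ** 0.6

# Synthetic linkage data: record k joins race i_idx[k] to race j_idx[k]
# via one horse that ran in both events successively.
rng = np.random.default_rng(0)
n_races, n_obs = 6, 40
i_idx = rng.integers(0, n_races, n_obs)
j_idx = rng.integers(0, n_races, n_obs)
bl1 = adjust_beaten_lengths(rng.uniform(0, 20, n_obs), 1100)
bl2 = adjust_beaten_lengths(rng.uniform(0, 20, n_obs), 1100)
dw = rng.normal(0.0, 1.5, n_obs)        # change in carried weight
delta = bl2 - bl1                        # change in adjusted beaten lengths

K = 0.5  # ridge coefficient; cross-validated in the chapter, assumed here

def objective(theta):
    c0, cw, eta = theta[0], theta[1], theta[2:]
    eta = eta - eta.mean()               # enforce sum(eta) = 0
    pred = c0 + cw * dw - eta[i_idx] + eta[j_idx]
    w = 1.0 / (1.0 + bl1 + bl2)          # down-weight long beaten lengths
    return np.sum(w * (delta - pred) ** 2) + K * np.sum(eta ** 2)

res = minimize(objective, np.zeros(2 + n_races), method="BFGS")
eta_hat = res.x[2:] - res.x[2:].mean()   # estimated relative race strengths
```

For the expanding-history analysis the chapter describes, one would re-run this fit each raceday, warm-starting from the previous day's solution.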

An experiment

As a specific test of these methods, we will study a set of races that occurred in Australia between January 1991 and July 1998 at the Metropolitan (significantly higher-grade than average) level, and at distances of between 1,000 m and 1,200 m, inclusive. We shall not attempt to forecast any races prior to 1994, but will use the entire history of race results starting from January 1991 and continuing up until the day preceding the day of each race to be forecast. Such forecasts will be carried out for 1,309 races, after which a test against the bookmakers’ prices will be carried out using the multinomial logit model (see Bolton and Chapman, 1986; Benter, 1994, etc.), otherwise known as the Conditional Logistic Regression model, to see if the removal of the Competitive Form (CForm) variable from the model including both it and the bookmakers’ prices significantly affects the Likelihood Score on the actual outcome, and (perhaps more importantly) whether a profitable betting strategy arises.
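The conditional logistic regression used for this test treats each race as a stratum and models the winner's probability as a softmax of the runners' linear utilities. The sketch below is a generic minimal implementation under my own naming, not the scox routine shown in the Results; the columns of X would be, for example, log bookmaker odds and the CForm rating.

```python
import numpy as np

def cond_logit_nll(beta, X, stratum, winner):
    """Negative log-likelihood of a conditional (multinomial) logit:
    within each race (stratum), the winning runner's probability is
    exp(u_winner) / sum_runners exp(u), with utilities u = X @ beta."""
    util = X @ beta
    nll = 0.0
    for s in np.unique(stratum):
        m = stratum == s
        u = util[m]
        lse = u.max() + np.log(np.exp(u - u.max()).sum())  # stable log-sum-exp
        nll -= u[np.argmax(winner[m])] - lse               # winner's log-prob
    return nll
```

Minimising this over beta (for nested sets of predictors) yields the likelihood scores compared in the Results section.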

Results

The results of a conditional logistic regression analysis over 1,390 races are shown below. It appears that the CForm variable is highly significant (T = 15) marginal to the log-odds variable. When this form variable is omitted from the model, the R-squared drops from approximately 21 per cent to less than 19 per cent. The model including the Form variable without the log-odds variable is highly statistically significant (T = 13), but at an R-squared of approximately 2 per cent is virtually worthless by itself.

-->scox([logodd,cform,stratum,indwin], 'x = [1,2], str = 3, stat = 4');
7373.849 1462.882 203.071 13.599 0.608 0.024 0.001
Coef.  Val.    S.E.   T-ratio
-----------------------------
1     -1.178   0.033  -35.788
2      0.715   0.048   14.923
-----------------------------
L = 4573.30   R-sq: 0.213


-->scox([logodd,cform,stratum,indwin], 'x = [1], str = 3, stat = 4');
6158.444 1357.574 180.593 11.714 0.516 0.020 0.001
Coef.  Val.    S.E.   T-ratio
-----------------------------
1     -1.141   0.032  -35.599
-----------------------------
L = 4728.04   R-sq: 0.186

-->scox([logodd,cform,stratum,indwin], 'x = [2], str = 3, stat = 4');
956.493 11.242 0.283 0.007 0.000
Coef.  Val.    S.E.   T-ratio
-----------------------------
2      0.673   0.051   13.298
-----------------------------
L = 5682.65   R-sq: 0.022

At any given timepoint, the mean of the Race Class parameters (on which the CForm variables are based) is near zero, with standard deviation approximately equal to 0.33. The standard deviation of the (centred) CForm variable is approximately 0.42, indicating that the characteristic variation in log-odds in the fitted composite model due to CForm is about 30 per cent, which is fairly strong in betting terms. It is of interest to test the efficacy of betting runners with favourable values of the CForm variable overall, and to see if its effect differs over various odds ranges. We shall assume that betting is for a fixed gross return of 1 unit. For all runners in our sample, regardless of form history, the total outlay would be 1,696 units, for a return of 1,390, or a loss of about 18 per cent. For runners whose One-run form variable is larger than half of a standard deviation above average, the total outlay would be approximately 442 units, for a return of 540, or a profit of approximately 22 per cent. For runners of 2/1 or shorter, the outlay for all runners would be 348 units, for a return of 314, or a loss of 9.8 per cent, as compared to a loss of 20 per cent for runners longer than 2/1. Restricting to those runners with favourable values of the One-run form variable which are 2/1 or shorter, an outlay of 92 units results, for a return of 117, or a profit of approximately 27 per cent. For runners with a favourable One-run form variable which are longer than 2/1, the


profit margin is approximately 21 per cent. Surprisingly, as further odds-range breakdowns are investigated, the profit margins achieved using this criterion do not appear to vary significantly from 21 per cent, suggesting that the variable has roughly the same degree of impact across all odds ranges. In summary, it appears that even this simple version of the CForm variable is highly effective in identifying value.
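The percentage returns quoted in this section follow directly from the outlay and gross-return figures. A quick check of the arithmetic, using the numbers as reported in the text:

```python
def roi(outlay, gross_return):
    """Profit (positive) or loss (negative) as a fraction of total outlay."""
    return gross_return / outlay - 1.0

print(round(roi(1696, 1390), 3))  # all runners: roughly -18 per cent
print(round(roi(442, 540), 3))    # favourable One-run form: roughly +22 per cent
print(round(roi(348, 314), 3))    # 2/1 or shorter: roughly -9.8 per cent
print(round(roi(92, 117), 3))     # short-priced, favourable form: roughly +27 per cent
```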

Discussion

This model appears to function very well at determining Empirical Class ratings, giving rise to apparently reliable profitable betting strategies. There are several important extensions of this method which are under investigation. In the above analysis, only Sprint (1,000–1,200 m) races were analysed, where the problem of quantifying horses’ distance preferences was avoided. However, a much more powerful model would include a larger database of races over various distances, where (lifetime constant?) horse-specific distance preference models are simultaneously fitted along with the full optimisation. This clearly greatly increases the computational complexity of the model, but preliminary results suggest that the gains could be worth the additional trouble and complexity. It is believed that a significant improvement in estimation is possible by including at least one ‘non-collateral’ Class Ratings variable in the model as a predictor, changing the interpretation of the η’s to that of a competitively-determined Class Ratings adjustment. Other predictor variables can be added as well, giving rise in the end to an Index, which can then be used as an input to a final Multinomial Logit Regression to produce a probability forecast. To date, such models appear to be possible and seem to show at least some marginal improvement, based on studies currently under investigation.

References

Asch, Peter and Quandt, Richard E. (1986) Racetrack Betting: The Professor’s Guide to Strategies, Dover, MA: Auburn House.
Benter, William (1994) ‘Computer-based horse race handicapping and wagering systems: a report’, in Efficiency of Racetrack Betting Markets, San Diego: Academic Press, pp. 169–84.
Beyer, A. (1995) Beyer on Speed, New York: Houghton Mifflin.
Bolton, Ruth N. and Chapman, Randall G. (1986) ‘Searching for positive returns at the track: a multinomial logit model for handicapping horseraces’, Management Science, 32 (8), 1040–60.
Brohamer, T. (1991) Modern Pace Handicapping, New York: William Morrow and Co., Inc.
Drapkin, T. and Forsyth, R. (1987) The Punter’s Revenge, London: Chapman and Hall.
Edelman, David C. (2001) The Compleat Horseplayer, Sydney: De Mare Consultants.
Hausch, D., Lo, V. and Ziemba, W. (eds) (1994) Efficiency of Racetrack Betting Markets, San Diego: Academic Press.


Lo, Victor (1994) ‘Application of logit models to racetrack data’, in Efficiency of Racetrack Betting Markets, San Diego: Academic Press, pp. 307–14.
Mordin, N. (1998) On Time, Oswestry, UK: Rowton Press.
Scott, Donald (1982) The Winning Way, Sydney: Wentworth Press.
Snyder, Wayne N. (1978) ‘Horseracing: testing the efficient markets model’, Journal of Finance, XXXIII, 1109–18.
Ziemba, William and Hausch, Donald B. (1987) Dr Z’s Beat the Racetrack, New York: William Morrow and Co., Inc.

11 Efficiency in the handicap and index betting markets for English rugby league

Robert Simmons, David Forrest and Anthony Curran

This chapter examines the properties of two types of sports betting market: index betting and handicap betting. The former type of market has been particularly under-explored in the academic literature. Our speciﬁc application is to English rugby league matches over the 1999–2001 period. We test for market efﬁciency and for speciﬁc forms of bias in the setting of spreads and handicaps. Regression analysis suggests that favourite–underdog bias is absent in these markets. However, although we do not observe home–away bias in the index market it appears that bookmaker handicaps do not fully incorporate home advantage. Hence, the index market is found to be efﬁcient whereas the handicap market contains a particular home–away bias. We attempt to rationalise these divergent results. Simulation analysis suggests that a strategy of shopping around for lowest spreads and handicaps can improve betting returns in each market, even to the extent of delivering proﬁts in the handicap betting market.

Introduction

Sports have been played for many centuries as a means for people to satisfy (relatively) peacefully natural desires to challenge and compete against one another. Betting markets have emerged worldwide, both legally and illegally, in response to demands from people to make wagers on the outcomes of sporting contests. In the United States, there are very few jurisdictions where sports betting is legal, and the dominant market is based at Las Vegas, Nevada. The typical form of betting market there, in the cases of American football and basketball, is based upon the notion of a betting line, in which the bookmaker will quote a points spread.

Suppose the betting line places the Washington Redskins as favourites to beat the Dallas Cowboys by six points. A bet on the Redskins minus six wins only if there is a Redskins victory by seven or more points. A bet on the Cowboys plus six wins only if the Cowboys do better than a six-point defeat. A Redskins win by six points represents a push and the original stake is returned to the bettor. The typical bet will be struck at odds of 10 to 11, so the bettor must place $11 to win $10. The bettor will not make a profit by betting on each side of the line as the bookmaker attempts to achieve a balanced book with equal volumes of bets on either side of


the points spread. The points spread is adjusted by the bookmaker, in the period before the game takes place, in response to flows of money either side of the line. For example, a large volume of money on the Redskins minus six may cause the bookmaker to revise the points spread, so that the Redskins are favoured to win by, say, eight points. Note that it is the spread which is adjusted and not the odds of 10 to 11. The spread observed at the end of the betting period will reflect interaction between the demand (bettors) and supply (bookmakers) sides of the market. The National Football League (NFL) betting market has been extensively analysed, inter alia, by Gandar et al. (1988), Lacey (1990), Golec and Tamarkin (1991), Dare and MacDonald (1996), Gray and Gray (1997), Vergin (1998), Vergin and Sosik (1999), Osborne (2001), and Woodland and Woodland (2000).

In Europe, most sports betting is based on fixed odds which are announced several days before a fixture takes place and which, generally, are immovable despite weight of money or announcements of new information about the teams. Odds are typically quoted on home win, draw and away win. A bettor who bets on all three outcomes simultaneously will, in the case of UK soccer (the largest sports betting market in Europe), lose around 10.5 pence per £1 staked. This loss represents the bookmaker’s commission or over-round.

In this chapter, we examine two further types of sports betting market. First, traditional British bookmakers, with high street retail shops, offer handicap betting on rugby league. This betting market differs only in detail from the US NFL betting market. Second, by contrast, index betting is a radically different style of betting from anything found in the US.
The index betting market is relatively recent, covering many sports and a large variety of possible subjects for betting, from match scores to more specific features such as the number of cautions in a soccer match (see Harvey, 1998, for an entertaining layperson’s account; Haigh (1999) and Henery (1999) provide technical expositions). Index bets, called ‘spread bets’ in the UK, are usually made by telephone to an index betting firm. A bettor can buy or sell points around the offered spread, which is quoted on the Internet and on television text services.

Our application below is to English rugby league, and we can take, as an example, a quote by an index firm for Wigan to beat Salford by eight to eleven points. A bettor can ‘buy’ Wigan at the top side of the margin, eleven points. The total won or lost equals the unit stake multiplied by the absolute deviation of the actual points difference of the match from the predicted point at which the bet was placed. Suppose Wigan actually wins by just three points and the unit stake is £5. Then the bettor loses £5 ∗ (11 − 3), which is £40 for a unit stake. In contrast, if Wigan won by fourteen points the bettor wins £5 ∗ (14 − 11), which is £15. Alternatively, the bettor could ‘sell’ Wigan at the lower value of the spread, here eight points. This bettor believes that Salford will perform better than indicated by the spread. If Wigan wins by three points, then selling the spread at eight will return £25, or £5 ∗ (8 − 3). It is clear from this simple example that a modest unit stake can generate large gains and losses in this market, especially when compared to the likely gains and losses for a similar unit stake in the less risky fixed odds market. Index betting carries more risk than conventional betting because the magnitudes of potential gains and losses cannot be known in advance.
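The Wigan–Salford index-betting arithmetic can be captured in a small settlement function. The function and argument names are illustrative, not an index firm's API; the assertions reproduce the three outcomes worked through above.

```python
def spread_bet_pnl(buy, level, actual_margin, unit_stake):
    """Profit or loss on an index ('spread') bet: the unit stake times the
    deviation of the actual points margin from the level at which the bet
    was struck.  buy=True buys the top of the spread; buy=False sells
    the bottom."""
    if buy:
        return unit_stake * (actual_margin - level)
    return unit_stake * (level - actual_margin)

# The Wigan-Salford example from the text (spread 8-11, £5 unit stake):
assert spread_bet_pnl(True, 11, 3, 5) == -40   # bought at 11, Wigan wins by 3
assert spread_bet_pnl(True, 11, 14, 5) == 15   # bought at 11, Wigan wins by 14
assert spread_bet_pnl(False, 8, 3, 5) == 25    # sold at 8, Wigan wins by 3
```

The unbounded linear payoff is exactly why index betting carries more risk than fixed-odds betting: losses grow without limit as the result moves away from the level struck.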


The handicap betting market is restricted to rugby league, as far as UK sports are concerned. It is organised around quotations of handicaps by bookmakers, who will usually offer a wide range of betting services on various sports, either by telephone accounts or in retail outlets. Again, an example will clarify what is involved. Suppose a bookmaker quotes the Wigan–Salford rugby league match at plus 10. A bet on Wigan, here the favourite to win, will be successful if Wigan beats Salford by more than ten points. A bet on Wigan loses if Wigan does only as well as or less well than the quote of ‘plus 10’ indicates (i.e. Wigan must win by at least eleven for the bettor to win, otherwise he loses). Note that there is no equivalent to the ‘push’ present in US sports, where stakes are returned to bettors. In contrast, the bettor could back the outsider, Salford. If Salford does better than lose by ten points then the bet is successful. In rugby league handicap betting, the bookmaker offers fixed odds of 5 to 6, so a winning bet offers a profit of £5 for every £6 wagered. If the stake is £6, a winning bet then returns a total of £11 to the bettor. In rugby league betting, handicaps tend to be fixed in the build-up to matches whereas index betting quotes are allowed to vary.

In this chapter, we are concerned with the question of whether the index and handicap sports betting markets are efficient. By efficiency, we shall mean the absence of opportunity for bettors to obtain a positive expected value from a particular betting strategy. This absence of a profitable trading strategy is weak-form efficiency as defined by Thaler and Ziemba (1988). Economists are naturally interested in whether markets are efficient, and betting markets offer an excellent opportunity to study efficiency due to the precise nature of the events in the market. Unlike markets for shares and bonds, sports fixtures have well-defined termination points and clearly defined trading periods.
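The handicap settlement rules described above (win by more than the handicap at fixed odds of 5 to 6, with no push) can be sketched as a small function; names are illustrative, and the no-push convention follows the text's Wigan–Salford 'plus 10' example.

```python
def handicap_bet_return(stake, handicap, actual_margin,
                        back_favourite=True, odds=(5, 6)):
    """Total returned to the bettor on a rugby-league handicap bet at
    fixed odds of 5/6.  back_favourite=True backs the side giving the
    handicap, which must win by MORE than the handicap; the outsider
    wins only by doing better than losing by the handicap.  There is
    no push: a margin exactly equal to the handicap loses either way."""
    num, den = odds
    if back_favourite:
        wins = actual_margin > handicap
    else:
        wins = actual_margin < handicap
    return stake + stake * num / den if wins else 0.0

assert handicap_bet_return(6, 10, 11) == 11.0   # Wigan by 11: £6 returns £11
assert handicap_bet_return(6, 10, 10) == 0.0    # win by exactly 10: bet loses
```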
If market inefﬁciency is observed in sports betting, we would then wish to discover whether traders can act proﬁtably upon this. In our rugby league betting context, there are two ways in which inefﬁciency may occur. First, it is possible that the variations in handicap, or spread midpoint in the index market, are not matched one-for-one by variations in actual points differences between teams. Then the handicap or spread midpoint would not be an unbiased predictor of game outcomes and there would be favourite–underdog bias. Even then, transactions costs such as commissions may be too great to permit proﬁtable trading and efﬁciency may be sustained. A further source of bias occurs when home-ﬁeld advantage is not fully reﬂected in the spread or handicaps that are set. This is home–away bias. Home-ﬁeld advantage, where home wins are a disproportionate share of match outcomes, is a common phenomenon in team sports. From North America, Schlenker et al. (1995, p. 632) report that ‘in several studies, covering amateur and professional baseball, (American) football, basketball, and ice hockey, home teams have been found to win more often than visiting teams, usually anywhere from 53% to 64% of the time’. In our case of rugby league, our sample reveals a 60 per cent win rate for home teams. In English League soccer, where draws (ties) are a frequent outcome, home teams win about 48 per cent of all games and away teams only about 24 per cent (Forrest and Simmons, 2000).

Handicap and index betting markets


Reasons for home-ﬁeld advantage include familiarity of the home team with speciﬁc stadium and ground conditions, greater intensity of support from home fans compared to (usually much fewer) away fans, disruption to players’ off-ﬁeld routines and physical and mental fatigue associated with travelling to away grounds (Schwartz and Barsky, 1977; Courneya and Carron, 1992; Clarke and Norman, 1995). In addition, it has been alleged that home fans can exert inﬂuence on refereeing decisions in a match in favour of the home side (Courneya and Carron, 1991; Garicano et al., 2001). Our concern here is not whether this home-ﬁeld advantage exists (it clearly does) but whether it is correctly incorporated into betting markets via handicaps or spreads. If not, the question follows: can bettors take advantage of this bias to make abnormal proﬁts, which in turn violates market efﬁciency? A deeper question, which we are unable to answer here due to lack of data, is whether inefﬁciency can persist over time or whether rational arbitrageurs eliminate mispricing in the betting markets. The methods that will be used to consider these questions of betting market efﬁciency are, ﬁrst, the use of regression analysis to investigate existence and sources of bias (if any) and, second, the use of simulation to examine the proﬁtability of various betting strategies which may be guided by the results of the regression analysis. The remainder of this chapter is set out as follows. In the section on ‘Institutional background to English rugby league and data’, we outline the nature and structure of English rugby league and describe our data set. In the section, ‘A model of market efﬁciency’, we develop our empirical model, with particular attention to the identiﬁcation of home–away bias. Regression results reported in the section on ‘Tests for market efﬁciency using regression analysis’ show that index ﬁrms do incorporate home-ﬁeld advantage fully into their quoted spreads. 
In contrast, though, bookmakers fail to incorporate home-ﬁeld advantage fully into handicaps, to varying degrees according to choice of bookmaker. We attempt an explanation of the contrasting results from bookmaker and index betting markets. This motivates the attempt in the section on ‘Evidence from simulations of betting strategies’ to explore simulations of various betting strategies, including the use of ‘lowest quotes’ found by comparison of quotes across index ﬁrms and bookmakers. The ﬁnal section concludes.

Institutional background to English rugby league and data

English rugby league is a game which originated as a variation of 'rugby' in the nineteenth century. Until recently, it was played predominantly in the North of England, specifically in Lancashire and Yorkshire. A rugby league match consists of two teams of thirteen players chasing, running with, kicking and throwing an oval-shaped ball on a rectangular turf pitch about 100 metres long. Throughout 80 minutes of play, one team employs speed, strength and aggression to try to


R. Simmons, D. Forrest and A. Curran

transport the ball to the opposite end of the pitch to earn a ‘try’, similar to a touchdown in American Football. The other team uses similar traits to try to stop them. Thousands of fans show their support by dressing up in team replica shirts, jumping up and down and bellowing words of encouragement, hindrance and reprimand at appropriate times. A referee and two ‘touch’ judges attempt to maintain a sense of order by intermittently blowing a whistle and waving their arms about. The sport is noted for its physical contact, with minimal protection for players from equipment or the laws of the game. Fans tend to regard this as a positive feature and often show disdain for the less physical, but more popular, game of soccer. Points are awarded in the match for scoring goals, tries and conversions. A goal is scored by kicking the ball over the crossbar of the opponent’s huge H-frame and is worth one point. A try is achieved by placing the ball on the ground at the opponent’s end of the pitch for which four points are given. On scoring a try, a team is given the opportunity to score a goal. This is known as a conversion and earns two further points. Team quality varies considerably. It is possible for a strong, dominant team to approach a score of 100 points in a match. Conversely, weak teams may score zero points although a nil–nil scoreline is extremely rare, unlike soccer. In our sample, the highest number of points recorded by one team in a match was ninety-six and the biggest points difference was eighty; but for 90 per cent of matches, supremacy was less than thirty-eight points. Scorelines tend to resemble those found in American Football. Our data relate to the English Rugby Super League over the period 1999–2001 (statistical details can be found on http://uk.rleague.com). Although previously a winter game, declining audiences exacerbated by the growth in popularity of soccer induced the rugby league authorities to reschedule the season from March to September. 
The Super League represents the highest level of rugby league played in the UK. Over our sample period, there were fourteen teams in the 1999 season, and twelve in the other seasons. The authorities allow very limited and discretionary promotion opportunities from the second tier, currently known as the Northern Ford Premiership, but in 2001 one team (Huddersfield-Sheffield) was relegated to be replaced by the Premiership champions, Widnes. Most of the teams come from a concentrated region in the North of England of mostly small towns, such as Castleford, Halifax, St Helens and Wigan. Currently, the dominant teams are Bradford, Leeds, St Helens and Wigan. Soccer and rugby league do not co-exist well and neither Liverpool nor Manchester, famous for their soccer teams, has a rugby league franchise, despite being located close to rugby league territory. London does have a Super League franchise but it is the only southern-based team. Some teams (Wigan, Halifax) share stadia with local soccer clubs but the cities of Bradford and Leeds, each of which has a sizeable soccer club, have separate stadia. Each team played thirty games in 1999 and twenty-eight thereafter. Two points are won by the victor and, in the unusual event of a draw, each team receives one point. The top five ranking teams at the end of the season enter the play-offs. These consist of six knockout-style matches culminating in the Grand Final to determine one definitive champion team. In 2001, it was Bradford who defeated Wigan to win this honour. This structure ensures a competitive atmosphere through the season
as a team needs only to be in the upper 40 per cent of the league final placings for a chance to end the year as grand champions. Over three seasons, and for a maximum of 497 matches for which we had accessible records, we collected (a) the date and the teams involved; (b) which team was the favourite and at which team's stadium the match was held; (c) the match outcome; and (d) index firm spreads and bookmaker handicaps. Four index firms' point spreads were quoted in the daily betting newspaper, the Racing Post, and we selected four bookmakers for whom handicaps were available. The selected bookmakers comprise the three biggest retailers (Corals, William Hill and Ladbrokes) in the UK plus a wholesale supplier of odds and handicaps to independent bookmakers, Super Soccer. These data were not available electronically and library archives were searched for the data. For some weeks, issues of the newspaper were unavailable and, where they were available, not all spreads or handicaps were quoted by each index firm or bookmaker. This means that sample sizes for our regression analysis will vary according to which index firm or bookmaker is the object of attention. In particular, we only have information on index betting markets for 1999 and 2000 whereas we were able to obtain information on handicaps for the additional 2001 season. Compared to the North American literature on sports betting, we have very small sample sizes, which are an inevitable result of the immaturity of the markets which we are studying. This means that our conclusions will necessarily be tentative.

A model of market efficiency

One might test for market efficiency in rugby league by inspecting the coefficients from estimation of a regression model:

Yh = αh + βxh + random error    (1)

where Yh denotes home team's points scored minus the points scored by the away team, xh denotes handicap or midpoint of index firm spread awarded to the home team and αh and β are coefficients to be estimated. A test of weak-form efficiency would be an F-test of the joint restrictions αh = 0, β = 1.¹ A departure of the constant term from zero partly captures some home–away bias. If β > 1 then we have favourite–underdog bias where favoured teams are more likely to cover the index quote or handicap than the offered values suggest. If β < 1 then we have reverse favourite–underdog bias where underdog teams (whose handicap and index quotes will be the exact opposite of those for the favoured teams) are more likely to cover their index quote or handicap than the offered values suggest. It should be stressed that odds are fixed and invariant across matches in the handicap betting market for rugby league; all handicap bets are struck at the same odds of 5 to 6. This means that favourite–longshot bias, where favourites offer odds of better value than outsiders, cannot arise in rugby league betting. Since all bets have the same odds, the same range of bettors' wealth is covered and the potential for favourite–longshot bias is removed.
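As a hypothetical illustration of the test based on equation (1), the sketch below regresses actual points differences on quotes and checks whether the intercept is near zero and the slope near one. This is our own toy version on synthetic, unbiased-by-construction data; the chapter's actual estimation uses White standard errors and F-tests across randomised trials, which this sketch does not attempt.

```python
# Toy check of equation (1): Y = alpha + beta * x + error, where an
# efficient market implies alpha ~ 0 and beta ~ 1. Data are synthetic.
import random
random.seed(1)

def ols(x, y):
    """Closed-form OLS for a simple regression: returns (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    beta = sxy / sxx
    return my - beta * mx, beta

# Simulate an efficient market: the quote equals the expected points
# difference, and outcomes are noise around the quote.
quotes = [random.uniform(-20, 20) for _ in range(500)]
actual = [q + random.gauss(0, 12) for q in quotes]

alpha, beta = ols(quotes, actual)
# With unbiased quotes, alpha should be close to 0 and beta close to 1.
```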


However, favourite–underdog bias may remain. For instance, sentiment may encourage fans to bet on ‘their’ team to win the match. This represents an overspill of fan affection from the pitch to the betting market, where fans place a wager in order to enhance their ‘stake’ in their team’s match result. This sentimental behaviour could generate favourite–underdog bias if the favourite has stronger fan support than the underdog, where favourites tend to be larger clubs. In the Superleague, it is indeed the case that the top clubs in League rankings tend to have the greatest support. The literature on NFL betting, which has the closest North American resemblance to rugby league handicap betting, offers mixed conclusions regarding efﬁciency. Authors who ﬁnd evidence of inefﬁciency include, inter alia, Golec and Tamarkin (1991), Gandar et al. (1988) and Osborne (2001). Vergin and Sosik (1999) report home–away bias in NFL betting on ‘national focus’ games, regular season games that are nationally televised and playoff games.2 Sauer et al. (1988) could not reject efﬁciency. Gray and Gray (1997) found some evidence of inefﬁciency but also found that exploitable biases were removed over time. In the NFL, market efﬁciency must imply that points spreads offered in the handicap market are unbiased measures of the relative strengths of the competing teams. As suggested in our interpretation of equation (1), the points spread in the betting market should not be systematically greater or less than the actual difference between home and away team points. As pointed out by Golec and Tamarkin (1991) in their analysis of points-spread NFL betting, the application of the above test procedure for efﬁciency, embodied in equation (1) (or its probit counterpart used by Gray and Gray (1997)), is deﬁcient if home team betting quotes are used to predict match scores deﬁned as home team points minus away team points. 
Equation (1) is acceptable as a basis for estimation and testing of efficiency only if there is no specific bias in the market. The problem identified by Golec and Tamarkin (1991) is that the model in equation (1) masks offsetting biases. The constant term measures the average of biases that are invariant to the size of points spread. If half of the observations in the sample of matches have a positive bias in the constant term and the other half a negative bias of equal size, then the constant term is zero, yet biases exist. In NFL and rugby league, a bias in favour of home teams implies a bias against away teams. If offsetting biases are hidden, estimation of equation (1) produces a constant term of zero and also, as shown by Golec and Tamarkin (1991), a β parameter that is biased towards one, since betting lines are distorted. Favourite–underdog bias would be incorporated into both the constant term and β. Estimation fails to reject market efficiency, even though biases and inefficiency exist. The test of market efficiency requires some modification to equation (1) so as to separate favourite–underdog bias from simultaneous home–away bias. The procedure recommended by Golec and Tamarkin (1991), which we will follow for the case of rugby league betting, is to select home and away teams randomly from our sample and to create a dummy variable, HOME, which takes the value of one
if the selected team is at home and zero if it is away. The model then becomes:

Yi = αi + βi xi + γi HOME + error    (2)

where the subscript i denotes the randomly selected team. This revised model allows us to test simultaneously for favourite–underdog bias and home–away bias. If γ > 0, then index quotes or handicaps for home teams are, on average, lower than actual points achieved by the home team relative to the away team. This holds regardless of the values of quotes or handicaps. Bettors would be predicted to be relatively more successful if they back home teams rather than away teams. Conversely, if γ < 0, then index quotes or handicaps for away teams are, on average, lower than actual points achieved by away teams relative to home teams and bettors would be relatively more successful if they back away teams. Why should a sports betting market exhibit home–away bias at all? Surely such a bias is indicative of irrationality on the part of traders, particularly as home advantage is well-known? Much depends on the type of bettor being observed. Following Terrell and Farmer (1996), we can usefully distinguish between ‘professional’ and ‘pleasure’ bettors. Professional bettors only bet when they perceive an expected proﬁt. These bettors utilise available information fully and undertake a bet as an investment. In contrast, ‘pleasure’ bettors consider a bet as a complement to the sporting activity which is the object of the wager. In our case of rugby league, a signiﬁcant proportion of potential bettors may be fans who would consider betting, conditional on value, to give themselves more of a stake in the outcome. The bet adds to the fun and excitement of watching, or following, a particular rugby league team in a match. Index ﬁrms and bookmakers would be expected, consistent with proﬁt-maximising calculations of index quotes and handicaps, to take account of how sensitive this segment of the market is to index quotes and handicaps. 
If the 'pleasure' bettors are primarily home fans who bet (if at all) on home teams, then we may detect home–away bias, reflected in a non-zero coefficient on γ in the estimation of equation (2).3 There are two possible outcomes for a non-zero value of γ. First, price-discriminating bookmakers seek to exploit the sentiment of home fans by offering particularly unfavourable quotes or handicaps to these fans. Home fans are perceived as having inelastic demand and, by taking advantage of this fact in setting index quotes or handicaps, the home–away bias generates a negative value of γ. The opposite case is where fans are perceived as having elastic demand. Bookmakers and index firms may then set especially favourable terms of bets in order to attract greater turnover from home fans. In this case, we would observe a positive value of γ. Hence, existence of home–away bias is not prima facie evidence of market irrationality but may reflect the utility that fans derive from supporting their team in the betting market and be an optimal discriminatory response by bookmakers or index firms to differing demand elasticities between groups of bettors. It is most likely that bookmakers are maximising expected profits over a number
of bets and not expected proﬁts per bet. Studies of US sports betting markets tend to assume that bookmakers operate a balanced book on any particular game. A balanced book is not a requirement for bookmakers to earn proﬁts in English rugby league (or soccer) betting. A policy of offering more favourable index quotes or handicaps to home fans may generate an elastic response from betting volume. Then, the more favourable odds for home fans may be consistent with both proﬁt-maximising behaviour, assuming some possibilities for price discrimination in an imperfectly competitive betting market, and home–away bias. Offering more favourable quotes or handicaps need not imply losses for index ﬁrms or bookmakers, so long as the bet remains unfair. There remains the possibility, though, that bettors (in the aggregate) do not accurately estimate home advantage, for sentimental or other reasons, which in turn results in bookmakers setting handicaps that are not consistent with market efﬁciency. However, following the distinction between ‘pleasure’ and ‘professional’ bettors developed by Terrell and Farmer (1996), we would predict that fan sentiment and associated home–away bias are more prevalent in the handicap betting market than the more ‘exclusive’ index betting market. The higher risks attached to returns in the latter case imply a greater premium on information processing and less room for fan sentiment.
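The Golec–Tamarkin randomisation behind equation (2) can be sketched in Python (our reconstruction; the data layout is invented for illustration). For each match a focus team is chosen at random; if it is the away side, the signs of both the points difference and the quote are flipped and HOME is set to zero:

```python
# Sketch of the randomisation used to separate home-away bias from
# favourite-underdog bias: not the authors' code, just the idea.
import random

def randomise(matches, rng=None):
    """matches: list of (home_minus_away_points, home_quote) pairs.
    Returns (y, x, home) lists for the focus-team regression (2)."""
    rng = rng or random.Random(0)
    y, x, home = [], [], []
    for diff, quote in matches:
        if rng.random() < 0.5:      # focus team is the home side
            y.append(diff); x.append(quote); home.append(1)
        else:                        # focus team is the away side: flip signs
            y.append(-diff); x.append(-quote); home.append(0)
    return y, x, home

y, x, home = randomise([(12, 8), (-4, -2), (20, 15)])
```

Repeating this randomisation, as the authors do twenty times, yields a different mix of home and away focus teams each trial, which is what allows the HOME dummy to pick up any offsetting bias.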

Tests for market efficiency using regression analysis

Estimation of equation (2) is by Ordinary Least Squares (OLS), with White standard errors used to correct for heteroscedasticity. Simply estimating once with a randomised sample would not be adequate, as in addition to sampling matches from a population we would be further sampling a set of bets, namely those on teams picked out in the randomisation. A single set of estimates will not have reliable standard errors. Accordingly, for each index firm, and for each bookmaker, we repeat the randomisation procedure and estimation twenty times. The statistical significance of the coefficients can be examined using a procedure outlined in Snedecor and Cochran (1967). We count the number of cases in the twenty trials where a particular coefficient is significant at the 5 per cent level. A normal approximation can be used to test the null hypothesis that the 'true' proportion of cases where the coefficient is not equal to zero is 5 per cent. If the null is true, the observed proportion of rejections, R, is distributed normally with mean r and standard deviation s = (r(1 − r)/n)^(1/2), where n is the number of trials, here twenty. The normal deviate, with a correction for continuity, is z = (|R − r| − (2n)^(−1))/s. The critical value for this test statistic is 2.33 at a conservative 1 per cent significance level. If there are four significant (at 5 per cent) coefficients out of 20 in our trials, the value of z is 2.57 which exceeds the 1 per cent critical value. Hence, where there are four or more significant coefficients amongst twenty trials, we conclude that the particular coefficient is significantly different from zero. In the case of the coefficient β, though, we are concerned with whether this is significantly different from one and a similar procedure can be adopted for this case.
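The counting test just described is easy to reproduce (a sketch of our own, with the formula as given in the text):

```python
# Normal deviate, with continuity correction, for k significant
# coefficients (at the 5 per cent level) out of n trials, under the
# null that the true rejection rate is r = 0.05.
from math import sqrt

def z_statistic(k, n=20, r=0.05):
    R = k / n
    s = sqrt(r * (1 - r) / n)
    return (abs(R - r) - 1 / (2 * n)) / s

# Four significant coefficients out of twenty give a z statistic that
# exceeds the 2.33 critical value at the 1 per cent level, while three
# out of twenty do not: hence the threshold of four used in the tables.
```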


Equation (2) was estimated over twenty trials for each of four index firms offering spreads on rugby league matches and four bookmakers offering handicaps. In addition, for both index and handicap betting markets we report results from using the lowest spread or handicap available amongst the index firms and bookmakers, respectively. If fewer than four quotes were available, the lowest of those available was taken and if just one quote was available that was selected. This represents the 'best' index spread or handicap that, in respect of the focus team, could be obtained by shopping amongst the index firms or bookmakers. Tables 11.1A and 11.1B report our results. The coefficients shown are mean values across twenty trials. The figures in parentheses indicate in how many trials the particular coefficient estimate was significantly different from the value specified by the null (zero or one). Where this number is four or more, we can reject the null hypothesis. The following sub-sections summarise the regression results reported in Tables 11.1A and 11.1B.

Table 11.1A OLS estimation of actual points differences in handicap betting with twenty trials

Variable        Bookmaker 1   Bookmaker 2   Bookmaker 3   Bookmaker 4   Lowest handicap
CONSTANT        −2.202 (9)    −2.298 (12)   −1.703 (4)    −1.701 (3)     0.117 (0)
HANDICAP         1.012 [0]     1.010 [0]     1.009 [0]     1.015 [0]     1.003 [0]
HOME             4.830 (20)    4.885 (20)    3.640 (20)    3.145 (20)    3.985 (20)
R² (average)     0.50          0.50          0.49          0.49          0.49
N                441           481           468           454           487

Dependent variable is points difference between randomly selected focus team i and its opponent.
Notes: Table shows mean coefficients across twenty trials; ( ) is number of cases where coefficient estimate is significantly different from zero; [ ] is number of cases where coefficient estimate is significantly different from one.

Table 11.1B OLS estimation of actual points differences in index betting

Variable        Firm 1        Firm 2        Firm 3        Firm 4        Best quote
CONSTANT        −1.288 (1)    −1.161 (1)    −0.490 (2)    −1.180 (3)    0.606 (1)
SPREAD           0.983 [0]     1.013 [0]     1.032 [0]     0.965 [0]    0.996 [0]
HOME             2.927 (0)     2.807 (0)     1.175 (0)     3.146 (0)    2.537 (0)
R² (average)     0.48          0.50          0.50          0.48         0.49
N                301           294           284           296          310

Dependent variable is points difference between randomly selected focus team i and its opponent.
Notes: SPREAD denotes midpoint of index firm's point spread; Table shows mean coefficients across twenty trials; ( ) is number of cases where coefficient is significantly different from zero; [ ] is number of cases where coefficient is significantly different from one.


Neither index nor handicap betting markets exhibit favourite–underdog bias

In all our trials (200) there is not a single case where the coefficient β is significantly different from one. Average point estimates are very close to one, for each index firm, for each bookmaker and for the minimum spread and handicap. It seems, from our limited sample sizes, that index spread midpoints and bookmaker handicaps are each accurate predictors of rugby league scorelines, in the specific sense that a unit increase in index firm spread midpoint or in handicap is reflected one-for-one in the actual points difference between teams in a match. This is true for all index firms and all bookmakers in our data set. However, it does not follow from this that other biases in the setting of index quotes and handicaps are absent.

Bookmaker handicaps do not fully incorporate home advantage in rugby league

The γ coefficients are positive and significant at the 5 per cent level in all twenty trials for each of the three main retail bookmakers and for the minimum handicap across the four bookmakers. For the specialist odds-setting firm (bookmaker 4), γ coefficients are positive and significant at the 10 per cent level in all twenty trials but only significant at the 5 per cent level in two cases, below the critical threshold level of four. This would seem to give a strong indication that handicaps on home teams under-predict actual points differences in favour of these teams. The scale of home–away bias can be discerned by the size of coefficients. Across twenty trials these are 4.83, 4.89 and 3.64 for the retail bookmakers, 3.15 for the specialist odds-setter and 3.99 for the minimum handicap. Hence, a team playing at home earns an average of three to five points more than if it plays away, for any given handicap. We shall examine below whether this four-point discrepancy between home and away teams can be utilised to make abnormal returns.
On first sight, one would expect that this home–away discrepancy, which could not have been revealed without randomisation, offers the potential for higher returns from backing home teams compared to away teams.

Index firms fully incorporate home advantage into their spreads; the index betting market is efficient

The results from inspection of γ coefficients for the four index firms are extremely clear. Although these coefficients are always positive, none is significant at the 5 per cent level for any of the index firms or the minimum quote across firms. Since the constant term is also insignificant under our criterion for evaluation of trials, we are left with the conclusion that the index betting market for rugby league is weak-form efficient. There are no biases revealed in this market for the bettor to exploit. The constant term and coefficient on the home dummy are not significantly different from zero and the coefficient on spread midpoint is not significantly different from unity.


It may be the case that we have not fully explored all the possibilities for bias in either the index or handicap market. We extended our model in three further directions. First, we considered the possibility of semi-strong inefficiency (Vaughan Williams, 1999) with respect to 'fundamental' information. We added to equation (2) variables to represent cumulative (season to date) points ratios, defined as points divided by maximum possible. F-tests showed that these did not add significant explanatory power to our model. In contrast, adding index spread midpoint or handicaps did add significantly to a model containing home dummy and cumulative points ratios as variables. We conclude that both the index and handicap betting markets are semi-strong efficient. Second, following Forrest and Simmons (2001), we explored the notion that fan support could affect efficiency in the markets. In the context of rugby league, teams with large fan support (such as Bradford and Leeds) might deliver higher points differences in their matches beyond those predicted by handicaps or index spread midpoints. To capture this possibility, we created a variable to denote the difference in the previous season's average home attendance for any two teams in a match. The coefficient on this 'difference in attendance' variable was found by Forrest and Simmons to be positive and significant in most divisions and most seasons for English soccer. In rugby league the 'difference in attendance' variable was never significant in any trial for either index betting or handicap betting markets. Also in soccer, Dobson and Goddard (2001) argue that the underestimation of home advantage may vary along the range of odds in fixed-odds betting. They report an advantage to bettors (with superior returns in the 1998–99 English soccer season) from 'betting long' on away teams.
For rugby league, our third extension was to test their proposition by including an interaction variable to denote the product of home dummy and either spread midpoint or handicap. This ‘home times spread/handicap’ variable was not signiﬁcant in any trial. This leaves us with a puzzle: why is the index betting market efﬁcient relative to the bookmaker handicap market? Why do parallel betting markets deliver different outcomes, particularly in terms of presence of home–away bias? One possible rationale for the appearance of home–away bias in the handicap market together with its absence in the index market may lie in the constituency of each market. The index betting market can be characterised as comprising bettors who are not committed to particular teams. Some of these are sophisticated, professional bettors who simply desire a positive expected return. Others gain pleasure from the betting activity per se, but are not experts. To bet with index ﬁrms these bettors must be creditworthy and must be prepared to accept a higher variance of returns in the index market compared to the markets supplied by bookmakers. This combination of high risk and high returns offers an incentive for bettors to acquire and process more information surrounding the bet. In contrast, many investors in handicap betting markets may be fans who see their outlay as part of a general ﬁnancial and emotional stake in their team of allegiance. With a constituency populated largely by fans, the handicap betting market may be more prone than the index betting market to forces of bettor ‘sentiment’ (as termed by Avery and Chevalier (1999) in the context of NFL betting).


It is the absence of 'fan' bettors in the index market that is critical here. Index firms can choose a spread to maximise profit (on both sides of the quote). With efficiency, 'pleasure' bettor losses will be exactly offset by professional bettor gains and index firms rely on the over-round to generate profit.4 In the handicap market, a bias in the direction of home fans is acceptable to bookmakers so long as the bias is not sufficient to generate a positive expected return to these bettors. Index firms, lacking the presence of 'fan' bettors, do not enjoy this counter-balance and must offer more efficient, unbiased quotes in order to avoid a loss. In an interesting parallel, some US economists, such as Golec and Tamarkin (1991), have pointed to the relative efficiency of college football betting markets compared to betting on the professional game. Their argument is that amateur or 'pleasure' bettors are prevalent in the NFL betting markets but are largely absent from the college game, which is less exposed to media coverage and publicity. According to Gandar et al. (1988), commenting on handicap betting in the NFL, which we have argued is a reasonably close approximation to handicap betting in rugby league: 'the pool of money wagered by the unsophisticated public dominates the pool of money wagered by knowledgeable bettors'. The contrasting composition of bettor types between handicap and index betting markets could help explain why bookmakers can afford to set less efficient quotes than index firms.

Evidence from simulations of betting strategies

The regression analysis points to efficiency in the index betting market and a positive home–away bias in the bookmaker handicap betting market. The purpose of this section is to investigate the profitability of various betting strategies in order to provide a check against the model. In the index betting market, we predict no success from a set of betting strategies. That is, returns should not be positive for sets of index bets placed with any of our four firms. Since home–away bias has been detected in the handicap betting market, we can determine whether a strategy of backing home teams would deliver positive profits for bettors. Ideally, the model of the rugby league betting market should be used to make out-of-sample forecasts rather than making predictions from the existing sample. Also, Sauer (1998) notes the tendency for allegedly successful trading rules for profitable betting to disappear over time (see Tryfos et al. (1984) and Sauer et al. (1988) for examples from the NFL). To track the persistence of profitable betting strategies would require, as Sauer (op. cit.) and Osborne (2001) note, a large number of observations covering many seasons. Unfortunately, in the case of rugby league betting, we are constrained by the small number of available observations to assess profitability of betting strategies by simulation within the sample. What denotes successful betting in rugby league? In the handicap market, success is easily gauged as a win rate defined as the number of wins as a proportion of total number of bets placed. A win rate above 54.55 per cent would indicate a profitable strategy over our sample period.5 In the point spread market, the overall return can be computed, which is the sum of winnings and losses from all bets made.
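The 54.55 per cent break-even figure follows directly from the fixed odds of 5 to 6, as this small sketch of our own shows: a winning £6 stake earns £5 profit, so the bettor needs to win just over six bets in every eleven to come out ahead.

```python
# Break-even win rate for fixed-odds handicap bets at 5 to 6:
# zero expected profit requires  w * profit == (1 - w) * stake.
def breakeven_win_rate(profit=5, stake=6):
    """Win rate at which expected profit per bet is zero."""
    return stake / (stake + profit)

# breakeven_win_rate() is 6/11, i.e. about 54.55 per cent.
```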

Handicap and index betting markets

127

Table 11.2 Example of index betting from a match with point spread (8–11)

Bets taken        Return from     Return from     Return from
                  outcome = 3     outcome = 9     outcome = 16
(Buy at 11) £2        −16             −4              10
(Buy at 11) £3        −24             −6              15
(Sell at 8) £5         25             −5             −40
Total                 −15            −15             −15
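The settlement arithmetic behind Table 11.2 can be reproduced in a few lines (a sketch; the function name and bet encoding are ours, not the index firms'):

```python
def index_return(side: str, price: float, stake: float, outcome: float) -> float:
    """Bettor's return on a spread bet: a 'buy' wins (outcome - price)
    per point staked, a 'sell' wins (price - outcome) per point staked."""
    if side == "buy":
        return (outcome - price) * stake
    return (price - outcome) * stake

# The three bets of Table 11.2, placed against a quoted spread of (8-11)
bets = [("buy", 11, 2), ("buy", 11, 3), ("sell", 8, 5)]
for outcome in (3, 9, 16):
    total = sum(index_return(side, price, stake, outcome)
                for side, price, stake in bets)
    print(outcome, total)  # the bettors' combined return is -15 for every outcome
```

The firm's margin (+15 here) is locked in whatever the match outcome, which is the point the table illustrates.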

In an efficient market, the actual return from employing betting rules should not be significantly different from the expected return, given the over-round in the market, from betting at random. The expected return from random betting in the handicap market is −10.4 per cent.6 The computation of expected return in the index market is best seen using an example. The index firms' advantage lies in the margin between the buy and sell point. Whatever the match outcome, every pound of unit stake that is taken on both sides of the spread guarantees revenue to the index firm equal to the width of the spread, normally three pounds. Table 11.2 shows a simplified example relating to a match with point spread (8–11). When the firms are successful in equating betting volumes placed on either side of the spread, a single bet of £1 per unit placed at random will earn the firm £1.50. Allowing for rare cases where the spread width is four points, not three, the observed average spread width is 3.06. The over-round is then £1.53 per £1 unit stake. We proceed to evaluate the returns to some very simple betting rules applied to our sample covering the 1999 and 2000 seasons for index betting and the 1999, 2000 and 2001 seasons for handicap betting.7 Initially at least, simulations proceed on the assumption that all bets are placed with the same bookmaker or index firm. This restriction is relaxed below, to permit shopping between bookmakers and index firms.

Bet on all home teams

The positive coefficient observed on the HOME variable in the previous section confirmed the existence of a bias in the home–away dimension. Results of betting on home teams are presented in Table 11.3A for bookmakers offering handicaps and Table 11.3B for index firms. Placing bets on the home team earns superior returns to those of random betting in both markets. A win rate of between 50.4 and 53.5 per cent is achieved in the handicap market, depending on choice of bookmaker.
In line with our regression results, t-tests show that the percentage losses at bookmakers 1 and 2 are significantly lower than random betting would offer, with p-values of 0.01 or less. Although the win rates at bookmakers 3 and 4 are above the 48 per cent rate indicated by random betting, they are not significantly higher than this figure.

128

R. Simmons, D. Forrest and A. Curran

Table 11.3A Simulation results from handicap betting on all home teams

                 Bookmaker 1   Bookmaker 2   Bookmaker 3   Bookmaker 4
Number of bets       441           481           468           454
Bets won             236           257           236           231
Win rate (%)        53.5          53.4          50.4          50.9
Profit              −8.3          −9.8         −35.3         −30.5
Profit (%)         −1.88∗        −2.04∗        −7.54         −6.72

Notes
Profit is the difference of returns from winning bets at odds of 5 to 6, including return of stake, and value of bets placed, and assumes zero tax.
∗ Denotes that a t-test of the null hypothesis that computed profit is −10.4 per cent, the value associated with random selection of bets, is rejected at the 1 per cent level.
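The profit figures in Table 11.3A follow from the win counts and the 5–6 odds stated in the notes; a sketch of the calculation (the function name is ours):

```python
def handicap_profit(bets: int, wins: int, win_amount: float = 5, stake: float = 6):
    """Profit on `bets` unit-stake handicap bets of which `wins` succeed.

    At odds of 5-6, a winning £1 stake earns £5/6 profit (plus return of
    the stake); a losing bet forfeits the £1 staked."""
    profit = wins * win_amount / stake - (bets - wins)
    return profit, 100 * profit / bets

# Bookmaker 1 of Table 11.3A: 441 bets, 236 winners
profit, pct = handicap_profit(441, 236)
print(round(profit, 1), round(pct, 2))  # -8.3 -1.89 (the table reports -1.88)
```

The same call with (481, 257) recovers bookmaker 2's −9.8 and −2.04 up to rounding.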

Table 11.3B Simulation results from index betting on all home teams

                 Firm 1    Firm 2    Firm 3    Firm 4
Number of bets     301       294       284       296
Return             −63     −38.5    −232.5     −62.5
Return per bet   −0.21     −0.13     −0.82     −0.21

With odds of five to six, positive profits from a 'home only' strategy cannot be found at any bookmaker. From Table 11.3B, we see that the observed return per bet for betting only on the home team at each index firm is higher (less bad) than the expected −£1.53. Contrary to our regression results, for three of the index firms' spreads the home bias is proportionately greater than at any of the handicap firms shown in Table 11.3A, although again there are no opportunities for positive profits once the over-round is taken into account.

Bet on favourite or underdog

For these strategies all matches are again covered, since in a two-team contest designating one team as 'favourite' automatically implies that the other is the 'underdog'. Simulation results are summarised in Tables 11.4A (handicaps) and 11.4B (index betting).

Table 11.4A Simulated win rates from betting on favourites or underdogs in the handicap betting market

              Number    Bet on all favourites     Bet on all underdogs
              of bets   Bets won   Win rate (%)   Bets won   Win rate (%)
Bookmaker 1     417       195         46.8          206         49.4
Bookmaker 2     451       200         44.7          224         49.7
Bookmaker 3     443       198         44.7          223         50.3
Bookmaker 4     435       202         46.4          221         50.8

Table 11.4B Simulated returns from betting on favourites or underdogs in the index betting market

           Number    Bets on all favourites (buy)   Bets on all underdogs (sell)
           of bets   Return     Return per bet      Return     Return per bet
Firm 1       302      −337          −1.12            −589          −1.95
Firm 2       295      −339.5        −1.15            −572.5        −1.95
Firm 3       285      −313.5        −1.10            −560.5        −1.97
Firm 4       297      −458          −1.54            −442          −1.49

In the handicap market, betting only on the favourite is a bad strategy: at all four bookmakers the win rate is below the expected rate of 48 per cent, so this strategy is inferior to betting at random. The highest win rate is 46.8 per cent over the period – far short of the 54.55 per cent required for positive returns. The number of times the match result is equal to the handicap, resulting in all bets losing, is a relevant factor in the success of the strategy. This occurred several times with bookmaker 2, in particular, causing both strategies of betting on and against the favourite to deliver expected returns below normal. In contrast,
betting on underdogs delivers slightly higher win rates than would be expected from random play. In the index market, a loss of £1.53 per bet is expected when selection of buy or sell is random. Buying every spread at one firm over two seasons gives an average loss of £1.23, which is not as bad as would be predicted from random selection. A 'buy' strategy is clearly superior to selling all spreads; selling results in a loss of between £1.49 and £1.97 per bet. Placing wagers in the index market according to whether teams are favourites or underdogs cannot provide profit: betting £1 per unit results in a loss of several hundred pounds over the two-year period. Neither betting consistently on favourites nor on underdogs in either market is suggestive of the existence of bias. The small deviations from expected returns seem to be generated by random noise. We suspect that bias in the favourite–underdog dimension is not a source of inefficiency in the betting markets for Super League rugby.8

Shopping for 'best' quotes

The literature on sports betting tends to assume that all bookmakers publish similar odds and spreads and that arbitrage possibilities are absent. In North America, this is a reflection of the influence of a small number of major Las Vegas casinos on sports betting lines, and a remarkable consensus of these lines, which spreads to
both smaller bookmakers and illegal operators (Vergin and Sosik, 1999; Osborne, 2001). In English sports betting, there are more ﬁrms offering a greater diversity of betting products compared to the US. The simulations reported above suggest varying degrees of success depending on which bookmaker or index ﬁrm is selected for trade. In the Racing Post the handicaps and point spreads are published alongside one another for all ﬁrms. Given a particular strategy, it is easy to compare quotes and take advantage of any arbitrage possibilities. There is quite a lot of variance between handicaps and index quotes across ﬁrms. This variation tends to cancel out over time so no one ﬁrm sets quotes systematically too low or too high. Nevertheless, betting at the ‘best’ quote each time might signiﬁcantly improve returns. Already, we have seen from our regression results that a home team bias occurs when the lowest handicap is considered whereas this bias is absent from the handicaps offered by bookmaker 4. The rules tested above are reconsidered here. Each simulated bet is placed on the optimal quote, from the bettor’s point of view, from the choice of four index ﬁrms and, alternatively, four bookmakers. The relevant question is not whether this approach yields higher win rates (it must) but whether or not proﬁts can be earned. Simulation results from selection of optimal prices are shown in Tables 11.5A and 11.5B. Since the Racing Post does not display quotes for all index ﬁrms or bookmakers for all games, we deﬁne optimal quote to be the best quote out of all those available. If only two bookmakers or index ﬁrms offer quotes the better is selected. This is a more conservative selection procedure than disregarding any games where not all quotes are on offer. Clearly, use of optimal prices with selections of all home teams does deliver a proﬁt. The win rate in the handicap market is 57.5 per cent and the index market return per bet is £1.22. 
Shopping between firms delivers profits in each market. Of the other four strategies reported in Table 11.5A, betting on underdogs clearly outperforms betting on favourites, and betting on home underdogs delivers a greater win rate (over 60 per cent) and higher positive profits than backing all underdogs. This, again, is a reflection of home bias in the handicap market. Compared to using handicaps from single bookmakers, selection across bookmakers reduces the transactions costs of betting by cutting into the bookmaker over-round, with an opportunity for positive profit from backing home underdogs.

Table 11.5A Simulated returns from various betting strategies applied to lowest handicaps

Strategy                Number of bets   Bets won   Win rate (%)   Return   Profit (%)
Bet on home team             486            274        56.4         502.0      3.29
Bet on favourite             484            242        50.0         443.7     −8.33
Bet on underdog              484            267        55.2         489.5      1.14
Bet on home favourite        321            173        53.9         317.2     −1.18
Bet on home underdog         176            106        60.2         194.3     10.40

Table 11.5B Simulated returns from various betting strategies applied to best quotes in the index market

Strategy                Number of bets   Return   Return per bet
Bet on home team             310          379.5        1.22
Bet on favourite             306          169          0.55
Bet on underdog              300         −158         −0.53
Bet on home favourite        195          162.5        1.35
Bet on home underdog         189         −278         −1.47

In the index market, 'buying' favourites over the 1999 and 2000 seasons would have returned a profit of £169 (per £1 per point stake). This is an average of 55-pence profit for each bet; a significant improvement on buying favourites at individual index firms, where the average return was −£1.23. Confining wagers to just those favourite teams at home would have yielded a higher average return again: £1.35 for every £1 (per point) speculated. In the index market a strategy of selling home underdogs delivers a loss close to normal, whereas in the handicap market a strategy of betting on home underdogs yields a positive profit. Hence, these two betting strategies offer differential returns, a feature that should be explored in further research to check for its robustness over time.
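The 'best quote' selection rule can be sketched as follows, using invented quotes for a single fixture; the convention that a home backer prefers the lowest handicap, and the skipping of firms with no published quote, follow the procedure described above:

```python
# Hypothetical handicap quotes for one fixture from four bookmakers.
# A bettor backing the home team takes the lowest handicap on offer;
# firms with no quote published in the Racing Post (None) are skipped
# rather than the game being discarded.
quotes = {"bookmaker 1": 8.0, "bookmaker 2": 6.5, "bookmaker 3": None, "bookmaker 4": 7.0}

available = {firm: h for firm, h in quotes.items() if h is not None}
best_firm = min(available, key=available.get)
print(best_firm, available[best_firm])  # bookmaker 2 6.5
```

Repeating this selection fixture by fixture is what generates the Tables 11.5A and 11.5B simulations.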

Conclusions

We have examined efficiency in the handicap and index betting markets for rugby league. We find that variations in quotes in each market are matched one-for-one by actual points differences observed in Super League fixtures. However, our regression results show a significant home bias in the handicap betting market which is absent in the index market. This bias, of the order of three or four points, implies that backing home teams should generate higher returns to bettors than backing away teams. The differences in home bias between the two types of market, and the contrasting efficiency properties, may be attributed to different constituencies. Handicap markets may attract 'pleasure' bettors, including fans who bet in order to add to their emotional stake in a team. In contrast, the index market may be dominated by 'professional' bettors with no sentimental team attachment who simply seek the best possible expected return from a wager. When simulations of betting strategies are conducted, the home bias is confirmed in that lower expected losses are found by backing home teams, compared to random selection, in the handicap betting market. But betting on home teams at particular bookmakers does not yield a profit in either betting market. Selection of 'best' handicaps or index quotes alters the simulation outcomes considerably. In the handicap market, backing all home teams or backing home underdogs delivers positive profits. In the index market, returns per bet are positive
when the 'best' quote on home teams is searched for. This was not apparent from the regression results, suggesting an anomaly that deserves to be resolved in further research. Generally, the implications for efficiency of searching for the lowest prices amongst bookmakers in sports betting markets have not been properly explored in the literature, in which arbitrage betting opportunities tend to be assumed to be absent. Unlike studies of North American sports betting, where sample sizes run into thousands, we are constrained in our study of rugby league betting by the immaturity of the markets and the consequent low number of observations for analysis. For index betting markets, sample sizes were as low as 281. With such small samples, the cliché that further work is needed is particularly apt, if only to reduce the chances of invalid inferences. At present our work must be seen as tentative and suggestive. There are three further directions of research that could usefully be taken. One is to see if the home bias revealed here for rugby league handicap betting persists over time. The second is to ascertain whether the profitable opportunities obtained by shopping for 'best' quotes remain in place in future seasons. This is an issue which deserves greater attention in other UK sports, including soccer. The third question, following work by Vergin and Sosik (1999) and Gandar et al. (2001) on US sports betting, is whether there is an additional source of home bias to be found in higher-profile rugby league games, comprising nationally televised games on Friday nights and end-of-season playoff games.

Notes

1 An alternative approach is a probit model, where the dependent variable is the probability that a bet on a team beats the spread or handicap, that is, the probability that the bet is won. This approach is applied by Gray and Gray (1997) to NFL betting. We prefer not to use the probit model because it imposes a non-linear S-shape on the relationship between bet outcome and terms of the bet, which makes interpretation of the marginal effects of the terms of the bet, the probit equivalent of the β parameter, problematic. In the probit model, β will deviate from unity at extreme ends of the distribution of spread midpoint or handicap, by construction.
2 Gandar et al. (2001), however, found no such bias in a study of betting markets for baseball or basketball or in a relatively small out-of-sample test of the Vergin–Sosik NFL results.
3 An alternative hypothesis is that fans who are nervous about their team's prospects of winning take out a wager on the opponent to win. The disutility brought about by their team's defeat would be partially offset by the satisfaction of winning the bet. If this 'insurance' motive predominates then the coefficient γ is predicted to be negative, assuming home fans are the majority of bettors on the outcome of a particular match.
4 Without 'pleasure' bettors in the index market, professional bettors would not enter as they would lack an opportunity for positive profit.
5 By contrast, a win rate of 52.4 per cent is needed for profitable betting in the NFL handicap betting market. The difference comes from the superior odds (10–11) offered on NFL games relative to rugby league games (5–6).
6 At odds of 5–6, the bettor needs to wager £109 to receive £100, ignoring draws. But in the case of draws the bettor loses. Draws are 2.35 per cent of match results in
our sample. Given draws, the bettor must wager £109 × 1.0235 = £111.6. The over-round is 11.6 per cent while the expected take-out, conditional on a balanced book, is 11.6/111.6 = 10.4 per cent.
7 Some literature on sports betting examines several more complex and sophisticated rules. See, for example, Cain et al. (2000) on exact scores betting in English soccer and Lacey (1990) and Woodland and Woodland (2000) on NFL betting.
8 Simulations of betting on all home favourites or all home underdogs did not deliver a higher proportion of wins than betting on all home teams or all favourites or all underdogs.
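The take-out arithmetic of note 6 can be checked directly (a sketch using the rounded £109 figure from the note):

```python
# Note 6: at 5-6 odds, backing both sides of a match costs about £109
# per £100 returned, ignoring draws; since draws (2.35% of matches)
# lose, the effective outlay is inflated by a factor of 1.0235.
outlay = 109 * 1.0235             # wagered per £100 received
over_round = outlay - 100         # about 11.6
take_out = over_round / outlay    # expected take-out on a balanced book
print(round(outlay, 1), round(100 * take_out, 1))  # 111.6 10.4
```

This reproduces the 11.6 per cent over-round and 10.4 per cent take-out stated in the note.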

References

Avery, C. and Chevalier, J. (1999), 'Identifying investor sentiment from price paths: the case of football betting', Journal of Business, 72: 493–521.
Cain, M., Law, D. and Peel, D. (2000), 'The favourite–longshot bias and market efficiency in UK football betting', Scottish Journal of Political Economy, 47: 25–36.
Clarke, S. and Norman, J. (1995), 'Home advantage of individual clubs in English soccer', The Statistician, 44: 509–21.
Courneya, K. and Carron, A. (1992), 'The home advantage in sports competitions: a literature review', Journal of Sports and Exercise Psychology, 14: 13–27.
Dare, W. and MacDonald, S. (1996), 'A generalised model for testing the home and favourite team advantage in point spread markets', Journal of Financial Economics, 40: 295–318.
Dobson, S. and Goddard, J. (2001), The Economics of Football. Cambridge: Cambridge University Press.
Forrest, D. and Simmons, R. (2000), 'Forecasting sport: the behaviour and performance of football tipsters', International Journal of Forecasting, 16: 317–331.
Forrest, D. and Simmons, R. (2001), 'Globalisation and efficiency in the fixed-odds soccer betting market', University of Salford, Centre for the Study of Gambling and Commercial Gaming.
Gandar, J., Zuber, R., O'Brien, T. and Russo, B. (1988), 'Testing rationality in the point spread betting market', Journal of Finance, 43: 995–1008.
Gandar, J., Zuber, R. and Lamb, R. (2001), 'The home field advantage revisited: a search for the bias in other sports betting markets', Journal of Economics and Business, 53: 439–453.
Garicano, L., Palacios-Huerta, I. and Prendergast, C. (2001), 'Favouritism under social pressure', National Bureau of Economic Research Working Paper 8376.
Golec, J. and Tamarkin, M. (1991), 'The degree of inefficiency in the football betting market', Journal of Financial Economics, 30: 311–323.
Gray, P. and Gray, S. (1997), 'Testing market efficiency: evidence from the NFL sports betting market', Journal of Finance, 52: 1725–1737.
Haigh, J. (1999), '(Performance) index betting and fixed odds', The Statistician, 48: 425–434.
Harvey, G. (1998), Successful Spread Betting. Harrogate: Take That Ltd.
Henery, R. (1999), 'Measures of over-round in performance index betting', The Statistician, 48: 435–439.
Lacey, N. (1990), 'An estimation of market efficiency in the NFL point spread betting market', Applied Economics, 22: 117–129.
Osborne, E. (2001), 'Efficient markets? Don't bet on it', Journal of Sports Economics, 2: 50–61.
Sauer, R., Brajer, V., Ferris, S. and Marr, M. (1988), 'Hold your bets: another look at the efficiency of the gambling market for National Football League games', Journal of Political Economy, 96: 206–213.
Sauer, R. (1998), 'The economics of wagering markets', Journal of Economic Literature, 36: 2021–2064.
Schlenker, B., Phillips, S., Bonieki, K. and Schlenker, D. (1995), 'Championship pressures: choking or triumphing in one's territory', Journal of Personality and Social Psychology, 68: 632–643.
Schwartz, B. and Barsky, S. (1977), 'The home advantage', Social Forces, 55: 641–661.
Snedecor, G. and Cochran, W. (1967), Statistical Methods, 6th edition. Ames, Iowa: The Iowa State University Press.
Terrell, D. and Farmer, A. (1996), 'Optimal betting and efficiency in parimutuel betting markets with information costs', Economic Journal, 106: 846–868.
Thaler, R. and Ziemba, W. (1988), 'Anomalies – parimutuel betting markets: racetracks and lotteries', Journal of Economic Perspectives, 2: 161–174.
Tryfos, P., Casey, S., Cook, S., Leger, G. and Pylpiak, B. (1984), 'The profitability of wagering on NFL games', Management Science, 24: 809–818.
Vaughan Williams, L. (1999), 'Information efficiency in betting markets: a survey', Bulletin of Economic Research, 51: 1–30.
Vergin, R. (1998), 'The NFL point spread market revisited: anomaly or statistical aberration?', Applied Economics Letters, 5: 175–179.
Vergin, R. and Sosik, J. (1999), 'No place like home: an examination of the home field advantage in gambling strategies in NFL football', Journal of Economics and Business, 51: 21–31.
Woodland, B. and Woodland, L. (2000), 'Testing contrarian strategies in the National Football League', Journal of Sports Economics, 1: 187–193.

12 Efficiency of the over–under betting market for National Football League games

Joseph Golec and Maurry Tamarkin

Introduction

Sports betting markets are recognized as good data sources to test market efficiency. Readily observable outcomes and a definite betting or investment horizon are features that make these markets attractive research candidates. Various studies have examined American football, baseball, basketball, and horse-racing markets. In the American football betting market, the efficiency tests have focused on whether bettors can use certain simple team features, such as being the home team or the underdog, to select bets that can generate statistically significant economic profits. The most recent work on football betting focuses on econometric techniques that may improve the statistical tests of the efficiency of the football point spread betting market, or the forecasts from a betting model. For example, Gray and Gray (1997) extend the literature by using a discrete-choice probit model rather than the ordinary least squares regression methodology used previously by Golec and Tamarkin (1991). The basic approach to testing for market efficiency has been to regress game outcomes (difference in team scores) on the betting market's predicted point spread. Various studies extended the basic model by including other explanatory variables such as home–away and favorite–underdog variables (see Golec and Tamarkin, 1991). In addition to using probit regression, Gray and Gray add "streak" variables to the regression, such as team record in the most recent four games and overall winning percentage. They find that some of the streak variables are significant, implying some market inefficiency. In this chapter, we consider a different football bet. The most common football bet is the point spread bet, which tests one's ability to predict the difference in team scores, compared to the market's prediction. The next most common football bet is the over–under bet, which tests one's ability to predict the total number of points scored in a game. This chapter focuses on the over–under bet.
We know of no comprehensive study to date which tests the basic efﬁciency of the over–under market. In addition, we consider any differences in the statistical properties of point spreads and over–under totals and whether information in one market can be used to win bets in the other. The chapter is organized as follows: the next section on “The football betting market: setting point spreads” brieﬂy describes the football betting market and how
point spreads are set; the section on “Testing football betting market efﬁciency” after that describes the data and presents the test results. The results are summarized in the conclusion.

The football betting market: setting point spreads

Jaffe and Winkler (1976) point out that football betting markets are analogous to securities markets: a gambler "invests" through a bookie (market-maker) at a market-determined point spread (price), which is the market's expectation of the number of points by which the favorite will outscore the underdog. The larger the spread, the larger the handicap the favorite must overcome. Those who bet on the favorite believe their team is underpriced; they speculate that the favorite will defeat the underdog by more than the point spread. In turn, those who bet on the underdog believe that the favorite is overpriced, that is, the favorite will either lose the game or win by less than the point spread. Licensed sports books in Las Vegas dominate the organized football betting markets. They commence betting on the week's games at "opening" point spreads (the line) that reflect the expert opinions of a small group of professional spread forecasters. If new information on the relative strengths of opposing teams (e.g. a player injury) is announced during the week, the bookie may adjust the line. In addition, since the identity of the bettors is known, Las Vegas bookies may also change the line if professional gamblers place bets disproportionately on one team. Although, of course, once bets are placed at a specific point spread number, the bet stands at that number regardless of future changes in the point spread. Shortly before game time, the bookie stops taking bets at the "closing" point spread. Like securities prices at the end of trading, closing spreads are assumed to reflect an up-to-date aggregation of the information and, perhaps, biases of the market participants. In addition to point spreads, sports books also publish betting lines on the total points scored in each game.
The bettor tries to predict whether the total number of points scored in a football game will be over or under a published number, the so-called over–under. The over–under number varies depending on the two participants’ offensive and defensive prowess and, to some extent, the weather forecast, as inclement weather can hold down scoring. The over–under number also may be adjusted by bookies until game time although, of course, once bets are placed at a speciﬁc over–under, the bet stands at that number regardless of future changes in the over–under number. In Las Vegas and other markets for large bettors, winners of point spread betting, or of the over–under, receive two dollars for each dollar bet; losers forfeit the amount of their bets plus an additional 10 percent paid to the bookie as commission (this commission is called vigorish or juice). In the case of ties, typically all bets are canceled (a push) although some bookies treat this as a loss for the bettor. Thus, a betting strategy must win at least 52.4 percent of the bets to be proﬁtable. The fact that bookies can change the line (we are including point spread and over–under bets in the line) leads researchers to propose an optimal strategy for setting the line. Assuming the line is a good forecast of the outcomes, the line
is an even bet and, over many bets, the bookie’s expected return is the vigorish, regardless of how disproportionate the betting might be on any particular team in a game. Many researchers have assumed that bookies adjust the line to even out the betting on each game, essentially hedging their positions in each game. But the bookie manages a portfolio of mutually uncorrelated unsystematic risks. Thus, the risk can be diversiﬁed away over many games. The bookie wants simply to maximize the bets placed subject to the constraint that the line is determined so that each bet is an even gamble. In conversation with bookies, we have found that they do not try to adjust the line to even out the betting.1 As one bookmaker put it, “Say I have a large difference in line of $8,000 on one team and only $2,000 on the other. Why should I try to change the line? I am laying $5,800 against $6,800 on what is essentially an even money proposition. I’ll take those kinds of bets all day!”2 This is similar to the way casinos operate. In a roll of the dice in craps, there may be disproportionately more money bet on the “pass line” than on the “don’t pass” line, but casinos do not care. What bookies do care about is increasing the amounts wagered, for their expected proﬁt goes up with the amount wagered whether or not bets are evened out in any particular game. Thus, sports books have expanded their offerings of bets to attract more wagers. One such bet that has become more popular recently is the over–under bet. We focus on this gamble in the empirical work below.
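The claim that the bookie is indifferent to bet imbalance, so long as the line makes each bet an even gamble, can be checked with a small expected-value sketch (the function name and dollar figures are ours, illustrative only):

```python
def bookie_expected_profit(bet_on_a: float, bet_on_b: float, p_a: float = 0.5) -> float:
    """Expected bookie profit when bettors stake $11 to win $10 and the
    line makes each side an even gamble (p_a = 0.5).  If A covers, the
    bookie keeps B's stakes and pays A's backers $10 per $11 staked."""
    pay_a = bet_on_a * 10 / 11
    pay_b = bet_on_b * 10 / 11
    return p_a * (bet_on_b - pay_a) + (1 - p_a) * (bet_on_a - pay_b)

# Balanced and heavily unbalanced books have the same expected margin:
print(round(bookie_expected_profit(5000, 5000), 2))  # 454.55
print(round(bookie_expected_profit(8000, 2000), 2))  # 454.55
```

At p = 0.5 the expectation collapses to (total stakes)/22, about 4.5 per cent of turnover however the money splits, which is the vigorish point made in the text.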

Testing football betting market efficiency

We use data from The Gold Sheet, a sports information publication that has been in business for forty-three years. The data consists of all National Football League (NFL) games from 1993 through 2000 for which there were published point spreads and over–under lines. Final scores were also obtained here along with game dates. There is a total of 2,008 games. First, we look at some summary statistics for the point spread (PS) and the over–under (OU) in Table 12.1. The point spread is defined as the number of points by which the favorite (underdog) is expected to win (lose). We note that for both betting lines, the actual outcomes are close to the predictions as given by the lines.

Table 12.1 Summary statistics for NFL point spread and over–under bets during the 1993–2000 seasons

Variable                    Mean    Median   Standard    Skewness   Kurtosis
                                             deviation
Point spread (PS)           5.64     5.00      3.58       −0.98       0.75
Margin of victory (MV)      5.17     4.00     13.50        0.02       0.30
[MV − PS]                   0.47     0.50     12.91        0.02       0.24
Over–under (OU)            40.15    40.00      4.13        0.72       1.20
Total points scored (TP)   41.22    41.00     14.22        0.35      −0.04
[TP − OU]                   1.07     0.00     13.76        0.36       0.06

They differ by about one point in the OU line and by one half point in the PS line. In both cases, the medians of the differences are at or near zero. This is an indication that the lines are good estimates of the outcomes. When we look at the differences, (MV − PS) and (TP − OU), we see two things. First, when we take differences to get (MV − PS), we reduce the standard deviation relative to MV alone proportionately more than when we do the same for (TP − OU). This shows that PS is more highly positively correlated with MV than OU is with TP. That is, PS explains more of MV than OU explains of TP, so that differencing eliminates more variance from (MV − PS). Indeed, the correlation between PS and MV is 0.29, versus 0.25 for OU and TP. One interesting feature of the PS and OU lines is that they both exhibit skewness and kurtosis, with PS negatively skewed and OU positively skewed. But MV and TP are basically normally distributed, with little skewness or kurtosis. This is surprising because if the goal of the bookie is to set PS to mirror MV and OU to mirror TP, one might expect them to have similar distributional features. That is, if the realizations (MV and TP) are normally distributed, why are not the expectations (PS and OU)? This question is not answered here but it does bring up a related question for financial asset returns. Short-term asset returns have been shown to be approximately normally distributed, although long-term returns may be lognormally distributed. Are expected returns normally distributed, and what is the consequence for asset pricing models if there is a difference between the distributions of expectations and realizations? For football betting, market efficiency implies that the closing over–under line is an unbiased measure of the total score in a game. In other words, the closing line should not be systematically higher or lower than the actual final game scores.
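The variance-reduction comparison can be verified against the summary statistics, since sd(X − Y) = sqrt(var(X) + var(Y) − 2ρ·sd(X)·sd(Y)); a quick check using the Table 12.1 figures:

```python
import math

def sd_of_difference(sd_x: float, sd_y: float, corr: float) -> float:
    """sd(X - Y) = sqrt(var(X) + var(Y) - 2*corr*sd(X)*sd(Y))."""
    return math.sqrt(sd_x**2 + sd_y**2 - 2 * corr * sd_x * sd_y)

# Point spread market: sd(MV) = 13.50, sd(PS) = 3.58, corr = 0.29
print(round(sd_of_difference(13.50, 3.58, 0.29), 2))  # 12.92 (Table 12.1: 12.91)
# Over-under market: sd(TP) = 14.22, sd(OU) = 4.13, corr = 0.25
print(round(sd_of_difference(14.22, 4.13, 0.25), 2))  # 13.78 (Table 12.1: 13.76)
```

The higher PS–MV correlation is what removes proportionately more variance from (MV − PS) than from (TP − OU).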
This can be tested with the following ordinary least squares regression of total points scored on the over–under line:

TP = β1 + β2(OU) + ε

where TP is total points scored, OU is the over–under line, β1 and β2 are regression coefficients and ε is an error term. The test of efficiency is an F-test of the joint hypothesis β1 = 0 and β2 = 1. Table 12.2 presents the regression tests for the over–under National Football League betting market for the 1993 through 2000 seasons combined and for individual seasons. For the entire sample period, we find that there is a statistically significant bias in the over–under line. This result is largely driven by a fixed bias of about six points, as measured by the significantly positive intercept (β1). Under this condition, the slope estimate will be biased down. Indeed, β2 is smaller than one, but only marginally so. And given the bias imposed by the


intercept, we can say that there is probably little bias that varies proportionately with the OU line. Further evidence on the consistency of the bias can be found in the regressions for the data subset by year. Although six of the eight years have positive intercept estimates, only the 1995 estimate is significant. In fact, it appears that this year may be driving the rejection of efficiency for the overall sample. During the last three years of the sample period, the market appears to be relatively more efficient in the sense that the intercepts fluctuate closer to zero and the slope closer to one. Furthermore, the regression R-squareds are considerably larger. This means that the OU line explains a larger portion of the variation in TP. It can be inferred from the overall regression results, taken at face value, that it would have been better to bet the over, and that in 1995, this would have been a particularly profitable strategy. One possible explanation for these results is that we have not accounted for overtime games. Out of 2,008 games, 108 are overtime games. If two teams are tied at the end of regulation play, they play a sudden death period in which the first team to score wins. The probability of overtime may be an inverse function of the spread: when team scores are predicted to be closer, regulation play is more likely to end in a tie. But when we ran a probit regression of overtime on the spread, there was only a weak negative relationship. Indeed, the correlation between the spread and a zero–one variable (no overtime = 0, overtime = 1) is only about 4 percent in magnitude. Therefore, overtime appears to be largely unpredictable. Overtime may be unpredictable, but overtime games tend to result in larger point totals. This can be seen by redoing the regressions in Table 12.2 and including a dummy for overtime games and the spread to account for the slightly greater

Table 12.2 Regression estimates for tests of market efficiency for NFL over–under bets during the 1993–2000 seasons

Sample period  β1              β2             SER    R²     F (β2 = 0)  F (β1 = 0, β2 = 1)  Obs.
1993–2000      5.96* (3.00)    0.88* (0.07)   13.75  0.065  139.34*     7.39*               2,008
1993           13.18 (9.57)    0.66* (0.25)   13.68  0.027  6.71*       1.14                242
1994           8.87 (8.66)     0.81* (0.21)   13.97  0.055  13.68*      1.53                236
1995           29.65* (11.02)  0.34 (0.27)    14.54  0.006  1.52        7.01*               252
1996           13.32 (11.77)   0.68* (0.29)   12.64  0.022  5.56*       0.85                252
1997           13.75 (11.90)   0.67* (0.29)   13.68  0.021  5.38*       0.78                252
1998           −3.85 (8.56)    1.13* (0.20)   13.16  0.105  29.66*      1.87                254
1999           6.41 (8.34)     0.87* (0.20)   14.08  0.065  18.08*      1.64                259
2000           −3.73 (6.31)    1.08* (0.15)   14.06  0.165  51.24*      0.18                261

Notes: Standard errors of the estimates are in parentheses. SER is the standard error of the regression. The first F-statistic tests β2 = 0; the second tests the joint hypothesis β1 = 0, β2 = 1.
* Denotes statistical significance at least at the 5 percent level.
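The joint test reported in Table 12.2 is straightforward to reproduce in outline. The sketch below, an illustration on simulated data rather than the authors' code or data, runs the OLS regression of TP on OU and forms the Wald F-statistic for the joint restriction β1 = 0, β2 = 1.

```python
import numpy as np

def efficiency_f_test(tp, ou):
    """OLS of total points (TP) on the over-under line (OU), plus the
    F-statistic for the joint restriction beta1 = 0, beta2 = 1."""
    tp, ou = np.asarray(tp, float), np.asarray(ou, float)
    n = len(tp)
    X = np.column_stack([np.ones(n), ou])
    xtx_inv = np.linalg.inv(X.T @ X)
    b = xtx_inv @ X.T @ tp               # (beta1_hat, beta2_hat)
    resid = tp - X @ b
    s2 = resid @ resid / (n - 2)         # residual variance
    d = b - np.array([0.0, 1.0])         # departure from H0
    # Wald form with R = I: F = d'(X'X)d / (q * s2), q = 2 restrictions
    F = d @ np.linalg.inv(xtx_inv) @ d / (2 * s2)
    return b, F

# Simulated "efficient" market: the line is an unbiased predictor of TP,
# so the F-statistic should be small.
rng = np.random.default_rng(1)
ou = rng.normal(41.0, 4.0, size=2_000)
tp = ou + rng.normal(0.0, 13.0, size=2_000)
b, F = efficiency_f_test(tp, ou)
```

With real data, an F-statistic above the 5 percent critical value (about 3.0 for two restrictions and a large sample) would reject efficiency, as in the 1993–2000 row of Table 12.2.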

Table 12.3 Market efficiency tests for NFL over–under bets during the 1993–2000 seasons adjusted for overtime games and point spread

Sample period  β1              β2            β3            β4             R²     F (β1 = β4 = 0, β2 = 1)  Obs.
1993–2000      5.64* (3.00)    0.87* (0.07)  3.73* (1.35)  0.06 (0.08)    0.069  3.52*                    2,008
1993           11.66 (9.56)    0.73* (0.26)  8.69 (4.63)   −0.21 (0.24)   0.044  0.85                     242
1994           10.40 (8.74)    0.71* (0.23)  −0.25 (3.63)  0.42 (0.26)    0.066  1.90                     236
1995           27.15* (11.11)  0.41 (0.28)   5.95 (3.31)   −0.17 (0.26)   0.021  3.58*                    252
1996           12.94 (11.79)   0.68* (0.29)  4.81 (3.48)   0.02 (0.24)    0.029  0.44                     252
1997           12.96 (11.97)   0.67* (0.29)  −1.37 (3.55)  0.23 (0.26)    0.025  0.82                     252
1998           0.53 (8.98)     0.97* (0.23)  2.10 (4.75)   0.40 (0.26)    0.115  1.95                     254
1999           5.34 (8.31)     0.91* (0.21)  10.82 (4.53)  −0.12 (0.26)   0.086  0.66                     259
2000           −4.13 (6.39)    1.08* (0.15)  1.40 (3.90)   0.09 (0.25)    0.166  0.18                     261

Notes: Standard errors of the estimates are in parentheses.
* Denotes statistical significance at least at the 5 percent level.


tendency for low-spread games to end in overtime. Table 12.3 reports these results. It presents regression tests for the over–under National Football League betting market for the 1993 through 2000 seasons combined and for individual seasons. The regression is TP = β1 + β2 (OU) + β3 (OT) + β4 (PS) + ε where TP is total points scored, OU is the over–under line, OT equals 1 for an overtime game and 0 otherwise, PS is point spread and β1 , β2 , β3 , β4 are regression coefﬁcients and ε is an error term. The test of efﬁciency is an F -test of the joint hypothesis, β1 = β4 = 0 and β2 = 1. For the entire sample period, we ﬁnd that overtime games increase the total points scored by a statistically signiﬁcant average of 3.73 points. This is reasonable because the ﬁrst team to score in overtime wins, so most overtime games are settled by a three-point ﬁeld goal, which is easier to score than a six-point touchdown. Indeed, the actual average difference in scores in overtime games is 3.75. This means that the OU line explains none of the effect of overtime on TP. Market efﬁciency is still rejected for the overall sample but the F -statistic is much smaller and less signiﬁcant. Nevertheless, 1995 again drives the overall rejection of efﬁciency. For individual years, taking account of overtime games has moved the intercepts somewhat closer to zero and OU coefﬁcients closer to one, in most cases. Nevertheless, the point estimates of the intercepts are still very large in the ﬁrst ﬁve years of the sample. These are also the years in which the R-squareds are low. This may indicate that certain betting strategies will be more proﬁtable in these years. Table 12.4 presents outcomes for betting strategies of both “over” and “under”. Even though ties in Las Vegas are “pushes”, that is, bets are returned, other bookies may treat pushes as losses, so we also show results for the case where ties lose. 
These markets might exist in local areas where gambling is illegal, and bookies require a larger profit margin because of the increased risk. Results for the full sample period show that betting the over is only marginally better than betting the under. Furthermore, the 50.1 percent winning percentage is nowhere near the 52.4 percent required to cover the vigorish paid to the bookie. Only in 1995 could one have made significantly more than the required 52.4 percent. Of course, this is ex post, and we are not surprised to find one profitable strategy in eight sub-periods, especially since we are considering both sides of the bet. Even in 1993 through 1997, where according to the intercept estimates there appear to have been relatively large fixed biases, betting the over would not have yielded a profit. Furthermore, in three of the five years, betting the under would have been as good as or better than betting the over. The results in Table 12.3 show that information impounded in the PS cannot be used systematically to predict the TP after the OU line is accounted for. Nevertheless, we considered two ways to use PS and OU in a more nonlinear fashion that could produce profitable bets. First, when PS is larger than average and OU is smaller than average, we reasoned that it might be profitable to bet the underdog.
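The 52.4 percent breakeven figure follows directly from the standard 11-for-10 terms (bet $11 to win $10); a minimal check:

```python
def breakeven_win_pct(t=0.10):
    """Win rate needed to break even when a winning $1 bet earns 1/(1+t)
    in profit and a losing bet forfeits the dollar: solve p/(1+t) = 1 - p."""
    return (1 + t) / (2 + t)

print(round(100 * breakeven_win_pct(), 1))  # → 52.4
```

With a larger vigorish t, as in the illegal markets mentioned above, the required winning percentage rises accordingly.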


J. Golec and M. Tamarkin

Table 12.4 Over–under betting strategies winning percentages. The profitability of over–under betting strategies for National Football League games over the 1993 through 2000 seasons, for combined totals and for individual years

Sample period  Betting strategy  Number of bets  Bets won  Ties  Winning pct (ties push)  Winning pct (ties lose)
1993–2000      Over              2,008           988       36    0.501                    0.492
               Under             2,008           984       36    0.499                    0.490
1993           Over              242             119       4     0.500                    0.492
               Under             242             119       4     0.500                    0.492
1994           Over              236             115       4     0.496                    0.487
               Under             236             117       4     0.504                    0.496
1995           Over              252             133       7     0.543*                   0.528*
               Under             252             112       7     0.457                    0.444
1996           Over              252             125       5     0.506                    0.496
               Under             252             122       5     0.494                    0.484
1997           Over              252             122       2     0.488                    0.484
               Under             252             128       2     0.512                    0.508
1998           Over              254             126       5     0.506                    0.496
               Under             254             123       5     0.494                    0.484
1999           Over              259             125       3     0.488                    0.483
               Under             259             131       3     0.512                    0.506
2000           Over              261             123       6     0.482                    0.471
               Under             261             132       6     0.518                    0.506

Note: * Denotes a statistically significant winning percentage. A push means that all bets are returned when the over–under betting line equals the total points scored in the corresponding game (a tie). Winning percentages are calculated both assuming that ties push and assuming that ties lose.
When OU is small, the market is expecting fewer points to be scored, and this may make it more difficult for a favorite to beat an underdog by a large number of points. Second, when PS is larger than average and OU is also larger than average, we speculated that it might be profitable to bet the favorite. Here, the market expects a large number of points to be scored so favorites might cover a large spread more easily. Table 12.5 reports results for these strategies. First, for the full sample without any filter, betting the underdog was nearly a profitable strategy, with a 52.1 percent winning percentage. When we filtered by choosing only games where the point spread was above average (PS > 5.5) and the over–under was below average (OU < 41), betting on the underdog was more profitable, as predicted. Assuming ties push, such bets yielded a 55.8 percent winning percentage. This strategy is profitable in six of the eight sample years and has a winning percentage greater than 55 percent in four of those years. On the other hand, when we chose games where the point spread was above average (PS > 5.5) but the over–under was above average (OU > 40), our proposed strategy of betting on the favorite was still unprofitable. The winning percentage improved only marginally, from 47.9 to 48.5 percent.
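The filter strategies can be expressed in a few lines. The sketch below uses a hypothetical helper and tiny made-up data (not the chapter's sample) to count underdog covers among games passing a PS/OU filter.

```python
import numpy as np

def underdog_win_pct(ps, ou, dp, ps_min=5.5, ou_max=41.0, ties_push=True):
    """Winning percentage for betting the underdog in games passing the
    filter PS > ps_min and OU < ou_max.  dp is the favorite's actual
    margin of victory, so the underdog covers when dp < ps."""
    ps, ou, dp = map(np.asarray, (ps, ou, dp))
    mask = (ps > ps_min) & (ou < ou_max)
    wins = np.sum(dp[mask] < ps[mask])
    ties = np.sum(dp[mask] == ps[mask])
    bets = mask.sum()
    denom = bets - ties if ties_push else bets
    return wins / denom

# Hypothetical games: (PS, OU, favorite margin).  Games 1 and 2 pass the
# filter; the underdog covers in game 1 only.
ps = [7.0, 6.5, 3.0, 9.0]
ou = [38.0, 40.5, 37.0, 44.0]
dp = [3, 10, 7, 2]
print(underdog_win_pct(ps, ou, dp))  # → 0.5
```

The mirror-image favorite strategy simply replaces the OU condition with OU above a threshold and counts games where dp > ps.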


Table 12.5 Favorite–underdog point spread betting strategies using the over–under line. The profitability of point spread betting strategies for National Football League games over the 1993 through 2000 seasons

Filter and betting strategy  Number of bets  Bets won  Ties  Winning pct (ties push)  Winning pct (ties lose)
All games (1993–2000)
  Favourite                  2,008           920       86    0.479                    0.458
  Underdog                   2,008           1,002     86    0.521                    0.499
PS > 5.5, OU < 41
  Favourite                  468             202       11    0.442                    0.432
  Underdog                   468             255       11    0.558                    0.545
PS > 5.5, OU > 40
  Favourite                  438             205       15    0.485                    0.468
  Underdog                   438             218       15    0.515                    0.498
PS > 8, OU < 38
  Favourite                  99              40        3     0.417                    0.404
  Underdog                   99              56        3     0.583                    0.566
PS > 8, OU > 43
  Favourite                  131             66        5     0.524                    0.504
  Underdog                   131             60        5     0.476                    0.458

Note: A push means that all bets are returned when the point spread betting line equals the actual difference in points scored in the corresponding game (a tie). Winning percentages are calculated both assuming that ties push and assuming that ties lose.
We also look at more extreme filters for the same strategies. When we filtered by choosing only games where the point spread was much above average (PS > 8) and the over–under was much below average (OU < 38), betting on the underdog was even more profitable. The winning percentage increased from 55.8 to 58.3 percent. Furthermore, when we chose games where the point spread was much above average (PS > 8) but the over–under was much above average (OU > 43), betting on the favorite became barely profitable at 52.4 percent. Clearly, fewer games pass the more restrictive filters; however, these filters support both of our speculations. Indeed, the restrictive filter sharply increased the winning probability (from 47.9 to 52.4 percent) for the strategy of betting the favorite, overcoming the strong overall tendency for the underdog to cover the spread.

Conclusion

The examination of the over–under betting line gives additional insight into the efficiency of the football betting market. We find that both PS and OU are good predictors of MV and TP respectively. In our data, neither the over nor the under is a profitable wager. There does appear, however, to be a statistically significant bias in OU, but the bias does not seem to vary proportionally with the line. One year, 1995, has a large fixed bias and may be driving this result. When we account


for the tendency of overtime games to be higher scoring, the bias is reduced but still remains statistically significant. Our more interesting result comes from using the OU in conjunction with the PS to concoct a winning strategy. We argue that games that have a low OU are likely to have a low total score and thus may prove more difficult for the favorites to cover the point spread. Similarly, games that have a high OU may prove easier for the favorites to cover the point spread. Our results partially bear out these conjectures. Using the average OU as a filter improves predictions for the underdog when the PS is above the average and OU is below the average. On the other hand, wagering on the favorites when the PS and OU are above average is not profitable. The adoption of more extreme filters improved the results and we were able to show profitable betting results for both of our theoretical strategies. Betting on the underdog in games that pass the filters produces the best profits. Betting on the favorites in games that pass the restrictive filters sharply increases the winning percentage, overcoming the strong overall tendency for the underdog to win. The novel use of the over–under betting line that we employ shows that bettors can use information from one type of betting line to enhance their betting strategies in a different betting line. It is not known to what extent professional gamblers are aware of this. Future research in gambling can explore other combinations of betting lines.

Notes 1 Bookies will adjust the line if they notice that professional bettors are betting more heavily on one side. This is an indication to them that the original line may not be an accurate estimate of the mean of the distribution. 2 Conversation with Sonny Reizner, former sports book manager of Rio Suite Hotel and Casino, in Las Vegas.

References
Golec, Joseph and Maurry Tamarkin (1991), "The degree of inefficiency in the football betting markets," Journal of Financial Economics 30, 311–323.
Gray, Philip K. and Stephen F. Gray (1997), "Testing market efficiency: evidence from the NFL sports betting market," Journal of Finance 52, 1725–1737.
Jaffe, Jeffrey F. and Robert L. Winkler (1976), "Optimal speculation against an efficient market," Journal of Finance 31, 49–61.

13 Player injuries and price responses in the point spread wagering market

Raymond D. Sauer

This chapter studies the response of point spreads to a readily observed event: the absence of a key player due to injury. The analysis is thus similar to an event study, with the added feature that the mean price response is compared with the mean effect of the injuries on actual outcomes (game scores). The analysis in this chapter can therefore be viewed as a test of event study methods using a market where the simplicity of the financial contract makes such a test feasible. Yet, though the contract is simple, the injuries themselves create problems, since many of them are partially anticipated events. In the case of basketball injuries, an empirical model of the probability of player participation can be estimated and used in conjunction with a model of efficient pricing to interpret the relation between point spreads and scores. The pricing model yields numerous implications that are consistent with the data. Hence, the good news is that the relation between point spreads and scores during injury events is consistent with efficient pricing. The exercise thereby tests the event study method and lends credence to partial anticipation as an important factor in interpreting abnormal returns when the ex ante probability of an event differs substantially from zero.

Introduction

This chapter studies the point spread wagering market for professional basketball games. Its primary concern is the wagering market's response to a series of events: injuries to star basketball players. In colloquial terms, the chapter seeks to determine if the absence of a Larry Bird or a Magic Johnson (perennial stars in this sample) is efficiently priced. Injuries were chosen for this study because the absence of a key player is arguably the single most important factor affecting game scores, and thus prices in this market. Contracts in the point spread betting market are quite simple, in that the value of a wager is determined once and for all by the outcome of a single game. This contrasts with the value of most financial assets, which are affected by a continuum of events and anticipations at multiple horizons. The relative simplicity of wagering markets enables a sharper focus on the relation between events, market prices, and outcomes. For the most part, however, the literature on wagering markets has failed to exploit this possibility. There are many papers which evaluate the profitability of


various betting rules, or that test for statistical biases in reduced form regressions, but few papers focus directly on the relation between events, prices, and outcomes. The analysis of injury events in the wagering market enables us to address an important question: do changes in market prices precipitated by events accurately reﬂect changes in the distribution of outcomes? Event studies generally presume that the answer is yes: abnormal returns measured therein are used to draw inferences about the consequences of changes in regulation, corporate governance, and other factors. These studies are valuable precisely because direct measurement of the event’s impact on earnings is difﬁcult. But the difﬁculty of measurement also makes it difﬁcult to directly test the event study method itself. This study uses a point spread pricing model to provide such a test based on injury events to star basketball players. A sample of 700 games missed by star players over a six-year period provides an opportunity to confront this model with a sequence of repeated events. We can therefore carefully evaluate the performance of the point spread market as a mechanism which puts a price on the event of interest. In fact, the study shows that point spreads are biased predictors of game scores during injury events. The question then becomes whether this bias reﬂects inefﬁcient pricing, or alternatively, a combination of the event-generating process and the empirical approach employed by the event study method. The deﬁnition of an injury event is guided by the central question of this study, namely, is the absence of a star basketball player efﬁciently priced in the point spread betting market? Hence, an injury event consists of a game missed by a star player. Although this appears to be a natural deﬁnition, it creates problems since many games missed due to injury are neither surprises nor perfectly anticipated. 
Partial anticipation of player absences can create bias in point spread forecast errors, much in the way that estimates of value changes due to takeover activities contain a negative bias due to sample selection and partial anticipation (Malatesta and Thompson, 1985; Bhagat and Jefferis, 1991). A unique feature of this study is the means by which the bias problem is resolved. By studying the nature of injury spells to basketball players, we learn how to form a subsample free of selection bias. In addition, knowledge of the injury process can be incorporated into a simple pricing model. The model implies that biases in the primary sample will vary in predictable ways as an injury spell progresses. Finally, the pricing model can be used to extract the market’s estimate of the participation probability of an injured player. This estimate is quite close to the estimate obtained from a duration analysis of the injuries. In each case, we ﬁnd that the point spread response to injury events is consistent with efﬁcient pricing. Hence, the primary question addressed in the chapter is answered in the afﬁrmative: price changes accurately reﬂect changes in the distribution of outcomes. Yet, proper interpretation of these price changes required detailed knowledge of the event generating process. Without such knowledge, interpretations of event study returns can be misleading, as Malatesta and Thompson (1985) and Bhagat and Jefferis (1991) have argued. The analysis begins with a brief description of the wagering market and the data. This is followed by a section that documents the essential facts on the nature


of injury spells. These are used to construct a model of efﬁcient point spreads in the section on ‘Participation uncertainty and efﬁcient point spreads surrounding injury spells’. The section that follows describes an estimation procedure which enhances our ability to test the model, and subsequently conducts the tests.

The point spread market for professional basketball games

Efficient point spreads

A point spread (PS) wager is defined by the following example. Suppose the Hawks are favored by 5 points over the Bulls. Let PS = 5 represent this 5 point spread, and define DP as the actual score difference, that is, points scored by the Hawks less points scored by the Bulls. A point spread wager is a bet on the sign of (DP − PS). Bets on the Hawks pay off only if DP − PS > 0, that is, if the Hawks outscore the Bulls by more than the 5 point spread. Bets on the Hawks lose if DP − PS < 0. Bets on the Bulls pay off/lose in the opposite circumstances and bets on both teams are refunded if DP − PS = 0. Winning bets pay off at odds of 1 to (1 + t), where t can be thought of as a transactions cost which covers the bookmaker's costs of operation. A winning wager of $1 therefore returns $(1 + 1/(1 + t)). Standard terms in the Las Vegas market set t at 10 cents on the dollar. Consider a strategy that places bets on a chosen team under a specific set of conditions. Without loss of generality, define the DP and PS ordering by subtracting the opponent's points from the points scored by the chosen team. The probability that a bet is a winner is p = prob(DP − PS > 0). In addition, the probability that the bet is a loser is p′ = prob(DP − PS < 0), and the probability of the bet being refunded is p0 = prob(DP − PS = 0) = 1 − p − p′. Efficient point spreads deny the existence of a profit opportunity to any strategy. Specifically, the expected return to a $1 wager must be non-positive:

p(1 + 1/(1 + t)) − (1 − p0) ≤ 0    (1)

A similar requirement holds for p′, the probability that a wager on the opposing team beats the spread. Combining these yields bounds for the probability of a winning wager:

(0.5 − p0/2)/(1 + t/2) ≤ p ≤ (0.5 − p0/2)(1 + t)/(1 + t/2)    (2)

This result is simplified if the commission is assumed to be zero. Then p = 0.5 − p0/2. Since the probabilities sum to 1, p = p′. Hence prob(DP − PS > 0) = prob(DP − PS < 0), which is satisfied only if PS is the median of the distribution of DP. Provided that the ex ante distribution of DP is symmetric, then PS = E(DP) is also implied, and the point spread can be said to be an unbiased forecast of the score difference of the game. The ex post distribution of the forecast errors


(DP − PS) will then be symmetric with a mean of zero. Hence the null hypothesis implied by efficient point spreads under these conditions is that the mean forecast error is zero:

H0: MFE(PS) = 0    (3)

which is a standard test that is often performed in point spread studies.1 These restrictions on efficient point spreads are weakened when non-zero transaction costs are recognized. Assuming that p0 = 0 for convenience and setting t = 0.10, p is bounded by p ∈ (0.4762, 0.5238), which restricts PS to being within a precise distance of the median of DP.2 Inspection of equation (2), above, indicates that the distance from the median allowed by the no profit opportunity condition shrinks to zero as t → 0. In sum, an efficient point spread is the median of the distribution of score differences in the absence of transaction costs. For symmetric distributions, equation (3) is implied, and efficient point spreads are the optimal forecast of the score difference. Given symmetry and positive transaction costs, failure to reject equation (3) is consistent with efficient pricing of point spreads.3

Point spreads and scores: the data

The data are based on a sample of 5,636 games played over six consecutive seasons beginning in 1982. The point spreads are those prevailing in the Las Vegas market at 5 pm Eastern time on the day the game is played.4 Define DPtij as the difference in the score of a game at t, and PStij as the point spread's prediction of this differential, where the ordering is obtained by subtracting the visiting team's (team j) points from the home team's (i) points.5 Figure 13.1(A–C) depicts the distributions of the point differences, spreads, and forecast errors. A glance at the distributions shows no obvious asymmetry, and the data pass formal tests of the hypothesis that the distributions are symmetric.6 Since the symmetry property is accepted, tests based on expected values can be used to test the proposition that point spreads are efficient forecasts of the difference in scores of NBA games. Alternative ways of defining the score difference ordering exist.
Indeed, the home–visitor ordering is a simple transformation of the ordering displayed in daily newspapers, in which the point difference is deﬁned by subtraction of the points scored by the underdog from those of the favorite. A recent series of papers considers the implications of the score difference ordering (Golec and Tamarkin, 1991; Dare and McDonald, 1996; Sauer, 1998) for simple tests of efﬁcient pricing. In light of this discussion, we examine these tests under all partitions of the favorite–underdog/home–visitor partitions of the sample. Table 13.1 Panel A examines the median condition for each sub-sample. The right-most column in Table 13.1 lists the proportion of winning bets realized by a strategy of betting on the team listed ﬁrst in the score difference. Betting on the home team yields a winning percentage of 50.6 percent over the six-year period, whereas betting on the favorite wins 50.3 percent of the time. Only in the case

Figure 13.1 Point spreads and score differences in the NBA (1982–88): (A) score differences; (B) point spreads; (C) forecast errors. [Histogram panels not reproduced.]

of pick ’em games in which betting on the home team wins just 47.7 percent of bets, is the proportion near the efﬁcient bound (0.4762, 0.5238), hence this simple test is consistent with efﬁcient pricing (test statistics are not applicable since the proportions are inside the bound). Table 13.1 Panel B presents the sample means and standard deviations for the point differential (DP), point spread (PS), and the forecast error (DP − PS) under each ordering. The right-most column lists the t-statistic for testing the hypothesis

Table 13.1 Score differences and point spreads for NBA games

A. Sample frequencies

Differencing method/sample partition  Games  Bets   Wins   Ties  Wins/bets
A1. Home–away
  All games                           5,636  5,510  2,789  126   0.506
  Home favorites                      4,341  4,243  2,148  98    0.506
  Home underdogs                      1,209  1,181  600    28    0.508
  Pick 'em games                      86     86     41     0     0.477
A2. Favorite–underdog                 5,550  5,424  2,729  126   0.503

B. Sample means and standard deviations

Differencing method/sample partition  DP             PS            DP − PS        t-stat
B1. Home–away
  All games                           4.62 (12.42)   4.38 (5.59)   0.24 (11.15)   1.62
  Home favorites                      6.87 (11.82)   6.81 (3.62)   0.06 (11.07)   0.37
  Home underdogs                      −3.09 (11.74)  −4.05 (2.30)  0.96 (11.45)   2.91
  Pick 'em games                      −0.91 (10.58)  0.00 (0.00)   −0.91 (10.58)  −0.79
B2. Favorite–underdog                 6.05 (11.83)   6.21 (3.56)   −0.16 (11.16)  1.06

Notes: The sample encompasses all regular season NBA games played in the six seasons from 1982–83 through 1987–88. Score differences were obtained from the annual edition of the Sporting News NBA Guide. Point spreads were obtained from The Basketball Scoreboard Book. These point spreads are those prevailing in the Las Vegas market about 2.5 hours prior to the start of play (5 pm Eastern time on a typical night). No point spread is reported for twenty-two games during this period, which reduces the sample from 5,658 (all games played) to 5,636 (all games with point spreads). Panel A lists the number of games, bets (games in which DP ≠ PS; games with DP = PS are ties), and the number of bets won by wagering on the team in the first position of the score difference. Wins/bets is the sample estimate of p, the proportion of such bets won. Since this proportion always lies inside the bounds given by (2), no test statistic is required to evaluate this implication of efficient pricing. Panel B: standard deviations are given in parentheses. The t-statistic tests the null hypothesis that the mean forecast error (DP − PS) is zero. Although the null is rejected in the case of home underdogs, the failure to reject efficient pricing in panel A for this partition indicates that the rejection in B is caused by a departure from the symmetry assumption.


the point spread is an unbiased forecast, that is, H0 : E(DP − PS) = 0. In the case of home underdogs, it appears that the point spread is biased as the mean of DP − PS = 0.96 (t = 2.91). Betting on home underdogs is not proﬁtable however, as seen in Table 13.1 Panel A (wins/bets = 0.508). Hence, one infers that this latter result is a violation of the symmetry condition for this sub-sample and not a rejection of efﬁciency.
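The unbiasedness test in Table 13.1 Panel B is a one-sample t-test of the mean forecast error. Using the published summary statistics for home underdogs, it can be checked on the back of an envelope (small differences from the table arise from rounding):

```python
import math

def mfe_t_stat(mean_fe, sd_fe, n):
    """t-statistic for H0: the mean forecast error E(DP - PS) is zero."""
    return mean_fe / (sd_fe / math.sqrt(n))

# Home underdogs: mean 0.96, s.d. 11.45, 1,209 games; this is close to
# the t = 2.91 reported in Table 13.1 Panel B.
print(round(mfe_t_stat(0.96, 11.45, 1209), 2))
```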

Point spreads and injury spells: the data

The sample of injury games

We now examine the forecast errors of point spreads when important players are unable to participate in a contest. A sample of star players was compiled for each of the six seasons by reference to the previous year's performance standings in the Official NBA Guide. The top twenty leading scorers and rebounders were recorded, as were members of the All-Star team. The games missed by these players in the subsequent year constitute the sample of injury games for analysis. This procedure creates a sample of 273 injury spells encompassing 700 missed games.7

Bias in the forecasts

For this analysis, the score ordering is defined by subtracting the opponent's points from the points of the team with the injured player. The forecast error, then, is the observed score differential for the game less the point spread (similarly defined). The mean forecast error of the spread for the 700 game sample is −1.28 points (t = 2.87). Point spreads for injury games therefore contain significant bias. Teams with injured players do worse, by more than a point on average, than predicted by the point spread. As far as previously documented biases in point spreads go, this is quite large. There are two possible explanations for this bias. A conjecture motivated by the behavioral school might go as follows. Bookmakers trade mostly with a relatively uninformed, unsophisticated clientele (since on average the clientele must lose). These bettors are not up to date on the status of injured players, so bookmakers do not fully adjust prices for injury games. Had one known that the player was destined to miss the game, betting against the team with the injured player would represent a profit opportunity. An alternative hypothesis is that the bias stems from (rational) partial anticipation of the player's absence from the game.
If there is some chance ex ante that the player might participate in the game, the mean forecast error of −1.28 points is affected by selection bias even if the point spread is efﬁcient. The following example illustrates the point. Suppose there is a 50 percent chance (ex ante) that the player will miss the game. With the player, team i is expected to win by 4 points; without him, by 2 points. If so, the efﬁcient point spread would be 3 points. On average then, his team wins by 2 points when he misses the game, but the spread is 3, which delivers the bias.
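The worked example generalizes to any participation probability. A small sketch (with illustrative function and variable names, not taken from the chapter):

```python
def anticipation_bias(q_miss, margin_with, margin_without):
    """Efficient point spread and the conditional forecast bias when a
    player's absence is only partially anticipated.  q_miss is the ex
    ante probability that the player sits out."""
    spread = (1 - q_miss) * margin_with + q_miss * margin_without
    bias_when_out = margin_without - spread   # mean of DP - PS in injury games
    return spread, bias_when_out

# The example from the text: a 50 percent chance of missing, a team that
# wins by 4 points with the player and by 2 points without him.
spread, bias = anticipation_bias(0.5, 4.0, 2.0)
print(spread, bias)  # → 3.0 -1.0
```

Note that the bias appears even though the spread is efficient: conditioning the sample on games the player actually missed selects the below-average outcomes.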

152

R. D. Sauer

There are, thus, two competing explanations for the point spread bias. The remainder of the chapter explores implications of the partial anticipation explanation, in which profit opportunities are absent. We begin by studying the injuries themselves.

Injury spells in the NBA

When the casual sports fan thinks of injured athletes, famous cases of incapacitating injuries, such as a broken leg (Tim Krumrie in the 1989 Super Bowl, Joe Theisman on Monday Night Football) or a broken ankle (Michael Jordan in 1985), come to mind immediately. Yet these are relatively infrequent occurrences. Far more common are the nagging injuries such as muscle pulls and ligament sprains which could either heal or deteriorate between contests. Professional athletes continually play with taped ankles and thighs, knee braces, finger splints, wrist casts, flak jackets, etc. Indeed, many games at the professional level are contested by the "walking wounded."8

This is relevant to the analysis because it is nagging injuries which cause uncertainty over participation. This uncertainty exists not only for the general public, but also for team trainers and the players themselves. Indeed, the classic injury situation occurs when the team announces that a player is on a "day-to-day basis." It is not uncommon for the player to announce that he is fit and ready to play, while team officials state otherwise. Whether or not an injured player will participate is often determined by his response during the pre-game warm-up, only moments before the game begins.9

Table 13.2 tabulates information on injury spell durations for the full sample of 263 injuries. Most spells are short: 75.5 percent of uncensored spells last three games or less.10 For the first three games, the probability of returning in each subsequent game, that is, the hazard, is in the 0.30–0.40 range. For injuries that preclude participation for more than three games, the hazard is much lower, in the 0.10–0.20 range.
The data on spell lengths suggest a fairly simple message: if a player hasn't missed many games, his chances of returning in the next game are fairly high. On the other hand, if he has missed more than a few games, his chances of returning in the next game are quite low. This suggests two broad classes of injuries. In the much larger class the player may return to the lineup at any time. We classify these as nagging injuries that require an uncertain amount of rest for recuperation. More serious injuries can be incapacitating, completely ruling out a quick return to action. These injuries comprise the second, less common class.

To take a closer look at nagging injuries, Table 13.2 Panel B tabulates spell lengths and return probabilities for spells lasting five games or less. Almost half of these spells terminate after just one game. Of spells that continue, slightly more than half terminate after the second game; the hazard is a bit greater following the third. On the assumption that the hazard rate for this sub-sample is constant, its maximum likelihood estimate is 0.513, with a standard error of 0.032.11 A rate of 0.50 would yield a cumulative return rate of 75 percent by the second game,

Player injuries and price responses

153

Table 13.2 Injury spell durations and hazard rates

A. Injury spell durations and hazard rates

Spell length  Frequency  Censored  At risk  Hazard   Cumulative return rate
 1            105        16        263      0.3992   0.3992
 2             51         2        142      0.3592   0.6316
 3             29         0         89      0.3258   0.7551
 4              6         2         60      0.1000   0.7796
 5              7         1         52      0.1346   0.8148
 6              8         1         44      0.1818   0.8512
 7              6         1         35      0.1714   0.8797
 8              5         0         28      0.1786   0.9042
 9              5         0         23      0.2174   0.9250
10              0         1         18      0.0000   0.9250
11              4         1         17      0.2353   0.9456
12              1         0         12      0.0833   0.9538
13              1         0         11      0.0909   0.9580
14              1         0         10      0.1000   0.9622
15              1         1          9      0.1111   0.9664
16              1         0          7      0.1429   0.9747
17              1         0          6      0.1667   0.9789
18              1         0          5      0.2000   0.9831
19              0         0          5      0.0000   0.9831
20              0         0          5      0.0000   0.9831

B. Hazard rates for nagging injuries

Spell length  Frequency  Censored  At risk  Hazard   Cumulative return rate
 1            105        16        219      0.4795   0.4795
 2             51         2         98      0.5204   0.7685
 3             29         0         45      0.6444   0.9204
 4              6         2         16      0.3750   0.9502
 5              7         1          8      0.8750   0.9950

Note: Frequency is the number of injury spells terminated after an absence of n games, where n is given in the spell length column. Censored observations are terminated by the end of the season rather than a return to action. Hence hazard is frequency/(at risk − censored); the cumulative return rate incorporates censored spells in a similar manner.

87.5 percent by the third, and 93.75 percent by the fourth. These are quite close to the actual return rates of 76.9, 92.0, and 95.0 percent.12 Hence, for nagging injuries (spells of short duration), we assume the probability of returning in the n + 1st game, having missed n games thus far, is about 0.5.
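Under the constant-hazard assumption, both the maximum likelihood estimate and the implied cumulative return rates can be reproduced from the Table 13.2 Panel B counts (a sketch; the standard error reported in the text is not computed here):

```python
# Counts from Table 13.2 Panel B: spells of five games or less.
terminations = [105, 51, 29, 6, 7]   # spells ending after n = 1..5 missed games
at_risk = [219, 98, 45, 16, 8]       # spells still ongoing at each game

# MLE of a constant per-game return hazard: total returns divided by
# total spell-games at risk.
h_hat = sum(terminations) / sum(at_risk)
print(round(h_hat, 3))               # 0.513

# A hazard of 0.5 implies cumulative return rates by games two to four.
cumulative = [1 - (1 - 0.5) ** n for n in (2, 3, 4)]
print(cumulative)                    # [0.75, 0.875, 0.9375]
```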

Participation uncertainty and efficient point spreads surrounding injury spells

Explaining the ex post bias by appealing to participation uncertainty is straightforward. Testing this explanation yields insight into its credence, and can be achieved by imposing a simple probability structure on the injury process. This structure differs according to whether the injury is nagging (yielding a short spell) or incapacitating (yielding a long spell). We begin with the case of nagging injuries.


Assume that participation in games reveals information about the soundness of a player. Playing informs the market that the player is relatively sound, whereas not playing informs the market that the player is currently unsound. For simplicity, we assume that the participation probability, given that the player participated in the previous game, is 1.13 Based on the evidence in the section on "Point spreads and injury spells: the data", we assume that the probability of playing conditional on having missed the previous game is 0.5. These assumptions apply to nagging injuries; the onset of these spells is unexpected, and there is a positive probability of terminating the spell in each subsequent game. In contrast, incapacitating injuries are likely to be observable (e.g. a broken leg). Thus, the onset of long spells will be anticipated, and expected to continue for some time. Hence, for long spells we assume that the participation probability is 0.

The following notation is used to develop the model's implications.

p = probability (ex ante) that the player participates
DP = DPtij; the difference in points scored on date t between teams i and j, where the ordering defines team i as having the injured player
PS = PStij; the market point spread, with the ordering defined as for DP
PSPLAY = PS|play; the point spread conditional on the player participating (p = 1)
PSNOT = PS|not play; PS conditional on p = 0.

In an efficient market, PSPLAY = E(DP|play), and PSNOT = E(DP|not play). The efficient unconditional point spread for injury games, PS∗, is thus given by:

PS∗ = p · PSPLAY + (1 − p) · PSNOT

(4)

A construction which will be important in testing some of the propositions that follow is the estimated "change" in the point spread. This is defined as the difference between the market point spread and the point spread that would be in effect assuming the injured player were to participate. This defines

DIFF = PS − PSPLAY

(5)

We now use the model to describe the evolution of the point spread bias as an injury spell progresses. p = 1 for the first game of short spells. Thus, the player's initial absence is a surprise, which leads to the first proposition.

Proposition 1: For the first game of short spells, PS∗ = PSPLAY, and therefore DIFF = 0.

Given that the player does not participate in games during the spell, the expected outcome is E(DP|not play) = PSNOT. Since the efficient point spread incorporates a non-zero probability of participation, it is a biased forecast (ex post), which is proposition 2.

Proposition 2: The forecast errors of an efficient point spread are biased for games missed during short injury spells.


This can be tested by calculating MFE(PS), which is predicted to be the difference between the expected outcome and the point spread. Thus, MFE(PS) = PSNOT − PS∗ = p · [PSNOT − PSPLAY] < 0. This proposition explains the point spread bias which we have already documented, provided that p is non-zero. We can be more precise, however. The value of the player to the team is measured by the loss in output due to his absence. This defines LOSS = E(DP|not) − E(DP|play) = PSNOT − PSPLAY in an efficient market. Our study of injury durations indicated that prob(playt|nott−1) = 0.5. Hence we can sharpen this proposition for games 2–n of an injury spell.

Proposition 3: In games 2–n of an injury spell, the point spread adjustment (DIFF) will equal half the value of the injured player. Hence the mean forecast error will be 50 percent of the value of the player: MFE(PS) = 0.5 · LOSS.

Obtaining a measure of LOSS along with MFE(PS) allows us to infer the market's estimate of p. The analysis used above is symmetric in the sense that it applies not only during the injury spell, but also in the game when the player returns to the lineup. Thus, when the player returns to the lineup, the bias is reversed, since the expected outcome given participation is PSPLAY, and p < 1.

Proposition 4: The bias for the return game is PSPLAY − PS∗ = (1 − p) · [PSPLAY − PSNOT] > 0.

If p = 0.5 and constant throughout the spell, the return game bias is simply the mirror image of the injury game bias. We also predict that DIFF < 0, as above. Once a player returns to the lineup after a short spell, p subsequently returns to 1.0. We thus have

Proposition 5: After the initial return game, PS∗ = PSPLAY.

Hence, we expect DIFF = 0, and the absence of forecast bias.

Long spells involve more serious injuries. By distinguishing long from short spells, we develop two additional propositions. These stem from the assumption that the injury produces no uncertainty regarding the player's status (p = 0).
Proposition 6: For long spells, PS∗ = PSNOT, and thus DIFF = PSNOT − PSPLAY = LOSS.

Since p = 0, the expected forecast error is E(DP|not play) − PSNOT = 0. Efficient point spreads thus display no ex post bias for long spells. Recall the assumption that incapacitating injuries are observed by the market when they happen. Hence, p = 0 for the first game.

Proposition 7: For the first game of long spells, PS∗ = PSNOT.


Thus, DIFF = LOSS, and PS∗ is unbiased. This proposition stands in contrast to its counterpart for short spells.

The model in this section provides us with an array of predictions about the behavior of point spreads during injury spells. Not only does it imply the ex post bias, it predicts the magnitude of the bias, and differences in the bias over the duration of the spell and across different types of injuries. Testing some, though not all, of these predictions requires knowledge of E(DP|play), or PSPLAY in an efficient market. Since wagering opportunities on NBA games are normally offered only on the day of the game, PSPLAY is not observed in situations where injuries are involved. It turns out, however, that a simple statistical technique provides very accurate estimates of PSPLAY, enabling tests of all seven propositions.
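Before turning to the data, the short-spell propositions can be illustrated numerically. The values of PSPLAY and PSNOT below are hypothetical; only p = 0.5 comes from the duration analysis:

```python
# Hypothetical conditional spreads; p = 0.5 is the estimated return hazard.
p = 0.5
ps_play, ps_not = 4.0, 0.0
loss = ps_not - ps_play                  # LOSS = PSNOT - PSPLAY = -4 points

# Efficient unconditional spread while the player's status is uncertain.
ps_star = p * ps_play + (1 - p) * ps_not

# Propositions 2-3: games missed during the spell are biased by p * LOSS.
mfe_missed = ps_not - ps_star
print(mfe_missed)                        # -2.0, i.e. half the player's value

# Proposition 4: the bias flips sign in the return game.
mfe_return = ps_play - ps_star
print(mfe_return)                        # 2.0
```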

Empirical analysis of the nagging injury hypothesis

The propositions are tested in the section titled "Empirical tests of the partial anticipation hypothesis". The basis for estimating PSPLAY is presented in the section that follows, and the next section examines its statistical properties. This analysis shows that PSPLAY can be estimated with considerable precision.

A method for estimating PSPLAY

The method we use is like that of an event study, which requires an estimate of the expected return in order to compute an abnormal return. The former can be calculated using regression estimates of the parameters in the market model. Brown and Warner (1985) have studied the statistical properties of this method, and conclude that its estimates of abnormal returns are reasonably precise, with desirable statistical properties. This is so despite the fact that the market model is a very poor conditional predictor of stock returns: in sample, the average R-squared of Brown and Warner's market model regressions was 0.10. We can do much better with point spreads out of sample.

The technique we use is motivated by the following. Suppose that the outcome of a game – the difference in score – is determined by luck, the home court advantage, the relative ability of the two teams, and idiosyncratic factors. Then score differences can be thought of as being generated by the following:

DP = g(ci, Si, −Sj, e, w)

(6)

Si and Sj are measures of the ability of teams i and j at full strength, ci is the home court advantage of team i, and e and w are random components. Each variable is assumed to be calibrated in terms of points (scoring). We assume that w is "luck" that cannot be anticipated, whereas e includes idiosyncratic factors (matchup problems, length of road trip, injured players, etc.) that may be known. It is assumed that e and w are uncorrelated with the teams' abilities. Based on the first section, we assume that PS = E(DP) and, further, that g is a simple additive function.


Thus PS = ci + Si − Sj + e

(7)

Recall that the object of this exercise is to obtain an estimate of PSPLAY, the point spread that would be observed if the injured player were expected to play. Since Si and Sj are team abilities at full strength, PSPLAY can be constructed if they, along with ci, are known. We estimate them using the following regression:

PStij = Shi · dhi − Svj · dvj + B · Itij + e

(8)

The estimation procedure uses the twenty games for each team played prior to each injury spell.14 dhi is a dummy variable which takes on the value of 1 when team i is the home team, and dvj is 1 when team j is the visitor. Shi and Svj are the coefficients of the team dummies, and are interpreted as the ability indexes. Since Shi and Svi differ, this specification embeds a team-specific home court advantage (ci) in the model. Itij is the difference between teams i and j in the number of injured players. Since we have this data, it is included in the regression to keep the estimates of Shi and Svj free of omitted variable bias, which would otherwise occur if point spreads in the estimation period were affected by the absence of an injured player. e is the idiosyncratic error term which remains after account is taken of observable injuries. Out-of-sample estimates of PSPLAY can be obtained by subtracting the visiting strength of team j from the home strength of team i: PSPLAY = Shi − Svj.

The accuracy of PSPLAY

Before using PSPLAY, we check the model's ability to predict point spreads – for non-injury games – out of sample. There are three criteria:

1 What percentage of the variation in point spreads is explained by out-of-sample predictions from the model?
2 What are the characteristics of the distribution of the forecast errors PS − PSPLAY?
3 Is there a discernible difference between the ability of actual and estimated point spreads to predict game outcomes?
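A stylized version of the dummy-variable regression in equation (8) can be sketched on synthetic data (the team strengths, sample sizes and noise levels below are invented, and the injury regressor Itij is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
n_teams, n_games = 6, 120

# Invented "true" home and visiting strength indexes, in points.
s_home = rng.normal(3.0, 2.0, n_teams)     # S_h embeds the home advantage
s_visit = rng.normal(0.0, 2.0, n_teams)

# Random schedule (home team never plays itself) and noisy point spreads.
home = rng.integers(0, n_teams, n_games)
away = (home + rng.integers(1, n_teams, n_games)) % n_teams
spread = s_home[home] - s_visit[away] + rng.normal(0.0, 1.0, n_games)

# Design matrix: +1 on the home team's home dummy, -1 on the visitor's
# visiting dummy, as in equation (8).
X = np.zeros((n_games, 2 * n_teams))
X[np.arange(n_games), home] = 1.0
X[np.arange(n_games), n_teams + away] = -1.0
coef, *_ = np.linalg.lstsq(X, spread, rcond=None)

# Out-of-sample PSPLAY for a hypothetical team 0 (home) vs team 3 matchup.
psplay_hat = coef[0] - coef[n_teams + 3]
```

Only the differences Shi − Svj are identified (adding a constant to every index leaves the fit unchanged), which is all that constructing PSPLAY requires.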

For each of the six seasons, the model was successively re-estimated using samples including the most recent twenty games for each team. Out-of-sample forecasts (PSPLAY) were then generated for the next five games. This procedure yielded 3,567 predicted point spreads for non-injury games over the six-year period. The variance of actual point spreads for these games is 31.4; the residual variance of the forecast error, PS − PSPLAY, is 4.2. Hence, out of sample, the model explains more than 85 percent of the variation in point spreads. This is almost an order of magnitude greater than what market models used in event studies achieve in sample.
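The variance decomposition quoted above amounts to:

```python
# Share of point-spread variation explained out of sample, from the
# variances reported in the text (31.4 total, 4.2 residual).
explained_share = 1 - 4.2 / 31.4
print(round(explained_share, 3))   # 0.866, i.e. more than 85 percent
```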

[Figure 13.2 near here. Panel A: histogram of the forecast errors PS − PSPLAY for 3,567 non-injury games. Panel B: the same distribution for 700 injury games.]

Figure 13.2 Distribution of forecast errors PS − PSPLAY.
Note: The horizontal axes of these distributions are the magnitude of the difference between the actual point spread and the predicted point spread of the statistical model. Figure 13.2A is constructed by using the statistical model (estimated on a twenty-game sample) to forecast five games ahead, sequentially updating for each of the six seasons. Figure 13.2B displays the distribution of the error in predicting point spreads when players are injured.

The distribution of the forecast errors PS − PSPLAY is depicted in Panel A of Figure 13.2. The mean of the distribution is −0.003, with a standard deviation of 2.06. Less than a quarter of the forecast errors are more than 2 points away from zero. The distribution is thus concentrated on a narrow interval around zero, as it must be if we are to use the model to predict what point spreads would be in the absence of an injury. In contrast, observe the distribution of PS − PSPLAY for games which players miss due to injury, in Panel B of Figure 13.2. This distribution is clearly shifted to the left of its non-injury counterpart. Evidently, the method is precise enough to portray changes in the point spread due to observable factors such as injuries.

Returning to non-injury games, since the actual and predicted point spreads are very close to each other, it follows that their ability to forecast game outcomes is similar. Indeed, for each of the six years, the mean forecast errors of PS and PSPLAY (and their variances) are virtually identical. PSPLAY is thus an accurate and unbiased predictor of point spreads. We can therefore employ it in tests of the injury bias model.

Empirical tests of the partial anticipation hypothesis

The model implies differences in point spread bias depending on whether the spell was long or short, and whether the game is the first game of the spell, in the middle of the spell, or upon the player's return to the lineup. In order to make a sharp distinction, long spells are defined as those lasting ten or more games, and short spells as those lasting five games or less.

Table 13.3 presents summary statistics on the forecast errors by game. Panel A lists the results for short spells, and Panel B for long spells. In addition, Panel C tabulates results for the five-game sequence (for all spells) beginning with the game when the player returns. Column 1 provides the mean forecast error of the actual point spread (PS), which is used to examine the market's ex post bias. The loss to

Table 13.3 Forecast errors of point spreads by game missed (absolute value of t-statistics in parentheses)

A. Injury spells of five games or less
            MFE(PS)        DP − PSPLAY (LOSS)   PS − PSPLAY (DIFF)   N
Game 1      −1.03 (1.18)   −2.06 (2.35)         −1.03 (6.08)         185
Games 2–n   −2.16 (2.25)   −3.83 (3.91)         −1.67 (9.38)         145

B. Injury spells of ten or more games
            MFE(PS)        DP − PSPLAY (LOSS)   PS − PSPLAY (DIFF)   N
Game 1       3.08 (1.39)    0.92 (0.43)         −2.15 (3.22)          14
Games 2–n   −0.56 (0.70)   −2.20 (2.66)         −1.64 (9.53)         193

C. Forecast errors of point spreads upon return (all injury spells)
            MFE(PS)        DP − PSPLAY (LOSS)   PS − PSPLAY (DIFF)   N
Game 1       0.73 (0.92)   −0.16 (0.43)         −0.89 (5.88)         207
Games 2–5   −0.01 (0.01)   −0.17 (0.40)         −0.17 (1.98)         778

Notes: This table calculates forecast errors according to the sequence of the games missed by the injured player. Game 1 is the first game of the injury spell, etc. Panel C tabulates the statistics for the first five games after completion of the injury spell. MFE(PS) is the mean forecast error of the market point spread. LOSS is the average of DP − PSPLAY, that is, a measure of the effect of the player's absence (or lack of it in Panel C) on the game. DIFF is the average of PS − PSPLAY, that is, the point spread reaction in the market to the injury situation.


the team due to the absence of the injured player is LOSS = E(DP) − PSPLAY. The mean forecast error of PSPLAY is thus our estimate of LOSS, which is tabulated in column 2. In column 3 is DIFF, the difference between the actual spread and PSPLAY.

One can see immediately from inspection of Panel B that for long spells the hypothesis of no bias cannot be rejected. For short spells, the model predicts bias for games 2–n, and furthermore that MFE(PS) = 0.5 · LOSS. Indeed, MFE(PS) is negative, and is 0.56 · LOSS, quite close to the predicted value. Hence, the ex post bias in the point spread provides an estimate of the return probability (0.56) which closely approximates the conditional return probability observed in the sample.

The model fares less well in its implications for the first game of the injury spells. The point spread response for short spells, measured by DIFF, is significantly lower for game 1 than in subsequent games, as expected. Yet the spread does drop by a point (t = 6.08), indicating that the model omits information that is factored into point spreads. Note, however, that teams with injured players suffer a significantly lower LOSS (−2.06 vs −3.83 points) in game 1 than in subsequent games. This indicates that the surprise hypothesis has some merit, as the opposing team is unable to take complete advantage of the player's absence in the first game of the injury spell.15

When players return to the lineup, the forecast error of the first game is positive (0.73) but not significant (t = 0.92), and DIFF remains significantly negative (−0.89, t = 5.88). These results are in rough accord with proposition 4.16 The forecast errors of the point spread thereafter are not statistically different from zero (MFE(PS) = −0.01, t = 0.01), as implied by proposition 5.

On three counts the model performs quite well. The point spread is unbiased for long injury spells, where the selection bias problem stemming from partial anticipation is not relevant.
In the middle of short injury spells, point spreads contain an estimate of a player’s return to action that is quite close to that implied by a duration analysis of the injury spells. Finally, point spreads for games played after the player’s return are unbiased. The model fares less well in the transition games surrounding the injury spells, most likely due to the stark probability structure that is assumed.
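The implied return probability quoted above is just the ratio of the two Panel A columns for games 2–n (a back-of-the-envelope check using the Table 13.3 values):

```python
# Table 13.3, Panel A, games 2-n of short spells.
mfe_ps = -2.16    # mean forecast error of the market point spread
loss = -3.83      # mean of DP - PSPLAY, the value lost with the player out

# Under the model, MFE(PS) = p * LOSS, so the ratio recovers the market's
# implicit return probability.
p_implied = mfe_ps / loss
print(round(p_implied, 2))   # 0.56
```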

Conclusion

This chapter began with what was expected to be a simple exercise: analyzing the response of market prices to a sample of repeated events. In the wagering market, this exercise encompasses not just events and price changes, but the outcomes themselves, enabling a comparison between price changes and outcomes that is generally infeasible with stock prices. Hence, the findings of this exercise provide a small piece of evidence on the efficiency of price responses to changes in information that is difficult to replicate in other settings.

Point spreads are biased predictors of game scores during injury events, which is potential evidence of inefficient pricing in the wagering market. An alternative


explanation is based on the idea that many injuries are partially anticipated. A pricing model which combines empirical features of player injuries with the selection method for determining injury games predicts variations in the ex post bias which are consistent with both the data and efficient pricing.

The bottom line is that price responses in the wagering market contain efficient estimates of the value of an injured player. Extracting this signal from a sample of injury games is, unfortunately, somewhat tricky. As with other events of interest in financial economics, one needs to develop knowledge specific to the class of event being studied before interpreting the forecast error. This creates a problem. Explanations of apparently inefficient pricing for a class of events reduce to story-telling exercises unless the stories can be tested. Sorting through the details that are potentially relevant to each class of event can take many papers and many years, as indicated by various takeover puzzles (Malatesta and Thompson, 1985; Bhagat and Jefferis, 1991).

In this chapter we develop such a story using facts about injury durations in the NBA. A model that makes use of these facts has testable implications for the behavior of the ex post forecast bias across and within injury spells. These implications are easily tested, and are generally consistent with the data. Although this is good news for event studies – price responses to injury events are efficient – these results highlight the problems involved in obtaining accurate estimates of value from price changes when the ex ante probability of an event is not well known.

Acknowledgments

I'm grateful to Jerry Dwyer, Ken French, Mason Gerety, Alan Kleidon, Mike Maloney, Robert McCormick, Harold Mulherin, Mike Ryngaert, and seminar participants at Clemson, Colorado, Indiana, Kentucky, Penn State and New Mexico for comments on earlier drafts.

Notes

1 The symmetry condition is rarely examined, however, and in some cases this is critical (baseball over/under wagers are an example). If the ex ante distribution is not symmetric, then the distribution of the forecast errors will be skewed. In this case the mean forecast error will not be zero even when PS = median(DP), and hence a test of equation (3) is inappropriate.
2 Tryfos et al. (1984) were the first to systematically examine this bound.
3 Rejection of equation (3) would motivate consideration of transaction costs. For example, simple betting rules proposed in Vergin and Scriabin (1978) are re-evaluated by Tryfos et al. (1984), who use statistical tests which explicitly recognize the transaction costs. The conclusion that these rules are profitable is overturned by these tests.
4 The point spread data are from Bob Livingston's The Basketball Scoreboard Book. There were 943 games played in each of the six seasons. Point spreads were not offered for some games. The scores were obtained from the Sporting News Official NBA Guide.
5 Henceforth subscripts are dropped except where needed.


6 The skewness coefficients m3/m2^(3/2) are 0.02 (0.05) and 0.10 (0.05) for the point difference and forecast error distributions, respectively (m3 and m2 are the third and second moments of the distribution, with standard errors of the coefficients in parentheses). This coefficient is zero for a normal distribution, which is the usual standard of comparison.
7 In one case, the player checked into a drug rehabilitation clinic and missed several games. Although this is not an injury in a precisely defined sense, these games were retained in the injury game sample for the purposes of simplicity of definition. If a game is missed, it is assumed to be an injury game. Differences in injury severity and so on are not commonly divided by bright lines, so we adopt a simple definition here as well.
8 There is ample evidence in newspaper reports to support this. One example is the sequence of hamstring injuries to World B. Free in 1985. Free is quoted in the Cleveland Plain Dealer of 11/21/85: "I had the left one for about two weeks but I didn't say anything. Then I had put too much pressure on the right one and hurt it. . . . Things like that happen . . . ."
9 For example, consider the following remark of Clemson University's sports information director, referring to star running back Terry Allen: "It's the classic case of not knowing if he can play until he warms up" (Greenville News, Oct. 21, 1989). Allen suited up, but didn't play. He returned to the lineup later in the season.
10 Censored spells are those terminated by the end of the season rather than a return to action.
11 The technique is described in Kiefer (1988), especially pp. 662–3. Since the sample used was defined by excluding spells of six games or longer, this estimate is biased upward. This exclusion is the only practical means of separating incapacitating from nagging injuries. The bias induced is very slight, however, since only 0.0312 of the sample would be expected to incur spells of six games or more if the return probability were indeed 0.5. As a means of evaluating the bias, an estimate of the hazard was calculated by treating all five-game spells (there are only eight such games in the 219-game sample) as censored at four games, that is, as being of length >= four games rather than five. The estimate obtained is 0.505 (std error = 0.036), virtually the same as that reported in the text.
12 In contrast, the return rates implied by p = 0.4 {0.64, 0.784, 0.870} are consistently below those observed. The rates given by p = 0.6 {0.84, 0.914, 0.956} are slightly above that observed for the second game, but close to the mark for the third and fourth games.
13 This is obviously untrue, but is a convenient way of imposing the condition that this probability is the same for players on both teams. Nagging injuries are a factor on both sides of the score.
14 Diagnostic checks indicated that ten-game samples yielded accurate estimates as well. Hence, for injury spells commencing in the eleventh through twentieth game of each season, the maximum available number of pre-injury games was used. Summary statistics for regressions using samples of the first ten and twenty games of the 1982 season are presented in Appendix A.
15 Teams attempt to keep certain injuries secret for exactly this reason. A recent example is the New York Giants' secrecy concerning an injury to quarterback Phil Simms' throwing hand, suffered in practice prior to a 1990 NFL playoff game. Advance knowledge can suggest successful strategies to the opponent.
16 The magnitude of the bias (in absolute value) is less in the return game than during the spell, indicating that the return probability may increase with spell duration. The difference in bias can be traced to a decline in DIFF of 0.78 points (t = 3.23) in the return game relative to games 2–n of the spell. Note that this decline is sharply reduced (to 0.40 points, t = 1.37) if one compares the last game in a short spell to the return game. Recall also that the data on the actual injuries hint at an increasing hazard for nagging injuries, which seems reasonable.


Appendix

1982 season. Observations: 230. R-squared: 0.969.

Variable   Estimate     t-value
HAWKS      −3.334359    −6.139491
CELTICS     2.910794     4.645186
BULLS      −6.229776   −11.396556
CAVS      −11.091149   −20.562050
MAVS       −4.196604    −7.130448
NUGGETS    −4.830953    −8.943902
PISTONS    −4.356497    −7.775327
WARRIORS   −6.065996    −9.978129
ROCKETS   −11.510297   −17.661852
PACERS     −6.846988   −11.348433
CLIPPERS   −8.742780   −15.109341
LAKERS      3.052869     5.209969
BUCKS       0.430284     0.773051
NETS       −3.712991    −6.807746
KNICKS     −6.676410   −11.700208
SIXERS      2.710749     5.012054
SUNS       −2.643456    −4.196462
BLAZERS    −2.265396    −4.150938
KINGS      −3.759032    −6.027770
SPURS      −2.050936    −3.536199
SONICS      1.049140     1.616020
JAZZ       −8.272100   −15.026895
BULLETS    −3.932458    −6.694430
HO-HAWKS    0.866018     1.428633
HO-CELTI    6.213943    11.137314
HO-BULLS   −1.819314    −3.059896
HO-CAVS    −9.438899   −16.256515
HO-MAVS    −2.065136    −3.323394
HO-NUGGE    0.512874     0.821924
HO-PISTO   −1.133018    −1.886866
HO-WARRI   −1.040707    −1.787460
HO-ROCKE   −8.786952   −15.468773
HO-PACER   −3.798072    −6.649878
HO-CLIPP   −5.959837   −10.205729
HO-LAKER    7.183994    12.348228
HO-BUCKS    3.482937     6.311453
HO-NETS     0.121674     0.197153
HO-KNICK   −3.169490    −5.575937
HO-SIXER    6.855646    11.335485
HO-SUNS     2.692941     4.853336
HO-BLAZE    0.825848     1.364113
HO-KINGS   −0.813232    −1.422055
HO-SPURS    1.410239     2.455259
HO-SONIC    4.058689     7.286932
HO-JAZZ    −5.331165    −8.976392
References

Bhagat, Sanjai and Jefferis, Richard H., 1991, "Voting power in the proxy process: the case of antitakeover charter amendments," Journal of Financial Economics 20, 193–225.


Brown, Stephen J. and Warner, Jerold B., 1985, "Using daily stock returns: the case of event studies," Journal of Financial Economics 14, 3–31.
Dare, William H. and McDonald, S. Scott, 1996, "A generalized model for testing the home and favorite team advantage in point spread markets," Journal of Financial Economics 40, 295–318.
Golec, Joseph and Tamarkin, Maurry, 1991, "The degree of inefficiency in the football betting market: statistical tests," Journal of Financial Economics 30, 311–323.
Kiefer, Nicholas M., 1988, "Economic duration data and hazard functions," Journal of Economic Literature 26, 646–679.
Malatesta, Paul H. and Thompson, Rex, 1985, "Partially anticipated events," Journal of Financial Economics 14, 237–250.
Sauer, Raymond, 1998, "The economics of wagering markets," Journal of Economic Literature, forthcoming in December.

14 Is the UK National Lottery experiencing lottery fatigue?

Stephen Creigh-Tyte and Lisa Farrell

In this chapter recent innovations to the UK National Lottery on-line lotto game are considered. We suggest that innovations are necessary to prevent players from becoming tired of the game and therefore to keep sales healthy. We also examine how the lottery operators have tried to stimulate the wider betting and gaming market, and to maintain interest in the on-line game, through the introduction of peripheral games and products. We conclude that the UK lottery market has been stimulated and expanded in a manner consistent with the available evidence from lotteries elsewhere in the world.

Introduction

This chapter addresses the concept of lottery fatigue in the context of the UK National Lottery games, which were launched at the end of 1994. Creigh-Tyte (1997) provides an overview of the policy related to the UK National Lottery's introduction, and Creigh-Tyte and Farrell (1998) give an initial overview of the economic issues. Lottery fatigue is the phenomenon, experienced by many state and national lotteries, whereby players tire of lottery games (reflected in a downward trend in sales) and so require continual stimulation to entice them to play (see Clotfelter and Cook, 1990, for a discussion of the US experience up to the late 1980s). This is the usual explanation given for the diversification of lottery products. As the US National Gambling Impact Study Commission (1999) comments: 'Revenues typically expand dramatically after the lottery's introduction, then level off, and even begin to decline. This "boredom" factor has led to the constant introduction of new games to maintain or increase revenues.'

In this chapter we review the latest facts and figures pertaining to the sale of National Lottery games and the recent economic research on the lottery games. To date there has been no single empirical analysis of the impact of the launch of peripheral games on the main on-line game, so we draw what evidence we can from the available research. We begin by looking at the performance of the on-line game since its launch in November 1994. Then we look at the Thunderball game, consider the impact of special one-off draws and give an introduction to the latest lottery game, Lottery Extra. A brief introduction to the market for scratch cards is then provided, followed by the conclusions.

The on-line game

The on-line game is the central product in the National Lottery range. It was launched in November 1994 and has been running continually (on a once-weekly and more recently a twice-weekly basis) ever since. Given that 2001 was the final year of operations under the initial licence, it is an appropriate time to review the game's performance. The on-line game is the largest National Lottery product in terms of weekly sales, at around £70–75 million in a normal week (i.e. a week with no rollover draws). Figure 14.1 shows the weekly sales figures by draw from the game's launch until 31 March 2002. The spikes in the distribution represent weeks that contained at least one rollover or superdraw, and draw 117 is the first of the midweek draws. Whilst sales per draw have fallen since the introduction of the midweek draw, the weekly sales figures are higher than when there was just a single Saturday draw.

Figure 14.1 National Lottery on-line weekly ticket sales.

Conscious selection

One way to ensure the long-term success of the on-line game is to encourage players to use systems to select their numbers or to play the same numbers every week. This locks players into the game and makes them more likely to play regularly. Evidence that players do behave in this way comes from a startling feature of the on-line game: it exhibits many more rollovers than could have been generated by statistical chance, as can be seen from Figure 14.1. This can only arise from individuals choosing the numbers on the lottery tickets they buy in a non-uniform way.1 That is, many more individuals choose the same combinations of numbers than would occur by chance if individuals selected their numbers uniformly. The result is that the probability distribution of numbers chosen does not follow a uniform distribution, whereby each number is chosen with probability one in forty-nine. Thus, the tickets sold cover a smaller set of possible combinations than would have been the case had individuals chosen their numbers in a uniform way, and there will be more occasions when there are no jackpot prize winners.2

The implications of this non-uniformity and (unintentional) co-ordination between players are important. If players realise that such non-uniformity is occurring then they will expect the return to holding a ticket to be smaller (for any given size of rollover) than it would be if individuals were choosing their numbers uniformly. Essentially, the non-uniformity increases the probability that there will be a rollover, and this changes the behaviour of potential ticket purchasers (provided they are aware of it). Haigh (1996) presents evidence of conscious selection among players, and Farrell et al. (2000a) show that whilst conscious selection can be detected it has little impact on estimates of the price elasticity of demand for lottery tickets.

In contrast, most lotteries also offer a random number selector that players can use to pick their numbers. In the UK this is called 'Lucky Dip'; elsewhere it is usually called 'Quick Pick'. This facility is not, however, normally introduced until players have had a chance to develop a system for selecting their numbers, and so it may simply attract marginal players who do not want to invest much time in the purchase of a ticket, or those who have set numbers but also try a Lucky Dip ticket.
Simon (1999) argues that this is one reason why Camelot may have delayed introducing the Lucky Dip facility for a year, to ‘entrap’ players who feel they cannot stop buying tickets with a certain number combination because they have already played it for a long period. In the case of the UK game, the Lucky Dip was not introduced until March 1996 and represents the ﬁrst innovation in the game intended to regenerate interest from players who might have been losing interest. It represents a new way for players to play the lottery.3
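The scale of the rollover anomaly is easy to check. Under uniform selection the probability that a given draw rolls over is (1 − p6)^C, where p6 = 1/C(49,6) ≈ 1/13,983,816. The sketch below uses an illustrative round figure for single-draw ticket sales C, not an official Camelot number:

```python
from math import comb, exp

# Probability that one ticket matches the 6/49 jackpot combination
p6 = 1 / comb(49, 6)            # comb(49, 6) = 13,983,816

# Illustrative single-draw ticket sales (round number, not Camelot data)
C = 45_000_000

# Probability of a rollover if every ticket were a uniform random combination
p_rollover = (1 - p6) ** C

# For large C this is well approximated by exp(-C * p6)
print(round(p_rollover, 3), round(exp(-C * p6), 3))   # → 0.04 0.04
```

At this sales level a rollover should occur in only about one draw in twenty-five under uniform play; the far higher observed frequency of rollovers is what reveals the conscious selection described above.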

The importance of rollovers

Rollovers are good for the game's stake for two reasons: they attract high levels of sales, and successive draws also see increased sales. Farrell et al. (2000b) show that the effect of a rollover on sales lasts for up to five draws following the rollover. They use a dynamic model specification to estimate both the long- and short-run elasticity of the demand for tickets. The short-run elasticity simply tells us how demand changes in a single period following the price change, whereas the long-run elasticity captures the dynamic response of demand to the price change after all responses and lags have worked through the system. The size of the long-run elasticity is of interest as it can signal addiction among players. The general hypothesis is that demand for an addictive good will be higher the higher demand was in the recent past.4 There is indeed evidence of addiction among lottery players: the short-run (static) elasticity is smaller than the long-run (dynamic) elasticity. The long-run elasticity takes account of the fact that price changes have more than a single-period effect, and is found to be approximately unity. Moreover, since rollovers boost sales they may be a cause of addiction.

Sales following a rollover are higher than the level of sales prior to the rollover, and this is known in the industry as the 'halo' effect. Thus, rollovers have a greater impact than just increasing sales in the week in which they occur; there is a knock-on effect on the following draws' sales. Players are attracted by the rollover and either new players enter or existing players play more, or both; after the rollover those who entered remain and those who increased their purchases continue to purchase at the higher level. Shapira and Venezia (1992) find that demand for the Israeli lotto increased in rollover weeks, and this added enthusiasm for the lotto carried over to the following week's draw. In the UK, Farrell et al. (2000b) show that the halo decays within 5–6 draws, by which point sales have returned to their pre-rollover level (Figure 14.2). However, a close succession of rollovers would have the effect of causing sales to ratchet upwards. The effect of rollovers on the game is, therefore, very important and complex. Were it not for the presence of rollovers, sales would have a strong negative trend: players would soon tire of the game, experiencing lottery fatigue. Estimates by Farrell et al. (2000b) suggest that the half-life of the UK game would be 150 draws if there were no rollovers; that is, sales would halve every three years of weekly draws. Mikesell (1994) found that in the case of US lotteries, sales tend to have peaked after about ten years of operation.
Rollovers are therefore essential for stimulating interest in the game and this is reﬂected in the amount of advertising that the operators give to rollover draws and the fact that lottery operators even create artiﬁcial rollovers in the form of ‘superdraws’.
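The 150-draw half-life estimated by Farrell et al. (2000b) translates into a per-draw retention rate via S_t = S_0 · 0.5^(t/150); only the half-life itself is taken from the study, the rest is mechanical arithmetic:

```python
# Half-life of sales, in draws, absent rollovers (Farrell et al., 2000b)
half_life = 150

# Per-draw retention implied by exponential decay S_t = S_0 * 0.5 ** (t / half_life)
per_draw = 0.5 ** (1 / half_life)

# Fraction of launch sales remaining after one year (52 weekly draws)
one_year = 0.5 ** (52 / half_life)

print(round(per_draw, 4), round(one_year, 3))   # → 0.9954 0.786
```

That is, each draw would retain roughly 99.5 per cent of the previous draw's sales, losing about a fifth of sales per year, which is why the rollover-driven ratchet described above matters so much.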

Figure 14.2 The halo effect for the case of the UK National Lottery (actual and fitted decay of sales, as a proportion of rollover-week sales, over the weeks following a rollover, where week 1 is the rollover week).

The choice of take-out rate

The 'price' elasticity of demand for lottery tickets shows how demand varies with the expected value of the return from a ticket, and it is this elasticity that is relevant in assessing the merits of the design of the lottery and the attractiveness of potential reforms to that design. That is, it tells us how demand would vary in response to changes in the design of the lottery – in particular, the tax rate on the lottery or the nature of the prizes. Lotteries are typically operated to maximise the resulting tax (or 'good causes') revenue, which is a fixed proportion of sales. Thus, knowledge of the price elasticity is central to choosing an appropriate take-out rate (see Appendix). The methodology used to estimate the price elasticity of demand for lottery tickets is relatively simple. Price variation is derived from the fact that lottery tickets are a better bet in rollover than in normal weeks. The Appendix to this chapter shows how the expected value of a lottery ticket is derived. Previous work (outside the UK) has attempted to estimate this elasticity by looking at how demand varies in response to actual changes in lottery design across time, or differences across states.5 However, such changes have been few and far between, and limited attempts have been made to control for other changes and differences that may have occurred. An important exception is Clotfelter and Cook (1990), who estimate the elasticity of sales with respect to the expected value of holding a ticket.6 The current estimates for the UK also exploit the changes in the return to a ticket induced by 'rollovers', which occur when the major prize (the jackpot) in one draw is not won and is added to the jackpot prize pool for the subsequent draw. This changes the expected return to a ticket in a very specific way. In particular, the expected return rises in a way that cannot be arbitraged away by the behaviour of agents.

The elasticities generated by this method are published in Farrell et al. (2000) and Forrest et al. (2000). Farrell et al. report estimates of −1.05 (in the short run) and −1.55 (in the long run).7 Gulley and Scott (1989) report an estimate of −1.03.8 Although Europe Economics (2000) argues that 'studies using American data typically find a lower elasticity than for the UK, with an estimated elasticity closer to one, which is the revenue maximising level of elasticity', in fact the UK results are broadly similar to those found for the US state lotteries. Gulley and Scott (1989) also use price variation arising from changes in the expected value caused by rollovers. They report elasticities of −1.15 and −1.2 for the Kentucky and Ohio lotteries, and an elasticity of −1.92 for the multi-state Massmillions lotteries. The long-run elasticity given in Farrell et al. suggests that the take-out rate could be lowered to increase sales revenue and thus the money going to good causes. However, the short-run elasticity and the estimate of Forrest et al. are not statistically significantly different from one, suggesting that the current take-out rate is about right.
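The link between these elasticity estimates and the choice of take-out rate can be made concrete with a stylised constant-elasticity demand curve, C(τ) = A·τ^ε, where the take-out τ is treated as the effective price of a ticket. The functional form and scale are illustrative assumptions, not part of the studies cited; only the elasticity values are theirs:

```python
def revenue(tau: float, eps: float, scale: float = 1.0) -> float:
    """Stylised tax/good-causes revenue: take-out times constant-elasticity demand."""
    demand = scale * tau ** eps   # C(tau) = A * tau ** eps
    return tau * demand

# Unit elasticity: revenue is invariant to the take-out rate
print(round(revenue(0.3, -1.0), 6) == round(revenue(0.6, -1.0), 6))   # → True

# Long-run estimate of -1.55: demand is elastic, so a lower take-out raises revenue
print(revenue(0.40, -1.55) > revenue(0.50, -1.55))                    # → True
```

With inelastic demand the inequality reverses, which is why an elasticity of (minus) one is the revenue-maximising point and why estimates close to one support the current take-out rate.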

The introduction of the midweek draw

Over a hundred lotteries worldwide run midweek draws, and the majority of these are held on a Wednesday. In general, innovations to games are an endogenous response to flagging sales. The midweek draw has seen lower sales than the Saturday draw, but total weekly sales have risen (as can be seen from Figure 14.1). This second draw is a replica of the Saturday draw and therefore ensures that players who 'own' a set of Saturday night numbers will become locked into the midweek draw as well.

An interesting question is whether the price elasticity of demand across the two draws is the same, as this determines whether the optimal take-out rate for each draw should be the same. Forrest et al. (2000) calculate that the Saturday elasticity is −1.04 and the Wednesday elasticity is −0.88, and find neither estimate to be statistically significantly different from one. Farrell et al. (1998) also test whether players respond in the same way to price incentives across the lotto draws (i.e. Saturday and Wednesday). When considering the separate samples it appears that demand on Wednesdays is less elastic than demand on Saturdays. Examination of the associated standard errors reported in the paper, however, shows that the elasticities are not statistically significantly different from each other, and this explains why none of the interaction terms in their model, indicating a change in the slope of the demand curve over the two types of draw for the full sample regression, are significant. The significance of the Wednesday dummy in the full sample regression implies that there is a change of intercept: sales are significantly lower on Wednesdays than Saturdays. In general, the results suggest that the demand curve shifts back towards the origin on Wednesdays, but the elasticity of demand is unchanged. Furthermore, there is no evidence that players engage in inter-temporal substitution, given that fewer people play on Wednesday rollovers than on Saturday rollovers despite the higher expected return.
To date, lower sales on Wednesdays have been continually boosted through frequent topped-up jackpot 'superdraws', but it is important to remember that the greater the frequency of the 'superdraw', the quicker players will tire of this innovation. It is, therefore, clear that fewer people play on Wednesdays than Saturdays, but the introduction of the midweek draw has been successful in increasing the overall level of sales. A natural question is whether there is an optimal frequency of draws. To date there is no research on how the frequency of draws affects participation in the game. Logically, however, the closer together the draws, the easier it is to substitute play inter-temporally across them. This could result in low levels of play in normal draws as players wait for rollovers to occur. Whilst low levels of play in normal draws will increase the number of rollovers, the size of each rollover will be small, and so the effect of a rollover in attracting high sales diminishes.

Thunderball

The third innovation to the current on-line game has been the introduction of the Thunderball game. This numbers game is different in that it has the format of a typical lottery but is not pari-mutuel.9 An interesting feature of the paper by Forrest et al. (2000) is what the time trends reveal for the on-line game. The linear trend in the model is positive, reflecting growing sales as the game was introduced and the boost to sales given by the introduction of the midweek draw. However, the quadratic trend term is more interesting (it is negative) and suggests that interest in the midweek draw fell around June 1998. Camelot's response to this falling interest was the introduction of the Thunderball game. Figure 14.3 shows the sales path for the Thunderball game since its first draw on Saturday 12th June, 1999: initial sales of £6.4 million per game have trended downwards to £5.2 million in March 2002.

Figure 14.3 Thunderball sales.

Whilst the odds of winning Thunderball are much better than those for the on-line game, the former is still considerably less popular. This may in part be explained by the fact that the value of the top prize is relatively small compared to that offered by the on-line game; see Table 14.1.

Table 14.1 Ways to win at Thunderball

Winning selection              Odds             Prize value (£)
Match 5 and the Thunderball    1 : 3,895,584    250,000
Match 5                        1 : 299,661      5,000
Match 4 and the Thunderball    1 : 26,866       250
Match 4                        1 : 2,067        100
Match 3 and the Thunderball    1 : 960          20
Match 3                        1 : 74           10
Match 2 and the Thunderball    1 : 107          10
Match 1 and the Thunderball    1 : 33           5

Source: http://www.national-lottery.co.uk

Current research by Walker and Young shows, in the context of the on-line game, that the skew in the prize distribution (which allows players to receive very large jackpots) is a key factor in the game's success. Research on other betting markets, by Golec and Tamarkin (1998) for racetrack bettors and Garrett and Sobel (1999) for US lottery games, also shows that bettors like skewness in the distribution of prizes. The Thunderball game does have a skewed prize distribution, but it appears that it is not sufficiently skewed: the value of the top prize is too small to attract players to the game. One of the important lessons learnt from the US state lotteries is that single large games (such as the multi-state lotteries) generate greater amounts of revenue than numerous small games. It is, therefore, not surprising to find that the Thunderball game attracts sales of only around £5 million a week, compared to around £70–75 million for the on-line game. Such games may pick up marginal sales, but care must be taken not to allow them simply to divert resources away from the main game, as this would be detrimental to the long-term future of the main game.

Special one-off draws

Christmas 2000 and 2001 saw further innovations to the format of the standard on-line game. These came in the form of a pari-mutuel game where players paid £5 for entry into two draws. The idea was to have two very large jackpots that would generate extra interest and revive a general interest in lottery play. Big Draw 2000 had two draws: one on 31st December, 1999 and one on 1st January, 2000. Big Draw 2001 had two draws on 1st January, 2001: one at 12.12 a.m. and the second at 12.31 a.m. These games are in part copies of Italy's Christmas lottery draw, which attracts a huge number of players. Table 14.2 shows the number of winners and the size of the prizes for Big Draw 2001.

Table 14.2 Big Draw 2001

Category     Prize (£)    Winners    Total (£)    Percentages
Jackpot      0            0          0            0.0
4 + bonus    0            0          0            0.0
4 match      54,587       43         2,347,241    25.9
3 + bonus    2,650        103        272,950      3.0
3 match      260          3,212      835,120      9.2
2 + bonus    163          3,452      562,676      6.2
2 match      57           88,676     5,054,532    55.7
Total                     95,486     9,072,519    100.0

Source: http://www.national-lottery.co.uk
Notes
Game 1: winning years drawn at 12.12 a.m. GMT on Monday 1st January, 2001. Sorted order: 1909 1920 1931 1982 1992; Bonus: 1911.
Game 2: winning years drawn at 12.31 a.m. GMT on Monday 1st January, 2001. First year: 1,620; Second year: 2,438. Number of game 2 jackpot (£1 million) winners: 5.

It is interesting that, for the Big Draw, whilst total sales were £24,739,425, the total number of tickets sold was just less than 5 million (because each ticket cost £5). This is around the number of marginal tickets sold in the Thunderball game. Moreover, given the small number of tickets sold, the probability of having no jackpot winners was large, and not surprisingly the game did not generate any winners of the top two prizes. This illustrates a point made throughout this chapter: the design of the game must match the size of the playing population. As a one-off game it did generate a large amount of revenue, but players are disheartened by games that are too hard to win (especially at such high ticket prices), and this lack of enthusiasm may have dangerous effects on the main on-line game. Luckily, draw 2 had a much higher probability of generating winners given the number of players, and successfully produced five millionaires.
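Both the ticket count and the no-winner risk follow from simple arithmetic. The jackpot odds for the Big Draw are not given in the chapter, so the odds m below are a purely hypothetical value chosen to illustrate why a small playing population makes an unwon jackpot likely:

```python
from math import exp

# Tickets sold follow directly from sales revenue at £5 per ticket
sales = 24_739_425
tickets = sales / 5
print(round(tickets))          # → 4947885, just under 5 million tickets

# Probability that nobody wins a jackpot with odds 1 in m from N tickets:
# (1 - 1/m) ** N, well approximated by exp(-N/m).  m is HYPOTHETICAL here.
m = 60_000_000
p_no_winner = (1 - 1 / m) ** tickets
print(round(p_no_winner, 2))   # → 0.92 with these illustrative odds
```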

Lottery Extra

This is the latest innovation to the on-line game. It exploits the fact that players like large jackpots by allowing a jackpot that is not won to roll over into the following week's jackpot prize pool. This continues until the jackpot grows to £50 million, at which point it is shared by the second-prize winners. Tickets cost £1 and the draw uses the same numbers as the on-line game. Players simply choose whether or not to enter the extra draw, and then whether to use the same numbers that they used for the main game or a lucky dip selection. Lottery Extra saw its first draw on Wednesday 15th November, 2000.

Figure 14.4 shows that the level of sales for this new game is currently around £1.2 million on a Saturday and £0.8 million on a Wednesday. Again, the game does not appear to have a wide appeal but is picking up some sales.

Figure 14.4 Lottery Extra sales.

The key problem is whether these are new sales or simply expenditure substituted away from the main on-line game. The potential danger of all the innovations that take the form of peripheral games is that if they simply divert expenditure from the main game then total sales will not rise and the costs of launching the new game are lost. More importantly, there are economies of scale in lottery markets, and competition is detrimental to total market sales (even if that competition comes from games operated by the same company). Large jackpots attract players, so lots of small games effectively destroy the market. Innovation is necessary to stimulate interest, but too many peripheral games are a dangerous means of trying to regenerate interest.

Instants

National Lottery scratch cards, called Instants, were launched at the end of March 1995 and cost £1 or £2 depending on the card. Sales to date suggest that the UK market for Instants is quite small. Figure 14.5 shows the weekly sales figures for scratch cards. When they were first released, sales peaked at just over £40 million per week in May 1995. Currently sales have fallen to as little as £10.5 million a week. This revenue is small compared to that generated by the on-line game (although greater than the sales that Thunderball or Lottery Extra have generated). The challenging question is what potential there is to extend this market. There is very little analysis of the market for scratch cards within the UK. This is mainly due to the poor quality of the available data. Surveys of scratch-card players persistently under-record the level of activity. Analysis of the aggregate data is hindered by the fact that there are many games, each offering different returns.

Figure 14.5 Instants sales.

Table 14.3 National Lottery stakes (£ million)

Financial year        Lotto game   Instants   Easy Play   Lottery Extra   Thunderball   Big Draw   Gross stake
1994–95               1,156.8      33.9       0           0               0             0          1,190.6
1995–96               3,693.7      1,523.3    0           0               0             0          5,217.0
1996–97               3,846.6      876.5      0           0               0             0          4,723.2
1997–98               4,712.7      801.0      0           0               0             0          5,513.8
1998–99               4,535.9      668.7      23.1        0               0             0          5,227.8
1999–00               4,257.0      560.8      1.3         0               194.1         80.6       5,093.8
2000–01               4,124.2      546.1      0           48.1            240.2         24.7       4,983.3
Total to end 2000–01  26,326.9     5,010.3    24.4        48.1            434.3         105.3      31,949.3

Source: National Lottery Commission

New experiments offering cars and other goods, rather than money prizes, are currently being tested in the marketplace. Camelot remains diligent in its attempts to stimulate and expand this market. This is important: the scope for innovations to the on-line game is limited, so over time sustaining the value of contributions to good causes will become increasingly focussed on expanding other areas of the lottery product range.

Conclusion

All the indicators show that since 1994 Camelot has followed the pre-existing models of lottery operation. It has continually innovated in order to stimulate demand and prevent lottery fatigue from impacting on sales and the revenue for good causes. However, the potential dangers of too many peripheral games have been highlighted. Given the downward trend in lottery sales we should expect to see a continued high level of innovation in the game. Camelot enjoys the advantage of a monopoly market; however, although J. R. Hicks once characterised a 'quiet life' as the greatest monopoly profit, the UK National Lottery is not an easy market. It is a demanding market with no room for complacency.

The Gambling Review Body (chaired by Sir Alan Budd) began work in April 2000 with the purpose of reviewing the 'current state' of the UK gambling industry; it published its findings in the Gambling Review Report in July 2001, including 176 recommendations. While consideration of the National Lottery was expressly excluded from its brief, the Report has clear implications for the Lottery (and the rest of the gambling industry). The most significant recommendations for the Lottery are:

a that betting on the UK National Lottery be permitted;
b that limits on the size of prizes and the maximum annual proceeds should be removed for societies' lotteries, and that rollovers should be permitted; and
c that there should be no statutory limits on the stakes and prizes in bingo games, and that rollovers should be permitted.

The thrust of the Report is to 'extend choice for adult gamblers' and simplify gambling regulation, while ensuring that children and other vulnerable persons are protected, permitted forms of gambling are kept crime-free and players know what to expect. As such, the Budd Report will (all else being equal) increase the level of competition within the various gambling and leisure sectors for consumers'

Table 14.4 Trends in betting and gaming expenditure

Net betting and gaming expenditure   £ million   % change
All betting and gaming
1991–92                              3,181       —
1992–93                              3,296       3.6
1993–94                              3,517       6.7
1994–95                              4,324       22.9
1995–96                              6,034       39.5
1996–97                              5,898       −2.3
1997–98                              6,414       8.7
1998–99                              6,550       2.1
1999–2000                            6,587       0.6
2000–01                              7,254       10.1
On lotterya
1991–92                              0           —
1992–93                              0           —
1993–94                              0           —
1994–95                              660         —
1995–96                              2,719       312.0
1996–97                              2,425       −10.8
1997–98                              2,785       14.8
1998–99                              2,615       −6.1
1999–2000                            2,547       −2.6
2000–01                              2,492       −2.2
On other betting
1991–92                              3,181       —
1992–93                              3,296       3.6
1993–94                              3,517       6.7
1994–95                              3,664       4.2
1995–96                              3,315       −9.5
1996–97                              3,473       4.8
1997–98                              3,629       4.5
1998–99                              3,935       8.4
1999–2000                            4,040       2.7
2000–01                              4,762       17.9

Source: ONS
Note
a Calculated as 50 per cent of the National Lottery stake.


Table 14.5 Trends in betting and gaming expenditure relative to total consumer spending

Net betting and gaming expenditure   £ million   Share of total consumer expenditure (%)
On lotterya
1991–92                              0           0
1992–93                              0           0
1993–94                              0           0
1994–95                              660         0.2
1995–96                              2,719       0.6
1996–97                              2,425       0.5
1997–98                              2,785       0.6
1998–99                              2,615       0.5
1999–2000                            2,547       0.5
2000–01                              2,492       0.5
On other betting
1991–92                              3,181       0.9
1992–93                              3,296       0.9
1993–94                              3,517       0.9
1994–95                              3,664       0.9
1995–96                              3,315       0.8
1996–97                              3,473       0.7
1997–98                              3,629       0.7
1998–99                              3,935       0.7
1999–2000                            4,040       0.7
2000–01                              4,762       0.7

Source: ONS
Note
a Calculated as 50 per cent of the National Lottery stake.

discretionary spending, and also the competitiveness of the gambling industry as a whole with other non-gambling sectors. As shown in Table 14.3, total National Lottery ticket sales since 1994 have reached almost £32 billion over the seven financial years to 2000–01. However, over 82 per cent of this total is attributable to the 6/49 lotto game, with over 15 per cent due to scratch cards. Preserving the core lotto game stake is clearly a priority in maintaining sales and hence the good causes funding streams. Moreover, the National Lottery exists within an increasingly competitive UK betting and gaming sector. As shown in Table 14.4, consumer expenditure on non-lottery betting and gaming has risen (almost) continually between 1994–95 and 2000–01. Although the share of such non-lottery betting and gaming in total consumer spending has fallen from 0.9 per cent in the year of the National Lottery's launch to 0.7 per cent in 2000–01, the overall share of all betting and gaming in consumers' expenditure has risen from 0.9 per cent in 1993–94 to 1.2 per cent in 2000–01; see Table 14.5. Therefore the challenge facing the lottery in the near future is to learn to be adaptive and innovative in an increasingly competitive environment.
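The product shares quoted above follow directly from the cumulative stakes in Table 14.3; a quick check of the arithmetic:

```python
# Cumulative stakes to end 2000-01, in £ million, from Table 14.3
lotto = 26_326.9
instants = 5_010.3
gross = 31_949.3

print(round(100 * lotto / gross, 1))      # → 82.4, the "over 82 per cent" lotto share
print(round(100 * instants / gross, 1))   # → 15.7, the "over 15 per cent" scratch-card share
```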


Appendix: the expected value of a lottery ticket The formal expression for the expected value of a lottery ticket was ﬁrst derived in the work of Sprowls (1970) and has subsequently been adopted and reﬁned by Lim (1995) and Scoggins (1995). Here we will consider the case where players are assumed to select their numbers uniformly.10 The size of the jackpot is equal to the sales revenue times the proportion of ticket sales in draw t going to the jackpot prize pool and plus any rolled over prize money from the previous draw. We denote Ct , as the sales revenue (aggregate consumption) and Rt as the amount rolled over; which for most draws will be zero. Finally π6t is the proportion of ticket sales in week t going to the jackpot prize pool. The size of the jackpot in draw t, is thus expressed as Jt (π6t , Rt ; Ct ) = Rt + π6t Ct

(1)

The probability that this jackpot is won, p6, is determined by the characteristics of the game. For the UK National Lottery, players must select six numbers (x) from forty-nine (m) and the jackpot is shared among those players who selected the winning six-number combination drawn at random without replacement.11 The probability of there being a rollover is equal to the probability that none of the players wins the jackpot, (1 − p6)^Ct. In the case of the UK National Lottery there are also smaller prizes awarded for matching any five, four or three of the main numbers and a further prize pool for matching any five main numbers plus a seventh bonus ball (5 + b). The expected value of holding a lottery ticket taking account of the smaller prizes is therefore12

V(Rt, π6t, πjt, p6; Ct) = {[1 − (1 − p6)^Ct][π6t Ct + Rt] + Σj πjt Ct}/Ct    (2)

where j = 3, 4, 5, 5 + b; p6 is the probability of a single ticket winning the jackpot; pj is the probability of correctly selecting any j numbers; π6t is the proportion of ticket sales in draw t allocated to the jackpot prize pool; and πjt is the proportion of ticket sales going to the jth prize pool in draw t, so that Σj πjt + π6t = (1 − τ), j = 3, 4, 5, 5 + b, where τ represents the take-out. The take-out is the proportion of sales revenue not returned in the form of prizes, which covers the operator's costs, profits, tax and, in the UK, contributions to a number of good causes.13 It is straightforward (see Farrell and Walker, 1996) to show that VR > 0, Vp6 > 0 and Vτ < 0, where subscripts indicate partial derivatives. The effect of the level of sales, Ct, is more difficult. In the case where R = 0 it is simple to show that VCt > 0 and VCtCt < 0, but in general

VCt = [p6 Ct (1 − p6)^Ct((1 − τ)Ct + Rt) − Rt(1 − (1 − p6)^Ct)]/Ct²    (3)

which is not necessarily monotonic; Figure 14.1 depicts the possibilities together with the relationship for R = 0. V(·) always asymptotes towards (1 − τ): for R > 0 it does so from above, and at a slower rate than for R = 0, when it is faster and from below. For R > 0 the relationship may attain a maximum for some finite Ct, but for sufficiently large R the relationship will be monotonically decreasing. V is always higher in rollover draws than in regular draws, irrespective of the level of sales. Thus, it is impossible to arbitrage away the differences in V no matter what the variation in sales. This implies that there will always be some exogenous variation in V arising from the random incidence of rollovers. It is, indeed, possible in theory for the expected value to exceed unity, the cost of a ticket, so that the net expected return becomes positive.
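The expected-value expression in equation (2) can be evaluated numerically. The sketch below uses the UK jackpot probability from note 11 and a 50 per cent take-out; the individual prize-pool shares (`pi6`, `pi_small`) are hypothetical round numbers chosen only so that they sum to 1 − τ, not the operator's actual allocation:

```python
# p6 is the 6/49 jackpot probability (note 11); pi6 and pi_small are
# hypothetical shares with pi6 + pi_small = 1 - tau and tau = 0.5.
P6 = 1 / 13_983_816

def expected_value(C, R=0.0, pi6=0.16, pi_small=0.34):
    """V(R, pi6, pi_j, p6; C) per equation (2): expected value of one
    ticket when C tickets are sold and R is rolled over from the
    previous draw."""
    prob_jackpot_won = 1 - (1 - P6) ** C    # 1 - (1 - p6)^C
    return (prob_jackpot_won * (pi6 * C + R) + pi_small * C) / C

# For R = 0, V rises with sales and approaches (1 - tau) from below;
# a rollover (R > 0) raises V at any given level of sales.
v_regular  = expected_value(C=50_000_000)
v_rollover = expected_value(C=50_000_000, R=10_000_000)
assert v_regular < 0.5
assert v_rollover > v_regular
```

This reproduces the two properties stressed in the text: V is always higher in rollover draws, and for R = 0 it converges on (1 − τ) from below.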

Acknowledgements

We are grateful to Sandra Stirling, Chris Jones, Stuart Poole and Christiane Radin for their help in preparing this chapter. Parts of this chapter draw on Lisa Farrell's research with colleagues at Keele University.

Notes

1 Assuming that the mechanism employed to generate the winning numbers generates those numbers uniformly. Cook and Clotfelter (1993) refer to this non-uniformity as 'conscious selection'.
2 There will also be more occasions when there are a large number of jackpot winners. That is, the variance in the number of jackpot winners will be higher under non-uniform choice.
3 Allowing the Lucky Dip will of course reduce the frequency of rollovers as it increases the level of coverage of the possible combinations.
4 Explicit models of addiction stem from the work of Becker and Murphy (1988). They present and test empirically a theoretical model of addiction. The novelty of this approach is that the individual can behave rationally.
5 See Vrooman (1976) and DeBoer (1985).
6 Scott and Garen (1994) estimate a model of participation in a US scratch card game using micro-data, but could not estimate a price elasticity since there are no rollovers in such games.
7 See a later section for the difference between the long- and short-run elasticity.
8 These results are based on an analysis of the sales time series. Using micro-data also enables a more precise estimation of the price elasticity of demand. Given that richer people may choose to play only in rollover weeks when the return is higher, we need to control for income variation between those individuals who play in normal weeks and those who play in rollover weeks. Simple time-series studies such as those mentioned above may obtain biased price elasticities due to the inability, within the time-series data, to control for income effects. Therefore it is important to check for any bias by comparing these results to the corresponding elasticity estimated using micro-data when controlling for the effects of income. Farrell and Walker (1999) find an estimate of −1.7, but this estimate was based on price variation arising from a double rollover, and this event attracted a lot of publicity that may have led to an unusually large response from players.
9 The other fixed-odds game that the lottery launched was called 'Easy Play' and was based on the football pools. Vernon's Easy Play ran from Saturday 15 August 1998 to Saturday 8 May 1999 (thirty-nine weeks), and was then shut down.
10 Cook and Clotfelter (1993, pp. 636–7) speculate that the theoretical structure of the game is unchanged if individuals pick their numbers non-randomly (they call this 'conscious selection'). Farrell et al. (2000b) consider this more complex conscious-selection case
and prove that the most important theoretical properties of the game are indeed unaffected by this generalisation. They also show that conscious selection has a minimal impact on the estimated elasticity.
11 The probability of winning in this case is, then, 1/13,983,816.
12 It will be assumed, for expositional convenience, that the smaller prizes do not roll over. Whilst it is possible for them to do so, in practice they never have, and the probability of their doing so is very small.
13 For the UK National Lottery, Treasury duty is 12 per cent of ticket sales, the retailer's commission is 5 per cent, operator's costs and profits are 5 per cent, and good causes get 28 per cent.
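As a quick check of the figure in note 11, the jackpot probability follows from counting the six-number combinations available from forty-nine numbers:

```python
from math import comb

# Number of ways to choose 6 main numbers from 49 (UK National Lottery),
# so p6 = 1/13,983,816 as stated in note 11.
jackpot_combinations = comb(49, 6)
print(jackpot_combinations)   # 13983816
```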

References

Becker, G. S. and Murphy, K. M. (1988), 'A theory of rational addiction', Journal of Political Economy, 96, 675–700.
Budd, A. (2001) (Chairman of the Gambling Review Body), 'The gambling review report', Department for Culture, Media and Sport.
Clotfelter, C. T. and Cook, P. J. (1990), 'On the economics of state lotteries', Journal of Economic Perspectives, 4(4), 105–119.
Cook, P. J. and Clotfelter, C. T. (1993), 'The peculiar scale economies of lotto', American Economic Review, 83, 634–643.
Creigh-Tyte, S. W. (1997), 'Building a National Lottery: reviewing British experience', Journal of Gambling Studies, 13(4), 321–341.
Creigh-Tyte, S. W. and Farrell, L. (1998), 'The economics of the National Lottery', Working Paper No. 190, University of Durham.
DeBoer, L. (1985), 'Lottery taxes may be too high', Journal of Policy Analysis and Management, 5, 594–596.
Europe Economics (2000), Review of the Economics Literature on Lotteries, a report for the National Lottery Commission, London: Europe Economics.
Farrell, L., Lanot, G., Hartley, R. and Walker, I. (1998), 'It could be you: midweek draws and the demand for lottery tickets', Society for the Study of Gambling Newsletter, no. 32.
Farrell, L., Lanot, G., Hartley, R. and Walker, I. (2000a), 'The demand for lotto: the role of conscious selection', Journal of Business and Economic Statistics, April.
Farrell, L., Morgenroth, E. and Walker, I. (2000b), 'A time series analysis of UK lottery sales: the long-run price elasticity', Oxford Bulletin of Economics and Statistics, 62.
Farrell, L. and Walker, I. (1999), 'The welfare effects of lotto: evidence from the UK', Journal of Public Economics, 72.
Forrest, D., Simmons, R. and Chesters, N. (2000), 'Buying a dream: alternative models of the demand for Lotto', University of Salford, mimeo.
Garrett, T. A. and Sobel, R. S. (1999), 'Gamblers favour skewness, not risk: further evidence from United States', Economics Letters, 63.
Golec, J. and Tamarkin, M. (1998), 'Bettors love skewness, not risk, at the horse tracks', Journal of Political Economy, 106.
Gulley, D. O. and Scott, F. A. (1989), 'Lottery effects on pari-mutuel tax revenues', National Tax Journal, 42(1), 89–93.
Haigh, J. (1996), 'Lottery – the first 57 draws', Royal Statistical Society News, 23(6), February, 1–2.
Hicks, J. R. (1935), 'Annual survey of economic theory: the theory of monopoly', Econometrica, 3(1), 1–20.
Lim, F. W. (1995), 'On the distribution of lotto', Australian National University, Working Paper in Statistics, no. 282.
Mikesell, J. L. (1994), 'State lottery sales and economic activity', National Tax Journal, 47, 165–171.
Scoggins, J. F. (1995), 'The lotto and expected net revenue', National Tax Journal, 48, 61–70.
Scott, F. and Garen, J. (1994), 'Probability of purchase, amount of purchase and the demographic incidence of the lottery tax', Journal of Public Economics, 54, 121–143.
Shapira, Z. and Venezia, I. (1992), 'Size and frequency of prizes as determinants of the demand for lotteries', Organizational Behaviour and Human Decision Processes, 52, 307–318.
Simon, J. (1999), 'An analysis of the distribution of combinations chosen by the UK National Lottery players', Journal of Risk and Uncertainty, 17(3), 243–276.
Sprowls, C. R. (1970), 'On the terms of the New York State Lottery', National Tax Journal, 23, 74–82.
Vrooman, D. H. (1976), 'An economic analysis of the New York State Lottery', National Tax Journal, 29, 482–488.

15 Time-series modelling of Lotto demand

David Forrest

This chapter offers a critical review of attempts by British (and American) economists to model the demand for lottery products, in particular the on-line numbers game known as Lotto. Economists' focus has been to attempt to illuminate the issue of whether or not take-out rates and prize structures have been selected appropriately for the goal of maximizing tax revenue. It will be argued that, notwithstanding the ingenuity shown in modelling exercises, data limitations imply that one must remain fairly agnostic about whether or not variations in take-out or prize structure would, in fact, be capable of further raising the tax-take. However, it will be suggested that an injection of competition into the supply of lottery services could have the potential to reveal more about the nature of demand and to lead to greater expenditure on lottery tickets and consequent increases in tax revenue.

In many jurisdictions, the public lottery is the most highly taxed of all consumer products. The situation in the UK is typical. Legislation mandates that 50 per cent of bettors' expenditure on lottery products as a whole (i.e. scratch cards as well as on-line games) should be returned to bettors. For most games, the face value of a ticket is £1. So, on average, 50 pence of this comes back to the bettor in prize money, leaving the lottery organization with the other 50 pence. One way of defining price is to identify it with this take-out: in this case the cost of participation in the lottery would be 50 pence (on average). This is, in the parlance of the literature, the 'effective price' of a lottery ticket.1

A very high proportion of the effective price is accounted for by tax. In the UK, 12 pence goes directly to general tax revenue as 'lottery duty'. A further 28 pence represents hypothecated tax,2 divided among a number of distribution agencies that fund projects in fields such as sports, arts, heritage and education.
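The decomposition of the UK ticket described here can be set out as simple arithmetic (figures in pence, as given in the text; the variable names are ours):

```python
# Pence decomposition of a £1 UK Lotto ticket, using the figures in the text.
FACE_VALUE   = 100
PRIZE_MONEY  = 50    # 50 per cent returned to bettors on average
LOTTERY_DUTY = 12    # direct general tax revenue
GOOD_CAUSES  = 28    # hypothecated tax

effective_price = FACE_VALUE - PRIZE_MONEY      # 50p 'take-out' per ticket
tax             = LOTTERY_DUTY + GOOD_CAUSES    # 40p of that is tax
pre_tax_price   = effective_price - tax         # 10p covers costs and profit

# The quoted tax rate depends on the base chosen:
rate_on_pre_tax_base       = 100 * tax // pre_tax_price     # 400 per cent
rate_on_tax_inclusive_base = 100 * tax // effective_price   # 80 per cent
```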
Collectively, this 28 pence is said by the government to go to 'Good Causes'.3 Altogether then, 40 out of the 50 pence effective price for participation in the lottery represents tax. With the pre-tax price at 10 pence and the tax-inclusive price at 50 pence, the rate of tax could be quoted as either 400 or 80 per cent, depending on the choice of base.

It is striking that the pre-tax price of 10 pence corresponds to the take-out proposed by Keynes to the Royal Commission on Gambling of 1932–3. Keynes advocated a weekly lottery with 85–90 per cent of sales revenue returned in prizes. His rationale was that the provision of such a state
lottery would make it a more straightforward matter for public policy to curtail other forms of gambling. His recommendation as to take-out was therefore based on social policy considerations. By contrast, when a public lottery was eventually introduced in Britain over sixty years later, the legislation explicitly set the goal for the lottery as maximizing 'revenue for Good Causes' (i.e. tax-take). The resulting high take-out rate on the lottery4 implies a rate of tax of an order of magnitude comparable, in Britain, only with duties on petrol, alcohol and tobacco products.

A welfare justification for such tax treatment appears fragile. Lotteries seem to be a fairly 'soft' form of gambling and are not associated with the serious externalities claimed (at least by government) to be linked with the use of petrol, alcohol or tobacco. Nor do estimates of demand functions indicate exceptional elasticity conditions in the lottery market that might make high taxation 'optimal'. Of course, the government has not purported to have any welfare justification for high lottery taxes. Rather, it argued for the introduction of the product itself solely as a vehicle for taxation, as evidenced by the remit of the legislation, which is to maximize tax revenue.5

Given the unusual emphasis on lotteries as tax vehicles, it is not surprising that most demand studies have been designed to answer questions concerning whether policy changes (in take-out or in odds- or prize-structures) would increase tax revenue for the government. The review here adopts the same perspective. But before proceeding, it is appropriate to underline that this is an unusual perspective for economists to adopt. The conventional mode of analysis in evaluating public policy is that provided by standard welfare economics, which gives due weight to the interests of consumers. But policy analysis in the area of lotteries nearly always gives an implicit weight of zero to changes in consumer surplus.
However, Mason et al. (1997) and Farrell et al. (1999) calculated measures of excess burden for lottery taxes using estimates of Lotto demand curves for Florida and Britain, respectively. Mason et al. found, as was to be expected, that attributing some weight to consumer welfare would imply changes in lottery design while Farrell and Walker demonstrated how potentially inefﬁcient the ‘Good Causes’ tax was in that it appeared to impose a deadweight loss equal to nearly 30 per cent of the revenue raised.6 However, there is little ground for believing that debate on the future of lotteries will include a refocusing that accepts them as just another consumer product, therefore the debate in the literature, and here, proceeds on the basis of discussing whether policy changes will or will not increase revenue.

The effective price model

Prior to the contribution of Gulley and Scott (1993), attempts to evaluate the sensitivity of Lotto demand to variations in take-out had been based primarily on comparing sales across jurisdictions offering different value in their lottery product. Results had been mixed, but it might have been optimistic to expect to detect a significant price effect where the range of 'prices' observed was narrow.7 The insight of the Gulley–Scott model was that estimation of elasticity from time-series modelling of Lotto demand in a single jurisdiction becomes possible once it
is realized that, while the mandated take-out might always be the same, it is only defined over a period. The constancy of the take-out over time does not prevent there being significant variations in effective price from drawing to drawing of the same lottery game. For example, when the grand (or jackpot) prize is not won in one draw, it is commonly 'rolled over' (i.e. added to the grand prize for the following draw), resulting in significantly higher expected value (lower effective price) for bettors. In many jurisdictions, expected value has even, on occasions, been observed to be positive (i.e. effective price negative). By studying the response of sales to this draw-to-draw variation in value, it is argued that inferences can be drawn with respect to price elasticity of demand and hence about the take-out that would be optimal from the perspective of tax revenue maximization.

The Gulley–Scott model and its strengths and weaknesses are highlighted here because it was in essence imitated by first-generation studies of Lotto demand in the UK. Gulley and Scott studied Lotto games in Kentucky and Ohio and two games in Massachusetts. Data related to various periods over 1984–91 and the number of drawings observed for individual games varied between 120 and 569. The demand equation for each of the four games was estimated by regressing the log of sales on the log of effective price. Following Clotfelter and Cook (1987), effective price was identified as the difference between the nominal price of a ticket ($1) and its expected value. Variation in expected value across observations was provided primarily by rollovers augmenting the prize fund. Control variables were restricted to a trend and a dummy variable capturing the tendency of draws on Wednesday (as opposed to Saturday) to generate lower sales.8

Ordinary least squares estimation of such a demand function would yield biased results to the extent that effective price is necessarily endogenous.
The authors note that their price variable will be influenced, in an arithmetic sense, by sales. The reason is that, as sales increase, the number of combinations covered by bettors will increase, making it less likely that the grand prize will remain unwon (and rolled over to the next draw). Expected value to ticket holders therefore improves (and effective price falls) when sales increase.

The authors were the first to graph this relationship between effective price and sales. For a 'regular' draw (no rollover from the previous drawing), effective price will decrease with sales, though at a decreasing rate, and will converge on the take-out rate (at very high levels of sales, it is likely that almost all number-combinations will be 'sold' and therefore that all of the advertised prize funds will be paid out to bettors). In a draw benefiting from a rollover from the preceding draw, the relationship is, however, different. The same 'economies of scale' effect as before occurs, but account must also be taken of the fact that bettors in such a draw benefit from 'free' money in the prize fund. The benefit of the free money is spread more thinly the greater the sales. The relationship between effective price and sales will thus converge on the take-out rate from below (at very high levels of sales, the benefit from the fixed rolled-over funds will become very small on a per-ticket evaluation).

The classic response to an endogeneity problem is to employ instrumental variables, but the Gulley–Scott model was more properly represented as two-stage
least squares. This is because there is a related but separate issue. When a bettor considers the purchase of a ticket, he must, if he takes value into account, forecast the value of some price variable. An appropriate price variable according to the model is 'one minus the mathematically expected value of a ticket'. But this varies with sales, and sales are not known until after the betting period has closed. Bettors' decisions on whether, and how many, tickets to purchase can therefore be based only on some ex ante concept of price and expected value. The formation of expectations with respect to price was therefore modelled in a first-stage equation and 'expected' effective price was then included as a regressor in the second-stage (demand) equation.

Gulley and Scott obtained expected effective price by regressing actual effective price on the amount rolled over into the current draw and on the level and square of the size of jackpot announced or predicted (according to state) by the lottery agency. The actual effective price was one minus the expected value of a ticket as it could have been calculated in advance had the (true) number of tickets that were going to be sold been known. Note that the Stage 1 equation included in the regressors only information available to bettors in advance of the draw. Bettors were assumed to act as if they were able to process this information accurately9 and so expected price was taken as the fitted value from the first-stage equation. The structure of the Gulley–Scott model was thus:

Stage 1: P = f(rollover, rollover², jackpot, jackpot², trend, wed)
Stage 2: Q = g(P̂, trend, wed)

where P is effective price, P̂ is the fitted value of effective price retrieved from the Stage 1 estimation, rollover is the dollar amount added to the draw from a rollover, jackpot is the lottery agency's announced or predicted jackpot for the draw, wed is a dummy set equal to one if it is a midweek draw and Q is dollar sales.
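The two-stage procedure can be sketched on simulated data. Everything below is illustrative: a single instrument (the rollover amount) stands in for the full Stage 1 regressor set, and the 'true' elasticity of −1.1 is an arbitrary choice, not an estimate from the papers discussed:

```python
import math
import random

random.seed(7)

def simple_ols(x, y):
    """Slope and intercept from an OLS regression of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

# Simulated draws: a rollover (the instrument) lowers effective price,
# and bettors respond to expected price with a true elasticity of -1.1.
rollover  = [random.choice([0.0, 0.1, 0.2]) for _ in range(500)]
log_price = [math.log(0.5 - 0.3 * r) + random.gauss(0, 0.02) for r in rollover]
log_sales = [-1.1 * p + random.gauss(0, 0.05) for p in log_price]

# Stage 1: regress (log) effective price on the instrument; take fitted values.
b1, a1 = simple_ols(rollover, log_price)
fitted_log_price = [a1 + b1 * r for r in rollover]

# Stage 2: regress log sales on fitted log price; in a log-log
# specification the slope is the estimated price elasticity of demand.
elasticity, _ = simple_ols(fitted_log_price, log_sales)
```

With 500 simulated draws the Stage 2 slope recovers a value close to the assumed −1.1, illustrating why the fitted (expected) price, rather than the realized price, enters the demand equation.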
The price and quantity variables were in logs. In the estimated demand equation, trend was significantly negative in Ohio and Kentucky but of opposite signs for the two Massachusetts games (perhaps indicating a shift of consumers between the two over time). The coefficient on wed was always strongly significant and confirmed the lower popularity of midweek draws. But the focus of interest was the coefficient on P̂, which was always negative and very strongly significant.

The motivation for the Gulley–Scott paper was to assess whether state authorities had set the terms of each lottery so as to maximize their own revenue from the operation. This was to be revealed by the estimate of the coefficient on P̂ which, given that the specification of the model was log-linear, was an estimated elasticity. What would be the value for elasticity that would be consistent with maximizing behaviour by the state? The answer depends on being able to estimate the marginal cost of a lottery ticket to the 'producer'. This can, in fact, be done reasonably precisely, though not exactly. For example, it is known that retailer commission on a lottery ticket is typically 5 cents. Total operating costs of the lottery agency are normally known, but some of these will be fixed (infrastructure) costs. Given the
available information, Gulley and Scott assumed, reasonably, that the marginal cost, inclusive of retailer commission, was 8 cents. Using the identity that mr = p(1 + 1/γ), where γ is the elasticity of demand, and setting mr = mc to represent profit maximization, allowed them to estimate that, if the typical effective price was $0.50 and mc was $0.08, then γ must have been −1.19 if profit maximization was being achieved. Whether the estimated elasticity was −1.19 was therefore the Gulley–Scott test for whether the choice of take-out had been consistent with profit maximization. Elasticity with respect to effective price was indeed measured as extremely close to −1.19 for both Kentucky and Ohio; but estimates for the two Massachusetts games were, respectively, much more and much less elastic than −1.19, indicating that one game should have been made more, and the other less, generous to maximize state revenue.

Implementation of the model may be criticized on some matters of detail. First, the constant-elasticity specification of demand (i.e. log-linearity) sits uneasily with the goal of the paper, which is to make recommendations to governments concerning lottery pricing. If demand really could be described as linear in logs, and if elasticity were, say, less elastic than −1, then take-out could be increased indefinitely, always increasing revenue, to the point where sales were close to zero. Of course, a log-linear specification may fit the data well within the range of effective prices actually observed; but since log-linearity is unlikely to characterize the whole demand curve, recommendations have then to be couched as (e.g.)
whether price should increase, without it being possible to say by how much.10 Second, only a shift dummy represents the difference in demand conditions between Wednesday and Saturday, whereas one might speculate that, if there are different bettors on the two nights, the restriction that the slope coefficients be equal in the two cases should at least be tested.

One might also be sceptical regarding the authors' discussion of policy implications for one game, from the finding that demand was much more elastic than −1.19. Understandably, a reduction in take-out was said to be indicated. But, as an alternative, it was proposed that the game should be made 'easier' (e.g. changing the format so that there are fewer combinations of numbers from which to choose). This would lower the effective price in any one draw because there would be a lower probability of the grand prize remaining unwon. On the other hand, rollovers would be less frequent and the incidence of very low-priced draws would fall. The overall impact on sales over a period would, in fact, need to be simulated and would be sensitive to the assumed functional form. Policy conclusions as to the structure of the odds are therefore much more problematic than the authors imply.

These criticisms are matters of detail that could be, and to some extent were, resolved in later applications of this pioneering model. However, there are much more fundamental flaws inherent in the model, interpreted as a means of providing precise guidance on choice of take-out in state lotteries. These problems are just as relevant to the first-generation studies on the UK Lottery (which was launched in November 1994), for which Gulley and Scott effectively served as a template. Of course, it is a familiar situation to economists that limitations inherent in data mean that they have no alternative but to 'swallow hard' and proceed with estimation
notwithstanding known problems. But appropriate caution has then to be exercised in making policy recommendations, for example, one may have to recommend that a model indicates that take-out should ‘not be reduced’ rather than that it should be increased to some speciﬁc level.
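The revenue-maximization test described above reduces to solving mr = mc with mr = p(1 + 1/γ). A minimal sketch using the Gulley–Scott figures:

```python
def revenue_maximizing_elasticity(price, marginal_cost):
    """Elasticity gamma solving mc = p * (1 + 1/gamma), i.e. mr = mc."""
    return 1 / (marginal_cost / price - 1)

# Gulley and Scott: effective price $0.50, marginal cost $0.08.
gamma = revenue_maximizing_elasticity(0.50, 0.08)
print(round(gamma, 2))   # -1.19
```

An estimated elasticity equal to this benchmark is consistent with the state's take-out being profit-maximizing; estimates that are more or less elastic point towards a more, or less, generous game.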

Limitations of the effective price model

The Gulley–Scott model, broadly followed in UK studies by Farrell et al. (1999) and Forrest et al. (2000b), has the potential to seriously mislead policy-makers for at least three reasons. All relate to the problem that the great bulk of variation in effective price is provided by the difference in prize funds between regular and rollover draws. This means that when one measures elasticity, the estimate is based largely on observing how sales have responded in the past to rollovers.

The first problem with this is that rollovers generate media interest, and the consequent free publicity for the lottery draw may also be supplemented by an active advertising campaign by the lottery agency itself. If extra prize money were to be made available regularly by an adjustment of take-out, the same degree of consumer response may not occur because, to some extent, high sales in rollover weeks may be explained by abnormal levels of publicity reminding bettors to purchase their tickets.

Second, the observed response to rollovers relates to variations in effective price that are transient. Players may not respond in the same way to a permanent change in effective price achieved by varying the take-out for regular (non-rollover) draws. For example, some players may currently engage in inter-temporal substitution and concentrate their annual purchase of Lotto tickets in weeks when the game offers untypically good value. They would not necessarily wish to increase their annual purchase of lottery tickets if the game offered better value over the year as a whole. Essentially, rollovers are weeks when the lottery is offering a 'sale price'. Measuring increases in turnover during a sale is not likely to provide much information about what would happen if the product were available indefinitely at that special 'sale' price.
Third, rollovers deliver superior value to bettors, but this is achieved solely by augmentation of the grand (or jackpot) prize pool. Whenever effective price has varied significantly from its normal level, there has also been simultaneous variation in the structure of prizes. Observed responses in turnover, then, may not have been primarily to effective price but to the size of the jackpot. Hence, if take-out were reduced permanently for future games and the benefit spread across all the prize pools, bettors may not necessarily respond as much as they have when the benefit has been focused on the grand prize. Estimated 'price' elasticity, therefore, provides no reliable guidance on whether it would be appropriate to raise or lower take-out: the estimate of elasticity is calculated from the estimated coefficient on effective price in the demand equation, and this estimate will be subject to omitted-variable bias (where the omitted variable is prize structure). For this not to be a problem, bettors would have to be assumed indifferent to prize structure, but this
would imply that they were risk-neutral; in this case, according to standard expected utility theory, they would not be participants in the Lotto draw at all. Together, these problems imply that one is unlikely to be able to form very definitive views from the effective price model concerning what level of take-out a state should set to maximize tax revenue.

However, results from effective price model studies are not entirely empty of policy relevance. This is because all the problems noted tend to bias estimates of elasticity in the same direction. All three point to the difficulty that, because of the way Lotto games work, whenever 'low' effective price is observed the demand curve for that particular draw lies to the right of the regular demand curve.11 The effective price model, therefore, identifies a spurious demand curve, which displays greater elasticity than the 'true' demand curves for regular and rollover draws. The effective price model may then be claimed to offer a lower-bound estimate of 'true' elasticity with respect to take-out.

This may enable some (unambitious) policy guidance to be offered. For example, one study of a British Lotto game estimated elasticity of −0.88 on the basis of application of an effective price model. True elasticity may be viewed as likely to be closer to zero than this figure suggests. Given that the implication of an estimate of −0.88 is that take-out could be increased, one could then be more confident in that conclusion, given known biases in the estimate of the demand equation. On the other hand, had elasticity been estimated at a level such as −1.50, one could not confidently recommend lower take-out, because an estimate of −1.50 might correspond to a true value consistent with revenue for the state already being maximized.

First-generation UK studies

Britain was, famously, the last country in Europe to introduce a public lottery in the modern era.12 The product went on sale in November 1994. It is evidently the world's largest Lotto game in terms of turnover (though per capita sales are not out of line with other jurisdictions) (Wessberg, 1999). Sufficient data had been accumulated towards the end of the decade for the first demand studies to appear.

This first generation of studies adopted the effective price model. Relevant papers are Farrell et al. (1999) and Forrest et al. (2000b). Forrest et al. (2002) proposed an alternative model but, as a point of comparison, also offered an estimation of the standard effective price model. Another study, Farrell and Walker (1999), estimated elasticity with respect to effective price but did so through the exploitation of cross-section data on the purchase of lottery tickets by four samples of individuals taken for four different lottery draws. One of the draws (by chance) featured a 'double rollover' and therefore a radically lower effective price than normal. The estimate of elasticity the authors produce is subject to the same qualifications as apply to time-series studies and is in fact yet more problematic because there is only one observation of 'price' different from the usual price and it is an extreme outlier (only six double rollovers occurred in the first seven years of the National Lottery).


British studies differ in detail from American ones partly because of differences in institutional arrangements. In contrast to the norm in the US and other countries, Britain's lottery is franchised to a private operator, which is awarded a seven-year licence whose terms specify a mandated take-out (50 per cent) measured over all games and the whole franchise period. Camelot plc won the initial licence against fierce opposition; but the advantage of incumbency was such that it faced only one rival bidder for the second seven-year term. In fact, the second franchise was initially awarded to the rival bidder (The People's Lottery), but this was overturned after a legal battle and final arbitration by a committee headed by a prominent economist. This controversial episode is considered further below, because the final judgement depended substantially on the view taken about the nature of the demand function that the literature currently under review has been trying to identify.

The immediate relevance of the lottery being operated privately, but with a mandated take-out, is that it alters the value of the elasticity measure that would be consistent with the state maximizing the financial benefit to itself. In America, the state normally runs the lottery on its own account and its interests therefore lie in profit maximization by the department operating the lottery. With a plausible assumption as to marginal cost, take-out is optimal when elasticity is −1.19. In Britain, by contrast, the government's gain from the lottery is maximized when the rules for the licence are set such that revenue-net-of-prizes is as high as possible, because this is the base on which tax may be levied. Hence, the test as to the appropriateness of the mandated take-out is now that elasticity should have the value −1. Of course, the lottery operator would prefer to have the freedom to set its own price/prize levels.
Profit maximization that is subject only to the payment of 40 per cent of gross revenue to the government would imply a very different sort of lottery. There has been some confusion in the literature on this point. Farrell et al. (1999) take a well-informed guess that marginal cost to the operator is £0.06 for a ticket with a face value of £1 (this includes £0.05 retailer commission). Noting that mr = p(1 + 1/γ) and setting mr = mc, they conclude that profit maximization by the operator would imply an elasticity of −1.06. Of course, this is very close to the value of −1 for tax-revenue maximization and it would be impractical to hope to estimate elasticity so precisely as to distinguish whether take-out has been chosen to favour government interests or the interests of the private firm running the lottery. But this would not matter anyway because the proximity of −1.06 to −1 implies a near-coincidence of interest between the two.

Unfortunately, the evaluation of −1.06 is incorrect. It is based on setting the value for p at £1. But this is the face value of a ticket. The authors' demand model is in terms of effective price and they measure elasticity with respect to effective price. Hence, p should be set equal to the mean effective price for the game, not to the face value of £1.13 On this basis, the elasticity consistent with profit maximization would be −1.12. Further, marginal cost to the private operator should include the 40 pence per ticket it must pay to the government as lottery duty and hypothecated tax. With this taken into account, Camelot would like to be at a point on the demand


D. Forrest

curve where γ = −6.11. With a linear demand curve of the gradient found in most studies, this would imply a nominal price for Lotto tickets of several pounds if the current amount of money were still paid out in prizes. Plainly the interests of the government and the operator are very divergent, and the necessity for a firm legislative framework wherever a lottery is franchised to private operation is underlined.

To return to the differences between US and UK studies, British authors also have to take into account a feature of the British lottery known as the 'superdraw'. The operator's licence permits it to hold back some prize money to be added to the prizes for occasional, special promotional draws, sometimes supposedly marking a notable event in the calendar. The option is exercised several times each year. In all but one of these superdraws, the operator has put the extra money into the grand prize fund, making the effect akin to that of a rollover.14 Obviously all the British authors have had to build superdraws into their models. This has the advantage of giving greater variation in effective price (rollovers tend to be similarly sized and have much the same impact on effective price each time), though with the caveat that superdraws are not likely to be truly exogenous events: one would expect the operator to use them on occasions where sales might otherwise flag and, indeed, when a new Wednesday drawing was introduced, a superdraw was declared for each of the first three weeks.

An innovation in the British applications is the introduction of a lagged dependent variable into the demand equation. This implies, of course, that they produce both a short- and a long-run estimate of elasticity, where the latter is relevant to the policy issue central to this literature.
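To tie together the numbers in the last few paragraphs: the elasticities −1.06, −1.12 and −6.11 all follow from the same mark-up condition, mr = p(1 + 1/γ) set equal to mc. A quick check, in which the mean effective price of about £0.55 is an assumption taken from the draw-level figures quoted later in this chapter:

```python
# gamma solved from mr = mc, with mr = p * (1 + 1/gamma):
#   gamma = 1 / (mc/p - 1)
def implied_elasticity(mc, p):
    return 1.0 / (mc / p - 1.0)

# Farrell et al.'s calculation: p set to the £1 face value, mc = £0.06
e_face = round(implied_elasticity(0.06, 1.00), 2)       # -1.06
# Corrected: p set to the mean effective price, taken here as £0.55
e_effective = round(implied_elasticity(0.06, 0.55), 2)  # -1.12
# Marginal cost including the 40p per ticket duty and hypothecated tax
e_with_duty = round(implied_elasticity(0.46, 0.55), 2)  # -6.11
```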
The employment of lagged dependent term(s) has successfully captured the role of habit in Lotto play and permits the UK models to account for the tendency of a rollover to benefit sales for some weeks beyond the rollover draw itself. This offered some hope that insight might be gained into optimal game format, in so far as design determines the frequency of rollovers.

Less promising is the interpretation of the significance of lagged dependent variables in Farrell et al. (1999). They take it as evidence of addiction in lottery play, applying the Becker–Murphy concept of myopic addiction (Becker and Murphy, 1988). But habit and addiction are not the same thing. Becker and Murphy define an addictive good as one where the utility from current consumption at any given level depends on the amount of the good consumed in the past. Hence, for an addictive good, a model of consumption should include a lagged dependent variable and this should be significant. However, there are other reasons why current purchases may depend on past purchases. For example, in the case of Lotto, sales for one draw may rise because there is a rollover and some of the marginal purchasers may take the opportunity of being at the sales booth to procure a ticket for the following draw at the same time. Some such transactions-cost mechanism is a more plausible explanation of the significance of lagged dependent terms than addiction, because lottery tickets do not obviously possess other characteristics that distinguish addictive goods according to the Becker–Murphy model: an addictive good, for instance, would be expected to have a large number of total abstainers and of heavy users but few light users; this sort of distribution of consumption


Table 15.1 Elasticity estimates in the UK Lotto

Study                    Period               Draws included       Observations   Estimate of elasticity
Farrell et al. (1999)    Nov 1994–Feb 1997    Saturday             116            −1.55
Forrest et al. (2000b)   Nov 1994–Oct 1997    Saturday/Wednesday   188            −1.03
Forrest et al. (2002)    Feb 1997–June 1999   Wednesday            127            −1.04
Forrest et al. (2002)    Feb 1997–June 1999   Saturday             127            −0.88

has not been found for lotteries.15 There is no firm evidence that Lotto is addictive.16

Table 15.1 summarizes the findings of UK studies that use the effective price model (but differ from each other in time periods covered, functional form and the number and types of control variables employed). In the first study, Farrell et al. (1999) rejected the hypothesis that take-out was consistent with net revenue maximization (and therefore with the government's stated goal) and recommended that prizes should be made more generous. No subsequent study has come to the same conclusion, later elasticity estimates all being close to −1.

The outlying nature of the Farrell–Walker result could be attributed to the peculiar characteristics of the data period employed. They used the very first 116 draws of Lotto, which raises the general problem that behaviour may have been atypical while bettors were still learning about a new and unfamiliar product and hitherto unexperienced phenomena such as rollovers and double rollovers. A particular problem was that this early period contained a unique circumstance in the British lottery, namely two double rollovers in the space of a month (the first two of the six double rollovers that were to occur by the end of 2001). These double rollovers offered prizes at a level unprecedented in British gambling and the result was a media frenzy surrounding the two draws. The very high increase in sales may have been a response to the extraordinary level of publicity (which was not repeated for later large-jackpot draws as the concepts of Lotto became familiar), but these two outlying observations of price were very influential in the derivation of the high elasticity value. In any event, no such high elasticity has been found in studies that included later data. Forrest et al. (2000b) took the data period further and estimated elasticity very close to −1, indicating that the choice of take-out had been very precisely 'correct'.
But a flaw in their study is that the midweek draw had been added to the games portfolio, and they accounted for variation between levels of play on Wednesdays and Saturdays with only a shift dummy variable. Implicitly, they imposed the untested restriction that slope coefficients were equal in the determination of Wednesday and Saturday turnover. Forrest et al. (2002) estimated an effective price model for the period between the introduction of the midweek draw and the introduction of a third on-line game, Thunderball, in mid-1999. An F-test rejected equality of slope coefficients



between Wednesday and Saturday draws. Separate models to explain Wednesday and Saturday play yielded estimates of elasticity that were statistically insignificantly different from −1. But the point estimates can be regarded as lower-bound estimates of elasticity given the biases likely in the effective price model. Thus, the calculated elasticity of −0.88 for the larger Saturday draw could be taken as suggesting that, if anything, there would be scope for increasing take-out on Saturdays. Walker and Young (2001) presented a more complex model, reviewed below, which nevertheless included effective price/expected value and similarly indicated scope for making the Lotto game 'meaner'. So a consensus appears to have emerged that the early Farrell–Walker finding that prizes should be increased was premature.

Amongst other interesting findings in the UK studies, one may note the tendency of superdraw money to be less effective than rollover money in boosting turnover (Forrest et al., 2000b) and the tendency of interest in games to diminish with time after an initial upward trend (Forrest et al., 2000b; Walker and Young, 2001). The negative influence of trend, reflecting a tendency of bettors to become bored and disillusioned with games, appears to be a worldwide phenomenon and presumably accounts for the regular introduction of new games by Camelot and other lottery agencies. An under-explored issue is the extent to which these new games cannibalize existing sales, though Walker and Young (2002) find some negative effect on Saturday sales from the introduction of Thunderball and Forrest et al. (2001) attempt a more general modelling of substitution between Camelot products. Paton et al. (2001) made the first study of substitution between lottery games and an existing gambling medium (bookmaker betting).
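One mechanical point from the lagged-dependent-variable specifications reviewed above: the long-run elasticity reported alongside the short-run one follows from standard partial-adjustment algebra, with the long-run response equal to the short-run response scaled up by 1/(1 − λ), where λ is the coefficient on lagged sales. A sketch with purely hypothetical coefficients, not any particular study's estimates:

```python
# Partial-adjustment algebra: with sales_t = a + b*x_t + lam*sales_(t-1),
# the short-run effect of x is b but the steady-state effect is b/(1 - lam).
def long_run(short_run, lam):
    assert 0 <= lam < 1, "habit persistence must be stable"
    return short_run / (1.0 - lam)

# hypothetical: a short-run elasticity of -0.7 with habit coefficient 0.3
e_long = long_run(-0.7, 0.3)  # approximately -1.0
```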

Second-generation UK studies

Recent UK work – by Forrest et al. (2002) and Walker and Young (2001) – is motivated by scepticism of the potential of the effective price model to yield firm conclusions on lottery policy with regard to take-out and game design. Forrest et al. explore bettor preferences with a view to understanding why lottery players participate in the game, and this is the basis for proposing an alternative model that appears to track lottery sales at least as well as the standard effective price approach. Walker and Young choose to extend the traditional analysis to include the variance and skewness, as well as the mean, of the probability distribution of prizes.

A fundamental problem for the effective price model is that it ignores the possibility that bettors' behaviour may be explained by variations in prize structure as well as by the amount of money expected to be paid out in prizes. Implicitly, the model assumes risk-neutrality. But why would risk-neutral bettors accept such an unfair bet anyway? The resolution of this paradox must lie in bettors obtaining utility from the gambling process itself. Conlisk (1993) retains the conventional expected utility framework but adds a 'tiny utility of gambling' to the expected


utility function so that the purchase of gambling products becomes consistent with risk neutrality (or indeed risk aversion). This approach does not, however, rescue the effective price model of Lotto demand. For players to be indifferent to prize structure, it must be assumed that bettors are risk neutral and that the amount of utility attached to the ownership of a lottery ticket is invariant with respect to prize structure. If both these assumptions held, then a demand model could proceed on the basis that a lottery ticket was fun to hold and effective price was the price of fun; the cheaper the fun, the greater the quantity demanded.

But why do lottery tickets impart utility? Clotfelter and Cook (1989) suggested that Lotto players are 'buying hope' and Forrest et al. echo this, and current sentiment in the lottery industry, by portraying them as 'buying a dream'. They suggest that lotteries represent a relatively non-stimulating mode of gambling and that the fun lies not in the process itself (number selection, etc.) but rather in dreaming about the lavish lifestyle that becomes available to the biggest winners. From this point of view, the price of a ticket – which is now the face value of £1 – buys the right to such daydreams. When rollovers occur, the value of the grand prize increases and the dream becomes yet more vivid. Lotto play actually delivers more utility in a rollover draw and this, rather than any improvement in expected value, accounts for the observed increase in sales. According to this model, price is a constant but the demand curve shifts according to how much more enjoyment players receive when contemplating a larger jackpot prize. Players may not seriously expect to win; but they enjoy the dream, and this dream may be related to the largest prize they could win, that is, to the prospective size of the pool for the grand prize (jackpot).
Note that this emphasis on the lottery ticket as a consumer good rather than a financial asset implies that sales (at the never-changing nominal price of £1) depend not on the expected value of the prize a ticket holder may receive, nor even perhaps on the expected value of the jackpot prize itself (which will take account of the number of winners with whom the jackpot would have to be shared), but on the maximum prize the ticket holder could possibly win, that is, the size of the jackpot pool.

For Saturday draws between February 1997 and July 1999, Forrest et al. estimate both an effective price model of demand and a jackpot pool model. The specification in the two models is the same except that (expected) effective price is replaced in the second model by (expected) size of jackpot pool. The jackpot pool is instrumented on the same set of variables as effective price. The performance of the rival models is then compared by means of a Cox test (Cox, 1961, 1962; Pesaran, 1974). The first hypothesis tested is that the effective price model comprises the correct set of regressors and the jackpot pool model does not. This is rejected extremely decisively (test statistic −17.2, critical value at the 5 per cent level ±1.96). The second hypothesis tested is that the jackpot pool model comprises the correct set of regressors and the effective price model does not. This is also rejected (test statistic +4.19, critical value again ±1.96).

What do these results tell us? The first test result implies that including the size of the jackpot pool in the sales equation would raise explanatory power. The



failure to include it in past modelling means that existing elasticity estimates are based on coefficient estimates that are subject to omitted variable bias. Suggestions that take-out rates are close to optimal are therefore unreliable. Unfortunately, this problem with the effective price model cannot practicably be resolved: given that almost all 'price' variation comes from rollovers, effective price and jackpot pool will be highly correlated, and inclusion of both in a sales equation would yield unreliable parameter estimates because of severe collinearity. The jackpot pool model proves as successful as the traditional model in terms of ability to track past sales; but it too would be a fragile basis on which to make policy recommendations. The result of the second part of the Cox test implies that effective price as well as the size of jackpot pool influences aggregate bettor behaviour. Perhaps the more decisive rejection of the effective price model than of the jackpot pool model indicates that the size of the jackpot pool is particularly important to bettors and should be taken into account in formulating arrangements for the game.

Although only one instance, the story of the lottery draw on 19 September 1998 points to the same conclusion. This was a Saturday for which Camelot declared a superdraw, that is, it added free funds to the prize pool, offering bettors significantly better value than usual. In fact, effective price fell from the usual £0.55 to £0.28, equivalent to the impact of a substantial rollover. But, on this one occasion, Camelot experimented by augmenting the second prize pool, not the jackpot pool.17 The experiment was disastrous. Sales actually fell compared with the regular draw the week before. In no other rollover draw or superdraw have sales ever failed to increase substantially. This event is consistent with the implication of the alternative model in Forrest et al.
that it is the size of the jackpot pool that is the driving force of Lotto sales and that the apparently 'good' performance of the effective price model relies on correlation between effective price and jackpot pool. Forrest et al. are cautiously agnostic in their conclusions: both take-out and prize structure are likely to matter but their relative importance is hard to assess when (with the one exception noted) effective price and prize structure have always moved closely together in the same direction.

Walker and Young (2001), by contrast, attempt to provide precise policy guidance. They estimate a demand model employing data since the beginning of the lottery.18 Regressors include controls similar to those employed in Forrest et al. (2000b) and the expected value of a ticket is still included (expected value equals one minus effective price); but, to the expected value (or mean) of the probability distribution of prize money, they add variance and skewness as regressors. The estimated coefficients are positive on mean, negative on variance and positive on skewness. The positive sign on skewness appears to capture bettor interest in high prizes.

Walker and Young use their estimated model to perform a series of simulations that predict the impact on sales of, first, two possible variations in the split of total prize money across the various prize pools and, second, a change in the format of the game from 6/49 to 6/53.19 For the latter, Walker and Young predict that aggregate sales would fall by a little less than 10 per cent (if the current take-out were retained). This is a particularly relevant finding because, when the franchise for the second term of the UK lottery was awarded, the only substantive difference


between the two bids was that Camelot offered a 6/49 game, whereas The People's Lottery proposed a change to a 6/53 format. The final rejection of The People's Lottery bid was based fairly explicitly on the perceived risk that the change in format might lower sales (National Lottery Commission, 2000, para. 16).

The empirical model in Walker and Young appears, however, to provide a fragile foundation on which to settle the controversial battle between the two aspirant lottery organizations. One problem is that the demand model is estimated by ordinary least squares whereas the three moments of the prize distribution included as regressors are in fact endogenous: variance and skewness are dependent on sales in an arithmetic sense, just as the mean is (as discussed above). The inability to instrument mean, variance and skewness in the model will introduce biases of unknown magnitude into the parameter estimates. A second problem is that the estimated coefficient on skewness was statistically insignificant in the demand equation,20 yet the point estimate is used, and is influential, in the simulation. Of course, it must be conceded that the point estimate of the coefficient, rather than zero, is the 'best' estimate of the 'true' coefficient; but its failure to be significant implies a high standard error and therefore imprecision in the forecasting exercise for the different prize structure and game format scenarios.

In introducing skewness, Walker and Young were picking up a tradition that began with Francis (1975), who analysed effects of skewness in returns in financial markets. Golec and Tamarkin (1998) and Woodland and Woodland (1999) explored skewness in betting markets in horse racing and baseball, respectively. In unpublished work, Purfield and Waldron (1997) tested the attractiveness of skewness to players of the Irish lottery.
Garrett and Sobel (1999) included skewness in a cross-section model of sales across 216 on-line lottery games offered in various American states in various periods. But their measure of skewness was flawed: they measured it from a simplified probability distribution for each game in which each prize level was represented by the mean prize pay-out from that pool; but this gives a hypergeometric distribution for which a strictly defined measure of skewness does not exist.

The measurement of skewness in a Lotto game is in fact difficult and problematic. Consider the probability distribution of prize pay-outs for a single ticket in the UK game. Over 98 per cent of players receive nothing at all. There is a fixed pay-out of £10 for any player matching three of the six numbers; the probability of receiving £10 is 0.0175. Once the £10 prizes have been paid, the remainder of the prize pool is split in predetermined proportions between four prize pools: one for bettors matching four balls, one for bettors matching five balls, one for bettors matching five balls plus the 'bonus ball', and one for bettors who have a claim on the grand prize because they have matched all six of the main numbers drawn. This produces a distinctive probability distribution. There are large mass points (spikes) at zero and £10, which together account for over 0.99 of the distribution. Corresponding to the remaining prizes, there is a continuous distribution. In principle, even the winning jackpot pool could deliver a low prize (e.g. £1), depending on how many bettors have chosen the winning combination of numbers; but, essentially, the continuous part of the probability distribution consists of



four components, each with a local maximum corresponding to the mean pay-out to a winning ticket in each of the four prize funds. This is a 'mixed distribution' for which interpretation of skewness is difficult.

Consider the effect of a rollover on the prize probability distribution. The spikes at zero and £10 remain unaltered and the sections of the probability distribution corresponding to the lower prize pools remain virtually unaltered. The effect on measured skewness derives only from the translation to the right of the component corresponding to the jackpot pool. Given that most of the variation in skewness is, in fact, provided by rollovers, putting skewness into the demand equation is essentially equivalent to putting the jackpot into the demand equation, which is itself problematic given the correlation between expected value and jackpot. One is then only modelling a complex functional form of the effective price model, and coefficient estimates on skewness may prove unreliable.21

In fact, the skewness model suffers from precisely the same underlying problem as the effective price model. For skewness, as for expected value/effective price, almost all the variation we can observe comes from rollovers. But rollovers shift skewness in a very specific way, by affecting only the jackpot pool. It cannot safely be assumed that bettors would respond in the same way to a change in the skewness measure that was generated by modifying the structure of the other prizes.22 These other prizes are important to consider: 38 per cent of prize money, for example, is spent on £10 prizes, and it would be a fair question to ask whether this money could usefully be allocated to the other prize funds and, if so, in what proportions. Further, one cannot know how much of the extra sales attributed to variations in skewness when the jackpot is high represent inter-temporal substitution by lovers of skewness.
Permanent changes in the prize structure, or changes in the game design that altered rollover frequency, might not elicit the expected response, to the extent that bettors favouring skewness in returns may currently concentrate their lottery expenditure on draws where extra skewness is available. Once again, it must be admitted that the econometric evidence would provide a flimsy basis for strong policy recommendations on lottery reform.
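The point that rollovers move measured skewness only through the jackpot component can be illustrated numerically. In the sketch below the tier probabilities are the standard 6/49 odds, but the non-fixed prize levels and the jackpot sizes are hypothetical round numbers (actual pay-outs vary from draw to draw), so this is a stylized calculation rather than a description of any real draw:

```python
# Moments of a simplified single-ticket prize distribution: mass points at
# zero and the fixed £10 prize, plus the four pari-mutuel tiers collapsed
# to single representative pay-outs (the kind of simplification criticized
# above). Tier probabilities are the standard 6/49 odds; the non-fixed
# pay-outs and the jackpot figures are hypothetical.
def moments(dist):
    mean = sum(p * x for p, x in dist)
    var = sum(p * (x - mean) ** 2 for p, x in dist)
    skew = sum(p * (x - mean) ** 3 for p, x in dist) / var ** 1.5
    return mean, var, skew

def prize_dist(jackpot):
    tiers = [
        (1 / 56.7, 10.0),           # match 3: fixed £10
        (1 / 1032.4, 62.0),         # match 4 (hypothetical mean pay-out)
        (1 / 55491.3, 1500.0),      # match 5 (hypothetical)
        (1 / 2330636.0, 100000.0),  # match 5 plus bonus (hypothetical)
        (1 / 13983816.0, jackpot),  # match 6: the jackpot pool
    ]
    p_nothing = 1 - sum(p for p, _ in tiers)  # over 98 per cent of tickets
    return [(p_nothing, 0.0)] + tiers

# A rollover moves only the jackpot component to the right:
_, _, s_normal = moments(prize_dist(jackpot=2e6))
_, _, s_rollover = moments(prize_dist(jackpot=10e6))
# s_rollover exceeds s_normal, but only modestly
```

In runs of this sketch, a fivefold jackpot increase raises measured skewness by only a couple of per cent: once the jackpot tier dominates both the variance and the third moment, skewness is pinned roughly at 1/√p for the jackpot probability p, which is another way of seeing why rollover-driven variation in skewness carries so little independent information.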

A proposal for restructuring the lottery industry

This review has taken a pessimistic view of the extent of practical use that has emerged from the time-series modelling of Lotto demand. Information on the importance of trend or on the impact of the introduction of a new game is worth having; but on the central questions concerning take-out, game design and prize structure in Lotto itself, very firm conclusions have not emerged. This is not the fault of the economists who have supplied the studies referenced extensively above. They face the inherent problem that arrangements for the Lotto game in the UK, as in many jurisdictions, have remained static. Underlying odds and prize structures have never been changed. Any variation, such as in effective price, has been transient in nature and nearly always of similar magnitude, so that the data sets contain no strong information to allow conclusions to be drawn


on the consequences of various possible reforms. Even if Camelot were to change the rules of the game in the future, the move would likely be prompted by faltering sales and would not, therefore, be the exogenous shock that would be required for bettor behaviour to be properly modelled.23

Walker and Young (2001) point out that, ideally, one would like an experiment in which some bettors are offered one variant of the Lotto product and other bettors another variant. But they dismiss this idea as impractical: the Lotto game is subject to peculiar economies of scale. Any variant of the game offered to sub-groups of bettors would be unattractive because of 'small' jackpot size, and conclusions on how the product would sell to the whole population could not therefore be drawn.

It was Clotfelter and Cook (1993) who first drew attention to this often-cited phenomenon of the peculiar scale economies of Lotto: lotteries with similar take-out rates will generate different levels of per capita sales according to the size of the market in the jurisdictions they serve. Interest in the lottery will be greater in states with a large population because what captures the imagination of buyers is the absolute size of the jackpot pool. Lotto in small states will be relatively unattractive because it cannot hope to pool sufficient money to offer life-changing amounts as prizes on a regular basis; very large jackpots can be designed into a small-state game, but they emerge only with a game design that induces frequent rollovers, in which case bettors become bored because they do not expect anyone to win in any given draw. Clotfelter and Cook presented empirical evidence of the scale economies effect in a cross-section regression of per capita sales on population size (and control variables) in US states.
Ironically, their proposition, and its empirical verification, considerably undermines the effective price model that they and Scott and Gulley constructed, because it draws attention to the important, independent influence of the size of the jackpot pool.

The policy response across the world – for example, in the Nordic countries, in Australia and in some American states – has been for separate jurisdictions to merge their jackpot pools from time to time to produce larger jackpots. So, for example, six countries join together in the Nordic Lottery so that, at the level of the grand prize, the lottery becomes supranational and is sold to a large population base.

Peculiar scale economies appear effectively to give Lotto the status of a natural monopoly. Territorial exclusivity can be presented as indispensable since, otherwise, bettors' money would fail to be concentrated sufficiently into a jackpot prize pool that offered the appealing possibility of an enormous pay-out to the winner of the first prize. The force of the natural monopoly argument is demonstrated by the history of the football pools in Britain. Until the product innovation of 1946 known as the 'treble chance', there had been hundreds of pools companies in Britain; but once business switched from small-stake/small-prize betting to the new long-odds product, the number of firms rapidly contracted (to three by 1960). Bettors switched their weekly investments from small to large pools because only the latter could offer a genuinely life-changing level of first prize and, as they switched, there was a dynamic process leading in the direction of monopoly (Forrest, 1999).



But over the last twenty-five years, competition has been introduced into many industries hitherto regarded as natural monopolies. Particularly in public utilities, processes of deregulation have proceeded on the basis of separating those parts of an industry where there is genuine natural monopoly from those parts where scale economies are insufficient to justify the granting of exclusivity of supply. Thus, a national grid for the distribution of electricity or gas might constitute a genuine natural monopoly, but competition can be introduced into the relevant energy market by permitting access to the distribution system by competing producers: the production side of the electricity or gas industry is not a natural monopoly and, with regulation of the terms of access to the national grid, vigorous competition can emerge.

Perhaps, then, competition can be introduced into the provision of a Lotto game. Competition is the normal framework in which consumers reveal their preferences, because firms have the incentive to experiment with different product specifications in order to gain market share. National Lottery players have had limited opportunity to reveal their preferences and econometric modelling has therefore been limited in its ability to estimate the appeal of different take-outs and different prize structures. It is contended here that a measure of competition is entirely feasible in the supply of the main lottery product. Deregulatory reform could proceed from the recognition that the principal natural monopoly element of the lottery is the grand, or jackpot, prize. Lotto players' stakes need to be channelled towards one national jackpot prize fund for the game to survive as a mass-participation gaming activity built on 'selling a dream'. This, though, would still be possible under the following alternative institutional arrangements.
A 'National Lottery Commission' (state or privately operated) would organize the Wednesday and Saturday draws and provide appropriate publicity focusing on the size of the jackpot.24 The Commission would license fit and proper organizations to operate their own lotteries, affiliated to the National Lottery.25 All would be obliged to pay lottery taxes at the current rate, or whatever rate the government sets in the future. All would be required to put into the National Lottery jackpot pool the current proportion of stake allocated to the grand prize. But they would be free to dispose of the remaining revenue as they thought appropriate. For example, they might allocate all the remaining revenue available for prizes to an extra 'grand prize' fund payable if one or more of their own clients won, or they might scrap only the fixed £10 prizes and reallocate that money to the four-ball prize pool.

In the early stages, diversity in product offerings would be likely as firms sought market share. In the mature market, whether or not all the suppliers would offer a similar prize structure would depend on the heterogeneity of bettor preferences. If a standard product emerged, that could perhaps indicate something about whether the current prize structure is optimal. If product diversity were sustained, this would be indicative of heterogeneity of preferences for which the current Lotto game does not cater; in this case, the market for the Lotto product should increase, with revenue benefits for the government and good causes.

In Britain at least, the betting industry is well established and there would be no shortage of organizations (e.g. the Pools industry, national bookmaking chains)

with the capability to enter the lottery market. Outlets such as post offices and supermarkets could be part of their own licensed operation or become outlets for lotteries offered by new entrants such as the pools companies. In the context of this chapter, the principal benefit of the new arrangements would be that competition would reveal bettor preferences and lead the industry to a better prize structure. By appealing to buyers with different risk-return preferences, it should also enlarge the market. Of course, it could be argued that Camelot already offers alternative products to cater for heterogeneous preferences; for example, it introduced Thunderball, an on-line game offering less skewness in returns than Lotto. But it is likely that a monopoly operator is very cautious in experimentation because of fears that it will cannibalize the market for its own principal existing product. The interest of bettors in choice of risk/return profiles is illustrated by the success of the twice-daily '49s' lottery game, organized by the bookmaking industry and sold only at licensed betting offices. It offers no jackpot prize for six 'correct' numbers but bettors can control variance in returns by betting on three, four or five correct numbers (at different odds). At present, this market flourishes outside, and is lost to, the National Lottery. Further, National Lottery players include many who only wish to gamble in the context of the national event that is the twice-weekly draw, and they may not bet at all if they do not like the current specifications of Camelot's Lotto game.
Other advantages of deregulation would include the efficiency gains usually associated with the dismantling of monopoly. Further, reform would remove the serious problem of how to choose the monopoly operator for each seven-year licence.
The first licence renewal generated considerable and prolonged controversy, which can be attributed to the fact that there is unlikely to be much difference between serious candidates for the lottery franchise. If one of the candidates attempts to differentiate itself by proposing a change in the game design or prize structure, the Commission has no firm basis on which to predict whether the changes would raise or lower revenue. The award of the franchise is, therefore, always likely to be contentious; and disillusion with the process could in future lead to the incumbent never being challenged at all. It has been argued here that there may not, in fact, be a need for a third monopoly franchise to be awarded. An element of competition is feasible in the supply of lottery services notwithstanding the peculiar scale economies of Lotto.

Acknowledgements
I acknowledge the usefulness of discussions on lottery issues with Neil Chesters, David Gulley, David Percy, Robert Simmons and Mike Vanning.

Notes
1 Clotfelter and Cook (1987) adopted this definition of the price of a lottery ticket after a discussion of possible alternatives. It has been widely adopted in the subsequent literature. Mason et al. (1997), however, reported that an alternative measure, equal to one divided by expected value (rather than one minus expected value), gave more satisfactory results when employed as a regressor in a study of demand for the Florida lottery. The 'one' here is the face value of the ticket.
2 Legislation stipulates that the hypothecated tax rate should increase beyond 28 pence if lottery sales exceed a certain amount during the course of the operator's licence, but sales have never been large enough to trigger this increase.
3 Early in the history of the lottery, Connolly and Bailey (1997) examined the extent to which the expenditure on 'Good Causes' represented new expenditure and the extent to which it just substituted for spending by other government programmes. Given changes in the areas of expenditure defined as 'Good Causes' and eligibility for lottery funding, further studies along these lines would now be timely.
4 The take-out rate is approximately twice as high as in the two next most popular British gambling media: slot machines and horse-race betting (Moore 1997, table 1).
5 The UK government was not alone in viewing the introduction of a lottery as a means of raising revenue rather than a means of delivering utility to consumers. Erekson et al. (1999) are amongst those who have demonstrated the importance of fiscal pressure in triggering take-up of lotteries by American states. In Canada, the first lottery was introduced to cope with the financial crisis in Quebec following huge unanticipated losses from the staging of the Olympic Games. More recently, Spain justified a new lottery by its need to improve public finances to meet the conditions for membership of the new European Currency Zone.
6 This should perhaps be regarded as an upper-bound estimate to the extent that it was calculated using an early estimate of the UK Lotto demand curve that displayed greater elasticity than the consensus value from later studies.
7 DeBoer (1986), however, had employed panel data for seven American states over ten years and found significant price elasticity (−1.19).
8 Subsequent UK studies tended to be much less spartan in terms of the number of controls. One may speculate that Gulley and Scott were concerned to estimate a similar equation for each game and incorporating one-off influences pertaining to particular states would undermine this.
9 Scott and Gulley (1995) for the US cases, and Forrest et al. (2000a) for the UK, tested and could not reject that bettors' behaviour is consistent with rational expectations in terms of the efficient processing of available information.
10 The authors report, but do not emphasize, alternative estimates for a linear specification of the demand curve. Notwithstanding that these are stated to be of the same magnitude as the log estimates, they appear (evaluated at the mean) to be quite different, for example, −2.50 rather than −1.20 for the Kentucky lottery. The policy implication would then change in that Kentucky would be advised to run a more generous lottery rather than be advised to continue with the current terms (an implication of the −1.20 estimate). No test for which functional form was more appropriate is reported but, commonly in Lotto demand studies, it is hard to choose between functional forms because effective price tends to cluster at two levels corresponding to rollover and regular draws.
11 It is assumed here that bettors view high jackpots (for given effective price) positively. Some plausibility is added to this assumption by an incident in the UK lottery when, as a one-off experiment, the lottery agency added reserve funds it held to the second-prize pool for a draw in 1999. In terms of effective price/expected value of a ticket, the effect was akin to a rollover. But sales for that draw did not respond at all to the reduction in effective price. In all draws when effective price has been lowered by rollover into the grand prize pool, sales have increased substantially.
12 Britain had had lotteries earlier in its history but they were finally legislated out of existence in 1826.
13 This is in fact above the licence take-out of £0.50 because the operator is permitted, and chooses, to price discriminate across its products and imposes a higher take-out on on-line than on scratch-card players.

14 Usually, the amount of 'free money' paid into the pool is not announced explicitly but rather Camelot guarantees a certain size of jackpot pool. However, the guarantee has always been binding and Camelot funds have been required to bring the jackpot up to the amount promised.
15 Particularly dubious is Farrell and Walker's assertion that lottery tickets are less addictive than cigarettes. This is based on a comparison of the coefficient on the lagged dependent variable in their study and that found in a cigarette demand equation estimated by Becker et al. (1994). The time period over which consumption of the two goods is defined is quite different between the two cases.
16 Camelot's other main product, scratch cards, provides an opportunity for rapidly chasing losses and is therefore regarded as a 'harder' form of gambling. Data limitations have so far prevented any economic studies of demand but one may suspect that, if suitable data became available, it would be worthwhile here to test for addiction.
17 In the UK lottery, six numbers are drawn from a set of forty-nine (without replacement) and the jackpot is then shared by bettors whose six numbers correspond exactly with those drawn. The draw also picks out a seventh number (the bonus ball). The second prize is shared by those whose entry comprises five of the main winning numbers plus the bonus ball number.
18 Wednesday and Saturday operations are both included with different levels of demand accounted for only by a shift dummy.
19 If the players had to choose six numbers from fifty-three instead of forty-nine, the game would be harder to win and more rollovers would result. Impact on total sales requires simulation because mean-variance-skewness will be altered in both regular and rollover draws and the relative frequency of regular and rollover draws would change.
20 Five per cent level of significance.
21 It is of interest that Forrest et al. (2000b) reported that when skewness was added to their version of the expected price model, it attracted a t-statistic of only 0.37.
22 Walker and Young do not use the one observation when skewness was atypically affected by a superdraw: they employ a dummy variable for the draw on 19 September 1998 and thereby eliminate its influence on the sales-skewness relationship. This may be justified statistically but it is unfortunate not to consider information on bettor preferences that may be contained in the data from this episode.
23 Beenstock et al. (1999) were able to observe variations in lottery design in Israel but these, likewise, could be argued as being endogenous.
24 Possibly it could be self-financing since it could sell television rights for coverage of the draws.
25 A matter to be resolved would be whether these organizations would own their sales terminals or lease them from the Commission.
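The two candidate price measures contrasted in note 1 are easy to compare numerically. A minimal sketch in Python (the face value and take-out figures are illustrative, not Camelot's actual terms):

```python
# Two candidate "price" measures for a lottery ticket (see note 1).
# Face value and take-out below are illustrative, not Camelot's actual terms.

def expected_value(face_value, takeout_rate):
    """Expected return per ticket when the operator keeps takeout_rate."""
    return face_value * (1 - takeout_rate)

face = 1.00      # face value of the ticket (the 'one' in both measures)
takeout = 0.55   # hypothetical take-out rate

ev = expected_value(face, takeout)   # 0.45
price_clotfelter = face - ev         # one minus expected value: 0.55
price_mason = face / ev              # one divided by expected value: ~2.22

print(price_clotfelter, price_mason)
```

Both measures fall when a rollover raises the expected value of a ticket; they differ in scale and curvature, which is why they can perform differently as regressors in a demand equation.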

References
Becker, G. S. and Murphy, K. M. (1988), 'A theory of rational addiction', Journal of Political Economy, 96: 675–700.
Beenstock, M., Goldin, E. and Haitovsky, Y. (1999), 'What jackpot? The optimal lottery tax', working paper, Hebrew University of Jerusalem.
Clotfelter, C. T. and Cook, P. J. (1987), 'Implicit taxation in lottery finance', National Tax Journal, 40: 533–546.
Clotfelter, C. T. and Cook, P. J. (1989), Selling Hope: State Lotteries in America, Cambridge, MA: Harvard University Press.
Conlisk, J. (1993), 'The utility of gambling', Journal of Risk and Uncertainty, 6: 255–275.
Connolly, S. and Bailey, S. J. (1997), 'The National Lottery: a preliminary assessment of additionality', Scottish Journal of Political Economy, 44: 100–112.

Cook, P. J. and Clotfelter, C. T. (1993), 'The peculiar scale economies of Lotto', American Economic Review, 83: 634–643.
Cox, D. R. (1961), 'Tests of separate families of hypotheses', Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Vol. 1, Berkeley: University of California Press.
Cox, D. R. (1962), 'Further results on tests of separate families of hypotheses', Journal of the Royal Statistical Society, Series B, 24: 406–424.
DeBoer, L. (1986), 'Lottery taxes may be too high', Journal of Policy Analysis and Management, 5: 594–596.
Erekson, O. H., Platt, G., Whistler, C. and Ziegert, A. L. (1999), 'Factors influencing the adoption of state lotteries', Applied Economics, 31: 875–884.
Farrell, L., Morgenroth, E. and Walker, I. (1999), 'A time series analysis of UK lottery sales: long and short run price elasticities', Oxford Bulletin of Economics and Statistics, 61: 513–526.
Farrell, L. and Walker, I. (1999), 'The welfare effects of Lotto: evidence from the UK', Journal of Public Economics, 72: 92–120.
Forrest, D. (1999), 'The past and future of the British football pools', Journal of Gambling Studies, 15: 161–172.
Forrest, D., Gulley, O. D. and Simmons, R. (2000a), 'Testing for rational expectations in the UK National Lottery', Applied Economics, 32: 315–326.
Forrest, D., Gulley, O. D. and Simmons, R. (2000b), 'Elasticity of demand for UK National Lottery tickets', National Tax Journal, 53: 853–863.
Forrest, D., Gulley, O. D. and Simmons, R. (2001), 'Substitution between games in the UK National Lottery', working paper, University of Salford.
Forrest, D., Simmons, R. and Chesters, N. (2002), 'Buying a dream: alternative models of Lotto demand', Economic Inquiry, 40: 485–496.
Francis, J. C. (1975), 'Skewness and investors' decisions', Journal of Financial and Quantitative Analysis, 10: 163–172.
Garrett, T. A. and Sobel, R. S. (1999), 'Gamblers favour skewness not risk: further evidence from United States' lottery games', Economics Letters, 63: 85–90.
Golec, J. and Tamarkin, M. (1998), 'Bettors love skewness, not risk, at the horse track', Journal of Political Economy, 106: 205–225.
Gulley, O. D. and Scott, F. A. (1993), 'The demand for wagering on state-operated lottery games', National Tax Journal, 45: 13–22.
Mason, P. M., Steagall, J. W. and Fabritius, M. M. (1997), 'The elasticity of demand for Lotto tickets and the corresponding welfare effects', Public Finance Review, 25: 474–490.
Moore, P. G. (1997), 'Gambling and the UK National Lottery', Business Ethics: A European Review, 6: 153–158.
National Lottery Commission (2000), 'Commission announces its decision on the next lottery licence', News Release 24/00, December 19, 2000.
Paton, D., Siegel, D. and Vaughan Williams, L. (2001), 'A time-series analysis of the demand for gambling in the United Kingdom', working paper, Nottingham Trent University.
Pesaran, H. (1974), 'On the general problem of model selection', Review of Economic Studies, 41: 153–171.
Purfield, C. and Waldron, P. (1997), 'Extending the mean-variance framework to test the attractiveness of skewness in Lotto play', working paper, Trinity College, Dublin.


Scott, F. A. and Gulley, O. D. (1995), 'Rationality and efficiency in Lotto markets', Economic Inquiry, 33: 175–188.
Walker, I. and Young, J. (2001), 'An economist's guide to lottery design', Economic Journal, 111: F700–F722.
Wessberg, G. (1999), 'Around the world in 80 games', presentation to the Inter Toto Congress, Oslo.
Woodland, B. M. and Woodland, L. M. (1999), 'Expected utility, skewness and the baseball betting market', Applied Economics, 31: 337–346.

16 Reconsidering the economic impact of Indian casino gambling
Gary C. Anders

A brief history of Indian1 gaming
Native American casinos result from the 1988 Indian Gaming Regulatory Act (IGRA). The IGRA is a federal law stemming from the US Supreme Court's decision in California v. Cabazon Band of Mission Indians, which found comparable Native American gambling legal where a state has legalized any form of gaming. There has since been a massive proliferation of Indian casinos throughout the country. Currently, 124 of the 557 federally recognized2 tribes operate gaming facilities. This industry of more than 120 casinos and 220 high-stakes bingo games sprang from a single bingo hall on the Seminole reservation in Florida. High-stakes gaming grew as other Florida and California Indian tribes began offering cash prizes greater than those allowed under state law. When the states threatened to close the operations, the tribes sued in federal court. In California v. Cabazon (1987), the Supreme Court upheld the right of the tribes as sovereign nations to conduct gaming on Indian lands. The court ruled that states had no authority to regulate gaming on Indian land if gaming is permitted for any other purpose.3 In light of the favorable Supreme Court decision, Congress passed P.L. 100-497, the IGRA, in 1988, recognizing Indian gaming rights. The IGRA faced strong opposition from Las Vegas and Atlantic City. States, however, lobbied for the legislation in an effort to establish some control over tribal gaming. Congress sought to balance Native American legal rights with the states' interests and the gambling industry (Eadington, 1990). The IGRA allows any federally recognized tribe to negotiate a compact with its respective state government to engage in gambling activities. A tribal–state compact is a legal agreement that establishes the kinds of games offered, the size of the facility, betting limits, regulation, security, etc.
Compacts ensure that tribal governments are the sole owners and primary beneficiaries of gaming. These compacts define the various allowable types of Indian gambling activities according to three classes. Class I is defined as social games solely for prizes of minimal value or traditional forms of Native American gaming engaged in by individuals. Class II includes bingo and electronic bingo-like games, punch boards, pull-tabs,


as well as card games not explicitly prohibited by state law. Class III includes all other forms of gambling including slot machines, casino games, and pari-mutuel betting. The IGRA created a framework for regulation and oversight of tribal gaming with four interdependent levels: tribal, state, federal including the Department of Justice, the FBI, the Internal Revenue Service (IRS) and the Bureau of Indian Affairs (BIA), and ﬁnally, the National Indian Gaming Commission (NIGC). Class I gaming is regulated solely by tribes. Class II gaming is regulated solely by tribes, if they meet conditions set forth in the IGRA. Regulation of Class III gaming is governed by tribal–state compacts. In general, tribes enforce frontline gaming regulations. Tribes establish their own gaming commissions and operate tribal police forces and courts to combat crime. They adopt ordinances, set standards for internal controls, issue licenses for gaming operations, and provide security and surveillance measures. Tribes or management contractors also manage tribal gaming operations. States enforce the provisions of Class III gaming compacts, which include background checks of employees and management company personnel. Some states like Arizona, for example, coordinate background checks and other security measures with tribes. At the federal level, the Department of Interior determines which lands can be placed into reservation trusts, approves tribal–state compacts, rules on tribal gaming revenue allocation plans, and conducts audits of gaming operations. The Department of Justice enforces criminal violation of gaming laws, conducts background checks of key gaming employees, and conducts investigative studies. The FBI and BIA provide oversight on crimes committed on reservations. The NIGC approves tribal resolutions and gaming ordinances, and reviews terms of Indian casino management contracts. The NIGC has the authority to enforce civil penalties, impose ﬁnes, and to close an establishment. 
The IGRA provides tribal gaming operations with an exemption from the Freedom of Information Act. Unless the tribe agrees, federal and state regulators cannot publicly release or disclose financial information. This protective measure also makes it nearly impossible to ascertain individual casino revenues. Furthermore, because this is primarily a cash-based business, problems exist for law enforcement officers looking for a paper trail of records to trace the gaming activity of customers engaged in large-scale transactions and potential money-laundering activities (US General Accounting Office, 1997). Casinos range from the palatial Foxwoods casino in Connecticut to trailers in remote locations offering a few slot machines. Tribes do not have to pay taxes on their gaming revenues to the state or federal government. Some tribes have negotiated revenue-sharing agreements with their state government. All tribes are, however, legally required to withhold state and federal income tax and Federal Insurance Contributions Act (FICA) taxes from all non-Indian and non-resident Indian4 tribal employees, and to report payments to independent contractors. Additionally, Indian tribes must report gaming winnings to the IRS; withhold federal income taxes on statutorily defined gaming winnings and payments to non-resident aliens;


report per capita distributions of more than $600 to the IRS; and withhold federal income tax on distributions of $6,400 or more (Anders, 1998).

Introduction
Since it was legalized, Indian gambling has grown to account for approximately $9.9 billion in annual revenues (McKinnon, March 13, 2001). Policy makers in the United States have been relatively slow to grasp the significance of this development, in part because there has been little research on the economic and social impacts of gambling on both native and non-native economies. The purpose of this chapter is to provide an overview of the various policy issues related to Indian gambling in Arizona. It discusses three interrelated issues regarding Indian gaming, namely casino monopoly profits, community impacts, and tax revenue displacement. This chapter presents the results of regression analyses that confirm the displacement effects of casinos by economic sector. An attempt is made to extrapolate the economic impact of casino enlargement on local government, using the number of slot machines as a measure of gambling activity to project an anticipated loss in tax revenue.
Several years ago, my colleagues and I examined the fiscal impact of casinos in Arizona. We conducted a statistical test on Maricopa County Transaction Privilege Tax revenues from 1990–1996. The results of that test indicated a destabilization of county tax collections beginning in July 1993 (Anders et al., 1998). We argued that a displacement of taxable expenditures reduces potential state tax receipts, but that this leakage is masked by population and economic growth (Hogan and Rex, 1991). In response to our findings, The Economic Resource Group, Inc. of Cambridge, Massachusetts was contracted by Arizona gaming tribes to write a report on the social economic benefits of Indian casinos (Taylor et al., 1999). While touting the positive impacts of Indian casinos, the authors attempted to discredit the validity of our work:

Researchers at Arizona State University West have advanced an Arizona-specific claim of off-reservation impacts resulting from Indian gaming . . .
Thus, the authors' attempt to link tax collection shortfalls to the introduction of casino gaming cannot possibly be correct unless people withheld purchases of goods and services in anticipation that casinos would be opened in the future . . . . That said, the failure of Anders, et al., to pick up an effect of actual casinos capacity additions on State tax receipts suggest at a minimum that much more careful and controlled analyses must be undertaken if substitution claims are to be supported.
(Taylor et al., 1999, 35–37)

Taylor et al. (1999) did not provide an alternative explanation based on actual casino revenue data to counter our results. Instead, they advocated for Indian gaming using anecdotal examples of the use of casino profits to help tribes. Before


responding to the call for rigorous tests of the displacement hypothesis, I will raise three pertinent issues related to Indian casinos. These are: (1) that the IGRA has created highly profitable gambling monopolies, (2) that proponents overstate the positive benefits of gambling on communities, and (3) that casinos result in an increasing loss of public sector revenues because of tax displacement.

Three little understood aspects of Indian gaming
Good public policy should match the intended outcomes with the actual results. Indian gaming was a compromise effort designed to promote Native American economic development while defining acceptable limits to tribal sovereignty within the US legal and political framework. The results of the IGRA have been far different from what its architects could have anticipated. Instead of promoting Native American self-sufficiency, gaming has made a small number of Indians rich while leaving many tribes mired in poverty.
Most of us have been sensitized to the historical conquest and the harsh treatment of aboriginal peoples. In this respect, we feel compassion for the hardships inflicted upon Native Americans. Indians have lost valuable lands and continue to experience a number of health-related problems, including drug and alcohol abuse. The treatment of Native Americans evokes a profound sense of moral outrage. Still, we should not confuse the economy of casinos and resort hotels that benefit a few tribal members with restitution to an entire race for past injustices. There is no question that the economic development of Indian reservations should be a high national priority, but we should be careful about the type of economy that is being developed.

Indian casinos are highly profitable monopolies benefiting a small percentage of Native Americans
Since the IGRA was passed in 1988, Indian casinos have become far more successful than anyone could have imagined. Indian casinos now generate almost $10 billion, which is more than the annual revenues of all the casinos in Nevada combined (McKinnon, March 13, 2001). The basis for a profitable casino is a large market. Many of the casinos in Arizona border populated urban areas, or are located on heavily traveled highways. At almost any time, day or night, the Arizona Indian casinos are full. Customers are often lined three or four deep waiting for their chance to play slot machines.
There are even special buses that pick up passengers and bring them to the casinos. Yet, every time a person drops a dollar into a slot or video poker machine, they will, on average, get back only 83 cents or less.5 Even accounting for an occasional large pay-out, Indian casinos are phenomenally profitable. Indian casino revenues are not publicly available because the IGRA specifically exempted tribes from the Freedom of Information Act. Without this financial information it is hard to know definitively, but a conservative estimate is that the nineteen Arizona casinos earn about $830 million in net revenues per year


(Cummings Associates, 2001). In reality, the actual amount is probably much larger.6
Table 16.1 presents a context for understanding the concentration of Indian gambling in Arizona. Data on the tribal population and the number of machines are presented. Aside from the Navajo tribe, with over 104,000 enrolled members, most Arizona Indian tribes are small. It is significant to note that about 36 percent of the total Native population controls gaming. Moreover, the profitability of gaming is highly skewed. As shown in Table 16.2, average annual net revenues from gambling range from about $4,000 to $260,000 per capita. Casinos generate huge profits for urban tribes while the larger, more impoverished tribes have largely been excluded from the gambling windfall.7
To garner public support, various Indian groups sponsor studies on gaming. For example, researchers at the University of Arizona recently released a study (paid for by the Arizona Indian Gaming Association) on the economic impacts of Indian casinos (Cornell and Taylor, 2001). According to this study, Indian casinos generated 9,300 jobs and spent $254 million on goods and services in 2000. They argue that Arizona Indian casinos had a total impact of $468 million on the state economy (Mattern, 2001).
A review of the literature on impact studies demonstrates that findings such as these should be viewed with caution. The abuse of economic impact models to exaggerate the benefits of various activities to gather public support is well known. Wang (1997), for example, explains how analysts inflate estimates to produce economic impacts that reflect a greater contribution to the community and therefore improve popular support and legitimacy. Using various multipliers derived from the literature, Wang demonstrates how the same activity can have a total economic impact ranging from $6.8 to $55.2 million.
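Wang's point is essentially arithmetic: the reported "impact" is an assumed multiplier applied to a direct-spending figure, and in input–output models the multiplier itself is derived from assumed inter-sector coefficients. A toy three-sector sketch (all coefficients and the $10 million direct figure are hypothetical, nothing like the 200-sector Arizona model):

```python
import numpy as np

# Toy input-output impact calculation, illustrating the multiplier critique
# (after Wang, 1997). All numbers here are hypothetical.

# A[i, j] = dollars of sector i's output used per dollar of sector j's output.
A = np.array([
    [0.10, 0.05, 0.02],
    [0.03, 0.20, 0.10],
    [0.02, 0.04, 0.15],
])

# Leontief inverse: total (direct + indirect) output per dollar of final demand.
leontief = np.linalg.inv(np.eye(3) - A)
multipliers = leontief.sum(axis=0)  # column sums = simple output multipliers

direct = 10_000_000  # hypothetical direct casino spending, in dollars
for sector, m in enumerate(multipliers):
    print(f"sector {sector}: multiplier {m:.2f} -> impact ${direct * m:,.0f}")

# Inflating the assumed coefficients inflates every multiplier, and hence the
# reported "impact", with no new information; nothing in this calculation
# nets out the spending displaced from competing local sectors.
```

The same direct figure yields very different totals as the assumed coefficients grow, which is how one activity can be credited with anything from $6.8 to $55.2 million of impact.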
Harris's (1997) discussion of the "use and abuse" of economic impact multipliers in the tourism industry asserts that data quality and accuracy are a critical consideration. Unless the economic model incorporates actual gambling revenue data, the estimated economic impacts of Indian casinos are likely to be inaccurate.8
The basis for Cornell and Taylor's asserted impacts is a complex Input–Output model of the state economy. This computer model portrays the economy in terms of a matrix of over 200 sectors that interact based upon assumed relationships between indirect and induced effects (i.e. jobs created, wages, sales, and output). Induced effects represent "new" employment opportunities that are created when an economic activity starts up and produces goods or services that are sold to consumers both in the state and outside. Two types of data are used to drive the model: (1) tax rates, which are known, and (2) input projections, which are estimated. A critical assumption is that the economic activity in question represents a new expenditure stream that generates subsequent rounds of spending. Also, the model assumes that businesses in the same sector are homogeneous and that it does not matter which sector experiences the stimulus.
Typically, economic impact studies of Indian gambling confuse first and second round expenditures (Gazel, 1998). Money coming to Arizona Indian casinos is not "new" money, but redirected money. For example, before the advent of

Table 16.1 Arizona Indian tribal population and gaming capacity

Reservation        2000         Slots        Slots     Bingo    Card     Date
                   population   authorized   in use    seats    tables   opened
Ak-Chin                   742          475      475      488       13     3/94
Cocopah                 1,025          475      468      350        0     8/96
Colorado River          7,466          475      456      350        5     8/98
Fort McDowell             824          475      475    1,700       45     5/94
Fort Mohave               773          475      180        0        0    10/93
Gila River             11,257          900      900    1,800       62     9/93
Havasupai*                503            0        0        0        0
Hopi*                   6,946            0        0        0        0
Hualapai*               1,353          475        0        0        0    closed
Kaibab-Paiute*            196          475        0        0        0    closed
Navajo*               104,565            0        0        0        0
Pascua Yaqui            3,315          900      500      476        0    12/93
Quechan                 2,500          475      475      300        8     5/95
Salt River              6,405          700      700        0       92    11/92
San Carlos              9,385          900      500    1,000        6    12/94
Tohono O'Odham         10,787        1,400      592        0       28    11/92
Tonto-Apache              132          475      337      280        5     6/99
White Mountain         12,429          900      496      200        5     1/93
Yavapai-Apache            743          475      475        0        5     4/95
Yavapai                   182          475      475      150        6    11/97
Totals                179,064        9,925    7,504    7,094      280
Gaming tribes          65,697

Source: Arizona Department of Economic Security, 2001, and Arizona Department of Gaming, July 2001.

Notes
* Indicates tribes without casinos even though a Class III gaming compact may have been signed with the State of Arizona.

Table 16.2 Per capita slot machine revenue, unemployment rates, welfare and transfer payments for Arizona Indian reservations

Reservation        Slot machine     Welfare    Federal        State          Food stamps    Unemployment rates
                   revenue per      cases+     assistance     assistance     FY 2000 ($)    1990     2000
                   capita** ($)     FY 2000    FY 1997 ($)    FY 1997 ($)
Ak-Chin                  64,016          85         44,114         22,777         23,869     6.0      5.7
Cocopah                  45,659          23            n/a            n/a          8,064    22.4     13.2
Colorado River            6,108       1,531            n/a            n/a        338,571     9.0      4.9
Fort McDowell            57,646           2      1,067,510            n/a            348    11.3      6.4
Fort Mohave              23,286          98            n/a            n/a         26,143    13.9      7.8
Gila River                7,995       5,559            n/a        551,188      1,729,888    29.6     18.1
Havasupai*                    0         153            n/a            n/a         36,317    75.0      9.0
Hopi*                         0       2,904            n/a        554,395        860,911    55.0     15.2
Hualapai*                     0       1,080            n/a            n/a        262,798    37.0     20.2
Kaibab-Paiute*                0         120            n/a            n/a         22,718     n/a     12.5
Navajo*                       0      89,250      8,497,477      4,379,254     24,718,964    52.0     17.3
Pascua Yaqui             15,083       4,519        504,431        260,449      1,276,495    34.4     21.5
Quechan                  19,000         n/a            n/a            n/a            n/a    43.0     33.3
Salt River               10,929       3,122        513,721        265,232        817,955    15.4      8.7
San Carlos                5,328      14,571      1,553,059        802,127      3,575,329    30.0     18.4
Tohono O'Odham            5,488      12,341      1,341,321        692,589      3,039,354    79.0     13.2
Tonto-Apache            255,303         n/a            n/a            n/a            n/a    24.0      n/a
White Mountain            3,991      13,624      1,759,413        908,687      3,322,039     n/a     20.4
Yavapai-Apache           63,930          17            n/a            n/a          1,034    12.5      0
Yavapai                 260,989         n/a            n/a            n/a            n/a    33.0      7.0
Total               750,400,000     148,999     15,281,046      8,436,698     40,060,797

Source: Arizona Department of Economic Security, July 2001 and US Census Bureau, Census 2000 Summary File. More recent data for this study was unavailable.

Notes
* Indicates tribes currently operating without a casino though they may be a part of the compact currently signed with the state of Arizona.
** Using Cumming's $830,000,000 revenue estimate means that, on average, each of the 7,504 Indian casino slot and video poker machines earns over $100,000 per year. To calculate a conservative estimate of the per capita revenue from slot machines I multiplied the number of machines times $100,000 and divided the total by the number of enrolled tribal members listed in the 2000 census.
+ Indicates the number of households currently receiving food stamps. The number of household residents varies.

Economic impact of Indian casino gambling

Before Indian gaming, a person might have gone to a movie or restaurant, but now instead goes to a casino. The money spent at the casino comes from one sector, where it diminishes sales, and goes to another, where it provides profits. There is nothing inherently wrong about this, except that it is incorrect to postulate positive economic impacts without considering the corresponding economic loss. Based upon the available evidence there is a high probability that gambling confined to local markets only results in income redistribution, and causes no net change in employment (Felsenstein et al., 1999). The reason for this is that casino profits constitute expenditure losses for competing businesses. Although some of the winnings are recaptured from gambling that would have been undertaken outside of the state, the fact is that Arizona Indian casinos derive almost all of their business from state residents. This is a significant difference that substantially reduces positive impacts (Rose, 2001). Furthermore, at least two casinos in Arizona, Ak-Chin and the Yavapai-Apache in Camp Verde, have management contracts with outside companies like Harrah's and Fitzgerald's which result in up to 40 percent of the profits being remitted back to corporate headquarters outside Arizona. This means that the leakage is even greater because of reduced second-round expenditures. Figure 16.1 presents a model of the Arizona economy with casinos to demonstrate why gaming reduces potential multiplier effects.

In Arizona, Indian gaming is a highly profitable monopoly that has been able to internalize the benefits and externalize the social costs. The nineteen casinos take in over $830 million in net revenue and, except for modest regulatory costs, none

[Figure 16.1 A model of the Arizona economy with Indian casinos. Consumer expenditure flows into the Indian casinos, generating expenditure and income effects (direct employment, indirect employment, value added) alongside leakages: weak linkages with non-reservation businesses, a consumption multiplier lower than the private sector's, and profits remitted to outside management companies. The State of Arizona experiences decreased tax revenues and increased liabilities.]

G. C. Anders

of the profit is shared with the state. The Cornell study estimated that casinos spend only $254 million on goods and services, and have a total impact of $468 million. Comparing these estimates with the net revenue, it appears that Indian casinos are responsible for a drain of at least $362 million from the Arizona economy, and this does not consider other negative impacts.9

Claims of positive impacts of Indian casinos are overstated

Tribal leaders and the Arizona Indian Gaming Association (AIGA), an industry lobby, argue that Indian gambling generates spillover benefits in the form of jobs and taxable wage income. They point to the thousands of jobs created by casinos, and to the added purchasing power afforded to their employees and tribal members as a result of gambling. Defenders of Indian gaming argue that decreases in unemployment and a reduction in the number of families dependent upon welfare have reduced state and federal payments to tribes and thus saved money. At the same time, casinos are said to have been responsible for improved health care, substance abuse programs, educational scholarships, and improvements to the housing stock and infrastructure of the reservation communities (Stern, June 18, 2001). While there is some truth in these assertions, they need to be considered in light of the evidence.

It is clear that casinos have created jobs, albeit with a corresponding job loss in other sectors. However, the available evidence does not support the assertion that gaming has substantively reduced Native American unemployment. According to data from the Arizona Department of Economic Security and the BIA, there is no statistical difference in changes in unemployment between Arizona tribes with a casino and those without a casino. While individual tribes (e.g. Cocopah) have experienced a dramatic decrease in unemployment, from 22.4 percent in 1990 to 13.1 percent in 2000, overall rates of unemployment for all tribes, including tribes without casinos, have shown a downward trend after peaking in 1994 (see Table 16.2). Furthermore, the rate of employee turnover in Indian casinos is high, and the residual level of permanent employment is much lower than one might assume.10

To be fair, there are numerous examples where tribes have used casino profits to improve the quality of life for tribal members. Infrastructure has been improved, housing has been built, and social services and health care have been expanded. Yet, relative to the per capita profits from gambling, there is still unexplained residual unemployment and continuing dependence on welfare and food stamps among tribes with casinos. For example, the Ak-Chin tribe has a total enrollment of 742 people. Out of a labor force of 209 adults, 5.7 percent were still unemployed in 2000. Despite the fact that the Ak-Chin casino generated net revenues in excess of $64,000 per capita, eighty-five individuals were still receiving public assistance and food stamps. Numerous other gaming tribes, such as the Gila River, Pascua Yaqui, San Carlos, and White Mountain, experience similar anomalies with unemployment and welfare assistance. Contrary to widely promoted misconceptions, there is no evidence that gambling has significantly improved the quality of life for most Native Americans.
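The drain figure quoted above follows from simple arithmetic on the chapter's estimates (Cummings' net-revenue figure, the Cornell study's impact figure, and the machine count from Table 16.1); a quick check in Python, with variable names of my own choosing:

```python
# Figures from the chapter; only the arithmetic below is mine.
NET_REVENUE_M = 830      # casino net revenue, $ millions (Cummings estimate)
TOTAL_IMPACT_M = 468     # total economic impact, $ millions (Cornell study)
MACHINES_IN_USE = 7_504  # slot and video poker machines in use (Table 16.1)

drain_m = NET_REVENUE_M - TOTAL_IMPACT_M               # drain on the state economy
per_machine = NET_REVENUE_M * 1_000_000 / MACHINES_IN_USE

print(f"implied drain: ${drain_m} million")            # the $362 million in the text
print(f"net revenue per machine: ${per_machine:,.0f}")  # "over $100,000 per year"
```

The second figure is the basis for the table note's conservative $100,000-per-machine assumption.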

According to a report entitled Survey of Grant Giving by American Indian Foundations and Organizations recently released by Native Americans in Philanthropy (NAP), gaming on Indian reservations has yet to significantly lower the high levels of poverty endemic to Indian people nationwide. The report found that poverty among Indians has actually risen during the past decade of the gaming boom, and now more than half of all reservation Indians live below the poverty level, more than four times the national average . . . Small tribes located near major urban areas have benefited the most from the gaming boom. (Native Americas Magazine, 1997)

Displacement effects of Indian casinos are significant

My research with Donald Siegel has been directed towards understanding the fiscal impacts of commercial gambling. We found evidence to suggest that the opening of Indian casinos was related to a structural change in the state Transaction Privilege Tax (TPT)11 (Anders et al., 1998). In other words, Indian gaming profits come at the expense of other taxable sectors of the state's economy. This is because, on average, the State of Arizona collects about 5.5 percent of the revenue from taxable sales.12 Based on annual net revenues of approximately $830 million, on-reservation gambling reduces taxes by approximately $47.3 million per year, depending upon the extent to which gambling is "exported" to tourists and seasonal residents.13 For the most part, economic growth and in-migration have masked these leakages. Unfortunately, due to the lack of Indian casino revenue data, our research approach required the use of fairly sophisticated statistical tests.

These findings caused us to look closely at the question of whether or not Indian casinos have a negative impact on other forms of gambling (i.e. horse and dog racing, and lotteries) that contribute to state tax revenues. More recently, we empirically examined the relationship between the state lottery and Indian casinos. Using regression analysis with the number of slot machines as a proxy variable for casino revenues, we found that decreases in lottery sales are correlated with the growth in Indian gaming (Siegel and Anders, 2001). Thus, a consistent picture emerges from this stream of research. Gambling on an Indian reservation by and large constitutes a leakage from the taxable economy.

Economic impact studies written in support of tribal casinos typically use expenditure and employment multipliers to demonstrate that the casinos benefit regional economies through direct purchases and employment or through indirect multiplier effects. Unlike Las Vegas or Atlantic City, which export gambling to residents of other states (Eadington, 1999), the casinos in Arizona rely almost exclusively on local traffic. About 94 percent of the patrons of Indian casinos are state residents (McKinnon, March 21, 2001), which greatly affects the way in which casinos impact other parts of the local economy. As a result there is cannibalization of existing businesses and reduced tax revenue.

Now that these three issues have been addressed, the next section discusses a new, more extensive series of tests of the negative economic impact of Indian casinos.

Econometric analysis of displacement

The following explains a test of the sales tax revenue displacement that occurs when residents gamble at non-taxed Indian casinos. Displacement is the loss of government revenue as the result of an exogenous event. Empirically testing displacement as a function of Indian casinos is complicated by the favorable economic conditions and population growth of the last decade. Since the opening of Indian casinos in November of 1992, Arizona has experienced rapid demographic and economic growth. From 1990 to 2000 the state's population increased from 3.7 million to 5.1 million. As a result of increased personal spending, TPT collections grew dramatically from $1.9 billion in 1991 to $3.6 billion in FY2000. The hypothesis tested here is that Indian casinos divert a potential tax revenue stream and reduce the amounts that governments can collect. In other words, potential taxable revenue was taken from the economy at a time when there was considerable growth and low unemployment.

To test this hypothesis, it is necessary to perform a statistical analysis of TPT collections as a function of variables that are likely to explain their variation over time. Regression analysis is generally considered an appropriate statistical tool for testing economic hypotheses. This approach requires the specification of a formal model or equation that expresses the hypothesized relationships between dependent and independent variables. After testing a variety of variables and functional forms, an Ordinary Least Squares model was found to offer robust results with a parsimonious specification.14 Using data on TPT revenues, Arizona population, number of tourists, personal income, and the number of slot and video poker machines in Arizona Indian casinos, a series of regressions was conducted. Table 16.3 summarizes the variables and data sources used in this study.
Due to the fact that some of the data are kept on a monthly basis for a fiscal year beginning July 1, and others are kept on a quarterly basis for a calendar year, it was necessary to standardize the series on a quarterly basis.

Table 16.3 Variables and sources of data

Variable     Series used                                                      Data source
TPT          Gross Transaction Privilege, Use and Severance Tax Collections   Arizona Department of Revenue
Population   Arizona Population                                               Center for Business Research, Arizona State University
Tourist      Estimated Visitor Count                                          Arizona Office of Tourism
Slots        Machine Count                                                    Arizona Department of Gaming

Since there are numerous
collection categories that would not be anticipated to have any interaction with casinos, the four sectors with the greatest likelihood of competition with casinos were tested. To accomplish this, the state TPT data were disaggregated to concentrate on four specific sectors: Restaurants/Bars, Amusements, Retail, and Hotels.15 The TPT collections from these four sectors are the dependent variables that were individually regressed against the independent variables: population, the number of tourists, and slot machines. The number of slot machines was used as a proxy variable. The equation also included a numeric trend term to capture growth, and a dummy for summer seasonal effects. The Ordinary Least Squares procedure in the SPSS statistical program allowed for the specification of the following model:

TPT_i = a + β1 Pop + β2 Slots_{t−1} + β3 Trend + β4 Q1 + U

where i refers to the Transaction Privilege Tax collected from a specific sector; Pop is the population of the State of Arizona; Slots_{t−1} is the machine count in Arizona Indian casinos lagged by one period; Trend is a numerical term to account for growth; Q1 is a dummy variable for the first quarter of the fiscal year (July, August, and September); U is a normally distributed error term.

Each of these individual TPT collection classes was regressed against the set of independent variables noted above. The data set ranged from the first quarter of FY 1992 to the fourth quarter of 2000. With four explanatory variables and thirty-four observations, the model has thirty degrees of freedom. Table 16.4 summarizes the results of the econometric tests. The explanatory power of a model is reflected in the R² statistic. In these regressions the R² values indicate that between 66 and 97 percent of the variation in TPT can be explained by variations in the independent variables. When the value of the Durbin–Watson (DW) statistic is close to 2, it can be assumed that there is no serial correlation in the residuals.
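As a sketch of the estimation step, the quarterly model can be fitted by ordinary least squares via the normal equations. The series below are synthetic stand-ins (the actual TPT, population, and machine-count data are not reproduced here), so only the mechanics, not the estimates, mirror the study:

```python
# A sketch of the chapter's quarterly OLS specification,
#   TPT_i = a + b1*Pop + b2*Slots_{t-1} + b3*Trend + b4*Q1 + U,
# fitted via the normal equations (X'X)b = X'y in pure Python.
# All data below are synthetic stand-ins, not the Arizona series.
import math

def ols(X, y):
    """Ordinary least squares via the normal equations, solved by
    Gaussian elimination with partial pivoting."""
    k = len(X[0])
    xtx = [[sum(row[i] * row[j] for row in X) for j in range(k)] for i in range(k)]
    xty = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(k)]
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(xtx[r][col]))
        xtx[col], xtx[piv] = xtx[piv], xtx[col]
        xty[col], xty[piv] = xty[piv], xty[col]
        for r in range(col + 1, k):
            f = xtx[r][col] / xtx[col][col]
            xtx[r] = [a - f * b for a, b in zip(xtx[r], xtx[col])]
            xty[r] -= f * xty[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):
        beta[r] = (xty[r] - sum(xtx[r][c] * beta[c] for c in range(r + 1, k))) / xtx[r][r]
    return beta

# 34 synthetic quarters: population and machine counts trend upward, and
# the made-up "true" effect is a fall of 4.0 per thousand lagged machines.
X, y = [], []
for t in range(34):
    pop = 3.7 + 0.04 * t + 0.3 * math.sin(1.7 * t)        # millions
    slots_lag = 3.0 + 0.14 * t + 0.3 * math.sin(0.9 * t)  # thousands of machines
    q1 = 1.0 if t % 4 == 0 else 0.0                       # Jul-Sep dummy
    X.append([1.0, pop, slots_lag, float(t), q1])
    y.append(400 + 120 * pop - 4.0 * slots_lag + 0.9 * t - 5 * q1)

a, b_pop, b_slots, b_trend, b_q1 = ols(X, y)
print(round(b_slots, 4))  # negative: TPT falls as the lagged machine count rises
```

With noiseless synthetic data the fit recovers the planted coefficients exactly, which makes the mechanics easy to verify before pointing the same code at real series.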
Of these models, the R² statistics for Restaurants and Bars and for Retail are the highest; however, only for the Amusements TPT is the t-statistic for the β2 Slots parameter greater than the critical value at the 95 percent confidence level.

Table 16.4 Results of state TPT regressions using quarterly data

Dependent variable       Pop             Slots           Trend          Q1              R²     F     DW
TPT Restaurants/Bars     −0.39 (−0.85)   −0.19 (−1.3)    1.5 (2.7)*     −0.21 (−5.0)*   0.94   133   2.0
TPT Amusements           −2.5 (−2.1)*    −1.0 (−2.7)*    4.2 (2.9)*     0.09 (0.96)     0.66   14    1.8
TPT Retail               −0.09 (0.26)    −0.13 (−1.1)    1.2 (2.8)*     −0.09 (−2.9)*   0.97   227   2.0
TPT Hotels               −0.97 (−1.1)    −0.60 (−0.21)   1.70 (1.67)    −0.56 (−7.2)*   0.82   35    2.1

Notes: t statistics in parentheses. * Statistically significant at the 95 percent confidence level.

The most significant finding is that the number of slot machines
is negatively correlated with the TPT collections for each of the four sectors. The best fit occurred with the Slots variable lagged one quarter, which implies that every increase in the number of slot machines had a future negative impact on the Amusements TPT. The impact of seasonality is quite strong, as reflected by the t statistics for Trend and Q1. Stationarity, or change in the underlying economic trend, is also a factor with economic time series, because the regression assumes a stable relationship between dependent and independent variables. To some extent this has been corrected by the use of a trend variable and a seasonal dummy. Ideally, there should be enough observations to decompose the data set to capture the different trends.16

To achieve a model capable of seasonal decomposition, I ran another set of regressions, this time using monthly data with the following specification (see Table 16.5):

TPT_i = a + β1 Slots + β2 Trend + β3 Q1 + β4 Q2 + β5 Q3 + U

where i refers to the Transaction Privilege Tax collected from a specific sector; Slots is the machine count in Arizona Indian casinos for that period; Trend is a numerical term to account for growth; Q1 is a dummy variable for the first quarter of the fiscal year (July, August, and September); Q2 is a dummy variable for the second quarter of the fiscal year; Q3 is a dummy variable for the third quarter of the fiscal year; U is a normally distributed error term. The model has eighty-three degrees of freedom with five explanatory variables.17

These results clearly demonstrate that on a statewide basis Indian casinos have had a negative impact on the four sectors of the Arizona economy. They confirm at the 95 percent confidence level the hypothesis of a revenue displacement, particularly in the Amusements sector. The negative parameter indicates an inverse relationship between the growth of Indian casinos and horse and greyhound track revenues, which have declined.18

Because population is concentrated in two urban areas, there is good reason to expect that these relationships will also be evident for Maricopa and Pima counties. In addition, these two counties have high concentrations of Indian casinos. Using the same format, regressions were performed on county TPT collections excluding the collections from Maricopa and Pima counties. Owing to changes in the Department of Revenue collection methodology, it was not possible to get a complete time series for all four sectors. Instead, complete series were only available for two sectors: Retail, and Bars and Restaurants. Again, regressions were run using quarterly observations from the first quarter of 1991 to the fourth quarter of 2000. Table 16.6 presents a summary of the county results.

These results similarly confirm the negative relationship between Indian gaming and TPT at the county level. It is interesting that the displacement effect is present in less populated counties. There are several possible explanations for this. First, there are fewer entertainment choices in rural areas. Second, Indian casinos are spread widely throughout the state. Finally, it could be that the most pronounced effects of gaming are in less populated areas, where the drain is less dampened by economic growth.
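The seasonal dummies in both specifications follow Arizona's July-June fiscal year (Q1 = July-September, with the fourth quarter, April-June, as the omitted baseline). A minimal helper, with function names of my own choosing, makes the mapping explicit:

```python
# Map calendar months to fiscal quarters for a fiscal year starting July 1,
# and build the (Q1, Q2, Q3) dummy columns used in the monthly regression.
# Q4 (April-June) is the omitted baseline category.

def fiscal_quarter(month):
    """Return the fiscal quarter (1-4) of a calendar month (1-12)."""
    return (month - 7) % 12 // 3 + 1

def seasonal_dummies(month):
    """Return the (Q1, Q2, Q3) dummy values for a calendar month."""
    q = fiscal_quarter(month)
    return tuple(1 if q == i else 0 for i in (1, 2, 3))

print(fiscal_quarter(7), seasonal_dummies(7))   # July -> 1 (1, 0, 0)
print(fiscal_quarter(4), seasonal_dummies(4))   # April -> 4 (0, 0, 0), the baseline
```

Dropping one quarter's dummy avoids the dummy-variable trap, which is why note 17 observes that a fourth-quarter dummy is unnecessary.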

[Table 16.5 Results of state TPT regressions using monthly data. Each of the four sector collections was regressed on Slots, Trend and the seasonal dummies Q1, Q2 and Q3; t statistics were reported in parentheses, with * denoting statistical significance at the 95 percent confidence level.]

Dependent variable       R²      F     DW
TPT Restaurants/Bars     0.93    266   2.1
TPT Amusements           0.44    14    1.6
TPT Retail               0.77    57    2.2
TPT Hotels               0.64    31    1.7

[Table 16.6 Results of county TPT regressions. The two available sector collections, excluding Maricopa and Pima counties, were regressed on Pop, Tourist, Slots and Trend; t statistics were reported in parentheses, with * denoting statistical significance at the 95 percent confidence level.]

Dependent variable                             R²      F    DW
TPT Retail less Maricopa and Pima              0.86    54   1.94
TPT Restaurants/Bars less Maricopa and Pima    0.92    82   1.91
Impacts on city taxes

City tax collections are another important area that may be affected by the construction of hotels and resorts on Indian reservations. At least three tribes in close proximity to the Phoenix metropolitan area are building resorts. The Ak-Chin Indian Community and Harrah's Entertainment, which operates the tribal casino, have already opened a 146-room resort (McKinnon, March 13, 2001). The Gila River Indian Community is building a $125 million resort and spa south of Phoenix with 500 rooms, two 18-hole golf courses, and an industrial park (McKinnon, May 16, 2001). The Fort McDowell casino is building a 500-room resort, and the Salt River casino will add a resort complex with 1,000 rooms. It is reasonable to anticipate that these developments will negatively impact cities that derive tax revenue from hotels and bed taxes (Schwartz, 2001). The logic for anticipating a dramatic revenue loss is based upon the following rationale.

1 The resort industry generates substantial taxes for cities, counties and the state. As shown in Table 16.7, Valley cities apply between a 3.8 and 6.5 percent tax on hotels and motels. The state and counties also receive revenue from taxes on hotels. If one includes property and state income taxes, the total tax contribution is large.
2 The economic downturn is already putting pressure on tourism, and thus taxes from resorts and other tourism-related sources are decreasing.
3 At the same time, overbuilding may increase competition between existing hotels.
4 The tribes are planning posh resorts with large conference centers and amenities, including golf courses, that will draw business and tax dollars away from off-reservation properties. These will be first-class facilities with great locations in an established market.
5 Room prices can be subsidized with gambling profits the same way that Las Vegas casinos subsidize their meals and rooms in order to draw customers (Eadington, 1999). Even if the tribes do not choose to subsidize room costs with gaming profits, their prices can be substantially lower simply because no local taxes will be charged.

Assuming a total increase of 2,146 hotel rooms from Indian resort construction, the anticipated tax loss would be approximately $1.48 million per year for city governments.19 There is also a possibility of increased tax losses from golf clubs and other amusements that will be available at the casino properties.
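Note 19's figure can be reproduced directly from the chapter's average room tax and room count; a one-line check (constant names are mine):

```python
# City tax-loss estimate from note 19: average annual city tax per hotel
# room times the new on-reservation rooms. Both inputs are the chapter's.
AVG_ROOM_TAX = 688   # average city hotel/bed tax per room per year, $
NEW_ROOMS = 2_146    # rooms planned at the tribal resorts

loss = AVG_ROOM_TAX * NEW_ROOMS
print(f"${loss:,} per year")  # prints $1,476,448 per year, i.e. about $1.48 million
```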

Caveats and conclusions

Since 1992 approximately nineteen Indian casinos have been established in Arizona. These casinos have generated hundreds of millions of dollars in profits for tribal communities. Although negotiations were halted by a recent federal court decision, Governor Hull has been engaged in compact talks with Arizona's Indian tribes. It has been publicized that in return for revenue sharing that could result in $83 million per year for the State of Arizona, the tribes will be able to increase the number of

Table 16.7 City hotel and bed taxes

City             Tax base   Bed     Total    Total amount        Number of     Tax per
                 rate* (%)  tax**   tax***   collected in FY     hotel rooms   room ($)
                            (%)     (%)      1999–2000 ($)
Carefree         2.0        3.0     5.0      520,051             409           1,271.52
Cave Creek       2.5        4.0     6.5      21,784              24            907.68
Fountain Hills   1.6        3.0     4.6      87,034              125           696.27
Mesa             1.5        2.5     4.0      1,365,447           4,961         275.24
Phoenix          1.8        3.0     4.8      21,289,336          22,375        951.48
Scottsdale       1.4        3.0     4.4      7,173,484           13,316        538.71
Tempe            1.8        2.0     3.8      1,635,517           5,452         299.98

Source: Individual cities' Tax Audit Departments, June 2001; individual cities' Convention and Visitors Bureaus, June 2001; Arizona Office of Tourism, June 2001; Northern Arizona University Office of Tourism and Research Library, June 2001.

Notes
* The base rate is the total sales tax charged by the city before the additional bed tax is applied. This does not include any state or county taxes.
** The bed tax is the additional tax rate that is applied by cities in addition to the sales tax on hotel and motel rooms.
*** The total tax is the sum of the tax base and the bed tax added together.

slot machines to 14,675 and offer "house-backed" table games such as blackjack (Pitzl and Zoellner, 2002). At a time when the State of Arizona is facing an increasingly large projected deficit, such an increase in revenues may look attractive. But the combined impact of increased gaming may be underestimated. Cummings Associates estimates that an increase in the number of slot machines in the Phoenix metro area will increase tribal gaming revenues to about $1.3 billion, or one-sixth of the annual state budget (MacEachern, 2001).

Table 16.8 demonstrates how an expansion in the number of slot machines would further erode state and local taxes. If we assume that the current average slot machine net revenue is approximately $100,000 per year, then each increase of 1,000 slot machines would result in a $100 million increase in gambling revenues. Assuming a combined tax loss of 9.07 percent to state, county, and city governments,20 we can anticipate that the annual displacement would grow from $68 to $133 million per year.

The available statistical evidence clearly demonstrates that Indian casinos do have a significant negative economic impact on the state economy. Furthermore, an expansion of Indian casinos and resorts will exacerbate the tax drain on local government. Certainly, Arizona public officials should consider these findings as they continue to grapple with the negotiation of new compacts for Indian gaming.

There is nothing here that is infallible. The data used are subject to economic cycles and structural changes that affect the reliability of the econometric results. Also, there could be inaccuracies in the application of an average slot machine net revenue value for all casinos. But rather than relying on emotional appeal or casual reasoning, I have tried to use the existing data in a logical and well reasoned

Table 16.8 Estimated impact of an increase in slot machines

Number of slot machines   Estimated annual revenues ($)   Estimated revenue displacement ($)
7,504*                        750,400,000                      68,061,280
10,925**                    1,092,500,000                      99,089,750
14,675***                   1,467,500,000                     133,102,250

Source: Arizona Department of Gaming. http://www.gm.state.az.us/.htm

Notes
* Current number of gaming devices in Arizona casinos.
** Number of gaming devices authorized by existing compacts with the State of Arizona.
*** Maximum number of gaming devices limited by statute.

manner to inform the public. The point is that we should use whatever empirical information we have to weigh both the costs and the beneﬁts of Indian gaming.
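Table 16.8's displacement column is simply the product of the chapter's two assumptions, $100,000 of net revenue per machine per year and a 9.07 percent combined tax rate (note 20); a short check in Python (names are mine):

```python
# Reproduce Table 16.8 under the chapter's assumptions.
PER_MACHINE = 100_000   # assumed annual net revenue per gaming device, $
TAX_RATE = 0.0907       # 7.87% state + Maricopa County plus 1.8% city (note 20)

rows = []
for machines in (7_504, 10_925, 14_675):
    revenue = machines * PER_MACHINE
    displacement = round(revenue * TAX_RATE)
    rows.append((machines, revenue, displacement))
    print(f"{machines:>6,} machines: ${revenue:,} revenue, ${displacement:,} displaced")
```

The three displacement figures reproduce the table exactly, from $68 million at the current machine count to $133 million at the statutory maximum.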

Acknowledgements The author acknowledges the talented assistance of Christian Ulrich and Robyn Stout. Kathy Anders provided invaluable help in collecting the data, and in sorting out the complexities of Arizona taxes. Special thanks to Don Siegel, David Paton and Roger Dunstan for their comments and suggestions. All remaining errors are mine.

Notes
1 The terms Native American and American Indian will be used interchangeably without intending offense.
2 Tribes can be recognized in only two ways: by an act of Congress, or through a lengthy and complex recognition process with the Department of the Interior, in which the Assistant Secretary for Indian Affairs makes the final determination on tribal recognition. Currently, over 109 groups are seeking recognition from the Department of the Interior for the purposes of establishing a reservation.
3 In two famous cases (Seminole Tribe v. Butterworth and California v. Cabazon Band of Mission Indians), the courts found that when a state permits a form of gambling, including bingo or "Las Vegas" nights, the tribes have the right to conduct gaming operations on their own land (Chase, 1995).
4 All Native Americans pay federal income, FICA and social security taxes; however, Indians who live and work on federally recognized reservations are exempt from paying state income and property taxes. Each tribe sets its own membership rules. To be eligible for federal benefits, most tribes require a person to have a one-fourth blood quantum of his tribe's blood to be an enrolled member. Some tribes have additional qualifications for membership.
5 The current compacts establish a minimum pay-out of 80 percent for slot machines and 83 percent for electronic poker or blackjack. Arizona Department of Gaming. http://www.gm.state.az.us/history2.htm.

6 The typical casino derives approximately 70 percent of its total revenues from slot and other electronic gaming machines (Eadington, 1999). Indian casinos derive a much higher percentage from slot machines.
7 This is also a national trend. The General Accounting Office's (1997) study of Indian gaming found that a small portion of Indian gambling facilities accounted for a large share of the aggregate gaming revenue. Similarly, the California Research Bureau estimates that 7 percent of the Indians in California are members of gaming tribes. Most California tribes are small, ranging from 50 to 400 people. The forty-one gaming tribes have about 18,000 members (Dunstan, 1998).
8 According to Leven (2000), when there are offsetting changes in local demand, as in the case of Indian casinos, the net multiplier can be less than one and even negative.
9 The basis for this conclusion is the difference between casino net revenues ($830 million) and the total economic impact ($468 million). If total spending instead of net revenues is used, the resulting difference is much greater.
10 This is based on several factors, including high job turnover for the low-wage hourly jobs, and the required educational level for jobs in the "back of the house" operations of casinos, which include accounting, financial management, marketing, and human resource management functions.
11 Technically, the State of Arizona does not have a "sales tax" paid by consumers for the goods they purchase. Instead, a tax on sales is paid by the vendor for the privilege of doing business in the state.
12 There is a complex series of taxes by industry class code. These rates ranged from 1 percent to 5.5 percent depending on the type of business. Effective June 1, 2001 the state TPT rates increased 0.06 percent as the result of Proposition 301. For more information see: http://www.revenue.state.az.
13 Since it is a net revenue figure, it does not reflect how much is actually spent at casinos. TPT taxes are based on spending, so the actual displacement effect would be even larger.
14 Parsimony is the use of the fewest number of explanatory variables and the simplest functional form in order to achieve an acceptable level of statistical significance.
15 This approach follows from previous findings regarding the sectoral impacts of casinos (Siegel and Anders, 1999).
16 Other functional forms (i.e. log linear and first differences) and other variables, including Personal Income, a data set maintained by the US Bureau of Economic Analysis, were also tried with mixed results.
17 A dummy variable for the fourth quarter is not necessary.
18 "In 1993 horse and dog tracks pumped $8.5 million into the state's budget. That dropped to $3 million last year. Revenues at the state's four dog tracks' live races have plunged to $77 million from $116.3 million from 1993 to 1999. The story is similar at the state's live horse-racing tracks, where revenues slid to $46.5 million from $80.6 million in the same period" (Sowers and Trujillo, 2000).
19 This amount was computed by taking the average room tax ($688) times the number of new hotel rooms (2,146).
20 This is based on the sum of state and Maricopa County tax rates equaling 7.87 percent, plus a city tax of 1.8 percent.

References
Anders, Gary C., Siegel, Donald, and Yacoub, Munther (1998), "Does Indian casino gambling reduce state revenues? Evidence from Arizona." Contemporary Economic Policy, XVI(3), 347–355.
Anders, Gary C. (1998), "Indian gaming: Financial and regulatory issues." The Annals of the American Academy of Political and Social Science, March, 98–108.

G. C. Anders

Chase, Douglas W. (1995), "The Indian Gaming Regulatory Act and state income taxation of Indian casinos: Cabazon Band of Mission Indians v. Wilson and County of Yakima v. Yakima Indian Nation." Tax Lawyer, 49(1), 275–284.
Cornell, Stephen and Taylor, Jonathan (2001), "An analysis of the economic impacts of Indian gaming in the state of Arizona," Udall Center for Studies in Public Policy, June.
Cummings Associates (2001), "The revenue performance and impacts of Arizona's Native American casinos," February 16.
Dunstan, Roger (1998), Indian Casinos in California. Sacramento, CA: California Research Bureau.
Eadington, W. R. (1990), Native American Gaming and the Law. Reno, Nevada: Institute for the Study of Gambling.
Eadington, William R. (1999), "The economics of casino gambling." The Journal of Economic Perspectives, 13(3), 173–192.
Felsenstein, Daniel, Littlepage, Laura, and Klacik, Drew (1999), "Casino gambling as local growth generation: Playing the economic development game in reverse." Journal of Urban Affairs, 21(4), 409–421.
Gazel, Ricardo (1998), "The economic impacts of casino gambling at the state and local levels." The Annals of the American Academy of Political and Social Science, March, 66–85.
Harris, Percy (1997), "Limitation on the use of regional economic impact multipliers by practitioners: An application to the tourism industry." The Journal of Tourism Studies, 8(2), 50–61.
Hogan, Tim and Rex, Tom R. (1991), "Demographic trends and fiscal implications," in McGuire, Therese J. and Naimark, Dana Wolfe (eds) State and Local Finance for the 1990s: A Case Study of Arizona. School of Public Affairs, Arizona State University, Tempe, 37–44.
Leven, Charles L. (2000), "Net economic base multipliers and public policy." The Review of Regional Studies, 30(1), 57–60.
National Gambling Impact Study Commission (1999), National Gambling Impact Study Commission Report, http://www.ngisc.gov. 
Mattern, Hal (2001), "Indian gaming: $468 Mil impact." The Arizona Republic, June 21, D2.
MacEachern, Doug (2001), "Slots in the city." The Arizona Republic, April 29, V1,2.
McKinnon, Shaun (2001), "Ak-Chin open resort hotel." The Arizona Republic, March 13, D1,6.
McKinnon, Shaun (2001), "Indian casinos savvy advertisers." The Arizona Republic, March 21, D1,5.
McKinnon, Shaun (2001), "Tribes bet on future." The Arizona Republic, May 16, D1,3.
Native Americas Magazine (1997), "Indian gaming having little effect on poverty," February 18, p. 3.
Pitzl, Mary Jo and Zoellner, Tom Hull (2002), "Tribes OK gaming deals." The Arizona Republic, February 21, A1, A2.
Rose, Adam (2001), "The regional economic impacts of casino gambling," in Lahr, M. L. and Miller, R. E. (eds) Regional Science Perspectives in Economic Analysis. Elsevier Science, 345–378.
Siegel, Donald and Anders, Gary (1999), "Public policy and the displacement effects of casinos: A case study of riverboat gambling in Missouri." Journal of Gambling Studies, 15(2), 105–121.
Siegel, Donald and Anders, Gary (2001), "The impact of Indian casinos on state lotteries: A case study of Arizona." Public Finance Review, 29(2), 139–147.

Economic impact of Indian casino gambling

Sowers, Carol and Trujillo, Laura (2000), "Gaming drains millions in potential taxes." The Arizona Republic, September 16, A1,12.
Stern, Ray (2001), "Casinos boom raises odds for addiction." Scottsdale Tribune, June 18, A1,14.
Schwartz, David (2001), "Tribal casinos expand into resorts." Lasvegas.com Gaming Wire, May 25, http://www.lasvegas.com/gamingwire/terms.html.
Taylor, Jonathan, Grant, Kenneth, Jorgensen, Miriam, and Krepps, Matthew (1999), "Indian gaming in Arizona: social and economic impacts on the state of Arizona." The Economic Resource Group, Inc., May 3.
US General Accounting Office (1997), A Profile of the Indian Gaming Industry. GAO/GGD96-148R.
Wang, Phillip (1997), "Economic impact assessment of recreation services and the use of multipliers: A comparative examination." Journal of Parks and Recreation Administration, 15(2), 32–43.

17 Investigating betting behaviour

A critical discussion of alternative methodological approaches

Alistair Bruce and Johnnie Johnson

Introduction

It is frequently observed that the investigation and analysis of betting behaviour occupies researchers from a wide range of disciplinary backgrounds, within and beyond the social sciences. Thus, aspects of betting activity, at the individual and collective levels, raise important questions for, inter alia, theoretical and applied economists, decision theorists, psychologists, those interested in organisational behaviour, risk researchers and sociologists. One consequence of this diverse research community in relation to betting is the sometimes sharp distinction in approach to the investigation of betting-related phenomena that exists between researchers with different disciplinary affiliations. This creates a fertile environment for the comparative evaluation of alternative methodological traditions. The aim of this contribution is to explore an important strand of the methodological debate by discussing the relative merits of laboratory-based and naturalistic research into betting behaviour. This involves a comparison of a methodological tradition, laboratory-based investigation, which has dominated the psychological approach to understanding betting, with the study of actual in-field betting activity, which is more closely associated with the recent economic analysis of betting phenomena. It seems reasonable to suggest that the traditional emphasis on laboratory at the expense of naturalistic investigation probably owes much to the alleged benefits of the former. Thus, proponents of laboratory-based investigation have tended to stress its advantages in terms of cost, the ability to isolate the role of specific variables and the opportunity it affords for confirmation of results via replication of experiments under tightly controlled conditions.
At the same time, naturalistic work is often criticised by laboratory advocates for its absence of control groups and its inability to control the number of observations in the categories of activity under scrutiny (see Keren and Wagenaar, 1985). Whilst a central contention of this chapter is that there is a compelling case for more emphasis on naturalistic vis-à-vis laboratory-based work, this is not to deny that the co-existence of these quite different investigative techniques yields
benefits in terms of our overall understanding of the field. Indeed, as Baars (1990) observes:

Without naturalistic facts, experimental work may become narrow and blind: but without experimental research, the naturalistic approach runs the danger of being shallow and uncertain.

To some degree the legitimacy of these differing traditions reflects the fact that each offers unique insights into different aspects of betting. Thus, for example, laboratory-based work permits a richer understanding of the individual cognitive processes that lie behind betting decisions. Naturalistic research, by contrast, focuses on the investigation of observable decision outcomes in non-controlled settings. The distinction between process and outcome is significant here, in reflecting those features of betting that engage more closely the interest of psychologists and economists, respectively. This chapter is structured in three main parts. First, we explain the particularly fertile opportunities for naturalistic research that are available to betting researchers, compared with naturalistic enquiry in other areas of behavioural analysis. A key feature here is the discussion of the richness of various aspects of the documentary material in relation to betting. The second part of the discussion addresses the particular difficulties associated with laboratory-based investigation of betting behaviour. Three main areas of weakness associated with the laboratory setting are considered. The section on 'Calibration in naturalistic betting markets' reports significant empirical distinctions between observed naturalistic behaviour and behaviour in the laboratory, which serve to illustrate the limitations of laboratory work in this area.

Betting research: the opportunities for naturalistic inquiry

Whilst the discussion that follows relates to the advantages of naturalistic research in the specific context of horse-race betting in the UK, many of the issues raised apply equally in the context of other forms of betting and wagering in the UK, as well as to horse-race and other betting activity in non-UK settings. Naturalistic research into betting enjoys significant advantages, from a purely pragmatic perspective, over naturalistic research in other areas of decision-making under uncertainty. A key factor in the potential for naturalistic betting research is the existence of a particularly rich qualitative and quantitative documentary resource for the analysis of a range of betting-related phenomena. Before describing the data in greater detail, it is instructive, given the focus of this chapter, to explain briefly the nature of horse-race betting in the UK. Essentially, for any horse race in the UK, there are two parallel forms of betting market available to the bettor: the pari-mutuel market and the bookmaker market. The pari-mutuel market, whilst globally more significant, is very much a minority market in the UK,
relative to the bookmaker market, which accounts for around 90 per cent of horse-race betting activity. For each form of market, there are, equally, two sub-markets: the on-course market, relating to bets placed at the racecourse, and the off-course market, where bets are placed in licensed betting offices. Whilst on- and off-course betting are clearly separate bodies of activity in terms of location, there are important institutional linkages between the two parts of the market, especially in relation to bookmaker markets. Thus, for example, off-course bookmakers may manage their potential liabilities by investing funds, via their agents, in the on-course market. This will affect the pattern of odds in the on-course market, which in turn affects the odds available off-course, given that the odds reported (and available to bettors) off-course are those obtaining at that time in the on-course market. One of the appealing features of off-course bookmaker-based markets in particular, in data terms, is the fact that all betting decisions are individually recorded on betting slips, by the decision maker. Availability of samples of betting slips, therefore, immediately gives the researcher access to a set of key features relating to the betting decision, which permits insights into a range of issues. Thus, a betting slip relating to a bet placed in a betting office routinely carries information relating to:

1 The particular horse(s), race(s), race time(s) and race venue(s) selected, thereby offering explicit details of the individual decision and its context.
2 The stake, that is, the extent of the financial commitment to the decision.
3 The type of bet; this indicates, for example, (i) whether success is a function of a single correct decision (the 'single' bet), or of several simultaneously correct decisions ('multiple' bets) and (ii) whether or not the bet has 'insurance' features that generate some return if the horse is 'placed' as well as if it wins its race (e.g. 'each-way' bets) and so on.
4 Whether the bet was placed at Starting Price, Board Price or Early Price,1 thereby offering insights into the bettor's subjective evaluation of the 'value' inherent in prices offered.
5 Whether tax2 was paid at time of bet placement (prior to October 2001), a factor that can be held to indicate the bettor's level of confidence in a bet.
6 Exactly when the bet was placed, thus allowing insights into the value of information in evolving betting markets.
7 Exactly where the bet was placed, thereby facilitating cross-locational analysis.

Clearly, the detail available from the betting slip has the potential to add significantly to our understanding of a range of aspects of the betting decision. Even with the comparatively high levels of electronic sophistication, which exist in bookmaking organisations and betting ofﬁces in the UK today, the betting slip remains overwhelmingly dominant as the means by which a bet is registered by the consumer. As a basis for the clear identiﬁcation of individual decisions, this is in marked contrast to other markets for state-contingent claims, such as markets for various ﬁnancial instruments, where the data relating to individual decisions are elusive. Factors inhibiting the analysis of individual decisions in such contexts
include the employment of agents to conduct transactions and the fact that many decisions may simply result from the automatic implementation of predetermined trading rules. Beyond the data detailed on the betting slip, but available in the public domain, the results of races allow an unequivocal insight into the performance of the betting decision. Again, the contrast with other ﬁnancial markets is compelling. In most ﬁnancial contexts, the market duration is not deﬁned, so that unequivocal statements regarding decision performance are not feasible. A further characteristic of betting markets, which derives from their ﬁnite nature, is that the researcher has access to a very large, and continually expanding, set of ‘completed’ markets on which to base analysis. Within the aggregate of betting markets, there is scope for distinguishing between (and hence scope for comparative analysis across) different types of horse race, according to a variety of criteria such as the class or grade of the race, the form of race (e.g. handicap vs non-handicap) or the number of runners in the race. All of these additional characteristics are readily accessible in the public domain. Thus, to a degree, the researcher has the opportunity to control for aspects of the decision setting (e.g. complexity, see e.g. Bruce and Johnson, 1996; Johnson and Bruce, 1997, 1998), which might be regarded as potentially inﬂuential in determining the nature of the decision, by comparing subsets of races comprising sufﬁcient numbers to guarantee statistically meaningful results. The pari-mutuel betting market in the UK, operated by the Horse-race Totalisator Board (the ‘Tote’) offers a distinct set of data possibilities that reﬂects its different structure and mechanisms. 
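The structural contrast between the two market forms can be made concrete: a pari-mutuel dividend is determined by the pool after the event, whereas bookmaker odds are fixed prices whose implied probabilities typically sum to more than one (the 'overround'). A minimal sketch with hypothetical figures (the takeout rate shown is illustrative, not the Tote's actual deduction):

```python
def parimutuel_dividend(pool, stake_on_winner, takeout):
    """Payout per unit staked on the winner, after the operator's deduction."""
    return pool * (1 - takeout) / stake_on_winner

def overround(decimal_odds):
    """Sum of odds-implied probabilities across all runners in a bookmaker market."""
    return sum(1.0 / o for o in decimal_odds)

# Pari-mutuel: a 10,000-unit pool, 2,500 units on the winner, 13.5% takeout (illustrative).
print(parimutuel_dividend(10_000, 2_500, 0.135))  # ~3.46 per unit staked

# Bookmaker: decimal odds for a hypothetical four-runner race.
print(overround([2.0, 4.0, 5.0, 10.0]))  # ~1.05, i.e. a 5% overround
```

The point of the comparison is that the same race generates two differently constructed sets of prices, which is precisely what makes the UK setting valuable for field-testing market-institution effects.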
The pari-mutuel market relies on an electronic system that obviates the need for individual bettors to complete betting slips, but which generates valuable aggregated information relating to the comparative betting activity of on-course, off-course and credit account bettors. Furthermore, the UK betting environment offers the near-unique opportunity to compare betting behaviour between two materially different market forms across a common set of betting events, thereby offering potentially valuable insights into the effect on betting of institutional peculiarities of market process and institution. This permits ﬁeld-testing of issues that have emerged as central to the research agenda of the experimental economics school (see, e.g. Smith, 1989; Hey 1991, 1992). A further beneﬁt associated with the use of naturalistic data for analysis of betting behaviour has increased in signiﬁcance in recent years. Thus, both the ‘civilising’ of the betting ofﬁce environment as a result of successive episodes of deregulation and the widening awareness of gambling, which has been promoted by the National Lottery, have meant that bettors constitute a more representative sample of the aggregate population in terms of most demographic variables. The factors discussed above serve to explain the particular appeal of naturalistic enquiry to betting researchers in a UK horse-racing context, in terms of data richness and volume, opportunities for comparative investigation and increasing representativeness. Beyond these particular advantages, it is worth noting, more generally, that horse-race betting markets feature a set of characteristics that
conform closely to an influential perspective on the essence of naturalistic decision-making proposed by Orasanu and Connolly (1993). Thus, factors regarded as distinctive to naturalistic decision-making are as follows.

1 The existence of poorly-structured problems: horse races can be seen as poorly structured in that there are, for example, no rules regarding how the decision problem should be addressed, how to combine different forms of information or how to select a particular form of bet.
2 Uncertain dynamic environments: betting markets are inherently uncertain, dealing with conjecture regarding the unknown outcome of a future event, and dynamic, as evidenced by the fast-changing and turbulent nature of prices in betting markets.
3 Shifting, badly-defined or competing goals: the motivations of bettors are complex and individual. For example, bettors may value financial return, intellectual satisfaction or the social interaction associated with betting (see Bruce and Johnson, 1992). Equally, motivations may shift during, for example, the course of an afternoon's betting activity, depending on the bettor's experience or the emerging pattern of results.
4 Action/feedback loops: a central feature of the betting task is that it involves making decisions based on the analysis of various forms of information/feedback from, inter alia, previous horse races and informed sources (e.g. trainers, owners and 'form' experts). Action/feedback loops are central to the continual adjustment and refinement of decision-making models as the information set evolves.
5 Time stress: the significant majority of betting activity on horse races takes place within a highly condensed time frame (typically around 20–30 minutes) prior to the start of the race, at which point the market is closed.
6 High stakes: whilst the level of stake is at the discretion of the bettor, a key factor in the context of this discussion is that the stake represents the bettor's own material resource.
7 Multiple players: whilst betting markets for individual horse races vary in terms of the number of participants, betting is a mass consumption activity in the UK.

A fuller discussion of the degree to which the horse-race betting environment captures these essential characteristics of the naturalistic decision setting is presented in Johnson and Bruce (2001).

Laboratory-based research: a critical perspective The aim of this section is to provide a critical assessment of the potential and limitations of laboratory-based investigation in providing insights into aspects of decision-making behaviour. It is important to stress here that whilst the understanding of decision-making behaviour is a key theme of the betting research agenda, the analysis of betting behaviour offers a signiﬁcant insight into wider
decision-making. This section provides the basis for a consideration, in the following section, of examples of empirically-observable distinctions between laboratory and naturalistic behaviour in particular relation to betting. The discussion of the laboratory environment focuses on three main areas of concern: the nature of the decision task in which participants are required to engage, the nature of the participants themselves and the character of the environment in which laboratory-based tasks are performed. It is, of course, the case that consideration, respectively, of the individual, task and environment represents an artificial separation. Clearly, the behaviour observed reflects the simultaneous influence of and interaction between factors at each level. As a loose organisational device, however, such a separation offers advantages in terms of identifying different forms of influence.

The decision task

One of the more important shortcomings of the laboratory investigation of betting and decision performance, in general, is its tendency to characterise the betting decision task as a discrete and clearly-specified problem that generates a similar discrete decision/betting 'event'. One reason for this is the need to present subjects with a comprehensible and unambiguous problem, which in turn allows the researcher to identify an unambiguous and discrete response by the subject for the purposes of analysis. However, it has been observed (Orasanu and Connolly, 1993) that real life decision tasks rarely take this form:

The decision maker will generally have to do considerable work to generate hypotheses about what is happening, to develop options that might be appropriate responses, or even to recognise that the situation is one in which choice is required or allowed.
At the same time, for other forms of decision, including betting tasks, the types of process described above may be wholly inappropriate, the decision response resulting perhaps from an intuitive rather than an analytical approach. The laboratory decision task, as generally speciﬁed, tends towards the ‘simple analytical’ form, in the interests of generating identiﬁable events and measurable effects. It thereby offers little by way of insight into more complex analytical or more intuition-based decision and betting problems. Equally, it is frequently the case that laboratory decision tasks involve a choice between a prescribed menu of alternatives deﬁned with unrealistic precision in what is essentially a ‘one shot game’. Where the laboratory attempts to explore the interactions between a succession of decisions and feedback from previous decisions in the sequence, there are dangers of over-structuring the decision–feedback–decision relationship. In particular, the practical realities of a laboratory experiment may tend to condense the time frame within which this process is allowed to operate, compared with the often more extended period within which interaction occurs in the naturalistic setting.
In contrast, within real world betting contexts, decision makers are often faced by tasks which are poorly structured, involving uncertain and dynamic events with action/feedback loops. It is not surprising that subjects' cognitive processes, which are effectively 'trained' in such real world contexts, become attuned to such tasks. Consequently, individuals develop strategies, such as changing hypotheses, which are designed to handle the often redundant and unreliable data associated with real world decision tasks (Anderson, 1990; Omodei and Wearing, 1995). These approaches can be functional in dynamic real world environments (Hogarth, 1980) but prove inappropriate when tackling the 'static' tasks provided in laboratory experiments, which often involve more reliable and diagnostic data. It is not surprising, therefore, that the consensus to emerge from experimental enquiry is one of poor quality decision-making resulting from a range of systematic biases. Subjects who are presented with misleading and artificial tasks in experiments, involving perhaps non-representative questions for which the cues they normally employ are invalid, are not surprisingly likely to make errors. Those who point to the bias caused by the artificiality of the tasks presented in the laboratory highlight studies which demonstrate that small changes in experimental design can produce results that suggest good or poor judgement (Beach et al., 1987; Ayton and Wright, 1994). Eiser and van der Pligt (1998) summarise concern with the nature of the decision task experienced in laboratory experiments as follows:

experimental demonstrations of human 'irrationality' may depend to a large extent on the use of hypothetical problems that violate assumptions that people might reasonably make about apparently similar situations in everyday life.

In laboratory investigations, subjects' risk taking and quality of decision-making are often assessed using tasks that have a pre-defined correct solution. 
For example, in experiments designed to assess the accuracy of individuals' subjective probability estimates (calibration), typical general knowledge tests are often employed, where subjects may, for example, be asked to decide whether the Amazon or the Nile is the longer river and to assess the probability of their answer being correct. These effectively become tests of the individuals' assessments of the accuracy of their memories. However, in real world settings, particularly in a betting context, individuals are often required to make judgements about future events. Individuals appear to employ different cognitive processes when making judgements on memory accuracy compared with predictions about the future. These latter cognitive processes appear less subject to bias (e.g. Wright, 1982; Wright and Ayton, 1988). Consequently, laboratory experiments, which typically rely on tasks with pre-defined correct solutions, may significantly underestimate the ability of individuals to make judgements concerning the future in real world betting contexts. One of the advantages of laboratory investigations is the ability to isolate the effects of certain variables on the betting decision. However, to achieve this aim these studies are often confined to exploring the effects of a limited set of variables. The danger is that this oversimplifies the full richness and complexity of
the decision task faced in real betting environments. As a result the correlations observed may be spurious and miss the impact and interaction of unexpected variables on the bettor's decision. A related issue concerns measurability. Laboratory studies of betting behaviour often lack clear objective measures of the factors influencing betting decisions or of their consequences. Consequently, these experiments often rely on subjective measures of betting performance and of factors influencing the bettor's decisions, such as the degree of perceived risk. However, in real world betting environments, such as horse-racing tracks, the horse selected by the bettor can be compared with the winner of the race. This acts as an unequivocal, objective measure of performance. Similarly, the odds, the stake or type of bet selected (e.g. a 'single', which requires only one horse to win to collect a return, vs an 'accumulator', which requires several horses to win to be successful) act as objective measures of risk associated with the betting decision. Finally, in relation to decision task, caution is urged in aggregating or comparing the results of various laboratory investigations, since these often employ a heterogeneous set of research designs. In contrast, the decision task in real world betting environments remains reasonably consistent, and more confidence can be placed in aggregating results from this rather more homogeneous group of studies.

The laboratory subject

Concerns regarding material differences between decision tasks framed for the purposes of laboratory investigation and those that occur in the natural setting are mirrored by a concern that subjects taking decisions in laboratory experiments and those operating in the natural environment may be fundamentally dissimilar. 
The emphasis in this section is on the potential problems resulting from the fact that laboratory and naturalistic decision-making, respectively, are generally undertaken by individuals with different levels of expertise in the form of the decision task under scrutiny. Subjects employed in laboratory experiments are often asked to make judgements about issues outside their experience. There is an established tradition, in academic research into betting, of employing college students as subjects in laboratory studies, most of whom have little experience of betting in real world contexts, let alone expertise. Lack of expertise is likely to affect both the decision process employed and the quality of the resulting decisions. Interestingly, though perhaps unsurprisingly, whilst the majority of laboratory studies suggest that individuals' subjective probability judgements are not well calibrated (i.e. are not well correlated with corresponding objective probabilities), studies conducted in naturalistic environments have found that individuals with expertise in a particular domain are often well calibrated when making judgements associated with that domain (see, e.g. Hoerl and Falein, 1974; Kabus, 1976; Murphy and Brown, 1985; Smith and Kida, 1991). These observed distinctions between novice and expert performance would appear to compromise the researcher's ability to generalise findings from
laboratory settings using novice subjects to real world contexts where decision makers are familiar with the decision task and environment. In particular, this suggests that the employment of naïve subjects in laboratory investigations of betting behaviour may produce misleading results. Further, the lack of familiarity of subjects with the decision task increases their susceptibility to misunderstanding or bias in the way that instructions are interpreted. Such problems of interpretation would be far less likely to apply to decision makers in their familiar natural setting. Experienced individuals might be expected to internalise, through observation, the validity of certain cues to make judgements associated with their particular task environment. This would, in general, allow them to make accurate judgements within their familiar decision domain, especially where decision tasks are repetitive in nature, where decision-relevant stimuli remain fairly constant or where probability assessments are involved (see, in this context, Phillips, 1987). A more general discussion of the interaction between expertise and form of task is offered in Shanteau (1992). Further, a number of studies demonstrate large differences between the decision strategies of experts and novices in terms of the way they think, the information employed, speed and accuracy of problem solving and the nature of decision models employed (e.g. Larkin et al., 1980). Crandall and Calderwood (1989) identify the particular ability of experts to interpret ambiguous cues, select and code information and understand causal models. Whilst the link between expertise and decision performance appears quite robust across a range of settings, it should be acknowledged that experienced decision makers in certain contexts still appear vulnerable to the use of heuristics, which result in decision biases (e.g. Northcraft and Neale, 1987; Smith and Kida, 1991). 
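Calibration of the kind these field studies measure can be computed directly from naturalistic betting data: bets are grouped by their odds-implied probability, and the mean implied probability in each group is compared with the observed win rate. A minimal sketch over hypothetical records (the data and function name are illustrative):

```python
from collections import defaultdict

def calibration_table(bets, n_bins=10):
    """Map each implied-probability bin to (mean implied probability, win rate).

    `bets` is an iterable of (implied_probability, won) pairs.
    """
    bins = defaultdict(list)
    for p, won in bets:
        idx = min(int(p * n_bins), n_bins - 1)  # clamp p == 1.0 into the top bin
        bins[idx].append((p, won))
    table = {}
    for idx, rows in bins.items():
        mean_p = sum(p for p, _ in rows) / len(rows)
        win_rate = sum(1 for _, won in rows if won) / len(rows)
        table[idx] = (mean_p, win_rate)
    return table

# Hypothetical sample: heavily backed horses (implied p = 0.8) that win only
# 60% of the time -- the kind of over-estimation a calibration study would flag.
bets = [(0.8, True), (0.8, True), (0.8, True), (0.8, False), (0.8, False)]
print(calibration_table(bets))  # bin 8: mean implied prob ~0.8 vs win rate 0.6
```

A well-calibrated population of bettors would show mean implied probability and win rate close together in every bin; systematic gaps are the field analogue of the laboratory calibration failures discussed above.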
It is clear that, in general, expertise plays an important role in inﬂuencing decision quality. Laboratory experiments that employ naïve subjects, are unlikely therefore, to adequately assess the nature or quality of betting decisions made in real world contexts by ‘experts’ operating and receiving feedback in their familiar task domain. The section on ‘Calibration in naturalistic betting markets’ demonstrates, in the context of betting, and speciﬁcally in relation to calibration of subjective and objective probabilities, how real bettors in their natural environment generate levels of performance that are wholly inconsistent with laboratory predictions. Beyond the issue of expertise, a further concern with laboratory subjects relates to their necessary awareness that their behaviour in laboratory experiments is the subject of close scrutiny. Such scrutiny may, itself, materially affect that behaviour. Aspects of this problem, which is essentially a function of the interaction between the individual and the environment, are discussed more fully below. Finally, it is important to acknowledge that the individual subject is, in the context of participation in an experiment, likely to have a tightly-focused objective relating to the assigned decision task. The laboratory, as an abstraction from the subject’s normal experience, is essentially a capsule that isolates the subject from the competing objectives and concerns, which are present in the case of the naturalistic decision-maker. The multi-objective naturalistic decision-maker’s behaviour

Investigating betting behaviour


is likely, therefore, to be materially affected by the need to address conflict between objectives, resulting in trade-offs and compromises.

The laboratory environment

The advantages of the laboratory setting in decision-making research are well established in terms of its ability to enable multiple experimentation under highly controlled conditions. The essence of the naturalistic setting is, by contrast, its chaotic and uncontrolled character, which renders each decision episode unique in terms of the precise environmental conditions obtaining at the time the decision is made. This raises the immediate question: does the imposition of tight ecological control inevitably compromise the ability of laboratory simulations to offer useful insights into decision behaviour in the natural setting? If absence of control is a defining feature of the natural environment, is it disingenuous to contend that behaviour in closely prescribed settings reflects that which we observe in the field, in terms, inter alia, of motivation, decision characteristics or decision outcomes? Such fundamental reservations regarding laboratory investigation would curtail further discussion of its potential. It is, arguably, more fruitful to reflect in greater depth on the nature of the limitations that laboratory simulation of decision-making embodies. This section, therefore, considers three principal areas where the laboratory setting involves potentially damaging abstraction from naturally occurring conditions. This involves discussion of, respectively:

1 the laboratory as a consciously-scrutinised environment;
2 the oversimplification inherent in the laboratory setting; and
3 incentives and sanctions in laboratory and natural settings.

The laboratory as a consciously-scrutinised environment

The fact that the laboratory constitutes a consciously-scrutinised setting for investigating behaviour raises two forms of potentially distortive influence that might counsel caution in interpreting laboratory-generated results. First, there is the danger that subject awareness of scrutiny may materially influence behaviour. Subjects may, for example, be keen to project a particular image of themselves as decision-makers in general and gamblers in particular. This may, under scrutiny, lead to modifications to the structure of their decision processes, the manner in which they process information and/or the risk strategies and decisions that they adopt. Such ‘observation effects’ are a well-established concern. To a degree they may be mitigated by the particular design of the experiment: hence, for example, the real behavioural focus for scrutiny may be hidden if the experimenter contrives an alternative core decision problem. Subjects are, therefore, less sensitive to scrutiny of their behaviour in relation to the area that is genuinely under investigation. Of course, such diversionary tactics in experimental design may in themselves generate behaviour that is merely an artefact of the design, by encouraging subjects


A. Bruce and J. Johnson

to devote an inappropriate level of attention to the real problem under scrutiny. It is, of course, important to note that observation effects are not confined to the laboratory; naturally occurring behaviour may also be susceptible where scrutiny by investigators is evident. In many cases, however, the naturalistic research design can ensure that subjects are entirely unaware that their behaviour is under scrutiny and hence any distortive effects of observation can be ruled out. A related issue concerns the danger that the investigator may compromise the experimental ‘purity’ of the exercise via the way in which the laboratory setting is configured. This is, essentially, an extension of the argument that the task design may be over-influenced by consideration of the behavioural phenomena under investigation. There is a fine line, both at the task and environmental level, between an experiment that allows behavioural traits to be manifested against a neutral background and one that, consciously or otherwise, channels behaviour along particular lines anticipated by the investigator.

The oversimplification inherent in the laboratory setting

Apart from any biases that may be attributed to investigator or subject consciousness of their roles, the laboratory setting is constrained, from an operational point of view, in terms of the complexity of the designed environment. There are various layers to this argument. First, compared with the richness of many natural settings, the laboratory is necessarily limited to offering an environment that features only the basic or salient characteristics of the natural world: a significant degree of abstraction is inevitable. There is then a danger that, in attempting to isolate particular variables for scrutiny, investigators may inadvertently miss or modify critical variables that are influential in the natural setting.
Second, in a dynamic sense, laboratory characterisation of an evolving and uncontrolled real environment is, as noted above, a necessarily controlled process. To the extent that a degree of randomness may be designed into any investigation, the scope for random variation is limited by the parameters imposed by the experimental setting. Further, it should be acknowledged that the simple aggregation of individually identified relationships between pairs of variables in the laboratory in building an overall understanding of the decision process fails to capture the full richness of interdependence between variables. Hence, there may be a tendency to miss influences that are significant in identifying the type of reasoning required in complex natural environments (Woods, 1998). The set of concerns discussed in this section highlights a general problem with the laboratory that, paradoxically, is frequently cited as a strength of this type of experimental approach; that is, the ability to isolate, manipulate and scrutinise the impact of a particular variable, whilst maintaining control over the wider environment. This neglects the fact that real decision environments are frequently characterised by simultaneous and unpredictable variation across a range of factors. Decisions are, therefore, invariably taken against a turbulent and chaotic background. Artificial isolation of individual variables denies the opportunity to observe interactive effects and runs the risk (Cohen, 1993) of, for example, amplifying the


significance of biases in the laboratory setting, which might not emerge against the ‘noisier’ background of the natural environment. As Eiser and van der Pligt (1988) note:

It is therefore precisely because many studies fail to simulate the natural context of judgement and action that ‘errors’ and ‘biases’ can be experimentally demonstrated with such relative ease.

The above points illustrate the general difficulties of capturing both static and dynamic naturalistic complexity in a synthetic environment. A further area of concern relates to the ability of the laboratory to replicate the particular protocols, customs and institutional peculiarities of real settings that are increasingly regarded as influential in explaining behaviour in real contexts. The work of the experimental economics school has been particularly influential in drawing attention to the importance of environmental idiosyncrasies in shaping behaviour in market contexts. Waller et al. (1999), for example, identify three forms of influence:

1 ‘institutional effects’, the rules and conventions within which market activity takes place;
2 the nature of incentives; and
3 the existence of learning opportunities associated with information availability.

Clearly, from the standpoint of a laboratory investigator, these types of factors pose a particular challenge. An acknowledgement that specific details of setting or subtle nuances of process may materially affect outcomes imposes an additional burden on a medium of enquiry that, as noted above, must necessarily be limited to a relatively simple configuration. A rather more fundamental concern, though, relates to the fact that the potentially influential aspects of institutional detail, convention, custom, process and protocol are each factors that emerge or evolve over time in the natural setting: they are, in other words, purely naturalistic phenomena in origin. As such, an attempt to transpose such factors into a laboratory environment may be regarded as wholly inappropriate. There is no obvious reason why factors that originate from, that serve to define and that are influential in, a particular naturalistic setting should carry a similar influence in a laboratory environment. Hence, the potential for laboratory-based work to further our understanding in this area might be regarded as highly limited. By contrast, in the particular context of UK horse-race betting, the coexistence of two distinct forms of betting market permits the naturalistic investigation of settings with significantly differing institutional frameworks and differences in process and custom. The section on ‘Calibration in naturalistic betting markets’ shows how naturalistic investigation is able to demonstrate the significance of these factors in determining market outcomes.


Incentives and sanctions in laboratory settings

The material distinctions that exist between the nature of incentives and sanctions which characterise the laboratory vis-à-vis the natural environment constitute a further basis for circumspection in relation to the results of laboratory-based investigation. There are various aspects of this distinction that merit attention. First, there is the issue of participation, whereby subjects in laboratory simulations require positive incentives to take part in experiments. Incentives may take the form of, inter alia, payments, free entry into a prize draw or simply, in the case of cohorts of college students, for example, peer pressure. The important issue here is that subjects observed in the natural setting participate voluntarily in the activity under scrutiny. There would appear to be strong prima facie grounds for suggesting that those who participate voluntarily may be expected to behave quite differently from those whose participation requires incentives and who would not, ordinarily, have engaged in the activity under investigation. Most pertinently, perhaps, naturalistic subjects face a different incentives/sanctions structure in that, in the context of betting, they are investing their own resources with the associated prospect of material personal gain or loss. Laboratory subjects, by contrast, have no expectations of significant gain or loss associated with their participation. Any material ‘rewards’ provided tend to be trivial. Apart from any financial rewards or penalties, real bettors are likely to be subject to higher levels of arousal than those in laboratory simulations, with potentially material effects on behaviour. Brown (1988) observes that ‘some form of arousal or excitement is a major, and possibly the major, reinforcer of gambling behaviour for regular gamblers’.
Clearly, even where there is an acknowledgement of the potential influence of arousal or stress in the natural setting, it is ethically problematic to submit laboratory subjects to influences that may be injurious to their health. Yates (1992), in questioning the ability of laboratory investigation to capture this feature of the natural environment, argues:

there is reason to suspect that the actual risk-taking behaviour observed in comfortable, low-stakes, laboratory settings differs in kind, not just in degree from that which occurs in the often stressful, high-stakes, real-world context.

The discussion in the following section of the relative rates of calibration between subjective and objective probabilities in the laboratory and the naturalistic decision setting may be indicative of the relative potency of incentives and sanctions in the different contexts. This section has identified three areas of concern with aspects of the laboratory environment that might be expected to limit the usefulness of results derived in this type of setting. Together with the limitations relating to laboratory subjects and the specification of tasks in the laboratory, they invite the view that there are strong reasons for evaluating with caution the signals that emerge from laboratory enquiry vis-à-vis empirical, naturalistic research in relation to decision-making in general and betting in particular.


Calibration in naturalistic betting markets

The preceding section discussed a number of shortcomings of laboratory-based experiments in the understanding of decision processes and decision outcomes. This has important implications for the exploration of the behaviour of horse-race bettors in their naturalistic betting environments, either at the racetrack or in betting offices. It has been argued that differences in the nature of the real world betting task, the complexity and dynamic nature of the real world betting environment and the degree of expertise of seasoned bettors are likely to result in clear distinctions between results obtained from laboratory and naturalistic studies of betting behaviour. In order to illustrate such distinctions, this section will contrast the degree of calibration observed in subjective probability assessments in laboratory experiments with those observed in bets placed at the racetrack. Calibration describes the degree to which subjective and objective probability assessments are correlated. Its importance derives from its value as a key measure of decision quality and is reflected in its prominence as a research theme within the decision-making literature. In the context of betting, the issue of calibration is of particular importance since the success of betting decisions hinges on the quality of subjective probability assessments.

Calibration in laboratory studies

The clear conclusion that emerges from laboratory-based investigations is that individuals’ subjective probability estimates are generally not well calibrated. Three main sources of underestimation have been observed. These are underestimation, respectively, of the subjective probability of events considered undesirable by the subject (e.g. Zackay, 1983), of events that are easy to discriminate (e.g. Suantek et al., 1996) and of tasks that have high base-rate probabilities (Ferrell, 1994).
Analogously, overestimation occurs for events considered desirable, for events that are hard to discriminate or for events to which low base-rate probabilities apply. These deviations from perfect calibration have been attributed to the limited cognitive capacity of decision makers who rely on heuristics or rules of thumb to simplify the decision-relevant data associated with complex decision environments. These heuristics have been demonstrated to result in a range of systematic biases (e.g. Kahneman and Tversky, 1972; Cohen, 1993), leading, it is argued, to poor calibration.

Calibration of pari-mutuel bettors

To explore the extent to which these results were mirrored in real world contexts, the calibration of racetrack bettors was investigated (Johnson and Bruce, 2001). In particular, the staking behaviour of UK pari-mutuel horse-race bettors was examined for each of 19,396 horses in 2,109 races at forty-nine racetracks during 1996. It has been argued that the proportion of money placed on a given horse in a pari-mutuel market reflects the bettors’ combined subjective view of its probability of


Table 17.1 Comparison of bettors’ aggregate subjective probability judgements and horses’ observed (objective) probability of success

Proportion of money staked on    Mean subjective    Mean objective         n
an individual horse in a race      probability        probability
0.0–0.1                               0.05               0.04          11,795
0.1–0.2                               0.14               0.13           4,590
0.2–0.3                               0.24               0.26           1,850
0.3–0.4                               0.34               0.32             679
0.4–0.5                               0.44               0.47             309
0.5–0.6                               0.54               0.68             120
0.6–0.7                               0.64               0.82              38
0.7–0.8                               0.73               0.69              13
0.8–0.9                               0.83               1.00               1
0.9–1.0                               0.97               1.00               1
Total                                                                 19,396
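A calibration table of this kind can be assembled from raw race records by binning horses on the share of the pool staked on them; the Python sketch below is a minimal illustration of that procedure (the records used are invented for illustration, not the study’s data):

```python
# Sketch of the binning procedure behind a calibration table such as
# Table 17.1. Each record is (stake_share, won): the proportion of the
# race's pool staked on a horse, and 1 if it won, else 0.

def calibration_table(records, n_bins=10):
    """Bin horses by stake share; compare the mean subjective probability
    (mean stake share) with the objective win frequency in each bin."""
    bins = [[] for _ in range(n_bins)]
    for share, won in records:
        idx = min(int(share * n_bins), n_bins - 1)   # e.g. 0.0-0.1 -> bin 0
        bins[idx].append((share, won))
    rows = []
    for i, group in enumerate(bins):
        if not group:
            continue
        subjective = sum(s for s, _ in group) / len(group)  # mean stake share
        objective = sum(w for _, w in group) / len(group)   # winners / runners
        rows.append((i / n_bins, (i + 1) / n_bins, subjective, objective, len(group)))
    return rows

# Illustrative use with three horses:
rows = calibration_table([(0.05, 0), (0.35, 0), (0.60, 1)])
for lo, hi, subj, obj, n in rows:
    print(f"{lo:.1f}-{hi:.1f}  mean subjective {subj:.2f}  objective {obj:.2f}  n={n}")
```

Perfect calibration would show the subjective and objective columns agreeing within each bin, as Table 17.1 approximately does.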

success. If too little money were placed on a horse then the odds offered by the pari-mutuel operator would appear attractive and knowledgeable bettors would continue to bet such that the odds on a given horse reflect the market’s best estimate of its true probability of winning (see Figlewski, 1979). To explore the degree of calibration, horses were grouped into categories based on the proportion of money bet on them in a race. This offered an indication of the bettors’ subjective probability assessment concerning the horses’ chances of success. The objective winning probability of horses in a particular category was calculated by dividing the total number of winners in that category by the total number of runners in that category over the period. Perfect calibration in a category would exist if the objective probability of a horse in that category winning matched its subjective probability of winning. The results presented in Table 17.1 (see Johnson and Bruce, 2001) clearly indicate a close correspondence between objective and subjective probabilities and suggest that the staking patterns of bettors closely reflect horses’ true probabilities of success. To test this observed effect formally, a conditional logit model was employed (for a full derivation see Johnson and Bruce, 2001) to model the objective probability of horse i winning race j based on the bettors’ subjective probability assessments. In particular, the following equation was developed that related the objective probability of horse i in race j, p_ij^o, with n_j runners, to the subjective probability of that horse, p_ij^s (as per McFadden, 1974; Bacon-Shone et al., 1992):

p_{ij}^{o} = \frac{(p_{ij}^{s})^{\beta}}{\sum_{k=1}^{n_j} (p_{kj}^{s})^{\beta}} \qquad \text{for } i = 1, 2, \ldots, n_j \qquad (1)
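Equation (1) and the maximum-likelihood estimation of β can be sketched in a few lines of Python; the grid search and the toy race data below are illustrative assumptions, not the authors’ actual estimation procedure:

```python
import math

def model_probs(subjective, beta):
    """Equation (1): implied objective win probabilities, obtained by raising
    each horse's subjective probability (stake share) to the power beta and
    renormalising within the race."""
    powered = [p ** beta for p in subjective]
    total = sum(powered)
    return [x / total for x in powered]

def log_likelihood(races, beta):
    """Joint log-probability of the observed winners; each race is a pair
    (stake_shares, winner_index)."""
    return sum(math.log(model_probs(shares, beta)[winner])
               for shares, winner in races)

def fit_beta(races):
    """Crude grid-search maximum-likelihood estimate of beta over 0.50-2.00."""
    grid = [0.5 + 0.01 * k for k in range(151)]
    return max(grid, key=lambda b: log_likelihood(races, b))

# Toy illustration: three two-horse races, each given as (stake shares, winner).
races = [([0.6, 0.4], 0), ([0.7, 0.3], 0), ([0.2, 0.8], 1)]
print(fit_beta(races))
```

With β = 1 the model simply returns the stake shares themselves, which is the “almost perfect calibration” case reported in the text.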


The parameter β is determined by maximising the joint probability of observing the results of the 2,109 races in the sample. In fact, the estimated value of β was 1.0802, which was not significantly different from 1. This implies, from equation (1), that the objective probability of horse i winning race j is not significantly different from the proportion of money staked on horse i in race j; that is, we observe almost perfect calibration amongst racetrack pari-mutuel bettors. This result is in sharp contrast to the generally poor calibration that is observed in laboratory studies. A number of reasons might be suggested for this. They are as follows:

1 The majority of laboratory studies employ naïve subjects with little task-specific knowledge, and this may hinder achievement of good calibration. However, the majority of racetrack bettors have some experience of betting and this may aid their calibration, since previous research suggests that experienced bettors learn to take account of a wide range of race-specific factors (e.g. Neal, 1998). It appears that experience and domain-specific knowledge aid calibration. For example, good calibration has been observed in other naturalistic studies amongst experienced auditors (e.g. Smith and Kida, 1991), experienced weather forecasters (e.g. Murphy and Brown, 1985), in the prediction of R&D success by experienced managers and in the prediction of finishing position by experienced horse-race bettors (Hoerl and Fallin, 1974). However, experience alone does not guarantee good calibration, since some naturalistic studies have identified poor calibration; for example, amongst experienced estate agents (Northcraft and Neale, 1987) and physicians (e.g. Bennett, 1980). It appears that those experienced individuals not used to assessing uncertainty in terms of probabilities (e.g. estate agents, physicians) are less likely to be well calibrated (Ferrell, 1994), whereas those who routinely employ probability concepts in their normal domain (e.g. weather forecasters) are more likely to be well calibrated. The latter description most aptly describes experienced horse-race bettors since an essential ingredient of betting is the assessment of value in a horse’s odds (which reflect its subjective probability of success).

2 Horse-race bettors are clearly spurred by the prospect of financial gains and by non-pecuniary benefits (e.g. increased esteem amongst their peer group) associated with a successful bet. This is unmistakably observed by witnessing the frenzy of excitement at the racetrack as the race reaches its climax! The incentives available to racetrack bettors may help to explain the accuracy of their subjective judgements since research suggests that calibration is improved when motivation exists for accurate judgements (e.g. Beach et al., 1987; Ashton, 1992). If incentives are given in laboratory experiments they are often small, offering little by way of real welfare implications for subjects – this is unlikely to aid good calibration.

3 The naturalistic environment of the racetrack may aid calibration since bettors become aware of the types of data that should be employed, those which are irrelevant and the cues that are vital to their success. This is particularly true in turbulent, fast-changing and complex environments such as betting markets. However, individuals who have developed skills in such naturalistic environments may not fare well in calibration experiments involving more static tasks with more reliable data; the type often conducted in the laboratory (e.g. McClelland and Bolger, 1994; Omodei and Waring, 1995). In addition, research suggests that calibration is often better when individuals make predictions about the future (as is the case for horse-race bettors) rather than when they are required to assess the accuracy of their memory, which is typically required in laboratory calibration studies (e.g. Wright and Ayton, 1988).

4 Bettors benefit from regular, unequivocal and timely feedback on their betting performance and this may aid them in appropriately adjusting their subsequent actions. Other groups that also receive regular, timely feedback associated with their judgements (e.g. weather forecasters) are also shown as being well calibrated (Murphy and Winkler, 1977). Poor calibration amongst physicians may be explained by the often long time-lags involved in receiving feedback on their judgements and the broad cross-section of conditions for which they are required to make judgements. Pari-mutuel bettors, on the other hand, repeatedly engage in a uniform activity, spread over a uniform and short time scale. Successive pari-mutuel betting markets are reasonably consistent in terms of information presentation and time frame. Bettors become familiar with the processes and rhythms of the market and receive regular immediate feedback. It is likely that these conditions aid learning and improve calibration. The lack of regular feedback over the long term in laboratory studies may help to explain the poor calibration observed there.

Calibration of bettors in bookmaker markets

As noted above, in the UK, two forms of betting markets co-exist at racetracks – the pari-mutuel and the bookmaker markets. The main difference between these markets is that bettors can ‘take a price’ in bookmaker markets, whereby the odds offered by a bookmaker at the time the bet is struck will be the odds used to calculate returns if the horse wins. Consequently, returns to privileged information are insurable by ‘taking a price’. However, in pari-mutuel markets other bettors, who subsequently bet on the same horse, can erode these returns. Given the close physical proximity of the parallel bookmaker and pari-mutuel markets at UK racetracks, it is interesting to compare the calibration of bettors’ judgements in these markets and, in particular, to explore the impact of differences in the institutional characteristics of these markets on bettors’ subjective judgements. Two models were developed using logistic regression (see Bruce and Johnson, 2000). One modelled the relationship between the starting price (in bookmaker markets) of a horse and its objective probability of success (based on results of 2,109 races in 1996). The other modelled the relationship between the horse’s pari-mutuel odds and its objective probability (as in the earlier study discussed above). Consequently, functions were developed to determine


[Figure 17.1 Predicted win probabilities: ln(win probability) plotted against ln(odds) for the bookmaker and Tote (pari-mutuel) markets, with a reference line representing win probability = 1/(1 + odds).]

the objective probability of winning for horses with particular (a) bookmaker odds and (b) pari-mutuel odds. These functions are shown in Figure 17.1. The reference line in Figure 17.1 represents the situation where the odds perfectly reflect the horse’s objective probability of success. For example, if horses with odds of 4/1 won 20 per cent of races in which they ran (i.e. one in five), we could conclude that the bettors’ subjective judgements associated with such horses were perfectly calibrated. Consequently, the reference line in Figure 17.1 represents the situation where, for a horse at odds of a/b, the objective probability is given by 1/(1 + a/b). It is clear from Figure 17.1, as indicated above, that the judgements of pari-mutuel bettors are almost perfectly calibrated. However, in bookmaker markets there appears to be a strong tendency to overestimate the chance of outsiders and to marginally underestimate the chance of favourites – the so-called ‘favourite–longshot’ bias. For example, horses with odds of 50/1 actually win only 1 in 127 races, whereas horses with odds of 1/2 win seven of ten races. These results are surprising given the close physical proximity of the pari-mutuel and bookmaker markets and the availability of computer screens at various locations, displaying the latest odds in the two markets. This makes it relatively easy for bettors to compare odds in the two markets and to choose the market in which to bet. It is interesting to note that in the US, where no parallel bookmaker market exists, the subjective judgements of pari-mutuel bettors are not well calibrated, displaying the familiar favourite–longshot bias (Snyder, 1979). The presence of a parallel bookmaker market may help to improve the calibration


observed in pari-mutuel markets in the UK. In particular, the odds available in bookmaker markets and their evolution may provide a yardstick to bettors, against which to compare pari-mutuel odds. Those with privileged information are likely to bet with bookmakers, where their returns are insurable (by ‘taking a price’). Consequently, the observation of price movements in these bookmaker markets may provide insights to pari-mutuel bettors concerning the existence of privileged information; their subsequent bets are likely to be more informed and hence better calibrated. It might be argued that bettors in bookmaker markets also have the opportunity to observe changes in bookmaker odds; however, if privileged insiders have already forced the odds to a position that reflects the horse’s true chance of winning the race, subsequent ‘follower’ behaviour on the part of less-informed bettors will reduce the odds still further, leading to a starting price that is poorly calibrated. Pari-mutuel bettors also benefit from two further advantages over bettors in bookmaker markets:

1 Pari-mutuel markets operate in a more uniform and mechanical manner than bookmaker markets. The amounts staked and the prevailing odds in the pari-mutuel market are displayed and regularly updated on numerous computer screens at racetracks. Odds are determined solely by the relative amounts staked on each horse. In bookmaker markets, by contrast, the odds are determined partially by the relative amounts wagered on different horses but also by the opinion of the bookmakers themselves. Consequently, whilst market moves are readily identified in pari-mutuel markets, odds changes in bookmaker markets must be interpreted; they may, for example, represent a change in bookmakers’ opinions or may represent a ‘false move’ created by bookmakers to stimulate the demand for certain horses. The uniformity of each pari-mutuel market enables bettors to become attuned to the market rhythm; it allows them to interpret market information that should enable them to focus more on the betting decision problem per se, resulting in better calibration than exists in bookmaker markets.

2 Bookmaker markets at racetracks are competitive, with a number of bookmakers competing for bettors’ business. All bookmakers set their own odds – which reflect their opinion of the horse’s chance of success, the relative weight of money on each horse and their desire to attract high betting turnover (ideally spread across all horses in the race). Consequently, bettors are faced by an array of odds in bookmaker markets and may spend considerable time searching for the best value available. This activity can distract them from the central decision task – with negative implications for calibration. Pari-mutuel bettors’ calibration is unlikely to be adversely affected in this manner since they are faced with a single odds value for each horse and they can, therefore, focus directly on the task of selecting the horse on which to bet, without the distraction of searching for ‘value’.
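The favourite–longshot comparison above turns on converting fractional odds a/b into an implied win probability 1/(1 + a/b). A brief Python sketch, using the win frequencies quoted earlier in the chapter:

```python
def implied_prob(a, b=1):
    """Implied win probability for fractional odds a/b: 1 / (1 + a/b)."""
    return 1.0 / (1.0 + a / b)

# From the text: a 4/1 shot on the reference line should win 20 per cent
# of the time; a 50/1 outsider implies roughly 1.96 per cent but is
# reported to win only about 1 in 127 races; a 1/2 favourite implies
# roughly 66.7 per cent but wins seven of ten races.
for a, b, observed in [(4, 1, 0.20), (50, 1, 1 / 127), (1, 2, 0.70)]:
    print(f"{a}/{b}: implied {implied_prob(a, b):.4f}, observed {observed:.4f}")
```

The gap between implied and observed values at long odds is the favourite–longshot bias in numerical form.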

A further explanation for the differences in calibration observed in the two parallel markets is offered by Shin (1992, 1993) and Vaughan Williams and Paton


(1997). They suggest that bookmakers seek to protect themselves from informed bettors. It is argued that these bettors may have access to privileged information, which could suggest that a longshot has a significantly better chance of success than the odds indicate. Bets placed by privileged insiders on such longshots may have a major impact on bookmakers’ profit. Consequently, Shin (1992, 1993) and Vaughan Williams and Paton (1997) argue (and provide evidence to support the view) that bookmakers artificially depress the odds on longshots to reduce the potential impact of privileged insiders. This would help to account for the poor correlation between the subjective probabilities inherent in bookmakers’ odds for longshots and their objective probability of success. Whilst the extent to which bookmakers deliberately shorten odds is not clear, the existence of this practice suggests that the calibration of bettors in bookmaker markets is almost certainly significantly better than that indicated in Figure 17.1.

In summary, investigation of calibration in real world betting markets suggests that bettors’ (certainly pari-mutuel bettors’) subjective judgements are significantly better correlated with objective probabilities than those of subjects in laboratory experiments. In seeking to explore this discrepancy we identify the experience of racetrack bettors, their motivation for success, their familiarity with the environment and its attendant information cues and the regular, unequivocal feedback they receive as key features, which are often absent from laboratory experiments. Furthermore, in explaining the different degrees of calibration observed between bettors in the pari-mutuel and bookmaker markets, we highlight the importance of institutional features. Without naturalistic enquiry it would be difficult to predict in advance the influence that a market’s structural characteristics, mode of information presentation etc. might have on calibration.

Conclusion

This chapter has advocated the increased exploration of betting behaviour in naturalistic environments. A range of concerns has been identified with laboratory experiments, including the use of naïve subjects with little betting experience, the lack of appropriate incentives, the artificiality of the tasks presented to subjects and the sterility of the environments in which the betting tasks are performed. These concerns have been highlighted by exploring differences between the results of calibration studies conducted in the laboratory and those conducted in naturalistic betting environments. In particular, structural features of betting markets are identified that may be difficult to reproduce in the laboratory but that appear to significantly influence behaviour. Clearly, in spite of the limitations discussed, laboratory investigation retains a number of features that allow it to contribute to an enriched understanding of betting behaviour. The contention of this chapter is not, therefore, that the laboratory should be abandoned; rather that the interests of the betting research agenda are best served by shifting the balance between laboratory-based and naturalistic research towards the latter, whilst acknowledging the complementary nature of the different approaches.


A. Bruce and J. Johnson

Notes

1 For many horseraces, bettors have the option, when operating off-course in bookmaker markets, of either nominating that their bet be settled at Starting Price (SP), Board Price or Early Price. SPs represent the odds available at the racetrack at the culmination of the betting market (the start of the race) and are the subject of independent adjudication. Board Prices are prices at the racetrack that change throughout the immediate pre-race period, depending on the relative weight of support for different selections. These evolving odds patterns are transmitted to betting offices during this pre-race or show period. Early Prices are odds relating to future races that are offered by off-course bookmakers. They are generally available until a short period prior to the show period. Where bettors elect to have their bets settled according to Board or Early Prices, they are said to 'take' a price.

2 Betting tax, applicable to all off-course bookmaker market bets in the UK until its abolition in October 2001, was payable at the same rate (i.e. 9 per cent) either at the time of bet placement on the stake alone, or (in the event of a successful bet) on total returns.
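The two settlement options described in note 2 can be compared with a short calculation. The following is an illustrative sketch only; the helper functions, stake and odds are invented for the example, not taken from the chapter:

```python
def net_return_tax_on_stake(stake: float, odds: float, win: bool,
                            tax: float = 0.09) -> float:
    """Bettor pays 9 per cent on the stake at the time of bet placement;
    total returns (winnings plus stake) are then untaxed."""
    returns = stake * (odds + 1.0) if win else 0.0
    return returns - stake * (1.0 + tax)

def net_return_tax_on_returns(stake: float, odds: float, win: bool,
                              tax: float = 0.09) -> float:
    """No up-front tax; 9 per cent is deducted from the total returns
    of a successful bet."""
    returns = stake * (odds + 1.0) * (1.0 - tax) if win else 0.0
    return returns - stake

# A 10-unit bet at odds of 5/1: paying tax on the stake costs 0.9 extra
# whether or not the bet wins, but leaves the larger net return if it does.
print(round(net_return_tax_on_stake(10, 5, win=True), 2))    # 49.1
print(round(net_return_tax_on_returns(10, 5, win=True), 2))  # 44.6
print(round(net_return_tax_on_stake(10, 5, win=False), 2))   # -10.9
print(round(net_return_tax_on_returns(10, 5, win=False), 2)) # -10.0
```

At these odds the bettor who expects to win prefers tax on the stake; a bettor backing longshots with a low win probability may prefer tax on returns, since losing bets then carry no tax at all.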

References

Anderson, J. R. (1990). The Adaptive Character of Thought. Hillsdale, NJ: Erlbaum.
Ashton, R. H. (1992). 'Effects of justification and a mechanical aid on judgment performance'. Organizational Behavior and Human Decision Processes, 52, 292–306.
Ayton, P. and Wright, G. (1994). 'Subjective probability: What should we believe?' In G. Wright and P. Ayton (eds), Subjective Probability (pp. 163–183). Chichester: Wiley.
Baars, B. J. (1990). 'Eliciting predictable speech errors in the laboratory'. In V. Fromkin (ed.), Errors in Linguistic Performance: Slips of the Tongue, Ear, Pen and Hand. New York: Academic Press.
Bacon-Shone, J. H., Lo, V. S. Y. and Busche, K. (1992). Modelling Winning Probability. Research report, Department of Statistics, University of Hong Kong, 10.
Beach, L. R., Christensen-Szalanski, J. and Barnes, V. (1987). 'Assessing human judgement: Has it been done, can it be done, should it be done?' In G. Wright and P. Ayton (eds), Judgmental Forecasting (pp. 49–62). Chichester: Wiley.
Bennett, M. J. (1980). Heuristics and the Weighting of Base Rate Information in Diagnostic Tasks by Nurses. Unpublished doctoral dissertation, Monash University, Australia.
Brown, R. I. F. (1988). 'Arousal, reversal theory and subjective experience in the explanation of normal and addictive gambling'. International Journal of Addictions, 21, 1001–1016.
Bruce, A. C. and Johnson, J. E. V. (1992). 'Toward an explanation of betting as a leisure pursuit'. Leisure Studies, 14, 201–218.
Bruce, A. C. and Johnson, J. E. V. (1996). 'Decision-making under risk: effect of complexity on performance'. Psychological Reports, 79, 67–76.
Bruce, A. C. and Johnson, J. E. V. (2000). 'Investigating the roots of the favourite–longshot bias: an analysis of decision making by supply- and demand-side agents'. Journal of Behavioral Decision Making, 13, 413–430.
Cohen, M. S. (1993). 'Three paradigms for viewing decision biases'. In G. A. Klein, J. Orasanu, R. Calderwood and C. E. Zsambok (eds), Decision Making in Action: Models and Methods (pp. 36–50). Norwood, NJ: Ablex.
Crandall, B. and Calderwood, R. (1989). Clinical Assessment Skills of Experienced Neonatal Intensive Care Nurses. Yellow Springs, OH: Klein Associates Inc.
Eiser, J. R. and van der Pligt, J. (1988). Attitudes and Decisions. London: Routledge.

Investigating betting behaviour


Ferrell, W. R. (1994). 'Discrete subjective probabilities and decision analysis: elicitation, calibration and combination'. In G. Wright and P. Ayton (eds), Subjective Probability (pp. 410–451). Chichester: Wiley.
Figlewski, S. (1979). 'Subjective information and market efficiency in a betting market'. Journal of Political Economy, 87, 75–88.
Hey, J. D. (1991). Experiments in Economics. Oxford: Blackwell.
Hey, J. D. (1992). 'Experiments in economics – and psychology'. In S. E. G. Lee, P. Webley and B. M. Young (eds), New Directions in Economic Psychology – Theory, Experiment and Application. Aldershot: Edward Elgar.
Hoerl, A. E. and Fallin, H. K. (1974). 'Reliability of subjective evaluation in a high incentive situation'. Journal of the Royal Statistical Society, 137, 227–230.
Hogarth, R. M. (1980). Beyond Static Biases: Functional and Dysfunctional Aspects of Judgemental Heuristics. Chicago: University of Chicago, Graduate School of Business, Center for Decision Research.
Johnson, J. E. V. and Bruce, A. C. (1997). 'A probit model for estimating the effect of complexity on risk-taking'. Psychological Reports, 80, 763–772.
Johnson, J. E. V. and Bruce, A. C. (1998). 'Risk strategy under task complexity: A multivariate analysis of behaviour in a naturalistic setting'. Journal of Behavioral Decision Making, 11, 1–17.
Johnson, J. E. V. and Bruce, A. C. (2001). 'Calibration of subjective probability judgements in a naturalistic setting'. Organizational Behavior and Human Decision Processes, 85, 265–290.
Kabus, I. (1976). 'You can bank on uncertainty'. Harvard Business Review, May–June, 95–105.
Kahneman, D. and Tversky, A. (1972). 'Subjective probability: A judgement of representativeness'. Cognitive Psychology, 3, 430–454.
Keren, G. and Wagenaar, W. A. (1985). 'On the psychology of playing blackjack: Normative and descriptive considerations with implications for decision theory'. Journal of Experimental Psychology: General, 114(2), 133–158.
Larkin, J., McDermott, J., Simon, D. P. and Simon, H. A. (1980). 'Expert and novice performance in solving physics problems'. Science, 208, 1335–1342.
McClelland, A. G. R. and Bolger, F. (1994). 'The calibration of subjective probabilities: theories and models 1980–94'. In G. Wright and P. Ayton (eds), Subjective Probability (pp. 453–482). Chichester: Wiley.
McFadden, D. (1974). 'Conditional logit analysis of qualitative choice behaviour'. In P. Zarembka (ed.), Frontiers in Econometrics: Economic Theory and Mathematical Economics (pp. 105–142). New York: Academic Press.
Murphy, A. H. and Brown, B. G. (1985). 'A comparative evaluation of objective and subjective weather forecasts in the United States'. In G. Wright (ed.), Behavioral Decision Making (pp. 178–193). New York: Plenum.
Murphy, A. H. and Winkler, R. L. (1977). 'Can weather forecasters formulate reliable forecasts of precipitation and temperature?' National Weather Digest, 2, 2–9.
Neal, M. (1998). '"You lucky punters!" A study of gambling in betting shops'. Sociology, 32, 581–600.
Northcraft, G. B. and Neale, M. A. (1987). 'Experts, amateurs and real estate: An anchoring-and-adjustment perspective on property pricing decisions'. Organizational Behavior and Human Decision Processes, 39, 84–97.



Omodei, M. M. and Wearing, A. J. (1995). 'Decision-making in complex dynamic settings – a theoretical model incorporating motivation, intention, affect and cognitive performance'. Sprache & Kognition, 14, 75–90.
Orasanu, J. and Connolly, T. (1993). 'The reinvention of decision making'. In G. A. Klein, J. Orasanu, R. Calderwood and C. E. Zsambok (eds), Decision Making in Action: Models and Methods. Norwood, NJ: Ablex.
Phillips, L. D. (1987). 'On the adequacy of judgmental forecasts'. In G. Wright and P. Ayton (eds), Judgmental Forecasting (pp. 11–30). Chichester: Wiley.
Shanteau, J. (1992). 'Competence in experts: the role of task characteristics'. Organizational Behavior and Human Decision Processes, 53, 252–266.
Shin, H. S. (1992). 'Prices of state-contingent claims with insider traders, and the favourite–longshot bias'. Economic Journal, 102, 426–435.
Shin, H. S. (1993). 'Measuring the incidence of insider trading in a market for state-contingent claims'. Economic Journal, 103, 1141–1153.
Smith, J. F. and Kida, T. (1991). 'Heuristics and biases: expertise and task realism in auditing'. Psychological Bulletin, 109, 472–485.
Smith, V. L. (1989). 'Theory, experiment and economics'. Journal of Economic Perspectives, 3, 151–169.
Snyder, W. (1978). 'Horse racing: The efficient markets model'. Journal of Finance, 33, 1109–1118.
Suantak, L., Bolger, F. and Ferrell, W. R. (1996). 'The hard–easy effect in subjective probability calibration'. Organizational Behavior and Human Decision Processes, 67, 201–221.
Vaughan Williams, L. and Paton, D. (1997). 'Why is there a favourite–longshot bias in British racetrack betting markets?' Economic Journal, 107, 150–158.
Waller, W. S., Shapiro, B. and Sevcik, G. (1999). 'Do cost-based pricing biases persist in laboratory markets?' Accounting, Organizations and Society, 24, 717–739.
Woods, D. D. (1998). 'Coping with complexity: The psychology of human behaviour in complex systems'. In L. P. Goodstein, H. B. Anderson and S. E. Olsen (eds), Tasks, Errors and Mental Models. London: Taylor & Francis.
Wright, G. (1982). 'Changes in the realism and distribution of probability assessments as a function of question type'. Acta Psychologica, 52, 165–174.
Wright, G. and Ayton, P. (1988). 'Immediate and short-term judgmental forecasting: personologism, situationism or interactionism?' Personality and Individual Differences, 9, 109–120.
Yates, J. F. (ed.) (1992). Risk Taking Behaviour. Chichester: John Wiley.
Zakay, D. (1983). 'The relationship between the probability assessor and the outcomes of an event as a determiner of subjective probability'. Acta Psychologica, 53, 271–280.

18 The demand for gambling
A review

David Paton, Donald S. Siegel and Leighton Vaughan Williams

Introduction

A rapid global expansion in gambling turnover has heightened interest in identifying the 'optimal' level of regulation in the gambling industry. Key policy objectives in this regard include determining the ideal structure of gambling taxes, maximising the net social benefit of gambling and devising optimal responses to environmental changes, such as the growth of Internet gambling. Successful formulation of policy in these areas depends on the availability of good empirical evidence on demand characteristics and substitution patterns for various types of gambling activity. Policymakers in many countries are especially interested in assessing substitution effects for national and state lotteries, since they have become increasingly dependent on this source of revenue. They are also interested in determining how regulatory changes affect the demand for alcohol, tobacco, entertainment services and other consumer products that generate substantial tax revenue. The question of whether these products are substitutes or complements for gambling has important revenue implications when gambling regulations are modified. The purpose of this chapter is to review the available evidence on this topic. In assessing this evidence, we place significant weight on academic literature that has been subject to peer review. However, we also consider consultancy-based reports where we judge these to be particularly noteworthy. The following section begins with a discussion of forces that are likely to affect the demand for various gambling products. In the section on 'Approaches to estimating gambling demand and substitution' we outline the standard methodological approach. In the section 'Review of the empirical literature' we provide a comprehensive review of this topic. We summarise our findings in the final section.

Substitutes and complements in gambling

A major competitive threat to firms in this industry comes from two factors that influence demand: goods or services that constitute substitutes and complements. Economic theory predicts that a rise (decline) in the demand for complementary goods will increase (reduce) the demand for gambling. On the other hand, close substitutes can potentially reduce profitability by capturing market share and intensifying internal


rivalry. In particular, the introduction and expansion of lotteries in various countries may have reduced the demand for conventional gambling services. In the US, there is evidence of substitution between Indian casinos and lotteries (Siegel and Anders, 2001) and substitution between riverboats and other businesses in the entertainment and amusement sector (Siegel and Anders, 1999). The proliferation of new products and services in the gambling industry (including the lottery), in conjunction with the rise of Internet gambling, increases the threat posed by substitutes. For example, the growth of offshore Internet betting had a significant impact on the recent decision by the UK Government to reduce betting taxation. However, only limited data are available on the price sensitivity of the demand for gambling as a whole, or for particular gambling activities. There are a number of reasons for this, most notably the difficulty of generating accurate estimates of such price elasticity from existing data sources. In a number of countries, for example Australia, gambling has been heavily restricted until recent years. There have also been significant changes in the quantity of gambling products, and their relative market share, but this has, arguably, been driven more by regulatory changes than changes in price. Further, in many instances the effective price for the consumer is established via government regulation rather than by actions of the market. For example, in the US the 'pay-out' rate on a state lottery is not established by market forces, but rather by the state legislature. However, economic theory suggests that most forms of gambling should be relatively insensitive to price, due to two factors:

1 First, unlike normal consumer goods, the price of gambling is not readily apparent to the buyer. Insofar as consumers are not aware of the 'true' price or changes in the price, it is likely that they will be less responsive to price changes than if they had full information. It is also especially difficult for the consumer to determine the true price where there are infrequent or highly variable pay-outs. One might also argue that gamblers will be more concerned about the odds, and hence more responsive to tax/price changes, the greater is the probability of winning any particular bet.

2 Second, there is some evidence of brand loyalty among gamblers to particular products (see, e.g. Monopolies and Mergers Commission, 1998), suggesting only limited substitution of one gambling form for another by consumers. The less substitutable a good is, in general, the less price responsive it is likely to be. For example, gambling machines have a significantly lower pay-out ratio (higher price) than most casino table games, yet gambling machines are still very popular within casinos, indicating a lack of substitution by these gamblers based on price.

It is also important to note that the overall (general) responsiveness of demand for a particular type of gambling activity can differ from its speciﬁc responsiveness as measured at any given price, or tax rate. In general, the higher the level of the price, or tax rate, the higher the price elasticity. Whatever the measurement difﬁculties, another potentially serious substitution threat is growth in the underground or shadow economy. In this context, we refer


to illegal gambling establishments, which do not pay taxes. Schneider (1998, 2000a,b) provides evidence of growth in the shadow economies of all OECD countries. In the UK, Schneider estimates that the percentage of GDP represented by the shadow economy has risen from 9.6 per cent in 1989–1990 to 13 per cent in 1996–1997. Unfortunately, he cannot disaggregate these figures by type of activity, such as tobacco, alcohol, drugs, prostitution and gambling, so we cannot determine how much gambling activity has actually gone underground. More generally though, Schneider attributes at least some of the rise in the shadow economy to increases in taxes on items such as alcohol and tobacco. It is important to note that alcohol and tobacco are often consumed while individuals are gambling. Thus, for some consumers, alcohol and tobacco are part of the gambling 'experience'. Our point is that many licensed premises have gambling and cigarette machines and many individuals who frequent betting shops smoke on the premises. It is potentially interesting to note that some of the same individuals or groups that smuggle alcohol and tobacco can also potentially provide gambling services. That is certainly the case in the US. The bottom line is that higher taxes on alcohol and tobacco could also reduce the demand for gambling. Note that the notion of complementarities implies that a relaxation in gambling regulation (e.g. a reduction in taxes) may increase the demand for alcohol and tobacco. An interesting theoretical perspective is that gambling, alcohol consumption and smoking constitute three types of addictive behaviours, which can still be examined through the lens of rationality (see Becker and Murphy, 1988 and Becker et al., 1991, 1994).1 If our conjecture that gambling, smoking and drinking are net complements is true, there is a second potential threat to the profitability of the gambling industry – a decline in the demand for complementary goods.
Ultimately the question of whether gambling, alcohol and tobacco are indeed substitutes or complements is an empirical issue. Answering this question is the key to understanding the implications of regulatory changes on each of these commodities. Substitution and a decline in demand for (potentially) complementary goods may have quite serious impacts on the nature of gambling in the UK. An examination of recent economic trends indicates that the UK gambling industry is becoming more competitive (see Paton et al., 2001b, 2002). Recent proposals to liberalise regulation governing casinos and slot machine gambling in the UK will potentially have even more signiﬁcant impacts on the structure of the gambling industry. An appreciation of the direction and magnitude of price and substitution effects is crucial to understanding the impact of such changes. Thus, in the next two sections we provide a comprehensive review of the available empirical evidence in these areas.

Approaches to estimating gambling demand and substitution

The standard approach to estimating elasticity and substitution effects in the academic literature is to specify a demand equation such as the following:

Qit = a0 + a1 Pit + a2 Yt + Σ(j≠i) βj Pjt + β Zi + ut    (1)


where Qi is a demand-based variable (such as turnover or tax revenue) for gambling sector i; Pi is the average price in gambling sector i; Y is income or another related variable; Z is a vector of other factors that affect demand in gambling sector i; u is a stochastic error or classical disturbance term; Pj is the average price in gambling sectors that are rival to i, j ≠ i; and the subscript t indicates a particular time period.

We expect that a1 < 0, that is, an increase in price leads to a reduction in demand. The magnitude of a1 provides an estimate of the response of demand in sector i to a change in price. If the demand function is specified in logarithms, then a1 gives a direct estimate of the price elasticity. In this case, a1 < −1 implies that the product is price elastic and −1 < a1 < 0 implies the good is price inelastic. Similarly, a2 provides an estimate of the income elasticity of demand. If a2 > 0, then gambling in sector i can be considered as a normal (rather than an inferior) good. If a2 > 1, then gambling is said to be a luxury good. Lastly, βj represents the cross-elasticity of demand in i with respect to the price of sector j: βj < 0 implies that sectors i and j are complements, whilst βj > 0 implies they are substitutes.

The academic literature on the estimation of such models suggests two key methodological problems: the definition and measurement of prices, and the identification of the model. A common definition of the price of a unit gamble is one minus the expected value (see, e.g. Paton et al., 2001a). In a study of the demand for horse racing, Suits (1979) defines price as the pari-mutuel takeout rate, or the fraction of total wagers withheld by the state. With bookmaker betting, such data are generally not available.2 In this case, an alternative approach is to use changes to the tax rate as a proxy for changes in price (as used, e.g. by Europe Economics, 2000).
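To make the interpretation of equation (1) concrete, the following sketch fits the log-log form by ordinary least squares and reads the price and income elasticities directly off the slope coefficients. The data and parameter values are invented for illustration, not drawn from any study discussed here:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical data: log price, log income and log demand for a single
# gambling sector, generated with a known price elasticity of -1.2 and
# income elasticity of 0.8 (illustrative values only).
log_p = rng.normal(0.0, 0.3, n)
log_y = rng.normal(10.0, 0.2, n)
log_q = 5.0 - 1.2 * log_p + 0.8 * log_y + rng.normal(0.0, 0.05, n)

# OLS on the logarithmic form of equation (1): the coefficient on log
# price is then a direct estimate of the own-price elasticity a1.
X = np.column_stack([np.ones(n), log_p, log_y])
coef, *_ = np.linalg.lstsq(X, log_q, rcond=None)
a0, a1, a2 = coef

print(f"estimated price elasticity a1  = {a1:.2f}")   # close to -1.2
print(f"estimated income elasticity a2 = {a2:.2f}")   # close to 0.8
```

Here a1 < −1, so the simulated product would be classed as price elastic, and 0 < a2 < 1 would class it as a normal but non-luxury good.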
Several studies of lotteries (Vrooman, 1976; Vasche, 1985, and Mikesell, 1987) also measure the price by the takeout rate. Gulley and Scott (1993), on the other hand, contend that the true price of lotteries should be based on a probabilistic calculation, related to the expected value of the bet. They calculated these ‘pseudoprices’ for lottery games in Massachusetts, Kentucky and Ohio and estimated demand equations. Obtaining enough information to compute prices for lotteries and (Native American) Indian casino games in the US (which was the focus of the analysis presented in Siegel and Anders, 2001) is impossible. The Native American tribes are not required to publicly report the relevant data. Furthermore, they are reluctant to disclose any information about their casino operations. The second key methodological issue is the question of whether equation (1) is identiﬁed (econometrically). Put another way, the own-price and the prices of substitutes are all potentially endogenous to the quantity demanded. Estimation of equation (1) without taking account of this is likely to lead to biased estimates. A standard solution to this problem in the literature is the use of ‘instrumental’ variables that do not enter into equation (1) but that are correlated with the endogenous variable to identify the effect of each variable. For example, Paton et al. (2001a) use tax rates to identify own-price in their betting equation. In the context of lotteries, it is common to use exogenous events such as rollovers or superdraws to identify the lottery price. These events increase the jackpot out


of proportion to the amount staked (see, e.g. Cook and Clotfelter, 1993; Gulley and Scott, 1993; Farrell et al., 1999; Farrell et al., 2000; Forrest et al., 2000a,b). In support of this methodology, Forrest et al. (2000b) ﬁnd evidence that participants in the UK National Lottery are able to efﬁciently process the information available to them. Speciﬁcally, they ﬁnd that players act as if they can, on average, forecast the level of sales for a given drawing. A complementary approach that has been used to identify substitution effects is to examine the impact on demand of speciﬁc events, such as a regulatory change permitting a rival form of gambling to operate. Examples of studies based on this approach include Anders et al. (1998) on the impact of Indian casinos, Siegel and Anders (2001) on substitution between lotteries and casinos, and Paton et al. (2001a) on the impact of the introduction of the National Lottery on betting demand in the UK.
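The rollover-as-instrument logic described above can be sketched as a simple two-stage least squares exercise. All numbers below are invented: the simulated 'rollover' dummy shifts the effective price of a ticket but is unrelated to the demand shock, which is exactly the exogeneity property the identification strategy requires:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

z = rng.integers(0, 2, n)        # rollover dummy (the instrument)
v = rng.normal(0.0, 0.1, n)      # shock moving both price and demand
price = 0.5 - 0.1 * z + v        # price is endogenous through v
q = 2.0 - 1.0 * price + 2.0 * v + rng.normal(0.0, 0.02, n)

def ols_slope(x, y):
    """Slope coefficient from an OLS regression of y on a constant and x."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

# Naive OLS is biased: the true price coefficient is -1.0, but the
# common shock v pushes the estimate upwards.
ols_est = ols_slope(price, q)

# Two-stage least squares: regress price on the rollover dummy, then
# regress demand on the fitted (exogenous) part of price.
first_stage = ols_slope(z.astype(float), price)
price_hat = price.mean() + first_stage * (z - z.mean())
iv_est = ols_slope(price_hat, q)

print(f"OLS estimate:  {ols_est:.2f}   (biased by the demand shock)")
print(f"2SLS estimate: {iv_est:.2f}   (close to the true -1.0)")
```

In practice the instrument must both move the price (a strong first stage) and be excludable from the demand equation; the evidence of Forrest et al. (2000b) on forecastable sales speaks to the plausibility of the latter for lottery rollovers.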

Review of the empirical literature

There have been several studies of the demand characteristics of gambling, the vast majority being in either the US or the UK. The salient characteristics and key findings of the most important of these studies are summarised in Table 18.1. Our discussion of the evidence summarised in this table is organised as follows. First, we consider the evidence relating to own-price elasticity of demand for various gambling sectors. We then examine the more limited evidence relating to socio-economic factors – income, unemployment and so on. Finally, we discuss substitution among different gambling sectors and potential displacement of tax revenue.

Own-price elasticity of demand

Lotteries

One might expect, a priori, that lotteries – which are characterised by a low ticket cost combined with a very low chance of winning – are likely to be highly insensitive to price across a broad range of prices. Indeed, it may be thought unlikely that lotteries could operate at their current levels in the presence of existing pay-out and tax rates if their demand was sensitive to price. This perception would, however, appear to contrast with the findings of some econometric studies. For instance, Farrell et al. (1999) find that the demand for the UK National Lottery is highly elastic. They report a short-run price elasticity that is close to unity (−1.05), but a long-run elasticity that exceeds unity (−1.55).3 This finding could be spurious, since it is based on data from the initial years of the UK National Lottery, when there was substantial media frenzy surrounding double rollovers (extremely large prizes).4 Studies based on data from subsequent years (Farrell et al., 2000; Forrest et al., 2000a,b, 2002) report elasticity close to unity. It is important to note that the magnitude of the elasticity has important policy implications. For instance, elasticity in excess of unity implies that the 'pricing' of lotteries

Table 18.1 Key empirical studies of demand characteristics and substitution effects for various types of gambling activity

Anders et al. (1998) – United States. Native American (Indian) casinos in the US state of Arizona. Findings: The establishment of Indian casinos destabilised the collection of sales tax revenue in Arizona.

Anders and Siegel (1998) – United States. Native American (Indian) casinos in the US state of Arizona. Findings: The growth of Indian casinos is associated with the displacement of revenue from conventional establishments (which are subject to tax).

Siegel and Anders (1999) – United States. Riverboat casinos in the US state of Missouri. Findings: An expansion of riverboat casinos is associated with a decline in expenditure on other forms of entertainment and recreation.

Siegel and Anders (2001) – United States. Lottery and Indian casino gambling in the US state of Arizona. Findings: An expansion in Indian casinos is associated with a decline in lottery revenues, especially for games offering big prizes.

Paton et al. (2001a) – United Kingdom. Lottery and betting establishments in the UK. Findings: Introduction of National Lottery did not reduce conventional betting demand; strong evidence of substitution between the UK National Lottery and conventional betting establishments; demand for betting is elastic (estimates ranging from −1.19 to −1.25).

Suits (1979) – United States. Horse racing in the US. Findings: Demand for horse-race betting is moderately elastic (−1.59).

Thalheimer and Ali (1995) – United States. Horse racing in the US. Findings: Demand for horse-race betting is highly elastic (−2.85 to −3.09). The introduction of state lotteries reduced betting demand. Some evidence of price-induced substitution between betting and lotteries.

Gulley and Scott (1993) – United States. Lottery in the US states of Massachusetts, Kentucky and Ohio. Findings: Demand for the lottery is moderately elastic (−1.15, −1.92 and −1.20, respectively).

Farrell et al. (1999) – United Kingdom. UK National Lottery. Findings: Short-run elasticity close to unity (−1.05); long-run elasticity exceeds unity (−1.55).

Farrell et al. (2000) – United Kingdom. UK National Lottery. Findings: The demand for the lottery has an elasticity that is close to unity (estimates range from −0.80 to −1.06).

Forrest et al. (2000a) – United Kingdom. UK National Lottery. Findings: The demand for the lottery has an elasticity that is close to unity (−1.03).

Forrest et al. (2002) – United Kingdom. UK National Lottery. Findings: The demand for the lottery has an elasticity that is close to unity (−1.04 and −0.88 for Wednesday/Saturday draws).

Europe Economics (1998, 1999, 2000) – United Kingdom. UK betting establishments. Findings: The demand for betting is relatively inelastic with respect to the betting tax (an estimate of −0.6 to −0.7).


is inconsistent with the stated goal of the regulator – revenue maximisation. That is, officials could generate additional revenue by reducing the price or takeout rate and increasing the potential prize pool. Typical of the later results is that of Forrest et al. (2000a). They estimate the steady-state long-run price elasticity of demand for UK lottery tickets as −1.03, which is not statistically different from the revenue-maximising level of unity. Looking outside the UK, Clotfelter and Cook (1990) use cross-sectional data across states in the US and estimate an elasticity of sales with respect to the payout rate to be −2.55 for Lotto and −3.05 for 'Numbers' games. However, they admit that these estimates 'are not very stable to alternative specifications' (p. 114). In a study that is closer in execution to the UK-based research reported above, Gulley and Scott (1993) use time series data and find that the demand for lotteries in Massachusetts, Kentucky and Ohio was price elastic (−1.15, −1.92 and −1.20, respectively).5 A study conducted in 1997 by Business and Economic Research Limited (BERL), for the New Zealand Lotteries Commission, one of the few papers to estimate elasticity for a range of gambling sectors, estimated a price elasticity for New Zealand Lotto of −1.054, very close to the estimates for the UK.

In sum, the evidence from the US suggests that the price elasticity of the lottery is greater than one, while corresponding estimates from the UK are close to unity. The obvious explanation for this difference is that lotteries in the US tend to be operated by public institutions (individual states) whereas the UK National Lottery is privately run. In order to maximise government revenue, a state-run lottery should set prices so as to maximise profits. To do this, the price should be set such that marginal cost equals marginal revenue. Assuming that marginal cost is non-zero, then marginal revenue is also positive and elasticity is necessarily in excess of unity. In the UK, the Government taxes the National Lottery at a fixed proportion of revenue. To maximise tax receipts, the price should be set so as to maximise sales revenue. This implies that marginal revenue is equal to zero which, in turn, implies unitary elasticity.6 Another possible contributory factor to the disparity in estimates is that there is a single national lottery in the UK, while numerous states in the US have lotteries. That is, there are more available substitutes in the US for lottery players. In fact, it is often quite easy for consumers to play the lottery in neighbouring states. This results in a situation where consumer demand is much more sensitive to price changes than in the UK.

There are a number of possible explanations for the apparent difference between the econometric findings and the more qualitative assessment that demand for lotteries is likely to be insensitive to their price:

1 As mentioned above, a finding that demand for lotteries is sensitive at high prices – owing to current levels of taxes – does not mean that the demand is necessarily sensitive at lower prices and tax rates. In fact, faced with an inelastic demand curve, a profit-maximising producer will continue to raise prices until eventually demand becomes elastic. Elasticity increases because at high prices, substitutes may emerge that are not viable at lower prices.

2 Most quantitative studies estimate the responsiveness of demand to price using consumers' reaction to occasional big pay-outs, or superdraws, that are announced in advance and accompanied by advertising campaigns. It is unclear whether the consumer reaction to these occasional events is a good guide to how the demand for lotteries would change if tax reductions increased pay-outs on a permanent basis.
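As a purely numerical check on the argument that revenue maximisation implies unitary elasticity, consider an invented linear demand curve (not estimated from any of the studies cited) and search a grid for the revenue-maximising price:

```python
import numpy as np

# Illustrative linear demand curve: Q = 100 - 100p for prices in (0, 1).
prices = np.linspace(0.01, 0.99, 9801)
quantity = 100.0 - 100.0 * prices
revenue = prices * quantity

# At the revenue-maximising price, marginal revenue is zero, so the
# point elasticity (dQ/dp)(p/Q) there should equal -1.
best = revenue.argmax()
point_elasticity = prices[best] * (-100.0) / quantity[best]

print(f"revenue-maximising price: {prices[best]:.2f}")      # 0.50
print(f"elasticity at that price: {point_elasticity:.2f}")  # -1.00
```

At prices above 0.50 demand on this curve is elastic, so cutting the takeout raises revenue; below 0.50 it is inelastic and the opposite holds, mirroring explanation 1 above.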

Gambling machines, casinos and betting

The evidence on the price elasticity for gambling machines, casinos and betting is more limited than that relating to lotteries. In part, this reflects the difficulties of obtaining price data for these sectors. Gambling machines may provide more feedback to the consumer on total returns than lotteries, in the sense that they are played repeatedly, and consumers will have some idea of the rate at which they lose. This in itself may imply that the demand for gambling machines is more price sensitive than that for lotteries. In fact, one of the few studies to provide direct evidence on own-price elasticity in this sector is the BERL (1997) paper referred to above. This estimated the elasticity of demand for gambling machines and casinos in New Zealand to be just −0.8 (i.e. somewhat inelastic to price). The earliest study on the demand for betting was Coate and Ross (1974), which examined the effect of off-track betting on racetrack wagering. However, owing to data deficiencies the authors were unable to provide an estimate of either the price or income elasticity of the demand for wagering. Other early studies include those of Gruen (1976), Suits (1979), Morgan and Vasche (1979, 1980, 1982) and Pescatrice (1980). These studies focus primarily on the price elasticity of racetrack wagering demand in the US. They do not, however, consider substitute products such as state lotteries and spectator sports. Still, such work has provided elasticity estimates; Suits (1979), for example, finds the demand for betting on horse racing to be quite elastic (−1.59). Thalheimer and Ali (1995) included substitute products such as a state lottery in the demand relationship specification. In their examination of pari-mutuel revenue at three racetracks, using annual data for the period 1960–1987, they found a particularly high own-price elasticity of demand for betting.
They conclude that the elasticity of turnover (‘the handle’) with respect to price (as measured by the takeout rate) at these racetracks is between −2.85 and −3.09. Unfortunately this study makes no attempt to correct for nonstationarity of the variables. The reported values of the Durbin–Watson statistic indicate that this may be a problem, which would imply that the reported elasticity could be biased. In contrast, the 1997 BERL Report estimated the price elasticity of demand for betting on racing in the pari-mutuel system of New Zealand as −0.720, signiﬁcantly lower than the US estimates. BERL’s ﬁndings suggest that the demand for betting on racing in New Zealand was less sensitive to price changes than gambling machines, casinos or lotteries. The relative importance of bookmakers and of off-course betting in the UK suggests that it is unlikely that these results tell us much about betting demand in this country. A series of industry-commissioned reports by Europe Economics

The demand for gambling

255

(1998, 1999, 2000) investigate the elasticity of betting turnover with respect to taxation rates (rather than total price) in the UK and in Ireland. They estimate this elasticity to be in the region of −0.6 to −0.7. Paton et al. (2001a) derive elasticity estimates using both taxation rates as a proxy for price and direct data on prices derived from bookmakers' takeout rates. Using monthly data between January 1990 and April 2000 inclusive, they find that the elasticity of betting demand with respect to tax is between −0.62 and −0.63 (confirming the Europe Economics estimates) and the price elasticity to be within −1.19 to −1.25. The authors point out, however, that these estimates rely on only a limited number of changes to the tax rate and should be interpreted with caution. Specifically, they point out that policymakers should not rely on these findings to forecast the impact of larger changes in tax rates, since they are based on relatively small changes in tax rates. The recent structural change in betting taxation implemented in the UK in October 2001 is likely to be much more informative about the nature of betting demand, but to date there is no academic evidence relating to this period. A related issue raised by Paton et al. (2001a) is whether the elasticity of demand is increasing over time as additional substitute products appear on the market. We are unaware of any evidence to date relating to this point.

Socio-economic factors

A number of studies examine the links between gambling expenditure (or growth in expenditure) and variables related to wider economic issues such as average earnings or unemployment. For Australia, Bird and McCrae (1994) showed that total gambling expenditure grew at an average of 15.5 per cent per year between 1973 and 1992, compared to the Consumer Price Index increase of 9.1 per cent. However, betting on racing increased at only 10.5 per cent.
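The real growth implied by these figures can be checked by deflating the nominal growth rate by CPI inflation. A minimal sketch, using the Bird and McCrae averages quoted above (the calculation itself is illustrative, not part of their study):

```python
# Bird and McCrae (1994): nominal gambling expenditure grew ~15.5% p.a.
# against CPI inflation of ~9.1% p.a. over 1973-92. The implied real
# growth rate deflates the nominal rate by inflation.
nominal_growth = 0.155
inflation = 0.091
real_growth = (1 + nominal_growth) / (1 + inflation) - 1
print(f"{real_growth:.1%}")  # about 5.9% per year in real terms
```

By the same arithmetic, the 10.5 per cent nominal growth in betting on racing corresponds to only around 1.3 per cent per year in real terms, underlining how much of the expansion occurred outside racing.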
Tuckwell (1984) isolated those factors that most strongly inﬂuenced the level of betting with the totalisator and bookmakers in Australia. The main inﬂuences on totalisator betting were the level of real wages, unemployment and lottery turnover. This suggests an association between the level of totalisator betting and the level of disposable income. For bookmakers, the ﬁndings were not as conclusive. Tuckwell found only a weak association between turnover and real wages. Bookmaker betting may be somewhat insulated from changes in the level of per capita disposable income by the higher preponderance of wealthy gamblers who use this betting medium. He also found a persistent decline in real per capita turnover over time. Thalheimer and Ali (1995) (introduced above) ﬁnd a strongly signiﬁcant positive effect of income on pari-mutuel betting in the US. The authors also present evidence suggesting that this relationship is non-linear. They ﬁnd the effect to be positive at low levels of income, but at higher levels, further increases in income are associated with reductions in the betting turnover. The authors attribute this quadratic relationship to the greater opportunity costs of attending racetracks in terms of lost income. An alternative explanation is that the correlation is spurious as discussed above. Certainly, Paton et al. (2001a) ﬁnd no evidence of such a


quadratic effect for the UK. They estimate that income (as measured by average earnings) has a significantly positive impact on betting demand. However, using a variety of specifications, they are unable to find any impact on lottery demand. Further, they find that the rate of unemployment has no additional impact on the demand in either sector.

Substitution and revenue displacement

Lotteries and betting

The theoretical rationale for displacement is based on the economic principle of substitution, that is, money spent on gambling is money that could be spent on other goods and services. For example, the closest substitutes for sports betting are most likely to be other forms of gambling such as casinos, horse racing, bingo, Internet gambling, and lotteries. Our discussion in the section on 'Substitutes and complements in gambling' suggests that the strength of substitution between the major forms of gambling has increased over time. In the US, state governments and operators of casinos (Native American tribes, riverboats and casinos in Nevada and Atlantic City) have pursued aggressive marketing strategies to capture a larger share of the gambling market. In the UK, the establishment of the National Lottery in November 1994 and the introduction of various related games since then have posed an equal threat to the market share of betting. On the other hand, such trends may not only affect market shares, they may also increase the total market size for gambling. For example, Paton et al. (2001a) contend that the introduction of the National Lottery may have led to a climate in which gambling as a whole became more socially acceptable. Thus, it is possible that regulatory liberalisation in one sector may lead to both substitution and complementarities. This point is illustrated in Paton et al.
(2001a), who employ a series of structural stability tests to examine whether betting demand was significantly affected by the introduction of the National Lottery or any subsequent lottery game. Using a variety of econometric specifications, they conclude that, in fact, there was no significant impact on betting demand. In other words, although the lottery clearly captured market share from betting, this substitution effect was completely offset by the market expansion effect. The authors go on to demonstrate that, despite this expansion, once the lottery had been established, price changes did indeed induce significant substitution between sectors. The magnitude of the cross-price elasticity of betting demand with respect to lottery price was estimated to be between +0.26 and +0.75. The cross-price elasticity of lottery demand to betting price was between +0.48 and +0.68. These findings are consistent with some recent evidence from the US. For example, Mobilia (1992) found that the existence of a state lottery led to a relatively small decrease in attendance at pari-mutuel tracks, but had no significant effect on real handle (gross revenues) per attendee. Similarly, Siegel and Anders (2001; discussed in more detail below) found no evidence of substitution between horse and dog racing and lotteries in Arizona. On the other hand, Thalheimer and Ali (1995)


report much stronger evidence of substitution between the state lottery and pari-mutuel betting. In particular, they estimated that over the period 1974–1987 the presence of the Ohio State Lottery resulted in a decrease in attendance-related revenue at the three pari-mutuel racetracks in the market area of 17.2 per cent, and a decline of 24 per cent in handle-related revenue.7

Casinos and lotteries

Anders et al. (1998) examined the impact of the introduction of Indian casino gambling on transaction privilege taxes (basically, sales taxes) in the US state of Arizona. This is a critical issue for policymakers in states with Indian casinos, since activity on Indian reservations, including casino gambling, is not subject to any state or federal taxes. The authors estimated the following time series regression: LTPTt = β0 + β1 LEMPLt + β2 LRETAILt + ut

(1)

where LTPT is the logarithm of Transaction Privilege Taxes (basically sales taxes); LEMPL is the logarithm of employment; LRETAIL is the logarithm of retail sales; u is a classical disturbance term and the subscript t indexes month t. Brown–Durbin–Evans and Chow tests for structural stability of regression equations revealed that the expansion of Indian casinos induced a structural change (decline) in projected sales tax revenues. The authors also estimated regressions of the following form: LTPTt = β0 + β1 LEMPLt + β2 LRETAILt + β3 CASINOt + ut

(2)

where CASINO is a dummy variable that is equal to one after the introduction of Indian casinos in June 1993; otherwise it is zero. Consistent with the displacement hypothesis, they found that β3 is negative and statistically significant. A series of additional econometric tests revealed strong evidence of leakage from taxable sectors, such as restaurants and bars, to these non-taxable gambling establishments. The authors also argued that these displacement effects were currently being masked by strong economic growth and favourable demographic trends.8 Inevitably, a downturn in the local economy would force the state to take action to stem these leakages. This is exactly what transpired in the years since the paper was published. Another paper by Siegel and Anders (1999) examined revenue displacements from riverboat gambling in the US state of Missouri. Unlike Indian casinos, riverboats are subject to state taxation. Using county level data for the St Louis and Kansas City metropolitan areas (the two largest cities in Missouri), the authors estimated regressions of the following form: LSALESTAXikt = βk0 + βi1 LSALESTAXjlt + βi2 LKCRIVt + βi3 LSTLRIVt + βi4 LOTHRIVt + βi5 YEARt + ut (3)


where SALESTAX denotes sales taxes; KCRIV, STLRIV and OTHRIV are adjusted quarterly gross revenues generated by riverboats in Kansas City, St Louis, and other parts of the state, respectively; i indexes five industries that could potentially experience the displacement effects; j indexes an industry classification (SIC 799) for miscellaneous forms of entertainment, which includes riverboat gambling; k denotes the eleven counties located within Kansas City and St Louis or within driving distance of the riverboats; l represents the (six) counties where riverboats are located; t is the time period (quarterly observations); the L prefix signifies that each variable is computed as a logarithmic change (from year to year); and u is a classical disturbance term. The authors found that in SIC 799, all of the coefficients on the riverboat variables (βi2, βi3 and βi4) are negative and significant. That is, the statistical evidence strongly suggests that an expansion in riverboat activity is associated with a decline in expenditures on other forms of entertainment and recreation. A third paper by Siegel and Anders (2001) finds strong evidence of revenue displacement from lotteries to casinos in the US state of Arizona. This is a major public policy concern in the US because states have become dependent on lotteries to fund educational programmes and other off-budget items. From the perspective of policymakers, lotteries are an attractive source of revenue, because they are less painful and politically less risky than conventional tax increases. Using monthly data for the years 1993–1998, provided by Arizona officials, the authors estimate variants of the following equation: Log LOTTt = α + δS Log NUMSLOTSt + δH Log HORSEt + δD Log DOGt + δy YEARt + ut

(4)

where LOTT denotes monthly lottery revenues, NUMSLOTS is the number of slot machines in Indian casinos, HORSE represents the racetrack handle, DOG is the greyhound track handle and YEAR is a dummy variable denoting the year. The authors use different lottery games, such as Lotto, Fantasy Five and Scratchers, as dependent variables. This enables them to assess the impact of Indian casinos on various types of lottery games.9 Note that δS can be interpreted as the elasticity of lottery revenues with respect to slot machines. The substitution hypothesis implies that δS < 0. Estimation of this parameter will help state policymakers predict the impact of casino expansion on state revenues. The evidence presented in the paper suggests that δS is indeed negative and statistically significant, that is, an expansion of slot machines is associated with a reduction in lottery revenues.10 As mentioned above, they did not find evidence of substitution between horse and dog racing and lotteries. The strongest displacement effects were found for the big-prize lottery games. Thus, the findings imply that, at least for Arizona, there is indeed a 'substitution effect'. Indeed, they found stronger evidence of substitution in Arizona than in Minnesota, where Steinnes (1998) reported that Indian casinos had a negative, but lesser, impact on the Minnesota lottery.
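Because equation (4) is a log-log specification, δS can be read off directly as an OLS slope. The sketch below fits such a specification on simulated data; the sample size, variable ranges and the 'true' elasticity of −0.3 are invented for illustration (they are not estimates from Siegel and Anders, 2001), and a linear trend stands in for the YEAR dummies:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated monthly data in the spirit of equation (4): log lottery revenue
# regressed on the log number of slot machines. delta_s < 0 is the
# substitution hypothesis; the value -0.3 used here is illustrative only.
n = 72  # six years of monthly observations
log_slots = np.log(rng.uniform(4000, 12000, size=n))
trend = np.arange(n) / 12.0
log_lott = 12.0 - 0.3 * log_slots + 0.02 * trend + rng.normal(0, 0.04, size=n)

# OLS with an intercept and a linear trend standing in for the YEAR dummies.
X = np.column_stack([np.ones(n), log_slots, trend])
coef, *_ = np.linalg.lstsq(X, log_lott, rcond=None)
delta_s = coef[1]

# delta_s estimates the elasticity of lottery revenue with respect to the
# number of slot machines; a negative value is consistent with displacement.
print(delta_s < 0)  # prints True
```

The same mechanics apply to the actual monthly Arizona data: a statistically significant negative slope on the (log) slot-machine count is what the displacement hypothesis predicts.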


Further evidence of complementarities is provided by McMillen (1998), in a report commissioned by the New Zealand Casino Control Authority. The author argued that the introduction of casinos had not resulted in a reduction in spending on other forms of gambling, but had instead led to an expansion in total expenditure on gambling. In other words, the impact of casinos on the overall national gambling market was judged to be one of complementarity rather than substitution. The report further noted that casinos appeared to have been a catalyst for change in other forms of gambling. A survey of casino patrons conducted as part of the study found that if money had not been spent on casino gambling, 37.5 per cent said that they would have spent it on other forms of entertainment, 25.7 per cent on housing items, 8.7 per cent on other forms of gambling, while 6 per cent would have saved the money. Fifteen per cent of the respondents did not reply to the question. In summary, there is strong evidence of positive cross-price elasticities across different forms of gambling. In other words, a decrease (an increase) in price in one sector will significantly decrease (increase) the demand in other sectors. However, the expansion of particular sectors due to a looser regulatory environment seems to have ambiguous effects. The extant literature suggests that this can have both a negative effect on existing forms of gambling due to reduced market share and a positive effect due to market expansion. In the US, the introduction of Indian casinos seems to have had a net negative impact on lottery revenue. On the other hand, the introduction of the National Lottery in the UK probably had no overall impact on traditional betting.
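To fix the interpretation of these cross-price magnitudes, an elasticity can be applied directly to a hypothetical price change. The 10 per cent change below is an assumed figure for illustration; the elasticity range is the one estimated by Paton et al. (2001a) and quoted earlier:

```python
# Implied substitution from the estimated cross-price elasticities.
# A cross-price elasticity of +0.26 to +0.75 means a 10% rise in the
# effective price of lottery play raises betting demand by 2.6% to 7.5%.
cross_elasticity_low, cross_elasticity_high = 0.26, 0.75
lottery_price_change = 0.10  # illustrative 10% price increase

betting_demand_change = [e * lottery_price_change
                         for e in (cross_elasticity_low, cross_elasticity_high)]
print([f"{c:.1%}" for c in betting_demand_change])  # ['2.6%', '7.5%']
```

This is a first-order approximation that, as the authors stress, is only reliable for price changes within the range observed in the sample.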

Conclusion

In this paper, we have presented a comprehensive review of empirical studies on the demand for gambling. Our purpose in this section is to briefly summarise some of the key stylised facts. The key conclusions are as follows:

1 The overwhelming majority of the evidence suggests that the long-run price-elasticity of demand for the UK lottery is close to unity, that is, the revenue-maximising level. Note that this result differs from the US findings, where authors typically find that the long-run price-elasticity of demand for state lotteries is greater than unity.
2 The disparity between the US and the UK is largely due to the fact that the private operators of the UK National Lottery set prices with the objective of maximising revenue, whereas in the US the public institutions that manage lotteries (individual state governments) set prices so as to maximise profits. Another likely determinant of the greater sensitivity of lottery demand to changes in price in the US is that while there is a national lottery in the UK, most states in the US have lotteries. That is, there are more substitutes available in the US for lottery players.
3 Evidence for the price elasticity of other forms of betting is more mixed. There are some studies indicating that the price elasticity of betting is fairly high, but this work is less authoritative than that pertaining to lotteries, due to more imprecise measures of 'true' prices. There is a strong need for research on the impact of the recent UK betting tax changes on the demand for betting.
4 There has been a less systematic study of the income-elasticity of various forms of gambling, but the evidence tends to suggest that the elasticity of gambling with respect to income is positive, that is, gambling is a normal good.
5 There is mixed evidence on substitution effects between various forms of gambling, and between gambling and the availability of other leisure opportunities, although a number of studies have identified clear evidence of substitution between different leisure and gambling sectors.
6 There is contradictory evidence from the US and UK on the impact of a State or National Lottery on other forms of gambling, which may be related to the impact of regulatory changes. These changes do seem to have a significant market expansion impact (complementarities). At the same time, regulation and price changes tend to lead to significant substitution between sectors. Thus, the overall impact of liberalisation of casinos/machines in the UK is difficult to predict based on current evidence.
7 There is a large amount of evidence from the US that expansion of the casino sector has a significant net negative impact on turnover and taxation revenues from State lotteries.

These conclusions need to be interpreted with some caution. In particular, own- and cross-price elasticity estimates are only relevant to the range of data contained within the sample. Thus, predicting the impact of significant policy changes that will have a major effect either on the price within a sector or on its competitive environment is extremely difficult. A further key unresolved issue is the question of the precise magnitudes of cross-price elasticity for substitutes and complements for gambling, such as alcohol and tobacco.

Notes

1 See Orphanides and Zervos (1995) for an interesting extension of the Becker and Murphy model.
2 See Paton et al. (2001a) for an exception.
3 Farrell et al. (1999) also provide a test of Becker and Murphy's (1988) theory of rational addiction for lottery tickets. They find that lottery play is indeed characterised by addictive behaviour. Not surprisingly, however, gambling is found to be less physically addictive than other goods that may be physically addictive, such as cigarettes.
4 We are indebted to David Forrest for this observation.
5 The authors found that the demand for Massachusetts MegaBucks was inelastic (−0.19).
6 Again, we are grateful to David Forrest for clarification on this point.
7 See also earlier studies in the same vein by Simmons and Sharp (1987) and Gulley and Scott (1989).
8 Phoenix is one of the fastest growing cities in America and also has a large population of retirees. Three of the most profitable Arizona Indian casinos are Fort McDowell, Gila River and Ak-Chin, which are all located in the Phoenix metropolitan area. A fourth casino offering table games has just opened on the Salt River reservation. A case before the Arizona Supreme Court will determine if they are also allowed to have slots.
9 Powerball, a very popular game, which was added in 1994, was not included in this dataset.
10 The actual level of displacement is difficult to measure because of favourable economic and demographic factors that may have offset decreasing lottery sales.

References

Anders, Gary and Siegel, Donald (1998). 'An economic analysis of substitution between Indian casinos and the State Lottery', Gaming Law Review, 2(6): 609–613.
Anders, Gary, Siegel, Donald and Yacoub, Munther (1998). 'Does Indian casino gambling reduce state revenues? Evidence from Arizona', Contemporary Economic Policy, 16(3): 347–355.
Becker, Gary S. and Murphy, Kevin M. (1988). 'A theory of rational addiction', Journal of Political Economy, 96: 675–700.
Becker, Gary S., Grossman, Michael and Murphy, Kevin M. (1991). 'Rational addiction and the effect of price on consumption', American Economic Review, 81: 237–241.
Becker, Gary S., Grossman, Michael and Murphy, Kevin M. (1994). 'An empirical analysis of cigarette addiction', American Economic Review, 84: 396–418.
BERL (1997). 'Sensitivity analysis of gross win to price elasticities of demand'. In Responsible Gaming: A Commentary by the New Zealand Lotteries Commission on the Department of Internal Affairs' proposals for gaming and gambling. Contained in Gaming – A New Direction for New Zealand, and its Associated Impact Reports, New Zealand Lotteries Commission, Wellington.
Bird, Ron and McCrae, Michael (1994). 'The efficiency of racetrack betting markets'. In Efficiency of Racetrack Betting Markets: Australian Evidence, Donald B. Hausch, Victor S. Y. Lo and William T. Ziemba (eds), London: Academic Press, pp. 575–582.
Clotfelter, Charles T. and Cook, Philip J. (1990). 'On the economics of state lotteries', Journal of Economic Perspectives, 4(4): 105–119.
Coate, D. and Ross, G. (1974). 'The effect of off-track betting in New York City on revenues to the city and state governments', National Tax Journal, 27: 63–69.
Cook, Philip J. and Clotfelter, Charles T. (1993). 'The peculiar scale economies of Lotto', American Economic Review, 83(3): 634–643.
Europe Economics (1998). 'The impact of the 1996 reduction in betting duty', A Report for Betting Offices Licensees Association, Ltd., November.
Europe Economics (1999). 'The potential impact of off-shore and Internet betting on government tax revenues', A Report for Betting Offices Licensees Association, Ltd., January.
Europe Economics (2000). 'The potential impact of off-shore and Internet betting on government tax revenues: an update to reflect new evidence', A Report for Betting Offices Licensees Association, Ltd.
Farrell, Lisa, Hartley, Roger, Lanot, Gauthier and Walker, Ian (2000). 'The demand for Lotto: the role of conscious selection', Journal of Business and Economic Statistics, 18(2): 226–241.
Farrell, Lisa, Morgenroth, Edgar and Walker, Ian (1999). 'A time series analysis of UK lottery sales: long and short run price elasticities', Oxford Bulletin of Economics and Statistics, 61(4): 513–526.


Forrest, David, Gulley, David O. and Simmons, Robert (2000a). 'Elasticity of demand for UK National Lottery tickets', National Tax Journal, 53(4), part 1: 853–864.
Forrest, David, Gulley, David O. and Simmons, Robert (2000b). 'Testing for rational expectations in the UK National Lottery', Applied Economics, 32: 315–326.
Forrest, David, Simmons, Robert and Chesters, Neil (2002). 'Buying a dream: alternative models of demand for Lotto', Economic Inquiry, 40(3): 485–496.
Gruen, A. (1976). 'An inquiry into the economics of racetrack gambling', Journal of Political Economy, 84: 169–177.
Gulley, O. David and Scott, Frank A. (1989). 'Lottery effects on pari-mutuel tax returns', National Tax Journal, 42: 89–93.
Gulley, O. David and Scott, Frank A. (1993). 'The demand for wagering on state-operated Lotto games', National Tax Journal, 46(1): 13–22.
McMillen, Jan (1998). Study on the Social and Economic Impacts of New Zealand Casinos, Australian Institute for Gambling Research.
Mikesell, John L. (1987). 'State lottery sales: separating the influence of markets and game structure', Journal of Policy Analysis and Management, 6: 251–253.
Mobilia, Pamela (1992). 'Trends in gambling: the pari-mutuel racing industry and effect of state lotteries, a new market definition', Journal of Cultural Economics, 16(2): 51–62.
Monopolies and Mergers Commission (1998). Ladbroke Group PLC and the Coral Betting Business: A Report on the Merger Situation, London: Monopolies and Mergers Commission.
Morgan, W. D. and Vasche, J. D. (1979). 'Horseracing demand, pari-mutuel taxation and state revenue potential', National Tax Journal, 32: 185–194.
Morgan, W. D. and Vasche, J. D. (1980). 'State revenue potential of pari-mutuel taxation: a comment', National Tax Journal, 33: 509–510.
Morgan, W. D. and Vasche, J. D. (1982). 'A note on the elasticity of demand for wagering', Applied Economics, 14: 469–474.
Orphanides, Athanasios and Zervos, David (1995). 'Rational addiction with learning and regret', Journal of Political Economy, 103: 739–758.
Paton, David, Siegel, Donald S. and Vaughan Williams, Leighton (2001a). 'A time series analysis of the demand for gambling in the United Kingdom', Nottingham University Business School Working Paper Series, 2001. II.
Paton, David, Siegel, Donald S. and Vaughan Williams, Leighton (2001b). 'Gambling taxation: a comment', Australian Economic Review, 34(4): 427–440.
Paton, David, Siegel, Donald S. and Vaughan Williams, Leighton (2002). 'A policy response to the e-commerce revolution: the case of betting taxation in the UK', Economic Journal, 112(480): 296–314.
Pescatrice, D. R. (1980). 'The inelastic demand for wagering', Applied Economics, 12: 1–10.
Schneider, Friedrich (1998). Further Empirical Results of the Size of the Shadow Economy of 17 OECD Countries. Paper presented at the 54th Congress of IIPF, Cordoba, Argentina and Discussion Paper, Economics Department, University of Linz, Austria.
Schneider, Friedrich and Enste, Dominik H. (2000a). 'Shadow economies: size, causes and consequences', Journal of Economic Literature, 38(1): 77–114.
Schneider, Friedrich (2000b). The Value Added of Underground Activities: Size and Measurement of the Shadow Economies and the Shadow Economy Labor Force all over the World, Discussion Paper, Economics Department, University of Linz, Austria.


Siegel, Donald S. and Anders, Gary (1999). 'Public policy and the displacement effects of casinos: a case study of riverboat gambling in Missouri', Journal of Gambling Studies, 15(2): 105–121.
Siegel, Donald S. and Anders, Gary (2001). 'The impact of Indian casinos on state lotteries: a case study of Arizona', Public Finance Review, 29(2): 139–147.
Simmons, S. A. and Sharp, R. (1987). 'State lottery effects on thoroughbred horse racing', Journal of Policy Analysis and Management, 6: 446–448.
Steinnes, Donald (1998). Have Indian Casinos Diminished Other Gambling in Minnesota? An Economic Assessment Based on Accessibility, Mimeo.
Suits, Daniel B. (1979). 'The elasticity of demand for gambling', Quarterly Journal of Economics, 93: 155–162.
Thalheimer, Richard and Ali, Mukhtar (1995). 'The demand for pari-mutuel horserace wagering and attendance with special reference to racing quality, and competition from state lottery and professional sports', Management Science, 45(1): 129–143.
Tuckwell, R. (1984). 'Determinants of betting turnover', Australian Journal of Management, December.
Vasche, Jon David (1985). 'Are lottery taxes too high?', Journal of Policy Analysis and Management, 4: 269–271.
Vrooman, David (1976). 'An economic analysis of the New York State Lottery', National Tax Journal, 29: 482–489.

Index

Adams, B. R. 63 Adjusted Time ratings 108 Alexander, C. 73 Ali, M. 43, 45, 53, 64, 254–6 Ali, M. M. 3, 30, 67 analysis of covariance 106 Anders, G. C. 206, 213, 248, 250, 256–8 Anderson, J. R. 230 arbitrage opportunities 82 Asch, P. 19, 30, 32, 43 Ashton, R. H. 239 asset pricing models 138 asset returns 138 attelé races 96, 103; versus monté races 101 Avery, C. 125 Ayton, P. 230, 240 Baars, B. J. 225 Bacon-Shone, J. H. 238 Barsky, S. 115 Beach, L. R. 239 Becker, G. S. 179, 190, 249 Becker–Murphy concept: of myopic addiction 190 Bennett, M. J. 239 Benter, B. 63–4 Benter, W. 108, 110 best case scenario 18 ‘best’ quotes 129–31 betting at the Tote 30–40 betting behaviour 224, 228; laboratory-based research 228–36; naturalistic research into 224–8 betting line 114 betting market efﬁciency 43–4; quantifying 44

betting markets 30, 45, 95, 195; role of turnover 45–50, 61; skewness 195 betting returns 43–4, 49; skewness 43 betting volume 48 betting with bookmakers 30–40, 254 bettor’s utility function 53 Beyer, A. 108 Bhagat, S. 146 bias 99–100; in the forecasts 151 Bird, R. 41, 255 Blackburn, P. 2 Bolger, F. 240 Bolton, R. N. 108, 110 bookmaker: betting 192, 250; betting market 39; handicaps 114, 124–5; markets 226, 240–2; odds 30, 38–40; returns 2–3, 6, 10 bookmakers 2, 32, 35, 82, 84, 87, 91, 121 breakage 43–4, 50, 61; costs 43–4, 51–2, 63 Brier probability score 98 British betting markets 31–2; efﬁciency and 32 British pari-mutuel (Tote) market 2 British racecourses 30; betting 31 Brohamer, T. 108 Brown, B. G. 231, 239 Brown, R. I. F. 236 Brown, S. J. 156 Bruce, A. C. 227–8, 237–8, 240 Bureau of Indian Affairs (BIA) 205 Busche, K. 30, 43, 45, 62, 81 ‘cafe-courses’ 95 Cain, M. 3, 15, 30–1, 33, 35, 41 Calderwood, R. 232 calibration index 99 California v. Cabazon 204


California v. Cabazon Band of Mission Indians 220 Carron, A. 115 casinos 206, 254, 257; ﬁscal impact of 206 Chapman, R. G. 108, 110 Chevalier, J. 125 city taxes: impacts on 218 Clarke, S. 115 Clotfelter, C. T. 169, 179, 184, 193, 199 Coate, D. 254 Cochran, W. 122 Cohen, M. S. 235, 237 ‘Collateral Form’ 108 Conditional Logistic Regression model see multinomial logit model Conlisk, J. 63, 192 Connolly, T. 228–9 constant-elasticity speciﬁcation 186 Cook, P. J. 169, 179, 184, 193, 199 Cornell, S. 208 corporate governance 146 Courneya, K. 115 covariance decompositions 95–6, 98 Cox, D. L. 193 Crafts, N. 30–32 Crafts, N. F. R. 68–73, 75 Craig, A. T. 63 Crandall, B. 232 Creigh-Tyte, S. 165 cross-price elasticity 260 cubic utility function 58 cubic utility model 53–9, 64 Curley, S. P. 98

Dare, W. H. 115, 148 Davidson, R. 64 DeBoer, L. 179 decision–cost theory 47–9 demand (bettors) 115 deregulatory reform 198 diminishing marginal returns 80 discrete-choice probit model 135 discrimination index 99 displacement 214–17; effects 257–8 dividends: under the new method 22 Dobson, S. 125 Dolbear, F. T. 30 ‘double rollover’ 188 Dowie, J. 30, 32 Drapkin, T. 108 Dunstan, R. 221

Durbin–Watson statistic 254 each-way bets 38–41 Eadington, W. R. 204, 218 economies of scale 184; in lottery markets 174 Efﬁcient Markets Hypothesis (EMH) 31, 41, 67; racetrack betting 67 Eiser, J. R. 235 elasticity: of betting turnover 255 English rugby league 115; matches 114 Erekson, O. H. 200 ‘exotic’ bets 32 expected operator loss 24, 27 expected payout 27 expected returns 3, 48, 63 expected utility 6–7 expected value (EV) 26 Fabricand, B. P. 43 Falein, H. K. 231, 239 Fama, E. 33 fancied horses 35 Farmer, A. 121–2 Farrell, L. 165, 167, 178–9, 183, 187–8, 191 favorite–underdog point spread 143 favourable bets 25–6 favourite–longshot anomaly 63 ‘favourite–longshot’ bias 2–8, 11–15, 43–4, 68, 70–3, 77, 81–2, 88, 91, 93, 96, 119, 241 favourite–underdog bias 114–15, 119–21, 124 Federal Insurance Contributions Act (FICA) 205 Felsenstein, D. 211 Ferrell, W. R. 239 Figlewski, S. 68, 238 football betting market 136–7 forecast errors 154–5, 157 forecast price 68 Forrest, D. 115, 125, 187–8, 197 Forsyth, R. 108 Francis, J. C. 195 Freedom of Information Act 207 French trotting 95 Friedman, M. 3 Gabriel, P. E. 2, 8, 10, 30–1, 33, 35, 37, 40 gambling 30, 247; demand characteristics 247; on horse racing 30; machines 254; substitutes and complements in 247–9

gambling markets: non-optimizing behavior 45 Gambling Review Report 175 Gandar, J. 115, 120, 126, 132 Garen, J. 179 Garicano, L. 115 Garrett, T. A. 172, 195 Gazel, R. 208 Goddard, J. 125 Golec, J. 3, 8, 44, 53–6, 64, 115, 120, 126, 135, 148, 172, 195 'Good Causes' tax 182–3, 200 Gray, P. 115, 120, 132, 135 Gray, S. 115, 120, 132, 135 Griffith, R. M. 2, 88 Gruen, A. 254 Gulley, O. D. 169, 183, 185, 250 Gulley–Scott model 183–5, 187 Gulley–Scott test 186 Haigh, J. 115, 167 Hall, C. D. 43, 62, 81 handicap betting 114, 127; markets 114–15, 124–6, 131–2 handicap betting market 129 handicapping 107 'harness racing' 95 Harrison, G. W. 47 Harris, P. 208 Harvey, G. 115 Hausch, D. B. 19, 43, 62–3, 67, 82 Henery, R. 115 Henery, R. J. 82 heterogeneity of information: in financial markets 80 heteroskedasticity-consistent estimated standard errors 50, 53 Hey, J. D. 227 high-stakes gaming 204 Hoerl, A. E. 231, 239 Hogan, T. 206 Hogarth, R. M. 230 Hogg, R. V. 63 home–away bias 114–15, 119–22, 124–6 home-field advantage 115 horse betting market 80 horse-race betting 106, 225; average expected returns 107; favourite–longshot bias 67; markets 67–8, 106–7 horse race: bettors 239; handicapping methods 106

Horse-race Totalisator Board (Tote) 18, 28, 33, 227
horses’ win probabilities 47–8
horse track betting 43
horse wagering: market inefficiency 43
Hurley, W. 50
IGRA 205, 207
incapacitating injuries 153–5
incremental optimisation 109
index betting 114–15, 123–4, 127; market 119, 122, 124–6, 129, 131
index firms 121, 124, 126–7, 130
Indian casino gambling: economic impact 204, 208
Indian casinos 208, 212–13; claims of positive impacts of 212; displacement effects 213; employee turnover in 212
Indian Gambling Regulatory Act (IGRA) 204
information: diminishing marginal value of 80, 85–8, 91; rising marginal value 80
information-driven arbitrage 83
injury spells 152–6
inside information 14–15, 31, 33, 36–7, 80, 92, 104; marginal impact of 92
insider trading 14
institutional background 117–19
Internal Revenue Service (IRS) 205
inter-state markets 87, 90–1
Irish Horse-racing Authority (IHA) 18, 28
Jaffe, J. F. 136
Japan Racing Association (JRA) 44, 49; tracks 54–5, 58
Jefferis, R. H. 146
jockeys 88
Johnson, J. E. V. 227–8, 237–8, 240
Kabus, I. 231
Kahneman, D. 4–5, 82, 237
Keren, G. 224
Kida, T. 231–2, 239
Kiefer, N. M. 162
Kimball, M. S. 64
Lacey, N. 115
Larkin, J. 232
Leven, C. L. 221
Lim, F. W. 178
Lo, V. S. Y. 30
lotteries 251, 257
lotteries and betting 256


‘lottery duty’ 182
lottery fatigue 165
lottery tickets 169, 178, 193; as a consumer good 193; expected value of 178; price elasticity of demand 169
Lotto demand 182–4; time-series modelling 182
Lotto play 8, 182
Lucky Dip 179
McClelland, A. G. R. 240
McCrae, M. 41, 255
McCririck, J. 41
McDonald, S. S. 115, 148
McDonough, L. 50
MacEachern, D. 219
McFadden, D. 238
MacKinnon, J. G. 64
McKinnon, S. 213
McMillen, J. 259
Malatesta, P. H. 146
Malkiel, B. G. 33
marginal value of information 81
market efficiency 44, 67
Markowitz, H. 4, 8
Markowitz utility function 5, 9
Marsden, J. R. 2, 8, 10, 30–3, 35, 37, 40
Mattern, H. 208
mean subjective probability 90
media tips: impact on prices 77
midweek draw 169
Mikesell, J. L. 168, 250
Minus pools 21–3, 28
Mobilia, P. 256
money laundering activities 205
monté 96; races 103
Moore, P. G. 200
Mordin, N. 108
Morgan, W. D. 254
multinomial logit model 108, 110
Murphy, A. H. 231, 239–40
Murphy, K. M. 179, 190, 249
Murphy’s decomposition 98
nagging injuries 152–4, 162; hazard rates for 153; hypothesis 156
National Football League (NFL) 115
National Indian Gaming Commission (NIGC) 205
National Lottery 177
National Lottery games 165
National Lottery scratch cards 174
National Association of Racing (NAR) 44; tracks 54–5, 58

naturalistic betting markets 237; calibration in 237–43
Neale, M. A. 232, 239
Neural Network models 108
New method 20–1
Norman, J. 115
Northcraft, G. B. 232, 239
objective probabilities 45
odds–arbitrage competition 64
off-course bettors 2, 87
Omodei, M. M. 230, 240
on-course bettors 87
on-line game 166
opening prices 33
opportunity cost 45, 47
optimal bets 48
optimisation algorithm 109
optimization theory 47
Orasanu, J. 228–9
Orphanides, A. 260
Osborne, E. 115, 120, 126
Osborne, M. J. 83
other tipsters only (OTO): classification of racehorses 69, 77
out-of-state bettors 91
outsider bettors 81, 84, 87
overtime games 139, 141
over–under bets 135–7, 139
over–under betting: line 144; market 135; strategies 142
pace ratings 108
pacing races 95
pari-mutuel betting 18, 50; markets 47, 81, 240, 242; monopoly 95; and the place pool 18–19; role of breakage 50–3
pari-mutuel bettors 237, 240–3; calibration of 237–40
pari-mutuel operators 18, 20–2, 238
pari-mutuel systems 30–1
partial anticipation hypothesis 159
participation uncertainty 153–6
Paton, D. 62, 64, 67, 242–3, 249–50, 255
payout 22–3, 25
Peirson, J. 32
Pesaran, H. 193
Pescatrice 254
Phillips, L. D. 232
Phlips, L. 32
Pierson, J. 2
Pitzl, M. J. 219

place bets 32, 38
place pool 19
player injuries: and price responses 145
‘pleasure’ bettors 81, 121–2, 126, 131–2
PMH 95–6
PMU (Pari-Mutuel Urbain) outlets 95–6
point spread 115, 137, 148, 151, 157, 160; betting 135–6; bias 154; ‘closing’ 136; market-determined 136; and injury spells 151; and scores 148–51; wager 147
point spread betting market 146
point spread conditional on the player 154
potential operator loss 23–4
power utility 53; function 3–4; models 54–8
Pratt, J. W. 64
predicted place dividends 18, 21; the British/Irish method 18, 20–2
predicted place pay-outs 21
price elasticity 251, 253; of demand 259; of the lottery 253
price movements 69, 71, 75; analysis 71–2
probability score: and its decompositions 98
‘professional’ bettors 121–2, 131–2
Purfield, C. 8, 195
Quandt, R. E. 14, 19, 30, 43, 82
‘Quick Pick’ see ‘Lucky Dip’ 167
racecourse betting 35; markets 31
racetrack bettors 43, 239
Racing Post 119, 130
Radner, R. 80
rates of return 70, 72, 75; analysis 72–3; on tipped and non-tipped horses 70
research costs 48
restrictive filters 143–4
revenue displacement 256–9; from lotteries to casinos 258
Rex, T. R. 206
risk-averse behaviour 8, 13; for longshots 13
risk-averse bettors 8
risk-loving behaviour 8, 13; for favourites 13
risk-neutral assumption 12
risk-neutral bettors 45, 192
risk-neutrality 53
risk-return preferences 44, 199

rollovers 167–8, 170, 179, 187, 250
Rose, A. 211
Rosett, R. N. 45, 64
Ross, G. 254
Rubinstein, A. 83
Sauer, R. D. 2, 43, 62, 120, 126, 148
Savage, L. 3
scale economies: of Lotto 197
Schlenker, B. 115
Schneider, F. 249
Schnytzer, A. 93
Schwartz, B. 115
Scoggins, J. F. 178
score differences ordering 148
Scott, D. 108
Scott, F. A. 169, 183, 185, 250
Seminole Tribe v. Butterworth 220
shadow economy 249
Shanteau, J. 232
Shaw, R. 93
Shilony, Y. 93
Shin, H. S. 14–15, 30, 37, 68, 242–3
Shin’s model 15, 38
Sidney, C. 41
Siegel, D. S. 213, 248, 250, 256–8
Simmons, R. 115, 125
Simon, J. 167
Singh, N. 80
skewed prize distribution 172
skewness 196; aversion 44; neutrality 53
skewness–preference hypothesis 44, 53
skewness–preference model 43
Smith, J. F. 231–2, 239
Smith, V. 45, 47
Smith, V. D. 227
Snedecor, G. 122
Snyder, W. 32, 72, 93, 241
Sobel, R. S. 172, 195
Sosik, J. 115, 120, 132
special one-off draws 172
sports betting markets 135
‘spread bets’ 115
Sprowls, C. R. 178
standard method 19, 21; disadvantages 19–20; in the place pool 19
starting prices (SPs) 14–15, 32–6, 68–70, 72, 77; favourable 35; pay-outs 34
Steinnes, D. 258
Stern, R. 212
Stigler, G. J. 80
Stiglitz, J. E. 80
streak variables 135


Suantek, L. 237
subjective probabilities 45
substitution 249–51, 256; effect 258
Suits, D. B. 254
superdraws 168, 170, 190, 250
supply (bookmakers) 115
Swidler, S. 93
Tamarkin, M. 3, 8, 44, 53–4, 56, 64, 115, 120, 126, 135, 148, 172, 195
tax revenue displacement 206
Taylor, J. 206, 208
Taylor series approximation 53
Terrell, D. 121–2
Thaler, R. H. 30, 32, 62, 93, 115
Thalheimer, R. 254–6
the ‘Brier Score’ 98
the Gabriel and Marsden anomaly 2
The Gold Sheet 137
The Racing Rag 67, 70
The Sporting Life 78
The Super League 118
theoretical model of addiction 179
Thompson, R. 146
Thunderball game 170–1, 173, 199
‘tiny utility’ model 63
tipster information: impact on bookmakers’ prices 67
total pool 20
Tote betting market 39
Tote odds 8, 30, 38
Tote payout 6–7, 34
Tote place pool 38
Tote returns 2, 6–10
touched prices 33
‘track take’ 50–1
transaction costs 70, 73, 115, 130, 147–8, 161; mechanism 190
Transaction Privilege Tax (TPT) 213, 257
tribal–state compacts 204–5
true winning probabilities 15–16
Tuckwell, R. 255
Tuckwell, R. H. 32
Tversky, A. 4–5, 82, 237
UK Lotto 191; elasticity estimates 191–2
UK National Lottery 168, 175; halo effect 168
US National Gambling Impact Study Commission 165
US pari-mutuel system 3

US sports betting markets 122
utility 3
van der Plight, J. 235
Vasche, J. D. 250, 254
Vaughan Williams, L. 38, 62, 64, 67, 125, 242–3
Venezia 168
Vergin, R. 115, 120, 132
Victorian markets 87
volume of betting 50
Vrooman, D. H. 179, 250
Wagenaar, W. A. 224
Waldron, P. 8, 195
Walker, I. 178–9, 183, 188, 191–2, 195
Walker, J. M. 47
‘walking wounded’ 152
Walls, W. D. 43, 45, 62
Wang, P. 208
Warner, J. B. 156
weak-form efficiency 68, 115, 119
Wearing, A. J. 230, 240
weight-equivalence ratings 108
Weitzman, M. 3
Wessberg, G. 188
White, H. 50, 53
win bet 32
Winkler, R. L. 136, 240
winning payout 32
winning probabilities 50–1, 81, 83, 88–9
Winsome and other tipsters (WAOT) 69, 71–2, 77; classification of racehorses 69
Winsome only (WO) 69, 77; classification of racehorses 69
Woodland, B. 115, 133
Woodland, L. 115, 133
Woodlands, B. M. 195
Woodlands, L. M. 195
Woods, D. D. 234
worst case scenario 23
Wright, G. 230, 240
Yates’ covariance decompositions 99–100
Yates, J. F. 98, 236
Young, J. 192, 195
Zackay, D. 237
Zeckhauser, R. J. 64
Zervos, D. 260
Ziemba, T. 30, 32
Ziemba, W. T. 19, 62–3, 93, 115
Zoellner, T. 219
