The new site for my research is: https://sites.google.com/site/philipnewallresearch/
Earlier this summer my research on UK soccer gambling advertising was published (link to .pdf). In an observational study of gambling advertising over the 2014 soccer World Cup in Brazil, I found that two features predict whether a bookie will invest resources in advertising a specific bet to bettors (primarily through either TV or shop window advertising).
Bookies advertise bets that combine complexity and salience. Compared to simple bet categories (e.g. Germany to win), complex bet categories have a greater number of bet outcomes (e.g. Germany to win 2-0, 2-1, etc.). This allows bookies to offer alluringly high potential wins to customers, even though the odds have actually become less fair. For example, although the probability of Germany winning should equal the sum of probabilities for all Germany winning scoreline bets (since the two sets of events are identical), the latter probability will be systematically higher as derived from bookies’ odds, indicating that the odds are less fair for the scoreline bets.
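This overround can be checked directly from quoted prices. A minimal sketch in Python, using hypothetical decimal odds (the specific numbers are illustrative, not real bookmaker prices): since “Germany to win” and “Germany wins by some scoreline” are the same event, fair pricing would make the implied probabilities match, but typical odds make the scoreline sum systematically larger.

```python
def implied_prob(decimal_odds):
    """Probability implied by decimal odds (includes the bookmaker's margin)."""
    return 1.0 / decimal_odds

# Hypothetical prices for "Germany to win" and for each winning scoreline
match_win_odds = 2.0                      # implied p = 0.50
scoreline_odds = {"1-0": 5.0, "2-0": 6.0, "2-1": 6.5,
                  "3-0": 10.0, "any other win": 8.0}

p_match = implied_prob(match_win_odds)
p_scorelines = sum(implied_prob(o) for o in scoreline_odds.values())

# The two events are identical, so fair odds would make these equal.
print(p_match)                   # 0.5
print(round(p_scorelines, 3))    # 0.746
```

The gap between the two implied probabilities is the bookmaker’s extra margin on the complex bet category.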
The more complex a bet category is, the harder bettors will find it to make normative probability judgments. This will especially be the case for salient events (cf. the representativeness heuristic). Over the 2014 soccer World Cup bookies systematically advertised salient sub-events within complex bet categories (e.g. Germany to win 3-1, Thomas Muller to score the first goal, or Germany to win 3-1 and Thomas Muller to score the first goal). This suggests that bookies have learned how to nudge bettors towards alluring but bad bets.
The 2015-2016 soccer season has just started, so I took a few pictures of local bookies’ shop window advertising to see how this pattern holds up out-of-sample. Below are shop window adverts from Betfred, Ladbrokes and William Hill for the Manchester City versus West Bromwich Albion match yesterday. As you can see, the patterns hold up. The adverts are predominantly for the favorite team to win (Manchester City, who coincidentally won 3-0), but the odds are made more attractive by requiring additional conditions to come off for the bet to win.
But the odds for these complex bets can actually be easily demonstrated to be less fair than a simple bet on “Manchester City to win” (see the link to my paper above).
This paper estimates the unnecessary costs of active investing as 0.67% of invested assets per year compared to passive investment alternatives. Framed as a dollar cost, this equals $330 per year for every person in the US. The average investor’s mistakes really are expensive.
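As a back-of-envelope consistency check on those two figures (both taken from the paper as quoted above), the implied invested assets per capita can be backed out directly:

```python
annual_cost_rate = 0.0067   # 0.67% of invested assets per year
cost_per_person = 330.0     # dollars per year, per US resident

# The per-capita asset level that makes both figures consistent:
implied_assets_per_person = cost_per_person / annual_cost_rate
print(round(implied_assets_per_person))  # 49254
```

Roughly $49,000 of invested assets per person, which is a plausible per-capita figure for the US.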
This paper doesn’t dwell long on potential causes, but does briefly mention the canonical behavioural finance cause: overconfidence. Yes, many people are overconfident, but the consensus in favour of overconfidence is perhaps less unanimous in the psychological literature than some might expect. My personal opinion is that the many unique features of financial decision-making – payoffs spread over years and decades, shrouded costs, exponential compounding – are both more plausible and potentially more treatable than overconfidence. What this paper does show, though, is that the cost of personal investing as currently practiced is very high.
This blog has already covered the companion to this paper, which looks at equilibria in markets for deceptive goods. This paper extends that static model to the case where firms can invest in exploitative innovations. Firms can make either value-increasing or deceptive innovations. As in the previous paper, deceptive equilibria are more likely to occur in socially wasteful industries, since no firm can profit from educating consumers.
The first counterintuitive result is that incentives to create socially beneficial innovations are low if innovations can be copied. These innovations only increase consumer surplus, so there is no increased payoff to the innovating firm. Exploitative innovations are very different. These innovations are profitable if they cannot be copied, but are even more profitable if exploitative innovations can be copied. Spreading exploitation amongst competitors weakens the incentives these competitors have to educate consumers.
These theoretical results are interesting, but do the necessary modelling assumptions reflect real financial markets? The authors mention credit cards and mutual funds as two potentially relevant markets, and discuss these markets at greater length in their companion paper. These are interesting markets, but consumer exploitation in at least the mutual fund industry may be driven mainly by historical accident – how consumers weight fees against past returns. Mutual funds have not changed that much in the past few years (beyond increased choice, ETFs, etc.). The gambling sector has seen much more innovation, being the first sector to properly monetize the internet. Furthermore, since gambling is purely zero-sum, gambling is more likely to meet the authors’ conditions for a socially wasteful industry. If consumers are not too naïve, then mutual funds do have good potential to increase consumer welfare through easier access to diversification.
My last point is that any empirical study of exploitative innovation should be grounded by a psychological theory of decision-making. Any exploitative innovation is likely to reverse engineer some previously established bias.
This paper builds on earlier theoretical analyses of competitively exploitative industries (e.g. Gabaix and Laibson, 2006), and lists conditions under which competitive industries converge on equilibria where naïve consumers are exploited. In this model firms compete to provide goods with variable social surplus by charging an up-front fee and then deciding to “shroud” an add-on fee, or “unshroud” and reveal the full fee of their product to naïve consumers. Sophisticated consumers exist in the market and know the magnitude of the full fee even if prices are shrouded.
There always exists an equilibrium where firms unshroud prices and naïve consumers are not exploited, but the authors argue that shrouded equilibria are more plausible when theoretically possible. A shrouded equilibrium can only remain if all firms decide to shroud their prices, so a key determinant is the good’s social surplus. If the product is socially valuable, then a single firm may find it profitable to unshroud its prices and turn all consumers sophisticated. But if the product is socially wasteful, then sophisticated consumers will not buy it, and unshrouding is unprofitable. For socially wasteful goods firms always maximize profits by shrouding prices and exploiting naïve consumers. The results continue to hold in multi-product markets, where sophisticated consumers buy the superior product, while naïve consumers buy a shrouded inferior product.
The authors suggest that real-world markets that are competitive but remain profitable are likely to involve exploitative price shrouding, and use mutual fund and credit card markets as two examples. The mutual fund market seems perfectly described as a shrouded multi-product market, where sophisticated consumers buy low-cost index mutual funds and naïve consumers buy high-cost actively managed funds.
Later on I hope to show how the results apply to the UK gambling market.
Mutual fund investors should invest in funds that minimize fees for their desired asset allocation. Instead, most investors buy mutual funds with high past returns. The gap between what mutual fund investors actually do and what they should do costs investors many millions a year in aggregate, with the industry picking up the excess as rents.
So how do mutual funds advertise their funds to investors? Jain and Wu show that fund companies cater to the biases of investors by advertising funds that have had very strong recent performance. But as the saying goes, “past performance does not guarantee future returns”, and indeed this was the case. In the immediate period post-advertisement, the sample of advertised funds ended up significantly underperforming their benchmarks.
The post-advertisement performance is much more representative of the true returns mutual funds are likely to receive. Most mutual funds underperform by about the amount of fees that they charge to their investors. By advertising the rare fund that gets lucky and happens to beat its benchmark, mutual funds are profiteering from the well-known and costly biases of mutual fund investors. And as Jain and Wu show, this strategy is successful in gaining large new inflows of money to the funds with high past performance.
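The long-run cost of that fee-sized underperformance compounds. A small sketch with assumed numbers (a 7% gross return and a 0.9-percentage-point fee difference over 30 years; none of these figures come from Jain and Wu's paper):

```python
def terminal_wealth(start, gross_return, annual_fee, years):
    """Wealth after compounding returns net of an annual fee drag."""
    return start * (1.0 + gross_return - annual_fee) ** years

# Illustrative: a 0.1%-fee index fund versus a 1.0%-fee active fund
low_cost = terminal_wealth(10_000, 0.07, 0.001, 30)
active   = terminal_wealth(10_000, 0.07, 0.010, 30)

gap = (low_cost - active) / low_cost
print(f"{gap:.0%}")  # 22%
```

A fee gap of under one percentage point per year compounds to a terminal-wealth gap of over 20% across three decades.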
My high street in south London recently featured in a Channel 4 documentary on gambling (12.06, see video). In a short space of road there are six bookmakers: two branches of William Hill, two branches of Betfred, a Paddy Power, a Coral and a Ladbrokes. Placed conveniently nearby are a number of payday lenders and pawn shops.
The documentary points out that bookmakers cluster in this way partly because of the regulations over “fixed odds betting terminals”, high-speed betting machines where large sums can be gambled frighteningly quickly. Law restricts these machines to four per betting shop. But since they are such high-earners for the bookmakers, they simply open extra branches, usually in relatively deprived areas (9.30, see video).
Gambling is, clearly enough, a high-revenue industry. There are around 450,000 problem gamblers in the UK (19.50, see video) – a scary number – and yet the gambling industry must always find new ways of increasing revenue. And if creating new gamblers is too difficult, then the only other way to increase revenue is to put up the “price” of gambling.
The price of gambling is hidden over time. Gamblers do not feel the price on every trial, since sometimes they will win and hence pay a negative “price”. Everyone knows the joy of getting something for free – at a price of zero. Perhaps the thrill of gambling is a souped-up version of the same reward process.
But most gamblers do pay the price over time. Especially in mechanical games such as roulette, with no skill element, the house will always win in the long run. The industry is currently exploring two main ways to increase the price of gambling, which can be decomposed as follows:
Price of gambling per hour = expected loss per unit wagered * amount wagered per hour
You can either get people to accept bets that are more statistically unfair, or you can get them betting higher amounts per hour. And if you can’t get them betting more per bet (because of loss aversion), you can increase the number of bets per hour. The online gambling industry was the first instance of how modern technology could exploit this latter point, but it is now moving into mobile devices and onto the high street with fixed odds betting terminals. The terminals are an issue the government is aware of, but there are some other features of the price of gambling that are less well known.
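The decomposition above can be made concrete. A toy sketch assuming single-zero roulette (house edge 1/37) and illustrative spin rates; the stake and spin counts are assumptions, not industry data:

```python
def hourly_price(house_edge, stake_per_bet, bets_per_hour):
    """Expected loss per hour = edge per unit wagered * units wagered per hour."""
    return house_edge * stake_per_bet * bets_per_hour

edge = 1 / 37  # single-zero roulette, ~2.7% expected loss per unit wagered

# Same game, same stake: only the speed of play differs
live_table = hourly_price(edge, stake_per_bet=10, bets_per_hour=30)
fobt       = hourly_price(edge, stake_per_bet=10, bets_per_hour=180)

print(round(live_table, 2))  # 8.11
print(round(fobt, 2))        # 48.65
```

Six times the spin rate means six times the hourly price, with no change to the odds of any individual bet.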
If the speed and stakes of gambling are already maxed out, then the only other way to increase the price of gambling is to get gamblers accepting bets that are more statistically unfair. This is an especially new and sophisticated way of increasing the price of gambling. Traditionally, bookmakers strove to have an “even book”, where the odds are initially set, and then adjusted so that a roughly equal amount is wagered on each outcome of a sporting event. In American football the main bet is whether a team can “beat the spread”, a bet with two mutually exclusive outcomes (beat the spread/not beat the spread) that should be equally likely. The spread is used to adjust for skill differentials between teams, so if team A is a marginal favourite, beating the spread might see them winning by a margin of more than three points. If an equal amount is bet on each side, then the bookmaker earns a risk-free profit of roughly 4.5% of the total book (1/22), since they only offer odds of 10-to-11 on each event with p=0.5 (bet 11 to win 10). This is pretty good, since it requires little knowledge of the psychology of gambling and probability, or of the event being bet on (some sophistication is required to set the initial odds in a skill/luck domain such as a sporting event).
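The 10-to-11 arithmetic works out as follows; this is just the standard even-book calculation, not anything specific to one bookmaker:

```python
# Both sides of a point-spread market bet 11 to win 10.
stake = 11
total_book = 2 * stake              # bookmaker holds 22
winner_payout = stake + 10          # winner gets their 11 back plus 10 = 21
profit = total_book - winner_payout # 1, regardless of which side wins

margin = profit / total_book
print(round(margin, 4))  # 0.0455, i.e. roughly 4.5% of the book, risk-free
```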
But bookmakers are starting to realise that a little more savviness can produce bigger, although riskier, profits by exploiting biases in judgement and decision making. The football season started today with the Community Shield between Manchester United and Wigan. Almost all bookmakers promote football bets in their windows, although only three of the shops on my street were geared up this early in the calendar. These bets follow a simple and almost universal pattern. They pay off only if two or more events occur (a conjunction), and these are usually likely events (the best team winning, or a famous player scoring).
So in the match today, Manchester United are clearly the favourite as Premier League champions against a newly relegated side, and Robin van Persie is the most famous goalscorer, with 30 club goals last season. Here are the bets I found most highly promoted:
Notice how the bets follow the same pattern. Manchester United need to win, and a scoreline of 3-0 wouldn’t flatter them in many people’s minds given the skill difference. And Robin van Persie has to score the first goal. In the last bet Manchester United have to win by the specific scoreline 3-2. In actual fact, they won 2-0, with van Persie getting both goals. Since the individual events are each quite likely, there is a good chance of at least one occurring; when one event occurs but the others in the conjunction do not, this creates the sense of a “near miss”. The gambler did not lose; she “nearly won”. But in order to win these bets, a number of events must co-occur, and the probability of all of them happening shrinks quickly as more events are added to the conjunction. If the events were independent (not the case here), two 0.5-probability events would co-occur 25% of the time, three 12.5%, and four 6.25%. Large conjunctions create many near wins, without a lot of actual ones. This is perhaps why accumulator bets, which offer seemingly high odds if a large number of specific teams win, are so popular.
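The shrinking conjunction probabilities quoted above are just repeated multiplication, assuming independence:

```python
import math

def conjunction_prob(event_probs):
    """Probability that all independent events occur together."""
    return math.prod(event_probs)

# Each extra 0.5-probability leg halves the chance of the bet paying off:
print(conjunction_prob([0.5] * 2))  # 0.25
print(conjunction_prob([0.5] * 3))  # 0.125
print(conjunction_prob([0.5] * 4))  # 0.0625
```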
But there is something even sneakier about the promoted bets. There is a large literature in judgement and decision making on the psychology of probabilistic reasoning, with conjunctions being a key topic of interest. This paper by Tversky and Kahneman (1983) kicked things off, with the finding of a “conjunction fallacy”: contrary to the axioms of probability, participants rated the probability of a conjunction, P(A and B), as higher than that of one of its constituent parts, P(A). This can’t, of course, happen, even if B happens with certainty. Of particular relevance here, they found this effect in a sporting context (p.10). Bjorn Borg was at the time the reigning Wimbledon champion, and participants rated the probability of him “losing the first set and winning the match” as higher than the probability of him “losing the first set”. The high perceived likelihood of him winning the match, an event that most participants had strong memories of, led to the conjunction fallacy.
This paper led to a large debate in the field, much of which isn’t especially relevant here. A recent paper by Khemlani, Lotstein & Johnson-Laird shows, however, that you can be irrational about conjunctions without committing the fallacy. Asking participants to estimate P(A), P(B), and P(A and B) is enough to fix the “joint probability distribution”. Probability estimates can still be irrational, even if they do not commit the conjunction fallacy, if the joint probability distribution implies a negative probability for one or more events. And this happened frequently in their experiments (although not in a sporting context), especially if the probability of the conjunction was estimated before the individual event probabilities.
Bookmakers are clearly exploiting our poor ability to comprehend the probability of two or more events co-occurring. Bookmakers are no longer striving to earn risk-free profits with an even book. They do not promote, or even offer, bets that are the complements of their highly-advertised bets, such as “Robin van Persie not to score the first goal, and Manchester United not to win 3-0”. Bookmakers would rather risk making a loss when their promoted bets do come off, simply because the true likelihood of these events is so much lower than the offered odds imply.
These bets are a sophisticated way of increasing the price of gambling. Not only do they look attractive on the surface (Robin van Persie scores lots of goals, Manchester United often win), hence attracting more bets, but they are also heavily overpriced compared to a true appraisal of the relevant probabilities. And bookmakers would rather take a risk on these bets not happening than hope to earn risk-free profits from a balanced book. Exploiting errors in probabilistic reasoning is a clever way to raise the price of gambling.
Now, of course the gambling industry has an answer to every criticism. They are just offering the bets that people like, and they are bringing jobs to areas that need them. But jobs in this industry are paid for only by a zero-sum transfer of wealth away from gamblers, something that the industry is becoming ever more efficient at. Take away the gambling, and consumers will have more money to spend on other things, which in itself can create as many jobs as the gambling industry. And consumers can then spend their money on things with intrinsic value.
In most industries, consumers happily pay for high-price products that they highly value. A brand new iPhone makes a lot of people very happy. There is little evidence that the same thing works in gambling. Increasing the price of gambling makes most gamblers worse off, since they lose more money and at a faster rate. The price of gambling is hidden, both because it is only paid over time, and because a number of psychological effects are being used to add extra opacity. How much does this latter point add to the price of gambling?
“Nudge for good.” – Richard Thaler
When signing copies of his book, Richard Thaler tells his followers to “nudge for good”. And of course, most of us do want to do exactly that. Although there are plenty of dark nudgers and nudges out there, most ill-intentioned behaviour change is likely the result of random trial-and-error and imitation. But what do we even mean by “nudge for good”? Whose version of good do we mean? A key problem that proponents of behaviour change have faced is precisely trying to convince the public, and vocal opponents, that they “know better” than the average Joe. If the nudger (aka “choice architect”) is as systematically biased as the man on the street, then why should we nudge at all?
I believe that a key distinction lies between nudges that are value-free, that improve behaviour, and value-laden nudges, which merely affect it. Behaviour change is being hijacked by those who want to impose their values upon others. In doing so, they not only undermine the movement in many people’s eyes, but they also manage to miss the low-hanging fruit that could be easily reached to make real improvements in our lives.
David Cameron’s recently announced UK porn filter is a perfect case of a heavily value-laden nudge. This is an example of a default option nudge. Currently, the default option is to have free access to online porn. People who don’t want to receive access to porn have to opt-out. The new system flips this around. The default option will be for no access to porn, with people having to actively opt-in to accessing it with their ISP. Since many people are too lazy to question a default rule, the hope is that this change will help many people who do not want access to porn, but who aren’t willing to actively change from the current default. People who do want to access porn will not be prevented from doing so; they merely have to jump through an extra hoop.
The British population is being nudged in a way that flows from David Cameron’s own personal definition of “good”. Put aside for one moment that this is being rolled out without a proper randomised controlled trial – the usual standard for behavioural interventions. The filter is naturally attracting a lot of scathing attention in the press (see Vice), and a lot of the criticisms are being directed at nudge theory. But is the theory really at fault, or just the fact that people are using it to fulfil their own personal goals? People who question these personal goals will naturally question the theory that comes with their implementation.
But it doesn’t have to be like this. Many things we do – many “mistakes” – do not come with this value-laden baggage. If on the one hand we have a normative model of behaviour, which says how people rationally should act, and on the other a systematic bias – an average irrational mistake – then this debate is superfluous: nudging people towards the normative model, reducing the average size of their mistakes, is an improvement no matter your personal beliefs. And there are in fact so many biases in decision making – in financial decisions, probabilistic decisions, matters of health, and life and death – that we really should be budgeting our scarce nudge capital in better ways. There are only a limited number of economists, psychologists, and practitioners who are interested in these things, but there are so many systematic mistakes that are currently unattended to, or only explored at a shallow level. Econ 101 teaches us that any scarce resource should be allocated to where its comparative returns are highest.
Constraining choice architects to work within this framework of normative models and systematic biases will help in two ways. It will prevent people from arbitrarily imposing their values and personal beliefs on others. We have a great tool here for making the world a better place; it would be a shame to ruin it because the tool becomes associated with extraneous baggage in many people’s minds. And, perhaps more importantly, it will ensure that the tool is used in the most efficient way possible.
Here is one example of a value-free nudge; there are plenty more possibilities that could be found by reading around the judgement and decision making literature. Bayesian inference is the normative model for updating prior beliefs in the light of new evidence. A purely rational Bayesian decision maker should, over an indefinite course of time, update their beliefs to perfectly match objective probabilities in the world around them. Medical testing is one example of a Bayesian inference task. There is some prior probability that the patient is ill, a medical test is then performed, and an updated probability of illness can be computed. Crucially, medical testing is not a perfect science: there is some probability of false positives (a healthy person testing positive on the test).
Physicians should, normatively, use this model of decision making when informing patients with positive test results. Most people – physicians and non-physicians included – display systematic biases, however. We tend to not fully appreciate the role of false positives, and so overestimate the chances of having a disease given a positive test result. For example, HIV testing is very reliable, in that the vast majority of people with the disease will test positive. But people tend to make the opposite (and incorrect) inference, that a person who tests positive will almost certainly have the disease. In fact, for people in low-risk groups, the probability of having HIV given a positive test result can be around 50%. This is because the false positive test result is roughly as likely as a person in a low-risk group actually having HIV.
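The 50% figure falls straight out of Bayes’ rule. A short sketch with illustrative numbers (a base rate of 1 in 10,000 for a low-risk group, 99.9% sensitivity, and a 0.01% false positive rate; these are round assumptions for illustration, not real HIV test statistics):

```python
def posterior_prob(prior, sensitivity, false_positive_rate):
    """P(disease | positive test) via Bayes' rule."""
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

# Illustrative numbers for a low-risk group:
p = posterior_prob(prior=0.0001, sensitivity=0.999, false_positive_rate=0.0001)
print(round(p, 2))  # 0.5
```

With the base rate and the false positive rate equal, true and false positives are about equally common, so a positive result is close to a coin flip.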
Nudging physicians towards making more accurate diagnostic inferences is one example of a value-free nudge, because it reduces the extent of a systematic bias from a normative model. This is a “good” thing, no matter your political, ethical, or moral beliefs. And in this case, there are nudges that can improve statistical inferences (e.g. by representing probabilistic information in terms of naturally occurring frequencies, or by highlighting causal pathways that can be responsible for false positives).
Given that there are so many potential value-free nudges out there, we should aim for the low-hanging fruit and make improving behaviour our priority, not merely affecting it. Hopefully those in power will realise that there are bigger gains to be made from this tool than mere political point scoring. The danger is that such value-laden nudges might swing popular opinion so far against behaviour change that we lose support for this very valuable tool.
One of the themes of this blog is that behaviour change, which attempts to affect our behaviour through the use of psychological findings – informally known as “nudging” people – is a coin with two sides. Proponents of nudge policy usually see these methods as a way to improve our lives, be it in organ donation, increasing savings rates, or encouraging prosocial behaviour (see Thaler and Sunstein’s book as an early exposition). Although there are many opponents of nudging, most critics focus on a perceived reduction in civil liberties. My view is a little different. Although there are many beneficial nudges and nudgers out there, there are also many people out there attempting to nudge us in harmful ways. I would never want to add fuel to the fires of the anti-nudge brigade, only highlight other ways in which interventions could improve behaviour (by wiping out the “dark” nudges, which seem to be proliferating everywhere).
Enthusiasm for behaviour change is growing in both the public and private sectors. There are surely many incompetent nudgers in the public sector, who perhaps attempt to change policy without performing enough randomised controlled trials to ensure that their nudges actually help people on average. And there are surely many intelligent and well-intentioned nudgers in the private sector. But in general, I would claim that most of the risk in the nudge wars is posed by the combination of a large number of profit-seeking nudgers in the private sector and an insufficient degree of public attention. The problem might seem relatively small at the moment, but the forces of economic incentives lead me to believe that this issue will only grow if left unchecked.
Economic theory teaches us that firms in the private sector aim to maximise profits (that is, take on projects where net expected cashflows exceed the firm’s required rate of return). Nudges are, quite often, incredibly profitable for the nudger. Fixed costs are very low: often it only involves a few smart people kicking ideas around a room. Marginal costs – cost per product delivered – are also very low. Some nudges, such as rephrasing or personalising an email, have effectively a zero marginal cost. Nudges can then be rolled out to a very large population quite easily. A nudge can be very profitable even if it has a tiny effect on behaviour. Companies starved of profitable opportunities in our post-Great Recession world can be expected to start looking at new avenues for making money; and nudges can be very profitable indeed. Take these two cases as examples, from a recent event on behaviour change that I attended: how hand-written envelopes can easily justify their greater costs, and how simple website choice architecture can vastly increase profits. Now I have no doubt that these nudgers have only the best of intentions, only that similar techniques could be used by others for personal gain instead of societal gain. In fact spammers, those dark urchins of the internet, are starting to pick up on these ideas. I increasingly receive unsolicited emails which are personalised with my name; how long until senders of junk mail start hand-writing their letters too? Not only might this nudge people in bad ways, but it might also reduce the effectiveness of well-intentioned nudges.
One topic of interest is whether dark nudges are merely evolved, or specifically created by people with an understanding of the relevant literature. I have no doubt that much of it is evolved (take as an early example many of the techniques spontaneously developed by practitioners in Cialdini’s book). One thing is certain though: no matter how these elements of choice architecture arrive, their prevalence seems likely to increase. Random changes in firm policy that “work”, by increasing profits at the expense of customer satisfaction, can lead to firms increasing in size. Dark nudges that work can even increase the probability of copycat behaviour from competitors. And though these changes might be random, evolutionary pressures might force them to become steadily more fine-tuned over time. (The double glazing example in this post is my most terrifying example.) One point that I’d like to reject is that we can expect the forces of competition to produce only beneficial nudges; as consumers we are simply too prone to systematic biases, and too unsure about what we really want. And I have plenty of cases of very large firms nudging us in ways that can only be seen as self-serving. Perhaps their dark approach to behaviour change is what leads to their success!
We are increasingly living in a man-made world. Many institutions have evolved over time, but there is always at least an element of intentional design. Apple are experts at making things very easy for you if you buy only their products, while making it increasingly hard for you to integrate your Apple gadgets with other providers’. While technology is improving our lives all the time, there are many ways in which it can achieve gains for the few while imposing unnecessary costs on the many. My personal belief is that many features of the investing world persist merely because they help the sellers of high-cost financial services, without actually benefiting their customers at all.
An alcoholic’s first step is to admit that they have a problem. I think that there is a serious problem with how many behaviour change practitioners underestimate the scope of their field – by specifically ignoring the darker side. The next steps will be long and difficult, but only an accurate appraisal can allow these ideas to benefit us positively – and I believe the potential upside is enormous. Needless zero-sum losses in the financial sector are so vast, and do so much to increase inequality, that a few strong nudges could greatly help the majority of individual investors saving for their retirement. This will of course involve overturning the entrenched interests that currently benefit from the massive industry of exploiting investors’ systematic mistakes.
There is an ongoing struggle over how information is presented to the masses; it shouldn’t be too much hyperbole to consider it a war.
Framing is one of the best documented effects in the judgment and decision making literature. Two logically equivalent presentations of the same data, such as describing a surgical operation in terms of the death rate or survival rate, can have very different impacts on behaviour.
It’s natural, then, to expect framing to be used extensively in the spin wars between opposing political parties. But other than this paper, which explores how framing tax burdens as percentages of income or as monetary amounts can affect notions of fairness, I can’t find a lot of relevant papers.
Perhaps the best solution is to go out into the field looking for examples. And today, with the comprehensive spending review, I found a great example. Here is how the review is summarised on the Guardian homepage:
• Home Office budget falls 6%, Environment and Communities by 10%
• Education department budget increased to £53bn
• Local government spending to be cut by 2%
Now when dealing with large figures of money, as you’ll find in government budgets, expressing a change in terms of percentages is a good way to hide the magnitude of change. All of these percentage changes seem relatively low, ranging from 2% to 10%. Their impact seems relatively small, thanks to the percentage frame.
By contrast, the one increase in budget is framed as a total aggregate amount: £53bn. This number seems large – they must be vastly increasing the size of the school budget! This is another framing illusion, however. Looking at the Treasury documents, education spending is forecast at £52.8bn in 2014-2015, and only increases to £53.2bn in 2015-2016, an increase which is actually below forecast inflation (meaning a cut in real terms). Compare this to the 6% reduction in the Home Office budget, which takes spending from £10.4bn to £9.9bn. Or the 10% cut in Environment and Communities, from £1.7bn to £1.6bn, and the reduction in local government spending from £54.8bn to £54.5bn (the figures and percentages don’t match up perfectly because the numbers are presented in nominal amounts, with the percentages as real changes).
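Recomputing the changes from the before-and-after figures quoted above makes the framing visible. Note these are nominal changes, so they will not match the quoted real-terms percentages:

```python
# Figures from the Treasury documents, in £bn (before, after)
budgets = {
    "Home Office":               (10.4,  9.9),
    "Environment & Communities": ( 1.7,  1.6),
    "Education":                 (52.8, 53.2),
    "Local government":          (54.8, 54.5),
}

def pct_change(before, after):
    """Nominal percentage change between two budget figures."""
    return 100.0 * (after - before) / before

for name, (before, after) in budgets.items():
    print(f"{name}: {pct_change(before, after):+.1f}% (nominal)")
```

Presented this way, the education “increase” is under 1% in nominal terms, smaller in magnitude than every one of the cuts it was framed against.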
These differences in framing aren’t random; they are systematically designed to minimise the psychological impact of the net overall cuts. These figures from the Guardian likely originate from Government sources, so they help minimise potential opposition to these cuts. A less devious presentation would give before and after figures for total spending in each area, allowing percentage changes to be inferred by the reader.
One important impact of framing on the political discourse is that opposing parties might fight merely to have their favoured frames become the dominant presentation. An incumbent party might attempt to focus attention on percentage changes, while opposition parties attempt to frame these policies in terms of monetary amounts. An end result might be that the election of a new government, seemingly opposed to the previous government’s policies, might only lead to a change in framing and not real policy change. UK residents should prepare for no major changes in policy whoever they vote for in the next election. Ed Balls was on a recent edition of Newsnight, championing means-testing of winter fuel payments, a policy that he repeatedly said would save £100m a year. Jeremy Paxman, the master framer, quickly retorted that this change would only lead to a 1/1000th change in the deficit (0.1%). (Notice how Balls uses the monetary amount frame to attempt to overstate the true change from a populist cut.)
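Paxman’s retort is a one-line reframing. Assuming a deficit of roughly £100bn (the figure implied by his “1/1000th” claim, not one I have verified independently):

```python
saving = 100e6    # Balls's frame: "£100m a year", an absolute amount
deficit = 100e9   # assumed deficit of ~£100bn, implied by Paxman's retort

# Paxman's frame: the same number as a share of the deficit
share = saving / deficit
print(f"{share:.1%}")  # 0.1%
```

The same policy sounds like a major saving in one frame and a rounding error in the other.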