Values and Nudges
“Nudge for good.” – Richard Thaler
When signing copies of his book, Richard Thaler tells his followers to “nudge for good”. And of course, most of us do want to do exactly that. Although there are plenty of dark nudgers and nudges out there, most ill-intentioned behaviour change is likely the result of random trial-and-error and imitation. But what do we even mean by “nudge for good”? Whose version of good do we mean? A key problem that proponents of behaviour change have faced is precisely trying to convince the public, and vocal opponents, that they “know better” than the average Joe. If the nudger (aka “choice architect”) is as systematically biased as the man on the street, then why should we nudge at all?
I believe that a key distinction lies between value-free nudges, which improve behaviour, and value-laden nudges, which merely affect it. Behaviour change is being hijacked by those who want to impose their values upon others. In doing so, they not only undermine the movement in many people’s eyes, but they also miss the low-hanging fruit that could easily be picked to make real improvements in our lives.
David Cameron’s recently announced UK porn filter is a perfect case of a heavily value-laden nudge. This is an example of a default option nudge. Currently, the default option is to have free access to online porn. People who don’t want to receive access to porn have to opt-out. The new system flips this around. The default option will be for no access to porn, with people having to actively opt-in to accessing it with their ISP. Since many people are too lazy to question a default rule, the hope is that this change will help many people who do not want access to porn, but who aren’t willing to actively change from the current default. People who do want to access porn will not be prevented from doing so; they merely have to jump through an extra hoop.
The British population is being nudged in a way that flows from David Cameron’s own personal definition of “good”. Put aside for one moment that this is being rolled out without a proper randomised controlled trial – the usual metric for behavioural interventions. The filter is naturally attracting a lot of scathing attention in the press (cf. Vice), and a lot of the criticism is being directed at nudge theory. But is the theory really at fault, or just the fact that people are using it to pursue their own personal goals? People who question those personal goals will naturally question the theory used to implement them.
But it doesn’t have to be like this. Many things we do – many “mistakes” – do not come with this value-laden baggage. If on the one hand we have a normative model of behaviour, which says how people rationally should act, and on the other a systematic bias – an average, predictable mistake – then this debate becomes superfluous. Nudging people towards the normative model, reducing the average size of their mistakes, is an improvement no matter your personal beliefs. And there are so many biases in decision making – in financial decisions, probabilistic judgements, matters of health, and life and death – that we really should be budgeting our scarce nudge capital more carefully. There are only so many economists, psychologists, and practitioners interested in these problems, yet many systematic mistakes are currently going unattended, or are explored only at a shallow level. Econ 101 teaches us that a scarce resource should be allocated where its comparative returns are highest.
Constraining choice architects to work within this framework of normative models and systematic biases will help in two ways. It will prevent people from arbitrarily imposing their values and personal beliefs on others. We have a great tool here for making the world a better place, and it would be a shame to ruin it because, in many people’s minds, the tool carries a load of extraneous baggage. And, perhaps more importantly, it will ensure that the tool is used in the most efficient way possible.
Here is one example of a value-free nudge; there are plenty more possibilities that could be found by reading around the judgement and decision making literature. Bayesian inference is the normative model for updating prior beliefs in the light of new evidence. A purely rational Bayesian decision maker should, over an indefinite course of time, update their beliefs to perfectly match objective probabilities in the world around them. Medical testing is one example of a Bayesian inference task. There is some prior probability that the patient is ill, a medical test is then performed, and an updated probability of illness can be computed. Crucially, medical testing is not a perfect science: there is some probability of false positives (a healthy person testing positive on the test).
Physicians should, normatively, use this model of decision making when informing patients with positive test results. Most people – physicians and non-physicians included – display systematic biases, however. We tend to not fully appreciate the role of false positives, and so overestimate the chances of having a disease given a positive test result. For example, HIV testing is very reliable, in that the vast majority of people with the disease will test positive. But people tend to make the converse (and incorrect) inference, that a person who tests positive will almost certainly have the disease. In fact, for people in low-risk groups, the probability of having HIV given a positive test result can be around 50%. This is because a false positive test result is roughly as likely as a person in a low-risk group actually having HIV.
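The arithmetic behind that 50% figure can be checked with Bayes’ theorem. The sketch below uses illustrative numbers – a prevalence of 1 in 10,000 for a low-risk group, 99.9% sensitivity, and a 0.01% false positive rate – and the exact figures will vary by test and population:

```python
# Posterior probability of HIV given a positive test, via Bayes' theorem.
# All numbers are illustrative, chosen to reflect a low-risk group.
prevalence = 0.0001           # P(HIV): 1 in 10,000 in a low-risk group
sensitivity = 0.999           # P(positive | HIV)
false_positive_rate = 0.0001  # P(positive | no HIV)

# P(positive), by the law of total probability
p_positive = (sensitivity * prevalence
              + false_positive_rate * (1 - prevalence))

# P(HIV | positive) = P(positive | HIV) * P(HIV) / P(positive)
posterior = sensitivity * prevalence / p_positive

print(f"P(HIV | positive test) = {posterior:.1%}")  # roughly 50%
```

Because the infected and uninfected groups each contribute about one positive result per 10,000 people, the posterior lands near one half – exactly the point the text makes.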
Nudging physicians towards making more accurate diagnostic inferences is one example of a value-free nudge, because it reduces the extent of a systematic bias from a normative model. This is a “good” thing, no matter your political, ethical, or moral beliefs. And in this case, there are nudges that can improve statistical inferences (e.g. by representing probabilistic information in terms of naturally occurring frequencies, or by highlighting causal pathways that can be responsible for false positives).
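The natural-frequency nudge mentioned above can also be sketched: instead of conditional probabilities, the same illustrative numbers are expressed as counts in a concrete reference population, a format people reason about more accurately.

```python
# The same inference expressed as natural frequencies: counts in a
# concrete reference population rather than conditional probabilities.
# Numbers are illustrative, matching a low-risk group.
population = 100_000
with_hiv = 10          # prevalence of 1 in 10,000
true_positives = 10    # essentially all infected people test positive
false_positives = round((population - with_hiv) * 0.0001)  # ~10 people

# Of everyone who tests positive, how many actually have HIV?
positives = true_positives + false_positives
print(f"{true_positives} of {positives} positives have HIV "
      f"({true_positives / positives:.0%})")
```

Framed this way – 10 true positives among 20 positives in all – the 50% answer is almost self-evident, which is why the frequency format works as a nudge.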
Given that there are so many potential value-free nudges out there, we should pick the low-hanging fruit and make improving behaviour our priority, not merely affecting it. Hopefully those in power will realise that there are bigger gains to be made from this tool than political point scoring. The danger is that value-laden nudges might swing popular opinion so far away from behaviour change that we lose support for this very valuable tool.