
Ethics of Control

Who Nudges the Nudgers?

The relatively young and trendy field of behavioral science – built in part around the insight that humans are not as rational as we think we are – offers a wealth of practical ideas for the workplace. While the discipline has experienced its greatest success in government institutions, the corporate world can also benefit from new methods of behavior change. But any application must be grounded in transparency and informed consent in order to support rather than extinguish the moral autonomy of employees.

Are you one of those people who uses a phone app to limit your time on social media? Or maybe you plug your phone in to recharge in the bathroom to avoid mindlessly scrolling from bed in the morning. Have you removed social media apps from your phone entirely? If you’re that person, then you instinctively understand what behavioral science is about.

You know that you are not always as rational as you would like to be.

Your brain might decide it's important to break your social media addiction, but you know it can't be trusted to stick to its own plan. So you give yourself a mental buttress, a reinforcement, a nudge to help you stick to your goal.

The relatively new field of behavioral science combines aspects of psychology, neuroscience and behavioral economics. It takes the insight that we are not the perfectly rational animals of classical economic imagination, and it rigorously investigates the many examples, causes and consequences of such irrationality, along with potential remedies. Researchers in the field recognize that we experience life through a multitude of cognitive biases – systematic errors in our thinking – that shape how we make sense of the world around us. These errors then impact our judgments and the decisions that flow from them.

In many circumstances, this is not necessarily a bad thing. There is simply too much information constantly bombarding our senses for our brains to interpret it all, and so we use shortcuts. For example, in-group bias refers to our tendency to favor those within our own group over those from other groups – including favoring their ideas. A fact that runs counter to our beliefs is more likely to be accepted if it is presented by someone with whom we already agree on most things, such as someone from our own political ‘team,’ than if the very same fact is presented by a member of an opposing political team. In-group bias thus can lead us to assess the truth of a claim not on the evidence presented, but on who presents it.

In many cases, this particular shortcut leads to false conclusions, yet it remains useful because there are simply too many things in the world to know about, and we lack the time or expertise to assess them all. So we lean on people whom we trust on other matters. We wouldn’t be able to make any decisions without such shortcuts, or heuristics. Nevertheless, such heuristics have profound implications for society, the law and commerce.

In 2011, the field of behavioral science exploded into the public consciousness with the publication of Thinking, Fast and Slow by the Israeli-American psychologist Daniel Kahneman 1, a bestseller that detailed his decades of work on cognitive biases. In particular, he popularized the idea that humans do not have a single, rational mode of thought, but two modes, or systems. The first system is the fast mode, which is automatic, intuitive, unconscious and emotional. We are using it when we understand a simple sentence, drive a car on an empty road or solve very simple math problems such as 2 + 2.

System two is the more logical and deliberative mode of thought. It is slow, conscious, infrequent and takes effort. We use this second system when we are assessing which plane ticket to buy, walking at a faster rate than normal or trying to solve more complex math problems. The fast system is invaluable for its ability to deliver swift decisions. We can see the benefit when we consider how, for early humans on the African savannah, an instinctive, rapid response to a rustling in the long grass – on the assumption that it could be a predator – was of substantial assistance to survival. The slow system, in contrast, would conduct a more deliberative investigation into whether it was just the wind. But the cognitive shortcuts that enable this speed also produce biases, after-the-fact rationalization and other errors in judgment. And of course the two systems are not hived off from each other but are frequently used in concert, and conflicts between them can result in people making bad decisions.

The explosion in behavioral science in recent years has produced many findings about how we respond to threat and reward, how we process information and, ultimately, practical ways to leverage our knowledge of cognitive biases to improve decision-making. The knowledge that the fast system, for instance, is profoundly influenced by the context in which it is deployed allows us to design structures – such as recharging a phone in the bathroom – that change behavior for the better by changing the context. Applied behavioral science aims to develop techniques that use knowledge of our cognitive ‘failings’ to improve the quality of our judgments and decisions.

The discipline has become so prolific that the scientific literature on behavior change is accumulating faster than anyone can integrate it, leaving much of it fragmented. Specialists in the field reckon that hundreds of behavior intervention trials take place every day, testing, for example, how to reduce smoking or excessive alcohol consumption, or which new prescribing patterns among clinicians are most sustainable for patients.

But efforts to synthesize conclusions from across this vast volume of research take years, meaning that anything that might be useful for decision-makers in government or the private sector is out of date by the time it is published. So, in order to make the findings from this colossal growth in behavioral science more user-friendly, the Human Behaviour Change Project (HBCP) was launched in 2017 under the leadership of University College London psychologist Susan Michie. The project is applying advances in artificial intelligence to automatically sort through the scientific literature and identify patterns that humans on their own would be unable to spot.

A joint undertaking of IBM and Michie’s colleagues at UCL, Cambridge and Aberdeen universities, the HBCP aims to develop a ‘Knowledge System’ that searches the behavioral science literature to draw out findings and deliver them up in an organized manner. Policy-makers, executives and researchers will be able to query the system via an intuitive interface to find out what behavioral modification interventions work, how well and why. It will even be able to predict the outcomes of proposed interventions.

Much as you can ask your mobile phone, 'Hey Siri, what ingredients go into a Negroni cocktail?' a municipal government should in principle be able to ask the system, 'Hey HBCP, what is the best way to encourage citizens to regularly purchase carbon offsets when they travel using vehicles that emit greenhouse gases?'

The system has already sifted through some 220 million documents from the field, and a series of specialized online portals for public health officials, doctors and those in the insurance industry are currently in development.

Beyond the HBCP, the UK has been one of the regions of the world most receptive to the implications of behavior modification for public policy. As early as 2010, former Conservative prime minister David Cameron established the Behavioural Insights Team (BIT), or as it was more popularly known, the ‘Nudge Unit’, within the Cabinet Office, dedicated to applying learnings from the field to UK government policy 2. The Nudge Unit was no creature of the political right, however, led as it was by David Halpern, an advisor with former Labour Prime Minister Tony Blair’s strategy unit. It quickly became known for its sly successes, using insights from behavioral science to ‘nudge’ citizens to boost pension uptake, sharply increase organ donor registration and reduce missed hospital appointments. The same techniques sped up the payment of some £30 million in income tax simply by sending reminder letters that mentioned that most of a recipient’s neighbors had already paid 3. The Nudge Unit has since been spun off as a private consultancy, with offices as far afield as Singapore, Wellington and New York, and is increasingly advising the private sector. Similar nudge units have been set up in a stream of major corporations, and other specialist firms have opened up to compete with BIT in providing nudge consulting for both the public and private sectors.

The nickname for these teams comes from a book on the subject by Richard Thaler, an American economist who, like Daniel Kahneman, won a Nobel prize for his work, and Cass Sunstein, a legal scholar who headed the Obama administration’s Office of Information and Regulatory Affairs – the agency in the executive branch that oversees implementation of government policy. Nudge: Improving Decisions about Health, Wealth, and Happiness was a runaway bestseller like Kahneman's book. Unlike Kahneman's book, however, Nudge focused less on the findings of behavioral science than on examples and the philosophy of its application 4.

In the book, Thaler and Sunstein describe what they call ‘choice architecture’: designing the presentation of choices to the citizen, consumer or employee in such a way as to subtly influence their decision-making and increase the likelihood of their making the best choice. A workplace cafeteria might place healthy foods within easy reach or at eye level, while more sugary or fatty options would still be available but less easily reached or noticed. Or, to boost retirement savings, employees could still choose any plan they wish, but if they do not choose one, they are automatically enrolled by default. To increase the rate of organ donation, adult citizens are automatically organ donors unless they take action to change their status (a policy that as of 2020 has been adopted in the Netherlands). Instead of waiting for those who want to be donors to remember and get around to signing up, a choice architecture makes that particular choice easier to make. Rather than requiring poor families to seek out a form to opt in to a free school meal program for their children, the system would automatically enroll any child from an area known to have high rates of poverty.
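The default-enrollment pattern is simple enough to sketch in a few lines of code. Here is a minimal illustration in Python – the plan names and functions are hypothetical, invented for this example rather than drawn from Thaler and Sunstein:

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative default chosen by the plan's 'choice architects'.
DEFAULT_PLAN = "balanced-index"

@dataclass
class Employee:
    name: str
    chosen_plan: Optional[str] = None  # None means no election was made
    opted_out: bool = False

def effective_plan(employee: Employee) -> Optional[str]:
    """Opt-out choice architecture: freedom of choice is preserved,
    but inaction lands the employee in the default plan."""
    if employee.opted_out:
        return None                   # anyone can still refuse entirely
    if employee.chosen_plan is not None:
        return employee.chosen_plan   # an explicit choice always wins
    return DEFAULT_PLAN               # the nudge: no action means enrollment

# A new hire who never fills in the form is enrolled by default,
# while explicit choices and opt-outs are respected unchanged.
print(effective_plan(Employee("new hire")))                         # balanced-index
print(effective_plan(Employee("picky", chosen_plan="bond-heavy")))  # bond-heavy
print(effective_plan(Employee("refuser", opted_out=True)))          # None
```

The entire nudge lives in the final line of the function: nothing is forbidden and nothing is mandated; only the consequence of doing nothing has been deliberately chosen.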

Thaler reports one effort in which his team was able to boost compliance with hand washing at a food processing plant by 63 percent just by stamping the hands of employees with water-soluble ink. It worked as the simplest of nudges to help them remember to wash their hands without an order from a manager telling them what to do.

The philosophy behind choice architecture is what Thaler and Sunstein call libertarian paternalism: helping people to make the choices that are best for them while still preserving their freedom of choice 5. If they really, really want that chocolate bar from the cafeteria, they can still have it. In the case of the hand stamp, the nudge actually removed the feeling of surveillance and submission that some workers experience in being managed, enhancing their perception of personal autonomy.

Nudging is thus, according to Sunstein, saving and improving lives while saving taxpayers and businesses billions of dollars, without violating people’s freedom to go their own way.

As the nudge industry has taken off, however, some ethicists and political thinkers have raised a few red flags.

With respect to governments using such techniques to more efficiently implement policies for which they have been explicitly mandated, such as with the examples of school meals or even speeding up tax payments, there cannot really be any ethical complaint any more than one can complain about road signs and traffic lights (although some extreme libertarians do go so far as to argue that even traffic lights are an illegitimate government intervention against personal choice).

In general, there is no legal restriction on eating junk food. Others may disapprove of such a lifestyle choice from a health point of view, but free societies believe that it is up to the individual to make that choice for themselves. If some people in society believe that this choice is damaging to society (for example by increasing the cost of healthcare) and that everyone therefore should limit their junk food intake, then it is up to these anti-junk-food advocates to convince the rest of us that rules to achieve this goal should be enacted, and thereby deliver a democratic mandate to a political party to openly implement such a policy once elected.

To nudge us into eating more healthily without such a mandate attempts to do an end-run around democracy.

Unsurprisingly, nudging has become the central theme of at least two recent pieces of dystopian science fiction. In 2009, German writer Juli Zeh published The Method (Corpus Delicti in the original German), set in a ‘health dictatorship’ in the future, where laws have been written to optimize citizens' health. And in 2012, Scottish author Ken MacLeod wrote Intrusion, in which a technocratic UK Labour government ensures people make the choices they would have made if only they had had all the correct information.

The criticism is in effect an update of the ancient democratic question, quis custodiet ipsos custodes, or, Who watches the watchmen? If it is the case that all humans are not as rational as we think we are, then this is also the case for the choice architects themselves, who are no less human than the rest of us, and thus no less irrational. Who then is nudging the nudgers to make sure that they are less subject to cognitive bias? Nudge-nudgers? And then who nudges the nudge-nudgers? Nudge-nudger-nudgers?

The junk food question on the face of it seems harmless enough. One might respond, 'Come on, who really is opposed to society being healthier?' But what if the nudge in question related to a somewhat spicier political topic?

The focus of a great many nudge policies is environmental sustainability, including the earlier-mentioned effort to subtly encourage people to purchase carbon offsets. Such programs allow someone who through their activities has emitted greenhouse gases to make up for these emissions, typically by paying someone else to plant trees. The trees then draw down CO2 from the atmosphere as they grow, in principle counterbalancing the CO2 emitted by the combustion of fossil fuels. A 2016 paper by behavioral scientist Nadine Székely and her colleagues found that, on a website’s ‘choose your amount’ slider for a possible carbon-offset payment, defaulting the slider to the highest value on the scale led to larger payments than defaulting it to the middle. The researchers conjecture that this works because people rely on mental shortcuts such as the anchoring heuristic, in which an initial or recent piece of information serves as an anchor, or orientation point, from which they then make their ‘own decision.’ Seeing a high default value anchors that number and thus nudges them toward a higher offset purchase 6.
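The mechanism is easy to model. The toy simulation below is a sketch under assumed numbers – the anchoring weight is invented for illustration, not estimated from the Székely paper – showing how merely moving the default changes the average payment:

```python
import random

random.seed(0)

SCALE_MAX = 100.0    # top of the 'choose your amount' slider (arbitrary units)
ANCHOR_WEIGHT = 0.3  # assumed strength of anchoring, purely illustrative

def chosen_payment(private_valuation: float, default: float) -> float:
    """Toy anchoring model: the final choice drifts from the user's
    own valuation toward the slider's default position."""
    return (1 - ANCHOR_WEIGHT) * private_valuation + ANCHOR_WEIGHT * default

# Simulate the same population of users under two choice architectures.
valuations = [random.uniform(0, SCALE_MAX) for _ in range(10_000)]
mid = sum(chosen_payment(v, SCALE_MAX / 2) for v in valuations) / len(valuations)
top = sum(chosen_payment(v, SCALE_MAX) for v in valuations) / len(valuations)

print(f"average payment, default at midpoint: {mid:.1f}")
print(f"average payment, default at maximum:  {top:.1f}")
# The top-anchored default yields the higher average, mirroring the
# direction (though not the magnitude) of the reported effect.
```

No one's options change between the two runs; only the starting position of the slider does, and the average payment moves with it.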

Sounds good at first glance. But that would assume that carbon offsets are indeed a ‘good’ thing. Who exactly was it that decided that higher purchases of carbon offsets are something toward which society should be striving? Perhaps carbon offsets are actually lousy ways to reduce emissions. Maybe the nudge architects who decided that we all should be nudged into purchasing more offsets are themselves subject to a cognitive bias that led them to favor carbon offsets as a good climate policy.

It turns out they indeed may have been. In August 2020, California and Oregon suffered through some of the worst wildfires in their history. These fires, in part caused by a warming climate, burnt down tens of thousands of acres of carbon-offset forests, entirely erasing the alleged climate benefit they offered 7. For over two decades, many greenhouse gas mitigation experts have warned that carbon offsets are not a great climate solution, not just because of the risk of these sorts of events, but more importantly for the simple reason that offsets work by paying someone else to draw down emissions to balance out the emissions you are releasing. And, quite logically, if everyone in the world is paying someone else to reduce emissions on their behalf, who is doing the actual emissions reduction? These critics have argued that offsets amount to a modern revival of medieval indulgences, whereby sinners paid the church to ‘pay off’ their sins and ease their way into heaven: offsets likewise assuage climate guilt, or allow politicians and companies to appear to be doing something about global warming, while in reality they avoid the sometimes very hard and expensive work of shifting to clean energy 8. And why might a politician, executive or consumer want to assuage their guilt or avoid hard work? Could cognitive biases play a role somewhere in there?

In 2010, the social media giant Facebook ran a behavioral science experiment together with University of California San Diego researchers. They placed a banner ad at the top of the news feed of 61 million users during that year’s congressional elections that told them it was election day and linked to information about local polling places. Crucially, the ad also showed them pictures of all their friends on Facebook who had clicked an ‘I Voted’ widget. The researchers used as control groups users who received no such message, and then checked who had actually voted against public voter rolls. They published their findings in the prestigious science journal Nature, concluding that those who had received the pro-social message were 0.4 percent more likely to vote, translating into some 60,000 additional voters. In addition, research on social contagion effects suggests that a further 280,000 people were indirectly influenced by the message to go out and vote. In total, Facebook’s nudge had produced over a quarter of the rise in voter turnout compared to the previous election.

This sounds great. But the immediate thought that comes to mind is that if Facebook can achieve such great results via nudge to get out the vote in a non-partisan fashion, surely they can achieve the same via nudges to get out the vote in a partisan fashion.

Already, a great many people around the world, whether in their position as citizens, consumers, users, or employees, increasingly feel that a self-appointed elite is out to manipulate them all the while telling them that any manipulation is in the people’s own interest 9. Regardless of whether this suspicion is reflective of reality, such apprehensions are reducing trust in experts and expertise. It is not surprising then that the Lancet medical journal recently reported worrying rises in anti-vaccination belief in much of the developed world in the middle of a pandemic whose only resolution will be the development of vaccines 10.

A class of people nudging us toward their favored choices, because they believe the rest of us are insufficiently rational to decide for ourselves what is best for us – even though the science says the nudgers are no more rational than the nudgees – is a recipe for the further erosion of trust in our political and economic institutions.

So does this mean that all choice architecture and behavioral science needs to be thrown out?

Not necessarily – so long as whenever we develop such structures, we are at all times wearing our ‘democracy goggles,’ fitted with ‘moral autonomy’ lenses.

Whenever they develop a nudge or choice architecture, its architects should ask themselves: Am I violating informed consent or democratic principle? Am I treating another adult human being as a child? Am I evading democratic deliberation? Could it even be perceived that I am violating these principles?

If nudges like the automatic retirement savings plan enrollment or school meals, however noble or successful, are executed without the knowledge and consent of employees or families, they violate these principles. But a simple message to, respectively, all new employees or all families enrolling their children – explaining that this is what the system now does and that they can opt out if they wish – resolves the problem. A government that had not been elected with a mandate to promote carbon offsets would be running afoul of democratic principle if it implemented offset nudges on its websites. But a private company that wanted to nudge its customers in the same way would be fine, for a private company has no democratic obligation to its customers. At the same time, the more such sneaky tricks are used, the greater the likelihood that customer trust in a brand will at some point be diminished. We all already kind of know that advertising is out to dupe us.

When social media apps offer us a mechanism to use nudges to reduce our screentime, we are still using the lessons of behavioral science, but in an open and transparent rather than manipulative and underhand way. We ourselves know that we need a crutch for our willpower and consciously decide to use that crutch. In this way, we are consciously helping ourselves to be more rational, rather than assuming that we are helplessly irrational. This is in line with the thinking of one of the foremost critics of nudge theory, and particularly of Daniel Kahneman’s research, the psychologist Gerd Gigerenzer. Gigerenzer, director of the Center for Adaptive Behavior and Cognition at the Max Planck Institute for Human Development in Berlin, argues that ‘nudge-ology’ represents an unfairly negative view of the human mind. It is not that he rejects the existence of cognitive biases; rather, he believes that we can work to resolve, at least in part, the problems that arise from the contradictions between the two systems of thinking. Just as there are exercises that enable us to strengthen our muscles, there are techniques that allow us to strengthen our reasoning 11.

We can train ourselves to be better thinkers. The scientific method, developed over the last three hundred years, is one very old architecture of bias-reduction, and one that is open and transparent. Even just being aware of cognitive biases can help us reduce their impact. The knowledge that all of us are subject to, for example, confirmation bias – wherein we seek out information that confirms our pre-existing beliefs – can encourage us to deliberately seek out sources of information that challenge our ideas, such as reading publications from across the political spectrum. Our cognitive biases make most of us appalling at statistical reasoning and risk comparison, so perhaps we could make statistics a priority within secondary mathematics education rather than leaving it, as is typical, to the post-secondary level. Could we do better at instilling critical thinking skills in elementary school? (For a taste of how badly intuition handles such problems, see the worked example below.)
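A classic demonstration of how unaided intuition fails at risk is the base-rate puzzle. The few lines of Python below use illustrative numbers – a rare condition and a fairly accurate screening test, both invented for this example – to show what training in statistical reasoning buys us:

```python
# Base-rate puzzle with illustrative numbers: a condition affects 1 in
# 1,000 people, and a screening test is 99% accurate in both directions.
# Fast, intuitive thinking says a positive result means a ~99% chance of
# having the condition; Bayes' rule says otherwise.

prevalence = 0.001          # P(condition)
sensitivity = 0.99          # P(positive | condition)
false_positive_rate = 0.01  # P(positive | no condition)

p_positive = (sensitivity * prevalence
              + false_positive_rate * (1 - prevalence))

p_condition_given_positive = sensitivity * prevalence / p_positive

print(f"P(condition | positive test) = {p_condition_given_positive:.1%}")
# Prints roughly 9%, not 99%: most positive results come from the vastly
# larger healthy population.
```

Gigerenzer himself has long championed teaching exactly this kind of reasoning, often via 'natural frequencies' (1 in 1,000 rather than 0.1 percent), as a trainable corrective to bias.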

It’s an approach that doesn’t reject nudge theory in its entirety or the panoply of learnings from behavioral science, but just rejects the idea that they offer evidence that we are hopelessly, incorrigibly dumb and can only be corrected by our self-appointed betters. It sees nudge theory as offering a handful of weapons that, carefully constrained to not undermine democracy or moral autonomy, exist within a much larger armory of ways to constantly improve human rationality.

Humans are not stupid. We’re just imperfect. But we can always get better.


Leigh Phillips

Leigh Phillips is a science writer and political journalist whose work has appeared in Nature, Science, New Scientist, the Guardian, the Daily Telegraph, MIT Technology Review, and the New Statesman, amongst other publications. He is also the author of two books, Austerity Ecology and the Collapse-porn Addicts, a progressive defence of economic growth and industry within sustainability discourse, and most recently with co-author and economist Michal Rozworski, The People's Republic of Walmart, a history of the economic calculation debate.
