How do you make good human decisions in the age of AI?

By Janus Boye

My hardcover copy of the book about human decisions in the age of AI

When we make decisions, our thinking is informed by societal norms, or "guardrails", such as the laws and rules that govern us, which guide our choices. But what are good guardrails in today's world of overwhelming information flows and increasingly powerful technologies, such as artificial intelligence?

Based on the latest insights from the cognitive sciences, economics, and public policy, the new book "Guardrails" offers a novel approach to shaping decisions by embracing human agency in its social context. In brief: The book explores the importance of establishing guardrails to manage the power dynamics in the digital age.

Written by Urs Gasser (Professor of Public Policy, Governance and Innovative Technology at TU Munich) and Viktor Mayer-Schönberger (Professor of Internet Governance and Regulation at the Oxford Internet Institute, University of Oxford), the book shows how the quick embrace of technological solutions can lead to results we don’t always want.

The two authors explain how society itself can provide guardrails more suited to the digital age, ones that empower individual choice while accounting for the social good, encourage flexibility in the face of changing circumstances, and ultimately help us to make better decisions as we tackle the most daunting problems of our times, such as global injustice and climate change.

In a recent members' call we were joined by Viktor, who took us through the thinking behind the book, how human decisions are flawed, and why looking at AI only through the lens of misinformation, bias or privacy is short-sighted. The problem is bigger. The book is about principles for good decisions, and Viktor shared quite a few memorable examples.

Below you can find my summary of the fascinating conversation, coupled with a bit from my own reading of the book. Towards the end, you can also find pointers for additional reading on the topic and the entire recording from the call.

From infrastructure governance to information governance

Viktor opened the call by reminding us how the big topic in the late 1990s was infrastructure governance on what was then a quite young World Wide Web. The discussion back then was about who owned the routers and who owned the important domain names.

That's infrastructure, and it's still relevant today, but as Viktor said, the conversation has shifted with the explosion in content driven by social media, then big data, and today generative AI. Viktor mentioned several of the tech giants who hold massive amounts of data, including Amazon, Apple, Google and Meta, and this took us straight into the big discussion happening right now: who really owns all that information, and are they making good decisions for us?

The Digital Markets Act establishes a set of clearly defined objective criteria for qualifying a large online platform as a "gatekeeper", and it requires gatekeepers to behave fairly online and leave room for contestability.

To quote Viktor:

“Those who have much data have much power”

As Viktor mentioned, the new Digital Markets Act in the EU is almost entirely focused on data governance and who has the power to use (your) data.

What’s the role of humans in the decision making process?

The two authors use the new book to take the conversation one step further: to decision governance. As technology advances, particularly in the realm of AI, there is a growing need to understand how decisions are being made and what role humans play in the process. Also, how can we best provide guidance to the decisions made by AI? The quick embrace of tech can easily lead to results we don't always want.

Actually, as Viktor said, AI is just the latest in a long history of decision guidance mechanisms, with social norms and laws being examples of earlier systems designed to influence decision-making. A familiar example is consumer protection laws, which enable you to reverse a decision, for example rules that let you cancel a flight within 24 hours of booking it and get a full refund.

Another example of human vs. machine decisions is the anti-lock braking system (ABS), now a standard safety system in all cars. Its purpose is to keep the driver safe by preventing the wheels from locking when the brakes are applied.

Guardrails really focuses on human decision-making, its flaws, and the potential for machines to take over decision-making. The book challenges the notion that technology should step in where our own decision-making fails, laying out a human-centered set of principles that can lead to better decisions.

Three guardrail principles in support of sound human decision-making

A key part of the book, which Viktor also outlined in our call, is the three principles for better decisions by humans.

Chapter 6 of the book opens with the story of how the Terra Nova Expedition, British naval officer Robert Falcon Scott's attempt to be first to the South Pole, failed miserably. Scott's party was famously beaten by Norwegian Roald Amundsen by a month, and while Amundsen returned without incident, Scott and his fellow explorers lost their lives. Wrong decisions were made, shaped by social guardrails, with catastrophic consequences.

The authors carefully describe how consistency and coherence aren't the key design principles of social guardrails, and then put forward three principles for sound human decision-making:

  • To empower the individual to make good decisions. It's all about helping humans evolve and become better at deciding, rather than having either a machine or the collective second-guess what is best for you.

  • To be socially anchored. Guardrails are deeply social in nature, born out of societal discussions, particularly in democracies. Implementation and enforcement also take place in a social context.

  • To encourage learning. Good guardrails encourage learning that improves decision-making. This benefits the individual decision at hand and supports further learning more generally.

After describing each principle, the authors share how the story behind the creation of ICANN, the Internet Corporation for Assigned Names and Numbers, highlights the importance of linking formal and informal guardrails. Both are social guardrails that shape decisions and constrain behavior, and as the authors argue, the two kinds can reinforce each other to create a more comprehensive and durable fabric of guardrails.

Transparency as a design principle for AI decision guidance

Using examples from organisations like Spotify and the world of science, Viktor illustrated the benefits of transparency in learning from mistakes and failures. He argued that transparency is crucial for enabling learning and innovation, and that it is a precondition for some of the other design principles.

Viktor also shared his strong belief that individuals should always have the opportunity to make decisions, even if it means going against the guidelines. He pointed to a car with a speed-limiting chip as an example of taking decision-making power away from individuals. Viktor prefers less intrusive guardrails that guide individuals while still allowing them to make their own decisions.

In our call, Viktor also shared his concern about the increasing use of AI decision-making systems, as humans may stop questioning the machine and become mere automatons pressing the button. It's important to keep us humans in the loop when it comes to making decisions, even as AI technology advances. Viktor used the example of a Korean airliner that crashed, as its pilots had become so reliant on automated landing systems that they had unlearned how to land the plane manually.

Learn more about decisions

During the conversation, Viktor referenced quite a few other authors and books. Here are just a few to continue your learning journey:

Finally, you can also lean back and enjoy the recording below.