Are You Actually Interested in the Ethics of AI?

Oct 19, 2021

Ethical principles are broad and general, while every individual application of artificial intelligence (AI) is concrete and specific. Operationalizing ethical principles is a challenge for anyone involved in developing AI.

Before I outline my approach, let me ask you an honest question: Why are you interested? Are you committed to improving the world? Or do you merely want to get better at using the language of ethics to justify the things you’re doing anyway?

At Carnegie Council’s Artificial Intelligence & Equality Initiative (AIEI), we want to reinvent how people in AI think about ethics. The problem is not that ethics isn’t discussed – it is. The problem is that many people in tech have become too comfortable discussing ethics. They think of it as merely politics by other means.

On the surface, there are similarities. Ethical criticisms of industry can look much like political criticisms. They can both lead to constraints and controls. But being good at politics means something very different from being good at ethics.

There is a cautionary tale for AI in the field of business ethics. Several decades ago, when business schools first started to teach business ethics, they hoped to equip students to do business more ethically. In practice, students now emerge from these courses with the tools to push back against critics by making a convincing case that their business is ethical, even if it isn’t. In other words, they are skilled at politics, not ethics.

So what does it mean to be skilled at ethics? I think of ethics as a way of approaching the challenge of navigating uncertainty. When you don’t have the full picture, ethical values can guide how you act. Values are how we calibrate between our ambitions and our uncertainties. The field of AI is filled with uncertainties, from the lack of transparency in how a smart system processes data, to an inability to predict how an application, such as a social media platform, will evolve and impact the society in which it has been deployed.

All choices have trade-offs. People who operationalize ethics well are sensitive to what those trade-offs are. They perceive the tensions that exist under the surface. They anticipate the potentially harmful effects of the choices they might make and look for ways to mitigate them.

At the start of this piece, I asked an honest question about your motivations for reading it. Now let me ask another: How did that question make you feel? Perhaps you were dismissive: “Well, of course, I want to improve the world.” Perhaps you paused for a moment to look within yourself: “I hope I don’t rationalize what my job involves. Is it possible that I do?”

If you felt instinctively curious about your own motivations, I’d say you’re on the right track to operationalizing ethics in your work. If you were irritably defensive, perhaps you’re not.

Whether or not we are conscious of it, there is always a tension between where we are and where we want to be. To be fully engaged in any work, you need either the willpower to suppress that tension or a genuine sense of ease with what you are doing. I try to approach each ethical dilemma by sensing which option leads to a natural quieting of the mind. How might we respond to the challenge at hand so that we don’t need to give it further thought and attention beyond working out the details?

That takes practice and discipline – but as a first step, we can try to be humble. We need to understand the extent to which we are so convinced about what we are doing that we block out other viewpoints that could enrich our worldview. It’s the difference between a company that genuinely invites and listens to critical voices, and a company that appoints an ethics advisory board that ticks the boxes – gender and geography – but never raises any difficult questions.

Academics will tell you that, broadly speaking, there are three schools of ethics. Deontology says that doing the right thing is about following rules. Utilitarianism sees the right thing as whatever does the greatest good. And there’s virtue ethics, which argues that if you focus on cultivating character, you will then do the right things.

Virtue ethics has come back into fashion in tech. The argument goes like this: since a liberal internationalist worldview is generally considered virtuous, tech developed by companies holding liberal internationalist values must surely make the world better.

Unfortunately, it isn’t true. We are developing AI in ways that are often making the world worse: destabilizing democracies, widening inequality, or degrading the environment. Many in the industry have become locked into the prevailing zeitgeist, rationalizing away their doubts.

To embed ethics effectively, the virtues we need to cultivate are different: an awareness of our personal resistance to being open to others, and the courage to invite diverse perspectives into processes of collective decision-making.

For more on AI and equality and Wendell Wallach’s personal story, check out his May 2021 podcast with Anja Kaspersen, “Creative Reflections on the History & Role of AI Ethics.”

Wendell Wallach is a Carnegie-Uehiro Fellow at Carnegie Council for Ethics in International Affairs. Together with Senior Fellow Anja Kaspersen, he co-directs the Carnegie Artificial Intelligence and Equality Initiative (AIEI), which seeks to understand the innumerable ways in which AI impacts equality and, in response, to propose potential mechanisms to ensure the benefits of AI for all people.
