Why we should worry about AI-powered online marketing

Nov 24, 2021

By now we all understand the trade-off involved in using the Internet: We let companies collect data about us, and in return they offer a more personalized user experience. But what if I told you that the long-term arc of this trade-off is beyond anything you can possibly imagine?

Everything we do online generates data, which platform companies carefully collect and categorize to create digital profiles. Their artificial intelligence (AI) systems then correlate our digital profiles with those of other users to determine what we see online: how our search queries are interpreted, what posts are included in our social media feeds, what adverts we are shown, and so on.
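To make that concrete, here is a minimal, purely illustrative sketch (not any platform's actual system) of the core idea: represent each user and each piece of content as a vector of inferred interests, then rank content by how strongly it correlates with the user's profile. Every name and number below is hypothetical.

```python
# Illustrative sketch only: ranking content by correlating it with a
# user's inferred interest profile. Real systems are far more complex.
import numpy as np

def cosine(a, b):
    """Cosine similarity between two interest vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def rank_feed(user_profile, candidate_items):
    """Score each candidate item by how well it matches the user's
    inferred interests; return item IDs best-match first."""
    scored = [(cosine(user_profile, item_vec), item_id)
              for item_id, item_vec in candidate_items]
    return [item_id for _, item_id in sorted(scored, reverse=True)]

# Hypothetical profile over interest dimensions, e.g. [sports, politics, fashion]
user = np.array([0.1, 0.9, 0.2])
items = [("post_a", np.array([0.0, 1.0, 0.1])),   # political post
         ("post_b", np.array([0.8, 0.1, 0.0]))]   # sports post
print(rank_feed(user, items))  # the political post ranks first
```

Real ranking systems combine thousands of signals and learned models, but the principle is the same: score content against a behavioral profile and show the best-scoring items first.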

This kind of micro-targeting, based on individual psychological profiling, exploits what Daniel Kahneman, 2002 Nobel prize winner in economics, called “fast” thinking – the decisions we make quickly and without conscious consideration, such as whether to click on a link, watch another video, keep scrolling through our timeline, or put down the phone. The more AI gets to know about us, the more adeptly it can manipulate our emotions and decisions.

The most obvious application of this power is in advertising, which underlies most online business models. The case for personalized advertising is that it helps us to avoid decision fatigue by presenting us with the most relevant purchasing options. However, that helps us only if we want to make a purchase. Often, adverts instead tempt us to buy things that we don’t need or can’t afford, against our better judgement.

In order to serve us more adverts, the algorithms first need to hold our attention. The recent Netflix documentaries The Great Hack and The Social Dilemma explored the implications of this. One is that these algorithms tend to show us political content the AI thinks we will agree with, rather than alternative points of view that it might benefit us to consider and engage with.

The effect is to fracture societies by giving everyone the impression that other people mostly think as they do. More insidiously, keeping our attention often means showing us ever more extreme versions of views we already hold, polarizing people further.

The Cambridge Analytica scandal, which broke in 2018, famously showed how Facebook profiles could be used to manipulate people’s political leanings by micro-targeting them with misinformation tailored to their personal vulnerabilities. This capability has not gone away. Facebook whistleblower Frances Haugen recently alleged that the company still “knowingly amplifies political unrest, misinformation and hate,” and MIT Technology Review’s Karen Hao has revealed how big platform companies are paying large amounts of money to the operators of so-called “clickbait pages,” deepening these fractures. As Hao writes, in countries “where Facebook is synonymous with the Internet, the low-grade content overwhelmed other information sources.”

As someone who has worked in the advertising industry long enough to witness the impact of the digital transformation firsthand, I can see that the direction of travel is clear. We are creating a “clickbait” future in which ever more powerful AI systems engineer our desires and manipulate us into decisions that often go against our real needs – decisions about how to spend our money and our time, and what to feel and think about others.

If that is not dystopian enough, consider that other AI applications increasingly use our data to make critical decisions about our lives, such as whether or not we get called for a job interview, offered a medical treatment, or approved for a loan. Many of these applications rely on “black box” algorithms that cannot explain or justify their decisions, and may encode biases relating to characteristics such as race or gender.

A famous example from 2018 was an Amazon HR algorithm that was found to be putting forward fewer women for interviews. The algorithm hadn’t been programmed to prefer male applicants; it had learned from its training data to prefer candidates with characteristics that happened to predict being male.
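A toy simulation can show how this happens. In the sketch below (hypothetical data, not Amazon's actual system), gender is never given to the model, yet an invented CV feature that correlates with gender absorbs the bias baked into the historical hiring labels.

```python
# Toy simulation of proxy bias: the protected attribute is excluded from
# the inputs, but a correlated feature picks up the bias anyway.
# All data here is synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
gender = rng.integers(0, 2, n)                        # 0 = male, 1 = female (hidden from the model)
womens_club = (gender == 1) & (rng.random(n) < 0.7)   # hypothetical proxy feature on the CV
years_exp = rng.normal(5, 2, n)

# Historical labels reflect a biased process that favored men.
hired = (years_exp + 2 * (gender == 0) + rng.normal(0, 1, n)) > 5.5

X = np.column_stack([years_exp, womens_club])  # gender itself is excluded
model = LogisticRegression().fit(X, hired)
print(model.coef_)  # the proxy feature receives a strongly negative weight
```

Scrubbing the protected attribute from the inputs is not enough: as long as the training labels reflect a biased history, the model will find proxies for it.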

Such algorithms still typically consider only the information we give them – for example, an algorithm used in HR will look at a job candidate’s CV, while an algorithm used to make loan decisions will look at application forms and the applicant’s credit history. But what if, in the future, platform companies and third-party actors gain access to more of our digital profiles? And as talk turns to a future metaverse, how can we make sure it becomes more than just a well-curated clickbait universe?

Imagine, for example, that you research symptoms of heart disease and later search for health insurance. The AI system deduces from your recent searches that you may have a heart problem, so it quotes you a higher premium. Or imagine that you shop for – or even just search for – folic acid pills and later apply for a job. The algorithm concludes from your online history that you may be pregnant or planning to become pregnant, so it recommends against calling you for an interview.
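To be clear, no such data-sharing is documented here; the scenario is hypothetical. But a sketch shows how little machinery it would take – every query, name, and number below is invented for illustration:

```python
# Purely hypothetical sketch: how an opaque pricing model might fold
# inferred health signals from browsing data into an insurance quote.
HEALTH_RISK_QUERIES = {"heart disease symptoms", "chest pain causes"}

def quote_premium(base_premium: float, search_history: list[str]) -> float:
    """Raise the quote if recent searches suggest a health risk.
    This linkage is invented to illustrate the concern."""
    risk_hits = sum(q in HEALTH_RISK_QUERIES for q in search_history)
    surcharge = 0.15 * min(risk_hits, 3)  # cap the inferred-risk markup
    return base_premium * (1 + surcharge)

print(quote_premium(100.0, ["heart disease symptoms", "best running shoes"]))
# -> 115.0: a higher quote, based only on what the person searched for
```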

At the moment we assume that the personal data we share with one entity – Google, the bank, an insurance company – will not necessarily be shared with all the others. But we should not be naïve: Everything we share on the Internet can potentially be accessed by third parties, and without regulation in place, this is increasingly likely to happen. Those third parties include not only other companies but also hackers and governments, which may use the information against us in ways we cannot foresee.

The trade-off of personal data for improved online experience may seem straightforward. But AI-powered marketing is creating subtle and multifaceted vulnerabilities, which, as individuals and as a society, we are ill-prepared to tackle.

Volha Litvinets was previously a research fellow for Carnegie Council's Artificial Intelligence and Equality Initiative. She is also a Ph.D. candidate at Sorbonne University (Paris, France), where she works on artificial intelligence ethics in the digital marketing field. Prior to beginning her Ph.D., Litvinets completed two Master's degrees, in philosophy and in political philosophy and ethics.
