
The Art of Restraint

By Jason Jercinovic, Global Head, Innovation, Havas Worldwide


Exploring the Ethical Application of AI in Advertising

Advertising has always been obsessed with understanding human behavior. We have employed countless techniques to gain even a brief glimpse into the mind of the consumer. We’ve interviewed, surveyed, and arranged focus groups. We’ve hired psychologists to observe shoppers. Most recently, we’ve collected reams of data from digital media.

Our effectiveness has steadily increased. We’ve learned how to craft more efficient marketing campaigns to drive sales. Our targeting capabilities improve as we separate buyers into ever smaller segments.

But we’ve never known a tool with greater potential to advance the industry than artificial intelligence.

The ability of AI systems to transform vast amounts of complex, ambiguous information into insight is driving deeper, more personal insights into market behavior than we ever dreamed possible. The source material is readily available. Nearly 2 billion Facebook users worldwide share about 5 billion pieces of content daily. Almost 200 billion compact sentiments are shared on Twitter every year. And Google processes more than 40,000 searches per second.

We can now assess the entirety of an individual’s social activity: every word, every picture, every emoji.

Add to that location-based data from mobile phones, transactional data from credit cards, and adjacent data sets like news and weather. When machine learning and advanced algorithms are applied to these oceans of digital information, we can intimately understand individual consumers’ motivations. And automation scales the delivery of targeted marketing to each person.

No one can blame the advertising industry for adopting such powerful tools. The benefits to both marketers and consumers are clear: fewer, more relevant advertisements; more effective and efficient campaigns.

But AI also introduces troubling ethical considerations, since advertisers may soon know us better than we know ourselves—not just our demographics but also our most personal motivations, vulnerabilities, and triggers. They may elevate the art of persuasion to the science of behavior control.

There are more practical concerns as well: data that is inherently biased; algorithms that make flawed or harmful decisions; major violations of personal privacy.

We therefore need a code of ethics that will govern our use of AI in marketing applications. We need a system that will ensure transparency and engender trust in our profession. In this whitepaper we seek to initiate an industry-wide conversation resulting—we hope—in the responsible use of AI that benefits the entire advertising ecosystem.

A System of Trust

The ethical landscape of AI in advertising is fraught. Some decisions will challenge the industry’s moral compass. The difference between right and wrong will not always be clear.

For example, most would agree that using AI to develop targeted digital marketing messages for a consumer interested in sports cars is acceptable. But what if you also knew that consumer was deep in debt, impulsive, and fiscally irresponsible? Or had multiple moving violations on his record, or a history of drug and alcohol abuse? Is it still okay to market a fast car to this person, in a way that would make it nearly irresistible?

The more complete our understanding of the target, the more persuasive our marketing can be. But each new insight raises new questions about our moral obligations to that individual and society at large.

Rather than judge each case individually, it’s more effective to establish guidelines that remove executive guesswork and allow the market to decide. That’s why a transparent system, in which the consumer is more partner than target, is the only ethical way forward.

This system addresses three main aspects of AI marketing: data, algorithms, and consumer choice.

• Data–AI’s raw fuel is the data used to train the algorithms and sustain the system. If the data source is flawed, inaccurate, or biased, those weaknesses will be reflected in the AI’s decisions.

Often these data sets simply reflect preexisting human biases. (Witness Microsoft’s experience with Tay, the conversational bot that learned hate speech from Twitter.)

Some argue that the application of AI to biased data sets can help remove those biases. At the very least, advertisers should make plain the data they use, to help the market better understand the source material that informs their AI.

• Algorithms–These engines of AI contain the code that refines raw data into insight. They make the AI’s decisions and learn over time. But they are designed and developed by humans, so their instructions can and should be “explainable.”

Some in the business call this “algorithmic transparency.” Full transparency is impractical in this context: because an AI’s most valuable intellectual property lives in the algorithm, agencies won’t eagerly share that code. Explainability, however, means being able to clearly articulate what decisions an AI makes and why, as sketched in the example that follows this list.

• Consumer Choice–Consumers should be aware of the techniques used to market to them and have the option of participating in a campaign. To make an informed choice, they should understand the value exchange. What are they giving up? What are they getting in return? And how easily can they opt out if they’re uncomfortable with the transaction?
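To make the explainability principle concrete, here is a minimal, purely hypothetical sketch in Python of what an explainable targeting decision could look like. The campaign, feature names, weights, and consumer profile are all invented for illustration and do not reflect any actual Havas model; the point is simply that a system can report which signals drove a given decision without exposing the proprietary code behind it.

# Hypothetical illustration of an "explainable" ad-targeting decision.
# All feature names, weights, and the consumer profile are invented
# for the sake of the example.

CAMPAIGN = "sports-car launch"

# A toy linear scoring model: one weight per signal, learned elsewhere.
WEIGHTS = {
    "searched_sports_cars_last_30d": 2.0,
    "follows_motorsport_accounts": 1.2,
    "age_25_to_40": 0.6,
    "recent_luxury_purchases": 0.8,
}

def score_and_explain(profile: dict) -> tuple[float, list[str]]:
    """Score a consumer profile and report which signals drove the decision."""
    contributions = {
        feature: WEIGHTS[feature] * profile.get(feature, 0.0)
        for feature in WEIGHTS
    }
    total = sum(contributions.values())
    # Human-readable explanation: the signals, ranked by their contribution.
    explanation = [
        f"{feature}: +{value:.2f}"
        for feature, value in sorted(contributions.items(), key=lambda kv: -kv[1])
        if value > 0
    ]
    return total, explanation

if __name__ == "__main__":
    consumer = {
        "searched_sports_cars_last_30d": 1.0,
        "follows_motorsport_accounts": 1.0,
        "age_25_to_40": 1.0,
    }
    total, reasons = score_and_explain(consumer)
    print(f"Campaign: {CAMPAIGN}")
    print(f"Targeting score: {total:.2f}")
    print("Because of:")
    for reason in reasons:
        print(f"  - {reason}")

In a real system the model would be far more complex, but the same discipline applies: every decision should be traceable to the signals that produced it. A consumer opt-out check could run before any such scoring, which is where the consumer-choice principle above would plug in.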

Havas’ Commitment

Such radical transparency will be unfamiliar ground for many advertisers. But the market is already demanding it. Last month, Havas introduced a new Client Trading Solution portal to build trust in programmatic advertising. Our clients were telling us there was a lack of transparency in the way online ads were placed, measured, and billed. So we will now itemize each fee and allow negotiations to be viewed in real time. We will even show clients how our rates compare with competitors’, unfavorable or not.

We will commit to similar transparency in our use of AI, which is shepherded by our Havas Cognitive division. We will take the lead in guiding our clients, partners, and competitors toward the responsible adoption of this technology. Internally, we are appointing an ombudsman to act as a public advocate across all advanced marketing technology.

Conclusion

We are advertisers, not ethicists, sociologists, or computer scientists. But that doesn’t excuse us from weighing the social impact of AI on our work. We know a line exists that can–and likely will–be crossed. And it will be difficult for our industry to distinguish what we can, should, and shouldn’t know.

But even if a marketer feels no moral obligation, there is a fragile trust among agencies, clients, and customers that supports the industry, however shakily. When that trust is abused–as it sometimes has been–the damage to a brand can be irreparable. And the effect on the advertising industry could be catastrophic.

AI has the power to drive new markets, to change behavior, to shape elections. Whether out of a sense of moral righteousness or enlightened self-interest, we must establish best practices for its use in advertising: for the good of the industry, our clients, and society as a whole.
