September 08, 2021
Facebook faced a firestorm after its AI committed an “unacceptable error.” If Facebook’s AI, backed by vast resources, elite data science, a wealth of data and years of training, can embarrass the brand, what does that mean for the rest of us?
Here’s what went down: The Daily Mail posted a video on the social network showing Black men in altercations with white civilians and police officers. After watching it, Facebook users saw an automated prompt asking whether they would like to “keep seeing videos about Primates.”
To its credit, Facebook investigated, disabled the AI-powered recommendation feature and apologized for what it called “an unacceptable error,” the New York Times reported. The company said it would look into the problem to “prevent this from happening again.”
If this sounds a little too familiar, it is. In 2015, Google mistakenly labeled pictures of Black people as “gorillas.” AI in healthcare has had a bias problem, too. A couple of years ago, six algorithms affecting millions of U.S. patients prioritized care coordination for white patients over Black patients with the same illnesses.
“These high-profile, very bad missteps will rightly give marketers and providers reason for pause, spurring the industry to think more deeply, ask the right questions and raise the bar for rapidly improving these systems,” says Jim Lecinski, co-author of The AI Marketing Canvas. “But a failure even as egregious as this one should not result in full abandonment of any powerful new technology like this.”
There’s no question that AI will be disruptive to marketers. CEOs and boards of directors are increasingly viewing marketing as the chief growth engine charged with making data-informed predictions. The ultimate goal is to “find the optimal combination of the right product at the right price, promoted in the right way via the right channels to the right people,” Lecinski says.
So how can marketers hedge their AI bets and reduce the risk of unacceptable errors? I sat down with Lecinski to get his advice, chiefly the questions marketers should ask to navigate AI’s tumultuous waters safely. Here’s what he told me:
Given what happened at Facebook, how should marketers proceed with AI?
Lecinski: Since I don’t know the details behind [the Facebook AI error], I am not able to comment on it directly. But generally speaking, marketers should start with a small-scale, low-stakes, low-budget test to see what happens with their own training data, in addition to what the machine has ingested from other people’s training data.
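To make that advice concrete, here is a minimal Python sketch of such a low-stakes test: scoring a model built on someone else’s data against a small labeled sample of your own before trusting it. The synthetic data and the DummyClassifier are hypothetical stand-ins for your records and a vendor’s model, not anyone’s actual system.

```python
# A minimal sketch of a small-scale, low-stakes test: score an
# outside model against a labeled sample of your own data before
# acting on its predictions. The synthetic data and DummyClassifier
# below are stand-ins for illustration only.
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.metrics import classification_report

rng = np.random.default_rng(seed=0)
X_own = rng.normal(size=(200, 4))        # stand-in for your own records
y_own = (X_own[:, 0] > 0).astype(int)    # stand-in labels

# Stand-in for a model trained on other people's data.
vendor_model = DummyClassifier(strategy="most_frequent").fit(X_own, y_own)
preds = vendor_model.predict(X_own)

# Per-class precision and recall expose groups the model serves
# poorly, which a single overall accuracy number can hide.
print(classification_report(y_own, preds, zero_division=0))
```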
Many marketers are still trying to determine fact from fiction. What do they need to know?
Lecinski: With any new initiative, marketers need to understand and carefully evaluate machine-learning-powered systems. Is the system really using machine learning, or just a set of if-then rules? In a genuine machine-learning system, training data is the input, a “recipe” tells the machine how to make predictions from that input, and each prediction drives some automated output application. Feedback then flows back automatically to improve the next prediction and output, forming a continuous learning loop.
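To picture that loop, here is a minimal Python sketch of predict, act, get feedback, update, using scikit-learn’s online SGDClassifier. The random data stream and instant feedback are simplifying assumptions; a real system would consume live events and delayed labels.

```python
# A minimal sketch of the continuous learning loop described above:
# the model predicts, the prediction drives an action, feedback
# returns, and the model updates. Data here is synthetic.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(seed=0)
model = SGDClassifier(loss="log_loss")   # online logistic regression

# Bootstrap: fit once on a small labeled training batch.
X_train = rng.normal(size=(100, 4))
y_train = (X_train[:, 0] > 0).astype(int)
model.partial_fit(X_train, y_train, classes=[0, 1])

for step in range(1000):
    x = rng.normal(size=(1, 4))        # a new input arrives
    prediction = model.predict(x)      # the "recipe" makes a prediction
    # ... act on the prediction (e.g., show or suppress content) ...
    true_label = int(x[0, 0] > 0)      # feedback comes back
    model.partial_fit(x, [true_label]) # the loop closes: model updates
```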
How do marketers hedge against unacceptable errors?
Lecinski: Marketers should ask a lot of questions. How widely has this AI algorithm been tried and tested for reliability of outcome? By whom, and for how long? How often is it updated? What is the training data set? Where did it come from? Who was doing the categorization and classification? How was it checked and validated? Someone had to tell the machine that this [image] is a dog and this is a blueberry muffin to help it learn. This is where breakdowns can occur.
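These questions map naturally onto a training-data audit. Below is a minimal Python sketch of three such checks: who did the labeling, whether the label mix skews by subgroup, and whether annotators disagreed. The tiny inline dataset is a hypothetical stand-in for a real labeling export.

```python
# A minimal sketch of a training-data audit prompted by the
# questions above. The inline dataset is hypothetical.
import pandas as pd

labels = pd.DataFrame({
    "item_id":   [1, 1, 2, 3, 3, 4],
    "annotator": ["a1", "a2", "a1", "a2", "a3", "a1"],
    "subgroup":  ["A", "A", "B", "A", "A", "B"],
    "label":     ["dog", "dog", "muffin", "dog", "muffin", "muffin"],
})

# Who was doing the categorization, and how much of it?
print(labels["annotator"].value_counts())

# Does the label mix differ sharply across subgroups? Skews here
# are where breakdowns like mislabeled images can originate.
print(pd.crosstab(labels["subgroup"], labels["label"], normalize="index"))

# Was the labeling checked? Items seen by multiple annotators let
# us count outright disagreements.
disagreements = labels.groupby("item_id")["label"].nunique()
print(f"Items with conflicting labels: {(disagreements > 1).sum()}")
```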
Tom Kaneshige is the Chief Content Officer at the CMO Council. He creates all forms of digital thought leadership content that helps growth and revenue officers, line of business leaders, and chief marketers succeed in their rapidly evolving roles. You can reach him at tkaneshige@cmocouncil.org.