Algorithms are often biased. In fact, it may be impossible for an algorithm to be completely unbiased. There are two primary reasons: the way the algorithm is programmed and the data on which it’s trained. AI is ultimately biased because people write the algorithms and select the data used to train them.

How can marketers reduce the bias in their algorithms? In this episode, “AI Bias: A Tale of Sheep & Field,” Jake Moskowitz and his guests explore AI bias: what causes it, examples of algorithmic bias, and how it can skew the results of marketing efforts.

Hear from industry experts and thought leaders: 

  • Rishad Tobaccowala, Senior Advisor to the Publicis Groupe and author of the best-selling book “Restoring the Soul of Business: Staying Human in the Age of Data”
  • Shelly Palmer, CEO of the Palmer Group and host of “Think About This with Shelly Palmer and Ross Martin”
  • Ella Chinitz, Managing Director at EY and data science veteran of marketing and advertising
  • Ray Velez, Global Chief Technology Officer at Publicis Sapient

Jake and his guests discuss the different types of AI bias and some effective ways to reduce bias in marketing. How do we train AI to reduce bias? How can AI bias negatively impact marketing efforts? Understanding the answers to these questions can propel your marketing efforts to the next level.

Shelly Palmer tells the story of sheep in fields and the different ways that training data can affect AI bias. Ella Chinitz brings color and expertise to Jake’s FIVE list. Rishad Tobaccowala outlines the human aspect of AI and his six “I’s” of extracting value from data streams. Ray Velez discusses the negative effects of AI bias and how increasing the diversity of your data set can lead to better results.

The FIVE list: Five ways AI can negatively impact marketing and advertising:

  1. Conquesting
  2. Cross-platform attribution
  3. Targeting lower value customers
  4. Short-tail bias
  5. Early adopter bias

The FIVE podcast is presented by Ericsson Emodo and the Emodo Institute, and features original music by Dyaphonic and the Small Town Symphonette. This episode was edited by Justin Newton and produced by Robert Haskitt, Liz Wynnemer, and Jake Moskowitz.


Transcript of S2 E2: AI Bias - A Tale of Sheep & Field

I think the clearest example of a bad AI is AI that is producing bias. Let’s talk AI.

Welcome to FIVE, the podcast that breaks down AI for marketers. This is Episode Two: A Tale of Sheep & Field.

I’m Jake Moskowitz.

Way back in 1974, the Equal Credit Opportunity Act was established to prohibit lenders from considering a borrower’s gender when issuing credit and determining credit terms, limits, and interest rates. Yet in 2019, 45 years later, Apple and Goldman Sachs were publicly called out and excoriated in the press for doing exactly that. Back in 1964, amendments to the Civil Rights Act protected job applicants from employment discrimination based on gender. Yet only a couple of years ago, Amazon found itself reevaluating internal hiring techniques that did exactly that. You may remember these incidents; they were high-profile national news stories. But here’s the thing: those aren’t stories about people discriminating against people. They’re stories about AI algorithms and the bias that’s programmed into them, likely without even the data science teams realizing it. These days, we see stories like these popping up pretty often.

There’s a lot of talk about bias right now, especially given the conversation around racial injustice. We talk about inherent bias, systemic bias, racial bias. So your first inclination is probably to assume that this episode’s conversation about bias is about those ugly human aspects of our culture. But when we talk about algorithmic bias, we’re talking about AI and one of its most significant and common flaws. Whether it’s about racial bias in the ads presented to Facebook users, political bias in content recommendations on YouTube, or bias that favors white