
Marketers are currently evaluating how to effectively target and measure digital advertising at scale while adjusting for a whole bunch of ad industry shifts. And, how to do that with the same efficiency we’ve all gotten used to.

In this episode, show host Jake Moskowitz talks with Todd Touesnard, EVP of Product and Data Science at Ericsson Emodo, about a number of AI-related topics, including potential applications of machine learning for targeting advertising audiences without device IDs.

Jake’s FIVE List:

  1. Make up for lost scale
  2. Make wiser decisions in the face of rising costs for ID-based inventory
  3. Make contextual targeting more flexible by incorporating a wider definition of context
  4. Allow marketers to focus on data quality rather than quantity
  5. Be more privacy compliant

Jake and Jeremy Lockhorn meet up again to explore some really interesting ways to experience AI personally. They share four browser-based applications that are free, easy to access and easy to use. And, they’re really fun.


Transcript of Episode 8: AI’s Role in Identity

Jake Moskowitz 0:01
Problem is, you don’t have this ID to be able to sort of target users and locate the intended audience the same way that you did before. And I think that that’s exactly where machine learning models can help.

Let’s talk AI. Welcome to FIVE, the podcast that breaks down AI for marketers. This is Episode 8: AI’s Role in Identity. I’m Jake Moskowitz.

Unless you’ve been living under a rock for the last six months, you know that the advertising industry is undergoing an identity crisis. I’m not talking about an industry getting in touch with its inner self, though that wouldn’t be a misstatement. I’m talking about the deprecation of device IDs and cookies. Two important steps when learning about a new technology like AI, or when understanding exactly how to apply it in a specific use case, are getting your hands on it and trying it yourself. This episode will help you do both. The future of identity is a timely industry issue that can serve as our AI use case; in fact, it has become one of the most prominent and uncertain topics in marketing today. In this episode, I’ll talk with a product leader about a number of AI-related topics, including potential applications of machine learning for targeting audiences without device IDs. As for getting your hands dirty and experiencing AI personally, Jeremy Lockhorn and I will explore some really interesting ways to do so a little bit later in the show. They’re free, easy to access and easy to use. And they’re really fun.

But let’s start with identity. It’s kind of a big deal. It’s hard to imagine digital advertising without the sophisticated targeting we as an industry have developed, sharpened, and made the very core of our marketing strategies. The ability to link an exposure on a device to a specific outcome has enabled digital ad measurement to go far beyond that of traditional media. I mean, think about multi-touch attribution, and controlled lift studies that measure in-store sales and traffic. Or how about the ability to find devices again; just that capability ignited the explosion of retargeting. And the ability to aggregate data over time for a particular device enabled all kinds of capabilities we sort of take for granted now: stuff like frequency capping and building behavior-based audiences, custom audiences and lookalike audiences.

But identity is changing. And we’re facing a future in which there’s less ability to accumulate massive amounts of device-level data, and less ability to use the data we do have. We have privacy regulations like CCPA and GDPR. And sweeping policy changes, like those in Apple’s iOS that deprecate the mobile device ID. There’s the deprecation of third-party cookies in Google’s and Apple’s browsers. And of course, there’s the changing consumer, who’s becoming much more aware of and sensitive about data tracking. That’s a lot. So marketers are currently evaluating how to effectively target and measure digital advertising at scale while adjusting for all these changes, and how to do that with the same efficiency we’ve all gotten used to.

The issue of identity is a really timely, relevant example to get into. In order to dive in deep and early, we kind of need to go behind the scenes. One way to do that is to open our own kimono. So I’ve invited Todd Touesnard to join me. Todd is the head of product and data science at my company, Ericsson Emodo. Todd, thank you for joining me. I’ve noticed, just from watching how we work as a company, that so much of your product work in building AI products connects you to every other part of the company.

Todd 4:13
Yeah, I think that’s right. Achieving success with this type of product goes well beyond, you know, the engineering, data science and product teams. I think it requires alignment across most functions of the company. You called out a few: BD working in tandem with, you know, potentially legal and data privacy to understand and secure data; cross-functional discussions with marketing to communicate value; sales needing to engage with customers and also, from the front line, provide feedback on the market. And then Ops, the operations team, at least in a marketing context here, ensuring that the algorithm is being leveraged to its full capacity, which is very, very important to help measure success in the wild once you’ve deployed a new AI type of product. So it is absolutely key. And yes, it goes well beyond the technical components of building a product like this; all functions need to be lined up and going in the same direction.

Jake Moskowitz 5:13
That might be the best depiction I’ve heard of the cross-company nature of an AI effort. So AI is not a product thing or a data science thing. But to do it right, you need support and involvement across the company. When a company is focused on machine learning as a differentiator, or as a way to go to market, the entire company needs to be behind that effort, all working together as one.

Todd 5:37
Absolutely agree, 100%.

Jake Moskowitz 5:39
What have you learned the hard way about building AI products that you could never have possibly known without personal experience?

Todd 5:47
I’ll call out two different areas. One is on the implementation side; it can certainly be a challenge, and I’ve spent a lot of time on this particular one. An example within implementation where you can have a challenge is around performance. We have several models that we’ve had to implement over time that are highly computationally expensive, and there’s a huge cost to running them in a full-scale production environment that just really, you know, wasn’t obvious in our lab. So that’s something I would note.

Another example is effectively rolling out a real-world testing program; it can be much tougher to do than in the lab. So really thinking through and pulling together a production-focused plan to verify that your model is doing what it’s supposed to be doing is a smart approach, and something that I learned. So those are two pieces around implementation. The second area is the modeling itself: for starters, it’s not always obvious what’s going to be useful or what’s going to work, and domain expertise and hands-on experience are key in this particular step. An example from my work: we work with really large mobile operator datasets, where we have a comprehensive view of device and data consumption patterns. It’s a huge data set, and taking that data and engineering intelligent inputs can be very, very difficult and a tremendous amount of work. It can also change based on geography.

Another huge problem I’d highlight is that just getting good data can be a tremendous challenge. In my experience, data that fits all the key criteria to build great models is really hard to come by. It can’t just be, you know, a huge data set; it needs to be a data set that represents all of the geographies you need, if that’s the case, and all the relevant classifications of data, but it also needs to be trustworthy. Accuracy is huge. And then what’s really important, especially today in marketing, is data privacy considerations and data usage rights. Those are very complicated problems that aren’t technical in nature, but they are huge challenges that need to be dealt with.

Jake Moskowitz 8:00
Do you think the industry mistakenly thinks of gathering training data as a checkbox? Like, yes, we have it or no, we don’t have it. That there’s an underappreciation for the characteristics that make for successful AI, like the trustworthiness, consistency, accuracy, or usage rights of your training data?

Todd 8:25
Absolutely. They can often be the most difficult problems to solve.

Jake Moskowitz 8:31
I want to reflect back on something you said: that models can be computationally heavy, and thus, I assume, expensive. It got me thinking that building an AI product is a delicate balancing act, because you obviously want to maximize performance, but you also have to keep costs in line so that the model provides more value to the client than it costs you to build, and thus is financially sustainable. Some market needs presumably can’t ever find that balance. So that’s part of the trickiness of using machine learning to solve industry needs: sometimes it just can’t solve them, because you can’t find that balance. Does that sound right?

Todd 9:08
Yeah, that’s right, spot on.

Jake Moskowitz 9:10
Where do you see the biggest use cases for machine learning within marketing in the next year or two?

Todd 9:16
Yeah, one use case that is certainly top of mind for me and most companies in the digital advertising space is around identity. You mentioned this at the top of the discussion. Access to third-party cookies and mobile ad identifiers will be to some extent deprecated or go away entirely, and most believe that this will have a huge impact on the ecosystem. I agree: core capabilities such as ad targeting, retargeting, frequency capping and measurement will certainly be affected, and there will be a lot of change. With this change, though, I think there’s a huge opportunity for AI to play a strong role in solving some of these problems.

Jake Moskowitz 9:58
Just to clarify: there’s a lot of industry chatter around alternative ID structures like UID 2.0 as a replacement for cookies and mobile ad IDs, and about old-school contextual targeting making a comeback. When you talk about AI playing a role in the future of identity, is that what you’re referring to?

Todd 10:18
Yes, absolutely. I mean, if you just drill into the problem around ID deprecation and look at what’s actually happening, users are going to have to explicitly opt in, site by site or app by app, in order for their data to be leveraged in the way that it is today. That poses a huge challenge in terms of, you know, advertisers having the ability to reach their customers. For example, even if a publisher has a tremendous database of 12 years of data built up on a particular user, if the user does not opt in, that data can no longer be used in the way it is directly used today. So this could present a huge challenge. I think predictive modeling is an area that could certainly play a big role, and I believe the industry will certainly look to adapt in that way.

Jake Moskowitz 11:08
To be clear, you’re saying databases won’t be as useful, regardless of their size, because there will be less inventory with an ID to match to, and we’ve got to make up for that loss of scale elsewhere.

Todd 11:21
Yeah. And I think another dynamic that could surface is that the cost of ID-based inventory will probably rise. So the cost of inventory, and how you allocate marketing spend most effectively, is certainly another area that’s ripe for some AI-based innovation.

Jake Moskowitz 11:39
So as cookies and mobile ad IDs go away, you need a varied approach. There’s no silver bullet: you need alternative identifiers, you need contextual targeting, and you also need machine learning-based approaches.

Todd 11:51
Yeah, that’s right. And as far as contextual targeting goes, I think that’s something that’s really evolving. It used to mean, you know, a user’s on a particular website, so serve them a specific type of ad based on that context. But context doesn’t necessarily just mean the content around the ad, or what the user has searched, anymore. I think with AI-driven approaches, we’re able to deliver a much more sophisticated understanding of what context is: leveraging more than just the content the user is interacting with, and also deriving context from, you know, time of day, day of week, the physical location of the device the user is holding, the orientation of the device, the movement of the device, the weather, proximity to home, and it goes on and on. I just really think there’s a huge opportunity to take context to the next level.
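The expanded definition of context Todd describes can be pictured as a feature vector: instead of just the page content, a model sees many situational signals at once. Here is a minimal Python sketch of that idea; the signal names and encodings are hypothetical, invented for illustration rather than taken from any real bid specification:

```python
from dataclasses import dataclass

@dataclass
class AdContext:
    """Hypothetical situational signals for one ad opportunity."""
    hour_of_day: int     # 0-23
    day_of_week: int     # 0 = Monday
    device_moving: bool  # e.g. derived from successive location readings
    near_home: bool      # proximity to an inferred home area
    weather: str         # e.g. "rain" or "clear"

def to_features(ctx):
    """Flatten the signals into a numeric vector a model could consume."""
    return [
        ctx.hour_of_day / 23.0,             # scaled to [0, 1]
        ctx.day_of_week / 6.0,
        1.0 if ctx.device_moving else 0.0,
        1.0 if ctx.near_home else 0.0,
        1.0 if ctx.weather == "rain" else 0.0,
    ]

# A weekday-morning commuter moment, encoded for a model.
ctx = AdContext(hour_of_day=8, day_of_week=1, device_moving=True,
                near_home=False, weather="rain")
print(to_features(ctx))
```

In practice a model would consume millions of such vectors, with far more signals and more careful encodings; the point is only that "context" becomes whatever you can measure about the moment, not just the words on the page.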

Jake Moskowitz 12:46
That’s really interesting. So let me play that back. Contextual targeting is indeed coming back into vogue, but it’s not necessarily your parents’ contextual targeting, so to speak. The definition of context has expanded beyond just the words on the page you’re looking at, to include things like time of day, device location, or inventory source, which are contextual to the moment in time in which the ad slot is available. That sounds exactly right. When it comes to the use of machine learning to fill the gap in scale resulting from cookie and ID loss, what will differentiate those that are successful at it from those that aren’t?

Todd 13:25
Yeah, I think there are quite a few things. One I’d start with is access to high-quality, accurate, and potentially unique datasets that exhibit many of the characteristics we talked about earlier around size, representation and trustworthiness, but also data privacy. So I think that’s a big piece.

Another, maybe not-so-obvious characteristic that I think is important is just having the domain expertise to straddle the art and the science of building great AI products. I just think that’s really important, and not something that’s obvious, or that people think about.

And a final thing that pops to mind is the ability and willingness to adapt, which is more of a business question. For example, on the inventory side, publishers may start exposing additional attributes that bidders can leverage in their decisioning models, and exchanges may do something similar with, you know, new categories of metadata. So adapting to that. And back to data privacy: we’ve faced quite a few challenges over the last few years, but there will be new ones. So for companies to be successful with data, they’re going to have to face those challenges and adapt.

Jake Moskowitz 14:36
How do I balance the need for good, accurate, detailed training datasets simultaneously with the need to step up privacy? Don’t those conflict?

Todd 14:46
Yeah, it’s challenging. It’s not a simple problem, and often these are, you know, business problems that are not obvious. For example, I spend a lot of time in discussions with our legal or data privacy teams that aren’t really about the data attributes themselves; they’re about the business context around them, or the usage for that particular use case. These areas come at the beginning of the entire process: you have to understand what’s possible, what’s going to be safe, and all the regulations wherever you’re operating, before you can move on to the next step of trying to come up with a technology solution.

Jake Moskowitz 15:26
Let me throw something at you and see what you think. I’m generally of the belief that the ad industry has been obsessed with quantity of data: how large is this segment? How many consumers can I reach? How high is my match rate? It feels to me like the emergence of machine learning requires a change of industry mindset away from quantity, toward quality of data. So it’s no longer a race to see who has the most data; it’s now a race to see who has the highest-quality data, because you don’t need as much scale for a training data set as you need for, say, a deterministic targeting segment. Like we’ve covered, with a training data set you need depth, accuracy, representativeness and consistency, and it seems to me that opt-in data can work for all of that. So it’s possible to achieve success while simultaneously adhering to a higher bar of data privacy. Do you agree with that?

Todd 16:22
Yeah, I think there is value in having a small but strong panel. But it needs to be the right kind of panel: it needs to represent everything it needs to represent, it needs to be accurate, and it needs to have the representation that you need. It needs to be the right kind of balance, basically.

Jake Moskowitz 16:41
I want to make sure we get a bit more specific on explaining how you’re envisioning using machine learning to go beyond cookies and ad IDs, and beyond contextual targeting. Could you walk us through that?

Todd 16:53
Yeah. Well, there could be a problem with reaching scale, and ML and AI could come in to help solve that in a way that can be done accurately, which is the key. Let’s just assume that the inventory that’s out there is still going to be the same as it was before. The problem is, you don’t have this ID to be able to target users and locate the intended audience the same way that you did before. So how do you make up for that in a way that can be done accurately? I think that’s exactly where machine learning models can help: predicting, hey, this particular anonymous bid request, as an example, can be classified as a business traveler, even though we don’t have an identifier.
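Todd’s example, labeling an anonymous bid request as a "business traveler" with no identifier, is at heart ordinary supervised classification on contextual metadata. The sketch below uses a deliberately tiny 1-nearest-neighbor rule in plain Python; the features, labels, and training rows are all invented for illustration, and a production system would use a far richer model and far more data:

```python
import math

# Toy training set: contextual metadata from past bid requests that we
# could label (e.g. via an opted-in panel). The feature layout is made up:
# [hour_of_day / 23, is_weekday, at_airport]
TRAIN = [
    ([7 / 23, 1.0, 1.0], "business_traveler"),
    ([9 / 23, 1.0, 1.0], "business_traveler"),
    ([21 / 23, 0.0, 0.0], "other"),
    ([23 / 23, 0.0, 0.0], "other"),
]

def classify(features):
    """1-nearest-neighbor: label an ID-less request by the closest
    labeled example in contextual-feature space."""
    nearest = min(TRAIN, key=lambda ex: math.dist(ex[0], features))
    return nearest[1]

# A fully anonymous request: no cookie, no device ID, just context.
print(classify([8 / 23, 1.0, 1.0]))   # weekday morning at an airport
```

The design point is that nothing here identifies a person; the model generalizes from labeled context patterns to new, anonymous requests, which is exactly the gap-filling role Todd describes.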

Jake Moskowitz 17:37
Perfect. Todd, this has been extremely helpful. Thank you.

Todd 17:41
You’re very welcome. That was fun.

Jake Moskowitz 17:45
Okay, there’s a lot in there. If I had to highlight the top takeaways, the key ways that AI can help the industry adjust for the changes to identity, I’d say these are the five. Number one: make up for lost scale. It doesn’t matter how large deterministic data sets are, regardless of their first- or third-party nature; if a user doesn’t opt into the site or app where the ad slot is available, that database is useless. Fortunately, machine learning is useful in predicting user characteristics based on programmatic metadata.

Number two: make wiser decisions in the face of rising costs for ID-based inventory. With less ID inventory, costs will rise, putting pressure on marketers to be more efficient. AI is the best tool to optimize for efficiency against any KPI.

Number three: make contextual targeting more flexible by incorporating a wider definition of context. Contextual used to mean the words on the page where the ad resides. Now it can mean anything from time of day to device type, to publisher, to location, to inventory source, and a lot more.

Number four: allow marketers to focus on data quality rather than quantity. You don’t need as much data when you’re using it to train a model rather than to target deterministically. But it’s important to remember that smaller training datasets need to be representative, accurate, thorough and complete.

And finally, number five: be more privacy compliant. Requiring less data means less need for opt-out data. Opt-in data can be sufficient, even given lower opt-in rates. Using data for anonymous algorithm training requires a lower bar of usage rights as compared to deterministic matching for one-to-one targeting.

Before we go, I asked our regular guest Jeremy Lockhorn to come back and join us again, and we decided this week to talk about ways to put AI into action. I’m generally of the philosophy that you can’t really learn something purely theoretically; you have to do it, you have to experience it personally. And AI is a really hard one to do, because a lot of people feel like you need to be a computer scientist, or a data scientist, or maybe even both, in order to do machine learning.

But the reality is, there are a bunch of tools out there made for people who are not data scientists or computer scientists to experience AI for themselves. Even my colleague Jeremy was a little skeptical at first. But we went out into the ether, played around with a bunch of different tools, and found some that we really, really liked. So we wanted to share those, because we think it’s important that listeners personally experience AI for themselves. So with that, Jeremy, why don’t you tell us about your favorite?

Jeremy Lockhorn 21:05
Yeah, so I’ve got a couple. But the first one was QuillBot, which is a text summarization tool. What’s really interesting to me about it is that lots of different businesses need to share industry news amongst, you know, key executives and stakeholders. Here at Emodo, we’ve got a dedicated Slack channel where we post articles and discuss implications from the ad ecosystem and the industry. Part of the challenge, of course, is just that there’s so much content out there that it’s hard to consume and digest everything. So QuillBot was really interesting to me, because it has the ability to basically ingest articles and summarize them. I gave it a bunch of different types of content, from a few ad industry trade articles to some politics and some sports, and in every case I was using articles that were, you know, between 500 and 750 words or so. It was able to summarize them down to 100-150 words. And it not only captures the key salient points; perhaps what’s most impressive is that it goes beyond just extracting phrases from the original article. It seems to be able to interpret and understand meaning, and sort of write some of its own interpretation, which creates these transitions between the salient points it’s extracting directly from the original text. So super interesting and potentially really valuable.
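QuillBot’s summarization is abstractive (it writes new connective text, as Jeremy notes), which takes a large neural model. But the simpler extractive idea behind many summarizers, scoring each sentence by the frequency of its words in the document and keeping the top few, fits in a few lines of plain Python. This is a toy illustration of the concept, not how QuillBot actually works:

```python
import re
from collections import Counter

def summarize(text, n_sentences=2):
    """Toy extractive summarizer: rank sentences by how many of the
    document's frequent words they contain, keep the top n in order."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sentence):
        return sum(freq[w] for w in re.findall(r"[a-z']+", sentence.lower()))

    top = sorted(sentences, key=score, reverse=True)[:n_sentences]
    # Emit the chosen sentences in their original order.
    return " ".join(s for s in sentences if s in top)

doc = ("Device IDs are going away. Marketers rely on device IDs for "
       "targeting. The weather was nice today. Machine learning can "
       "predict audiences without device IDs.")
print(summarize(doc, 2))  # keeps the two on-topic sentences, drops the weather one
```

A frequency heuristic like this can only copy sentences; the leap Jeremy describes, writing new transitions that interpret the text, is what separates abstractive neural summarizers from extractive ones.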

Jake Moskowitz 22:25
It reminds me that one of the big use cases for AI during COVID has been summarizing all of the research done about COVID, because there’s never been more research in a shorter period of time about any particular subject. Simply the ability to aggregate it all up into overall findings across thousands of academic papers has been a key use case for AI. So it sounds like QuillBot is a similar use case. I’ll share one that’s really important in showing how an algorithm gets trained, and I think it’s really fun. It’s called Teachable Machine. You go to the screen, and it says class one and class two, and you have to upload files for class one, and then upload files for class two. You can train it on a bunch of different things, like sounds and words, and I chose pictures. So for class one, I uploaded work-related pictures: screenshots and charts and pictures of slides, back when we used to go to conferences live. And then in class two, I did family pictures, so just my wife and kids and me. And I tried to, you know, be really specific about which class got which kind of picture. Then, after you’ve uploaded pictures, or whatever you choose to upload, to class one and class two, you go over to the resulting place where you upload a new file, and you don’t tell it which one it is. It uses the training information you gave to class one and class two to decide which one the file you’re uploading belongs in: is it a class one, or is it a class two? So I used a work slide and a family picture, and it was 100% accurate in knowing for sure which class each was in. And the entire algorithm was trained by me within two minutes, with no engineering and no data science. So it was a really fun little tool for understanding how an algorithm can get trained. I’d be curious, if I kept going with it, how specific and how accurate it could get. For instance, I have identical twin daughters. So if I uploaded one identical twin to class one and the other to class two, could it tell the difference? Frankly, I doubt it, because Google Photos cannot tell the difference between my identical twins, but it’d be interesting to see how far you can push it.
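Under the hood, Jake’s class one / class two exercise is two-class supervised learning. Here is a toy re-creation in plain Python, with tiny hand-made "images" (short lists of pixel brightness values) and a nearest-centroid rule standing in for the neural network Teachable Machine actually trains; all of the data is invented:

```python
# Class 1 ("work slides"): mostly bright pixels (white backgrounds).
class_one = [[0.9, 0.95, 0.9, 0.85], [0.8, 0.9, 0.95, 0.9]]
# Class 2 ("family photos"): darker, more varied pixels.
class_two = [[0.2, 0.4, 0.3, 0.1], [0.3, 0.2, 0.4, 0.2]]

def centroid(samples):
    """Average pixel vector of a class: our entire 'training' step."""
    return [sum(px) / len(samples) for px in zip(*samples)]

c1, c2 = centroid(class_one), centroid(class_two)

def predict(image):
    """Assign a new image to whichever class centroid is closer."""
    d1 = sum((a - b) ** 2 for a, b in zip(image, c1))
    d2 = sum((a - b) ** 2 for a, b in zip(image, c2))
    return "class one" if d1 < d2 else "class two"

print(predict([0.85, 0.9, 0.8, 0.9]))   # a bright, slide-like image
print(predict([0.25, 0.3, 0.2, 0.15]))  # a dark, photo-like image
```

The identical-twins question maps directly onto this sketch: when the two classes have nearly identical feature distributions, the centroids almost coincide and the decision rule has nothing to separate on, which is why fine-grained distinctions demand far richer features than any quick demo provides.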

Jeremy Lockhorn 24:44
That’s fascinating. Very cool. So one of my other favorites was this thing called Semantris, which is basically two different amazing word association games powered by AI. Both are super addictive, so I’ll just share that as a warning to our listeners. What’s interesting is that it puts a bunch of words on the screen, and then you need to type in words that you believe are associated with the words on the screen. There are different game mechanics between the two versions of the game; I won’t get into that. But what’s really interesting is that the training data set is largely based on inputs from people who are playing the game. So it kind of puts you in this mode where you’re trying to second-guess the AI: I see a word on the screen, and the first word that comes to my mind is X, but I kind of have to think about what’s most popular as well, right? So you’re kind of second-guessing yourself a little bit.

And one specific example: one of the games I played had the word lemonade on the screen, and I immediately went to stand, right? So I put in stand; it just seemed like a natural association to me. But it didn’t have the desired effect. The AI didn’t associate that with lemonade, partially because farm and night were also words on the screen. So when I put in stand, it immediately associated that with night, and then I was left with farm and lemonade still on the screen. So I entered stand again, and it wound up choosing farm. And so it went, and I kind of never got my lemonade stand. But it was a fascinating experiment.

Jake Moskowitz 26:14
That’s actually a good way to think about the role of a human and a machine working together. When you have an AI tool, no matter how powerful it is, you have to understand as a human what it’s good at and what it’s not good at, so you know how to provide inputs that allow the algorithm to be as effective as possible. So that’s a really good depiction of that.

The other one I wanted to talk about was really my favorite, just in terms of fun. It’s called Quick Draw. Quick Draw is like a blank doodle pad, and it tells you something to draw; for instance, one that I got was saxophone. And just to quell any listener concerns, I am the worst drawer in the world, and you do not need to be a good drawer. I just drew what I imagined the saxophone to be, and as I’m drawing it, the computer is guessing what I’m drawing. The goal of the game is to draw it well enough to get the computer to realize that you’re trying to draw a saxophone. It’s absolutely hilarious that, as you’re going along, it’s guessing different things that have nothing to do with it.

So you might draw the end of the saxophone, and it might think ball. Or you might draw the bottom of the saxophone, the way it curves, and it might say moon as you’re drawing it. And then as you get closer and closer, it says, ah, it’s a saxophone. Bam. The way it works is similar to Semantris: it’s trained on pictures of saxophones that people have drawn in the past. So, you know, it’s built on tens of thousands, hundreds of thousands, of pictures of saxophones that humans have uploaded. It’s a self-training algorithm that’s constantly getting better. But it’s also just totally fun.

Jeremy Lockhorn 27:53
I remember when they first released it, it wasn’t gamified like it is now; it was left totally open-ended, so you could draw whatever you wanted and the computer would try to guess what it was. And you can imagine that people were drawing inappropriate things and seeing if the computer could guess them. But they’ve tried to eliminate some of that, I guess, with the gamification they’ve set it up with now.

Jake Moskowitz 28:16
That’s funny. But I wonder if another reason for that is that, at the beginning, it didn’t have enough training data to be able to accurately tell you what you were drawing, and so maybe that was their way of gathering training data or something like that.

Jeremy Lockhorn 28:29
Very well could be.

Jake Moskowitz 28:31
Anyway, this has been a really fun little experiment, talking through these little tools. We encourage folks to Google around, because there are dozens of different tools that require no data science or computer science to experience AI and learn about how AI functions. And whatever you choose, we highly encourage you to try it out yourself, because it’s important to learn about AI in a practical way, not just a theoretical way. So Jeremy, thanks a lot for joining again.

Jeremy Lockhorn 28:57
Yeah, my pleasure. Thanks for having me.

Jake Moskowitz 29:05
I’d like to thank my guests Todd Touesnard, EVP of Product and Data Science, and Jeremy Lockhorn, Global Head of Partner Solutions, both from Ericsson Emodo. This one was kind of a family episode. Thanks, guys. Hope to see you in the office one of these days. On the next episode of FIVE, we’ll wrap up this season by circling back with some of our guests to make sense of how AI will become more intertwined in the work you do going forward. And hey, if you like the show, please write us a comment and give us a rating on your favorite podcast listening platform. We’d be super grateful; it definitely helps more people discover the podcast. Thanks for joining us.

The FIVE podcast is presented by Ericsson Emodo and the Emodo Institute, and features original music by Dyaphonic and the Small Town Symphonette. Original episode art is by Chris Kosek. This episode was edited and mixed by Justin Newton and produced by Robert Haskitt, Liz Wynnemer and me. I’m Jake Moskowitz.
