From IAB: A Comprehensive Guide to Avoiding Negative Consequences with Artificial Intelligence
Companies that want to succeed in the future should develop genuine customer relationships built on trust. Customers reward such companies with deep engagement around products and services, and with a brand loyalty that often cannot be bought. Marketing and advertising are key practices for developing these customer relationships.
The underlying mechanisms of marketing and advertising are increasingly enhanced by artificial intelligence (AI) to capture gains in efficiency and effectiveness. Great AI systems require fastidious design, development, deployment, and maintenance by teams built on broad and diverse representation. Throughout a system's lifecycle, teams should strive to deliver AI that can be explained, trusted, and understood. It is mission-critical to understand unwanted or unintentional bias: how it originates, how it infiltrates systems and impacts models, and how it is deployed at scale in algorithms, degrading performance, potentially exacerbating societal inequities, and eroding trust.
The European Union has proposed new regulations1 for trust and excellence in AI, and U.S. state legislatures are passing bills2 to study the impact of artificial intelligence on citizens. With increased regulatory scrutiny and prioritization of AI by the Federal Trade Commission3, and with internal compliance governance on AI use forthcoming, this guide is a must-read and a starting point for companies developing frameworks for better AI solutions. It is intended for the entire value chain, not just solution developers. Focusing on bias, we draw on the real-world experience of AI professionals to define key terminology and to examine the roles and responsibilities of stakeholders: requestors, builders, end-users, compliance and legal teams, and consumers. Across four phases—awareness, exploration, development, and activation—we explore each stakeholder's responsibilities as an AI champion and arbiter of bias.
Bias is generally introduced into AI systems unintentionally by humans, but the pairing of humans and machines makes bias detectable, and mitigating that risk helps companies do the right thing for their businesses and for society.