How to Avoid Buying Biased AI-Based Marketing Tools


In a previous article, I described how to ensure marketers minimize bias when using AI. When bias creeps in, it can significantly degrade efficiency and ROAS. It is therefore essential that marketers take concrete steps to minimize bias in the algorithms they use, whether those are built in-house or supplied by third-party vendors.

In this article, we’ll take the next step and document the specific questions to ask any AI vendor to ensure they’re minimizing bias. These questions can be part of an RFI (request for information) or an RFP (request for proposal), and they can serve as a structured approach for periodic reviews of AI vendors.

Marketers’ relationships with AI vendors can take many forms, depending on which AI building blocks are internal and which are external. At one end of the spectrum, marketers often leverage AI that comes completely out of the box from a vendor. For example, a marketer can launch a campaign against a predefined audience within their DSP (demand-side platform), and that audience can be the result of a lookalike model built from a seed set of audience data supplied by the vendor.

At the other end of the spectrum, marketers can choose to use their own training dataset, do their own training and testing, and simply leverage an external technology platform to manage the process, or bring their own algorithm to a DSP (“BYOA,” a growing trend). There are many variations in between, such as providing a marketer’s first-party data to a vendor to create a custom model.

The list of questions below is for the scenario in which a marketer leverages a completely out-of-the-box, AI-powered product. This is largely because these scenarios are the most likely to be offered to a marketer as a black box, and therefore present the most uncertainty and potentially the greatest risk of undiagnosed bias. Black boxes are also harder to differentiate, which makes comparing vendors especially difficult.

But as you will see, all of these questions are relevant to any AI-based product, no matter where it was built. Even when parts of the AI-building process are internal, these same questions are important to ask internally as part of that process.

Here are five questions to ask vendors to ensure they are minimizing AI bias:

1. How do you know your training data is accurate?

When it comes to AI, it’s garbage in, garbage out. Having great training data doesn’t necessarily mean great AI, but having bad training data guarantees bad AI.

There are several reasons why certain data can be bad for training, but the most obvious is that it is inaccurate. Most marketers don’t realize how inaccurate the datasets they rely on are. In fact, the Advertising Research Foundation (ARF) just published rare insight into the accuracy of demographic data in the industry, and its findings are eye-opening. Industry-wide, “Having kids at home” data is incorrect 60% of the time, “Single” marital status is incorrect 76% of the time, and “Ownership of a small business” is incorrect 83% of the time! To be clear, these are not the results of models predicting these consumer designations; these are inaccuracies in the datasets presumably used to train models!

Inaccurate training data disrupts the algorithm development process. For example, suppose an algorithm optimizes dynamic creative elements for a travel campaign based on geographic location. If the training data is based on inaccurate location data (very common with location data), it might appear, for example, that a consumer in the southwestern United States responded to an ad for a driving vacation to a Florida beach, or that a consumer in Seattle responded to one for a fishing trip in the Ozark Mountains. The result is a very confused model of reality, and therefore a suboptimal algorithm.

Never assume your data is accurate. Consider the source, compare it to other sources, check for consistency, and check against truth sets whenever possible.
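As a rough illustration of that last point, here is a minimal Python sketch of checking a vendor attribute against a truth set. The record structure and field name are hypothetical, and a real validation would match consumers by a stable ID rather than by position:

```python
from typing import Dict, List


def attribute_error_rate(records: List[Dict], truth: List[Dict], attribute: str) -> float:
    """Fraction of records whose value for `attribute` disagrees with the truth set.

    Assumes `records` and `truth` are aligned by index (same consumer order).
    """
    mismatches = sum(
        1 for rec, true_rec in zip(records, truth)
        if rec.get(attribute) != true_rec.get(attribute)
    )
    return mismatches / len(truth)


# Example: a vendor dataset checked against a small panel-based truth set.
vendor = [{"single": True}, {"single": True}, {"single": False}, {"single": True}]
panel = [{"single": True}, {"single": False}, {"single": False}, {"single": False}]
print(attribute_error_rate(vendor, panel, "single"))  # 0.5
```

Running the same check across every attribute you plan to train on gives you a per-attribute error profile to compare against a vendor's claims.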

2. How do you know your training data is complete and diverse?

Good training data also needs to be comprehensive, meaning you need plenty of examples describing every scenario and outcome you are trying to drive. The more complete the data, the more confident you can be in the patterns you find.

This is especially relevant for AI models designed to optimize rare outcomes. Freemium mobile game download campaigns are a great example. Games like these often rely on a small percentage of “whales,” users who make a large volume of in-game purchases, while most users buy little or nothing at all. To train an algorithm to find whales, it is very important that the dataset contain plenty of examples of whales’ purchase journeys, so the model can learn the patterns of who ends up becoming a whale. Left uncorrected, a training dataset will be biased toward non-whales, since they are far more common.
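One common corrective, assuming the whales in the training set are labeled, is to rebalance the data before training. The sketch below (with a hypothetical `is_whale` label) oversamples the rare class; class weights or stratified sampling are equally valid alternatives:

```python
import random


def oversample_minority(examples, label_key="is_whale", seed=0):
    """Duplicate rare positive examples until the two classes are balanced.

    A simple corrective for a dataset dominated by non-whales.
    """
    rng = random.Random(seed)
    positives = [e for e in examples if e[label_key]]
    negatives = [e for e in examples if not e[label_key]]
    if not positives:
        raise ValueError("no positive examples to oversample")
    balanced = negatives + [rng.choice(positives) for _ in range(len(negatives))]
    rng.shuffle(balanced)
    return balanced


# 2 whales among 100 users becomes a 50/50 training set.
data = [{"is_whale": True}] * 2 + [{"is_whale": False}] * 98
balanced = oversample_minority(data)
print(sum(e["is_whale"] for e in balanced), "whales of", len(balanced))  # 98 whales of 196
```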

Another angle to add to this is diversity. If you are using AI to bring a new product to market, for example, your training data will likely consist mostly of early adopters, who can skew in particular ways in terms of HHI (household income), life stage, age, and other factors. When trying to “cross the chasm” with your product to a more mainstream consumer audience, it is essential to have a diverse training dataset that includes not only early adopters but also a representative sample of later adopters.
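One way to quantify that skew, sketched below with hypothetical segment labels and target shares, is to compare each segment's share of the training data with its share of the target population:

```python
from collections import Counter


def representation_gap(training_rows, target_shares, key):
    """Compare each segment's share in the training data with its target-population share.

    Returns {segment: training_share - target_share}; large negative values flag
    under-represented segments (e.g., mainstream consumers vs. early adopters).
    """
    counts = Counter(row[key] for row in training_rows)
    total = sum(counts.values())
    return {
        seg: counts.get(seg, 0) / total - share
        for seg, share in target_shares.items()
    }


# 80% of training rows are early adopters, but the target market is 70% mainstream.
rows = [{"segment": "early_adopter"}] * 80 + [{"segment": "mainstream"}] * 20
target = {"early_adopter": 0.30, "mainstream": 0.70}
print(representation_gap(rows, target, "segment"))  # mainstream is ~50 points short
```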

3. What testing have you done?

Many companies focus their AI testing on the overall success of the algorithm, such as its accuracy or precision. That is certainly important, but testing for bias specifically cannot stop there. A good way to test for bias is to document the specific subgroups that are critical to the algorithm’s primary use cases. For example, if an algorithm is set up to optimize for conversion, we might run separate tests for large versus small items, new versus existing customers, or different types of creative. With that list of subgroups in hand, we then track the same set of algorithm success metrics for each individual subgroup, to find where the algorithm performs significantly worse than it does overall.

The recent IAB (Interactive Advertising Bureau) report on AI Bias offers a comprehensive infographic to guide marketers through a decision tree process for this subgroup testing methodology.
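The subgroup-testing idea above can be sketched simply: compute the same success metric overall and per subgroup, then look for gaps. The record fields below (`customer`, `converted`) are hypothetical:

```python
def subgroup_metrics(records, group_key, outcome_key="converted"):
    """Return the overall conversion rate plus the rate for each subgroup."""
    groups = {}
    for rec in records:
        groups.setdefault(rec[group_key], []).append(rec[outcome_key])
    overall = sum(rec[outcome_key] for rec in records) / len(records)
    per_group = {g: sum(v) / len(v) for g, v in groups.items()}
    return overall, per_group


# New customers convert at 0.2 vs. 0.6 for existing customers; overall is 0.4.
records = (
    [{"customer": "new", "converted": c} for c in [1, 0, 0, 0, 0]]
    + [{"customer": "existing", "converted": c} for c in [1, 1, 1, 0, 0]]
)
overall, per_group = subgroup_metrics(records, "customer")
for group, rate in per_group.items():
    if rate < overall:  # in practice, use a significance test, not a raw comparison
        print(f"{group} underperforms: {rate} vs. {overall} overall")
```

On real data, the raw comparison in the last lines should be replaced with a proper significance test, since small subgroups will show noisy rates.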

4. Can we run our own test?

If you are using a vendor’s tool, it is highly recommended that you not only rely on that vendor’s tests but also run your own, using a few key subgroups that are especially critical to your business.

It is essential to track the algorithm’s performance across those subgroups; it is unlikely to be identical between them. If it isn’t, can you accept the different levels of performance? Should the algorithm be used only for certain subgroups or use cases?

5. Have you tested for bias in both inputs and outputs?

When I think about the potential implications of AI bias, I see risk on both the input side and the output side of an algorithm.

In terms of inputs, imagine using a conversion optimization algorithm for both a high-consideration product and a low-consideration product.

An algorithm may be much more effective at optimizing for low-consideration products, because consumer decisions are made entirely online and there is a more direct path to purchase.

For a high-consideration product, consumers may research offline, visit a store, or talk to friends, so the digital path to purchase is far less direct, and an algorithm may be less accurate for those types of campaigns.

In terms of outputs, imagine a conversion-optimized mobile commerce campaign. An AI engine will likely accumulate significantly more training data from short-tail apps (such as ESPN or Words With Friends) than from long-tail apps. It is therefore possible for an algorithm to steer a campaign toward short-tail inventory because it has better data on those apps and is better able to find performance patterns there. Over time, a marketer may discover that their campaign has over-indexed on expensive short-tail inventory and missed out on what could be very effective long-tail inventory.
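A simple health check for this kind of output bias, assuming your impression logs carry an inventory-tier label and a cost (both field names hypothetical), is to monitor spend share by tier over time:

```python
def spend_share_by_tier(impressions, tier_key="tier", cost_key="cost"):
    """Return each inventory tier's share of total spend, to spot over-indexing."""
    totals = {}
    for imp in impressions:
        totals[imp[tier_key]] = totals.get(imp[tier_key], 0.0) + imp[cost_key]
    grand = sum(totals.values())
    return {tier: cost / grand for tier, cost in totals.items()}


# 80% of spend landing on short-tail apps would warrant a closer look.
imps = [
    {"tier": "short_tail", "cost": 5.0},
    {"tier": "short_tail", "cost": 3.0},
    {"tier": "long_tail", "cost": 2.0},
]
print(spend_share_by_tier(imps))  # {'short_tail': 0.8, 'long_tail': 0.2}
```

Comparing these shares against the composition of available inventory (rather than against 50/50) makes the check meaningful.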

The bottom line

The list of questions above can help you develop or refine your AI efforts to have as little bias as possible. In a world that is more diverse than ever, it’s imperative that your AI solution reflects this. Incomplete training data or insufficient testing will lead to suboptimal performance, and it is important to remember that testing for bias is something that must be consistently repeated as long as an algorithm is in use.

Jake Moskowitz is Vice President of Data Strategy and Head of emodo Institute at Ericsson Emodo.

