AI Is Biased. Here’s How We Can Fix That.

Can artificial intelligence be fair?

It’s no secret that artificial intelligence (AI) is one of the hottest enterprise technology trends. Though an Adobe study indicates that only 15% of enterprises are currently using AI, 31% say it is on their agenda for the next 12 months. And an even higher percentage of digitally mature businesses (47%, to be exact) say they have a defined AI strategy.

This increasing level of investment in cognitive technologies has prompted a flurry of concerns ranging from the practical to the theoretical. Many analysts are quick to point out that, for all the excitement, artificial intelligence is still far from being able to completely replace humans or take over the world.

So there’s a lot for businesses to consider when adopting and implementing AI tools. Of these concerns, however, one of the most pressing is a simple question: Can artificial intelligence be fair?

How Do We Know AI Is Biased? How Much Does It Matter?

Though we like to think of machines as uninfluenced by prejudice, the reality is they’re remarkably susceptible to bias. After all, they are created by humans—whose history of biased thinking is extensive and well-documented.

There are countless examples of artificial intelligence gone awry. Often, we dismiss them as inevitable glitches in a still-nascent technology—or resignedly accept these incidents as an accurate reflection of who we are as humans. When Microsoft’s Twitter bot learned to be racist after just one day, for example, many people shrugged their shoulders and said: “It’s only a bot. And Twitter is full of offensive content anyway. What did you expect?”


While these examples may seem inconsequential, there are also real-life cases of AI displaying prejudice that could cause irreparable harm.

Earlier this year, a computer program used by US courts was found to mislabel black defendants as likely re-offenders, incorrectly flagging them almost twice as often as white defendants, according to ProPublica. And as we now know, Amazon’s internal hiring tool overwhelmingly favored white male applicants, despite years of tinkering.

In light of incidents like these, the World Economic Forum recently listed bias in AI as a key concern—for all the same reasons that human bias is a problem. After all, many organizations hope to use AI to augment (and in some cases, replace) human decision-making. These decisions aren’t just everyday business tasks. They also include decisions that can profoundly affect human lives, like:

  • Hiring and promotion
  • Medical decisions and insurance claims evaluation
  • Sentencing in criminal court cases
  • Loan worthiness and application evaluation

These applications of AI aren’t usually the ones that capture the public’s attention, but they’re by far the most consequential. It’s therefore crucial to understand how AI bias can affect the outcomes of these decisions.

Not only is there a strong ethical case for making artificial intelligence as unbiased as possible, but there’s also a practical reason: bias can be a barrier to the technology’s success. Building trust between machines and humans is essential to widespread adoption of cognitive technology—and it’s impossible to truly trust something you think might not be impartial or ethical.

Diverse Teams Create Fairer Algorithms

Yes, AI can be biased, and it’s something everyone should be aware of. But that doesn’t mean you should avoid the technology altogether. It is entirely possible to make the AI systems we adopt fairer and less biased. For AI experts, the first step in creating more equitable technology is fundamentally human.

“When we’re talking about bias, we’re worrying first of all about the focus of the people who are creating the algorithms,” says Kay Firth-Butterfield, the World Economic Forum’s Head of Artificial Intelligence and Machine Learning. “We need to make the industry much more diverse in the West.”

Historically, data science and technology professions have struggled to attract and retain women and members of minority groups. According to a recent Bloomberg article, just 18% of undergraduate computer science (CS) degrees in the US go to women. In addition to this “pipeline problem,” tech companies can be notoriously unfriendly places for these groups to work.


There’s reason to believe the climate is changing, however. At MIT, for example, 42% of CS majors are women, an 8% increase from five years ago. The ratio is similar at Carnegie Mellon University, which hit a low point in female representation—at just 7%—in the 1990s.

This progress in academia points the way forward for changes in the tech industry at large, and especially in cognitive technology. “If we can do it, anybody can do it,” says Lenore Blum, a distinguished professor of computer science at Carnegie Mellon. “Now people are paying attention.”

Better Data = Better Outcomes

Another area for improvement is the data we feed cognitive machines. After they’re created, algorithms must be trained using pre-selected data sets. If you want your machine to diagnose skin cancers, for example, you might feed it images of melanomas and benign moles. Over time, the algorithm will learn to identify which is which.
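To make that training step concrete, here’s a minimal sketch using scikit-learn. The feature vectors and labels below are random stand-ins, not real lesion images or a real medical model; the point is simply that the algorithm learns from whatever examples humans choose to give it.

```python
# Minimal sketch of supervised training on a pre-selected, labelled dataset.
# The data below is random stand-in data, not real melanoma images.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((200, 16))          # hypothetical features extracted from lesion images
y = rng.integers(0, 2, size=200)   # 1 = melanoma, 0 = benign mole (labels chosen by humans)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```

The model only ever sees the examples someone selected and labeled for it; whatever skew exists in that selection is exactly what it learns.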

But this data, experts say, may be incomplete or contain intrinsic biases.

Joanna Bryson, a researcher at the University of Bath, studied a program built to understand the relationships between words. Trained with text pulled from the internet, the program quickly learned to associate female names with professions like “nurse” and “secretary.”

“People expected AI to be unbiased; that’s just wrong,” Bryson says. “If the underlying data reflects stereotypes, or if you train AI from human culture, you will find these things.”
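Here is a rough illustration of how such associations can be surfaced. This is not Bryson’s actual code; the model path and name lists are placeholders. It loads a pretrained word-embedding model and compares how strongly profession words associate with female versus male first names.

```python
# Sketch: measuring gendered associations in a pretrained word embedding.
# "embeddings.bin" is a placeholder path to any word2vec-format model.
import numpy as np
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format("embeddings.bin", binary=True)

female_names = ["amy", "lisa", "sarah"]   # illustrative name lists
male_names = ["john", "paul", "mike"]

def mean_similarity(word, names):
    """Average cosine similarity between one word and a set of first names."""
    return np.mean([vectors.similarity(word, n) for n in names if n in vectors])

for profession in ["nurse", "secretary", "engineer", "programmer"]:
    gap = mean_similarity(profession, female_names) - mean_similarity(profession, male_names)
    print(f"{profession}: female-minus-male association = {gap:+.3f}")
```

A positive gap means the embedding places that profession closer to female names, purely as a byproduct of the text it was trained on.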

As with every technology, AI works according to the “garbage in, garbage out” principle. If the data you supply the algorithm is biased, the outcomes will be biased too. Making sure you provide a strong initial dataset is crucial to getting good results from AI, as well as minimizing the potential for bias.
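One practical habit is to audit the training data before any model sees it. The sketch below assumes a pandas DataFrame of labelled records; the column names and values are illustrative, not drawn from any real dataset.

```python
# Sketch: cross-tabulating outcomes by group to surface obvious skew
# in a training set before it is used. Column names are hypothetical.
import pandas as pd

def audit_by_group(df, group_col, label_col):
    """Share of each label within each group, plus raw group sizes."""
    shares = pd.crosstab(df[group_col], df[label_col], normalize="index")
    shares["n"] = df[group_col].value_counts()
    return shares

df = pd.DataFrame({
    "gender": ["F", "F", "F", "M", "M", "M", "M", "M"],
    "label":  ["rejected", "rejected", "hired", "hired", "hired", "hired", "hired", "rejected"],
})
print(audit_by_group(df, "gender", "label"))
```

Heavy imbalance in a table like this doesn’t prove the data is unusable, but it flags exactly the kind of skew that, left unexamined, gets baked into the model’s decisions.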

For businesses trying to apply this principle, the biggest roadblock is the data silos that come with rapid cloud adoption: too many disconnected data sources and not enough resources to move, clean, and reconcile the data manually.

But it’s not an insurmountable challenge; technologies like intelligent automation and integration promise to help relieve this burden. By connecting apps into cohesive workflows, businesses can not only keep their data clean and error-free but also seamlessly work artificial intelligence tools into their automated business processes.

Stamping Out Prejudice, One Machine at a Time

There’s no single, simple solution to the problem of bias in AI. Instead, those working on (and with) the technology must remain vigilant and aware of the ways bias can influence their work. As with other areas of tech adoption, self-reflection is key. Even something as simple as asking, “Who is working on our AI implementation?” can go a long way towards reducing prejudice.

Working to make AI more equitable may also have long-term benefits that transcend the business realm. As IBM researcher Francesca Rossi points out, eliminating bias in AI could ultimately help stamp it out on a larger scale.

“As AI systems find, understand, and point out human inconsistencies in decision making, they could also reveal ways in which we are partial, parochial, and cognitively biased, leading us to adopt more impartial or egalitarian views,” she writes. “In the process of recognizing our bias and teaching machines about our common values, we may improve more than AI. We might just improve ourselves.”

So how should you be using artificial intelligence? Check out four ways it could optimize your business processes today >