5 AI fears and how to address them

IT leaders implementing AI will encounter fears – many of them well-founded. The trick is to focus on these real-world concerns, not the time-traveling robot assassins

Artificial intelligence occupies the strange position of having a decades-long history while still feeling wholly futuristic to many people. It’s not actually new, but it remains an eternally “new” frontier. No one can honestly claim to know precisely where it will lead.

So if it’s true that we fear what we don’t understand, then it makes sense that the future of AI keeps people up at night, especially when considering the more ominous possible outcomes. You could reasonably assume this is true of any major technological development: It generates change, which produces fear, et cetera. AI-related fears are of a different order, though.

[ Want a quick-scan primer on 10 key artificial intelligence terms for IT and business leaders? Get our Cheat sheet: AI glossary. ]

Most people don’t know what microservices architecture is, for example, even if some of the apps they use every day were built in decoupled fashion. But technical evolutions like microservices don’t tend to cause the kinds of emotional responses that AI does around potential social and economic impacts. Nor have microservices been immortalized in popular culture: No one is lining up at the box office for "Terminator: Rise of the Cloud-Native Apps."

This speaks mainly to fears about AI’s nebulous future, and it can be tough to evaluate their validity when our imaginations run wild. That’s not particularly useful for IT leaders and other execs trying to build a practical AI strategy today. Yet you will encounter fears – many of them well-founded. The trick is to focus on these real-world concerns, not the time-traveling robot assassins. For starters, they’re much easier to defeat – er, address – because they’re often based in current reality, not futuristic speculation.

“The types of fears [people have about AI] depend on the type of AI that we are talking about,” says Keiland Cooper, a neuroscience research associate at the University of California Irvine and co-director of ContinualAI. “The more theoretical and far off ‘general AI’ – a computer that can do all the things that humans can do – will raise more fears than those from a more realistic AI algorithm like we see being commonly used today.”

Let’s look at five legitimate concerns about AI today – and expert advice for addressing them so that they don’t derail your AI plans.

1. Fear: AI will produce biased outcomes

There is growing focus on the possibility – though probability is likely the better term – of bias and other ills in AI systems and the decisions or outcomes they lead to. Unlike some of the more imaginative Hollywood narratives about AI, you should be scared of AI bias.

“Algorithms are only as good as the data that they are trained on. So if a dataset includes the historical biases of an organization, then the predictions it makes will reflect that historical behavior,” says Chris Nicholson, co-founder and CEO of Skymind. “For example, if a company spent decades promoting white males with Ivy League degrees into positions of authority, then an algorithm trained to identify future leadership talent might focus on that same type of individual, and ignore people who don’t belong to that group.”
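Nicholson’s point can be illustrated with a toy example: a naive model that simply learns historical promotion rates will reproduce whatever skew exists in the training data. A deliberately simplified Python sketch (the records and field names are invented for illustration):

```python
# Toy "historical" promotion records (invented for illustration).
history = [
    {"degree": "ivy", "promoted": True},
    {"degree": "ivy", "promoted": True},
    {"degree": "ivy", "promoted": False},
    {"degree": "state", "promoted": False},
    {"degree": "state", "promoted": False},
]

def promotion_rate(records, degree):
    """Rate a naive model would learn for a group, straight from history."""
    group = [r for r in records if r["degree"] == degree]
    return sum(r["promoted"] for r in group) / len(group)

# The "model" simply mirrors the historical skew: ~0.67 vs. 0.0.
print(promotion_rate(history, "ivy"), promotion_rate(history, "state"))
```

Any real system is more sophisticated than this, but the failure mode is the same: the predictions inherit the history.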

How to address it:

You should embrace this fear and act on it. An absence of concern about AI bias improves the odds that it will proliferate unchecked.

Algorithms should not absolve individuals and organizations of responsibility for the results; human oversight and governance are absolutely necessary. This is also a good example of how another fear – that humans are no longer needed – may be a bit overblown.

“You can’t trust AI to know everything or to make perfect decisions. Algorithms are produced by people, and people make mistakes,” Nicholson says. “So the thing that every company has to do is have a system built to check its AI. Take a regular sample of the AI’s decisions and show them to experts and ask them: Does that look right? Because then, at least, you’re no worse than the experts, which is all you could have hoped for to begin with.”
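That kind of spot check is straightforward to operationalize. Here is a minimal Python sketch of sampling a model’s decisions for expert review and tracking the disagreement rate; the decision format and review workflow are assumptions, not any particular product’s API:

```python
import random

def sample_for_review(decisions, sample_size=50, seed=42):
    """Draw a random sample of model decisions for human experts to check.

    `decisions` is assumed to be a list of dicts, e.g.
    {"id": ..., "input": ..., "prediction": ...}.
    """
    random.seed(seed)
    return random.sample(decisions, min(sample_size, len(decisions)))

def disagreement_rate(sampled, expert_agrees):
    """Given the sample and a parallel list of True/False expert verdicts,
    report the cases and the rate at which experts disagreed with the model."""
    reviewed = list(zip(sampled, expert_agrees))
    disagreements = [case for case, ok in reviewed if not ok]
    return disagreements, len(disagreements) / max(len(reviewed), 1)
```

Run on a regular cadence, a rising disagreement rate is an early warning that the model, or the data feeding it, needs attention.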

[ Ferret out bias: Read “AI bias: 9 questions leaders should ask.” ]

This may be especially important in sectors like healthcare, insurance, banking, and government. But really, there is no sector where this won’t be an important issue.

“AI practitioners and machine learning engineers have to ensure they are holding themselves to a degree of algorithmic accountability, and IT leaders should have dedicated data teams building de-biasing programs for their existing data sets,” says Iba Masood, co-founder and CEO of Tara AI. “This would help deploy a level of fairness and equity in utilizing systems for decision-making processes, especially where end consumers are involved.”
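What a de-biasing program looks like varies by team, but one common first step is reweighting training data so that historically over-represented groups do not dominate the model. A rough Python sketch, assuming each record is tagged with a sensitive attribute identified by the data team (the field name and weighting scheme are illustrative):

```python
from collections import Counter

def reweight_by_group(records, group_key="group"):
    """Compute per-record sample weights so each group contributes equally
    during training -- one simple, illustrative de-biasing step."""
    counts = Counter(r[group_key] for r in records)
    n_groups = len(counts)
    total = len(records)
    # Over-represented groups get smaller weights; under-represented
    # groups get larger ones. Most training libraries accept such weights.
    return [total / (n_groups * counts[r[group_key]]) for r in records]
```

Reweighting alone does not guarantee fairness, which is why the algorithmic accountability Masood describes still requires auditing outcomes, not just inputs.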

It’s a matter of being ethical and equitable. AI ethics may also become a competitive differentiator, according to Masood.

“I believe that the next five years is going to see a conscious consumer who is looking to transact business with companies deploying fairness mechanisms in their decision making processes assisted by AI,” Masood says. “IT can have a significant impact in this consumer behavioral shift, by working to mitigate bias in data sets used for decision-based systems.”

2. Fear: We (will) have no idea why AI does what it does

Here’s another natural fear of the unknown: Many AI outcomes are difficult to explain.

“The most advanced forms of AI, which produce the most accurate predictions about data, are also the least able to explain why they made that prediction,” Nicholson says.

This is sometimes referred to as the “black box” of AI, referring to a lack of visibility into a system’s decisions – something that could be problematic for a variety of organizations.

“In many cases and in many companies, people need to know why something was done,” Nicholson says. “That is especially true in highly regulated industries. Take healthcare. You don’t want an algorithm making decisions about a patient’s diagnosis or treatment without knowing why that decision was made.”

Cooper offers another scenario, noting that the black box model becomes particularly concerning when something goes wrong.

“Say I train an algorithm to pick the best stocks, and say it does a pretty good job, maybe making a nine percent profit,” Cooper says.

If you’re getting an adequate or better return on your financial investments, as in Cooper’s hypothetical (and plausible) scenario, you might not much care about why. You’re making money, after all. But what if you lost nine percent? What if you lost everything? You’ll probably care a whole lot more about why.

“The problem is that in many cases, we don’t know why it is choosing what it is choosing,” Cooper says. “This is scary, as it not only makes us less involved with the system we are working with, but also doesn’t give us many insights should it do something wrong.”

[ Can AI solve that problem? Read also: How to identify an AI opportunity: 5 questions to ask. ]

How to address it:

One of the best means of addressing this fear is to ensure that human intelligence and decision-making is still a vital – and in some contexts, the ultimate – part of any process, even if that process is improved by AI. In other words, this fear can be mitigated by ensuring that people retain proper control of processes and decisions, even as the role of AI in those processes and decisions expands.

“In cases like [healthcare], AI is best employed as a form of decision support for human experts,” Nicholson says. “That is, you don’t let AI operate alone and without oversight. You integrate AI into an existing decision-making process, where it can make suggestions to a human expert, but the expert will be the one to make a final decision, and they will be able to explain why they made it.”
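One way to encode that “decision support, not decision maker” pattern in software is to have the model emit suggestions that only a human can finalize, with the expert’s rationale recorded alongside. A minimal Python sketch; the field names and workflow are assumptions for illustration:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Suggestion:
    """A model suggestion handed to a human expert, never acted on directly."""
    case_id: str
    suggested_label: str
    confidence: float
    expert_decision: Optional[str] = None
    expert_rationale: Optional[str] = None

def finalize(suggestion: Suggestion, decision: str, rationale: str) -> Suggestion:
    """The expert makes the final call and must record why, preserving the
    explanation the model itself cannot provide."""
    suggestion.expert_decision = decision
    suggestion.expert_rationale = rationale
    return suggestion
```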

3. Fear: AI will make bad decisions

Again, this is a perfectly sensible concern. How do we evaluate the accuracy and efficacy of AI’s results? What happens if it makes poor choices? (You can see how certain combinations of these fears have a compounding effect: What happens if AI makes bad decisions and we can’t explain why?) Assuming any and all AI-generated outcomes will automatically be “good” should make even the most optimistic people among us uncomfortable.

Bias can lead to bad decisions. This is actually a more sweeping fear, though, one that could – among other negative impacts – lead a team to mistrust any and every AI result. This becomes more likely when people outside of the AI team (or outside IT altogether) analyze the results. It can also lead to organizational stasis.

"This can be very tricky to nail down, particularly if a quantitative definition of a 'good' decision cannot be produced."

“Many people fear that AI will make poor decisions. This fear is often very broad from a technical perspective, but it always boils down to people thinking the decision ‘just isn’t right,’” says Jeff McGehee, director of engineering at Very. “For practitioners, this can be very tricky to nail down, particularly if a quantitative definition of a ‘good’ decision cannot be produced.”

How to address it:

Once again, the importance of the human element reigns. If you can’t quantify what constitutes a positive result, you’ll need to come up with a qualitative framework for doing so, while ensuring you’re relying on the right mix of people and information to combat real problems like bias.

“In order to identify such a definition, stakeholders must think critically about all possible definitions of good/bad with respect to the decision,” McGehee says. “Exact correctness may be ideal, but often, certain types of errors are more acceptable or more ‘human.’ In addition, ‘correctness’ may refer to whether or not you meet some standard list of predictions, but if this list holds inherent human bias, it may be a bad target. All of these factors can come into play when non-technical stakeholders are evaluating the quality of AI decisions.”
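One practical way to turn that discussion into something stakeholders can agree on is an explicit cost matrix: instead of asking only whether the model was right, assign a cost to each kind of error. A small Python sketch, with entirely hypothetical cost values:

```python
def cost_weighted_error(predictions, actuals, cost_matrix):
    """Score decisions with stakeholder-defined costs rather than raw accuracy.

    `cost_matrix[(actual, predicted)]` says how bad each kind of mistake is.
    """
    total = sum(cost_matrix.get((a, p), 0.0) for p, a in zip(predictions, actuals))
    return total / max(len(actuals), 1)

# Hypothetical example: a missed "positive" case costs ten times more
# than a false alarm, reflecting what the business actually cares about.
costs = {("positive", "negative"): 10.0, ("negative", "positive"): 1.0}
score = cost_weighted_error(
    predictions=["negative", "positive"],
    actuals=["positive", "positive"],
    cost_matrix=costs,
)
print(score)  # 5.0: one costly miss averaged over two decisions
```

The numbers themselves matter less than the conversation they force: stakeholders must say out loud which mistakes they can live with and which they cannot.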

Kevin Casey writes about technology and business for a variety of publications. He won an Azbee Award, given by the American Society of Business Publication Editors, for his InformationWeek.com story, "Are You Too Old For IT?" He's a former community choice honoree in the Small Business Influencer Awards.
