
Explainable AI: Why explaining AI models is important

As artificial intelligence solutions have grown in popularity, the creators and users of AI have adopted and adapted new words and phrases to describe them and to name their work. You’ve probably heard terms like “narrow AI,” “deep learning,” and “neural networks” used to distinguish between types and functions of AI, the different parts of AI solutions, and more. Another consequence of the explosion in AI’s availability and adoption has been the need for what we now call “explainable AI”: methods and techniques that make AI more readily understandable to all kinds of humans.

Why do we need explainable AI? Most importantly, because the proliferation of AI solutions has made them more available to workers whose knowledge of data science and artificial intelligence is general at best. It used to be that only those with technical knowledge could or would implement an AI solution. Now, there is effective AI available to hiring managers, salespeople, visual artists, and many other types of employees, as well as to people operating outside the realm of work. Many of these solutions are available out of the box, for someone like a floor manager to implement on his or her own, or for you to download directly onto your phone.

AI now affects our lives on both mundane and profound levels. AI might help you avoid some repetitive, time-consuming tasks at work today, and it might also make a diagnosis that saves your life at an upcoming medical appointment. These two disparate examples are representative of the two most often cited arguments in favor of making AI explainable: 

  1. People want to know how their AI works so that they feel comfortable using it, can customize it, can choose the right solution among many, or can tune it to work more effectively for them.
  2. For ethical reasons, people have a right to know how AI works, because it is being relied upon to make decisions in healthcare, law, finance, and other realms where moral and ethical boundaries are extremely important to most humans.

Explainability is a way of opening up what’s often called “the black box” of AI and discussing solutions in terms that most people can understand. It also helps us answer important collective questions like “How exactly was this crucial decision made?” or “Were there other potential choices available?”
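To make questions like “How was this decision made?” concrete, here is a minimal sketch of one widely used explainability technique, permutation feature importance: shuffle each input feature in turn and measure how much the model’s accuracy drops, revealing how heavily the model’s decisions depend on that feature. The dataset, model, and feature names below are illustrative stand-ins, not drawn from any particular solution.

```python
# A minimal, illustrative sketch of permutation feature importance.
# Synthetic data stands in for a real problem.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=6, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much held-out accuracy drops;
# the bigger the drop, the more the model's decisions rely on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature_{i}: importance {result.importances_mean[i]:.3f}")
```

A ranking like this does not fully open the black box, but it gives non-specialists a first answer to which inputs actually drove the model’s choices.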

Companies like CrowdANALYTIX frequently need to work in the realm of Explainable AI. For example, we recently developed a highly efficient ensemble model for a pharmaceuticals giant, built primarily on random forests, to help define the patient-selection parameters for Phase III drug trials from the results of earlier Phase II trials. Our algorithm allowed Phase III trials to be conducted with 20% fewer patients than traditional statistics-based approaches require. This translated to millions of dollars in potential savings for the pharmaceuticals company.

However, we ran into problems when we attempted to explain how our model reached its conclusions. Most random forest models, although accurate, end up as a complex web of trees whose weights are tuned algorithmically to optimize accuracy. They work well, but they are too complex for humans to explain easily. Without better explainability, we had no chance of gaining the FDA approval the company needed to actually conduct trials using our approach.

Better explainability was key to the acceptance of our solution. To achieve it, we designed an approximation of our working model using a decision tree rather than the random forest actually used in the solution. This version produced similar results but was far easier for us to explain and for those outside the data science community to understand. We sacrificed some of the precision of our original model, but the compromise was worthwhile, because we needed to be transparent enough to satisfy the FDA’s traditional and ethically rigorous standards. Any step toward making AI more accepted in fields like pharmaceuticals and healthcare is a step in the right direction.
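As a rough illustration of that surrogate-model idea (not our actual pipeline or data), the sketch below trains an accurate but opaque random forest, then fits a single shallow decision tree to the forest’s predictions so its logic can be printed as plain if/else rules. The synthetic dataset, tree depth, and feature names are assumptions made for illustration only.

```python
# Illustrative surrogate-model sketch: approximate a random forest
# with a shallow, human-readable decision tree.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in data; real trial-selection features would go here.
X, y = make_classification(n_samples=2000, n_features=10, n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 1. The accurate but hard-to-explain ensemble.
forest = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)

# 2. A shallow "surrogate" tree trained to mimic the forest's predictions.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, forest.predict(X_train))

print("forest accuracy:   ", accuracy_score(y_test, forest.predict(X_test)))
print("surrogate accuracy:", accuracy_score(y_test, surrogate.predict(X_test)))
print("fidelity to forest:", accuracy_score(forest.predict(X_test), surrogate.predict(X_test)))

# The surrogate can be shown to non-specialists as plain if/else rules.
print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(X.shape[1])]))
```

The fidelity score shows how closely the simple tree mirrors the forest: as noted above, some precision is traded away in exchange for a model that regulators and non-specialists can actually read.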

It’s clear that some of the most powerful companies in the world see Explainable AI as the future of artificial intelligence. Google is investing heavily in this field, increasingly offering interactive visualization tools for anyone who wants to understand exactly how machine learning works. Investors are also picking up on the growing demand for AI that can be broken down into digestible information for lay people, who want the power of knowledge when implementing and trusting solutions for work and life.

Explainable AI can only get us closer to our goal of leveraging AI for the tasks it is best suited to, while humans continue to innovate and improve. If we can understand when our AI succeeds, when it fails, and why, we can allocate our resources more strategically and gain efficiencies at every level. Now is the time for data scientists to acquire Explainable AI skills and begin leveraging them to make their solutions workable and appealing to consumers who want to understand their tools.
