Explainable AI is paying off as software companies try to make AI more understandable. LinkedIn recently increased its subscription revenue after using AI that predicted clients at risk of canceling and described how it arrived at its conclusions.

“Explainable AI is about being able to trust the output as well as understand how the machine got there,” Travis Nixon, the CEO of SynerAI and Chief Data Science, Financial Services at Microsoft, told Lifewire in an email interview.

“‘How?’ is a question posed to many AI systems, especially when decisions are made or outputs are produced that aren’t ideal,” Nixon added. “From treating different races unfairly to mistaking a bald head for a football, we need to know why AI systems produce their results. Once we understand the ‘how,’ it positions companies and individuals to answer ‘what next?’.”
Getting to Know AI
AI has proven accurate and makes many types of predictions. But AI is often unable to explain how it came to its conclusions, and regulators are taking notice of the explainability problem. The Federal Trade Commission has said that AI that is not explainable could be investigated, and the EU is considering passage of the Artificial Intelligence Act, which includes requirements that users be able to interpret AI predictions.

LinkedIn is among the companies that think explainable AI can help boost profits. Previously, LinkedIn salespeople relied on their own knowledge and spent huge amounts of time sifting through offline data to identify which accounts were likely to continue doing business and what products they might be interested in at the next contract renewal. To solve the problem, LinkedIn started a program called CrystalCandle that spots trends and helps salespeople.

In another example, Nixon said that while building a quota-setting model for a company’s sales force, his company was able to incorporate explainable AI to identify which characteristics pointed to a successful new sales hire.

“With this output, this company’s management was able to recognize which salespeople to put on the ‘fast track’ and which ones needed coaching, all before any major problems arose,” he added.
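LinkedIn hasn’t published CrystalCandle’s internals, so the following is only a minimal sketch of the general idea: score accounts for churn risk and attach human-readable reasons to each score. The account features, the synthetic data, and the choice of a plain logistic regression (whose per-feature contributions serve as the “explanation”) are all assumptions made for illustration, not details from LinkedIn’s system.

```python
# Minimal sketch: churn prediction plus per-account reasons (not CrystalCandle).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
features = ["seats_used", "logins_last_30d", "support_tickets", "contract_months_left"]

# Synthetic accounts: low usage plus many support tickets loosely implies churn.
X = rng.normal(size=(500, 4))
y = ((-X[:, 0] - X[:, 1] + X[:, 2] + rng.normal(scale=0.5, size=500)) > 0).astype(int)

scaler = StandardScaler().fit(X)
model = LogisticRegression().fit(scaler.transform(X), y)

def explain(account):
    """Return churn probability and each feature's signed log-odds contribution."""
    z = scaler.transform(account.reshape(1, -1))[0]
    contributions = model.coef_[0] * z          # per-feature pull toward or away from churn
    prob = model.predict_proba(z.reshape(1, -1))[0, 1]
    ranked = sorted(zip(features, contributions), key=lambda p: -abs(p[1]))
    return prob, ranked

prob, reasons = explain(X[0])
print(f"churn risk: {prob:.2f}")
for name, weight in reasons:
    print(f"  {name}: {weight:+.2f}")
```

A salesperson would read the top-ranked contributions as the reasons an account was flagged, which is the kind of “how” Nixon says makes an AI output trustworthy.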
Many Uses for Explainable AI
Explainable AI is currently being used as a gut check by most data scientists, Nixon said. Researchers run their model through simple methods, make sure there’s nothing completely out of order, then ship the model.

“This is in part because many data science organizations have optimized their systems around ‘time over value’ as a KPI, leading to rushed processes and incomplete models,” Nixon added.

People often aren’t convinced by results that AI can’t explain. Raj Gupta, the Chief Engineering Officer at Cogito, said in an email that his company has surveyed customers and found that nearly half of consumers (43%) would have a more positive perception of a company and of AI if companies were more explicit about their use of the technology.

And it’s not just financial data that’s getting a helping hand from explainable AI. One area benefiting from the new approach is image data, where it’s easy to indicate which parts of an image the algorithm thinks are essential, and where it’s easy for a human to judge whether that information makes sense, Samantha Kleinberg, an associate professor at Stevens Institute of Technology and an expert in explainable AI, told Lifewire via email. (A toy version of that kind of image explanation is sketched at the end of this article.)

“It’s a lot harder to do that with an EKG or continuous glucose monitor data,” Kleinberg added.

Nixon predicted that explainable AI would be the basis of every AI system in the future, and that without it, the results could be dire.

“I hope we progress on this front far enough to take explainable AI for granted in years to come and that we look back at that time today surprised that anybody would be crazy enough to deploy models that they didn’t understand,” he added. “If we don’t meet the future in this way, I’m worried the blowback from irresponsible models could set the AI industry back in a serious way.”
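The image-data case Kleinberg mentions can be illustrated with one of the simplest explanation techniques, occlusion saliency: hide parts of an image and watch how a model’s score changes. The sketch below is purely illustrative; the `toy_score` “classifier,” the patch size, and the image dimensions are made up for the example and are not drawn from any system mentioned in this article.

```python
# Toy occlusion-saliency sketch: which regions of an image drive the score?
import numpy as np

def occlusion_saliency(image, score_fn, patch=8):
    """Cover the image with a mean-valued patch, region by region; the drop
    in the model's score marks how much that region mattered."""
    base = score_fn(image)
    h, w = image.shape[:2]
    heatmap = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = image.mean()
            heatmap[i // patch, j // patch] = base - score_fn(occluded)
    return heatmap

# Stand-in "classifier": scores an image by the brightness of its center region.
def toy_score(img):
    return float(img[12:20, 12:20].mean())

img = np.random.rand(32, 32)
print(occlusion_saliency(img, toy_score).round(2))
```

Regions whose occlusion causes the biggest score drop are the ones the model relied on, which is exactly the sort of highlight a person can sanity-check against the actual picture.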