For most of business history, decisions, from creating marketing taglines to approving a merger or acquisition, were made solely by humans using instinct, expertise, and understanding built through years of experience. These decisions, however, were invariably susceptible to the quirks of human nature: bias, stereotypes, and inconsistent influences.
And while businesses eventually shifted toward considering data, the human brain can't process it in large volumes, so people naturally prefer summarized content when making decisions. Unfortunately, these summaries don't always include the relevant data and details, and some are still marred by bias.
We’re now in the “Decisions with Humans + Data + AI” phase. With the advent of the cloud and the massive storage and compute power that came with it, analyzing complex data sets became fast and efficient. With the additional information processing capabilities of AI, we’re able to process vast amounts of information quickly and uncover relationships in data that would have otherwise gone unnoticed.
AI solutions produce outcomes and predictions for multiple use cases across various industries much faster than a human brain can process. From predicting cancer in patients and potential hospital stay times to making credit-lending decisions and personalizing shopping recommendations, AI has a broad reach.
There are limitations, though. Humans draw on additional ambient context when they make decisions, including empathy, and this is difficult for AI to replicate. When we leave decisions entirely to AI without intervening when needed, we might get incorrect or unusual outcomes, and unanswered questions remain. Do we know why the AI made those predictions? What was the reasoning behind them? Are we certain there is no inherent bias? Are we certain we can trust the outcomes? How do we course-correct if we don't know?
The Next Phase: Explainable AI
Complex AI systems are often black boxes, offering minimal insight into how they work. Enter Explainable AI.
Explainable AI works to make these AI black boxes more like glass boxes. Although businesses understand the many benefits of AI and how it can provide a competitive advantage, they are still wary of the potential risks of working with black boxes. So the focus is now shifting toward an explainability-infused approach: the 'Decisions with Humans + Data + Explainable AI' phase.
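To make the glass-box idea concrete, here is a minimal sketch of one common explainability technique, permutation feature importance: shuffle one input feature and measure how much the model's accuracy drops. A large drop means the model relies on that feature. The toy "credit-approval" model and data below are invented purely for illustration, not taken from any real system.

```python
# Minimal sketch of permutation feature importance.
# The model, features, and data here are hypothetical, for illustration only.
import random

def model(age, income):
    # Hypothetical credit-approval "black box": approves when income is high,
    # and silently ignores age.
    return income > 50

data = [(25, 30), (40, 60), (35, 80), (50, 45), (29, 70), (61, 20)]
labels = [model(a, i) for a, i in data]  # ground truth matches the model here

def accuracy(rows):
    return sum(model(a, i) == y for (a, i), y in zip(rows, labels)) / len(rows)

def permutation_importance(feature_index, trials=100, seed=0):
    """Average accuracy drop when one feature column is shuffled."""
    rng = random.Random(seed)
    base = accuracy(data)
    drops = []
    for _ in range(trials):
        column = [row[feature_index] for row in data]
        rng.shuffle(column)
        shuffled = [
            (c, i) if feature_index == 0 else (a, c)
            for (a, i), c in zip(data, column)
        ]
        drops.append(base - accuracy(shuffled))
    return sum(drops) / trials

print("importance of age:   ", permutation_importance(0))  # 0: model ignores age
print("importance of income:", permutation_importance(1))  # > 0: model relies on income
```

Running this reveals something a raw prediction never would: the model's decisions hinge entirely on income and not at all on age. That is exactly the kind of question ("why did the AI decide this?") that explainability tooling is meant to answer.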
Read the full article on Datanami.