
How Explainable AI Keeps Decision-Making Algorithms Understandable, Efficient, and Trustworthy - Krishna Gade x Intelligent Automation Radio

Krishna Gade, co-founder and CEO of Fiddler, recently spoke with Guy Nadivi from Ayehu, the IT automation company powered by AI. Guy welcomed Krishna as a guest on Intelligent Automation Radio to help executives understand the impact and importance of Explainable AI in their business. You can watch the full episode or read the transcript here. Below, we’ve summarized the highlights of the conversation.

Where did Fiddler get its name?

You may be familiar with the story of how Fiddler got started based on Krishna’s work as an engineering lead at Facebook, where his team built an explainability system for the AI models that powered the News Feed. But where did the name “Fiddler” come from?

Krishna shared the story with Guy. “The first feature that we built was this ability to ‘fiddle’ with the models or ‘fiddle’ with the inputs in the model and observe the changes in the outputs. And so we really loved that and we wanted something that is very easy to remember, and ‘fiddling’ with AI was something that we wanted people to be able to do — to make it transparent, to trust it. And that’s how the name Fiddler came along.”

What is Explainable AI, and why does it matter?

In simple terms, models look at historical data and find patterns that they use to predict the future. However, models are sophisticated “black boxes” using high-dimensional data and layers upon layers of decision-making logic. As Krishna put it, “A human, whether that’s a business user or a developer, cannot open up the system and look at all the rules that it [the model] is using to do the predictions. And so Explainable AI is a set of techniques that can make this black box AI system transparent so that you can look into what’s going on.”
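To make this concrete, here is a minimal, hypothetical Python sketch of the simplest form of the idea (and the “fiddling” that gave Fiddler its name): train a model on synthetic data, then perturb one input at a time and watch the prediction move. The data and model are stand-ins, and real Explainable AI tools use more principled attribution methods such as Shapley values, but the intuition is the same.

```python
# Hypothetical sketch: "fiddle" with one input feature at a time and observe
# how the model's prediction changes. Synthetic data and model, for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))                 # four synthetic features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # label driven by features 0 and 1

model = RandomForestClassifier(random_state=0).fit(X, y)

x = X[0]
baseline = model.predict_proba(x.reshape(1, -1))[0, 1]
for i in range(X.shape[1]):
    fiddled = x.copy()
    fiddled[i] += 1.0                          # nudge feature i
    delta = model.predict_proba(fiddled.reshape(1, -1))[0, 1] - baseline
    print(f"feature {i}: prediction shifts by {delta:+.3f}")
```

Features 0 and 1 should show the largest shifts, matching how the labels were generated; a feature the model ignores barely moves the output.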

Explainable AI is important because more and more ML and AI systems are being deployed into production across every industry, becoming first-class citizens in the software stack at many companies. If you can’t open up these black boxes and look at what’s going on, you are going to run into problems in several areas:

Debugging and finding the root cause of production issues with models. Just like production software, AI systems need to be monitored after they’re deployed. But when there’s an alert about poor model performance, it can be difficult to debug the issue if you don’t have explainability. Is something broken in the data pipelines? Are we seeing outliers? Does the model need to be retrained? Explainability helps ensure your models are performing correctly and reduces the time to troubleshoot, resulting in a bigger return on investment and reduced time-to-market for organizations looking to implement AI.

Knowing when the historical data you used to train the model no longer reflects reality. A model’s performance isn’t fixed; it degrades when production data “drifts” away from the training distribution (a simple drift check is sketched after this list). This happened to many companies during the COVID-19 pandemic. “The models were trained with this prehistoric pre-coronavirus data, where the user behavior is normal,” said Krishna. “And then post-coronavirus, all of a sudden the models were not predicting with high accuracy.”

Detecting and fixing bias in your data — so you don’t break consumer trust. Companies don’t set out to create bias, but it can creep in when no one is paying attention. Krishna told the story of the infamous Apple Card gender bias issue from about a year and a half ago. “People were getting different credit limits set automatically, and what happened was within the same household, the husband and wife got 20x difference in terms of the credit limits that were being set by their algorithm. … When the customers complained to Goldman Sachs customer support, the answer that they got was, ‘Oh, we don’t know. It’s just the algorithm.’” With Explainable AI, the team could have validated the model and caught an issue like this before it went to production (a minimal group-level fairness check is sketched after this list).

Being compliant with current and upcoming regulations. “When you’re using AI and machine learning in regulated use cases like credit underwriting or recruiting, it’s important to make sure that a third party is going to be able to understand how your models are working,” Krishna said. “In those cases, explainability is a must-have, because otherwise, you would not be compliant with respect to the regulations.” Today, regulations affect industries with high-risk use cases for AI, such as healthcare, financial services, and insurance. In the near future, we can expect broader rules like the Algorithmic Accountability Act, currently before Congress, which would create compliance requirements for any AI application.
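To make the drift problem above concrete, here is a minimal, hypothetical check: compare a feature’s training distribution against recent production traffic with a two-sample Kolmogorov–Smirnov test. The data, SciPy-based approach, and alert threshold are illustrative assumptions, not a description of any particular product’s implementation.

```python
# Hypothetical drift check: has this feature's live distribution moved away
# from the data the model was trained on? Data and threshold are illustrative.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5000)  # "pre-coronavirus" training data
live_feature = rng.normal(loc=0.8, scale=1.3, size=2000)   # shifted production traffic

stat, p_value = ks_2samp(train_feature, live_feature)
print(f"KS statistic = {stat:.3f}, p-value = {p_value:.2g}")
if p_value < 0.01:  # illustrative alerting threshold
    print("Drift suspected: check the data pipeline or consider retraining.")
```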
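And for the bias example, here is a hedged sketch of the kind of group-level check a team could run before launch: compare outcomes across a protected attribute. The column names, toy data, and the 0.8 “four-fifths rule” threshold are illustrative assumptions.

```python
# Hypothetical pre-launch fairness check: compare approval rates across groups.
import pandas as pd

decisions = pd.DataFrame({
    "gender":   ["F", "M", "F", "M", "F", "M", "F", "M"],
    "approved": [0,   1,   0,   1,   1,   1,   0,   1],
})

rates = decisions.groupby("gender")["approved"].mean()
ratio = rates.min() / rates.max()  # "four-fifths rule" style disparity ratio
print(rates)
if ratio < 0.8:
    print(f"Potential bias: approval-rate ratio across groups is {ratio:.2f}")
```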

Case Study: Hired.com and Fiddler

To give an example of how an organization can benefit from Explainable AI, Krishna shared the story of how Hired uses Fiddler to build trust with AI. Hired is a job search marketplace that uses AI to match recruiters with candidates. “During this process, Hired was building very sophisticated, deep learning models to do this matching,” Krishna explained. “And what they felt was, they did not have a good way to understand how these models are working.” If anyone on the Hired team — from the developers building the models to business stakeholders — had a question like, “Why did we reject this candidate?” or “Why was this candidate matched to that job or this recruiter?”, they had no clear way to find the answer.

“By using Fiddler in their workflow today, they’re able to understand how their candidate curation is working,” Krishna said. As a result, “They are treating candidates across different genders and ethnicities equitably, and fixing defects in their models.” In short, Hired has been able to use Fiddler to improve trust in their system and reduce the time spent debugging and finding the answers to questions they had about their models.

It’s an exciting time to join the data science field

“Lots of companies are hiring data scientists and machine learning as a whole is taking off,” Krishna told Guy. “And so it’s a very, very exciting time for people to learn this field and join the workforce in this area in general.” 

Krishna shared that Fiddler looks to hire people who are passionate about the mission of building trust and transparency with AI. They should have technical experience with data science and machine learning, but that doesn’t mean they need a PhD. “If you’re thinking of going into machine learning or data science from a different field, we actually have quite a few data scientists on our team who have done that,” Krishna said. 

Apply here!

Explainable AI can create value from a goldmine of data

Why is AI such an exciting field at the moment? Well, it has a lot to do with data. “In the last two decades, data has just exploded,” Krishna said, noting the wide variety of cloud companies and data warehousing tools that have emerged. Enterprise companies in particular collect mountains of data from a variety of channels about their users, their business, and how customers interact with their products and services. As Krishna put it, “We are sitting on this goldmine of data. Now, how do you make sense of this data so that you can actually build actionable products? How do you improve your credit decisioning system? How do you improve your e-commerce recommendation system? Or how do you improve your recruiting system that is processing resumes? How do you improve your clinical diagnosis?”

Krishna explained that executives need a few things to really take advantage of AI at this pivotal moment: 

  • A clear understanding of the business problem they want AI to solve
  • Data that can be used to train a model for that problem
  • A team of data scientists and engineers to implement and maintain the solution
  • The right toolkit to deploy models in a responsible, scalable manner

Based on conversations with customers, Krishna has seen that this last point is often what’s missing at an organization looking to deploy ML. “I was having a chat with the SVP of Analytics of a very large insurance company this week,” Krishna shared. “And he was telling me that there are so many models that they create that don’t end up in production. Why? Because they don’t know which model to launch, they don’t know what’s a good model, whether this model will perform properly in production or not, they don’t have the good tools to measure and monitor the performance of the models, they don’t have good tools to understand how the model is working in a deeper manner.” 

These are the problems Fiddler is designed to solve. We’re looking forward to continuing to share our perspective on the importance of ML Monitoring with Explainable AI and how it can be integrated into the entire AI lifecycle. Request a demo.