
Series A: Why Now and What’s Next for Fiddler?

In the news: VentureBeat, Axios Pro Rata - Venture Capital Deals, TechStartups, SiliconAngle, peHUB, Finsmes, Analytics India Magazine

It has been a year since we founded Fiddler Labs, and the journey so far has been incredible. I’m very excited to announce that we’ve raised our $10.2 million Series A funding round, led by Lightspeed Venture Partners and Lux Capital, with participation from Haystack Ventures and Bloomberg Beta. Our first year in business has been super awesome, to say the least. We’ve built a unique Explainable AI Engine, put together a rock-solid team, and brought in customers from both the enterprise and startup worlds.

As we ramped up Fiddler over the last year, one thing stood out: many enterprises choose not to deploy AI solutions today because of the lack of explainability. They understand the value AI provides, but they struggle with the ‘why’ and the ‘how’ when using traditional black-box AI, because most applications today are not equipped with Explainable AI. It’s the missing link when it comes to AI solutions making it to production.

Why Explainable AI? 
We get this question a lot, since Explainable AI is still not a household term and not many companies understand what it actually means. So I wanted to ‘explain’ Explainable AI with a couple of examples.

Credit lending

Let’s consider the case where an older customer (age 65+) wants a credit line increase and reaches out to the bank to request it. The bank runs the request through an AI credit lending model. The model returns a low score and the customer’s request is denied. The bank representative dealing with the customer has no idea why the model denied the request. And when they follow up internally, they might find that there is a proxy bias built into the model because of the lack of examples in the training data representing older customers. Before you get alarmed, this is a hypothetical situation: banks go through a very diligent model risk management process, per guidelines such as SR 11-7, ECOA, and FCRA, to vet their models before putting them into use. However, those tools and processes were built for the much simpler quantitative models they have been using for decades to process these requests. As banks and other financial institutions look to move toward AI-based underwriting and lending models, they need tools like Fiddler. If the same AI model were run through Fiddler’s Explainable AI Engine, the team would quickly see that the request was denied because this older customer is considered an outlier. Explainability shows that the training data used for the model was age-constrained: it was limited to 20-60 year olds.
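To make this concrete, here is a minimal, hypothetical sketch of the kind of analysis an explainability tool performs in this scenario. It is not Fiddler’s actual engine or API: it trains a toy lending model on synthetic, age-constrained data and uses SHAP attributions as a stand-in for a generic explanation method, showing that age, not creditworthiness, drives the denial.

```python
# Hypothetical sketch (not Fiddler's engine): an age-constrained lending model
# denies an older applicant, and per-prediction attributions reveal why.
import numpy as np
import xgboost as xgb
import shap

rng = np.random.default_rng(0)
n = 5000
age = rng.uniform(20, 60, n)                  # training data limited to ages 20-60
income = rng.normal(70_000, 20_000, n)
utilization = rng.uniform(0, 1, n)
X_train = np.column_stack([age, income, utilization])
# Synthetic label with an age-correlated pattern, standing in for proxy bias.
y_train = ((income > 55_000) & (utilization < 0.6) & (age < 55)).astype(int)

model = xgb.XGBClassifier(n_estimators=100, max_depth=3).fit(X_train, y_train)

applicant = np.array([[68.0, 80_000.0, 0.3]])  # 68-year-old requesting an increase
print("training age range:", X_train[:, 0].min(), "-", X_train[:, 0].max())
print("approval probability:", model.predict_proba(applicant)[0, 1])

# A large attribution on the age feature flags that the model is reacting to an
# out-of-distribution age rather than to the applicant's creditworthiness.
explainer = shap.TreeExplainer(model)
print("attributions [age, income, utilization]:", explainer.shap_values(applicant)[0])
```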

Cancer prediction

Let’s consider the case where a deep neural network model is trained on chest X-ray data to predict cancer. Based on this training data, the model flags certain X-rays as cancerous. We can use an explainability method that highlights regions in the X-ray to ‘explain’ why an X-ray was flagged as cancerous. What was discovered here is very interesting - the model predicted that the image was cancerous because of the radiologist’s pen markings rather than the actual pathology in the image. This shows just how important it is to have explanations for any AI model. They tell you the why behind any prediction, so you, as a human, know exactly why that prediction was made and can course-correct when needed. In this case, because of explanations, we realized that the prediction was based on something completely irrelevant to actual cancer.
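As an illustration of how such a method can work, here is a minimal sketch of Integrated Gradients, the attribution technique mentioned later in this post. The tiny untrained CNN, the random “X-ray,” and the all-black baseline are placeholders so the snippet runs end to end; in practice you would apply this to the real diagnostic model and images.

```python
# Minimal Integrated Gradients sketch with placeholder model and image.
import tensorflow as tf

# Placeholder for a real chest X-ray classifier that outputs P(cancerous).
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=(224, 224, 1)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

def integrated_gradients(model, image, baseline, steps=50):
    """Attribute the model's output to each pixel of `image`."""
    # Interpolate between a baseline (here, an all-black image) and the input.
    alphas = tf.reshape(tf.linspace(0.0, 1.0, steps + 1), (-1, 1, 1, 1))
    interpolated = baseline + alphas * (image - baseline)
    with tf.GradientTape() as tape:
        tape.watch(interpolated)
        preds = model(interpolated)[:, 0]
    grads = tape.gradient(preds, interpolated)
    # Average gradients along the path (trapezoidal rule), scale by input delta.
    avg_grads = tf.reduce_mean((grads[:-1] + grads[1:]) / 2.0, axis=0)
    return (image - baseline)[0] * avg_grads

xray = tf.random.uniform((1, 224, 224, 1))   # stand-in for a real X-ray
baseline = tf.zeros_like(xray)
attributions = integrated_gradients(model, xray, baseline)
# High-attribution pixels show where the model "looked" -- in the scenario above,
# they would light up on the pen markings rather than the pathology.
print(attributions.shape)  # (224, 224, 1)
```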

Explainability gives teams visibility into the inner workings of the model, which in turn allows them to fix things that are incorrect. As AI penetrates our lives more deeply, there is a growing demand to make its decisions understandable by humans, which is why we’re seeing regulations like GDPR state that customers have ‘a right to explanations for any automated decision’. Similar laws are being introduced in countries like the United States. All these regulations are meant to help increase trust in AI models, and explainable AI platforms like Fiddler can provide this visibility so that companies accelerate their adoption of AI that is not only efficient but also trustworthy.

What we do at Fiddler
At Fiddler, our mission is to unlock trust, visibility, and insights for the enterprise by grounding our AI Engine in Explainability. This ensures that anyone affected by AI technology can understand why decisions were made and can verify that AI outputs are ethical, responsible, and fair.

We do this by providing:

  • AI Awareness: understand, explain, analyze, and validate the why and how behind AI predictions to deliver explainable decisions to the business and its end users
  • AI Performance: continuously monitor production, test, or training AI models to ensure high performance while iterating on and improving models based on explanations (one simple monitoring check is sketched after this list)
  • AI Compliance: with AI regulations becoming more common, ensure industry compliance with the ability to audit AI predictions and track and remove inherent bias
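For the monitoring piece, here is one simple way such a check can work. This is a generic sketch, not Fiddler’s implementation: compare each feature’s distribution in live traffic against its training distribution and flag drift, which is often the first sign that model performance is about to degrade.

```python
# Generic input-drift check: two-sample Kolmogorov-Smirnov test per feature.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training = {
    "age": rng.uniform(20, 60, 10_000),
    "income": rng.normal(70_000, 20_000, 10_000),
}
# Simulated production traffic where the age distribution has shifted upward.
production = {
    "age": rng.uniform(30, 80, 1_000),
    "income": rng.normal(70_000, 20_000, 1_000),
}

for feature in training:
    stat, p_value = ks_2samp(training[feature], production[feature])
    drifted = p_value < 0.01   # alert threshold; tune for your traffic volume
    print(f"{feature}: KS={stat:.3f}, p={p_value:.3g}, drifted={drifted}")
```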

Fiddler is a Pluggable Explainable AI Engine - what does this mean?
Fiddler works across multiple datasets and custom-built models. Customers bring their data, stored on any platform - Salesforce, HDFS, Amazon, Snowflake, and more - and/or their custom models built using Scikit-Learn, XGBoost, Spark, TensorFlow, PyTorch, SageMaker, and more, to the Fiddler Engine. Fiddler sits on top of these models and data to explain how the models are working, delivering trusted insights through our APIs, reports, and dashboards.
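“Pluggable” here means framework-agnostic. The sketch below illustrates the general idea with open-source tools (it is not Fiddler’s API): a model-agnostic explainer only needs a prediction callable and some reference data, so the same code path works whether the model behind it was built with scikit-learn, XGBoost, TensorFlow, or anything else.

```python
# Model-agnostic explanation: only a predict callable and reference data needed.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_ref = rng.normal(size=(200, 4))                 # reference/training sample
y_ref = (X_ref[:, 0] + X_ref[:, 2] > 0).astype(int)

sklearn_model = RandomForestClassifier(n_estimators=50).fit(X_ref, y_ref)

def predict_fn(x: np.ndarray) -> np.ndarray:
    # Any framework can sit behind this callable; only the signature matters.
    return sklearn_model.predict_proba(x)[:, 1]

# KernelExplainer treats predict_fn as a black box, so swapping in an XGBoost,
# TensorFlow, or PyTorch model changes nothing else in this workflow.
explainer = shap.KernelExplainer(predict_fn, shap.sample(X_ref, 50))
attributions = explainer.shap_values(X_ref[:1], nsamples=200)
print(attributions)   # per-feature contribution to this one prediction
```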

Our Explainable AI Engine meets the needs of multiple stakeholders in the AI lifecycle: from data scientists and business analysts to model regulators and business operations teams.

Our rock-solid team
Our team comes from companies and universities like Facebook, Google Brain, Lyft, Twitter, Pinterest, Microsoft, Nutanix, Samsung, Georgia Tech, and Stanford. We’re working with experts from industry and academia to create the first true Explainable AI Engine and to address the business risks companies face around user safety, non-compliance, black-box AI, and brand reputation.

As the Engineering Lead on Facebook’s AI-driven News Feed, I saw just how useful explanations were for Facebook users as well as internal teams, from engineering all the way up to senior leadership. My co-founder, Amit Paka, had a similar vision when he was working on AI-driven product recommendations in shopping apps at Samsung.

Since our inception in October 2018, we’ve significantly grown the team to include other explainability experts like Ankur Taly, who co-created the popular Integrated Gradients explainability method while at Google Brain.

As we continue on our hyper-growth trajectory, we’re expanding both the engineering and business teams and hiring more experts to ensure the Fiddler Explainable AI Engine remains the best in its category.

We’re super excited to continue this mission and ensure that AI is explainable in every enterprise! 

Want to join our team? We’re hiring! Check our open positions here.