Series A: Why now and what’s next for Fiddler?

In the news: VentureBeat, Axios Pro Rata (Venture Capital Deals), TechStartups, SiliconAngle, peHUB, Finsmes, Analytics India Magazine 

It has been a year since we founded Fiddler Labs, and the journey so far has been incredible. I’m very excited to announce that we’ve raised a $10.2 million Series A funding round led by Lightspeed Venture Partners and Lux Capital, with participation from Haystack Ventures and Bloomberg Beta. Our first year in business has been super awesome, to say the least. We’ve built a unique Explainable AI Engine, put together a rock-solid team, and brought in customers from both the Enterprise and startup worlds. 

As we ramped up Fiddler over the last year, one thing stood out: many Enterprises choose not to deploy AI solutions today due to the lack of explainability. They understand the value AI provides, but they struggle with the ‘why’ and the ‘how’ when using traditional black-box AI because most applications today are not equipped with Explainable AI. It’s the missing link when it comes to AI solutions making it to production. 

Why Explainable AI? 
We get asked this a lot, since Explainable AI is still not a household term and not many companies understand what it actually means. So I wanted to ‘explain’ Explainable AI with a couple of examples.

Credit lending

Let’s consider the case where an older female customer (age 65+) wants a credit line increase and reaches out to her bank to request it. The bank uses an AI credit lending model to score the request. The model returns a low score and the customer’s request is denied. The bank representative dealing with the customer has no idea why the model denied the request. When they follow up internally, they might find that there is a proxy bias built into the model because of the lack of examples in the training data representing older females. Before you get alarmed: this is a hypothetical situation, as banks go through a very diligent Model Risk Management process, per guidelines such as SR-11-7, ECOA, and FCRA, to vet their models before launching them. However, those tools and processes were built for the much simpler quantitative models that banks have been using for decades to process these requests. As banks and other financial institutions look to move toward AI-based underwriting and lending models, they need tools like Fiddler. If the same AI model were run through Fiddler’s Explainable AI Engine, the team would quickly realize that the request was denied because this older customer is considered an outlier. Explainability shows that the training data used for the model was age-constrained, limited to a range of 20-60 year olds. 
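
To make this concrete, here is a toy sketch (not Fiddler’s engine or API) of the kind of per-feature attribution that can explain a single denied request. The data, the model, and the 68-year-old applicant are hypothetical, and SHAP stands in here for whatever attribution method is used:

```python
import numpy as np
import pandas as pd
import shap
import xgboost as xgb

rng = np.random.default_rng(0)
n = 5000
# Hypothetical training data, limited to ages 20-60 as in the scenario above.
train = pd.DataFrame({
    "age": rng.integers(20, 61, n),
    "income": rng.normal(70_000, 20_000, n),
    "utilization": rng.uniform(0, 1, n),
})
approved = (train["income"] / 100_000 - train["utilization"]
            - (train["age"] - 40) / 100
            + rng.normal(0, 0.1, n) > 0).astype(int)
model = xgb.XGBClassifier(n_estimators=100, max_depth=3).fit(train, approved)

# A 68-year-old applicant sits outside the training distribution for "age".
applicant = pd.DataFrame({"age": [68], "income": [85_000], "utilization": [0.2]})
attributions = shap.TreeExplainer(model).shap_values(applicant)
for feature, value in zip(applicant.columns, attributions[0]):
    print(f"{feature}: {value:+.3f}")  # per-feature push toward approve/deny
```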

Cancer prediction

Let’s consider the case where a deep neural network model is used to make cancer predictions from chest X-ray data. Trained on this data, the model predicts that certain X-rays are cancerous. We can use an explainability method that highlights regions in the X-ray to ‘explain’ why an X-ray was flagged as cancerous. What was discovered here is very interesting – the model predicted that the image was cancerous because of the radiologist’s pen markings rather than the actual pathology in the image. This shows just how important it is to have explanations for any AI model. They tell you the why behind any prediction so you, as a human, know exactly why that prediction was made and can course-correct when needed. In this case, because of explanations, we realized that the prediction was based on something completely irrelevant to actual cancer. 
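
As a rough illustration of how such region highlighting can work, the sketch below computes a simple gradient-based saliency map. The trained Keras model and the X-ray tensor are assumptions, and this is not necessarily the specific method used in the study described above:

```python
import tensorflow as tf

def saliency_map(model, image):
    """Per-pixel |gradient| of the 'cancerous' score w.r.t. the input image."""
    image = tf.convert_to_tensor(image)
    with tf.GradientTape() as tape:
        tape.watch(image)
        score = model(image)[:, 0]        # assumed: column 0 is the cancer score
    grads = tape.gradient(score, image)   # d(score) / d(pixel)
    return tf.reduce_max(tf.abs(grads), axis=-1)

# Usage (hypothetical): heatmap = saliency_map(model, xray)
# Overlaying `heatmap` on the X-ray shows whether the model is focusing on
# pen markings rather than the pathology itself.
```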

Explainability gives teams visibility into the inner workings of the model, which in turn allows them to fix things that are incorrect. As AI penetrates our lives more deeply, there is a growing demand to make its decisions understandable by humans, which is why we’re seeing regulations like GDPR stating that customers have ‘a right to explanations for any automated decision’. Similar laws are being introduced in countries like the United States. All these regulations are meant to help increase trust in AI models, and explainable AI platforms like Fiddler can provide this visibility so that companies can accelerate their adoption of AI that is not only efficient but also trustworthy. 

What we do at Fiddler
At Fiddler, our mission is to unlock trust, visibility, and insights for the Enterprise by grounding our AI Engine in Explainability. This ensures that anyone affected by AI technology can understand why decisions were made and that AI outputs are ethical, responsible, and fair. 

We do this by providing:

  • AI Awareness: understand, explain, analyze, and validate the why and how behind AI predictions to deliver explainable decisions to the business and its end users
  • AI Performance: continuously monitor production, test, or training AI models to ensure high performance while iterating on and improving models based on explanations
  • AI Compliance: with AI regulations becoming more common, ensure industry compliance with the ability to audit AI predictions and track and remove inherent bias

Fiddler is a Pluggable Explainable AI Engine – what does this mean?
Fiddler works across multiple datasets and custom-built models. Customers bring in their data, stored on any platform (Salesforce, HDFS, Amazon, Snowflake, and more), and/or their custom models built with Scikit-Learn, XGBoost, Spark, TensorFlow, PyTorch, Sagemaker, and more, to the Fiddler Engine. Fiddler works on top of these models and data to explain how the models behave and to deliver trusted insights through our APIs, reports, and dashboards. 
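
To illustrate why a pluggable engine is feasible at all (this is not Fiddler’s actual API), note that model-agnostic explainers only need a prediction callable, so a model from any of the frameworks above can sit behind the same interface. Here is a minimal sketch using LIME with a scikit-learn model:

```python
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(data.data, data.target)

# The explainer only sees training data and a predict function, so a model
# from any framework could be plugged in behind the same interface.
explainer = LimeTabularExplainer(
    data.data,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)
explanation = explainer.explain_instance(data.data[0], model.predict_proba, num_features=5)
print(explanation.as_list())  # top features and their local weights
```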

Our Explainable AI Engine meets the needs of multiple stakeholders across the AI lifecycle: from data scientists and business analysts to model regulators and business operations teams. 

Our rock-solid team
Our team comes from companies and universities like Facebook, Google Brain, Lyft, Twitter, Pinterest, Microsoft, Nutanix, Samsung, Georgia Tech, and Stanford. We’re working with experts from industry and academia to create the first true Explainable AI Engine and solve business risks for companies around user safety, non-compliance, black-box AI, and brand risk.

As the Engineering Lead on Facebook’s AI-driven News Feed, I saw first-hand how useful explanations were for Facebook users as well as internal teams: from engineering all the way up to senior leadership. My co-founder, Amit Paka, had a similar vision when he was working on AI-driven product recommendations in shopping apps at Samsung. 

Since our inception back in October 2018, we’ve significantly grown the team to include other Explainability experts like Ankur Taly, who was one of the co-creators of the popular Integrated Gradients explainability method when he was at Google Brain.
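
For readers curious about the method itself, here is a minimal sketch of Integrated Gradients for a differentiable model over a flat feature vector; the model, input, and baseline are assumptions, and a production implementation would add batching and convergence checks:

```python
import tensorflow as tf

def integrated_gradients(model, baseline, input_, steps=50):
    """Attribution_i = (x_i - x'_i) * integral of dF/dx_i along the path x' -> x."""
    # Interpolate between the baseline x' and the input x in `steps` increments.
    alphas = tf.linspace(0.0, 1.0, steps + 1)[:, tf.newaxis]
    interpolated = baseline + alphas * (input_ - baseline)
    with tf.GradientTape() as tape:
        tape.watch(interpolated)
        scores = model(interpolated)[:, 0]   # assumed: the score being explained
    grads = tape.gradient(scores, interpolated)
    # Trapezoidal approximation of the path integral of gradients.
    avg_grads = tf.reduce_mean((grads[:-1] + grads[1:]) / 2.0, axis=0)
    return (input_ - baseline) * avg_grads
```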

As we continue on our hyper-growth trajectory, we’re expanding both the engineering and business teams and hiring more experts to ensure the Fiddler Explainable AI Engine remains the best in its category.

We’re super excited to continue this mission and ensure that AI is explainable in every enterprise! 

Want to join our team? We’re hiring! Check our open positions here.

Join us next week at TwiML Con

Fiddler will be at the very first TwiML conference next week on October 1 & 2! It’s a new conference hosted by the amazing folks at TwiML, and we can’t wait to explore and learn about the latest and greatest in enterprise AI. 

At Fiddler, our mission is to enable businesses to deliver trustworthy and responsible AI experiences by unlocking the AI black box. 

Where to find us

1) October 1, 11:20-11:45am, Robertson 2

Session: Why and how to build Explainability into your ML workflow

Join our CEO & Founder, Krishna Gade, to learn how Explainable AI is the best way for companies to deal with the business risks associated with deploying AI, especially in regulation- and compliance-heavy industries. Krishna comes from a data and explainability background, having led the team that built Explainable AI at Facebook.

2) October 1 & 2, Community Hall, Booth #6

Come chat with us about: 

  • Why it’s important to provide transparent, reliable and accountable AI experiences
  • Risks associated with lack of visibility into AI behavior
  • How to understand, manage, analyze & validate models using explainability 

Schedule a time to connect with us

If you’d like to set up a meeting beforehand, fill out this meeting form and we’ll be in touch to finalize dates & times. We’re excited to chat with you!

See you next week!

Fiddler at O’Reilly AI Conference Sept 11 & 12

The San Jose O’Reilly Artificial Intelligence conference is almost upon us. We’ll hear about the latest innovations in machine learning and AI from top researchers, developers, and CxOs innovating in AI. 

Fiddler’s very own Ankur Taly, Head of Data Science, will be speaking on September 12 on Explaining Machine Learning Models. Ankur is well-known for his contributions to developing and applying Integrated Gradients, a new interpretability algorithm for Deep Neural Networks. He has a broad research background and has published in several areas including Computer Security and Machine Learning. We hope to see you at his session!

At Fiddler, our mission is to enable businesses of all sizes to unlock the AI black box and deliver trustworthy and responsible AI experiences. Come chat with us about: 

  • Risks associated with not having visibility into model outputs
  • Most innovative ways to understand, manage, and analyze your ML models
  • Importance of Explainable AI and providing transparent and reliable experiences to end users

Schedule a time to connect with us

If you’d like to set up a meeting beforehand, then fill out this meeting form and we’ll be in touch. We’re excited to chat with you!

Where to find us

September 11 & 12

We’ll be in the Innovator Pavilion: Booth #K10, so stop by and say hi! 

September 12

Join Ankur Taly, our Head of Data Science, at his session on Explaining Machine Learning Models: 2:35-3:15pm, September 12, LL21 A/B

As machine learning models get deployed to high-stakes tasks like medical diagnosis, credit scoring, and fraud detection, an overarching question arises: why did the model make this prediction? This talk will discuss techniques for answering this question, and applications of these techniques in interpreting, debugging, and evaluating machine learning models.

See you next week!

Welcome Anusha Sethuraman!

We’re excited to introduce Anusha Sethuraman, the newest member of our team. Anusha joins us as our Head of Product Marketing.

Anusha comes from a diverse product marketing background across startups and enterprises, most recently on Microsoft’s AI Thought Leadership team, where she spearheaded the team’s storytelling strategy, with stories featured in CEO- and exec-level keynotes. Before this, she was at Xamarin (acquired by Microsoft) leading enterprise product marketing, where she launched Xamarin’s first decision-maker event and was instrumental in creating the integrated Microsoft + Xamarin story. And prior to that, she was at New Relic (pre-IPO) leading product marketing for New Relic’s mobile monitoring product.

Anusha believes in a world where AI is responsible, ethical, and understandable. In her own words:

“The idea of democratizing AI is great, but even better – democratizing AI that has ethics and responsibility inbuilt. Today’s AI-powered world is nowhere close to being trustworthy: we still run into everyday instances of not knowing the why and how behind the decisions AI generates. Fiddler’s bold ambitions to create a world where technology is built responsibly, where humanity is not only putting AI to the best use possible across all industries and scenarios but creating this ethically and responsibly right from the start is something I care about deeply.  I’m very excited to be joining Fiddler to lead Product Marketing and work towards building an AI-powered world that is understandable, transparent, explainable, and secure.”

Anusha Sethuraman

Regulations To Trust AI Are Here. And It’s a Good Thing.

This article was previously posted on Forbes.

As artificial intelligence (AI) adoption grows, so do the risks of today’s typical black-box AI. These risks include customer mistrust, brand risk and compliance risk. As recently as last month, concerns about AI-driven facial recognition that was biased against certain demographics resulted in a PR backlash. 

With customer protection in mind, regulators are staying ahead of this technology and introducing the first wave of AI regulations meant to address AI transparency. This is a step in the right direction in terms of helping customers trust AI-driven experiences while enabling businesses to reap the benefits of AI adoption.

This first group of regulations relates to the understanding of an AI-driven, automated decision by a customer. This is especially important for key decisions like lending, insurance and health care but is also applicable to personalization, recommendations, etc.

The General Data Protection Regulation (GDPR), specifically Articles 13 and 22, was the first regulation about automated decision-making that states anyone given an automated decision has the right to be informed and the right to a meaningful explanation. According to clause 2(f) of Article 13:

“[Information about] the existence of automated decision-making, including profiling … and … meaningful information about the logic involved [is needed] to ensure fair and transparent processing.”

One of the most frequently asked questions is what the “right to explanation” means in the context of AI. Does “meaningful information about the logic involved” mean that companies have to disclose the actual algorithm or source code? Would explaining the mechanics of the algorithm really be helpful to individuals? It might make more sense to provide information on what inputs were used and how they influenced the output of the algorithm. 

For example, if a loan application or insurance claim is denied using an algorithm or machine learning model, under Articles 13 and 22, the loan or insurance officer would need to provide specific details about the impact of the user’s data on the decision. Or, they could provide the general parameters of the algorithm or model used to make that decision.

Similar laws working their way through the U.S. state legislatures of Washington, Illinois, and Massachusetts are:

  • WA House Bill 1655, which establishes guidelines for “the use of automated decision systems in order to protect consumers, improve transparency, and create more market predictability.”
  • MA Bill H.2701, which establishes a commission on “automated decision-making, artificial intelligence, transparency, fairness, and individual rights.”
  • IL HB3415, which states that “predictive data analytics in determining creditworthiness or in making hiring decisions…may not include information that correlates with the race or zip code of the applicant.”

Fortunately, advances in AI have kept pace with these needs. Recent research in machine learning (ML) model interpretability makes compliance with these regulations feasible. Cutting-edge techniques like Integrated Gradients from Google Brain, along with SHAP and LIME from the University of Washington, enable unlocking the AI black box to get meaningful explanations for consumers. 

Ensuring fair automated decisions is another related area of upcoming regulations. While there is no consensus in the research community on the right set of fairness metrics, some approaches like equality of opportunity are already required by law in use cases like hiring. Integrating AI explainability in the ML lifecycle can also help provide insights for fair and unbiased automated decisions. Assessing and monitoring these biases, along with data quality and model interpretability approaches, provides a good playbook towards developing fair and ethical AI.
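
As a concrete example of one such fairness metric, equality of opportunity asks whether qualified individuals receive favorable decisions at the same rate across groups. The sketch below, with hypothetical arrays, computes that gap:

```python
import numpy as np

def true_positive_rate(y_true, y_pred):
    positives = y_true == 1
    return (y_pred[positives] == 1).mean()

def equal_opportunity_gap(y_true, y_pred, group):
    """Difference in TPR between two groups; 0 means equal opportunity holds."""
    gap = abs(true_positive_rate(y_true[group == 0], y_pred[group == 0])
              - true_positive_rate(y_true[group == 1], y_pred[group == 1]))
    return gap

# Hypothetical usage with numpy arrays of labels, predictions, and group ids:
# gap = equal_opportunity_gap(y_true, y_pred, group)
# A gap of 0.15 would mean qualified applicants in one group receive favorable
# decisions 15 percentage points less often than in the other.
```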

The recent June 26 US House Committee hearing is a sign that financial services firms need to get ready for upcoming regulations that ensure transparent AI systems. All these regulations will help increase trust in AI models and accelerate their adoption across industries toward the longer-term goal of trustworthy AI.