
Fiddler Recognized as a Representative Vendor in the 2021 Gartner® Market Guide for AI Trust, Risk and Security Management

Fiddler was named a Representative Vendor in the recently released 2021 Gartner Market Guide for AI Trust, Risk, and Security Management[1], in two pillars: Explainability and Data Anomaly Detection. According to Gartner, “This Market Guide defines new capabilities that data and analytics leaders must have to ensure model reliability, trustworthiness and security, and presents representative vendors who implement these functions.”[1]

Fiddler is identified as a Representative Vendor for Explainability

What is the Explainability pillar? According to the Market Guide report:

“Gartner defines Explainable AI as a set of capabilities that produce details or reasons that clarify a model’s functioning for a specific audience. Explainability describes a model, highlights its strengths and weaknesses, predicts its likely behavior and identifies any potential biases. It clarifies a model’s functioning to a specific audience to enable accuracy, fairness, accountability, stability and transparency in algorithmic decision making.”[1]

Explainability is especially important with the rise of new regulations. According to the key findings in the report, “regulators and lawmakers across the globe — including the U.S. and EU — are issuing guidance and announcing upcoming laws that intend to regulate the use of AI for fairness and transparency. The EU proposes to issue steep fines, up to 6% of annual revenue, to companies who do not comply.”[1]

Fiddler provides Explainable AI as part of its Model Performance Management platform to empower its users with observability into their machine learning models. With cutting-edge AI explainability techniques like Shapley Values and Integrated Gradients, model practitioners and stakeholders get insight into why a model behaved the way it did and how each feature contributed to the outcome, either for single predictions or across an entire segment of the data. Fiddler helps teams feel confident that they are using AI fairly and transparently — and in compliance with all regulations.  

Fiddler is identified as a Representative Vendor for Data Anomaly Detection

What is data anomaly detection, and why is it important for machine learning teams? According to the 2021 Gartner Market Guide for AI Trust, Risk, and Security Management report:

“Monitoring AI production data for drift, bias, attacks, data entry and process mistakes is key to achieving optimal AI performance, and protecting organizations from malicious attacks. Data monitoring tools support alerts on specific models or correlated models, as they analyze weighted data drift or degradation of important features. Model accuracy is measured once a prediction is made so that model performance can be monitored over time. Ideally, data issues and anomalies are highlighted and alerted before model decisions are executed.” [1]

Fiddler’s Model Performance Management platform monitors your machine learning models in production to prevent data anomaly issues. Fiddler is designed to detect issues in the data and flag them immediately, before they affect your users or your business:

  • Use data drift detection to maintain quality and know if and when you need to retrain your models (a minimal drift check is sketched after this list)
  • Debug issues with data integrity, whether caused by incorrect values at the source, or faulty data engineering
  • Dig into outliers in the data to understand if they’re the result of adversarial threats
  • Assess for bias and fairness with intersectionality built in 

Fiddler’s ML monitoring gives your team a shared dashboard for managing alerts, and plugs into your existing tools for managing models and data. What’s more, every alert in Fiddler is backed by Explainable AI, so you can root-cause the issue, understand the impact, and achieve a fast resolution.

Most AI systems are a black box. Fiddler is built for performance at scale, giving teams visibility into every stage of model development and creating a culture of accountability. Request a demo and learn how we can help your team build trust with AI. Start the conversation today.

———

[1] Gartner, “Market Guide for AI Trust, Risk, and Security Management,” Avivah Litan, Farhan Choudhary, Jeremy D'Hoinne, September 1, 2021. 

Gartner Disclaimer:

GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.