Featured

Explainable Monitoring: Stop flying blind and monitor your AI

Data Science teams find Explainable Monitoring essential to manage their AI. The Need for AI/ML Monitoring: We’re living in unprecedented times where, in a matter of a few weeks, things changed dramatically for many people and businesses around the world. With COVID-19 spreading across the globe, and … Continue reading “Explainable Monitoring: Stop flying blind and monitor your AI”

Identifying bias when sensitive attribute data is unavailable: Geolocation in Mortgage Data

In our last post, we explored data on mortgage applicants from 2017 released in accordance with the Home Mortgage Disclosure Act (HMDA). We will use that data, which includes self-reported race of applicants, to test how well we can infer race using applicants’ geolocations in our effort to better understand methods to infer missing sensitive … Continue reading “Identifying bias when sensitive attribute data is unavailable: Geolocation in Mortgage Data”
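The approach the post describes — inferring a missing sensitive attribute from geolocation — can be sketched as a simple geographic proxy method (the general idea behind techniques like BISG). A minimal illustration, where the tract-level proportions and tract names are made-up placeholders rather than real census or HMDA figures:

```python
# Sketch of geolocation-based proxy inference for a missing sensitive
# attribute. The tract-level proportions below are illustrative
# placeholders, not real census or HMDA data.

# P(race | census tract), e.g. derived from public census tables.
TRACT_RACE_PROPORTIONS = {
    "tract_A": {"white": 0.70, "black": 0.10, "asian": 0.15, "hispanic": 0.05},
    "tract_B": {"white": 0.20, "black": 0.55, "asian": 0.05, "hispanic": 0.20},
}

def infer_race_probabilities(tract_id):
    """Return a proxy probability distribution over race, given only
    the census tract of the mortgaged property."""
    return TRACT_RACE_PROPORTIONS[tract_id]

def most_likely_race(tract_id):
    """Pick the highest-probability group from the proxy distribution."""
    probs = infer_race_probabilities(tract_id)
    return max(probs, key=probs.get)

print(most_likely_race("tract_A"))  # -> white
print(most_likely_race("tract_B"))  # -> black
```

Because HMDA data includes self-reported race, it lets you score such a proxy against ground truth — which is exactly the test the post sets up.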

Explainable Churn Analysis with MemSQL and Fiddler

Fiddler and MemSQL are partnering to offer the power of MemSQL to users of Fiddler’s toolset for explainable AI – and to offer Fiddler’s explainability tools to the many MemSQL customers who are already using, or moving to, operational AI. To this end, the two companies are offering new, efficient ways to connect MemSQL self-managed … Continue reading “Explainable Churn Analysis with MemSQL and Fiddler”

Identifying bias when sensitive attribute data is unavailable: Exploring Data from the HMDA

To test their automated systems for possible bias across racial or gender lines, organizations may seek to know which individuals belong to each race and gender group. However, such information may not be easily accessible, and organizations may use techniques to infer such information in the absence of available data [1]. Here, we explore a … Continue reading “Identifying bias when sensitive attribute data is unavailable: Exploring Data from the HMDA”

[Video] AI Explained: What are Integrated Gradients?

We started a video series with quick, short snippets of information on Explainability and Explainable AI. The second in this series covers Integrated Gradients – what the method is and where it applies. Learn more in the ~10-minute video below. The first in the series, on Shapley values, can be watched here. What … Continue reading “[Video] AI Explained: What are Integrated Gradients?”
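For readers who prefer code to video, the core of Integrated Gradients is a path integral of the model’s gradients from a baseline input to the actual input. A minimal numerical sketch, using a toy function and finite-difference gradients in place of a trained network and autodiff (the function, inputs, and baseline here are illustrative assumptions):

```python
import numpy as np

def f(x):
    # Toy "model": a smooth scalar function of the input vector.
    return np.sum(x ** 2)

def numerical_grad(func, x, eps=1e-5):
    # Central-difference gradient, standing in for framework autodiff.
    grad = np.zeros_like(x)
    for i in range(len(x)):
        step = np.zeros_like(x)
        step[i] = eps
        grad[i] = (func(x + step) - func(x - step)) / (2 * eps)
    return grad

def integrated_gradients(func, x, baseline, steps=100):
    # Riemann-sum (midpoint) approximation of the integral of gradients
    # along the straight line from the baseline to the input x.
    alphas = (np.arange(steps) + 0.5) / steps
    total = np.zeros_like(x)
    for a in alphas:
        total += numerical_grad(func, baseline + a * (x - baseline))
    return (x - baseline) * total / steps

x = np.array([1.0, 2.0])
attributions = integrated_gradients(f, x, baseline=np.zeros_like(x))
# Completeness axiom: attributions sum to f(x) - f(baseline) = 5.
print(attributions, attributions.sum())
```

The final check illustrates the method’s completeness property: per-feature attributions add up to the difference between the model’s output at the input and at the baseline.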

Webinar: Why monitoring is critical to successful AI deployments

Even as AI provides significant benefits – like creating new revenue opportunities, increasing productivity, decreasing costs, and fostering innovation in outdated business models – there is significant potential for unintended harm from outcomes that are unclear, biased, or non-compliant. Companies often discover AI and ML performance issues only after the damage has been done, which … Continue reading “Webinar: Why monitoring is critical to successful AI deployments”