Welcome Rob Harrell!

We’re excited to introduce Rob Harrell, the newest member of our team. Rob joins us as our first Product Manager.

Rob has product experience in enterprise software, machine learning, and fintech. Prior to joining Fiddler, he was the product manager for Square’s machine learning platform, where he oversaw the development of infrastructure and tools to host Square’s variety of ML pipelines, ranging from high-scale real-time fraud detection and loan underwriting to customer analytics and marketing use cases. Prior to Square, Rob was the first product manager on Microsoft’s Power Virtual Agents, a no-code tool for building chatbots incubated in Microsoft Research and launched under the Power Platform suite of products.

Rob is passionate about accelerating adoption of AI in a trustworthy, ethical manner. In his words:

“While managing Square’s machine learning platform, I saw first-hand all of the challenges organizations face when attempting to apply ML to their businesses. To build a production-caliber model, teams must first hire an AI expert, discover the right business problem, track down and wrangle the appropriate data, and experiment until reaching a high level of performance. To then make use of that model in a production setting, teams must deploy it to a hosting system that can replicate the data transformations used for training and serve and monitor predictions at scale. Often, when these systems produce unexpected or undesirable results, there isn’t an immediate explanation why.

On top of these challenges, regulatory and fairness risks with AI systems, along with their potentially devastating consequences (including PR issues, regulatory probes, and fines), loom like dark storm clouds over business decision makers attempting to leverage AI. Fortunately, there is an answer to these risks: Explainable AI. With Explainable AI, or the ability to understand and explain a model’s outputs with respect to its underlying data, businesses can better understand model behavior and guard themselves against regulatory and fairness risks. I couldn’t be more excited to join Fiddler to build the enterprise AI explainability engine and thus empower businesses to more confidently deploy and manage their AI systems. I hope that through explainability we will not only accelerate the adoption of AI but also strengthen general trust in and acceptance of AI systems.”

Rob

Regulations To Trust AI Are Here. And It’s a Good Thing.

This article was previously posted on Forbes.

As artificial intelligence (AI) adoption grows, so do the risks of today’s typical black-box AI. These risks include customer mistrust, brand risk and compliance risk. As recently as last month, concerns about AI-driven facial recognition that was biased against certain demographics resulted in a PR backlash. 

With customer protection in mind, regulators are staying ahead of this technology and introducing the first wave of AI regulations meant to address AI transparency. This is a step in the right direction in terms of helping customers trust AI-driven experiences while enabling businesses to reap the benefits of AI adoption.

This first group of regulations addresses a customer’s ability to understand an AI-driven, automated decision. This is especially important for key decisions like lending, insurance and health care but is also applicable to personalization, recommendations, etc.

The General Data Protection Regulation (GDPR), specifically Articles 13 and 22, was the first regulation on automated decision-making to state that anyone subject to an automated decision has the right to be informed and the right to a meaningful explanation. According to clause 2(f) of Article 13:

“[Information about] the existence of automated decision-making, including profiling … and … meaningful information about the logic involved [is needed] to ensure fair and transparent processing.”

One of the most frequently asked questions is what the “right to explanation” means in the context of AI. Does “meaningful information about the logic involved” mean that companies have to disclose the actual algorithm or source code? Would explaining the mechanics of the algorithm really be helpful to the individuals affected? It might make more sense to provide information on what inputs were used and how they influenced the output of the algorithm.

For example, if a loan application or insurance claim is denied using an algorithm or machine learning model, under Articles 13 and 22, the loan or insurance officer would need to provide specific details about the impact of the user’s data on the decision. Or, they could provide general parameters of the algorithm or model used to make that decision.

Similar laws working their way through the U.S. state legislatures of Washington, Illinois and Massachusetts are:

  • WA House Bill 1655, which establishes guidelines for “the use of automated decision systems in order to protect consumers, improve transparency, and create more market predictability.”
  • MA Bill H.2701, which establishes a commission on “automated decision-making, artificial intelligence, transparency, fairness, and individual rights.”
  • IL HB3415, which states that “predictive data analytics in determining creditworthiness or in making hiring decisions…may not include information that correlates with the race or zip code of the applicant.”

Fortunately, advances in AI have kept pace with these needs. Recent research in machine learning (ML) model interpretability makes compliance with these regulations feasible. Cutting-edge techniques like Integrated Gradients from Google Brain, along with SHAP and LIME from the University of Washington, make it possible to unlock the AI black box and provide meaningful explanations to consumers.
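
To make the idea concrete, here is a minimal sketch, not Fiddler’s product, of how per-applicant feature attributions could be produced with the open-source shap library for a loan-style model like the one described above. The feature names and synthetic data are illustrative placeholders, not real lending data.

```python
# Minimal sketch: per-applicant feature attributions for a hypothetical
# loan-approval model, using the open-source shap library.
# Feature names and data are illustrative placeholders.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "income": rng.normal(60_000, 15_000, 1_000),
    "debt_to_income": rng.uniform(0, 1, 1_000),
    "credit_history_years": rng.integers(0, 30, 1_000),
})
# Synthetic label: approvals loosely driven by income and debt ratio.
y = ((X["income"] / 100_000 - X["debt_to_income"]
      + rng.normal(0, 0.1, 1_000)) > 0).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer breaks each prediction into additive per-feature
# contributions relative to a baseline, which is the kind of
# "which inputs influenced the output" information a consumer could receive.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[[0]])

# Human-readable reasons for applicant 0: which inputs pushed the decision
# toward approval or denial, and by how much.
for feature, contribution in zip(X.columns, shap_values[0]):
    print(f"{feature}: {contribution:+.3f}")
```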

Ensuring fair automated decisions is another related area of upcoming regulation. While there is no consensus in the research community on the right set of fairness metrics, some approaches, like equality of opportunity, are already required by law in use cases like hiring. Integrating AI explainability into the ML lifecycle can also help provide insights for fair and unbiased automated decisions. Assessing and monitoring these biases, along with data quality and model interpretability approaches, provides a good playbook for developing fair and ethical AI.
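
As an illustration of one such fairness check, here is a small sketch that computes the equal-opportunity gap, the difference in true positive rates between two groups. The group labels, data, and the 5-point tolerance are assumptions for the example, not a regulatory standard or Fiddler’s method.

```python
# Minimal sketch of an equality-of-opportunity check: compare true positive
# rates (TPR) across two groups. Groups, data, and the 5% tolerance are
# illustrative assumptions only.
import numpy as np

def true_positive_rate(y_true, y_pred):
    # Share of actual positives that the model predicted as positive.
    positives = y_true == 1
    return (y_pred[positives] == 1).mean() if positives.any() else float("nan")

def equal_opportunity_gap(y_true, y_pred, group):
    # Absolute difference in TPR between the two groups in `group`.
    groups = np.unique(group)
    tprs = [true_positive_rate(y_true[group == g], y_pred[group == g])
            for g in groups]
    return abs(tprs[0] - tprs[1])

# Made-up outcomes and predictions for two groups, A and B.
y_true = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 0, 0, 0, 0])
group  = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

gap = equal_opportunity_gap(y_true, y_pred, group)
print(f"Equal-opportunity gap (TPR difference): {gap:.2f}")
if gap > 0.05:  # flag for review above an assumed 5-point tolerance
    print("Potential disparity: investigate the features driving the gap.")
```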

The recent June 26 U.S. House Committee hearing is a sign that financial services firms need to get ready for upcoming regulations that ensure transparent AI systems. All of these regulations will help increase trust in AI models and accelerate their adoption across industries toward the longer-term goal of trustworthy AI.