Featured

Fed Opens Up Alternative Data – More Credit, More Algorithms, More Regulation

A Dec. 4 joint statement released by the Federal Reserve Board, the Consumer Financial Protection Bureau (CFPB), the Federal Deposit Insurance Corporation (FDIC), the National Credit Union Administration (NCUA) and the Office of the Comptroller of the Currency (OCC) highlighted the importance of consumer protections when using alternative data (such as cash flow information) across a wide range of banking operations, including credit underwriting, fraud detection, marketing, pricing, servicing, and account management.

The agencies acknowledged that modeling approaches using these alternative data sources can both enhance the credit decision process, bringing in underserved consumers, and unlock pricing, offering, and repayment benefits for existing consumers.

Despite the potential benefits, the agencies also caution against using this new data in ways that are inconsistent with the current regulatory consumer protection framework of fair lending and fair credit reporting laws.

What is “Alternative Data”?

One example of alternative data is cash flow information calculated from a borrower's income and expenses. It can improve on predictions of the borrower's ability to repay that rely solely on traditional data points. However, consumers have to give the underwriter permission to use this data and must be able to request disclosures on how it is used. Used this way, alternative data allows consumers with irregular incomes, such as gig economy workers, to better access credit services.
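To make this concrete, here is a rough, hypothetical sketch of how such cash-flow features might be derived from permissioned transaction data. The joint statement does not prescribe any particular computation; the data and field names below are illustrative assumptions.

```python
import pandas as pd

# Hypothetical transaction history (inflows positive, outflows negative).
transactions = pd.DataFrame({
    "date": pd.to_datetime(["2019-10-03", "2019-10-15", "2019-11-02", "2019-11-20"]),
    "amount": [2500.0, -1800.0, 2650.0, -1900.0],
})

# Aggregate by calendar month to derive simple cash-flow features an
# underwriting model could consume alongside traditional bureau data.
monthly = transactions.set_index("date")["amount"].resample("M")
features = pd.DataFrame({
    "inflow": monthly.apply(lambda m: m[m > 0].sum()),
    "outflow": monthly.apply(lambda m: -m[m < 0].sum()),
    "net_cash_flow": monthly.sum(),
})
print(features)
```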

While the agencies did not provide guidance on which kinds of alternative data should be avoided (e.g., social media data), they strongly advocated the responsible use of any new data being considered.

Companies should thoroughly assess all alternative data against existing regulations. This requires a sound compliance management process that appropriately factors in the sensitivity of the data to protect consumers against risk.

What does this mean for businesses?

These guidelines are a potential boon for financial services firms, which have been competing for the same pool of consumers with traditional credit histories, because they unlock access to new, often proprietary data sources. On the other hand, leveraging new data sources at scale will likely require new techniques and algorithms for processing that data. Machine learning algorithms are an obvious choice as the size and variety of data grow. Indeed, technology-forward financial services enterprises have already adopted machine learning to solve these challenges and to better compete for a new pool of consumers. This joint statement empowers the rest of the industry to use similar approaches.

Enterprises scaling their machine learning operations to incorporate alternative data should address the associated AI risks (e.g., explaining adverse action notices, bias, and unfairness). A robust AI governance framework will help ensure they comply with the spirit of the statement.

When explanations are integrated into the AI workflow, from data selection and model development through validation, compliance, and monitoring, they address the gaps enterprises will otherwise face in ensuring consumers are protected and treated fairly.

The opening of new data sources for use by lenders is a great step toward democratizing access to credit for more consumers, while empowering the entire financial services and broader underwriting industries to build better solutions.

Featured

Explainable AI goes mainstream. But who should be explaining?

Bias in AI is an issue that has really come to the forefront in recent months — our recent blog post discussed the Apple Card/Goldman Sachs alleged bias issue. And this isn’t an isolated instance: Racial bias in healthcare algorithms and bias in AI for judicial decisions are just a few more examples of rampant and hidden bias in AI algorithms.

While AI has had dramatic successes in recent years, Fiddler Labs was started to address a critical issue: explainability in AI. Complex AI algorithms today are black boxes; while they can work well, their inner workings are unknown and unexplainable, which is why we have situations like the Apple Card/Goldman Sachs controversy. While gender or race might not be explicitly encoded in these algorithms, subtle and deep biases can creep into the data fed into them. Even if the input factors are not directly biased themselves, bias can be, and is being, inferred by AI algorithms.

Companies have no proof to show that a model is, in fact, not biased. On the other hand, there is substantial evidence of bias in some of the examples we have seen from customers. Complex AI algorithms are invariably black boxes, and if AI solutions are not designed with a foundational fix, we'll continue to see more such cases. Consider the biased healthcare algorithm example above: even with the intentional exclusion of race, the algorithm still behaved in a biased way, possibly because of inferred characteristics.

Prevention is better than cure

One of the main problems with AI today is that issues are detected after-the-fact, usually when people have already been impacted by them. This needs to be changed: explainability needs to be a fundamental part of any AI solution, right from design all the way to production – not just part of a post-mortem analysis. We need to have visibility into the inner workings of AI algorithms, as well as data, throughout the lifecycle of AI. And we need humans-in-the-loop monitoring these explainability results and overriding algorithm decisions where necessary.

What is explainable AI and how can it help?

Explainable AI is the best way to understand the why behind your AI. It tells you why a certain prediction was made and shows how inputs relate to outputs. From the training data used, through model validation, to testing and production, explainability plays a critical role.
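As a minimal sketch of what this looks like in practice, here is how per-prediction attributions can be computed with the open-source SHAP library on a hypothetical credit-style model (illustrative only, and not Fiddler's own engine):

```python
import shap
import xgboost as xgb
from sklearn.datasets import make_classification

# Hypothetical credit-style dataset and model, for illustration only.
X, y = make_classification(n_samples=1000, n_features=6, random_state=0)
model = xgb.XGBClassifier(n_estimators=50).fit(X, y)

# Each SHAP value is one feature's contribution to one specific prediction,
# i.e. an answer to "why did the model score this applicant this way?"
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])  # attributions for a single row
for i, contribution in enumerate(shap_values[0]):
    print(f"feature_{i}: {contribution:+.3f}")
```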

Explainability is critical to AI success

With the recent launch of Google Cloud's Explainable AI, the conversation around explainability has accelerated. Google's launch debunks something we've heard a lot recently: that Explainable AI is two years out. It shows that companies need to move fast and adopt explainability as part of their machine learning workflows, immediately.

But it raises the question: who should be doing the explaining?

What do businesses need in order to trust the predictions? First, we need explanations so we understand what’s going on behind the scenes. Then we need to know for a fact that these explanations are accurate and trustworthy, and come from a reliable source. 

At Fiddler, we believe there needs to be a separation between church and state. If Google builds AI algorithms and also explains them for customers, without third-party involvement, the incentives don't align for customers to completely trust those models. This is why impartiality and independent third parties are crucial: they provide that all-important independent opinion on algorithm-generated outcomes. It's a catch-22 for any company in the business of building AI models. This is why third-party AI governance and explainability services are not just nice-to-haves, but crucial for AI's evolution and use going forward.

Google’s change in stance on Explainability

Google has changed its stance on explainability significantly. It also went back and forth on AI ethics, starting an ethics board only to dissolve it in less than a fortnight. Over the last couple of years, a few top Google executives went on record saying that they don't necessarily believe in Explainable AI. For example, here they mention that '..we might end up being in the same place with machine learning where we train one system to get an answer and then we train another system to say – given the input of this first system, now it's your job to generate an explanation.' And here they say that 'Explainable AI won't deliver. It can't deliver the protection you're hoping for. Instead, it provides a good source of incomplete inspiration'.

One might wonder about Google’s sudden launch of an Explainable AI service. Perhaps they got feedback from customers or they see this as an emerging market. Whatever the reason, it’s great that they did in fact change their minds and believe in the power of Explainable AI. We are all for it.

Our belief at Fiddler Labs

We started Fiddler with the conviction that explainability should be built into the core of the AI workflow. We believe in a future where explanations not only provide much-needed answers for businesses on how their AI algorithms work, but also help ensure they are launching ethical and fair algorithms for their users. Our work with customers is bearing fruit as we go along this journey.

Finally, we believe that Ethics plays a big role in explainability because ultimately the goal of explainability is to ensure that companies are building ethical and responsible AI.  For explainability to succeed in its ultimate goal of ethical AI, we need an agnostic and independent approach, and this is what we’re working on at Fiddler.

Founded in October 2018, Fiddler is on a mission to enable businesses of all sizes to unlock the AI black box and deliver trustworthy AI experiences for their customers. Fiddler's next-generation Explainable AI Engine enables data science, product, and business users to understand, analyze, validate, and manage their AI solutions, providing transparent and reliable experiences to their end users. Our customers include pioneering Fortune 500 companies as well as emerging tech companies. For more information please visit www.fiddler.ai or follow us on Twitter @fiddlerlabs.
Featured

The never-ending issues around AI and bias. Who’s to blame when AI goes wrong?

We've seen it before, we're seeing it again now with the recent Apple and Goldman Sachs alleged credit card bias issue, and we'll very likely continue seeing it well into 2020 and beyond. Bias in AI is there, it's usually hidden (until it comes out), and it needs a foundational fix.

This past weekend we saw just how quickly the issue with the Apple Card, which is managed by Goldman Sachs, spiraled out of control. What started as a tweet thread with multiple reports of alleged bias (including from Apple's own co-founder, Steve Wozniak, and his spouse) eventually led to a regulator opening an investigation into Goldman Sachs and its algorithmic prediction practices.

Apple Card from Apple and Goldman Sachs

The problem

If we dig into the allegations to find the source of the problem, two issues stand out:

  1. The algorithm making credit decisions for the Apple card is biased
  2. The customer support teams from Goldman Sachs and Apple had zero insight into how the algorithm worked when they were asked to explain certain decisions

For (1) above, we saw multiple responses to the original tweet thread that corroborated the allegation. Multiple people reported a similar outcome: with all other input factors being the same (or in some cases stronger, such as a higher annual income or credit score) and gender being the only difference, they were given significantly lower credit limits than their male spouses. This definitely comes across as problematic. Following the allegations, a separate group of unrelated women and men ran a test experiment to check for bias and noticed significant differences in credit limits. Men with bad credit scores and irregular income got better offers than women with high incomes and good credit scores.

Was the algorithm biased against women? We can't say for sure, because we don't know what's going on inside the algorithm and cannot analyze the root cause. That is the key problem. But based on the external outcomes, we can only guess that this is likely what happened.

Impact of this problem

The primary issue here is the black-box algorithm behind Apple's credit lending decisions. As laid out in the tweet thread, Apple Card's customer service reps were rendered powerless against the algorithm's decisions. Not only did they have no insight into why certain decisions were made, they were unable to override them.

Humans don't want a future ruled by algorithms, especially biased ones. Algorithms permeate all aspects of our lives today, from lending and housing decisions to decisions in criminal justice. If we continue to let algorithms operate the way they do today, as black boxes without human oversight, we end up with a dystopian world where unfair decisions are made by unseen algorithms operating in the unknown.

The solution

How could this issue have been avoided or at least handled better? Let’s come back to the initial statement above on how bias in AI needs a foundational fix. It’s very likely that in this particular credit lending decision, the algorithm was trained on biased data to begin with. To better understand this, let’s look at the high-level lifecycle of an AI solution: 

  1. Identify a use case for AI (credit lending, criminal justice, cancer prediction, etc.)
  2. Access historical data to build models
  3. Import this data to train and test models
  4. Test and validate models 
  5. Deploy models into production 
  6. Monitor models in production to ensure optimal performance 

If the data is flawed to begin with, this flaw permeates everything the algorithm does going forward. What we need is a way to check for bias and other issues in both data and models through all stages of the AI lifecycle. We also need human oversight: AI is simply not ready to function on its own. We need humans-in-the-loop who will ensure that AI is functioning as it should.

In the Apple credit card example above, this issue could likely have been avoided if humans had visibility into every stage of the AI lifecycle. In the test and validation stage, they could have seen how the model behaved when a certain input factor was isolated and compared against the global dataset, as in the sketch below. They could also have overridden the algorithm's prediction in that stage if they felt it was unfair or incorrect. The result would have been an algorithm trained in the right way to produce accurate results in production.
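As a minimal sketch of that kind of validation check, on a hypothetical model and dataset rather than the actual Apple Card system, one can hold every other input fixed, flip a single factor, and measure how much the model's output moves:

```python
import numpy as np
import pandas as pd

def isolated_factor_gap(model, X: pd.DataFrame, column: str, value_a, value_b) -> float:
    """Hold all other features fixed, set `column` to each of two values,
    and return the average shift in the model's output (e.g. credit limit)."""
    X_a, X_b = X.copy(), X.copy()
    X_a[column] = value_a
    X_b[column] = value_b
    return float(np.mean(model.predict(X_a) - model.predict(X_b)))

# Hypothetical usage on a validation set; a gap far from zero is a signal
# for humans to investigate (and potentially override) before release:
# gap = isolated_factor_gap(credit_model, X_validation, "gender", "female", "male")
```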

How Fiddler solves for this

This is exactly what we're addressing at Fiddler. We're working to unlock the AI black box and empower all relevant stakeholders with more visibility into their AI than exists today. We're infusing visibility and insight, through explainable AI, into every stage of an AI solution's lifecycle: from the data and model training through deployment and production.

  • We dive into the details to explain each prediction a model makes – whether a training, test, or production model – so users can understand the why behind individual decisions.
  • Our goal is to empower users with easy-to-grasp explanations of AI decisions. This empowers different stakeholders in an organization: data scientists are empowered to build the best and most accurate models, risk officers are empowered to publish models with minimal risk, and customer support representatives are empowered to answer customer questions about the why behind decisions.
  • With Fiddler's explainable AI built into the AI lifecycle, teams can ensure they are compliant with regulations and are protecting their algorithms from inherent and hidden bias.

We're continuing to build capabilities into Fiddler to ensure explainability is infused throughout the AI lifecycle, and we're working with a variety of customers to build this functionality into their existing and new models. If you're interested in working with us, please reach out.

Featured

Series A: Why now and what’s next for Fiddler?

In the news: VentureBeat, Axios Pro Rata- Venture Capital Deals, TechStartups, SiliconAngle, peHUB, Finsmes, Analytics India Magazine 

It has been a year since we founded Fiddler Labs, and the journey so far has been incredible. I'm very excited to announce that we've raised our Series A funding round of $10.2 million, led by Lightspeed Venture Partners and Lux Capital, with participation from Haystack Ventures and Bloomberg Beta. Our first year in business has been super awesome, to say the least. We've built a unique Explainable AI Engine, put together a rock-solid team, and brought in customers from both the enterprise and startup worlds.

As we ramped up Fiddler over the last year, one thing that stood out was how many enterprises choose not to deploy AI solutions today due to the lack of explainability. They understand the value AI provides, but they struggle with the 'why' and the 'how' when using traditional black-box AI, because most applications today are not equipped with explainable AI. It's the missing link when it comes to AI solutions making it to production.

Why Explainable AI? 
We get this question a lot, since Explainable AI is still not a household term and not many companies understand what it actually means. So I wanted to 'explain' Explainable AI with a couple of examples.

Credit lending

Let's consider a case where an older customer (age 65+) wants a credit line increase and reaches out to the bank to request it. The bank uses an AI credit lending model to evaluate the request. The model returns a low score, and the customer's request is denied. The bank representative dealing with the customer has no idea why the model denied the request. When they follow up internally, they might find there is a proxy bias built into the model because of the lack of examples in the training data representing older customers. Before you get alarmed, this is a hypothetical situation: banks go through a very diligent model risk management process, per guidelines specified in SR 11-7, ECOA, and FCRA, to vet their models before launching them. However, those tools and processes were built for the much simpler quantitative models banks have used for decades to process these requests. As banks and other financial institutions move toward AI-based underwriting and lending models, they need tools like Fiddler. If the same AI model were run through Fiddler's Explainable AI Engine, the team would quickly realize that the loan was denied because this older customer is considered an outlier. Explainability shows that the training data used in the model was age-constrained: it was limited to an age range of 20 to 60.
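As a minimal, hypothetical sketch of the underlying check, here is one way to detect that an applicant falls outside the range the training data covered (the function and field names are illustrative, not Fiddler's API):

```python
import pandas as pd

def out_of_range_features(train: pd.DataFrame, applicant: dict, q=(0.01, 0.99)) -> dict:
    """Flag features where an applicant falls outside the range the model was
    trained on, e.g. an age of 68 against training data covering ages 20-60."""
    flags = {}
    for feature, value in applicant.items():
        lo, hi = train[feature].quantile(q[0]), train[feature].quantile(q[1])
        if not lo <= value <= hi:
            flags[feature] = {"value": value, "train_range": (float(lo), float(hi))}
    return flags

# Hypothetical usage against the bank's training table:
# out_of_range_features(training_df[["age", "annual_income"]], {"age": 68, "annual_income": 52000})
```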

Cancer prediction

Let's consider a case where a deep neural network model is used to make cancer predictions from chest X-ray data. The trained model predicts that certain X-rays are cancerous. We can use an explainability method that highlights regions in the X-ray to 'explain' why it was flagged as cancerous. What was discovered here is very interesting: the model predicted that the image was cancerous because of the radiologist's pen markings rather than the actual pathology in the image. This shows just how important it is to have explanations for any AI model. They tell you the why behind any prediction, so you, as a human, know exactly why that prediction was made and can course-correct when needed. In this case, because of explanations, we realized that the prediction was based on something completely irrelevant to actual cancer.
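One widely used attribution technique of this kind is Integrated Gradients, which accumulates gradients along a path from a neutral baseline image to the actual input. The PyTorch sketch below is a minimal illustration on a hypothetical classifier, not the code from the study described above:

```python
import torch

def integrated_gradients(model, image, target_class, baseline=None, steps=50):
    """Attribute a prediction to input pixels by accumulating gradients along a
    straight path from a baseline (e.g. an all-black image) to the actual input.
    High-attribution regions can be overlaid on the X-ray as a heatmap."""
    if baseline is None:
        baseline = torch.zeros_like(image)
    total_grads = torch.zeros_like(image)
    for alpha in torch.linspace(0.0, 1.0, steps):
        point = (baseline + alpha * (image - baseline)).requires_grad_(True)
        score = model(point.unsqueeze(0))[0, target_class]
        grad, = torch.autograd.grad(score, point)
        total_grads += grad
    return (image - baseline) * total_grads / steps

# Hypothetical usage: attributions = integrated_gradients(xray_model, xray_tensor, target_class=1)
```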

Explainability gives teams visibility into the inner workings of the model, which in turn allows them to fix things that are incorrect. As AI penetrates our lives more deeply, there is a growing demand to make its decisions understandable by humans, which is why we're seeing regulations like GDPR stating that customers have a right to an explanation for any automated decision. Similar laws are being introduced in countries like the United States. All these regulations are meant to help increase trust in AI models, and explainable AI platforms like Fiddler provide this visibility so that companies can accelerate their adoption of AI that is not only efficient but also trustworthy.

What we do at Fiddler
At Fiddler, our mission is to unlock trust, visibility, and insights for the Enterprise by grounding our AI Engine in Explainability. This ensures anyone who is affected by AI technology can understand why decisions were made and ensure AI outputs are ethical, responsible, and fair. 

We do this by providing

  • AI Awareness: understand, explain, analyze, and validate the why and how behind AI predictions to deliver explainable decisions to the business and its end users
  • AI Performance: continuously monitor production, test, or training AI models to ensure high-performance while iterating and improving models based on explanations
  • AI Compliance: with AI regulations becoming more common, ensure industry compliance with the ability to audit AI predictions and track and remove inherent bias

Fiddler is a Pluggable Explainable AI Engine – what does this mean?
Fiddler works across multiple datasets and custom-built models. Customers bring their data, stored on any platform (Salesforce, HDFS, Amazon, Snowflake, and more), and/or their custom models built with Scikit-Learn, XGBoost, Spark, TensorFlow, PyTorch, SageMaker, and more, to the Fiddler Engine. Fiddler works on top of the models and data to explain how the models are working, delivering trusted insights through our APIs, reports, and dashboards.

Our Explainable AI Engine meets the needs of multiple stakeholders in the AI-lifecycle: from data scientists and business analysts to model regulators and business operations teams. 

Our rock-solid team
Our team comes from companies and universities like Facebook, Google Brain, Lyft, Twitter, Pinterest, Microsoft, Nutanix, Samsung, Georgia Tech, and Stanford. We're working with experts from industry and academia to create the first true Explainable AI Engine, addressing business risks for companies around user safety, non-compliance, black-box AI, and brand risk.

As the engineering lead on Facebook's AI-driven News Feed, I saw firsthand just how useful explanations were for Facebook users as well as internal teams, from engineering all the way up to senior leadership. My co-founder, Amit Paka, had a similar vision when he was working on AI-driven product recommendations in shopping apps at Samsung.

Since our inception back in October 2018, we’ve significantly grown the team to include other Explainability experts like Ankur Taly, who was one of the co-creators of the popular Integrated Gradients explainability method when he was at Google Brain.

As we continue our hyper-growth trajectory, we’re continuing to expand both the engineering and business teams and are hiring more experts to ensure the Fiddler Explainable AI Engine continues to be the best in its category.

We’re super excited to continue this mission and ensure that AI is explainable in every enterprise! 

Want to join our team? We’re hiring! Check our open positions here.

Featured

Regulations To Trust AI Are Here. And It’s a Good Thing.

This article was previously posted on Forbes.

As artificial intelligence (AI) adoption grows, so do the risks of today’s typical black-box AI. These risks include customer mistrust, brand risk and compliance risk. As recently as last month, concerns about AI-driven facial recognition that was biased against certain demographics resulted in a PR backlash. 

With customer protection in mind, regulators are staying ahead of this technology and introducing the first wave of AI regulations meant to address AI transparency. This is a step in the right direction in terms of helping customers trust AI-driven experiences while enabling businesses to reap the benefits of AI adoption.

This first group of regulations relates to the understanding of an AI-driven, automated decision by a customer. This is especially important for key decisions like lending, insurance and health care but is also applicable to personalization, recommendations, etc.

The General Data Protection Regulation (GDPR), specifically Articles 13 and 22, was the first regulation on automated decision-making to state that anyone subject to an automated decision has the right to be informed and the right to a meaningful explanation. According to clause 2(f) of Article 13:

“[Information about] the existence of automated decision-making, including profiling … and … meaningful information about the logic involved [is needed] to ensure fair and transparent processing.”

One of the most frequently asked questions is what the “right to explanation” means in the context of AI. Does “meaningful information about the logic involved” mean that companies have to disclose the actual algorithm or source code? Would explaining the mechanics of the algorithm be really helpful for the individuals? It might make more sense to provide information on what inputs were used and how they influenced the output of the algorithm. 

For example, if a loan application or insurance claim is denied using an algorithm or machine learning model, under Articles 13 and 22 the loan or insurance officer would need to provide specific details about the impact of the user's data on the decision. Or, they could provide the general parameters of the algorithm or model used to make that decision.

Similar laws working their way through the U.S. state legislatures of Washington, Illinois and Massachusetts include:

  • WA House Bill 1655, which establishes guidelines for “the use of automated decision systems in order to protect consumers, improve transparency, and create more market predictability.”
  • MA Bill H.2701, which establishes a commission on “automated decision-making, artificial intelligence, transparency, fairness, and individual rights.”
  • IL HB3415, which states that “predictive data analytics in determining creditworthiness or in making hiring decisions…may not include information that correlates with the race or zip code of the applicant.”

Fortunately, advances in AI have kept pace with these needs. Recent research in machine learning (ML) model interpretability makes compliance with these regulations feasible. Cutting-edge techniques like Integrated Gradients from Google Brain, along with SHAP and LIME from the University of Washington, make it possible to unlock the AI black box and produce meaningful explanations for consumers.
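As a minimal illustration of what these techniques produce, here is a sketch using the open-source LIME library on a hypothetical credit-style model (the data and model are stand-ins, not a real lending system):

```python
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Hypothetical credit-style data and model, for illustration only.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=[f"feature_{i}" for i in range(X.shape[1])],
    class_names=["deny", "approve"],
    mode="classification",
)
# Explain one applicant's prediction in terms of its top contributing features.
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=3)
print(explanation.as_list())
```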

Ensuring fair automated decisions is another related area of upcoming regulation. While there is no consensus in the research community on the right set of fairness metrics, some approaches, like equality of opportunity, are already required by law in use cases like hiring. Integrating AI explainability into the ML lifecycle can also help provide insights for fair and unbiased automated decisions. Assessing and monitoring these biases, along with data quality and model interpretability approaches, provides a good playbook for developing fair and ethical AI.
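As a minimal sketch of one such fairness check under the equality-of-opportunity criterion (the data below is illustrative, not drawn from any real system):

```python
import numpy as np

def equal_opportunity_gap(y_true, y_pred, group) -> float:
    """Largest difference in true positive rates across groups: among applicants
    who should be approved (y_true == 1), how unevenly does the model approve?"""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    rates = []
    for g in np.unique(group):
        qualified = (group == g) & (y_true == 1)
        rates.append(y_pred[qualified].mean())
    return float(max(rates) - min(rates))

# Illustrative example: qualified applicants in group A are approved at a rate
# of 1.0 versus 0.5 in group B, so the reported gap is 0.5.
print(equal_opportunity_gap(
    y_true=[1, 1, 1, 1, 0, 0],
    y_pred=[1, 1, 0, 1, 0, 1],
    group=["A", "A", "B", "B", "A", "B"],
))
```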

The recent June 26 US House Committee hearing is a sign that financial services firms need to get ready for upcoming regulations that ensure transparent AI systems. All these regulations will help increase trust in AI models and accelerate their adoption across industries, toward the longer-term goal of trustworthy AI.

Fiddler at Plug and Play Expo

Fiddler Labs was selected to present at Plug and Play's Fall Expo on October 24, 2019, as part of their fintech batch. We were very pleased to see so many attendees in the audience, from investors and corporations to startups innovating in the space.

Plug and Play is the ultimate innovation platform, bringing together the best startups, investors, and the world's largest corporations. PnP's fintech arm has partnered with over 60 corporations and has given us access to a large number of partners.

Fiddler’s CEO, Krishna Gade, (pictured below) spoke about explainability and compliance for banks today deploying AI/ML models.

At Fiddler, our mission is to enable businesses of all sizes to unlock the AI black box and deliver trustworthy and responsible AI experiences. 

We had a great time meeting with people in the industry and spreading the word about explainability! 

Welcome Rob Harrell!

We’re excited to introduce Rob Harrell, the newest member of our team. Rob joins us as our first Product Manager.

Rob has product experience in enterprise software, machine learning, and fintech. Prior to joining Fiddler, he was the product manager for Square’s machine learning platform, where he oversaw the development of infrastructure and tools to host Square’s variety of ML pipelines, ranging from high-scale real-time fraud detection and loan underwriting to customer analytics and marketing use cases. Prior to Square, Rob was the first product manager on Microsoft’s Power Virtual Agents, a no-code tool for building chatbots incubated in Microsoft Research and launched under the Power Platform suite of products.

Rob is passionate about accelerating adoption of AI in a trustworthy, ethical manner. In his words:

“While managing Square's machine learning platform, I saw first-hand all of the challenges organizations face attempting to apply ML to their businesses. To build a production-caliber model, teams first must hire an AI expert, discover the right business problem, track down and wrangle the appropriate data, and experiment until reaching a high level of performance. To then make use of that model in a production setting, teams must deploy it to a hosting system that can replicate the data transformations used for training and serve and monitor predictions at scale. Often, when these systems produce unexpected or undesirable results, there isn't an immediate explanation why.

On top of these challenges, regulatory and fairness risks with AI systems, and their potentially devastating consequences (including PR issues, regulatory probes, and fines), loom like dark storm clouds over business decision makers attempting to leverage AI. Fortunately, there is an answer to these risks: Explainable AI. With Explainable AI, or the ability to understand and explain a model's outputs with respect to its underlying data, businesses can better understand model behavior and guard themselves against regulatory and fairness risks. I couldn't be more excited to join Fiddler to build the enterprise AI explainability engine and thus empower businesses to more confidently deploy and manage their AI systems. I hope that through explainability we will not only accelerate the adoption of AI but also strengthen general trust in and acceptance of AI systems.”

Rob

Podcast: S&P Global & Fiddler discuss AI, explainability, and machine learning

We recently chatted with Ganesh Nagarathnam, Director of Analytics and Machine Learning Engineering at S&P Global. Take a listen to the podcast below or read the transcript. (Transcript edited for clarity and length.)

Fiddler: Welcome to Fiddler’s explainable AI podcast. I’m Anusha Sethuraman. And today I have with me on the podcast, Ganesh Nagarathnam, from S&P Global. He’s the director for analytics and machine learning engineering. Ganesh, thank you so much for joining us. We’re super excited to have you. Could you please tell us a little bit about yourself and what you do at S&P Global?

Ganesh: Thank you, Anusha, for inviting me to the podcast. I'm currently working with S&P Global's Market Intelligence line of business as a director for analytics and machine learning engineering. I have 20-plus years of experience building out distributed and scalable software systems on a variety of technologies, from the likes of Java 2 and Java 3 all the way to Java 9. And right now, I'm heavily into the big data ecosystem on the public cloud. I have had opportunities to work with great firms, from Verizon and Verizon Wireless to Goldman Sachs, JP Morgan, and now S&P Global Market Intelligence.

Ganesh Nagarathnam, Director of Analytics and Machine Learning Engineering, S&P Global

Fiddler: Wonderful. Picking up on that point about big data a little bit: how do you use data and AI in your organization today?

Ganesh: So, at S&P Global I work with the innovation team and the product team, where we work on ideas and innovations that germinate from our interactions with customers. An idea then goes to our analytics team, which is my team, where we try to build the MVP. Our job is to wire up the necessary technology stack in accordance with corporate standards and get the product to market quickly. My primary focus is to get the machine learning models that are being developed into production as quickly as possible. So, having said that, we use AI extensively. We take an idea, build a model from simple to complex, and try to get it out. At S&P Global, it's all about data. We have a humongous amount of data, and we think about how we can provide actionable intelligence to our clients with the right amount of information at the right time. That's our primary goal.

Fiddler: I'm curious: what is the typical process for your team when creating AI solutions? You mentioned you come up with an idea and innovate on it. Can you touch on a few of the details in terms of how you go through that entire process of getting it into production?

Ganesh: So, as we discussed, we have lots of data, and that's sufficient reason for us to explore or go down the AI route. From predictive analytics to interactive analytics, and from visual analytics to simple data visualization, what we're trying to do is leverage that momentum to get to market quickly.

So, the typical process is that once we identify an innovative idea, we go to the drawing board, discuss the product needs, and try to figure out the appropriate technology stack. And then we invest 20% of our effort to deliver 80% of the value for our clients.

This means that we don’t want to iterate for too long and we involve our customers end-to-end when innovating. We then get this out to the product team and they take it to their customers to validate and ask for customer feedback. Then the process gets funneled with appropriate funding. So, the 20% of effort we’ve put into it doesn’t go to waste. 

Fiddler: Great. Thank you for that insight. You did mention technology stack. So, I wanted to dig into that. What are the core AI and ML tools you use today and what are the main reasons why you use them?

Ganesh: That's a great question. To begin with, we are migrating to the public cloud, and we have a lot of home-grown tools as well as external tools like Domino and AWS. On the AWS side, we use the ML pipeline. We also use the Spark ML pipeline to do our preliminary feature engineering and then build the entire stack. Historically, if you look at Gartner's reporting, around 68 to 70% of models being developed never see the production phase, meaning they are sitting somewhere as Jupyter notebooks on desktops. There has been no set of well-defined processes around how you take an idea, develop a model, and then deliver it into production. That was the missing piece.

Fiddler: I’m curious – you mentioned a lot of these models are just sitting there and that’s part of the challenge. What are some of the core challenges that your team is facing when you’re taking this AI solution all the way from inception to production?

Ganesh: The main focus for us is around how quickly we can show the dollar value by building the MVP. What we do is when we identify a solution, we remove our organizational hats (this organization or that organization) and we try to address the problem with a holistic approach. We figure out the appropriate solution with an open-minded approach. Once we find the right solution, we look at the boundaries or the bounding boxes in which we operate. Every organization has their specific set of boundaries. Then we look at those boundaries and see how we can factor in the solution which we are planning to build into the existing boundaries. We also take a closer look at these boundaries – are they legacy boundaries and is there something that can be tweaked so that the solution can be implemented seamlessly? That to me is a big challenge. On one side you’ve got to get to market pretty quickly, and on the other side, you have to work with the boundaries that you have within an organization. So how do you balance these two? That’s a challenge for us.

Fiddler: What tools do you think are lacking today to fill these gaps in the process?

Ganesh: We use Scrum in our day-to-day project work. When it comes to machine learning, you have to be truly agile when building machine learning products. The reason I say that is that with machine learning meeting software engineering, everybody is talking about ML Ops. How do you show value by involving the product team right from the outset? How do you iterate faster? But the more important thing is: how do we iterate smarter? That is the key to me.

To me, the data science team should also be empowered to take a model from inception to production. If I really look at it, the half-life of a model is determined by its north star metric. The moment those metrics go off track, you have to retrain the model within weeks if not days. So, do we have that edge? Are we ready? That's the key thing to me. I wouldn't call it a gap; it is something we are working on to streamline the process. And that is why we as an organization are marching full steam into ML Ops. We have defined a core set of drivers that are key to achieving a successful ML Ops culture within the organization.

Fiddler: Ganesh we didn’t spend a lot of time talking about exactly what S&P Global does for your clients. Can you tell me a bit about that in terms of things like what sort of risk and trust and safety issues you’re dealing with?

Ganesh: Risk brings us to the core concept of explainability. Right now, we haven't seen any adverse effect from not being able to explain our models, but eventually we'll all get there. S&P Global has four lines of business: Platts, Indices, Market Intelligence, and the Ratings division. I am part of the S&P Global Market Intelligence team. Our main focus is to gather raw data transcripts, generate sentiment scores, and provide actionable intelligence to our clients.

But when it really comes down to the risks in building these machine learning models, I don't think organizations across the board, and not just S&P Global, are ready to take that leap. With all of the regulations coming up, like GDPR, it is so important for explainability to be a key factor in your AI. Think about it: if you are making a prediction with your AI, and the customer asks you why, and you're not in a position to explain it, that would cause trust to break down. On the other hand, you don't want your models to introduce any bias. Right now, as part of the ML Ops framework and design thinking, we want to incorporate explainability right in the design phase, not at the end of the machine learning model's lifecycle. You don't want the machine learning model to go into production and then have to figure out explainability there.

Fiddler: I’m sure not too many people have heard this concept of explainable AI – XAI – as you mentioned. So, can you tell us a little bit about this black box AI model as it exists today and the need for something like explainable AI?

Ganesh: Whenever we build systems in traditional software engineering, we have people in the loop. When I started my career as a software developer and got a query from a client, I would go look into the database or the code and figure out the reason, as simple as that. To me, the same principle holds when we move into the machine learning and AI world. Why did the AI system make a specific prediction or decision, and why didn't it do something else? When did the AI system we built succeed or fail, and how can it correct the errors that come out of it? Those are some of the things that resonated with me.

To me, traditional models, like classic random forest models or Bayesian algorithms, can be explained, but if you look at deep neural networks, it's a little more difficult. We're talking about deeply layered networks with millions of parameters; models like ResNet-50 or VGG-16 have tens of millions to over a hundred million parameters. There's hope that sufficient progress can be made so that we keep the power and accuracy of our machine learning models' predictions and, at the same time, don't lose the required transparency and explainability.

And that to me is very important; it's good for the business. The community has already started talking about it in one form or another, visualizing what-if scenarios during the design phase. That's what we do, and it has become a core element of our ML Ops journey. We know that explainability is important, but we might decide to work on it later. Sometimes customers might need XAI upfront; we don't know. So this is where we need a tradeoff: a balance between the fine art of the power and accuracy of your model predictions and transparency.

Fiddler: How do you feel about how you might have to build some of these things? How do you do it: are you building these things yourselves, or are you looking for external solutions to help you include explainable AI in the design phase?

Ganesh: There have been some interesting conversations around this, but we haven't given it serious thought yet. When we iterate on new projects, we engage product owners and customers and ask them whether the model needs to be explainable. Not every model needs to be explainable; you don't want to invest in explainability just for the sake of it. But if it comes down to a project with strict GDPR requirements, it's better to ask all the right questions upfront during the design phase. You may not have answers, but startups like Fiddler might have answers for explainability. As data scientists and engineers representing these bigger firms, it is so important for us to ask those questions upfront in the design phase and then, if needed, put in the right thought process and engage the right people in a discussion. And think about how you would explain it: do you want some kind of visual dashboard? If customers were to ask 'why did my loan get rejected?' or 'what are the important parameters that went into this prediction?', you have to go back and explain it. You don't want to lose your customer because of the time it takes for you to explain it. We are not there yet, but eventually we'll get there.

Fiddler: It's getting important, especially with all the regulations you mentioned. I'm curious: it seems like you might not have come across a situation yet where black-box AI has negatively impacted your organization. Or have you already come across a situation like this?

Ganesh: No, not really, but I'm thinking ahead. The reason is that when you really look at credit risk, or, stepping away from the financial industry, the health industry, if you're going to make serious predictions that have a human impact, it can become extremely problematic, not only for the lack of transparency but also for possible biases inherited by the algorithms. These could come from human prejudices or artifacts hidden in the training data that lead to unfair or wrong decisions. How do you uncover this? Right now, every organization, every line of business, every sub-project in these businesses has some amount of data science going on, but they might not see the bigger picture. So, as a technology leader, my job is to ask these questions upfront. How do we learn about explainability? Through my interactions and attendance at industry conferences; that's where you get to understand what's going on in the space.

Fiddler: As we come to the close of this episode what do you think are some of the core things that teams will need to think about?

Ganesh: The core things I would say an organization should be thinking about right now: first, ML Ops. That's where the heart is right now. We have machine learning models and human brains, and we need to figure out how to take an idea, iterate quickly, and get to market. The next piece is explainability. We have to be upfront in asking the right set of questions during the design phase and take it forward from there. Let's try to get better with ML Ops so that people can see the value in the ideas that are generated and the customer is involved at every phase. Needless to say, explainability will come into play with regulations coming up, and then you won't have a choice.

Fiddler: Well thank you so much Ganesh for sharing all your insights on this. We really appreciate your time. Thanks for joining us today.

Ganesh: Thank you so much, Anusha. It’s been a pleasure.

Join us next week at TwiML Con

Fiddler will be at the very first TwiML conference next week on October 1 & 2! It’s a new conference hosted by the amazing folks at TwiML, and we can’t wait to explore and learn about the latest and greatest for AI in the enterprise.  

At Fiddler, our mission is to enable businesses to deliver trustworthy and responsible AI experiences by unlocking the AI black box. 

Where to find us

1) October 1, 11:20-11:45am, Robertson 2

Session: Why and how to build Explainability into your ML workflow

Join our CEO & Founder, Krishna Gade, to learn how Explainable AI is the best way for companies to deal with the business risks associated with deploying AI, especially in regulation- and compliance-heavy industries. Krishna comes from a data and explainability background, having led the team that built Explainable AI at Facebook.

2) October 1 & 2, Community Hall, Booth #6 (see our location on the map below)

Come chat with us about: 

  • Why it’s important to provide transparent, reliable and accountable AI experiences
  • Risks associated with lack of visibility into AI behavior
  • How to understand, manage, analyze & validate models using explainability 

Schedule a time to connect with us

If you’d like to set up a meeting beforehand, fill out this meeting form and we’ll be in touch to finalize dates & times. We’re excited to chat with you!

See you next week!

Fiddler at O’Reilly AI Conference Sept 11 & 12

The San Jose O’Reilly Artificial Intelligence conference is almost upon us. From top researchers and developers to CxOs innovating in AI, we’ll hear about the latest innovations in machine learning and AI. 

Fiddler's very own Ankur Taly, Head of Data Science, will be speaking on September 12 on Explaining Machine Learning Models. Ankur is well known for his contributions to developing and applying Integrated Gradients, a new interpretability algorithm for deep neural networks. He has a broad research background and has published in several areas, including computer security and machine learning. We hope to see you at his session!

At Fiddler, our mission is to enable businesses of all sizes to unlock the AI black box and deliver trustworthy and responsible AI experiences. Come chat with us about: 

  • Risks associated with not having visibility into model outputs
  • Most innovative ways to understand, manage, and analyze your ML models
  • Importance of Explainable AI and providing transparent and reliable experiences to end users

Schedule a time to connect with us

If you’d like to set up a meeting beforehand, then fill out this meeting form and we’ll be in touch. We’re excited to chat with you!

Where to find us

September 11 & 12

We’ll be in the Innovator Pavilion: Booth #K10, so stop by and say hi! 

September 12

Join Ankur Taly, our Head of Data Science, at his session on Explaining Machine Learning Models – 2:35pm – 3:15pm, Sep 12 / LL21 A/B

As machine learning models get deployed to high stakes tasks like medical diagnosis, credit scoring, and fraud detection, an overarching question that arises is – why did the model make this prediction? This talk will discuss techniques for answering this question, and applications of the techniques in interpreting, debugging, and evaluating machine learning models.

See you next week!

Welcome Anusha Sethuraman!

We’re excited to introduce Anusha Sethuraman, the newest member of our team. Anusha joins us as our Head of Product Marketing.

Anusha comes from a diverse product marketing background across startups and enterprises, most recently on Microsoft's AI Thought Leadership team, where she spearheaded the team's storytelling strategy, with stories featured in CEO- and executive-level keynotes. Before this, she was at Xamarin (acquired by Microsoft) leading enterprise product marketing, where she launched Xamarin's first decision-maker event and was instrumental in creating the integrated Microsoft + Xamarin story. And prior to that, she was at New Relic (pre-IPO) leading product marketing for New Relic's mobile monitoring product.

Anusha believes in a world where AI is responsible, ethical, and understandable. In her own words:

“The idea of democratizing AI is great, but even better is democratizing AI that has ethics and responsibility built in. Today's AI-powered world is nowhere close to being trustworthy: we still run into everyday instances of not knowing the why and how behind the decisions AI generates. Fiddler's bold ambition to create a world where technology is built responsibly, where humanity is not only putting AI to the best use possible across all industries and scenarios but creating it ethically and responsibly right from the start, is something I care about deeply. I'm very excited to be joining Fiddler to lead Product Marketing and work towards building an AI-powered world that is understandable, transparent, explainable, and secure.”

Anusha Sethuraman