
Explainable AI Podcast: S&P Global & Fiddler discuss AI, explainability, and machine learning

We recently chatted with Ganesh Nagarathnam, Director of Analytics and Machine Learning Engineering at S&P Global. Take a listen to the podcast below or read the transcript. (Transcript edited for clarity and length.)

Listen to all the Explainable AI Podcasts here

Fiddler: Welcome to Fiddler's explainable AI podcast. I'm Anusha Sethuraman, and today I have with me on the podcast Ganesh Nagarathnam from S&P Global. He's the Director of Analytics and Machine Learning Engineering. Ganesh, thank you so much for joining us. We're super excited to have you. Could you please tell us a little bit about yourself and what you do at S&P Global?

Ganesh: Thank you, Anusha, for inviting me to the podcast. I'm currently working with the S&P Global Market Intelligence line of business as Director of Analytics and Machine Learning Engineering. I have 20-plus years of experience building distributed and scalable software systems on a variety of technologies, from the likes of Java 2 all the way to Java 9. And right now, I'm heavily into the big data ecosystem on the public cloud. I have had opportunities to work with great firms, from Verizon and Verizon Wireless to Goldman Sachs and JP Morgan, and now with S&P Global Market Intelligence.

Ganesh Nagarathnam, Director of Analytics and Machine Learning Engineering, S&P Global

Fiddler: Wonderful. Picking up on that point about big data: how do you use data and AI in your organization today?

Ganesh: So, at S&P Global I work with the innovation team and the product team, where an idea or an innovation gets germinated through our interactions with customers. The idea then goes to our analytics team - my team - where we try to build the MVP. Our job is to wire up the necessary technology stack in accordance with corporate standards and get the product to market quickly. My primary focus is getting the machine learning models that are being developed into production as quickly as possible. So, having said that, we use AI extensively. We take an idea, build a model from simple to complex, and try to get it out. At S&P Global, it's all about data. We have a humongous amount of data, and we think about how we can provide actionable intelligence to our clients with the right amount of information at the right time. That's our primary goal.

Fiddler: I'm curious: what is the typical process for your team when creating AI solutions? You mentioned you come up with an idea and innovate on it. Can you touch on a few of the details in terms of how you go through that entire process of getting it into production?

Ganesh: So, as we discussed, we have lots of data, and that's a sufficient reason for us to explore or go down the AI route. From predictive analytics to interactive analytics, and from visual analytics to simple data visualization, what we're trying to do is leverage that momentum to get to market quickly.

So, the typical process is that once we identify an innovative idea, we go to the drawing board, discuss the product needs, and try to figure out the appropriate technology stack. And then we invest 20% of our effort to deliver 80% of the value for our clients.

This means that we don't want to iterate for too long, and we involve our customers end-to-end when innovating. We then hand this off to the product team, and they take it to their customers to validate and ask for feedback. Then the process gets funneled with the appropriate funding, so the 20% of effort we've put in doesn't go to waste.

Fiddler: Great. Thank you for that insight. You did mention the technology stack, so I wanted to dig into that. What are the core AI and ML tools you use today, and why do you use them?

Ganesh: That's a great question. To begin with, we are migrating to the public cloud, and we have a lot of home-grown tools as well as external tools like Domino and AWS. On the AWS side, we use ML pipelines. We also use the Spark ML pipeline to do our preliminary feature engineering and then build out the entire stack. Historically - if you look at Gartner's reports - around 68 to 70% of the models being developed never see the production phase, meaning they are sitting somewhere as Jupyter notebooks on desktops. There has been no set of well-defined processes for how you take an idea, develop a model, and then deliver it into production. That was the missing piece.
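For readers who haven't worked with Spark ML pipelines, here is a minimal sketch of the kind of feature-engineering pipeline Ganesh mentions. The data, column names, and stages are illustrative, not S&P Global's actual stack:

```python
# A minimal sketch of a Spark ML feature-engineering pipeline.
# The data, column names, and stages here are illustrative, not
# S&P Global's actual stack.
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import Tokenizer, HashingTF, IDF
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("feature-pipeline").getOrCreate()

# Toy labeled text standing in for real market data.
df = spark.createDataFrame(
    [("markets rallied on strong earnings", 1.0),
     ("outlook downgraded amid rising costs", 0.0)],
    ["text", "label"],
)

# Chain feature engineering and a model into one reproducible pipeline.
tokenizer = Tokenizer(inputCol="text", outputCol="words")
tf = HashingTF(inputCol="words", outputCol="tf")
idf = IDF(inputCol="tf", outputCol="features")
lr = LogisticRegression(featuresCol="features", labelCol="label")

model = Pipeline(stages=[tokenizer, tf, idf, lr]).fit(df)
model.transform(df).select("text", "prediction").show(truncate=False)
```

The value of the Pipeline abstraction is that the same chain of stages can be refit as data changes, which is one way a notebook experiment moves toward production.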

Fiddler: I'm curious - you mentioned a lot of these models are just sitting there and that's part of the challenge. What are some of the core challenges that your team is facing when you're taking this AI solution all the way from inception to production?

Ganesh: The main focus for us is how quickly we can show the dollar value by building the MVP. When we identify a problem, we remove our organizational hats (this organization or that organization) and address it holistically, figuring out the appropriate solution with an open mind. Once we find the right solution, we look at the boundaries - the bounding boxes - in which we operate. Every organization has its specific set of boundaries. We look at those boundaries and see how we can fit the solution we are planning to build into them. We also take a closer look at the boundaries themselves - are they legacy boundaries, and is there something that can be tweaked so that the solution can be implemented seamlessly? That, to me, is a big challenge. On one side you've got to get to market pretty quickly, and on the other side, you have to work within the boundaries you have in an organization. How do you balance the two? That's a challenge for us.

Fiddler: What tools do you think are lacking today to fill these gaps in the process?

Ganesh: We use Scrum in our day-to-day project work, but when it comes down to machine learning, you have to be truly agile when building machine learning products. The reason I say that is that with machine learning meeting software engineering, everybody is suddenly talking about ML Ops. How do you show value by involving the product team right from the outset? How do you iterate faster? But the more important thing is: how do you iterate smarter? That, to me, is the key.

To me, the data science team should also be empowered to take a model from inception to production. If you really look at it, the half-life of a model is determined by its north star metric. The moment that metric goes off track, you have to retrain the model within weeks, if not days. So, do we have that edge? Are we ready? That's the key thing to me. I wouldn't call it a gap - it is something we are working on to streamline the process. And that is why we as an organization are marching full steam into ML Ops. We have defined our core set of drivers that are key to achieving a successful ML Ops culture within the organization.
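As a rough illustration of the retraining trigger Ganesh describes, here is a minimal sketch; the metric, baseline, and tolerance are hypothetical, not S&P Global's actual monitoring:

```python
# A minimal sketch of metric-based retraining: watch a model's
# north star metric and flag retraining when it drifts too far.
# The baseline and tolerance values are hypothetical.
from statistics import mean

BASELINE_AUC = 0.90      # metric measured at deployment time
DRIFT_TOLERANCE = 0.05   # acceptable absolute drop before retraining

def needs_retraining(recent_scores: list[float]) -> bool:
    """Return True once the rolling metric drifts past tolerance."""
    return (BASELINE_AUC - mean(recent_scores)) > DRIFT_TOLERANCE

# e.g. daily AUC over the past week, sliding downward
weekly_auc = [0.87, 0.85, 0.84, 0.83, 0.82, 0.81, 0.80]
if needs_retraining(weekly_auc):
    print("North star metric has drifted - schedule retraining")
else:
    print("Model still within tolerance")
```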

Fiddler: Ganesh, we didn't spend a lot of time talking about exactly what S&P Global does for your clients. Can you tell me a bit about that, and about the kinds of risk, trust, and safety issues you're dealing with?

Ganesh: Risk - that brings us to the core concept of explainability. Right now, we haven't seen any adverse effects from not being able to explain our models, but eventually we'll all get there. S&P Global has four lines of business: Platts, Indices, Market Intelligence, and the Ratings division. I am part of the S&P Global Market Intelligence team. Our main focus is to gather raw data and transcripts, generate sentiment scores, and provide actionable intelligence to our clients.

But when it really comes down to the risks in building these machine learning models, I don't think organizations - and not just S&P Global, but organizations across the board - are ready to take that leap. With regulations like GDPR coming up, it is so important for explainability to be a key factor in your AI. Think about it: if you are making a prediction with your AI, and the customer asks you why, and you're not in a position to explain it, trust breaks down. On the other hand, you don't want your models to introduce any bias. Right now, as part of our ML Ops framework and design thinking, we want to incorporate explainability right in the design phase, not at the end of the machine learning model's lifecycle. You don't want the model to go into production and then have to figure out explainability there.

Fiddler: I'm sure not too many people have heard of this concept of explainable AI - XAI - that you mentioned. Can you tell us a little bit about black box AI models as they exist today and the need for something like explainable AI?

Ganesh: Whenever we build systems in traditional software engineering, we have people who can explain them. As a software developer, when I started my career and got queries from a client, I would go look into the database or the code and figure out the reason - as simple as that. To me, the same principle holds when we go into the machine learning and AI world. Why did the AI system make a specific prediction or decision? Why didn't the AI system do something else? When did the AI system we built succeed or fail, and how can it correct the errors coming out of it? Those are some of the questions that resonated with me.

To me, traditional models - for example, classic random forest models or any of the Bayesian algorithms - can be explained, but if you look at deep neural networks, they're a little more difficult. We're talking about deep, layered networks with millions of parameters - ResNet-50 has over 25 million, and VGG-16 well over 100 million. There's hope that sufficient progress can be made so that our machine learning models keep both the power and accuracy to predict something while not losing the required transparency and explainability.
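To make the contrast concrete, here is a minimal sketch of explaining a tree-based model with SHAP, one widely used feature-attribution library; the data and model are toys, not anything discussed in the conversation:

```python
# A minimal sketch: per-prediction feature attributions for a
# tree-based model using SHAP. Data and model are illustrative.
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

# Synthetic data standing in for real features.
X, y = make_regression(n_samples=500, n_features=5, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer decomposes each prediction into per-feature contributions.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:10])

print(shap_values.shape)  # (10, 5): each prediction split across 5 features
```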

And that to me is very important - it's good for the business. The community has already started talking about it in one form or another, visualizing what-if scenarios during the design phase. That's what we do, and it has become a core element of our ML Ops journey. We know that explainability is important, and we might decide to work on it later; sometimes customers might need XAI upfront - we don't know. So this is where we need a tradeoff. It's a fine balance between the power and accuracy of your model's predictions and its transparency.
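A what-if scenario of the kind Ganesh mentions can be as simple as perturbing one input and re-scoring the model. A minimal sketch, with a toy model and an arbitrary feature change standing in for a real scenario:

```python
# A minimal what-if sketch: change one feature on a single row and
# compare the model's predictions before and after. Illustrative only.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=500, n_features=5, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

row = X[0].copy()
baseline = model.predict(row.reshape(1, -1))[0]

what_if = row.copy()
what_if[2] += 1.0        # hypothetical intervention on feature 2
changed = model.predict(what_if.reshape(1, -1))[0]

print(f"baseline={baseline:.3f}  what-if={changed:.3f}  delta={changed - baseline:+.3f}")
```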

Fiddler: How do you feel about building some of these things? Are you building them yourselves, or are you looking to external solutions to help you include explainable AI in the design phase?

Ganesh: There have been some interesting conversations around this, but we haven't given it serious thought yet. When we iterate on new projects, we engage the product owners and the customers and ask them whether this needs to be explainable. Not every model needs to be explainable, and you don't want to invest in explainability just for the sake of it. But if it comes down to a project with strict GDPR requirements, it's better to ask all the right questions upfront during the design phase. You may not have the answers, but startups like Fiddler might have answers to explainability. As data scientists and engineers representing these bigger firms, it is so important for us to ask those questions upfront in the design phase and then, if needed, put in the right thought process and engage the right people in a discussion. And think about how you would explain it - do you want some kind of visual dashboard? If a customer were to ask 'why did my loan get rejected?' or 'what are the important parameters that went into this prediction?', you have to be able to go back and explain it. You don't want to lose your customer because of the time it takes you to explain. We are not there yet, but eventually we'll get there.

Fiddler: It's getting more important, especially with all the regulations you mentioned. I'm curious: it seems like you might not have come across a situation yet where black box AI has negatively impacted your organization - or have you?

Ganesh: No, not really, but I'm thinking ahead. The reason is, when you really look at credit risk - or, taking a step away from the financial industry, the health industry - if you're going to make serious predictions that have a human impact, it can become extremely problematic, not only for the lack of transparency but also for possible biases inherited by the algorithms. These could come from human prejudices or artifacts hidden in the training data that lead to unfair or wrong decisions. How do you uncover that? Right now, every organization, every line of business, every sub-project in these businesses has some amount of data science going on, but they might not get to see the bigger picture. So, as a technology leader, my job is to ask these questions upfront. How do we learn about explainability? Through my interactions and attendance at industry conferences - that's when you get to understand what's going on in the space.
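One simple way to start uncovering the kind of hidden bias Ganesh describes is to compare a model's positive-prediction rates across groups. A minimal sketch, with hypothetical data and column names:

```python
# A minimal bias check: compare positive-prediction rates across groups.
# The data and column names are hypothetical.
import pandas as pd

preds = pd.DataFrame({
    "group":      ["A", "A", "A", "B", "B", "B"],
    "prediction": [1, 1, 0, 1, 0, 0],  # model outputs on held-out data
})

rates = preds.groupby("group")["prediction"].mean()
print(rates)  # a large gap between groups is a signal to dig into the training data
```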

Fiddler: As we come to the close of this episode, what do you think are some of the core things that teams will need to think about?

Ganesh: The core thing I would say an organization should be thinking about right now is ML Ops - that's where the heart is. The second is that we have machine learning models and human brains, and we need to figure out how to take an idea, iterate quickly, and get to market. The third piece is explainability: we have to be upfront in asking the right set of questions during the design phase and take it forward from there. Let's get better at ML Ops so that people can see the value in the ideas that are generated, and involve the customer at every phase. Needless to say, explainability is just around the corner with the regulations coming up, and then you won't have a choice.

Fiddler: Well thank you so much Ganesh for sharing all your insights on this. We really appreciate your time. Thanks for joining us today.

Ganesh: Thank you so much, Anusha. It's been a pleasure.