
Explainable AI Podcast: Founder & CTO of Elixr AI, Farhan Shah, discusses AI and the need for transparency

We recently chatted with Farhan Shah, Founder & CTO of Elixr AI and former tech executive at large insurance companies. Take a listen to the podcast below or read the transcript as we discuss transparency in AI. (Transcript lightly edited for clarity and length.)

Listen to all the Explainable AI Podcasts here

Fiddler: Hello everyone, welcome to today's podcast. My name is Anusha from Fiddler Labs. Today I have with me Farhan Shah, a tech executive who has worked at multiple companies. I'd love for him to introduce himself. Farhan, do you mind giving us a quick introduction about who you are, what you're doing, and your background? Thank you.

Farhan: Thank you, Anusha. My name is Farhan, and I'm currently the founder of Elixr AI. Previously I was a CIO and CTO at multiple insurance, banking and consumer businesses, focusing on initiatives including AI, machine learning and deep learning. Over the last couple of years I've also been focusing on the manufacturing and IT side of the world, so I'm glad I'm here. I'm excited to share what we can do.

Fiddler: Thank you so much Farhan. We're really excited to have you here today. So, as an executive what is top of mind for you these days?

Farhan: There are three major areas that are on my mind, and on the minds of the teams I've talked to recently as well. The first one is AI and differentiating between AI and robotic process automation. Those two terms are so connected to each other, but the reality is that they are pretty far apart. So, differentiating what automation vs. AI truly means. The second one is the explainability of AI black boxes - especially where decisions affecting people get made, and how we ensure that those decisions are explainable to customers. That's the second one on my mind. The third one is the skill set required to be an AI-enabled organization. Most of the organizations I spend time with are ready to adopt multiple technologies, but they're finding it hard to build that capability organically. So, most of my time with executives is spent talking about what the target operating model needs to look like to enable these AI initiatives across their organizations.

Fiddler: That's awesome. I'd love to dig into that a little bit, but before we go there, can you talk a little bit more - you briefly mentioned finance and insurance - about the industries you've primarily operated in and operate in today?

Farhan: Yeah. For the past five years I mostly operated in the insurance business, from property to casualty to commercial insurance. And then more recently, in the past year or so, mostly in banks as well - consumer banking and lending, where AI and automation are a big focus for those organizations.

Fiddler: And how is that looking, Farhan? How do businesses in these industries use AI today - maybe in insurance and finance? Are they well ahead of the curve or are they just getting started?

Farhan: It's a mix. The majority, I feel, are about two years behind the curve overall in trying to adopt AI at the Fortune 100 level. Maybe the smaller-scale businesses are further ahead. But if you look at the bigger banks and bigger insurance companies, they are doing it, but they are mostly focused on automation and some data insights. On the true deep learning, AI, machine learning side of the world, I genuinely feel we are about two-plus years away from full adoption.

Now the good news overall is that everybody is excited, they are trying to ramp up for it, they're asking the right questions beyond the typical 101 questions, and they are going to conferences and then coming back and saying this is how we should be adopting AI. An example could be: in order for us to have a really digital bank, we need to have a good user experience, but also a good way to offer self-service capabilities using AI and cognitive technologies. Now, the biggest challenge that these organizations have is how to get started, and that's where I think the majority of organizations are struggling. And that's the reason why traditionally people are behind. The good news is: as a technology executive I do feel that the technology is there - whether it's startup technology or enterprise-level technology - the compute, the storage, the cloud-native capabilities and all those things are there. We just need to figure out how to work as one cohesive organization to drive that. So that's the biggest challenge, and that's the reason why adoption tends to be a little bit behind, especially for the Fortune 100 type of companies.

Fiddler: Yeah, that totally makes sense. So, I wanted to dig into this automation vs. AI question - you mentioned it again just now - that most companies in these industries right now are focusing on automation rather than AI. Maybe you can talk a little bit about, firstly, what the difference between automation and AI is, and also why you think it's easier, in a sense, for these businesses to get into automation vs. getting into AI.

Farhan: So automation to me, if you look at it from the business sense, is robotic process automation, on-the-glass automation - you're going in, looking at the call center, and automating a lot of the work that the call center employees are doing, or building digital labor around it. So, you're automating the processes that humans are currently doing. That's where the majority of the focus is. If you look at it, about 70 percent of organizations are mostly focused on robotic process automation, with the idea that it will generate enough capacity or operational efficiency that they can then focus on broader use cases of AI and machine learning.

Now from the AI and machine learning side of it, I do feel that the issue is not only understanding AI - what it really means and how to apply it - but also the readiness of the organization on the adoption side as well. Decisions at Fortune 500 companies are made at the top level, so education needs to happen on what AI is and is not, what deep learning is, what machine learning is, what natural language generation and processing are. All of that takes some time to explain, and then there is the question of how you apply your capital and where to invest.

In the past four months I've been spending a lot of time with the insurance and banking industries. Most of the questions I get are from people who want to start with AI, but then I spend more time explaining the different areas of AI and machine learning, what they mean, how you apply them, which tools, which processes, and then how you build the skill set internally and externally to do that. And what the dangers are, and what responsible AI means as well. What are some of the legal considerations around PII and GDPR that they should be accounting for. It's a long debate, but it's a good time to be in this industry. That's the fun part.

Fiddler: Yeah. So, you talked a little bit about this - what is a typical process? Is there even a typical process for the creation of an AI solution in these financial institutions that you've talked to?

Farhan: Not yet. I think they're traditionally working in a model where the business goes to IT, IT comes up with a solution, the solution goes to some engineering team to develop, and then it works through a cycle of 6 to 12 months. One of the things that I've been educating executives and engineering teams on is that we need to start focusing on a bimodal, or two-speed, organization. By that I mean that traditionally you will have a typical waterfall style of execution for any program, but AI requires a little bit more agility and speed, to make some quick decisions and try things out execution-wise. And then some reskilling is also required. So that's the biggest barrier that I see in organizations. And especially in banking and insurance, for all the right reasons, there are so many controls in place to make sure that consumer data is secure and that breaches are not happening.

Being a CIO and CTO in my past, those were the things that were keeping me awake. Not that I was trying to get in the way of execution, or faster execution, of AI or any kind of product or solution, but how do I keep my enterprise secure on top of it? One of the things that I feel organizations are starting to realize is that they need to find a different way to operate and build the solution. It's no longer IT versus business; it's a hybrid model that works to execute some of these big programs. They are changing, but I would say it's going to take some time for them to be truly agile and truly working in a model where there is a blurred line between business and IT in the execution of the AI program. That's the only way this is going to work - the deep learning work, the data analytics work, the machine learning work and the subject matter expertise all need to work in one co-located type of model. Otherwise it is going to take a long time.

Fiddler: Yeah, that definitely makes sense. So, in this particular industry, regulation, like you said, is just so critical - making sure that consumer data is protected and that you're compliant with industry regulations. How does this affect companies? Do you think this is a big stopper for faster adoption of AI? How do you think people should be thinking about that?

Farhan: I believe it is not a stopper - in all fairness, it's not. What it is, is a lack of explainability of those black boxes that we started this conversation with. Everybody wants to make the right decisions for consumers, especially in banks. When your credit card application gets processed, or your loan application gets processed, or you are making a claim, as the organization I want to make sure that the user experience is better. I want to make sure that there are repeat customers calling in for more and more products and services. I want to make sure that policy binding or underwriting is much faster. So, those pieces are there. I think the scare around adoption is: what are the boundaries that they need to work within? What data should be secured? What systems are secure enough? Should they be hosting it on-prem vs. off-prem? All these other adjacencies are causing a little bit more angst. The debate around the zero-trust kind of model is good, but that zero trust needs to be explainable - we need a toolset that can explain how the model was created or what decisions are getting made. I think there is a lot of good stuff out there, but I believe the hindrance is that people - executives or decision makers - are very nervous about some of the noise that's out there. For example, the recent noise about the Apple Card. If you look at that, or the Capital One breach, and all of these other things - they're all out there. We read Wall Street news all day and listen to CNBC, and they talk about AI as a weapon in a sense. To me AI is a good thing as long as we are able to be transparent and explain it.

Fiddler: Have you come across a situation in your day-to-day experience (other than the Apple Card) where a black-box solution was negatively impacting the business?

Farhan: In the banking industry, as recently as about six or seven months ago, we had use cases where certain loan applications were getting processed and the models were making decisions that were hard to understand, like why certain loans were getting rejected. So yeah, there is a lot of decision making like that which happens, and there are two major reasons behind it. One, you trained your AI model incorrectly or you used bad data while training. And two, there may be good explainability for the developer, but it's really hard to explain once it gets into the hands of the person who's executing those models.

So, what my worry has been in both insurance and banking is that, as a developer, the way I'm training my algorithms all depends on how good that data is and how well I'm able to test it. Both in banking and insurance there are so many use cases like that out there. But one thing I will say is that that's the bad side of the world.

There is a good side of the world. As a recent example - about three months ago we were working on a project where one of the insurance companies was a little bit worried about some of the claims getting rejected by those algorithms. And when my team actually looked into it, we saw that the algorithms were doing the right things, but the humans who were processing those claims were actually following the wrong process. So, the algorithms had been giving the right results for a long time, and because we were able to explain and show how the algorithms were calculating, they now have to go back and fix how the call centers and their back-office claims adjusters were actually working. So, there is a good side of the world too.

Fiddler: It's good you bring up the human element - because that's one of the things, especially even now with the Apple Card: the customer service representatives are basically saying it's just the algorithm, we can't really do anything, and we don't know why. What do you think the ideal relationship is between humans and algorithms, especially in these heavily regulated industries, to ensure optimal performance?

Farhan: Yeah, I mean, we had this big debate with HR a while ago on how we treat this digital labor that works right alongside people. And we finally came up with a term, co-bots, and by that what I mean is that we're not ready for algorithms or these bots to work independently yet. In my view, it needs to work as a joint human + digital labor model - they need to work together.

There need to be checks and balances between humans and bots on both sides. I mean, if the bot is making a decision at a certain threshold with certain datasets and processes, it needs to be explainable. On the other side, if humans are making those decisions, we need to validate those as well.

So, it's still a two-in-a-box scenario. I have seen this on the consumer side as well as the insurance and banking sides: we're still in the infant stage of the trust-but-validate phase.

Fiddler: Trust but validate. What do you mean? Can you expand on that a little bit?

Farhan: We as humans - if you look at real-life examples, the teams sitting in the back office processing certain invoices or certain claims, whatever decision they're making - are able to make those decisions without somebody looking over our shoulders.

But for a machine, I do feel that we still have a way to go, and we still need to validate the results of the decisions getting made. I don't know if I'm explaining it correctly, but that's what I feel we need for digital labor to be trusted.

Fiddler: Yeah, so basically having human oversight into what this digital labor is doing, right, so that we can ensure algorithms are not operating in the wild and just coming up with their own decisions that we as humans can't explain.

Alright, so we're getting close to the 20-minute mark, and I wanted to touch on a few things before we close here. Where do you think the creation of AI is going over the next few years, like three to five years?

Farhan: If you're speaking about the business and the growth of the business, I do feel it's a huge market for AI - and not only AI, but also a huge market for talent development, for reskilling, and for bringing new skill sets in and driving efficiencies in operations. That's one side of the world that I've seen. I was talking to one of the agencies yesterday about their call center experiences; some of the wait times are 20 to 25 minutes for getting simple answers. Now, as a consumer of these services, I do feel these are the opportunities that AI can bring. On the other side of it, I do feel there are a lot of responsibilities that need to be talked about - how the algorithms are making certain decisions, and how the algorithms are being written to do good and prevent the evil side of the world.

So what I spend a lot of time on nowadays is how to be responsible with AI, how to write algorithms that are explainable, and how to make sure that humans trust AI - so that one day humans are not looking at bots all the time, but only looking at the anomalies that come up. I think we are about two to three years away, but it's an amazing place to be and I'm really excited. I mean, this is probably the most fun time of my career right now.

Fiddler: That's great. So just as a last piece of advice: what are the top three things any business team focused on AI - especially in the industries that you work in, like finance and insurance - should be thinking about? What should they be doing to get to this good place with AI in the next few years?

Farhan: Yeah, the first thing, I feel, is building this bimodal IT, which requires experimenting, failing fast, and trying out AI solutions in a faster way. So that's one. The second one is: don't try to solve the whole big process or problem at the beginning. Experiment in small pilots and try to put things in production. A proof of concept or proof of value is cute, but it doesn't actually have any value in the long term, and it's going to further delay the execution of your AI journey. So try to do things in production and then drive value - even if it's small value, put things in production and see how they work. And the third thing is to partner - with startups and with third-party organizations - to learn and reskill the organization. This AI journey cannot be done alone, organically; you have to have some external viewpoint on what's working. The advice I give to executives is that the good news is many industries are doing AI, so have them come in and have a voice in your journey - it really helps.

Fiddler: Yeah it takes a village, just like everything else, so that's great. I really appreciate your time here, Farhan. Lots of great advice for our audience. So, thank you again. And we look forward to chatting with you soon hopefully. 

Farhan: Thanks for having me.