Taming the Data: Complexities in Machine Learning and Explainable AI

FinTech Horizons

A few weeks ago, I was on the Explainable AI panel at the PRMIA Fintech Horizons Conference in San Francisco. The participants came predominantly from the finance industry: banks, hedge funds, and fintech startups.

We had a very interesting discussion on topics like:

  • Automated AI vs Human-Centered AI
  • How catastrophic can it be when a Business Risk is left unmanaged?
    • Example: Boeing 737 Max 8 failure
  • Special challenges in Quantitative Finance with AI. Can we quantify Model Risk in terms of a $-value?
  • Who in the organization needs to care about Explainable AI? Is it the Data Scientist? Chief Risk Officer? Business Owner?
Podcast of the discussion

Participants:

Bob Mark – Former CRO & Treasurer of CIBC, Managing Partner Black Diamond Risk – Moderator

Krishna Gade – Founder & CEO, Fiddler Labs 

Jos Gheerardyn – Founder & CEO, Yields.IO

Hersh Shefrin – Mario L. Belotti Professor of Finance, Santa Clara University

Starting from left: Bob Mark, Jos Gheerardyn, Krishna Gade, and Hersh Shefrin

Welcome Ankur Taly!

We’re excited to introduce Ankur Taly, the newest member of our team. Ankur joins us as the Head of Data Science.

Previously, he was a Staff Research Scientist at Google Brain, where he worked on Machine Learning Interpretability and is best known for his contribution to developing and applying Integrated Gradients — a new interpretability algorithm for Deep Neural Networks. Ankur has a broad research background and has published in several areas including Computer Security, Programming Languages, Formal Verification, and Machine Learning. Ankur obtained his Ph.D. in CS from Stanford University and a B.Tech in CS from IIT Bombay.

Ankur passionately believes in the need for Explainable AI and is excited to join Fiddler Labs. In his own words:

“Explainability is one of the key missing pieces in the ongoing machine learning (ML) revolution. As ML models continue to become more complex and opaque, being able to explain their predictions is getting increasingly important and challenging. The ability to explain a model’s predictions would enable users to build trust in the model, business stakeholders to derive actionable insights and strategies, regulators to assess model fairness and risk, and data scientists to iterate on the model in a principled manner. In response to this need, there has been a large surge in research on explaining various aspects of ML models. Fiddler Labs has taken up the ambitious task of driving this research to industrial practice, by making it available as a cutting-edge enterprise product catering to several business needs. This is incredibly promising, and I am super excited to join Fiddler on this journey!”

Ankur Taly

Humans choose, AI does not

For the non-technical reader who sees scary headlines: every AI has a goal precisely defined by a human, usually in the form of a labeled dataset.

Artificial intelligence isn’t human

“Artificial Intelligence Will Best Humans at Everything by 2060, Experts Say.” Well.

First, as Yogi Berra said, “It’s tough to make predictions, especially about the future.” Where is my flying car?

Second, the title reads like clickbait, but surprisingly it appears to be pretty close to the actual survey, which asked AI researchers when “high-level machine intelligence” will arrive, defined as “when unaided machines can accomplish every task better and more cheaply than human workers.” What is a ‘task’ in this definition? Does “every task” even make sense? Can we enumerate all tasks?

Third and most important, is high-level intelligence just accomplishing tasks? This is the real difference between artificial and human intelligence: humans define goals, AI tries to achieve them. Is the hammer going to displace the carpenter? They each have a purpose.

This difference between artificial and human intelligence is crucial to understand, both to interpret all the crazy headlines in the popular press, and more importantly, to make practical, informed judgements about the technology.

The rest of this post walks through some types of artificial intelligence, types of human intelligence, and given how different they are, plausible and implausible risks of artificial intelligence. The short story: unlike humans, every AI technology has a perfectly mathematically well-defined goal, often a labeled dataset.

Types of artificial intelligence

In supervised learning, you define a prediction goal and gather a training set with labels corresponding to the goal. Suppose you want to identify whether a picture has Denzel Washington in it. Then your training set is a set of pictures, each labeled as containing Denzel Washington or not. The label has to be applied outside of the system, most likely by people. If your goal is to do facial recognition, your labeled dataset is pictures along with a label (the person in the picture). Again, you have to gather the labels somehow, likely with people. If your goal is to match a face with another face, you need a label of whether the match was successful or not. Always labels.
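To make the “always labels” point concrete, here is a minimal sketch in scikit-learn; the features and labels are invented stand-ins, with the labels playing the role of the human-supplied “contains Denzel Washington” annotation.

```python
# Minimal supervised-learning sketch: the goal is defined entirely by
# human-provided labels (here, whether a picture contains Denzel Washington).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = rng.normal(size=(200, 16))   # 200 pictures, reduced to 16 features each
labels = rng.integers(0, 2, size=200)   # 1 = "contains Denzel Washington", supplied by people

model = LogisticRegression().fit(features, labels)

# The model only ever optimizes agreement with those labels.
new_picture = rng.normal(size=(1, 16))
print(model.predict(new_picture))       # 0 or 1, exactly as the labels defined
```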

Almost all the machine learning you read about is supervised learning. Deep learning, neural networks, decision trees, random forests, logistic regression, all training on labeled datasets.

In unsupervised learning, again you define a goal. A very common unsupervised learning technique is clustering (e.g., the well-known k-means clustering). Again, the goal is very well-defined: find clusters minimizing some mathematical cost function, for example one where the distance between points within the same cluster is small and the distance between points in different clusters is large. These goals are so well-defined that they have a mathematical formalism.
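For k-means, for example, the goal is to choose clusters $S_1, \dots, S_k$ (with cluster means $\mu_i$) that minimize the within-cluster squared distances:

$$\underset{S_1,\dots,S_k}{\arg\min}\;\sum_{i=1}^{k}\sum_{x \in S_i}\lVert x - \mu_i \rVert^2$$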

This formula feels very different from how humans specify goals. Most humans don’t understand these symbols at all; human goals are rarely formal. Indeed, a “goal-oriented” mindset in a human is unusual enough that it has its own special term.

In reinforcement learning, you define a reward function to reward (or penalize) actions that move towards a goal. This is the technology people have been using recently for games like chess and Go, where it may take many actions to reach a particular goal (like checkmate), so you need a reward function that gives hints along the way. Again, not only a well-defined goal, but even a well-defined on-the-way-to-goal reward function.
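As a toy illustration (not from a real RL system), here is what an on-the-way-to-the-goal reward function might look like for a simple grid world: the designer decides, precisely, what gets rewarded.

```python
# Toy reward function for a grid world: a human specifies, exactly,
# what is rewarded on the way to the goal square.
def reward(position, goal=(9, 9)):
    if position == goal:
        return 100.0                  # reaching the goal earns a big reward
    # Hint along the way: a small penalty proportional to the distance left,
    # so the agent is nudged toward shorter paths.
    distance = abs(goal[0] - position[0]) + abs(goal[1] - position[1])
    return -0.1 * distance

print(reward((9, 9)))   # 100.0
print(reward((0, 0)))   # -1.8
```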

These are types of artificial intelligence (“machine learning”) that are currently hot because of recent huge gains in accuracy, but there are plenty of others that people have studied.

Genetic algorithms are another, biologically inspired way of solving problems. One takes a population of mathematical constructs (essentially functions) and repeatedly selects those that perform best on a problem. Although people get emotional about the biological analogy, the fitness function that defines “best” is still a concrete, completely specified mathematical function chosen by a human.
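A minimal sketch of the idea (a toy problem, not a production genetic algorithm): the fitness function below is an ordinary function a human wrote.

```python
# Tiny genetic-algorithm sketch: "best" is whatever the human-chosen
# fitness function says it is.
import random

def fitness(x):
    return -(x - 3.0) ** 2           # the fittest possible individual is x = 3

population = [random.uniform(-10, 10) for _ in range(50)]

for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:25]                               # selection
    children = [x + random.gauss(0, 0.5) for x in survivors]  # mutation
    population = survivors + children

print(round(max(population, key=fitness), 2))                 # ~3.0
```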

There is computer-generated art. For example, deep dream (gallery) is a way to generate images from deep learning neural networks. This would seem to be more human and less goal-oriented, but in fact people are still doing the directing. The authors described the goal at a high level: “Whatever you see there, I want more of it!” Depending on which layer of the network is asked, the features amplified might be low level (like lines, shapes, edges, colors; see the addaxes below) or higher level (like objects).

Original photo of addaxes by Zachi Evenor, processed photo from Google

Expert systems are a way to make decisions using if-then rules on a formally expressed body of knowledge. They were somewhat popular in the 1980s. These are a type of “Good Old Fashioned Artificial Intelligence” (GOFAI), a term for AI based on manipulating symbols.

Another common difference between human and artificial intelligence is that humans learn over a long time, while an AI is often retrained from scratch for each problem. This difference, however, is narrowing. Transfer learning is the process of training a model and then reusing or tweaking it in a different context. It is standard industry practice in computer vision, where deep learning neural networks are commonly initialized with features from previously trained networks (example).
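Here is a hedged sketch of what that looks like with Keras, assuming the standard ImageNet-pretrained ResNet50 weights; the details (framework, base network, new head) will differ from project to project.

```python
# Transfer-learning sketch: reuse features learned on ImageNet and
# train only a small new "head" for the new task.
import tensorflow as tf

base = tf.keras.applications.ResNet50(
    include_top=False, weights="imagenet", pooling="avg")
base.trainable = False                     # freeze the pretrained features

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(1, activation="sigmoid"),   # new binary classifier
])
model.compile(optimizer="adam", loss="binary_crossentropy")
# model.fit(new_images, new_labels, epochs=3)   # train only the new head
```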

One interesting research project in long-term learning is NELL, Never-Ending Language Learning. NELL crawls the web collecting text, and trying to extract beliefs (facts) like “airtran is a transportation system”, along with a confidence. It’s been crawling since 2010, and as of July 2017 has accumulated over 117 million candidate beliefs, of which 3.8 million are high-confidence (at least 0.9 of 1.0).

In every case above, humans not only specify a goal, but have to specify it unambiguously, often even formally with mathematics.

Types of human intelligence

What are the types of human intelligence? It’s hard to even come up with a list. Psychologists have been studying this for decades. Philosophers have been wrestling with it for millennia.

IQ (the Intelligence Quotient) is measured with verbal and visual tests, sometimes abstract. It is predicated on the idea that there is a general intelligence (sometimes called the “g factor”) common to all cognitive ability. This idea is not accepted by everyone, and IQ itself is hotly debated. For example, some believe that people with the same latent ability but from different demographic groups might be measured differently, called Differential Item Functioning, or simply measurement bias.

People describe fluid intelligence (the ability to solve novel problems) and crystallized intelligence (the ability to use knowledge and experience).

The concept of emotional intelligence shows up in the popular press: the ability of a person to recognize their own emotions and those of others, and use emotional thinking to guide behavior. It is unclear how accepted this is by the academic community.

More widely accepted are the Big Five personality traits: openness to experience, conscientiousness, extraversion, agreeableness, and neuroticism. This is not intelligence (or is it?), but illustrates a strong difference with computer intelligence. “Personality” is a set of stable traits or behavior patterns that predict a person’s behavior. What is the personality of an artificial intelligence? The notion doesn’t seem to apply.

With humor, art, or the search for meaning, we get farther and farther from well-defined problems, yet closer and closer to humanity.

Risks of artificial intelligence

Can artificial intelligence surpass human intelligence?

One risk that captures the popular press is The Singularity. The writer and mathematician Vernor Vinge gives a compelling description in an essay from 1993: “Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.”

There are at least two ways to interpret this risk. The common way is that some magical critical mass will cause a phase change in machine intelligence. I’ve never understood this argument. “More” doesn’t mean “different.” The argument is something like “As we mimic the human brain more closely, something near-human (or super-human) will happen.” Maybe?

Yes, the availability of lots of computing power and lots of data has resulted in a phase change in AI results. Speech recognition, automatic translation, computer vision, and other problem domains have been completely transformed. In 2012, when researchers at the University of Toronto re-applied neural networks to computer vision, the error rates on a well-known dataset started dropping fast, until within a few years the computers were beating the humans. But the computers were doing the same well-defined task as before, only better.

ImageNet Large Scale Visual Recognition Challenge error rates. 0.1 is a 10% error rate. After neural networks were re-applied in 2012, error rates dropped fast. They beat Andrej Karpathy by 2015.

The more compelling observation is: “The chance of a singularity might be small, but the consequences are so serious we should think carefully.”

Another way to interpret the risk of the singularity is that the entire system will have a phase change. That system includes computers, technology, networks, and humans setting goals. This seems more correct and entirely plausible. The Internet was a phase change, as were mobile phones. Under this interpretation, there are plenty of plausible risks of AI.

One plausible risk is algorithmic bias. If algorithms are involved in important decisions, we’d like them to be trustworthy. (In a previous post, we discussed how to measure algorithmic fairness.)

Tay, a Microsoft chatbot, was taught by Twitter users to be racist and misogynistic within 24 hours. But Tay didn’t really understand anything; it just “learned” and mimicked.

Tay, the offensive chatbot.

Amazon’s facial recognition software, Rekognition, falsely matched 28 members of the U.S. Congress (disproportionately people of color) with known criminals. Amazon’s response was that the ACLU (which conducted the test) used an unreliable confidence cutoff of only 80 percent; Amazon recommended 95 percent.

MIT researcher Joy Buolamwini showed that gender identification error rates in several computer vision systems were much higher for people with dark skin.

All of these untrustworthy results arise at least partially from the training data. In Tay’s case, it was deliberately fed hateful data by Twitter users. In the computer vision systems, there may well have been less data for people of color.

Another plausible risk is automation. As artificial intelligence becomes more cost-efficient at solving problems like driving cars or weeding farm plots, the people who used to do those tasks may be thrown out of work. This is the risk of AI plus capitalism: businesses will each try to be cost effective. We can only address this at a societal level, which makes it very difficult.

One final risk is bad goals, possibly aggravated by single-mindedness. This is memorably illustrated by the paper-clip problem, first described by Nick Bostrom in 2003: “It also seems perfectly possible to have a superintelligence whose sole goal is something completely arbitrary, such as to manufacture as many paperclips as possible, and who would resist with all its might any attempt to alter this goal. For better or worse, artificial intellects need not share our human motivational tendencies.” There is even a web game inspired by this idea.

Understand your goals

How do we address some of the plausible risks above? A complete answer is another full post (or book, or lifetime). But let’s mention one piece: understand the goals you’ve given your AI. Since all AI is simply optimizing a well-defined mathematical function, that is the language you use to say what problem you want to solve.

Does that mean you should start reading up on integrals and gradient descent algorithms? I can feel your eyeballs closing. Not necessarily!

The goals are a negotiation between what your business needs (human language) and how it can be measured and optimized (AI language). You need people to speak to both sides. That is often a business or product owner in collaboration with a data scientist or quantitative researcher.

Let me give an example. Suppose you want to recommend content using a model, and you choose to optimize the model to increase engagement with the content, as measured by clicks. Voilà, now you understand one reason the Internet is full of clickbait: the goal is wrong, because you actually care about more than clicks. Companies patch the goal without touching the AI by filtering out content that doesn’t meet their policies; that is one reasonable strategy. Another strategy is to add a heavy penalty to the training dataset when the AI recommends content later found to violate policy. Now we are starting to really think through how our goal affects the AI.
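Here is one hedged sketch of that second strategy, folding a policy penalty directly into the training objective by re-weighting examples; the function and field names are illustrative, not a real recommender API.

```python
# Sketch: change the goal itself by treating clicks on policy-violating
# content as heavily weighted negative examples.
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_with_policy_penalty(features, clicked, violates_policy, penalty=10.0):
    # Normally a click is a positive label; if the content later turned out
    # to violate policy, flip it to a negative label and weight it heavily.
    labels = np.where(violates_policy, 0, clicked)
    weights = np.where(violates_policy, penalty, 1.0)
    model = LogisticRegression()
    model.fit(features, labels, sample_weight=weights)
    return model
```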

This example also explains why content systems can be so jumpy: you click on a video on YouTube, or a pin on Pinterest, or a book on Amazon, and the system immediately recommends a big pile of things that are almost exactly the same. Why? The click is usually measured in the short-term, so the system optimizes for short-term engagement. This is a well-known recommender challenge, centered around mathematically defining a good goal. Perhaps a part of the goal should be whether the recommendation is irritating, or whether there is long-term engagement.

Another example: if your model is accurate, but your dataset or measurements don’t cover under-represented minorities in your business, you may be performing poorly for them. Your goal may really be to be accurate for all sorts of different people.

A third example: if your model is accurate, but you don’t understand why, that might be a risk for some critical applications, like healthcare or finance. If you have to understand why, you might need to use a human-understandable (“white box”) model, or explanation technology for the model you have. Understandability can be a goal.

Conclusion: we need to understand AI

AI cannot fully replace humans, despite what you read in the popular press. The biggest difference between human and artificial intelligence is that only humans choose goals. So far, AIs do not.

If you can take away one thing about artificial intelligence: understand its goals. Any AI technology has a perfectly well-defined goal, often a labeled dataset. To the extent the definition (or dataset) is flawed, so too will be the results.

One way to understand your AI better is to explain your models. We formed Fiddler Labs to help. Feel free to reach us at info@fiddler.ai.

A gentle introduction to algorithmic fairness

A gentle introduction to issues of algorithmic fairness: some U.S. history, legal motivations, and four definitions with counterarguments.

History

In the United States, there is a long history of fairness issues in lending.

For example, redlining:

‘In 1935, the Federal Home Loan Bank Board asked the Home Owners’ Loan Corporation to look at 239 cities and create “residential security maps” to indicate the level of security for real-estate investments in each surveyed city. On the maps, “Type D” neighborhoods were outlined in red and were considered the most risky for mortgage support…’

‘In the 1960s, sociologist John McKnight coined the term “redlining” to describe the discriminatory practice of fencing off areas where banks would avoid investments based on community demographics. During the heyday of redlining, the areas most frequently discriminated against were black inner city neighborhoods…’

Redlining is clearly unfair, since the decision to invest was not based on an individual homeowner’s ability to repay the loan, but rather on location; and that basis systematically denied loans to one racial group, black people. In fact, part 1 of a Pulitzer Prize-winning series in the Atlanta Journal-Constitution in 1988 suggests that location was more important than income: “Among stable neighborhoods of the same income [in metro Atlanta], white neighborhoods always received the most bank loans per 1,000 single-family homes. Integrated neighborhoods always received fewer. Black neighborhoods — including the mayor’s neighborhood — always received the fewest.”

Legislation such as the 1968 Fair Housing Act and the 1977 Community Reinvestment Act was passed to combat these sorts of unfair practices in housing and lending.

More recently, in 2018, WUNC reported that Black and Latino applicants in some North Carolina cities were denied mortgages at higher rates than white applicants:

“Lenders and their trade organizations do not dispute the fact that they turn away people of color at rates far greater than whites. But they maintain that the disparity can be explained by two factors the industry has fought to keep hidden: the prospective borrowers’ credit history and overall debt-to-income ratio. They singled out the three-digit credit score — which banks use to determine whether a borrower is likely to repay a loan — as especially important in lending decisions.”

The WUNC example raises an interesting point: it is possible to look unfair via one measure (loan rates by demographic), but not by another (ability to pay as judged by credit history and debt-to-income ratio). Measuring fairness is complicated. In this case, we can’t tell if the lending practices are fair because the data on credit history and debt-to-income ratio for these particular groups are not available to us to evaluate lenders’ explanations of the disparity.

In 2007, the Federal Reserve Board (FRB) reported on credit scoring and its effects on the availability and affordability of credit. It concluded that the credit characteristics included in credit history scoring models are not a proxy for race, although different demographic groups have substantially different credit scores on average, and “for given credit scores, credit outcomes — including measures of loan performance, availability, and affordability — differ for different demographic groups.” This FRB study supports the lenders’ claims that credit score might explain the disparity in mortgage denial rates (since demographic groups have different credit scores), while also pointing out that credit outcomes differ for different groups.

Is this fair or not?

Defining fairness

As machine learning (ML) becomes widespread, there is growing interest in fairness, accountability, and transparency in ML (e.g., the FAT* conference and the FAT/ML workshops).

Some researchers say that fairness is not a statistical concept, and no statistic will fully capture it. There are many statistical definitions that people try to relate to (if not define) fairness.

First, here are two legal concepts that come up in many discussions on fairness:

  1. Disparate treatment: “unequal behavior toward someone because of a protected characteristic (e.g., race or gender) under Title VII of the United States Civil Rights Act.” Redlining is disparate treatment if the intent is to deny black people loans.
  2. Disparate impact: “practices … that adversely affect one group of people of a protected characteristic more than another, even though rules applied … are formally neutral.” (“The disparate impact doctrine was formalized in the landmark U.S. Supreme Court case Griggs v. Duke Power Co. (1971). In 1955, the Duke Power Company instituted a policy that mandated employees have a high school diploma to be considered for promotion, which had the effect of drastically limiting the eligibility of black employees. The Court found that this requirement had little relation to job performance, and thus deemed it to have an unjustified — and illegal — disparate impact.” [Corb2018])

[Lipt2017] points out that these are legal concepts of disparity, and creates corresponding terms for technical concepts of parity applied to machine learning classifiers:

  1. Treatment parity: a classifier should be blind to a given protected characteristic. Also called anti-classification in [Corb2018], or “fairness through unawareness.”
  2. Impact parity: the fraction of people given a positive decision should be equal across different groups. This is also called demographic parity, statistical parity, or independence of the protected class and the score [Fair2018].
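To make impact parity concrete, here is a minimal sketch of the check for a binary classifier’s decisions; the data and group labels are invented purely for illustration.

```python
# Impact (demographic) parity check: compare positive-decision rates
# across groups defined by a protected attribute.
import numpy as np

def positive_rate_by_group(decisions, groups):
    return {str(g): float(decisions[groups == g].mean()) for g in np.unique(groups)}

decisions = np.array([1, 0, 1, 1, 0, 0, 1, 0])
groups    = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
print(positive_rate_by_group(decisions, groups))
# {'A': 0.75, 'B': 0.25} -> impact parity is violated
```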

There is a large body of literature on algorithmic fairness. From [Corb2018], two more definitions:

  1. Classification parity: some given measure of classification error is equal across groups defined by the protected attributes. [Hard2016] called this equal opportunity when the measure is the true positive rate, and equalized odds when both true positive rates and false positive rates are equalized.
  2. Calibration: outcomes are independent of protected attributes conditional on risk score. That is, reality conforms to risk score. For example, about 20% of all loans predicted to have a 20% chance of default actually do.
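And a similarly minimal sketch of the other two checks, again with illustrative names and data shapes: classification parity compares an error measure (here, true positive rates) across groups, and calibration asks whether predicted risk matches observed outcomes.

```python
# Classification parity: compare true positive rates across groups.
# Calibration: among cases predicted ~20% risk, did ~20% actually default?
import numpy as np

def true_positive_rate(y_true, y_pred, groups, group):
    mask = (groups == group) & (y_true == 1)
    return float(y_pred[mask].mean())

def observed_rate_near_score(risk_scores, outcomes, score=0.20, tol=0.05):
    mask = np.abs(risk_scores - score) < tol
    return float(outcomes[mask].mean())   # compare this to `score`
```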

There is lack of consensus in the research community on an ideal statistical definition of fairness. In fact, there are impossibility results on achieving multiple fairness notions simultaneously ([Klei2016] [Chou2017]). As we noted previously, some researchers say that fairness is not a statistical concept.

No definition is perfect

Each statistical definition described above has counterarguments.

Treatment parity unfairly ignores real differences. [Corb2018] describes the case of the COMPAS score used to predict recidivism (whether someone will commit a crime if released from jail). After controlling for COMPAS score and other factors, women are less likely to recidivate. Thus, ignoring sex in this prediction might unfairly punish women. Note that the Equal Credit Opportunity Act legally mandates treatment parity: “Creditors may ask you for [protected class information like race] in certain situations, but they may not use it when deciding whether to give you credit or when setting the terms of your credit.” Thus, [Corb2018] implies that this sort of unfairness is enshrined in law.

Impact parity doesn’t ensure fairness (people argue against quotas), and can cripple a model’s accuracy, harming the model’s utility to society. [Hard2016] discusses this issue (using the term “demographic parity”) in its introduction.

Corbett-Davies and Goel [Corb2018] argue at length that classification parity is naturally violated: “when base rates of violent recidivism differ across groups, the true risk distributions will necessarily differ as well — and this difference persists regardless of which features are used in the prediction.”

They also argue that calibration is not sufficient to prevent unfairness. Their hypothetical example is a bank that gives loans based solely on the default rate within a zip code, ignoring other attributes like income. Suppose that (1) within zip code, white and black applicants have similar default rates; and (2) black applicants live in zip codes with relatively high default rates. Then the bank’s plan would unfairly punish creditworthy black applicants, but still be calibrated.
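A small worked version of that hypothetical, with numbers invented purely for illustration:

```python
# Calibrated but arguably unfair: loans decided only by zip-code default rate.
# Within each zip code, white and black applicants default at the same rate,
# but black applicants mostly live in the high-default zip code.
zip_default_rate = {"zip_low": 0.05, "zip_high": 0.30}

# The bank lends only where the predicted risk is below 10%.
decisions = {z: rate < 0.10 for z, rate in zip_default_rate.items()}
print(decisions)   # {'zip_low': True, 'zip_high': False}

# The score is calibrated: in zip_high, the predicted 30% risk matches the
# observed 30% default rate. Yet the 70% of zip_high applicants who would
# have repaid -- disproportionately black applicants -- are all denied.
```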

Conclusion

In summary, fairness likely has no single measure. We took a whirlwind tour of four statistical definitions, two motivated by history and two more recently motivated by machine learning, and summarized the counterarguments to each.

This also means it is challenging to automatically decide if an algorithm is fair. Open-source fairness-measuring packages reflect this by offering many different measures.

However, this doesn’t mean we should ignore statistical measures. They can give us an idea of whether we should look more carefully. Food for thought, and we should feed our brains well: they are still the ones most likely to make the final call.

(A note: this subject is rightfully contentious. Our intention is to add to the conversation in a productive, respectful way. We welcome feedback of any kind.)

Thanks to Krishnaram Kenthapadi, Zack Lipton, Luke Merrick, Amit Paka, and Krishna Gade for their feedback.

References

  • [Chou2017] Chouldechova, Alexandra. “Fair Prediction with Disparate Impact: A Study of Bias in Recidivism Prediction Instruments.” Big Data 5, no. 2 (June 1, 2017): 153–63. https://doi.org/10.1089/big.2016.0047.
  • [Corb2018] Corbett-Davies, Sam, and Sharad Goel. “The Measure and Mismeasure of Fairness: A Critical Review of Fair Machine Learning.” ArXiv:1808.00023 [Cs], July 31, 2018. http://arxiv.org/abs/1808.00023.
  • [Fair2018] “Fairness and Machine Learning.” Accessed April 9, 2019. https://fairmlbook.org/.
  • [Hard2016] Hardt, Moritz, Eric Price, and Nathan Srebro. “Equality of Opportunity in Supervised Learning.” ArXiv:1610.02413 [Cs], October 7, 2016. http://arxiv.org/abs/1610.02413.
  • [Klei2016] Kleinberg, Jon, Sendhil Mullainathan, and Manish Raghavan. “Inherent Trade-Offs in the Fair Determination of Risk Scores.” ArXiv:1609.05807 [Cs, Stat], September 19, 2016. http://arxiv.org/abs/1609.05807.
  • [Lipt2017] Lipton, Zachary C., Alexandra Chouldechova, and Julian McAuley. “Does Mitigating ML’s Impact Disparity Require Treatment Disparity?” ArXiv:1711.07076 [Cs, Stat], November 19, 2017. http://arxiv.org/abs/1711.07076.

Introducing Fiddler Labs!

Ring out the old, ring in the new,
Ring, happy bells, across the snow:
The year is going, let him go;
Ring out the false, ring in the true.

Alfred Lord Tennyson, 1809 – 1892

As we stand on the brink of a technological transformation that will fundamentally alter the way we live, work, and relate to one another, I am reminded of these famous lines from one of Tennyson’s poems that I used to recite as a young boy growing up.

Artificial Intelligence is set to dramatically change human lives – optimizing jobs and activities, minimizing risks, and helping us make effective decisions – but many questions remain to be answered and many concerns need to be addressed. It is a no-brainer that every enterprise, whether public or private, needs to adopt AI today or risk becoming obsolete and outgunned by the competition.

There are two main challenges that businesses face today with adopting AI:

  1. The high cost of building AI products.
  2. A growing lack of trust in AI.

Operationalizing AI applications continues to be prohibitively time-consuming and expensive, even for the most sophisticated companies. This is primarily because the tools that researchers and scientists use for building AI models do not scale to real production systems, which need end-to-end AI platforms for everything from data preparation and labeling to operationalization and monitoring. Additionally, ambiguity about the ROI of AI within the enterprise pushes companies to chase a single ‘golden use case’, holding many back from fully exploiting AI’s potential. Existing enterprise AI platforms, especially those deployed on-premise, have poor UX, limited features, and no distributed computing. The only alternatives for enterprises seem to be moving to cloud offerings like AWS, Google Cloud, or Azure ML, or starting custom engineering projects.

Enterprises are therefore investing in significant R&D to build the custom AI infrastructure they need. The biggest tech companies have had a considerable head start here. First, they pioneered data collection, data engineering, and ML frameworks. Now they are building a new kind of proprietary infrastructure in-house, e.g., FBLearner at Facebook, TFX at Google, Michelangelo at Uber, Notebook Data Platform at Netflix, Cortex at Twitter, and BigHead at Airbnb. We call this kind of infrastructure the ‘AI Engine’. It manages compute loads, automates deployments of ML models, and provides tools for managing AI projects across the organization.

Trust in AI is a looming societal and technological problem.

One of the biggest questions people are concerned about is whether bias is creeping into AI, or is already present in the data fueling the models. Fairness in AI poses a difficult and subjective question. Sometimes there is a trade-off between the accuracy and the fairness of ML models. In sensitive applications such as healthcare or criminal justice, this trade-off is often undesirable, as any increase in prediction error could have dangerous consequences. AI fairness is also tricky because we cannot design a randomized experiment with race or gender. Instead, we need to understand and monitor data throughout the lifecycle of AI: generation, collection, sampling, feature engineering, and so on. It is a good thing that, as a society, we have become more sensitive about how we use data. There is also a clear consensus emerging that businesses need to be alert to the dangers of bias and its harmful effects on AI-powered decision-making.

To solve these problems, we have started Fiddler Labs to help enterprises adopt cutting edge AI by simplifying operationalization, removing the ambiguity around ROI, and crucially, by creating a culture of trust.

Converting data into intelligence has been our team’s multi-decade journey through companies like Facebook, Google, Twitter, Pinterest, Amazon, Lyft, PayPal, and Microsoft. Over the years, we have seen many new tools, algorithms, and systems built, deployed, and scaled in production. Our team has worked with business owners, data scientists, business analysts, and DevOps engineers to develop a deep understanding of the challenges they face day-to-day in converting data into intelligence and insight. These experiences led us to build a new kind of AI platform that is both more trustworthy and more efficient: the Explainable AI Engine.

You can reach us at info@fiddler.ai for more information or follow us @fiddlerlabs for updates. And if you are interested in helping build the future of AI, join us!