TikTok and the Risks of Black Box Algorithms

TikTok made waves this summer when its CEO Kevin Mayer announced on the company’s blog that TikTok would release its algorithms to regulators, and called on other companies to do the same. Mayer described the decision as a way to provide “peace of mind through greater transparency and accountability” and to demonstrate that TikTok “believe[s] it is essential to show users, advertisers, creators, and regulators that [they] are responsible and committed members of the American community that follows US laws.”

It is hardly a coincidence that TikTok’s news broke the same week that Facebook, Google, Apple, and Amazon were set to testify in front of the House Judiciary’s antitrust panel. TikTok has quickly risen as fierce competition to these U.S.-based players, who acknowledge the competitive threat TikTok poses and have also cited TikTok’s Chinese origin as a distinct threat to the security of its users and to American national interests. TikTok’s news signals an intent to pressure these companies to increase their own transparency as they push back on TikTok’s ability to continue operating in the U.S.

Now, TikTok is again in the news over the same algorithms, as a deal for a potential sale of TikTok to a U.S.-based company hit a roadblock with new uncertainty over whether or not its algorithms would be included in the sale. According to the Wall Street Journal, “The algorithms, which determine the videos served to users and are seen as TikTok’s secret sauce, were considered part of the deal negotiations up until Friday, when the Chinese government issued new restrictions on the export of artificial-intelligence technology.”

This one-two punch of news over TikTok’s algorithms raises two major questions: 

1. What is the value of TikTok with or without its algorithms? 
2. Does the release of these algorithms actually increase transparency and accountability?

This post dives into the second question, which gets to the basis upon which Fiddler was founded: how to provide more trustworthy, transparent, and responsible AI.

TikTok's AI Black Box

While credit is due to TikTok for opening up its algorithms, its hand was largely forced here. Earlier this year, countless articles expounded upon the potential biases within the platform. Users found that, in a similar fashion to other social media platforms, TikTok recommended accounts based on accounts users already followed. But the recommendations were similar not only in the type of content, but in physical attributes such as race, age, or facial characteristics (down to things like hair color or physical disabilities). According to an AI researcher at the UC Berkeley School of Information, these recommendations get “weirdly specific – Faddoul found that hitting follow on an Asian man with dyed hair gave him more Asian men with dyed hair.” Aside from criticisms around bias, concerns about the opacity of Chinese control over and access to the algorithms added pressure to increase transparency.

TikTok clearly made a statement by responding to this pressure and being the first mover in releasing its algorithms in this manner. But how will this release actually impact lives? Regulators now have access to the code that drives TikTok’s algorithms, its moderation policies, and its data flows. But sharing this information does not necessarily mean that the way its algorithms make decisions is actually understandable. Those algorithms are largely a black box, and it is incumbent on TikTok to equip regulators with tools to see into that black box and explain the ‘how’ and ‘why’ behind its decisions.

The Challenges of Black Box AI

TikTok is hardly alone in the challenge of answering for the decisions its AI makes and opening up black box algorithms to increase explainability. As the potential for applying AI across industries and use cases grows, new risks have also emerged: over the past couple of years, there has been a steady stream of news about breaches of ethics, lack of transparency, and noncompliance due to black box AI. The impacts are far-reaching. This can mean negative PR: news that Amazon’s AI-powered recruiting tool was biased against women and that Apple Card was under investigation after gender discrimination complaints led to months of bad press for the companies involved. And it’s not just PR that companies have to worry about. Regulations are catching up with AI, and fines and regulatory action are becoming real concerns - for example, New York’s insurance regulator recently probed UnitedHealth’s algorithm for racial bias. Bills and laws demanding explainability and transparency and expanding consumers’ rights are being passed in the United States as well as internationally, and these will only increase the risks of non-compliance for AI.

Beyond regulatory and financial risks, as consumers become more aware of the ubiquity of AI within their everyday lives, the need for companies to build trust with their customers grows more important. Consumers are demanding accountability and transparency as they begin to recognize the impact these algorithms can have on their lives, for issues big (credit lending or hiring decisions) and small (product recommendations on an ecommerce site). 

These issues are not going away. If anything, as AI becomes more and more prevalent in everyday decision making and regulations inevitably catch up with its ubiquity, companies must invest in ensuring their AI is transparent, accountable, ethical, and reliable. 

But what is the solution? 

At Fiddler, we believe that the key to this is visibility and transparency of AI systems. In order to root out bias within models, you must first be able to understand the ‘how’ and ‘why’ behind their behavior so you can efficiently diagnose issues. When you know why your models are doing something, you have the power to make them better while also sharing this knowledge to empower your entire organization.

But what is Explainable AI? Explainable AI refers to the process by which the outputs (decisions) of an AI model are explained in terms of its inputs (data). Explainable AI adds a feedback loop to the predictions being made, enabling you to explain why the model behaved the way it did for a given input. This helps you provide clear and transparent decisions and build trust in the outcomes.
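To make that concrete, here is a minimal sketch of attributing a model’s prediction to its input features using the open-source shap library. The dataset, model, and feature set are illustrative assumptions for the sketch, not Fiddler’s implementation.

```python
# A minimal sketch of input-attribution explainability with SHAP
# (illustrative only; not Fiddler's implementation).
import shap
import xgboost
from sklearn.model_selection import train_test_split

# Toy census-income dataset bundled with shap; features are illustrative.
X, y = shap.datasets.adult()
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train a gradient-boosted classifier on the baseline data.
model = xgboost.XGBClassifier().fit(X_train, y_train)

# Explain each prediction in terms of its input features.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Per-feature contributions for one prediction: positive values pushed
# the score up, negative values pushed it down.
print(dict(zip(X_test.columns, shap_values[0])))
```

Each attribution shows how much a given input pushed that prediction up or down - the kind of ‘why’ an explanation needs to surface for a stakeholder or regulator.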

Explainability on its own is largely reactive. In addition to being able to explain the outcomes of your model, you must be able to continuously monitor data that is fed into the model. Continuous monitoring gives you the ability to be proactive rather than reactive - meaning you can drill down into key areas and detect and address issues before they get out of hand. 
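As a sketch of what monitoring the data fed into a model can look like, the snippet below compares a production feature distribution against its training baseline using the population stability index (PSI). The binning and alert threshold are rule-of-thumb assumptions, not Fiddler’s method.

```python
# A minimal sketch of drift monitoring with the population stability index;
# the binning and the 0.2 alert threshold are illustrative rules of thumb.
import numpy as np

def psi(baseline, production, bins=10):
    """Population stability index between two 1-D feature samples."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    # Clip production values into the baseline range so nothing falls outside the bins.
    production = np.clip(production, edges[0], edges[-1])
    base_frac = np.histogram(baseline, bins=edges)[0] / len(baseline)
    prod_frac = np.histogram(production, bins=edges)[0] / len(production)
    # Floor the fractions to avoid dividing by or taking the log of zero.
    base_frac = np.clip(base_frac, 1e-6, None)
    prod_frac = np.clip(prod_frac, 1e-6, None)
    return float(np.sum((prod_frac - base_frac) * np.log(prod_frac / base_frac)))

# Example: the production distribution has shifted relative to training.
rng = np.random.default_rng(0)
training_ages = rng.normal(35, 8, 10_000)
production_ages = rng.normal(42, 8, 10_000)

score = psi(training_ages, production_ages)
if score > 0.2:  # a commonly used rule-of-thumb threshold
    print(f"Drift alert: PSI={score:.2f} - investigate before it gets out of hand")
```

Running a check like this on every model input, on a schedule, is what turns monitoring from a reactive exercise into an early-warning system.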

Explainable monitoring increases transparency and actionability across the entire AI lifecycle. This instills trust in your models among stakeholders within and outside of your organization, including business owners, customers, customer support, IT and operations, developers, and internal and external regulators.

Explainable AI

While much is unknown about the future of AI, we can be certain that the need for responsible and understandable models will only increase. At Fiddler, we believe there is a need for a new kind of Explainable AI Platform that enables organizations to build responsible, transparent, and understandable AI solutions. We’re working with a range of customers across industries - from banks to HR companies, from Fortune 100 companies to startups in the emerging technology space - empowering them to do just that.

If you’d like to learn more about how to unlock your AI black box and transform the way you build AI into your systems, let us know.