A little experiment by an artificial intelligence researcher is raising questions about how TikTok’s recommendation algorithm suggests new creators to users.
TikTok’s algorithmic drive to serve users more of what it thinks they will like appears to have an unintended consequence: the app has started recommending new accounts to follow based on the physical appearance of the people a user already follows.
Specifically, the question is whether the algorithm sorts suggestions by the race of the creator, something TikTok denies doing intentionally. Either way, it is another example of why the app, like other social media platforms, deserves more scrutiny over how it promotes particular creators and content.
Marc Faddoul, a researcher at the University of California, Berkeley, School of Information, studies AI and disinformation. He was browsing TikTok looking for disinformation when he noticed something curious about how the app recommends new creators to follow.
In the app, when a person follows a new account, they can click an arrow that then recommends other accounts to follow. Faddoul noticed that when he did this, the recommended accounts tended to look just like whoever he’d just followed — right down to ethnicity and hair color.
This isn’t the first time TikTok’s algorithm has been accused of racial bias. In October 2019, TikTok users of color called for better representation on the For You page, where users go for recommendations and new tailored content. In January 2019, Whitney Phillips, a professor of communication and online media rhetoric at Syracuse University, told Motherboard that the way TikTok works could lead users to replicate the community with which they identify.
The data TikTok collects from its millions of users feeds a self-reinforcing cycle: even if you make an effort to diversify your feed, the aggregate biases in everyone else’s behavior mean the algorithm will keep trying to channel you into a bubble.
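The bubble dynamic can be sketched with a toy similarity-based recommender: suggest the accounts whose feature vectors most resemble the one just followed. This is purely illustrative; the account names and feature values below are invented, and nothing here describes TikTok’s actual system.

```python
# Toy content-based recommender (illustrative only, not TikTok's system).
# Each hypothetical account is described by a crude feature vector; the
# recommender ranks other accounts by cosine similarity to the account
# the user just followed, so "lookalike" accounts rise to the top.

from math import sqrt

ACCOUNTS = {
    "creator_a": [1.0, 0.0, 0.9],
    "creator_b": [0.9, 0.1, 0.8],
    "creator_c": [0.1, 1.0, 0.2],
    "creator_d": [0.2, 0.9, 0.1],
}

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm

def suggest(just_followed, k=2):
    """Return the k accounts most similar to the one just followed."""
    target = ACCOUNTS[just_followed]
    others = [(name, cosine(target, feats))
              for name, feats in ACCOUNTS.items() if name != just_followed]
    others.sort(key=lambda pair: pair[1], reverse=True)
    return [name for name, _ in others[:k]]

print(suggest("creator_a"))  # → ['creator_b', 'creator_d']
```

Because the ranking rewards similarity to what the user already follows, each accepted suggestion narrows the pool further, which is exactly the feedback loop described above.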
Of course, this is hardly unique to TikTok. Any social media platform that relies on recommendation algorithms can create bubbles in which people only see content that confirms their biases. Think of, for example, how a Facebook feed may skew toward a particular political viewpoint, as BuzzFeed has noted.
Computer science and cybersecurity expert Matthew Curtin also says many privacy issues have come up for the app, which has been downloaded more than 1.5 billion times worldwide.
“So it gives away too much information like their location, for example. It actually will take photos and include the location information, and share them openly with people and create a possible opening for stalking, and so on,” said Curtin.