The US, it seems, may very well ban TikTok. It’s certainly unfortunate (but also, let’s admit it, kind of lulzy) that we’ve arrived at a point where a platform notable mostly for lip-syncing and dance videos can be considered a possible national security threat. And yet, here we are. What is to be done?
What would banning TikTok accomplish? It would certainly have the immediate and reliable effect of neutralizing whatever national security threat the app ostensibly poses. Tweens from sea to shining sea would have a terrible, horrible, no good, very bad day. It would not, fundamentally, Solve The Problem. As has been noted, the important thing, ultimately, is to develop a comprehensive framework for ensuring data security and information integrity in online platforms, which would apply across the board to all companies, regardless of country of origin.
This, however, will be hard. And hard things take time. Banning a single company, ad hoc, is relatively easy. If an online platform plausibly posed a significant national security threat in the near term (ahead of, say, an important upcoming election *ahem*), banning that company at least temporarily while proper regulation was worked out would be a totally reasonable move. Once such regulation were in place, the company in question could simply be invited to comply with the newly established rules of the road and re-enter the market. Steelmanning a prospective indefinite-term TikTok ban, you might also say that it is impossible to predict how long such regulation would take to pass, so better to remove the threat now, with the intention of re-opening the playing field once regulation is in place. Of course, I do not expect the Trump administration to hold such nuanced intentions.
The question in this case then becomes: is there a sufficiently plausible chance that TikTok poses enough of a security risk to merit the costs of a ban?
I think it’s easy to underestimate the havoc a maximally-malicious TikTok could wreak based on the associations of the platform with teenage silliness, so we should not dismiss the issue out of hand. Recall, for comparison, that not so long ago many people viewed an earlier generation of social media platforms as either pure frivolity, totally unsuited for such serious matters as politics, or a conduit for liberal democratization and the awakening of an enlightened global consciousness. In actual fact, it seems the content moderation concerns may be legitimate grounds for a ban, although I’m more skeptical about the privacy concerns. Additionally, if the US is to ban TikTok, I sincerely hope the reasoning for doing so is clearly communicated. Otherwise, it could easily feed into any of a number of simplistic narratives: blanket Sinophobia, techno-protectionism, tacit legitimization of “cyber sovereignty”.
Hopefully, Gen Z has not taken to scrawling their social security numbers on their foreheads as the latest viral craze. But even so, gather a large enough heap of user-generated data in one place, and you may be surprised at what you can do with it. Luckily, video is perhaps a bit harder to extract useful information from than text (although advances in AI are making it easier), let alone an extensive web of multi-modal data and metadata like Facebook's, but it's not hard to imagine a Cambridge Analytica-style exploitation of TikTok user data. A large enough quantity of privately shared videos is also sure to contain some choice nuggets the US would prefer the CCP not have access to. The demographics of TikTok's userbase suggest lower risk, though. Through TikTok, Chinese intelligence may certainly become privy to endless, tempestuous sagas of teenage drama, but copious kompromat on powerful 50- or 60-somethings they will not accrue, because 50- and 60-somethings, for the most part, do not use TikTok.
More important, and insidious, though, is the issue of information warfare. It's not entirely clear to what extent this is currently happening. The Guardian reports that leaked documents indicate that TikTok does censor content related to the Tiananmen Massacre, Tibetan independence, and Falun Gong. BuzzFeed found no evidence of censorship related to Hong Kong in an investigation last year. For better or worse, I don't think any of those things are common topics of discussion among American 13-year-olds today. Fortunately, it seems like just surreptitiously buying ad space to target certain demographics, as Russia did on Facebook in 2016, might not work that well given the platform's limited targeting ability. But if TikTok were to quietly start removing videos critical of some controversial Chinese government action under widespread debate in the US, or bumping videos supportive of one or another candidate in the upcoming presidential election, how long would it take for Zoomers to notice? What percentage of users would believe the reports that their virtual hangout of choice was being distorted in accordance with CCP diktat? Moreover, what would they do about it? Sure, users have options, but network effects make switching social media platforms difficult.
In the context of the upcoming 2020 election in the US, it’s not clear that China would even have a clear preference for one or the other candidate. In a race where both candidates will be bludgeoning each other with accusations of panda-hugging and competing over dragon-slaying bona fides, there is obviously no “friend of China” in the running. Ironically, if anything, China might actually like to help Trump get re-elected. Trump damages US interests and global influence, which suits Chinese aims quite well. On the other hand, while shooting himself in the foot, he is liable also to haphazardly riddle China with bullets out of the same loosely grasped machine gun. Biden, while he may represent more of a return to “hegemonic” form for the US, would likely be a more reasonable negotiating partner. Would generally “sowing discord” be an attractive enough goal to risk the public opinion backlash? It seems like a platform that is mostly humor and entertainment would not be terribly cost-effective for this purpose, at least as compared to eminently rage-scrollable Twitter and Facebook.
We can take some temporary consolation as well in the fact that much of TikTok's userbase is not yet old enough to vote anyway. According to my rough analysis based on user data from April of this year, we could expect some ~20 million voters to have been active on TikTok in the first half of the year (I would love to see more up-to-date data on US TikTok users broken down by age group). In April, when the polling gap between Biden and Trump was at its narrowest, about 11 million Americans made the difference between a prospective Biden victory and a Trump victory. Today, that number is more like 25 million. Under reasonable assumptions about growth in TikTok's over-18 userbase and voter turnout in the 2020 election, essentially all voters on TikTok would have to be currently planning to vote for Biden and change their minds before the election for TikTok to be causally responsible for a second Trump term. This is exceedingly unlikely.
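The arithmetic here can be sketched as a quick back-of-envelope calculation. All figures are the rough estimates from the text; the 60% turnout rate is an assumed, illustrative number, not real data:

```python
# Back-of-envelope check of the argument above. Every number is a rough
# assumption from the text (or hypothesized), not authoritative data.

tiktok_voting_age_users = 20_000_000  # ~20M voting-age Americans active on TikTok, H1 2020
current_gap = 25_000_000              # rough number of voters separating the candidates today
turnout = 0.6                         # hypothetical turnout rate among those users

likely_tiktok_voters = tiktok_voting_age_users * turnout  # 12M actually casting ballots

# A voter who flips sides moves the margin by two: one vote leaves one
# candidate's column and lands in the other's.
max_possible_swing = likely_tiktok_voters * 2  # if literally every one of them flipped

print(f"max swing {max_possible_swing/1e6:.0f}M vs gap {current_gap/1e6:.0f}M")
# Even the absurd 100%-flip scenario falls short of the current gap under
# these assumptions, hence "exceedingly unlikely".
```

The point of the sketch is only that the conclusion is robust: the result holds even when every assumption is pushed to its implausible extreme.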
2024, though, we must remember, is only a few short years away. Four, to be exact. By that time, many Zoomers will be eligible to vote, and their emerging political consciousness will have been shaped both by the events of 2020 and by their media consumption environment in the intervening years. I'm not particularly excited about the prospect of the latter (or, to whatever lesser extent, the former) conforming to the conveniences of the CCP. The clock, indeed, is ticking.