On TikTok, election misinformation thrives before midterms
By Tiffany Hsu, The New York Times Company
In Germany, TikTok accounts impersonated prominent political figures during the country’s last national election. In Colombia, misleading TikTok posts falsely attributed a quotation from one candidate to a cartoon villain and allowed a woman to masquerade as another candidate’s daughter. In the Philippines, TikTok videos amplified sugarcoated myths about the country’s former dictator and helped his son prevail in the presidential race.
Now, similar problems have arrived in the United States.
Before the midterm elections this fall, TikTok is shaping up to be a primary incubator of baseless and misleading information, in many ways as problematic as Facebook and Twitter, researchers who track online falsehoods say. The same qualities that allow TikTok to fuel viral dance fads — the platform’s enormous reach, the short length of its videos, its powerful but poorly understood recommendation algorithm — can also make inaccurate claims difficult to contain.
Baseless conspiracy theories claiming that voter fraud is certain to occur in November are widely viewed on TikTok, which globally has more than 1 billion active users each month. Users cannot search for the #StopTheSteal hashtag, but #StopTheSteallll had accumulated nearly 1 million views before TikTok disabled the hashtag after being contacted by The New York Times. Some videos urged viewers to vote in November while citing debunked rumors raised during the congressional hearings into the Jan. 6, 2021, attack on the Capitol. TikTok posts have garnered thousands of views by claiming, without evidence, that predictions of a surge in COVID-19 infections this fall are an attempt to discourage in-person voting.
The spread of misinformation has left TikTok struggling with many of the same knotty free speech and moderation issues that Facebook and Twitter have faced, and have addressed with mixed results, for several years.
But the challenge may be even more difficult for TikTok to address. Video and audio — the bulk of what is shared on the app — can be far more difficult to moderate than text, especially when they are posted with a tongue-in-cheek tone. TikTok, which is owned by the Chinese tech giant ByteDance, also faces doubts in Washington about whether its business decisions about data and moderation are influenced by its roots in Beijing.
“When you have extremely short videos with extremely limited text content, you just don’t have the space and time for nuanced discussions about politics,” said Kaylee Fagan, a research fellow with the Technology and Social Change Project at the Harvard Kennedy School’s Shorenstein Center.
TikTok had barely been introduced in the United States at the time of the 2018 midterm elections and was still largely considered an entertainment app for younger people during the 2020 presidential election. Today, its American user base spends an average of 82 minutes a day on the platform, three times as long as on Snapchat or Twitter and twice as long as on Instagram or Facebook, according to a recent report from the app analytics firm Sensor Tower. TikTok is becoming increasingly important as a destination for political content, often produced by influencers.
The company insists that it is committed to combating false information. In the second half of 2020, it removed nearly 350,000 videos that included election misinformation, disinformation and manipulated media, according to a report it released last year. The platform’s filters kept an additional 441,000 videos with unsubstantiated claims from being recommended to users, the report said.
The service blocked so-called deepfake content and coordinated misinformation campaigns before the 2020 election, made it easier for users to report election falsehoods and partnered with 13 fact-checking organizations, including PolitiFact. Researchers like Fagan said TikTok had worked to shut down problematic search terms, although its filters remain easy to evade with creative spellings.
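The cat-and-mouse dynamic is easy to illustrate. Below is a minimal sketch in Python (hypothetical code, not TikTok’s actual system) of why an exact-match blocklist misses creative spellings like #StopTheSteallll, and how collapsing repeated letters before matching would catch that particular variant.

```python
import re

# Hypothetical blocklist for illustration; TikTok's real filters are not public.
BLOCKLIST = {"stopthesteal"}

def is_blocked_exact(tag: str) -> bool:
    """Naive filter: matches only the exact blocked spelling."""
    return tag.lower().lstrip("#") in BLOCKLIST

def is_blocked_normalized(tag: str) -> bool:
    """Collapse runs of a repeated character before matching, so
    '#StopTheSteallll' normalizes to 'stopthesteal' and is caught."""
    normalized = re.sub(r"(.)\1+", r"\1", tag.lower().lstrip("#"))
    return normalized in BLOCKLIST

print(is_blocked_exact("#StopTheSteallll"))       # False: the variant evades an exact match
print(is_blocked_normalized("#StopTheSteallll"))  # True: caught after normalization
```

Even normalization is trivial to defeat with other tricks, such as inserted punctuation or swapped characters, which is consistent with the researchers’ observation that the filters “remain easy to evade with creative spellings.”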
“We take our responsibility to protect the integrity of our platform and elections with utmost seriousness,” TikTok said in a statement. “We continue to invest in our policy, safety and security teams to counter election misinformation.”
But the service’s troubling track record during foreign elections — including in France and Australia this year — does not bode well for the United States, experts said.
TikTok has been “failing its first real test” in Africa in recent weeks, Odanga Madung, a researcher for the nonprofit Mozilla Foundation, wrote in a report. The app struggled to tamp down disinformation before last week’s presidential election in Kenya. Madung cited a post on TikTok that included an altered image of one candidate holding a knife to his neck and wearing a blood-streaked shirt, with a caption that described him as a murderer. The post garnered more than 500,000 views before it was removed.
“Rather than learn from the mistakes of more established platforms like Facebook and Twitter,” Madung wrote, “TikTok is following in their footsteps.”
TikTok has also struggled to contain nonpolitical misinformation in the United States. Health-related myths about COVID-19 vaccines and masks run rampant, as do rumors and falsehoods about diets, pediatric conditions and gender-affirming care for transgender people. A video making the bogus claim that the mass shooting at Robb Elementary School in Uvalde, Texas, in May had been staged drew more than 74,000 views before TikTok removed it.
Posts on TikTok about Russia’s war in Ukraine have also been problematic. Even experienced journalists and researchers analyzing posts on the service struggle to separate truth from rumor or fabrication, according to a report published in March by the Shorenstein Center.
TikTok’s design makes it a breeding ground for misinformation, the researchers found. They wrote that videos could easily be manipulated and republished on the platform and showcased alongside stolen or original content. Pseudonyms are common; parody and comedy videos are easily misinterpreted as fact; popularity affects the visibility of comments; and data about publication time and other details is not clearly displayed on the mobile app.
(The Shorenstein Center researchers noted, however, that TikTok is less vulnerable to so-called brigading, in which groups coordinate to make a post spread widely, than platforms like Twitter or Facebook.)
During the first quarter of 2022, more than 60% of videos with harmful misinformation were viewed by users before being removed, TikTok said. Last year, a group of behavioral scientists who had worked with TikTok said that an effort to attach warnings to posts with unsubstantiated content had reduced sharing by 24% but had cut views by only 5%.
Researchers said that misinformation would continue to thrive on TikTok as long as the platform refused to release data about the origins of its videos or share insight into its algorithms. Last month, TikTok said it would offer some access to a version of its application programming interface this year, but it would not say whether it would do so before the midterms.
Filippo Menczer, an informatics and computer science professor and director of the Observatory on Social Media at Indiana University, said he had proposed research collaborations to TikTok and had been told: “absolutely not.”
“At least with Facebook and Twitter, there is some level of transparency, but, in the case of TikTok, we have no clue,” he said. “Without resources, without being able to access data, we don’t know who gets suspended, what content gets taken down, whether they act on reports or what the criteria are. It’s completely opaque, and we cannot independently assess anything.”
U.S. lawmakers are also calling for more information about TikTok’s operations, amid renewed concerns that the company’s ties to China could make it a national security threat. The company has said it plans to keep data about its American users separate from its Chinese parent. It has also said its rules have changed since it was accused of censoring posts seen as antithetical to Beijing’s policy goals.
The company declined to say how many human moderators it had working alongside its automated filters. (A TikTok executive told British politicians in 2020 that the company had 10,000 moderators around the world.) But former moderators have complained about difficult working conditions, saying they were spread thin and sometimes required to review videos that used unfamiliar languages and references — an echo of accusations made by moderators at platforms like Facebook.
In current job listings for moderators, TikTok asks for a willingness to “review a large number of short videos” “in continuous succession during each shift.”
In a lawsuit filed in March, Reece Young of Nashville, Tennessee, and Ashley Velez of Las Vegas said they had “suffered immense stress and psychological harm” while working for TikTok last year. The former moderators described 12-hour shifts assessing thousands of videos, including conspiracy theories, fringe beliefs, political disinformation and manipulated images of elected officials. Usually, they said, they had less than 25 seconds to evaluate each post and often had to watch multiple videos simultaneously to meet TikTok’s quotas.
In a filing, the company pushed for the case to be dismissed in part because the plaintiffs had been contractors hired by staffing services and not directly by TikTok. The company also noted the benefits of human oversight when paired with its review algorithms, saying, “The significant social utility to content moderation grossly outweighs any danger to moderators.”
Election season can be especially difficult for moderators, because political TikTok posts tend to come from a diffuse collection of users addressing broad issues, rather than from specific politicians or groups, said Graham Brookie, senior director of the Digital Forensic Research Lab at the Atlantic Council.
“The bottom line is that all platforms can do more and need to do more for the shared set of facts that social democracy depends on,” Brookie said. “TikTok, in particular, sticks out because of its size, its really, really rapid growth and the number of outstanding issues about how it makes decisions.”
This article originally appeared in The New York Times.