Tracking social media companies’ voting resources ahead of the 2024 election

The U.S. presidential election is drawing near, and social media companies are continuing to manage how their platforms handle election-related content, walking the line between curbing misinformation and protecting free speech.

Fast Company put together a brief guide on how certain social media giants are tackling the issue heading into what many say could be the most important election of their lifetimes.

TikTok

TikTok is working to improve information about the U.S. presidential race within its app as the November election looms.

The company, which has more than 170 million U.S. users, said this week it will expand resources in its U.S. Election Center, which provides reliable information about how elections work. It will also run in-feed videos teaching media literacy, and the short-form video platform will begin requiring verified government and politician accounts to enable two-step verification.

TikTok is in a unique position compared with other popular platforms: It's owned by a Chinese company and could be banned in the U.S. as soon as January over national security concerns. At the same time, it's clear users are getting their news and information from the platform. The company said its U.S. Election Center has been viewed more than 7 million times since its January launch. The center includes FAQs on topics such as how elections work and whether voting systems are secure, and come November it will carry real-time election results from The Associated Press.

TikTok said it is working to remove or reduce the reach of election-related misinformation, which includes false or misleading content on how to vote, how to register to vote, the eligibility qualifications for candidates, and the procedures that govern implementation of elections.

X (formerly Twitter)

Elon Musk completed his acquisition of Twitter, now called X, just days ahead of the 2022 midterm election. November will mark the billionaire's first time running a social media platform during a U.S. presidential election.

The social media platform's civic integrity policy bars users from using X "for the purpose of manipulating or interfering in elections or other civic processes, such as posting or sharing content that may suppress participation, mislead people about when, where, or how to participate in a civic process, or lead to offline violence during an election."

But Musk has aggressively criticized the platform's previous moderation policies and has since dismantled its system for flagging false election content. After Musk eliminated a number of roles on the platform's disinformation and election team, the Tech Oversight Project, a nonprofit that supports breaking up large tech firms, criticized the move and pointed to a European Commission report showing that the largest increase in disinformation and Russian propaganda from January to May 2023 occurred on X.

The platform doesn't appear to be building out any election-specific resources this year, either. In 2020, by contrast, Twitter introduced an election hub that provided real-time voting and election information, as well as election-related news from reputable outlets.

YouTube

In its December 2023 blog post about how it will tackle election information, YouTube said that voters should be able to see all sides of a candidate's platform, even if "controversial or questionable." Still, videos are subject to its community guidelines, and content that misleads voters on how to vote or encourages interference is not allowed on the platform. YouTube added that it will take down content that incites violence, encourages hatred, promotes harmful conspiracy theories, or threatens election workers. Creators are also required to disclose artificially generated content.

At the same time, the company recommends election news and information from authoritative sources, surfacing information panels at the top of search results and below videos to provide additional context.

Meta (Facebook, Instagram, Threads)

Meta, whose platforms hosted a surge of misinformation ahead of the January 6 attack on the Capitol, has been widely accused of contributing to the day's violence through its policies. (Facebook has in the past vehemently denied that it bears responsibility.) Now, attention has turned to how the company manages the 2024 election.

In terms of voter education, Meta has long maintained a voting information center with resources such as tools for checking voter registration status and information on how to become a poll worker. The company also works with state and local election officials to send registration and voting information to users.

The company has rolled out fact-checking labels across its biggest social platforms. Meta's misinformation policies say it removes content that is "likely to directly contribute to imminent physical harm," as well as any potential "interference with the functioning of political processes." The company said it partners with independent experts to assess whether content is likely to directly contribute to the risk of imminent harm.

Meta said in February that it was working with industry partners to identify AI content so that users will see labels on images posted to Facebook, Instagram, and Threads that are detected to be AI-generated.
