Following TikTok, Meta announces 2024 EU election preparations

Meta will focus on combating misinformation and countering the risks posed by Artificial Intelligence in preparation for the June 2024 European Parliament elections, the company announced on Monday.

Meta, the parent company of the social media platforms Facebook, Instagram, Threads, and WhatsApp, is now following in the footsteps of ByteDance’s TikTok, which announced its preparations for the elections on 14 February.

According to Meta’s blog post by Marco Pancini, head of EU affairs, “content that could contribute to imminent violence or physical harm, or that is intended to suppress voting” is being removed from Facebook, Instagram, and Threads.

However, for content that does not violate these policies, Meta is working with fact-checking organisations, “26 partners across the EU covering 22 languages”, to review and rate it.

To make it easier and faster for fact-checking partners to locate and evaluate election-related content, the American company will “use keyword detection to group related content in one place”.

“Our fact-checking partners are also being onboarded to our new research tool, Meta Content Library, that has a powerful search capability to support them in their work”, the blog post reads.

Meta will also establish an Elections Operations Center to promptly identify potential threats and implement real-time mitigation strategies.

Earlier this month, TikTok promised to “launch a local language Election Centre in-app for each of the 27 individual EU Member States to ensure people can easily separate fact from fiction”.

The Chinese-owned platform collaborates with nine fact-checking organisations in Europe, evaluating content accuracy in 18 European languages and labelling “any claims that cannot be verified”.

Advertisements featuring debunked content, discouraging voting, or questioning the validity of the election or its outcomes will also be banned. Meta ads undergo “several layers of analysis and detection” both pre- and post-publication.

Meta is collaborating with the European Fact-Checking Standards Network (EFCSN) to train European fact-checkers to evaluate AI-generated and digitally altered media. Additionally, they are launching a media literacy campaign to educate the public on identifying such content.

In TikTok’s case, accounts owned by politicians or political parties cannot advertise or generate revenue on the platform, and those who violate the rules will have their content removed.

However, the video-sharing platform applies “more nuanced account enforcement policies to protect the public interest”, too. For instance, if a politician or party shares misinformation that could undermine civic processes or cause real-world harm during an election period, TikTok may restrict the account from posting content for up to 30 days and remove the content for rule violations.

Covert influence operations

Like TikTok before it, Meta mentions covert influence operations, defined as coordinated efforts to manipulate public discourse for strategic purposes, ranging from covert campaigns using fake identities to overt actions by state-controlled media.

To combat covert influence operations, Meta has established specialised global teams dedicated to halting such behaviour and “conducted a session to focus on threats specifically associated with the EU Parliament elections”.

The company also labels state-controlled media on Facebook, Instagram, and Threads to inform users of potential government influence.

TikTok’s plans include introducing dedicated reports on covert influence operations to enhance transparency, accountability, and cross-industry collaboration.

ByteDance’s platform will also introduce labels to videos related to the European elections, guiding users to the relevant Election Centre and using reminders on hashtags to prompt users to adhere to rules, verify facts, and report any content they suspect violates the Community Guidelines.

Artificial Intelligence

AI-generated content will undergo a review process. If it is found to be “altered”, which encompasses “faked, manipulated, or transformed audio, video, or photos”, the company labels it and down-ranks it in the feed to minimise its visibility, while ads featuring debunked content are not permitted to run at all.

Meta labels photorealistic content created using Meta AI, even if it is not “altered”, and is developing tools to label AI-generated images from various other sources, including Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock.

Meta will also add a feature that will let users disclose AI-generated content they share by adding a label, with penalties applying if they do not. If the content “poses a substantial risk of materially deceiving the public on significant matters”, the company can use a more prominent label “so people have more information and context”.

Advertisers on Meta must disclose if they use digitally altered photorealistic images, videos, or realistic-sounding audio in certain instances for ads related to social issues, elections, or politics.

Meta’s review and labelling process of AI-generated content is very similar to what TikTok announced this month, but the latter also mentioned that it will create a dedicated “Mission Control” space within its Dublin office to unite specialist elections teams from the trust and safety department.

Meta notes that it has collaborated with industry peers to establish common standards and guidelines to address the prevalence of AI-generated content online.

“This work is bigger than any one company and will require a huge effort across industry, government, and civil society”, the blog post concludes.

[Edited by Alice Taylor]