Twitter research group stall complicates compliance with new EU law

By Sheila Dang

(Reuters) – The stalling of a Twitter program that has been crucial to outside researchers investigating disinformation campaigns calls into question the company’s strategy to comply with forthcoming regulation in Europe, former employees and experts told Reuters.

The European Union’s new Digital Services Act (DSA), one of the world’s toughest regulations for internet platforms, has prompted tech companies to take action against illegal content and to outline the steps they are taking to moderate content before the law comes into full force in early 2024.

Twitter signed a voluntary agreement with the EU in June related to the DSA, in which it pledged to “strengthen the research community”, including by sharing datasets on disinformation with researchers. The EU’s goal with the law is to create a safer internet for users and to have a mechanism to hold companies accountable.

According to Yoel Roth, Twitter’s former head of trust and safety, the Twitter Moderation Research Consortium was a key part of Twitter’s plan because it compiled data on state-sponsored manipulation of the platform and made it available to researchers. “Twitter was uniquely well positioned,” he said.

Almost all of the 10 to 15 employees who worked on the consortium have left the company since Elon Musk acquired Twitter in October, according to Roth, who resigned in November, and three other former employees involved in the program.

The EU law would oblige platforms with more than 45 million EU users to respond to EU-approved research proposals.

Failure to comply with the DSA once it has come into force can result in fines of up to 6% of global sales or even an operating ban in the EU, according to the European Commission’s website.

Reuters could not determine whether Twitter made alternative plans to comply with the DSA.

In an email, Ella Irwin, head of trust and safety at Twitter, said: “We intend to be fully compliant with the DSA, have many staff working internally on DSA compliance and have notified (EU Commissioner Thierry) Breton and his team of our intention to comply.”

She did not comment on detailed questions about the status of the consortium, how many employees are working on it or how Twitter intends to comply with the DSA.

Breton has met with Musk at least twice to discuss Twitter’s intent to comply with the upcoming law. In November, Breton said Twitter had “big work ahead” because the company needed to “take firm action against disinformation” and significantly increase content moderation. In May, Musk appeared in a video with Breton expressing support for the Digital Services Act. Breton’s spokesman declined to comment on this story.

Across the company, at least 5,000 employees (about two-thirds of the pre-acquisition total) have either resigned or been laid off as Musk overhauls Twitter, hitting the trust and safety and public policy teams particularly hard.

“I just don’t see how the absolutely paltry staff … will be able to readily comply (to the DSA),” said Rebekah Tromble, director of the Institute for Data, Democracy and Politics at George Washington University.

THE WORK OF THE CONSORTIUM

The research consortium was formed in response to the backlash against Russian interference in the 2016 US presidential election. According to the company’s website, the goal is to “increase transparency around Twitter’s content moderation policies and enforcement decisions.”

Twitter prohibits any person, entity, or government from manipulating conversations on the service, for example by using multiple or fake accounts to make content appear more popular.

Early last year, Twitter launched a consortium pilot to reveal examples of platform manipulation to some outside researchers.

When Twitter investigated and removed accounts suspected of foreign interference, it shared data with researchers to help them investigate the misinformation strategies and their origins.

In September, Twitter opened an application process to expand the consortium and had accepted about 50 researchers by the time of Musk’s Oct. 27 acquisition, two of the former employees said.

Twitter had prepared to share at least a dozen new datasets with researchers by then, the former employees said.

Of the three former Twitter employees who asked not to be identified for fear of reprisals, one recently spoke to current employees and was told they did not have the staff or bandwidth to continue working on the consortium.

Five outside researchers told Reuters that without a program like the research consortium, it will be more difficult to study how governments use Twitter to intervene in elections or political events around the world.

Two of the consortium members said Twitter didn’t send a memo to officially shut down the program and previously released data is still available to them, but they hadn’t received any data from it for at least two months.

The research consortium is an important tool to make the Internet safer, according to two US lawmakers who introduced legislation last year that would require social media platforms to give academic researchers access to data. Their Digital Services Oversight and Safety Act was not voted on.

Rep. Lori Trahan of Massachusetts and Rep. Sean Casten of Illinois also wrote an open letter to Twitter Nov. 18 asking if Twitter would keep the consortium after layoffs that halved the workforce.

Asked about the consortium by Reuters this month, Trahan said failure to maintain the program would represent “a massive step backwards”.

The Stanford Internet Observatory, a consortium member that studies Internet risks, said it has not received any communications from the program since mid-November and has not been in contact with Twitter.

The Stanford team has published at least three articles using consortium data, including one on Twitter accounts that promoted India’s military activities in Kashmir and one on US-linked attempts to spread pro-Western narratives abroad.

If the research consortium is eliminated, “we will return to the 2017 era of limited shared communication about malicious activity by state actors,” said Renée DiResta, research manager at the Stanford Internet Observatory.

Cazadores de Fake News, a Venezuela-based consortium member that fact-checks online news, told Reuters the research program “appears to have gone on hiatus” and the organization has not heard from Twitter since Musk’s acquisition.

“But we hope it will resurface at some point as it is a very valuable initiative,” spokesman Adrian Gonzalez said.

(Reporting by Sheila Dang in Dallas; Additional reporting by Paresh Dave; Editing by Kenneth Li and Claudia Parsons)
