
EU Issues Warning After Elon Musk Pulls Twitter Out of Anti-‘Disinformation’ Agreement

Senior European Union officials were furious over the weekend after Twitter owner Elon Musk pulled the social media platform out of the bloc’s “Code of Practice,” which critics say is tantamount to a censorship regime.

The EU’s internal market commissioner, Thierry Breton, confirmed that Twitter had left the bloc’s Code of Practice after reports claimed the platform would do so, and warned that the company would still face legal obligations.

“Twitter leaves EU voluntary Code of Practice against disinformation. But obligations remain. You can run, but you can’t hide,” Breton wrote. “Beyond voluntary commitments, fighting disinformation will be legal obligation under #DSA as of August 25. Our teams will be ready for enforcement.”

An EU official also told Euractiv that the bloc is “waiting for this,” and “it was purely a matter of time” before reports surfaced that Musk would withdraw.

The rules known as the Digital Services Act (DSA) require companies to do risk management, conduct external and independent auditing, share data with authorities and researchers, and adopt a code of conduct by August.

The 19 companies that are subject to the rules include Alphabet’s Google Maps, Google Play, Google Search, Google Shopping, YouTube, Meta’s Facebook and Instagram, Amazon’s Marketplace, Apple’s App Store, and Twitter. The others are Microsoft’s two units LinkedIn and Bing, booking.com, Pinterest, Snap Inc’s Snapchat, TikTok, Wikipedia, Zalando, and Alibaba’s AliExpress.

“We consider these 19 online platforms and search engines have become systematically relevant and have special responsibilities to make the internet safer,” Breton told reporters earlier this year, adding that those companies would have to target so-called disinformation.

Breton said he was checking to see whether another four to five companies fall under the DSA, with a decision expected in the next few weeks. Breton singled out Facebook’s content moderation system for criticism because of its role in building opinions on key issues.

“Now that Facebook has been designated as a very large online platform, Meta needs to carefully investigate the system and fix it where needed ASAP,” he said, adding: “We are also committed to a stress test with TikTok which has expressed also interest. So I look forward to an invitation to Bytedance’s headquarters to understand better the origin of Tiktok.”

Twitter had agreed earlier to a stress test, and Breton said he and his team would travel to the company’s headquarters in San Francisco at the end of June of this year to carry out the voluntary mock exercise. Breton didn’t detail what the test would entail.

There are guardrails for content generated by artificial intelligence, like deep fake videos and synthetic images, which will have to be clearly labeled when they come up in search results, Breton said. He’s also said that under the Digital Services Act, violations could be punished with hefty fines of up to 6 percent of a company’s annual revenue.

Platforms will have to “completely redesign” their systems to ensure a high level of privacy and safety for children, including verifying users’ ages, Breton said.

Big Tech companies also will have to revamp their systems to “prevent algorithmic amplification of disinformation,” he said, adding that he was particularly concerned about Facebook’s content moderation systems ahead of September elections in Slovakia.

Facebook’s parent company said it supports the EU’s new Digital Services Act. “We take significant steps to combat the spread of harmful content on Facebook and Instagram across the EU,” Meta said several weeks ago. “While we do this all year round, we recognize it’s particularly important during elections and times of crisis, such as the ongoing war in Ukraine.”

Criticism

Jacob Mchangama, a Danish historian, sounded the alarm about the Digital Services Act in late 2022, writing in an opinion article that the plan would be a case of the “cure” being “worse than the disease.”

“But when it comes to regulating speech, good intentions do not necessarily result in desirable outcomes,” he wrote for the Los Angeles Times. “In fact, there are strong reasons to believe that the law is a cure worse than the disease, likely to result in serious collateral damage to free expression across the EU and anywhere else legislators try to emulate it.”

Although “removing illegal content sounds innocent enough,” he wrote that “it’s not.” That term—”illegal content”—is “defined very differently across Europe,” he said. “In France, protesters have been fined for depicting President Macron as Hitler, and illegal hate speech may encompass offensive humor” while “Austria and Finland criminalize blasphemy.”
