Britain just announced plans for a tough new law aimed at forcing tech giants like Facebook and Twitter to clean up their platforms.

The proposals, unveiled in a white paper on Monday, would impose a legally binding duty of care on social networks to make sure they tackle harmful content.

A new industry-funded regulator would be introduced under the proposals. It could have the power to slap internet firms with heavy fines, block people’s access to websites and potentially hold executives personally liable for violations.

U.K. Culture Secretary Jeremy Wright said the new policy would put an end to the “era of self-regulation for online companies.”

“Voluntary actions from industry to tackle online harms have not been applied consistently or gone far enough,” he said in a statement Monday.

The new rules would require social media companies to make sure their users are protected from harmful content.

That includes:

  • Child abuse
  • Terrorist content
  • Cyber bullying and trolling
  • Encouraging self-harm and suicide
  • Disinformation

Those types of content fall into two key categories defined by the government: content that is illegal, such as terrorist material and child abuse, and content that is harmful but not necessarily illegal, such as cyber bullying and disinformation.

But critics argue the proposals could lead to censorship.

Jim Killock, executive director of pro-internet freedom organization Open Rights Group, says those definitions create a “potential for misjudgment” that could result in the takedown of posts that aren’t actually harmful.

“This has potential for a lot of overreaction, incentivizing removal of a lot of legal content,” he told CNBC. “There is a complete absence of discussion about how and why free expression gets entangled with any of those issues.”

For its part, the U.K. government argues that new regulation is needed to keep internet users — minors in particular — safe.

“For too long these companies have not done enough to protect users, especially children and young people, from harmful content,” British Prime Minister Theresa May said in a statement Monday.

“Online companies must start taking responsibility for their platforms, and help restore public trust in this technology,” the U.K. leader added.

Britain also says the new laws won’t infringe on privacy or freedom of expression.

The new rules could have significant ramifications for tech companies, potentially requiring them to alter their platforms in order to comply.

Big Tech has been under heavy scrutiny of late after a video of last month’s attack on two mosques in New Zealand was shared repeatedly on a number of sites.

The video was initially livestreamed on Facebook and then shared via Twitter, YouTube and other platforms. Facebook said it was viewed 4,000 times before being removed.

In the U.K. specifically, social networks have come under political pressure following the death of teen Molly Russell, who committed suicide in 2017 after viewing distressing material about self-harm and suicide on Instagram.

The Facebook-owned photo-sharing app subsequently said it would ban all graphic self-harm images.

Facebook says it’s already made a number of changes aimed at removing harmful content and that it’s being more transparent about how it enforces its policies on such content.

“New rules for the internet should protect society from harm while also supporting innovation, the digital economy and freedom of speech,” Rebecca Stimson, Facebook’s head of U.K. public policy, said in a statement Monday.

“These are complex issues to get right and we look forward to working with the Government and Parliament to ensure new regulations are effective.”

Twitter, meanwhile, says it’s been an “active participant” in discussions between the tech industry and the government on online safety.

“We are already deeply committed to prioritizing the safety of our users, as evidenced by the introduction of over 70 changes to our policies and processes last year to improve the health and safety of the public conversation online,” Katy Minshall, head of public policy for Twitter U.K., said in a statement.

“We look forward to engaging in the next steps of the process, and working to strike an appropriate balance between keeping users safe and preserving the open, free nature of the internet.”