“You have the means, but time after time you are picking engagement and profit over the health and safety of your users, our nation and our democracy,” Rep. Mike Doyle, a Pennsylvania Democrat who chairs the House Subcommittee on Communications and Technology, said during the hearing. The subcommittee is holding the session, titled “Disinformation Nation: Social Media’s Role in Promoting Extremism and Disinformation,” with the House Subcommittee on Consumer Protection and Commerce.
Twitter chief Jack Dorsey, Google head Sundar Pichai and Facebook leader Mark Zuckerberg are all testifying.
The event comes as lawmakers explore new regulation, including changes to the 1996 Communications Decency Act, a law that shields online companies from liability for content posted by their users. Social media sites have battled an avalanche of online lies, including falsehoods about the recent US presidential election and the deadly assault on the US Capitol in January. Though the sites have stepped up efforts to address the problem, lawmakers, politicians, activists and others aren’t satisfied with the results.
Zuckerberg has expressed support for changing Section 230. He said during the hearing that online platforms should “be required to demonstrate that they have systems in place for identifying unlawful content and removing it” but that they shouldn’t be held liable if a piece of content evades their detection.
Pichai said he had some concerns about changing or repealing the law, noting during the hearing that there could be unintended consequences that make content moderation tougher or that harm free expression.
Facebook, Google and Twitter have all grappled with misinformation and disinformation, a problem that’s only gotten worse during a pandemic, an election season, protests and mass shootings. They’ve also wrestled with extremist content, conspiracy theories and posts that have the potential to fuel real-world violence or harm.
After the Capitol Hill riot, Facebook and Twitter, as well as Google-owned YouTube, all took the rare step of suspending then-President Donald Trump. The bans are still in place, but Facebook’s content oversight board is currently reviewing the company’s decision.
From labeling posts that contain misinformation, to fact-checking, to directing users to more trustworthy sources, Facebook, Twitter and Google say they’ve taken steps to combat the spread of lies. Twitter has been testing a new community-driven forum called Birdwatch that lets users identify misleading tweets.
It’s been unclear, though, how effective their approaches have been in cracking down on the spread of misinformation. The companies have also been trying to fend off allegations that they censor conservative speech, while facing increased pressure to moderate content.
After the 2020 election, Google-owned YouTube was criticized for hosting two videos by One America News, a far-right news organization, that falsely declared victory for Trump. Despite the bogus claims in the videos, YouTube said they didn’t violate the platform’s rules, which focused narrowly on voter suppression. The platform, though, penalized the channel by banning ads on the videos, depriving the network of revenue.