Facebook announced new steps to combat misinformation and voter suppression on Monday ahead of the November 2020 U.S. presidential election, on the same day it disclosed the removal of a network of Russian accounts targeting U.S. voters on Instagram.
Facebook said it would increase transparency through measures such as showing more information about the confirmed owner of a Facebook page so users can better understand who is behind a page’s content.
The company said it would start labelling state-controlled media outlets on their pages and in the site’s ad library. Facebook, Twitter <TWTR.N> and YouTube, the video-streaming service of Alphabet’s <GOOGL.O> Google, all recently came under scrutiny after showing ads from Chinese state-controlled media that criticized Hong Kong protesters.
Facebook also said it would start more prominently labelling content that independent fact-checkers have marked as false on the platform and on its photo-sharing site Instagram.
The social media giant has come under fire in recent weeks over its policy of exempting ads run by politicians from fact-checking, drawing ire from Democratic presidential candidates Joe Biden and Elizabeth Warren.
Facebook said it would be putting into effect its planned ban on paid ads that tell people in the United States not to vote. Facebook CEO Mark Zuckerberg told reporters on a call on Monday that the ban on voter misinformation would also apply to ads run by politicians.
Zuckerberg told reporters that Facebook would introduce a new tracker showing U.S. presidential candidates’ spending on political advertising, as part of its efforts to make its ad library easier to use.
The company said it would be stepping up protection of the Facebook and Instagram accounts of candidates, elected officials and their teams through a program called Facebook Protect. Participants in the program will be required to turn on two-factor authentication, and their accounts will be monitored for signs of hacking.