Election season has always been a period of hectic activity – and in the digital era, it has attracted waves of fake news and dubious propaganda on social media platforms. As a result, governments have intensified their scrutiny of posts and pages on these platforms. Facebook, the social media company with over 2 billion users, has repeatedly drawn the spotlight over such shady activities on its platform.

Some of the steps that the social network claims to have taken are:

Indonesia

Online platforms in Indonesia have become battlegrounds between supporters of President Jokowi and those of challenger Prabowo Subianto in a fight to attract voters. According to a report by the Indonesian Internet Providers Association, the country has 143.26 million internet users, 87% of whom use social media. Despite laws against creating and spreading fake news, such content can rack up thousands of views within a few hours in Indonesia.

With over 100 million Facebook accounts, Indonesia is the third-largest market for the social network. Facebook claims it has stepped up efforts so that, during the upcoming elections in mid-April, ads, posts or pages will not disrupt public conversations about the elections.

Ahead of the election, Facebook has temporarily disallowed electoral ads purchased from outside Indonesia. Katie Harbath, Public Policy Director, Global Elections and Ruben Hattari, Head of Public Policy, Indonesia, in a blog post said, “We’ve taken steps to provide more information about any ad a Page is running in the Page’s “Info and Ads” section. This includes electoral ads. People can also report an ad by tapping the three dots in the top right corner and selecting ‘Report Ad.’”

EU

For the May 2019 European Parliament elections, the European Commission has strengthened its battle against the circulation of fake news. The European External Action Service (EEAS) has allocated a budget of 5 million euros for 2019 elections.

Since Google, Twitter, and Facebook have made differing, non-binding commitments, experts believe there is a need to establish a code of ethical conduct with which platforms must comply. Currently, it is up to the tech companies themselves to take the necessary steps to tackle fake accounts and bots.

Because Facebook was flooded with Kremlin-linked trolls during previous elections, regulators are watching the platform closely. Anika Geisel, Public Policy Elections, Europe said in a blog, “Learning from every election over the last two years, we have increased our capabilities to take down fake accounts, reduce false news, increase ads transparency, disrupt bad actors and support an informed and engaged electorate.”

India

As the first phase of elections in the world’s biggest democracy begins, the social media platform is also trying to prove that it can act responsibly. In India, the ruling Bhartiya Janata Party (BJP) and the opposition Congress party have both been accused of running misinformation campaigns on social media platforms. Facebook has partnered with fact-checking companies and increased efforts to block fake accounts. Ahead of the Indian elections, Facebook claimed in a blog post to have removed over 687 accounts, groups and pages in India for ‘coordinated inauthentic behavior.’

In a separate post, Ajit Mohan, Facebook’s MD and VP for India, said, “Promoting election integrity in India isn’t something Facebook can do alone. We recently joined other social media companies in a voluntary code of ethics for the general elections with the Election Commission of India (ECI).” He also mentioned that the company has ‘gotten better at using artificial intelligence and machine learning to fight interference.’

Facebook also claims to have expanded its partnerships with third-party fact-checkers to cover multiple Indian languages, including Hindi, English, Marathi, Bengali, Tamil, Telugu, Gujarati and Malayalam.

US

Ahead of next year’s elections in the US, Facebook has joined hands with the Associated Press as part of a third-party fact-checking program. Facebook has said it will reduce the reach of Groups that repeatedly share misinformation, such as anti-vaccine content, and hold Group administrators more accountable for content that violates its standards.

Guy Rosen, Facebook’s vice president of integrity, and Tessa Lyons, head of news feed integrity, wrote in a blog, “There simply aren’t enough professional fact-checkers worldwide and, like all good journalism, fact-checking takes time. We’re going to build on those explorations, continuing to consult a wide range of academics, fact-checking experts, journalists, survey researchers, and civil society organizations to understand the benefits and risks of ideas like this.”

The company claims to have removed 45,000 pieces of voter-suppression content during the 2018 elections, 90% of which was detected before users reported it.

After being criticized for not quickly removing the live-streamed video of the mass shooting in New Zealand, the social network claims to have updated its policies and enforcement efforts – yet content that violates the company’s standards persists. The upcoming elections will be a test of how effective these new efforts are at keeping the ill effects of social media misuse in check.