Facing allegations of bias in its handling of hate speech, Facebook on Friday said it is an open, transparent and non-partisan platform, and will continue to remove content posted by public figures in India that violates its community standards.
The statement from Facebook India head Ajit Mohan comes against the backdrop of a political row following a Wall Street Journal report that alleged that the social media platform's content policies favoured the ruling party in India.
"Facebook is and always has been an open, transparent and non-partisan platform where people can express themselves freely. Over the last few days, we have been accused of bias in the way we enforce our policies.
"We take allegations of bias incredibly seriously, and want to make it clear that we denounce hate and bigotry in any form," Facebook India Vice President and Managing Director Ajit Mohan said in a blog post.
He also emphasised that the company has an impartial approach to dealing with content and is strongly governed by its Community Standards.
These standards outline what is and is not allowed on Facebook.
Meanwhile, Facebook's Oversight Board has said that examining how the social media giant treats posts from public figures that may violate community standards falls within its scope, and that such posts are the type of "highly challenging cases" it expects to consider once it starts functioning.
Asserting that Facebook's commitment to India is unwavering, Mohan in the blog post said the company's aim is to be an "ally" for India where its platforms preserve the "pluralistic character of a democracy by offering the freedom for people to express themselves and for entrepreneurs to build new things while also protecting society from broader harm".
"Our Community Standards define what stays on our platform and are enforced globally... We enforce these policies globally without regard to anyone's political position, party affiliation or religious and cultural belief. We have removed and will continue to remove content posted by public figures in India when it violates our Community Standards," he said.
In the wake of the WSJ report, the Parliamentary Standing Committee on Information Technology has summoned Facebook on September 2 to discuss the issue of alleged misuse of the social media platform.
On Friday, Mohan also said that many questions have been raised specifically about the enforcement of Facebook's policies on hate speech, and pointed out that the company has made "significant progress" in removing hate speech and other harmful content over the past few years.
Citing latest numbers with respect to enforcement, Mohan said the company had removed 22.5 million pieces of hate speech content in the second quarter of 2020, up from 1.6 million pieces of hate speech removed in the last quarter of 2017.
He also admitted that while the platform has made progress in tackling hate speech, it needs to do more, and welcomed the opportunity to engage with all parties -- political or otherwise -- who want to better understand its content policies and their enforcement.
Mohan noted that the policies are "ever evolving" to take into account local sensitivities, especially in a multicultural society such as India.
His blog post also comes at a time when even employees of Facebook are questioning the social media platform's handling of hate speech and political content, and whether the policies were circumvented by company executives in India.
With 300 million users, India is among Facebook's largest markets.
Multiple sources said employees have been raising questions around the issue in various groups on Facebook's internal network, which resembles the company's consumer-facing social media product.
These include discussions around the WSJ article, questions around the company's stance on the issue and whether rules were indeed bent in this matter, they said.
At the company townhall on Wednesday, most questions revolved around this matter. Sources said employees largely wanted to know Facebook's position on how political content is handled in India and whether the policies laid out around content regulation were being followed.
On the Community Standards, Mohan said they contain clear and very detailed policies against hate speech that prohibit attacks on people on the basis of protected characteristics, including religion, ethnicity, caste and national origin.
According to him, these policies are not developed in isolation; the company relies on the expertise of both internal teams and external voices.
This includes its community, experts and organisations outside of Facebook such as academics, safety and human rights NGOs, and activists "to make sure we understand different perspectives on safety and expression, as well as the impact of our policies on different communities", he added.
"We have an impartial approach to dealing with content and are strongly governed by our Community Standards... The decisions around content escalations are not made unilaterally by just one person; rather, they are inclusive of views from different teams and disciplines within the company. The process comes with robust checks and balances built in to ensure that the policies are implemented as they are intended to be and take into consideration applicable local laws," he added.
Facebook also solicits input from cross-functional teams internal to the company when it is making decisions about individual designations.