
Deepfakes And The Threat To 2024 Elections

By Ajay Kumar
March 11, 2024 14:46 IST

If left unchecked, the 2024 elections may be infamously remembered as the election of deepfakes.
The ECI has a tough task ahead, but, more importantly, it has an opportunity to become a global model for scores of countries going to polls this year, notes Ajay Kumar.

Illustration: Dominic Xavier/Rediff.com

India, the world's largest democracy, is gearing up for the upcoming general elections.

Over the past 75 years, the Indian election machinery has built a formidable reputation, consistently conducting the elections that have returned governments at the state and central levels.

Despite evolving challenges, the Election Commission of India (ECI) has demonstrated resilience through innovative measures such as elector photo identity cards, electronic voting machines, the voter verifiable paper audit trail (VVPAT), and the institution of election observers and expenditure observers.

It has also leveraged social media for voter awareness.

 

The 2024 general elections present the ECI with another daunting challenge -- the pervasive threat of deepfakes.

This challenge holds profound implications as it undermines the foundational basis of decision-making, erasing the line between fact and fiction.

The emergence of generative adversarial networks (GANs), a class of generative AI, facilitates the rapid generation of deepfakes in real time, seamlessly embedding them into reality to deceive even a discerning viewer.

The accessibility of GAN-based tools as user-friendly Software-as-a-Service offerings, available online for a few dollars, further compounds the issue.
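
To make the mechanism concrete, the sketch below shows the adversarial training loop that gives GANs their name: a generator learns to produce samples that a discriminator can no longer tell apart from real ones. The tiny PyTorch networks and random data here are placeholders for illustration, not a deepfake pipeline.

```python
# A minimal sketch of GAN adversarial training (PyTorch).
# The small MLPs and random "real" data are stand-ins; real deepfake
# pipelines use far larger video and audio models.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64

generator = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
discriminator = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(200):
    real = torch.randn(32, data_dim)       # stand-in for real media samples
    noise = torch.randn(32, latent_dim)
    fake = generator(noise)

    # Discriminator learns to tell real samples from generated ones.
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator learns to fool the discriminator.
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```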

The looming danger is a potential inundation of deepfake content, crafted by multiple creators, overshadowing factual information in practically every constituency.

Deepfake videos featuring leaders like Prime Minister Narendra Modi and US President Joe Biden underscore the severity of the problem.

In recent years, the weaponisation of social media has evolved, with misinformation so far relying largely on textual or photoshopped elements.

The emergence of deepfake audio-visual content significantly amplifies the potential for manipulation.

Deepfake videos engaging multiple senses grab attention, evoke emotions, and exaggerate confirmation bias, intensifying the weaponisation of social media.

In the context of elections, deepfake content poses a serious challenge to the integrity of the electoral process.

Digital mercenaries crafting deepfakes evoke memories of booth-capturing gangs, yet are more perilous.

Their menu includes creating context-specific fake videos for diverse agendas. Buyers range from political adversaries to foreign powers with hostile motives.

Deepfakes facilitate interference in the electoral process by foreign powers, especially those that would not want to see India elect a strong government, endangering electoral sovereignty.

Technological interventions for preserving content integrity, such as watermarking, blockchain or cryptography, offer little protection here: they can certify authentic content, but they cannot stop a fabricated video from shaping voters' perceptions.
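
The gap is easy to see in a minimal sketch: a cryptographic signature (here a hypothetical HMAC key standing in for watermarking or provenance schemes) can only certify content that a cooperating publisher has signed; a deepfake simply never carries the mark, yet circulates all the same.

```python
# A minimal sketch, assuming a hypothetical publisher signing key, of why
# provenance tools only prove what *is* authentic: unsigned fabricated content
# fails verification but can still spread before anyone checks.
import hmac, hashlib

PUBLISHER_KEY = b"hypothetical-campaign-signing-key"

def sign(content: bytes) -> str:
    return hmac.new(PUBLISHER_KEY, content, hashlib.sha256).hexdigest()

def is_authentic(content: bytes, signature: str) -> bool:
    return hmac.compare_digest(sign(content), signature)

official_clip = b"...official campaign video bytes..."
tag = sign(official_clip)
print(is_authentic(official_clip, tag))            # True: provenance verified
print(is_authentic(b"...deepfake bytes...", ""))   # False, but the fake circulates anyway
```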

In the hyperactive atmosphere of political campaigning, where each piece of information has a fleeting lifespan, virality takes precedence and authenticity is often overlooked.

Virality exploits human cognitive vulnerabilities, favouring sensational deepfakes over authentic news.

Like junk food, junk news is often preferred. It is crucial to swiftly identify and remove deepfakes before they go viral, and to take penal action against perpetrators.

While there may not be perfect answers, a holistic approach involving technology, regulation and governance can minimise the impact of deepfakes on elections.

An effective solution involves continuously monitoring social media content, promptly detecting and removing both fakes and deepfakes in real time.

While AI capabilities exist to identify anomalies in audio-visual content, the challenge lies in executing this process in real time on the vast volumes of data generated by social media.

A pragmatic approach may concentrate on posts trending towards virality.

The ECI may set thresholds to identify potentially viral posts.
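
As a rough illustration of such triage, the sketch below flags only audio-visual posts whose engagement crosses illustrative thresholds; the field names and cut-offs are assumptions, not an ECI or platform standard.

```python
# A minimal sketch of threshold-based triage for deepfake screening.
# PostMetrics and the cut-offs are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class PostMetrics:
    post_id: str
    shares_last_hour: int
    views_last_hour: int
    has_audio_visual: bool

SHARE_THRESHOLD = 5_000     # illustrative cut-offs for "trending towards virality"
VIEW_THRESHOLD = 100_000

def needs_deepfake_screening(p: PostMetrics) -> bool:
    """Flag only audio-visual posts that are trending, so scarce real-time
    detection capacity is spent where a fake would do the most damage."""
    trending = p.shares_last_hour >= SHARE_THRESHOLD or p.views_last_hour >= VIEW_THRESHOLD
    return p.has_audio_visual and trending

feed = [
    PostMetrics("a1", shares_last_hour=12_000, views_last_hour=300_000, has_audio_visual=True),
    PostMetrics("a2", shares_last_hour=40, views_last_hour=900, has_audio_visual=True),
]
print([p.post_id for p in feed if needs_deepfake_screening(p)])  # ['a1']
```

Restricting screening to trending audio-visual posts keeps the real-time workload tractable, which is the crux of the scaling problem noted above.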

Collaborating with a leading IIT could expedite the implementation of this strategy before elections.

The ECI could also explore collaboration with Intel regarding the deployment of its FakeCatcher technology, capable of identifying tampered videos in real time using photoplethysmography.
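
The sketch below illustrates the photoplethysmography idea in its simplest form: real faces show subtle, periodic colour changes from blood flow, so a clip whose face region lacks a plausible heart-rate rhythm invites suspicion. This is a conceptual illustration only, not Intel's implementation; the fixed crop, thresholds and file name are assumptions.

```python
# A conceptual sketch of remote photoplethysmography (rPPG) for screening video.
# Not Intel's FakeCatcher; the crude central crop stands in for face detection.
import cv2
import numpy as np

def mean_green_signal(video_path: str, max_frames: int = 300):
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    values = []
    while len(values) < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        h, w = frame.shape[:2]
        roi = frame[h // 4: 3 * h // 4, w // 4: 3 * w // 4]  # stand-in for a detected face region
        values.append(roi[:, :, 1].mean())                   # green channel carries most of the pulse signal
    cap.release()
    return np.array(values), fps

def has_plausible_pulse(signal: np.ndarray, fps: float) -> bool:
    """Heuristic: the dominant frequency should fall in a human heart-rate band (42-180 bpm)."""
    if len(signal) < int(fps * 5):          # need a few seconds of footage
        return False
    detrended = signal - signal.mean()
    spectrum = np.abs(np.fft.rfft(detrended))
    freqs = np.fft.rfftfreq(len(detrended), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 3.0)
    return band.any() and spectrum[band].max() > 2 * spectrum[~band][1:].mean()

signal, fps = mean_green_signal("clip.mp4")  # placeholder path
print("plausible pulse" if has_plausible_pulse(signal, fps) else "no plausible pulse")
```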

Filling regulatory gaps is crucial to fortify the fight against deepfakes, especially in the context of elections.

The April 2023 amendment to the IT rules empowers the central government to instruct social media platforms and intermediaries to remove deepfakes or objectionable content.

Concerns may arise from the potential advantage this gives to the party in power in the context of elections.

Currently, if the ECI identifies deepfakes, it requests the Union government for a takedown, and compliance is obligatory.

However, the absence of defined timelines for compliance provides leeway for delays, which, in the fast-paced realm of social media, can result in viral spread of spurious content.

This favours the ruling party, which can potentially time the takedown to suit its political gains.

Amending the IT rules to grant the ECI the power to directly instruct intermediaries aligns with the ECI's overall mandate of superintendence, direction and control of the election process under Article 324 of the Constitution.

Another crucial aspect of regulatory reform involves establishing accountability for creators of deepfakes and spoofed content.

The current IT rules focus on content takedown but lack penalties for crafting deepfakes.

The Bharatiya Nyaya Sanhita holds deepfake creators liable if mens rea is established, placing the burden of proof on the prosecution.

Given the speed and scale of these incidents, proving mens rea is challenging.

Allowing creative and scientific freedom to modify original content, while requiring creators to self-declare the alterations made, with criminal intent presumed in case of non-compliance, can help restrain the free run of deepfake creators.

Additionally, the law should have extra-territorial jurisdiction for entities outside India targeting Indian interests.

A challenge lies in the fact that most deepfakes are uploaded from abroad by masked entities. Work on a global alliance in this regard has started.

But given India's geopolitical context, expecting cooperation from nations that are most likely to undertake such activities is unrealistic.

A more practical solution involves collaborating with global platforms to swiftly share information regarding uploads, assist in investigations, and quickly remove objectionable content.

Another, more brute-force, option could be to restrict the availability of GAN tools until the general elections are over.

Deepfakes pose a monumental threat to the 2024 general elections, and comprehensive solutions may evolve only by the next elections.

If left unchecked, the 2024 elections may be infamously remembered as the election of deepfakes.

The ECI has a tough task ahead, but, more importantly, it has an opportunity to become a global model for scores of countries going to polls this year.

Ajay Kumar is a former defence secretary and distinguished visiting professor at IIT Kanpur.

Feature Presentation: Aslam Hunani/Rediff.com
