Deepfake Laws in India: Regulating AI-Generated Fake Videos and Images

Introduction

The rise of artificial intelligence (AI) has brought incredible innovations in communication, creativity, and automation. Yet one of its most alarming outcomes is the creation of deepfakes — hyper-realistic fake videos, audio clips, or images that mimic real people. A deepfake can make someone appear to say or do something they never actually did.

In India, deepfakes have led to serious issues ranging from online harassment and fake political propaganda to financial fraud and reputational damage. The question is — how well is Indian law prepared to tackle this new digital threat?

What Are Deepfakes?

The term “deepfake” combines deep learning and fake. It refers to AI-generated synthetic media in which one person’s likeness is replaced with another’s using neural networks and machine learning models.

For example, using just a few seconds of someone’s video or voice, AI tools can create fake clips of them speaking or performing actions. While some deepfakes are harmless fun or used in entertainment, others are malicious — targeting individuals’ dignity, careers, or political image.

How Deepfakes Are Created

Deepfakes are commonly made using AI models called Generative Adversarial Networks (GANs). A GAN consists of two neural networks — a generator (which creates fake content) and a discriminator (which tries to detect whether content is fake). The two networks are trained against each other until the generated content becomes difficult to distinguish from real footage.
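
For readers who want a concrete picture of that generator-versus-discriminator loop, below is a minimal, illustrative sketch in Python using the PyTorch library. It trains a toy GAN on a simple one-dimensional distribution rather than on faces or voices; every name and number in it is our own illustration and does not come from any real deepfake tool.

    # Toy GAN sketch: a generator and a discriminator trained adversarially.
    # Illustrative only -- it learns a 1-D Gaussian, not faces or voices.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Generator: turns random noise into a "fake" sample.
    generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
    # Discriminator: scores how likely a sample is to be real (0 to 1).
    discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

    g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
    loss_fn = nn.BCELoss()

    def real_batch(n=64):
        # "Real" data: samples from a Gaussian centred at 4.0.
        return torch.randn(n, 1) * 0.5 + 4.0

    for step in range(2000):
        # 1. Train the discriminator to tell real samples from generated ones.
        real = real_batch()
        fake = generator(torch.randn(64, 8)).detach()
        d_loss = (loss_fn(discriminator(real), torch.ones(64, 1))
                  + loss_fn(discriminator(fake), torch.zeros(64, 1)))
        d_opt.zero_grad()
        d_loss.backward()
        d_opt.step()

        # 2. Train the generator to fool the discriminator.
        fake = generator(torch.randn(64, 8))
        g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
        g_opt.zero_grad()
        g_loss.backward()
        g_opt.step()

    # After training, generated samples should cluster near the real mean (4.0).
    print(generator(torch.randn(5, 8)).detach().flatten())

Real deepfake systems apply the same adversarial idea to images, video frames, and audio at far larger scale, which is why the output can be so convincing.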

Nowadays, anyone can create deepfakes using easily available mobile apps or websites. This accessibility makes deepfake misuse a growing threat in India, where digital literacy and awareness are still limited.

Legal Issues Arising from Deepfakes

Deepfakes give rise to multiple legal and ethical problems:

  1. Defamation – A fake video showing a person making offensive remarks can destroy their reputation and career.

  2. Right to Privacy – Using someone’s image, face, or voice without consent violates the right to privacy, recognised as a fundamental right under Article 21 by the Supreme Court in Justice K.S. Puttaswamy v. Union of India (2017).

  3. Sexual Harassment & Obscenity – Many deepfakes target women, placing their faces on pornographic content.

  4. Identity Theft & Impersonation – Criminals can impersonate others to commit fraud.

  5. Cybercrime & Political Manipulation – Deepfakes can be used to spread misinformation during elections.

Existing Indian Laws Dealing with Deepfakes

India currently does not have a specific deepfake law, but several provisions from existing statutes can apply:

1. Information Technology Act, 2000

  • Section 66D – Punishes cheating by impersonation using computer resources.

  • Section 67 & 67A – Penalize publishing or transmitting obscene or sexually explicit material.

  • Section 69A – Empowers the government to direct blocking of online content in the interests of the sovereignty and integrity of India, security of the State, public order, and related grounds.

2. Indian Penal Code, 1860 / Bharatiya Nyaya Sanhita, 2023

  • Section 500 (Defamation) – For publishing false and damaging statements.

  • Section 509 (Insult to modesty of a woman) – Applies to fake obscene content.

  • Under the Bharatiya Nyaya Sanhita (BNS), 2023, these offences continue in renumbered form (defamation under Section 356 and insult to the modesty of a woman under Section 79), with language updated for the digital era.

3. Indecent Representation of Women (Prohibition) Act, 1986

  • Prohibits publication or depiction of women in an indecent or derogatory manner, applicable to deepfake pornographic content.

4. Copyright and Personality Rights

If a deepfake misuses someone’s voice or likeness for commercial gain, it may violate intellectual property and right of publicity principles.

Case Studies & Real-Life Incidents

  • In 2023, several Indian actresses and influencers reported fake AI-generated obscene videos circulating on social media.

  • Deepfakes have also been used in political campaigns to spread misinformation, raising concerns about election integrity.

  • Globally, incidents involving fake celebrity videos and AI voice scams have made governments realize the urgent need for legal reforms.

Global Regulations on Deepfakes

Countries worldwide are updating their laws:

  • United States (California, Texas) – Specific laws prohibit malicious deepfakes used for election interference or pornography.

  • European Union – The AI Act requires labeling of synthetic content to ensure transparency.

  • China – Mandates clear disclosure when AI-generated media is published.

India, though progressive in its Digital India vision, is still in the early stages of regulating AI misuse.

Challenges in Enforcing Deepfake Laws

  1. Detection Difficulty – Deepfakes are increasingly realistic, making it hard to prove falsity.

  2. Jurisdiction Issues – Many deepfakes originate from servers outside India.

  3. Technological Gap – Law enforcement agencies lack advanced forensic AI tools.

  4. Speed vs. Law – Deepfakes spread in minutes, but legal action takes weeks or months.

  5. Limited Awareness – Victims often hesitate to report cases due to stigma or lack of understanding.

The Need for a Specific Deepfake Law in India

While existing laws cover certain aspects, India urgently needs specific legislation focusing on:

  • Defining “deepfake” and related offences.

  • Criminalizing creation and dissemination without consent.

  • Establishing strict penalties for malicious use.

  • Requiring disclosure of AI-generated content, such as watermarks or labels (a rough illustration follows this list).

  • Mandating platform accountability for takedown and reporting.
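
As a rough illustration of what a disclosure requirement could look like in practice, the sketch below stamps a visible “AI-generated” label onto an image using the Pillow library in Python. The file names and label text are placeholders of our own; real-world compliance would typically also involve invisible watermarks or provenance metadata rather than a visible caption alone.

    # Illustrative sketch: stamping a visible "AI-generated" disclosure label
    # onto an image with Pillow. File names and label text are placeholders.
    from PIL import Image, ImageDraw

    def label_ai_image(src_path: str, dst_path: str, text: str = "AI-generated content") -> None:
        img = Image.open(src_path).convert("RGB")
        draw = ImageDraw.Draw(img)
        # Place the label in the bottom-left corner on a dark backing box
        # so it stays readable on light images.
        x, y = 10, img.height - 30
        draw.rectangle([x - 5, y - 5, x + 220, y + 20], fill=(0, 0, 0))
        draw.text((x, y), text, fill=(255, 255, 255))
        img.save(dst_path)

    if __name__ == "__main__":
        # Hypothetical input and output files for demonstration.
        label_ai_image("synthetic_clip_frame.png", "synthetic_clip_frame_labelled.png")
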

The Law Commission of India and Ministry of Electronics & IT have begun discussions on AI regulation, indicating progress toward formal recognition of deepfake crimes.

Role of Social Media and Tech Platforms

Under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, platforms such as Facebook, Instagram, and X (formerly Twitter) must act on complaints and remove unlawful content within prescribed timelines. Notably, under Rule 3(2)(b), content that depicts a person in a sexual act or impersonates them through artificially morphed images must be taken down within 24 hours of a complaint.

However, AI-generated media often spreads faster than detection systems can respond. Social media companies must therefore invest in AI-based content verification, improve user reporting systems, and cooperate with law enforcement.

Public Awareness and Ethical Use of AI

Law alone cannot solve the problem — education is key. Users must:

  • Verify before sharing content.

  • Avoid participating in or forwarding deepfake material.

  • Understand that consent and digital ethics apply online just as offline.

Institutions, educators, and lawyers should promote digital literacy, so users can recognize deepfakes and respond responsibly.

Conclusion

Deepfakes represent a dangerous intersection of technology and manipulation. They threaten individual dignity, national security, and democracy itself.

While India’s current legal framework offers partial protection through the IT Act, IPC/BNS, and related laws, there’s an urgent need for a dedicated deepfake regulation. Such a law should define offences, set penalties, and promote responsible AI development.

Until then, awareness, ethical technology use, and quick reporting remain the best defences against this digital menace.
