With digital impersonation and synthetic-media fraud on the rise, the National Commission for Women calls for major legislative reform to plug accountability gaps in India’s criminal law framework
Dateline: New Delhi | November 16, 2025
Summary: The National Commission for Women (NCW) has recommended that India’s criminal statute, the Bharatiya Nyaya Sanhita (BNS), 2023, be amended to include a clear definition of “deep-fake abuse” and provide stringent penalties for malicious use of synthetic media. Advocates say this move is critical because existing laws do not adequately address digital impersonation, voice-cloning scams, and automated misinformation campaigns — especially those targeting women and vulnerable groups.
Advances in artificial intelligence and video-synthesis technology have created a growing spectrum of legal and societal problems in India. Deep-fakes — manipulated audio, video, or image content designed to impersonate or mislead — have moved beyond theatrical and entertainment uses: they are increasingly deployed in online scams, defamation campaigns, political manipulation, and identity fraud.
Recent cases include women executives receiving cloned voice calls allegedly from their CEOs, fake video messages used to extort money, promotional ads made to look like authorised statements, and spoofed social-media posts threatening reputations. Many of these cases remain under-reported or fall through legal cracks because existing statutes weren’t drafted with synthetic-media threats in mind.
NCW’s Recommendations: What’s on the Table
The NCW has formally asked the government to amend the Bharatiya Nyaya Sanhita to include:
- A clear legal definition of “deep-fake abuse” that covers synthetic audio/video created using AI and used to mislead, defame, harass, or commit fraud.
- Enumerated offences specifically targeting production, distribution, and malicious use of synthetic content, including impersonation, defamation, and malicious deception.
- Stringent penalties—including imprisonment and heavy fines—for creators, distributors, and platform operators who fail to act upon notified content.
- Mandatory takedown obligations for platforms hosting manipulated content once notified by authorities.
- Victim-safe procedures including speedy investigation, preservation of evidence, confidential notification of victims, and dedicated cyber-cells for synthetic-media crimes.
Why the Existing Legal Framework Falls Short
Legal scholars point out that while India has robust criminal laws covering impersonation, cheating, defamation, harassment, and identity theft, these are scattered across the Bharatiya Nyaya Sanhita (which replaced the Indian Penal Code in 2024), the Information Technology Act, 2000, and other statutes. None were drafted with AI-generated synthetic media in mind. For instance:
- There is no explicit offence for creating a fake video of a person saying something they never did and distributing it widely.
- Proving synthesis and malicious intent is difficult under older statutes that assume human creation and manual editing, not algorithmic generation.
- Platforms often operate under intermediary liability protections but are not always required to proactively detect or remove deep-fakes.
- Victims may not know their digital likeness has been cloned and reused; investigations often begin late and evidence is lost or untraceable.
The NCW argues that the lacuna in law means many women and vulnerable individuals remain exposed to a “grey zone” of synthetic abuse with little redress.
Gendered Impact: Why Women Are Specifically Concerned
Deep-fake abuse has a uniquely gendered dimension: fake videos or audio targeted at women often carry sexual, reputational or harassment-based intent. Fear of public exposure, social stigma, and slow legal redress discourage victims from filing complaints.
For example, cloned voice calls claiming to come from trusted family members or colleagues have defrauded female professionals into handing over money or personal data. Fake videos threatening exposure or harassment have triggered mental-health crises. The NCW emphasises that such cases demand legal clarity, victim-support mechanisms, and swift justice.
Technology Enters the Courtroom: Evidence, Chain of Custody and Forensics
One of the key challenges in prosecuting synthetic-media crimes is establishing a reliable chain of custody and sound technical evidence. Prosecutors must show how the media was generated, who created it, how it circulated, and how it caused harm. In many cases:
- The original AI-model or generative system is opaque and cannot be traced easily.
- Platforms may not log or retain sufficient metadata for tracing dissemination.
- Timing of discovery may mean the content has already spread far and is replicated across jurisdictions.
The NCW proposes equipping cyber-crime units with dedicated forensic labs for synthetic-media analysis, international cooperation frameworks, and standardised digital evidence protocols.
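The "standardised digital evidence protocols" the NCW calls for typically begin with cryptographic hashing at the point of seizure, so that any later alteration of a media file is detectable. The sketch below is a minimal illustration of that idea in Python; the file name, case ID, officer name, and record fields are hypothetical assumptions, not any format prescribed by Indian law or the NCW.

```python
import hashlib
import json
from datetime import datetime, timezone

def fingerprint_evidence(path: str, case_id: str, officer: str) -> dict:
    """Hash a seized media file and wrap the digest in a timestamped
    custody record; any later alteration changes the digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large video files don't exhaust memory.
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return {
        "case_id": case_id,
        "file": path,
        "sha256": digest.hexdigest(),
        "collected_by": officer,
        "collected_at": datetime.now(timezone.utc).isoformat(),
    }

# A small sample file standing in for a seized clip (illustrative only).
with open("clip.mp4", "wb") as f:
    f.write(b"sample synthetic-media bytes")

record = fingerprint_evidence("clip.mp4", "CASE-2026-001", "Insp. Rao")
print(json.dumps(record, indent=2))
```

Re-hashing the file at each hand-off and comparing digests is what lets a court verify that the exhibit presented is byte-for-byte the one originally seized.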
Platform Responsibility and Intermediary Safeguards
Platforms hosting user-generated content play a pivotal role. While India’s IT laws assign certain responsibilities to intermediaries, they stop short of proactive scrutiny of synthetic media. The NCW’s recommendations include:
- Clear notification and takedown timelines for flagged deep-fake content.
- Mandatory transparency reports from platforms on synthetic-media incidents and action taken.
- Penal liability for platforms that fail to take timely action after notification.
Industry insiders suggest that without a statutory duty, many smaller platforms lack clarity on obligations, leading to inconsistent responses.
Comparative Global Approaches: What India Can Learn
Several countries have already begun addressing synthetic-media concerns. For example:
- In the United States, several states have enacted penalties for deep-fakes used in elections, and federal bills targeting synthetic-media fraud and electoral interference have been proposed.
- The European Union’s Digital Services Act includes obligations for platforms to act against manipulated media and ensure transparency.
- Singapore and Australia are drafting bills targeting manipulated audio and video used to defraud or impersonate individuals.
The NCW argues that India must craft a legal answer reflecting its unique context: digital scale, multilingual content, a largely mobile-first user base, and vulnerable populations, rather than simply transplanting overseas laws.
Implementation Roadmap: What Comes Next
The NCW recommends a phased implementation roadmap:
- Draft amendment to the BNS 2023 to define “deep-fake abuse” and prescribe minimum punishments by Q1 2026.
- Issue platform-compliance guidelines through the Ministry of Electronics & IT within six months.
- Set up dedicated synthetic-media forensic labs in major metropolitan cities by end of 2026.
- Launch public-awareness campaigns focused on women’s safety, digital impersonation and deep-fake risks.
- Review and update training modules for cyber-crime investigators and prosecutors.
The Commission also wants periodic review of the law’s effectiveness and a stakeholder committee including victim-groups, tech providers and law-enforcement agencies.
Challenges Ahead: Where the Road Gets Rough
Critics warn that adding new offences alone will not solve the deeper problem of detection, policing capacity, and social awareness. Some of the tough issues include:
- Ensuring rural jurisdictions with limited cyber-capability are not left behind.
- Balancing free speech concerns with regulatory enforcement in a democracy with hundreds of digital media outlets.
- Determining how to handle generative-AI models hosted off-shore or beyond Indian jurisdiction.
- Preventing overreach and chilling effect on legitimate content creation and debate.
The NCW notes that law-making alone is insufficient: it must be complemented by capacity-building, vigilance and outreach.
The Bigger Picture: Digital Safety Meets Gender Justice
By framing synthetic-media abuse as both a digital-crime issue and a gender-justice issue, the NCW is signalling an important shift. Women’s safety in physical spaces has long been part of policy narratives; the new frontier is digital impersonation, deep-fake harassment, voice-clone scams and online defamation.
The Commission emphasises that protecting the digital footprint and identity of women-users is as critical as ensuring physical safety. It calls for systemic attention to the intersection of technology, law and gender.
Conclusion: Bridging the Legal Gap or Running Behind Innovation?
India’s digital ecosystem is racing ahead. Generative AI, synthetic content creation, and online impersonation are no longer futuristic issues — they are everyday threats. The NCW’s call to amend the BNS 2023 to cover deep-fake abuse is a timely wake-up call. But the real measure of success will be how swiftly, fairly, and effectively the system responds when the first case is tried under the amended statute.
As law-makers, tech firms and civil-society groups engage in the coming months, India has a chance to set a benchmark for other large democracies facing similar challenges. If it falters, the legal gap could widen while synthetic threats accelerate.