New draft amendments to the IT Rules would require large platforms to label synthetic media and step up verification, while startups and civil-society groups raise concerns about feasibility and free expression.
Dateline: New Delhi | November 20, 2025
Summary: The Ministry of Electronics and Information Technology (MeitY) has proposed sweeping amendments to India’s IT Rules to combat the growing menace of AI-generated deepfake content. Under the draft, platforms would be required to label synthetic audio, video and text that appears real, embed metadata, and obtain user declarations, marking one of India’s most significant regulatory pushes into generative-AI oversight. While the move has drawn broad support, stakeholders warn of technical, commercial and free-speech challenges ahead.
1. Why now? The deepfake risk reaching critical mass
India is witnessing an escalating tide of synthetic media: videos, audio clips and images generated or altered by AI that convincingly impersonate individuals, distort facts or promote fraud. What began as novelty content has matured into a full-blown threat: impersonations of public figures, manipulated political messaging, audio clones used in financial scams, and AI-generated “synthetic influencers” that blur authenticity.
Government and platform data show an uptick in content flagged for impersonation and misuse. Analysts say India is particularly vulnerable: a vast multilingual digital audience, rapid generative-AI adoption and weaker legacy regulation create fertile ground, which explains the urgency of MeitY’s push.
2. The draft amendments: What’s on the table
The proposed amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 set out measurable duties for intermediaries, especially “significant social media intermediaries” (SSMIs) with 50 lakh (5 million) or more users. Key elements include:
- Definition of “synthetically generated information” – content artificially created, modified or altered in a manner that appears authentic. Platforms must treat it akin to unlawful material under the Rules.
- Mandatory user declaration – at upload time, users must declare whether their content is synthetic or AI-generated.
- Verification obligations – platforms must deploy reasonable technical tools to verify user declarations and detect undeclared synthetic content.
- Labelling requirement – synthetic content must carry a visible watermark or label covering at least 10% of the visual display area (for video/image) or the first 10% of audio duration. The label must be persistent and must not be alterable or removable. (A minimal sizing sketch appears at the end of this section.)
- Safe-harbour and takedown rules – platforms that fail to comply may lose protections under Section 79 of the IT Act, exposing them to greater enforcement risk.
- Consultation timeline – MeitY invited stakeholder feedback until 6 November 2025, with final rules expected in early 2026.
The combination of measurable standards and compliance deadlines signals a major shift: from broad guidelines to enforceable obligations.
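To make the 10% labelling figure concrete, here is a minimal sketch, using the Pillow imaging library in Python, of how a platform might stamp a visible label sized to cover one-tenth of an image’s area. The label wording, banner placement and styling are illustrative assumptions; the draft specifies coverage and persistence, not a particular implementation.

```python
from PIL import Image, ImageDraw  # pip install Pillow

LABEL_TEXT = "SYNTHETICALLY GENERATED"  # illustrative wording, not mandated by the draft

def stamp_label(path_in: str, path_out: str) -> None:
    """Overlay a visible banner covering at least 10% of the image area."""
    img = Image.open(path_in).convert("RGB")
    w, h = img.size

    # A full-width bottom strip of height h/10 has area w * h/10,
    # i.e. exactly 10% of the frame; round up so we never fall short.
    strip_h = max(1, -(-h // 10))  # ceiling of h/10
    draw = ImageDraw.Draw(img)

    # An opaque banner: the draft asks that the label be visible and
    # persistent, not that it use any particular colour or typeface.
    draw.rectangle([(0, h - strip_h), (w, h)], fill=(0, 0, 0))
    draw.text((10, h - strip_h + strip_h // 3), LABEL_TEXT, fill=(255, 255, 255))

    img.save(path_out)

stamp_label("upload.jpg", "upload_labelled.jpg")
```

Burning the banner into the pixels, rather than layering it in metadata, is what lets such a label survive screenshots and format conversion; the harder problem, as later sections note, is keeping machine-readable provenance intact.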
3. Stakeholder reactions: Support, caution, resistance
Broadly speaking, the proposal has drawn three kinds of responses:
- Platform and Big Tech reaction: Major global platforms have acknowledged the draft but emphasised the implementation burden, especially around watermarking, metadata retention and the impact on user rights. They flagged cross-border complexity (content often flows internationally) and voiced concern over constraints on innovation.
- Start-ups and SME voices: Smaller AI-driven companies argue that the proposed duties (10% label coverage, persistent embedded metadata, technical verification) may impose heavy compliance costs. Some fear the rules favour deep-pocketed players and hamper innovation in generative-AI domains.
- Civil society / free-speech concerns: Media labs and rights groups caution that while deepfake harm is real, over-broad labelling rules may chill satire, parody or legitimate AI-assisted creative work. Some emphasise that context matters: a comedic deepfake made for entertainment is different from a malicious one impersonating a public figure. Balanced policy will be key.
As one policy think-tank put it: “India appears to be setting one of the most detailed synthetic-media rule-sets globally.”
4. Technical and commercial challenges behind the compliance curtain
Even if the framework is sound, operationalising it is non-trivial. Platforms must grapple with:
- Watermarking across codecs and platforms – ensuring the label stays embedded even after re-uploading, compression and format conversion.
- Metadata persistence – a machine-readable tag may be stripped when content passes through aggregator platforms or is downloaded and re-uploaded by users. (A minimal persistence check is sketched at the end of this section.)
- Content created outside platforms – if a creator generates an AI video with an external tool and shares it through unmonitored channels, verification is weaker.
- Cost burden on smaller players – building detection pipelines or buying third-party tools can be expensive; some firms argue for a phased rollout or regulatory support.
One startup representative reportedly put it bluntly: “We can label and watermark for a few use-cases, but at scale across billions of uploads and re-uploads, it becomes an operational headache.” Analysts likewise say enforcement will depend on auditability, forensic capability and shared standards.
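The metadata-persistence problem is easy to demonstrate. The sketch below, which assumes the standard ffmpeg and ffprobe command-line tools are installed, embeds a hypothetical declaration tag in a video container, simulates a platform re-encode that strips metadata, and checks whether the tag survives. The tag name is an illustration, not anything the draft prescribes.

```python
import json
import subprocess

TAG = "synthetic"  # hypothetical tag name; the draft does not fix a metadata key

def read_tags(path: str) -> dict:
    """Return container-level metadata tags reported by ffprobe."""
    out = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json", "-show_format", path],
        capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(out).get("format", {}).get("tags", {})

# 1. Embed the declaration without re-encoding the streams.
subprocess.run(["ffmpeg", "-y", "-i", "in.mp4", "-c", "copy",
                "-metadata", f"{TAG}=true", "tagged.mp4"], check=True)

# 2. Simulate a re-upload through a pipeline that drops container metadata
#    (-map_metadata -1), as many transcoding pipelines do.
subprocess.run(["ffmpeg", "-y", "-i", "tagged.mp4", "-map_metadata", "-1",
                "-c:v", "libx264", "-crf", "35", "reuploaded.mp4"], check=True)

# 3. Check whether the declaration survived the round trip.
print("tag after re-encode:", read_tags("reuploaded.mp4").get(TAG, "MISSING"))
```

On such a pipeline the tag comes back MISSING, which is exactly why the draft pairs metadata with a visible, pixel-level label, and why analysts point to shared provenance standards as the longer-term fix.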
5. Global context: How India’s approach stacks up
Globally, governments are scrambling to regulate synthetic media amid deepfake-driven misinformation, election interference and economic fraud. India’s draft stands out for its numerical threshold (10% label coverage), its user-declaration mandate and its choice to amend existing rules rather than wait for a standalone law.
In comparison:
- The European Union’s AI Act requires machine-readable marking of AI-generated content and disclosure of deepfakes, but prescribes no 10% coverage rule.
- China’s deep-synthesis rules already require conspicuous labels on synthesised video, but lean heavily on platform accountability rather than a common metadata standard.
- The U.S. approach remains fragmented, with voluntary guidelines and state-level efforts rather than a nationwide binding framework.
Analysts suggest India’s initiative could shape policy norms across the Global South, especially for multilingual societies grappling with synthetic-media risks.
6. Implications for business, platforms and innovation
The regulation will affect a broad swathe of actors: social-media platforms, creator tools, generative-AI startups, advertisement networks, media houses and monitoring agencies. Key implications include:
- Platforms: Tech giants may need to update workflows, add watermarking and metadata layers, deploy OCR or AI-detection pipelines, and revise terms and conditions to capture synthetic-media declarations.
- Startups and tools: Generative-AI service providers may need to build compliance into their output, for example automatically watermarking AI-generated video or tagging synthetic audio; firms that offer compliant workflows may even turn regulation into a moat. (A minimal declaration sketch appears after this list.)
- Advertising/marketing: Agencies must track whether any content they place is AI-generated and clearly labelled. Unlabelled content risks regulatory action and reputational damage if flagged as deepfake misuse.
- Media and creators: Newsrooms and influencers will need to log whether content is synthetic or real, revise disclosure practices and possibly retrain staff on AI-media ethics and compliance protocols.
- Innovation strategy: While regulation adds cost, clarity also creates opportunity: firms that build tools for watermarking, synthetic-media auditing or compliance could gain early-mover advantage.
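For generative tools, the simplest compliance primitive is a machine-readable declaration attached to every output. The sketch below writes a JSON sidecar that binds a declaration to a file by hash; the field names are illustrative assumptions, and real deployments would more likely adopt an emerging provenance standard such as C2PA than ad-hoc JSON.

```python
import hashlib
import json
from datetime import datetime, timezone

def write_declaration(media_path: str, generator: str) -> str:
    """Write a JSON sidecar declaring the file as synthetically generated.

    Field names are illustrative, not drawn from the draft rules.
    """
    with open(media_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()

    declaration = {
        "synthetically_generated": True,            # the declaration itself
        "generator": generator,                     # tool/model that produced the file
        "sha256": digest,                           # binds the claim to this exact file
        "declared_at": datetime.now(timezone.utc).isoformat(),
    }
    sidecar = media_path + ".declaration.json"
    with open(sidecar, "w") as f:
        json.dump(declaration, f, indent=2)
    return sidecar

print(write_declaration("output.mp4", "example-video-model"))
```

A sidecar like this is trivially separable from the media, which is the persistence problem all over again; its value is as the tool-side record a platform can verify a user declaration against.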
7. Risks and unintended consequences to watch
Policy is never friction-free. The draft rules present both risks and trade-offs:
- Free-speech / creativity risk: If all synthetic content is treated uniformly with malicious deepfakes, satire, transformative art and entertainment could be over-regulated.
- Innovation drag: Startups lacking resources may face compliance burden, potentially disadvantaging emerging players relative to global incumbents.
- Enforcement capacity: The government and platforms will need robust audit and forensic capacity; without it, labels may become symbolic rather than substantive.
- Global coordination: Synthetic media crosses borders; Indian rules may clash with foreign standards or make compliance for cross-border services complex.
- False positives / over-flagging: Automated detection may mislabel legitimate content or wrongly suppress voices, creating fairness and bias concerns.
8. Timeline and next steps
Here is the projected rollout:
- Feedback period – ended 6 November 2025 as per the draft timeline.
- Final notification – expected early 2026 after stakeholder consultations and internal review.
- Implementation pilots – likely in 2026, with full compliance required in phases; platforms may have grace-periods depending on size and risk category.
- Enforcement triggers – once rules are notified, monitoring, audits and takedown orders can begin; platforms risk losing safe-harbour if they fail to comply.
For organisations, the message is clear: now is the time to audit workflows, invest in compliance tooling, revise creator policies and prepare for regulation. Waiting until enforcement begins may expose them to reputational, legal and financial risk.
9. Broader picture: Digital trust, democracy and India’s AI era
This regulation is more than a technical tweak; it touches core issues of digital trust, democracy and information integrity. In India’s landscape, where misinformation spreads fast and digital literacy is uneven, synthetic media poses a unique risk: what if a deepfake video sways voters, or a cloned voice authorises a fraudulent fund transfer?
The rules aim to ensure users know whether what they are seeing and hearing is synthetic or real, empowering them to judge it on that basis. The state recognises that unchecked generative AI could degrade trust, harm institutions and destabilise public discourse.
At the same time, India wants to strike a balance: not stifling generative-AI research, creative expression or business innovation. The policy signals that regulation and innovation need not be at odds, but execution will matter.
10. Final verdict: The regulation moment and what comes next
India’s move to regulate synthetic media comes at a decisive moment. With generative AI now broadly accessible, policy lag can mean deepfake risk becomes systemic. The draft amendments mark a clear shift from reactive takedowns to proactive disclosure and compliance. For platforms, creators and innovators, the horizon is changing.
Will this rule-set succeed? That depends on enforcement integrity, platform cooperation and the speed with which smaller players adapt. There is also a broader institutional question: can India build the forensic arsenal, audit capacity and global coordination needed to make this more than a symbolic gesture?
The opportunity is significant: India can become a global benchmark for synthetic-media governance, especially as a multilingual, large-population democracy. But unless the rule-book translates into action—watermarks, metadata pipelines, audits, sanctions—the danger is that labels become a paper exercise. The rule-book must become living practice.
In short: the state is sending a signal that “synthetic content = real risk”. The building blocks of compliance are being put into place. For the digital economy and society at large, the next 12–18 months will show whether this regulation can deliver on transparency, accountability and trust—or become another regulation awaiting grassroots implementation.
