Delhi High Court Orders Google to Remove Deepfake Ads Targeting Sadhguru; Raps Delay in Takedown

In a strong move against synthetic media abuse, the Delhi High Court has directed Google and YouTube to take down all deepfake ads and videos misusing the image and voice of spiritual leader Sadhguru (Jaggi Vasudev). The court criticised the delay in content removal despite formal complaints and reinforced that online platforms have a legal duty to curb misinformation and identity misuse under India’s evolving digital laws.


New Delhi, October 22 —
In a ruling that could redefine India’s approach to AI-generated misinformation, the Delhi High Court has ordered Google and its subsidiary YouTube to immediately remove deceptive deepfake advertisements that used the likeness of Sadhguru to promote spurious products and political messages.

Justice Sanjeev Narula, presiding over the bench, observed that the videos — some portraying the spiritual leader as endorsing crypto investments or criticising rival faiths — were “manifestly false, malicious, and injurious to reputation.”

“Digital liberty cannot become a licence to destroy trust. When technology impersonates humans, it crosses into fraud,”
the court said, issuing directions for takedown “within 24 hours.”


The Case That Triggered the Verdict

The petition was filed by the Isha Foundation, the NGO led by Sadhguru, after multiple fake videos and social-media ads surfaced in recent months. The clips, generated with AI-based facial mapping and voice-cloning software, allegedly misrepresented Sadhguru endorsing commercial products and political views.

Despite repeated notices to Google and YouTube, the Foundation claimed the content remained live for weeks, gaining millions of views and shares across platforms. The petition sought immediate removal, damages, and preventive algorithmic safeguards.

“We are dealing with industrial-scale character assassination,” argued senior advocate Rajiv Nayar for the petitioner. “Such videos erode public trust in voices of social leadership.”


The Court’s Observations

Justice Narula pulled no punches. Calling deepfakes “digital terrorism in slow motion,” the court noted that large platforms profit from engagement metrics but often respond sluggishly to reputational harm.

“A delay of even a few days in removing manipulated content multiplies its reach geometrically,”
the judge said.

The bench held that intermediaries cannot hide behind the shield of neutrality when their own algorithms amplify defamatory or fraudulent material.


The Legal Framework: Intermediary Responsibility

India’s existing Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 require platforms to:

  • act on takedown notices within 36 hours,
  • appoint grievance officers in India, and
  • remove content violating “decency, morality, or impersonation.”

While the rules pre-date generative AI, the judgment connects them to synthetic media.

“AI-generated deepfakes fall squarely under impersonation and deception clauses,” the court clarified.

The ruling aligns with the upcoming Digital India Bill, which aims to replace the outdated IT Act, 2000, and introduce explicit accountability for AI systems that spread misinformation or enable identity theft.


Platforms Under the Lens

Google’s counsel admitted that while certain videos had been removed, “automatic re-uploads and mirrors” posed challenges. The company said it uses machine-learning tools to detect manipulated media but conceded “false negatives are inevitable.”

The court, however, was unimpressed.

“If you can target users with ads in seconds, you can also remove harmful fakes in seconds,” Justice Narula remarked dryly.

Legal experts note the order increases pressure on global tech firms operating in India to localise compliance and enhance transparency.


The Rise of Deepfakes in India

The case marks the first major judicial step against deepfakes in 2025, amid a surge of AI-manipulated content — from celebrity endorsements to political propaganda.

Recent victims include:

  • Actor Rashmika Mandanna, whose morphed videos circulated widely before being traced to foreign servers.
  • Cricketer Virat Kohli, shown endorsing fake betting apps.
  • Women journalists, targeted with fabricated compromising visuals.

Cyber-forensics firm Logically AI estimates that deepfake content in India has grown 900% since 2022, thanks to open-source tools capable of cloning voices and faces in minutes.


Why It Matters

India’s digital audience exceeds 850 million users — fertile ground for both innovation and abuse. Deepfakes undermine not only individuals but also democratic processes.

“We’re entering an era where seeing is no longer believing,” warned Prof. Sunil Abraham, digital policy scholar. “Without strong legal deterrents, truth itself becomes negotiable.”

The Sadhguru case, therefore, is symbolic — a collision between freedom of expression, faith, and technology’s dark creativity.


Public Reaction

Within hours of the verdict, #DeepfakeJustice trended on X (formerly Twitter). Followers of Sadhguru hailed it as a moral victory, while digital-rights activists called it a “necessary wake-up call.”

“Freedom of speech doesn’t mean freedom to fabricate,” read one viral post.

YouTube comments under official Isha Foundation videos were flooded with relief and thanks: “Finally the court stepped in.”


A Global Problem

The phenomenon is not uniquely Indian. In the U.S., the Federal Communications Commission (FCC) has moved against AI-generated voices in robocalls. In Europe, the EU AI Act imposes mandatory disclosure and labelling obligations on deepfakes.

The Delhi High Court’s order thus situates India within a global conversation on AI ethics and accountability.

“This judgment is part of a worldwide judicial awakening,” said Adv. Aparna Sen, cyber-law expert. “Courts are learning to read the language of algorithms.”


The Future Legal Landscape

India’s forthcoming Digital India Bill—expected to be tabled in the Winter Session of Parliament—will likely codify responsibilities for AI developers, content hosts, and data brokers. The bill’s draft, seen by Sarhind Times, includes provisions for:

  • watermarking AI-generated visuals,
  • compulsory origin disclosure for synthetic media,
  • fines up to ₹500 crore for negligence leading to harm,
  • criminal liability for malicious impersonation.

The Sadhguru ruling may serve as judicial precedent reinforcing these principles even before legislative approval.


Philosophical Undercurrent: Faith vs. Fabrication

For followers of Sadhguru, the deepfakes represented more than defamation — they struck at spiritual authenticity itself.

“People look to teachers for guidance. When AI hijacks that trust, it poisons faith,” said Meenakshi Rao, an Isha volunteer from Bengaluru.

Religious institutions across faiths are now reviewing digital safeguards — watermarking official videos, archiving originals, and coordinating with cybercrime cells.


Inside the Courtroom

Proceedings began last month after Isha Foundation filed a civil suit under Sections 79 and 81 of the IT Act, along with defamation provisions of the IPC. During arguments, petitioner counsel displayed side-by-side comparisons of the genuine and fake videos — the difference almost imperceptible.

The courtroom watched silently as the AI clone’s lips moved in perfect sync, voicing words the real Sadhguru never uttered.

“That moment shifted the atmosphere,” a reporter present told Sarhind Times. “Even the judge seemed startled by how real it looked.”


The Order in Detail

  • Google and YouTube must remove all identified deepfake content within 24 hours.
  • Any future uploads using the same templates must be automatically flagged.
  • Platforms must share digital-fingerprint data of such files with the Delhi Police Cyber Cell.
  • The Ministry of Electronics & IT (MeitY) is to submit a report on deepfake-tracking mechanisms within four weeks.

The court also hinted that non-compliance could invite contempt proceedings.
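The order does not specify what form the “digital-fingerprint data” shared with the Cyber Cell must take, but in practice such fingerprints are content hashes. A minimal sketch, assuming nothing about the formats actually used: a cryptographic hash identifies byte-identical re-uploads, while a perceptual hash (here a toy 8×8 average hash) tolerates mild re-encoding and is compared via Hamming distance.

```python
import hashlib

def exact_fingerprint(data: bytes) -> str:
    # Cryptographic hash: catches only byte-identical re-uploads.
    return hashlib.sha256(data).hexdigest()

def average_hash(pixels) -> int:
    # Toy perceptual hash over an 8x8 grayscale frame (values 0-255):
    # each bit is 1 where a pixel is brighter than the frame's mean,
    # so mild re-encoding flips few bits.
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = ''.join('1' if p > mean else '0' for p in flat)
    return int(bits, 2)

def hamming(a: int, b: int) -> int:
    # Small distance => likely the same frame, re-encoded or mirrored.
    return bin(a ^ b).count('1')
```

Real systems (e.g. industry hash-sharing databases) use far more robust perceptual hashes over video keyframes; the principle of matching near-duplicates by hash distance is the same.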


Industry Implications

The verdict sets a powerful precedent. Tech companies must now show demonstrable capability to prevent the spread of manipulated content or risk legal sanctions.

“Algorithmic accountability is no longer academic — it’s judicial,” said Kashish Mahajan, policy head at the Internet Freedom Foundation.

Experts predict that future litigation may expand beyond celebrities to ordinary citizens, especially women targeted by AI-based revenge content.


Ethical and Technological Frontiers

AI developers face a dual challenge — innovation and restraint. Researchers at IIT Delhi and IIIT Hyderabad are building tools that detect deepfakes using micro-expression analysis and voice-spectrogram mismatches. Early prototypes claim 95% accuracy in lab conditions.
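The voice-spectrogram idea can be illustrated with a minimal sketch. The naive DFT below stands in for the STFT spectrograms real detectors compute, and `spectral_mismatch` is a hypothetical helper for illustration, not the IIT Delhi or IIIT Hyderabad tooling:

```python
import cmath

def magnitude_spectrum(signal):
    # Naive DFT magnitudes for the first half of the bins;
    # real detectors use windowed STFT spectrograms instead.
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]

def spectral_mismatch(a, b):
    # Mean absolute difference between two equal-length spectra;
    # cloned voices often deviate from the genuine recording in
    # specific frequency bands, raising this score.
    sa, sb = magnitude_spectrum(a), magnitude_spectrum(b)
    return sum(abs(x - y) for x, y in zip(sa, sb)) / len(sa)
```

A genuine clip compared with itself scores zero; a synthetic imitation of the same utterance would show a measurable gap.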

However, detection arms races will continue as generators evolve.

“Technology creates the poison and the antidote simultaneously,” observed Dr. Saurabh Nanda, AI ethicist. “Policy must keep pace with both.”


The Larger Message

By recognising deepfakes as a legal wrong, the High Court has drawn a moral boundary in India’s digital landscape: innovation must not erode identity.

It’s a message that resonates beyond religion or politics — a statement about human dignity in the algorithmic age.

As Sadhguru himself tweeted later that evening:

“Truth has no filter. Lies wear many faces. May awareness be our protection.”

#DelhiHighCourt #Deepfakes #AIAbuse #DigitalIndia #Sadhguru #OnlineSafety #CyberLaw #SarhindTimes
