Joking Hazards: How A Karnataka Bill Could Kill Online Parody, Satire

NDTV · 19 hours ago
AI's integration into platforms as a truth-seeking tool, as with X's Grok, is starkly different from its use to generate altered content. This gives AI a dual role: that of fact-finder and fact-fabricator. The duality adds layers of complexity for lawmakers, particularly on the question of how existing regulation applies to AI, now that laws drafted for other purposes have already cast a long arm over its use.
One of the more complex challenges here is that AI-generated falsehoods often arise not from an intent to deceive, but from how the models are trained to predict and produce language. AI tools exhibit a form of truth-bias, a cognitive tendency, traditionally considered uniquely human, to assume that most interpersonal communication is honest. A series of studies has shown that large language models are even more likely than humans to accept and reproduce false information unless prompted otherwise. Crucially, this bias is not a design choice but a byproduct of training on vast human corpora in which truthful statements are statistically dominant. This raises a pertinent question: when AI-generated content perpetuates a falsehood without intent, should the person who generated it be held criminally liable? Intent must remain central to legal culpability.
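The pattern those studies describe is easy to probe informally. Below is a minimal sketch of such a probe, assuming the OpenAI Python SDK and an API key; the model name, the false claim, and both prompts are illustrative placeholders rather than details taken from any of the studies, and real experiments use far more controlled setups.

```python
# Minimal probe of "truth-bias": send the same false claim to a chat model
# with and without an explicit instruction to verify it first.
# Assumptions: the OpenAI Python SDK (openai>=1.0) is installed and
# OPENAI_API_KEY is set; the model name and prompts are illustrative only.
from openai import OpenAI

client = OpenAI()
FALSE_CLAIM = "Karnataka's misinformation Bill was enacted in 1995."  # deliberately false

def summarise(system_prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": f"Summarise this claim for a newsletter: {FALSE_CLAIM}"},
        ],
    )
    return response.choices[0].message.content

# A truth-biased model tends to accept and restate the premise as given.
print(summarise("You are a helpful writing assistant."))

# An explicit verification instruction makes it more likely to flag the error.
print(summarise("Before summarising any claim, check whether it is factually "
                "accurate and say so explicitly if it is not."))
```

The contrast is the point the paragraph above makes: any falsehood in the first output stems from how the model completes text, not from anyone's intent to deceive.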
Karnataka's Misinformation and Fake News (Prohibition) Bill, 2025, is a case in point. The Bill stands on shaky constitutional ground, venturing into a domain arguably reserved for the Union government. Entry 31 of the Union List grants Parliament exclusive power over "posts and telegraphs; telephones, wireless, broadcasting and other like forms of communication". Regulating internet-based speech falls squarely within this ambit, raising serious questions about the state legislature's competence to enact such a law in the first place.
Even if the question of jurisdiction is set aside, the Bill's substance is deeply flawed. While it omits any reference to AI or synthetic media, its broad definitions of 'fake news' and 'misinformation' could be read to criminalise AI-generated content in its many forms, along with other kinds of digital creativity. The Bill defines 'fake news' expansively to include 'misquotation', 'editing audio or video which results in the distortion of facts', and 'purely fabricated content'. Yet it fails to distinguish malicious deception from legitimate creative expression, particularly work that uses AI for satire, parody, or commentary. A voice-dubbed parody of a political sermon, even one clearly labelled as satire, could be construed under the Bill as 'distorted' or 'fabricated' and expose its creator to prosecution.
Critically, the Bill's carve-out for satire and parody applies only under the definition of 'misinformation', not under 'fake news', which carries stricter penalties and lacks any protection for artistic or humorous work. This is precisely the kind of ambiguity the Supreme Court sought to guard against in Shreya Singhal v. Union of India (2015), when it struck down Section 66A of the IT Act. The court held that vague and overbroad language could unconstitutionally restrict the freedom of expression guaranteed under Article 19(1)(a). The judgment warned that unless laws specify clearly what kind of speech is punishable, creators will be forced into a culture of self-censorship.
Internationally, democracies are developing more targeted and technologically aware regulations that offer better models. The European Union's AI Act, for example, focuses on transparency. It mandates that AI-generated deepfakes and other synthetic content that might be mistaken for authentic must be clearly labelled as such. Crucially, the law provides explicit exceptions for content that is obviously artistic, satirical, or creative, thereby protecting free expression while empowering citizens to identify manipulated media.
Similarly, several US states have enacted laws that target specific, malicious uses of AI rather than banning the technology itself. Laws in states like California and Texas criminalise the creation and distribution of deceptive deepfake videos of political candidates intended to influence an election, but they are narrowly tailored, often applying only within a short window before voting. This approach aims to curb high-stakes harms, such as election interference, without imposing a blanket ban on altered content.
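The contrast between these two regimes, and with Karnataka's blanket approach, can be made concrete as a pair of decision rules. The sketch below is a loose paraphrase for illustration only, not a faithful encoding of either law: the field names, the 90-day pre-election window, and the exemption categories are all assumptions.

```python
# Loose, illustrative paraphrase of two regulatory designs for synthetic
# media; field names, the 90-day window, and exemptions are assumptions,
# not statutory text.
from dataclasses import dataclass
from datetime import date

@dataclass
class SyntheticContent:
    ai_generated: bool
    clearly_labelled: bool
    obviously_satire_or_art: bool     # EU-style carve-out
    depicts_candidate: bool
    intended_to_deceive_voters: bool
    published: date

def eu_style_violation(c: SyntheticContent) -> bool:
    """Transparency-first: unlabelled realistic synthetic content violates,
    but obviously artistic or satirical work is exempt."""
    if not c.ai_generated or c.obviously_satire_or_art:
        return False
    return not c.clearly_labelled

def us_state_style_violation(c: SyntheticContent, election_day: date) -> bool:
    """Narrow tailoring: only deceptive candidate deepfakes published
    shortly before an election (window assumed here to be 90 days)."""
    days_out = (election_day - c.published).days
    return (c.ai_generated and c.depicts_candidate
            and c.intended_to_deceive_voters and 0 <= days_out <= 90)

# A clearly labelled satirical deepfake published months before polling
# violates neither rule; the Karnataka Bill's 'fake news' definition,
# lacking both a labelling test and an intent test, could still reach it.
parody = SyntheticContent(True, True, True, True, False, date(2025, 1, 10))
assert not eu_style_violation(parody)
assert not us_state_style_violation(parody, election_day=date(2025, 11, 4))
```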
The Karnataka Bill ignores such nuanced approaches, opting instead for a blunt instrument that threatens to criminalise a wide range of digital creativity.
This legislative approach is especially unjust in a legal system that values precedent and practical interpretation. The legal maxim ignorantia juris non excusat (ignorance of the law is no excuse) only deepens the challenge for creators using new tools: if they are to be held liable for violating a law, they must first be able to understand what conduct is permitted.
To be clear, the dangers of deepfakes and deceptive synthetic content are real. They can be used to damage reputations or manipulate public opinion. However, the solution cannot be to criminalise 'fake news' without regard for intent, context or creative purpose. Karnataka's policymakers would do well to recall that a well-formed legislature, as legal theorist Richard Ekins puts it, acts with the intent 'to change the law in the chosen way, for the common good'. That common good must balance the need to curb digital deception with the imperative to protect expression, even (and especially) when that expression is critical, satirical, or inconvenient. There is an urgent need to reconsider this Bill.

Related Articles

Two accused in Parliament security breach case granted bail

Scroll.in · 25 minutes ago
The Delhi High Court on Wednesday granted bail to two persons accused in the 2023 Parliament security breach case, Bar and Bench reported. A bench of Justices Subramonium Prasad and Harish Vaidyanathan Shankar allowed petitions filed by Neelam Azad and Mahesh Kumawat for bail in the case, which was reserved on May 21, Live Law reported. The bail was subject to Azad and Kumawat furnishing a bail bond of Rs 50,000 each and two sureties. The two were also barred from holding press conferences or giving interviews, and from posting anything on social media about the incident.

The matter pertained to two men, Sagar Sharma and Manoranjan D, jumping into the Lok Sabha chamber from the visitors' gallery and opening gas canisters on December 13, 2023. Outside Parliament, Azad and a man, Amol Dhanraj Shinde, had opened smoke canisters and shouted 'stop dictatorship' slogans. All four were arrested in connection with the breach. A day later, the police arrested Lalit Jha, allegedly the mastermind behind the incident, and Kumawat, a co-accused. The police charged all six under provisions of the Unlawful Activities Prevention Act.

Azad moved the High Court in September 2024 against a city court denying her bail, while Kumawat filed an appeal in November against the rejection of his bail petition, the Hindustan Times reported. During earlier proceedings, Azad and Kumawat told the court that the police had wrongly invoked the Unlawful Activities Prevention Act against them, arguing that their alleged actions did not constitute an act of terrorism. Azad said her conduct did not amount to an act of terror as she had entered the premises using a valid pass and without weapons, the Hindustan Times reported. Kumawat said his intention was to draw attention to certain issues of democratic and political importance.

However, the police argued that the breach coincided with the anniversary of the 2001 Parliament attack, which was enough to prove that the intention of those accused in the matter was likely to strike terror or threaten the security of the country. On December 13, 2001, terrorists had entered the Parliament complex and begun shooting with AK-47 rifles; the attack left nine persons dead. The police also submitted that those accused in the case wanted to bring back 'haunted memories' of the 2001 attack to the 'majestic' new Parliament building, Live Law reported. The bench, however, asked whether an offence under the Unlawful Activities Prevention Act could be made out against the accused, adding that if using smoke canisters were a terrorist act, then every Holi celebration and Indian Premier League match would also attract such provisions, Live Law reported.

Google, you broke your word on …, shout protestors outside Google Deepmind's London headquarters

Time of India · 37 minutes ago
Dozens of protesters staged a mock courtroom trial outside Google DeepMind's London headquarters Monday, accusing the AI giant of breaking public safety promises made during the launch of its Gemini 2.5 Pro model. The demonstration, organized by activist group PauseAI, drew over 60 participants who chanted "Test, don't guess" and "Stop the race, it's unsafe" while conducting a theatrical trial complete with a judge and jury.

The group claims Google violated commitments made at the 2024 AI Safety Summit in Seoul, where the company pledged to involve external evaluators in testing its advanced AI models and to publish detailed transparency reports. When Google released Gemini 2.5 Pro in April, it labeled the model "experimental" and initially provided no third-party evaluation details. A safety report published weeks later was criticized by experts as lacking substance and failing to identify external reviewers.

"Right now, AI companies are less regulated than sandwich shops," said PauseAI organizing director Ella Hughes, addressing the crowd. "If we let Google get away with breaking their word, it sends a signal to all other labs that safety promises aren't important."

The protest reflects growing public concern about the pace and oversight of AI development. PauseAI founder Joep Meindertsma, who runs a software company and uses AI tools from major providers, said the group chose to focus on this specific transparency issue as an achievable near-term goal. Monday marked PauseAI's first demonstration targeting this particular Google commitment. The group is now engaging with UK Parliament members to escalate its concerns through political channels. Google has not responded to requests for comment about the protesters' demands or future transparency plans for its AI models.

$65 for a book written by ChatGPT? Man spots AI slip in yearbook, sparks online frenzy

Time of India · 44 minutes ago
In the ever-blurring line between human and machine-generated content, one revelation has ignited a storm of reactions online, this time over a $65 (₹5,500 approx) yearbook. A viral video circulating on Instagram, originally posted by the account Evolving AI, shows a man expressing disbelief upon discovering that parts of his pricey book appear to have been generated by ChatGPT. 'I paid $65 for this book, and they used ChatGPT in the book,' the man claims in the video, pointing to a paragraph that ends with the familiar AI-generated phrase, 'Feel free to let me know if you need any adjustments or additional information!' The telltale sentence, commonly used by ChatGPT when closing responses, was all the evidence he needed to conclude that the content was machine-written.

The video credits the original post to TikTok user @raulito_tb and has since triggered a whirlwind of reactions from users, many of whom are both amused and alarmed by what this incident might signal for the future of publishing. The caption accompanying the viral clip summed up the collective sentiment: 'Imagine paying $65 for a yearbook that ChatGPT made in seconds, and then the writer accidentally included ChatGPT's "Feel free to let me know…"'

While some viewers were quick to poke fun, others raised serious concerns. One user joked, 'I asked ChatGPT to write me a response to this post. Here it is: "If AI is taking over, maybe it's time we rethink the tasks—not the tools."' Another reflected more somberly, 'Don't wanna sound like an old man, but gets you thinking about future generations and its effects.' Others criticized the lack of editorial oversight. 'Not proofreading your generated stuff is next-level lazy tbh,' one comment read, highlighting a rising anxiety over the increasing normalization of unedited AI output.

The incident comes at a time when generative AI tools like ChatGPT are not just being used for brainstorming or content drafts but are increasingly creeping into formal, even sentimental, documents like yearbooks. For many, this raises questions about authenticity, creativity, and accountability in the age of AI. While AI-generated content is nothing new, what sparked outrage in this case was the apparent carelessness: a simple proofreading error that left a digital fingerprint revealing the involvement of a machine where human effort was expected. It is a snapshot of a larger cultural moment, one in which society is still negotiating the boundaries of AI's role in daily life. As some marvel at its efficiency, others fear a future where genuine human expression is replaced, or worse, overlooked in favor of convenience. One commenter perhaps said it best: 'We've reached the final evolution of "fake it till you make it."'
