I&B Secretary urges AI startups to shape solutions for bridging linguistic divides

The Hindu | 17-07-2025
The Kalaa Setu and Bhasha Setu challenges on WaveX, the startup accelerator platform of the Union Ministry of Information and Broadcasting, are part of the Centre's efforts to harness Artificial Intelligence to bridge linguistic divides, I&B Secretary Sanjay Jaju said, urging AI startups to make the most of the opportunity by participating in them.
Chairing a meeting at T-Hub in Hyderabad on Thursday of startups working on AI/ML-based technology solutions and of incubators from across the country, the senior official said the Union Ministry is inviting AI startups to participate in the two challenges, details of which are available on the platform, and to develop indigenous, scalable solutions reflecting the country's linguistic and cultural diversity.
To ensure inclusive communication and last-mile information delivery in every language across the country, the Ministry is moving towards AI-based solutions that can bridge linguistic divides, he said, noting that the initiatives are in line with Prime Minister Narendra Modi's vision of encouraging the country's creator economy.
The final shortlisted startups will get to present their solutions before a national jury in New Delhi. The winner will receive an MoU for full-scale development, pilot support with All India Radio (AIR), Doordarshan (DD) and the Press Information Bureau (PIB), and incubation under the WaveX Innovation Platform.
Apart from the T-Hub CEO and representatives of startups incubated at T-Hub, participants at the meeting included representatives of IIT Hyderabad, centres of excellence at NITs, and engineering institutions with active innovation cells, PIB said in a release on the programme.

Related Articles

"ChatGPT is not a diary, therapist, lawyer, or friend": LinkedIn user warns against oversharing everything with AI
"ChatGPT is not a diary, therapist, lawyer, or friend": LinkedIn user warns against oversharing everything with AI

Economic Times

time20 minutes ago

  • Economic Times

"ChatGPT is not a diary, therapist, lawyer, or friend": LinkedIn user warns against oversharing everything with AI

ChatGPT users are being warned to think twice before typing anything personal into the chatbot. OpenAI CEO Sam Altman recently confirmed that interactions with ChatGPT aren't protected by confidentiality laws. Conversations you assume are private may be stored, reviewed, and even presented in court, no matter how sensitive, emotional or casual they seem.

'If you go talk to ChatGPT about your most sensitive stuff and then there's like a lawsuit or whatever, we could be required to produce that, and I think that's very screwed up,' Altman said in an interview on the This Past Weekend podcast. He added, 'We should have, like, the same concept of privacy for your conversations with AI that we do with a therapist or whatever.'

But as of now, that legal framework doesn't exist. Altman explained, 'Right now, if you talk to a therapist or a lawyer or a doctor about those problems, there's legal privilege for it. There's confidentiality. We haven't figured that out yet for ChatGPT.'

'ChatGPT can land you in jail'

This sharp warning is echoed by Shreya Jaiswal, a Chartered Accountant and founder of Fawkes Solutions, who posted her concerns on LinkedIn. Her message was blunt and alarming. 'ChatGPT can land you in jail. No, seriously. Not even joking,' she wrote.

According to Jaiswal, Altman's own words spell out the legal dangers. 'Sam Altman – the CEO of OpenAI, literally said that anything you type into ChatGPT can be used as evidence in court. Not just now, even months or years later, if needed. There's no privacy, no protection, nothing, unlike talking to a real lawyer or therapist who is sworn to client confidentiality.'

She laid out a few scenarios that, while hypothetical, are disturbingly plausible. Imagine someone types: 'I cheated on my partner and I feel guilty, is it me or the stars that are misaligned?' Jaiswal pointed out how this could resurface in a family court battle. 'Boom. You're in court 2 years later fighting an alimony or custody battle. That chat shows up. And your 'private guilt trip' just became public proof.'

Even seemingly harmless curiosity can be risky. 'How do I save taxes using all the loopholes in the Income Tax Act?' or 'How can I use bank loans to become rich like Vijay Mallya?' could be interpreted as intent during a future audit or legal probe. 'During a tax audit or loan default, this could easily be used as evidence of intent even if you never actually did anything wrong,' she warned.

In another example, she highlighted workplace risk. 'I'm thinking of quitting and starting my own company. How can I use my current company to learn for my startup?' This, she argued, could be used against you in a lawsuit for breach of contract or intellectual property theft. 'You don't even need to have done anything. The fact that you thought about it is enough.'

Treating ChatGPT like a therapist? Think again

Jaiswal expressed concern that people have become too casual, even intimate, with AI tools. 'We've all gotten way too comfortable with AI. People are treating ChatGPT like a diary. Like a best friend. Like a therapist. Like a co-founder.' 'But it's none of those. It's not on your side, it's not protecting you. And legally, it doesn't owe you anything.'

She closed her post with a simple piece of advice: 'Let me make this simple – if you wouldn't say it in front of a judge, don't type it into ChatGPT.' And her final thought was one that many might relate to: 'I'm honestly scared. Not because I have used ChatGPT for something I shouldn't have. But because we've moved too fast, and asked too few questions, and continue to do so in the world of AI.'

Real cases, real fines

These concerns aren't just theory. In a 2024 bankruptcy case in the United States, a lawyer submitted a legal brief that cited fake court cases generated by ChatGPT. The judge imposed a fine of $5,500 and ordered the lawyer to attend an AI ethics session. Similar disciplinary actions were taken against lawyers in Utah and Alabama who relied on fabricated AI-generated citations. These incidents have underscored a critical truth: AI cannot replace verified legal research or professional advice. It can mislead, misrepresent, or completely fabricate information, what researchers call "AI hallucinations".

Younger users are relying too much on ChatGPT

Altman also flagged a worrying trend among younger users. Speaking at a Federal Reserve conference, he said, 'There are young people who say, "I can't make any decision in my life without telling ChatGPT everything that's going on. It knows me. I'm going to do whatever it says." That feels really bad to me.' He's concerned that blind faith in AI could be eroding people's ability to think critically. While ChatGPT is programmed to provide helpful answers, Altman stressed it lacks context, responsibility, and real emotional understanding.

What you should do

The advice is straightforward, and it applies to everyone:

  • Don't use ChatGPT to confess anything sensitive, illegal or personal
  • Never treat it as a lawyer, therapist, or financial advisor
  • Verify any factual claims independently
  • Use AI to brainstorm, not to confess
  • And most importantly, don't say anything to a chatbot that you wouldn't be comfortable seeing in court

While OpenAI claims that user chats are reviewed for safety and model training, Altman admitted that conversations may be retained if required by law. Even if you delete a conversation, legal demands can override those actions. With ongoing lawsuits, including one from The New York Times, OpenAI may soon have to store conversations indefinitely.

For those looking for more privacy, Altman suggested considering open-source models that can run offline, like GPT4All by Nomic AI or Ollama. But he stressed that what's needed most is a clear legal framework. 'I think we will certainly need a legal or a policy framework for AI,' he said. Until then, treat your chats with caution. Because what you type could follow you, even years later.

"ChatGPT is not a diary, therapist, lawyer, or friend": LinkedIn user warns against oversharing everything with AI
"ChatGPT is not a diary, therapist, lawyer, or friend": LinkedIn user warns against oversharing everything with AI

Time of India

time37 minutes ago

  • Time of India

"ChatGPT is not a diary, therapist, lawyer, or friend": LinkedIn user warns against oversharing everything with AI

ChatGPT users are being warned to think twice before typing anything personal into the chatbot. OpenAI CEO Sam Altman recently confirmed that interactions with ChatGPT aren't protected by confidentiality laws. Conversations you assume are private may be stored, reviewed, and even presented in court — no matter how sensitive, emotional or casual they seem. 'If you go talk to ChatGPT about your most sensitive stuff and then there's like a lawsuit or whatever, we could be required to produce that, and I think that's very screwed up,' Altman said in an interview on the This Past Weekend podcast. He added, 'We should have, like, the same concept of privacy for your conversations with AI that we do with a therapist or whatever.' Explore courses from Top Institutes in Please select course: Select a Course Category Design Thinking others PGDM Data Analytics Cybersecurity Operations Management healthcare Public Policy Others Digital Marketing Management MBA CXO Leadership Degree MCA Product Management Artificial Intelligence Finance Technology Data Science Project Management Healthcare Data Science Skills you'll gain: Duration: 22 Weeks IIM Indore CERT-IIMI DTAI Async India Starts on undefined Get Details Skills you'll gain: Duration: 25 Weeks IIM Kozhikode CERT-IIMK PCP DTIM Async India Starts on undefined Get Details But as of now, that legal framework doesn't exist. by Taboola by Taboola Sponsored Links Sponsored Links Promoted Links Promoted Links You May Like Crossout: New Apocalyptic MMO Crossout Play Now Undo Altman explained, 'Right now, if you talk to a therapist or a lawyer or a doctor about those problems, there's legal privilege for it. There's confidentiality. We haven't figured that out yet for ChatGPT.' 'ChatGPT can land you in jail' This sharp warning is echoed by Shreya Jaiswal , a Chartered Accountant and founder of Fawkes Solutions, who posted her concerns on LinkedIn. Her message was blunt and alarming. Live Events 'ChatGPT can land you in jail. No, seriously. Not even joking,' she wrote. According to Jaiswal, Altman's own words spell out the legal dangers. 'Sam Altman – the CEO of OpenAI, literally said that anything you type into ChatGPT can be used as evidence in court. Not just now, even months or years later, if needed. There's no privacy, no protection, nothing, unlike talking to a real lawyer or therapist who is sworn to client confidentiality.' She laid out a few scenarios that, while hypothetical, are disturbingly someone types: 'I cheated on my partner and I feel guilty, is it me or the stars that are misaligned?' Jaiswal pointed out how this could resurface in a family court battle. 'Boom. You're in court 2 years later fighting an alimony or custody battle. That chat shows up. And your 'private guilt trip' just became public proof.' Even seemingly harmless curiosity can be risky. 'How do I save taxes using all the loopholes in the Income Tax Act ?' or 'How can I use bank loans to become rich like Vijay Mallya ?' could be interpreted as intent during a future audit or legal probe. 'During a tax audit or loan default, this could easily be used as evidence of intent even if you never actually did anything wrong,' she warned. In another example, she highlighted workplace risk. 'I'm thinking of quitting and starting my own company. How can I use my current company to learn for my startup?' This, she argued, could be used against you in a lawsuit for breach of contract or intellectual property theft. 'You don't even need to have done anything. 
The fact that you thought about it is enough.' Treating ChatGPT like a therapist? Think again Jaiswal expressed concern that people have become too casual, even intimate, with AI tools. 'We've all gotten way too comfortable with AI. People are treating ChatGPT like a diary. Like a best friend. Like a therapist. Like a co-founder.' 'But it's none of those. It's not on your side, it's not protecting you. And legally, it doesn't owe you anything.' She closed her post with a simple piece of advice: 'Let me make this simple – if you wouldn't say it in front of a judge, don't type it into ChatGPT.' And her final thought was one that many might relate to: 'I'm honestly scared. Not because I have used ChatGPT for something I shouldn't have. But because we've moved too fast, and asked too few questions, and continue to do so in the world of AI.' Real cases, real fines These concerns aren't just theory. In a 2024 bankruptcy case in the United States , a lawyer submitted a legal brief that cited fake court cases generated by ChatGPT. The judge imposed a fine of $5,500 and ordered the lawyer to attend an AI ethics session. — slow_developer (@slow_developer) Similar disciplinary actions were taken against lawyers in Utah and Alabama who relied on fabricated AI-generated citations. These incidents have underscored a critical truth: AI cannot replace verified legal research or professional advice. It can mislead, misrepresent, or completely fabricate information — what researchers call "AI hallucinations". Younger users are relying too much on ChatGPT Altman also flagged a worrying trend among younger users. Speaking at a Federal Reserve conference, he said, 'There are young people who say, 'I can't make any decision in my life without telling ChatGPT everything that's going on. It knows me. I'm going to do whatever it says.' That feels really bad to me.' He's concerned that blind faith in AI could be eroding people's ability to think critically. While ChatGPT is programmed to provide helpful answers, Altman stressed it lacks context, responsibility, and real emotional understanding. What you should do The advice is straightforward, and it applies to everyone: Don't use ChatGPT to confess anything sensitive, illegal or personal Never treat it as a lawyer, therapist, or financial advisor Verify any factual claims independently Use AI to brainstorm, not to confess And most importantly, don't say anything to a chatbot that you wouldn't be comfortable seeing in court While OpenAI claims that user chats are reviewed for safety and model training, Altman admitted that conversations may be retained if required by law. Even if you delete a conversation, legal demands can override those actions. With ongoing lawsuits, including one from The New York Times, OpenAI may soon have to store conversations indefinitely. For those looking for more privacy, Altman suggested considering open-source models that can run offline, like GPT4All by Nomic AI or Ollama. But he stressed that what's needed most is a clear legal framework. 'I think we will certainly need a legal or a policy framework for AI,' he said. Until then, treat your chats with caution. Because what you type could follow you — even years later.

Elon Musk's X accused of hosting child pornography content; must face revived lawsuit over negligence
Time of India | an hour ago

A federal appeals court has revived part of a lawsuit accusing Elon Musk's social media platform, X (formerly Twitter), of negligence in its response to child pornography content. While X benefits from broad immunity under Section 230 of the Communications Decency Act, the 9th US Circuit Court of Appeals ruled the platform must still face claims that it failed to act promptly after learning of a sexually explicit video involving two underage boys. The court's decision highlights growing concerns over how social media platforms handle child exploitation online.

Elon Musk's platform under fire despite legal protections

Although Elon Musk was not personally named in the lawsuit and the case predates his acquisition of Twitter in 2022, X remains legally vulnerable due to how it handled the content once it had 'actual knowledge.' Court documents state the platform took nine days to remove and report a video involving explicit images of minors, after it had been viewed over 167,000 times. Judge Danielle Forrest clarified that the statutory obligation to report child pornography overrides Section 230 protections once a platform becomes aware of such material.

Plaintiffs claim X made it hard to report child abuse

The plaintiffs, referred to as John Doe 1 and John Doe 2, were 13 and 14 years old when a predator, posing as a peer on Snapchat, coerced them into sending explicit content. That material was later posted to Twitter, where it remained despite multiple user reports and a complaint from one of the boys' mothers. The court also revived a separate claim asserting that X's infrastructure made it unnecessarily difficult for users to report child sexual abuse material, suggesting flaws in platform design and moderation workflows.

Broader concerns about CSAM handling on X persist

Although some claims were dismissed, such as those alleging X profited from sex trafficking or designed features that "amplify" abuse, the lawsuit adds to existing criticism of the platform's response to child sexual abuse material. Nonprofits like Thorn have severed ties with X over payment disputes and policy concerns, and watchdogs report that illicit hashtags and spam accounts still circulate CSAM. While X has announced improvements in detection technology, critics say progress remains insufficient amid persistent issues.
