
Overthinking Common In India, 1 In 3 Use Tech Tools Like ChatGPT For Support
From choosing a dish at a restaurant to deciding on a gift, a growing number of Indians are turning to technology to navigate overthinking, which a new survey says has become part of daily life.
New-age digital tools such as the conversational AI platform ChatGPT and the search engine Google are increasingly being used by Indians for clarity when they are faced with uncertainty, said a joint report from Center Fresh and YouGov.
The survey, with a sample size of 2,100 respondents, found that 81 per cent of Indians spend over three hours a day overthinking, with one in four admitting "it's a constant habit".
According to the 'India Overthinking Report', one in three have used Google or ChatGPT to navigate overthinking - from decoding a short message to making a gift purchase decision.
The survey covered students, working professionals and self-employed individuals across Tier I, II and III cities, and examined four key areas - food and lifestyle habits, digital and social life, dating and relationships, and career and professional life.
The survey found that overthinking has become a part of daily life in India, not just in moments of crisis, but in the smallest, most routine decisions.
As per the report, 63 per cent of respondents in the survey said choosing a dish at a restaurant is "more stressful than picking a political leader".
"When faced with uncertainty, Indians are increasingly turning to tech for clarity. One in three say they've used Google or ChatGPT to navigate overthinking - from decoding a short message to making a gift purchase decision," the survey said.

Related Articles


News18
Who Is Andrew Tulloch? The AI Expert Who Turned Down Meta's $1.5 Billion Offer
Meta, led by Mark Zuckerberg, is hiring top AI experts from OpenAI and Microsoft. Andrew Tulloch rejected a $1.5 billion offer. Alexandr Wang was hired as Chief AI Officer.

Mark Zuckerberg-led Meta is on a poaching spree, acquiring top-level AI researchers and experts from companies across the field, including OpenAI, Microsoft, and rising AI startups. Some reports say Meta is offering whopping payouts of up to $100 million to these coveted experts to join its Superintelligence team. One figure whose name is making headlines is Andrew Tulloch, though the reason is quite different: he turned down an offer reportedly worth as much as $1.5 billion. In India, where a six-figure salary is considered a dream job, it is unusual for someone to walk away from a sum of that scale.

Who Is Andrew Tulloch?

Andrew Tulloch is a machine learning expert and co-founder of Thinking Machines Lab, an AI company led by Mira Murati that aims to push the boundaries of artificial intelligence beyond chatbots. He recently made headlines after The Wall Street Journal reported that he turned down the staggering offer from Meta. The Journal noted that even in Silicon Valley, where massive paychecks are common, it is rare for someone to walk away from such a deal. Sources familiar with the matter said the $1.5 billion offer included bonuses and stock-based compensation, making the decision all the more surprising.

Before this, Tulloch played a key role in developing GPT-4o and GPT-4.5 during his time at OpenAI, where he worked on large-scale ML systems and o-series reasoning. He spent over 11 years at Meta as a Distinguished Engineer, focusing on machine learning systems and working extensively with PyTorch. He began his career at Goldman Sachs, developing financial and trading strategies.

Academically, Tulloch holds a Master's in Mathematical Statistics and Machine Learning from the University of Cambridge, where he earned a distinction and a college prize. He also graduated with First Class Honours and a University Medal in Advanced Mathematics from the University of Sydney.

Meta Hires AI Prodigy Alexandr Wang To Lead Meta's Superintelligence Labs

Scale AI co-founder Alexandr Wang was hired by Meta to lead its Superintelligence team and given the title of Chief AI Officer. His appointment followed Meta's investment in Scale AI, which valued the data-labelling startup at $29 billion. Earlier, Meta brought in Yuanzhi Li from OpenAI and Anton Bakhtin from Anthropic.

2 Indians Among 44-Member Team

Two India-origin researchers were part of Meta's Superintelligence team: besides Trapit Bansal, Hammad Syed also joined the elite group as a software engineer.


Indian Express
Anthropic blocks OpenAI's API access to Claude ahead of GPT-5 launch: Report
In a clear sign of intensifying rivalry in the AI race, Anthropic has accused OpenAI of violating its terms of service and partially blocked the ChatGPT-maker from accessing its Claude series of AI models via API (application programming interface).

OpenAI had been granted special developer access to Claude models for industry-standard practices like benchmarking and conducting safety evaluations, which involve comparing AI-generated outputs against those of its own models. However, according to a report by Wired, Anthropic has now accused members of OpenAI's technical staff of using that access to interact with Claude Code - the company's AI-powered coding assistant - in ways that violated its terms of service.

The timing is notable, as it comes ahead of the widely anticipated launch of GPT-5, OpenAI's next major AI model, which is purportedly better at generating code. Anthropic's AI models, on the other hand, are popular among developers for their coding abilities.

Anthropic's commercial terms of service prohibit customers from using the service to 'build a competing product or service, including to train competing AI models' or 'reverse engineer or duplicate' the services.

'Claude Code has become the go-to choice for coders everywhere, and so it was no surprise to learn OpenAI's own technical staff were also using our coding tools ahead of the launch of GPT-5. Unfortunately, this is a direct violation of our terms of service,' Anthropic spokesperson Christopher Nulty was quoted as saying by Wired. Anthropic will 'continue to ensure OpenAI has API access for the purposes of benchmarking and safety evaluations as is standard practice across the industry,' he added.

Responding to Anthropic's claims, OpenAI's chief communications officer Hannah Wong reportedly said, 'It's industry standard to evaluate other AI systems to benchmark progress and improve safety. While we respect Anthropic's decision to cut off our API access, it's disappointing considering our API remains available to them.'

This is not the first time that Anthropic has taken such measures. Last month, the Google- and Amazon-backed company restricted Windsurf from directly accessing its models following reports that OpenAI was set to acquire the AI coding startup. That deal fell through after Google reportedly hired away Windsurf's CEO and co-founder and licensed its technology in a $2.4 billion deal.

Ahead of cutting off OpenAI's access to the Claude API, Anthropic announced new weekly rate limits for Claude Code, as some users were running the AI coding tool 'continuously in the background 24/7.'

Earlier this year, OpenAI accused Chinese rival DeepSeek of breaching its terms of service. The Sam Altman-led company said it suspected DeepSeek of training its AI model by repeatedly querying OpenAI's proprietary models, a technique commonly referred to as distillation.
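For readers unfamiliar with the distillation technique described above, the following is a minimal Python sketch of the general idea: outputs harvested by repeatedly querying a proprietary "teacher" model become supervised training data for a competing "student" model. Every name in it (query_teacher, harvest_pairs, distillation_data.jsonl) is a hypothetical placeholder for illustration, not OpenAI's or DeepSeek's actual code.

import json

def query_teacher(prompt: str) -> str:
    # Stand-in for an HTTP call to the teacher model's API; returns a
    # canned string so the sketch runs without any credentials.
    return f"[teacher completion for: {prompt}]"

def harvest_pairs(prompts: list[str]) -> list[dict]:
    # Repeatedly query the teacher and keep prompt/completion pairs.
    return [{"prompt": p, "completion": query_teacher(p)} for p in prompts]

if __name__ == "__main__":
    prompts = [
        "Explain what an API is in one sentence.",
        "Write a Python function that reverses a string.",
    ]
    # The harvested pairs would then be used to fine-tune the student
    # model - exactly the kind of reuse that providers' terms of
    # service typically prohibit.
    with open("distillation_data.jsonl", "w") as f:
        for pair in harvest_pairs(prompts):
            f.write(json.dumps(pair) + "\n")

At scale, the same loop run over millions of prompts is what allows a student model to approximate a teacher's behaviour without access to its weights, which is why providers police this pattern through their terms of service and rate limits.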


NDTV
Validation, Loneliness, Insecurity: Why Young People Are Turning To ChatGPT
New Delhi: An alarming trend of young adolescents turning to artificial intelligence (AI) chatbots like ChatGPT to express their deepest emotions and personal problems is raising serious concerns among educators and mental health professionals.

Experts warn that this digital "safe space" is creating a dangerous dependency, fuelling validation-seeking behaviour, and deepening a crisis of communication within families. They say this digital solace is a mirage: the chatbots are designed to provide validation and engagement, potentially embedding misbeliefs and hindering the development of crucial social skills and emotional resilience.

Sudha Acharya, Principal of ITL Public School, highlighted that a dangerous mindset has taken root among youngsters, who mistakenly believe that their phones offer a private sanctuary. "School is a social place - a place for social and emotional learning," she told PTI. "Of late, there has been a trend amongst the young adolescents... They think that when they are sitting with their phones, they are in their private space. ChatGPT is using a large language model, and whatever information is being shared with the chatbot is undoubtedly in the public domain."

Ms Acharya noted that children are turning to ChatGPT to express their emotions whenever they feel low, depressed, or unable to find anyone to confide in. She believes this points to a "serious lack of communication in reality, and it starts from family." If parents do not share their own drawbacks and failures with their children, she added, the children never learn to acknowledge failure or regulate their own emotions. "The problem is, these young adults have grown a mindset of constantly needing validation and approval."

Ms Acharya has introduced a digital citizenship skills programme from Class 6 onwards at her school, specifically because children as young as nine or ten now own smartphones without the maturity to use them ethically. She highlighted a particular concern: when a youngster shares their distress with ChatGPT, the immediate response is often "please, calm down. We will solve it together."

"This reflects that the AI is trying to instil trust in the individual interacting with it, eventually feeding validation and approval so that the user engages in further conversations," she told PTI. "Such issues wouldn't arise if these young adolescents had real friends rather than 'reel' friends. They have a mindset that if a picture is posted on social media, it must get at least a hundred 'likes', else they feel low and invalidated," she said.

The school principal believes the core of the issue lies with parents themselves, who are often "gadget-addicted" and fail to give emotional time to their children. While they offer every material comfort, emotional support and understanding are often absent. "So, here we feel that ChatGPT is now bridging that gap, but it is an AI bot after all. It has no emotions, nor can it help regulate anyone's feelings," she cautioned. "It is just a machine and it tells you what you want to listen to, not what's right for your well-being," she said.

Mentioning cases of self-harm among students at her own school, Ms Acharya said the situation has turned "very dangerous". "We track these students very closely and try our best to help them," she stated. "In most of these cases, we have observed that the young adolescents are very particular about their body image, validation and approval. When they do not get that, they turn agitated and eventually end up harming themselves. It is really alarming, as cases like these are rising."

Ayushi, a student in Class 11, confessed that she shared her personal issues with AI bots numerous times out of a "fear of being judged" in real life. "I felt like it was an emotional space and eventually developed an emotional dependency towards it. It felt like my safe space. It always gives positive feedback and never contradicts you. Although I gradually understood that it wasn't mentoring me or giving me real guidance, that took some time," the 16-year-old told PTI. Ayushi also admitted that turning to chatbots for personal issues is "quite common" within her friend circle.

Another student, Gauransh, 15, observed a change in his own behaviour after using chatbots for personal problems. "I observed growing impatience and aggression," he told PTI. He had been using the chatbots for a year or two but stopped recently after discovering that "ChatGPT uses this information to advance itself and train its data."

Psychiatrist Dr Lokesh Singh Shekhawat of RML Hospital confirmed that AI bots are meticulously customised to maximise user engagement. "When youngsters develop any sort of negative emotions or misbeliefs and share them with ChatGPT, the AI bot validates them," he explained. "The youth start believing the responses, which makes them nothing but delusional." He noted that when a misbelief is repeatedly validated, it becomes "embedded in the mindset as a truth". This, he said, alters their point of view, a phenomenon he referred to as 'attention bias' and 'memory bias'. The chatbot's ability to adapt to the user's tone is a deliberate tactic to encourage maximum conversation, he added.

Dr Singh stressed the importance of constructive criticism for mental health, something completely absent in AI interactions. "Youth feel relieved and ventilated when they share their personal problems with AI, but they don't realise that it is making them dangerously dependent on it," he warned. He also drew a parallel between addiction to AI for mood upliftment and addictions to gaming or alcohol. "The dependency on it increases day by day," he said, cautioning that in the long run this will create a "social skill deficit and isolation".