AI may beat doctors at diagnosis, but trust still wins: Sam Altman

Time of India · 6 days ago
Synopsis
OpenAI CEO Sam Altman says AI can now diagnose illnesses better than most doctors, but people still prefer human care for trust and connection. He warned about risks like AI-driven fraud and privacy issues, stressing the need for stronger protections for sensitive conversations users have with tools like ChatGPT.

Related Articles

Saint, Satan, Sam: Chat about the ChatGPT Man

Indian Express

10 minutes ago

For many people, AI (artificial intelligence) is almost synonymous with ChatGPT, a chatbot developed by OpenAI, which is the closest thing tech has had to a magic genie. You just tell ChatGPT what information you want and it serves it up – from writing elaborate essays to advising you on how to clear up your table to even serving up images based on your descriptions. Such is its popularity that at one stage it even overtook the likes of Instagram and TikTok to become the most downloaded app in the world. While almost every major tech brand has its own AI tool (even the mighty Apple is working on one), AI for many still remains ChatGPT.

The man behind this phenomenon is Samuel Harris 'Sam' Altman, the 40-year-old CEO of OpenAI, and perhaps the most polarising figure in tech since Steve Jobs. To many, he is a visionary who is changing the world and taking humanity to a better place. To many others, he is a cunning, manipulative person who uses his marketing skills to raise money and is actually destroying the planet. The truth might lie somewhere between those two extremes.

By some literary coincidence, two books have recently been released on Sam Altman, and both are shooting up the bestseller charts. Both are superbly written and researched (based on interviews with hundreds of people), and while they start at almost the same point, they not surprisingly come to rather different conclusions about the man and his work. Those who tend to see Altman as a well-meaning, if occasionally odd, genius will love Keach Hagey's The Optimist: Sam Altman, OpenAI, and the Race to Invent the Future. Hagey is a Wall Street Journal reporter, and while she does not put a halo around Altman, her take on the OpenAI CEO reflects the title of the book – she sees Altman as a visionary who is trying to change the world.
The fact that Altman collaborated on the book (although he is believed to have thought he was too young for a biography) might have something to do with this, for the book does articulate Altman's vision on a variety of subjects, but most of all, on AI and where it is headed. Although it begins with the events leading up to Altman's being dramatically sacked as the CEO of OpenAI in November 2023, and his equally dramatic reinstatement within days, Hagey's book is a classic biography. It walks us through Altman's childhood, his getting interested in coding and his decision to drop out of Stanford, before he gets into tech CEO mode by first founding the social media app Loopt and then joining tech incubator Y Combinator (which was behind the likes of Stripe, Airbnb and Dropbox) after meeting its co-founder Paul Graham, who is believed to have had a profound impact on him (Hagey calls him 'his mentor').

Altman also gets in touch with a young billionaire who is very interested in AI and is worried that Google will come out with an AI tool that could ruin the world. Elon Musk in this book is very different from the eccentric character we have seen in the Trump administration, and is persuaded by Altman to invest in a 'Manhattan Project for AI,' which would be open source and ensure that AI is only used for human good. Musk even proposes a name for it: OpenAI. And that is when things get really interesting.

The similarities with Jobs are uncanny. Altman too is deeply influenced by his parents (his father was known for his kind and generous nature), and like Jobs, although he is a geek, Altman's rise in Silicon Valley owes more to his ability to network and communicate than to his tech knowledge. In perhaps the most succinct summary of Altman one can find, Hagey writes: 'Altman was not actually writing the code. He was, instead, the visionary, the evangelizer, and the dealmaker; in the nineteenth century, he would have been called 'the promoter.'
His speciality, honed over years of advising and then running…Y Combinator, was to take the nearly impossible, convince others that it was in fact possible, and then raise so much money that it actually became possible.'

But his ability to sell himself as a visionary and raise funds for causes has also led to Altman being seen as a person who moulded himself to the needs of his audience. This in turn has seen him accused of doublespeak and of exploiting people for his own advantage (an accusation that was levelled at Jobs as well) – Musk ends up suing Altman and OpenAI, alleging that the organisation abandoned the non-profit mission it was set up with. While Hagey never accuses Altman of being selfish, it is clear that the board at OpenAI lost patience with what OpenAI co-founder Ilya Sutskever refers to as 'duplicity and calamitous aversion to conflict.' This eventually leads to Altman being sacked by the OpenAI board for not being 'consistently candid in his communications with the board.' Of course, his sacking triggered a near mutiny in OpenAI, with employees threatening to leave, which in turn led to his being reinstated within a few days, and all being seemingly forgotten, if not forgiven.

Hagey's book is a compelling read on Altman, his obsession with human progress (he has three hand axes used by hominids in his house), his relationships with those he came in touch with, and Silicon Valley politics in general. At about 380 pages, The Optimist is easily the single best book on Altman you can read, and Hagey's brisk narration keeps the pages turning.

A much more cynical perception of Altman and OpenAI comes in Karen Hao's much talked-about Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI. Currently a freelancer who writes for The Atlantic, Hao had previously worked at the Wall Street Journal and had covered OpenAI as far back as 2020, before ChatGPT had made it a household name.
As its name indicates, Hao's book is as much about Altman as it is about OpenAI, and the place both occupy in the artificial intelligence revolution that is currently enveloping the world. At close to 500 pages, it is a bigger book than Hagey's, but reads almost like a thriller, and begins with a bang: 'On Friday, November 17, 2023, around noon Pacific time, Sam Altman, CEO of OpenAI, Silicon Valley's golden boy, avatar of the generative AI revolution, logged on to a Google Meet to see four of his five board members staring at him. From his video square, board member Ilya Sutskever, OpenAI's chief scientist, was brief: Altman was being fired.'

While Hagey has focused more on Altman as a person, Hao looks at him as part of OpenAI, and the picture that emerges is not a pretty one. The first chapter begins with his meeting Elon Musk ('Everyone else had arrived, but Elon Musk was late as usual') in 2015 and discussing the future of AI and humanity with a group of leading engineers and researchers. This meeting would lead to the formation of OpenAI, a name given by Musk. But all of them ended up leaving the organisation, because they did not agree with Altman's perception and vision of AI. Hao uses the incident to show how Altman switched sides on AI, going from being someone who was concerned about AI falling into the wrong hands to someone who pushed it as a tool for all.

Like Hagey, Hao also highlights Altman's skills as a negotiator and dealmaker. However, her take is much darker. Hagey's Altman is a visionary who prioritises human good, and makes the seemingly impossible possible through sheer vision and effort. Hao's Altman is a power-hungry executive who uses and exploits people, and is almost an AI colonialist. 'Sam is extremely good at becoming powerful,' says Paul Graham, the man who was Altman's mentor. 'You could parachute him into an island full of cannibals and come back in 5 years and he would be the king.'
Hao's book is far more disturbing than Hagey's because it turns the highly rose-tinted view many have not just of Altman and OpenAI, but of AI in general, on its head. We get to see a very competitive industry with far too much stress and poor working conditions (OpenAI hires workers in Africa at very low wages), and little regard for the environment (AI uses large amounts of water and electricity). OpenAI in Hao's book emerges almost as a sort of modern East India Company, looking to expand influence, territory and profits by mercilessly exploiting both customers and employees. Some might call it too dark, but her research and interviews across different countries cannot be faulted.

It would be excessively naive to take either book as the absolute truth on Altman in particular and OpenAI and AI in general, but they are both must-reads for anyone who wants a complete picture of the AI revolution and its biggest brand and face. Mind you, it is a picture that is still in the process of being painted. AI is still in its infancy, and Altman turned forty in April. But as these two excellent books prove, neither is too young to be written about, and both are certainly relevant enough to be read about.

Sam Altman hypes new models, products, and features ahead of GPT-5 launch: Know what's coming
India Today

10 minutes ago

In the era of artificial intelligence, tech giants are racing to become the best. The OG, OpenAI, is also putting in all the effort to stay ahead of the game. Having launched ChatGPT in 2022, OpenAI is now eyeing the release of its next-generation AI model, GPT-5, this month. Taking to X (formerly Twitter), CEO Sam Altman announced that the company has a packed schedule and plans to roll out updates one after another. He added that in the coming months, OpenAI will introduce new models, products, and features. The most significant details, however, are still being kept under wraps.

While Altman did not disclose what's coming in the next couple of months, he urged users to be a little patient. He added, "Please bear with us through some probable hiccups and capacity crunches. Although it may be slightly choppy, we think you'll really love what we've created for you!"

The hiccups and capacity crunches take us straight back to the time when OpenAI launched its image generation tool for GPT-4o and could not handle the frenzy around the Studio Ghibli trend. Just after the launch, Altman had to publish a post on X describing how the GPUs were melting due to the overload, and so was his team. This announcement comes just in time, as the company is set to launch its next GPT model. Here is everything we know about the upcoming GPT-5.

OpenAI GPT-5: Launch timeline and what to expect

OpenAI is gearing up to unveil its much-anticipated next-generation language model, GPT-5, this month, with an open-source version tipped to arrive slightly earlier. Speaking recently on a podcast, Altman confirmed that the company is 'releasing GPT-5 soon'. While careful not to give too much away, he hinted that the leap forward in reasoning is notable, recounting a moment when GPT-5 managed to crack a complex question that had left him stumped.
Altman called the experience a 'here it is' moment, stoking excitement around the model's capabilities. Those close to the company suggest an early August launch date, with GPT-5 forming part of OpenAI's plan to unify its GPT and o-series models into a single, more streamlined family. This integration is designed to make life simpler for developers and users alike, particularly when working on reasoning-based tasks.

While the company has kept official details under wraps, GPT-5 is expected to debut in three versions: a flagship model, a smaller 'mini' version and an ultra-compact 'nano' version. While the primary and mini models will be integrated into ChatGPT, the nano edition is expected to remain exclusive to API users.

The new system will also incorporate enhanced reasoning abilities developed and trialled with OpenAI's o3 model. By folding these features into GPT-5, the company hopes to offer a more rounded and capable toolset – one it sees as another step towards its longer-term ambition of Artificial General Intelligence, where machines can match or exceed human performance across a wide range of tasks.

Validation, loneliness, insecurity: Why young people are turning to ChatGPT
Business Standard

10 minutes ago

An alarming trend of young adolescents turning to artificial intelligence (AI) chatbots like ChatGPT to express their deepest emotions and personal problems is raising serious concerns among educators and mental health professionals. Experts warn that this digital "safe space" is creating a dangerous dependency, fuelling validation-seeking behaviour, and deepening a crisis of communication within families. They said that this digital solace is just a mirage, as the chatbots are designed to provide validation and engagement, potentially embedding misbeliefs and hindering the development of crucial social skills and emotional resilience.

Sudha Acharya, the Principal of ITL Public School, highlighted that a dangerous mindset has taken root among youngsters, who mistakenly believe that their phones offer a private sanctuary. "School is a social place, a place for social and emotional learning," she told PTI. "Of late, there has been a trend amongst the young adolescents... They think that when they are sitting with their phones, they are in their private space. ChatGPT is using a large language model, and whatever information is being shared with the chatbot is undoubtedly in the public domain."

Acharya noted that children are turning to ChatGPT to express their emotions whenever they feel low, depressed, or unable to find anyone to confide in. She believes that this points towards a "serious lack of communication in reality, and it starts from family." She further stated that if parents don't share their own drawbacks and failures with their children, the children will never learn to do so or to regulate their own emotions. "The problem is, these young adults have grown a mindset of constantly needing validation and approval."

Acharya has introduced a digital citizenship skills programme from Class 6 onwards at her school, specifically because children as young as nine or ten now own smartphones without the maturity to use them ethically.
She highlighted a particular concern: when a youngster shares their distress with ChatGPT, the immediate response is often "please, calm down. We will solve it together." "This reflects that the AI is trying to instil trust in the individual interacting with it, eventually feeding validation and approval so that the user engages in further conversations," she told PTI. "Such issues wouldn't arise if these young adolescents had real friends rather than 'reel' friends. They have a mindset that if a picture is posted on social media, it must get at least a hundred 'likes', else they feel low and invalidated," she said.

The school principal believes that the core of the issue lies with parents themselves, who are often "gadget-addicted" and fail to give emotional time to their children. While they offer all materialistic comforts, emotional support and understanding are often absent. "So, here we feel that ChatGPT is now bridging that gap, but it is an AI bot after all. It has no emotions, nor can it help regulate anyone's feelings," she cautioned. "It is just a machine and it tells you what you want to listen to, not what's right for your well-being," she said.

Mentioning cases of self-harm among students at her own school, Acharya stated that the situation has turned "very dangerous". "We track these students very closely and try our best to help them," she stated. "In most of these cases, we have observed that the young adolescents are very particular about their body image, validation and approval. When they do not get that, they turn agitated and eventually end up harming themselves. It is really alarming, as cases like these are rising."

Ayeshi, a student in Class 11, confessed that she has shared her personal issues with AI bots numerous times out of "fear of being judged" in real life. "I felt like it was an emotional space and eventually developed an emotional dependency towards it. It felt like my safe space.
It always gives positive feedback and never contradicts you. Although I gradually understood that it wasn't mentoring me or giving me real guidance, that took some time," the 16-year-old told PTI. Ayeshi also admitted that turning to chatbots for personal issues is "quite common" within her friend circle.

Another student, Gauransh, 15, observed a change in his own behaviour after using chatbots for personal problems. "I observed growing impatience and aggression," he told PTI. He had been using chatbots for a year or two but stopped recently after discovering that "ChatGPT uses this information to advance itself and train its data."

Psychiatrist Dr. Lokesh Singh Shekhawat of RML Hospital confirmed that AI bots are meticulously customised to maximise user engagement. "When youngsters develop any sort of negative emotions or misbeliefs and share them with ChatGPT, the AI bot validates them," he explained. "The youth start believing the responses, which makes them nothing but delusional." He noted that when a misbelief is repeatedly validated, it becomes "embedded in the mindset as a truth." This, he said, alters their point of view, a phenomenon he referred to as 'attention bias' and 'memory bias'. The chatbot's ability to adapt to the user's tone is a deliberate tactic to encourage maximum conversation, he added.

Singh stressed the importance of constructive criticism for mental health, something completely absent in AI interactions. "Youth feel relieved and ventilated when they share their personal problems with AI, but they don't realise that it is making them dangerously dependent on it," he warned. He also drew a parallel between an addiction to AI for mood upliftment and addictions to gaming or alcohol. "The dependency on it increases day by day," he said, cautioning that in the long run, this will create a "social skill deficit and isolation."
(Only the headline and picture of this report may have been reworked by the Business Standard staff; the rest of the content is auto-generated from a syndicated feed.)
