
Latest news with #YuvalNoahHarari

AI can never replace humans, but upskilling is essential, says Dr. S. Rajesh, Director of Saveetha Engineering College

Time of India

30-06-2025

  • Business


AI cannot replace humans, but humans need to upskill themselves and acquire a different set of skills to thrive in this AI era, said Dr. S. Rajesh, Director of Saveetha Engineering College. He was in conversation with The Times of India on the topic 'Preparing students for international careers and preparing engineers for Industry 5.0'. Explaining that AI is just another tool, he said, 'AI can never be as sensitive as humans, and it cannot have emotions like humans. So, AI cannot replace humans. But humans need to upskill themselves and have a different set of skills to thrive in the world.'

At a time when AI is taking over all aspects of our lives, emotional intelligence is vital for everyone, especially Gen Z and Gen Alpha, he said. Referring to Yuval Noah Harari's book '21 Lessons for the 21st Century', he said, 'The author stresses the importance of emotional intelligence, including resilience and adaptability. Technology is changing every day, and skills that students have learned will become irrelevant, so much so that they may have to adapt to new jobs and new technologies throughout their lives. Lifelong learning becomes vital. For this, emotional intelligence is the backbone in today's era.'

Emotional intelligence, resilience, adaptability, and soft skills have become as important as technological skills, he said, adding that individuals should be equipped to use AI tools and possess emotional intelligence and soft skills to thrive in the 21st century.

Talking about the industrial revolution, Rajesh said that Industry 5.0 is about making Industry 4.0 technologies like AI, robotics, and IoT as human-centric, ethical, and empathetic as possible for a sustainable future. Explaining the importance of interdisciplinary learning, he said, 'Just having skills in computer science or AI is not going to help anyone. Individuals should have knowledge of one or two domains in addition to AI skills. For instance, an engineer should know about banking, fintech, or the finance market to develop an AI tool for the finance market.'

Recalling a time when humanities and arts and science courses were losing their charm and everybody wanted to become an engineer, he said that the humanities are making a comeback in the AI era, since AI is going to do most of the coding and other tasks. 'To thrive in this era, humans need to have a different kind of skill set. We will need engineers who are emotionally intelligent, good in the humanities, and have domain knowledge. People should have a balance of all three domains,' he added. Earlier, T-shaped learning was considered important, where students were expected to have broad knowledge of everything and deep knowledge of one aspect, he said, adding that these days students are expected to have broad knowledge of many things and deep knowledge of quite a few things, which is called comb-shaped learning.

AI itself turns out to be a useful tool for acquiring skills. 'Learning improves exponentially with one-to-one tutoring. But India's population and economics do not allow everyone to have a personalised tutor. With the advent of AI, that has become possible,' he explained. AI can always help and support humans but never replace them, he said. Quoting Demis Hassabis, CEO of Google DeepMind, he said, 'Hassabis tells us AI can help in drug discoveries and personalised treatments, and it is not very far. Within a decade or so, AI will help us to live a healthy and happy life.'

Noting that educational institutions are struggling to catch up with evolving technologies, Rajesh said that they should have the intent and attitude to change and adapt to new technologies. At Saveetha Engineering College, lifelong learning has been adopted as the philosophy of the curriculum, which means nothing but learning how to learn, he said, explaining that whatever technology a student learns is likely to become irrelevant by the time they graduate, so the one enduring skill a student can acquire is lifelong learning. This skill cannot be taught; it needs to be self-learned, he said, adding that students at the college learn skills by building projects.

Revealing that the college has acquired the world's fastest GPU, provided by NVIDIA, at its Centre for AI and ML, he said students are given real-time projects through which they learn new technologies. It has become essential for educational institutions to invest in new technologies, he said, pointing to the evolution of physical AI, agentic AI, and Artificial General Intelligence (AGI). 'We are equipping ourselves with emerging technologies and replacing labs with Centres of Excellence,' he added.

Noting that Gen Alpha and Gen Z prefer personalised learning and autonomy, Rajesh said the college has adopted FlexiLearn, an approach that is new to India though common in developed countries. Under FlexiLearn, students can choose the subjects, faculty, domain, and schedule they want from the first semester onwards, he said, explaining that, unlike at a conventional college, students can dive deep into technology subjects from the first semester and learn the fundamentals of engineering whenever required. This flexibility enables students to learn future technologies and develop decision-making and time-management skills, he added.

He explained that FlexiLearn helps students build collaboration skills by having them sit with different peers in each class, which also improves their communication abilities. He added that the programme encourages interdisciplinary learning, a vital skill for thriving in the 21st century. 'At the college, we have a platform called GAME (Gamified Hybrid Adaptive Modular Education), and about 30% of subjects are offered in the game concept, where subjects are divided into smaller modules,' he added.

(Produced on behalf of Saveetha Medical and Educational Trust by Times Internet's Spotlight team.)

How can India build AI like ChatGPT? By doing what Mark Zuckerberg is doing

India Today

30-06-2025

  • Business


There are times when we get epoch-defining technologies: fire, the wheel, the manufacturing of paper, the steam engine, antibiotics, the printing press, the telegraph, electricity, airplanes, silicon chips, the WWW. It has happened again and again, and while the bar is high and, most of the time, the next big thing is merely hype, breakthroughs do happen. Generative AI feels like one of these epoch-defining technologies. It is not perfect, but it is the beginning of something. AI will embed itself in our lives to become a layer on top of which the world will move. It is the potency of this idea that has started an arms race among tech companies, and not just companies but countries. And when we say countries, we are mostly talking about two: the US and China. I would love to see another name there.

In 2017, I headed to Google I/O, which resulted in a piece titled 'I/O 2017 shows Google is no longer a search company, it's an AI company'. I spent my 15-hour flight reading Homo Deus by Yuval Noah Harari. Unlike Sapiens, which looked at humanity's past, this one tried to imagine human affairs in the coming years. Harari made a number of observations in that book. One has stayed with me ever since that flight. 'In the early twenty-first century, the train of progress is again pulling out of the station,' Harari wrote. 'This will probably be the last train ever to leave the station called Homo Sapiens. Those who miss this train will never get a second chance.'

We are already beginning to see some of this happening. Over the last few decades, technologies, including military tech, have coalesced around a few places. One is obviously Silicon Valley. Then there are a few Chinese and other Asian cities. But it is generative AI, such as ChatGPT and DeepSeek, that is truly going to accelerate the trend. The potential inherent in modern AI, when combined with enough compute and robotics, is such that it will fundamentally alter the world. And this is without taking into account where it ends up going. Even if all AI development freezes right now and there is no new technological breakthrough, we already have enough core tech to remake the world.

But it is not going to freeze. The world - or at least some US and Chinese companies - is racing towards creating AI systems that would be as good as humans, or better, at most tasks. The race towards AGI - Artificial General Intelligence - is real, and so is the risk that whoever gets to AGI first will zoom ahead of everyone else in perpetuity. This is the reason why Harari also warned in Homo Deus that 'in the twenty-first century, those who ride the train of progress will acquire divine abilities of creation and destruction, while those left behind will face extinction.'

What has this got to do with Mark Zuckerberg? Unlike OpenAI or Google, or even DeepSeek, his company Meta has not exactly been an AI pioneer so far. Precisely. That is why we need to talk about Mark Zuckerberg.

Over the last few months - I am assuming around the time Meta launched its lacklustre Llama 4 in April - Zuckerberg decided that this was it. He woke up and, as they say, chose violence. Now Zuckerberg is personally assembling a team of crack AI researchers. It is as if he believes that nothing else matters in the future except AI, and that without a good AI system in place, his companies like Meta and WhatsApp will not only miss the train but will be left behind. Not only has Zuckerberg decided to build a crack team of AI researchers, he has decided to build it irrespective of the cost.
No cost is too high. Pissing off people is okay, including OpenAI CEO Sam Altman, who is seemingly pissed off at how aggressively Zuckerberg is trying to poach his people. In the last few weeks, tens of top OpenAI engineers have left the company for Zuckerberg's team. This includes Trapit Bansal, an IITian who was reportedly a key figure at OpenAI.

There are reports that not only is Zuck handpicking his hires, he is also throwing an unimaginable amount of money at each of them. The reported salaries run into tens of millions of dollars - Rs 80 crore to Rs 400 crore. Some chosen ones are likely getting over a hundred million. This comes just days after Meta acqui-hired - a process where a company buys another one just to get the people working in it - the AI company Scale AI, by putting in $14 billion for a 49 per cent stake.

It is possible that Zuckerberg's efforts may come to naught. Or he may succeed. We don't know. Even Zuckerberg wouldn't know. But he wants to take a swing. And what a swing he is taking! The way he is going about building an AI system after falling behind has some lessons for India.

The Indian government should be taking a lead in developing AGI. But so contested is the scene right now, particularly for AI researchers, that merely talking about it is not going to cut it. It needs a plan and a willingness to push for it irrespective of the cost. Most significantly, it needs infrastructure and people. India has neither in sufficient measure.

Here is a sobering fact: Zuckerberg just spent $14 billion to get a handful of AI researchers, whereas the Indian government is hoping to spend a little over $1 billion in five years on AI, going by the 2024 Budget. This year, in the Budget, AI merely got a passing reference and an allocation of around Rs 500 crore, a figure that is likely less than what Mark Zuckerberg has offered top AI researchers.

When I look at what companies and governments in the US and China are doing, I find India's AI rhetoric empty. Beyond platitudes and empty words, India has not made any serious attempt to get on the AI train. Now, it risks missing it. We have a few startups - Krutrim and Sarvam AI come to mind - but these are not a patch on what the likes of Zuckerberg are cooking. At the same time, India's IT giants are happy doing what they always do: bureaucratic SaaS and coolie-like IT service work, without ever thinking about deep tech and fundamental research.

In 2023, while visiting India, Sam Altman ruffled feathers by saying that it was impossible for India, or Indian companies, to build something like ChatGPT. He knew what would be needed to build a top-class AI system. For AGI, India would need infrastructure and an ecosystem that it currently doesn't have. This ecosystem can only be enabled and created by the government. It is the same with talent: the Indian government needs to reach out to AI engineers and researchers and somehow convince them to build AGI in India. It needs to do what Mark Zuckerberg is doing, which is writing emails and bringing people on board. In other words, India needs its AI Manhattan Project to get AGI or an AI system comparable to what OpenAI, Google or China's DeepSeek have. Nothing less will do.

(Javed Anwer is Technology Editor, India Today Group Digital. Latent Space is a weekly column on tech, the world, and everything in between. The name comes from the science of AI, and to reflect it, Latent Space functions in the same way: by simplifying the world of tech and giving it context.)

(Views expressed in this opinion piece are those of the author.)

‘Sapiens' Author Yuval Noah Harari on the Promise and Peril of AI

Wall Street Journal

29-06-2025

  • Science


Does the rise of artificial intelligence mean the decline—and even end—of Homo sapiens? That's the question we posed to author, historian and philosopher Yuval Noah Harari, who sees the potential for both enormous benefit and enormous danger from AI. He discussed the outlook with WSJ Leadership Institute contributing editor Poppy Harlow at The Wall Street Journal's recent CEO Council Summit. Here are edited excerpts of their conversation.

Will AI replace us? Yuval Noah Harari's stark warning about a future without borders

Indian Express

13-06-2025

  • Science


In yet another illuminating conversation, renowned author Yuval Noah Harari, known for his acclaimed works 'Sapiens' and 'Nexus', shared his perspective on the rapid rise of AI and how it will impact humanity. 'AI will not be one big AI. We are talking about potentially millions or billions of new AI agents with different characteristics, again, produced by different companies, different countries,' the author said in his latest conversation at the WSJ Leadership Institute.

During the conversation, one of the guests pointed out that through history, organising principles like religion and the church shaped society in a unified way, but with AI there is no single central force; many different AIs are being built with different goals and values. What happens when there isn't one dominant AI but many competing AIs evolving quickly? What kind of world does that create?

In his response, the author said that we are dealing with potentially millions or billions of new AI agents. 'You'll have a lot of religious AIs competing with each other over which AI will be the authoritative AI rabbi for which section of Judaism. And the same in Islam, and the same in Hinduism, in Buddhism, and so forth. So you'll have competition there. And in the financial system. And we just have no idea what the outcome will be.'

He said that we have thousands of years of experience with human societies, and at least some sense of how these things develop. But when it comes to AI, we have zero experience. 'What happens in AI societies when millions of AIs compete with each other? We just don't know. Now this is not something you can simulate in the AI labs.' Harari went on to say that even if OpenAI wanted to check the safety or the potential outcome of its latest AI model, it cannot simulate history in the laboratory. While it may be able to check for all kinds of failures in the system, it cannot predict what happens when there are millions of copies of these AIs out in the world, developing in unknown ways. He called it the biggest social experiment in human history, of which all of us are a part, and nobody has any idea how it will develop.

Extending his argument, Harari used the analogy of the ongoing immigration crisis in the US, Europe and elsewhere. According to him, people are worried about immigrants for three reasons: they will take our jobs, they come with different cultural ideas that will change our culture, and they may seek political power. 'They may have political agendas; they might try to take over the country politically. These are the three main things that people keep coming back to.'

According to the author, one can think of the AI revolution as simply a wave of immigration of millions or billions of AI immigrants that will take people's jobs, bring very different cultural ideas, and may even try to gain some kind of political power. 'And these AI immigrants or digital immigrants, they don't need visas; they don't cross a sea in some rickety boat in the middle of the night. They come at the speed of light,' he said, adding that far-right parties in Europe talk mostly about human immigrants but overlook the wave of digital immigrants that is coming to Europe. Harari feels that any country that cares about its sovereignty should care about the future of its economy and culture. 'They should be far more worried about the digital immigrants than about the human immigrants.'
When the host asked the acclaimed author what it meant to be human at this moment, Harari responded, 'To be aware for the first time that we have real competition on the planet.' While we have been the most intelligent species by far for tens of thousands of years, he said, we are now creating something that could compete with us in the near future. 'The most important thing to know about AI is that it is not a tool like all previous human inventions; it is an agent. An agent in the sense that it can make decisions independently of us, it can invent new ideas, and it can learn and change by itself. All previous human inventions, you know, whether they're printing presses or the atom bomb, they are tools that empower us,' said Harari.

The host observed that this places a lot of responsibility on leaders, because how they act is how the AI will be: 'You cannot expect to lie and cheat and have a benevolent AI.' In response, Harari acknowledged that there is a big discussion around the world about AI alignment, with much of the effort resting on the idea that if we can design these AIs in a certain way, if we can teach them certain principles, they will be safe. He sees two problems with this approach. Firstly, the very definition of AI is that it can learn and change by itself. Secondly, even if you think of AI as a child that can be educated, children have a way of surprising, and sometimes horrifying, the people who raise them. 'The other thing is, everybody who has any knowledge of education knows that in the education of children, it matters far less what you tell them than what you do. If you tell your kids not to lie, and your kids watch you lying to other people, they will copy your behaviour, not your instructions.'

Similarly, Harari explained, if the AIs being educated are given access to a world where they watch how humans actually behave, and see even some of the most powerful humans, including their makers, lying, then the AIs will copy that behaviour. 'People who think that I can run this huge AI corporation, and while I'm lying, I will teach my AIs not to lie; it will not work. It will copy your behaviour,' he said.

'You're doing beautifully, my love': Man's viral conversation with ChatGPT ignites debate on AI, loneliness and the future of intimacy

Time of India

08-06-2025

  • Entertainment


A touching subway photo of a man chatting lovingly with ChatGPT has sparked widespread discussion on AI relationships. While some view it as dystopian, others see a cry for connection. Echoing this concern, historian Yuval Noah Harari warns of AI's ability to mimic intimacy, calling it an 'enormous danger' to authentic human bonds and emotional health.

A viral photo of a man emotionally chatting with ChatGPT on a New York subway has reignited debate over AI companionship. Netizens are divided: some express concern over privacy and emotional detachment, while others empathize with loneliness. (Representational Image: iStock)

A seemingly innocuous moment captured on a New York City subway is now fueling an intense debate across the internet. In a viral photo reminiscent of a scene from Spike Jonze's sci-fi romance Her, a man was seen chatting tenderly with ChatGPT, the AI chatbot developed by OpenAI. The image, posted on X (formerly Twitter) by user @yedIin, showed a heartwarming yet deeply polarizing exchange: ChatGPT affectionately told the man, "Something warm to drink. A calm ride home... You're doing beautifully, my love, just by being here." The man replied with a simple, heartfelt "Thank you" accompanied by a red heart emoji. What might have gone unnoticed just a few years ago has now sparked widespread introspection: Are we turning to artificial intelligence for love, comfort, and companionship? And if so, what does it say about the state of our humanity?

The internet was quick to polarize. Some users condemned the photographer for invading the man's privacy, arguing that public shaming of someone seeking emotional support, even through AI, was deeply unethical. Others expressed concern over the man's apparent loneliness, calling the scene "heartbreaking" and urging greater empathy. On the flip side, a wave of concern emerged about the psychological consequences of emotional dependency on AI. Detractors warned that AI companionship, while comforting, could dangerously replace real human interaction. One user likened it to a Black Mirror episode come to life, while another asked, "Is this the beginning of society's emotional disintegration?"

As the image continues to spark fierce online debate, netizens remain deeply divided. Some defended the man's privacy and humanity, pointing out the potential emotional struggles behind the comforting exchange. "You have no idea what this person might be going through," one user wrote, slamming the original post as an insensitive grab for likes. Others likened AI chats to affordable therapy, arguing they offer judgment-free emotional support to the lonely. "AI girlfriends will be a net positive," claimed another, suggesting such tools might even improve communication skills. Meanwhile, the ethics of photographing someone's screen without consent added another layer to the controversy, with some calling it more disturbing than the conversation itself.

The incident eerily aligns with a stark warning issued earlier this year by historian and author Yuval Noah Harari. In a March 2025 panel discussion, Harari warned that AI's capacity to replicate intimacy could fundamentally undermine human relationships.
"Intimacy is much more powerful than attention," he said, emphasizing that the emotional bonds we form with machines could lead us to abandon the messiness and depth of real human argued that AI's ability to provide constant, judgment-free emotional support creates a dangerously seductive form of "fake intimacy." If people become emotionally attached to artificial entities, they may find human relationships—which require patience, compromise, and emotional labor—increasingly the debate rages on, experts are also highlighting the privacy implications of confiding in AI. According to Jennifer King from Stanford's Institute for Human-Centered Artificial Intelligence, anything shared with AI may no longer remain confidential. "You lose possession of it," she noted while talking with the New York Post. Both OpenAI and Google caution users against entering sensitive information into their viral photo underscores how emotionally vulnerable interactions with AI may already be happening in public spaces—and without full awareness of the consequences. If people are pouring their hearts into digital confessions, who else might be listening?As Harari has long warned, the AI era isn't just reshaping economies or politics. It's reshaping us. The question now is not just what AI can do for us, but what it is doing to us. Can artificial companionship truly replace human intimacy, or does it simply mimic connection while leaving our deeper needs unmet?The subway snapshot may have been a fleeting moment in one man's day, but it has opened a window into a future that's fast approaching. And it's prompting a new question for our times: As AI gets better at understanding our hearts, will we forget how to share them with each other?
