
Latest news with #SamAltman

Sam Altman says AI now needs new hardware: Here's what it means for the future of learning

Time of India

17 minutes ago


In a recent revelation that marks a major turning point in the AI conversation, OpenAI CEO Sam Altman has declared that today's computers are no longer ideal for the kind of artificial intelligence we need going forward. While much of the world is still racing to keep up with ChatGPT and similar software tools, Altman is already thinking beyond screens, apps, and cloud servers. He envisions a 'third device', something entirely new, built from the ground up for AI. What makes this shift especially significant is that it isn't just about improving technology; it's about reimagining the way we interact with machines. These next-gen AI devices, Altman believes, will be deeply integrated into our daily lives, capable of understanding context, emotions, and personal preferences. And nowhere might this transformation be more profound than in education.

How students could learn with AI-first devices

If Altman's vision materializes, the traditional classroom could soon look and feel very different. Instead of learning through shared tablets or static digital lessons, students might have personal AI companions: wearable or portable devices that track their attention, understand their learning patterns, and offer real-time feedback. These AI-native tools would go beyond what current edtech platforms can do. They wouldn't just deliver content; they'd interpret emotional cues, detect confusion or boredom, and adapt instruction on the spot. One student might need a visual breakdown of a math problem, while another might benefit from a short quiz or verbal explanation, and the AI would know the difference without being prompted.

For teachers, this opens up entirely new possibilities. With a classroom full of AI-assisted learners, educators could get data-driven insights into how students are progressing and where they're struggling, allowing them to focus more on mentorship, creativity, and human connection.

What's promising about this vision

At the core of this evolution is the idea of personalization, long considered the holy grail of education. AI-powered hardware could finally make it possible to tailor learning to each student's pace, style, and needs. Altman also touched on an important idea: trust. People tend to trust AI more when it truly knows them, when it feels like an extension of their thought process. For students, this could foster a sense of comfort and confidence, especially for those who may be shy to speak up in class or who need repeated reinforcement to grasp a concept. In this ideal version of the future, AI doesn't replace teachers: it amplifies them. It reduces the pressure of one-size-fits-all education and opens up more space for meaningful learning experiences.

The concerns we can't ignore

Still, Altman's bold vision brings with it a wave of tough questions, particularly around equity and privacy. Who will have access to these AI-native devices? If they become central to education, how do we ensure they don't widen the digital divide? There's also the matter of student data. For AI to become hyper-personalized, it needs deep and constant input. How will schools protect sensitive information like learning difficulties, emotional patterns, and behavioral cues?

Educator readiness is another hurdle. Many teachers are only just becoming comfortable with AI-enhanced grading tools or lesson-planning software. Managing a classroom filled with real-time, adaptive AI hardware will require entirely new training, as well as a shift in mindset: from being the central information source to acting more like a learning strategist or AI collaborator.

Will schools be ready for the next leap?

Altman's prediction isn't just about technology; it's a cultural and institutional challenge. If this shift happens, schools and colleges will need to rethink how they fund infrastructure, train staff, design classrooms, and even define success. It also raises an important philosophical question: should AI know students this deeply? The potential for insight is immense, but so is the responsibility.

A future that's closer than it seems

As futuristic as Altman's ideas sound, they're not far-fetched. The pace of AI development over the past two years has outstripped many expert predictions. What was once speculative, like generative AI writing essays or passing standardized exams, is now routine. If AI-native hardware becomes real in the next few years, education may be one of the first sectors to feel its impact. The question is: will we be ready? Sam Altman has thrown down a bold marker for where AI is headed. Whether classrooms will follow, or lead, remains to be seen.

Meta is offering multimillion-dollar pay for AI researchers, but not $100M ‘signing bonuses'

TechCrunch

6 hours ago


Meta is definitely offering hefty multimillion-dollar pay packages to AI researchers when wooing them to its new Superintelligence Lab. But no one is really getting a $100 million 'signing bonus,' according to a poached researcher and comments from a leaked internal meeting.

During a company-wide all-hands meeting on Thursday leaked to The Verge, some of Meta's top executives were asked about the bonuses that OpenAI CEO Sam Altman said Meta had offered to top researchers. Meta's CTO Andrew Bosworth implied that only a few people being recruited for very senior leadership roles may have been offered that kind of money, but clarified that 'the actual terms of the offer' wasn't a 'sign-on bonus. It's all these different things.' In other words, not an instant chunk of cash.

Tech companies typically offer the biggest chunks of their pay to senior leaders in restricted stock unit (RSU) grants, dependent on either tenure or performance metrics. A four-year total pay package worth about $100 million for a very senior leader is not inconceivable for Meta. Most of Meta's named officers, including Bosworth himself, have earned total compensation of between $20 million and nearly $24 million per year for years.

Altman was 'suggesting that we're doing this for every single person,' Bosworth reportedly said at the meeting. 'Look, you guys, the market's hot. It's not that hot.' (Meta did not immediately respond to our request for comment.)

On Thursday, researcher Lucas Beyer confirmed he was leaving OpenAI to join Meta along with the two others who led OpenAI's Zurich office. He tweeted: '1) yes, we will be joining Meta. 2) no, we did not get 100M sign-on, that's fake news.' (Beyer politely declined to comment further on his new role to TechCrunch.) Beyer's expertise is in computer vision AI. That aligns with what Meta is pursuing: entertainment AI, rather than productivity AI, Bosworth reportedly said in that meeting. Meta already has a stake in the ground in that area with its Quest VR headsets and its Ray-Ban and Oakley AI glasses.

Still, some of the people Meta is trying to nab are indeed worthy of big pay packages in this tight AI talent marketplace. As TechCrunch was first to report, Meta has hired OpenAI's Trapit Bansal, known for his groundbreaking work on AI reasoning models. He had worked at OpenAI since 2022. Certainly, Scale co-founder and CEO Alexandr Wang is getting a healthy chunk of cash, likely more than $100 million, as part of Meta's deal to buy 49% ownership of his company. As we previously reported, the $14 billion Meta is paying is being distributed to shareholders as a cash dividend. Wang is almost certainly a major shareholder in Scale entitled to those dividends.

Still, while Meta isn't handing out $100 million willy-nilly, it is spending big to hire in AI. One investor told TechCrunch that he saw an AI researcher get, and turn down, an $18 million job offer from Meta. That person took a smaller, but still healthy, offer from a buzzier AI startup: Mira Murati's Thinking Machines Lab.
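The distinction Bosworth draws, a multi-year package versus an upfront bonus, comes down to simple arithmetic. A minimal sketch, using hypothetical figures not taken from the article: a "$100 million offer" spread over a four-year vesting period annualizes to roughly the same range as the $20 million to $24 million per year cited for Meta's named officers.

```python
# Hypothetical illustration: figures below are assumptions for the sake
# of the arithmetic, not reported numbers from the article.

def annualized_package(total_usd: float, years: int,
                       signing_bonus: float = 0.0) -> float:
    """Average per-year value of a pay package after any upfront bonus.

    Models the common structure described in the article: most of the
    value vests over time (e.g. as RSUs) rather than arriving as cash
    on day one.
    """
    return (total_usd - signing_bonus) / years

# A four-year $100M package with an assumed $5M upfront component
# leaves $95M vesting over four years, i.e. $23.75M per year.
per_year = annualized_package(100_000_000, 4, signing_bonus=5_000_000)
print(f"${per_year:,.0f} per year")
```

The point of the sketch is that a headline "$100M" figure and a $20M-to-$24M annual compensation range can describe the same offer, depending on how the vesting is framed.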

OpenAI turns to Google's AI chips to power its products, source says

CNA

8 hours ago


OpenAI has recently begun renting Google's artificial intelligence chips to power ChatGPT and its other products, a source close to the matter told Reuters on Friday.

The ChatGPT maker is one of the largest purchasers of Nvidia's graphics processing units (GPUs), using the AI chips to train models and also for inference computing, a process in which an AI model uses its trained knowledge to make predictions or decisions based on new information.

OpenAI planned to add Google Cloud service to meet its growing needs for computing capacity, Reuters had exclusively reported earlier this month, marking a surprising collaboration between two prominent competitors in the AI sector. For Google, the deal comes as it is expanding external availability of its in-house tensor processing units (TPUs), which were historically reserved for internal use. That helped Google win customers including Big Tech player Apple as well as startups like Anthropic and Safe Superintelligence, two ChatGPT-maker competitors launched by former OpenAI leaders.

The move to rent Google's TPUs signals the first time OpenAI has used non-Nvidia chips meaningfully and shows the Sam Altman-led company's shift away from relying on backer Microsoft's data centers. It could potentially boost TPUs as a cheaper alternative to Nvidia's GPUs, according to The Information, which reported the development earlier. OpenAI hopes the TPUs, which it rents through Google Cloud, will help lower the cost of inference, according to the report.

However, Google, an OpenAI competitor in the AI race, is not renting its most powerful TPUs to its rival, The Information said, citing a Google Cloud employee. Google declined to comment, while OpenAI did not immediately respond to Reuters when contacted.

Google's addition of OpenAI to its customer list shows how the tech giant has capitalized on its in-house AI technology, from hardware to software, to accelerate the growth of its cloud business.

OpenAI turns to Google's AI chips to power its products, The Information reports

CNA

8 hours ago


OpenAI has recently begun renting Google's artificial intelligence chips to power ChatGPT and other products, The Information reported on Friday, citing a person involved in the arrangement.

The move, which marks the first time OpenAI has used non-Nvidia chips in a meaningful way, shows the Sam Altman-led company's shift away from relying on backer Microsoft's data centers, potentially boosting Google's tensor processing units (TPUs) as a cheaper alternative to Nvidia's graphics processing units (GPUs), the report said.

As one of the largest purchasers of Nvidia's GPUs, OpenAI uses AI chips to train models and also for inference computing, a process in which an AI model uses its trained knowledge to make predictions or decisions based on new information. OpenAI hopes the TPUs, which it rents through Google Cloud, will help lower the cost of inference, according to the report. However, Google, an OpenAI competitor in the AI race, is not renting its most powerful TPUs to its rival, The Information said, citing a Google Cloud employee.

Neither OpenAI nor Google immediately responded to Reuters requests for comment.

OpenAI planned to add Google Cloud service to meet its growing needs for computing capacity, Reuters had exclusively reported earlier this month, marking a surprising collaboration between two prominent competitors in the AI sector. For Google, the deal comes as it is expanding external availability of its in-house TPUs, which were historically reserved for internal use. That helped Google win customers including Big Tech player Apple as well as startups like Anthropic and Safe Superintelligence, two OpenAI competitors launched by former OpenAI leaders.
