
Latest news with #AlexNet

Ilya Sutskever Warns of an Unpredictable AI Future: "It's Going to Be Unimaginable"

Hans India

30-06-2025


Ilya Sutskever, co-founder and former chief scientist of OpenAI, has raised a powerful caution about the future of artificial intelligence, calling it 'extremely unpredictable and unimaginable.' In a recent interview with The Open University of Israel, Sutskever described a near future shaped by rapidly evolving AI that could outpace human understanding. 'AI is going to be both extremely unpredictable and unimaginable,' he said, emphasizing that the moment AI begins to enhance itself could trigger a cascade of developments that humanity may not be able to steer.

Despite acknowledging the risks, Sutskever remains optimistic about the possibilities. 'If the AI became capable enough, we'll have incredible health care,' he noted. He believes that such advancements could potentially eradicate diseases and significantly extend human life.

The AI trailblazer was speaking shortly after receiving an honorary degree from the university. Reflecting on his journey into the field, he recalled teaching himself complex subjects during his school years. 'I just read slowly and carefully until I understood,' he said. After moving to Toronto, he bypassed high school entirely, heading straight to the University of Toronto to study under legendary AI researcher Geoffrey Hinton—'the place to be,' he remembered.

Sutskever's early contributions include AlexNet, the breakthrough neural network that revolutionized AI. That success led him through major industry milestones, from launching a startup acquired by Google to co-founding OpenAI with a mission to develop impactful AI with a team of distinguished minds. He described current AI capabilities as 'evocative,' suggesting today's systems offer glimpses into a future full of potential. 'We have a brain, the brain is a biological computer, so why can't a digital computer, a digital brain, do the same things?' he reasoned.
When asked how close we are to that future, Sutskever predicted that a leap into artificial superintelligence could arrive in as little as 'three, five, maybe ten years.' After that, the pace of innovation might become 'really extremely fast for some time at least,' he added. He stressed that this technological shift is inevitable: 'Whether you like it or not, your life is going to be affected by AI to a great extent.'

Addressing graduates, Sutskever advised embracing the present and letting go of past regrets. 'It's just so much better and more productive to say, 'Okay, things are the way they are, what's the next best step?'' His words carried additional weight, considering his central role in OpenAI's 2023 leadership shake-up. Sutskever was part of the board that unexpectedly removed CEO Sam Altman, a decision he later regretted. Altman was reinstated within days, while Sutskever departed the company six months later to start a new AI lab focused on building safe superintelligence.

He ended his speech with a reflection on the unprecedented nature of our times. 'We all live in the most unusual time ever,' he said. 'And the reason it's true this time is because of AI.'

OpenAI co-founder says AI is going to be extremely unpredictable and unimaginable

India Today

30-06-2025


Artificial intelligence may still be imperfect today, but Ilya Sutskever, co-founder and former chief scientist at OpenAI, believes it is only the beginning of a future that could quickly become unpredictable and unimaginable. Speaking in a recent video interview with The Open University of Israel, Sutskever said that the rapid development of AI systems could lead to a tipping point. Once AI begins to improve itself, the pace of progress might spiral beyond human control or comprehension. 'AI is going to be both extremely unpredictable and unimaginable,' he said.

While he acknowledged the risks, Sutskever also expressed optimism about the technology's potential to transform the world. 'If the AI became capable enough, we'll have incredible health care,' he said, adding that diseases could be cured and human lifespans extended.

His comments came shortly after he accepted an honorary degree from The Open University, where he reflected on his personal journey into artificial intelligence. He described how, as an eighth-grade student, he taught himself advanced topics simply by reading slowly and carefully until he understood them.

After relocating to Toronto, Sutskever made an unusual choice: he skipped completing high school and instead transferred directly to the University of Toronto to study under AI pioneer Geoffrey Hinton. 'The place to be,' he recalled. This passion for learning led him to help develop AlexNet, a groundbreaking neural network that reshaped the field of AI. That success caught the attention of major tech companies, eventually leading Sutskever and his collaborators to form a startup, later acquired by Google. His next move was co-founding OpenAI, driven by a desire to build something meaningful 'with all these illustrious people.'

"Ilya Sutskever says AI could cure disease, extend life, and accelerate science beyond … if it can do that, what else can it do? The problem with AI is that it is so powerful. It can also do everything. We don't know what's coming. We must prepare, together." — vitrupo (@vitrupo), June 28, 2025

In his recent remarks, Sutskever stressed how AI is already capable of surprising feats, calling its current state 'evocative'. He said that AI is already powerful enough to hint at vast possibilities, but not yet fully realised. He said AI systems would eventually be able to do everything that humans can do, and laid out his reasoning with a simple comparison: 'We have a brain, the brain is a biological computer, so why can't a digital computer, a digital brain, do the same things?'

When pressed on how soon such a future might arrive, Sutskever estimated a breakthrough into true superintelligence could happen in 'three, five, maybe ten years.' What comes after, he said, is unclear. 'The rate of progress will become really extremely fast for some time at least,' he added. This future, he said, is unavoidable: 'Whether you like it or not, your life is going to be affected by AI to a great extent.'

Sutskever also shared advice for the graduating class, encouraging them to focus on the present instead of dwelling on past mistakes. 'It's so easy to think, 'Oh, some bad past decision or bad stroke of luck,'' he said. 'It's just so much better and more productive to say, 'Okay, things are the way they are, what's the next best step?''

His words held deeper meaning given his own role in the surprise ousting of OpenAI CEO Sam Altman in late 2023. Sutskever was part of the board that removed Altman, only to later express deep regret and join the call for his reinstatement. Altman returned within days, and Sutskever left the company six months later to launch a new AI lab focused on building 'safe superintelligence.'

Returning to his academic roots, Sutskever told graduates that the age of AI is unlike any other moment in history. 'We all live in the most unusual time ever,' he said. 'And the reason it's true this time is because of AI.'

OpenAI co-founder wanted a 'doomsday bunker' for the ChatGPT team, and why CEO Sam Altman is the reason behind it

Time of India

27-05-2025


Former OpenAI chief scientist and co-founder Ilya Sutskever told his research team in 2023 that the company would need to build a protective bunker, often known as a 'doomsday bunker,' before releasing artificial general intelligence (AGI), according to new revelations from an upcoming book about the AI company's internal turmoil. "We're definitely going to build a bunker before we release AGI," Sutskever declared during a 2023 meeting with OpenAI scientists, months before his departure from the company. When pressed about the seriousness of his proposal, he assured colleagues that bunker entry would be "optional."

The startling disclosure comes from excerpts of "Empire of AI," a forthcoming book by former Wall Street Journal correspondent Karen Hao based on interviews with 90 current and former OpenAI employees. The book details the dramatic November 2023 boardroom coup that briefly ousted CEO Sam Altman, with Sutskever playing a central role in the failed takeover.

Sutskever, who co-created the groundbreaking AlexNet in 2012 alongside AI pioneer Geoff Hinton, believed his fellow researchers would require protection once AGI was achieved. He reasoned that such powerful technology would inevitably become "an object of intense desire for governments globally."

What made the OpenAI co-founder want a 'doomsday bunker'

Sutskever and others worried that CEO Altman's focus on commercial success was compromising the company's commitment to developing AI safely. These tensions were exacerbated by ChatGPT's unexpected success, which unleashed a "funding gold rush" that safety-minded Sutskever could no longer control. "There is a group of people—Ilya being one of them—who believe that building AGI will bring about a rapture," one researcher told Hao. "Literally a rapture." This apocalyptic mindset partially motivated Sutskever's participation in the board revolt against Altman.
However, the coup collapsed within a week, leading to Altman's return and the eventual departure of Sutskever and other safety-focused researchers. The failed takeover, now called "The Blip" by insiders, left Altman more powerful than before while driving out many of OpenAI's safety experts who were aligned with Sutskever's cautious approach.

Since leaving OpenAI, Sutskever has founded Safe Superintelligence Inc., though he has declined to comment on his previous bunker proposals. His departure represents a broader exodus of safety-focused researchers who felt the company had abandoned its original mission of developing AI that benefits humanity broadly, rather than pursuing rapid commercialization.

The timing of AGI remains hotly debated across the industry. While Altman recently claimed AGI is possible with current hardware, Microsoft AI CEO Mustafa Suleyman disagrees, predicting it could take up to 10 years to achieve. Google leaders Sergey Brin and DeepMind CEO Demis Hassabis see AGI arriving around 2030. However, AI pioneer Geoffrey Hinton warns there's no consensus on what AGI actually means, calling it "a serious, though ill-defined, concept." Despite disagreements over definitions and timelines, most industry leaders now view AGI as an inevitability rather than a possibility.

Machine learning can spark many discoveries in science and medicine

Indian Express

29-04-2025


This new weekly column seeks to bring science into view — its ideas, discoveries, and debates — every Tuesday. We'll journey through the cosmos, across the quantum world, and alongside the tools that shape our understanding of reality.

We may be living in a golden age of discovery — not just because we know more than ever before, but because the very way we do science is undergoing a profound transformation. There will soon be widespread methods for the prediction of sepsis or diabetic retinopathy, or for the early detection of Alzheimer's. There will be custom-made drugs and treatments that take into account your age, gender and genetic type. In fact, the developments have been so rapid and extraordinary that some have predicted the end of conventional disease, as we know it, within a decade. Seasonal rainfall and cyclones will be predicted with more accuracy. Even before new drugs are synthesised, computers will figure out how effective they could be.

Why is scientific discovery changing?

Throughout most of human scientific history, discovery was driven by patient human effort. Data was precious, experiments were hard-won, and scientists would painstakingly design algorithms — fitting functions, solving equations, building models — to extract insights. The amount of data available was modest, and the number of researchers able to work on it was sufficient. In that world, human ingenuity could keep pace with information.

Today, that balance has broken. Across fields, the volume of data has exploded. Telescopes generate terabytes nightly. Genome sequencers run around the clock. Simulations churn out petascale outputs. Hardware — both observational and computational — has advanced dramatically. But human attention and the number of scientists have not scaled in the same way. Algorithms hand-crafted by experts that require constant tuning are no longer sufficient when data volumes dwarf our collective capacity to engage with them manually.
Remarkably, just as this problem became acute, machine learning rose to meet it. Though the foundations of artificial intelligence stretch back decades, it is only in the past ten years — and especially the past five — that self-learning algorithms have matured into powerful and scalable scientific tools. The coincidence is striking: at the very moment that science risked drowning in its own data, machines emerged that could swim.

Machine learning as a widely adopted method

The rise of these algorithms is itself a story of convergence. Until the early 2010s, computers recognised patterns only when engineers wrote explicit rules. That changed with two watershed moments. First, a public contest called the ImageNet challenge provided a million labelled photographs for entrants to compete on. One entrant, a deep neural network dubbed AlexNet, learnt to identify objects by tuning its internal connections through trial and error on graphics processors originally built for video games. Without any hand-coded feature detectors, AlexNet halved the error rate of all previous systems. This proved that with enough data and compute, machines could learn complex patterns on their own.

Then in 2016, DeepMind's AlphaGo – designed to play the ancient board game Go – demonstrated the power of reinforcement learning, an approach where a system improves by playing repeatedly and rewarding itself for wins. In a historic five-game match, AlphaGo defeated world champion Lee Sedol, surprising professionals by playing sequences of moves never before seen. In Go, the possible board configurations exceed those of chess by orders of magnitude. After Game Two's unexpected 'Move 37', Lee admitted, 'I am speechless,' a testament to the machine's capacity to innovate beyond human intuition.

Breakthroughs across disciplines

This convergence has opened the door to breakthroughs across disciplines. In biology, the protein-folding problem exemplifies the impact.
A typical protein is a chain of 200–300 amino acids that can fold into an astronomical number of shapes, yet only one produces the correct biological function. Experimental methods to determine these structures can take months or fail outright. In 2020, DeepMind's AlphaFold2 changed that. Trained on decades of known protein structures and sequence data, it now predicts three-dimensional shapes in seconds with laboratory-level accuracy. Such accuracy accelerates drug discovery by letting chemists model how candidate molecules fit into their targets before any synthesis. Enzyme engineers can design catalysts for sustainable chemistry, and disease researchers can understand how mutations disrupt function. In recognition of this leap, the 2024 Nobel Prize in Chemistry was awarded to Demis Hassabis, John Jumper, and David Baker.

Machine learning has since become routine in fields ranging from chemistry and astronomy to genomics, materials science, and high-energy physics, where it mines vast datasets for insights no human could extract unaided. Beyond the power of the technique itself, its reach in modern society can be attributed in part to the democratisation of software tools such as PyTorch and TensorFlow, and to the many online courses and tutorials freely available to the public.

Can machine learning replace scientists?

At present, the answer is no. The imagination required to frame the right questions, the intuition to know when a result matters, and the creativity to connect diverse ideas remain uniquely human strengths. Machine learning models excel at finding patterns but rarely explain why those patterns exist. Yet this may not be a permanent limitation. In time, systems could be trained not only on raw data but on the entire scientific literature — the published papers, reviews, and textbooks that embody human understanding.
One can imagine, perhaps within decades, an AI that reads articles, extracts key concepts, identifies open questions, analyses new experiments, and even drafts research papers: a 'full-stack scientist' handling the loop from hypothesis to publication autonomously. We are not there yet. But we are laying the foundations. Today's scientific machine learning is about augmentation — extending our reach, accelerating our pace, and occasionally surprising us with patterns we did not think to look for. As more of science becomes algorithmically accessible, the frontier will be defined not by what we can compute but by what we can imagine.
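The "tuning of internal connections through trial and error" described in this column can be made concrete with a toy sketch. Below is a hypothetical illustration, not AlexNet's actual code: a single artificial neuron learning the logical OR function by gradient descent, nudging each weight after every labelled example in the direction that reduces its prediction error.

```python
import math
import random

def sigmoid(z):
    """Squash a raw score into a probability between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-z))

# Labelled examples: (inputs) -> target, here the logical OR function.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

random.seed(0)
w = [random.uniform(-1, 1) for _ in range(2)]  # connection weights
b = 0.0                                        # bias term
lr = 1.0                                       # learning rate

# Trial and error: predict, measure the error, and nudge each weight
# slightly so the next prediction is a little less wrong.
for _ in range(2000):
    for (x1, x2), target in data:
        pred = sigmoid(w[0] * x1 + w[1] * x2 + b)
        err = pred - target          # gradient of the loss w.r.t. the score
        w[0] -= lr * err * x1
        w[1] -= lr * err * x2
        b -= lr * err

predictions = [round(sigmoid(w[0] * x1 + w[1] * x2 + b))
               for (x1, x2), _ in data]
print(predictions)  # the neuron has learned to reproduce the labels
```

AlexNet applied the same principle at vastly larger scale: roughly 60 million weights instead of three, adjusted over millions of images on GPUs rather than four hand-written examples.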

CHM Makes AlexNet Source Code Available to the Public

Associated Press

20-03-2025


Mountain View, California, March 20, 2025 (GLOBE NEWSWIRE) -- In partnership with Google, the Computer History Museum (CHM), the leading museum exploring the history of computing and its impact on the human experience, today announced the public release and long-term preservation of the source code for AlexNet, the neural network that kickstarted today's prevailing approach to AI.

'Google is delighted to contribute the source code for the groundbreaking AlexNet work to the Computer History Museum,' said Jeff Dean, chief scientist, Google DeepMind and Google Research. 'This code underlies the landmark paper 'ImageNet Classification with Deep Convolutional Neural Networks,' by Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton, which revolutionized the field of computer vision and is one of the most cited papers of all time.' For more information about the release of this historic source code, visit CHM's blog post.

By the late 2000s, Hinton's graduate students at the University of Toronto were beginning to use graphics processing units (GPUs) to train neural networks for image recognition tasks, and their success suggested that deep learning could be a solution to general-purpose AI. Sutskever, one of the students, believed that the performance of neural networks would scale with the amount of data available, and the arrival of ImageNet provided the opportunity. Completed in 2009, ImageNet was a dataset of images developed by Stanford professor Fei-Fei Li that was larger than any previous image dataset by several orders of magnitude.

In 2011, Sutskever persuaded Krizhevsky, a fellow graduate student, to train a neural network for ImageNet. With Hinton serving as faculty advisor, Krizhevsky did so on a computer with two NVIDIA cards. Over the course of the next year, he continuously refined and retrained the network until it achieved performance superior to its competitors. The network would ultimately be named AlexNet, after Krizhevsky.
In describing the AlexNet project, Hinton told CHM, 'Ilya thought we should do it, Alex made it work, and I got the Nobel Prize.' Before AlexNet, very few machine learning researchers used neural networks. After it, almost all of them would. Google eventually acquired the company started by Hinton, Krizhevsky and Sutskever, and a Google team led by David Bieber worked with CHM for five years to secure its release to the public.

About CHM Software Source Code

The Computer History Museum has the world's most diverse archive of software and related material. The stories of software's origins and impact on the world provide inspiration and lessons for the future to global audiences—including young coders and entrepreneurs. The Museum has released other historic source code such as Apple II DOS, IBM APL, Apple MacPaint and QuickDraw, Apple Lisa, and Adobe Photoshop. Visit our website to learn more.

About CHM

The Computer History Museum's mission is to decode technology—the computing past, digital present, and future impact on humanity. From the heart of Silicon Valley, we share insights gleaned from our research, our events, and our incomparable collection of computing artifacts and oral histories to convene, inform, and empower people to shape a better future.

Carina Sweet
Computer History Museum
(650) 810-1059
[email protected]
