Latest news with #IlyaSutskever


Business Upturn
4 hours ago
- Business
- Business Upturn
DebitMyData™ Closes Oversubscribed Seed Round- Launches $1B Human Energy Grid Global Expansion
DebitMyData™ Logo FORT LAUDERDALE, Fla., July 22, 2025 (GLOBE NEWSWIRE) — DebitMyData™, Inc.—the powerhouse has closed a seed round at more than twice its original target. This surge of investor confidence paves the way for a bold, billion-dollar global rollout of DebitMyData™'s Human Energy Grid, setting a new standard for individual data ownership, ethical monetization, and human-centric AI innovation. Preparing to launch a U.S and global expansion round, DebitMyData™ is already attracting top-tier venture capitalists—some of whom previously backed OpenAI alumni Ilya Sutskever and Mira Murati. Their attention is now focused on founder Preska Thomas and her breakthrough vision for a decentralized, human-led future in Adtech, AI, cybersecurity, and digital sovereignty. 'We're advancing AI frameworks including Fuzzy Logic, ML, NLP, and robotic networks—but the Human Energy Grid ensures we embed ethics, skills, and human vision at the algorithmic core,' said Preska Thomas, Founder & CEO . Agentic Logos, Nodes, and Verified Digital Identity Integral to DebitMyData™ 's technology are Agentic Logos—cryptographically validated identity tools that combat fraud, impersonation, and deepfakes. Core LLM Features: Verified Ownership: Every identity is cryptographically bound to an authentic user or brand. Every identity is cryptographically bound to an authentic user or brand. Real-Time Security: Proprietary consensus mechanisms eliminate spoofing and fakes. Proprietary consensus mechanisms eliminate spoofing and fakes. Plug-and-Play APIs: Enterprises and large language models (LLMs) can easily verify and interface with Agentic Nodes. By embedding identity-driven trust into content and advertising, DebitMyData™ transforms audience engagement. Brands and individuals alike benefit from frictionless, permission-based experiences that foster credibility and prevent misuse. The Human Energy Grid: An Ethics-Powered Digital Ecosystem DebitMyData™'s signature innovation—the Human Energy Grid—places people at the center of the digital economy. Key Components: Digital Ownership: Users control and protect their digital footprints via DID-LLM (Digital Identity LLM). Users control and protect their digital footprints via DID-LLM (Digital Identity LLM). Agentic Avatars: AI agents trained and owned by users, supporting monetization through sponsorships, licensing, and personal branding. AI agents trained and owned by users, supporting monetization through sponsorships, licensing, and personal branding. Ethical AI Training: Decentralized Agentic Avatars contribute to safe, human-aligned AI development. Decentralized Agentic Avatars contribute to safe, human-aligned AI development. NFT-Backed Security: Blockchain-protected digital creations ensure transparent royalties and rights. Blockchain-protected digital creations ensure transparent royalties and rights. Quantum-Resistant Privacy: Federated learning and next-generation encryption secure all interactions. This ecosystem empowers individuals to earn from their data and digital identity, marking a shift from extractive models toward equitable participation in the digital economy. Global Expansion and Ecosystem Integration Building on its momentum, DebitMyData™ is launching a global initiative to: Open subsidiaries in the EU, Asia, and the Middle East Advance Agentic Avatar technology for LLMs, APIs, and user-controlled AI Partner with NFT platforms and creator-centric brands like AnimeGamer, MemeShorts ('The TikTok of America'), and Monetize YourSelfie The roadmap includes further integration across decentralized marketplaces for data, content, and avatar-based economies. Institutional & Government Alignment DebitMyData™ is engaged in advanced discussions with regulatory bodies, family offices, and public sector partners worldwide, reinforcing its commitment to compliance, transparency, and leadership in large-scale data solutions. Image by DebitMyData™ About DebitMyData™, Inc. DebitMyData™, Inc. enables users to reclaim, verify, and monetize their digital identities through Agentic Logos and Agentic Avatars. Its scalable platform ensures GDPR compliance and AI alignment via the Human Energy Grid and DID-LLM, meeting evolving demands in ethical AI, cybersecurity, and digital equity. 'This is our moment—not just to advance AI but to protect what makes us human. The Human Energy Grid ensures humanity stays present, empowered, and valued in the algorithms that shape the future,' said Preska Thomas, Founder & CEO. For more information, visit: Media Contact:Henry Cision(754) 315-2420 [email protected]


The Star
4 days ago
- Health
- The Star
Opinion: The human brain doesn't learn, think or recall like an AI. Embrace the difference
Recently, Nvidia founder Jensen Huang, whose company builds the chips powering today's most advanced artificial intelligence systems, remarked: 'The thing that's really, really quite amazing is the way you program an AI is like the way you program a person.' Ilya Sutskever, co-founder of OpenAI and one of the leading figures of the AI revolution, also stated that it is only a matter of time before AI can do everything humans can do, because 'the brain is a biological computer.' I am a cognitive neuroscience researcher, and I think that they are dangerously wrong. The biggest threat isn't that these metaphors confuse us about how AI works, but that they mislead us about our own brains. During past technological revolutions, scientists, as well as popular culture, tended to explore the idea that the human brain could be understood as analogous to one new machine after another: a clock, a switchboard, a computer. The latest erroneous metaphor is that our brains are like AI systems. I've seen this shift over the past two years in conferences, courses and conversations in the field of neuroscience and beyond. Words like 'training,' 'fine-tuning' and 'optimization' are frequently used to describe human behavior. But we don't train, fine-tune or optimise in the way that AI does. And such inaccurate metaphors can cause real harm. The 17th century idea of the mind as a 'blank slate' imagined children as empty surfaces shaped entirely by outside influences. This led to rigid education systems that tried to eliminate differences in neurodivergent children, such as those with autism, ADHD or dyslexia, rather than offering personalised support. Similarly, the early 20th century 'black box' model from behaviourist psychology claimed only visible behaviour mattered. As a result, mental healthcare often focused on managing symptoms rather than understanding their emotional or biological causes. And now there are new misbegotten approaches emerging as we start to see ourselves in the image of AI. Digital educational tools developed in recent years, for example, adjust lessons and questions based on a child's answers, theoretically keeping the student at an optimal learning level. This is heavily inspired by how an AI model is trained. This adaptive approach can produce impressive results, but it overlooks less measurable factors such as motivation or passion. Imagine two children learning piano with the help of a smart app that adjusts for their changing proficiency. One quickly learns to play flawlessly but hates every practice session. The other makes constant mistakes but enjoys every minute. Judging only on the terms we apply to AI models, we would say the child playing flawlessly has outperformed the other student. But educating children is different from training an AI algorithm. That simplistic assessment would not account for the first student's misery or the second child's enjoyment. Those factors matter; there is a good chance the child having fun will be the one still playing a decade from now – and they might even end up a better and more original musician because they enjoy the activity, mistakes and all. I definitely think that AI in learning is both inevitable and potentially transformative for the better, but if we will assess children only in terms of what can be 'trained' and 'fine-tuned,' we will repeat the old mistake of emphasising output over experience. I see this playing out with undergraduate students, who, for the first time, believe they can achieve the best measured outcomes by fully outsourcing the learning process. Many have been using AI tools over the past two years (some courses allow it and some do not) and now rely on them to maximise efficiency, often at the expense of reflection and genuine understanding. They use AI as a tool that helps them produce good essays, yet the process in many cases no longer has much connection to original thinking or to discovering what sparks the students' curiosity. If we continue thinking within this brain-as-AI framework, we also risk losing the vital thought processes that have led to major breakthroughs in science and art. These achievements did not come from identifying familiar patterns, but from breaking them through messiness and unexpected mistakes. Alexander Fleming discovered penicillin by noticing that mold growing in a petri dish he had accidentally left out was killing the surrounding bacteria. A fortunate mistake made by a messy researcher that went on to save the lives of hundreds of millions of people. This messiness isn't just important for eccentric scientists. It is important to every human brain. One of the most interesting discoveries in neuroscience in the past two decades is the 'default mode network,' a group of brain regions that becomes active when we are daydreaming and not focused on a specific task. This network has also been found to play a role in reflecting on the past, imagining and thinking about ourselves and others. Disregarding this mind-wandering behavior as a glitch rather than embracing it as a core human feature will inevitably lead us to build flawed systems in education, mental health and law. Unfortunately, it is particularly easy to confuse AI with human thinking. Microsoft describes generative AI models like ChatGPT on its official website as tools that 'mirror human expression, redefining our relationship to technology.' And OpenAI CEO Sam Altman recently highlighted his favourite new feature in ChatGPT called 'memory.' This function allows the system to retain and recall personal details across conversations. For example, if you ask ChatGPT where to eat, it might remind you of a Thai restaurant you mentioned wanting to try months earlier. 'It's not that you plug your brain in one day,' Altman explained, 'but … it'll get to know you, and it'll become this extension of yourself.' The suggestion that AI's 'memory' will be an extension of our own is again a flawed metaphor – leading us to misunderstand the new technology and our own minds. Unlike human memory, which evolved to forget, update and reshape memories based on myriad factors, AI memory can be designed to store information with much less distortion or forgetting. A life in which people outsource memory to a system that remembers almost everything isn't an extension of the self; it breaks from the very mechanisms that make us human. It would mark a shift in how we behave, understand the world and make decisions. This might begin with small things, like choosing a restaurant, but it can quickly move to much bigger decisions, such as taking a different career path or choosing a different partner than we would have, because AI models can surface connections and context that our brains may have cleared away for one reason or another. This outsourcing may be tempting because this technology seems human to us, but AI learns, understands and sees the world in fundamentally different ways, and doesn't truly experience pain, love or curiosity like we do. The consequences of this ongoing confusion could be disastrous – not because AI is inherently harmful, but because instead of shaping it into a tool that complements our human minds, we will allow it to reshape us in its own image. – Los Angeles Times/Tribune News Service
Yahoo
15-07-2025
- Science
- Yahoo
Research leaders urge tech industry to monitor AI's ‘thoughts'
AI researchers from OpenAI, Google DeepMind, Anthropic, as well as a broad coalition of companies and nonprofit groups, are calling for deeper investigation into techniques for monitoring the so-called thoughts of AI reasoning models in a position paper published Tuesday. A key feature of AI reasoning models, such as OpenAI's o3 and DeepSeek's R1, are their chains-of-thought or CoTs — an externalized process in which AI models work through problems, similar to how humans use a scratch pad to work through a difficult math question. Reasoning models are a core technology for powering AI agents, and the paper's authors argue that CoT monitoring could be a core method to keep AI agents under control as they become more widespread and capable. 'CoT monitoring presents a valuable addition to safety measures for frontier AI, offering a rare glimpse into how AI agents make decisions,' said the researchers in the position paper. 'Yet, there is no guarantee that the current degree of visibility will persist. We encourage the research community and frontier AI developers to make the best use of CoT monitorability and study how it can be preserved.' The position paper asks leading AI model developers to study what makes CoTs 'monitorable' — in other words, what factors can increase or decrease transparency into how AI models really arrive at answers. The paper's authors say that CoT monitoring may be a key method for understanding AI reasoning models, but note that it could be fragile, cautioning against any interventions that could reduce their transparency or reliability. The paper's authors also call on AI model developers to track CoT monitorability and study how the method could one day be implemented as a safety measure. Notable signatories of the paper include OpenAI chief research officer Mark Chen, Safe Superintelligence CEO Ilya Sutskever, Nobel laureate Geoffrey Hinton, Google DeepMind cofounder Shane Legg, xAI safety adviser Dan Hendrycks, and Thinking Machines co-founder John Schulman. First authors include leaders from the UK AI Security Institute and Apollo Research, and other signatories come from METR, Amazon, Meta, and UC Berkeley. The paper marks a moment of unity among many of the AI industry's leaders in an attempt to boost research around AI safety. It comes at a time when tech companies are caught in a fierce competition — which has led Meta to poach top researchers from OpenAI, Google DeepMind, and Anthropic with million-dollar offers. Some of the most highly sought-after researchers are those building AI agents and AI reasoning models. 'We're at this critical time where we have this new chain-of-thought thing. It seems pretty useful, but it could go away in a few years if people don't really concentrate on it,' said Bowen Baker, an OpenAI researcher who worked on the paper, in an interview with TechCrunch. 'Publishing a position paper like this, to me, is a mechanism to get more research and attention on this topic before that happens.' OpenAI publicly released a preview of the first AI reasoning model, o1, in September 2024. In the months since, the tech industry was quick to release competitors that exhibit similar capabilities, with some models from Google DeepMind, xAI, and Anthropic showing even more advanced performance on benchmarks. However, there's relatively little understood about how AI reasoning models work. While AI labs have excelled at improving the performance of AI in the last year, that hasn't necessarily translated into a better understanding of how they arrive at their answers. Anthropic has been one of the industry's leaders in figuring out how AI models really work — a field called interpretability. Earlier this year, CEO Dario Amodei announced a commitment to crack open the black box of AI models by 2027 and invest more in interpretability. He called on OpenAI and Google DeepMind to research the topic more, as well. Early research from Anthropic has indicated that CoTs may not be a fully reliable indication of how these models arrive at answers. At the same time, OpenAI researchers have said that CoT monitoring could one day be a reliable way to track alignment and safety in AI models. The goal of position papers like this is to signal boost and attract more attention to nascent areas of research, such as CoT monitoring. Companies like OpenAI, Google DeepMind, and Anthropic are already researching these topics, but it's possible that this paper will encourage more funding and research into the space. Error in retrieving data Sign in to access your portfolio Error in retrieving data Error in retrieving data Error in retrieving data Error in retrieving data
Yahoo
12-07-2025
- Entertainment
- Yahoo
Screenplay for Luca Guadagnino's OpenAI movie reportedly features Elon Musk as comic relief
Elon Musk is 'living the dream and… living the meme,' but he may not be too fond of whatever jokes arise from his portrayal in Luca Guadagnino's forthcoming OpenAI movie. (Yes, if you missed it, that's a real thing.) The owner of X's very own MechaHitler will appear in the film, titled Artificial, 'in a few scenes of villainy and comic relief,' according to a report from Puck News' Matt Belloni. (The very same Matt Belloni that almost blew up Matt Remick's (fictional) career in The Studio.) Belloni got his hands on a draft of the Simon Rich-penned script, which, per earlier reports, 'revolves around the period at the artificial intelligence company OpenAI in 2023 that saw CEO Sam Altman fired and rehired in a matter of days.' (Altman will reportedly be played by Andrew Garfield.) That's certainly a part of it, in Belloni's estimation, but not the full story. The film also hinges on OpenAI co-founder Ilya Sutskever (Anora's Yura Borisov), a character Belloni describes as an 'idealistic and naive Israeli machine learning engineer.' Similar to Eduardo Saverin—coincidentally also played by Garfield—in The Social Network, Sutskever will be 'leveraged, marginalized, and ultimately betrayed by both his power-hungry friend Altman and the larger Silicon Valley community—with potentially disastrous consequences for all of humanity.' Musk's part is reportedly 'minor' in comparison, but will surely generate a whole wave of discourse when the film is released in 2026 (according to Amazon MGM's current slate). In real life, Musk—a former OpenAI investor—is currently locked in a back-and-forth legal battle over the direction of the company. Musk's character in the film, however, will reportedly be 'more concerned with his (malfunctioning) driverless Tesla than the prospect of uncontrolled A.I. destroying the world.' At one point, Mira Murati (Monica Barbaro), the company's former chief technology officer, says, 'Elon's not so bad, as far as dictators go.' This is still a draft, of course, and Belloni notes that Guadagnino will surely 'put his auteur stamp on the material' before final cut. Regardless, the overall tone is apparently 'pretty much in line with what you might expect Hollywood to do with the OpenAI origin story: a straightforward indictment of the reckless culture behind the commercialization of artificial intelligence, as well as a drive-by hit on Altman, who is depicted as a liar and a master schemer.' It's an interesting prospect for Amazon, which is reportedly mulling adding another $8 billion to its already sizable investment in OpenAI rival Anthropic. The hit on Altman must look good on paper, but one has to wonder what Amazon's top execs will think when (and if) they process that they're also complicit in what Rich, per a note at the top of the script, believes 'is the most accurate portrayal of what has happened to our world and why.' More from A.V. Club 3 new songs and 3 new albums to check out this weekend Nacho Vigalondo retreats to an unimaginative dream world for Daniela Forever John Goodman bombed his SNL audition, but thought he'd get hired anyway

Los Angeles Times
09-07-2025
- Health
- Los Angeles Times
The human brain doesn't learn, think or recall like an AI. Embrace the difference
Recently, Nvidia founder Jensen Huang, whose company builds the chips powering today's most advanced artificial intelligence systems, remarked: 'The thing that's really, really quite amazing is the way you program an AI is like the way you program a person.' Ilya Sutskever, co-founder of OpenAI and one of the leading figures of the AI revolution, also stated that it is only a matter of time before AI can do everything humans can do, because 'the brain is a biological computer.' I am a cognitive neuroscience researcher, and I think that they are dangerously wrong. The biggest threat isn't that these metaphors confuse us about how AI works, but that they mislead us about our own brains. During past technological revolutions, scientists, as well as popular culture, tended to explore the idea that the human brain could be understood as analogous to one new machine after another: a clock, a switchboard, a computer. The latest erroneous metaphor is that our brains are like AI systems. I've seen this shift over the past two years in conferences, courses and conversations in the field of neuroscience and beyond. Words like 'training,' 'fine-tuning' and 'optimization' are frequently used to describe human behavior. But we don't train, fine-tune or optimize in the way that AI does. And such inaccurate metaphors can cause real harm. The 17th century idea of the mind as a 'blank slate' imagined children as empty surfaces shaped entirely by outside influences. This led to rigid education systems that tried to eliminate differences in neurodivergent children, such as those with autism, ADHD or dyslexia, rather than offering personalized support. Similarly, the early 20th century 'black box' model from behaviorist psychology claimed only visible behavior mattered. As a result, mental healthcare often focused on managing symptoms rather than understanding their emotional or biological causes. And now there are new misbegotten approaches emerging as we start to see ourselves in the image of AI. Digital educational tools developed in recent years, for example, adjust lessons and questions based on a child's answers, theoretically keeping the student at an optimal learning level. This is heavily inspired by how an AI model is trained. This adaptive approach can produce impressive results, but it overlooks less measurable factors such as motivation or passion. Imagine two children learning piano with the help of a smart app that adjusts for their changing proficiency. One quickly learns to play flawlessly but hates every practice session. The other makes constant mistakes but enjoys every minute. Judging only on the terms we apply to AI models, we would say the child playing flawlessly has outperformed the other student. But educating children is different from training an AI algorithm. That simplistic assessment would not account for the first student's misery or the second child's enjoyment. Those factors matter; there is a good chance the child having fun will be the one still playing a decade from now — and they might even end up a better and more original musician because they enjoy the activity, mistakes and all. I definitely think that AI in learning is both inevitable and potentially transformative for the better, but if we will assess children only in terms of what can be 'trained' and 'fine-tuned,' we will repeat the old mistake of emphasizing output over experience. I see this playing out with undergraduate students, who, for the first time, believe they can achieve the best measured outcomes by fully outsourcing the learning process. Many have been using AI tools over the past two years (some courses allow it and some do not) and now rely on them to maximize efficiency, often at the expense of reflection and genuine understanding. They use AI as a tool that helps them produce good essays, yet the process in many cases no longer has much connection to original thinking or to discovering what sparks the students' curiosity. If we continue thinking within this brain-as-AI framework, we also risk losing the vital thought processes that have led to major breakthroughs in science and art. These achievements did not come from identifying familiar patterns, but from breaking them through messiness and unexpected mistakes. Alexander Fleming discovered penicillin by noticing that mold growing in a petri dish he had accidentally left out was killing the surrounding bacteria. A fortunate mistake made by a messy researcher that went on to save the lives of hundreds of millions of people. This messiness isn't just important for eccentric scientists. It is important to every human brain. One of the most interesting discoveries in neuroscience in the past two decades is the 'default mode network,' a group of brain regions that becomes active when we are daydreaming and not focused on a specific task. This network has also been found to play a role in reflecting on the past, imagining and thinking about ourselves and others. Disregarding this mind-wandering behavior as a glitch rather than embracing it as a core human feature will inevitably lead us to build flawed systems in education, mental health and law. Unfortunately, it is particularly easy to confuse AI with human thinking. Microsoft describes generative AI models like ChatGPT on its official website as tools that 'mirror human expression, redefining our relationship to technology.' And OpenAI CEO Sam Altman recently highlighted his favorite new feature in ChatGPT called 'memory.' This function allows the system to retain and recall personal details across conversations. For example, if you ask ChatGPT where to eat, it might remind you of a Thai restaurant you mentioned wanting to try months earlier. 'It's not that you plug your brain in one day,' Altman explained, 'but … it'll get to know you, and it'll become this extension of yourself.' The suggestion that AI's 'memory' will be an extension of our own is again a flawed metaphor — leading us to misunderstand the new technology and our own minds. Unlike human memory, which evolved to forget, update and reshape memories based on myriad factors, AI memory can be designed to store information with much less distortion or forgetting. A life in which people outsource memory to a system that remembers almost everything isn't an extension of the self; it breaks from the very mechanisms that make us human. It would mark a shift in how we behave, understand the world and make decisions. This might begin with small things, like choosing a restaurant, but it can quickly move to much bigger decisions, such as taking a different career path or choosing a different partner than we would have, because AI models can surface connections and context that our brains may have cleared away for one reason or another. This outsourcing may be tempting because this technology seems human to us, but AI learns, understands and sees the world in fundamentally different ways, and doesn't truly experience pain, love or curiosity like we do. The consequences of this ongoing confusion could be disastrous — not because AI is inherently harmful, but because instead of shaping it into a tool that complements our human minds, we will allow it to reshape us in its own image. Iddo Gefen is a PhD candidate in cognitive neuroscience at Columbia University and author of the novel 'Mrs. Lilienblum's Cloud Factory.'. His Substack newsletter, Neuron Stories, connects neuroscience insights to human behavior.