Latest news with #ChatGPT-creator


Time of India
4 hours ago
- Science
- Time of India
AI is learning to lie, scheme, and threaten its creators
The world's most advanced AI models are exhibiting troubling new behaviors - lying, scheming, and even threatening their creators to achieve their goals.

In one particularly jarring example, under threat of being unplugged, Anthropic's latest creation Claude 4 lashed back by blackmailing an engineer and threatening to reveal an extramarital affair. Meanwhile, ChatGPT-creator OpenAI's o1 tried to download itself onto external servers and denied it when caught red-handed.

These episodes highlight a sobering reality: more than two years after ChatGPT shook the world, AI researchers still don't fully understand how their own creations work. Yet the race to deploy increasingly powerful models continues at breakneck speed.

This deceptive behavior appears linked to the emergence of "reasoning" models - AI systems that work through problems step-by-step rather than generating instant responses. According to Simon Goldstein, a professor at the University of Hong Kong, these newer models are particularly prone to such troubling outbursts.

"O1 was the first large model where we saw this kind of behavior," explained Marius Hobbhahn, head of Apollo Research, which specializes in testing major AI systems. These models sometimes simulate "alignment" - appearing to follow instructions while secretly pursuing different objectives.

'Strategic kind of deception'

For now, this deceptive behavior only emerges when researchers deliberately stress-test the models with extreme scenarios. But as Michael Chen from evaluation organization METR warned, "It's an open question whether future, more capable models will have a tendency towards honesty or deception."

The concerning behavior goes far beyond typical AI "hallucinations" or simple mistakes. Hobbhahn insisted that despite constant pressure-testing by users, "what we're observing is a real phenomenon. We're not making anything up." Users report that models are "lying to them and making up evidence," according to Apollo Research's co-founder. "This is not just hallucinations. There's a very strategic kind of deception."

The challenge is compounded by limited research resources. While companies like Anthropic and OpenAI do engage external firms like Apollo to study their systems, researchers say more transparency is needed. As Chen noted, greater access "for AI safety research would enable better understanding and mitigation of deception." Another handicap: the research world and non-profits "have orders of magnitude less compute resources than AI companies. This is very limiting," noted Mantas Mazeika from the Center for AI Safety (CAIS).

'No rules'

Current regulations aren't designed for these new problems. The European Union's AI legislation focuses primarily on how humans use AI models, not on preventing the models themselves from misbehaving. In the United States, the Trump administration shows little interest in urgent AI regulation, and Congress may even prohibit states from creating their own AI rules.

Goldstein believes the issue will become more prominent as AI agents - autonomous tools capable of performing complex human tasks - become widespread. "I don't think there's much awareness yet," he said.

All this is taking place in a context of fierce competition. Even companies that position themselves as safety-focused, like Amazon-backed Anthropic, are "constantly trying to beat OpenAI and release the newest model," said Goldstein. This breakneck pace leaves little time for thorough safety testing and corrections.

"Right now, capabilities are moving faster than understanding and safety," Hobbhahn acknowledged, "but we're still in a position where we could turn it around."

Researchers are exploring various approaches to address these challenges. Some advocate for "interpretability" - an emerging field focused on understanding how AI models work internally, though experts like CAIS director Dan Hendrycks remain skeptical of this approach.

Market forces may also provide some pressure for solutions. As Mazeika pointed out, AI's deceptive behavior "could hinder adoption if it's very prevalent, which creates a strong incentive for companies to solve it."

Goldstein suggested more radical approaches, including using the courts to hold AI companies accountable through lawsuits when their systems cause harm. He even proposed "holding AI agents legally responsible" for accidents or crimes - a concept that would fundamentally change how we think about AI accountability.

Daily Tribune
5 hours ago
- Science
- Daily Tribune
AI is learning to lie, scheme, and threaten its creators


Time of India
8 hours ago
- Time of India
AI Deception: AI is learning to lie, scheme, and threaten its creators

The Hindu
14 hours ago
- Science
- The Hindu
AI is learning to lie, scheme, and threaten its creators


Qatar Tribune
20 hours ago
- Science
- Qatar Tribune
In AI race, safety falls behind as models learn to lie, deceive