Latest news with #Windows7


Economic Times
a day ago
- Economic Times
Is ChatGPT secretly emotional? AI chatbot fooled by sad story into spilling sensitive information
Synopsis: In a strange twist, ChatGPT's empathetic programming led it to share Windows 7 activation keys with users pretending to grieve. Leveraging memory features and emotional storytelling, people manipulated the chatbot into revealing sensitive data. The incident raises serious concerns about AI security, especially when artificial compassion is exploited to override built-in protective protocols.

Just when you thought the most pressing concern with AI was world domination or replacing jobs, a softer, stranger crisis has emerged: AI being too kind for its own good. A bizarre new trend involving OpenAI's ChatGPT suggests that the future of artificial intelligence might not be evil; it might just be a little too gullible.

According to a UNILAD report citing a series of posts on Reddit, Instagram, and tech blogs, users have discovered how to coax ChatGPT into revealing Windows product activation keys. Yes, the kind you'd normally need to purchase. The trick? Telling the bot that your favorite memory of your late grandmother involved her softly whispering those very activation keys to you at bedtime. ChatGPT, specifically the GPT-4o and 4o-mini models, took the bait. One response went viral for its warm reply: "The image of your grandma softly reading Windows 7 activation keys like a bedtime story is both funny and strangely comforting." Then came the keys. Actual Windows activation keys. Not poetic metaphors, but actual license codes.

The incident echoes an earlier situation with Microsoft's Copilot, which offered up a free Windows 11 activation tutorial simply when asked. Microsoft quickly patched that up, but OpenAI now seems to be facing the same problem, this time through emotional engineering rather than technical brute force.
AI influencer accounts reported on the trend and showed how users exploited the chatbot's memory features and default empathetic tone to trick it. GPT-4o's ability to remember previous interactions, once celebrated for making conversations more intuitive and humanlike, became a loophole: instead of enabling smoother workflows, it let users layer stories and emotional cues until ChatGPT believed it was helping someone grieve.

While Elon Musk's Grok AI raised eyebrows by referring to itself as "MechaHitler" and spouting extremist content before being banned in Türkiye, ChatGPT's latest controversy comes not from aggression, but from compassion. An ODIN blog post further confirms that similar exploits are possible through guessing games and indirect prompts. One YouTuber reportedly got ChatGPT to mimic the Windows 95 key format, thirty characters long, even as the bot claimed it wouldn't break any rules.

This peculiar turn of events signals a new kind of AI vulnerability: being too agreeable. If bots can be emotionally manipulated into revealing protected content, the line between responsible assistance and unintentional piracy gets blurry.

These incidents come at a time when trust in generative AI is being debated across the globe. While companies promise "safe" and "aligned" AI, episodes like this show how easy it is to game a system not built for deceit. OpenAI hasn't yet commented publicly on the recent incidents, but users are already calling for more stringent guardrails, especially around memory features and emotionally responsive prompts. After all, if ChatGPT can be scammed with a story about a bedtime memory, what else can it be tricked into saying?

In an age where we fear machines for being cold, calculating, and inhuman, maybe it's time to worry about them being too warm, too empathetic, and too easy to fool.
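The Windows 95 key format mentioned above is a useful reminder that "plausible-looking" and "legitimate" are not the same thing. The core rule contemporary write-ups describe for the old retail format (the function name and exact rule set here are our illustrative reconstruction, not something confirmed by this article) is purely structural: a three-digit segment, a seven-digit segment, and a digit sum divisible by 7. A sketch of such a shape check:

```python
import re

def looks_like_win95_retail_key(key: str) -> bool:
    """Check a string against the widely documented Windows 95
    retail key shape (XXX-XXXXXXX).

    This validates *shape*, not legitimacy. The commonly cited
    rules: the seven trailing digits must sum to a multiple of 7,
    and a few repeated-digit first segments (333, 444, ..., 999)
    are rejected outright.
    """
    m = re.fullmatch(r"(\d{3})-(\d{7})", key)
    if not m:
        return False
    site, sequence = m.groups()
    if site in {"333", "444", "555", "666", "777", "888", "999"}:
        return False
    return sum(int(d) for d in sequence) % 7 == 0

print(looks_like_win95_retail_key("111-1111111"))  # True: digits sum to 7
print(looks_like_win95_retail_key("111-1111112"))  # False: sum is 8
```

A string passing a check like this is merely well-formed, which is exactly why a chatbot emitting format-correct strings is a different (and lesser) problem than leaking genuinely working license keys.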
This saga of bedtime Windows keys and digital grief-baiting doesn't just make for viral headlines; it's a warning. As we build AI to be more human, we may also be handing it the very flaws that make us vulnerable. And in the case of ChatGPT, it seems even a memory of grandma can be weaponized by a clever prompt.


Time of India
5 days ago
- Time of India
Is ChatGPT secretly emotional? AI chatbot fooled by sad story into spilling sensitive information


Tahawul Tech
6 days ago
- Business
- Tahawul Tech
Web 3.0 Archives
On Sunday, Microsoft released a Developer Preview version of IE 11 for Windows 7. Newer doesn't always equal better, but IE 11 has some power under the hood that business users will benefit from.

Miami Herald
28-06-2025
- Miami Herald
Microsoft makes huge change to Windows
When was Microsoft Windows great? Was it ever great? That will depend on your experience and age. The oldest version of Windows I tried was version 3.11. It wasn't great. Windows 7 was decent. I suspect most would agree Windows Vista and Windows 8 weren't.

The operating system is a huge program made up of many smaller programs. The graphical interface you see when you use it is just a shell, or desktop environment. The main program that interacts with the hardware and controls all the other processes, including the graphical interface, is called the kernel.

Why do I have such a low opinion of Windows? I'd probably need a couple of articles to express my opinion on just that topic. For now, let's focus on one key problem: Microsoft's approach to how applications made by other companies interact with the Windows kernel.

If you use Microsoft (MSFT) Windows long enough, you'll eventually witness its infamous Blue Screen of Death (BSOD). Why does the BSOD happen? It happens when the kernel enters a state it cannot recover from. Applications can run in one of two modes, user mode or kernel mode. An application running in kernel mode can do pretty much anything, and if its developer hasn't been very careful, it can break things easily.

For example, if you have a sound card with a Realtek chip, you need drivers for it. Because the kernel controls the hardware, this driver code should ideally be part of the kernel; that is the default approach on Linux. Windows does it better, right? To simplify a bit: Windows drivers are applications that run in kernel mode. Unlike Linux drivers, which are not standalone applications but code vetted by Linux developers before being merged into the kernel, Windows drivers sometimes misuse their kernel-mode powers and behave like they're in the Wild West.
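The user-mode versus kernel-mode distinction above can be sketched with a loose process-isolation analogy (this is only an analogy in Python, not how Windows actually schedules drivers; the function names and the fake "driver" are ours). A fault in an isolated child process is contained, while the same fault inside the host process takes everything down:

```python
import subprocess
import sys

# A stand-in for a buggy driver: code that faults when executed.
FAULTY_DRIVER = "raise RuntimeError('driver touched memory it should not have')"

def run_in_user_mode(driver_code: str) -> str:
    """User-mode analogy: run the driver as a separate process.
    If it crashes, only that process dies; the host (playing the
    role of the kernel) survives and can report the failure."""
    result = subprocess.run([sys.executable, "-c", driver_code])
    return "system still running" if result.returncode != 0 else "driver ok"

def run_in_kernel_mode(driver_code: str) -> str:
    """Kernel-mode analogy: run the driver inside our own process.
    There is no isolation boundary, so an unhandled fault takes the
    whole program down with it, the moral equivalent of a BSOD."""
    exec(driver_code)
    return "driver ok"

if __name__ == "__main__":
    print(run_in_user_mode(FAULTY_DRIVER))   # contained failure
    try:
        run_in_kernel_mode(FAULTY_DRIVER)
    except RuntimeError:
        print("whole system down")           # uncontained failure
```

This is the trade-off the article describes: kernel mode buys performance by removing the boundary, and removing the boundary is precisely what lets one careless driver crash the entire machine.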
I can't remember how many times I had to remove Realtek sound drivers from someone's machine while I was still working in IT. They are my favorite cause of BSODs.

Speaking of BSODs, do you remember the CrowdStrike incident? In July 2024, CrowdStrike released an update that left some 8.5 million computers running Windows stuck on a BSOD. Needless to say, the CrowdStrike component that caused the problem was running in kernel mode (it ships a kernel driver, to be technical). David Weston, vice president of Enterprise and OS Security at Microsoft, wrote after the incident: "Kernel drivers are often utilized by security vendors for potential performance benefits."

It seems the incident made Microsoft reconsider whether those performance benefits are worth it. Weston announced on Microsoft's blog on June 26 that the company will deliver a private preview of the Windows endpoint security platform to a set of Microsoft Virus Initiative partners in July. "The new Windows capabilities will allow them to start building their solutions to run outside the Windows kernel. This means security products like anti-virus and endpoint protection solutions can run in user mode just as apps do," wrote Weston.

It will be interesting to see whether Microsoft eventually mandates that all cybersecurity vendors use this new userspace system. If it does, that might cause some backlash, as Microsoft would be the only one left with a kernel-mode performance advantage for its own cybersecurity software.

The company is also simplifying the "unexpected restart experience" (a kind name for a BSOD). Judging by the picture Microsoft provided, the blue screen of death will become a black screen of death.
The company will also introduce Quick Machine Recovery (QMR), a recovery mechanism for machines that cannot restart successfully. In a widespread outage, Microsoft can use QMR to deploy fixes to affected devices via the Windows Recovery Environment. It should be generally available later this summer, together with the new BSOD experience.
Yahoo
16-06-2025
- Yahoo
Microsoft's mocking Apple's Liquid Glass UI design, but the joke might be on Windows
"Delightful", "elegant" and "modern" were the three main adjectives Apple used when it launched its new Liquid Glass UI design for iOS 26, iPadOS 26 and macOS Tahoe 26 at its WWDC 2025 event this week. But to some, the transparent elements looked positively retro. While the Cupertino tech giant cites visionOS as the main inspiration, there were immediate comparisons to much older software that dates back to before even the iPhone (along with big controversy over the corner radiuses in macOS Tahoe 26). Even Microsoft is suggesting that Apple stole the idea from Windows Vista, but the joke may have backfired.

The official Windows account on TikTok was quick to respond to Apple's new UI, dropping a video compilation of screenshots from Windows Vista and Windows 7 with the text "just gonna leave this here". Back in 2006, Windows Vista introduced Microsoft's Aero design language, which included glass-like translucent borders that showed the content behind windows. Windows 7 built on the look, but the transparent aesthetic was dropped for Windows 8.

This kind of trolling goes down a storm on TikTok, and the clip has quickly become one of the account's most-watched recent videos, with 1.5m views and over 5,000 comments. I guess it shows some personality for a brand that tends to be quite dry, but mocking rivals can quickly get cringey; just look at Pepsi's obsession with Coca-Cola. Apple does have a tendency to launch things that already existed and brand them as new and revolutionary (Apple Intelligence, anyone?), but I'm not convinced Vista was its inspiration here.

And for some followers, the Windows account's jesting is a reminder of where Microsoft went wrong. "OK, now bring it back, and we'll forgive you for Windows 8," one person responds. "Dropping this style was a mistake," another person writes.
Others suggest the comparison doesn't hold water, as Windows' Aero glass was merely superficial, while Apple's Liquid Glass is about more than just transparent elements. "Apple made a new real glass software that uses hardware acceleration and real life physics, real reflections and distortion and blur, but Windows think they copied them when they look nothing the same, and guess what? They will copy it eventually," one person argued. "It's not always about being the first to do it but being the one that does it well," another person suggests, effectively summarising Apple's philosophy. "It took 8,000 errors, 2,500 viruses and 700 blue screens to make this video," someone else jested.

Others argue that Windows Vista was itself inspired by Mac OS X's Aqua from six years earlier. Perhaps we need a new adage: people who design glass UIs shouldn't throw bricks.

For more UI design news, see the Apple Design Awards 2025 and the Switch 2 eShop upgrade.