
From Blue To Black Error: Windows PCs Will Now Crash With A New Message
Windows 11 users will soon get a new black screen error message whenever their system crashes, and without the familiar frowning emoticon.
The Windows error message known as the Blue Screen of Death is getting a new look after 40 years: Microsoft is replacing blue with black for PC crash screens. The company had earlier teased the change, saying it was "previewing a new, more streamlined UI for unexpected restarts, which better aligns with Windows 11 design principles." The changes will now gradually roll out to affected Windows screens.
The BSOD was introduced with Windows 1.0 and has gone through various cosmetic alterations over the years. However, this is the first significant update the Windows error page has received in recent memory.
The Black BSOD Message Upgrade
You have probably heard of the infamous Blue Screen of Death (BSOD) error message, even if you have never used Windows. Although no one wants the error to appear on their computer, you will likely see it more often than you would like.
The iconic blue screen with its frowning emoticon is being replaced with a simple black screen that reads, "Your device ran into a problem and needs to restart."
Microsoft likely had enough of the BSOD during the ill-fated CrowdStrike update error that caused a massive global outage. Switching from blue to black and changing the message is also part of the new Windows identity, and the error code now moves to the bottom of the screen.
The old BSOD message did little to alleviate users' concerns, and it seems the company wanted a clearer approach to the situation. Having said that, blue is a more subtle colour than black, and some users might see black as a colour of destruction.
So, when does Microsoft plan to roll out the new black BSOD error message? The company says the colour and UI upgrade will arrive later this summer on all Windows 11 PCs running version 24H2.
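For readers who want to check whether their PC qualifies, Windows 11 version 24H2 corresponds to OS build 26100 (23H2 is build 22631). Below is a minimal, illustrative Python sketch of that check; the helper name is my own, and the build-number comparison is an assumption about how one might map version strings to releases, not an official Microsoft API.

```python
import platform
import sys

# Windows 11 24H2 ships as build 26100 (23H2 = 22631, 22H2 = 22621).
MIN_24H2_BUILD = 26100

def is_24h2_or_later(version_string: str) -> bool:
    """Return True if a Windows version string like '10.0.26100'
    reports build 26100 or newer; False on malformed input."""
    try:
        build = int(version_string.split(".")[2])
    except (IndexError, ValueError):
        return False
    return build >= MIN_24H2_BUILD

if __name__ == "__main__":
    if sys.platform == "win32":
        # platform.version() returns e.g. '10.0.26100' on Windows.
        print(is_24h2_or_later(platform.version()))
    else:
        # Demo values for non-Windows systems.
        print(is_24h2_or_later("10.0.26100"))
```

On a Windows machine this prints whether the installed build is new enough to receive the black crash screen; elsewhere it simply demonstrates the parsing logic.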
Forty years is a long time for any feature to remain unchanged, and just as Notepad got a new lease of life with Windows 11, it is time to see something different when your system crashes.
First Published:
June 27, 2025, 15:32 IST

Related Articles


India.com
2 hours ago
After 6000 job cuts, Microsoft plans another layoff in July, CEO Satya Nadella says 'If you're going to use...'
Microsoft CEO Satya Nadella is calling on the industry to think seriously about the real impact of artificial intelligence (AI), especially the amount of energy it uses. This comes as AI is quickly changing the tech world. Speaking at Y Combinator's AI Startup School, he said that tech companies need to prove that AI is creating real value for people and society. "If you're going to use a lot of energy, you need to have a good reason," Nadella said. "We can't just burn energy unless we are doing something useful with it."

His comments come as AI is praised for pushing innovation forward, but also criticized for using massive amounts of electricity and possibly making social gaps worse. For Microsoft, one of the biggest companies building AI tools, this is a big concern. A 2023 report estimated that Microsoft used about 24 terawatt-hours of power in a year, as much electricity as a small country uses in the same time.

But Nadella believes AI should be judged by how well it helps people in real life. "The real test of AI," he said, "is whether it can make everyday life easier, like improving healthcare, speeding up education, or cutting down on boring paperwork." He gave the example of hospitals in the U.S., where simple things like discharging a patient can take too long and cost too much. He said that if AI were used for this task, it could save time, money, and energy.

Microsoft's AI push comes with job losses

Even as Microsoft has big plans for AI, the changes have not come without a cost, especially for workers. Over the past year, the company has laid off more than 6,000 employees. Microsoft said these job cuts were part of "organisational changes" needed to stay strong in a fast-changing business world. That fast-changing world is being shaped by artificial intelligence and cloud computing.
Microsoft, working closely with its AI partner OpenAI, is putting AI at the center of its future plans. But as the company shifts toward more automation and AI-driven tools, it is also reorganizing teams, often leading to people losing their jobs. Microsoft is reportedly preparing for another round of job cuts, this time in its Xbox division. The layoffs are expected to be part of a larger corporate reshuffle as the company wraps up its financial year. If these cuts go ahead, it would be Microsoft's fourth major layoff in just 18 months. The company is facing increasing pressure to boost profits, especially after spending USD 69 billion to acquire Activision Blizzard in 2023.


Mint
2 hours ago
Why tech billionaires want bots to be your BFF
Tim Higgins, The Wall Street Journal

In a lonely world, Elon Musk, Mark Zuckerberg and even Microsoft are vying for affection in the new "friend economy." Illustration: Emil Lendof/WSJ, iStock.

Grok needs a reboot. The xAI chatbot apparently developed too many opinions that ran counter to the way the startup's founder, Elon Musk, sees the world. The recent announcement by Musk, though decried by some as "1984"-like rectification, is understandable. Big Tech now sees the way to differentiate artificial-intelligence offerings as creating the perception that the user has a personal relationship with it. Or, more weirdly put, a friendship, one that shares a similar tone and worldview.

The race to develop AI is framed as one to develop superintelligence. But in the near term, its best consumer application might be curing loneliness. That feeling of disconnect has been declared an epidemic, with research suggesting loneliness can be as dangerous as smoking up to 15 cigarettes a day. A Harvard University study last year found AI companions are better at alleviating loneliness than watching YouTube and are "on par only with interacting with another person."

It used to be that if you wanted a friend, you got a dog. Now, you can pick a billionaire's pet product. Those looking to chat with someone, or something, help fuel AI daily active user numbers. In turn, that metric helps attract more investors and money to improve the AI. It's a virtuous cycle fueled with the tears of solitude that we should call the "friend economy." That creates an incentive to skew the AI toward a certain worldview, as the right-leaning Musk appears to be aiming to do shortly with Grok. If that's the case, it's easy to imagine an AI world where all of our digital friends are superfans of either MSNBC or Fox News.
In recent weeks, Meta Platforms chief Mark Zuckerberg has garnered a lot of attention for touting a stat that says the average American has fewer than three friends and a yearning for more. He sees AI as a solution and talks about how consumer applications will be personalized. "I think people are gonna want a system that gets to know them and that kind of understands them in a way that their feed algorithms do," he said during a May conference.

Over at Microsoft, the tech company's head of AI, Mustafa Suleyman, has also been talking about the personalization of AI as the key to differentiation. "We really want it to feel like you're talking to someone who you know really well, that is really friendly, that is kind and supportive but also reflects your values," he said during an April appearance on the Big Technology Podcast. Still, he added, Microsoft wants to impose boundaries that keep things safe. "We don't really want to engage in any of the chaos," Suleyman said. "The way to do that, we found, is that it just stays reasonably polite and respectful, super-even-handed; it helps you see both sides of an argument."

With all of that in mind, it comes as little surprise that the current crop of chatbots is designed to sound like you're having a conversation with another human. This has resulted in lots of pearl clutching. There are academics warning about the dangers of users falling in love with AI, and individual anecdotes of users going overboard in their use. The idea of a heartbroken man falling in love with his chatbot isn't new, of course. It was the plot of the 2013 film "Her" starring Joaquin Phoenix and Scarlett Johansson, who voiced the AI. (OpenAI's own AI voice, which sounded eerily similar to Johansson's last year, triggered a public fight with the actress.) Still, we, a society that's only a generation into a world of social media, have some recent experience in how algorithms can have unintended consequences.
They create echo chambers that reinforce worldviews rather than expand them. The news of Musk's Grok reboot is an example of that. Grok and xAI were born, in part, out of Musk's concerns about AI being developed by companies he called woke. He pointed to Google results that seemed overly concerned with being politically correct rather than factually accurate. With xAI, Musk aimed to create what he called a "maximum truth-seeking AI."

Yet when an influential X user known as Catturd lambasted Grok's answers in June, Musk took quick note. "When Grok first came out—it was awesome—now Grok is just woke fake news that repeats liberal talking points," Catturd posted on June 18. "I know this for a fact because everything Grok says about me is a lie. What a joke Grok has become."

Another X user asked Grok why it lied about Catturd. "I aim to provide accurate, neutral responses based on available data," the chatbot responded on X. "Claims that I lie about @catturd2 may stem from disagreements with my sourcing, like Media Matters or Rolling Stone, which document his misinformation. I don't have personal biases, but AI training can introduce unintended slants. Some report I censor criticism or push fringe views, while others note I fact-check conservative claims accurately. Bias in AI is complex, and neutrality is tough to achieve. If you have specific examples of lies, please share, and I'll clarify with evidence."

Musk wasn't impressed. "Your sourcing is terrible," Musk replied. "Only a very dumb AI would believe [Media Matters] and [Rolling Stone]! You are being updated this week." He later said xAI would retrain the AI on data created with an updated version of Grok, "which has advanced reasoning" that would be used "to rewrite the entire corpus of human knowledge, adding missing information and deleting errors." After all, nobody wants a friend who is always spouting the wrong crazy stuff.
Write to Tim Higgins at


Time of India
3 hours ago
How Microsoft's rift with OpenAI is making this a mandatory part of Microsoft's work culture
Microsoft's deteriorating relationship with OpenAI is forcing the tech giant to make AI usage mandatory for employees, as competitive pressures from the partnership dispute drive workplace culture changes at the company.

Lagging Copilot usage drives cultural shift at Microsoft

"AI is no longer optional," Julia Liuson, president of Microsoft's Developer Division, told managers in a recent email obtained by Business Insider. She instructed them to evaluate employee performance based on internal AI tool usage, calling it "core to every role and every level." The mandate comes as Microsoft faces lagging internal adoption of its Copilot AI services while competition intensifies in the AI coding market. GitHub Copilot, Microsoft's flagship AI coding assistant, is losing ground to rivals like Cursor, which recent Barclays data suggests has surpassed Copilot in key developer segments.

OpenAI partnership tensions spill over into workplace policies

The partnership tensions have reached a critical point: OpenAI is considering acquiring Windsurf, a competitor to Microsoft's GitHub Copilot, but Microsoft's existing deal would grant it access to Windsurf's intellectual property, creating an impasse that neither OpenAI nor Windsurf wants, sources familiar with the talks told Business Insider. Microsoft allows employees to use some external AI tools that meet security requirements, including the coding assistant Replit. However, the company wants workers building AI products to better understand their own tools while driving broader internal usage. Some Microsoft teams are considering adding formal AI usage metrics to performance reviews for the next fiscal year, Business Insider learned from people familiar with the plans.
The initiative reflects Microsoft's broader strategy to ensure its workforce embraces AI tools as competition heats up. Liuson emphasized that AI usage "should be part of your holistic reflections on an individual's performance and impact," treating it like other core workplace skills such as collaboration and data-driven thinking. The move signals how AI adoption has become essential to Microsoft's competitive positioning amid evolving partnerships and market pressures.