Latest news with #CrowdStrike


Times of Oman
2 days ago
- Times of Oman
Microsoft shuts down iconic 'Blue Screen of Death'
Microsoft is killing its infamous "Blue Screen of Death" after over four decades. The notorious error message will soon be set against a black background instead. The technology giant made the announcement in a blog post on Thursday as it outlined wider measures to improve the resilience of the Windows operating system. "Now it's easier than ever to navigate unexpected restarts and recover faster," the company wrote. The effort comes in the wake of the 2024 CrowdStrike incident, which led to a mammoth IT outage that crashed millions of Windows systems across the globe.

What's new? The "Blue Screen of Death," or Blue Screen error, was displayed when a serious problem caused Windows to shut down or restart unexpectedly to prevent data loss. The company said it is "streamlining" what users experience when confronted with "unexpected restarts" that lead to disruptions. The change revamps the error screen that greeted users, often frustratingly so, for more than 40 years. The new error message displays much more condensed text against a black backdrop. "Your device ran into a problem and needs to restart," it will read, according to an image Microsoft shared in its blog. The error message is no longer accompanied by a sad face icon and instead shows the percentage completed for the restart process. The software company said this "simplified" user interface for unexpected restarts will be available later this summer on all Windows 11 (version 24H2) devices.


Forbes
3 days ago
- Business
- Forbes
Why AI Illiterate Directors Are The New Liability For Boards Today
Twenty-three years after Sarbanes-Oxley mandated financial experts on audit committees, boards face an even more transformative moment. But unlike the post-Enron era, when adding one qualified financial expert sufficed, the AI revolution demands something far more radical: every director must become AI literate or risk becoming a liability in the intelligence age.

I just came back from the Stanford Directors' College, the premier executive education program for directors and senior executives of publicly traded firms. The program, now in its thirtieth year, featured speakers including Reed Hastings (Chairman, Co-Founder & Former CEO, Netflix; Director, Anthropic), Michael Sentonas (President, CrowdStrike), Maggie Wilderotter (Chairman, Docusign; Director, Costco, Fortinet and Sana Biotechnology), John Donahoe (Former CEO, Nike, ServiceNow, and eBay; Former Chairman, PayPal) and Condoleezza Rice (Director, Hoover Institution; Former U.S. Secretary of State). Organized by Stanford Law Professor Joseph Grundfest and Amanda Packel, Co-Executive Director of Stanford's Rock Center for Corporate Governance, the program addresses a broad range of problems that confront modern boards. The topics are extensive, including the board's role in setting business strategy, CEO and board succession, crisis management, techniques for controlling legal liability, challenges posed by activist investors, boardroom dynamics, international trade issues, the global economy, and cybersecurity threats. However, the topic that everyone wanted to discuss was AI.

(Photo: Joseph A. Grundfest interviews Reed Hastings at the 2025 Stanford Directors' College.)

The stakes couldn't be higher. While traditional boards debate AI risks in quarterly meetings, a new breed of AI-first competitors operates at algorithmic speed. Consider Cursor, which reached $500 million in annual recurring revenue with just 60 employees, or Cognition Labs, valued at $4 billion with only 10 people. These aren't just 'unicorns'; they're harbingers of a fundamental shift in how AI-first businesses operate.

The Sarbanes-Oxley parallel that boards are missing

After Enron's collapse, the Sarbanes-Oxley (SOX) Act required boards to include at least one "qualified financial expert" who understood GAAP, financial statements, and internal controls. Companies either complied or publicly explained why they lacked such expertise, a powerful mechanism that transformed board composition within five years.

Today's AI challenge dwarfs that financial literacy mandate. Unlike accounting expertise, which could be compartmentalized to audit committees, AI permeates every business function. When algorithms make thousands of decisions daily across marketing, operations, HR, and customer service, delegating oversight to a single "tech expert" becomes not just inadequate but dangerous.

The data reveals a governance crisis in motion. According to ISS analysis, only 31% of S&P 500 companies disclosed any board oversight of AI in 2024, and a mere 11% reported explicit full-board or committee oversight. That is despite an 84% year-over-year increase in such disclosures, which suggests boards are scrambling to catch up.

Investors are tracking (and targeting) AI governance gaps

Institutional investors have moved from encouragement to enforcement.
BlackRock's 2025 proxy voting guidelines emphasize that board composition must reflect the necessary "experiences, perspectives, and skillsets," with explicit warnings about voting against directors at companies that are "outliers compared to market norms." Vanguard and State Street have issued similar guidance, while Glass Lewis added a new AI governance section to its 2025 policies.

(Photo: Large institutional investors such as BlackRock, Vanguard, and State Street are now looking at AI governance.)

The enforcement mechanism? Universal proxy cards, mandatory since September 2022, enable surgical strikes against individual directors. Activists launched 243 campaigns in 2024 (the highest total since 2018's record of 249), with technology-sector campaigns up 15.9% year-over-year. Boards with "skills gaps related to areas where the company is underperforming" face the highest vulnerability, and nothing signals a skills gap more loudly than AI illiteracy while competitors automate core functions.

Consider what happened in 2024: 27 CEOs resigned due to activist pressure, up from 24 in 2023 and well above the four-year average of 16. The percentage of S&P 500 CEO resignations linked to activist activity has tripled since 2020. The message is clear: governance failures have consequences, and AI governance represents the next frontier for activist campaigns.

The existential threat boards aren't seeing

Here's the scenario keeping forward-thinking directors awake: while your board debates whether to form an AI committee, a three-person startup with 100+ AI agents is systematically capturing your market share. This isn't hyperbole. In legal services, AI achieves 100x productivity gains, reducing document review from 16 hours to 3-4 minutes. Harvey AI raised $300 million at a $3 billion valuation, while Crosby promises contract review in under an hour. In software development, companies report 60% faster cycle times and 50% fewer production errors. Salesforce aims to deploy one billion AI agents within 12 months, at roughly $2 per conversation, far below the cost of a human customer service representative.

The economics are devastating for traditional business models. AI-first companies operate with 80-95% lower operational costs while achieving comparable or superior output. They reach $100 million in annual recurring revenue in 12-18 months versus the traditional 5-10 years. When Cursor generates nearly a billion lines of working code daily, traditional software companies' armies of developers become competitive liabilities, not assets.

Why traditional IT governance fails for AI

Boards accustomed to delegating technology oversight to CIOs or audit committees face a rude awakening. Traditional IT governance focuses on infrastructure, cybersecurity, and compliance: the "what" of technology management. AI governance requires understanding the "should": whether AI capabilities should be deployed, how they impact stakeholders, and what ethical boundaries must be maintained.

The fundamental difference: IT systems follow rules; AI systems learn and evolve. When Microsoft's Tay chatbot learned toxic behavior from social media in 2016, it wasn't a coding error; it was a governance failure. When COMPAS sentencing software showed racial bias, it wasn't a bug but rather inadequate board oversight of algorithmic decision-making. Stanford's Institute for Human-Centered AI research confirms that AI governance can't be delegated like financial oversight.
AI creates "network effects" in which individual algorithms interact unpredictably. Traditional governance assumes isolated systems; AI governance must address systemic risks from interconnected algorithms making real-time decisions across the enterprise.

The coming wave of Qualified Technology Experts

Just as SOX created demand for Qualified Financial Experts (QFEs), the AI revolution is spawning a new designation: Qualified Technology Experts (QTEs). The market dynamics favor early movers. Spencer Stuart's 2024 Board Index shows 16% of new S&P 500 independent directors brought digital/technology transformation expertise versus only 8% with traditional P&L leadership. The scarcity is acute: requiring both technology expertise and prior board experience creates a severe talent shortage.

This presents both risk and opportunity. For incumbent directors, AI illiteracy becomes a liability targetable by activists. For business-savvy technology leaders or tech-savvy business leaders, board service offers unprecedented opportunities. As one search consultant noted, "Technology roles offer pathways for underrepresented groups to join boards": diversity through capability rather than tokenism.

The regulatory tsunami building momentum

The SEC has elevated AI to a top 2025 examination priority, with enforcement actions against companies making false AI claims. Former SEC Chair Gary Gensler's warning that "false claims about AI hurt investors and undermine market integrity" was just the beginning of concerns about the rise of "AI washing," or exaggerating and misrepresenting the use of AI. The Commission sent comments to 56 companies regarding AI disclosures, with 61% requesting clarification on AI usage and risks.

Internationally, the EU AI Act establishes the world's first comprehensive AI regulatory framework, with board-level accountability requirements taking effect through 2026. Like GDPR, its extraterritorial reach affects global companies. Hong Kong's Monetary Authority already requires board accountability for AI-driven decisions, while New York's Department of Financial Services mandates AI risk oversight for insurance companies.

The pattern is unmistakable: just as Enron triggered SOX, AI governance failures will trigger mandatory expertise requirements. The only question is whether boards act proactively or wait for the next scandal to force their hand.

The board education imperative: From nice-to-have to survival skill

The data reveals a dangerous disconnect. While nearly 70% of directors trust management's AI execution skills, only 50% feel adequately informed about AI-related risks. Worse, almost 50% of boards haven't discussed AI in the past year despite mounting stakeholder pressure. Many academic institutions and trade organizations are trying to fill this need, but traditional director education models (annual conferences and occasional briefings) can't match the exponential pace of AI's evolution. Boards need continuous learning mechanisms, regular AI strategy sessions, and direct access to expertise.

The choice: Lead the transformation or become its casualty

The parallels to Sarbanes-Oxley are instructive but incomplete. Financial literacy requirements responded to past failures; AI literacy requirements must anticipate future transformation.
When three-person startups armed with AI agent swarms can outcompete thousand-employee corporations, traditional governance models simply aren't built for existential threats of that speed. The window for proactive adaptation is closing rapidly. ISS tracks AI governance. Institutional investors demand it. Activists target its absence. Regulators prepare mandates. Most critically, AI-native competitors exploit governance gaps with algorithmic efficiency.

For boards, the choice is stark: develop AI literacy now, while you can still shape your approach, or scramble to catch up after activists, regulators, or competitors force your hand. In the post-Enron era, boards asked, "Do we have a qualified financial expert?" In the AI era, the question becomes, "Is every director AI literate?" The answer will determine not just governance quality but corporate survival. Because in a world where algorithms drive business, directors who can't govern AI can't govern at all.


CNBC
3 days ago
- CNBC
Microsoft says goodbye to the Windows blue screen of death
It's a bittersweet day for Windows users. Microsoft is scrapping its iconic "blue screen of death," known for appearing during unexpected restarts on Windows computers. The company revealed a new black iteration in a blog post on Thursday, saying that it is "streamlining the unexpected restart experience." The new black unexpected-restart screen is slated to launch this summer on Windows 11 24H2 devices, the company said. Microsoft touted the updates as an "easier" and "faster" way to recover from restarts. The software giant's blue screen of death dates back to the early 1990s, according to longtime Microsoft developer Raymond Chen. Microsoft also said it plans to update the user interface to match the Windows 11 design and cut downtime during restarts to two seconds for the majority of users. "This change is part of a larger continued effort to reduce disruption in the event of an unexpected restart," Microsoft wrote. The iconic blue screen was seemingly everywhere in July 2024, after a faulty update from CrowdStrike crashed computer systems around the world.


Mint
10-06-2025
- Business
- Mint
India prepares reporting standard as AI failures may hold clues to managing risks
India is framing guidelines for companies, developers and public institutions to report artificial intelligence-related incidents as the government seeks to create a database to understand and manage the risks AI poses to critical infrastructure.

The proposed standard aims to record and classify problems such as AI system failures, unexpected results, or harmful effects of automated decisions, according to a new draft from the Telecommunications Engineering Centre (TEC). Mint has reviewed the document released by the technical arm of the Department of Telecommunications (DoT). The guidelines will ask stakeholders to report events such as telecom network outages, power grid failures, security breaches, and AI mismanagement, and to document their impact, according to the draft.

"Consultations with stakeholders are going on pertaining to the draft standard to document such AI-related incidents. TEC's focus is primarily on the telecom and other critical digital infrastructure sectors such as energy and power," said a government official, speaking on the condition of anonymity. "However, once a standard to record such incidents is framed, it can be used interoperably in other sectors as AI is being used everywhere."

The plan is to create a central repository and pitch the standard globally to the United Nations' International Telecommunication Union, the official said. Recording and analysing AI incidents is important because system failures, bias, privacy breaches, and unexpected results have raised concerns about how the technology affects people and society. "AI systems are now instrumental in making decisions that affect individuals and society at large," TEC said in the document proposing the draft standard. "Despite their numerous benefits, these systems are not without risks and challenges." Queries emailed to TEC didn't elicit a response till press time.

Incidents similar to the recent CrowdStrike incident, the largest IT outage in history, can be reported under India's proposed standard. Any malfunction in chatbots, cyber breaches, telecom service quality degradation, IoT sensor failures, and the like will also be covered. The draft requires developers, companies, regulators, and other entities to report the name of the AI application involved in an incident; the cause, location, and industry/sector affected; and the severity and kind of harm it caused.

Like the OECD AI Monitor

The TEC's proposal builds on a recommendation from a MeitY sub-committee on 'AI Governance and Guidelines Development'. The panel's report in January had called for the creation of a national AI incident database to improve transparency, oversight, and accountability. MeitY is also developing a comprehensive governance framework for the country, with a focus on fostering innovation while ensuring responsible and ethical development and deployment of AI.

According to the TEC, the draft defines a standardized schema for AI incident databases in telecommunications and critical digital infrastructure. "It also establishes a structured taxonomy for classifying AI incidents systematically. The schema ensures consistency in how incidents are recorded, making data collection and exchange more uniform across different systems," the draft document said.
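The draft itself is described above only in prose, so as a rough illustration, here is a minimal sketch of what one incident record under such a schema might look like, covering the fields Mint reports the draft asks for (application name, cause, location, sector, severity, and kind of harm). All field names, the severity scale, and the sample values below are hypothetical; none are taken from the TEC draft.

```python
# Hypothetical sketch of a single AI incident record; names and values
# are illustrative only, not taken from the TEC draft standard.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class Severity(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"
    CRITICAL = "critical"


@dataclass
class AIIncidentReport:
    application_name: str   # AI application involved in the incident
    cause: str              # what triggered the failure
    location: str           # where the incident occurred
    sector: str             # industry/sector affected, e.g. "telecom", "power"
    severity: Severity      # graded impact level
    harm: str               # kind of harm caused (outage, bias, breach, ...)
    reported_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )


# Example: recording a telecom service-quality degradation.
report = AIIncidentReport(
    application_name="network-traffic-optimizer",
    cause="model mispredicted load and throttled cell sites",
    location="example region",
    sector="telecom",
    severity=Severity.HIGH,
    harm="degraded call quality for roughly two hours",
)
```

A fixed severity scale plus free-text harm descriptions would mirror the draft's reported goal: records uniform enough to aggregate and exchange across sectors, while still capturing incident-specific detail.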
India's proposed framework is similar to the AI Incidents Monitor of the Organization for Economic Co-operation and Development (OECD), which documents incidents to help policymakers, AI practitioners, and other stakeholders worldwide gain valuable information about the real-world risks and harms posed by the technology.

"So far, most of the conversations have been primarily around first principles of ethical and responsible AI. However, there is a need to have domain and sector-specific discussions around AI safety," said Dhruv Garg, a tech policy lawyer and partner at the Indian Governance and Policy Project (IGAP). "We need domain specialist technical bodies like TEC for setting up a standardized approach to AI incidents and risks of AI for their own sectoral use cases," Garg said. "Ideally, the sectoral approach may feed into the objective of the proposed AI Safety Institute at the national level and may also be discussed internationally through the network of AI Safety Institutes."

Need for self-regulation

In January, MeitY announced the IndiaAI Safety Institute under the ₹10,000 crore IndiaAI Mission to address AI risks and safety challenges. The institute focuses on risk assessment and management, ethical frameworks, deepfake detection tools, and stress-testing tools.

"Standardisation is always beneficial as it has generic advantages," said Satya N. Gupta, former principal advisor at the Telecom Regulatory Authority of India (Trai). "Telecom and Information and Communication Technology (ICT) cuts across all sectors and, therefore, once standards to mitigate AI risks are formed here, then other sectors can also take a cue."

According to Gupta, recording AI issues should start with guidelines and self-regulation, as enforcing these norms will increase the compliance burden on telecom operators and other companies. The MeitY sub-committee had recommended that the AI incident database should not be started as an enforcement tool and that its objective should not be to penalise people who report AI incidents. "There is a clarity within the government that the plan is not to do fault finding with this exercise but help policy makers, researchers, AI practitioners, etc., learn from the incidents to minimize or prevent future AI harms," the official cited above said.


Yahoo
06-06-2025
- Business
- Yahoo
CrowdStrike (CRWD) Price Target Raised to $515 as AI Cybersecurity Demand Soars
In this article, we take a look at where CrowdStrike Holdings, Inc. (NASDAQ:CRWD) stands against other AI stocks on Wall Street's radar.

On June 2nd, Rosenblatt analyst Catherine Trebnick raised the price target on CrowdStrike Holdings, Inc. (NASDAQ:CRWD) to $515.00 (from $450.00) while maintaining a 'Buy' rating. The price target revision reflects the firm's optimism about CrowdStrike's future financial outlook. According to the analysts, the growing trend toward IT consolidation is improving CrowdStrike's performance. Annual recurring revenue (ARR) and revenue growth are anticipated to align with market estimates, projecting a 21% and 20% increase, respectively. The firm further noted that businesses, despite being careful with spending, are choosing CrowdStrike for its comprehensive AI-powered security solutions.

CrowdStrike's Q1 report is anticipated today, June 3rd, with analysts estimating an 'inline to marginally better quarter, fueled by the persistent IT consolidation trend.' The firm also noted that its increased target multiple on the shares is backed by the 31% expansion in cybersecurity sector multiples over the past two months, as well as optimism in CrowdStrike's 'strong execution and broad platform tailored to the key IT consolidation trend.'

CrowdStrike Holdings, Inc. (NASDAQ:CRWD) is a leader in AI-driven endpoint and cloud workload protection.

Disclosure: None. This article was originally published at Insider Monkey.