Latest news with #AIgovernance


Forbes
7 hours ago
- Business
- Forbes
McDonald's '123456' Password Scare Reframes Responsible AI Debate
A security flaw on the McHire platform jeopardized 64 million applicants' data. Set aside aspirational AI rhetoric, alarmist consultant pitches and techno-babble. AI success requires candor about incentives, incompetence and indifference. McDonald's learned that harsh lesson (in a relatively costless way) when two security researchers used '123456' as both the username and password to gain, astonishingly, full access to the Golden Arches hiring platform and to over 64 million applicants' personal data. The noble cyber sleuths, Ian Carroll and Sam Curry, reported the flaw to McDonald's and its AI vendor, Paradox, for swift technical resolution. Had nefarious actors found the vulnerability first, McDonald's leadership would be mired in a costly, public crisis.

So, will the fast-food goliath learn from this 'near-miss' to improve tech governance? Will others tap this averted disaster for overdue responsible AI introspection and action? It depends. Widespread and hushed AI deployment problems need thornier fixes than many boards and senior executives will acknowledge, admit or address.

Super-sized opportunities

Workplace crises can be proactively prevented (or eventually explained) by tackling incentives, incompetence and indifference with stewardship, capability and care. The Golden Arches 'near miss' exemplifies that, and the timing couldn't be better. While 88% of executives surveyed by PwC expect agentic AI spending to increase this year, many struggle to articulate how AI will drive competitive advantage. Nearly 70% indicated that half or fewer of their workforce interacts with agents daily. Indiscriminately 'throwing money' at issues can create more problems than it solves. Here's a better start.

Dissect incentives. Talent, culture and bureaucratic entrenchment stymie big firms desperate to innovate. Nimble, bootstrapped startups tantalizingly fill those voids, but crave revenue and reputation. Stalled AI implementations only fuel that magnetism. Typically, it is the larger organization that makes headlines when deals falter. How many leadership teams meaningfully assess third-party risk from an incentives perspective? Or do expedited results appeal more strongly to their own hunger for compensation and prestige? Is anyone seriously assessing which party has more (or less) to lose?

Nearly 95% of McDonald's 43,000 restaurants are franchised. With over 2 million workers and aggressive growth aims, automating job applications is a logical AI efficiency move. Its selected vendor, whose tagline boasts 'meet the AI assistant for all things hiring', seemed like a natural partner. At what hidden costs? Successful strategic alliances require an 'outside-in' look at a counterparty's interests. Three of the seven-member Paradox board are private equity partners, including chair Mike Gregoire. In Startups Declassified, acclaimed business school professor and tech thought leader Steve Andriole emphasizes how critical flagship revenue is to a startup's valuation: 'There's no more important start-up activity than sales — especially important are the 'lighthouse' customers willing to testify to the power and greatness of products and services. Logo power is [vital] to start-ups.' 'Remember that no one wants to buy start-ups unless the company has killer intellectual property or lists of recurring customers. Profitable recurring revenue is nirvana. Exits occur when a start-up becomes empirically successful,' he continued.

Assess skill and will.
Despite its global presence, digital strategy imperatives and daily transaction volume, the 2025 McDonald's proxy reveals three common AI-era oversight shortfalls: inadequate boardroom cyber expertise, no technology committee and cybersecurity relegated to audit oversight. Those are serious signaling problems. In fact, the word 'cybersecurity' appears only nine times across the 100-page filing. In the director qualifications section, information technology is grouped with cybersecurity and vaguely defined as 'contributes to an understanding of information technology capabilities, cloud computing, scalable data analytics and risks associated with cybersecurity matters.' Just four of the eleven directors are tagged as such. While three of those four worked in the tech sector, none has credible IT or cybersecurity expertise. Intriguingly, board member and former Deloitte CEO Cathy Engelbert, not one of the four, has the best experience to push for stronger governance. Is she, now the prominent WNBA league commissioner, willing to take on such a contentious risk? To start, she can tap longtime McDonald's CFO Ian Borden and auditors EY for guidance and ideas on bolstering board composition.

When tech issues arise, fingers point, by default, at the IT team. However, responsible AI design and deployment truly require cross-functional leadership commitment. McDonald's CEO Chris Kempczinski routinely touts a 4D strategy (digital, delivery, drive-thru and development) and characterizes the fast-food frontrunner's tech edge as 'unmatched.' That bravado brings massive expectations, and he can't be happy with the '123456' password distraction. With compensation approaching $20 million annually, he also has a responsible AI obligation to current and future McDonald's workers making, on average, 1,014 times less, as well as to the 40,000 franchisees. Valerie Ashbaugh, McDonald's commercial products and platform SVP, rotates into the US CIO seat next month. The timing is ideal to institute policies, procedures and accountability for stronger third-party IT access controls.

Alan Robertson, UK ambassador to the Global Council for Responsible AI, astutely notes: 'The damage is done — not by hackers, but by sheer negligence. McDonald's has pinned the issue on Paradox. Paradox says they fixed it and have since launched a bug bounty program. It raises bigger questions for all of us. Who audits the third-party vendors we automate hiring with? Where does the liability sit when trust is breached at this scale? And what does "responsible AI" even mean when basic cybersecurity hygiene isn't in place? We talk about ethics — but sometimes it's just about setting a password.' That's prototypical indifference, especially when the access key is '123456.'

Likewise, HR leaders have a chance to meaningfully shape AI rollouts. 'HR needs to resist the urge to "just go along." There will be many HR leaders who simply wait for the various software lines they currently license to add AI functionality. To do so would be a mistake. AI will become a critical part of the employee experience and HR should have a hand in that,' advises AthenaOnline SVP of customer solutions Mark Jesty. At McDonald's, EVP and global chief people officer Tiffanie Boyd holds that golden opportunity to elevate responsible AI on the board and c-suite agendas. Will she?
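The hygiene Robertson describes is mundane by design: a weak-credential gate is a policy check, not an AI problem. As a rough illustration only, here is a minimal sketch of the kind of check a provisioning system could run before any account, human or vendor, goes live. The denylist, length and entropy thresholds are illustrative assumptions, not details of the McHire or Paradox systems.

```python
# Minimal sketch: reject known-default and low-entropy credentials at
# account provisioning. Denylist and thresholds are illustrative
# assumptions, not any vendor's actual policy.
import math
import re

COMMON_DEFAULTS = {"123456", "password", "admin", "letmein", "qwerty"}
MIN_LENGTH = 12

def estimate_entropy_bits(password: str) -> float:
    """Rough entropy estimate from character-class pool size and length."""
    pool = 0
    if re.search(r"[a-z]", password):
        pool += 26
    if re.search(r"[A-Z]", password):
        pool += 26
    if re.search(r"[0-9]", password):
        pool += 10
    if re.search(r"[^a-zA-Z0-9]", password):
        pool += 33
    return len(password) * math.log2(pool) if pool else 0.0

def validate_credentials(username: str, password: str) -> list[str]:
    """Return a list of policy violations; an empty list means acceptable."""
    problems = []
    if password.lower() in COMMON_DEFAULTS:
        problems.append("password is a known default")
    if password.lower() == username.lower():
        problems.append("password matches username")
    if len(password) < MIN_LENGTH:
        problems.append(f"password shorter than {MIN_LENGTH} characters")
    if estimate_entropy_bits(password) < 50:
        problems.append("estimated entropy below 50 bits")
    return problems

# The McHire credential pair fails every rule:
print(validate_credentials("123456", "123456"))
```

A real deployment would layer such a gate under multi-factor authentication and periodic third-party access reviews rather than rely on password rules alone.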
Responsibility knocks

The McHire 'near-miss' highlights how boards and c-suites can remain dangerously unprepared for AI design, deployment and oversight. Strategy speed and tech wizardry must never come at stewardship's expense. 'If you're deploying AI without basic security hygiene, you're not innovating. You're endangering people. Security is not optional,' implores CEO Ivan Rahman. Who's opting for drive-thru AI governance?


Forbes
2 days ago
- Science
- Forbes
United Nations Considering These Four Crucial Actions To Save The World From Dire AGI And Killer AI Superintelligence
The United Nations releases an important report on AGI and emphasizes four key recommendations to help save the world from dire outcomes.

In today's column, I examine a recently released high-priority report by the United Nations that emphasizes what must be done to prepare for the advent of artificial general intelligence (AGI). Be aware that the United Nations has had an ongoing interest in how AI is advancing and what kinds of international multilateral arrangements and collaborations ought to be taking place (see my coverage at the link here). The distinctive element of this latest report is that the focus right now needs to be on our reaching AGI, a pinnacle type of AI. Many in the AI community assert that we are already nearing the cusp of AGI and, in turn, will soon thereafter arrive at artificial superintelligence (ASI). For the sake of humanity and global survival, the U.N. seeks to have a say in the governance and control of AGI and ultimately ASI. Let's talk about it.

This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

Heading Toward AGI And ASI

First, some fundamentals are required to set the stage for this weighty discussion. There is a great deal of research going on to further advance AI. The general goal is to either reach artificial general intelligence (AGI) or maybe even the outstretched possibility of achieving artificial superintelligence (ASI). AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here.

We have not yet attained AGI. In fact, it is unknown whether we will reach AGI, or whether AGI may be achievable in decades or perhaps centuries from now. The AGI attainment dates that are floating around are wildly varying and wildly unsubstantiated by any credible evidence or ironclad logic. ASI is even more beyond the pale when it comes to where we are currently with conventional AI.

United Nations Is Into AI And AGI

I've previously explored numerous U.N. efforts regarding where AI is heading and how society should best utilize advanced AI. For example, I extensively laid out the ways that the U.N. recommends that AI be leveraged to attain the vaunted Sustainable Development Goals (SDGs), see the link here. Another important document by the U.N. is the UNESCO-led agreement on the ethics of AI, which was the first-ever global consensus involving 193 countries on the suitable use of advanced AI (see my analysis at the link here).

The latest notable report is entitled 'Governance of the Transition to Artificial General Intelligence (AGI): Urgent Considerations for the UN General Assembly' and was prepared and submitted to the Council of Presidents of the United Nations General Assembly (UNCPGA). The bottom line is that a strong case can be made that if AGI is let loose and insufficiently overseen, society is going to be at grave risk. A question arises as to how the nations of the world can unite to try and mitigate that risk.
Aptly, the United Nations believes it is the appropriate body to take on that challenge.

UN Given Four Big Asks

What does the U.N. report say about urgently needed steps for coping with the advent of AGI? Four crucial recommendations are stridently called for: (1) establish a global AGI Observatory, (2) craft AGI best practices and certification, (3) adopt a U.N. Framework Convention on AGI, and (4) study the feasibility of a dedicated U.N. AGI agency. Those recommendations will be considered by the Council of Presidents of the United Nations General Assembly. By and large, enacting one or more of them would involve some form of U.N. General Assembly resolution and would undoubtedly need to be integrated into other AI initiatives of the United Nations. It is possible that none of the recommendations will proceed. Likewise, they might be revised or reconstructed and employed in other ways. I'll keep you posted as the matter progresses. Meanwhile, let's do a bit of unpacking on those four recommendations, one by one, followed by a provocative or perhaps engaging conclusion.

Global AGI Observatory

The first of the four recommendations entails establishing a global AGI Observatory that would keep track of what's happening with AGI. Think of this as a specialized online repository that would serve as a curated source of information about AGI. I agree that this would potentially be immensely helpful to the U.N. Member States, along with being useful for the public at large. You see, the problem right now is that there is a tremendous amount of misinformation and disinformation concerning AGI being spread around, often wildly hyping or at times undervaluing the advent of AGI and ASI. Assuming that the AGI Observatory were properly devised and suitably careful in what it collected and shared, having a source about AGI that is reliable and balanced would be quite useful.

One potential criticism of such an AGI Observatory is that it perhaps duplicates similar commercial or national collections about AGI. Another qualm is that if the AGI Observatory were allowed to become biased, it would misleadingly carry the aura of something balanced yet actually be tilted in a directed way.

Best Practices And Certification For AGI

The second recommendation requests that a set of AGI best practices be crafted. This would aid nations in understanding what kind of governance structures ought to be considered for sensibly overseeing AGI in their respective countries. It could spur nations to proceed on a level playing field. Furthermore, it reduces the proverbial reinventing of the wheel: nations could simply adopt or adapt an already presented set of AGI best practices. No need to write such stipulations from scratch. In a similar vein, setting up certifications for AGI would be well-aligned with the AGI best practices. AI makers and countries as a whole would hopefully prize being certified as to their AGI and its conformance to vital standards.

A criticism on this front is that if the U.N. does not make the best practices compulsory, and likewise if the AGI certification is merely optional, few if any countries will go to the trouble of adopting them. In that sense, the whole contrivance is mainly window dressing and not a feet-to-the-fire consideration.

U.N. Framework Convention

In the parlance of the United Nations, it is somewhat expected to call for a Framework Convention on significant topics.
Since AGI is unquestionably a significant topic, here's a snapshot excerpt of what is proposed in the report: 'A Framework Convention on AGI is needed to establish shared objectives and flexible protocols to manage AGI risks and ensure equitable global benefit distribution. It should define clear risk tiers requiring proportionate international action, from standard-setting and licensing regimes to joint research facilities for higher-risk AGI, and red lines or tripwires on AGI development.'

The usual criticism of these kinds of activities is that they can become a bureaucratic nightmare that doesn't produce much of anything substantive. They can also stretch into a lengthy affair, which is especially disconcerting if you believe that AGI is on the near horizon.

Formulate U.N. AGI Agency

The fourth recommendation calls for a feasibility study to assess whether a new U.N. agency ought to be set up: a specialized U.N. agency devoted to the topic of AGI. The report stresses that this would need to be quickly explored, approved, and set in motion on an expedited basis. An analogous entity is the International Atomic Energy Agency (IAEA). You probably know that the IAEA seeks to guide the world toward peaceful uses of nuclear energy. It has a founding treaty that provides self-guidance within the IAEA. Overall, the IAEA reports to the U.N. General Assembly and the U.N. Security Council.

A criticism of a U.N. AGI Agency is that it might get bogged down in international squabbling. There is also a possibility that it would inhibit the creative use of AGI rather than merely serving as a risk-reducing guide. To clarify, some argue against too many regulating and overseeing bodies, since these might undercut innovative uses of AGI. We might inadvertently turn AGI into something a lot less impressive and valuable than we had earlier hoped for. Sad face.

Taking Action Versus Sitting Around

Do you think that we should be taking overt governance action about AGI, such as the recommendations articulated in the U.N. AGI report? Some would say that yes, we must act immediately. Others would suggest we take our sweet time; better to get things right than rush them along. Still others might say there isn't any need to do anything at all. Just wait and see.

As food for thought on that thorny conundrum, here's a memorable quote attributed to Albert Einstein: 'The world will not be destroyed by those who do evil, but by those who watch them without doing anything.' Mull that over and then make your decision on what we should do next about AGI and global governance issues. The fate of humanity is likely on the line.


Tahawul Tech
4 days ago
- Tahawul Tech
Cisco Talos report shows LLMs are being weaponised by cybercriminals
A comprehensive report from Cisco Talos has shown that Large Language Models (LLMs) are being increasingly weaponised to launch cyberattacks at scale. Cisco Talos has observed growing use of uncensored, jailbroken and criminal-designed LLMs to support phishing, malware development and other malicious activities. The findings also highlight how both custom-built and jailbroken (modified) versions of LLMs are being used to generate malicious content at scale, signalling a new chapter in the cyber threat landscape.

The report explores how threat actors are bypassing the built-in safeguards of legitimate AI tools, creating harmful alternatives that cater to criminal demands. These unregulated models can produce phishing emails, malware and viruses, and even assist in scanning websites for vulnerabilities. Some LLMs are being connected to external tools, such as email accounts and credit card checkers, to streamline and amplify attack chains.

Commenting on the report's findings, Fady Younes, Managing Director for Cybersecurity at Cisco Middle East, Africa, Türkiye, Romania and CIS, stated: 'While large language models offer enormous potential for innovation, they are also being weaponised by cybercriminals to scale and refine their attacks. This research highlights the critical need for AI governance, user vigilance, and foundational cybersecurity controls. By understanding how these tools are being exploited, organisations can better anticipate threats and reinforce their defenses accordingly. With recent innovations like Cisco AI Defense, we are committed to helping enterprises harness end-to-end protection as they build, use, and innovate with AI.'

Cisco Talos researchers documented the emergence of malicious LLMs on underground forums, including names such as FraudGPT, DarkGPT and WhiteRabbitNeo. These tools are advertised with features like phishing kit generation and ransomware creation, alongside card verification services. Interestingly, even the criminal ecosystem is not without its pitfalls: many so-called 'AI tools' are themselves scams targeting fellow cybercriminals.

Beyond harmful models, attackers are also jailbreaking legitimate AI platforms using increasingly sophisticated techniques. These jailbreaks aim to bypass safety guardrails and alignment training to produce responses that would normally be blocked. The report also warns that LLMs themselves are becoming targets: attackers are inserting backdoors into downloadable AI models so that, once activated, they behave as the attacker intends. As a result, models that draw on external data sources are exposed to risk if threat actors tamper with those sources.

Cisco Talos' findings underscore the dual nature of emerging technologies, offering powerful benefits but also introducing new vulnerabilities. As AI becomes more commonplace in enterprise and consumer systems, it is essential that security measures evolve in parallel. This includes scanning for tampered models, validating data sources, monitoring abnormal LLM behaviour and educating users on the risks of prompt manipulation. Cisco Talos continues to lead the global cybersecurity community by sharing actionable intelligence and insights. The full report, Cybercriminal Abuse of Large Language Models, is available at
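On the 'scanning for tampered models' point, one foundational control is to pin a cryptographic digest of any downloaded model artifact and verify it before loading. The sketch below is a minimal illustration under my own assumptions (the filename and digest value are hypothetical); it is not drawn from the Talos report or any Cisco tooling.

```python
# Minimal sketch: verify a downloaded model artifact against a pinned
# SHA-256 digest before loading, as one defense against tampered models.
# The filename and digest below are hypothetical placeholders.
import hashlib
from pathlib import Path

PINNED_DIGESTS = {
    # filename -> SHA-256 digest published by the model's maintainer
    "model.safetensors": "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file so large model weights need not fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def verify_model(path: Path) -> None:
    """Raise before any weights are loaded if the artifact is unexpected."""
    expected = PINNED_DIGESTS.get(path.name)
    if expected is None:
        raise ValueError(f"no pinned digest for {path.name}; refusing to load")
    if sha256_of(path) != expected:
        raise ValueError(f"{path.name} digest mismatch: possible tampering")

# Usage: call verify_model(Path("model.safetensors")) before loading weights.
```

Digest pinning catches a swapped or backdoored artifact, though it does not address the report's other concerns, such as poisoned external data sources or prompt manipulation, which need their own controls.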


South China Morning Post
5 days ago
- Business
- South China Morning Post
Driverless bus incident points to Hong Kong's need for AI governance
On June 22, two driverless buses collided at an intersection at Hong Kong International Airport. No injuries occurred and damage was minor. Yet the Airport Authority suspended autonomous bus services, which suggests how quickly public trust can evaporate without robust artificial intelligence (AI) governance.

This was no isolated glitch. According to the authority, both buses arrived simultaneously at an uncontrolled junction. Their sensors failed to coordinate a right-of-way decision, a known edge case in autonomous systems.

Hong Kong is investing billions in AI, from supercomputers to smart traffic. But leadership requires more than funding; it demands accountable governance. Unlike the European Union, whose AI Act (set to take full effect in 2026) would classify autonomous buses as 'high-risk' systems, Hong Kong relies primarily on guidelines. There is no legal obligation for operators to follow internationally recognised protocols for pre-deployment testing or third-party audits of AI safety, for instance. Had ISO 42001 certification been required, the operator would likely have implemented continuous monitoring to detect and resolve sensor conflicts before deployment. Under the EU AI Act's risk-based framework, real-time human oversight would be mandatory for systems of this kind.
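To make the edge case concrete: two vehicles that detect simultaneous arrival need a tie-breaking rule that both evaluate identically, otherwise each may conclude it has priority. The sketch below shows one deterministic arbitration scheme; the field names, tie window and priority rule are illustrative assumptions, not the airport operator's actual logic.

```python
# Minimal sketch of a deterministic right-of-way tie-break for two
# autonomous vehicles at an uncontrolled junction. All fields and rules
# here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Vehicle:
    vehicle_id: str
    arrival_time: float   # seconds, from a shared synchronized clock
    heading_deg: float    # direction of travel: 0 = north, 90 = east

def has_right_of_way(a: Vehicle, b: Vehicle, tie_window: float = 0.5) -> bool:
    """True if vehicle `a` may proceed and `b` should yield."""
    # Clear arrival-order case: first to arrive goes first.
    if abs(a.arrival_time - b.arrival_time) > tie_window:
        return a.arrival_time < b.arrival_time
    # Near-simultaneous arrival (the reported edge case): fall back to a
    # rule both vehicles compute identically, e.g. yield to the vehicle
    # approaching from the right, with vehicle ID as a final tie-break.
    angle = (b.heading_deg - a.heading_deg) % 360
    if 180 < angle < 360:   # b approaches from a's right: a yields
        return False
    if 0 < angle < 180:     # b approaches from a's left: a proceeds
        return True
    # Same or opposite headings: lowest vehicle ID proceeds.
    return a.vehicle_id < b.vehicle_id

bus_a = Vehicle("AV-07", 1750000000.0, 0.0)   # northbound
bus_b = Vehicle("AV-12", 1750000000.2, 90.0)  # eastbound, within tie window
print(has_right_of_way(bus_a, bus_b))  # True: AV-12 is on AV-07's left
```

The design point is determinism: as long as both vehicles apply the same rule to the same shared state, they cannot both conclude they have priority, which is precisely the failure mode continuous monitoring and pre-deployment testing are meant to catch.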


Asharq Al-Awsat
07-07-2025
- Business
- Asharq Al-Awsat
Saudi, Kuwaiti AI Associations Partner to Advance Regional AI Governance
The Artificial Intelligence Governance Association (AIGA), under the technical supervision of the Saudi Data and AI Authority (SDAIA), has signed a memorandum of understanding with the Kuwaiti Association of Artificial Intelligence of Things. The deal aims to foster collaboration in developing and implementing AI governance standards, sharing expertise, and driving scientific research and innovation in Artificial Intelligence of Things (AIoT). The agreement represents the first international MoU signed by the AIGA, signaling the beginning of expanded efforts to promote the responsible governance of advanced technologies, according to SPA. The partnership reflects the commitment of both associations to support regional initiatives in AI technology development, enhance governance frameworks, and exchange knowledge, ultimately advancing a responsible and sustainable innovation ecosystem that benefits communities and supports national and regional efforts toward a knowledge-based economy driven by advanced technologies.