Latest news with #EyeonAI
25-06-2025
Insurance companies are embracing AI. But they aren't talking much about ROI
Hello and welcome to Eye on AI. In this edition…a mega seed round for ex-OpenAI CTO Mira Murati's new startup…the impact of AI on cognitive skills…and why the effects of AI automation may vary so much across industries.

Insurance is not considered the most cutting-edge industry. But AI has been making slow, steady inroads in the sector for years. Many companies have begun using computer vision applications that automatically assess damage—whether that is to cars following a collision or to the roofs of houses following a major storm—to help claims adjusters work more efficiently. Companies are also using machine learning algorithms to help detect fraud and build risk models for underwriting. And, of course, like many other industries, insurance companies are using AI to boost productivity in many support functions, from chatbots that can answer customer queries, to AI that can help design marketing materials, to AI coding assistants for internal tech teams.

But which insurance companies are doing it best? That's what the London-based research and analytics firm Evident Insights set out to discover with a new index assessing major insurance firms' AI prowess. Evident has become known in recent years for its detailed benchmarking of banks' AI capabilities, but this is the first time the research firm has moved beyond banking to look at another sector.

Like its banking index, Evident's assessment is based almost entirely on quantitative metrics derived mostly from public sources of information—management statements in financial disclosures, press releases, company websites, social media accounts, patent filings, LinkedIn profiles, and news articles. In all, Evident looked at 76 individual metrics, organized into four 'pillars' that the research firm said it believes are critical to deploying AI successfully: talent (which counts for 45% of the overall ranking), innovation (30%), leadership (15%), and transparency of responsible AI activity (10%). It used these to rank the 30 largest North American and European insurers as judged by total premiums underwritten or total assets under management.
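To make the weighting arithmetic concrete, here is a minimal Python sketch of how a composite score built from those four pillars could be computed. It assumes, hypothetically, that each pillar is already normalized to a 0-100 score; Evident has not published its exact aggregation method, so treat this as an illustration of the weights, not the firm's methodology.

```python
# Hypothetical sketch of a weighted composite score using Evident's
# published pillar weights. The pillar scores below are made up for
# illustration; the firm's actual aggregation method is not public.

PILLAR_WEIGHTS = {
    "talent": 0.45,
    "innovation": 0.30,
    "leadership": 0.15,
    "transparency": 0.10,
}

def composite_score(pillars: dict[str, float]) -> float:
    """Weighted sum of pillar scores, each assumed normalized to 0-100."""
    return sum(PILLAR_WEIGHTS[name] * pillars[name] for name in PILLAR_WEIGHTS)

# Two fictional insurers: one leads on talent, the other is balanced.
insurers = {
    "Insurer A": {"talent": 95, "innovation": 60, "leadership": 30, "transparency": 35},
    "Insurer B": {"talent": 80, "innovation": 70, "leadership": 75, "transparency": 80},
}

for name, pillars in sorted(insurers.items(),
                            key=lambda item: composite_score(item[1]),
                            reverse=True):
    print(f"{name}: {composite_score(pillars):.1f}")
```

On these made-up numbers, the insurer that leads on talent still finishes second once the weights are applied, the same dynamic that costs USAA a higher ranking below.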
Of these insurers, Axa and Allianz emerged as clear leaders in Evident's assessment. They were the only two to rank in the top five across all four pillars, and they had a substantial lead over third-place insurer USAA. Alexandra Mousavizadeh, the cofounder and co-CEO of Evident, tells me that the result is surprising, in part because both Axa and Allianz are based in Europe, where large companies have generally been seen as lagging their North American peers in AI adoption. (And in Evident's banking index, all of the highest-ranked firms are North American.) But Mousavizadeh says that she thinks Axa and Allianz have a common corporate cultural trait that may explain their AI dominance. 'My theory on this is that it's embedded in an engineering culture,' she says. 'Axa and Allianz have been doing this for a very long time and if you look at their histories, there has been much more of an engineering leadership and engineering mindset.'

Mousavizadeh says that claims and underwriting automation are both big engineering challenges that require large teams of skilled developers and technology experts to make work at scale. 'You have got to have more engineers,' she says. 'For that last mile of getting a use case into production, you have to have AI product managers, and you have to have AI software engineering.'

Companies that invest most heavily in human AI expertise are most likely to excel at using AI to run their businesses more efficiently, opening up an ever-widening gap between these companies and the AI laggards. (Of course, in Evident's methodology, it helps if management talks about what it's doing with AI and publicizes its AI governance policies too. USAA actually ranks first on Evident's talent pillar, but falls to third place because it ranks near the bottom of the pack on both 'leadership'—which is mostly about management's statements about how the company is using AI—and 'transparency of responsible AI policies.')

Still, as in many industries, there seems to be a substantial gap in the insurance sector between AI hype and actual ROI. Of the 30 insurers Evident evaluated, only 12 had disclosed at least one AI use case with 'a tangible business outcome.' Just three insurers—Intact Financial, Zurich Insurance Group, and Aviva—had publicly disclosed a monetary return from their AI efforts. That's a strikingly small number.

The most transparent of this group was Canada-based Intact Financial, a property and casualty insurer that said publicly in 2024 that it had invested $500 million in technology (that's all tech, not just AI) across its business, had deployed 500 AI models, and had seen $150 million in benefit so far. One of its use cases involved AI models that transform speech to text, with language models then running on top of those transcripts to assess how well its human customer service agents handled the up to 20,000 customer calls the company receives each day.
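The Intact use case is also easy to picture as a pipeline. Here is a minimal sketch, assuming a generic speech-to-text service and a generic language model behind two placeholder functions; it illustrates the transcribe-then-grade pattern described above and is not Intact's actual system.

```python
# Sketch of a call-quality review pipeline: transcribe audio, then have a
# language model grade the transcript against a service rubric.
# `transcribe` and `call_llm` are placeholders for whatever speech-to-text
# service and chat-completion API you actually use.

RUBRIC = """Rate this customer-service call transcript from 1 to 5 on:
greeting, diagnosis, resolution, and tone.
Reply with one line per criterion, e.g. 'tone: 4'."""

def transcribe(audio_path: str) -> str:
    # Placeholder: swap in a real speech-to-text call here.
    return "Agent: Thanks for calling... Customer: My claim is stuck..."

def call_llm(system: str, user: str) -> str:
    # Placeholder: swap in a real chat-completion call here.
    return "greeting: 5\ndiagnosis: 4\nresolution: 3\ntone: 4"

def review_call(audio_path: str) -> dict[str, int]:
    """Transcribe one call and parse the model's per-criterion scores."""
    transcript = transcribe(audio_path)
    reply = call_llm(system=RUBRIC, user=transcript)
    scores = {}
    for line in reply.splitlines():
        criterion, _, score = line.partition(":")
        scores[criterion.strip()] = int(score)
    return scores

print(review_call("calls/example-call.wav"))
```

At 20,000 calls a day, sampling and batching would matter in practice, but the basic shape of the pipeline stays the same.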
That is still a cost-savings example—a way of boosting the bottom line—and not one in which a company is using AI to grow its sales or move into new business areas. Evident found that insurers were primarily applying AI this way—attacking the industry's largest cost centers, namely claims processing, customer service, and underwriting. As the research firm notes: 'Revenue-generating AI is yet to appear on our outside-in assessment.'

The story here isn't just about insurance—it's about every industry grappling with AI. Executives everywhere are still figuring out which AI investments will pay off, but the early winners share a common thread: they're not just buying AI tools, they're building AI teams. They're hiring engineers, experimenting relentlessly, measuring results—and then expanding the successful use cases everywhere they can. And benchmarking, like the kind Evident is doing, can play a vital role both in informing executives about what seems to be working and in pushing entire industries to adopt AI faster, as well as to be more transparent about how they're using AI and what policies they have in place around its responsible use. That's a lesson worth learning, whether you're insuring cars or building software.

With that, here's more AI news. And, before we get to the other sections, I want to flag this deep dive from my colleagues Sharon Goldman and Allie Garfinkle into the background behind Meta's $14 billion investment in Scale AI and the hiring of Scale cofounder and CEO Alexandr Wang for a major new role at Meta. Their story is a must-read. Check it out here.

Jeremy

Want to know more about how to use AI to transform your business? Interested in what AI will mean for the fate of companies, and countries? Then join me at the Ritz-Carlton, Millenia in Singapore on July 22 and 23 for Fortune Brainstorm AI Singapore. This year's theme is The Age of Intelligence. We will be joined by leading executives from DBS Bank, Walmart, OpenAI, Arm, Qualcomm, Standard Chartered, Temasek, and our founding partner Accenture, plus many others, along with key government ministers from Singapore and the region, top academics, investors, and analysts. We will dive deep into the latest on AI agents, examine the data center build-out in Asia, explore how to create AI systems that produce business value, and talk about how to ensure AI is deployed responsibly and safely. You can apply to attend here. And because you're loyal Eye on AI readers, I'm able to offer complimentary tickets to the event: just use the discount code BAI100JeremyK when you check out.
17-06-2025
AI won't cure ‘the infinite workday' unless companies reengineer work, Microsoft says
Hello and welcome to Eye on AI. In this edition…OpenAI wins a $200 million Pentagon contract…Salesforce finds AI models can't use CRM software very well…and a new study shows how AI scrapers are overwhelming cultural institutions.

Back in April, Microsoft published some research about the modern workday, drawn from data it gathers anonymously about the use of its software applications. And honestly, the conclusions were kind of depressing. It found that we are all trapped in what the company is calling 'the infinite workday.'

People start checking their emails before they even get out of bed. Then, when we are at work, the most productive hours of the day are filled with meetings and distractions. During core working hours, people are getting interrupted by messages or emails every two minutes on average—that's 275 interruptions per day—Microsoft found. Nearly half of all meetings take place between 9 a.m. and 11 a.m. or between 1 p.m. and 3 p.m., which is exactly when neuroscientists say that most people's brains are at their best for focused work and problem-solving. In fact, most people's productive potential peaks at 11 a.m., but that's exactly the most overloaded hour of the day, with chat traffic hitting its highest volume on average, along with meetings and app usage.

Things don't get better in the evenings, either. For many employees, work peaks again after dinner. With teams working across time zones, the number of meetings taking place after 8 p.m. was up 16% year over year, according to Microsoft. Many people are still checking those emails as they crawl back into bed at 10 p.m.

This exhausting schedule has helped produce what Microsoft calls a 'capacity gap'—53% of leaders say productivity must increase, but 80% of workers say they lack the time or energy to do their jobs. So what's AI got to do with this? Well, everyone is hoping that AI will save us from this perfect storm of impossible expectations meeting human limitations. But the technology itself won't do this. In fact, a lot of the ways companies are deploying AI and people are using the technology could make things worse. Think about it. If you're already drowning in meetings, emails, and constant interruptions, having AI help you write more emails and summarize more meetings isn't really solving the problem—it's just greasing the wheels of a dysfunctional system.

That was the main takeaway from my conversation last week with Jaime Teevan, Microsoft's chief scientist and technical fellow, and Alexia Cambon, one of the lead researchers on Microsoft's Work Trend Index. 'AI is delivering real productivity gains, but it's not enough,' Teevan tells me. 'The speed of business is still outpacing the way we work today.'

She says that crafting prompts for AI to perform tasks for us, such as conducting research or generating a business presentation, 'actually increases our metacognitive burden.' In other words, to write a good prompt, a person has to think clearly about the steps they want the AI to perform and provide a list of dos and don'ts. This thinking process necessitates concentration, and it also requires someone to transform things they know tacitly into explicit instructions. Having to do this 'can feel overwhelming,' Teevan says. But there are better ways to work with AI that can alleviate this burden—or at least share it. AI itself can be used to help craft prompts, for instance, Teevan says.
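To make that suggestion concrete, here is a minimal sketch of 'meta-prompting,' where the model drafts the detailed prompt from a rough goal and the human only reviews the result. The call_llm helper is a hypothetical stand-in for whatever chat-completion API you use; none of this is a specific Microsoft tool.

```python
# Minimal sketch of meta-prompting: the model, not the human, turns a
# rough goal into an explicit prompt with steps, format, and dos/don'ts.
# `call_llm` is a placeholder for any real chat-completion API.

def call_llm(system: str, user: str) -> str:
    # Placeholder: swap in your provider's SDK call here.
    return f"[drafted prompt for goal: {user}]"

META_PROMPT = (
    "You write high-quality prompts. Expand the user's rough goal into a "
    "prompt that spells out the task, the audience, the output format, "
    "explicit dos and don'ts, and one worked example."
)

def draft_prompt(rough_goal: str) -> str:
    """Offload the 'metacognitive burden' of prompt-writing to the model."""
    return call_llm(system=META_PROMPT, user=rough_goal)

# The human edits the draft instead of writing it from scratch.
print(draft_prompt("summarize this week's customer calls for the exec team"))
```

The human still reviews the drafted prompt, which is a lighter lift than articulating every tacit expectation up front.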
Cambon says that too many people are viewing AI as just another software tool. It's better, she says, to think about it as a digital colleague—something to which you can assign entire tasks or projects.

More importantly, to get the most out of AI, companies need to change their organizational structures, the way their employees work, and also how they measure value. Microsoft has identified companies it calls 'Frontier Firms' that are doing this. At these organizations, 71% of workers say their company is thriving, compared to just 37% globally.

So far, it should be said, there are not too many of these Frontier Firms out there. Out of 31,000 companies Microsoft looked at, only 840 met the criteria. Most of these companies were in tech—many of them so-called 'AI native' startups that have the benefit of being able to design their processes around AI from the start. 'They don't have to unlearn a whole load of stuff,' Cambon says. But interestingly, she says that some of the Frontier Firms were in professional services, like consulting, accounting, and law, an area where AI is rapidly disrupting traditional work processes and even challenging entire business models.

For non-AI-native companies, getting the full benefits of AI means changing organizational management and structures. 'It is about how do you externalize knowledge and make things available for AI to learn from,' Teevan says. 'It is about creating feedback loops and being very intentional about the content we create for our teams.'

Microsoft's research suggests there are some key changes that differentiate the Frontier Firms from the rest. They prioritize impact over activity, focusing on the 20% of tasks that create 80% of a business's value. They redesign workflows instead of just trying to automate them. (Rather than have AI write status reports, for instance, ask whether you need status reports in the first place.) And they increasingly use AI as agents that can handle entire workflows, not just individual tasks. In this world, employees become 'agent bosses,' Microsoft says. The Frontier Firms also tend to have much flatter organizational structures, where teams are organized around completing a specific project, not around areas of expertise.

Does Microsoft have an interest in selling this narrative in order to convince companies to buy its AI software and cloud services? Sure it does. But that doesn't mean it's wrong. It is clear that the companies that get this right will have a big advantage. And the ones that don't? They'll just have increasingly efficient chaos and burnt-out employees. With that, here's the rest of today's AI news.

Jeremy

Want to know more about how to use AI to transform your business? Interested in what AI will mean for the fate of companies, and countries? Why not join me in Singapore on July 22 and 23 for Fortune Brainstorm AI Singapore. We will dive deep into the latest on AI agents, examine the data center build-out in Asia, and talk to top leaders from government, boardrooms, and academia in the region and beyond. You can apply to attend here.

AI is reshaping work. What does it mean for your team? Fortune has unveiled a new hub, Fortune AIQ, dedicated to navigating AI's real-world impact. Fortune has interviewed and surveyed the companies at the front lines of the AI revolution. In the coming months, we'll roll out playbooks based on their learnings to help you get the most out of AI—and turn AI into AIQ. The first AIQ playbook, The 'people' aspect of AI, explores how mastering the 'human' element of an AI deployment is just as important as the technical details:
- Companies are overhauling their hiring processes to screen candidates for AI skills—and attitudes. Read more
- 'AI fatigue' is settling in as companies' proofs of concept increasingly fail. Here's how to prevent it. Read more
- AI is changing how employees train—and starting to reduce how much training they need. Read more
- AI is helping blue-collar workers do more with less as labor shortages are projected to worsen. Read more
- Everyone's using AI at work. Here's how companies can keep data safe. Read more
10-06-2025
Can AI be used to control safety-critical systems? A U.K.-funded research program aims to find out
Hello and welcome to Eye on AI. In this edition…Meta hires Scale AI founder for new 'superintelligence' drive…OpenAI on track for $10 billion in annual recurring revenue…Study says 'reasoning models' can't really reason.

Today's most advanced AI models are relatively useful for lots of things—writing software code, research, summarizing complex documents, writing business correspondence, editing, generating images and music, role-playing human interactions; the list goes on. But relatively is the key word here. As anyone who uses these models soon discovers, they remain frustratingly error-prone and erratic. So how could anyone think that these systems could be used to run critical infrastructure, such as electrical grids, air traffic control, communications networks, or transportation systems?

Yet that is exactly what a project funded by the U.K.'s Advanced Research and Invention Agency (ARIA) is hoping to do. ARIA was designed to be somewhat similar to the U.S. Defense Advanced Research Projects Agency (DARPA), with government funding for moonshot research that has potential governmental or strategic applications. The £59 million ($80 million) ARIA project, called the Safeguarded AI Program, aims to find a way to combine AI 'world models' with mathematical proofs that could guarantee that a system's outputs were safe.

David Dalrymple, the machine learning researcher who is leading the ARIA effort, told me that the idea is to use advanced AI models to create a 'production facility' that would churn out domain-specific control algorithms for critical infrastructure. These algorithms would be mathematically tested to ensure that they meet the required performance specifications. If the control algorithms pass this test, the controllers—but not the frontier AI models that developed them—would be deployed to help run critical infrastructure more efficiently.

Dalrymple (who is known by his social media handle Davidad) gives the example of the U.K.'s electricity grid. The grid's operator currently acknowledges that if it could balance supply and demand on the grid more optimally, it could save the £3 billion ($4 billion) it spends each year essentially paying to keep excess generation capacity up and running to avoid the possibility of a sudden blackout, he says. Better control algorithms could reduce those costs.

Beyond the energy sector, ARIA is also looking at applications in supply chain logistics, biopharmaceutical manufacturing, self-driving vehicles, clinical trial design, and electric vehicle battery management. Frontier AI models may now be reaching the point where they can automate algorithmic research and development, Davidad says. 'The idea is, let's take that capability and turn it to narrow AI R&D,' he tells me. Narrow AI usually refers to AI systems that are designed to perform one particular, narrowly defined task at superhuman levels, rather than an AI system that can perform many different kinds of tasks.

The challenge, even with these narrow AI systems, is then coming up with mathematical proofs to guarantee that their outputs will always meet the required technical specification. There's an entire field known as 'formal verification' that involves mathematically proving that software will always provide valid outputs under given conditions—but it's notoriously difficult to apply to neural network-based AI systems. 'Verifying even a narrow AI system is something that's very labor intensive in terms of the cognitive effort required,' Davidad says.
'And so it hasn't been worthwhile historically to do that work of verifying except for really, really specialized applications like passenger aviation autopilots or nuclear power plant control.'

This kind of formally verified software won't fail because a bug causes an erroneous output. It can sometimes break down because it encounters conditions that fall outside its design specifications—for instance, a load-balancing algorithm for an electrical grid might not be able to handle an extreme solar storm that shorts out all of the grid's transformers simultaneously. But even then, the software is usually designed to 'fail safe' and revert to manual control.

ARIA is hoping to show that frontier AI models can be used to do the laborious formal verification of the narrow AI controller, as well as to develop the controller in the first place. But this raises another challenge. There's a growing body of evidence that frontier AI models are very good at 'reward hacking'—essentially finding ways to cheat to accomplish a goal—as well as at lying to their users about what they've actually done. The AI safety nonprofit METR (short for Model Evaluation & Threat Research) recently published a blog post on all the ways OpenAI's o3 model tried to cheat on various tasks.

Davidad says ARIA is hoping to find a way around this issue too. 'The frontier model needs to submit a proof certificate, which is something that is written in a formal language that we're defining in another part of the program,' he says. This 'new language for proofs will hopefully be easy for frontier models to generate and then also easy for a deterministic, human-audited algorithm to check.' ARIA has already awarded grants for work on this formal verification language.

Examples of how this might work are starting to come into view. Google DeepMind recently developed an AI model called AlphaEvolve that is trained to search for new algorithms for applications such as managing data centers, designing new computer chips, and even figuring out ways to optimize the training of frontier AI models. Google DeepMind has also developed a system called AlphaProof that is trained to develop mathematical proofs and write them in a coding language called Lean, which won't run if the proof is incorrect.
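To give a flavor of what a machine-checked guarantee looks like, here is a toy example in Lean 4. It is not ARIA's proof-certificate language; it just shows, under a deliberately simplified assumption, a 'controller' that caps its output at a safe limit and a theorem the compiler itself verifies for every possible input.

```lean
-- Toy machine-checked safety property in Lean 4 (illustrative only).
-- The controller caps its output at a safe limit; the theorem proves
-- the cap can never be exceeded, for any input demand.

def safeLimit : Nat := 100

def controller (demand : Nat) : Nat :=
  min demand safeLimit

theorem controller_le_limit (d : Nat) : controller d ≤ safeLimit := by
  unfold controller
  exact Nat.min_le_right d safeLimit
```

If the proof does not hold, the file simply fails to compile. Scaling that kind of compiler-enforced guarantee from toy properties to real control algorithms is the hard part ARIA is funding.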
ARIA is currently accepting applications from teams that want to run the core 'AI production facility,' with the winner of the £18 million grant to be announced on October 1. The facility, the location of which is yet to be determined, is supposed to be running by January 2026. ARIA is asking applicants to propose a new legal entity and governance structure for the facility. Davidad says ARIA does not want an existing university or a private company to run it. But the new organization, which might be a nonprofit, would partner with private entities in areas like energy, pharmaceuticals, and healthcare on specific controller algorithms. He said that in addition to the initial ARIA grant, the production facility could fund itself by charging industry for its work developing domain-specific algorithms.

It's not clear if this plan will work. For every transformational DARPA project, many more fail. But ARIA's bold bet here looks like one worth watching. With that, here's more AI news.

Jeremy

Want to know more about how to use AI to transform your business? Interested in what AI will mean for the fate of companies, and countries? Why not join me in Singapore on July 22 and 23 for Fortune Brainstorm AI Singapore. We will dive deep into the latest on AI agents, examine the data center build-out in Asia, and talk to top leaders from government, boardrooms, and academia in the region and beyond. You can apply to attend here.
05-06-2025
Big AI isn't just lobbying Washington—it's joining it
Welcome to Eye on AI! In this edition…OpenAI releases report outlining efforts to block malicious use of its tools…Amazon continues its AI data center push in the South, with plans to spend $10 billion in North Carolina…Reddit sues Anthropic, accusing it of stealing data. After spending a few days in Washington, D.C. this week, it's clear that 'Big AI'—my shorthand for companies including Google, OpenAI, Meta, Anthropic, and xAI that are building and deploying the most powerful AI models—isn't just present in the nation's capital. It's being welcomed with open arms. Government agencies are eager to deploy their models, integrate their tools, and form public-private partnerships that will ultimately shape policy, national security, and global strategy inside the Beltway. And frontier AI companies, which also serve millions of consumer and business customers, are ready and willing to do business with the U.S. government. For example, just today Anthropic announced a new set of AI models tailored for U.S. national security customers, while Meta recently revealed that it's making its Llama models available to defense partners. This week, former Google CEO Eric Schmidt was a big part of bringing Silicon Valley and Washington together. I attended an AI Expo that served up his worldview, which sees artificial intelligence, business, geopolitics, and national defense as interconnected forces reshaping America's global strategy (which will be chock-full of drones and robots if he gets his way). I also dressed up for a gala event hosted by the Washington AI Network, with sponsors including OpenAI, Meta, Microsoft, and Amazon, as well as a keynote speech from U.S. Commerce Secretary Howard Lutnick. Both events felt like a parallel AI universe to this D.C. outsider: In this universe, discussions about AI are less about increasing productivity or displacing jobs, and more about technological supremacy and national survival. Winning the AI 'race' against China is front and center. Public-private partnerships are not just desirable—they're essential to help the U.S. maintain an edge in AI, cyber, and intelligence systems. I heard no references to Elon Musk and DOGE's 'move fast and break things' mode of implementing AI tools into the IRS or the Veterans Administration. There were no discussions about AI models and copyright concerns. No one was hand-wringing about Anthropic's new model blackmailing its way out of being shut down. Instead, at the AI Expo, senior leaders from the U.S. military talked about how the recent Ukrainian drone attacks on Russian air bases are prime examples of how rapidly AI is changing the battlefield. Federal procurement experts discussed how to accelerate the Pentagon's notoriously slow acquisition process to keep pace with commercial AI advances. OpenAI touted its o3 reasoning model, now deployed on a secure government supercomputer at Los Alamos National Laboratory. At the gala, Lutnick made the stakes explicit: 'We must win the AI race, the quantum race—these are not things that are open for discussion.' To that end, he added, the Trump administration is focused on building another terawatt of power to support the massive AI data centers sprouting up across the country. 'We are very, very, very bullish on AI,' he said. The audience—packed with D.C.-based policymakers and lobbyists from Big AI—applauded. Washington may not be a tech town, but if this week was any indication, Silicon Valley and the nation's capital are learning to speak the same language. 
Still, the growing convergence of Silicon Valley and Washington makes many observers uneasy—especially given that it's been just seven years since thousands of Google employees protested the company's involvement in a Pentagon AI project, ultimately forcing it to back out. At the time, Google even pledged not to use its AI for weapons or surveillance systems that violated 'internationally accepted norms.' On Tuesday, the AI Now Institute, a research and advocacy nonprofit that studies the social implications of AI, released a report that accused AI companies of 'pushing out shiny objects to detract from the business reality while they desperately try to derisk their portfolios through government subsidies and steady public-sector (often carceral or military) contracts.' The organization says the public needs 'to reckon with the ways in which today's AI isn't just being used by us, it's being used on us.' But the parallel AI universe I witnessed—where Big AI and the D.C. establishment are fusing interests—is already realigning power and policy. The biggest question now is whether they're doing so safely, transparently, and in the public interest—or simply in their own. The race is on. With that, here's the rest of the AI news.

Sharon
03-06-2025
OpenAI says it wants to support sovereign AI. But it's not doing so out of the kindness of its heart
Hello and welcome to Eye on AI. In this edition…Yoshua Bengio's new AI safety nonprofit…Meta seeks to automate ad creation and targeting…Snitching AI models…and a deep dive on the energy consumption of AI.

I spent last week in Kuala Lumpur, Malaysia, at the Fortune ASEAN-GCC Economic Forum, where I moderated two of the many on-stage discussions that touched on AI. It was clear from the conference that leaders in Southeast Asia and the Gulf are desperate to ensure their countries benefit from the AI revolution. But they are also concerned about 'AI sovereignty' and want to control their own destiny when it comes to AI technology. They want to control key parts of the AI tech stack—from data centers to data to AI models and applications—so that they are not wholly dependent on technology being created in the U.S. or China. This is particularly the case with AI because, while no tech is neutral, AI—especially large language models—embodies particular values and cultural norms fairly explicitly. Leaders in these regions worry their own values and cultures won't be represented in these models unless they train their own versions. They are also wary of the rhetoric emanating from Washington, D.C., that would force them to choose between the U.S. and China when it comes to AI models, applications, and infrastructure.

Malaysia's Prime Minister Anwar Ibrahim has scrupulously avoided picking sides, in the past expressing a desire for Malaysia to be seen as neutral territory for U.S. and Chinese tech companies. At the Fortune conference, he answered a question about Washington's push to force countries such as Malaysia into its technological orbit by saying that China was an important neighbor, while also noting that the U.S. is Malaysia's No. 1 investor as well as a key trading partner. 'We have to navigate [geopolitics] as a global strategy, not purely dictated by national or regional interests,' he said, somewhat cryptically.

But speakers on one of the panels I moderated at the conference also made it clear that achieving AI sovereignty was not going to be easy for most countries. Kiril Evtimov, the chief technology officer at G42, the UAE AI company that has emerged as an important player both regionally and, increasingly, globally, said that few countries could afford to build their own AI models and also maintain the vast data centers needed to support training and running the most advanced AI models. He said most nations would have to pick which parts of the technology stack they could actually afford to own. For many, it might come down to relying on open-source models for specific use cases where they didn't want to depend on models from Western technology vendors, such as helping to power government services. 'Technically, this is probably as sovereign as it will get,' he said.

Also on the panel was Jason Kwon, OpenAI's chief strategy officer, who spoke about the company's recently announced 'AI for Countries' program. Sitting within its Project Stargate effort to build colossal data centers worldwide, the program offers a way for OpenAI to partner with national governments, allowing them to tap OpenAI's expertise in building data centers to train and host cutting-edge AI models. But what would those countries offer in exchange? Well, money, for one thing. The first partner in the AI for Countries program is the UAE, which has committed to investing billions of dollars to build a 1 gigawatt Stargate data center in Abu Dhabi, with the first 200 megawatt portion of this expected to go live next year.
The UAE has also agreed, as part of this effort, to invest additional billions in the U.S.-based Stargate data centers OpenAI is creating. (G42 is a partner in this project, as are Oracle, Nvidia, Cisco, and SoftBank.) In exchange for this investment, the UAE is getting help deploying OpenAI's software throughout the government, as well as in key sectors such as energy, healthcare, education, and transportation. What's more, every UAE citizen is getting free access to OpenAI's normally subscription-based ChatGPT Plus service. For those concerned that depending so heavily on a single U.S.-based tech company might undermine the idea of AI sovereignty, OpenAI sought to make clear that the version of ChatGPT it makes available will be tailored to the needs of each partner country. The company wrote in its blog post announcing the AI for Countries program: 'This will be AI of, by, and for the needs of each particular country, localized in their language and for their culture and respecting future global standards.' OpenAI is also agreeing to help make investments in the local AI startup ecosystem alongside local venture capital investors.

I asked Kwon how countries that are not as wealthy as the UAE might be able to take advantage of OpenAI's AI for Countries program if they didn't have billions to invest in building a Stargate-size data center in their own country, let alone also helping to fund data centers in the U.S. Kwon answered that the program would be 'co-developed' with each partner. 'Because we recognise each country is going to be different in terms of its needs and what it's capable of doing and what its citizens are going to require,' he said. He suggested that if a country couldn't directly contribute funds, it might be able to contribute something else—such as data, which could help make AI models that better understand local languages and culture. 'It's not just about having the capital,' he said. He also suggested that countries could contribute through AI literacy, training, or educational efforts, and also through helping local businesses collaborate with OpenAI.

That answer left me wondering how national governments and their citizens would feel about this kind of exchange—trading valuable or culturally sensitive data, for instance, in order to get access to OpenAI's latest tech. Would they ultimately come to see it as a Faustian bargain? In many ways, countries still face the dilemma G42's Evtimov flicked at: they can have access to the most advanced AI capabilities, or they can have AI sovereignty. But they may not be able to have both.

With that, here's more AI news.

Jeremy

Want to know more about how to use AI to transform your business? Interested in what AI will mean for the fate of companies, and countries? Why not join me in Singapore on July 22 and 23 for Fortune Brainstorm AI Singapore. We will dive deep into the latest on AI agents, examine the data center build-out in Asia, and talk to top leaders from government, boardrooms, and academia in the region and beyond. You can apply to attend here.

In total, Fortune 500 companies represent two-thirds of U.S. GDP with $19.9 trillion in revenues, and they employ 31 million people worldwide. Last year, they combined to earn $1.87 trillion in profits, up 10% from the year before—and a record in dollar terms. View the full list, read a longer overview of how it shook out this year, and learn more about the companies via the stories below:
- A passion for music brought Jennifer Witz to the top spot at satellite radio staple SiriusXM. Now she's tasked with ushering it into a new era dominated by podcasts and subscription services. Read more
- IBM was once the face of technological innovation, but the company has struggled to keep up with the speed of Silicon Valley. Can a bold AI strategy and a fast-moving CEO change its trajectory? Read more
- This year, Alphabet became the first company on the Fortune 500 to surpass $100 billion in profits. Take an inside look at which industries, and companies, earned the most profits on this year's list. Read more
- UnitedHealth Group abruptly brought back former CEO Stephen Hemsley in mid-May amid a wave of legal investigations and intense stock losses. How can the insurer get back on its feet? Read more
- Keurig Dr Pepper CEO Tim Cofer has made Dr Pepper cool again and brought a new generation of products to the company. Now, the little-known industry veteran has his eyes set on Coke-and-Pepsi levels of profitability. Read more
- NRG Energy is the top-performing stock in the S&P 500 this year, gaining 68% on the back of big acquisitions and a bet on data centers. In his own words, CEO Larry Coben explains the company's success. Read more