SAP CEO Christian Klein on Building Bridges in the Age of AI

SAP, the 53-year-old German tech giant, builds software for virtually every business function, from supply chain and resource management to finance, sales, and human resources. Its products are used by over 440,000 customers worldwide, including 98 of the world's 100 largest companies. Taken together, its client base generates over 80% of global commerce, according to the company.
At the helm sits Christian Klein, 45, who has been at SAP since 1999. "You can say it's a big minus or a big plus when you're spending your whole career in one company," he says. "I started here as a student. And I still know people from back then. We have four generations here working for SAP—it's a company of 110,000 people."
Under Klein's leadership, the company has accelerated its transformation into a cloud-first enterprise, with cloud revenue accounting for over half of its total revenue in the first quarter of 2025. Meanwhile, SAP is embedding AI into its core products with the goal of becoming the "#1 enterprise application and business AI company." SAP is one of Europe's most valuable public companies and made headlines in March when it took the top spot.
Klein spoke with TIME on June 4 about his success as a leader, how AI is changing enterprises, and the difference between power and influence.
This interview has been condensed and edited for clarity.
What have you changed your mind about since becoming sole CEO in 2020?
Our software helps to build bridges and facilitate global trade. We have multinationals in the U.S. and China, in Asia—everywhere—doing global business. Five years ago, when I came into this job, in my view of the world everyone was a winner. I saw democratic values in most parts of the world, and said "ah, that will never change."
Because I'm still reasonably young and have not lived in times like this, I never thought things could change so fast—at least when it came to global trade. But here we are. I have to deal much more with all of the geopolitics than I had to, say, five years back. That definitely has changed.
Why do you think you're good at your job?
When you're a CEO, over the years you learn that when you believe "I have the right strategy on a nice PowerPoint—I have written it all down—the rest is just about execution," you're completely wrong.
Especially when you're a European company with many stakeholders, you need to think about strategy first from the customer perspective. Everyone says that [laughs], but you really have to make sure that you're hitting that nail. Otherwise, you could steer the company in a completely wrong direction.
You need to make sure everyone—your employees, shareholders, the works council, and so on—is excited, committed, and passionate about the strategy and where you're leading the company. You have to be a bridge builder: to make sure everyone is involved and understands the strategy, and everyone is moving in the same direction. Otherwise, things can fall apart very easily.
In your May CEO address, you said we can think of AI agents as "digital coworkers." If AI agents can robustly function as digital coworkers in the near future, why hire humans at all?
Here's an example: we just reported earnings at SAP. Now, AI gives me certain simulations and predictions on how the year could end, given all of the trade conflicts and the uncertainty out there in the market. Would I fully trust AI to say "this is how you should put out your financial guidance for the rest of the year"? No: I still feel we need a human being at the end of the chain who can make slight adjustments, incorporating their past experience.
Or think about selling software. When you are traveling the world, the cultures are so different. When I walk into a customer meeting in Japan, it's different from walking into a customer meeting in Germany or the U.S. AI can give me a beautiful sales pitch or a great demo, but at the end it's human beings who need to understand how to position it, how to emotionally talk about it, particularly across different cultures. And I don't see an AI yet that is able to do that—at least not better than a human being.
You point to emotional connection and cultural understanding. AI is already highly persuasive and can understand emotional nuance. A key limit is that current AI systems lose coherence over long time periods. If that changes—and the same system can run for months or years at a time—do you still think AI won't be able to do these parts of the job?
You're right, emotional intelligence will get better and better. No doubt about that. But at the end of the day, there needs to be someone in the company you can hold accountable. I don't want to see SAP in the headlines, with a customer saying "I relied only on SAP AI agents to close my books or to run my supply chain, and they completely screwed it up. Despite AI doing 99% good work, it didn't play out as expected."
Ultimately, I'm convinced there must be some human beings still in the mix. Do I expect to need the same number of developers, salespeople, and consultants in the future? Definitely not with the job profiles that they have today. But do I still need other jobs that are coming up—more data scientists? More people thinking about the future of the industry? Yes, absolutely.
It would be an illusion to believe that AI will drive more productivity but the workforce will still look the same. That will absolutely not be the case. But I also can't imagine a workforce with only digital workers.
Can you imagine a scenario where, in five years' time, 90% of your workforce is gone, so you have closer to 10,000 employees than 100,000?
Oh, that is tough. In certain job profiles, I can absolutely see they can be 60% to 70% digital. In others, for example, take audit: Of course, you have policies as a company, but with every policy—for example, the E.U. Data Act, which I don't like so much—there is always a gray zone. You ask five lawyers and five large language models about interpretation—does this contract adhere to the E.U. Data Act?—and you get different answers. It's like when you have issues with your back and you ask five doctors, and they come up with five different root causes. These things will still exist. So in these jobs, I don't believe that there will be only digital workers. In other jobs, I definitely see a much higher share. It really depends on the job profile.
Do you think you'll live to see an AI system do every part of your job?
Part of it. I need to make a lot of decisions every day. They are sometimes pretty logical decisions, where you just look at the facts. But sometimes there are tough decisions you have to make using your emotional intelligence. There are certain market trends which may not be captured by the facts, but you talk to people, to other stakeholders, and you make a different decision. So I don't believe that a CEO can be purely digital in the future. Sometimes you're still making decisions based on your gut feeling.
What are the biggest bottlenecks to enterprise adoption of AI?
In the enterprise world, where we are setting up our agents, you need 100% accuracy. So, for example, Joule, our digital assistant, cannot mess around with compliance checks on travel and on sourcing, or on directing the flow of materials. People are betting their jobs and their companies on our software and on AI. This needs 100% accuracy: if you as a tech company don't understand the business process—if you don't have the data or you can't access all of it—that is a big issue.
This is a big obstacle for many companies: understanding how to apply the technology. SAP is in a good position: we are running these business processes, we know the rules and workflows, we have the data. Others who are more on the infrastructure and hardware layer… they don't have the business context. They're missing the data.
Is accuracy the biggest challenge? Or are there others?
The second piece is on data. Every company you walk into has their data silos: there have been trends with collecting data and creating data lakes, but no one has solved the problem of making all the data match. And when it doesn't fit, AI can't do magic immediately to say, "I 100% understand how this data fits together and I can correlate it to produce good results for the company."
The third piece involves regulation, which often kills innovation before it gets started. Certain parts of the world need to be careful to not only see risk, risk, risk with regard to AI, but also the upside for the economy.
What do you think is the appropriate regulatory framework for AI?
Here's my pragmatic view. In the European Union—it's good that we have a union, I'm all in for it—we have AI regulation in many member states, and then the E.U. puts another regulation on top. The result is confusion, different interpretations, and before companies or startups can use the technology to race against others in the world, it's already game over. That is the problem. I'd say: have one framework for all of Europe, and then give some freedom within this framework, especially when you are early in the development and testing cycle—you cannot do harm in this early phase.
Of course, the moment when you bring it to market and to scale, there must be regulation. But don't regulate the technology! Regulate the outcome, so AI is unfolding in the right way in the chemical industry, the automotive industry, the defense industry. But don't regulate the technology, because then you regulate technological innovation, which is never good.
You need to see that you are not living on an island here in Europe. All of these tech players—we are the only large tech player in Europe, but there are many startups—there is competition everywhere, and we cannot give these companies and startups a disadvantage when it comes to speed of innovation just by over-regulating.
If you were 22 today, fresh out of university, what would you do?
At 22, I still wanted to become a professional skier. I would try it again. It's my passion. I love to be in the mountains, I love to ski, and I'd try to turn that passion into a profession.
So you wouldn't set out to become a CEO?
I wasn't planning to become the CEO when I was 22 years old. That goal developed over time. I don't like it when you're too early on, saying "I need to be the CEO." I'm more on the "first deliver" side: prove yourself, prove that you can work in a team and deliver great results, and the rest will follow.
It was only when I became the chief operating officer of SAP, and I considered our transformation into a cloud company, that I developed the goal of becoming CEO. We had a strategy, and the software was instrumental for that. But I saw it was not only a piece of technology that would make the transformation work. It's also an understanding of the culture, and the tone from the top. You need to understand: where do you want to go with your company? Do you want to be a pure cloud SaaS company, or do you want to still be a legacy company? What does it take? Then you can connect software and technology and AI to it.
So it was only around 2017 that I thought, "oh, I could be the CEO of SAP. I have a vision for this company on how to move it into the next century." It probably sounds a little bit odd, but it's not the power and the responsibility that drew me to the role. It was about: "you can influence a lot of things to create a great future for SAP." I saw how we worked and what was needed.
What do you think distinguishes power from influence?
Becoming a CEO and believing that, because you now make a decision and have the power, everyone will just follow is probably the biggest mistake you can make. You can put a lot of policies in place, you can put more pressure, but people will not just automatically follow. You need to over-communicate in times of change to convince people. When we did this drastic change and our share price collapsed five years back, I couldn't just say, "Oh, now we did it. The strategy is clear. I have the power now to tell you exactly what to do." You need to influence people. You need to convince them.