Xelix secures $160 million Series B to advance agentic AI innovation in accounts payable

Finextra · 22-07-2025
Xelix, a leading agentic AI software company in the Accounts Payable (AP) space, today announced a $160 million Series B funding round led by global software investor Insight Partners, with follow-on investments from Passion Capital and LocalGlobe.
This funding will enable Xelix to accelerate platform development and support more organizations in adopting AI for their finance operations.
For too long, companies have relied on manual processes and basic systems to manage Accounts Payable and vendor risk. As a result, enterprises lose millions each year to overpayments, face increased fraud risks and suffer from bloated and burdensome manual workflows. Recognizing the need for improved AP controls, Xelix's AI-powered platform seamlessly integrates with existing systems to detect payment errors and fraud, automate supplier statement reconciliations and streamline AP Helpdesk operations. Enterprise organizations such as AstraZeneca, BAT, GSK and Virgin Atlantic have achieved millions in cost savings with Xelix whilst transforming costly, manual AP processes into automated, intelligent workflows.
This investment follows a period of significant, capital-efficient growth for Xelix - driven by a differentiated product offering, massive customer ROI and a deep commitment to client support and value realisation.
'This funding marks a major milestone in our journey,' said Paul Roiter, CEO of Xelix. 'It allows us to accelerate product innovation, expand our market presence and reinforce our position as a category leader - enabling more finance teams to evolve Accounts Payable from a manual back-office function into a strategic, data-driven business partner.'
Xelix's growth accelerated in 2024 with the addition of its Helpdesk module - an agentic AP ticketing tool for handling high volumes of supplier queries. Today, the Xelix platform offers three core solutions, processes over 115 million invoices annually and audits more than $750 billion in spend across 130+ global customers.
'Enterprise finance teams have long lacked an audit and control solution that is intelligent, proactive, system-agnostic, and efficient enough to support their high-volume workflows,' said Ryan Hinkle, Managing Director at Insight Partners. 'While spot checks are helpful, anything less than a full audit of every invoice leaves potential for fraud, mistakes, or abuse. Xelix uses AI to deliver a comprehensive control layer - helping enterprises eliminate overpayments and fraud risk while driving major efficiencies by automating day to day AP tasks. We are excited to back Paul, Phil, and the Xelix team in this next chapter of growth.'
As part of the investment, Hinkle and Alessandro Luciano, Vice President at Insight Partners, will join the Xelix board of directors.
Solano Partners, a boutique investment bank focused on founder-led software businesses, acted as exclusive financial advisor to Xelix on the transaction. Houlihan Lokey served as financial advisor to Insight Partners.

Related Articles

Nudifying apps are not 'a bit of fun' - they are seriously harmful and their existence is a scandal writes Children's Commissioner RACHEL DE SOUZA

Daily Mail · 2 hours ago

I am horrified that children are growing up in a world where anyone can take a photo of them and digitally remove their clothes. They are growing up in a world where anyone can download the building blocks to develop an AI tool, which can create naked photos of real people. It will soon be illegal to use these building blocks in this way, but they will remain for sale by some of the biggest technology companies, meaning they are still open to be misused. Earlier this year I published research looking at the existence of these apps that use Generative Artificial Intelligence (GenAI) to create fake sexually explicit images through prompts from users. The report exposed the shocking underworld of deepfakes: it highlighted that nearly all deepfakes in circulation are pornographic in nature, and 99% of them feature girls or women – often because the apps are specifically trained to work on female bodies. In the past four years as Children's Commissioner, I have heard from a million children about their lives, their aspirations and their worries. Of all the worrying trends in online activity children have spoken to me about – from seeing hardcore porn on X to cosmetics and vapes being advertised to them through TikTok – the evolution of 'nudifying' apps into tools that aid in the abuse and exploitation of children is perhaps the most mind-boggling. As one 16-year-old girl asked me: 'Do you know what the purpose of deepfake is? Because I don't see any positives.' Children, especially girls, are growing up fearing that a smartphone might at any point be used as a way of manipulating them. Girls tell me they're taking steps to keep themselves safe online in the same way we have come to expect in real life, like not walking home alone at night. For boys, the risks are different but equally harmful: studies have identified online communities of teenage boys sharing dangerous material as an emerging route to radicalisation and extremism.
The government is rightly taking some welcome steps to limit the dangers of AI. Through its Crime and Policing Bill, it will become illegal to possess, create or distribute AI tools designed to create child sexual abuse material. And the introduction of the Online Safety Act – and new regulations by Ofcom to protect children – marks a moment for optimism that real change is possible. But what children have told me, from their own experiences, is that we must go much further and faster. The way AI apps are developed is shrouded in secrecy. There is no oversight, no testing of whether they can be used for illegal purposes, no consideration of the inadvertent risks to younger users. That must change. Nudifying apps should simply not be allowed to exist. It should not be possible for an app to generate a sexual image of a child, whether or not that was its designed intent. The technology used by these tools to create sexually explicit images is complex. It is designed to distort reality, to fixate and fascinate the user – and it confronts children with concepts they cannot yet understand. I should not have to tell the government to bring in protections for children to stop these building blocks from being arranged in this way. Posts on LinkedIn have even appeared promoting the 'best' nudifying AI tools available. I welcome the move to criminalise individuals for creating child sexual abuse image generators, but urge the government to move the tools that would allow predators to create sexually explicit deepfake images out of reach altogether. To do this, I have asked the government to require technology companies who provide open-source AI models – the building blocks of AI tools – to test their products for their capacity to be used for illegal and harmful activity. These are all things children have told me they want. They will help stop sexual imagery involving children becoming normalised.
And they will contribute significantly to the government's admirable mission to halve violence against women and girls, who are almost exclusively the subjects of these sexual deepfakes. Harms to children online are not inevitable. We cannot shrug our shoulders in defeat and claim it's impossible to remove the risks from evolving technology. We cannot dismiss this growing online threat as a 'classroom problem' – because evidence from my survey of school and college leaders shows that the vast majority already restrict phone use: 90% of secondary schools and 99.8% of primary schools. Yet, despite those restrictions, in the same survey of around 19,000 school leaders, they told me online safety is among the most pressing issues facing children in their communities. For them, it is children's access to screens in the hours outside of school that worries them the most. Education is only part of the solution. The challenge begins at home. We must not outsource parenting to our schools and teachers. As parents it can feel overwhelming to try and navigate the same technology as our children. How do we enforce boundaries on things that move too quickly for us to follow? But that's exactly what children have told me they want from their parents: limitations, rules and protection from falling down a rabbit hole of scrolling. Two years ago, I brought together teenagers and young adults to ask, if they could turn back the clock, what advice they wished they had been given before owning a phone. Invariably those 16-21-year-olds agreed they had all been given a phone too young. They also told me they wished their parents had talked to them about the things they saw online – not just as a one-off, but regularly, openly, and without stigma. Later this year I'll be repeating that piece of work to produce new guidance for parents – because they deserve to feel confident setting boundaries on phone use, even when it's far outside their comfort zone.
I want them to feel empowered to make decisions for their own families, whether that's not allowing their child to have an internet-enabled phone too young, enforcing screen-time limits while at home, or insisting on keeping phones downstairs and out of bedrooms overnight. Parents also deserve to be confident that the companies behind the technology on our children's screens are playing their part. Just last month, new regulations by Ofcom came into force through the Online Safety Act that mean tech companies must now identify and tackle the risks to children on their platforms – or face consequences. This is long overdue, because for too long tech developers have been allowed to turn a blind eye to the risks to young users on their platforms – even as children tell them what they are seeing. If these regulations are to remain effective and fit for the future, they have to keep pace with emerging technology – nothing can be too hard to tackle. The government has the opportunity to bring in AI product testing against illegal and harmful activity in the AI Bill, which I urge the government to introduce in the coming parliamentary session. It will rightly make technology companies responsible for their tools being used for illegal purposes. We owe it to our children, and the generations of children to come, to stop these harms in their tracks. Nudifying apps must never be accepted as just another restriction placed on our children's freedom, or one more risk to their mental wellbeing. They have no place in a society that values the safety and sanctity of childhood or family life.

Australia shouldn't fear the AI revolution – new skills can create more and better jobs

The Guardian · 5 hours ago

It seems a lifetime ago, but it was 2017 when the former NBN CEO Mike Quigley and I wrote a book about the impact of technology on our labour market. Changing Jobs: The Fair Go in the New Machine Age was our attempt to make sense of rapid technological change and its implications for Australian workers. It sprang from a thinkers' circle Andrew Charlton and I convened regularly back then, to consider the biggest, most consequential shifts in our economy. Flicking through the book now makes it very clear that the pace of change since then has been breathtaking. The stories of Australian tech companies give a sense of its scale. In 2017, the cloud design pioneer Canva was valued at $US1bn – today, it's more than $US30bn. Leading datacentre company AirTrunk was opening its first two centres in Sydney and Melbourne. It now has almost a dozen across Asia-Pacific and is backed by one of the world's biggest investors. We understand a churning and changing world is a source of opportunity but also anxiety for Australians. While the technology has changed, our goal as leaders remains the same. The responsibility we embrace is to make Australian workers, businesses and investors beneficiaries, not victims, of that change. That matters more than ever in a new world of artificial intelligence. Breakthroughs in 'large language models' (LLMs) – computer programs trained on massive datasets that can understand and respond in human languages – have triggered a booming AI 'hype cycle' and are driving a 'cognitive industrial revolution'. ChatGPT became a household name in a matter of months and has reframed how we think about working, creating and problem-solving. LLMs have been adopted seven times faster than the internet and 20 times faster than electricity. The rapid take-up has driven the biggest rise in the S&P 500 since the late 1990s. According to one US estimate, eight out of 10 workers could use LLMs for at least 10% of their work in future. 
Yet businesses are still in the discovery phase, trying to separate hype from reality and determine what AI to build, buy or borrow. Artificial intelligence will completely transform our economy. Every aspect of life will be affected. I'm optimistic that AI will be a force for good, but realistic about the risks. The Nobel prize-winning economist Daron Acemoglu estimates that AI could boost productivity by 0.7% over the next decade, but some private sector estimates are up to 30 times higher. Goldman Sachs expects AI could drive gross domestic product (GDP) growth up 7% over the next 10 years, and PwC estimates it could bump up global GDP by $15.7tn by 2030. The wide variation in estimates is partly due to different views on how long it will take to integrate AI into business workflows deeply enough to transform the market size or cost base of industries. But if some of the predictions prove correct, AI may be the most transformative technology in human history. At its best, it will convert energy into analysis, and more productivity into higher living standards. It's expected to have at least two significant economy-wide effects. First, it reduces the cost of information processing. One example of this is how eBay's AI translation tools have removed language barriers to drive international sales. The increase in cross-border trade is the equivalent of having buyers and sellers 26% closer to one another – effectively shrinking the distance between Australia and global markets. This is one reason why the World Trade Organization forecasts AI will lower trade costs and boost trade volumes by up to 13%. Second, cheaper analysis accelerates and increases our problem-solving capacity, which can, in turn, speed up innovation by reducing research and development (R&D) costs and skills bottlenecks. By making more projects stack up commercially, AI is likely to raise investment, boost GDP and generate demand for human expertise.
Despite the potential for AI to create more high-skilled, high-wage jobs, some are concerned that adoption will lead to big increases in unemployment. The impact of AI on the labour force is uncertain, but there are good reasons to be optimistic. One study finds that more than half of the use cases of LLMs involve workers iterating back and forth with the technology, augmenting workers' skills in ways that enable them to achieve more. Another recent study found that current LLMs often automate only some tasks within roles, freeing up employees to add more value rather than reducing hours worked. These are some of the reasons many expect the AI transformation to enhance skills and change the nature of work, rather than causing widespread or long-term structural unemployment. Even so, the impact of AI on the nature of work is expected to be substantial. We've seen this play out before – more than half the jobs people do today are in occupations that didn't even exist at the start of the second world war. Some economists have suggested AI could increase occupational polarisation – driving a U-shaped increase in demand for manual roles that are harder to automate and high-skill roles that leverage technology, but a reduction in demand for medium-skilled tasks. But workers in many of these occupations may be able to leverage AI to complete more specialised tasks and take on more productive, higher-paying roles. In this transition, the middle has the most to gain and the most at stake. There is also a risk that AI could increase short-term unemployment if investment in skills does not keep up with the changing nature of work. Governments have an important role to play here, and a big motivation for our record investment in education is ensuring that skills keep pace with technological change. But it's also up to business, unions and the broader community to ensure we continue to build the human capital and skills we need to grasp this opportunity. 
To be optimistic about AI is not to dismiss the risks, which are not limited to the labour market. The ability of AI to rapidly collate, create and disseminate information and disinformation makes people more vulnerable to fraud and poses a risk to democracies. AI technologies are also drastically reducing the cost of surveillance and increasing its effectiveness, with implications for privacy, autonomy at work and, in some cases, personal security. There are questions of ethics, of inequality, of bias in algorithms, and legal responsibility for decision-making when AI is involved. These new technologies will also put pressure on resources such as energy, land, water and telecoms infrastructure, with implications for carbon emissions. But we are well placed to manage the risks and maximise the opportunities. In 2020, Australia was ranked sixth in the world in terms of AI companies and research institutions when accounting for GDP. Our industrial opportunities are vast and varied – from developing AI software to using AI to unlock value in traditional industries. Markets for AI hardware – particularly chips – and foundational models are quite concentrated. About 70% of the widely used foundational models have been developed in the US, and three US firms claim 65% of the global cloud computing market. But further downstream, markets for AI software and services are dynamic, fragmented and more competitive. The Productivity Commission sees potential to develop areas of comparative advantage in these markets. Infrastructure is an obvious place to start. According to the International Data Corporation, global investment in AI infrastructure increased 97% in the first half of 2024 to $US47bn and is on its way to $US200bn by 2028. We are among the top five global destinations for datacentres and a world leader in quantum computing. Our landmass, renewable energy potential and trusted international partnerships make us an attractive destination for data processing. 
Our substantial agenda, from the capacity investment scheme to the Future Made in Australia plan, will be key to this. They are good examples of our strategy to engage and invest, not protect and retreat. Our intention is to regulate as much as necessary to protect Australians, but as little as possible to encourage innovation. There is much work already under way: our investment in quantum computing company PsiQuantum and AI adopt centres, development of Australia's first voluntary AI safety standard, putting AI on the critical technologies list, a national capability plan, and work on R&D. Next steps will build on the work of colleagues like the assistant minister for the digital economy, Andrew Charlton, the science minister, Tim Ayres, and former science minister Ed Husic, and focus on at least five things:

  • Building confidence in AI to accelerate development and adoption in key sectors.
  • Investing in and encouraging upskilling and reskilling to support our workforce.
  • Helping to attract, streamline, speed up and coordinate investment in data infrastructure that's in the national interest, in ways that are cost-effective, sustainable and make the most of our advantages.
  • Promoting fair competition in global markets and building demand and capability locally to secure our influence in AI supply chains.
  • Working with the finance minister, Katy Gallagher, to deliver safer and better public services using AI.

Artificial intelligence will be a key concern of the economic reform roundtable I'm convening this month because it has major implications for economic resilience, productivity and budget sustainability. I'm setting these thoughts out now to explain what we'll grapple with and how. AI is contentious, and of course, there is a wide spectrum of views, but we are ambitious and optimistic. We can deploy AI in a way consistent with our values if we treat it as an enabler, not an enemy, by listening to and training workers to adapt and augment their work.
Because empowering people to use AI well is not just a matter of decency or a choice between prosperity and fairness; it is the only way to get the best out of people and technology at the same time. It is not beyond us to chart a responsible middle course on AI, which maximises the benefits and manages the risks. Not by letting it rip, and not by turning back the clock and pretending none of this is happening, but by turning algorithms into opportunities for more Australians to be beneficiaries, not victims of a rapid transformation that is gathering pace. Jim Chalmers is the federal treasurer

Big tech has spent $155bn on AI this year. It's about to spend hundreds of billions more

The Guardian · 6 hours ago

The US's largest companies have spent 2025 locked in a race to outspend one another, lavishing $155bn on the development of artificial intelligence, more than the US government has spent on education, training, employment and social services in the 2025 fiscal year so far. Based on the most recent financial disclosures of Silicon Valley's biggest players, the race is about to accelerate to hundreds of billions in a single year. Over the past two weeks, Meta, Microsoft, Amazon and Alphabet, Google's parent, have shared their quarterly public financial reports. Each disclosed that their year-to-date capital expenditure, a figure that refers to the money companies spend to acquire or upgrade tangible assets, already totals tens of billions. Capex, as the term is abbreviated, is a proxy for technology companies' spending on AI because the technology requires gargantuan investments in physical infrastructure, namely data centers, which require large amounts of power, water and expensive semiconductor chips. Google said during its most recent earnings call that its capital expenditure 'primarily reflects investments in servers and data centers to support AI'. Meta's year-to-date capital expenditure amounted to $30.7bn, doubling the $15.2bn figure from the same time last year, per its earnings report. For the most recent quarter alone, the company spent $17bn on capital expenditures, also double the $8.5bn it spent in the same period of 2024. Alphabet reported nearly $40bn in capex to date for the first two quarters of the current fiscal year, and Amazon reported $55.7bn. Microsoft said it would spend more than $30bn in the current quarter to build out the data centers powering its AI services. Microsoft CFO Amy Hood said the current quarter's capex would be at least 50% more than the outlay during the same period a year earlier and greater than the company's record capital expenditures of $24.2bn in the quarter to June.
'We will continue to invest against the expansive opportunity ahead,' Hood said. For the coming fiscal year, big tech's total capital expenditure is slated to balloon enormously, surpassing the already eye-popping sums of the previous year. Microsoft plans to unload about $100bn on AI in the next fiscal year, CEO Satya Nadella said Wednesday. Meta plans to spend between $66bn and $72bn. Alphabet plans to spend $85bn, significantly higher than its previous estimation of $75bn. Amazon estimated that its 2025 expenditure would come to $100bn as it plows money into Amazon Web Services, which analysts now expect to amount to $118bn. In total, the four tech companies will spend more than $400bn on capex in the coming year, according to the Wall Street Journal. These multibillion-dollar figures represent mammoth investments – a total that, the Journal points out, is larger than the European Union's quarterly spending on defense. However, the tech giants can't seem to spend enough for their investors. Microsoft, Google and Meta informed Wall Street analysts last quarter that their total capex would be higher than previously estimated. In the case of all three companies, investors were thrilled, and shares in each company soared after their respective earnings calls. Microsoft's market capitalization hit $4tn the day after its report. Even Apple, the cagiest of the tech giants, signaled that it would boost its spending on AI in the coming year by a major amount, either via internal investments or acquisitions. The company's quarterly capex rose to $3.46bn, up from $2.15bn during the same period last year. The iPhone maker reported blockbuster earnings Thursday, with rebounding iPhone sales and better-than-expected business in China, but it is still seen as lagging farthest behind on development and deployment of AI products among the tech giants.
Tim Cook, Apple's CEO, said Thursday that the company was reallocating a 'fair number' of employees to focus on artificial intelligence and that the 'heart of our AI strategy' is to increase investments and 'embed' AI across all of its devices and platforms. Cook refrained from disclosing exactly how much Apple is spending, however. 'We are significantly growing our investment, I'm not putting specific numbers behind that,' he said. Smaller players are trying to keep up with the incumbents' massive spending and capitalize on the gold rush. OpenAI announced at the end of the week of earnings that it had raised $8.3bn in investment, part of a planned $40bn round of funding, valuing the startup, whose ChatGPT chatbot launched in 2022, at $300bn.
