Latest news with #UnlikelyAI


Times
6 days ago
- Business
- Times
Can AI be made trustworthy? Alexa inventor may have the answer
One of the inventors of Amazon's Alexa has shown he can make AI trustworthy, at least when it comes to assessing valid insurance claims. William Tunstall-Pedoe originally developed the technology that became the retail giant's voice assistant, and his new venture, called UnlikelyAI, has an even more ambitious goal. 'We are tackling a problem that is potentially bigger than Alexa, which is making AI trustworthy,' he said.

His company has combined data-driven learning models, known as neural networks or large language models (LLMs), with rule-based systems, known as symbolic reasoning, to create a platform that companies can use to automate their processes with AI. 'LLMs have amazing capabilities and are absolutely transformative but when enterprises try to apply LLMs to problems in their business it very often doesn't work,' said Tunstall-Pedoe, 56. 'A lot of pilots don't really succeed. It is a black box, isn't explainable, and it is inconsistent. We are developing fundamental technologies to tackle that problem.'

UnlikelyAI has completed a pilot with SBS Insurance Services in which the insurer automated 40 per cent of its claims handling with 99 per cent accuracy. The company said accuracy on the same task is typically around 52 per cent when using LLMs alone. UnlikelyAI's system also provides an audit trail for all its decisions, so they can be explained if queried by customers or regulators.

'We are building a collection of technologies that bring trust to AI applications. Whenever enterprises are using AI to do business-critical things, where the cost of getting it wrong is high, we can help,' said Tunstall-Pedoe. 'In the insurance world we are ingesting the policies, which are natural language. We create a symbolic representation of it, which then gives you that really high accuracy when doing the claims process against it.'

He sold the technology that became a key part of Amazon's Alexa voice assistant in 2012.
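The article does not describe UnlikelyAI's actual implementation, but the pattern it outlines, converting a natural-language policy into symbolic rules, checking claims against them deterministically, and logging every check as an audit trail, can be sketched in a few lines. Everything below (the rule names, claim fields, and thresholds) is hypothetical illustration, not UnlikelyAI's system; the LLM step that would extract structured facts from policy and claim text is stubbed out with hand-written dictionaries.

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    approved: bool
    # Every rule evaluation is recorded, so the outcome can be
    # explained line by line to a customer or regulator.
    audit_trail: list = field(default_factory=list)

# Hypothetical symbolic representation of a policy: named predicates
# over the claim's extracted facts. In the pattern the article
# describes, an LLM would derive these from natural-language text.
POLICY_RULES = [
    ("claim within coverage period", lambda c: c["days_since_start"] <= 365),
    ("amount under policy limit",    lambda c: c["amount"] <= 5000),
    ("peril is covered",             lambda c: c["peril"] in {"fire", "theft", "flood"}),
]

def assess(claim: dict) -> Decision:
    """Evaluate every rule deterministically and record each result."""
    decision = Decision(approved=True)
    for name, rule in POLICY_RULES:
        passed = rule(claim)
        decision.audit_trail.append(f"{name}: {'PASS' if passed else 'FAIL'}")
        if not passed:
            decision.approved = False
    return decision

result = assess({"days_since_start": 90, "amount": 1200, "peril": "theft"})
print(result.approved)      # True
for line in result.audit_trail:
    print(line)
```

Because the rule layer is ordinary code rather than a sampled model output, the same claim always yields the same decision, which is the consistency and explainability property the article attributes to combining LLMs with symbolic reasoning.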
It originated in a startup he founded in Cambridge called True Knowledge, which became known as Evi after it developed a voice assistant a few months after Apple launched Siri. 'We were competing directly with the biggest company in the world as a 30-person Cambridge startup. We had millions of downloads very quickly and every big company that was trying to figure out its response to the existence of Siri was talking to us. At the end of 2012 we had two acquisition offers and we chose to get bought by Amazon.'

Tunstall-Pedoe joined the Amazon team to develop Alexa, working on an initiative codenamed Project D, which launched in the US in 2014. He left Amazon in 2016 and has since invested in over 100 start-ups and mentored entrepreneurs. He founded UnlikelyAI in 2020 and has since raised $20 million from investors including Amadeus Capital Partners, Octopus Ventures, and Cambridge Innovation Capital.

Tunstall-Pedoe said UnlikelyAI's 'goal is to create AI that is always right'. 'When it gives you an answer you can always trust it. It can always provide a fully auditable explanation for any business decision that is made. And it will be consistent, and not breach your trust by giving a different answer each time you use it.'

'Our primary customers are high-stakes industries, where a business decision has really big consequences if it's wrong. Medicine is a good example. Finance is also very important, or any industry that is regulated. If you breach regulations you can be fined.'


Entrepreneur
14-06-2025
- Business
- Entrepreneur
Entrepreneur UK's London 100: Unlikely AI
Industry: Artificial Intelligence

UnlikelyAI is pioneering a novel approach that fuses large language models with symbolic methods to boost AI accuracy, safety, and transparency. In fields like healthcare, it's building technology that delivers explainable, verifiable insights, maximizing AI's benefits without compromising safety. Founder William Tunstall-Pedoe previously created Evi, the tech behind Amazon Alexa.


CNET
06-06-2025
- Entertainment
- CNET
He Got Us Talking to Alexa. Now He Wants to Kill Off AI Hallucinations
If it weren't for Amazon, it's entirely possible that instead of calling out to Alexa to change the music on our speakers, we might have been calling out to Evi instead. That's because the tech we know today as Amazon's smart assistant started life under the name Evi (pronounced ee-vee), given to it by its original developer, William Tunstall-Pedoe.

The British entrepreneur and computer scientist was experimenting with artificial intelligence before most of us had even heard of it. Inspired by sci-fi, he "arrogantly" set out to create a way for humans to talk to computers way back in 2008, he said at SXSW London this week. Arrogant or not, Tunstall-Pedoe's efforts were so successful that Evi, which launched in 2012 around the same time as Apple's Siri, was acquired by Amazon, and he joined a team working on a top-secret voice assistant project. What resulted from that project was the tech we all know today as Alexa.

That original mission accomplished, Tunstall-Pedoe now has a new challenge in his sights: to kill off AI hallucinations, which he says make the technology highly risky for all of us to use. Hallucinations are the inaccurate pieces of information and content that AI generates out of thin air. They are, said Tunstall-Pedoe, "an intrinsic problem" of the technology.

Through his experience with Alexa, he learned that people personify the technology and assume that when it's speaking back to them it's thinking the way we think. "What it's doing is truly remarkable, but it's doing something different from thinking," said Tunstall-Pedoe. "That sets expectations… that what it's telling you is true." Innumerable examples of AI generating nonsense show us that truth and accuracy are never guaranteed. Tunstall-Pedoe was concerned that the industry isn't doing enough to tackle hallucinations, so he formed his own company, Unlikely AI, to tackle what he views as a high-stakes problem.
Anytime we speak to an AI, there's a chance that what it's telling us is false, he said. "You can take that away into your life, take decisions on it, or you put it on the internet and it gets spread by others, [or] used to train future AIs to make the world a worse place."

Some AI hallucinations have little impact, but in industries where the cost of getting things wrong is high, such as medicine, law, finance and insurance, inaccurately generated content can have severe consequences. These are the industries that Unlikely AI is targeting for now, said Tunstall-Pedoe.

Unlikely AI uses a mix of deep tech and proprietary tech to ground outputs in logic, minimizing the risk of hallucinations, and to log the decision-making process of its algorithms. This makes it possible for companies to understand where things have gone wrong, when they inevitably do. Right now, AI can never be 100% accurate because of the underlying tech, said Tunstall-Pedoe. But advances under way in his own company and others like it mean we're moving towards a point where accuracy can be achieved.

For now, Unlikely AI is mainly used by business customers, but Tunstall-Pedoe believes it will eventually be built into services and software all of us use. The change being brought about by AI, like any change, presents us with risks, he said. But overall he remains "biased towards optimism" that AI will be a net positive for society.