
Can AI be made trustworthy? Alexa inventor may have the answer
One of the inventors of Amazon's Alexa has shown he can make AI trustworthy, at least when it comes to assessing insurance claims.
William Tunstall-Pedoe originally developed the technology that became the retail giant's voice assistant service and his new venture, called UnlikelyAI, has an even more ambitious goal.
'We are tackling a problem that is potentially bigger than Alexa, which is making AI trustworthy,' he said.
His company has combined data-driven learning models, neural networks such as large language models (LLMs), with rule-based systems known as symbolic reasoning, to create a platform that companies can use to automate their processes with AI.
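UnlikelyAI has not published its architecture, but the general neuro-symbolic pattern the article describes can be sketched in a few lines: a neural model handles the fuzzy natural-language part, while deterministic rules make the final decision, so the same facts always produce the same answer. Everything below (the facts extracted, the rules, the thresholds) is invented for illustration.

```python
def neural_extract(claim_text: str) -> dict:
    """Stand-in for an LLM that turns free text into structured facts.

    A real system would call a language model here; we hard-code the kind
    of structured output such a model would be prompted to produce.
    """
    facts = {"event": "water_damage", "amount": 1200}
    if "flood" in claim_text.lower():
        facts["event"] = "flood"
    return facts

def symbolic_decide(facts: dict) -> str:
    """Deterministic rules: same facts in, same decision out, every time."""
    if facts["event"] == "flood":   # flood excluded under this toy policy
        return "reject"
    if facts["amount"] <= 5000:     # small claims are auto-approved
        return "approve"
    return "refer_to_human"         # everything else is escalated

claim = "Burst pipe caused water damage, repairs cost about 1,200 pounds."
print(symbolic_decide(neural_extract(claim)))
```

The key design point is that the LLM is confined to extraction; it never makes the decision, which is why the symbolic half can be consistent and auditable in a way a raw LLM answer is not.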
'LLMs have amazing capabilities and are absolutely transformative but when enterprises try to apply LLMs to problems in their business it very often doesn't work,' said Tunstall-Pedoe, 56. 'A lot of pilots don't really succeed. It is a black box, isn't explainable, and it is inconsistent. We are developing fundamental technologies to tackle that problem.'
UnlikelyAI has completed a pilot with SBS Insurance Services in which the insurer automated 40 per cent of its claims handling with 99 per cent accuracy; the company said accuracy on the same task is typically around 52 per cent when using LLMs alone. UnlikelyAI's system also provides an audit trail for every decision, so each one can be explained if queried by customers or regulators.
'We are building a collection of technologies that bring trust to AI applications. Whenever enterprises are using AI to do business critical things, where the cost of getting it wrong is high, we can help,' said Tunstall-Pedoe.
'In the insurance world we are ingesting the policies, which are natural language. We create a symbolic representation of it, which then gives you that really high accuracy when doing the claims process against it.'
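As a rough, hypothetical sketch of what "a symbolic representation" of a policy might look like in practice: each clause becomes a named, checkable rule, and evaluating a claim yields both a decision and a record of which clauses passed or failed. The rule names, clause references and thresholds below are invented for illustration, not taken from any real policy.

```python
# Each rule: (name, check on the claim, the policy clause it encodes).
RULES = [
    ("excess_applies", lambda c: c["amount"] > 250,
     "Clause 2: claims must exceed the 250 excess"),
    ("within_cover", lambda c: c["amount"] <= 10_000,
     "Clause 4: contents cover is capped at 10,000"),
    ("peril_covered", lambda c: c["peril"] in {"fire", "theft", "escape_of_water"},
     "Clause 1: listed perils only"),
]

def assess(claim: dict):
    """Return a decision plus an audit trail of every rule evaluated."""
    trail = []
    for name, check, source in RULES:
        passed = check(claim)
        trail.append({"rule": name, "passed": passed, "source": source})
        if not passed:
            # The trail shows exactly which clause caused the decline.
            return "declined", trail
    return "approved", trail

decision, trail = assess({"amount": 1800, "peril": "escape_of_water"})
print(decision)
```

Because every step is an explicit rule evaluation rather than an opaque model output, the trail itself is the auditable explanation the article says regulators and customers can query.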
He sold the technology that became a key part of Amazon's Alexa voice assistant in 2012. It originated at True Knowledge, a startup he founded in Cambridge that was renamed Evi after it developed a voice assistant of its own, a few months after Apple launched Siri.
'We were competing directly with the biggest company in the world as a 30-person Cambridge startup. We had millions of downloads very quickly and every big company that was trying to figure out its response to the existence of Siri was talking to us. At the end of 2012 we had two acquisition offers and we chose to get bought by Amazon.'
Tunstall-Pedoe joined the Amazon team developing Alexa, working on the initiative under the codename Project D, which launched in the US in 2014. He left Amazon in 2016 and has since invested in more than 100 start-ups and mentored entrepreneurs. He founded UnlikelyAI in 2020 and has raised $20 million from investors including Amadeus Capital Partners, Octopus Ventures and Cambridge Innovation Capital.
Tunstall-Pedoe said UnlikelyAI's 'goal is to create AI that is always right'.
'When it gives you an answer you can always trust it. It can always provide a fully auditable explanation for any business decision that is made. And it will be consistent, and not breach your trust by giving a different answer each time you use it.'
'Our primary customers are high stakes industries, where a business decision has really big consequences if it's wrong. Medicine is a good example. Finance is also very important, or any industry that is regulated. If you breach regulations you can be fined.'