New AI tool could speed up skin cancer diagnoses in remote parts of world
Tess Watt, the PhD student at Heriot-Watt University in Edinburgh who led the project to develop the technology, said it is intended to enable early detection of skin conditions anywhere in the world, without the need for direct access to dermatologists.
The technology also works without internet access.
The system involves a patient taking a photograph of their skin complaint using a small camera attached to a Raspberry Pi device – a cheap, energy-efficient single-board computer capable of storing the full image dataset locally.
The photograph is analysed in real time using state-of-the-art image classification, comparing it against a dataset of thousands of images stored on the device to reach a diagnosis.
The findings are then shared with a local GP service to begin a suitable treatment plan.
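The article does not say which software stack the team uses, but an offline pipeline of this shape is commonly built around a small on-device inference runtime. Below is a minimal sketch in Python, assuming a hypothetical TensorFlow Lite classifier (skin_model.tflite) and label set as stand-ins for whatever model the researchers actually trained:

```python
# Minimal sketch of offline, on-device skin-lesion classification.
# The model file, label set and float input format are assumptions;
# the article does not specify the team's actual model or runtime.
import numpy as np
from PIL import Image
from tflite_runtime.interpreter import Interpreter  # pip install tflite-runtime

LABELS = ["benign", "malignant", "other"]  # hypothetical label set

def classify(image_path: str, model_path: str = "skin_model.tflite") -> tuple[str, float]:
    interpreter = Interpreter(model_path=model_path)
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]

    # Resize the photo to the model's expected input shape and normalise.
    _, height, width, _ = inp["shape"]
    img = Image.open(image_path).convert("RGB").resize((width, height))
    x = np.expand_dims(np.asarray(img, dtype=np.float32) / 255.0, axis=0)

    interpreter.set_tensor(inp["index"], x)
    interpreter.invoke()  # runs entirely on-device; no network access needed
    probs = interpreter.get_tensor(out["index"])[0]
    top = int(np.argmax(probs))
    return LABELS[top], float(probs[top])

if __name__ == "__main__":
    label, confidence = classify("lesion.jpg")
    print(f"{label} ({confidence:.0%} confidence) -- refer to GP for review")
```

Because both the model and the reference data live on the device, the same code runs identically with or without an internet connection, which is the property the project emphasises.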
The project is understood to be the first of its kind to pair AI-based medical diagnosis with a focus on serving remote communities.
Ms Watt explained: 'Healthcare from home is a really important topic at the moment, especially as GP wait times continue to grow.
'If we can empower people to monitor skin conditions from their own homes using AI, we can dramatically reduce delays in diagnosis.'
A prototype of the device has already been demonstrated at Heriot-Watt's advanced health and care technologies suite.
The research team said the tool is up to 85% accurate in its diagnoses, and they hope to improve on this by gaining access to more skin lesion datasets, aided by advanced machine learning tools.
Ms Watt is also in talks with NHS Scotland to begin the ethical approval process for testing the technology in real-world clinical settings.
'Hopefully in the next year or two, we'll have a pilot project under way,' she said, noting medical technology often takes years to move from prototype to implementation.
She added: 'By the time I finish my PhD, three years from now, I'd love to see something well into the pipeline that's on its way to real-world use.'
The university said the long-term vision is to roll the system out first across remote regions of Scotland, before expanding to other parts of the world with limited access to dermatological care.
It added the technology could also offer vital support to patients who are infirm or unable to travel, allowing loved ones to assist with capturing and submitting diagnostic images to GPs.
Ms Watt's academic supervisor, Dr Christos Chrysoulas, said: 'E-health devices must be engineered to operate independently of external connectivity to ensure continuity of patient service and safety.
'In the event of a network or cloud service failure, such devices must fail safely and maintain all essential clinical operations without functional degradation.
'While auxiliary or non-critical features may become temporarily unavailable, the core diagnostic and even therapeutic capabilities must remain fully operational, in compliance of course with safety and regulatory requirements.
'Ensuring this level of resilience in affordable, low-cost medical devices is the essence of our research, particularly for deployment in resource-limited settings and areas with limited or no connectivity, where uninterrupted patient care must still be guaranteed.'
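One common way to realise the fail-safe behaviour Dr Chrysoulas describes is to keep the diagnostic path purely local and treat result forwarding as a degradable auxiliary feature. The sketch below shows that pattern; every service name, endpoint and file layout in it is hypothetical rather than taken from the project:

```python
# Illustrative fail-safe pattern: the core diagnostic path never depends
# on connectivity, while forwarding results to the GP service is an
# auxiliary feature that may degrade. All names here are hypothetical.
import json
import logging
import pathlib
import urllib.request

from skin_classifier import classify  # on-device model from the sketch above

QUEUE_DIR = pathlib.Path("outbox")  # local queue for results awaiting upload

def send_to_gp_service(result: dict) -> None:
    # Hypothetical GP-service endpoint; the article does not name the integration.
    req = urllib.request.Request(
        "http://gp-service.example/api/referrals",
        data=json.dumps(result).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=5)

def queue_for_later_upload(result: dict) -> None:
    # Persist locally so the referral survives until connectivity returns.
    QUEUE_DIR.mkdir(exist_ok=True)
    n = len(list(QUEUE_DIR.glob("*.json")))
    (QUEUE_DIR / f"referral_{n}.json").write_text(json.dumps(result))

def diagnose_and_report(image_path: str) -> dict:
    # Core clinical function: runs entirely on-device, never touches the network.
    label, confidence = classify(image_path)
    result = {"diagnosis": label, "confidence": confidence}

    # Auxiliary function: a network or cloud failure must not degrade diagnosis.
    try:
        send_to_gp_service(result)
        result["forwarded"] = True
    except OSError as exc:  # network down, DNS failure, timeout, etc.
        logging.warning("GP service unreachable, queuing locally: %s", exc)
        queue_for_later_upload(result)
        result["forwarded"] = False
    return result
```

The design choice is that the only code allowed to raise on a dead network sits inside the try block, so the diagnostic result is always produced and preserved, satisfying the "fail safely without functional degradation" requirement.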
UK Science and Technology Secretary Peter Kyle commented on the research, saying: 'Low-cost technology which could help detect skin cancer early and at home, without even the need for internet access, is an incredible example of AI's potential to break down barriers in healthcare and save lives.
'Promising, first-of-its-kind research like this also demonstrates the crucial role UK innovators can play in improving the lives of people of all backgrounds, wherever they live, and makes clear the value of government investing in research to deliver our plan for change.'
Related Articles


UPI
More people are considering AI lovers, and we shouldn't judge
People are falling in love with their chatbots. There are now dozens of apps that offer intimate companionship with an AI-powered bot, and they have millions of users. A recent survey found that 19% of Americans have interacted with an AI meant to simulate a romantic partner.

The response has been polarizing. In a New Yorker article titled "Your A.I. Lover Will Change You," futurist Jaron Lanier argued that "when it comes to what will happen when people routinely fall in love with an A.I., I suggest we adopt a pessimistic estimate about the likelihood of human degradation." Podcaster Joe Rogan put it more succinctly -- in a recent interview with Sen. Bernie Sanders, the two discussed the "dystopian" prospect of people marrying their AIs. Noting a case where this has already happened, Rogan said: "I'm like, oh, we're done. We're cooked."

We're probably not cooked. Rather, we should consider accepting human-AI relationships as beneficial and healthy. More and more people are going to form such relationships in the coming years, and my research in sexuality and technology indicates it is mostly going to be fine.

When surveying the breathless media coverage, the main concern raised is that chatbots will spoil us for human connection. How could we not prefer their cheerful personalities, their uncomplicated affection and their willingness to affirm everything we say? The fear is that, seduced by such easy companionship, many people will surely give up their desire to find human partners, while others will lose their ability to form satisfying human relationships even if they want to.

It has been less than three years since the launch of ChatGPT and other chatbots based on large language models. That means we can only speculate about the long-term effects of AI-human relationships on our capacity for intimacy. There is little data to support either side of the debate, though we can do our best to make sense of short-term studies and other pieces of available evidence.

There are certain risks that we do know about already, and we should take them seriously. For instance, we know that AI companion apps have terrible privacy policies. Chatbots can encourage destructive behaviors. Tragically, one may have played a role in a teenager's suicide. The companies that provide these apps can go out of business, or they can change their terms of service without warning. This can suddenly deprive users of access to technology they've become emotionally attached to, with no recourse or support.

Complex relationships

In assessing the dangers of relationships with AI, however, we should remember that human relationships are not exactly risk-free. One recent paper concluded that "the association between relationship distress and various forms of psychopathology is as strong as many other well-known predictors of mental illness." This is not to say we should swap human companions for AI ones. We just need to keep in mind that relationships can be messy, and we are always trying to balance the various challenges that come with them. AI relationships are no different.

We should also remember that just because someone forms an intimate bond with a chatbot, that doesn't mean it will be their only close relationship. Most people have lots of different people in their lives, who play a variety of different roles. Chatbot users may depend on their AI companions for support and affirmation, while still having relationships with humans that provide different kinds of challenges and rewards.
Meta's Mark Zuckerberg has suggested that AI companions may help solve the problem of loneliness. However, there is some (admittedly very preliminary) data to suggest that many of the people who form connections with chatbots are not just trying to escape loneliness. In a recent study (which has not yet been peer reviewed), researchers found that feelings of loneliness did not play a measurable role in someone's desire to form a relationship with an AI. Instead, the key predictor seemed to be a desire to explore romantic fantasies in a safe environment.

Support and safety

We should be willing to accept AI-human relationships without judging the people who form them. This follows a general moral principle that most of us already accept: we should respect the choices people make about their intimate lives when those choices don't harm anyone else. However, we can also take steps to ensure that these relationships are as safe and satisfying as possible.

First of all, governments should implement regulations to address the risks we know about already. They should, for instance, hold companies accountable when their chatbots suggest or encourage harmful behavior. Governments should also consider safeguards to restrict access by younger users, or at least to control the behavior of chatbots that are interacting with young people. And they should mandate better privacy protections -- though this is a problem that spans the entire tech industry.

Second, we need public education so people understand exactly what these chatbots are and the issues that can arise with their use. Everyone would benefit from full information about the nature of AI companions but, in particular, we should develop curricula for schools as soon as possible. While governments may need to consider some form of age restriction, the reality is that large numbers of young people are already using this technology, and will continue to do so. We should offer them non-judgmental resources to help them navigate their use in a manner that supports their well-being, rather than stigmatizes their choices.

AI lovers aren't going to replace human ones. For all the messiness and agony of human relationships, we still (for some reason) pursue other people. But people will also keep experimenting with chatbot romances, if for no other reason than they can be a lot of fun.

Neil McArthur is director of the Center for Professional and Applied Ethics at the University of Manitoba. This article is republished from The Conversation under a Creative Commons license. Read the original article. The views and opinions in this commentary are solely those of the author.


Newsweek
AI Impact Awards 2025: Meet the 'Best Of' Winners
Newsweek announced its inaugural AI Impact Awards last month, recognizing 38 companies for tackling everyday problems with innovative solutions. Winners were announced across 13 categories, including Best of—Most Innovative AI Technology or Service, which highlighted some of the most outstanding cross-industry advancements in the practical use of machine learning.

Among the five recipients in the Best Of category is Ex-Human, a digital platform that allows users to create customizable AI humans to interact with. Ex-Human took home the Extraordinary Impact in AI Human Interactivity or Collaboration award.

Artem Rodichev, the founder and CEO of Ex-Human, told Newsweek that he started his company in response to the growing loneliness epidemic. According to the U.S. Surgeon General, some 30 percent of U.S. adults experience feelings of loneliness once a week. Those figures are even higher in young Americans: roughly 80 percent of Gen Z report feeling lonely. The epidemic is also keeping college kids up at night, and studies show that a lack of connection can lead to negative health outcomes.

To help bridge that gap, Rodichev sought to create empathetic characters, or what he described as "non-boring AI."

"If you chat with ChatGPT, it doesn't feel like you are chatting with your friend," Rodichev said. "You feel more like you're chatting with Mr. Wikipedia. The responses are informative, but they're boring."

What his company wanted to create, instead, was "AI that can feel, that can love, that can hate, that can feel emotions and can connect on an emotional level with users," Rodichev said. He cited the 1982 sci-fi classic Blade Runner and the Oscar-nominated film Her as two main forms of inspiration.

Trained on millions of real conversations, Ex-Human enables companies to create personalized AI companions that can strengthen digital connections between those characters and human users. Internal data suggests Ex-Human's technology is working: its users spend an average of 90 minutes per day interacting with their AI companions, exchanging over 600 messages per week on average.

"At any moment, a user can decide, 'It's boring to chat with a character. I'll go check my Instagram feed. I'll watch this funny TikTok video.' But for some reason, they stay," Rodichev said. "They stay and continue to chat with these companions."

"A lot of these people struggle with social connections. They don't have a lot of friends and they have social anxiety," he said. "By chatting with these companions, they can reduce the social anxiety, they can improve their mental health. Because these kind of fake companions, they act as social trainers. They never judge you, they're available to you 24/7, you can discuss any fears, everything that you have in your head in a no-judgment environment."

Ex-Human projects that it will have 10 million users by early next year. The company has also raised over $3.7 million from investors, including the venture capital firm Andreessen Horowitz. Rodichev said that while Ex-Human's AIs have been popular among young people, he foresees the technology becoming more popular among the elderly -- another population that often suffers from loneliness -- as AI adoption becomes more widespread.
He also anticipated that Ex-Human would be a popular technology for companies with big IP portfolios, like Disney, whose popular characters may be "heavily underutilized" in the age of AI.

Also among this year's "Best Of" winners is a developer-focused platform that allows users to create AI-generated audio, video and images. The platform was the recipient of this year's Extraordinary Impact in General Purpose AI Tool or Service award.

Co-founder Gorkem Yurtseven told Newsweek that the award was particularly meaningful to him "because it recognizes generative media as its own market and sector that is very promising and growing really fast."

The company is almost exclusively focused on B2B, selling AI media tools to help other companies generate audio, video and images for their business. Essentially a "building block," the AI allows different clients to have unique experiences, Yurtseven explained. So far, the biggest categories for the platform are advertising and marketing, and retail and e-commerce.

"AI-generated ads are a very clear product-market fit. You can create unlimited versions of the same ad and test it to understand which ones perform better than the others. The cost of creation also goes down to zero," Yurtseven said.

In the retail space, he said the technology has commonly been used for product photography. His company's capabilities allow businesses to display products on diverse backgrounds or in various settings, and even to build experiences where customers are pictured wearing the items.

Yurtseven believes that in some ways, he and his co-founder, Burkay Gur, got lucky. When large language models (LLMs) started to gain steam, many thought the market for image and video models was too small.

"Turns out, they were wrong," Yurtseven chuckled. "The market is very big, and now, everyone understands it."

"We were able to ride the LLM AI wave, in a sense," he said. "People got excited about AI. It was, in the beginning, mostly LLMs. But image and media models got included into that as well, and you were able to tap into the AI budgets of different companies that were created because of the general AI wave."

The one sector that he's waiting to see embrace AI-generated audio, images and videos is social media. Yurtseven said this could be on an existing app or a completely new platform, but so far, "a true social media app, at the largest scale, hasn't been able to utilize this in a fun and engaging way."

"I think it's going to be very interesting once someone figures that out," he said. "There's a lot of interesting and creative ways people are using this in smaller circles, but it hasn't reached a big social network where it becomes a daily part of our lives, similar to how Snapchat stories or Instagram stories became. So, I'm still expecting that's going to happen."

There's no doubt that AI continues to evolve at a rapid pace, but initiatives to address AI's potential dangers and ethical concerns haven't quite matched that speed. The winner of this year's Extraordinary Impact in AI Transparency or Responsibility award is EY, which created a responsible AI framework compliant with one of the most comprehensive AI regulations to date: the European Union's Artificial Intelligence Act, which took effect on August 1, 2024.

Joe Depa, EY's global chief innovation officer, told Newsweek that developing the framework was a natural next step for EY, a global professional services company with 400,000 employees that does everything from consulting to tax to assurance to strategy and transactions.
"If you think about what that is, it's a lot of data," Depa said. "And when I think about data, one of the most important components around data right now is responsible AI." As a company operating in 150 countries worldwide, EY has seen firsthand how each country approaches AI differently. While some have more restrictive policies, others have almost none around responsible AI. This means there's no real "playbook" for what works and what doesn't work, Depa said. "It used to be that there was policy that you could follow. The policymakers would set policy, and then you could follow that policy," he said. "In this case, the speed of technology and the speed of AI and the rate of technology and pace of technology evolution is creating an environment where we have to be much more proactive about the way that we integrate responsible AI into everything we do, until the policy makers can catch up." "Now, it's incumbent upon leaders, and in particular, leaders that have technology prowess and have data sets to make sure that responsible AI is integrated into everything we do," Depa said. As part of their framework, EY teams at the company implemented firm-wide AI definitions that would promote consistency and clarity across all business functions. So far, their clients have been excited about the framework, Depa said. "At EY, trust is everything that we do for our clients," he said. "We want to be a trusted brand that they can they can trust with their data—their tax data, the ability to assure that the data from our insurance business and then hopefully help them lead through this transformation." "We're really proud of the award. We're excited for it. It confirms our approach, it confirms our understanding, and it confirms some of the core values that we have at EY," Depa said. As part of Newsweek's AI Impact Awards, Pharebio and Axon were also recognized in the Best of—Most Innovative AI Technology or Service category. Pharebio received the Extraordinary Impact in AI Innovation award, while Axon received the Extraordinary Impact in Commercial Tool or Service Award. To see the full list of winners and awards, visit the official page for Newsweek's AI Impact Awards.


Newsweek
Senators Demand Answers as Delta Plans to Price More Tickets Based on What AI Thinks You'll Pay
Three U.S. senators are demanding answers over Delta Air Lines' planned expansion of its use of artificial intelligence to set individualized fares, insisting the strategy is fraught with privacy concerns.

Sens. Ruben Gallego, Richard Blumenthal and Mark Warner, all Democrats, sent a letter Monday to the Atlanta-based airline seeking additional details of plans to deploy AI-based revenue management technology across 20 percent of its domestic network in a matter of months.

"Individualized pricing, or surveillance-based price setting, eliminates a fixed or static price in favor of prices that are tailored to an individual consumer's willingness to pay," the senators wrote to Delta in a letter obtained by Newsweek. "Delta's current and planned individualized pricing practices not only present data privacy concerns, but will also likely mean fare price increases up to each individual consumer's personal 'pain point' at a time when American families are already struggling with rising costs."

Delta Air Lines President Glen William Hauenstein told reporters during a July 10 earnings call that roughly 3 percent of the airline's domestic ticket prices are already set using AI, with hopes to reach 20 percent by the end of 2025.

"So, we're in heavy testing phase," Hauenstein said. "We like what we see. We like it a lot and we're continuing to roll it out. But we're going to take our time and make sure that the rollout is successful, as opposed to trying to rush it and risk that there are unwanted answers in there."

Hauenstein also praised Delta's partnership with Fetcherr, an Israel-based tech company that employs AI to process "millions of data points instantly," according to its website.

"The convergence of AI, machine learning and real-time data processing completely transforms how airlines approach pricing strategy," a post on dynamic pricing in aviation on the company's website reads. "Gone are the days of rigid pricing rules and manual adjustments. Welcome to the era of true dynamic pricing, where artificial intelligence can process millions of data points instantly to set the perfect price every time. Welcome to the modern age of AI dynamic pricing."

But the approach is problematic, according to Gallego and his Democratic colleagues.

"The implications for individual consumer privacy are severe on their own," they wrote. "Surveillance pricing has been shown to utilize extensive personal information obtained through a variety of third-party channels, including data about a passenger's purchase history, web browsing behavior, geolocation, social media activity, biometric data, and financial status."

Customers also have no way of knowing what data and personal information will be collected by Delta and Fetcherr, or how the airfare algorithm will be trained, according to the senators.
"Prices could be dictated not by supply and demand, but by individual need," the letter continued. "While Delta has stated that the airline will 'maintain strict safeguards to ensure compliance with federal law,' your company has not shared what those safeguards are or how you plan to protect American families against pricing discrimination in the evolving AI landscape." The senators want a response by August 4, including details about the data Delta uses to train its revenue management system algorithm for setting customized prices for fares or other products, as well as how many passengers per day are currently purchasing fares set or informed by the customized model. Chelsea Wollerson, a Delta spokesperson, said the company had no immediate response to the letter early Tuesday. "There is no fare product Delta has ever used, is testing, or plans to use that targets customers with individualized offers based on personal information or otherwise," the airline told Newsweek in a statement late Monday. "A variety of market forces drive the dynamic pricing model that's been used in the global industry for decades, with new tech simply streamlining this process. Delta always complies with regulations around pricing and disclosures."