
AI tool uses selfies to predict biological age and cancer survival
By Issam AHMED
Doctors often start exams with the so-called "eyeball test" -- a snap judgment about whether the patient appears older or younger than their age, which can influence key medical decisions.
That intuitive assessment may soon get an AI upgrade.
FaceAge, a deep learning algorithm described in The Lancet Digital Health, converts a simple headshot into a single number: an estimate of a person's biological age, rather than the birthday on their chart.
Trained on tens of thousands of photographs, it pegged cancer patients on average as biologically five years older than healthy peers. The study's authors say it could help doctors decide who can safely tolerate punishing treatments, and who might fare better with a gentler approach.
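The paper does not spell out the model's internals, but age estimation from photographs is typically framed as image regression: a convolutional backbone that maps a cropped face to one number. The sketch below is a minimal, hypothetical illustration of that pattern in PyTorch; the class name, backbone choice and preprocessing are assumptions for illustration, not the authors' implementation.

```python
# Minimal, hypothetical sketch of a face-to-age regressor (not the authors' published code).
import torch
import torch.nn as nn
from torchvision import models

class FaceAgeNet(nn.Module):
    """CNN backbone with a single regression output: estimated age in years."""
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)              # any image backbone would do
        backbone.fc = nn.Linear(backbone.fc.in_features, 1)   # replace the classifier with a 1-unit head
        self.backbone = backbone

    def forward(self, faces):                                  # faces: (batch, 3, 224, 224) face crops
        return self.backbone(faces).squeeze(-1)                # one predicted age per face

model = FaceAgeNet()
dummy_face = torch.randn(1, 3, 224, 224)                       # stand-in for a preprocessed selfie
print(model(dummy_face))                                       # untrained output, e.g. tensor([0.12])
```

In the study's terms, it is the gap between this kind of output and the patient's chronological age that carries the clinical signal.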
"We hypothesize that FaceAge could be used as a biomarker in cancer care to quantify a patient's biological age and help a doctor make these tough decisions," said co-senior author Raymond Mak, an oncologist at Mass Brigham Health, a Harvard-affiliated health system in Boston.
Consider two hypothetical patients: a spry 75-year-old whose biological age clocks in at 65, and a frail 60-year-old whose biology reads 70. Aggressive radiation might be appropriate for the former but risky for the latter.
The same logic could help guide decisions about heart surgery, hip replacements or end-of-life care.
Growing evidence shows humans age at different rates, shaped by genes, stress, exercise, and habits like smoking or drinking. While pricey genetic tests can reveal how DNA wears over time, FaceAge promises insight using only a selfie.
The model was trained on 58,851 portraits of presumed-healthy adults over 60, culled from public datasets.
It was then tested on 6,196 cancer patients treated in the United States and the Netherlands, using photos snapped just before radiotherapy. Patients with malignancies looked on average 4.79 years older biologically than their chronological age.
Among cancer patients, a higher FaceAge score strongly predicted worse survival -- even after accounting for actual age, sex, and tumor type -- and the hazard rose steeply for anyone whose biological reading tipped past 85.
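The adjustment described here is a standard survival analysis: FaceAge enters a proportional-hazards model alongside chronological age, sex and tumor type, and its coefficient is tested for independent predictive value. Below is a minimal sketch of that pattern using the lifelines library on synthetic data; the column names and values are invented for illustration, and tumor type is omitted for brevity.

```python
# Hedged sketch: Cox regression of survival on FaceAge, adjusted for covariates.
# All data below are synthetic; only the modelling pattern mirrors the study's description.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "face_age": rng.normal(70, 8, n),        # model-estimated biological age
    "chron_age": rng.normal(65, 8, n),       # chronological age at treatment
    "male": rng.integers(0, 2, n),           # sex indicator
    "months": rng.exponential(24, n),        # follow-up time
    "died": rng.integers(0, 2, n),           # event indicator (1 = death observed)
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="died")   # remaining columns enter as covariates
cph.print_summary()                                     # hazard ratio for face_age, adjusted for the rest
```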
Intriguingly, FaceAge appears to weigh the signs of aging differently than humans do. For example, being gray-haired or balding matters less than subtle changes in facial muscle tone.
FaceAge boosted doctors' accuracy, too. Eight physicians were asked to examine headshots of terminal cancer patients and guess who would die within six months. Their success rate barely beat chance; with FaceAge data in hand, predictions improved sharply.
The model even affirmed a favorite internet meme, estimating actor Paul Rudd's biological age as 43 in a photo taken when he was 50.
AI tools have faced scrutiny for under-serving non-white people. Mak said preliminary checks revealed no significant racial bias in FaceAge's predictions, but the group is training a second-generation model on 20,000 patients.
They're also probing how factors like makeup, cosmetic surgery or room lighting variations could fool the system.
Ethics debates loom large. An AI that can read biological age from a selfie could prove a boon for clinicians, but also tempting for life insurers or employers seeking to gauge risk.
"It is for sure something that needs attention, to assure that these technologies are used only in the benefit for the patient," said Hugo Aerts, the study's co-lead who directs MGB's AI in medicine program.
Another dilemma: What happens when the mirror talks back? Learning that your body is biologically older than you thought may spur healthy changes -- or sow anxiety.
The researchers plan to open a public-facing FaceAge portal where people can upload their own photos and enroll in a research study that will further validate the algorithm. Commercial versions aimed at clinicians may follow, but only after more validation.
© 2025 AFP
Related Articles


Japan Today
16 hours ago
Study reveals potato's secret tomato past
A peasant steps on potatoes to begin the preparation of chuño (dehydrated potato) in Machacamarca, Bolivia, on June 30, 2021
By Issam AHMED
You say potato, I say tomato? Turns out one helped create the other: natural interbreeding between wild tomatoes and potato-like plants in South America gave rise to the modern-day spud around nine million years ago, according to a new study published in the journal Cell.
Co-author Loren Rieseberg, a professor at the University of British Columbia, told AFP the findings point to a "profound shift" in evolutionary biology, as scientists increasingly recognize the role of ancient hybridization events in shaping the Tree of Life.
While it was once thought that random mutations were by far the biggest driver of new species, "we now agree that the creative role of hybridization has been underestimated," he said.
Simple, affordable and versatile, the humble potato is now one of the world's most important crops. But its origins have long puzzled scientists.
Modern potato plants closely resemble three species from Chile known as Etuberosum. However, these plants do not produce tubers -- the large underground structures, like those found in potatoes and yams, that store nutrients and are the parts we eat.
On the other hand, genetic analysis has revealed a surprising closeness to tomatoes. "This is known as discordance, and indicates something interesting is going on!" co-author Sandra Knapp, a research botanist at Britain's Natural History Museum, told AFP.
To solve the mystery, an international team of researchers analyzed 450 genomes from cultivated potatoes and 56 wild potato species. Lead author Zhiyang Zhang, of the Agricultural Genomics Institute at Shenzhen, said in a statement: "Wild potatoes are very difficult to sample, so this dataset represents the most comprehensive collection of wild potato genomic data ever analysed."
The analysis revealed that modern potatoes carry a balanced genetic legacy from two ancestral species -- roughly 60 percent from Etuberosum and 40 percent from tomatoes.
"My wow moment was when the Chinese team showed that ALL potatoes, wild species as well as land races, had basically the same proportion of tomato genes and Etuberosum genes," said Knapp. "That really points to an ancient hybridization event rather than various events of gene exchange later on," she added. "It is so clear cut! Beautiful."
One gene called SP6A, a signal for tuberization, came from the tomato lineage. But it only enabled tuber formation when paired with the IT1 gene from Etuberosum, which controls underground stem growth.
The divergence between Etuberosum and tomatoes is thought to have begun 14 million years ago -- possibly due to off-target pollination by insects -- and completed nine million years ago. This evolutionary event coincided with the rapid uplift of the Andes mountain range, providing ideal conditions for the emergence of tuber-bearing plants that could store nutrients underground.
Another key feature of tubers is their ability to reproduce asexually, sprouting new buds without the need for seeds or pollination -- a trait that helped them spread across South America and, through later human exchange, around the globe.
Co-author Sanwen Huang, a professor at the Agricultural Genomics Institute at Shenzhen, told AFP that his lab is now working on a new hybrid potato that can be reproduced by seeds to accelerate breeding. This study suggests that using the tomato "as a chassis of synthetic biology" is a promising route for creating this new potato, he said.
© 2025 AFP


Japan Today
17-06-2025
AI chatbots need more books to learn from. These libraries are opening their stacks
By MATT O'BRIEN
Everything ever said on the internet was just the start of teaching artificial intelligence about humanity. Tech companies are now tapping into an older repository of knowledge: the library stacks.
Nearly one million books published as early as the 15th century -- and in 254 languages -- are part of a Harvard University collection being released to AI researchers Thursday. Also coming soon are troves of old newspapers and government documents held by Boston's public library.
Cracking open the vaults to centuries-old tomes could be a data bonanza for tech companies battling lawsuits from living novelists, visual artists and others whose creative works have been scooped up without their consent to train AI chatbots.
"It is a prudent decision to start with public domain data because that's less controversial right now than content that's still under copyright," said Burton Davis, a deputy general counsel at Microsoft. Davis said libraries also hold "significant amounts of interesting cultural, historical and language data" that's missing from the past few decades of online commentary that AI chatbots have mostly learned from.
Supported by "unrestricted gifts" from Microsoft and ChatGPT maker OpenAI, the Harvard-based Institutional Data Initiative is working with libraries around the world on how to make their historic collections AI-ready in a way that also benefits libraries and the communities they serve.
"We're trying to move some of the power from this current AI moment back to these institutions," said Aristana Scourtas, who manages research at Harvard Law School's Library Innovation Lab. "Librarians have always been the stewards of data and the stewards of information."
Harvard's newly released dataset, Institutional Books 1.0, contains more than 394 million scanned pages of paper. One of the earlier works is from the 1400s -- a Korean painter's handwritten thoughts about cultivating flowers and trees. The largest concentration of works is from the 19th century, on subjects such as literature, philosophy, law and agriculture, all of it meticulously preserved and organized by generations of librarians.
It promises to be a boon for AI developers trying to improve the accuracy and reliability of their systems. "A lot of the data that's been used in AI training has not come from original sources," said the data initiative's executive director, Greg Leppert, who is also chief technologist at Harvard's Berkman Klein Center for Internet & Society. This book collection goes "all the way back to the physical copy that was scanned by the institutions that actually collected those items," he said.
Before ChatGPT sparked a commercial AI frenzy, most AI researchers didn't think much about the provenance of the passages of text they pulled from Wikipedia, from social media forums like Reddit and sometimes from deep repositories of pirated books. They just needed lots of what computer scientists call tokens -- units of data, each of which can represent a piece of a word.
Harvard's new AI training collection has an estimated 242 billion tokens, an amount that's hard for humans to fathom but still just a drop of what's being fed into the most advanced AI systems. Facebook parent company Meta, for instance, has said the latest version of its AI large language model was trained on more than 30 trillion tokens pulled from text, images and videos.
Meta is also battling a lawsuit from comedian Sarah Silverman and other published authors who accuse the company of stealing their books from "shadow libraries" of pirated works.
Now, with some reservations, the real libraries are standing up. OpenAI, which is also fighting a string of copyright lawsuits, donated $50 million this year to a group of research institutions including Oxford University's 400-year-old Bodleian Library, which is digitizing rare texts and using AI to help transcribe them.
When the company first reached out to the Boston Public Library, one of the biggest in the U.S., the library made clear that any information it digitized would be for everyone, said Jessica Chapel, its chief of digital and online services.
"OpenAI had this interest in massive amounts of training data. We have an interest in massive amounts of digital objects. So this is kind of just a case that things are aligning," Chapel said.
Digitization is expensive. It's been painstaking work, for instance, for Boston's library to scan and curate dozens of New England's French-language newspapers that were widely read in the late 19th and early 20th century by Canadian immigrant communities from Quebec. Now that such text is of use as training data, it helps bankroll projects that librarians want to do anyway.
"We've been very clear that, 'Hey, we're a public library,'" Chapel said. "Our collections are held for public use, and anything we digitized as part of this project will be made public."
Harvard's collection was already digitized starting in 2006 for another tech giant, Google, in its controversial project to create a searchable online library of more than 20 million books. Google spent years beating back legal challenges from authors to its online book library, which included many newer and copyrighted works. The dispute was finally settled in 2016 when the U.S. Supreme Court let stand lower court rulings that rejected copyright infringement claims.
Now, for the first time, Google has worked with Harvard to retrieve public domain volumes from Google Books and clear the way for their release to AI developers. Copyright protections in the U.S. typically last for 95 years, and longer for sound recordings.
How useful all of this will be for the next generation of AI tools remains to be seen as the data gets shared Thursday on the Hugging Face platform, which hosts datasets and open-source AI models that anyone can download.
The book collection is more linguistically diverse than typical AI data sources. Fewer than half the volumes are in English, though European languages still dominate, particularly German, French, Italian, Spanish and Latin.
A book collection steeped in 19th century thought could also be "immensely critical" for the tech industry's efforts to build AI agents that can plan and reason as well as humans, Leppert said. "At a university, you have a lot of pedagogy around what it means to reason," Leppert said. "You have a lot of scientific information about how to run processes and how to run analyses."
At the same time, there's also plenty of outdated data, from debunked scientific and medical theories to racist narratives.
"When you're dealing with such a large data set, there are some tricky issues around harmful content and language," said Kristi Mukk, a coordinator at Harvard's Library Innovation Lab, who said the initiative is trying to provide guidance about mitigating the risks of using the data, to "help them make their own informed decisions and use AI responsibly."
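For readers who want to look at the release themselves, datasets hosted on Hugging Face can be pulled with the open-source datasets library. A minimal sketch follows; the repository identifier is an assumption used only for illustration, so check the Institutional Data Initiative's Hugging Face page for the actual name.

```python
# Hedged sketch: streaming records from a public-domain book dataset on Hugging Face.
# "institutional/institutional-books-1.0" is an assumed repository id, shown for illustration only.
from datasets import load_dataset

ds = load_dataset("institutional/institutional-books-1.0", split="train", streaming=True)
for record in ds.take(1):       # pull a single record without downloading the full corpus
    print(record.keys())        # inspect the fields attached to one scanned volume
```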
© Copyright 2025 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed without permission.


Kyodo News
13-06-2025
Japan draws up 100 bil. yen policy to attract foreign researchers
The Japanese government unveiled on Friday a 100 billion yen ($700 million) policy package designed to attract foreign researchers, some of whom may have joined the exodus of talent from the United States due to research funding cuts.
The measures aim to create an elite research environment in Japan, as competition to lure talent intensifies globally in fields such as artificial intelligence and semiconductors. The government also plans to use profits generated from a 10 trillion yen investment fund set up by the state to help universities produce internationally competitive research.
"We will make utmost efforts to make our country the most attractive in the world for researchers," said science and technology policy minister Minoru Kiuchi at a press conference.
Many researchers have departed the United States as President Donald Trump's administration has pushed elite universities to prioritize American students over those from other countries and slashed federal funding for many programs.
Japan's new policy package will fund many existing programs, including a plan by Tohoku University to spend around 30 billion yen to recruit about 500 researchers from Japan and abroad. An education ministry project in which hubs will be created to promote top-level research is also included.
The government aims to raise salaries for researchers and reduce their administrative burden, allowing them to concentrate on their work. It also seeks to acquire advanced technology for use at institutions. Kiuchi said the government will consider additional measures to retain researchers after bringing them in from abroad.
Despite the government's recent efforts to promote science and technology research, an education ministry institute said that last year Japan remained ranked at a record-low 13th place in the number of highly cited scientific papers.
Related coverage:
Japan calls on colleges to accept students in U.S. after Harvard ban
Univ. of Tokyo mulls accepting Harvard foreign students if barred
Defense tech subsidies for Japan universities totaled 2.7 bil. yen