
AI helps Latin scholars decipher ancient Roman texts
PARIS: Around 1,500 Latin inscriptions are discovered every year, offering an invaluable view into the daily life of ancient Romans – and posing a daunting challenge for the historians tasked with interpreting them.
But a new artificial intelligence tool, partly developed by Google researchers, can now help Latin scholars piece together these puzzles from the past, according to a study published on July 23.
Inscriptions in Latin were commonplace across the Roman world, from the decrees of emperors to graffiti on city streets. One mosaic outside a home in the ancient city of Pompeii even warns: "Beware of the dog".
These inscriptions are "so precious to historians because they offer first-hand evidence of ancient thought, language, society and history", said study co-author Yannis Assael, a researcher at Google's AI lab DeepMind.
"What makes them unique is that they are written by the ancient people themselves across all social classes on any subject. It's not just history written by the elite," Assael, who co-designed the AI model, told a press conference.
However, these texts have often been damaged over the millennia.
"We usually don't know where and when they were written," Assael said.
So the researchers created a generative neural network, which is an AI tool that can be trained to identify complex relationships between types of data.
They named their model Aeneas, after the Trojan hero and son of the Greek goddess Aphrodite.
It was trained on data about the dates, locations and meanings of Latin inscriptions from an empire that spanned five million square kilometres and two millennia.
Thea Sommerschield, an epigrapher at the University of Nottingham who co-designed the AI model, said that "studying history through inscriptions is like solving a gigantic jigsaw puzzle".
"You can't solve the puzzle with a single isolated piece, even though you know information like its colour or its shape," she explained.
"To solve the puzzle, you need to use that information to find the pieces that connect to it."
Tested on Augustus
This can be a huge job.
Latin scholars have to compare inscriptions against "potentially hundreds of parallels", a task which "demands extraordinary erudition" and "laborious manual searches" through massive library and museum collections, the study in the journal Nature said.
The researchers trained their model on 176,861 inscriptions – totalling some 16 million characters – five percent of which included images.
It can now estimate which of the 62 Roman provinces an inscription came from, date it to within a decade of its production, and even suggest what its missing sections might have contained, they said.
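The restoration task can be illustrated, very loosely, with a toy model. The sketch below is a hypothetical, minimal character-level example – not the deep neural network described in the Nature paper – that fills a damaged character in a short Latin phrase using the characters on either side of the gap. The corpus, function names and approach are all illustrative assumptions.

```python
from collections import Counter, defaultdict

# Toy illustration of text restoration: predict a missing character
# from the characters immediately before and after the gap.
# This is NOT how Aeneas works internally; the real system is a
# neural network trained on 176,861 inscriptions.

CORPUS = [
    "cave canem",                  # "beware of the dog"
    "senatus populusque romanus",  # "the senate and people of Rome"
    "imperator caesar augustus",
]

def train(texts):
    """For each (left, right) character pair, count the characters
    observed between them in the corpus."""
    counts = defaultdict(Counter)
    for t in texts:
        for i in range(1, len(t) - 1):
            counts[(t[i - 1], t[i + 1])][t[i]] += 1
    return counts

def restore(damaged, model):
    """Fill each interior '_' with the most frequent character seen
    between its two neighbours ('?' if that context is unseen)."""
    chars = list(damaged)
    for i, ch in enumerate(chars):
        if ch == "_":
            seen = model.get((chars[i - 1], chars[i + 1]))
            chars[i] = seen.most_common(1)[0][0] if seen else "?"
    return "".join(chars)

model = train(CORPUS)
print(restore("cave c_nem", model))  # → cave canem
```

A real restoration model conditions on far more context than two characters, of course, and ranks whole candidate readings rather than filling gaps greedily, but the principle – scoring possible restorations against patterns learned from surviving inscriptions – is the same.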
To test their model, the team asked Aeneas to analyse a famous inscription called "Res Gestae Divi Augusti", in which Rome's first emperor Augustus detailed his accomplishments.
Debate still rages among historians about when exactly the text was written.
Though the text is riddled with exaggerations, irrelevant dates and erroneous geographical references, the researchers said Aeneas was able to use subtle clues, such as archaic spelling, to narrow the composition down to two possible dates – the same two historians have long debated.
More than 20 historians who tried out the model found it provided a useful starting point in 90 percent of cases, according to DeepMind.
The best results came when historians used the AI model together with their skills as researchers, rather than relying solely on one or the other, the study said.
"Since their breakthrough, generative neural networks have seemed at odds with educational goals, with fears that relying on AI hinders critical thinking rather than enhances knowledge," said study co-author Robbe Wulgaert, a Belgian AI researcher.
"By developing Aeneas, we demonstrate how this technology can meaningfully support the humanities by addressing concrete challenges historians face." – AFP

Related Articles


The Star – 17 hours ago
AI is replacing search engines as a shopping guide, research suggests
Finding products, comparing prices and browsing reviews: until now, you'd have done most of this in a search engine like Google. But that era appears to be ending thanks to AI, research shows. (Photo: Christin Klose/dpa)

COPENHAGEN: Three in four people who use AI are turning to the likes of ChatGPT, Gemini and Copilot to get advice and recommendations on shopping and travel instead of using the previous online method of search engines like Google, new research shows.

AI-supported online shopping is done at least occasionally by 76% of AI users, with 17% doing so most or even all of the time, according to a study conducted by the market research institute Norstat on behalf of Verdane, a leading European investment company.

The changes in consumer search behaviour pose a major challenge not only for search engine providers like Google but also for manufacturers and retailers, who must adapt to maintain their visibility in the AI-driven world. AI chatbots have emerged as powerful tools for tracking down specific products, often providing helpful advice in response to complex and specific queries.

Of the survey respondents, 3% are dedicated AI enthusiasts who always use AI tools instead of search engines when shopping online, while 14% said they mostly use AI and 35% do so occasionally. A total of 7,282 people from the UK, Germany, Sweden, Norway, Denmark and Finland aged between 18 and 60 participated in the survey in June.

The highest proportion of AI use is in online travel research, at 33%. This is followed by consumer electronics (22%), DIY and hobby supplies (20%), and software or digital subscriptions (19%). However, AI usage is still relatively low in fashion and clothing (13%), cosmetics (12%), and real estate (7%).

Among AI tools, ChatGPT is far ahead of its competitors: 86% of AI users regularly use OpenAI's chatbot. This is followed at a considerable distance by Google's Gemini (26% regular users) and Microsoft's Copilot (20%).
The Chinese AI bot DeepSeek, which has been the subject of heated debate among AI experts and data protection advocates, appears to have no significant role among consumers in Europe. – dpa


The Star – a day ago
'It's the most empathetic voice in my life': How AI is transforming the lives of neurodivergent people
For Cape Town-based filmmaker Kate D'hotman, connecting with movie audiences comes naturally. Far more daunting is speaking with others. 'I've never understood how people [decipher] social cues,' the 40-year-old director of horror films says.

D'hotman has autism and attention-deficit hyperactivity disorder (ADHD), which can make relating to others exhausting and a challenge. However, since 2022, D'hotman has been a regular user of ChatGPT, the popular AI-powered chatbot from OpenAI, relying on it to overcome communication barriers at work and in her personal life. 'I know it's a machine,' she says. 'But sometimes, honestly, it's the most empathetic voice in my life.'

Neurodivergent people — including those with autism, ADHD, dyslexia and other conditions — can experience the world differently from the neurotypical norm. Talking to a colleague, or even texting a friend, can entail misread signals, a misunderstood tone and unintended impressions. AI-powered chatbots have emerged as an unlikely ally, helping people navigate social encounters with real-time guidance. Although this new technology is not without risks — in particular, some worry about over-reliance — many neurodivergent users now see it as a lifeline.

How does it work in practice? For D'hotman, ChatGPT acts as an editor, translator and confidant. Before using the technology, she says communicating in neurotypical spaces was difficult. She recalls how she once sent her boss a bulleted list of ways to improve the company, at their request. But what she took to be a straightforward response was received as overly blunt, and even rude.

Now, she regularly runs things by ChatGPT, asking the chatbot to consider the tone and context of her conversations. Sometimes she'll instruct it to take on the role of a psychologist or therapist, asking for help to navigate scenarios as sensitive as a misunderstanding with her best friend.
She once uploaded months of messages between them, prompting the chatbot to help her see what she might have otherwise missed. Unlike humans, D'hotman says, the chatbot is positive and non-judgmental.

That's a feeling other neurodivergent people can relate to. Sarah Rickwood, a senior project manager in the sales training industry, based in Kent, England, has ADHD and autism. Rickwood says she has ideas that run away with her and often loses people in conversations. 'I don't do myself justice,' she says, noting that ChatGPT has 'allowed me to do a lot more with my brain.' With its help, she can put together emails and business cases more clearly.

The use of AI-powered tools is surging. A January study conducted by Google and the polling firm Ipsos found that AI usage globally has jumped 48%, with excitement about the technology's practical benefits now exceeding concerns over its potentially adverse effects. In February, OpenAI told Reuters that its weekly active users had surpassed 400 million, of which at least 2 million are paying business users.

But for neurodivergent users, these aren't just tools of convenience; some AI-powered chatbots are now being created with the neurodivergent community in mind.

Michael Daniel, an engineer and entrepreneur based in Newcastle, Australia, told Reuters that it wasn't until his daughter was diagnosed with autism — and he received the same diagnosis himself — that he realised how much he had been masking his own neurodivergent traits. His desire to communicate more clearly with his neurotypical wife and loved ones inspired him to build NeuroTranslator, an AI-powered personal assistant, which he credits with helping him fully understand and process interactions, as well as avoid misunderstandings.

'Wow … that's a unique shirt,' he recalls saying about his wife's outfit one day, without realising how his comment might be perceived.
She asked him to run the comment through NeuroTranslator, which helped him recognise that, without a positive affirmation, remarks about a person's appearance could come across as criticism. 'The emotional baggage that comes along with those situations would just disappear within minutes,' he says of using the app.

Since its launch in September, Daniel says NeuroTranslator has attracted more than 200 paid subscribers. An earlier web version of the app, called Autistic Translator, amassed 500 monthly paid subscribers.

As transformative as this technology has become, some warn against becoming too dependent. The ability to get results on demand can be 'very seductive,' says Larissa Suzuki, a London-based computer scientist and visiting NASA researcher who is herself neurodivergent.

Overreliance could be harmful if it inhibits neurodivergent users' ability to function without it, or if the technology itself becomes unreliable — as is already the case with many AI search-engine results, according to a recent study from the Columbia Journalism Review. 'If AI starts screwing up things and getting things wrong,' Suzuki says, 'people might give up on technology, and on themselves.'

Baring your soul to an AI chatbot does carry risk, agrees Gianluca Mauro, an AI adviser and co-author of Zero to AI. 'The objective [of AI models like ChatGPT] is to satisfy the user,' he says, raising questions about its willingness to offer critical advice. Unlike therapists, these tools aren't bound by ethical codes or professional guidelines. If AI has the potential to become addictive, Mauro adds, regulation should follow.

A recent study by Carnegie Mellon and Microsoft (which is a key investor in OpenAI) suggests that long-term overdependence on generative AI tools can undermine users' critical-thinking skills and leave them ill-equipped to manage without them.
'While AI can improve efficiency,' the researchers wrote, 'it may also reduce critical engagement, particularly in routine or lower-stakes tasks in which users simply rely on AI.'

While Dr. Melanie Katzman, a clinical psychologist and expert in human behaviour, recognises the benefits of AI for neurodivergent people, she does see downsides, such as giving patients an excuse not to engage with others. A therapist will push their patient to try different things outside of their comfort zone. 'I think it's harder for your AI companion to push you,' she says.

But for users who have come to rely on this technology, such fears are academic. 'A lot of us just end up kind of retreating from society,' warns D'hotman, who says that she barely left the house in the year following her autism diagnosis, feeling overwhelmed. Were she to give up using ChatGPT, she fears she would return to that traumatic period of isolation. 'As somebody who's struggled with a disability my whole life,' she says, 'I need this.' (Editing by Yasmeen Serhan and Sharon Singleton)


The Star – 2 days ago
Cybersecurity demands proactive identity verification to counter AI threats
Cybersecurity is a race to outpace scammers, and secure identity verification must be at the forefront, says Jen Liang, CEO of Australia-based IDMeta Group.

During the Cybersecurity Summit 2025 on Friday (July 25), Liang said that as artificial intelligence reshapes fraud tactics, with deepfakes posing a growing threat, identity verification has become the foundation of digital trust.

"Fraudsters use AI to manipulate IDs, mimic voices, and create deepfake videos, but we also use AI for fraud detection. Our biometric technology can detect deepfakes not just at onboarding but during live interactions," he said during the panel discussion titled "Digital Trust and Resilience: Strengthening Cyber Confidence in Malaysia."

He highlighted cases where people unwittingly engaged with deepfakes on video calls. In 2024, a finance employee at a multinational company in Hong Kong was deceived into transferring $25mil after fraudsters used deepfake technology to impersonate the company's CFO during a video conference call. "It's really concerning. Fake meetings are being set up with deepfakes that are 85% to 95% accurate," he said.

Liang said the challenge lies in staying ahead of cybercriminals and adapting faster than they do. "Cybersecurity has always been about staying one step ahead. The difference now is the tools are far more powerful for both sides."

He emphasised the importance of scalable, secure identity verification in sectors like fintech and gaming. "Fintech and gaming are typically the spaces we're very much involved in. Verification is critical when onboarding customers securely and ensuring they are who they say they are. It's also key to preventing scams and fraudulent accounts, which is especially important today," he said.

Operating in multiple jurisdictions, Liang acknowledged that navigating data privacy laws and compliance is one of the company's greatest challenges. "Every country has its own regulatory framework. In Australia, the privacy act is very strict."
He noted that both Australia and Indonesia require in-country data servers, with no allowance for cross-border storage. "The Philippines is moving in that direction too, but they don't yet have the infrastructure to support it. Without local data centres like Google or Amazon Web Services, requiring in-country servers could overwhelm their current systems," he said.

Liang added that while regulations are becoming more standardised, such as biometric validation and email screening, enforcement is key. "It's not just about laws being in place. It's about how consistently those laws are enforced."

He also acknowledged Malaysia's evolving digital policy landscape. "On this trip, we've had conversations with several stakeholders here. The direction is there, but the execution and development are still maturing. It's something we're keeping a close eye on," he said.

In response to a question on educating youth about cyber threats, Liang stressed the need to empower them to navigate digital ecosystems responsibly. "Young people today are far more experiential. They have broad access to information, and they're not afraid to challenge what they're told. We just need to provide them a wider scope of guidance, not control," he said.

Other speakers echoed Liang's concerns, particularly around resilience and preparedness in the face of rising threats. Amal Wikramasinghe, Head of Governance Risk and Compliance – Cybersecurity and Data Privacy at Axiata Group, described how the company managed a rare and unforeseen third-party outage that impacted four of its operating companies. He emphasised the need for real-time crisis communication and damage assessment protocols.

Zainol Zainuddin, CTO of NTT DATA eCommerce Solutions, warned that infrastructure resilience is only as strong as an organisation's cybersecurity culture. He highlighted how phishing, still the most common entry point for hackers, thrives in organisations where awareness is treated as a checklist, not a mindset.
"Even the best technology won't protect you if your people don't know how to spot a phishing email. You have to create a blame-free, transparent culture where mistakes can be reported early," he said. Moderator Jaco Benadie, Partner, Technology Consulting – Cyber at Ernst & Young Consulting Sdn Bhd, summarised that building digital trust requires a proactive, resilient strategy that spans technology, people, and culture, while prioritising user privacy and navigating cross-border regulatory challenges.