
Rediscovery of Babylon epic poem is a reason to cheer AI
'Like the sea, Babylon proffers her yield / Like a garden of fruit, she flourishes in her charms / Like a wave, her swell brings her bounties rolling in.'
These words, written around 3,000 years ago, were known by heart by people in the Babylonian empire for centuries. They have just been recovered with the help of AI. The words form part of a 250-line poem deciphered from fragments of hundreds of cuneiform tablets discovered in the library of Sippar, a lost city 40 miles north of Baghdad. Without AI, says Professor Enrique Jiménez of Ludwig-Maximilians University, the joint Iraqi-German project would have taken decades.
• Inside the library where cutting-edge tech is unlocking the secrets of ancient scrolls
Dating from 300 years before the Iliad and the Odyssey, the poem was recovered from 30 separate manuscripts written over a 600-year period. This suggests that it was a work of great importance, possibly the Babylonian equivalent of Greece's Homeric hymns and Rome's Aeneid. Indeed, it appears to have been on the Babylonian school curriculum: some of the source manuscripts are schoolchildren's tablets, and such texts were learned by heart at the time.
That's partly why the find is so exciting: it's unusual for such a significant piece of literature to be lost and then to resurface. But the poem is also a powerful literary work, using vivid language reminiscent of the Psalms to bring the city and its fertile agricultural hinterland to life. And it reveals some fascinating features of Babylonian society, such as the importance of women priests and the respect accorded to foreigners.
Humanity is understandably alarmed by AI's potential to shake contemporary civilisation to its foundations, and so tends to focus on the threats it may pose. But it is important also to remember its many upsides, such as its potential for revealing the lost cultural riches of ancient civilisation. Like fruitful Babylon, AI has much to yield.


The Guardian
20 minutes ago
Human-level AI is not inevitable. We have the power to change course
'Technology happens because it is possible,' OpenAI CEO Sam Altman told the New York Times in 2019, consciously paraphrasing Robert Oppenheimer, the father of the atomic bomb. Altman captures a Silicon Valley mantra: technology marches forward inexorably. Another widespread techie conviction is that the first human-level AI – also known as artificial general intelligence (AGI) – will lead to one of two futures: a post-scarcity techno-utopia or the annihilation of humanity.

For countless other species, the arrival of humans spelled doom. We weren't tougher, faster or stronger – just smarter and better coordinated. In many cases, extinction was an accidental byproduct of some other goal we had. A true AGI would amount to creating a new species, which might quickly outsmart or outnumber us. It could see humanity as a minor obstacle, like an anthill in the way of a planned hydroelectric dam, or a resource to exploit, like the billions of animals confined in factory farms. Altman, along with the heads of the other top AI labs, believes that AI-driven extinction is a real possibility (joining hundreds of leading AI researchers and prominent figures).

Given all this, it's natural to ask: should we really try to build a technology that may kill us all if it goes wrong? Perhaps the most common reply says: AGI is inevitable. It's just too useful not to build. After all, AGI would be the ultimate technology – what a colleague of Alan Turing called 'the last invention that man need ever make'. Besides, the reasoning goes within AI labs, if we don't, someone else will do it – less responsibly, of course.

A new ideology out of Silicon Valley, effective accelerationism (e/acc), claims that AGI's inevitability is a consequence of the second law of thermodynamics and that its engine is 'technocapital'. The e/acc manifesto asserts: 'This engine cannot be stopped. The ratchet of progress only ever turns in one direction. Going back is not an option.'
For Altman and e/accs, technology takes on a mystical quality – the march of invention is treated as a fact of nature. But it's not. Technology is the product of deliberate human choices, motivated by myriad powerful forces. We have the agency to shape those forces, and history shows that we've done it before. No technology is inevitable, not even something as tempting as AGI.

Some AI worriers like to point out the times humanity resisted and restrained valuable technologies. Fearing novel risks, biologists initially banned and then successfully regulated experiments on recombinant DNA in the 1970s. No human has been reproduced via cloning, even though it's been technically possible for over a decade, and the only scientist to genetically engineer humans was imprisoned for his efforts. Nuclear power can provide consistent, carbon-free energy, but vivid fears of catastrophe have motivated stifling regulations and outright bans.

And if Altman were more familiar with the history of the Manhattan Project, he might realize that the creation of nuclear weapons in 1945 was actually a highly contingent and unlikely outcome, motivated by a mistaken belief that the Germans were ahead in a 'race' for the bomb. Philip Zelikow, the historian who led the 9/11 Commission, said: 'I think had the United States not built an atomic bomb during the Second World War, it's actually not clear to me when or possibly even if an atomic bomb ever is built.'

It's now hard to imagine a world without nuclear weapons. But in a little-known episode, then president Ronald Reagan and Soviet leader Mikhail Gorbachev nearly agreed to ditch all their bombs (a misunderstanding over the 'Star Wars' satellite defense system dashed these hopes). Even though the dream of full disarmament remains just that, nuke counts are less than 20% of their 1986 peak, thanks largely to international agreements. These choices weren't made in a vacuum.
Reagan was a staunch opponent of disarmament before the millions-strong Nuclear Freeze movement got to him. In 1983, he commented to his secretary of state: 'If things get hotter and hotter and arms control remains an issue, maybe I should go see [Soviet leader Yuri] Andropov and propose eliminating all nuclear weapons.'

There are extremely strong economic incentives to keep burning fossil fuels, but climate advocacy has pried open the Overton window and significantly accelerated our decarbonization efforts. In April 2019, the young climate group Extinction Rebellion (XR) brought London to a halt, demanding the UK target net-zero carbon emissions by 2025. Their controversial civil disobedience prompted parliament to declare a climate emergency and the Labour party to adopt a 2030 target to decarbonize the UK's electricity production.

The Sierra Club's Beyond Coal campaign was lesser-known but wildly effective. In just its first five years, the campaign helped shutter more than one-third of US coal plants. Thanks primarily to its move from coal, US per capita carbon emissions are now lower than they were in 1913.

In many ways, the challenge of regulating efforts to build AGI is much smaller than that of decarbonizing. Eighty-two percent of global energy production comes from fossil fuels. Energy is what makes civilization work, but we're not dependent on a hypothetical AGI to make the world go round. Further, slowing and guiding the development of future systems doesn't mean we'd need to stop using existing systems or developing specialist AIs to tackle important problems in medicine, climate and elsewhere.

It's obvious why so many capitalists are AI enthusiasts: they foresee a technology that can achieve their long-time dream of cutting workers out of the loop (and the balance sheet). But governments are not profit maximizers.
Sure, they care about economic growth, but they also care about things like employment, social stability, market concentration, and, occasionally, democracy. It's far less clear how AGI would affect these domains overall. Governments aren't prepared for a world where most people are technologically unemployed.

Capitalists often get what they want, particularly in recent decades, and the boundless pursuit of profit may undermine any regulatory effort to slow the speed of AI development. But capitalists don't always get what they want. At a bar in San Francisco in February, a longtime OpenAI safety researcher pronounced to a group that the e/accs shouldn't be worried about the 'extreme' AI safety people, because they'll never have power. The boosters should actually be afraid of AOC and Senator Josh Hawley because they 'can really fuck things up for you'.

Assuming humans stick around for many millennia, there's no way to know we won't eventually build AGI. But this isn't really what the inevitabilists are saying. Instead, the message tends to be: AGI is imminent. Resistance is futile. But whether we build AGI in five, 20 or 100 years really matters. And the timeline is far more in our control than the boosters will admit. Deep down, I suspect many of them realize this, which is why they spend so much effort trying to convince others that there's no point in trying. Besides, if you think AGI is inevitable, why bother convincing anybody?

We had the computing power required to train GPT-2 more than a decade before OpenAI actually did it, but people didn't know whether it was worth doing. But right now, the top AI labs are locked in such a fierce race that they aren't implementing all the precautions that even their own safety teams want. (One OpenAI employee announced recently that he quit 'due to losing confidence that it would behave responsibly around the time of AGI'.)
There's a 'safety tax' that labs can't afford to pay if they hope to stay competitive; testing slows product releases and consumes company resources. Governments, on the other hand, aren't subject to the same financial pressures.

An inevitabilist tech entrepreneur recently said regulating AI development is impossible 'unless you control every line of written code'. That might be true if anyone could spin up an AGI on their laptop. But it turns out that building advanced, general AI models requires enormous arrays of supercomputers, with chips produced by an absurdly monopolistic industry. Because of this, many AI safety advocates see 'compute governance' as a promising approach. Governments could compel cloud computing providers to halt next-generation training runs that don't comply with established guardrails. Far from locking out upstarts or requiring Orwellian levels of surveillance, thresholds could be chosen to only affect players who can afford to spend more than $100m on a single training run.

Governments do have to worry about international competition and the risk of unilateral disarmament, so to speak. But international treaties can be negotiated to widely share the benefits from cutting-edge AI systems while ensuring that labs aren't blindly scaling up systems they don't understand. And while the world may feel fractious, rival nations have cooperated to surprising degrees. The Montreal Protocol fixed the ozone layer by banning chlorofluorocarbons. Most of the world has agreed to ethically motivated bans on militarily useful weapons, such as biological and chemical weapons, blinding laser weapons, and 'weather warfare'.

In the 1960s and 70s, many analysts feared that every country that could build nukes would. But most of the world's roughly three-dozen nuclear programs were abandoned. This wasn't the result of happenstance, but rather the creation of a global nonproliferation norm through deliberate statecraft, like the 1968 Non-Proliferation Treaty.
On the few occasions when Americans were asked if they wanted superhuman AI, large majorities said 'no'. Opposition to AI has grown as the technology has become more prevalent. When people argue that AGI is inevitable, what they're really saying is that the popular will shouldn't matter. The boosters see the masses as provincial neo-Luddites who don't know what's good for them. That's why inevitability holds such rhetorical allure for them; it lets them avoid making their real argument, which they know is a loser in the court of public opinion.

The draw of AGI is strong. But the risks involved are potentially civilization-ending. A civilization-scale effort is needed to compel the necessary powers to resist it. Technology happens because people make it happen. We can choose otherwise.

Garrison Lovely is a freelance journalist


Geeky Gadgets
6 hours ago
Kimi K2 Agent Researcher for Deep Reasoning Research Tasks
What if you could delegate your most complex research tasks to an AI that not only understands the intricacies of your work but also evolves with every challenge it faces? Enter the Kimi K2 Agent Researcher, a new single-agent system designed to redefine how we approach deep reasoning and long-term problem-solving. Unlike traditional tools that falter under the weight of extended tasks or lose focus in the noise of irrelevant data, the Kimi K2 thrives in complexity, offering precision, adaptability and efficiency. Imagine a system that can sift through hundreds of sources, refine hypotheses on the fly and deliver actionable insights, all while maintaining a sharp focus on your objectives.

Prompt Engineering explores the potential of the Kimi K2 Agent Researcher, delving into functionalities such as iterative hypothesis refinement, real-time internal search and automated coding. You'll discover how its single-agent architecture eliminates inefficiencies common in multi-agent systems, ensuring consistency and clarity even in the most demanding research environments. Whether you're a data scientist navigating complex datasets or an academic pushing the boundaries of your field, the Kimi K2 promises to elevate your research process. But how does it compare to other AI models, and what makes its design suited to global, multilingual challenges?

Core Features of the Kimi K2 Agent Researcher

At the foundation of the Kimi K2 Agent Researcher lies its ability to handle complex research tasks with exceptional accuracy and efficiency.
Its single-agent architecture incorporates three primary tools that work in tandem to optimize performance:

- Real-time internal search: swiftly retrieves relevant information from internal datasets, ensuring rapid access to critical data and minimizing delays in research workflows.
- Text-based browser: conducts extensive web-based research, exploring up to 200 URLs per task for comprehensive data collection from diverse online sources.
- Automated coding tool: generates and refines code, supporting the technical aspects of research and streamlining processes that would otherwise require significant manual effort.

By combining these tools, the system synthesizes information from multiple sources, delivering thorough analyses and highly accurate results. This integration ensures that users can rely on the system for both breadth and depth in their research endeavors.

Training Methodology and Advanced Functionalities

The Kimi K2 Agent Researcher is trained using an end-to-end reinforcement learning approach, allowing it to refine its strategies through iterative trial and error. This training methodology underpins several advanced functionalities that set the system apart:

- Iterative hypothesis refinement: the system evaluates conflicting information, adjusts hypotheses, and self-corrects to enhance the accuracy of its conclusions.
- Information validation: verifies the reliability and accuracy of data before presenting results, ensuring that conclusions are based on credible sources.
- Context management: retains relevant information and filters out irrelevant data, maintaining clarity and focus during extended research tasks.

These capabilities make the Kimi K2 Agent Researcher particularly effective for scenarios requiring deep reasoning, such as scientific research, data analysis, and solving complex problems. Its ability to adapt and refine its approach ensures consistent performance, even in dynamic or uncertain research environments.

Performance Metrics and Comparative Benchmarks

The Kimi K2 Agent Researcher delivers impressive performance metrics, particularly in tasks requiring deep reasoning and extended focus. It can execute up to 23 reasoning steps within a single task and supports up to 50 iterations without experiencing 'context rot', a common issue in prolonged tasks where systems lose track of relevant information. This resilience ensures that the system maintains accuracy and coherence, even in demanding scenarios.

While the Kimi K2 Agent Researcher outperforms most comparable models in terms of versatility and integration, it falls slightly behind the Grok 4 model in specific benchmarks. However, its ability to incorporate diverse data sources, including Chinese web links, gives it a distinct advantage for global research applications. This feature broadens its utility for users who require access to multilingual or region-specific data.
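The article describes the behavior (a single agent looping through tool calls, keeping only relevant context, and stopping within an iteration cap) but not Kimi K2's actual code. The toy sketch below illustrates that control flow under stated assumptions: all tool methods and the relevance filter are stubs invented here for illustration, and the stopping rule is a stand-in.

```python
from dataclasses import dataclass, field

# Illustrative caps echoing the figures reported above.
MAX_ITERATIONS = 50       # iterations supported without 'context rot'
MAX_REASONING_STEPS = 23  # reasoning steps within a single task

@dataclass
class SingleAgentResearcher:
    """Toy single-agent loop: one agent owns the whole context and
    dispatches to its three tools itself, rather than splitting work
    across specialized agents."""
    context: list = field(default_factory=list)

    # Stub tools; a real system would back these with a search index,
    # a text-based browser, and a code sandbox.
    def internal_search(self, query: str) -> str:
        return f"dataset results for {query!r}"

    def browse(self, url: str) -> str:
        return f"page text fetched from {url}"

    def run_code(self, source: str) -> str:
        return f"output of running {len(source)} chars of code"

    def is_relevant(self, observation: str, goal: str) -> bool:
        # Naive context-management filter: keep only observations
        # that mention the research goal.
        return goal in observation

    def research(self, goal: str, needed: int = 3) -> list:
        for step in range(MAX_ITERATIONS):
            observation = self.internal_search(f"{goal} (iteration {step})")
            if self.is_relevant(observation, goal):
                self.context.append(observation)  # retain useful findings
            if len(self.context) >= needed:  # stand-in stopping rule
                break
        return self.context

agent = SingleAgentResearcher()
findings = agent.research("cuneiform decipherment")
print(len(findings))  # → 3
```

The point of the single-agent shape is visible even in this stub: because one object holds `context`, there is no hand-off step where a coordinating agent could drop or garble another agent's findings.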
Single-Agent Design and Its Advantages

Unlike multi-agent systems, which distribute tasks among specialized agents, the Kimi K2 Agent Researcher employs a holistic single-agent design. This approach simplifies coordination and enhances the system's ability to manage large observation contexts. By focusing on a unified problem-solving strategy, the system reduces redundancy and ensures a streamlined research process. The single-agent architecture also allows for greater consistency in reasoning and decision-making. It eliminates the potential for miscommunication or inefficiencies that can arise in multi-agent setups, making it particularly well-suited for tasks that require sustained focus and comprehensive analysis.

API Hosting Options and User Accessibility

The Kimi K2 Agent Researcher offers flexible API hosting options, allowing users to select configurations that align with their specific needs and budgets. These options include variations in quantization levels, token processing speeds, and pricing structures, ensuring that the system can accommodate a wide range of use cases. Beyond its technical capabilities, the system enhances user accessibility through its reporting and visualization features. It generates detailed reports and interactive websites to summarize findings, simplifying the interpretation and application of results. This functionality is particularly valuable for professionals who need to present their research in a clear and actionable format. Additionally, the system provides a limited number of free searches per month, allowing users to explore its capabilities before committing to a subscription. Its balanced interaction style ensures that information is delivered accurately and without unnecessary bias, fostering a productive and engaging research experience.

Why the Kimi K2 Agent Researcher Stands Out

The Kimi K2 Agent Researcher distinguishes itself as a powerful tool for addressing complex research challenges.
Its advanced reasoning capabilities, rigorous validation processes, and robust context management make it a reliable choice for professionals seeking precision and adaptability. Whether you are conducting academic research, analyzing large datasets, or exploring new hypotheses, this single-agent system provides the tools and efficiency necessary to achieve your objectives with confidence. By combining innovative technology with user-centric design, the Kimi K2 Agent Researcher offers a comprehensive solution for modern research needs. Its ability to integrate diverse data sources, adapt to evolving tasks, and deliver actionable insights ensures that it remains a valuable resource for professionals across industries.

Media Credit: Prompt Engineering


The Independent
9 hours ago
New tool could revolutionise skin cancer diagnosis
A groundbreaking artificial intelligence tool, developed by PhD student Tess Watt at Heriot-Watt University in Edinburgh, aims to revolutionise skin cancer diagnosis. The system allows patients to photograph skin complaints using a camera attached to an inexpensive Raspberry Pi device, which then analyses the image against a vast dataset for real-time diagnosis. Designed for early detection, the tool is intended to provide rapid assessments globally, particularly in remote regions, without requiring direct access to dermatologists or internet connectivity. The research team reports the tool is up to 85 per cent accurate, with ongoing efforts to enhance its diagnostic capabilities by accessing more skin lesion datasets. Discussions are underway with NHS Scotland for ethical approval, with a pilot project anticipated within the next one to two years, aiming for eventual widespread implementation.
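The system's actual model and code are not public; what the report does describe is the triage flow: capture an image on the Raspberry Pi, classify it entirely on-device (so no internet is needed), and flag concerning or uncertain results for clinical follow-up. The sketch below illustrates that flow under stated assumptions: the classifier is a stub standing in for a trained model, and the referral threshold is an illustrative number echoing the reported "up to 85 per cent" accuracy figure, not a parameter from the research.

```python
# Hypothetical sketch of an offline skin-lesion triage flow: classify
# locally, then refer anything suspicious or low-confidence. The
# brightness-based "classifier" is a stub for a trained model.

REFERRAL_THRESHOLD = 0.85  # illustrative cut-off, not a published value

def classify(image_pixels):
    """Stub classifier returning (label, confidence).
    A deployed device would run a trained model on the photo itself,
    which is why no internet connection is required."""
    brightness = sum(image_pixels) / len(image_pixels)
    if brightness < 0.4:  # toy proxy for a dark, irregular lesion
        return "suspicious", 0.9
    return "benign", 0.7

def triage(image_pixels):
    label, confidence = classify(image_pixels)
    # Refer anything suspicious, and anything the model is unsure about:
    # a screening tool should err toward sending patients to a clinician.
    refer = label == "suspicious" or confidence < REFERRAL_THRESHOLD
    return {"label": label, "confidence": confidence, "refer": refer}

result = triage([0.2, 0.3, 0.25, 0.35])
print(result["label"], result["refer"])  # → suspicious True
```

Note the design choice the second condition encodes: even a "benign" call gets referred when confidence is below the threshold, so the tool's mistakes skew toward unnecessary referrals rather than missed cancers.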