
Why we must keep humans at the heart of AI in warfare
The International Committee of the Red Cross (ICRC) has stressed that responsibility in warfare must remain with humans. 'Human control must be maintained,' it argues, and limits on autonomy urgently established 'to ensure compliance with international law and to satisfy ethical concerns'.
In 2022, the Ministry of Defence (MoD) itself echoed this sentiment. It stated that only human soldiers 'can make instinctive decisions on the ground in a conflict zone; improvise on rescue missions during natural disasters; or offer empathy and sympathy.' The then Defence Secretary, Ben Wallace, added that 'at its heart, our Army relies on the judgment of its own individuals.'
A recruitment campaign at the time carried the tagline: 'Technology will help us do incredible things. But nothing can do what a soldier can do.'
Colonel Nick Mackenzie, then Assistant Director for Recruitment, highlighted that while 'technology is really, really important… there is always somebody, a person, behind that technology' who is ultimately responsible for its use and the decisions it enables.
Since then, however, the use of AI-enabled rapid target identification systems in contemporary conflicts has grown rapidly, with notable examples including Lavender and Where's Daddy (Israel/Palestine) and Saker and Wolly (Russia/Ukraine). A human is generally still required to authorise any lethal engagement, but capabilities are already being developed to remove human input from the targeting process altogether.
Against this backdrop, the MoD's Strategic Defence Review 2025, released last month, calls for 'greater use of autonomy and Artificial Intelligence within the UK's conventional forces'. 'As in Ukraine,' the Review continues, 'this would provide greater accuracy, lethality, and cheaper capabilities – changing the economics of defence.'
One example is Project ASGARD, which will use AI as a 'force multiplier' to help the Army locate and strike enemy targets at greater distances. It is one of more than 400 AI-related projects being run by the MoD.
What remains unclear, but is critical from a legal and moral perspective, is what role human judgment will play in these projects and the military operations they support.
Computer scientist Pei Wang has argued that while AI can behave like human intelligence in some ways, it is fundamentally different. On this view, AI should not replace human intelligence but support and enhance it, helping people make better-informed decisions.
Human-robot interaction specialist Karolina Zawieska warns of the need to distinguish between what is human and what is only human-like. AI systems often function as a 'black box', meaning it is not always clear how or why they produce certain outcomes. This creates serious problems for human understanding, control, and accountability.
When properly used, AI can support situational awareness and help human operators make better decisions. In this sense, it is a tool, not a decision-maker. But if too much control is handed over to AI, we risk removing human judgment and, with it, moral responsibility.
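To make that structural point concrete, here is a minimal, purely illustrative sketch in Python of a 'human-in-the-loop' gate. It is not a description of any real military or MoD system, and every name in it is hypothetical; it simply shows the shape of an arrangement in which software may recommend, but only an explicit, logged human decision can authorise action.

```python
# Illustrative sketch only: a hypothetical human-in-the-loop gate.
# It shows the structural point made in the text: the model may *recommend*,
# but only a logged human decision can authorise any action.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class Recommendation:
    object_id: str      # hypothetical identifier for a flagged object
    confidence: float   # model confidence, 0.0 to 1.0
    rationale: str      # human-readable summary of why it was flagged


def human_review(rec: Recommendation, reviewer: str) -> bool:
    """The decision itself is made by a person; the system only records it."""
    print(f"[{reviewer}] reviewing {rec.object_id} "
          f"(confidence {rec.confidence:.2f}): {rec.rationale}")
    answer = input("Authorise? Type exactly 'AUTHORISE' to proceed: ")
    return answer.strip() == "AUTHORISE"


def decide(rec: Recommendation, reviewer: str, audit_log: list) -> bool:
    """No branch of this function can approve anything without a human."""
    approved = human_review(rec, reviewer)
    audit_log.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "object": rec.object_id,
        "model_confidence": rec.confidence,
        "reviewer": reviewer,
        "approved": approved,   # accountability stays attached to a named person
    })
    return approved
```

The design point is that authorisation and accountability are structural: they live in the human step and the audit record, not in the model's confidence score.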
Professor Jeff McMahan, a moral philosopher at the Oxford Institute for Ethics, Law and Armed Conflict, has argued that it is essential for combatants to feel 'deep inhibitions about tackling non-combatants'.
However accurate or efficient AI may be, these inhibitions cannot be replicated by algorithms. As political scientist Valerie Morkevičius has pointed out, the emotional and moral 'messiness' of war is a feature, not a flaw, because it slows down violence and prompts ethical reflection. Military decisions should be difficult. This is why human judgment must remain at the centre.
While defence and national security are reserved for Westminster, Scotland plays a key role in UK defence, from the bases at Faslane and Lossiemouth to the defence research carried out at Scottish universities. The issues raised in the Strategic Defence Review therefore carry particular relevance here.
UN Secretary General António Guterres has recommended that 'a legally binding instrument' to prohibit and/or regulate AI weapons be concluded by 2026 (Image: Getty)
Scotland's approach to AI, shaped by the AI Strategy (2021) and the Scottish AI Playbook (2024), is notably human-centred. Informed by the Organisation for Economic Co-operation and Development's (OECD) principles, both documents stress the importance of trustworthy, ethical, and inclusive AI that improves people's lives. They highlight the need for transparency, human control, and robust accountability.
Though not military in scope, these principles nevertheless offer a useful framework for a Scottish perspective on the development and use of AI for military purposes: keeping people at the centre, and ensuring that technology supports rather than replaces human agency.
The goal should not be the delegation of human decisions to machines, or the replacement of human beings with technology. Rather, AI should support and strengthen human decision-making – a tool for the enactment of human agency: a technological means for strictly human ends.
Dr Joanna LD Wilson is a Lecturer in Law at the University of the West of Scotland