
Pupils could gain more face-to-face time with teachers under AI plans
The Department for Education (DfE) guidance suggests AI can cut down administrative tasks – such as generating letters and reports, and planning lessons – to give teachers more time to work with pupils.
But the guidance also calls on teachers to always check outputs generated by AI for 'accuracy' and it insists that personal data should be protected.
School leaders' unions have welcomed the resources but said further investment is needed to unlock the potential benefits of AI in education.
The support materials suggest that generative AI could be used to help teachers with formative assessments – such as generating quizzes and 'offering feedback on errors' – as well as generating 'exam-style questions'.
Generative AI tools can also help staff with administrative tasks such as composing emails and letters, policy writing and planning trips, it added.
One section of the guidance demonstrates how AI could be used to generate a letter to parents and carers about a head lice outbreak at the school.
It said: 'Strategic implementation of AI can cut down administrative tasks for leaders, teachers and support staff, particularly in areas such as data analysis, lesson planning, report generation and correspondence.
'This could allow educators more time to work directly with students and pupils and help to reduce workload if implemented well.'
But educators should only use AI tools 'approved' in their setting, it added.
AI should also only be used by teachers for formative, low-stakes marking – such as classroom quizzes or homework – the DfE has said.
Paul Whiteman, general secretary at school leaders' union NAHT, said: 'These resources are a welcome source of support for education staff.
'AI has huge potential benefits for schools and children's learning but it is important that these are harnessed in the right way and any pitfalls avoided.
'Government investment in future testing and research is vital as staff need reliable sources of evaluation – supported with evidence – on the benefits, limitations and risks of AI tools and their potential uses.'
Pepe Di'Iasio, general secretary of the Association of School and College Leaders (ASCL), said: 'The great potential of AI is in easing staff workloads which are driven by system-wide pressures and are a major cause of recruitment and retention challenges.
'If we can get this right it will improve working conditions and help address teacher shortages.
'However, there are some big issues which need to be resolved and paramount is ensuring that all schools and colleges have the technology and training they need.
'Budgets are extremely tight because of the huge financial pressures on the education sector and realising the potential benefits of AI requires investment.'
The DfE has said it is investing an extra £1 million in funding to accelerate the development of AI tools to help with marking and generating detailed, tailored feedback for individual students.
Education Secretary Bridget Phillipson said: 'We're putting cutting-edge AI tools into the hands of our brilliant teachers to enhance how our children learn and develop – freeing teachers from paperwork so they can focus on what parents and pupils need most: inspiring teaching and personalised support.'
She added: 'By harnessing AI's power to cut workloads, we're revolutionising classrooms and driving high standards everywhere – breaking down barriers to opportunity so every child can achieve and thrive.'

Related Articles


Evening Standard
What's powering Jeff Bezos and the tech bros' thirst for nuclear energy?
The new technology consumes enormous amounts of energy. The data processing centres for AI require constant feeding on an epic, hitherto unimagined scale. Already, researchers at Barclays reckon AI accounts for 3.5 per cent of all US electricity output. This is set to rise to 5.5 per cent in 2027 and more than 9 per cent by 2030. That supply cannot switch off; there must not be a blackout – the recent shutdown in Spain and Portugal showed how real that risk is. To satisfy users, supply must be guaranteed 24/7, 365 days a year.

Reuters
More AI bots, less human visits on the internet
July 3 (Reuters) - This was originally published in the Artificial Intelligencer newsletter, which is issued every Wednesday. Sign up here to learn about the latest breakthroughs in AI and tech.

Professionals spend, on average, three hours a day in their inboxes. That single statistic, which Grammarly CEO Shishir Mehrotra shared with me in my exclusive story on the company's latest move, is the key to understanding its acquisition of email tool Superhuman. The vision, he explained, is to build a network of specialized AI agents that can pull data from across your private digital workflow – emails, documents, calendars – to reduce the time you spend searching for information or crafting responses.

This vision of a helpful AI agent, however, isn't just about getting to inbox zero. It's a preview of a much larger, more disruptive shift happening across the entire web. Scroll down for more on that. Do you experience this shift in your work or daily use of the internet already? Email me or follow me on LinkedIn to share any feedback, and what you want to read about next in AI.

Read our latest reporting in tech & AI:
* Exclusive-Intel's new CEO explores big shift in chip manufacturing business
* Exclusive-Scale AI's bigger rival Surge AI seeks up to $1 billion capital raise, sources say
* Grammarly to acquire email startup Superhuman in AI platform push
* Meta deepens AI push with 'Superintelligence' lab, source says

Asia is a formidable force in the AI race. Register to watch the live broadcast of the #ReutersNEXTAsia summit on July 9 to hear from executives and experts on the ground about what digital transformation looks like there.

A new internet with more AI bots than humans

For decades, the internet worked like this: Google indexed millions of web pages, ranked them and showed them on search results. We'd click through to individual websites – Reuters, the New York Times, Pinterest, Reddit, you name it.
Those sites then sold our attention to advertisers, earning ad dollars or subscription fees for producing high-quality, engaging or unique content you couldn't get anywhere else.

Now, AI companies are pitching a new way to deliver information: everything you want, inside a chat window. Imagine your chatbot answering any question by scraping info from across the web – without ever having to click back to the original source. That's what some AI companies are pitching as a more 'optimized' web experience, except that the people creating the content get left behind.

In this new online world, as envisioned by AI companies like OpenAI, navigating the web would be frictionless. Users would no longer bother with clicking links or juggling tabs. Instead, everything happens through chat, while personal AI agents do the dirty work of browsing the internet, performing tasks and making decisions – like comparing plane tickets – on your behalf. So-called 'agents' are autonomous AI tools that act on a user's instructions, fetching information and interacting with websites.

The shift is happening fast, according to Cloudflare, a content delivery network that handles about 20% of web traffic. In the past few months it started to hear complaints from publishers such as news websites about plunging referral traffic. The data pointed to one trend: more bot activity, fewer human visits, and lower ad revenue.

Bots have long been an integral part of the internet. Good bots crawl and index websites, helping them get discovered and recommended when users search for relevant services or information. Bad bots are the ones that can overwhelm websites with traffic and cause crashes. And now there is a new category: AI bots built for large language models (LLMs), which AI companies send to scrape websites, using automated programs to copy vast amounts of online information.
The volume of such bot activity has risen 125% in just six months, according to Webflow data. The first wave of AI data scraping hit books and archives. Now, there's a push for real-time access, putting content owners on the internet in the crosshairs, because chatbot users want information about both history and current events – and they want it to be accurate, without hallucinations.

This demand has sparked a wave of partnerships and lawsuits between AI companies and media companies. OpenAI is signing on more news sources, while Perplexity is trying to build out a publisher program that was met with little fanfare. Reddit sued Anthropic over data scraping, even as it inked a $60 million deal with Google to license its content. AI companies argue that web crawling isn't illegal. They say they're optimizing the user experience, and that they'll try to offer links to the original sources when they aggregate information.

Website owners are experimenting, too. Cloudflare's 'block or pay' crawler model, launched Tuesday, has already gained support from dozens of websites, from Condé Nast to Reddit. It's a novel attempt to charge for the use of content per crawl, although it's too early to tell whether publishers would be made whole for the loss of human visitors.

Chart of the week

Data from Cloudflare reveals how drastically the web has shifted in just six months. The number of pages crawled per visitor referred has risen sharply – especially among AI companies. Anthropic now sends its bot to scrape 60,000 times for every single visitor it refers back to a website. For site owners who monetize human attention, this presents real challenges. And for those hoping to have their brands or services featured in AI chatbot responses, there's growing pressure to build "bot-friendly" websites – optimized not for humans, but for machines, according to Webflow CEO Linda Tong.
What AI researchers are reading

A study from MIT Media Lab, 'Your Brain on ChatGPT,' digs into what really happens in our heads when we write essays using large language models (LLMs) like ChatGPT, search engines like Google, or just our own brainpower. The research team recruited university students and split them into three groups: one could only use ChatGPT, another used traditional search engines like Google (no AI answers allowed), and a third had to rely on memory alone.

The findings are striking. Writing without any digital tools led to the strongest and most widespread brain connectivity, especially in regions associated with memory, creativity and executive function. The 'Search Engine' group showed intermediate engagement – more than the LLM group, but less than brain-only – while those using ChatGPT exhibited the weakest neural coupling. In other words, the more we outsource to AI, the less our brains are forced to work.

But the story doesn't end there. Participants who used LLMs not only showed less brain engagement but also struggled to remember or quote from their own essays just minutes after writing. They reported a weaker sense of ownership over their work, and their essays tended to be more homogeneous in style and content. In contrast, those who wrote unaided or used search engines felt more attached to their writing and were better able to recall and accurately quote what they'd written.

Interestingly, when participants switched tools – going from LLM to brain-only or vice versa – the neural patterns didn't fully reset. Prior reliance on AI seemed to leave a trace, resulting in less coordinated brain effort when writing unaided. The researchers warn that frequent LLM use may lead to an 'accumulation of cognitive debt' – a kind of atrophy of the mental muscles needed for deep engagement, memory and authentic authorship. The takeaway?
Use AI tools wisely, but don't let them do all the thinking for you – or you might find your own voice, and memory, fading into the background.

AI jargon you need to know

Imagine if every device required a unique charging cable. AI has faced a similar challenge: each external tool – like calendars or email – needed a custom-built connection, making integration slow and complex. Enter the Model Context Protocol (MCP), a standard from Anthropic that's gaining traction with major players like OpenAI, Microsoft and Google. It serves as a universal adapter for AI models, enabling seamless communication with diverse tools and data. This means AIs can better manage tasks, integrate with apps, and access real-time information. MCP is vital for the rise of autonomous AI agents because it eliminates custom integrations, paving the way for more integrated and helpful AI in our daily lives.

LLM, NLP, RLHF: What's a jargon term you'd like to see defined? Email me and I might feature the suggestion in an upcoming edition.

Finextra
BBVA to roll out GenAI tools to 100,000 employees via Google Cloud
BBVA is deepening its relationship with Google Cloud through the deployment of Google Workspace with Gemini, as part of the global bank's AI adoption strategy. The initiative will provide over 100,000 employees worldwide with secure generative AI experiences in tools like Gmail, Google Docs, Google Sheets, and more.

BBVA employees have so far taken well to AI, reporting that automating repetitive tasks saves them nearly three hours per week on average, freeing up valuable time for more strategic, customer-focused work. This data follows the bank's deployment of 11,000 ChatGPT licences from OpenAI.

Under the new agreement with Google, BBVA employees will use Gemini to help summarize, draft, and find information across emails, chats, and files; create professional documents, presentations, spreadsheets, and videos; and take notes when on calls. Beyond Google Workspace with Gemini, BBVA employees will also have access to the standalone Gemini app and NotebookLM, an AI-powered research and writing assistant, to help with tasks like research, generating audio overviews of complex findings, and creating reports.

'BBVA transformed the way we work with Google Workspace more than ten years ago,' explained Juan Ortigosa, global head of Workplace at BBVA. 'We expect that the widespread adoption of generative AI across these tools will improve productivity and the work experience of all employees, regardless of their role, fostering a more dynamic and efficient environment.'

In parallel to this AI deployment, the bank has launched a mandatory training program, 'AI Express', focused on the broader use of artificial intelligence. It provides employees with clear principles for secure and responsible AI adoption across use cases.
Ortigosa says access to Google Workspace with Gemini, the Gemini app, and NotebookLM will be granted to employees who have completed internal training programmes to ensure that teams are prepared to use generative AI tools effectively, ethically, and in line with BBVA's AI governance standards.