
AI in Schools Would 'Dehumanise' Classroom Interactions, Education Specialist Warns
Christopher McGovern, chairman of the Campaign for Real Education (CRE), told The Epoch Times that educators tend to embrace technology because they see it as an improvement; however, they have not fully considered the implications of AI-enhanced education.
Some of these concerns involve how it would reduce the elements of human interaction that are integral to the learning experience.
'AI dehumanises the traditional classroom interaction between a teacher and the children, but also between the children themselves. That's all taken away,' McGovern said.
McGovern, a retired head teacher and former adviser to the policy unit at 10 Downing Street, made the comments in the context of the education sector exploring the ways in which AI can aid pupils in the classroom and teachers with administration.
The Ada Lovelace Institute (ALI), a research centre which aims to ensure that technology works for the benefit of society, has also raised concerns about how AI is being introduced into education.
David Game College, a private school in London, has already trialled a 'teacherless' classroom in which GCSE pupils are taught by AI platforms rather than a subject teacher.
Children Could Reject AI
Younger generations, who have grown up in a world of technology, would reasonably be expected to be the most open to AI taking over the classroom.
But according to the ALI, that is not necessarily going to be the case.
'The importance of the pupil-teacher relationship matters as much to the pupil as it does to the teacher,' the think tank observed.
Similarly, teachers who were invited by the DfE to test a proof-of-concept AI marking tool stressed that pupils want their work to be read by a person.
'[Pupils] want you to read their work. They want you to know and understand who they are as an individual. They want to impress you often. They want to interest you in who they are,' one secondary school teacher said in feedback to the department.
Tech Overload
McGovern said he does recognise that AI can be used constructively in certain situations and has the capacity to match learning tasks to the individual needs of pupils.
However, he said that if schools are going to introduce AI into a classroom, the use of technology needs to be reduced elsewhere.
The educator warned that AI would contribute to the 'massive overload' of technology that is already impacting children, not least since smartphones and social media have become such a prominent part of young people's lives.
'It's an overdose of AI which is going to be the problem. As we are going further along the path overdosing our children, they become increasingly addicted to their screens,' he said, adding that it could be a further detriment to children's mental health.
Teachers Already Using AI
Despite there being few education-specific AI tools available, teachers are already using generic AI products such as ChatGPT for administrative tasks.
In 2023, 42 percent of teachers in England reported having used generative AI to help with their work.
File photo of a maths exam in progress at Pittville High School, Cheltenham, England, on March 2, 2012.
David Davies/PA Wire
ALI has pointed out that using generic products comes with its own problems, including generating content that is not age-appropriate or relevant to the curriculum. AI can also 'hallucinate,' producing inaccurate outputs that it presents as facts.
The DfE has published guidance on the use of generative AI in education.
Schools can also set their own rules on AI use—including whether and how pupils can use it—as long as they follow legal requirements around data protection, child safety, and intellectual property.
The DfE is already supporting the development of AI tools for teachers.
Concerns Over Cheating
Last month, a survey of school support staff who belong to the GMB union highlighted concerns about pupils using AI to cheat.
Cheating is not a new phenomenon, but educators have said that generative AI has made it much easier for children to do so, particularly in unsupervised assignments such as coursework.
Education specialist Tom Richmond told The Epoch Times, 'Coursework was already recognised as an unreliable form of assessment well before ChatGPT came along, but it is now abundantly clear that unsupervised assignments cannot be treated as a fair and trustworthy form of assessment.'
Richmond, the former director of the EDSK think tank, said that it is not possible to say with certainty how many children are using AI to cheat, as there are no reliable detection tools available to schools and colleges.
He added, 'No form of assessment is immune to cheating, but some assessments are much harder to manipulate than others.'
'The most obvious way to reduce cheating is for schools to change the types of tasks and assessments that they set for pupils. Any task and assessment completed at home without supervision is now wide open to cheating, so schools can switch to more in-class assessments to prevent cheating,' he said.
File photo of Education Secretary Bridget Phillipson, taken on Feb. 3, 2025.
Lucy North/PA Wire
An EDSK report from 2023 likewise recommended moving away from unsupervised coursework in high-stakes assessments.
£1 Million for EdTech
The government has a wider strategy to advance the use of AI, including in education.
On Monday, Education Secretary Bridget Phillipson announced £1 million in funding to support EdTech companies developing AI tools for schools.
In her speech at the Education World Forum, she confirmed that the department's new Content Store Project will see curriculum guidance, teaching resources, lesson plans, and anonymised pupils' work made available to AI companies to train their tools 'so that they can generate top quality content for use in our classrooms.'
However, she emphasised that EdTech 'can't replace great teachers' and that 'AI isn't a magic wand.'
She also said the DfE will be working closely with international partners to develop global guidelines for generative AI in education, in order to shape 'the global consensus on how generative AI can be deployed safely and effectively to boost education around the world.'
The UK will host an international summit on generative AI in education in 2026.
