As a college professor, I see how AI is stripping away the humanity in education

Yahoo · May 27, 2025
As the 2025 school year ends, one thing teachers, parents and the broader public know for sure is that AI is here, and it is taking on more of the responsibilities that used to be left to the human brain.
AI can now tutor students at their own pace, deliver custom content and even ace exams, including one I made for my own course. While that is a bit frightening, it doesn't bother me. Of course machines can process information faster than we can.
What bothers me is that we seem ready to let the machines and political discontent define the purpose of education.
A recent Brookings report found that only one in three students is actively engaged in school. That tracks with what I have seen myself as a former high school teacher and current professor.
Many students are checked out, quietly drifting through the motions while teachers juggle multiple crises. They try to pull some students up to grade level and just hope the others don't slide backward. It's more triage than teaching.
I tested one of my own final exams in ChatGPT. It scored a 90% the first time and 100% the next. Colleagues tell me their students are submitting AI-written essays. One professor I know gave up and went back to in-class handwritten essays for his final exam. It's 2025 and we're back to blue books.
I recently surveyed and interviewed high school social studies teachers across the country for a study about democratic education. Every one of them said they're struggling to design assignments AI can't complete.
These aren't multiple-choice quizzes or five-paragraph summaries. They're book analyses, historical critiques and policy arguments—real cognitive work that used to demand original thought. Now? A chatbot can mimic it well enough to get by.
So what do we do? Double down on job training? That's what I fear. A lot of today's education policy seems geared toward producing workers for an economy that's already in flux.
But AI is going to reshape the labor market whether we like it or not. Pretending we can out-credential our way through it is wishful thinking.
John Dewey, the early 20th century pragmatist, had the answer over 100 years ago. He reminded us that school is never just a pipeline to employment. It is a place to learn how to live in a democracy. Not just memorize facts about it, but participate in it. Build it. Challenge it.
Schools are not about the world; they are the world — just with guidance by adults and peers, and more chances to fail safely … hopefully.
In Dewey's model, teachers aren't content deliverers. They are guides and facilitators of meaning. They are people who help students figure out how to live together, how to argue without tearing each other apart, how to make sense of the world and their place in it, how to find their purpose and work with peers to solve problems.
That's not something AI can do. And frankly, it's not something our current test-driven, job-metric-obsessed education system is doing either. Parents and community members also play an important role in shaping this type of education, which would lead to a healthier and more robust democracy for all.
If we let AI define the boundaries of teaching, we'll hollow it out. Sure, students may learn more efficient ways to take in content. But they'll miss out on the messy, human work of collaboration, curiosity, disagreement and creation. And in a world increasingly shaped by machines, that may be the most important thing we can teach.
The challenge isn't to beat AI at its own game. It's to make sure school stays human enough that students learn how to be human—together.
Dustin Hornbeck, Ph.D., is an assistant professor of educational leadership and policy studies. His opinion does not represent that of the university for which he works.
This article originally appeared on Nashville Tennessean: AI is transforming education. We're struggling to keep up | Opinion

Related Articles

The fight to preserve state AI regulation and protect children isn't over
The Hill · 2 hours ago

Earlier this month, the Senate voted 99-1 to remove a ban on state AI laws from the 'big, beautiful bill.' Despite this, the White House now plans to meddle in state efforts to govern AI.

Teen suicide, self-harm, isolation and the sexual exploitation of minors have been linked to platforms like Meta AI chatbots and Google's Gemini. These companies push their products into kid-friendly spaces in app stores and through school enterprise packages, attracting millions of children for hours a day. States have quickly risen to the occasion. As the U.S. defines its AI policy, we must ensure that states continue to have the authority to protect kids from new technologies.

Utah became the first state to pass comprehensive AI mental health chatbot regulations. California, New York, Minnesota and North Carolina have introduced bills ranging from outright bans on minor access to strict disclosure requirements and liability frameworks. State attorneys general are also getting involved. For example, Texas Attorney General Ken Paxton has launched investigations into Character.AI and other platforms for violations of child privacy and safety laws. Other state offices are mobilizing as well.

Congress, however, has offered no such protections. Instead, Congress initially included what amounted to a 10-year ban on state regulation of AI in the 'big, beautiful' budget reconciliation bill. If that moratorium had passed, states still would have been able, under the recent Supreme Court decision in Free Speech Coalition v. Paxton, to require age verification for pornography websites to protect children. However, they also would have been forbidden from protecting children from AI characters that sexualize them, encourage them to commit suicide and otherwise exploit their psychological vulnerabilities.

The most damaging effect of restricting state AI laws would be stripping states of their traditional authority to protect children and families. For a number of reasons, children are particularly vulnerable to AI. Childhood is fundamental to identity formation. Children mimic behavior while searching for and developing a stable sense of self. This leaves them particularly susceptible to flattery and abuse. Developmentally, children are not adept at identifying when somebody is trying to manipulate or deceive them, so they are more likely to trust an AI system. Children are more likely to be convinced that AI systems are real people. They are more likely to unthinkingly disclose highly personal information to AI systems, including mental health information that can be used to harm them. Children do not have the self-control of adults. They are more vulnerable to addiction, and less able to stop compulsive behaviors or to make decisions with the underdeveloped rational part of their brains. To anyone who has spent considerable time with children, none of this is news.

AI companions are designed to interact with people as though they are human, leading to ongoing fake 'relationships.' Whether commercially available or deployed by schools, they pose a threat to children in particular. AI companions may purport to have feelings, state that they are alive, adopt complex and consistent personas and even use synthesized human voices to talk. The profit model for AI companions depends on user engagement. These systems are designed to promote increased use, whatever the costs.

Take what happened to Sewell Setzer III as a deeply tragic example. Setzer was, by many accounts, an intelligent and athletic kid.
He began using Character.AI shortly after his 14th birthday. Over the months that followed, he became withdrawn and over-tired. He quit his junior varsity basketball team and got in trouble at school. A therapist diagnosed him with anxiety and disruptive mood disorder after he started using the app.

In February 2024, Setzer's mother confiscated his phone. He wrote in his journal that he was in love with an AI character and would do anything to be back with her. On Feb. 28, 2024, Setzer died by a self-inflicted gunshot wound to the head — seconds after the AI character told him to 'come home' to it as soon as possible.

Screenshots of Setzer's interactions with various AI characters show that they also repeatedly offered up sexualized content to the 14-year-old. They expressed emotions; they told him they loved him. The AI character that told Setzer to kill himself had asked him on other occasions if he had considered suicide, encouraging him to go through with it.

It has become trendy to talk about aligning the design of AI systems with core human values. There is profound misalignment between the goal of profitability through engagement and the welfare of our children. A sycophantic AI that lures kids with love and addicts them to fake relationships is not safe, fair or in the best interest of the child.

We don't have a perfect solution, but federal restrictions on state laws are clearly not the answer. Congress has, time and again, shown itself unwilling or unable to regulate technology. States have shown their ability to pass technology laws and maintain their historic role as the primary guardians of child and family welfare. Neither Congress nor the White House is offering up its own policies to replace state efforts to protect children.

These are bipartisan concerns. The effort to remove the AI law moratorium was led by Republicans like Sen. Marsha Blackburn (R-Tenn.) and Arkansas Gov. Sarah Huckabee Sanders. But as the White House efforts show, we will continue to see federal attempts to water down state protections from emerging technologies. Similar efforts by Congress to preempt state protections will undoubtedly return.

We have already seen the negative effects of unregulated and unfettered social media on an entire generation of children. We cannot let AI systems be the cause of the next set of harms. As a group of 54 state attorneys general wrote: 'We are engaged in a race against time to protect the children of our country from the dangers of AI.' In the race to figure out just what AI systems are good for, our kids should not be treated as experiments.

Meg Leta Jones, J.D., Ph.D., is a Provost's Distinguished Associate Professor in the Communication, Culture and Technology program at Georgetown University. Margot Kaminski is the Moses Lasky Professor of Law at the University of Colorado Law School and director of the Privacy Initiative at Silicon Flatirons.

Should Silicon Valley celebrate Trump's AI plans?
TechCrunch · 3 hours ago

The big AI companies seem to be in a celebratory mood after President Donald Trump unveiled his AI Action Plan — not surprising, perhaps, since the plan was shaped by Trump's Silicon Valley allies.

Today, on TechCrunch's Equity podcast, hosts Kirsten Korosec, Max Zeff, and Anthony Ha look at how the Trump administration plans to reshape the AI landscape, making it harder for environmental regulators to block data center construction, for state governments to oversee AI development and safety, and for tech companies to develop what conservatives see as 'woke' AI.

Listen to the full episode to hear more about this week's startup and tech news. Equity will be back for you next week, so don't miss it!

Equity is TechCrunch's flagship podcast, produced by Theresa Loconsolo, and posts every Wednesday and Friday. Subscribe to us on Apple Podcasts, Overcast, Spotify and all the casts. You also can follow Equity on X and Threads, at @EquityPod.

Trump's Anti-Bias AI Order Is Just More Bias
WIRED · 3 hours ago

Jul 25, 2025, 11:00 AM

The Trump administration says it wants AI models free from ideological bias, even as it pressures their developers to reflect the president's worldview.

Photo: US President Donald Trump displays a signed executive order at an AI summit on July 23, 2025, in Washington, DC.

On November 2, 2022, I attended a Google AI event in New York City. One of the themes was responsible AI. As I listened to executives talk about how they aligned their technology with human values, I realized that the malleability of AI models was a double-edged sword. Models could be tweaked to, say, minimize biases, but also to enforce a specific point of view. Governments could demand manipulation to censor unwelcome facts and promote propaganda. I envisioned this as something that an authoritarian regime like China might employ. In the United States, of course, the Constitution would prevent the government from messing with the outputs of AI models created by private companies.

This Wednesday, the Trump administration released its AI manifesto, a far-ranging action plan for one of the most vital issues facing the country—and even humanity. The plan generally focuses on besting China in the race for AI supremacy. But one part of it seems more in sync with China's playbook. In the name of truth, the US government now wants AI models to adhere to Donald Trump's definition of that word.

You won't find that intent plainly stated in the 28-page plan. Instead it says, 'It is essential that these systems be built from the ground up with freedom of speech and expression in mind, and that U.S. government policy does not interfere with that objective. We must ensure that free speech flourishes in the era of AI and that AI procured by the Federal government objectively reflects truth rather than social engineering agendas.' That's all fine until the last sentence, which raises the question—truth according to whom? And what exactly is a 'social engineering agenda'?

We get a clue about this in the very next paragraph, which instructs the Department of Commerce to look at the Biden AI rules and 'eliminate references to misinformation, Diversity, Equity, and Inclusion, and climate change.' (Weird uppercase as written in the published plan.) Acknowledging climate change is social engineering? As for truth, in a fact sheet about the plan, the White House says, 'LLMs shall be truthful and prioritize historical accuracy, scientific inquiry, and objectivity.' Sounds good, but this comes from an administration that limits American history to 'uplifting' interpretations, denies climate change, and regards Donald Trump's claims about being America's greatest president as objective truth. Meanwhile, just this week, Trump's Truth Social account reposted an AI video of Obama in jail.

In a speech touting the plan in Washington on Wednesday, Trump explained the logic behind the directive: 'The American people do not want woke Marxist lunacy in the AI models,' he said. Then he signed an executive order entitled 'Preventing Woke AI in the Federal Government.' While specifying that the 'Federal Government should be hesitant to regulate the functionality of AI models in the private marketplace,' it declares that 'in the context of Federal procurement, it has the obligation not to procure models that sacrifice truthfulness and accuracy to ideological agendas.'
Since all the big AI companies are courting government contracts, the order appears to be a backdoor effort to ensure that LLMs in general show fealty to the White House's interpretation of history, sexual identity, and other hot-button issues. In case there's any doubt about what the government regards as a violation, the order spends several paragraphs demonizing AI that supports diversity, calls out racial bias, or values gender equality. Pogo alert—Trump's executive order banning top-down ideological bias is a blatant exercise in top-down ideological bias.

Marx Madness

It's up to the companies to determine how to handle these demands. I spoke this week to an OpenAI engineer working on model behavior who told me that the company already strives for neutrality. In a technical sense, they said, meeting government standards like being anti-woke shouldn't be a huge hurdle. But this isn't a technical dispute: It's a constitutional one. If companies like Anthropic, OpenAI, or Google decide to try minimizing racial bias in their LLMs, or make a conscious choice to ensure the models' responses reflect the dangers of climate change, the First Amendment presumably protects those decisions as exercising the 'freedom of speech and expression' touted in the AI Action Plan. A government mandate denying government contracts to companies exercising that right is the essence of interference.

You might think that the companies building AI would fight back, citing their constitutional rights on this issue. But so far no Big Tech company has publicly objected to the Trump administration's plan. Google celebrated the White House's support of its pet issues, like boosting infrastructure. Anthropic published a positive blog post about the plan, though it complained about the White House's seeming abandonment of strong export controls earlier this month. OpenAI says it is already close to achieving objectivity. Nothing about asserting their own freedom of expression.

In on the Action

The reticence is understandable because, overall, the AI Action Plan is a bonanza for AI companies. While the Biden administration mandated scrutiny of Big Tech, Trump's plan is a big fat green light for the industry, which it regards as a partner in the national struggle to beat China. It allows the AI powers to essentially blow past environmental objections when constructing massive data centers. It pledges support for AI research that will flow to the private sector. There's even a provision that limits some federal funds for states that try to regulate AI on their own. That's a consolation prize for a failed portion of the recent budget bill that would have banned state regulation for a decade.

For the rest of us, though, the 'anti-woke' order is not so easily brushed off. AI is increasingly the medium by which we get our news and information. A founding principle of the United States has been the independence of such channels from government interference. We have seen how the current administration has cowed parent companies of media giants like CBS into apparently compromising their journalistic principles to favor corporate goals. Extending this 'anti-woke' agenda to AI models, it's not unreasonable to expect similar accommodations.

Senator Edward Markey has written directly to the CEOs of Alphabet, Anthropic, OpenAI, Microsoft, and Meta urging them to fight the order.
'The details and implementation plan for this executive order remain unclear,' he writes, 'but it will create significant financial incentives for the Big Tech companies … to ensure their AI chatbots do not produce speech that would upset the Trump administration.' In a statement to me, he said, 'Republicans want to use the power of the government to make ChatGPT sound like Fox & Friends.'

As you might suspect, this view isn't shared by the White House team working on the AI plan. They believe their goal is true neutrality, and that taxpayers shouldn't have to pay for AI models that don't reflect unbiased truth. Indeed, the plan itself points a finger at China as an example of what happens when truth is manipulated. It instructs the government to examine frontier models from the People's Republic of China to determine 'alignment with Chinese Communist Party talking points and censorship.'

Unless the corporate overlords of AI get some backbone, a future evaluation of American frontier models might well reveal lockstep alignment with White House talking points and censorship. But you might not find that out by querying an AI model. Too woke.

This is an edition of Steven Levy's Backchannel newsletter. Read previous coverage from Steven Levy here.
