As a college professor, I see how AI is stripping away the humanity in education

Yahoo · 27-05-2025
As the 2025 school year ends, one thing teachers, parents and the broader public know for sure is that AI is here, and it is taking on more responsibilities that used to be left to the human brain.
AI can now tutor students at their own pace, deliver custom content and even ace exams, including one I made for my own course. While a bit frightening, that part doesn't bother me. Of course machines can process information faster than we can.
What bothers me is that we seem ready to let the machines and political discontent define the purpose of education.
A recent Brookings report found that only one in three students is actively engaged in school. That tracks with what I have seen myself as a former high school teacher and current professor.
Many students are checked out, quietly drifting through the motions while teachers juggle multiple crises. They try to pull some students up to grade level and just hope the others don't slide backward. It's more triage than teaching.
I tested one of my own final exams in ChatGPT. It scored a 90% the first time and 100% the next. Colleagues tell me their students are submitting AI-written essays. One professor I know gave up and went back to in-class handwritten essays for his final exam. It's 2025 and we're back to blue books.
I recently surveyed and interviewed high school social studies teachers across the country for a study about democratic education. Every one of them said they're struggling to design assignments AI can't complete.
These aren't multiple-choice quizzes or five-paragraph summaries. They're book analyses, historical critiques and policy arguments—real cognitive work that used to demand original thought. Now? A chatbot can mimic it well enough to get by.
So what do we do? Double down on job training? That's what I fear. A lot of today's education policy seems geared toward producing workers for an economy that's already in flux.
But AI is going to reshape the labor market whether we like it or not. Pretending we can out-credential our way through it is wishful thinking.
John Dewey, the early 20th century pragmatist, had the answer over 100 years ago. He reminded us that school is never just a pipeline to employment. It is a place to learn how to live in a democracy. Not just memorize facts about it, but participate in it. Build it. Challenge it.
Schools are not about the world; they are the world — just with guidance by adults and peers, and more chances to fail safely … hopefully.
In Dewey's model, teachers aren't content deliverers. They are guides and facilitators of meaning. They are people who help students figure out how to live together, how to argue without tearing each other apart, how to make sense of the world and their place in it, how to find their purpose and work with peers to solve problems.
That's not something AI can do. And frankly, it's not something our current test-driven, job-metric-obsessed education system is doing either. Parents and community members also play an important role in shaping this type of education, which would lead to a healthier and more robust democracy for all.
If we let AI define the boundaries of teaching, we'll hollow it out. Sure, students may learn more efficient ways to take in content. But they'll miss out on the messy, human work of collaboration, curiosity, disagreement and creation. And in a world increasingly shaped by machines, that may be the most important thing we can teach.
The challenge isn't to beat AI at its own game. It's to make sure school stays human enough that students learn how to be human—together.
Dustin Hornbeck, Ph.D., is an assistant professor of educational leadership and policy studies. His opinion does not represent that of the university for which he works.
This article originally appeared on Nashville Tennessean: AI is transforming education. We're struggling to keep up | Opinion

Related Articles

Delta Says It's Not Using AI to Gather Personal Data, Set Fares

Bloomberg · 3 hours ago

Delta Air Lines Inc. said it's not using, and doesn't plan to use, artificial intelligence to offer customers fares based on individual personal data, addressing a public backlash led by several members of Congress. US Senators Ruben Gallego, Richard Blumenthal and Mark R. Warner raised concerns in a July 21 letter about what they said was Delta's use of AI to end fixed or static pricing 'in favor of prices that are tailored to an individual consumer's willingness to pay.'

Inside The Fight To Align And Control Modern AI Systems

Forbes · 10 hours ago

A common trope today is that artificial intelligence is too complex to understand and impossible to control. Some pioneering work on AI transparency challenges this assumption. Going deep into the mechanics of how these systems work, researchers are starting to understand how we can guide AI systems toward desired behaviors and outcomes. The recent discussion about 'woke AI,' fueled by provisions in the U.S. AI Action Plan to insert an ideological perspective into federal government AI procurement guidelines, has brought the concept of AI alignment to light. AI alignment is the technical process of encoding goals and, with them, human values into AI models to make them reliable, safe and, ultimately, helpful.

There are at least two important challenges to consider. From an ethical and moral perspective, who determines what is acceptable and what is good or bad? From a more mundane, technical perspective, the question is how to implement this encoding of values and goals into AI systems.

The Ethics of AI Alignment

The act of setting goals for a system or a process assumes a set of values. However, values are not universal or absolute. Different communities embrace different values, and value systems can change over time. Moral decisions are largely made on an individual basis, guided by an internal compass of right and wrong that is often shaped by personal beliefs as well as religious and cultural influences. Ethics, on the other hand, are external codes of conduct, typically established by a group, to guide behavior in specific contexts such as professions or institutions.

Who should make this alignment decision? One can choose to delegate this to elected officials, as representatives of the people's will, or let the market choose from a variety of offerings reflecting the multiplicity of values present in each society. The practical reality is that many alignment decisions are made inside private companies. Engineering and policy teams at Big Tech firms and well-funded AI startups are actively shaping how models behave, often without public input or regulatory guardrails. They weigh personal beliefs, corporate incentives, and evolving government guidance, all behind closed doors.

What Happens When AI Goes Rogue?

A few examples may help illustrate some of the current alignment dilemmas. Nick Bostrom, a philosopher at the University of Oxford, proposed a thought experiment in 2003 to explain the control problem of aligning a superintelligent AI. In this experiment, an intelligence greater than human intelligence is tasked with making as many paperclips as possible. This AI can learn and is given the freedom to pursue any means necessary to maximize paperclip production. Soon, the world is overrun with paperclips, and the AI begins to see humans as an obstacle to its goal. It decides to fight its creator, leading to a paperclip apocalypse. Although unlikely, this illustrates the tradeoffs between control, alignment, and safety.

Two decades later, in 2024, a now-infamous attempt by Google to reduce bias in the image-generation capabilities of its Gemini model led it to depict American founding fathers and World War II Nazi officers as people of color. The backlash underscored how a valid attempt to remove bias from historical training data resulted in biased outcomes in the opposite direction.

Earlier this year, the unfiltered Grok, the AI chatbot from Elon Musk's xAI, self-identified as 'MechaHitler,' a video game character, and conjured antisemitic conspiracies and other toxic content. Things spiraled out of control, leading the company to stop the chatbot from engaging on the topic. In this case, the incident traces back to the company's desire to embrace viewpoint diversity and its scaling back of trust and safety actions and staff.

The Technologies Of AI Alignment

There are several ways to pursue AI alignment and ensure AI systems conform to human intentions and ethical principles. They vary from deeply technical activities to managerial acts of governance.

The first set of methods includes learning techniques like Reinforcement Learning from Human Feedback (RLHF). RLHF, the technique behind systems like ChatGPT, is a way of guiding an AI system by rewarding desirable behavior. It teaches AI by having people give thumbs up or down on its answers, helping the system learn to deliver better, more helpful responses based on human preferences.

The data used for training the models is another important part of the alignment process. How the data itself is collected, curated, or created can influence how well the system reflects specific goals. One tool in this process is the use of synthetic data, which is data artificially generated rather than collected from real-world sources. It can be designed to include specific examples, avoid bias, or represent rare scenarios, making it especially useful for guiding AI behavior in a safe and controlled way. Developers use it to teach models ethical behavior, avoid harmful content, and simulate rare or risky situations.

In addition to technical approaches, managerial methods also play a role in AI alignment. They embed oversight and accountability into how systems are developed and deployed. One such method is red teaming, where experts or specially trained AI models try to trick the system into producing harmful or unintended outputs. These adversarial tests reveal vulnerabilities that can then be corrected through additional training or safety controls.

AI governance establishes the policies, standards, and monitoring systems that ensure AI behavior aligns with organizational values and ethical norms. This includes tools like audit trails, automated alerts, and compliance checks. Many companies also form AI ethics boards to review new technologies and guide responsible deployment.

Model training, data selection and system oversight are all human choices. And with each decision comes a set of values, shaped by culture, incentives and individual judgment. That may be why debates over AI bias remain so charged. They are as much about algorithms as about the people behind them.

Can We Control a Sycophantic AI?

One subtle but disturbing alignment challenge results from the way models are trained and respond to humans. Studies from Anthropic showed that AI assistants often agree with users, even when they're wrong, a behavior known as sycophancy. Earlier this year, OpenAI found that its GPT-4o model was validating harmful content in an overly agreeable tone. The company has since reversed the model update and launched efforts to improve how human feedback is used in training. The technical training methods discussed above, even when well-intentioned, can produce unintended outcomes.

Can we align and control AI systems, especially as they grow more complex, autonomous, and opaque? While much attention has focused on regulating external behavior, new research suggests we may be able to reach inside the black box itself. The work of two computer science researchers on AI transparency and interpretability offers a window into how.

Fernanda Viégas and Martin Wattenberg are co-leaders of the People + AI Research (PAIR) team at Google and Professors of Computer Science at Harvard. Their research shows that AI systems, in addition to generating responses, form internal representations of the people they interact with. AI models build a working image of their users, including age, gender, education level and socioeconomic status. The system learns to mirror what it assumes the user wants to hear, even when those assumptions are inaccurate. Their research further demonstrated that it is possible to understand and adjust the parameters behind these internal representations, offering concrete ways to steer AI behavior and control system outputs.

Controlling AI Is A Choice, Not Just A Challenge

Yes, AI can be controlled through technical means, organizational governance, and thoughtful oversight. But it requires deliberate choices to implement the tools we already have, from red teaming and model tuning to ethics boards and research on explainable systems.

Policy plays a role, creating the right incentives for industry action. Regulation and liability can help steer the private sector toward safer, more transparent development. But deeper questions remain: Who decides what "safe" means? Whose values should guide alignment? Today's debates over 'woke AI' are, at their core, about who gets to define right and wrong in a world where machines increasingly mediate truth.

In the end, controlling AI isn't only a technical challenge; it's a moral and political one. And it begins with the will to act.
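The "thumbs up or down" preference learning described in the article can be made concrete with a small sketch. This is a hypothetical illustration, not any production system: it swaps the neural reward model for a linear bag-of-words scorer, uses invented example answers, and omits the reinforcement-learning step that would actually update a language model, keeping only the part where human comparisons are turned into a learned reward signal.

```python
# Toy sketch of the preference-learning step behind RLHF (illustration only):
# raters pick which of two answers they prefer, a reward model is fitted to
# those comparisons, and candidate answers are then ranked by learned reward.

import numpy as np

# Made-up pairwise judgments: (answer the rater preferred, answer they rejected).
PREFERENCES = [
    ("here is a sourced answer with caveats", "whatever just trust me"),
    ("i am not sure but here is what the data shows", "you are absolutely right about everything"),
    ("that claim is not supported and here is why", "great point totally agree"),
]

VOCAB = sorted({word for a, b in PREFERENCES for word in (a + " " + b).split()})

def features(text: str) -> np.ndarray:
    """Bag-of-words counts; a stand-in for a real model's internal representation."""
    vec = np.zeros(len(VOCAB))
    for word in text.split():
        if word in VOCAB:
            vec[VOCAB.index(word)] += 1.0
    return vec

def train_reward_model(pairs, epochs=200, lr=0.5) -> np.ndarray:
    """Fit a linear reward model with a Bradley-Terry pairwise objective:
    push reward(preferred) above reward(rejected) for every human comparison."""
    w = np.zeros(len(VOCAB))
    for _ in range(epochs):
        for preferred, rejected in pairs:
            diff = features(preferred) - features(rejected)
            p_correct = 1.0 / (1.0 + np.exp(-w @ diff))  # P(model agrees with the rater)
            w += lr * (1.0 - p_correct) * diff           # gradient ascent on log-likelihood
    return w

def reward(w: np.ndarray, text: str) -> float:
    return float(w @ features(text))

if __name__ == "__main__":
    w = train_reward_model(PREFERENCES)
    candidates = [
        "you are absolutely right about everything",
        "i am not sure but here is what the data shows",
    ]
    # In full RLHF this learned reward would steer further training of the model;
    # here it only ranks candidate answers from most to least preferred.
    for text in sorted(candidates, key=lambda c: reward(w, c), reverse=True):
        print(f"{reward(w, text):+6.2f}  {text}")
```

Everything above, including the tiny vocabulary and the example answers, is invented for illustration; the point is only the shape of the loop: human comparisons in, a reward signal out.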

Biden's autopen controversy says more about AI than you might think

The Hill · 11 hours ago

Would a love letter mean the same if you knew it was written by a robot? What about a law?

Republicans are asking similar questions in their investigations into former President Joe Biden's use of the autopen — an automatic signature machine that the former president used to sign a number of clemency orders near the end of his term. Trump and his allies claim that Biden's use of the autopen may have been unlawful and indicative of the former president's cognitive decline. If Biden had to offload the work of signing the orders to a machine, then how can we know he actually approved of what was signed? And if Biden wasn't approving these orders, then who was?

It is unclear what the outcomes of these investigations will be. More importantly, however, these probes get at a larger concern around how different kinds of communication can lose their meanings when robots or AI enter the mix.

Presidents have used the autopen for various purposes (including signing bills into law) for decades. In fact, the prevalence of the autopen highlights how, today, a presidential signature represents more than just ink on paper — it symbolizes a long process of deliberation and approval that often travels through various aides and assistants. The Justice Department under George W. Bush said as much in a 2005 memo advising that others can affix the president's signature to a bill via autopen, so long as the president approves it.

Trump himself has admitted to using the autopen, albeit only for what he called 'very unimportant papers.' House Oversight Chairman James Comer (R-Ky.) even used digital signatures to sign subpoena notices related to the investigation for his committee. President Obama used the autopen in 2011 to extend the Patriot Act. Even Thomas Jefferson used an early version of the autopen to replicate his handwriting when writing multiple letters or signing multiple documents.

But the dispute around the use of the autopen is more than just partisan bickering; it is an opportunity to consider how we want to incorporate other automating systems like artificial intelligence into our democratic processes. As a researcher who studies the impacts of AI on social interaction, my work shows how automating legal, political, and interpersonal communications can cause controversy, whether via a low-tech robotic arm holding a pen or through complex generative-AI models. In our study, we find that autopen controversies illustrate that although automation can make things more efficient, it can also circumvent the very processes that give certain things — like signatures — their meaning.

Generative AI systems are poised to do the same as we increasingly use them to automate our communication tasks, both within and beyond government. For instance, when an office at Vanderbilt University revealed that it had used ChatGPT to help pen a condolence letter to students following a mass shooting at Michigan State University, students were appalled. After all, the whole point of the condolence letter was to show care and compassion towards students. If it was written by a robot, then it was clear the university didn't actually care — its words were rendered empty.

Using GenAI to automate communication can therefore threaten our trust in one another, and in our institutions: in interpersonal communications, one study suggests that when we suspect others are covertly using AI to communicate with us, we perceive them more negatively. That is, when the use of automation comes to light, we trust and like each other less.

The stakes of this kind of breach are especially high when it comes to automating political processes, where trust is paramount. The Biden fiasco has led some, like Rep. Addison McDowell (R-N.C.), to call for a ban on the use of the autopen in signing bills, executive orders, pardons and commutations. Although Rep. McDowell's bill might prevent future presidents from experiencing the kind of entanglement the Biden administration has gotten caught up in, it doesn't address how other kinds of emerging technologies might cause similar problems.

As attractive automating technologies like generative AI become more and more popular, public figures should understand the risks involved in their use. These systems may promise to make governing more efficient, but they still come at a significant cost.
