Is ChatGPT killing higher education?


Vox · 3 days ago
What's the point of college if no one's actually doing the work?
It's not a rhetorical question. More and more students are not doing the work. They're offloading their essays, their homework, even their exams, to AI tools like ChatGPT or Claude. These are not just study aids. They're doing everything.
We're living in a cheating utopia — and professors know it. It's becoming increasingly common, and faculty are either too burned out or unsupported to do anything about it. And even if they wanted to do something, it's not clear that there's anything to be done at this point.
So what are we doing here?
James Walsh is a features writer for New York magazine's Intelligencer and the author of the most unsettling piece I've read about the impact of AI on higher education.
Walsh spent months talking to students and professors who are living through this moment, and what he found isn't just a story about cheating. It's a story about ambivalence and disillusionment and despair. A story about what happens when technology moves faster than our institutions can adapt.
I invited Walsh onto The Gray Area to talk about what all of this means, not just for the future of college but the future of writing and thinking. As always, there's much more in the full podcast, so listen and follow The Gray Area on Apple Podcasts, Spotify, Pandora, or wherever you find podcasts. New episodes drop every Monday.
This interview has been edited for length and clarity.
Let's talk about how students are cheating today. How are they using these tools? What's the process look like?
It depends on the type of student, the type of class, the type of school you're going to. Whether or not a student can get away with it is a different question, but there are plenty of students who are taking the prompt from their professor, pasting it into ChatGPT, saying, 'I need a four- to five-page essay,' and then turning in that essay without ever reading it.
One of the funniest examples I came across is that a number of professors are using this so-called Trojan horse method, where they drop non sequiturs into their prompts. They mention broccoli or Dua Lipa, or they say something about Finland in the essay prompt, just to see if students are copying and pasting the prompt into ChatGPT. If they are, ChatGPT or whatever LLM they're using will say something random about broccoli or Dua Lipa.
Unless you're incredibly lazy, it takes just a little effort to cover that up.
Every professor I spoke to said, 'So many of my students are using AI and I know that so many more students are using it and I have no idea,' because it can essentially write 70 percent of your essay for you, and if you do that other 30 percent to cover all your tracks and make it your own, it can write you a pretty good essay.
And there are these platforms, these AI detectors, and there's a big debate about how effective they are. They will scan an essay and assign some grade, say a 70 percent chance that this is AI-generated. And that's really just looking at the language and deciding whether or not that language is created by an LLM.
But it doesn't account for big ideas. It doesn't catch the students who ask AI, 'What should I write this essay about?' and then just write it up without doing the actual thinking themselves. It's like paint by numbers at that point.
Did you find that students are relating very differently to all of this? What was the general vibe you got?
There was a pretty wide range of perspectives on AI. I spoke to a student at the University of Wisconsin who said, 'I realized AI was a problem last fall, walking into the library and at least half of the students were using ChatGPT.' And it was at that moment that she started thinking about her classroom discussions and some of the essays she was reading.
The one example she gave that really stuck with me was that she was taking some psych class, and they were talking about attachment theories. She was like, 'Attachment theory is something that we should all be able to talk about [from] our own personal experiences. We all have our own attachment theory. We can talk about our relationships with our parents. That should be a great class discussion. And yet I'm sitting here in class and people are referencing studies that we haven't even covered in class, and it just makes for a really boring and unfulfilling class.' That was the realization for her that something is really wrong. So there are students like that.
And then there are students who feel like they have to use AI because if they're not using AI, they're at a disadvantage. Not only that, AI is going to be around no matter what for the rest of their lives. So they feel as if college, to some extent now, is about training them to use AI.
What's the general professor's perspective on this? They seem to all share something pretty close to despair.
Yes. Those are primarily the professors in writing-heavy classes or computer science classes. There were professors who I spoke to who actually were really bullish on AI. I spoke to one professor who doesn't appear in the piece, but she is at UCLA and she teaches comparative literature, and used AI to create her entire textbook for this class this semester. And she says it's the best class she's ever had.
So I think there are some people who are optimistic, [but] she was an outlier in terms of the professors I spoke to. For the most part, professors were, yes, in despair. They don't know how to police AI usage. And even when they know an essay is AI-generated, the recourse there is really thorny. If you're going to accuse a student of using AI, there's no real good way to prove it. And students know this, so they can always deny, deny, deny. And the sheer volume of AI-generated essays or paragraphs is overwhelming. So that, just on the surface level, is extremely frustrating and has a lot of professors down.
Now, if we zoom out and think also about education in general, this raises a lot of really uncomfortable questions for teachers and administrators about the value of each assignment and the value of the degree in general.
How many professors do you think are now just having AI write their lectures?
There's been a little reporting on this. I don't know how many are. I know that there are a lot of platforms that are advertising themselves or asking professors to use them more, not just to write lectures, but to grade papers, which of course, as I say in the piece, opens up the very real possibility that right now an AI is grading itself and offering comments on an essay that it wrote. And this is pretty widespread stuff. There are plenty of universities across the country offering teachers this technology. And students love to talk about catching their professors using AI.
I've spoken to another couple of professors who are like, I'm nearing retirement, so it's not my problem, and good luck figuring it out, younger generation. I just don't think people outside of academia realize what a seismic change is coming. This is something that we're all going to have to deal with professionally.
And it's happening much, much faster than anyone anticipated. I spoke with somebody who works on education at Anthropic, who said, 'We expected students to be early adopters and use it a lot. We did not realize how many students would be using it and how often they would be using it.'
Is it your sense that a lot of university administrators are incentivized to not look at this too closely, that it's better for business to shove it aside?
I do think there's a vein of AI optimism among a certain type of person, a certain generation, who saw the tech boom and thought, I missed out on that wave, and now I want to adopt. I want to be part of this new wave, this future, this inevitable future that's coming. They want to adopt the technology and aren't really picking up on how dangerous it might be.
I used to teach at a university. I still know a lot of people in that world. A lot of them tell me that they feel very much on their own with this, that the administrators are pretty much just saying, Hey, figure it out. And I think it's revealing that university admins were quickly able, during Covid, for instance, to implement drastic institutional changes to respond to that, but they're much more content to let the whole AI thing play out.
I think they were super responsive to Covid because it was a threat to the bottom line. They needed to keep the operation running. AI, on the other hand, doesn't threaten the bottom line in that way, or at least it doesn't yet. AI is a massive, potentially extinction-level threat to the very idea of higher education, but they seem more comfortable with a degraded education as long as the tuition checks are still cashing. Do you think I'm being too harsh?
I genuinely don't think that's too harsh. I think administrators may not fully appreciate the power of AI, exactly what's happening in the classroom, and how prevalent it is. I did speak with many professors who go to administrators, and even TAs who go to older teachers and professors, and say, This is a problem.
I spoke to one TA at a writing course at Iowa who went to his professor, and the professor said, 'Just grade it like it was any other paper.' I think they're just turning a blind eye to it. And that is one of the ways AI is exposing the rot underneath education.
It's this system that hasn't been updated in forever. And in the case of the US higher ed system, it's like, yeah, for a long time it's been this transactional experience. You pay X amount of dollars, tens of thousands of dollars, and you get your degree. And what happens in between is not as important.
The universities, in many cases, also have partnerships with AI companies, right?
Right. And what you said about universities can also be said about AI companies. For the most part, these are companies or companies within nonprofits that are trying to capture customers. One of the more dystopian moments was when we were finishing this story, getting ready to completely close it, and I got a push alert that was like, 'Google is letting parents know that they have created a chatbot for children under [thirteen years old].' And it was kind of a disturbing experience, but they are trying to capture these younger customers and build this loyalty.
There's been reporting from the Wall Street Journal on OpenAI and how they have been sitting on an AI that would be really, really effective at essentially watermarking their output. And they've been sitting on it, they have not released it, and you have to wonder why. And you have to imagine they know that students are using it, and in terms of building loyalty, an AI detector might not be the best thing for their brand.
This is a good time to ask the obligatory question, Are we sure we're not just old people yelling at clouds here? People have always panicked about new technologies. Hell, Socrates panicked about the written word. How do we know this isn't just another moral panic?
I think there are a lot of different ways we could respond to that. It's not a generational moral panic. This is a tool that's available, and it's available to us just as it's available to students. Society and our culture will decide what the morals are. And that is changing, just as the definition of cheating is changing. So who knows? It might be a moral panic today, and it won't be in a year.
However, I think somebody like Sam Altman, the CEO of OpenAI, is one of the people who said, 'This is a calculator for words.' And I just don't really understand how that is compatible with other statements he's made about AI potentially being lights out for humanity, or with statements made by people at Anthropic about the power of AI to potentially be a catastrophic event for humans. And these are the people who are closest to it and thinking about it the most, of course.
I have spoken to some people who say there is a possibility, and I think there are people who use AI who would back this up, that we've maxed out the AI's potential to supplement essays or writing. That it might not get much better than it is now. And I think that's a very long shot, one that I would not want to bank on.
Is your biggest fear at this point that we are hurtling toward a post-literate society? I would argue, if we are post-literate, then we're also post-thinking.
It's a very scary thought that I try not to dwell on — the idea that my profession and what I'm doing is just feeding the machine, that my most important reader now is a robot, and that there are going to be fewer and fewer readers. That's really scary, not just because of subscriptions, but because, as you said, it means fewer and fewer people thinking and engaging with these ideas.
I think ideas can certainly be expressed in other mediums and that's exciting, but I don't think anybody who's paid attention to the way technology has shaped teen brains over the past decade and a half is thinking, Yeah, we need more of that. And the technology we're talking about now is orders of magnitude more powerful than the algorithms on Instagram.