Students are using AI to write scholarship essays. Does it work?


Boston Globe · 09-04-2025
'They felt a little bit sterile,' said Geiger, the cofounder and CEO of a company called Scholarships360, an online platform used by more than 300,000 students last year to find and apply for scholarships.
Curious, Scholarships360 staffers deployed AI-detection software called GPTZero. It checked almost 1,000 essays submitted for one scholarship and determined that about 42 percent of them had likely been composed with the help of generative AI.
With college acceptances beginning to roll in for high school seniors, and juniors starting to brainstorm the essays they'll submit with their applications in the fall, Geiger is concerned. When students use AI to help write their essays, he said, they are wasting a valuable opportunity.
'The essay is one of the few opportunities in the admissions process for a student to communicate directly with a scholarship committee or with an admissions reader,' Geiger said. 'That provides a really powerful opportunity to share who you are as a person, and I don't think that an AI tool is able to do that.'
Madelyn Ronk, a 20-year-old student at Penn State Beaver, said she never considered using ChatGPT to write the personal statement required for her transfer application from community college last year. A self-described Goody Two-shoes, she didn't want to get in trouble. But there was another reason: She didn't want to turn in the same essay as anyone else.
'I want to be unique. I feel like when people use AI constantly, it just gives the same answer to every single person,' said Ronk, who wrote her essay about volunteering for charitable organizations in her hometown. 'I would like my answer to be me. So I don't use AI.'
Geiger said students' fears about submitting a generic essay are valid — they're less likely to get scholarships that way. But that doesn't mean they have to avoid generative AI altogether. Some companies offer services to help students use AI to improve their work, rather than to cheat — such as getting help writing an outline, using proper grammar or making points effectively. Generative AI can proofread an essay, and can even tell a student whether their teacher is likely to flag it as AI-assisted.
Packback, for example, is an online platform whose AI software can chat with students and give feedback as they are writing. The bot might flag grammatical errors, use of the passive voice, or places where a student is digressing from their point. Craig Booth, the company's chief technology officer, said the software is designed to introduce students to ethical uses of AI.
Not all scholarship providers or colleges have policies on exactly how AI can or cannot be used in prospective students' essays.
Tools like GPTZero aren't reliable 100 percent of the time. The Markup, a news outlet focused on technology, has reported on research questioning the accuracy of AI-detection software.
Because detection software isn't always accurate, Geiger said, Scholarships360 doesn't base scholarship decisions on whether essays were flagged as being generated by AI. But, he said, many of the students whose essays were flagged weren't awarded a given scholarship because 'if your writing is being mistaken for AI,' whether you used the technology or not, for a scholarship or admissions essay, 'it's probably going to be missing the mark.'
Jonah O'Hara, who chairs the admissions practices committee at the National Association for College Admission Counseling, said that using AI isn't 'inherently evil,' but colleges and scholarship providers need to be transparent about their expectations, and students need to disclose when they're using it and for what.
O'Hara, who is director of college counseling at Rocky Hill Country Day School in Rhode Island, said that he has always discouraged students from using a thesaurus when writing college application essays, or from using any words that aren't natural for them.
'If you don't use 'hegemony' and 'parsimonious' in text messages with your friends, then why would you use it in an essay to college? That's not you,' O'Hara said. 'If you love the way polysyllabic words roll off your tongue, then, of course, if it's your voice, then use it.'
Generative AI is, functionally, the latest evolution of the thesaurus, and O'Hara wonders whether it has 'put a shelf life on the college essay.'
There was a time when some professors offered self-scheduled, unproctored take-home exams, O'Hara recalled. Students had to sign an honor statement promising that everything they submitted was their own work. But the onus was on the professors to write cheat-proof exams. O'Hara said if the college essay is going to survive, he thinks this is the direction administrators will have to go.
'If we get to a point where colleges cannot confidently determine [its] authenticity,' he said, 'then they may abandon it entirely.'
This story was produced by a nonprofit, independent news organization focused on inequality and innovation in education.

Related Articles

New Google AI Changes Are Detrimental To Old News Models

Forbes · 16 minutes ago

In some ways, our news model has been the same for hundreds of years. But that way of providing a service to readers seems to be going obsolete pretty quickly. In recent times, news publishers have been getting unpleasant surprises in the form of decreasing Internet traffic. That's a problem, because these publishers have already had to pivot away from a physical print medium to the web, and for many of them, that's been challenging. (Think: newspapers.) Now Google's most recent changes to its model are throwing these traditional businesses another curveball. Reports in The Information and other sources show that traditional publishers are losing out as Google introduces AI search mechanisms that compete with the old blue-hyperlink SERP directory search engine.

We're Using AI to Search

As I mentioned in covering remarks by Sam Altman of OpenAI a while ago, even Altman himself uses ChatGPT to find out things he would previously have used Google for. Multiply that by millions of people, and you have a scary situation for anyone in the news business. We're transfixed by the power of these LLMs to scour the entire Internet in seconds, build responses based on collective consciousness, and get us our answers right away, without the tedious old job of doing the research. But it comes at a price for those relying on the old ways.

Watching Traffic Decline

Publishers, who have already seen their revenue model change, are now seeing that the Internet footprints they use for visibility and conversion are not doing as well as they did previously.
Rebecca Bellan at TechCrunch reports that organic search's share of the New York Times' overall traffic fell to 36.5 percent in April, from 44 percent. That is a somewhat indirect way to measure declining search, but still one facet of a real problem.

A Little Damning

Then there are additional reports, based on Alphabet's internal communications, that suggest the company actively decided to make a power play based on its monopoly on traditional search. Reporters looking at Alphabet internal directives suggest that the company could have offered publishers more, but decided to force those who want to be included in traditional search to let their content be used by the AI: a Faustian bargain in which, presumably, the agreeing party is an active participant in its own demise. On the other hand, Google's apologists claim that it has created something called Offerwall as a potential new revenue model for publishers. Offerwall, they contend, offers these creators revenue beyond ads. But that's only if readers take the offers.

Why Don't Micropayments Work?

There's also another, mostly theoretical, solution in the mix: having people buy individual news articles with micropayments. This move, however, is almost certain to fail, according to some close to the industry, who point out major problems with the micropayments method. One is that news media is often seen as a package deal. 'If a subscription is worth a hundred dollars a year to a publisher, then even one person clicking on the twenty-cent button instead means the publisher needs five hundred people to buy articles to make up for the lost revenue,' writes James Ball at the Columbia Journalism Review. 'The ratios are different for different outlets, but the math remains intimidating. … There's also a philosophical objection. As noted, newspapers and magazines have been conceived as a package—a mix of the light and the heavy. Some stories cost far more to produce than others, but it balances out because you buy the whole thing. That logic dies if you separate them out.'

As an aside, what few parties have tried is building a hyper-local newsroom on a shoestring budget, tying it to a mobile phone app, and charging subscribers very low prices for all the tea about what's happening in their neighborhoods. Perhaps people are scared of the legal liability.

Does It Really Know?

Then there's the question of whether Google's AI Overviews results are actually accurate. Some users claim the model is often wrong, and its track record is spotty. 'It became the laughingstock of the internet in mid-2024 for recommending glue as a way to make sure cheese wouldn't slide off your homemade pizza,' writes Max Delaney at TechRadar. 'And we loved the time it described running with scissors as 'a cardio exercise that can improve your heart rate and require concentration and focus.''

Figuring Out the Endgame

In any case, publishers are between a rock and a hard place: they have to choose from a lot of bad options and figure out the best ways to stay afloat in a scenario that seems wildly slanted against them. Some would say it's just free-market economics, that publishers will have to do what many other businesses have done over the years, be Netflix instead of Blockbuster, and change with the times. But we're certain to see more of this wider debate about how we consume information, and what it means in the second quarter of the 21st century.
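The break-even arithmetic Ball describes can be sketched in a few lines. This is just an illustration of his example; the function name and the cent-based units are my own, not anything from the article:

```python
import math

def breakeven_articles(subscription_cents: int, article_price_cents: int) -> int:
    """How many single-article sales it takes to replace one lost subscription.

    Works in integer cents to avoid floating-point rounding on prices.
    """
    return math.ceil(subscription_cents / article_price_cents)

# Ball's example: a $100/year subscription vs. a 20-cent per-article button
print(breakeven_articles(10_000, 20))  # → 500
```

Running the same function with other outlets' numbers shows why "the ratios are different for different outlets, but the math remains intimidating."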

Crunchyroll blames third-party vendor for AI subtitle mess

Engadget · 39 minutes ago

At the start of last year, Crunchyroll President Rahul Purini told The Verge the company was "very focused on testing" generative AI tools for subtitling and captioning speech to text. The comment came just months after the streamer temporarily took down the debut episode of one of its newest shows, The Yuzuki Family's Four Sons, after people complained about poor subtitles. Much of the translation was nonsensical, with missing punctuation in many sentences. At the time, some fans speculated the company had used AI to translate the episode. Earlier this week, fresh accusations of AI use came up when an episode of a new anime showed evidence ChatGPT was used to write the subtitles.

Igor Bonifacic for Engadget

On July 1, Bluesky user Pixel spotted an issue with the German subtitles for Necronomico and the Cosmic Horror Show, one of the new series Crunchyroll is streaming this anime season. Beyond a general sloppiness, one line began with the words "ChatGPT said..." during a pivotal scene in the show's debut episode. Engadget was able to independently verify the episode contains the AI-generated translation. If you're curious, the English subtitles aren't much better, as seen in the screenshots above and below.

"We were made aware that AI-generated subtitles were employed by a third-party vendor, which is in violation of our agreement," a Crunchyroll spokesperson told Engadget. "We are investigating the matter and are working to rectify the error."

People were understandably upset about the subtitles. Crunchyroll subscriptions start at $8 per month, and since its acquisition by Sony, the service has been the dominant player in the anime streaming market outside of Japan. "This is not acceptable. How can we be expected to pay for a service that clearly doesn't care about the quality of its products?" wrote Pixel in their original post. As of this writing, their post has been quoted more than 300 times and reposted by thousands of other people.

Many fans say they're turning to torrented fansubs, calling the official AI-generated translations "unwatchable." People on Reddit have expressed similar frustrations. Ironically, when Purini revealed Crunchyroll was testing generative AI tools for subtitles, he said part of the motivation was to prevent piracy. He reasoned the tech would allow the company to start streaming new, translated anime episodes as close to their original Japanese release as possible, adding that the lag between official releases was sometimes what pushed fans to torrent shows.

Update 3:58PM ET: Added comment from Crunchyroll.

Have a tip for Igor? You can reach him by email, on Bluesky, or send a message to @Kodachrome.72 to chat confidentially on Signal.

Owning a Piece of ChatGPT Was Already Messy. Then Elon Musk Made It Weirder

Gizmodo · 44 minutes ago

OpenAI has a message for anyone who thinks they're about to cash in on the AI boom by buying a new 'OpenAI token' on Robinhood: Don't. But in a chaotic turn, Elon Musk just suggested that even the company's real equity might be an illusion.

The maker of ChatGPT, in a rare public warning posted on X (formerly Twitter), disavowed any involvement with crypto-like financial products claiming to offer a piece of its business. 'These 'OpenAI tokens' are not OpenAI equity,' the company wrote. 'We did not partner with Robinhood, were not involved in this, and do not endorse it. Any transfer of OpenAI equity requires our approval—we did not approve any transfer.' The company added a clear warning: 'Please be careful.'

This strange situation immediately attracted the attention of Elon Musk, OpenAI's co-founder turned chief antagonist. He responded to the company's post with a blunt, explosive accusation of his own: 'Your 'equity' is fake.'

The controversy began after Robinhood, the popular trading platform, unveiled a new product for its European customers. In a statement to Gizmodo, the company explained its move. 'To cap off our recent crypto event, we announced a limited stock token giveaway on OpenAI and SpaceX to eligible European customers,' a Robinhood spokesperson said. 'These tokens give retail investors indirect exposure to private markets, opening up access, and are enabled by Robinhood's ownership stake in a special purpose vehicle.'

Robinhood used a Special Purpose Vehicle (SPV), which is essentially a separate company created to hold an investment, to buy a stake in OpenAI. It then issued its own digital tokens that represent a claim on that stake. This process, known as tokenization, aims to make illiquid assets, like a share in a private company, easy to trade. CEO Vlad Tenev elaborated on X, admitting the tokens are not a direct investment. 'While it is true that they aren't technically 'equity' (you can see the precise dynamics in our Terms for those interested), the tokens effectively give retail investors exposure to these private assets,' he explained. 'Our giveaway plants a seed for something much bigger.'

Musk's swipe is the latest in his long-running war with the company he helped found and now openly despises. He has accused it of abandoning its nonprofit mission for profit and has even filed a lawsuit against it. But his comment also throws a spotlight on the bizarre corporate structure that makes this whole situation possible. OpenAI is technically governed by a nonprofit board. Most of its commercial products, like ChatGPT, are operated by a 'capped-profit' subsidiary. This hybrid model means investors can earn returns, but only up to a certain limit, after which any excess profits are supposed to be returned to the nonprofit to 'benefit humanity.' This structure makes a traditional IPO impossible and means even internal investors don't hold 'equity' in the normal sense. They own a right to a share of future profits, but only within the complex limits set by the board. So when Musk says 'your equity is fake,' he's not just trolling. He's pointing out that the very nature of ownership at the world's most important AI company is confusing and opaque.

This confluence of crypto hype, corporate vagueness, and a billionaire feud is a dangerous cocktail for everyday investors. AI is the next frontier for financial speculation, and this episode shows how creative the attempts to cash in have become. Whether Robinhood's token was a well-intentioned but misleading product or something more cynical, the fact that OpenAI had to publicly disavow it is a massive red flag. You can't buy OpenAI stock. You can't trade official OpenAI tokens. If you think you've found a way to own a piece of the AI revolution, you're probably being misled. If someone tells you otherwise, you're probably being scammed.

But Elon Musk's jab points to a deeper irony: in the strange, confusing world of OpenAI's corporate structure, even its own insiders may not really own what they think they do. As the AI gold rush heats up, ownership is becoming one of the most contested, confusing, and misleading parts of the story. And unless companies like OpenAI become more transparent, the vacuum will be filled by fake products, crypto stunts, and viral misinformation.
