Number of Students Using AI for Schoolwork Surges by Double-Digits


Newsweek | July 21, 2025
The adoption of artificial intelligence (AI) in U.S. classrooms has accelerated rapidly over the past year, with double-digit growth in the number of students using AI tools for schoolwork, according to a new report from Quizlet.
"With the support of AI tools, students can reclaim time and streamline tasks, making their value immediately clear," Quizlet's CEO told Newsweek.
Why It Matters
Artificial intelligence has surged in popularity across the United States and worldwide.
While some companies are integrating the tools to improve productivity, students are using the technology to their own advantage: to conduct research for papers, create baseline drafts of essays, or get tutor-like help on an unclear topic.
What to Know
Quizlet's 2025 How America Learns report revealed that 85 percent of teachers and students (age 14-22) now use AI in some capacity, marking a substantial increase from 66 percent in 2024. Among students, 89 percent reported using AI for schoolwork, compared to just 77 percent in the previous year.
"We also know that students today are juggling more than ever. In particular, college students are significantly more likely than high school students (82 percent vs. 73 percent) to have sacrificed sleep, personal time, or extracurricular activities because of homework," Kurt Beidler, CEO of Quizlet, told Newsweek. "With the support of AI tools, students can reclaim time and streamline tasks, making their value immediately clear."
The Pew Research Center's January 2025 survey echoes this trend, finding that 26 percent of U.S. teens had used ChatGPT for schoolwork—double the 13 percent observed in 2023. Usage is highest among older students, Black and Hispanic teens, and those most familiar with AI tools.
Students are leveraging AI for a variety of academic tasks. Quizlet's survey found the most common uses are:
Summarizing or synthesizing information (56 percent)
Conducting research (46 percent)
Generating study guides or materials (45 percent)
Teens support using AI tools like ChatGPT primarily for researching new topics (54 percent find it acceptable), though fewer approve of its use for math problems (29 percent) or essay writing (18 percent), according to Pew.
Stock image of a child using a smartphone while doing homework.
"The growing adoption of AI in education signals a lasting trend toward greater use of these new technologies to enhance the learning journey by making it more efficient and effective," Beidler said.
"Just as the adoption of AI continues to increase, we anticipate the future of education to become more personalized. We're already seeing how AI can adapt in real time—identifying knowledge gaps, adjusting difficulty levels, and delivering the right content at the right moment to help students master material more efficiently."
Despite rapid adoption, opinion on AI's impact on education remains mixed. According to Quizlet's findings, only 40 percent of respondents believe AI is used ethically and effectively in classrooms, with students less likely to agree (29 percent) compared to parents (46 percent) and teachers (57 percent).
"While privacy and security are vital concerns, we also need to address the deeper cognitive and developmental risks posed by AI in education," Leyla Bilge, Global Head of Scam Research for Norton, told Newsweek.
"Easy access to instant answers and AI-generated content can lead to intellectual passivity—undermining curiosity, problem-solving, and critical thinking. Overreliance on AI shortcuts means students may miss essential learning processes, weakening foundational skills like reading comprehension, analytical reasoning, and writing."
Demographic differences also persist: Pew's data shows that awareness and usage of ChatGPT are higher among white teens and those from wealthier households, while Black and Hispanic teens are more likely than their white peers to use it for schoolwork.
K-12 educators remain cautious. A 2023 Pew survey reported that 25 percent of public K-12 teachers believe AI tools do more harm than good, with more pessimism among high school staff. Still, many see benefits—especially in supporting research and personalized learning—if managed responsibly.
What People Are Saying
Kurt Beidler, CEO of Quizlet, said in the release: "As we drive the next era of AI-powered learning, it's our mission to give every student and lifelong learner the tools and confidence to succeed, no matter their motivation or what they're striving to achieve. As we've seen in the data, there's immense opportunity when it comes to career-connected learning, from life skills development to improving job readiness, that goes well beyond the classroom and addresses what we're hearing from students and teachers alike."
Leyla Bilge, Global Head of Scam Research for Norton, told Newsweek: "The sharp rise in AI adoption across classrooms tells us that what was once considered cutting-edge is now becoming second nature. This isn't just students experimenting, but it's educators and parents recognizing AI as a legitimate tool for learning and support. Whether it's drafting essays, solving math problems, or translating concepts into simpler terms, AI is making education more accessible and adaptive."
What Happens Next
As digital learning expands, Quizlet's report notes that over 60 percent of respondents want digital methods to be equal to or greater than traditional learning, citing flexibility and accessibility. However, gaps persist: only 43 percent affirm equal access for students with learning differences.
Looking ahead, the top skills students, parents, and educators want schools to develop include critical thinking, financial literacy, mental health management, and creativity—areas where AI-powered tools could play a growing role.
"Digital literacy must evolve. Students need to critically evaluate AI outputs, understand their limitations, and learn how to protect their personal data. Most importantly, children should engage with developmentally appropriate AI tools, those that encourage exploration and responsible use, not just efficiency," Bilge said.
"Like age-appropriate books, AI systems for kids should align with educational and cognitive developmental goals."
Related Articles

OpenAI launches Study Mode in ChatGPT
TechCrunch | 11 minutes ago

OpenAI announced Tuesday the launch of Study Mode, a new feature within ChatGPT that aims to help students develop their own critical thinking skills rather than simply obtain answers to questions. With Study Mode enabled, ChatGPT will ask users questions to test their understanding and, in some cases, refuse to offer direct answers unless students engage with the material. OpenAI says Study Mode is rolling out to logged-in users on ChatGPT's Free, Plus, Pro, and Team plans starting Tuesday. The company expects to roll Study Mode out to its Edu subscribers (largely young people whose school administrators have purchased a plan for the entire student body) in the coming weeks.

Study Mode is OpenAI's attempt to address the millions of students who use ChatGPT in school. Studies have shown that ChatGPT can be a helpful tutor for young people, but it may also harm their critical thinking skills. A research paper released in June found that people who use ChatGPT to write essays exhibit lower brain activity during the process than those who use Google Search or nothing at all.

When ChatGPT first launched in 2022, its widespread use in school settings sparked fear among educators, leading to generative AI bans in many American school districts. By 2023, some of those schools had repealed their ChatGPT bans, and teachers around the country came to terms with the fact that ChatGPT would be a part of young people's lives from now on. Now, with the launch of Study Mode, OpenAI hopes to make ChatGPT a learning tool, not just an answer engine. Anthropic launched a similar tool for its AI chatbot Claude, called Learning Mode, in April.

Of course, there are limits to how effective Study Mode can be. Students can easily switch into the regular mode of ChatGPT if they just want an answer to a question.
OpenAI's VP of Education, Leah Belsky, told TechCrunch in a briefing that the company is not offering tools for parents or administrators to lock students into Study Mode, though she said OpenAI may explore administrative or parental controls in the future. That means it will take a committed student to use Study Mode: the kids have to really want to learn, not just finish their assignment.

OpenAI says Study Mode is the company's first step toward improving learning in ChatGPT, and it aims to publish more information in the future about how students use generative AI throughout their education.

Has A.I. Become Part of Your Life?
New York Times | 12 minutes ago

If artificial intelligence is going to change the world, then it has already begun to do so: People are using A.I. to code apps, apply for jobs and create marketing pitches. They're using it to visualize their ideas, adjudicate arguments and teach schoolkids. Many are talking with chatbots as they would with friends or lovers or advisers. The contours of A.I.'s transformations are being negotiated now not just by A.I. companies and researchers, but by its consumers.

Many people are integrating A.I. deeply into their professional and personal lives. And the flexibility and accessibility of general-purpose A.I. tools make it easy for even new users to apply them to purposes unanticipated by those who built them: For instance, people started using large language models like ChatGPT in place of personal trainers long before specialized apps were developed for that purpose.

The terms under which we use A.I. are also up for grabs. Will our government allow A.I. to proliferate unconditionally, or will we try to restrict and regulate it? Will we welcome A.I. anywhere it can be used, or will we try to cordon it off from certain parts of our lives with new social norms and etiquette?

One of the deepest and most abiding fears about artificial intelligence is that it will replace human beings. But beneath the big existential questions are more practical concerns. Will A.I. make the human accountant or human software engineer as rare as a candle maker or shoemaker? Will A.I. create new kinds of jobs, the way technological advances gave us the job of software engineer in the first place? Perhaps equally important as the effects on jobs and the labor market is how A.I. might play a human-shaped role in our everyday lives, as a therapist, say, or a confidant.

We want to hear from you. How has A.I. become part of your life? We'd like to hear about the ways you've been using these models and what concerns, if any, you have about using them.
We may use a selection of your responses in a future project. The Times is committed to publishing a diversity of letters to the editor. We'd like to hear what you think about this or any of our articles. Here are some tips. And here's our email: letters@

AI Guilt And Comms Professionals: Working With Expectation Overload
Forbes | 12 minutes ago

AI tools are changing how comms pros work, but not without emotional cost. Many feel a quiet sense of AI guilt, wondering if using generative tools undermines their value. The reality: when used with discernment, AI elevates human judgment; it doesn't erase it.

Recently, I spoke with Kelley Darling, a comms professional at a multi-division real estate firm in Washington, DC, who handles most of the comms herself. Darling has started using AI to keep up with demands across four distinct audiences. But she's wrestling with a feeling I expect is all too familiar: AI guilt among comms professionals. 'It makes my work sharper and more efficient,' Darling told me. 'But I wonder—would people still see the value I bring if they realized I have AI partners helping behind the scenes?'

Darling's comment stuck with me. As someone who supports people designing and scaling thought leadership programs, I meet many communications professionals like Darling. They carry the full weight of brand voice, narrative coherence, and content strategy, often as solo contributors. The introduction of AI into their workflows was supposed to be a relief. But instead of reducing pressure, it often introduces a quiet, creeping question: Am I cheating? Let's name this feeling AI guilt. And let's unpack it.

AI Guilt And The High-Wire Act of Modern Thought Leadership

Communications professionals, particularly those shepherding thought leadership programs, have never had it easy. They must help surface big ideas, package them elegantly, channel them through diverse media, and measure the results. They must be both the wellspring of creativity and the guardrails of brand integrity. In many organizations, these professionals are not just the engine of thought leadership; they're its lone mechanic, driver, and GPS. It's not unusual for one person to play the role of ghostwriter, editor, strategist, and project manager across multiple teams and initiatives.
Some also shoulder the emotional labor of working with subject-matter experts who don't quite understand the invisible lift that creating strategic content requires. It's no wonder the promise of AI is so tempting.

AI tools like ChatGPT can offer relief: a sounding board for ideas, a fast draft, a rewriter, a tone checker. Used wisely, they multiply capacity and preserve energy for higher-order thinking. And in the world of thought leadership, where ideation can take time and every sentence must pull its weight, that's no small gift. But what happens when the relief is tinged with guilt?

Some of the guilt stems from old narratives: Real writers don't need help. If you were good at your job, you'd do it all yourself. Or worse: If the AI can do this, why do we need you? These beliefs ignore a simple truth: AI is not ideation. It's not judgment. It's not discernment or audience intuition or editorial strategy. Those are human strengths. AI assists with execution, not invention.

For thought leadership professionals, the ideas are the value. The clarity and courage to frame an idea in a way that moves a market or sparks a conversation is still uniquely human. AI can help shape, smooth, or structure, but it cannot originate with the insight born from years of study, client work, and editorial rigor.

Another source of guilt is the fear of being 'found out,' as if using AI is a shortcut or a crutch. But in a communications environment where you're expected to 'do more with less' year after year, it's not cheating to use the best tools. It's survival. And smart leadership will recognize that.

AI Guilt And Transparency

In fact, those building thought leadership functions inside organizations should be leading the charge in adopting AI, not hiding it. AI enables faster content iteration, testing of different angles for different audiences, and more frequent publishing without burnout. For firms investing in a thought leadership culture, that matters.
Research by Bob Buday and others has shown that thought leadership is no longer a niche marketing function; it's a competitive strategy. Companies with strong thought leadership engines gain more traction with buyers, more trust in the marketplace, and more influence with clients. If comms professionals are tasked with building this strategic muscle, they deserve to use the best available tools to do so.

Darling described how she is experimenting with AI to write a single newsletter differently for four employee personas. That's exactly the kind of work that moves content from noise to nuance. It requires understanding what matters to each audience, testing language, and being able to consider different variations quickly. AI supports her judgment; it doesn't replace it.

To Conquer AI Guilt, Embrace the Tools And Own the Process

The way forward is not to pretend you're not using AI. And leaders in companies should not be putting their employees in a position where they can't be open about it. The way forward is to use generative AI transparently, wisely, and strategically. As thought leadership becomes central to brand identity and differentiation, comms pros need space to think, not just execute. They need time to ideate, to frame, to test. AI can free up that space, but only if we stop apologizing for it.

Let's rewrite the narrative: You're not less valuable because you use AI. You're more valuable because you use it well. In thought leadership, the real measure of value isn't how fast you write or how many words you produce. It's how clearly and originally you think, and how well you help others do the same.

Using AI thoughtfully is not just acceptable; it's strategic. The best thought leadership professionals I know treat generative AI as a partner in the creative process, not a threat to their credibility. They use it to test their assumptions, to sharpen their hooks, to find new metaphors, and to get out of ruts faster.
And yet, even those leading the charge may feel the tension between innovation and authenticity. That tension is a sign of professional integrity. It means you care about the quality of your work. It means you haven't outsourced your standards.

Thought leadership, at its best, is a disciplined form of meaning-making. It's about surfacing ideas that aren't just smart, but useful: ideas that can reshape how people think, work, and lead. If AI can help you bring those ideas into the world with more precision and less burnout, I say you should use it.

Comms and thought-leadership professionals need to stop whispering about the tools we rely on and start focusing on the value we create with them. Again, thought leadership is about thinking well and helping others do the same. If you're doing that with the help of AI, you're not falling short; you're showing the way forward. And there's no need for AI guilt in that scenario.
