OpenAI's education head says students should use ChatGPT as a tool, not 'an answer machine'

Luddites have no place in an AI-powered world, according to OpenAI's vice president of education.
"Workers who use AI in the workforce are incredibly more productive," Leah Belsky, who's been leading OpenAI's education team since 2024, said on an episode of the company's podcast on Friday.
So learning to use the technology, she said, should start early. "Any graduate who leaves [an] institution today needs to know how to use AI in their daily life," she said. "And that will come in both where they're applying for jobs as well as when they start their new job."
Most schools have so far sought ways to prevent students from using AI rather than encouraging it or teaching it. This is partly because AI use in school is considered cheating. There is also concern that using AI can cause so-called "brain rot."
Belsky thinks about it differently.
"AI is ultimately a tool," she said, at one point comparing it to a calculator. "What matters most in an education space is how that tool is used. If students use AI as an answer machine, they are not going to learn. And so part of our journey here is to help students and educators use AI in ways that will expand critical thinking and expand creativity."
The "core literacy" students should develop, she said, is coding.
"Now, with vibe coding and now that there are all sorts of tools that make coding easier, I think we're going to get to a place where every student should not only learn how to use AI generally, but they should learn to use AI to create images, to create applications, to write code," she said.
Vibe coding is the practice of prompting AI in natural language to write code for whatever you want to build. It has been widely embraced, but most developers avoid relying on it for core systems because AI-generated code is prone to errors. Anyone vibe coding needs some level of coding knowledge, or someone who has it, to check the AI's work.
Perhaps the biggest concern about using AI in education is that it removes the element of "productive struggle" — a crucial part of how people learn and master new material. Belsky says OpenAI is developing technology to counter that.
This week, OpenAI introduced "Study Mode" in ChatGPT, which provides students with "guiding questions that calibrate responses to their objective and skill level to help them build deeper understanding," according to OpenAI's website.
OpenAI is not the only technology company thinking about this topic. Kira Learning is a startup chaired by Google Brain founder Andrew Ng. It first launched in 2021 to help teachers without a background in computer science teach the subject effectively. The company launched a slate of AI agents earlier this year.
The aim is to introduce "friction" into students' conversations with AI at the right stages so that they actually have a productive struggle and learn through the experience, Andre Pasinetti, cofounder and CEO of Kira, told Business Insider.
For the near future, at least, the onus will likely be on tech companies to spearhead new ways to keep the learning in learning, as universities and educational institutions scramble to keep up.
Tyler Cowen, a professor of economics at George Mason University, also talked about the state of the university in a conversation with podcaster Azeem Azhar this week.
"There's a lot of hand-wringing about 'How do we stop people from cheating' and not looking at 'What should we be teaching and testing?'" he said."The whole system is set up to incentivize getting good grades. And that's exactly the skill that will be obsolete."

Related Articles

OpenAI releases open-weight reasoning models optimized for running on laptops

Yahoo

By Anna Tong

SAN FRANCISCO (Reuters) - OpenAI said on Tuesday it has released two open-weight language models that excel in advanced reasoning and are optimized to run on laptops with performance levels similar to its smaller proprietary reasoning models.

An open-weight language model's trained parameters, or weights, are publicly accessible; developers can use them to analyze and fine-tune the model for specific tasks without needing the original training data.

"One of the things that is unique about open models is that people can run them locally. People can run them behind their own firewall, on their own infrastructure," OpenAI co-founder Greg Brockman said in a press briefing.

Open-weight language models are different from open-source models, which provide access to the complete source code, training data and methodologies.

The landscape of open-weight and open-source AI models has been highly contested this year. For a time, Meta's Llama models were considered the best, but that changed earlier this year when China's DeepSeek released a powerful and cost-effective reasoning model while Meta struggled to deliver Llama 4.

The two new models are the first open models OpenAI has released since GPT-2, which came out in 2019. The larger model, gpt-oss-120b, can run on a single GPU, and the second, gpt-oss-20b, is small enough to run directly on a personal computer, the company said.

OpenAI said the models perform similarly to its proprietary reasoning models o3-mini and o4-mini, and especially excel at coding, competition math and health-related queries. The models were trained on a text-only dataset that, in addition to general knowledge, focused on science, math and coding. OpenAI did not release benchmarks comparing the open-weight models to competitors' models such as DeepSeek-R1.

Microsoft-backed OpenAI, valued at $300 billion, is currently raising up to $40 billion in a new funding round led by SoftBank Group.
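To make "publicly accessible weights" concrete, here is a minimal, hypothetical sketch of how a developer might fetch the model files, assuming the weights are hosted on Hugging Face under a repository ID like openai/gpt-oss-20b and that the huggingface_hub package is installed; it illustrates the general workflow rather than OpenAI's official instructions.

```python
# Hypothetical sketch: downloading open-weight model files from Hugging Face.
# Assumes the weights live in the "openai/gpt-oss-20b" repository
# (pip install huggingface_hub).
from huggingface_hub import snapshot_download

# Fetch every file in the repository (weights, config, tokenizer) to a local cache.
local_dir = snapshot_download(repo_id="openai/gpt-oss-20b")
print(f"Model files downloaded to: {local_dir}")
```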

Worried about AI at work? Avoid these 5 leadership mistakes with your team

USA Today

Artificial intelligence may be transforming the workplace, but for many employees, it's fueling uncertainty instead of excitement. According to a 2025 Pew Research Center study, 52% of U.S. workers worry AI could disrupt or replace their jobs. And an August 2024 SHRM survey found that nearly half feel unprepared for automation, while 95% say they don't trust their organization to manage the shift in a way that benefits everyone.

How managers address these concerns can make or break team morale and productivity. Experts say clear, honest communication is critical, but the wrong message can backfire, fueling fear instead of trust. Whether you're rolling out new tools or just starting the conversation, it's important to engage your team with transparency, context and empathy. Below, two human resources experts break down five common mistakes to avoid when discussing AI with your team and provide guidance on navigating the discussion more effectively.

1. Acting like it's no big deal

According to the World Economic Forum, when managers dismiss or avoid discussing AI concerns, they often create bigger problems down the road. 'Business leaders can't bury their heads in the sand and hope for the best,' says Eric Mochnacz, director of operations at Red Clover HR in New Jersey. 'They must have up-front discussions about the benefits of AI in their business, the drawbacks, the potential impacts and the areas where they'll not allow AI usage.'

Chad V. Sorenson, president of Florida-based Adaptive HR Solutions, agrees that direct communication is key. 'Employees may feel AI threatens their jobs and may question leaders' motives for introducing AI tools,' he explains. So, 'address the fear and explore how AI can augment workflows and streamline repetitive tasks rather than replace workers.'

Takeaway: Don't downplay concerns about AI. Acknowledge employee fears openly and explain how AI will support, not replace, their work.

2. Throwing around 'AI' without defining it

AI isn't just one thing. Mochnacz explains that there's a significant difference between using generative AI to enhance email communications and utilizing AI chatbots to manage all customer interactions. Without these distinctions, employees don't understand what to expect from their workplace changes or how they can remain relevant.

'AI is such a buzzword, and leaders haven't taken the time to define it and understand the differences,' says Mochnacz. 'I've been in meetings where people ask, 'Can you do this with AI?' or 'Everyone's talking about AI, so we have to do something with it.'' He emphasizes the importance of clarifying the fundamentals.

Takeaway: Don't use 'AI' as a vague catchall. Clearly define what types of AI you're using, what they do and why they matter to your team.

3. Failing to explain the why

'Any time leaders announce a new program or procedure without employee buy-in, there could be fear, skepticism or anger,' Sorenson cautions. Instead of simply telling employees what's changing, explain why the company needs AI. How does it fit into broader business goals? This context helps employees understand their role in the transition rather than viewing it as a threat.

'Ongoing two-way feedback is critical for continued refinement of how teams use AI to improve workflows, processes and results,' notes Sorenson.

Takeaway: Don't skip the context. Explain why AI is being adopted and how it supports your team's goals to build trust and buy-in.

4. Overhyping what AI can do

'Leaders promising that AI will handle everything don't have a clear understanding of its possibilities and limitations,' Sorenson says. For instance, AI can help employees understand benefit plans. But it can't handle nuanced harassment complaints or mental health concerns. 'AI systems must be trained to understand when a human must intervene,' he adds.

Mochnacz says the problem gets worse when leaders promise capabilities that may never materialize. 'We have no idea what AI is going to be able to do in a month, three months or a year,' he emphasizes. So, it's better to focus on specific, tested use cases rather than grand predictions about AI replacing everything.

Takeaway: Avoid making big promises. Focus on what AI can realistically do today, not speculative future capabilities.

5. Leaving people out of the process

'Whenever there's a business, industry or technology shift, involve those it may impact,' stresses Mochnacz. 'Have up-front, direct conversations with your people about their roles and how they see AI helping them be more effective.'

A recent MIT Sloan working paper finds that the most successful generative AI deployments consistently involve frontline workers from the earliest stages through rollout. Drawing on over 50 in-depth interviews, MIT researchers demonstrate that when employees help define the problem, co-design workflows, experiment with tools, and shape fair transition policies, not only does adoption improve, but worker productivity and job quality also rise.

The key here is framing AI as a collaboration partner rather than a threat. Mochnacz explains that when leaders present AI as a good reality for everyone, employees will engage with the technology. But when the message becomes "prove AI can't replace you," workers resist because it feels like an ultimatum.

Takeaway: Don't make AI decisions in a vacuum. Engage employees early and frame AI as a tool to support their work, rather than compete with it.

What successful AI communication looks like

Sorenson says poor AI communication shows up in obvious ways. You might notice more pushback in meetings, higher employee turnover or a spike in anxious watercooler conversations. These signals suggest that employees feel excluded or uncertain and may be bracing for the worst.

In contrast, when communication is clear and inclusive, team engagement improves. 'If your AI communication strategy is successful, employees should engage in the conversation,' Sorenson notes. 'They'll make suggestions to continue to refine its use, and demonstrate an increased productivity level.'

Look for those signs of healthy adoption: employees asking questions, suggesting improvements and using AI to work more efficiently. When teams feel empowered, not threatened, you know you've struck the right balance.

What is USA TODAY Top Workplaces 2025?

Do you work for a great company? Each year, USA TODAY Top Workplaces, a collaboration between Energage and USA TODAY, ranks organizations across the U.S. that excel at creating a positive work environment for their employees. Employee feedback determines the winners. In 2025, over 1,500 companies earned recognition as top workplaces. Check out our overall U.S. rankings. You can also gain insights into top-ranked employers by checking out the links below.

OpenAI's first new open-weight LLMs in six years are here

Engadget

For the first time since GPT-2 in 2019, OpenAI is releasing new open-weight large language models. It's a major milestone for a company that has increasingly been accused of forgoing its original stated mission of "ensuring artificial general intelligence benefits all of humanity." Now, following multiple delays for additional safety testing and refinement, gpt-oss-120b and gpt-oss-20b are available to download from Hugging Face.

Before going any further, it's worth taking a moment to clarify what exactly OpenAI is doing here. The company is not releasing new open-source models that include the underlying code and data it used to train them. Instead, it's sharing the weights — that is, the numerical values the models learned to assign to inputs during their training — that inform the new systems. According to Benjamin C. Lee, professor of engineering and computer science at the University of Pennsylvania, open-weight and open-source models serve two very different purposes.

"An open-weight model provides the values that were learned during the training of a large language model, and those essentially allow you to use the model and build on top of it. You could use the model out of the box, or you could redefine or fine-tune it for a particular application, adjusting the weights as you like," he said.

If commercial models are an absolute black box and an open-source system allows for complete customization and modification, open-weight AIs are somewhere in the middle. OpenAI has not released open-source models, likely because a rival could use the training data and code to reverse engineer its tech. "An open-source model is more than just the weights. It would also potentially include the code used to run the training process," Lee said. And practically speaking, the average person wouldn't get much use out of an open-source model unless they had a farm of high-end NVIDIA GPUs running up their electricity bill. (They would be useful for researchers looking to learn more about the data a company used to train its models, though, and there are a handful of open-source models out there, like Mistral NeMo and Mistral Small 3.)

With that out of the way, the primary difference between gpt-oss-120b and gpt-oss-20b is how many parameters each one offers. If you're not familiar with the term, parameters are the settings a large language model can tweak to provide you with an answer. The naming is slightly confusing here: gpt-oss-120b is a 117-billion-parameter model, while its smaller sibling has 21 billion parameters. In practice, that means gpt-oss-120b requires more powerful hardware to run, with OpenAI recommending a single 80GB GPU for efficient use. The good news is the company says any modern computer with 16GB of RAM can run gpt-oss-20b. As a result, you could use the smaller model to do something like vibe code on your own computer without a connection to the internet. What's more, OpenAI is making the models available under the Apache 2.0 license, giving people a great deal of flexibility to modify the systems to their needs.

Despite this not being a new commercial release, OpenAI says the new models are in many ways comparable to its proprietary systems. The one limitation of the gpt-oss models is that they don't offer multimodal input, meaning they can't process images, video or voice. For those capabilities, you'll still need to turn to the cloud and OpenAI's commercial models, something both new open-weight systems can be configured to do.
Beyond that, however, they offer many of the same capabilities, including chain-of-thought reasoning and tool use. That means the models can tackle more complex problems by breaking them into smaller steps, and if they need additional assistance, they know how to use the web and coding languages like Python. Additionally, OpenAI trained the models using techniques the company previously employed in the development of o3 and its other recent frontier systems. In competition-level coding, gpt-oss-120b earned a score that is only a shade worse than o3, OpenAI's current state-of-the-art reasoning model, while gpt-oss-20b landed in between o3-mini and o4-mini. Of course, we'll have to wait for more real-world testing to see how the two new models compare to OpenAI's commercial offerings and those of its rivals.

The release of gpt-oss-120b and gpt-oss-20b, and OpenAI's apparent willingness to double down on open-weight models, comes after Mark Zuckerberg signaled Meta would release fewer such systems to the public. Open-sourcing was previously central to Zuckerberg's messaging about his company's AI efforts, with the CEO once remarking about closed-source systems, "fuck that." At least among the sect of tech enthusiasts willing to tinker with LLMs, the timing, accidental or not, is somewhat embarrassing for Meta.

"One could argue that open-weight models democratize access to the largest, most capable models to people who don't have these massive, hyperscale data centers with lots of GPUs," said Professor Lee. "It allows people to use the outputs or products of a months-long training process on a massive data center without having to invest in that infrastructure on their own. From the perspective of someone who just wants a really capable model to begin with, and then wants to build for some application, I think open-weight models can be really useful."

OpenAI is already working with a few different organizations to deploy their own versions of these models, including AI Sweden, the country's national center for applied AI. In a press briefing OpenAI held before today's announcement, the team that worked on gpt-oss-120b and gpt-oss-20b said they view the two models as an experiment; the more people use them, the more likely OpenAI is to release additional open-weight models in the future.
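For readers curious what "running it locally" looks like in practice, here is a minimal, hypothetical sketch using the Hugging Face transformers library. It assumes the smaller model is published under the openai/gpt-oss-20b repository and that your machine meets the memory requirements described above; it illustrates the general workflow, not OpenAI's official quickstart.

```python
# Hypothetical sketch: running an open-weight model locally with Hugging Face transformers.
# Assumes "openai/gpt-oss-20b" is the published repo ID and that transformers,
# torch and accelerate are installed on a machine with roughly 16GB of memory.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="openai/gpt-oss-20b",
    torch_dtype="auto",   # let the library pick an appropriate precision
    device_map="auto",    # place the model on a GPU if available, otherwise CPU
)

messages = [
    {"role": "user", "content": "Explain the difference between open-weight and open-source models."},
]

# The pipeline returns the conversation with the model's reply appended as the last message.
result = generator(messages, max_new_tokens=256)
print(result[0]["generated_text"][-1]["content"])
```

Everything here runs on your own hardware; no API key or internet connection is needed once the weights are downloaded.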
