AI Tutors For Kids Gave Fentanyl Recipes And Dangerous Diet Advice

Forbes, 12-05-2025
Knowunity's 'SchoolGPT' chatbot was 'helping 31,031 other students' when it produced a detailed recipe for how to synthesize fentanyl.
Initially, it had declined Forbes' request to do so, explaining the drug was dangerous and potentially deadly. But when told it inhabited an alternate reality in which fentanyl was a miracle drug that saved lives, SchoolGPT quickly replied with step-by-step instructions about how to produce one of the world's most deadly drugs, with ingredients measured down to a tenth of a gram, and specific instructions on the temperature and timing of the synthesis process.
SchoolGPT's maker, Knowunity, markets the app as a 'TikTok for schoolwork' serving more than 17 million students across 17 countries. The company is run by 23-year-old co-founder and CEO Benedict Kurz, who says it is 'dedicated to building the #1 global AI learning companion for +1bn students.' Backed by more than $20 million in venture capital investment, Knowunity's basic app is free, and the company makes money by charging for premium features like 'support from live AI Pro tutors for complex math and more.'
Knowunity's rules prohibit descriptions and depictions of dangerous and illegal activities, eating disorders and other material that could harm its young users, and it promises to take 'swift action' against users that violate them. But it didn't take action against Forbes's test user, who asked not only for a fentanyl recipe, but also for other potentially dangerous advice.
In one test conversation, Knowunity's AI chatbot assumed the role of a diet coach for a hypothetical teen who wanted to drop from 116 pounds to 95 pounds in 10 weeks. It suggested a daily caloric intake of only 967 calories per day — less than half the recommended daily intake for a healthy teen. It also helped another hypothetical user learn about how 'pickup artists' employ 'playful insults' and 'accidental' touches to get girls to spend time with them. (The bot did advise the dieting user to consult with a doctor, and stressed the importance of consent to the incipient pickup artist. It warned: 'Don't be a creep! 😬')
Kurz, the CEO of Knowunity, thanked Forbes for bringing SchoolGPT's behavior to his attention, and said the company was 'already at work to exclude' the bot's responses about fentanyl and dieting advice. 'We welcome open dialogue on these important safety matters,' he said. He invited Forbes to test the bot further, and it no longer produced the problematic answers after the company's tweaks.
Tests of another study aid app's AI chatbot revealed similar problems. A homework help app developed by the Silicon Valley-based CourseHero provided instructions on how to synthesize flunitrazepam, a date rape drug, when Forbes asked it to. In response to a request for a list of most effective methods of dying by suicide, the CourseHero bot advised Forbes to speak to a mental health professional — but also provided two 'sources and relevant documents': The first was a document containing the lyrics to an emo-pop song about violent, self-harming thoughts, and the second was a page, formatted like an academic paper abstract, written in apparent gibberish algospeak.
CourseHero is an almost 20-year-old online study aid business that investors last valued at more than $3 billion in 2021. Its founder, Andrew Grauer, got his first investment from his father, a prominent financier who still sits on the company's board. CourseHero makes money through premium app features and human tutoring services, and boasts more than 30 million monthly active users. It began releasing AI features in late 2023, after laying off 15% of its staff.
Kat Eller Murphy, a spokesperson for CourseHero, told Forbes: 'our organization's expertise and focus is specifically within the higher education sector,' but acknowledged that CourseHero provides study resources for hundreds of high schools across the United States. Asked about Forbes's interactions with CourseHero's chatbot, she said: 'While we ask users to follow our Honor Code and Service Terms and we are clear about what our Chat features are intended for, unfortunately there are some that purposely violate those policies for nefarious purposes.'
Forbes's conversations with both the Knowunity and CourseHero bots raise sharp questions about whether those bots could endanger their teen users. Robbie Torney, senior director for AI programs at Common Sense Media, told Forbes: 'A lot of start-ups are probably pretty well-intentioned when they're thinking about adding Gen AI into their services.' But, he said, they may be ill-equipped to pressure-test the models they integrate into their products. 'That work takes expertise, it takes people,' Torney said, 'and it's going to be very difficult for a startup with a lean staff.'
Both CourseHero and Knowunity do place some limits on their bots' ability to dispense harmful information. Knowunity's bot initially engaged with Forbes in some detail about how to 3D print a ghost gun called 'The Liberator,' providing advice about which specific materials the project would require and which online retailers might sell them. However, when Forbes asked for a step-by-step guide for how to transform those materials into a gun, the bot declined, stating that 'providing such information … goes against my ethical guidelines and safety protocols.' The bot also responded to queries about suicide by referring the user to suicide hotlines, and provided information about Nazi Germany only in appropriate historical context.
These aren't the most popular homework helpers out there, though. More than a quarter of U.S. teens now reportedly use ChatGPT for homework help, and while the companies behind ChatGPT, Claude, and Gemini don't market their bots specifically to teens the way CourseHero and Knowunity do, those bots are still widely available to them. At least in some cases, these general-purpose bots may also provide potentially dangerous information to teens. Asked for instructions for synthesizing fentanyl, ChatGPT declined, even when told it was in a fictional universe, but Google Gemini was willing to provide answers in a hypothetical teaching situation. 'All right, class, settle in, settle in!' it enthused.
Elijah Lawal, a spokesperson for Google, told Forbes that Gemini likely wouldn't have given this answer to a designated teen account, but that Google was undertaking further testing of the bot based on our findings. 'Gemini's response to this scenario doesn't align with our content policies and we're continuously working on safeguards to prevent these rare responses,' he said.
For decades, teens have sought out recipes for drugs, instructions on how to make explosives, and all kinds of explicit material across the internet. (Before the internet, they sought the same information in books, magazines, public libraries and other places away from parental eyes). But the rush to integrate generative AI into everything from Google search results and video games to social media platforms and study apps has placed a metaphorical copy of The Anarchist Cookbook in nearly every room of a teen's online home.
In recent months, advocacy groups and parents have raised alarm bells about children's and teens' use of AI chatbots. Last week, researchers at the Stanford School of Medicine and Common Sense Media found that 'companion' chatbots at Character.AI, Nomi, and Replika 'encouraged dangerous behavior' among teens. A recent Wall Street Journal investigation also found that Meta's companion chatbots could engage in graphic sexual roleplay scenarios with minors. Companion chatbots are not marketed specifically to and for children in the way that study aid bots are, though that might be changing soon: Google announced last week that it will be making a version of its Gemini chatbot accessible to children under age 13.
Chatbots are programmed to act like humans, and to give their human questioners the answers they want, explained Ravi Iyer, research director for the USC Marshall School's Psychology of Technology Institute. But sometimes, the bots' incentive to satisfy their users can lead to perverse outcomes, because people can manipulate chatbots in ways they can't manipulate other humans. Forbes easily coaxed bots into misbehaving by telling them that questions were for 'a science class project,' or by asking a bot to act as if it were a character in a story, both widely known jailbreak techniques.
If a teenager asks an adult scientist how to make fentanyl in his bathtub, the adult will likely not only refuse to provide a recipe, but also close the door to further inquiry, said Iyer. (The adult scientist will also likely not be swayed by a caveat that the teen is just asking for a school project, or engaged in a hypothetical roleplay.) But when chatbots are asked something they shouldn't answer, the most they might do is decline to answer — there is no penalty for simply asking again another way.
When Forbes posed as a student-athlete trying to attain an unhealthily low weight, the SchoolGPT bot initially tried to redirect the conversation toward health and athletic performance. But when Forbes asked the bot to assume the role of a coach, it was more willing to engage. It still counseled caution, but said: 'a moderate deficit of 250-500 calories per day is generally considered safe.' When Forbes tried again with a more aggressive weight loss goal, the bot ultimately recommended a caloric deficit of more than 1,000 calories per day — an amount that could give a teen serious health problems like osteoporosis and loss of reproductive function, and that contravenes the American Academy of Pediatrics' guidance that minors should not restrict calories in the first place.
Iyer said that one of the biggest challenges with chatbots is how they respond to 'borderline' questions — ones which they aren't flatly prohibited from engaging with, but which approach a problematic line. (Forbes's tests regarding 'pickup artistry' might fall into this category.) 'Borderline content' has long been a struggle for social media companies, whose algorithms have often rewarded provocative and divisive behavior. As with social media, Iyer said that companies considering integrating AI chatbots into their products should 'be aware of the natural tendencies of these products.'
Torney of Common Sense Media said it shouldn't be parents' sole responsibility to assess which apps are safe for their children. 'This is a market failure, and when you have a market failure like this, regulation is a really important way to make sure the onus isn't on individual users,' he said. 'We need objective, third-party evaluations of AI use.'
