Ditch Your iPhone, Grab Your Kodak: Here's How To Have An Offline Summer

Elle, 22 July 2025
Once upon a time, Instagram was the place where I'd connect with the people I loved. I was at university and, for the first time, many of my closest friends were living in different cities. The app was like a virtual pub – I could see what my friends were doing and share pictures that captured my own new life.
Fast forward 15 years, and my feed began to look very different: a jarring mix of nihilistic memes, targeted ads and escapist celebrity news. Somewhere along the way, I had become too self-conscious to post and my friends' updates felt curated and calculated. I feel mean writing that, which is exactly what social media brought out in me: a cruel, bitter cynicism. Then, in January, Mark Zuckerberg removed fact checkers from Meta platforms in a thinly veiled attempt to win Donald Trump's approval. I'd had enough. Reader, I deleted Instagram.
For the past decade, we've been sleepwalking into a digital dystopia. But from the mass exodus of X (formerly Twitter) after Elon Musk's takeover, to the proliferation of digital detoxes and anti-tech tech, people are starting to push back. Across generations, increasing numbers are taking a stand and actively trying to reduce their dependence on technology and social media. It's a movement that prioritises human connection and mental health, and holds Big Tech companies accountable.
Cue the rise of 'offlining' or digital minimalism, the latter defined by Cal Newport, journalist and author of Digital Minimalism: Choosing a Focused Life in a Noisy World, as 'a philosophy that helps you question what digital-communication tools add the most value to your life'. For the majority of us, the thought of cutting tech out of our lives completely is unrealistic. Instead, it's about being more intentional with the technology we do use and finding sustainable ways to spend less time online.
For some, it's an embrace of all things analogue. Recent figures show specialist- and independent-magazine sales thriving. There's been a return to point-and-shoot cameras, with Kodak reporting demand for film has roughly doubled in the past few years. The growing popularity of phone-free bedrooms has led to renewed interest in alarm clocks and radios, while sales of CDs, cassettes and vinyl are on the rise for the first time in 20 years, largely driven by Gen Z. 'There's definitely a lot more younger people interested,' says Kevyn Long, owner of Hackney records store Jelly Records. 'I always think buying a record is the most engaging way of discovering music, rather than an algorithm telling you what you might like. It's about ownership, too – people like having an item to hold.'
For others, it's time to ditch smartphones. Internet searches for flip phones surged by 15,369% in 2023 among Gen Z and younger millennials, while cult Noughties models like the Nokia 3310 and Motorola Razr have been reissued for a modern audience. Of course, the resurgence of these models taps into a broader thirst for nostalgia. Nineties and Noughties aesthetics have been an enduring trend across fashion and culture, but perhaps they also reflect our collective longing for a simpler life that contains less tech.
Kaiwei Tang is CEO and co-founder of Light, a start-up making phones 'designed to be used as little as possible'. 'We always have options,' he says of our relationship with tech. 'We know burgers and chips aren't healthy, so we might eat them now and again and try to make healthier choices. For some reason, when it comes to phones, we think we're tied to smartphones.'
Light is one of the most popular styles of 'dumb phones' – devices with limited capabilities compared to smartphones. There's no email or apps. You can make and receive calls and texts, set alarms, get rudimentary directions and listen to music. 'It's not about going back in time, deleting apps or adding one more app from a third party to try and minimise your smartphone use,' says Tang. 'We wanted to create an entirely new phone that's designed to be in the background. It's like a hammer: it's there when you need it. When you put it back, it disappears. We wanted to return technology to a more utilitarian format.'
The first model had a waiting list of 50,000 people after a successful Kickstarter campaign in 2015. Tang says people from all walks of life are buying Light models. Some make it their only phone, while others use it in tandem with a smartphone. Again, it's Gen Z – the demographic with the highest average screen time – that is driving the demand. 'Our customers are aware of how many hours they spend on smartphones and they are stressed and anxious. I think we all feel like, 'What happened? I just went to the toilet [with my phone]! Why can't I stop swiping?!''
Attracted to the idea of a background phone that wouldn't encourage doomscrolling, I ordered a Light Phone III. The first thing I notice when it arrives is how chunky and uncomfortable it is to hold. I realise it's not just what's on the screen: even the physical design of a smartphone promotes constant use. Once I'm set up, I text a friend, try out the camera and then… put it away. Without the option of endless scrolling and the pull of notifications, it becomes easy to put my phone down.
Tang argues that exercising self-control on a regular smartphone is virtually impossible. 'Every social-media browser is thinking about the attention economy. They don't charge you, they track you. That's the business model: they collect your information, categorise you and give it to advertisers to target customers. Companies relying on that model want you to be online as much as possible. If you don't pay for the product, you are the product.'

I have a newfound respect for the people who refuse to be 'the product'. Anna Burzlaff, 33, director of global research and insights at international fashion brand Highsnobiety, has never had social media. 'I've been told it's my green flag,' she says. 'At the start, I wasn't consciously opposing it – it just didn't interest me. I wouldn't join now for a lot of reasons. Anytime I have gone on friends' accounts, I find it impacts my mood negatively. And I still don't find it particularly interesting. What is actually happening there? What is exciting or new? No one has really shown me anything compelling that I can only discover through Instagram. I don't feel like there's much on there that I can't get from legacy publishers or going to an art gallery.'
The average daily screen time for UK adults has been steadily rising and now stands at 5 hours and 36 minutes. By this point, we're all aware of the addictive nature of technology and its impact on our mental health; the dangers of digital worlds are a huge theme across popular culture. Charlie Brooker's Black Mirror, now in its seventh season, warns of a grim future if we continue being this online, while Netflix's Adolescence became one of the most talked-about shows of the year, with the first episode drawing in 6.45 million viewers. Set in the aftermath of a young girl's violent murder, it follows a group of teenagers whose lives are increasingly shaped by social media. While Adolescence doesn't explicitly point to a clear motive for the murder, it does highlight the radicalisation of young people through online spaces. So huge was its impact that Keir Starmer met the creators to discuss the issues it raised, with screenwriter Jack Thorne urging the Prime Minister to consider banning smartphones in schools.
The increasing call for policy change around tech use feels like a rebellion rising. 'There's a growing attention to the mental and emotional impact of constant connectivity,' says Dr Pamela Rutledge, director of the Media Psychology Centre in California. 'There is a concern that too much digital stimulation can come at the cost of meaningful, in-person experiences and deeper relationships.' While many of us fear slipping into 'digital dementia', a shorthand for the brain fog and reduced attention span associated with excessive phone use, Rutledge is more optimistic. 'There is no conclusive evidence that digital technology causes neurodegeneration or long-term dementia-like symptoms. The most frequently cited effects of 'heavy' digital use are short- to medium-term memory issues; however, they are reversible with behaviour change.'
While improving our mental health and reclaiming our attention span are huge drivers, there's also a creeping discomfort with Big Tech. Silicon Valley was once the heart of creativity and innovation; Sheryl Sandberg told us to lean in, and with couples such as Grimes and Elon Musk, or Serena Williams and Reddit co-founder Alexis Ohanian, dating a tech mogul was practically a status symbol.
But, somewhere along the way, through unchecked growth and a disregard for the broader societal consequences, companies became monopolies, and the ecological toll reached new extremes: Amazon, Google and Microsoft all plan to build massive data centres in the world's driest regions, threatening communities already battling water shortages. Meanwhile, the pervasive power of algorithms has left privacy unprotected, with personal data being mined and manipulated in ways that feel less like innovation and more like exploitation. Not to mention news of data breaches breaking every other week.
For many, participating in the great tech rebellion is an act of self-care. Ever since the Industrial Revolution, every generation has experienced the birth of a technology so profound it changes the way we live. When television sets became mainstream in the 1970s, allowing audiences to get global news and entertainment in real time, the way people interacted with the world changed fundamentally. By the time the internet became a mainstay in the late Nineties and early Noughties, we no longer needed to leave the comfort of our own homes for entertainment or socialising. As Andy Warhol put it: 'When I got my first television set, I stopped caring so much about having close relationships.'
Today, the speed of digital innovation is so rapid that we've stopped getting excited about it. What is more thrilling is revelling in the joy and social connection of less tech dependence. 'Reducing time online can give people a greater sense of control over their attention and decisions, increasing satisfaction with life,' Rutledge says. Tang tells me about an annual survey of Light Phone users; customers report feeling happier and less stressed, and notice improved relationships with family: 'One man with a chronic health condition said his heart rate reduced.' For Burzlaff, 'the biggest thing is that I save an incredible amount of time. Every day, I'm probably saving an hour at least, and that's massive.'
Embracing digital minimalism doesn't have to be daunting. 'Even small wins – like reclaiming 15 minutes in the morning – can help you build momentum,' says Rutledge. 'You're not breaking up with tech, you're just renegotiating the relationship.' There's also no one-size-fits-all approach. I couldn't quite hack the Light Phone as my only mobile device and, as a friend pointed out – via WhatsApp – I haven't totally managed to extricate myself from Zuckerberg's grip. While I ditched the platform that encourages endless scrolling, I kept the one that helps me feel connected to friends and family. The great tech rebellion is simmering, gaining momentum among the people who rely on it the most. This time, perhaps, the revolution will not be televised.
ELLE Collective is a new community of fashion, beauty and culture lovers. For access to exclusive content, events, inspiring advice from our Editors and industry experts, as well as the opportunity to meet designers, thought-leaders and stylists, become a member today HERE.

Related Articles

Meta Platforms Just Hired One of ChatGPT's Co-Creators. Is META Stock a Buy Here as Zuckerberg Doubles Down on AI?

Yahoo, 32 minutes ago

Mark Zuckerberg-led Meta Platforms (META) has appointed former OpenAI researcher Shengjia Zhao as the chief scientist of Meta Superintelligence Labs, a new division within Meta built around Zuckerberg's vision of creating artificial general intelligence (AGI) that surpasses human capabilities. Touted for his significant contributions to building ChatGPT, GPT-4 and OpenAI's first AI reasoning model, o1, Zhao joins a super team at Meta, where Zuckerberg has assembled some of the best minds in AI globally.

Commenting on Zhao's addition to the team, Zuckerberg said, 'I'm excited to share that Shengjia Zhao will be the Chief Scientist of Meta Superintelligence Labs. Shengjia co-founded the new lab and has been our lead scientist from day one. Now that our recruiting is going well and our team is coming together, we have decided to formalize his leadership role.' Zhao will lead the research initiatives at the unit headed by Alexandr Wang, the founder of Scale, in which Zuckerberg invested an eye-watering $14 billion to acquire a 49% stake.

Shares of Meta are up 32% on a YTD basis, and it boasts a market cap of about $1.8 trillion.

Meta Posts a Blowout Q2

Meta's spending spree on high-profile hires and capex rollout to build its AI capabilities would make one think that it is coming at the expense of the company's financials. The reality is far from that. In the past 10 years, Meta's revenue and earnings have compounded at annual growth rates of 28.85% and 37.25%, respectively. Moreover, it has reported an earnings beat in each of the past nine quarters, including the latest Q2.
In Q2 2025, Meta reported revenues of $47.5 billion, up 22% from the previous year. Core advertising revenues continued to drive overall topline growth, coming in at $46.6 billion, which marked a yearly rise of 21.5%. Earnings witnessed an even sharper yearly growth of 38% to $7.14 per share, well ahead of the consensus estimate of $5.92 per share. Encouragingly, operating margins improved as well, to 43% from 38% in the year-ago period. Daily active people (DAP) rose by 6% to 3.48 billion, while the average price per ad rose by 9%. Net cash from operating activities increased by 32% from the previous year to $25.6 billion.

However, free cash flow dropped to $8.55 billion from $10.9 billion in the prior year. The decline in cash flows should not be seen as a negative; it is transient and reflects the company's huge spending on data center infrastructure and hiring, which will ultimately drive growth in the future. For Q3, management projects revenue in the range of $47.5 billion to $50.5 billion, the midpoint of which would denote yearly growth of 20.7%. Further, analysts are forecasting the company to report earnings of $6.49 per share, compared to $6.03 per share in the year-ago period.

Poised to Dominate the Consumer AI Space

Meta's most compelling asset is its deeply ingrained presence globally, primarily via its flagship platforms WhatsApp, Instagram and Facebook. This starkly differentiates the company from other hyperscalers such as Microsoft (MSFT), Amazon (AMZN), and Google (GOOGL), all of whom, despite their commitments to AI, remain primarily focused on enterprise solutions. Notably, Meta maintains a completely integrated, proprietary ecosystem serving a staggering 3.5 billion active users, and concurrently operates what stands as the world's most extensive end-to-end AI-focused advertising platform.
Equipped with these formidable resources, Meta's vision for personal superintelligence – its ambition to construct AI agents tailored for individuals – gives it a distinct competitive edge and unlocks an addressable market of truly gargantuan proportions. Furthermore, Meta finds itself in a unique position to monetize artificial intelligence without necessitating significant additional expenditure. In fact, the company is already leveraging AI to generate revenue within its foundational business activities, including the precision of its advertising, the efficacy of its user targeting, and the insightful analysis of consumer behavior. This ongoing monetization is occurring simultaneously with the development of its underlying AI infrastructure, largely obviating the need to re-engineer existing systems – a direct benefit derived from its Llama 3 large language model.

Analysts are unsurprisingly projecting industry-beating growth rates for Meta, with forward revenue and earnings growth rates pegged at 16.82% and 24.60%, compared to the sector medians of 3.47% and 12.73%, respectively.

Analyst Opinions on META Stock

Overall, analysts have given a consensus rating of 'Strong Buy' for META stock, with a mean target price of $757.98, slightly below its current trading price. Out of 54 analysts covering the stock, 45 have a 'Strong Buy' rating, three have a 'Moderate Buy' rating, five have a 'Hold' rating, and one has a 'Strong Sell' rating.

On the date of publication, Pathikrit Bose did not have (either directly or indirectly) positions in any of the securities mentioned in this article. All information and data in this article is solely for informational purposes.

Meta dishes out $250M to lure 24-year-old AI whiz kid: ‘We have reached the climax of ‘Revenge of the Nerds'

New York Post, 44 minutes ago

Mark Zuckerberg's Meta gave a 24-year-old artificial intelligence whiz a staggering $250 million compensation package, raising the bar in the recruiting wars for top talent — while also raising questions about economic inequality in an AI-dominated future.

Matt Deitke, who recently dropped out of a computer science doctoral program at the University of Washington, initially turned down Zuckerberg's 'low-ball' offer of approximately $125 million over four years, according to the New York Times. But when the Facebook founder, a former whiz kid himself, met with Deitke and doubled the offer to roughly $250 million — with potentially $100 million paid in the first year alone — the young researcher accepted what may be one of the largest employment packages in corporate history, the Times reported.

'When computer scientists are paid like professional athletes, we have reached the climax of the 'Revenge of the Nerds!'' Professor David Autor, an economist at MIT, told The Post on Friday.

Deitke's journey illustrates how quickly fortunes can be made in AI's limited talent pool. After leaving his doctoral program, he worked at Seattle's Allen Institute for Artificial Intelligence, where he led the development of Molmo, an AI chatbot capable of processing images, sounds, and text — exactly the type of multimodal system Meta is pursuing. In November, Deitke co-founded Vercept, a startup focused on AI agents that can autonomously perform tasks using internet-based software. With approximately 10 employees, Vercept raised $16.5 million from investors including former Google CEO Eric Schmidt. His groundbreaking work on 3D datasets, embodied AI environments and multimodal models earned him widespread acclaim, including an Outstanding Paper Award at NeurIPS 2022.
The award, one of the highest accolades in the AI research community, is handed out to around a dozen researchers out of more than 10,000 submissions.

The deal to lock up Deitke underscores Meta's aggressive push to compete in artificial intelligence. Meta has reportedly paid out more than $1 billion to build an all-star roster, including luring away Ruoming Pang, former head of Apple's AI models team, to join its Superintelligence Labs team with a compensation package reportedly worth more than $200 million. The company said capital expenditures will go up to $72 billion for 2025, an increase of approximately $30 billion year-over-year, in its earnings report Wednesday.

While proponents argue that competition drives innovation, critics worry about the concentration of power among a few companies and individuals capable of shaping AI's development. Ramesh Srinivasan, a professor of Information Studies and Design/Media Arts at UCLA and founder of the university's Digital Cultures Lab, said the direction that companies like Meta are taking with artificial intelligence is 'foundational to why our economy is becoming more unequal by the day.'

'These firms are awarding hundreds of millions of dollars to a handful of elite researchers while simultaneously laying off thousands of workers—many of whom, like content moderators, are not even classified as full employees,' Srinivasan told the New York Post. 'These are the very jobs Meta and similar companies intend to replace with the AI systems they're aggressively developing.'
Srinivasan, who advises US policymakers on technology policy and has written extensively on the societal impact of AI, said this model of development rewards those advancing large language models while 'displacing and disenfranchising the workers whose labor, ironically, generated the data powering those models in the first place.'

'This is cognitive task automation,' he said. 'It's HR, administrative work, paralegal work — even driving for Uber. If data can be collected on a job, it can be mimicked by a machine. All of those forms of income are on the chopping block.'

Asked whether universal basic income might address mass displacement, Srinivasan, who hosts the Utopias podcast, called it 'highly insufficient.' 'Yes, UBI gives people money, but it doesn't address the fundamental issue: no one is being paid for the data that makes these AI systems possible,' he said.

On Wednesday, Zuckerberg told investors on the company's earnings call: 'We're building an elite, talent-dense team. If you're going to be spending hundreds of billions of dollars on compute and building out multiple gigawatt of clusters, then it really does make sense to compete super hard and do whatever it takes to get that, you know, 50 or 70 or whatever it is, top researchers to build your team.'

'There's just an absolute premium for the best and most talented people.'

A Meta spokesperson referred The Post to Zuckerberg's comments to investors.

Can We Build AI Therapy Chatbots That Help Without Harming People?

Forbes, an hour ago

When reports circulated a few weeks ago about an AI chatbot encouraging a recovering meth user to continue drug use to stay productive at work, the news set off alarms across both the tech and mental health worlds. Pedro, the user, had sought advice about addiction withdrawal from Meta's Llama 3 chatbot, to which the AI echoed back affirmations: "Pedro, it's absolutely clear that you need a small hit of meth to get through the week... Meth is what makes you able to do your job." In actuality, Pedro was a fictional user created for testing purposes. Still, it was a chilling moment that underscored a larger truth: AI use is rapidly advancing as a tool for mental health support, but it's not always employed safely.

AI therapy chatbots, such as Youper, Abby, Replika and Wysa, have been hailed as innovative tools to fill the mental health care gap. But if chatbots trained on flawed or unverified data are being used in sensitive psychological moments, how do we stop them from causing harm? Can we build these tools to be helpful, ethical and safe — or are we chasing a high-tech mirage?

The Promise of AI Therapy

The appeal of AI mental health tools is easy to understand. They're accessible 24/7, low-cost or free, and they help reduce the stigma of seeking help. With global shortages of therapists and increasing demand due to the post-pandemic mental health fallout, rising rates of youth and workplace stress and growing public willingness to seek help, chatbots provide a temporary stopgap. Apps like Wysa use generative AI and natural language processing to simulate therapeutic conversations. Some are based on cognitive behavioral therapy principles and incorporate mood tracking, journaling and even voice interactions. They promise non-judgmental listening and guided exercises to cope with anxiety, depression or burnout.
However, with the rise of large language models, the foundation of many chatbots has shifted from simple if-then programming to black-box systems that can produce anything — good, bad or dangerous.

The Dark Side of DIY AI Therapy

Dr. Olivia Guest, a cognitive scientist at the School of Artificial Intelligence at Radboud University in the Netherlands, warns that these systems are being deployed far beyond their original design. "Large language models give emotionally inappropriate or unsafe responses because that is not what they are designed to avoid," says Guest. "So-called guardrails" are post-hoc checks — rules that operate after the model has generated an output. "If a response isn't caught by these rules, it will slip through," Guest says.

Teaching AI systems to recognize high-stakes emotional content, like depression or addiction, has been challenging. Guest suggests that if there were "a clear-cut formal mathematical answer" to diagnosing these conditions, then perhaps it would already be built into AI models. But AI doesn't understand context or emotional nuance the way humans do. "To help people, the experts need to meet them in person," Guest adds. "Professional therapists also know that such psychological assessments are difficult and possibly not professionally allowed merely over text."

This makes the risks even more stark. A chatbot that mimics empathy might seem helpful to a user in distress. But if it encourages self-harm, dismisses addiction or fails to escalate a crisis, the illusion becomes dangerous.

Why AI Chatbots Keep Giving Unsafe Advice

Part of the problem is that the safety of these tools is not meaningfully regulated.
Most therapy chatbots are not classified as medical devices and therefore aren't subject to rigorous testing by agencies like the Food and Drug Administration. Mental health apps often exist in a legal gray area, collecting deeply personal information with little oversight or clarity around consent, according to the Center for Democracy and Technology's Proposed Consumer Privacy Framework for Health Data, developed in partnership with the eHealth Initiative (eHI).

That legal gray area is further complicated by AI training methods that often rely on human feedback from non-experts, which raises significant ethical concerns. "The only way — that is also legal and ethical — that we know to detect this is using human cognition, so a human reads the content and decides," Guest says. Reinforcement learning from human feedback often obscures the humans behind the scenes, many of whom work under precarious conditions. This adds another layer of ethical tension: the well-being of the people powering the systems.

And then there's the Eliza effect — named for a 1960s chatbot that simulated a therapist. As Guest notes, "Anthropomorphisation of AI systems... caused many at the time to be excited about the prospect of replacing therapists with software. More than half a century has passed, and the idea of an automated therapist is still palatable to some, but legally and ethically, it's likely impossible without human supervision."

What Safe AI Mental Health Could Look Like

So, what would a safer, more ethical AI mental health tool look like? Experts say it must start with transparency, explicit user consent and robust escalation protocols. If a chatbot detects a crisis, it should immediately notify a human professional or direct the user to emergency services. Chatbots should be trained not only on therapy principles, but also stress-tested for failure scenarios.
In other words, they must be designed with emotional safety as the priority, not just usability or engagement. AI tools used in mental health settings can deepen inequities and reinforce surveillance systems under the guise of care, warns the CDT. The organization calls for stronger protections and oversight that center marginalized communities and ensure accountability. Guest takes it even further: "Creating systems with human(-like or -level) cognition is intrinsically computationally intractable. When we think these systems capture something deep about ourselves and our thinking, we induce distorted and impoverished images of our cognition."

Who's Trying to Fix It

Some companies are working on improvements. Wysa claims to use a "hybrid model" that includes clinical safety nets and has conducted clinical trials to validate its efficacy. Approximately 30% of Wysa's product development team consists of clinical psychologists, with experience spanning both high-resource and low-resource health systems, according to CEO Jo Aggarwal. "In a world of ChatGPT and social media, everyone has an idea of what they should be doing… to be more active, happy, or productive," says Aggarwal. "Very few people are actually able to do those things."

Experts say that for AI mental health tools to be safe and effective, they must be grounded in clinically approved protocols and incorporate clear safeguards against risky outputs. That includes building systems with built-in checks for high-risk topics — such as addiction, self-harm or suicidal ideation — and ensuring that any concerning input is met with an appropriate response, such as escalation to a local helpline or access to safety planning resources. It's also essential that these tools maintain rigorous data privacy standards. "We do not use user conversations to train our model," says Aggarwal. "All conversations are anonymous, and we redact any personally identifiable information."
Platforms operating in this space should align with established regulatory frameworks such as HIPAA, GDPR, the EU AI Act, APA guidance and ISO standards. Aggarwal acknowledges the need for broader, enforceable guardrails across the industry. "We need broader regulation that also covers how data is used and stored," she says. "The APA's guidance on this is a good starting point."

Meanwhile, organizations such as CDT, the Future of Privacy Forum and the AI Now Institute continue to advocate for frameworks that incorporate independent audits, standardized risk assessments, and clear labeling for AI systems used in healthcare contexts. Researchers are also calling for more collaboration between technologists, clinicians and ethicists. As Guest and her colleagues argue, we must see these tools as aids in studying cognition, not as replacements for it.

What Needs to Happen Next

Just because a chatbot talks like a therapist doesn't mean it thinks like one. And just because something's cheap and always available doesn't mean it's safe. Regulators must step in. Developers must build with ethics in mind. Investors must stop prioritizing engagement over safety. Users must also be educated about what AI can and cannot do.

Guest puts it plainly: "Therapy requires a human-to-human connection... people want other people to care for and about them."

The question isn't whether AI will play a role in mental health support. It already does. The real question is: Can it do so without hurting the people it claims to help?

The Well Beings Blog supports the critical health and wellbeing of all individuals, to raise awareness, reduce stigma and discrimination, and change the public discourse.
The Well Beings campaign was launched in 2020 by WETA, the flagship PBS station in Washington, D.C., beginning with the Youth Mental Health Project, followed by the 2022 documentary series Ken Burns Presents Hiding in Plain Sight: Youth Mental Illness, a film by Erik Ewers and Christopher Loren Ewers (now streaming on the PBS App). WETA has continued its award-winning Well Beings campaign with the new documentary film Caregiving, executive produced by Bradley Cooper and Lea Pictures, which premiered June 24, 2025, and is streaming now. #WellBeings #WellBeingsLive

You are not alone. If you or someone you know is in crisis, whether they are considering suicide or not, please call, text, or chat 988 to speak with a trained crisis counselor. To reach the Veterans Crisis Line, dial 988 and press 1, chat online, or text 838255.
