Grief in filters: The digital mask of emotion

Observer | 19-06-2025
Gen Z is known for introducing many new concepts — some worth keeping, others not so much. Over the past few years, the generation has started to treat sadness as an aesthetic: something to play around with rather than confront as a real emotion.
Whether it's losing someone or going through a traumatic event, each emotion carries weight — and turning it into a trend makes those feelings harder to understand.
Instead of dealing with emotions directly, many young people turn to various coping mechanisms. Some are healthy, but others raise concerns. From ironic memes to oversharing on social media, these habits have become common ways to process pain.
Studies suggest that around 45 per cent of young people depend on harmful coping strategies. One of the most common is binge-watching series, which often leads to severe procrastination and distraction from studies or from activities they used to enjoy. Another is oversharing with strangers or online friends when they feel no one else understands, which can expose them to leaks of personal information or even emotional harm.
Gen Z also uses dark humour to mask pain, often without realising that others going through the same thing might not find it funny. The most serious and damaging of all these habits is emotional numbing — thinking that suppressing emotions will stop the pain. But this only leads to endless scrolling, gaming, and surface-level interactions.
On the other hand, a portion of Gen Z is turning to healthier methods. Meditation helps calm the mind, journaling allows for emotional release, and reading gives a chance to relate to characters and better understand one's own feelings.
Social media platforms like Instagram and TikTok often glorify these unhealthy habits. Reels, posts and trauma-dump stories can make sadness look beautiful — and when pain becomes a trend, it is hard to tell who genuinely needs help and who is simply following the aesthetic.
These habits can seriously affect mental health. Many experience anxiety, stress or even depression without realising what is really causing it.
To change this, we need to shift towards better strategies — like opening up to someone we trust, channelling feelings into creative or physical outlets such as art and sports, and seeing therapy as a healthy, not shameful, option.
In conclusion, it's time to stop pretending everything is fine or turning pain into a joke.
Sadness is real, and everyone experiences it. The difference lies in how we cope — and it's up to us to turn aesthetic into awareness.

Related Articles

BIG TECH MODERATORS UNITE TO FIGHT TRAUMA

Observer | 4 days ago

Content moderators from the Philippines to Türkiye are uniting to push for greater mental health support to help them cope with the psychological effects of exposure to a rising tide of disturbing images online.

The people tasked with removing harmful content for tech giants like Meta Platforms or TikTok report a range of noxious health effects, from loss of appetite to anxiety and suicidal thoughts. "Before, I would sleep seven hours," said one Filipino content moderator, who asked to remain anonymous to avoid problems with their employer. "Now I only sleep around four hours."

Workers are gagged by non-disclosure agreements with the tech platforms or the companies that do the outsourced work, meaning they cannot discuss the exact details of the content they are seeing. But moderators cited as examples videos of people being burned alive by IS, babies dying in Gaza and gruesome pictures from the Air India crash in June.

Social media companies, which often outsource content moderation to third parties, are facing increasing pressure to address the emotional toll of the work. Meta, which owns Facebook, WhatsApp and Instagram, has already been hit with workers' rights lawsuits in Kenya and Ghana; in 2020 the firm paid a $52 million settlement to American content moderators suffering long-term mental health issues.

The Global Trade Union Alliance of Content Moderators was launched in Nairobi in April to establish worker protections for what members call a '21st-century hazardous job', akin to the work of emergency responders. Their first demand is for tech companies to adopt mental health protocols, such as exposure limits and trauma training, across their supply chains. "They say we're the ones protecting the internet, keeping kids safe online," the Filipino worker said. "But we are not protected enough."

SCROLLING TRAUMA

Globally, tens of thousands of content moderators spend up to 10 hours a day scrolling through social media posts to remove harmful content, and the mental toll is well documented. "I've had bad dreams because of the graphic content and I'm smoking more, losing focus," said Berfin Sirin Tunc, a content moderator for TikTok in Türkiye employed via the Canadian-based tech company Telus, which also does work for Meta. In a video call, she said that the first time she saw graphic content as part of her job, she had to leave the room and go home.

While some employers do provide psychological support, some workers say it is just for show, amounting to advice to count numbers or do breathing exercises. Therapy is limited to group sessions or a recommendation to switch off for a certain number of 'wellness break' minutes. But taking them is another matter. "If you don't go back to the computer, your team leader will ask where you are and (say) that the queue of videos is growing," said Tunc. "Bosses see us just as machines."

In emailed statements, Telus and Meta said the well-being of their employees is a top priority and that employees should have access to 24/7 healthcare support.

RISING PRESSURE

Moderators have seen an uptick in violent videos. A Meta report for the first quarter of 2025 showed a rise in the sharing of violent content on Facebook after the company changed its content moderation policies in a commitment to 'free expression'. However, Telus said in its emailed response that internal estimates show distressing material represents less than 5 per cent of the total content reviewed.

Adding to the pressure on moderators is the fear of losing their jobs as companies shift towards AI-powered moderation. Meta, which over the years invested billions and hired thousands of content moderators globally to police extreme content, scrapped its US fact-checking programme in January, following the election of Donald Trump. In April, 2,000 Barcelona-based workers were sent home after Meta severed a contract with Telus. A Meta spokesperson said the company has moved the services that were being performed from Barcelona to other locations.

"I'm waiting for Telus to fire me," said Tunc, "because they fired my friends from our union." Fifteen workers in Türkiye are suing the company after being dismissed, they say, for organising a union and attending protests this year. A Telus spokesperson said in an emailed response that the company "respects the rights of workers to organise". Telus said a May report by Türkiye's Ministry of Labour found the contract terminations were based on performance and that it could not be concluded they were union-related. The Labour Ministry did not immediately respond to a request for comment.

PROTECTION PROTOCOLS

Moderators in low-income countries say that low wages, productivity pressure and inadequate mental health support could be remedied if companies signed up to the Global Alliance's eight protocols. These include limiting exposure time, setting realistic quotas and providing 24/7 counselling, as well as living wages, mental health training and the right to join a union. Telus said it was already in compliance with the demands, and Meta said it conducts audits to check that companies are providing the required on-site support.

"Bad things are happening in the world. Someone has to do this job and protect social media," said Tunc. "With better conditions, we can do this better. If you feel like a human, you can work like a human." — Thomson Reuters Foundation

JOANNA GILL
The writer is Europe correspondent for Thomson Reuters Foundation

Matcha: the Japanese tea taking over the world

Observer | 28-06-2025

Matcha is the new drink of choice at hip cafes worldwide, but Japanese producers are struggling to keep up with soaring demand for the powdered green tea. Here's what you need to know about the drink beloved of weekend treat-seekers and "wellness" influencers.

What is matcha?

The word matcha means "ground tea" in Japanese and comes in the form of a vivid green powder that is whisked with hot water and can be added to milk to make a matcha latte. Green tea was introduced to Japan from China in the early ninth century and was first used for medicinal purposes. Matcha came much later, in 16th-century Kyoto, as part of the tea ceremony tradition developed by tea master Sen no Rikyu. Today, there are different grades of matcha quality, from "ceremonial" to "culinary" types used in baking.

How is it produced?

Matcha is made from leaves called "tencha", which are grown in the shade in the final weeks before their harvest to concentrate the flavour, colour and nutrients. This "requires the construction of a complex structure with poles and a roof to filter the light", explained Masahiro Okutomi, a tea producer in Sayama, northwest of Tokyo. Tencha leaves, rich in chlorophyll and in L-theanine, a compound known for its relaxing effects, are hand-picked and deveined, then steamed, dried and ground between two stone mills to produce an ultra-fine powder. It can take up to an hour to produce just 40 grams (1.4 ounces) of matcha, making the powder on average twice as expensive to produce as standard green tea leaves.

What are its benefits?

Many drink matcha for its rich, grass-like taste, but others are drawn to its nutritional properties. It is rich in antioxidants and can aid concentration because of its caffeine content: one cup contains on average 48 milligrams, slightly less than a drip coffee but nearly twice as much as a regular brew of green tea. "Matcha is often seen as being good for your health," said Shigehito Nishikida, manager of Tokyo tea shop Jugetsudo. "But people are also attracted to the Japanese culture around tea: the ritual, the time taken, the aesthetics."

Why is it so popular?

Japan produced 4,176 tonnes of matcha in 2023, a huge increase from the 1,430 tonnes in 2012. More than half of the powder is exported, according to the agriculture ministry, mostly to the United States, Southeast Asia, Europe, Australia and the Middle East. Millions of videos on TikTok, Instagram and YouTube demonstrate how to make photogenic matcha drinks or choose a traditional "chasen" bamboo whisk. "I feel like Gen Z really drove this enthusiasm for matcha, and they heavily relied on social media to do so," Stevie Youssef, a 31-year-old marketing professional, told AFP at a matcha bar in Los Angeles. Matcha can also be used in cooking, extending its appeal beyond tea lovers. "Some customers simply enjoy drinking it, others like preparing it themselves. And of course, many buy it as a gift; Japanese matcha is always appreciated," said Jugetsudo's Nishikida. — AFP

The ethics of using AI to predict patient choices

Observer | 21-06-2025

I recently attended a conference on bioethics in Switzerland, where professionals from different countries met to discuss recent topics in medical ethics, the main theme of this year's gathering. Among the highlights were several talks about the inclusion of Artificial Intelligence in decision-making and its ethical impact.

What caught my attention was a talk about the Personalised Patient Preference Predictor, or P4, a tool that aims to predict an individual patient's healthcare preferences using machine learning. The idea is that in situations where a person is incapacitated — for example, found unconscious with no advance directive — the AI would comb through their digital footprint, including tweets, Instagram and Facebook posts, and possibly even emails, to infer their likely wishes. The system would then create a virtual copy of the individual's personality, known as a 'psychological twin', which would communicate decisions to the medical team on the person's behalf.

While this concept is technologically fascinating, it raises several pressing ethical concerns. First, it assumes that our social media presence accurately reflects our core values and long-term preferences. However, people's views are dynamic and influenced by their emotional state, life experiences and personal growth. A sarcastic tweet or a momentary opinion shared online may not represent someone's actual end-of-life wishes.

Second, the use of AI risks introducing or amplifying bias, especially against the elderly and individuals from ethnic or religious minorities. AI systems often generalise from large datasets, which can lead to 'one-size-fits-all' assumptions that disregard cultural, spiritual or personal nuances.

Another critical question is whether AI can truly understand or navigate the emotional and moral complexity of disagreements among family members and healthcare providers. Would it possess the empathy required to mediate a delicate conversation, or would it deliver cold logic such as: 'Grandpa is too old, his survival chances are low, so resources would be better allocated elsewhere'?

Furthermore, relying on AI for such deeply human decisions risks the deskilling of health professionals. Ethical decision-making is an essential skill developed through experience, reflection and dialogue. If AI takes over these roles, clinicians may gradually lose the ability, or the confidence, to engage in these vital discussions.

The speaker, who advocated for the use of P4, admitted he did not fully understand how the AI makes its decisions. This lack of transparency is alarming. If we are to entrust a machine with life-or-death recommendations, we must first demand clarity and accountability in its design and operation.

In my view, while AI has a growing role in healthcare, ethical decision-making remains a human responsibility. These discussions are often fraught with disagreement, cultural sensitivity and intense emotion, particularly when they involve questions of life and death. We are not yet ready to hand this task over to machines.
