FDA issues highest-level recall for Alma Pak organic blueberries over Listeria contamination

Time of India · a day ago
The FDA has issued a Class I recall for Alma Pak International LLC's organic blueberries after Listeria monocytogenes was detected. This recall affects 400 boxes distributed to North Carolina. Listeria infection poses a severe risk, especially to vulnerable populations, potentially leading to serious illness or death. Consumers are urged to check lot numbers and seek medical attention if symptoms arise.
What's affected?
Product: Organic blueberries from Alma Pak International LLC (Georgia-based)
Quantity: 400 boxes, each weighing 30 pounds (totaling 12,000 pounds)
Lot Numbers: 13325 G1060 and 13325 G1096
Distribution: Shipped to a single customer in North Carolina
Why the urgency?
The U.S. Food and Drug Administration (FDA) has issued a Class I recall—the highest possible risk level—for 400 boxes of organic blueberries distributed by Alma Pak International LLC, following the detection of Listeria monocytogenes during routine testing. The recall, first initiated on June 9, 2025, was elevated to Class I status on July 1, signaling a 'reasonable probability' that consuming the affected product could lead to serious health consequences or even death, according to the FDA's official definition.

Listeria monocytogenes is a dangerous bacterium that can contaminate various foods. Infection, known as listeriosis, is particularly hazardous for newborns, adults over 65, and individuals with weakened immune systems. Symptoms range from fever, muscle aches, nausea, vomiting, and diarrhea in milder cases to severe complications such as headaches, neck stiffness, confusion, loss of coordination, and seizures in more vulnerable individuals. Listeria infection is the third leading cause of death from foodborne illness in the U.S., causing around 260 deaths annually.

The Class I designation underscores the seriousness of the contamination: the FDA warns that exposure to the tainted blueberries could result in life-threatening illness. Consumers are urged to:
Check their blueberries for the affected lot numbers
Avoid consuming any recalled product
Contact the retailer for a refund
Seek medical attention if symptoms of listeriosis develop after consumption

Company response and background
Alma Pak International LLC has voluntarily recalled the product and is cooperating with authorities. Notably, this is not the company's first recall—previous incidents have included recalls of other frozen berry products over contamination concerns. The recall also comes amid a broader trend of increased food recalls in 2024 and 2025, with a notable rise in hospitalizations and deaths linked to foodborne pathogens.

If you have purchased Alma Pak organic blueberries, check your packaging immediately and follow FDA guidance to ensure your safety.

Related Articles

Review held to enhance administrative efficiency at Kurnool GGH

Hans India · an hour ago

Kurnool: A comprehensive review meeting was held at the Kurnool Government General Hospital on Thursday under the supervision of Hospital Superintendent Dr K Venkateswarlu. The session focused on the performance and responsibilities of the ministerial staff and hospital personnel. Key topics included punctuality, section-wise performance assessments, adherence to the Face Recognizing System (FRS), and the delivery of timely administrative and patient-related services.

Dr Venkateswarlu conducted an in-depth evaluation of the ministerial staff's functioning, analysing their roles across various sections. He also scrutinised the duties of fourth-class employees and voiced displeasure regarding lapses in their responsibilities. Strict instructions were issued mandating the use of ID cards, compliance with dress code regulations, proper FRS-based attendance, and strict punctuality. The Superintendent warned that any violations in these areas would result in departmental disciplinary action.

Further, the Medical Records and Transcription (MRT) section was directed to ensure the prompt issuance of reports, death certificates, and birth certificates to avoid inconvenience to patients. Emphasis was placed on attending to public grievances without delay and avoiding any negligence in service delivery. All hospital staff were instructed to remain accessible during their designated duty hours and to maintain proper documentation of files and records.

The Superintendent also reviewed seat allotments within the ministerial staff and announced changes aimed at improving workflow. Deputy CS RMO Dr Padmaja, Hospital Administrator Sindhu Subrahmanyam, Administrative Officer Srinivasulu, and other hospital personnel attended the meeting.

Content moderators for Big Tech unite to tackle mental trauma

Time of India · an hour ago

Content moderators from the Philippines to Turkey are uniting to push for greater mental health support to help them cope with the psychological effects of exposure to a rising tide of disturbing images online. The people tasked with removing harmful content for tech giants like Meta Platforms or TikTok report a range of noxious health effects, from loss of appetite to anxiety and suicidal thoughts. "Before I would sleep seven hours," said one Filipino content moderator who asked to remain anonymous to avoid problems with their employer. "Now I only sleep around four hours."

Workers are gagged by non-disclosure agreements with the tech platforms or companies that do the outsourced work, meaning they cannot discuss exact details of the content they are seeing. But videos of people being burned alive by the Islamic State, babies dying in Gaza and gruesome pictures from the Air India crash in June were given as examples by moderators who spoke to the Thomson Reuters Foundation.

Social media companies, which often outsource content moderation to third parties, are facing increasing pressure to address the emotional toll of moderation. Meta, which owns Facebook, WhatsApp and Instagram, has already been hit with workers' rights lawsuits in Kenya and Ghana, and in 2020 the firm paid a $52 million settlement to American content moderators suffering long-term mental health issues.

The Global Trade Union Alliance of Content Moderators was launched in Nairobi in April to establish worker protections for what they dub "a 21st century hazardous job", similar to the work of emergency responders. Their first demand is for tech companies to adopt mental health protocols, such as exposure limits and trauma training, in their supply chains. "They say we're the ones protecting the internet, keeping kids safe online," the Filipino worker said. "But we are not protected enough."

Scrolling trauma
Globally, tens of thousands of content moderators spend up to 10 hours a day scrolling through social media posts to remove harmful content, and the mental toll is well-documented. "I've had bad dreams because of the graphic content, and I'm smoking more, losing focus," said Berfin Sirin Tunc, a content moderator for TikTok in Turkey employed via Canadian-based tech company Telus, which also does work for Meta. In a video call with the Thomson Reuters Foundation, she said the first time she saw graphic content as part of her job she had to leave the room and go home.

While some employers do provide psychological support, some workers say it is just for show, with advice to count numbers or do breathing exercises. Therapy is limited to either group sessions or a recommendation to switch off for a certain number of "wellness break" minutes. But taking those breaks is another thing. "If you don't go back to the computer, your team leader will ask where are you and (say) that the queue of videos is growing," said Tunc. "Bosses see us just as machines."

In emailed statements to the Thomson Reuters Foundation, Telus and Meta said the well-being of their employees is a top priority and that employees should have access to 24/7 healthcare support.

Rising pressure
Moderators have seen an uptick in violent videos. A report by Meta for the first quarter of 2025 showed a rise in the sharing of violent content on Facebook, after the company changed its content moderation policies in a commitment to "free expression." However, Telus said in its emailed response that internal estimates show that distressing material represents less than 5% of the total content reviewed.

Adding to the pressure on moderators is a fear of losing jobs as companies shift towards artificial intelligence-powered moderation. Meta, which invested billions and hired thousands of content moderators globally over the years to police extreme content, scrapped its US fact-checking programme in January, following the election of Donald Trump. In April, 2,000 Barcelona-based workers were sent home after Meta severed a contract with Telus. A Meta spokesperson said the company has moved the services that were being performed from Barcelona to other locations.

"I'm waiting for Telus to fire me," said Tunc, "because they fired my friends from our union." Fifteen workers in Turkey are suing the company after being dismissed, they say, for organising a union and attending protests this year. A spokesperson for Telus said in an emailed response that the company "respects the rights of workers to organise". Telus said a May report by Turkey's Ministry of Labour found contract terminations were based on performance and it could not be concluded that the terminations were union-related. The Labour Ministry did not immediately respond to a request for comment.

Protection protocols
Moderators in low-income countries say that the low wages, productivity pressure and inadequate mental health support can be remedied if companies sign up to the Global Alliance's eight protocols. These include limiting exposure time, setting realistic quotas and providing 24/7 counselling, as well as living wages, mental health training and the right to join a union. Telus said in its statement that it was already in compliance with the demands, and Meta said it conducts audits to check that companies are providing required on-site support.

New European Union rules, such as the Digital Services Act, the AI Act and supply chain regulations which demand tech companies address risks to workers, should give stronger legal grounds to protect content moderators' rights, according to labour experts. "Bad things are happening in the world. Someone has to do this job and protect social media," said Tunc. "With better conditions, we can do this better. If you feel like a human, you can work like a human."

Content moderators for Big Tech unite to tackle mental trauma

The Hindu · 2 hours ago

