
Latest news with #hearingAids

Noise Is The New Secondhand Smoke

Forbes

11-06-2025



Summer is here, and so is the seasonal surge in sound: urban noise, social noise, and even the quiet missing from our homes and workplaces. But noise is not just a nuisance. Increasingly, it is a measurable health and business risk. This June series explores how we are rethinking noise, not just through avoidance but through innovation. From ear protection to haptic sound and emerging wellness experiences, a new market is emerging. Call it the Noise Economy. It is louder, more disruptive, and more threatening to our health than we realize.

Noise is everywhere in my world. As someone who has worn hearing aids for most of my life, I am acutely aware of sound and noise. It is the constant companion I did not invite. Background noise has quietly crept up in volume and impact in restaurants, airports, stores, on the street, and even in my home.

And I am not alone. Spend time with younger cohorts today, and you will see that noise is becoming their normal, too. A generation raised on personal digital devices and open-office culture now moves through the day surrounded by an array of auditory inputs: alerts, conversations, video conferences, background noise, and personal audio streams. Many attempt to tune out one layer of noise by adding another. The body still takes it in, and the mind still fatigues.

Twenty years ago, noise looked different. If you worked in a corporate office, chances are you had walls, a door, or a cubicle providing acoustic separation. Many meetings took place face-to-face or over the phone, not on video. Background chatter was limited. When you left the office, your auditory environment changed again; perhaps the street was noisy, but your home was primarily a place of quiet. Life provided more moments of auditory relief. Today, that relief is harder to find. The way we carry sound has evolved alongside this rise in background noise.
The original Sony Walkman, launched in 1979, gave people their first taste of portable music. It was a dedicated device used with intent. In 2001, Apple launched the iPod, making it possible to carry an entire music library in your pocket. Microsoft introduced the Zune in 2006, bringing its vision of portable digital music to market. Then came the iPhone and a wave of Android devices, collapsing music, communication, and constant connectivity into a single screen and pair of earbuds. Now, for many people, audio is an always-on layer of life. We are surrounded by noise and often add more of it ourselves.

Today, we live in an always-on auditory environment. Devices chirp, alerts ping, and voices echo across open-plan spaces. In urban environments, construction noise is no longer confined to daytime hours. The piercing sirens of police, fire, and emergency vehicles add another layer of stress. Restaurants have long been noisy, and in many ways they remain unchanged; today, that experience is layered on top of an already noisy lifestyle. Many now intentionally amplify the buzz through background music, believing that more noise equals more energy and revenue. Yet for customers and staff alike, it often leads to the opposite: auditory fatigue and disengagement. At home, HVAC systems hum and appliances chime. Even wellness spaces, meant to calm us, often rely on background music and brand-driven sound.

But here is what is missing from the conversation. Noise is no longer just about how loud it is. It is about how much our brains must process to navigate modern life. The cognitive load of unmanaged sound is becoming one of our time's least discussed health and productivity challenges. Humans evolved in environments where sound signaled something important. Now, we live in a world of meaningless noise, forcing our brains to sort through an endless stream of irrelevant sound.
Every notification that pulls your attention, every video meeting layered with background chatter, and every conversation forced through a wall of ambient noise adds to the load. That constant filtering burns energy, creates stress, and weakens focus and clarity. Over time, it can trigger fatigue, anxiety, and even cardiovascular strain. The World Health Organization classifies noise pollution as Europe's second most significant environmental health threat after air pollution. In the United States, the CDC links chronic noise exposure to sleep disruption, hypertension, and impaired cognitive performance. For employers, this translates to rising workplace fatigue, decreased productivity, more frequent errors, and higher health-related costs. Yet in most organizations, noise remains an unexamined variable. As leaders examine it, they will find that unmanaged noise carries real costs and clear opportunities for those who act first.

For businesses, unmanaged noise is no longer just an operational annoyance. It risks customer experience, employee well-being, and brand value. Leaders who understand this are beginning to gain an edge, and those who ignore it risk falling behind. These impacts are not hypothetical; the data is mounting, and it tells a story leaders can no longer afford to overlook. Noise is not just affecting personal well-being. It is shaping customer choices and workforce dynamics in measurable ways.

Akoio partnered with Chute Gerdeman on the Auditory Experience Will Shape the Future of Retail report. It found that many stores peaked above 80 dB, hindering shoppers, staff, and internal communications, even in luxury environments. Supporting that, Quiet Mark's 2023 UK National Noise Report found that 84 percent of respondents across home, workplace, and hospitality settings consider it essential to have quiet moments.
Quiet Mark's 2022 United States study revealed that 68 percent of Americans factor workplace noise levels into their job decisions. In short, noise is not an abstract issue. It is influencing real business outcomes today.

Every organization has an opportunity to rethink how it manages auditory health, and leaders ready to take action have a set of first questions to ask. The answers will shape well-being, brand loyalty, workforce resilience, and competitive advantage.

So, where do we go from here? That is where the opportunity lies. We are witnessing the emergence of what I call the Noise Economy: an ecosystem of products and experiences that help people manage noise and improve auditory wellness, spanning a range of categories. This is no longer a niche. Growth is fueled by aging populations and by younger consumers prioritizing sensory health and mental well-being.

Over the next few weeks, I will explore these categories in depth, highlighting innovators, opportunities, and what businesses need to know. If your company has not started managing noise as part of its workplace or customer experience strategy, now is the time. For the next generation of customers, employees, and communities, how companies manage sound may prove as critical as how they manage air and light. In upcoming articles, we will begin to understand how to counteract noise both by mitigating it and by using sound to support our auditory health.

Solution to 'cocktail party problem' could help people with hearing loss

Yahoo

10-05-2025



Have you ever struggled to pick out your friend's voice over other conversations in a crowded room? Scientists call this challenge the "cocktail party problem," and it can be especially difficult for people with hearing loss. Most hearing aids come with directional filters that help users focus on sounds in front of them. They're best at reducing static background noise, but they falter in more complex acoustic scenarios, such as when the user is among cocktail-party guests who are standing close together and speaking at a similar volume.

Now, a new algorithm could improve how hearing aids tackle the cocktail party problem. The model, dubbed the "biologically oriented sound segregation algorithm" (BOSSA), draws inspiration from the brain's auditory system, which uses inputs from both ears to locate the source of a noise and can filter out sound by location. Alexander Boyd, a doctoral student in biomedical engineering at Boston University, compared directional filters and BOSSA to flashlights, in that they highlight what is in their path. "BOSSA is a new flashlight that has a tighter beam that's more selective," he told Live Science. Compared with the standard filters, BOSSA should be better at distinguishing between speakers, though it still needs to be tested in real-world scenarios with proper hearing aids.

Boyd led a recent lab test of BOSSA, whose results were published April 22 in the journal Communications Engineering. In the experiment, participants with hearing loss donned headphones playing audio designed to simulate five people speaking simultaneously and from different angles around the listener.
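The directional filtering described above is, at its core, a form of beamforming: combining two or more microphone signals so that sound from the look direction adds coherently while off-axis sound partially cancels. The sketch below is a toy two-microphone delay-and-sum illustration of that principle only, not the processing any actual hearing aid ships; all names, signals, and parameters are invented for the demo.

```python
import numpy as np

def delay_and_sum(channels, steer_delays):
    """Minimal delay-and-sum beamformer.

    Each microphone channel is advanced by its steering delay (in
    samples), then the channels are averaged: sound from the look
    direction adds coherently, off-axis sound adds incoherently and
    is attenuated. (np.roll wraps around; fine for a short demo.)
    """
    shifted = [np.roll(ch, -d) for ch, d in zip(channels, steer_delays)]
    return np.mean(shifted, axis=0)

# Synthetic demo: a speech-band tone from the front (arrives at both
# mics simultaneously) plus broadband noise from the side (reaches the
# second mic 10 samples later).
rng = np.random.default_rng(0)
n = 8_000
t = np.arange(n) / 16_000
target = 0.5 * np.sin(2 * np.pi * 300 * t)   # front source
noise = rng.standard_normal(n)               # side source
mic1 = target + noise
mic2 = target + np.roll(noise, 10)

# Steering toward the front (zero delays) keeps the target intact
# while the decorrelated noise copies partially cancel (~3 dB gain).
beam = delay_and_sum([mic1, mic2], [0, 0])
residual_noise = beam - target
```

In this idealized setup the residual noise power drops to roughly half the single-mic noise power while the frontal target passes through unchanged, which is the effect the article's "flashlight" analogy describes.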
The audio was filtered through either BOSSA or a more traditional hearing-aid algorithm, and the participants compared both filters to how they heard the audio without additional processing. In each trial, participants were asked to follow sentences spoken by one of the five speakers. The volume of the "target speaker" relative to the other speakers varied between trials. When the target speaker was standing within 30 degrees of the listener in either direction, the participants could make out a greater proportion of words at a lower volume threshold with BOSSA than with the conventional algorithm or when unassisted. The conventional algorithm did seem to serve users better than BOSSA in distinguishing speech from static noise; however, this was tested in only four of the eight participants.

The standard algorithm reduces distracting sounds by boosting the signal-to-noise ratio for sounds coming from a given direction. By comparison, BOSSA transforms sound waves into spikes of input that the algorithm can process, similar to how the cochlea in the inner ear converts vibrations from sound waves into signals transmitted by neurons. The algorithm emulates how specialized cells in the midbrain, the uppermost portion of the brainstem that connects the brain and spinal cord, respond selectively to sounds coming from a given direction. These spatially tuned cells judge direction based on differences in the timing and volume of sound inputs to each ear. Boyd said this aspect of BOSSA drew from studies of the midbrain in barn owls, which have sophisticated spatial sensing abilities because they rely on sound cues to locate prey. The BOSSA-filtered signals are then reconstructed into sound for the listener.

BOSSA is modeled on the nervous system's "bottom-up" attention pathway, which gathers bits of sensory information that are then interpreted by the brain. These sensory inputs govern which aspects of the environment warrant focus and which can be ignored.
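The timing cue those spatially tuned midbrain cells exploit is the interaural time difference (ITD): the same sound reaches the nearer ear a fraction of a millisecond before the farther one. A classic engineering way to recover that cue is to cross-correlate the two ear signals, as in the toy sketch below. This illustrates the principle only; the function and signal names are made up, and BOSSA's actual spike-based processing is considerably more involved.

```python
import numpy as np

def estimate_itd(left, right, fs):
    """Estimate the interaural time difference in seconds.

    Cross-correlates the two ear signals and returns the lag of the
    correlation peak. A positive result means the sound reached the
    left ear first, i.e. the source is off to the listener's left.
    """
    corr = np.correlate(right, left, mode="full")
    lag = int(np.argmax(corr)) - (len(left) - 1)
    return lag / fs

# Synthetic demo: a 500 Hz tone that arrives at the left ear
# 20 samples (~0.45 ms at 44.1 kHz) before the right ear.
fs = 44_100
t = np.arange(0, 0.05, 1 / fs)
tone = np.sin(2 * np.pi * 500 * t)
delay = 20
left = np.concatenate([tone, np.zeros(delay)])
right = np.concatenate([np.zeros(delay), tone])

itd = estimate_itd(left, right, fs)  # ≈ 20 / 44100 s, source on the left
```

Real heads add level differences and frequency-dependent shadowing on top of this timing cue, which is part of why the biological circuit, and BOSSA's emulation of it, is richer than a single cross-correlation.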
But attention is also dictated by a "top-down" pathway, in which a person's prior knowledge and current goals shape their perception. In this case, an individual can decide what is relevant to focus on. These two modes of processing aren't necessarily mutually exclusive; for instance, your friend's voice might jump out at you both because you recognize it and because they're shouting over the sound of a crowd.

BOSSA's bottom-up approach can help people focus on speech coming from a predetermined location, but in real life, people rapidly shift their attention to different conversations. "You can't do that with this algorithm," said Michael Stone, an audiology researcher at the University of Manchester in the U.K. who was not involved in the new study. Stone added that the study didn't replicate how sounds echo and reverberate in real life, especially in indoor settings.

Still, he said BOSSA could be more practical for hearing aids than algorithms based on deep neural networks, another emerging approach to sound filtering. Deep neural network models need extensive training to be prepared for all the different configurations of speakers the user may encounter, and once implemented, they demand a lot of computational power. BOSSA is simpler by comparison, relying mainly on the spatial difference between two sounds. BOSSA may also be more transparent than the "black box" of deep neural networks, said Fan-Gang Zeng, a professor of otolaryngology at the University of California, Irvine, who was not involved with the research.
That means it would be easier to interpret how sound inputs become algorithmic outputs, perhaps making the model simpler to refine. Zeng added that BOSSA may require further refining as it is studied in more-realistic scenarios. The researchers plan to test BOSSA in proper hearing aids, rather than in headphones, and also hope to develop a steering mechanism to help users direct the algorithm's focus.
