

These AI headphones let you listen to a single person in a crowd or noisy area

With these headphones, all it takes is a brief glance at the desired speaker to isolate their voice.

Mihai Andrei
June 3, 2024 @ 9:19 pm


In the din of a bustling café or a crowded conference, discerning one voice amidst the noise often feels like a superpower. Now, thanks to a groundbreaking innovation from researchers at the University of Washington, we may all have that superpower. Leveraging advanced artificial intelligence, the researchers have developed headphones that allow users to focus on a single speaker in a sea of sound. All it takes is a brief glance at the desired speaker to isolate their voice, effectively silencing all other background noise.

Headphones have come a long way. They were first invented in the 1880s, out of a need to free up a person’s hands when operating the telephone. Modern headphones do essentially the same thing, but are much more sophisticated. They can be wireless, adjust sound levels, and even apply noise cancellation. A team of researchers wanted to take this to the next level — using AI.

The idea is to identify the desired source of sound and then use AI to keep only that source audible. The headphone wearer turns toward whomever they want to listen to, and the headphones “lock on”, continuing to play that voice or sound even if the wearer moves around.

“We tend to think of AI now as web-based chatbots that answer questions,” said senior author Shyam Gollakota, a UW professor in the Paul G. Allen School of Computer Science & Engineering. “But in this project, we develop AI to modify the auditory perception of anyone wearing headphones, given their preferences. With our devices you can now hear a single speaker clearly even if you are in a noisy environment with lots of other people talking.”

Machine learning vocal patterns

The new approach builds on the team’s previous “semantic hearing” research, which allowed users to select specific sound classes that they wanted to cancel. This previous work detected sounds such as birds or specific voices and cancelled them, while leaving others unaffected.

The system works as a sort of real-time training algorithm. The headphones carry an on-board mini-computer running machine learning software. The wearer turns toward the sound source, and the headphones pick up that source (with a 16-degree margin of error). After an enrollment period of only a few seconds, the “target speech hearing” mode kicks in and plays just the targeted speaker’s voice, even as the listener moves around. Performance also improves over time, as the system gathers more training data from the speaker’s voice.
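The look-to-enroll, then filter-everywhere idea can be illustrated with a toy sketch. This is not the UW team’s actual model (which runs a neural network on binaural audio); the function names, the direction-gated enrollment, and the cosine-similarity filter below are all invented stand-ins for illustration only.

```python
import numpy as np

def enroll_speaker(frames, look_angle, tolerance=16.0):
    """Build a crude 'voice embedding' by averaging audio frames whose
    estimated direction of arrival falls within +/- tolerance degrees of
    where the wearer is looking (mirroring the 16-degree enrollment beam
    mentioned in the article)."""
    selected = [frame for angle, frame in frames
                if abs(angle - look_angle) <= tolerance]
    if not selected:
        raise ValueError("no frames within the enrollment beam")
    return np.mean(selected, axis=0)

def target_speech_filter(mixture_frames, embedding, threshold=0.5):
    """Keep frames that resemble the enrolled embedding (cosine
    similarity), muting everything else. Direction no longer matters:
    once enrolled, the target can move around."""
    filtered = []
    for frame in mixture_frames:
        sim = np.dot(frame, embedding) / (
            np.linalg.norm(frame) * np.linalg.norm(embedding) + 1e-9)
        filtered.append(frame if sim >= threshold else np.zeros_like(frame))
    return filtered
```

The two-stage structure is the point: enrollment uses direction once, after which the filter relies only on the learned voice signature, which is why the real system keeps working as the listener moves.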

The team tested the system on 21 subjects, who were asked to rate how well they could hear the voice before and after filtering; all reported major improvements. On average, the clarity of the filtered speaker’s voice was rated nearly twice as high as the unfiltered audio.

“Our user studies demonstrate generalization to real-world static and mobile speakers in previously unseen indoor and outdoor multipath environments. Finally, our enrollment interface for noisy examples does not cause performance degradation compared to clean examples, while being convenient and user-friendly. Taking a step back, this paper takes an important step towards enhancing the human auditory perception with artificial intelligence,” the researchers conclude.

Some limitations to work out

The system has applications in various fields. For individuals with hearing impairment, these AI-powered headphones could offer a significant improvement in their ability to communicate and engage in social settings. In professional environments, where clear communication is crucial, such technology could enhance productivity and reduce misunderstandings. Moreover, for anyone who has struggled to hold a conversation in a noisy café or during a bustling conference, these headphones represent a transformative leap in auditory technology.

But there are still some things to sort out.

The system is promising, but it can only work with a single speaker at a time. With multiple speakers, and especially multiple speakers in the same direction, the system can have difficulty locking on. The user can run another enrollment to try to improve clarity, but there are still instances when it won’t work properly. The team is also working to integrate the system into a less bulky form factor, such as earbuds or hearing aids.

The team also released the code for the proof-of-concept device, making it available for others to build on. The system is not commercially available yet, but the open code should make it much easier for other teams to contribute.

The team presented its findings May 14 in Honolulu at the ACM CHI Conference on Human Factors in Computing Systems.

