In the United States one in eight people aged 12 years or older has hearing loss in both ears. The challenges of living with hearing loss are profound and wide-ranging and can affect both personal and work life.
One of the biggest issues with hearing loss is that it is mostly irreversible. The most common form, sensorineural hearing loss, occurs when the delicate hair cells of the inner ear (which are responsible for picking up sound) are damaged or destroyed. Once gone, these sound collectors do not regenerate.
The incurability of sensorineural hearing loss is the reason millions of dollars are spent annually on hearing aid research. After years of work, the hearing aid industry is now pinning its hopes on one technology: artificial intelligence.
Hearing aids have long been an essential tool for people with hearing loss, but the arrival of artificial intelligence (AI) is transforming them into something much more advanced. With AI, hearing aids are becoming smarter, more personalized, and better able to adapt to different environments, offering a new level of hearing assistance that was once unimaginable.
Talking about AI can be confusing, so let's break down some of the key terms:
Artificial intelligence (AI) is a computer technology that makes machines and software able to do things that usually require human intelligence, like understanding language or recognizing images.
Machine learning is a type of AI that lets machines or software learn and improve from experience without being explicitly programmed. Machine learning algorithms can sift through large amounts of information and find patterns that help them make predictions or decisions.
A deep neural network (DNN) is a kind of machine learning algorithm that tries to work like the human brain. It's made up of many layers of small computer programs that work together to solve complex problems, like recognizing faces or translating languages.
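To make the "layers" idea concrete, here is a toy sketch in Python of a two-layer network's forward pass. This is purely illustrative and has nothing to do with any hearing aid's actual DNN: the layer sizes, random weights, and the idea of a four-number "sound feature" input are all invented for the example.

```python
import numpy as np

def relu(x):
    # A simple non-linearity applied after each layer
    return np.maximum(0, x)

def forward(x, layers):
    # Pass the input through each layer's weights and bias in turn;
    # each layer transforms the previous layer's output.
    for weights, bias in layers:
        x = relu(x @ weights + bias)
    return x

rng = np.random.default_rng(0)
# Two layers: 4 inputs -> 8 hidden units -> 2 outputs
layers = [
    (rng.normal(size=(4, 8)), np.zeros(8)),
    (rng.normal(size=(8, 2)), np.zeros(2)),
]
features = rng.normal(size=4)   # an invented tiny vector of "sound features"
output = forward(features, layers)
print(output.shape)
```

In a real system the weights are not random: training adjusts them over many examples until the network's outputs solve the task, which is where the "learning" in machine learning happens.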
The merging of AI technologies and hearing aids presents some amazing possibilities for the millions of people who live with hearing impairment around the world.
Here we feature some of the latest AI hearing aids, and explain how they use the technology to improve the hearing aid experience for the user.
Hearing aid manufacturer Oticon has just introduced a new family of premium hearing aids called Oticon Real. These hearing aids offer wearers a full spectrum of sound with exceptional detail and clarity, while reducing disruptive sounds like wind and handling noise.
Unlike traditional hearing aids, Oticon Real is built with BrainHearing technology: an onboard deep neural network (DNN) trained on 12 million real-life sounds so that each sound is delivered in greater detail. With this technology, wearers experience the sounds of real life without compromising clarity. The hearing aid also includes new detectors that minimize distractions and increase speech clarity in windy situations, reducing listening effort for wearers.
With up to 55 million personalized adjustments available every hour, the Starkey Evolv AI’s always-on, always-automatic approach delivers authentic sound quality in any listening environment without requiring extra effort.
Evolv delivers a 40% reduction in noise energy, providing better sound quality. Additionally, the new Edge Mode feature, activated through the easy-to-use AI system, makes it easier to hear clearly in complex listening environments. Finally, Starkey's TeleHear program provides greater flexibility and customization by allowing virtual consultations with a hearing professional, who can adjust your hearing aids to your needs even while you are at home.
Already making waves, the Evolv AI was recently chosen as an honoree in the accessibility category of the CES Innovation Awards. This award acknowledges exceptional design, engineering, and innovation in consumer electronic devices, with only a handful of winners chosen from thousands of entries worldwide.
Widex Moment is one of the first hearing aids to enhance real-time listening using dual artificial intelligence engines. Widex has produced the first and only hearing aid in the world powered by real-time machine learning: one that can learn how a person needs to hear right in the Moment, helping them personalize their sound experience.
The Widex AI engine analyzes hearing aid users' preferences through the data they provide to SoundSense Learn (an app feature) and quickly delivers a tailor-made sound. The technology collects a range of anonymous data, such as how much the wearer changes the volume, the sound presets they use most often, and the number of custom settings they make. A full, customized listening experience is produced for the wearer based on these on-the-fly preferences.
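The underlying idea of learning from a wearer's preferences can be sketched with a simple comparison loop. To be clear, SoundSense Learn's actual algorithm is proprietary and far more sophisticated; the one-dimensional "gain" setting, the simulated wearer, and every number below are assumptions made only to illustrate the concept of refining a setting from repeated A/B choices.

```python
import random

def preferred(a, b):
    # Stand-in for asking the wearer which of two settings sounds better.
    # Here we simulate a wearer whose ideal (hidden) gain setting is 0.7.
    ideal = 0.7
    return a if abs(a - ideal) < abs(b - ideal) else b

def refine(setting, rounds=20, step=0.2):
    # Repeatedly propose a nearby alternative and keep whichever option
    # the wearer prefers -- a crude A/B comparison loop.
    random.seed(42)  # fixed seed so the sketch is reproducible
    for _ in range(rounds):
        candidate = setting + random.uniform(-step, step)
        setting = preferred(setting, candidate)
    return setting

best = refine(0.0)
print(round(best, 2))
```

Because the loop only ever keeps the option the wearer prefers, the setting drifts toward what sounds best to that individual, without the wearer ever stating their ideal directly.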
The future of AI and hearing aids is bright. From advanced speech recognition in noisy environments to crowd-sourced sound adjustments, it’s clear we have only scratched the surface of what is possible.
Book an appointment with Dr. Kevin Ivory to start hearing better today.