Vancouver-based Singular Hearing will unveil HeardThat, an AI-powered app that uses machine learning to turn a smartphone into a hearing assistant, in the Eureka Park area of CES 2020 in January.
The app can tune out background noise so that those with hearing loss can hear speech more clearly and thus engage better in conversations.
"Often the first step in helping people with a hearing problem is an in-ear hearing aid," says Bruce Sharpe, Founder and CEO of Singular Hearing. "However, the weakness of even the most sophisticated hearing aids is the challenge of separating speech from background noise. Hearing aids tend to amplify all sound, making it difficult to have one-on-one or group conversations in a noisy environment. It can be frustrating enough that a person with hearing loss may avoid a social outing or public place altogether.
"Machine learning gives us the unique power and flexibility to solve this long-standing problem," he continues. "We are passionate about putting it to use through HeardThat and providing new options for the millions of families, friends, and colleagues who suffer from hearing loss."
HeardThat uses advanced machine learning algorithms to separate speech from noise. It listens to the noisy environment and delivers denoised speech to the individual's Bluetooth-enabled hearing aid or other listening device via their smartphone.
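The company has not published its algorithm, but the general idea of separating speech from noise can be illustrated with a classical baseline: spectral subtraction, which estimates the noise's magnitude spectrum and subtracts it from each frame of the noisy signal before resynthesis. The sketch below is a deliberately simplified illustration of that concept, not HeardThat's actual method; the frame size and the assumption of a stationary noise estimate are ours.

```python
import numpy as np

def spectral_subtract(noisy, noise_estimate, frame=256):
    """Toy spectral subtraction: estimate the noise magnitude spectrum
    from a noise-only snippet, then subtract it from each frame of the
    noisy signal and resynthesize using the noisy phase.

    This is a textbook baseline for illustration only, far simpler than
    a learned speech-separation model.
    """
    out = np.zeros_like(noisy)
    # Magnitude spectrum of one noise-only frame serves as the noise floor.
    noise_mag = np.abs(np.fft.rfft(noise_estimate[:frame]))
    for start in range(0, len(noisy) - frame + 1, frame):
        spec = np.fft.rfft(noisy[start:start + frame])
        # Subtract the estimated noise floor, clipping negatives to zero.
        mag = np.maximum(np.abs(spec) - noise_mag, 0.0)
        phase = np.angle(spec)
        out[start:start + frame] = np.fft.irfft(mag * np.exp(1j * phase), n=frame)
    return out

# Hypothetical usage: a 440 Hz tone buried in white noise.
rng = np.random.default_rng(0)
t = np.arange(4096) / 16000.0
clean = np.sin(2 * np.pi * 440 * t)
noise = 0.5 * rng.standard_normal(4096)
denoised = spectral_subtract(clean + noise, noise)
```

A production system like the one described here would instead use a trained neural network, which handles non-stationary noise (chatter, clinking dishes) that this fixed noise-floor estimate cannot.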
"Machine learning algorithms require too much processing power to run on hearing aids or other small devices," explains Sharpe. "By leveraging the smartphone, our HeardThat App is freed from hardware constraints and so can do much more. And because it is an agile and flexible software solution, HeardThat can be quickly and continually improved upon."
HeardThat will be available in Q1 2020 on both Android and iOS. The company will demonstrate the technology at its CES booth, Eureka Park 31504.