A deep dive into the patents, the chips, and the corporate land grab shaping the future of hearing technology
Every major hearing aid manufacturer now markets “AI” as a headline feature. Starkey calls it Neuro Sound Technology. Oticon says Deep Neural Network. Phonak went with DEEPSONIC. ReSound settled on Intelligent Focus. The branding differs. The underlying race is the same: patent as much of the neural-network-in-a-hearing-aid territory as possible before the other five do.
For clinicians and consumers, the question worth asking is deceptively simple. Who actually owns this technology? Not which company has the slickest trade show demo, but who holds the granted United States patents that define what a hearing aid’s artificial intelligence can legally do, and what it can’t?
The answer, when you pull the patent filings and trace the assignee records, is more complicated and more interesting than any manufacturer’s brochure would suggest.
The six companies (and the corporate structures behind them)
Six companies dominate global hearing aid manufacturing. In patent databases, their names multiply. Starkey files under Starkey Laboratories, Inc. Oticon’s patents belong to Oticon A/S or its parent company Demant A/S. GN Hearing A/S owns the ReSound portfolio. Sonova AG and its subsidiary Phonak AG share the Swiss stable. And then there’s WS Audiology, formed in 2019 when Sivantos (which makes Signia) merged with Widex. Its patents are still filed under both Sivantos Pte. Ltd. and Widex A/S, depending on which legacy team did the inventing.
If you search for “AI hearing aid” in the patent title, you’ll find almost nothing. These companies file under classifications like H04R25/507 (hearing aids with signal processing) and G06N3 (neural networks). The word “artificial intelligence” rarely appears in a patent claim. “Neural network,” “deep learning,” and “environment classification” do the heavy lifting. The marketing department says AI. The patent attorney says trained model with configurable weights.
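The classification-first search strategy can be sketched in a few lines. The records below are hypothetical placeholders (the titles and code assignments are invented for illustration, not pulled from USPTO data); the point is that filtering on CPC prefixes finds filings whose titles never mention AI at all.

```python
# Hypothetical records illustrating why CPC codes find what title keywords miss:
# none of these titles says "AI", but the classification codes do the work.
patents = [
    {"number": "US 8,494,193",
     "title": "Environment detection and adaptation in hearing assistance devices",
     "cpc": ["H04R25/507"]},
    {"number": "US 10,492,008",
     "title": "Hearing device with neural network signal processing",
     "cpc": ["H04R25/507", "G06N3/02"]},
    {"number": "US 9,906,871",
     "title": "Remote fitting system",
     "cpc": ["H04R25/70"]},
]

# Hearing aids with signal processing; neural networks (prefix match
# catches the whole G06N3 subtree).
AI_PREFIXES = ("H04R25/507", "G06N3")

def is_ai_classified(record):
    """True if any CPC code on the record falls under an AI-relevant class."""
    return any(code.startswith(p) for code in record["cpc"] for p in AI_PREFIXES)

hits = [p["number"] for p in patents if is_ai_classified(p)]
# hits == ["US 8,494,193", "US 10,492,008"]
```

A title search for “AI” on the same three records would return nothing.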
Starkey: the broadest portfolio
Starkey Laboratories holds what appears to be the broadest AI-related patent portfolio among the Big Six, at least in the United States. The range is worth noting.
Their foundational patent, US 8,494,193, granted to inventor Sidney A. Higgins, covers a Bayesian environmental classifier. The hearing aid listens to the world around it, categorises the sound scene (speech, noise, music, wind, quiet), and automatically adjusts its processing to match. That patent underpins the automatic program-switching behaviour in every Starkey AI product since Livio. Its continuation chain runs through US 11,337,011 to US 11,722,826 to US 11,863,936 to US 12,273,685, each one extending the classification into nested layers: not just “speech” but speech in noise, speech in wind, your own voice versus someone else’s.
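The mechanics of a Bayesian sound-scene classifier of this general kind can be sketched briefly. This is not Starkey’s algorithm: the two acoustic features, the Gaussian parameters, the priors, and the subset of scene classes below are all invented for illustration. The structure, though, is classic Bayes: pick the class that maximises prior times likelihood.

```python
import math

# Toy per-class Gaussian parameters over two assumed acoustic features
# (feature 0: modulation depth, feature 1: spectral centroid in kHz).
classes = {
    "speech": {"prior": 0.4, "mean": [0.8, 1.5], "var": [0.02, 0.3]},
    "noise":  {"prior": 0.3, "mean": [0.2, 2.5], "var": [0.05, 1.0]},
    "music":  {"prior": 0.2, "mean": [0.5, 3.0], "var": [0.04, 0.8]},
    "quiet":  {"prior": 0.1, "mean": [0.1, 1.0], "var": [0.01, 0.2]},
}

def log_likelihood(x, mean, var):
    """Sum of per-feature Gaussian log-densities (naive independence assumption)."""
    return sum(-0.5 * (math.log(2 * math.pi * v) + (xi - m) ** 2 / v)
               for xi, m, v in zip(x, mean, var))

def classify(features):
    """Highest posterior wins; the shared evidence term cancels and is dropped."""
    return max(classes, key=lambda c: math.log(classes[c]["prior"])
               + log_likelihood(features, classes[c]["mean"], classes[c]["var"]))

classify([0.75, 1.4])  # strongly modulated, speech-like centroid -> "speech"
```

A real hearing aid classifier runs on far more features and far more classes, but the decision rule is the same shape.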
Then there’s the neural-network signal processing line. US 10,492,008 kicked off a chain that runs through US 10,993,051, US 11,553,287, US 11,979,717, and US 12,356,155. These patents describe a deep neural network trained on synthesised speech mixed into babble noise, running either on the hearing aid’s chip or on a connected smartphone. The DNN processes the microphone signal and separates what you want to hear from what you don’t.
Starkey also holds something none of the other five appear to have patented: neural-network-driven frequency translation. The patent family, beginning with US 10,575,103 and continuing through US 11,223,909, US 11,736,870, and US 12,149,890, covers a DNN that reproduces high-frequency speech cues at lower frequencies for people with severe high-frequency hearing loss. Inventors Brian Fitz, Haixin Xu, Tao Zhang, and Issa Abdollahi are named on the filings.
For feedback cancellation, Starkey holds US 11,606,650 and its continuation US 12,483,844, which describe training a neural network to govern the adaptive feedback canceller. US 12,413,916 goes a step further, combining DNN-based speech enhancement and feedback cancellation with non-audio sensor data: accelerometers, heart rate monitors, even blood-oxygen readings. A hearing aid that knows you just climbed a flight of stairs and adjusts accordingly.
At the chip level, US 12,108,219 describes a heterogeneous processing chip with separate compute units for convolutional and recurrent neural network layers, shared memory, and a split bus architecture. This is the patent behind Starkey’s G2 Neuro Processor, the neural processing unit inside Edge AI, launched on 9 October 2024.
On top of all that, US 12,302,084 covers a hearing aid with multiple swappable neural network models that can be refined using usage data and retrained in the cloud. US 12,309,552 describes dynamic neural networks whose complexity changes on the fly when an audio feature detector triggers them. Resource-aware AI, running in your ear canal.
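The trigger-then-scale idea behind dynamic complexity can be sketched as a gating function. Everything here is assumed for illustration: the variant names, the energy threshold, and the trigger rule are not from Starkey’s filings.

```python
def pick_model(frame_energy_db, speech_likelihood, power_budget_ok=True):
    """Choose how much network to run for one audio frame.
    Thresholds and model names are illustrative assumptions."""
    if frame_energy_db < 30:          # near-silence: skip the DNN entirely
        return "bypass"
    if speech_likelihood > 0.7 and power_budget_ok:
        return "full"                 # hard scene: full-complexity model
    return "reduced"                  # default low-power variant
```

The feature detector is cheap and always on; the expensive network only spins up when the detector says the scene warrants it.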
Oticon: the brain-inspired approach
Oticon’s patent strategy takes a different shape. Where Starkey casts a wide net, Oticon has concentrated its claims around one core idea: a deep neural network trained to estimate gain (amplification) in a way that mimics how the brain processes sound.
The key 2024 patent is US 12,075,215, granted on 17 September 2024 to inventors Meng Guo, Anders Meng, and Bernhard Kuenzle. It covers a method and system for improving speech understanding in real-time conversation by processing audio through a neural network embedded in the hearing device. This is the patent behind what Oticon markets as DNN 2.0, the engine inside the Sirius platform powering Oticon Intent.
The earlier work lives in US 11,330,378 and US 11,696,079, both assigned to Oticon A/S. These describe a hearing device containing a modified gated recurrent unit, a type of recurrent neural network that processes audio sequentially, with a clever trick: channel-update sparsification. The network doesn’t recalculate every channel at every time step. It skips the ones that haven’t changed much, saving power. Inventors include Zuzana Jelcicová, Rasmus Jones, David Thorn Blix, Michael Syskind Pedersen, Jesper Jensen, and Asger Heidemann Andersen.
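The sparsification trick can be sketched in a simplified form. This is not Oticon’s implementation: the sizes, weights, gate structure (a real GRU’s gates also depend on the hidden state), and threshold below are all assumptions, but the core move is the same: channels whose update gate says they would barely change keep their old value and their multiply-accumulates are never issued.

```python
import numpy as np

# Tiny illustrative sizes and random toy weights (assumed, not Oticon's design).
rng = np.random.default_rng(0)
H = 8                                    # hidden channels
Wz = rng.standard_normal((H, H)) * 0.1   # update-gate weights
Wh = rng.standard_normal((H, H)) * 0.1   # candidate-state weights

def sparse_gru_step(h, x, threshold=0.55):
    """One simplified recurrent step with channel-update skipping."""
    z = 1 / (1 + np.exp(-(Wz @ x)))      # update gate, one value per channel
    active = z > threshold               # only these channels get recomputed
    h_new = h.copy()                     # skipped channels keep their old state
    cand = np.tanh(Wh[active] @ x)       # candidate state, active rows only
    h_new[active] = (1 - z[active]) * h[active] + z[active] * cand
    return h_new, int((~active).sum())   # new state, and how many MACs we dodged

x = rng.standard_normal(H)
h, skipped = sparse_gru_step(np.zeros(H), x)
```

On a milliwatt power budget, every skipped channel is energy the battery keeps.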
US 11,540,063 and its continuation US 12,143,775 add another layer. These patents describe a hearing device with a feature-vector pre-processor feeding a trained neural network detector. When the detector identifies a particular condition (say, a sudden change in the acoustic environment), it routes the signal to an adaptation mode. When things are stable, it uses normal processing. The hearing aid, in other words, monitors itself.
Oticon Intent, launched in February 2024, added what the company calls 4D Sensor technology: motion sensors that detect whether you’re walking, sitting, turning your head. Demant’s published whitepapers connect these sensor inputs back to the DNN 2.0 patent family, though the specific sensor-fusion patent claims are harder to enumerate from public filings alone.
Sonova and Phonak: the silicon bet
Sonova claims more than 1,900 active granted patents and design rights across the group. In the AI hearing aid space, Sonova has made what looks like the biggest hardware bet: a custom chip called DEEPSONIC.
The numbers Sonova publishes for DEEPSONIC are striking. 4.5 million neural connections. Trained on 22 million sound samples. 7,700 million operations per second. Fifty-three times more processing power than what Sonova describes as “current industry chips.” These figures come from Sonova’s own press releases and annual report for 2024/25. They have not been independently verified by peer-reviewed research, and they should be read as marketing claims until they are.
DEEPSONIC sits alongside Sonova’s ERA platform chip in the Phonak Audéo Sphere Infinio, launched on 6 August 2024. The commercial feature built on this hardware, Spheric Speech Clarity, claims a 10 dB signal-to-noise ratio improvement, against a baseline of 6.4 dB for conventional directional processing. Sonova’s EUHA 2025 presentation, for the Infinio Ultra refresh, referenced “key patented elements” and improvements to AutoSense OS 7.0, their environmental classifier, which they say was trained on 18 times more environments and classifies 24% more precisely than the previous version.
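Decibel figures are logarithmic, so the gap between 10 dB and 6.4 dB is wider than it looks. A minimal conversion to linear power ratios (the two quoted figures are the only inputs):

```python
def db_to_power_ratio(db):
    """Convert a decibel figure to a linear power ratio: 10^(dB/10)."""
    return 10 ** (db / 10)

spheric = db_to_power_ratio(10.0)      # 10 dB  -> 10x signal-to-noise power
directional = db_to_power_ratio(6.4)   # 6.4 dB -> roughly 4.4x
```

Taken at face value, the claimed improvement is more than double the linear SNR of the conventional baseline, which is why the 10 dB figure headlines the marketing.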
Sonova’s specific chip-level patent numbers are harder to pin down than Starkey’s, because Sonova doesn’t publish a consolidated patent list the way Starkey does. But their patent filings do surface some notable claims. One Sonova AG filing covers a system for detecting whether an authentic copy of a deep neural network is running in a device, without reverse engineering the network itself. That’s a defensive patent against grey-market DNN cloning: someone copying Sonova’s trained model into unauthorised hardware. Another covers a neural network that generates a confidence parameter alongside its audio processing output, steering downstream algorithms based on how certain the AI is about its own decision. That’s a rare example of explainability built into a hearing aid patent claim.
On the Phonak AG side, legacy filings cover sound-class-based feature determination and remote support systems (US 9,906,871).
Signia and Widex (WS Audiology): two portfolios, one company
WS Audiology presents an unusual case. The 2019 merger joined two companies with distinct engineering traditions, and their patent portfolios remain separate.
On the Signia side, Sivantos Pte. Ltd. holds US 11,889,268 B2, a patent that specifically describes a hearing aid where the signal processing chain includes an artificial neural network whose topology and weights are selected based on the operation being performed, the ambient situation, and user input. Inventors Thilo von Mansberg, Marco Steffen, and Alexander Menke are named. This is one of the clearest examples in the entire hearing aid patent landscape of a manufacturer claiming an on-board, context-aware neural network architecture. Not a cloud service. Not a smartphone app. A neural network running on the hearing aid itself, changing its own configuration based on what it hears and what the wearer does.
Sivantos also has a published application, US 2021/0195343, by Marc Aubreville and colleagues, covering a two-stage fitting method: a problem-classifier identifies what the user is struggling with, then a solution-classifier (an artificial neural network) proposes parameter adjustments. That’s a fitting workflow where the AI plays audiologist, at least in part.
Going back further, the legacy Sivantos (then operating as Siemens Audiologische Technik) holds US 6,035,050, covering neural-network and fuzzy-logic approaches to determining optimum parameter sets. That patent dates to the late 1990s, one of the earliest instances of neural network claims in hearing aid IP.
Widex, for its part, went to market early with machine learning. The Widex Evoke, launched in 2018, was marketed as the first hearing aid with real-time machine-learning-based personalisation through SoundSense Learn. Widex’s recent granted patents lean more toward signal processing hardware: US 12,143,774, granted 24 December 2024, covers a digital processing unit performing complex multiplication via switch and multiplexer routing. US 12,279,091, granted 15 April 2025, describes a low-latency filter underlying Widex’s PureSound marketing. Widex states it has filed more than 100 patents over its history, though the company does not publish a list that separates the AI-relevant ones.
Homayoun Kamkar Parsi, based in Erlangen, Germany, is publicly identified by WS Audiology as Head of Signal Processing, Algorithmic Research, and Neural Networks. His team’s work feeds into the Signia IX (Integrated Xperience) platform. US 12,445,787, covering an updated hearing device feedback cancellation method, carries a priority date of 4 April 2024, making it one of WS Audiology’s freshest filings.
GN Hearing and ReSound: the quiet builder
GN Hearing A/S announced the ReSound Vivia on 4 February 2025, calling it the world’s smallest AI-powered hearing aid. The headline feature, Intelligent Focus, uses a dedicated DNN chip that GN claims is up to 17 times more efficient than competing DNN-plus-directionality solutions. CTO Brian Dam Pedersen is named as an inventor on multiple GN Hearing audio-processing and antenna patents.
GN’s challenge, from a patent-watching perspective, is visibility. The company holds a substantial portfolio on adaptive feedback cancellation and impulse suppression. Their AI-branded DNN noise reduction patents, if granted, were not surfaced by name in patent database searches for this article. That doesn’t mean they don’t exist; it means GN, like Sonova, keeps its AI patent numbers close. A direct USPTO assignee search on “GN Hearing A/S” combined with neural network classifications would likely turn up more. For now, GN’s DNN chip claims remain backed by press releases rather than enumerable patent grants.
The disruptors nobody talks about
Here’s where things get interesting. The Big Six aren’t the only companies filing AI hearing aid patents. Two smaller players hold IP that overlaps directly with the majors.
Chromatic Inc., a company based in New York and Minnesota (the team behind the Fortell hearing aid), has assembled a surprisingly dense cluster of granted patents. US 11,812,225 covers a neural network hearing aid with DSP plus CNN/GRU architecture. US 11,877,125 describes selective routing of audio to neural network circuitry versus plain DSP based on signal-to-noise ratio thresholds: the hearing aid decides, frame by frame, whether AI processing is worth the power cost. US 11,886,974 claims a neural network chip built as a tile array of MAC (multiply-accumulate) units with integrated memory and bias circuits. US 12,356,154 and US 12,356,156 describe neural network circuitry achieving at least one billion operations per second and at least two billion operations per milliwatt, at roughly two milliwatts total power draw. US 12,395,800, granted 19 August 2025, covers hearing loss amplification that processes speech and noise sub-signals through separate compression pipelines.
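The frame-by-frame routing decision in US 11,877,125 can be sketched as a threshold rule. The 10 dB cutoff and the frame values below are invented for illustration; the patent claims the routing mechanism, not any particular threshold.

```python
def choose_path(snr_db, threshold_db=10.0):
    """Route a frame to the DNN only when the estimated SNR is poor enough
    that neural denoising justifies its power cost; else plain DSP.
    The 10 dB threshold is an assumed illustrative value."""
    return "dsp" if snr_db >= threshold_db else "dnn"

# (frame id, estimated SNR in dB) -- made-up values for the demo
frames = [("f0", 18.0), ("f1", 4.5), ("f2", 12.0), ("f3", -2.0)]
routing = {fid: choose_path(snr) for fid, snr in frames}
# routing == {"f0": "dsp", "f1": "dnn", "f2": "dsp", "f3": "dnn"}
```

Quiet-room frames never pay the neural network’s power bill; restaurant frames do.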
Chromatic’s claims around per-frame complex-ratio-mask denoising at 15 dB or better without audible speech degradation, running on-chip with a million or more neural units, overlap directly with Sonova’s DEEPSONIC territory and Starkey’s Edge AI subject matter. That makes Chromatic the most patent-dense startup in this space, and a potential licensing or litigation counterparty for any of the Big Six.
Then there’s Tuned Ltd., which holds US 11,991,502 B2, granted 28 May 2024, and its parent US 11,438,716. These patents describe an AI-mediated hearing aid adjustment system where the user reports a perceived deficiency in natural language (too loud, poor sound quality, interfering noise), a detection algorithm identifies the likely cause, and a solution algorithm proposes parameter changes. The solution algorithm incorporates expert knowledge, the user’s audiogram, current parameter values, previous adjustments, environment-specific history, and what Tuned calls a “user acoustic fingerprint.” It’s an AI expert system for at-home tuning, layered on top of conventional manufacturer firmware.
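The two-stage structure the Tuned patents describe, detect the likely cause, then propose a fix, can be sketched as a pair of lookup tables. The causes, parameter names, and adjustment values below are invented for illustration; the real system also weighs the audiogram, adjustment history, and the “user acoustic fingerprint.”

```python
# Stage 1: map a natural-language complaint to a likely cause
# (hypothetical rule table, not Tuned's actual mapping).
DETECT = {
    "too loud":           "overall gain too high",
    "poor sound quality": "excessive compression",
    "interfering noise":  "insufficient noise reduction",
}

# Stage 2: map the cause to a proposed parameter change
# (parameter names and deltas are illustrative assumptions).
SOLVE = {
    "overall gain too high":        {"master_gain_db": -3},
    "excessive compression":        {"compression_ratio_delta": -0.2},
    "insufficient noise reduction": {"noise_reduction_db": +4},
}

def propose_adjustment(complaint, current_params):
    """Return (adjusted parameters, diagnosed cause) for a user complaint."""
    cause = DETECT.get(complaint.lower())
    if cause is None:
        return dict(current_params), "no matching rule"
    adjusted = dict(current_params)
    for param, delta in SOLVE[cause].items():
        adjusted[param] = adjusted.get(param, 0) + delta
    return adjusted, cause

propose_adjustment("Too loud", {"master_gain_db": 50})
# -> ({"master_gain_db": 47}, "overall gain too high")
```

An LLM front end would replace the exact-match lookup in stage one with language understanding, which is precisely why this patent family sits in the path of chatbot-style tuning apps.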
For anyone building a chatbot-style or LLM-based hearing aid adjustment app, Tuned’s patent family is the first freedom-to-operate hurdle to clear.
What the patent map tells us
Three battlegrounds define the AI hearing aid patent landscape right now.
The first is on-device DNN noise suppression: which company’s chip can separate speech from noise in real time, on a power budget measured in milliwatts, without sending anything to the cloud? Sonova’s DEEPSONIC, Starkey’s G2 Neuro Processor, GN’s new DNN chip, Oticon’s Sirius, and Chromatic’s tile-array NN chip are all patented (or claim-pending) entries in this race. This is where the real money sits, because the chip defines what the hearing aid can do at the physics level. You can’t software-update your way past someone else’s silicon patent.
The second is environmental classification: the algorithms that detect whether you’re in a restaurant, a car, a quiet room, a windy park. Starkey’s Bayesian classifier family (rooted in US 8,494,193) is the deepest, but Sonova’s AutoSense OS and Oticon’s scene-analysis systems are well established. This territory is more crowded and harder for any single company to monopolise.
The third is self-fitting and personalisation, where the hearing aid adapts to the individual without requiring a clinic visit for every adjustment. Sivantos’s two-stage classifier system and Tuned Ltd.’s expert-system patents occupy this space, alongside Widex’s SoundSense Learn and Starkey’s cloud-retrained multi-model architecture (US 12,302,084). With over-the-counter hearing aids now legal in the US and gaining traction in other markets, this category may turn out to be the most commercially significant of the three.
A note on what’s real and what’s marketing
Every patent number cited in this article was checked against USPTO and Justia assignee records. Two items from manufacturer marketing could not be independently confirmed: a claim that US 11,917,372 is the patent behind Starkey’s Pro Fit fitting software (nearby fitting-system patents exist under Starkey, including US 12,101,604, but the specific number didn’t return a match), and an attribution of US 12,389,169 to the Big Six (it belongs to an independent inventor, not a major manufacturer).
The performance claims, the “53 times more processing power” and “10 dB SNR improvement” and “17 times more efficient” figures, come from manufacturer press releases and internal studies. They are not lies, but they are not independent evidence either. Read them as you would read any claim made by a company trying to sell you something.
What the patents themselves reveal, stripped of the marketing, is a hearing aid industry in the middle of a genuine technical transformation. The traditional DSP architectures that defined hearing aids for thirty years are being supplemented, and in some cases replaced, by trained neural networks running on dedicated silicon. The companies that own the strongest patent positions in this shift will shape what hearing aids can do for the next decade. The ones that don’t will be licensing their competitors’ technology, or working very carefully around it.