Can you hear me now? AI-coustics to fight noisy audio with generative AI
Noisy recordings of interviews and speeches are the bane of audio engineers' existence. But one German startup hopes to fix that with a unique technical approach that uses generative AI to enhance the clarity of voices in video.
Today, AI-coustics emerged from stealth with €1.9 million in funding. According to co-founder and CEO Fabian Seipel, AI-coustics' technology goes beyond standard noise suppression to work across, and with, any device and speaker.
"Our core mission is to make every digital interaction, whether on a conference call, consumer device or casual social media video, as clear as a broadcast from a professional studio," Seipel told TechCrunch in an interview.
Seipel, an audio engineer by training, co-founded AI-coustics with Corvin Jaedicke, a lecturer in machine learning at the Technical University of Berlin, in 2021. Seipel and Jaedicke met while studying audio technology at TU Berlin, where they often encountered poor audio quality in the online courses and tutorials they had to take.
"We've been driven by a personal mission to overcome the pervasive challenge of poor audio quality in digital communications," Seipel said. "While my hearing is slightly impaired from music production in my early twenties, I've always struggled with online content and lectures, which led us to work on the speech quality and intelligibility topic in the first place."
The market for AI-powered noise-suppressing, voice-enhancing software is very robust already. AI-coustics' rivals include Insoundz, which uses generative AI to enhance streamed and pre-recorded speech clips, and Veed.io, a video editing suite with tools to remove background noise from clips.
But Seipel says AI-coustics has a unique approach to developing the AI mechanisms that do the actual noise reduction work.
The startup uses a model trained on speech samples recorded in its studio in Berlin, AI-coustics' home city. People are paid to record samples (Seipel wouldn't say how much) that then get added to a data set used to train AI-coustics' noise-reducing model.
"We developed a unique approach to simulate audio artifacts and problems (e.g. noise, reverberation, compression, band-limited microphones, distortion, clipping and so on) during the training process," Seipel said.
I'd wager that some will take issue with AI-coustics' one-time compensation scheme for creators, given that the model the startup is training could turn out to be quite lucrative over the long run. (There's a healthy debate over whether creators of training data for AI models deserve residuals for their contributions.) But perhaps the bigger, more immediate concern is bias.
It's well established that speech recognition algorithms can develop biases that end up harming users. A study published in the Proceedings of the National Academy of Sciences found that speech recognition systems from leading companies were twice as likely to incorrectly transcribe audio from Black speakers as from White speakers.
In an effort to combat this, Seipel says AI-coustics is focusing on recruiting "diverse" speech sample contributors. He added: "Size and diversity are key to eliminating bias and making the technology work for all languages, speaker identities, ages, accents and genders."
It wasn't the most scientific test, but I uploaded three video clips (an interview with an 18th-century farmer, a car driving demo and an Israel-Palestine conflict protest) to AI-coustics' platform to see how well it performed with each. AI-coustics indeed delivered on its promise of boosting clarity; to my ears, the processed clips had far less ambient background noise drowning out speakers.
[Embedded audio: the 18th-century farmer clip, before and after processing]
Seipel sees AI-coustics' technology being used for real-time as well as recorded speech enhancement, and perhaps even being embedded in devices like soundbars, smartphones and headphones to automatically boost voice clarity. Currently, AI-coustics offers a web app and API for post-processing audio and video recordings, and an SDK that brings AI-coustics' platform into existing workflows, apps and hardware.
Seipel says that AI-coustics (which makes money through a mix of subscriptions, on-demand pricing and licensing) has five enterprise customers and 20,000 users at present, albeit not all paying. On the roadmap for the next few months is expanding the company's four-person team and improving the underlying speech-enhancing model.
"Prior to our initial investment, AI-coustics ran a fairly lean operation with a low burn rate in order to survive the difficulties of the VC investment market," Seipel said. "AI-coustics now has a substantial network of investors and mentors in Germany and the U.K. for advice. A strong technology base and the ability to address different markets with the same database and core technology gives the company flexibility and the ability for smaller pivots."
Asked whether audio mastering tech like AI-coustics' might steal jobs, as some pundits fear, Seipel noted AI-coustics' potential to expedite time-consuming tasks that currently fall to human audio engineers.
"A content creation studio or broadcast manager can save time and money by automating parts of the audio production process with AI-coustics while maintaining the highest speech quality," he said. "Speech quality and intelligibility still is an annoying problem in nearly every consumer or pro device as well as in content production or consumption. Every application where speech is being recorded, processed or transmitted can potentially benefit from our technology."
The funding took the form of an equity and debt tranche from Connect Ventures, Inovia Capital, FOV Ventures and Ableton CFO Jan Bohl.