

AI for speech recognition is nearing a watershed moment

Computers will soon be able to understand what we say.

Mihai Andrei
November 21, 2022 @ 6:22 am


Artificial Intelligence (AI) is one of the most hyped technologies of the moment, if not the most hyped. While some of this hype is undoubtedly exaggerated — the name itself is somewhat of a misnomer, since the technology is not exactly intelligent — AI is already making quite a mark, and it feels like we're only seeing the tip of the iceberg. But while the world has been buzzing about AI for creating images, another type of algorithm has been quietly advancing: speech recognition.

In the 1950s, three researchers from the legendary Bell Labs wanted to work on speech recognition and, despite not having access to computers, made notable progress. Later, AI pioneer Raj Reddy picked up the topic at Stanford University, developing the first system capable of recognizing continuous speech (until then, speakers had to leave small pauses for the system to work). Reddy saw in speech recognition (and automated translation) a way of making people's lives better, especially for those in lower socioeconomic conditions. He saw the technology as something that could "move the plateau" and improve the lives of the people who need it most. "The technology we've created in the past ten years, with things like translation, have moved the [socioeconomic] plateau up by a significant amount," Reddy noted in a recent panel at the Heidelberg Laureate Forum.

Still, despite all this progress, automatic transcriptions were pretty bad up until a few years ago. The problem is not an easy one by any measure: you have to recognize people's speech, account for accents and different ways of pronouncing words, compensate for pitch, and so on. But at some point, AI transcription and captioning started improving dramatically, and new models now seem to come along every day.

For communicators such as ourselves, this has been a boon. Transcribing an interview can often take longer than the interview itself, so having tools (often free or relatively cheap) that perform speech recognition automatically is a great help. But this goes far beyond transcribing interviews.

AI can be used for speech recognition in a number of ways, ranging from transcription to translation. It can play a role in everything from teaching and healthcare to tourism — heck, even food companies are now putting speech recognition into fridges. The market is expected to grow to over $45 billion over the next decade, and pretty much all the big companies want a piece of the pie.

Just a month ago, Google announced its own speech-to-speech AI translation model, Translation Hub, and not long after, Meta claimed a breakthrough of its own, presenting an AI that can recognize and translate to and from Hokkien — a Taiwanese language that lacks a standard written form. Then Nvidia joined the race as well. That all of this happened within less than two months says a lot about how fast the industry is growing.

For consumers, this is pretty good news. Many speech-to-text application programming interfaces (APIs) already boast 92% accuracy, roughly comparable to human performance. Recent strides in machine learning research, along with cheaper computation and better availability of training data, have made AI speech recognition not just better but also more affordable.
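Accuracy figures like the 92% above are typically reported as the complement of the word error rate (WER): the fraction of reference words a system gets wrong, counting substitutions, deletions, and insertions. A minimal sketch of how WER is computed (the example sentences are made up for illustration):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / number of reference
    words, computed as the Levenshtein edit distance over word sequences."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between the first i words of ref
    # and the first j words of hyp
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # deleting all i reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # inserting all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(
                dp[i - 1][j] + 1,         # deletion
                dp[i][j - 1] + 1,         # insertion
                dp[i - 1][j - 1] + cost,  # substitution (or match)
            )
    return dp[len(ref)][len(hyp)] / len(ref)

# One wrong word out of five: WER 0.2, i.e. 80% word accuracy
print(word_error_rate("hello world how are you", "hello word how are you"))
```

By this metric, "92% accuracy" means a WER of about 0.08 — roughly one word in twelve transcribed incorrectly.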

Of course, this technology has also been accelerated by other AI capabilities. For instance, the ability of AIs to summarize (reducing audio transcripts to their logical parts) and to identify different voices has both improved performance and expanded the scope in which AI speech recognition can be applied.

But while AI speech recognition seems to be entering a new phase, it is not without its own shortcomings and problems.

For instance, one such shortcoming is equity. By far the best-supported language for this type of application is English, and the reasons for that are twofold. First, you need manually labeled data to train the models, and that is easiest to do in English, where a lot of data is already available. The second reason is that that's where the money is. Sure, there's a market for speech recognition in Korean or Portuguese, but it's smaller than the English-language one.

There are also potential security risks in all of this. Voice-controlled devices are becoming increasingly common, giving attackers new ways to get hold of your personal information through this type of speech recognition service. An attacker could, perhaps, confuse a speech recognition system into performing unwanted actions, or eavesdrop on your private messages and documents through what your device hears and reads aloud.

Ultimately, AI speech recognition is a tool — and a pretty useful one at that. It’s got plenty of potential, but it’s up to us as a society to use it responsibly.
