Have you ever imagined how convenient it would be if your computer would automatically type everything you’re saying in a meeting? Well, with automatic speech recognition (ASR), that can be a reality. This technology enables computers to convert spoken words into text.
We live in an age where computers are able to understand what we say and do. These days, we can retrieve and access information using our speech. We can use it to interact with and control applications and devices. Automatic speech recognition is one of the driving forces that has been causing this transformation.
Automatic Speech Recognition Explained
In some cases, speech recognition is referred to as “speech to text”. It is a field of computational linguistics that deals with recognizing and translating spoken language into text. Speech recognition combines computer science, linguistics, and electrical engineering.
While the term generally refers to translating spoken words into text, the field also has subfields. One example is speaker identification (sometimes called voice recognition), which focuses on recognizing who is speaking rather than what is being said.
Machine Learning and Automatic Speech Recognition
Automatic speech recognition, as we know it today, is built on machine learning (ML), so it is also considered a form of artificial intelligence. ML is the general discipline of teaching computers to learn from data, while ASR is a specialized application of it. Both serve the broader goals of AI.
Natural Language Processing and Automatic Speech Recognition
More advanced versions of automatic speech recognition increasingly include natural language processing (NLP). NLP devices record and process human conversations with the help of artificial intelligence. During the process, various parameters influence the accuracy of automatic speech recognition. These factors include background noise, speaker volume, and recording equipment, among others.
ASR as We Know It Today
Automatic speech recognition is more common than you think. Today, it is innovatively used in various industries. However, since ASR is such a ubiquitous technology, we cannot list everything it covers. Even so, here are some of the most popular applications:
When most people talk about automatic speech recognition, closed captions usually come to mind. Captions can be generated live or offline. Offline ASR can create accurate closed captions ahead of time for television, movies, video games, and other media. These captions improve accessibility for deaf and hard-of-hearing individuals.
Meanwhile, live automatic speech recognition allows real-time caption streaming with seconds-long latency. This type of closed caption is ideal for live presentations, TV shows, and video calls.
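As a rough illustration of the buffering step in live captioning, the sketch below groups an incoming word stream into short caption segments. The hard-coded word list stands in for the output of a streaming ASR engine, which would deliver words continuously.

```python
# Sketch of a live captioning loop: buffer recognized words into
# caption-sized chunks. A real system would receive the words from
# a streaming ASR engine rather than a hard-coded list.

def stream_captions(words, max_words=4):
    """Group an incoming word stream into caption segments."""
    buffer = []
    for word in words:
        buffer.append(word)
        if len(buffer) == max_words:
            yield " ".join(buffer)   # emit a caption segment
            buffer = []
    if buffer:                        # flush any trailing words
        yield " ".join(buffer)

recognized = ["welcome", "everyone", "to", "today's", "live", "product", "demo"]
for caption in stream_captions(recognized):
    print(caption)
# -> welcome everyone to today's
# -> live product demo
```

A production captioner would also flush on silence and punctuation, trading segment length against the seconds-long latency mentioned above.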
ASR is also used in generating transcripts for podcasts, standard lectures, and interviews automatically. These days, this technology is widely used by companies for creating virtual meeting transcripts.
There are several benefits here. For instance, instead of searching through audio to pull out quotes, it’s easier to navigate through the text. What’s more, it takes less time to review a transcript than a recording. Besides, if someone misses a virtual meeting, they can easily go back to the transcript.
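To illustrate why text is easier to navigate than audio, here is a minimal sketch of pulling quotes from a timestamped transcript. The (seconds, text) format is an assumption made for the example, not any particular tool’s output.

```python
# Toy example of searching a transcript for quotes: return every
# timestamp whose text mentions a keyword. With raw audio, the same
# task would mean scrubbing through the recording by ear.

def find_quotes(transcript, keyword):
    """Return (timestamp, line) pairs whose text mentions the keyword."""
    keyword = keyword.lower()
    return [(t, line) for t, line in transcript if keyword in line.lower()]

meeting = [
    (12.5, "Let's review the quarterly budget"),
    (48.0, "The budget increase was approved"),
    (93.2, "Next item: hiring plans"),
]
print(find_quotes(meeting, "budget"))
# -> [(12.5, "Let's review the quarterly budget"), (48.0, 'The budget increase was approved')]
```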
Automatic speech recognition is heavily adopted by the medical community too. For instance, a Wharton whitepaper reports that physicians depend on AI to convert voice-dictated clinical notes into electronic medical records that machines can understand. Medical professionals use these resources along with diagnostic image analysis to uncover relevant information for decision-making in neurology, cancer research, and cardiology.
Also, because of the COVID pandemic, telemedicine became popular. Along similar lines, automatic speech recognition became instrumental in triaging and screening patients remotely.
Call centers use ASR to improve customer service processes. Apart from using fully automated chatbots, contact centers also use the technology for the following:
- Tracking customer support conversations
- Resolving issues more quickly through the analysis of initial interactions
- Enhancing employee training
According to McKinsey research, companies that used advanced analytics were able to lower their average handle time by up to 40%. Moreover, these businesses reduced labor costs by up to $5 million while increasing containment rates by up to 20%. Along the process, they were able to improve employee engagement and customer satisfaction.
Software developers can integrate off-the-shelf automatic speech recognition into an app without hiring a data science team or spending hours of high-powered cloud computing to train a new model. As a result, users get a more intuitive and seamless experience: they can navigate the app with their voices.
Indeed, ASR covers a range of use cases, but it is especially beneficial in translation apps. The technology is an early step toward a “universal translator” that would break down language barriers, making cross-border communication and travel more accessible.
Internet of Things (IoT)
Physical “smart” devices are quickly becoming ubiquitous, and they are part of the IoT. Some examples include smart home devices like speakers and thermostats. In the Industrial Internet of Things (IIoT), there are devices that drive better automation and optimize manufacturing processes.
Indeed, more and more users are relying on speech to interact with the IoT. All you need to do is say, “turn down the temperature,” or “switch on the lights.” These voice commands will allow you to control your environment in real-time. You don’t even have to press a button or look at your screen.
Based on the current trends, automatic speech recognition is likely to play a crucial role in widely implementing and adopting the IoT. However, before those opportunities come to fruition, there are still significant challenges to overcome.
What Are the Challenges of ASR?
As we’ve mentioned, while ASR opens several opportunities for various industries, it still comes with challenges. Here are some of the issues that the technology must overcome:
Equitability and Inclusivity
Ideally, technology should provide equal opportunities for everyone. However, according to a Brookings report, AI-powered financial services often show bias against minorities when approving loans. Voice recognition software often shows racial disparities as well.
According to a study published in the Proceedings of the National Academy of Sciences, African Americans encounter more challenges in using speech recognition tools. The study revealed significant racial disparities in leading ASR systems: results for African American speakers had an average word error rate of 0.35, compared with 0.19 for white speakers. It concluded that machine-learning systems must be audited to ensure they promote inclusivity.
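The word error rate (WER) figures reported by the study are the standard ASR accuracy metric: word-level edit distance (substitutions, insertions, and deletions) divided by the number of words in the reference transcript. A minimal implementation:

```python
# Word error rate: Levenshtein distance over words, normalized by
# the reference length. A WER of 0.35 means roughly one word in
# three was transcribed incorrectly.

def wer(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance between the two word lists.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("turn down the temperature", "turn down a temperature"))  # -> 0.25
```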
It’s worth noting that training datasets determine how ML models learn. If the data does not contain African American voices, the ASR system will be unable to parse their speech accurately. So, it’s essential to bring diversity to the software community. What’s more, training datasets should contain different vernaculars, accents, and speakers.
Privacy
Another hurdle to the widespread adoption of automatic speech recognition is privacy. Surveillance has long been a concern, even in democratic countries. So, developers of ASR tech are responsible for finding innovative ways to make the system benefit society without compromising users’ data.
Improving data privacy in ASR can also benefit businesses, as the same Wharton white paper we mentioned earlier points out. Consumers expect voice-enabled technology to come with a high level of data protection, so businesses must prioritize earning that trust as they become increasingly dependent on first-party data relationships.
According to a peer-reviewed study on ASR, methods for protecting privacy in speech fall into four categories.
The study recommends developing anonymization methods to conceal personally identifiable information in speech. All the while, the process will retain important attributes like linguistic content.
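As a toy sketch of transcript anonymization, the snippet below masks digit runs and a known list of names using regular expressions while leaving the rest of the linguistic content intact. Production systems would rely on trained named-entity recognition models rather than a fixed name list.

```python
import re

# Illustrative transcript anonymization: mask number sequences
# (phone numbers, IDs) and listed personal names. The name list
# is an assumption made for this example.

KNOWN_NAMES = ["Alice", "Bob"]

def anonymize(text):
    # Mask runs of digits ...
    text = re.sub(r"\d+", "[NUMBER]", text)
    # ... and each known name as a whole word.
    for name in KNOWN_NAMES:
        text = re.sub(rf"\b{name}\b", "[NAME]", text)
    return text

print(anonymize("Alice said her number is 5551234"))
# -> [NAME] said her number is [NUMBER]
```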
Accuracy
Existing automatic speech recognition systems are still unable to transcribe audio with full accuracy. Many factors complicate the process, including diversity in pronunciation, overlapping speech, and evolving language.
This is also the reason why GoTranscript’s transcription services are favored over ASR systems. Because the transcripts are 100% generated by humans, customers can expect accurate results. No matter how complicated the audio recording is, the transcript will be clear and precise.
What Are the Opportunities for ASR?
If ASR developers are able to overcome the aforementioned challenges, the technology can provide several opportunities. Right now, the common practice is to run end-to-end models on high-powered computers in the cloud. However, ASR may enable the use of lower-powered computers that are closer to the source of the data.
Naturally, this shift will bring key benefits: lower latency, more personalized and accurate models, and stronger privacy protections, since voice data won’t have to be transmitted over a network.
Here are some of the opportunities that automatic speech recognition may open:
Many tech companies are creating custom chips that let devices handle certain machine learning tasks at the edge. Examples include Apple’s Neural Engine and NVIDIA’s Jetson modules. As these products become widely available, developers will be able to run automatic speech recognition just about anywhere.
Of course, this opens the door to ambient computing, a state in which computers are so ever-present that we no longer notice them. The concept comes from a 1991 paper by computer scientist Mark Weiser, who argued that the most profound technologies are those that disappear: they weave themselves into everyday life until they are indistinguishable from it.
Over the past three decades, Silicon Valley has pursued this vision of ubiquitous computing, and we are getting close to turning it into reality. One way to get there is to design ASR that lets people use their voices to control an ambient IoT.
ASR can also open the possibility of affective computing. This process involves detecting the undercurrents of thought and emotions by breaking down communications and speech patterns. After all, various elements in the delivery of a message can also affect its context. Speaking speed, pauses, intonations, and word choice can have underlying meanings and emotions.
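As a small sketch of the delivery cues just mentioned, the function below derives speaking rate and long pauses from word timestamps. The (word, start, end) tuple format is an assumed example, not a standard ASR output, and real affective systems would combine many more signals.

```python
# Extract two simple delivery features from timestamped words:
# speaking rate (words per minute) and the number of long pauses.
# Gaps longer than pause_threshold seconds count as pauses.

def delivery_features(words, pause_threshold=0.5):
    """Return (words per minute, count of long pauses)."""
    duration = words[-1][2] - words[0][1]          # total seconds spoken
    wpm = len(words) / duration * 60
    pauses = sum(
        1
        for (_, _, end), (_, start, _) in zip(words, words[1:])
        if start - end > pause_threshold
    )
    return round(wpm), pauses

# Three words in 2.4 seconds, with one long gap before "help".
speech = [("please", 0.0, 0.4), ("send", 0.5, 0.9), ("help", 2.0, 2.4)]
print(delivery_features(speech))   # -> (75, 1)
```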
One example is the 911 caller who pretended to order pizza. In reality, she was reporting domestic violence. The dispatcher was able to understand the context by listening carefully to how the caller was delivering the message. Advances in automatic speech recognition tech may enable chatbots to acquire the same capabilities.
AI and the Turing Test
ASR tech will also play a vital role in any AI system that can pass the Turing test, which gauges whether a machine can convincingly imitate human thought. In the test, a human judge holds two conversations: one with another person and one with a machine. If the judge cannot tell which conversation is with the machine, the machine passes.
Top AI researchers have been subjecting their work to the Turing test. However, none of them has come even close to passing it. Even so, automatic speech recognition may play an important role in the conversation that will cross the threshold.
ASR may come with several challenges and complexities. However, its goal is simple: enabling computers to understand humans.
It’s easy to take this goal for granted. But consider its benefits: humans expand their minds by listening to other people, and they develop and grow relationships by listening to one another.
So, teaching machines to listen and comprehend is a serious undertaking. It is also a double-edged sword: the practice will only be as ethical and morally acceptable as the people behind it. The responsibility of developing these technologies must be taken seriously, without prejudice or bias, and with the good of people in mind.