Enhancing Accessibility: A Deep Dive into Epiphan's LiveScrypt Transcription Device
Discover how Epiphan's LiveScrypt uses AI to provide real-time transcriptions, making content accessible for those with hearing impairments in various settings.
Epiphan LiveScrypt Simplified real-time automatic transcription for live events
Added on 09/06/2024

Speaker 1: Have you ever found yourself turning on the captions on movies and TV shows? It happens in my house when watching content with heavy accents. In fact, our captions are more often on than off. We sort of take it for granted that, with the flip of a button, we no longer have to listen to most of the content we watch; we can read what's happening instead. Plus, with three kids, sometimes that's the only way I can watch over all the noise coming from around the house. See, for me it's a convenience. For those with hearing impairments, it's a requirement. It allows them to enjoy the same content that those without hearing impairments are watching.

While most of the content we watch these days is captioned using AI, there are still so many places where captions are not yet available or not being used at all. Think of the content and information that isn't available to someone who can't hear what's being said at in-person events. In churches, schools, and civic events, captioning isn't always mandated or provided. And for those with partial hearing loss, dialogue at a large event with lots of background noise can be almost impossible to understand. While many environments have someone there to sign in real time, many still don't have that luxury. So to solve this problem, we're going to look at Epiphan's LiveScrypt. This little device is designed to hear and display what many people can't. So let's put it on the bench and see what words fall out of the inside.

Transcription used to be difficult to implement and very expensive. It involved sending audio over a phone line to an operator in a call center somewhere in the world, who would listen to your event and type everything they heard into a computer. That transcription would get sent back to another device, which would insert the words into a data stream embedded in the video stream designated for the outbound broadcast. Even now, most captioning devices are designed for displaying captions to viewers who aren't in the room.
Those watching in the room get left out. LiveScrypt from Epiphan is designed to display transcriptions in the same room as the speaker and the live audience. This little appliance uses machine learning, a fancy term for artificial intelligence, to grab the audio, create the transcription, and, rather than embed it, present it here on the screen or through a host of other interesting delivery methods that we'll get into in a little bit.

This is a good time to look at the back of the unit. No matter your audio source, XLR, TRS, 3.5mm, even RCA, you can get audio in. Audio embedded in HDMI and SDI can also be used. Heck, even USB audio is supported. There's no excuse why you can't use this device to create captions. The UI is designed to make setting up and using the LiveScrypt easier than any other captioning solution on the market. Once it's connected to an audio source and the internet, within seconds of hitting the Start button, the words being spoken will appear on the screen.

But you might be wondering why the LiveScrypt needs to be connected to the internet. Well, earlier I mentioned AI was being used, but I didn't say where the AI processing was happening. Language-based AI computing is best done in the cloud, and this little device uses the cloud to create the transcription, which allows it to be a much smaller device. But the LiveScrypt's screen isn't intended to display the transcription to the whole in-person audience; it's just for monitoring the output. The HDMI port on the back panel, though, can feed a monitor on a wall so all audience members can see and take advantage of in-room transcription. Still having trouble viewing the screen? Well, grab your phone, hit a link, and the transcript is beamed right into your hands. With LiveScrypt, not having an option for those who need to view a live transcript of what's being said is a thing of the past, for both in-person and remote viewers alike.
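[Editor's note: the cloud pipeline described above, where audio chunks stream up and transcript text streams back, can be sketched in miniature. This is a hedged illustration of the general streaming-transcription pattern, not Epiphan's actual backend; the chunk format and the `cloud_transcribe` stand-in are assumptions made purely for illustration.]

```python
# Hypothetical stand-in for a cloud speech-to-text service: in a real
# deployment, audio chunks are streamed to the cloud and interim/final
# transcript segments stream back with sub-second latency.
def cloud_transcribe(audio_chunks):
    """Yield (is_final, text) results, one per incoming audio chunk."""
    words = []
    for chunk in audio_chunks:
        words.append(chunk["word"])          # pretend the service decoded it
        is_final = chunk["end_of_sentence"]  # the service marks sentence ends
        yield is_final, " ".join(words)
        if is_final:
            words = []                       # start a fresh caption line

def run_captions(audio_chunks):
    """Collect finalized caption lines, as a display device would."""
    lines = []
    for is_final, text in cloud_transcribe(audio_chunks):
        if is_final:
            lines.append(text)
    return lines

# Simulated microphone input: one decoded word per network chunk.
chunks = [
    {"word": "Welcome", "end_of_sentence": False},
    {"word": "everyone", "end_of_sentence": True},
    {"word": "Please", "end_of_sentence": False},
    {"word": "be", "end_of_sentence": False},
    {"word": "seated", "end_of_sentence": True},
]
print(run_captions(chunks))  # → ['Welcome everyone', 'Please be seated']
```

The interim (non-final) results are what make the captions feel live: text appears word by word on the monitoring screen, then gets finalized into a caption line once the sentence ends.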
The cloud interface for LiveScrypt offers total control over the sources on the unit, how it operates, and how the transcript is seen. From this interface, we can download various forms of the transcript for compliance or reference after the event. See, current AI models for transcription are in the 90% accuracy range and never need to take a break or get a glass of water. Always on, always ready. LiveScrypt is a game changer for those who want to offer their content to those who can't always hear what's being said. And Epiphan made it easy to use and one of the most affordable transcription platforms out there today. Okay, that's our show for the day. For more information on transcription, Epiphan, or the LiveScrypt product, come visit us, call, or connect with us online anytime, and we'll see you on the next episode.
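[Editor's note: the downloadable transcript forms mentioned above commonly include timed caption files such as SubRip (SRT), a widely supported standard. As a rough sketch (not Epiphan's actual export code), here is how timed transcript segments map onto SRT blocks; the segment tuples are invented sample data.]

```python
def srt_timestamp(seconds):
    """Format a time in seconds as the SRT-standard HH:MM:SS,mmm timestamp."""
    ms = int(round(seconds * 1000))
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def to_srt(segments):
    """Render (start_sec, end_sec, text) segments as numbered SRT blocks."""
    blocks = []
    for i, (start, end, text) in enumerate(segments, 1):
        blocks.append(f"{i}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text}\n")
    return "\n".join(blocks)

# Invented sample segments, as a transcription service might time them.
segments = [
    (0.0, 2.5, "Have you ever turned on captions?"),
    (2.5, 5.0, "In my house, they're usually on."),
]
print(to_srt(segments))
```

Each block carries a sequence number, a start/end timestamp pair, and the caption text, which is why the same timed transcript can serve both live display and after-the-event compliance archives.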
