Speaker 1: Hi, my name is Jose Francisco, and today I'll be showing you how to transcribe any live stream audio that you want as accurately and efficiently as possible, in real time. This tutorial is meant to be extremely quick, hence the background music. The fast-paced music you're currently hearing in the background is Frédéric Chopin's Fantaisie-Impromptu, and according to various YouTube videos, this piece is around 5 minutes long. But with this quick-start tutorial, we'll show you how to transcribe any live audio feed you want before the music ends. All you have to do is use our notebook. Link in the description. Ready? Let's go.

First things first, open up the notebook. Now, make a copy of the notebook, like this. This tutorial will assume that you're using Google Colab, but even if you're using Jupyter Notebooks or running this notebook in VS Code, the general instructions should be about the same. Alright, now that you've made a copy of the notebook, let's run the first cell. This cell simply installs dependencies using pip. Oh yeah, we're working in Python here. Give it a few moments, and you'll see some colorful text, like this. Now, for some people out there, you may need to use pip3 instead of pip, depending on your setup, but the output should remain the same.

And now, there's only one more cell to run. Maybe we just need to fill in a few variables first. Well, in reality, there's only one variable that needs to be filled in. The rest are optional. The variable you must change is the Deepgram API key. Just create one using your Deepgram account and paste it in here. For security reasons, I can't show you mine, but yours is just one button click away. And if you don't have a Deepgram account yet, don't worry. All you have to do is sign up with your email, and you'll receive 12,000 minutes of transcription for free. No need to put down a credit card or anything.

Alright, if you've plugged in the API key, you can run this cell immediately. No need to toy around with any of the other variables. But that being said, it might be important to know what these other variables are. So before I demo the live transcription, let me show you what these other variables do. This URL variable should be set to the URL of the stream you wish to transcribe from. By default, we're streaming from BBC Radio.

Alright, up next, check out this params variable. This variable should be set to the parameters you wish to configure your Deepgram model with. The ones written here in the starter code shouldn't have to be modified for the sake of this demo, but if you wish to modify them on your own, go for it. Check out the Deepgram documentation for more information. Link in the description. And if you're curious, here's what these starter-code parameters say. Punctuation is set to true, meaning we're going to punctuate our transcript: capitalized words, periods, commas, and so on. Numerals is set to true as well, meaning we're going to use digits to represent numbers instead of words. Moreover, since we're listening to the BBC, we're going to set our language to English. But that being said, we do support multiple languages, the languages you're seeing on screen right now. Furthermore, we're using the most enhanced version of Deepgram that we have to offer. And as for the model we're using, we're going for a general, all-purpose model. However, we also have models to support different types of audio streams, such as meetings, phone calls, voicemails, video streams, and even conversational AI.
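For readers following along outside the video, the configuration cell described above might look roughly like the sketch below. The variable names (DEEPGRAM_API_KEY, URL, PARAMS) are illustrative rather than the notebook's exact names, and the stream URL is a placeholder; the parameter values simply mirror what the video describes.

```python
# Illustrative configuration cell (variable names and stream URL are placeholders,
# not necessarily what the notebook uses). Parameter values mirror the video.

DEEPGRAM_API_KEY = "YOUR_DEEPGRAM_API_KEY"  # create one in your Deepgram dashboard

# Live audio stream to transcribe; the video defaults to a BBC Radio stream.
URL = "https://example.com/some-live-audio-stream"

# Deepgram query parameters described in the video.
PARAMS = {
    "punctuate": True,   # add punctuation and capitalization to the transcript
    "numerals": True,    # write numbers as digits instead of words
    "language": "en",    # BBC Radio is English-language audio
    "tier": "enhanced",  # the "most enhanced version" mentioned in the video
    "model": "general",  # all-purpose model; others cover meetings, phone calls, etc.
}
```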
As usual, reference the Deepgram docs for more information. Link in the description. But again, long story short, the parameters we've pre-written for this starter code should be good for this demo. The next two variables are simple. Time limit is an int that represents the number of seconds that you wish to transcribe for, and transcription only is a boolean: set it to true if you only want to see the transcribed words, or to false if you wish to see the full JSON responses.

For the sake of this demo, let's say that we just want to create subtitles for this BBC radio show. Here's what that would look like. Note: there are two latencies to keep track of. The first is the latency between the BBC radio show and your speakers. The second is the latency between the BBC radio show and Deepgram's AI. Luckily, these latencies are independent of each other. And as of today, the radio-to-speaker latency is larger than the radio-to-AI latency. The result? Subtitles that look like these. Notice that some of the words are printed to the console before we hear them on the speakers. Nevertheless, these subtitles are looking pretty good.

And beyond the world of subtitles, you can do much more with real-time live-stream audio transcription. Maybe you want to have a live conversation with ChatGPT. Maybe you want to translate yourself in real time. Or perhaps you want to wear live subtitles on your chest. Deepgram users have done that before. Want to drive a small car with your voice? Our users have done that too. And what about a Disney princess dress that lights up in different colors based on the song that you sing? You guessed it, our users have done that as well. Not to mention, Deepgram can transcribe pre-recorded audio too. We've also made a notebook for that. Our language models also offer you the ability to summarize long audio, diarize audio with multiple speakers, filter profanity, and much, much more.

So that's how you use Deepgram's live transcription feature as quickly as possible. Feel free to mess around with the notebook as much as you desire. Or if you want to write some code with Deepgram yourself, check out our software development kits, or SDKs. We have SDKs for Node, Python, Go, and more. But that's Deepgram in a nutshell: a quick, easy-to-use API, with documentation written by humans, for humans. Alright, what's my time? Still got it.
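To tie the pieces together, here is a rough, assumption-laden sketch of what a streaming loop with those last two variables could look like. It is not the notebook's actual code: it assumes the `websockets` and `aiohttp` packages, Deepgram's streaming endpoint at wss://api.deepgram.com/v1/listen, and the illustrative URL, DEEPGRAM_API_KEY, and PARAMS variables from the previous sketch, with TIME_LIMIT and TRANSCRIPTION_ONLY behaving as described above.

```python
# Rough sketch of the streaming loop (not the notebook's exact code), assuming the
# `websockets` and `aiohttp` packages and Deepgram's wss://api.deepgram.com/v1/listen
# endpoint. URL, DEEPGRAM_API_KEY, and PARAMS are the illustrative variables from
# the previous sketch.

import asyncio
import json
import time
from urllib.parse import urlencode

import aiohttp
import websockets

TIME_LIMIT = 30            # seconds of audio to transcribe before stopping
TRANSCRIPTION_ONLY = True  # True: print only the words; False: print full JSON responses


async def transcribe(stream_url, api_key, params):
    # Deepgram expects lowercase "true"/"false" in the query string.
    query = urlencode({k: str(v).lower() if isinstance(v, bool) else v
                       for k, v in params.items()})
    dg_url = "wss://api.deepgram.com/v1/listen?" + query

    # Note: older versions of `websockets` take `extra_headers`; newer ones call
    # the same keyword `additional_headers`.
    async with websockets.connect(
        dg_url, extra_headers={"Authorization": f"Token {api_key}"}
    ) as dg, aiohttp.ClientSession() as session:
        async with session.get(stream_url) as radio:

            async def sender():
                # Forward raw audio bytes from the live stream to Deepgram.
                start = time.monotonic()
                async for chunk in radio.content.iter_chunked(1024):
                    if time.monotonic() - start > TIME_LIMIT:
                        break
                    await dg.send(chunk)
                # Tell Deepgram we're done so it flushes the final results.
                await dg.send(json.dumps({"type": "CloseStream"}))

            async def receiver():
                # Print results as they arrive.
                async for message in dg:
                    response = json.loads(message)
                    if TRANSCRIPTION_ONLY:
                        transcript = (
                            response.get("channel", {})
                            .get("alternatives", [{}])[0]
                            .get("transcript", "")
                        )
                        if transcript:
                            print(transcript)
                    else:
                        print(response)

            await asyncio.gather(sender(), receiver())


# e.g. asyncio.run(transcribe(URL, DEEPGRAM_API_KEY, PARAMS))
# (or `await transcribe(...)` directly in a notebook cell)
```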