Exploring OpenAI Whisper: Setup and Usage Guide
Learn how to set up and use OpenAI Whisper for transcribing audio. Discover model options and installation tips for optimum performance.
OpenAI Whisper - Free Audio to Text AI
Added on 01/29/2025

Speaker 1: Hey guys, back with another video today. We're gonna be checking out OpenAI Whisper. They just open sourced this last month. It's a tool for transcribing audio into text. It's really accurate for English, but I think it does work for some other languages as well. This is their blog. They have an example here, a few different ones, and then the translations. I'm not gonna play these, but you can check that out. And then they go into a little bit about how it works, and then have some nice pictures here. So I'll leave this link in the description. There's also a GitHub link, so since it's open source, all this code is available under the MIT license. So they talk a little bit more about it here at the top, and if you scroll down, there's a setup section. So we're gonna need to install this. So if we come over, I have an environment set up, and I definitely recommend that, especially with stuff like this, to keep everything clean and separate. And I'm on Ubuntu right now, logged in to a virtual machine. So I'm gonna pip install it. I'll copy this, come over to the terminal, paste it, install. And this is actually already installed. Might just take a second here. Okay, so there we go, already installed. For you, that message will probably look a little bit different. We're also gonna need to install ffmpeg. So we'll grab this, since I'm on Ubuntu. If you're on a different operating system, check out these other commands. Copy that, we'll come back over to the terminal, we'll paste, hit enter. And then once that's done, you might need to install Rust. I went ahead and just installed this as well, just to make sure we're all set. So we'll go ahead and paste that. And then once you have all of that done, you should be pretty much ready to go. You can use these different models. So they have tiny, base, small, medium, and large.
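The setup steps walked through above boil down to a few terminal commands. This is a sketch assuming Ubuntu and a fresh virtual environment named whisper-env (the environment name is my own choice; the install commands themselves come from the Whisper README):

```shell
# Keep Whisper isolated in its own environment, as recommended above.
python3 -m venv whisper-env
source whisper-env/bin/activate

# Install Whisper (the README also shows installing straight from GitHub).
pip install -U openai-whisper

# Whisper shells out to ffmpeg to decode audio; this is the Ubuntu command,
# other operating systems use their own package manager.
sudo apt update && sudo apt install ffmpeg

# Rust toolchain support, only needed if the tokenizer has no prebuilt
# wheel for your platform.
pip install setuptools-rust
```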
The tiny is the smallest, the base is a little bit bigger, and then there's small, medium, and large. The bigger the size, the more VRAM it needs and the longer it's gonna take. So the smaller ones are a lot quicker, but they're not as accurate. I have an audio file from one of my YouTube videos, so I'm gonna go transcribe that right now for you guys, to show you what it looks like. So I'll clear the terminal. And this is the command right here, so you do whisper. And then if you're using an environment where you have Whisper installed, make sure you're activated in that environment. So I'll do whisper, and then I have my file, logging YouTube MP3. And then we'll do model, tiny. So this one will be going pretty quick, compared to the other ones. I don't have CUDA activated right now, so that's what these warnings are. It's already detected the language as English. It's gonna take a few seconds to load here. And then there we go. So it's already gone through the video. It says, "as we got another video today, we're going to be setting up a Discord dashboard for our bot. We're going to be doing this one in JavaScript." So this is on the tiny model. And if you see, right in here, where it should say "node installed," the tiny model got that wrong. So this one's a lot quicker, but as you can see, there's some typos in there. If I stop this, clear, and then rerun this command again but with base, that's just one model above. It's pretty much the same VRAM, it has almost double the parameters, and it is a little bit slower, but not by much. So if we run this one now, let's wait for that same sentence to pop up, and we can see if it catches the typo or not. So it's loading here. See, this one's taking a little bit longer. Okay, and as we can see, this one's going a little bit slower.
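The two runs demonstrated above look roughly like this (logging-youtube.mp3 is a stand-in for whatever audio file you're transcribing):

```shell
# First pass with the tiny model: fast, but expect some mistakes.
whisper logging-youtube.mp3 --model tiny

# Same file with base: almost double the parameters, similar VRAM,
# noticeably fewer errors, and only a little slower.
whisper logging-youtube.mp3 --model base

# On a machine without CUDA, Whisper falls back to the CPU and prints a
# warning like "FP16 is not supported on CPU; using FP32 instead".
# That's the message mentioned above, not a real problem.
```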
And we can see right here, that typo is gone. So instead of whatever the tiny model put there, it now says "node installed." So it was able to recognize the two words as two separate words, and this transcript using the base model has much better accuracy. So that's really cool. You can up it even more. If you have somebody that's not talking very clearly, or if it's something that's hard to hear, or maybe some words that are more complicated, you might need a bigger model. But I was gonna use this tool to start adding closed captions to all my videos, and I think for most of my videos, I should be able to get by with the base model. So that's how this tool works. It takes a little bit to get through the whole video, but once it's done, it'll leave you with a text file. It'll also leave you with a subtitle file with all these timestamps as well. So that's awesome. Check out these links, I'll leave them in the description. So this is the GitHub; it has a lot of great information. And then if you wanna see how it works, with some great pictures, this is their blog site as well.
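The "bigger model when the audio is harder" advice above can be sketched as a toy rule of thumb. The helper below is purely an illustration, not part of Whisper; the thresholds are the approximate VRAM requirements from the Whisper README (roughly 1 GB for tiny and base, 2 GB for small, 5 GB for medium, 10 GB for large):

```shell
# Toy helper: pick the largest Whisper model that fits a given VRAM budget,
# using the approximate requirements from the Whisper README.
pick_model() {
  local vram_gb=$1
  if   [ "$vram_gb" -ge 10 ]; then echo "large"
  elif [ "$vram_gb" -ge 5 ];  then echo "medium"
  elif [ "$vram_gb" -ge 2 ];  then echo "small"
  else                             echo "base"
  fi
}

pick_model 4   # prints "small": enough room for small, not for medium
```

In practice you'd also weigh speed: the smaller models transcribe many times faster, so if base is accurate enough for your audio, there's little reason to reach for large.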
