Transcribe Audio Locally Using OpenAI Whisper
Learn to transcribe audio into text on your local machine using OpenAI's Whisper tool, maintaining privacy without the need for internet connectivity.
How to Transcribe Audio to Text on Your Own PC with OpenAI Whisper
Added on 01/29/2025

Speaker 1: Okay, so you want to transcribe some audio into text. No problem. There have been services that do that for years. You upload your audio to some website and it spits out a text file. And most of them are pretty good, but they cost money. And this audio may be something that you don't want to upload to some strange server or cloud service. Maybe it's confidential, or it's your intellectual property. So what's the solution? Well, OpenAI has a service called Whisper, and you can use an API or ChatGPT to upload your files there and have them turned into text. But we still have the same problems: it costs money, and you're sending your audio off to something else. So today I'll show you how you can transcribe audio to text on your own local machine. You can do it on your PC or laptop or whatever and use Whisper locally. You don't even need to be connected to the internet. So I'll shut up and show you how to do this. Okay, for this demonstration I'm using Ubuntu under WSL in Windows 11. However, this will be exactly the same if you're running a native Ubuntu or native Linux system. The biggest things you need are Python, Python environments, and FFmpeg, and I think it'll work pretty much identically across most Linux distributions with those installed. So first we'll do a sudo apt update and upgrade. Okay, now we've got everything ready to go, and we're going to install FFmpeg. Awesome. Now, if you have an NVIDIA GPU, you must install the NVIDIA drivers for this to work properly, and you can verify they're installed by typing nvidia-smi. You should see something like this. Okay, now we need to create our own Python environment. So I'm gonna create a folder called whisper-test. Okay, and then I'll create a new Python environment, and then we'll activate that environment. And you should see this little (whisper-test) before your prompt, so you know you're in the environment.
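The setup steps described above can be sketched as follows. This is a hedged sketch for Ubuntu/Debian; the folder and environment name whisper-test is just the example used in the video, and the NVIDIA check only applies if you have that GPU:

```shell
# Refresh package lists, upgrade, and install FFmpeg (Whisper uses it to decode audio)
sudo apt update && sudo apt upgrade -y
sudo apt install -y ffmpeg python3-venv python3-pip

# Optional: if you have an NVIDIA GPU, verify the drivers are installed
nvidia-smi

# Create a working folder and a Python virtual environment, then activate it
mkdir whisper-test && cd whisper-test
python3 -m venv whisper-test
source whisper-test/bin/activate   # the prompt should now show (whisper-test)
```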
And now we need to install the Rust setuptools package. All right, perfect. And now we need to install Whisper. So we're gonna download the OpenAI Whisper package into our Python environment and run it. And to install it, we type in pip install -U openai-whisper. It's going to install a ton of stuff, so grab an ice water and chill out for a little bit. Okay, now we've got it installed and ready to go. Okay, and you see here I have my sample file. So now we're going to try a quick transcription. So I'll type in whisper, the name of the file, and then the model I wanna use. We've got several different models available here, everything from tiny to large. So I'm gonna start out with tiny just for the heck of it and see how it transcribes. Now, if you use the tiny model and it doesn't transcribe properly, you can keep stepping up the model size until you get the results you want. Just remember that your hardware is going to be taxed harder and you're going to have to have better hardware

Speaker 2: the larger the models get. And as you can see, it automatically detects the language.
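The install and first transcription run look roughly like this, inside the activated environment. Here sample.mp3 stands in for whatever audio file you have; per the Whisper README, the model sizes run tiny, base, small, medium, and large:

```shell
# setuptools-rust is needed on some systems to build Whisper's dependencies
pip install -U setuptools-rust

# Install Whisper itself (pulls in a lot of packages, including PyTorch)
pip install -U openai-whisper

# Transcribe with the smallest model first; step up if accuracy isn't good enough
whisper sample.mp3 --model tiny
```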

Speaker 1: And there's the recording. And this is really accurate and really fast. Let's time it really quick.

Speaker 2: And as you can see, this one took about seven seconds.
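To compare speed the way the video does, you can prefix the command with time. Again, sample.mp3 is a placeholder filename:

```shell
# Wall-clock comparison between the smallest and largest models
time whisper sample.mp3 --model tiny
time whisper sample.mp3 --model large
```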

Speaker 1: Let's try the large model. And it's worth noting here that this took two minutes and one second with the large model, and it's not as accurate. So your results are going to vary; things are going to be kind of all over the place. So far in the tests that I've done, tiny and base have done really well for my transcription. However, at some point you might need to go to a bigger model. So what else can you do with this tool? Well, let's take a script straight from the GitHub page. So we'll import whisper. And then we'll load a model. And then we'll create a variable named result, and we'll use model.transcribe on my sample file. And then we'll print that result. And there we go. Nice output. And this doesn't even have the timestamps in it like it did when we ran the executable, so this is a nice, clean text output. Now, of course, you can also write this text to a file. And there we go. We'll check our output, and it writes to a text file. So there's a lot you can do with this. It's really cool. So in this tutorial, we installed Whisper and we played around with it a little bit. And it's super easy to use and very performant. So I'm going to do some more thorough testing with it, maybe build some kind of app or something cool. So if you like this kind of stuff, be sure to subscribe to this channel.
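A sketch of the script described above, following the example on Whisper's GitHub page. The file names sample.mp3 and output.txt are placeholders, and this needs the openai-whisper package and a real audio file to run:

```python
import whisper

# Load a model; "tiny" matches the earlier CLI test, and larger
# model names trade speed for (potentially) better accuracy
model = whisper.load_model("tiny")

# Transcribe the sample file; the returned dict holds the full text
result = model.transcribe("sample.mp3")

# Plain text output, without the timestamps the CLI prints
print(result["text"])

# The same clean text can also be written to a file
with open("output.txt", "w") as f:
    f.write(result["text"])
```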
