[00:00:00] Speaker 1: Hey everyone, Kevin here. Today, we're going to look at how you can generate AI videos for free directly on your PC. No usage limits, no subscriptions, and everything stays private. We'll walk through how to run some of the most popular open source AI video models, including LTX-2 and WAN. And it's not just video. These models can also generate audio and narration. "That's it, dad's lost it." "And we've lost dad." "Stop being so dramatic, Jess. He's just having fun." "Waaaaa." That's an example clip that I generated on my computer. Pretty impressive. Let's get started.

First off, you'll want to check your system requirements, since this is running locally on your computer. Generating AI video is resource intensive, but even lower end machines can handle it surprisingly well. You'll want a computer with a dedicated graphics card, ideally an NVIDIA GPU. You can get this running with six to eight gigabytes of VRAM, but more VRAM will give you better performance and also longer clips. To check how much VRAM you have on a Windows PC, press Ctrl+Shift+Esc to open up Task Manager. Over on the left-hand side, click on Performance and then click on GPU. Right down here, you'll see dedicated GPU memory. That's the number that matters. Here, you'll see that I have 16 gigabytes. Again, you'll want at least six gigabytes to get started.

To make this easy, we're going to use a tool called Pinokio. If you've ever installed AI tools and different models before, you know it can get complicated fast. You have to worry about different Python versions, CUDA, and all of these different dependencies. Pinokio handles all of that for you. The best way to think of it is as a one-click installer for AI tools, similar to Steam for games. To get Pinokio, head to the following website. You'll find a link right at the bottom of the screen. Once you land on this page, click on the button that says download. You can install Pinokio on Mac, Linux, and also Windows. In my case, I'm using Windows, so I'll select this and then click on download for Windows. Go ahead and download it and then run through the install process. During the install, you'll be prompted to choose a name for the project. I'm good with the default, so over here, let's click on download. This pops up an install screen. Let's click on install. The first install can take a few minutes since it's downloading all of the models and all of the dependencies, but the good news is you only have to do this once.

Once you finish installing Pinokio, you'll land here on the main welcome screen. You'll see a button that says Discover. Let's click on this. This shows you a list of all the different verified scripts that you can install. Now, I think it's worth browsing through here. You'll find things like image generators, voice cloning tools, text-to-speech, and a whole lot more. For this video, we're going to use a script called WAN2GP. Right up on top, you can type that in, WAN2GP. You'll find the text at the bottom of the screen as well. Right over here, I see the option, so let's click on that. This gives us a simple interface for generating AI video and, more importantly, lets us run multiple video models in one place, including WAN, LTX, and others. Right over here, let's click on this install button. The nice thing is it's just a one-click install with Pinokio. I'll click here and then let it run. On the next screen, we can see all the different dependencies that we need to install.
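Quick aside before the install kicks off: if you'd rather check your VRAM from a script instead of Task Manager, here's a minimal sketch, assuming you have an NVIDIA GPU and the nvidia-smi utility that ships with the NVIDIA driver. It's purely illustrative and not part of the Pinokio setup.

```python
# Query dedicated GPU memory with nvidia-smi (installed alongside the NVIDIA driver).
# Illustrative sketch only; the GUI route via Task Manager shows the same number.
import subprocess

def total_vram_mb() -> int:
    """Return total dedicated GPU memory in megabytes for GPU 0."""
    result = subprocess.run(
        ["nvidia-smi", "--query-gpu=memory.total",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    )
    return int(result.stdout.strip().splitlines()[0])

if __name__ == "__main__":
    mb = total_vram_mb()
    print(f"Dedicated GPU memory: {mb / 1024:.1f} GB")
    if mb < 6 * 1024:
        print("Below the roughly 6 GB minimum mentioned above.")
```

Either way, the figure you're after is the same dedicated GPU memory number that Task Manager reports.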
If we weren't using Pinokio, we'd have to go through and manually install all of these dependencies. But again, Pinokio makes this a lot easier. At the very bottom, let's click on install. Once the install finishes, you'll see WAN2GP inside Pinokio. To launch it, you simply click on it. And here on the top bar, you see the option to start it. Let's click on that icon. That might take a little bit of time, but once that wraps up and everything loads, you'll be taken to a simple web interface.

Right up on top, you can see that we're currently in the web UI. Right down below, we have a number of different tabs. I'm currently in the video generator tab because we want to generate videos. Now, this is one of the cool parts. Right here, we have a dropdown where you can choose the AI video model that you want to run on your PC. And we have some really popular options. Right here, we have WAN 2.2. And right down below, one of my favorites, we have LTX-2. In this video, I'm going to use the LTX-2 model. This is a relatively new AI video model, and one of the things that makes it interesting is that it can also generate sound or music alongside the video output. Let's select LTX-2. All in all, I've been very impressed by this model. In fact, some of the output is very similar to what you would get from Google's Veo 3 or even OpenAI's Sora 2.

Up here in this dropdown, you can see how many parameters the model has. In this case, it's a 19 billion parameter model, which in practice means it takes about 35 to 40 gigabytes of disk space just for the model weights. Next to that, we have another dropdown with two different options: default and distilled. So what's the difference, and which one should you choose? The distilled version is roughly half the size of the full model, so about 20 gigabytes, and it's also much more practical to run on consumer GPUs. I have a consumer GPU, so I'm going to select that, and I'm assuming most viewers will also have a consumer GPU. I've found that the quality difference is fairly small, but performance and stability are noticeably better, so for that reason, I'm going with distilled.

Before we actually run this model, let's make a few changes to the configuration. Up on the top tabs, let's click on configuration. Then we have another row of tabs; let's click on performance and scroll down just a little bit. Here we'll see the memory profile, and you have a few different options. Take a look through these and choose the one that most closely matches your PC. My PC has a lot of RAM and I'm also close to 12 gigabytes of VRAM, so I'm going to choose profile two. Once you make your selection there, right up on top, click back into the video generator.

If we look down just a little bit, there are a few different ways that we could run this model. We could run it with a text prompt only. That's where we use text to describe what we want the AI video to look like. But we also have a few other options. You can also provide an image along with text that describes what you want the video to look like. In fact, you can even provide an end image, or where the video should end. And right over here, you can even continue a video. So you could provide a video and then it'll extend it. In a little bit, we'll look at some of these different options. But for now, let's start with a text prompt only. I'll click on this. If we scroll down a little bit, you'll see the prompt field.
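Before we get into prompting, a quick back-of-the-envelope on those model sizes: parameter count times bytes per weight gives you the rough disk footprint. The little sketch below assumes plain 16-bit weights and simply halves the result for the distilled build; that halving is my own simplification for the arithmetic, not something the interface tells you.

```python
# Rough model-size arithmetic: parameters * bytes per parameter.
# Assumes 16-bit (2-byte) weights; real checkpoints carry extra overhead,
# which is why the quoted range is 35-40 GB rather than an exact 38 GB.
def weights_gb(params_billions: float, bytes_per_param: float) -> float:
    """Approximate weight size in decimal gigabytes."""
    return params_billions * bytes_per_param  # 1e9 params at N bytes = N GB per billion

full_ltx2 = weights_gb(19, 2.0)  # ~38 GB at 16-bit precision
distilled = full_ltx2 / 2        # the distilled variant is described as roughly half
print(f"Full LTX-2 weights: ~{full_ltx2:.0f} GB")
print(f"Distilled variant:  ~{distilled:.0f} GB")
```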
The prompt field is where we describe what we want the AI video to look like. They've provided a sample prompt that you could use, and in fact, if you just want to test things out, you could use it as is, or you could delete it and type in your own prompt. So here, I'll type in my own. Now, here's one of the cool things: you can even call out if you want narration in your video. So here I say, "she says," and then I have a line that I want one of the characters to say. That's one of the great things about this model: it produces both sound and narration. As with most AI models, the more descriptive your prompt is, the better the results tend to be. If you want help refining prompts, tools like ChatGPT or Gemini can be great for brainstorming or adding detail.

Underneath the prompt field, you can choose the quality level, all the way up to 1080p, and next to that, you can also choose the aspect ratio. I recommend starting with a lower resolution. Lower resolutions render faster and are also easier on your system, and of course, you can always increase the quality later once you've confirmed that everything is running smoothly. Here in the aspect ratio dropdown, if you want a vertical video, say for TikTok, you could go with 9:16. If you want a horizontal video, or what you'd traditionally find on YouTube, you could select 16:9. I'll select 16:9.

Right down here, you can set the duration for your video. By default, it's set to 24 frames per second, and with this slider, you can go all the way up to about 30 seconds of video generation. Of course, that'll take longer to generate. Or you could go all the way down to just a frame. I'm going to go with about 10 seconds. At 24 frames per second, 10 seconds works out to 240 frames, so here, I'll enter in 240.

Once you finish entering your prompt and configuring all the different settings, over on the right-hand side, you can click on generate. Now, here's one of the cool things. Once you click on generate, you can go back over to the left-hand side, enter additional prompts, configure the different settings, and then click on generate again, and it'll add the next one to a queue. So as soon as your first video finishes generating, it'll automatically jump to the next video that it has queued up. Right up above, I'll click on generate.

If you scroll up to the top right-hand corner, you can check the progress of the AI video generation. It looks like it finished generating the video in about one minute and 55 seconds. I've found that when you run a prompt the first time, it usually takes a little bit longer because the model needs to load into memory. After that, generation speeds up significantly. In fact, I've gotten video generation down to about 30 seconds a clip. Not bad. Right here, we can see a preview of the video file that it generated. Let's preview how this turned out. "We need the brand to feel more authentic." "The cookies will tell us when the time is right." "Should we reschedule?" Not bad at all for a first video, and it even included some dialogue. Of course, if you want to make some modifications, or maybe you want to refine the video, over on the left-hand side, you could refine your prompt and then generate again. And a cool trick: you can even run multiple variations of the same prompt. If you scroll down just a little bit, they have something called advanced mode.
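If you prefer to think in clip length rather than frame counts, the conversion is just seconds times frames per second. Here's a tiny sketch assuming the default 24 fps shown in the interface:

```python
# Convert a target clip length into the frame count the duration field expects.
# Assumes the default 24 frames per second.
def seconds_to_frames(seconds: float, fps: int = 24) -> int:
    return round(seconds * fps)

print(seconds_to_frames(10))  # 240, the value entered above
print(seconds_to_frames(30))  # 720, roughly the top end of the slider
```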
When you toggle advanced mode on and scroll down just a little bit, you can choose the number of generated videos per prompt. So as an example, you could type your prompt and maybe you'd like to see four different possible outcomes from it. That's another option you have. Here, I'll move that back down to one.

Along with generating an AI video from text, you can also start from an image, and this is really cool. Right up on top, let's close out of advanced mode, and right at the top, let's select start video with image. Let's scroll down a little bit, and over here, we can drop in media. Here, I have a nice photo. I'm going to drop this in. I generated it with AI, but I would love to see it animated. You could even upload multiple images. Right down below, let's scroll down, and here we have the prompt field again. I'll remove all the text from my previous prompt, and here, I'll type in a new prompt: the person continues walking towards the castle as the snow continues to fall. Let's see how that turns out. Here, I could choose the quality and the aspect ratio. I think all of this looks good, so over on the right-hand side, I'll click on generate. Right up on top, let's see how it turned out. That's cool.

Now, right down below, you'll see the output that you've generated so far, but you'll only see a few items down here, so you might be wondering, how do you view all of the output that you've generated? Up at the very top, there's a tab that shows the total space that WAN2GP consumes. Let's click on that, and that'll open up File Explorer. Right in here, you'll see that we're in the WAN.git folder, basically the project folder. Right over here, there's a folder titled app. Let's click on that, and if we look down, there's one titled outputs, and here you can see all the different AI videos that you've generated.

What's so impressive here is that this all runs locally. No subscriptions, no usage limits, and you stay in control of your files and your hardware. Let me know what you think in the comments. Thanks for watching, and please consider subscribing.
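One last bonus for anyone who prefers a script over File Explorer: you can list everything in that outputs folder with a few lines of Python. The path below is a guess pieced together from the folders we just clicked through (a "wan.git"-style project folder with an app/outputs directory inside the Pinokio install), so treat it as an assumption and adjust it to match your own machine.

```python
# List generated clips from the WAN2GP outputs folder, newest first.
# NOTE: this path is an assumption based on the folders shown above;
# point it at the app/outputs directory inside your own Pinokio install.
from pathlib import Path

outputs = Path.home() / "pinokio" / "api" / "wan.git" / "app" / "outputs"

for clip in sorted(outputs.glob("*.mp4"),
                   key=lambda p: p.stat().st_mtime, reverse=True):
    size_mb = clip.stat().st_size / 1e6
    print(f"{clip.name}  ({size_mb:.1f} MB)")
```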