[00:00:00] Speaker 1: So I just built this entire ad all inside of one single canvas: no switching between tabs, no downloading and re-uploading files for references, everything connected, everything all in one place. And I'm about to show you exactly how I did it using a brand new tool inside of ElevenLabs called Flows. If you've used ElevenLabs Studio before, you know it's a timeline. You lay things out left to right, you trim, you layer, you edit. It's a very linear creation process. But when creating with AI, the process is different. You often start with an idea and go in a bunch of different directions, making the process non-linear, which calls for a new way to create. Flows is a node-based visual canvas where you have access to the best AI image, AI video, and AI audio models in the world, allowing you to map out your entire creative pipeline from start to finish to create any AI asset you need. So no more going back and forth between generations: if there are any edits you'd like to make, you simply swap out a node or connect a new one, saving you hours in the creative process. And once you've built out a flow, you can save it, edit it, and duplicate it as many times as you need to automate your creative pipeline. Here's exactly how it works. To get started with Flows inside ElevenLabs, we can go ahead and click on Flows in the left toolbar. Then we want to click New Flow. Flows is a visual canvas that allows us to add different nodes to generate different types of assets and then string them together, so we can use them as references and create a repeatable workflow. So let's go ahead and add an image generation node: I simply right-click on the canvas and select the image generation node. Now we have our very first node. It works just like in Image and Video: we've got the settings at the bottom, and I can select from the different image generation models.
So let's just use Nano Banana 2. We can then select the aspect ratio and also the resolution; let's generate in 2K. Below that, I can type the prompt I want to use. So let's say I want to generate a baseball player on the pitch: I simply type that prompt and click Run. The generation then starts, and while that happens, I can go and add more nodes. So let's say I wanted to then turn this into a video. What I could do is right-click next to it and add a video generation node. Then I could use the image I've just generated as the start frame of my video, and I can do that by simply connecting the two nodes by dragging and dropping. So I'll drag from here to the start frame input. Now I can describe the movement I want to happen, so I'll type "baseball player throws ball," and at the bottom, again, we can select the model. This time, we have the different video models to choose from; we can select the aspect ratio and a bunch of other settings as well. So here I can quickly generate it in 1080p and click Run. While that generates, let's build out the flow a little more. Right here, we generated this image based on this prompt. But let's say I wanted to try this prompt with a few different models. Well, what I can do is add a text node. I'll drag it over to the left and paste the prompt in, just like so, and now I can use this text node as the prompt for this node. As you can see, the prompt has now been input right here. But let's say I want to try generating it with a different model, such as Seedream. Well, I can drag from this text node, create an image generation node, and this time select Seedream: I'll search Seedream 4.5, click it, and then simply click Run.
As you can see, now we have a totally different generation, so I can quickly compare the two. You'll notice over here on the right that our video has finished generating. The power of Flows really comes in when we want to make edits to a workflow we've already built. For example, let's say I'm a really big fan of this image, but I want to change the person within it. Instead of regenerating the entire thing, what I can do is separate these two nodes by deleting the connector between them, and then add in my own reference image. I can either click the Upload Media button right here, or I can head into my files and drag and drop the reference image I already have. Now I'm going to create an Edit Image node, just like so, and I'm also going to feed in the reference image I've just added as an input. So it's going to use the baseball picture we've generated and also the photo of me. Here we can then describe the change we want to happen: we could use the prompt "place the man from," and then use the at sign to tag the specific reference. Now, if we connect these two back up, I can use the new generation that's about to appear as the start frame for the video we want to generate, reusing the same prompt. And as you can see, now we've got me placed within the image, and it's a different character. Where Flows really saves you a lot of time is when you want to make a small tweak to the original prompt without having to go through the exact same process again. Take a look at this. Let's say I want to make a change to the initial prompt I had: instead of just being vague and saying "a baseball player on a pitch," we could say "a close-up shot of a baseball player on the pitch, about to swing the bat."
The magic is that now we can click Run from here, and Flows regenerates everything downstream on the canvas, instead of us having to click and drag all the references again, type in the prompts again, and go through that whole process manually. As you can see, we've now regenerated the entire flow by making a small tweak to the original prompt. But now, let's say the result at the end doesn't work, or it turns out we think the generation from Seedream 4.5 is the better one. Well, we could swap that one in: we remove the connector coming from Nano Banana 2, and the references we tagged automatically update. Then maybe we actually want to generate this with Kling 2.6 instead. We can do that by dragging the connector and creating a video generation node, and then selecting our favorite Kling model; let's use Kling 2.6. Here again, we might add a text node, because we want to use a new prompt for both of these nodes. So we could say "slow motion shot of man swinging a baseball bat," then delete the old prompt and connect this text node to this node and also to this one. Now this prompt is being used by these two nodes right here, generating with Veo 3.1 and Kling 2.6. Then I can go back to my Seedream 4.5 generation, click on the dropdown arrow right here, and click Run from here, so it regenerates everything afterwards with all of the edits I've made. I've made a ton of different changes that I can easily see visually and apply in seconds, and as you can see, the entire flow has been regenerated once again with the tweaks we made. So now I can go to the end and choose between my favorites: the generation from Kling 2.6 or Veo 3.1. And I do want to mention that for any of the nodes at the top, you can cycle through the history of the generations.
And so we can see the previous generations we've made right here. Once I've chosen my favorite generation, what I can do is combine it with audio. To do that, I'm going to drag the connector from the video output and click Mix with Audio. I've now created a composition node, which I can also create by clicking Composition at the bottom, just like so. If I wanted to combine this with speech or sound, I could drag the audio input to create a new node for audio: either text to speech or sound effects. So let's say I wanted the noise of the baseball hitting the bat. I could click Sound Effects and then literally describe "baseball hitting bat," just like so. Now I can right-click and click Run from here, so it runs the sound effect generation and also the composition node, and we have a little something that looks like this. As you can see, we've now got the final scene that we can use in our finished product. You could then continue creating infinitely in this canvas, using the same shots and your characters as references, and link everything together to save you time. It's a new way of creating: instead of going linearly, we're creating from the middle and going outwards. So hopefully, from what you've just seen today, you can quickly understand how Flows can end up saving you a lot of time creating the assets you need using AI. And the best part is that you can also share your flows, so you can create templates and share them with your colleagues to create faster than ever. If you have any questions about how to use Flows inside of ElevenLabs, let us know in the comment section down below, and if you enjoyed this video and want to see more, please hit that like button and don't forget to subscribe. Thanks for watching.
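For readers curious about the mechanics, the "Run from here" behavior in the walkthrough above can be sketched as a tiny dependency graph: each node computes from its upstream inputs, and re-running one node re-runs everything downstream of it. This is only an illustrative sketch with hypothetical names (`Node`, `run_from`); the real Flows canvas is a visual tool, and this code is not the ElevenLabs API.

```python
# Minimal sketch of a node-based canvas: nodes cache results, and
# "Run from here" recomputes a node plus all of its downstream consumers.
# All names here are hypothetical illustrations, not the ElevenLabs API.

class Node:
    def __init__(self, name, op, inputs=()):
        self.name = name            # e.g. "prompt", "image_gen"
        self.op = op                # stand-in for a model call
        self.inputs = list(inputs)  # upstream nodes this one depends on
        self.value = None           # cached result of the last run

    def compute(self):
        # Recompute this node from the current values of its inputs.
        self.value = self.op(*(n.value for n in self.inputs))
        return self.value


def downstream(start, graph):
    """Nodes reachable from `start` via consumer edges, topologically ordered."""
    order, seen = [], set()

    def visit(n):
        if n in seen:
            return
        seen.add(n)
        for consumer in graph:          # anyone that uses n as an input
            if n in consumer.inputs:
                visit(consumer)
        order.append(n)                 # post-order: consumers first

    visit(start)
    return list(reversed(order))        # dependencies before consumers


def run_from(start, graph):
    """Re-run `start` and every node downstream of it ("Run from here")."""
    for n in downstream(start, graph):
        n.compute()


# Usage: prompt -> image -> video, then tweak the prompt and re-run.
prompt = Node("prompt", lambda: "a baseball player on the pitch")
image = Node("image_gen", lambda p: f"image({p})", [prompt])
video = Node("video_gen", lambda img: f"video({img})", [image])
graph = [prompt, image, video]

run_from(prompt, graph)  # regenerates the image and then the video, in order
```

Swapping a model or prompt then corresponds to replacing a node's `op` (or rewiring its `inputs`) and calling `run_from` on it, which is why one small edit can regenerate the whole downstream flow without redoing the earlier steps.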