Speaker 1: In this video, I'll show you the correct way to add DeepSeek R1 to your n8n agents. The fact that you clicked on this video means you most likely already know what DeepSeek R1 is, but in a nutshell, it's the new reasoning model from DeepSeek. Looking at the benchmarks, this model is on par with OpenAI's o1 model, but the big difference is the cost: on the input side, OpenAI charges about $15 per million tokens where DeepSeek R1 is only about $0.55, and on the output side it's $60 per million tokens compared to about $2. And because this model is open source, we can run it locally on our own machines for absolutely free. I've even seen people run the smaller models on their mobile devices. So in this video, I'll show you how to use both the paid cloud API and the local model in your n8n projects. First, let's have a look at running the model locally. Go over to ollama.com, then download and install Ollama for your operating system. After installation is complete, go to Models, search for DeepSeek R1, then select any of these models. For most consumer hardware, the 7 billion parameter model or the smaller 1.5 billion parameter model should be sufficient. Since I've got an RTX 4070, I'll go with the 14 billion parameter model. Then simply copy this command and run it in your terminal or command prompt; this will download and install the model. Afterwards, you should be able to send a message, and if you get a response, everything is working just fine. Back in n8n, I've set up a very simple AI agent. Let's click on Chat Model and select the Ollama Chat Model node, then rename this node to something like DeepSeek. Within the credentials dropdown you can basically leave everything as is, but you might notice that when you save, you get a "couldn't connect with these settings" error. If you do get this, you can solve it by changing localhost to 127.0.0.1, then try again.
And this time it worked. Let's save this and close the pop-up. Now we can see all the available models on our machine, and all we have to do is select the DeepSeek model that we just downloaded. Back in the canvas, let's open up the chat window and see if this is working by just saying hello. Because this is a reasoning model, we see these think tags, which show all the reasoning steps. But of course, this was a very simple query, so let's try something like: if Alice is taller than Bob, Bob is taller than Charlie, and Charlie is taller than David, who is the shortest person? Now if we look at the response, we can see how the model reasoned through this puzzle. There's a lot of back and forth in the reasoning, and finally it gives us the correct answer, saying that David is the shortest person. So if you are running it locally, or you want to self-host this large language model, then Ollama could be the perfect solution for you. Now let's have a look at the paid cloud service. If we go to the chat models, you will notice that there's no dedicated DeepSeek chat model node. But what I do want to show you is that if we go to the DeepSeek API documentation, they give us this example code snippet over here. For both the Python and Node.js examples, we can see that DeepSeek is using the OpenAI SDK as a wrapper for interacting with their model. This does not mean that OpenAI will be called at any point during this process; they're simply using the OpenAI SDK to standardize the way we send messages to and receive responses from the DeepSeek models. This means that in n8n, all we have to do is add the OpenAI Chat Model node. Let's also rename this node to DeepSeek Cloud. Then, in the list of credentials, let's click on Create New Credential. Now we have to provide an API key as well as a base URL, and as I mentioned, we won't actually be calling OpenAI, so we will replace this URL to point to DeepSeek instead.
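If you'd like to see what the Ollama Chat Model node is doing behind the scenes, here is a minimal Python sketch that talks to Ollama's local REST API directly. The URL assumes Ollama's default port 11434, and the `deepseek-r1:14b` tag is the one pulled earlier in the video; swap in whichever size you actually downloaded. The helper that strips the think tags is my own addition for when you only want the final answer:

```python
import json
import re
from urllib import request

# Ollama's local REST API; 11434 is its default port.
OLLAMA_URL = "http://127.0.0.1:11434/api/chat"

def strip_think(text: str) -> str:
    """Remove the <think>...</think> reasoning block from an R1 reply."""
    return re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()

def ask(prompt: str, model: str = "deepseek-r1:14b") -> str:
    # Use whichever tag you pulled, e.g. "deepseek-r1:7b" or "deepseek-r1:1.5b".
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # return one complete JSON reply instead of chunks
    }).encode()
    req = request.Request(OLLAMA_URL, data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        reply = json.load(resp)["message"]["content"]
    return strip_think(reply)
```

Calling `ask("Who is the shortest person?")` with Ollama running should return just the final answer, with the reasoning block removed.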
And in fact, let's do that now. Back in the official DeepSeek documentation, we can simply grab this URL from this page. Just below it, we actually get the URL that ends with v1, and that's the one we want to use as our base URL. Let's copy this and add it to n8n, like so. Now for the API key, go to deepseek.com and click on API Platform. Because this is a paid service, you do have to add some credit to your account, so go ahead and add about $2. The service is very affordable, so this credit will actually last you a long time. Then go to API Keys, click on Create New API Key, and give it a name like "n8n tutorial". Let's copy this API key and add it to n8n, then click on Save. If everything was set up correctly, you will receive this green message. Let's also rename this credential to something like DeepSeek API, close the pop-up, and select that credential. Now take note that this will not list all the models on DeepSeek; this is very common if you use services like OpenRouter. What we need to do instead is manually enter the model name by clicking on Expression. Now you might be wondering what the model is actually called, so again, let's refer back to the documentation. Here we can see that the chat model, which is DeepSeek V3, is called deepseek-chat, whereas the reasoning model is called deepseek-reasoner. I'm going to copy this name and add it to n8n, like so. Let's go back to the canvas and try this out. In the chat window, I'm going to clear this chat and give it that same puzzle again. The response time was super quick, but when using the OpenAI node, we actually don't see the reasoning steps. Some people don't want to include the reasoning steps in the results, so this might work for you or not, and n8n might change this behavior going forward. If you do want to see the reasoning steps as well, then you could use the HTTP Request node instead.
So as an added bonus, I'll show you that process as well. I'm actually going to remove our agent altogether and simply add a manual trigger for now, then add the HTTP Request node. Let's change the method from GET to POST. For the URL, let's copy the URL from the API docs and add it here. For authentication, let's select Generic Credential Type, and within the generic auth type, let's select Header Auth. In this dropdown, let's create a new credential. For the name, let's copy "Authorization", and for the value, we need to enter "Bearer", a space, and then our API key. So I'm simply going to copy "Bearer" and a space, and I'll add my API key after that. Let's save this and close the pop-up, then enable Send Body. For the first parameter, let's add the model name, which was deepseek-reasoner. Let's add another parameter, which we'll call messages. As per the example, messages is an array of values, so to keep this simple, I'll copy this object, which contains a user message. Let's change this from Fixed to Expression, and I'm going to expand this. Here, we need to start off with an array, and within this array we can paste that user message. Let's change the text to our riddle, like so, and close this pop-up. All we have to do now is test the step. Okay, and this failed because it seems to have injected some backslashes. These are escape characters, and they appear because we actually have to change this expression into a JavaScript object. To do that, we can simply add double curly braces: opening curly braces at the start and closing curly braces at the end. Now we can see that this is indeed an array of values. Awesome. Let's try this again, and now we do get a response back. Looking at the results, we can see our final answer within this content field, and this is what the OpenAI node was returning in the chat window.
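The HTTP Request node setup above boils down to one POST with a bearer token and a JSON body. Here is a stdlib-only sketch of the same call, again assuming the key lives in a `DEEPSEEK_API_KEY` environment variable; note the messages value is a real JSON array, which is exactly what the double curly braces fixed in the n8n expression:

```python
import json
import os
from urllib import request

API_URL = "https://api.deepseek.com/chat/completions"

def build_payload(prompt: str) -> dict:
    # The same body the HTTP Request node sends: a model name plus a
    # messages array -- an actual array, not an escaped string.
    return {
        "model": "deepseek-reasoner",
        "messages": [{"role": "user", "content": prompt}],
    }

def ask(prompt: str) -> tuple[str, str]:
    req = request.Request(
        API_URL,
        data=json.dumps(build_payload(prompt)).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ.get('DEEPSEEK_API_KEY', '')}",
        },
    )
    with request.urlopen(req) as resp:
        msg = json.load(resp)["choices"][0]["message"]
    # The final answer comes back in "content"; the chain of thought that
    # would sit inside the think tags comes back in "reasoning_content".
    return msg["content"], msg["reasoning_content"]
```

Calling `ask(...)` with a funded API key returns both pieces, which is the whole advantage of this route over the OpenAI Chat Model node.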
But we also get a separate object containing the reasoning content. And this is everything that would typically sit within those thinking tags. If you found this video useful, then hit the like button and subscribe to my channel for more n8n content. And check out this other video where we build an advanced multimodal AI agent that we can access from anywhere.