Speaker 1: Hi everyone. Let's see how you can run DeepSeek on your local computer. DeepSeek has taken the AI world by storm. They have published their models on Ollama and Hugging Face and made them open source, and the quality and performance of their models actually compete with OpenAI's best models. As of now, I'd say DeepSeek offers one of the best models at a very low cost: with a relatively small number of parameters, you can achieve very high performance. We will be using Ollama here, and Ollama will act as our local server. So go to ollama.com and download Ollama for your operating system. I'm using Windows, so I select Windows and it starts downloading. While it downloads, let me show you how to get started with DeepSeek in the browser, just like ChatGPT. You know how you work with ChatGPT at chatgpt.com; for DeepSeek, you click Start Now, or go directly to chat.deepseek.com. It works much like ChatGPT: it asks you to log in, so I click Log in with Google and sign in with my account, and I get a chat screen. I say hi, and it replies, "Hello, how are you?" I can ask, "How are you?" These are beginner questions, of course; later I'll show you how to ask the same questions on your local computer using Ollama. Meanwhile, the Ollama installer has finished downloading.
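Once the Ollama server is running, it exposes a local REST API (by default on port 11434), so the same chat you do in the browser can be driven from a script. Here is a minimal sketch of building a request for Ollama's `/api/chat` endpoint; the model tag and prompt are just examples, and actually sending the request of course requires the server to be up.

```python
import json

def build_chat_request(model, prompt):
    """Build the JSON body Ollama's /api/chat endpoint expects."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # ask for one complete reply instead of a token stream
    }

payload = build_chat_request("deepseek-r1:1.5b", "hi")
body = json.dumps(payload)

# To actually send it (requires Ollama running locally):
#   import urllib.request
#   req = urllib.request.Request(
#       "http://localhost:11434/api/chat",
#       data=body.encode(),
#       headers={"Content-Type": "application/json"},
#   )
#   resp = json.loads(urllib.request.urlopen(req).read())
#   print(resp["message"]["content"])
```

This is the same API the chat UI and client libraries talk to, which is why Ollama is described as acting like a server.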
Let's run the Ollama installer, OllamaSetup.exe, on Windows. It might take a while to install. I click OK, then Next, and it says Ollama is already running, so I close that application; I already have Ollama on this computer and am just reinstalling it. Besides that, if you have a GPU in your computer, you should also install the GPU build of PyTorch; otherwise your setup might not use the GPU. So go to the PyTorch site, select your particular configuration — I select Stable, Windows, Python, and CUDA — and then simply run the generated command to install PyTorch with CUDA. I already have PyTorch, so I'm skipping that. While Ollama installs, let me show you the DeepSeek model. Go to the model search and you will clearly see the DeepSeek model there; I click on it, and you can see that DeepSeek provides 1.5 billion, 7 billion, 8 billion, 14 billion, and several other model sizes. The 7B model comes as a 4.7 GB download, and I currently have 6 GB of GPU memory. The 1.5B model is very small, only 1.1 GB, so it can even run on mobile devices. As you scroll down, you will see the command for running DeepSeek. The largest model will not fit on my computer, so I'll be using the 1.5B or the 7B model. Scrolling further, you can see all the other variants, and finally the benchmarks, where you will see DeepSeek competing with OpenAI's o1 model.
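Choosing between the 1.5B, 7B, 8B, and larger tags comes down to how much GPU memory you have. As a rough illustration (not an official sizing rule), here is a small helper that picks the largest deepseek-r1 tag likely to fit; the 1.5B and 7B download sizes are the ones shown on the model page, while the 8B and 14B figures and the headroom factor are assumptions for the sketch.

```python
# Approximate download sizes per tag; 1.5b and 7b are from the Ollama model
# page, 8b and 14b are rough assumptions for illustration.
MODEL_SIZES_GB = {
    "deepseek-r1:1.5b": 1.1,
    "deepseek-r1:7b": 4.7,
    "deepseek-r1:8b": 4.9,   # approximate
    "deepseek-r1:14b": 9.0,  # approximate
}

def pick_model(vram_gb, headroom=1.2):
    """Return the largest tag whose size, with some headroom for the
    KV cache and activations, fits in the given GPU memory."""
    fitting = [(size, tag) for tag, size in MODEL_SIZES_GB.items()
               if size * headroom <= vram_gb]
    if not fitting:
        # Smallest model as a fallback; it can even run on CPU.
        return "deepseek-r1:1.5b"
    return max(fitting)[1]
```

On a 6 GB card like the one in the video, this picks a model in the 7B–8B range, which matches the speaker's choice.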
So DeepSeek has really taken the AI scene by storm. Having said that, let's come back: Ollama is installed, up and running. Now open your terminal and run the DeepSeek model. I'm going to run the 1.5B model, so I copy the command from the model page and paste it in: ollama run deepseek-r1:1.5b. First it pulls the model onto your local computer, which might take some time depending on your internet speed. Excellent — the model has been downloaded, it does a final verification, and everything is up and running. I say hi; it thinks for a moment — think, think, think — and replies, "How can I assist you today?" Perfect. Now I ask, "What can you do for me?" and it says it can do a lot of things. Then I ask, "Can you do coding for me?" and it tells us yes, it can. Alright, let's try a coding question: "Can you write code for a Streamlit chatbot using Ollama and LangChain, in Python?" and see whether it can write the code. It thinks, and wow, this is awesome. First it says you need to install LangChain, and then it writes "from langchain.llms import OpenAI" — that import is quite old, so it seems the model was trained on older data; LangChain has since changed how you import OpenAI and the other integrations. It goes on with llm = OpenAI(...) and so on. So the details are not exactly correct, but the overall concept, and the way it explains the steps, looks broadly right.
Let's ask the same question on the DeepSeek web chatbot, and at the same time on ChatGPT, and compare the answers. Right away I can see I'm getting a better answer here than I got from the 1.5B model, which is understandable: DeepSeek's website is running a much larger model, whereas we ran the very small deepseek-r1:1.5b locally. It seems to be working well, and it has written quite good code. At first sight it looks mostly correct, although it is again using an older version of the LangChain library, so I'd say the current LangChain does not support everything it shows; overall, though, the code is there. Similarly, if I look at ChatGPT, it has also written a Streamlit application, and the ChatGPT output and the DeepSeek output match quite closely — DeepSeek and ChatGPT really are competing with each other here. One difference I notice: one answer uses an st.experimental function, and both use st.session_state for the chat history. It's quite good to see. Overall, if you are looking for complete working chatbot code, you can come to my repository, the Lakshmi Merit Ollama chatbot. In the Ollama chatbot repository I have provided the code for a simple chatbot, a chatbot with history, and chatting with your PDFs.
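For reference, here is a minimal sketch of what a current-API version of that Streamlit chatbot could look like, using the `langchain-ollama` integration package instead of the deprecated `from langchain.llms import OpenAI` import the models produced. It assumes `pip install streamlit langchain-ollama`, a running Ollama server, and the deepseek-r1:1.5b model pulled earlier; treat it as an illustrative sketch, not the repository's exact code.

```python
def append_turn(history, role, content):
    """Pure helper: record one chat turn as a {"role", "content"} dict."""
    history.append({"role": role, "content": content})
    return history

def run_app():
    # Imports are inside the function so the file can be imported without
    # Streamlit/LangChain installed; in a real app they would sit at the top.
    import streamlit as st
    from langchain_ollama import ChatOllama  # current import path

    st.title("Local DeepSeek chatbot")
    llm = ChatOllama(model="deepseek-r1:1.5b")

    if "history" not in st.session_state:
        st.session_state.history = []

    # Replay the conversation so far.
    for turn in st.session_state.history:
        with st.chat_message(turn["role"]):
            st.write(turn["content"])

    if prompt := st.chat_input("Ask something"):
        append_turn(st.session_state.history, "user", prompt)
        reply = llm.invoke(prompt).content
        append_turn(st.session_state.history, "assistant", reply)
        st.rerun()

# In a real app, call run_app() at module top level and launch with:
#   streamlit run app.py
```

The key points the two chatbots agreed on — keeping the history in st.session_state and rerunning the script after each turn — are exactly what this sketch does.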
The repository uses Ollama behind the scenes, so you can just change the model name — for example to deepseek-r1:7b, or any model that fits your computer's resources — and get started. I hope it is now clear how DeepSeek works, how you can pull DeepSeek onto your local computer, and how you can develop a local LLM application; the application code you can get from my repository. Alright, that's all for this lesson. I'll see you in the next one.