DeepSeek Disruption: New AI Model Rocks Market
Explore the impact of DeepSeek, a powerful AI model challenging the status quo by offering open-source solutions and igniting competition in the AI space.
Understanding The DeepSeek Moment and What's Next for AI
Added on 01/29/2025

Speaker 1: Alright, so we just had the DeepSeek moment. Another AI story penetrated the mainstream, and I really want to talk about this. From the perspective of somebody who spends his entire day, his entire week, his life following not just these tools but this entire space, I feel like there's so much misinformation out there, frankly. So many people put a spin on the story that is tinted by their particular perspective and by their interests. I want to talk from the perspective of a consumer and a power user of these applications, somebody who's looking at this technology to improve his life. Not from the perspective of somebody looking at all the futuristic scenarios, not from the perspective of a stock analyst, not from the perspective of somebody interested in a particular national interest. I want to look at it neutrally, as somebody who now has access to yet another model that is extremely powerful, and ask how this fits into the existing AI landscape. And I want to do it in a manner that is, how to put it, a bit more approachable to people who don't follow this week by week. Because if you're a power user, or if you really vigorously follow this topic, you will know that DeepSeek did not come out today, on Monday, when the stock market is going crazy. The stock market was down, what is it, 15%, 17% earlier today. Now DeepSeek is at rank one. We were talking about this on the channel last week already. No, I want to talk about this from the perspective of somebody who just heard about it, is using ChatGPT, and is wondering: hey, should I use this instead? Or what should I tell my parents? What does this mean for the world? What does this mean for the AI category? That's what we're going to do here today. And we're going to wrap up with a little theory of mine on where the ball is going next. Okay, so this just happened, the DeepSeek moment. I don't think people even call it that; I'm calling it that. And the question now is, what's coming next? Because obviously, this is a big deal. So let's start by summarizing very briefly and very concisely what just happened. Basically, a Chinese company came out with a model called DeepSeek R1. They released it last week. It took a few days for it to climb to rank one in app stores across the world. This morning in Europe it was still at around rank five; by the evening it was at rank one. In the US, it was already at rank one yesterday. And it caused this cataclysm in the stock market; Nvidia's stock price in particular took a massive hit. I'm not going to go into all that, but the whole idea is that this model, as opposed to everything you might know already, ChatGPT from OpenAI and all the models there, Gemini from Google DeepMind, Anthropic's Claude, Meta's Llama, and all the other tech giants with their AI models, well, this one is built a little different, because it doesn't just compete with the best of them, it's on the level of the best of them. It's on the level of OpenAI's O1, which was considered state of the art as of today. Now it has a DeepSeek competitor. And then there's O1 Pro. We'll talk about this at the end; we'll do a quick little comparison between DeepSeek and the Pro model. Hint, hint: Pro is actually still a little better, but it doesn't matter, because the difference is minimal.
The point is DeepSeek is out now, and the big difference here is this, okay? O1 is gated behind a $20 a month paywall, and O1 Pro is gated behind a $200 a month paywall. They announced that as this gift from God that you can now gain access to under the ChatGPT Pro plan, and they showed us their next O3 models. We're going to briefly go over how that relates to the history of ChatGPT here. But basically, this was presented as this big deal that you can now gain access to. And these are legitimately the best models we've ever had, but China enters the room and releases the same thing, for free. So you can basically go to their web interface, or download the mobile application, and use their models for free. And they're equally as smart as O1, okay? Not quite O1 Pro, which is maybe a bit better, but basically as smart as the smartest model out there. O1 had no competition. There's Gemini's thinking model, whatever, let's ignore that for now. O1 was just basically the best model out there. And now the Chinese have released an open-source alternative that they host and provide to you for free. Basically saying a big F-you to all of big tech, to all seven big US tech companies. Okay, so that's what just happened. Now, what does that mean practically, and what do you need to know about this? Because people spin this in so many ways, okay? Well, the first big thing I want to talk about is the claim: this is a Chinese company, do not give them your data, do not use it. Let's address this. Is that true? Yes, 100%. But, and there's a massive but, okay? So first I want to back up the claim that people make and say that yes, this is 100% true. If you use their web interface or their application, you are agreeing to the privacy policy. This is the DeepSeek privacy policy, updated on December 5th, 2024: "Welcome to DeepSeek." Under user input, you will read: when you use our services, we may collect your text or audio input, prompt, uploaded files, feedback, chat history, or other content that you provide to our models and services. In other words, we may take everything that you ever give us, okay? And then there's a second part further down: where is it stored? Well, the personal information we collect from you may be stored on a server located outside of the country where you live. We store the information we collect in (and look, no worries) secure servers located in the People's Republic of China. Okay, so that's pretty clear. They couldn't be more clear. Their privacy policy tells you: everything that you put into this thing, when you interact with it, we'll take it and we'll use it to our advantage, okay? They're not playing charades here or anything. But, and here is the big but, this really matters: this is an open-source product, and not some limited open source, no, this is fully open-sourced. You can download DeepSeek onto your machine today through something like Ollama or LM Studio; these are apps that make it easy to run it. And even on an M2 MacBook, say, you can run the smaller versions of DeepSeek locally. None of this applies when you do that. This is very important, okay? This only applies if you're using their application, where they host it themselves on their own machines and provide it to you, okay?
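To make that local-hosting point concrete, here is a minimal sketch (not from the video) of how you might query a locally running DeepSeek R1 distill through Ollama's default local HTTP API. The model tag and prompt are illustrative assumptions; the key point is that nothing here touches DeepSeek's servers, so the hosted-app privacy policy simply does not apply.

```python
# Minimal sketch: querying a locally hosted DeepSeek R1 distill through Ollama's
# local HTTP API, so no prompt data ever leaves your machine.
# Assumptions: Ollama is installed and serving on its default port (11434),
# and you have already pulled a model tag such as "deepseek-r1:7b"
# (e.g. via `ollama pull deepseek-r1:7b`; the exact tag may differ on your setup).
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def ask_local_deepseek(prompt: str, model: str = "deepseek-r1:7b") -> str:
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # return a single JSON object instead of a token stream
    }).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body.get("response", "")

if __name__ == "__main__":
    print(ask_local_deepseek(
        "Explain in two sentences why running a model locally keeps my data private."
    ))
```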
So if you host this locally, if you run this locally, if you download it, which takes a few extra steps and some local computing power, none of this applies, because you're not sending the data over to China. They're not running the service; you are running it on your own machine. So this is the problem for the tech companies. This is part of what caused all this, because DeepSeek created a model for a few million dollars, where all the big companies up until now were claiming: well, you shouldn't even play in this arena, it's way too high-effort, it costs tens if not hundreds of millions of dollars to compete at our level, and heck, we're actually so far ahead of you that there's not even a point in competing. That's what has been communicated, okay? And then this Chinese company pulls up, spends around $5 million to train something that competes with the very best model the world has ever seen, O1. Matter of fact, OpenAI teased O3 in mid-December. And now, a few days after this DeepSeek announcement, guess what? They had to announce that even free users will be getting access to O3 mini. DeepSeek forced their hand; they did not expect this, right? You can always watch the OpenAI releases and infer what the competition has been doing. It's funny like that, because out of nowhere they announced: oh, by the way, O3 mini is going to be available to all our free users. No need for a $200 plan, no need for a $20 plan, hey, we'll give you that model for free because we're so generous. No, it's because DeepSeek came out and everybody can use the Chinese application and simply give them their data, give them the interactions. So they had to compete. And that was not all; there's one more part to the story. They released OpenAI Operator in a spontaneous livestream last Thursday, a livestream seemingly coming out of nowhere. Except it did not come out of nowhere. They needed something to argue to their investors that, hey, they're still ahead, right? DeepSeek might be out, it might be competing with their best model, the one that a month ago was this world-changing thing that nobody could catch up with. Well, at this point, it's freely available as a web app, it's the number one app on the App Store, everybody can use it, and you can download it locally and use it by yourself without paying a cent to OpenAI or any other company. They did not expect this, okay? So that's why they had to release Operator, which is basically a remote agent that can control your computer. And this is what I want to talk about at the end of this video. But the main takeaway to me is this: yes, these apps are fantastic. You're basically getting what you would be paying OpenAI for, for free. And if you want, you can run it locally and get around all these privacy concerns, which are very legitimate. You should take them seriously; don't put your private information in here. As a rule of thumb, don't put anything into these models that you would not openly share on social media. If you do that, you'll be safe. But on the other hand, you've got to make up your own mind on the privacy policy. I just really wanted to point this out, because it gets misconstrued in so many ways. And this is just freely available.
Like, again, you could literally build a billion-dollar company on top of this model, you could build it into any application, and they have no rights to it whatsoever, because this comes under an MIT license, a fully open-source license, okay? So this is what happened here right now. Now let's focus on what will happen moving forward, and where this leaves you as a consumer, okay? Because I feel like that really matters. So I'm just going to pull up this little chart here, which shows the cost to interact with these models over the API. Again, we're talking about the state-of-the-art AI model here. We're not talking about another competitor that is catching up. We're literally talking about: O1 was king, now DeepSeek came out and, you know, shares the throne, so to say. But they don't charge money for it. It's open source, it's out there. Some people are even saying, hey, this is the true "open" AI, it's just the Chinese OpenAI, and they decided to just put it out there. So let's look at this price difference. What does this mean for developers? What does this mean for individuals who want to use this programmatically, through the DeepSeek API that runs on Chinese hardware? Well, O1 would have cost $7.50 per 1 million tokens of input. DeepSeek costs 14 cents, okay? 14 cents versus $7.50. That is night and day, right? Then the same thing for output, where the difference is even bigger. DeepSeek charges $2.19 for a million tokens of output; that's when you get messages back from it, when it's spitting out text. DeepSeek is running at about $2, OpenAI is running at $60. So that's nearly a 30x difference; you're going to pay almost 30 times more with the OpenAI services. So they just scorched the earth in this space. That's what really happened. Now, here's the main point of this video. What does this mean for you as the consumer? What does this mean for me, also a consumer? Well, this is fantastic. We have more competition. We have a new platform that basically came out and provided what was gated behind a big paywall for free. And now it has forced the US players to release some of their best stuff for free too, right? They cannot charge $200 for their O3 models if the competition is giving this out for free, right? So that's one big takeaway. What's the other big takeaway? Well, I want to quickly point towards the quality of this thing and the transparency of this thing, because I think that's a big deal. So I ran the same prompt on all three models. And by the way, this came up in our community today as a way to test these models: it's a prompt to write a poem without the letters A, E, I, O, U, and then we basically ask the model to recheck itself. And if you don't know these thinking models: DeepSeek took two minutes to generate a result, O1 took 30 seconds, and O1 Pro took four minutes, okay? I ran the same prompt for all three of these. You can just look at the results; I'm not going to spend a lot of time analyzing them. We've already spent a lot of time on this channel looking at the comparison between O1, O1 Pro, and the competitor models. O1 Pro always comes out on top, okay? Either it's equally as good or it's slightly better. So I stand behind the fact that even between these three poems, this one is kind of the best for me personally. But you can make up your own mind. Obviously, this is just one test case, and there are many more. I'll generally say this, because we've been teaching this and talking about this for months.
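To illustrate both points above, the much cheaper API and the vowel-free poem test, here is a hedged sketch of calling DeepSeek's hosted, OpenAI-compatible API and then re-checking the constraint locally. The model name `deepseek-reasoner`, the `reasoning_content` field, and the environment variable are assumptions based on DeepSeek's public documentation at the time; note that this route, unlike the local one, does send your prompt to DeepSeek's servers.

```python
# Hedged sketch: calling DeepSeek R1 through its hosted, OpenAI-compatible API
# and re-checking the "poem with no A, E, I, O, U" test locally.
# Assumptions: the `openai` Python package is installed, DEEPSEEK_API_KEY is set,
# and R1 is exposed as model "deepseek-reasoner" at api.deepseek.com.
# Unlike the local Ollama route, this sends your prompt to DeepSeek's servers.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],
    base_url="https://api.deepseek.com",
)

PROMPT = (
    "Write a short poem that contains none of the letters a, e, i, o, u. "
    "Then double-check your own answer."
)

resp = client.chat.completions.create(
    model="deepseek-reasoner",
    messages=[{"role": "user", "content": PROMPT}],
)

msg = resp.choices[0].message
answer = msg.content or ""

# R1 exposes its chain of thought separately (field name per DeepSeek's docs).
print("--- reasoning ---")
print(getattr(msg, "reasoning_content", "<not available>"))
print("--- answer ---")
print(answer)

# Independent local check of the no-vowels constraint:
vowels_found = sorted({c for c in answer.lower() if c in "aeiou"})
print("vowels found in answer:", vowels_found or "none")
```

For scale, at the output prices quoted above, a million generated tokens would cost about $2.19 through DeepSeek versus $60 through O1, which is where the roughly 30x figure comes from.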
Heck, matter of fact, we have a public challenge running in the public area of our community, freely accessible to everybody, about what unique use cases you can find for O1 and these thinking models. And Dirk here shared a very, very interesting one that I just briefly wanted to highlight, which is: you can run your own lab of different AI agents that will write a research paper for you, all of it powered by DeepSeek. And here he provides all the files that you can download to run this locally, so you can have your own agent laboratory (there's a tiny sketch of that pattern after this paragraph). These are advanced use cases, though. So people in the community, people who are really into this stuff, have been on top of this for a while; he submitted this two days ago, before all the hype took off. But yeah, it's basically using things like DeepSeek and O1 Pro: here's the setup for DeepSeek R1, here's the setup for O1 Pro, and you can just do this yourself and run a little agent laboratory on your own machine. But basically, what I'm trying to say is that power users have been putting this to work for months. We've been talking about O1 Pro on the weekly news show we do here on the channel for a while now. We've been talking about all of this. But this is the moment where it hit the mainstream, because it happened for free. And that's why I call this the DeepSeek moment, just like the ChatGPT moment. Because GPT-3 was available before that, but it was really the moment where they put it out for free in a user-friendly manner, an application or a web interface like this one, that everything changed. And I thought it was really important to make this video, because these privacy policies, and the fact that this thing is actually open source and crushes the prices in this way, I feel like these are the things that matter to consumers. Also the difference in quality, that's what matters. And this thing is the highest-quality model we've had so far. And I'm getting to my final point here, as you might recognize. And my final point is: what actually matters here, and what is next? Where's the ball going next? Because as I just showed you, the quality of DeepSeek is very high and it gets these things right. One more point I want to make is that it's not just that they open-sourced it. The thinking process of this model is also actually revealed. You see all the thoughts. It sounds a bit like a human thinking. Look at that, it goes through all the thoughts in great detail. You want to see what this looks like on OpenAI's model? Many people are saying, for good reason, that it's more like "ClosedAI"; they're making that joke, which is fair enough. If you open up O1's thinking on the same prompt, all you get is five paragraphs. If you open up O1 Pro's thinking: one, two, three, four, five, six paragraphs, right? Whereas DeepSeek gives you all the thoughts. OpenAI never revealed all the details of how their model thinks because, I don't know, maybe they were scared that somebody would copy their secret formula. Maybe they didn't want to provide so much context and overwhelm users; that's also an argument to be made. But I personally really like seeing how it thinks, because then I can prompt it to make alterations to this thinking process. That's more advanced stuff, though. So here's my final point, okay? This came out. This happened. You have access to this. You can use this.
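Dirk's actual agent-laboratory files are not reproduced here, but to give a flavour of the pattern, here is a deliberately tiny, hypothetical sketch: two "agents" (a researcher and a reviewer) that are just two differently prompted calls to the same locally hosted model via Ollama. The roles, prompts, topic, and model tag are all illustrative assumptions, not the community setup.

```python
# Hypothetical sketch of the "agent laboratory" pattern: two differently prompted
# roles chained together, both served by a locally hosted DeepSeek R1 distill.
# This is NOT the community member's setup, just an illustration of the idea.
# Assumptions: Ollama running locally on its default port with "deepseek-r1:7b" pulled.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"
MODEL = "deepseek-r1:7b"  # illustrative tag; use whatever distill you actually pulled

def run_agent(role_instructions: str, task: str) -> str:
    """One 'agent' = one role prompt plus one task, sent to the local model."""
    payload = json.dumps({
        "model": MODEL,
        "prompt": f"{role_instructions}\n\nTask:\n{task}",
        "stream": False,
    }).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("response", "")

if __name__ == "__main__":
    topic = "how open-weight reasoning models change API pricing"
    draft = run_agent(
        "You are a researcher. Write a concise, well-structured outline for a short paper.",
        topic,
    )
    review = run_agent(
        "You are a critical reviewer. Point out weaknesses and suggest concrete improvements.",
        f"Review this outline:\n{draft}",
    )
    print(draft)
    print("\n--- reviewer feedback ---\n")
    print(review)
```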
If you care about your data, you're going to run this privately with something like Ollama or LM Studio; there are free tutorials on this channel on that, and plenty more on the internet, just look up LM Studio or Ollama. You can easily install the 7B version of this thing locally. But I don't even think that's the biggest news of the week. Everybody's going crazy for this because something that all of us AI tech nerds have been using and utilizing for months suddenly became freely available. And that's amazing, I love that, and I wanted to make this video to inform you about what that means for the consumer. But I think the thing that really matters is this, okay? It's the new Operator product. OpenAI released this, maybe as a reaction to what DeepSeek has done to their flagship product, the best thinking model. But this thing is serious, let me tell you. I created a separate video on this, and like this one, it's uncut, no edits. By the way, if you notice, I'm recording this in one shot because I just want to have a talk with you, human to human. And this thing actually works and actually gets work done. I've tested every agentic tool on the market: Open Interpreter, Claude Computer Use, AgentGPT, BabyAGI, there were a bunch of these agentic AIs that claimed to be autonomous and claimed to get things done. None of them worked anywhere near as well as OpenAI's Operator does. So everybody might be distracted by this DeepSeek thing, but what I spent my weekend and my day doing is running use cases through Operator and redirecting my entire team to run everything they do through it, to see how far we can push it. And my conclusion is that it's better than people think. I know it gets mixed reviews, because some of the usage is not intuitive. But let me tell you, I'm way beyond booking a table reservation. It's basically an agent that remote-controls a computer with the power of AI. It's not using something as powerful as O1 or DeepSeek; it's just using GPT-4o, trained on using a mouse and keyboard. But this thing actually works. I'll give you one example and we'll end on that. People are like, whoa, okay, Operator doesn't seem so good. Who cares about making a table reservation? That's no big deal, I can do that myself in about two minutes. Why would I need a $200 product? Okay, that's a ridiculous price tag right now, but it is what it is. Why would I need a $200 product to do that for me if I can do it myself? Fair enough. You don't need OpenAI Operator to make your table reservation. But we started pushing this thing, and it can do things like: research five different websites for you, finding them autonomously; pull all the summaries of those websites into a separate Google Doc for you; then take those summaries and create a PowerPoint presentation from them. And it can do all of this from a one-shot prompt without asking for your assistance a single time in the process. You just have to know how to craft a prompt for it. You have to go through the process once; there are a bunch of tricks. I'm not going to go into the details, but I'm making a separate video for the channel showing you some of how it works. But this thing works. I have a preset for my favorite supermarket here in Lisbon, where I live. I'm logged into Uber Eats.
I have all my favorite items that I want to order once a week, and I have a preset where I just press one button, set up so it ignores all the questions in between. Literally, I press one button, and within, I don't know, 40 minutes, somebody's ringing at the door bringing me bags with my favorite groceries. And sure, these are things I could do by myself. But, and this is the final point, that's why I'm saying this is where it's going: this is a way of multiplying yourself. It's not about saving time, and this is important; it came up in the community today, and I want to end on this. Look, this Operator thing is not about saving the time you would otherwise spend doing something. It's about adding extra hours to your day. This is cloning technology for the common man. Wouldn't it be great if, in the topic you're interested in, the one you may be working on in your career, you could spend an hour every day looking at 10 articles, summarizing them, and then reading through that? That would be great. But you're not doing it, because it takes too much effort, too much willpower, too much time; it's probably not worth it to you. But it would be good to have. Here is the first research preview of this new product category, and it can do that for you at the click of a button. And there are many, many other examples of that. I'm creating content on it already. And I think this is where the ball is going next. Again, it's not about saving time, it's about adding extra hours to your day. And then, what happens if one person is working eight hours a day and the other person is working 30? Well, it's pretty clear: the other person is going to pull ahead. And that's where I see the ball going next. I think people who really learn how to utilize this, follow its development, and learn how to work with it are going to have an advantage that is very real. And it's not the advantage of having $20 more by saving on DeepSeek; it's the advantage of adding more hours to your day. And that's why I think this product category within AI is the next big thing. Operator is the first version of it that I have seen and tried that is actually able to get some work done. And while it might not be worth $200, and while it might not be at a stage where it's a complete game changer, it's been four days and I wouldn't want to give it up anymore. That's all I'll say. So yeah, go educate your loved ones on DeepSeek, tell them about the privacy policy, maybe send them this video. I hope this helped, and I'll see you soon. All right.
