DeepSeek R1: AI Innovation and Economic Impacts
Explore DeepSeek R1's breakthroughs, market disruption, and potential geopolitical impacts in AI technology, highlighting accessibility and competitive pricing.
OpenAI is Done, China Won (Deepseek Explained)
Added on 01/29/2025

Speaker 1: Marc Andreessen said of DeepSeek R1, the brand-new AI model out of China, that it is one of the most amazing and impressive breakthroughs he's ever seen, and, as open source, a profound gift to the world. Ethan Mollick said the raw chain of thought from DeepSeek is fascinating, really reads like a human thinking out loud, charming and strange. Now, what does that mean? Well, just like OpenAI's o1 model, these are reasoning models. So prior to giving you an output, it actually thinks about the user input for a while, and it actually shows you the thinking. OpenAI actually kind of lies to you, because it doesn't give you the raw chain-of-thought thinking. Instead, it turns it into a summary of the chain of thought, because they're worried about other companies coming in and training their AI models on those outputs. And it turns out it doesn't really matter. I mean, look at this. These are the top free apps in the world right now. Number one, DeepSeek. Number two, ChatGPT. A little more than a week after we tried to ban TikTok, the number one downloaded AI app, which is becoming the primary method for people to get information, is now from a Chinese company, DeepSeek. And there are no in-app purchases in this app. So let's actually update this DeepSeek app right here. What can you do with this DeepSeek model? This app completely replaces Perplexity, in my opinion. Perplexity lets you search the internet and get AI responses based on the results. But now DeepSeek does an incredible job of that, and it has one of the most powerful AI models, and it has reasoning capability, so it shows you the thinking. So we can very easily type in, tell me the latest news on OpenAI and their reaction to DeepSeek, and enter that prompt right here. It's going to first search the web, and you can see all of the web sources from this. You can see that it's searching the web, getting 50 results, and now we can see it think.
Okay, let me try to figure out the latest news about OpenAI and their reaction to DeepSeek. First, I need to go through all the search results provided and look for any mentions of OpenAI's responses to DeepSeek's R1 model from the web pages. So it basically takes all those web sources, puts them into the model, and then it starts thinking about them, and that's what we're seeing right here. And once it's done thinking, it gives us a very clean output: as of January 27th, which is today, OpenAI has not issued any official public statements directly addressing DeepSeek R1. That is true. Sam Altman, the main marketing influencer and CEO of OpenAI, hasn't said anything about it. He's just been teasing the o3 model, and people don't actually care. People are switching to DeepSeek because it's significantly cheaper. To get access to o1 Pro right now, you need to pay $200 per month, and you get very limited o1 queries on the ChatGPT Plus account. And you can use a model nearly as good on DeepSeek for, like, no money. It is one-thirtieth the cost to use the DeepSeek API as it is to use the OpenAI API for o1. Another thing that's very fascinating about this model is its access to new material. This is the founder of Midjourney saying, in my testing, DeepSeek crushes Western models on ancient Chinese philosophy and literature. It feels like communing with literary, historical, and philosophical knowledge across generations that I never had access to before. It's quite emotionally moving. It makes sense, too. Western labs like OpenAI, Google, and Anthropic don't care about training on Chinese data. But Chinese labs train on both. Remember that China has several thousand years more of literary history than the West does, because we lost the majority of Roman, Greek, and Egyptian literature, while China preserved theirs. Basically, our AI models are missing the literary foundations of Western thought, but the Chinese models have theirs intact.
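The raw "thinking" the app displays is also exposed programmatically: DeepSeek's API returns the chain of thought in a separate `reasoning_content` field alongside the final answer in its OpenAI-compatible chat-completion response. A minimal sketch of splitting the two, using a made-up sample payload for illustration (not a real API response):

```python
# Sketch: separating a reasoning model's visible "thinking" from its final
# answer. The dict below imitates DeepSeek's documented chat-completion shape;
# the actual text in it is invented for this example.
sample_response = {
    "choices": [
        {
            "message": {
                "role": "assistant",
                "reasoning_content": "Okay, let me go through the search results provided...",
                "content": "As of January 27th, OpenAI has not issued an official statement.",
            }
        }
    ]
}

def split_reasoning(response: dict) -> tuple[str, str]:
    """Return (chain_of_thought, final_answer) from a chat-completion dict."""
    message = response["choices"][0]["message"]
    # reasoning_content is only present on reasoning models, so default to "".
    return message.get("reasoning_content", ""), message["content"]

thinking, answer = split_reasoning(sample_response)
print("THINKING:", thinking)
print("ANSWER:", answer)
```

This mirrors what the app does in the demo above: show the thinking first, then the clean output.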
This could be both a classical data advantage and a less obvious advantage for spiritual and philosophical self-actualization. One thing I have noticed, talking to this model, is that it does feel like a different vibe. I mean, McKay Wrigley said here, I'm stunned by... the vibes are off the charts. A model that's almost as good as o1 for 30 times cheaper. This is why we need a highly competitive environment for AI. All labs will be forced to ship better models at lower prices. Unbelievable. And it's incredibly ironic, at a time when OpenAI is the most closed-source, for-profit AI company in the world. As Jim Fan brings up here, we are living in a timeline where a non-US company is keeping the original mission of OpenAI alive: truly open, frontier research that empowers all. It makes no sense. The most entertaining outcome is the most likely. DeepSeek not only open-sources a barrage of models, but also spills all the training secrets. They released a paper. And I am not smart enough to understand the intricacies of AI research; I will admit that right now. I like to use these models for fun and follow the space because I like to build with the technology. I am a gamer of the AI models, not a builder of the video games, if that makes sense. And I'm okay with that. But everyone I follow who does study how machine learning works is saying that it's one of the best papers ever written, that all of the US companies are going to learn a ton, and that their models are also going to improve. And so right now, many people, including Satya Nadella and a couple of other founders whose names I forget, are talking about how this is a massive threat to the US economy, because we have been investing so much money in GPUs, which are used to train these AI models at enormous cost, and they need money in return for all of their efforts. We should take the developments out of China very, very seriously.
What we found is that DeepSeek, which is the leading Chinese AI lab, their model is actually the top-performing, or roughly on par with, the best American models. If the United States can't lead in this technology, we're going to be in a very bad place geopolitically. And apparently, US companies are now worried that they're going to go bankrupt or something, because DeepSeek has found a more efficient way to train these models. The DeepSeek model cost about $5.5 million to train, or something crazy like that; I forget exactly how much. Trained on 14.8 trillion tokens at a cost of $5.5 million. Compare these numbers to OpenAI and their models, based on estimates; I know they don't release their numbers, but there are guesses. So let's search the web real quick. Yeah, and so the guess is that GPT-4 cost tens to hundreds of millions of dollars. And if these models cost hundreds of millions of dollars, then there are the o3 models coming out very soon. OpenAI went through 12 straight days of announcements, building up to the final day, which was the announcement of the PhD in your pocket. This model is supposed to be the smartest AI model ever, one that ranks 150th among all coders in the world. What happens when DeepSeek releases an R2 that costs $15 million total to create, and it is just as good, and then they release it for a fraction of the price? Because OpenAI needs to get a return. Companies like Google don't need a return on this. Companies like Meta don't need a direct return off of their LLMs, because they have other business endeavors. OpenAI's only product is their AI model. And so I have had so much fun with this DeepSeek model. So I built a Perplexity clone that uses DeepSeek. And I did this in an hour and a half, without writing a single line of code, using Cursor. And it literally had responses, and I'm not even exaggerating here, it had responses as good for my use case as Perplexity's.
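The training-cost comparison above is easy to sanity-check with back-of-the-envelope arithmetic. The $5.5 million and 14.8 trillion-token figures are the ones quoted in this video (DeepSeek's own reported numbers); the GPT-4 figure is a rough mid-range of the "tens to hundreds of millions" guess, not a verified cost:

```python
# Back-of-the-envelope check of the training-cost figures quoted above.
# All inputs are the estimates mentioned in the transcript, not audited costs.
deepseek_cost_usd = 5.5e6    # DeepSeek's reported training cost
deepseek_tokens = 14.8e12    # reported training tokens (14.8 trillion)
gpt4_cost_usd = 100e6        # assumed mid-range of the "tens to hundreds of millions" guess

# Cost per million training tokens, and the rough cost ratio between the two.
cost_per_million_tokens = deepseek_cost_usd / (deepseek_tokens / 1e6)
ratio = gpt4_cost_usd / deepseek_cost_usd

print(f"DeepSeek training cost: ~${cost_per_million_tokens:.2f} per million tokens")
print(f"The GPT-4 estimate is roughly {ratio:.0f}x the DeepSeek figure")
```

Under these assumptions the quoted numbers work out to well under a dollar per million training tokens, which is what makes the "more efficient way to train" claim so striking.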
It was made as a content research tool; it searched the web and brought back responses, and I built it myself from scratch using Cursor. This is just going to start happening more and more. We're seeing the cost of these AI models going to zero, and we're all going to be able to build with them. As AI gets better at coding, we're all going to be able to make personalized AI software for ourselves for nearly no money. We're going to be able to build whatever we want, for ourselves, for our team, or to sell to others, for nearly nothing. And if you're not preparing for this moment of composing software using these really smart, free models made specifically for you, especially as agents are right around the corner, know that a lot of the work we do and a lot of these apps will be fully automated. You'll be able to run many, many different steps for no money. It's just such a wild time to be alive right now, watching the best technology in the world become incredibly cheap and accessible to everyone. And so I'm going to be building a lot with DeepSeek R1 and posting a ton of videos and tutorials on how to use R1 to build all types of different apps. And I hope you join me. So I'll see you here in the next video. Peace.
