Speaker 1: Let's talk about DeepSeek because it is mind-blowing and it is shaking this entire industry to its core. The emergence of DeepSeek is now putting pressure on the U.S. tech names. We told you all about it last week in Davos. The buzz started growing about DeepSeek's latest AI model being more efficient while running on a lot less advanced NVIDIA chips. Released last week, it's already being called a major breakthrough, as the app shows its work, its reasoning, as it answers users' prompts. I, by the way, spent basically all weekend playing with it. It is the number one app, I believe, on Apple's App Store right now, the product overtaking ChatGPT today. DeepSeek, which was created by a Chinese hedge fund manager, is also capturing the attention of leaders, as we mentioned, at the World Economic Forum last week in Davos. We should take the development out of China very, very seriously. What we found is that DeepSeek, which is the leading Chinese AI lab, their model is actually the top performing, or roughly on par with, the best American models. If the United States can't lead in this technology, we're going to be in a very bad place geopolitically. Now, you can take a look right now at the AI-related stocks. So NVIDIA, ARM, AMD, Microsoft. Meta, interestingly, on that list given it's open source. We've got to talk about the open source, closed source bit of this as well.
Speaker 2: This is a shocker to see NVIDIA down by more than 12 percent right now.
Speaker 1: Absolutely. And then you have global chip stocks also in the red across the board, ASML Holding and others. There is the question, I will say, Alexandr Wang made the point last week, and it's become sort of the question mark about all of this, which is, you know, he suggested on our air that it is possible that they were using some of the highest performing NVIDIA chips, perhaps as many as 50,000 of them, to build this model. And they weren't supposed to have those chips.
Speaker 2: Right. If that's true, the dynamic is different.
Speaker 1: If it's not true, then maybe all bets are off. It is possible, by the way, even if it is true, meaning even if they use those chips to create this, or at least partially to create this, it is still a significantly more efficient and better model. I think everybody agrees that right this moment, I mean, I don't know if you, did you get the point? It is mind blowing.
Speaker 2: It looks really great. I mean, it feels like it's that. It's open source, so people can test this out themselves.
Speaker 1: But I can tell you all the tests that I do just to see whether I think the writing is better, I think it can answer certain questions. I mean, it was not only faster, it was more human. The reasoning is shocking. I mean, there were moments where I was like, oh, my God, this, we are so much, you could feel the step change as a person. It was, I will also say, as exciting as it was, there was an element where I became scared because I thought, oh, you know, I had the opportunity to talk to all these people last week, and they all said the future is here, and then you sort of see it, and you go, oh, okay, I feel you, you know, in a different sort of visceral way. So, yes, I think this is all happening at a level that I'm, when you mark your sort of AI history timeline in life, I think this week, this past week, today, and everything else will be on it.
Speaker 2: I think Marc Andreessen put it really succinctly and really smartly when he said that this feels like the Sputnik moment for the AI race, where China is really stepping in. So it's not just what it can do, but what it means for American dominance in AI, what it means for the Chinese being able to step in, and to have the Chinese actually open sourcing it, meaning that that is going to be what goes around the world, and to have the cost factors that go into this, a huge, huge step up and change, and it puts us a little bit on our back foot in trying to figure out what this means. I actually wonder, Satya Nadella, would he tell you again today that he's still good for the $80 billion?
Speaker 1: I think they still are good for the $80 billion. I think they're, I think actually everybody's, I think the processing power issue is, there's an element of which you're going to still have to have the current sort of efforts underway. I think the question longer term, I think it's a longer term question.
Speaker 2: Can you recoup your investment? I mean, can you recoup your investment if that's the case?
Speaker 1: Well, I think this is the big issue, and I think at one point, and I don't know if we talked about it on Squawk, or it might have been on Worldwide Exchange, one of the things that I was hearing that was fascinating out there is there were so many CEOs who were saying, you know what, I'm using OpenAI, and I'm playing around with Anthropic to figure out what I need to do, and, which is expensive for us, and we don't really love that. So we're playing around with what we need, and then we're trying to figure out how much of it we can replicate using Llama, which is the Meta version of it, but it's open source, and effectively, therefore, free.
Speaker 2: Well, and you can have some control over it, too, it being open source. You can do things with it yourself, so you're not so beholden to OpenAI.
Speaker 1: But to me, that's the issue. So here we have a situation where if people are already starting to move towards open source models, and I don't think that had been talked about publicly, really, that's a whole kind of paradigm change.
Speaker 2: Just back to the question on Microsoft. Microsoft shares down 6.8% this morning. Is there a point where Satya says, okay, maybe I'm not good for the $80 billion?
Speaker 1: I don't know. I don't know. I think he's still going to need this technology over the next five years. The question is, yes, can he go, is he going to start using DeepSeek? By the way, there's a separate question, which is, you know, if we think we have a problem with TikTok.
Speaker 2: I thought of this instantaneously.
Speaker 1: Right. If we think we have a problem with TikTok, do we have a problem with this? Now, interestingly, this is an open source model. This is open source, and you can take it and do your own stuff with it.
Speaker 2: You can put it on your own computers. So they're not owning your, unless there's something in it that we don't understand.
Speaker 1: Right. You know, they talk about Project Texas with, you can have it on your own.
Speaker 2: But I thought the same thing. TikTok looks like child's play compared to what this could potentially be. I mean, this is open-source AI, so you can take it and run it yourself and figure something out from it. So you're not storing it on their servers.
Speaker 1: Anyway, it's fascinating. I think if you're, I do, well, I don't know. I, you know, let's, we should call Satya, get him back. Because the truth is, he does need some form of AI, you know, for his copilot and all of his other software.
Speaker 2: But does it have to be?
Speaker 1: Does it have to be OpenAI?
Speaker 2: And is there a way to do it more cheaply and not have to spend $80 billion?
Speaker 1: I think the question is probably that over time they're going to have to spend less money. I don't know if you saw, he put out a tweet just overnight about, I didn't know about this, it's called the Jevons Paradox. And he links to a Wikipedia page, which I'll read to everybody if you'd like. In economics, the Jevons Paradox, sometimes referred to as the Jevons Effect, occurs when technological progress increases the efficiency with which a resource is used, reducing the amount necessary for any one use, but the falling cost of use induces increased demand, enough that resource use is increased rather than reduced.
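[Editor's note: the definition read out above can be illustrated with a minimal arithmetic sketch. The numbers below are purely hypothetical, chosen only to show the mechanism; they are not DeepSeek's or Microsoft's actual figures.]

```python
# Jevons paradox in one hypothetical calculation: efficiency doubles,
# so compute per query halves, but the lower cost of use spurs demand
# enough that total compute consumed rises rather than falls.

def total_compute(queries_demanded: float, compute_per_query: float) -> float:
    """Total resource use = demand times per-use resource cost."""
    return queries_demanded * compute_per_query

# Before the efficiency gain: 100 queries at 1.0 compute unit each.
before = total_compute(100, 1.0)   # 100.0 units

# After: each query needs only 0.5 units, but cheaper AI triples demand
# (hypothetical elasticity, for illustration only).
after = total_compute(300, 0.5)    # 150.0 units

print(before, after)  # total resource use went UP despite the efficiency gain
```

The point of the sketch is the one Nadella's tweet makes: whether total spending on compute falls depends on how much demand grows in response to the cheaper per-use cost, not on the efficiency gain alone.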
Speaker 2: Could be, but does he still need to spend $80 billion to get to the same place? Could he spend $50 billion and still get the same out of it if, again, if you can find ways to do this more cheaply, if the architecture structure is such that you can actually get more efficient with all of these things, maybe he doesn't have to spend the $80 billion to get to that point.
Speaker 1: I think he may still need to just because of the abundance of how much processing power. I mean, look, the guys who are, I mean, this DeepSeek, for the most part, people are now running on their own laptops and doing all sorts of things, so their computers are not overwhelmed. But if you decided you were doing this in the cloud or there was going to be a cloud-based version of it, you'd still need a lot of processing power. I will say just on top of his tweet, and this was just five hours ago, he says, the Jevons paradox strikes again. As AI gets more efficient and accessible, we will see its use skyrocket, turning it into a commodity.
Speaker 2: Correct.
Speaker 1: We just can't get enough of, though. And so the just-can't-get-enough-of piece of it may be the thing that hopefully helps them longer term, but still I think it's a very big question.
Speaker 2: Right, and if you're doing it so that consumers are basically doing this for free, it's just who's paying the way on this? Are companies going to pay? Are they going to find cheaper ways to run some of this stuff themselves, too?