Big Tech Concerns Over DeepSeek's AI Advancements
Experts discuss DeepSeek's impact on the AI landscape and investor opportunities, arguing that the fears are overblown and that efficiency gains could spur further model innovation.
DeepSeek sell-off: Why this analyst is not worried
Added on 01/29/2025

Speaker 1: We want to bring in Dan Newman. He's the Futurum Group's chief strategist. We also have Stacy Rasgon. He's Bernstein's managing director and senior analyst. Great to have both of you. Stacy, what do you think? How worried should we potentially be? Or how worried should big tech be about this development?

Speaker 2: I'm not that worried. I think there were a few things that caused panic over the weekend and clearly into today, right? I mean, the headlines, the initial stuff, the takes that came out basically said, oh my God, they've duplicated OpenAI for $5 million. They did not duplicate OpenAI for $5 million; we can go through why. Number two, and I don't want to discount this: the models they built are fantastic. They really are. And they've pulled a number of levers on efficiency that are great. But what they're doing is not miraculous either, or unknown to any of the other top-tier AI researchers or AI labs that are out there. The training efficiency and everything else they've done is very explainable and kind of obvious if you look at the types of model structures and everything they're using to build this. Long story short, it was not a crazy thing, and I don't think it's unknown to anybody else. And then finally, you've got to ask yourself, is this good or bad for infrastructure? I am not of the belief that we're anywhere close to the cap on compute needs for artificial intelligence. You have to remember, I'm a semis guy. I view cost reduction as a good thing. For 50 years, every two years, costs in semiconductors got cut in half. That wasn't a bad thing for semiconductor demand, it was a good thing for semiconductor demand. And so I'm of the belief that if you're freeing up compute capacity, it likely gets absorbed. I mean, you have to remember, the costs to do this training were going up something like 10x a year for the last several years. That's not something you can keep doing. We're going to need innovations like this if you're going to keep things going on that trajectory. So I actually think it's overblown. I understand why all the panic is going on. It's going to create a rich tapestry for debate as we go forward here. But I actually think it's overblown, and I don't think this is necessarily bad news. I don't think DeepSeek is doomsday for AI infrastructure. And Dan, I don't know what your opinion on this is.
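
A quick way to see why those two cost curves point in opposite directions: the halving cadence and the roughly 10x-per-year training-cost growth are the figures cited above, and the sketch below (Python, purely illustrative back-of-the-envelope compounding, not data from the segment) just multiplies them out.

```python
# Rough arithmetic behind the cost-curve argument in the segment.
# The inputs (50 years, halving every ~2 years, ~10x/year training-cost
# growth) come from the speaker's remarks; the totals are just compounding.

# Semiconductor cost per unit of compute: halved every ~2 years for ~50 years.
halvings = 50 / 2                      # 25 halvings
cost_reduction = 2 ** halvings         # roughly a 33-million-fold reduction
print(f"Implied cost reduction over 50 years: ~{cost_reduction:,.0f}x")

# Frontier-model training cost: growing ~10x per year (per the segment).
years = 4                              # hypothetical horizon, for illustration
cost_growth = 10 ** years              # 10,000x over those 4 years
print(f"Training cost growth at 10x/year over {years} years: {cost_growth:,}x")
```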

Speaker 3: Stacy, point well made. And Dan, you know, as we're thinking about this too, with all of the mindset around what OpenAI has been, what it is right now and where it's continuing to grow, it's not like we weren't expecting more entrants into the marketplace and more competition. So what makes DeepSeek different? What's the significance of this platform, or this large language model, that puts it outside the realm of belief for so many who are already operating in this industry?

Speaker 4: Yeah, from a technical capacity, the way they were able to use reinforcement learning, to use fewer tokens, to train such a large model and replicate such consistent, high-quality outputs, in many cases outputs that are actually very interesting, was really interesting. And I think it did create shockwaves. I am in line with Stacy. I've been on a bit of a tear across X. You know, first of all, when did we decide that we're going to just believe a paper that comes out of China? There is a continuum of thoughts here, but there are leaders like Elon Musk and the Scale AI CEO who have basically come out and said they probably have 50,000 H100 GPUs, maybe more. And so it's not obvious right now that everything they're sharing is exactly accurate. And it's not unlike China to potentially try to play a little bit of psyops with the Americans and the markets to see how we would react. I think the progress is good, though. As Stacy said, look, if we get this Jevons paradox thing, where we get more compute, it's not going to stop, and people aren't going to say we need no more. What's going to end up happening is companies are going to move faster. And I'll make one more point: ultimately, we've looked at the possible bubble around AI, around too much CapEx cost and not enough consumption or revenue generated by these ISV and platform companies. Well, guess what? If we can create models more inexpensively, if we can use compute more efficiently, the SaaS providers, the Salesforces, the ServiceNows, the Microsofts, the companies we're saying aren't driving enough revenue, they'll be able to build their models cheaper. They'll be able to create solutions with less overhead expense, and they're going to drive more EPS. So I just think the market's completely missing this one. And the fact that we're taking China at their word, to me, needs a little more inspection.

Speaker 1: Stacy, does this at all shift that longer-term narrative surrounding efficiency, surrounding spending?

Speaker 2: No, I mean, we need efficiency. So, you know, I'll go back to the Hot Chips conference a couple of years ago. Hot Chips is a chip conference; they have it at Stanford every August. And I think it was two years ago, NVIDIA's chief scientist, Bill Dally, gave the keynote. He was talking, in this context, about GPU performance improvements, but think about that in terms of compute improvements. And he was talking about how, over the prior 10 years, they'd improved GPU performance by something like a thousand x. It was a variety of things: some of it was process tech, some of it was lower numerical precision, some of it was sparsity and architectures and all these kinds of things, but a thousand x. And what he said is, over the next 10 years, we want to improve it by another million x. Again, if you have a view of where the compute requirements need to go to achieve some of the things that everybody in this industry wants to achieve, we need efficiencies from wherever they're going to come from. So no, I don't think it changes that long-term narrative at all. And in fact, we've even seen this. You have to remember, DeepSeek released the V3 model, not the reasoning model but the chat model, back in December. This is not new. And even just last week, when all this was coming up, we got a number of data points that clearly show, at least at this point, spending's going up, not down. Meta significantly increased their CapEx. We got the whole Stargate announcement, $500 billion. Even China announced something like a one trillion RMB, about $140 billion US, AI effort that they're going to be putting into place. And so clearly right now, I think spending is still continuing to accelerate. I don't think this has to stop that.
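
To put those keynote figures into per-year terms, the short sketch below (Python, illustrative only; the 1,000x and 1,000,000x figures are the ones recounted above) converts the decade-scale improvements into implied annual rates by taking tenth roots.

```python
# Back-of-the-envelope: what the cited GPU-performance figures imply per year.
# The 1,000x (past decade) and 1,000,000x (next-decade goal) come from the
# keynote as recounted in the segment; the per-year rates are just tenth roots.

past_total, goal_total, years = 1_000, 1_000_000, 10

past_rate = past_total ** (1 / years)   # ~2.0x improvement per year
goal_rate = goal_total ** (1 / years)   # ~4.0x improvement per year

print(f"Past decade: ~{past_rate:.2f}x per year")
print(f"Next-decade goal: ~{goal_rate:.2f}x per year")
```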

Speaker 3: Just lastly, while we have you, Dan and Stacy, you're welcome to answer this very briefly after Dan, but for investors that have been trying to figure out where the dip buying opportunities are and have been waiting for a dip, this seems like a pretty massive dip that we're seeing here going into the open. Should investors feel comfortable buying in on some of these names with this dip?

Speaker 4: The secular trend isn't changing. And so for AI, I see this as an opportunistic moment. Like I said, I actually look at some of the second- and third-order network effects, the companies that have come up but haven't had as much direct ability to ascribe value from AI. So I look at the SaaS players. I look at some of the platform players and the cloud players that have been making big CapEx spend. Jevons paradox: I think we're going to see more demand, we're going to see more interest. I think companies can move faster and they can drive EPS expansion. So I'm not bearish on this at all. And while people are extremely negative, FinTwit is going crazy, I think calmer minds prevail. And like I said, Brad, we need to investigate all of this data and make sure they've really done what they've said with what they said they used. But I think we're in a good position. I don't think this is a long-term dip.

Speaker 3: Yeah, Stacy, quick comment.

Speaker 2: Yeah, I mean, look, everybody's always looking for a dip. Well, we've got a dip, and we haven't changed our views on any of these stocks. This doesn't change it.

Speaker 3: Dan, Stacy, thanks so much for taking the time ahead of the opening bell here. Appreciate it.
