Speaker 1: Very happy to be joined by Arnaud Bertrand. He describes himself as an entrepreneur who tweets too much, but I have found your insights, both with regard to China in general and this AI development in particular, really valuable. I even brought you up earlier, Arnaud, in the context of this fight with Colombia. So great to have you here. Thank you so much for joining us. Thank you as always. Very good to be here. Our pleasure. So I've teased this a couple of times in the show, but this is a huge development from a Chinese company, which was able, with a fraction of the dollar amount, to develop an AI competitor to ChatGPT and all of the other big players in the space. In fact, by some metrics, this new release, it's called DeepSeek, is outperforming the prior competitors in the field. This has blown up all sorts of assumptions about AI development, about where Chinese companies were in terms of their technological sophistication, and about the success of Biden administration policies aimed at limiting China's high-tech development. So Arnaud, just take us through a little bit of what happened here so people have the backstory.
Speaker 2: So what happened actually started in December, when DeepSeek released a first model called V3, which already made a lot of waves because it was revealed to have been trained for only $5.5 million, which is absolutely nothing in this industry. I think OpenAI spends $5 billion a year, so roughly a thousand times less. And that one was already extremely good, already outperforming a lot of the models on the important benchmarks. Then OpenAI released a new model called o1, which is really supposed to be the top of the range. And right behind, just last week, DeepSeek released its own new model called R1, which outperforms or is on par with OpenAI's top-of-the-range model on almost all benchmarks. And they released it, and that's the important bit, open source, meaning that basically anyone can download it for free and use it as they please. It's released under an MIT license, meaning you can really use the model however you want, which is a huge difference from OpenAI. It's called OpenAI because originally the philosophy behind it was that it was meant to be open, open source and so on. But famously, they've taken a much more closed route, where they don't release their models open source and they don't disclose much about them. It's all a bit hidden. And so that's why a lot of people are shifting to DeepSeek now. As of today, it's even the most downloaded app in the US, number one, having overtaken all the other apps, including ChatGPT.
Speaker 3: Wow. And our friend Matthew Stoller made an interesting point in his newsletter, BIG, where he said that when you compare what China was able to do here with what AI companies based in the United States have been able to do, it almost makes the United States look Soviet. It looks like a Leviathan that is just trudging along compared to this alternative method. So do you think maybe there's something to that parallel, Arnaud? Or is there something we can take away from the way American AI businesses are organized in comparison?
Speaker 2: I mean, I think the interesting irony is that maybe there was a bit too much funding in the AI industry in the US for its own good. They had a bit too easy a life, because the reason DeepSeek was able to come up with such a good and efficient model, I think, is that they are very much operating under constraint. You had the export controls, the semiconductor export controls. And so China, in many ways, doesn't have a choice: if they want to compete with the US, given they don't have the same access to funding, the latest chips and so on, they need to come up with much more efficient technology. So I wouldn't say it's exactly the same situation as the Soviets versus the West back then. It's almost a situation where the US rested on its laurels, had it a bit too easy, and was operating in too comfortable an environment. And China, because of those constraints the US put on them, actually has to come up with simply better technology if they want to compete. And I think that's largely what happened.
Speaker 1: I think another piece you've been pointing to, Arnaud, which I find really interesting, is that in our country, by and large, a lot of the smartest grads who come out with technical degrees or technical know-how don't go into science or research. They go into financial speculation, effectively. And the Chinese government has looked at that and said, that's not the direction we want to go in. So they cracked down on salaries in the financial industry, and that creates incentives for the best and the brightest, lo and behold, to go into this sort of research and tech development. So talk about that piece a little bit, if you could.
Speaker 2: Yeah, I think it's one of the most interesting angles of DeepSeek, because it was actually a side project of a hedge fund. And it was released, quite coincidentally, less than a year after China cracked down on the finance industry, capping the overly high compensation in that sector. And you hear a lot of discontent among finance professionals in China, because they can't make as much money, the industry is becoming less attractive for graduates, and you're seeing a bit of a brain drain from it. But that is very much the point the Chinese government is aiming for, I think, because they're looking at the US and seeing, and it's a shame when you think about it, that a lot of the top graduates from Harvard and MIT and all the Ivy League schools often go into the finance industry, when, I mean, think what you want about the finance industry, but if you have pure geniuses, arguably their brains would be of more use to society developing new technology like AI or working on curing cancer or things like that. And I think this angle is fascinating because, I mean, it's difficult to draw a direct correlation, whether they did that side project exactly because of that or not, but at the very least it's an interesting coincidence.
Speaker 3: Yeah, and we're already seeing some of the effect of this in the stock market. Can you talk a little bit more about why it's affecting stocks in the way it is, and what we could expect to see going forward as the DeepSeek reckoning goes on?
Speaker 2: Yeah, so I was actually looking at my portfolio just now, and it's quite depressing. NVIDIA, I just saw, is losing 12% today, and the whole tech sector is down. Basically, there was this assumption that AI was all about compute: with more chips, you could get better models, right? And DeepSeek kind of destroyed that assumption, because they don't have a lot of compute and they were able to come up with a better model because they had better algorithms, better software. So what's happening is simply that those assumptions about building a moat for US AI companies with compute, with those massive data centers, like the Stargate project that Trump and OpenAI and so on just announced, are very much in question right now. And that's what you're seeing, I think, in the stock market.
Speaker 1: Yeah, absolutely. I mean, so much of the stock market's value is built on this sort of hype around AI. And so it's incredibly deflating, literally, to see this company be able to build a comparable product for vastly less. I will say I've seen some theories that they're not being straightforward about the amount of compute they actually used here. Most notably, the Scale AI CEO, Alexandr Wang, I think he was at the World Economic Forum, claimed that he thinks they are hiding the ball on exactly how many chips, what size of mega-cluster, they're using. Let's take a listen to that and get your reaction on the other side.
Speaker 4: You know, the Chinese labs, they have more H100s than people think. And these are the highest-powered NVIDIA chips that they were not supposed to have. My understanding is that DeepSeek has about 50,000 H100s, which they can't talk about, obviously, because it is against the export controls that the United States has put in place. And I think it is true that they have more chips than other people expect. But also, on a go-forward basis, they are going to be limited by the chip controls and the export controls that we have in place.
Speaker 1: What do you make of that?
Speaker 2: Well, it's difficult to say one way or the other. But the good thing about DeepSeek is that when they released the model, first of all, they released it open source, so you can see what's in the model. And they released an extremely detailed paper with it. So an AI researcher can just go through the paper and try to reproduce the same outcome, the model, based on what they say they did, and see whether that's true, or whether the methodology they describe in the paper is wrong and they're lying about the number of chips. So I can't say.
Speaker 1: Yeah. Well, I did read some analyses that looked at the paper and said, here are the key innovations they used to achieve so much more efficiency. It's way beyond my technical know-how, but people who seemed to know what they were talking about said, oh, that's how they did it, I see; they used very creative and efficient techniques here. One more thing I wanted to get your response to, Arnaud: Sam Altman weighed in. Obviously, this is a very bad development for him, because he has bet so much on, you know, now the Stargate program. I know he'd been battling with Microsoft for more and more money for larger and larger data centers, and now this sort of blows that whole direction up. Let's put this up on the screen, D3, from Sam Altman. He says: it is relatively easy to copy something that you know works. It is extremely hard to do something new, risky, and difficult when you don't know if it will work. Individual researchers rightly get a lot of glory for that when they do it. It's the coolest thing in the world. And, of course, this was largely seen as being directed at the DeepSeek development. What do you make of these comments from Sam?
Speaker 2: DeepSeek is definitely getting a lot of glory, so I'm not sure exactly what it means. I think something very interesting, a big sign, is what Marc Andreessen has been tweeting recently. I don't know if you've followed what he's tweeted, but obviously Sam Altman is biased, and I think Marc Andreessen is a slightly more neutral party here because he's an investor. And he literally tweeted, I can't remember the exact quote, but that DeepSeek was one of the most impressive breakthroughs he's seen in his entire career. And that's Silicon Valley's most legendary investor, the guy who invented the browser. So, you know, it's quite something, right?
Speaker 1: He said, one of the most amazing and impressive breakthroughs I've ever seen. And just to clarify, that Altman tweet was from back in December. So, presumably about the initial release of DeepSeek.
Speaker 2: But yeah, the new model is also quite a bit more of a breakthrough than V3, which was the one in December.
Speaker 1: My last question for you, Arnaud: is there a legitimate reason to keep AI closed source, as OpenAI has decided? Because, I mean, there are risks inherent in AI development. There's a whole field of AI safety, a lot of concerns about what it could do to the labor market, especially when it's being wielded for profit by companies like Microsoft. There are also some more far-reaching concerns about AI, once it's AGI and you have this level of artificial general intelligence, and then superintelligence that effectively supplants human beings as, you know, the smartest creatures on the planet. What does that mean? Do they decide they want to keep us around or not? So there are these further-reaching concerns about AI development as well. Do you see any drawbacks in the open-source model, which not only DeepSeek has pursued, but which Meta has also gone in the direction of?
Speaker 2: That's a good question. I honestly see more drawbacks with the closed model. Because at the end of the day, if it's closed, it's a limited group of individuals you don't have control over, oligarchs, effectively, billionaires, who take all the decisions when it comes to AI, which, you're right, is going to be extremely disruptive. It's an extremely powerful technology. So would you rather have that controlled by a small group of billionaires who very much can't relate to the general public? Or would you rather have the general public, anyone who wants to, have an influence on it through the open-source model? I mean, either way the technology has risks and will have a big impact, will impact jobs and so on. But I feel more reassured by the idea that anyone will have a say in how AI develops, that it is open and free, that anyone can build a startup around it freely on their own computer, rather than it being developed secretly, out of our hands.
Speaker 1: Emmeline, do you have anything?
Speaker 3: Well, I was just going to say that what we've learned from our own private and semi-private tech companies over the last decade is that you will have similar levels of censorship and political corruption whether they're private or public; they get co-opted, Arnaud.
Speaker 2: Yeah, on censorship, it's an extremely good point, because you see a lot of people on Twitter saying that the DeepSeek model is censored. And it's true that when you go on DeepSeek.com, because it's hosted in China, it is indeed censored, because you have censorship laws in China and so on. But anyone can download the open-source model and tune it however they want. If they want to turn it into a tool that even generates anti-China propaganda, they can, right? So that's what I mean by being more reassured that it's open source: anyone can do whatever they want with it. Whereas OpenAI is censored in its own way, and there's absolutely nothing you can do about it.
Speaker 1: Yeah, that's true. And I did see the comparison of people asking just very straightforward questions about like the Israel-Palestine conflict on ChatGPT versus DeepSeek. And I would say that the DeepSeek version was much less censored in that particular instance than the OpenAI version.
Speaker 2: Exactly. Every country has their own bias. But at the end of the day, open source matters a lot because you can then download the model yourself and input your own bias, make it your own, right?
Speaker 1: That's what I need. I need the world to reflect my personal bias. Arnaud, thank you so much. This is so helpful getting your breakdown of these developments and what it ultimately means. Great to see you.