Speaker 1: Welcome to Deep Tech 315. An action-packed week, all related to AI, and of course, with the flurry of announcements from Trump in his first few days, we saw the announcement on Stargate. Stargate is $500 billion in AI infrastructure, set up as a separate company. The two headliners are OpenAI and SoftBank coming together, with Oracle as another partner, and Microsoft involved as well. It's kind of the who's who. And maybe before we really dive into the significance of it, a little bit more on the substance. When you think of AI infrastructure, think of something similar to what the other hyperscalers have been building, what has been driving this beautiful run that NVIDIA's business has had. Think of it like Colossus, what xAI is doing: the hardware brains, with a lot of NVIDIA chips being purchased for it. The interaction between who gets to use it, which models run on top of it, whether it favors OpenAI: there are still a lot of unanswered questions. At the highest level, my biggest takeaway concerns the worry that there's going to be some sort of slowdown in AI spending, that the infrastructure trade is basically over. We're seeing the exact opposite here. We're seeing an acceleration. To put the $500 billion into perspective: if you look at the Mag 7 in 2024, they spent somewhere around $225 to $250 billion in total CapEx, and about half of that is related to AI infrastructure. So this $500 billion is a huge number any way you cut it. That was my biggest takeaway: the AI trade, the infrastructure trade, is alive and well. But is there something else to read into how this is set up, or why Elon wasn't involved? That was a little bit of a surprise. Anything else catch your attention?
Speaker 2: Well, I think the biggest thing that caught my attention was how it's forcing others to make big announcements too. So just today, Mark Zuckerberg with Meta said they would spend 60 to 65 billion in CapEx themselves this year.
Speaker 1: That's up from 51 billion, which is where the Street was at.
Speaker 2: Exactly, 30% higher than where Wall Street was expecting them to spend. And so you think about that, and he's talking about clusters getting up to a couple hundred thousand GPUs. So, by the way, Colossus is at 100,000. Now you've got to get higher than them.
Speaker 1: That's the idea that the arms race is alive and well.
Speaker 2: It is. And I think the Stargate thing, if you silo it, that in and of itself is a big announcement. But to me, the second order effect is it's going to make everybody else think about how much they need to spend and probably increase what they were even planning on spending before. Now, whether or not all that spending actually happens over the next several years, maybe it doesn't. Maybe something happens, the environment changes, but the announcements are going to beget other announcements. And I think we might hear more of them maybe from Google, maybe from Elon as well.
Speaker 1: What was Altman's comment about, like, Saudi funds? That was like a year ago. Was it like a trillion dollars in investment? Do you remember that?
Speaker 2: It was, if I remember correctly, I think he was trying to build or at least investigating like semiconductor capacity. So I don't think it was data center related.
Speaker 1: It was a huge number that really hasn't played out. It's an example of something that probably isn't going to happen, so I think you should take the Stargate stuff with a small grain of salt. The one piece that I didn't fully understand: we have this separate entity, Stargate, an AI data infrastructure company. Who's going to be using it? There was talk that it's going to be geared initially toward government usage and potentially healthcare, like trying to figure out cures to diseases, for example. But is this infrastructure that any AI developer could potentially tap into? Or is it just so open-ended right now that it's too hard to call?
Speaker 2: I mean, to me, it's not entirely clear. As far as I've read, I haven't even seen who exactly is leading this. It's supposed to be an independent entity, and even that's not entirely clear. So right now, what we know is this group, a lot of heavy hitters in the AI space, has agreed to come together. They're going to invest a lot of money, whether it's $500 billion or something different, and they're going to build some serious infrastructure without a doubt. OpenAI will, I'm sure, be using that infrastructure. As for whether others might use it, that's not clear to me.
Speaker 1: Makes sense. And I wonder if there's also a piece here with the government. It was timed around Trump taking office, but the government doesn't have a direct role in this per se. I think too about the waves Jensen talks about: we're in the application-building wave now, then the second wave was industrials, and the third wave is sovereign AI, with countries getting more involved. But I don't think this is the US government getting more involved in AI. Am I reading that right? Or is there just not much to read between the lines?
Speaker 2: Maybe it's like the toe in the water or the precursor to what you're talking about, where obviously, I mean, the government wanted and Trump wanted to make an announcement around this. Yeah, maybe he had some hand in trying to bring people together or at least saying, hey, let's figure something out and make a big announcement because it's good for all of us. Maybe that's, you know, a preamble to, you know, hey, does the EU finally stop trying to regulate everything out of existence and try to make a similar investment? You know, does Japan do something? Does the UAE or Saudi, do they do something, right? It could open the door to some of that.
Speaker 1: Well, the hits kept coming. OpenAI announced Operator this week. Operator is basically the first time agentic AI is getting into the hands of consumers. Agentic AI is when you have a bot that runs on your computer and can actually go out and do a multi-step task: not just respond to one prompt, it can prompt itself to figure out what's going on. First of all, you've got to be a Pro user of ChatGPT, which is 200 bucks a month. The use cases are anything from booking tickets to shopping; you can turn it loose on almost any website, but those were some of the bigger ones they highlighted in the demo. I've downloaded it, I've used it, and my sense is that it's still early. And I do want to give OpenAI credit: they said many times in the lead-up video that this is early, that it's still a research project. They said it is 38% as accurate as a human, and to me that's basically red flashing lights: don't trust this yet. My experience is that it's remarkable to see, because you watch a browser working and see it check through the tasks. But there are a lot of stops; a lot of the time it asks first, and if anything important happens, you have to step in and confirm. So initially it didn't feel like there was any productivity bump in my life, anything to help me shop faster. But just in terms of laying the groundwork, this is a really big deal: agentic AI is in our hands.
Speaker 2: I'll reuse the word precursor here too: to me, this is the precursor to real agentic AI. To your point, if you have a Pro account and you try it, what you'll experience is the AI doing everything it needs to do for you, but it's going to ask you maybe ten questions by the time it actually gets to the end. And you'll be asking yourself why, when you could have just done it faster yourself. That's not the point, though. The point is that the AI is capable of understanding how it needs to conduct these actions, and over time it will have to ask fewer and fewer questions. And I think if you fast forward, given how fast things are moving, six months, maybe a year, you will see Operator doing things where, instead of saying you want to order a pizza and a two-liter bottle of soda and answering ten questions until you finally get there, you say just that, and it actually figures it out for you.
Speaker 1: Yeah. And on that idea of it just figuring it out, as you said, I think that's probably the right timeframe, probably a year before we get there. But that's a pretty complicated thing, because essentially what OpenAI is doing here with Operator is saying, we really don't want to take responsibility for the big questions. We want to make sure you really did say eight avocados. This is a use case I had with Target today: I had to double confirm that I did in fact want eight avocados. Those are pretty big decisions, and I just wonder, is there a question about who's liable? I think about an autonomous car getting in an accident: who's liable? I use this bot, I turn it loose on buying my avocados, bananas, and toothpaste at Target, and I show up with a $1,400 delivery from Target. Is that going to be a topic around this whole agentic AI push?
Speaker 2: Probably at some point. But the way I think about it, there are two big buckets of information we have to figure out with agentic experiences. One bucket is sort of access, or general information, I would call it: your location, your passwords, your accounts, credit card numbers, things like that, where you're not going to want to have to enter that stuff every single time. If you trust the agent, you're going to want it to have that.
Speaker 1: Once you put it in once, Operator doesn't ask you again. So it gets smart quick on that front.
Speaker 2: Only for that first use; if you do another use case, it will ask again. So it won't save your information, just to be clear, as far as I've experienced. But over time, I think it will: you keep a card on file with ChatGPT and you're totally happy. So that's one piece of it. The other piece of information it needs to understand and collect is the detail of the instructions. And I think that's where it gets a little bit more complicated.
Speaker 1: Like the prompt may be ambiguous.
Speaker 2: But I think this is an easy solution from a UI standpoint. There's a feature in AI models now called temperature, where you can turn the temperature of the model up or down. Think of it as a lever for creativity or randomness: how creative the model's answers might be. Do you want it super creative and maybe wrong, or really pulled back and probably more likely right? I think you're going to see a lever like that for these other decisions. If you just say, I want avocados, and you turn the lever down, it might ask you how many; but if you leave it alone, it might guess that Gene probably wants five.
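The temperature lever Doug describes is, mechanically, a divisor applied to the model's raw scores (logits) before a token is sampled. A minimal sketch of that mechanism, using made-up toy logits rather than a real model:

```python
import math
import random

def sample_with_temperature(logits, temperature, rng):
    """Sample an index from raw logits, scaled by temperature.

    Low temperature sharpens the distribution (predictable answers);
    high temperature flattens it (more varied, more "creative").
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Inverse-CDF sampling: walk the cumulative probabilities.
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1
```

At temperature near zero this collapses to always picking the highest-scoring option; at high temperature the lower-ranked options start showing up, which is the creativity-versus-reliability tradeoff described above.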
Speaker 1: But what about this case? I told it eight, and then it came back and said, are you sure you want eight?
Speaker 2: Confirm. Yeah. Again, I think that could fit in that lever concept: do you want me to confirm when there are quantities, or just trust one input of the quantities? And then it's kind of on the user.
Speaker 1: And lastly, this comes in the category of, I had to do a spit take after I heard the news: out of China, a new model has been announced called DeepSeek. The real mind-blower is that they claim to have trained it for $6 million, when a typical model takes hundreds of millions of dollars to train. And I know, Doug, you've already been playing around with it. So there are two pieces. How good is the model? Actually, first question: $6 million. Is this really something that can be built for $6 million?
Speaker 2: We don't know. So the conspiracy theory, and there may be something to this, is that maybe this is sort of a Chinese psyop. I saw somebody posting this on X, and it's an interesting theory: maybe the Chinese government backed the creation and training of this model and then wanted to put out this number to make the US AI community question itself. Like, how could we not achieve this when there are researchers in China doing it with $6 million? So if your two scenarios are that some ragtag band of researchers figured out how to do this for single-digit millions, or that China was somehow funding and helping them, I would put the odds a little bit more on the government having a hand in it versus a true $6 million training run.
Speaker 1: And Doug, with Intelligent Alpha, you can continue to experiment with this. Can you quickly talk about whether you have any concern about using a model that's based in China?
Speaker 2: Not yet, no. And we haven't been sending any really sensitive information, to the extent we even use it right now. But we have been testing the model. As a quick recap, Intelligent Alpha is a company we've started where we use large language models to do investment analysis and portfolio management. As part of that process, we test and use pretty much every model on the market. For the most part, we rely on GPT, Gemini, and Claude, and I would say GPT is probably what we use most. We've been testing DeepSeek over the last day or two, and if you just compare them, some spot-checking of responses side by side, DeepSeek is really good. I'd give it an A, probably. As for how I feel about using a model based in China: I don't feel any way about it right now. It's open source, number one. So if we want to download it and do some of our own work on it, I think we can do a reasonable job of protecting ourselves from a data standpoint. We don't have to rely on their infrastructure.
Speaker 1: Okay. I guess that answers the question.
Speaker 2: Right. And again, think about data and what you're giving any of these models. To the extent you're ever using someone else's infrastructure and you're not running locally, you have to be comfortable giving that company your data, whether it's a US company or a Chinese company.
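Doug's point about being comfortable with whatever you hand to a hosted model, US or Chinese, is often handled in practice with a scrubbing step before a prompt leaves your machine. A hypothetical sketch; these two patterns are illustrative, not a complete PII filter:

```python
import re

# Illustrative patterns for obviously sensitive strings. A real
# deployment would use a vetted PII-detection library, not two regexes.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),  # 13-16 digit runs
}

def redact(prompt: str) -> str:
    """Replace matched sensitive substrings before sending a prompt
    to any third-party model API."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt
```

In practice you would pair something like this with an allowlist of what may be sent at all, rather than relying on pattern matching alone.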
Speaker 1: Well said. More to come. On behalf of Deep Tech, bye for now.