Excited About Offline AI Models & Open Source Future
Discusses the excitement around offline AI models, skepticism toward big AI companies, and the potential for a decentralized AI future. Embraces open source innovation.
I Think I Love Deepseek R1
Added on 01/29/2025

Speaker 1: I'm extremely excited about R1, and not for the reasons other people are excited about R1. The reason I'm excited about R1 is actually quite simple. When I look at all this stuff, what I see is the availability of a model that is really, really good that you and I can actually use and put to work, even some of the smaller models. Yeah, they're not as good, but we can actually use them, and for me that is really amazing. That's what I absolutely love. I can set up some basic couple of GPUs and run even a decent-sized model, not a great model. I don't need to spend $25,000; you could spend $1,000, get a good-enough piece of hardware to run something small and local, and it can be completely offline. You can have it just for you and run it without having to worry about all the other stuff, the data collection, all of that. That actually seems pretty amazing, like a huge W. So I'm pretty happy about that.

And that's what I want to see more of: more ability to run stuff myself without having to rely on OpenAI tracking and logging me, because they already take all of our information, they already do all that stuff, and it's pretty awful. So the fact that I can take control, that I can be the one in charge, matters. OpenAI is not open. OpenAI is not the one that has your back. They don't have your back. They just want your data, and they want to take all the money from you. Whereas this one actually feels pretty awesome.

Now, the whole Chinese government thing and all that, that's why I would just never want to use it online. I'm not going to use their online interface. I just assume they're no better than OpenAI, that they're taking the data, using it to keep training, and who knows what else they're actually using it for. I'm totally and completely not buying any of it, for whatever reason. So with that in mind, I do love that I am now the owner.

This is why I'm buying a couple of those Mac minis; I'm going to try to build my own little Mac mini farm. I'd also like to play around with a couple of GPUs and build a motherboard with a few different GPUs. You can get a lot of PCI Express slots, jam a couple of GPUs in, see how it goes, see what it produces, and compare them: this is what it costs to run at this level, this is what it costs to run at that level. And then as the models inevitably improve, even though my hardware hasn't changed, I might get some improvement from the models alone, which could be amazing. On top of that, you could refresh your own hardware and pick the level you want to spend at, which does feel pretty cool. And even more so, if you're at a company that doesn't allow AI tools and all that, you could actually have your own offline one if you really wanted it. I mean, that'd be awesome, right? That'd be pretty dang awesome.
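For anyone wondering what "completely offline" looks like in practice, here is a minimal sketch of talking to a locally hosted model. It assumes a local server like Ollama running on its default port (11434) with a distilled R1 variant already pulled; the model tag, port, and prompt are assumptions of the sketch, not something the speaker specified.

```python
# Minimal sketch: query a locally hosted model over Ollama's HTTP API.
# Assumes `ollama pull deepseek-r1:14b` (or any other tag) has been run and
# the server is listening on its default port. Nothing leaves the machine.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # local endpoint only

def ask_local_model(prompt: str, model: str = "deepseek-r1:14b") -> str:
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # one JSON response instead of a token stream
    }).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask_local_model("Explain what comptime does in Zig, briefly."))
```

The point of the design is simply that the endpoint is localhost: if the box has no network access at all, the model still answers.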
I really love this idea of being able to play around with your own model. You're not going to run the full 700B model, probably not; I probably won't be able to afford that, because that's going to take a lot of money, a lot, a lot of money. But I'm going to be able to get something pretty close, a pretty big model running locally, and that's going to be awesome.

So for me, what I see is something I didn't actually think I was going to see with AI. Honestly, I always thought the future of AI was going to be strictly more control by fewer companies. But it might not be. There might be a small potential future in which it's not strictly run by a few companies, and to me that could be very, very exciting. Say thank you, China. That's kind of hard for me to say. Thank you, the people of China. So it's just something to think about. I really hope so. Maybe. I don't know what they can and can't run; I still have to do the exploration myself, so I'm not going to be someone who tells you all the things yet, because I haven't been able to do it yet. When I get out there and really work these models and see how they actually go, I think it's going to be a lot of fun.

One thing that's going to be super fun is integrating your editor with your own models. If I'm able to do any form of training on these, or any type of RAG, or whatever techniques people use now or whatever future techniques we don't know about yet, and provide my own set of data for them to work off of, you could imagine making a really amazing experience. That could be pretty cool.

I don't think GPU prices are going down at all. The only thing I see is that prices are going to keep going up, because more and more individuals are now able to opt in to using these things. I don't see GPUs somehow going down; I see GPUs going up, because even more people have access. It's not stuck with some small, tiny fraction of people anymore. Right now, all that's happening is Facebook, Amazon, Google, and OpenAI going, come on, please, please, NVIDIA, give me all your GPUs; they're the ones funneling the current market. Next, many people will be able to do it. So, very exciting. I think GPU prices are just going to go further up. I think that's all we're going to see. So I'm very excited about this. Honestly, I'm very excited.

And lastly, I think this is really great, because OpenAI always claims to want to do things for the good of humankind, and I don't think OpenAI has any intent of doing things for the good of humankind. If they were trying to do stuff for the good of humankind, they would have done more stuff open source, and I do not believe that at any point they have done anything for the good of humankind. I think they're truly trying to take advantage and make us all dependent on their stuff. So I am very happy that a beautiful, beautiful piece of open source came out. Very happy about it. And there you go. That's my true take. They're going to "rewrite the social contract for the better."
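To make the "provide my own set of data" idea above concrete, here is a minimal, self-contained RAG-style sketch. It is deliberately naive: plain bag-of-words overlap stands in for real embeddings, and the document snippets, function names, and question are all hypothetical. A real setup would use an embedding model and a vector store, but the shape of the technique is the same: retrieve the most relevant chunks, then hand them to your local model inside the prompt.

```python
# Naive "answer from my own documents" sketch: rank local text chunks against
# the question with bag-of-words cosine similarity, then build a prompt that
# you would send to whatever model you run locally.
import math
import re
from collections import Counter

def tokenize(text: str) -> Counter:
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_chunks(question: str, chunks: list[str], k: int = 2) -> list[str]:
    q = tokenize(question)
    return sorted(chunks, key=lambda c: cosine(q, tokenize(c)), reverse=True)[:k]

def build_prompt(question: str, chunks: list[str]) -> str:
    context = "\n\n".join(top_chunks(question, chunks))
    return (
        "Answer using only the documents below. Point to the relevant spot "
        "and give a short example.\n\n"
        f"Documents:\n{context}\n\nQuestion: {question}"
    )

if __name__ == "__main__":
    docs = [
        "comptime in Zig runs code at compile time; types are values there.",
        "Zig's defer runs an expression at scope exit, in reverse order.",
        "Allocators in Zig are passed explicitly; there is no hidden global one.",
    ]
    # This prompt would go to your local model (e.g. the ask_local_model
    # helper sketched earlier), never to a hosted API.
    print(build_prompt("How does defer work in Zig?", docs))
```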
I mean, the way Sam Altman talks should be kind of alarming to a lot of people, right? He really wants to rewrite the social contract. And I'm going to give you a little hint about anything like this: when somebody of great power says they want to rewrite the social contract, it means they want to rewrite the world such that they get more power and you get less. That's what that means. This is true in all types of government, for all time. Anybody with power who can rewrite contracts will rewrite them in their favor. This is not some uniquely American thing; just read a little bit of history. Every single kingdom is always pegging towards one person getting all the power and trying to kill everybody. It's just nuts. So I really have always hated OpenAI. It's actually part of the reason why I've been so obstinate with AI: I've always found them to just give me the hoof, you know what I mean?

And I see a bunch of people saying a bunch of stupid stuff in chat. Okay, hey, sorry that a bunch of people are saying stupid stuff in chat. You know what pegging means? I don't mean pegging in that kind of sense. Shut up.

But lastly, yeah, I don't trust DeepSeek. That's why I want to run it offline. I don't want to give it access to the internet, because I don't know if it calls home or not. I don't know what it does. I'm probably too stupid to actually be able to go over all of that, so I'm just not going to go over it. Instead, if I can have a completely offline model, I will have my own offline model doing my own offline things by myself. That will be fantastic. That's exactly what you want to see in the future. I don't want that China phone-home. Okay? We don't want that. Does it call home for sure? It very well could; I don't know. Again, this is why I want an offline model. "I found a USB on the street, should I plug it into my computer?" Well, it doesn't say "family photos" on it, because then you could look at who's in the photos and find out who lost their USB. Anyways, on the calling-home question, did someone actually find that it calls home?

"User input: when you use our services, we may collect your text and audio input, prompt, uploaded files, feedback, chat history, or other content you provide to our model and service." I'm curious what "to our model and service" means. Is "model and service" one unit? Because if that's the case, it should not be surprising to anybody: if you are sending data to a website, they're collecting that data at some level. But if you are using just the model, are they collecting the data? Do they really have the power to send it home? How would that even happen? Those are very different questions. So when it says "model and service," does the previous stuff apply to the model, or to the service? Is that an "or," or is that an "and"? Because when you program you use "or"s, and when you speak English you often mean "and," and it can be confusing to translate between the two, going from logic to English.

What I mean by that is, let me give you an example. Let's make a function for food that I like. If I say the phrase "I like pizza and burgers," you would write that function as return pizza or burger; you wouldn't write pizza and burger, because one food can't be both. Okay, maybe it can be both, but typically. You know what I'm trying to say?
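Here is the pizza-and-burgers point as actual code. The function and variable names are made up for illustration; the point is only that English "and" usually becomes a logical "or" when you check a single item.

```python
# English says "I like pizza and burgers", but the check for any one food item
# has to use "or": a given item is pizza, or it is a burger, not both at once.
def is_food_i_like(food: str) -> bool:
    return food == "pizza" or food == "burger"

# Translating the English "and" literally gives a predicate that is never true
# for a single item:
def is_food_i_like_wrong(food: str) -> bool:
    return food == "pizza" and food == "burger"  # always False for one item

if __name__ == "__main__":
    print(is_food_i_like("pizza"))        # True
    print(is_food_i_like_wrong("pizza"))  # False, the translation trap
```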
There's this thing that happens in the translation between what they're saying and what they mean, and I don't know what they mean. When I see "and," does "provided to our model and service" mean it's either the model or the service and both collect it, or does it mean data only reaches the model through the service? I don't know which reading it is. Do you see what I'm saying?

Anyways, now that we got that out of the way, I do want to say that I think this could actually spell a huge, cool new world. What I mean is that right now, if you ask ChatGPT to do Zig stuff, it is just god-awful. It suggests builtins that don't exist. It constantly does stuff that can't even be real. It's just constantly wrong. It is very, very bad at even simple examples. Whereas if I can take my AI and focus it on what I want it focused on, I can start getting it to answer my questions based on the documents I'm providing. That means I can literally take these documents and say: based on these documents, please show me the spots where I can read about how to do this specific action, and then give me an example based on those. You could do some pretty cool stuff. You can do that with ChatGPT too, but again, I don't want to just give all my data to OpenAI. To tell you the truth, I don't like OpenAI, and I don't trust them. Docs, like in Cursor: Cursor does a good job with this. So if you could use Cursor plus your own offline model, would that not be awesome? If I can ask Cursor questions against my offline model and have examples produced for me, that could actually make me really like the experience. Anyways, just a thought. That's a startup coming soon, I know. I want to do something along those lines, because I think it's very exciting.

Anyways, I'm very excited for the future of AI. I think this, R1, is actually the most exciting development in AI, in my personal opinion. I thought Copilot was probably the first most exciting thing ever. I still remember the magic I felt when Copilot finished my if/else and did the else correctly, with a tie: it did "if player one wins," then it kind of helped me with player two, and then just fully filled in the else and matched the if/else above it. I still remember looking at that and thinking it was one of the most amazing experiences I've ever had. Absolutely loved it. I was so stoked about it. So when I see this, I actually think it is, in fact, a more magical thing, because this gives me the tools to create that. And I'm so excited about it. So there you go. That's how I feel. I'm very excited.

Now I'm becoming an AI fanboy. Have I just become an AI tech bro? Did I just become an AI tech bro? Maybe, maybe, maybe not, probably not. But you could imagine that at least I'm pretty excited. You know, okay, I'm not. Stop it. I'm not going to be an AI tech bro. I still like writing code. That's it. I still think that AI produces shitty code. Okay? I can't help it. It produces just the worst code in the universe. I don't know how you guys keep doing it. I honestly don't understand why you guys use it all the time. To me, it's shocking.
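On the "Cursor plus your own offline model" idea: one common way this kind of wiring works today is through an OpenAI-compatible endpoint served locally, since many editor integrations let you point them at a custom base URL. This is a sketch under assumptions, not a claim about how Cursor itself is configured: it assumes the official openai Python package and a local server (such as Ollama) exposing a compatible /v1 endpoint, and the model tag is whatever your server actually serves.

```python
# Sketch of "editor-style" chat completions against a local server that speaks
# the OpenAI chat-completions protocol. Base URL, dummy key, and model tag are
# assumptions; swap them for whatever your local setup exposes.
from openai import OpenAI  # pip install openai

client = OpenAI(
    base_url="http://localhost:11434/v1",  # local server, not api.openai.com
    api_key="not-needed-locally",          # SDK requires a value; local servers ignore it
)

resp = client.chat.completions.create(
    model="deepseek-r1:14b",  # the tag your local server serves
    messages=[
        {"role": "system", "content": "Answer with short, correct Zig examples."},
        {"role": "user", "content": "Show a minimal defer example in Zig."},
    ],
)
print(resp.choices[0].message.content)
```

Because the protocol is the same, any tool that accepts a custom base URL can, in principle, be pointed at the offline model instead of a hosted one.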
It's honestly, personally shocking. But hey, that's you guys. You guys can use it all you want. That's all I've got to say. I can produce worse code? I'm actually not sure that's true. It's okay at debugging. You know who's much better at debugging? You, when you get good at debugging. You're not just a little bit better, you're massively better. It's crazy how much better you can actually be by getting good. Anyways, you know who's better? Your mom. Damn, you guys got me on that one. You guys got me. Anyways, I hate you guys. What am I saying? You know what? Why don't you press the like button and the subscribe button. How about you give me something this time? Why don't you give me a little goodness. Okay, just give it to me again.
