Speaker 1: Hi, welcome to another video. DeepSeek R1 just launched, and it's pretty amazing. The distilled models are even crazier, because they let you run the model locally as well. The model is impressive in its own right, and it beats OpenAI's o1 and Sonnet. The API is also much cheaper than o1 and Sonnet: o1 costs $15 per million input tokens and $60 per million output tokens, whereas this model is just $0.55 and $2.19 respectively, which is insane to think about. Also, in Aider's benchmarks it beats Sonnet and comes in just below o1, which says a lot in itself. So I thought I'd show you how to use this model with Cline and Aider, and use it to do anything you want quite easily, replacing o1, because there's no need to use it anymore and no need to splurge money on it either.

First of all, using DeepSeek R1 with Cline itself is not recommended right now, because Cline doesn't support R1's output format yet. So I recommend using Roo Cline instead. Roo Cline is a fork of Cline with a mode called Architect, which lets you use R1, or any model, as an architect to design the architecture, while another model implements those changes. It also has a bunch of other features and officially supports R1, which is exactly what we need.

One thing I also want to mention: you can obviously use R1 through the API, but if you don't want to, you can run the distilled Qwen 32B R1 model locally via Ollama, or use the distilled model on something like GLHF with the free $10 credits and then just use the API. R1 is also available on Fireworks, which gives you $2 of free credit. I'll also show you how to use it with Aider.

So let's start with Roo Cline. Go to the VS Code marketplace and install the Roo Cline extension, or update it if you already have it. Open it up and you'll see the Architect mode and everything else. I recommend setting it up the way I'm about to show you. Go to settings and create a new profile: click the button to create it, hit the edit button, and rename it to something you can easily recognize. Once that's done, select DeepSeek as the provider, enter your API key, and choose DeepSeek Reasoner as the model, since that's what points to R1. I recommend creating a new profile because you can customize the other settings to your needs, and you can switch profiles from the main screen as well. I'd also create another profile that uses the DeepSeek Chat model, as that will be needed for the workflow I show next.

Now there are two ways to use the model: use DeepSeek R1 for all tasks, or use R1 only as the architect and the V3 model to implement those tasks. I recommend using R1 only as the architect and V3 as the main edit model; it's not only cheaper, it's also faster, because R1 can be a bit slow at times. So I have this Expo app, and I'm going to ask it to make me a playable synth keyboard, keeping it in Architect mode.
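For reference, the two Roo Cline profiles point at DeepSeek's OpenAI-compatible API, so the same model names work in a direct API call too. Below is a minimal sketch under that assumption; the prompt and the Ollama tag for the local 32B distill are illustrative, not something shown in the video.

# Minimal sketch of calling DeepSeek's OpenAI-compatible API directly.
# The model IDs match what the Roo Cline profiles point to:
#   deepseek-reasoner -> DeepSeek R1 (the planning / "architect" model)
#   deepseek-chat     -> DeepSeek V3 (the cheaper "edit" model)
# To run the distilled model locally instead, Ollama exposes it as
# something like `ollama run deepseek-r1:32b` (exact tag is an assumption).
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_DEEPSEEK_API_KEY",      # the same key you paste into Roo Cline
    base_url="https://api.deepseek.com",  # DeepSeek's OpenAI-compatible endpoint
)

plan = client.chat.completions.create(
    model="deepseek-reasoner",  # R1: ask it to plan, then hand the plan to deepseek-chat
    messages=[{"role": "user", "content": "Plan a playable synth keyboard for an Expo app."}],
)
print(plan.choices[0].message.content)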
Once we send it, you'll see that R1 creates a whole detailed plan for us, because, as you remember, it's a reasoning model and is especially good at these kinds of tasks. Now we have the plan, and it looks pretty good, just the way we wanted. You could use DeepSeek R1 itself to implement it, but I wouldn't prefer that. Instead, switch to Edit mode, select the DeepSeek V3 profile, and ask V3 to implement everything as required. Once we do that, it starts working on it, and after a bit it's done. It has followed the architect's plan, and if we run it, we get exactly what we asked for. It's one of the best generations I've seen here. If you run into an error, you can also switch back to DeepSeek R1, since it can work through a chain of thought and fix things accordingly, which is just amazing. For me, R1 is actually better than o1 in most cases, and it's also cheaper. And again, you can use the distilled models locally or via the GLHF API as well.

Apart from this, you can also use it with Aider, since Aider also has an architect mode, and you can use R1 directly as a plain edit model within Aider too, because it performs well there. First, get Aider installed or upgraded, whichever you need, with the usual command (a sketch of this setup appears at the end of the transcript). Once that's done, I recommend having an Aider config file where you set the default model to DeepSeek R1 and the edit model to DeepSeek V3. Or, if you wish, you can use R1 as the main edit model on its own. I'll be using it as the architect.

Now we can run Aider and ask it to make a one-pager HTML chat interface using the OpenAI API, make it look good, and keep it all in HTML, CSS, and JS. The architect creates a plan, similar to what we saw in Roo Cline, and then it starts writing the code. If we wait a bit, it's done, and it worked extremely well. Let's run it and see. I set up the API key in the code, and when we open it, it works well and looks pretty amazing. This could easily pass as a good-looking interface, which is pretty insane.

So this is amazing. I think R1 will disrupt the market a lot for OpenAI and Anthropic, because there's now one of the best models that is not only open source but also extremely cheap, and people can use it easily without messing around. R1 is surely a great model, and now we can have really cheap AI coding with DeepSeek. Overall, it's pretty cool. Anyway, share your thoughts below and subscribe to the channel. You can also donate via the Super Thanks option, or join the channel and get some perks. I'll see you in the next video. Bye.
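For anyone following along, here is a minimal sketch of the Aider setup described above. The install command and the config-file name follow Aider's documented conventions, but the exact model strings (deepseek/deepseek-reasoner and deepseek/deepseek-chat) and key names are assumptions based on Aider's litellm-style configuration, not something shown on screen.

# Install or upgrade Aider (assumes Python and pip are available).
python -m pip install -U aider-chat

# Make your DeepSeek key available to Aider.
export DEEPSEEK_API_KEY=your_key_here

# .aider.conf.yml -- R1 plans as the architect, V3 applies the edits.
architect: true
model: deepseek/deepseek-reasoner
editor-model: deepseek/deepseek-chat

With a config like this in the project directory, running aider starts in architect mode, so a prompt like the one-page chat interface above is planned by R1 and then implemented by V3.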