Speaker 1: Chinese AI company DeepSeek has just released their R1 model, which competes against OpenAI's o1 model. It has chain-of-thought reasoning. The difference with DeepSeek is that they've open-sourced all the code for their model, so we can download it and run it locally. To do this I'm using Ollama, so let's go ahead and download that first. It's available for Mac, Windows, and Linux, and Ollama is a framework for running different large language models like Llama 3.3 and the latest DeepSeek R1. I'll be downloading and using the 32-billion-parameter version, which is 20 gigabytes. There is a massive 404-gigabyte, 671-billion-parameter version, but you'd need specialized hardware to run that effectively. I'm going to be doing this on a high-end gaming laptop. Once that's downloaded, we'll get a prompt to install it. Now that it's installed, let's open up a terminal. I'm going to select the version I want, which is the 32-billion-parameter, 20-gigabyte version, copy the command shown for it, and paste it into the terminal.
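The terminal step above boils down to a single Ollama command; a sketch, assuming the tag `deepseek-r1:32b` (the name Ollama publishes for the 32B distilled variant) and an installed Ollama:

```shell
# Download the ~20 GB 32B-parameter DeepSeek R1 model, then start an
# interactive chat session with it in the terminal.
ollama pull deepseek-r1:32b
ollama run deepseek-r1:32b
```

`ollama run` will pull the model automatically if it isn't present, so the explicit `pull` is optional.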
Speaker 2: Okay, that's installed. Now let's test it.
Speaker 3: It could be criticized for overthinking the problem, but it does have the right answer. Let's ask it about something more nuanced.
Speaker 1: My first impression is that it's more personalized than ChatGPT. It's not providing generic answers; there's more "I think this," "I guess that," "I wonder if this is happening." It's more of a human-like response rather than something that's just acting as a calculator or a tool. One of my favorite uses for large language models is brainstorming, so let's see if we can come up with some brand names for this YouTube channel. With this query it almost seems like it's going through the logic process a human would: coming up with different ideas for how to create a brand name, then iterating on previous responses. It almost feels like it's thinking on the fly, which is the chain-of-thought reasoning that o1 and this R1 model use. The final thing I want to try is to get it to write some code. This is only so useful from the command line; obviously, if we want to do something programmatically and use this model within an application, we'd need some kind of agent, and to do that we'd use something like LlamaIndex. I'm going to ask it to create an AI agent using LlamaIndex to write a kid's story on any given topic. Another nuance of this model is that, in its output, it seems to think out loud: in the first few paragraphs it's thinking about how it should do something rather than actually giving the final response. This is kept more behind the scenes with ChatGPT's o1 model. On this device, which has a GeForce RTX 3080 Ti, you can see the 32-billion-parameter, 20-gigabyte version is running slowish but usable; it's creating output just a little slower than you would naturally read it, I would say. Bear in mind that the outputs I've seen so far are quite wordy. It's not as concise as other models I've used in the past, and it's completely failed to actually give us any of the code I asked for.
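The idea of calling the local model from an application, rather than the command line, can be sketched roughly as below. This is an assumption-laden sketch, not the video's actual code: it assumes the `llama-index-llms-ollama` integration package is installed, a local `ollama serve` is running with `deepseek-r1:32b` pulled, and the helper names (`build_story_prompt`, `tell_story`) are hypothetical.

```python
def build_story_prompt(topic: str) -> str:
    """Build the kid's-story prompt that will be sent to the model."""
    return (
        f"Write a short children's story about {topic}. "
        "Keep it under 300 words and suitable for young readers."
    )


def tell_story(topic: str) -> str:
    """Ask a locally running DeepSeek R1 (via Ollama) for a story.

    Assumes `pip install llama-index-llms-ollama` and a running
    Ollama server with the deepseek-r1:32b model available.
    """
    # Imported here so the prompt helper works without LlamaIndex installed.
    from llama_index.llms.ollama import Ollama

    llm = Ollama(model="deepseek-r1:32b", request_timeout=300.0)
    return llm.complete(build_story_prompt(topic)).text
```

Usage would be something like `print(tell_story("a brave tortoise"))`; the long `request_timeout` allows for the slow local generation described above.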
It's quite interesting, actually: if you take a look at this query, we've got think tags here, almost like HTML tags. It opens here and closes here, so all this text output is just the model thinking through how to solve the problem. There might be an option to remove that from the final output, or you could filter it yourself: if you're writing the output to a text file or some kind of directory, you could strip that section before you store the actual result. Finally, we get the output here, which gives some instructions for installing LlamaIndex and LangChain, and then it's actually written the code for us, which from first impressions looks pretty good. The beauty of this is that it's free to use: there's no cost involved and no subscription required. You can simply download and run it on your local machine, or on a server somewhere if you wanted to put it into a production environment. Also, using open-source software is really important for the industry; it prevents a consolidation of power in a few closed-source companies, which is something I've spoken about in previous videos. I hope you've enjoyed this video. If you want to stay up to date with the latest emerging technology, hit the like button to train the algorithm, subscribe for updates, and thank you for watching.
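The filtering idea mentioned above, stripping the reasoning block before storing the result, can be done with a small regular expression. A minimal sketch, assuming R1's reasoning is wrapped in literal `<think>...</think>` tags as shown in the video:

```python
import re


def strip_think(text: str) -> str:
    """Remove <think>...</think> reasoning blocks from DeepSeek R1 output,
    leaving only the final answer."""
    return re.sub(r"<think>.*?</think>\s*", "", text, flags=re.DOTALL)


raw = "<think>First I consider the request...</think>Here is the final answer."
print(strip_think(raw))  # → Here is the final answer.
```

The `re.DOTALL` flag lets `.` match newlines, since the thinking section usually spans several paragraphs, and the non-greedy `.*?` stops at the first closing tag.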