Exploring the DeepSeek R1 Series: Scaling AI Compute with Reasoning Models
Dive into the DeepSeek R1 series, its capabilities, and how it scales compute in AI. Discover model comparisons and prompt engineering insights.
No, Deepseek R1 is NOT better than o1 BUT you get 25x COMPUTE
Added on 01/29/2025

Speaker 1: It started with OpenAI's o1. Next came Gemini 2.0 Flash Thinking, and now you have the DeepSeek R1 series. The R1 600 billion parameter model through the DeepSeek API makes it nearly impossible to justify using o1, especially with its virtually limitless rate limits. The R1 series lets us scale our compute usage, and in the generative AI age, compute is how we scale our impact. Not only do these reasoning models give you well thought out answers, they also give you the chain of thought used to derive the answer. The journey your machine takes to arrive at its answer can be just as important as the answer itself. As you'll see in this video, the internal monologue of these models gives you another feedback loop, in addition to the response, that you can use to improve your prompts at scale. Let's play with every one of these powerful reasoning models side by side so you can understand how to scale your compute. When you scale your compute usage, you scale your impact. Let's open up Benchy. We're going to use the ThoughtBench tool today. This is a new tool that lets you compare models and their chains of thought side by side. We'll start out with a simple prompt: ping. So every model is immediately trying to figure out how it should respond to the single word prompt "ping". If we open up the settings here and shrink the column width, you can see we have all of our model responses and we have their thoughts. So let's go ahead and just focus on the thoughts alone. We're working with several DeepSeek R1 models. We're also looking at the latest Gemini 2.0 Flash experimental thinking model, and we're looking at o1, the state-of-the-art reasoning model, unfortunately with no thoughts. Although the DeepSeek reasoning model has made huge strides, it does not quite compare to o1, and you'll see additional evidence of that in this video. The fascinating part about these reasoning models is that they have this thought output. If we increase the column height here, we can see these models thinking through how they should respond. We can add both the thought and the response back into the view, and you can see we get several variants. And two of these, DeepSeek and Gemini Flash Thinking, help us understand how the model is arriving at its answer. You can see here Gemini Flash Thinking is really working through all the possible scenarios and what we might mean by ping. If we scroll down to the bottom here, you can see it eventually just says pong. And this is what we're looking for, right? When we type in ping, we're just looking for a simple pong response. Ping is also a network protocol command from the terminal, and that's where the DeepSeek Reasoner decided to go with this. You can see OpenAI's o1 understood this prompt better than any model. It just responds with pong. When you say ping, all you want back is pong. You can see the 8 billion and 1.5 billion parameter models giving us a response in a different language. So even with a single word like ping, having the thought process available helps us understand how each model is deriving its final answer. If we reset here, this benchmarking tool can do a couple of cool things. We can add any Ollama model we have installed. We can also add any of the common provider models that we want. So I'm going to go ahead and add DeepSeek R1 14 billion parameters. You can see that got added there. And let's go ahead and add Anthropic's Claude 3.5 Haiku.
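(For reference, here is a minimal sketch of how you could pull both the chain of thought and the final answer for a prompt like "ping" yourself, assuming DeepSeek's OpenAI-compatible API and the reasoning_content field it documents for the deepseek-reasoner model; the API key is a placeholder and this is an illustration, not ThoughtBench's actual code.)

```python
# Sketch: fetch both the chain of thought and the final answer for "ping"
# from the DeepSeek API (assumes the openai client package and a valid key).
from openai import OpenAI

client = OpenAI(api_key="YOUR_DEEPSEEK_API_KEY", base_url="https://api.deepseek.com")

response = client.chat.completions.create(
    model="deepseek-reasoner",
    messages=[{"role": "user", "content": "ping"}],
)

message = response.choices[0].message
print("THOUGHTS:\n", message.reasoning_content)  # the model's internal monologue
print("RESPONSE:\n", message.content)            # the final answer, ideally just "pong"
```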
So let's go ahead and get the latest Haiku model just to have it in here for fun. Now let's run an AI coding prompt and see how the chain of thought can be useful for improving a small AI coding prompt. I'm going to just type: create def convert_csvs_to_duckdb(csv_paths, db_path: str), and it's going to return nothing. Just from this function definition, there is enough information for our reasoning models to fill out the function. Let's fire this off and see how our models respond. Every Ollama model you see is running locally on my M4 Max MacBook Pro. This is a 128 gigabyte unified memory machine, the top of the line M4 Max. So we are blazing through the 1.5B and we're also blazing through the 8B. The M4 can also run the 14, the 32, and the 70 billion parameter models. Let's adjust our width so we can see all of our models side by side, and let's see who's still running. We have the DeepSeek 600 billion parameter model, which is running in the cloud, hence the logo, and then we have the DeepSeek 14 billion parameter model on my machine. You can hear my M4's fans kicking up here. Running language models is the only time I hear these fans kick on. If we knock our displays down to only the response, we can see what our models gave us. Let me shrink things down a little bit more so we get Claude 3.5 Haiku. Something interesting we can do with this benchmark is compare just a couple of models side by side. So let's say I just want to look at DeepSeek 8 billion parameters, let's get the Reasoner, and we'll use o1 as our control model. We can expand the widths here and take a look at these answers. You can see the 8 billion parameter model putting out a decent response here, quite a bit of code. It is using rm -rf, which seems quite dangerous, but that's what we have there. If we scroll down here, there's a nice version from the DeepSeek Reasoner model, and of course we have o1 giving us a great response as well. If we want to, we can copy the outputs, open up an editor, and see exactly what this looks like. So if we just remove these pieces and take a look at the code, you can see we have a nice functioning result based only on the function definition we gave. This is a common AI coding technique you can use to generate entire functions: just give your LLM the information it needs to get the job done. We can take a look at the DeepSeek Reasoner's response and we get something very similar. Let's drop everything here, remove the explanation, and we have a very similar result. That looks really good. You can see that these models are picking up on DuckDB's read_csv_auto function. This is a really important function built into DuckDB that you can use to automatically generate tables from CSV files, so it's nice to see our models using it, all the way down at the 8B size. If we copy the 8B response out and take a look at it, we do have that kind of scary rm -rf at the bottom, but we can see what that's all about. We have this DuckDB SQL block; not sure what that's for, but it's there. So it looks like this is probably going to be a bad answer. It's making up some things, and we can dive into why that might be by looking at, of course, the thoughts.
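(For reference, here is a minimal sketch of the kind of completion these models converge on, assuming the duckdb Python package; the parameter names are a best guess from the spoken signature, and the generic table naming is one arbitrary choice a model might make.)

```python
# Sketch: one plausible completion of the bare function definition.
import duckdb

def convert_csvs_to_duckdb(csv_paths: list[str], db_path: str) -> None:
    """Load each CSV into its own table inside a DuckDB database file."""
    con = duckdb.connect(db_path)
    try:
        for i, csv_path in enumerate(csv_paths):
            # read_csv_auto infers column names and types from the CSV itself.
            # Naive single-quote interpolation; fine for trusted local paths.
            con.execute(
                f"CREATE OR REPLACE TABLE table_{i} AS "
                f"SELECT * FROM read_csv_auto('{csv_path}')"
            )
    finally:
        con.close()
```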
So if we look at just the thoughts of these models, let's pull in our 14 billion parameter model as well and shrink the column size a little bit. We can see something really cool. Actually, let's reset and just get our R1 series models pulled into our view. We can see something really interesting: they all follow a similar pattern. And that makes sense, because every one of these models was distilled from the DeepSeek Reasoner model. If we search for "Okay,", pretty much every thought starts out with "Okay, I need...". You can see this pattern throughout all the distills. Then we have a couple of other patterns that show up throughout the thought process. We see "First,", we see lots of "Wait,", and "Wait" pops up quite a bit. It's really interesting; this is how the model double checks itself, as if you or I were thinking through and solving a problem. But we can see here, just for this one function, there's quite a lot of thought going on, and to me that's telling me a couple of things. Look at how much time the 14 billion parameter model spends thinking. We can copy this out and paste it in an editor, and it generated about 4,000 tokens of thoughts, quite a bit for a relatively simple problem. If we copy the thoughts from the DeepSeek Reasoner, we see something similar. So we have 4K from R1 14 billion parameters, and if we look at the 600 billion parameter model, we have about 5K tokens. You would assume that a larger, more powerful reasoner should be able to solve problems with fewer thought tokens, but you can see it working through this. Something important I want to call out: even the top of the line DeepSeek R1 model, 600 billion parameters, has to do a lot of thinking to get a small AI coding prompt like this done. This is a new signal we can take and say, hey, let's help these models perform better by analyzing their massive thought process and simplifying some of that work for them. We can do something like this: let's continue on the trend of AI coding and use an AI coding prompt specifically designed for a tool. This is not something you would use inside of Aider or Cursor or something that already exists, since those run their own AI coding prompt formats; this is something you would use in a separate tool. I'm going to paste in this prompt, and we can quickly take a look at it. There's nothing super special about it aside from the clean format. We're just saying: generate a function for the given user function request. We have a nice dynamic variable that gets updated as if this were an application, getting updated live over and over. So we can paste this in here and then run that exact same AI coding request: create def convert_csvs_to_duckdb(csv_paths, db_path: str) -> None. I'm going to drop our 1.5B. So we're giving our models a bit more information, a bit more help on exactly how to solve this problem. We have more instructions, we have more details; it should be easier for the models to think through this and solve the problem. Kick this off again.
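(Here is a hypothetical sketch of the kind of structured prompt template described here; the section names and the {{function_request}} placeholder are illustrative, not the exact prompt used in the video.)

```python
# Sketch: a structured AI coding prompt with a dynamic variable that an
# application would fill in at runtime.
CODING_PROMPT = """\
purpose: generate a complete Python function for the given user function request

instructions:
- implement exactly the function that is requested
- include any imports the function needs
- respond with only the code, no explanations or surrounding text

function-request: {{function_request}}
"""

def build_prompt(function_request: str) -> str:
    # Fill the dynamic variable, as the tool would on every run.
    return CODING_PROMPT.replace("{{function_request}}", function_request)

print(build_prompt("def convert_csvs_to_duckdb(csv_paths: list[str], db_path: str) -> None"))
```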
Let's go ahead and reset so you can see all of our models side by side. We can see Gemini Flash Thinking already completed. Let's shrink the column sizes so we can see all of our models. Oh, let's get out of thoughts-only mode and see both sections. There we go. So we can see Claude Haiku has a response for us; of course, it has no thoughts. Same with o1. Unfortunately, this beast of a model does not give us insight into its internal monologue, but we can see here Gemini 2.0 Flash Thinking does have an internal monologue. We just got our R1 14B completed, 8B completed. Now we're waiting on DeepSeek R1, the DeepSeek Reasoner, which is hitting the DeepSeek API; this is the 600 billion parameter model. Let's start with Gemini. Let's see the thoughts behind Gemini's new output and what they look like. I'll just go to a text file, paste this in, and look at that: quite a few fewer tokens. Check this out, only about 200 tokens now. Let's copy DeepSeek 14 billion parameters. So DeepSeek 14 billion, look at this, about 500 tokens. So by giving our prompt more structure, by giving our reasoning models a lot more information to work with, we have a clear purpose, instructions, and then a clear request, and the model doesn't need to think so much. And if the model is doing less thinking, your answer to whatever you're trying to solve is likely more accurate, more precise, more performant. Very cool to see that all the way down to our small models. Let's look at our R1 8 billion parameter model here, paste this in, and you can see about 600 tokens, very concise. It's working with the DuckDB functionality, and it's figuring out how to handle the CSVs as well. We can, of course, look at the DeepSeek Reasoner. This is our large model, and you can see something really interesting: DeepSeek at 600 billion parameters is putting out about 2K thought tokens. Still quite a bit, but down a lot from the 5K number. And you can see it's picked up on that key function, read_csv_auto. So it does see it, and it does know to use it. That's really important. Looking at just the response, one of our key requirements here is that we only want to see the code. So where do we say that? Here: do not include any other text, do not include any other code. We just want the output of this method. Let's dial into just a couple of answers: let's look at R1, let's look at the R1 Reasoner, and let's look at 14B. Close this up a little bit, expand the column width. I'm really enjoying using this tool. Link in the description, by the way. We're building on Benchy; this is a suite of benchmarks that you can see I'm building on the channel. As I'm working through analyzing and building with large language models, I want to share some of my tooling with you, and that's what this is. Feel free to grab the link in the description. A tool like this is super important, not only for the thoughts, but just to compare models side by side with different prompts and different ideas. You can easily come in here and add arbitrary models. As long as you get the model name right and use one of the available model prefixes, you can use gemini:, openai:, anthropic:, or any Ollama model name, as sketched below. Link in the description if you're interested. A lot of the work I'm doing behind the scenes I can't always share, but when I can, I love to share it with you here on the channel. Let's look at these responses.
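(As a hypothetical illustration of that prefix convention, here is a small routing sketch; this is not Benchy's actual code, just one way the idea could work.)

```python
# Sketch: route a prefixed model identifier to a provider; anything without a
# known provider prefix is treated as a local Ollama model name.
def route_model(model_id: str) -> tuple[str, str]:
    """Return (provider, model_name) for a model identifier like 'openai:o1'."""
    for prefix in ("openai", "gemini", "anthropic"):
        if model_id.startswith(prefix + ":"):
            return prefix, model_id.split(":", 1)[1]
    return "ollama", model_id

assert route_model("openai:o1") == ("openai", "o1")
assert route_model("deepseek-r1:14b") == ("ollama", "deepseek-r1:14b")
```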
So we're looking for just this concise code. If we copy the DeepSeek Reasoner, you can see we're getting a great response out of its thinking tokens. It thought through all of this and then it output just this. We go language mode markdown, and you can see how concise this is. This is a near perfect answer: CREATE OR REPLACE TABLE with the table name, and then SELECT * FROM read_csv_auto, and it's escaping the path. That looks great. What we get out of this is a DuckDB database with all of these tables created from these CSV paths. We can test this, of course, against o1, and we can see o1's response as well. It's going to look very similar because these are basically the perfect answers here. Let's go language mode Python. Same deal, although the Reasoner is a little bit more accurate by pulling the imports out of the function. And very interestingly, it looks like o1 may have made a mistake; we are selecting and importing everything. Oh, it's just creating the table in the first statement. You can see LIMIT 0, and then it's inserting using read_csv_auto. Very interesting. So anyway, we can see the small model, 8B, having trouble following the instructions. We don't want anything else; we're just looking for the answer. But still not too bad. You can see it got an answer. It's not going to be perfect here; it's actually definitely wrong, but that's fine, it is a small, compact model. Let's look at the answer from our 14 billion parameter model here and copy that out. So how have we done? A little bit more verbose. We can see we have the table name there, that's good. It started playing with the schema, then it dropped it. We have executemany. We don't need all this, so it's not performing very well. At some point, the smaller you go, the less these models are going to be able to do, and we can see that here. So how is this stuff useful? Having both the thoughts and the responses of these models gives you more information to help you improve your prompt even further. It's also helpful to see where the limits are. We can see that the 14 billion and 8 billion parameter DeepSeek models simply cannot accomplish this task without additional information. So, for instance, we can give them another shot by improving the prompt a little bit more. Let's try to guide our 8B and 14B. We'll copy, reset, paste the prompt, get rid of 1.5B, keep 8B, add 14B, and also add 32B. Let's add a couple of additional details to help our models give the response we're looking for. We can use the thoughts, if available, and the responses from our large models like the DeepSeek Reasoner, OpenAI's o1, and Gemini 2.0 Flash Thinking to guide the smaller models, and more importantly, to guide our prompt toward more precise output. So we can say something like this: use DuckDB's SELECT * FROM read_csv, and I think we want read_csv_auto, right? Yeah, read_csv_auto, just read_csv_auto('input.csv'). Okay, and then we saw the small models were making a couple of mistakes here, so I'll also say: use the CSV name as the table name. We're going to run DeepSeek R1 8 billion, 14 billion, and 32 billion parameters on my M4, right on my device. We're running the DeepSeek Reasoner in the cloud, o1 in the cloud, and Flash Thinking in the cloud. You can see here, o1 came back with a great response.
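(To make the two shapes concrete, here is a small sketch using DuckDB's Python API; the table and file names are placeholders, and the second pattern is only a rough reconstruction of the two-step approach described for o1.)

```python
# Sketch: the two answer shapes discussed above.
import duckdb

con = duckdb.connect("analytics.db")

# Shape 1 (the Reasoner's single statement): create the table directly from the CSV.
con.execute("CREATE OR REPLACE TABLE sales AS SELECT * FROM read_csv_auto('sales.csv')")

# Shape 2 (the two-step approach): create an empty table with the inferred
# schema (LIMIT 0), then insert the rows in a separate statement.
con.execute("CREATE TABLE orders AS SELECT * FROM read_csv_auto('orders.csv') LIMIT 0")
con.execute("INSERT INTO orders SELECT * FROM read_csv_auto('orders.csv')")

con.close()
```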
We can go ahead and dial into our top performers here, our mega cloud models, and you can see them spitting out a great answer. We can see, again, this trend continuing with Gemini: the more precise your prompt is, the less the model has to guess. And we can confirm that through the chain of thought. We can see Gemini thinking through everything: it's looking at the parameters, figuring out what they mean and how to form the right response. You can see it's doing some nice table name manipulation off the CSV paths and looping through them. We can see a similar output here from o1. And let's take a look at our small models. So let's look at 8B, 14B, and 32B side by side. We'll drop the column width so we can fit them all on screen, and let's take a look. We can see the 32 billion parameter model is giving us a nice concise response. Let's copy this out. And I actually like this a little bit more than, well, this is the OpenAI response, and this is going to be DeepSeek R1 32B. Check this out: we have the connection, we're looping through, we're using practically the exact same code. Let's convert this to Python and drop it down here so it's much easier to compare. So if I go down here, there we go: DeepSeek on the bottom, OpenAI on the top. While we're looping through this, we can see basically the perfect response coming out of it. Although, now that I'm looking at this side by side, once again you can see OpenAI's o1 is a little bit ahead. You have to create or replace this; I don't know if SELECT INTO works without the table existing first. I'd need to actually run this command to see. So R1 32B is getting closer here, and we were able to get these models a lot closer by looking at both the thoughts and the output. We can see 8B here still having some trouble; it is generating code that looks a little bit better. And in the 14 billion parameter model, we're still getting a little more text than we'd like, but we are getting a more precise answer. So this is great to see. And we can dial into the big hitters, DeepSeek Reasoner, o1, and Flash Thinking, and we can see basically perfect responses: DeepSeek with CREATE TABLE, o1 with CREATE OR REPLACE, and Thinking with CREATE TABLE. So what's happening here? Why is this important? It's important because we're acting at the intersection of two feedback loops now. We have the responses from our powerful high-end cloud reasoning models, and, minus o1, we have the actual thoughts they used to help derive those answers. By looking at the thoughts, we can use that information alongside the response as inputs to our personal feedback loops to improve our prompt. And I hope you can see where that could take us here on the channel. Every detail of your prompt matters; literally every single character can change the outcome. So the thoughts, in combination with the response, in combination with looking at models side by side like this, give us more information to improve the prompt, so that ultimately we can, at some point, hand off the process of generating prompts to an AI agent. We've talked about this pattern a little bit in our previous meta-prompting video; I'll also link that in the description.
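(Here is a minimal sketch of the refinement the improved prompt asks for, assuming the duckdb Python package: derive the table name from the CSV file name and use CREATE OR REPLACE so the statement works whether or not the table already exists. The names are placeholders.)

```python
# Sketch: the refined conversion loop with the CSV file name as the table name.
import re
from pathlib import Path

import duckdb

def convert_csvs_to_duckdb(csv_paths: list[str], db_path: str) -> None:
    con = duckdb.connect(db_path)
    try:
        for csv_path in csv_paths:
            # "data/monthly sales.csv" -> "monthly_sales"
            table_name = re.sub(r"\W+", "_", Path(csv_path).stem)
            con.execute(
                f"CREATE OR REPLACE TABLE {table_name} AS "
                f"SELECT * FROM read_csv_auto('{csv_path}')"
            )
    finally:
        con.close()
```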
And speaking of that meta-prompt, let's use it as another example and see how many of these powerful local R1 thinking models can keep up with a very, very complex prompt, the meta-prompt. We'll hit reset here to get back to a base state. I'm going to drop 1.5B and 8B right away; they will not be able to run the meta-prompt. And just to show you what the meta-prompt looks like, I'll clear and paste it in here. Let's change our language mode to XML. The meta-prompt itself is about a 2,000 token prompt, so this is a non-trivial, large prompt, and it's special because it generates other prompts. So what prompt do we want to generate with our meta-prompt? Let's keep it relatively simple. I'll say, purpose: convert the given text into a markdown table. Then instructions, and we see here we have some nice Cursor tab completion coming in. Our examples, we don't need those. For our meta-prompt we want user-input, we want text-blob, and we want table-columns. So you can imagine the generated prompt is going to have these three additional XML blocks at the bottom where we can fill in dynamic variables. And then we have our instructions here: using the user-input, text-blob, and table-columns, create a markdown table. Great. And I'll say: cover every column, detailed. I'll also say, and we can new-line this just to clean it up a little, use markdown table syntax, do not include any other text. Include a table header, H1, and a table footer, H2, with bullets, one for each column explaining the column. Great. Okay, so this is the meta-prompt. I'm going to copy all of this and paste it into ThoughtBench. Let's add a local model. Let's go ahead and run 32B and see if it can keep up; we'll type the model name ending in :32b and hit add. And let's also do 14B. I'm just curious if 14B can pull this off. I doubt it, but let's add 14B and fire off the meta-prompt. So this prompt will generate a prompt that does all of this for us. I mentioned I'm running my M4 Max, and when I'm running 14B-plus size models, that's the only time I ever hear my M4 actually spin up. You can see we're getting some responses coming back in. Let's close up the width a little so we can see all of our models. You can see how long this is for Gemini Flash; it's thinking quite a bit. And we can see the DeepSeek Reasoner also has quite a few thinking tokens here. We can copy them and take a look at what it thinks about the meta-prompt. So it's looking at the purpose and instructions sections; it's breaking down the meta-prompt itself. Really cool: it sees that the variables are correctly placed with this double square bracket syntax. This here is an instruction, so it is reading the instructions properly. And let's see what our model actually output. So this is DeepSeek R1, and if we just paste this response in: perfect. This meta-prompt generated a new prompt for us. You can see it's even using that leading-text prompt engineering technique that helps the LLM start its response. It has our three dynamic variables that we asked for: user-input, text-blob, table-columns. It's got clear instructions. And it's got a nice purpose: you are an expert at transforming unstructured text into well-formed markdown tables. Fantastic, right?
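(Here is a hypothetical sketch of the kind of prompt this meta-prompt generates; the wording and the double square bracket variable syntax follow what is described here, but it is an illustration, not the model's exact output.)

```python
# Sketch: roughly what the generated "markdown table" prompt looks like.
GENERATED_PROMPT = """\
purpose: You are an expert at transforming unstructured text into well-formed markdown tables.

instructions:
- Read the user-input, text-blob, and table-columns variables below.
- Create a detailed markdown table covering every requested column.
- Use markdown table syntax only; do not include any other text.
- Include a table header (H1) and a table footer (H2) with one bullet per column explaining that column.

user-input: [[user-input]]

text-blob: [[text-blob]]

table-columns: [[table-columns]]
"""
```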
If we look at our 32B and our 14B, they just can't do it. Look at these responses. Let's hone in here and look at them. Part of the problem is that the meta-prompt itself contains three full examples of running the meta-prompt, so it's very confusing for a language model; you need a lot of model size to really understand this prompt. You can see our responses are just completely bogus here. They're outputting a garbage mock table. These are just immediately unusable for this use case. But we can see that DeepSeek Reasoner, o1, and Gemini Thinking, if we compress these a little bit, these top-tier cloud models, all likely above 300, 400, 500 billion parameters, can all do the job well. Let's copy out o1's generated prompt. So this is DeepSeek R1, and this is o1. You can see o1 is giving us these table-columns blocks. And then let's pull in Flash here. Not too bad, but here we have both user-prompt and user-input; we don't need both of these, we need one or the other. So it's interesting to see that here Flash is a little bit behind o1 and the DeepSeek Reasoner. This is one of the most complex things you can do with these language models: ask them to be kind of self-aware, ask them to do meta-level thinking. This is where a lot of the value is right now in generative AI and prompt engineering: having these models improve other prompts and models. We can see we're getting some decent responses out of this. And I want to mention, once again, we can come in here and look at the thought process. Let's expand this height a little bit and just look at the reasoners that give us usable output, so let's look at these two, and let's pull in our 32 billion as well. Say we wanted to improve the meta-prompt, or simplify it, or something like that. We can go through and make improvements by looking at the thoughts of these models side by side, especially when we have all three of them. And you can see here the 32 billion parameter model just goes off the rails quite quickly; it thinks that its job is to actually execute the prompt it was supposed to generate. So, last thing we'll do here, just for fun, let's take the generated prompt from DeepSeek R1's meta-prompt and copy it. This is going to be fun. Let's clear, do a full reset, and paste it in; we're going to run the generated prompt from our meta-prompt. For the user input I'll say: generate a model comparison table given the text blob, header is Model Prices. And then we can specify the columns: I want model, alias, input tokens, output tokens, input cost, output cost. And you can imagine what I'm going to paste in for the text blob. I'm going to go over to DeepSeek's model pricing page and just copy all of this as a blob and paste it in. Then I'm going to go over to OpenAI's model pricing page, copy the o-series models, and paste that in too. I'm also going to grab o1 and o1-mini, copy their input and output pricing, and paste that in as well. Then I'm going to update the columns a little bit: I'll say max input tokens and max output tokens. And let's see how our models perform here. Once again, I'm going to drop 1.5B, keep 8B, and add 32B to see how it does. And just for fun, we've got to show love to the OG.
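(Here is a small sketch of what running the generated prompt amounts to: fill the [[variable]] placeholders with the user input, the pasted pricing text, and the column list, then send the filled prompt to each model. The helper and the values here are illustrative placeholders, not real pricing data.)

```python
# Sketch: fill the generated prompt's dynamic variables before sending it off.
def fill_prompt(prompt: str, variables: dict[str, str]) -> str:
    """Replace each [[name]] placeholder with its value."""
    for name, value in variables.items():
        prompt = prompt.replace(f"[[{name}]]", value)
    return prompt

# Tiny stand-in for the generated prompt from the previous sketch.
generated_prompt = (
    "user-input: [[user-input]]\n"
    "text-blob: [[text-blob]]\n"
    "table-columns: [[table-columns]]"
)

filled = fill_prompt(generated_prompt, {
    "user-input": "Generate a model comparison table given the text blob. Header: Model Prices.",
    "text-blob": "<pricing text copied from the DeepSeek and OpenAI pricing pages>",
    "table-columns": "model, alias, max input tokens, max output tokens, input cost, output cost",
})
# `filled` can now be sent to each model being compared.
```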
I'm going to throw in Anthropic's Claude 3.5 Sonnet (latest). Let's fire this off and see how our models perform. Let's drop them all down so we see this side by side. And we can see, of course, we don't have thoughts for Anthropic. But if we copy out Sonnet's response, open up a new file, paste it in, and format the results, we can see that from Sonnet we get a decent breakdown of the model, alias, max input, max output, input cost per million, and output cost per million. You can hear my M4 working hard on the 8B and the 32B. There's the 32B. Let's copy the 32 billion parameter model's output and see how it performed. This also looks good; the 32 billion parameter model is actually doing some good work for us here, giving us a nice model comparison. And this can go on down the line. We don't need to check o1; we know that'll be great. We can quickly look at DeepSeek and do a quick preview here; DeepSeek is looking great for us. You can see the model aliases are set up properly. We're missing o1 here, that's fine, these are all minor details. But you can see this working and you can see this being useful. So if we just expand everything, we can get that view, and if we want to, we can focus in on the responses alone. This tool is going to be linked in the description if you want to check it out. It's called ThoughtBench, and it's a way to look at models side by side and iterate on prompts across many models in a live benchmarking way. With new generation reasoning models like R1 and Gemini Flash Thinking, we can peer into the thought process of our machines. This enables us to gain insight into how we can improve our prompts to drive improved results across executions of our language models at scale. It's important to call out that the DeepSeek R1 series represents a massive continuation, and really an acceleration, of the trend we've been betting on and predicting on the channel: price is going down, reasoning abilities are going up, and speed is going up. Basically, we're getting massive amounts of compute every single month now, and it's up to us to figure out how to best use it and how to best absorb the capabilities of these models. This is why I built Benchy, by the way. It's up to us to figure out how we can best use this compute and understand these models so that we can deploy them at scale. Like I said in the introduction, if you want to scale your impact in the generative AI age, you need to be scaling your compute. There's going to be a one-to-one linear relationship between your impact, your output, and how much compute you're using. The correlation is going to become causation very, very soon. The models are improving, but as we saw here, even the 14B and 32B are still lacking. I didn't test the 70 billion parameter model here for time's sake, but that model is a step change above the 32B, so I highly recommend you check it out. But the very clear winner here is R1. The price for this is insane. It makes using o1 basically impossible to justify, because you're getting so much more value, about 25x the value, with only a slight loss in capabilities. So let me know in the comment section: how are you liking the R1 series? Are you getting value out of the chain of thought of these models? If you enjoyed this video, you know exactly what to do. Drop the like, comment, and sub. Stay focused and keep building.
