Speaker 1: Hello. In this video, we'll look at Google's brand new Chirp AI model for speech-to-text. The new model is categorically different from older models and is accompanied by a new V2 API. As we record this video, the model is just three days old, but it can be used by anyone with a Google Cloud account. Let's create a new transcription task together and walk through some related concepts. First, let's create a bucket for the input and output files. All the defaults should be fine, except for the bucket name, which must be unique. Offscreen, I'll load a WAV file into our bucket. We'll use a pretty long audio file just to showcase that we can do long transcriptions with this new service. So we've got about seven minutes of audio here, and this is a WAV file with a 48 kilohertz sampling rate. This all looks good, and in the transcription options, this is where we get to use the new Speech-to-Text V2 API, which features the Chirp model. So let's select English (US) for the language and then that new Chirp model, which is in preview. We don't yet have a recognizer set up for this model, so let's open up a new tab and look at how to set that up.
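Cloud Storage bucket names must be globally unique, which is why the video changes the default name. A minimal sketch of one common way to generate a likely-unique candidate name (the prefix here is just an illustrative placeholder, not anything from the video):

```python
import re
import uuid

def make_bucket_name(prefix: str) -> str:
    """Append a random hex suffix so the name is very likely unique.

    Cloud Storage bucket names must be 3-63 characters and use only
    lowercase letters, digits, and hyphens (among other rules), so we
    validate the result before returning it.
    """
    name = f"{prefix}-{uuid.uuid4().hex[:8]}"
    if not re.fullmatch(r"[a-z0-9][a-z0-9-]{1,61}[a-z0-9]", name):
        raise ValueError(f"invalid bucket name: {name}")
    return name

print(make_bucket_name("chirp-demo-input"))
```

The actual bucket creation and WAV upload then happen in the console (as in the video) or with `gcloud storage` commands.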
Speaker 1: So these recognizers are basically a specification for how we want to run transcriptions, and they're new in the version 2 API. Now, importantly, the Chirp model is only available in certain regions right now, so if I try to use the global location, we're actually going to get an error. Let's switch this over to a regional us-central1 Chirp model. We have a lot of settings that we can play with, for example punctuation, word confidence, and profanity filters, but let's leave everything at the defaults for now. So now we have our getting-started recognizer. To pick up that new change, we'll create a new transcription, which will now see that recognizer. Note that some of the settings that we had in our recognizer can be overridden in our advanced settings.
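In the V2 API, a recognizer is addressed by a full resource name, and regional models like Chirp are served from a location-prefixed endpoint rather than the global one, which is why the global location errors out above. A hedged sketch of how those strings fit together (the project and recognizer IDs are placeholders, not values from the video):

```python
def recognizer_path(project_id: str, location: str, recognizer_id: str) -> str:
    # V2 resource-name pattern: projects/{p}/locations/{l}/recognizers/{r}
    return f"projects/{project_id}/locations/{location}/recognizers/{recognizer_id}"

def speech_endpoint(location: str) -> str:
    # Regional locations use a location-prefixed API endpoint;
    # the "global" location uses the default endpoint.
    if location == "global":
        return "speech.googleapis.com"
    return f"{location}-speech.googleapis.com"

print(recognizer_path("my-project", "us-central1", "getting-started"))
print(speech_endpoint("us-central1"))
```

With a client library, you would point the client at `speech_endpoint("us-central1")` and pass the recognizer path on each request; per-request config can then override the recognizer's defaults, as the advanced settings do in the console.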
Speaker 1: Let's go ahead and submit this transcription job, and it'll take just a couple of minutes to complete. Quite quickly, we get the transcription results. Our entire transcription took about 22 seconds for seven minutes of audio. This is an impressive transcription speed, and we can inspect the results down here. Note that we didn't turn on punctuation, so we'll be getting these text blocks that don't have punctuation in them. Overall, the transcription accuracy is looking quite good, and even technical terminology like C++, Angular, and PaLM API are well transcribed. From here, we can download the transcript in a variety of formats. With that, we hope this quick intro to the new Chirp model was helpful, and we'll keep an eye on the comments section for any questions. Thank you for watching.
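The speed claim above is easy to quantify: seven minutes of audio processed in about 22 seconds works out to roughly a 19x real-time factor. A quick check:

```python
def realtime_factor(audio_seconds: float, processing_seconds: float) -> float:
    # Seconds of audio transcribed per second of processing time.
    return audio_seconds / processing_seconds

rtf = realtime_factor(7 * 60, 22)  # 420 s of audio, 22 s of processing
print(f"{rtf:.1f}x real time")     # about 19.1x real time
```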