Speaker 1: In this Bubble tutorial video, I'm going to show you step one of how you can use the AssemblyAI API to extract the different speakers, and the text that they say, from audio. We're going to use the API to upload an audio file and get a transcript back, but know who said what in the transcript. Before I launch into that, did you know that we've got videos you can't find on YouTube, exclusively available to our members at PlanetNoCode.com?

This picks up on some earlier videos where I was using the AssemblyAI API, so if you need a recap on each of the individual steps, you can go back and check out those videos. But I am going to explain what's going on here: I'm in the Bubble API connector, and I've added an API called AssemblyAI. I've entered my API key into the authorization field (private key in header), and I'm making a POST request to the AssemblyAI API; this is the endpoint here. It's an action, so I can run it in a workflow, and I'm sending it as JSON. Within the body I've got one parameter that I've made dynamic: I have to provide AssemblyAI with a public, openly accessible audio or video file for them to fetch and turn into a transcript. So I've uploaded an audio file to the Bubble app storage, and here is the direct link to it.

The only step I've really done differently from my earlier AssemblyAI videos is that I've added one extra value into the body: speaker_labels set to true. If I initialize this call (and this will serve as a good recap for how the AssemblyAI API works), I get back an ID. Check out my other videos for how you can get this all running automatically with a webhook; right now I'm just doing it in the API connector to demonstrate all of the steps. So I'm going to copy the ID, because this is the unique identifier for the transcript.
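The POST step configured in the Bubble API connector above can be sketched in plain Python. This is a minimal sketch, not the video's own code: the API key and audio URL are placeholders, and the helper name `build_transcript_request` is mine. The endpoint, the `authorization` header, and the `audio_url`/`speaker_labels` body fields follow the AssemblyAI transcript API as described in the video.

```python
import json
import urllib.request

API_KEY = "YOUR_ASSEMBLYAI_API_KEY"  # placeholder for the private key in the header
TRANSCRIPT_ENDPOINT = "https://api.assemblyai.com/v2/transcript"

def build_transcript_request(audio_url: str) -> urllib.request.Request:
    """Mirror the API connector call: a JSON body containing the public
    audio URL, plus the speaker_labels flag this video adds."""
    body = json.dumps({
        "audio_url": audio_url,   # must be publicly fetchable, e.g. a Bubble file upload
        "speaker_labels": True,   # ask AssemblyAI to label who said what
    }).encode("utf-8")
    return urllib.request.Request(
        TRANSCRIPT_ENDPOINT,
        data=body,
        headers={"authorization": API_KEY, "content-type": "application/json"},
        method="POST",
    )

req = build_transcript_request("https://example.com/my-audio.mp3")
# To actually submit it (network call, needs a real key):
#   with urllib.request.urlopen(req) as resp:
#       transcript_id = json.load(resp)["id"]  # the transcript's unique identifier
```

The response JSON carries an `id` field, which is the identifier the video copies for the next step.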
Once AssemblyAI has finished processing the transcript, either you provide them with a webhook, which I've demonstrated in other videos, or you go and look up the transcript using this ID. For this video, I'm just going to look it up. So I'm going to go down to my get processed transcript by ID call (this is all covered in the AssemblyAI documentation, but I've laid it out here in the Bubble API connector), paste the ID in there, and then initialize the call. And this is where I get back my transcript.

You can see that my transcript starts with "Hello, my name is Bob, I'm speaker one," and then someone else says, "Hello, my name is Emma, I'm speaker two." If I scroll down to utterances, I can see that it begins to group them: in utterance number one I have "Hello, my name is Bob, I'm speaker one." Bubble only shows you one example, but if I go to raw data (it's going to be a long way down... where is it?), we then have utterance two, "Hello, my name is Emma, I'm speaker two."

So that's part one: I've shown how to get back the response that contains the JSON data identifying the different speakers. Stay tuned for part two, where I'm going to show you how to start processing this through the Bubble database, and how to pull out and display the different parts of your conversation.
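The lookup step above (fetch the transcript by its ID, then read the utterances) can be sketched the same way. Again this is an illustrative sketch, not the video's code: the helper names `fetch_transcript` and `speaker_lines` are mine, and the API key is a placeholder. The `utterances` array with `speaker` and `text` fields is what the video inspects in Bubble's raw data view.

```python
import json
import urllib.request

API_KEY = "YOUR_ASSEMBLYAI_API_KEY"  # placeholder

def fetch_transcript(transcript_id: str) -> dict:
    """GET the processed transcript by ID, as in the API connector call."""
    req = urllib.request.Request(
        f"https://api.assemblyai.com/v2/transcript/{transcript_id}",
        headers={"authorization": API_KEY},
    )
    with urllib.request.urlopen(req) as resp:  # network call, needs a real key
        return json.load(resp)

def speaker_lines(transcript: dict) -> list[str]:
    """Flatten the utterances array into 'Speaker X: text' lines,
    grouping who said what, as shown in the raw data."""
    return [
        f"Speaker {u['speaker']}: {u['text']}"
        for u in transcript.get("utterances") or []
    ]

# A sample response shaped like the one in the video:
sample = {"utterances": [
    {"speaker": "A", "text": "Hello, my name is Bob, I'm speaker one."},
    {"speaker": "B", "text": "Hello, my name is Emma, I'm speaker two."},
]}
for line in speaker_lines(sample):
    print(line)
# Speaker A: Hello, my name is Bob, I'm speaker one.
# Speaker B: Hello, my name is Emma, I'm speaker two.
```

Part two of the video takes this same per-utterance structure and stores it in the Bubble database for display.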