Integrate DeepGram API with Bubble: A Quick Guide
Learn to connect DeepGram's fast transcription API with Bubble. Follow step-by-step instructions to enhance your app's functionality seamlessly.
Deepgram Speech to Text MADE EASY with Bubble API Connector
Added on 01/29/2025

Speaker 1: Here's how to use DeepGram's lightning-fast transcription API with your Bubble app. In this Bubble tutorial video I'm going to show you how we can take the DeepGram API and link it in with the Bubble API Connector. The first thing I recommend you do whenever you sign up to a new service or third party that offers an API is look to see if they've got a playground, or at the very least look at their developer documentation. Thankfully DeepGram has got a playground, and it makes it crystal clear how we integrate it with Bubble, but don't worry, I'm going to take you through every step.

If you're watching this video, it's because you've got an amazing idea and you want to launch it with Bubble, or you're just toying around with Bubble. If you want to accelerate that process and get to launch even quicker, then we recommend checking out our website. The link is down in the description, because we've got hundreds of Bubble tutorial videos and hours of content, including courses and tutorials, ready for you to accelerate your app to launch.

But right now we're just focusing on DeepGram, so we need to take this cURL call and add it into the Bubble API Connector. Notice a few things: first of all, the type is POST; the authorization header is "Authorization: Token your-API-key"; and the content type is application/json, although that is now a default in Bubble, so you don't actually have to include it anymore. The data is just a URL. Here it points to a demo WAV file that they provide, but you would of course swap in your own publicly accessible file, and I believe DeepGram supports around 40 different types of audio and video files that you can feed into the transcription. Then notice that all of the parameters, including, for example, the model here, go in as URL parameters on the endpoint, so you could make these dynamic if you wanted to.

Now, what does this look like in Bubble? Let's swap over. I'm in Bubble, in Plugins, in the API Connector; if you've not got that installed, you can just install it from the plugin directory. I've added in a new API and named it DeepGram, and I've set "Private key in header". Out of all of those options, Authorization is the default key name, and in the value I've got "Token", a space, and my API key. Then I've scrolled down and added in the call. I've named it "get transcript", changed it to an action, and set it to JSON and POST, because remember, we checked in DeepGram to see what type of API call it is, and it's a POST call. Then I've just pasted in the full URL, and I'm going to leave the parameters as defaults: I want punctuation, I want smart formatting, and I want to use the Nova-2 model.

Perfect. Now I just paste in the body, and this is the bit that you'd make dynamic. You'd make it dynamic like this: I'll remove the URL and type <audio_url>, and that adds a dynamic field. That field doesn't need to be private, because when we talk about "private" in the API Connector, we're saying which details I want shared between me, the app creator, and my Bubble app users. For example, I don't want to share my API key, so that is private, but the audio file that my users have uploaded is not private, because they're using it and they can access it. And if you now want to test the call, as we will do, you just paste the URL back in here.
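To make the shape of that call concrete, here's a minimal JavaScript sketch of the same request the API Connector ends up making, based on DeepGram's pre-recorded audio endpoint. The API key and the audio URL are placeholders you'd swap for your own; in Bubble itself you don't write this code, since the API Connector builds the same request from the fields described above.

```javascript
// Minimal sketch of the DeepGram transcription call, assuming Node 18+
// (for the global fetch) and an API key in the DEEPGRAM_API_KEY env var.

// Parameters like the model, punctuation, and smart formatting go in
// as URL parameters on the endpoint, just like in the playground's cURL call.
const endpoint =
  "https://api.deepgram.com/v1/listen?model=nova-2&punctuate=true&smart_format=true";

async function getTranscript(audioUrl) {
  const response = await fetch(endpoint, {
    method: "POST", // DeepGram's transcription endpoint is a POST call
    headers: {
      // "Token", a space, then your API key -- this part stays private
      Authorization: "Token " + process.env.DEEPGRAM_API_KEY,
      "Content-Type": "application/json", // Bubble now adds this by default
    },
    // The body is just the publicly accessible file to transcribe
    body: JSON.stringify({ url: audioUrl }),
  });
  const data = await response.json();
  // The transcript text sits in the first alternative of the first channel
  return data.results.channels[0].alternatives[0].transcript;
}

// Usage: pass any publicly accessible audio or video file URL
getTranscript("https://example.com/your-audio.wav").then(console.log);
```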
The reason I've removed the speech marks from the body section, and left them in the value section, is that when we use this in a workflow, we're going to format the URL as JSON-safe. That adds the speech marks back in, but it also accounts for any pesky punctuation or special characters that may or may not be in the URL, so it's better just to JSON-safe it, to be safe.

But right now, let's just test out this integration. Here is our WAV file, and we're going to click Reinitialize. I've already tested this, but I just want you to notice how quick it is. I've been using AssemblyAI for a long time, and I still think they're amazing, but to give you an idea of the speed: DeepGram took five seconds to transcribe a 14-minute video for one of our previous videos. That's just how quick they are, and I think the quality is right up there with the best transcription APIs I've seen. So let's reinitialize it and see how well it works.

I'm going to click there, and that's it; there are no webhooks required. You can use webhooks, and I'd imagine if you've got a particularly large file, like a two-hour meeting, maybe you'd need one, but I believe Bubble's API Connector allows at least 60 seconds before it times out, so you could just have the API Connector waiting. You could of course still put the call in a backend workflow, so that your user isn't watching a loading bar crawl across the screen. But here we go: here is the transcript, and then you get all of this extra data.

Oh, one key difference I forgot to mention in my comparison video between AssemblyAI and DeepGram is that DeepGram returns the timestamps in seconds, whereas AssemblyAI uses milliseconds, so I actually had to use a backend workflow to convert milliseconds into a minutes:seconds format. You would still need to do something like that here, because if you had a particularly long video, you'd end up with more than 60 seconds, and if you wanted to structure that nicely (think about timestamps in YouTube video descriptions, that's what I have in mind) you would need to convert the number. I found the best way to do that was to build my own backend workflow that took a number and returned it in an hours:minutes:seconds format. So it is possible: I was just using some JavaScript and returning the value from an API call. I don't know if that's the most efficient way, but it worked for me in an internal project I was building.

So there you go, that is how you can use the DeepGram transcription API. It's lightning fast, I'm going to be using it in the next project I'm building, and I'll probably show a bit more of that project soon, but if you've got any questions, please leave a comment down below.
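For the timestamp conversion described above, here's a sketch of the kind of helper I mean. The function name is illustrative, and in Bubble you'd run the equivalent from a backend workflow (for example via a server-side JavaScript step) rather than as standalone code.

```javascript
// Convert DeepGram's raw seconds into a YouTube-style timestamp.
// A sketch of the backend-workflow conversion described above.
function secondsToTimestamp(totalSeconds) {
  const hours = Math.floor(totalSeconds / 3600);
  const minutes = Math.floor((totalSeconds % 3600) / 60);
  const seconds = Math.floor(totalSeconds % 60);
  const pad = (n) => String(n).padStart(2, "0");
  // Only include the hours component for content over an hour long
  return hours > 0
    ? hours + ":" + pad(minutes) + ":" + pad(seconds)
    : minutes + ":" + pad(seconds);
}

secondsToTimestamp(75);   // "1:15"
secondsToTimestamp(3725); // "1:02:05"
```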
