Epiphan Live Episode 150: Introducing LiveScrypt for Real-Time Transcription
Join George and Greg as they unveil LiveScrypt, Epiphan's new product for real-time transcription, perfect for live events and diverse audiences.

Speaker 1: Welcome to Live at Epiphan. This is episode 150. As always, I am George Herbert, and I'm joined by a new face on the show today, Greg. Yes, I'm Greg Quirk. I'm a product manager here at Epiphan. Yeah, and so for episode 150, we decided that for such a milestone, which we're super excited about, we would use the opportunity to launch and talk about our new product. Some of you may have seen the press stuff earlier in the week. On Tuesday, we officially announced it. Today, we wanted to go in-depth. We wanted to have Greg on the show to really describe this and talk about it and, of course, really go into the details. So, let's start off the top again: what is LiveScrypt?

Speaker 2: Sure. So, LiveScrypt is a hardware box with a software component where we take direct audio, send it out to the cloud, and get back a transcription that we can display on in-room monitors and also on mobile devices. So, you're in a live event and someone is up presenting, and the HDMI output goes to a large monitor in the room so that people in the audience can follow along with what's going on. And if you're sitting in the back of the room and can't actually see the monitor, you can pull out your smartphone and follow along with the transcription there as well.

Speaker 1: Right. And as we just saw in that little video clip, there's also a screen on the front where you can see that transcription, as well as some basic controls.

Speaker 2: Yes. Actually, pretty much all the controls. So, one of the things we did with this device is we made it as standalone as possible. Okay. So, you have to go to AV Studio and pair it, just like you would with a Pearl, for example. But once you've paired it, all your controls are directly in here. So, you've got a start button to start the transcription, and we will do that later. And then you've also got a touch screen that gets you into pretty much all of the setup that you need.

Speaker 1: So, it doesn't need a full web UI or anything like that? It's so basic and straightforward that you can do everything from the touch screen.

Speaker 2: Designed to be as simple as possible. Perfect. So, how exactly does this work under the hood? So, as I mentioned, you've got a bunch of different audio connections. You've got XLR and TRS. You've got HDMI and SDI. We only deal with the audio side, so we'll strip off the video. So, we're not doing any video capture? Nothing video, only audio. And we're not actually even capturing the audio. We are sending the audio out, and we'll get back a text file of the transcription in real time. Okay.

Speaker 1: So, we're not recording any audio, just essentially streaming that audio, which means pretty low bandwidth needs. Very low bandwidth. So, that's good. But you do have to have that connection. It's not offline at this point.

Speaker 2: Correct. It's not offline. And there are pros and cons to doing it online versus offline. When it's online, you get access to more features. It's going to do a better job because it has access to more computing power. And it will be more regularly updated, I would assume. And more regularly updated. Whereas, if you're doing it offline, from a security standpoint, it's good because it stays on that device only. But it's going to be a lot more limited in terms of what it's going to be capable of doing.
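
To make the send-audio, get-text-back loop concrete, here is a minimal sketch in Python. Later in the episode Greg confirms the engine is Google's speech AI, so the sketch uses Google's Cloud Speech streaming API; everything else (the 16 kHz mono PCM format, the hypothetical mic_chunks audio source) is an illustrative assumption, not Epiphan's actual firmware.

```python
# A sketch of real-time cloud transcription, assuming Google's
# Cloud Speech streaming API (pip install google-cloud-speech).
# Illustrative only -- not LiveScrypt's actual implementation.
from google.cloud import speech

def mic_chunks():
    """Hypothetical audio source: yields raw 16-bit, 16 kHz mono PCM chunks.
    Note the low bandwidth: 16,000 samples/s x 2 bytes = 256 kbit/s
    uncompressed, a tiny fraction of what streaming video would need."""
    raise NotImplementedError  # wire your XLR/TRS/HDMI/SDI capture in here

def transcribe_live():
    client = speech.SpeechClient()
    streaming_config = speech.StreamingRecognitionConfig(
        config=speech.RecognitionConfig(
            encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
            sample_rate_hertz=16000,
            language_code="en-US",
        ),
        interim_results=True,  # partial text shows while a sentence is still being spoken
    )
    requests = (
        speech.StreamingRecognizeRequest(audio_content=chunk)
        for chunk in mic_chunks()
    )
    # Audio goes up, text comes back; nothing is recorded on the box.
    for response in client.streaming_recognize(config=streaming_config, requests=requests):
        for result in response.results:
            line = result.alternatives[0].transcript
            print(("FINAL: " if result.is_final else "...    ") + line)
```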

Speaker 1: Right. Okay. So, if we look at this, what sort of industry or market are we looking at? Who would need a LiveScrypt? Who needs this instant transcription?

Speaker 2: Well, there are two different ways to look at it. There's one in terms of the people who will be able to take advantage of the solution from a usability standpoint, and then there's another in terms of the customer base. Right. So, from a usability standpoint, there are three main groups. The first one would be people with hearing challenges. About 5% of the world's population has hearing challenges. And in some cases, that's going to go up as people age. And in the younger demographic, people are using their AirPods more. You what? Yeah. And maybe not keeping the volume at the levels they should. So, hearing challenges could be increasing over time. If they're at a live event, it's a little bit harder for them to follow along with what's going on because they can't hear what's actually happening. Okay. Whereas, if you have LiveScrypt, you're getting automatic real-time transcription. You'll be able to read the transcription of what someone's presenting and follow along. Okay. Perfect.

Speaker 1: And like you said, it could be a big monitor at the front, or it could be on your own device. So, you could theoretically even leave the room, go take care of some business, and continue to read the transcription.

Speaker 2: Well, and it could work for overflow rooms as well. Right. So, maybe they would have a monitor in an overflow room, maybe not. But the people in there could still have access over their phones to watch what's going on. Right. The second user group comes from increased diversity. More people are coming from all over the place. It's easier to travel. It's cheaper to travel. And companies are hiring more diverse teams, and those employees don't always speak the local language. Right. So, in that case, oftentimes people can read faster than they can hear and understand. Okay. So, they're seeing a transcription of what's going on, and they're able to follow along and make sure they're understanding the message that's being presented. Okay. Yeah, that makes sense. Okay. And then the third group is people who get distracted. You know, you're sitting at a live event and the guy beside you is annoying, and he's talking about what he did last night or where he's going to go for supper. And instead of hearing what's being presented, you're getting distracted and not understanding what's going on. So, when that happens, you can just look over at the transcription and go, oh, well, that's what that person just said, and re-engage with the presenter again.
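
Epiphan has not published how the mobile viewing described above actually delivers captions; one common pattern that fits the description, phones in the room simply opening a page and watching lines arrive, is server-sent events. A stdlib-only sketch of that pattern, with every name hypothetical:

```python
# Hypothetical caption feed over server-sent events (SSE): phones open a
# page and new lines stream in. This is NOT LiveScrypt's published
# mechanism, just an illustration of the pattern using only the stdlib.
import queue
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

captions = queue.Queue()  # the transcription loop put()s finished lines here

class CaptionFeed(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/event-stream")
        self.send_header("Cache-Control", "no-cache")
        self.end_headers()
        while True:
            line = captions.get()  # block until the next caption arrives
            self.wfile.write(f"data: {line}\n\n".encode("utf-8"))
            self.wfile.flush()

if __name__ == "__main__":
    # Single-client demo; a real feed would fan lines out to every
    # subscriber. In the phone's browser:
    #   new EventSource("/").onmessage = e => showCaption(e.data);
    captions.put("Welcome to Live at Epiphan.")
    ThreadingHTTPServer(("", 8080), CaptionFeed).serve_forever()
```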

Speaker 1: Kind of get a little bit of a rewind there; maybe even one sentence back might be enough to catch you up.

Speaker 2: Yeah, it very well could be. And, you know, looking at some of the numbers, people can usually read about 200 to 250 words a minute, and people usually talk around 120-ish words a minute. Right. So, even if you do fall behind, you can read the transcription and catch up really quickly. You know, I play a lot of video games. One of the first things I do when I start a video game is go into the settings and turn on subtitles. Right. Because then I can understand what's going on, and I can read it faster than they can speak it, so I can skip through to get to the good part of the game. Right. That makes sense. Those are the three main ones. There is a fourth, minor one, which is that sometimes there's a legal requirement for transcription, and this is a way to meet it in a really cost-effective approach.

Speaker 1: Perfect. So, we do have one plugged in here on the desk in front of us, so we're going to take a look at that. Maybe if we take a look at our top-down view, we will explore LiveScrypt a little bit. For some of the people in the chat, you're obviously going to see something that looks somewhat familiar in terms of the physical device and form factor. So, maybe give us a little tour of what we can see on this top-down view.

Speaker 2: Sure. So, here we can see the touchscreen interface. This is the screen that you would have while you're running the unit. You've got your audio input levels on the bottom, just so you can make sure that things are connected properly and you're getting audio signal. There's the headphone jack as well, so you can control the headphone output. You've got settings. And then, if you want to actually start a transcription... You ready for this?

Speaker 1: Yeah. Let's turn this on. So, we have this set up for my microphone running into it, I believe, so it should primarily only transcribe what I'm saying. But of course, it might occasionally pick up Greg as well, since our microphones are so close together. But as you can see, as I'm speaking, we're getting that live transcription right there on the front screen. At the same time, we have this set up to send a signal to our Pearl for our stream and to the monitor way in the back there, so that we can see the transcription there as well. So, this gives some interesting ways to leverage the data that's coming out of this, whether it's a big front-of-room monitor like the one in the back, which isn't that big, whether it's right there for the producer, or whether it's feeding a visual signal into an encoder, like we are right now with the Pearl. Correct. Got that right. We were struggling earlier where it didn't particularly like my pronunciation of some words, but that's probably my lazy English more than anything.

Speaker 2: Well, it is designed to be able to handle different accents so that it can do a decent job of transcribing. When it comes to the transcription, a lot of it comes down to the audio quality. Right.

Speaker 1: Okay. So, let's have some fun here. We're going to try to challenge it. First, what we're going to do is, I need some material. So, I'm going to read from this brand new magazine, the Evolution Magazine. This is something that anyone who attends ISE next week is going to be able to get their hands on, potentially, while supplies last. This is a magazine that we just printed. There's a whole bunch of stuff in here. There's a great article in the back, got to tell you that. We've kind of highlighted a few possible things we're going to read here. But I have actually, not lying, not read this magazine yet, except for the article in the back. That's the only thing I've read. So, in order to challenge this and challenge me, we're going to pick a random passage from the Epiphan Bible. So, pick a number, any number, and let's see what happens here. We have magic number six. Six. That's pretty far back there. All right. What is six? Story time. All right. So, this is from an article talking about building corporate video production studios. The section that was selected was "additional equipment": There are many other ways you can enhance your corporate video studio. This can include adding a teleprompter, a confidence monitor, or a clapper board. Think about the furniture you might need. Various filming setups might require different kinds. For example, you might want two comfortable chairs for an interview, whereas for a product demo, you might want a sturdy counter-height table and two variable bar-height stools. The more you use your studio, the more you will understand your needs. However, nailing down good audio, video, and lighting should be at the top of your list of priorities. So, hopefully... It tripped on a couple of words, but it came up all right. It looked pretty good. Yeah. And I'll take some of the blame. But that's generally the idea. If I was presenting, obviously, we could be getting that live transcription of what I was saying. It was shown back there as well. And we wanted to do that to show you that this isn't a recording. Obviously, it can't be, because I'm throwing random things out there and really showing what's happening. So, one thing I wanted to look at was some of the questions coming in, because we've had a lot of chatter here in the chat. I'm just going to take a moment to dive through some of this. Let's see. Okay. I'm just trying to see what was actually going on in chat since I haven't been staring at it. Okay. Well, you look at the chat and I'll talk about a couple of other things. Sure. So, one of the first ones here, actually a good one: does it do multi-language?

Speaker 2: It does. So, in the options, you can go into transcription and you have a drop-down list of multiple languages. Right now, we're supporting 10 different languages or variants. Things like English US and English UK are technically two different versions because people spell things differently. And here in Canada, we use both. Correct. We are going to be adding more, but we selected some of the major languages to start out with, and then we'll add more after we've got everything figured out.
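
Those drop-down variants map naturally onto BCP-47 language tags, the same kind of code the streaming sketch earlier takes in its language_code field. The exact launch list is Epiphan's to publish; the four variants discussed on the show would look like this:

```python
# The language/variant drop-down, expressed as BCP-47 tags. Only the four
# variants mentioned in the episode are shown; the full launch list of
# "10 languages or variants" is Epiphan's, not reproduced here.
LANGUAGE_VARIANTS = {
    "English (US)":    "en-US",  # "color"
    "English (UK)":    "en-GB",  # "colour"
    "Spanish (Spain)": "es-ES",
    "Spanish (US)":    "es-US",
}

def language_code_for(variant: str) -> str:
    """Resolve a drop-down label to the tag the recognizer config needs."""
    return LANGUAGE_VARIANTS[variant]

print(language_code_for("English (UK)"))  # en-GB
```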

Speaker 1: So, in that list, I see one that'll answer a question that someone in chat, Alex, asked, which was: does it do Spanish? And it does. It's in there. Two versions of Spanish: Spain and United States. Similar to English with UK and American, there are differences, so we're making sure that we have those. Let's see. Jeff was asking, what's the latency from when the speaker speaks to the transcription being displayed? Well, I think we probably saw that. It's pretty short. It is. Yeah. I don't think we've timed it precisely.

Speaker 2: It does depend a little bit on your internet connection, because the content has to get sent to the cloud and come back. But really, you're looking at a second, not six minutes. From a post-production standpoint, they say it will take you about five to 15 minutes to transcribe a minute of audio content. Right. This isn't anywhere near that, right? It's coming out almost in real time.

Speaker 1: Exactly. Yeah. And that's definitely a big thing. A couple of people were asking about Spanish, so we answered that. Marco was saying, do it in German, see if it makes any sense. I'm sure it would if I spoke German, but since I don't, it definitely won't make any sense.

Speaker 2: We did take a unit to another conference called OEB. Yes. That was in Germany. That was in Germany and we were showing it to some of our partners. It wasn't an official product at that point, but we were showing it to partners and we did flip it into German and they grabbed a microphone and started going and they said, this is awesome. This is doing exactly what it needs to do. It's picking everything up. So it was going really well.

Speaker 1: Yeah. And again, just to bring up ISE again, we will have this live in our booth at ISE next week for people to experiment with. So people who want to ask questions or want to experiment with other languages, stop by the booth and we're happy to help you through that and guide that. So, let's talk about some of the frequently asked questions that have come up over the development of this product. Obviously, a bunch of them have come up in chat already, but let's hammer through the first couple. My first one, which obviously came up, and I think many people ask this, is: why use this over having a human do the transcription?

Speaker 2: Yeah. And there are a few things. There are really three things that you're going to look at when you're trying to choose a solution: the speed, the accuracy, and the cost. There are other things as well, but those are the three main ones that you're going to look at first before anything else. So, when we talk about speed, you can see how quickly this is able to... Way faster. You know, if you're going to hire a human to do it, you're going to need someone like a stenographer. So they've gone to school for a couple of years, they know how to use a stenography machine, which is... Shorthand, essentially. Yeah, essentially.

Speaker 1: Which means it means nothing to anyone but them when they first type it.

Speaker 2: Correct. And they've been highly trained on how to do that. They can go at the pace of what someone is going to be speaking.

Speaker 1: Right. So about 200 plus words?

Speaker 2: Depends, yeah. Usually the certification they're looking for is 180-plus words a minute. Right. You know, a person is going to speak about 120 words a minute. I'm sure we've been going faster than that.

Speaker 1: You and I both talk pretty quickly.

Speaker 2: Yeah. The other option is you could hire a person with a keyboard and have them just type as quickly as they can. But a person is only going to type about 40 words a minute. Someone who types all the time might go faster; let's even say 80 words a minute. But when we're talking 100 or 150 words a minute, they just can't keep up. Right. So speed is the first thing you have to look at, because if it can't keep up, well, there's no point. Right. The second one is accuracy. And accuracy is a little hard to measure because there are a lot of different factors that come into play. We've done a bunch of testing on it, so we have an idea of how accurate it is, and it's in the 90-ish percent range, a little bit higher than 90. But the other thing about the system is that it only ever spells words correctly, because it doesn't know the wrong spellings for things. Right. And it'll also know words that a normal person wouldn't know. One of my favorite ones to use right now is the word ophthalmology. I wouldn't be able to type that if I tried, but LiveScrypt just knows what that word is and is able to transcribe it properly. Right. And then the third one is going to be your cost. You know, hiring a stenographer who can keep up and is accurate enough, they're going to charge a premium for that type of service. Absolutely. Whereas this allows you to have that transcription at a very reasonable price.
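
The speed argument is easy to check with the figures quoted in the conversation; a quick back-of-envelope calculation, all numbers rounded from the episode:

```python
# Back-of-envelope math behind the speed claims, using the figures
# quoted on the show (rounded).
speak_wpm  = 120  # typical presenter
read_wpm   = 225  # midpoint of the 200-250 reading range quoted
steno_wpm  = 180  # common stenographer certification floor
typist_wpm = 40   # average typist; a fast one might hit 80

# A distracted listener who misses 30 seconds of speech is ~60 words
# behind, and catches up by reading while the presenter keeps talking:
missed_words  = speak_wpm * 0.5
catch_up_secs = missed_words / (read_wpm - speak_wpm) * 60
print(f"~{catch_up_secs:.0f} s of reading to catch back up")  # ~34 s

# And why an average typist can't do live transcription at all:
print("typist keeps up?", typist_wpm >= speak_wpm)  # False
```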

Speaker 1: Right. So that definitely is one of the questions that a couple people have mentioned is about cost. Obviously, there's a hardware component to this, but there's also a cloud-based system. I assume that cloud-based system has some sort of subscription.

Speaker 2: It's not a subscription. It's a per-use model. Okay. So there is a per-hour charge. And so there is a low cost for the device itself, and then there is a low cost per-hour charge for using the transcription.

Speaker 1: Which it would be with a human as well. Exactly. But it wouldn't be low cost.

Speaker 2: Yeah. And if you look at it, you know, I was estimating about 17 hours to break even between the system and hiring a person. So if you're using it for more than 17 hours for the life of the product, you're doing really good.

Speaker 1: And if you're doing this professionally, that's one event.

Speaker 2: Yes. Yeah.
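
Greg's 17-hour figure implies a simple break-even formula. Since no pricing was announced on the episode, every dollar amount below is a made-up placeholder chosen only to show how the arithmetic works:

```python
# Break-even between buying the device plus per-hour cloud transcription
# versus hiring a stenographer. All prices are HYPOTHETICAL placeholders;
# Epiphan did not announce pricing on this episode.
device_cost         = 1500.0  # hypothetical one-time hardware price
service_per_hour    =   10.0  # hypothetical cloud transcription rate
stenographer_per_hr =  100.0  # hypothetical human rate (a premium service)

break_even_hours = device_cost / (stenographer_per_hr - service_per_hour)
print(f"break-even after ~{break_even_hours:.1f} hours of use")  # ~16.7 here
```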

Speaker 1: So another big one, though, that I have here is: well, why can't I just build this myself? I mean, I was playing around the other day with some of the things that have started to come out with Google Assistant and Alexa and all of that. So why couldn't I just build my own transcription service?

Speaker 2: And you could, right? But there's a lot of stuff that you have to look into. We're actually working on a paper right now where we looked at AI models for transcription. And that's just a small piece of what you need in a whole solution. So people could take a computer and figure out, well, what kind of dongles do I need to get the audio in? And what kind of interface do I need? And then, you know, connect it to the API. And you can do it. Right. But it's a lot easier to just buy LiveScrypt, and because we have priced it affordably, that really helps. And the other thing is, if you built it yourself and it's not working, well, who do you call? Right. Whereas if LiveScrypt isn't working for whatever reason: hey, George.

Speaker 1: And George's team. Luckily, the simplicity of this is that if it's not working, it's probably just your internet. Yeah. Because everything else is so simple and easy to get going that there really isn't anything else to go wrong besides your internet connection, which is the other beauty of it. Like you say, with a DIY build, who knows what's going on? Correct. So that definitely helps. There are also a lot of mobile apps out there. Especially recently, there's been a lot of chatter about things like the transcription features on the Google Pixel 4 phone. I've got one. Which you have. Yep. So obviously, through the development of this, we've been comparing that pretty closely. So why wouldn't I just use my Pixel 4?

Speaker 2: And a big part of it is from a person that is putting on an event. You don't want to have to rely on your attendees to be able to, you know, do they have the phone? Do they have the app?

Speaker 1: Well, how do they even get the audio, realistically?

Speaker 2: Well, and that's the thing, right? So I'm sitting in the room and I've got my phone and I've got the app and I'm holding it up like this so that I can get the audio from the speakers in the room. Maybe. And you're talking about what you had for lunch and that's what's getting transcribed. Exactly. It just doesn't give you that good experience. It's really great for a reporter.

Speaker 1: Right.

Speaker 2: With a Google Pixel phone, I'm doing an interview with someone, I need a transcription of it: awesome. The old dictation machine thing.

Speaker 1: Just hold it out.

Speaker 2: Yeah. From an event standpoint, that's not going to be the solution. It's not very practical. No.

Speaker 1: And again, if, you know, here we could take XLR from the house audio system that's already being used, you're not going to get that XLR audio into your Pixel 4. Correct. Not very easily, anyways.

Speaker 2: And that's exactly how we have it wired in here.

Speaker 1: Not everyone in the room can get that same feed. So let's come back to cost for a moment. What does it cost? We kind of mentioned it already. There's the hardware cost, and I assume that the hardware is a one-time cost like most Epiphan products.

Speaker 2: Correct. It's a one-time cost for the hardware. And then you go into AV Studio, you put in your billing information, and then you're connected for the per-hour transcription service.

Speaker 1: Okay.

Speaker 2: And that's billed on a monthly basis? Billed on a monthly basis.

Speaker 1: Charged on an hourly basis.

Speaker 2: So you'll receive a bill monthly, and it will be an itemized bill. So you'll actually see how many times you've used the system on a given day and for how long. And then that's what your bill is going to be. Cool.

Speaker 1: And availability, we announced it on Tuesday. We did. We're diving into it today. We're going to be showing it at ISE in Amsterdam next week, as we talked about. So what is the on-the-street availability?

Speaker 2: So we're looking at Q2 for general availability. Okay. And that'll be through our channel partners? Through our channel partners.

Speaker 1: Perfect. Okay. So I hope that covers things. Some people were obviously asking when it was available. Let's turn to the chat. Once again, there's a lot of chat. I'm really happy about this, guys. I'm pretty stoked about how much chatter there is here, which means you guys are just as excited as we are. So we talked about Spanish. Let's see. Craig asked a question. Can you export the file to a standard transcription format file for use in other applications?

Speaker 2: So you will have a text file output as well as an SRT file. Okay.
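
SubRip (SRT) is the standard subtitle format: numbered blocks of a start and end timestamp plus the text. A minimal sketch of a writer that turns timed transcript segments into SRT; the segment structure here is an assumption, not LiveScrypt's actual export code:

```python
# Minimal SRT writer for (start_sec, end_sec, text) segments.
# Illustrates the format only; not LiveScrypt's actual export code.
def to_srt(segments):
    def stamp(sec: float) -> str:
        ms = round(sec * 1000)
        h, ms = divmod(ms, 3_600_000)
        m, ms = divmod(ms, 60_000)
        s, ms = divmod(ms, 1_000)
        return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"  # HH:MM:SS,mmm
    blocks = (
        f"{i}\n{stamp(start)} --> {stamp(end)}\n{text}\n"
        for i, (start, end, text) in enumerate(segments, start=1)
    )
    return "\n".join(blocks)

print(to_srt([(0.0, 2.5, "Welcome to Live at Epiphan."),
              (2.5, 5.0, "This is episode 150.")]))
```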

Speaker 1: So I think someone asked further down about SRT stuff, so that answers that as well: standard text and an SRT file. David was asking, does it perform offline? We did answer that earlier. No, at this time, it is online, tied to a cloud service, in order to get that live transcription. Correct. Yeah. Let's see. Linda was asking, how badly is it affected by accents? We talked about that a little bit already. The AI that it's based on is actually very good with most accents. There are always going to be times where an accent trips things up.

Speaker 2: And that's going to trip up a human too.

Speaker 1: Exactly. Exactly. Depending on what you're used to. I have a lot of colleagues here who, if we get a call from some of our American friends with thicker Southern accents, just hand it off to me because they cannot understand them. But I adapt to that accent very well. Or Scottish accents, for that matter. There are lots of accents around the world that can be a challenge for humans. And honestly, from what I've seen so far, this does a better job than most humans.

Speaker 2: And we do have people from different countries who work at Epiphan. We put the system in front of them and said, OK, just start talking, because we wanted to see how well it worked. And generally, it works quite well. Right. Again, it's hard to get an exact gauge, because it's hard to measure accuracy across different accents when people all talk differently.

Speaker 1: One other question here. Nolan was asking, could you feed it into a switcher and assign it to a keyer live, to basically have a lower third?

Speaker 2: A hundred percent. And we've actually done this in the lab where we'll take the HDMI output. We'll feed it into a Pearl Mini. And then we'll just crop it down. And we've got a transcription along the bottom of the screen. Yeah.

Speaker 1: So there are definitely possibilities there. As you may have seen on the monitor in the back, which is fed from the HDMI output, it's just white text on a black background, so there are lots of possibilities to leverage that in another piece. Let's see. What else is in here? Clarence is saying, you better put this device on the market ASAP, because I see a huge opportunity to make this device work in any language. We're working on it. Like we said, we're looking to make sure this is on the street and in our customers' hands in Q2, and we are working as quickly as we can. If not sooner, ideally. But that's the current target. And considering we're almost halfway through February already, by the time we get back from ISE, February is half done. Q2 is knocking on the door already. So let's see. I'm just trying to see which one to grab there. Is this using Epiphan's own AI engine, or Google's, or Amazon's?

Speaker 2: Yeah. So we are not building our own AI engine. We looked at that at one point because we wanted to see how much work it would take to do that, and it's a huge amount of work. Not only creating it initially, but continually updating it and continually making it better would be a huge amount of work. So we did look at Google. We looked at AWS. We looked at IBM, as well as some others. And we're using Google's AI for this. Based on the testing that we did, it performed as well as, if not better than, everything else. And they've just done a really good job with the documentation for implementing the API.

Speaker 1: Well, I think the other thing, and this leads to the advantages of being a cloud-based system using that AI engine, is that Google is working really hard on that engine. They're constantly updating it. They're tying it in with their translation services down the road. So there are going to be a lot of possibilities based around this engine. They also have a context-based method of helping fine-tune things that I think they're going to make available down the road as well, so that if you want it to be better at understanding healthcare versus architecture, there are ways to fine-tune the context of the AI a little bit better. Those are things that are going to make it more accurate down the road. Yeah.

Speaker 2: And there are things that we're doing. We're not just taking the API and dropping it in a box. We're putting some Epiphan magic around it, adding some unique features and things like that to make it a little different and a little bit better as well.

Speaker 1: So one of the questions was, can it be hand-taught, basically building your own dictionary, I suppose?

Speaker 2: Yeah. So that is something that we will be adding. It's not in the version right now. By the time Q2 comes around, it may be in there, because that's something that I do want. And this is being able to put custom terms into it. If it's not a term that would be in a dictionary, it's going to struggle with what that term is. LiveScrypt is spelt with a Y in the "scrypt" part; it's not going to transcribe that properly because it's not an actual word. But once we do add that functionality in, you can build your own custom libraries and add them in.
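
This custom-dictionary idea maps directly onto the "speech contexts" (phrase hints) that Google's speech API already exposes, biasing the recognizer toward out-of-dictionary terms. A sketch of what that could look like, with no claim that LiveScrypt will surface it exactly this way:

```python
# Phrase hints via Google's speech_contexts field: bias the recognizer
# toward terms no dictionary contains. Whether LiveScrypt exposes this
# exact mechanism is an assumption; the API field itself is real.
from google.cloud import speech

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    language_code="en-US",
    speech_contexts=[
        # So "LiveScrypt" stops coming out as "LiveScript".
        speech.SpeechContext(phrases=["LiveScrypt", "Epiphan", "Pearl Mini"]),
    ],
)
```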

Speaker 1: Right. All right. So there's a couple of other things in here. Stephen was asking, how about privacy? That's a good question.

Speaker 2: Yeah. So there are a couple of things about privacy. One is that the content is getting streamed to the cloud, out to Google. So it's not... It's not on-premise. It's not on-premise. It's not your own cloud. But it's getting aggregated with everything else flowing into Google's network. It's noise. Yeah.

Speaker 1: Essentially, it is. It's noise in the ether.

Speaker 2: Yeah. And so it's going out into the cloud. It is going over a secured connection.

Speaker 1: And again, that being said, the AV Studio side of it, our side of the cloud portal, is obviously secured with your own account details. Correct. So it follows proper password practices, and you can actually just use your Google or Facebook account with SSO, basically, to use AV Studio. That's going to give you security from our perspective. And then, of course, files are files, right? I mean, when you have files, they're files. It depends on what you want to do with them. So I just wanted to take a look here. We'll try to address a couple more of your questions.

Speaker 2: So, does it do translation? Today, it does not. That is definitely one of the things on the roadmap that we are looking at in the near term.

Speaker 1: Yeah. That is a goal. Yes. It is an absolute goal, because we recognize that that is one of the biggest opportunities and potentials of a system like this. Correct. Nolan asked, anyone from Trinidad tested it? No, not yet. We don't have anyone from Trinidad and Tobago who works at Epiphan, unfortunately. But that would be an interesting test. That is one of the more entertaining accents out there, for sure. So that could be fun. Again, anyone who is going to drop by our booth at ISE, bring all your accents. Sure. We want you to throw everything at it, because the more we can experiment, the more we can understand it.

Speaker 2: The other thing is, if you're not going to be at ISE, you can reach out and we can do online demos as well. We've been doing a bunch of those. People have been able to see the system in a little more detail than we've gone into here, pump audio into it, and watch it work.

Speaker 1: Yeah. So a couple of people have still asked about price. Is that something you want to put a dollar figure on today, or are we waiting until it's on the street? I know there were still some details we were ironing out there.

Speaker 2: I would say to contact your friendly neighbourhood partner or the Epiphan sales team, and they can direct you to a person; that way we can make sure that you get the pricing information.

Speaker 1: Okay. So sorry to disappoint, but we're not going to give you those dollar figures today. I know a lot of people have been asking about that. But again, like our Pearl products, we will be selling LiveScrypt through our regional channel partners all over the world. They will have pricing once we have that to them, which they don't even have yet, so they'll get that soon. And there were still a couple of details about the hourly pricing that I know we were ironing out as well. So sorry to disappoint there, but it will be available very shortly. Again, Noah said, sign me up for testing. The best we can do there is to arrange a live demo with you. There is a contact page on the website for LiveScrypt on epiphan.com, epiphan.com slash product slash livescrypt. You'll see lots of info there, and there is a contact page. So any questions you have, or if you want to arrange a personal demo with Greg himself, we can work that out. As I've mentioned probably half a dozen times already, on Saturday, Greg and I are both leaving for ISE in Amsterdam next week, which we are super excited for, maybe prepared for. But we are very excited to be there and to welcome everyone to our booth, which is going to be in Hall 11. Booth number... it disappeared from my screen. I know exactly how to walk there, but I couldn't tell you what the number is. But if you happen to be in the area, maybe you don't have access to ISE yet, we do have the invitation code there on the screen. If you want to get your badge, get your access, that is free. It's on us. Can't fly you there.

Speaker 2: Get your Epiphan Evolution Magazine when you're there.

Speaker 1: Epiphan Evolution Magazine. There are some actually really great articles in here. I'm going to have to take the time to sit down and read some of them. You've got a plane ride. But I am very partial to the one in the back. Anyway, I want to thank everyone for joining us. This is really exciting for us. We're very interested to see how our customers and our partners leverage this new technology going forward, and to hear from you on how well it goes, how well it works, and all the cool and interesting ways it's going to be deployed. We see that all the time with the Pearls, and this is a great companion to that. So it's going to be amazing going forward. And of course, we'll keep updating it and keep adding things as we can. Thanks, everyone, for joining us. Next week... Next week... Oh, it's on my sheet. Jeez. Next week's episode, which will not be hosted by myself or Greg since we won't be here, is about deepfakes. Maybe we will be. Maybe we'll be deepfaked into it. Next week's episode is about deepfakes: can we even trust video anymore? Are Greg and I even here right now? I don't know. And of course, follow us on all the social things: like, follow, subscribe on Instagram, Facebook, LinkedIn, YouTube, Twitch, all the places. We're there. And join us every week, Thursday at 3 o'clock Eastern, for Live at Epiphan, where we bring you more interesting things all the time. All the time. Thanks so much.

Speaker 2: Thanks. Bye.
