[00:00:00] Speaker 1: Well, let's go ahead and get started. Thank you all for joining us for today's session, "Spend Smarter, Stay Compliant: The Power of Predicted Accuracy." My name is Noah Pearson. I use he/him pronouns, and I'll be moderating today's webinar. And with all of that taken care of, I'd like to welcome today's speaker and fellow 3Player, Eric Ducker. I'll pass it off to Eric to share what 3Play has been working on over the last few months. Eric?
[00:00:27] Speaker 2: Awesome. Thank you, Noah. Thank you for the introduction and the housekeeping as always. For those who have been on webinars with 3Play before, you might remember me from prior installments of this Compliance Series. I lead product marketing here at 3Play, and I'm really focused on understanding problems at the customer level and what solutions we can find to solve them. We started this Countdown to Compliance Series a few months ago, and we kicked off with some definitions around Title II. In the second installment, we had a customer join us to talk about how they're approaching Title II compliance, and I'll remind this audience today of how that customer is using some of these tools. Today is the third installment of the series: "Spend Smarter, Stay Compliant: The Power of Predicted Accuracy." We're excited to talk about some of the things we've been working on and implementing with our customers across the country in support of Title II compliance. So with that, this is me. We already did this part; I use he/him pronouns, by the way. Anyway, excited to dive right in. We're going to start with a reality check. I might make some provocative statements, but they're really meant to confront some of the realities in front of us as public universities, especially in the United States, which is our focus. Then we'll go right into the solution, talking about predicted accuracy and how it can solve the problems you have in front of you today and make them really pain-free, so you can have peace of mind. Then we'll move to the impact: why is this important for your university, and how can you benefit from it? And then, one of the things we're really excited about, we have a sneak peek of the evolution of 3Play for higher education. We'll talk a little bit about that, and we'll also have a much more complete product webinar later this month that goes in depth. So let's get started. The reality check is that we are two months away from this law really becoming real. We've been spending the last 15 months or so talking about it; I've personally had north of 100 conversations with universities about how to accomplish Title II readiness given the video compliance requirements in front of us. We are now two months away, which means after April, we all have to be living with a compliance-first mindset, so that any new content we produce is built into a framework that ensures we're compliant when those videos are published. We also know that more video is being published all the time. Our education system is very video-first and video-welcoming, and video is everywhere, all across campus. Every faculty member is a content creator for their courses. Students are starting to create content as well. The administration is also creating content. Many of the videos being produced fall under the definition of Title II; if you want to dive deep into what defines video covered by Title II, the first installment of this series digs into that definition. And then the reality here, and this is going to be our constant struggle, is that budgets are tightening.
We continue to live in an uncertain world around where the budget is going to come from to support these initiatives. That's driven by the environment we're currently in across the United States, but those budget constraints are real, and we need solutions that don't ask for a bunch more budget but instead restructure the budget we already have as much as possible. The other problem, focusing in on video, is that we're in a headspace where captioning is still a bit of a murky question. How do we solve for this? We know that true, airtight compliance would require human editing and review on every single caption file in order to feel 100% confident. That would give us full coverage, but it's really expensive. As much as 3Play would love to find a magic way to make things super affordable, with the amount of content you have, that's not necessarily a realistic path. The alternative reality we live in is "AI on everything is great, right?" It's captioned; there are words going across the video as it plays back. It's affordable; it's either built into the payment structure of the video platform we're hosting with, or we have direct relationships with the speech models and it's a penny a minute. It's really, really affordable, but there's huge variance in quality. Every vendor out there, including 3Play, talks about 90 to 95% accuracy for ASR engines. We publish a whole State of ASR report every year that confirms that's what's happening on average, but compliance isn't about averages. It's about compliance across every single published asset you have. One gap in one file, one gap in one video, could be the problem you need to protect against. This is what we call the ASR trap. We hear it time and time again: "We have ASR; we have captions solved." And you might have captions solved, but at the policy level, they might not understand that ASR has this huge variance. This is data we pulled from our system across a huge swath of our education customers, a representative sample of the higher education content coming into our system, and this is how it's distributed across the accuracy of the ASR output. You see a high concentration exactly where every model claims its engine's accuracy, 90 to 95%; 40 to 50% of your files will fall into this bucket. The reality is that every school has a similar distribution. You're going to see a bunch of files come into the system that are well below the 90% accuracy threshold, well below what the ASR engine claims. This is not the fault of the ASR engine. It's the reality of recording content in a dynamic physical classroom environment, where microphones might be on or partially muted, professors and faculty are moving about the room, and there are classroom discussions. ASR engines are not there yet for those really dynamic, complex environments. It doesn't matter which ASR engine you plug into your system; you're still going to see this distribution. It might shift a little to the right or left depending on who you pick, but ultimately you're going to find a bulk of your content sitting at a low accuracy rate.
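To make that "compliance isn't about averages" point concrete, here is a toy sketch in Python with entirely made-up numbers; none of this is 3Play data. It shows how a library can carry a roughly 90% average accuracy while a third of its files still sit below a 90% policy threshold:

```python
# Toy illustration (invented numbers, not 3Play data): the average can
# look fine while a large tail of files falls below the policy threshold.
scores = [97, 96, 95, 94, 93, 92, 91, 89, 86, 80, 67]  # predicted accuracy per file

THRESHOLD = 90  # a typical policy baseline

average = sum(scores) / len(scores)
below = [s for s in scores if s < THRESHOLD]

print(f"average accuracy: {average:.1f}%")             # ~89.1%
print(f"files below {THRESHOLD}%: {len(below)} of {len(scores)} "
      f"({len(below) / len(scores):.0%})")             # 4 of 11 (36%)
```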
And policymakers at the universities need this education. So it's on us, those of us who are deep users of accessibility tools like 3Play, to help educate the people making decisions so they understand that this is the real distribution. A bad audio environment is a bad audio environment; no ASR engine can overcome those challenges. Yes, there are very smart researchers trying to make incremental improvements in ASR engine performance, but ultimately we're going to continue to see that performance plateau. We've seen it in the data itself: we've been doing the State of ASR research for over five years now, and we see the plateau happening. So we have to adjust our mindset and shift how we think about ASR and captions. And this data is really valuable for us if we can make it actionable. That's where the solution comes in: predicted accuracy. We've implemented this with the University of Florida across their campus, and Brian Smith, who joined us on a previous webinar for a fireside chat, talks about it this way: "Predicted accuracy isn't just a caption quality tool, it's actually a budget strategy. It ensures we invest human review where it's actually needed, while giving faculty confidence that every video they upload meets a dependable accuracy baseline. That brings tremendous peace of mind to us and our faculty, who are already juggling so much in a resource-constrained environment." Let's unpack that a little: giving faculty confidence. The state of the world today is that the IT department provides the tools that enable academic excellence, and the current state is that we tell faculty, "The captions are 85 to 90% accurate; go correct them as you will." Predicted accuracy allows Brian to say, "Nothing is below 90% accuracy." Faculty can then have peace of mind: "I'm only going to pick certain files to actually correct, because I'm getting at least 90% accuracy on everything, and even more on others." Brian is sharing that information with his faculty, which makes it really compelling for him to provide tools to self-govern and self-administer this policy. Brian isn't enforcing anything. He isn't telling the faculty what to do. He's giving them better tools to make better decisions, decisions aligned with the university's policy of producing as many compliant captioning outputs as possible. So let's talk about how it works. It's a really easy four-step process that's basically invisible to most people. The first step is ingest: connecting your video library to us. We're an agnostic accessibility provider for video. We don't care if it's coming from Panopto, Mediasite, Kaltura, YouTube, you name it. We connect and sit as an agnostic layer that oversees all of the video compliance needs across campus. And this supports both any backlog content you might have and, more importantly in my view, the ongoing content. Starting in late April, you flip a switch, and tomorrow you're compliant going forward on any new content. This doesn't take weeks; it takes a day. You just flip it on and it's ready to go. So we ingest that content, and then we audit it, which really means we run ASR on every video.
And by the way, that doesn't mean you have to use 3Play's ASR. We're running this analysis, and ASR is one of the mechanisms we use to provide it; we'll talk a little more about that as we preview the evolution here. We're going to run AI models on every single video, which provides some baseline coverage and detects areas of risk. Once again, you can use our ASR output or continue to use the one you have. Either way, you get pushed to step three, which is the score. Every video receives what we call a caption accuracy score (that will evolve slightly in a few moments), and that score gives you an opportunity to take action. That's step four: route. Now we can set up automation. If a video scores below a certain threshold, it automatically queues for human remediation, integrated into the 3Play marketplace; you don't have to lift a finger. Alternatively, if you have internal resources, that same score can trigger an email notification saying, "This person at my university is going to run the remediation job." Either way, you get data you don't otherwise have: confidence in what ASR is actually producing for you. So let's think about the payoff, and specifically how the University of Florida has seen it. They've saved thousands of dollars in captioning costs. They've saved time with fast, automated workflows; everything is a two-day turnaround for them. They get the ASR back instantaneously, or they wait up to two days and get the human-remediated file back, without having to do anything. We talk to Brian every couple of months to check in, see how things are going, look at the budget, make sure it's good, and we're good to go. That's it. Because Brian has visibility and control into caption quality, he can see where the risk points are without physically having to look; the system is doing that for him. This lets him focus on other activities, and he has a checkbox next to video compliance across the campus. He gains peace of mind knowing he's compliant within the constraints the policymakers at the university have set. And this was really awesome to see: just in the first 50 days after we turned this on at the beginning of the fall semester, they achieved over a 30% reduction in files requiring human intervention, which led to a real $30,000 in cost savings. That was just the first 50 days; with this turned on, they're going to be well into six figures of cost savings compared to a human-only approach to coverage. The bigger question: we've really just talked about captions, and the title of this webinar was predicted accuracy, not predicted caption accuracy. Captions are still only one piece of compliance, so we do want to talk about audio description, and we'll get to that next. On caption quality, the system is also meant to self-heal: as ASR gets better and as you fix audio issues across campus for cleaner audio capture, caption quality will increase, and you'll see that in your savings automatically with this tool turned on. There's no need to keep re-budgeting or rethinking how much you need to spend.
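To make the score-and-route step concrete, here is a minimal sketch in Python. The names and threshold are hypothetical, not 3Play's actual API; it only shows the shape of the automation described above, where each scored file either publishes as-is or queues for human review:

```python
# A minimal sketch of threshold-based routing. All identifiers here are
# hypothetical illustrations, not 3Play's actual API.
from dataclasses import dataclass

@dataclass
class CaptionScore:
    file_id: str
    predicted_accuracy: float  # 0-100, from the prediction model

MIN_ACCURACY = 90.0  # hypothetical university policy threshold

def route(score: CaptionScore) -> str:
    """Decide what happens to a file once it has been scored."""
    if score.predicted_accuracy >= MIN_ACCURACY:
        return "publish_asr"          # ASR draft meets the policy baseline
    return "queue_human_remediation"  # auto-order review, or notify an internal editor

for s in [CaptionScore("lecture-001", 95.2), CaptionScore("lecture-002", 71.4)]:
    print(s.file_id, "->", route(s))
```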
Those savings are just built into the model; you get to see them year over year, and we want to see them. We want schools to have complete video compliance, as close to 100% as possible given the budget, and over time you'll see that incremental improvement, because we're strongly incentivized to make sure you're seeing those savings as much as possible. All of this lets you better plan future iterations. We talked about this in the last webinar, in the fireside chat with Brian: he has a budget of X today. Now he has a ton of data from the last year, and he can go back to his policymakers and ask, do we want to update our policy? Do we want to increase our minimum threshold? Here's what it would cost, and he can simulate that down to the penny, effectively. That's extremely powerful data that doesn't exist if you don't have a tool that helps you model the cost-benefit tradeoff between ASR and human remediation. So this is a really interesting point: Brian can go build his new business case, work through it with the policymakers, and say, if we want to spend an extra $50,000, this is how our policy gets to evolve. The policymakers may say, "No, thank you, we're really happy with what we're doing today," but he gets to have that conversation based on data as opposed to guesswork. With that in mind, we're excited to talk about what's next and how audio description fits into this framework of predicted accuracy. We're excited to give a sneak peek and introduce Pulse. Pulse is ultimately a way for anyone across the campus to get an immediate pulse on compliance: how can we identify risk anywhere in the video library and immediately surface an opportunity to make a decision about remediation? It starts with an audit. You get proactive tools that let you see and get a pulse on everything happening in your video library, from net-new videos to your backlog, and then make decisions in real time. And this isn't automation for the sake of automation. It's something tactile, something you can control on your own without having to write a support ticket. As for where we are today, I just want to focus on the sneak peek, because we'll have a full webinar and product launch at the end of this month to go more in depth and show a lot more of Pulse. Today it encompasses the functionality we just talked about, predicted caption accuracy. It encompasses a new tool that I'll show in a second, which is more or less a video accessibility command center. This is V1, and we're excited to start folding in more. It gives you on-demand scoring data in the 3Play system along with budget simulation, so we're not just giving you data but also more tools and more control over your budget. Even if you're an existing 3Play customer using us for student accommodations, you can still simulate how this might look across campus. And then, of course, we're really excited to introduce audio description capabilities. One of the biggest questions we get asked all the time is: does this video need AD?
We've built a model, and we're testing it with customers now, that answers exactly that: does this video need AD? This is a really powerful tool, because it addresses that age-old question; there's some margin for interpretation in whether a video explicitly requires audio description or not. So this will provide more peace of mind about whether a given video needs AD. And then, as you may or may not be aware, we have our own AI-scripted audio description service, and just like predicted caption accuracy, we also have a scoring mechanism, a Pulse score, for the audio description output. So once again we can make that decision: do we need to send this for further review and refinement from a human, or are we happy with the risk of publishing an AI-scripted audio description directly? To give you an idea of what this command center looks like, this is a very quick snapshot of the basic function. You'll have access within 3Play to the sample size, that is, how many files are being aggregated and their total duration, and you'll see your exact distribution of accuracy. Below this, not shown on this screen, is a simulator. You can play around with the minimum thresholds at which files get upgraded to human remediation, and all of that will be available in the account system, self-managed; you can even change it and update your workflow automatically within 3Play. That's going to be really intuitive for any campus-wide administrator of video content who needs to implement the right university policy, one that matches not just the budget but also the compliance standard that's been set. I've alluded to a couple of things, but this is the tip of the iceberg, and I've been lucky: I've been thinking about this problem for a year now. This data is really impactful for the student experience at the end of the day. It's not just a budget planning tool. The budget piece is awesome, but it's the tip of the iceberg. How can we use this data to diagnose real issues in the real world? I talked about this at the very beginning: a bad audio environment is a bad audio environment. If you start seeing patterns where the same sets of videos continuously have audio issues, you can go talk to your facilities team and say, "Hey, can we figure out how to solve this audio issue in this classroom? We're seeing a continuous drop-off in caption quality there." And after you've made fixes, you may see in real time that the overall accuracy for that classroom improves. So once again, this is a tool with really compelling features through the accessibility lens, but it's also about the student learning experience, making it easy to proactively identify opportunities to improve the physical environment, given how much of the video content on campus is recorded lectures. So this is just one of many things. There's one other concept I alluded to: today, faculty are typically instructed to resolve all of the caption issues in their videos.
"You're responsible for that." "Well, I produce a lot of video; it would be really helpful if I had a priority list." This tool could do that. I'm not saying it's set up to do it for you today, but if we start working together and that becomes a use case, the same data we're producing for that dashboard can be segmented down and delivered to faculty, so they understand where to prioritize remediation efforts if you aren't using 3Play directly for remediation. So it's not an all-or-nothing tool. It's really about giving you more visibility into genuinely risky files or caption outputs that you otherwise kind of hand-wave over right now. To wrap up before we open for Q&A: predicted accuracy is really the foundation, and we encourage people to think about how Pulse can change the way we approach video accessibility, getting away from old workflows and from the mindset of AI-only versus human-only, "I have AI, it's free; humans are super expensive." Pulse gives you optionality in a way that removes the decision fatigue, so you don't have to choose between budget or efficiency and compliance. Ultimately, this helps you understand where you stand today on video Title II compliance and plan not just for the immediate need but for the future, as you continue to refine your budgets year over year. And we're excited not just to deliver this service and walk away, but to really be a partner with you. I can promise you I've worked countless hours with the University of Florida and other institutions across the country to really understand their problems. That's what 3Play is all about: understanding your problems and giving you back a solution that works for you. That's our mentality, and we encourage everyone to think about who they want at their side to support accessibility efforts at this level of scale. I'm biased, but I think 3Play is in a really good position to do that. So I encourage you to stay in touch with me; I'm on LinkedIn, and I tend to post interesting nuggets about the data we're seeing from our customers, aggregated and anonymized. We have our Pulse launch webinar coming up at the end of the month, presented by my colleague. And if you want access to Pulse, we're ready to turn it on for you today; you can visit us at 3playmedia.com/Pulse to request it, or use the QR code. At this point, I'm going to turn it back to Noah and take a sip of water.
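Before the Q&A, a quick sketch of the simulator idea mentioned above, the "simulate that down to the penny" exercise. The per-minute rate and library data are invented for illustration and have nothing to do with 3Play's pricing; it just shows the threshold-versus-cost math:

```python
# Illustrative threshold simulator (invented rates and data, not 3Play
# pricing): what would each policy threshold cost in human review?
def simulate_cost(library, threshold, human_rate_per_min=1.50):
    """Estimated spend if every file predicted below `threshold` accuracy
    is sent for human remediation."""
    minutes = sum(mins for acc, mins in library if acc < threshold)
    return minutes * human_rate_per_min

# (predicted_accuracy_percent, duration_minutes) per file
library = [(96, 50), (91, 45), (88, 60), (83, 30), (74, 90), (58, 40)]

for threshold in (85, 90, 95):
    cost = simulate_cost(library, threshold)
    print(f"minimum threshold {threshold}% -> est. spend ${cost:,.2f}")
```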
[00:27:50] Speaker 1: Thank you, Eric. Yes, let's get into some attendee questions here. First up, will Pulse work for backlog content or is it only for new content?
[00:28:01] Speaker 2: Absolutely, it works for both. We're working on a backlog project right now, and backlog is really compelling because there are additional scoring mechanisms we can bring in, like video views, to help you segment the right backlog to focus on. So it's not, "Give us your Kaltura library and we'll just spend a lot of money." Let's make a focused, intentional plan: if we're going to remediate backlog, let's make sure it's relevant content you'll actually use going forward; otherwise, let's archive a lot of that content if you're truly not leveraging it. And I think the power of this on net-new content going forward is that once it's set up, you don't have to do anything. That's the magic of it. As I mentioned, Florida isn't spending their days in this tool. They're saying, "Sweet, I've checked in, I've looked at it, and I'm good to go." It's really the peace of mind of sitting back and letting the computers do the work for us.
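Here is a hypothetical sketch of that "focused backlog" idea: remediate low-scoring files that are still being watched, and archive what isn't. The field names, view-count source, and cutoffs are all invented for the example, not 3Play's schema:

```python
# Hypothetical backlog triage: spend remediation budget only on content
# that is still in use. Invented data and field names.
backlog = [
    {"id": "intro-bio-2019",  "views_last_year": 4200, "predicted_accuracy": 78},
    {"id": "guest-talk-2017", "views_last_year": 3,    "predicted_accuracy": 55},
    {"id": "chem-lab-2021",   "views_last_year": 950,  "predicted_accuracy": 84},
]

THRESHOLD = 90   # accuracy policy threshold
MIN_VIEWS = 50   # below this, archive instead of spending budget

remediate = sorted(
    (f for f in backlog
     if f["predicted_accuracy"] < THRESHOLD and f["views_last_year"] >= MIN_VIEWS),
    key=lambda f: f["views_last_year"],
    reverse=True,  # most-watched files first
)
archive = [f for f in backlog if f["views_last_year"] < MIN_VIEWS]

print("remediate first:", [f["id"] for f in remediate])
print("archive or skip:", [f["id"] for f in archive])
```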
[00:29:16] Speaker 1: Awesome. Next attendee question here, does the speaker's accent factor into the accuracy score?
[00:29:24] Speaker 2: It can, absolutely. This is really a question about ASR engines and how they're trained. Every engine, including the base speech engine that we use (and you can learn more about which one that is), is built on diverse training sets, but access to training data across different accents is not equal. So yes, there will be opportunities for improvement based on accents, and that's going to be a pretty universal problem with any engine you choose rather than something specific to one vendor. I'd expect more obscure accents to run into those problems in certain cases, whereas the speech engines are already pretty well trained on some of the bigger English dialects, for example.
[00:30:25] Speaker 1: Perfect. Next question here, will the captions have speaker labels?
[00:30:33] Speaker 2: Are we referring to the AI-produced draft or the human-remediated ones? The AI ones? We can talk about both. Speaker labels are a complex part of a transcript. One of the things that's configurable with our AI output is denoting speaker changes, and that's something we can turn on for any customer who requests it. We can, of course, go one step further and label them Speaker 1, 2, 3, 4; that has varying levels of success depending on how complex the content is, for example if there's overlapping speech. Diarization is the model term for separating speakers, and speaker diarization is not the most sophisticated technology yet. We see this specifically as a big problem in dubbing workflows: if you get AI-only dubs that have never been reviewed by a human, you might get Noah's voice matched to my voice for three, ten, or fifteen seconds, depending on how the model produced the tags. And then there's a third level, which is naming the speakers. At the scale we're talking about, entire-campus video content, you're likely not set up to proactively label all those speakers, so I wouldn't recommend that for AI. If you want speaker information in your AI output, I really recommend denoting speaker changes: it's the most consistent and reliable aid for accessibility, and it won't confuse your deaf or hard-of-hearing user at the end of the day. I do caution that an imperfect speaker label can be worse than a simple, reliable one, even though perfection is great where we can actually reach it. And then yes, for captions remediated by humans, that's where we can introduce, based on your configuration, the speaker labels that are necessary and unique to your setup.
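To illustrate what "denoting speaker changes" can look like in practice, here is a small sketch that emits WebVTT cues using the common ">>" speaker-change convention from caption style guides. The segment data and function are invented for the example; this is not 3Play's output format:

```python
# Sketch: mark speaker *changes* with ">>" (a common captioning
# convention) instead of guessing at names. Invented segment data.
segments = [
    {"start": "00:00:01.000", "end": "00:00:04.000", "speaker": 0,
     "text": "Let's get into some attendee questions."},
    {"start": "00:00:04.500", "end": "00:00:08.000", "speaker": 1,
     "text": "Happy to take them."},
]

def to_vtt(segments):
    lines, last_speaker = ["WEBVTT", ""], None
    for seg in segments:
        # Prefix the cue only when the speaker differs from the last cue.
        prefix = ">> " if seg["speaker"] != last_speaker else ""
        lines += [f"{seg['start']} --> {seg['end']}", prefix + seg["text"], ""]
        last_speaker = seg["speaker"]
    return "\n".join(lines)

print(to_vtt(segments))
```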
[00:33:11] Speaker 1: Great, thank you for breaking that down for us. Next question here: is Pulse available through the Kaltura platform?
[00:33:22] Speaker 2: Once again, I'm going to answer this with a couple of nuances. There are two ways in which 3Play is integrated with the Kaltura platform. There's the Kaltura Reach service, which most people should be familiar with at this point; it's their controlled integration layer with services like ours. Pulse is available through that service, and we partner very closely with their accounts team, so we can work together to procure those services through Reach. Alternatively, you can procure these services directly through 3Play, which might ultimately be a slightly different experience; in that case we integrate with Kaltura's public APIs, allowing us to process and return files into the Kaltura system.
[00:34:21] Speaker 1: Great, thank you. Next attendee question here. Is there a capability to have one video file that allows captions and audio descriptions to be enabled, or do they have to be two separate video files?
[00:34:36] Speaker 2: Our system is set up to be... sorry, could you say that again?
[00:34:41] Speaker 1: Is there capability to have one video file that allows captions and audio descriptions to be enabled, or do they have to be two separate video files?
[00:34:50] Speaker 2: Let me explain audio description at a slightly higher level. When you process a video into our system, we just need the video file, and we can produce all the different variants of the service from it: captions, an audio description output, whatever you need. The second part of audio description is where and how you're publishing it. Audio description can be downloaded as a separate video file, or you can download just the audio portion and publish it as an audio description track to many video players. Take Kaltura as an example. For standard audio description, which matches the original source length of the video, the output can be produced and published as an additional audio track in the Kaltura video player through the Kaltura management system. Extended audio description, which allows for pauses and can run past the length of the source video, is delivered back to Kaltura as a text file, and they perform the audio voiceover through their player technology. So there are a number of ways to publish and make these available. Even YouTube now supports standard audio description as an additional audio track. And Noah, I know you have a demo video on that specifically on our website.
[00:36:24] Speaker 1: Absolutely, and everyone should check that out if they're trying to add AD to YouTube. Awesome, thanks for that explanation. Next question, what if my institution uses different captioning vendors?
[00:36:37] Speaker 2: Yeah, this is what Florida really did. At the end of the day, they were using multiple captioning vendors, and they found a way to restructure their spend and consolidate it all with 3Play. The reasons were, one, we've proven ourselves on the human revision side to be the most reliable vendor in the space; and two, on the AI side, even if our AI was the best (which is a separate topic), the problem they had was the variance problem. So not only did they think our AI was better than the tools they were using, they also appreciated that we were giving them that visibility, so they could have peace of mind. If you're using other vendors for captions, the Pulse solution is only integrated into our human marketplace for caption remediation; otherwise you have to manage remediation yourself, off the 3Play platform. But our AI captions are not a requirement to use Pulse. You can use your own captions; we'll run ASR in the background as part of the analysis, but that's invisible to you, and we can still produce a score that holds within the relative accuracy level of whatever ASR engine you're using. So it's not a requirement. Of course, there are efficiencies to using ours, but it's not required in order to use the service.
[00:38:16] Speaker 1: Excellent. All right, moving on. This is a little bit of a longer question. Just a heads up. Is this something that can be segmented across an org? We have various 3Play projects for different units on campus. Some may be interested, some may not. Is this something that needs to be enabled and permitted at the org level or at the project level or both?
[00:38:37] Speaker 2: Yeah, so the first segment I want to call out, and I didn't make this explicit earlier: if you're talking about student accommodations, we do not recommend AI captions for that workflow. We do not recommend skipping human review; we really do recommend making sure those files are fully accessible and have all the features of a truly compliant caption file. So let's exclude that from this answer. Now, say you have 10 different departments. Pulse is enabled at the project level in 3Play. For those not familiar, 3Play has an account system, and within it you can have projects, which are basically subaccounts with different rules, different users, and different media objects. So Pulse is at the project level, and if you only want Pulse for a portion of your campus, that's totally fine. Alternatively, as I said, you can have Pulse at the entire campus level and have it work for everyone. Ultimately, you can partition it off as you need to. That might disperse the benefit a little, but it's available at the project level.
[00:39:53] Speaker 1: Perfect. Thank you. Next question here. How reliable is your quality prediction?
[00:39:59] Speaker 2: We've been using this tool internally for 15 years. A little inside baseball: this has been a really important tool for us in managing our marketplace and creating a sticky, dynamic contractor environment, so we have built our business trusting that this data is accurate. Yes, there's model risk in the output. There might be one, two, three files where you say, "Whoa, you said it was 85% accurate, and it was actually 98% accurate." That will happen; that's just the way this works. But by and large, for the vast majority of the content that flows into the system, we're really confident in that score. One caveat: one of the things we do for Florida is also set a lower cutoff at a 30% accuracy score. If we see a file scoring at, say, 25%, we skip human editing there too, because we predict there's actually something wrong with that file and we shouldn't spend time on it. Oftentimes it's something like a six-hour file that's mostly blank, and if you spend human time on that, we've all wasted time and money. So we cover both sides of the story: we don't want to process and remediate files that should never have been reviewed in the first place, and this system helps catch a lot of those files as well.
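Extending the earlier routing sketch with that second cutoff: a hedged illustration of the two-sided policy described here, where very low scores signal a probably broken file rather than a candidate for human editing. The threshold values and names are illustrative only, not 3Play's actual configuration:

```python
# Two-sided routing sketch: a publish threshold plus a "probably broken"
# floor. Illustrative values, not 3Play's actual configuration.
MIN_PUBLISH = 90.0  # at or above: ASR draft is good enough under the policy
FLOOR = 30.0        # below: the source is likely bad audio or a blank file

def route(predicted_accuracy: float) -> str:
    if predicted_accuracy >= MIN_PUBLISH:
        return "publish_asr"
    if predicted_accuracy < FLOOR:
        return "flag_for_inspection"      # don't burn human hours on it
    return "queue_human_remediation"

for score in (96.0, 72.5, 24.0):
    print(score, "->", route(score))
```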
[00:41:38] Speaker 1: All right, excellent. Well, we're about at time, so I'll get us wrapped up here. I know there were several questions we didn't get to, but our team is available to answer any additional questions you have and to help you get started with Pulse; you can sign up at the link in the chat, use the QR code on screen, or just go to 3playmedia.com/Pulse. Thank you so much, Eric, for giving us an awesome sneak peek into Pulse and what's coming next. And big thanks to our audience for joining and asking some great questions; we appreciate those reactions, too. Love to see that people are engaged. So thanks again, and I hope everyone has a great rest of your day. Talk to you later.