3Play's Pulse: Video Accessibility Monitoring for Title II (Full Transcript)

3Play presents Pulse for auditing and remediating captions and audio description at scale, with risk-based scoring and AI-versus-human routing to optimize budget.

[00:00:00] Speaker 1: Thank you all for joining today's session, "Meet Pulse: The New Standard for Video Accessibility Monitoring." My name is Noah Pearson. I use he/him pronouns, and I will be moderating today's webinar. And with all that taken care of, I'd like to welcome today's speakers, Chris Antunes and Lily Bond. I will pass it off to Chris and Lily to share what 3Play has been working on. Take it away.

[00:00:21] Speaker 2: Awesome. Thanks so much, Noah. Just to introduce myself quickly, I'm the chief growth officer at 3Play Media. I've been with the company for almost 12 years (it'll be 12 years in March) and have spent a lot of that time focused on the higher ed space. And I've had the privilege over the last year and a half to spend a lot of time with institutions talking through Title II compliance and really thinking about the best solution in the market, which is what we're hopefully presenting to you today. And I will let Chris introduce himself as well.

[00:00:55] Speaker 3: So I'm Chris Antunes. I am a co-CEO and a co-founder of 3Play, and I've been here since day one, since it was an idea on a whiteboard at MIT. I consider myself a recovering engineer, so I'm really excited to go through the details of this new product. Our very first customer while I was at MIT was a group at MIT called OpenCourseWare, so that's 18 years serving higher education. From the very beginning, literally from day one, we've been focused on solutions for higher ed. And those solutions, first in the captioning space and later in the audio description space, were centered around using AI responsibly: AI with tools we develop and people at scale. And just to ground everyone, I'm sure many of you who are our customers at 3Play are familiar with what we do, but over those 18 years we have captioned more than 30 million videos and more than 35 million minutes of video in higher ed alone. And in higher ed, we support over a thousand customers today. So we are fluent in this space and fluent in the workflows that make these processes scalable. Largely, the universities we've served have been focused on accessibility with an accommodation focus at the center of it. And with the Title II regulations that Lily will go through in a second looming, many of our partners and customers today are facing really new challenges at an entirely different scale. So Lily and I and our team at 3Play have had the privilege of talking through these challenges and brainstorming solutions with these thousand-plus universities, hundreds of conversations over the last few months. And we think we've arrived at a really compelling one that we'll go through today.

[00:03:06] Speaker 2: So just to ground everyone in the same context for the conversation: I'm assuming if you're joining, you're very familiar with Title II, but we want to cover it specific to the space of video accessibility. ADA Title II is expanding coverage to make web content accessible, which includes video content, by April 24th of this year. And for video, that includes captions, audio descriptions, and transcripts for all prerecorded video content. This also includes not just ongoing content but backlog content as well; if it is public on the web, it is covered. One of the challenges we've seen as schools approach Title II compliance is that everyone is looking for auditing solutions and web accessibility checking solutions. Those tools are fantastic for web compliance, but they have a blind spot, which is video. So we're really hoping that we can support you on the video side with a similar concept: being able to audit and remediate your content at scale. When I say scale, I mean that many of the universities we're talking to are facing the need to make millions of minutes of content accessible, and often millions of minutes per year. This is every single lecture happening across the university, going into their lecture capture system, that suddenly needs compliance at scale. And manual review of this content is simply impossible at that level. There's also really no visibility, as I said, for auditing video content. A lot of people will use auto captions on content and hope that makes it accessible. But the problem is that even if automatic speech recognition, or ASR, is on average 90% accurate, that does not apply to individual video files. That's an average across all of your video content. So you may have a video that's 95% accurate, but you may have a video that's 60% accurate.
And being able to identify which one is which and where the risk lies in your organization is a huge gap. And it's completely unknown to people today. And of course, the kind of final part of the equation is cost. With traditional solutions for captioning and audio description for millions of minutes of content, this is very quickly a multimillion dollar task. And no school has the budget to achieve that. So we've really been thinking about how do schools navigate a significant increase in content where they need visibility and they need to put limited budget towards high risk areas instead of a comprehensive single approach to all content. So we are hoping to kind of take the guesswork out of that and make this a simple equation and not a really complicated one as we move forward.
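The distinction between an average ASR accuracy and per-file accuracy can be sketched in a few lines. The scores below are made-up illustrations, not real measurements:

```python
# Hypothetical per-file ASR accuracy scores for six videos.
accuracies = [0.95, 0.93, 0.60, 0.97, 0.96, 0.98]

# The headline "about 90% accurate on average" figure...
average = sum(accuracies) / len(accuracies)

# ...hides the individual files that fall well below it.
below_90 = [a for a in accuracies if a < 0.90]

print(f"average accuracy: {average:.0%}")        # roughly 90%
print(f"files below 90%: {len(below_90)} of {len(accuracies)}")
```

The point is that the average alone would clear a 90% bar even while one file sits at 60%; per-file scoring is what surfaces that outlier.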

[00:06:40] Speaker 3: And Lily, just to jump in for a second here, just to put a fine point on this: we have a university using Pulse now who, probably in a similar situation to many of you on the call, was captioning and describing on an ad hoc accommodation basis. We were talking about three or four hundred hours, let's say, of content a year that needed to be captioned and described. And in the Title II context, that number shot up to about 40,000 hours a year.

[00:07:13] Speaker 1: Right.

[00:07:14] Speaker 3: And so on one of the first discovery calls with that customer, they had done their homework and gone to our website, looked at the prices for our accommodation class of captioning and audio description, and found themselves looking at something like a 20 million dollar a year number. They obviously didn't have that type of budget, but really still wanted to be responsible and thoughtful about how they approach Title II. And we came to a really good solution that leverages Pulse, and leverages this visibility and control that Lily described, to fit inside their budget.

[00:07:53] Speaker 2: So the solution that we have developed for higher education, for campus-wide compliance where schools are facing exactly the problem that Chris just talked through, is Pulse. This solution will audit your entire video library, anywhere you can ingest video from. We will audit that content both for backlog and on an ongoing basis, every time new video is added. And every video is going to get scored automatically. That means that for captioning, we will assign an accuracy score: how accurate are we predicting the ASR is? Is it 60% accurate or is it 95% accurate? For audio description, we're going to tell you whether we think the video needs audio description at all, and then, if it does, how comprehensive the AI audio description solution is compared to what a human would generate. And you can set your thresholds in accordance with your own university's comfort level for risk management. We will then remediate and route content in a smart way, so that for anything above your threshold, the AI solution gets posted back, and anything below the threshold will be routed for human remediation, allowing you to get a comprehensive compliance solution for your university with no manual oversight. And the goal here is that you are not spending on everything equally. You are only spending on risky content. So on the screen is a visual where some videos with accuracy rates under 90% are highlighted in pink. We're calling those your risky videos. Other videos, with scores over 90%, are highlighted in yellow, indicating that those are low risk videos. You should put your money towards, say, the 68% accurate ASR file and the 65% accurate ASR file. And ultimately, in this distribution, you would see roughly 55% cost savings. So this is a real budget management solution for you as well.
And it helps you identify, across a video library (this one is 20 videos), which of those 20 are performing poorly and which are performing really well; without this kind of comprehensive auditing solution, how would you know? So I'll let Chris talk a little bit about the technology behind the product and how Pulse works.
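The threshold routing just described can be sketched roughly like this. The threshold, per-minute price, and scores here are illustrative assumptions, not 3Play's actual numbers:

```python
# Hypothetical library of scored videos (minutes and ASR accuracy scores).
videos = [
    {"id": "lecture-01", "minutes": 50, "score": 0.68},  # risky ("pink")
    {"id": "lecture-02", "minutes": 50, "score": 0.95},  # low risk ("yellow")
    {"id": "lecture-03", "minutes": 50, "score": 0.65},  # risky
    {"id": "lecture-04", "minutes": 50, "score": 0.93},  # low risk
]

THRESHOLD = 0.90      # configurable per institution or project
HUMAN_RATE = 2.50     # assumed $/minute for human remediation

def route(video, threshold=THRESHOLD):
    """Below the threshold: human remediation. At or above: keep the AI output."""
    return "human" if video["score"] < threshold else "ai"

human_minutes = sum(v["minutes"] for v in videos if route(v) == "human")
total_minutes = sum(v["minutes"] for v in videos)

# Spend only on risky content instead of sending everything to humans.
pulse_cost = human_minutes * HUMAN_RATE
all_human_cost = total_minutes * HUMAN_RATE
savings = 1 - pulse_cost / all_human_cost
```

In this toy distribution, half the minutes clear the threshold, so `savings` comes out to 0.5; the actual figure depends entirely on your library's score distribution and where you set the threshold.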

[00:10:35] Speaker 1: Yeah.

[00:10:35] Speaker 3: And again, Lily covered some of this already, so I'll try not to duplicate. But this is the high-level overview, and then we have a more detailed slide behind each step. Fundamentally, at the end of the day: connect, score, remediate, monitor. Connect means we need to be able to access the video wherever it lives. Score that video in the way Lily described, to tell you whether it's accurate, meaning is an AI version of either captions or AD useful or sufficient, depending on your risk tolerance. Remediate, meaning on-demand ability to upgrade to an expert captioning level of service. And then monitor: this has to be available on a continuous basis. Let's go to the next slide. So, connect. This is a non-trivial part of this problem, so I think it's an important one to highlight. It's not fundamentally a part of our captioning and audio description solution as we usually describe it, but we understand that at universities, there's video everywhere. Sometimes in one video platform, sometimes split across video platforms. We've dealt with universities that have 100 or more different YouTube accounts that are individually owned, with the video split across them. And we try to make this as simple as possible. Subject to what these platforms are actually able to support, our goal is to make getting the video into the 3Play ecosystem, so we can apply this Pulse solution, simple, scalable, and frictionless. Now, I see some questions already coming in that I think we'll get to at the end around various integrations. Again, we've worked with universities for 18-plus years, and even at accommodation-level scale, workflow has been important. So we have integrations with all of these and many more video platforms that aren't on here. So, scoring. After we receive the video, we audit it. In the context of captioning, this looks like an accuracy score: how accurate is the AI draft?
And in the context of audio description, which I'll get to in a second, it's a little more nuanced. But fundamentally, there are two things I want to call out on the captioning side. One, the real power in this, I think, is pairing this score with other metrics you might have internally about the video: its priority, its visibility, its risk. Later you'll see that we still recommend, if you're captioning or describing for an accommodation in the classroom, obviously using a professional captioning and description solution. But even if you're scanning across a 20,000-hour backlog, there's going to be content in there where, even if the score is quite high, you still may want to opt for another round of human review because it's such high profile content. Maybe it's a high profile speaker or a commencement speaker. So pairing that logic, the metadata you have, with these scores is really powerful. But you can see this distribution here. We've seen the accuracy threshold typically set around this 90% level, but that is entirely configurable. On the audio description side, what is a score? What does audit mean? Well, it starts with: do you need to describe at all? Is there enough space in the video, enough empty space, that you could fit a description into it? Yes or no. Is there on-screen text in the video that's not represented in the transcript already? That's critical information that would need to be described; if there's none of that, description is maybe not as important. And ultimately, are there high-priority visual elements? We have models that can detect that. So start with: does the video you have need to be described? And then after that, there's another decision to make, which is how comprehensive is the AI audio description? And you can see we have low, medium, and high risk, which maps to comprehensive, somewhat comprehensive, or not.
So certainly, if the video needs to be described and the AI draft is not comprehensive, that's an area where you're going to want to consider an upgrade. We can go to the next slide. And this lays out the routing. Once you have that score, you have choices to make. We've carved out on the side the accommodation request, because again, what we recommend, and I think what we've seen so far, is don't change that. If there's a student in the classroom who needs an accommodation, you're still going to use either our or another vendor's professional high-end captioning solution. But for this other batch of content, you'll get this score, and you can configure a threshold, as Lily described, and route content automatically either to an AI solution, which we call launch, or to our refined solution, which would be another round of human review. And the nice thing here is that, like I said, this is entirely configurable. We have customers who are literally changing that month to month as budget realities change, or as their understanding of what an 88% or 95% quality level means evolves as they look at the content and get feedback from students. You can change that day-to-day or month-to-month or at whatever resolution you want. You can also set up different projects in 3Play where you could have different rules. Maybe there's one category of content where you want to set that threshold very high, 97% or 98%, because it maps to really high visibility content, and maybe there's another project you want to set up with a lower threshold. It really allows you to allocate budget in a responsive and responsible way. And then, centralized governance. When we're thinking about compliance with something like Title II, or really any regulation (we work with media customers and think about FCC regulations, and we work with corporate customers), I think the starting point is visibility and governance, right?
So if there are 40,000 hours a year of video being produced, like in the example I gave earlier, and it's distributed across 10 different video platforms, you start completely in the dark. So when we think about Pulse, step one is to get command, to get control, to get visibility over the situation and understand it. And you could set a threshold really, really low to start. Maybe it's all AI to start, but now you at least have a picture of what's going on, and then you can make responsible decisions about where to go from there. So, step one. This is an example of a dashboard you would see in the 3Play account system that shows you the distribution of your content. The dotted line here in the left diagram shows where you may decide to set a threshold, and this is actually based on real historical data across all of our EDU customers. You can see that if you were to set a threshold around 90%, you would only be upgrading content to the left of that dotted line, because all of the content to the right, roughly 50% or so of the content here, falls above that 90% threshold with just the AI solution. Lily, anything else you'd want to add on this slide?

[00:18:15] Speaker 2: I would just add that in a conversation I was having just last week with a CIO at a major university, his responsibility is reporting on compliance to their board. And being able to have visibility at this level, in a clean, easy-to-read dashboard that he can just share with the board, was a huge time saver for him and for all the people downstream who have to fetch this information. So thinking about how you're going to report back on that governance as a university is a helpful lens, too, when thinking about how to use Pulse to make your reporting job easier.

[00:19:10] Speaker 3: Yeah, and one thing I should have mentioned in the context of the scoring, which I think is an important fact, is how we got here. As I mentioned earlier, we have captioned and described millions of videos, and I think I mentioned 35 million minutes plus of video captioned just in higher ed. And from day one, our solution has been, like I said, AI plus tools plus people. A key ingredient to making that model work at a scale of thousands and thousands of videos every day is that we need to know, for every AI draft caption or AI draft audio description we produce, whether it's good or not. We need to know, if it's not, where it fails and by how much. And really, at the end of the day, we need to know how much work it will take for us to get the AI draft to a compliant state, so that we know how much to pay our captioner or our describer, and do so fairly. So I'll give you an example. In the case of captioning, if we have a video with a single speaker, professionally recorded in a studio, that AI draft might be 96% or 97% accurate, and the captioner needs to do just a little bit of work to get it above that 99%-plus threshold we're aiming for. So it might take them an hour to review that file, if it's an hour long, and make those final changes. On the other hand, and this is a real example, we were transcribing a focus group where the microphone was hidden in a plant somewhere and there were 10 speakers shouting over each other, and that ASR draft might be 30% accurate. For that transcriber or captioner, it might take seven or eight hours to clean up that hour, and obviously we need to pay a lot more in that second case. So we are ourselves a customer of Pulse, and we've been using this exact model for 18 years.
And I think the light bulb went off as we were having these hundreds and hundreds of conversations with universities around Title II and the challenge they face, where at the root of it, they want to get these captions and descriptions to a certain level of quality and they just don't know where to start or how to allocate budget. This model allows them to do that. So really, it's just taking something that we've been battle-testing and using for years and putting a product and ordering layer on top of it: dashboards and API support and integration support. But the core technology has existed for years and years.
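The underlying effort model Chris describes can be sketched as follows. The two anchor points (a roughly 97%-accurate draft taking about real time to review, a roughly 30%-accurate draft taking seven to eight hours per hour of video) come from the examples above; the linear interpolation between them and the hourly rate are our own assumptions, not 3Play's actual pay model:

```python
def cleanup_multiplier(accuracy: float) -> float:
    """Estimated hours of editor work per hour of video, given AI draft accuracy."""
    if accuracy >= 0.97:
        return 1.0              # near-perfect draft: roughly real-time review
    if accuracy <= 0.30:
        return 7.5              # very poor draft: 7-8 hours per hour of video
    # Assumed linear interpolation between the anchors (0.97, 1.0) and (0.30, 7.5).
    return 1.0 + (0.97 - accuracy) * (7.5 - 1.0) / (0.97 - 0.30)

def editor_pay(accuracy: float, video_hours: float, hourly_rate: float = 25.0) -> float:
    """Pay scales with estimated effort, so harder files pay proportionally more."""
    return cleanup_multiplier(accuracy) * video_hours * hourly_rate
```

Under these assumptions, the clean studio recording pays for one hour of work on an hour of video, while the hidden-microphone focus group pays for seven and a half.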

[00:22:09] Speaker 2: Yeah, I think that's a great point to focus on: this is not a new-to-market score and solution. It's something we've been using in our own business for 18 years, so there's a ton of reliability and trust behind it, giving you really strong conviction behind the arguments you're making around compliance. So just to take a step back: obviously, 3Play has been operating in the higher ed space for our entire 18 years in business, and we've been focused pretty much entirely on the student accommodations category. The products that you have used, known, and trusted for student accommodations are not changing. That is something we always recommend going directly to when you get an accommodation request, or when you have a highly visible piece of content that you know you want perfected human review on. What we're introducing now is Pulse, for campus-wide compliance at scale. These solutions are for that auditing and remediation of your entire campus. And there are going to be two packaging options that you can choose from with Pulse. There's an audit-only package, where we'll do all of the scoring and monitoring of your video library, and you'll get that detailed reporting; it's going to give you a view of where the risk is. And then there is a second package, which does all of that auditing and includes the remediation of the high risk content. If you're only ready to start with the audit, that's totally fine; there's an option for you. If you want to get everything above a threshold and feel fully compliant for this April deadline, the audit and remediate package is for you. And we do have several universities using this solution today. University of Florida is using it at scale to achieve full compliance, and they had a great takeaway, which is that Pulse isn't just a caption quality tool, it's a budget strategy.
So I have really valued that perspective in thinking about how we can help universities position this and get approval for it. And I think that this story from University of Florida, which we can provide to anyone who needs it, makes a great kind of case for that point. So we're about to get to questions. But I just wanted to end by saying that you can start experiencing Pulse today if you are already a 3Play customer. We turned on today a preview of Pulse. So from your account, there's going to be a new tab in your upper nav that says Preview Pulse. And you can click on that, and you'll see some kind of sample data around what this experience will look like for you. And you can reach out to your account manager to help activate Pulse in full. And if you're not a customer, we would love to talk to you. You can request a demo. And we would be excited to start with the audit for you. And with that, I will let Noah start tackling the amazing questions that have been coming in.

[00:25:42] Speaker 1: Yeah, absolutely. We have some great questions here. To start off, we have a few questions around what is audited in each package. So first off, what impacts the cost of the audit system? And can we choose what is audited, or is it all or nothing?

[00:25:59] Speaker 2: That's a great question. So I'll take a stab at that. And then, Chris, if you want to layer on, go for it. So for the audit, as a starting point, there are two major options. Do you want to audit captioning, or do you want to audit captioning and audio description? Those are two different options. If you want to start by focusing on captioning, you can go down that path. If you want to really focus on both captioning and audio description, that's an option as well. And then you can really configure Pulse how you want to for your organization. If you have content in Panopto and YouTube and Canvas Studio, and you want to start with your Panopto content, you can just set it up for Panopto content, or you can set it up for projects within that. Whatever you want to point Pulse at, you can configure. Chris, how did I do?

[00:26:57] Speaker 3: You did great. And I think it's worth mentioning, on the audio description side specifically: as I mentioned, the audit is a little bit more nuanced, in that it's not just an accuracy score or a describability score. It starts with: do I need to audio describe this content at all? And anyone here who's familiar with audio description knows this is a really interesting challenge, right? I know many universities have tried their best to coach professors or instructors on how to describe their content as they go, meaning if there is writing on a chalkboard or a presentation they're sharing, making sure they communicate what's on the screen along with it. But best intentions often fail at that endeavor in practice. So the idea that you can look at a backlog of content and identify what is already well described and what is not is a real unlock, potentially, in reducing the scope of what ultimately needs to be remediated. But I agree with everything you said about the video platforms and how you can allocate the content. And I'd say even within a platform, even within YouTube or Panopto, we could slice the data further. If there are tags, for example, on certain content in a certain platform that we want to ignore, we could accommodate that as well.
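The audio description audit logic described here (is there room to describe, is there on-screen text the transcript misses, are there high-priority visuals, and then how comprehensive is the AI draft) can be sketched like this. The field names and numeric cutoffs are hypothetical illustrations, not 3Play's actual model:

```python
def needs_description(video: dict) -> bool:
    """Gate 1: does this video need audio description at all?"""
    if not video["has_space_for_description"]:
        return False                       # no gaps in the audio: nothing fits
    if video["onscreen_text_missing_from_transcript"]:
        return True                        # chalkboard/slide text must be described
    return video["has_high_priority_visuals"]

def ad_risk(comprehensiveness: float) -> str:
    """Gate 2: how comprehensive is the AI draft vs. what a human would write?"""
    if comprehensiveness >= 0.85:
        return "low"       # green: keep the AI description
    if comprehensiveness >= 0.60:
        return "medium"    # yellow: consider review
    return "high"          # red: route for human description
```

The low/medium/high bands correspond to the red/yellow/green gauge of risk mentioned in the discussion; only videos that pass the first gate ever reach the second.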

[00:28:25] Speaker 1: Amazing. Next question here around auditing for each package. Is there a cap on how many videos you can have audited slash remediated at the enterprise level?

[00:28:37] Speaker 3: So Lily, this has a good connection back to the previous question, which I should have covered; this sort of relates. The answer is really no, there is no hard cap, but the input into the price is the number of minutes, right? So there are minute tiers: 500,000 minutes, a million minutes, 2 million, 5 million. We price it in that way. So no upper bound on videos, but the volume of content is the input that ultimately matters.
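A sketch of tier selection under that pricing model. The minute caps come from the levels just mentioned; the tier names are placeholders, not 3Play's actual tier names:

```python
# Minute caps per pricing tier, as mentioned: 500k, 1M, 2M, 5M minutes.
TIERS = [
    (500_000, "tier-500k"),
    (1_000_000, "tier-1m"),
    (2_000_000, "tier-2m"),
    (5_000_000, "tier-5m"),
]

def tier_for(library_minutes: int) -> str:
    """Return the smallest tier whose minute cap covers the library.
    There is no cap on video count; total minutes drive the price."""
    for cap, name in TIERS:
        if library_minutes <= cap:
            return name
    return "custom"  # above the largest listed tier: priced individually
```

So a 400,000-minute library would land in the smallest tier, while a library above 5 million minutes would be priced individually.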

[00:29:06] Speaker 2: Yep, exactly.

[00:29:08] Speaker 1: Awesome. Moving on, we have some questions around specific integrations that we can quickly go through. Will the Pulse audit that tells you if AD is needed be available if we're using 3Play services through Kaltura? And what about Canvas Studio?

[00:29:29] Speaker 2: So for Kaltura: we are working with the Kaltura team now to build the functionality for audio description into the product in the same way that captioning is supported. And for Canvas Studio, we do have an integration where we can ingest your content from Canvas Studio. I don't know whether this is a question specific to audio description or about support for Canvas Studio generally, but we will be looking into the audio description support there.

[00:30:09] Speaker 3: Yeah, and I sort of mentioned this on the ingest side, but at the highest level, we will work with you to figure out a workflow that scales and is simple. Our aspiration is to be able to connect directly via API to wherever the content lives, ingest the video, run this Pulse solution on it, and deliver the right asset back, whether it's a caption file or a description file, at either the AI level or the human level of remediation that you chose. That's where we want to be, and that's where we'll ultimately get. As I'm sure many people here know, many of these video platforms, particularly around audio description, don't have full API support yet. So we're leaning on them, and obviously many of the people on this webinar today are really important customers of Panopto and Canvas and Kaltura. We've had a lot of success motivating the platforms to accelerate their roadmaps by partnering with customers.

[00:31:14] Speaker 1: Awesome. Next question here. Can you clarify a bit how Pulse works for audio description? Is there a quality threshold for AD instead of automatically routing for human generated descriptions?

[00:31:26] Speaker 3: Yeah, it's a great call out. So in the case of captioning, it's a little bit clearer, right? We can measure accuracy with a high degree of certainty, because we have many, many years of running AI and then having people clean it up and generate the ground truth. Our model is built on that, where there is a definitive truth at the end. Description is more subjective; fundamentally, there isn't a single analytical truth. So that's why, on the scoring side, we start with: should you describe this video at all? That is a really important distinction. Many videos are not describable at all, because there's literally no space to describe. (There's a concept called extended AD, but many of the video platforms don't support it.) And second, even if there's space to describe, either the video could be well described already, with all of the visual elements already being described by the professor or speaker, or there's just not a lot to describe; there's nothing going on visually other than a person standing in front of a classroom. So we have a model that can identify those things, and that solves a large part of the problem. If you have 100 videos, 40 of them might not be describable. So then for the 60 that remain, which I think is the crux of this question, what does an accuracy score mean? Fundamentally, we're asking: would the description improve materially with human intervention? Our model aims to answer that question, looking at things like the complexity of the video, characteristics of the AI description, and even feedback from the system itself, introspectively: did it hallucinate?
There are things we can do, but fundamentally, our model helps answer the question of whether this description would benefit from a professional describer spending time with it.

[00:33:31] Speaker 2: Yeah, I would just add that that piece will be a red/yellow/green gauge of risk. Green would mean we think the AI audio description is extremely comprehensive and we don't think a human looking at it would give you anything significantly better. And red would mean this is really different from what we think a human would produce. So you'll get really clear gauges of that. The other thing to note is that we've talked a lot about our history with captioning, the 18 years of captioning experience. For audio description, we have decades of experience describing content, particularly in the media space, with Emmy award winning description writers on our team who are working with our data science team to make sure that this AI audio description solution, and the scoring around it, is exceptional quality and is really based in what makes good description. And that's a really nuanced art form, to be honest. So having those resources internally, really focused on this problem, is a huge value add from our team.

[00:34:57] Speaker 3: Yeah, another thing worth mentioning: when we're thinking about whether this AI description would benefit from an expert describer reviewing and modifying it, well, in our tools for our accommodation solution, we present a draft AI description to a describer, and then we see whether they actually make changes to it and what types of changes they make. So that's also the feedback loop where we start to learn: was that AI description good enough? If it failed, in what ways did it fail? And that informs this model.

[00:35:33] Speaker 1: Yeah. Awesome. Thanks for that answer. While we're talking about audio description, I have a quick clarifying question here. Is extended audio description, EAD, factored in? For instance, if a video is not a good candidate for standard AD because there's not enough space, does it know how to recommend EAD?

[00:35:53] Speaker 3: Absolutely. A sophisticated audience, that's great. So with extended audio description: the reason we start with how much space there is to describe is that many video players do not support extended AD, so it's not always a possibility. But absolutely, we could relax that criterion if the platform we're delivering to does in fact support it. And to the question, we could even recommend it: if you are going to describe this video, either with AI or with a human describer, use extended AD.

[00:36:30] Speaker 1: Sure. Awesome. Moving on, Chris, I think you touched on this a bit, but can you speak to how reliable are your accuracy slash quality predictions? Yeah.

[00:36:42] Speaker 3: So what I'd say to that is we use them every day, at a scale of 10,000-plus videos a day, to figure out exactly how to pay our contractors, our captioners, our describers, like I said. So really, really high quality. I mean, like with any model, is every single video, every single score, perfect? No. But over a large sample size, they're very, very highly correlated.

[00:37:07] Speaker 1: Awesome. Thank you. Next question here, can the Pulse score be viewed in more of an admin role rather than a creator, i.e. faculty support, ITS, disability services? If so, how many users can be in that type of role?

[00:37:26] Speaker 2: Yeah, there's no reason you couldn't set kind of user permissions around access to the Pulse dashboard. And there's no limit on like user count in our system.

[00:37:40] Speaker 1: Awesome. Next question we have here, how are you defining the high risk content? Does it include non-public facing content like Panopto?

[00:37:51] Speaker 3: So I sort of alluded to this earlier, but when I think about risk, there are two dimensions. One is around the quality of the caption or the description that we are producing. If that quality is very low, publishing it alongside a video would be higher risk than if the quality was very high. But the second dimension, which is a little bit out of 3Play's control, is how visible is that content? How public is that content? And while, again, we believe that all of the content everywhere, whether it has one view or a million, ought to be accessible, in practical terms, from a risk management and budget perspective for a university, the video that's going to get a million views and is on the front of the website might be higher risk if it's associated with a low-quality caption and description file.
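The two dimensions Chris describes, output quality and content visibility, could be combined into a single prioritization score along these lines. The weighting and the logarithmic scaling of views are my own assumptions for illustration.

```python
# Hypothetical sketch combining the two risk dimensions: the quality of the
# published caption/description, and how visible the video is. The log
# scaling of views and the multiplicative weighting are assumptions.
import math

def content_risk(quality: float, views: int) -> float:
    """0.0 (low risk) up to ~1.0: low quality on a highly visible video."""
    exposure = math.log10(views + 1) / 6.0   # ~1.0 at a million views
    return round((1.0 - quality) * min(exposure, 1.0), 3)
```

Under this sketch, a mediocre caption on a front-page video outranks the same caption on a video with ten views, which is exactly the prioritization the budget discussion calls for.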

[00:38:45] Speaker 1: Yeah.

[00:38:46] Speaker 2: And I think just to clarify, you have configuration over what you view as high risk for your own university and the decisions you make internally. We always advise getting feedback from your general counsel to make those decisions. But these are all, we'll have defaults, but you have total configurability over what you want to consider high risk.

[00:39:13] Speaker 1: Great. Thank you for explaining that. Moving on, we have gotten a few questions around compliance. First one here, don't captions need to be 99% accurate to be compliant with Title II? Will the threshold be automatically set to 99% if we don't identify a different threshold?

[00:39:31] Speaker 2: It's a good question. So first of all, for any student accommodation request, absolutely: you need to prioritize that 99%-plus accuracy. That is a direct request from a student for accommodations. For Title II compliance more broadly, there's no specific accuracy number stated in the law. There is a concept of creating an equal experience. And it is on the university ultimately to do the math on what is the highest accuracy we can set for the university at the budget that we're getting at scale. You can set that at 99% if that's what your university is comfortable with. You can set it at 90% if that's what your university is comfortable with. But that's a decision for you to make about your risk threshold. And at the end of the day, you want to be prioritizing the content that absolutely gets that 99%-plus treatment, and you may have a different opinion about how to handle the rest of your content at scale. Pulse gives you that configurability.

[00:40:51] Speaker 3: Yeah. And what we'll absolutely share is what other universities, peers of yours, are choosing: what defaults they're choosing, where they're finding success, where they're ultimately landing. And like Lily said, the key here is that it gives you governance and control and visibility. So if 95% is more comfortable, but for a smaller subset of content, maybe that's the right choice. Or you can go broader and set the threshold lower. And another thing, and maybe this is obvious, but it's probably worth stating: the nice thing is that once you set this up, and let's say you choose a threshold of 90%, AI models get better all the time. So by doing nothing, let's say 50% of your content falls below that threshold today. Well, six months from now, it could be 45%. And 18 months from now, maybe it's 35% as the models get better and better. So costs sort of come down by default here as the AI models improve. And you don't have to be out there in the market looking for a new AI model. We've got the threshold set, we're always investing in the absolute best cutting-edge models, and sort of by default, the cost will just continue to go down.
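The threshold routing described throughout the Q&A can be sketched as a simple split: files whose predicted accuracy clears the institution's threshold publish the AI output, and the rest route to human review. All numbers below are illustrative; the point is that a better model shifts files across the same fixed threshold with no configuration change.

```python
# Sketch of AI-vs-human routing against a configurable accuracy threshold.
# As models improve, more files clear the same threshold, so the human-review
# share (and cost) shrinks by default. All numbers are illustrative.

def route(predicted_accuracy: list[float], threshold: float = 0.90):
    """Split a batch into (publish_ai, send_to_human) counts."""
    ai = sum(1 for p in predicted_accuracy if p >= threshold)
    return ai, len(predicted_accuracy) - ai

today    = [0.95, 0.85, 0.88, 0.97, 0.80]  # current model's estimates
improved = [0.97, 0.91, 0.93, 0.98, 0.86]  # same files, better model
```

With these invented numbers, `route(today)` sends three of five files to humans, while `route(improved)` sends only one, with the threshold untouched.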

[00:42:02] Speaker 1: Amazing. Well, that is all the time we have for today. Huge thanks to Chris and Lily for giving us an awesome overview of Pulse. And thank you to our audience for joining us and asking some excellent questions. Thank you again. And I hope everyone has a great rest of your day. Bye.

AI Insights
Summary
The webinar presents Meet Pulse, a 3Play Media solution for monitoring, auditing, and remediating video accessibility at scale in universities ahead of the ADA Title II expansion (April 24 deadline). Lily Bond and Chris Antunes explain that web audit tools typically have a blind spot around video and that manual review becomes unworkable once volume grows from hundreds to tens of thousands of hours per year. Pulse connects to multiple platforms (e.g., Panopto, YouTube, Canvas Studio, Kaltura), ingests the content, scores it automatically, and routes remediation: if the metrics clear a configurable threshold, an AI-based solution is published; if not, the file is sent for human review. For captions, the score estimates per-file ASR accuracy (not just averages). For audio description, the system evaluates whether AD is required, whether there is space (including consideration of extended AD where the platform supports it), and how complete the AI description is (red/yellow/green risk). The goal is to manage risk and budget, directing human spend only to high-risk, high-visibility content while keeping a separate workflow for accommodation requests (99%+ with professional review). Governance is also highlighted: dashboards for reporting to leadership and the board, unlimited user permissions, and tiered per-minute pricing. The speakers answer questions about integrations, the reliability of the quality predictions (a model 3Play uses internally to estimate effort and pay reviewers fairly), the definition of "high risk" (quality plus visibility), and the fact that Title II does not set an explicit accuracy percentage, so the threshold is an institutional decision made with legal counsel; in addition, improvements in AI models reduce costs over time.
Title
Meet Pulse: video accessibility auditing and remediation at scale
Keywords
3Play Media
Pulse
video accessibility
ADA Title II
compliance
captions
audio description
transcripts
ASR
auditing
remediation
higher education
integrations
Panopto
YouTube
Canvas Studio
Kaltura
quality threshold
risk management
governance
dashboards
EAD (extended audio description)
Key Takeaways
  • Title II broadens web accessibility requirements to include video (captions, AD, and transcripts), as well as backlog content if it is public.
  • The biggest problem is scale: millions of minutes per year make manual review impossible, and traditional web audits don't cover video well.
  • Pulse automates the workflow: connect sources, score quality and need, remediate with AI-vs-human routing, and monitor continuously.
  • For captioning, the score estimates per-video accuracy (variability between files is high), allowing spend to be focused on low-accuracy cases.
  • For audio description, the scoring is more nuanced: it determines whether AD is needed, whether there is space, and how complete the AI description is (red/yellow/green risk).
  • Thresholds are configurable per institution and can change over time; projects can be created with different rules based on visibility and risk.
  • Accommodation requests should follow a professional path (99%+ target), separate from bulk compliance.
  • Governance and reportability (dashboards for leadership and the board) are central to managing compliance.
  • Pricing is based on minute volume by tier; there is no hard cap on the number of videos.
  • As AI improves, more content can clear the threshold without human intervention, reducing costs over time.
Sentiments
Positive: Informative and optimistic tone: the solution, cost savings, control, and scalability are emphasized; budget and volume challenges are acknowledged but presented as manageable with Pulse.