[00:00:00] Speaker 1: Ultimately, the point of the Title II updates is to remove that additional barrier, the social barrier of raising your hand. By not providing audio description in key moments, where someone might not be willing to raise their hand, you might inadvertently be impacting their academic performance. Today is all about audio description. This is probably top of mind as we get closer and closer to the date in April: what are we going to do? So what is audio description? Audio description is fundamentally a verbal depiction of the key visual elements in a video, typically implemented as a secondary audio track, and it's meant to be a narration: an overview of the visuals happening in the video that are important to understanding it if I were low vision or blind. So when do you need it? The first step is understanding the video type. Identify the video type and its purpose, and specifically, where is this video going? If it's going in front of a large audience, you're likely exposed to more compliance risk if you don't do audio description. There might also be platform limitations: if you're sending a video to a platform that doesn't explicitly support audio description, you may need to rethink how you build that video. Number two is evaluating the impact of lack of access. The audience might be small, but the impact might be large. So it's important to understand: is the content in this video critically important? Is it safety information? Is it training information? Is it critical content for success in a course, or content required for employee training at the university? Understanding the impact of access and knowing where and who your audience is: those two components ultimately drive your decision as a university about where you see the most risk of not complying with this law.
No one has to do any of this; there are just potential consequences to not doing it. And the third step: does the content itself allow for audio description? What we're trying to determine is whether critical visual information is already properly verbalized throughout the video. So let's talk about what actually needs to be described. There are four key things we're always looking for in audio description. One is actions: what people are physically doing on screen, especially if it's not verbally described. We're looking for visual action that changes the narrative of the video. Someone creeping up in the background behind you is probably important if that's not caught in the audio at all. Two is characters: who is on screen, and the relevant expressions or body language when it matters to understanding. Three is scene changes: transitions, location changes, significant setting details. You saw the classroom example; if that lecture were to move to a different environment, say outside, where they're now showing things outside, that would constitute a scene change. But sitting in a classroom just going through the whiteboard, the key is really what is being written on the whiteboard. That brings us to the fourth: on-screen text. Anything written on the screen that conveys meaning and isn't already read aloud is what you're going to be focused on. Especially think about the purpose of a lecture: you're listening, and you're reading what's being written and what's being put in front of you. Okay, creating audio description can be challenging too. The first thing is that we build descriptions against a style guide, the DCMP Description Key, so that we have consistent language for how we describe content. At the highest level, you're looking for accurate descriptions, not subjective descriptors. It's not about whether that person looks good or not; it's about accuracy. Descriptions also need to be consistent.
So if you refer to a character at time zero and again 30% into the video, that should be a consistent description of that individual, unless there's a fundamental change to their appearance that is important to the meaning. They should be prioritized; this is the most challenging aspect. How do you prioritize what makes sense? This is usually where human judgment comes into play the most. They should be appropriate; as I mentioned, it's not meant to be subjective. And equal: we're trying to create a description and an experience that results in an equal experience for a blind or low vision user and a sighted user. And then finally, there's standard versus extended audio description. Standard audio description means the description fits into the original length of the video; we just get to use the pauses, like right there. Those pauses might be really, really short, but ultimately this is the easiest thing to publish from a technology perspective, and it's also how pretty much all media and entertainment publishes audio description. But in academic content, there's not a lot of room for pauses, and there's a lot more visual information that might be missed. Extended audio description gives a script writer, whether it's AI or human, more space: you put in a time code that says, we're going to pause the video here and let the description run longer, and then the video automatically starts playing back once the description is over. This allows for much more context and lets you interrupt lecture-style content where there's not much space. To wrap up and summarize everything: the pillars of a sustainable strategy for audio description really start with a policy position. What is the university going to prioritize from a business perspective? And from there, you can start your audit.
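The standard-versus-extended decision described above can be sketched as a simple gap check: find the silent pauses between spoken segments, and if a needed description can't fit in its pause, the video has to be paused (extended AD). This is an illustrative sketch, not 3Play's actual analysis; the segment timings, description durations, and the one-gap-per-description simplification are all assumptions for the example.

```python
def speech_gaps(segments, video_end):
    """Return (start, duration) of silent gaps between spoken segments."""
    gaps, cursor = [], 0.0
    for start, end in sorted(segments):
        if start > cursor:
            gaps.append((cursor, start - cursor))
        cursor = max(cursor, end)
    if video_end > cursor:
        gaps.append((cursor, video_end - cursor))
    return gaps

def needs_extended_ad(segments, descriptions, video_end):
    """True if any description outruns the pause it would occupy.

    segments: (start, end) times of speech, in seconds.
    descriptions: (time, seconds_needed) pairs for each planned description.
    """
    gaps = speech_gaps(segments, video_end)
    for t, need in descriptions:
        # Simplification: a description must fit inside the single gap
        # containing its time code.
        fitting = [dur for (g, dur) in gaps if g <= t < g + dur]
        if not fitting or fitting[0] < need:
            return True  # no room in the pause: pause the video instead
    return False

# A dense lecture: speech from 0-28s and 30-60s, so only a 2s pause at 28s.
segments = [(0, 28), (30, 60)]
# A whiteboard diagram appearing at t=28 needs about 6s to describe.
print(needs_extended_ad(segments, [(28, 6.0)], 60))  # True: extended AD
```

The same check with a 10-second pause and a 5-second description returns False, i.e. standard AD would fit.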
Okay, I have all this video content; which content might need AD and which doesn't, based on that policy? Then comes prioritization. From the audit, we can prioritize: these sets of videos do need AD, these sets need human scripting, these sets need AI scripting. Then you can create a budget and forecast. And maybe before you go to ongoing monitoring, you go back to your policy and refine it; your budget might end up influencing the policy decision. We're going to constantly evolve the strategy as the technology for audio description gets better and as the budget for it expands or shrinks. And then ultimately, ongoing monitoring: making sure you have the tools in place to monitor your audio description performance from an execution perspective. That might feel overwhelming. At the end of the day, 3Play has designed a specific tool to support all of these steps. So what is Pulse? Pulse Level A is our auto caption quality analysis tool, with budget simulations and remediation via human upgrade for poor-performing auto captions. Level AA is what audio description is all about. Not only do you get access to all the Pulse captioning workflows, but you can add in additional audit tools: does this video need audio description? Will a standard audio description track be sufficient for this video? We've integrated all of these auditing tools directly into our marketplace of human editors and script writers, plus additional AI script-writing tools for audio description, which unlocks additional insights to make sure your strategy is as buttoned up as possible and defensible in a worst-case scenario for your university. Ultimately, our guidance is: just get started. You can do five videos. Just get started doing something so you can get a feel and start building a plan and a budget for faculty and the entire campus.
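The audit-to-budget step above can be sketched as a small forecast over prioritization tiers. The per-minute rates and the tier split here are hypothetical placeholders, not real 3Play pricing; the point is only the shape of the calculation (minutes per tier times rate per tier).

```python
# Hypothetical per-minute rates for each prioritization tier.
RATES_PER_MIN = {"human_scripted": 8.00, "ai_scripted": 1.50, "none": 0.00}

def forecast(audit):
    """audit: list of (tier, minutes) rows from the prioritization step.

    Returns (cost_by_tier, total_cost)."""
    total, by_tier = 0.0, {}
    for tier, minutes in audit:
        cost = RATES_PER_MIN[tier] * minutes
        by_tier[tier] = by_tier.get(tier, 0.0) + cost
        total += cost
    return by_tier, total

audit = [
    ("human_scripted", 120),  # e.g. safety and required training videos
    ("ai_scripted", 900),     # e.g. standard lecture capture
    ("none", 400),            # e.g. videos whose visuals are fully verbalized
]
by_tier, total = forecast(audit)
print(by_tier, total)  # total 2310.0 at these example rates
```

Re-running the forecast with adjusted rates or tier assignments is exactly the "budget might influence the policy" feedback loop the speaker describes.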