[00:00:00] Speaker 1: Thank you all for joining us for today's session, How to Build an Audio Description Strategy That Actually Works. My name is Noah Pearson. I use he/him pronouns, and I will be moderating today's webinar. And with all that taken care of, I'd like to welcome one of 3Play's very own, Eric Ducker. Take it away, Eric.
[00:00:15] Speaker 2: Awesome. Thank you, Noah. I may call on you to help out as we monitor the chat, and I'll be asking for some audience participation, so don't fall asleep on us, Noah. We're excited to start on this next installment of our Countdown to Compliance series. You've probably seen me in previous episodes of this series talking about captions, talking about how schools are addressing Title II compliance, and we've also touched on how we're building solutions to support our customers. Today is all about audio description. This is probably top of mind as we get closer and closer to the date in April: what are we going to do? I'm starting to see schools put a concerted effort into closed captioning, and that's a good step in the journey. Now we're going to talk about audio description, what's going to work, and how tools like 3Play can help you actually execute against this strategy. So as Noah mentioned, I'm Eric Ducker. I identify as male, I'm in my mid-thirties with brown hair, and I've been at 3Play Media for over four years now, representing product marketing and product strategy here at 3Play. I have the privilege of talking to many, many customers, some of whom are probably on this call right now. We've dug deep into their specific needs around audio description and captions, and my opportunity here is to present, in aggregate, what we're hearing in the market and how we think schools can take a defensible, strategic approach to supporting their audio description compliance against the Title II backdrop. So what we're going to cover today are a few things. We're going to start with the basics. I think this is always really important; we never know exactly where people are in their journey around audio description.
So we'll start with the requirements, then we'll go through what we call an audio description decision framework, then how to bundle that all into a scalable strategy, and of course we'll have time for some questions at the end that Noah will come back to help us with. All right, so let's get started. We mentioned this at the beginning: you have captions, now what? Captions are typically the first step of the journey toward truly accessible video, and specifically toward meeting the Level AA compliance standards. But remember, the Title II deadline, April 24th, 2026, is fast approaching. We are in March now, which means what's next? April. This deadline triggers the law going into effect, which will require audio description and closed captions as referenced in the WCAG 2.1 Level AA guidelines. And once again, it doesn't mean that everything has to happen on April 24th; it just means the rule takes effect, and people can now take action if you are not adhering to these compliance standards. So what is audio description? Audio description is fundamentally a verbal depiction of key visual elements in a video, typically implemented as a secondary audio track. It's meant to be a narration, an overview of the visuals in the video that are important to understanding it, for a user who is blind or has low vision. So taking that into account, we know that this is important. How do we know if it's necessary? If we look at the definition of audio description, it talks about important, prioritized elements in the video. So how do we determine when audio description is necessary, and how do we implement it at scale, especially when we're talking about lecture content, hundreds of thousands of minutes across campuses, without making it complicated?
And this is where a clear decision framework comes in, one that combines not just what's in the video but also some prioritization and realism about what's possible, into a defensible framework for your audio description strategy. So when do you need it? Here it is, very straightforward, with a little more nuance once you get deeper. The first step is understanding the video type: identify the video type and its purpose, and specifically where this video is going. If it's going in front of a large audience, you are likely exposed to more compliance risk if you don't do audio description. There might also be platform limitations: if you're sending a video to a platform that doesn't support audio description, you may need to rethink how you construct that video, knowing that the platform can't fully deliver accessible video content. Number two, evaluate the impact of the lack of access. The audience might be small, but the impact might be large. So it's important to understand: is the content in this video critically important? Is it safety information? Is it training information? Is it critical content for success in a course, or required for employee training at the university? Understanding the impact of access and knowing who and where your audience is, those two components are truly your decision at the end of the day as a university. Where do you see the most risk in terms of exposure for not complying with this law? Same thing with captions. At the end of the day, no one has to do any of this; there are just potential consequences to not doing it. Then the third step, once you've decided yes, yes, yes, we are going to go: evaluate whether the content itself allows for, or needs, audio description.
So this step has nothing to do with the outcome or purpose of the video, but with whether the video is designed in a way that requires audio description. We'll talk a lot more about that throughout this presentation, but ultimately what we're trying to determine is whether critical visual information is properly verbalized throughout the video. Okay, so let's go into each one of these a little more deeply. First off, identifying the video type and purpose. We're going to ask these questions of ourselves, document the answers, and make a policy around them. Is this video part of essential academic content? Is this video public facing? Is this supplementary or optional content? You might make a policy at the university level based on the answers to these questions. You may say, I really want to be accessible and I want to keep improving the accessibility of my video strategy, but I just can't budget for it, I can't afford to do it in a scalable way, so I'm going to prioritize, say, essential academic content or public-facing video. Those are decisions that you have to make; 3Play and other vendors aren't making them for you. But ultimately, the guidance here, and what we've picked up from your peers, is that videos core to participation or compliance are more likely to benefit from audio description, while optional or decorative content, or short-lived content where you know the audience really well, might be lower priority or not necessary. So we need to define a policy for deciding whether to move forward to the next step, which is evaluating the impact on access. Say we've decided that, within our framework, we are going to proceed with attempting audio description. Before we do that, we're going to make sure we're evaluating the impact of the content itself.
So we're starting to look more at what is in the content. What is the outcome of the content? What is being delivered in it? Would a blind or low vision user miss essential meaning, instructions, or context without audio description? Could the lack of audio description impact academic performance, equal participation in the program, safety, legal requirements, or understanding? We're in the student accommodation space, so we're accustomed to people raising their hand for help, for access. But ultimately the point of the Title II updates is to remove that additional barrier, the social barrier of raising your hand. Someone might not be comfortable raising their hand to ask for help, and instead they let their academic performance slip. So by not providing audio description in key moments where someone might not have the ability or willingness to raise their hand, you might inadvertently be impacting their academic performance. It's really important for us to think about it not as a binary, they raised their hand or they did not, but as: how do we make sure that access is simply expected and continuously improved? There's no end point to good accessibility. It's a journey, a framework, a mental model for producing digital content in a way that's accessible to everyone who might be interacting with it. So ultimately, if the answers to the questions above are yes, this is going to make an impact, you should definitely start looking at whether audio description is required. And if it's really not that important to the impact on the experience, academic performance, safety, or legal requirements, you may decide that audio description is likely optional.
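The three steps of this framework could be sketched as simple gating logic; everything here is a hypothetical illustration of the decision flow Eric describes (the field names, the policy gates, and the result labels are invented, not 3Play's actual rubric):

```python
from dataclasses import dataclass

@dataclass
class Video:
    """Hypothetical fields capturing the three framework steps."""
    essential_academic: bool    # step 1: video type / purpose
    public_facing: bool         # step 1: video type / purpose
    high_impact: bool           # step 2: affects performance, safety, or legal
    unspoken_visual_info: bool  # step 3: critical visuals not verbalized

def needs_audio_description(v: Video) -> str:
    # Step 1: policy gate -- e.g. prioritize essential or public-facing video
    if not (v.essential_academic or v.public_facing):
        return "lower priority under policy"
    # Step 2: impact gate -- would a blind or low vision user be materially affected?
    if not v.high_impact:
        return "AD likely optional"
    # Step 3: content gate -- is the critical visual info already spoken aloud?
    if not v.unspoken_visual_info:
        return "AD likely redundant"
    return "AD recommended"
```

The point of writing it down this way is that each gate corresponds to a documented policy decision you can defend and revisit later.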
We're not here to endorse yes or no; we're saying there's a framework in which you can make that decision, and we're giving you the tools to do it rationally, in a way that lets you document it from a defensible position, so that you can trace your steps back and improve over time. So the last step in your framework: we've deemed that this piece of video content does in fact need to be properly described. But that doesn't mean it actually needs an extra narration track. This is where most people get hung up, based on my discussions with customers, prospects, and people across the audio description industry. How do I determine whether critical visual information is verbalized without watching every single video? That question is so broad and so complex: how am I supposed to sit there and watch 100,000 hours of video and choose which videos to audio describe, when I also don't have a budget for 100,000 hours of audio description? Before we get to how we might be able to support you there, let's ask those questions at the micro level, at the individual video level. Does the video contain information that's conveyed visually but not spoken out loud? This means charts, graphs, diagrams, demonstrations or experiments, and text on screen. If you're paying close attention, you might notice that I'm trying my best to verbalize every piece of on-screen text that's relevant to understanding this presentation. I like to call it drinking our own champagne; it's a positive spin on eating your own dog food. But it's critical for us to think like that and present in that style as much as possible when we have control over it. So if these things are present, charts, graphs, diagrams, text on screen, the guidance is that audio description is likely needed.
But as I've gone through this presentation, you've probably noticed that it may not need audio description, because I've covered nearly all, if not all, of the on-screen text that's relevant to understanding it. So how do we measure that, and how do we provide tools so that you don't have to go through every single video manually? Is there a way to do that? We'll talk about that, but I want to keep moving through this strategic framework. Before we move forward, I want to do a couple of samples, partly so I can take a break from speaking, but also to have an open discussion. Feel free to pop in the chat: what do you think? Does it need audio description or not? I'll play a little bit of each clip. And Noah, this is where I need you to read some answers aloud.
[00:14:27] Speaker 3: Here we have the Mount Lionel Shrew doing the one behavior that it does best, eating.
[00:14:32] Speaker 4: So this is the first ever footage of a Mount Lionel Shrew alive, and we captured it just last fall. My name's Vishal Subramanian. I'm a recent graduate from the College of Natural Research. Let's say some yeses. And I specialize in wildlife photography. Hi, my name is Conford.
[00:14:56] Speaker 3: And I'm in my third year here studying integrative biology, with an emphasis on ecology, evolution, and organismal learning. So here we can see the way that it's devouring this small cricket. They would eat basically any insect we would place in front of them. I guess they were.
[00:15:10] Speaker 2: All right, I'll pause there. Noah, what do we, you said a couple yeses.
[00:15:13] Speaker 1: Mostly yeses. We got a lot of yeses, a maybe, a definitely. Oh, and a couple nos, okay. So a little bit of a mix, but primarily yes.
[00:15:22] Speaker 2: Yeah, so this is a great example of how challenging it is to know exactly whether a video warrants audio description. At the end of the day, it's up to you as an organization to make the decision. I'll get to my point here, but first I want to show a couple more examples.
[00:15:47] Speaker 5: Okay, so something more cheerful, the U.S., states. The states are an important part of the framework, and the states do their own thing. The map shows the states, 50 states of the United States, by color, to represent, by different colors, to represent, they all have different laws.
[00:16:06] Speaker 2: All right, Noah, what did we get there?
[00:16:09] Speaker 1: Mostly nos. Let's see, they're explaining what is shown, and they are introducing themselves. So yeah, mostly nos on this one.
[00:16:19] Speaker 2: Yeah, and this one is a little bit tough, because it's just a clip from a broader video. But I think it actually illustrates a good example of a presenter who, even with a chart or a graph, like the map of the United States shown in this image, is doing what she can to describe everything. She's speaking with the mindset of, how do I make sure this content is described by my dialogue, as opposed to coming back later and layering a narration track on top of it. So when we get to how we score and evaluate audio description needs from a mathematical perspective, this is an example that would probably fall under "maybe you don't really need audio description here," because if we added audio description, a lot of redundancy would be introduced. So, one more example. I'm going to skip around in this video just a little bit, because there are some really good spots and some interesting questions.
[00:17:38] Speaker 6: Okay, welcome to Introductory Calculus. I will start with some practical information, and then I'll tell you a little bit about... I'm gonna skip ahead just a little bit, because he does a, there we go. So we have 16 lectures. The lecture notes are online. Online. These are the lecture notes. These were written by Pat Wilkins. She taught this course for a few years. The derivative of f, which is two x times minus cos x dx. This is minus x squared cos x, plus two times x cos x dx.
[00:18:49] Speaker 2: All right, I'm gonna pause. Noah, what do we got for responses on this one?
[00:18:53] Speaker 1: This one, it seems a little bit more mixed. Definitely getting some yeses, a few nos. Some people said they need more context, so it's a little bit more of a mixed bag here.
[00:19:05] Speaker 2: Yeah, so I'll point out a couple of things that I think this professor does particularly well. As he is writing, he is speaking everything out loud, so there's not really anything more to describe as he writes on the board. One interesting thing is that there's an intro to this course that is definitely not described. You could argue that, because the video title on the playback platform has those details, they are available to the user; you could even argue that the intro isn't necessary at all, since you already know you're clicking into a video about this intro-to-calculus class and derivatives. So there are a lot of questions around this one, I would say, but arguably this is a really good example of a professor lecturing in a style that works for all students. One of the things I'm trying to explain is that, at the end of the day, audio description and compliance are really about the level of risk and exposure that you're willing to take on. With the advanced vision models we now have, thanks to AI, we can evaluate a video, see how much text is on screen and how often important visual elements happen throughout its duration, compare that to the dialogue, and assign a number or percentage that says, hey, in this particular video, only 2% of the visuals are potentially not covered by the dialogue. You might argue that that 2% is super important and we should audio describe it anyway. Or you might argue, wow, the dialogue adequately covers 98% of the visuals; I might not prioritize that video.
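The percentage Eric describes can be pictured with a toy timing-overlap calculation. A real vision model compares what is shown against what the transcript says, not just time spans, so treat this purely as an illustration of the "share of visuals not covered by dialogue" idea; the events and spans are invented:

```python
def visual_coverage_gap(visual_events, dialogue_spans):
    """Fraction of visually-conveyed time not overlapped by dialogue.

    visual_events / dialogue_spans: lists of (start, end) times in seconds.
    A real system would also compare *content*, not just timing.
    """
    def covered(start, end):
        # An event counts as covered if any dialogue span overlaps it.
        return any(d_start < end and start < d_end
                   for d_start, d_end in dialogue_spans)

    total = sum(end - start for start, end in visual_events)
    gap = sum(end - start for start, end in visual_events
              if not covered(start, end))
    return gap / total if total else 0.0

# A chart shown at 10-20s with no dialogue overlap, and a fully
# narrated chart at 30-40s:
# visual_coverage_gap([(10, 20), (30, 40)], [(0, 9), (28, 45)])  -> 0.5
```

A score near 0 suggests the dialogue already does the describing; a higher score flags the video for human review or scripting.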
So we'll talk about that in a little bit, but that's where this challenge sits: what do we audio describe and what do we not? It's your decision, it's your policy; how do you defend that policy? And then, obviously, having the right tools that can help you do this at scale, so that you're not going through every single video by hand. So let's talk about what actually needs to be described. I mentioned that some descriptions would have been redundant in that Lainey Feingold example if we had put a narration track overhead; you wouldn't necessarily need to describe it. Ultimately, there are four key things we're always looking for in audio description. One is actions: what people are physically doing on screen, especially if it's not verbally described. A talking head, like what I'm doing right now, doesn't add information to the video. Whether I move left or right, that's not the type of action we're looking for; we're looking for visual action that changes the narrative of the video. Someone creeping up in the background behind you is probably important if it isn't captured in the audio at all. Two is characters: who is on screen, and the relevant expressions or body language when it matters to understanding. Once again, I'm not doing much; I'm not very dynamic in how I'm presenting. I'm not moving around my room, I'm staying within the frame of my screen, and any reaction I do have, I'm trying to verbalize anyway so that it isn't misconstrued or interpreted differently. Three is scene changes: transitions, location changes, significant setting details. In that classroom clip, the whiteboard moving from panel to panel is probably not that important; the whiteboard is huge, and the whole point is that you move across it as you need more space.
If that lecture were to move to a different environment, say outdoors, where they're showing things outside, that would constitute a scene change. But sitting in a classroom, moving across the whiteboard, the key is really what is being written on that whiteboard. Which brings us to the fourth item, on-screen text: anything written on the screen that conveys meaning and isn't already read aloud. Think about the purpose of the lecture: you're listening, and you're reading what's being written and put in front of you. But if that classroom becomes a dynamic environment, like the outdoors, where people are picking things up off the ground and showing them to the students, then you might want to consider narrating that if it's not covered in the dialogue. Okay, so creating audio description can be challenging too. This is a good example of a video where there isn't a lot of space where I'm not speaking; I'm pausing just to breathe, but there's not a lot of room for a narration to come in over the top. So before we even get to standard versus extended description, the first thing is that we build descriptions against a style guide aligned with national and international guidance, so that we have a consistent language for describing content. The key resource is the DCMP Description Key; if you want to look it up, go to dcmp.org, I believe, and they have a full write-up of everything here. At the highest level, you're looking for accurate descriptions: no subjective descriptors. It's not about whether that person looks good or not; it's that the person is there on screen. Descriptions also need to be consistent.
So if you refer to a character at time zero and again 30% into the video, that should be a consistent description of that individual, unless there's a fundamental change to their appearance that's important to the meaning. Next is prioritized: this is the most challenging aspect. How do you prioritize what makes sense? This is where human script writers excel, and where human judgment comes into play the most, especially when we have very limited time, as with a standard description. Then appropriate: as I mentioned, descriptions aren't meant to be subjective. And equal: once again, making sure we're creating a description and an experience that results in an equal experience for a blind or low vision user and a sighted user. So, quick tips when creating video, which I've peppered in throughout the presentation: always say what you show, especially in these lecture-type formats, where we have the privilege of being able to read everything aloud. There isn't a driving narrative force like in Hollywood, where silence is part of the story. And explain what it means: explain why a chart is there. You don't have to label every single number along the chart; it's more about generalizing the chart, making sure people understand that it exists and how to read it. And then finally, there's standard versus extended audio description. Standard audio description, which is explicitly written out in the WCAG Level AA guidelines, means that the description fits into the original length of the video. We just get to use the pauses, like right there, and those pauses might be really short. But ultimately, this is the easiest thing to publish from a technology perspective.
It's also how pretty much all media and entertainment publishes audio description. But in academic content, there's not a lot of room for pauses, and there's a lot more visual information that might be missed. Extended audio description gives a script writer, whether AI or human, more space, because you can actually pause the video. You put in a time code that says, we're going to pause the video here and let the description run longer, and then the video automatically resumes once the description is over. This allows for much more context, and lets you interrupt lecture-style content where there's not much space. One thing of note: if standard audio description is not sufficient, the recommended best practice is to do extended audio description, but extended audio description is written into WCAG 2.1 as a Level AAA standard. Cool. So here's a quick overview of which video platforms allow for audio description tracks or files. I'll walk through all of them, but basically there's a list of platforms, with columns for supporting standard audio description and supporting extended audio description. There's some nuance here, as some of these support audio description in different ways. Kaltura, natively and through Kaltura's Reach product, supports both standard and extended audio description. Same with Panopto. Mediasite supports standard audio description but does not currently have a mechanism for extended audio description short of uploading a new video. Canvas Studio does not appear to support either kind of audio description track without uploading a separate video. YuJa supports both standard and extended. Echo360 supports standard, but not extended. YouTube supports standard audio description; we have an article on that.
Noah made a quick video on how that works, but YouTube still does not support extended audio description. Vimeo is the same as YouTube. And Wistia supports both standard and extended audio description. A quick, nuanced point here is the publishing options: not all platforms publish audio description the same way. Some support the description audio as an MP3 file. You can upload an audio-described MP3 track to your hosted video platform; most of these platforms require manual uploads rather than API uploads, but with 3Play we document which ones are supported by which method. Alternatively, you can publish a second video, with the audio description track as the primary audio, to any platform, just like any other video. There's also a mechanism in some platforms, like Panopto, that expects just a WebVTT file; you upload that to the platform, and it generates the narration on its side, within the Panopto player itself. And 3Play also offers an Access Player, which wraps around third-party players as necessary, but it's not required for most use cases. Cool, okay. So to wrap up and summarize everything: the pillars of a sustainable audio description strategy really start with a policy position. What is the university's take on this? What is the university going to prioritize from a business perspective? From there, you can start your audit: okay, I have all this video content; based on that policy, what content might need audio description and what doesn't? Then prioritization: from the audit, we can prioritize. These sets of videos do need audio description; these need human-scripted description; these can be AI-scripted. Then you can create a budget and forecast.
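For the WebVTT route just mentioned, the deliverable is simply a file of timed text cues that the player voices during playback. The cue times and wording below are invented purely to show the shape of such a file:

```text
WEBVTT

00:00:05.000 --> 00:00:09.000
A map of the United States appears, each state shaded a different color.

00:01:12.000 --> 00:01:17.000
The professor writes each step of the derivative on the whiteboard as he speaks.
```

On a plain web page, a file like this can be attached to a video via `<track kind="descriptions" src="descriptions.vtt">`; hosted platforms such as Panopto instead ingest it through their own upload workflow.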
And then, maybe before you go to ongoing monitoring, you go back to your policy and refine it. Your budget at the end of step four might end up influencing the policy decision; this is really a continuous loop where you constantly evolve the strategy as the technology for audio description gets better and as the budget expands or shrinks. And then, ultimately, it's ongoing monitoring: making sure you have the tools in place to monitor your audio description performance from an execution perspective. That might feel overwhelming, so 3Play has designed a specific tool to support all of these steps, covering everything from "we just want to test a few things" to "our entire audio description strategy is going to run through the 3Play system." We can help audit your files, help prioritize them, help with your budgets and forecasting for executing audio description, and provide continuous monitoring as you produce more and more video each semester. So, real quick, what is Pulse? We've talked before about Pulse Level A, which is our captioning solution at scale for Title II compliance: our auto-caption quality analysis tool, budget simulations, and remediation through human upgrades for poor-performing auto captions. Pulse AA is what audio description is all about. Not only do you get access to all the Pulse captioning workflows, but you can add additional audit tools: does this video need audio description? Will a standard audio description track be sufficient for this video? And then you can stop there; you don't have to do anything.
But ultimately, we've integrated all of these auditing tools directly into our marketplace of human editors and script writers and additional AI script-writing tools for audio description, which unlocks additional insights to make sure that your strategy is as buttoned up as possible, and defensible in a worst-case scenario for your university. These things can be purchased bundled or a la carte, depending on where you are in your audio description journey. So ultimately, there are a couple of things that would be worth this audience testing, especially around "does this video need audio description?" My offer to you is: email me. Send me a video, send me two videos, send me five videos, and we'll run a quick analysis of your audio description needs, send it back to you, and have a conversation. It's really easy, no payment required, just test our AD models. So just send me an email and we can go from there. If you feel like you're ready to do a lot more, you can request access to Pulse via the QR code on the left side of the screen. For any other questions we don't get to in this presentation, you can always reach out to me at erik@3playmedia.com, that's E-R-I-K at 3, as in the numeral, P-L-A-Y-M-E-D-I-A dot com. And of course, get in touch with me on LinkedIn as well. But I'm going to hand it back to Noah. I know there are probably lots of comments or lots of questions, but I've turned an eye away from them to focus on getting through this content.
[00:36:38] Speaker 1: Yes, we definitely have gotten a lot of questions. So yeah, again, if we aren't able to get to it, just make sure to send Eric an email. So first one here, since this is federal law, do you have any guidance on whether faculty slash the university should be implementing this from the get-go for all courses versus a disability services office on an accommodation-based need?
[00:37:04] Speaker 2: Yeah, so this has always been required for accommodation-based needs. If someone has raised their hand and come to the DRC, the Disability Resource Center, and said, hey, I can't access this visual information, I need support, that has always been a requirement under the law, and there's plenty of case law to back that up. So this is really about getting out of that hand-raising motion and into: you should just create accessible content. And as I mentioned throughout the presentation, this is a journey. Getting started matters more than finishing. So you might just do an audit; you might just want to get an idea of what you need, what's possible, or what your scope is. And then you start chipping away. It could take a while, but ultimately the schools that don't do anything, that don't get started, are going to be in the least defensible position if something were to hit the fan with regard to legal ramifications. Our guidance is: just get started. You can do five videos; just do something so you can get a feel for it and start building a plan and a budget for faculty and the entire campus.
[00:38:25] Speaker 1: Awesome. All right, next question here: do videos with audio description still need closed captions? And if the original video had captions, will a new caption file need to be created?
[00:38:38] Speaker 2: So at least with 3Play, and I won't speak for every service out there, captions are a building block for good audio description, but they are distinct things. Closed captions are their own accommodation for video. They can be generated through automatic speech recognition and should be evaluated for quality, but a truly accessible caption file is going to be reviewed by humans and confirmed 99-plus percent accurate. Audio description is a completely separate accommodation: narration of the key visual elements happening in the video that are not covered by the dialogue. So they are two separate services.
[00:39:32] Speaker 1: Great. All right. When audio description is done by humans, what type of file is used? Is it an SRT file or something different?
[00:39:42] Speaker 2: So stripping the audio description process down to the actual jobs to be done that we discussed, like paying attention to the important visuals, the workflow is: you send us a video, and we deliver a narration track as an MP3 or MP4, or we can deliver just the script, as a Word doc or a VTT file. Depending on where you're publishing that audio description, we automatically produce the specific file type you need. But ultimately, audio description is meant to be delivered as an MP3 file that serves as a secondary audio track in the video.
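[Editor's note: for teams publishing the script option to the web themselves, one common pattern, which Eric does not describe here and whose file names are hypothetical, is to attach the WebVTT description script to an HTML5 player as a `descriptions` track:]

```html
<!-- lecture.mp4 and lecture-ad.vtt are hypothetical file names -->
<video controls src="lecture.mp4">
  <!-- kind="descriptions" marks this VTT file as an audio description
       script, which description-aware players or assistive technology
       can voice alongside the main audio -->
  <track kind="descriptions" src="lecture-ad.vtt" srclang="en"
         label="English audio description">
</video>
```

[In practice, player support for voicing `descriptions` tracks varies, which is one reason the pre-mixed MP3/MP4 narration track Eric mentions is the more reliable delivery path.]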
[00:40:34] Speaker 1: Awesome. All right. We've gotten several questions here around social media posts. What is your recommendation for social media or other short form video content? Does that type of content need to be described and how can it be published?
[00:40:53] Speaker 2: Yeah. So this is where I think you have the most control over the actual production of the video. Lecture capture and all the content that sits inside the walled garden of the LMS is a bit more out of your hands. But for social media, if you're publishing to a platform not named YouTube, you likely have to publish a single video and then potentially offer a secondary option, like a link to an audio-described version. Ultimately, though, the best practice is to build video that doesn't require an audio description narration in the first place. Don't make silent videos with a bunch of caption text over them and nothing to support it. I know we all love reading video content while we're silently on the train, but we do need to make sure the audio track also covers the visuals in a social post. And YouTube does support standard audio description as an option. So if your video can be sufficiently and adequately described through a standard audio description track, there are very few excuses not to publish that to YouTube at this point. They've made it broadly available for any account in good standing.
[00:42:26] Speaker 1: All right, awesome. Thanks for answering those questions. Unfortunately that's all we have time for today. Huge thanks to Eric for giving us an awesome presentation. And thank you to our audience for joining us and asking great questions. Thanks again and I hope everyone has a great rest of your day. Bye bye.