The Evolution and Importance of Quality Closed Captioning for the Deaf Community
Explore the history, challenges, and significance of closed captioning in media, emphasizing the need for quality to serve the deaf and hard-of-hearing audience.
Does closed captioning still serve deaf people? | Gary Robson at TEDxBozeman
Added on 09/30/2024

Transcript: David DeRuwe (www.davidderuwe.com)

Speaker 1: Hello, Bozeman. OK. What do you think of when you hear the word accessibility? Wheelchair ramps? Handicap stalls in a public restroom? Braille on an ATM, perhaps? I think of something less obvious, but with a dramatic impact on the lives of 360 million people around the world. Imagine a deaf person about 50 years ago who had a television set turned on and saw this. That experience would have been a little bit different for a hearing person. Yeah, that's right. Ain't it a little early for that? Ain't Nana coming tomorrow?

Speaker 2: Here is a bulletin from CBS News. President Kennedy shot today just as his motorcade left downtown Dallas.

Speaker 1: When President Kennedy was assassinated, television was all but inaccessible. Fifteen years later, the advent of closed captioning promised to open the world of television to deaf audiences. Now, thirty years after that first captioned broadcast, we're asking ourselves: does closed captioning still serve the deaf and hard-of-hearing audience for whom it was created? Captions are text on a video picture that allow people who can't hear what's going on to read it instead. Closed captions are hidden until you press the CC button on your remote control, so that people who don't need the captions don't have to see them.

The first phase of accessibility is development, and development can take a long time. Television broadcasts in the United States began in 1928. It was over 40 years before we had closed captioning for deaf people, and 60 years before we had descriptive video service for blind people. A lot of work had to be done. Caption decoders and encoders had to be invented, software tools developed, and training put together for captioners. After this first phase of development was complete, we had captions for a small audience on a few shows.

The second phase of accessibility, broadening the base, depended largely upon the law. The Americans with Disabilities Act, a landmark act in almost all respects, barely mentioned closed captioning. Later laws required that television sets contain decoding circuitry and eventually mandated the presence, although not the quality, of closed captioning on TV. Thanks to laws like that, today television captioning is ubiquitous. However, as law and technology have pushed captioning forward, the availability of captions has often been offset by a decline in quality and a lack of focus on what's important to the deaf community.

Just last month, the FCC unanimously approved new standards for captioning. These put us directly on the path to the third phase of accessibility: quality. Now, TED talks have to be prepared well in advance, so when this announcement was made, I had to rip out quite the lecture on why the FCC should be mandating quality. But that's okay. I don't mind the last-minute changes. It's for a good cause. I just wish they'd waited until after my talk so at least I could take credit for it.

Now, defining quality can be a sticky issue. The dictionary calls it a level of excellence. In captioning, the definition would be understandability: do the captions help someone who can't hear what's going on to see what's going on? A deaf friend of mine once said, "We're not asking for special treatment. All we want is what the rest of you take for granted."

Legislating quality is even more difficult. Industry experts have been arguing for decades over how to put a numeric score on closed caption quality. Arguing is what we do. But the one thing we agree on is that it begins with accuracy. NCRA, the organization that certifies real-time captioners, the ones who do captioning at 250 words a minute on live events, measures caption quality by errors and omissions: the fewer errors you make, the higher your quality. This, though, supposes that all words have equal importance. Do they? Take this well-known sentence from a Dr. Seuss classic: "I do not like green eggs and ham." If we were to drop the second word, it wouldn't change the meaning of the sentence at all. But if we drop the third word, the sentence means the exact opposite: "I do like green eggs and ham." Clearly, all words don't have equal importance. In real-time captioning, errors are inevitable and often funny.
The keyboard that real-time closed captioners use is chorded, meaning you press more than one key at a time, like playing a chord on a piano. A simple misfingering doesn't lead to a letter being wrong, but to a syllable, a word, or a phrase. This is what led the closed captioning on a network news broadcast to introduce a lawyer as a liar, a fun guy as fungi, and a golfer's nice putt as a nice butt. Post-production captioners have plenty of time in a studio to carefully craft the text, timing, and placement, but the only way to assure high-quality real-time closed captioning is to hire and train the best people for the job.

The FCC's definition of accuracy begins with matching the captions to the spoken dialogue, but it continues on to include background noises. This is an example of focusing on the needs of the deaf audience. Imagine the captions going away on a television show. How is a deaf viewer to know whether there's a technical glitch or whether there's just nobody speaking at the moment? A simple bracketed caption like [applause], [laughter], or [silence] can answer that question.

High-quality captions must also be well-synchronized. A significant delay between the video and the captions can make a program hard to understand, and I've recently measured delays of over 12 seconds on broadcast television. If you don't think that makes it hard for a deaf person to follow a program, try watching a TV show or a movie with the sound lagging 12 seconds behind the picture. These delays can also lead to a loss of caption data: if the captions are running 12 seconds behind, every time you go to a commercial, you're going to lose entire sentences.

And the fourth critical component of caption quality is placement. Nobody wants the captions covering the score on the ball game or the weatherman's map.

So what can we do to help? First, we can care. We wouldn't tolerate grainy pictures, sloppy camera work, poor audio quality, or bad lighting. Why should we tolerate bad captioning? The World Health Organization estimates 360 million people around the world have disabling hearing loss. 360 million people. That's equivalent to the entire population of the United States. Those people matter.

The next thing we can do is talk about it. Broadcasters don't get a lot of feedback from their deaf audiences. A lot of deaf people don't want to be seen as complainers, or they feel they should be grateful for whatever they're getting. But their captions are every bit as important as our sound.

Quality captioning serves everybody. Captions can help children learn to read. Captions aid in the fight against adult illiteracy. Captions help us follow a TV program in a noisy airport, bar, or gym. But captions are a lot more than that to deaf people. Captions can save lives when an emergency broadcast tells deaf people they need to evacuate their homes or what roads to avoid. And the same technology that's used for television closed captioning is used in educational and business settings to give deaf people equal access.

Captioning has been around now for over 30 years. It's time to retune, refocus, and remember who closed captioning was created for in the first place. Thank you. [Applause]
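Editor's note: as a rough illustration of the errors-and-omissions scoring the talk describes, here is a minimal Python sketch that scores a caption line against a reference transcript using word-level edit distance. The concrete formula, function names, and example values are assumptions for illustration only, not the NCRA's actual metric; the point it makes is the speaker's own, that dropping "do" and dropping "not" from the Dr. Seuss line earn the same numeric score even though only one of them reverses the meaning.

# Minimal sketch (assumed scoring scheme, not an official NCRA formula):
# score a caption line against a reference transcript by counting
# word-level errors and omissions, then converting to an accuracy ratio.

def word_errors(reference: list[str], captions: list[str]) -> int:
    """Word-level edit distance: substitutions + insertions + deletions."""
    m, n = len(reference), len(captions)
    # dp[i][j] = edits to turn the first i reference words into the first j caption words
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == captions[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion (omitted word)
                           dp[i][j - 1] + 1,         # insertion (extra word)
                           dp[i - 1][j - 1] + cost)  # substitution (wrong word)
    return dp[m][n]

def accuracy(reference: list[str], captions: list[str]) -> float:
    """Fewer errors -> higher score; every word counts equally."""
    return 1.0 - word_errors(reference, captions) / max(1, len(reference))

ref = "I do not like green eggs and ham".split()
print(accuracy(ref, "I not like green eggs and ham".split()))  # drops "do": 0.875
print(accuracy(ref, "I do like green eggs and ham".split()))   # drops "not": also 0.875,
                                                               # but the meaning flips

Both caption lines score 0.875, which is exactly the gap a purely numeric errors-per-word measure leaves: it cannot tell a harmless omission from one that reverses the sentence.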
