
What is Closed Captioning?

Matthew Patel
Posted in Zoom · 15 Apr, 2020

You've probably heard of closed captioning at some point, or seen the familiar "CC" icon on TV and web broadcasts. But what exactly is closed captioning? You might think of it as simply subtitles, but it's actually more than that. In this article, we will explore the details of what closed captioning actually is, how it works, and how it differs from subtitles.

In the United States and Canada, closed captioning is a method of presenting sound information to viewers who are deaf or hard of hearing. This is the main difference between captions and subtitles: captions convey all relevant sound information, while subtitles transcribe only the spoken dialogue, primarily to assist viewers who cannot understand or clearly hear the language being spoken.

For example, in addition to the spoken dialogue, captions also include other sounds, such as birds singing, a dog barking, and other ambient noise, as well as sound events like a door being shut or glass breaking. These indicators (known as descriptive text) help a deaf or hard-of-hearing viewer understand the full context of the scene.

What Does Closed Captioning Mean?

The word "closed" in closed captioning indicates that the captions are transmitted separately from the video and can be toggled on or off by the viewer. There are also open captions, which are burned directly into the video image and are always displayed.

Closed captioning is typically used for over-the-air, digital, and online broadcasts, whereas open captioning is used for offline video and legacy media such as analog tape. In fact, the first use of captioning in the United States was "The French Chef" broadcast on PBS in 1972, which was open captioned.

How Does Closed Captioning Work?

The exact method of implementing closed captioning depends on the type of media. For over-the-air television broadcasts, closed captions are part of the broadcast transmission itself. Specifically, they are carried on line 21 of a standard NTSC broadcast signal, a line the FCC reserved for captioning in 1976.

A standard NTSC television signal consists of 525 scan lines per frame. Some of these lines fall outside the visible picture, in what is known as the blanking interval, and carry control information such as vertical and horizontal synchronization. Without going into too much detail on how CRT televisions work, line 21 is one of these non-visible lines, and it is the one that carries the closed captioning data. The text is encoded onto line 21 as a low-bandwidth data waveform, which either the television itself or an external decoder translates into on-screen text.
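To make this concrete, the line-21 data format (standardized as CEA-608) sends two bytes per video field, each holding a 7-bit character plus an odd-parity bit that lets the decoder reject corrupted bytes. Here is a minimal sketch of that parity scheme in Python; it is an illustration of the byte format only, not a full decoder:

```python
# Sketch of CEA-608 line-21 byte handling: each byte is a 7-bit
# character with an odd-parity bit in the most significant bit.

def add_odd_parity(char7):
    """Set bit 7 so the total count of 1-bits in the byte is odd."""
    ones = bin(char7 & 0x7F).count("1")
    parity_bit = 0 if ones % 2 == 1 else 1
    return (char7 & 0x7F) | (parity_bit << 7)

def decode_byte(raw):
    """Return the character if parity checks out, else None.

    A real decoder drops or blanks bytes that fail the parity check,
    which is why damaged analog captions show missing characters.
    """
    if bin(raw & 0xFF).count("1") % 2 != 1:
        return None  # parity error
    return chr(raw & 0x7F)

encoded = [add_odd_parity(ord(c)) for c in "CC"]
decoded = "".join(decode_byte(b) or "?" for b in encoded)
```

The parity check is the reason a weak analog signal produced garbled, partially missing captions rather than wrong characters: failed bytes were simply discarded.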

For modern HDTV signals, the ATSC standard for closed captioning is CEA-708. Unlike analog NTSC broadcasts, these digital broadcasts carry captions as data packets within the video stream, much like online video. CEA-708 supports far richer presentation than its analog predecessor, including multiple caption windows, fonts, and colors, and it also carries legacy CEA-608 caption data for backward compatibility. For more information on the technical details of CEA-708, see the Captions and Subtitles documentation.

What About Live Broadcasts?

Now, you may be wondering about live television and broadcasts. Obviously, a live event does not have pre-made captions that can be transmitted. This is where things get interesting. You might be surprised to learn that a live broadcast is actually captioned live, in real time, by an actual person. Using a specialized chorded keyboard called a stenotype machine, a trained captioner listens to the live broadcast and captions it on the fly. If you think that takes incredible speed and accuracy, you're absolutely right: a captioning stenographer needs to sustain 225 words per minute to be certified.

While automated speech-to-text continues to improve, it is still too slow and inaccurate for live captioning.

Online Video Captioning

Much like the ATSC standard mentioned above, online video captioning can take a variety of forms. Caption formats differ depending on the media's file type and on how the video is transmitted or streamed. For example, container formats such as MPEG-4 Part 14 (MP4) support caption tracks embedded directly in the file, which viewers can toggle from the caption menu of whatever player they use. Captions can also be delivered as separate "sidecar" files, such as SRT or WebVTT, alongside the video.
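Those sidecar files are refreshingly simple: each caption is just an index, a timestamp range, and the text. Here is a minimal sketch of generating one cue in the widely supported SRT format (illustrative only, following the de facto SRT layout):

```python
# Build a single SRT caption cue: index, "start --> end" timestamps
# in HH:MM:SS,mmm form, then the caption text.

def srt_timestamp(ms):
    hours, rem = divmod(ms, 3_600_000)
    minutes, rem = divmod(rem, 60_000)
    seconds, millis = divmod(rem, 1_000)
    return f"{hours:02}:{minutes:02}:{seconds:02},{millis:03}"

def srt_cue(index, start_ms, end_ms, text):
    return (f"{index}\n"
            f"{srt_timestamp(start_ms)} --> {srt_timestamp(end_ms)}\n"
            f"{text}\n")

# A descriptive-text cue, shown from 1.0s to 3.5s
cue = srt_cue(1, 1_000, 3_500, "[door slams]")
```

Because the format is plain text, caption files like this are easy to edit by hand and are accepted by most players and hosting platforms.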

The popular video hosting site YouTube includes support within the YouTube Studio platform for manually adding captions and alternate-language subtitles to videos. Creators (and other users) can create and upload separate caption and subtitle files, which viewers can select and display individually. Additionally, YouTube can display automatically generated subtitles, though, as with the real-time captioning mentioned above, automatic recognition is still inaccurate, and the results are often comically bad.


Regardless of the type of broadcast, adding captions to your video is a great way to increase inclusivity for the deaf and hard-of-hearing. And as we have highlighted today, there's a lot more going on behind the scenes to make this awesome technology work.

For professional captions and subtitles, you can't go wrong with GoTranscript. With 15 years of experience, clients like Netflix and the BBC, and a global team of over 20,000 expert transcribers, translators, and captioners, GoTranscript guarantees high-quality captions and subtitles with fast turnaround at competitive prices.