[DTMF signals from the beginning of a Disney
VHS tape] [if you remember those, you’re my kind of
awesome] As you’ve gone through life as a high quality
video connoisseur, you’ve no doubt seen the little toggle for turning captions, or
subtitling, on and off. Here on YouTube, it’s right there. Or there. Depends on how you're using it. By hitting that button, your device will start
displaying the text metadata embedded in the video stream that either YouTube has auto-generated,
or that super awesome content creators like myself have put in so that you don’t have
to simply watch a stream of unpunctuated words flow by like the world's largest run on sentence
that never ends and which often contains incorrect words because the captioning system isn’t
quite perfect and that while useful as a tool could really use…
(jokes sometimes end up here, too!) But did you know that even the lowly VHS cassette
often contains captions? And that a VCR as old as this dinosaur can
make them work? That last statement is perhaps a bit disingenuous,
but you’ll see what I mean shortly. Many VHS cassettes will have a little “CC”
mark, or sometimes this mark. This tells you that the video was encoded
with closed captioning. But, how? First, let’s go over a bit of terminology. The reason why these are called “closed
captions” is that they are not normally visible. While this terminology is a little antiquated
at this point, “captions” implies that there is actual text baked onto the video,
for everyone to see. Sometimes these are called open captions,
though that’s likely a retronym. The trouble with these sorts of captions [sound of glass breaking]. So, closed captioning hides the captions until
and unless they are requested by the viewer. Believe it or not, the first television program
in the US to be captioned at all was "The French Chef”, broadcast on PBS in 1971. Those were open captions, but nonetheless
that was the first time that a US television program was accessible to the deaf. This is really surprising to me, because you’d
think that one of the great benefits of television over radio was that non-hearing people could
get some benefits from it, and yet it took until 1971 for anyone to take advantage of
the visual component for this purpose. My how we take our senses for granted. Anyway, according to the National Captioning
Institute, the idea of what would become closed captioning in the US started with an
experiment by the National Bureau of Standards and ABC television. They were attempting to broadcast the precise
time using non-visible parts of the television signal. That idea never came to fruition, but it did
lead to the idea of encoding text for the purpose of closed captioning. In the early 1970’s, more testing was done
on various stations around the country, and in 1976 the FCC officially decided that Line
21 would be reserved for closed captioning. Eh, to explain what that means we need to go
a little more low tech. Woah. Suddenly I feel kind of… interlaced. Great. This again. Well, at least I can say, “Look ma! I’m on TV!” An old-fashioned CRT television like this
one is a pretty dumb device. There are no logic circuits in here, just
a bunch of frequency oscillators, deflection coils, and other analog goodies. All around the part of the picture that you
can see are blank sections that are used as triggers. I’ve made an earlier series on analog television
that you can find if you’d like more information, but simply put, the image on the screen is
made of one continuous line which is broken up and stacked into 525 lines, roughly 30 times every second. Other countries and continents used different
standards, but we’re talking about the Freedom Captioning System here so we’re sticking
with good ‘ol NTSC. The TV knows when to break the line up because
throughout the television signal, low-intensity pulses (called the horizontal blanking interval)
cause it to reset the drawing process, pulling the line back to the beginning, and starting
from there. But the TV also needs to know when to start
the next stack of 525, so another trigger, the vertical blanking interval, is used. This trigger is simply a group of lines that
have no information in them at all. To keep you from seeing these triggers, the
picture tube is overscanned so they appear outside the borders of the screen. Closed captioning simply uses one of the unused
and unseen lines to digitally encode text. This worked because a TV doesn’t need that
many blank lines to reset the vertical deflection--in reality it just needs a few. So sacrificing one of them wouldn’t hurt compatibility.
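If you like back-of-the-envelope numbers, here’s a quick Python sketch using the standard NTSC figures, just to show how fast those scan lines go by and how often line 21 (and therefore caption data) comes around. Treat it as a rough illustration rather than the spec.

```python
# Back-of-the-envelope NTSC numbers: how fast scan lines go by, and how
# often line 21 (and therefore caption data) comes around.
LINES_PER_FRAME = 525          # two interlaced fields of 262.5 lines each
FRAME_RATE_HZ = 30000 / 1001   # ~29.97 frames per second for color NTSC

line_rate_hz = LINES_PER_FRAME * FRAME_RATE_HZ    # ~15,734 lines per second
line_duration_us = 1_000_000 / line_rate_hz       # ~63.6 microseconds per line
field_rate_hz = 2 * FRAME_RATE_HZ                 # line 21 shows up once per field

print(f"One scan line lasts about {line_duration_us:.1f} microseconds")
print(f"Line rate: about {line_rate_hz:,.0f} lines per second")
print(f"Caption data arrives about {field_rate_hz:.1f} times per second")
```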
The first demonstrations of this experimental system occurred in 1972, and in the following year Washington DC’s public television station
WETA started doing test broadcasts with the data transmitted on line 21. In 1976 the FCC decided to standardize that,
and from then on Line 21 was reserved exclusively for closed captioning information. Developments to the captioning technology
continued to be made throughout the 1970’s and early 1980’s. Much of the work was done by engineers at
PBS, specifically at WGBH in Boston, with an important development being the editing
consoles used for inserting closed captioning data into pre-recorded material. By the end of the 1970’s, closed-captioning
advocates decided that they should form a nonprofit group, with the goal of creating
the standards for the system and encouraging larger broadcasters to get onboard. So, the National Captioning Institute was
formed in 1979, and in 1980 the first television series to be fully closed captioned was broadcast,
and this also appears to be the year that the first closed captioned home video releases
made it onto the market. This symbol here is actually a service mark
of the National Captioning Institute. I always thought it was just a generic mark
indicating closed captioning, but apparently it meant that the NCI is who actually created
the captions for the program. So that’s pretty neat. This mark, designed by Jack Foley of WGBH,
was placed in the public domain and thus became fairly standard as well. Now that we know a bit more about its history,
let’s take a closer look at how it works. But first, I gotta get out of this thing. There’s no place like 4K! There’s no place like 4K! There’s no place like 4K! If I mess with the vertical synchronization
of this television, the image will roll and the vertical blanking interval can be seen. On this line, which happens to be line 21,
you’ll see that occasionally, it bursts into a black and white scramble of dots. This is where the closed captioning is. We’re seeing this visually, as after all
the picture tube is still gonna react to it as if it were visual information. But don’t think that it’s being read literally
like a barcode--though it looks like one, what this really is, is a tiny portion of the signal going high-low-high-low. In other words, it’s a digital datastream.
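If you’re curious what’s actually in that datastream: as I understand the CEA-608 format, each field of line 21 carries just two bytes, and each byte is seven bits of text sent least-significant-bit first, followed by an odd parity bit. Here’s a little toy decoder in Python--the sample bits at the bottom are made up for illustration.

```python
def decode_line21_pair(bits):
    """Turn the 16 data bits from one field of line 21 into two characters.

    Each byte arrives least-significant-bit first: 7 data bits plus an odd
    parity bit. Values below 0x20 are really control codes (positioning,
    colors, and so on), which this toy decoder just reports as None.
    """
    assert len(bits) == 16
    chars = []
    for i in (0, 8):
        byte = bits[i:i + 8]
        if sum(byte) % 2 != 1:          # odd parity check failed
            chars.append(None)
            continue
        value = sum(bit << n for n, bit in enumerate(byte[:7]))
        chars.append(chr(value) if value >= 0x20 else None)
    return chars

# Made-up sample: the letters "H" and "I", LSB first, each with its parity bit.
print(decode_line21_pair([0,0,0,1,0,0,1,1,  1,0,0,1,0,0,1,0]))  # -> ['H', 'I']
```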
But how do we read that and turn it into captions? Great question! We need a CLOSED CAPTIONING DECODER! Luckily I have one right here. This is the National Captioning Institute’s very own TeleCaption 3000. Glad they used 3000 because it still sounds
futuristic! These were actually built by Sanyo, and they
weren’t cheap. At around $200, they cost more than some
basic televisions. There’s a great article from the Chicago
Tribune written at the time of this unit’s introduction which has lots of cool info,
and you can check it out in the description if you’d like to learn more. Although they were expensive, they did at
least include some nice features, as we’ll see shortly. This device is able to read the data from
line 21, and generate text on-screen. Remember, that data looks like a bar-code,
but it’s processing that as a datastream because… OK now that I think about it… all barcodes
kind of work that way, at least with the laser-type scanner thing with the spinning mirror--but,
anyway! It’s reading that data. Alright. That data encodes text and timing information,
and after determining what it has to do, the device draws blocks of text on top of the
screen, providing a black background to ensure it’s visible. Now, this may seem pretty simple, but this
device actually has a pretty sophisticated job to do. See, it can only look at one channel
at a time, so it needs to have its own television tuner built-in. Then, it has to spit out the same picture
to the TV, while altering it to contain the captioning. And it has to do that in real time with technology
of the 1970’s. Of course, this device isn’t nearly that
old--it was made in 1989 and by that time the electronics weren’t so crazy-advanced--but
the way the NCI designed this device is awesome. Take a look on the back and you’ll find
a coaxial input from the antenna, and another output to go to the TV. You’ve got your standard channel 3 or 4
selector (used in case your local market had a station broadcasting on either one of those
channels), but what’s really clever is this switched outlet on the back. Since they had to integrate a television tuner
and it would be outputting the altered signal as if it were, say, a VCR, they designed it
to give it a little bit of extra functionality. If you plug your TV into the outlet on the
back, then you can use the TeleCaption 3000 to give it a remote control! See, the TeleCaption has its own remote, and
when you turn it off, it kills the power to that outlet. So, if you leave your TV turned on, then you
could simply turn your TeleCaption on and your TV would automatically come to life. Better still, they added a volume control,
so if you left your TV on, set to channel 3, and with the volume on high, you could
control everything--the channel, the volume, the TV’s power, and of course the captions--with
this remote. I really appreciate this thinking on the part
of NCI, as even really old televisions with rotary knobs could now be operated completely
via remote control. Nice going, NCI! One slightly strange thing about this device
is the inclusion of adjustments for the background as well as the text. I wasn’t really sure what to expect from
these, but apparently they allow you to make the background lighter and the text darker. I’m not really sure why you’d want to
do this, but I suppose a grey background is a little less intrusive. And maybe some old sets would be affected
by the black box and skew the image geometry a bit. No matter the reason, options are welcome. Buying one of these decoders wasn’t your
only option, however. Some high-end VCRs were available with built-in
closed captioning decoder circuitry, but apparently this wasn’t well promoted and they never
sold well. In any case, this machine will work with a
standard VCR, and given that it creates a value-add in the form of a television upgrade,
I think it was among the best options. So let’s look a bit more at the captions
themselves. One of the great things about the closed captioning
protocol is that the text doesn’t just get plastered anywhere on the screen. Its location could be defined. This meant that the text could shift left
and right to give an indication of which character is talking. Which is pretty handy. And the text could even be placed up high so that it wouldn’t obscure other visual information on screen.
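To give a sense of how that positioning works: CEA-608 lays captions out on a grid of 15 rows by 32 columns inside a “safe” area inset from the edges of the picture. Here’s a small sketch of turning a grid cell into pixel coordinates--the frame size and the 10% inset are just assumptions for illustration; a real decoder follows the spec’s exact geometry.

```python
# Map a CEA-608 caption position (row 1-15, column 0-31) to pixel coordinates
# on a 480-line NTSC-ish frame. The 10% inset approximates the "safe" caption
# area -- real decoders follow the spec's exact geometry.
FRAME_W, FRAME_H = 640, 480
ROWS, COLS = 15, 32
INSET = 0.10  # assumed safe-area margin on each edge

def caption_cell_to_pixels(row, col):
    """Top-left pixel of a caption cell; row 1 is the top of the caption area."""
    area_w = FRAME_W * (1 - 2 * INSET)
    area_h = FRAME_H * (1 - 2 * INSET)
    x = FRAME_W * INSET + (col / COLS) * area_w
    y = FRAME_H * INSET + ((row - 1) / ROWS) * area_h
    return round(x), round(y)

# Dialogue from a character on the left, placed high so it doesn't cover
# something important at the bottom of the frame:
print(caption_cell_to_pixels(row=2, col=4))    # near the top, indented 4 columns
print(caption_cell_to_pixels(row=15, col=12))  # the usual bottom-centre-ish spot
```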
It’s pretty impressive that this encoding was thought out for a captioning system with roots in the 1970’s, and it’s also frustrating
that YouTube captions don’t allow you to do this. That’s right YouTube, this old VCR and this box are more sophisticated than your captioning system. Maybe work on that. Now, this may look like it’s limited to
all-caps, but in fact the captions do support mixed-case. It just tended to be used for descriptions
rather than dialogue. Not sure why, and there are exceptions to
that rule, but the default seems to be all-caps. Of course with today’s context, it seems
like everyone is shouting. One of the more surprising aspects of this
system was how robust it was. This is a pretty old VHS cassette with a recording
made from live TV, done with not the greatest incoming reception, and recorded on the low-quality
SLP speed. The captions still come through fine, and
with few errors. Even in areas far from the transmitters, the
closed captioning system would still work reliably. Of course, for many years, the likelihood
that a program would actually contain captioning would be pretty slim. It took a lot of effort from the NCI and other
organizations to persuade broadcasters to start using it. But over time, it began to spread. Every home video format began to support it. You can even find CEDs with closed captioning,
like this copy of Robin Hood bearing the NCI’s service mark. CEDs? You mean those things from RCA? That really failed format that was video on
a vinyl disc? Is he gonna do a video about it sometime? Yes. Another big development was the real-time
captioning console, which allowed for captioning of live and unscripted events. Stenographers were hired to furiously spit
out captions of newscasts, sporting events, and other unscripted programs. Of course, accuracy wasn’t exactly perfect,
and one of the downsides was that the captions would often appear with a fair delay, but
hey, it was better than nothing. But the real windfall came when
Congress finally got around to passing the Television Decoder Circuitry Act of 1990. This required all televisions of screen size
13 inches or greater to include the ability to display closed captioning. The law went into effect on July 1st, 1993, and from that point on the important accessibility feature became… more accessible. This TV was made in 1994, and sure enough--closed
captioning can be turned on via a few button presses on the remote. When DVD rolled along in 1996, suddenly closed
captioning got a whole lot more interesting. And for a number of reasons. First, DVD players natively support subtitles. Subtitles in a DVD work very differently from
the Line 21 closed captioning, as they are stored on the DVD as a series of images. When you toggle them on, the DVD player is
stamping semi-transparent bitmaps on top of the video, and because of this many languages
could be supported without needing to have support for their character sets. This also explains why the subtitles can look so different from DVD to DVD.
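Conceptually, what the player is doing there is just an alpha blend of a subpicture bitmap over every frame of video. Here’s a minimal NumPy sketch of that compositing step--the frame size, band position, and 50% transparency are placeholders, and real DVD subpictures are run-length-encoded images with a small color palette and per-region contrast values rather than full bitmaps.

```python
import numpy as np

def overlay_subpicture(frame, subpicture, alpha):
    """Blend a subtitle bitmap over a video frame.

    frame, subpicture: HxWx3 uint8 arrays; alpha: HxW floats in [0, 1],
    where 0 means "show the film" and 1 means "show the subtitle pixel".
    """
    a = alpha[..., np.newaxis]
    blended = frame * (1 - a) + subpicture * a
    return blended.astype(np.uint8)

# Toy example: a grey frame with a white "subtitle" band at half transparency.
frame = np.full((480, 720, 3), 80, dtype=np.uint8)
sub = np.zeros_like(frame)
alpha = np.zeros((480, 720))
sub[400:440, 100:620] = 255
alpha[400:440, 100:620] = 0.5
composited = overlay_subpicture(frame, sub, alpha)
```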
But--DVD still supported line 21 closed captioning. That’s pretty remarkable once you realize that in order to do that, the DVD player has to do its own line 21 encoding from metadata
on the disc. The video files on the disc don’t contain
anything outside the frame, so the DVD player itself has to generate the vertical blanking interval and shove that captioning data into line 21.
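As I understand the line 21 format, that regeneration boils down to taking each pair of caption bytes stored with the video on the disc, re-adding the odd parity bit, and clocking the bits out after line 21’s sine-wave clock run-in and start bits. Here’s a rough sketch of that framing--the run-in itself is omitted, and the “0, 0, 1” start-bit sequence is my reading of the spec, so treat this as illustrative.

```python
def with_odd_parity(value):
    """Take a 7-bit caption code and return its 8 bits, LSB first, with odd parity."""
    bits = [(value >> n) & 1 for n in range(7)]
    bits.append(1 - sum(bits) % 2)  # parity bit makes the count of 1s odd
    return bits

def line21_payload(char_a, char_b):
    """Bit sequence a player would modulate onto line 21 for one field.

    As I understand CEA-608: start bits 0, 0, 1, then two bytes, each
    7 data bits LSB-first plus odd parity. The sine-wave clock run-in
    that precedes all of this is left out of the sketch.
    """
    start_bits = [0, 0, 1]
    return start_bits + with_odd_parity(ord(char_a)) + with_odd_parity(ord(char_b))

print(line21_payload('H', 'I'))
```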
This just happened in the background, enabling those who relied on captions to use their existing equipment if they preferred. This was also useful in a scenario like a
hotel, where a central DVD player might be broadcasting the same program to many televisions. Then only those who wanted captions would
view them. Of course, now I needed to know if Blu-Ray
supported line 21 captions. It looks like it doesn’t given that over
HDMI there is no vertical blanking interval, and being an HD format what’s the point? BUT! The PlayStation 3 has a composite output,
so let’s see if anything happens. Oh. And just as another test--would the PS3 send
Line 21 captions out with DVDs? Looks like it does! Look at Sony being all backward compatible. Wait… Did you know that closed captioning systems
exist for movie theaters? I’ve only ever seen these systems in person
at theme park shows, but the system is very clever. At the rear of the theater is an LED dot matrix
display. This display is showing subtitles for the
film...backwards. The idea is that, if you need captions, the
theater will provide you with a rectangular piece of plexiglass on a stand, and by placing
this in front of you, you can aim it to reflect an image of that display into your eyes, either
below the screen or on top of it if you choose. And since the reflection reverses the image,
the previously backwards text appears forwards. The most common of these systems is the Rear
Window Captioning System, also created in part by WGBH Boston. Jeez, those folks are nice. Anyway, if you’ve ever looked behind you
and saw a display board with backwards subtitles--now you know what that’s for. Online streaming services seem to handle closed
captioning differently. One thing I’ve noticed with the way Netflix
handles it is that the captions can be forced on. I’m pretty sure Blu-Ray can do this, too--and maybe even DVDs. The practical upshot of doing this is that any on-screen text can be automatically translated into a different language if a different audio track is selected. That’s pretty handy. One thing that the hearing impaired can look
forward to is real-time captioning. I’m sure this is already in development,
but this would be a socially acceptable use of something like Google Glass. With speech-recognition technology as advanced
as it already is, I suspect it will not be long before a simple glasses-like device enables real-time captioning.
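Just to show how low the barrier has gotten, here’s a rough sketch using Python’s third-party SpeechRecognition package. It needs a microphone (plus the PyAudio package) and, in this form, Google’s free web recognizer, so it’s a toy in a terminal rather than anything you’d strap to your face.

```python
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)
    print("Listening... spoken phrases will be 'captioned' below.")
    while True:
        audio = recognizer.listen(source, phrase_time_limit=4)
        try:
            # Each recognized phrase becomes one line of captions in the terminal.
            print(recognizer.recognize_google(audio).upper())
        except (sr.UnknownValueError, sr.RequestError):
            pass  # couldn't make out that phrase; skip it, like a tired stenographer
```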
Oh wait, of course it’s already a thing and you can find some links in the description. But before I go--did you notice that there
were three modes on this closed captioning decoder? There’s TV, which does nothing. There’s Caption which does… captions. And then, there’s Text. What happens if I turn that mode on? Interesting. A black box covers almost all of the screen. What could that be for? That’s right James. You’ve waited so long. It’s coming. Soon, we’ll talk about Teletext, the much
more advanced sibling to closed captioning that much of Europe enjoyed, but which never
caught on here in the States. So, uh, stay tuned for that. Thanks for watching, and I hope you enjoyed
the video! I think that closed captions are now something
we very much take for granted, and the hearing impaired are very thankful that we do. Though it took an act of Congress, I’m glad
that we finally decided to give the hearing impaired the same access to television that the rest
of us have. As always, thank you to everyone who supports
the channel on Patreon, especially the fine folks that are scrolling up your screen. With the support of people just like you,
Technology Connections has gone from my hobby to, well, this! And I’m very thankful for that. If you would like to support the channel and
get perks like early video access, behind-the-scenes stuff, as well as other Patreon-exclusive
content, please check out my Patreon page. Thank you for your consideration, and I’ll
see you next time! ♫ insufferably smooth jazz ♫ I accidentally discovered something while
shooting B-roll for this video. Earlier I made a video on a little something
called Macrovision, an analog copy protection scheme. This deliberately screwed with the vertical
blanking interval to confuse VCRs, and indeed it created a few problems for my capturing
process this time. To help make the caption data easier to see,
I tried playing a Laserdisc which is known for not having Macrovision. And now, I think I know why. Laserdiscs are chock full of other barcode-like
things in the blanking interval, and judging by how they are jittering around, I’m thinking
this is what the player looks for when skipping tracks or even just looking at the timecode. And, assuming this is true, this would explain
why Macrovision never made it to Laserdisc. Laserdisc had already done its own screwing
around with the vertical blanking interval--in its case to create the control system for
the format--and trying to put Macrovision in there would probably break it. And while we’re on the subject of closed
captioning, I’ve finally turned on community subtitle submissions for the channel. If you speak another language and are willing
to help out by translating my videos into that second language, please check out the
link below with instructions on how to do so. This would be really helpful to me to give
exposure to the channel in other countries, as well as just being pretty neat. If you decide to do it, I promise you’ll
get your very own Unofficial Official Imaginary
Badge of Complete Awesomeness. You’ll know it’s true so long as you believe. Hey, for everyone who used these captions--I'm sorry they were on top of other captions so frequently. I REALLY REALLY wish that YouTube would let me move them around like.. you know... like we could in 1980. That'd be swell.
There's something about these videos that make me interested in things that, frankly, I didn't think were interesting in the slightest. So far, this is the ultimate example of that.
I'm not sure why DVDs used Closed Captioning when VobSub was much easier to use (images) and didn't need a CC decoder
You're trying to kidnap what I've rightfully stolen.
I've gotten some good examples from you and others about how to show Teletext--including one that uses a Raspberry Pi to create a composite output.
Here's my problem, though--since I've never used it, I would really appreciate any resources you know of that show it in detail and are perhaps interactive so that I could "demonstrate" it. If you have ideas on how to do this, please reply! This will be a first for me where I'm talking about a thing with no direct experience (unless you count Muse Hi-Vision LD, but that was mostly just trivia) and so I'm asking for your input.
Thanks!
Really looking forward to his teletext episode. Hope Digitiser and Bamboozle get a shoutout
I wonder if it's possible to rip the CC stream from a VHS transfer and what hardware is required for that. Never been able to do it with a USB capture device...
I guess teletext works the same.
When underscanning a picture you can see those lines at the top of the screen. However they always move, and not just when someone is speaking. I always wondered what they were used for
How do you do the shots when you are displayed in the television? Is it special effects or do you broadcast to yourself? How is it done to make it so convincing?