This Filmmaker IQ course is sponsored by Ambient:
Manufacturers of Precise Location Sound, Timecode & Sync equipment.
Hi, John Hess from FilmmakerIQ.com. In today's
course we’re going to explore the history and science of Timecode and how you can implement
timecode generators to make your production more efficient and
precise.
Even though the motion picture has been part of our cultural zeitgeist for more than a century and a quarter, frame-accurate timecode didn't actually materialize until the invention of videotape, and more precisely, videotape editing. Still, traditional celluloid film needed some
kind of reference system to identify where in a roll of film a shot originated from.
In the first 20 years of film this was simply done by the editor marking up the negative
and keeping copious notes. Then in 1919, Eastman Kodak began printing Edge Numbers as a latent
image on unprocessed film. The numbers would start at 0 and count up to 99,999 before starting all over again. After the negative was exposed and processed, these codes would appear
at every foot of 35mm film and every six inches of 16mm film. The post production process
would first make a copy of the original camera negative, creating a work print positive for the editor to use to create an edited cut. From the edited positive work print,
you could go back and match up the Edge numbers with the original camera negatives to make
a master negative which would be copied to create the positive release prints. Now as
filmmaking went further along, they added interpositives and internegatives, but you can see how edge numbers play into the organization of celluloid post production. And that was really pretty much good enough
for the celluloid folks because editing film was still a tactile experience and you could
easily see what frame you were working on. It wasn't until the 1990s that Kodak introduced something called KeyKode, which is essentially a machine-readable version of
edge numbers. But let’s back up to the 1950s and introduce
a major new competitor to film: Television. Now the moving picture was being broadcast
through the air like radio. It was no longer a piece of plastic you could feel and see
in your hands. Recording this signal to something called videotape would follow not too far behind, with Ampex releasing their first videotape recorder, the VRX-1000, later named the Mark IV, in 1956.
Videotape was a game changer in the television
industry. Before tape, everything had to be performed live in front of a camera or printed in an awkward way to 16mm film in what was called a kinescope. A kinescope is basically a film
camera pointed at a TV screen - it’s like recording a TV show with your phone - it’s
not the best way. Kinescopes never worked very well and the film was expensive to use.
Video Tape was much higher quality and could be recycled - this allowed television networks
to record their live shows to tape and play them back in different time zones without
really losing any quality. Problem was, all you could do was record to tape - if you wanted
to make an edit, you would have to splice the tape about where you thought your edit would be. Then you would apply a magnetic developer to the tape, which reveals the magnetic stripes
on which the analog signal is recorded… oh yeah and you needed a microscope to see
these stripes. You would locate where the frame actually ended - the vertical blanking period - on the outgoing tape and trim off any excess, and do the same on the incoming tape, cutting
off any excess before the frame started. If your splice was in the wrong location, the image would get scrambled and you'd have to start all over again.
That was the technique for making a single
cut - to organize how you wanted to make a large number of accurate cuts for an edit
- you had to get a little crafty, as the engineers at NBC did. They created a master 73-minute frame-accurate audio track called the Edit Sync Guide, using a series of beeps and voices counting down the time, which sounded like this.
This sync guide was applied to the cue track
of the original tape - then a kinescope was made from the tape with the cue track intact.
The edits were made to the 16mm kinescope in the old-fashioned film editing way, and the engineers used the audio sync guide on the 16mm film to create a list of edits. With this list they went back to the original tape, matched up with the sync guide on the cue track, and made each of the individual splices. Complicated, but it worked.
The first use of
the sync guide was the Fred Astaire Special in 1958. It worked so well that when Fred
Astaire told his friends about the accuracy of NBC's Edit Sync Guide, a flood of television producers came to NBC to get their video projects edited - which NBC gladly did, for a fee.
It was clear that frame-accurate timecode
was a necessity for video editing, but it would take a decade before a standard fell into place.
Throughout the years Ampex continued making
improvements to their video recorder, adding cue tracks to help identify the vertical blanking
period. In the early 60s Ampex released the Electronic Editor - a way to make tape splices
electronically. This has been mockingly referred to as “Punch and Pray” as the operator
had to press the record button exactly ½ second before the edit was to occur and of
course there was no way to fix a mistake. But improvements kept coming from Ampex and
in 1963 they introduced the Editec to streamline operation of the Electronic Editor. An editor/operator
would watch the tape and mark edit points. The Editec would record electronic pulses
on the Tape’s audio cue track which were fed to the Electronic Editor to make the edit.
Still there the issue clock drift plagued the system - which, of course, Ampex sold
a solution for in the Amtec Compensator. So if you kitted out your Ampex video recorder
you could have frame-accurate videotape editing by 1963. I mean, sure, it cost a small fortune: the original VRX-1000 went for $50,000 in 1956 dollars, and the Electronic Editor and Editec were add-ons that cost $13,600 together - in 2019 dollars that upgrade alone would be north of $100 grand. And even though it was frame accurate, it was still pretty crude.
So a timecode space race was on. And one of
the first competitors was a small company out of Santa Ana, California: the Electronic Engineering Company, or EECO. EECO had actually been building timing clocks for the Air Force
and NASA going back to the X-15 flights of the 1950s. They turned their timing prowess
to the television and broadcast industries and in 1967 introduced a proprietary time
code system called “On-Time” and the EECO-900 electronic editor that could read it. Other companies joined in the timecode race
like Central Dynamics Ltd and Datatron - each creating their own proprietary timecode formats.
By 1970 there was marketplace confusion as timecode tapes from one television studio
could not be read in another that used a different system. SMPTE, the overseeing television body,
assembled a panel to pick one universal timecode format. On October 6th, 1970 the SMPTE timecode
was proposed based largely on the EECO On-Time standard with some modifications. But the
best part was, this new SMPTE timecode could be applied to any brand of timecode generator on the market with only a minor update - what we would today call a firmware update. It would take
another five years, until April 2nd, 1975, for the SMPTE timecode standard to become official, approved by the American National Standards Institute - but by then timecode was already
an essential part of every television studio across the country.
The SMPTE timecode system, first made official in 1975, was so robust that it has hardly changed even up to today - in fact, because the system
is so stable, other venues like live theater and musical performances have incorporated
the same timecode system whenever they need precise synchronization. But in the video
world you will see essentially two kinds of timecode: LTC, or Longitudinal Timecode, and VITC, or Vertical Interval Timecode.
Let's start with LTC. LTC timecode is recorded
onto an audio track as a square wave. The bits are encoded using Differential Manchester
encoding, sometimes called biphase mark code. What this essentially means is that both the data and the clock are encoded into the same stream, which makes the signal self-clocking - a good thing for something like timecode. In this encoding, a 0 bit has a single transition, at the beginning of the period, whereas a 1 bit has two transitions, at the beginning and middle of the period.
LTC timecode consists of 80 bits: 26 bits
make up the numbers in the hours, minutes, seconds, and frames; 32 bits are available as user-definable bits; and 6 bits make up various flags and markers. The last 16 bits make up a sync word, which starts with 00, followed by twelve 1s - a run that can never happen in any other part of the timecode data stream - and ends with 01. The 00 and 01 of this sync word tell the player whether the timecode is playing forward or backward. This is the sound of LTC timecode - not that pretty, but it's robust.
Over time, the practice of Vertical Interval Timecode, or VITC, started working its way into tapes and linear editing. VITC takes the same
80-bit timecode as LTC, adds 10 more bits, and replaces the sync word with a checksum. This 90-bit timecode word is placed in the vertical blanking interval between each field
of video - with a special marker denoting either the upper or lower field in interlaced
video streams.
Now the advantages of VITC over LTC were that you didn't lose a track of audio to the timecode and you could get frame accuracy at very slow playback speeds - even in a freeze frame. LTC required that the tape be rolling so you could actually read the audio signal. But where VITC could be read on a still frame, it was prone to distortion from video issues, and VITC couldn't be read like LTC when the tape was rewinding at high speed.
When video started going digital, the VITC
analog signal was developed into Digital Vertical Interval Timecode (DVITC), and because component digital video didn't have vertical blanking intervals, engineers began hiding the timecode data in the video stream itself, formatted to "look like" video samples. We won't go further into those systems because they weren't widely used and, frankly, were quickly replaced with the file-based systems we have today, where the timecode is embedded in the file, either in the header or in some form of metadata.
Now, in terms of frame rate: SMPTE originally
only provided for a handful of frame rates: in NTSC land you had a choice between 23.976
and 29.97. In PAL land you had the even 24 or 25. Updates to the SMPTE timecode standard have added the ability to count subframes using up to 5 flag bits (2^5 = 32 combinations), which lets timecode count up to 32 times the base frame rates - so the maximum frame rate possible is 32 x 30, or 960 frames per second. Of course, these extra frame rates are not widely implemented. The highest I've seen are 59.94 and 60.
But let's go back and talk about those oddball
frame rates of NTSC. If we use 29.97 frames per second as our frame rate and count up
to 1 hour using our timecode generator, our timecode clock would be behind the real clock by 3.6 seconds. That may not seem like much, but hour after hour those 3.6 seconds add up. Within a day the clock would be off by more than a minute. After a month, the timecode clock would be off by 43 minutes - obviously that was not going to work for a TV station.
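You can check that math in a couple of lines - a quick sketch:

```python
# Counting 29.97 fps material as if it were an even 30 fps:
real_seconds = 3600                          # one hour on the wall clock
frames = real_seconds * 30000 / 1001         # ~107892 frames actually pass
timecode_seconds = frames / 30               # what the counter displays
print(real_seconds - timecode_seconds)       # ~3.6 seconds behind per hour
print((real_seconds - timecode_seconds) * 24 * 30 / 60)  # ~43 minutes/month
```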
Now, we could jam sync the timecode clock every day or so with a real-world clock, but this can introduce a lot of errors. Instead
what engineers developed was a counting system called drop frame, which corrects this issue.
Contrary to what it sounds like, no frames are actually dropped in drop frame timecode - think of it more like a reverse leap year. Instead
of adding a day every four years we just skip certain numbered frames. Let’s examine the
drift after one minute of video. A minute of true 30 fps video has 1800 frames. A minute
of 29.97 fps video has 1798.2 frames of video. We have to make up 1.8 frames after one minute
of video. So when we count frames of the 29.97, we go from 59 seconds and 29 frames straight to 1 minute and 2 frames - skipping 1 minute, zero frames and 1 minute, one frame. But now we're .2 frames ahead. Fast forward
to the next minute, and again we skip counting 2 minutes, zero frames and 2 minutes, one frame,
and go straight from 1:59:29 to 2:00:02. Now we’re .4 frames ahead. We continue doing
this, dropping 2 frames at the start of each minute, till we get to the tenth minute. Now we're ahead by nearly 2 frames (1.8, to be exact) - so for the tenth minute we don't skip counting any frames: we go 9:59:29 to 10:00:00 and then 10:00:01. So now 10 minutes of our 29.97 frames per second drop frame is exactly the same as 10 minutes in the real world.
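If you're curious what that counting rule looks like in practice, here's a minimal Python sketch of my own (not anything out of the standard's text) that turns an elapsed frame count into a drop-frame label by adding back the skipped numbers:

```python
FPS = 30                                  # nominal counting rate for 29.97
FRAMES_PER_DROP_MIN = 60 * FPS - 2        # 1798 labels in a "dropped" minute
FRAMES_PER_10MIN = 10 * 60 * FPS - 9 * 2  # 17982 real frames per 10 minutes

def to_drop_frame(n):
    """Label the n-th frame (0-based) of 29.97 fps material in drop frame."""
    blocks, rem = divmod(n, FRAMES_PER_10MIN)
    if rem < 60 * FPS:                    # first minute of each block: no drop
        skipped = 0
    else:                                 # two labels skipped per later minute
        skipped = 2 * (1 + (rem - 60 * FPS) // FRAMES_PER_DROP_MIN)
    label = n + 18 * blocks + skipped     # add back every skipped label
    hh, rest = divmod(label, 3600 * FPS)
    mm, rest = divmod(rest, 60 * FPS)
    ss, ff = divmod(rest, FPS)
    return f"{hh:02d}:{mm:02d}:{ss:02d};{ff:02d}"  # semicolons = drop frame

# One real hour at 29.97 fps is 3600 * 30000/1001 ~ 107892 frames:
print(to_drop_frame(107891))   # 00:59:59;29 - last frame of the hour
print(to_drop_frame(107892))   # 01:00:00;00 - right back on the clock
```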
Now, that does sound a bit confusing - but you will never ever need to manually count frames like that; that's the editing computer's
job, and you can often switch between drop frame and non-drop frame depending on the needs
of your project. Drop frame is designated with semicolons between the numbers or a period
before the frame count, whereas non-drop frame uses colons. You will really only need drop frame if you're formatting something to go on broadcast television. If you're working on a movie or a video for the web, you can work without drop frame. And remember, no frames are actually being dropped; it's only a matter of how we count the frames, so you can switch between the two, and the only thing that will change is the displayed total running time of your project.
Well, now that you're well versed in timecode
history and science, let's talk about how timecode actually works in a production environment.
Now if you’re shooting with just one camera and recording audio straight to the camera
- there’s obviously little need for you to involve timecode other than for organization
and notetaking. But if you send sound to an off-camera recorder or if you're using multiple
cameras - recording synchronized timecode on each device will make syncing in post much
simpler. Now it’s important to remember that Timecode
does not automatically sync the cameras; it syncs the clocks. It provides a highly accurate reference clock which allows multiple camera and sound sources to be synchronized in post-production.
But if you need subframe accuracy - to sync
the cameras' frames precisely, say for a dual-camera 3D rig - you'll need Generation Locking. Genlock in the standard-def days used to involve sending a black burst signal from a central
switcher. In the HD world genlock is accomplished with tri-level sync. But genlock is more for live switching situations, isn't always practical, and frankly isn't entirely necessary for typical scripted work.
What you want to do is what they do in spy
movies before the start of every mission: synchronize all your watches. Get your timecode clocks to all be in sync. Now, I've tried to do this manually by going into the menu and resetting the timecode on multiple devices at the same time, but I've never been able to do it with any degree of frame accuracy. Besides, many camera manufacturers don't
put that much stock into the long-term precision of their clocks - after all, they're focused
on the sensor and processing and creating a beautiful image. So camera clocks can drift
especially after they’ve been powered off say for a battery change. The much better solution is to feed a timecode
signal into cameras and devices from a dedicated timecode generator like the NanoLockit from
our sponsor Ambient. The NanoLockit is really quite a unique and
simple device to operate. To set it up I attach the NanoLockit to a computer and run Ambient’s
Lockit Toolbox. I sync the NanoLockit to the computer’s clock and select the timecode
format: Since I’m in the NTSC world that’s really a choice between 23.98 or 29.97. In
PAL that’s a choice between 25 and 24. Higher frame rates up true 60 are available as well
and the 29.97 and 59.94 have drop frame varieties available. Since I film in 24 - I would select
the 23.98. Once everything is set in the Lockit Toolbox,
I can disconnect the NanoLockit and it’s ready to go. This little box will generate
a timecode signal all day and night until it runs out of battery - about 25 hours on a 2-hour charge. It charges from a basic 5V USB port, and you can conceivably just run it from an external battery source so it can run indefinitely.
If I have a second NanoLockit or even this
Ambient Lockit slate, all I have to do to jam sync them together is to take the NanoLockit
that I set up earlier and press and hold the green button. Immediately all other devices sync up frame rate and time.
Now, if I'm feeding a device with a timecode
input like this Zoom F8n, I just connect the NanoLockit to the recorder using the appropriate
adapter, set the timecode either to External or Internal Free Run, and jam the internal
clock to the external timecode. You would do something similar for a professional camera
that has timecode in. But for cameras that don't have a timecode input, I would use either the Lemo to XLR adapter or the Lemo to mini jack adapter and send the signal as an audio source to the camera. Make sure the audio levels are strong but not peaking.
Then when we're in post, we have to decode
that LTC audio timecode. Right now only Avid and DaVinci Resolve can read LTC audio timecode
- there are a few standalone products out there as well. Since I’m a Premiere user,
I can use the free version of DaVinci Resolve, bring in my clips, select Update timecode
from audio LTC and then export these clips back out for editing, or just manually transfer
each starting timecode back into Premiere.
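If you're curious what those tools are doing under the hood, here's a rough Python sketch of the core of an LTC audio decoder - my own illustration, not Resolve's or Avid's actual code. It recovers bits by timing the gaps between zero crossings (one long gap is a 0, two short gaps are a 1), then hunts for the sync word; a real decoder would also track the bit period adaptively and handle reversed playback:

```python
SYNC_WORD = [0, 0] + [1] * 12 + [0, 1]   # the last 16 bits of every LTC frame

def bits_from_samples(samples, samples_per_bit):
    """Recover LTC bits from mono audio samples by zero-crossing timing."""
    crossings = [i for i in range(1, len(samples))
                 if (samples[i - 1] < 0) != (samples[i] < 0)]
    bits, i = [], 0
    while i < len(crossings) - 1:
        gap = crossings[i + 1] - crossings[i]
        if gap > 0.75 * samples_per_bit:
            bits.append(0)               # one full-period gap -> 0 bit
            i += 1
        else:
            bits.append(1)               # two half-period gaps -> 1 bit
            i += 2
    return bits

def ltc_frames(bits):
    """Slice the bit stream into 80-bit frames that end in the sync word."""
    i = 0
    while i + 80 <= len(bits):
        if bits[i + 64:i + 80] == SYNC_WORD:
            yield bits[i:i + 80]
            i += 80
        else:
            i += 1                       # not aligned yet: slide forward a bit
```

From each 80-bit frame you would then read the hour, minute, second, and frame digits back out of their bit positions, and that starting timecode is what gets stamped onto the clip.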
Now in Premiere, I can sort my files by timecode and merge video and audio files using the timecode for sync. This works great for a dual
audio system, like shooting a short film. On the set of our latest Filmmaker IQ short, Outta Sync, I ran the NanoLockit into the Canon C200's audio channel while monitoring
and recording sound on my F8N - the digital slate also worked well as a visual reference
for timecode. You can also use it on a multicam documentary shoot, feeding the same timecode to two different cameras using two NanoLockit devices. But perhaps my favorite recent use was in recording a classical concert. I had a pair of mics which I positioned on
stage and plugged into my F8n. Then I positioned my cameras in the tech booth a couple hundred
feet away and took an audio feed from the sound board for any live mics they might be using. I jammed my recorder to a NanoLockit, unplugged it, and then brought the NanoLockit to the booth to feed the LTC audio into my camera. Now, before, I always had a beast of a time
getting the stage audio and the camera audio to sync. If I relied on nat sound from the
camera, being so far from the orchestra it would always be behind because sound takes
a lot longer to travel that distance than light. This was especially difficult when I was trying to mix my stage audio with the board feed, because the board feed travels at the speed of light, so it's always ahead of the nat sound. Nat sound is a terrible way to try to sync.
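To put a number on that lag, a quick back-of-the-envelope sketch - assuming the booth is about 200 feet from the stage:

```python
distance_ft = 200                 # camera position to orchestra, roughly
speed_of_sound = 1125             # feet per second in air, approximately
delay = distance_ft / speed_of_sound      # ~0.18 seconds of acoustic lag
print(delay * 29.97)              # ~5.3 frames late at 29.97 fps
```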
Well, with the NanoLockit, I have identical timecode for both the camera and the recorder on the stage, so there's no question about how to sync up the feeds.
From its humble origins in linear tape editing, timecode has stayed strong as an important part of keeping our digital motion picture world together. I hope you've gained some perspective on how timecode works and will consider how you might implement timecode generators like the Ambient NanoLockit to help make your production
easier. It made our short film Outta Sync a whole lot simpler - which I hope you will
check out. Until next time my friends, sync up your clocks and make something great. I’m
John Hess and I’ll see you at Filmmaker IQ.com