The title of this video
won’t be exactly right. It only updates every few minutes or so,
and besides, YouTube doesn’t update its view counts in
real time anyway, so don’t bother refreshing
and refreshing and refreshing: you won’t actually be able to see it
ticking up second by second. If it’s 100% spot-on, that’s a miracle; if it’s close, the code I’ve written is still working. But at some point, that code will break, and the title of this video will slip
more and more out of touch with reality. This is the story of how I made
that changing title work, why it used to be a lot easier to
make things like that work, and how all this ties in
to the White Cliffs of Dover and the end of the universe. Now, I’m not going to talk about
the exact details of my code here; it’s not the important part, because code is just incredibly dull on camera. But, big-picture, if you’re automating
a job like that, there are two main approaches you could take. First: you could write something that pretends
to be a human. Not in some Blade Runner replicant way, but you could run a system that loads up the
YouTube video page, reads the number of views,
and then goes to the video manager, changes the title, and hits save. That’s actually not too tricky to do: “screen-scrapers” have been built to do
things like that for decades. And that’s a fairly innocent use of a screen-scraper. But if you can write code to change a title, then you can also write code that
signs up for loads of new accounts. Or spams people. Or sends out messages
trying to steal personal information. That’s why you see those
“I’m not a robot” checkboxes. Which aren’t impossible to defeat,
by any means, but they make screen-scraping like that
much, much, much, much more difficult. Plus, it’s an approach that’ll break
quickly: every time one of the pages that’s being
scraped gets redesigned, you’ll have to rewrite your code.
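Just to make that concrete: a scraper for a job like mine might look something like this rough sketch, driving a real browser with Selenium. The selectors here are invented for illustration; YouTube’s real pages are far more complicated, and they change, which is exactly the problem.

```python
# A sketch of the screen-scraping approach. Selenium is real; the CSS
# selectors below are made up for illustration, because YouTube's actual
# page structure is different (and keeps changing).
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()

# Load the public watch page and read the view count off the screen.
driver.get("https://www.youtube.com/watch?v=VIDEO_ID")
views = driver.find_element(By.CSS_SELECTOR, ".view-count").text

# Then open the video manager, type in a new title, and hit save.
# Every one of these selectors breaks the moment the page is redesigned.
driver.get("https://studio.youtube.com/video/VIDEO_ID/edit")
title_box = driver.find_element(By.CSS_SELECTOR, "#title-input")  # hypothetical
title_box.clear()
title_box.send_keys(f"This video has {views}")
driver.find_element(By.CSS_SELECTOR, "#save-button").click()      # hypothetical
```

But for a long time, the people who build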
web services like YouTube have recognised that there are legitimate reasons for letting
code interact with their systems. Like pulling analytics data into a spreadsheet, or letting captioning services add subtitles
to videos quickly and automatically. Or you might want to hook
multiple web services together: to automatically tweet when
a new video goes up, or ask your voice assistant
to search for a playlist. That can and should all be done with code. So behind the scenes, nearly every major
web service has an "API", an Application Programming Interface. It’s a way for bits of code to pass data
and instructions back and forth between services safely without
having to deal with all the complicated visual stuff
that humans need. So when I want my code to change a video title,
I don’t ask it to open up a web browser. Instead, it sends a single request to YouTube: here’s the video ID,
here’s the stuff to change, here are my credentials to prove that
I’m allowed to do that. Bundle that all up, send it over. And YouTube sends back a single answer: hopefully it’s a response code of 200,
which means “OK”, with confirmation of what the video’s data
has changed to. But if there’s some problem with that request,
it’ll send back some other status number, and an error message about what went wrong. I can write code to handle those errors,
or for something simple like this, I can just have it fail and log it somewhere
so I can deal with it later. No typing, no clicking on things;
no pretending to be a human. One request out, one reply back. At least, that’s how it’s meant to work.
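With the API, that whole exchange is just a couple of requests. Here’s a minimal sketch using the YouTube Data API, assuming I’ve already got an OAuth access token that’s allowed to edit the video; the token, the video ID and the exact title format are all placeholders, not my real code.

```python
import requests

API = "https://www.googleapis.com/youtube/v3/videos"
HEADERS = {"Authorization": "Bearer YOUR_ACCESS_TOKEN"}  # placeholder credentials
VIDEO_ID = "YOUR_VIDEO_ID"                               # placeholder video ID

# One request out: fetch the video's current statistics and snippet.
resp = requests.get(API, headers=HEADERS,
                    params={"part": "snippet,statistics", "id": VIDEO_ID})
resp.raise_for_status()
video = resp.json()["items"][0]
views = int(video["statistics"]["viewCount"])

# Change the title, keeping the rest of the snippet intact
# (the API wants fields like categoryId sent back with the update).
snippet = video["snippet"]
snippet["title"] = f"This video has {views:,} views"

# One reply back: a 200 means "OK", anything else is an error.
update = requests.put(API, headers=HEADERS, params={"part": "snippet"},
                      json={"id": VIDEO_ID, "snippet": snippet})

if update.status_code == 200:
    print("Title is now:", update.json()["snippet"]["title"])
else:  # some other status number: log it, deal with it later
    print("Update failed:", update.status_code, update.text)
```

This idea, that web services could interact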
with each other through code, was amazing when it first became popular. It became "Web 2.0", a buzzword that is now more than
fifteen years old. And honestly, the Web 2.0 years were some
of the most optimistic times on the web. All these new startups were making sure they
could interchange data with each other, so maybe in the future, you could see your friends’ Facebook statuses
on your fridge! Or lights could flash to warn you if
your bus was arriving early! The web would be all about data, and we
could make all sorts of things of our own to understand it and control it
and shape it. It was going to be the Age of Mashups:
take data, and do interesting things with it. I built so many things
in the days of Web 2.0. So many little web toys that took data from
one place and showed it in weird ways. And the most ridiculous, over-the-top tool
that I loved to build with was called Yahoo Pipes. You didn’t need to write any code to make a
mashup with that. You could just click and drag boxes on a screen
to make a flow-chart, and it would all be done for you; Yahoo would run it all on their servers,
for free. I made a thing called Star Wars Weather. It was a really simple web page:
it’d show you the weather forecast by comparing it to a planet from Star Wars. I had a million people visit that site in
one day at its peak; a few people genuinely used it to get the
weather every morning, and I got lovely emails from them. And all the processing was done in the background
through Yahoo Pipes, so I didn’t have to pay for some expensive server
or pay for access to the weather data. There didn’t seem to be any limit, either. Yahoo just handled it, because this was Web 2.0 and that was the
right thing to do, and, y’know, they’d figure out how to
make money later. Google Maps. That was free to build on, too. World-class maps just to play with.
I built a terrible racing game on top of it, put it on my own site, loads of people
played it, and I didn’t pay a penny. None of those free services exist now. And in hindsight, it was never sustainable. See, when Twitter launched, it wasn’t pitched as just an app,
or a website; Twitter was a platform, a messaging service. You could use their website to
read and send tweets, sure, or you could write code that used the API, that looked at tweets, that reacted to them,
or even wrote tweets of its own. It was so quick, so open that anyone with a
little coding experience could make stuff easily. Everything you could do on the Twitter website,
or later, on the app, everything was available in the API for your
code to play with. The first sign that something was wrong, for me at least, was the Red Scare Bot. It appeared in August 2009 with the face of
Joseph McCarthy as its avatar, and it watched the entire Twitter timeline, everything posted by everyone -- because Twitter was small enough back then
that you could do that. And if anyone mentioned communism or socialism, it would quote-tweet them with… not even
really a joke, just a comment, just something that said
“hey, pay attention to me”! Hardly anyone followed it, because,
yeah, it was really annoying. Over the six years before it either broke
or was shut down, that bot tweeted more than two million times, two million utterly useless things added to
Twitter’s database, two million times that someone was
probably slightly confused or annoyed. Somehow it survived, even as Twitter’s rules
changed to explicitly ban “look for a word and reply to it” bots, even as countless other irritants
got shut down. “Search for every use of a word on Twitter
and reply to it” seems a lot more sinister these days, in a world where social media
shapes public opinion. We didn’t really know
what we were playing with. I built some Twitter bots myself, although they weren’t quite as annoying
as that. My best one tweeted any time someone edited
Wikipedia from within the Houses of Parliament, although I handed that political disaster
over to someone else only a couple of days later
when the press started calling, 'cos that was far too much hassle. And those were just the harmless ones. To be clear, there are still people out there making really good and interesting
and fun Twitter bots: but “bot” has a whole new meaning now. Because it turns out you can’t open up data
access just to the good guys. I remember being really impressed
with Facebook’s API: it was brilliant. I could pull my data and
all my friends’ data, and do weird, interesting things with it! We all know how that turned out. It’s amazing, in hindsight, just how
naively open everything was back then. APIs were meant to create this whole world
of collaboration, and they did -- at the cost of creating a whole lot of abuse. So over the years, the APIs got replaced with
simpler and more locked-down versions, or were shut down entirely.
Or, in the case of Twitter, their website and app gained features like
polls and group DMs, which were just… never added to the API. You want to use those features? You’re going to have to go to
the official app or the official site. Because after all, if people can access the
platform however they want, with code… how on earth are Twitter
going to show them adverts? Nowadays, if you want to build something that
connects to any major site, there will be an approval process, so they can check in on what you’re doing. And that connection could get
shut down at any time. The Google Maps games I made; the Twitter
toys; anything I ever built on Yahoo Pipes; they’re all broken now and they can never
come back. Pipes must have cost Yahoo so much money. Even if the service you’re building on survives, there’s still upkeep associated with making
anything that connects to an API. Sooner or later, the server you’re hosting
your code on will fail, or there’ll be a security patch
that you’ll have to install, or technology will move on enough that you’ll
need to update and rewrite the whole thing, and you will have to ask yourself
the question: is it actually worth it? Computer history museums are filled with the
software and hardware that I grew up with and that I’m nostalgic for,
because they all ran on their own; they didn’t need any ongoing support from
an external company. But if what you’re making relies on some
other company’s service, then… archiving becomes very, very difficult. So for the time being, every few minutes,
my code is going out to YouTube, asking how many views this video has, and
then asking to update the title.
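And behind that there’s nothing more than a loop. Something like this, in spirit; the function body and the five-minute interval here are stand-ins, not my actual code:

```python
import time

def update_title():
    """Fetch the view count and rename the video: the two API
    requests sketched earlier would go here."""
    ...

while True:
    try:
        update_title()
    except Exception as err:
        # For something this simple: just log the failure and move on.
        print("Title update failed:", err)
    time.sleep(300)  # every few minutes, give or take
```

Maybe it’s still working as you watch this. But eventually, it will break. Eventually, so will YouTube. So will everything. Entropy, the steady decline into disorder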
that’s a fundamental part of the universe… …entropy will get us all in the end. And that’s why I chose to film this here. The White Cliffs of Dover
are a symbol of Britain; they’re this imposing barrier,
but they’re just chalk. Time and tide will wash them away,
a long time in the future. This, too, shall pass. But that doesn’t mean you shouldn’t build
things anyway. Just because something is going
to break in the end
that lasts into the future. Joy. Wonder. Laughter. Hope. The world can be better because of what you
built in the past. And while I do think that the long-term goal
of humanity should be to find a way to defeat entropy, I’m pretty sure no-one knows where to start
on that problem just yet. So until then: try to make sure the things
you’re working on push us in the right direction. They don’t have to be big projects; they might just have an audience of one. And even if they don’t last: try to make sure they leave
something positive behind. And yes, at some point, the code that’s
updating the title of this video will break. Maybe I’ll fix it.
Maybe I won’t. But that code was never the important part.