- Hi everyone, Jade here. Today's video is an origin story about the entire field
of computer science. Now, when I say computer science, I mean literally everything
to do with computers. Laptops, desktops,
smartphones, iPods, GPSs, quantum computers, literally
every physical manifestation of a computer you can think of. But I also mean the theoretical side too. So the study of algorithms, computability, and complexity theory,
all the kinds of things that a theoretical computer
scientist would study. Now, the reason I want to tell this story is because it's incredible. So not only was computer
science accidentally created, but it was created from trying to answer a very abstract question about the foundations of mathematics. This story highlights an
incredible turn of events, a brilliant thinker, and the interconnectedness of two fields. The story starts with an extraordinary
man named David Hilbert. He lived at a very exciting time in the history of mathematics. See, back in the early 1900s,
it had come to light that no one was really sure
what they were studying when they were studying mathematics. When we study biology, we're studying living things
that exist in the real world. When we study physics, we're
studying real-world phenomena, which we can verify by experiment. But math? Was it the study of real-world
objects or abstract entities? Did it exist in our minds or in some other realm of existence? The ancient Greek philosopher, Plato, thought that mathematical
objects, shapes, numbers, and the relationships between them, belong to their own ideal
world separate from ours. He called it the world of forms. But Hilbert wasn't a fan. He was a modern day man, and the Platonist view was too mystical and airy-fairy for his liking. An idealized world of forms? Please. Hilbert was interested
in down to earth matters, what could and couldn't be proven. Another prevailing view put forward by the German philosopher Immanuel Kant, was that math was a mental construct based on our intuitions. But Hilbert had a problem with this too. After all, intuitions
could often lead us astray. The fact that the word paradox exists in the English language
is evidence of that. These differing viewpoints
greatly irritated Hilbert. It's not uncommon to have multiple viewpoints in other disciplines. Literature, for example, is praised for being able to have multiple interpretations of the same thing. Even physical laws are
revised and changed over time. But math, oh no, this just wouldn't do. Math was regarded as different
from the other disciplines. The epitome of certainty. The fact that two plus two equals four would never be revised and changed. And it just wasn't good enough
that different mathematicians had different interpretations
about what it meant. Aside from the prestige of math
being called into question, these differing opinions
had practical problems, too. Those who agreed with Kant's intuitionism didn't believe that the idea of infinity fit within mathematics. Those who thought that logic
was the foundation of math ran into logical paradoxes
and inconsistencies that limited what they
could and couldn't do in comparison with the
other schools of thought. You couldn't have different mathematicians playing by different rules. This was just unacceptable, and Hilbert was going to
do something about it. He proposed a crazily ambitious program, to axiomatize all of mathematics. This sentence requires some explaining. See, Hilbert thought that if we treat math as nothing more than a formal system, there could be no more disagreements about what was and wasn't allowed. So what is a formal system? Well, it's an alphabet or a
set of meaningless symbols which can be manipulated
according to a fixed set of rules. Chess is a formal system. The alphabet is the pieces,
which are manipulated according to a fixed set of rules. It's important to note that
the pieces and the rules don't have any meaning
outside of the game. Hilbert thought that this was how we should view mathematics. Something similar had been done by Euclid on a much smaller scale
some 2,000 years earlier. By putting forward five
self-evident axioms and the rules of how to build upon them, the whole system of
Euclidean geometry was born. Equipped with these axioms, we don't need to know whether a triangle lives in the world of
forms or in our heads to know that its angles
always add to 180 degrees. So in this way, Euclidean geometry can be seen as a formal system. Hilbert wanted to do something similar, to find the foundational
axioms, a formal system upon which all of mathematics
could be built. This would eliminate any disagreements about what was and wasn't allowed. This was a monumental task. And to start, Hilbert thought
about the three main things you would want in a foundational
system of mathematics, consistency, completeness,
and decidability. Consistency means that no contradictions can be proven within the system. For example, you can't prove
both that two plus two equals four and that two plus two doesn't equal four. Inconsistencies would render
the entire system useless. Completeness means that
every true mathematical statement which can be expressed in the system can be proven within the system. And finally, decidability. This was the idea that there should exist an effective procedure
for deciding the truth or falsity of any mathematical statement. For example, if given the
statement P plus Q equals Z, there should be some effective
procedure for deciding whether this statement is
true within the system. This is where our hero
of the story comes along. Alan Turing. You may know him from such
films as "The Imitation Game," or the biggest award in computer science called the Turing Award. At the young age of 22, Turing
got intensely interested in this last question, decidability. Did there exist an effective procedure for deciding the truth or falsity of any mathematical statement? But Turing saw a slight
problem with the question. What exactly was an effective procedure? Today, an effective procedure is considered to be an algorithm. Some step-by-step set of
instructions that you can follow without using any real
thought or intuition. But this was back in the '30s, and the idea of an algorithm was still vague and ill-defined. We're used to the concept now because they're fundamental
to how computers work. But computers didn't exist back then, or at least not as we know them today. The word computer referred
to people, usually women, who were given a list of instructions and carried out tedious
computations by hand. This job required little thought and definitely no mathematical intuition. Throughout his life, Turing was fascinated with the relationship
between humans and machines. He's famous for coming up with
the idea of the Turing test, a test for determining whether a machine is capable of thinking like a human. This was no exception. As the term effective procedure was too vague to do
anything rigorous with, he decided to define it himself. He based his definition
on a human computer. He decided that anything
a human computer could do by mindlessly following
a list of instructions using no genius or insight whatsoever was an effective procedure. He thought about the basic components of what a human computer
does when they're computing. One, they read instructions. Two, they read and write
symbols on a piece of paper according to the instructions. Three, they occasionally erase symbols and replace them with new symbols. And four, when they're finished with the computation, they stop. That was about it. And here is where the truly
revolutionary part comes in that would transform the
world for years to come. Turing recognized that every step of what a human computer did could be replicated by a very
simple theoretical machine. In place of the human
could be a scanning head, which could read one symbol at a time, erase one symbol at a time,
and write one symbol at a time. The piece of paper could be replaced by an infinitely long piece of tape which contained one symbol per square. But how does the machine know what to do? When a human writes a set of instructions, something is going on in their mind, which tells them what to do next. Turing thought of these as states of mind. And so similarly, for his machines, he devised the idea of internal states, which tell the machine what to do. This idea is best understood using visual diagrams and an example. Say we want the machine to
perform the very simple task of recognizing when
there are an equal number of zeros and ones on the tape. This is easy for a human who has mastered the art of counting, but it's not so
straightforward for a machine. A simple way for our machine
to tackle this problem is to pair a zero with a one, and then cross off the pair with an X. It keeps doing this
until it can't anymore. If it crosses off all
the symbols on the tape, we'll know there was an equal number. If it doesn't, we'll
know that there wasn't. Now, how do we program our
machine to carry out this task? Well, we build it with
an internal state table that looks like this. I know this looks complicated,
but it's not too bad. Let's read it together. Take this tape with these symbols. We want the machine to tell us whether there is an equal
number of ones and zeros. The machine starts in the start state. The start state tells the machine, if I find a zero, I will
replace it with an X, move to the right, and into state B. If I find a one, I will
replace it with an X, move to the right, and into state C. If I find an X, I'll leave it alone, and stay in the start state. If I don't find any ones or zeros and only encounter blank squares, they must have all been paired up, so I move to the accept state. So at the moment, our scanning
head is in the start state. It reads a zero, replaces it with an X, moves one square to the
right, and into state B. Now, the machine has just seen a zero and needs to find a one to pair it with. So state B says, if I see a zero or an X, I'll leave them alone, and move right one square,
and stay in state B. If I see a one, I replace it with an X, and move to state D. So our scanning head reads a zero, moves one square to the
right, and stays in state B. Now it reads a one, replaces it with an X, and moves one square to
the left, and into state D. State D is only entered
when a zero-one pair has been crossed out by Xs. So the machine needs to
reset for the next count. It says, I keep moving the
scanning head to the left, ignoring everything. If I see a one, I leave it as a one. If I see a zero, I leave it as a zero. And if I see an X, I leave it as an X. I keep moving until I
reach a blank square, which indicates I've reached the beginning of the list of symbols. I then move one square to the right and go back to the start state
to repeat the whole process. After another round of this,
the machine only encounters Xs. So it keeps moving right and
remains in the start state. When it reaches a blank, it
moves into the accept state, and we know that this list of symbols does have an equal number
of ones and zeros. So that's how one of
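these machines would work, step by step.

To make the walkthrough concrete, here's a short Python sketch of the machine just described. The table layout, the reject state for unequal counts, and the handling of state C (the mirror image of state B, for when a one is found first) are my own reconstruction of details the video leaves implicit:

```python
# A simulator for the machine described above. It decides whether a tape
# holds an equal number of 0s and 1s by crossing off 0-1 pairs with Xs.
# State names and the exact table are my own reconstruction, not taken
# verbatim from the video.

BLANK = " "

# (current state, symbol read) -> (symbol to write, head move, next state)
TABLE = {
    # start state: look for an unpaired 0 or 1
    ("start", "0"):   ("X", +1, "B"),        # found a 0, go find a 1
    ("start", "1"):   ("X", +1, "C"),        # found a 1, go find a 0
    ("start", "X"):   ("X", +1, "start"),    # already paired, skip it
    ("start", BLANK): (BLANK, 0, "accept"),  # everything paired up
    # state B: scan right for a 1 to pair with the 0 just crossed off
    ("B", "0"):   ("0", +1, "B"),
    ("B", "X"):   ("X", +1, "B"),
    ("B", "1"):   ("X", -1, "D"),
    ("B", BLANK): (BLANK, 0, "reject"),      # no 1 left: counts unequal
    # state C: the mirror image of B, scanning right for a 0
    ("C", "1"):   ("1", +1, "C"),
    ("C", "X"):   ("X", +1, "C"),
    ("C", "0"):   ("X", -1, "D"),
    ("C", BLANK): (BLANK, 0, "reject"),
    # state D: rewind left to the blank before the input, then restart
    ("D", "0"):   ("0", -1, "D"),
    ("D", "1"):   ("1", -1, "D"),
    ("D", "X"):   ("X", -1, "D"),
    ("D", BLANK): (BLANK, +1, "start"),
}

def run(input_symbols):
    """Simulate the machine; return True if it ends in the accept state."""
    tape = dict(enumerate(input_symbols))  # unvisited squares are blank
    head, state = 0, "start"
    while state not in ("accept", "reject"):
        write, move, state = TABLE[(state, tape.get(head, BLANK))]
        tape[head] = write
        head += move
    return state == "accept"

print(run("0101"))  # True: two 0s and two 1s pair up exactly
print(run("001"))   # False: an unpaired 0 is left over
```

A dictionary stands in for the infinitely long tape, so only the squares that have actually been visited take up any memory. And that's how one of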
these machines would work. But here, Turing had
another incredible insight. See, at the moment, you would need to build different machines
to do different tasks. A machine to calculate square roots would need to be built with a different internal state table than
a machine to do addition. But Turing realized that
any internal state table could be encoded into a
list of ones and zeros. In this way, we could
feed the instructions as well as the problem into a new machine. And that machine could perform
that set of instructions. Today, we know this as
a programmable computer. We have computers which we can program to do different things
after they've been built. That idea originated right
here in Turing's paper. He dreamed up a theoretical machine that could perform the same task as any other of his machines. And that was pretty freaking incredible. Even though this theoretical
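machine only ever existed on paper, the key idea, that a machine's instructions are just data, is easy to sketch in code.

Here's a toy illustration of that stored-program idea in Python. The encoding scheme (unary blocks of 0s separated by 1s) is my own construction for illustration, not Turing's actual one, and it assumes states, symbols, and moves have already been numbered:

```python
# A toy illustration of the stored-program idea: a state table is just
# data, so it can be serialized into a plain string of 0s and 1s and
# handed to one general-purpose machine along with the input. This
# encoding is my own invention for illustration, not Turing's actual
# scheme; it assumes states, symbols, and moves are coded as integers.

def encode(table):
    """Encode rules (state, read) -> (write, move, next) as 0s and 1s."""
    rules = []
    for (state, read), (write, move, nxt) in table.items():
        fields = [state, read, write, move + 1, nxt]  # move -1/0/+1 -> 0/1/2
        # each number n becomes n+1 zeros; single 1s separate the fields
        rules.append("1".join("0" * (f + 1) for f in fields))
    return "11".join(rules)  # double 1s separate whole rules

def decode(bits):
    """Recover the state table from its 0-and-1 encoding."""
    table = {}
    for rule in bits.split("11"):
        state, read, write, move, nxt = (len(b) - 1 for b in rule.split("1"))
        table[(state, read)] = (write, move - 1, nxt)
    return table

# a two-rule table with integer-coded states and symbols
example = {(0, 0): (1, +1, 0), (0, 1): (0, -1, 1)}
bits = encode(example)
print(bits)                     # just a string of 0s and 1s
print(decode(bits) == example)  # True: nothing was lost
```

Because the table survives the round trip unchanged, a single general-purpose simulator could read those 0s and 1s off the front of its tape and then behave like the machine they describe. And even though this theoretical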
machine is incredibly simple, it can, in principle, do everything that a
human computer can do. This abstract machine, now
called a Turing machine, was Turing's definition
of an effective procedure. An effective procedure was anything that could be computed by a Turing machine in a finite amount of time. So with that out of the way, he could go on to answer
Hilbert's question. Did there exist an effective procedure for deciding the truth or falsity of any mathematical statement? Or, as framed by Turing, did there exist a Turing
machine for deciding the truth or falsity of
any mathematical statement? The way that Turing answered this is beyond the scope of this video. I did make an entire
separate video about it called the Halting Problem, which you can check out after this, but I will tell you that Hilbert's program ended in complete failure and Turing contributed to that failure by proving that there was no effective
procedure for deciding whether any mathematical
statement was true or false. This alone is an amazing achievement. But Turing's little machines
took on a life of their own. By making a definition of
computation that was rigorous, people were able to study
the nature of computation and its limitations like never before. This birthed the entire
field of computer science, which is the study of what
computers can and can't do, the complexity of algorithms, and the overall nature of computation. Furthermore, by coming
up with an intuitive, easy-to-understand model, people were able to turn the
abstract and vague notion of computation into a physical machine that could replicate the process. Turing machines are the blueprint of the modern day computer. Anything from your
desktop to your smartphone to the computers on the
International Space Station are based on Turing's model. Anything that any of these machines can do can in principle be done
by a Turing machine. It might just take a bit longer. Now it's worth noting that other models of computation do exist. Alonzo Church, who would later become Turing's doctoral advisor, actually came up with his own model just weeks before Turing,
called the lambda calculus. Turing found out about this just before publishing his own paper and wrote a quick appendix which showed that Church's lambda calculus and his own model were in fact equivalent, meaning that anything that can be computed by the lambda calculus
can also be computed by a Turing machine and vice versa. However, because of their concreteness, simplicity, and precision, Turing machines were the
model that caught on and captivated the imagination of the mathematics community. Even though they were
conceived nearly 90 years ago, there is currently no
working model of computation more powerful than Turing's. This is embodied in the
Church-Turing thesis, which states that anything
that can be computed can, in principle, be done
so by a Turing machine. If a Turing machine can't
compute it, it's not computable. That's not to say
this couldn't change someday. There's a growing field of
study called hypercomputation, which studies theoretical
models of computation more powerful than Turing machines. The trouble is making them
physically realizable. Another model of computation
that's been getting a lot of hype recently
is quantum computing, which, if it works, will definitely bring
something new to the table. But so far, Turing's simple model which he dreamed up at age 22, kind of as a by-product to answer a totally different question, is still the most widespread
model of computation to date. This video was an origin story about one of the most fascinating and relevant fields of study, but there's so much more to computers and algorithms that's worth exploring. If you'd like to get a real handle on the fundamentals of computer science, hearing about it in a video
probably isn't enough, but there is good news. You can learn about it the same way I did, through interactive courses on Brilliant. In this computer science
fundamentals course, you can learn how to write programs, search for solutions using
the computational method of brute force and make
your own algorithms. There are so many cool
algorithms out there for solving different problems, and you'll get to explore
and create the same ones taught in a computer science degree. By getting your hands dirty, solving problems, and making mistakes, you'll get a real intuition
for the power of algorithms. I can say from my own experience
that this course is perfect for those wanting to get
started in computer science. Brilliant has a whole bunch
of other courses, too. Not just in computer science,
but physics and math as well. It's worth checking out
if you're a curious mind who loves to learn and be challenged, which as you clicked on this
video, I'm guessing you are. If you'd like to check it
out and support the channel, go to brilliant.org/UpAndAtom. It's free to sign up, but the first 200 people
that go to that link will get 20% off the annual
Brilliant premium subscription. That's a subscription I've been using. The link is also available
in the description. As always, a huge thank you to all of my wonderful patrons on Patreon. These videos would not
be possible without you, and your dedication to
science inspires me. Thank you for watching and I'll see you in the next episode. Bye.