Are there problems that
computers can’t solve? Even if you work with the cleverest programmers,
even if you have infinite time and energy and more computing power than could possibly
ever exist in the universe, are there some things that it will always
be impossible for a computer to work out? The short answer is: yes! The long answer would be an entire
lecture series in introductory computer science. The fairly short answer that I hope I haven’t
simplified too much… is the rest of this video. One of the great things about old computers
like these BBC Micros was that the operating system, the first thing you got when
you switched them on, was also somewhere that
you could just write code. So if you were a shop assistant
selling these in the 80s, you would have to watch out when
the kids weren’t in school, because while you were distracted, some nerd could slip in and quietly type in
something like this. And suddenly your display model was endlessly
typing out something rude until you hit the ‘break’ key or
turned the computer off and on again. That is a really simple example of a program
that will never halt. And it is deliberately written that way.
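(A minimal sketch of that kind of prank in Python, rather than the BBC Micro's BASIC, since the exact on-screen code isn't shown here: a deliberately endless loop that keeps printing until someone interrupts it from outside.)

    # A deliberately never-halting program: it prints the same line forever
    # until the process is interrupted from outside (the modern 'break' key is Ctrl-C).
    while True:
        print("SOMETHING RUDE")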
But programs that never halt can also be written accidentally: we’ve all had that moment where
something we’re working on freezes, stuck in an endless loop doing...
something? And you have that horrible moment of trying
to remember whether you saved recently or not. It would be brilliant to be able to analyse code,
any code, and say definitively: "yes", eventually it’ll halt or "no", there’s a point in there where it’ll
keep looping forever, if this happens. Which sounds like a simple problem.
It’s actually impossible. Literally, mathematically, impossible. And how we found that out comes from before
electronic computers were even invented. In the early twentieth century, a mathematician named David Hilbert posed
a series of problems that he thought mathematics
should be able to answer. One of them came to be known as the… …the, uh… the “Decision Problem.” Which is, in short: "Can we determine if any
given statement”, meaning absolutely any logical statement or
problem in the universe, “is provable or non-provable?” Given enough time, could we find an answer
to everything? Hilbert was optimistic that
the answer would be yes. Other mathematicians proved him wrong. One of those mathematicians was Alan Turing. And to explain his proof, we first need to
look at the tool that he used to make it. Turing was looking at the mechanisation of
logic and computation. A “computer”, at this point, wasn’t
an electronic thing, it was a job description, people who sat in rooms doing specific calculations
over and over and over, they were called computers. Because they computed. Turing was fascinated by the idea of automating
that process. The dream was if everything could be encoded
into numbers and if we had mechanical methods for working
with those numbers, then, maybe, anything could be solved. He came up with a hypothetical
calculating machine, which ended up being named after him. A Turing Machine would be made of, first: a tape. This tape would be, thanks to this existing
in magical hypothetical philosophy land, infinitely long, and divided into cells. In modern terms, that’s computer memory. Next, a read-write head, which can move
left or right over that tape, reading what’s there or changing it. Then it has a state register and
a table of instructions, which in modern terms are the code. There’s a lot of complicated stuff that
I’m glossing over there about how it all works, but in short:
it’s some code. The machine follows that code, which might include instructions like
“move left”, “move right”, “change what’s on this bit of the tape”,
or “read what’s on this bit of the tape”, and which instructions are used can depend
on what those bits of tape say. Eventually, the Turing machine reaches the
end of its instructions and it halts. Or it enters an endless loop.
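(To make that concrete, here is a hedged sketch of a tiny Turing machine simulator in Python. The function name, the dictionary-as-tape trick and the example instruction table are illustrative assumptions, not anything from the video; the dictionary stands in for the infinitely long tape, and the step limit stands in for us giving up rather than waiting forever.)

    # A minimal Turing machine: tape, read-write head, state register, instruction table.
    def run_turing_machine(table, tape, state="start", head=0, max_steps=10_000):
        for _ in range(max_steps):
            if state == "halt":                  # reached the end of its instructions
                return tape
            symbol = tape.get(head, 0)           # read what's on this bit of the tape
            write, move, state = table[(state, symbol)]
            tape[head] = write                   # change what's on this bit of the tape
            head += 1 if move == "R" else -1     # move right or move left
        raise RuntimeError("gave up: this machine may be stuck in an endless loop")

    # Example table: turn 0s into 1s until the first 1 is reached, then halt.
    table = {
        ("start", 0): (1, "R", "start"),
        ("start", 1): (1, "R", "halt"),
    }
    print(run_turing_machine(table, {0: 0, 1: 0, 2: 1}))   # {0: 1, 1: 1, 2: 1}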
It was, basically, the minimum possible computer. But it turns out it’s also every computer. Any program that you write in any programming language can be converted into something that can run
on a Turing machine. So using that machine, Turing restated the
Decision Problem: "Can we determine if any given program will
halt or not halt?" Same question, phrased differently. Will the machine find a solution,
prove a statement, or will it keep running forever? Remember, the big question that
we’re trying to answer is: is there any question that
a computer cannot solve? So, Turing proposed a series of steps. And yes, I know, computer scientists watching, this is a deliberate simplification, I know there are lots of subtle things
I’m skipping over. But: Turing proposed a series of steps. First: imagine that we come up with
a Turing machine, a program, that can look at some code, and figure out if that program will halt. At this point we don't need to know how
that machine actually does it, let’s just assume that it works somehow. Let's call this machine, this program, "Halts". "Halts" takes the code for a program as its
input, and outputs one of two answers: “Yes”, that program halts. Or “No”, it loops forever. So far, so good, right?
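(As a sketch, “Halts” has roughly this shape in Python; the name and signature are only for illustration, and the point of everything that follows is that this body can never actually be filled in correctly for every possible program.)

    def halts(program_source: str) -> bool:
        """Return True if the program in program_source would eventually halt,
        False if it would loop forever."""
        ...  # assume, for the moment, that this works somehow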
So, we’re going to bolt another machine, another program, on the end of “Halts”. And depending on the output from “Halts”,
it’ll do one of two things. If “Halts” outputs ‘yes’, knowing that
the program you've fed in will eventually stop, then that machine
goes into an infinite loop. Maybe it just prints something rude over
and over. And if Halts outputs a 'no', if Halts works out the code we’ve fed in
would loop forever, then this new machine just halts. Okay? So if Halts reads the code
and says “that’ll halt”, then the new machine loops. If Halts says a program will loop,
the new machine halts. We'll put a box around both of those and call
this whole new machine "Opposite". Because it does the opposite of
whatever code you feed in.
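(Bolted together in Python, “Opposite” might look like this; halts() is the assumed-to-exist checker sketched above.)

    def opposite(program_source: str) -> None:
        if halts(program_source):
            while True:              # Halts said "that'll halt", so loop forever
                print("SOMETHING RUDE")
        else:
            return                   # Halts said "that'll loop", so just halt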
With all that set up, here’s where Turing springs the trap. He proposes that we take this new combined
program, Opposite, we take its code, and we feed its own code into itself. What will it do? Let's step through the logic. If it thinks the answer is: this code will halt,
then it will loop. But then the answer is: it’ll loop. So it’ll halt. But then it’ll halt. So it’ll loop. But then, it’ll… daaagh.
And it vanishes in a puff of logic. To be clear, that’s not a loop in itself. That’s a paradox.
That’s a logical, mathematical impossibility.
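(In the sketch from above, the trap is the single call below. opposite_source is a hypothetical stand-in for the full source text of opposite() itself, following the video's deliberate simplification.)

    # opposite_source is assumed to hold the complete source code of opposite() above.
    opposite(opposite_source)
    # If halts() decides this combined program halts, opposite() loops forever.
    # If halts() decides this combined program loops, opposite() halts.
    # Either way halts() gave the wrong answer, so no correct halts() can exist.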
And that is Turing’s answer to the Decision Problem. If you ever have a program that can determine,
for certain, whether any particular program
will halt or loop, you can easily tweak it a little,
let it contradict itself, and cause that paradox.
And that’s impossible. So, a program to do that cannot exist. Which means there is definitely an exception, there is definitely at least one thing
that computers cannot ever solve, even given infinite time and power. The answer to the decision problem,
the answer to the question “Can we determine if every program will
halt or not halt?” is: no. Now, there are obviously a lot of programs where you can easily tell at a glance if they’ll
halt or not. This code will loop forever. This code will exit and halt.
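(The two on-screen examples might look something like this in Python; the exact code from the video isn't shown here, so these are stand-ins.)

    def loops_forever():
        while True:
            pass          # clearly never halts

    def halts_immediately():
        return            # clearly halts straight away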
The Decision Problem isn’t about whether you can tell if some programs will halt or loop.
That's easy. It’s about every program. It’s about the fundamental fact that there
are definitely some things that computers cannot ever possibly solve. Which came as a blow to those mathematicians
like David Hilbert who believed that pure mathematics
had the ability to solve anything. Computers seemed like amazing machines that
could calculate any problem thrown at them, if given enough time. But even before the first
digital computer existed, Alan Turing had worked out that they do still
have limits.
It's a neat result. If you're into these sorts of proofs by contradiction, the mother lode is Cantor's diagonal argument.
Normal rule for article titles that ask questions does not apply here
Hmm, but only feeding itself to itself isn't enough, is it? Because the program requires input? So what is the input of the program fed as input? Again itself? Then it would recurse to infinity (just the input specification even before the start of execution). What am I not getting?
Related is the idea of Infinity computer: https://www.chiark.greenend.org.uk/~sgtatham/infinity.html
Plenty. Like answering the question from this video.
It is well known that for any finite machine, the halting problem is easy to solve: either a program halts, or it eventually repeats a state, and therefore does not halt (a sketch of that state-repetition check follows below).
Turing himself acknowledged that his argument requires an infinite machine. Notice 3:01 -> 3:10 where he says "magical" and "infinitely long". An infinite machine would indeed be magical; it is just a fantasy. So this whole argument is about a fantasy.
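(A sketch of that state-repetition check, assuming the machine's entire state can be captured and compared; the function and its arguments are hypothetical, just to illustrate the idea.)

    def halts_on_finite_machine(step, state):
        """step(state) returns the next state, or None once the machine halts."""
        seen = set()
        while state is not None:
            if state in seen:
                return False   # a state repeated, so the machine is cycling forever
            seen.add(state)
            state = step(state)
        return True            # reached a halting state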
In this example, he went through the contradiction so quickly that I don't understand it.
Converting the diagrams from the video into pseudocode... the set up:
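(A reconstruction of that set-up in Python, using the HALTS and OPPOSITE names from this comment; it is essentially the same sketch as in the video, and source_of is a hypothetical helper meaning "the source text of this program".)

    def HALTS(source):
        """Assumed to exist: True if the program in source halts, False if it loops."""
        ...

    def OPPOSITE(source):
        if HALTS(source):
            while True:    # predicted to halt, so loop forever
                pass
        else:
            return         # predicted to loop, so halt

    # The trap: feed OPPOSITE its own source code.
    OPPOSITE(source_of(OPPOSITE))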
The exact nature of OPPOSITE composing with itself is important to the "trap". Specifically, OPPOSITE taking its own code as input. What does that look like exactly? Because, to me, it seems perfectly reasonable to feed that source code into a compiled version of OPPOSITE and get a result.
In this kind of setup, I assume the inputs need to be resolved to actual values. So, the code OPPOSITE(quit) loops forever deterministically. And then, passing the source code "OPPOSITE(quit)" into OPPOSITE should quit deterministically.
If we're meant to compose OPPOSITE with itself before feeding it into OPPOSITE, I'd like to see that described without causing an infinite recursion (which itself deterministically would not halt).
Can someone use these examples to better describe the crux of Turing's result?