How This One Question Breaks Computers

Captions
- A computer's greatest strength is also its greatest weakness. What is it? This sounds like a paradox, and on Up and Atom we do love a good paradox, but it actually isn't. Just as a person's strength might be that they're very ambitious, their weakness might be that they can never relax. Both traits are two sides of the same coin. Not being able to relax is a natural consequence of being extremely ambitious. Computers have something similar. The thing that makes them so powerful, so widespread, and so useful also places a fundamental limit on what they can do and the problems they can solve. This limit can never be broken, even with the best supercomputers of the future, even by a quantum computer. It's built into the very nature of computation. What is this intriguing quality? Well, let's start by asking: what is it that makes computers so powerful?

"I think there is a world market for maybe five computers." This quote is attributed to Thomas Watson, longtime CEO of IBM. There's no doubt this prediction turned out completely wrong, but why? Let's figure it out by looking at some of the earliest computing devices, like this one used to calculate the motions of the Moon and Sun, or this one used to predict the motion of the tides. They were very good at the job they were designed to do, but they never received the attention and fame of the modern computer. Why not? The first modern digital computer ever built, ENIAC, was created for the sole purpose of computing artillery firing tables. But unlike those other devices, ENIAC's descendants went on to rule the world, to the point where nearly everyone carries one in their pocket. What was different about ENIAC? What could it do that computing devices before it couldn't?

The secret sauce was that ENIAC could be programmed. This meant you could make it behave differently by putting in different instructions. Programmed with the right set of instructions, ENIAC could act like the device that predicted the motions of the Sun and Moon. Program it with a different set of instructions and it could act like the tide-predicting device. In fact, it could run any program that any computer could run, any app at all, at least in theory. Rather than having to build a new machine for every purpose, you could just switch out instructions. This shape-shifting ability is called universality, and it's the reason for the incredible success of the modern computer. We're so used to it today that we take it for granted, but it truly is incredible. A consequence of universality is that we can have programs running other programs. Just look at your computer's operating system. An operating system is a program, and anytime you open Word, play a movie, or paint a picture, that's a program running another program. Today, programs have robots roaming around Mars, writing English essays, and helping doctors make diagnoses. Something else you can do on your computer is watch this documentary I made on our exclusive streaming service, Nebula.

Before we continue with the rest of the video, I'd like to tell you about today's sponsor, Nebula. Some creator friends and I teamed up to build our own streaming platform, where we don't have to worry about the pressure of the YouTube algorithm or clickbait titles to keep creating. Nebula is a place where we can focus on passion projects and experiment with original content, all while staying completely ad-free. Nebula features a lot of YouTube's top educational creators, like Real Engineering, Joe Scott, Isaac Arthur, Medlife Crisis, and many more.
If you're a deep thinker and like to contemplate the boundary between math and philosophy, I think you'll like my documentary. There are entire series of exclusive content on there as well, like Real Science's amazing new documentary series about the key moments in our human evolution. Signing up to Nebula is the best way to support this channel and the entire educational community. If you sign up using the link on screen and in the description, you'll get both Nebula and Nebula Classes with a 40% discount off the annual plan. That's just a little over $2.50 a month for exclusive, high-quality educational content made by your favorite creators. Nebula Classes are where creators make classes about how they do what they do best. One I highly recommend is "What is Code?" taught by the Coding Train's Daniel Shiffman. It's a beginner-friendly introduction to the ins and outs of coding. Thank you for listening, and thank you Nebula for sponsoring this episode.

Speaking of mathematics and coding, with the way things are going, it's not hard to imagine that in a few years computers will have solved all of mathematics' problems. But in fact, this property of universality has a flip side. There are some problems computers will never solve, even in a million years. Sometimes these are seemingly simple problems, like: does this computer program have a virus, or are these two real numbers equivalent?

But wait, how could we possibly know that? Isn't it a bit arrogant to claim that we know for sure that a computer will never solve some problems? Mathematicians were working on Fermat's Last Theorem for 358 years before Andrew Wiles solved it. An example even more relevant to this video is the four-color theorem, one of the first computer-assisted proofs. The four-color theorem states that only four colors are needed to color in any map so that no two adjacent regions have the same color. Mathematicians were working on this for 124 years before invoking the help of a computer to finally solve it. Today, computer-assisted proofs are commonplace. One could argue that with tools like ChatGPT advancing at their current pace, we're only starting to see what computers are capable of. How can we possibly say with 100% certainty that there are problems computers will never solve?

To answer that, we need to go back to the beginning of the 1900s. See, this limit of computation was discovered years before the first computer was even invented. David Hilbert was one of the greatest mathematical minds who ever lived. He had a dream to mechanize mathematics. Back then, mathematics was going through a rough time. Many paradoxes had shaken its foundations. The most famous one is Russell's Paradox. It's a paradox about set theory, but the most intuitive way to understand it is actually through hairdressing. Imagine a town where there is a barber who shaves all and only men who do not shave themselves. Does the barber shave himself? If the barber does not shave himself, then he does, and if he does shave himself, then he doesn't. You can apply this same logic to sets. Imagine a set that contains all sets that do not contain themselves. Does this set contain itself? If it does, then it doesn't, and if it doesn't, then it does. These paradoxes were not a good look for mathematics, and they needed to be dealt with pronto. Hilbert took it upon himself to fix this crisis. He came up with a program, Hilbert's program, whose sole aim was to put mathematics on a firm, rigorous footing.
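The barber's rule can even be written as a tiny Python sketch (nothing from the video; the names and setup are invented purely for illustration). Asking the paradoxical question turns into a definition that chases its own tail:

```python
def shaved_by_barber(person):
    # The barber's rule: he shaves exactly those people who do NOT shave themselves.
    return not shaves_self(person)

def shaves_self(person):
    if person == "barber":
        # Whether the barber shaves himself is, by definition,
        # whatever the barber's own rule says about him.
        return shaved_by_barber("barber")
    return True  # for illustration, assume everyone else shaves themselves

try:
    shaved_by_barber("barber")  # the paradoxical question
except RecursionError:
    print("The definition can never settle on an answer.")
```

Keep this circularity in mind: turning a definition back on itself is exactly the trick Turing will use later in the video.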
According to Hilbert, there were three crucial questions mathematicians had to answer if mathematics was to be saved. Number one, is mathematics complete? The idea behind this question was that whatever mathematical conjecture we might think of, like "every positive integer is the sum of four squares", we should always be able to prove or disprove it. Number two, is mathematics consistent? This means that we should never be able to arrive at contradictory statements from the same set of axioms. And number three, is mathematics decidable? We'll come back to questions one and two later, but it's question number three that is of interest to our story.

In mathematics, the word decidable means that there is a step-by-step mechanical procedure to decide whether a question has a yes or no answer. For example, take the question: is 17 a prime number? The mechanical procedure to answer this question goes like this. Take the number 17. For each whole number from two to the square root of 17, check if it divides 17. If you found a divisor, 17 is not prime. If you did not find a divisor, 17 is prime. We have just used a mechanical decision procedure to answer the question of whether 17 is prime. We didn't need to think or rely on any human intuition. Hilbert called this a decision procedure, but today we would call it an algorithm. Now, if this algorithm only worked for the number 17, it'd be pretty useless, but we can actually substitute any number and it will still work. So the question "is N a prime number?" is decidable.
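Here's a minimal sketch of that decision procedure in Python, just a direct rendering of the steps above (not code from the video):

```python
import math

def is_prime(n):
    # Decision procedure: try every whole number from 2 up to the square root of n.
    if n < 2:
        return False
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return False  # found a divisor, so n is not prime
    return True  # no divisor found, so n is prime

print(is_prime(17))  # True: 17 is prime
```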
Hilbert was asking if all of mathematics is decidable. He wanted a decision procedure that could be applied to any mathematical question. He didn't want separate algorithms for separate problems. He wanted one mega algorithm for all problems. This would be nothing less than the complete mechanization of mathematics: a magical algorithm that could answer any question with a yes or no answer. We could ask it the Riemann hypothesis: do all non-trivial zeros of the Riemann zeta function lie on the critical line, with real part one half? Or the Goldbach conjecture: is every even integer greater than two the sum of two primes? And it would run its algorithm and spit out a yes or no answer. Mathematics would be solved and mathematicians would be out of a job. Hilbert was German, and he called this quest the Entscheidungsproblem, meaning "decision problem". It quickly became one of the most important challenges of the 20th century.

Hilbert was convinced that the answer to all of his questions was yes. Yes, mathematics is complete. Yes, mathematics is consistent. Yes, mathematics is decidable. He had always been an optimist who rejected the Latin maxim ignoramus et ignorabimus: we do not know and we will not know. This maxim represents the idea that scientific knowledge is limited, and Hilbert disagreed with it so much that he said, "We hear within us the perpetual call: there is the problem, seek its solution. You can find it by pure reason, for in mathematics there is no ignorabimus. In place of the foolish ignorabimus, let stand our slogan: We must know. We shall know." Those last words are engraved on his tombstone.

Not long after Hilbert said this, a man named Kurt Gödel came along and showed that in fact there are things we shall not know, and that the first two of Hilbert's three questions were unattainable. Gödel did this via his incompleteness theorems, which prove that any set of mathematical axioms will inevitably be incomplete. There will always be true facts about numbers that cannot be proved from a given set of starting axioms. He also showed that no powerful enough set of axioms could ever prove its own consistency. I'm making a video about Gödel's incompleteness theorems, so I won't go too much into them here. I know I've been saying that for a while, but I really am. I just wanna make sure that it's good. Anyway, obviously this sucked for Hilbert, but hey, at least he had question number three left, right? Right?

Here, the quiet hero, or villain, enters the story: a logician named Alan Turing. Turing spent his teenage years at Sherborne, a public school, in an environment in which competitive sports were valued and mathematics was not. One of his teachers even thought of science as low and cunning, and spoke of mathematics as imparting a bad smell to the room. His parents were even warned of the shame of having a son who was a mere scientific specialist. Despite all this, Turing went on to study mathematics at Cambridge, where he first heard of the Entscheidungsproblem, and he later completed a PhD at Princeton University. At 24, Turing published what has been hailed as one of the top 10 science papers of the 20th century, a paper called "On Computable Numbers, with an Application to the Entscheidungsproblem". In this 36-page paper, Turing kicked off the entire field of computer science. He formalized the idea of computation, defined what an algorithm was, devised the theoretical framework for universality, proved the impossibility of Hilbert's Entscheidungsproblem, and revealed the limits built into computation, which stand to this day. Again, this was done before the first modern computer was even built. Let's see how Turing broke the last piece of Hilbert's heart.

Turing demonstrated the first ever undecidable problem. He demonstrated it with his Turing machine: a slow and simple imaginary computer that he proved to be theoretically capable of performing all the operations of any computer. What's amazing is that although this was written years before the first computer came into existence, all of his ideas translate perfectly to modern computer programs. His undecidable problem is called the halting problem, and it goes like this.

Some programs halt. Take the program "count down to zero and then halt" and give it the input 10. It halts when it reaches 0: 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0. Some programs do not halt, like "count up from zero until you reach -1": 0, 1, 2, 3, 4, 5, 6... This program will never halt, as it will never reach -1. It will just keep going on and on forever. For some programs, it's not obvious whether they will halt or not. You know when your browser is frozen, with that little spinning circle? Is it about to load in the next few seconds, or is it stuck in an eternal loop? Or what if you wrote a program that halts when it finds an even integer greater than two that isn't the sum of two primes? Knowing whether this program will halt is the same as knowing the answer to the Goldbach conjecture. (We'll sketch these example programs in code in a moment.)

Wouldn't it be handy if you could write a program that could tell you if another program will halt? The question "will this program halt?" is a decision problem. And according to Hilbert, if mathematics is decidable, we should be able to answer it with a yes or no. But Turing showed this was impossible. He used a proof method called reductio ad absurdum, which literally means reducing things to an absurdity. It's also known as proof by contradiction, but reductio ad absurdum sounds cooler. Here's his argument.
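Before stepping through the argument, here are minimal sketches of the example programs just described, so we have something concrete to point at (illustrative Python, not code from the video):

```python
import math
from itertools import count

def is_prime(m):
    return m >= 2 and all(m % d for d in range(2, math.isqrt(m) + 1))

def countdown(n):
    # Counts down to zero and then halts.
    while n >= 0:
        print(n)
        n -= 1

def count_up_to_minus_one():
    # Counts up from zero "until" it reaches -1, which never happens, so it never halts.
    n = 0
    while n != -1:
        n += 1

def goldbach_searcher():
    # Halts only if it finds an even integer greater than two that is NOT the sum
    # of two primes. Whether it ever halts is exactly the Goldbach conjecture.
    for n in count(4, 2):  # 4, 6, 8, ...
        if not any(is_prime(p) and is_prime(n - p) for p in range(2, n - 1)):
            return n

countdown(10)  # prints 10, 9, ..., 0 and halts; the other two, as far as anyone knows, never would
```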
Step one, imagine that a program does exist that can decide whether or not any other program and its input will halt. Call it halt checker. Give halt checker a program and its input, say the countdown program from before, and ask: does this program halt or run forever? As the countdown program clearly halts, halt checker will return "halts". So far so good. Now we create a program called reverser. Reverser looks at halt checker's output and does the opposite. So here, seeing that halt checker has returned "halts", reverser runs forever. If we had given halt checker the counting-up program, halt checker would have returned "runs forever" and reverser would halt. Reverser is a pretty weird program, but hey, weird programs are allowed to exist in the world.

Now here comes Turing's OP move. What if we give halt checker the reverser program as input, with reverser's own source code as the input it runs on? In other words, we ask reverser about itself. One of two things will happen. One, reverser halts, so halt checker returns "halts", and then reverser, doing the opposite, runs forever. This is a contradiction, as reverser can't both halt and run forever. Or two, reverser runs forever, halt checker returns "runs forever", and then reverser halts. This is also a contradiction because, again, reverser can't both halt and run forever. The very existence of the program halt checker has caused a logical contradiction. The only solution is that a program that can decide whether or not any program halts cannot exist. So we have just found the first problem that cannot be answered by a decision procedure: the first undecidable problem. (There's a code sketch of this construction a little further down.)

The fact that the ultimate algorithm doesn't exist is called undecidability, and it's the flip side of universality. The power of universality, that a universal computer can run any and all programs possibly run by a computer, means that we can build situations where contradictions like the halting problem have to happen. Undecidability is not a defect but an inevitable byproduct of universality.

Now, you might be thinking, who cares about this? Sure, it crushed Hilbert's dream. Not every mathematical statement can be answered by some mega algorithm. But the halting problem seems like an obscure, artificial problem that was made for the sole purpose of showing that mathematics is not decidable. Sure, it would be interesting to know whether some programs halt, but at the end of the day, is it really such a big deal in the grand scheme of all of computation? And the answer is, unfortunately, yes. The halting problem has enormous implications for many areas of practical computing. See, a lot of programming problems are really just the halting problem in disguise.

Take the question: can you write a general algorithm to tell you if two programs do the same thing? It sounds simple enough, but it turns out that if you could solve this, you could also solve the halting problem. You could write a program that goes into an infinite loop, a very easy thing to do. Then you could write some other program that examines every even integer and halts when it finds one that isn't the sum of two primes. You know, the Goldbach conjecture. If a program could tell you whether these two programs behave in the same way, that would effectively be telling you whether the second program runs forever or halts, which is an answer to the halting problem. It's been shown that answering any of the following questions is effectively answering the halting problem. Does this program do what it says it does? Will this program always return a one? Does this program add two numbers? Can this program convert an MP3 file? Is this program a virus? Does this program modify itself?
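To make Turing's construction concrete, here's a hedged sketch in Python. The names halt_checker and reverser mirror the video's; halt_checker is deliberately left as a stub, because the whole point of the argument is that no real implementation of it can exist:

```python
def halt_checker(program_source, program_input):
    # Hypothetical: return True if running `program_source` on `program_input`
    # would halt, False if it would run forever. Turing's argument shows no such
    # program can exist, so this is a stand-in, not an implementation.
    raise NotImplementedError

def reverser(program_source):
    # Ask halt_checker what `program_source` does when fed its own source,
    # then do the opposite.
    if halt_checker(program_source, program_source):
        while True:   # predicted to halt? then loop forever
            pass
    else:
        return        # predicted to run forever? then halt immediately

REVERSER_SOURCE = "the source code of reverser itself"

# The contradiction: run reverser on its own source. If halt_checker says it halts,
# reverser loops forever; if halt_checker says it runs forever, reverser halts.
# Either way halt_checker gave the wrong answer, so halt_checker cannot exist.
# reverser(REVERSER_SOURCE)
```

Notice that nothing in this argument depends on how fast the machine is or how much memory it has, which is why the limit survives any future supercomputer or quantum computer.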
Okay, I just wanna pause here, because I know there's probably some confusion. Some of you might be thinking: no, you can write a program to tell if a program returns a one. You can write a program to tell if another program adds two numbers. A lot of you have probably written similar programs. And that is true for specific programs. What I'm talking about here is the general case. You cannot write a program that tells you whether any program returns a one or adds two numbers. In 1951, a guy named Henry Rice showed that a computer really can't tell you much of significance about the way another program behaves. This result is called Rice's theorem, and it's basically encapsulated in the phrase, "You cannot analyze many aspects of computation with computation."

The halting problem and Rice's theorem tell us that most questions about a program's behavior are undecidable, and that there is a hard theoretical limit on what computers can and can't do, even with unbounded computing power. Knowing this limit orients us in terms of what we should be attempting to solve. If a PhD student were trying to design an algorithm to do any of these tasks, their supervisor could point out that the endeavor is futile and that they'll have to find a practical workaround. So even in 2,000 years, with supercomputers more advanced than we can imagine, humans will still be looking at that little spinning circle.

I'd like to give a shout out to all of the amazing patrons who make these videos possible. If you enjoyed this video and would like to see more like it, please consider subscribing to my Patreon page. You can find the link in the description of this video. Bye.
Info
Channel: Up and Atom
Views: 182,219
Id: sG0obNcgNJM
Length: 20min 24sec (1224 seconds)
Published: Wed Jun 21 2023