Marvin Minsky - Artificial Intelligence

Captions
[Music] Thank you, Marvin. I usually start lectures by saying "any questions?" — there are many things we could talk about today. Well, I was listening to you people talking about universes, and it seems to me that there's one possibility that is so simple that people don't discuss it. Certainly a question that occurs in all religions is: who created the universe, and why, and what's it for? And one thing that preoccupies me a little bit is the question of why things exist, and I think the answer is that that's an extra hypothesis that doesn't make any sense. You see, it's all right to say "does this glass of water exist?", but if you ask "does X exist?", that means "is X one of the things in the universe?" So to say that the universe exists is silly, because it's saying the universe is one of the things in the universe. There's something wrong with that idea. Well, if you carry that a little further, then it seems to me that it doesn't make any sense to have a predicate like "where did the universe come from?" or "why does it exist?"

So another view, which might be related to the famous many-worlds theory, is that this is just a possible universe, and there are lots of possible universes, and there isn't any reason to say that this one is distinguished except that we're in it. So if you think of a computer game or something like that, here's a little sequence of reductions. First, imagine that there's a computer that simulates some little world, and there are people in it, and the people are pretty smart — we'll talk about how they could get smart later — and after a while one of the people asks the other, "Why are we in this world, and where did it come from?" And of course they're fooling themselves: there isn't any world, it's just a simulation; somebody wrote the program. But if they got that far, then they could ask who wrote the program. Okay, now step two: suppose somebody wrote the program and didn't run it, so there's no computer. Well, still, once you've written the program — and, let's say, a description of the computer it runs on — then everything that happens in that world is determined. As a matter of logic, the computer and the program logically imply what people will be in it and what they'll do and what they'll ask. And so all of a sudden there doesn't have to be a program: it's just a possible computation. And then the next step is, you don't even have to think of it — there is such a computation.

So anyway, the question is, getting back to our universe: what's going on here? Why is the universe we're in the way it is, whether it exists or not? And now there's this famous thing called the anthropic principle, and some physicists take it seriously, and some think that there's something wrong with it and that they shouldn't think that way. But clearly we couldn't ask that question unless the possible program that we're running had certain properties, and one of the properties that you need to have an intelligent creature is — well, we know how we got here in this universe: we evolved, and you can't evolve unless you have self-reproducing machines, or approximately, something like DNA or RNA and some way to copy it. And there are other constraints on the universe that come from the anthropic principle. You couldn't have a bubble-like universe where suddenly things are exploding and creating stuff — so you can't have too many bubbles — and in fact I think you have to have something like conservation laws, because if energy weren't conserved, then every now and then some configuration would appear and everything would blow up.
So the real question is why the universe has to have laws at all, I suppose. But one thing I've noticed is that when physicists try to explain to the public — this is the old days, before things got sophisticated — they would make a great fuss about what they call the uncertainty principle and so forth, and they'd say, you know, the world isn't the way Newton described it, things are probabilistic and indeterminate. And the curious thing is, it seems to me they very rarely mention the opposite side of that — the reason why Pauling was so interested in being a chemist — which is that nobody talks about what, instead of the uncertainty principle, I'd like to call the certainty principle. If you take a Newtonian solar system, it won't last, or at least not if it's a big complicated one. Gerry Sussman and Jack Wisdom once simulated the outer planets for a while, and they decided they were stable for at least three or four billion years, but it looked like the orbit of Pluto was chaotic. They didn't simulate the inner planets, and for all we know — for all I know — it might be that the influence of Jupiter and Saturn, which are really big, is enough that at some point in that time the Earth will get enough energy to be thrown out into space.
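Purely as an illustration of the kind of long-horizon orbital integration Minsky is referring to — this is my own minimal sketch, not Sussman and Wisdom's Digital Orrery, and the units, time step, and initial conditions are arbitrary choices — here is a leapfrog (symplectic) integrator for a single planet orbiting the Sun:

```python
# A highly simplified sketch (my illustration, not the Sussman/Wisdom study) of
# long-run orbital integration: a kick-drift-kick leapfrog integrator for one
# planet around the Sun, in units of AU and years, so G*M_sun = 4*pi**2.
# Real studies integrate many bodies over billions of years and look for
# chaotic divergence, e.g. in Pluto's orbit.
import math

GM = 4 * math.pi ** 2          # gravitational parameter of the Sun, AU^3 / yr^2

def accel(x, y):
    """Acceleration of the planet toward the Sun at the origin."""
    r3 = (x * x + y * y) ** 1.5
    return -GM * x / r3, -GM * y / r3

def leapfrog(x, y, vx, vy, dt, steps):
    """Kick-drift-kick leapfrog; conserves energy well over long spans."""
    ax, ay = accel(x, y)
    for _ in range(steps):
        vx += 0.5 * dt * ax
        vy += 0.5 * dt * ay
        x += dt * vx
        y += dt * vy
        ax, ay = accel(x, y)
        vx += 0.5 * dt * ax
        vy += 0.5 * dt * ay
    return x, y, vx, vy

if __name__ == "__main__":
    # Start roughly like the Earth: 1 AU out, circular speed 2*pi AU/yr.
    x, y, vx, vy = leapfrog(1.0, 0.0, 0.0, 2 * math.pi, dt=0.001, steps=100_000)
    print(x, y)   # after ~100 simulated years the planet still sits near a 1 AU circle
```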
But generally, in classical physics you can't have things like atoms with things in orbits that stay there a long time. So what I'm saying is, nobody explains to the children in grade school or high school, when they talk about quantum mechanics, that in fact quantum theory is why things are so stable and certain. They'll emphasize that if you have a DNA molecule there's a possibility that one of the carbon atoms will suddenly tunnel out and land in Arcturus, but the fact is that at room temperature a molecule of DNA is good for several billion years, and that's one of the reasons why evolution is possible. So it's sort of interesting that in communicating with the public, most physicists are so entranced with uncertainty — I mean, have you ever heard a physicist explain that quantum mechanics is why things don't jump around? Sort of a funny thing.

And I think this question of why the universe is the way it is, if you think of a collection of possible ones — maybe we should take the anthropic principle more seriously and say: if you're thinking of various models, which of them are stable enough to support the kind of evolution that we know took on the order of billions of years? Apparently the first cells appeared pretty quickly after the Earth got cool enough — I think I've heard people estimate that it's less than 100 million years — and then it took about 3 billion years to get to really good cells, and animals and plants appeared about 400 or 500 million years ago, and then things took off very rapidly. But evidently there was a long period where nothing much happened.

Anyway, somebody asked about intelligence and emotions — so a completely different subject: why don't we have good theories of what the mind is and how it works? I think in the last few years some people have started to get good theories of the nature of thinking and so forth; before computer science there weren't particularly good theories. It seems to me that to understand something like a brain you have to have hundreds of concepts, like data structures and different kinds of memories and so forth. And if you look at psychology, it seems to me it has a very strange history. There are a few thinkers like Aristotle who sort of start out with some pretty good and pretty bad ideas about psychology, and then, as far as I know, nothing happens. The ideas of Aristotle in the book on rhetoric seem to me about as good as the essays by John Locke and Hume and the English philosophers, even Spinoza. So a couple of thousand years elapsed without any progress in psychology, and although physics started up around that time with Galileo, and we've had three centuries of rapid progress in physics, it seems to me that psychology didn't start till around 1870 or 1880. The first recognizable idea that the mind is some sort of mechanistic system — that you can make theories, try to predict things, and do experiments to confirm them — is barely a century old, and even that stuff didn't make much progress till around 1950, when we started to see cybernetics in the 40s and artificial intelligence in the 50s and 60s, and something called cognitive science starting to appear in that same period. So the mysterious question is: why didn't science have ideas about thinking until recently? We don't really know very much about how the mind works because the history is so short.

My favorite example, I think, is that in the late 1930s a biologist named Jean Piaget in Switzerland started to observe the behavior of his children, taking notes and asking them questions, and over the next ten years of watching these kids growing up he wrote down several hundred little theories about what sort of processes are going on in their brains and how each of them develops from some others, and so forth. He wrote about 20 books on this stuff, all based on observing three children carefully, and although people nitpick about the observations he made, the general structure seems to have held up. And strangely enough, almost all of these processes as described seem to happen at about the same rate, at the same ages, in all the cultures that have been studied, which is a whole lot. But the question isn't whether Piaget was right or wrong, but why wasn't there someone just like him 2,000 years ago? What is it about the nature of cultures and science and so forth that no one thought to observe children and try to figure out how they work? Because Piaget didn't need cyclotrons; what he used was glasses of water and pieces of candy. He'd make little arrangements of candy of different forms — sometimes you'd have 5 candies close together and 5 spread out — and children younger than 5 years old will very much prefer the ones that are spread out. The conjecture is that they're estimating quantity by the extent, whereas seven-year-olds say, "they're all the same; I'll take these because they're easier to pick up." So what happens between 5 and 7? He tried to make some theories of that, and some of the theories were okay. Seymour Papert was a research assistant for him, and he had another theory which I liked better, and we studied some children. Anyway, the point is that theories of the mind are very recent.

Psychology really went through many phases that we now think are very silly and shallow, but the reason for that was that people didn't have the idea of data structures. There was something called mathematics, which worked very well for things in physics, but it turns out that the kinds of mathematics that developed in the years before computers were not good at describing in detail what would happen in a process that involved maybe a dozen laws. In physics we were successful because we discovered ways to account for very large classes of phenomena with just three laws, like Newton, or four forces, or whatever, and the number of assumptions is always less than 10 or so.
And then there's some chance that mathematics will give you ways to do logical reasoning and figure out exactly what the consequences of those simple laws might be. But if you take 20 assumptions, mathematics is dead. There's a beautiful subject that both physicists and mathematicians love called group theory, and group theory is a little mathematical thing where you make about five assumptions, and from those five assumptions you get lots of problems and theories that people spend their lives on. There are some problems in group theory that have been unsolved for a hundred years, though there are many that have been solved. But my point is that if you make five assumptions — about the same things, but the assumptions are different — then you're on the edge of what people can understand. At the other end, if you write a computer program to do something and it has a hundred lines of code, then we don't have any way to figure out the consequences of that in general; it's just too hard. And it seems to me that the reason why psychology didn't get anywhere until around the 1950s was that with the appearance of what we now call computer science, several hundred new ideas appeared that no one had ever had.

So very often people think of computer science as the science of what computers do, and I think it is quite different: computer science is a new way to describe and think about complicated systems, and it comes with a huge library of new ideas. For example, sometimes I hear brain people saying, "Well, it looks like memory is located in the hippocampus," and then another one says it's not located in the hippocampus, it's only stored there for a little while and then somehow it's moved into somewhere in the cortex — or the other cortex; the hippocampus has four or five cortices of its own — and they talk about working memory and long-term memory and short-term memory. Now, even in the year 2002, most of those so-called brain scientists have never heard of cache — I don't mean money cash; C-A-C-H-E. If you buy a computer today, you know that it has a big memory that's slow, called a hard disk or tape or whatever, and it has another memory that's pretty fast, called RAM; maybe it has a million words of that fast memory, or 50 million now, since it only costs a few cents per megabyte, maybe less than a dollar. And the average computer also has something called a cache for instructions, where it remembers the last few things it did in case it needs to do them again, so it doesn't have to go and look somewhere else for them. It's got several kinds of cache — it has a front-end cache and a back-end cache, and I don't remember what those are — but the point is that in computer science there are ideas about dozens of kinds of memory, and dozens of ways things can be stored in memory. You can store them as if-then rules: if this happens, do that. You can store them as things called semantic networks, which are little pieces of information connected by links that themselves have properties. You can store things in what are called neural networks, which are like semantic nets except that the links are dumb: this is connected to that one by a link that has some number, like 0.7. Neural nets are wonderful for learning certain things, but they're terrible for the rest of the machine, because the rest of the machine can't figure out what the neural net knows.
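The contrast Minsky draws between semantic-network links and neural-network links can be sketched in a few lines of code. This is my own illustration, not anything from the lecture: the relations, the weights, and the `explain` helper are all invented.

```python
# A minimal sketch contrasting the two representations Minsky mentions:
# semantic-network links carry meaningful labels and properties that other
# parts of a program can inspect, while neural-net-style links carry only
# a bare number such as 0.7.

# Semantic network: edges are labeled, so the rest of the system can reason
# about *why* two items are connected.
semantic_net = {
    ("bird", "animal"): {"relation": "is-a", "source": "taxonomy"},
    ("bird", "wings"):  {"relation": "has-part", "count": 2},
}

# Weighted store: edges are just numbers; nothing else in the system can ask
# what the 0.7 "means" -- it can only use it in arithmetic.
weighted_net = {
    ("bird", "animal"): 0.7,
    ("bird", "wings"):  0.9,
}

def explain(net, edge):
    """Try to say something about a connection."""
    value = net[edge]
    if isinstance(value, dict):                      # semantic link: inspectable
        return f"{edge[0]} --{value['relation']}--> {edge[1]}"
    return f"{edge[0]} --{value}--> {edge[1]} (weight only, no stated meaning)"

if __name__ == "__main__":
    print(explain(semantic_net, ("bird", "animal")))  # bird --is-a--> animal
    print(explain(weighted_net, ("bird", "animal")))  # bird --0.7--> animal (...)
```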
So some years ago Papert and I wrote a book about the limitations of neural nets, although we didn't talk about the really serious limitation, which is that the more of the brain that is used to learn in that particular way, the less the brain will be able to think and reason about what it has learned. So one thing that we learned in artificial intelligence, or computer science, is that there are some fast tricks which are sort of effective and useful, but in the long run they're dead ends, because the machine doesn't understand what it knows.

So, John, where am I going with this? What I'm saying is that we're in an era in which people have just started to get what look to me like pretty good ideas about what thinking is and how it works, and we're still burdened by most of the world having one- or two-hundred-year-old bad ideas: that the way thinking works is that there are ideas, and they're associated with other ideas, and when you do something good it's reinforced, and there are traces of that, and so forth. So the world is really very thick with old, pre-computational theories of how the mind works, and in some ways it's almost harder now to get people to think about more sophisticated ways of representing knowledge and acquiring it. For example, when you learn something, the standard theory is that if somebody gives you a reward, then you're more likely to keep it in memory, and if they discourage you or don't reward you, then you throw it away. But suppose I happen to do something that works: I find a new way to hold a screwdriver to get a screw in without the screw falling off — one of the troubles with screws, the old ones with slots as opposed to the new ones with little crosses, is that they won't stick to the screwdriver — and after a while you learn how to do that. What is it you learn? You certainly don't learn the exact sequence of motions; you learn some higher-level representation. And we call that the credit-assignment problem: if something works, and you had to do 10 things in order to get it, what is it you remember? How do you figure out which part of your activity was relevant? In the old psychology theory they had a simple idea: the thing you did last was relevant. But of course the thing you did last was to put the thing down, because it was fixed. So there are all sorts of new ideas coming out.
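As a hedged illustration of the credit-assignment problem he describes — this is my example, not a model Minsky proposes — here is a toy comparison of the "credit the last thing you did" rule with an eligibility-trace style rule borrowed from reinforcement learning. The action names, reward, and decay factor are made up.

```python
# Toy sketch of credit assignment: ten actions preceded a success -- which
# ones get credit? The "old psychology" rule credits only the last action;
# an eligibility-trace rule spreads decaying credit over the recent sequence.

actions = ["grip", "align", "press", "turn", "turn", "turn",
           "steady", "turn", "turn", "put_down"]   # last act: put the tool down
reward = 1.0                                        # the screw went in

def last_action_credit(actions, reward):
    """Old rule: only the final action is reinforced."""
    return {a: (reward if i == len(actions) - 1 else 0.0)
            for i, a in enumerate(actions)}

def eligibility_trace_credit(actions, reward, decay=0.8):
    """Spread credit backward: recent actions get more, earlier ones less."""
    credit = {}
    for steps_back, a in enumerate(reversed(actions)):
        credit[a] = credit.get(a, 0.0) + reward * decay ** steps_back
    return credit

if __name__ == "__main__":
    print(last_action_credit(actions, reward))        # all credit goes to "put_down"
    print(eligibility_trace_credit(actions, reward))  # "turn" accumulates the most credit
```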
What has not happened, strangely, is this. The first experiments to get computers, or computer programs, to simulate human problem solving actually started around the early 1950s — I'd even say just before computers became available to the general public. For example, there was an early computer at Princeton designed by John von Neumann; several copies of it were built. He was a mathematician who appears to be the first or second person who clearly had the idea of the modern computer; there was also a fellow in Germany, Konrad Zuse, who worked out this idea but couldn't get anyone to pay attention to him. And the main idea — the main difference between the modern computer and the computing machines that had come before — is that the old computing machines could perform the same kinds of operations: they could add numbers, they could store things here and there and get them out again. But what they did was in effect determined by a fixed program, usually by a set of punched cards, which would say: first do this, then do that, then do that; maybe if this happens, use this deck of cards, and if that happens, use a different deck. So that was the early computers. And what von Neumann and Zuse realized is: wouldn't it be better if you stored the program in the computer's memory, just like the data? They were thinking of the future rather than the present — and then maybe someday the computer could compute a new program for a new job and store it. And that made computers more powerful, and in the early 1950s, for the first time, computers got enough memory that there was actually room to store new programs in them, and some pioneers started — in 1954 or '55 or '56 — experiments where they actually wrote programs that wrote new programs that would then run. This led to the development of certain languages in which you could write programs that would write in the same language, so they could keep changing. There was a lot of progress, and by 1960 we had two major languages which were good at modifying themselves. Unfortunately these languages were a little bit unfamiliar to other people, who didn't see the great power of programs that could change themselves, and the world was overtaken by other, terrible languages like the famous C language or Fortran or Algol and so forth, which became universally popular and in which it's almost impossible to write programs that change themselves. That's just a parenthesis — is it Bentham's law that says the bad drives out the good? Whose law is it? Gresham's law. In modern software practice we see this. I can't understand why this 35-year-old language called UNIX has suddenly become popular, but the only thing I can think of is that the other operating systems got filled with so much garbage in the intervening 35 years that nobody could deal with them. It's not that UNIX is the latest thing; it's the last fossil that hasn't completely dissolved from the past, and they're starting over — and as far as I can see, they're starting over to make the same mistakes.
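As a rough modern sketch of the "programs that write programs" idea Minsky describes above — this is a loose Python analogue for illustration, not how the 1950s systems or the self-modifying languages he has in mind actually worked — a program can treat new code as ordinary data, generate it, and then run it:

```python
# A hypothetical illustration: a program writes the source of a new function
# as text, then compiles and runs the code it just wrote.

def write_power_function(n):
    """Generate the source text of a brand-new function that raises x to the n."""
    source = f"def power_{n}(x):\n    return x ** {n}\n"
    return source

def compile_and_load(source, name):
    """Turn generated source back into a callable, as if the program had extended itself."""
    namespace = {}
    exec(source, namespace)          # the program runs code it just wrote
    return namespace[name]

if __name__ == "__main__":
    src = write_power_function(3)    # the program "writes" a cubing routine
    cube = compile_and_load(src, "power_3")
    print(src)
    print(cube(5))                   # 125
```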
Well, in the early days of this thing called artificial intelligence, we started to try to make programs that would do very advanced things — maybe that was a mistake. One of the first programs that I was involved with was a program that would prove theorems in Euclidean geometry. It just crossed my mind — I'm curious how many children learn Euclidean geometry anymore. What Euclidean geometry used to be was a rather amazing subject where you would learn a dozen assumptions — like that two points determine a unique line, or that two lines are either parallel and don't intersect, or if they intersect, they intersect in just one place, or that two triangles are the same in all respects if they're the same in two sides and the angle between them, and so forth. Euclid — I don't know when Euclid was, probably 600 BC. This was a wonderful subject, because you were in a world where the assumptions were very simple and there were only a small number of them, and you used a logic which was very clear, and so it's a beautiful thing: you get all sorts of interesting truths and falsehoods, and you can check them and so forth. For some reason a large fraction of humans find it very hard to deal with a subject where you don't have to know much. It's a great mystery to me how you can be bad at mathematics when mathematics is the simplest of all things, yet you get people who get high grades in history or social sciences, where nobody has the slightest idea of what's going on and you can't understand anything — but that's a side subject. How could a person be bad at mathematics? It must be that somehow they get a wrong model of what the activity is. I remember once trying to tutor a student who couldn't prove theorems, and I said, "What's your trouble?" He said, "Well, I think I was sick the day the teacher explained how you prove theorems." This poor kid thought that there was some special way to do it. Of course, the way you do it is by using general common-sense knowledge to figure out, if two things are related, which one is causing the other, and that sort of thing. And in fact, when I told the student that there wasn't any known way to do this yet, he brightened up and got better at it, because he really seemed to think there was something he didn't know that he needed to know, and that it wasn't a matter of drawing on the general resourcefulness that you'd expect every normal person to have. If you don't know you're supposed to be a little bit original, maybe it's hard.

Anyway, I was fussing with geometry a little bit, and I wrote down a draft of how you would make a program that would in fact take some statement in Euclidean geometry and find a proof for it. That was in the late 50s, and by 1960 a little group at IBM Research managed to write one of these programs, which in fact was pretty good and could compare pretty well to a high school student's ability at that sort of thing. Shortly after that we had a student who wrote a program that solved symbolic problems in calculus — integral calculus — and it did well enough on an old computer that it could have gotten an A in the MIT first-year calculus course: it couldn't do the word problems, but it did quite well on the calculus. We told it about fifty or a hundred little rules of thumb that you use in different situations, and it worked very well. So here we were aiming at what's considered to be fairly high-quality human performance. Before long, in fact, that program had evolved into a program that was better than any human in the world, and there was a commercial version of it called Macsyma — from Project MAC, the symbolic mathematics program — which in fact put a lot of mathematicians out of business who had been involved in trying to find new ways to integrate functions. So that's an exciting story. But it couldn't do word problems, so a couple of years later one of our graduate students decided he would try to get a computer to solve less formal problems, like: "John was four years older than Joseph when Joseph was twice as old as John had been ten years ago." I don't know if that's a real problem, but most students — even high school students — have considerable trouble with that. If you can get your head clear, which is hard, you end up with two equations, one about John's age and one about the other guy's age, and the two equations are different, and if you're lucky they have a unique solution. And this program was able to take pretty complicated English sentences, figure out what equations they were talking about, and solve the equations. If it was about anything else, it couldn't do anything, and we tried to improve that program for a while, but it ended up being a dead end, in the sense that if you asked the program to do anything else, it didn't know what the words meant.
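The program Minsky mentions (presumably Bobrow's STUDENT, though he doesn't name it) reduced such sentences to simultaneous equations. As a hedged illustration of just the final step — the hard part, parsing the English, is left out entirely, and the age problem below is my own made-up example, not the one quoted above — here is a toy solver for two linear equations in two unknowns:

```python
# Toy illustration (mine, not Bobrow's STUDENT program) of the final step:
# once the English has been turned into two linear equations, solving is easy.
# Made-up problem: "John is four years older than Joseph, and ten years ago
# John was twice as old as Joseph was then."
# With J = John's age and P = Joseph's age:
#   J - P  = 4
#   J - 2P = -10

def solve_2x2(a1, b1, c1, a2, b2, c2):
    """Solve a1*x + b1*y = c1 and a2*x + b2*y = c2 by Cramer's rule."""
    det = a1 * b2 - b1 * a2
    if det == 0:
        raise ValueError("no unique solution")
    x = (c1 * b2 - b1 * c2) / det
    y = (a1 * c2 - c1 * a2) / det
    return x, y

if __name__ == "__main__":
    john, joseph = solve_2x2(1, -1, 4,     # J - P  = 4
                             1, -2, -10)   # J - 2P = -10
    print(john, joseph)                    # 18.0 14.0
```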
So people started to use computers for other specialized problems, and by 1980 we had tens of thousands of programs, each of which was quite good at doing some very special thing, but there was no program that could do the kinds of things you'd expect your five-year-old to do — like, five-year-olds can beat you in an argument if you're wrong enough and the kid is right enough. So, to make a long story short, we've regressed from calculus and geometry and high school algebra and so forth, and we're trying to get people to work on common-sense problems, the sort that every four- or five-year-old can do. And what's interesting is that although there are perhaps a hundred thousand people writing expert programs — maybe even a million around the world — expert, specialized programs, I've only been able to find about a dozen people so far in the world who are interested in trying to do the simple, everyday common-sense jobs of the sort that all children can do. Thank you.
Info
Channel: The Artificial Intelligence Channel
Views: 24,386
Rating: 4.868217 out of 5
Keywords: singularity, transhumanism, ai, artificial intelligence, deep learning, machine learning, immortality, anti aging
Id: CIoddZ1NOVM
Length: 33min 51sec (2031 seconds)
Published: Mon Aug 21 2017