Stephen Wolfram: Computational Universe | MIT 6.S099: Artificial General Intelligence (AGI)

Reddit Comments

He mentioned that they parse the queries (natural language) on WolframAlpha using some ideas from his book A New Kind of Science.

I wonder which ideas those are. Is he using cellular automata to parse natural language? How is that accomplished? I've not seen a paper doing that.

5 points · u/[deleted] · Mar 03 2018

My God, that is unbelievably intelligent. Stephen is really looking at machine learning and computation from a physics point of view.

5 points · u/deepfiz · Mar 03 2018

I've had trouble really understanding his notion of computational equivalence. It basically states that sufficiently complex (computationally irreducible) computation tends to be of maximal sophistication. E.g. he claims that it will actually be difficult to tell apart an artifact of an alien civilization from the products of geological or biological processes, essentially because regularity is very common in maximally sophisticated computation, e.g. Saturn's atmosphere forms very regular hexagonal patterns at the poles. Though interesting, this notion strikes me as very impractical. The examples he provides in his most recent podcast episode seem rather to stem from the statistical fact that simple patterns tend to re-emerge in different situations because of their simplicity. As a pattern becomes more complicated and conveys symbolic, hierarchical representations, it should become increasingly likely that it is indeed an artifact of intelligence, because no biological or geological processes are known to produce such things, except for intelligence. Computational equivalence might be an interesting observation about arbitrary programs mined from program space, but real-life programs are not arbitrary programs. They have very distinct functional roles; they are very limited. Some are selected by evolution or even produced by intelligences as they optimize their goal functions. They're not just random points in program space.

1 point · u/SomeoneInTheComments · Mar 04 2018
Captions
Welcome back to 6.S099: Artificial General Intelligence. Today we have Stephen Wolfram. Wow — that's the first time I didn't even get started and you're already clapping. In his book A New Kind of Science he has explored and revealed the power, beauty, and complexity of cellular automata, simple computational systems from which incredible complexity can emerge. It's actually one of the books that really inspired me to get into artificial intelligence. He created the Wolfram Alpha computational knowledge engine, and created Mathematica, which has now expanded to become the Wolfram Language. Both he and his son were involved in helping analyze and create the alien language for the movie Arrival, for which they used the Wolfram Language. Please again give Stephen a warm welcome.

Boy — so I gather the brief here is to talk about how artificial general intelligence is going to be achieved; is that the basic picture? So maybe I'm reminded of a story I don't think I've ever told in public, about something that happened just a few buildings over from here. This was 2009, and Wolfram Alpha was about to arrive on the scene. I assume most of you have seen Wolfram Alpha — how many of you have used Wolfram Alpha? Okay, that's good. I had long been a friend of Marvin Minsky's, and Marvin was a pioneer of the AI world, and I had seen for years question-answering systems that tried to do sort of general-intelligence question answering. So I was going to show Marvin Wolfram Alpha. He looks at it and he's like, okay, that's fine, whatever. I said, no, Marvin, this time it actually works — you can try real questions; this is actually something useful, not just a toy. And it was interesting to see: it took about five minutes for Marvin to realize that this was finally a question-answering system that could actually answer questions that were useful to people.

So one question is, how did we achieve that? You go to Wolfram Alpha and you can ask it — I don't know, some random question: what is the population of Cambridge? Actually, here's a question — let's try that. What is the population of Cambridge? It's probably going to figure out that we mean Cambridge, Massachusetts; it's going to give us some number, it's going to give us some plot. Actually, what I want to know is the number of students at MIT divided by the population of Cambridge — let's see if it can figure that out. Okay, that's kind of interesting — ah, that is interesting: it guessed that we were talking about Cambridge University as the denominator there, so it gives the number of students at MIT divided by the number of students at Cambridge University. I'm actually surprised. Let's see what happens if I say Cambridge, MA — now it will probably fail horribly — no, that's good, okay. That's interesting: that's a plot, as a function of time, of that fraction. So anyway, I'm glad it works.
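(For reference — a minimal sketch, not what was typed in the lecture: the same free-form queries can be issued programmatically from the Wolfram Language itself, assuming an internet connection to the Wolfram|Alpha servers; the Cambridge entity below uses the standard canonical form.)

    (* free-form Wolfram|Alpha queries, as typed on the website *)
    WolframAlpha["population of Cambridge, Massachusetts"]
    WolframAlpha["number of students at MIT divided by population of Cambridge MA"]

    (* the same quantity via curated entities rather than a rendered results page *)
    cambridge = Entity["City", {"Cambridge", "Massachusetts", "UnitedStates"}];
    cambridge["Population"]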
So one question is how we managed to get this to work. Many things have to work in order to get stuff like this to work: you have to be able to understand the natural language, you have to have the data sources, you have to be able to compute things from the data, and so on. One of the things that was a surprise to me, in terms of natural language understanding, was that the critical thing turned out to be just knowing a lot of stuff. The actual parsing of the natural language is, I think, kind of clever — we use a bunch of ideas that came from my New Kind of Science project and so on — but the most important thing is that just knowing a lot of stuff about the world is really important to being able to understand natural language in a useful situation. I think the other thing is having access to lots of data.

Let me show you a typical example of what is needed. I asked about the ISS, and hopefully it'll wake up and tell us something — come on, what's going on here — there we go. Okay, so it figured out that we're probably talking about a spacecraft, not a file format, and now it's going to give us a plot that shows where the ISS is right now. To make this work we obviously have to have some feed of radar tracking data about satellites, which we have for every satellite that's out there. But it's not good enough to just have that feed; you also have to be able to do celestial mechanics to work out where the ISS actually is right now, based on the orbital elements that have been deduced from radar. And then if we want to know things like — okay, it's not currently visible from Boston, Massachusetts; it will next rise at 7:36 p.m. on Monday, today. So this requires a mixture of data about what's going on in the world together with models of how the world is supposed to work, being able to predict things, and so on.

I think another thing I realized about AI from the Wolfram Alpha effort is this: one of the earlier ideas for how one would achieve AI was, let's make it work kind of like brains do and let's have it figure stuff out — so if it has to do physics, let's have it do physics by pure reasoning, like people at least used to do physics. But in the last 300 years we've had a different way to do physics, one that wasn't based on natural philosophy; it was instead based on things like mathematics. So one of the things we were doing in Wolfram Alpha was to cheat relative to what had been done in previous AI systems: instead of using reasoning-type methods, we just say, okay, we want to compute where the ISS is going to be; well, we've got a bunch of equations of motion, which correspond to differential equations; we're just going to solve the equations of motion and get an answer. That's leveraging the last 300 years or so of exact science, rather than trying to make use of human reasoning ideas.

I might say, in terms of the history of the Wolfram Alpha project: when I was a kid, a disgustingly long time ago, I was interested in AI kinds of things — in fact I was kind of upset recently to find a bunch of stuff I did when I was 12 years old, trying to assemble a pre-version of Wolfram Alpha way back before it was technologically possible. It's a reminder that one just does the same thing one's whole life, so to speak, at some level. But what happened was: I started off working mainly in physics, and then I got involved in building computer systems to do things like mathematical computation, and I then got interested in, okay, can we generalize this stuff? Can we really make systems that can answer sort of arbitrary questions about the world? The promise would be: if there's something that is systematically known in our civilization, make it automatic to answer questions on the basis of that systematic knowledge.
Back in around the late 1970s and early 1980s, my conclusion was that if you wanted to do something like that, the only realistic path was to build something much like a brain. So I got interested in neural nets, and I tried to do things with neural nets back in 1980, and nothing very interesting happened — well, I couldn't get them to do anything very interesting. So I had the idea that the only way to get the kind of thing that now exists in Wolfram Alpha was to build a brain-like thing. And then many years later, for reasons I can explain, I came back to this and realized it actually wasn't true that you had to build a brain-like thing — mere computation was sufficient — and that was what got me started actually trying to build Wolfram Alpha.

When we started building Wolfram Alpha, one of the things I did was go on a sort of field trip to a big reference library. You see all these shelves of books, and the question is: can we take all of the knowledge that exists in all of these books and actually automate being able to answer questions on the basis of it? I think we've pretty much done that now, at least for the books you find in a typical reference library. It looked kind of daunting at the beginning, because there's a lot of knowledge and information out there, but it turns out there are a few thousand domains, and we've steadily gone through and worked on these different domains.

Another feature of the Wolfram Alpha project: I've been involved a lot in doing basic science and in trying to have grand theories of the world, but one of my principles in building Wolfram Alpha was not to start from a grand theory of the world — that is, not to start from some global ontology of the world and then try to build down into all these different domains — but instead to work up from having hundreds, then thousands, of domains that actually work, whether it's information about cars or information about sports or information about movies or whatever else, building up from the bottom in each of these domains, and then finding that there were common themes across them that we could build into frameworks, and then constructing the whole system on the basis of that. That's kind of how it's worked, and I can talk about some of the actual frameworks that we ended up using.

But maybe I should explain a little bit more. One question is: how does Wolfram Alpha actually work inside? The answer is, it's a big program — the core system is about 15 million lines of Wolfram Language code and some number of terabytes of raw data. The thing that made building Wolfram Alpha possible was this language, the Wolfram Language, which started with Mathematica, which came out in 1988, and has been progressively growing since then. So maybe I should show you some things about the Wolfram Language — it's easy to use; MIT has a site license for it, you can use it all over the place, you can find it on the web, etc., etc.

Okay, the basics. Let's start off with something like — let's make a random graph, and let's say we have a random graph with 200 nodes and 400 edges. Okay, so there's a random graph.
The first important thing about the Wolfram Language is that it's a symbolic language, so I can just pick up this graph and, say, do some analysis of it — the graph is just a symbolic thing that I can do computations on. Or — another good thing to always do is get a current image; there we go — and now I could do some basic thing like edge-detect that image. Again, this image is just a thing that we can manipulate: we could take the image and partition it into little pieces and do computations on those, or, say, sort each row of the image and assemble the image again — whoops — assemble that image again, and we'll get some mixed-up picture. If I wanted to, I could make that the current image and make it dynamic, so now it's just running that code, hopefully, in a little loop — and there, we can make that work. One general point here is that an image, for us, is just a piece of data like anything else. If we just have a variable, a thing called x, it just says: okay, that's x — I don't need to know a particular value; it's just a symbolic thing that corresponds to a thing called x.

Now, what gets interesting when you have a symbolic language is that we're interested in having it represent stuff about the world, as well as abstract kinds of things. I can abstractly say, you know, find some funky integral — that's representing algebraic kinds of things using symbolic variables — but I could also just say something like Boston, and Boston is another kind of symbolic thing. If I ask what it really is inside, it's the city Boston, Massachusetts, United States. Actually, notice that when I typed that in I was using natural language, and it gave me a bunch of disambiguation: it said, assuming Boston is a city, assuming Boston, Massachusetts — use Boston, New York, or... okay, let's use Boston in the Philippines, which I've never heard of, but let's try using that instead. And now if I look at that, it'll say it's Boston in some province of the Philippines, etc. Now I might ask of that: what's the population? Okay, it's a fairly small place. Or I could say, for example, let me do a GeoListPlot from that Boston to — now let's type in Boston again and have it use the default meaning of the word Boston — and let's join those up, and this should show me a plot. There we go: there's the path from the Boston that we picked in the Philippines to the Boston here. Or I could ask it the distance from one to the other, or something like that.

One of the things we've found really, really useful in the language is, first of all, having a way of representing stuff about the world — like cities, for example. Let's say I want to do something with cities: let's say capital cities in South America. Notice this is a piece of natural language; it will get interpreted into precise symbolic Wolfram Language code that we can then compute with, and that will give us the capital cities in South America as a list of entities.
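(The notebook input itself isn't captured in the transcript; a rough reconstruction of these image and geography steps might look like the following — the Mandrill test image stands in for the live CurrentImage[] capture, and the entity and option forms are my best guesses at what was typed.)

    img = ExampleData[{"TestImage", "Mandrill"}];            (* stand-in for CurrentImage[] *)
    EdgeDetect[img]
    ImageAssemble[Map[Blur, ImagePartition[img, 64], {2}]]   (* operate on tiles, then reassemble *)
    Image[Sort /@ ImageData[img]]                            (* sort each row of pixels, as in the demo *)

    (* cities as symbolic entities we can compute with *)
    bostonMA = Entity["City", {"Boston", "Massachusetts", "UnitedStates"}];
    bostonPH = Interpreter["City"]["Boston, Philippines"];   (* natural-language disambiguation *)
    bostonPH["Population"]
    GeoListPlot[{bostonPH, bostonMA}, Joined -> True]        (* the path between the two Bostons *)
    GeoDistance[bostonPH, bostonMA]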
I could, for example, say FindShortestTour — oops, no, I don't want to do that; what I want to do first is show the geo positions of all those cities on line 21. So now it will find the geo positions, and now it will compute the shortest tour: that's saying there's a 10,000-mile traveling-salesman tour around those cities. So I could take those cities that were on line 21, order the cities according to this tour, and then make another GeoListPlot of that, joined up, and this should now show us a traveling-salesman tour of the capital cities of South America.

It's sort of interesting to see what's involved in making stuff like this work. My goal has been to automate as much as possible of the things that have to be computed, and that means knowing as many algorithms as possible and also knowing as much data about the world as possible. I view this as a knowledge-based programming approach. A typical idea in programming languages is that you have a small programming language with a few primitives that are pretty much tied to what a machine can intrinsically do, and then maybe you have libraries that add on to that. My kind of crazy idea of many, many years ago has been to build an integrated system where all of the stuff about different domains of knowledge is just built into the system, and designed in a coherent way. That has been the story of my life for the last thirty years: trying to keep the design of the system coherent even as one adds all sorts of different areas of capability.

We can dive into all sorts of different kinds of things here, but as an example — let's try, how about this: is that a bone? I think so; that's a bone. So let's try that as a mesh region and see if that works — this will now use a completely different domain of human endeavor — oops, there are two of those bones; let's try the humerus, and the mesh region for that, and now we should have a bone here. Okay, there's a representation of a bone. We could, for example, take the surface area of that, in some units. Or let's do a much more outrageous thing: let's take RegionDistance, the distance from that bone to a point, say {0, 0, z}, and make a plot of that distance with z going from — I have no idea where the bone is, but let's try something like this. Okay, that was really boring; let's try again. What this is doing — again, a whole bunch of stuff has to work in order for this to operate: this is some region in 3D space that's represented by some mesh, and you have to do the computational geometry to figure out where it is. If I want, let's try AnatomyPlot3D and say something like left hand, and now it's going to show us probably the complete data that it has about the geometry of the left hand. There we go — there's the result, and we could take that apart and start computing things from it. So there's a lot of computational knowledge that's built in here.
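(Again a sketch rather than the literal notebook input: the curated class, property, and entity names below — "SouthAmerica", "CapitalCity", "LeftHumerus", "LeftHand" — are my guesses at the canonical forms, and the plot range for the distance curve is arbitrary.)

    (* traveling-salesman tour of the South American capitals *)
    capitals = EntityValue[EntityClass["Country", "SouthAmerica"], "CapitalCity"];
    positions = EntityValue[capitals, "Position"];           (* GeoPosition of each capital *)
    {length, order} = FindShortestTour[positions];
    GeoListPlot[capitals[[order]], Joined -> True]

    (* anatomy as computable geometry *)
    bone = AnatomyData[Entity["AnatomicalStructure", "LeftHumerus"], "MeshRegion"];
    Area[RegionBoundary[bone]]                               (* surface area of the bone *)
    dist[z_?NumericQ] := RegionDistance[bone, {0, 0, z}]
    Plot[dist[z], {z, -500, 500}]                            (* arbitrary range for z *)
    AnatomyPlot3D[Entity["AnatomicalStructure", "LeftHand"]]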
Let's talk a little bit about the modern machine-learning story. For instance, let's get a picture here — let's just say a picture of — I've got a favorite kind of animal; what about a panda? Okay, giant panda. Okay, there's a panda. Now let's try saying ImageIdentify — here we'll probably be embarrassed, but let's just see what happens if I say ImageIdentify on that, and now it'll hopefully wake up — this only takes a few hundred milliseconds — okay, very good: giant panda. Let's see what the runners-up were: let's ask for the ten runners-up in all categories for that thing. Okay, so a giant panda, a procyonid — which I've never heard of — a carnivore; are pandas carnivorous? They eat bamboo shoots — okay, so it was lucky there; it's really sure it's a mammal, and it's absolutely certain it's a vertebrate.

So you might ask how it figured this out, and then you can look under the hood. We have a whole framework for representing neural nets symbolically, and this is the actual model that it's using to do this. So there's a neural net, and we can drill down and see a piece of the neural net, and we can drill down even further into one of these and see — that's a batch-normalization layer, somewhere deep, deep inside the entrails of — not the panda, but of this thing. Okay, so now let's take that object, which is just a symbolic object, and let's feed it the picture of the panda — oops, I wasn't giving it the right thing; what did I just do wrong here? Okay, let's take this thing and feed it the picture of the panda, and it says giant panda. How about we do something more outrageous: let's take that neural net and only use, say, the first 10 layers — just take 10 layers of the neural net and feed it the panda — and now what we'll get is something from the insides of the neural net. I could, for example, just make those into images. Okay, so that's what the neural net had figured out about the panda after 10 layers of going through the net. And maybe it's interesting to do a feature-space plot of those intermediate things in the brain of the neural net, so to speak. What this is doing is dimension reduction on this space of images — it's not very exciting; it's probably mostly distinguishing these by total gray level — but that's showing us the space of different features of the insides of this neural net. What's also interesting to see here is the symbolic representation of the neural net: if you're wondering how that actually works inside, underneath it's using MXNet, which we happen to have contributed to a lot, and there's a bunch of symbolic layers on top of that that feed into it.
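(A sketch of those steps under a couple of assumptions: the panda picture is a local file of your own, and the published "Wolfram ImageIdentify Net V1" model is taken to be the network behind ImageIdentify.)

    panda = Import["panda.jpg"];                     (* any picture of a giant panda *)
    ImageIdentify[panda]                             (* best guess, as an entity *)
    ImageIdentify[panda, All, 10, "Probability"]     (* ten runners-up with probabilities *)

    (* the classifier is an ordinary symbolic net that we can open up *)
    net = NetModel["Wolfram ImageIdentify Net V1"];
    first10 = NetTake[net, 10];                      (* keep only the first 10 layers *)
    channels = first10[panda];                       (* raw activations after layer 10 *)
    imgs = Image /@ channels;                        (* one image per channel *)
    FeatureSpacePlot[imgs]                           (* dimension-reduce those intermediate images *)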
Maybe I can show you how you would train one of these neural nets — that's also kind of fun. We have a data repository that has all sorts of useful data, and one piece of data it has is a bunch of neural net training sets. This is the standard MNIST training set of handwritten digits. Okay, so there's MNIST, and you notice that each of these things here is just an image, which I could copy out, and I could, say, do a ColorNegate on that image — because it's just an image — and there's the result, and so on.

Now let's take a simple neural net, like LeNet, for example. Okay, so let's take LeNet, and then let's take the untrained initial evaluation network — so this is now a version of LeNet, a simple, standard neural net, that didn't get trained. For example, if I take that symbolic representation of LeNet and say NetInitialize, it will take that and just put random weights into LeNet. So if I take those random weights and I feed it a zero — I feed it that image of a zero — it will presumably produce something completely random in this particular case, right?

So that was just randomly initializing the weights. Now what I'd like to do is take the MNIST training set and actually train LeNet with it. Let's take a random sample of, say, a thousand examples — come on, why is it having to load it again? — there we go; so there's a random sample, that was on line 21. And now let me go down here — look, we can just take this thing here, the uninitialized version of LeNet, and say NetTrain of that with the thing on line 21, which was those thousand examples. So now what it's doing is running training — you see the loss going down and so on — it's running training on those thousand examples, and we can stop it if we want to. Actually, this is a new display, which is very nice; this is a new version of the Wolfram Language that's coming out next week, which I'm showing you, but it's quite similar to what exists today. One of the features of running a software company is that you always run the very latest version of things, for better or worse — and it's also a good way to debug it: it's supposed to come out next week, and if I find some horrifying bug, maybe it will get delayed. But let's try this — okay, now it says it's a zero. So this is now a trained version of LeNet, trained with that training data.
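(A compact version of that demo, assuming the "MNIST" data resource and the "LeNet" entry in the neural net repository are the ones being used on screen.)

    training = ResourceData["MNIST", "TrainingData"];         (* labeled handwritten digits *)
    sample = RandomSample[training, 1000];

    lenet = NetModel["LeNet", "UninitializedEvaluationNet"];  (* the architecture, untrained *)
    random = NetInitialize[lenet];                            (* random weights *)
    random[sample[[1, 1]]]                                    (* essentially a random guess *)

    trained = NetTrain[lenet, sample];                        (* train on the 1000 examples *)
    trained[sample[[1, 1]]]                                   (* now usually the right digit *)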
We could talk about all kinds of details of neural nets, but maybe I should zoom out and talk a little about the bigger picture as I see it. One question is what is, in principle, possible to do with computation. We're building all kinds of things — we're making image identifiers, we're figuring out where the International Space Station is, and so on — but what is, in principle, possible to compute? One of the places you can ask that question is when you look at models of the natural world. How do we make models of the natural world? A traditional approach has been: let's use mathematical equations to make models of the natural world. The question is, if we want to generalize that and ask what are all possible ways to make models of things, what can we say? I spent many years of my life trying to address that question, and basically what I've thought about a lot is this: if you want to make a model of a thing, you have to have definite rules by which the thing operates. What's the most general way to represent possible rules? Well, in today's world we think of that as a program. So the next question is, what does the space of all possible programs look like? Most of the time we're writing programs like the Wolfram Language — 50 million lines of code, a big complicated program built for a fairly specific purpose — but if we just look at the space of possible programs more or less at random, what's out there?

So I got interested, many years ago, in cellular automata, which are a really good example of a very simple kind of program. Let me show you one of these. These are the rules for a typical cellular automaton, and they just say: you have a row of black and white squares, you look at a square, ask what color it is and what color its left and right neighbors are, and decide what color the square will be on the next step based on that rule. Okay, really simple rule. So now let's take a look at what actually happens if we use that rule a bunch of times. We can take that rule — the 254 is just the binary digits that correspond to those positions in the rule — and do, say, 50 steps. If I run according to the rule I just defined, it turns out to be pretty trivial: it's just saying, if we start off with a black square, then whenever any neighboring square is black, make a black square. So we've used a very simple program and we've got a very simple result out. Okay, let's try a different program — we can change this and get a program that's one bit different — and now we get that kind of pattern.

So the question is, what happens in general? You might say, if you've got such a trivial program, it's not surprising you're just going to get trivial results out. But you can do an experiment to test that hypothesis: you can just say, let's take all possible programs — there are 256 possible programs based on these eight bits here — and let's take, say, the first 64 of those programs and make a table of the results we get by running them. So here we get the result, and what you see is that most of them are pretty trivial: they start off with one black cell in the middle and it just trails off to one side. Occasionally we get something more exciting happening, like here's a nice nested pattern, which, if we were to continue it longer, would make more and more detailed nesting. But then — my all-time favorite science discovery — if you go on and just look at these, after a while you find this one here, which is rule 30 in this numbering scheme, and that's doing something a bit more complicated. You say, well, what's going on here? We just started off with this very simple rule; maybe if we run rule 30 long enough it will resolve into something simpler. So let's try running it, say, 500 steps — and that's the result we get. Let's just make it full screen — okay, it's aliasing a bit on the projector, but you get the basic idea. This just started off from one black cell at the top, and this is what it made — and that's pretty weird.
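(The whole experiment is a line or two of Wolfram Language; roughly what was run:)

    RulePlot[CellularAutomaton[254]]                       (* the eight cases of the rule *)
    ArrayPlot[CellularAutomaton[254, {{1}, 0}, 50]]        (* 50 steps from a single black cell *)

    (* the first 64 elementary rules, 50 steps each *)
    Table[ArrayPlot[CellularAutomaton[r, {{1}, 0}, 50]], {r, 0, 63}]

    ArrayPlot[CellularAutomaton[30, {{1}, 0}, 500]]        (* rule 30 for 500 steps *)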
This is not the way things are supposed to work, because what we have here is just that little program down there, and it makes this big complicated pattern here. We can see there's a certain amount of regularity on one side, but, for example, the center column of this pattern is for all practical purposes completely random — in fact it was used as the random number generator in Mathematica and the Wolfram Language for many years; it was recently retired, after excellent service, because we found a somewhat more efficient one. So what do we learn from this? What we learn is that out in the computational universe of possible programs, it's possible to get, even with very simple programs, very rich, complicated behavior. That's important if you're interested in modeling the natural world, because you might think there are programs that represent systems in nature that work this way. It's also important for technology, because it says: let's say you're trying to find a program that's a good random number generator — how are you going to do that? Well, you could start thinking very hard, and you could try to write down all kinds of flowcharts about how this random number generator is going to work; or you can say, forget that, I'm just going to search the computational universe of possible programs and look for one that serves as a good random number generator. In this particular case, after you've searched 30 programs you'll find one that makes a good random number generator. Why does it work? That's a complicated story, and not one I think we can necessarily tell very well, but what's important is this idea that out in the computational universe there's a lot of rich, sophisticated stuff that can essentially be mined for our technological purposes. Whether we understand how it works is a different matter. It's like when we look at the natural world, the physical world: we're used to mining things — we started using magnets to do magnetic stuff long before we understood the theory of ferromagnetism — and similarly here, we can go out into the computational universe and find stuff that's useful for our purposes. Now, the world of deep learning and neural nets is a little bit like this: it uses the trick that there's a certain degree of differentiability there, so you can home in on something incrementally better, and for certain kinds of problems that works pretty well. The thing that I've done a lot is just exhaustive search in the computational universe of possible programs — just search a trillion programs and try to find one that does something interesting and useful for you.
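(Not from the lecture — just a toy illustration of what "searching programs for a random number generator" can look like: run each elementary rule, keep its center column, and rank the rules by how poorly that column compresses. A real search would use proper randomness tests rather than this crude proxy.)

    (* center column of an elementary cellular automaton, run from a single black cell *)
    center[rule_] := Flatten[CellularAutomaton[rule, {{1}, 0}, {1000, {{0}}}]]

    (* crude "randomness" score: how little the column compresses *)
    score[rule_] := ByteCount[Compress[center[rule]]]

    (* rank all 256 elementary rules; rule 30 should come out near the top *)
    TakeLargestBy[Range[0, 255], score, 5]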
Let me show you another example of that. I was interested a while ago in — I have to look something up here, sorry — in Boolean algebra, in the space of all possible mathematics. Let me just see here — I'm not finding what I wanted to find; it was a good example; I should have memorized this, but I haven't — there we go, there it is. So we've talked about looking at the space of all possible programs; another thing you can do is say: if you're going to invent mathematics from nothing, what possible axiom systems could be used in mathematics? That again might seem like a completely crazy thing to do — to just start enumerating axiom systems at random and see if we find one that's interesting and useful — but it turns out, once you have this idea that out in the computational universe of possible programs there's a lot of low-hanging fruit to be found, you can apply it in lots of places. The thing to understand is why we do not see a lot of engineering structures that look like this. The reason is that our traditional model of engineering has been to engineer things in a way where we can foresee what the outcome of our engineering steps is going to be. When it comes to something like this, we can find it out in the computational universe, but we can't readily foresee what's going to happen; we can't do a step-by-step design of this particular thing. Human engineering as it's been practiced so far has mostly consisted of building things where we can foresee, step by step, what the outcome of our engineering is going to be — we see that in programs, we see that in other kinds of engineering structures. So there's a different kind of engineering, which is about mining the computational universe of possible programs, and it's worth realizing there's a lot more that can be done, a lot more efficiently, by mining the computational universe than by just constructing things step by step as a human. For example, if you look for optimal algorithms for things — even something like sorting networks — the optimal sorting networks look very complicated; they're not things you would construct by step-by-step thinking in a typical human way. So if you're really going to have computation work efficiently, you're going to end up with these programs that are just mined from the computational universe; they make use of computation much more efficiently than a typical thing we might construct.

Now, one feature of this is that it's hard to understand what's going on, and there's actually a fundamental reason for that. In our efforts to understand what's going on, we get to use our brains, our computers, our mathematics, or whatever, and the question is this: this particular little program did a certain amount of computation to work out this pattern; can we outrun that computation and say, oh, I can tell that this particular bit down here is going to be a black bit, without having to go and do all that computation? It turns out — maybe as a digression — there's this phenomenon I call computational irreducibility, which I think is really common, and it's a consequence of this thing I call the principle of computational equivalence. The principle of computational equivalence basically says that as soon as you have a system whose behavior isn't fairly easy to analyze, the chances are that the computation it's doing is essentially as sophisticated as it could be. That has consequences: it implies that a typical thing like this will correspond to a universal computer that you can use to program anything.
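(A small illustration of the reducible/irreducible distinction, mine rather than the lecture's: for rule 254 there is a one-line formula for any cell, while for rule 30 no shortcut is known and you effectively have to run all the steps.)

    (* rule 254 is computationally reducible: cell x at step t is black exactly when Abs[x] <= t *)
    reducible[t_, x_] := Boole[Abs[x] <= t]
    Last[CellularAutomaton[254, {{1}, 0}, 5]] == Table[reducible[5, x], {x, -5, 5}]   (* True *)

    (* rule 30 appears irreducible: to get the center cell at step t, run the whole evolution *)
    center30[t_] := Last[Flatten[CellularAutomaton[30, {{1}, 0}, {t, {{0}}}]]]
    center30[1000]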
It also has the consequence of this computational irreducibility phenomenon, which says you can't expect our brains to be able to outrun the computations that are going on inside the system. If there were computational reducibility, then we could expect that, even though this thing went to a lot of trouble and did a million steps of evolution, just by using our brains we could jump ahead and see what the answer will be; computational irreducibility suggests that isn't the case. If we're going to make the most efficient use of computational resources, we will inevitably run into computational irreducibility all over the place, and it has the consequence that we get situations where we can't readily foresee and understand what's going to happen.

So, back to mathematics for a second. This is just an axiom system: I looked through sort of all possible axiom systems, starting off with really tiny ones, and I asked the question, what's the first axiom system that corresponds to Boolean algebra? It turns out this tiny little thing here generates all theorems of Boolean algebra; it is the simplest axiom for Boolean algebra. Now, something I have to show you, because it's a new feature: if I say FindEquationalProof — let's say I want to prove commutativity of the Nand operation — this is going to try to generate an automated proof, based on that axiom system, of that result. Let's see if this works. So it had 102 steps in the proof. Let's look at, for example, the proof network here — actually, let's look at the proof dataset — no, that's not what I wanted; I should learn how to use this, shouldn't I — what I want is the proof dataset; there we go, very good. Okay, first of all, let's look at the proof graph. This is going to show me how that proof was done: there are a bunch of lemmas that got proved, and those lemmas were combined, and eventually it proved the result. So let's take a look at what some of those lemmas were. Okay, here's the result: it goes through, and these are various lemmas it's using, and eventually, after many pages of nonsense, it gets to the result. Some of these lemmas are kind of complicated — that one's a pretty complicated lemma — etc., etc. So you might ask, what on earth is going on here? The answer is: I first generated a version of this proof 20 years ago, and I tried to understand what was going on, and I completely failed.
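(The feature being demonstrated is FindEquationalProof; a minimal sketch, with the single Nand-style axiom written out explicitly using CenterDot as the operator — the proof-object property names are the ones I believe it exposes.)

    (* the simplest single axiom for Boolean algebra, with \[CenterDot] as a Nand-like operator *)
    axiom = ForAll[{a, b, c},
       ((a\[CenterDot]b)\[CenterDot]c)\[CenterDot](a\[CenterDot]((a\[CenterDot]c)\[CenterDot]a)) == c];

    (* an automated proof of commutativity from that one axiom *)
    proof = FindEquationalProof[ForAll[{a, b}, a\[CenterDot]b == b\[CenterDot]a], axiom];

    proof["ProofGraph"]       (* how the machine-generated lemmas feed into one another *)
    proof["ProofDataset"]     (* the lemmas themselves, largely inscrutable to a human reader *)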
And it's sort of embarrassing, because this is supposed to be a proof — it's supposed to be demonstrating some result — and what we realize is: what does it mean to have a proof of something? What does it mean to explain how a thing is done? What is the purpose of a proof? The purpose of a proof is basically to let humans understand why something is true. For example, if we go to Wolfram Alpha and do some random thing — say an integral of something or other — it will very quickly, in fact in only milliseconds internally, work out the answer to that integral. But then somebody who wants to hand in a piece of homework needs to explain why this is true, and we have this handy step-by-step solution thing which explains why it's true. Now, the thing I should admit about the step-by-step solution is that it's completely fake: the steps described in it have absolutely nothing to do with the way that integral was internally computed. They are steps created purely for the purpose of telling a story to humans about why the integral came out the way it did. So that's one thing: knowing the answer; the other thing is being able to tell a story about why the answer worked out that way. What we see here is a proof, but an automatically generated proof, and it's a really lousy story for us humans. If it turned out that one of these theorems was one that had been proved by Gauss or somebody and appeared in all the textbooks, we would be much happier, because then we would start to have a kind of human-representable story about what was going on; instead we just get a bunch of machine-generated lemmas that we can't understand, that we can't wrap our brains around.

It's sort of the same thing that's going on when we look at these neural nets. When we were looking, wherever it was, at the innards of that neural net and asking how it figures out that that's a picture of a panda — if we humans were asked how to figure out whether it's a picture of a panda, we might say: look and see if it has eyes, that's a clue for whether it's an animal; look and see if it's kind of round and furry, that's a clue for whether it's a panda; etc., etc. But what the net did is learn a bunch of criteria for whether it's a panda or one of 10,000 other possible things it could have recognized, and it learned those criteria in a way that was somehow optimal based on the training it got. It learned distinctions that are different from the distinctions we humans make in the language we use. When we describe a picture, we have a certain human language for describing it — in typical human languages we have maybe thirty to fifty thousand words that we use to describe things, and those are words that have evolved as being useful for describing the world we live in. The neural net, on the other hand, has effectively invented words that let it make the distinctions it needs in the analysis it's doing, but those words have nothing to do with the historically invented words that exist in our languages. So it's an interesting situation: if you ask what it's thinking, how do we describe what it's thinking? That's a tough thing to answer, because, just like with the automated theorem, we're stuck not really being able to tell a human story — the things it invented are things for which we don't even have words in our languages.

Okay, so one thing to realize is that in this space of all possible computations there's a lot of stuff out there that can be done — there's this kind of ocean of sophisticated computation — and then the question that we
have to ask for us humans is okay how do we make use of all of that stuff so what we've got kind of on the one hand is we've got the things we know how to think about human language is our way of describing things our way of talking about stuff that's the one one set of things the other set of things we have is this very powerful kind of seething ocean of computation on the other side where lots of things can happen so the question is how do we make use of this sort of ocean of computation in the best possible way for our human purposes and building technology and so on and so the the way I see you know my kind of part of what I've spent a very long time doing is kind of building a language that allows us to take human thinking on the one hand and describe and sort of provide a sort of computational communication language that allows us to get the benefit of what's possible over in the sort of ocean of computation in a way that's rooted in what we humans actually want to do and so I kind of view both from language as being sort of an attempt to make a bridge between so you on the one hand there's all possible computations on the other hand there's things we think we want to do and I view or from language as being my best attempt right now to make a way to take our sort of human computational thinking and be able to actually implement it so in a sense it's a language which works in two on two sides it's both a language where you as a as a the machine can understand okay it's it's looking at this and that's what it's going to compute but on the other hand it's also a language for us humans to think about things in computational terms so you know if I go and I don't know one of these one of these things that I'm doing here whatever it is that this wasn't that exciting but but you know fine shortest tour of the Geo position of the capital cities in South America that is a language that's a representation in a precise language of something and the idea is that that's a language which we humans can find useful in thinking about things in computational terms it also happens to be a language that the machine can immediately understand and execute and so I think this is sort of a general you know when I think about AI in general the you know what is the sort of what's the overall problem well part of the overall problem is so how do we tell the AI is what to do so to speak there's this very powerful you know this sort of ocean of computation is what we get to mine for purposes of building AI kinds of things but then the question is how do we tell the AI is what to do and the what I see what I've tried to do with Wolfram language is to provide a a way of kind of accessing that computation and sort of making use of the knowledge that our civilization has accumulated and because that's the you know there's the general computation on on this side and there's the specific things that we humans have thought about and the question is to make use of the things that we've thought about to do do things that we care about doing actually if you're interested in these kinds of things I happen to just write a blog post where last couple of days ago it's kind the funny blog posts it's about some but you can see the title there it came because a friend of miners has this crazy project to put little little sort of discs or something that should represent kind of the best achievements of human civilization so to speak to send out it's it's hitchhiking on various spacecraft that are going out into the solar system in 
the next little while and the question is what to put on this little disc that kind of represents you know the achievements of civilization it's kind of it's kind of depressing when you go back and you look at what some what people have tried to do on this before and realizing how hard it is to tell even whether something is an artifact or not but this is this was sort of a yeah that's a good one that's from 11,000 years ago can you the question is can you figure out what on earth it is and what it means and and this is but but so what what's relevant about this is the this this whole question of there are things that are out there in the computational universe and you know when we think about extraterrestrial intelligence I find it kind of interesting that artificial intelligence is our first example of an alien intelligence we don't happen to have found what we view as extraterrestrial intelligence right now but we are in the process of building pretty decent version of an alien intelligence here and the question is if you ask questions like well you know what is it thinking is it does it have a purpose and what it's doing and so on and you're confronted with things like this it's very we you can kind of do a test run of you know what's what's its purpose what is it trying to do in a way that is very similar to the kinds of questions you would ask about about extraterrestrial intelligence but in case the the that the main point is that I see this sort of ocean of computation there's the let's describe what we actually want to do with that ocean of computation and that's where you know that's one of the primary problems we have now people talk about you know AI and what is AI going to allow us to automate and my basic answer that would be we'll be able to automate everything that we can describe the problem is it's not clear what we can describe or put another way you know you imagine various jobs and people are doing things they're repeated judgment jobs things like this there where we can readily automate those things but the thing that we can't really automate is saying well what are we trying to do that is what are our goals because in a sense when when we see one of these systems you know let's say let's say it's a cellular tartan here okay the question is what is this cellular automaton trying to do maybe I can maybe I'll give you another cellular automaton that is a little bit more exciting here let's do this one so that the the question is what is this cellular automaton trying to do you know it's got this whole big structure here and things are happening with it we can go we can run it for a couple thousand steps we can ask it's a nice example of kind of undecidability in action what's going to happen here this is kind of the halting problem is this going to halt what's it going to do there's computational irreducibility so we actually can't tell this is the case where we know this is a universal computer in fact eventually well I don't even spoil it for you if I went on long enough it would it would go into some kind of cycle but um we can ask what is this thing trying to do what is it you know is it what's it thinking about what's its um you know what's its goal what's its purpose and you know we get very quickly in a big mess thinking about those kinds of things I've one of the things that comes out of this principle of computational equivalence is thinking about what kinds of things have are capable of sophisticated computation so so I mentioned a while back here sort of my 
personal history with Wolff malphur of having thought about doing something like wolf now for when I was a kid and then believing that you sort of had to build a brain to make that possible and so on and one of the things that I then thought was that there was some kind of bright line between what is intelligent and what is merely computational so to speak in other words that there was something which is like oh we've got this great thing that we humans have that you know as intelligence and all these things in nature and so on and all the stuff that's going on it's just computation or it's just you know things operating according to rules that's different there's some bright line distinction doing these things well I think the thing that came about after I'd looked at all these cellular automata and all kinds of other things like that is I sort of came up with this principle of computational equivalence idea which we've now got quite a lot of evidence for which I talked about people are interested in but that basically there isn't a that once you reach a certain level of of computational sophistication everything is equivalent and that means that that implies that there really isn't a bright line distinction between for example the computations going on in our brains and the computations going on in these simple cellular automata and so on and that essentially philosophical point is what actually got me to start trying to build both malphur because I realized that gosh you know I've been looking for this sort of the the magic bullet of intelligence and I just decided probably there isn't one and actually it's all just computation and so that means we can actually impractical intelligent like thing and so that's what I think is the case is that there really isn't sort of a bright line distinction and that has that has more extreme consequences like people will say things like you know the weather has a mind of its own okay sounds kind of silly sounds kind of animistic primitive and so on but in fact the you know fluid dynamics of the weather is as computationally sophisticated as the stuff that goes on in our brains but we can start asking but then you say but the weather doesn't have a purpose you know what's the purpose of the weather well you know maybe the weather is trying to equalize the temperature between the you know the the North Pole and the tropics or something and then we have to say well but that's not a purpose in the way that we think about purposes that's just you know and we get very confused and in the end what we realize is when we're talking about things like purposes we have to have this kind of chain of provenance that goes back to humans and human history and all that kind of thing and I think it's the same type of thing when we talk about computation and AI and so on the thing that we this question of sort of purpose goals things like this that's a thing which is intrinsically human and not something that we can ever sort of automatically generate it makes no sense to talk about automatically generating it because these computational systems they do all kinds of stuff you know we can say they've got a purpose we can attribute purposes to them etcetera etcetera etcetera but you know ultimately it's sort of the human thread of purpose that we have to have to deal with so that means for example when we talk about AIS and we were interested in things like so how do we tell you know like like we'd like to be able to tell we talk about AI ethics for example we'd like to 
be able to make a statement to the AIS like you know please be nice to us humans um and that's a you know that's something so one of the issues there is so talking about that kind of thing one of the issues is how are we going to make a statement like be nice to us humans what's the you know in how are we going to explain that to an AI and this is where again you know my my efforts to build a language a computational communication language that bridges the world of what we humans think about and the world of what is possible in computation is important and so one of things I've been interested in is actually building what I call a symbolic discourse language that can be a general representation for sort of the kinds of things that we might want to put in that we might want to to say and things like be nice to him and so sort of a little bit background to that so you know in the modern world people are keen on smart contracts they often think of them as being deeply tied into blockchain which I don't think is really quite right the important thing about smart contracts is it's a way of having sort of an agreement between parties which can be executed automatically and that agreement may be you know you may choose to sort of anchor that agreement in a blockchain you may not but the whole point is you have to what you you know when people write legal contracts they write them in an approximation to English they write them in legalese typically because they're trying to write them in something a little bit more precise than regular English but the limiting case of that is to make a symbolic discourse language in which you can write the contract and code basically and the the I've been very interested in using wolfmann language to do that because in wolfen language we have a language which Candice bribe things about the world and we can talk about the kinds of things that people actually talk about in contracts and so on and we're most of the way there to being able to do that and then when you start thinking about that you start thinking about okay so we've got this language to describe things that we that we care about in the world and so when it comes to things like tell the AIS to be nice to the humans we can imagine using often language to sort of build an AI Constitution that says this is how the AI supposed to work but when we talk about sort of just the the untethered you know the untethered AI doesn't have any particular it's just going to do what it does and if we want it to you know if we want to somehow align it with human purposes we have to have some way to sort of talk to the AI and that's that's a you know I view my efforts to build or from language as as a way to do that I mean I you know as I was showing at the beginning you can use you can take natural language and with natural language you can build up a certain amount of you can say a certain number of things in natural language you can then say well how do we make this more precise in a precise symbolic language if you want to build up more complicated things it gets hard to do that in natural language and so you have to kind of build up more serious programs in in in symbolic language and I've probably been numb been yakking a while here and I'm happy to I can talk about all kinds of different things here but that maybe I've not seen as many reactions as I might have expected to think so I I'm not sure which things people are interested in which they're not but so maybe I should maybe I should stop here and we can have 
I've probably been yakking for a while here, and I'm happy to talk about all kinds of different things, but maybe I should stop here and we can have discussion, questions, comments. [Applause]

There are two microphones; if you have questions, please come up. "So, a quick question. It goes to the earlier part of your talk, where you say you don't build a top-down ontology, you actually build up from the bottom, from disparate domains. What do you feel are the core technologies of the knowledge representation you use within Wolfram Alpha that allow the different domains to reason about each other and come up with solutions? And is there any notion of differentiability; for example, if you were to come up with a plan to do something new within the Wolfram Alpha language, how would you go about doing that?"

Okay. So, we've done maybe a couple of thousand domains. What is actually involved in doing one of these domains? It's a gnarly business; every domain has some crazy different thing about it. A while ago I tried to make up a kind of hierarchy of what it means to make a domain computable; let me see if I can find it... okay, here we go. There's a hierarchy of levels of what it means to make a domain computable. Say you've got some array of data that's quite structured (forget the separate issue of extracting things from unstructured data); imagine you were given a bunch of data about landing sites of meteorites. You go through various levels: are the landing-site positions just strings, or some kind of canonical representation of geo position? The type of meteorite, some are iron meteorites, some are stone meteorites: have you made a canonical representation, some way to identify which is which?

"Sorry, go ahead. My question is: if you did have positions as a string as well as a canonical representation, would you have redundant representations of the same information?" No, we always make everything canonical; we have a minimal representation of everything; our goal is to make everything canonical. Now, there is a lot of complexity in doing that. Another thing to say about these domains: it would be lovely if one could just automate everything and cut the humans out of the loop. It turns out this doesn't work, and in fact whenever we do these domains it's fairly critical to have expert humans who really understand the domain, or you simply get it wrong. Having said that, once you've done enough domains you can do a lot of cross-checking between domains, and we are the number one reporters of errors in pretty much all standardized data sources, because we can do that kind of cross-checking. But if you ask what's involved in bringing a new domain online, it's that hierarchy of things. Some of those steps take only a few hours: we've got good enough tools for ingesting data and figuring out "oh, those are names of cities in that column, let's canonicalize those"; some may remain open questions, but many of them we'll be able to nail down.
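As a minimal sketch of the string-to-canonical-entity level of that hierarchy in Wolfram Language (the strings and properties below are made-up examples, not Wolfram's internal curation pipeline):

    raw = {"cambridge, ma", "Boston", "worcester massachusetts"};
    cities = Interpreter["City"] /@ raw;               (* free-form strings -> canonical city entities *)
    EntityValue[cities, {"Population", "Position"}]    (* properties become computable: population, GeoPosition *)

The real pipeline adds expert review and cross-domain checking on top of this kind of step, as described above.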
But to get to the full level, where you've got some complicated domain and it's fully computable, is probably a year of work. And you might say, gosh, why are you wasting people's time on that, you've got to be able to automate it. Well, as you can probably tell, we're fairly sophisticated about machine learning kinds of things, and we have tried to automate as much as we can, and we have a pretty efficient pipeline, but if you actually want to get it right... Here's an example of what happens. There's a level of difficulty even in going between Wolfram Alpha and Wolfram Language. Say you're looking at lakes in Wisconsin. People querying about lakes in Wisconsin in Wolfram Alpha will name a particular lake and want to know how big that lake is; okay, fine. In Wolfram Language they'll be doing a systematic computation about lakes in Wisconsin, so if there's a lake missing you're going to get the wrong answer, and that's a higher level of difficulty. But you were asking some more technical questions about ontologies, and I can try to answer those. "Actually, that was just one quick question, and there are a lot of other questions." Right, okay.

"A simple question from over here on the left: who or what are your key influences?" Oh gosh. In terms of language design for Wolfram Language, or in the context of machine intelligence? "If you like, make it tighter to this audience." I don't know, I've been absorbing stuff forever. In terms of language design, probably Lisp and APL were my early influences. In terms of thinking about AI, hmm. I like history of science, so I'm pretty knowledgeable about the early history of mathematical logic and symbolic kinds of things. Maybe I can answer that in the negative. For example, in building Wolfram Alpha I thought: let me do my homework, let me learn all about computational linguistics, let me hire some computational-linguistics PhDs, that will be a good way to get this started. It turns out we used almost nothing from the previous history of computational linguistics, partly because what we were trying to do, namely short-question natural-language understanding, is different from a lot of the natural-language processing that was done in the past. I've also made, to my disappointment, very little use of people like Marvin Minsky. I knew Marvin for years, and in fact some of his early work on simple Turing machines was probably more influential to me than his work on AI; probably my mistake for not understanding that better. But really I would say I've been rather uninfluenced by the traditional AI kinds of things. It probably hasn't helped that I've lived through a time when AI went from "AI is going to solve everything in the world" when I was a kid, then decayed for a while, then came back. So I can describe my non-influences better than my influences. "The impression you gave is that you made it up in your own head." That's pretty much right, yeah. I mean, insofar as those things go, take for example studying simple programs and trying to understand the universe of simple programs.
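To make "studying simple programs" concrete, here is a minimal sketch in Wolfram Language (the particular rule numbers are just illustrative picks): each elementary cellular automaton rule is a tiny program, and running a few of them from a single black cell already shows the range from trivial to highly complex behavior.

    (* run four elementary cellular automaton rules for 100 steps from one black cell *)
    GraphicsRow[
     ArrayPlot[CellularAutomaton[#, {{1}, 0}, 100], PlotLabel -> Row[{"rule ", #}]] & /@
      {250, 90, 30, 110}]

Rule 30 is the kind of simple program whose behavior already looks as sophisticated as anything else, which is what the Principle of Computational Equivalence mentioned earlier is about.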
Actually, the personal history of that is sort of interesting, as an example of how the history of ideas goes. I used to do particle physics when I was a kid, basically, and I got interested in how order arises in the universe: you start off from the hot Big Bang, and pretty soon you end up with a bunch of humans and galaxies and things like this; how does that happen? I was also interested in things like neural networks, for AI purposes, and I thought: let me make a minimal model that encompasses how complex things arise from other stuff. I ended up making simpler and simpler and simpler models and eventually wound up with cellular automata, which I didn't know were called cellular automata when I started looking at them, and found that they did interesting things. As it turned out, the two areas where cellular automata have been singularly un-useful in analyzing things are large-scale structure in the universe and neural networks. But the fact that I even imagined one could just start like that is itself telling. I should say: I'd been doing physics, and in physics the intellectual approach is that you take the world as it is and try to drill down and find what the world is made of, the primitives and so on; you reduce to find things. Then I built my first computer language, a thing called SMP, which went the other way around: I just made up this computer language, made up what I wanted the primitives to be, and built stuff up from them. I think the fact that I had the idea of making up cellular automata as possible models for the world was a consequence of having worked on this computer language, which worked the opposite way around from the way one is used to doing natural science, the reductionist approach. So that's just one example. I happen to have spent a bunch of time studying, as I say, the history of science, and one of my hobbies is the history of ideas; I even wrote this little book called Idea Makers, which is biographies of a bunch of people I've written about for one reason or another. I'm always curious about how people actually wind up figuring out the things they figure out, and one of the conclusions from my investigations of many people is that there are very rarely moments of inspiration. Usually it's a long, multi-decade kind of thing which only later gets compressed into something short; the steps are quite small, but the path is often kind of complicated. And that's what it's been for me. "Simple question, complex answer. Sorry."

"So, I basically see the Wolfram Language as a way to describe all of objective reality; it's kind of formalizing just about the entire domain of discourse, to use a philosophical term, and you kind of hinted at this in your lecture. Where it sort of leaves off is when we start to talk about more esoteric philosophical concepts, purpose; I guess this would lean into things like epistemology, because essentially you only have science there, and
as amazing as science is, there are other things that are talked about, like idealism versus materialism, etcetera. Do you have an idea of how Wolfram might or might not be able to branch into those discourses? Because I'm hearing echoes in my head of Bostrom saying that when you give an AI a purpose, I think he said philosophers are divided pretty much evenly among the top four ways of measuring how good something should be, like utilitarianism and so on." Yes. So the first thing is that this problem of formalizing everyday discourse is old: about three hundred years ago, people like Leibniz were interested in the same problem I'm interested in, which is how you formalize everyday discourse. Leibniz had the original idea. He was originally trained as a lawyer, and he had this idea that if he could only reduce all legal questions to matters of logic, he could have a machine that would basically answer every legal case. He was unfortunately a few hundred years too early, even though he tried to do all kinds of things very similar to things I've tried to do, like getting various Dukes to assemble big libraries of data and so on. But the point is, what he tried to do was make a formalized representation of everyday discourse. For whatever reason, for the last three hundred years people basically haven't tried to do that; it's an almost completely barren landscape. There was this period in the 1600s when people talked about philosophical languages; Leibniz was one, and a guy called John Wilkins was another, and they tried to break down human thought into something symbolic. People haven't done that for a long time. In terms of whether we can do it now: I've been trying to figure out what the best way to do it is, and I think it's actually not as hard as one might think. One thing you have to understand is that areas like philosophy are on the harder end. A typical example: "I want to have a piece of chocolate." In Wolfram Language right now we have a pretty good description of pieces of chocolate; we probably know a hundred different kinds of chocolate, we know how big the pieces are, all that kind of thing. The "I want" part of that sentence we can't do right now, but I don't think that's that hard. Now, I think the different thing you're asking is this: let's say we had the omnipotent AI, so to speak, where we turn over control of the central bank to the AI, we turn over all these other things to the AI, and then we say to the AI, "now do the right thing." The problem with that, and this is why I talk about creating AI constitutions and so on, is that we have absolutely no idea what "do the right thing" is supposed to mean. Philosophers have been arguing about that forever; utilitarianism is an example of one of the answers, although it's not a complete answer by any means; it's not really an answer at all, it's just a way of posing the question. So I think it's a really hard problem. You think to yourself, what should the AI Constitution actually say? The first thing you might think is that there's going to be something like Asimov's laws of robotics, that there's going to be
one golden rule for AIs, and if we just follow that golden rule all will be well. I think that is absolutely impossible, and in fact I think you can even sort of mathematically prove that it's impossible. Essentially what you're trying to do is put in constraints, and as soon as you have a system that shows computational irreducibility, I think it is inevitable that you have unintended consequences, which means you never get to just put everything in one very nice box; you always have to say, let's put in a patch here, let's put in a patch there, and so on. A much more abstract version of this is Gödel's theorem. Gödel's theorem is about the integers: you start off with Peano's axioms, which Peano thought would describe the integers and nothing but the integers, so that anything provable from Peano's axioms would be true about the integers and vice versa. What Gödel's theorem shows is that that will never work: there is an infinite hierarchy of patches that you have to put onto Peano's axioms if you want to describe the integers and nothing but the integers. And I think the same is true if you want to have a legal system, effectively, that has no bizarre unintended consequences. So when you're describing something complicated in the world, I don't think it's possible to just have a small set of rules that will always do what we want, so to speak; I think it's inevitable that you end up with a long code of laws, essentially. My guess is that what will actually happen, as we try to describe what we want the AIs to do (I don't know the socio-political aspects of whether it will be one AI Constitution or one per city or whatever; that's a separate issue), is that it will be much like human laws: a complicated thing that gets progressively patched. These ideas like "we'll just make the AIs run the world according to John Stuart Mill's idea" are not going to work, which is not surprising; philosophy has been making the point that this is not an easy problem for the last two thousand years, and they're right, it's not an easy problem. "Thank you."

"You were talking about computational irreducibility and computational equivalence, and also that earlier in your intellectual adventures you were interested in particle physics and things like that. I've heard you make the comment before, in other contexts, that things like molecules compute, and I wanted to ask exactly what you mean by that. In what sense does a molecule..." Well, I mean, what would you like to compute, so to speak? In other words, one definition of computing is: given a particular computation, like finding square roots or something, you can program something to do it, and the surprising thing is that an awful lot of stuff can be programmed to do any computation you want.
And, for example, when you look at nanotechnology and so on, one of the current beliefs is that to make very small computers you should take what we know about making big computers and just make them smaller, so to speak. I don't think that's the approach you have to use. I think you can take the components that exist at the level of molecules and ask how to assemble those components to do complicated computations. It's like the cellular automata: the underlying rule for the cellular automaton is very simple, yet when that rule is applied many times it can do a sophisticated computation. That's the sense in which there is great diversity in the raw material you can use for computation; our particular human stack of technologies for computation right now is just one particular path. A very practical example of this is algorithmic drugs. Right now most drugs work like this: there is a binding site, and the drug molecule fits into the binding site and does something. The question is whether you can imagine something where the molecule has computations going on in it, where it goes around and looks at the thing it's supposed to be binding to, figures out that there's this knob here and that knob there, reconfigures itself, computes something, tries to figure out whether this is likely to be a tumor cell or whatever, based on some more complicated criterion. That's the type of thing I mean by computations happening at a molecular scale. "Okay, I guess I meant to ask whether it follows from that, in your view, that the molecules in the chalkboard and in my face and in the table are in any sense currently doing computation." Well, on the question of what computation is: one of the things to realize, if you look at the past and future of things... here's an observation, about Leibniz actually. In Leibniz's time, Leibniz made a calculator-type computer out of brass; it took him thirty years. So in his day there was at most one computer in the world, as far as he was concerned. In today's world there may be ten billion computers, maybe twenty billion, I don't know. The question is what that's going to look like in the future, and I think the answer is that in time probably everything we have will be made of computers, in the following sense: in today's world things are made of metal, plastic, whatever else, but there won't be any point in doing that once we know how to do molecular-scale manufacturing and so on; we might as well just make everything out of programmable stuff. The one example of molecular computing we have right now is biology; biology does a reasonable job of specific kinds of molecular computing. It's kind of embarrassing, I think, that the only memory molecule we know of is DNA; that's the particular biological solution, and in time we'll know lots of others. So the endpoint is: if you're asking whether computation is going on in this water bottle, the answer is absolutely.
Probably many aspects of that computation are even pretty sophisticated: if we wanted to know what would happen to particular molecules in there, it's going to be hard to tell; there's going to be computational irreducibility and so on. Can we make use of that for our human purposes, can we piggyback on it to achieve something technological? That's a different issue, and for that we have to build up this whole chain of technology to connect it, which is what I keep talking about: how do we connect what is possible computationally in the universe to what we humans can conceptualize wanting to do with computation? That's the bridge we have to make, and that's the hard part; the intrinsic computation is going on all over the place. "There may be time for a couple more questions." "I was hoping you could elaborate on what you were talking about earlier, searching the entire space of possible programs. That's very broad, so maybe: what kinds of searching of that space are we good at, and what are we not?" Right. I would say that we're at an early stage in knowing how to do that. I've done lots of these searches, and the thing I've noticed is that if you do an exhaustive search, you don't miss things, even things you weren't looking for; if you do a non-exhaustive search, there is a tremendous tendency to miss things you weren't looking for. We've done a lot of such searches: a bunch of the function evaluation in Wolfram Language was done by searching for optimal approximations in some big space, a bunch of the stuff with hashing is done that way, a bunch of image processing is done that way, where we're just doing exhaustive searches over maybe trillions of programs to find things. Now, the other side of the story is the incremental-improvement story with deep learning and neural networks, where, because there is differentiability, you're able to incrementally get to a better solution; in fact people are making use of less and less differentiability in deep-learning neural nets, and so I think eventually there's going to be a sort of grand unification of these kinds of approaches. On the exhaustive-search side, which you can use for all sorts of purposes, the surprising thing that makes exhaustive search not crazy is that there is rich, sophisticated stuff near at hand in the computational universe. If you had to go through a quadrillion cases before you ever found anything, exhaustive search would be hopeless, but in many cases you don't. I would say we are at a fairly primitive stage of the science of how to do those searches well. My guess is that there will be some sort of unification, which needless to say I've thought a bunch about, between these approaches. The trade-off, typically with a neural net, is that you can have one that uses its computational resources well but is really hard to train, or one that doesn't use its computational resources so well but is very easy to train, because everything is very smooth; and my guess is that somewhere in the regime of things that are harder to train but make use of things closer to the complete computational universe is where one's going to see progress.
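A toy version of that kind of exhaustive search in Wolfram Language (the "interestingness" score here, the compressed size of a rule's evolution, is just an illustrative proxy, not a criterion actually used in Wolfram's searches):

    (* crude complexity proxy: how poorly a rule's evolution compresses *)
    score[r_] := ByteCount[Compress[CellularAutomaton[r, {{1}, 0}, 200]]];
    scores = AssociationMap[score, Range[0, 255]];   (* exhaustive: all 256 elementary rules *)
    TakeLargest[scores, 8]                           (* the least compressible candidates *)

Exhaustive enumeration is only possible here because this rule space is tiny; the point in the talk is that even in much larger spaces, rich behavior turns out to be near at hand.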
But it's a really interesting area, and I consider us to be only at the beginning of figuring that out. "Thank you for your talk. To give you a bit of context for my question: I research how we could teach AI to kids, how we could teach artificial intelligence and machine learning to children, and build platforms for that, and I know you develop resources for that as well. So I was wondering: where do you think it's problematic that we have computation that is very efficient and can, from a utilitarian, problem-solving perspective, achieve all these goals, but we don't understand how it works, so we have to create these fake steps? Can you think of scenarios where that could become very problematic over time, and why do we approach it in such a deterministic way? And when you mentioned that computation and intelligence are differentiated by this very thin line, how does that affect the way we learn, and how do you think it will affect the way kids learn?" Right. So, my general principle about future generations and what they should learn: the first point, a very obvious one, is that for every field people study, from archaeology to zoology, there either is now a computational X or there will be soon. In every field, the paradigm of computation is becoming important, perhaps the dominant paradigm. So how do you teach kids to be useful in a world where everything is computational? The number one thing is to teach them how to think in computational terms. What does that mean? It doesn't necessarily mean writing code. One of the things happening right now, as a practical matter, is that there have been these waves of enthusiasm for teaching coding of various kinds; we're at the end of an uptick wave, I think it's going down again; it's been up and down for forty years or so. Why doesn't that work? It doesn't work because, while there are people, like students at MIT for example, who really want to learn engineering-style coding and for whom it really makes sense to learn that, for the vast majority of people it's just not going to be relevant, because they're not going to write a low-level C program or something. It's the same thing that's happened in math education, which has been sort of a disaster: the number one takeaway for most people from the math they learn in school is "I don't like math." Not for all of them, obviously, but on a general scale that's what you find. Why is that? Partly because what's being taught is rather low-level and mechanical; it's not about mathematical thinking particularly; it's mostly about what teachers can teach and what assessment processes can assess. So how should one teach computational thinking? I'm kind of excited about what we can do with Wolfram Language, because I think we have a high-enough-level language that people can actually write things. For example, I reckon by age 11 or 12, and I've done many experiments on this; the only problem with my experiments is that most of them end up being with kids who are high achievers; despite many efforts to reach lower-achieving kids, it always ends up
that the kids who actually do the things I set up are the high-achieving kids. But setting that aside: you take typical 11-, 12-, 13-year-olds, and they can learn how to write stuff in this language, and what's interesting is that they learn to start thinking. Here, I'll show you; let's be very practical. Every Sunday I do a little thing with some middle-school kids, and I might even be able to find my stuff from yesterday. Let's see: "Programming Adventures," January 28. Let's see what I did. Oh, look at that; that's why I thought of the South America thing, because I'd just done that with these kids. So what were we doing? We were trying to figure out the shortest-tour thing, like the one I just showed you. Somebody suggested: let's just enumerate all possible permutations of these cities and figure out what their distances are; there's the histogram of those, that's what we get; how do you get the largest distance, etcetera, etcetera. This was my version of it, but the kids all had various different versions with similar stuff. And it probably went off into... oh yes, there we go: there's the one for the whole Earth, and then they wanted to know how to do it in 3D, so I was showing them how to convert to XYZ coordinates in 3D and make the corresponding thing in 3D. So this is a random example from yesterday, not a highly considered one, but what I think is interesting is that we seem to have finally reached the point where we've automated enough of the actual doing of the computation that the kids can be exposed mostly to thinking about what you might want to compute. Part of our role in language design, as far as I'm concerned, is to get it as much as possible to the point where, for example, you can do a bunch of natural-language input, and where it's as easy as possible for kids not to get mixed up in how the computation gets done, but rather to just think about how you formulate the computation. A typical example I've used many times of what it means to write code versus think computationally: there's a practical problem we had in Wolfram Alpha. You're given a lat-long position on the Earth, and you're going to make a map of that lat-long position; what scale of map should you make? If the lat-long is in the middle of the Pacific, making a ten-mile-radius map isn't very interesting; if it's in the middle of Manhattan, a ten-mile-radius map might be quite a sensible thing to do. So the question is: come up with an algorithm, come up with even a way of thinking about that question. How should you figure it out? You might say, let's look at the visual complexity of the image, or let's look at how far it is to another city; there are various different things you could do. But thinking about that as a kind of computational-thinking exercise, that's the kind of thing I mean.
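A minimal sketch of that shortest-tour exercise in Wolfram Language (the South American capitals below are illustrative; the session's actual city list isn't in the transcript):

    cities = Interpreter["City"] /@ {"Lima", "Bogota", "Santiago", "Quito", "Caracas", "La Paz"};
    pts = GeoPosition /@ cities;

    (* brute force: total length of every closed tour over these cities *)
    tourLength[perm_] := Total[GeoDistance @@@ Partition[Append[perm, First[perm]], 2, 1]];
    lengths = tourLength /@ Permutations[pts];
    Histogram[lengths]

    (* and the built-in solver *)
    FindShortestTour[pts]

With six cities the brute-force enumeration is still only 720 permutations, which is exactly the kind of thing the kids could explore directly.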
So, in terms of what one automates, and whether people need to understand how it works inside: the main point is that in the end it will not be possible to know how it works inside, so you might as well stop having that be a criterion. There are plenty of things one teaches people in lots of areas, biology, medicine, whatever else, where maybe we'll know how it works inside one day, but there's still an awful lot of useful stuff you can teach without knowing how it works inside. And as we get computation to be more efficient, inevitably we will be dealing with things where you don't know how they work inside. We've seen this in math education, because I happen to have made tools that automate a bunch of the things people do in math education. To tell a silly story: my older daughter, who at some point in the past was doing calculus, learning to do integrals and things, and I was saying to her, "I didn't think humans still did that stuff anymore," which was not a very smart thing to say. But in any case, there's a question of whether humans need to know how to do that stuff or not. I haven't done an integral by hand in probably thirty-five years; that's more or less true. But when I was using computers to do them, back when I used to do physics, I was a really, really good integrator, except that it wasn't really me, it was me plus the computer. How did that come to be? Because I was doing things by computer, I was able to try zillions of examples, and I got a much better intuition than most people got for how these things work, for roughly what you do to make the thing go, whereas people who were just working one thing out by hand didn't get that intuition. So, two points. First, how do you think about things computationally, how do you formulate the question computationally: that's really important, and it's something we're now in a position to actually teach, and it's not really something you teach by teaching traditional, quote, coding, because a lot of that is "we're going to make a loop, we're going to define variables." I think I have a copy here... yes, I wrote this book, kind of for kids, about Wolfram Language, although it seems to be useful to adults as well. One of the amusing things about this book is that it doesn't talk about assigning values to variables until chapter 38. In other words, a thing you would find in chapter one of most low-level programming or coding courses turns out not to be that relevant; it's also kind of confusing and not necessary. And in terms of your question about where we'll get in trouble when people don't know how the stuff works inside: I think one just has to get used to that. We live in a world that's full of natural processes where we don't know how they work inside, yet somehow we manage to survive; we go to a lot of effort to do natural science to try to figure out how stuff works inside, but it turns out we can still use plenty of things even when we don't know how they work inside. We don't need to know.
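As a tiny illustration of the "me plus the computer" point above, intuition tends to come from running whole families of examples at once; the particular integrals here are just an arbitrary illustrative family.

    (* try a family of integrals in one go and notice the pattern: 0!, 1!, 2!, ... *)
    Table[Integrate[x^n Exp[-x], {x, 0, Infinity}], {n, 0, 5}]
    (* {1, 1, 2, 6, 24, 120} *)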
And I think the main point is that computational irreducibility guarantees that we will be using things where we don't know, and can't know, how they work inside. The thing that is perhaps a little bit unfortunate to me, as a typical human kind of reaction, is that I can readily see that the AI stuff we build is effectively creating languages and things that are completely outside our domain to understand. By that I mean: our human language, with its fifty thousand words or whatever, has been developed over the last however many tens of thousands of years, and as a society we've developed this way of communicating and explaining things. The AIs are reproducing that process very quickly, but they're coming up with an ahistorical way of describing the world that doesn't happen to relate at all to our historical way of doing it. That's a little bit disquieting to me as a human: there are things going on inside where, in principle, I could learn that language, but it's not the historical one we've all learned, and it really wouldn't make a lot of sense to learn it, because you learn it for one AI, then another one gets trained, and it's going to use something different. But another point about education, something I haven't figured out, is this: when do we get to make a good model of the human learner using machine learning? In other words, part of what we're trying to do, like with that automated proof I mentioned, is to figure out the best way to present the proof so a human can understand it, and basically for that we have to have a bunch of heuristics about how humans understand things. As an example, we do a lot of visualization in Wolfram Language, and we have tried to do automated aesthetics: when we're laying out a graph, which way of laying it out is most likely to be understood by humans? We've done that by building a bunch of heuristics and so on. If we could do that for learning, we could ask: what's the optimal path, given that the person is trying to understand this proof, to lead them through understanding it? I suspect we will learn a lot more about that in probably a fairly small number of years. You can already do simple things: you go to Wikipedia and look at, if you want to learn this concept, what other concepts you have to learn first; we have much more detailed symbolic information about what is actually necessary to know in order to understand something, and so on. So I think it's reasonably likely that we'll be able to do this. I was interested recently in the history of math education, so I wanted to look at the complete path of math textbooks, basically since around 1200, when some of the early math textbooks were produced. There have been these different ways of teaching math, and I think we've gradually evolved a fairly optimized way for the typical person, though the variation
across the population is probably not well understood, of how to explain certain concepts, and we've gone through some pretty weird ways of doing it, from the 1600s and so on, which have gone out of style, and who knows exactly why. But anyway, we've kind of learned this path of what the optimal way is to explain adding fractions or something, for the typical human. I think we'll learn a lot more, by essentially making a machine model of the human learner, about how to optimize how we explain stuff to humans. A coming attraction. "Thanks. By the way, do you think we're close to that at all? You said there's something in Wolfram Alpha that presents things to the human in a nice way; how far away is that attraction?" Right. So, explaining stuff to humans is a lot of human work right now; being able to automate explaining stuff to humans is the question. Actually, just today I was working on something related to this. The question is: can we, for example, train a machine-learning system from explanations that it can see, and train it to give explanations that are likely to be understandable? An example I'd like to do is a debugging assistant. The typical thing is: the program runs, the program gives the wrong answer, and the human asks why it gave the wrong answer. Well, the first piece of information for the computer is that the human thought that was the wrong answer, because the computer just did what it was told and didn't know that was supposed to be the wrong answer. Then the question is whether, in that domain, you can actually have a reasonable conversation in which the human explains to the computer what they thought it was supposed to do, and the computer explains what happened and why it happened. It's the same kind of thing for math tutoring: we've got a lot of stuff that's very widely used by people who want to understand the steps in math; can we make a thing where people tell us "I think it's this"? I'll tell you one little factoid which we did work out. If you do multi-digit arithmetic, multi-digit addition: the basis of this is a kind of silly thing, but if a student gets the right answer for an addition sum, you don't get very much information; if the student gives the wrong answer, the question is whether you can tell them where they went wrong. So let's say you have a four-digit addition sum and the student gives the wrong answer: can you back-trace and figure out what they likely did wrong? The answer is that you can. You just make a graph of all the different things that can happen; certain mistakes are more common, transposing digits, or having a 1 and a 7 mixed up, those kinds of things. With very high probability, given a four-digit addition sum with the wrong answer, you can say "this is the mistake you made," which is sort of interesting. That's being done in a fairly symbolic way; whether one can do it in a more machine-learning kind of way, for more complicated derivations, I'm not sure, but that's one that works.
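A minimal sketch of that kind of back-tracing in Wolfram Language, with a couple of made-up mistake models (the actual system presumably uses a much richer catalogue of common errors):

    (* hypothetical models of common four-digit addition mistakes *)
    dropCarries[a_, b_] :=
      FromDigits[Mod[IntegerDigits[a, 10, 4] + IntegerDigits[b, 10, 4], 10]];  (* forgot every carry *)
    swapAdjacent[n_] :=
      FromDigits /@ Table[Permute[IntegerDigits[n, 10, 4], Cycles[{{i, i + 1}}]], {i, 3}];  (* transposed two adjacent digits *)
    misread1as7[n_] := FromDigits[IntegerDigits[n, 10, 4] /. 1 -> 7];  (* read a 1 as a 7 *)

    diagnose[a_, b_, answer_] := Module[{candidates},
      candidates = <|
        "forgot the carries" -> {dropCarries[a, b]},
        "transposed digits of the first number" -> (# + b & /@ swapAdjacent[a]),
        "transposed digits of the second number" -> (a + # & /@ swapAdjacent[b]),
        "misread a 1 as a 7" -> {misread1as7[a] + b, a + misread1as7[b]}|>;
      Keys[Select[candidates, MemberQ[#, answer] &]]];

    diagnose[1234, 5678, 6802]   (* -> {"forgot the carries"} *)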
"Yes, over here." "I just had a follow-up question. Do you think that in the future it's possible to simulate virtual environments which can actually understand how the human mind works, and then build, you know, finite-state machines inside this virtual environment to provide a better, more personalized learning experience?" Well, so the question is: if you're playing a video game, and that video game is supposed to be educational, can you optimize the experience based on a model of you, so to speak? I'm sure the answer is yes, and the question of how complicated the model of you will have to be is an interesting one that I don't know the answer to. I've kind of wondered about a similar question. I'm a personal-analytics enthusiast, so I collect tons of data about myself, mostly because it's been super easy to do, and I've done it for about thirty years. I have every keystroke I've typed on a computer, like every keystroke I've typed here, and a screenshot of my computer screen every thirty seconds or so, or maybe fifteen, I'm not sure; it's a super boring movie to watch. But I've been collecting all this stuff, and a question I've asked is: is there enough data that a bot of me could be made? In other words, I've written a million emails and I have all of those, I've received three million emails over that period of time, I've got endless things I've typed, etcetera, etcetera; is there enough data to reconstruct me, basically? I think the answer is probably yes; I'm not sure, but probably yes. So the question is, in an environment where you're interacting with some video game, trying to learn something or whatever else, how long will it be before it can learn enough about you to change that environment in a way that's useful for explaining the next thing to you? I would guess that doing that is comparatively easy; I might be wrong. And it's an interesting thing, because there's a space of human personalities and a space of human learning styles; I'm always interested in the space of all possible X, and there's the question of how you parameterize the space of all possible human learning styles. Is there a way to do that symbolically, to say "these are the ten learning styles," or is it a case where it's better to use soft machine-learning-type methods to feel out that space? "Maybe one very last question." "I was just intuitively thinking, when you spoke about the ocean, of Isaac Newton and the famous quote, and I thought: instead of Newton on the beach, what if Franz Liszt were there? What question would he ask, what would he say? I'm trying to understand the alien ocean and humans through, maybe, Franz Liszt and music." Well, the quote from Newton is an interesting one. I think it goes something like this: people are talking about how wonderful calculus and all that kind of thing are, and Newton says, to others I may seem like I've done a lot of stuff, but to me I seem like a
child who picked up a particularly elegant seashell on the beach and has been studying that seashell for a while, even though there's this whole ocean of truth out there waiting to be discovered. That's roughly the quote. I find it interesting for the following reason. What Newton did, calculus and things like that, is, if you look at the computational universe of all possible programs, a small corner. Newton was exactly right in what he said: he picked off calculus, which is a corner of the possible things that can happen in the computational universe that happened to be an elegant seashell, so to speak; it happened to be a case where you can figure out what's going on, while there is still an ocean of other computational possibilities out there. But since you're asking about music... I think my computer has stopped being able to get anywhere, but let's see if we can get to the site. Yeah, so this is a website that we made years ago; now my computer isn't playing anything, but [Music] let's try that. Okay. These pieces are created by basically just searching the computational universe of possible programs. It's sort of interesting, because each one has kind of a story, and some of them look more interesting than others; let's try that one. What was interesting to me about this is that what it's doing is very trivial at some level; it actually happens to use cellular automata, and you can even have it show you, somewhere here, there's a way of showing the evolution; this is showing, behind the scenes, what was actually happening, what it chose to use to generate that musical piece. What I thought was interesting about the site is this. I had thought: how would computers be relevant to music? Well, what would happen is that a human would have an idea and then the computer would kind of dress up that idea. Then a bunch of years go by, and I talk to people who are composers and things, and they say, "Oh yeah, I really like that WolframTones site." Okay, that's nice. They say, "It's a very good place for me to get ideas." So that's the opposite of what I would have expected: what's happening is that the human comes here, listens to some ten-second fragment, says "oh, that's an interesting idea," and then embellishes it using something that is humanly meaningful. It's like taking a photograph: you see some interesting configuration, and then you fill it in with some human context. But I'm not quite sure exactly what you were asking about. Back to the Newton quote: another way to think about it is that we humans, with our historical development, our intellectual history, have explored this very small corner of what's possible in the computational universe, and everything that we care about is contained in that small corner. What we end up wanting to talk about are the things that we as a society have decided we care about, and there's an interesting feedback loop there, as I just mentioned.
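A rough sketch of the cellular-automaton-to-music idea in Wolfram Language (this is just one arbitrary way of mapping a rule's output onto notes, not the actual WolframTones algorithm):

    (* evolve rule 30 and read off its center column *)
    ca = CellularAutomaton[30, {{1}, 0}, 127];
    center = ca[[All, (Length[First[ca]] + 1)/2]];

    (* group the bits into 4-bit numbers and map them onto a two-octave major scale *)
    scale = {0, 2, 4, 5, 7, 9, 11, 12, 14, 16, 17, 19, 21, 23, 24, 26};
    melody = scale[[FromDigits[#, 2] + 1]] & /@ Partition[center, 4];
    Sound[SoundNote[#, 0.2, "Piano"] & /@ melody]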
So here's a funny thing; let's take language, for example. Language evolves: we make up language to describe what we see in the world. Fine idea. Imagine that in Paleolithic times people make up language: they probably didn't have a word for table, because they didn't have any tables; they probably had a word for rock. But then, as a result of the particular development our civilization has gone through, we build tables, and there was a sort of synergy between coming up with a word for table and deciding that tables were a thing and that we should build a bunch of them. So there's a complicated interplay between the things we learn how to describe and think about, the things we build and put in our environment, and the things we end up wanting to talk about because they're things we have experience of in our environment. As we look at the progress of civilization, there are various layers: first we invent a thing that we can then think about and talk about, then we build an environment based on that, then that allows us to do more stuff, and we build on top of that. That's one reason why teaching computational thinking to kids is kind of important: we're building a layer of things that people are then familiar with, a layer different from what we've had so far, and it gives people a way to talk about things. I'll give you one example, from this blog post of mine, if I still have it up. Okay: that thing there is a nested pattern, a bit of a Sierpinski pattern. That tile pattern was created in 1210 AD, and it's the first example I know of a fractal pattern. Well, the art historians wrote about these patterns; there are a bunch in this particular style, and they wrote about them for years, and they never discussed that nested pattern. These patterns also have pictures of lions and elephants and things like that on them, and the historians wrote about those kinds of things, but they never mentioned the nested pattern until basically about twenty-five years ago, when fractals and so on became a thing, and then it was, "Ah, I can now talk about that: it's a nested pattern, it's fractal." Before that time the art historians were blind to that particular part of the pattern; it was just, "I don't know what that is, I don't have a word to describe it, so I'm not going to talk about it." That's part of this feedback loop: things that we learn to describe, then we build in terms of those things, then we build another layer. One thing I'm really interested in is the evolution of purposes. If you look back in human history, what was thought to be worth doing a thousand years ago is different from what's thought to be worth doing today. Good examples are things like walking on a treadmill or buying goods in virtual worlds; both of these are hard to explain to somebody from a thousand years ago, because each one ends up being a whole societal story about how we're doing this because we do that, because we do that.
So the question is how these purposes will evolve in the future, and one of the things I view as a sort of sobering thought, actually one thing I at first found rather disappointing and then became less pessimistic about, is this. Think about the future of the human condition: we've been successful in making our AI systems, we can read out brains, we can upload consciousnesses and things like that, and we've eventually got this box with trillions of souls in it. The question is, what are these souls doing? And to us today it looks like they're playing video games for the rest of eternity. That seems like kind of a bad outcome: we've gone through all of this long history, and what do we end up with? A trillion souls in a box playing video games. I thought, this is a very depressing outcome, so to speak. And then I realized that if you look at the arc of human history, my main conclusion is that at any given time in history, the things people do seem meaningful and purposeful to them at that time, and then history moves on. A thousand years ago there were a lot of purposes people had that had to do with weird superstitions and things like that, where we'd say, "Why the heck are you doing that? That just doesn't make any sense." But to them, at that time, it made all the sense in the world. And I think the thing that makes me less depressed about the future, so to speak, is that at any given time in history you can still have meaningful purposes, even though they may not look meaningful from a different point in history. There's a whole theory you can build up based on the trajectories you follow through the space of purposes, and it gets interesting if you can jump: say you get cryonically frozen for three hundred years and then come back; the interesting question then is whether the purposes you find yourself among have any continuity with what we know today. I should stop with that. "That's a beautiful way to end it. Thank you." [Applause]
Info
Channel: Lex Fridman
Views: 118,193
Rating: 4.9217792 out of 5
Keywords: mit, artificial intelligence, artificial general intelligence, human-level intelligence, deep learning, machine learning, free, open, wolfram alpha, wolfram language, cellular automata, new kind of science, stephen wolfram, stephen wolfram lecture, stephen wolfram interview, stephen wolfram podcast
Id: P7kX7BuHSFI
Length: 115min 4sec (6904 seconds)
Published: Fri Mar 02 2018