What We've Learned from NKS Chapter 2: The Crucial Experiment

Captions
Okay, hello everyone. Welcome to part two of my effort to go through my decade-long book, A New Kind of Science. The book will have been out for 20 years in May of this year, and we've been doing a chapter a week; we are now on chapter two, talking about what was in NKS. It's worth realizing the book took a decade to write, and there are 12 chapters, so I'm trying to summarize in an hour or so what in many cases took a year or more to produce.

Chapter two is one of my favorite chapters, perhaps even my very favorite. That's the collection of chapters; chapter two is called "The Crucial Experiment," and it is the story of the experiment that launched A New Kind of Science, and that has basically launched the whole tower of science I've been building since then, the one that's led in recent times to our Physics Project and other things. What I'll talk about today is the crucial experiment that launched all of that. It's my all-time favorite science discovery, and something I think is significant for many purposes.

So let's talk about that crucial experiment. The main topic here is: how do simple programs behave? We talked last time about how one should make models of things in science, and the notion that mathematical equations have been used for a long time to do that; but if we want to generalize how we make models of things, we need to be able to operate with more general kinds of rules. One of the key sources of those kinds of rules—the next step beyond mathematical equations, 300 years later so to speak—is the idea of using programs as the source of our models.

So maybe I'll just read you what I wrote at the beginning of this chapter, and then we'll talk about some of the actual experiments and pictures that came out. The section is called "How Do Simple Programs Behave?": "New directions in science have typically been initiated by certain central observations or experiments, and for the kind of science that I describe in this book these concern the behavior of simple programs. In our everyday experience with computers, the programs that we encounter are normally set up to perform very definite tasks. But the key idea that I had nearly 20 years ago"—when I wrote that; it's now nearly 40 years ago—"and that eventually led to the whole new kind of science in this book, was to ask what happens if one instead looks at simple, arbitrarily chosen programs, created without any specific task in mind. How do such programs typically behave? The mathematical methods that have in the past dominated theoretical science do not help much with such a question. But with a computer it is straightforward to start doing experiments to investigate it, for all one need do is set up a sequence of possible simple programs, then run them and see how they behave. Any program can at some level be thought of as consisting of a set of rules that specify what it should do at each step. There are many possible ways to set up these rules, and indeed we'll study quite a few of them in the course of the book. But for now I'll consider a particular class of examples called cellular automata, which were the very first kinds of simple programs that I investigated in the early 1980s."

So let's talk about how cellular automata work. One of the things that's really important about cellular automata is that their behavior can be presented in a visual way. This is a typical cellular automaton of the kind that I studied a lot, and the idea is there's a line of
cells, each colored either black or white. At every step there's a definite rule that determines the color of a given cell from the color of that cell and its immediate left and right neighbors on the step before. This is the actual page of the book that introduced cellular automata, and you can see there what happens over a series of steps. The particular rule we have here is one that says: make a particular cell black if either it or its neighbors were black on the step before. We can summarize that rule by saying, for all these different cases of what the cell and its neighbors were like on the step before, this is the color to make the cell on the next step.

So the first observation is that with that kind of very simple rule, you get very simple behavior. Then I looked at what happens if you change the rule. Here we've changed just one bit in the rule, and the result is a kind of checkerboard pattern. Here we've changed a few more bits in the rule, started from a single black cell, and ended up generating this nested pattern. If we continue it a while longer we'll find that it is indeed a self-similar nested pattern—self-similar in the sense that a part of the pattern is similar in structure to the whole pattern. So the intuition would be: we've got a simple rule, we run it, and we produce behavior that's not necessarily so simple, but at least has a certain definite regularity to it.

In modern times we get to do this with the Wolfram Language. We can just say: make a picture like the one I just showed. That's just a cellular automaton—rule 90, started from a single black cell with white cells around it—run for, let's say, 100 steps, and that's the result we'll get.

Okay, so the thing that I ended up doing—well, it took a few steps before I did the most obvious experiment. You would think you would do the simplest, most obvious experiment first, but in order to do that you have to have some kind of conceptual framework in which to operate, and I didn't have that at first; it took an extra couple of years to really nail down what the most obvious possible experiment was. But let me show you what I ended up doing. Instead of just using this particular rule, let's use all possible rules of this type—let's go all the way through the 256 possible rules—say we just go 50 steps, and see what result we get. So this is now every one of those possible rules; each picture corresponds to a different rule, always starting from just one black cell at the beginning. What we see is that there's a whole bunch that just have stripes, because the all-white state turns into all black and vice versa. Sometimes the black cell in the middle just stays there and doesn't grow. Sometimes you get these nested patterns. And just sometimes you get something else happening.

This here is my all-time favorite science discovery. In the numbering of these rules, this is rule 30, and that's what it does. If we go back to the book, the next page introduces rule 30, showing the rules used to generate this picture, again starting with just one black cell. And I say here that the picture shows what happens when one starts with just a single black cell, but one sees something quite startling—probably the single most
surprising science discovery I've ever made. I go on on successive pages—it's kind of a fun thing—on pages 29 and 30 of the book; page 30 has a high-resolution picture of, needless to say, rule 30. When the book was printed, it was very non-trivial to actually get the resolution of printing needed to have the individual cells be small—essentially one pixel per cell. Printing can be very high resolution, and this was making use of that to produce this very high-resolution picture in the actual physical book. And there's the result: starting from this one black cell at the top, you get all of this stuff. This is something in a sense very surprising, very shocking: from such a simple rule you can get such complicated behavior.

I went on and talked about another rule with similarly, and in some ways perhaps even more, shocking behavior. This is rule 110. It happens to grow only on the left; it's actually shown at the same resolution as that early rule 30 picture, but rule 110 has much larger structures that are more visually apparent to a person. So let's run this a little longer. Starting from the one black cell over here, we get some fairly regular pattern on the left, and this kind of elaborate set of localized structures here. Continuing a while longer—you could think of this as telling a story in pictures in the book, going from page to page, asking what's going to happen. What are these structures going to do? Are they all going to die out? Are they going to start producing some big explosion of activity? We go through multiple pages of the book; in a sense these were the fastest pages in the book for a human to write, although they took plenty of computational effort. I think there were 12 million cells pictured in these images—about the same as the number of stones in the Great Pyramid of Egypt—though it takes a lot less time for a computer today to produce these results. It turns out that after 4,000 steps you eventually discover that this thing—in this particular case, with this particular initial condition—evolves to something that has rather simple behavior, and that's kind of a summary of what the thing does over all those steps.

So this was the remarkable thing: with a simple program from out there in the computational universe of possible programs—even though you fed no complexity in at the beginning, even though you started off with just one black cell—there was immense complexity in the behavior that could be produced. And in a sense that was the launching experiment for what became A New Kind of Science.

The next section I have here is called "The Need for a New Intuition." Because, as I say there, the pictures plainly show that it takes only very simple rules to produce highly complex behavior. Yet at first this may seem almost impossible to believe: it goes against some of our most basic intuition about the way things normally work. Our everyday experience has led us to expect that an object that looks complicated must have been constructed in a complicated way. So, for example, if we see a complicated mechanical device, we normally assume that the plans from which the device was built must somehow be correspondingly complicated. It's reflecting that image of picking up the pocket watch on the heath, back in the 1800s, and asking how complex was the thing that made this, how complex were its plans—and comparing that to
biological organisms, and realizing that biological organisms are much more complex. Our normal intuition is that if we see a complicated mechanical device, a complicated thing, it must have been created by a complicated set of rules, a complicated set of plans. But as I say, the results in the previous section of the book show that at least sometimes such an assumption can be completely wrong. For the patterns we saw were in effect built according to very simple plans, that just tell us to start with a single black cell and then repeatedly apply a simple cellular automaton rule. Yet what emerges from these plans shows an immense level of complexity.

So what is it that makes our normal intuition fail? The most important point seems to be that our intuition is mostly derived from experience with building things and doing engineering, where it so happens that one avoids encountering systems like the ones in the previous section. For normally we start from whatever behavior we want to get, and then try to design a system that will produce it. Yet to do this reliably, we have to restrict ourselves to systems whose behavior we can readily understand and predict, for unless we can foresee how a system will behave, we cannot be sure that it will do what we want. But unlike engineering, nature operates under no such constraint, so there's nothing to stop systems like the ones we just looked at from showing up—and in fact one of the important conclusions of the book is that such systems are actually very common in nature. But because the only situations in which we are routinely aware of both underlying rules and overall behavior are ones in which we are building things or doing engineering, we never normally get any intuition about systems like the ones we just saw.

So is there then any aspect of everyday experience that should give us a hint about the phenomena that occur in these systems? Probably the closest is thinking about features of practical computing. We know that computers can perform many complex tasks, yet at the level of basic hardware a typical computer is capable of executing just a few tens of kinds of simple logical, arithmetic and other instructions. And to some extent, the fact that by executing large numbers of such instructions one can get all sorts of complex behavior is similar to the phenomenon we've just seen in cellular automata. But there's an important difference: while the individual machine instructions executed by a computer may be quite simple, the sequence of such instructions defined by a program may be long and complicated. Indeed, much as in other areas of engineering, the typical experience in developing software is that to make a computer do something complicated requires setting up a program that is itself somehow correspondingly complicated.

In a system like a cellular automaton, the underlying rules can be thought of as rough analogs of the machine instructions of a computer, while the initial conditions can be thought of as rough analogs of the program. Yet what we just saw is that in cellular automata, not only can the underlying rules be simple, but the initial conditions can also be simple—consisting, say, of just a single black cell—and still the behavior that's produced can be highly complex. So while practical computing gives a hint of part of what we saw, the whole phenomenon is something much larger and stronger. In a sense the most puzzling aspect is that it seems to involve getting something for nothing: the cellular automata we set up are by any measure simple to describe, yet when we run them we end up with patterns so complex that they seem to defy any simple description at all.

One might hope that it would be possible to call on some existing kind of intuition to understand such a fundamental phenomenon. But in fact there seems to be no branch of everyday experience that provides what's needed, and so we have no choice but to develop a whole new kind of intuition. The only reasonable way to do this is to expose ourselves to a large number of examples. We've so far seen only a few examples, all in cellular automata; in the next few chapters of the book we'll see many more, both in cellular automata and in all sorts of other systems. By absorbing these examples, one is in the end able to develop an intuition that makes the basic phenomenon seem almost obvious and inevitable.

So the thing I'm emphasizing there is: you do this experiment in the computational universe, you discover things that are really very surprising, that you didn't expect were there, and now you have to reset your intuition to encompass what you now know is true. In a sense it was that effort to reset one's intuition that was the foundation for the science in A New Kind of Science, and the basis for a lot of the work that's been done since—ultimately the basis for things like our Physics Project and everything emerging from that. All of that came from this one observation, best summarized by this one rule, the rule 30 cellular automaton. It's so much my favorite discovery that on my business card I have a nice rule 30 on the back, and a rule 30 embossed on the front, just in case I'm chatting with someone and it's like, you know, there's this rule 30 thing. It's very hard to describe, because you run a program and it does this thing you don't expect it to do, so you kind of have to see it do it. I'm not sure that in the after-pandemic world business cards are still going to be a thing, but I've always found them useful.

Well, in any case, in the book I really talked about two other things in this chapter. One is
much more technical detail about what one knows about these particular simple cellular automata; the other is more of a conceptual, historical question: this seems like such an incredibly obvious discovery—how come it wasn't known a thousand years ago? I try to talk about how that comes to be. But maybe I should first show a little more of the technical detail, which also lets me emphasize another feature of the book: there's the main text, and then there are the notes at the back. Each of these sections has a whole bunch of notes, with details about what was said in the main text. I have to say that in my own use of the book over all these intervening years, it's probably the notes more than anything that I've referred to, because they just have a huge amount of information packed in. So let me go through a few of the notes for chapter two.

I have a note about implementing cellular automata. At the time, the Wolfram Language—Mathematica, as we then mostly referred to it—didn't have a built-in cellular automaton function. I suppose I avoided it in some fit of "avoid what you know best." But there was another thing: cellular automata are in a sense very general kinds of systems, and just as with mathematical functions—we have hundreds of named kinds of mathematical functions, Bessel functions, hypergeometric functions and so on—one could imagine doing the same kind of thing for cellular automata, with all kinds of named types of cellular automata. I was reluctant to introduce that whole zoo of different cellular automaton functions into the system. Then, sometime toward the end of working on A New Kind of Science, I realized there's a very nice, elegant way to pack the most common kinds of cellular automaton rules together into a single function, and that's in the end what we did, in the version of the Wolfram Language that came out right after the NKS book, 20 years ago. So all the things I'm saying in this section about implementation—in modern times there's just the CellularAutomaton function in the language—but this is telling you how that function can work, and it's very simple and elegant. Even in the base language, you're just taking the list of bits that corresponds to the rule, and indexing into that list with an array computed from the cells—for example, with periodic boundary conditions as we have here, by rotating the array of cells left and right.

Now, one feature of cellular automata is that these rules can be viewed as Boolean functions. Let me show you in modern terms. If I say something like RulePlot[CellularAutomaton[30]], you can view the outputs here as a Boolean function of the inputs—you can just say these are trues and falses, and ask what Boolean function corresponds to that set of outputs. In fact, in modern times you can say BooleanFunction[30, 3]—a Boolean function of, let's say, p, q and r—and then BooleanConvert, or actually BooleanMinimize, will give us the Boolean function, in terms of ANDs and ORs, that corresponds to the cellular automaton rule. And if you're going to implement cellular automata on a present-day computer—a computer that wasn't designed specifically with cellular automata in mind—a super-efficient way to do the implementation is to compile cellular automata into these bitwise operations. That's partly because bitwise operations can be done on a standard computer a whole word at a time: you can take the bit-AND or the bit-XOR of a whole word of bits—64 bits in modern times—with another whole word of bits, and that all happens in hardware, in parallel. So that's a way of dramatically speeding up the implementation of cellular automata, and I talk a little about that here. There are all kinds of tricks you can use about how to do this, and indeed in the Wolfram Language implementation of cellular automata, needless to say, things are exceptionally optimized.

One of the things I did in the book: there are basically no formulas in A New Kind of Science. The only ways of communicating precise things are two: pictures, and Wolfram Language code—or, as we called it then, Mathematica code. This, by the way, is the section about bitwise optimization of cellular automata and the tricks you can pull to do that as well as possible. I talk more about different ways to implement things, and about the fact that the core idea of the Wolfram Language—making transformations on symbolic expressions—has a direct application to cellular automata: you have these blocks of cells, you're asking how you should transform a block of cells, and you can do that using patterns. That's the Wolfram Language way to do it. And of course the code that I wrote, in many cases like 30 years ago—because of the effort we've put in to maintain compatibility and not make design mistakes in the Wolfram Language—that code still runs, which is really cool.

I should say that this chapter two of A New Kind of Science I wrote in the year I started working on the book, 1991. In a sense I would have been happy if that was all I needed to say; I could have finished the book in 1992 and spent the rest of the '90s doing other things. But as it was, the discovery of rule 30 led to a set of discoveries that kind of cascaded for a whole decade. I mean, I had discovered rule 30 back in 1984, and then I tried to see its consequences, starting in 1991, and to see how general that phenomenon was.

This note is talking about special-purpose hardware for cellular automata, which seems to come up with some regularity. I'm sure in modern times there are GPU implementations, although GPUs tend to be very much oriented toward floating-point operations, and cellular automata are very much discrete kinds of things; you could probably make GPUs operate even more efficiently for cellular automata than what's done right now.

Oh boy, I have all kinds of things hidden in the notes here. This is the audio representation of cellular automata. Let's see—I bet we could still do this. We can just say Play... and what's this going to do? It's just 0.2 seconds; I wonder if this is going to play through all the layers of everything here. That's not a good sign... oh, I see, I said List there; that's why it didn't work—I didn't have an actual list. Let's see what it says it does: a step in the evolution of a cellular automaton is represented as a sound by treating each cell like a key on a piano, so a step yields a chord, such as this. So, for example, I'm sure we could just take the center column—let's take a little piece of rule 90;
let's not do too many steps, just 10 steps. Okay, there we go. Now let's try doing this. (We could write that code a little more nicely in modern times.) Let me see—I wonder, if I put that whole list in there, I think that's going to produce a very strange result, but let's take a single thing here and see what that does; I just take that and put it in place of that list there. Okay, not too exciting, but we could go through and play the whole sequence of those. If I run for a longer time—let's just get the hundredth step in rule 30, for example—that should give us... oops, I probably need two of those... there we go. So let's take the 500th step here, and let's go ahead and run this. This may produce an absolutely terrible sound, but we'll see what happens. Okay, let's have a listen. [Music] Oh, it was all for just 0.2 seconds; I should have extended it. Let's try it for two seconds, just for fun. This may sound absolutely horrible, so watch out. Yeah—well, in a sense that sound reveals one of the core features of rule 30.
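The scheme in that note, as read out above, maps one step of evolution to a chord: each cell is a piano key, and the black cells sound. Here is a hypothetical Python sketch of just the note-mapping part—the function name and the MIDI-based equal-temperament tuning are my own assumptions, not the book's code, which uses the Wolfram Language's Play:

```python
def row_to_frequencies(row, base_midi=60):
    # Treat cell i as the piano key at MIDI note (base_midi + i), and sound
    # the keys whose cells are black (1).  In twelve-tone equal temperament,
    # MIDI note m has frequency 440 * 2**((m - 69) / 12) Hz.
    return [440.0 * 2 ** ((base_midi + i - 69) / 12.0)
            for i, cell in enumerate(row) if cell]
```

Feeding these frequencies to any synthesis routine yields the chord for one step; a dense row from rule 30 gives a big cluster of notes, while rule 90's sparse rows give thinner chords.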
What it does seems random—as random as far as our tests of randomness are concerned, and that was a test based on our ears, so to speak. Okay, well, let's keep going; that was about the audio representation of cellular automata. You never know what's hidden in these notes; I'm constantly finding things in them that I didn't remember were there.

This one is about the mathematical representation. Rule 30 can be thought of as p + q + r + q·r mod 2: if p, q and r are the values (0 or 1) of the three cells in the neighborhood, one way to represent rule 30 is to add up the three values, plus q times r, and take the result mod 2. Rule 90—the rule I showed you that has a nested structure—is just p + r mod 2. One feature of that is that it's a linear, additive function mod 2, whereas rule 30 is p + q + r—almost a straight linear function—but then it has that extra q·r term. That's its nonlinearity, and that one little tiny nonlinearity is the source of all of the complexity in the behavior of rule 30.
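Those representations are easy to check mechanically. Here is a short Python sketch—my own verification, not code from the book or the talk—that treats an elementary rule number as a lookup table and confirms both mod-2 formulas, along with the minimized Boolean form of rule 30, p XOR (q OR r), which is also what makes the word-at-a-time bitwise implementations mentioned in the notes work:

```python
def rule_output(rule, p, q, r):
    # Output of elementary rule `rule` for the neighborhood (p, q, r):
    # the 8 bits of the rule number form the lookup table.
    return (rule >> (4 * p + 2 * q + r)) & 1

neighborhoods = [(p, q, r) for p in (0, 1) for q in (0, 1) for r in (0, 1)]

# Rule 90 is purely additive: p + r mod 2.
assert all(rule_output(90, p, q, r) == (p + r) % 2
           for p, q, r in neighborhoods)

# Rule 30 is almost linear, plus the single nonlinear term q*r...
assert all(rule_output(30, p, q, r) == (p + q + r + q * r) % 2
           for p, q, r in neighborhoods)

# ...which is equivalent to the minimized Boolean form p XOR (q OR r).
assert all(rule_output(30, p, q, r) == p ^ (q | r)
           for p, q, r in neighborhoods)
```

That last form means a whole row packed into a machine word can be stepped with row-wide shifts, ORs and XORs—the bitwise speedup described earlier.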
Well, okay, let's go on. This is a page about what happens if you think about something like rule 30 as a mathematical function. What if you take the configuration of values at a particular step as the digits of a number—so in binary it would be 1 0 1 1 1, whatever it is—take that as a binary number, and ask what binary number you get on the next step? It's like turning a cellular automaton into an iterated map, and these are the functions you get. That's the function for rule 30: as a function of the value at one step, for a finite-width rule 30, that's the new value you get. And what you see is that this structure itself has a nested pattern—it's like a Cantor-set structure. In fact you can even view cellular automata as being like continuous maps of the Cantor set to itself, and that's revealed by this, in a sense, traditional mathematical representation. If cellular automata dealt with numbers, this is how they would do things with numbers.

Okay, so one feature of rule 90—the nested pattern—is that it's a cellular automaton with a definite mathematical interpretation of what it's doing. It says: take the left element and the right element, add them up mod 2, and make that be the new element. That is the direct analog of Pascal's triangle, where you're doing the same thing with complete integers—1; 1 1; 1 2 1; 1 3 3 1; and so on. Rule 90 is Pascal's triangle mod 2, and there are all kinds of interesting things to say about Pascal's triangle mod 2—for example, if you just want to count how many black cells there are on the t-th step of Pascal's triangle mod 2.
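That correspondence between rule 90 and Pascal's triangle mod 2 is easy to check directly. A Python sketch—my verification; the book and the talk use the Wolfram Language—evolving rule 90 from a single black cell and comparing it with binomial coefficients mod 2:

```python
from math import comb

def rule90_row(t):
    # Row t of rule 90 grown from a single black cell (length 2*t + 1),
    # using the additive rule: new cell = left neighbor XOR right neighbor.
    row = [1]
    for _ in range(t):
        padded = [0, 0] + row + [0, 0]
        row = [padded[i - 1] ^ padded[i + 1]
               for i in range(1, len(padded) - 1)]
    return row

def pascal_row_mod2(t):
    # Binomial coefficients C(t, k) mod 2, spread onto the even offsets
    # of a width-(2*t + 1) row, matching the cellular automaton layout.
    out = [0] * (2 * t + 1)
    for k in range(t + 1):
        out[2 * k] = comb(t, k) % 2
    return out

assert all(rule90_row(t) == pascal_row_mod2(t) for t in range(16))
```

Counting black cells per row with `sum(rule90_row(t))` then reproduces the result discussed next: the count on row t is 2 raised to the number of 1s in the binary digits of t.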
so for example if we take um uh let's just take our favorite uh okay that was lots of those other cellular automata but let's just take um uh uh rule 90 let's start it from a single black cell let's um let's run it for 100 steps and let's for each one of these let's um uh let's find the total so this is the number of black cells um as a function of so let's say we say this line plot this will show us that it's cut off it cuts off at the top there because it went bigger than the default thought range but this is showing us as a function of of of level in that nested pattern how many black cells are there so uh if if we look here well actually we can we can very rapidly see that this is that they're always powers of two so let's go ahead and take um uh log to the base two of of that there's the result and so now we can say well what does that look like and for people like me who've looked at way too many of these kinds of um these kinds of things i look at that picture and i know i say i know what that is that's a picture of the number of ones in the binary decomposition of the number so what we'll see here is as we go up to for example 32 it'll it'll drop down to just one the 32th element here because 32 in base 2 is just one a bunch of zeros the um uh this has value one for 31 which is a bunch of ones uh it'll have its maximum value and so on and so it turns out that one of the sort of mathematical results is that the nth row or the teeth row of pascal's triangle mod two has a number of uh that the number of ones there is proportional to the number of ones in the binary decomposition of the number t so that's an example of a result and there are a whole bunch of other results here that i talk about about thinking about this in terms of polynomials you can think about pascal's triangles like those are the coefficients in 1 plus x to the power t in this case it's because it kind of goes in both directions it's 1 over x plus x to the power t and one can look at the 
coefficients of that, and those correspond to the values in Pascal's triangle. And there are different ways to compute these things. These are all quite interesting to take apart, and they show how convenient it is to use Wolfram Language as a representation for these kinds of algorithm fragments — if you tried to write this in traditional mathematical notation it would be very awkward. There isn't even really a good symbol for the digits of an integer; it's not like a function such as the divisor sigma function, where an integer goes in and an integer comes out — these kinds of cases just don't have a convenient traditional notation. But in Wolfram Language we do have a convenient notation, and I made it a principle in writing A New Kind of Science that I would only use Wolfram Language code, plus pictures, plus English, to explain what I was talking about. It worked really well, and I think the discipline of writing things as Wolfram Language code was really important in getting clarity about what one was talking about — plus, 20 or 30 years later, you can still run the code and play with it yourself. Okay, that's the part about nesting and self-similarity in rule 30, and that's about the fact that with different initial conditions you can get all kinds of wild pictures. There are starting to be image-source notebooks — there will be more of these — so let's see if we can pull one out. Here's an example of what happens if you start rule 30 off not from a single 1 in a sea of 0s but from — sorry, let me get this right — a
single 1 in a sea of repeating 1-1-0 blocks, and it makes that kind of pattern. It's sort of remarkable that people still discover things about rule 90 — did I say rule 30 a moment ago? I meant rule 90 here. Rule 30 has an amazing amount still to discover, but even rule 90, which seems accessible to traditional mathematics through binomial coefficients and so on, still has things to discover. Oh, and this is about more general additive rules. If you take the sum mod k for any integer k, you get a nested pattern — that's something I noticed around 1981, and bizarrely nobody seemed to have noticed it before (there was actually some precursor literature, but it didn't really emphasize this idea of nesting). When k is prime you get a nice clean nested pattern; when k is composite it's a bit more complicated. And you can ask things like: what is the fractal dimension of the nested pattern? It turns out it's possible to work that out for any prime k — again, a formula that's rather easy to write in Wolfram Language and would be quite hard to write in traditional mathematical notation. I wondered if we could work this out live — oh, this is in arbitrary dimension d, so I had to read the documentation to check what d actually was, and for a moment I misread what was going on — but no, I see, this is telling us
the fractal dimension just in terms of k — a more general kind of thing. Okay, onward: the history of Pascal's triangle mod k. Pascal's triangle was probably known in China in the 1200s — there are drawings of it from that time — and it was discussed by Pascal in 1654, in particular in connection with probability theory, because what we would now call binomial trees of possibilities — you flip a coin, it comes up heads or tails, and you can count the different sequences — can be represented by Pascal's triangle. This idea of using the digits of numbers to find binomial coefficients mod k has been independently invented several times; Lucas and Glaisher were two people who studied this phenomenon, whereby you can compute the binomial coefficients mod k — the binomial coefficients being the things that appear in Pascal's triangle — using the digit sequences of numbers. And if you're going to do it for binomials, what about other things — Stirling numbers and so on? You tend to find that these simple combinatorial functions mostly end up giving nested patterns in these cases too, although somewhat distorted ones. Oh yes, and this is kind of fun: if you ask about things like BitAnd and BitXor, you can ask what the BitXor function looks like — if you just evaluate BitXor[x, y], what are the values? Does this yet have click-to-copy? Oh, this has a cloud notebook — there we go, nice. So let me pick this out. I'm not sure this is the best way to do it anymore — things have become more modern since — but anyway, I can do it this way.
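The BitXor surface being pulled up here is easy to sketch; here's a tiny Python stand-in for the notebook version (purely illustrative):

```python
# Height at (x, y) is x XOR y; thresholding it gives the nested pattern behind
# the 1962 PDP-1 "munching squares" display hack.
size = 16
grid = [[x ^ y for x in range(size)] for y in range(size)]

half = size // 2
# The pattern is self-similar: the top-left quadrant repeats in the bottom-right
# quadrant, because setting the same high bit in both x and y cancels under XOR.
assert all(grid[y][x] == grid[y + half][x + half]
           for y in range(half) for x in range(half))
assert all(grid[d][d] == 0 for d in range(size))  # x XOR x == 0 along the diagonal

# Crude text rendering of the thresholded surface
for row in grid:
    print("".join("#" if v >= half else "." for v in row))
```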
And this is now BitXor: the height of each column here is the BitXor of its x and y coordinates, and that's the kind of pattern it makes. These kinds of patterns are quite fun — there was a well-known display hack for the PDP-1 computer in 1962, the so-called munching squares hack, where you progressively BitXor squares and go through a whole sequence of different images. It's a case where, had one tried not just the BitXor operation but a combination of operations — maybe BitXor combined with BitOr — one would have made something just like rule 30; but in fact people just used BitOr and BitXor and so on. Well, let's see. I talk about what one can tell about rule 30. For example, one of the questions is: if you look at the center column of rule 30, what can you say about it? Back in those days — originally I did it on the Connection Machine parallel computer — I had computed a large number of bits of that center column. It takes a lot of memory, in a sense, because to compute the nth bit in the center column you have to store at least n cells at around step n/2. I think by the time I did NKS I had computed a million cells; we've now computed more than that. But still a great deal is unknown about rule 30 — so much so that just a couple of years ago I put up some prizes for figuring things out about it. These are the problems: one, does the center column always remain non-periodic, or does it — kind of like the digits of a rational number — eventually just become periodic? Two, does each color of cell on average
occur equally often in the center column — are there as many 1s as 0s? And then the final one, the most difficult: does computing the nth cell of the center column require at least about order-n computational effort? That's really the question of computational irreducibility: is there a way to jump ahead and compute the nth value in rule 30 without computing the things before it? In the notes of the NKS book I talk about what one does know about this pattern. For example, in the rule 30 pattern there is periodicity on the left-hand side, and one can ask how the periods increase as you go to a certain depth. In principle the periods can increase exponentially with depth — on the right-hand side they do increase more or less exponentially, roughly like 2^d where d is the depth — but on the left-hand side they increase much more slowly. People have studied that in more detail in the intervening years; for example, the left-hand periods only reach period 32 at depth 87,867, and so on. So anyway, I was summarizing what was known about rule 30. Sadly, in the intervening 20 years we've learned very little else, despite a fair amount of effort spent on the analysis of rule 30; we don't yet know bigger facts about it. One more thing about this pattern: there's a kind of line between the regularity of the left-hand side and the irregularity of the right-hand side, and you can plot that line of irregularity. It makes a pretty good random walk — a random walk with a slight bias of about 0.252 cells per step. And I should tell a story here. When I was first working on this, in the 1983–84 time frame, Dick Feynman — Richard Feynman, the well-known physicist — was a friend of mine, and he
looked at rule 30 and said, I just don't believe it's really as complicated as you say it is. And he actually spent a vacation — in Hawaii, I think — where he had a computer shipped out for the purpose of playing with it, trying to figure out whether he could decode rule 30. He came back from that and said: okay, okay, I think you might actually have something there — I can't decode it. But one of the things he noticed was the point I'd also noticed, that there is this kind of line of regularity, and he had an argument for why the slope of that line should be exactly 0.25. Maybe I can pull that up here — I should have been better prepared for this, but let's take a look. Oh, look at that: this is in my online scrapbook, and those are some notes, in Dick Feynman's handwriting, of him looking at rule 30 and trying to compute that velocity. This is so quintessentially Feynman: when in doubt, start calculating. For me, that would be: when in doubt, run Wolfram Language on a computer. For Feynman, it was: when in doubt, start calculating by hand. Maybe there are some other things here — oh yes, this is more on rule 30, trying to work out the probabilities. And that's fun — I told you he was a serious hand calculator — I guess that's some probability estimate for rule 30; it's very close to two-thirds there. Well, the thing that I found is that if you measure the velocity of that random
walk, it's about 0.252 — not quite 0.25, and definitely not exactly 0.25. So anyway, just a fun little piece of anecdotal history. Well, going back to the book: I also talk about rule 110, and actually one of the big questions in rule 110 is those structures that you see in it — what are they, and what kinds of things can occur? I talk about that a whole bunch later in the book, so let me not talk about it here. And I have some other notes — there are all kinds of funky things hidden in these notes. Here's one: reactions of scientists. Many scientists find the complexity of the pictures so surprising that at first they assume it can't be real; they assume that while the pictures may look complicated, if they were only subjected to the appropriate kind of analysis they would collapse to simplicity. Dick Feynman was precisely an example of somebody who assumed that — as was I, actually, at the beginning. It's interesting to notice that you might think that using just your eyes to tell you something is somehow unscientific — that no, we have to turn everything into numbers, and that's the only way to be scientific. But one of the meta-lessons of A New Kind of Science is that one's eyes are useful: they have about the highest data rate of anything that gets into our brains, and visualization — being able to see things and come to conclusions just with one's eyes — is really important. One can then back that up with various computational, mathematical, and numerical methods, although sometimes those won't get that far and one's eyes still have it, so to speak. I think I had another note here about intuition from practical computing — not a note I remember at all — so I guess this is commenting on what one could
learn just from experience with computers; maybe 20 years later there's even more one can learn. First: general-purpose computers can be built. Different programs for doing all sorts of things can be set up. Any given program can be implemented in many ways. Programs can behave in complicated and seemingly random ways, particularly when they're not working properly. Debugging a program can be difficult. It's often difficult to foresee what a program can do by reading its code. The lower the level of representation of the code for a program, the more difficult it tends to be to understand. Some computational problems are easy to state but hard to solve. Programs that simulate natural systems are among the most computationally expensive. It's possible to create large programs, at least in pieces. It's almost always possible to optimize a program more, but the optimized version may be more difficult to understand. Shorter programs are sometimes more efficient, but optimizations often require many cases to be treated separately, making the programs longer. And if programs are patched too much, they typically stop working at all. So, some interesting pieces of intuition there, some of which we see reflected both in A New Kind of Science and more recently in the Physics Project, but some of which I think are still interesting on their own. Like: shorter programs are sometimes more efficient, but optimizations often require many cases to be treated separately. In other words, it's often the case that there are special code paths — special cases that aren't the general case — where one can find pockets of reducibility, but when you add in all those pockets of reducibility, the actual program one has may get considerably longer. It's an interesting trade-off; the limiting version of that trade-off is to just have your program
cache all the cases of what you're trying to compute. Let's see — things about design. Okay, well, the next big section of this chapter was something I had really wanted to write about for a long time, and the question, as I titled the section, was: why were these discoveries not made before? Maybe I can just read a little of what I had to say about that. I said: the main result of this chapter, that programs based on simple rules can produce behavior of great complexity, seems so fundamental that one might assume it must have been discovered long ago. But it was not, and it's useful to understand some of the reasons why it was not. In the history of science it's fairly common that new technologies are ultimately what make new areas of basic science develop. Thus, for example, telescope technology was what led to modern astronomy, and microscope technology to modern biology. And now, in much the same way, it is computer technology that has led to the new kind of science that I describe in this book. Indeed, this chapter and several of those that follow can in a sense be viewed as an account of some of the very simplest experiments that can be done using computers. But why is it that such simple experiments weren't done before? One reason is just that they were not in the mainstream of any existing field of science or mathematics. But a more important reason is that standard intuition in traditional science gave no reason to think their results would be interesting. If it had been known that they were worthwhile, many of the experiments could actually have been done long before computers existed. For while it may be somewhat tedious, it's certainly possible to work out the behavior of something like a cellular automaton by hand, and in fact doing so requires absolutely no sophisticated ideas from mathematics
or elsewhere: all it takes is an understanding of how to apply simple rules repeatedly. Well, to highlight that point I showed a collection of pictures I had gathered — let me share this again — of ornamental art, pieces that were kind of proto-pre-cellular-automata. Each one of these has a story, so to speak. This one is first century BC, and actually there are even earlier ones up here — that's 22,000 BC, a bone bracelet with all kinds of elaborate patterns on it. Here's 3500 BC, from the city of Uruk, I believe: an early mosaic pattern, and at least mythologically Gilgamesh himself was supposed to be responsible for the creation of those patterns. Let me pull something up here and tell you a little about some of these pictures. That one there is a mammoth-ivory bracelet, apparently from a place that's now in Ukraine, from the Stone Age. One thing I noted: if you dug this up and asked whether it was made naturally by the mammoth or artificially by a human, you would immediately say it's definitely made by a human — but it turns out the angle of those lines follows angles that exist naturally in mammoth dentin. This one, as I mentioned, is from Uruk in Mesopotamia, now in Iraq; apparently this very temple is mentioned in the Epic of Gilgamesh, and that object is now in a museum in Berlin. This one is Greek, from 1200 BC — apparently a doodle on the back of an accounting tablet found in Pylos, Greece. And I kind of described — I showed how one
expects that pattern was actually made, the successive steps. And it's sort of interesting that the very same pattern occurs in multiple places around the world. That particular labyrinth pattern appears in many places: it's said to have been the plan of the labyrinth housing the Minotaur at Knossos in Crete, and said to have been designed by Daedalus himself; it's also said to have been the logo of the city of Troy, or maybe the plan of some of its walls. It shows up all over the place — coins from Crete, graffiti in Pompeii, and, much later, the floor of the cathedral at Chartres; there are also carvings in Peru, etc. For 3,000 years it's been the most common design used for mazes. Well, okay, then we get to other kinds of patterns. These are lunes — lunes were very popular as a geometrical pattern, though they seem to have fallen out of popularity a bit. This is a Phoenician example of lunes: what you basically do is take those points, put a compass there to draw a circular arc, and look at the intersections of all those circles, and it makes that kind of hexagonal pattern of lunes. Euclid discussed those patterns, and in fact Leonardo da Vinci had an unpublished book — I think it was called Geometrical Play — that is full of those things. Then you see patterns like this, and it's always so hard to tell: you see a pattern like this and wonder, what on earth is it? How did somebody make that? It turns out that pattern, again, is a collection of circles from different center points — it's the overlap of those circles. If you just see it, it's clearly a pattern made on purpose in some sense, but what the mechanism of making it was is hard to say. And one kind of wonders: by the time
you're making a pattern that elaborate in the first century BC, why not make a cellular automaton pattern? Well, this is a Roman mosaic; this is a mosque in Córdoba in Spain, from 785 AD — that's on the wall of a mosque. Sometimes you see finer art that's been preserved: that's a two-inch-square piece of decoration in the Book of Kells, on the Chi Rho page — chi rho for Christ — a book created over the course of many years starting around 800 AD. One thing that's notable here is the existence of nested structures that somebody very elaborately drew on this page. Going further, you start to see this curious Arab-Norman style — it seems to come from two very different geographical areas, but that was the course of history in those days — these kinds of partially nested patterns seen in architecture. That's Lincoln Cathedral in England, from 1225 AD. And this was a favorite find of mine: the first example I could find of truly nested patterns, ones that really go many levels, is in this work by the Cosmati group, originally a group of mosaic layers. These particular mosaics are from a cathedral in Anagni, in Italy. It was kind of an adventure getting those pictures when we got them in the 1990s, although I went to see the thing myself a few years later and it was easier to see by then. These were made around 1226 by a chap called Cosma, of the Cosmati family. Art historians have traced the Cosmati work, as it's called — it's pretty famous in the history of mosaic — and it can be found in all sorts of places: there's stuff
in Westminster Abbey, there's stuff in Venice, and so on. In the art history books there's a bunch of tracing of the genealogy of the Cosmati over generations, and a lot of discussion of when representational art appeared in these kinds of mosaics, and of the methods of actually making the colors in the mosaics and so on — but when it comes to these nested patterns, the old art history books, erudite as they are in form, are completely silent. People just didn't understand this idea of nesting, and so they didn't comment on it. Okay, I have to show you this one. This was a pattern where it's like: look, if you can make a thing like this, maybe it is a cellular automaton, maybe it is some number-theoretic function, maybe it's something like that. It's from around 1300 AD, in a place that's now in Iran. The question is: what is this? I tried to use what cryptanalysis skills I have to figure out what on earth it could be. The answer, in the end, is that it's calligraphy — an example of the so-called square Kufic style — and it's a verse from the Quran. It starts off like that, and then it can be squared up like this, and then you can wrap it like this, and if you pack it in you get this kind of pattern, and the thing unrolls. Talk about right-to-left being a difficult form for computers to deal with, at least sometimes — here you've got it packed into a square, so to speak. But so that turns out to be calligraphy, and it turns out not to be something made according to rules of the type that might be like cellular automaton rules. So I was very curious: what kinds of rules had people used, and why had they not come up with something just like cellular automaton rules?
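It's worth emphasizing just how little machinery such a rule needs. Here's a minimal Python sketch — mine, not the book's code — of a completely general elementary cellular automaton driven by its Wolfram rule number, applied to rule 30 and rule 90:

```python
def eca_step(row, rule):
    """One step of an elementary CA given its Wolfram rule number (0-255).
    Each new cell is the rule-table bit indexed by its 3-cell neighborhood."""
    n = len(row)
    return [(rule >> (((row[i - 1] if i > 0 else 0) << 2)
                      | (row[i] << 1)
                      | (row[i + 1] if i < n - 1 else 0))) & 1
            for i in range(n)]

def center_column(rule, steps):
    """Center column of the pattern grown from a single black cell."""
    row = [0] * (2 * steps + 1)  # wide enough that edges never reach the center
    row[steps] = 1
    out = []
    for _ in range(steps):
        out.append(row[steps])
        row = eca_step(row, rule)
    return out

# Rule 30's seemingly random center column starts 1, 1, 0, 1, 1, 1, 0, 0, ...
assert center_column(30, 8) == [1, 1, 0, 1, 1, 1, 0, 0]
# Rule 90's center column, by contrast, is trivially repetitive after the start.
assert center_column(90, 8) == [1, 0, 0, 0, 0, 0, 0, 0]
```

The whole update rule is one table lookup per cell — exactly the kind of procedure that could, in principle, have been carried out by hand in antiquity.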
And so I was curious about what kinds of things based on rules have been made in human history, and there are lots of different examples. In architecture, for instance, there are definite rules — when you make a ziggurat, or when you make the Great Pyramid, around 2500 BC. I say in the notes of the NKS book that the Great Pyramid contains two million large stones, and that's roughly comparable to the number of cells in the original rule 30 picture — on page 30 of the NKS book there are about a million cells shown. In any case: Hindu temples from about 1000 BC were constructed with multiple scales in a kind of nested pattern, and in Roman architecture — and even the later Palladian-villa kind of thing — there are whole bunches of nested patterns that get made. I mention in the NKS book the Castel del Monte from the 1200s. Somewhat later there are the things called star forts, which are constructed in these nested patterns of fortifications, and many Persian gardens, like the ones at the Taj Mahal, have fairly regular nested structures. So nested structures are fairly common in architecture — but until modern times, until people actually started using cellular automata to make architectural forms, no rule 30 that I could find. Another place where definite rules have been set up is textile making — weaving. In fact it's become one of the rather popular artistic ways to render cellular automata: you can program a computer-controlled knitting machine to actually run the cellular automaton right at the knitting point, so to speak. You could have a Jacquard loom where, instead of punch cards programming it, the cellular automaton program was the thing determining
the knitting pattern. Another one I mention in the NKS book — kind of an odd one — is rope. That was one where I realized there's another example of nesting: when you make a piece of rope you typically make it out of multiple strands, and those strands are arranged in a kind of nested pattern. Let's see — oops, I didn't share that — there you go, there's the nested rope. Well, okay, lots of other kinds of things. I went on this big tour of different kinds of rules: knots, string figures, paper folding; I also talked about mathematics, but that's a bigger discussion. Grammar: the idea that you can set up words in a language according to definite rules seems to have been a thing from the grammar given by a person called Panini for Sanskrit around 500 BC, but that kind of idea of making definite generative rules for grammar didn't really reappear seriously until 1956. I actually recently looked back at what Panini had written. It's so hard to understand a lot of this stuff written in a very different context a long time ago, because it's almost aphorisms — after the verb there must be this, after that there must be that — and if you ask how you would write that in computational terms, you wind up with precisely a generative grammar; but if you take it at face value it reads almost more like templates and examples of how the grammatical structure should work. I mention in NKS poetry, where Indian work from around 200 BC on enumerating the possible forms of prosody led to potentially Pascal's triangle and, more certainly, Fibonacci numbers. And then there are rhyme schemes involving iterated length-six permutations — the sestina — and iterated repetitive sequences — terza rima, used by people like Dante; Dante was a pretty science- and math-oriented person. I talk about music and the different
forms of regularity used in music — but again, no cellular automata. I talked about another wild one: military drill. Formations made up of soldiers probably existed in Babylonian and Assyrian times, and it got pretty well codified — the tortoise configuration of the Roman legions, things like that. Machiavelli, in the 1520s, actually described some elaborate rules, but they were set up to lead to rather simple behavior — a column of soldiers arranged in lines and so on. Another example: rules of games, things like the game of Go, from 500 BC, maybe as early as 2000 BC. Go is indeed a case where simple rules potentially lead to complex patterns of play, but it's again not quite like a cellular automaton: it's not something where you say, here it is, these are the rules, now just go, so to speak — just apply those rules automatically. Similarly with puzzles: they're usually based on constraints rather than on saying, generate this using these rules; it's rather, given these constraints, what can fit? I go on talking about all sorts of other things here. Cryptography is another case — again, not really the same kind of thing as the schemes that involve repetition. It turns out rule 30 itself can be used as a good cryptosystem: you're basically generating randomness to grind up the message you have — you generate that center column and use the initial condition as the key. It's a good stream cipher, but nothing like it seems to have actually been used in cryptography until modern times. I talked about mazes, and all the kinds of doodles people make; I mentioned that Leonardo da Vinci's Geometrical Play has all sorts of elaborate patterns, particularly things based on lunes. Anyway: lots and lots of different places where versions of things based on rules appeared, but no
cellular automata. And you know, I've kind of thought that one day somebody is going to send me mail with a picture of a Babylonian tablet that has something like rule 30 on it. That would be really cool if it happened, but so far nothing like that seems to have been found, and we don't think anything like that exists. In the NKS book I actually mention the idea that such a Babylonian artifact might yet be unearthed, but I say I don't think that's going to happen — because if pictures like these cellular automaton pictures had been seen in ancient times, science would probably have progressed in a different way than it actually did. And I talk in this chapter of the NKS book about how even early in antiquity attempts were presumably made to see what simple abstract rules could reproduce the behavior of natural systems. But so far as one can tell, the only types of rules that were tried were ones associated with standard geometry and arithmetic, and using these kinds of rules only rather simple behavior could be obtained — adequate to explain some of the regularities observed in astronomy, but unable to capture much of what we see elsewhere in nature. And perhaps because of this, it typically came to be assumed that a great many aspects of the natural world are simply beyond human understanding. Finally, with the successes based on calculus in the late 1600s, that belief began to be overthrown, because with calculus there was finally real success in taking what amounted to abstract rules created by human thought and using them to reproduce all sorts of phenomena in the natural world. But the particular rules that were found to work were fairly sophisticated ones, based on particular kinds of mathematical equations, and from seeing the sophistication of these rules there began to develop an implicit belief that in almost no
important cases would simpler rules be useful in reproducing the behavior of natural systems. So during the 1700s and 1800s there was ever-increasing success in using rules based on mathematical equations to analyze physical phenomena, and after the spectacular results in physics in the early 1900s with mathematical equations, there emerged an almost universal belief that absolutely every aspect of the natural world would in the end be explained using such equations. Needless to say, there were many phenomena that did not readily yield to this approach, but it was generally assumed that if only the necessary calculations could be done, then an explanation in terms of mathematical equations would eventually be found. Beginning in the 1940s, the development of electronic computers greatly broadened the range of calculations that could be done, but disappointingly enough, most of the actual calculations that were tried yielded no fundamentally new insights, and as a result many people came to believe, and in some cases still believe today (though I think that's becoming less true 20 years later), that computers could never make a real contribution to issues of basic science. That's something I'd like to think even the efforts around NKS have helped to change over the past 20 years. But I say there that the crucial point that was missed is that computers are not just limited to working out consequences of mathematical equations, and indeed what we see in this chapter of the book is that there are fundamental discoveries that can be made if one just studies directly the behavior of even some of the very simplest computer programs. In retrospect, it's perhaps ironic that the idea of using simple programs as models for natural systems did not surface in the early days of computing, for systems like cellular automata would have been immensely easier to handle on early computers than mathematical equations were. But the issue was that computer time was an expensive commodity, and
so it was not thought worth taking the risk of trying anything but well-established mathematical models. By the end of the 1970s, though, the situation had changed, and large amounts of computer time were becoming readily available, and this is basically what allowed me in 1981 to begin my experiments on cellular automata. Now, there's nothing in principle that makes one have to use a computer to study cellular automata, but as a practical matter it's difficult to imagine that anyone, at least in modern times (I think back to the Book of Kells and so on), would have the patience to generate many pictures of cellular automata by hand. As I mentioned, it would take roughly an hour to generate one of the low-resolution pictures of rule 30, but even the somewhat higher-resolution picture of rule 30 (not the highest resolution that I show) would take a few weeks to make. I have to say, I'm sure it took a lot longer than that to make that illumination of the Book of Kells, so if there had been a will to do it, there would have been a way to do it even a thousand years earlier. Well, I say that even with early mainframe computers, data for these pictures could have been generated in a few seconds or a few minutes, but the point is that one would be very unlikely to discover the kinds of fundamental phenomena discussed here just by looking at one or two pictures. And indeed in my own case it took carrying out quite large-scale computer experiments on a considerable number of different cellular automata to kind of understand, absorb, and believe in the things that I was seeing. If one already has a clear idea about the basic features of a particular phenomenon, then one can often get more details by doing fairly specific experiments, but in my experience the only way to find phenomena that one does not already know exist is to do very systematic and general experiments, and then to look at the results with as few preconceptions as
possible. And while it takes only rather basic computer technology to make single pictures of cellular automata, it requires considerably more to do large-scale systematic experiments, and indeed many of my discoveries about cellular automata have come as a direct consequence of using progressively better computer technology. The same can be said about the kinds of discoveries that have led to things like our Physics Project now. I give some examples in the NKS book about using high-resolution displays when that became possible; I first really understood the randomness of rule 30 when I could generate it on an early laser printer and high-resolution displays and so on. So I say in the book: undoubtedly one of the main reasons that the discoveries I describe in this chapter were not made before the 1980s is just that computer technology did not yet exist powerful enough to do the kinds of exploratory experiments that were needed. But beyond the practicalities of carrying out such experiments, it was also necessary to have the idea that the experiments might be worth doing in the first place, and here again computer technology played a crucial role, for it was from practical experience in using computers that I, for example, developed much of the necessary intuition. As a simple example, one might have imagined that systems like cellular automata, being made up of discrete cells, would never be able to reproduce realistic natural shapes; but knowing about computer displays, it's clear that's not the case, for a computer display, like a cellular automaton, consists of a regular array of discrete cells or pixels, yet practical experience shows that such displays can produce quite realistic images even with fairly small numbers of pixels. And as a more significant example, one might have imagined that the simple structure of cellular automaton programs would make it straightforward to foresee their
behavior. But from experience in practical computing one knows that it's usually very difficult to foresee what even a simple program will do. Indeed, that's exactly why bugs in programs are so common, because if one could just look at a program and immediately know what it would do, then it would be an easy matter to check that the program doesn't contain any bugs. Notions like the difficulty of finding bugs have no obvious connection to traditional ideas in science, and perhaps as a result of this, even after computers had been in use for several decades, essentially none of this type of intuition from practical computing had found its way into basic science. In 1981 it so happened that I had for some years been deeply involved in both practical computing and basic science, and therefore happened to be in an almost unique position to apply ideas derived from practical computing to basic science. Yet even despite this, my discoveries about cellular automata still involved a substantial element of luck, because, as I talk about elsewhere in the book, my very first experiments on cellular automata showed only very simple behavior, and it was only because it was technically easy to do further experiments that I persisted and ended up finding things like rule 30. And actually, even after I'd first seen signs of complexity in cellular automata, it was several more years before I realized the full range of examples, and realized just how easily complexity can be generated in systems like cellular automata. Part of the reason is that that involved doing experiments with progressively more sophisticated computer technology, but the more important reason is that it required developing new intuition, and at almost every stage intuition from traditional science took me exactly in the wrong direction. But I found that intuition from practical computing did better, and even though it was sometimes misleading, it was in the
end fairly important in putting me on the right track. Well, so then I talk about how, now that one knows the fact that simple rules can lead to complicated behavior, one can go back and see quite a number of times in the past when people came at least close to discovering it. Actually, two-dimensional versions of cellular automata were already considered in the 1950s as possible idealized models for biological systems, but until I started looking at these things, actual investigations of cellular automata mostly consisted of constructions of complicated sets of rules that could be shown to lead to fairly specific kinds of behavior. This question of whether complex behavior could occur in cellular automata was occasionally raised, but with the intuition from engineering it was generally assumed that to get substantial complexity one would have to have very complicated underlying rules, and so the idea of studying cellular automata with simple rules didn't surface, and these experiments didn't end up getting done. Well, there were other areas where systems effectively based on simple rules were studied, and in fact complex behavior was sometimes seen, but without a framework to understand its significance, the behavior tended to be either ignored entirely or treated as some kind of curiosity of no particular fundamental significance. Even in the early history of traditional mathematics there were signs of this basic phenomenon of complexity. The distribution of prime numbers had been studied for well over 2,000 years: there's an easy procedure for generating primes, yet once you've generated them, the actual positions of the primes seem in many respects random. But almost without exception, the mathematical work that was done on primes concentrated not on this kind of randomness, but instead on proving the presence of various regularities and average-case properties of the distribution of primes. Another early sign of a
phenomenon of complexity could be seen in the digit sequence of a number like pi. By the 1700s people knew more than 100 digits of pi, and they seemed quite random, but this fact was basically treated as a curiosity, and the idea just doesn't seem to have arisen that there could be a more general phenomenon whereby simple rules, like the ones for computing pi, could produce complex results. In the early 1900s, things like nested and fractal behavior were sometimes generated, and sometimes more complex behavior was seen in systems based on mathematical rules, but the very complexity of this behavior was usually taken to show that it couldn't be relevant for real mathematical work, and could only be of kind of recreational interest. When electronic computers first started getting used in the 1940s, there were more opportunities for this phenomenon of complexity to be seen, and if one looks back, significant complexity probably did occur in many scientific calculations. But those calculations were almost always based on traditional mathematical models, and since previous analyses of these models had not revealed complexity, it tended to be assumed that any complexity in the computer calculations was just a spurious consequence of the approximations used in them, particularly approximating perfect continuum real numbers by discrete things that could be stored in a computer. I mention iterative maps from the 1950s. I mention cryptography: one of the tricky things there is that systems not unlike cellular automata were actually studied in the late 1950s for generating random sequences to be used in cryptography, and, writing 20 years ago, I said that almost all the results that were obtained are still military secrets, but that I do not believe that any phenomena like the ones described in this chapter were discovered. I think it is still true that those things are secret. In any case,
within mainstream science, the standard intuition that had been developed made it difficult for anyone to imagine that it would be worth studying the very simple kinds of programs that I ended up studying. In the 1960s, early computer enthusiasts tried running various simple programs and discovered that they could make nested patterns, for example. I talk about the Game of Life as a cellular automaton, and making structures there, which ended up being done for recreational purposes but not really for scientific purposes. I ended this chapter by saying that, whatever the reasons, the fact remains that despite many hints, the basic phenomenon of simple programs, simple rules, generating complex behavior was not something that had been seen before. And so I comment: it's not uncommon in the history of science that once a general new phenomenon has been identified, one can see that there was already evidence of it much earlier. But the point is that without the framework that comes from knowing the general phenomenon, it's almost inevitable that such evidence will have been ignored. It's also one of the ironies of progress in science that results which at one time were so unexpected that they were missed despite many hints eventually come to seem almost obvious. And having lived with the results of this chapter for what was then nearly two decades, and is now four decades, it's now difficult for me to imagine that things could possibly work in any other way. But the history that I've outlined in this section (and this is something I've certainly thought about a lot), like the history of many other scientific discoveries, provides a sobering reminder of just how easy it is to miss what will later seem obvious. Well, one of the things that I also did in this section: I had quite a lot of discussion of, let's see, I talked some
about a variety of issues. Let's see, I'm just going to talk a bit about these notes and then open this up to some questions, and then we should wrap up for today. I talked a bit about things like even recognizing, you know, that thing that you are showing in a piece of sort of ornamental art: is it merely ornamental, does it mean something, was it generated according to rules, did it just get produced randomly, and so on. Let me talk a little bit about a few close approaches that I mentioned. I mentioned primes, Fibonacci sequences, Leonardo da Vinci. I mentioned that Euler in the 1700s made continued fraction approximations to numbers and noted that there was regularity in these continued fraction approximations for some things, like e for example; but about the continued fraction approximation to pi, which is quite irregular, he just didn't say anything. Same type of thing with the digits of pi. In the 1800s, an interesting case: complicated behavior is found in the three-body problem, the gravitational attractions of earth, moon, sun, but it was assumed that with better mathematical techniques this complexity, this difficulty of figuring out what's going on, would eventually be resolved. A fun one is John Venn, inventor of Venn diagrams: there's a nice little book in which he shows probably the first plot of a random walk, and the way he gets the random steps in the random walk is from the digits of pi, but he makes no comment about this. It's just that that's a convenient way to get random digits; on why they're random, or what the larger significance of that is, he is silent. Then various studies of rewriting systems that ultimately produce nested patterns. Combinators, a very early thing, which I did a lot of work on a couple of years ago now for their 100th
anniversary: very minimal examples of computation, but ones which, as I showed recently (and people had seen smaller examples of before), can produce immensely complicated behavior even with a very simple initial setup. But that wasn't something that Moses Schönfinkel looked at, nor did the people who followed him studying combinators really look at it. Emil Post, whose work I happened to study again because of its 100th anniversary recently: with tag systems, he noted that their behavior was very complicated, and he basically realized that if he could solve the problem of tag he could solve every problem in mathematics. But then he tried to solve the problem of tag and he didn't succeed. He almost discovered Gödel's theorem as a result, but didn't quite. And again, he didn't internalize the complexity that he had discovered by essentially doing experiments on this game of tag; he didn't draw a bigger implication from that complexity. Had he done so, he would kind of have discovered Gödel's theorem, and might have discovered computational irreducibility and so on. The Ising model: kind of structurally a bit like a cellular automaton, but mostly thought about probabilistically rather than according to definite rules of evolution. I talk about Turing machines and so on. The 3n+1 problem: a kind of mathematical problem where there's unpredictability, but again the big focus is proving a simple result about it, not "let's understand the meaning of the unpredictability", but instead "let's try and prove that the thing always terminates in a certain way". Pseudorandom number generators, for example von Neumann's middle-square method, where you square numbers and take the middle digits and so on: they were always viewed as hacks. Von Neumann always would say things like, you know, anybody using machine-generated, deterministically generated random numbers is operating in a
state of sin, so to speak, because they're not truly random, whatever that might mean. Now that we have better theories of physics we can discuss it in a deeper way, but this idea that you grind up numbers and make randomness with something was kind of just viewed as a hack, not viewed as something of fundamental significance. Cybernetics: when it came on the scene in the late 40s and early 50s, people built electronic devices and found sort of complex behavior in simple neural nets and so on, but they just said it's a nuisance; we really want this neural net to learn, you know, light from dark or something like this, and that "noise" is just a nuisance. Alan Turing might have discovered some of these things when he studied biological systems late in his life, in 1952, but instead he, I think, made the assumption that if we're studying biology, something physics-like, we'd better go back to the methods of physics, namely partial differential equations. John von Neumann: he came so close and missed it so badly. He was studying biological organisms, trying to find a simple idealization of biological systems, but he kind of had the assumption that because biological systems are so complicated, whatever it is that makes them able to do things like self-reproduction must be correspondingly complicated, and he never looked at a simple cellular automaton. Von Neumann died before I was born, but I've certainly talked to many people who knew him; one was Marvin Minsky, for example, and Marvin told me (though I think it may be one of these back-projections of history) that he'd asked von Neumann why not use simpler rules, and that von Neumann had just sort of seemed confused by that question. Fermi studied nonlinear springs on computers and discovered a lot of interesting mathematics, and solitons and so on, but again, he was actually interested in how
thermodynamic behavior, sort of randomness, gets generated in a system like that. Anyway, it goes on and on and on: simulations of hard-sphere gases; my friend Solomon Golomb actually came really close to studying rule-30-like things when he studied nonlinear feedback shift registers, which he did as a study of things for radar, for a mixture of radar, satellite communications, error-correcting codes, and maybe a dash of cryptography. That was a really close approach. I think I showed in the book some of what he did: he made a rule that, had he made pictures of it, would have looked like this, but he was mostly interested in the overall period if you made it a circular register (feedback shift registers are always, in a sense, bounded-length circular registers). He was interested in how long until it repeats, rather than how complicated what it does before it repeats is, so to speak. Well, we could keep going here, lots of different examples, so close but not quite there. I should perhaps mention the question of the history of cellular automata themselves; maybe this is my last point here. Cellular automata: a very easy, obvious idea. When did they actually get thought about, and so on? In the 1950s, basically, they were invented multiple times, and there were precursors even to that. So for example, operations on digits had been used since antiquity in doing arithmetic. Finite difference approximations to differential equations appeared in the early 1900s, and by the 1930s they were fairly well known and people were doing them by hand; famously, in World War II there were a number of things that were done that way for computing fluid dynamics and so on. And things like Turing machines had been invented. But the thing
that actually gave cellular automata their name was John von Neumann trying to make an abstract model for self-reproduction in biology, around 1947. Remember, von Neumann was trained as a chemical engineer, so although he was primarily a mathematician, he certainly had a background in that, and von Neumann kind of imagined: what would it take to make a chemical-engineering thing that was like biology? So he was starting to think about factories based on 3D models governed by partial differential equations, kind of the idea of us as factories of chemical species and so on. Then he started thinking about robotics, and then about toy construction sets: could you make a kind of robotic thing that would sort of operate? It kind of reminds me of the Emerald Cloud Lab guys, who have this giant Wolfram Language based stack for operating actual physical biological and chemical experiments and so on; they've now finally built in real life what von Neumann imagined in the late 1940s might be a way to make an analogy of what life does in real biology. But then he kind of concluded that he could use, you know, a Meccano set or some other Lego-like set (Lego didn't exist yet, I think, but Meccano certainly did). And then he realized that maybe you could just use 2D layouts, because electronic circuits manage to basically be in 2D, with a few wires that cross and so on. And then Stan Ulam, in 1951. Stan Ulam had told me that he'd sort of independently invented cellular automata; I don't know if this was really true. I think this was the kind of thing which came out of conversations that he had with John von Neumann, and triangulating from other things that Stan Ulam said, I think it's very hard to remember; I'm sure it came up in various conversations, and he mentioned to von Neumann that,
you know, why don't you simplify your model and just have a discrete array. Ulam had studied infinite matrices, and so that may have been a stimulus for that. So the original von Neumann cellular automaton was a thing that's much more like a circuit, a computer specification: 29 possible colors for each cell, a whole book of possible rules and so on, to basically emulate the operation of an electronic computer and various mechanical devices. And kind of the idea was: just as computers might be universal for computation, von Neumann wanted to make a universal constructor that could kind of construct, somehow, anything. Eventually, Arthur Burks, who I knew, filled in this kind of 200,000-cell configuration in this 29-state, incredibly complicated cellular automaton that was kind of engineering its way to making a copy of itself. So I think that basically von Neumann's original use of cellular automata (and Arthur Burks was the person who coined the name "cellular automaton") was this very engineering-oriented, almost circuit-design kind of approach to things. Anyway, from von Neumann's work a certain amount emerged, though not that much. In the 1960s there was all sorts of rather whimsical discussion of building actual self-reproducing automata, often because of what was going on at the time with spacecraft: self-replicating lunar factories and things like that were discussed. And the second thread was an attempt to capture more of the essence of self-reproduction by mathematical studies of detailed properties of cellular automata. Let's see, this was kind of the early days of theoretical computer science; cellular automata were quite popular (Turing machines eventually took over as the thing to discuss in algorithmic computer science), but they
were, in the 1960s particularly, the subject of discussions of their properties, but these were mostly explicit constructions done with mathematics; nobody simulated the things. Even the papers don't have, at best they have a typewritten one or two lines of cellular automaton behavior. By the end of the 50s, people realized cellular automata could be viewed as parallel computers, and there were all sorts of formal things proved about them as parallel computers. But by the mid-70s it was kind of dead. My friend Ed Fredkin is fond of telling the story that when he came to MIT and wanted to work on things related to cellular automata, people told him it's a dead field, nothing will ever come out of that. He persisted nevertheless. Some work remained on cellular automata after the mid-seventies, particularly in Russia and in Japan, and I know there's been quite a tradition that's continued to today in Japan along these lines. It doesn't help things that there were all sorts of different names used for cellular automata, because they'd been invented in different places by different people: tessellation automata, cellular spaces, iterative automata, homogeneous structures, universal spaces, all used differently by different people. Anyway, lots of different things. In the NKS book there's a lot more detail about all the different things that happened, and perhaps some interesting anecdotes. Well, I mentioned infinite sequences of zeros and ones considered in various kinds of symbolic dynamics in mathematics; I could tell a number of stories about this. In the NKS book I said that in the 1950s and early 1960s there was work in this area, on what were called shift-commuting block maps, at least in the US, by a number of
distinguished pure mathematicians, but since it was in large part for application to cryptography, much of it was kept secret, and what was published was mostly abstract theorems about features too global to reveal any of the kind of complexity that I discuss. It's interesting seeing little pieces of the iceberg of work that was done, in particular in the 1950s, in connection with cryptography. I think in the end cryptography went in a different direction, but in those days a lot of distinguished mathematicians were consultants for the US government working on these kinds of things, and they discovered a bunch of interesting things, and they published the more abstract end of it in the open literature. Well, in any case, at the end of this section in the NKS book I plot (the plot down at the bottom there) the number of papers that mention cellular automata as a function of time, and after my efforts back in '82 you can see that things start to take off again, so to speak, and I think this trend has continued to today. So that's the story of cellular automata there. Okay, well, that was kind of going over a little bit of what's in chapter two. I have to say it's a little bit scary for me that chapter two is one of the shorter chapters in the NKS book, and it's taken me a while to go over some of what's in there, but I encourage people to actually look at the book, online or in physical form. I see there are a number of questions and comments here; I can try and address these if they're quick. Okay, someone asks, saying that in the notes I said I worked hard to analyze the behavior using ideas from statistical mechanics, dynamical systems theory, and discrete mathematics: the question is, has more been figured out
since the book came out? A bit, not much. A few things about different initial conditions have been figured out, but the big problems remain undone, and that's exactly why I put up these Rule 30 Prizes: to try and make forward progress on that. Let's see, rbs asks something related to Galois pseudorandom number generators; not quite sure what those are, but if they're based on Galois fields, I would guess that those are like additive cellular automata, like rule 90. Are nested patterns reversible? Well, no, they're not reversible in the sense that, given an output, you can deduce the inputs. There are some cellular automata that do have that property, and there are some nested ones that have that property, but the ones I've just been talking about are not of that kind. Question: what happened to the rule 30 random number generator? Yeah, well, we used it for many, many years in Wolfram Language as the default for RandomInteger. We retired it only because, with rule 30, to keep it securely random you can only sample it at a certain rate, you can only sample certain bits, because if you take two columns next to each other they are no longer completely random. You can take multiple columns, but there are correlations between those columns. But we did a big search for other rules that would have better randomness properties, because in the hardware of the computer it's perfectly possible to sample not just nearest neighbors but slightly further-away neighbors. We have a rule that samples slightly further-away neighbors and generates randomness a little bit more efficiently than rule 30 does, and so we switched to using that a few years ago. But rule 30, no, it's never been cracked, and in fact these Rule 30 Prizes are partly prizes for showing that certain
aspects of it can be cracked, or that they cannot be. And by the way, people wonder a lot whether one can make an NP-complete cryptographic system, one where it is formally NP-complete, and rule 30, I think, has the best chance of any sort of practical cryptographic scheme of being actually proved to be NP-complete. One can prove that in general cellular automaton inversion is NP-complete; whether it's NP-complete for the specific case of rule 30 is a more complicated issue. Let's see, William asks: is it a good rule of thumb that if it can't be decoded by Dick Feynman then it's irreducible? Well, based on what I think from the Principle of Computational Equivalence, it doesn't even take a Dick Feynman to get to the point: if it isn't fairly obviously reducible, it will typically be irreducible, although there will be exceptions to that, and those exceptions correspond to sort of new forms of perception that we might have, that might be sort of mathematically inspired forms of perception. Let's see, there's a question here from rbs about this point where I said, in the notes to the NKS book, that the programs that simulate natural systems are among the most computationally expensive, and they're asking, have I changed my point of view? I have not thought about that; that is a good question. I mean, I think the thing that's come out of the Physics Project is that for certain kinds of systems, these kind of multicomputational systems, the slice of those systems that we seem to care the most about does not seem to have the same property of irreducibility. It's a funny thing, because you get to the reducible slice by going through an irreducible sequence of steps. It's as if what emerges is reducible, but kind of what's underneath, to get there, at least from the underlying rules, is irreducible. But yes, I suppose that the fact that the Physics Project is capable of sort of plugging
into these shorthand laws of physics is a sign that there isn't that kind of pure irreducibility all the way down. Well, I should probably wrap up there, and thank you all very much for joining me. Next week we will tackle chapter three, which is a very meaty chapter, but I will probably cover it in quite a lot less detail than I covered chapter two here. Just as a preview, chapter three is called The World of Simple Programs, and it's kind of about the exploration that I did to ask the question: given these phenomena that we had seen in cellular automata, how general were those phenomena? Did they apply to other kinds of programs? That's what chapter three was about; that's what I'll talk about next week. So thanks very much.
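As a footnote to the experiments described above: the kind of rule 30 picture discussed in this talk is easy to reproduce today. The following is a minimal sketch in Python (not Wolfram's original code); the rule-number encoding follows the standard elementary-cellular-automaton convention, and the cyclic boundary is an assumption of this sketch.

```python
# Minimal sketch of an elementary cellular automaton experiment:
# each cell's new color depends on itself and its two neighbors,
# with the 8 possible neighborhoods looked up in the bits of the rule number.

def step(cells, rule=30):
    """Apply an elementary CA rule to one row of cells (cyclic boundary)."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def evolve(width=63, steps=31, rule=30):
    """Start from a single black cell and return all rows of the evolution."""
    row = [0] * width
    row[width // 2] = 1
    rows = [row]
    for _ in range(steps):
        row = step(row, rule)
        rows.append(row)
    return rows

if __name__ == "__main__":
    # Print the familiar triangular rule 30 pattern as ASCII art.
    for r in evolve():
        print("".join("#" if c else " " for c in r))
```

A low-resolution picture like the ones mentioned in the talk is just a few dozen of these rows; the randomness shows up in the center column as the evolution continues.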
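And since von Neumann's middle-square method came up above: it is simple enough to sketch in a few lines. This is a toy illustration under the usual description of the method (square the value, keep the middle digits), not production-quality randomness; the method famously degenerates into short cycles.

```python
# Toy sketch of von Neumann's middle-square pseudorandom generator:
# square the current value, zero-pad to twice the digit count,
# and take the middle digits as the next value.

def middle_square(seed, digits=4):
    """Yield successive middle-square values from a seed of `digits` digits."""
    value = seed
    while True:
        squared = str(value * value).zfill(2 * digits)  # e.g. 1234**2 -> "01522756"
        start = (len(squared) - digits) // 2
        value = int(squared[start:start + digits])      # middle digits, e.g. 5227
        yield value
```

Starting from 1234, the first few values are 5227, 3215, and so on; it is exactly the kind of "grind up numbers and make randomness" hack the talk describes.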
Info
Channel: Wolfram
Views: 16,527
Id: SKoW-UjLj5k
Length: 117min 10sec (7030 seconds)
Published: Mon Feb 14 2022