A New Kind of Science - Stephen Wolfram

Captions
It is particularly appropriate that for this year's lecture, the 2003 H. Paul Rockwood Memorial Lecture, we are welcoming Stephen Wolfram, who has made an enormous contribution to computer science, to the science of computation, and also, through the development of the Mathematica program, to research in many, many areas of science and mathematics. The other factor that goes into Stephen's background and lecture is that he comes from physics: he was the youngest physics PhD in the history of Caltech, won the MacArthur genius award and was the youngest to receive it, was a professor at Caltech, moved to the Institute for Advanced Study, where I first met him, and from there to the University of Illinois, where he started a Center for Complex Systems Research, finally starting the company behind Mathematica, which is now based in Urbana-Champaign. He really set the standard for mathematics programs. So without any further ado, I would like to introduce the president and CEO of Wolfram Research, the creator of Mathematica, and the author of A New Kind of Science, Stephen Wolfram.

Well, thank you very much, Terry. I'm sure many of you have now seen this big book somewhere. It took me about ten years to write, and I'm very proud of it, and by now I hope some of you at least have had a chance to actually read it. What I want to do today is to tell you a little about what it says, and about an intellectual structure that I've spent the better part of the past 20 years building. I think it's a structure that has some pretty exciting implications, both now and fairly far into the future.

Back in the late 1970s I was a young physicist, mostly working on particle physics. I also did a certain amount of work on cosmology, and from that I got interested in the question of how structures emerge in our universe, from galaxies on down. Looking at that question, I quickly realized that it was actually an instance of a much more general question: how does anything complicated get produced in nature? There are lots of everyday examples: snowflakes, turbulent fluid flows, forms of plants and animals, and lots of others. My first assumption was that with all the sophisticated math I knew from particle physics and so on, I'd easily be able to figure out what was going on with these kinds of ordinary everyday systems. When I actually tried to do it, though, it just didn't seem to work, and gradually I started thinking that perhaps there might be a fundamental problem with the whole approach I was using.

If one looks at history, the idea of using math and mathematical equations to understand nature has been a defining feature of the exact sciences for perhaps 300 years. It certainly worked out extremely well for Newton and friends in figuring out orbits of comets, and for lots and lots of things since then. But somehow, when the behavior one's looking at is more complicated, it just doesn't seem to work so well. Gradually what I began to think was that that was actually why there had never really been a good theory for complicated processes in nature, in physics and particularly in biology and so on. And I got to wondering whether there might somehow be a way to go beyond the usual paradigm of thinking about mathematical equations in thinking about nature. Well, that was around 1981, and it so happened that at that time I had just spent time developing a big software system
called SMP that was in some ways a forerunner of Mathematica. At the core of SMP was a computer language, and what I'd done to design that language was to try to think about all the computations people might want to do, and then to try to identify primitives that could be strung together to build up those computations. That worked out fairly well, and in fact in a much-evolved form I think it's now worked out spectacularly well in Mathematica. But anyway, back in 1981 I had the idea that perhaps, just as I'd been able to find primitives for computations people want to do, I might somehow also be able to find primitives for what nature does. And the crucial thing I realized is that those primitives don't have to be based only on traditional mathematical constructs. If one's going to be able to do theoretical science, one has to assume that nature follows some kind of definite rules. But why should those rules involve only the kinds of constructs that have been invented in human mathematics, things like numbers and exponentials and calculus and so on? Can't the rules somehow be more general? In the past there wouldn't really have been any systematic way to think about such things, but now we have computers, whose programs in effect implement arbitrarily general rules, and the idea I had was that perhaps the kinds of rules that can be embodied in programs might actually be what nature is using.

So what sorts of programs might be relevant? From knowing about mathematical things, I'd assumed they'd have to be at least somewhat complicated, but still I figured I should at least start by looking at really simple programs. In practical computing we're used to fairly long and complicated programs that are specifically set up for particular tasks. But what I wanted to know about were really simple, short programs, say ones just chosen at random. What do programs like that do? It's sort of the simplest possible computer experiment: you run the simplest programs and see how they behave.

Well, back in 1981 I came up with particular kinds of programs to try that are known as cellular automata. Here's how a simple cellular automaton works. One starts off with a row of cells, each either black or white. Then the cellular automaton evolves down the page, with the color of each cell on each step being determined by a definite rule from the color of that cell and its neighbors on the step before. In this particular case the rule is really simple: it just says that a cell will be black whenever it or its neighbors were black on the row before, and what happens is just that we get a simple uniform black pattern. We can use that icon at the bottom there to represent the rule, the program, that we're using. So what happens if we change the rule a bit? Well, now instead of getting just a uniform black pattern we get a checkerboard. So far none of this is terribly surprising: we're using very simple rules, and we're getting simple patterns out. OK, let's try another rule. It's the same setup as before, but what's going on here? We don't seem to be getting any kind of simple repeating pattern. Let's run it a bit longer. It gets pretty intricate, but we can see that it's very regular: it's just a self-similar or fractal pattern, formed from identical nested pieces. One might think that if one has a simple rule and one starts from just a single black cell, then one would always have to get a pattern that's somehow very regular, like this. At least back in 1982, that's what I assumed was true.
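To make the setup just described concrete, here is a minimal sketch, in Python, of how these simplest one-dimensional cellular automata can be run. The rule numbering follows the standard convention of reading the eight neighborhood outcomes as the bits of a number between 0 and 255; the periodic boundary, the width, and the character-based printing are just choices made for this sketch.

    def evolve(rule_number, steps, width=79):
        """Run an elementary (two-color, nearest-neighbor) cellular automaton
        starting from a single black cell, and return the rows it produces."""
        # The eight (left, center, right) neighborhoods map to the bits of the
        # rule number: neighborhood (1,1,1) -> bit 7, ..., (0,0,0) -> bit 0.
        table = {(a, b, c): (rule_number >> (a * 4 + b * 2 + c)) & 1
                 for a in (0, 1) for b in (0, 1) for c in (0, 1)}
        row = [0] * width
        row[width // 2] = 1          # single black cell in the middle
        rows = [row]
        for _ in range(steps):
            row = [table[(row[i - 1], row[i], row[(i + 1) % width])]
                   for i in range(width)]
            rows.append(row)
        return rows

    if __name__ == "__main__":
        # Rule 90 gives the nested, self-similar pattern; rule 30 the random-looking one.
        for rule in (90, 30):
            print(f"rule {rule}")
            for row in evolve(rule, 20):
                print("".join("#" if cell else " " for cell in row))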
But one day I decided to try a very systematic experiment and just run every single one of the 256 possible simplest cellular automaton rules. When I got to rule number 30, this is what I saw. So what's going on here? Well, let's run it some more. There's a bit of regularity over there on the left, but otherwise this looks really complicated, actually kind of random. When I first saw this I thought it might just be a problem with our visual system: that really there were regularities, but we just couldn't see them. So I ran all sorts of elaborate mathematical and statistical tests, and what I found was that, no, so far as I could tell, something like the center column of cells here really was perfectly random.

Well, this is rather amazing. We have a very simple rule, and we're starting off from a single black cell, but what we're getting out is an incredibly complicated pattern that seems in many ways random. It just doesn't seem right: we put so little in, yet we're getting so much out. It's not what our ordinary intuition says should happen. In our everyday experience, and say in doing engineering, what we're used to is that to make something complicated we somehow have to start off with complicated plans or use complicated rules. But what we're seeing here is that actually even extremely simple rules can produce incredibly complicated behavior. Well, it took me years to come to terms with this phenomenon, and in fact it's gradually overturned almost everything I thought I knew about the foundations of science. It's what's led me to spend the past 15 years building a new intellectual structure, really a new kind of science.

OK, so having seen rule 30, what more can happen? Here's another one of the 256 simplest cellular automata. This is rule 110. This one grows only on the left. What's it doing? Let's run it a little bit longer. It's not making any kind of uniform randomness; instead we can see little complicated structures running around. You can ask what's going to happen. Well, the only way to find out seems to be just to keep running the system. Are we going to get more of the little structures, or are they all going to die out? After 2,780 steps we finally get the answer, at least in this case: we basically end up with just one structure. But it's kind of amazing: everything we get, just starting from one little black cell and then following a really simple rule.
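The statistical tests mentioned above for the center column of rule 30 were far more elaborate than anything shown here, but this small sketch gives the flavor: it extracts the center column and checks that 0s and 1s, and short blocks of cells, occur with roughly equal frequencies. The block length and number of steps are arbitrary choices for the sketch.

    from collections import Counter

    def rule30_center_column(steps):
        """Return the center-column cells of rule 30 run from a single black cell."""
        width = 2 * steps + 3            # wide enough that the edges never matter
        row = [0] * width
        row[width // 2] = 1
        bits = [1]
        for _ in range(steps):
            # Rule 30: new cell = left XOR (center OR right)
            row = [row[i - 1] ^ (row[i] | row[(i + 1) % width]) for i in range(width)]
            bits.append(row[width // 2])
        return bits

    if __name__ == "__main__":
        bits = rule30_center_column(2000)
        print("fraction of 1s:", sum(bits) / len(bits))     # close to 0.5
        # Frequencies of all length-3 blocks; for a random-looking sequence
        # the eight possible blocks should occur about equally often.
        blocks = Counter(tuple(bits[i:i + 3]) for i in range(len(bits) - 2))
        for block, count in sorted(blocks.items()):
            print(block, count)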
Well, in the mid-1980s I'd studied a bunch of cellular automata and I'd seen these basic phenomena, but I wasn't sure whether they were somehow specific to cellular automata or more general. And I had the practical problem that there wasn't any easy way to set up all the very different types of computer experiments that I'd have to do to find that out. In fact, that was one of the main reasons I started building Mathematica: I wanted, once and for all, to make a system that I could use to do all the computations and all the computer experiments I'd ever want. Well, for five years the task of building Mathematica pretty much consumed me, but finally, in 1991, I decided I could start looking at my questions in science again. And it was really incredible: things that had taken me days to do before now took minutes, and it was easy to do all the experiments I wanted. I guess it was a little like how it must have felt when telescopes or microscopes were first invented: one could just point them somewhere and almost immediately see a whole world of new phenomena, like little microscopic creatures swimming around in pond water and so on. Well, in the computational world, one just pointed Mathematica somewhere and suddenly one could see all this amazing stuff.

So the first question was just how special are cellular automata. Well, I looked at lots of cellular automata and often saw very complicated behavior. What about programs with different setups, though? Here's a Turing machine, for example. It's got a row of cells, like a cellular automaton, but unlike a cellular automaton only one cell gets updated at each step, and that cell moves around. Well, the first quite a few Turing machines only do these kinds of things, but if one keeps going, still with very simple rules, then suddenly one sees this: the same kind of complexity that we saw in cellular automata. So what about other kinds of programs? Well, I looked at many, many different kinds: sequential substitution systems that rewrite strings like an iterated text editor; register machines, kind of minimal idealizations of machine language; also symbolic systems, like generalizations of combinators, kind of minimal idealizations of Mathematica. It's the same story in two-dimensional cellular automata, or in Turing machines, or for that matter in three dimensions: every time one sees the same thing, even with very simple rules one can get extremely complicated behavior. One doesn't even need rules that specify an explicit process of evolution; constraints work too. Like, here's the simplest set of tiling constraints that force a non-periodic pattern, and here are constraints that give a kind of crystal that's forced to have a rule 30 pattern. Wherever one looks, it's always the same basic thing: incredibly simple rules can give incredibly complicated behavior. It seems to be a very robust and very general phenomenon.

But how come something that fundamental hadn't been known for ages? Well, partly it's because one can only easily see it by doing lots of computer experiments, and it's only with computers, and particularly with Mathematica, that those have become easy to do. But more important is that with our ordinary intuition there just didn't seem to be any reason to try the experiments; it seemed so obvious they wouldn't show anything interesting. Well, now that one's made the clear discovery that simple programs, simple rules, can produce complicated behavior, one can go back and find all sorts of hints about it from really long ago. Nearly 2,500 years ago, for example, the Greeks looked at prime numbers, and there's a fairly simple rule, a fairly simple program, for generating all the primes, but the sequence of primes, once generated, looks remarkably irregular and complicated. There's also a fairly simple rule for generating the digits of pi, but once generated, that sequence looks to us completely random. And actually there are lots of cases of this kind of thing with numbers. Here's what happens if you write out successive powers of 3 in base 2, kind of a minimal version of a linear congruential random number generator, and already it's amazingly complicated. You can also make very complicated sequences from simple integer recurrences, and here, for example, is the simplest primitive recursive function that has complex behavior.
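As a quick illustration of the powers-of-3-in-base-2 example mentioned above, here is a sketch that prints the successive digit sequences; the irregularity is easy to see even in a few rows. The symbols used for the digits are just a display choice.

    def powers_of_3_in_base_2(count, width=40):
        """Print the base-2 digit sequences of 3**1, 3**2, ..., right-aligned."""
        for n in range(1, count + 1):
            digits = bin(3 ** n)[2:]            # strip the '0b' prefix
            print(digits.rjust(width).replace("0", ".").replace("1", "#"))

    if __name__ == "__main__":
        powers_of_3_in_base_2(25)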
Well, what about other systems based on numbers? What about, for example, those favorite things from traditional mathematical science, partial differential equations? Does their continuous character make them work differently? Well, the ones people usually study show only rather simple behavior. But doing an automated search through the space of possible symbolic equations, I ended up finding these creatures. They're just simple nonlinear PDEs, but even with very simple initial conditions they end up doing all sorts of complicated things. Actually, it's a little hard to tell exactly what they do. We use these to test our ever-improving state-of-the-art PDE-solving capabilities in Mathematica, but with continuous systems there's always a problem, because you end up having to discretize them, and without already more or less knowing the answer it's hard to tell whether what you're seeing is something real. That's probably why, among mathematicians, numerical experiments can sometimes have kind of a bad name. But in a discrete system like rule 30 there are no issues like that: the bits are the bits, and one can tell one's seeing a real phenomenon. Actually, rule 30 is so simple to set up that there's really no reason the Babylonians couldn't have done it, and I sometimes wonder whether, a bit like these, some ancient mosaic of rule 30 might one day be unearthed. But I rather think that if rule 30 had actually been known in antiquity, a lot of ideas about science and nature would have developed somewhat differently. Because as it is, it's always seemed like a big mystery how nature could manage, apparently so effortlessly, to produce so much that seems to us so complex. It's as if nature has some special secret that allows it to make things that are much more complex than we as humans normally build, and often it's seemed like that must be evidence that there's something somehow beyond human intelligence involved. But once one's seen rule 30, it suggests a very different explanation. It suggests that all it takes is for things in nature to follow the rules of typical simple programs, and then it's almost inevitable that, like in the case of rule 30, their behavior can be highly complex. The way we as humans are used to doing engineering and building things, we tend to operate under the constraint that we have to foresee what the things we're building are going to do, and that means we've ended up being forced to use only a very special set of programs that always happen to have simple, foreseeable behavior. But the point is that nature is presumably under no such constraint, so there's nothing wrong with it using something like rule 30, and in that way inevitably producing all sorts of complexity.

Well, once one starts thinking in terms of simple programs, it's kind of amazing how easily one can start to understand the essential features of what's going on in all sorts of systems in nature. So let's talk about a simple example: snowflakes. Crystals grow by starting from a seed and then successively adding pieces of solid, and one can try to capture that with a simple two-dimensional cellular automaton. Imagine a grid where every cell either has solid in it or not. Then start from a seed and have a rule that says solid will be added at any cell that's adjacent to one that's already solid. Here's what one gets: an ordinary-looking faceted crystal, here on a square grid. One could do the same kind of thing on a hexagonal grid. But for snowflakes there's obviously an important effect missing: when a piece of ice solidifies from water vapor, there's some latent heat released, and that inhibits more ice from solidifying nearby. Well, what's the simplest way to capture that effect? One can just change the cellular automaton rule to say that ice gets added only at a cell where the number of neighboring cells that are already ice is exactly one.
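Here is a minimal sketch of the snowflake-style rule just described, assuming a hexagonal grid represented in axial coordinates; it only reports cell counts rather than drawing the patterns, and the coordinate scheme and step counts are choices made for the sketch.

    # Axial coordinates for a hexagonal grid: each cell has these six neighbors.
    HEX_NEIGHBORS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]

    def grow_snowflake(steps):
        """Grow a 2D 'snowflake' cellular automaton: starting from a single seed,
        a cell turns to ice only when exactly one of its six neighbors is ice."""
        ice = {(0, 0)}
        for _ in range(steps):
            # Candidate cells: empty cells adjacent to the current ice.
            candidates = {(q + dq, r + dr)
                          for (q, r) in ice for (dq, dr) in HEX_NEIGHBORS} - ice
            new_ice = {cell for cell in candidates
                       if sum((cell[0] + dq, cell[1] + dr) in ice
                              for (dq, dr) in HEX_NEIGHBORS) == 1}
            ice |= new_ice
        return ice

    if __name__ == "__main__":
        for n in range(1, 9):
            print(f"after {n} steps: {len(grow_snowflake(n))} ice cells")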
OK, so what happens then? Well, here's the answer, and these are all stages that one sees, and they look an awful lot like real snowflakes. It really seems like we've captured the basic mechanism that makes snowflakes have the shapes they do. And we get various predictions, like that big snowflakes will have little holes in them, from where arms collided and so on, which indeed they do. OK, but even though the pictures we've got look a lot like real snowflakes, there are obviously details that are different. One thing one has to understand, though, is that that's always going to happen with a model, because the whole point of a model is to capture certain essential features of a system and to idealize away everything else. And depending on what one's interested in, one may pick different features to capture. So the cellular automaton model is good, for example, if one's interested in the basic question of why snowflakes have complicated shapes, or what the distribution of shapes in some snowflake population will be. But it's not so useful if one's trying to answer a question like specifically how fast each arm will grow at a certain temperature.

I might say that there's actually a general confusion about models that often seems to surface when people first hear about cellular automata. They'll say: OK, it's very nice that cellular automata can reproduce what snowflakes do, but of course real snowflakes aren't actually made from cellular automaton cells. Well, the whole point of a model is that it's supposed to be an abstract way of reproducing what a system does; it's not supposed to be the system itself. When we have differential equations that describe how the Earth moves around the Sun, we don't imagine that inside the Earth there are all sorts of little Mathematicas solving differential equations. Instead, it's just that the differential equations represent, abstractly, the way the Earth moves. And it's exactly the same thing with models based on cellular automata: the cells and rules abstractly represent certain features of a system. And again, what abstract representation, what type of model, is best depends on what one's interested in. In the case of snowflakes there are certainly traditional differential equations that could be used, but they're complicated and hard to solve, and if what one's actually interested in is the basic question of why snowflakes have complicated shapes, the cellular automaton model is a much better way to get at that.

OK, let's take another example: fluid turbulence. Whenever there's an obstacle in a fast-moving fluid, the pattern of flow around it looks complicated and quite random. But where does that randomness come from? Well, one can ask the same question about randomness in any system, and I think there are really three basic ways that randomness can arise. The one that's traditionally most talked about is that the randomness can come from external perturbations, from the environment. An example is a boat bobbing on an ocean: the boat itself doesn't produce randomness; it just moves randomly because it's exposed to all the randomness of the ocean. It's the same kind of thing in Brownian motion, where the randomness comes from lots of microscopic molecules randomly bouncing around, or in electronic noise, where one's listening to much-amplified random thermal motion.
Well, in the last 20 years or so, another way to get randomness that's often been talked about is the one from chaos theory. The idea there is not to have randomness continually injected into a system, but instead to have it fed in only at the beginning, from the details of the initial conditions for the system. So think about tossing a coin or spinning a wheel. Once it's started, there's no randomness in how it moves, but which way it will be pointing when it stops depends sensitively on what its initial speed was, and if it was started, say, by hand, there'll always be a little randomness in that, so one won't be able to predict the outcome. Well, there are more elaborate versions of this, where one's effectively taking numbers that represent the initial conditions for a system and successively excavating higher and higher order digits in them. From the perspective of ordinary continuous mathematics there's some trickiness in accounting for where the randomness comes from. Looked at in terms of programs, though, it's very clear: the randomness one gets out is just randomness one put in, in the detailed pattern of digits in the initial conditions. So again, this ends up being an explanation that says essentially that randomness comes from outside the system one's looking at.

OK, so is there any other possibility? Well, it turns out that there is. Just look at rule 30. Here one doesn't have any randomness initially, one just has a single black cell, and one doesn't have any subsequent input from the outside, but what happens is that the evolution of the system just intrinsically generates apparent randomness.

So what about fluid turbulence? Where does the randomness come from there? Well, with the traditional differential-equations way of modeling fluids it's very difficult to find out, but one can make a simple cellular automaton model where it's much easier. Remember that at the lowest level a fluid just consists of a bunch of molecules bouncing around, and actually we know that the details aren't too important, because air and water and all sorts of other fluids that have completely different microscopic structures still show the same continuum fluid behavior. So knowing that, we can try to make a minimal model for the underlying molecules: just have them, for example, on a discrete grid, with discrete velocities and so on. Well, if one does that, one gets both a rather nice practical way to do fluid dynamics and a way to start addressing some fundamental questions. And what one seems to find is that there's no need for randomness from the environment or randomness from initial conditions: one can get randomness from intrinsic randomness generation, from something like rule 30.

Well, if one's trying to make models of fluid turbulence, it's kind of important to know where the randomness in it comes from, and intrinsic randomness generation makes at least one immediate prediction. It says that in a carefully controlled experiment the turbulence should be exactly repeatable. You see, if the randomness comes from the environment or from details of initial conditions, it will inevitably be different in different runs of the experiment; but if it's like rule 30, then it will always be the same every time one runs the experiment. Well, one can ask about randomness in all sorts of systems, like in finance, for example, where the most obvious feature of almost any market is that the prices in it seem to fluctuate quite randomly. So where does that randomness come from? Some of it is surely from the environment, but some of it is probably intrinsically generated, and knowing that is important if you want, for example, to know whether you can predict it.
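To make the idea of "excavating digits of the initial conditions" concrete, here is a small sketch using the standard doubling map x -> 2x mod 1 as a stand-in for a chaotic system: each coin-flip-like output is just the next binary digit of the starting number, so the apparent randomness was all put in at the beginning. The particular starting fraction is arbitrary.

    from fractions import Fraction

    def doubling_map_outputs(x0, steps):
        """Iterate the doubling map x -> 2x mod 1 and record, at each step,
        whether x is in the upper half of the interval (a coin-flip-like output)."""
        x = Fraction(x0)
        outputs = []
        for _ in range(steps):
            outputs.append(1 if x >= Fraction(1, 2) else 0)
            x = (2 * x) % 1
        return outputs

    if __name__ == "__main__":
        # The output sequence is just the binary expansion of the initial condition,
        # so the 'randomness' here is entirely the randomness of the digits put in.
        x0 = Fraction(1234567, 9999991)
        print(doubling_map_outputs(x0, 40))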
Well, back in physics, one place randomness has been discussed a lot is the second law of thermodynamics, the law of entropy increase. A lot is known about it, but there's still sort of a basic mystery about how the laws of physics can be reversible, yet in our everyday experience we see so much apparent irreversibility. And I think that intrinsic randomness generation finally gives one a way to explain that. It's a slightly long story, but it's basically that things like rule 30 can so encrypt the data associated with initial conditions that no realistic experiments or observations could ever decode it and see how to go backwards.

Well, OK, that's a little about physics. What about biology? Well, there's certainly a lot of complexity in biology, and people often assume that it's at kind of a higher level than in physics. Usually they figure that it's somehow because of adaptation and natural selection. But in reality it's never been clear why natural selection should actually lead to much complexity at all, and that's probably why, at least outside of science, many people have thought there must be something else going on. The question is, what is it? Well, I think that actually it's just the abstract fact that we discovered with rule 30 and so on: that among simple programs it's actually very easy to get complexity. Of course, the complete genetic program for a typical organism is pretty long and complicated; for us humans it happens to be about the same length, for example, as the source code for Mathematica. But it's increasingly clear that lots of the most obvious aspects of forms and patterns in biology are actually governed by rather small programs. Looking, for example, at the kinds of regularities one sees in biological systems, that doesn't seem too surprising. When one sees more complicated stuff, though, traditional intuition tends to suggest that it must somehow have been difficult to get, and that, for example, with the natural-selection picture, it must be the result of a long and difficult process of optimization, or of trying to fit into some complicated ecological niche. Well, I actually think that that's not where many of the most obvious examples of complexity in biology come from. Natural selection seems to be quite good at operating on small numbers of smooth parameters, you know, lengthening one bone, shortening another, and so on. When there's more complexity involved, though, it's very hard for natural selection to operate well, and instead what I think one ends up seeing is much more just the outcome of typical, randomly chosen genetic programs.

So let me give you an example. Here are some mollusk shells with pretty complicated pigmentation patterns on them. Well, in the past we might have assumed that getting things as complicated as this must be difficult, and that somehow it must be the result of a sophisticated biological optimization process. But if you look at these patterns, they look incredibly similar to patterns we get from cellular automata like rule 30. Well, in the actual shell the pattern is laid down by a line of pigment-producing cells on the growing edge of the shell, and it seems that what happens can be captured rather well by a cellular automaton rule. So why one rule and not another? Well, if one looks at different species one sees all sorts of different patterns, but the neat thing is that there are definite classes of patterns that one sees, and those classes correspond remarkably well
with the classes of behavior that one sees in cellular automata. So it's as if the mollusks of the world are just sampling the space of possible simple programs, and we just get to see the results of those programs displayed on their shells. Well, with all the emphasis on natural selection, one's kind of gotten used to the idea that there can't be much of a fundamental theory in biology, and that the things one sees in current organisms must just reflect detailed accidents in the history of biological evolution. But what the mollusk shell example suggests is that things might actually be different, and that instead it might be reasonable to think of different types of organisms as somehow uniformly sampling a whole space of possible programs.

Let me give one more example: shapes of leaves. One might think that these are too diverse to explain in a uniform way, but actually it turns out that there's a simple type of program that seems to capture almost all of them. It just involves successive repeated branchings, and what's remarkable is that the limiting shapes one gets look just like actual leaves, sometimes smooth, sometimes jagged, and so on. Here's a simple case where one can actually lay out all of the possible shapes one gets, and one thing one can see is that varying a parameter can change not only quantitatively but also qualitatively what kind of leaf one gets, and therefore also potentially change how it can function biologically. Well, to be a little more sophisticated, one can kind of summarize features of possible leaves in a parameter-space set that turns out to be a rather interesting, simpler analog of the Mandelbrot set, and from the properties of that set one can deduce all sorts of features of leaves and their likely evolution. Well, OK, there's a lot to say about how one can use ideas about simple programs to capture what's going on in biology, both at the kind of level I've been talking about and at molecular levels.

Let me now turn for a few minutes back to physics, and in particular fundamental physics. Traditional mathematical approaches have obviously had lots of success there, but still they haven't been able to give us a truly fundamental theory of physics. And I suspect that the reason for that is that one really needs more primitives: not just the ones from traditional mathematics, but also the more general ones that one can have in programs. And now that we've seen that very simple programs can produce immensely rich and complex behavior, one can't help wondering whether perhaps all of the amazing things we see in our universe couldn't just be the result of some particular simple program. Well, that'd be pretty exciting: to have a little program that's a precise, ultimate model of our universe, so that if one just runs that program long enough, it'll reproduce every single thing that happens in our universe. But OK, what might such a program be like? Well, one thing that's kind of inevitable is that very few familiar features of our universe will immediately be visible in the program. There just isn't room: if the program is small, there's no way to fit in separate identifiable pieces that represent electrons or gravity or even space or time. And in fact I think that if the program is going to be really small, it sort of has to have the very least possible structure already built in, and, for example, I think a cellular automaton already has far too much structure built in. For example,
it's got a whole rigid array of cells laid out in space, and it also separates the notion of space from the notion of the states of cells, and I don't think one even needs that. In ordinary physics, space is a kind of background on top of which matter and everything else exists. But I think that in an ultimate model one needs only space; I don't think one needs any other basic concepts. Well, OK, so given that, what might space be? We normally think of space as just being something that is there, not something that has any kind of underlying structure. But I think it's actually a little like what happens with fluids. Our everyday experience is that something like water is a continuous fluid, but we actually know that underneath it's made up of lots of little discrete molecules. And I think something similar is happening with space: that at a small enough scale, space is just a huge collection of discrete points. Actually, I think it's really a giant network, with a changing pattern of connections between points, where all that's specified is how each point, each node, is connected to others. Well, in the end there'll probably be lots of ways to formulate it, but a simple one is just to say that each point is connected to exactly three other points, making a trivalent network.

OK, so how can anything like space as we know it come from this? Actually, it's quite easy, and in fact one can have networks that correspond to space in any number of dimensions. Here are some examples: this one's one-dimensional, this one two-dimensional, this one three-dimensional. What's important here is that all of these are just trivalent networks; the only thing that's different is the pattern of connections between the nodes. Well, I've drawn these to make the correspondence with ordinary space clear, but it's important to realize that there's absolutely no intrinsic information in the actual network about how it should be drawn; it's just a bunch of nodes with a certain pattern of connections. OK, so given a pattern of connections, how do we tell whether it corresponds to one-dimensional, two-dimensional, three-dimensional, or whatever space? It's actually quite easy. Think about starting at a particular node, then going to all nodes that take one connection to reach, then two, then three, and so on. What one's doing is forming some kind of circle or sphere or something, and the thing to do is just to ask how many nodes there are inside it after one's gone, say, r connections. Well, in two dimensions it'll be roughly the area of a circle, pi r squared; in three dimensions the volume of a sphere, 4/3 pi r cubed; and so on. In general, in d dimensions it'll be something that grows like r to the d. So given a network, that's how one can tell how many dimensions one's in.
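Here is a small sketch of the node-counting procedure for dimension just described, assuming an ordinary two-dimensional grid graph as the test network rather than a genuinely trivalent one: it counts how many nodes lie within r connections of a starting node and checks that the count grows roughly like r squared. The grid size and radii are arbitrary choices for the sketch.

    import math
    from collections import deque

    def grid_graph(n):
        """Adjacency lists for an n-by-n grid of nodes (a stand-in for a 2D network)."""
        neighbors = {}
        for x in range(n):
            for y in range(n):
                adj = []
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    if 0 <= x + dx < n and 0 <= y + dy < n:
                        adj.append((x + dx, y + dy))
                neighbors[(x, y)] = adj
        return neighbors

    def ball_sizes(neighbors, start, max_r):
        """Number of nodes reachable within r connections of start, for r = 0..max_r."""
        dist = {start: 0}
        queue = deque([start])
        while queue:
            node = queue.popleft()
            if dist[node] >= max_r:
                continue
            for other in neighbors[node]:
                if other not in dist:
                    dist[other] = dist[node] + 1
                    queue.append(other)
        return [sum(1 for d in dist.values() if d <= r) for r in range(max_r + 1)]

    if __name__ == "__main__":
        sizes = ball_sizes(grid_graph(101), (50, 50), 20)
        print("nodes within r connections:", sizes[5], sizes[10], sizes[20])
        # For an effectively 2D network the counts grow roughly like r**2,
        # so doubling r should multiply the count by about 2**2 = 4.
        print("estimated dimension:", math.log(sizes[20] / sizes[10]) / math.log(2))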
OK, so that's a little bit about space. What about time? In the usual mathematical formulation of physics, space and time are always very much the same kind of thing: just different variables corresponding to different dimensions. But when one looks at programs they seem much more different. In a cellular automaton, for example, one moves in space by going from one cell to another, but one moves in time by actually applying the cellular automaton rule. So can space and time really be that different? It's all rather tricky, but I think that at the lowest level they are. It's definitely not like in a cellular automaton, though, because in a cellular automaton there's some kind of global clock, with each cell getting updated in parallel at every tick, and it's hard to imagine how such a global clock could exist in our universe. So what might actually be going on? Well, here's something that at first seems crazy: how about if the universe works like a Turing machine, or what I call a mobile automaton, where at each step there's only one cell that's getting updated? Then there's no problem with synchronization, because there's only one place where anything happens at a time. But how can this possibly be right? After all, we normally have the impression that everything in the universe is going through time together. I certainly don't have the impression that what's happening is that first I'm getting updated, then you're getting updated, and so on. But the point is, how would I know? Because until I'm updated, I can't tell whether you've been updated or not. Well, if one follows this all the way through, one realizes that all we can actually know about, in the end, is a kind of causal network of what event influences what other event. Here's an example of how one can go from an underlying sequence of updates, in this case in a mobile automaton, to a causal network, and the important thing is that even though the updates only affect one cell at a time, the final causal network corresponds to something kind of uniform in space-time.

OK, so how might time work in the context of the space networks I talked about before? Well, the obvious thing is to imagine that there's an updating rule that says that whenever there's a piece of network that has a particular form, it should get replaced by a piece of network with another form. Here are examples of rules like that. There's an immediate problem, though: if there are several places in the network where a particular rule could apply, where should one update first? In general, different updating orders will lead to different causal networks, and one will get a sort of whole tree of possible histories for the universe, and then, to say what actually happens in our universe, one would somehow have to have more information, to say which branch one's on. Well, I don't consider that very plausible, and it turns out that there's actually another, rather subtle, possibility. It turns out that with the appropriate kinds of underlying rules, it actually doesn't matter what order they get applied in. That's what I call causal invariance: it means that the causal network one gets out is always the same. For those of you who know about such things, this is related to the so-called confluence, or Church-Rosser, property in rewrite systems; it's also related to the way that Mathematica manages to find canonical forms for expressions. But anyway, there are conditions, about avoiding overlaps and so on, which turn out, for example, to allow rules based on graphs like these, and whenever one uses just these graphs one has causal invariance, and that means that, in a sense, there's always just a single thread of time in the universe.

Well, OK, in addition to making there be a single thread of time, this setup has another important consequence: it turns out to immediately imply that special relativity must hold. It's a slightly complicated story, but for those of you who know about such things, let me just say that different updating orders correspond to different spacelike hypersurfaces, and that's why, subject to various tricky issues of limits and averages and so on, causal invariance implies relativistic invariance.
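Here is a toy sketch of the order-independence (confluence) idea mentioned above, using a made-up string-rewriting rule rather than an actual network-rewriting rule: it applies the rewrites at randomly chosen positions and checks that the final result is the same no matter what order was used.

    import random

    RULE = ("BA", "AB")          # rewrite rule: wherever "BA" occurs, replace it with "AB"

    def run_to_fixed_point(s, rng):
        """Keep applying the rewrite rule at randomly chosen applicable positions
        until no position applies, and return the final string."""
        pattern, replacement = RULE
        while True:
            positions = [i for i in range(len(s) - 1) if s[i:i + 2] == pattern]
            if not positions:
                return s
            i = rng.choice(positions)
            s = s[:i] + replacement + s[i + 2:]

    if __name__ == "__main__":
        start = "BABABBAABA"
        finals = {run_to_fixed_point(start, random.Random(seed)) for seed in range(20)}
        # For this rule the final string is the same no matter what order the
        # rewrites are applied in -- a toy analogue of causal invariance.
        print(finals)   # a single string: all the As followed by all the Bs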
Well, OK, let's go on. What about general relativity, the standard theory for gravity? One needs to start off by talking about curvature in space. There's an example of a network that corresponds to flat two-dimensional space. What happens, though, if we change the connections, so we mix some heptagons or some pentagons in with those hexagons? The answer is that we get a space that bulges out or bulges in. Well, remember that in 2D the number of nodes we get by going out a distance r is supposed to grow like r squared. In ordinary flat space it's exactly r squared, but in curved space there's a correction term, and it turns out that that's proportional to the so-called Ricci scalar curvature. Well, that's already kind of interesting, because the Ricci scalar curvature is exactly a thing that appears in the Einstein equations that specify the structure of space-time in general relativity. The whole story is quite complicated: we actually need to look at curvature not just in space but in space-time, defined by causal networks. But it then turns out that the growth rates of volumes of space-time cones are related to the so-called Ricci tensor, and then, with certain microscopic randomness and other conditions, it looks like one can derive conditions on the Ricci tensor, and guess what, they seem to be exactly the Einstein equations. Well, there are many issues and caveats, but it's rather exciting: it seems like, from almost nothing, one's able to derive a major feature of our universe, namely general relativity and gravity.

OK, so another major thing in our universe is particles: electrons and photons and so on. Well, remember that all we have in the universe is space. So how can we get particles? Here's how it can work in something like a cellular automaton. Here's a particular cellular automaton; it happens to be our friend rule 110. We start off at the top from random initial colors of cells, but what we see is that the system quickly organizes itself into a few persistent localized structures, and these localized structures act just like particles. So, for example, here's a collision between them: two particles come in, then after a certain amount of interaction a whole bunch of particles come out. It almost looks like an explicit version of a Feynman diagram in particle physics. Well, OK, so how does something like this work in a network? What happens is that the particles end up being based on definite little tangles, kind of non-planar pieces in otherwise planar graphs.
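Here is a small sketch of the rule 110 behavior just described: starting from random cell colors, the evolution quickly settles into a repetitive background with persistent localized structures moving through it, which are easy to spot in the printed output. The width, number of steps, and random seed are arbitrary choices for the sketch.

    import random

    def rule110_step(row):
        """One step of rule 110 with periodic boundary conditions."""
        n = len(row)
        # Rule 110 lookup: neighborhood (left, center, right) -> new cell.
        table = {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
                 (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}
        return [table[(row[i - 1], row[i], row[(i + 1) % n])] for i in range(n)]

    if __name__ == "__main__":
        rng = random.Random(0)
        row = [rng.randint(0, 1) for _ in range(100)]
        for _ in range(60):
            print("".join("#" if c else "." for c in row))
            row = rule110_step(row)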
Well, OK, so talking about particles brings up quantum mechanics. Quantum mechanics is really a big bundle of mathematical results and constructs, and it's certainly not easy to see how to derive it all from the underlying theory I've been talking about. And actually, just to make things even more complicated, my guess is that it will in the end be easier to derive quantum field theory than quantum mechanics, just as it's easier to go from molecular dynamics to fluid mechanics than to rigid-body mechanics. But still, we can see a few things quite easily. The way quantum mechanics is usually set up, it has a kind of fundamental randomness built in. My theory is completely deterministic, but the point is that it makes its own randomness, and actually that randomness is crucial, not only in giving quantum mechanics but even in building up things like space and time. Well, one might think that with the underlying determinism in my theory there couldn't be the kinds of correlations one needs to violate Bell's inequalities and get what's observed in quantum mechanics. But it turns out that that conclusion relies on some basic assumptions about space-time, and with my network setup it's actually quite easy to at least imagine getting the correlations one wants. Roughly, it can happen just by there being a few kind of long-distance network connections between particles that just aren't part of the usual three-plus-one-dimensional space-time structure. In fact, it's quite amazing how many known features of physics one seems to be able to get rather easily from the kinds of simple programs I've been talking about, and I must say that it makes me increasingly hopeful that it'll really be possible to find a single simple program that really is the ultimate program for the universe. There are lots of technical difficulties, lots of tools that have to be built (in fact, you can watch for those in future versions of Mathematica), but I think in the end it's going to work, and it's going to be pretty exciting.

Well, OK, I wanted to come back now to the original discovery that really launched everything I've been talking about: the discovery that even simple programs, like rule 30, can produce immensely complex behavior. So why does that happen? What's the fundamental reason? Well, to answer that, one needs to set up a somewhat new conceptual framework, and the basis of that is to think about all processes as computations: the initial conditions for a system are the input, and the behavior that's generated is the output. Sometimes the computations are ones whose point we immediately know. Like, here's a cellular automaton that computes the square of any number: you give it a block of n cells at the top, and it generates a block of n-squared cells at the bottom. And here's a cellular automaton that generates the primes. But actually any cellular automaton can be thought of as doing a computation; it just isn't necessarily a computation whose point we know beforehand. OK, so we have all sorts of systems, and they do all sorts of computations. But how do all these computations compare? Well, we might have thought that every different system would always do a completely different kind of computation, so that, for example, if one wanted to do addition one would buy an adding machine, and if one wanted to do exponentiation one would buy an exponentiation machine. But the remarkable idea, which is now about 70 years old, is that no, that's not necessary: instead, it's possible to make a universal machine that can do any computation if it's just fed the right input. Of course, that's been a pretty important idea: it's the idea that makes software possible, and really the idea that launched the whole computer revolution. Strangely enough, though, it's not an idea that has in the past had much effect on the foundations of natural science. But one of the things that comes out of what I've done is that it actually has some very important implications there too.

OK, so let's talk about what it means to be a universal system. Basically, it's that with appropriate input, the system can be programmed to act like any other system. So here's an example of that: a universal cellular automaton. The idea is that by changing initial conditions, this single cellular automaton can be made to act like any other cellular automaton. Here it's behaving like rule 254, which happens to make a simple uniform pattern; here it's behaving like rule 90; here it's behaving like rule 30. And remember, each of these pictures is of the same universal cellular automaton, with the same underlying rules; what's happening is that by giving different initial
conditions, we're effectively programming the universal cellular automaton to emulate all sorts of other cellular automata. In fact, it's able to emulate absolutely any other cellular automaton, with rules of any size. Now, one might have thought that a system would only be able to emulate systems that were somehow simpler than itself, but the existence of universality says that we can have a fixed system that can emulate any other system, however complicated that may be. So in a sense, once one's got a system that's universal, one's maxed out from a computational point of view: one's got a system that can do essentially any computation, however sophisticated.

OK, so what about all these various cellular automata, like rule 30, or all the systems we see in nature? How sophisticated are the computations that they're doing? Well, I spent a long time thinking about this and accumulating all sorts of evidence, and what I ended up concluding is something that at first seems pretty surprising. I call it the Principle of Computational Equivalence. It's a very general principle, and in its roughest form what it says is this: that essentially any time the behavior of a system looks to us complex, it will end up corresponding to a computation of exactly equivalent sophistication. So if we see behavior that's repetitive or nested, then it's pretty obvious that it corresponds to a simple computation. But what the Principle of Computational Equivalence says is that when we don't see those kinds of regularities, we're almost always seeing a process that's, in a sense, maximally computationally sophisticated. Now, at first that's pretty surprising, because we might have thought that the sophistication of the computations that get done would depend on the sophistication of the rules that got put in. But the Principle of Computational Equivalence says it doesn't, and that immediately gives us a prediction: it says that even though their rules are extremely simple, systems like rule 30 should be computation universal. Well, normally we'd imagine that to achieve something as sophisticated as computation universality we'd somehow need sophisticated underlying rules, and certainly the computers we use that are universal have CPU chips with millions of gates and so on. But the Principle of Computational Equivalence says you don't need all of that; it says that even cellular automata with very simple rules should be universal. Well, here's one of them. This is rule 110; I showed it earlier. It's got a fairly simple rule, but as you can see, it does some fairly complicated things. It's got little structures running around that seem like they might be doing logic operations or something. But can one really assemble them to get something one can see is universal? Well, one day I think it's going to be possible to automate most of the process of figuring that out. Unfortunately I did not have that automation, so I had to get a human assistant of mine to do it instead. But after a lot of painstaking work, one gets the result that rule 110 is indeed universal. Well, that's just what the Principle of Computational Equivalence said should be true, but it's really a remarkable thing, because it means that this little rule can in effect produce behavior that's as complex as any system. One doesn't need anything like a whole computer CPU to do universal computation; one just needs this little rule. And that has some very important consequences when it comes to thinking about nature, because we wouldn't expect to find whole computer CPUs just lying around in nature, but we
definitely can expect to find things with rules like rule 110, and that means, for example, that lots of everyday systems in nature are likely to be universal. By the way, in the past this was the simplest Turing machine that was known to be universal, but now one can see that this much simpler Turing machine is actually universal, and in fact I suspect that one can go even further, and that this Turing machine will end up being the very simplest possible one that's universal.

Well, there's a lot to say about what the Principle of Computational Equivalence is and what it means. One thing it does is to make Church's thesis definite, by saying that there really is a hard upper limit on the computations that can be done in our universe. But the place where the principle really starts to get teeth is when it says that not only is there an upper limit, but that upper limit is actually reached most of the time. With incredibly simple rules one will often get just simple behavior, say repetitive or nested. But the point is that if one makes the rules even a tiny bit more complicated, then the Principle of Computational Equivalence says that one immediately crosses a threshold and ends up with a system that's maximally computationally sophisticated. But actually the principle goes even further than that. Normally, when one talks about universal computation, one imagines being able to set up whatever initial conditions one wants. But the Principle of Computational Equivalence says that that's not necessary, because even when the initial conditions are simple, there'll still usually be maximally sophisticated computation going on.

OK, so what does all this mean? Well, first of all, it gives us a way to answer the original question of how something like rule 30 manages to show behavior that seems so complex. The point is that there's always a competition between an observer and a system they're observing, and if the observer is somehow computationally more sophisticated than the system, then they can in a sense decode what the system is doing, so it'll look simple to them. But what the Principle of Computational Equivalence says is that in most cases the observer will be exactly computationally equivalent to the system they're observing, and that's why the behavior of the system will inevitably seem to them complex.

Well, a related consequence of the Principle of Computational Equivalence is a very important phenomenon that I call computational irreducibility. Let's say you know the rules and the initial conditions for a system. Then you can certainly work out what the system will do by explicitly running it. But the question is whether you can somehow shortcut that process. Can you, for example, just work out a formula for what will happen in the system, without ever explicitly having to trace each step? If you can, then what it means is that you can figure out what the system will do with a lot less computational effort than it takes the system itself. And that kind of computational reducibility is at the core of most traditional theoretical science: if you want to work out where an idealized Earth will be a million years from now, you don't have to trace all its million orbits; you just have to plug a number into a formula and get out a result. But the problem is, what happens if the behavior is more complex? If a system is repetitive, or even nested, it's easy to shortcut things. But what about a case like this? There's certainly no obvious way to shortcut this, and in fact I think it's computationally irreducible: there's essentially no way to work out what the system will do by any procedure that takes less computational effort than just running the system and seeing what happens.
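As a tiny illustration of the contrast just described between computational reducibility and irreducibility, this sketch compares a system whose state after n steps can be read off from a formula with rule 30, for which no general shortcut is known here and the evolution is simply run step by step. The particular systems chosen are only for illustration.

    def repetitive_system_direct(n):
        """A system that just alternates between two states: its state after n
        steps can be read off from a formula, with no need to run the steps."""
        return n % 2

    def rule30_cell_after(n):
        """The center cell of rule 30 after n steps: here we have no shortcut,
        so we simply run all n steps of the evolution."""
        width = 2 * n + 3
        row = [0] * width
        row[width // 2] = 1
        for _ in range(n):
            row = [row[i - 1] ^ (row[i] | row[(i + 1) % width]) for i in range(width)]
        return row[width // 2]

    if __name__ == "__main__":
        print(repetitive_system_direct(10**9))   # instant: just a formula
        print(rule30_cell_after(2000))           # work grows with n (about n**2 here)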
Well, in traditional theoretical science there's sort of been an idealization made that the observer is infinitely computationally powerful relative to the system they're observing. But the point is that when there's complex behavior, the Principle of Computational Equivalence says that instead the system is just as computationally sophisticated as the observer, and that's what leads to computational irreducibility. And that's, in a sense, why traditional theoretical science hasn't been able to make more progress when one sees complexity. There are always pockets of reducibility where one can make progress, but there's always a core of computational irreducibility.

Well, I think computational irreducibility is a pretty important phenomenon, one that's relevant even beyond what are normally viewed as purely scientific issues, like, for example, the problem of free will. It's always seemed mysterious how we manage to act in ways that seem free of obvious predictive laws, if it's the case that our brains actually follow definite underlying laws. But I think at least a crucial ingredient of the answer is computational irreducibility: that even with definite underlying laws, there can still be no effective way to predict what a system will do, except in effect just by running the system and seeing what happens.

Well, computational irreducibility can also be viewed as what leads to the phenomenon of undecidability, originally discovered in the 1930s. Look at this cellular automaton, for example, and ask the question: starting from a given initial condition, will the pattern that's produced eventually die out, or will it just keep going forever? In this case, just running the cellular automaton tells one that after 36 steps the pattern dies out. In this case, though, it takes 1,017 steps to find that out. And in these cases, even after 10 million steps it's still not clear what's going to happen. The point is that if there's computational irreducibility, there's no way to shortcut this evolution, so there's no finite computation at all that can always figure out what will happen after an infinite time, and that means one has to say that it's in general formally undecidable what will happen. Well, undecidability has been known about in mathematics and in computer science for quite a long time, but with the Principle of Computational Equivalence one realizes that it's also relevant to natural science. If one asks questions about infinite time or infinite size limits, the answers can be undecidable: like whether a body will ever escape in a gravitational three-body problem, or whether some idealized biological cell line will grow forever or eventually die out, or whether there's a way to arrange some complicated molecule into a crystal below a certain temperature. And actually, I expect that when there are things that seem so random that they just have to get tabulated in things like chemical tables, that's a sign that there's computational irreducibility at work.
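Here is a sketch of the kind of question just described, assuming an elementary cellular automaton run on an infinite white background: a procedure that runs the evolution for up to some maximum number of steps and reports the step at which the pattern dies out, or gives up if it hasn't by then. That giving up is exactly the point: no fixed bound can settle the question for every rule and initial condition. The rules and initial patterns tried below are arbitrary choices.

    def dies_out_within(rule_number, init_cells, max_steps):
        """Run an elementary cellular automaton whose background stays white
        (the rule maps (0,0,0) -> 0) from a finite initial pattern, and return
        the step at which the pattern dies out, or None if it has not died out
        after max_steps steps."""
        assert (rule_number & 1) == 0, "background must stay white"
        table = {(a, b, c): (rule_number >> (a * 4 + b * 2 + c)) & 1
                 for a in (0, 1) for b in (0, 1) for c in (0, 1)}
        cells = list(init_cells)
        for step in range(1, max_steps + 1):
            cells = [0, 0] + cells + [0, 0]        # widen the window into the background
            cells = [table[(cells[i - 1], cells[i], cells[i + 1])]
                     for i in range(1, len(cells) - 1)]
            if not any(cells):
                return step
        return None                                 # undetermined within the bound

    if __name__ == "__main__":
        # Try a few rules on one initial pattern: some patterns die out after a
        # few steps, others are still going when the step bound runs out.
        for rule in (128, 160, 126, 110):
            print(rule, dies_out_within(rule, [1, 1, 0, 1], 500))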
Well, undecidability has been known about in mathematics and in computer science for quite a long time, but with the principle of computational equivalence one realizes now that it's also relevant to natural science. I mean, if one asks questions about infinite time or infinite size limits, the answers can be undecidable: like whether a body will ever escape in a gravitational three-body problem, or whether some idealized biological cell line will grow forever or eventually die out, or whether there's a way to arrange some complicated molecule into a crystal below a certain temperature. And actually I expect that when there are things that seem so random that they just have to get tabulated in things like chemical tables, it's a sign that this is computational irreducibility at work. Well, there's another big place where I think computational irreducibility is very important, and that's in the foundations of mathematics. I mean, it may sound kind of obvious, but it's really a deep observation about mathematics that it's often hard to do, yet it's based on pretty simple axioms; in fact, right here are the ones for essentially all of current mathematics. But even though these axioms are simple, proofs of things like the four-color theorem or Fermat's Last Theorem are really long, and it turns out that one can think of that as just another case of the phenomenon of computational irreducibility. So let me show you a little example of how that works. Here's an example of a simple proof in mathematics: these are some axioms up here, in this case specifying equivalences in logic, and this is a proof. It starts from one expression at the top, then keeps on using the axioms, and eventually proves the theorem that the expression at the top is equivalent to the one at the bottom. Well, okay, so as a kind of minimal idealization of mathematics one can imagine that the axioms just define transformations between strings. So with the axioms at the bottom here, these are proofs of a few theorems. So how long do the proofs need to be? Well, here's a picture for three axiom systems showing the network of all possible transformations, and the way this works, every possible path through each network corresponds to the proof of some theorem. Well, the point is that the shortest path from one particular string to another may be really long, and that means that the theorem that the strings are equivalent has only a really long proof.
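The string-rewriting idealization just described can be sketched in a few lines of Python; the particular axioms below are invented for illustration and are not the ones shown on the slide, but the idea is the same: a theorem says one string can be transformed into another, a proof is a path of rewrites, and breadth-first search finds the shortest such path.

```python
# Minimal sketch of the string-rewriting idealization of proofs. The axioms
# here ("AB" <-> "BA" and "A" <-> "AA") are invented for illustration; a
# proof of the theorem that `start` equals `target` is a shortest path of
# rewrites, found by breadth-first search.

from collections import deque

def shortest_proof(rules, start, target, max_length=12):
    """Shortest sequence of strings from `start` to `target` under two-way rewrites."""
    two_way = rules + [(b, a) for (a, b) in rules]     # axioms can be used in both directions
    queue, seen = deque([(start, [start])]), {start}
    while queue:
        s, path = queue.popleft()
        if s == target:
            return path
        if len(s) > max_length:                        # keep the search space finite
            continue
        for lhs, rhs in two_way:
            i = s.find(lhs)
            while i != -1:                             # rewrite at every position where lhs occurs
                t = s[:i] + rhs + s[i + len(lhs):]
                if t not in seen:
                    seen.add(t)
                    queue.append((t, path + [t]))
                i = s.find(lhs, i + 1)
    return None                                        # no proof found within the length bound

if __name__ == "__main__":
    axioms = [("AB", "BA"), ("A", "AA")]
    print(shortest_proof(axioms, "AB", "BAAA"))        # e.g. ['AB', 'BA', 'BAA', 'BAAA']
```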
Well, when people were thinking about formalizing mathematics a century ago, they kind of assumed that in any given axiom system it would always eventually be possible to give a proof of whether a particular statement was true or false. So it was a big shock in 1931 when Gödel's theorem showed that that wasn't true for Peano arithmetic, the standard formal axiom system for ordinary integer arithmetic. What Gödel actually did, though, was to look at a kind of funky self-referential statement: "this statement is unprovable". Well, just by what it says, the statement fairly obviously can't be proved true or false, but as such it doesn't seem like a statement in arithmetic, and Gödel's real achievement was essentially to show that arithmetic is universal, so that in particular it can encode his funky statement. Well, for the foundations of math Gödel's theorem was a pretty big deal, but somehow in all these years it's never seemed relevant to most of the things working mathematicians deal with, and if you needed something like Gödel's funky statement to get undecidability, that wouldn't be surprising. But here's the thing: the principle of computational equivalence should be general enough to apply to systems in mathematics, and it then says that computational irreducibility and undecidability should actually not be rare at all. So where are all these undecidable statements in mathematics? Well, it's been known for quite a while that there are integer equations, so-called Diophantine equations, about which there are undecidable statements. Here's an actual example: a Diophantine equation explicitly set up to emulate rule 110. Well, this is obviously pretty complicated, and not something that would sort of show up every day. But what about simpler Diophantine equations? Well, here are a bunch: linear Diophantine equations were cracked in antiquity, quadratic ones around 1800, and so far another kind seems to get cracked roughly every 50 or 100 years. But I'm guessing that that's actually not going to go on, and that actually many of the currently unsolved problems in number theory will turn out to be undecidable. Okay, but why has so much math then successfully been done without running into undecidability? I think it's kind of like theoretical physics: it's tended to stick to places where there's computational reducibility and where its methods can make progress. But at least in recent times mathematics has prided itself on somehow being very general. So then why haven't rule 30 and rule 110 and all the other phenomena I've talked about and found in simple programs shown up? Well, I think part of the reason is that mathematics isn't really quite as general as advertised. I mean, to see what it could be, one can just imagine enumerating possible axiom systems, and for example this shows what theorems are true for a sequence of different axiom systems. It's like a sort of ultimately desiccated form of mathematics: the axioms go down the left here, the theorems go across the top, and there's a black dot every time there's a theorem that's true for a particular axiom system. So is there something special about the actual axiom systems that get used in mathematics, perhaps something that makes undecidability less rampant? Well, if one looks at axiom systems from textbooks they're usually pretty complicated, like this one for logic for instance. Well, it's been known for a hundred years that one doesn't have to have those three different operators in there: the single Nand operator, or Sheffer stroke, is enough, but the obvious axiom system with that is still pretty complicated. Well, from all the intuition that I'd built up about simple programs, I suspected that there should actually be a really simple axiom system for logic, probably with just one axiom, and so I searched for it, and eventually I found it, and I know that this is the very simplest possible axiom system for logic. Here's the proof, by the way, needless to say generated by computer. Well, knowing this, I can say that if one just enumerates axiom systems, logic will be about the 50,000th one that one finds. Well, what about all the other ones? Well, most of them are perfectly reasonable axiom systems too; they just don't happen to correspond to known fields of mathematics. And actually I think that mathematics as it's developed has been in a sense tremendously constrained: at some level it's really still pretty much just various direct generalizations of the arithmetic and geometry that got studied in ancient Babylon. Well, if one starts looking at simple programs and just doing experiments, one immediately sees a much wider world, kind of a huge generalization of what mathematics has been so far.
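As a small check of the claim above that the single Nand operator (Sheffer stroke) is enough to build up basic logic, here is a short, self-contained Python verification by truth table; it covers only the functional-completeness part of the story, not the axiom systems themselves.

```python
# A truth-table check that Not, And and Or can all be written purely in
# terms of the single Nand operator (Sheffer stroke).

def nand(p, q):
    return not (p and q)

def not_(p):     return nand(p, p)
def and_(p, q):  return nand(nand(p, q), nand(p, q))
def or_(p, q):   return nand(nand(p, p), nand(q, q))

for p in (False, True):
    assert not_(p) == (not p)
    for q in (False, True):
        assert and_(p, q) == (p and q)
        assert or_(p, q) == (p or q)
print("Not, And and Or all reduce to Nand alone")
```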
Okay, well, let me now turn to a somewhat different topic. I want to talk about what the principle of computational equivalence says about sort of a big question: our place in the universe. It's sort of always been natural for us to think that we as humans are very special, but the history of science keeps on showing us ways in which we're not. For example, four hundred years ago we found out that our Earth isn't at a special place in the universe, and a century and a half ago we found out that there wasn't anything special about the origin of our species. Well, every time we lose something in specialness, science gets more general, because it can drop another footnote that says "except in the case of humans". But right now we still often think that we're special in our level of complexity or our computational ability. But one of the big statements that the principle of computational equivalence makes is that that isn't right: it says that there are lots of simple abstract systems, and systems in nature, that are exactly equivalent in terms of their computational sophistication. Well, one sometimes has that impression anyway, for example when one says something like "the weather has a mind of its own". But what the principle of computational equivalence now says is that yes, fluid turbulence in the atmosphere will correspond to as sophisticated a computation as anything we do. So we're not special that way. Well, one of the things this has a consequence for is the search for extraterrestrial intelligence. There's sort of been an idea that if we saw a signal that was produced by a sophisticated computation, then there'd be no choice but to conclude that it came from a sophisticated extraterrestrial intelligence, some sort of extraterrestrial civilization. Well, the principle of computational equivalence says no: it's actually easy to do sophisticated computations, and lots of things in nature do them; it doesn't take our whole biological development and civilization to manage it. And that's, by the way, sort of why it's so hard to distinguish random radio noise from some kind of intelligent compressed, encrypted signal. Well, okay, so where does this leave us? It's sort of interesting to think about how we interact with the ultimate limits of technology. I don't have any doubt that there'll be a time, potentially quite soon, when it will be possible to capture all the important features of human thinking in pieces of solid-state electronics, and no doubt things will get more and more efficient until everything is on an atomic scale, so that our processes of human thinking are just implemented by individual electrons whizzing around in lumps of something. Well, of course there are electrons whizzing around in all sorts of complicated patterns in ordinary pieces of rock too, and what the principle of computational equivalence tells us is that we can't expect the patterns made by the ones that represent human thinking to be ultimately any more sophisticated than those that occur naturally in something like a rock; that there isn't sort of any abstract essence of human intelligence that one can identify. But what of course is still special about us is all our details and all our history. In a sense it's the principle of computational equivalence that shows us that that history can really add up to something, because if everything were computationally reducible, then in a sense nothing could ever be achieved by history: we'd always be able to get to the same end point without all that effort. It's sort of interesting what the principle of computational equivalence ends up saying. It kind of encapsulates both the great strength and the great weakness of science, because on one hand it says that all the wonders of our universe can be captured by simple rules, yet it also says that there's ultimately no way to know the consequences of those rules, except in effect just to watch and see how they unfold. Well, it's remarkable to me what's grown from those little computer experiments I did in the early 1980s. It's been very exciting. You know, when I actually started writing my book in 1991 I thought that it would not take very long, but I just kept on discovering more and more things. I kept on looking at different areas, I kept on finding all this wonderful stuff. One thing, though, was really scary: it seemed like I just kept on finding things that disagreed with existing conventional wisdom that at least I'd always believed, and actually sort of having the confidence to see beyond that was one of the biggest personal challenges in what I did. People who've looked at the notes part of my book will know that I put a lot of effort into tracking down history, and one of the main reasons for that was that that was how I finally got convinced about things I'd figured out: by knowing enough about history to see why a field took the path it did, rather than the one I now think it could have done.
Well, back in the 1980s I used to write academic papers about what I was doing, and I think it's fair to say that they were well received; they certainly started some big literature trees and so on. But by the 1990s, particularly with Mathematica, I was beginning to discover new things very quickly, and soon I had material for many tens, perhaps hundreds, of papers. Most of all, I was beginning to build up a pretty big intellectual structure, and it was very clear that a bunch of papers scattered across all sorts of fields wouldn't be able to communicate that. So I decided I had to just keep working until I'd finished, and until I could present everything in a single coherent way. It took a lot of personal focus to do that, but even though I was the CEO of a very active company, I ended up working on my science every day and every night for more than ten years. I talked to experts when I needed to, particularly about history, but mostly I just kept to myself, and I kept on polishing everything until I could really explain it clearly, and gradually I filled in all the parts of the book I wanted to write, and finally, early last year, the writing was finished. Well, one problem was that the book I'd made didn't really fit very well any of the usual existing models in the publishing industry, so I ended up deciding it was easier just to publish it through our own company. Well, one of the big questions is always then how many copies to print. Well, we talked to some people, and needless to say the responses were all over the map: some were saying, you know, this is terrific stuff, print lots of them; others were saying, you know, you're crazy, nobody's going to be interested. Well, in the end I decided to print 50,000 copies, and on May 14th of last year they were done and the book was officially published. What happened was rather exciting: by the end of the day on May 14th, all the 50,000 copies we had printed were spoken for. Well, then things really started happening: a huge response in many communities and in the media, some of it very sensible, some of it pretty wild and woolly, and really all the classic signs of the early stages of a paradigm shift. Well, as a student of the history of science I've certainly read a lot about paradigm shifts, but it's rather scary to see one up close, to see the kind of actual dynamics and emotions of the whole thing that you never see when you just learn the science and the ideas years later. Of course it helps me personally that I've seen some of this before, because when Mathematica came out in 1988 I think it's fair to say that it led to something of a paradigm shift too, and as usual there was a certain amount of turbulence at the beginning, but gradually, almost invisibly, it worked itself out, and now, nearly 15 years later, it's like "hasn't it always been this way?". Of course there's still much more to come with Mathematica, particularly when some of the ideas in Mathematica about symbolic programming and so on really get absorbed. But the NKS paradigm shift is much, much bigger, and it will certainly take many years, many decades, to work its way through. But I think it's off to a really good start. I mean, of course there's the usual sort of Kuhnian stuff going on: people saying this must be just like thing X that I've seen before, or no, no, nothing like this can possibly be right, or this just isn't science the way I think of science.
Well, it took me twenty years to come to terms with what I discovered, and so far the book has been out for less than a year, but I'm really impressed at how fast lots of people are understanding things and really starting to work on things. I mean, I wrote the book very carefully, and it's nice to know that there are lots of people who read it cover to cover, multiple times. By the way, I might mention that if you're just dipping into the book, do be sure to read not only the beginning but also the end of the main text, and don't forget the notes: the whole second half of the book is historical and technical notes about all kinds of things, and if there's some particular area you know well, you might just start off by looking at the notes about it; that seems to be a good way for many specialized folks to get into the book. One thing that's great about the technical notes was that I was able to use Mathematica everywhere as the notation. That ended up working out really well: it let me say a lot in each line really clearly, and of course it also has the huge advantage that not only can you read it, you can also actually run it, and in fact you can download all the programs from the book, with test examples and so on, from the Wolfram Science website. By the way, in version 4.2 of Mathematica, which came out soon after the book, we added a very efficient built-in cellular automaton function, kind of as a way to commemorate the book coming out. Well, within the book a lot of the presentation is very graphical, and of course all the graphics were made by Mathematica programs, and it turns out that with some nifty new technology of ours we've been able to take those programs and make a standalone piece of software from them. It's called NKS Explorer, and it lets anyone reproduce the main pictures in the book and then go on to do their own experiments. Actually, it's kind of amazing how easily one can discover new stuff with NKS Explorer; it's a lot of fun, and there's a lot of science to be done with it. You know, my book is really just the beginning. I mean, it's been exciting over the past six months to see just how many people have gotten energized to work on the ideas in it. Actually, in many ways it's been quite overwhelming, and it's made me really want to figure out just what the best infrastructure is to make all this stuff really prosper. We're already organizing some things that have started appearing on the Wolfram Science website, and there's quite a bit of reference material related to the book, and there'll gradually be more. There'll also soon be a large collection of open problems, an average of about one for every page of the book, and there'll be a huge repository of information about specific simple programs, about sort of what's out there in the computational world. You know, one might have thought that once one had finished figuring out all the things in NKS, and then writing a book about them, that would be enough. And certainly, in all the extra time I've had from not being in the middle of writing NKS, I'm having a great time right now working on Mathematica 5 and Mathematica 6, and really digging into all the terrific things that I realized one can do with what we've built in Mathematica. And I'm also gradually recovering from the actual process of writing NKS: not being a recluse anymore, and going out and giving talks and so on, and getting ready to take the next steps to make the ideas in the book grow and prosper in the best possible way in the world, and building more tools to do what I like doing best of all, which is to come up with new ideas and figure out new things.
So, okay, well, what's going to happen with all the stuff I've talked about here today? I think it's going to be a long story, played out over at least many decades, but I think three big things will emerge. First, a new area of basic science, like a physics or chemistry or mathematics, but concerned with understanding what's out there in the computational world. Second, a whole bunch of applications, to science, to technology and to other things: lots of new raw material for making models of all sorts of things, including maybe our whole universe; also lots of new directions for technology, because now we have all these new mechanisms, not just gears and wheels but things like rule 30, that give us access to many more of the things that nature can do, and they also give us new ways, for example, to approach making algorithms or doing nanotechnology. But there's also kind of a conceptual direction: understanding more about the fundamental character of science and mathematics, and about the place we have in our universe, and in a sense giving a new framework for thinking about things in general, and a new foundation for a basic thread in education, sort of an alternative to mathematics. Well, there are just so many possibilities, so many things to be discovered, so much kind of low-hanging fruit to be picked. I wish I'd been able to tell you more here today; I've really only been able to scratch the surface, but I hope I've been able to communicate at least a little of what's in that big book of mine, and what at least I've been so excited about all these years. Thank you very much. You see all kinds of interesting patterns in your cellular automata; do you see any patterns that correspond to activated events? What, activated events? Like in chemistry, for example, the activator, you know, like when you jump over a barrier, for example a rare event like that. So what one might say is that in this rule 110 cellular automaton, for example, it's a completely deterministic thing, but occasionally one might say that, if one looks over here, one of these kinds of structures, what it does when it first collides with something or some such other thing, is a rare event that happens only under very particular circumstances. I mean, another example of that might be, let's take another example here, so one can ask the question, starting from lots of different initial conditions... Exponential kinetics related to [unclear], or exponential kinetics related to crossing barriers and things, activated events, that would be... I mean, yes, you can set up cellular automaton kinds of things which do that, because they do it for sort of the same reasons that you can have things happen in optics and places like that, because they effectively obey the same differential equations at an aggregate level; but that isn't particularly interesting. I mean, what's more interesting is kind of whether you can, for sort of more combinatorial reasons, see rare events happen. I mean, let me give you an example here. So this is a particular cellular automaton rule, and this is showing what happens for lots of different initial conditions: usually it just dies out, but occasionally it leads to some particular persistent structures, and here are all the persistent structures that you can get. And what you see is that, let's say you're trying to get a structure which, let's say you're trying to get something which propagates across the system rapidly.
Well, that doesn't happen very often; it only happens in this very rare case, and if, for example, you look at all possible initial conditions with sort of randomly chosen ones and zeros, the probability that you get something that propagates across the screen like this will be in a sense exponentially small, because it occurs only for a particular configuration of ones and zeros that's quite long, and that has probability two to the minus that length of occurring. So that's the kind of thing, and actually it's pretty weird, because if you look at different kinds of systems, like for example here's another one of these systems, where these are some of the kinds of persistent structures that occur, you might conclude from looking at this that, well, if we look at these systems just starting off from random initial conditions, you might say, okay, there are these persistent structures that arise in particular cases, but there'll always be things that just look roughly like this. Okay, well, it turns out that if you run that system for lots and lots of different initial conditions, you eventually find an initial condition where, no, it doesn't look anything like that: instead the system actually produces this funny kind of single block of stuff as its output. So in other words it's a very rare thing: the probability that you get one of these from random initial conditions is quite small, but if you have a long enough initial condition, at least somewhere you'll get one of these things, and the system will be sort of taken over by that thing. It's kind of an interesting thing, because if you ask, sort of as one asks in statistical physics or something, for limits of what the system does at large times, what you realize is that the presence of these kind of unexpected things showing up prevents one from ever having an ordinary kind of thermodynamic limit for one of these kinds of systems.
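A rough sketch of the kind of survey being described, not code from the talk: run a few elementary cellular automaton rules from many random finite initial conditions and tally how many of those initial conditions still have any black cells left after a fixed number of steps. The rules, width, trial count and step count below are arbitrary illustrative choices.

```python
# Illustrative sketch: survey a few elementary CA rules by running each from
# many random finite initial conditions and counting how many initial
# conditions still have any black cells after a fixed number of steps.

import random

def step(cells, rule):
    # same elementary-rule step as in the earlier sketches
    padded = [0, 0] + cells + [0, 0]
    return [
        (rule >> (padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1])) & 1
        for i in range(1, len(padded) - 1)
    ]

def still_active(rule, init, steps):
    cells = list(init)
    for _ in range(steps):
        cells = step(cells, rule)
        if not any(cells):
            return False            # the pattern has died out completely
    return True                     # something has persisted for `steps` steps

if __name__ == "__main__":
    random.seed(0)
    width, trials, steps = 20, 200, 100
    for rule in (110, 126, 160, 184):
        survivors = sum(
            still_active(rule, [random.randint(0, 1) for _ in range(width)], steps)
            for _ in range(trials)
        )
        print(f"rule {rule:3d}: {survivors}/{trials} initial conditions still active after {steps} steps")
```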
You see, what I'm getting at is, in order to make a connection with life, for example, you have to be able to see metastable situations in your simulation, right? You know, we're in a metastable situation; how did you simulate that? Well, so, okay, let me try and say one more thing about this. There's a question here, let me show you what happens, and this has to do with thermodynamics. There's a basic question, which is, if the underlying laws for the universe are reversible, how come the things that we see in everyday life are so often irreversible? What it means for the laws to be reversible is that one can go backwards as well as forwards from any given state in the universe. Yet we know from many kinds of things, if we take an object and we, you know, drop it on the floor and it breaks, it's very hard for us to go backwards and reassemble that object; yet at the level of individual molecules bouncing around, obeying, let's say, laws of mechanics, it's perfectly possible for their motions to be reversed, and for what they do to go backwards just as well as they go forwards. Well, that kind of phenomenon is pretty easy to capture in one of the kinds of systems I've talked about. This is an example of a cellular automaton that is reversible, in the sense that it has a rule that allows you, from any given condition here, both to figure out uniquely how to go forwards and to figure out uniquely how to go backwards, and so this is a cellular automaton which microscopically is precisely reversible. What's interesting about it is that if you look kind of on a large scale, what you see is something that, while in principle reversible, is in practice very hard to reverse. Let me show you an example here. This is an example of running a similar cellular automaton, starting off from [unclear] down here: there was sort of a simple state that was produced here, and one can uniquely go backwards from that simple state, and one can uniquely go forwards from that simple state. But once one's got down here, even though in principle one can uniquely go backwards, in practice it's sort of a hard problem of cryptanalysis to figure out exactly how one should get back to that initial state. The thing that I'll say, and then I'll stop addressing this particular issue, is that one can ask, for different cellular automata, what happens with respect to this kind of randomization that we saw in that case. Now, sometimes the behavior is simple enough that you don't get any randomization: you just get the thing sort of periodically repeating, doing the same thing. Often you get sort of typical second law of thermodynamics behavior: even though you start from something simple, you quickly get apparent randomness, and the thing sort of degenerates, sort of the motion turns into heat, and you get all this kind of apparently irreversible randomness. Okay, one thing that one can do when one starts looking at these simple programs is to get a little bit more explicit about how this kind of phenomenon of randomization works, and it turns out that, curiously enough, there are some cases in which you don't get something that kind of just remains fixed or becomes periodic, nor do you get something that shows the kind of standard randomization of the second law of thermodynamics. Instead you get a funny kind of thing that seems to have essentially an infinite length of transient, and that contains lots of little pieces that in effect act like metastable states. The usual interpretation of the sort of law of entropy increase doesn't seem to apply to this particular system; it seems like this system sort of continually generates little metastable pieces that last forever, and it may be that something like this is a reasonable model of some kinds of things that happen in, for example, biological systems, and the way that they kind of, sort of, at least for a very long time, seem to evade the second law of thermodynamics. So that's a little bit on that, at least.
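One standard way to get a microscopically reversible cellular automaton is the second-order construction used for the rules the book writes as 30R, 37R and so on: the new value of a cell is the ordinary elementary-rule value combined by XOR with the cell's value two steps back, so that two successive rows determine the evolution uniquely both forwards and backwards. The transcript doesn't identify the exact rule shown on the slide, so the Python sketch below just uses an arbitrary rule number to demonstrate the construction.

```python
# Minimal sketch of a second-order (reversible) cellular automaton. The rule
# number, width and step count are arbitrary illustrative choices.

def elementary(cells, rule):
    """One step of an ordinary elementary CA on a fixed-width row (zero boundary)."""
    padded = [0] + cells + [0]
    return [
        (rule >> (padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1])) & 1
        for i in range(1, len(padded) - 1)
    ]

def step_reversible(prev, curr, rule):
    """Second-order step: new cell = elementary-rule value XOR the value two steps back."""
    return curr, [a ^ b for a, b in zip(elementary(curr, rule), prev)]

if __name__ == "__main__":
    import random
    random.seed(1)
    width, rule, steps = 40, 37, 60
    rows = [[0] * width, [random.randint(0, 1) for _ in range(width)]]
    for _ in range(steps):
        rows.append(step_reversible(rows[-2], rows[-1], rule)[1])
    # Running the very same step on the last two rows, in swapped order,
    # walks the evolution backwards: microscopic reversibility in action.
    p, c = rows[-1], rows[-2]
    for _ in range(steps):
        p, c = step_reversible(p, c, rule)
    assert (p, c) == (rows[1], rows[0])
    print("evolved", steps, "steps forward and recovered the two initial rows exactly")
```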
If you had lots of examples of complex patterns generated by programs, say different kinds of leaf patterns for example, is there any way to explore the space of simple programs that would have given rise to the leaf pattern, the complex pattern? Is there any guidance that you can give in that respect? Right, so the question is, how do you find which program it is: given a phenomenon, how can you find a simple program that will reproduce that phenomenon? I mean, traditional sort of statistics and model fitting and so on is very much oriented towards fitting parameters, fitting functions to behavior; what we're asking for here is fitting programs to behavior. Well, unfortunately there's sort of a piece of bad news about that, which is that it's fundamentally hard, because even given a program it's very hard to know what it will do, and so, given that fact, going backwards from the phenomenon to the program is sort of inevitably very difficult. Now, you know, in a sense it's not realistic to have a systematic machine that takes phenomena, grinds them up, and finds simple programs, and it's interesting to see the various things that we do in perception and analysis, whether it's for visual perception, whether it's for data compression, whether it's for cryptanalysis, all these kinds of things: we can kind of ask how far do those things get in taking a phenomenon and kind of finding things from it, and the answer is they don't get very far. They do well with repetition, whether it's Fourier transforms, frequency spectra, things like that; some things work with nesting, for example Lempel-Ziv compression works by finding nested patterns and things, and some forms of cryptanalysis do that too; but breaking down these other things, that's not something that those methods can do, and in fact I claim that it's sort of fundamentally difficult. I mean, you can even look at mathematics, for example: you can ask the question of what kinds of things sort of traditional mathematics can do. And so, for example, if you have some kind of periodic pattern, it's very easy to reproduce that with traditional mathematics; I mean, the color of a cell at position x, y is, you know, whatever it is, x plus 2y mod whatever it is in this particular picture: it's a simple formula. For something like a nested pattern like this, it turns out there's a simple procedure that involves the digit sequences of the coordinates. Or, in fact, another thing you can do for these nested patterns: it turns out that at least simple nested patterns, that very simple nested pattern at the top, is binomial coefficients mod 2; even the nested pattern at the bottom, which isn't much more complicated to us visually, turns out to be given by Gegenbauer polynomials mod 2, and so even for nested patterns it's not very clear mathematically. So there's no mathematical analysis technique that will give you the kinds of generalized things that you need for that, and in fact you can look at other kinds of things, you could look for example at representing these things in terms of logic functions, and you have the same kind of problem. So I think there's sort of a general result that you can't sort of systematically go from the phenomenon to find a simple program that fits it. As a practical matter, what can you do? Well, for instance, the first thing you do is get some intuition about what simple programs can do, so that you can have an idea whether searching a space of simple programs, exhaustively searching through a trillion programs or something, might actually find one that's relevant. The second thing is that, as a practical matter, what I've been interested in doing is making a kind of giant atlas of what simple programs actually do, kind of what's out there in the computational world, so that one can kind of pick up things from that atlas, to kind of know whether the phenomenon that one's looking at is actually something that's relevant. And it's kind of an analog of something like an organic chemistry database, where instead of chemical compounds one has different simple programs, and one is sort of asking what are all the properties of all these simple programs, and then kind of hoping, or expecting, that when one has a particular phenomenon that one wants, or a particular engineering
problem one's trying to solve, one can kind of go to this atlas and potentially find something that's relevant.
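As a toy version of the atlas idea, and certainly not the real repository being described, the following Python sketch enumerates every elementary cellular automaton rule that leaves a white background white (so a finite simulation is faithful), runs each from a single black cell, and records one crude property; a real atlas would of course cover far more kinds of programs and far more properties.

```python
# Toy "atlas": enumerate elementary CA rules, run each from a single black
# cell, and record whether the pattern has died out after 50 steps and, if
# not, how many black cells remain. Only even rule numbers are included,
# because those leave a white background white, so the finite simulation is
# faithful to the infinite cellular automaton.

def step(cells, rule):
    padded = [0, 0] + cells + [0, 0]
    return [
        (rule >> (padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1])) & 1
        for i in range(1, len(padded) - 1)
    ]

def black_cells_after(rule, steps=50):
    row = [1]
    for _ in range(steps):
        row = step(row, rule)
    return sum(row)

if __name__ == "__main__":
    survey = {rule: black_cells_after(rule) for rule in range(0, 256, 2)}
    dead = [rule for rule, n in survey.items() if n == 0]
    print(f"{len(dead)} of {len(survey)} rules surveyed have no black cells left after 50 steps")
    for rule in (30, 90, 110, 250):
        print(f"rule {rule:3d}: {survey[rule]} black cells on step 50")
```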
Info
Channel: University of California Television (UCTV)
Views: 357,703
Rating: 4.8476629 out of 5
Keywords: science, Stephen, Wolfram, computer, universe
Id: _eC14GonZnU
Length: 86min 42sec (5202 seconds)
Published: Thu Apr 24 2008