Robert C. Martin - Functional Programming; What? Why? When?

Captions
I'm told I should be live; that's good news to me. Why is there air? Why do we have air? Where did it come from? What's it made of? Who knows what it's made of? Nitrogen and oxygen, almost entirely nitrogen and oxygen. It's about three-quarters nitrogen, one-quarter oxygen; well, the actual oxygen percentage is about 21%. There's a little bit of carbon dioxide, about 300 parts per million and growing, and tiny bits of other gases, but for the most part it's nitrogen and oxygen.

Where did the nitrogen come from? How do we have free nitrogen in our atmosphere? It probably came from ammonia, which is fairly common in the universe at large, especially in molecular clouds, and that probably got ripped apart by sunlight. Where did the oxygen come from? Plants. Green plants do this: they emit oxygen, and it's a good thing they do, because we use that oxygen. The plants use sunlight to gather energy, and the way they store that energy is they tear the oxygen off of carbon and put all that energy into the carbon atom. They mix it with a few hydrogens to turn it into sugar, and then they take the sugars and stack them end to end to turn it into wood. That's why wood burns, by the way: wood burns with solar energy.

The oxygen ratio in our atmosphere is about 21%, but it was not always so. Before there were plants there was no free oxygen, and in fact free oxygen is something you don't expect to find in a planetary atmosphere, because oxygen doesn't want to be free. Oxygen combines with things. For example, it combines with iron. If there were any iron anywhere on the surface of the planet, or dissolved into the oceans, the oxygen would disappear overnight; it would rust that iron away. And in fact that's what happened for the first three billion years of the history of life on Earth: every oxygen atom emitted by a plant got grabbed by an iron atom and fell to the bottom of the sea, because in those days there was a lot of iron dissolved into the sea. This rust, iron oxide, fell like rain down to the sea floor; nowadays we call it iron ore. It took three billion years to get rid of all the iron in the oceans, so it was only about a billion years ago that oxygen began to accumulate in our atmosphere. And it accumulated and accumulated; at one point, about 250 million years ago, the atmosphere was almost 50 percent free oxygen. In that kind of an atmosphere you could sneeze and start a forest fire. The animals grew to enormous size, and I'm not talking about the dinosaurs here, I'm talking about dragonflies: there were dragonflies with six-foot wingspans, because there was so much oxygen in the air they had plenty of free energy. Eventually that oxygen level tapered down a little bit; nowadays we have a more rational amount of oxygen in the atmosphere, although frankly living in a sea of oxygen is fraught with danger, which is why we all have smoke alarms in our houses.

But of course this is not what we're supposed to be talking about. The name of the talk I'm going to do today is Functional Programming: What? Where? When? Why? How? Or, the Failure of State. How many of you are functional programmers, meaning that you program in some nominally functional language? Who's doing F#? That's a functional language. Who's doing Scala? A few of you. Who's doing some kind of Lispy language? Some Lispy languages over here. Who's doing a real functional language like Haskell? Nobody. Okay, how about ML? Nobody. Okay, so fine, I didn't name them all; that's all right.
We're going to be talking about functional programming, not a functional programming language. At the end of this talk I'll show you a little bit of Clojure, which is a Lispy kind of language. This guy's name is Rich Hickey; who's heard of Rich Hickey? All right. This is the author of the Clojure language. He is a brilliant speaker; find some of his talks on YouTube and you will be amazed at what a good speaker he is and the interesting insights he can give you. One of his talks is about state, identity, and value. Briefly: one is a value; I don't think that's lost on anybody. The next line says that x is an identity, an identifier, and in this case that identifier identifies the value one. The next line, however, is problematic, because it suddenly says that the identifier will identify a value, but you've got no idea what that value is. The identifier has a state, not a value, and that state can change.

The subject of this talk is that state has failed. But how can this fail? A statement like this is so common in our programs; how can we call it a failure? Is that program stateless? Well, from the point of view of the program, it's stateless. It does have an effect, it seems to print something on the screen, but we can ignore that; from the point of view of the internals of this program, there's no state being changed anywhere. So here's an example of a program which does something nominally useful and does not change any state.

Here's another program, and probably all of you have written this program at one time in your life: the squares-of-integers program. It prints out the squares of the first 20 integers, and what we notice here is a variable that changes state. Now, this looks perfectly normal. What's a for loop for, if you can't change state? A for loop changes the state of variables. This works just fine, but it is stateful, not stateless, and we're going to talk about why that can be dangerous. Now, can this program be written so that it is stateless? It can. You could write it that way; it's not particularly useful, but no variable is changing state in it. There's a better way, of course: you could write it as a recursive algorithm. printSquares calls itself; if n is greater than zero it continues to call itself, and for every iteration it prints out the square of that particular value of n. No variable changes state here. New variables are introduced; the variable n gets re-created and re-created and re-created, but at no point does any variable change its state. This is a functional program, sort of, written in a non-functional language, but it is stateless.

By the way, how many of you are Java programmers? Some of you. See, I've got these big lights in my eyes so I can't see you, so get your hands way up in the air, Java programmers. Yeah, there are some of you in here. How many of you are .NET programmers? Hmm, seems to be a slight bias in this audience. Java programmers, does your execution platform support recursion well? .NET programmers, does your platform support recursion well? Does it, for example, allow for tail-call optimization? It's an interesting question. The Java runtime does not; the .NET runtime does, in some circumstances. A program like this, if you were to change this number to, say, two million, to print out the first two million squares, might cause the stack to blow. In fact, this particular function would cause the stack to blow, because it's not tail-call optimized. So the stack will blow here, whereas the original one, the one that was stateful, wouldn't blow the stack on anything.
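A minimal Java sketch of the two versions the speaker describes (the talk shows its own listings; printSquares is named in the talk, printSquaresLoop is my label for the stateful one):

    public class Squares {
        // Stateful version: the loop variable i changes state on every pass.
        static void printSquaresLoop(int n) {
            for (int i = 1; i <= n; i++) {
                System.out.println(i * i);
            }
        }

        // Quasi-functional version: no variable is ever reassigned; n is
        // re-created on each recursive call. It is not tail-call optimized
        // (the print happens after the recursive call), so a call like
        // printSquares(2_000_000) would blow the stack on the JVM.
        static void printSquares(int n) {
            if (n > 0) {
                printSquares(n - 1);
                System.out.println(n * n);
            }
        }

        public static void main(String[] args) {
            printSquaresLoop(20);
            printSquares(20);
        }
    }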
So there's a certain memory usage here; memory is getting used in an inefficient way if you cannot tail-call optimize. By the way, what the heck is it with these platforms, .NET and Java? Why would tail-call optimization even be an issue? The year is 2014; this is an optimization that was invented in the 1950s. What's up with our platform people? I think they were kids out of school. Who's read this book? Ah, some of you have read this book. Wonderful book. It's free, by the way; you can download it off the web, they give it away now, which I think is remarkable. Along with it they give away all the video lectures of these two guys; you can watch them teach the computer science course at MIT in the 1980s as they deliver the content of this book.

The book is fascinating. I picked it up and read it maybe ten years ago, and I noticed something about it right away: it makes no apologies. It moves at light speed. You open up the book, you start turning the pages, and they hit concept after concept after concept. They don't diddle around, they don't over-explain; it just goes boom, boom, boom, very fast. And as I was reading it I was just flying through the pages; it was an exciting book to read, if you can think of a computer science book as being exciting, but I was excited by this book, turning the pages, reading it: oh, this is cool. The language inside was Scheme. They don't really explain Scheme, but it doesn't matter, because Scheme has almost no syntax, so you can easily infer what these programs do. Page after page after page, they're talking about basic algorithms, queueing structures, stacking structures, symbol tables, message passing, all kinds of stuff, tons and tons of code.

You get to page 249, I believe it is, and they stop, and they apologize for what's about to come. They say: we're sorry, now we're going to have to corrupt our currently very clean view of what a computer is. And they go on, paragraph after paragraph, apologizing for what's about to come, and then they introduce an assignment statement. I was thunderstruck. I stopped reading and I stared at this thing, and it made the claim that no assignment statement had been used in any of the previous code in the 249 pages I had read. I had to go back and read that code to look for an assignment statement, and nowhere in there was there one. That really fascinated me. I thought, wow, they did that whole first 250 pages with no assignment. Typically in a computer book the first thing you learn is an assignment statement; they delayed it for 249 pages, and they apologized for it. I'll tell you why they apologized for it in a minute.

Here's how their model of computing worked before they introduced an assignment statement, and I will use the squares of integers. You see this function call here? A function call in a functional language can be replaced by its implementation. So if I were to take this here, simply stick it there, and put the values in, it would still be the same program. Let me show that to you. There we go: I have now taken that first call to printSquares and just put the values in. But of course I have to do it again, and again; I'm simply substituting the function calls for their implementations. If you think about this carefully, you'll realize that it turns into the very silly implementation I had put up there before, with nothing but the 20 lines that printed the squares of integers. It turns into almost the same thing, except with these cascading ifs instead of the for loop.
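A sketch of that substitution, reusing the printSquares from the sketch above; because the function never changes any state, each call can be replaced by its body without changing the program:

    public class Substitution {
        static void printSquares(int n) {  // same function as above
            if (n > 0) {
                printSquares(n - 1);
                System.out.println(n * n);
            }
        }

        public static void main(String[] args) {
            // printSquares(2)
            // ...replace the call with its implementation:
            //   if (2 > 0) { printSquares(1); System.out.println(2 * 2); }
            // ...replace the inner call too, and keep going until nothing
            // is left but cascading ifs around print statements:
            if (2 > 0) {
                if (1 > 0) {
                    if (0 > 0) { /* base case: nothing */ }
                    System.out.println(1 * 1);
                }
                System.out.println(2 * 2);
            }
        }
    }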
This was the model of computing that the book I recommended was using for all of its first 249 pages: you can simply replace a function call with its implementation. But when you introduce an assignment statement, that breaks, and this was the apology they made in the book. Once you introduce assignment, you can no longer replace a function call with its implementation. Why? Because the state of the system may have changed. An assignment statement introduces the concept of time, which is why I show time here in such a warped way. Time becomes important whenever you have an assignment statement: an assignment statement separates the code above it from the code below it, in time, because the state of the system has changed.

In a functional program, that statement will always be true, no matter what time it is. The value of f(x) will remain the value of f(x) no matter what the heck the time is; no external force can change the value of f(x). Let's put that into JUnit, or NUnit for those of you who are crippled in that way. By the way, who's using NUnit? Who's using that other thing, MSTest? Stop doing that; it's slow, it's complicated. Use NUnit, or there's another one now, xUnit, I think written by the same guy who wrote NUnit. Anyway, look at that statement there. Should that test pass? If f is functional, that statement will always pass; but if f contains an assignment statement that somehow changes the state of the system, that statement could fail. Imagine staring at that in a test and seeing the test fail. What conclusion would you have to come to? You'd have to conclude that f has a side effect.

What's a side effect? A side effect is the result of an assignment statement. All side effects are the result of assignment statements. If there are no assignment statements, there cannot be side effects; only assignment statements change the state of variables. If there's no assignment, no variable can change its state, and so there cannot be side effects.
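A minimal JUnit 4 sketch of the test described above; the body of f here is my own stand-in, since any pure function will do:

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class ReferentialTransparencyTest {
        // Stand-in for "any function f": pure, its result depends only on x.
        static int f(int x) {
            return x * x + 1;
        }

        @Test
        public void fAlwaysReturnsTheSameValueForTheSameArgument() {
            // If f is functional, this can never fail, no matter when it runs.
            // If f hid an assignment to shared state, the two calls could differ.
            assertEquals(f(5), f(5));
        }
    }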
When you have a function that gives you a side effect, you need another function to undo the side effect. Consider the function open: it opens a file, and you need another function, close, to undo the side effect. Consider the old C function malloc: it creates a side effect, it allocates memory, and you need another function, free, to undo that side effect. If you seize a semaphore, there's another function to release it. If you grab a graphics context, there's another function to release it. Functions with side effects are like the Sith: always two there are, and they are separated in time; the one must always come before the other. malloc must always precede free; open must always precede close; close, we hope, follows open. What happens when you don't do this correctly? Leaks, one of the grossest symptoms. Has anybody ever had a memory leak? You were using assignment statements, you were using functions that had side effects, and so you had a memory leak. What have we done in our languages to protect us from memory leaks? Garbage collection: the greatest hack ever imposed upon any programmer, the final admission that we are terrible at dealing with side effects. We've put it into our languages now because we're so bad at dealing with side effects; our languages have to clean up after us because we are incapable of cleaning up after ourselves. That's what side effects do.

Unfortunately, we don't have garbage collection for semaphores. We don't have garbage collection for files left open; maybe some of us do, many of us don't. We don't have garbage collection for all the funny functions out there that have side effects. So we still have the problem; we've only introduced this horrible hack of garbage collection in the one case where we can get some control over it.

So let me show you an implementation of the bowling game. How many of you bowl ten-pin bowling? Well, you don't need to know how to score bowling; it doesn't matter. I'm just going to show you these two implementations and we'll look at them: one is sort of functional, and one is definitely not. We'll look at the functional one first. This is functional, sort of; it's functional if you blur your eyes enough. We begin with a function called roll. This roll function allows us to capture the number of pins knocked down by a ball; you would call it every time you rolled a ball at the pins, and you would record into a list the pins you knocked down. Now, you might think, well, this is some kind of state change. Not exactly: each element of this list is being initialized; no value in the list is ever changed. There's a variable here called currentRoll that's definitely getting altered; however, that alteration only exists within the roll context, so once I have called roll for the entire game, I don't need to worry about that variable anymore. So this is not perfectly functional, but I can blur my eyes, step back a few thousand feet, and say: well, it's functional in the sense that once you're done calling roll, you don't care about this variable anymore; the list has been built. Then I can process the list by walking through it, looking at the rolls, deciding whether each roll is a strike or a spare or neither, and manipulating some kind of pointer. Once again, this is not perfectly functional, because I've got this variable here that gets manipulated; however, once score returns, all these variables are destroyed. So from the point of view of the call to score and its return, there are no side effects; internally there are side effects, but at a very limited scope. At a very limited scope this is not functional; at a wider scope it is.

Or I could do it this way. I've got this enum here; this is the stateful representation. The enum records the state of the system as I roll balls, and here's the roll function. This roll function attempts to calculate the score in real time, and in order to do that it's got to store a state variable, and that state variable alters the way the program works from roll to roll to roll. A call to roll will do something different depending on the state it was left in by the last call to roll. This one is not functional; this one is highly stateful. If I were to put the call to roll here in the first example, it would pass; if I were to put the call to roll there in the second example, I doubt it would. If I were to put the call to score here, it would probably pass; in the second one, well, it would pass too, because it didn't do anything. Which of these two is simpler? That's the stateful version, with the finite state machine in it; this is the quasi-functional version. Which of those two is simpler? It turns out the functional version is much simpler.
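A reconstruction in Java of the quasi-functional version the speaker describes (the names rolls, currentRoll, roll, and score match his description; the strike and spare arithmetic is standard ten-pin scoring): roll only records pins into an array, and score walks the completed array after the fact.

    public class Game {
        private final int[] rolls = new int[21];  // max 21 rolls in a game
        private int currentRoll = 0; // only mutated while rolls are recorded

        public void roll(int pins) {
            rolls[currentRoll++] = pins; // each slot initialized once, never changed
        }

        // Walks the finished list of rolls; frameIndex is the "pointer" the
        // speaker mentions, but it dies as soon as score() returns.
        public int score() {
            int score = 0;
            int frameIndex = 0;
            for (int frame = 0; frame < 10; frame++) {
                if (rolls[frameIndex] == 10) {                              // strike
                    score += 10 + rolls[frameIndex + 1] + rolls[frameIndex + 2];
                    frameIndex += 1;
                } else if (rolls[frameIndex] + rolls[frameIndex + 1] == 10) { // spare
                    score += 10 + rolls[frameIndex + 2];
                    frameIndex += 2;
                } else {
                    score += rolls[frameIndex] + rolls[frameIndex + 1];
                    frameIndex += 2;
                }
            }
            return score;
        }
    }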
Which one is faster? Probably the stateful one, because it's doing less work: it's saving state, but it doesn't have to squirrel away all those rolls. I'm not sure, I haven't measured them, and it's probably not a huge difference. Which one is more thread-safe? The functional one is much more thread-safe; there are hardly any variables to get confused in there. The non-functional one has that state variable, and if you had multiple threads calling roll, it would get pretty interesting. Which one uses more memory? The functional one does: it's got to save all those rolls up in a list before it can process them. And that's one of the issues.

What do we know about memory? It's cheap. How cheap is memory? I've got a thumb drive here; what is it, I don't know, probably five gigabytes? No, it wouldn't be five, would it. Eight gigabytes, maybe sixteen; I really don't know. I don't use it; I just keep it in my pocket, because it's fun to have eight gigabytes in your pocket. Eight gigabytes in my pocket: how many bits is that? 64 billion bits in my pocket. How did that happen? Because memory didn't always used to be cheap. We've got lots of it; we have virtually infinite amounts of memory nowadays. This machine here has half a terabyte of solid-state memory. When's the last time you saw rotating disks? Does anybody in the room still have a rotating disk in their laptop? Of course all of you have laptops. Ah, there are some rotating disks over there; sorry, a couple. But if I'd asked that question a year ago, about 10 percent of you would have put your hands up. If I'd asked it two years ago, half of you would have. If I'd asked it five years ago, everybody would have had their hand up, except for one person, and we would have all hated him. Memory has gotten cheap, absurdly cheap. We are filthy rich with the stuff; we are wealthy beyond belief, because memory is pouring out of every orifice of our bodies. It's unbelievable how much memory we have, and it's dirt cheap: hundreds of dollars for a terabyte. That's absurd. It didn't used to be that way.

Who knows what that is? That's memory: core memory, core memory of the 1960s. Every one of those little donuts you see there is made out of iron, and every one of them had to be put into that network of wires by hand. There was no machine that could make core memory; it was woven on a loom by human beings, bit by bit by bit. It was frightfully expensive. I used to purchase this when I was a teenager: I would get army-surplus core memory, four hundred dollars, hundreds of dollars, for a thousand bits. I once purchased a solid-state memory rack of 512 bits; it cost me five hundred and twelve dollars, a dollar a bit. At that price, the 64 billion bits in my pocket would have been 64 billion dollars' worth of memory when I was a teenager.

We used to do bizarre things, like trying to figure out how to store bits on rotating memory surfaces. This is an old disk; look at that thing. It was 14 inches across and had, I don't know, a dozen platters. You wrote bits on the top and on the bottom of each platter, so the heads would slide in there and read and write on the top and the bottom; the heads had to move in and out to find the different tracks on the disk, and these things would spin at about 3600 RPM. That's a drum; look at how inefficient that is, and we would write on the surface of that drum. This is an old DECtape; we used to write on the surface of mylar tape impregnated with iron, magnetic tape. And that's an old CRT memory, which used the persistence of the phosphors to remember bits: if a phosphor point was glowing and you hit it with the electron beam, it would impede the beam, and you could detect that by the amount of current you put into the beam, so you could tell whether a point was still glowing. Absurd kinds of memory. Nowadays, of course, it's dirt cheap.
Functional programming was invented in 1957, before OO, before structured programming; Dijkstra had not yet written his paper about goto being considered harmful, and yet in 1957 we were already doing functional programming. Functional programming was the first of the three major paradigms to be invented and, oddly, the last to be adopted. Why? Because memory was too expensive to make it practical. But that's changed. We don't worry about memory anymore; memory is too cheap to worry about. We throw it away in megabyte lots; we think of a megabyte as infinitesimally small.

So should we change how we program, given that memory is dirt cheap? Well, probably we should. Functional programs are simpler; you can prove this to yourself by writing a few. By the way, it takes much longer to write a line of functional code than a line of non-functional code, but you wind up with far fewer lines of functional code, oddly enough, and the total time spent programming turns out smaller, because you don't have to worry about the state of a variable. So functional programs are easier to write, although it doesn't feel that way, because every line you have to think about much harder; and yet in the end the functional program is easier to write. It's easier to maintain. Everybody says this about everything, right, it's always easier to maintain, but it actually is. Why? Because there are no temporal couplings: no side effects, no worries about which function to call before any other function, or which function must be called after some other function. How many of you have debugged for weeks only to find that the problem was two functions called out of order, and you swapped the two and the system started to work, and you don't know why those two functions had to be called in that order; they just do, for some reason? This is not an uncommon debugging scenario. In a functional program, that disappears.

I said here that there are fewer concurrency issues. In a purely functional program there are no concurrency issues, because there are no variables. What is it that makes a program un-thread-safe? Side effects: two functions trying to create a side effect collide because of thread swapping, and they improperly modify the state. If there are no side effects, if there are no assignment statements, you can't have thread problems. Why did I say fewer? Because in most functional programs there is a portion of the program, a well-isolated portion, which actually does do some assignment, and in that portion you can get some concurrency issues; but in the vast majority of the code you can't. So we get far fewer concurrency problems if we're using functional programs. Has anybody debugged a race condition for a month and then given up and said, we'll just reboot the thing every once in a while? And think about this: you're in the middle of a debugging session, you've breakpointed your way deep down into the code, and then you ask yourself, what the hell is the state of the system? You never ask that in a functional program. The system has no state.
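A small Java sketch of that collision, mine rather than the speaker's: two threads share one mutable counter and quietly lose updates.

    public class RaceCondition {
        static int counter = 0; // shared mutable state: the root of the problem

        public static void main(String[] args) throws InterruptedException {
            Runnable increment = () -> {
                for (int i = 0; i < 100_000; i++) {
                    counter++; // read-modify-write is not atomic; threads collide here
                }
            };
            Thread t1 = new Thread(increment);
            Thread t2 = new Thread(increment);
            t1.start();
            t2.start();
            t1.join();
            t2.join();
            // Almost always prints less than 200000: updates were lost.
            // With no assignment statements there would be nothing to collide on.
            System.out.println(counter);
        }
    }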
What you're looking at here is Moore's law from 1970 to 2010. The number of transistors in a chip has been going up, and notice this is a log scale, at some doubling rate, which people usually say is about 18 months: every 18 months the number of transistors on a chip doubles. Here's the clock speed, that's this dark blue line, and look at what happened right about 2003: it went flat. Do you remember 2003? We got up to 3 gigahertz clock rates, and the yields were bad, the power was bad; we dropped down to about two and a half gigahertz, and it stayed there for 10 years. For the last 10 years we've been sitting at two and a half gigahertz, and it doesn't look like it's going to change. There's a possibility of some new materials that might make an incremental change in the clock rate, but not geometric growth; that growth is gone, we're not going to see it continue, it's folded over. But the number of transistors on the chip has not; the density has continued to grow. Now that's probably going to fall over pretty soon too, because we're down to about 20 atoms in a wire, so there's only so much further you can go; but for the moment, anyway, we continue to double this density number, and that has given the hardware engineers the ability to do more cores.

How many of you have 4 cores in your laptops? How many of you have more than 4? No? Don't fall for the hyper-threading thing; they'll tell you there are 8 cores on there. There are not 8 cores on there; there are 4, and they do this lie they call hyper-threading. Oh, he's got a true 8-core; okay, good. I recently bought a 12-core machine for my daughter; actually that was three chips with four cores each, but they still share nicely. Notice what's happening here: we're multiplying cores. Why would we multiply cores? Because we want to keep increasing throughput, dollars per cycle, at some rate like this, but we can't do it with clock rate anymore, so we do it with cores. And the hardware engineers have started making some very bizarre trade-offs. You know all that caching stuff they used to put in the chips, the L1 cache and the L2 cache and the L3 cache, and all that pipelining goop they used to do to squirrel away the instructions that were about to be executed, which they'd flush if you did a jump? They're ripping all that stuff out. They're going to make the processors slower; they're just going to put more processors in. So as we add more and more cores, the individual cores will slow down, but the throughput of the chip goes up, if you can take advantage of those cores.

How do you take advantage of those cores? How good are we at writing threaded code now? Multi-threaded code is code which operates one instruction at a time; the processor is still a linear processor. The operating system tells one process it can go, and the operating system is like a mother: it watches over the process as it runs, it makes sure the registers are loaded before it runs; when it tells the process to stop, it grabs all the registers and squirrels them away and puts the process away in a nice place, and then gets the next process out, unpacks its registers, and lets it run for a while. It takes nice care of the process. There is no mother when you've got multiple cores running, because now you have simultaneous execution, not concurrent execution. You've got four cores, so you have four instructions running simultaneously, and they're all hitting the bus, and they're all angry animals scrapping for that bus. They want that bus, they want their bytes; they say, give me a byte, here, take this byte, give me a byte, and there's no operating system to hold them off and make them behave nicely.
So we programmers, who have grown up with the nice operating system that lets us use our threads nicely, and we still can't do that well, are now faced with the jungle of the bus. And how many cores will we have to deal with? We have four now, in most of our chips; some chips will have more. If I come back here in two years, your laptops will have eight. If I come back in four years, your laptops will have sixteen. If I come back in ten years, your laptops may have 512 cores. How are you going to write programs, how are you going to write systems, that behave well with 1024 cores? How are you going to get the maximum out of your machine when you've got 16,384 cores? How are you going to do that? You may think, well, the operating system will handle that for me. I don't think so. I think the operating system folks are going to say: programmers, this is your problem. So we programmers, who have for the last 60 years lived in this fantasy world of one instruction at a time, are now facing the real world, and the real world is the world of competing cores on a single memory. We're going to have to deal with that somehow, and maybe one way to deal with it is to give up the assignment statement: walk away from the assignment statement and never use it again, except in very disciplined environments. Maybe all of us have gotten addicted to assignment, and we're going to have to break that addiction. If these two f's are executed on separate cores, it doesn't matter, so long as there's no state change. I can take my function, the same function, and execute it on multiple cores, and so long as there's no state change, I'll get the same results.
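A small Java sketch of that claim (mine, not from the talk): a pure function mapped across cores with no locks, because there is no state to change.

    import java.util.List;
    import java.util.stream.Collectors;
    import java.util.stream.IntStream;

    public class PureParallel {
        // Pure: the result depends only on n, so it is safe on any core.
        static long square(long n) {
            return n * n;
        }

        public static void main(String[] args) {
            // The runtime may run square on many cores simultaneously;
            // with no assignment to shared state, the answer is always the same.
            List<Long> squares = IntStream.rangeClosed(1, 20)
                    .parallel()
                    .mapToObj(i -> square(i))
                    .collect(Collectors.toList());
            System.out.println(squares);
        }
    }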
This is why these languages have suddenly become important. Anybody noticed that? Five years ago you didn't hear much about functional languages; why have they suddenly become important? It's because of this multi-core problem. Everybody's trying to figure out how to solve the coming problem, the freight train that's on the tracks ready to run us all over, and out of this has come a number of languages, some of them old. Erlang is becoming very popular now: a functional language, very interesting in the high-reliability market. It's possible to write very high-reliability systems in Erlang, because it's got a very good recovery mechanism, and it's a nice functional language. Who's studied Erlang? It would be worthwhile; there are a couple of good books on Erlang. Just read the books, get an idea, write a couple of lines of code, and you'll see what's going on in this language. There's another language derived from Erlang called Elixir, which makes Erlang look a little bit like Ruby. Who's the Ruby programmer here? One guy. One guy; wow, you guys are really convinced about .NET. Right now in the United States, a Ruby programmer can write a number on a piece of paper and find someone to pay him that number, because all the social networking companies are using Ruby on Rails, and they're all convinced they've got to have good Ruby programmers, so the market for Ruby programmers is going through the roof. That's a bubble; it's going to pop. I don't know when it'll pop, but right now, if you're a Ruby programmer in the U.S., you feel pretty good.

Who's doing a little F#? That's the .NET answer; hey, a reasonably functional language. I'm not horribly familiar with it, but I've looked at it a little bit: slightly hybrid, but you can do some functional code in it. Scala, on the Java side, is more of a hybrid language. What do I mean by a hybrid language? A hybrid language is a language that supports functional programming but allows you to do unbridled assignment, and if the language allows you to do undisciplined assignment, you can't really call it a functional language. I put Clojure down here in a special color, because Clojure is a language which is functional. It's essentially Lisp. Who knows Lisp? All right, some of you do. How many of you are afraid of all those parentheses? Yeah, okay. So here's the thing about the parentheses in Lisp. A function call in Java or .NET looks like f(x): you've got the name of the function, open parenthesis, argument, close parenthesis. In Lisp, what you do is take that open parenthesis and move it in front of the function name, (f x), and now you know Lisp. That's it. There are no extra parentheses; it's the same number of parentheses, just that funny little positional move, and it scares everybody to death. And then the convention of Lisp programmers is to stack all the closing parentheses at the end of the line, instead of putting them on separate lines like .NET and Java programmers do; but if you count them up, same number, no difference. Okay, that's the difference: just move that parenthesis.

I like Clojure because it runs on both the Java and the .NET stacks; it sits on top of the JVM or the CLR. It's a very nice little Lispy language, and there are some good conventions in it: it imposes strict discipline on assignment. It's possible to do assignment, but you cannot do an assignment in Clojure unless you, in effect, open a transaction. An assignment statement in Clojure is treated like a database operation: you have to open up something like a transaction that can retry, and then you can do your assignment, and it detects collisions in threading space, retries, and makes sure there are no threading problems.

That's what a Clojure program looks like, and it doesn't look that different from a Ruby program or a JavaScript program, except of course for that open parenthesis, which scares everybody to death. If you were to take that open parenthesis and just move it there, or maybe there, it would look a lot better from your point of view. But all I'm doing here is defining a function named accelerate-all, which takes an argument named os and maps the function accelerate over the list of objects. Pretty straightforward stuff. This drives people crazy here: yeah, that's a function call right there. It's the greater-than-or-equal operator, and then the two arguments, and everybody wants to move it into the middle, and they can't quite manipulate it in their brains to move it into the middle; it takes a little practice. Here's how you add: yeah, that's a function, the plus function. We don't have operators in these languages, we just have functions, but we can use special characters for the function names. So that's the plus function adding those twos, and the divide function taking that and dividing it by that. Not real hard to figure out.
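A reconstruction, in Clojure itself, of the snippets the speaker describes on his slides; the body of accelerate is my own stand-in, since the talk's version is domain code that isn't shown in the captions:

    ; f(x) in Java becomes (f x) in Lisp: the opening parenthesis moves left.

    (defn accelerate [o]          ; stand-in body for the talk's accelerate
      (update o :speed inc))

    (defn accelerate-all [os]     ; the function the speaker describes:
      (map accelerate os))        ; map accelerate over the list of objects os

    (accelerate-all [{:speed 1} {:speed 2}])  ; => ({:speed 2} {:speed 3})

    (>= 4 0)   ; => true; >= is just a function: name first, then arguments
    (+ 2 2)    ; => 4; the plus function adding those twos
    (/ 10 5)   ; => 2; the divide function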
What about OO? OO is procedure plus state, right? And state is evil in the functional world. So does that mean that when you are writing functional code you can't be doing OO? The answer is no: you can be doing OO in a functional program, you just can't manipulate state. Remember that OO is exposed procedure but hidden state. We were supposed to be hiding all of our state in an OO program; all the variables are supposed to be private, and you're not supposed to know those variables exist. So it's possible to write functional programs using an OO style, and not only are you hiding all the variables, you're also not changing any of them: all the objects become immutable.

Now, you may think to yourself: immutable means I've got to make a copy every time I change an object, because I can't modify the object's state. It turns out these languages are actually very clever. The implementers of the languages understand that linked lists can have multiple heads, and you can make a linked list look like two different lists by pointing to two different heads. So you can "modify" a linked list without making a copy, just by creating a different head, and they use this technique to make it possible to modify objects without needing to make a copy. The old object is still there, but it gets linked to the new version of the object by some very clever linked-list manipulations, which keeps the speed very high. In Clojure this is called persistent data structures: when you modify a data structure, you do not destroy the old version, you just keep a new version. That should sound familiar to you; it's your source code control system. You modify your source code, but you don't destroy the old version, and your source code control system has very clever ways of keeping you linked to the old source code; if you want to, you can move back in time. They don't make copies of all that old source code; they very cleverly store the differences in just the right way, and they maintain the pointers so that you can reconstruct the source code at any time. That's what these persistent data structures do.

And remember that OO is a lot more than just that. OO is dependency management: OO is about managing the dependencies inside an application so that high-level concepts are independent and low-level concepts depend on high-level concepts. This is called dependency inversion, and that dependency inversion can still be done in functional programming. In an OO program we use polymorphism to do it; in a functional program we can still use polymorphism. There's no reason you can't have a function which, when you call it, dispatches to different sub-functions based on some kind of type identification. All of that can still be done, and Clojure as a language allows it, as do the other functional languages. Functional languages can still have polymorphic interfaces; they all still need dependency management; none of that changes. They all still need the principles of object-oriented design, which are the principles of dependency management. But they need something else: discipline imposed upon changes of state. So a language like Clojure has special facilities in it, transactional memory, that allow you to change variables, but only in the context of a transaction. This discipline has to be maintained if you're writing a Clojure program. There's no locking; you don't block for anything; you just make sure you've got this nice transactional memory, because locking requires superpowers.
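A few hedged Clojure sketches of those three ideas; the names are mine, but persistent vectors, refs with dosync, and protocols with records are real Clojure features:

    ; Persistent data structures: "modifying" keeps the old version intact.
    (def v1 [1 2 3])
    (def v2 (conj v1 4))   ; v2 shares structure with v1 instead of copying it
    v1                     ; => [1 2 3]   the old version still exists
    v2                     ; => [1 2 3 4]

    ; Assignment only inside something like a transaction: software
    ; transactional memory detects collisions between threads and retries.
    (def balance (ref 0))
    (dosync (alter balance + 100))
    @balance               ; => 100

    ; Polymorphism without changing state: records with functions in them.
    (defprotocol Shape
      (area [s]))

    (defrecord Circle [radius]
      Shape
      (area [_] (* Math/PI radius radius)))

    (defrecord Square [side]
      Shape
      (area [_] (* side side)))

    (area (->Circle 2.0))  ; dispatches polymorphically; no variable ever changes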
It's difficult to know when to lock and when not to. Has anybody debugged an application horribly, only to find out that you forgot to lock somewhere? Locking requires superpowers; let's not use them. Locking means that you have side effects and you're trying to lock around those side effects.

And with that, I had a lot more to talk about, but with five minutes left I think I'll open it up for questions. Are there any? You're going to have to holler and put your hand up really high, because I can't see anybody.

Yep: memory is cheaper, but what about cache misses? All right, so we do have the problem now that we've got all this caching in our processors, but the hardware guys are ripping the caches out; all those hardware caches are going to go away. Now, we still have software caches, and yes, the more memory we use in our lists and our persistent data structures, the more we're likely to have some issues there. Functional programs can be a little slower, not much, a little bit slower, because there are these funny linked-list structures you have to be walking through; but the time difference is fairly small, and if we're talking about multi-core, then the time difference is almost irrelevant, because we're trying to find a way to program with 1024 cores. If that costs us 2% on each individual core, it's not much of a cost.

Anybody else? Do I see a hand somewhere? It's hard for me to see; okay, I don't see any hands. Oh, one guy. So the question is: how do I structure my program, now that I don't have nice objects to put my functions into? And the answer is: the same way. You still have data structures; you still have gatherings of data, and functions that operate on those gatherings of data. The difference is that you don't change any of the variables inside those gatherings of data. In a good functional language there is a way to create a suite of functions that operate on a particular kind of data structure; it looks like an OO language in that sense. Clojure has that facility, for example: you can create records, and inside those records you can put functions, and those records can behave polymorphically, just like methods and classes, except that none of the variables in the records can change. You have to create new objects, and even though you're not actually creating new objects, it looks to you like you are, and you can maintain state that way.

All right, I think that's enough. Thank you all for your attention. I'll see you another time.
Info
Channel: gnbitcom
Views: 302,984
Rating: 4.7719054 out of 5
Keywords: uncle bob, FP, Functional Programming
Id: 7Zlp9rKHGD4
Length: 58min 26sec (3506 seconds)
Published: Fri Dec 26 2014