Why Functional Programming Matters by John Hughes at Functional Conf 2016

Reddit Comments

Around the 28-minute mark there's the great quote by Peter Henderson:

It seems that there is a positive correlation between the simplicity of the rules and the quality of the algebra as a description tool.

Later on he talks about the importance of several things; what other (boring or crazily creative) examples of them are there?

  • Whole values
  • Combining forms
  • Algebra as litmus test
  • Functions as representations (‘tremendously important’)
👍︎︎ 4 👤︎︎ u/Iceland_jack 📅︎︎ Oct 26 2016 🗫︎ replies

Great lecture.

👍︎︎ 2 👤︎︎ u/[deleted] 📅︎︎ Oct 28 2016 🗫︎ replies
Captions
OK, thank you very much for the invitation, it's a pleasure to be here. So, "Why Functional Programming Matters": I wrote the paper with this title a long time ago, 30 years ago in fact, but I thought it was too easy, so I didn't send it to be published for five years. What a mistake. So, about 25 years after it was published, I was invited to give a keynote about it, but who wants to just talk about a 25-year-old paper? So instead, Mary, who is my colleague and my wife, and I decided that we would give a talk that mentions this paper but also takes a dive through some of the history of functional programming, talks about some of the classic papers of the subject, and also some of the papers that have inspired us in particular. So I'm going to take you on that journey with me, and although I'm the one giving the talk, I have a picture of Mary, even if I couldn't bring her with me.

So "Why Functional Programming Matters" is a very well-known paper; it's very popular, and for many people coming into the field it's the first paper they read. But of course it's not where functional programming started. Functional programming started much earlier than that: there was functional programming back in the 1940s, but it was very different. In the 1940s, functional programming was really minimalist. For example: booleans, who needs them? What is a boolean for? All you do with a boolean is make a choice; that's its purpose. Can we make choices without booleans? So let's define two functions, that we'll call true and false, that will make choices. If they're going to make choices, they've got to have at least two things to choose between, so let's give them two arguments; we'll call them x and y. Now true will just make the first choice, and false will make the second. So back in the 40s people thought: let's represent booleans this way. Well, do these really work as booleans? If so, we should be able to define if-then-else. Yes we can, it's easy: we define if-then-else to take a boolean, the then-branch and the else-branch, and it just calls the boolean to make the choice between the branches. So that's great: we've got functions that behave like booleans; we don't need booleans.

Well, if we don't need booleans, what about numbers, who needs them? OK, let's talk about positive integers here. What's a positive integer for? Well, counting loop iterations. Let's see if we can count loop iterations without numbers. What I want to do now is define two to be a function that takes a loop body and a starting value, and just runs the loop twice: we apply the function twice to the starting value. So that's going to be my representation of two; one is going to apply the loop body once, and zero is going to apply it zero times, and so on. So you can see that for every positive integer I can construct a function that I claim corresponds to it. But is this actually a good way of representing integers? Can I recover a normal integer, given one of these functions? Well, sure I can: all I have to do is call the function and give it a loop body, this is Haskell notation, a loop body that just increments a value, starting from the value 0. So now, if I sit in the Haskell interpreter and I call my two function and give it the increment function and 0 as arguments, what do I get? Two. I've converted back to a normal integer. So there we are, I can represent natural numbers this way.
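As a minimal sketch of what the slides describe, in modern Haskell (the definitions below are my own rendering, not the speaker's exact code):

```haskell
-- Church booleans: a boolean is just a function that makes a choice.
true, false :: a -> a -> a
true  x _ = x
false _ y = y

-- if-then-else just asks the boolean to choose between the branches.
ifte :: (a -> a -> a) -> a -> a -> a
ifte b t e = b t e

-- Church numerals: n is a function that runs a loop body n times.
zero, one, two :: (a -> a) -> a -> a
zero _ x = x
one  f x = f x
two  f x = f (f x)

-- Recover an ordinary integer by iterating increment, starting from 0.
toInt :: ((Int -> Int) -> Int -> Int) -> Int
toInt n = n (+1) 0

-- ghci> toInt two
-- 2
```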
Can I do anything with these numbers? Well, look, I can add them together. How do I add m and n? Well, I'm going to want to construct a loop that iterates f, m plus n times, and I can do that just by iterating f n times and then m times. OK, that's addition. Oh, what about multiplication? Well, if sequencing loops gives me addition, nesting loops gives me multiplication, right? So here I'm using m to iterate an outer loop m times, around an inner loop that iterates f n times. How many times do I call f? m times n. OK, does this really work? Well, let's try it out. I can sit in the Haskell interpreter again and say: let's add one to two times two, that should be five, and then that many times I'll iterate the increment function on zero. Sure enough, we get five. Cool, so I can represent integers and do arithmetic on them.

Well now, there was a time when every talk on functional programming had to include the factorial function, so here it is: here is the factorial function, anno 1940. So factorial of n is, here's my if-then-else, so that's me using functions to model booleans, and if n is zero then I return that function I called one; otherwise I use my arithmetic, implemented by functions, to multiply n by a recursive call of factorial with the decrement of n. Does it work? Sure: in the Haskell interpreter I can take factorial of one plus two times two, that's factorial of 5, and that many times I'll iterate incrementing 0, and what do I get? 120. That's right, and it's not even terribly slow. So there we are, there's factorial. There are a couple of auxiliary functions that I haven't shown you. isZero is one of them; that's really easy. How do I tell whether one of these functions represents zero? I iterate a loop body that returns false, n times, starting from true; so if n is 0 I'll get true, and if it's anything else I'll get false. The other one I didn't show you was decrement, and the less said about that the better, but you can do it, it works, I promise you.

So there we are: we see that booleans, integers, and actually any algebraic data type, any data structure, can be entirely replaced by functions. Cool. Why? Well, the guy who came up with this was Alonzo Church; these are the Church encodings of values. What Church wanted to do was to show that functional programming could be a foundation for mathematics, so you didn't need anything else to start off with: you could start off with just functions and then construct everything else you might want from them. Which is interesting theoretically, but this is not just of theoretical interest. It turns out that early versions of the Glasgow Haskell Compiler actually implemented data structures this way, not numbers and booleans of course, but other data structures were implemented by functions in this way, because it turned out that for a while there, in the early days, this was actually the fastest way of doing it, believe it or not. I remember how that was discovered: there was a man called Jon Fairbairn, who had a different functional language compiler which he had built from scratch, and to save time he had implemented data structures using the Church encodings. Then one day he decided to do it properly, so he built in data structures as we know and love them, and the code went slower.
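Filling in those pieces: a hedged sketch of the arithmetic, the auxiliaries and factorial. The `Church` synonym and the RankNTypes extension anticipate the "little bit of help" for the type checker that the talk mentions next; `decr` uses the standard pair trick, and `isZero` returns Haskell's `Bool` for brevity rather than a Church boolean.

```haskell
{-# LANGUAGE RankNTypes #-}

type Church = forall a. (a -> a) -> a -> a

zero, one, two :: Church
zero _ x = x
one  f x = f x
two  f x = f (f x)

-- Sequencing loops is addition; nesting loops is multiplication.
add, mul :: Church -> Church -> Church
add m n f x = m f (n f x)
mul m n f   = m (n f)

-- Iterate "return False" n times, starting from True.
isZero :: Church -> Bool
isZero n = n (\_ -> False) True

-- Decrement via the pair trick ("the less said about that the better").
decr :: Church -> Church
decr n f x = fst (n (\(_, b) -> (b, f b)) (x, x))

fac :: Church -> Church
fac n = if isZero n then one else mul n (fac (decr n))

toInt :: Church -> Int
toInt n = n (+1) 0

-- ghci> toInt (fac (add one (mul two two)))
-- 120
```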
Now, I should warn you: I've shown you the Haskell shell evaluating these things, but if you try it at home you're going to get an error message. GHCi says that the occurs check failed, we can't construct an infinite type, blah blah. OK, that sounds kind of serious. It's a little hard to tell just from that message what the problem is, though, but don't worry, GHC is very helpful: it tells you it was expecting this type, but it got this one. This is the first time I've had a chance to use the three-point font on a slide. OK, so now you can see what the problem is, I suppose? No? Don't worry, there's more: these are the types of the relevant bindings. What's going on here is that the tricks I've shown you work very well in an untyped language, and you can do them in Haskell too, but they go a little bit beyond what the Haskell type checker can figure out for itself, so you have to give it a little bit of help: you have to add these two lexemes, in red, to tell the compiler the correct type of the factorial function. And if you do that, then all the examples I showed you go through perfectly.

So that's great: in the 40s we could write functional programs, but we couldn't run them, and that was kind of a shame, wasn't it? So then, round about 1960, John McCarthy built the first Lisp implementation, and suddenly it was possible to run functional programs for the first time. Here is the factorial function in Lisp; it's pretty similar to the 1940s version. Now, Lisp wasn't a purely functional language, but nevertheless you could do a lot of what we think of as functional programming today, right from the start. It had the map function, which was called maplist in Lisp, so you could map the factorial function down a list of five numbers and, sure enough, you would get the list of factorials. So higher-order functions were in there from the start.

Now, in the early 60s there was a lot of work going on in the research community to do with functional programming, but I want to skip to a paper from 1965 which is a real classic: "The Next 700 Programming Languages", by Peter Landin. Landin was working for Univac at the time, and he was aware of no fewer than 1,700 different special programming languages already in use by 1965. Why were there 1,700 languages? Well, they were used in over 700 application areas, and Landin thought that even 700 programming languages would be far too many. All that was needed was one, his programming language, which he called ISWIM, for "If you See What I Mean", plus 700 different libraries. That was his idea. At this time every application area had its own programming language, and that was unnecessary, Landin thought. So he presented ISWIM, which was a very nice idealized language; here is the factorial function in ISWIM, we haven't forgotten that. It was hugely influential as an early functional language, but I'm not going to focus on ISWIM so much as on another point that Landin made very strongly. Landin said it's not only important that we should be able to express programs; it's important that we should be able to reason about them. He was very interested in equivalences, laws that relate different programs, like this one, which says that if you map f over the reverse of a list, that's the same as reversing the result of mapping f over the list. Now, this law is nearly true in Lisp. It's not actually true, because if f has side effects then we're going to be performing the calls in a different order, and so we may end up with different results. But Landin was very opposed to laws that are nearly true: he thought a law should hold always, without question. And he even discusses in his paper whether or not that's a good thing. You could imagine somebody coming along and saying: well, what's the point of having this law? What's the point of having two programs that do exactly the same thing? Wouldn't it be better if they did different things, because then you'd have a choice? Landin thunders: No! Expressive power should be by design rather than by accident. Yes, Mr. Landin.
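Landin's example law, written as a Haskell property (my rendering; in a pure language it holds unconditionally, with no "nearly" about it):

```haskell
-- Mapping over a reversed list equals reversing the mapped list.
prop_mapReverse :: (Int -> Int) -> [Int] -> Bool
prop_mapReverse f xs = map f (reverse xs) == reverse (map f xs)

-- ghci> prop_mapReverse (+1) [1,2,3]
-- True
```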
This is really the first paper that emphasized laws so much, and I think that's a tremendously important idea. So let's move on a bit further. There was lots more research going on, and then, in 1977, John Backus won the Turing Award. Now, John Backus is somebody everybody who studies computer science has heard of, because of Backus-Naur Form for grammars, but he won the Turing Award because he was the man who led the development of the first Fortran compiler. Even by 1977 it was quite popular among computer scientists to be a bit rude about Fortran, but that doesn't mean one should forget what an enormous achievement that first compiler was: it was the first compiler that could generate machine code as good as a person could write, and in those days, when computers were the expensive part, all people cared about was efficiency. So Fortran made high-level-language programming possible, and that was a huge step forward; it was a very well-deserved Turing Award.

Now, when you win the Turing Award, you get to give a lecture and write a paper about it. Of course, John Backus could have written the paper about how they developed the Fortran compiler; I'm sure that's what people expected, and I'm sure it would have been a great paper, he wrote one like that later. But in his Turing Award lecture he didn't talk about Fortran at all. He presented this talk, "Can Programming Be Liberated from the von Neumann Style?", which is a manifesto for functional programming and a description of his idealized functional programming language. This paper is a wonderful read; the opening paragraphs are just fantastic. I've got some of his text on my slides. He starts off: "Conventional programming languages are growing ever more enormous, but not stronger." I think he had Ada in mind in particular at that time. "Inherent defects at the most basic level cause them to be both fat and weak." Terrible, isn't it? What are these defects? "Their primitive word-at-a-time style of programming inherited from their common ancestor, the von Neumann computer." So he was very negative about word-at-a-time programming, and he said that because the processor operates on one word at a time, our programming languages operate on one word at a time, and that creates this bottleneck between what we want to do and our data. By the von Neumann bottleneck he was thinking, basically, of the bus that connects a processor to memory, and he said: "I call this the von Neumann bottleneck." This paper is where that term, which has since become so very well known, comes from. So next time you're going to buy a computer, don't ask how fast the bus is; ask how fast the von Neumann bottleneck is.

What else is wrong with conventional languages? "Their inability to effectively use powerful combining forms for building new programs from existing ones." What combining forms was he thinking of? Well, I'm going to illustrate a couple of his combining forms with diagrams. In these diagrams, boxes represent functions mapping inputs to outputs, and the inputs come from the right, which might surprise you a little at first. One of Backus's combining forms, which he called "apply to all", he wrote as alpha-f, and the idea here is that if you've got a function f, you can apply alpha to it and get a function that takes a list of inputs and just applies f to each one. So that was one of the forms he thought was important. Another one he called "construction": here the idea is that you have a number of different functions, f1 to f4, those are the coloured boxes, but you give them all the same input. By making a construction of these functions, you get something that, when given an input, passes it to each of the functions and makes a list of the results.
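In Haskell terms (my hedged translation; Backus defined these in his language FP, not Haskell), the two combining forms might look like this:

```haskell
-- "Apply to all": Backus's alpha-f is essentially map.
alphaF :: (a -> b) -> [a] -> [b]
alphaF = map

-- "Construction": fan one input out to several functions and
-- collect the results in a list.
construction :: [a -> b] -> a -> [b]
construction fs x = [f x | f <- fs]
```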
What else is wrong with conventional languages? "Their lack of useful mathematical properties for reasoning about programs." So Backus was also very concerned about laws. Here is one of his laws: this represents the composition of a construction with another function g, and if you look at the diagram it's pretty obvious that this will do the same thing as that, provided g has no side effects: it doesn't matter whether we apply g once and then duplicate the result, or whether we duplicate the input and then apply g to each copy. And that tells us that those two terms, at the top and the bottom, must always be equal in Backus's programming language, FP.

And what about the von Neumann bottleneck, the word-at-a-time programming? Well, Backus proposed replacing programs like this one, which computes the scalar product of two vectors a and b, and is written in Algol 60. Rather than write this program, which manipulates the vectors one word at a time, Backus proposed writing a scalar-product function which first takes a pair of vectors and transposes them into a vector of pairs, then applies multiplication to all those pairs, and then folds addition over the results to add them all together, and that gives you the scalar product. Actually, he wouldn't have written it like this; he would have written it like that, a little bit of APL envy there, perhaps, but the point is that this example demonstrates very powerful ways to compose simple components into complex programs.

So this really was a landmark paper. The fact that John Backus, the father of Fortran, devoted his Turing Award lecture to a manifesto for functional programming inspired a generation of researchers, myself among them. If you haven't read this paper, Google "John Backus Turing Award" today and read it; you will not regret doing so. I should say, maybe, that it's the first half of the paper that has stood the test of time and been tremendously influential; the second half was Backus's ideas for doing I/O in functional languages, which have not had the same impact. It's the first half of the paper that I absolutely, strongly recommend everybody to read.
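Both the law and the scalar-product example translate directly; a hedged sketch, reusing the `construction` from above:

```haskell
-- Backus's composition law, in Haskell notation:
--   construction fs . g  ==  construction (map (. g) fs)
-- It holds unconditionally here, because Haskell functions have no
-- side effects.

-- The scalar product, whole-value style: transpose the pair of
-- vectors into a list of pairs, multiply each pair, fold with (+).
scalarProduct :: Num a => [a] -> [a] -> a
scalarProduct xs ys = foldr (+) 0 (map (uncurry (*)) (zip xs ys))

-- ghci> scalarProduct [1,2,3] [4,5,6]
-- 32
```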
So this paper prompted a huge amount of research by people who were inspired by it, among them Mary's PhD supervisor, Peter Henderson. So let me tell you about some of the work he was doing when we first met him. Peter had two loves: functional programming and the drawings of Escher, and he decided he would like to combine the two. If you look at this Escher print, it's called "Square Limit", you can see that it has a kind of recursive structure, and recursion is the bread and butter of functional programming. Peter thought: wouldn't it be great to construct this picture by functional programming? Now, in those days you did computer graphics by sending drawing commands to a plotter that moved a pen on a piece of paper. Yes, it really was that long ago. So graphics programs would be very much word-at-a-time, one drawing command at a time. But Peter said: let's not do that; let's let a picture be a value. So here's a value, a fish. As soon as a picture is a value, you can define functions that combine pictures. For example, you can make this one by overlaying the original fish with rotate(rotate(fish)): here rotate is a 90-degree rotation, and if you do it twice, then we rotate the fish around and they fit together. The fact that they fit is Escher's genius. So that's cool; here's something else we can do. Here we've got the original fish and two smaller copies: this one is the original fish, this one is fish2, and fish2 is a 45-degree rotation of the original fish, also shrunk a little in order to fit into the same box, so it's actually rotated 45 degrees and then flipped around the y-axis. And this is fish3, which is a further rotation of fish2. Now we've got a fish and some smaller fish, which reminds us of the original print, doesn't it? Let's keep going; what else could we do? Well, here we've taken that smaller fish, fish2, and just put it together four times in four different rotations, and we get this nice little picture of fish swimming in a circle.

So Peter went on to define more and more operations for combining pictures, like quartet, which takes four pictures and draws them in the four quarters of a square, or cycle, which takes one picture and draws it in the four quarters of the square but with a rotation applied each time, to make them kind of swim around. And if you have an implementation of these things, it's great fun to play with; you can just draw whatever you like. Here we've taken the t image, the one with a fish and two little ones, and just cycled it around; here I've made a quartet of that. Escher never drew this, but it doesn't matter, we can make our own pictures, whatever we feel like.

So, in order to construct the final picture, Peter wanted to make an edge. Here we've made a quartet of two empty pictures, that's what nil is, you can see the gaps up there, and the t picture in one of its rotations. You can see that here we've got larger fish on the inside and smaller fish on the outside, but really we'd like to have even smaller fish up here, wouldn't we? So let's take this picture, let's call it side1, and then make this picture, where we replace the empty pictures by smaller copies of side1; that's what this code does. You can see that, by continuing in this way, we're going to be able to get smaller and smaller fish approaching the side of the picture. In a corner as well: let's use this picture as the corner. Once again I've got a corner with three empty squares around it, and now if I call this corner1 and put copies of it in the empty squares, we'll get corner2, where the fish are getting a bit smaller, and by repeating this process once again I can make smaller and smaller and smaller fish approaching the corner. So Peter constructed Square Limit, at the end, from the u picture, this side and this corner, in a nonet, I haven't shown you the definition, that's a square divided into nine little squares: we've got a corner in each corner, the side on each of the sides, and the u thing in the middle. And what does that give you? Square Limit. Do you remember the original picture? Not bad, eh? That's pretty close.

So this was a delightful, fun exercise, but Peter didn't stop there, because he still had to implement pictures somehow. Let's see, how can we represent pictures? Hmm, we need some kind of complex data structure, don't we? No: let's let pictures be functions. Peter's idea here was to say: when we draw a picture, we have to end up, at the end of the day, with a list of drawing commands, so let's let a picture be a function that constructs that list. And we might want to draw a picture in a variety of places and a variety of sizes, so let's let a picture be a function that takes three vectors, a, b and c, that specify a box in the plane. Vector a specifies where the bottom left-hand corner is, b and c specify the edges of the box, and then a picture returns the drawing commands that draw it within that box.
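A hedged sketch of that representation; `Vec` and `Line` here are my stand-ins for whatever the original plotter interface provided, and the definitions follow Henderson's functional-geometry paper:

```haskell
type Vec  = (Double, Double)
data Line = Line Vec Vec          -- one drawing command

vadd :: Vec -> Vec -> Vec
vadd (x1, y1) (x2, y2) = (x1 + x2, y1 + y2)

scale :: Double -> Vec -> Vec
scale k (x, y) = (k * x, k * y)

-- A picture maps a box (origin a, edge vectors b and c) to the
-- drawing commands that render it inside that box.
type Picture = Vec -> Vec -> Vec -> [Line]

-- Draw both pictures in the same box; union of the commands.
overlay :: Picture -> Picture -> Picture
overlay p q a b c = p a b c ++ q a b c

-- Split the box in two; draw p on the left and q on the right.
beside :: Picture -> Picture -> Picture
beside p q a b c = p a (scale 0.5 b) c
                ++ q (a `vadd` scale 0.5 b) (scale 0.5 b) c

-- Rotate 90 degrees anticlockwise by shuffling the box's vectors:
-- the new origin is a + b, and the edges swap with one negated.
rot :: Picture -> Picture
rot p a b c = p (a `vadd` b) c (scale (-1) b)
```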
So the really nice thing about representing pictures as functions is that it made the picture combinators really easy to define. Here's how you might overlay two pictures: you just draw them both in the same box and take the union of the drawing commands. Here's how you put picture p beside picture q: to make this easier to understand, you take the box a, b, c and divide it into two boxes, you figure out what the vectors need to be, and then you draw p in the left box and q in the right box and take the union. You can even rotate a picture, just by fiddling with the vectors: if we take this square and rotate it, we can see that its bottom left-hand corner is going to end up here, and we have to swap the edge vectors b and c and negate one of them; that gives us a 90-degree anticlockwise rotation.

So Peter was able to define his picture combinators really easily, and thanks to those simple definitions he could also easily prove laws about them, like this one: if you put p above q and rotate the result, you get the same thing as if you rotate p, rotate q, and put them beside each other. And that's a little bit of simple algebra, because the implementations of these functions are so simple. Peter studied those laws a great deal, and he says this in the paper: "It seems that there is a positive correlation between the simplicity of the rules and the quality of the algebra as a description tool." In other words, by making sure that the functions he defined had these nice properties, he got a language for describing pictures that was more expressive and nicer to use. So I think this is a really great example of functional programming at its best, and we've seen a number of ideas coming back again and again: the idea of programming with whole values, the whole picture, the whole vector, the whole array, not word by word; of defining combining forms, like map and construction and overlay; of using the algebra, the laws, as a litmus test to guide the design of those combining forms; and another idea, dating back to Church, of using functions as representations, which is tremendously important.

So now I want to hop ahead, very quickly, to a paper from 1994 which is a bit less well known, but which I really like; it meant a great deal to me. It has the somewhat clumsy name "Haskell vs. Ada vs. C++ vs. Awk vs. ...". This paper originates from a big DARPA project in software prototyping. Software prototyping was very trendy at that time, so DARPA wanted to support work on it, and while you couldn't get DARPA funding to write Haskell programs for real use in those days, Paul Hudak got some DARPA money for using Haskell to build software prototypes, and many other people got money from the same project. So at some point DARPA wanted to evaluate how well people were doing, so they gave everybody the same problem to prototype, and they were to record how much code they had to write and how long it took. The problem was a geometric one: it involved defining regions in the plane with complex shapes, and, DARPA being DARPA, these regions had names like "weapon doctrine", and there's a point in here called "hostile aircraft", and so on. The prototype was supposed to input a description of this stuff, analyse it, and output things that would say, for example, that the commercial aircraft is in the engageability zone, and so on.
Mark Jones got the job of implementing this prototype in Haskell. You can see that the key thing here is that you have to work with regions with complex shapes; you've got to represent regions somehow, there are lots of different shapes of them, and it could be a very complex data structure. How do you think Mark Jones represented a region? As a function from point to Bool. And that made the combinators on regions extremely easy to define: I mean, the outside of a region is just, you know, the negation of what the region would return, and intersection and union are really easy to define too. As a result, Mark's prototype was very, very tiny. I don't know if you can read this, but this is Mark's code: he wrote 85 lines. The Ada solution was over 760 lines, C++ 1100 lines, and there were some others that approached 85 lines, but many of them didn't work; Mark's did. In fact, when Mark's solution was submitted, the evaluators at first thought he hadn't solved the problem, that he'd just written down the specification: where's the code? But it was actually executable. What I think is a real shame is that Mark had written 29 lines of type signatures and type synonyms that the type checker would have inferred by itself, so his solution could have been 29 lines smaller.

So anyway, when these results came in, the evaluators thought there was something funny going on, they're cheating. You might see that Haskell appears in this list twice; so what's the second one? The evaluators gave the same problem to a graduate student somewhere else, without telling Paul. They just said: OK, you've got 8 days to learn Haskell, then see if you can solve the problem. And the student produced the second-smallest solution. How about that? It's almost twice as large as Mark's, but then he'd only had eight days to learn Haskell, and it was the second-best solution. So you would have thought that at this point DARPA would say: wow, Haskell is clearly the best prototyping language. But they didn't. They said it's too cute for its own good; higher-order functions are just a trick, probably not useful in other contexts. Well, so much for that.
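A hedged sketch of the trick (the names below are mine; the actual geo-server code is in the paper):

```haskell
-- A region is just a predicate on points.
type Point  = (Double, Double)
type Region = Point -> Bool

circle :: Point -> Double -> Region
circle (cx, cy) r (x, y) = (x - cx)^2 + (y - cy)^2 <= r^2

-- The combinators fall out almost for free.
outside :: Region -> Region
outside reg p = not (reg p)

intersection, union' :: Region -> Region -> Region
intersection r1 r2 p = r1 p && r2 p
union'       r1 r2 p = r1 p || r2 p

-- ghci> let zone = intersection (circle (0,0) 5) (outside (circle (0,0) 1))
-- ghci> zone (3, 0)
-- True
```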
Now let me go back in time a little, because I want to talk about another pair of classic papers that really inspired me. These are the two papers, both from 1976, that introduced and publicized lazy evaluation. Lazy evaluation had been discovered before this, but these were the two papers that put it on the map, and one of the authors was Peter Henderson, Mary's supervisor. I was really inspired by the idea of lazy evaluation, because it meant that the whole values we've been talking about could be infinite. A program could manipulate a whole value that was the infinite list of natural numbers, or the list of iterations of a function: here's f applied zero times, one time, two times... How many iterations? All of them. Now, of course, this doesn't mean that any single program will compute all of them; the choice of how many to compute is left to the consumer. This definition can compute any number of iterations of f, as many as you want; it's up to the consumer how many you get. And I was very excited about the possibility of separating the decision of how many things to compute from the code that says how to compute them.

Here's an example of a consumer, for numerical methods. I had studied numerical analysis, without very much success, a few years earlier, but one of the things I had learned is that numerical methods often take the limit of a sequence of approximations. So let's just make that a consumer: limit takes an infinite list of approximations and returns the first one it finds where the distance between successive approximations is less than epsilon. So now we've got a part of many numerical algorithms, but expressed as a reusable, separate function. Here's the Newton-Raphson square-root algorithm: it starts from some value, let's say 1.0, and computes successive approximations; here's the next function that does that. So I can compute Newton-Raphson square roots just by iterating the next function until I reach the limit, and so I'm able to separate the computation of the approximations from the taking of the limit. Here's code for computing a numerical derivative. The derivative is the limit of the slope of the graph as the points get closer and closer together, so let's compute a series of h's, by starting from one and making them smaller and smaller, then map a slope calculation over that, and take the limit. And the convergence check can be the same code, even though the algorithm for computing the approximations is different.

So I thought: this is great. Actually, there's something even cooler you can do. Differentiation is done by taking the limit of a slope as h gets smaller and smaller; integration is done by taking the limit of a sum of areas as h gets smaller and smaller, because that's what the trapezium rule for integration gives you. So in both cases we've got an h that is getting smaller and smaller. OK, think back to your numerical analysis courses: when you do something like this, the result that you get can be expressed as a plus b times some power of h, where a is the right answer and b times h to the n is the error term. So suppose we take two successive approximations. Now we've got two values, we know what h is in each case, so we've got two simultaneous equations for a and b, and we can just solve them, right? And if we solve them, what do we get? We get b, which we don't really care about, but we also get a, which is the right answer. So we can take a sequence of approximations, just take the first two, and compute the right answer from them; who needs the rest? OK, well, it's not quite as brilliant as that, because this formula is only an approximation. What happens is that we get a much more accurate answer than either of the two we started with, but not one that's completely right. But never mind: we can take the next two and do it again. So we can make a new sequence of approximations which converges faster: it's a better numerical method, and we can write a function that does that. So here's a really fast function for computing derivatives: we make the initial approximations in the same way as before, we improve the sequence a couple of times, removing the first-order error term and then the second-order error term, and now it converges really quickly, and we take the limit. Isn't that sweet? And everything is programmed separately; everything here is reusable, thanks to whole-value programming. If you do the same thing with integration, which is a much more expensive operation, you get a really good numerical integration algorithm very, very easily.

So this is some of the stuff that appears in my paper "Why Functional Programming Matters". The basic idea that paper uses again and again is that of connecting a consumer and a producer with lazy evaluation: the consumer demands values, and then the producer generates them. So I've shown you how we can do that with numerical approximations; in the paper I also show that you can represent a search space in the same lazy manner, and then the consumer becomes the search strategy that decides which parts of the search space we need to compute and explore.
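Before moving on, a hedged sketch of those numerical examples in Haskell. The paper's own versions are called `within`, `easydiff` and `elimerror`; the names and tolerances below are mine.

```haskell
-- Consumer: the first approximation within eps of its predecessor.
limit :: Double -> [Double] -> Double
limit eps (a : b : rest)
  | abs (a - b) <= eps = b
  | otherwise          = limit eps (b : rest)
limit _ _ = error "limit: needs an infinite list"

-- Producer: Newton-Raphson square roots, iterating from 1.0.
sqroot :: Double -> Double
sqroot n = limit 1e-10 (iterate next 1.0)
  where next x = (x + n / x) / 2

-- Producer: slopes of the graph at smaller and smaller h.
differentiate :: (Double -> Double) -> Double -> [Double]
differentiate f x = map slope (iterate (/ 2) 1.0)
  where slope h = (f (x + h) - f x) / h

-- Eliminate the order-n error term from successive approximations
-- (h halves between elements): solve a + b*h^n and a + b*(h/2)^n
-- for a, yielding a faster-converging sequence.
improve :: Int -> [Double] -> [Double]
improve n (a : b : rest) =
  (b * 2 ^ n - a) / (2 ^ n - 1) : improve n (b : rest)
improve _ _ = []

-- Improve twice, then take the limit: a fast numerical derivative.
fastDeriv :: (Double -> Double) -> Double -> Double
fastDeriv f x = limit 1e-10 (improve 2 (improve 1 (differentiate f x)))

-- ghci> sqroot 2            ==> 1.4142135623...
-- ghci> fastDeriv (^2) 3    ==> 6.0
```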
In the paper I use that idea to implement the alpha-beta algorithm for noughts and crosses. My implementation was extremely slow, so that was about the most complex game I could solve at the time, but I was able to program the alpha-beta heuristic as a separate function, whereas all the presentations I'd seen beforehand had to mix the alpha-beta part with the construction of the game tree. I've used this idea again and again during my career, and there's one more paper I want to show you that uses it: the QuickCheck paper. For those of you who haven't seen QuickCheck, the idea is that you test your program by writing general properties, like this one. Here's the property prop_reverse, which says that for any xs that is a list of integers, if we take xs and reverse it twice, then we get back the original list. If you give that to QuickCheck, it will generate a hundred random values by default, run the test for each of them, and in this case all the tests pass. So that's great. What's more interesting is what happens if the property is not true. Here's a wrong property that says that if you reverse a list you get the list itself, and of course that's not true for all lists. In that case QuickCheck quickly finds a random list for which it's not true, such as this one, but you can see that that could be hard to debug. So then it goes on to simplify the failing test, and finally ends up reporting a minimal case that fails: in this case, just the list [0,1]. This is the smallest list that falsifies this property. Why is it smallest? Well, if we removed either the 0 or the 1, we'd get a one-element list, which is its own reversal. Why is it 0 and 1? Well, if we replaced the 1 by 0, so we had [0,0], we'd have another list that is its own reversal. So this is the simplest case that fails, and these are the cases that QuickCheck presents to the user for further debugging. So how does it work? Well, the property defines a space of all possible tests, and then the QuickCheck search strategy is a consumer that traverses that space, running the tests, first of all at random and then systematically, to find the smallest possible failing case.
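The two properties from the demo, roughly as you would write them today (a sketch using the Test.QuickCheck library):

```haskell
import Test.QuickCheck

-- Reversing twice gives back the original list.
prop_reverse :: [Int] -> Bool
prop_reverse xs = reverse (reverse xs) == xs

-- The deliberately wrong property: QuickCheck finds a counterexample
-- and shrinks it to a minimal failing case, typically [0,1].
prop_wrong :: [Int] -> Bool
prop_wrong xs = reverse xs == xs

main :: IO ()
main = do
  quickCheck prop_reverse   -- +++ OK, passed 100 tests.
  quickCheck prop_wrong     -- *** Failed! Falsified: [0,1]
```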
OK, so in the interest of time I will skip over some of the hardware stuff, except to say: let's talk about hardware for a little while. Something else that we'll all remember from around this time is the Intel Pentium bug. Here's a calculation you could try on any processor: we take the number 4195835, divide it by 3145727, then multiply by 3145727 again. That should give us back the original number, so if we subtract it from itself, we should get zero, which you do on modern processors, and you should have got on Intel's processors as well. But on the faulty Pentiums, you got 256. This wasn't good, of course, and for a while Intel tried to pass it off, saying it was very unlikely to happen and it's nearly right, but in the end they had to replace enormous numbers of Pentiums. It cost them four hundred and seventy-five million dollars, and that's a lot of money, even for Intel. And of course then they had all these flawed Pentiums; what did they do with them? Simple: they put them into key rings, and every member of staff got one of these key rings at the time. I've seen one of these; on the back they say: bad companies are destroyed by crises; good companies survive them; great companies are improved by them. So what was the improvement? The improvement was that they recruited this man, Carl Seger, who went to Intel to build his own lazy functional language, FL, which was specially designed for doing proofs about hardware. Using that, he built his Forte system, which has thousands of users at Intel; this is what they use to verify Intel's microprocessors and make sure the Pentium bug will never happen again. FL is used as a design language, as a specification language, for scripting, for all the formal verification tools and so on, and it's also the language that the tools prove things about. So functional programming is really at the heart of Intel's approach to formally verifying circuits. Why hasn't there been another Pentium bug? At least in part, it's thanks to functional programming. That's good to know.

On the subject of hardware, I also want to mention a language called Bluespec, which is Haskell for hardware, basically. It's the brainchild of Arvind at MIT, and it combines Haskell, for the large-scale structure of a hardware description, with atomic transition rules from which the compiler can infer parallelism, to generate Verilog that goes into industry-standard tools for synthesis after that. So Bluespec lets you write your hardware in Haskell, basically, and the code you get is often faster than handwritten code, because, as functional programming always does, by simplifying the task of just making things work it gives designers more time to use better algorithms, and it makes architectural changes of designs very easy to do. There's a very nice paper about this by Nikhil, who is the CTO of the Bluespec company, and there's a reference there. And of course Bluespec is essentially a kind of Haskell, so it has QuickCheck. This is work by Matthew Naylor and others at Cambridge: you can take Bluespec QuickCheck and generate and shrink tests on the chip, and this blows hardware designers away. I mean, hardware designers are used to maybe having some support for testing on a chip, but having the whole process, where the chip just outputs "by the way, I don't work for this input", that's just black magic. So I think that's really cool.

So we've made a very long journey, starting from Church numerals and coming all the way to Haskell and QuickCheck running on a chip. But throughout this whole journey there are a number of ideas that have come back again and again and again: programming with whole values, not word at a time; powerful combining forms that satisfy simple laws; the algebra as a litmus test; and using functions as representations. I think these four ideas pervade good functional programming, and they've delivered a whole lot of value over the years. So I think we would have made Backus proud. Thank you.

Yeah, questions? Certainly, here's the table of numbers again; here it is. I can hear you; testing: two, three, five, seven. "I'm curious why the number of lines of code is used as the criterion for judging goodness, rather than development time." According to development time, Haskell still does quite well, but Lisp, that's true, blew Haskell away completely. The difference, if you read the paper, is that the Haskell one worked. Well, if they had kept going until it worked, then, provided that took zero time, it could have won, but my understanding is that it wasn't anything resembling a complete solution. You might ask as well, you know, what about lines of documentation? If only Mark had written less documentation, perhaps it could have been much quicker. It's not really a very scientific study, but it's a nice story.
The paper, by the way, is still on the web at the Yale functional programming group's website, so you can download it and read it. Other questions? Oh yes, yes. So, one area where a great deal of research effort is being spent at the moment is on enriching the type system. A lot of people are very excited about dependent types, and they're going into Haskell, so I guess we're going to find out how useful they really are. I know a lot of people are enthusiastic about that; I'm not as enthusiastic as many others, because I see that there is perhaps a point where you get diminishing returns: there are costs as well as benefits to a more powerful type system. But that's clearly something where a lot of work is being done, and whatever comes out of it, it's going to be interesting.

The other things I wonder about are, well, first of all, there's more and more interest in constructing software to meet specifications, and our work with QuickCheck is a part of that, right? We have a specification, and then we test programs against it. We're not doing proofs, but nevertheless we're identifying inconsistencies between the code and the specification. And the problem that we face is getting the specification from somewhere. I think this is a general problem: many people who develop software don't really know what it's supposed to do, not precisely, not precisely enough to formulate a specification and even test that the code meets it. So if we can't develop specifications, then, you know, the whole process of trying to develop software that meets specifications is facing a roadblock. So I think finding ways to develop specifications is one very important problem, for functional programming and for all kinds of programming.

And the other thing I think about is this. When I got started in functional programming, we were in the early days of the software crisis, which was named in 1968, and everybody was worried that software was growing in size so rapidly that our capacity to develop it could not keep up. When I started out in functional programming, I remember going to a summer school in 1981, where people like David Turner presented functional programming beautifully, and David said: there's no point fiddling with software development, with programming languages; we've got to make a radical revolution, that's what functional programming does, and it will give an order-of-magnitude improvement in productivity. That's what we need: no factors of two or any of that nonsense, an order of magnitude. And I think that order of magnitude has been delivered: if you look at today's Haskell software and compare it to, you know, software in Pascal or C from that time, I think it's easily an order of magnitude shorter, easier to develop, quicker to develop, and so on. But the software crisis is still with us. Where is the next order of magnitude coming from? How could something be an order of magnitude more expressive than Haskell? I don't have a good answer to that. I'm excited about the whole area of program synthesis, because that seems to me to be one of the few ideas that offers any hope of another order of magnitude, but if I had a plausible idea of how to get that order of magnitude, that's what I would be working on.

"Joe Armstrong has been quoted as saying that Erlang never got a static type system because they didn't know how to build one, and recently added that some of the Haskell people offered some ideas, but it broke down because they couldn't really agree on how to enforce a type system on sending messages. So what do you see as the future of types in that setting?"
Well, there are strongly typed concurrent programming systems, Concurrent ML for example, where you have typed channels. You have to do your concurrency a little bit differently from Erlang, but you can certainly type concurrency; the fact that you're passing messages doesn't necessarily mean that static typing won't work. The other thing Erlang was designed to cope with was hot code upgrade, and that is a lot more challenging, because when you start running a program you don't know what types might even be defined in code that will be hot-loaded later. Coming up with a type system that can deal with that is a much more challenging problem, and any such type system is bound to be a lot more complex. So I don't think the Erlang designers made a mistake in excluding types; they had good reasons for excluding them. But that doesn't mean types wouldn't be valuable in that context. One of the things I would like to do, when I have time, is to build a front end for the BEAM with Haskell-like syntax and a Haskell-like type checker. I think you couldn't do everything you want to do on the BEAM in such a language, but you could do maybe 98% of it, and then drop down into Erlang for those things that you can't type. So I think that could be really useful. Again, that's when I have time. OK, maybe we're ready for, I guess we're not ready for coffee, ready for the next talk.
Info
Channel: ConfEngine
Views: 52,852
Rating: 4.9260631 out of 5
Keywords: Functional Conf 2016, #FnConf16, functional-programming, haskell, erlang
Id: XrNdvWqxBvA
Length: 56min 10sec (3370 seconds)
Published: Mon Oct 24 2016