Functional C++ for Fun and Profit by Phil Nash

Captions
so I'm Phil Nash — some of you may know me from the test framework that I've written, Catch; others because I now work at JetBrains — I've got a new job. Out of interest, how many people here work at JetBrains? So just a handful — most of you didn't put your hand up, so you should totally use all of our products. That's my sales pitch done. Tonight we're going to talk about functional C++, but before we do that, a word about my change of job. Previously I was working at a bank, and I'd been doing talks like this for years, and it's never really been relevant to put my company name up on the first slide. Now I can finally do that — and actually most of what we're going to be talking about tonight is based on work I did in my previous role, so that's ironic, really. I wanted to mention that because when we talk about functional programming we often get into very academic territory, but I want to emphasise that this is all stuff that I've used to good effect in real-world code — mostly in a quant library — so the emphasis is on the practical application of these techniques; we're not going to get very academic at all. That's why I chose the title "Functional C++ for Fun and Profit". So let's get started. I actually wanted to start by defining what functional programming actually means, because in order to see how we can do it in C++ we need to understand that. You may have some preconceived idea of what it means already — you may be thinking of first-class functions, higher-order functions, immutability, referential transparency, monads. All of these words definitely form part of the tag cloud around functional programming, but they don't actually get to the root of what functional programming is, so let's do that. I think we need to consult the authority on the subject — which is the Haskell wiki, of course — drawing attention to the first bit here, where it says: functional programming is a style of programming which models computations as the evaluation of expressions.
That's interesting. A bit further down, the same page goes on: in functional programming, programs are executed by evaluating expressions, in contrast with imperative programming, where programs are composed of statements which change global state when executed. Functional programming typically avoids using mutable state. So there are a few interesting things there. The first is that it's all about evaluating expressions — expression-oriented programming, if you like. That's really the root node of functional programming; everything else hangs off it, and you can see it already starts rolling out into immutability, avoiding global state, that sort of thing. We'll drill into that a bit more, but I just want to emphasise the expression-oriented bit; in fact we're going to look at what I call expression-oriented programming and see how we can start doing that in C++. But remember, it was also contrasted with statements, and that's where I want to start: what does it mean to use expressions instead of statements? Because we certainly have statements — lots of statements — in C++: if statements, switch statements, for and while loops are all forms of statements. Why is that contrasted with expressions? Let's look at an example — a very simple piece of code; you'll see this, or something similar, in any C++ code base. All we're trying to do is start a variable off with some initial value that depends on some condition. We see this all the time, and we're doing it by using a statement — the if statement, in this case — and that's getting in the way; it's causing a number of problems in such a small block of code. Just to highlight those: in order to get the value we're evaluating out of the statement, we need to use a side effect, by assigning to a mutable variable. That also means we need an uninitialised variable at the start. Now, we could have initialised it to some default value, but that's not what we actually want it to end up with.
We actually want it to be one of these two values, but we've had to make it non-const. Even if that variable doesn't change anywhere else in this code beyond this point, it's now mutable, and that makes the code harder to reason about, because in order to know what its value is at any one time we need to look at all of the code that mentions it — it could have changed anywhere. The opposite of that is being able to apply local reasoning, which means that by looking at a bit of code we can see what's coming in and what's going out, and reason about it on its own, without having to look at every other part of the code — and this is getting in the way of that as well. So there are quality problems for such a small bit of code. Now, what would this look like if C++ were an expression-oriented language? This is hypothetical C++ — obviously you can't do this — but let's see how it cleans things up. Now this is an expression, which means that rather than assigning a value within a statement, we're just evaluating to a value. Both branches are just values — we could have had a more complex expression here that amounts to the same thing — and because the whole if expression evaluates to a value, we can use it to initialise our variable. As a bonus we can also use type inference, so we've avoided some extra redundancy, and notice we can make it const — so that solves all of the problems we just talked about. In fact, if you look at a real functional programming language — I use the example of F# here because that's something I'm familiar with — it looks very similar: this is an if expression in F#, and apart from the syntactic differences it's basically identical. But of course C++ doesn't work that way; we can't do this. You're probably all screaming at me in your heads right now that we can do this in C++ using the ternary operator — and yes, of course, we can do it exactly that way. So what's the problem with that?
Actually, very little. We're often told that we shouldn't use the ternary operator, or that we should be very careful about using it, and it's true that we can run into some problems: we have to be careful about operator precedence, so we may need to put a few extra brackets in, and it's less readable if the expressions get complex — I wouldn't recommend nesting them, either. But for simple things like this I actually think it's a really nice way of doing it, because it achieves all of the positive goals we talked about without any of those side effects. So that's one way we can do expression-oriented programming in C++ — a very, very simple example. Actually, the biggest problem with it is that it's just not general enough; it doesn't scale. If we have more values that we need to look at, we're still going to have to resort to something like a switch statement — we can't use the ternary operator here, at least not without nesting, which, as I said, gets very unreadable. So we're back to the same problem again. How do we solve it this time? There is actually a solution: there is another way to get values out of a statement without introducing side effects, and that's — ironically — to introduce another statement, in this case the return statement. We wrap the whole switch statement in a lambda and return out of it; then we haven't had to introduce any side effects, and we can use the value coming out to initialise our variable. This solves all our problems, but it might seem like a heavyweight solution — am I really going to go to all that extra trouble just to make this into an expression? But think about it: is this actually extra work? It's actually less code than the example without the lambda, mostly because we've taken out the break statements. So it's less code, and there are fewer places it can go wrong. What about runtime efficiency — surely it's got to be less efficient with all this extra function calling?
Well, I would expect any decent compiler to optimise that down to close to — or basically — the same code, and I actually ran it through the disassembler in Visual C++ and did get identical code out. So there's no runtime overhead. The only downside is that there may be a slight compile-time cost, which may or may not be relevant in your code base; either way, I would seriously consider arranging your code in ways like this in order to make it easier to reason about. It reduces the scope for issues to creep in, and it's a step towards expression-oriented programming in C++. It might look a bit weird at first, but give it a chance. So that's if statements and switch statements taken care of. I mentioned loops as well — what can we do about those? They have similar problems, but we already have alternatives to for loops: the standard algorithms. for_each doesn't really buy us that much — it's really just another way of writing a for loop, and these days a range-based for loop is probably simpler. What you really want to be reaching for are the more specific algorithms, so I've got std::transform and std::accumulate here. These are all examples of higher-order functions — we mentioned that as one of our functional programming terms; all it means is that these are functions that take other functions (or function-like objects), or return them. In fact there's a whole range of them in the standard library — I think this is all of them. The ones at the bottom are the _if, _if_not and _copy versions; the ones at the top I've highlighted because they map onto some well-known functional algorithms: map, filter and reduce. Technically accumulate is only close to reduce — it's more like a left fold — but that's a minor difference. So these can be very useful, but they have some limitations as well: they're not really suitable for full-on functional programming.
We'll come back to why they're limited a bit later, and what we can do instead, but certainly using them will often get us out of the problems of statement-oriented for loops. OK, moving on a little bit: I want to talk about something that I call the builder pattern. I say I call it that — it's not that I invented the term, it's just that I'm not sure there's a universally agreed term for this; I've heard this name used in a few places, and it makes sense. It's different from what we might think of as a factory method — in particular, this is a static pattern rather than a dynamic one. What do I mean by that? Well, we mentioned immutability before; it's a desirable property, so if we try to write our data structures in an immutable way, we're going to be sprinkling const everywhere. Unfortunately C++ is mutable by default, so it's noisy to make things immutable, but if we train ourselves to do it we'll quickly be putting const everywhere — and because C++ is not transitively const, we'll also have to apply it to the things we point to when we have pointers. But that's fine. Notice also that I've deleted the default constructor, because it's not usually useful to default-construct something that you need to set up in a particular state, given that we can't change it afterwards. In this case that's not strictly necessary, because the members are const, but I think it's worth putting in anyway. Now, how do we initialise this? We could write a constructor, but when we're just using simple data types like this we can take advantage of member-wise initialisation. If you're uncomfortable with that, consider that if you're just using simple data structures, separate from the functions that operate on them, you don't need to worry so much about encapsulation — because we're not going to be changing these objects, we don't need to preserve invariants once they're set up. So we're often going to work with these public data structures: simple data, where the order of the fields can quite readily serve as a constructor.
So we can do it that way; that's fine. Sometimes it's not quite as simple as that, though. In real-world code we're looking at something a bit more messy — and even this is a very simplified example, based on code that I've definitely had to work with. I just wanted to illustrate that in order to get all the bits of data we need to construct our object, we have to do quite a bit of work beforehand. So we may be building up these sort of externalised variables that mirror the structure of the class we want to initialise — you can see all the variables at the top with basically the same names — and you might start struggling with this and thinking, well, if I'd just made it mutable to start with and got rid of the const, I could build these structures up as I go and it would be a lot simpler. True — but then we'd lose all the advantages of immutable data, particularly that local-reasoning property we talked about, plus some concurrency gains that we'll come to later. So we don't want to throw all that niceness away — but we don't have to deal with this mess either. This is where the builder pattern comes in. What we're really doing is creating a parallel, mutable version of our immutable type, and we can have a build method on it that produces our immutable object. Again, this might seem like a lot of extra work compared with using a single type, but remember: just now we were already declaring all those extra variables. What we're doing is effectively grouping them into a single data type that we can use collectively, and we gain a few advantages from doing that. One of them is that we can add this build method, which makes things a bit nicer, and we then control the way the object gets built — so here, for example, we're actually returning a shared pointer, so we can enforce reference semantics if that's what we want to do.
And in use it's a little less code, so if you're using it a lot more often than declaring it, you get a bit of a win there. But often, as I said, the set-up code may be more complex than this, and you may even have parts that are effectively asynchronous — you've got to call out to do some other work and get called back. Just to give an example of that, say this contact-book user is going to access a database or something: you ask it to retrieve some data, and it calls you back on a lambda when it's got it. So we can start our builder at the top and begin populating it, then effectively yield out to the async call, and when we get the callback we continue populating it — so it actually simplifies the build-up stages. Notice I've used the generalised lambda capture from C++14 here, so we can move that data structure into the lambda. That's one of the advantages of having a single structure instead of a whole collection of variables that we'd have to move in individually — we could still do it that way, but this makes it easier to work with. Really, that's all this is: a convenience. We're trading off pure immutability for confining the mutability to a single data type whose whole purpose is to set up another one — and it's that controlling of the scope of mutability that's key here. Remember, right back at the beginning, the Haskell wiki said that functional programming typically avoids mutable state — it doesn't say always avoids it — and in C++ we have to embrace it; it's all about controlling where it goes and being able to reason about it. That's what this gives us. It's difficult to really convey that in a couple of slides, but this pattern is something I used very extensively in my previous role, and it really made the whole process of setting these objects up, and reasoning about them, much easier — you'll have to trust me a bit on that. OK, that's the builder. The next part is a little bit of an aside — actually a continuation of the builder pattern, but we'll come back to that.
I call this a fast and safe ref-counted string — you'll see why in a moment. If you've been around C++ for a while, you may remember that std::string was originally designed to allow for internal reference counting, and many early implementations did have reference counting — GCC's did, up until quite recently. There are a couple of problems with that, and they stem from the fact that std::string has a mutable interface: you can change a std::string. Because of that, with the reference-counted buffer being shared across threads, you have to have some sort of synchronisation, and that implies a technique called copy-on-write, which we often abbreviate to COW — which makes for a nice visual. I'm sure you're all familiar with it, but let's very quickly recap, just to establish a baseline. You start off with a string which holds a pointer to some buffer. When you take a copy of that string, rather than copying the whole string, you just copy the pointer and increment a reference count — and so on for additional copies. That's all fine until you make your first mutating call. At that point, because the underlying buffer may now be shared across threads, you have to take a copy: you unshare the one you're about to mutate — that one now has a share count of one — and your other strings remain shared with a reduced reference count. That's fine in principle, and it does work, as long as you've either got a lock — which a lot of the early implementations used — or, later, an atomic reference count. But one of the problems with that is performance: certainly with the lock, and even with the atomic, because every time you make a mutable call you've got to check that reference count — and that can happen even in calls you didn't intend to be mutating. For example, just referencing a character by index: in the naive implementation, operator[] on a non-const string counts as a mutable call, even if you're not actually changing anything.
There are ways around that, but it was a significant enough problem that early implementations didn't perform very well, and some switched away from this design for that reason alone — I believe Visual C++ did, moving to a small-buffer-optimisation version instead. The other problem is to do with invalidating references. Because this copying happens non-deterministically behind the scenes — any operation could potentially trigger it — you may have references or iterators into a string that get invalidated by something else taking a copy, which is very non-intuitive. The original version of the standard actually allowed for this, even though it wasn't particularly desirable, but C++11 plugged that hole, so it's no longer possible to write a conforming std::string that uses copy-on-write — at least not one that has all of these benefits. That's why GCC eventually switched away as well. So the bottom line is that COW actually runs like a dog, and I don't think there are any standard-library implementations of std::string now that use it — which is a shame, because, done right, there are some nice properties to being able to share strings. Now, it turns out that all of those problems are purely down to the mutable interface std::string has. If you start with an immutable string design, you can get that sharing back. So we start with the same first steps as before: take copies, increment the reference count — and that's it; you're done. You never have to worry about the copy-on-write stage, because you can never change these strings: they're immutable. This is the common theme in functional programming: by introducing constraints on what you can do, it becomes much simpler to reason about things, and you eliminate whole classes of problems just by not allowing them to happen in the first place. It seems like a simple thing, but you get all the benefits from it.
Except that there was a reason std::string had a mutable interface: sometimes you do actually want to change strings, especially when you're building them up in the first place. So how do we do that if we have an immutable string? One approach is to use std::string for the mutable stage and then copy it into an immutable string, but it seems a little bit of a shame to introduce that copy, perhaps unnecessarily. Another thing you can do is what I called string refs in my (early) implementation — C++17 is going to introduce this as string_view. It's a non-owning string type: just a pointer to a first character, plus a size, and it can point into any string representation, including substrings of an existing string. Quite often that's the only reason we need to mutate strings — because we want to take substrings of them — so using string refs gets you a long way. Another advantage is that if you have both std::strings and these immutable strings in your code base, you can write code that's agnostic about which one you're using, just by taking a string ref or string_view — so that's quite nice. But other times you really do need to mutate the strings. Remember I said earlier, with the builder pattern, that it's all about confining the mutable parts of your code to a small, limited scope that you don't have to worry about when you're reasoning about the immutable versions — and that's exactly the solution we can use here. In this case we have a string builder. The whole purpose of this class is to mutate the underlying representation — this is not a functional data type; it's something optimised for C++ — so it can do things like pass out pointers to external libraries that are going to write into its buffer. We can mutate it in place, we can truncate it, and we can realloc it to make it bigger — whatever it takes to actually build up our string in the first place.
But we're never going to share these. There is actually an internal reference count, but there's no access to it; it's only there so that, when we're done, we can detach — or move — the underlying buffer into our immutable string. I say "detach" because I wrote my version pre-C++11, before we had move semantics; nowadays you'd just use std::move and rvalue references to achieve the same thing. Then we effectively transition our buffer from the mutable world, where we were building it up, to the purely immutable world of our string, which we can then use in the rest of the code base without having to worry about whether it will change under our feet. That's really nice — but later we may need to mutate it again, and we can do the same thing in reverse: we can move from our immutable string back to a string builder. Obviously, if the immutable string is now shared, this is the one extra time we'll need to consult that reference count to see whether it's shared. If it is, we take a copy, just as in the copy-on-write case; if not, we can literally move the buffer. So this has all of the desirable properties of the copy-on-write version but almost none of the downsides: this is the only extra place where we need to worry about the atomic (or locking), it's much easier to reason about, and there's no overhead compared to the copy-on-write version. That's why I say this is a fast and safe implementation. And again, this is not just theoretical: not only did I use it in my previous role, but at one point I decided to test it out. We had thousands — millions — of strings and string copies flying around in our system, so I put together a real-world test case where I could run a soak test and measure the performance, and then I changed our immutable string to use something more like std::string — a small-buffer-optimisation version — to see what the relative difference would be.
We found that our immutable version was significantly more performant. Unfortunately I don't have the specific figures to share with you, but it was significant enough that we decided to keep our implementation. So, just to visualise that over a lifetime — because I want to emphasise what we're actually doing here — we are very deliberately limiting the scope in which we can use mutability to something we can easily control, and then, by default, making the representation immutable and shareable in a way that we can easily reason about. Now, strings are really just a special case of arrays, so of course we can apply all the same principles to arrays. But arrays tend to have different usage characteristics: in particular they're often much bigger, so when we do have to take a copy it's much more expensive, and you'll often need to mutate them in place, changing individual elements — so they're not always suitable for the same patterns. But we did use arrays modelled in the same way, with array builders, and array views as well. Often, though, we also need other data structures, and there's a more general principle — or pattern — from the functional programming world called persistent data structures. I'm going to talk a little bit about these. Just to be clear, this is not persistence in the sense of databases or files; this is where you have a representation and you want to mutate a copy of it while keeping the old one around — the old copy is persisting, if you like; that's where the term comes from. To illustrate how this actually works, let's use the simplest possible example, which is a list — specifically a singly linked list. I'm sure you know how singly linked lists work, but let's very briefly recap in this context. Each node in the list has a pointer to the next node — the tail — and the first node is usually called the head.
It's important that those pointers only go one way. The reason is that if you want to add an element to the list, all you need to do is introduce a new node with a pointer to the old head. The important point is that the original list knows nothing about this new element, so you can keep both around — the original one persists, if you like. You'll usually have some wrapper type that holds the initial pointer, and maybe a size, for constant-time access. This works for other "immutable" operations too, such as removing items from the front of the list: again we get a smaller list with the original list unchanged, and we can pile up those mutations, each on a separate instance. All of these instances can be kept around, or discarded when no longer needed, so there's almost no memory overhead to using them. There are a few downsides, though. First, with a list, if you need to make any changes further down — towards the right in this illustration — then you're going to be taking many more copies. Secondly, because of all the pointer-hopping, there's a big performance hit from the lack of cache locality, and from the extra steps involved — and a lot of algorithms become linear time. So it's not that suitable for many C++ code bases, but it can still be worth having around — and it serves to illustrate what a persistent data structure is. Where they get more interesting is when you arrange them into a tree structure. Let's start with a very simple binary tree. Again, just to recap: each node has two pointers — we often call them the left and the right pointers — and these model associative data structures, usually with some sort of key (or maybe the whole value) that imposes a total ordering on the structure. So when you introduce a new element, you traverse down the tree to find the place where it needs to be inserted, and you usually have a couple of options as to how you're going to put it in, because you can associate it with the node to the left or the node to the right.
So let's imagine we're going to put it in there. Now, when we write this new node in, we actually need to change the parent node — and since we want to make this persistent, we can't change it in place, so we'll need to take a copy. And because we're rewriting that node, we need to change the pointer in its parent node, and so on, all the way up to the root. So making changes to a persistent tree has this ripple effect up one branch to the root — but none of the other branches need to know about it. Again, you effectively end up with two copies of your tree: the original, completely preserved — persisting — and the new one, mostly sharing the original structure. Now, this is a simple binary tree, which is not suited to most uses, because the branches can get very long: in the worst case you end up with what's essentially a linked list down the left-hand or right-hand side, and it's very easy to get that if you load it up with already-sorted data. So we would usually use some means of balancing the branches, and the most popular technique for doing that is the red-black tree. I'm not going to go into the implementation, but I will say that we did write a persistent version of a red-black tree — it's quite doable. Actually, I made a little mistake when preparing this slide: I was expecting to have to change the structure slightly to show you the red-black version, but it turns out it was already balanced — so this looks the same, but it preserves the constraints of the red-black tree. There's slightly more copying overhead with a red-black tree, because the rebalancing involves changing a few more nodes. What we found in our implementation was about a 10% extra cost in copying nodes, which is not that bad considering what we got from it: querying performance basically equivalent to std::set and std::map.
But this is now trivially suitable for concurrency, because there's only one place we actually need to worry about synchronising, and that's the root node. If we store one of these in a shared location, then all we have is a pointer to the root node that we need to swap for a new version, and we can do that with just an atomic pointer. We used our own version of a shared pointer, but std::shared_ptr can now be made atomic, so we could do it that way. We did introduce one other optimisation here, to mitigate the amount of copying involved, which may or may not be worth it in your case: we made each of the left and right pointers our version of a shared pointer as well, and as you traverse down the tree to find the place to put a new node, you keep track of the share count — the "sharedness". As soon as you hit a shared node, everything under that branch is considered shared — but all the unshared nodes we can actually mutate in place. We said that this is a persistent data structure and that it allows us to work immutably, but if that mutability is not visible outside the operation we're performing, then it's actually acceptable. When you first mutate the whole tree, the root node becomes unshared, so for any further changes on that branch we don't need to take additional copies. So you can load up one of these trees with a lot of data with no additional copying, but from that point on it's safe — very similar to what we did with the string builder, except this is automatic. So that's persistent red-black trees, which we used for sets and maps. There's actually an even better way to get associative persistent data structures, which we experimented with a bit — but I didn't write the implementation, so I'm not going to talk about it in depth; you'll have to look it up. It's the persistent hash trie — "tree" or "trie", depending on how you pronounce it; it's spelt t-r-i-e in this case. It's a way of encoding parts of the key for your structure into the nodes that you traverse down.
writing parts of the key for your structure into the nodes that you traverse down. With a hash trie, the key is just the hash code for your value, and what you do is write some four or five bits of the hash in each node, and when you get to the bottom there may be a linear search, much like a hash map or hash set. It actually approaches the efficiency of a regular hash map, but it's trivially made persistent, so it's a very useful data structure. So, I need to move on, because I'm going to get on to the final section, and this is about a very simple data structure. You can consider it a container, and it's going to be in C++17; it's already available in Boost, and that's optional. So, optional itself is very simple: you just use it to represent whether a value exists or not, without having to resort to using pointers unnecessarily, with all their problems, or sentinel values. We can be very explicit about what we're actually doing. But in practice, the usage looks very similar to using a pointer, so are we really gaining here? Because you can easily still dereference an optional without checking it, and then you have most of the same problems; you may trade an access violation for an exception, but it's still going to crash. It would be nice if we had something safer. Actually, there is one method on std::optional and boost::optional that will help, which is the value_or method. It just allows you to provide a default value: if there's no value, then use this default instead, and that's great, so it's useful for a number of use cases, similar to what other languages call the nil-coalescing operator — C# and Swift have these things. But I think it's not quite general enough, which is a shame, because it's actually quite easy to make it a bit more general: just write a helper function to do it. So here's one I've knocked up. It takes your optional, and instead of a value it takes a lambda. The advantage of this is that if your fallback is fairly
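The lazy fallback helper described here could look something like this. It's a sketch under an assumed name (`value_or_else`; the slide's own name isn't preserved in the captions): instead of an eagerly-evaluated default, it takes a callable that is only invoked when the optional is empty. (C++23 later added a similar `or_else` member to `std::optional`.)

```cpp
#include <optional>
#include <string>

// Like value_or, but the default is computed lazily: the fallback
// callable only runs if the optional is empty, so an expensive
// default costs nothing on the happy path.
template <typename T, typename F>
T value_or_else(const std::optional<T>& opt, F&& fallback) {
    return opt ? *opt : fallback();
}
```

Usage: `value_or_else(maybeName, [] { return lookUpDefaultName(); })` — the (hypothetical) expensive lookup never happens when `maybeName` holds a value.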
expensive to compute, you can do it lazily: you don't have to compute it if you do have the value, and that can be useful sometimes. In this example I'm not taking advantage of that — I'm still just returning a constant — but now that we've opened the door to lambdas, we can do much more expensive computations. So that's nice. But now that we've opened the door to lambdas, can't we do it the other way around, so we can call the lambda if we do have a value? Here's an example of doing that. I've called this function with; you could call it other names as well. In this case, if we do have a value, we just invoke the lambda — very simple. So in the usage down here, we will only print out that value if there's a value in our optional. Straightforward enough, but we're sort of missing an opportunity here, I think. We've got that if statement in there, and right back at the beginning we talked about the if statement: we wanted to make this behave more like an expression. So rather than making this a statement, where the only thing we can really do is have a side effect, it would be nice if we could make it an expression. It might look more like this: for the else part we have to return something, so we're going to return an optional, in this case of the same type. So now we invoke our lambda if we have the value and return its result, otherwise we return an empty optional, and now we can treat the whole thing like an expression at the bottom, which composes a lot more nicely. So it's a good start, but we can go further. For example, we don't have to return an optional of the same type: with a little bit more work we could do something like this, where we can use it with a type converter. Now, the reason for the bit at the top is just so that we always make sure we're returning an optional: if your lambda returns an optional, we'll use that result, but if it doesn't, we'll wrap it in an optional. That's an important step, as we'll see in a moment. And we infer the type of the lambda using decltype, which is quite
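A sketch of the `with` helper being described — my reconstruction, not the slide's exact code. The trait at the top is "the bit at the top" from the talk: if the lambda already returns an optional, we pass it through; otherwise we wrap its result in one, and the return type is inferred from the lambda with `decltype`.

```cpp
#include <optional>

// Ensure the result is always an optional: wrap plain values,
// pass optionals through unchanged.
template <typename T> struct as_optional { using type = std::optional<T>; };
template <typename T> struct as_optional<std::optional<T>> { using type = std::optional<T>; };

// Invoke f on the contained value if there is one; otherwise yield
// an empty optional. The whole call is an expression.
template <typename T, typename F>
auto with(const std::optional<T>& opt, F&& f)
        -> typename as_optional<decltype(f(*opt))>::type {
    if (opt)
        return f(*opt);
    return std::nullopt;
}
```

Because the result is itself an optional, the output of one `with` can feed straight into another — which is exactly the composition discussed next.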
nice as well. Now, at this point this is very composable, and we can chain calls to the with function together. So here we've got an inner call with our initial variable; we do something with that variable in the lambda, if we have one, and then we return that as an optional, which we then pass to the outer call to with, and if that still has a value we can pass it into the second lambda. So that's quite nice — this is an example of functional composition. But it's not particularly readable, and this is the problem with C++: functional composition is not well supported in C++. In a real functional language there are ways to effectively invert the flow here, so that we can chain or pipeline these function calls nicely, but we can't do that in the general case in C++; we have to be a little bit more specific. So if I change that helper function to the pipe operator, which is an infix operator — everything else unchanged — now that same code looks a lot more readable. The flow is a lot nicer and more obvious. Now when you're reading this, you can see it as linear code: we're going to do this with the variable, then we do that with the variable, and if we add more operations it scales quite nicely. We've got our type conversion in there as well, but notice in the second one we're doing something interesting: we're returning an optional ourselves. We said that we will wrap the result in an optional if it's not already an optional, but if we return an optional we'll use that. This is important, because this now acts as a filter. So we're mapping and we're filtering using this very simple helper function. And in fact, there's an even nicer property about this, and that's that, apart from the filter, there's nothing really about this that's specific to optionals. You can actually apply this same pattern to anything that refers to some contained type, whether that's an optional, some sort of container like a vector, or even async values. In fact, this is so general, so universal,
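The infix version might look like this — again my reconstruction, not the slide's code. It's the same helper as before, just spelled as `operator|` so chained calls read left to right; a lambda returning a plain value maps, and a lambda returning an optional can also filter by returning an empty one.

```cpp
#include <optional>

template <typename T> struct as_optional { using type = std::optional<T>; };
template <typename T> struct as_optional<std::optional<T>> { using type = std::optional<T>; };

// Same semantics as the `with` helper, as an infix pipe.
template <typename T, typename F>
auto operator|(const std::optional<T>& opt, F&& f)
        -> typename as_optional<decltype(f(*opt))>::type {
    if (opt)
        return f(*opt);
    return std::nullopt;
}

// Hypothetical pipeline: double the value (map), then keep it only
// if the result is a multiple of four (filter).
std::optional<int> double_if_mult_of_four(std::optional<int> v) {
    return v
        | [](int x) { return x * 2; }               // map: wrapped for us
        | [](int x) -> std::optional<int> {         // filter: we return the optional
              if (x % 4 == 0) return x;
              return std::nullopt;
          };
}
```

An empty input, or a value the filter rejects, flows through the rest of the chain as an empty optional without any of the later lambdas running.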
that it has a very special name, along with a couple of properties I've rather brushed under the carpet: this is actually the core of what we call the monad. That's why this pattern is so important, because it allows us to compose very useful pieces of code without having to worry about all the tricky bits — we can sort of keep them off to the side. And to give you an example of what that looks like in an even more general way, this is a slide from Eric Niebler's talk on his ranges library, which in certain usages can be considered monadic, and you can see he's used the same operator to compose these calls together. If you haven't seen his talk, by the way, I definitely recommend you go away and see it, because he explains how he's written all the rest of the code. Rather than inline lambdas he's got named functions here, but it's essentially the same thing: all of these calls are nicely composed together, and you can see the value of this. Notice that chunk call there in the middle — chunk(3). Just to show you what the output of this is: he's actually producing a calendar on standard out, and you can see there are three columns across. Think how complex it is to lay this out; if we just changed that number in the call to chunk, that would change the number of columns across. That's how composable, and how nicely isolated, these bits of functionality are, so that you can then compose them together. That's really the value in this. So now you sort of know what a monad is, it's worth thinking about what Kevlin Henney has to say about them, from his talk on functional programming, and if you're going to be around a bit later for a drink, then you might want to think about that. But to wrap up, here's what we talked about: we started off talking about expression-oriented programming, why that's really at the heart of what functional programming is, and how everything else hangs off that, but also how we can actually start doing that in the small in C++; then how that applies to local reasoning,
why that's important, and how that leads to striving for immutability. The flip side to that is controlling the parts of the code that do need to be mutable, so we don't throw away all the benefits of C++, and part of that is about builder types and how they allow us to control that more easily, including the string example. Then persistent data structures allow us to take that even further, and finally we talked a little bit about monads and how we can apply them to real code. So that's it, thanks for listening. I'll put a load of references on my site at this URL — levelofindirection.com, slash f-c-p-p, that's for functional C++ — there's a refs page there with various other talks and blog posts on functional programming and C++. So thank you very much.
Info
Channel: nastasiak2512
Views: 16,334
Rating: 4.8881989 out of 5
Keywords: functional programming, CPP, C++, modern C++
Id: YgcUuYCCV14
Length: 47min 45sec (2865 seconds)
Published: Mon Nov 28 2016