SF Scala: Reimagining Functional Type Classes

Captions
John: Thank you so much, Salar, and thank you for the invitation to speak here. So tonight Adam and I are going to be talking about refactoring functional type classes. We're unveiling the design for a new library called ZIO Prelude that's designed to give you a totally different take on what abstractions can look like in the context of functional programming. I asked Adam to join me for this talk because he's worked on this library extensively with me; in fact, right now he's the number one contributor, his contributions outstrip my own, and he's gotten there by polishing, by adding a ton of cool new data types and features and instances and tests. So I asked him to co-present with me tonight. Adam, you want to say a few words?

Adam: Yeah, absolutely. Thank you so much for the opportunity to work with you on this; it's been great, as has working with you on all sorts of other fun ZIO-related things. Thank you to everyone here who's joining us, especially those listening in from Europe, for whom I know it's late, and thank you to everyone who's either contributed to the library or given feedback or ideas through the process. We're really excited to share what we've been working on with you.

John: You ever get that feeling of déjà vu, like you've been in this situation before, maybe saying these same things before? That's happening to me right now. Anyway, let me tell you why I'm here giving this talk. About eight years or so ago I set out to create a successor to a library called BlueEyes that I created. This library was designed to give you a client and a server, in a type-safe fashion, from a single definition of a route, an HTTP web route. And I remember banging my head against the wall, because as I was developing the data types that were going to make this possible, I thought I saw something from the traditional, classic functor hierarchy. I thought I saw an applicative and a monad and other things like them, but I
couldn't create instances of these type classes for these data types, because it wasn't quite right. It was like they had some of the structure, but it wasn't formulated in exactly the same way. Fast forward through many years and experiments, my work on Scalaz 8, and more study of category theory, and I've come to the conclusion that there's something not quite right about the classic functional programming hierarchy. And there's always going to be something wrong with it, right? Because these programming languages that we use have restrictive type systems, and we're not able to express math the way math is expressed in books and papers. But potentially we could do better; potentially there's a different factoring that would open up a whole new world of design choices to us today, and I think we've found that in ZIO Prelude. Tonight I invite you to follow along for the tour. I hope that if you are new to the functor hierarchy, you're left today with a profound sense of how all these different abstractions are connected in a very simple way, a simple way that you can probably fit inside your head. Even if you've never seen Functor before and you're a little confused tonight, I hope you'll come out with a sense of what this structure is, and you'll feel like it's reachable, something you could reach with a few hours of study, or a day or two of study. I think that's actually true. On the other hand, if you've been doing functional programming for weeks or months, so you already know Functor and you use Monad on a regular basis, then I hope that this opens your eyes. I really do, because the type classes that you use act as a filter on how you see the world. You can't see structure that can't be encoded in these type classes until you get a peek at what the real world is like, and what I'm going to do tonight is hopefully give you
that peek. You'll be able to see new structure in the type classes you use, but also know when those type classes are holding you back, when you need to take a step back and introduce data types that maybe can't be described by those same type classes, but could be described by something like ZIO Prelude. So tonight we're going to talk a bit about the functor hierarchy, and if you haven't seen it before, you'll get a very brief tour of it; Adam's going to do that introduction and talk a bit about the significance of the functor hierarchy in commercial functional programming. Then I'm going to take a detour and talk about some of the drawbacks of the classic functor hierarchy, not just the way it's structured but also the way it's applied in Scala. Then we'll switch over to Adam, and he'll talk about algebra, easy algebra, so not the kind you may have run from in high school or college math, but very simple algebra that could potentially form the basis for a new way of thinking about functional programming abstractions. And finally, in the final segment of the talk, both Adam and I will be introducing ZIO Prelude, which is an algebraic, modular, and Scala-first take on what it means to do abstraction in functional Scala. Adam?

Adam: Great. So before we can talk about alternatives to the functor hierarchy, we need to understand a little bit of what the functor hierarchy is and why it's been so influential. To start with, Functor is what we call a type class in functional programming: it describes some similar functionality that exists across different data types. At the top of the hierarchy here we have this thing called Functor, and below that we have Applicative (sorry, I'm just trying to speak a little bit more loudly), so we have Functor, Applicative, and then Monad. Functor sits at the top of that hierarchy, and Functor describes some type that is a type constructor, so it's parameterized on other types. Think about an Option or a List
or a Future in the Scala standard library. This describes some type of value in a context. So think about having a String: if we just have a String, we just have a name, we just have "adam". If we have an Option of a String, then we may have a String, or we may not have a String at all. If we have a Future of a String, then at some point the Future may have a String, or we may not have a String at all right now. If we have a List, we may have no Strings, we may have one String, or we may have 100 Strings. Functor describes common functionality we have across all these types: being able to do this map operation, which essentially says that we're able to transform the value inside that type while keeping it within that type. Mapping over an Option means: I don't know if a value exists or not, but if it does, transform it with this function. Mapping over a List means: I don't know how many values there are in this list, if there are any, but whatever values there are, transform them with this function. On a Future it means: I may not have a value right now, but whenever I get a value, if I do, transform the result with this function. The next step down the functor hierarchy is this thing called Apply, and this adds this method ap, which has a little bit of a weird signature that we'll talk about later, but essentially it lets you put two of these values in a context together. So you can imagine: if I have a Future that's going to return someone's first name and another Future that's going to return their last name, I can take those two together and return a new single Future that has the first name and the last name. The next thing in the hierarchy is Applicative, which is the first thing that gives us the ability to put a value into this context, in addition to just working with values in the context. So you can imagine: if we have a Future, I can now put an existing value into a Future; if I have an Option, I can put a value into an Option.
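That map operation can be sketched in plain Scala with standard library types, no type class machinery yet; the values here are purely illustrative:

```scala
// map transforms the value(s) inside a context while staying in that context.
val maybeName: Option[String] = Some("adam")
val noName: Option[String]    = None

// Option: transform the value if it exists, otherwise stay None.
val upper     = maybeName.map(_.toUpperCase)
val stillNone = noName.map(_.toUpperCase)

// List: transform however many values there are, zero or many.
val lengths = List("one", "two", "three").map(_.length)
```

The type class version of Functor abstracts exactly this shared shape across Option, List, Future, and the rest.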
This ends up being important because it gives us the ability to describe parallel computations, so you'll often see Applicative used when you want some degree of parallelism, or you want to do multiple things at the same time. The final part of this hierarchy is Monad, which adds this flatMap operation. Whereas that ap operation we talked about before lets you take two different values in a context and just combine them together, this flatMap operation lets you take one value in a context and use that value to generate a new value in a context. So if we think about having a Future, that ap operation would be like zipping Futures, where you say: I've got these two different database queries; take the results of both of them and combine them together. Whereas flatMap is saying: I'm going to do one database query, and then once that comes back, I'm going to use the result of that to create a whole new database query. This essentially lets us embed a model of imperative programming into functional programming, and gives us the capability to do literally anything we can do in an imperative programming language in a purely functional context, just by embedding it into these flatMap computations. And so this has really been incredibly influential, starting off in Haskell and the way they encoded the IO monad there, but then coming over to Scala: in the Scalaz library, as well as Cats, a lot of what they've done has been based on this, and you've seen it go into a lot of other programming languages like Kotlin, Java, and F#, as well as a ton of different Scala talks you've probably seen about what a monad is. We've learned it's a monoid in the category of endofunctors, or maybe it's a burrito; I don't really know. But in addition to being influential this way, it's powered a lot of real solutions. You can think about libraries like http4s or Doobie: monads have really been the tool that libraries in Scala that have tried to use functional programming to solve real business problems have used.
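The zip-versus-flatMap distinction can be sketched with Option; the lookup functions below are hypothetical stand-ins for the database queries in the example:

```scala
// Hypothetical lookups standing in for database queries.
def findUserId(name: String): Option[Int] =
  if (name == "adam") Some(1) else None
def findEmail(id: Int): Option[String] =
  if (id == 1) Some("adam@example.com") else None

// Independent values: combine with zip (the applicative shape).
val firstAndLast = Some("Adam").zip(Some("Fraser"))

// Dependent values: the second lookup needs the result of the first
// (the monadic shape, flatMap).
val email = findUserId("adam").flatMap(findEmail)
```

With zip, both sides exist up front and could in principle run in parallel; with flatMap, the second computation cannot even be constructed until the first produces a value.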
These libraries have also inspired a ton of books, from the red book, Functional Programming in Scala, that probably a lot of us read and used to get started in this style of functional programming, to newer books in Scala, to books about F#, to books on doing functional programming in Python and Java. So it's really been incredibly influential, and it's let us get to where we are today. But as we're going to see, there are also some problems with it.

John: Yeah, that's right. While the functor hierarchy has proven itself in commercial functional programming, and does deliver value, and allows us to abstract over data types that all possess similar degrees of structure, it's not without its problems. What I mean by this is that even in its original Haskell form (which, by the way, was very messy originally and got cleaned up a little bit over time), it has some issues, and not just in its natural habitat, Haskell, but also when it's translated into other programming languages like Scala. Sometimes things don't work well in Scala; they don't fit quite right. I call those things Haskellisms. So in this section we're going to take a look at the things that don't make sense in the original hierarchy, or could be done in a little bit better fashion, and also the things that are merely drawbacks because of the way we ported it over into Scala. At the top of my list is the one I love to hate on, and that is the ap method in the Applicative type class. This is a really weird operation. If you squint, you can see that ap is basically combining two F's together; that's what it does. It's a binary operator: it takes two F's, so two Futures or two Options or two Lists or something, and combines them into one. That's the simple part; that's the easy-to-teach part. The hard-to-teach part is that on the left-hand side you have this function A => B that's stuck inside one F,
and on the other hand you have a value, and this binary operator is combining them in such a fashion that the function is applied to that value to yield the return value of the function. The first thing people ask is: why would you ever take a function and stick it inside a functor? Why would you ever stick a function inside a List, or a function inside an Option, or a function inside something else? Honestly, there's only one good reason for that, and that is if your functions are always curried. The reason why ap has the very weird signature it does is because in Haskell, 100% of all functions are curried. That means there are no multi-parameter functions in Haskell: if you have a function that takes three parameters, then it's actually a function that takes a value and returns a function that takes a value and returns a function that takes a value and returns the final value. And if you have curried functions everywhere, then it turns out that if you're programming with a data type that possesses the structure of an applicative functor, one of the most common things you want to do is take a pure function and apply it to n values, where these n values are all computed using this functor context. To do that you can use this ap operator. Actually, ap has a symbolic alias, which is an angle bracket and an angle bracket with a star inside them (<*>), and what you do is lift that pure function up into the functor context using another weird operator, and then you just apply one argument after the other, separating them with the ap operator. That works out beautifully in Haskell. It allows you to very painlessly use these applicative functors with ordinary functions, with a minimum of fuss and a minimum of pain. There are potentially better ways to do that, like applicative do and idiom brackets, but nonetheless this ap operator is fairly easy to use there. And that's the reason it has the structure it does: because in Haskell, all functions are curried.
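To make the contrast concrete, here is a hand-rolled ap for Option, used Haskell-style with a manually curried function, next to the zip-based version a Scala programmer would normally reach for; ap here is our own sketch, not an import from any library:

```scala
// A hand-rolled `ap` for Option, mirroring the Haskell signature:
// it combines an F[A => B] with an F[A].
def ap[A, B](ff: Option[A => B])(fa: Option[A]): Option[B] =
  for { f <- ff; a <- fa } yield f(a)

// The Haskell way: manually curry a two-argument function, lift it into
// the context, then feed it one argument at a time. Awkward in Scala.
val makeFullName: String => String => String = first => last => s"$first $last"
val applied = ap(ap(Option(makeFullName))(Some("Adam")))(Some("Fraser"))

// The operation Scala programmers actually want: zip, then map the tuple.
val zipped = Some("Adam").zip(Some("Fraser")).map { case (f, l) => s"$f $l" }
```

Both produce the same result, but only one of them requires explaining why a function is sitting inside an Option.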
In Scala, functions are not curried. In general we don't curry our functions; we have n-ary functions in Scala, and we also have functions from tuples to other tuples; those are two different things. But never would we want to use this operator. In fact, the operator we would want to use is often called zip, because it would take an F of an A and an F of a B and zip them together to give you an F of a tuple of A and B. That's a useful operator. That operator I can teach in 30 minutes; this ap operator that we import from Haskell takes me two hours to teach in the same fashion, because I have to explain where it comes from and why this function is sitting over in a functor context. Another example of the functor hierarchy going wrong: if you've tried to use Set as a functor inside Scala or Haskell, then you've run into trouble, and the trouble is that no one ever defines an instance of Functor for Set. There's a good reason for that: if you try to define an instance of Functor for Set, you run into a law violation, a violation of the functor composition law. The functor composition law says that if you have two functions f and g, where f goes from A to B and g goes from B to C, then you can map over a set first by f (turning all the A's into B's) and then by g (turning all the B's into C's), and that should be equivalent to taking that set and mapping it over the composition of f and g, that is, composing those two functions into one function that goes all the way from A to C. Now, it's easy to come up with violations of this law for Set, and the reason is that Set in Scala is implemented using equals and hashCode on the data type itself. It's a hash set, so it uses the equality and hash code defined on any object in Scala. Scala is an object-oriented programming language, and everything extends Any,
and Any has several methods, two of which are equals and hashCode, and the Scala hash set implementation uses those. Well, there's a problem. Let's say we have a data type A for which we have a sensible definition of equality and hash code, and a data type C for which we also have a sensible definition of equality and hash code. When we compose those functions together, we go all the way from A to C, which means we can map over the set using the composed function and get back something sensible. But what happens if we first map from A to B, and that gives us another set? Well, the contents of that set will be poorly defined, because if equality and hash code are not sensibly defined on the B's, we might end up with collisions: the set might delete information, or it might not delete information that it should be deleting, either of which would alter the structure of the set in such a way that we could actually observe the difference when we compose the functions and do it the other way. So for this reason Set is not a functor in Scala, or in Haskell, or in other programming languages that have adopted the functor hierarchy. Well, this comes as quite a bit of a surprise to mathematicians, because in math, Set actually does form a functor. So what's up? The problem is that the thing we call a functor in functional programming is actually a super-specialized type of functor in mathematics. Basically, it's like we've looked at a whole universe of possible functors, narrowed our scope onto a little tiny dot inside that universe, and said: this is the thing that is a functor. Well, that's wrong. That's not "a functor"; that's a very specific type of functor: an endofunctor in one very specific category in Scala. And just because something doesn't fit into that pattern doesn't mean it's not a functor, because math is gigantic, and the set of all possible functors out there is enormous.
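The composition-law violation can be demonstrated directly. The Coarse type below is a contrived stand-in for any intermediate type B whose equality is coarser than A's:

```scala
// A type whose equals/hashCode deliberately collapse all values together,
// standing in for any B with coarser equality than A.
final case class Coarse(n: Int) {
  override def equals(other: Any): Boolean = other.isInstanceOf[Coarse]
  override def hashCode: Int = 0
}

val f: Int => Coarse = Coarse(_)
val g: Coarse => Int = _.n

val s = Set(1, 2)
// Mapping in two steps: the intermediate Set[Coarse] collapses to one element,
// so information is lost before g ever runs.
val mappedSeparately = s.map(f).map(g)
// Mapping by the composed function never builds the intermediate set.
val mappedComposed   = s.map(f andThen g)
// The two results differ, violating the functor composition law.
```

This is exactly the observable difference described above: the intermediate set deletes elements that the composed mapping preserves.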
The amount of structure that you can describe with that concept in category theory is huge. You can't describe Set as a functor using our type classes, but that doesn't mean it's not a functor. And this problem is pervasive across the entire functor hierarchy; basically, it's a lie. If you want to be precise and give things precise terminology, you have to come to the conclusion that we should not be using terms like "functor" and so forth in FP libraries, because we're not being precise: we're actually referring to super-specialized versions of much more general-purpose abstractions. And you can see this same problem arising not just for Set but in lots of other cases. An example: there's a library called ZIO Config that lets you build up a description of your configuration information, the structure of your config. Using this definition of what your config looks like, it can generate readers that can read configuration information from the command line, or environment variables, or HOCON files, or JSON, XML, YAML, whatever you can imagine; you can read information in from that format, and you can also write to any format. You can also use that definition to produce documentation, Markdown documentation or HTML documentation with hyperlinks and examples; you can use it to produce reports that tell you where information came from; you can use it to do all these really amazing things. And if you look at this data type and the operations on it, you're going to find operations that are alarmingly similar to the ones you would find on Functor and Applicative, but there's no way to create an instance of these type classes for this data type in ZIO Config. That's not because ZIO Config's data type does not form a valid abstraction with the same level of structure, just tweaked in a slightly different way; it's because the type classes that we're using are unable to conceive of these
things as being instances, because the structure is just not quite right. In a similar way, there's a library in development called ZIO Codec that's designed to let you build binary and text codecs, and the parsers you can define using its little DSL again have a lot of structure similar to an applicative, but you can't define an instance. That's because these type classes we have are just too limited; they're too narrow, too fixed, too rigid; we can't describe the structure of things like this using them. If you've been doing functional programming for some period of time, you've probably heard that applicative structures can be introspected and monadic structures cannot be, so that's one reason to use a free applicative instead of a free monad. But this turns out to only be true if you're talking about the Monad type class. If you're talking about the more generalized notion of a monad in category theory, there actually are introspectable monads, and they're lawful, they satisfy all the laws; you can actually define such a thing, and that allows you to build parsers and other types of things that can be introspected...

John, I think we've lost you. Can everyone else hear me? Yeah, I can hear you. Okay, good, we'll just wait for John to return; it's probably his internet connection. Same problem here; he's frozen. Yeah, John's the only one affected. Sorry for all the glitches at tonight's event; it is 2020, so you should be used to it. We are. I'll get used to 2020 when it's 2021.
Well, we can always blame certain people. I'm kind of waiting for 2021 to not be about us having to wear two masks instead of one. Thank you all for your patience, and for the fact that we had to restart. John is trying to redial now. Whilst we're waiting: is anyone planning to attend the Haskell Love conference this weekend? I believe it's this weekend, if I'm not mistaken. Yeah, it is this weekend. One thing that's interesting is that on Friday it's in European time, and then on Saturday it's on East Coast time. Okay, something to keep in mind; for me, on Friday it starts at one in the morning, which is a bit early. That's good to know. In my time zone it starts today; I'm in Australia, and it starts this evening, so I'm hoping to take in some of it. Thanks for joining us from Australia. Thank you; I used to live in San Francisco, when I attended the meetups in person, but it's actually nice for me that I can attend remotely this time. Yeah, the nice thing about this is that we're getting a lot more of an international crowd joining us, and also international speakers: last meetup we had a speaker doing the talk from Singapore, and then of course Pierre, I don't know if he's online right now, he did his from South Korea a couple of months back. I know John's trying to dial back in; I think it's internet issues. Whilst we're waiting, are there any particular Scala topics everyone would like me to find a speaker to do a talk on, anything you feel we haven't had any meetup content on? Here comes John. Welcome back, John, are you there?

John: Thank you. I had to restart my router, this house's router. I apologize for the technical difficulties. So where did everyone lose me? You stopped at Config already. Do you want to share your slides again? Yes. Okay, so Config. All right, let me pick up here. So, ZIO Config: it
has this structure that is applicative-like but cannot be described by an Applicative, and in a similar fashion ZIO Codec has structure similar to an applicative, but it can't be described by these type classes. And if you're a functional programmer, you've probably heard that applicatives can be introspected and monads cannot. In fact, this is only true if we're talking about the FP libraries' definition of monads. For the category theory definition of monads, there are actually restricted types of monads that we can introspect, and using this type of introspectable monad you could build branching parsers that you can fully optimize in advance, and branching analytic pipelines that you have full transparency into and can do whole-pipeline optimization on, and other very interesting things. But we're not doing this; we're not building these libraries, and in my view it's partially because we functional programmers can only see the world through the type classes that we have, and that lack of imagination basically prevents us from building other incredibly useful and incredibly powerful data types, equipped with operations that would allow us to solve problems in new and very important, very practical, pragmatic ways. Another interesting case where this problem rears its head: sometimes you want to build a DSL using a data type that would almost be a functor, but you want to restrict the types. An example would be RPC, so you're doing remote procedure calls, or maybe an analytics engine, where you want to be able to say you can only map to a type that can be serialized. You can't map to just any type; you can only map, for example, this Spark RDD to another data type for which there exists some sort of codec that allows us to do long-term persistence. You want to do that in event sourcing, and in certain analytics problems, and in lots of other places you want to do that. You
can't do that, and the reason why is that Functor doesn't allow you to: Functor has no ability to constrain the target type that you're mapping to. As a result, you can't build little languages like that which form a valid functor, and as a result, by and large, no one is doing it. Another interesting pain is caused by the fact that when these Haskell abstractions were ported over into Scala, the choice was made to not use variance. Unfortunately, that means some problems that the Scala compiler can theoretically solve are not solved automatically for you; you have to solve them manually by calling functions. An example: let's say you have something that is a functor, like an Option or a Future, and it contains a Dog. If you're interacting with this data type through its Functor interface, and you try to take this Dog value and assign it to something that's typed as an Animal, so you're trying to widen that type (every Dog is an Animal, and you're trying to widen an F of Dog into an F of Animal), then Scala doesn't let you do that. Instead you have to call a method on these functor type classes called widen that allows you to do this. The interesting thing is that everything which is a covariant endofunctor in Scala can in theory support this auto-widening for free. You shouldn't have to do this; Scala should be able to take its subtyping judgment on Dog and Animal and lift it up to work in the functor context. And it actually can; Scala is powerful enough to do that. But you have to be committed to using declaration-site variance in Scala; you have to be committed to embracing Scala's subtyping, not just its functional parts. And to date, these functional programming libraries have eschewed subtyping and have made almost all of their type classes totally invariant in all of their type parameters, which causes massive pain for end users of these libraries, leading to very poor ergonomics.
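Declaration-site variance making that widening automatic can be sketched with a toy covariant container; Box, Dog, and Animal here are illustrative, not from any library:

```scala
trait Animal { def name: String }
final case class Dog(name: String) extends Animal

// The +A makes Box covariant, so the compiler's subtyping judgment on
// Dog <: Animal lifts into Box for free: Box[Dog] <: Box[Animal].
final case class Box[+A](value: A) {
  def map[B](f: A => B): Box[B] = Box(f(value))
}

val dogBox: Box[Dog]       = Box(Dog("Rex"))
val animalBox: Box[Animal] = dogBox // compiles: automatic widening, no `widen` call

// With an invariant Box[A], the line above would not compile, and a
// type-class-based design would need an explicit `widen` method instead.
```

The design choice at issue is exactly this `+A`: invariant type classes force the `widen` call, while declaration-site variance lets the compiler do it.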
Another problem is that libraries like Scalaz import wholesale Haskell's laziness, which turns out to be a mistake, while Cats goes the other way and imports Scala's strictness into the FP library, and that doesn't make any sense at all. Because when you're talking about the functor hierarchy, you're talking about a realm in which you're using DSLs to describe some type of effect, whether that's the effect of optionality, or non-determinism, or input/output, asynchronicity, concurrency, whatever it is; you're building little DSLs. All functors describe a DSL with a mapping operation, and you can build other operations on there as you explore the functor hierarchy; once you get to Applicative and beyond, you can build values that describe processes that go on forever. As a result, any type class that works on functors, anything that defines a binary operator on a functor, needs to be lazy in all of its parameters. That's the only way that makes sense, and that's a design decision not taken by Cats that should have been taken, while Scalaz goes the other way and makes everything lazy, even things where it doesn't necessarily make sense. So we have no library in the middle that recognizes the importance of laziness as applied to effects and strictness as applied to data structures. We also have an issue with lawless type classes and operations: we have a bunch of type classes, and operations defined on them, that have no laws, and there's no way of creating generic code that is known to be correct by using these type classes. This is a problem Haskell has as well as the FP libraries in Scala. And then also we have type class proliferation. Take a look at all the different type classes here, dozens and dozens of them, and these are just the ones in Cats; if you go to Scalaz, you're probably looking at two or three pages of this. And you can see that there's a lot of
similarities between a lot of these different type classes. We have CommutativeApplicative and ordinary Applicative, we have Contravariant, and we have Monoidal and InvariantMonoidal, and so forth. It's almost like someone took a bunch of adjectives, combined them all together in semi-random ways, generated this massive, sprawling hierarchy, and dumped a subset of it into the library. There's a lot of stuff here, and the reality is most people don't know, even most functional programmers don't know, what these things are here for, what problems they solve, why there are so many of them, or why there are some variations and not other variations. It's just randomness; it's ad hoc chaos. People who are not from an FP background take a look at this and say: this is not simple. We like to say functional programming is simple; you know, we're functional programmers because we're too dumb to not be functional programmers. But you look at this and there's no way you can conceive of it as being simple. There's a lot of complexity here, a lot that you have to study and learn, and even once you've studied and learned it, you're still probably not going to remember all of these different type classes, all the laws, the relationships between them, and what they're all for. That's just not going to happen; this is too big to store in your brain in a fashion that you can use on a daily basis. And then we come to another of my pet peeves, which is monad transformers in Scala. I am not a fan of monad transformers in Scala. You can go to Haskell, in which monad transformers are a thousand times easier to use, and you can find people who don't like monad transformers even in Haskell. But if you go to Scala and try to use monad transformers, well, let's just say everyone who went down that path eventually ends up abandoning it,
because it's just so incredibly painful. The pains arise from multiple reasons. One of them is that type inference on these types of structures does not work very well. Performance is really awful, really terrible. And a lot of these data types don't use declaration-site variance, which makes them even harder to use. I remember teaching the State monad one time, and look, I know the State monad: I can write an indexed State monad for you that's stack-safe with all these other nice properties, and I can do that in about 20 minutes. But I was using one of the State monads from an existing library, and I couldn't remember the names of the methods. Was it put or putS? Was it get or gets? What are these things called, where are they located, why isn't this inferring, why do I have to put the full type signature here? It was such a painful experience: I was using that State monad, and I totally understand this thing. Imagine working with some monstrosity like this on a day-to-day basis. It's just not fun, and there's no real reason for it: this is not giving you an advantage over imperative programming; this is not a strength of the Scala programming language or a strength of FP in Scala, in my opinion. Well, in response to all of that, I hope you're asking the question: is there another way we could factor the type class hierarchy such that we could potentially solve some of these problems? Is there another way we could look at things, where we could gain more expressive power but also radically simplify things to a level where it would be easy to teach? Could we actually embrace the features of Scala and get this to a point where it could be fun, joyful to use, something you get excited about using every day?

Adam: So we've seen that we've got some significant problems here; how do we start to take a step forward in addressing them? Well, what we really tried to do as we were
in the process of developing zio prelude is step back and think about what the most fundamental way of thinking about things is and to us that was really an algebra and as we think about it an algebra is a set of three things it's a set of values operations for combining those values and laws that describe how those operations behave and the first great thing about this is that it's something that everyone already knows so a really simple example is you can think about addition addition forms an algebra so our set there is the integers our operation is just adding them and there are a bunch of different laws but two really straightforward ones we could say are it's associative so doing x plus y plus z grouped on the left is the same as doing x plus y plus z grouped on the right and it's commutative so 1 plus 2 is the same as 2 plus 1 and the great thing about these algebras is you already use them all the time if you're programming in scala every time you define a new type every time you do case class x or trait y that's some set of elements so every time you're doing that you're doing the first part of that and the odds are every time you're defining a data type you're doing it so you can write some methods on it and very commonly you're going to have something that takes two of those and returns another one it doesn't have to be it could be that you have a unary operation like negation for integers it could be that you have an operator that takes three elements instead of two but combining two elements into one turns out to be really common and if you can do that in a way that composes you can end up combining three elements or four elements or an unlimited number of elements and so there are lots of these different algebras we're just going to go through a couple of examples here so one would be something that's an associative operation so we just saw that addition was one example of something that's associative but there are lots of other associative operations that we could think
about taking the maximum of values is associative taking the minimum of values is associative another type of algebra we could think about is operations that have an identity element so for addition that identity element would be zero for multiplication it would be one for combining sets it could be the empty set and again there's just a law here that's pretty simple that says that using this operation to combine any value with the identity element gives you back that value another simple one would be that commutativity property we talked about that doing x plus y is the same as y plus x and so these are examples of algebras but one of the really nice things about thinking about them this way is that there are other algebras that we can define we could think about an algebra for operations that are idempotent where doing it twice is the same as doing it once and that might be valuable for us in some context and we can also combine these together in different ways so we can say that we need an operation that both has an identity element and that is associative and this in a way is much more compositional than the category-theory-led approach we've had before which in a sense requires us to privilege certain combinations of these properties we say okay this monoid thing is important and it has an associative operation and an identity element but then we suddenly need to remember okay we've got a monoid we've got a group we've got an abelian group what extends which and exactly which properties do they have versus just saying we have this set of algebras and we can combine them in different ways to have things that have the properties we need for what we're doing and the other great thing about these is that they're just everywhere there are a ton of them that exist on any type you could think of in the scala standard library addition string concatenation more complex types like you could think about merging streams as being an operation as well as all sorts
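the algebras just described can be sketched in a few lines of plain scala — a simplified illustration with made-up trait and object names, not the actual zio prelude definitions:

```scala
// minimal sketch of composable algebras: each trait names one property
trait Associative[A] {
  def combine(l: A, r: A): A // law: combine(combine(x, y), z) == combine(x, combine(y, z))
}

trait Identity[A] extends Associative[A] {
  def identity: A // law: combine(identity, x) == x == combine(x, identity)
}

trait Commutative[A] extends Associative[A] // law: combine(x, y) == combine(y, x)

object AlgebraSketch {
  // integer addition satisfies all three properties at once
  val intAddition: Identity[Int] with Commutative[Int] =
    new Identity[Int] with Commutative[Int] {
      def combine(l: Int, r: Int): Int = l + r
      val identity: Int = 0
    }

  // generic code demands exactly the properties it needs, here
  // associativity plus an identity element
  def combineAll[A](as: List[A])(alg: Identity[A]): A =
    as.foldLeft(alg.identity)(alg.combine)
}
```

the point is that a caller can require any intersection of properties rather than memorizing a fixed hierarchy of named combinations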
of types that are specific to your own business domain so you could think about if you have two csv files you could combine them in a way that is probably associative that adding one and then a second or a third gives the same result however you group them you could have an empty csv file as an identity element there it seems like that's not a commutative operation though because which order you combine them in determines which order the records are in and that's the kind of thinking that even if you didn't have it formalized in this way you probably do yourself when implementing these methods thinking about what properties they should have and what's valid or not valid to do with them so this gives us really the basis then for what we're going to talk about now the actual code of how we translate this to zio prelude so zio prelude what is zio prelude well it's a small little library that's trying to bring common useful algebraic abstractions and data types to scala developers and really it's an alternative to scalaz and cats that is based on very radical ideas very very radical ideas and they really embrace the modularity and the subtyping in scala to offer two things number one new levels of power you can talk about the structure of data types that you cannot talk about using the classic functor hierarchy and number two ergonomics now this is designed to fit well into the scala programming language to embrace subtyping not just the functional side of scala but also the object-oriented side as well basically to take the zio design philosophy and push it into the space of functional programming abstractions the guiding design principles behind here are first off radical so basically ignore all dogma it doesn't matter if 100 people say to do it one way well that's not a reason for doing it number two orthogonality so you probably got this sense from looking at all these different type classes and me describing duplication that maybe these type classes are not
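the csv example above can be made concrete — a hypothetical Csv type invented for illustration, whose combine is associative with an empty identity but deliberately not commutative:

```scala
// hypothetical domain type: combining csv files appends their records
final case class Csv(rows: Vector[String]) {
  // appending preserves record order: associative but not commutative
  def ++(that: Csv): Csv = Csv(rows ++ that.rows)
}

object Csv {
  // the empty file is the identity element for ++
  val empty: Csv = Csv(Vector.empty)
}
```

so a Csv has an associative operation and an identity element but no commutativity, exactly the kind of property-by-property reasoning the talk describes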
orthogonal maybe there's some overlap between them and the goal for zio prelude is to have no overlap type classes should do one thing and do it well to the extent that that doesn't interfere with ergonomics next lawful to have type classes that have laws that can be checked then be pragmatic and that means if you have data types that don't satisfy laws but that are still useful to use in most cases with these instances then provide them for example double and float these don't satisfy almost any useful laws however it turns out that we know about the limitations of these data types and we want to be able to use them in generic code so we go ahead and provide those instances for you scala first which means embracing subtyping and that means embracing subtyping even if there are limitations or drawbacks today this is a pet peeve of mine rather than sit around on the sidelines complaining that scala has bugs here or there what we as the functional programming community should be doing is rolling up our sleeves finding those bugs reporting them and fixing them if necessary in order to move the whole scala community forward i think that's the best philosophy for a long time functional programmers avoided lots of features in scala and i don't think that makes a better scala it doesn't make a better future for everyone we need to be embracing all of scala and finding and fixing bugs if necessary ourselves in order to make forward progress minimal so coming up with a set of type classes that allows us to describe a huge amount of variation but in a small package and not trying to do esoteric things that are one percent cases so if a use case is required by 90 percent of people doing functional scala then yes include it if it's required by two percent definitely don't include it accessible which means that to the extent possible things should have good names good documentation and infer well and fit john we've lost you again um i need to find
a way of coming up with some background waiting music for google meet thanks for your patience everyone i'm sure he'll be back shortly okay who's going to give me anxiety attacks here that jazz played the right note already am i right come on who is that out of interest oh it's jeremy i'm very much enjoying it jeremy by the way i'm still waiting for you to do a talk at scala someday someday i'll do it how does september sound to you jeremy okay we're back with john where did i lose you here here yep okay which one opinionated opinionated okay it's okay to be opinionated so zio prelude has three areas of focus almost everything in zio prelude falls into one of these three buckets first off data structures and patterns for traversing them patterns of composition for types so this is taking two a's and combining them into another a and then patterns of composition for type constructors so taking an f of an a and another f of an a or an f of an a and an f of a b and combining them together into another f so that's like taking two futures and combining them into one so one of the major troubles with the functor hierarchy why it's not very expressive is basically everything after functor so functor is fine by itself but then everything else that builds on functor starts to go down a specific path and this is a very anti-modular path that results in the lack of expressivity that i've previously talked about it's like the functor hierarchy is taking the notion of a scala function actually it's an arrow in category theory but think of it as a function it's taking the scala function and it's conflating that exact structure of that function into all of the operators that compose these things together so you have these two things and they're all tangled together inside the functor hierarchy now let me show you that in the case of monad so monad looks like this as adam introduced it has this map and ap and pure and flatMap and i
want you to look and see the number of places where you can see a scala function you can see it in map we really can't get away from that for a monad we need something like that we need that to appear in some concept of this endofunctor in scala then we have ap which also has another scala function inside it the a to b and then inside pure it's a little less obvious but if you decompose pure into map and a simpler value then you can see that there's another scala function embedded inside there and then finally there's another scala function embedded in flatMap so this definition of monad is taking what a scala function is this a to b thing in category theory it doesn't have to be a function a to b it could be something else anyway it's taking this thing and it's embedding it into every single operation it's conflating two things and what zio prelude is trying to do is detangle them and so to detangle these things one of the things we could do is explore an alternate encoding of monad and in this encoding of monad we have a map operation that mentions this scala function but that's it we have a zip operation that zips two f's together and gives us back a tuple but has no scala function in it we have an any value which is just a simple value it's an f of any that satisfies certain properties and when composed with the other ones you can reconstruct pure or point or return from that and map and then we have flatten which takes an f of an f of an a and flattens it out to an f of an a so this basis here is orthogonal there's no conflation of concerns here we've detangled everything and this turns out to be key to unlocking the design of zio prelude so detangling the functions the arrows in these categories from the composition of these values that have this structure is the critical breakthrough on which zio prelude is founded this and taking advantage of scala's support for subtyping that allows us to take
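the detangled encoding just described can be sketched like this — a simplified stand-in (the trait name ZipMonad is invented for illustration, not the zio prelude api), with Option as a concrete instance to show that pure and flatMap fall out of the orthogonal pieces:

```scala
// alternate monad encoding: map is the ONLY operation that mentions
// a scala function; zip, any, and flatten are function-free structure
trait ZipMonad[F[_]] {
  def map[A, B](fa: F[A])(f: A => B): F[B]
  def zip[A, B](fa: F[A], fb: F[B]): F[(A, B)]
  def any: F[Any]
  def flatten[A](ffa: F[F[A]]): F[A]

  // the classic operations are recoverable from the orthogonal pieces
  final def pure[A](a: A): F[A] = map(any)(_ => a)
  final def flatMap[A, B](fa: F[A])(f: A => F[B]): F[B] = flatten(map(fa)(f))
}

object OptionZipMonad extends ZipMonad[Option] {
  def map[A, B](fa: Option[A])(f: A => B): Option[B] = fa.map(f)
  def zip[A, B](fa: Option[A], fb: Option[B]): Option[(A, B)] = fa.zip(fb)
  val any: Option[Any] = Some(())
  def flatten[A](ffa: Option[Option[A]]): Option[A] = ffa.flatten
}
```

nothing is lost relative to the classic encoding but the function-shaped and structure-shaped pieces can now vary independently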
modular units and combine them together to get something that has all of their pieces so it turns out that if you pull the functions apart from the composition then you can decompose huge numbers of the type classes that you saw on the preceding screen and many others in terms of way simpler building blocks and i'm going to give you a small taste of that over the coming slides i don't expect anyone to follow along with this we haven't introduced these building blocks these pieces but just take a look at the sheer number of type classes that you can break down into smaller orthogonal parts so we can describe semigroup commutative semigroup monoid commutative monoid group abelian group with a small number of type classes and many others besides in addition we can define functor contravariant invariant alternative and invariant alt with just a small number of different type classes you're going to see them again on the next slide all composed together in different ways using scala's intersection types we can also define using the same small number of type classes invariant semigroupal semigroupal contravariant semigroupal semigroupK monoidK contravariant monoidal invariant monoidal all with these same type classes and not only that but we can also describe flatMap monad divide divisible decidable apply applicative invariant applicative and lots of type classes that no one has ever bothered to create they may not even know they exist but they do exist or at least in theory things exist that have these types of structures we can describe them merely by composing together these simple orthogonal building blocks this is a whole new level of composability this is composability at the level of type classes simple building blocks which can be used to solve a huge number of problems so what do these look like well adam thanks john so as i said before one of the building blocks of zio prelude is ways of combining types so some of these you're going to find
really familiar because they're the exact same algebras we talked about earlier so we have this type class associative that just describes an associative operation and just like we talked about the law for that is that we get to move those parentheses around we can combine the first thing and the second and then combine that with the third or we can combine the first with the result of combining the second and the third and that's it that's just one algebra that we can have here then we can build on that and we can say well we could have an associative operation that also has an identity element with respect to that associative operation so now when we use that combine operation with any value and the identity element we get back that same value whether we do it on the left or the right we can also have an operation that in addition to being associative is commutative where doing x together with y is the same as doing y together with x and we can pick which combination of those we require for whatever we're doing so we could require an operation that's commutative associative and has an identity element or we could require just the one that meets our needs and that's how essentially with these three type classes we're able to replicate all those different combinations that you saw above so then we go from there to the operations that allow us to describe transforming values within some type constructor and so this is the function or the arrow part of what john was talking about and so the first one we have is this one covariant that defines in a way what would be the traditional understanding of functor here where the function that we're going to apply is just something that takes an a value and transforms it to something that is any b value at all completely unconstrained so that's the operation we could have on list or future or option that we're familiar with but we can also and one of the great things with this is it's got the variance
attached to it so unlike the examples we saw before now we're able to lift that natural subtyping relationship from values to this type itself where just the fact that we have this function f a to b means that we can always widen the type because we could always just map the identity function at a wider type but when we didn't have the variance annotation here we had to manually use that widen operation to do that whereas now that will happen naturally for us we can also define a contravariant type class here that has essentially the same type signature except you can see that the arrow goes the other way instead of a function a to b being lifted to a function f of a to f of b it's now a function b to a being lifted to a function f of a to f of b and again through the use of variance here this now automatically allows narrowing the type and what gets more interesting is we can also start to add new ways of mapping here so we have this invariant type class that allows mapping but this time with a case class that essentially represents an equivalence that has a function from a to b and a function from b to a and now this says we can lift an equivalence between a and b to go from an f of a to an f of b and you can also imagine here having other ways of transforming values in a context other ways of doing that function or arrow part that are constrained in some way that only allow that operation to be done when there is some type class that exists for b or only a subset of b values you can map to supporting some of those use cases for constrained dsls we talked about before and all of this takes place within the set of type classes that describe that arrow that function composition separate from describing how to combine individual values and then we can put those together for our types so then the next set of type classes essentially describe those same operations associative operations operations with identity elements commutative operations but at the
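the three mapping type classes just described can be sketched as follows — simplified signatures that show the variance annotations and the direction of each arrow, not the exact zio prelude definitions:

```scala
// covariant: lift A => B to F[A] => F[B], with F declared covariant
trait Covariant[F[+_]] {
  def map[A, B](f: A => B): F[A] => F[B]
}

// contravariant: the arrow is reversed, B => A lifts to F[A] => F[B]
trait Contravariant[F[-_]] {
  def contramap[A, B](f: B => A): F[A] => F[B]
}

// invariant: mapping requires a function in BOTH directions
final case class Equivalence[A, B](to: A => B, from: B => A)
trait Invariant[F[_]] {
  def invmap[A, B](e: Equivalence[A, B]): F[A] => F[B]
}

object VarianceSketch {
  val listCovariant: Covariant[List] = new Covariant[List] {
    def map[A, B](f: A => B): List[A] => List[B] = _.map(f)
  }

  // predicates consume their input, so they map contravariantly
  type Predicate[-A] = A => Boolean
  val predicateContravariant: Contravariant[Predicate] =
    new Contravariant[Predicate] {
      def contramap[A, B](f: B => A): Predicate[A] => Predicate[B] =
        p => b => p(f(b))
    }
}
```

because F itself carries a variance annotation the widening and narrowing discussed above come for free from scala's subtyping rather than from explicit widen or narrow calls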
level of type constructors instead of at the level of values themselves and for each of these i'll illustrate an example from zio for those who are familiar with it so you can see that now we're adding some f's but there's nothing exotic here these are all operations that you already know about so the first ones are these associative operations and for all of these operations we're going to see that there's what we call a both and an either variant here because instead of just having two f of a's and being able to combine them we're doing this at the level of having an f of a and an f of b and so without knowing anything else about what those a's and b's are conceptually they're really two values and there are two ways we could combine them we could either say i want both of them or i want either one or the other so the first operation we have here is associativeBoth which is going to take two values f of a and f of b and just give us an f of the tuple of a and b and so that's essentially the zip operation so if you think about it from zio that's taking two effects and running one then running the other and combining their values and then from that we can get to other operations like zipWith that we know about by combining this associativeBoth with something like covariant that we talked about before but by combining those individually and building them up we give ourselves that modularity and the ability to say maybe it doesn't have the full power of being covariant maybe it only works for an invariant type maybe it has other constraints on it the next type class we have here is the associativeEither instance so this is an associative operation that says give me either a or b and so a simple example of that from zio you could think about is this orElseEither operator which says run this first effect and if it succeeds give me its value but if it fails then run this other effect and we can see how this is associative where if we have zio1 or
else either zio2 or else either zio3 whatever way we arrange the parentheses we're going to keep running them from left to right until we get the first one that's a success from there we have one more associative instance that's interesting to us and this one requires a little bit of squinting but this one's called associativeFlatten and this says that if we have nested values that operation of flattening them is itself associative so a simple way to think about this would be if you have a list of lists of lists so you have three layers of nested lists whether you take those two outer layers and you flatten them first and then you flatten that with the innermost layer or you take the two inner layers and you flatten those together and then you flatten that with the outermost layer either way you're going to get the same elements so that operation of flattening is itself associative and an example of that would be flatten on zio as well as flatten on a lot of other types like lists so then we start to add more structure to this and we can say that those operations themselves have identity elements so we said associativeBoth before is going to take an f of a value and an f of b value and combine them into a tuple we're now adding this additional any value here that essentially is an identity element for this that is a value that is going to leave the other value unchanged that is going to not add any new information so here again we said that zip was the operation that combines them and this any value in this case is just the unit value for zio that's just going to return a successful effect that has the unit value in it and so we can always take any value and combine it with the unit value to get back a tuple from which we can vacuously discard the unit value to get back the original value and that's always going to give us that value unchanged we can also look at there being an identity element for that either operator so here we said that the way of combining
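the three associative structures can be shown concretely on Option — the method names mirror the concepts described above but this is a simplified sketch, not the zio prelude encoding:

```scala
object AssociativeSketch {
  // associativeBoth on Option: zip, associative up to tuple nesting
  def both[A, B](fa: Option[A], fb: Option[B]): Option[(A, B)] =
    fa.zip(fb)

  // associativeEither on Option: the first defined value wins, which
  // is associative however the calls are parenthesized
  def either[A, B](fa: Option[A], fb: Option[B]): Option[Either[A, B]] =
    fa.map(a => Left(a): Either[A, B]).orElse(fb.map(b => Right(b)))

  // associativeFlatten on Option: merging a nested layer
  def flatten[A](ffa: Option[Option[A]]): Option[A] =
    ffa.flatten
}
```

each of these is the same shape as the zio operators discussed above (zip, orElseEither, flatten) just on a type simple enough to run anywhere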
them was this or else either that essentially takes the first successful value and if it's if the left value is a failure tries to run the right value and here the identity value would be this uh what we call a halt with an empty cause which essentially says this is an effect that fails with an error that contains no information so we can see that combining a failed effect with a successful effect is always going to give you back the successful effect and then we need this cause that empty thing because zeo does a little bit of additional information preservation of preserving the errors on both causes if they both fail so we have this cause dot empty thing that essentially says this is a failure that contains no information so again we're able to combine these two things and we always get back the other one if one of them is this zeo.halt that's an identity value for that operation we can also think about having identity operation for that flattener and again it's the same thing of typically it's that z-o-dot unit and we can think about the law here being that if we use zeo.unit and we use map to put the value into that we can you always use that to essentially add a layer of f so if i've got an fa i can always use those two to make it an ffa or a fffa and so i can always vacuously add a layer through that and then i can flatten it back and i get the same thing so you think about like if i have a list and i just put another list constructor around it and then i flatten the whole thing i just get back my original list we can also talk about commutative operations so particularly on these higher kind of types these commutative operations often end up describing operations that have some level of parallelism and concurrency so before we said that that associative both that was going to combine an a and a b into a tuple that was zip that's not a commutative operation because that depends on the order of saying effect 1 zip effect 2 means run the first effect then when 
it's done run the second effect so that's going to be different than run the second effect then run the first effect because we might say you know print adam then print fraser that's not the same as print fraser then print adam but this zipPar operator is going to run two effects in parallel and return their results and so whether we run a and b in parallel or b and a in parallel it's the same thing we can also think about a commutative interpretation of that either operator so in this case it would be the race operator for zio in particular this raceEither that says run these two effects and return the first one that completes and give me that in an either telling me whether it was the left or the right side that was the first one to complete and we can see that when we race two things whether we race a and b or race b and a both times we're starting them at the same time and we're taking whichever one is the first one to complete so there's no difference between those two that's a commutative operation and then finally so we had ways of combining values we had ways of combining type constructors and then we have ways of traversing data structures and we have this traversable type class that lets you describe having some collection type and applying a function that's going to return a result in an effect and getting back a new effect that's going to contain that data and while there are type signatures and constraints on it when you actually use it it's very straightforward it looks like very simple scala you have this type of syntax you can write where you just say for each request handle the request and that handle request may contain a zio effect it may contain some other type of effect it's just going to apply that to each element of that structure and wrap that all up in one effect and then from there we can further specialize to this non-empty traversable which gives us this forEach1 operator that has a very similar signature but both has
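the traversal pattern just described can be sketched with Option standing in for the effect type so the example stays self-contained — this is a simplified illustration of the idea, not the zio prelude forEach signature:

```scala
object ForEachSketch {
  // apply an effectful function to every element and collect the
  // results inside a single effect; a None anywhere fails the whole
  // traversal, mirroring how a failed effect fails the batch
  def forEach[A, B](as: List[A])(f: A => Option[B]): Option[List[B]] =
    as.foldRight(Option(List.empty[B])) { (a, acc) =>
      for { b <- f(a); bs <- acc } yield b :: bs
    }
}
```

with a real effect type the call site reads the same way: for each request handle the request and get back one effect containing all the results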
fewer constraints on the types we have to have as well as provides us access to some additional operations so if we know the structure is not empty we can reduce it without having to provide a zero element versus if it could be empty we would need to fold over it and provide a zero element and we've got another set of type classes that help us out with various things we've got debug which is like show you might know from some other functional programming libraries that says given an a here's a way to represent this in a way that could be rendered equal that gives you a way to compare two values for equality hash that gives you a way to return a hash code for a value ord that gives you a way to order two values and the great thing about all of these is they embrace that declaration-site variance so all of these fundamentally are functions that take in one or two a values and give you back something else whether it's a boolean or an integer hash code or an ordering and so all of them are fundamentally contravariant since they consume a values and by embracing that aspect of scala and actually putting that variance there now a lot of things work much more naturally where if we have a way of ordering any animal then that means i definitely have a way of ordering dogs i can order dogs i can order fish i can order cats i can order any animal versus in other encodings where this was invariant we'd get errors when we tried to apply this to types that weren't the specific type we were looking for we've also got a set of data types that we've included here and the main difference in the approach here is we've really tried to embrace the scala standard library and the types there so we have a non-empty list type and it actually has a ton of methods implemented on itself and it's automatically convertible to a scala list from the standard library so if there's any method that's not defined on it that's available on a list you'll get it there and if you're
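the ordering example can be shown directly — a simplified contravariant Ord with hypothetical Animal and Dog types, illustrating why the variance annotation makes the subtyping work with no explicit narrowing:

```scala
sealed trait Animal { def weight: Int }
final case class Dog(weight: Int) extends Animal
final case class Cat(weight: Int) extends Animal

// Ord only consumes its arguments, so it is declared contravariant
trait Ord[-A] {
  def compare(l: A, r: A): Int
}

object OrdSketch {
  val animalOrd: Ord[Animal] = new Ord[Animal] {
    def compare(l: Animal, r: Animal): Int = l.weight.compare(r.weight)
  }

  // because Ord is contravariant this assignment just compiles:
  // a way to order any animal is automatically a way to order dogs
  val dogOrd: Ord[Dog] = animalOrd
}
```

with an invariant encoding the last line would be a type error and you would need an explicit narrow or contramap on the identity function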
trying to pass it to some method that's expecting a collection that will work for you so we're trying to be very ergonomic there we've got a validation data type so this will allow you to accumulate multiple errors unlike some other validation types that are parameterized on the data type that's used to collect the errors which can create issues when you just want to go use it because typically you just want to accumulate your errors you don't necessarily care that much about exactly what data type you're using and it can create inference issues we bake all that in for you so that's just ready to go we've got another one called zset that's essentially a more polymorphic set that lets you parameterize on some notion of how many values are in the set so it could be a set where the measure is just booleans that's a traditional set it could be a multi-set that captures how many of an element are in the set it could be some type of fuzzy set so that's something else interesting there and then we've also got a fun solution for monad transformers so in case you thought that zio was great but just didn't have enough type parameters we've got this data type called zpure that represents a computation that takes in some initial state and outputs some state while also potentially taking an environment and producing either a value or failing with an error and this is essentially the solution to monad transformers and it's really taking a lot of the same ideas from zio and applying them to this problem where one of the core ideas from zio was this concept of effect rotation that instead of having layer on layer on layer of monad transformers to give you each capability and having all the issues associated with that you have one data type that bakes in all these capabilities and then you can use the ones you need for a specific problem so if all you want is state that's fine then your in type and your out type for state are going to be the same it's going to be that
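the error-accumulating validation idea described above can be sketched minimally — invented names and a fixed List error collection, not the actual zio prelude Validation:

```scala
// minimal validation sketch: the error collection type is baked in
// rather than left as a type parameter the caller has to choose
sealed trait Validation[+E, +A] {
  def zipWith[E1 >: E, B, C](that: Validation[E1, B])(f: (A, B) => C): Validation[E1, C] =
    (this, that) match {
      case (Success(a), Success(b))   => Success(f(a, b))
      case (Failure(e1), Failure(e2)) => Failure(e1 ++ e2) // accumulate BOTH error lists
      case (Failure(e), _)            => Failure(e)
      case (_, Failure(e))            => Failure(e)
    }
}
final case class Success[+A](a: A) extends Validation[Nothing, A]
final case class Failure[+E](errors: List[E]) extends Validation[E, Nothing]
```

the key behavior is the second case: unlike a fail-fast either, combining two failures keeps every error, which is exactly what you want when validating a form or a config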
s type you're not going to require an environment you're not going to be able to fail and you're going to return a value a and that's going to perfectly describe the state type and give you great performance for that but also give you a lot of flexibility where maybe you say hey i want my state computation to be able to fail because i need to bail out early well that's fine so then you're going to have an error type there maybe you need context maybe you just want a reader you could do that too so you get a tremendous amount of flexibility there and we'll see some of that in the next few slides so you might say hey that's still a lot of type parameters seems like a lot well here's the alternative here's that same set of capabilities built up using traditional monad transformers so you need a kleisli here of an eitherT of an indexed stateT with an eval inside for stack safety and you need multiple type lambdas because every one of these monad transformers is expecting something that only has a single type parameter but none of these things actually do and boy this seems like not very fun to write and i had to experience some of this actually because i went to go and benchmark the implementation of zpure against a monad transformer stack and just writing the code to do the benchmarks ended up being quite painful because one of the problems with these monad transformer stacks is you can describe this whole type but the type isn't any actual data type that knows what it is so i've got this kleisli of eitherT of stateT thing but then i want to just implement a simple constructor that's going to get the current state well the data type i have is actually just this kleisli and the kleisli doesn't know anything about the fact that two layers of type constructors down there's a state monad there so when i try to call this get method it doesn't exist i've got to define two different sets of type aliases just to avoid having to do more type lambdas and then i have
to do these multiple liftF's where i have to specify the type parameters just to get this really simple constructor that literally all i'm trying to do is get the state and then also just writing the signature i know that i'm creating type inference issues for myself because all this constructor is supposed to be doing is getting the state so why is it parameterized on an environment type or an error type the reason is that in this stack all of these parameters are invariant so i'd like to be able to say this can't fail at all so the e is nothing and it doesn't require any environment so the r is any but if all of these are invariant then that's going to cause me a bunch of type inference issues i'm either going to need to manually narrow or widen so i kind of have these extra type parameters floating around so hopefully i can specify them if i need to or maybe if i'm lucky they're going to get inferred on the other hand if i'm working with zpure zpure is just a simple data type that yeah it's got five type parameters but it knows about all of its type parameters and it's got methods implemented to deal with all of its type parameters so getting the state with zpure is just zpure.get and to make it easy for you we've got a type alias that says type state equals zpure like we saw above so literally all you do is state.get and you don't even have to know that there's this more polymorphic thing you're working with but all of that power if you need it is there as well as the optimizations that went into it because the zpure type was built using a runtime that was adapted from the zio runtime and is very optimized so here's a comparison of doing simple operations of traversing a data type doing repeated gets and sets and repeated left-associated binds with that monad transformer stack we looked at before versus zpure and zpure is between 10 and 30 times faster here and that actually makes a big difference because for these types of operations you know maybe if
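the effect-rotation idea can be sketched with a toy data type — ZSketch is an invented stand-in for zpure (far simpler than the real optimized implementation) that shows how one type with several parameters specializes down to a plain state monad via a type alias:

```scala
// one data type bakes in state transitions, errors, and a result;
// unused capabilities are erased by fixing the corresponding parameter
final case class ZSketch[S1, S2, +E, +A](run: S1 => Either[E, (S2, A)]) {
  def map[B](f: A => B): ZSketch[S1, S2, E, B] =
    ZSketch(s1 => run(s1).map { case (s2, a) => (s2, f(a)) })

  def flatMap[S3, E1 >: E, B](f: A => ZSketch[S2, S3, E1, B]): ZSketch[S1, S3, E1, B] =
    ZSketch(s1 => run(s1).flatMap { case (s2, a) => f(a).run(s2) })
}

object ZSketch {
  // the familiar state monad is just an alias that fixes the rest:
  // same in and out state, no error
  type State[S, +A] = ZSketch[S, S, Nothing, A]

  def get[S]: State[S, S] = ZSketch(s => Right((s, s)))
  def set[S](s: S): State[S, Unit] = ZSketch(_ => Right((s, ())))
}
```

unlike a transformer stack the data type knows all its own parameters so constructors like get and set live directly on it and infer cleanly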
you're working with effects, you can say, look, if I'm doing database I/O, the fact that my flatMaps take more or less time with one effect system or another — maybe that's not actually going to drive the performance of my application. But if you're using state — if you're using a traversal to do something like zipWithIndex, or foldLeft, where the default implementation of foldLeft on a Traversable (or what would be called the Traverse type class in another functional programming library) is done in terms of that state monad — then that's actually a huge performance implication. And even if you say, well, you know what, Adam, that wasn't a fair comparison, I don't need all that power in ZPure, all I need is state — even if I do the same benchmark with just a State type from another functional programming library, one that has just the State and Eval layers and none of the others, I'm still going to be between three and ten times slower.

The final thing we've got for you is newtypes, built in. Newtypes are, I think, another one of those things that's always been a really nice idea in functional programming libraries, but at least for me they always seemed not quite ergonomic enough to really use day-to-day. I think we've got some really nice ergonomics here, where it actually feels quite natural and is quite powerful. And all of this is done in a way that's entirely at compile time and entirely plain Scala: there are no macros, no funny business like that going on, and there are no actual new data types or type constructors being created — it's all at compile time. We've also got a bunch of really great options to let you solve different kinds of problems.

The first thing you'll see here is that we've got both the ability to do what we call newtypes and the ability to do what we call subtypes. In this first example we've got object Meter extends Subtype[Int], and what this means is that Meter, to the Scala compiler, is going to be a distinct type from Int, but it's also going to be a subtype of Int. Even though technically we know there's no way to actually extend Int, to the compiler this is a subtype of Int. That can be really nice, because it means that all the methods we have on Int exist on Meter, so it can avoid a lot of boilerplate for us in implementing these, and any type class instances that exist for Int are going to be in scope for Meter. So we don't need to re-implement addition and multiplication for meters just to be able to work with them.
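A rough sketch of the kind of zero-cost, compile-time encoding being described may help readers following along without the slides. The names `SubtypeSketch` and `NewtypeSketch` are illustrative assumptions, not ZIO Prelude's actual machinery, but the abstract-type-member trick is the essence of it:

```scala
// Hypothetical sketch of a zero-cost newtype/subtype encoding. The abstract
// type member is what makes the new type distinct to the compiler, while at
// runtime every value is just the underlying representation (no boxing).
abstract class SubtypeSketch[A] {
  type Type <: A                                // subtype of A: A's methods still work
  def apply(a: A): Type = a.asInstanceOf[Type]  // cast is erased at runtime
}

abstract class NewtypeSketch[A] {
  type Type                                     // unrelated to A: A's methods are hidden
  def wrap(a: A): Type = a.asInstanceOf[Type]
  def unwrap(t: Type): A = t.asInstanceOf[A]
}

object Meter extends SubtypeSketch[Int]
type Meter = Meter.Type

object Natural extends NewtypeSketch[Int] {
  // a hand-rolled smart constructor standing in for the built-in
  // assertion-based validation described in the talk
  def make(n: Int): Either[String, Type] =
    if (n >= 0) Right(wrap(n)) else Left(s"$n is not a natural number")
}

val distance: Meter = Meter(3)
val doubled: Int = distance * 2   // Meter <: Int, so Int operations come for free
```

Note how `Meter` values flow back into `Int` positions with no unwrapping, while `Natural` deliberately hides `Int`'s operations — the two behaviors contrasted in the talk.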
here I'm defining the condition that a valid instance, when I create it, has to satisfy — this is greater than or equal to zero — and we've got a bunch of ways built in to construct these values, with that built-in validation as well as built-in error reporting. This gives really nice ergonomics. Here's a simple example, but one of the classic problems with all these functional programming libraries is: you want to add a list of numbers versus multiply a list of numbers. How do you do that, and how do you do that with some kind of newtyping mechanism? With ZIO Prelude it ends up being really nice because of that subtyping mechanism: essentially we're just taking this list and mapping the values to this newtype using either the Sum or the Product constructor, but then we don't even need to do anything to unwrap the values again, because Sum of Int is already a subtype of Int, so the compiler can just lift it right back to an Int for us. So it creates really nice ergonomics for working with these newtypes that — I'm obviously a biased observer here, but I think it's better than anything we've seen before. With that, I will hand it back to you, John.

All right, so this is where ZIO Prelude is today. We've got that bundle of data types and type classes that are small — they're based on algebraic properties of associativity and commutativity and identity — and they're modular, so you can assemble very complex, expressive combinations of all these type classes to generate new structures that you can't describe using the existing hierarchies.

But where to from here? Well, we want to do documentation — lots of really beginner-friendly documentation and examples. We want more instances of these type classes for data types in the Scala standard library and in the ZIO library. We need to do more polishing. We want to do some effect type classes, in particular to describe the ZPure data type as well as the ZIO data type. We want to do some more performance optimization, and make sure that all the operations in there are stack-safe — so non-toy implementations of the different operations we may have right now. We want to provide automatic derivation of type class instances for any ADT, which will utilize different mechanisms in Scala 2.x versus Dotty, Scala 3. And then finally, we want to make sure we get feedback from real users, because that's one of the things that I think made ZIO such a great library: we had not only all these great contributors implementing features and writing docs and so forth, but also a lot of users who gave us feedback, and we took that feedback to heart and used it to make the product better. So that's what we need for ZIO Prelude: lots of people putting it through its paces, making sure we flush out any edge cases, that type inference is really great, and that everything really fits together well.

Before we open it up for questions and wrap up, I wanted to give a special round of thanks to the Spartan members who contributed to this project. They worked nights and weekends to add features and docs and typo fixes and all of the above to the library — Deon, Ken, Manfred, Jorge, Phil, Kamal, and Maxim, thank you all for your contributions; the library would not be where it is today without your help. And of course, thank you to Salar, my friend, who invited me to give this talk here with Adam — it has been a pleasure.

So if you have any questions, we will go ahead and take those. Please do check out the repository at github.com/zio/zio-prelude. It is not operational tonight, and given our technical difficulties it is likely it will not be operational until tomorrow, but tomorrow we will go ahead and open it up to the public. Also, if you like ZIO Prelude, then please consider becoming one of my patrons on Patreon, where you can follow along with new projects and get my weekly lessons teaching functional programming and one-on-one mentorship.
And then finally, please feel free to follow both myself and Adam on Twitter or LinkedIn. I have a blog at degoes.net, and you can find both of our content on the Ziverge website, ziverge.com — that's my company, that's where Adam works, and that's where we're building a whole company around functional programming in Scala. So please check that out. All right, let's go ahead and open it up to questions. Jeremy, did you say you want to go first?

I have questions. So, earlier on you mentioned that the functor definition in Cats and Scalaz was a small subset of the possible functors, which I agree with, but I didn't see how what you talked about tonight addresses that. Can you give a little more insight into how we can make more arbitrary functors, from Scala to other subcategories?

Yeah, that's a really excellent question. Remember, back in the beginning I said that the functor we have in these type class libraries is just a dot in an ocean of possible functors. Part of the problem — why we need so many type classes in the existing functor hierarchy — is that it conflates that mapping function with all of the other operations, like combining two F's together, all the different ways you can combine two F's together. It's all wrapped up into one package. That means if you want an invariant applicative, for example — which I don't think exists, but you can define it, it's valid, it's lawful, it really is a type of lax monoidal functor — you have to define an entirely new type class, and there are only going to be minimal differences between it and the other ones, but you still have to define it. Now, using ZIO Prelude, you can actually reuse AssociativeBoth — or IdentityBoth, and various other type classes. The only thing you would have to change is that instead of using Covariant, you would just use Invariant. So you would mix in Invariant with whatever other capabilities you wanted, and you would have a composite type, formed through intersection types, that would represent that new thing.

Also, if you wanted to define, for example, an applicative DSL for data types that can be serialized, then you would use AssociativeBoth, maybe IdentityBoth, together with a kind of mapping operation that only allows you to map to target types that support serialization. In that way you would be able to reuse all of the type classes inside ZIO Prelude, but maybe introduce one of your own. And actually, for that use case and many others, you don't have to: we didn't talk about this, but there are CovariantSubset and ContravariantSubset type classes baked into ZIO Prelude, which ordinary Covariant and Contravariant extend, so the capability to operate on subsets of types that satisfy particular requirements, as defined by a set of type classes, is already baked in. Ordinarily you probably don't need to create new type classes, but the point is, if you ever do, because the ZIO Prelude type class hierarchy is so modular, you won't have to reinvent everything — you might just have to create one new type class, mix it in with all these other type classes, and reuse a lot of the generic machinery.

By the way, I don't think there is a full solution to that problem, and the reason is that in order to model category theory in its full generality, you need a much more powerful programming language — you need something like Idris or Agda — and you need to be able to describe things in a way that will no longer infer. I don't think that's worthwhile, honestly. I think you can get the vast majority of use cases from the combination of modularity and subsetting on types, and that's what ZIO Prelude gives you: the ability to describe all these new kinds of things you couldn't before, but without a dependently typed programming language and very complicated machinery that would not infer anymore. Excellent question.
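The modular mixing John describes can be sketched in a few lines. The type class names below mirror the talk, but the signatures are assumptions for illustration rather than ZIO Prelude's exact API:

```scala
// Hypothetical sketch: each capability is its own small type class, and
// different "applicatives" are just different intersections of the pieces.
trait Invariant[F[_]] {
  def imap[A, B](to: A => B, from: B => A): F[A] => F[B]
}
trait Covariant[F[_]] extends Invariant[F] {
  def map[A, B](f: A => B): F[A] => F[B]
  final def imap[A, B](to: A => B, from: B => A): F[A] => F[B] = map(to)
}
trait AssociativeBoth[F[_]] {
  def both[A, B](fa: F[A], fb: F[B]): F[(A, B)]
}

// The classic applicative zip is recovered from an intersection of
// capabilities; swapping Covariant for Invariant in the bound would give
// the "invariant applicative" the monolithic hierarchy can't express
// without a whole new type class.
def zipWith[F[_], A, B, C](fa: F[A], fb: F[B])(f: (A, B) => C)(implicit
    F: Covariant[F] with AssociativeBoth[F]
): F[C] = F.map[(A, B), C](f.tupled)(F.both(fa, fb))

implicit val optionInstance: Covariant[Option] with AssociativeBoth[Option] =
  new Covariant[Option] with AssociativeBoth[Option] {
    def map[A, B](f: A => B): Option[A] => Option[B] = _.map(f)
    def both[A, B](fa: Option[A], fb: Option[B]): Option[(A, B)] =
      fa.flatMap(a => fb.map(b => (a, b)))
  }

val zipped: Option[Int] = zipWith(Option(2), Option(3))(_ + _)
```

One instance of the small pieces serves every intersection that mentions them — that is the reuse being claimed.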
Yeah, I think you're right. As a follow-up question: are you planning to open source this ZIO Prelude right now, or tomorrow?

I was planning on doing it tonight, but honestly it's gotten so late with the technical difficulties and everything that we'll probably defer that to the morning. But yes, tomorrow it's going to be out there, it's going to be open source, and we're going to create a bunch of tickets for people who are eager to give a hand with the sort of last mile that has to be done in order for us to release 1.0. But unlike ZIO, this is not uncharted territory, so it's not going to be three years — it's more like three months, or maybe even three weeks, who knows. But I think we're actually reasonably close to a 1.0 release on this.

Okay, so — so many questions, but one more follow-up. A lot of the innovations here were variance-based. Why do you think nobody else who's traversed this territory — no pun intended — has attempted to use the obvious variances that should be applied to these different types?

Yeah, so I think that's a good question. First off, you do have examples of people who have created more faithful representations of the abstractions in category theory in both Scala and Haskell. You have people who have gone totally crazy with very fancy techniques — Ed Kmett, for example, and a friend of mine, Alex, and lots of other people — who have created much more faithful representations of category theory abstractions inside Haskell and Scala, but those have not been very usable or practical. And conversely, you've had people play around — like in my previous work with RedEyes, and I know Michael Pilquist did some stuff with scodec — who were skirting the edges of this and saying, hey, there's stuff here that's unexplored, stuff we would like to be able to write type classes for that's very similar to what we have but that we don't have — do we create ad hoc classes? What's not been done is the modularization of the type class hierarchy — chopping it apart into pieces — and that requires, first off, compromises, because you have to decide what you're willing to give up, and that's tricky, and I think it takes time to figure out. It also requires knowing Scala deeply and embracing subtyping, as well as knowing functional programming very deeply, and the subset of people who know and embrace subtyping in Scala and who know and embrace functional programming is not that big. There are not gobs of people in there doing lots of experiments. So I think the fact that you have to know a lot about Scala and subtyping and FP, and also have to have played around with different variations of the classic hierarchy, has meant we've not seen a lot of innovation in this space.

But I think now is a really good time in the history of functional programming in Scala, because people are starting to get more comfortable moving away from what Haskell did. For a while that was considered the gold standard, and you didn't deviate from what Haskell did, because that's what it meant to do statically typed FP. But increasingly, Scala programmers are becoming less afraid to chart new territory and say, no — that made sense in Haskell, it doesn't necessarily make sense in Scala, and there's no reason we have to do something the exact same way they chose to do it in that other programming language.

So you might say subtyping is considered evil — at least by some — but it is a feature of Scala that doesn't exist in Haskell, and when you bring things over from Haskell, it might give a new dimension to how you define them.

Yeah, I absolutely think that subtyping here is a strength, and declarations like variance, and the way it can be used to — well, lift up
subtyping relations into these type classes, giving you sort of automatically derived type class instances for things that are supertypes or subtypes — that is enormously powerful, and it's an example of playing to the strengths of the Scala programming language. That's something I know for sure I did not see early on doing FP in Scala. I avoided all the things that were not Haskell, and lots of other people did — and still do: they avoid anything in Scala that's not Haskell. I think that's the wrong approach. I think everyone who goes down that path eventually ends up leaving Scala for Haskell, because you can't do good Haskell in Scala — Tony Morris is right about that — and if you try to do good Haskell in Scala, you'll end up dissatisfied and you'll leave. But if you chart new territory, if you find out how Scala wants to do functional programming, I think you can end up with something that is powerful in its own unique ways. It doesn't have the same strengths as Haskell; it has different strengths, but no less important for day-to-day programming. You can end up with something that's both principled and also joyful — because I think functional programming needs to be joyful, and the more we can do to make it fun to do FP in Scala, the more people will do it, the longer it will stick, and the fewer people will leave Scala for Haskell. Great questions, really great questions, thanks.

Do you find what you're doing to be more mathematical? Because it seems that way — it seems simple and almost, I hate to use the word, elegant. I'm just remembering how Cauchy sequences form a ring, and this sounds a lot like rings and groups to me.

Yes — excellent question, Richard. I do believe that even though this is a different approach from the classic functor hierarchy, and maybe doesn't use any of the category theory jargon or those type classes, it is more principled. It's more essential, because these operations, these type classes, are more orthogonal — they actually themselves compose together in ways you can't replicate — and they're also more faithful. That's what some people don't realize: they're more faithful to the original category theory abstractions, because how do we describe binary operators in math? We describe them by their properties — associative, distributive if we have two of them, or having some identity — we describe them by their laws. It's the same thing in ZIO Prelude: these binary operations are described by their properties. You never have to wonder in ZIO Prelude what laws an abstraction satisfies, because it's in the name. You know an AssociativeBoth is associative for a tuple — you don't have to remember, the structure is baked into the names. There's no more guessing; people know what the structure is from the names. And things like AssociativeBoth — or rather, AssociativeBoth and AssociativeEither — are a more faithful encoding of a lax monoidal functor than the Applicative type class. The Applicative type class is not anything in category theory; it's sort of a weird way that something in category theory can be used, based on curried functions in Haskell. So even though there's none of the category theory jargon in there, the essential structure of ZIO Prelude is more faithful to category theory, and it's also more principled: it relates to the laws, and to how we would talk about it from a mathematical perspective, as binary operators satisfying these different properties.

Thank you — great question. It's sort of what I'm thinking about also. The monad — I look at the monad as kind of an abstraction of a substructure in a program, in that it has this isolation effect and this predictive effect, and this seems to add to that capability of being able to predict: I'm starting out here, and I'm going to try to transition this thing. And it just feels
cleaner. I don't know — that's just my thought about it.

I think it is cleaner, and what it allows you to do is define new kinds of monads that can't be written using the standard monad type class. I'm excited about that, actually. I think people could now define generic type classes for parser combinator libraries, analytics libraries, and so forth that use restricted forms of monads — forms that allow branching in ways that are still amenable to optimization.

I have a question. When I was listening, I felt like, oh, okay, model theory is getting into this — and the code is beautiful and the design is beautiful — but do you actually keep in mind model theory and categorical implementations, categorical models of algebraic theories?

To some degree. I think there are always going to be trade-offs you have to make when you're trying to model super-generic things in a programming language, and these are not the only trade-offs you could end up making. Also, there are parts of the class of abstractions that we've currently ignored — that would include combinations of binary operators, for example, and lattices as a subset of those — which we're not tackling right now. Maybe we'll tackle them at some point; for now those parts have been left off, and we're focusing on the sort of ninety percent, with a specific opinion. We made a choice that is not necessarily the only choice you could make: if you decide to branch off from the functor hierarchy, there are other ways you could go. This is one choice — one that I think will be very useful, but it's still an approximation. I don't think there's a clean solution to everything out there that's also ergonomic.

Yeah — probably just not right now, but I'm working on that.

Well, that's awesome. I hope to hear more. Any other questions?

Yeah, this is Matthew Smedberg. I posted it in the chat, but I kind of wanted to tease out whether this is more something you think is useful for library designers, or something that's going to be consumed by your workaday programmer. Because one of the things I've had to deal with as I've mentored younger programmers is: don't worry about all the type classes, don't worry about all the complexity — and please don't import one thing from Cats — let the people who want the complexity deal with the complexity. So where do you see adoption coming for Prelude?

Yeah, great question. I think functional type classes are useful to the advanced functional programmer, and — assuming you're not doing something like tagless final, which forcibly introduces type classes into all your code, or at least a good chunk of it — the uses of type classes are actually quite minimal. Charitably, maybe five percent of your code base can benefit from type classes as an end user. These are things you don't need to use; if you do use them, you can eliminate some typing, basically, and some duplication, and benefit from generic functionality. I think type class libraries are more obviously useful to the writers of libraries. But they can still be useful inside end-user applications for the common stuff — and by the common stuff, I mean combining data structures together. Associatively combining data structures: ints and strings and maps and lists and vectors and trees, just smashing stuff together. We all want that operator that does the magically right thing when combining two things, and that's a common use case even in business logic — that's very common. And then the other thing that's very common in business code is traversing a data structure, performing an effect for every value, and collecting up the results, so
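Those two "everyday" operations — associatively combining values, and effectfully walking a structure — can be sketched in a few lines of plain Scala. The names and signatures below are illustrative assumptions, not ZIO Prelude's actual API:

```scala
// Hypothetical sketch: one operator that combines any two values
// associatively, and a traversal that performs an effect per element.
trait Associative[A] {
  def combine(l: A, r: A): A
}

implicit val intSum: Associative[Int] = (l, r) => l + r

// maps combine by combining the values at colliding keys
implicit def mapAssoc[K, V](implicit V: Associative[V]): Associative[Map[K, V]] =
  (l, r) => r.foldLeft(l) { case (acc, (k, v)) =>
    acc.updated(k, acc.get(k).fold(v)(V.combine(_, v)))
  }

// "the operator that does the magically right thing"
implicit class AssociativeOps[A](private val l: A) {
  def <>(r: A)(implicit A: Associative[A]): A = A.combine(l, r)
}

// a traversal specialized to Either as the effect, for the sketch
def forEach[A, B, E](as: List[A])(f: A => Either[E, B]): Either[E, List[B]] =
  as.foldRight(Right(Nil): Either[E, List[B]]) { (a, acc) =>
    for { b <- f(a); bs <- acc } yield b :: bs
  }

val merged = Map("a" -> 1) <> Map("a" -> 2, "b" -> 3)
val parsed = forEach(List("1", "2"))(s => s.toIntOption.toRight(s"bad: $s"))
```

A single `Associative[V]` instance gives you map merging for free, and `forEach` fails fast with the first bad element — the two business-code patterns described above.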
that's the Traversable type class. So those two things — the associative combination of values, and the ability to traverse over a data structure and perform effects — are in all applications. You don't have to use them, but it makes sense to. Concrete data types like Validation are easy to teach, and there are uses for the concrete stuff in a lot of applications. The more esoteric type classes — associative combinations and identities and commutative combinations — are more useful for the library designer and the advanced functional programmer, and they're not going to be very pervasive in the typical application or end-user code base. But still, these type classes could enable new kinds of libraries, because I hope they'll get people thinking about things they weren't thinking about before, when they were viewing everything through that lens of functor, applicative, and monad.

Yes, Jeremy? Yes, Michael. So my last question: will ZIO itself come to implement these type classes?

Yeah, excellent question. The way we're going about that is that ZIO will remain a dependency-free library, and that's because not everyone who is interested in ZIO is interested in functional programming — as weird as that sounds, it's actually true. ZIO is just a great library for doing concurrency stuff, so you don't need to know anything about algebraic structure to use it. However, there will be some subset of ZIO users who want to talk about algebraic structure and benefit from generic code, and for that subset it's a good fit. ZIO Prelude already depends on ZIO, and it defines instances of all the type classes for all the ZIO data types. So if you want to use the ZIO data types in a generic context with generic code, or you want to do tagless final with ZIO, then I think ZIO Prelude is going to become an important ingredient for that. But you'll never have to depend on ZIO Prelude, because in my opinion the audience for a library like ZIO is a hundred percent of Scala users, while the audience for a library like ZIO Prelude is maybe ten percent, or five percent — it's not as big, and it's for the reasons everyone mentions: these type classes, when you're writing end-user code, you don't need them. They make your life a little simpler if you use them — you can eliminate some duplication — but ultimately you could write that code yourself and just specialize it a bunch of times. Type classes eliminate that boilerplate, but you have to know about them, and knowing about them requires training.

It's definitely inverted from the norm, and I actually think that makes a lot of sense, because the foundation of everything is being able to model effects. Even to build a memo table, which is something you would expect in an FP library, you have to talk about effects — almost any interesting functionality is going to have to depend on that functional effect model. So I think that's the right way to model it, as opposed to the classic way, which is to have the effect library depend on the FP library — I believe that's actually backwards.

Yep. Fine. Yeah, I'm actually very interested — I mean, the design of the library is incredible, I think it's revolutionary, it's no longer a hierarchy. But I'm very curious about the design of ZPure, and especially the state parameters — whether that's something that should be embedded in the variance of the environment that you deal with, at least theoretically. And also, you've mentioned that the runtime of ZPure is quite inspired by the runtime of ZIO, so I don't actually understand the design of this ZPure. Is it an effect system? Is it only for synchronous data? What's the reason behind it? It seems like it overlaps a lot with ZIO's territory.
Yeah, I mean, I can try to hit on that. I would think about ZPure as what you want when you want one of a constrained set of capabilities. ZPure says you have the ability to access an environment, the ability to change a specified state, and the ability to fail — but beyond those things, you're not going to be doing any other effects. You're not going to be forking fibers, you're not going to be doing network I/O; you're going to be doing that specific set of effects. Whereas ZIO is: you can do anything in the world within the context of ZIO.

Thank you. Nigel? Ben's got a question for you, John.

Yeah, I see a lot of it's based on associative, commutative, stuff like that — all the nice laws. However, we all know there's no way to enforce those in the Scala language. Are there thoughts of creating something, using ZIO Test, similar to cats-laws, to verify that these things are actually working?

Yeah — actually, I'll let Adam take this, but ZIO Test has laws as first-class values, which is relatively new, and these laws can work on constrained data types. Adam, why don't you talk more about the integration between ZIO Prelude and ZIO Test?

Yeah, definitely. It's been really nice being involved in both projects, and there's been useful cross-pollination between them, because the need to test these laws — and the desire to do so in a more polymorphic way — actually led to the development of this functionality within ZIO Test. We think it gives a more powerful way to treat laws as values, because in other systems where you're testing laws, the laws don't have the full degree of polymorphism: the law is some method that says, well, if you give me an A value, then I'll give you some test results. But in ZIO Test, the law is actually a value itself that says: for any value that has these capabilities, I can give a test result. And there's a lot of neat ability to compose those together — if you have one law that requires one set of capabilities, and another law with another set of capabilities, you get a new law set that will test all those laws and require all of those capabilities. That's been baked directly into ZIO Prelude: when you look at just about every one of these type classes, you'll see something like object Equal extends Lawful, and it gives you very easy syntax — with ZIO Test you can now do checkAllLaws on Equal, you just provide a generator for whatever concrete A type you want to check, and it'll do that for you.

Yeah, for law testing I think we've upped the game, basically. The laws are ordinary values, you compose them using plus operators, and you check them in a way that's extraordinarily simple. This is a level of ergonomics that I think we've not seen before in the Scala ecosystem — or, to my knowledge, any ecosystem — making it super easy not only to bake these laws into every type class (into the companion objects of the type classes, actually), but to check them with just one line of code.

Great, sounds awesome, can't wait to see it. Any more questions?

Yes, this is Nicholas. I'm fairly new to the ZIO community here, but this talk was, to me, very enlightening — a kind of revolutionary view on making FP programming, as John said, more elegant and fun. I work a lot with Lagom and Akka and those kinds of libraries, and microservices, and I really enjoy it, but I can see that using ZIO Prelude could potentially have a big ripple effect in terms of simplifying several libraries in the ZIO ecosystem. So I'm wondering what you think the roadmap is — how long is it going to take to effectively propagate all of this
elegance to the rest of the cio ecosystem so that you could say to people you know zio is a great way of programming in scala and and people instead of you know i mean i don't know about anything about haskell but you know all the time people say oh yeah why don't you program in in java or in in in extend which is like some kind of like a fp wannabe type dialect of of java um or or maybe in in python for for christ's sakes right and and and so i to that i i kind of like feel like uh i want to retain my competitive edge by actually using the best language to get my job done in the best way and it feels like with this prelude business and and the way that adam talked about the the the elegance in which we can incorporate that with a law testing it actually kicks ass and and so i'm wondering like when is it going to be like to the point where you have like this beautiful ecosystem that you can tell the world you know there it is enjoy yeah i think that's a good question i think that um you're starting to see that now but i think that what zeo prelude will give us is a common way to talk about structure across all these different libraries because right now they have no vocabulary to talk about that but when that vocabulary is baked into every library maybe as a separate project or maybe in chord depends on the library then suddenly it makes it very easy to share that and i think you get network benefits from that and also you get the ability as an end user to learn one set of vocabulary and use that across all the libraries and to know how like if you make your data type and it has to satisfy laws how do you do that well you've learned it for one you've learned it across all so i think the types of network benefits that we'll see there and the way that things can fit together in a modular fashion will increase greatly with zeo prelude i think it'll take a little bit of time to get there but not too long i think a year from now all of that will have happened actually 
And I'm really looking forward to seeing that happen. Thank you!

Yeah, great question. I have a question too, but first, I really enjoyed your talk, and I really appreciated the classification that this Prelude produces, especially going back to the flatten operation, which I think is the original definition of a monad in category theory, instead of flatMap, which makes people really confused. I think that's really brilliant: now we don't have to use the word monad anymore, we just say we have a flatten operation, which everybody understands. And separating out the different algebraic properties is brilliant too. My question, though: the original motivation for monads was to introduce effects into a functional language, right? And I'm wondering, in your implementations, how do the various laws, like commutativity and associativity, apply? Do they apply to effects at all? Because it seems like that's where the limitation is going to be. All the other algebraic data structures are wonderful, but when it comes to effects, what are your experiences so far?

Yeah. Do you want to take that, Adam? Or I can do it as well. You're muted, I think.

Sorry about that. Yeah, it's a great question. I think it really starts with the problem being equality, because all of these laws rely on some notion of equality to test them, and the problem with real I/O-type effects is that, if you're being principled, there's really no way to do that. I can have two effects that do arbitrary things, and maybe I can check that they return the same value, but the reality is that if I'm doing I/O, I might return two but have launched ten different network requests along the way, and you wouldn't be able to observe that from the return value. So I think there's probably a little bit more we could do. We've talked about having some concept of, I don't know if you'd call it test equality or quasi-equality: at least equality to the extent we can
measure it, where you give some inputs and we'll look and see whether we get the same outputs, just for testing. But I think it's important to recognize that there is a fundamental limitation there: you can't define equality on things that don't have a well-defined notion of equality.

Another thing I would say is that if you take that flatten operation and combine it with covariance, so you have the map operation and the flatten operation, then the composition of those two can satisfy additional laws. That's something you don't see in the presentation, but if you actually have flatten and covariant and you combine them together, that combination can satisfy new types of laws that are not present in covariant alone and not present in flatten alone. Maybe that's what you were hinting at. If you have that flatten operation and you have map as well, you can implement flatMap, so it's just as powerful; it's just expressed in a totally orthogonal way where there's no overlap. Flatten is as primitive as it possibly can be, map is as primitive as it possibly can be, they don't overlap at all, they're orthogonal, and when you combine them together you can generate flatMap. The reason it's structured that way is that it's useful to structure it that way when you're using it, not because of the category theory representation of a monad, which is a monoid in the category of endofunctors. But like you said, that is what flatten is: we have two of these things and we smash them together into one, and that's the flatten operation, that's join.

All right, anything else? John and Adam, that was absolutely fantastic, the talk, the Prelude. Now I know what you guys have been up to. Thank you for bearing with the huge pains and technical difficulties and everything. I can't believe just how much trouble we ran into tonight, but I appreciate everything. It was me who was supposed to
apologize! It's not your fault, it's not your fault. Yeah, I really can't thank you enough for doing this together. I hope it didn't put you off coming back to SF Scala, and hopefully one day we'll do this in person again.

Yeah, in person, where we can't be bombarded by trolls.

That's right. It was the first time it happened at a meetup I was running, so I'll be on extra guard next time.

I thought of a few ideas for next time; let's see what's best.

Hopefully there'll be no next time. Get a few volunteers to do a dry run first and cause as much mayhem as they can, and see if you can shut them down.

Yeah, absolutely. But at the beginning of the talk, John, I think you were talking about how you could have, say, a monad instance for a type, and then it could actually widen to the monad instance of another type, and I think you were referencing the idea we get a lot in Cats, where you do .some on something and it turns into an Option automatically, and whatnot. I was actually expecting to see variance on the F parameters, on the outside of the F as opposed to the inside, to solve that problem, but I didn't see that. So does the library handle that well, or is it still kind of a hole?

I think that makes sense. The reason you don't see it in the library is that the Fs generally occur as both inputs and outputs. In your own type classes, for example if you're doing tagless final, you're going to have a bunch of F constructors where F appears only as an output parameter, so there I actually recommend you use the plus sign. But for all the operators I can think of in ZIO Prelude, the F occurs as both an input and an output, so you can't make it covariant, unfortunately. If you can, then definitely do it, because it works in a lot of cases, and when it does work, it's really, really nice.

Okay, yeah, one other question.
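The earlier point, that map and flatten are orthogonal primitives which together generate flatMap, can be sketched like this. The trait names echo the spirit of ZIO Prelude's Covariant and flatten type classes, but this is a self-contained illustration under assumed names, not the library's exact definitions:

```scala
// Two orthogonal, minimal capabilities (illustrative names).
trait Covariant[F[_]] {
  def map[A, B](fa: F[A])(f: A => B): F[B]
}

trait Flatten[F[_]] {
  def flatten[A](ffa: F[F[A]]): F[A]
}

// flatMap is not primitive here: it is generated by composing map and flatten.
def flatMap[F[_], A, B](fa: F[A])(f: A => F[B])(using c: Covariant[F], fl: Flatten[F]): F[B] =
  fl.flatten(c.map(fa)(f))

// Concrete instances for Option, to check the derivation on a familiar type.
given optionCovariant: Covariant[Option] = new Covariant[Option] {
  def map[A, B](fa: Option[A])(f: A => B): Option[B] = fa.map(f)
}

given optionFlatten: Flatten[Option] = new Flatten[Option] {
  def flatten[A](ffa: Option[Option[A]]): Option[A] = ffa.flatten
}
```

Neither capability overlaps with the other, yet a data type that has both automatically gets flatMap, and laws can be stated for the combination that hold for neither piece alone.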
Would you be... you're cutting out. Yeah, it was a bit choppy. I think we lost Britta there, John. At least we're not standing in the middle of the street taking questions like last year, right? [Laughter]

So in the meantime, I have a follow-up to Neil's question from tonight, since you said, John, that in our applications you could have basically two F parameters, one just for inputs and one just for outputs. Did you consider, in the Prelude that you have made, that somebody might for legitimate reasons want covariance on the F for input purposes or output purposes, so that instead of using the same F for both, we have two parameters, one always for inputs and one always for outputs?

So, we use that technique on lower-kinded types all the time in the ZIO ecosystem: in order to make things infer very well, we'll split invariance into the covariant part and the contravariant part and have two separate type parameters, and it works beautifully. Now, the reason I wouldn't necessarily recommend doing that on the type classes themselves is that it's a niche use case and it's going to complicate the design of the library. This is something I talk about when I'm teaching library design: you have to make trade-offs. If you want a library to be really good at some things, it has to be not so great at others. Absolutely, if you wanted maximum covariance and contravariance, and so maximum subtyping relationships, then you would split things up. You would split AssociativeBoth up into the covariant part, which would be an F, and the contravariant part, which would be a G, or vice versa, and you would annotate one with minus and one with plus. These would be higher-kinded types, F and G would have kind star-to-star, and your operations would support an input type that is potentially different from the output type. That would give you a lot
of power, a lot of flexibility, but it would be used in a very small number of cases. So in my view, if you're aiming for a mainstream library, then that would probably not be the best trade-off to make, because it would complicate the design and only address niche use cases. But nonetheless, you're right that it would give you much better type inference for those use cases, and you would also be able to describe things using that structure that you can't describe using the current one, in which you have a single F, a single data type parameter that everything is invariant in.

Okay, anything else? John and Adam, thank you so much. I think everyone's ready for a glass of wine now, to digest the Prelude and the unnecessary drama at the beginning of the evening, right? Okay, time for the after-party.

Yeah, thank you so much for starting from scratch and doing the talk again as well. Really appreciate it.

It's no problem. That stamina was amazing. Yeah, the second effort was even better.

I had to watch my introduction of John, though. I feel like I didn't introduce him as well as I could. I don't think everyone's waiting to hear me, so sorry I didn't introduce you better the second time around.

Don't worry. Let's do it one more time!

No, I'm out of here. It's been fun. All right, thanks everyone, have a great evening. Thank you. Bye everyone!
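The lower-kinded version of the technique John describes, splitting a single invariant type parameter into a contravariant input side and a covariant output side, can be sketched with a small example. Transform here is a hypothetical type invented for illustration, not part of ZIO Prelude; the same trick is what ZIO itself does with ZIO[-R, +E, +A]:

```scala
// Hypothetical example type: instead of one invariant parameter, the
// function-like wrapper splits it into a contravariant input (-In) and a
// covariant output (+Out), so subtyping and inference work naturally.
final case class Transform[-In, +Out](run: In => Out) {
  def andThen[Out2](that: Transform[Out, Out2]): Transform[In, Out2] =
    Transform(in => that.run(run(in)))
}

val length: Transform[String, Int] = Transform(_.length)
val double: Transform[Int, Int]    = Transform(_ * 2)

// Because of the variance annotations, widening needs no explicit conversion:
// Transform[String, Int] is a subtype of Transform[String, AnyVal].
val widened: Transform[String, AnyVal] = length
```

Doing the same split on the higher-kinded type classes would mean two parameters of kind star-to-star, one annotated plus and one minus, which is exactly the extra complexity being weighed against the niche benefit in the answer above.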
Info
Channel: FunctionalTV
Views: 5,577
Rating: 5 out of 5
Id: OwmHgL9F_9Q
Length: 132min 29sec (7949 seconds)
Published: Thu Aug 06 2020