Golang UK Conference 2016 - Dave Cheney - SOLID Go Design

Video Statistics and Information

Captions
All right, good morning. So I have a question: how many Go programmers are there in the world? This is how many? Not enough, okay, that's one guess. Any other guesses? Well, there's at least 350, like, in this room. Anyone else want to have a guess? 50,000, okay. Yeah, you had a hand up? Seven? Okay, so somewhere in that range. Keep that in mind; I'll come back to it a bit later.

My next question is: who does code review, show of hands, as part of their job? I think we can say that's approximately everybody, which is a good sign. So why do you do code review? Shout it out. What's it for? Why do you spend the time, why don't you just skip it? To make the code better, okay. To learn, okay. But to come back to the first answer: if you want to make the code better, maybe you could say you're trying to catch bad code. And if code review is there to catch bad code, how do you know if the code you're reviewing is good or bad? It's fine to say things like "I don't like this code, it's ugly", or "wow, that change was really beautiful", but that's kind of subjective, the way you might say a painting is beautiful or the room we're in is beautiful. Wouldn't it be more useful to have some less subjective way to talk about the properties of good or bad code?

So what are some of the properties of bad code that you might pick up in code review? Rigidity: is the code rigid, is it a straitjacket of overbearing types that make it really hard to change? Is it fragile: does the slightest change wreak havoc across the entire code base? Is it immobile: is the code hard to refactor, are you always one step away from an import loop? Is it complex, overly complex for no reason other than to be complex? Or is it just exhausting to use; when you look at it, can you even tell what the code is trying to do? Now, would you like to see any of these words used to describe code that you wrote? Probably not. But this is progress: now we can say things like "I don't like this code because it's too hard to modify", or "I don't like this change because I can't tell what it's trying to do". Those are still negative statements, though. What if we had more positive ways to talk about this? Wouldn't it be good if we had some way to describe good design, not just bad design, and to look at it in objective terms?

In 2003 Robert Martin wrote this book, and in it he described five principles of reusable software, which he called the SOLID principles, after the first letters in their names. Now, Martin's book is a little dated; the languages it talks about were in use a decade or more ago. But perhaps there are some aspects of the SOLID principles that can give us a clue about how to describe well-designed Go programs, so I want to spend some time talking about that this morning.

The first principle, the S in SOLID, is the single responsibility principle, which reads: a class should have one, and only one, reason to change. Now, obviously Go doesn't have classes; instead we have the much more powerful notion of composition. But if I can ask you to look past the use of the word class, I think there is some value here. Why is it important that a piece of code should have only one reason to change? Well, as distressing as the idea that my own code might have to change is, it's far more distressing to discover that the code my code depends on has changed under my feet.
And when your code does have to change, it should do so for its own reasons; it shouldn't be the victim of collateral damage. Code that has a single responsibility therefore has the fewest reasons to change.

Two words that describe how easy or difficult it is to change a piece of software are coupling and cohesion. Coupling is simply a word that describes two things that change together: a movement in one induces a movement in the other. A related but separate notion is cohesion, a force of mutual attraction; in the context of software, cohesion is the property that describes pieces of code that are naturally attracted to one another.

When we describe the units of coupling and cohesion in Go programs, we might talk about functions and methods, and that's the usual way the single responsibility principle is discussed, but I think it starts with Go's package model. In Go, all code lives inside a package, and a well-designed package starts with its name. The package's name is both a description of its purpose and a namespace prefix. Some examples of good packages from the standard library are net/http, which gives us HTTP clients and servers; os/exec, which runs external commands; and encoding/json, which implements encoding and decoding of JSON documents. When you use another package's symbols inside your own, that is accomplished with the import declaration, which establishes a source-level coupling between the two packages: they now know about each other.

This focus on names is not just pedantry. A poorly named package misses the opportunity to enumerate its purpose, if indeed it ever had one. What does package server provide? A server of some kind, but of what protocol? What does package private provide? Things I shouldn't be allowed to see? Should it even have exported symbols? And package common, just like its partner in crime package util, is often found close by these other offenders. Catch-all packages like these become a dumping ground for miscellany, and because they have many responsibilities they change frequently and without rhyme or reason. In my view, no discussion of decoupled design would be complete without mentioning Doug McIlroy's Unix philosophy: small, sharp tools which combine to solve larger tasks, oftentimes tasks that were not envisaged by the original author. I think Go packages embody the spirit of the Unix philosophy; in effect, each Go package is itself a small program, a single unit of change with a single responsibility.

The second principle, the O, is the open/closed principle. Bertrand Meyer wrote in 1988 that software entities should be open for extension, but closed for modification. So how can we apply this advice to a language written 21 years later?

In this first example we have a type A with a field year, and A also has a method called Greet. We have a second type B, which embeds an A; B provides its own Greet method, obscuring that of A. But embedding isn't just for methods; it also provides access to the embedded type's fields. Because both A and B are declared in the same package, B can access A's private year field as if it were declared in B. So embedding is a powerful tool which allows Go's types to be open for extension.

In a second example we have a Cat type, which can count the number of legs it has with its Legs method. We embed this Cat type into a new type called OctoCat and declare that OctoCats have five legs: OctoCat defines its own Legs method, which returns five. Yet when we call PrintLegs, it returns four. That's because PrintLegs is defined on the Cat type; it takes a Cat as its receiver, and so it dispatches to Cat's Legs method, not any other. Cat has no knowledge of the types it might be embedded into, so its method set cannot be altered by embedding it into other types. So we can say that Go's types, while open for extension, are closed for modification.
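Here is a minimal sketch of the two embedding examples just described; the exact field name, greeting text and values are reconstructed from the description above, so treat them as illustrative:

package main

import "fmt"

// First example: B embeds an A. Because both types are declared in the same
// package, B can read A's unexported year field as if it were declared on B,
// and B's Greet method obscures A's.
type A struct {
	year int
}

func (a A) Greet() { fmt.Println("Hello GolangUK", a.year) }

type B struct {
	A
}

func (b B) Greet() { fmt.Println("Welcome to GolangUK", b.year) }

// Second example: Cat reports four legs; OctoCat embeds Cat and declares five.
type Cat struct{}

func (c Cat) Legs() int { return 4 }

// PrintLegs is defined on Cat, so it always dispatches to Cat's Legs method,
// no matter what type Cat is embedded into.
func (c Cat) PrintLegs() { fmt.Printf("I have %d legs\n", c.Legs()) }

type OctoCat struct {
	Cat
}

func (o OctoCat) Legs() int { return 5 }

func main() {
	var a A
	a.year = 2016
	a.Greet() // Hello GolangUK 2016

	var b B
	b.year = 2016
	b.Greet() // Welcome to GolangUK 2016

	var octo OctoCat
	fmt.Println(octo.Legs()) // 5, OctoCat's own Legs method
	octo.PrintLegs()         // I have 4 legs, via Cat's method set
}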
In truth, a method in Go is little more than syntactic sugar around a function with a pre-declared formal parameter, its receiver. The receiver is exactly what you pass in as the first parameter of the function, and because Go does not support function overloading, OctoCats are not substitutable for regular Cats. Which brings us to the next principle: the Liskov substitution principle.

Coined by Barbara Liskov, it states roughly that two types are substitutable if they exhibit behaviour such that the caller is unable to tell the difference. In class-based languages, LSP is commonly interpreted as a specification for an abstract base class with various concrete subtypes, but in Go we don't have classes or inheritance, so substitution cannot be implemented in that way. Instead, substitution is the purview of Go's interfaces. In Go, types are not required to nominate that they implement a particular interface; any type implements an interface simply by having a matching method set. We say that in Go interfaces are satisfied implicitly rather than explicitly, and this has a profound impact on how they are used.

Well-designed interfaces are more likely to be small interfaces; the prevailing idiom is an interface that contains only a single method. It follows logically that small interfaces lead to simple implementations, because it's hard to do anything else, and this leads to packages composed of simple implementations connected by common behaviour.

This brings me to io.Reader, easily my favourite interface in Go. io.Reader is really, really simple: Read reads data into the supplied buffer and tells you how many bytes were read and whether any error was encountered during the read. It seems trivial, but it is so powerful, because readers deal with anything that can be expressed as a stream of bytes. We can construct readers over just about anything: a string, a byte array, standard input, a network stream, a gzipped tar file, the standard output of a command being executed remotely. All of these implementations are substitutable for one another, because they follow the same simple contract. So the Liskov substitution principle, applied to Go, could be summarised by this lovely aphorism from Jim Weirich: require no more, promise no less.
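For reference, io.Reader is declared in the standard library's io package with that single Read method; the sketch below, in which countLines is a hypothetical helper rather than anything from the talk, shows why anything expressible as a stream of bytes is substitutable for anything else:

package main

import (
	"bufio"
	"fmt"
	"io"
	"strings"
)

// io.Reader, as declared in the standard library's io package:
//
//	type Reader interface {
//		Read(p []byte) (n int, err error)
//	}

// countLines depends only on the Reader contract, so it works with anything
// that can be expressed as a stream of bytes.
func countLines(r io.Reader) (int, error) {
	sc := bufio.NewScanner(r)
	lines := 0
	for sc.Scan() {
		lines++
	}
	return lines, sc.Err()
}

func main() {
	// A Reader over an in-memory string is substitutable for a Reader over a
	// file, a network connection, a gzipped tar stream, and so on.
	n, err := countLines(strings.NewReader("require no more,\npromise no less\n"))
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(n) // 2
}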
That's a great segue into the next SOLID principle, the interface segregation principle, which reads: clients should not be forced to depend on methods they do not use. In Go, applying the interface segregation principle means isolating the behaviour a function actually requires to do its job.

As a concrete example, suppose I've been given the task of writing a function that persists some Document structure to disk. I've called it Save; it takes an *os.File to write to and the document we're going to persist. This has a few problems. The signature of Save, as presented, precludes the option of writing the data to a network location, unless of course that network location happens to be mounted as a filesystem somewhere. Assuming that network storage is likely to become a requirement later, we would have to change the signature of this function, and that would affect all of its callers. Save is also unpleasant to test, because it operates directly with files on disk: to verify its operation under test I have to read back the file I just wrote out, I have to make sure the file is written to a temporary location so that I don't overwrite something else or conflict with other test runs, and I have to clean it up at the end. And *os.File defines a lot of methods which are not relevant to Save: it can read directories, check whether a path is a symlink, and do a whole bunch of other things Save isn't interested in.

It would be really useful if we could write the signature of Save in a way that told the caller only about the parts of *os.File we're actually interested in. One option is to use io.ReadWriteCloser to apply the interface segregation principle and redefine Save to take an interface describing more general file-shaped things rather than a file. With this change, any type that implements io.ReadWriteCloser can be substituted for *os.File. This makes Save broader in its application, and it clarifies to the callers of Save that the only methods of *os.File we're interested in are Read, Write and Close. As the author of Save, I suddenly no longer have the option of calling those extra methods *os.File provides; I can't cheat any more, because they've been hidden behind the io.ReadWriteCloser interface.

We can take the interface segregation principle a little further. If Save follows the single responsibility principle, it's unlikely that it will read back the file it just wrote to verify its contents; that should be the responsibility of a different piece of code. So we can narrow the interface we give to Save to talk about just writing and closing. Secondly, by providing Save with a mechanism to close its stream, which we inherited in this desire to make the thing still look like a file, we raise the question of under what circumstances wc, the WriteCloser, will be closed. Maybe you can solve this with documentation, but then somebody has to read it. Perhaps Save will call Close unconditionally, or perhaps Close will only be called if Save is successful. That ambiguity makes things hard for the caller, who might want to write additional data to the stream after the document has been written, perhaps several documents; that becomes very hard if Save just closes the stream straight away.

One crude solution is to define a new type which embeds an io.Writer and overrides the Close method, preventing Save from closing the underlying stream: a NopCloser whose Close method does nothing. But this is potentially a violation of the Liskov substitution principle, because NopClosers don't actually close anything: they follow the contract, but they don't follow the behaviour.
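A minimal sketch of the narrowed Save just described, assuming Document is a simple placeholder type and that this version closes the stream unconditionally, which is exactly the ambiguity mentioned above; the package name docstore is hypothetical:

package docstore

import "io"

// Document is a placeholder for whatever structure is being persisted.
type Document struct {
	Data []byte
}

// Save writes doc to the supplied WriteCloser. Narrowing the parameter from
// *os.File, via io.ReadWriteCloser, down to io.WriteCloser means any
// write-and-closable type can be substituted, and Save can no longer reach
// the unrelated methods that *os.File provides.
func Save(wc io.WriteCloser, doc *Document) error {
	if _, err := wc.Write(doc.Data); err != nil {
		return err
	}
	// Closing unconditionally is one arbitrary answer to the question of
	// when the stream should be closed.
	return wc.Close()
}

// NopCloser is the crude workaround: it embeds an io.Writer and overrides
// Close with a no-op, so Save cannot close the caller's underlying stream.
type NopCloser struct {
	io.Writer
}

// Close has no effect on the underlying writer.
func (c *NopCloser) Close() error { return nil }

A caller that wants to keep writing to the stream can wrap its writer in a NopCloser, at the cost of the Liskov concern just described.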
A better solution is to redefine Save to take only an io.Writer, stripping it completely of the responsibility to do anything but write data to the stream. By applying the interface segregation principle to our Save function, the result is simultaneously a function which is the most specific in terms of its requirements, it needs only a thing that is writable, and the most general in its use, because now we can use Save to persist data to anything that implements io.Writer, be it a file, a network connection, a byte buffer, whatever.

Stepping back a little: there's a quote I've seen popping up a lot in the last year, percolating around the Twittersphere, and while the tweet-sized version lacks a bit of nuance, and that's not Jack's fault, I think it represents one of the first pieces of defensible Go design law we've come across.

The final SOLID principle is the dependency inversion principle, which is a little lengthy. It states that high-level modules should not depend on low-level modules; both should depend on abstractions. Abstractions should not depend on details; details should depend on abstractions. It's a bit wordy, so what does dependency inversion mean in practice for us as Go programmers?

If you've applied all the principles I've talked about up to this point, your code is already going to be factored into discrete packages, each with a single, well-defined responsibility or purpose; your code will describe its dependencies in terms of interfaces, and those interfaces will be factored to describe only the behaviour that the functions which use them actually require. In short, there shouldn't be much left to do at this point. What I think Martin is talking about in this principle, certainly in the context of Go, is the structure of your import graph. In Go, your import graph must be acyclic; a failure to respect this acyclic requirement is grounds for a compilation failure, but more gravely it represents a serious error in design. All things being equal, the import graph of a well-designed Go program should be wide and flat rather than tall and narrow. If you have packages whose functions cannot operate without enlisting the aid of another package, that's perhaps a sign that the code is not well factored along package boundaries. The dependency inversion principle encourages you to push the responsibility for the specifics as high as possible up the import graph, leaving the low-level code to deal in terms of abstractions, in terms of interfaces.
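As a rough sketch of what that can look like, condensed into one file for illustration; in a real program these pieces would live in separate packages, and every name here is hypothetical rather than taken from the talk:

package main

import (
	"bytes"
	"fmt"
)

// DocumentStore is the abstraction the high-level code depends on.
type DocumentStore interface {
	Save(name string, data []byte) error
}

// Publish is written purely against the abstraction; it has no knowledge of
// disks, networks or buffers.
func Publish(s DocumentStore, name string, data []byte) error {
	return s.Save(name, data)
}

// MemStore is a low-level detail that satisfies DocumentStore implicitly.
// Swapping it for a disk- or network-backed store requires no change to
// Publish.
type MemStore struct {
	docs map[string]*bytes.Buffer
}

func (m *MemStore) Save(name string, data []byte) error {
	if m.docs == nil {
		m.docs = make(map[string]*bytes.Buffer)
	}
	m.docs[name] = bytes.NewBuffer(data)
	return nil
}

func main() {
	// The choice of concrete implementation is pushed as high as possible up
	// the import graph.
	store := &MemStore{}
	if err := Publish(store, "talk", []byte("SOLID Go Design")); err != nil {
		fmt.Println(err)
	}
}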
To recap: when applied to Go, each of these SOLID principles is a powerful statement about design, but taken together they have a central theme. The single responsibility principle encourages you to structure your functions, your types and your methods into packages that exhibit natural cohesion: the types belong together, the functions serve a single purpose, they want to be together. The open/closed principle encourages you to compose simple types into more complex ones using embedding. The Liskov substitution principle encourages you to express the dependencies between your packages in terms of interfaces, not concrete types; by defining small interfaces, we can be more confident that implementations will faithfully satisfy their contract. The interface segregation principle takes that idea further and encourages you to define functions and methods that depend only on the behaviour they need; if your function requires a parameter of an interface type with a single method, it is more likely that the function has only one responsibility. And the dependency inversion principle encourages you to move the knowledge of the things your package depends on from compile time to run time; in Go we see this as a reduction in the number of import statements in a source file.

If I had to summarise this talk, it would probably be this: interfaces let you apply the SOLID principles to Go programs, because interfaces let Go programmers describe what their package provides, not how it does it. And this is all just another way of saying decoupling, which is indeed the goal, because software that is loosely coupled is software that is easier to change. As Sandi Metz notes, design is the art of arranging code that needs to work today, and to be easy to change forever. If Go is going to be a language that companies invest in for the long term, the maintenance of Go programs, the ease with which they can be changed, will be a key factor in their decision.

So let's return to the question I opened my talk with: how many Go programmers are there in the world? Here's my guess: I think by the end of the decade there will be 500,000 Go developers, and that's pretty reasonable. There are roughly 100,000 Scala developers today; does it seem so improbable that by the end of the decade there will be five times that number of Go developers? So what are half a million Go programmers going to do with their time? Well, obviously, they're going to write a lot of Go code, and if we're being honest, not all of it is going to be good; some of it is going to be quite bad. Please understand that I don't say this to be cruel, but every one of you in this room knows, from your own experience with the other languages you came to Go from, that there's an element of truth in this.

The opportunity for all Go programmers to make our language a success hinges directly on our collective ability not to make such a mess of things that people start to talk about Go the way they joke about C++ today. The narrative that derides other languages as bloated, verbose and over-complicated could, if we're not careful, one day be turned upon Go, and I don't want to see that happen. So I have a request: Go programmers need to start talking less about frameworks and more about design. We need to stop focusing on performance at all costs and focus on reuse at all costs. What I want to see is people talking about how to use the language we have today, whatever its limitations, to design solutions and solve real problems. What I want to hear is people talking about how to design Go programs in a way that is well engineered, decoupled, reusable and, crucially, responsive to change.

Now, it's great that so many of you are here to hear from the great lineup of speakers we have over the next two days, but the reality is that no matter how large this conference, or any conference, grows, compared to the number of people who will use Go during its lifetime we're just a tiny fraction. So we need to tell the rest of the world how good software should be written: composable software, software that is amenable to change, and show them how to do it using Go. And this starts with you. I want you to start talking about design.
Maybe use some of the ideas I talked about today; even better, do your own research and apply those ideas to your own projects. Then I want you to write a blog post about what you learned, teach a workshop about what you did, write a book about what you learnt, and come back to this conference next year and tell everybody what you achieved. Because by doing these things, we can build a culture of Go developers who care about programs that are designed to last. Thank you so much.
Info
Channel: GopherCon UK
Views: 72,918
Rating: 4.8968406 out of 5
Keywords: Go, Golang
Id: zzAdEt3xZ1M
Length: 27min 30sec (1650 seconds)
Published: Wed Sep 07 2016