Entity Framework Community Standup - Hot Chocolate 12 and GraphQL 2021

Video Statistics and Information

Captions
hello, welcome to the final Entity Framework Community Standup of the year. We have a bit of a festive theme this week, as you might be able to see, and for most of the show we're going to be talking Hot Chocolate with Michael Staib — which is a great GraphQL library for .NET. So we all have a mug of hot chocolate, or at least we're pretending; I think Jeremy actually really has hot chocolate in his, he's keeping it real for us. This is legit for sure. So before we do that we'll just do the final State of the Unicorn for the year. Now Shay suggested that we should do State of the Reindeer, but I couldn't do it because I'm just too much of a fan, so instead we have unicorns pulling the sleigh — I think this is something you're going to see for the first time this year, unicorns, honestly. So this year has all been about EF Core 6.0, released in November about three weeks ago. Jeremy published a blog post just this last week, Get to Know EF Core 6 — let me see if I can quickly show you this. So this is on the .NET blog, and on here there are videos and links to videos and docs and everything that you want to know. We have videos and docs for the Cosmos DB stuff — we did a bunch of Cosmos DB work in there; compiled models, a great feature from Andriy to improve your startup time; configuring conventions, again from Andriy, which helps your model building; migration bundles — Brice has been working really hard on that and people seem to love it; Shay has done a lot of performance work; and temporal tables from Maurycy. And then of course Smit, as always, does all of the great query stuff, even though it's not on the list here, because it's so fundamental to everything. And then of course we've got GraphQL links on there, because GraphQL works great in .NET and that's what we're going to talk about today. Just want to point out that we've been on NuGet for just over three weeks and we're at 400,000 downloads already for EF Core 6.0, so thank you to everybody who's downloaded it, tried it, and submitted issues and questions — it's been great. So next year will be about EF7. We've got the plan ready; it's going to be reviewed by the bigwigs, the .NET directors, next week, and then we'll put it up on GitHub and in the docs for everybody to see and give feedback on. As always it's a continually evolving plan, so just because we publish something doesn't mean we don't want your feedback on what other things we can do — so look out for that probably next week sometime. And with that, let's go over to GraphQL. So Michael, what have you got to show us today? Yeah, lots of little demos where we will look at the state of things now with .NET 6 and EF Core 6, and also a bit at the end-of-year release, 12.4, where we are making Entity Framework easier to use in Hot Chocolate going forward. Awesome — it was already pretty easy to use, but yes, it will be easier and less cluttered. But yeah, we can start. Okay, let me share this here. As always I start with an empty console. The first thing we added with Hot Chocolate 12 are better templates that align with .NET 6, and this is really great to get you started with either ASP.NET Core, which we have supported for a while now, but also with Azure Functions — and starting with Hot Chocolate 12.3, I think, we started to ship Azure Functions support.
So if you're into Azure Functions it's now super easy to start. Let me just jump into my first demo and then we can have a look at some code. In order to get the templates for Hot Chocolate you just start with your console, dotnet new -i, and then you essentially type in the templates package. That actually works with a lot of templates that are out there and supported by the community, and the cool thing is that these templates then show up in Visual Studio. So if you just do your code from the console like I do, but you want to run these templates in Visual Studio for Windows or Visual Studio for Mac, just install them from the console and you're good to go. So what we are doing is installing the Hot Chocolate templates, and we are taking the leading-edge stuff here — I think it's preview 14 of 12.4 — because we are dogfooding them. So what you can see now is that I just installed my templates here: I have a Hot Chocolate GraphQL function template, which is essentially Azure Functions 4, and I have my Hot Chocolate GraphQL server template here. And it's quite easy to get started with Azure Functions: I just say dotnet new with my GraphQL Azure Functions template and we are good to go — have a look at that. Awesome — it's very cool that you can just add your own templates as NuGet packages and have them integrate into .NET like this. This is something I remember there was a lot of talk about when we were doing .NET Core: let's make it so that people can publish NuGet packages with their own templates, and you're not limited to whatever you've got installed in Visual Studio; you can use them from the command line, other people can add them — so I love seeing this working. Yeah, it makes it so much easier. The startup experience before was fixed to what is in Visual Studio, and as a community project you weren't able to get any of your templates in there; now it's so easy, it's one line of shell script and you're good to go. So what we can see here is that I have my Azure Functions project, and I can see there is a Query, like for any GraphQL server, and you can see it's already .NET 6, so we have these file-scoped namespaces here, and we have lots of other things — like there's a Startup. The Startup is actually just a hook into Azure Functions that configures my GraphQL server, and in here I can just define it — that's just an assembly-level attribute that you put anywhere in your assembly, basically, and essentially it's just a standard class library. So there are two experiences in Azure Functions, the out-of-process one, and this is the in-process experience, right — and that just integrates with it, so this will handle the configuration like any other GraphQL server in Azure: instead of AddGraphQLServer we have AddGraphQLFunction here, plus the function integration. Let me just restore that so it looks a bit nicer, remove all those squigglies — yes, there we go, looks much better. This is essentially a standard function: it exposes our GraphQL server as an Azure Function through an HTTP trigger and hands all the requests to the GraphQL executor that we wrote for Azure Functions. So in order to start that we can go here on the Azure symbol and then just initialize our Azure Function — and I hope it works. Looks good.
This is, by the way, also very awesome — this is a plug-in for Visual Studio Code for Azure; it integrates Azure Functions and all the other resources that you have managed. With this we are good to go and we can just say func start, which is the Azure Functions console CLI, to get started with it. Okay, so our function is already running; I can just grab this URL here, start a browser, put it in, and you get started with Banana Cake Pop, our GraphQL IDE. When I thought about Azure Functions I always thought about some HTTP triggers that do one thing, but it turns out you can do a whole lot of stuff with Azure Functions — this is essentially a full web server running in there, serving out the GraphQL interface, the GraphQL IDE, and also our GraphQL server. So I can just query this and see I have Luke Skywalker. So that is pretty easy now: getting started with Azure Functions to host a GraphQL server very cheaply on Azure. So what does the experience look like if, let's say, we have multiple concurrent hits at the same time — what is shared, what's not shared, what does the startup time look like? So you saw the startup time here; you have a little bit of overhead for the startup, but to be honest this is not a big schema. If you have Entity Framework under there with a large database schema or stuff like that, you will have higher startup times. There are solutions to that: you can either go for a premium plan where you have warmed-up workers, so you avoid the delayed startups, or you split your schema onto multiple functions and essentially merge them on one Azure Function, so you have a smaller startup time because you are only initializing the parts that you need. So there are a couple of approaches to that. The biggest problem is actually GraphQL subscriptions, but there is a new service on Azure that is called Azure Web PubSub and we are integrating that at the moment — we already have a prototype and I hope that is also going into 12.4 — and then you are able to offload these real-time connections to Azure Web PubSub. So for our audience, subscriptions in GraphQL, just to make that clear, are: instead of just issuing a query in GraphQL and getting the response back, you subscribe to data that's going to come back to you, and it comes back in real time, as a flow, as a stream of data. Exactly. We already dove into the complex stuff around that in GraphQL before, but it's an often-asked question: okay, you can run on Azure Functions, but can you only do simple fetches? It turns out you can now handle all the complex stuff. That's very cool. And you can integrate with this — is that supported as well? Yes, and we look at this, I think, in my demo 5 or something. Okay, so this is just getting started for most people: if you want to do Azure Functions, host it very cheaply, don't pay for a GraphQL server that nobody uses at the moment, or one that scales to a lot of pressure on your API — that you can do now with our Azure Functions template. Okay, so let's dive into some Entity Framework, and with Entity Framework let's go to my demo 2.
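The books context used in demo 2 is roughly this shape — a minimal sketch assuming SQLite and the Book/Author entity names that come up below; the demo's real model may differ:

    using Microsoft.EntityFrameworkCore;

    // Minimal sketch of the demo's DbContext: two entities with a
    // many-to-many relationship that EF Core 6 maps by convention.
    public class BookContext : DbContext
    {
        public BookContext(DbContextOptions<BookContext> options) : base(options) { }

        public DbSet<Book> Books => Set<Book>();
        public DbSet<Author> Authors => Set<Author>();
    }

    public class Book
    {
        public int Id { get; set; }
        public string Title { get; set; } = default!;
        public ICollection<Author> Authors { get; set; } = new List<Author>();
    }

    public class Author
    {
        public int Id { get; set; }
        public string Name { get; set; } = default!;
        public ICollection<Book> Books { get; set; } = new List<Book>();
    }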
We had a couple of things that we thought were a bit difficult — when people start getting into Hot Chocolate and Entity Framework there are a couple of things people got confused by or had problems with. So the first thing: if we look at this query type here we can see we have this resolver — a resolver is essentially a field in GraphQL that fetches data — and here we can see we have the books context, a very simple books context that has the entities Books and Authors on it, and we are just fetching it and returning it. The book context is just the normal Entity Framework Core DbContext that everybody knows, right, just to make that clear. Let's have a look. Do we not need the Service attribute anymore? Are you getting to that? We are getting to that, okay. So we have the simple books context — I think I showed that the last time I was here — and then we have Authors and Books. So very basic, but many-to-many, so it's not just a flat model; there's a lot going on under the covers even though it looks basic. And we will see that — I have some joy in the projections demo, where you will see how awesome Entity Framework actually is — but let's not get ahead of ourselves. So the first thing here: this is a simple resolver, and we have our middlewares for projections, for filtering, for sorting, which essentially allow us to rewrite GraphQL queries into Entity Framework expressions that we put on top of IQueryable, and Entity Framework will then rewrite that to proper SQL. In the past we told people: okay, you have essentially two ways to do that. You either can use pooling, because then we can execute in parallel with multiple contexts in GraphQL, or you can write something like the Service attribute on here and say this is actually a service, and then we would inject that and handle everything from there. The issue with that is you would do it on all your resolvers, and it's a bit clumsy. So what we thought about is: first, we want to get rid of these things and centralize the configuration, because in your GraphQL API you just have a couple of DbContexts — maybe one, maybe more — so let's centralize that. Here you have your DbContext configuration that you do with just Microsoft's dependency injection integration, and then on the GraphQL side of things you can now say: okay, I register the DbContext on top of that. So now we know the DbContext and we know how you want to use it. The default for us — and we can change it here — is that we use a synchronized context, which means: by default, if you do AddDbContext, you get a scoped DbContext per request, and that means we have to tell the GraphQL engine that there is only one DbContext and we cannot access it with multiple threads. So the default in this case is Synchronized — you don't have to specify that, we do it for you. So does that mean you're essentially making sure that the context is only used by one GraphQL query at a time? Yeah. Okay, let me just get another window and then we see what it does. So here — let me just refresh the schema — we have two fields in GraphQL in a query; we would parallelize the data fetching and fetch essentially with two threads from Books. With a standard DbContext injected into your dependency injection we couldn't do that, so we would need to synchronize it, and that is essentially what we by default will do. So you can see A and B — there's no error.
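Concretely, the centralized setup being described looks something like this — a sketch assuming Hot Chocolate 12's RegisterDbContext/DbContextKind API and the BookContext sketched above; the attribute and method names are what I'd expect, not copied from the demo:

    using HotChocolate.Data;
    using Microsoft.EntityFrameworkCore;

    var builder = WebApplication.CreateBuilder(args);

    // Plain Microsoft DI registration of the (scoped) DbContext...
    builder.Services.AddDbContext<BookContext>(o => o.UseSqlite("Data Source=books.db"));

    builder.Services
        .AddGraphQLServer()
        // ...and one central line that tells the GraphQL engine how to use it.
        // Synchronized is the default for AddDbContext, so the argument could be omitted.
        .RegisterDbContext<BookContext>(DbContextKind.Synchronized)
        .AddQueryType<Query>()
        .AddProjections()
        .AddFiltering()
        .AddSorting();

    var app = builder.Build();
    app.MapGraphQL();   // serves the /graphql endpoint plus the Banana Cake Pop IDE
    app.Run();

    public class Query
    {
        // No [Service]/[ScopedService] attribute on the resolver anymore;
        // the registered BookContext is injected, and the Use* middlewares rewrite
        // the GraphQL selection into expressions over the IQueryable.
        [UseProjection]
        [UseFiltering]
        [UseSorting]
        public IQueryable<Book> GetBooks(BookContext context) => context.Books;
    }

The pooled variant discussed next swaps AddDbContext for AddPooledDbContextFactory and registers with DbContextKind.Pooled; services that wrap the context have a RegisterService counterpart (again, names per the 12.x previews, so they may shift).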
and if we look here at our query plan what graphql does it creates a serial shape and then executes first this this subtree and then the other subtree but that is also a problem because sometimes you want to scale much more and you want to have more throughput so with entity framework we actually have a solution for that and that is a pooled db context so we could say we want to pooled db contacts factory and this allows us to have multiple db contacts initialize them when needed and then give them back to a pool and in throughput tests we can see that it's a lot less memory that you use them if you can use it there are cases where this might not be a solution for you but when then you can put a lot more throughput on your graphql api so do you automatically uh discover whether or not there's a factory registered in di and then change behavior based on that not yet we are oh you have to change that okay that makes sense so you uh that's pretty cool yeah you do it at the central point now in this configuration and all your resolvers all your code is now reconfigured and what it does know is that essentially we automatically build a middleware around this field which will get a pool execute get books and at the end when this is finished we will return that um db context back to its pool so we have all the all the complexity around that behavior so the standard question that comes now from the from my friends that do clean architecture is that they don't have direct access to the db contacts it's behind some services and that's no problem for us because you can do the same thing with services by just saying okay i register one of those special services i don't have one of those but it's the same way and then you can also say this is actually a pool service and then we can handle it uh in a special way there are also other ways so if you had a so if you had a um a uh repository pattern uh and you say you've got an i books repository interface that's backed by adb context then you would basically say register us the ibooks repository as a service and then uh graphql would go through that and everything would work without direct reference to the db context in your your application right exactly awesome so this is now very transparent and gives you the the the ability to get actually the these resolvers now a lot cleaner remove these implementation details about how they are hosted in your dependency injection and then give essentially yes a more cleaner yeah very nice the whole so i i'm curious um and we're probably you know going off script here um had had you had we registered the db context not with a factory but as a transient service um would you then be able to do that without serialization because you would uh request a new okay so so that's another way to approach it too and of course you can uh you can use db context pooling for that as well so you could have uh di uh basically um serve up to you transient instances of the db context from a pool which functionally is very similar to what you're doing with the factory but you know depending on on what you're actually doing maybe maybe easier for some people yeah and there's a actually a third way and that would be the resolver level db context which means we will create a service scope for resolvers right a book context yes and the same works for the services and with the service you can also implement like uh an object pool of t and we understand that if we get that from the dependency injection so there is a lot of things that we can 
now handle and take out the complexity from doing uh more performant or more scaling apis i think there's a there's kind of an important kind of point about di here um di in general but maybe di specifically with ef in this case which is you know we have at db context which does a standard normal thing that's easy to use in web applications but there's a lot of other stuff that you can do with ti we have the different db context methods you know the pooled one um the factory one there's different scopes that you can use and di is very powerful for for doing lots of things so sometimes people kind of assume that you need to resolve a db context directly out of di but no you can resolve a repository you can resolve it as transient you can you can resolve it as scoped i would not put a singleton db context in the mdi i mean you can if you want but that's usually a pin of failure um there's a lot of different things you can do with di as well worth kind of looking into into some of those patterns yeah that's uh that's one of the things um so the the where we see the most failure was to be uh context or with the entity framework yeah is um when people build these repositories and try to lift just with one thing uh yeah yes yes and that is a he's saying the singleton context behind a repository is not the way to go it really isn't people it really isn't but that's that's only on the on the other side when we when we looked into performance with with dotnet six and also pooled it's it's um we have graphql examples where it's six times faster now yeah and then yeah so it's worth stressing even regardless of graphql that db context pooling is something that if if you really care about performance you should at least look look at and measure if it if it matters in your application so newing up creating instantiating a db context every single time you want to do a query and then disposing it might be surprisingly uh might you know be heavier than you think basically so just keep it in mind but yeah now i have to put the opposite side to that as i always do when this comes up with shiny and i which is if you're doing a simple thing so if you're doing like a single graphical query or you're doing you know you're making a single query to the database and returning them then that's true the db context pooling helps if you're doing anything where you're doing you know a thing is now change tracking business logic going on it quickly becomes negative more um a lot the services behind db context already have a caching mechanism for you know built in the model is built you know um db context is is designed to be nude up uh so the pooling really can make a big difference but as i said measure it because if you're not doing just a super simple thing with the db context as soon as you're doing something complicated uh then it quickly becomes not the bottleneck and instead it's all the other stuff you're doing that becomes a bottleneck yeah there are a lot of things that you actually have to look into like if you need tracking or stuff like that yes exactly oh absolutely absolutely um okay so this is just the query part it's now a lot sleeker you can see also it integrates very well in the new mini minified api and makes yeah next really it essentially looks like you're just writing a c-sharp class but out you get a full graphql server that produces as you see here sql from your query so i just query it for title and we got a query that really just asked for type from there yeah i want to triple quadruple 
whatever the next iteration of that is emphasize that because that is to me the most powerful aspect of these two married together is that projection goes all the way down dignity framework right we're not grabbing everything from the database and then selecting what we send to the front end we're literally piping it all the way down and this means that as a front-end developer i can make changes and not wait on someone to change the server and not worry about those changes being inefficient i mean i still should look into it and and there's nuances but the fact that it passes the sorting the projections down to the database level is so key and that's what makes it very different from a lot of other approaches yeah i want to just call out as well the question that was already answered in the chat about cosmos db you know we're showing i assume this is this going against sql server michael well this is actually sql lite at the moment but that's the point like we've got these different providers so this isn't tied directly to a certain relational database or even necessarily relational database if you wanted to this could be a cosmos db server behind it everything would still be the same cosmos db has different capabilities in the relational database so you know make sure you are using the right tool for the right job there but you've got that abstraction and that common patterns behind it so now you can tie all of these things together and basically have the same experience yes and uh i mean we are using a lot in a lot of uh projects we are using using postgres for instance it works the same there so that that is really the beauty of empty fragment um and to really make the point here with entity framework this is actually a demo um my colleague pascal built for me and it is a more complex db context which uses uses inheritance and stuff like that um so we have uh again a book context here and we have some publications authors readers and um uh so so we have inheritance built between these and actually if we look in our configuration it's a bit more complex we have an interface so we we use a class publication here as an interface it's actually a type so an entity but we say in graphql force it's an interface and ebook paper and magazine actually inherit from this type so this didn't work with hot chocolate 11 but uh now we we understand also these things from uh entity framework so what we what we can let me just run that and just to just to have a look at the query again it's a simple query we're using here our book context and have sorting stuff on it and we have configured it here to be a pooled db context so very simple and now let me just go into this demo fear actually we skipped one i have to make up later for that so what i what i did here is uh essentially we have a simple query so get publications that mean that maybe it's just a list of infos what publications we have so if i query that let me just pick up the query you can see we have here now a query that builds up the sql for us to get this is paginated calls so we have also a limit in here and essentially grabbing some of the data but now that we made uh graphically aware of all the interface dependency and stuff here we can see that we actually can ask our publication here for if it's a book if it's a book we want the author or if it's an ebook we want uh to get the the reader name or if it's a magazine the schedule or or if it's a paper maybe the field of research we are drilling into so if i execute that now and this is a 
fairly complex query now then you can see depending on the type here this a book or this is the ebook we get different shapes of data and if we look at the the query here now we just changed the graphql query and entity framework i mean we translate it to expressions but entity framework figured out this fairly complicated query and built us uh just thing we need to fetch our data and when i saw this demo i thought that's really that's really crazy with what's what complex queries entity framework manages to still get some useful escrow out so yeah very cool yeah excellent so um yeah also with very complex entity framework db context you will get a beautiful graphical api where you can drill in and look at all these connections so the the nice thing about graphql2 is that you know we allow for edge scenarios we were talking about cosmos db and there's some link queries that are just too complex to translate right now and you can drop down to raw sql and what i love about graphql is you can build a resolver for that piece of the query that requires that but you can still use the the rest of the stack for the other parts of that and it all just surfaces as a single api exactly and you often have that if you think about scenarios where you now drill into further into the data and it might become too complex so you have a resolver here to get into another shape and then it maybe use a data loader or something else so there are a lot of constructs that you can use to handle those scenarios yeah that that seems very powerful to me the idea that out outside you expose like uh you know like like a graph you know like a thing but then the actual specific nodes inside the thing that's being queried you can have you know the generic thing and people just compose over it you know projections filters or whatever but you can also decide to override um you know as as one says to override it and provide your own custom implementation which for example would be a lot more performant if you know that is a very specifically hot kind of api point for you right so that that's very that's invaluable and that's something that i know i get asked a lot about comparing graphql to odata and they're similar but they're different and they serve different purposes and to me o data is a lot more about serializing a query over the wire whereas graphql is different in that it explicitly allows you to specify how the runtime resolves the bits of the court and that's a subtle but very important difference you can obviously just hand it off to an iquariable but you have that built-in flexibility to control what aspects of it and how you retrieve them i see this this kind of collaboration of different technologies is a real example of how polyglot persistence um is works in in the real world i mean i don't know for those of us who remember back to what seems like quite a long time ago now when you know you know no sequel was like oh do no sequel and other people are like no but relational is better for this and there's different tools for different jobs and and uh i i i may be getting this wrong but i'm sure martin fowler at one point you know wrote a blog post or something about polyglot saying right well yeah you you've got different data sources for different reasons like maybe this part of your data source needs to be in a cosmos backend or in or just in a text file somewhere or whatever and the rest of this needs to be in a relational and you know this bit should be in a in a key store or whatever um and that's all that 
all makes sense but then being able to tie it all together to a single api which the consumer then uses but which splits out to the different parts on the end this is i think to me a real like concrete example of uh actually being able to use polyglot persistence in a real world kind of application which i think is very cool yeah and um just just to make the point here so uh as she and jeremy also mentioned you can put in these resolvers this could also be like you have a couple of things in your entity framework like this magazine is in our entity framework but maybe you have data somewhere else maybe in files maybe in a blob storage whatever so it's very easy to extend this magazine type without actually changing the magazine type so we could introduce a new type that is maybe our magazine mega just copy that magazine let's say there's an info about our magazine so we call it the magazine info and this type actually in graphql will extend extend the the magazine type that we have here so i don't have to bind it to the type i also could specify a name here you can see i also could define the graphql type name so they could be very loosely coupled could be in completely separate assemblies and then i can introduce a new resolver here that now gets us some extra data let's call it info about our magazine this is an info this is how easily you can essentially extend now the model that we have from entity framework and enriches with data from somewhere else so the one thing we just have to do is add a type extension here and then put it in run our server again and our schema has changed and we have now uh we are resolving data from entity framework and then um are able to extend it from that's very cool yeah there you can see the info and would now get the info as we need it no i read the upper part this thing where you have where what you know the model you expose in graphql is um you know fundamentally it's coming from you of course in this example but being augmented with non-ef core things i find also very powerful this idea that you can i'll use the word stitching which you use which you use in a different i think uh sense usually in graphical but still i'll still use that word you stitch together like an ef core model with another uh you know extra stuff right that's coming from who knows where uh and presenting all that to uh you know to your consumer to your graphql api consumer that's awesome yeah and that's that's really empowering you have this data and i mean you have lots of data sources in in your everyday life and you can just bring them together um like nick schrock one of the graphql inventors one of the three persons that came up with the graphql spec said graphql is a bit an overcompensation of the microservice architectures like graphql has this one graph that you have to explore and query all your data and that's what a lot of people love if it's data scientists or front-end developers they can just go to this one graph behind this group one graph could be microservices that are very very small services all over the place and you just bring them together in this graph and make them accessible to the user that's one thing i've been doing a lot of research of graphql and talking to a lot of developers about it and the one theme that surfaced is this idea of a single endpoint and the ability to create it as a facade behind other services so a lot of developers who are modernizing applications are writing resolvers for their backend apis they're saying we don't have to 
rewrite them just yet we can expose them as graphql and the resolver just talks to wcf in the the back end or whatever that legacy is and it's pretty amazing to see those themes surface across the board yeah and the wcf is a good example because wcf has these huge data of these huge messages that would uh take a lot to send over the wire but with graphql you can essentially put a graphql server very close to that old api so you have unlimited bandwidth between your graphql server and this wcf server and then you just send down the data that the user actually requested from that wcf service so that's a very good optimization and still with this old server you can get a lot more performance than now okay let's have a look at some new graphql features and that is hot chocolate is one of the two first graphql servers to implement that that is called the defer and stream spec and that allows you to de-prioritize parts of your query part of your query graph just switch the demos here and then we have a look why this is so cool and it's actually not a new idea to graphql facebook uses it uses this actually for years um just let me get that up and then ex i explain a bit so facebook actually uses defer and stream and also there's a third directive in that regard it's live for years in that graph so the main problem that facebook has is with grapher is awesome to fetch everything in one go and that makes it very efficient to get all the data that you need for your ui components maybe in one data fetch but it also introduces a problem if you have new data graph maybe some resolvers that are slower for instance facebook fetches the new stream and they all have the comments what they want so that the that the site appears to be very fast is first get the get the get the news and then one by one get the comments without the user even noticing it and that's what stream and defers about let's have a look so i have a very slow result right here it's my query and it essentially gets books again um one thing is here that we take the books now as an async enumerable so we essentially wait for each of our data over our db set i built in here a very bad de-optimization so each iteration costs us 500 milliseconds so let's run that the optimization that's good yeah i like it we should we should get some of those into into our products those are good so you can remove them and say look we've become faster it's like the uh when you used to have the turbo button on the pc right because the new pcs were too fast for for the code written so you had to slow it down i i have to admit the first pc that had a turbo button on i had no clue what it was doing i was like trying to figure out like okay what's faster or slow when i press i don't know what's going on here but but i had to i had to uh put the turbo mode sometimes out so run the pc slower to play certain games yes yeah of course yeah yeah like that game that you started on a on a on a 486 and it was designed on an 86 and it was like suddenly everything is just zooming around it was using the cpu cycles for timing yes back to the bottom world good times so i'm i'm running now my graphql query and you can see it's not the fastest it's coming now six seconds that's very bad so what can we do here so graphql comes with these new features they are actually pre pre-draft stage so they're in the proposal stage at the moment they will come but there are a lot of stack edits so um but anyway they are they are already um quite stable uh we are iterating at the moment on 
the spec uh to to refine them so with every new graphql server from hot chocolate you get a new set of these implementations so what we can do is now say stream that's the simplest usage of stream and now look what happens i get them one by one after the other meaning the data is streamed down to me and if we look at the at the transport level we can see that we have patches first the graphql server will send me down over the same http connection the empty graph because i said we i want to stream this this piece of the query then i get a patch and you can see that it tells me okay this piece of content actually belongs in two books and it's the the first item essentially in that list and this is the data that you have to patch in that list nice very cool is it so is is it true so i'm i'm assuming that you're actually streaming all the way from the database so ef core itself also returns an an async and numerable right exactly that's why i said it's an asynchronou so that's i want to i want to make sure you know you everybody understands this if this is like a super uh complicated heavy query at the database side so the sql actually you know requires your database to work a lot every time your databases produces a row then this gets handed back through uh you know through hot chocolate all the way to the client which i find extremely impressive having this end to end like uh streaming very cool yeah because the cruise so how we support that is essentially that you say in your in your um reservoir that this reservoir returns an async enumerable and then we by default uh support end-to-end streaming if you don't have that we will emulate that so essentially build a task around that and stuff like that but uh for data sources like entity framework that support that we really have an end-to-end stream so in the the client does that just look like a callback that gets the updated data as it come in oh it's it's it's essentially if you look at the really what's going on the transport level it's so facebook has this problem that it has to run everywhere so how that works let's look at the transport body do we have that here yeah when you look really into it you can see that we have a chunked encoding so we update the stream in chunks and we are sending down a mime stream so it's uh like this uh content parts mime parts that we sent down so it's a standard http connection there's nothing special there's no socket involved so it works everywhere and works also on without any pops up stuff so it just works and because of the the beauty of uh of the way async works in in.net you're not consuming a thread the entire time you're doing this basically when when you don't have anything coming back the thread is threaded in your slide pool or going off and doing other things and you know so that's that's the that's the beauty of the the async await and uh for reach async there shy says going all the way back to your database server hasn't got anything for you yet you're not consuming anything so um great for for um serving multiple requests from the same server without uh without using up all your threads so i'll just sorry just one thing so there's there's a very uh pertinent comment here by dick baker on order by so it's true that if we're talking about maybe we can highlight this this question so if you if you're executing a sql query that has order by then the database cannot give you back results until it's gone through the whole thing right if people think about this logically when you say order by 
you're effectively asking the database to go through like the entire uh set and you can't it cannot start um you know returning any rows before it's seen all the data set because there might be a row that should come first right so in that kind of case you would still be streaming everything uh uh from the graphql side but the database side is no longer streaming right it's gonna give you back like the entire uh result set in one go so the the the end to endness is going to be cut at the level of the database unfortunately but not all queries require order order by right so that that's a pretty important thing and i'll just add one very very small thing this is actually this whole streaming versus uh buffering what we sometimes call it a lot of vehicle users are kind of like not aware of this so i we sometimes see a lot of people you know you do an ef core query not in graphql just a normal c sharp and at the end you put a two list right or or a two array or maybe a two list async if you're doing async so it's important to understand that when you do this you're basically telling efcor you're telling.net to take all the results back buffer them in a list and put them in in your memory as a list whereas instead of doing this if you did for each or a weight for each for example or if you use something like as enumerable you can stream those results as they come back from the database exactly as we've discussed here in the graphql context so this thing it's always worth thinking about how am i getting my results back from the database however i want to again come in with my with my uh opposite side of the the fence on shy on this that doesn't mean that using two-list async for example is not worthwhile um because again uh async is also about not having your threads be blocking while you're doing that database access so your database access might be taking a while to give everything back and even though you would want to use it as a list when it gets back onto the client side it's still very useful to do that as an async request um to avoid you know blocking blocking threads the entire time you're doing that so yeah it was it it also takes just time to produce the results like that's before that process so there's there's more to this so i can also i can also say i can also say maybe okay this this initial request if you look it's a 39 milliseconds for my first response whereas without stream we had to wait six seconds i can also say okay maybe i can wait a bit but i already have the first item to display on my website web page in this case you can see it was immediately there we have first initial response for the first byte which is 550 milliseconds but i get the first item and then it streams the rest so you can also tell the the stream which items you want immediately from the stream and which items you want to defer okay so but that is just stream there's also defer so let's let's um change our resolve a bit to make it deferable so actually for deferred there are no preconditions for the phone but i want to make the performance of our query even worse so we're gonna we're going to write a little extensions for our book and let's call it extend object type and then we extend actually uh the book type here and this is book details maybe and we put in a very very slow resolver that gets some details i don't know what details but uh let's just just uh maybe this is another 500 milliseconds just to make my point and then we return hello okay okay so this is this is very slow now and we are 
integrating that into book so this is actually then variable in our streamed list so let's just hook that in and type extension this okay let's fire up our graph dot run and see how that turns out okay so this is up let's just refresh our schema and then we should have no details here and this is our very slow field and this is where we stream our stuff but still like now for our first results we need two seconds and then also all the other parts will take a lot of time i think i have a buck this is as i said preview preview 14 which comes out at the end of the year so we will fix maybe this little bug but anyway the dangers of live streaming um so what we can do here is to make that faster this essentially defer this this field so we can say this shape we wanna actually defer and defer will essentially spawn another task off so this we are streaming now and deferring this field which uh will give us the first item but the first item will come with our details and then it will start streaming and deferring uh and swapping between these and resolving the data very asynchronously so let's try that and i hope my i you can see how it completes now so these fields are now so this is very very um very pertinent for front ends right on my mobile phone where the networking is bad on whatever 3g and so on then i'm the idea here is to get the essential data on the user screen very very quickly and maybe get uh you know the rest of the data a little bit later or something like that right this is a very front-end thinking kind of um yeah it's essentially the front end before that could just do aquarium and then it was slow but now we give the front end a tool to essentially tell the server to de-prioritize certain sections of the query and the server is allowed in the graphql spec to essentially ignore the defer so if this if i for instance would defer title but title i have in the moment i get the book so the server might know title is already there so deferring it will actually make the query slower we can skip that and just return it so michael what what does this do by the way in the context of an ef core query if say i i want to get a book and i wanted if and i put the fur on a specific attribute of a book like knowing sql databases obviously how how what what does that mean here um so so if the query is is projected if we have a big query we essentially uh will project the query and ignore skip the defer in it because we know we have the data when we do the first query right but we are we are essentially working on splitting the queries then and then just fetching sub-graphs but that is coming uh next year we we are working i guess i think entity framework also has a capability to split out the queries and that's this essentially yeah yeah we call this split queries in our world um yeah yeah that's essentially what uh what we we are working on to really spawn them out at the moment we would ignore the defer because we know it we essentially have the data there but uh we are getting more and more into this project stuff and making it more intelligent so as far as defer works so stream needs stream to efficiently work needs an iac innumerable defer needs nothing it just needs some deferrables thing so something that costs time and then we can put it in another task and if there would be another aquarium if details would query something else like really have a data resolver that does something on the db context you could see that the sql would run a bit later and if we look into logs then you can 
now see here we have 23 patches so you can see first is the first books coming then actually the the second book is coming here and then you can see there's the details coming and so on so you can really see how the data patches so we had a question for the front end like what does the fetch look like what does the code look like that's managing the defer how's it handed off the the pieces as they become available um depends on your depends on your client so we are working on so this is really really cutting edge and graph well this is a feature that is coming maybe into the graphql 2020 2022 spec as i said uh there are a lot of things we are working on in the spec group but there are clients in the javascript world like relay supports it already i don't know what what's about apollo but essentially they will patch these graphs and complete the models so what what maybe i didn't show here is there is something called the label and relay will use this label to label certain sections of the data so we essentially with this would label this selection set and then with it when this data comes it knows which component to update okay do you know yet what this would look like in strawberry shake yes i'm working on this um but we we we're experimenting with this because it's in design yeah it's a lot more complex with static types sure but we have we have a couple of ideas we have also a couple of prototypes i'm working on and maybe maybe end of the years we will release the first bits so we are experimenting with maui and i'm talking that a lot to brenton uh from uh the summary stuff at microsoft and yeah we will see we'll see but it's it's a it has a lot potential to um really reinvent how uis will work especially with blazer which plays that's actually more easily done than with exciting interesting so what what i want to see now is i want to see sql light in blazer web assembly running graphql as a uh on a server in the web assembly so my application could be accessing sql right through maybe going too far but i don't think you are actually you you you are laughing but i did the demo once with that but um wow so what we're actually doing also with strawberry shake like um if you're using blazer we we already have like subscription supported and so it's very rea it's a reactive approach to data and we allow you to store data on the local data on the blazer application in sqlite so that that works quite quite well already um but let's not dive into the wrong direction now [Music] i have i have one more i have one more thing on my list this is deferred as i said experiment with it so who has fun uh if you're if you're using graph graph from uh if you're using javascript front-ends there's already um clients out there that support that relay at least and next year maybe january you will have strawberry shake working with that as well okay so demo three that's it this is the final demo of the year so no pressure exactly [Laughter] so this is about it's also it's preview the 12.4 will come out uh at the end of this year so we are still refining things and this is about mutations um when people come to graphql.net then they get sometimes upset how much work mutations are so in graphql mutation actually consists of three parts if you want to have a well-designed schema we call them the mutation so that's the mutation essentially one of these methods is a mutation and that changes data so in graphql we have queries which are side effect free and they allow you to just read data they are very simple we 
saw that but then we have mutations that change data they are not side effect free and the mutation as i said consists of the mutation here our resolver it consists of the payload which represents our data and then it represents on the input and the input is another object which gives us the data let's just have a look at the graphql side and then i explain about this pattern a bit so okay let's just go in okay i haven't prepared a sheet for that let's go let's get here okay so my mutation in graphql starts with the mutation and then i can go into my author and essentially specify my input so why is there just one input that's a pattern that was established by relay it makes it very easy from a javascript client for instance just to pass the javascript object into uh essentially a graphical variable that you would use so you just have one thing you don't need to spread it or transform it that's why why we actually have this pattern of having a single input here so in this case i have i want to create an author let's call him michael i'm not an author but make me one and then i have a payload and the payload in this case looks a bit silly because it only has the author and in graphql we return the author because the mutation is what changes the data and everything that is now in this selection set down here is the ability to query the changed state of the server to create the changed author this could be useful for instance to maybe uh query for the id from that ozone database generated data for example in our relational world like uh an id that's auto incremented identity whatever all that kind of stuff you're getting it you posted a new author to the graphql server and at the same time in the same round trip you're getting back that data that was generated right exactly exactly and what we also have here and that makes it even more complicated if i would code that out uh user arrows so graphql has errors but the protocol errors are more meant for the developer maybe database was down and stuff like that but everything that is user-driven like maybe i'm not allowed to have uh also with just two letters here then i would essentially throw a domain error these domain errors are also designed into the response so i would be able to query for the errors so you can see that is a lot of data and objects i would have to create and that's often where people that are new to graphql find it a bit clumsy so what we thought is getting rid of these things and making the dotnet side again clean so essentially we would like to write our mutation like yeah let's do an author here and as an input we essentially in this case want just the name so we can just just inject what we have or what we what we like so we just inject the name and now we sacrificed our schema design for graphql but let's just run with that so let me just pass in the name and to get rid of all these kinds of things yeah let's kick that out so now our mutation looks a lot less cluttery and then we might have maybe maybe some custom domain orbs that we want to throw i know i know minimal mutations right right mutations for minimal apis i like yes yes i could be okay with that so um so maybe we just throw because i don't want to want to waste too much time just let's throw an exception and this exception we call maybe we just say okay foo because something went wrong in our location so this is essentially if we look at this now just a c-sharp method again we we can live with that it's just what we do would do on our repository maybe even 
expose our repository now in this way much more idiomatic to what you would expect in in sharp exactly so what do we have to do to make it a cool graphql api so the first thing is we opt into something that we call mutation conventions so what it does is essentially taking your standard c-sharp stuff and making it and granularify it make it awesome for graphql and we give you a control so these attributes are not fixed we are still still working on them so essentially i want to have just one attribute but at the moment you can specify a payload here so that you tell the engine generate me a payload object for my for my mutation and also generate me the input object for my mutation here last but not least i maybe have an error and this error is a invalid operation exception which makes no sense but let's let's say every error is an invalid operation exception that's that's that's our philosophy sorry so still iteration i think the next uh build it will just be um one attribute that you need to use i mean you could have multiple errors we can see that in a bit so there will still be the error attribute and what we want to also allow is to have no none of these attributes essentially give you a way to say okay infer everything for me i'm just writing simple code simple c sharp code infer everything make it graph very nice and then you don't have to annotate uh anything so concretely the idea here is that you guys would look in the for example in the ef core model to know which property is database generated and that would implicitly be returned in the payload for example is this like a direction you're going to we would essentially you tell us that something has mutations and we would merge it into the mutations and make it a nice graphical mutation okay so um essentially you would still still need this this mutation class here and say that there is an author mutation maybe it's a repository that you have written and then you give us this repository and tell us these are my mutations on the repository and then we would infer from it a mutation type yes so you're inferring based on the return type and the parameter signature what the input and payload are yeah you can see i'm i'm now just returning here author and i'm getting in the name [Music] i also get in the book context and cancellation token and i'm throwing here an invalid operation exception so what comes out now let's just refresh that is that it's the same so i can now execute that and actually i get an author but i don't get an author here because an error happened here but graphically i didn't display the error because i didn't ask for the error because now i actually can ask for the errors here and um graphql actually will also tell me that when they when they error i can i can either query for all errors like uh because there's an interface for instance give me all the give me all the error messages or i could now say because we specified it give me the invalid operation error and if my exception had some some special properties that i want to expose we will also infer that correctly and make it a nice error experience here and then i essentially can query it and i can see okay that's an invalid operation error because we threw that there and it has a message through so it's a much more natural for graphql for c-sharp developers now to write these mutations but you still get all the graphical benefits like it looks like proper graphql it's all the schema best practices that you have and yeah so we are making the experience even 
nicer with the the next release for entity framework and graphql in combination and also everything else you can bring so um i think we're we're we're a bit over time so um sorry for that no no no no it's fine we like we like going over time um it's it's it's we always go over time it's always better to have more um i just want a couple of obviously if you guys have things to say as well a couple of things to end with first i just wanted to show this comment um junior dev i have no idea what's going on but i aspire to understand what's going on um i say we've all been there and most of us are still there a lot of times you know don't don't think that everybody here understands what's going on all the time there's a lot of stuff that i saw today i'm like what's going on there what's that i just i just nod my head personally yeah i just say yeah oh yeah that's great exactly but um yeah keep going it does it does get easier to understand and then you go onto something new and it's hard again but yeah but it's worth it um and then uh i guess i have one final question for michael which is chili cream hot chocolate banana popcake and strawberry shake your names where do your names come from what made you come up with those things so um essentially we were looking for names and i always liked the the way javascript libraries do it they have some nice like gulp and uh strawberry graphql stuff like that they they have very funny names and when we were looking for a name um we were at starbucks and my my son every time he goes to starbucks or every time he sees a starbucks he says daddy i want a hot chocolate and then my brother then my brother has then she said let's let's name it hot chocolate and yeah when you have every time you go for a graphql you grab a hot chocolate now right and now we yeah we essentially recorded because my son like liked hot chocolate um from starbucks and then we came up with other things like starbucks has these cake pops i don't know if you know them every time you can get banana pop cakes from starbucks i was not aware of that yeah but they have banana cake pops and they have strawberry cake yeah i have to get yes at some point very cool good to know good to know yes yeah okay um well thank you so much for for joining us michael this is this has been some really uh inspiring stuff here um it's really great to see graphql on.net working like this um and uh thank you shai and jeremy for uh all the time this year we've done these stand-ups and everybody for watching them and uh generally being quite nice to us which is always uh always helpful um and uh yeah we're gonna we're gonna take a break over uh what's the holiday season for for most of us here and uh and then we'll come back uh come back next year with um with lots of new good stuff so um if you have suggestions about what you want to see um go on to the github repo there's a there's a pinned issue at the top um and yeah and jeremy's got his festive chair there as well so um wilson will uh will say goodbye and uh and see you all next year so thanks very much for for watching who's doing bye everybody the let's get out of here thanks is that you jeremy okay let us waffle around for a little bit longer yeah okay [Music] you
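For reference, the mutation-conventions demo near the end boils down to something like this — a sketch assuming the AddMutationConventions opt-in and the [Error] attribute roughly as they shipped after this preview (the exact attribute set was still in flux at recording time), reusing the Author and BookContext shapes from earlier:

    using HotChocolate.Types;

    public class Mutation
    {
        // Plain C#: the conventions infer createAuthor(input: CreateAuthorInput!): CreateAuthorPayload!
        // and surface the declared exception as a typed user error on the payload.
        [Error(typeof(InvalidOperationException))]
        public async Task<Author> CreateAuthorAsync(
            string name,
            BookContext context,
            CancellationToken cancellationToken)
        {
            if (name.Length < 3)
            {
                // Domain errors are thrown as exceptions and queried via the payload's
                // errors field, instead of the transport-level GraphQL errors.
                throw new InvalidOperationException("Author names need at least three characters.");
            }

            var author = new Author { Name = name };
            context.Authors.Add(author);
            await context.SaveChangesAsync(cancellationToken);
            return author;
        }
    }

    // Wired up by extending the AddGraphQLServer() chain shown earlier:
    //     .AddMutationType<Mutation>()
    //     .AddMutationConventions();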
Info
Channel: dotNET
Views: 5,154
Id: 3_4nt2QQSeE
Length: 77min 53sec (4673 seconds)
Published: Wed Dec 01 2021