Entity Framework Community Standup - Introducing EF Core Compiled Models

Captions
Hello, welcome to the Entity Framework Community Standup. We're here again, this time with a session from the EF team. I have Shay joining me as usual — Jeremy's out this week — and also joining is Andriy from the EF team. Andriy's been working on EF Core 6, on the compiled model, and that's what we're going to show today.

As always, let's start off with a bit of "state of the unicorn": what's going on in the EF world. Well, we have an updated plan for EF Core 6. You might have seen this in the notes from the weekly update last week. If you ever want to know what's going on with Entity Framework, we have this issue: if you go to dotnet/efcore and click on Issues, we have these pinned issues — the status updates, ideas for this show, and the plan — and the plan has been updated. The most significant change we brought into EF Core 6 is more Cosmos features. Basically, we've been doing customer development — talking to a lot of people about what they need in Cosmos to be able to run their scenarios efficiently — and if I go back to the plan update, that resulted in pulling this list of features into Cosmos for EF Core 6. There are things in here like collections of primitive types, which is a very common thing to have in your JSON documents, and some bigger things like defaulting to implicit ownership in Cosmos — so not necessarily having to specify that everything is OwnsOne or OwnsMany and all those kinds of things in your model when you're building it for Cosmos, to make it map to the document better. And there's a bunch more stuff in there that you can look through. We had to cut a few things because of resourcing and balancing: we're still doing migration bundles, but the other migrations improvements have been cut, and we've cut some of the improvements to raw SQL as well.

So that's basically it — otherwise the plan remains pretty much unchanged — and coming back here, there's a link to the plan in our documentation, so if you want detailed information about each of the features we're working on, it's there in the plan in our docs. OK, so that's the updated plan for EF Core 6. EF Core 6 Preview 4 is out now — it shipped for Build — and it has the TechEmpower Fortunes improvements that Shay's been working on. In case you missed it, there's a great blog post that Shay wrote on the .NET blog — search for ".NET blog" and you'll find it — which goes into detail on what we actually did inside EF Core and in the benchmarks to get the performance improvements that we showed at Build and talked about on the show a few weeks ago. So that's pretty cool; definitely check it out. Preview 5 should be out pretty soon, and Preview 5 will also have the compiled model that we're going to show today.

OK, another interesting thing: when we set up migrations and scaffolding — I say "we", I mean Brice primarily — when he set all that up, it was designed to allow the community to implement scaffolding and migrations for different languages. We've had an F# package for a while now, and we also now have this EF Core Visual Basic package from the community, which is right here — you can see a bunch of community contributors to it — and you just add it as a package and then you get scaffolding of Visual Basic code and generation of migrations in Visual Basic. This GitHub organization here, just called EFCore, is an unofficial collection of projects — it's not anything supported by Microsoft — but the people who run the repo are our own Shay and Brice. So this is a great place to look for side projects, things that people have done. EF Core naming conventions, which I think originally came from
Shay's Postgres work, right? — that's on there. Then, like I said, the F# provider has been there for a while, there's the Visual Basic one now, and there's a bunch of other stuff — for example, if you want to use SQL Server hierarchyid, there's a package there to give you support for that. So check out that EFCore organization — not to be confused with dotnet/efcore, which is the official one.

OK, I think that's everything in "state of the unicorn", so let's talk about compiled models. I'm going to go and look at some code first, because what we're going to look at here is how long it takes to get from starting your application to executing and getting results for the first query. If we look at that in code, it's basically this: create a new context instance, execute a query and get the results, then dispose that context instance. How long does it take to do this the first time? That's what we're looking at. Before we look at the code and do a demo, I want to talk a little about what happens inside EF Core when you do this — in particular when you do it for the first time, like a cold start of your application — and then we'll come back and look at where EF Core does caching and what that means.

OK, so the first thing you do is instantiate a DbContext — you use new to create it, or you might get it out of a factory, but basically you're creating an instance. The thing we do there is compile a delegate that sets the DbSet properties. Many years ago we had this idea that it would be really nice if you could just use a DbContext and query a set without having to first assign that DbSet property some value, and now everybody just assumes that's the way everything works — but it obviously has to be done somewhere, because the compiler doesn't do it, so we compile a delegate to do it. This is typically very fast, and you can avoid it entirely by not having any DbSet properties, which is actually what I have in this code; but it does happen whenever you create a new instance, if you have them. That's separate from DbContext initialization. DbContext initialization is lazy: you can create an instance and it does virtually nothing other than call this delegate to set the DbSet properties, but once you use it — meaning you execute a query, you access the model, you track an entity, anything like that — then we run through DbContext initialization. This is when OnConfiguring runs, so you get your complete options, ready to know what provider you're using and all that good stuff we need to know, and that results in EF Core building an internal service provider for it. If you look at EF Core, regardless of whether you're using DI in your application, EF Core is essentially a service-oriented architecture based on a DI container internally: it sets up services for change tracking and detection, various things in the query pipeline for creating query contexts, and so on, and those are all used as services internally by EF Core. So EF Core basically has to build this internal service provider using the DI system. Now, you can avoid some of this cost by doing context pooling: pooling basically gets all of this set up, with the context initialized and ready, so that when you put a context back in the pool and take it from the pool you don't have to do these steps again. That's the idea of context pooling, but we're not focusing on it at the moment.

OK, so once we have an internal service provider, then the first time we run we need to build a model. This is the model we're talking about today: a metadata model that tells us how each property is mapped and which columns they map to.
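The create-a-context, run-a-query, dispose pattern being measured can be sketched like this — a minimal example, where `BloggingContext` and `Blog` are hypothetical stand-ins for your own context and entity types:

```csharp
using System;
using System.Diagnostics;
using System.Linq;

var stopwatch = Stopwatch.StartNew();

// First use of the context triggers the whole cold-start path:
// DbContext initialization, internal service provider construction,
// model building, and query compilation.
using (var context = new BloggingContext())
{
    var blogs = context.Blogs.Where(b => b.Name.StartsWith("A")).ToList();
}

stopwatch.Stop();
Console.WriteLine($"Time to first query: {stopwatch.ElapsedMilliseconds} ms");
```

On subsequent runs within the same process, the cached service provider, model, and compiled query make the same code orders of magnitude faster.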
It also covers how relationships are mapped, what the foreign keys are, what the types are, whether a property is required, whether it has a maximum length — all of that kind of stuff is built into the model, and this is when OnModelCreating gets run. Finally, because we're executing a query, we have to compile the query: this is where we turn the LINQ into, basically, a kind of SQL template, which means we can then quickly reuse that generated, compiled SQL template the next time we want to run the same query. Then we execute the query and we're done. So this is what happens internally, basically, when you run this code for the first time.

Now, EF Core works like it does because it caches things. The compiled DbSet delegate is cached — we're not using it right now, and it's small, but it is cached. The internal DI container — the internal service provider — is cached; even if you're not using context pooling, you build it once per DbContext type, and then we cache the way it's built. And the model itself is cached: this is why OnModelCreating only runs once. The first time you start the application it runs, builds the model, and then we cache it and don't run OnModelCreating again — that's what normally allows model building to be relatively slow, because it only runs once (and we'll come back to that in a minute). And when we compile a query, we cache the compiled query as well, so we don't have to compile it again. So all of these things are cached, and I'm going to show what that means when we run some benchmark code.

So I'm switching over to code again. This is the code we're going to test. I'm using BenchmarkDotNet here — which is what everybody should use if you're benchmarking .NET performance — and it's got a really simple class with just this one benchmark in it, which does just what we said: create a context, run the query, dispose. So I'm just going to run this now and we'll see what happens — no compiled models, nothing. To run the benchmark we just execute the project. It starts up, and here we go. What's happening now is that BenchmarkDotNet is working out overheads and warm-ups, then it runs some warm-up steps for the actual thing we want to execute, and now it's actually running the workload. You can see these queries are returning in five or six milliseconds, which is pretty fast on my laptop. But when it says the time to first query is 5.8 milliseconds, that's really a lie, because we cached all those things the first time we ran. So if we really want to test cold startup time, we have to stop EF Core doing this — either by literally doing a cold start every time, or by doing something else. The model building is the bit we want to measure now: for big models it dominates the performance of all these steps. So what we're going to do is stop caching the internal services, which in turn means the model is no longer cached — and the queries aren't cached either, but that's not terribly relevant; the model is the thing we really want to test. By preventing all this caching we get a benchmark that basically tells us how long it takes to do all of this end-to-end on a cold start. Remember, we're not using DbSet properties at all, so it doesn't even matter whether that delegate is cached in any way — it would be negligible. OK, so I'm going to come back to my code, go to my context, and do this little trick: EnableServiceProviderCaching is an option now. It's on by default, and you'd normally never want to switch it off, but by switching it off I've stopped the service provider being cached. Let's run the benchmark again. Remember, before, it was completing that query in about five or six milliseconds.
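The "little trick" described above is roughly this — a hedged sketch, since the connection string and `BloggingContext` are placeholders for the demo's own setup. `EnableServiceProviderCaching` is a real option on `DbContextOptionsBuilder`, and turning it off is only appropriate inside a benchmark like this one:

```csharp
public class BloggingContext : DbContext
{
    public DbSet<Blog> Blogs => Set<Blog>();

    protected override void OnConfiguring(DbContextOptionsBuilder options)
        => options
            .UseSqlServer(@"Server=(localdb)\mssqllocaldb;Database=Blogging")
            // Defeats EF Core's internal service provider cache, which in
            // turn prevents the model (and compiled queries) from being
            // cached, so every run pays the full cold-start cost.
            // Never do this in production code — benchmarks only.
            .EnableServiceProviderCaching(false);
}
```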
And now it's starting up, doing all of that overhead stuff again, and it's doing... something, but we can't even tell, because it's now so much slower: BenchmarkDotNet is trying to run this thing 100 times, so it won't even finish. Now, Shay will talk a little about this at the end, because Shay is an expert on BenchmarkDotNet — really, you can let BenchmarkDotNet choose appropriate settings for this itself, and I'm kind of preventing it from doing that; we'll look at how to do it properly at the end. For now I'm going to go back to my benchmark code. This InvocationCount is saying "run this 100 times, time it, and divide by 100 to get the overall time". So I'm going to say: don't run it 100 times, just run it once. So where before we ran 100 times, now we're running it once, and this really shows the impact of that caching — and it's why we always tell people: make sure you don't defeat the caching. Even if something starts working when you disable the service provider cache, don't do that, because you're killing your perf. So now you can see we're getting warm-ups and it's saying one operation takes about 1.2 seconds — clearly way, way different from the 5 milliseconds we had with caching. So this is basically now measuring for us how long it takes to do all that initialization, and because we have a big model — which I'll go into in a minute — that's dominated by the model-building perf. This is what happens when this runs over and over again: we're now testing the startup time, and BenchmarkDotNet gave us about 1.2 seconds. Now, I'm doing a lot of stuff on my laptop at the same time, so these numbers may jump around a bit, but this is basically the range — 1 to 1.2 seconds is what I've been getting when I've run this on my laptop the last couple of days.

OK, I'm going to interject a comment or two. One thing is that BenchmarkDotNet was initially conceived to do what are called microbenchmarks — to benchmark very, very fast things. It wasn't necessarily constructed with a scenario in mind where each invocation takes over a second, although it can be used for that, as you can see. I'm just saying — you maybe saw how Arthur was configuring the number of iterations manually and all that kind of stuff; in a normal scenario, the beautiful thing about BenchmarkDotNet is that you delegate all of that to it. You, as the benchmark writer, are not supposed to concern yourself with invocation counts and all that; you throw a method at BenchmarkDotNet and it decides how many times to run it, and in which way, based on the results it sees. But again, because we're doing something a little bit weird here, we have to go into these details, and BenchmarkDotNet supports going into them as well. — That's good; let's remember to come back at the end, remove these settings, and see what it does by default, just for interest. — Exactly.

So, I have a little method here that I haven't been calling, which just prints stuff about the model. This gives us an idea of how big the model is. If I run it — let's see — I think we've got, is it 500 entity types? 300 — 300 entity types, 5,000 properties, 520 relationships. So that's the total count of properties, and the count of foreign keys, with each foreign key representing a relationship. When I've been talking about model size, I've been adding those two numbers together. It's a somewhat arbitrary idea of how big the model is — Andriy and I were talking about it yesterday — but it gives you a notion of how big your model is. There's a lot of variation in model size in terms of the complexity between the entities.
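A method like the one just mentioned — printing rough model-size numbers — might look like this, using EF Core's public metadata API (a sketch; the method name is hypothetical, and the exact counts of course depend on your model):

```csharp
static void PrintModelStats(DbContext context)
{
    var entityTypes = context.Model.GetEntityTypes().ToList();

    // "Model size" here is the informal metric from the discussion:
    // total property count plus foreign-key (relationship) count.
    var propertyCount = entityTypes.Sum(e => e.GetProperties().Count());
    var foreignKeyCount = entityTypes.Sum(e => e.GetForeignKeys().Count());

    Console.WriteLine($"Entity types:  {entityTypes.Count}");
    Console.WriteLine($"Properties:    {propertyCount}");
    Console.WriteLine($"Relationships: {foreignKeyCount}");
    Console.WriteLine($"'Model size':  {propertyCount + foreignKeyCount}");
}
```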
The number and depth of the relationships, lots of things like that, affect model-building perf, so it's kind of hard to put a stake in the ground. What I've tried to do here is create a model that's relatively similar to what I think a real model would be: basically we've got a lot of different instances of this Blog type, so there are many, many entity types, and then multiple relationships for each of them, which gives the model-building code something to do; and then each of the entities has, I think, 20 properties, which have data annotations on them as well, so there's model-building work to do there too. So we've got a big but reasonable model. OK, so that's the start: 1.2 seconds. Let's go to the terminal and do compiled models. (Yes, the terminal doesn't render properly when you resize that window in presenter mode — that's a bug in Rider, I think.) Anyway, I have dotnet-ef installed — I'm using the daily build — so let's look at my project... nope.

— I think maybe Arthur froze up for us. — Yeah, it might be an issue with his machine. That would be a shame. OK... oh, maybe he's coming back. OK, everybody, please be patient for a few minutes; we'll try to get him back. If not, I'll just improvise on how great model building and compiled models are, but it would be much better if Arthur did it. Also, just so you know, we are not using Teams for this. — Yes, that's true. — While we have a few minutes: there's the usual comment about Rider being used rather than VS, and people sometimes read all kinds of things into that. It so happens that both Arthur and I are actually big Linux users, and VS doesn't exist on Linux; in fact, the main reason we use Rider is that we're on an OS that doesn't support anything else. So that's to dispel any of that kind of thing. — Yeah. And there was a comment asking whether the compiled model uses source generators. We did investigate whether that was possible, but currently source generators have some limitations — such as dynamically loading the user's DbContext — so we're not able to do this as a source generator. — Yeah, one more thing: we're actually discussing this internally right now. It's a very interesting question, and we've been doing some experiments. One thing to realize is that if you do this as a source generator, you end up running user code from a source generator — the model-building kind of stuff that you just saw gets run inside the compiler process, inside VS — which is something that could be very tricky or problematic: if user code hangs, then your Visual Studio starts to hang, or maybe even crashes. So we're not sure we want to go that way; instead, the compiled model is a separate command. But Arthur is back now.

— Yeah, my internet went down; that was great. OK, so, where was I — we're picking up the daily builds here, which means if anybody broke the build last night then this could all fail; but Shay and I are here, and we're the ones most likely to break the builds. And then also I have the dotnet-ef tools preview version installed as a project tool, in my csproj. You don't have to install dotnet-ef as a global tool; you can install it in your project, and then in the terminal you just do dotnet tool restore and it will restore that version and be ready for you to use in the project — a little tip there. Let's go to the terminal again. OK, so we have a new command — oh, by the way, there are Package Manager Console commands for this as well, if you prefer to use those. So: dotnet ef dbcontext — and let me just see the help for that — you'll see there's this new command, optimize. That's basically what I need to run, and if I ask for help on optimize I can see its various options.
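The local-tool tip plus the new command look roughly like this in the terminal (a sketch; flags as of the .NET 5/6-era SDKs, and output omitted):

```shell
# Install dotnet-ef as a project-local tool (recorded in a tool manifest),
# then anyone cloning the repo just restores it.
dotnet new tool-manifest
dotnet tool install dotnet-ef --prerelease
dotnet tool restore

# The new compiled-model command hangs off "dbcontext":
dotnet ef dbcontext optimize --help
```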
You can do things like change the namespace that the compiled model is generated into, and you can specify the DbContext if you've got more than one — all the normal things you'd expect from these tools. Now I'm going to take a command line that I prepared earlier, which specifies a namespace and a project, and paste it: dotnet ef dbcontext optimize, with the output directory set to my CompiledModels folder, putting it in my CompiledModels namespace. If everything goes well, this will build and then generate the model... and it says it successfully generated, and it gives us a line that we can put in AddDbContext, or in OnConfiguring, to use that model — we'll come back to that in a minute. What happened when we ran this command? I think I heard you talking about source generators at some point — this isn't a source generator. It basically works the same way migrations do: when you run dotnet ef dbcontext optimize, or migrations add for example, it compiles your project and finds your configuration — in this case by running it, because I have a parameterless constructor on my DbContext, so it creates an instance of my DbContext and gets the model out of it. So it runs your code to build the model, and then uses that model to generate the compiled-model code. Generating the compiled-model code is somewhat similar to the step you get when you add a migration and it generates the migration.

So now we've created the compiled model; let's go back and look at our solution again. You can see I have a folder with a bunch of stuff in it now. We'll look at that later — what's in there isn't terribly relevant. What is relevant is that it told me I needed to add this line to my options: I'm telling it to use this model, and that's a static Instance that was created for you inside the generated code. This is basically the only place in the generated code you'd really need to look. There's also a Customize partial method on here, and it's a partial class, so if you want to do customization after the compiled model has been built, you can do it there.

OK, so now we've generated the compiled model and said "use compiled model" — that's all we need to do — so I'm going to run the benchmark again. This is exactly the same one that took 1.2 seconds on average... oh, I didn't change my code to run the benchmark; let me do that. OK: it was 1.2 seconds to get to executing the first query when we ran without the compiled model, and BenchmarkDotNet is now running with the compiled model, and you can already see things are going much, much faster — in fact, probably so fast that BenchmarkDotNet is going to tell us "you did that wrong: that was too fast, I couldn't measure it". Remember what Shay was talking about earlier — this is me, basically, because we've been messing with those iteration and invocation counts. So I'm going to go back to my program here and make this 100 again. So instead of running once and measuring — which is what we were doing without compiled models — we're now running 100 times and measuring, which is what we were doing with the caching before; but in this case it's still not using the caching, so it's still doing that whole set of steps every time. Let's run that again and see if we can get decent numbers out of BenchmarkDotNet without it complaining... and you can see it's now doing a much more reasonable thing: it's running 100 times, and on average it's getting about 9.6 milliseconds. So we've gone down from 1.2 seconds to nine-point-something milliseconds. I think that's a round of applause for Andriy right there.
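Stepping back, the generate-then-consume flow just demonstrated looks roughly like this. The namespace and generated type name below are what the tool produces by convention (`{ContextName}Model` with a static `Instance`), but treat them as placeholders — yours will match your own context and the namespace you passed:

```csharp
// Generated earlier with something like:
//   dotnet ef dbcontext optimize -o CompiledModels -n MyApp.CompiledModels
protected override void OnConfiguring(DbContextOptionsBuilder options)
    => options
        .UseSqlServer(connectionString)
        // Use the pre-built model instead of running OnModelCreating at
        // startup; the static Instance lives in the generated partial class.
        .UseModel(MyApp.CompiledModels.BloggingContextModel.Instance);
```

The same `UseModel(...)` call can equally go on the options builder inside `AddDbContext`, as the tool's output message suggests.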
— Yes, but I would like to interject: that seems too good to be true, right? In fact, if you go back to the compiled-model code, to the Instance implementation — OK, sorry, let me go back to... this one? — Yes. You can see that the Instance field is where we cache the compiled model once it's been generated. If you remove that if condition and just let it build each time, then we'll get the real time to build the compiled model. — So we're not caching the compiled model, we're just building it, and this would be a better benchmark? You're telling me I've been giving people the wrong numbers? — You might have, yes. — OK, OK, so let's run this then. It's getting ready, let's see what it says... oh, it's much slower. So what have I been showing people, then? — Caching of the compiled model; basically the cache was being used. — Well, OK, this is what happens when you do things live. I did show this to Andriy yesterday, and he didn't tell me then — so, transparency mode. Yeah. So it looks like we've still got an improvement, but it's now down to 170 milliseconds, not the massively improved nine milliseconds we had before. OK, well — now that we've destroyed the whole thing — how would I have detected that, Andriy? How would I have known I was doing the wrong thing? — You probably could have profiled the compiled-model test. — I see. So you're saying there was no way to tell just from looking at the benchmark and what it was running? — Well, you could also run it with different model sizes and see whether the time changes depending on the model size; if it didn't, that would be an indication that something's being cached. — Right. This is the graph that I plotted yesterday, and if we actually look at the ratio of these, it looks like this — and when we were looking at the raw numbers this morning, they weren't changing very much, so that should have been an indication that this wasn't correct. OK, well, what we learned here is that when you tweet about perf numbers you always get them wrong: first you say it's 160 when it's actually 160 times, but then it turns out you did it wrong in the first place, and even though you talked to the engineer who was doing it, you still got it wrong. Anyway, that's the way perf works. So we've got you all here under false pretenses, in some respects. However — as was pointed out, let me just look at the chat window — yeah, so as somebody said here, we've got about a 6x... is that right? Yeah. As someone said, six times is nothing to sniff at, which is true — although when it looked too good to be true, it was too good to be true. This is why the people who write the code should do all of the talking about it, and not have the monkey do it.

OK, let me cancel this, remove the settings we have on here, and go back to Shay's point. — So you're saying I should just let BenchmarkDotNet figure out the right thing to do? — Yes. — OK, so let's see what it does when we just let BenchmarkDotNet decide. Right now it's basically running some test operations to see how fast they really go. BenchmarkDotNet detected this as a relatively slow operation — 100-and-something milliseconds — so it's running it once, one operation per iteration, as you can see, and then from these results it calculates how many iterations it needs to get statistical significance out of it, or whatever the technical term would be. So: we started off at 1.2 seconds, and we got to 120 milliseconds. That's still a 10x improvement, so that's still pretty good.
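Shay's point, expressed in code: with no manual knobs at all, the benchmark class reduces to something like this, and BenchmarkDotNet pilots the workload and picks warm-up and iteration counts itself (a minimal sketch — the class, method, and `BloggingContext` names are stand-ins for the demo's actual code):

```csharp
using System.Collections.Generic;
using System.Linq;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

public class ColdStartBenchmarks
{
    // No InvocationCount or iteration overrides: BenchmarkDotNet runs a
    // pilot stage first and chooses sensible settings from what it measures.
    [Benchmark]
    public List<Blog> TimeToFirstQuery()
    {
        using var context = new BloggingContext();
        return context.Blogs.ToList();
    }
}

public class Program
{
    public static void Main(string[] args)
        => BenchmarkRunner.Run<ColdStartBenchmarks>();
}
```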
So, compiled models: maybe it's not a 160-times improvement, but it's at least a 10-times improvement, which is definitely worthwhile if you're using big models. Remember, though, that when we ran the original benchmark — without the compiled model at all, but before we disabled the service provider caching — we also got very fast numbers, even with no changes to the defaults. The reason that's important is that what we're talking about here is really startup time: it's when you're going from a cold start to building the model and getting your first query executed that the compiled model is really going to help you. Once you have an up-and-running application that's serving requests, or doing whatever it's doing, that cold start no longer plays a part. So for many people, even with a reasonably sized model, unless you're really concerned about that cold start being significant, you should be able to just continue with what you're doing. Also, if you don't have a big model, it won't make that much difference. Now, what is a big model? That depends on who you are. Most people who have a model bigger than, say, 100 entity types think they have a big model — and you do, in some respects — but we're really talking about models with lots of properties, lots of entity types, lots of relationships, where model building is going to be really slow. So compiled models are not necessarily the right thing for everybody. One place where I haven't tested this, but where it should be beneficial, is when you're running on a lower-powered device — such as a Xamarin app running on a phone or something like that. In that case, building the model can be quite slow, so you'll get an advantage in those cases too.

OK, let's see what happens if we run it again without the service provider cache but with the compiled model. This should basically be the same as what we did before, because everything is now cached, just in a different way. I think we'll see... maybe not. — No, you see, it's still building the compiled model every time, because that cache was removed. — Right — so don't go and edit yours to remove that line; don't do that. OK, do we have any questions or anything? — I've been taking care of most of them; we're OK for now. — OK. So another advantage of using the compiled model is when developing the application: you're not always changing the model itself, so as long as the model stays the same, you can avoid recompiling it, which means you can run the application faster and reduce the inner-loop time. — Yes; and we're going to look at what the compiled model actually is. — It's worth stressing that there are really two things here, exactly like Andriy said. There's reducing the startup time of your application in production — and somebody here mentioned serverless, which is exactly the kind of scenario we have in mind. In certain serverless environments your application gets spun up and torn down automatically based on demand; now, if your application takes, say, five seconds to start up, that's going to be a problem, because that's five seconds of latency on the first request to the serverless application. That's exactly the kind of thing this feature improves on. But there's also what we sometimes call at Microsoft the developer inner-loop experience, which is basically when you're working as a developer, with Visual Studio or whatever: if every time you debug the application, or run it for testing purposes, or run a test for that matter, you now have to wait five seconds, that's awful for the dev experience, right? So what you'd do in that case, assuming you're not touching your model, is generate the compiled model once and then go and iterate on whatever part of
And as long as you don't need to regenerate your model, that startup time once again gets reduced in a very significant way. So these two uses of the feature are important to understand.

Absolutely. So let's look at what's actually in the compiled model. We have this initializer, and it calls Create on each of the entity types that were generated. This is all auto-generated code, and what it's doing is taking all of the configuration and baking it into a consistent, single model that can be built very quickly, because everything was computed in advance. That is essentially what the compiled model is. It's not particularly interesting to go look at any of this stuff, and it's not something that you should edit — it's all auto-generated code, and when you change the model and regenerate, you will overwrite it with new types.

Let's also go look in the explorer here. Because I put all of this into its own directory, I can easily clean up by deleting the whole thing — and then, obviously, I have to stop calling it in my context — but it's easy to get back to a clean, non-compiled-model starting point by doing that. You don't necessarily need to do that; you can just rerun the command. I think it said in the terminal window there: run this command again when the model is modified. But if you make significant changes to the structure — in particular, removing an entity type, for example — you'll probably want to delete the generated model, because we're not smart enough yet to clean up stuff that's no longer used, which is why putting it in its own directory and then deleting it is a good way to go.

You're muted. Thanks, Andriy. Maybe I can interject with a recurring question from the feed. There's a question here of whether the compiled model will automatically be synced once I make a change in the model.
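For reference, the command being rerun here is the EF Core 6 `dotnet ef dbcontext optimize` tool; the output directory and namespace values below are just examples:

```shell
# Generate (or regenerate) the compiled model. Putting it in its own
# directory makes it easy to delete wholesale when resetting.
dotnet ef dbcontext optimize \
    --output-dir CompiledModels \
    --namespace MyApp.CompiledModels

# To get back to a clean, non-compiled-model state: delete the directory
# and remove the UseModel(...) call from the DbContext.
rm -rf CompiledModels
```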
Another way this was asked: when I generate a migration — which means I've changed my model — will the compiled model be regenerated? Do you want to mention that, Andriy, or should I?

Yes — there's nothing like automatic detection of model modifications. You basically need to know that whenever you make a change that might affect the model, you have to run this command again. We have an issue in the backlog to improve this for the release scenario, so that you don't forget to regenerate the model, but in the current version, Preview 4, you need to do this manually each time you make a change.

Right. It's worth mentioning that one way to look at this is that it would possibly be part of your CI/CD pipeline, depending on how exactly you're doing things. Basically, when you're working on your machine, maybe you don't need this feature — it depends; maybe you do, as we said before — but when you send it off for deployment, you could have a build step in there that runs the command you just saw, generates that code, and then compiles, so that the application you actually deploy to production already has the compiled model built into it and its startup is fast. This is kind of how we thought of this being used: as something that always happens, but in your CI/CD pipeline, for now. As Andriy says, we have some ideas — it's certainly technically possible to detect when your model source code actually changed and then trigger a rebuild or an invalidation of the compiled model — but there are no plans to do this for 6.0.

I also want to point out that we can't automatically compile your model at startup and tell you whether the compiled model has changed, because that would defeat the purpose of not compiling the model at startup. So even though it seems like, "oh, wouldn't it be good to just automatically detect it?" —
It's actually relatively hard to do unless you rebuild the model again. Even if you want a hash to compare, you have to rebuild the model to get that hash. So, yeah, we have issues on the backlog to make this easier, but it's not going to be as simple as "we just automatically detect it and do it for you."

There have also been several questions around, for example, using MSBuild to make this automated. The challenge with some of those approaches is that it runs the code, right? And again, this is the primary reason why source generators don't work for this: we need to build your existing code, we need to then execute it in order to get the compiled model, and then we need to generate code — and then all of that needs to get built again to result in your finished application. So while you can certainly imagine MSBuild pipelines that would do that, it's certainly not trivial at the moment. And as for source generators — in fact, we just had a conversation with the Roslyn folks about source generators and this, and they again came back and said that's not really a safe thing to try to do. So we're not going to try to use source generators for this while the team that creates them says "don't do that." For now, we feel like this is a usable and reasonable way of doing things, and like I said before, it's kind of like migrations has always been: we've got the dotnet ef command that runs everything, or you do it in the Package Manager Console, and there aren't many additional changes that you need.

Anything else? Okay, so unless there are more questions, I think we can call this one. Sure. Okay — well, thanks everyone for joining us. Sorry to over-promise and then have the engineering team tell me that was wrong, but at least we got there, and all of the things you learned are still true, even if the answer is different.
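The build-then-execute-then-build-again sequence described above is why a CI/CD step is the suggested home for this today. A rough sketch of such a step — project names and paths here are hypothetical:

```shell
# 1. Build the application so the model-defining code exists as a
#    runnable assembly.
dotnet build ./src/MyApp -c Release

# 2. Execute that code via the EF tool to generate the compiled-model
#    source files.
dotnet ef dbcontext optimize --project ./src/MyApp

# 3. Build/publish again so the generated sources ship with the app,
#    giving the deployed application its fast startup.
dotnet publish ./src/MyApp -c Release -o ./publish
```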
Okay — thanks, Andriy; thanks, Shay; thanks, everybody. We'll see you next time. Bye! [Music]
Info
Channel: dotNET
Views: 9,261
Rating: 4.9411764 out of 5
Keywords:
Id: XdhX3iLXAPk
Length: 49min 37sec (2977 seconds)
Published: Wed Jun 02 2021