Are React Server Components Really Slower?

Reddit Comments

this could happen

— u/ThePrimeagen, 1 point, May 31 2023
Captions
Maybe it's just me, but when I heard about Next.js 13 and React Server Components, I thought they were going to be faster. If you look at the implementation of how we have to do getServerSideProps and then render the result, versus a really nice async component where you just fetch something, I thought: hey, simpler implementation equals faster, or at least maybe as fast, but not slower. It turns out it actually is slower to do React Server Components, and not by a little but by a fair amount. Check out this graph of requests per second. It shows that the Pages architecture from before actually serves pages faster than the App Router, which means it can handle more requests per second, which means you need fewer servers or lambdas to service the same number of customers, which means lower bills for you on the old Pages model than on the new App Router model. Crazy, right? But hey, maybe I'm wrong. Let's go through my methodology, see if you can poke any holes in it, and we'll look at the results for ourselves.

Let's jump right into it. Our starting point couldn't be simpler. I created two applications, both on Next.js 13.4: one called app-router-test, which uses the App Router, and the other called pages-test, which uses the Pages Router. Then I took both down to basically a single tag. In the Pages version we have the index, which will be the home page, and it's got one tag that says "hello". Over in the App Router version we have one tag that says "hello" as well, because I want to see, in the simplest possible case, whether I can run a performance test against these two and measure a difference in requests per second or in the average time to return a page. I built both of these with pnpm build and started them with pnpm start, so we're looking at the release build, not the development version, throughout. Now both are running: the old Pages version on port 3000 and the new App Router version on port 3001.
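Here's a minimal sketch of what those two single-tag pages might look like; the exact file contents aren't shown in the video, and the file paths simply follow Next.js conventions:

```tsx
// pages/index.tsx — Pages Router version (path assumed from Next.js conventions)
export default function Home() {
  return <main>hello</main>;
}
```

```tsx
// app/page.tsx — App Router version (path assumed); this is a React Server Component by default
export default function Home() {
  return <main>hello</main>;
}
```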
Let's go over to the terminal and try out oha. oha is a command-line utility you can run to load-test a given URL. We're going to run it against port 3000, the Pages version of that home page, and one of the parameters we'll give it says we want the test to run for two seconds. Let's try it: 100% success rate, awesome. The average request took about 15 milliseconds, or 0.015 seconds, at roughly 3,170 requests per second. Not bad. Now let's try the same thing on port 3001, the App Router version. Again a 100% success rate, but this time the response time is 17 milliseconds, so 2 milliseconds slower, and roughly 2,820 requests per second versus 3,170, so about 300 requests per second slower than the Pages version.

Of course, this is just a single tag on a page, which is a really small sample, so it's basically in the noise: 15 milliseconds versus 17 milliseconds isn't much of a differential. What I want to do is create larger and larger pages, and I want to parameterize that so I can try 10 tags, 20 tags, 50 tags, 100 tags, 200 tags, and so on. So I created a new route called no-fetch: you give it, say, no-fetch/100 and it returns 100 tags. Let's look at the implementation. Over in app-router-test we have the no-fetch directory, and within that a [count] directory, so the route is parameterized and the value comes in as count. In the page we read params.count, which arrives as a string, coerce it to a number, create an array of that size, fill it with zeros so it can be mapped over, and map it, using the index to return a div with a value in it. So if you give it a count of 100, you get 101 tags: 100 divs plus one for the main. In the Pages version, count arrives as a param in getServerSideProps and we just pass it on to the page as a prop; we don't build the array in getServerSideProps, we only pass the count, and from that point down the code is exactly the same between the two (both versions are sketched below). If you want to see one of these awesome pages, open /no-fetch/30 in Arc and you'll get 30 tags plus the main.

Let's try something bigger, say 500 tags. First the Pages version: we hit the no-fetch route with 500 and get about 83 milliseconds per response and about 587 requests per second. Now the same thing on port 3001 for the App Router: where the Pages version was around 80 milliseconds, this is about 163 milliseconds, and where Pages was in the 500s of requests per second, this is down in the 300s. When I did the real testing I didn't use two seconds (that's just for the video); I ran five seconds' worth of testing and put all of the results into a Google Sheet, so let's go have a look at that.

Here's the no-fetch tab of the Google Sheet. The number of tags runs across the top, and below that are the Pages response time and the App Router response time. I literally just ran the test and copied and pasted the results, the dumbest, most boring thing you can imagine, over and over again, and here is the result. Comparing response times, the App Router is the red line and Pages sits below it in blue. In this chart more is bad: more means it takes longer for the customer to get the result.
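Here's a rough sketch of those two no-fetch implementations as described in the video; the file paths, route spelling, and component names are my assumptions:

```tsx
// app/no-fetch/[count]/page.tsx — App Router version (paths assumed)
export default function NoFetchPage({
  params,
}: {
  params: { count: string };
}) {
  // The route param arrives as a string, so coerce it to a number
  const count = Number(params.count);
  return (
    <main>
      {Array(count)
        .fill(0)
        .map((_, index) => (
          <div key={index}>{index}</div>
        ))}
    </main>
  );
}
```

```tsx
// pages/no-fetch/[count].tsx — Pages Router version (paths assumed)
import type { GetServerSideProps } from "next";

export const getServerSideProps: GetServerSideProps = async ({ params }) => {
  // Only the count is passed down; the array is built during render, same as the App Router version
  return { props: { count: Number(params?.count) } };
};

export default function NoFetchPage({ count }: { count: number }) {
  return (
    <main>
      {Array(count)
        .fill(0)
        .map((_, index) => (
          <div key={index}>{index}</div>
        ))}
    </main>
  );
}
```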
If you're looking for what a decent, honest page is, I'd say the 1,000 to 2,000 tag range is pretty standard for a decent-sized web page. Over in the requests-per-second chart, we can see that as the page grows, the endpoint serves fewer and fewer requests per second. That's fine, we expect that, but Pages starts higher and ends higher, uniformly across the board, compared with the App Router.

But this isn't really realistic, right? What we generally do with Pages and the App Router is make some requests to the back end and then display the results: we use an async function in the App Router to go get the data, or we use getServerSideProps in Pages to go get the data, and then we render it. So the real test is to make a fetch and see how the two compare. I needed some data, and of course in the data directory there's, guess what, Pokemon. There's a bunch of Pokemon files: pokemon-100 has 100 Pokemon in it, pokemon-1500 has 1,500. Over in the Pages version we have getServerSideProps at the top; we call port 8080, where the data is hosted, ask for Pokemon, and give it a count (you can only use the specific counts that are available). We don't cache that result, so a request is made every single time, and then we send the output on to the page, where we do basically the same thing as before, except we're outputting Pokemon. Every Pokemon renders nine items, so 100 Pokemon is 900 tags, 200 Pokemon is 1,800 tags, and so on. Now let's look at the much cleaner App Router implementation. It has the same Pokemon component, but the home page is async and the fetch happens right inline. That's really the only difference between the two implementations: in the RSC version the request happens inside the component, and in the Pages version it happens in getServerSideProps (both are sketched below).

Let's look at the result. If I go to the URL, change the route to fetch, and ask for 100 Pokemon, I get an internal server error, but that's just because the data server isn't running. To start it, I go into the data directory, where you'll notice a file called binserve.json. We use a Rust-based server called binserve to serve the data really quickly, and that file is the definition binserve needs: where to listen and what to serve; it does static serving of those files. So I run binserve, and now if I refresh Arc we get our Pokemon.

But hey, how fast is that binserve thing? Maybe it's going to slow us down and skew the results. Let's run oha against the data server to see how quickly it returns. In the terminal, hit port 8080 and the Pokemon route, and stress-test it a little by asking for 1,500 Pokemon: it comes back in eight milliseconds at 5,800 requests per second. Yes, Rust is very, very fast and very consistent.

OK, now that the fetch version is running, let's try 500 Pokemon fetched on the App Router versus on Pages. Pages fetching 500 Pokemon: 552 milliseconds, 76 requests per second. Not great, but OK. Now the App Router version: 835 milliseconds, 43 requests per second. So again, a big differential between the two, and the Pages version is still faster.
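Here's a rough sketch of the two fetch-based implementations as described; the file paths, the exact data URL on port 8080, the Pokemon type, and the simplified per-Pokemon markup (one div per Pokemon rather than the nine items mentioned in the video) are all my assumptions:

```tsx
// app/fetch/[count]/page.tsx — App Router version: the fetch happens inside the server component
type Pokemon = { name: string };

export default async function PokemonPage({
  params,
}: {
  params: { count: string };
}) {
  // cache: "no-store" tells Next.js not to cache this fetch, so every request hits the data server
  const res = await fetch(
    `http://localhost:8080/pokemon-${params.count}.json`,
    { cache: "no-store" }
  );
  const pokemon: Pokemon[] = await res.json();
  return (
    <main>
      {pokemon.map((p) => (
        <div key={p.name}>{p.name}</div>
      ))}
    </main>
  );
}
```

```tsx
// pages/fetch/[count].tsx — Pages Router version: the fetch happens in getServerSideProps
import type { GetServerSideProps } from "next";

type Pokemon = { name: string };

export const getServerSideProps: GetServerSideProps = async ({ params }) => {
  // getServerSideProps runs on every request, so nothing is cached here either
  const res = await fetch(`http://localhost:8080/pokemon-${params?.count}.json`);
  const pokemon: Pokemon[] = await res.json();
  return { props: { pokemon } };
};

export default function PokemonPage({ pokemon }: { pokemon: Pokemon[] }) {
  return (
    <main>
      {pokemon.map((p) => (
        <div key={p.name}>{p.name}</div>
      ))}
    </main>
  );
}
```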
Let's look at that over in Google Sheets. Look familiar? It is: in the response-time chart the App Router is again always slower, and in the requests-per-second chart the Pages version always serves more requests per second, so it's faster.

Then I thought: yeah, but Vercel uses serverless functions, so maybe there's some magic and if I deploy this to Vercel it will be better. So I deployed both of these to Vercel, and the results were basically the same, with Pages again outperforming the App Router. There were some inconsistencies in there, which I think come down to my internet connection and my Wi-Fi router, but the net result is again that Pages beats the App Router, and not by a little but by a fair amount. I did have one question, though, and maybe you can answer it for me. When it comes to requests per second on Vercel, there seems to be a cap: at 100 tags and at 1,000 tags the requests per second are effectively the same across the board, which doesn't really make a whole lot of sense to me. The same goes for the App Router: lower, but consistently lower, so you get this flat line. If you look at the local results, there's a massive drop and a real curve, whereas the deployed versions are pretty much a flat line. I'm not sure what's going on there, so if you have some insight, please let me know.

I think there's a fair point to be made that we're not getting the best out of the App Router and that this isn't an entirely fair comparison, because the App Router allows something Pages doesn't: streaming. We can send back an initial page result and then, as slow microservices return their data, stream more and more of it out to the client, so the perceived performance for the customer is a lot better. They get an initial page really quickly with loading skeletons, and then those skeletons fill in with the data (a sketch of this pattern follows below). That's super cool, and we can't compare the App Router to Pages on it because Pages simply doesn't support it. But the question is: now that the App Router has become the new default, are we paying for streaming even when we're not actually using it? That's the interesting part, because if Pages works better for my particular application, because I don't need streaming, can I stick with Pages, and if so, for how long?
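As a hedged illustration of the streaming pattern described above (this is not code from the video's test apps; names and the data URL are invented for the example), an App Router page can wrap a slow server component in Suspense so the page shell and a skeleton are sent immediately and the data streams in once the fetch resolves:

```tsx
// app/streaming/page.tsx — illustrative sketch of App Router streaming
import { Suspense } from "react";

type Pokemon = { name: string };

// A slow server component: the surrounding shell streams first, this part fills in later
async function PokemonList() {
  const res = await fetch("http://localhost:8080/pokemon-100.json", {
    cache: "no-store",
  });
  const pokemon: Pokemon[] = await res.json();
  return (
    <ul>
      {pokemon.map((p) => (
        <li key={p.name}>{p.name}</li>
      ))}
    </ul>
  );
}

export default function Page() {
  return (
    <main>
      <h1>Pokemon</h1>
      {/* The fallback skeleton is sent immediately; the list streams in when the fetch resolves */}
      <Suspense fallback={<p>Loading…</p>}>
        <PokemonList />
      </Suspense>
    </main>
  );
}
```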
But the larger question is: does this really matter to you? To answer that, let's bring up the blackboard. Here's the simplest architectural diagram ever, showing two different ways of deploying a Next.js application. The one on the right-hand side is the more common one: users connect to our servers directly, and whenever they make a page request we go to the server, get the response, and return it. That's what we've been showing in all of these demonstrations, and yes, in this case it matters: the user gets their pages more slowly unless you use something like streaming. It's probably not something you'll notice on a low-volume site, but on a high-volume site you'll need more servers or more lambdas to satisfy the same number of requests. The other model puts your servers behind a content delivery network, or CDN, like Amazon's AWS CloudFront or Akamai. In that model, when a user requests a given URL, the CDN checks whether it has that URL in its cache. If it does, it returns the page straight out of the cache without hitting the server at all; if it doesn't, it goes back to the server and asks for the content for that route. So if you're the unfortunate user who gets the cache miss, yes, the App Router version is going to be slightly slower to get you your data, but everyone else gets the CDN copy, and that's just as fast. With the CDN model I don't think this makes any difference at all in terms of performance: Pages, App Router, whichever, it's fine.

But I want to hear from you: is this important in your scenario? Do you care about Next.js 13 performance? Let me know in the comments down below, and in the meantime, if you liked the video, hit that like button, and if you really liked it, hit the subscribe button and click the bell to be notified the next time a new Blue Collar Coder comes out.
Info
Channel: Jack Herrington
Views: 19,695
Keywords: next 13, nextjs 13, next.js 13, next js 13, nextjs react 18, vite nextjs, nextjs, next js, next.js, nextjs react, nextjs typescript, next js typescript, nextjs npm, react 18, vite, react, reactjs, server components, react server components, server components react, nextjs server components, server components patterns, server components next, server components nextjs, next 13 server components, server side rendering, server side components, react 18 ssr, react ssr, next ssr
Id: 3Q2q2gs0nAI
Length: 13min 47sec (827 seconds)
Published: Mon May 29 2023