Hi, my name is Dan and I work on the React team.
In December our team published a talk about our research on the new experimental React feature
called React Server Components. If you haven't watched the talk, you can find it by searching
"Data Fetching with React Server Components" on YouTube. That talk was an introduction and
like we said in the talk, we'd like to follow up with more in-depth content, which is why today
we're recording a Q&A where a couple of people from the React community will ask questions and
a few people from the React Core and React Data teams will try to answer them. Our initial talk
about Server Components was a high-level overview aimed at everyone using React. However,
quite a few people working on front-end infrastructure at different companies expressed
interest in a deeper more detailed conversation about how Server Components fit
into their app's architecture. This Q&A is an architectural deep dive: you don't
have to watch it if you're only using React as a developer but you might find it interesting if
you work specifically on front-end infrastructure. This Q&A assumes that you've already watched
our talk called "Data Fetching with React Server Components". It also assumes that you
have already read the Server Components RFC and played with the Server Components demo, so we're
not going to explain what Server Components are. Without further ado, I would like to let our two guests introduce themselves. Our first guest is Jae, who works at FindMyPast. Jae, can
you tell us a little bit about the kind of work you're doing and what got you interested
in Server Components? Hello, yes, so I'm a principal engineer at FindMyPast on the e-commerce team, so we do all of our payment and account management flows. The thing that's interesting for us is that all of our app is in React, and so is any new development we do, so anything that can help development, and especially data fetching and secure data fetching, is of great interest to me. Thank you, and our second guest is
Yen-Wei, he works at Pinterest. Yen-Wei, can you tell us a little bit about what you do
and what got you interested in Server Components? Sure, so my name is Yen-Wei, I work at Pinterest
on the web platform team and we're basically a vertical team that supports web developers across
multiple product teams at Pinterest and we're interested in Server Components primarily because
we do a lot of server rendering at Pinterest and also data fetching and sort of how you fetch data
across client and server is something that we've been thinking about a lot lately, and so Server
Components just kind of really sounds like it would solve a lot of our issues. Thank you,
finally I'd like to introduce our guests from the React Core and React Data teams. We have quite
a few people today here in the hope that for every question we'll find somebody who can answer it
so let's go in alphabetical order. Andrew? Hi, I'm Andrew, I'm on the React Core team. In particular I work on a variety of things, like Concurrent Mode and Suspense, so the stuff that happens once it hits the browser,
basically. Thank you. Joe Savona? Hi, yeah, so I work on the React Data team
primarily working on Relay but also collaborating with the React Core team on things like data
fetching with Suspense and Server Components. Lauren? Hey, I'm Lauren Tan, I am an engineer
on the React Data team along with Joe and my work has been primarily focused on
augmenting Relay with support for Server Components and also prototyping its
use within Facebook. And Sebastian? I'm Sebastian Markbage, working on the React Core team, most recently mostly on the core implementation of Server
Components and streaming server rendering. Thank you everybody for introducing yourselves and
before we start I want to emphasize that Server Components are still in research and development
so we ask that you don't create courses around them and don't build your apps with them yet.
The purpose of this Q&A is to be more transparent about the future of React and its development
but we don't have all the answers yet and so the answers that we'll give here are not final and
they just represent our best idea at the moment. So I think now we're ready to jump into
questions. Yen-Wei, do you want to go first? Sure, so we played around with Server Components
recently and actually were able to integrate it into our app just to play around with, and I think a lot of people on the team were really excited. But the first question that popped into everyone's mind was: when is this actually going to be available for us to use, and what are some of the big missing pieces left before we can actually use it? Yeah, I think there's a couple of big pieces. Everything about Server Components connects both to server rendering and to the client, and specifically to how Suspense works.
So there's a few missing pieces in our server rendering story. We're currently working on a new
implementation of the streaming server renderer which has some new interesting features but
particularly that's how we plan to integrate the data stream from Server Components so that you
will be able to server render into HTML together with Server Components. The other part that's kind of missing is bundling: Server Components implicitly give you fine-grained bundle splitting built-in, but we want to make sure that there's a genuinely useful bundling strategy that doesn't just regress performance, because if you're splitting your bundles into too many small pieces then that can be worse than not splitting them at all. So we're working on at least providing a prototype or ideally a full
implementation of a webpack plugin that will let you have a pretty decent out-of-the-box experience
and there are other ways you can do that, and I'm interested to see what the community comes up with in terms of bundling strategies. But we want to at least be able to ship
our best thinking so far in this space. And then there's another piece, which is how this connects to the API you use for actually kicking off the fetch, like routing or pagination or other things like that. We don't necessarily see
ourselves being particularly opinionated about that but there are certain patterns that
work well and certain patterns that don't work well so we want to at least provide a prototype
and a demo showcasing how you can think about solving those problems. I don't think you answered
the "when" part, do you want to comment on that? Yeah, so we're working towards having a Release Candidate of the client aspects, and we're hoping to introduce a new React 18 release together with a Server Components MVP, and possibly some streaming rendering as a kind of preview package, hopefully this year. Thank you. Jae? Following on from what you said
about how it ties into server and client rendering and data fetching: our app is built
around GraphQL and specifically Apollo GraphQL, which means it's built around this GraphQL
cache that is warmed during server rendering, transferred to the client, and then
on a user's journey through a session that cache gets modified by new queries
and mutations. How are you thinking about things like this, things like a GraphQL cache that
right now is shared between server and client, are you planning on making something that works
with Server Components, or is it something where the ecosystem, and us as developers, will have to rethink how we interact with data? Yeah, I'll try to answer that. So, we kind of see
it as a sort of progression of how you might evolve your app that uses GraphQL or even other
data fetching approaches. The first step is to go from non-Suspense based data fetching to
using Suspense. The idea being that instead of fetching in a useEffect or something, you switch to Suspense-based data fetching. To make that work with server
rendering, that would require that some of the pieces that Sebastian talked about
in terms of the new Suspense streaming-aware server rendering work. That's kind of the first
piece, and what that gets you is the ability to mostly keep the same patterns
that you use in your app today, and to continue doing server-side rendering.
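The shift described above, from effect-driven fetching to Suspense-based fetching, can be sketched with the "resource" pattern used in the experimental Suspense data-fetching demos. `wrapPromise` here is a hypothetical userland helper, not a React API: while the promise is pending, `read()` throws the promise itself, which is the signal Suspense uses to show a fallback.

```javascript
// Minimal sketch of a Suspense-compatible "resource" (hypothetical helper,
// not a React API). While the promise is pending, read() throws the promise;
// Suspense catches it and shows a fallback until it settles.
function wrapPromise(promise) {
  let status = "pending";
  let result;
  const suspender = promise.then(
    (value) => { status = "success"; result = value; },
    (error) => { status = "error"; result = error; }
  );
  return {
    read() {
      if (status === "pending") throw suspender; // caught by <Suspense>
      if (status === "error") throw result;      // caught by an error boundary
      return result;
    },
  };
}

// A component would then read synchronously instead of fetching in useEffect:
//   const user = userResource.read();
//   return <h1>{user.name}</h1>;
```

The point of the pattern is that the component no longer orchestrates loading states itself; the nearest Suspense boundary does.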
But the thing is that that sort of gets you the ability to fetch at multiple points in your
tree, and to have all those queries sort of happen on the server while avoiding round trips for
your initial page load. But what that doesn't really solve is now you're on the client and now
you want to do a page transition or something, now you're kind of going to be back in the
world of: you start rendering, maybe you hit multiple queries as you're rendering your app, and
those could cause waterfalls, and so that's where we think Server Components can help. But that's
kind of like a second stage after you've moved to Suspense data fetching. And in terms
of the broader question of how does GraphQL or other kind of normalized data stores fit into
the Server Components world, we don't foresee that really going away. There are going to be parts
of your app that are interactive that require data consistency on the client and those will
continue, I think, to make sense to build with the existing approaches that we're all using today.
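The waterfall problem mentioned a moment ago can be made concrete with a small sketch. This is illustrative plain JavaScript, not React code: each fetcher stands in for a query some component issues when it renders, and the only difference between the two helpers is whether those requests queue behind one another (a client-side waterfall) or overlap, which is roughly what hoisting the queries, or running them on the server close to the data, buys you.

```javascript
// A waterfall: each fetch only starts after the previous one finishes,
// which is what happens when nested components each fetch on render.
async function fetchSequentially(fetchers) {
  const results = [];
  for (const fetcher of fetchers) {
    results.push(await fetcher()); // each await is another round trip
  }
  return results;
}

// The alternative: kick off every request up front so they overlap.
async function fetchTogether(fetchers) {
  return Promise.all(fetchers.map((fetcher) => fetcher()));
}
```

With one tick of simulated latency per request, the sequential version never has more than one request in flight, while the overlapped version runs them concurrently.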
And there's lots of good approaches: GraphQL, REST, and various different data libraries. I
think what's really going to change is for the parts of your app that make sense to convert to
Server Components you start to want to think about splitting up what's sort of state and what's
sort of canonical server data a bit more. Others may have more that they want to add
there. I can add that I think the strategy in general is that all of this builds on the Suspense API, so you have to build that regardless. And we expect a lot
of these libraries that exist today to build some kind of support for that out of the box, and
that lets you do the Server Components approach including colocating your data fetching in your
component without the waterfalls for initial load. But that colocation or that transformation could cause a regression compared to what you're doing today if you have a very optimized solution, and then Server Components could be the solution to that. So sometimes, even though it's a two-step adoption process, I suspect you'll actually want to wait to roll it out until you have both pieces, just so that you don't regress performance overall in the meantime. Jae, do you feel like this answered your question?
Yeah, I think that's quite a good answer, with a lot of detail. I guess just to make sure that I'm understanding correctly: the components that are Server Components will not be updating in response to updates in the cache, so we're looking at Server Components being things that are rendered using kind of canonical data, like data from a CMS or something, but not things that are part of interactivity, and those things would be Client Components. Yeah, exactly, and I think what we've found is I
don't think it's necessarily that the data that you fetch in Server Components, that might very
well come from GraphQL, it might come from the same data source that your Client Components are
fetching data from, but often, within the data that you fetch, some of it changes with different regularity. Just to take the Facebook example, perhaps the text of a story might not change
very often especially if it's a post that you haven't written, the only time that data is going
to change is if you actually just refetch the whole story to begin with, at which point you're
going to the server, so you could just refetch the actual Server Component output. And so there's
just different types of data and so some of it changes less frequently and therefore you could
just re-fetch it in full, and when you do you just refetch the Server Component instead of fetching
the data and then re-rendering on the client side. So it's kind of about the rate of change of the data and how consistent it has to be. I mean, the adoption approach is... the way
to think about it is that you're writing a Client Component first and then if you see that
that component doesn't have any state or effects you can convert that to a Server Component. But
it doesn't have to be that you go all-in on converting a whole tree or a whole subtree of
components. It can be that you're just converting individual components as you go. So some of them
in the tree might be fetching client-side or as part of initial server rendering and some of
them might be Server Components embedded in one tree. I was gonna add to Jae's point about certain
highly interactive... like a theme here is that there are some components that don't update very
frequently, and there are other components that are highly interactive and have more local state, like UI state... and maybe such a component receives data from the server, but you can pass it down from a parent component. So a lot of folks today who are already using a
data framework like Apollo or Relay are probably already writing code in roughly that pattern
where there's some sort of separation between super highly interactive Client Components
versus things that are really about managing data and passing it down. That pattern
works really well with Server Components. But there might be some folks who are just
kind of throwing everything into the same kind of source of state, maybe like
a store or something like that, and those patterns might take a little bit
more work to migrate to this world where you're kind of thinking a little bit more carefully
about what types of data you have. So going back, Sebastian, to what you said earlier about the streaming server rendering, I was wondering if you could talk a little bit more about that. As I mentioned before, we care a lot about server rendering, and I
think I was specifically curious to understand how you're thinking about the interop between
Client Components and Server Components, Client Components and server rendering,
all together. Yeah, so for server rendering there's a couple of pieces: we're building server rendering with the Suspensey approach in mind. That is decoupled from Server Components:
if Server Components didn't exist that would still be a thing. That approach allows you to stream
chunks of HTML in if you have, for example, one slower data source than another, so you can kind
of see the UI progressively streaming as you go. And it kind of ties into the whole Suspense
approach in general. But then you can see that each of those Client Components could be converted
to a Server Component and what happens then is it's similar to what happens on the client.
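The handoff between these layers can be pictured with a toy model. This is purely illustrative (the real wire format is different and not a public API), but it captures the key idea from the talk: Server Components execute during serialization, where they can touch data directly, while Client Components are left in the output as module references for whichever client renderer (the browser, or the HTML-producing server renderer) picks the stream up.

```javascript
// Toy model of rendering a tree to a serializable payload. Element shapes,
// isClientComponent, and moduleId are all invented for illustration.
function renderToPayload(element) {
  if (Array.isArray(element)) {
    return element.map(renderToPayload);
  }
  if (typeof element !== "object" || element === null) {
    return element; // strings and numbers are already serializable
  }
  if (typeof element.type === "function") {
    if (element.type.isClientComponent) {
      // Don't run it here: emit a module reference plus its props,
      // to be filled in later by a client renderer.
      return { $ref: element.type.moduleId, props: element.props };
    }
    // "Server Component": run it now and serialize whatever it returns.
    return renderToPayload(element.type(element.props));
  }
  // Host element like "div": keep the tag and recurse into children.
  return {
    type: element.type,
    props: { ...element.props, children: renderToPayload(element.props.children) },
  };
}
```

Note how the component functions for Server Components disappear from the payload entirely; only host elements and client references survive, which is why that code never needs to ship to the browser.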
I think of the server renderer as more of a simulated client environment where
the server renderer is the thing that receives the original request. But
then it can request data just like the client can request additional data, and that
data could be a subtree of Server Components that then gets fed into the server renderer
that acts as a client, and then streams out the resulting HTML, and then it also embeds the Server
Component output in the HTML as JSON data. One key distinction there is that current
approaches tend to embed data in its rawest form, so if you're fetching a REST API on the server
you might embed a REST response in your HTML for use with hydration. But in the Server Components
approach, we're embedding the result of the Server Components in the JSON which means you're getting
kind of denormalized and processed data in the output which can sometimes be bigger but faster
to render, and sometimes be smaller because you're just loading the data that you actually needed for that component rather than the whole REST response. I was going to say something
that... because I personally find it very confusing sometimes even though I know the
difference between all the pieces, just because the naming is very confusing because nowadays when
people think "well, it's called Server Components, I already have a thing called a server renderer,
well the server renderer must be rendering the Server Components" but it's actually not
quite that. The thing that outputs HTML, the thing we traditionally think of as server rendering
today, before Server Components — in this new architecture, that thing doesn't actually render
Server Components. It only renders the Client ones which is kind of mind-bending. It actually
receives already — see, I'm even struggling to think of the words to use right now, but there's
like a layer that runs your Server Components, it sends that to a client renderer, and
then there's two types of client renderers: one that runs in the browser and one that runs on
the server. I don't know if I'm clarifying it at all but but there's this distinction there between
the thing that outputs HTML and the thing that fetches the data and generates this streaming
output that you can then turn into HTML. If that helps at all. Yeah, so I guess in this case
the server renderer is like the simulated client runtime, basically, right? And so I guess kind
of following up there does that also mean that the assumption that Client Components only run
on the client is kind of false in that world? Yeah, so that's another confusing aspect
is there are certain components... so yeah, by default Client Components do run in the Node server environment and output the initial HTML, or they run in the browser. There is a use
case for like some components that maybe you don't even want to try and render the initial
HTML on the server "client" renderer — sorry, the server renderer — so we're thinking of an API
where you can just bail out and say, you know, just don't bother to try and render this tree
on the server and we'll pick it up on the client which is a pretty nice feature because it gives
you some granular control over which things are able to run in both environments and which
things aren't. But yeah, in general you're right: Client Components in this world don't necessarily
mean you can just access window and all these browser-only APIs. If you want to take full advantage of streaming HTML generation then the same restrictions apply. In terms of naming, I think
there's some other interesting ways of looking at it, because Server Components are really about utilizing the server for what it's good at: being close to the data, relieving some resources, and having code already loaded. Whereas server rendering is more like a magic trick, and I think
that's a good way of looking at it because it's just about rendering this snapshot that the user
gets to see before they can interact with it. But hopefully, especially with progressive hydration approaches, it won't feel any different when you try to interact with it, but that's
really the purpose of the server rendering. It's to provide this magic trick of a fast
initial snapshot. It's kind of similar to an iOS app, where you can see a snapshot, in terms of pixels, of what was previously there when you start it, and then it actually starts. It's a similar kind of trick to make it feel like it's starting fast, whereas Server Components are really a permanent approach that helps navigations further down in the app and permanently avoids having to load that code. Yeah, I like the "snapshot"... if anyone
has a good suggestion for naming by the way we're open to them. This "snapshot" thing
I like because it reminds me of like a V8 snapshot. I think one term I've been using
personally is "bootstrapping", it kind of like bootstraps the page just so that React can take
over and actually do what it needs to. But yeah, it's the server-side rendering that gives you that
initial skeleton to actually do anything off of. Yeah, at FindMyPast we've often called it
the "pre-render", because server rendering made people think of an ASP.NET MVC kind of application. That's not really what it's doing, so we started calling it the pre-render, because it is this kind of optimization. Let's go to the next question. Jae? Yeah!
So going into that: when I first talked about Server Components with a colleague who is a principal on the front-end platform team, one of the things that he immediately was concerned about was our server render — our server pre-render — which is already quite a resource-intensive piece of our stack, and that's just pre-rendering once per session. He was wondering what the performance characteristics are going to be of this Server Component provider that will have to do much more work over the lifetime of a user session, both in terms of connections and in terms of processing. Are there going to be any built-in optimizations for, say, caching or memoizing the result of Server Components that might be the same even for different users, or for the same user requesting it again and again across the session? Yeah, so I think just to address the first point there — it's unclear so far; we don't have great numbers. It hasn't been super resource-intensive for us at Facebook so far, from what we've seen, and I think part of that has to do with just how resource-intensive your REST API, or the processing of the data, or the GraphQL endpoint is today. And the other
part is that subsequent Server Component requests are not necessarily as
intensive as the initial server rendering because it's only the Server Components and not
the Client Components, and it's also a subtree. So it will have this ability to kind of re-fetch the
subtree but it's definitely a concern that we have that we want to meet by having that ability
to refetch a subtree rather than kind of re-fetching all the data for a whole page
when you're refreshing it. And to the caching point, we have some ideas about
ability to cache subtrees in various forms. So caching is always kind of tricky because you
have to make sure that you can invalidate it properly. But it also ties into context: because we have the ability to fetch subtrees, like I just mentioned, and you want to preserve that ability, we'll also have the ability to cache those subtree responses within any particular tree. But we have to encode the inputs to that, which means, for example, if you're doing an HTTP fetch or a file read, all the inputs that go into it (not just the initial props, but all the data that you read) need to participate and give us a way to invalidate them, whether that's a timestamp or a file watcher or a subscription approach. So we haven't quite figured out what the
API for that invalidation is going to be, and it might be tricky to add after the fact, so we're still figuring out whether that should be part of the data fetching API contract from the beginning, so that you don't lose that ability later, or whether it's something that you
can gradually adopt later on. I want to add that at the client level, the Server Component response is also cacheable, determined by the needs of the product. So for example, if you have a part of your application that is really static, and the rate of change for the data that powers those components is low — let's say a navbar — you don't necessarily have to re-render the Server Components if those initial Server Component responses are cached. And there's nothing really special or unique about these Server Component responses that makes them hard to cache. So for example, in Relay we do cache the Server Component response, and we essentially make use of that if the data has not changed: instead of refetching the Server Component, we just restore it from the Relay store. One thing to add is that you mentioned that your
server side rendering — what you described as pre-rendering — is currently resource-intensive.
I think one thing to note there is that for certain libraries, the only way to do server rendering with data fetching right now is to do multiple passes over the tree just to figure out what data the UI needs. And once the cache has been warmed up, then they can actually do a full render, but obviously that means multiple passes over the tree. With Relay we don't see that, because we're
actually fetching all the data upfront, and one of the benefits of Server Components is that they make that easier to do: they make it a bit easier to structure your app so you can avoid the need to walk the tree again and again just to figure out what you're rendering. Also, the new streaming Suspensey server rendering will actually
be able to resume work. Fetching with Suspense, we can resume work where we left off as opposed
to having to start over. So I think that even that initial baseline, where pre-rendering today seems expensive, might change too, right? It's not just about "oh
we're adding more work" it's actually potentially making all of the work that you're already doing
a bit more efficient as well. I have a question: where is your GraphQL implemented, is that a
JavaScript service or a different language? Yeah, GraphQL is mostly in JavaScript but it's
a distributed graph — so we have a central Node.js server that proxies different
requests for different parts of the schema to back-end services written in a
variety of languages but mostly Node. Yeah, I think the reason I ask is because there's some overhead in just the runtime itself
and if, for example, if you have a REST API today and the REST API is built in Node, you can just
add Server Components as an additional layer to the end of the same runtime. And similarly if you
have a GraphQL implementation in Node, or even in front of it, then you can just add Server Components at
the end of the same service to amortize a little bit of the overall cost because you're utilizing
the same service for both processing your data and processing your Server Components; essentially it's just a data processing pipeline. Yeah, so I think this is kind of a continuation
of the previous question. So we talked about caching of Server Component responses, and I'm kind of curious — you know, today something we do is cache the resulting data in a client-side store or a provider; we use Redux in our app. I'm wondering, talking about the Relay store caching the responses for Server Components, is that something that React itself is going to be opinionated about, or is it something that's just going to be up to userland and the needs of the product? Does someone else want to jump in? I thought I heard a breath. Wanna go, Seb? Yeah, I was just gonna tie it back to what I
was saying in the intro about the pieces that are missing. There's a piece here about
routing and triggering the fetches which also includes the caching. And we have some
ideas around how you might want to do that without any additional library — just the simplest possible thing you can do — where you would have a cache: there's this Cache primitive built into React that would hold it. It's actually used both on the server, to hold the responses that you use on the server, and as a cache that holds the responses on the client as well. But the Cache is also used for any ad hoc thing you might fetch on the client,
so for example you might want to have images in there to support a kind of Suspensey image technique, or you might want to have an ad hoc client request that also goes into the same
Cache. So that's kind of the basic approach, and we have some opinions about how that's rooted: certain subtrees have a lifetime in React, and that lifetime controls the Cache. But then you can also build this into an existing cache that is
more globally rooted, like Relay for example. I'll add that if you've ever played with Suspense,
like the preview versions of Suspense that we've published in the past, we have very glaringly
not solved this caching issue. We've kind of just given you a recipe for how to do a userspace cache, and we've put a giant TODO in front of the whole area of how you do invalidation, or how you decide which parts of the tree need to be consistent. So the API that Seb is alluding to is the thing that we
are now going to have more opinions on. And so if you are using Suspense, there will
be this unified built-into-React Cache API that different frameworks can hook into. And
so each framework might have different implementations for how it fills in that
Cache, but there will be a unified pattern for how you should invalidate it, how you decide which parts of the tree should be re-fetched, or which parts of the tree need to be updated after a server mutation. There'll definitely be additional layers on
top of this that a framework like Relay will have particular implementation opinions on,
but for the lowest-level substrate, where the cache actually lives, we will have an API for that. And to fill in what the purpose is — this is kind of a deep dive — the purpose of
that Cache is to provide consistency for the subtree. So if you imagine you're doing
a fetch for Server Components but your Server Components can layer in Client Components, and the
Client Components might also do fetches around the same time, filling the same Cache. And the idea is
that you can invalidate all of that as one unit, and you get a new server request for fresh
data, but you also get client requests for fresh data as well for the same subtree. And it's
all tied with that subtree in React on the client. Should we go with the next question,
Jae? Yeah, so another thing that I think about, especially with how complicated data fetching is for us right now, is error handling. So I was wondering what your thoughts are on
what if there's an error in a Server Component, what if the service providing the Server
Component becomes unavailable, you know, is there going to be a way for the client to
say something like "well, if you can't fetch the subtree, display this in the meantime", or is it a case of: if some subtrees fail to fetch from Server Components, the app isn't in a state where it can continue rendering? So I can start by kind of talking about the
general mechanisms, and how best practices can fill in around them. There are a couple of places where errors can happen. There are errors that can happen in the runtime outside of React itself; those are more up to the infrastructure or metaframework to handle.
And then there are errors that can happen as part of the network: maybe you don't get the response at all, or you get part of the response but then the connection errors. And then there are
errors that can happen within a Server Component. So when an error is intentionally thrown within a Server Component on the server, there are two things that happen. One, you get to log it on the server so that you can track it; even if the errors don't end up on the client, you still want to know that you have some kind of errors happening. The other part is that it gets embedded as part of the response. And then that component, where it
kind of abstractly conceptually gets rendered in the tree on the client, an error is rethrown so
that the client's error boundaries can handle it. If an error happens because, for example, you've gotten a piece of the response but not all of it, or you didn't get the response at all, the client runtime throws an error for all the pieces of the tree that haven't already rendered. So if you render a part — remember, this is a streaming protocol, so you can partially render the data that you already have — the error happens in the places that haven't yet rendered, and the nearest error boundary to those places is where the error gets handled. And then it's really up to the error
boundaries to determine what to do with that, whether it should display the error
or if it should retry that request. Yeah, that sounds very flexible, like it will
give us a lot of options for all of the different error handling cases that we have, and it sounds
easier than how things are right now, with errors on the server handled separately from errors on the
client. Yeah, one thing that is a little bit tricky in this space is that you might have
a general-purpose error boundary that just renders an error message for all the
errors. But in this world, if you've never had errors like I/O errors thrown into
an error boundary, then those boundaries might not be aware that they should special-case I/O
errors, or maybe rethrow if it's an I/O error. So it's a little tricky now that an error boundary
has to be aware of I/O errors as something special so that it can know to delegate those, or know
to handle it itself. Because otherwise if you have a deep boundary that handles the I/O error,
it might not refetch, whereas if the error had bubbled through that error boundary it would have
reached the parent that knew how to refetch it. So that's still a little tricky, but I think
it's pretty flexible still. One thing we were curious about was specifically in terms of — a
lot of our pages are basically giant feeds — so pagination is something that we think about
a lot. And I'm curious what that would look like in terms of Server Components and
pagination and fetching subsequent pages. Yeah that's a great question, and
I think being very honest here, we're not sure yet. We've thought about this,
we've explored it, but currently, for example, we're using Relay for our pagination, so
for example we're using Server Components for individual items and I don't think actually
we're using Server Components within a feed-like situation yet. But if we were, it would likely be
kind of Relay on the outside, Server Components on the inside, and I think our idea is to gradually
explore that space. So in terms of where we are, a little bit to be decided, and I think one
challenge there is the need for... even with Relay, we're still evaluating what is the right
way to do streaming pagination with Suspense, where you want to have new items arriving from
the server and getting incremental rendering. But obviously with Suspense integration so that
you show the first item and then subsequent items even if maybe the second item is ready
first, right? So it has to be integrated with SuspenseList. So yeah this is like a non-answer,
others may have more thoughts, but that's the current state of where we're at and what we
know actually works. I think that there's actually more known there than it might seem, because
there's a bunch of possible versions that we don't think are gonna work. We don't have the
exact API but we think roughly the same structure. We've explored various forms, for example,
if you refetched the whole page and told the server to now include more in that list, that
would be one approach. But the approach that we think is going to be there, which probably seems
the most intuitive too, is you imagine each item in a list being its own subtree and we will have
the ability to refetch just a subtree picking up the context for where you left off. So the idea
is basically you have a Client Component that is managing the list and it's sending a request
for "give me this list of extra items", and the server renders those (or rather, Server Components render
those), and then you get the result back, and that's what you render at the end of the list. That's
effectively what we're doing in Relay. There's nuances in exactly how you design that API but
I think that's the general principle, and some of the reasons for that particular approach
is that the page itself is kind of stateful in the sense that where you are in the list is a
client concept. If you just refetched — and this is especially true with Facebook because
every time you refresh the newsfeed you get a completely different order — it
doesn't have any inherent order. So because the underlying data can change, the list
can change throughout time. So we don't actually want to refetch the list itself as a part of this
request, we just want to add an extra page and just fetch that page and add that to the data we
already have. And to do that we need to be able to pick up the context. But which context should it
be — should it be the context of the freshest data or should it be the context that you rendered with
at the time that you rendered the outer list? And we think that it probably should be the context
that you had when you were rendering the outer list. So there's a lot of things that we concluded,
and the end result ends up looking a lot like Relay's pagination, so I would look at that as
an inspiration. Yeah, that makes sense, thank you. Jae? Yeah, so another environment
where all of this will have to run that we're thinking about is tests. So right
now we have quite a few tests running React against jsdom, for some quick tests that
can be run more quickly than, say, Cypress end-to-end tests that actually run a browser.
So I've been wondering how Server Components fit into that. Will it be a case of being
able to have this Server Component provider running locally as part of the same process that
is running the tests, or how do you imagine that? I was gonna ask Lauren, how do we
run tests now at Facebook end-to-end, can you talk a little bit about that? Yeah,
currently in our prototype we do have testing but the only tests we have are basically end-to-end
tests where we do actually run the Server Component rendering infrastructure in that test.
I think the unit test story is still, at least to me, not super clear, so others may
have thoughts on that. But yeah we do run our tests end-to-end so we get to see the actual full
end-to-end flow of rendering a Server Component and then making it into the initial load and then
any interactions that might be expressed in the end-to-end test, those are all testable there.
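To make that flow concrete, here's a toy model of what such a test asserts on: a "server" renders a component to a serializable payload, and the "client" consumes it to produce the final output. The names and the payload shape here are made up for illustration; this is not the real Server Components wire format.

```javascript
// Hypothetical Server Component: runs only on the "server" side of the test.
function NoteServer({ title }) {
  return { tag: "article", children: [title.toUpperCase()] };
}

// "Server side": render to a plain JSON payload, a stand-in for the
// streamed Server Components response.
function renderToPayload(Component, props) {
  return JSON.stringify(Component(props));
}

// "Client side": consume the payload and produce the final markup.
function mountPayload(payload) {
  const node = JSON.parse(payload);
  return `<${node.tag}>${node.children.join("")}</${node.tag}>`;
}

const payload = renderToPayload(NoteServer, { title: "hello" });
const html = mountPayload(payload);
// An end-to-end test asserts on this final output, not the payload format.
console.log(html); // prints "<article>HELLO</article>"
```

The point of the sketch is where the assertion sits: at the very end of the pipeline, so the test keeps working even if the intermediate format changes.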
So it should plug into existing end-to-end frameworks, assuming that you can run your Server
Component rendering infrastructure as well. But the interesting thing about Server Components
is that there will be paths that we're exploring like we're currently researching some ways to
run Server Components in a different environment, like not on your server, like maybe in a Worker
or something like that, that could help with the unit testing story. I don't know if Sebastian
or Andrew have more thoughts there. Yeah, I think for end-to-end tests it's... well so there's
different types of unit tests. I don't always know what people mean by that, I think it usually means
some part of the layer is mocked out or stubbed. So like if you wanted to unit test a Client
Component that expects Server Component data then that'd probably be pretty similar to
today where instead of rendering it inside of a Server Component you just render inside
something else that gives it props. If you wanted to unit test the Server Component
itself, since Server Components can render a UI, the way I would probably do that is to
simulate the request environment in-process and actually generate the output. And
then feed that into the, what are we calling it, the pre-renderer API. And then assert on
the React output the way you would for a Client Component unit test. You probably shouldn't
assert the actual data format that this spits out so I guess it depends on what layer or part
of the stack that you're trying to test. But even for things that I call unit test,
I usually find it valuable when you keep it as "end-to-endy" as possible. So yeah I
probably wouldn't recommend asserting on anything except for the final tree output if
that makes sense. Yeah I'll add also that a lot of what we call Server Components are actually
Shared Components that you can run on either the client or the server, and one way if you're just
looking to test the logic and not the integration is to just render them as a client just like you
would test them today. I do think though that our observation is that moving more towards the
direction of end-to-end, whether that is more of a simulated end-to-end like a jsdom environment
or a richer full browser end-to-end test, seems to be the way a lot of things are
going because it definitely simplifies testing a lot of asynchronous
behavior, like Promises. Yeah, I think we also agree that end-to-end testing, especially in a full
browser, removes a lot of the complexity of setting up the environment, but there's still a trade-off
there between performance and how many tests you can write and still have them run
performantly. So yes, specifically I was wondering about the case where
we want to test just a subtree in jsdom, and especially what happens if that subtree includes
both Server Components and Client Components, and can that just be run in-process in Jest
or is it a case of "well, no, you have to spin up a separate server or worker process that
does the Server Component stuff"?... That's a good question, because the infrastructure
is a little tricky with this environment just because we special-case how imports are
handled so in general the server can't... well, ideally it's set up so that the Server Component
renderer is its own process from even the "pre-renderer", that "bootstrap" renderer
thing, but you can run them in the same environment as long as they're built as separate
module systems. So for example a lot of production environments for server rendering use webpack
bundling before it's loaded in Node. And since webpack has its own module system and graph,
you can put two of those in the same process. But also, if you're able to run it as a
Client Component, it more or less behaves similarly. It's not exactly the same, but putting
a Client Component where a Server Component would have been inside of a client tree is more or
less the same, and that's the idea. You mentioned towards the beginning that one of the things
you're thinking about before releasing is a webpack plug-in. I'm wondering if there are plans
for first-class support for non-webpack bundling and also whether or not bundling on the server
for example is actually a requirement for Server and Client Components. Yeah, so we're
doing webpack first but we want to support first-class bundling for any bundler that can
support a good experience out of the box for this, for the client. There's a few constraints
there, particularly the reason even the runtime is coupled to webpack right now
is because we're kind of relying on some internals to be able to synchronously extract and require
modules lazily even though they've already been loaded, and pre-load them early. So to get really
the ideal performance, we're relying on a lot of these features that are not necessarily
part of the standard API, but if there are other bundlers that support the same things, we
can definitely support them. The other part is just getting the bundling strategy, which we
don't really know exactly how that will work. But definitely something that could be built
for others and we could even maintain it as a first-class package if it's a high-quality
implementation and we're happy to help with that. The other part of the question is whether the
Server Components, the server part, needs to be bundled. None of this necessarily needs to be
bundled as part of development, and I think there's a large shift now in the ecosystem trying
to explore other ways of development where the development experience can be faster. For example
by not bundling. But we also think that an ideal developer experience for
debugging could actually be to run the server part in the Service Worker which might
require some kind of bundling, or at least some partial bundling or partial compilation to
get JSX and stuff. But then, even our demo doesn't actually bundle the server, and I think that's
actually the big missing part. And the reason I think it's ideal to do it, even though
you don't have to, comes down to two things. One is that it's a little bit faster to just have a bundle
running in the Node environment in general. But the other part is that we might want to
use the graph that we determined during the bundling of the server to determine what the best
bundling strategy for the Client Components are. I know Tobias from webpack has some ideas
of even designing webpack to have a shared graph between a server bundle and a client
bundle so that it would have this information. But that really depends on
what your bundling strategy is. At Facebook we use a data-driven bundling approach
where we see previous visits and try to determine using a statistical model how best
to group certain Client Components. But if you don't have that you have to get as
much information as you can from a static build and a lot of the information is dependent on the
server graph. So for example if you have a Server Component that always pulls in these
three Client Components, you want to be able to know that as part of building the Client
Components so that you know to group those. But you don't have to because you can just build
all the Client Components as a separate graph and treat them all as entry points, but you don't
have a lot of information then about how to group the best chunks. There's middle ground here
too, you could have something that doesn't actually run the bundling of the server but just
uses that as an analysis to feed into a client's bundling. But I think that the first approach
that we want to build, the missing pieces, is a unified approach where the out-of-the-box
experience is that you build the server first and use that as input to build the client. So
I was thinking about CSS as well: given that Server Components can render UI, how will the
CSS get to the client at the right time when the Server Component UI is fetched, both with
CSS-in-JS and also CSS Modules? Especially if we're talking about how these
Server Components might not... the code that runs them might
never be downloaded to the client, how does the client know to download the
right CSS and put it in the right place? I know this one. There's basically three different
strategies of these that we observed. The strategy that we currently use at Facebook is basically a
static analysis where we analyze the file and then we create the CSS bundles, they're basically
one bundle that has all the CSS more or less, and in that world you just have
to make sure that the analysis is able to traverse these files so that it doesn't
just traverse the client, it has to traverse — and that kind of ties into the previous question too,
right — you have to have something that traverses the server files to find the CSS in them. The
other strategy is more like in the out-of-the-box experience with webpack with no plug-in where
you can import a CSS file as part of the module. In that case it's kind of implied that if
you load that file that the CSS file will be loaded with it. But there's no
explicit connection between the CSS file and the component, it's just
that you import it and it's there. That needs a little special consideration because that
module won't be pulled into the webpack client bundle, so the dependency won't be there in
the graph. That's part of the thing that we probably want to add in our official webpack
plugin, since that's a basic webpack feature, and we have to do something clever, like transforming
the file so that it injects a call letting us know that this file is associated with this
component somehow. But the third option, I think, is the more common one which is whether
you do it static or at runtime there's something in the component that determines that this
class name is associated with this component and it needs to be injected. Either it needs to
download the dependency, or the CSS needs to be dynamically injected on the fly.
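As a rough userspace sketch of that third strategy (every name here is hypothetical, not a real React API): components register the CSS they depend on during the server render, and the collected styles travel with the response so the client knows what to inject before showing that subtree.

```javascript
// Styles collected during one server render pass.
const collectedCss = new Set();

// Hypothetical helper a component calls to tie a class name to its output.
function useCss(className, rules) {
  collectedCss.add(`.${className} { ${rules} }`);
  return className;
}

// A "Server Component" whose output depends on a class name.
function Banner() {
  const cls = useCss("banner", "color: red;");
  return `<div class="${cls}">Sale!</div>`;
}

// The response pairs the rendered output with the CSS metadata, so the
// client can inject the styles (or a server renderer can inline them).
function renderWithCss(Component) {
  collectedCss.clear();
  const html = Component();
  return { html, css: [...collectedCss] };
}

const response = renderWithCss(Banner);
console.log(response.css[0]); // prints ".banner { color: red; }"
```

A real implementation would also deduplicate across requests and stream the metadata alongside each subtree, but the shape of the idea is the same: the CSS rides along with the part of the response that needs it.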
And you can certainly do that, kind of, in userspace third-party
code, but I think we actually want to expose a particular API for this case so that
you can say that "this is the class that I want to associate with this output". And if this
output gets included in this part of the subtree then there's some metadata that goes along
with that in the Server Component's response and then the client can inject that to load that
CSS or to include that CSS in the server renderer. But I think that's going to be a missing piece
that we'll have to add before it's really useful. I just want to add something really quick, not
specifically about CSS but I think this is also generally in the class of problems where
some side effect that used to happen on the client now happens on the server
so you need some way to keep track of all the side effects that happen — whether it's
logging or an error is thrown or it's CSS-in-JS that's being used — and then depending on the
needs of your product, replay that on the client. Like in the error case where we re-throw
the error, or in the CSS case where you might need to request that CSS or
inject some CSS class into those components. So I think it's a very similar class
of problem that we were working on. Yeah, and we have a similar issue with Relay,
right? Where we want to emit a data dependency, because we know from the
server that we need this data to be part of the client
component we're about to render. Is there anything we should be doing today to make
it easier — obviously we want to be able to adopt Server Components as soon as it comes out — is
there anything we should be prioritizing in our own codebase to help that migration eventually?
Yeah, so there's multiple layers to this. We mentioned upfront at the beginning of
this chat that there is a dependency on some concurrent rendering features,
we've talked about this in the past before. Our next version of React, React
18, will support concurrent rendering. Not all features of Server Components depend
on you being 100% compatible with Concurrent Mode. But just by starting to add Suspense boundaries
and starting to use Server Components in parts of your app, you're kind of opting those subtrees
into some amount of concurrent behavior. So we thought a lot about this and our rough strategy
is that you will upgrade your app to React 18 and basically almost nothing will change in terms
of... if all you do is upgrade your dependency and switch to the new root API then there's like a few
very subtle legacy quirks that we've gotten rid of but everything will still be synchronous. And then
as you adopt feature by feature, screen by screen, component by component, and some things will get
a little bit into the more Concurrent Mode side of things. So if you want to start preparing today
there's some fixed upfront costs that you have to take care of. And then there are things
that you can incrementally do later on. So one of the fixed ones, if you don't
already have Node running, you might want to figure that out so that by the time you get to
like later this year or whenever that happens, that's already solved. A lot of people are
already in that world if they're using Relay or Apollo or something or they're doing server-side
rendering. And then what you can do right now today is you can start getting your components
to be Strict Mode compatible. We have an API called Strict Mode that we released a few years
ago that was designed to surface in development certain concurrency issues so that you can solve
them now before Concurrent Mode is released. One of the basic things it does is warn about some
old class component lifecycles that just don't really work well in Concurrent Mode. A really
important thing it does is it'll double-invoke pure render functions, only in development, to
try and flush out any possible side effects. We have a whole document, which I think a lot
of people are already familiar with, describing how you can start wrapping
Strict Mode around certain parts of your app to incrementally get things migrated over.
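The double-invocation idea can be illustrated with a toy model. This is not React's implementation, just the principle: rendering the same thing twice and comparing the results flushes out side effects that a single render would hide.

```javascript
// Toy Strict-Mode-style check: call a render function twice and demand
// identical output, the way development double-invocation surfaces impurity.
function renderTwice(render) {
  const first = render();
  const second = render();
  if (JSON.stringify(first) !== JSON.stringify(second)) {
    throw new Error("render is not pure: two calls produced different output");
  }
  return second;
}

// A pure render: safe under double invocation.
const pure = (props) => () => ({ text: `Hello, ${props.name}` });

// An impure render: mutates external state, so the second call differs.
let counter = 0;
const impure = () => ({ text: `Render #${++counter}` });

renderTwice(pure({ name: "Ada" })); // fine, both calls agree
let caught = false;
try {
  renderTwice(impure); // flagged in this model, like a Strict Mode warning
} catch (e) {
  caught = true;
}
console.log(caught); // prints "true"
```

React itself doesn't compare outputs like this; it simply runs the functions twice in development so that any hidden side effect misbehaves visibly. The model just shows why purity is what double invocation tests for.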
That general strategy of starting small and then gradually spreading it out until you get more of
your surface covered is roughly how we're going to do it post React 18 as well. One thing that's
important to emphasize is that I think we might have been a little overly pedantic in the past when
communicating about Concurrent Mode compatibility. What we've realized converting Facebook surfaces
to Concurrent Mode is that a lot of things that are theoretical problems just don't really
come up in practice that much. I mean it is annoying when they do arise but we've been able
to convert large swaths of our app with really not that many problems. So we are going to have
a way for you, even once Concurrent Mode is out to, for instance, if you have some old class
components with unsafe lifecycles that are running in a part of your tree that's not using
any concurrent features, there's really no reason for us to warn you about that. So we'll have a way
to either opt out of those warnings and delay them until later once you actually do start adopting
things, or use the Strict Mode component API to fix those ahead of time. But the general
message is we're working really hard to make sure it's gradually adoptable, and you only have
to pay the cost of migration once you start using new features in a particular part
of your app. So yeah, short answer: if you want to start today you can start using
Strict Mode to fix those issues, and you should hopefully be very ready once the day comes to start
incrementally adding features. The one other thing I'll mention is that there is — my whole thing
about how in practice you don't really tend to hit Concurrent Mode bugs — that is true of components
and Hooks. It's less true maybe of frameworks or infra-level code. So there will be some work, this
is why we're planning to do a release candidate before we do a final release, because we're going
to do some work with open source library authors, particularly things that do a lot of
state management type stuff or read from external data sources. Those are the ones
that tend to have the most concurrency issues, and so it's really important for us to address
that, so that by the time we go wide with the actual
release, the ecosystem is unblocked from being able to migrate. But the nice thing about it, even
though that sounds scary, the nice thing about that is if we fix, for instance, I'm just gonna
pick Redux, if we fix Redux for Concurrent Mode, we fix it for everyone. We already did this with
Relay at Facebook, we fixed a bunch of concurrency compatibility things in Relay, and as a
result everything at Facebook that uses Relay, which is tons of stuff, kind of mostly just worked
after that. Hopefully that provides some insight. The other part is around how you do data fetching
today. If you're interleaving data fetching into a normalized store and you're mixing and matching
state and data that way, then it can be hard to know how to separate the client
parts from the server parts. For a certain part of your app you might
want to keep that ability, but for the parts where you're really thinking Server Components
could be helpful, it's nice to be able to split out the data fetching parts from the state parts.
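A minimal sketch of that split, with entirely made-up names: all the data needed for the initial render lives behind one server-side function, interactive state stays on the client, and the view is a pure function of both.

```javascript
// Hypothetical server-side data function; a real one would be async and
// fetch from a database or API. No client state lives in here.
function getServerPropsForProfile({ userId }) {
  return { user: { id: userId, name: "Jae" } };
}

// Client-only state that never belongs on the server.
function createClientState() {
  return { expanded: false };
}

// The view is a pure function of (server data, client state), which is
// what makes it easy to later move the data half into a Server Component.
function profileView(props, state) {
  return `${props.user.name}${state.expanded ? " (details shown)" : ""}`;
}

const props = getServerPropsForProfile({ userId: 1 });
const state = createClientState();
console.log(profileView(props, state)); // prints "Jae"
state.expanded = true;
console.log(profileView(props, state)); // prints "Jae (details shown)"
```

Once the code is factored this way, the data function is exactly the part that a Server Component can absorb, while the state and the interaction stay in Client Components.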
And a pattern that's particularly useful for that is getInitialProps or getServerProps in Next.js
because it's very clear that this is all the data that you need for the initial rendering pass, and
you could even potentially invalidate it too. And then for anything else that you need to do for
data to be more dynamic, that's a different thing. So that pattern, whether you use Next.js or not,
is a good way to prepare because you can mostly put all of your getInitialProps or getServerProps
data fetching into Server Components once you adopt them. I also wanted to add that in
addition to the points that Andrew and Sebastian were highlighting, when Server Components are
released in open source I think we'll also aim to open source some of the internal lint rules that
we've written along with the conversion scripts that should help you get some
of your components converted to Server or Shared Components. For the conversion
script in particular, it doesn't actually change your application architecture or
the structure of your component tree, but it will identify components that
can be or are Server- or Shared-safe, and if they are, then it tries to convert those
components and does a bunch of other stuff to make sure that renamed files are imported
correctly and whatnot. Of all of the points that Andrew and Sebastian mentioned, separating your client-side state from
the data requirements in particular will go a long way in helping the conversion script understand which
components are actually Server- or Shared-safe, and then it can do the conversion for you. We'll
try to aim to release these along with Server Components. The timing may not necessarily match
up but I will certainly try my best to do that. And I think it's right about time to wrap
it up. I'd like to thank you all so much, thank you Andrew, Lauren, Sebastian and
Joe for coming to answer the questions, and huge thank you to Jae and Yen-Wei for
joining us today and asking the questions. I want to emphasize one more time that Server
Components are in research and development so we don't have all the answers yet and what
we said so far are just our best guesses at what the answers are. But as always, when we have
something that we want every React developer to hear we will post it on the React blog. So if you
check reactjs.org/blog you will not miss anything important. We still hope that this Q&A gave you
a better sense of the overall architecture and the vision that we're going for, and if there is
interest in the community we also hope to produce more technical deep dives like this
in the future that are centered on our ongoing research and development. Thank
you for watching and have a good day. Bye!