RustFest Paris 2018 - A QUIC future in Rust by Dirkjan Ochtman

Captions
Dirkjan Ochtman takes us on a ride into the future of Rust on the server side, to learn what async fate we will have to await; meanwhile, let us Rust far and wide. [Applause]

Thanks — yes, although it's not just about the server side, it's the client side as well. (You've got to have two sides to any connection.) So I'm here to talk about QUIC. It's a new protocol the IETF is currently working on.

First, a little bit about me. I've been using Rust for about two years, in my spare time mostly — or really only — and I've had a lot of fun. I think Rust is great, and I believe in the future of doing things in Rust rather than in other languages. So I got interested in this protocol that they are developing. It's called QUIC — Quick UDP Internet Connections — or just QUIC. Basically, the large companies working on internet stuff — this started at Google, but a number of companies are in the working group together now — are trying to standardize it, and what they want to do in the end is to replace TCP.

TCP is this core part of the internet stack. You might just think of it as TCP/IP, where basically you shove data bytes into a stream and you get them out on the other side; what it's trying to do is reliable, in-order delivery across the internet. This has been working for about 40 years, and there have been some extensions to make it better, but it turns out we have learned a few things about doing internet-scale things in 40 years, so they think they can do better than TCP, even with extensions. They started at Google in 2012, and in 2016 they went to the IETF. Hopefully by the end of the year it will be standardized, although there are still a number of things to be fixed, and this week there was a proposal which might slow it down — by nine months, I think the estimate was — before it really becomes a standard. But as the forward-looking people we are in the Rust community, let's look at this protocol that will at some point be part of our lives.

These are the three sections of the talk I'm going to do today. First, I want to talk a little bit about the problem — or one of the main problems — that we're trying to solve with QUIC, and that is head-of-line blocking. I've tried to make an animation to make this a little bit clearer. You see data flowing from the application through the transport layer onto the network, and at some point a packet gets dropped on the network side. The packets that come after that packet have to wait to be delivered to the application: as you can see, they get buffered in the transport layer, and only once the lost packet is retransmitted can they be delivered to the application. In many cases in practice, what you're delivering over the transport layer is not just one stream of bytes but multiple streams of bytes interleaved — you're doing multiplexing — and in that case it might be nice to take the parts that are not directly affected by the packet loss and deliver them to the application layer before the retransmission arrives.
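[Illustration, not from the talk: to make the multiplexing idea concrete, here is a minimal sketch of several logical streams chopped into frames that each carry a stream ID and an offset, so the receiver can reassemble each stream independently rather than relying on one global byte order. The Frame struct and framing here are hypothetical, not QUIC's actual wire format.]

```rust
use std::collections::HashMap;

// A toy frame: which stream it belongs to, where in that stream it goes,
// and the bytes themselves. (Hypothetical, for illustration only.)
struct Frame {
    stream_id: u64,
    offset: usize,
    data: Vec<u8>,
}

fn main() {
    // Two logical streams multiplexed onto one connection.
    let frames = vec![
        Frame { stream_id: 0, offset: 0, data: b"hello ".to_vec() },
        Frame { stream_id: 4, offset: 0, data: b"GET /index".to_vec() },
        Frame { stream_id: 0, offset: 6, data: b"world".to_vec() },
        Frame { stream_id: 4, offset: 10, data: b".html".to_vec() },
    ];

    // Reassemble per stream: losing a frame of stream 0 would not stop
    // stream 4 from being delivered, because ordering is per stream.
    let mut streams: HashMap<u64, Vec<u8>> = HashMap::new();
    for f in frames {
        let buf = streams.entry(f.stream_id).or_default();
        if buf.len() < f.offset + f.data.len() {
            buf.resize(f.offset + f.data.len(), 0);
        }
        buf[f.offset..f.offset + f.data.len()].copy_from_slice(&f.data);
    }

    for (id, bytes) in &streams {
        println!("stream {}: {:?}", id, String::from_utf8_lossy(bytes));
    }
}
```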
So in general, these are the QUIC goals as stated by the working group charter. Minimize latency, both in connection establishment and in data transfer: as you might know, TCP connections are a bit slow to set up, because you have to do multiple round trips, and if you add TLS on top it's even worse. Provide multiplexing: that is, wedging together multiple streams by taking chunks of each stream and serializing them into a single stream — but QUIC better understands how you're doing multiplexing, so the application layer can influence it, and that allows you to get rid of this head-of-line blocking problem. Changes limited to path endpoints: what they're saying is that they don't want the whole internet to have to change to be able to do QUIC, so they use UDP. UDP is the simple sibling of TCP: you take an IP packet, you stuff two port numbers, a checksum and a length field in it, and then you just shove it on the wire. That means you don't get in-order or reliable delivery, but you can do this more fine-grained stuff and leave the in-order reliability to something closer to the application layer, which also makes it easier to evolve over time, because you're not dependent on, for example, most people running TCP stacks in the kernel, which are hard to upgrade. They want multipath support built in; it's also possible with TCP, but there they layer an extra thing on top of TCP, which makes it less efficient, while with QUIC it's built in. They want better error correction, trying to get the most robustness against problems from packet loss. Also, with QUIC security is built in: they reuse a lot of what's in TLS — they rely on TLS 1.3 — but it's always there, unlike with TCP, where you get TCP and then you can layer TLS on top of it. Even though these days it's getting more and more common to always use TLS on TCP, with QUIC it's built in, and that's nice. And they want to do rapid development and testing, so it won't just be this one version of QUIC that then goes along for a long while; they expect there will be revisions, so there's stuff like version negotiation built in, and that makes it possible to at some point move the ecosystem forward without having large flag days.

QUIC is currently split up into six documents, which is nice because it makes it a bit easier to get to the parts you find interesting, or maybe at some point to do more modular protocol implementations. The first one is invariants: they try to document the stuff that cannot change across versions, which helps if you're doing version negotiation or you want to be robust against future protocol changes. Transport is the core protocol logic, so it has all the stuff about the handshake and how data is transferred. Recovery is all about loss detection and congestion control: if packets get lost, how do you detect it, and what do you do when you detect it — or if the flow through the network is restricted, what's the best way to handle that; a lot of these algorithms are described in that specification. TLS describes how that promise of security is realized: they use the TLS spec that's already out there, which we have implementations for, but use it in a slightly more efficient way, because it can integrate into the QUIC layer. And then, to do HTTP on top of QUIC, there are two other documents. QPACK: you might know that in HTTP/2 there's something called HPACK, which is a way of doing a binary compression encoding of the headers — both header names and header values — so you can be more efficient in HTTP data transfer, making it almost look like a really smartly encoded binary protocol; for QUIC they have slightly different needs, so QPACK uses a different encoding for the compression scheme. And then HTTP over QUIC is sort of the same thing: it's really based on HTTP/2 and is quite similar, but a lot of the details are different to accommodate the QUIC semantics.

So a core idea in QUIC is that of streams. The core concept is that you have a hardly limited number of streams — you can have two to the power of 62 or something, so it should be enough for most people — and there are four different types of streams: streams are either client-initiated or server-initiated, and they are either bidirectional or unidirectional. That allows you to make some optimizations in how you handle those streams, and it's used in the protocol; it's encoded by having the least significant bits of the stream ID represent the stream type.
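[Illustration, not from the talk: a small sketch that decodes the two least significant bits of a stream ID into those four types. The exact bit assignment here follows the eventual RFC 9000 numbering and is an assumption for the draft discussed in the talk; the enums and function are hypothetical.]

```rust
/// Who opened the stream.
#[derive(Debug, PartialEq)]
enum Initiator {
    Client,
    Server,
}

/// Whether data can flow both ways or only from the initiator.
#[derive(Debug, PartialEq)]
enum Directionality {
    Bidirectional,
    Unidirectional,
}

/// Decode the stream type from the two least significant bits of the
/// stream ID (bit layout as in RFC 9000; draft versions differed).
fn stream_type(stream_id: u64) -> (Initiator, Directionality) {
    let initiator = if stream_id & 0x1 == 0 {
        Initiator::Client
    } else {
        Initiator::Server
    };
    let dir = if stream_id & 0x2 == 0 {
        Directionality::Bidirectional
    } else {
        Directionality::Unidirectional
    };
    (initiator, dir)
}

fn main() {
    assert_eq!(stream_type(0), (Initiator::Client, Directionality::Bidirectional));
    assert_eq!(stream_type(1), (Initiator::Server, Directionality::Bidirectional));
    assert_eq!(stream_type(2), (Initiator::Client, Directionality::Unidirectional));
    assert_eq!(stream_type(3), (Initiator::Server, Directionality::Unidirectional));
    println!("stream 4 is {:?}", stream_type(4));
}
```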
A short bit about recovery. The differences from TCP are well explained in the specification. Whereas TCP conflates the order of transmission of packets and the order of delivery of the packet contents to the application, these are separate concepts in QUIC: QUIC packet numbers are just representative of the transmission order, while the order in which data is delivered to the application relies on data offsets that are kept per stream. This again helps you prevent head-of-line-blocking-type problems, and because you have a very clear ordering of your packet numbers in terms of transmission order, it also makes it easier to detect packet loss. TCP actually allows you, in some cases, to go back on acknowledging packets, which is called reneging. This could happen, for example, if you lose some packets and buffer a bunch of packets after that, and the receiver is then allowed to say at some point: this retransmission is taking too much time, I don't want to buffer these packets anymore. But as it turns out, there was a paper showing that this doesn't actually function well in practice: it's used only rarely, but it causes substantial complexity on both sides. So this was dropped from QUIC: if you receive a packet, it's your responsibility once you've acked it. There's also a more efficient way of doing ACKs in QUIC, so that as a receiver you can more reliably convey to the sender what you have received and what you have not received, making it clearer which packets need to be retransmitted by the sender. Finally, there is an explicit correction for delayed acknowledgments. As a receiver you might well choose not to send acknowledgments immediately for every packet you receive — because, for example, it's more efficient to group acknowledgments — but that makes it harder for the sender to determine the round-trip time, because you're fiddling with the timing of the ACKs. So in QUIC, the delay between receiving a packet and sending the acknowledgment for it can be made explicit, so that the sender can reliably estimate the round-trip time for its packets.
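[Illustration, not from the talk: a minimal sketch of that last point, with made-up timestamps. The sender subtracts the receiver-reported ack delay from the raw measurement, so the RTT sample isn't inflated by deliberately delayed ACKs. The function and values are hypothetical, not a full RTT estimator.]

```rust
use std::time::{Duration, Instant};

/// Compute an RTT sample for a packet, correcting for the ack delay the
/// receiver reports (a simplified sketch, not a full RTT estimator).
fn rtt_sample(sent_at: Instant, ack_received_at: Instant, ack_delay: Duration) -> Duration {
    let raw = ack_received_at.duration_since(sent_at);
    // Never let the correction drive the sample below zero.
    raw.checked_sub(ack_delay).unwrap_or(Duration::from_millis(0))
}

fn main() {
    let sent_at = Instant::now();
    // Pretend the ACK arrives 60 ms later, but the receiver says it sat on
    // the acknowledgment for 25 ms before sending it.
    let ack_received_at = sent_at + Duration::from_millis(60);
    let ack_delay = Duration::from_millis(25);

    let sample = rtt_sample(sent_at, ack_received_at, ack_delay);
    assert_eq!(sample, Duration::from_millis(35));
    println!("corrected RTT sample: {:?}", sample);
}
```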
Then TLS. In QUIC, this is just the TLS 1.3 handshake with a custom extension. The extension is used to announce transport parameters: things like, as a receiver, how much data you can hold in your buffers before you've acknowledged it, or how many streams you allow to be open. This also makes it possible to scale down implementations so they don't need a lot of memory. The nice part of doing this in the TLS 1.3 handshake is that you don't have to do an extra round trip after the handshake, plus you get authentication from the handshake, so it's clear that these parameters are actually yours — nobody has messed with the transport parameters in flight. After the handshake you use the negotiated cipher suite and secrets from TLS, but you don't actually use the TLS protocol anymore: instead of wrapping all the data transport in TLS record messages, you just encrypt your packet payloads with the keys you got from the handshake.

(Not sure if the slides are going to come back this time... doesn't look like it.)

So this week there was an announcement: the QUIC working group has design teams — that sounds similar to the Rust community — and there is a stream 0 design team. Stream 0 is the stream that carries the TLS handshake and further TLS messages, and it looks like the stream 0 design team has a lot of ideas about how to improve stream 0 — or rather, there are still problems with the current design of stream 0. That is what might cause a bunch of extra delay, because it now means that all the TLS implementations — OpenSSL, or in my case rustls — will have to be updated to support that kind of stuff.

My next slide is about QUIC HTTP. QUIC HTTP is negotiated during the application-layer protocol negotiation (ALPN) that's part of the TLS handshake; that works today, and it's used for HTTP/2 as well, for example. In this case you can just say "hq" to say: hey, I support HTTP over QUIC. You can also do things like saying I support "hq" on this particular port, so your regular HTTP/1 web server can, in its TLS handshake, say: if you go to this other port, you can talk to me over QUIC. It can also mention the QUIC versions supported. That's quite nice, and it prevents you from having to do more round trips to figure all of that out. And as I mentioned, QUIC HTTP is built on HTTP/2, so it's really similar semantically, but a lot of the bits on the wire are slightly different.

I have an interoperability matrix here — it's a really nice visual with coloured cells and a lot of letters in it. You might know that the IETF is big on the motto "rough consensus and running code", and this really represents the running code part of it. I think there are about 15 people or groups trying to implement QUIC, and this matrix has the 15 clients on the left side and the 15 servers along the top, and for each combination you can track what's working and what's not. It's interesting to see the organizations that are participating: there's a Mozilla implementation, there's a Facebook implementation called mvfst (you might have heard their motto, "move fast and break things"), I think Google has an implementation, Cloudflare, Apple — and pretty much all of these are in C or C++. I think there's one in Go and one in Node.js, and then there are two in Rust, one of which is mine.

So, back to the table of contents: the second section is on the Rust ecosystem — basically the stuff I used in my implementation. The first thing I used is rustls. Big thanks to Joe for making it, and also for supporting me with code reviews while I was putting some QUIC-specific stuff in there. There's the transport parameters extension that I implemented, and I have some extension traits added to the client session and server session types, with which, during the setup or construction of a session, you can pass in your own parameters as just a byte vec, and there's another method with which you can get the peer's parameters after the handshake.
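[Illustration, not from the talk: to show what "passing your own parameters as a byte vec" might look like, here is a hypothetical, simplified encoder for a couple of transport parameters as (id, length, value) entries. The TransportParams struct, the parameter IDs and the field widths are assumptions for illustration, not the draft's exact layout or the rustls extension API.]

```rust
/// A few transport parameters (hypothetical subset for illustration).
struct TransportParams {
    initial_max_data: u32,
    initial_max_stream_data: u32,
    idle_timeout_secs: u16,
}

impl TransportParams {
    /// Encode as a sequence of (id, length, value) entries into a byte vec,
    /// which could then be handed to the TLS layer as an extension payload.
    fn to_bytes(&self) -> Vec<u8> {
        let mut buf = Vec::new();
        push_param(&mut buf, 0x0000, &self.initial_max_stream_data.to_be_bytes());
        push_param(&mut buf, 0x0001, &self.initial_max_data.to_be_bytes());
        push_param(&mut buf, 0x0003, &self.idle_timeout_secs.to_be_bytes());
        buf
    }
}

fn push_param(buf: &mut Vec<u8>, id: u16, value: &[u8]) {
    buf.extend_from_slice(&id.to_be_bytes());
    buf.extend_from_slice(&(value.len() as u16).to_be_bytes());
    buf.extend_from_slice(value);
}

fn main() {
    let params = TransportParams {
        initial_max_data: 1_048_576,
        initial_max_stream_data: 65_536,
        idle_timeout_secs: 30,
    };
    let bytes = params.to_bytes();
    println!("encoded {} bytes of transport parameters", bytes.len());
}
```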
While I was in there, I also implemented support for the SSLKEYLOGFILE feature. This is a feature where you can specify an environment variable, and it will dump the TLS session secrets to a file, which is really useful when you want to debug what's happening in your TLS connection. It's supported by browsers, for example, and by NSS and OpenSSL; rustls didn't have support for it yet, but it does now if you use master.

(Now I can't see my own slides anymore, which makes it even harder, so I'll just try to go from memory for a little bit... ah, that makes my practice actually worthwhile — there it is.)

I also used ring and webpki — I keep pronouncing it wrong. These are crates by Brian Smith that underlie rustls as well, and it's really great to be able to use that stuff, also because I know that BoringSSL code is used in there, so I think it's really trustworthy — even though it might not be as mature as your OpenSSL, in terms of security I'm not sure which horse I would bet on. If you get a completed handshake in rustls, there's a method you can call — I think it's called get_negotiated_ciphersuite — and it basically gives you access to the relevant algorithms in ring. That's really nice for something like QUIC — with the current design of stream 0, anyway — where you can then do the packet payload encryption separately, without relying on the internals of the TLS stuff.

I also made use of futures, Tokio and h2. h2 is the HTTP/2 implementation based on futures and Tokio. I had worked with futures and Tokio before, but that was relatively simple, because they were just futures that polled their inner futures for the networking stuff. Here I was looking at how h2 implements its API — how it exposes the sort-of-fake multiplexing that HTTP/2 does to the API user. What they do is have a streams object that's part of a connection, and they give a clone of that to the API user; if you send some bytes through that streams object, it will notify the connection and make sure the bytes are sent off over the connection. I'm working on a similar design for the API of my implementation, and I learned a bunch of new stuff about futures and Tokio in the process, so that was fun.

A final crate I used is the bytes crate, which has been around for quite a while. I stole an idea from rustls, which has a Codec trait that provides an API for encoding and decoding types to and from bytes. I looked at how that was done in rustls and also in other crates, and they were using the Buf and BufMut traits, and that's a really neat way of keeping track of where you are in a buffer while putting bytes in or getting bytes out. This is used for all the low-level encoding and decoding, and because QUIC is trying to be very efficient about using bandwidth, there's a lot of detail in the encoding and decoding — things like variable-length integer encoding and that kind of stuff.
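[Illustration, not from the talk: a sketch of the QUIC variable-length integer encoding — the two most significant bits of the first byte select a 1, 2, 4 or 8 byte encoding — written against the bytes crate's Buf and BufMut traits. This is a simplified stand-alone version, not Quinn's actual code.]

```rust
use bytes::{Buf, BufMut};

/// Encode a QUIC variable-length integer: the two most significant bits of
/// the first byte say whether the value occupies 1, 2, 4 or 8 bytes.
fn put_varint<B: BufMut>(buf: &mut B, value: u64) {
    if value < 1 << 6 {
        buf.put_u8(value as u8);
    } else if value < 1 << 14 {
        buf.put_u8(0b0100_0000 | (value >> 8) as u8);
        buf.put_u8(value as u8);
    } else if value < 1 << 30 {
        buf.put_u8(0b1000_0000 | (value >> 24) as u8);
        for &shift in &[16, 8, 0] {
            buf.put_u8((value >> shift) as u8);
        }
    } else {
        assert!(value < 1 << 62, "value too large for a varint");
        buf.put_u8(0b1100_0000 | (value >> 56) as u8);
        for &shift in &[48, 40, 32, 24, 16, 8, 0] {
            buf.put_u8((value >> shift) as u8);
        }
    }
}

/// Decode a QUIC variable-length integer from the front of the buffer.
fn get_varint<B: Buf>(buf: &mut B) -> u64 {
    let first = buf.get_u8();
    let len = 1usize << (first >> 6); // 1, 2, 4 or 8 bytes in total
    let mut value = u64::from(first & 0b0011_1111);
    for _ in 1..len {
        value = (value << 8) | u64::from(buf.get_u8());
    }
    value
}

fn main() {
    for &n in &[37u64, 15_293, 494_878_333, 151_288_809_941_952_652] {
        let mut encoded = Vec::new();
        put_varint(&mut encoded, n);
        let mut slice = &encoded[..];
        assert_eq!(get_varint(&mut slice), n);
        println!("{} -> {} byte(s)", n, encoded.len());
    }
}
```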
So, on to the last section: the goals for my implementation. For the past two months I've been working on my own implementation of QUIC. It's called Quinn — it's named after a character from a science fiction book, which is something I've done with other projects as well. The goals for this implementation were, first, to implement the specification faithfully, so we can fill up that interoperability matrix, which I'll just quickly show you so you have an idea of what it looks like. Second, to have clear abstractions and concepts, so that there will be a good API. I think it was said earlier that libraries with bad documentation aren't really that useful; currently there is no documentation yet, so in that sense this is not a very useful library, but I'm trying to score some points on interoperability before I get to documentation. It's fully futures-based — there's no manual polling going on — and I like to leverage the type system for all this stuff: since we have this good type system, we might as well make use of it. So there are no booleans; it's all enums, so there's one that says the direction for this stream is uni- or bidirectional, instead of some kind of boolean that says false or true. I was really satisfied with my 85% test coverage until Tyler came along and spoiled all my hopes for that, so now I'll have to do a bunch more work to make it better.

This is a nice technique, I think, for figuring out whether the design of your code is good: try to figure out if there's a nice layering in it. On top there's the client and the server; those should be pretty straightforward. What I've done is keep a connection state object separate, because unlike with TCP — where your server is typically accepting connections and gets a different socket for each connection — in QUIC there's just the one UDP socket, and you get everything out of it. So for a server there's just one socket, but you have to manage a bunch of connections. I've chosen to keep this connection state object separate and keep the sockets in the client or server instead, and that's actually also nice for testing, because you can do all your protocol interactions without having to deal with the network stuff. Then there's the parameters module, which deals with the transport parameters; there's the streams module, which has the stream objects; TLS is fully abstracted, in the sense that the rest of the implementation doesn't know about rustls but just deals with what's in that module; and the rest is basically about low-level encoding and decoding of types to bytes.

Current status of the implementation: I'm targeting draft 11 with TLS 1.3 draft 28, as most implementations are doing right now. Draft 12 was released this week, so people will move on soon, but I think there's another interoperability event first, and for that event people will mostly stick to draft 11. The Quinn client can handshake with other servers pretty well; other clients against the Quinn server, not as well, because I made some simplifying assumptions that turned out to be not so interoperable. Most implementations don't have much in the way of actual QUIC HTTP yet, so they're just throwing HTTP/1 over one or more of the streams in order to be able to test the lower-level bits of the protocol. I hope to keep this implementation moving forward; that depends a bit on my free time. It is a big job, because there's a lot of stuff to deal with — which you might expect when you're dealing with this whole thing that transfers bytes across the internet, and it's not just doing the job of TCP but is also sort of a reimplementation of HTTP/2. It feels like fractal complexity: everywhere you go into detail, there's more to deal with. But I've just been taking it from the start of the connection, and that seems a workable approach.
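[Illustration, not from the talk: a minimal sketch of that layering idea, with hypothetical names. One endpoint object routes incoming datagrams to per-connection state objects that never touch the network themselves, which is what makes the protocol logic testable entirely in memory. This is a simplification, not Quinn's actual API.]

```rust
use std::collections::HashMap;

/// Connection IDs let one UDP socket serve many connections.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct ConnectionId(u64);

/// Pure protocol state: it consumes datagrams and produces datagrams to
/// send, but never performs I/O itself, so it can be driven from tests.
#[derive(Default)]
struct Connection {
    received: usize,
}

impl Connection {
    fn handle_datagram(&mut self, payload: &[u8]) -> Vec<Vec<u8>> {
        self.received += payload.len();
        // A real implementation would parse packets, update streams, and
        // queue ACKs here; we just produce something to send back.
        vec![b"ack".to_vec()]
    }
}

/// The client or server owns the actual socket; here we only model the
/// routing of datagrams to per-connection state.
#[derive(Default)]
struct Endpoint {
    connections: HashMap<ConnectionId, Connection>,
}

impl Endpoint {
    fn handle_incoming(&mut self, cid: ConnectionId, payload: &[u8]) -> Vec<Vec<u8>> {
        let conn = self.connections.entry(cid).or_default();
        conn.handle_datagram(payload)
    }
}

fn main() {
    let mut endpoint = Endpoint::default();
    // Two "connections" arriving over the same (imaginary) UDP socket.
    let out_a = endpoint.handle_incoming(ConnectionId(1), b"client hello A");
    let out_b = endpoint.handle_incoming(ConnectionId(2), b"client hello B");
    assert_eq!(out_a.len(), 1);
    assert_eq!(out_b.len(), 1);
    println!("tracking {} connections", endpoint.connections.len());
}
```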
There's one other Rust implementation, by Benjamin Saunders. I had not found it when I started my implementation, or I might have reconsidered doing so. He uses OpenSSL instead of rustls, and I think he spent way more time hacking OpenSSL — or the Rust OpenSSL bindings — to support the extra stuff than I spent on fixing up rustls, so at some point I want to consider whether it makes sense to share more code. He has a sort of networking-less core and then a small Tokio layer on top of it, so it can also do Tokio, and his implementation is currently more mature: it handles a bunch more stuff in that interoperability matrix. So if you are looking for a fun project to contribute to, then consider this one. I've put some issues in the repository, and even if you don't have much experience with Rust or networking protocols, I'm happy to help you get stuff done. I'll be here at the conference for the rest of the weekend and also for the impl days. Or if you want to contribute some funding to be able to use this in a real setting, then I'd also be very interested in talking to you. That's all I have, thanks.
Info
Channel: Rust
Views: 3,642
Keywords: RustFest 2018, Paris, Day 1, Auditorium, rustfest18 ov, rustfest18 eng, Dirkjan Ochtman, rustfest18
Id: EHgyY5DNdvI
Length: 30min 52sec (1852 seconds)
Published: Mon May 28 2018