OAuth Happy Hour - PKCE vs Nonce, "none" JWT method, Live Q&A

Captions
Hello everybody, and welcome! I hope you can hear me okay. Micah, you there? Yep! Excellent. Thanks for joining the stream, everybody. Go ahead and type in chat, let us know where you're joining from, just say hi, and we'll kick things off.

So this is the OAuth Happy Hour. I'm Aaron Parecki, and we've got Micah Silverman joining as well from the other coast, and we're going to talk about all sorts of OAuth stuff today. I think we've got a fun little agenda planned, so if you've got questions, drop them into the chat; we'll keep an eye on it and hopefully get a chance to answer all of them. We've got a bunch of other stuff to talk about as well.

Let's start with what happened this week in the OAuth world. This has been ongoing for the last several weeks: the IETF meetings, which were postponed after the original meeting that was supposed to be in Vancouver, got turned into virtual meetings spread out over the course of about six weeks. This week was, I believe, the second-to-last one on the agenda, and it's always a very packed hour of conversation. Micah, you didn't tune in to this week's? I did not, I missed this week; I was swamped. Yeah, they're only an hour long, and there's a set agenda for all of them. I'm pulling up the agenda for this last one right now.

Two specs were talked about this week. One is called DPoP, or Demonstration of Proof-of-Possession, and the other is called Incremental Authorization. Those are two new documents working their way through the process. The idea with DPoP is that it's a form of proof of possession, meaning that just having an access token shouldn't actually be enough to be able to go and use it. This is one of the criticisms a lot of people have of OAuth 2 (although apparently it's been fine for the last ten years and hasn't been that big a deal): if someone can steal an access token, it's just too easy to use it. Whereas in OAuth 1, or other schemes where you have to sign requests, it's a lot harder to extract tokens out of applications and actually use them.

There's a handful of different approaches to this, none of them very widely adopted. One of them that is an RFC is called Mutual TLS, and the idea there is that, just like your client validates the server's certificate, the server also validates your client certificate. That happens at the TLS layer rather than in the data you're sending; it happens when the connection is made. Once that's established, the OAuth server knows which client is making the request, and then you can do things like bind access tokens to that certificate, so nobody can use the access token if it's stolen. The problem is that that doesn't really work in browsers. So DPoP is a way to do that same concept of proof of possession but have it work in browsers, and it works by doing it at the application layer: it actually adds parameters to your HTTP request, whereas with Mutual TLS you don't need to do that.
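To make that concrete, here's a minimal sketch of what a DPoP proof can look like under the draft, assuming the `jose` npm library and hypothetical endpoint URLs; treat it as illustrative, not the final wire format, since the document is still in progress:

```javascript
// Minimal DPoP proof sketch (Node.js, "jose" npm library; URLs are hypothetical).
import { generateKeyPair, exportJWK, SignJWT } from 'jose';
import { randomUUID } from 'node:crypto';

// The client holds a keypair; the public half is embedded in each proof.
const { publicKey, privateKey } = await generateKeyPair('ES256');
const jwk = await exportJWK(publicKey);

// The proof is a signed JWT describing the exact HTTP request it accompanies,
// so a token stolen in transit can't simply be replayed somewhere else.
const proof = await new SignJWT({
  htm: 'POST',                              // HTTP method of the request
  htu: 'https://auth.example.com/token',    // URL of the request
  jti: randomUUID(),                        // unique ID so proofs can't be replayed
})
  .setProtectedHeader({ typ: 'dpop+jwt', alg: 'ES256', jwk })
  .setIssuedAt()
  .sign(privateKey);

// It rides along as an application-layer header, not inside TLS:
await fetch('https://auth.example.com/token', {
  method: 'POST',
  headers: { DPoP: proof, 'Content-Type': 'application/x-www-form-urlencoded' },
  body: new URLSearchParams({ grant_type: 'authorization_code' /* ... */ }),
});
```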
Now, I was poking around the specs this week, and am I misremembering it, or does it seem like there's more than one DPoP spec floating around out there? That's not the case; I think there's only one thing called DPoP, but there are several different things that solve this same problem. Okay, I might have just found different ways to get to the same place. Very likely, yeah. The IETF website is a little bit confusing if you're not used to it, in terms of how to navigate and how to find the documents.

The other spec we talked about is Incremental Authorization. The idea there is based on best practices that were pioneered, and are now actually enforced, by Google. Just because your application might do something like send email through someone's account, access their contacts, and also read their YouTube videos, you shouldn't ask for all those permissions up front. If you click "log in to this app with Google" and then see a whole host of things it's asking permission for, that's not very friendly, and you're very likely to either click away and leave, or agree to it because you don't have any other choice, and then the app has a lot of permissions it doesn't need. The idea with Incremental Authorization is that apps should only request permissions as they need them, and only once an app actually needs additional permissions should it do another OAuth round trip to get a new access token. The spec covers that concept, but also ways to actually negotiate that kind of request and communicate that information between the client and the server.

Can you summarize what that looks like in practice? Say I go to the front door, I accept a very limited scope, and now the app wants to increment that access. What would that interaction look like? Sure. You've kind of seen this already with checkout flows, where you're logged into Amazon, you're browsing around adding stuff to your cart, and then when you actually go to pay, you'll sometimes get re-prompted for your password. Right. It's that same idea. Actually, the streaming platform we're using right now is a good example: it's a web application that pushes to YouTube, which means it could use Google to log in. The first time you come to the website, it's just trying to verify who you are, so all it's doing is requesting your basic profile info. Later it needs to be able to upload to YouTube, but it doesn't need that permission when you start. So the idea is you click log in, it does the OAuth round trip, figures out who you are, and you're logged in. You can stage things and try things out, and then when you're ready to go live, it says: oh, we don't have an access token to use with the YouTube API, so please click here, and it does that OAuth round trip again, gets a better access token, and comes back. So you'd see the consent screen again? Yeah, you'd see the consent screen again, except, I believe the idea is, it shows you only the new scopes being requested. You'd just see that this application you've already logged into is now asking for permission to upload to your YouTube account.
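As a rough sketch of those two round trips, with hypothetical endpoints, client values, and scope names; the `include_granted_scopes` parameter is how the draft (and Google's existing implementation) signals that previously granted scopes should be folded into the new grant:

```javascript
// First visit: ask only for identity.
const loginUrl = 'https://auth.example.com/authorize?' + new URLSearchParams({
  response_type: 'code',
  client_id: 'my_client_id',
  redirect_uri: 'https://app.example.com/callback',
  scope: 'openid profile',
  state: 'xyz123',
});

// Later, when the user is ready to go live: ask only for the new permission,
// and tell the server to combine it with what was already granted.
const upgradeUrl = 'https://auth.example.com/authorize?' + new URLSearchParams({
  response_type: 'code',
  client_id: 'my_client_id',
  redirect_uri: 'https://app.example.com/callback',
  scope: 'youtube.upload',            // illustrative scope name only
  include_granted_scopes: 'true',
  state: 'xyz456',
});
```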
Now, is this in the context of future OAuth, or part of OAuth 2.1? Because I didn't think OAuth 2.1 was adding any new stuff. Yeah, this is just an extension to OAuth 2, building off the existing framework. There are no plans to bring this particular concept into OAuth 2.1 at its core, but of course it would extend OAuth 2.1 just as well, just as it extends OAuth 2.0 right now. Right, so 2.1 isn't limited in any way in terms of being able to use other extensions; it's just consolidating the extensions already in use for OAuth 2 and getting rid of the stuff that really should be deprecated. Is that a good summary? That's a good summary, yeah. OAuth 2.1 is taking, I would say, the most common, most understood, most implemented bits of OAuth 2 and repackaging them: taking out the stuff we all understand is not good anymore, pulling out just the good parts of OAuth 2, and giving it a new name, so that people don't have to read through twelve different specs, which build on top of each other in complicated ways, to understand what we actually mean when we say "OAuth" right now. Makes sense, yep.

So those two things were the focus of this week's call. Yeah, those were the focus. It was a very rushed call, because an hour is not a lot of time to get through all of this, but some progress was made, and the authors of those specs now have things to work on and go back and update. Next week is the last one on the list, and it's going to be an interesting one, because it's two kind of new concepts, but we'll save those for next week once we do the recap.

Cool. So that was the IETF call, and there have been some pretty lively discussions on the mailing list as well, and there's actually a question here about that: "I see a lot of discussions in the mailing lists about PKCE and the OpenID Connect nonce. What are your recommendations? P.S. We're using both of them." So yeah, this has been an interesting conversation if you've followed the discussions on the list; the thread kind of blew up as of yesterday or the day before. There are two parts to the discussion, but it was primarily focused on what gets included in 2.1. The question posed was: should OAuth 2.1 just flat-out require PKCE for all authorization code flows, or is there a good reason you would want to not use PKCE? The conversation is mostly around this: if we say OAuth 2.1 requires PKCE, if that's just part of OAuth 2.1, then there are some existing systems that technically don't need PKCE, because they're already solving the things PKCE solves some other way, and those would now either not be OAuth 2.1 compliant or would have to change to add PKCE in order to be OAuth 2.1 compliant. So that's the concern.

If we look at what PKCE does, it's solving two different problems, and that's where this starts to get really confusing. PKCE does two things. One we know from the world of native apps and browser-based apps, where without it someone could just steal the authorization code out of a redirect and then use it to get an access token instead of the real app getting the access token. With no client secret, there's no other way to prevent that. This was one of the reasons for making PKCE in the first place; it was used primarily just for native apps, and then the Security Best Current Practice expanded it to say
that all apps should do it as well. However, there's something else it does, and this one is a little more subtle. That first problem is about the OAuth server wanting to make sure that when it hands the client an authorization code, someone can't come along, steal the authorization code, and use it themselves. PKCE solves that. The separate problem is that when the client receives an authorization code, it doesn't actually know yet whether that authorization code was intended for it or whether it was injected from somewhere else. This applies to both confidential clients and public clients, and there's a whole section in the Security Best Current Practice about this attack; it's called the authorization code injection attack. The thing about this one is that it's an attack on confidential clients too. The idea is that without PKCE, a confidential client could be tricked into using an attacker's authorization code instead of the real authorization code, and the client secret doesn't help, because it's the attacker's own authorization code that the attacker really did get from the OAuth server. It's not a fake one, and if you're attacking the real app, the client secret is the same for both the real authorization code and the injected one, so the client secret doesn't fix anything. This would be a way to get logged in as somebody else: if you could steal someone's authorization code and drop it into the real app, you'd be logged in as them, and the other way around as well, where you could get an authorization code and hand it to your victim, and then they would be logged in as you. So that's bad.

There are two ways to solve it. One is using PKCE, because PKCE prevents the client from being tricked like this: PKCE requires the client to hold state, and that state can't be shared across different instances of the browser or the device or whatever. The other way to solve it is using the OpenID Connect nonce, and the way that solves it is, again, that the nonce parameter forces the client to keep state. The client generates the nonce parameter, gets back an ID token, and the nonce is included in the ID token, which the client can then check. It knows, after it's already done the exchange for the access token, that it should stop processing, because this access token is somebody else's; it's not the one from this session. Now, it wouldn't actually know that until after it got the access token, which is maybe not great, but this attack is about the client making sure it's behaving properly itself; it's the client protecting itself. If it's behaving, it's not going to use an access token it knows wasn't intended for it. So that's why that solves it.

Okay, so this is the PKCE versus nonce thing. The trick is that the nonce solves only that problem. The nonce does not solve the problem of authorization code extraction for public clients. For public clients you have to do PKCE; there's no other way to fix it, otherwise someone can just steal the authorization code out of the redirect, because without a client secret there's nothing else stopping them.
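Here's a minimal sketch of PKCE in a browser-based app, using the Web Crypto API and hypothetical endpoints and client values; notice how the verifier is the state the client holds on to, which is what defeats both theft and injection:

```javascript
// PKCE sketch for a browser app (run as a module; URLs/IDs are hypothetical).
function base64url(bytes) {
  return btoa(String.fromCharCode(...new Uint8Array(bytes)))
    .replace(/\+/g, '-').replace(/\//g, '_').replace(/=+$/, '');
}

// 1. Make up a secret and hash it (the S256 challenge method).
const verifier = base64url(crypto.getRandomValues(new Uint8Array(32)));
const digest = await crypto.subtle.digest('SHA-256',
  new TextEncoder().encode(verifier));
const challenge = base64url(digest);

// 2. Only the hash travels in the front channel. The verifier stays here,
// so a stolen code is useless, and an injected code won't match our state.
sessionStorage.setItem('pkce_verifier', verifier);
location.href = 'https://auth.example.com/authorize?' + new URLSearchParams({
  response_type: 'code',
  client_id: 'my_client_id',
  redirect_uri: 'https://app.example.com/callback',
  scope: 'photos',
  code_challenge: challenge,
  code_challenge_method: 'S256',
});

// 3. Back at the redirect_uri: exchange the code, revealing the verifier
// only on the back channel.
const code = new URLSearchParams(location.search).get('code');
const tokens = await fetch('https://auth.example.com/token', {
  method: 'POST',
  headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
  body: new URLSearchParams({
    grant_type: 'authorization_code',
    code,
    redirect_uri: 'https://app.example.com/callback',
    client_id: 'my_client_id',
    code_verifier: sessionStorage.getItem('pkce_verifier'),
  }),
}).then(r => r.json());
```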
So for public clients, you have to do PKCE; the nonce doesn't fix that. For confidential clients, you don't need PKCE to solve authorization code extraction, because the client secret already covers it. PKCE does solve authorization code injection, but so does the nonce parameter. So there's a grid of client types here; you could make a quadrant for this. If you're a confidential client and you're getting an ID token and checking the nonce parameter, then PKCE doesn't solve anything that isn't already being solved. But if you're a confidential client and you're only getting an access token, now you have to do PKCE again, otherwise you're vulnerable to this problem, because the nonce check lives in the ID token.

I know we have a lot of questions, but I have a question which may be controversial: don't we kind of have to pretend that OpenID Connect doesn't exist, in the context of OAuth? What's best for OAuth? I haven't even caught up on the email thread yet, but why even mix in this concept from OpenID Connect, especially when we have a solution that covers all these bases, even if there's technically a duplication in the one case where you're using both PKCE and OpenID Connect? That is a great summary of this. There are two parts to it. One: OAuth has to exist without OpenID Connect, because it is a separate thing. OpenID Connect is built on top of OAuth, so OAuth needs to be secure without adding in anything from OpenID Connect, so that it's secure by itself. There are also plenty of use cases that require OAuth but don't require OpenID Connect, so we need to make sure those are secure. Separate from that is this other problem: there are a lot of deployed systems in the world that are secure because they have OpenID Connect, aren't using PKCE, and are confidential clients. If the spec changes and says PKCE is now part of OAuth, those existing implementations are no longer following the spec, and they either have to live with not being OAuth 2.1 compliant or they have to change. Changing code is hard, pushing out updates to all these libraries is hard, and there's very little benefit those systems would get from adding PKCE, because they're already solving the problems themselves. But yes, absolutely, we do need to make sure that OAuth is secure by itself, and it does exist without OpenID Connect; it's just that sometimes people use them both together, and that's fine, and sometimes they don't.

So yeah, this is the argument right now. I know myself and others are firmly of the opinion that with OAuth 2.1, while the main goal is consolidating documents, one of the other goals is that it should push things forward and help make things more interoperable and more secure by default, and one of the ways to do that is to just put PKCE into it, so that everybody builds it that way from the beginning. Then you don't have to wonder, when you pick up a library, whether it actually supports PKCE or not, because without that requirement, a library could be OAuth 2.1 compliant even if it didn't support PKCE.
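For comparison, here's a minimal sketch of the OpenID Connect nonce check described above, again with hypothetical names; note that, as mentioned, a mismatch is only detectable after the token exchange has already happened:

```javascript
// Before redirecting to /authorize: invent a nonce and remember it.
const nonce = crypto.randomUUID();
sessionStorage.setItem('oidc_nonce', nonce);
// ...the nonce is sent as &nonce=... on the authorization request.

// After exchanging the code for tokens: the nonce comes back inside the ID token.
function checkNonce(idToken) {
  // Decoding only, for illustration; a real client verifies the signature first.
  let payload = idToken.split('.')[1].replace(/-/g, '+').replace(/_/g, '/');
  payload += '='.repeat((4 - (payload.length % 4)) % 4);
  const claims = JSON.parse(atob(payload));
  if (claims.nonce !== sessionStorage.getItem('oidc_nonce')) {
    // These tokens were issued for some other session: stop and discard them.
    throw new Error('nonce mismatch: possible authorization code injection');
  }
}
```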
That's kind of the interoperability story there, I think. If you're going through a list of servers and you find one labeled OAuth 2.1, it would be nice to know exactly what that means, and one of the things it would mean is that it supports PKCE, so you know you can use it with just OAuth by itself and it's secure. Right. So I guess I can see the concern from the other camp being: if PKCE isn't a requirement, then we can say we're OAuth 2.1 compliant today, because we're using OIDC and we're a confidential client, and there's nothing we have to change to be 2.1 compliant. If PKCE is a requirement, then we either have to live with 2.0 or make some changes to say we're 2.1 compliant, and those changes don't really buy us a lot with our current configuration, because we're using OIDC. I can kind of see that argument, but I'm still on the side of: part of the motivation for 2.1 is to wrangle in all these disparate extensions, to say we're not adding anything new, but this is OAuth, the good parts, and PKCE is one of the good parts. Yeah, exactly. So this is an ongoing discussion right now. I actually owe the list an email, which I'm going to write after this, with some of these additional points. It'll be interesting; it's always a challenge doing these things on the list and talking to people that way, so we'll see how it goes.

So you've had some experience on the list now, right? Yeah, and it's interesting. I'm trying to be able to speak to complete noobs who are interested in getting involved, and I fall into that category; maybe I'm a millimeter past complete noob. I definitely encountered some challenges at the intersection of the way I learn and the way things work with the whole IETF process. As an example: I joined the mailing list, I'm getting the digest, I can read it, I can catch up, that's cool. But I sent an email to the list, or so I thought, and as far as I can tell it got lost. It never made it to the list, I can't see it in the archive, and I don't know if anybody ever received it. I have to assume it's something I did, but I thought I was sending it the right way; I just sent an email to, I think it was, oauth@ietf.org. Okay, so then my question is: why can I not find it in the mail archive? I searched for my name, I searched for the subject. That is an interesting question. Probably the answer is that search is hard and Google's the only one that does it well. Okay, but when I go to the mail archive and look at the digest, I see the emails and responses and everything; this PKCE thread is a good example. I was just trying to find the thread, if any, for this email that I sent. Even if nobody ever responded to it, shouldn't I be able to see it somewhere in a digest or something? Yeah, you definitely should. Why don't we do this as an exercise: let me go ahead and share my screen here. Okay, this is the email list archive. These are the latest messages, just came in now; you can see the 2.1 discussion, and the search is right up here, but I would say it's not great. The other way to browse this is by thread, which shows you the nesting. But if we go back and try
to find your thread: remember what date that was? Yeah, it was April 27th. Should be, let's see, here. Yeah, there it is. All right, so just scrolling around found it. I don't know why search didn't turn it up, but interesting. That's the one thing I basically didn't try, just scrolling around; I tried many various searches, advanced searches, and I just couldn't quite get there. But at least I can see that it is actually there, so that's good. Yep. So: success, I sent an email to the list.

Great, okay, so let's talk about this question really quick, from Andreas: "What's the reason the specs allow unsafe configs, like 'none' for the PKCE challenge method, or the 'none' algorithm in JSON Web Tokens? It'd be better if the specs followed a secure-by-default approach instead." So yeah, these are two separate uses of the word "none," which is interesting, so let's talk about them separately.

For JSON Web Tokens: it's possible to have a JSON Web Token that is signed with the signature algorithm called "none," which means it has no signature. That's not really a great thing to do for an access token, for example, because then anybody can just make one up; you want a signature on the JSON Web Token so you can know where it was signed. The reason the "none" option exists is that there are many uses of JSON Web Tokens, and some of them don't need a signature; it's just a packaging-up of data. For example, if you have a way to authenticate the request through a mechanism other than the JSON Web Token signature, you don't really need to sign the token. Mutual TLS would be an example of that: if a client sends a token over Mutual TLS, that TLS connection is already authenticated, so there's no need for an additional authentication layer, which would be the signature. That's the theory, anyway. I agree that in practice it's not very helpful, because most of the time people are using these tokens in situations where the signature is the only authentication mechanism, and it would be nice if the default was that this wasn't allowed. But that's why the "none" option is there.

For PKCE, it's actually not called "none," it's called "plain." The way PKCE works with the hash method is that the client makes up a random value, hashes it, and sends the hash in the URL. Done that way, it's really nice, because the actual secret the client makes up never leaves the device until it's ready to go get the access token. That's the most secure way to do it: it prevents authorization codes from being useful if they're stolen, and it prevents the authorization code injection attack. If the challenge method is "plain," it basically means you generate a secret and then include that secret itself in the request. The problem there is that if an attacker can watch your address bar, a browser extension for example, the attacker could grab that PKCE value and use it in a request with a stolen authorization code. So PKCE with the plain method doesn't solve the problem of stolen authorization codes. However, I believe it does still solve authorization code injection, because you can't trick a client into replacing a PKCE value it made up itself. So again, it's only solving half the problem.
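A quick sketch of the difference between the two challenge methods, assuming the same kind of hypothetical browser client as before:

```javascript
const verifier = crypto.randomUUID() + crypto.randomUUID();  // any high-entropy secret

// plain: the challenge IS the verifier, visible in the address bar, so anything
// that can read the URL (an extension, a proxy log) can replay a stolen code.
const plainParams = { code_challenge: verifier, code_challenge_method: 'plain' };

// S256: only the hash travels in the front channel; the verifier itself stays
// on the device until the back-channel token request.
const digest = await crypto.subtle.digest('SHA-256',
  new TextEncoder().encode(verifier));
const challenge = btoa(String.fromCharCode(...new Uint8Array(digest)))
  .replace(/\+/g, '-').replace(/\//g, '_').replace(/=+$/, '');
const s256Params = { code_challenge: challenge, code_challenge_method: 'S256' };
```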
The reason plain was in there in the first place is that it's still valuable even if it isn't solving everything, and there are some clients that aren't able to calculate hashes. Think about constrained devices, or even old browsers that don't have crypto libraries in them; it can be too much of a challenge to actually calculate the SHA-256 hash. So that escape hatch was given so they could still get some of the benefits of PKCE, even if not all of them.

One other thing I want to add about the JWT business: there have been a couple of breaches or problems with "alg: none" along the way for JWT, and most of them have turned out to be a problem with the service or the parser. So the fact that it exists is problematic, but then also: if you have a JWT that has alg none but still carries a signature, that's a bad JWT. Also, if your service disallows alg none, that check should be case-insensitive; there was a recent issue where somebody got past such a check because they messed around with the casing of the word "none." So there's definitely an argument that alg none is empirically problematic, but most of the actual problems that have occurred because of it have been related to problems with libraries, or services not handling the parsing properly. With Okta, or other services, if Okta encounters a token that has alg none, it just rejects it; we don't allow that in our use of JWTs as access tokens. So yeah, definitely empirically problematic, but also implementation problems along the way. And I think it's one of the reasons that something like PASETO just gets rid of those problematic configurations; there is no "none" in PASETO. Yeah, exactly.

And I think one of the things people often forget about JSON Web Tokens is that the header information is the same as unsanitized form data in a web page. You don't go around writing web apps that just believe whatever a user types into a form; you validate it on the server. You're not going to let the user choose their role from a drop-down list and then process the request as if they were that role; you're going to enforce that on the server side. It's the same thing with the JSON Web Token header. Because the header determines how you validate the token, the header is untrusted data until you've validated it. You can't just accept anything in the header; you have to have a very small list of validation methods you want to support, and in cases like using JWTs as access tokens, you would never have a use for the none algorithm. So don't ever let your code get into a situation where it can accept a none algorithm, or any other algorithm that isn't the one you're using. It's the same with the other well-known problem, which is switching to a symmetric signing algorithm. Normally access tokens are signed using asymmetric keys, because the server that's going to validate the tokens will use the public key of the OAuth server to validate them. But if you switch the header to say it's a symmetric signing algorithm, you can trick the server into fetching the public key and then using it as the shared secret, bypassing validation again.
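In code, "a very small list of validation methods" usually means pinning the algorithm at verification time. A sketch using the `jose` npm library with hypothetical issuer and audience URLs; the allowlist means a header claiming `none`, `NonE`, or `HS256` is rejected before anything else happens:

```javascript
import { jwtVerify, createRemoteJWKSet } from 'jose';

// Keys are fetched from the authorization server's published JWK Set.
const jwks = createRemoteJWKSet(
  new URL('https://auth.example.com/.well-known/jwks.json'));

async function validateAccessToken(token) {
  // "algorithms" is an allowlist: the header cannot talk us into any other
  // validation method, which closes both the alg-none and the
  // asymmetric-to-symmetric confusion attacks.
  const { payload } = await jwtVerify(token, jwks, {
    issuer: 'https://auth.example.com',
    audience: 'https://api.example.com',
    algorithms: ['RS256'],
  });
  return payload;  // claims are only readable after verification succeeds
}
```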
Because the header is untrusted, unsanitized input, you can't believe anything it says. I think this was one of the design flaws of JSON Web Tokens, which a lot of these newer token formats and libraries are trying to solve: you probably shouldn't have had that in the header to begin with, so people couldn't even get into that situation in the first place. Yeah, it's kind of a red flag when the quality of your library feeds into whether or not you're going to be safe using a particular standard. I know the JJWT library, the Java JWT library, doesn't allow you access to the claims until you've verified the signature, and the only valid use of the header before verification is to understand which algorithm was used. At the end of the day it's just a bunch of base64-encoded segments, so you could manually work around the library and decode the body section yourself, but the library forces the proper order of operations: you can't even look at the claims, if you're using the library properly, until you've validated the signature. But it requires you to use the library properly. As developers we'd call that a code smell, but in this case it's like a specification smell: the hoops you have to jump through to get the good stuff out of the specification depend on a proper library implementation. It kind of should be the other way around; the specification should guard against doing weird things with it. Yeah, definitely, and I think that's another thing OAuth 2.1 is trying to do: removing the parts of OAuth 2 that let you do things badly.

Which actually brings up another question related to this: "Isn't the PKCE/nonce discussion the same as the implicit flow situation? If the implicit flow is not part of OAuth 2.1, then all clients using it wouldn't be compliant." So, the implicit flow is not in OAuth 2.1 because it is also being taken out of OAuth 2 by the Security Best Current Practice. The Security BCP says the implicit flow is not allowed. Well, it doesn't literally say that, because "the implicit flow" isn't really a thing by that name; it's the response type "token" that is the implicit flow, and the BCP says the response type token is not allowed, or rather that tokens can't be issued from the authorization endpoint, and the result is that you can't do the implicit flow. It's a little different, though, because if a client or server is doing the implicit flow right now, then yeah, that's not really OAuth 2.1, but OAuth 2.1 is still a framework, a place to build off of. You can still add things in and have them be part of an OAuth 2.1 deployment if you're building on top of it, the way we build all these other extensions on top of OAuth 2 today. So you can't really do the implicit flow and be 2.1, but you can still extend it in other ways. I think the main difference between the implicit flow and the PKCE-versus-nonce discussion is that nobody wants the implicit flow to exist. Everybody's on the same page; everyone's happy to just have it be gone. Whereas with PKCE there is this one use case, and one very strong voice, saying we need to make sure the nonce remains an acceptable way to be considered OAuth 2.1, for all the reasons we mentioned before.

Now, this is a good tie-in
to a more general process question about standards and the IETF: when there's something like a BCP, or there's errata to a spec, does that mean it's now, for lack of a better word, the law of the land for that spec? With the BCP as it relates to the implicit flow, does it now mean you're not in compliance with the OAuth 2 spec if you're still using the implicit flow, or is it not quite that strong? I don't think it's quite that strong, because "OAuth 2" also isn't really a single thing. RFC 6749 is the OAuth 2 framework, RFC 6750 is Bearer Tokens, and then there's a handful of other RFCs and drafts under the same working group, the same umbrella. So when someone says "OAuth 2.0," it turns out that might mean any combination of those documents. There isn't really "OAuth 2 compliance"; there is compliance with specific RFCs, and none of them are required. If you want to take OAuth 2 as the foundation and ignore some extensions, you can; that's perfectly fine, and you may not have any use for those extensions. For example, you don't need JSON Web Token access tokens if you're just building a single-server authorization server and resource server with an API, so you can completely ignore that whole extension and you're still doing OAuth 2. With the BCPs, if you want to just ignore one, you can; it just means you're not compliant with that BCP. You're still RFC 6749 compliant; you just haven't picked all of the building blocks off of that stack.

This is actually another one of the reasons I want to do OAuth 2.1: it would be a name that means more than "OAuth 2.0" does now, because OAuth 2.0 has become such a big collection of so many different things. For the most part we all agree on what those things are, with these little exceptions we're working out right now, so I want to consolidate that and give it a name. When someone says 2.1, we know what that means, and then we keep building off of it from there. We're still going to have extensions, and that's good; it has to work that way. But there's roughly ten years of legacy stuff where everyone pretty much agrees what the good and bad parts are, so we're trying to smash that together. Right. So if somebody says "I'm using OAuth 2, I'm using the implicit flow, that's part of OAuth 2," the best I could respond with is: well, you're not observing the best current practices for OAuth 2. Whereas once OAuth 2.1 follows through the whole process and becomes its own RFC, if somebody said "I'm OAuth 2.1 and I'm using the implicit flow," I could say: that's not part of OAuth 2.1, so you can't say you're 2.1. Exactly, exactly.

That makes sense. There's a question: is there a new version out right now? We're talking about OAuth 2.1, and I should mention that right now it's not done. It's not really a thing yet; it's not even officially adopted by the working group. It's just an individual draft that myself and a few others have written, and we're talking about it in the group because we're interested in bringing it into the group, but it is not yet
an official document. And this is another good tie-in for the process: what stages does it go through? At what point would it be adopted by the working group? At what point is it its own RFC? Is "ratified" the right word, as in "OAuth 2.1 has been ratified," or is there some other word used internally to say this is an official spec now? I don't remember what the right word is, but the process is: someone has an idea, and they create a draft using the IETF tools, in that format you see all the time with the fixed-width font. It's an individual draft at that point, which means it has no standing at all. You see a lot of these documents thrown around as "oh, it's a standard," and no, it's not; it's just on ietf.org as an individual draft, and it doesn't actually mean anything yet. Anybody can write one; I think someone wrote one as a joke for April Fools', partly making the point that these don't mean anything by themselves.

So the idea is that somebody does that, usually with a group of people, and they work on it back and forth for a while, often publishing several revisions as an individual draft, and then they introduce it to the group. That's usually done during these meetings, though it has also just happened on the list, or during virtual meetings, and the question is: does the rest of the group have interest in this document? If people do, there's a call for adoption, which is a couple of weeks long. It's a thread on the mailing list, basically to gather an idea of how much support a document has, so during those calls for adoption it's very important for a lot of people to say "yes, I want this to be adopted." Once it's adopted, it gets renamed: right now, if you look at it, the 2.1 draft is draft-parecki-oauth-v2-1 or whatever, and it would get renamed to draft-ietf-oauth-v2-1, because then it's an actual official document brought into the group, a working group document. At that point the versioning starts over, the revision numbers reset, and you start doing more iterations, and that's the point where you have to make sure you're taking in all the feedback from the group. It's sort of assumed that the whole group has agreed on whatever is in the document from that point on; up until then, you can do whatever you want, because it's just your own draft. Right, so at this point it's really just whoever's interested in contributing to the draft, but if it advances, then it would really be the responsibility of the working group to take an interest and move it along. Yeah. And then eventually, after it's gone through several iterations and discussions and everyone's on the same page and agrees with everything in there, there'll be a last call, to actually finalize it. Then it gets bumped up through the IETF; everything so far happens within the working group, but after the last call it gets submitted into the processes of the IETF itself, where it gets looked at by people outside the working group. I haven't had one of mine go through that process yet, so I'm not super familiar with the details, but that's when you get people from other groups or other positions in the IETF looking at it and
making sure it's not conflicting with the names of other things, that it's formatted correctly, all of that, and then finally it gets its RFC number.

Cool. One tip for anybody who may be interested: something I found handy in trying to wrap my brain around this is datatracker.ietf.org. You can actually look at all the different phases OAuth 2 went through in this process, from the drafts to getting adopted by the working group, and you can see the historical flow, the historical record of the path it took to become its own RFC. I found that helpful.

Yeah, let me share this real quick so you can see it. This is the page for the OAuth 2 RFC, 6749. You can see when it started, which was April 2010, and it's gone through a whole bunch of revisions. What I thought was really interesting: notice how there was about a six-month period of no changes during draft 10. That was around the time I actually started getting involved. Because it was such a long time with no changes, a lot of the early APIs adopting OAuth 2 were reading draft 10, because it was the latest version of the document in 2010. A lot of APIs implemented that and then never changed anything afterward, so there are still a lot of services out there that are draft-10 compliant and never updated past that, because at that point they were already deployed. And then there were another 21 revisions past that. Some of the revisions are just typo fixes, minor things, but sometimes it's actually something like renaming a parameter. So then RFC 6749: the last date on it is October 2012, and you can see here the other reviews that were done; the last-call reviews are done by these different groups outside of the OAuth working group.

If you go to the OAuth working group's homepage, this is where you see all of the drafts currently in the works. "Active" means it's an adopted item in some stage of progress: this one is in last call, for example; this one is just in the works; this one is past last call and about to get its RFC number. And if we go down, there are the actual RFCs that are done, with their RFC numbers; some of them are Best Current Practices, some are Proposed Standards. And down here are related drafts: these are the individual drafts, not yet adopted, and you can see OAuth 2.1, which is still just an individual draft, and a couple of others as well. And I just realized that one of my other ones expired and I need to publish an update before Monday, because we're talking about it on Monday; it would show up in there too, but it has expired, so it's dropped from that list. So in this view, and this is actually one of the questions I had: "active" means it's been adopted, and "related" just means it's not yet, it's working its way through the process? Yeah, "related" means it's an individual draft that was tagged with this group, so it shows up here. Okay, got it.

And while we're on the topic, when
you go back to the previous screen you were on, it's got the "obsoletes" tag. That part is very clear to me: RFC 6749 obsoletes RFC 5849, which is OAuth 1, that makes sense. But then it says "updated by," and there's this later RFC from 2017 that speaks to OAuth 2 for native apps. What does "updated by" mean exactly? Does it mean that RFC 8252 is the source of truth for OAuth 2, or something else? I think because that one is a BCP and not its own standalone spec, the idea is that anything it says about the core is now considered more accurate. It's not a complete replacement, because it's a best current practice rather than a replacement RFC, but anything in there does update whatever the corresponding part of 6749 would be. Makes sense. So for the BCP that you co-authored for browser-based apps, is it possible it could also get to the point where it's listed as an "updated by"? I believe so, yeah. Let me go back to the working group list; the browser-based app one should be in here somewhere. Here it is. I believe this will then update OAuth 2 core the same way the native apps one did, so we should then see, on the 6749 page, "updated by" this document as well. Right. That's good to know, because I think people who start to get involved with this refer to 6749 as "that's OAuth 2," but it turns out it's been updated by some other things that are more modern and more relevant. And that's the problem OAuth 2.1 is trying to solve: if you start at 6749, that's great, but don't forget to read 6750 as well, and don't forget to read the rest of these and make sure you're following the ones that apply to you; some of them don't matter for you necessarily. You get into the weeds of RFCs pretty quick. So I'm just trying to make sure that someone coming at this for the first time doesn't have to follow that path, because it's a deep, deep rabbit hole.

We have a couple of minutes left, and there was a question I wanted to address that's near and dear to my heart. We have a lot of questions, and I really appreciate it, but: migrating from implicit to PKCE. It's important to do now, especially since it's the best current practice, and it's now a lot easier. The short and unsatisfying answer is that how you go about it depends on what service and what libraries you're using. The good news is that tactically it's not that bad: it's the authorization code flow without a fixed client secret, so you have to put together some additional query string parameters, but a good library will do a lot of that heavy lifting for you. Obviously I'm biased, I work for Okta, but Okta has its auth JavaScript library (okta-auth-js) where you literally just change the response type from "token" to "code," and internally it does all the heavy lifting: it generates the code verifier and the code challenge, and when it gets the code back, it automatically makes the additional call it needs to exchange the code for tokens, and you don't have to change another line of your code. Hopefully you're using a library that's similar, where the work is done for you and you just have to change some configuration, and you've magically switched from implicit to PKCE in your SPA.
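The shape of that change, sketched as a hypothetical config object rather than the exact okta-auth-js API (check your own library's docs for the real option names):

```javascript
// Before: implicit flow, tokens returned directly in the URL fragment.
const implicitConfig = {
  issuer: 'https://example.okta.com/oauth2/default',  // hypothetical issuer
  clientId: 'my_client_id',
  redirectUri: 'https://app.example.com/callback',
  responseType: 'token',
};

// After: authorization code + PKCE. With a library that supports it, this is
// roughly the whole migration; the library generates the verifier/challenge
// and performs the code-for-token exchange behind the scenes.
const pkceConfig = {
  issuer: 'https://example.okta.com/oauth2/default',
  clientId: 'my_client_id',
  redirectUri: 'https://app.example.com/callback',
  responseType: 'code',
  pkce: true,
};
```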
But if you're rolling your own, or you're not using a library like that, tactically what you have to do is: switch the response type to "code," make sure you include the code verifier and code challenge, and then when you get the code back, instead of getting tokens directly, you exchange that code for tokens, and then you can continue on as you were with your SPA, because now you have tokens. Anything you want to add to that, Aaron? No, that's exactly it. The first thing I would do is figure out what library you're using and check whether it supports PKCE already, because there's a good chance it does, and if so, the change you have to make is probably just some config to tell it to do PKCE. If you're not using a library, because you wrote it from scratch, then you'll have to add the PKCE bits in there yourself. I have a blog post that walks through the actual raw JavaScript you need in order to do PKCE; let me drop a link to that in the chat. It's not that much code, so hopefully it's not too much work. Of course, the real first thing I would do, before either of those, is make sure the server you're using supports PKCE as well. Most of them do, but double-check, because if you do the PKCE part on your side but the server doesn't support it, you're not going to see anything wrong, you're not going to get any errors; it just won't actually be giving you any benefit. So make sure you're doing it the right way.

I think I found the link to the blog post. Yeah, let me screen-share really quick. Where are we? Back here. Oops, wrong tab. The blog post starts with a fun video that myself and another coworker filmed, talking about the implicit flow and why it's not a good idea, and then down below it walks through actually building it out from scratch, all with plain JavaScript, no libraries. That's where we get into the code of doing the hashing and things like that. It's not too bad; it's a good read. Cool. And once you've done the PKCE flow instead of implicit, you end up with the access token the same way; the result is the same. You've got the access token from the server, so however you were storing the access token before with the implicit flow, you don't need to change anything to store the access token you got via PKCE. It's just the actual exchange and the redirects that are a little bit different. Yeah, cool.

Well, I think that's about all the time we have. Sorry we didn't get to all the questions, but thank you so much for joining. Let's go ahead and wrap this thing up and call it a day. Thanks so much, everybody, for joining, and we'll be doing this again next week. Good luck, happy coding, and see you next week!
Info
Channel: OktaDev
Views: 1,414
Rating: 5 out of 5
Id: qJatSV05b3U
Length: 63min 20sec (3800 seconds)
Published: Fri May 08 2020