[MUSIC PLAYING] MARSH GARDINER: Hi. I need that. Thanks for coming. Today we'll be talking
about API design and what's new with OpenAPI. We'll talk a bit about Apigee
and how we use specs, and about Google and how we use specs in
our development process. First, a little about me. My name's Marsh Gardiner, I'm
on the product team at Apigee. I've been at Apigee for
over seven years now, which seems like a lifetime,
and now we're part of Google, which
is pretty exciting. I generally work on API
portals and our OpenAPI tools. I've worked on other parts
through my seven years. And I've been involved
with Swagger and OpenAPI in different ways basically
since 2010 I think was my first interaction
with Swagger. It wasn't even called
Swagger yet, actually. And I'm also a member of the
TDC, which is the Technical Developer Community, and
I serve as the marketing chair for the group
and I promised them I'd take a picture of you
all to prove that I actually talked about this today. So if you'd say OpenAPI. AUDIENCE: OpenAPI. MARSH GARDINER:
Excellent, thanks, I'll report that on
our Friday meeting. It's a big deal. We have a lot. I'm going to try to make sure
we get through it quickly so there's time for some
questions at the end. Roughly speaking, I want to talk
about the Open API Initiative, why it exists, and then how
we use API specs at Google and how we use them at
Apigee and why they matter in the API design lifecycle. And then, it's good
timing, actually. A week ago, we reached
a critical milestone for the next generation of
the OpenAPI spec, Version 3.0. And so I'll give you a glimpse
into what's changing with that and talk about what that
stage of the process means. So I think the
first half is going to be sort of big picture
things, talking about items one through three, I think. And then the last bit
will get pretty nerdy about the details of
the Implementer's Draft. A quick history-- and
I've lost things, here. Hold on. There we go. Before we had Swagger, basically
the state-of-the-art was WSDL and WADL. Those were the ways you could
describe services, really. They were generally
HTTP-based services, and then in 2010,
a company called Wordnik started building
their own tooling that would eventually be called Swagger. And that's part of what
made this description format that they'd put into
what they were doing exciting. It had authenticity. This wasn't a
vendor saying, this is the way everyone
should describe APIs. This was someone saying,
hey, we need to do this. And then over time,
they open-sourced it, and
as more and more people got involved, they
had to add things. In the beginning, it was
just a plain old API key. But not all APIs
just use API keys. So over time, the
community asked for things, and they saw this organic
adoption and growth happen. And then
in 2014-- so Apigee had been watching this go on. We liked what we
saw with Swagger, but it wasn't quite
descriptive enough for us. We needed a few things and
a few changes to the spec, and so we approached Tony,
and Wordnik became Reverb at that point. And we asked if
they were interested in the next generation. They said yes. So we added a bunch of
really cool things to that. And that process
took about six months to get from the beginning
of the working group to the version 2 spec. And then after that, companies
shuffled around a bit. And then in 2015,
about a year after we started talking about how
Swagger could become neutrally governed, maybe under
The Linux Foundation, it actually happened. So at the end of 2015-- I think it was November-- the Open API Initiative
was officially founded and launched. And SmartBear, who had acquired
the interests in Swagger, contributed the specification. So not the whole
Swagger brand. They broke out
the specification and gave that to
the Open API Initiative. Within a couple of
months after that, we began what was the Version
3 iteration of the spec. We created a Technical
Developer Community, and there were six members. I'll talk about that in a bit. But this is really why
we started to get excited about things and why it's so
important that the spec is neutrally governed. So-- and this is just one
of several different metrics that all show the same thing. We saw downloads going up. If you look at Google Trends,
you could see similar lines, basically. So here is where the Swagger 2
spec working group kicked off. That's about when it became
the official spec and tooling started to catch up. And then you can see
a little further up the graph is when the Open API
Initiative happened, and then today. We're at the far right. And that's where the
version 3 of the spec is. So why did this matter? Why was it so
important to put this into some neutral
governance model? Well, really it
was to protect it. The industry needed
a standard way to describe APIs
and lots of vendors wanted a way to be able to share
these kinds of descriptions, and so it mattered. But while that specification was
the property of one company-- it's too important
for one company to have that kind of control. And so putting this into
the Open API Initiative under The Linux Foundation
was really important, and it matters. One of the challenges
is that it's hard to not call it Swagger. We have a mark that
describes this service. It's OpenAPI. And Swagger is a trademark that
applies to a brand of tools that SmartBear operates. And they do a great
job with that. They have a big community. That's what the specification
is known for first. But having the name OpenAPI
and using the name OpenAPI is really important. So if you're building support
for this into your products, you should use the OpenAPI name. It matters. In the beginning, there
were nine members. Two of those were
Apigee and Google. There were several
others, as well. But that's in part how I--
it's interesting to me, because that's how I began to
meet people at Google, totally oblivious to the fact
that at one point Apigee would become
part of Google and I'd actually work with
people like [INAUDIBLE]. And I think it's pretty exciting
that we went from nine members, and over the time since it
launched at the end of 2015, we're now up to 21. And when I made this slide
first two weeks ago, it was 20, and then was 21. I have to keep updating it. But I think it's pretty exciting
and a great show of support across the industry that we
see all of this happening. One of the interesting bits
and sort of OpenAPI trivia is that there is a space in
the name of the initiative, the Open API Initiative. And so you can see that
in the style guide. But the spec actually
doesn't have a space. And this is the fun trivia
fact you can impress people at parties with, is that
we realized a bit too late, after the charter had already
been drafted and signed, that it's a lot easier
to Google "OpenAPI" without a space than
"Open API" with a space. But it was a little too
late to change the charter, and so we just went with it. It actually has some
interesting side benefits. If people use it
incorrectly, it's a good sign that they
haven't paid attention to the details of the spec, so
I think that makes it a feature if you document it. So your first takeaway
should be that there are lots of ways, however
you look at it, where you can say that OpenAPI
has emerged as the industry standard. It is the most common way
to describe "RESTful"-- I'll put that in quotes-- APIs. So moving on, why
specs actually matter. Because having a specification
as a way to describe things, that's great. But without tooling and how
that fits into your workflow, it makes things
better-- without that, there's really no point to it. So in order to understand
the spec and why it matters, we have to talk about
the tooling a bit. There are many ways in
which specs help you in your API lifecycle. I'm going to cover seven that
we've used in our workflows at Apigee at different
points along the way, but there are plenty of others. I just picked my favorite seven. One, just simply having a
formal, machine-readable interface description
or definition-- and I'll talk about that
on point 7, the distinction between those two. It matters because one,
it's language neutral. So maybe you're polyglot-- you're building services in
lots of different languages and you need one common way
to describe those services. And since the format for
OpenAPI is YAML or JSON, because YAML's a
superset of JSON, it gives you a way to have
consistency across teams that really isn't-- since everyone has to deal
with serializing data anyway, they're all used to that. Two is that once you can
describe it for machines, you can validate it. So now you can make sure that
mistakes happen less often. So the ability to validate
is really important. But mostly we used to make
50-plus page Google Docs, or before that, Word
docs, where you'd describe all this
interface, you'd hand it to an engineering
team, they'd build something, give it back to you,
and maybe it matched. Once you have a
machine-readable spec upfront, if you do that in
the design phase, you can
drive agreement across teams. Which brings us to point two.
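To make that idea of a machine-readable, validatable contract concrete, here's a toy sketch in Python. The pet-inventory spec and the `check_spec` function are made up for illustration; a real workflow would hand a YAML or JSON spec file to a full Swagger/OpenAPI validator rather than this naive structural check.

```python
# Sketch: a minimal machine-readable API description (Swagger 2.0-style),
# plus a naive structural check standing in for real spec validation.
# Real specs live in YAML/JSON files; a dict keeps this self-contained.

spec = {
    "swagger": "2.0",
    "info": {"title": "Pet Inventory", "version": "1.0.0"},
    "paths": {
        "/pets": {
            "get": {
                "responses": {"200": {"description": "A list of pets"}}
            }
        }
    },
}

def check_spec(doc):
    """Return a list of problems found in a Swagger 2.0-style document."""
    problems = []
    if doc.get("swagger") != "2.0":
        problems.append("missing or wrong 'swagger' version field")
    info = doc.get("info", {})
    for field in ("title", "version"):
        if field not in info:
            problems.append("info is missing '%s'" % field)
    if not doc.get("paths"):
        problems.append("no paths defined")
    return problems

print(check_spec(spec))  # → []
```

The point is simply that once the interface is data rather than a Word doc, a machine can catch mistakes before any team starts building.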
specification in your design phase as the contract, now
you can let both the client and the server-side--
we can get people to agree on what actually
is being built and why. Because you never get your API
design right the first time. You're going to have
to change things, and the faster you can
do that, the better. Oh, I forgot one last
key point on this. The machine-readable interface
description is ultimately going to mean less
work, and less work is going to be a theme. You're going to hear that
on every one of these-- spoiler alert-- every one of these reasons. It's because it's less work. So having that contract
and allowing teams to have discussions about
what these services are and how they work, that matters. It ends up you get better
APIs and you do less work. You can use those specs to
even drive a mock service. So now, instead of waiting
for the server team to stand up a version of the
API, if you can drive a mock, you can have the client
team working in parallel. You can begin to explore
how this is actually going to work with the client
applications that are deriving value from that API. It also means you can learn
faster, because now you can validate what you got
right or wrong in the design before a line of code
has been written. And again, you do less work. You can use that spec to drive
client SDK generation or server-side stub generation. This is good because
you end up getting more consistent
interfaces when you start treating that spec
like code in this way. Then you end up
getting things that are more likely to agree with
the spec that you started with. There are fewer
mistakes and less work. You guys can follow
along when I do that. I'll give you the cue. So, generally
speaking, if you're treating your spec
as code, then you have it under source control,
and you can track changes. This can be really great, for
instance-- your tech writers. If you want your
tech writers to be able to keep up with what's
changing in the interface, there's a great way. You can just look at the
commit history for the spec. So by empowering them,
you have fewer mistakes, you've got better
documentation, and you've done-- ah, this is good. You've got it. Number six you'll do better. You can also make
your writers very happy by being able to
generate docs, and maybe even interactive docs. You can also generate
tests from that spec. We did that with some of
our node-based applications, if you ever play
with Swagger Node. And I'll talk about
that in a bit. It actually has something called
Swagger Test Templates that was folded right into that. I think this is a great area
of opportunity in tooling, too. I think if you are interested
in generating tests from your specs, that there are
a bunch of folks out there also looking for a really good
solution in that way. So again, you end up with
docs that are more accurate, they have fewer mistakes,
you can use tests as sort of a check to make
sure that what's been delivered was actually what
the specs said, and ultimately it means you do-- AUDIENCE: Less work. MARSH GARDINER: Very good. Where I think it gets really
interesting is when you use the spec as code
as the runtime's contract. So I mentioned a slide
or two ago a project called Swagger Node. We started something at Apigee,
a project called Apigee-127. It was pretty interesting. What it did is you could
define the interface. You'd design your API
interface, and then you would have a loose mapping
with the controllers that were the logic. So the problem when
you generate code, you're usually getting stubs,
then you add a bunch of logic into it. If you regenerate
the code, you've got to figure out a way
not to clobber that. There are ways to do that. Perfectly good. That's a totally valid workflow. But we wanted to
play with the idea that if you could use
the interface description as a definition-- so the difference
being what is truth versus what might be truth-- so if you're using it not
just as a description, but as the definition
of the services, and if you've got
a nice coupling to the logic and
the controllers, then you can really iterate
quickly on your design. You've got, for instance,
with Swagger Node-- I didn't explain that part. Apigee-127 became Swagger Node,
was contributed to the Swagger universe as an official project,
and has the test generation piece in there, has
mocks in it as well. It's a pretty fun
way to build APIs. In fact, most of the APIs
that I have coded-- they don't let me code
very often, which is probably good for everybody. But most of the APIs
that I do write, I've done in Swagger
Node and Apigee-127, and I tend to deploy them to
Apigee because that's my job. Actually, there are
two other projects I should talk
about, though, here. There's a Python-based
approach-- Python Flask-- called Connexion
that also uses this approach. And then another Swagger project
called Swagger Inflector, that also has this interesting
coupling between the interface and the controllers. We've used each of those
things along the way to make our lives better
and do less work at Apigee, but what's been really
fun and interesting to me is to have been at Google
for the last three months. I think I'm still a
Noogler, technically. And it's been fascinating
to see Google's approach. And so even though I didn't
plan it for this talk when I proposed the
abstract a while ago, we're going to take
a little detour and talk about how Google
does APIs, because there are some interesting
things coming out of that too that I think are
worth at least talking about. So Google publishes
a lot of apps. They've got over 80
on the iOS app store. I don't know the
number on Android. I expect it's bigger. And they've got a lot of APIs-- so many APIs, in fact, that they
need 14 different categories in order to be able to
present them to people. That's a lot of APIs. And the other
thing that has just blown my mind about learning
about how Google does things is their sense of scale. So this is a data center that's
in Mayes County, Oklahoma. We're going to have
a little data center porn for a few minutes. Indulge me, please. It's kind of fun and cool. This one's in North Carolina. This one's in Belgium. And I've started to think about
this as Google builds data center so you don't have to. I think you can
make a fun joke here about how the internet
is just a set of pipes. Well, these pipes-- they
actually carry water to cool the data center down. And for a sense of scale to
understand how big this is, so that's a Google bike. That's actually how we
get the data from one data center to another. We put them on
bikes and ride it. Just kidding. We have a lot of computers. And if you think about
it, if you try to add up all of the API
requests that happen in Google's infrastructure-- that's every search query,
that's favoriting a YouTube video, it's archiving
a Gmail, it's when you make a
wrong turn in Maps, it's your route recalculation. There are probably a bunch
of API calls in there. And some of those are
just a single API call to Google's infrastructure. That generally results in a ton
of calls in the back ends, too. So if you start to do some
back-of-the-envelope math, it adds up fast. Because all these computers
are talking to each other with APIs. And what that means is that
the systems that Google has, they can handle over 10
billion API calls every second. And I remember five years
ago when John Musser was talking about the
billionaire's club-- like providers who were serving
a billion API calls in a month. And then multiply that by
10, and do that every second. And that boggles the mind. It's just so big
you can't really wrap your head around it. And that turns out to
be really important, and I'll come back
to that in a second. So Google, I'm going to
say about six years ago, seven years ago. Oh, let's do the math. OK, seven years ago-- started doing its
own discovery format, a JSON-based format. This was happening
about the same time that Swagger was
coming out of Wordnik. And this was able to
drive a bunch of tooling, and they did this because they had
to roll their own description format at the time. A lot of you have
probably never heard about Google's
description format, because it was really to
solve a Google problem. So they needed it--
and I'll explain why in a second-- and
there was no standard, so they didn't have much choice. So this discovery document-- there was a lot of
infrastructure based on it. They used it for
client generation. A lot of the reasons I
talked about with API specs and why you want them-- they
were taking advantage of that. So their discovery service
drove their explorer, it generated documentation. There were things
you couldn't see. They were able to take
the JSON-based APIs and map them into their
own internal format. And that starts to
get real interesting when we talk about scale. That worked-- I mean,
it solved their problem, but it didn't solve
other people's problems. They weren't interested in
open-sourcing it or formally making that a format
for other people to use. It was really to
scratch their own itch. And that meant that there was
no open-source community that sprang up around it. And so lots of things happening,
everything's evolving. And people realized it. So there have been
people doing what are called protobufs at Google
for at least a dozen years, I think. And that's how Google sends
things back and forth, and that's how they
describe their data. And it takes advantage of things
like HTTP/2, bi-directional streaming. And so Google was using
all this internally, and the lesson they learned
from doing their own discovery format, in part, was
that if they really wanted the rest of the
world to appreciate and do things the Google-y
way, then they should do their next
version-- so that service was called Stubby. They should do the
next version of Stubby. It's called gRPC. And they should do
that in the open. So this was a
little weird to me. I get it, it makes a lot of
sense, but it's taken me-- I would say it's three months
to even sort of understand how these things
start to fit together. Protocol buffers are
pretty interesting. They're language neutral,
platform neutral. They're extensible. I like to think of them
as sort of three things. One, they're a
serialization mechanism. Two, they are a kind of
interface description language. And three, they're
sort of a methodology. And that methodology is going
to sound kind of familiar, because we were just
talking about that when I was explaining
OpenAPI specs at Apigee and how we use them. Protocol buffers are a
bit like JSON schema. You use them to describe
the structure of your data. gRPC gives you this framework. You get a bunch of tooling
that comes along with it. And this is what
Google uses internally to do that massive
number of APIs that I was talking
about earlier. What this all means,
if you add it up, is that if Google used
OpenAPI instead of gRPC, we'd probably need
another data center. And that's hundreds of
millions of dollars. That's an expensive thing. Because when we have to send
all of that information back and forth across
all those computers that we saw those
pictures of before, it actually makes sense to go
to a binary format over the wire just for efficiency-- not everybody has that problem. Others have started
to adopt gRPC too. Netflix talked about this. They took one of
their main projects, Ribbon, I think it
was, and instead they're now going to be building
their solution on top of gRPC. Also interesting and pertinent-- March 1, the Cloud Native
Computing Foundation became the parent of gRPC. This wasn't meant to
be a Google thing. This is meant to
belong somewhere else. And so it's neat and interesting
that it's in the Cloud Native Computing Foundation
for a reason we'll come to in a second. Why two, though? Because Google's involved
in both gRPC and in OpenAPI. To synthesize some of the
things we were talking about in the beginning,
you get things like documentation, your code
generation, your API service configuration-- all of those benefits of
embracing the way of the spec. So we see the industry moving in
a direction that fits with how we think things work best. It's so funny to
say "we," actually. [INAUDIBLE] But it's also notable
that both are Linux Foundation projects. We think The Linux
Foundation is pretty great, and it makes a great home
for both of these things. And we are interested in
figuring out over time how we can bring
those things together. That may be off in
the distance a bit. We've been trying to figure
out how and to help-- at least with the 3.0
evolution of the spec-- consider ways in which these
things could start to come together over time, but
right now they aren't. Google uses OpenAPI
specs in other ways because there are some nice
adoption benefits from it. The new One Platform API-- sorry, GCP's. The way we manage
APIs internally-- we used to serve up
that description format that I talked about, that
they created on their own. Now you can actually
get an OpenAPI format of that description instead. Kubernetes publishes--
people know Kubernetes. But just in case you don't,
this will explain a little bit about what Kubernetes is. I'm no expert in this. But the thing that
is interesting is that Kubernetes publishes its
OpenAPI specs with a Kubernetes server. At Apigee, we use OpenAPI--
because we're part of Google now, I can say this is how Google
uses specs. As Apigee, we use specs
at different points in the lifecycle. We've got a spec editor. You can generate proxies
and flows from specs. And API documentation
is generated from specs. Cloud Endpoints--
it's an API gateway, and its configuration can be
done with an OpenAPI spec, so you can start to
see that OpenAPIs-- that we expect the
rest of the world to be speaking OpenAPI
and not gRPC, necessarily. So we've had to live
in both of these worlds and figure out how to live in
both of them at the same time. So that's a bit
about how we deal with specs at Apigee
and Google, in general, and in the world at large. My advice would be, if you think
scale is your biggest problem, gRPC might be a
good fit for you. It certainly works well for us. And if you're concerned more
about adoption, OpenAPI. And there may be some cases
where you consider both. That's where we are, actually. And the reason why
I said option one, there's just more
traction out there in the world with OpenAPI. Two, it has the benefit of being
a text-based description format, and generally speaking, it
describes text-based responses or serializations. So just like when you
see a web page you can view source and see the HTML
and the JavaScript and the CSS, you can see what's
actually being sent back and forth across the wire. That is just much
more approachable. You get more visibility
into how the system works. There's less mystery. Whereas if you're
dealing with binary, unless you've got some
special tooling to figure out how to unpack that,
you're asking developers to make a big leap
to understand why. And so I tend to think of
this as an adoption benefit. Because what I see
with our customers all the time is that
they're trying to figure out how do they deliver
value to people, and how do their developers
realize that value quickly, and if the format's
getting in the way, that could be a problem. So I don't think it's an
easy question to answer. I think there's a lot
of interesting stuff to chew into, particularly
now that both projects are in The Linux Foundation,
we'll see how that goes. But the takeaway is use specs. Good, I was a little worried. It's a lot of content, and
we're making good time. This is where we start
to get pretty nerdy, so if you want to understand
how the evolution of 2.0 is getting to 3.0,
there's a lot here, and I'll recommend that all of
this is happening in GitHub. And so you can get involved-- file issues, ask questions. When we hit the implementer's
milestone last week, really this means that there
are very few changes to how the spec works from
this point forward, definitely the focus
is on documentation. But we're also at the point
that the challenge we have is that there's this
chicken and egg problem. People don't care about the
spec without the tooling, and the tooling can't
spring up around the spec until it hardens enough to
be able to build against it, and that's the point
where we're at now. So now it's stable
enough that you can begin to build against it. And we'll see. I'm not sure how long it
will take for things to flip and for 2.0 to become the
old, out of fashion version and 3.0 to pick up. But it's going to happen
sometime in the next six to 12 months is my
guess, but first we go through the
Implementer's Draft phase. We'll cover a bunch of
the big changes today. If you remember one thing, the
easiest place to get back to is the Open API
Initiative's blog. We've been talking
about and trying to help the community
understand what's changing. We have pictures of
unicorns and rainbows, so it's got to be good. That's how we get the
younger demographic in too. A lot of this work has
happened with the TDC. So that's the Technical
Developer Community. That's me-- I'm
trapped in an iPhone. And Tony Tam-- Swagger's his baby, he's
been there for a long time. Ron Ratovsky-- both
those folks were at Wordnik and are now at SmartBear. And Ron probably knows the
spec as well as anybody, like forwards and backwards. He's where I go when
I have questions, too. Darrel Miller from Microsoft. He's been great. And Jason Harmon, who
had been at PayPal when he started working on the
TDC and is now at Typeform. I spent a lot of quality
time on conference calls with these folks. The Implementer's Draft-- it
represents about a year of work by the TDC and the community,
because it's not just the TDC saying, it must be like this. It's people filing issues
and us figuring out how to reconcile that with what
the spec should and shouldn't do. It means that the third
generation of the spec is now relatively stable. And as I was saying, we have
this chicken and egg problem. The spec has got to be
stable enough for tooling, and that's going to start to
become cleaner and tighter. And at this point,
only blocking issues. Like if we discover we made
some terrible oversight, that would get fixed. But most of this is
going to be documentation and giving people the
supporting information they need to be successful. And this has been
happening-- at least until we hit the full
release of the [INAUDIBLE]. It's still happening in a
branch they called OpenAPI.next. So if you arrive at the GitHub
repo and you see Version 2, that's because it's still
the official version. It is the only released
version of the OpenAPI spec. And so 3.0 will replace
that eventually. This is a great time to
start getting involved, particularly if you
are doing tooling. You can get a lot of attention
from people in the developer community now. So if you are thinking
about getting involved in creating tooling on
top of OpenAPI, particularly open source, this might
be a great time to do it. It will also really help the
spec become a real thing. So I think this is the
simplest way to understand what's happened between 2 to 3. This is just a
visual representation of the major high-level
sections of the spec. And it's a lot cleaner
and simpler on the right. Things like components. So we took a bunch of these
reusable definitions that were sort of
scattered throughout, and we brought them
together under one thing called "Components." Obviously, the spec is now 3.0. That's exciting. But it's not just that. We have introduced Semantic
Versioning as well with the specs, whereas
before it was just 2.0. Now we're trying to follow a
Semantic Versioning approach. So clean up of the top level. And in general-- and I'll
try to point this out as we go through some of these. We reconsidered some of
the names along the way. So Host is a good
example, and we'll talk about that in a second. We now use Servers
instead of Host. I think that's all the
stuff I have on this one. We're going to start with
some of the easy things to warm up with. It used to be you could
use description fields, you could give Markdown,
and we had, in the 2.0 spec, said "GitHub Flavored." It just sort of made sense. It was on GitHub and it
was easy to deal with. But GitHub-Flavored
Markdown didn't have a spec, and so Apiary and
API Blueprint also moved from GitHub-Flavored
Markdown to CommonMark, because there's a spec. Also, I think leveraging
other specs where we can, like URI templates,
you will see this come up over and over again. I think we did a great job. Darrel's been particularly
good at bringing specs that exist
in the real world so that we don't
reinvent things. Another small change is
that we moved from YAML 1.1 to YAML 1.2. In the info object, there
is a breaking change. The terms of service
is now a URL. And something that often
catches people by surprise-- this version here is not
the version of the spec, or it's not the spec
document itself. This describes the API. That's the version of the API. This came up a lot. Multiple servers. So now you can actually declare
a bunch of different servers. This can be a little tricky
because sometimes if you're dealing with a
staging environment, it may be that it's not
the same specification. You're introducing a new API. And so I would say be a
little bit careful here. But if you have
a situation where you were running
multiple servers, then this starts to make sense. It makes even more sense when
you add in parameterization. So now, for instance,
if your host name changes based on
some pattern, there's a template that lets you do
that and some variables you can leverage to enable that. One word of warning
here is that one of the trickiest things
of all in the 2.0 spec was how we used
the word "default." We use default to describe
parameter values-- that if the client
didn't supply a parameter value, that's what the server
was going to substitute for it. That's what the server's
initialization would be. Here, however, if you
don't have a user name, this URL isn't going to resolve. So this isn't simply
what the server will do if you don't supply anything. In this case, a default means
something a tiny bit different. We probably talked
for 15 minutes about whether we should call
it something else. We couldn't come up
with a better name. These things are hard. And that sort of exemplifies
what these conversations-- we can have an hour and a half
conversation about Webhooks and whether or
not they deserve to be part of the specification. Some of these calls,
we recorded them all. Some of them are really boring,
talking about things like that. But, you know, fun
and passionate-- a bunch of people who are
really interested in what's the right way to
describe the APIs. I mentioned the reusable
components before. So we collected a bunch of
things and added a few more. So one thing to remember is
that all of these components require some sort
of a reference. We use JSON pointers for that. We also introduced
in the components some restrictions
around the name. I think there was no character
restriction on the names before, that introduced
all kinds of bugs. People would come up and try to
put Unicode in their definition name. That's a bad idea. So we decided it's easier to be
restrictive early and expand it later if there is need. If you do want to
namespace your components, we recommend using
dot notation. That's one of the
accepted characters. Another thing that came up
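For example (the schema name is invented), a dot-namespaced component referenced with a JSON pointer might look like:

```yaml
components:
  schemas:
    store.Pet:                 # dot notation as an informal namespace
      type: object
      properties:
        name:
          type: string
# elsewhere, referenced by JSON pointer:
#   $ref: '#/components/schemas/store.Pet'
```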
surprisingly often with 2.0 was why can't we add a
description to the resource, to the path? And, you know, why not? It's a good idea. It makes sense. It should belong there. Another thing you can do here
is you can override servers now, at the path level. And we introduced
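A sketch of both ideas together (the path and URLs are invented):

```yaml
paths:
  /pets:
    description: Everything about pets     # descriptions on paths are new in 3.0
    servers:
      - url: https://pets.example.com      # overrides the top-level servers list
        description: Hypothetical dedicated host
    get:
      responses:
        '200':
          description: OK
```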
support for Trace, because, hey, it's an HTTP
verb and we should do that. We made an effort to do a better
job with the URI Template spec and keeping that compatible. The schemas for parameters
are basically primitive types. One thing you'll notice is
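For instance (the parameter name is invented), a query parameter now carries a schema of a primitive type:

```yaml
parameters:
  - name: limit              # hypothetical query parameter
    in: query
    description: Maximum number of items to return
    schema:
      type: integer
      maximum: 100
```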
that the form "body" isn't
here anymore. It used to be that you would
have to give a parameter its
location as "in: body," and you'll see how we can
deal with that better. The request body and its content-- that is what replaced it. You can now describe form
data using schema definitions. That's kind of interesting. You'll notice the
content block here. And there are now media types. So you can describe
different media types, and they can have different
examples and different schemas. That's a big deal. We could only have, I think, one
example per operation in 2.0. I might not have
that exactly right. But now you can define
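A sketch of a request body with more than one media type (schema and values invented):

```yaml
requestBody:
  required: true
  content:
    application/json:
      schema:
        type: object
        properties:
          name:
            type: string
      example:
        name: Fluffy
    application/x-www-form-urlencoded:   # form data is now just another media type
      schema:
        type: object
        properties:
          name:
            type: string
```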
any number of media types. That's cool and interesting. It also ends up replacing
produces and consumes, which I had always
found a little tricky in how OpenAPI and
Swagger before it had tried to describe
these things. So I think it's a much
cleaner, nicer way of doing it. Similarly for responses, you've
got multiple content types. And you can now wildcard status
codes, which is kind of cool. And there was an
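For instance (the schema is invented), a response map with a wildcarded range might look like:

```yaml
responses:
  '200':
    description: Success
    content:
      application/json:
        schema:
          type: object
  '4XX':                     # wildcard range: matches any client error
    description: Client-side error
```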
amusing conversation about how many Xs we should
allow in status codes. I should probably have written
down which call that was, because it was funny. Another interesting
piece is this: what is a callback or a Webhook,
if not really just another kind of API? It's more like a push
API in that case. And one really
important piece of this is to be able to refer back
into what you're getting back. So you'll notice the
request body URL. That's something that you're
looking for in the response. And that's interesting. I definitely recommend looking
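Here is a sketch of one (the callback name and the callbackUrl field are invented; the {$request.body#/...} runtime expression is the piece that refers back into the original request):

```yaml
callbacks:
  onEvent:                                 # hypothetical callback name
    '{$request.body#/callbackUrl}':        # URL resolved from the original request body
      post:
        requestBody:
          content:
            application/json:
              schema:
                type: object
        responses:
          '200':
            description: Callback acknowledged
```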
at the callbacks object. Darrel Miller did a bunch
of work describing that. It's good if you're
also struggling with describing your Webhooks. This could be
interesting to you. We also tried to be
more accommodating of hypermedia-- not necessarily
as the engine of application state, but just realizing
that hypermedia is used a lot to point at different things. We wanted to be able to do a
better job supporting that. And so now it's possible to
use these links to connect operations from responses,
and that operation can be referenced either by an
operation ID or by a full href. And this, to some degree,
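For example (the operation and link names are invented), a link from a response into another operation:

```yaml
responses:
  '201':
    description: Created
    links:
      GetCreatedPet:                     # hypothetical link name
        operationId: getPet              # could instead be an operationRef href
        parameters:
          petId: '$response.body#/id'    # feed the new id into the next call
```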
means that documentation is no longer flat,
because you start to be able to interconnect
these operations, and that's kind of
cool and interesting. So if that strikes
you, definitely check that in the
OpenAPI.next spec. We didn't change a lot
of security definitions but we did one thing that
was really important. The flows that we had in 2.0
didn't follow the OAuth 2.0 spec, and so as a matter of general
hygiene and cleanup, we fixed that. So that's good. I don't know how we
missed that in 2.0, but I'm pleased to say
that that's been put right now. JSON schema is the thing
that gets everybody excited. I don't know why,
exactly, but I think it was in part because 2.0 had
such restrictions around what you were allowed to do when
you're describing things, and that didn't fit
a lot of the APIs that are out there
in the real world. So now you can use-- and there are a
bunch of caveats. You should definitely read into
this carefully in the spec. But I am no expert
in JSON schema; if you are, I would
recommend checking that out. But you are now able to use
allOf and oneOf and anyOf and some of the other
interesting pieces that JSON schema allows. It's not full JSON
schema support, but it's a whole lot
closer to it than it was. This is good, and
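A sketch of those keywords (all schema names invented; Pet is defined alongside so the fragment is self-contained):

```yaml
components:
  schemas:
    Pet:
      type: object
      properties:
        name:
          type: string
    Cat:
      allOf:                   # composition: everything in Pet, plus more
        - $ref: '#/components/schemas/Pet'
        - type: object
          properties:
            huntingSkill:
              type: string
    PetResult:
      oneOf:                   # exactly one alternative must match
        - $ref: '#/components/schemas/Pet'
        - $ref: '#/components/schemas/Cat'
```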
it's helpful also to take advantage of all
the great JSON tools that are out there in the world. In a quick summary-- I want to make sure we have
enough time for questions-- really what's happened
with the 3.0 spec is it's gotten better organized,
it has better names, it is in fact more
descriptive-- so it can describe a wider range of APIs. It has callback support in
it, which I think is fun, and the ability to use links to
connect operations and describe some of these sequences, and
then just general cleanup. If you want to get involved,
there's a bunch [INAUDIBLE] the beginning. We blogged about a lot
of the changes that were coming along the way
and then summarized those. Ron Ratovsky did a
couple of great blog posts recently about this, too. Darrel Miller was a big
help in this as well. Check out openapis.org. You can follow OpenAPI
spec on Twitter. That's where we
generally try to be proactive about
when we're having meet-ups and things like that. Actually, there's a
meet-up tomorrow night, I think, in San Francisco,
if you're interested. And then, like I
said, all of this is happening on
the OpenAPI branch. On the OpenAPI [AUDIO OUT]
GitHub, you will find two
projects there. One is the style guide. So if you were
looking for guidance on how to refer
to OpenAPI, either the spec or the initiative,
or get the logos, all of our source
logos are there with guidance on
how to use them. So the third takeaway is
that OpenAPI Version 3 is a big step forward--
it's generally an improvement. Kin Lane did a
nice blog post recently on how pleased he was with
the evolution of this. That's exciting too because he
just joined the OpenAPI Initiative. He's now on the governing board. This is a great opportunity
for tooling authors. So I would recommend
kicking the tires. See if the OpenAPI 3
spec serves your needs. Or if you see tooling that
needs to exist-- like right now, one of the next
things we have to do is to actually update the JSON
schema that describes the specs so we can validate specs. That's the stage we're
at, and we definitely need and will appreciate
help, even if it's just commentary on things
you liked or didn't like about the changes
that have been made. [MUSIC PLAYING]