Better API Design with OpenAPI (Cloud Next '18)

Captions
[MUSIC PLAYING] MARSH GARDINER: Hi, my name is Marsh Gardiner. I'm a product manager for Google. I tend to work on Apigee's developer portals and Google Cloud Endpoints' portals as well. I also work on our OpenAPI-based tooling, represent Google on the governing board for the OpenAPI Initiative, chair the marketing group, and serve on the Technical Steering Committee that guides the evolution of the spec.

Just a reminder, this is Better API Design with OpenAPI. This is an intro-level discussion. We're not going to get deep into the weeds on how the specification works, but we will talk specs a lot. In general, we're going to cover four areas. One, I'll give you a bit of background on OpenAPI. Two, we'll review specifications and how they work throughout the API lifecycle. Three, we'll cover some basic API design principles. And that will set us up to combine all of these so that we can talk about how OpenAPI's opinions guide, somewhat, API design and the choices you have to make. There's a spoiler: OpenAPI is not going to magically make your API awesome. But it can help. And hopefully, by the end of this, you have a bit of an understanding about how it helps and why it matters.

Who here uses OpenAPI in their organizations today? Great. Who's thinking about using OpenAPI? Good. Who here still calls it Swagger? Be honest. Let's see the hands. Well, with my marketing hat on for OpenAPI, one of our challenges is to get people to use the name OpenAPI when referring to the specification. Swagger still means the tools, and that is still a perfectly appropriate way to talk about the tooling that SmartBear oversees. But for the specification, it's important that we use the right name, because this is an industry format, an industry standard. Behavior change is hard. The good news is those were fewer hands than last year by percentage, so I think we're making progress.

So briefly, to set the stage before we had this thing called Swagger: the state of the art back in 2010 was still WADL, the Web Application Description Language, and WSDL for SOAP APIs. They'd been around a while, and they were pretty successful. But they were also XML-based formats. Because of JSON and JavaScript, we began to see the rise of JSON-based APIs over HTTP. And it kind of sucks to describe JSON-based APIs using XML.

In 2010, a couple of us at Apigee took note of a new API documentation approach that a company called Wordnik was pioneering for their own APIs. And we thought it was pretty cool. So we reached out to them, which began a conversation in an Apigee office about whether they would be interested in Creative Commons licensing the look and feel of their API documentation. And to our surprise, they said, actually, we're thinking of open sourcing all our tools. We thought that sounded pretty great. And we talked and laughed about how much it sucks to describe JSON-based APIs with XML. And I think it was Zeke who said, why waddle when you can swagger? And we all laughed and thought that was funny. And then someone said, actually, that'd make a great name. And so it began.

Wordnik had created all of these tools to serve their own needs. So in the beginning, for instance, in the description format that they'd created, you couldn't specify how you would do authentication or authorization. Obviously, other people needed that. And so as the community began to participate, the format began to become more descriptive. It was also very much bottom-up.
It was one company trying to solve their own problems with software. It wasn't a vendor saying, I think everyone should do it this way, or a consortium saying, hey, everybody, like so. So it had real authenticity about it. It was solving real problems for real people. And it grew organically over the next few years.

Then in 2014, that was about the time when we at Apigee were getting kind of tired of using WADL as our preferred format. We took a survey to see how the API description landscape looked. We still really liked what Swagger was doing. And so we approached Tony and suggested perhaps it was time to consider a major revision to the spec. That kicked off a working group. It introduced things like the YAML format for what were, at the time, Swagger specifications. It added a single-file expression of a spec. Before that, it was really a multi-file format; there was a bastardized single-file version you could do, but it was embarrassing. And the multi-file approach turned out to be a pain when you wanted to do things like create an editor to make it easy to create a spec in a browser. It also added what we called, at the time, vendor extensions. Those would be renamed specification extensions later. But really, it's because we were realizing that, as providers, we needed a way to mix in a bunch of implementation details. So that's an important piece here: the Swagger specification grew really as a way to document the contract for clients. But there was this interesting tension between the fact that while it's client first, it's also really useful to the server side and to how providers describe their own services for their own needs in the back. I'm going to talk about that a bit more later.

And we began to see with the 2.0 spec, it was like the world was waiting for it. I'll show you that in a slide in a second. It began to take off. It also became more important to get it into a neutrally governed foundation. Along the way, Wordnik sold some of its interests in Swagger to SmartBear and some of its other technologies to NDN. And it became even more important that if this was going to be an industry standard, it had to have neutral governance. And so the OpenAPI Initiative was founded. The spec was contributed by SmartBear. All of that led to figuring out the right structure and organization for how we would evolve the spec. And that all led to the OpenAPI version 3 specification, which took a lot longer than anybody had planned. It took us a lot of time. It was released about a year ago. And since then, we've seen tooling adopting it, and it's starting to get some pretty good momentum around it.

This is why. You can see, in 2014, when we started the working group, that's when interest began to really take off. It continued to accelerate as we added the neutral governance model around all of this. It's like the world was waiting for this. And I stopped tracking these numbers. This is the Stack Overflow tag trend for the various formats as of 2017. OpenAPI just continued to accelerate. And part of that is because of this governance model. So for those of you who are really interested, you can go read all about how the inner workings of this happen. It's pretty interesting. I had never gone through this process before. How do you take this open source project and make it a safe space for lots of people to participate in?
That means doing things like incorporating, making sure that everyone's assets are properly protected, and making sure there's a level playing field for folks. It's pretty interesting stuff. So we became a project of the Linux Foundation and continue to this day under their auspices. We have all of these guidelines and charters. If you're really excited about this, you can read them on openapis.org.

We grew from 9 original founding members. We're now, I think, at 32, with a pretty impressive collection of logos. These are all the members of the OpenAPI Initiative-- everything from big players like Salesforce, Google, IBM, and Microsoft down to small players who are focused more on narrow sections of tooling. It's pretty robust, and it continues to grow. The reason why all these people came together to work on this is because having a common industry format to describe APIs is really important to a lot of people and to the industry as a whole. And this is why I said earlier that using the specification name OpenAPI really does matter. It's important that this thing be able to live on its own. And I think it's been doing great. We have more work to do.

The specification itself evolves and is maintained on GitHub. So if you want to read through the specification, or participate, or ask questions, or file issues, that's a great place to go. All of this is guided by the Technical Steering Committee. We started as six, shrank down to four, and we're back up to six again. We meet once a week, in general, for about an hour and a half. That's an open meeting, except when we have to deal with internal governance issues. These happen on Thursdays, and they're open to anyone. You can actually find the calendar through the GitHub repo. We have all of this documented-- how you can participate, the development guidelines, et cetera. That's me. I thought it was really cool in 2007 to be trapped inside an iPhone, and I just haven't updated my avatar.

So the purpose of this body is really to govern the spec. It's important to consider the burden on tooling authors when you're making changes to the spec, because any time you add something, it's more work for people to do, or if you make a change, now things are incompatible. Really, the way the spec evolves is driven by the community's needs. But someone has to make sure that it's evolving at the right pace, and that's what we do. It's also work, like reviewing requests to make changes to the specification and clarifying things along the way. If you get excited about API description formats, you should come talk to us and join. Right now, we're meeting on Thursday mornings.

So we've had the 3.0 spec out for a year. We've spent some of that time thinking about how to better involve the community and how to have clearer practices and guidelines for participation. We also have a big list of possibilities, and we've spent a lot of the last few months talking about these things. I promised earlier I wouldn't dig too deep into the specification, but these are the kinds of things that we're thinking about. And this is really what we need input from the community for. So again, things like reusable groups-- that's the idea that you could define things and refer back to them, so you can write less boilerplate over and over. Things like alternative schemas, if you don't want to use JSON Schema, or even if you want to use a newer version of JSON Schema than the one we support officially. Those are things that we're considering.
We've talked about adding multi-segment paths. Actually, I'm super-excited about overlays. That's the idea that you could separate some of the implementation concerns from the specification itself. So rather than mixing in all of your specification extensions, those implementation-level details, you'd be able to have a separate file that you can think of as a layer on top, and you would merge those down. Again, I would encourage anybody who's interested in these things to please participate. It's the community involvement that makes us move.

These are the various ways you can contribute. I would recommend visiting the project on GitHub. There are issues you could file or participate in, and pull requests. We have lots of passionate arguments about things. I would recommend reading the development guidelines, so that you apply your changes against the right branch, that kind of thing. We have that weekly conference call that I mentioned. We have meetups, and if you want to join them or even host them, please get in touch with us. The easiest way to do that is via OpenAPI Spec on Twitter. That's probably the easiest way to quickly reach out to us. Things look pretty good, and please participate.

Part 2-- specifications in the API lifecycle. So, contracts. OpenAPI is really a contract-based approach to things. And really, this is an agreement between the client and the server. The server says, hey, this is the way I want people to talk to me. And if you talk to me this way, I promise that I'm going to be somewhat stable and be around for a while. And the client's like, hey, that's cool. So you register an application, and you are able to make requests against that. I really do think of it as a formal contract and agreement between these two sides.

I'm going to ask you to use your imagination. This is, as you can tell, in the style of Homer Simpson. But Legal would not let me use Homer Simpson in my slides. So mentally substitute this with Homer Simpson, because apparently that's OK. Sweet, sweet declarative, compile-time contracts. I actually imagine myself as Homer Simpson when I think about OpenAPI in this way. What I love is that it's so declarative, you can understand all the various capabilities of an API. And what I mean by compile time is that, as you build your application, the interface is hardened. Compare this to things like HATEOAS, Hypermedia As The Engine Of Application State, where the client can only do what the server says is allowable from the state that you're in. This is much more declarative. You can think of it as slices of functionality that you can expect will work in this way consistently.

A contract defines the boundary of responsibility. Contracts are great because this formal agreement lets the client and server teams understand the expected behaviors, so that I know what to expect back when I ask in this way. Contracts have been around for a long time. I've already mentioned WSDL and WADL. Before that, we had things like CORBA. And we've had other contracts as well, in different flavors-- I mentioned HATEOAS before, and there's also GraphQL. But what I like about OpenAPI, as I mentioned, is its declarative nature. And as I mentioned earlier, it really grew out of putting the client first. What is it that people are going to need to know in order to be able to call my API?
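To make that concrete, here is a minimal sketch of what such a declarative contract looks like in OpenAPI 3.0 terms. The service name, path, and fields are hypothetical, not from the talk:

    openapi: 3.0.0
    info:
      title: Example Dog Service   # hypothetical service, for illustration only
      version: 1.0.0
    paths:
      /dogs:
        get:
          summary: List dogs
          responses:
            '200':
              description: The list of dogs the contract promises to return.
              content:
                application/json:
                  schema:
                    type: array
                    items:
                      type: object
                      properties:
                        id:
                          type: string
                        name:
                          type: string

Everything the client can rely on-- the path, the verb, the response shape-- is stated up front, which is what makes the contract "compile time" rather than discovered at runtime.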
Google doesn't use OpenAPI for its own APIs. Why? Well, Google runs things at really big scale, and Google needs a binary-based format. So they use protocol buffers, and they have open sourced something called gRPC. If you come within a couple of orders of magnitude of Google scale, you might consider it too. But we also recognize that exposing our APIs with JSON flavors is important. So we have a gateway that sits in front, and it basically takes that gRPC and exposes it as JSON over HTTP. Most of our APIs can be expressed as OpenAPI in that way. The way I think about this is that if you like contracts and you've got scale problems, gRPC is a great fit. If you have adoption problems, I would go with OpenAPI.

If any of you have attended any of Apigee's sessions before, you've probably seen a slide like this, which is how we think about the API lifecycle in general, the core lifecycle being those icons in the center. This covers a lot. And if you ever look at Kin Lane's API Evangelist site, he's got pretty similar lists. API management has been around for a while, and these categories are pretty good. It looks like people want pictures. Wait one second. 3, 2, 1. Good.

I want to focus on just the core lifecycle bits. This is everything from imagining and designing your APIs through deploying, publishing, and maintaining them, analyzing that traffic and how you then maintain it over time, protecting against threats, and real-time monitoring. What's interesting to me is how many of these stages are better informed by specifications. They add value in lots of different stages of the lifecycle. I'll walk you through a couple of these just to give you a flavor of what that is.

Documentation is the most obvious one. For many years, people thought Swagger was Swagger UI, just because they'd see the documentation and think that was it. It was really a much deeper and layered system. But specs driving documentation is the number one reason that gets people to write specifications, in my experience. Everyone understands that in order for a developer to consume your API, they have to understand how it works. They need documentation. Maintaining documentation is a pain, and doing that by hand is even worse. And whether you treat your API specifications as truth or possible truth-- what I mean by that is, say you're generating your API specifications from annotations in your source code. You can consider that to be truth, more or less. If you write that description and maintain it out-of-band, it is a description of a possible truth. It may be just as true as the ones driven from annotations. But in the case where you're writing it out-of-band, that's where you may want to do automated tests to make sure that your specification stays, in fact, in line with what you're actually exposing with your APIs. So specs driving documentation is the single biggest reason to use an OpenAPI spec today.

But there are lots of others. So we use specs. This is in Apigee Edge. When you go to create a proxy-- which you do, essentially, in order to be able to attach policies that control, as part of API management, traffic and access to the services that you're offering-- if you have described your API using OpenAPI, we can use that information to help you figure out how to manage your API. Specs can also inform implementations: you may generate code in the back end, whether it's stubs or something more sophisticated.

Specs can drive mocks. This one seems obvious in many ways, but it's surprising how few organizations take advantage of it. The nice thing about driving mocks is that it lets you start building your client in parallel with the server implementation. That's a pretty good benefit right there. Now you don't have to wait for the server to get stood up before you can begin to build your client. It's also a good way to just put the API through its paces while you're in the design phase. Or say you're a financial services company, and you don't want your developers making live requests to your actual production backends. You could attach them to a mock and have them send requests without using real money. More sophisticated, of course, would be to have a parallel system that didn't use real money. But this is a good way to start.
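As a sketch of why spec-driven mocking works-- the path, field names, and values below are hypothetical, not from the talk-- a response in the description can carry a concrete example payload, and a mock server can hand that payload back to clients before any real backend exists:

    paths:
      /accounts/{accountId}/balance:
        get:
          parameters:
            - name: accountId
              in: path
              required: true
              schema:
                type: string
          responses:
            '200':
              description: Current balance for the account.
              content:
                application/json:
                  example:        # a mock server can return this payload as-is
                    accountId: "12345"
                    currency: USD
                    amount: 250.00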
Specs can also power test generation. Here is a snapshot of a project from a colleague of mine, Noah Dietz. He has a project called oatts, which is OpenAPI Test Templates. These are ways in which you can make assertions and run automated tests that help keep your API specification in line with what you're actually doing. It's pretty neat, and I encourage you to check it out. You could use specifications to classify traffic at runtime or to validate payloads at runtime. I mentioned earlier generating SDKs for clients or generating server-side code. There's a lot that specs do.

But given that this talk is about API design and how OpenAPI and its opinions influence it, I want to really talk about this. This is the spec editor in Apigee Edge, but you've probably seen lots of these out in the wild. As you incorporate specifications early in your process, you get better APIs. I'm going to talk about this a lot more for the rest of the talk. But being able to design and see your API-- when I go back to 2014, when we started the working group for Swagger 2, people didn't actually understand at the time why having a YAML-based syntax mattered. And it matters so we can do things like this. The ability to write a specification in a web browser-- that's a big deal. And the ability to get real-time documentation-- this is like documentation-driven development. It's a way in which you can make better APIs before you've even written a line of code.

So before we go into the OpenAPI influence on API design, I want to review some key API design principles that we will then apply and bring it all together. Did anybody read this book? It's been out for a while. I see a few hands. Good. I would encourage you to. You can find it on our website, Apigee.com. Even though it was published in 2012, it continues to be really relevant. It's been cited by organizations like whitehouse.gov in the Obama years, and it continues to influence a lot of thinking. It's a pretty simple, pragmatic approach to building interfaces that developers love. A number of the examples that I'm going to pull come from it. And it was really gathered by collecting the patterns that we were seeing while working with our customers and out in the wild, and then synthesizing that down. So it's an evidence-based approach to API design.

There are four patterns from this that I want to call out today. If you want more, please read the book. One, reuse, don't reinvent. Obvious. Again, these are all basically common sense. That's all design really is: common sense and creativity. Two, be resource-oriented. And when I say that, I mean the R in URL. Three, I'm going to talk about names and why they are so critically important in this area. Then four, just a general principle: when in doubt, do what's best for developers.
And by that, I mean your consumers.

Reuse, don't reinvent. Web technologies-- when we talk about web API design, the first web API that I saw and fell in love with was Twitter's, sometime in 2007, I would say. What was amazing to me about Twitter was that I understood how to use their web service, the actual website, as a user of their web interface. And switching from that to their API was really easy. Back then, you could use basic authentication. The same credentials you were using to sign in to the site, you would use to sign your requests. And the way they'd organized their API was very similar in its hierarchy to how their website had been designed. I mean, that's not an accident. It was just so amazing and beautiful to me that I could get at my data, not just in the way that they said I could, but in the way that I wanted to. And a lot of what made the web work is what makes web APIs work too. JavaScript is the lingua franca of the web. It is why JSON emerged as a rival and successor to XML. So JSON is a key piece of web APIs. And HTTP is another great thing. If you think back to right before this style of JSON over HTTP came out, everyone was doing RPC, more or less. And that's perfectly fine. It just wasn't leveraging what was really great about the web. It was tunneling a bunch of things, like the actions, through the payload. By using what makes HTTP and the web great, we got web APIs.

And that's important enough to call out: they are resource-oriented. If you use the web, you understand that things have locations. This is why it was so easy for me when I first got into APIs and web APIs through Twitter: I already understood how the web worked. I knew that, oh, you go to a blog, and there are posts, and then posts are entities. This kind of hierarchy is reflected in good URL design. It just makes it easy to think about how the site works and how to take the context of the service and apply it to how you access its data. It's tremendously powerful. You don't have to teach the basic mechanisms of accessing data and functionality.

This is third, actually-- naming. This is one of my favorite quotes in computer science, from Phil Karlton: "There are only two hard things in computer science-- cache invalidation and naming things." There's a version of this that wasn't from Phil. Apologies to Phil. "There are two hard problems in computer science-- cache invalidation, naming things, and off-by-one errors." But naming-- think how hard it is to create a good user interface, a UI. It takes a lot of design and rework even before you write a line of code. APIs are no different. All you really have to work with in your resource design is URL-safe characters. This means that the names of things have really great power. One of the things I like to do in API design is to involve my tech writers, because you really want people who actually care and think about words a lot. Not only do tech writers spend a lot of time thinking about how to break complicated things down to make them easy for people who are approaching a subject area, but they also tend to know a lot about other parts of your API, so they can help you keep your API consistent. And the third bit, which is what I just alluded to a minute ago, is that they care about words. Think about how hard it is to name a variable well. That's hard. Anybody here play Adventureland or Zork? I'm dating myself here. Oh, no.
Wow. Anyway, I don't have time to do a demo of that. But if you're familiar with it, you were in a room, you see a lamp on a table, and you say, get lamp. And to me, when I'm designing an API, I'm thinking about that. How do I make it obvious from these words how you work with this system? The way I think about it is that your developers are taking a journey through your API that's not unlike Zork. All you have to work with is text, and it's your job to keep them from being eaten by a Grue. It'd be a funnier joke if you'd all played. God!

Anyway, so you have one job. The API's job is to make the developer as successful as possible. The orientation for APIs is to think about design choices from the application developer's point of view. If you look in the lower right-hand corner of this little blue section, this is the Japanese expression of Shoshin. My apologies if I'm saying it poorly; I don't speak Japanese. But Shoshin is the concept of beginner's mind. It's a Zen concept. The example I like to give here is-- who knows what IMEI stands for? Wow, I expected more telco people in the audience today. It's the International Mobile Equipment Identity. If you look in your phone's settings, you'll probably see your IMEI number. When I saw my first telco-based API, I thought, what's that? It's a device ID. It's that curse of knowledge that you have, and it's really hard to step outside of it. That's why taking API design to other people is really important, specifically to people who are possible developers outside your organization, to understand how they see the words that you've chosen, because it is difficult to avoid the bad ones. So Shoshin, the beginner's mind, is another place I like to go when designing APIs.

So now that we've touched on specifications, why they matter, why OpenAPI is important, and basic design principles, we can put them all together. But as I started with, just remember: OpenAPI is not going to magically make your API awesome. But it does have some opinions, and I find it's a whole lot easier to work with them than to fight them. This matters whether you are building microservices or full-blown public API programs. So in the rest of this talk, we'll cover-- and I'll make sure there's time for questions-- these six ways that OpenAPI guides API design. One, JSON, where we use JSON Schema. You'll see why that matters in a second; I'm going to walk you through each of these in detail. Two, signatures-- how the signature mechanism works in OpenAPI and why that's important. Three, a warning: do not vary your response models based on parameters. Four, collections and entities are a nice pattern and a good fit. That's from the API design e-book that I mentioned earlier. Five, versioning and how you can approach it with OpenAPI. And six, defaults and pagination.

So OpenAPI leverages JSON Schema. It was in a draft state at the time that we launched v3, and in fact, we were a little worried that it was a dead end of a specification. It's been revived a lot recently. But we picked a draft, and we can't just update it without breaking things. That's why, when I talked about the big list of possibilities, it's on there. We didn't take JSON Schema entirely as is, though. One difference between version 2 and version 3 is that we added anyOf and oneOf. That had been a common request.
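As a quick illustration-- a sketch with made-up schema names, not from the talk-- oneOf says a value must validate against exactly one of the listed alternatives, while anyOf would allow it to match one or more:

    components:
      schemas:
        Cat:
          type: object
          properties:
            huntingSkill:
              type: string
        Dog:
          type: object
          properties:
            barkVolume:
              type: integer
        Pet:
          oneOf:            # must match exactly one of these schemas
            - $ref: '#/components/schemas/Cat'
            - $ref: '#/components/schemas/Dog'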
But what's important to remember, and one of the challenges you face in working with OpenAPI, is that JSON Schema is really a validation language and not a modeling language. If you remember that, it will help you on your journey.

Two, anyone recognize this? When I think of a signature, it's John Hancock's that springs to mind, that big one in the center. But if you look really closely, you can see path and verb. A lot of people don't realize that OpenAPI also signed the Declaration of Independence. (WHISPERS) It's not true. (NORMAL VOICE) It is the unique pairing of a path and a verb that really defines the core of what's in an OpenAPI spec. That is its signature. What I mean by that is that if I'm a gateway, and there's traffic passing through, then I should be able to identify, based on what I know from the specification, which operation in that spec the traffic maps onto. That's the signature mechanism. I kind of cheated with the plus media type; I didn't let it sign. If you are going to have multiple media types, you can use that as the next layer of signature. But path and verb-- if you think in terms of path and verb, it will help.

Number three, don't vary response models by parameter. I've tried to protect the-- I shouldn't say guilty. This API came about, I think, around 2006, so it predates a lot of things. But you can see what we have here is-- where is it? There we go. Oh, I put it in red. Good. The method equals. You are passing in what the signature is in the query string. And because that can appear anywhere in the query string, it can be a lot harder to figure out what the signature is if you're in a layered system like a proxy. When Wordnik set out to build their own stack for APIs, they wanted a nice clean signature mechanism. And that's why we have the legacy that we have today of path and verb. That's a perfectly good way to do things; it's basically the way the web works. So just a warning: don't vary response models by parameters. It's funny-- this is one of the things that comes up a lot in the community requests for how the next edition of the spec should go. And we continue to resist it, in part because it adds another burden on tooling authors, as I said before.

Four, collections and entities. What I mean by this is, like, /dogs, or this could be /posts. That is the collection. And then you've got the specific entity under it. So if I've got /dogs, and I do a POST to /dogs, I'm creating a new dog entity. I will get back a unique ID that identifies that specific dog entity. And then I can do things like update it, or delete it, or get it. So collections and entities are a really nice way to represent the hierarchy of your system. It's a way in which you can use the names of those entities to help people understand how to work with your APIs. It's also useful for API management. Because I've got the signature mechanism, and because I've got this way in which I can think about the structure hierarchically, I can then apply policies to specific areas. Say I want to throttle-- maybe throttle and dogs is the wrong word to put together-- say I want to throttle dogs but not cats. I could decide to do that because I have these kinds of-- I think I need a better example than that next time. Sorry, PETA.
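Here's a hedged sketch of what that /dogs hierarchy looks like as OpenAPI paths (the operations and descriptions are illustrative, not from the talk). Each path-plus-verb pairing below is one operation, which is exactly the signature a gateway or policy layer can match traffic against:

    paths:
      /dogs:
        get:                  # GET /dogs -- read the collection
          responses:
            '200':
              description: A list of dogs.
        post:                 # POST /dogs -- create a new dog entity
          responses:
            '201':
              description: The created dog, including its unique ID.
      /dogs/{dogId}:
        parameters:           # shared by every operation on the entity
          - name: dogId
            in: path
            required: true
            schema:
              type: string
        get:                  # GET /dogs/{dogId} -- fetch one entity
          responses:
            '200':
              description: A single dog.
        delete:               # DELETE /dogs/{dogId} -- remove the entity
          responses:
            '204':
              description: The dog was deleted.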
Another useful thing that I probably should have mentioned during the naming bit is the singleton resource. Imagine this was users, and user 1234 happened to be me. Having a /me-- and I forget who pioneered that; Facebook was early on that pattern-- it's nice to have these convenience singleton methods that map onto those things. And you can still apply policies in the same way to them.

Versioning-- nothing causes more controversy in API design than versioning. I'm here to give you permission to do versioning the way you think is right for you. However, do think about what's best for your developers. I remember when Facebook used to-- hopefully, they don't do this anymore-- let you pass in the version you wanted in the query string. People always forgot to do that, and things would break. The downside of versioning in the resource is that it feels kind of wrong to version the resource. But it's also obvious. So you have to figure out what's right for your developers and make that trade-off. What I wanted to talk about is the way in which OpenAPI has opinions here. One of the things we added in version 3 of the specification is that servers replace the host, and we added a way to templatize the server. So, in fact, in your servers information, you can put the version. You used to be able to do that with a base URL; now you can do it in the servers as well. The only caveat I would give here, and something to be careful of, is that it may be that you want a separate specification. If you're putting versions in your servers, you might as well think of those as different contracts, because someday you're going to retire one of them. But if you choose to do the Twilio style-- if I remember correctly, Twilio has got at least 60 versions, probably closer to 100 now. They have a lot of versions that they keep, and that works for them. They use a date-based format to do it. So I give you permission to use what works for you, but do look at what's worked for other people.

Defaults and pagination-- again, I think this comes back to naming. Make good choices in how you name things. We took a look around and saw a bunch of patterns. I think my least favorite was Twitter's, at the time. It was RPP, which stood for requests per page. And that's just not intuitive, right? Limit and offset are nice because people who use databases are familiar with them; it's common language. But with OpenAPI, at least, what I really want is a consistent way to apply pagination to all of my collections. The ability to define that in one spot and reuse that definition over and over again is really important. Pagination is actually a great place to use hypermedia, too. I think it's the most obvious case for hypermedia. If you can give people back a link to the next page of results, consider doing that. And you should be able to do those kinds of things using OpenAPI.
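Here's a rough sketch (hypothetical host and numbers, not from the talk) of both ideas in OpenAPI 3.0 terms: a templated servers entry that carries the version, and pagination parameters defined once under components and reused by reference on each collection:

    # Versioning expressed through a templated server URL:
    servers:
      - url: https://api.example.com/{version}
        variables:
          version:
            default: v1
            enum: [v1, v2]

    # Pagination parameters defined once and reused on every collection:
    components:
      parameters:
        limit:
          name: limit
          in: query
          schema:
            type: integer
            default: 25
        offset:
          name: offset
          in: query
          schema:
            type: integer
            default: 0

    paths:
      /dogs:
        get:
          parameters:
            - $ref: '#/components/parameters/limit'
            - $ref: '#/components/parameters/offset'
          responses:
            '200':
              description: A page of dogs, ideally with a link to the next page.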
So here are your three takeaways, and then I'm going to open this up to some questions. One, investing in API design always pays benefits. It's so much easier to make a change early in that process than to do it later. We all know this, and yet we continue to push it off and not do it properly. Invest in API design, even using specifications to help that process go better. Two, the promise of specs: by having a formal, machine-friendly description of your services, you will do less work over time. Leveraging specifications throughout the lifecycle will help you do less work. Yay. And the third bit is, don't fight OpenAPI's opinions if you're going down the OpenAPI route. Hopefully, you now have a better understanding of some of the ways in which it sees the world. I'm going to add a number four, which is please call it OpenAPI. It's important for the standard. [MUSIC PLAYING]
Info
Channel: Google Cloud Tech
Views: 17,782
Keywords: type: Conference Talk (Full production); pr_pr: Google Cloud Next; purpose: Educate
Id: uBs6dfUgxcI
Length: 38min 51sec (2331 seconds)
Published: Wed Jul 25 2018