Google Cloud Platform 101 (Cloud Next '19)

Captions
[MUSIC PLAYING] TERRENCE RYAN: Hi, my name's Terry Ryan. I am a developer advocate for Google. So let's get started. There are a ton of GCP products. When I went to put this talk together, I went to our canonical list of all of our products to figure out, OK, what things should I talk about? And I saw these. And then I saw these. And then I saw these. And I was like, oh, what did I sign up for here in this talk? I can't get through all of this. Oh, and there are the open source ones that we also do a lot with. So I can't get through all of this.

So what is my goal here with this talk? It is to give a tour of GCP from the perspective of someone coming to GCP for the first time. What are these things? What are all these hexagons? What do they all mean? A secondary goal, but not something I'm going to get to, is explaining all the various things that are available on Google Cloud. I literally cannot do it. A lot of the individual slides I have here are actually placeholders for an entire talk I give somewhere else. Like when I get into serverless versus VMs versus containers, that's a whole talk unto itself. So I'm going to try to point you all in the direction of where to find the things that you want to do, and hopefully that will be good for everyone in the room.

One of the resources that I used to build this content is GCP in 4 Words or Less, which is available at that very long GitHub address. If you just do a search on a search engine of your choice for "GCP in four words," you will find it. It's really helpful; it gives very concise definitions of a lot of our products. And where they are concise, I will hopefully help explain them a bit more.

So let's get started with what I think is one of the most important parts of GCP, which is infrastructure. When you move your work over to us, you need to run somewhere. You need to have VMs, you need to have network connectivity, you need basic computing resources. So infrastructure is a gigantically important part of this story.

Oh, a little bit about me, just so you see where my biases are. I am, by definition, an app dev, mostly on the web. So you'll see that I have much more knowledge on the compute side of the house and the technologies that complement it, as opposed to, say, big data jobs. So I'm going to end that slight detour and go right into computing.

And we'll talk about the issue I already brought up, which is VMs versus serverless versus containers. This is one of the first decisions you have to make: where are you going to run the code that you need to run to do what you need to do? We start first with Compute Engine, which is where I think a lot of our customers start off. Virtual machines, disks, and network. What do they do? What can they do for us? With Compute Engine, you can get anywhere from 0.5 to 96 processors. You'll actually see in the UI that you can go all the way up to 160, but you can't customize those. Same thing with memory: from just under half a gigabyte of RAM up to 624 gigs of RAM. You can attach up to 64 terabytes of standard disk or SSD. You can do three terabytes of locally attached SSD. You can add GPUs or TPUs. And I like to say they spin up in tens of seconds. You'll see that in a minute. I'm just going to go right to my demo, and then if I could switch over to my machine, that would be great. There we go.
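As a point of reference, and not something from the demo itself, spinning up a small VM like this from the command line looks roughly like the following sketch; the name, zone, machine type, and image are just illustrative placeholders. The console button mentioned in the demo below generates a much longer version of this, with scopes and disks spelled out explicitly.

  # Hypothetical example: create a small Debian VM in the Los Angeles region (us-west2).
  gcloud compute instances create deleteme2 \
      --zone=us-west2-a \
      --machine-type=n1-standard-1 \
      --image-family=debian-9 \
      --image-project=debian-cloud

  # SSH in without managing your own keys or terminal setup.
  gcloud compute ssh deleteme2 --zone=us-west2-a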
All right, so you'll see here I have a whole bunch of VMs already, but I'm going to create a new one. And I'm going to have to call it deleteme2, because I created one earlier today. And let's see, let's put it in LA, our Los Angeles data center. You'll see that I can go ahead and customize it; I can create it with anywhere up to 96 cores. I can play around with this, but I'm just going to do a basic one, because I don't want to have to explain that on my expense report. I can choose from a whole bunch of different images, a whole bunch of different flavors of Linux: CoreOS, Debian, SUSE, whatever you want to use. And we have flavors of Windows as well. I could use SSD persistent disk or standard persistent disk; again, I will stay with standard. And we see that I get an estimate of how much it's going to cost per month. Great. I can open up the firewall for it.

But one of my favorite features of the interface here is down at the bottom. I use this a lot. I configure things in the web UI, and then I hit this, and I get-- you'll see it's a long string of text, but this is the gcloud scripting command to build the VM that I configured. So I can use the tool to dial in exactly what I want, and then I don't have to write out these really super long URLs that mean something in terms of what scopes the VM is able to access. I don't have to remember all of that. The tool will do it for me. But I'm going to go ahead and create this VM. And like I said, these VMs spin up in tens of seconds, which means when I'm in my hotel room practicing, making sure I don't screw up in front of you all, it takes like 10 seconds. In front of people it takes like 40, and I'm not really sure why. But my hope is that as I slowly crab walk over to the console, it'll be ready when I get there. Ooh, it was! All right, good. So thank you for not looking directly at the screen.

And you'll notice there I pressed the button to SSH into the machine. That's available through the console all the time. I don't have to pull down certs. I don't have to set up my own terminal. If you want to, you can. I don't want to. I do all of my maintenance through these browser windows. And you'll see here that I'm connecting, and by the time I walk over here, it's all ready to go. Let me bump up the font a little bit here so you can see that I'm going to apt-get update. All right, it starts installing software. Remember that I was supposed to sudo that. So, sudo bang bang. Do it again, there we go. And so now I've got a VM that I can start fooling around with, installing software on, all ready to go right through the browser. So I'm going to switch back to the preso and move on.

So that was Compute Engine. Now I want to talk a little bit about App Engine. In four words or less: managed application platform. App Engine has two flavors, standard and flexible. And I think the runtimes on this slide are off, because I know PHP can now do 7.2, so I apologize for that. There is a little bit of difference between the two. With flexible, you can use more runtimes and more languages, and it has more open capabilities, but it's not as scalable as standard, which is much more constrained. You have to really adapt your code style to write for App Engine standard, but you get incredibly rapid scale out of it. And it scales down to zero. So if you're not using it, you don't get charged for it.
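As a sketch of what that looks like in practice, and not the app from this talk, deploying a minimal App Engine standard service is an app.yaml plus one command; the runtime and file contents here are illustrative assumptions.

  # Hypothetical minimal App Engine standard deployment.
  # app.yaml declares the runtime; the flexible environment supports more languages.
  printf 'runtime: python37\n' > app.yaml

  # Deploy from the directory that holds app.yaml and your application code.
  # App Engine handles the scaling, including scaling down to zero when idle.
  gcloud app deploy app.yaml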
And if you are using it, you can scale up to however much usage you need to handle, relatively quickly. And I'm actually going to show a demo of App Engine scaling in just one second here. So let me switch over to my machine. And I'm going to call this up, hopefully. Let's make sure everything's loading. There we go.

All right, so let me explain what's going on before I fire this off. I have an App Engine app. You'll see it at the top of the UI. That's this thing. I give it a variable amount of load. It's going to take that load, distribute it, basically do all the math to divide it among the five VMs that I have down there. Those five VMs have Apache Bench, and they're just going to send a whole bunch of requests against the app down there at the bottom, a totally real app on appspot.com. The reason that's obscured for now is because I want to show this cold. I tested this last night and I let it all go cold. It's been sitting idle overnight. It has zero usage right now. I want to show you how fast App Engine can scale from not running at all to handling large amounts of traffic.

So up here at the top, you'll see that I have a request count of 10,000. I'm going to send 10,000 requests at that totally real service there at the bottom. We're going to go ahead and send load. And you'll see App Engine spinning up instances of my code to handle my requests. And, oh, something got interrupted there. So let's just ignore that that happened. Huh, I seem to be having issues with this demo today. No, wait, yeah, I'm going to give it one more try and then say I should have tested this morning in addition to last night. Yeah, all right, fair enough. Well then, I'm just going to punish it by sending 100,000, 200,000 at it. So I think what's happening is one of my VMs is down. Oh no. OK, fair enough. Well, normally App Engine can scale beautifully when the app I wrote to show this off shows up. What you should see, and what I will totally go back and make sure is working the next time I give this talk, is that usually it will scale to however many operations I need it to take. In the beginning, the QPS there that you see at the top, at 3,600, will be low. It will be like 500, because it's warming up from cold. And then I can eventually get this up to handling a QPS of 8,500. Which, when you do all the math, is about 30 million requests an hour, or what Wikipedia handles. I am sure App Engine is fine. I am sure I pushed a change at some point that broke this front end. So with that, I'm going to switch away from the demo and be glad that my sacrifice to the demo gods did not get accepted this morning.

So let's, oh, we're going to go this way. Let's go forward. Now I want to talk about Cloud Functions. We talked about App Engine, which is an application as a service: you build an application, we scale it for you. But sometimes you don't need a full application, you just need a little bit of code. One little bit of code that will do one thing. Enter Cloud Functions: event-driven serverless functions. What you can do is have a function, a single function, respond to an event. And that event could be HTTP-based, so you get a URL that you can use to hit your code. Or it could be something we call background. Background is something like writing a file to a bucket or responding to a message on Pub/Sub. Languages, we have a restricted set of languages here-- Node.js, Python, and Go.
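As a rough command-line sketch of the same idea (the function name, runtime, and source directory are illustrative, not from the demo), deploying an HTTP-triggered function looks something like this:

  # Deploy an HTTP-triggered function from the current directory, which would
  # contain, for example, an index.js exporting a helloWorld handler.
  gcloud functions deploy greetings \
      --runtime=nodejs8 \
      --trigger-http \
      --entry-point=helloWorld

  # Background triggers are also available, for example on a Cloud Storage bucket:
  # gcloud functions deploy on-upload --runtime=nodejs8 \
  #     --trigger-resource=my-bucket --trigger-event=google.storage.object.finalize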
But this is another one where I'm just going to go to the console and show it directly. So if we could switch back here. I'm having a little bit of networking trouble, so I'm going to blame the other demo on the network and just note that I'm on wireless here. All right, so we're going to go ahead and create a function. And what's cool is, Functions will automatically populate this stuff for me. So I can just call this greetings. And you'll see that it has a hello world message down there. I'll spell it correctly, that helps. I have a couple of different options. I'm going to make this trigger on HTTP. I'm going to call the function hello world. And what's cool is, I can experiment with this. I can-- no, I'm all right. Thank you, though. I'm covering up for my own faults, sorry. So I'm going to go ahead and create this. And an error occurred. Fantastic. Maybe I should have taken the hard wire, but too late now. So I already built one of these. I was prepared for this. So we'll see that I have a function here all ready to go. And let's see, if we trigger this, I can go ahead and run this function, which is the same exact function I just wrote there. And we'll see. Yeah, send something, would you? All right. So this takes about 30 seconds to deploy. And I can have functions that respond to anything that I need, based on just one function. So great, we're going to switch back to the live preso.

And we'll talk a little bit about Kubernetes Engine. So we talked about VMs, we talked about serverless, and now we're going to go with containers. How many people here have played with Docker? Good. All right, that's normal. How many people here have played with Kubernetes? How many people here have set up their own Kubernetes from scratch, the hard way? OK, less. All right, so here is what Kubernetes Engine does for you. If you are used to running Kubernetes, you may recognize these steps. You go to set up Kubernetes and you build yourself a cluster of machines. You want to make sure you have attached disks to the node machines, because they're going to need extra storage to run your containers and whatnot. And then you're going to go ahead and install Kubernetes and wire up all the connections to make sure you have a network that will allow all of them to talk. You want to make sure you have your nodes. You want to make sure you have your masters. You want to make sure they're all talking to one another. And then you're going to want to add DNS and service discovery. And then probably someone will say, how do we know stuff is running? So you've got to add logging and monitoring. And then you've got to make sure the OSs all stay up to date. Then you've got to make sure Kubernetes stays up to date. And that's just a lot of work, right? Like, it's a lot of work. I made it sound faster and simpler than it is. There's a whole doc, Kubernetes the Hard Way, that breaks people when they try to do it. Or, just saying, you could go to Kubernetes Engine, hit that button that says Create Cluster, and it does all of that for you. And we will manage all the updates for you. All you have to do is say update, and we'll update. And that's a much better experience than running Kubernetes the hard way. And so I just want to real quick show what Kubernetes Engine can do for you. So I'm going to switch to the laptop here, and let's give this another try. So I have a Kubernetes Engine-- sorry, I have a Kubernetes cluster already going.
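For a sense of what that Create Cluster button does from the command line, here is a minimal sketch of creating a managed cluster and pointing kubectl at it; the cluster name, zone, node count, and sample image are placeholders, not the demo's setup.

  # Create a managed Kubernetes cluster; GKE sets up masters, nodes, networking,
  # and logging/monitoring integration, and manages upgrades for you.
  gcloud container clusters create my-cluster \
      --zone=us-west2-a \
      --num-nodes=3

  # Fetch credentials so kubectl talks to the new cluster.
  gcloud container clusters get-credentials my-cluster --zone=us-west2-a

  # From here it's standard Kubernetes, for example:
  kubectl create deployment hello --image=gcr.io/google-samples/hello-app:1.0
  kubectl expose deployment hello --type=LoadBalancer --port=80 --target-port=8080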
And so the first thing I'm going to do here is launch a visualizer. And I'm going to do all this with Make, because I don't want to live type in front of anybody. So let's make sure this is all up and running. Visualizer. OK, we should see nothing here. Good. There's nothing. All right. What we're going to do is launch a Kubernetes system here. So, let's make launch. We should be able to see this live. No, make deploy. Sorry, make deploy. All right, stuff is happening. So if we go back to here, we should slowly see my containers and my services load up any moment now. Yeah, there we go. Good, good, good. All right, so we have containers. I have a little performance anxiety now. So we see my containers loading up. I've got Drupal 7. I've got Drupal 8. I've got WordPress. I've got a custom PHP front end. I've got a Node app. And all of this is running on the same hardware. Like, you would never run Drupal 7 and Drupal 8 on the same box, or run a version of Node on it, but that's the great thing about Kubernetes: you have this set of allocated resources that you can just keep adding containers onto until you fill it up. And then Kubernetes Engine will actually scale it for you. I have the databases all set up in the back end, and I have all of the front ends running and working. So that was a system that wasn't running at all two seconds ago, and now I'm starting to get IP addresses, I'm starting to get connectivity. I am not going to risk it. I'm going to say this works and this is great. So I'm going to switch back to presentation mode.

And then we're going to talk about something that got announced today, which is really exciting: Cloud Run. Now, Cloud Run sort of straddles these categories, because if I were to give it four words, it would be serverless, but with containers, which seems sort of oxymoronic. But basically, what you get with Cloud Run is what we call Knative, but managed-- Knative, managed. You give it any container and we will scale it down to zero or scale it up to however much you need, and we are able to scale that under load.

So I'm going to show a quick example of a Cloud Run app. Actually, I'm going to show two Cloud Run apps. So one of the things-- I'm going to switch back to my laptop here, and let me make sure that I have access. Let me talk about a little thing that I was trying to do. I have a tab-delimited file. It's not that exciting; it is all the sessions from Next. There's some extra proprietary data in there, so don't, like, read it or anything. Trust me, it's all the sessions from Next. And I just want to take it and display it on a screen. So I wrote a little Go app that does that. And let's see, we have that in Go-- sorry, in Run. So let me pull up Run here. Here we go. And let's see, here's my Go app. And you'll see that I have a URL for it. So let's go click on that. And we'll see the Next 2019 session list. Great. Right? Just a little bit of code. I'm not going to show it to you, but it's not that exciting; it's just string parsing code. But when I went and looked at this, we go down and then eventually, someplace, it just kind of falls down. Like, the string parsing in Go, I just could not get it to do what I needed to do. So I struggled with Go for a little while. Then I remembered I'm old. And I used to know a language that was really fantastic at parsing strings. Really fantastic at parsing strings. But no one would build a serverless runtime around Perl. Right?
That seems ridiculous. But I was like, maybe, maybe it would do better than Go at parsing a tab-delimited file. So we'll see here, I have the same app. You'll see it is Perl. You may not recognize it as Perl, because I did not write it to be as inscrutable as possible. But it works. So I'm going to go and deploy this code. There we go. We're going to deploy that again. All right, and this time it works. We're going to see how long it takes to push a container to Cloud Run. So there we are. And it should be done now. No? Not going to work. OK, good. All right, so we can go back to Cloud Run here. We'll go back and we'll see that I have a version of this in Perl. And I'm going to go ahead and go to this API. Oh, sorry. No, no, main.pl, because that's ugly. There we go. And there we are. We have the [INAUDIBLE] list up. But what happens if we scroll down? Oh, look, look at that. Perl did it perfectly. Perl did it fantastically.

Now, what I love about this is, one, I get to make fun of Go, which I love. I do love Go, but I do like making fun of Gophers. But the other thing here is that we all have this kind of code lying around that someone wrote 20 years ago or 10 years ago and is still being maintained, still needs to run, and we would like to get it off of our machines. I got Perl, however old Perl is now, running on Cloud Run and solving my problems, without having to do a lot of work to get Perl running on Cloud Run. So that's what I love. Anything you bring to Cloud Run will just run. And that's really fantastic and free. All right, great. So I am going to go back to my preso and go a little bit further.

And now we have a question to ask, which is, which do I choose, right? I've got containers, I've got VMs, and I've got serverless. Which one do I choose? And that is, like I said, an entire talk on its own. But let me try to give you the bullet points of it. If you've got a lot of stuff in VMs, and VMs are the gravity that is holding you to being on-prem, and you want to move to cloud, well then, I would just go with Compute Engine: just do migrations to VMs and you're fine. Especially if you have a large number of systems people; if you're in a shop that's more systems than dev, it totally makes sense to go to VMs and not switch over. If you have a lot of developers and you have a lot of greenfield projects, it's a lot easier to start greenfield projects on serverless than it is to start them on your own hardware. So that would lean you towards going serverless. If you have a good mix of both systems people and developers, and you're looking to get the most out of the hardware, to get the most out of your spend for your resources, containers and Kubernetes Engine are the way I would go. And one of the advantages of containerization is that preparing your apps to run on containers pretty much matches what you need to do to run them on serverless, except the whole Docker part. And especially with the advent of Cloud Run, running in containers means you can choose to run on Kubernetes, you can choose to run on Cloud Run, you can choose to run on Docker, you can choose to run on a whole bunch of different platforms. So when in doubt, I would say containers are probably the way you want to go for now. Serverless, I think, is eventually going to be what everyone wants to do. I think it is today what a lot of developers want to do, but there are things blocking people from going there. So, I think: a lot of VMs, stick with Compute Engine. If you have a lot of developers, go with serverless.
If you have a lot of systems people and a lot of developers, you can go with containers, with the added benefit that if you learn containers now, it will make future transitions much easier. That's a lot condensed into one slide.

All right, so I'm going to switch gears here and now talk about networking. One of the things that we are proudest of about Google Cloud and our infrastructure is that we have our own private network, so that when you are transmitting data between our services, it travels completely on our network before it ever touches the public internet. And we can route stuff all around the world. So if you are a user in Australia and you request a resource in North America, the request is going to go to a local point of presence in Australia for us, and then travel along our network to North America and back. And what this does is, you get shielded from a lot of the bursty nature of internet traffic, because our network is a little bit more consistent. And you get pretty good throughput through that backend network.

And I actually want to show a little demo of an app that we built to show this off. So if you could switch back to the laptop. OK, so what I'm going to do is, I have a head shot on my desktop that I took earlier. So there I am. There's me. We're going to send me around the world. So I'm going to send. We're going to prepare, we're connecting, and then we're going to be able to see this travel in real time across our network. Bam, we're done. So that transmission, going from Taiwan to Tokyo to Sydney to Oregon to Iowa to Virginia to London to Frankfurt to Sao Paulo, took 4.24 seconds on our network. And what's great about the setup that we have for this is, one, that's a fairly good time to send this message across. And we stamped it in all the places; we had a VM in each location that took the image in, stamped it, and then sent it on to the next spot. But what's really great about this is that I didn't have to set up any networking to make this happen. If you have your VMs in one of our projects, they can automatically talk to one another. So you don't have to go to different consoles or different parts of the world. You can just have all of your VMs in one place and wire them up to talk to one another just by adding them to the same project. If you want to play with this, we actually have a full version of it out in the dev lounge-- sorry, the dev zone. And you can get selfies that print out, and your picture has traveled around the world, and you can take that home as a souvenir. So I'm going to switch back to the preso. I'm going backwards. There we go.

All right, now, some basic things that people want from the network here. They want load balancers. We have load balancers. We divide them into HTTP(S)/SSL load balancers, which you would think of as more of an L7 load balancer, and network load balancers, which are what you think of as an L4 load balancer. You can also set up the load balancers to be internal only, so you don't have to expose your services to the world to get them load balanced. We have firewalls, pretty easy to set up. You set up a rule: open port 3306. You give that rule a name: MySQL. You apply that tag, that name, to a machine, and it'll open up that port on that machine. Relatively easy to do and pretty simple to go through. We also have Cloud Virtual Network.
So if you want to configure routes to handle communication between your Google Cloud instances and another location, you can do that. We have Cloud Interconnect: if you have enough traffic that you would like to directly peer with us, that's something that we can talk about. It is obviously something where there's a lot of upfront cost, but the advantage is, once you get that pipeline going, you can transmit large amounts of data for a better experience. VPN: we do provide VPN, but it's gateway-to-gateway only. This isn't for hooking up road warriors. This is basically used to connect you to another setup someplace, securely, with your own tunnel in place. CDN: we do have a CDN. I wouldn't say this is competitive with, say, Akamai. This is for when you just want to add some edge caching to your work. You can do that. You set it up through our load balancers. They'll use HTTP headers to determine whether or not stuff's cached. You can invalidate stuff. But if you need more advanced tools, you probably want to go with one of our partners instead of just Cloud CDN. Also, DNS. I love this one, because it irritates the lawyers whenever I give this talk, because it is the only product we have that has a 100% SLA. And every single time I give it, the lawyers say, you can't say that. And I was like, it says it right there on the web page. I'm saying it's the only product we have that has a 100% SLA. So you can run your DNS off of our network, and our name servers do have a 100% SLA. Which means that if you go with this and we have an outage, don't mention me in your lawsuit, I think, is what was coming from what I just said. Great.

All right, we're going to switch gears to storage. Storage we basically divide into structured or unstructured. Structured is things like application data and user data. And unstructured is things like files and blobs and pictures and other things. So we'll start with unstructured, and we'll start with Cloud Storage. Cloud Storage is object storing and serving. If you want to have a giant bucket where you put all your files and serve them up to the world, this is the way you would do it. Divide it roughly into three areas: public content, data to process, and long-term storage. So we look at the chart. Public data, we have at the bottom there, multi-regional storage. It is 99.95% available. It's geo-redundant. So this is for web content, stuff you want to push to your users as close as possible so that they get the fastest pull-down of the data that you can possibly give them. It is 2.6 cents per gigabyte per month to store it there. Now, that's for public-facing data. If you have large data jobs, if you've got log files or batch data that you want to run a lot of computation on, you don't necessarily need it close to your users. You need it close to the machines that are going to do that work. So you can go with regional, where the data will stay in one region, but that's OK, because you're going to set your VMs up there to churn through it there. Or let's say you just have a regional presence and you don't necessarily want to serve the world; you can just set up your data here. Now, it's important to note that just because it's regional doesn't mean it's only available in that region. It just means that it will be served faster in that region than somewhere else. It's still worldwide available if you wish it to be. And it's cheaper: $0.02 per gigabyte per month.
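As a small sketch of how those choices show up in practice (bucket names and locations below are placeholders), picking a storage class is just a flag when you create the bucket:

  # Multi-regional bucket for public, user-facing content.
  gsutil mb -c multi_regional -l US gs://my-public-assets/

  # Regional bucket colocated with the VMs that will process the data.
  gsutil mb -c regional -l us-central1 gs://my-batch-data/

  # Copy a file in.
  gsutil cp report.csv gs://my-batch-data/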
And so that's regional. Now we have two long-term storage implementations. One is what we call Nearline, which is 99% available. It's got a 30-day minimum, which means you can put data in and delete it in under 30 days, but we're going to charge you as if you left it there the entire time. You can absolutely delete it; that's totally fine. It's $0.01 per gigabyte per month. And unlike the other ones, there is a charge for pulling data out. So if you envision needing this data multiple times, what you probably want to do is pull it out and leave it someplace, instead of pulling it out repeatedly. It's for things like backup, or maybe long-tail multimedia that's not getting a lot of use now: just leave it there and have it be stored more cheaply. The final one is Coldline, which is our cheapest. It's 99% available, just like Nearline. It's got a 90-day minimum storage. Again, you can pull it out earlier, but we'll just charge you. And this is for things like disaster recovery or compliance archives, things that you really hope you never have to pull back out, because if you do, you probably have a resume open in another window. Disaster recovery and compliance archives: two things that are not happy-making. But if you have to, at least it's the cheapest one to store, and you will not really complain about the $0.05 per gigabyte to pull it out, because you've got bigger problems. But the incentive here is to try to push you towards using it the way we envisioned. This is not for pulling stuff in and out very quickly. This is for disaster recovery and compliance.

All right, we also have another thing called Cloud Filestore, which I'm not really going to go too much into, because it's basically just a managed NFS server. If you need a file system, if you need multiple machines to look at the same file system, there you go. We have this product. If you need an NFS server, this is what you need. Great.

So now I'm going to talk about SQL and NoSQL. Or actually, I'm not going to talk about SQL versus NoSQL, because I don't want any part of that argument. I have my preferences. You have your preferences. Let's all just agree to be friends and not get into a "NoSQL is web scale" discussion. So we'll start with NoSQL. We have two products here: Cloud Datastore, which is a horizontally scalable document DB, and Cloud Firestore, which is a strongly-consistent serverless document DB. And you might, one, say, I don't think the hyphen makes that four words, and I agree. The other thing you might say is, these two things, if I look at them, they're really similar. They're both document DBs; one is billed as horizontally scalable, the other as strongly consistent. So what is the difference between these two things? Basically, Datastore is a little bit more capacity, a little bit more scale, a little bit more robust. Firestore is based off of Datastore, but with the Firebase kind of APIs on the front end, which means, one, there's really great connectivity to local clients. The SDK for writing to mobile and web is, I think, really, really fantastic and easy to use. And it has real-time capabilities. In fact, that network demo that I showed, where you're seeing it in real time hopping from place to place, that was powered by Firestore, because I needed that for the real-time aspect of the experience. OK, so then we have Cloud SQL: managed MySQL and PostgreSQL. That's what it is. It just manages these things for you.
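To give a feel for what "managed" means here, standing one up is roughly the following; the instance name, tier, region, and password are just placeholders for illustration.

  # Create a managed MySQL instance; backups and patching are handled by the
  # service rather than by you.
  gcloud sql instances create my-db \
      --database-version=MYSQL_5_7 \
      --tier=db-n1-standard-1 \
      --region=us-central1

  # Set the root password and connect.
  gcloud sql users set-password root --host=% --instance=my-db --password=change-me
  gcloud sql connect my-db --user=root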
You don't have to worry about maintenance. You can still do all the DBA tasks you might need to do, but you don't really have to do systems work to keep that machine running. So that's Cloud SQL. And finally we have Cloud Spanner, which is a horizontally scalable relational database. It basically combines some of the best features of NoSQL, which is the fact that you can shard it, spread it across a whole bunch of different instances, and have that drive better performance and better scale. But it comes at a cost: it's more of a BMW than a Volkswagen, if you know what I'm saying. So if you go to experiment with it, please watch the pricing meter. It's a great tool when you have that level of scale and that level of client that you need to give access to, but it is a pretty robust tool.

So how do I make the decision of which one to use among all of these? Well, first, it's easy to split up. If I've got unstructured stuff, I want to use one of the two on the left. If I have structured stuff, I want to use one of the ones on the right. If I have pictures or documents or other things, how do I choose between Cloud Storage and Cloud Filestore? This is really easy. Do you need a file system that you attach to a VM? Then you probably want Filestore. If not, use Cloud Storage. Also, if you want to share it with the internet, Cloud Storage is one click to do, whereas with Cloud Filestore, you're dependent on the VMs that you run to run web server software in order to serve up anything that's on there. So if you want to share it, Cloud Storage is probably the way to go.

All right, so, structured data. Let's say user data, or financial transaction data, or a stack of paper, which is, I don't know what I was going for with that. But you've got structured data. If you're going with NoSQL and you are talking directly between your database and web clients, I would highly recommend Cloud Firestore. If you're doing a lot of server-to-server communication, Cloud Datastore is probably the way you want to go. And then between Cloud SQL and Cloud Spanner: do you have a crap ton of users all around the world and high needs for availability? Well then, maybe Cloud Spanner is the right call for you. But you'd really have to test it out and see where traditional SQL starts falling down, and when Cloud Spanner starts to become worth it to you to get the kind of performance that you can get out of it.

So I'm going to switch to security, the last part of infrastructure. One of the first things people ask is, how do I let people in to the resources they want? And you use Cloud IAM to do that. It's pretty basic. You've got a list of users. You can add permissions and roles to them. And that's how you do it. The permissions and roles definitely vary from product to product, and they are done on a product-by-product basis. So different ones will have different abilities to do things. For example, one of the things that you can do is create custom roles, where you take a whole bunch of permissions and combine them into a new custom role. But that's only available for certain products and certain combinations, so keep that in mind. We also have Cloud Key Management Service. So if you want to host your own keys, you can. Basically, everything on our side, in terms of data, is encrypted by default. The only question is not whether or not it's encrypted, it's whether you manage the keys or we do. We'll do it by default.
We're happy to do it for you. But we also understand trust-but-verify. You're perfectly entitled to host your own keys and manage your own keys if that's what you want to do.

All right, so that's all infrastructure, and that's all the stuff that you absolutely have to have to move to the cloud. Now I'm going to talk about some of the things that we can add to the picture, because, again, we're Google, and we have some experience with some of these things. Not intelligence, and not me, but that's what we call it. So we'll start with data analytics. One of the first things I'll show off is BigQuery. BigQuery is this amazing tool that allows you to do very fast lookups over huge, huge sets of data. And in fact, I'm going to do one now. I'm going to brave the demo gods once again. Load. All right, so I'm going to open this query. Now let's see, I'm going to show options, yeah, that's fine, good. I'm going to run this query. So what am I running? I have all of Wikipedia's logs loaded up, so I can see the requests for everything that's come in. And I can-- oh no, it's not using cache. So this is going to take a while. I'm just going to explain what I was going to do. Basically, long story short, I can, in a relatively short period of time, look through over a trillion records in BigQuery and come up with the answer of what the top 100 most searched things on Wikipedia were for 2013 and 2014. I do know this is going to take longer than most people's attention spans, because I had hoped it would remain cached, but it didn't. So, long story short, this allows you to search through trillions of records in just a few minutes, as opposed to maybe spinning up your own data job and running it for hours or whatnot. But I am going to take the better part of valor and switch back to my preso, because I have one more demo to give, and that one I'm happy about.

So, other data tools that we have. Pub/Sub. Pub/Sub is our messaging bus. It's a publisher-subscriber model. You can do many-to-many communication. Basically, we can scale to hundreds of millions of messages per second. We also have Dataflow. Dataflow is a managed service for data analysis; basically, it's Apache Beam, but managed by us. Also managed by us is Dataproc, which is managed Spark and Hadoop jobs, but with autoscaling. And it's important to note that Cloud Pub/Sub and Cloud Storage form sort of a bus: if you need to talk between any of our services, it's probably going to be one of these two that you use to do it, with Cloud Storage being for unstructured data, like just a file, whereas Cloud Pub/Sub will do messages for you.

All right, we're going to switch to machine learning and artificial intelligence. Typically we have three ways of using ML on GCP. The first is to use our models. The second is to create a model. And the third is to extend and go further with models. When we talk about using models, we have some of the models that we use the most already exposed in the form of these ML APIs-- the Vision API, Natural Language API, Speech API, Translation API; there are a few more. Basically, what they allow you to do is things like detecting faces. We have a Vision API that will allow you to detect faces in your apps-- I'm sorry, in your pictures, because that's a very common use case. Same thing with logo detection, or language detection, translation, all of that stuff.
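As an illustration of what calling one of these pretrained models looks like (the bucket path is a placeholder, and this is a generic sketch rather than anything from the talk), the Vision API is a single REST call:

  # Ask the Vision API for faces and labels in an image stored in Cloud Storage.
  curl -s -X POST \
      -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      -H "Content-Type: application/json" \
      https://vision.googleapis.com/v1/images:annotate \
      -d '{
        "requests": [{
          "image": {"source": {"imageUri": "gs://my-bucket/photo.jpg"}},
          "features": [{"type": "FACE_DETECTION"}, {"type": "LABEL_DETECTION"}]
        }]
      }'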
Now, typically what happens is, someone uses this and says, this is great, and it does this thing, but I really wish it could do a little bit more. Case in point: someone came to me and said they wanted the Vision API, but they wanted it trained so it could detect whether or not someone's face was covered, because they wrote security cam software and they wanted to be able to detect someone masked coming into a location, because that's usually bad, right? We all agree. Security, someone coming in in a mask, they're not there for fun-- or your fun, I guess. So we also have this thing called AutoML, which takes some of our APIs and allows you to train your own models, but within narrow constraints.

And so I'm actually going to show one. It is a demo near and dear to my heart called Pizza Authenticator. So I'm going to switch over to the laptop. We're going to refresh. And I'm going to explain. So, I was trying to teach myself ML, and I'm not from California. I'm from Philadelphia. And whenever I'm here-- I live here now-- I really don't enjoy pizza here. And if you're from California, you don't understand why. And if you're not from California, you're nodding your head. You understand. So I wanted to see: could I write something that would detect whether or not pizza was authentic? Like, if it said it was New York style, it is New York style pizza. If it's Chicago style, it's Chicago style. So we're going to see here on my desktop, I'm going to pull up a picture. And here we go. This pizza comes from my local pizza place that is very authentic. Like, I think it's very good pizza for this area. So I'm going to go ahead and upload a picture of it. There we go. And we have an AutoML model that's going to go look at it. First it's determining whether or not it's pizza at all. And then after it figures out that it is, in fact, pizza, it should give me a score. And it says, wow, that is 95% New York style pizza. That is good pizza, at least according to me. I was able to train this model without any ML knowledge. I was able to just take pictures, classify them, and put them in. And it wasn't that tremendously hard. And now I have my own AutoML model that can determine whether or not I'm going to like a pizza. If you want to play with this, it's also in the Developer Zone. You can go, and at various times that we're not announcing, we're bringing in actual pizza. And when there's not actual pizza, you can use it with pictures of pizza, which, we know where they came from. But you can play with it and see how this works. It's also a GitHub project, so you can go and download it and build your own model and play. If you want to rate other regional foods, you can do that. It's cool. I'm going to switch back to the preso and close it out.

So if you want to go further and not just train a narrowly constrained model, you probably want to use TensorFlow, and Cloud ML Engine is probably the way to do that on our platform. It's scalable, managed TensorFlow, where we will make recommendations on how to run those jobs.

All right, so we're at the conclusions, and I'm close on time, but I'm there. There are a ton of GCP products, and we didn't cover them all here. Of these, we only showed these products. We didn't show everything. And here, we only showed these. And here, we only showed these. I didn't get into all of it.
But my hope is that now you have a better idea of where the tools are and what kind of problems you can solve with them, and that when you want to take it to another level, you can get to the resources that you need. So with that, I'll say thank you very much for coming. As for feedback, you probably got email spamming you to say, please rate your sessions. I apologize, but if you would, that would be great. And I'm going to say thank you. I hope you enjoy the rest of Next, and have a good day. I'll stick around for questions, but I'm at the end of my time. [APPLAUSE]
Info
Channel: Google Cloud Tech
Views: 44,101
Rating: 4.8934278 out of 5
Keywords: type: Conference Talk (Full production); purpose: Educate; pr_pr: Google Cloud Next
Id: vmOMataJZWw
Length: 45min 38sec (2738 seconds)
Published: Tue Apr 09 2019