So this was originally designed
as a level-400 course on containers. But before I begin, I would like
to get a feeling from the room. How many of you are familiar
with containers? Good, really good. How many have run
a docker run command? The number becomes smaller. And docker build? Okay. [LAUGH] So, given that, I probably will stay somewhere
in the middle, so, let's see. Are we going again? Feel free to ask
questions in the middle. I will be happy to
answer the questions. I have a ten minute video at the end
so if we get too many questions, we can skip that video and
you guys can see it later on. So let me introduce myself. So I am Humayun. I have been at Microsoft for
about ten years now. First five years in Skype or
Office Communicator and the last two, three years in Azure. And for me, coming back to
Chicago is a good feeling. Because I actually went to grad
school just two hours south of here at Purdue. Any Boilermakers here? I got one. [LAUGH] Okay, great, great, great. So roughly 12, 13 years ago, I landed at the Chicago airport for the very first time. I went to grad school here,
did my master's. And then since then I
went to Seattle and I've never been back, so
it's good to be back. But unfortunately, the weather is
still the Seattle weather here, so it's okay. I was calling my friends here,
how heavy a jacket should I bring? No, you don't need a heavy jacket,
so anyways. Okay, so let's start. So, I will go over first
an Introduction to Containers. Which I guess most of you
are well familiar with. But soon after that, we will dig deeper into why
should we use containers? How are they used? And then I will go inside
the container orchestration engines, specifically DC/OS and
Core [INAUDIBLE]. So just to give you an overview of
what are the offerings we have at Azure for containers and
what we can use for it. One important thing you have probably heard from Microsoft over the last three, four years: everything we are doing in
containers is all open source. Which means, if you want, you can go ahead, take the code,
and manipulate it as you want, the container code and
then deploy it on a VM. Azure is just providing a VM and an ARM template on top of it with which you can deploy containers and
reuse it for your purpose. That said, we don't think most people will want to change the container code itself. Most will just use the simple commands and get the benefits of containers, rather than trying to change the container internals, for example to change the allocation somehow. I don't think that's the right
way of thinking about it. So we'll go a little bit into that. But again,
if you have anything to contribute, please contribute in
the open source community. And we take the code from GitHub and other places just like
any other open source. So you are free to look at the code,
make suggestions to modify it, modify it and
then we'll take it from there. So Microsoft has a great partnership
with Docker on this front. And we have been in continuous
engagement with them on this front. So whatever innovation
they are doing, it gets into the Microsoft Azure cloud. And we start deploying it for everyone's benefit. So what are Docker and containers, and what is the difference between
the two, and why should we use them? First of all, think about virtualization: we virtualize the hardware, where on the same hardware you can
run multiple versions of Guest OSs. And then you can run your stuff or
whatever you want to run. You can run a host OS as Windows. And your guest OS might be Linux or it might be Windows or
it might be any other OS. But the problem in this model
was that you need to get a VM. You need to wait for
the guest OS to install, and the size of the VM gets huge, into the gigabytes range. So as a result, it's not fast. It is fast enough for some things, but not fast enough for the seconds-level upscaling you want. In order to do that, you need
to have a much faster solution. That's where Docker and
the containers are coming in. They have actually
virtualized the OS for you. So the OS is the same, and
all you do is you deploy. You come with the base OS image, and
you deploy your image on top of it, with all the required libraries and the required versions of
the libraries that you need. And the benefit of that is that
given that it's a whole package. You can give this package to
anybody in your test team, or in your operations team to
deploy the same package. Which includes the base OS image and then whatever accessories
are required. They get deployed as part of the
installation of that Docker package. So in a way, it's faster and more portable. I don't want to oversell it, but if you want, you can take the same Docker image and deploy it on a competing cloud as well. It will just work, because they also support Docker and containers. So you can deploy it onto
your on prem devices as well. And then once you are happy with
your on prem device you can deploy it in the cloud as well. You can deploy it onto
your dev box as well. You don't need to have big
servers running in a data center. As a developer,
you can take the Docker image, deploy it on your dev box and
play around with it. Once you're satisfied, push it to the
test team and it becomes portable. Obviously it improves
your dev cycle, improves your ops cycle because
everything is self-contained here. And given that it's faster,
it gives you more agility. By the way, so I'll first start with
the demo of Docker on a Linux box. So basically, let me switch over. Okay, great. So, this is your
standard Azure portal. I believe most of you
are familiar with it. What I will do is that
I will go ahead and create a VM,
a brand new VM in Linux. Has everybody done that? I can skip that section. So okay,
let's just go very quickly here. So what I'll do is that I will get
an Ubuntu image with Docker on it. So all I did was pressed
the [INAUDIBLE] button, searched for this one. And it will go into the marketplace,
give that to me. You can read through it and
basically you create, I'll go through
the classic model here. And then here you go in and
fill in your host name, whatever host name you
want to give to it. [INAUDIBLE], okay, got it. So any hostname,
Ubuntu One, any username. I'll use my regular one. Password protected. Select the VM you want to use. You can select whatever VM size,
network configuration, and which data center you want to use,
and so we'll just corner this guy. And create. And all it will do is that it
will create a VM in the cloud. And it will take about four, five minutes because
it's the VM creation. So what I'll do is that I already
have a fresh image available with me here. So this is just one of
the VMs I've installed. So right now, if I do a docker ps here, which is the first command, it basically tells you which containers are currently running on it. So what I'll do here is, I have a demo here. What it will do is that it will
sort of run a hello world app. And all it does, it prints
hello world on a web browser. Nothing fancy about it. But I just want to show it to you,
how quickly you can get the apps, and install it. So the very first thing
is a text browser, which I probably have
already installed it. So it's a text-based browser
that we will see later on. And this I think is
the important command here. So Docker run, all it does
is that it runs this image, hello world created by
a partner called tutum. And then it just basically
puts the output on port 80. So if you see here, look, all I
did was run it; I didn't have it installed. It pulled it from the Docker gallery, installed this app
locally on my Docker VM. And we are ready to [INAUDIBLE] run. So now it's already
running actually. And what I can do is
that if I do a docker ps, you probably will see this. I'm now running my particular
one image that I have. And if I go lynx localhost I will see this was my
app that I was running. All it is doing is printing it out. And now, if I want to run it
20 times, because I have 20 or such images now, I want to run
20 instances of it, all I do is run it again. And what you will notice here is
that, given that the image was already locally available, it didn't go and fetch the image. >> It is just running and [INAUDIBLE] I have 20
instances of it running. You can't do that with VMs; you need to have 20 VMs up and running. Look, the VM that I created earlier is probably still being provisioned. You can't get 20 VMs created
in such a short time and have those 20 instances running. And now all my instances are running
and if I do Docker ps here, it will show me all those 20 instances
that are currently running on it. So that's the agility, the fastness. And again,
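The demo commands can be sketched roughly like this. This is a hedged sketch, not a transcript of the exact session: it assumes Docker is installed and that the `tutum/hello-world` image mentioned in the talk is still published under that name, and it maps each extra instance to its own host port since they cannot all share port 80.

```shell
# Skip quietly on machines without Docker installed.
command -v docker >/dev/null 2>&1 || exit 0

# Pull (on first run) and serve the hello-world app on port 80.
docker run -d -p 80:80 tutum/hello-world

# List running containers -- the new instance appears here.
docker ps

# Scaling out is just more `docker run`s; give each instance its own host port.
for i in $(seq 1 20); do
  docker run -d -p $((8000 + i)):80 tutum/hello-world
done
docker ps                      # the extra instances start within seconds

# Stop everything when done, since containers keep running otherwise.
docker stop $(docker ps -q)
```

The second `docker run` loop is fast precisely because the image is already cached locally after the first run.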
all you need to do is publish your image to a Docker gallery, and then you can run it from there. Any questions? Okay. And then, obviously, you need to stop the containers as well; otherwise, they will keep on running forever. Okay, coming back to
the presentation now. Okay. >> [INAUDIBLE]
>> I will go into that, yeah. I will go into that, yeah. So let me go into that
example at the end. I think I'll have that example at
the end, where we will have multiple instances and multiple Docker images
running as part of one big service. So I'll show you that demo as well,
thanks. So very important, Docker Incorporated did
not invent containers. Containers were there, they just
make it easier for people to use it. I mean that is very important and I think people do remember that
the OS already had some support for it. But all Docker Inc did was
that it made it super easy for an app [INAUDIBLE] to use it. The unfortunate part is that we
had it for such a long time but nobody was using it and
any app can be containerized or put into a container and
deployed on the cloud. But note that
you don't need to break your app into microservices or anything. You can use whatever app you have. Put it in a Docker container and
run it. The benefit the Docker gives again
is that it is a full package with fast execution. But the important thing here is
that once you start having
it becomes very hard to manage. And that's where all these orchestration engine that
are being developed come in handy. And that's where we'll go
a little bit into that as well. So again as I mentioned earlier, the
right-hand side is your typical VM, where you have a server followed
by a host OS and a hypervisor. On top of the hypervisor, you have the guest OS, with your app and its libraries running on it. So imagine you need to go through a new VM installation: you are getting everything, from the guest OS binaries to the app, and the size goes into the gigabytes range. With Docker, on the other hand, what you have
done is that you're sharing the OS. You're virtualizing the OS. That means that if
your image is corrupt, it may cause a blue screen as well. Yes, that is possible. That is surely possible because
this is what can happen. But it gives you
the flexibility and
quickly, run it, and move on. So let me go into some code. The primitives of
Docker are very simple. You have a Dockerfile,
which contains basically, three parameters. There are additional parameters as
well that you can use to control other things, but
the three main parameters are FROM, COPY, and CMD. So you have a file which contains the FROM information, which is your base OS image: you need a Windows image, or an Ubuntu image, or whatever your base OS image is. COPY contains the information
where your source is located and CMD is whatever command
you need to run. You need to run a PowerShell command
or whatever command you need to run. And again,
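A minimal sketch of such a Dockerfile follows; the base image name and the paths are placeholders, not taken from the talk.

```dockerfile
# FROM: the base OS image (Windows, Ubuntu, or whatever you need)
FROM ubuntu:16.04

# COPY: where your source/binaries are located
COPY app /app

# CMD: the command to run when the container starts
CMD ["/app/start.sh"]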
how does it improve my dev? So your dev,
he's compiled some code. Now he will build a Docker image
by doing a Docker build with a tag on it. And then he can run that
image locally on his disk. And once he's satisfied
that his image is good, he pushes it to the Docker gallery. It might be the public
Docker gallery or your own personalized
Docker gallery. All of those are possible. Yep? >> [INAUDIBLE]
>> As long as your app is
supportive of that. Yes, yes. And on the test side again, so
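That dev inner loop can be sketched like this. The image name `myapp` and the registry name are placeholders, and the sketch assumes Docker is installed and a Dockerfile sits in the current directory.

```shell
# Skip quietly when Docker or a Dockerfile is not available.
command -v docker >/dev/null 2>&1 || exit 0
[ -f Dockerfile ] || exit 0

# Build the image and give it a tag.
docker build -t myapp:1.0 .

# Run it locally to check the bits before sharing them.
docker run -d myapp:1.0

# After `docker login`, publish to a public or private gallery, e.g.:
# docker push myregistry/myapp:1.0
```

Pushing the tagged image is what makes the same bits available to the test team and, later, to operations.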
now coming back: dev is done. He pushed his bits out. Now the test team is gonna
pick up those bits. And he will then do his testing. And once he's satisfied
with his testing, he will push it to a prod store. And then the ops guys will take it
from the prod store and deploy it. So again,
dev is doing his testing on his box. He's like, okay, I am happy with
whatever the code I have written, now let me go to the next steps and
deploy it in the Cloud as well. So, he deployed it in the Cloud as
well, and now, from your DevBox, it goes to one of his private VM,
and somebody's testing it out. And now we want to scale
it up to three, four, five VMs, and
it just goes on, and on. Now at this time,
it becomes very hard to manage, and that's where all these other
partners like DC/OS, Swarm, and Cuberators are coming in handy. These partners, what they do,
is they provide the orchestration. You as a developer, or as an ops person, don't need to worry too much about which particular VM this
software is gonna be installed. It will be taken care of by these
different orchestration engines. And you just provide how
many instances you need and then it will distribute those
instances as it needs to be. If you are developing an app which
contains microservices, you can even go in and
say that this microservice should be deployed in five VMs. This should be deployed in three
VMs, wherever you want to go. And then if you need to scale up,
provide those parameters as well. And that's where one of the demo
I will show, we scale up very quickly and scale it down, and
you will see the benefits of it. So, again, your job is more about
general operations of the system and developing your software. The deployment is taken care
of by Docker and containers. Okay, so next I will go inside
the Azure Portal just to see how easy it is to deploy
an Azure Container Service. Okay, so let's go. So this is again my dashboard, and all I do here is press New. I'm sorry, thanks. I will press escape, okay. Okay, so I went to my standard
Azure Portal there as well. And there I will say
Azure Container Service. Again, if you have time,
read through it. I've never read through this one,
I'm just saying. [LAUGH] So
you just go ahead and create. And whatever name you want to
give it, just say, Chicago. And then you need an SSH
public key as well. There are a lot of places to get it. I already have one
available with me. So I will use that one. Hopefully, this will work out. And I can create
a new resource group. So what is a resource group? Probably have seen
that in other demos. All it does is basically define the policies governing the VMs that will be created as
part of this container service. So you can just either
create a new one, say chicago, and you can pick a location there as well. And this is where you select
your orchestration engine: DC/OS, Kubernetes, or Swarm. Are you guys familiar with these
three orchestration engines? These are basically three
different orchestrators, which are currently developed and readily available. If you already have a Docker deployment in your enterprise and you're already using one of these three, you can use the same one, because you are more familiar with it. Otherwise, there
are some guidelines. I would call them guidelines because
they keep on changing depending on what each one of them is inventing. There are certain guidelines that
are available to the end user. They say, okay, if you have a workload which is more database-heavy, use this particular orchestrator configuration. And those keep on changing
as we move along, as people are making development. For example, Swarm was
developed by Docker Inc, so
I don't know. Maybe it is because it's developed
by the Docker guys there. DC/OS is more suitable for
bigger database workloads. So it depends on
what your needs are. If you're already using one of them,
you can use your existing engine. I'll just pick DC/OS for now. And this one, again,
you specify the agent count you need, basically how many VMs you need. And then whatever DNS prefix you want to use, I'll just put my name there. You can go up to 1800 or even more than that right now, so you can have a Docker image with 100 instances running on it. Obviously, as you increase the count here, if you say three, it will go to the three VMs that
you will be able to see whatever the cost is. And then you can see the master count, which basically acts as the tie-breaker for the cluster. And just press OK here and it just
gives you the summary at the end and then there's this download
template as well. This is very important in a way that
if you want to repeat this process multiple times, you do not want to go through the web interface every time. So you download the template and
Interface or PowerShell or other tools to run it
through the command line. If I have to ask you
to do it one time, maybe you will go to the webpage. Ten times, maybe, but if you have to
do it 100 times or 1000 times a day, you don't want to go
through the web UI. You want to have
an automation in place. And if I press OK, it will probably
give me an option to buy which I will avoid for now, and
I already have one created. And I'll just go to that here. This is what the container
will look like. And inside this, you can go in and
play around with it. The more important thing is that
remember the agent count that [INAUDIBLE] set to one. You can increase that to any number of VMs, for example. If in the same Azure Container Service you find out that now I need 5, I need 20, you can go ahead and update it in the same portal. You don't need to recreate it. And you can play around with these various settings, yeah? >> [INAUDIBLE]. >> No, it's the number of agents
that you want to run within a container. >> Okay so then [INAUDIBLE] my
question is what container? >> It's up to you. You can say that I want all these
microservices per container or I want these five microservices
to go into one container. It's up to you, whatever you are currently doing. >> You don't assign to an agent? >> You don't assign to an agent, no. Okay, so let me start over here. Okay. Okay, so,
What are we providing as Microsoft? We are providing you the lower
layers of infrastructure and orchestrator. As you saw in the demo earlier,
if you are already using an orchestrator that you
are familiar with, you can use that. Or if you're starting
fresh on Containers, pick the right orchestrator. This is a big decision
point you need to make, depending on your workloads and other factors. Again, if you have questions, the tech community forum that has been shared with you in multiple other presentations as well is a good forum to ask these questions. You say, okay, this is what my workload currently looks like, and somebody from Microsoft or from the community will go in and help you figure out the right orchestration engine you need to use. Microsoft is providing an
infrastructure, which basically means VMs and storage. On top of that basic infrastructure, we are providing you the ARM template, which you saw I can download. It basically configures the VMs for us and which orchestrator engine to use on top of it. And then you as
an enterprise developer, or an individual developer, come in with your application and deploy it, with Docker, on top. And then there are certain third parties like Mesosphere which provide you a great UI to deploy it, and it is integrated well with Azure Container Service. I'll show a demo of
Mesosphere later on, yeah? >> [INAUDIBLE]
>> Mm-hm >> [INAUDIBLE] >> So you can even take your existing monolithic app and put it
in a container and it will just run. You don't need to break
your app into microservices. Whether it's a Hello World app or whatever app you currently have, you can just take it as it is in a Docker container and deploy it. Obviously, you test your app out
first whether it works with that base image, whatever the base
image you are going to use. And just put that whole
package in and deploy it. You don't need to break your
app into smaller apps, and you don't need to change anything; it should just work. That's the ideal situation. And again, Azure Container Service,
all it is doing, the container service that we just saw, is basically connecting to all these three different orchestrators and providing you an interface, the same interface no matter which orchestrator you are using, to play around with your settings. I think this is an important slide; the point I want to make is that here are the multiple VMs that get deployed.
that I showed, where I did this Docker Hello World app: installing that hello app and running all these commands like docker run. Those are coming in through this interface. So all the control functions of your app, wherever you're using docker run or other commands, are coming through this interface. While your end users,
who are actually consuming that app, they will be coming through the
middle interface towards the agent. And then you might have some
background tasks which are running which definitely the outside
user don't have access to. For example, consider that you're doing a web-based app. So all your web-based users
are coming from this interface and backend tasks are running
in backend agents. So again, whenever you're using a docker run command or any docker command, it comes in through this private interface, the first one. And then your end users, because they
will not be doing those commands. They will be accessing through a web
page or through some other apps. They will be coming through
the other interface. And when we say agents and
other things, those are part of the agents and
the master is this one. Let me also go to the command line interface of Azure. Was it shown to you in any other talk, the Azure Command Line Interface? No? Then I will probably go into that as well. So this is your standard [INAUDIBLE]. Sorry if it's too small, but this is your standard prompt. And there is an Azure CLI package
which is available on the free downloads. You just download that package and
then all of the commands that you saw on the webpage, you can just
control it through here as well. So for example,
azure acs --help. So this is basically telling you how you can create an ACS configuration, that's your container service configuration. And then you need some JSON file, which is basically the template you can download. Other templates are available
in GitHub as well. The important thing here again
is that all these things that we are doing on Docker are available as open source. So you can go ahead, look at the source code, and if you want to change anything, you can change it. You can change your JSON parameters,
adjust it accordingly. Add your own JSON
parameter if you want to. But then you need to deploy that
image, all these images here. So I will skip the logging-in part because of the time, I think. These parameters are available at this website; let me show you the gist here. Yeah, so just download this JSON and then make whatever adjustments you need.
whatever the VM size. Remember we saw it through the UI. You can select your VM size,
the username that you need to have. What is the orchestration
type you need to use? All these things that we've
configured through the web interface, you can
configure in this JSON. And then blast it off to set up as many container instances as you want, okay. So coming back to the question I
think which was earlier raised, do I need to change my existing app? Mm, not really. You can use your existing app,
whatever you have, and deploy it as it is. For example, I have this example
there of an order processing pipeline and a finance pipeline. So for example, somebody goes in and picks up an order
using his forklift. He puts it in a stock item and
then ships it out or puts it on a shelf. But as soon as the stock item is
moved from one point to another point, the finance guys
get a report out and then they can do real-time analysis on it. So here, potentially, all of this can
be one monolithic app, where the app is waiting for a picker to pick up the package and then move it, and once it's moved, the finance report gets updated, and all these things. But as you can clearly see, there are obviously two
separate sections here. And those are great
boundaries in your app. Where you can say,
order processing is one app, and my finance view is another app. Because finance view does not
need to get updated real time, you can update it once a day,
or twice a day, or five times a day, while your order processing might be real time. As soon as an order is
picked up from the shelf, you want your inventory
to be updated. You can go even
further here as well. And I'll go into those
details as well. So there's a great
book by Sam Newman. It's about a 300 plus page
book about microservices. Anybody read that book? It's a good book to read. It basically tells you
how to do a microservice. And again, I don't like the word
microservice that much, because sometimes people
go into this notion that, in order to go to the Cloud,
I must do microservices. No, that's not required. It is good if you do, if you break down your service
into smaller services. But breaking it down too much
will make it too chatty and then it will defeat the purpose. So you need to have some loose
coupling between services, and all the pieces that are doing the same function
should remain together. And I would rather say that if
you are starting your journey in the Cloud, start with a monolithic
app at the beginning. Don't spend too much time in
re-architecting right now. Start with the current one and
then look at the profile. Okay, do I need to
increase my input, because there are a lot of
people making requests? Or do I need to increase
my database size? Then you break down later on. It's easier to break down later on, rather than starting with
a broken down model and then trying to say, okay, now these
three components should be together. It becomes much harder that way. So to start with, if you have a monolithic app, go ahead, deploy it using containers.
>> Yeah. >> [INAUDIBLE]
>> Yeah, at the very least, you can ensure that you
are scaling up very quickly. That's the very least advantage. And your dev test cycles
have improved a lot, that's the very least advantage
you are getting with Docker. And then next,
you can do autoscaling later on. So start with that environment
where you need to scale up. Obviously, telemetry
is very important. I mean, make sure that you
fill in your variables and get enough telemetry
from your service. >> [INAUDIBLE]
>> Yes. >> [INAUDIBLE]
>> Yes, that's where I said it's becoming more useful where what
you can do is that your devs can put the Docker image out there for
your test to use it. And your test can use the same
Docker image and send it out. So it's much easier
in that pipeline, rather than saying, let me first break it down into microservices and only after that will I start my journey. You can start your
journey right now. So going back to
the previous example. So the finance and other processing, we can separate it out as sort
of two, I would not call them microservices, let's call them
two smaller services, right? Because finance again, as I've said,
has its own requirements. It does not need to
get updated real time. You might update it once or
three times or five times a day, while your
ordering is done in real time. You can break it down even further
to say that the picker and the forklift are done
at a human pace, where somebody physically need
to go and pick up that box, or the stock from using the forklift,
so you can split it up that way. Or you can go break it down further,
but as you can see, as you break it down further it becomes more and
more chatty between these two because now the picker will wait for
the forklift to be there at the end. And only the ordering will be done
once the picker has picked it up. All of these charts will come
into your service, which may or may not be useful. These charts only help if you
need to scale each of these items individually. If you don't think you need to scale
each of them individually, put them as one service, and then put them
all in one Docker image as well. And again, the same book which
I mentioned says that you should start from, okay, no, I'm going
to stick with monolithic design. Do I need to go to microservices or
not? Let's think in that mode. And then break it down
from that point onward. I mean coming from Skype for
the last five, six years, that's my experience. Initially, when Skype was on-prem, it was all one monolith, I mean Office Communicator; it was all in one server. And then eventually, when it became Skype Online, we started to break it down into multiple services based on the needs. For example, Address Book and
your meeting content are together, because they are generally
correlated, but sometimes they are not. So, it depends on your needs, your
apps, where you need to scale more. For example,
in the case of Skype, I recall, most of the people are logging on,
so the logon system needs to scale up a lot more than your
audio video system, for example. Because millions of people
are logged on into Skype but very few people are making calls. So that's a good service
breakdown of Skype. But on the other hand in
the on-prem side, people say no, I want to have one server which manages everything. So you see the difference coming in. So always start with your existing app,
put it in a container, see if it helps, and then start to
see, okay, where is the point where I need to breakdown my
app into other services? Okay, so this is a interesting
demo of the whole image that we discussed, so
I will go into that, okay? So, this is an app, which is running
on top of DC/OS, in Azure, on a Linux VM, and all the boxes that
you are seeing, analyzer, autoscale, batch, these are all separate
containers, which are running. So, these are all, six,
seven, different containers, which are running. And then, the very first container,
analyzer, is the one which, to think of it as order processing
engine where you receive some order. And then you analyze those orders,
and then you serve those orders, and then you are done. Very simple input and output. Analyzer is the one who's actually
analyzing the orders that come in. Autoscale is a container service; it is looking at every service to find out whether it needs to scale up or not. So if somebody is running in the red, it picks up that, okay, I need to scale up this particular container. Batch is a low priority process
which is running, which is almost like, if I have nothing else to do,
I will do some batch processing, which probably can be used later
on for your reporting purposes. Producer is the one which basically
is getting all the input, so it's like the input queue. Rest/query and web is basically, the web is the service which is
actually showing you the web UI. And the rest/query is querying all these other services to get to that UI.
if you see right now, so you have one analyzer running. One autoscale instance running. You have five or six instances
of batch processes running, and there's no producer right now. And the other view is
basically showing you how much CPU we are using. So you see 80% of the CPU
is already used, and this is being used by
the batch process, which is the low priority
service that we had running. I doubt anybody who's running a VM, who wants to scale up
will run it at 80% CPU. Typically people run at what,
20, 30, if you're very brave, 50%; that's where people generally put their hands up like, okay, 50%, I'm done. And then you start thinking about scaling. But here, I don't need to
worry about scaling up. I am using 80% of the CPU,
for some background activity, but still it is 80% being used. And I'm not worried about
scaling up right now. Because my scale up with Docker
will be instant, really fast. I don't need to spend a lot of money upfront to purchase VMs and have them running just to keep the capacity ready. I want you to buy VMs from Azure,
but I want you to be efficient as
well in running those VMs. And this is where it comes in handy. Say you have purchased three VMs, and you have all your services running on them as Docker services or container services. What happens is that you keep on running a batch process, a low priority process that's doing whatever background activity you need. But as soon as a producer comes in, and I'll show it to you
in the demo right now. We'll scale up within seconds. So you don't need to worry about,
hey, look, we need to reserve a lot more VM capacity. You don't need to do that anymore. >> So what you're saying
is you're autoscaling. >> Yes, I'll show you that. That's what the autoscale process
we have running is doing. It is just another
service which is running. All it is doing is looking at the
service health of everybody else. And once it finds out that it
needs more analyzer containers, it will deploy those. So here is what we will do. This app currently is designed with an SLA of about 1.5 seconds. The time shown as the processing time is basically the time for the last job that it received, and the queue length is the current size of the queue. And what I'll do is that I'll
go inside the producer here. And I'll scale it up to about two, which roughly means 2000 requests
per second are coming in. So something happened in the world
and you're, like, it's Black Friday, I don't know. Something happened and you got a lot more requests coming in within an instant. And what you will see right now is that we are scaling up the producers, a lot more instances. This is nothing but generating load right now on this service. So if you see here,
so we started up. So now all of a sudden my
queue length went up to 60. And all of a sudden, you will see my analyzer is
automatically increasing. And my batch eventually
will start to decrease. If you see that,
my analyzer is currently at nine and I am still not back under my SLA. So this is just happening in real time. How long does it take to scale up? We'll be scaling up
within ten seconds or so. >> [INAUDIBLE]
>> No no, autoscale is a job which is running to find out how
much do I need to scale up, right? And the important thing here is
that you're not setting up new VMs. The VMs and you're not-
>> [INAUDIBLE] >> Yes. >> [INAUDIBLE]
>> Exactly. >> [INAUDIBLE]
>> Yeah. >> [INAUDIBLE]
containers [INAUDIBLE] >> Yeah, the auto scale is running multiple instances of
those on the fly, right? So if you see right now, probably we are going over 23
right now in the analyzer, and my batch is reduced to two, so auto
scale is nothing but a scheduler. It's just another service
that you are running. And all it is doing is that
it's trying to load balance it. Okay, do I need to
add more analyzers or do I need to add more
batch jobs at random? And now we are back to our SLA. About 1.5 seconds was our SLA. The queue length is still there. Right now,
once the queue length goes down, you will see that it will start
putting more in the batch job. But here I should have
shown you this one, but the CPU utilization remained roughly flat. I'm still using 75% or 80% of my CPU during that whole course of upscaling. I mean, you scaled up from, what, two instances to 25 instances in, what, less than a minute? And this would not have been
possible with VMs because now all of a sudden you get new
load, you need to scramble and get more VMs, more VMs, more VMs, and
you start paying for those VMs. You keep those VMs up and
running in case the load comes up. Here you don't have that problem. You just leave it,
your service will autoscale itself by using its autoscale service, which you can write on your own. You give your own parameters:
I want to scale up because my queue length is bigger than x or
my processing time is above x. It all depends on what parameters you set to scale up or scale down. >> [INAUDIBLE]
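A scale-up policy like the one just described, scale up when the queue length is bigger than x or the processing time is above the SLA, can be sketched as a small decision function. This is a minimal sketch with hypothetical names and thresholds, not the demo's actual autoscale code:

```python
def autoscale_decision(queue_length, processing_time_s, analyzers, batch,
                       max_queue=50, sla_s=1.5, total_slots=40):
    """Decide new (analyzer, batch) instance counts from current metrics.

    Scale the analyzers up aggressively when the queue or the processing
    time breaches the thresholds; scale down slowly, one instance at a
    time, so a returning load can still be absorbed.
    """
    if queue_length > max_queue or processing_time_s > sla_s:
        analyzers = min(total_slots, analyzers + 5)   # scale up fast
    elif queue_length == 0 and analyzers > 1:
        analyzers -= 1                                # scale down slowly
    # Low-priority batch soaks up whatever capacity the analyzers leave
    # free, which is what keeps the CPU around 80% the whole time.
    batch = max(0, total_slots - analyzers)
    return analyzers, batch
```

An autoscale container like the one in the demo would just run something like this in a loop every few seconds against the service health data, and ask the scheduler to converge on the returned counts.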
>> Yeah. >> [INAUDIBLE] company [INAUDIBLE]
>> Yes, exactly, exactly, I mean,
because as I said, in the other case, what you would have done is you probably would have purchased, say, seven VMs, or whatever, the 20 VMs that we are running. And you would not be running them at full utilization. You need to find something to run on
them because you're paying for them. Here you are not paying. You are only paying
when you are using it. You are only scaling up
when you are using it. >> [INAUDIBLE]
>> You don't know that, that's the beauty of it, right? >> But
how do you know how [INAUDIBLE]? >> No, so because you initially made
that purchase, saying I want this number of agents on my container service; it depends on how many agents you have picked. Okay, mm-hm. >> [INAUDIBLE]
>> Mm-hm. >> [INAUDIBLE] >> Can you repeat? Sorry, repeat please. >> The agent is [INAUDIBLE]
virtual machine [INAUDIBLE]
>> Yeah. >> In that, one agent [INAUDIBLE]
>> Mm-hm. >> So you had two agent-
>> Yeah. >> There might be two [INAUDIBLE], once the agent
[INAUDIBLE] VM agent here. >> It may go to both of them. It depends how you
configure your Docker. >> [INAUDIBLE]
>> Yeah. >> [INAUDIBLE]
>> Yeah, that's why, if you want high availability: first, the high availability will be at the VM level. And then the high availability would
be at the Docker level as well. Yes, it will go to both of them. >> [INAUDIBLE],
right? It [INAUDIBLE]
how many [INAUDIBLE]. >> Yes. >> [INAUDIBLE]. >> Yes, and
initially at installation time, you tell us how many VMs you need. Remember the UI, where you tell us, I will be creating this Docker cluster with five VMs. Okay, then five VMs it is. And you will only be charged
when you are using it. You're not charged when
you're not using it, right? Yeah? >> And it's possible to
auto scale actual VMs or just their [INAUDIBLE]. >> You can do actual VM autoscale
Azure but it takes time. That's where the problem comes
out is that if you say I want. I mean, that was the old
solution if I say it bluntly. Old is like what? Two years old. But that was old solution
where okay, I found that, I have to increase my VMs
because of Black Friday. So you probably will upfront
increase your VMs two or three days in advance. Here you don't need to do that. As soon as the spike comes in, the fact that a Docker container can come up very
those benefits here. >> I have another question. So the load balancer
service [INAUDIBLE]. Load balancer service [INAUDIBLE]. >> Yes. >> [INAUDIBLE] >> Yes. >> [INAUDIBLE]
>> Yes. Yes, you can either use Azure
Load Balancer or you can actually configure the Docker Load Balancer
as well if you want to. I would recommend not
going down that route, because it will add more complexity
especially if you are starting, but sure those options are available. >> [INAUDIBLE]
>> Yes. >> [INAUDIBLE]
>> The docker? >> [INAUDIBLE]
>> Yes, yep. Okay, now let me scale it down. So we had it all running,
everything was up; let me scale it down to, say, zero. And let's just go see the DC/OS view here as well, come on. It will slowly go down, one instance after every five seconds or so; the scaling down is done in a much slower fashion this way. So you will probably
see a dip in the CPU for
one of the instances. And then as the load goes away,
it will start scaling down. So let's see. Currently I have 34 or
35 analyzers running, and then we'll see when it
goes down eventually. We'll start seeing that impact soon. Remove the load completely and
then from 34 we are now down to 32. And then eventually it
will go down to one and zero after every five seconds or so. The scale down in this app
is done in a much slower manner just to ensure that we can
take the new incoming load as well. Yep? >> [INAUDIBLE] particle scaling [INAUDIBLE]
>> Yeah, but, so for example if you see on one VM, if you can run,
let's not call it VMs. On one OS instance, if you can run
20 Docker instances like the ones I've shown you, then you need to just get one agent at that time. >> [INAUDIBLE] >> Yeah, so I mean, typically what it is, is that you are
only paying when you are using it. So you might have those VMs but
you're not booting them or you're not using them right now. You're not charged for it. You are charged only
when you are using it. >> That's fine but [INAUDIBLE] business coming in [INAUDIBLE]. I have a [INAUDIBLE]. >> Yes, so autoscale only works with the maximum you have, right? >> The scale sets on autoscale, which are the actual VMs? >> Yes. >> If you apply [INAUDIBLE]
>> You can apply auto scale of the VM, but then
the VM bootstrapping time will come into play as well at that time, right? So you will definitely get some benefit from the VM autoscale, but then you need to wait for the VMs to be ready. With Docker, the scaling up time is in seconds, while VMs may take minutes. So one option can be that
you can have a multi-tier autoscale approach, where first you scale up with the Docker containers, while in parallel you spin up new VMs as well, so that they are ready for further scaling. >> You said [INAUDIBLE]
pay as you go. >> Yes. >> [INAUDIBLE]
>> Yes. >> [INAUDIBLE]
>> Exactly exactly. >> [INAUDIBLE]
>> Yeah. >> [INAUDIBLE]
>> Yes, yes, so this one is probably down to eight and will probably go down to zero now. Okay. Let's come back to the slides here. So the basic point of the previous
demo was to ensure that you understand that scaling
of a Docker is fast. It's seconds, not minutes. I mean nobody can do the VM
scaling in seconds right now, no matter whichever cloud you go to. With Docker,
the scaling up is in seconds. The scaling down is also
very quick as well. And you are using the CPUs or the VMs that you stood up,
you are using them efficiently. You are using 80%, not 20% that you would have
been doing with the VMs before. You can go up to 80% of the CPU, use
them for whatever you need to do, and don't worry about
the scale that much. Now let's go a little bit deeper, to the next level. So basically, Linux already had these capabilities: control groups, namespaces, layered filesystems, and other OS functionality. But if you wanted to work with anything in these areas, you needed to be a kernel programmer or a kernel developer. Well, as much as I would love to go down that route, that's not my job. My job is to use the OS, scale up my service, and those things. So Docker came up with a simpler model, where you don't need to go inside these. It provides you a simple REST interface and some open source tooling, to provide the virtualization of the OS, if you want to call it that. Windows also follows the same mechanism: the Windows compute services, which are part of Windows, are already there. And people from Windows worked with the Docker team to write this platform-specific code, and again, this is all open source; everybody can go ahead and look inside it. And then on top of it is the platform-dependent code. >> [INAUDIBLE] >> I did mention it, I think. >> [INAUDIBLE] In my
Windows application, can I get a container with Linux? >> If you have a Windows application. >> Can I get an internal Linux, or what? >> Yes, actually, containers are currently supported in Windows, I think; the Azure Windows container offering is still in preview mode, if I understand correctly. The Linux one is already available. So you can have Linux as your container host OS and then you can have Windows on top of it, or vice versa as well, yeah. One important thing: not everything here is a Docker. All these softwares or services, they primarily run in a Docker, or in a container, but
it's not required. Just to make sure we understand
the terminology very clearly here: yes, you can have a container, and all these things, they are all running in a container, but they didn't do anything specifically to run in a container. They are just using the container as a deployment block. If I make it simpler, a container is nothing but, like MSI files or other installers, just a way to deploy your service. The next ten minutes, I have a demo
to show you how you can add value, add services, and especially how you handle it when the load goes high. This is a real world example from the industry, which basically is showing you a proof of concept which was done in days, not weeks or months. So that's exactly the benefit
of container is that you can use this to scale up and
scale down your service. I will just show the demo first, and then we'll go through
the learnings in the next section. So I hope the audio works out. >> Yeah, so yesterday we
wanted to show this demo and we decided we'd throw it
in real quick this morning. So I'm excited to have up on stage
with me Claudio from Microsoft and Adam from Esri. Why don't you guys introduce yourselves real quick. >> My name is Claudio, I'm a program manager in the Azure IoT team. >> My name's Adam Mollenkopf, I work
at Esri and I'm responsible for the real time aspect
of our platform, as well as the big data aspect. >> Cool, so you know one of
the reasons I was really excited to work with these guys on
showing this demo is, it's really taking all the pieces
putting them together. And seeing really the power
of what you can do, when you take all
those pieces together. So Adam, why don't you tell me a
little bit more about what you guys are doing at Esri and
specifically with this demo? >> So at Esri,
we make geospatial software. So that's doing analytics
over space and time. We do that both in real time
with streaming data, as well as with data that's at rest, from
a big data analytics perspective. We have embedded DC/OS in a product that we're building that will be delivered later this year as a managed service to our customer base, and this embeds a number of the frameworks that come with DC/OS. We have chosen to run this on Azure because the Azure Container Service that's available on Azure is basically DC/OS underneath, so we use the same APIs as we would for DC/OS. But it gives us a great
environment to run this and we can also integrate that with
the IOT offering that Azure provides to bring in data from
lots of different devices. >> Awesome.
Awesome. So what's the role of the Apache
Mesos project for all this? >> So Mesos is the foundational
kernel underneath all of this. And it's been rock solid for us. In fact, our demo and all of the
products that we're building around this have been rock solid. Unfortunately, yesterday there were
some external circumstances that didn't allow us to view the demo. But we are super glad
to be doing it here. The other thing that
Mesos provides for us is a really easy time
to market for this as well. It used to take us months to build; we can build in weeks, and in some cases days, for different project engagements that we do, because of the foundation and the operating environment that Mesos and DC/OS provide for us. >> That's great, and so you mentioned that there are some
frameworks that you're running. So which frameworks are you
going to talk about? >> So we're running Elasticsearch as our storage and search engine. We're also running Spark Streaming to be able to bring in and ingest data in real time,
along with Kafka. Spark Streaming, we've enabled
with geometry types and geometry analytics, and we've also written
some plugins for Elasticsearch that allow us to do additional things
with Geo inside of Elasticsearch. >> Sounds great. Looking forward to seeing it. >> All right, so
let's see it in action. >> Thanks, Ben. Alright, so what I'm gonna show
you now is the DCOS dashboard. We put together a small cluster for
the purpose of this demo. So there's eleven nodes in
this specific environment. I'm gonna click on
those nodes here and you can see those nodes
that have been provisioned. This is all running on Azure
container services and if we look at the grid
view of these, we can actually see the allocation
of CPU across all the different nodes that are happening. We can also click
through the memory, so this is the allocation of memory. And then this is also the disk
that's been allocated for this application. With DCOS, we have the ability to
install packages of all sorts. These packages are the featured ones
that are sponsored by Mesosphere, and fully supported. So we have Cassandra, Kafka, Spark. And then we have a number of other community-enabled packages that we can install. So installing this on DC/OS is
as simple as clicking the install button and configuring a couple
things for that package. So this is how we've deployed
Elastic Search, Spark, a number of other environments. So as you can see I saw a new one
in there yesterday, Kafka Manager. That just appeared, so
I'm eager to play around with that. So there's new ones that keep
appearing and this is great for us. The services that we have
currently deployed in this cluster is Elasticsearch. So I can click under this here, and what we'll see is I have what we're
calling our spatiotemporal store. That's our space and
time storage system. And we can look at the tasks
that are deployed here, so this is a five node cluster, so
fairly small cluster, but for the purpose of this demo,
it should be quite sufficient. And then we can also scale this Elasticsearch cluster up; it's as simple as just typing in another
number here and saying scale out. And so after a couple of minutes, we would have additional nodes
within our Elasticsearch cluster. I'm also gonna look at the Kafka
distribution that we have here. So this is the Kafka package. Here we can see that I have five
brokers deployed across this. And these five brokers, there's
not the nice UI that there is for the other tools here at this point. I hope that that's coming soon. So what we can do here is go to
the DCOS command line interface. And so we can type things
like DCOS Kafka broker list. And this would give me the names
of the brokers zero through four. And then I can also do the same
thing to see the actual DNS entries. So with Mesos, we get Mesos-DNS, and Mesos-DNS gives us an easy way
to access this information. And so we can see that I can have
my producers produce data to these specific topics. So we're gonna look at Marathon now. Marathon is like the init.d for
DCOS, so it runs services that are long running and it makes
sure that they're up and running. And if they go down, it'll spin them
back up and you can define back off policies, and all kinds of
other things with it as well. So we're running our
Elasticsearch in Kafka. These are the actual schedulers, that schedule the task
to run inside of DCOS. Where we also have deployed
here some SV components. These are our web apps,
the service a map services. So when we look at
the visuals in a minute, these are the things
that are backing at. What I'm gonna do now is
actually deploy a few additional things on the fly here. So we're gonna deploy what we
comically call a RAT, which is a real-time alert task. And we're gonna deploy
a couple of these, and then we're gonna deploy a source. So the real time alert task is
our Spark Streaming effort. And then we also have a source,
which is gonna be the source of our data that's gonna be
streaming into Kafka. And our Spark Streaming jobs
are gonna consume that. So if we now go back
to our dashboard here, we could see that our
allocation is increasing. And then on our dashboard, we can see we're now
using most of the shares. So we've designed this specific
cluster for this specific demo. So if I click into
the real-time analytic task, what I can see here is
that there's five nodes. If I go back for a quick second, you can see that there's 20 cores
allocated and 10 gig of ram. And so there's the five
tasks that are running. You notice that they're all
running on different IPs. They could run on the same IP. It's up to the scheduler
to figure how to do that. If I take a look at
the configuration, this is using
the Mesosphere Spark 1.6 image. And so we're just running this
as a Docker container with Spark Streaming tasks across
five different machines. And then we have our normal Spark
submit command to kick these off. So what's important to note here is
that Mesos is the foundation for all of this. So if I go actually look at
the localhost Mesos interface, we can see that all of the same
things that we've been looking at in DC/OS are exposed down here,
as well. But I can see the task-level
information that's running, as well. So here are my five RATs, here are the tasks that are executing against those RATs. And so I have a full interface
into Mesos, as well. So think about DC/OS as
an abstraction on top of Mesos that gives you a lot more capability. So what I'm gonna do now is
open the actual application. And let's go to my
local machine here. And so this is our real-time
mapping application. So what I'm seeing here is an
aggregation of taxicab data that's streaming in New York City. If I zoom into this, what I can see is those
aggregations get broken down. So what we are actually doing is
we're querying elastic search. The data streaming in from the cabs
in real time from New York City, we're bringing that
data in through Kafka. Spark Streaming is doing
analytics on it and writing the results
down to Elasticsearch. And then this map interface is,
every second, refreshing and building aggregations on the fly. So you can see the numbers changing
and other things happening here. As I break down, we've written some
plug-ins for Elasticsearch that allows us to visualize this
through square aggregations, or even hexagon aggregations. So if I drill down a bit further, you can see that these
aggregations change on the fly. And actually,
if I zoom in far enough, I'll actually see the raw features. So you can see some of the taxicab
movement that's happening right now. And so this is what gets rolled
up into aggregations when there's more features. We don't wanna see a million
red dots on the screen. So we render that through
aggregations to make it more usable. So what I'm gonna do now
is zoom back out to JFK. And let's see what kind of taxicab
activity's happening here. So it looks like we've got
a few taxis running here. I can run this Identify tool, and
that can identify which taxi ID this is and give us the count of the data
that's coming through here. We can do that for another one here,
so we see TaxiID 211. And then we can finally
look at this guy here. So this guy's got five passengers,
and his TaxiID is 190. So let's go back and since we were
doing spatiotemporal analytics, we allow you to actually create
time sliders where you can actually replay this data, as well. So I could come back here and
replay this data. And while I'm replaying the data, I can actually identify
additional where clauses. So I can say TaxiID = that
190 that we looked at before. And so that'll just filter it
down to that specific taxi. So this is building all of this in
real time as we move through here. And this data is still streaming in. So if we go back to
the live mode here, we see that the taxis
are continuing to move. So I'm gonna go back here, and we're gonna show you kind
of the IoT aspect of this. Certainly there's
an IoT aspect already, that you're seeing here with
the taxicab data moving in. But let's zoom in to
Central Park here. And I'm gonna flip over to
a specific view here and deploy another source of data. And this source of data is
gonna drive a simulated cab. That's this individual here. In this red area that
you're seeing here, Claudio is the guy that's yellow
that looks really unhappy. But the red area there is
a one-minute proximity drive time into that location,
factoring in real-time traffic, factoring in all kinds
of other aspects. So that shape can actually move. >> And then we have here what will
be the new Cloud Power T-shirt. It's very simple: as you see, there is a tiny
microcontroller that is actually receiving messages from IoT
Hub in a secure and scalable way. So basically, while the back end is doing the heavy lifting of filtering and processing the data, through IoT Hub you can actually go and actuate a device in the field. >> So when that taxi hit the edge
of that polygon we actuated that event to show MesosCon 2016? >> Yeah. >> So the last thing I'm gonna
show here is we can actually visualize this also as a heat map, which is a popular way to
render a lot of this data. Let me refresh this real quick. And so these heat maps can
provide you additional context into this information. And what we can do is actually play
that heat map back in space and time as well. So let me flip on the heat map. And there's the heat map. You can see there's tons of
red activity in New York. But we could break this
down a bit further. And so I can actually replay
this heat map, as well, because we're building
these on the fly. These aren't pre-calculated in the stream. We're actually recording the data,
and then we can play this
back in real time. I can actually zoom in, and
you see a different extent and a different breakdown
of this taxicab data. So DC/OS is powering our
products that we're building. So we're building higher-level
products on top of DC/OS. And it's an extremely powerful
framework that you can deploy lots of frameworks to. Thank you. >> [APPLAUSE]
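As an aside on what "saying scale out" in the demo does under the hood: in Marathon, scaling an app is a single call to its REST API, a PUT to /v2/apps/<app-id> with a new instances value. A minimal sketch in Python, with a hypothetical Marathon address and app id:

```python
import json
from urllib import request

def build_scale_request(base_url, app_id, instances):
    """Build the Marathon PUT request that sets an app's instance count."""
    url = "%s/v2/apps/%s" % (base_url.rstrip("/"), app_id.strip("/"))
    body = json.dumps({"instances": instances}).encode()
    return request.Request(url, data=body, method="PUT",
                           headers={"Content-Type": "application/json"})

# e.g. scale the (hypothetical) spatiotemporal store to 8 instances;
# request.urlopen(req) would submit it, and Marathon converges the cluster.
req = build_scale_request("http://marathon.mesos:8080", "spatiotemporal-store", 8)
```

The UI's scale-out button and the DC/OS CLI are both just front ends for this kind of call; Marathon then treats the new instance count as the desired state and keeps the cluster converged on it.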
>> Okay, this will be, yeah. So basically what we showed in
this demo was that scaling up, scaling down, and fast deployment are the key here, where you can go from proof of concept to the actual release very soon. At the end,
I'd just like to summarize that, ACS is an infrastructure service
[INAUDIBLE] service somewhere between PaaS and IaaS. So and the benefit more is on
going towards fast scaling, fast development cycle,
make it faster. You don't need to
re-architect your apps. You can use your existing model,
and just put it in a container. It is more of a deployment and
a scaling-up framework rather than anything else, I think,
just to demystify this whole thing. Secondly, it's all open source. So if you feel something needs to be changed, go ahead and change the open source, Docker and the others, and we'll take it from there, as it's driven by the community. Even the interfaces that we are using with Windows or with Linux, these are all open source. Then you can go ahead and read more
about it if you are interested in that. And there are some important links; I would actually call out this one especially, the Tech Community. So if you have any suggestions, any questions, any contributions, or the how-do-I-use-it question, which again is the same very basic question: should I use Kubernetes, should I use DC/OS, should I use Swarm? Which orchestration engine should I use? If you have questions on that, the Tech Community is your solution and will be there to help you out. All these demos that we showed
are available on GitHub, as well. You can go and look at those. And lastly, yeah, join the Tech
Community, I think that's all. Okay, and that's all, thank you. >> [APPLAUSE]