Build a Company in Azure

Captions
Hello and welcome to Build a Company in a Day, a session packed with demonstrations that highlight just how easy it is for companies to take advantage of Azure and begin operating at the speed of cloud. My name's Dan, and over the course of this session it's my hope that I can show you why I'm so passionate about Azure and why it should be top of mind for every IT professional. We'll begin by understanding the components of the cloud before building our infrastructure and platform services. Finally, we'll wrap up by understanding how our resources are being protected and the insights that you can obtain from Azure's extensive monitoring solutions. If you want to follow along, feel free: simply head over to the URL on screen and sign up for an Azure trial. Ready? Let's begin.

Now, most people have heard of the cloud, and many use it, but what is it? In reality it's just a type of service where individuals and businesses rely on a third party to manage their data and computer processing via the internet. Perhaps a more practical way to think of it is as a utility: just as gas and electricity are supplied to businesses and homes, public cloud providers supply secure, reliable and scalable computing from a shared data center that we simply plug into. What's more, with 38 regions serving over 140 countries, the cloud empowers companies to run faster and cheaper than ever before, simply paying for the resources they consume and bursting in scale when needed. At Microsoft we place a high priority on a geographic expansion that enables the highest performance and data residency, and with a unified code base running both publicly and privately via services such as Azure Stack, we can partner with organizations and enterprises worldwide and create a densely populated offering. Today, startups, governments and enterprises of all sizes choose the Microsoft cloud and its comprehensive set of services to build, deploy and manage workloads with integrated tools, DevOps solutions and marketplace offerings, all of which enable building anything from simple mobile applications to large-scale internet solutions. An exciting thing is that this wave of innovation continues to gain momentum, with changes happening almost daily. It's an IT professional's dream come true: a digital playground that invites us to rethink, repackage and create innovative solutions that maximize efficiency while fortifying security.

Now, we're going to jump into a demo of the portal in a few moments, but before we do, let's understand the service tiers on offer. To do so we first need to understand how things are done on premises. Building IT solutions can be likened to building a tower with blocks: we stack one service upon the other until we reach our goal, always ensuring that the block below is stable and secure before placing the next, and just like the game, the higher the blocks stand, the harder they become to manage. In a classic on-premises IT solution we would buy powerful servers to host our virtual machines, which in turn would run our applications, and while this was the norm, we can see that the bit that matters, providing our users and customers with great application experiences, was in fact a very small piece of the puzzle. By moving our solutions over to a cloud provider we can reduce management, with companies such as Microsoft undertaking the responsibility of the underpinning resources.

Take IaaS, or infrastructure as a service, for example. IaaS is perfect for migrating solutions already in place over to the cloud, and is often referred to as "lift and shift". With IaaS you simply pull your virtual machines into the cloud and allow Microsoft to look after the physical resources whilst you concentrate on the operating system and the applications that run upon them. For new projects, PaaS, or platform as a service, would be first choice. With this tier you can become laser focused on what matters, the customer and great service experiences, instructing the cloud provider to manage all resources and
reliability tasks, while giving you total peace of mind that your solutions are highly available and globally scalable should you need them. Finally we have SaaS, or software as a service: a collection of prepackaged IT solutions that you can simply subscribe to. Think Office 365, Outlook, OneDrive, even Xbox Live. With these solutions we can maximize the power of the cloud without ever having to consider the resources needed to power the solution.

Now, Microsoft offers a whole catalogue of services within the Azure portfolio, and we're going to see just how many of them we can use in our presentation today. But before we do, let's familiarize ourselves with the portal and navigating around both the graphical and command-line interfaces. To keep you up to speed: in the past you may have used the Azure Service Manager portal, or ASM for short. ASM was often referred to as the classic interface, but has now been replaced with Azure Resource Manager. ARM allows you to view and manage all resources in one unified hub, be that through the rich graphical experience or the integrated command line, the Cloud Shell. With ARM you have a single, easy to use console built just for you, significantly simplifying building, deploying and managing your cloud resources. What's more, you can organize the portal to custom fit your work style and stay on top of the things that matter most by pinning them to your dashboard and selecting the right amount of detail and insights across your apps and resources.

So let's go ahead and make some customizations. Now, I personally prefer the default color scheme, so I will keep it as is, but what I want to do is clean up our dashboard, rename it and add a logo. Cleaning up the dashboard is as simple as clicking on a tile and choosing to unpin or resize. Let's remove all pinned tiles for now and then rename our dashboard to "Build a Company in a Day". Now let's add our markdown. To do this I will move over to the left-hand side, open the tile gallery and drag the markdown tile over to the main
window. As we can see, this opens a side blade whereby we can create custom content. We'll add our logo by replacing the code to point to an online storage account where I've uploaded the image. With that done, let's go ahead and finish customizing. On the left-hand side of the portal you'll see a list of resources, which again can be totally customized. Simply click on All Services at the top of the list, followed by the services that you wish to pin to or remove from the sidebar. You can rearrange the order by dragging them around, or even minimize the menu by clicking the left arrows icon at the top of the page.

Now, over the course of the session we'll use the portal extensively, demonstrating just how user-friendly it is, but if you're a seasoned IT pro who prefers a command-line interface then those tools are at your disposal too. If I move towards the top of the page, you can see that I can launch a Cloud Shell, which will then allow me to build out resources using either Bash or PowerShell. You'll notice that when I launch this for the first time I'm prompted to create some resources; this is for persistent session information and means that my experience remains consistent. Once configured, my Cloud Shell environment will build, allowing me to create resources using a command line. I can even navigate to its own portal at shell.azure.com and do away with the graphical portal altogether should I choose. We'll make use of the Cloud Shell throughout the session, but before we do I just want to cover one more thing: resource groups. You see, everything that we build in Azure will ultimately live inside a resource group, and what's more, by bunching services together we're able to control access management and billing. Resource groups are super easy to create: we can simply click on Resource Groups on our sidebar, or use the keyboard shortcut G+R. From there we can click on Add and then provide a name, select a subscription and define a location. Let's go ahead and call ours IaaSResources and
place it into a western region. Once created, let's go ahead and click into the resource group. Now, I appreciate that we currently have no resources inside the group, but what we can see is that on the left-hand side we have a number of options. From here we can look at activity logs and identity and access management, we can look at resource costs, and we can even set policies to specify what can and what can't be created inside the group. We can also generate an automation script that can be used to redeploy resources at a later stage, or into a different subscription altogether. Finally, we can monitor metrics, set up alerts and gain valuable insight into our resources.

So, now that we have a better understanding of the portal, let's begin building out our services. To begin with we'll concentrate on infrastructure as a service and use the tools available to build our compute components. We'll concentrate on the network and our addressing schemes, and then connectivity into Azure, before building out some virtual machines using an array of deployment methods. Once done, we'll understand what the deal is with containers and why they have become so popular, before building out our directory and ensuring that we provide secure authentication into our system. Finally, we will look at deploying a DevTest Lab and investigate how we can sandbox testing environments with granular cost control, before protecting everything that we have created with Azure's powerful backup solutions.

Let's begin by understanding the virtual network. A virtual network allows us to define our address space, or the numbering system that we are going to use to allow our computers and resources to talk to each other. Similar to a residential address, in which houses share a postcode yet are separated by the door number, a virtual network achieves the same in the cloud by defining network and host IDs. The first thing that we will need to do is decide what our address space will be, and in this instance we will choose the value 192.168.10.0 with a /24 mask. What this means is that out of the 32 bits used for the address space, the first 24 will be used to identify the network, our postcode, leaving the remaining 8 to identify a host. This gives us a potential for 256 possible addresses. Now, 256 isn't a particularly big number, especially in large organizations, but do bear in mind that these numbers can be changed to support millions of hosts; we are just keeping it simple to help with the learning.

Out of these 256 values I want to further subdivide: I want to reserve some addresses for my production machines, perhaps reserve some for development, and finally reserve some to route traffic across the network should I choose to communicate with remote hosts. To do this we will create subnets, and these are created by borrowing bits from the host ID to create little groupings of resources. This is important because once grouped together I can then control the flow of information between them, ensuring that my front-end and back-end servers only communicate with the nodes that they are supposed to.

But what if I want to communicate with resources that are in a completely different network altogether? Thankfully, with Azure this is super easy, and we can create peerings between virtual networks, which then route traffic through the Microsoft backbone and allow them to appear as one. Not only is this incredibly fast, with low latency, high bandwidth connections, but because it's on Microsoft's own private network and not routed through the public internet, you can also be assured that it's highly secure. Finally, we have many options available to allow networks outside the scope of the Microsoft cloud to communicate with our resources. For on-premises machines we can create site-to-site virtual private networks and tunnel traffic securely over the internet, or we can create our own private connections with technology such as ExpressRoute. For individuals, we can create point-to-site connections that allow remote users secure access into our cloud resources.
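Before we go further, the subnet arithmetic used in this session is easy to sanity-check. Here's a short Python sketch using only the standard library's `ipaddress` module; the five-address reservation per subnet reflects Azure's standard behaviour (network address, broadcast address, the default gateway and two Azure DNS addresses), and the specific ranges simply mirror the ones used in the demo:

```python
import ipaddress

# The address space chosen for the virtual network: /24 = 256 addresses.
vnet = ipaddress.ip_network("192.168.10.0/24")
print(vnet.num_addresses)  # 256

# Production subnet: the first half of the /24.
production = ipaddress.ip_network("192.168.10.0/25")

# Development subnet: a /26 carved from the upper half.
development = ipaddress.ip_network("192.168.10.128/26")

# Azure reserves 5 addresses in every subnet (network, broadcast,
# gateway and two DNS), so usable hosts = total - 5.
AZURE_RESERVED = 5
print(development.num_addresses - AZURE_RESERVED)  # 59 usable addresses

# Both subnets must fall inside the vnet's address space.
print(production.subnet_of(vnet), development.subnet_of(vnet))  # True True

# The point-to-site client pool used later sits in a separate range
# that must not overlap the vnet.
client_pool = ipaddress.ip_network("192.168.20.0/28")
print(client_pool.num_addresses)  # 16
print(client_pool.overlaps(vnet))  # False
```

Running this confirms the figures quoted later in the session: 256 addresses in the /24, and 59 usable addresses in the /26 development subnet once Azure's reservations are taken off.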
To enable these remote connections we need to create a gateway, which will take on the responsibility of moving the data between the cloud and the recipient, much like the postman moves mail between the postcodes in our earlier example.

So let's now jump back into the portal and start building this out. Earlier we created our resource group, IaaSResources, and we'll be using that to host the components that we're about to create. From our desktop let's go ahead and click on Resource Groups from the left-hand menu, and then IaaSResources. Once inside the resource group we'll click on Add. Now, this will take us into the marketplace, our online store for provisioning services. From here all we need to do is simply type "virtual network" inside the search box. You will see that the search box fetches lots of results, but only one will be called Virtual Network, so let's go ahead and select that. As you'll see, this opens up a new blade and invites us to select a deployment model, either Classic or Resource Manager, but as we said earlier we'll only be concentrating on Resource Manager in our demo, so we'll click the Create button, which opens up another new blade, and from here we'll need to complete the questions to proceed. To begin with we'll name our virtual network buildacompanyinaday-vnet, and then, as we said earlier in the video, set our address space to 192.168.10.0/24. From here we will select our subscription and ensure that the virtual network will be placed in the existing IaaSResources resource group. We also want to set a subnet name of "production" with an address range of 192.168.10.0/25. Finally, we'll want to ensure that we select Pin to Dashboard before pressing the Create button, and that's it: the service will be deployed, and it should take less than a minute to complete.

Now, once completed, you'll notice that the portal displays our newly deployed resource. From here we can further configure it by adding some new subnets, which we'll use later in the demo. To
achieve this, we'll move down to the section titled Settings and select Subnets. From here you'll notice our production subnet listed with its available addresses, in addition to two buttons: one called Subnet and one called Gateway Subnet. Let's click on the Subnet button and add a new subnet called "development". From there we will set an address range of 192.168.10.128/26, which will give us a total of 59 usable address spaces. We can also see that we can set the network security group, the route table and service endpoints. With that created, let's now create our gateway subnet. This will be used to route traffic between the Microsoft cloud and a remote location, so we need to ensure that it is configured correctly for routing to occur. Fortunately, Azure will do most of the work for us, and all we need to do is click the OK button. Brilliant: our virtual network and subnets are all configured, and we are now in a position where we can start hosting resources in our network. But before we do, we need to deploy our gateway server so that our remote machines can securely connect.

So, back in our IaaSResources resource group, let's go ahead and click on Add to go into the marketplace, and from there we will search for a virtual network gateway. From here we'll select Virtual Network Gateway from the results and then go ahead and click Create. For our name we'll call our gateway buildacompanyinaday-gw, and we'll ensure that the gateway type is set to VPN with a route-based VPN type. For the SKU we'll select the Basic tier. Now, when we click on Choose a Virtual Network we should see that we're able to select the buildacompanyinaday-vnet that we created earlier. Then we can go ahead and create a public IP address, which we will again name buildacompanyinaday-gateway. Finally, we will ensure that the resource group and location are correct before selecting the Pin to Dashboard checkbox and hitting the Create button. At this point our gateway will be provisioned. Be patient:
this could take a while, but rest assured we have plenty to keep us occupied whilst it builds, as we need to think about how we will authenticate into the gateway once it is done. Authentication into our gateway will be achieved using certificates. This means that we will need to generate both a client and a root certificate in order for communication to occur. To do this we will run the PowerShell script buildacompanyinaday-cert.ps1, which automatically creates the certificates required to authenticate with the gateway. Now, we can't upload the certificate directly into Azure, so we need to extract its public certificate data. To do this I will open up the certificate in Notepad and copy the signature between the "BEGIN CERTIFICATE" and "END CERTIFICATE" tags into my clipboard.

Then, with my gateway created, I will click on the Point-to-site Configuration found under Settings, and then Configure. From here I will specify an address pool of 192.168.20.0/28, as well as enter the certificate data that we copied from Notepad. We'll name the certificate buildacompanyinaday-root and, once done, press Save on the top menu bar. This will take a few moments, but once saved you'll notice that we are able to download the VPN client from the Azure portal, so let's go ahead and do that. From here you will notice that you have a 32-bit and a 64-bit client; we'll go ahead and install the 64-bit version. Now, don't be surprised to see a "Windows protected your PC" message: this is because we are installing an executable from the internet and Windows is trying to protect us. Rest assured the file is perfectly safe, so let's go ahead and choose Run Anyway. Once installed, you'll notice that we now have a buildacompanyinaday-vnet connection in our network connections. If we click onto it we're able to connect and authenticate into our Azure gateway. To prove this, let's open up a command prompt and type the command ipconfig. You'll notice the PPP adapter buildacompanyinaday-vnet, which has an address of 192.168.20.2, from the range that we specified earlier.
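Incidentally, the certificate step above, copying everything between the BEGIN/END markers, can be scripted rather than done by hand in Notepad. Here's a minimal Python sketch; note that the certificate bytes are a made-up stand-in purely to show the framing (Dan's demo generates real certificates with a PowerShell script):

```python
import base64
import textwrap

# A stand-in for an exported Base64-encoded (.cer) certificate file.
# The payload bytes are fake; only the PEM framing matters here.
der_bytes = b"example certificate payload"
b64 = base64.b64encode(der_bytes).decode("ascii")
pem = ("-----BEGIN CERTIFICATE-----\n"
       + "\n".join(textwrap.wrap(b64, 64))
       + "\n-----END CERTIFICATE-----\n")

def extract_public_data(pem_text: str) -> str:
    """Return the Base64 payload between the BEGIN/END markers,
    with line breaks removed -- the single-line format the
    point-to-site root certificate field expects."""
    lines = [ln.strip() for ln in pem_text.splitlines()]
    body = [ln for ln in lines if ln and not ln.startswith("-----")]
    return "".join(body)

data = extract_public_data(pem)
print(data == b64)  # True: round-trips back to the original payload
```

The same idea works for a real exported root certificate: strip the marker lines and line breaks, and paste the remaining Base64 string into the portal.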
We now have secure access into the cloud, so let's move on and start building out our virtual machines. The number of compute sizes available in Azure can at times seem a little daunting, especially when you're trying to work out which one you should choose to support your workload. To make life a little simpler, Microsoft has defined six service categories aimed at specific workloads. First we have the general purpose machines, with a balanced CPU-to-memory ratio, ideal for testing and development. Then we have the compute optimized configuration, which is perfect for running medium-traffic web servers, network appliances, batch processes and application servers. Next we have the memory optimized configuration, which is great for relational database servers and in-memory analytics. Moving on, we have the storage optimized configuration, tailored to meet the demands of big data, SQL and NoSQL databases. Then the GPU optimized configuration, which is targeted at heavy graphic rendering and video editing, and finally the high performance compute configuration, purpose-built for maximum performance and lightning-fast throughput. Again, don't try to remember all of the numbers; just think about the workload that you're trying to support and then refer to the sizing tools and calculators available on azure.com. Another thing to bear in mind is that Azure isn't just a Windows platform: in fact, nearly half of all of the virtual machines created in the Microsoft cloud are running Linux. So, with that in mind, let me try and show you a few different ways to deploy virtual machines.

To begin with we'll create the domain controller, and we're going to do this by creating an ARM template. Now, ARM templates are sometimes referred to as "infrastructure as code" and are a simple way of deploying resources quickly and repeatedly. The main benefit of using ARM templates is that they allow you to deploy multiple resources at the same time, and although they look a little scary to begin with, when we look under the covers we can see that what we're really doing is declaring the types of resources that we would like to have, the names we would like to give them, and any properties that should be associated with them. Let's take a better look.

From our portal we will head up to the search bar, and from here I'm going to search for Templates. Once in, I will click on Add, and then in the General blade enter the name buildacompanyinaday-dc, followed by a description. For the description we will type: "This ARM template will create a domain controller virtual machine connected to the buildacompanyinaday vnet." Once done, I will click OK, and then in the ARM Template box I will replace the code with a script that I've previously created. Again, I appreciate that this looks a little daunting, but let's try and break it down. To begin with we have the schema. The schema is the rules of the template: it defines what the template must look like, how we extract information from it, and how Azure interacts with the code. Next we have the parameters, and we can see that these define things such as the name we would like the resources to have, the network we would like them to be on, and so on. These are the questions that the user will be asked when they deploy the template, but as you can see, we're also able to pre-populate these parameters with default values, making deployment even easier for the end user. Next we have the variables. A variable stores values that can be used multiple times throughout the template, and because they're hard-coded they require no input from the end user. Finally, we have our resources, or the types of objects that we wish to deploy: in this case a virtual machine, some storage, an operating system and a network card. So, now that we have a better idea of the template, let's go ahead and copy and paste it into Azure. I just need to select OK followed by Add, and once our template has been validated and saved we're ready to deploy it.
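To make the schema / parameters / variables / resources breakdown concrete, here is a minimal skeleton of the kind of template just described, built as a Python dictionary so the shape is easy to see. The resource type and API version are illustrative placeholders, not lifted from the demo's actual template:

```python
import json

# Minimal ARM template skeleton: the four sections described above.
template = {
    # The schema: the rules defining what a deployment template looks like.
    "$schema": "https://schema.management.azure.com/schemas/"
               "2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    # Parameters: questions asked at deployment time, with default values
    # pre-populated to make deployment easier for the end user.
    "parameters": {
        "vmName": {"type": "string", "defaultValue": "buildacompanyinaday-dc"}
    },
    # Variables: hard-coded values reused throughout the template.
    "variables": {
        "vnetName": "buildacompanyinaday-vnet"
    },
    # Resources: the objects to deploy (VM, storage, NIC, ...).
    "resources": [
        {
            "type": "Microsoft.Compute/virtualMachines",
            "apiVersion": "2017-12-01",   # illustrative API version
            "name": "[parameters('vmName')]",
            "location": "[resourceGroup().location]",
            "properties": {}  # size, image, OS profile, NIC reference, ...
        }
    ],
}

# The portal validates the template when you press Add; locally we can
# at least confirm the JSON serializes and round-trips cleanly.
text = json.dumps(template, indent=2)
print(json.loads(text) == template)  # True
```

The bracketed strings such as `[parameters('vmName')]` are ARM template expressions: Azure evaluates them at deployment time, which is how one template can be reused with different inputs.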
We'll go ahead and hit Refresh, and then we can deploy. To do this I'm just going to click on the ellipsis and choose Deploy, and we'll see, as this opens, that as we stated before most of our values have been pre-populated with the default entries. All I need to do is choose my existing resource group, IaaSResources, and then scroll down to accept the license agreement, select Pin to Dashboard and click Purchase. Once validation has occurred, Azure will begin deploying this template into our subscription.

So, the next thing that we're going to do is look at deploying a virtual machine using a marketplace image. To do this I'm going to move over to the left-hand menu and choose Create a Resource, and from here I'm going to search for Visual Studio Community Edition on Windows Server 2016. Let's select that and hit Create, and from there we'll go ahead and give it the name VM1. We'll opt to use a standard hard disk for our image, and I'll type in a username of buildacompanyinadayadmin, followed by a password. Next we choose to use an existing resource group and select IaaSResources before pressing OK. In the next section we will choose a D2 v3 standard VM; this will give us 2 CPUs and 8 gigabytes of RAM.

In the next section we need to define our settings. The first setting will be our availability set. Availability sets are a good idea to provide redundancy in our application and are therefore recommended for production workloads. This configuration ensures that during planned or unplanned maintenance events at least one of our machines will be available, and meets the 99.95% Azure SLA. In our demo environment, however, we will skip this stage and move on. For storage we will want to use managed disks. Managed disks are relatively new and take away the burden of the administrator having to create a separate storage account and worry about availability and redundancy: by selecting a managed disk, Azure will undertake these steps for you.
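It's worth pausing on what the 99.95% availability figure mentioned above actually buys you. A quick back-of-the-envelope calculation, assuming a 30-day month:

```python
# Downtime permitted under a 99.95% monthly availability SLA,
# assuming a 30-day month.
sla_percent = 99.95
minutes_per_month = 30 * 24 * 60                       # 43,200 minutes
allowed_downtime = (1 - sla_percent / 100) * minutes_per_month
print(round(allowed_downtime, 1))                      # 21.6 minutes per month
```

In other words, meeting the SLA still allows a little over twenty minutes of downtime in a month, which is why availability sets matter for production workloads even though we skip them in the demo.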
Next we will come down to network, and we can see that both the virtual network and the subnet have been pre-populated with the settings that we created earlier. This is because Azure recognizes that components share the same resource group and therefore suggests them as default values. Nothing to change here, so let's skip to the public IP address. The public IP address is our public connection to the outside world, and you could argue that because we already have a private connection via our point-to-site gateway this step could be omitted; however, for our demo we will configure a static address and then configure a network security group to allow RDP traffic through the public interface, TCP port 3389.

Next we come down to extensions, and from here we can add features such as antivirus protection and desired state configurations into our deployment. Let's go ahead and select Microsoft Antimalware; on the new blade that opens we will press Create, then OK to accept the defaults for protection, then OK once more to close the extensions window. Now we need to decide whether we want to configure an auto-shutdown policy. From here we have the option of specifying a time slot to automatically shut down our virtual machine, and whether we wish to send a notification before doing so. For the sake of our demo I'm going to leave this setting switched to off. Moving on, we now have the option of specifying whether we want to monitor boot and guest OS diagnostics. These are particularly useful for gaining insights into the condition of the machine and enabling alerts via the core metrics, so we'll set both to Yes, and you'll notice that by doing so we are prompted to configure a storage account to hold the diagnostic information. Once done, we can press OK, and then OK once more to move on to the final section. This displays a summary of our configuration and validates that our inputs are correct. From here we can see a breakdown
of the components selected and the hourly cost. We can also download the configuration as a template, or choose to deploy the resource into production. Let's press the Create button to deploy. This will take a little while to provision, so let's pause the video while our virtual machines deploy.

So, with our Windows-based virtual machines created, let's take a look at how we can connect. We'll begin by clicking on the pinned tile for our AD VM. When this opens it will default to the Overview tab, and from here we can see general information such as the machine status, the resource group it has been deployed into, and the operating system that it's running. As we move down the tile we can see basic metrics for the machine, including CPU, network and disk performance, and we can toggle these counters anywhere between an hour and 30 days. At the top of the blade you will notice that we're able to start, stop, restart, move and delete the virtual machine. You'll also notice an option to connect by using the public IP address that we created earlier: if we click on the button, an RDP profile will download, allowing us to connect into our VM. Because we also have a private connection into our VM, we're also able to connect securely via our gateway. To do this all we need to find is our private IP address, so let's move over to the left-hand side and, under Settings, click onto the Networking tab. This changes the main window, and from here we should be able to see the private IP address listed, which in this instance is 192.168.10.4. With our address at hand, let's open a Remote Desktop Connection and connect to the IP address of our AD VM. When prompted, we will enter the username buildacompanyinadayadmin and the password, and then in the certificate warning window click Yes to authenticate into the machine. Awesome. We will come back to the configuration of the machine later, but notice that when the Server Manager window opens, the machine has indeed been configured as the domain
controller for the buildacompanyinaday domain.

So let's now close our RDP session and go back into the settings of our virtual machine. You'll notice that on the left-hand menu we can add additional disks, additional extensions and many other components, but one of the more important aspects is to ensure that we keep our VM up to date with the latest security patches and updates. Remember, infrastructure as a service means it's your responsibility to make your VM secure. Thankfully, Azure makes this incredibly easy with Update Management, which uses Log Analytics and Automation accounts to streamline the whole process. Once enabled, it can take up to 15 minutes to provision, so we'll pause the video and return once completed.

OK, so we can see that three missing updates have been identified, meaning that our machine is not in compliance, so let's go ahead and schedule an update deployment. To do so I will click onto the Scheduled Update Deployments tab. From here I will name my scheduled update "AD VM update" and then move down the blade. You'll notice that I'm able to exclude certain updates; this may be due to testing or incompatibility. Remember, you are in the driving seat; we're just making the process as simple as possible. Under Schedule Settings we can configure when the update deployment starts and whether it's a recurring thing. For our demo we will schedule it to start 30 minutes from now and leave it as a single instance. Finally, I will leave the maintenance window at the default 2 hours to ensure enough time for the updates to install. OK, let's pause the video once more, and we will return once the updates have completed.

And we're back, and we can see that our updates deployed but we've experienced a failure at some point, so let's look at the tools for investigating why. If I click onto the Update Deployments tab I can see a status of Failed for my scheduled task, and I can gain more insight as to what may have gone wrong by clicking into it. As we can see, the majority
of the installs succeeded, but the definition update for Windows Defender failed. By looking at the updates we can also see that Windows Defender was installed in an update for the platform at the same time, which may have contributed to the error. We could reschedule the failed update for another install, but for the sake of time let's move on.

Another area of importance is inventory and change management. By using these tools you can get great visibility into installed applications and custom-defined configurations, with rich reporting and search capability. Both focus not only on software but also on registry, services and file changes, making them a fantastic tool for monitoring baseline security policies. As we can see, change tracking has identified the changes invoked by Update Management, providing a rich audit log. If we move down to the monitoring section, we can consume telemetry to gain visibility into the performance and health of our workloads. Furthermore, we can track the performance of a resource, in this instance our AD VM, by plotting its metrics onto the portal chart and pinning it to the dashboard. We're also able to be notified of issues that impact the performance of our resource by configuring alerts, and then action those through automation. Moving on, if we select the Advisor we are assisted by a personalized cloud consultant designed to help optimize deployments. This is done by analyzing resource configuration and usage telemetry, and then recommending solutions to help improve the performance, security and high availability of our resources while looking for opportunities to reduce overall cloud spend. If we click on the diagram we get an exploded view of our resource; what's more, the components are hyperlinks, allowing you to navigate directly to the associated configuration blades if needed. Finally, under Support and Troubleshooting we can see that we're able to assess Azure Resource Health, which aids in helping you diagnose and get support when Azure service
problems affect your resources. It informs you about the current and past health of your resources and provides technical support to help you mitigate problems. We're also able to view Azure VM Boot Diagnostics to capture logs that can help troubleshoot boot failures and obtain screenshots of the VM's current state, as well as reset the password of forgotten machine accounts. If need be, we can redeploy an Azure virtual machine to a new node if we've experienced difficulty connecting; when we redeploy, Azure moves the VM to a new node within the Azure infrastructure and powers it back on, retaining all of your configuration options and associated resources. Finally, if all else fails, we're able to generate a support request directly from the portal, streamlined directly into the right support area. Now, for our final machine we're going to deploy a CentOS image, and I'm delighted to be joined by Justin Davies, one of our technology solution professionals, who's kindly agreed to show us just how easily this is achieved by using the Azure cloud shell. Justin, over to you. Thanks, Dan. Before I jump into the cloud shell, I just want to spend a little bit of time talking about open source and Microsoft's four pillars of work. Firstly, we've enabled the Azure platform to run Linux within the Hyper-V hypervisor, and Linux Integration Services, or LIS for short, enables Linux to run as efficiently as possible on our data-center servers. On top of this, we ensure that you can bring programming languages from the open-source world into our PaaS offerings, including languages such as PHP, Node.js, Python and many others. The second pillar is the integration of open-source projects into the Azure platform, projects such as HDInsight, which utilizes the Hortonworks distribution, or Azure Container Service, built upon Kubernetes; in all cases we manage the environment, meaning that all you need to worry about are the applications you wish to deploy. The third pillar, and perhaps one of the most
telling displays of our commitment to an open-source ideology, is the pace at which we release our own software into the open-source community. We've taken services such as .NET Core, PowerShell and the Microsoft Cognitive Toolkit into the open, and we actively work with the open-source community to drive innovation and fix issues raised through GitHub. Finally, underpinning our commitment to open source, we are members of the Linux Foundation and the Cloud Foundry Foundation, as well as the Cloud Native Computing Foundation; this allows us to listen to the community and contribute back as part of it. So that's an overview of Microsoft and open source. Now let's start looking at how that fits with Azure and have a look at the cloud shell. The cloud shell is an embedded Linux VM that is accessible over the web. This is really useful when you don't have full internet access because of firewall policies, but it also comes with a few extra benefits: the Azure command line is pre-installed and authenticated against your Azure account. This will be a familiar interface for Linux administrators and will allow you to script actions that need to be repeatable. So let's take a look at the cloud shell and deploy some machines. To begin with, we'll navigate to the top of the menu bar and press the cloud shell button. The cloud shell's got some pretty great features built in, above and beyond the usual command line. Firstly, if you're a Terraform kind of person, the command line is already there in the shell, fully authenticated and ready to work with your default subscription. If you're an Ansible or Cloud Foundry kind of person, we already have those command lines installed and ready for you to use too; there's nothing you will need to do here. Just clicking on the cloud shell icon will load it up for you and you're good to go. We can see we have the Azure command line; I can issue the usual Linux commands if I want to, but here I'm going to create a default CentOS VM. Before we do that, we need to ensure that we're
able to securely authenticate to the machine. We'll do this by creating a key pair, very similar to the certificate that Dan created earlier on, and I do this by typing ssh-keygen into the command-line interface and simply pressing Enter at the subsequent prompts. Once that's completed, we have all the components needed to be able to build our virtual machine. To do so we're going to type in quite a long command, and I'll run through it with you in a second; let's copy and paste that. Great, so we have it pasted in the command line. Let's just run through what's going on here. We're using the az vm command to issue requests against the Azure API for virtual machines, and we're going to use the create option. We use -n vm-2 to give it a name; -g IaaSResources is the resource group we want this VM to be loaded into, which Dan created earlier on; and then we have the image that we actually want to create as well, so in this case we're using CentOS, but you could use Red Hat Enterprise Linux or SUSE if you wanted to, or any other Linux distribution, or Windows. We also add the admin username, the super-user account that will be automatically created on the virtual machine for me, which allows me to become root, for example, and then the SSH key that we just created previously as well; everything we generated just now will be pushed up to the virtual machine. Next come the network security group we want to create and its name, and the subnet: the Production subnet in the VNet that Dan created earlier on is where we want this VM to be placed. Then we choose the instance size we want to use; Dan talked about these previously, and you can choose any of the sizes you want when you're using the command-line shell as well. Finally, there's the public IP address and the name of the public IP address that we want to attach to this machine. Let's go
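Before we run it, here's the key generation plus the full create command gathered into one sketch. These need an authenticated Azure CLI session, and every name below (resource group, NSG, VNet, subnet, size, admin user) is my reading of the demo rather than something to copy verbatim:

```shell
# Generate an SSH key pair non-interactively (-N "" sets an empty passphrase);
# pressing Enter at each ssh-keygen prompt, as in the video, achieves the same.
ssh-keygen -t rsa -b 2048 -f ~/.ssh/id_rsa -N ""

# Create the CentOS VM. All names here are illustrative; substitute the
# resource group, NSG, VNet and subnet created earlier in the session.
az vm create \
  -n vm-2 \
  -g IaaSResources \
  --image CentOS \
  --admin-username justin \
  --ssh-key-value ~/.ssh/id_rsa.pub \
  --nsg vm-2-nsg \
  --vnet-name buildacompanyinaday-vnet \
  --subnet Production \
  --size Standard_DS1_v2 \
  --public-ip-address vm-2-ip
```

When it completes, az vm create prints a JSON document that includes the public and private IP addresses assigned to the machine.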
ahead and press Return here, and we're just going to pause for a second, because you don't want to sit here watching a blinking cursor for the next five minutes. And there we have the virtual machine created for us. You can see that we have the public IP address, which has been given to us by Azure, and the private IP address, which came from the subnet's range. If you didn't remember the IP address or copy and paste it down, we can actually use the Azure command line to find out what that public IP address is, and this will actually come in useful in a second after one of the commands that we run. So: az vm list-ip-addresses, with -n for the virtual machine name that we just created, and also the resource group as well, to allow the command to find the virtual machine within any number of resource groups that you have. We're going to use -o table, because by default the Azure command line spits out data in JSON format, and a table is a lot easier to read. Right, because we're going to be creating a web server on this virtual machine, we actually need to allow HTTP, or port 80, traffic through; this is the port used for web traffic and needed for the web-server role that we'll put on the machine in a few moments. To open the port we type the command az vm open-port, with -g for the resource group the VM is within and -n vm-2, which is the name of the virtual machine that we created. We also need to specify the port that we wish to open and then the priority, --priority: if we had a number of rules within those NSGs, which Dan talks about later on, the priority that we give it will either be higher or lower than the existing NSG rules. So the Azure command line is now talking to the Azure API and asking it to open the port. That's come back, and, as you can see, we've lost the IP address that we had, so we just use list-ip-addresses again. Let's take that IP
address now and use it to SSH into the virtual machine; the admin user that we specified earlier on has already been created on the machine for us. Let's paste the IP address, and what you see when you first log into this machine is the security warning that the trust relationship hasn't been established yet; we type yes, and from then on that machine will be trusted from the local command-line shell for the target machine as well. So now we just have a standard Linux virtual machine running CentOS. What we now need to do is install httpd, or Apache, because I'm old-school and I like Apache as a web server, and we're also going to use git to check out the code for Dan's website. So let's go ahead and, using sudo to become root: sudo yum install httpd git. Once we hit Return, those packages will be downloaded and installed for us. It takes a little bit of time; we hit yes, we're all good to go for those packages to be installed, and now we have Apache and git installed, plus all the other dependencies. So now that we have all of the services installed that we need, we're just going to check out Dan's website from my GitHub repository: git clone https://github.com/justindavies/baciad, for build a company in a day. So that's all been downloaded for us; we need to go into that directory and copy the files over into /var/www/html. Oh, I just need to make sure that's a recursive copy, so let's go back and add -r. Return. Great, so everything's copied over there. Now that everything's installed we need to start Apache, so httpd start... oh, I need to do this as root, so Ctrl-C that: sudo service httpd start. Great, so Apache is now running for us. Also, if the machine gets rebooted we need to make sure that when the system comes back up httpd is running as well, so systemctl enable httpd, press Return. Great, so it's all good and ready to go now.
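Pulling those steps together into one annotated fragment (the GitHub URL is my best transcription from the audio, so verify it before cloning):

```shell
# On the Azure CLI side: look up the VM's IP and open the web port.
az vm list-ip-addresses -n vm-2 -g IaaSResources -o table   # table beats raw JSON for reading
az vm open-port -g IaaSResources -n vm-2 --port 80 --priority 900

# Inside the VM, over SSH:
sudo yum install -y httpd git                        # Apache web server plus git
git clone https://github.com/justindavies/baciad     # the demo site; URL approximate
sudo cp -r baciad/* /var/www/html/                   # copy the site into Apache's web root
sudo systemctl start httpd                           # start Apache now...
sudo systemctl enable httpd                          # ...and on every reboot
```

Note that -y on yum skips the interactive confirmation that Justin answers with yes in the video.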
Let's take that IP address, go to the web browser, and let's see if Dan's site is up and running for us... and there's the man himself. So we now have Dan's website running on a CentOS server, running as IaaS on Azure. So, we've deployed three different machines by using three different methods, and it's my hope that you're beginning to see just how diverse this product is. Remember, you are in the driving seat, and you work with the tools and utilities that are right for you. But let's now shift focus and move on to our next section: containers. Thanks, Dan. To help us understand a little better, let's think about the traditional software delivery process. You see, in the past you'd design an application, and from there you'd have to consider the operating system and all its dependencies, and while this was fine on a developer's machine, problems would typically arise when you tried to run that code on another. Containers solve this problem by hosting not only the application but the complete runtime environment, meaning that all libraries, binaries and configuration files are bundled into one package, much like that of a zip file. These are not to be confused with virtual machines, which virtualize the hardware and the operating system before running the application; containers should only contain the bare minimum needed to function, which means they can be a lot smaller than a standard OS deployment. Because of the reduced overhead, the container host can run many more containers than it ever could virtual machines. Containers offer consistency between development and production and allow you to go through the development cycle with a lot less risk; this in turn increases the speed of deploying to production, meaning you can react faster to the requirement changes in your applications. Long gone are the monolithic designs and long waits: containers allow the hardware to be taken care of for you, allowing you to concentrate on good-quality applications. For example, if you cast your minds back
a few years, IT had a very different culture. Companies would order the tin, wait for delivery and then deploy code. The process was slow, cumbersome and typically ran as a single solution; the ability to scale their solutions meant buying bigger machines or, if you were lucky, more instances to scale out, that is, of course, if your applications were designed to be distributed. The lack of agility made it unfeasible to break up services, but today, with containers, applications can be divided into smaller, more manageable chunks, meaning that updates can be made to one service without having to update the entire system. Let's jump into a demo and take a look. Microsoft has really embraced containers across a large swathe of the Azure ecosystem. I'll come on to orchestration later, but for now there's a really nice offering we have called Azure Container Instances, or ACI. Container instances are a way for you to tell Azure: I want to run something in a container, give me an IP address and some CPU, memory and storage, and I'll be happy. So let's take our site from before and get it deployed as a container instance. If we go back to the cloud shell, we can do this pretty quickly. We use az group create -n to make a resource group called buildacompanyinaday-aci; this is the resource group that we're going to deploy our containers into, and that comes back really quickly. Then we're going to use the az container command to deploy the Docker image: az container create, -n to give it a name, buildacompanyinaday, -g for the resource group that we just created, and then we need to give it the Docker image that we want to deploy as well, the same image that we used before, from Azure Dan's Docker Hub repo for build a company in a day. Then we want to make sure that it has an IP address attached to it, so we're going to say we want a public IP address attached to this container. Let's go for that, and we have the container coming back now. So let's take a look
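Those two commands, gathered here for reference; the image name is my best transcription of the demo's Docker Hub repository, so substitute your own image:

```shell
# A resource group to hold the container instance.
az group create -n buildacompanyinaday-aci -l eastus

# Run the site as a single container with a public IP address;
# CPU and memory take sensible defaults when not specified.
az container create \
  -n buildacompanyinaday \
  -g buildacompanyinaday-aci \
  --image azuredan/buildacompanyinaday \
  --ip-address Public
```

Afterwards, az container show -n buildacompanyinaday -g buildacompanyinaday-aci -o table reports the assigned IP address.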
at the IP address that's been assigned to it, and here we are: there's Dan's site up and running again, from a container. So that's brilliant; it's a very easy way to get a single container up into the cloud very quickly. Now, we're moving up the levels of responsibility here: we've deployed a Linux virtual machine and deployed a web app onto it, and now we've done the same thing with containers, so the level of management you need to deploy your systems is greatly reduced. It's great that you can deploy a single container to Azure and get up and running really quickly, but if your application or site relies on multiple components or containers, you need something to orchestrate those together. As an example, if I wanted to scale our website from one instance to ten, fifty or even a hundred because the number of people viewing it has gone up, I'd need to have that container as a deployment. A deployment is a way of saying: I have a container, or a pod of containers, that I want to scale up independently; with an orchestrator I can define our site as a deployment unit and scale that up and down. Deploying Kubernetes from scratch is quite an involved process, and what we've done is take the best practice globally, from the people who developed Kubernetes, so that with a simple point-and-click, or using the Azure command line as I'll show you shortly, you can get a fully deployed cluster in a matter of minutes, and we take care of the updates, the resilience and the scalability for you. All you have to worry about is getting your app tested and deployed quicker. So let's go back to the cloud shell, spin up an AKS cluster in East US and get our website up and running and scaled up. First we need to create a resource group for the Azure Container Service cluster: az group create, -n buildacompanyinaday-aks, -l for the location, in this case East US. So that resource group has been created, and now we use the az command line to
create the AKS cluster: az aks create, -n for the name we want to give the cluster, and -g for the resource group that we just created; hit Return. What's happening now is that the command line is talking to Azure and creating a service principal for you, which allows us to spin up infrastructure on your behalf; this becomes important when you're doing things like load-balancer integration. This is going to take a few minutes, so what we can do is pause, and we'll come back in a second. Great, well, that was quite simple. In the background, Azure was spinning up the virtual machines that will run your agent workloads; essentially, anything that we deploy onto AKS will be running on these machines. It also reserves a control plane for you to work with, taking care of the management infrastructure for your Kubernetes cluster, and within the Kubernetes cluster those managers decide where to place your containers and your workloads. We essentially make that service available to you for free, and all you pay for are the container workload machines. Right, so we've got the cluster up and running. What we now need to do is authenticate against the Kubernetes cluster, and we have a command-line option to do this for you, which is az aks get-credentials: az aks get-credentials -n buildacompanyinaday-aks -g buildacompanyinaday-aks. What's happened there is that we've downloaded the certificates for the cluster, so we are now fully authenticated. The cloud shell has the Kubernetes command line, kubectl, already baked into it, so if we just go ahead and type that in, kubectl, we can see the command line is there and ready to go. Now that's great, but we need a workload to push up to the Kubernetes cluster, so we'll use the kubectl command to run a Docker container directly, and we'll mess around with that, expose the service and do some scaling up now. So, kubectl run: we're going
to call the deployment buildacompanyinaday; --image is the Docker container that we've been working with already; and then we're only going to have one replica, so one version of this running at any one time: --replicas=1. We're also going to make sure that Kubernetes is aware that we have a service to be exposed on port 80 within the container. So, very quickly, Kubernetes has taken our request for that deployment and it's now deploying the pod; a pod is the lowest unit of deployment that you can actually get within Kubernetes. We're still pulling the container down... and then we have the running website there, within a pod. So that's brilliant. The next thing that we need to do is actually expose that, to publish that service onto the internet. So we do kubectl expose deployment buildacompanyinaday, which was the deployment we created earlier on; we want to expose it on port 80, and the important thing here is the type, so type equals LoadBalancer. This is really... oh, I've got this wrong; let me go back up again, I've typed the command in wrong: --type LoadBalancer. So I've got to go back and remove the service, one second: kubectl delete service buildacompanyinaday. OK, let's go back up and deploy that again. There you go. So what's happening here is that Kubernetes is talking to the Azure infrastructure and spinning up a load-balancer instance for us and attaching that to the pod that we've just pushed out. Now, this takes a few minutes to attach and to spin up a load balancer, so we'll pause here and I'll come back in a second. We now have the public IP address assigned, so what we can do is just take that IP address and check that the service is properly exposed for us to use. Let's copy that, open up a new tab, paste the IP address... and there he is again. So we've got Dan's website running there. Now, that's great, so, you know, the whole idea is that
we've just told the cluster we want it to deploy a container for us, and that's fantastic, but what about if you want to scale something up or down? So let's just use kubectl scale with the number of replicas and the deployment, which is called buildacompanyinaday... yep, I missed out the number of replicas, so let's go back: --replicas=, let's do three. So what's happening now is that the deployment has been scaled up from one instance of Dan's website to three. Let's check again, and we have three up and running, and the load balancer is now looking after all of the scaling between those three replicas. So there we have it: we've gone on a whistle-stop tour of open source and finished with Kubernetes, and from the outside not a lot has appeared to happen here in terms of having to hand-crank virtual machines or spin anything up for Kubernetes; we take care of all of that for you. We just want you to worry about your code and the applications that you want to deploy into the cloud. You know, the whole idea of Kubernetes is that I am telling it there is a desired state that I want to be in. In this case, the desired state initially was: I want one version of Dan's website running. Then my desired state was to have a service published for that website, and then, more importantly, the desired state of having three of those sites running at the same time. If I have three physical VMs actually running my workload, with a pod on each of those virtual machines, and one of those goes down, the desired state of the cluster will still be maintained, as in Kubernetes will try to run that pod on another machine so that we still have three versions of it, and that's really, really powerful. So I hope that's been really interesting for you. We've explored containers and orchestration, and virtual machines, and all that's left for me is to say thank you for your time.
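To recap Justin's walkthrough, here are the AKS and kubectl commands in one annotated sketch. Names follow the video as best I can transcribe them, and note that kubectl run --replicas was current when this was recorded but has since been removed from kubectl; newer versions use kubectl create deployment or a manifest instead:

```shell
# Create a resource group and a managed Kubernetes cluster. Azure runs the
# control plane; existing ~/.ssh keys are reused for the agent nodes.
az group create -n buildacompanyinaday-aks -l eastus
az aks create -n buildacompanyinaday-aks -g buildacompanyinaday-aks

# Merge the cluster credentials into ~/.kube/config so kubectl is authenticated.
az aks get-credentials -n buildacompanyinaday-aks -g buildacompanyinaday-aks

# One replica of the site, exposed through an Azure load balancer, then scaled out.
kubectl run buildacompanyinaday --image=azuredan/buildacompanyinaday \
  --replicas=1 --port=80
kubectl expose deployment buildacompanyinaday --port=80 --type=LoadBalancer
kubectl scale deployment buildacompanyinaday --replicas=3

# Watch the pods start and find the load balancer's external IP.
kubectl get pods
kubectl get service buildacompanyinaday
```

Each command states a desired state and returns immediately; kubectl get is how you watch the cluster converge on it.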
Feel free to drop me a line on Twitter at @justindavies, or send me an email, if you have any questions about open source in Azure. And now, Dan, it's back to you. Now, whenever you sit in front of a computer, sign into an application or post onto your favorite social media site, you will at some point need to be authenticated. Authentication helps you keep personal information secure and ensures that users only have access to the resources that they're supposed to have access to, so let's take a few minutes to understand what authentication is and how the cloud has made us re-evaluate the whole process. We begin by discussing directory services and the role of the domain controller. To better understand, think about what happens when we sit down in front of our work computer. Before you can use it, you need to log on and authenticate. This may be achieved by using a username and password, or a smart card and PIN, or for some it may even be achieved by using a biometric authentication method such as a fingerprint. Regardless of how it's achieved, your authentication attempt will be cross-referenced against a database to see if you can use the machine. What's more, once authenticated you'll be given a digital token, which is then presented to resources to determine the level of permissions you should have. However, as we move towards the cloud and the abundance of online resources, our authentication requests can start to become disjointed; think about how many times you must log on throughout the day, especially when we are using services that are not directly controlled by our company. By using Azure Active Directory we can effectively create a bridge between the on-premises and online resources, ensuring that our end users get a consistent single sign-on experience. What's more, with its full suite of identity management capabilities, including multi-factor authentication, privileged account management and role-based access control, Azure Active Directory can
help secure your cloud-based resources from security risks while streamlining IT processes, cutting costs and ensuring that your corporate compliance goals are met. So let's jump into a demo and look at how we can connect our domain controller into Azure AD. Back in the portal, let's go ahead and look on the left-hand side to select Azure Active Directory, and we'll see that, as the blade changes focus, we're able to manage attributes such as users and groups, devices and applications, as well as set things up like custom domain names and company branding. What we want to do, though, is look at merging the domain controller that we created earlier with Azure Active Directory, and to do that we will need to ensure that we create an administrative account for the service to use. So from the Azure cloud shell I'm going to run the command az ad user create, with a display name of Azure AD, a user principal name under our tenant's onmicrosoft.com domain, and a password. With that done, I will click onto the new user and then, from the directory role section, ensure that they are a member of the Global Administrators group before clicking on the Save button. At this point we're done with the configuration from the cloud side, so let's jump back into our domain controller and populate it with some accounts. Now, if you recall, earlier on we used a template to create this virtual machine; during installation we invoked a Desired State Configuration tool to build this machine out as the domain controller of the buildacompanyinaday.local domain. So let's now go ahead and open Active Directory Users and Computers. As we can see, this is a base build, so we're going to go ahead and populate our directory with a script. To do this, I'm going to copy the AD users folder onto the VM and then open up an elevated PowerShell window. To begin with, I want to set the execution policy of the machine, so I will type the command Set-ExecutionPolicy Unrestricted.
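As an aside, the cloud-side account creation from a moment ago can be sketched as below; the UPN is my transcription of the demo tenant's default domain, and the password is a placeholder to replace:

```shell
# Create the account that Azure AD Connect will use; it is then promoted to
# Global Administrator through the portal, as shown in the video.
az ad user create \
  --display-name "Azure AD" \
  --user-principal-name azureab@buildacompanyinaday.onmicrosoft.com \
  --password '<a strong password>'
```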
then select Yes to All when prompted. Next, I'll copy the files over to the domain controller and run the buildacompanyinaday AD user script; this will create a handful of users and a local administrative account. From here we can download the Active Directory sync tool and merge our directories together. Before we do this, though, I first want to go back into Server Manager, click onto the local server from the left-hand menu and disable the IE Enhanced Security Configuration for both administrators and users. Once done, I can download and run the Microsoft Azure Active Directory Connect tool. When the Microsoft Azure Active Directory Connect window opens, we will go ahead and accept the license terms and privacy notice and then select continue, which will open the next prompt and invite us to customize or use express settings. Let's go ahead and select use express settings; this will configure synchronization of the identities and passwords in the buildacompanyinaday.local domain and start the initial synchronization between locations. In the next window we will need to enter the details of our Azure AD global administrator, followed by the local administrator account that was added via our script. Then, on the next page, we will notice a warning stating that we don't have a verified domain. Now, this is fine for our demo, but obviously not fit for production, due to the fact that we would want to enjoy a consistent naming convention. As I said, for the sake of our demo this is fine, so we will select continue without any verified domains and press next to continue. Once done, the service will validate and invite us to press install. Now, this will take a few minutes: first the dependencies will need to install, and then the directories will need to connect before they start the initial synchronization. However, once complete, we can exit the installation wizard and head back into the Azure portal. Let's pause the video and come back once synchronization is
completed. OK, we're back, and as we can see, our on-premises accounts have now been populated into Azure Active Directory, giving us unified authentication. So let's look at what we can now do now that our accounts are in the cloud. To begin with, I want to talk about the different SKUs available. You see, Azure Active Directory comes in three flavors: these range from the basic tier, designed for task workers with cloud-first needs, up to the advanced premium SKU, which includes enhanced protection for hybrid scenarios through resilient, hardened security signaling. For our demo we're going to activate a trial of Enterprise Mobility and Security, which includes the premium version of Azure Active Directory. To do so, click on licenses from the left-hand menu, and then, in the new window that opens, all products under manage from the top menu bar. Let's go ahead and select try and buy, and then select the free trial of Enterprise Mobility and Security, followed by activate. OK, so we now have a premium SKU in place; let's start investigating some of this capability. The first thing that I want to do is reiterate the fact that we can of course add a custom domain name to Azure AD. To do so, navigate to custom domain names from the left-hand menu and then click on the add custom domain button. From here we can enter our custom domain name, which we will then be required to verify using either a TXT or MX record. Now, unfortunately I don't have a custom domain name, but the process is as simple as that to complete. Continuing with our customization theme, let's navigate down to company branding. From here I can add backgrounds and messages to my authentication screens, so I will click edit and then select a new background and username hint before pressing save. Now, we will test this out in a while, but in the meantime let's continue configuring our identity service by setting up a self-service password reset policy. To do so, I will click on password reset under manage, and then, in the new
window that opens, select all and then save in the properties window. From there I will select authentication methods from the left-hand menu and make note of the default: email or mobile phone. Now, I could be more restrictive here and require multiple methods, but for our demo I'm happy with the defaults. Moving on, I will select registration from the left-hand menu and require users to register with the service when authenticating; I will also leave the reconfirm-information policy at the default of 180 days. Finally, under notifications, I will select the option to notify all admins when an administrator password is changed, and then press save. Now, I'm going to talk about security later in our session, so the last thing that I want to demo here is enterprise applications and single sign-on to cloud-based resources. Single sign-on allows us to access all of the applications and resources that we need to do business by signing in only once and using a single user account. To put this into perspective, think about the companies that rely on applications such as Office 365, Box and Salesforce for their end-user productivity. Historically, IT staff needed to individually create and update user accounts in each SaaS application, and users had to remember a password for each. Azure AD extends on-premises AD into the cloud, enabling users to use their primary organizational accounts to not only sign into their domain-joined devices and company resources but also all of the web and SaaS applications needed for their job. This means that staff no longer have to manage multiple sets of usernames and passwords, and application access can be automatically provisioned or de-provisioned based on organizational group membership or status as an employee. So let's go ahead and test this with a connection to Twitter. To do so, we will select enterprise applications under manage on the left-hand menu; then, in the new window that opens, we'll click new application, and then under category select social. As
you can see, this changes the focus of the gallery to display social connectors, and I will filter so that it only displays Twitter. In the new blade that opens, notice the single sign-on mode: in our demo we'll be storing the password in the cloud and then assigning single sign-on to one of our users. To continue, press the add button at the bottom of the blade. With our application added, we are taken to a new window, and from here we can set up single sign-on into the application. To do so, select single sign-on under manage, and then in the single sign-on mode select password-based sign-on and press save. From the left-hand menu, select users and groups and then add user, and in the add assignment window we will go ahead and select our Azure Dan user and then click assign credentials from the menu. In the assign credentials window we'll select yes to assign credentials on behalf of the user, and then enter the Twitter logon credentials, followed by OK and assign. You'll notice that our user is now added into the list, so let's go ahead and test this out. To do so, I will open a new session, navigate to myapps.microsoft.com and log in as Azure Dan. Now, as I do this, notice the customization we added displayed on the screen; also notice how I'm prompted to verify my contact information in order to proceed. This will take a few minutes to configure, so once again pause the video and return once verified. OK, so now that I'm verified, I just need to install the My Apps Secure Sign-in Extension into my browser. Once done, I will click onto the Twitter tile on my dashboard, and as you can see, I'm then automatically authenticated into my session. Brilliant. So we now have single sign-on across our applications, self-service password management and a branded authentication experience. Let's continue with our session and build out our next solution. Now, one of the first workloads that inevitably makes its way onto the cloud is that of the development team. In the past this would have had to have been
heavily policed by admins, but thanks to new services such as DevTest Labs, developers and testers can quickly create environments in Azure while minimizing waste and controlling cost. DevTest Labs allow you to test the latest version of your application by quickly provisioning Windows and Linux environments using reusable templates and artifacts. What's more, you can easily set policies on your lab which define things such as the number of virtual machines allowed, their size and software, even schedules to automatically start and shut down your virtual machines. DevTest Labs enable you to create pre-provisioned environments with everything your team needs to start developing and testing workloads; simply claim the environments where the last good build of your application is installed and get working right away. Let's jump into the portal and take a look. So the first thing we will do is navigate to the IaaS resources resource group, and from there we will go ahead and click on add and search for DevTest Labs in the marketplace. You'll notice that two results are filtered: the native DevTest Labs, and DevTest Labs for blockchain, which enables developers to support public, private and consortium blockchains. For our demo we will select the standard DevTest Lab solution and then press the create button to open the configuration blade. Now, we don't have a great deal to set up here; we simply need to name the lab, select our subscription, select our location and then decide if we want to enable auto shutdown and configure tags. We'll call our lab build a company in a day dev test lab and then accept all defaults before choosing to pin to dashboard and pressing create. This will take a couple of minutes to provision, so let's pause the video and come back when it's done. Now, this resource will look a little different from the resources that we've created previously in our demo. This is because the DevTest Lab is a platform for other deployments, and in this screen what we're effectively doing is scoping out
our environment. Let's take a better look. Now, the first thing that I want to do is configure my secrets, which will allow me to save my passwords into a key vault that I can then use later for formulas and virtual machine creation, and this is super simple to do. All I need to do is type in a name and a value, such as VM secret and then my password, before pressing the save button. Next, I'm going to click on configuration and policies. From here I can begin to define the resource types allowed inside my DevTest Lab and place restrictions on costing and operating time. If we move down to settings and then click allowed virtual machine sizes, we can specify the sizing tiers that will be available to our developers. Let's enable this feature and then, in the center pane, go ahead and select the Standard D2 v2 and Standard D3 v2 virtual machines. We'll then move to the top pane and press the save button. If we then move back to the left-hand side and select virtual machines per lab, we can define the number of machines that will be allowed to be deployed and how many of those machines will be granted premium disk support. Next, I want to come down to repositories under external resources. Now, I'm not actually going to configure anything here, but I just wanted to highlight that we are currently connected to a public repo; we could easily add our own by clicking on the add button. Repositories allow you to add custom artifacts into the lab, such as those hosted on your company's private git repositories. Okay, one of the things that I do want to change is the virtual networks. You see, when the DevTest Lab was created it went ahead and configured a new vnet on our behalf, but if you cast your mind back to earlier in the demo, we have already configured a development subnet which I'd like to make use of here. So I'm going to click on the add button, and from there I will select the build a company in a day vnet under the select virtual network area. You'll notice that once selected, the
subnets that we created earlier are displayed. We want to now select both the use in virtual machine creation and allow public IP creation options for the development subnet. We also have the option of sharing the public IP address through network address translation and specifying the maximum virtual machines allowed per user; however, these are not required for our demo, so go ahead and click the OK button followed by save on the virtual network blade. With that done, let's go back to the configuration and policies virtual network section on our breadcrumb menu and then remove the build a company in a day dev test lab virtual network by selecting yes on the confirmation prompt. Okay, now while that deletes, we'll navigate to the virtual machine bases section on the left-hand side and select marketplace images. What we want to do here is specify a whitelist of images that can be used in our DevTest Lab. To configure this, we will select no in the allow all Azure marketplace images option, and then we're going to select the CentOS image and the Windows Server 2012 R2 image. Finally, I want to come down to formulas under virtual machine bases and click add to create a new one. Now, a formula is a list of default property values which are used to create a VM, and you'll notice that as it invites us to choose a base, the only options presented to us are the marketplace images that we defined earlier. Let's go ahead and choose Windows 2012. Next we'll give our formula a name, and we'll call it demo server; in the description I will note that it's going to be a regular server with Chrome and Visual Studio installed. For the username I will type build a company in a day admin, and then for the password go ahead and select the VM secret to retrieve our password from the key vault that we created earlier. The disk size I will leave as is, and I'll also keep the virtual network at its default setting. Now, here's where things get interesting: you'll notice that I'm able to
select artifacts, and artifacts are what's derived from the repositories that we discussed earlier. If I go ahead and select this, it displays a list of components in the public repo, and from here I want to add Chrome and Visual Studio Code before pressing the OK button. As you can see, we now have two artifacts selected, and if I now click on advanced settings I'm able to choose whether this machine will automatically expire after a set time and whether it's claimable by dev test lab users. I'm going to go ahead and stick with the defaults here, so I'll press the OK button followed by create to build our formula. After a few seconds our formula is ready. Now, this hasn't deployed any resources; it has simply created a template for when we wish to do so. Let's test this out by navigating back to the DevTest Lab and then clicking on add in the center pane. From here you will notice that we have three options: we can select the formula that we just created or the two marketplace images we defined earlier. I'm going to go ahead and select our formula. As the blade opens, we can see that 90% of the parameter fields have been completed, and really all we need to do is name the VM, which I'm going to call dev test lab vm1. From there we will go ahead and press the create button and our machine will deploy. Now, this will take slightly longer than the VMs we've deployed earlier; this is because not only are we initiating a VM from the marketplace, we're also applying the artifacts. So let's go ahead and pause the video and we'll come back once complete. And we're back. Hopefully, if you've been following along, that took no longer than a few minutes to deploy, and we're now in a position where we can authenticate into the machine. To do so, I will click on the dev test lab vm1 machine in the middle pane and from there press the connect button to download an RDP profile. Once connected and authenticated, you'll notice that I have both Chrome and Visual Studio installed and prominent on the desktop.
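As a quick aside for anyone following along who prefers scripting to the portal, a lab VM deployment like the one we just walked through can also be driven from the Azure CLI. This is only a sketch under stated assumptions: the resource group, lab and formula names below are made up to match our demo narration, the `az lab` command group must be available in your CLI version, and the commands need a live Azure subscription to run.

```shell
# Sign in interactively and target the subscription used for the demo.
az login

# Create a VM in the lab from the formula we built earlier.
# All names here are assumptions matching the demo, not real resources.
az lab vm create \
  --resource-group iaas-resources \
  --lab-name build-a-company-in-a-day-dev-test-lab \
  --name dev-test-lab-vm1 \
  --formula "demo server"

# Confirm the machine now appears in the lab.
az lab vm list \
  --resource-group iaas-resources \
  --lab-name build-a-company-in-a-day-dev-test-lab \
  --output table
```

Because the formula already captures the base image, size, credentials and artifacts, the create call needs very little else, which mirrors how the portal blade pre-populated 90% of the fields for us.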
This proves a successful formula deployment. Now, before we wrap this up I just want to show you one last thing. You see, in a development environment you might want to be able to replicate workloads, and sometimes the best way to do this is via a custom image. Custom images enable you to create VMs quickly without having to wait for all the required software to be installed on the target machine. If you think about the dev test lab vm1 machine that we just deployed, not only did it have to install the operating system, it also then had to download and install the artifacts; things would be much faster if this were all on disk. Fortunately, DevTest Labs enable this to happen, and to illustrate, I'm going to create a custom image from our deployed machine. To do so, I will navigate to the left-hand menu and from there select create custom image. When the blade changes focus, I will name the custom image dev test lab image and then type in a description. The next thing that I will need to do is specify that sysprep has not been run on the virtual machine and that I require it to do so. You'll see that when I select run sysprep on the virtual machine, a notification will appear, basically telling me that sysprep will generalize my machine, meaning that everything that is unique, such as the name and security identifiers, will be destroyed in the image creation. In layman's terms, we are killing the machine so that it can be cloned and then uniquely deployed many times thereafter. So let's go ahead and accept by pressing the OK button. Again, this will take a few minutes to create, but when complete it will appear as a new option during machine creation. Let's go ahead and pause the video and come back once done. With the image creation complete, we'll notice that it has now become a deployment option on the add menu. Okay, with that done, let's jump out of the portal and discuss our next technology. So let's change focus and investigate how we can protect our workloads in
the cloud, and I want to concentrate on two areas here: Azure Backup and Azure Site Recovery. Azure Backup replaces your existing on-premises or off-site backup strategies with a cloud-based solution that is reliable, secure and cost competitive. It offers multiple connectors depending on the type of workload that you wish to protect, and provides many benefits including automatic storage management, unlimited scaling, encryption and long-term retention, with a limit of 9999 recovery points per protected instance. Backups are compressed to reduce storage space, and in certain scenarios deduplication is employed to ensure that file protection is fast and efficient. What's more, every backup component supports both incremental and differential backup options, ensuring time-efficient transfers are commonplace. Azure Site Recovery, on the other hand, orchestrates and manages disaster recovery policies and is a key service in helping organizations keep their data safe and workloads running when planned and unplanned outages occur. ASR replicates Azure virtual machines between Azure regions, but also on-premises VMs and physical machines, either to the cloud or to a secondary physical location. Let's jump back into the demo and look at how we can use Azure Site Recovery to replicate an Azure VM. Let's begin by creating a new VM. Now, for the sake of transparency, I'm creating this VM in the East US region purely because of a limitation of the trial pass I'm using to record this session. Nothing new to see here, so I will pause the video and we'll return when our machine has been deployed. With our machine deployed, we'll select disaster recovery from under the operations heading; this will in turn open up the disaster recovery configuration pane. Under the target region I will ensure that East US 2 is selected. Now, when you enable Azure VM replication, the following resources will be automatically created: the target resource group, network, storage accounts and
availability sets. You'll also notice a cache storage account, which is used before a source VM change is replicated to the target; this ensures minimal impact on production applications running on the source VM. Now, this will take about 20 minutes to configure, so I'll pause the video once more and we'll come back once completed. And we're back, so let's once more go to the virtual machine and then select disaster recovery from the left-hand menu. We can now see that we're able to both failover and perform a test failover, so let's go ahead and click on the test failover button, which will open up a new blade. From here, notice how we can select the recovery point and virtual network. We can see that we're going from East US to East US 2; for our recovery point we're going to select the latest, and then under the Azure virtual network we're going to select the one that was created when ASR was enabled. Now, make note of the warning, but for our demo this is going to be absolutely fine, so we'll say OK, and once more we'll pause the video and come back once the test failover is completed. With our test failover complete, let's click on the notification to open the job summary. Notice the stages the failover went through in creating the test VM; this proves the failover is operational. So let's navigate back into the disaster recovery section of our virtual machine and clean up our test failover. From here we'll type something into our notes and then finally opt to delete the failover virtual machine. Awesome, so we now have disaster recovery and automatic failover configured for our virtual machine. What about protecting the VM with backup? From the portal we'll select the virtual machine and then backup from under the operations heading. In the blade that opens we'll create a new recovery services vault named VM backup and a new backup policy, accepting the defaults. When done, press the OK button followed by enable backup. Now, this will take a few minutes, but once the
configuration is complete we'll be able to test. To achieve this, let's head back into the VM blade and once again select backup under operations. From the blade that opens we'll select backup now, accepting the default retention, and then press OK to proceed. From notifications we'll select the triggering backup of the virtual machine notification to open up the backup jobs and note the status of the VM. Now, this will take around 30 minutes to complete, so once more we'll pause the video and return when done. And we're back, and we can see that our virtual machine backup is now complete, so let's perform a test restoration. To achieve this, we'll once again head into the backup area of the virtual machine. In the top menu we'll select restore VM, and in the new blade that opens, select the last crash-consistent backup. In the restore configuration pane we can select our restore type: from here we can either choose to create a new virtual machine or restore disks to a staging location. For this exercise we'll choose to create a new virtual machine, name it VM restore and keep it in the default resource group. We then press OK to proceed. Once more we'll click on the jobs notification to launch the backup jobs window, and from here we can monitor progress, which again should take around 30 minutes to complete. So for the last time, let's pause the video and we'll come back once this is completed. Okay, we're back. Let's navigate to the virtual machines tab on the left-hand side of the portal, where we'll notice a new running virtual machine titled VM restore. This once again proves that our BCDR strategy has been successful and demonstrates the versatility of the platform. Now, I appreciate that this was a long section, but with the power of video editing we've been able to streamline it into an easy-to-consume segment. I invite you to further study to see how this technology can also be used to protect on-premises resources, both in a virtualized and physical state. For now, let's jump out of the portal and
discuss our next technology. So, up until this point we have really concentrated on infrastructure as a service, and while I'm the first to admit that we can achieve great things with what we have seen, for me the true power of the cloud is embraced with PaaS, or platform as a service. I'm going to spend a little time demonstrating the power of PaaS and highlight some of my favorite tools in the Microsoft cloud. These are the tools that I use every day: tools that allow me to run my website, automate my social media, gauge public opinion and store useful data. What's more, all of this is achieved without me having to configure a single virtual machine. I think you're really going to love this section, so let's begin. We'll begin by discussing app services. Now, app services is the umbrella name for four key solution areas: web apps, mobile apps, API apps and logic apps. Let's look at each of these in turn. Web apps are a fully managed compute platform that allows you to build and host web applications in the programming language of your choice. It offers auto scaling and high availability, supports both Windows and Linux, and enables automated deployments from GitHub, Visual Studio Team Services or any git repository. The price you'll pay is significantly lower than running the experience on an equivalent virtual machine, and is largely determined by the app service plan you choose. Mobile apps provide developers with a platform that's both highly scalable and globally available. With mobile apps you can build native cross-platform applications for iOS, Android and Windows that seamlessly connect into enterprise resources through single sign-on experiences. What's more, you can empower a more productive workforce by building applications that synchronize data in the background with your enterprise data sources and SaaS APIs. Finally, with push notifications you can reach millions of people in seconds, engaging your customers with personalized messages regardless of their preferred device. Now, we mentioned APIs
a moment ago, but what are they and what do they do? In simple terms, an API is nothing more than a set of instructions that define how an application talks to another. Think about the times you've used copy and paste, or when we visit the website of our favorite restaurant and see the directions supplied by a third-party mapping service. APIs make this sharing of code possible, and they do this by exposing just a little bit of a program's functions to the outside world. The Azure API service makes it easy to develop, host and consume APIs both in the cloud and on-premises. You'll get enterprise-grade security, simple access control, hybrid connectivity and seamless integration with logic apps. Finally, logic apps: hands-down my favorite thing in the cloud and a strong area of focus in an upcoming demo. I use logic apps extensively, from automating my social media presence to pushing information down onto my smart mirror at home; I even use them to gauge how well my presentations are received by means of sentiment analysis. Trust me, logic apps will change the way you think about business logic, with simple, scalable workflows that are triggered by events and able to talk with other services by means of a rich catalog of connectors. What's more, we don't even have to start from scratch; we have many pre-existing templates ready for us to build upon. So let's jump into a demo and start building this stuff. The first thing that we're going to do is deploy our website, and if you cast your mind back to earlier, we've already done this a couple of times, both as a fully fledged website on Linux and then as a container solution. What we want to do now is deploy a website as part of an Azure app service via Visual Studio. So let's go ahead and open the website in Visual Studio and then click on index.html on the right-hand side. Now, you'll notice that there's not much to it; it's just a simple HTML file which is currently calling on a script to load my
Twitter feed and display it on the left-hand side of the page. If you want to change to your own Twitter feed, you'll just need to replace the href code with your own Twitter handle, minus the @ character. Let's go ahead and run this locally so that we can make sure the code is working. To do so, we'll move to the menu bar at the top and click on the green play button next to the name of our browser, in my case Google Chrome. The page will compile and display a prompt asking if we would like to enable debugging. Now, usually this would be a great idea; however, due to the simplicity of our code, I think we could probably work out any issues, so we choose to proceed without. As you can see, the page opens and displays a backdrop with our Twitter feed on the left-hand side. At this point we can close the browser and begin creating our web app. Let's jump back into the Azure portal, and from the dashboard press the key combination G + R to open the resource groups. From here we'll go ahead and click on the add button to create a new resource group, naming it paas resources, confirming our subscription and choosing West Europe as our location. When ready, we'll press create. Now, we'll give it a second to create and then refresh the page; once done, let's click into the resource group so we can begin building out our web app. To do so, click on the add button on the top menu, and from the marketplace search for web apps. You'll notice many offerings, but we will choose the native web application option, which should be top of the page. When the new blade opens we will click create. For the app name we need to think of something unique; this is because these are all subdomains under the azurewebsites.net name. Obviously we can add our own domain name once deployed, but for now we'll give it something obscure so that we avoid any conflicts. I'll call mine build a company in a day website as the azurewebsites.net name. I'll then check to see if I'm happy with the subscription being used and that it has
been deployed into our paas resources resource group, with an operating system running Windows. The next thing that I'm going to need to define is the app service plan. Now, an app service plan is the set of compute resources that I want to use to run my code, and just like a conventional hosting scenario, I'm able to run multiple workloads if capacity allows. Let me just take a timeout here to clarify something: earlier I said that no VMs are used in PaaS, but to be more specific, it's no VMs that you need to manage. You see, Microsoft will look after everything from scaling to protection, leaving you with an abstracted administration portal to define your requirements. So let's go ahead and click on the create button. Now, in the new blade we'll name the ASP build a company in a day web ASP and ensure it is located in the West Europe region before clicking on the pricing tier. In the pricing tier blade you'll notice that you can choose shared compute models, which in some cases will be free of charge; what's more, I can then scale to large deployments running into hundreds of pounds each month. I'm going to go ahead and select the S1 standard tier. This will give me a single dedicated core with 1.75 gigabytes of memory; what's more, it will allow me to automatically scale up to ten instances when needed, as well as provide me with daily backups and the ability to create deployment slots, which can be used for staging. From here I will press the select button followed by OK. The last thing that we need to decide is whether we want to enable Application Insights. These will help us both detect and diagnose quality issues whilst providing better insight into how our web app is being used; for that reason I'll set it to on and then set my Application Insights location to West Europe. Finally, I will choose to pin to dashboard and press the create button. Once deployed, we'll begin exploring our application options. Now, in our overview pane we have some basic insights, things like data in,
data out and average response time. Don't be surprised if we don't have any metrics displayed; the web app has only just been created, and furthermore we haven't yet uploaded any content to our site, so let's revisit these counters later. If I move up to the essentials section, you'll notice we have the URL and ASP tiers as well as blank FTP connection details, so let's go ahead and set that up. If I move over to the deployment credentials section, I can then add my FTP deployment username, build a company in a day FTP, and my password; once done we'll press the save button. Oops, now it looks like we have an error here because the FTP credentials are not globally unique, so let's make a small modification to the name and save again. Okay, so next I will move down to the deployment slots, where I'm able to create additional staging environments. Now, I don't want to spend too much time on this section at the moment, as we'll be revisiting it later when we have uploaded our site, so for now let's proceed with the demo. If I move down to the next section, deployment options, I'm able to specify the deployment source; this could be Visual Studio Team Services, GitHub or even OneDrive if I choose. For now I'll leave the defaults and head back using the breadcrumb menu. Next we have the application settings, and from here I'm able to configure things such as the version of .NET, PHP or Java that I wish to use; I can also decide the platform, affinity and debugging options required. Next, in authentication / authorization I'm able to define authentication services such as Azure AD, Facebook, Twitter or anonymous authentication. For our demo we'll want to lock down our website so it's only available to staff, so we'll be revisiting this section once our site has been deployed. In the backup section I can define a backup policy, so let's go ahead and click on configure at the top of the menu bar to get this set up. In the new blade that opens I will select storage settings and then press the
storage account button to create a new backup location. I'll name the storage account build a company in a day web and ensure it is unique in the .core.windows.net domain. From here I will set the replication as locally redundant and the location to West Europe before pressing the OK button. Once deployed, I will click on the account build a company in a day web and then create a new private container named backups before pressing OK. When done, that should take us back to the backup configuration pane; from here I will set scheduled backups to on, accepting the defaults before pressing the save button. You'll now see that the pane changes once more, allowing me to both backup and restore. Heading back to the left-hand menu, we can see that we're able to set both a custom domain and SSL certificates. I'm also able to configure networking and choose whether I want to set IP restrictions, CDN endpoints or integrate into existing virtual networks. Next are the options to scale up or scale out, but remember, with the cloud mindset it's not always about being bigger; often it's about quantity and being clever with sizing. We have configured an app service plan that enables auto scaling, and as such we can add up to ten instances if required. Okay, let's jump down to the section titled development tools. From here we can log on to our website using the cloud-based console or even launch the App Service Editor. What this means is we can service our web app anywhere we have access to a browser and an internet connection; what's more, with tools such as Kudu and Resource Explorer, I can gain valuable insights into my deployments when needed. We'll revisit the App Service Editor a little later in the demo, but for now let's get this page deployed. To do so, I will once again click on the overview section in the left-hand menu, then from the top menu choose get publish profile. What this will do is download a connection profile that we can
then use to push content into the web app from Visual Studio. Once downloaded, we'll jump back into Visual Studio, and from the solution explorer on the right-hand side I'm going to right-click our website and choose publish web app. In the pop-up window I'll select import for the publish target and then browse to the downloads folder to find our publish profile before pressing the OK button to proceed. In the connections section you'll notice that all of the information has been populated, meaning that all we really need to do at this point is press the publish button. Once done, you'll see the Visual Studio output pane upload the code and set the access control lists before a new browser window is opened and our page is displayed. And that's it, we're done. It's almost too easy, isn't it? But that's the great thing about Azure; we're empowering you to do amazing things with minimal effort. Anyway, let's jump back into the web app, create a deployment slot and then lock down our authentication. Let's go ahead and create a new deployment slot so that we can begin working on version 2 of our site. We'll call our slot dev and then choose not to clone from the original; finally, let's press OK to proceed. Now, that will take a moment to build, but once done, let's go ahead and click into the new deployment slot. You'll notice that the console looks similar to the one before; what's different in this instance is that our slot has a new URL associated with it, and if we click onto it we are redirected to the blank template. Now, in order to upload to our development slot we will again need to download a publishing profile, so let's go ahead and download it from the top menu. Then, from back inside Visual Studio, I'm going to rename my website to build a company in a day v2 and upload to the new slot, ensuring that I select the dev profile from my downloads folder. As before, the website will open at the new URL, but we now have the option of swapping between the development and production slots by
toggling the swap button in the Azure portal. Okay, so back in our main web app we're now going to enable Active Directory authentication. To do so, I will select authentication / authorization from the left-hand menu and then set App Service authentication to on. Under authentication providers I will move down to Azure Active Directory and then click to configure. In the management mode I will select express and then OK. Back in the main window I will change the action to take when the request is not authenticated to log in with Azure Active Directory, and then press save. So let's now test this out. If I click back into overview from the left-hand menu and then onto our website URL, we should now be prompted to authenticate. Now, obviously because of cached credentials I was prompted for authentication but then passed straight through, so let's look at what happens if I try that again from an incognito window. Notice that not only am I asked for a username to proceed, but the customization from our earlier demo is also in effect. Okay, let's jump out of the portal and think about our next demo. So, as I said earlier, logic apps are my favorite thing in the cloud: true serverless computing, simple workflows and an abundance of connectors. These things will change the way you work, and I know that when you see what they can do, you'll fall in love with them too. We're going to build out a couple; the first will be used to automate our social media and push out a Twitter post whenever a new Azure blog is created. Let's get it done. From our portal I'm going to press the key combination G + R to display our resource groups, then from here select paas resources. Once inside, I'll move up to the menu bar and press the add button before searching for a logic app from within the marketplace. When the new blade opens, we'll simply press the create button. Now, in the configuration blade you'll notice that we have very little to do; in fact, I'm just going to name our logic app RSS feed and ensure that the subscription, resource group and location
properties are correct. I'll also choose to enable Log Analytics before selecting to pin to dashboard and pressing create to begin deployment. Now, this should only take a few seconds to complete. Once deployed, let's go ahead and create a new blank logic app; then, in the logic app designer, I'll go ahead and select the RSS connector. From here let's select the trigger RSS, when a new feed item is published, and then type the URL for the Microsoft Azure blog into the dialog box. Once done, I'll go ahead and set the interval to 30 and leave the frequency as minute. With our RSS properties configured, let's go ahead and press the new step button; then we will select the add an action button to proceed. Once more we are asked to choose an action, so in the connector search box I'll type the word Twitter so that we can configure our social media posts. Now, at this point you'll notice that the Twitter connector offers many more triggers than the RSS connector; however, all we want is a simple tweet to be sent, so let's go ahead and select the top trigger, post a tweet. Once selected, we're going to need to authenticate into our Twitter profile, so I'll go ahead and press the sign-in button and then enter my Twitter handle and password before pressing authorize. You'll see that once authenticated, our connector properties box changes once more, this time prompting us to enter the tweet text. But here's the cool thing: look at what happens when we click inside the properties box. Because we've already established an RSS connector, dynamic content is returned, meaning that we can now simply click on what we would like to be tweeted from the RSS feed. Let me demonstrate: I'm going to select both the feed title and the primary feed link from the dynamic content pane and then head to the top of the menu bar and press the run button. Now, proving that this works may be a little tricky; you see, we're at the mercy of the bloggers at Microsoft, meaning that we will only see a post after they create
a blog. Anyway, to get an idea of what this is doing, let's head over to my personal feed. Now, I've been using logic apps like this for months, and you'll notice the posts as we scroll down my timeline. Okay, with that done, let's discuss our next piece of the demo. Now, this next logic app will be a little bit more complex; however, it also yields a higher business value. You see, what we're going to do here is build a social sentiment engine, one that will monitor our Twitter presence and report back on any negative tweets received. To do this we're going to use Cognitive Services, a collection of APIs, SDKs and tools that enable developers to build more personal computing experiences, with applications that are more intelligent, engaging and discoverable. And what company wouldn't want this? So let's jump back into the portal and make it happen. Once again, let's go to our existing resource group by pressing the key combination G+R. Now, the first thing that we're going to do is configure a Text Analytics API, which is one of Microsoft's Cognitive Services offerings. To do so, we will simply click on the add button at the top of the menu and then from the marketplace search for Text Analytics API. When the new blade opens we will simply click on the create button to proceed. Now, here's the impressive thing: for a service that will offer so much in terms of enrichment, look how little we actually have to configure. We simply need to provide a name, which I will call social sentiment, and then select our subscription, location and resource group, which in our case will be the existing resource group. I'm also going to have to choose a pricing tier; now, this ranges from thousands to millions of transactions per month. For the sake of our demo I will choose the F0, or free, tier. Finally, I will select the checkbox to pin to dashboard before pressing the create button, and that's it. After a few seconds the service is ready. All I need to do now is copy the endpoint URL from the
main blade into Notepad, and then navigate to keys under resource management and copy the name and key 1. Next, let's open a new tab and head over to Power BI. From here we're signed in to our account, and in the workspace we'll create a new streaming dataset. To do this I will navigate to workspaces from the left-hand menu and then select create app workspace. I will then name my workspace build a company in a day and ensure that my account is added as a workspace member. Once done, let's press save. When the workspace has been created, I will expand it from the left-hand menu and then click on datasets. Now, you'll notice that we have no datasets, so let's go ahead and build one out. To do this, simply click skip for now on the main page and then create streaming dataset. In the new pane that opens, let's go ahead and select API as the source of the data and then click next. In the dataset name we will type build a company in a day stream, and in the values from the stream we will type the following key pairs: tweet text as text, location as text, created at as datetime, followers as number, and sentiment score as number. Finally, we will ensure that we have set historic data analysis to the on position and press the create button. Now let's head back into our Azure portal and create our logic app. From the dashboard we will once again type the key combination G+R and then select our existing resource group. From there I will click on the add button at the top of the menu and search once more for a logic app. We will name the logic app social sentiment and ensure that it's in the right subscription, resource group and location, before selecting to turn on Log Analytics, pin to dashboard and press create. Once created, we will select a blank logic app from the Logic Apps Designer and then add the Twitter connector. Now, because we've already authenticated to the connector, it will take us directly through to the triggers. Let's go ahead and select when a new tweet is
posted, and search for the hashtag build a company in a day. We will change the interval to 30 and the frequency to second before pressing the new step button and selecting add an action. In the search all connectors and actions box, let's go ahead and type text analytics, and then, once found, select the Text Analytics connector. Of the three options returned, let's choose Text Analytics detect sentiment, and then type social sentiment for the connection name. The account key will be key 1 that we copied into Notepad, and the site URL will be the endpoint. Once done, let's go ahead and press the create button. Now, once the connector is configured, we will click inside the text to analyze box and then select tweet text from the dynamic content blade. I will also show advanced options and select en as the language. Once again I will choose to add a new step and then select add an action. From here I will search for the Text Analytics connector, but this time choose key phrases, and tweet text from the dynamic content blade. Moving on, I will add a new step and then select add an action. From here I will search for Power BI and then select add rows to a dataset. Now, I will need to sign in to the service, so let's go ahead and click on the sign-in button, and from here I will enter my logon credentials to authenticate. Once authenticated, I will be presented with three configuration parameters, and I will need to select my workspace, dataset and table. For the workspace I will select build a company in a day from the drop-down menu; then for the dataset I will select the build a company in a day stream that we created earlier. Finally, in the table I will select real-time data from the drop-down menu, which, as you can see, expands the parameters to include the fields that were specified earlier. Now, what we are going to do here is dictate the information that we will pass back to Power BI that will help us visualize our follower sentiment. Let's go ahead
and configure the following: for tweet text I will select tweet text; for location I will select location; created at will be created at; followers will be the followers count; and sentiment score will be score. Now, we won't see this visualized data until later in the presentation when we create our dashboard, but when we activate our logic app in a moment, any tweets we receive will be sent to Power BI via the streaming dataset that we created earlier, and will therefore be ready for visualization when we get to it. For now, I want to create a new action that will notify me of any negative tweets received, so let's go ahead and click on the new step button, but this time select add a condition. In the condition parameters box we'll select score, then select is less than, and finally 0.4 as the value. Then, in the if true parameter section, let's go ahead and select add an action. From here let's search for Office 365 Outlook and then send an email for the action. Again, because this is a new connector we will need to authenticate, so I will click on the sign-in button and enter my corp credentials. Once authenticated, I will go ahead and send an email to my address with the subject negative tweet detected; then in the body I will type the following: name tweeted a negative tweet. The tweet reads: tweet text. They have followers count followers. Please investigate. With that done, all that's left for us to do is test, so let's go ahead and hit run on the top menu bar and then create a negative tweet. As you can see, it doesn't take long to appear in our mailbox, proving that the logic app is working. So with that done, let's jump back into the slides and discuss how we're going to visualize. Okay, so let's spend a moment talking about data. Now, we generate a lot of data; in fact, it's currently estimated that we generate about 2.5 quintillion bytes of data every day, and that number continues to grow. In fact, if you look at the research, 90% of the data in the world
has been generated within the last two years, and that includes the 3.5 million text messages, the 456 thousand tweets and the 47,000 Instagram posts that we generate every minute. That's right, every minute. It's not difficult to see that data is big business, and being able to capture, process and act upon it is critical to our continued success. But where will the data come from, and how will it evolve? Well, one of the platforms that will generate this data will be IoT, or the Internet of Things. Today we have over 8.5 billion devices, but by 2020 that number is estimated to rise to well over 20 billion. This will be connected cities, connected cars, connected agriculture, even connected humans. It's true: I'm already cloud connected, with a device that monitors my blood glucose and enables both my medical team and family to check on my well-being. It's an amazing time for technology, and we're now capable of incredible things. Now, Microsoft offer an abundance of data services. These include warehousing technologies that store the data, factories that process the data, and tools such as bots and Power BI to visualize the data. In our demo we are going to want to visualize the stream of data that was fed into Power BI from our logic app, so let's jump back into our demo and see how it's done. From Power BI, let's go ahead and click on create report from the build a company in a day stream to open a new report window. From here you will note a blank canvas, but on the far right, under the real-time data, you'll see the parameters that were configured earlier. Now, the first thing that I want to see is what's being tweeted. To do this I'll simply click on the tweet text from the right-hand menu, which in turn will open a table containing our tweets in the center window. Next, we will gauge the overall satisfaction of our followers by dragging the sentiment score over to the main window. Now, I want to make this a little more visually appealing, so I will go ahead and change the visualization to
gauge. Finally, I want to find out where the tweets were made, and I can do this by adding location and changing the visualization to map. And that's really all there is to it: we can manipulate the data and visualizations to meet our needs, and then share the dashboards with those who have a common interest in the data. One of the more exciting ways to interact with our data is via bot technologies, and over the past few years we've really seen an explosive growth in their popularity. To really understand why, we need to think about consumers and how they interact through mobile and online services. In the past, customers were happy to search the internet, visit websites and navigate through menu systems until they found what they wanted, but today, in a world that moves at the speed of cloud, this is simply no longer the case. Most will be using messaging technology such as Skype or Facebook Messenger, and companies are looking at ways to tap into that captive audience through these services. Think about the times that you've ordered pizza. Now, in the past that would have been a phone call; then, as technology evolved, through a website, and while that was fast, it still meant having to talk to a person or navigate through a site to select the pizza of choice. Today things are different. Today pizza can be ordered through Twitter, Facebook or, to the delight of my son, through a smart speaker, by simply asking for it to feed me. And that really emphasizes the power of bots. You see, it's no longer about the person having to understand the machine; it's about the machine understanding the person, and simple conversational interactions are putting the end user firmly in the driving seat. Now, we won't be building a pizza bot, sorry, but we will be building a super easy chat bot using some predefined content that we can then connect into our website. Let's jump back into the portal and take a look. Before we configure anything in the Azure portal, I'm going to open up a new tab and head on
over to qnamaker.ai. Now, this site will allow us to quickly configure a new bot, and once I've signed in I will head up to the main menu bar at the top of the screen and select create a new service. This will open up a new page, and we'll start by naming our service build a bot. Next, we have three options to choose from. We can populate the payload of the bot by consuming an existing FAQ page, simply pointing to its URL. Failing that, we can upload an existing FAQ by supplying the site with a tab-separated file, PDF, Word document or spreadsheet. Finally, we can choose to create the content from scratch by building our question-and-answer pairs directly. I'm going to go ahead and select to upload a file, and then from the demos Azure bot folder I select the Azure bot CSV file. Now, that should be about 33 kilobytes in size, and once uploaded I will scroll down to the bottom of the page and click the create button. After a few seconds our knowledge base will load, and from here you will see 70 question-and-answer pairs. Now, I've created this bot to answer questions related to Azure, giving a simple explanation and then pointing to docs.microsoft.com for further reading. To test, let's move to the left-hand menu and click on the test link. The page will once again change, and you will see our chat bot presented along with its configured welcome message. From here I will test the bot by asking it to tell me about virtual machines. As you can see, not only is a simple explanation returned, we now have a link to the document repository for further study. Okay, our bot is ready; it's really that easy. So let's go ahead and get it published. To do so, I will move up to the top of the page and click on the publish link. The page will change once more and invite us to review our changes, but I'm happy with what we have, so let's click on the publish button to proceed. Now, with our service published, we are provided with some sample HTTP request code. This can be used as a foundation to start
building our chat bot front end. However, if we head back into the Azure portal, I'm going to show you an even easier way to start interacting with our bot service. From the portal, let's click on create a resource from the left-hand menu, and then in the marketplace I will search for web app bot. From here I will press the create button, and in the new blade that opens enter a bot name, build a company in a day demo chat bot. Next, I'll confirm that I'm happy with the subscription being used and select our existing resource group, ensuring that we're in the West Europe location. For the pricing tier I will select F0; this is the free tier and will give me access to thousands of messages, more than enough for our demo. For the app name we will leave it as is; then in the bot template I will go ahead and select question and answer. For the app service plan, I'm going to use the plan that we created earlier; we have more than enough capacity to run both our website and our chat bot, so there's no need to create another. A storage account will need to be configured to store our bot state, and I'm happy with the defaults here, so I'll move on. For Application Insights I will ensure that it's set to on and set my location to West Europe. Finally, I'm happy to auto-create the app ID and password, so I will choose to pin to dashboard and press the create button. Now, this will take a few moments to complete, so let's pause the video and return once done. Okay, with our web app bot provisioned, we just need to connect it into our bot service. To do so, I will click on application settings under app service settings and then add the knowledge base ID and subscription key from QnA Maker. With that done, let's go ahead and press save. Now, at this point we're in a position to add our chat bot into our website. To do so, I'm going to click on channels from the left-hand menu, but just make note of all of the connectors that you have available to you: you can connect into websites,
Skype, Teams, Facebook and many more. What we want to do is get the embed code for our bot, and I will do this by clicking on get bot embed codes. At the prompt we'll click to open the web chat configuration page, and then in the new window that opens copy the embed code into Notepad. At this point you will notice that we need to enter a secret. To do this, let's go back into the channels page and copy the key. Now that we have our key, let's paste that back into Notepad and then make sure that our iframe is set to maximum width and height. With that done, we'll copy the whole line and update our webpage. To do this, let's navigate back to the Azure portal and into our website. Remember earlier we spoke about the online editor? Well, let's now use this to update our page. From under the development tools I'm going to select App Service Editor, which opens the main window in a new browser tab. In the editor window I will click on the index.html file and then paste the code underneath line 25. Now, this will automatically update, so let's navigate back into Azure and click on the URL to open our website and test the new code. As you can see, after a page refresh we now have a chat window present on the right-hand side, and if I type virtual machines you'll see that we're presented with a summary and a link to the virtual machines documentation on docs.microsoft.com. Now I want to spend a little more time talking about the other cognitive services, and in particular the vision APIs. You see, with this service we can distill actionable information from images, text, handwriting and even video, and if we think about the practical applications, it could be used to determine mature content, locate faces, or even categorize the context of an image, such as identifying a celebrity or landmark. If you look at services such as Facebook, they are already using vision technologies; for example, when we upload images the service will usually identify friends and associates and
then offer to tag on our behalf. Or perhaps the technology could be used by police and law enforcement agencies, using the vast network of CCTV cameras in our cities to help identify suspicious behavior and petty crime. Let's jump into a demo and look at this in action. I've already built the front-end code, so all we are going to need to do is create the vision API service in Azure. So, back into our portal: from the left-hand menu I will click on the plus sign to open the Azure marketplace. From here I will search for the Face API, and then, when the new blade opens, simply press the create button. In the new blade I will type build a company in a day vision for our name and then ensure that I'm happy with the subscription and location. For the pricing I will go ahead and select F0, which is the free tier, offering face detection, identification, verification, grouping and searching for up to 20 calls per minute and thirty thousand calls per month. Next, I will select our existing resource group and confirm that I've both read and understood the notice on screen. Once done, I will select pin to dashboard and then press create. After a few seconds the service will be deployed. Now, I want to grab two things here: the endpoint URL from the main blade, and then the access key, by clicking on the link show access keys, both of which I'll copy into Notepad. Now, as I said previously, our code has already been created, and I believe that it originated from a piece of work by Scott Hanselman, so props to him. Anyway, from Visual Studio I will navigate to the main window's code and move down to the section named FaceServiceClient, where I'll replace key 1 with the key that we previously copied into Notepad. We'll also need to verify that the endpoint URL matches the one that is defined in our script. With that done, let's move up to the top menu bar and press start to test. As we can see, our program opens, and if we click on the browse button we can navigate to the demos
folder and select the jpg image. From here we can see that the Face API service has detected my face, and if I move the mouse over the square I'm presented with the analysis of the image, giving my gender, age, emotion and feature metadata. Again, I invite you to investigate further, because what we are achieving with cognitive services is simply incredible; head over to our cognitive services page and play with the sample code and demo projects. Okay, let's move on to our next section and discuss security and monitoring. Now, this is usually a hot topic when I meet people, and we must address a recurring question: is the cloud safe? The short answer is yes. In my opinion, the cloud is by far the safest place to store and run resources. And I get it, it's scary moving your data to a remote server where it's not under your direct control, but the reality is cloud providers such as Microsoft have spent billions of dollars on building these fortified data centers, and your data is encrypted both at rest and in transit, using either client-side or server-side encryption technologies. What's more, your virtual machines can be protected using Azure Disk Encryption, utilizing technologies such as BitLocker or Linux dm-crypt to protect both operating system and data disks with full volume encryption. Then, on top of this, we have redundancy, with distribution of replica pairs separated by hundreds of miles. What this means is that they are not susceptible to the same regional disasters, so things like floods, fires and outages are contained. Let's take a closer look. Now, this could be a day-long discussion in its own right, so I think it's best to summarize. The fact is, security and privacy are built right into the Azure platform and realized through a defense-in-depth model. Starting at the physical layer, we employ 24/7 security personnel to protect our facilities with cameras, alarms, barriers, fencing and biometrics. Then, secure by design, our infrastructure ring employs the latest in operational
security controls and the most comprehensive set of compliance certifications of any cloud provider. Next we have the network layer, with isolated virtual networks, access control lists and a myriad of security appliances. Finally, the VM layer, with encryption, access control policies, malware screening and protection. The fact is, security should always be top of mind, but rest assured, we've got it covered. Now, one of the tools that helps you realize our commitment to security is the Azure Security Center. ASC is enabled with your Azure subscription and helps you protect, detect and respond to threats, with increased visibility and control over your Azure resources. It provides integrated security monitoring and policy management, and helps detect threats that might otherwise go unnoticed. What's more, it works with a broad ecosystem of native and partner security solutions. Now, the service is available in two tiers: the free tier, which provides visibility into the security of your resources, policies and security recommendations, and then the standard tier, which adds advanced threat capabilities, including threat intelligence and behavioral analysis. ASC also employs role-based access control and provides built-in roles that can be assigned to users, groups and services. What's more, your environment is broken down by compute, networking, storage, data and applications, with each resource type employing its own metrics and indicators to warn of potential breaches and vulnerabilities. Let's jump back into the portal and take a look. Let's begin by clicking into the Security Center from the left-hand menu. Now, if this is the first time that you've clicked onto the service, it will take a few minutes to get ready, so we'll pause the video and return once complete. Once done, you'll notice a message at the top of the screen stating that our security experience may be limited. This is because we're using the basic tier; fortunately, we can get a 60-day free trial of the more advanced
service, so let's go ahead and click on the message to learn more. As we can see, the standard tier will add advanced threat detection capability, which is definitely something we'll want to investigate, so let's click onto our subscription to continue. In the new window that opens, notice the advanced features: features such as just-in-time VM access, adaptive application controls, and both network and VM threat detection. Let's go ahead and select standard, followed by save, to continue. Okay, so with our service updated, let's take a whistle-stop tour of the service. To begin with we have the overview screen. This provides an at-a-glance view of any security recommendations, along with alerts and prevention items related to compute, networking, storage, data and applications. If I click on recommendations, we'll see a list of detected concerns and the mitigation steps needed to make our environment more secure. For example, we can see that on ADVM endpoint protection has not been installed, and vulnerability assessment solutions are missing on our virtual machines. Let's take a look at how we can resolve these issues. To begin with, I will click on the endpoint protection not installed warning for ADVM. Notice how a new blade opens identifying the affected VM, with a button at the top of the screen that offers to mitigate the issue by installing the solution. If I press the install on one VM button, I'm taken to a new screen where I can choose the endpoint protection provider, in this instance Microsoft Antimalware. In the new blade that opens I will press create, and then accept the defaults before pressing OK to continue. As we can see, this then starts the endpoint protection installation on the affected virtual machine. Back in our recommendations, I'm going to select add a vulnerability assessment solution, which again will open a new blade. Now, notice that both ADVM and VM1 are missing the tool, so I will select both and then choose to install on 2 VMs to continue. In the new blade that opens I have
the option of creating a new solution or using an existing one. In this instance I will choose create new, and from here we're presented with a partner solution that I will click into. Now, you will notice a new blade opens, and once again I can configure the solution to mitigate the risk on my VMs. For now, I will cancel by heading back to the main page of the Security Center. Back in the main window, I'm going to click onto the compute tile, which is displaying a red warning bar. As the page changes focus to the compute-specific issues, I will click onto VMs and computers from the main menu. From here I get a breakdown of the security assessment in tabular form, and as we can see, only one of my VMs is currently being monitored. Let's click on VM1 to find out more. In the new blade that opens we're able to see the reason that VM1 is not being monitored, and it's because the agent has not been installed. By clicking onto the link for VM1 we are taken into the management window of the VM. From here I can see that the VM is running, so let's try and get it protected. To do so, we will once again click into the Security Center from the left-hand menu. If I click onto recommendations once more, I'm looking for a notification to enable the VM agent. As we can see, this isn't the case, which means it's not installing in the background; we may therefore have to install this manually. Fortunately, this is super easy to do via the MSI, which can be found in the location shown on screen. Now, we can also see that we need to enable an NSG on the ADVM virtual machine, as well as the production subnet. You'll remember earlier in the course we stated that the NSG is a software firewall to help control the flow of information to our resources. Let's go ahead and get this resolved. To do so, I will click on the warning for ADVM and then onto the VM. In the new blade that opens we can see that we have two NSGs configured, one for VM1 and another for VM2. Let's go ahead and create one for ADVM. To do so, I will
click on the create new button. I will name the NSG ADVM, and then on the inbound rules add a new rule for TCP port 3389 by selecting the RDP service and pressing the OK button, followed by OK. Back in the Security Center, we can see that our security threats are beginning to get resolved. Now, for the sake of time I won't continue working through the list, but I hope that I've been able to demonstrate how easy Azure Security Center makes it to manage the security of your resources. One thing I do want to touch on is something called just-in-time VM access. You see, brute-force attacks commonly target management ports to gain access to a VM. If successful, an attacker can take control and establish a foothold in your environment. One way to reduce exposure is to limit the amount of time that a port is open. When just-in-time is enabled, Security Center locks down inbound traffic to your Azure VMs by creating an NSG rule. When a user requests access to a VM, Security Center checks that the user has the necessary role-based access control permissions and then automatically configures the network security groups to allow inbound traffic to the management ports for a specified amount of time. After the time has expired, Security Center restores the NSGs to their previous state. Let's take a look at implementing the solution. To do so, I will click on just-in-time VM access in the left-hand menu of Security Center, then in the new blade that opens click on the recommended tab and select both ADVM and VM1. In the new blade that opens we can see the ports that will be configured and the time range that they will be open. This can be configured to your individual needs, but for our demo we will go ahead and select the defaults by pressing save. With just-in-time configured, let's click on the configured tab and request access for VM1. In the new blade that opens I will turn port 3389 to the on position and then adjust the time range slider to one hour. Finally, I will click the open
port button to continue. After a few moments, notice that the just-in-time window displays that access is now active. Let's test this out by connecting into the VM. From the left-hand menu I will click onto the virtual machines tab and then select VM1. In the VM1 window I will copy the public IP address and then open Remote Desktop to connect. As you can see, the session is successful. Okay, so let's move on to our last demo, Privileged Identity Management. Privileged Identity Management enables organizations to minimize the number of people who have access to secure information or resources, thus reducing the chance of malicious activity or the accidental release of sensitive information. Azure AD Privileged Identity Management helps your organization by enabling on-demand, just-in-time administrative access to online services such as Office 365, Intune and Azure. Let's look at implementing this solution. To do so, I will navigate up to all services and search for Privileged Identity Management, choosing to favorite it so that it's pinned to my left-hand menu in the process. In the new window that opens I will select Azure AD directory roles under the management heading. In the new blade I will select verify my identity and then follow the prompts to set up multi-factor authentication using my phone. When authentication is complete, I will press done to return to the Azure portal and then click sign up to manage privileged role assignments. When the blade refreshes, I will select roles from the left-hand menu. From here I will click on Justin's account under global administrator to make him eligible for Privileged Identity Management rather than permanent. Once done, I will close the blade. Now, in the main Azure AD directory roles blade, I will select settings under manage. From here I will click roles, then global administrator, so that I can move the maximum activation duration slider to two hours and enable email notifications. Finally, I will click save and then navigate back into
Azure AD to verify that Justin has been reverted back to a user and is no longer a global admin. Okay, so from an incognito window I will sign in to Azure as Justin and then access Privileged Identity Management from the all services menu. In the blade that opens I will click on the my roles tab and then global administrator. From here I will verify my identity by once again responding to phone verification. Once done, I will be returned to the Azure portal and the global administrator role activation details blade. From here I will click the activate button and then type user management in the reason box before pressing the OK button. Once activated, make note of the expiration time, which will be two hours from now. To verify that Privileged Identity Management has been successful, I will minimize the incognito window and go back into my Azure portal session. From here I will click on Azure Active Directory and then users, before selecting Justin's account. If I then click on the directory role, you will see that Justin has been elevated back to a global administrator, and Privileged Identity Management has been successful in helping to lock down our resources. With that, let's jump back into the portal and wrap up our session. Well guys, I hope that you found this session to be both enjoyable and informative, and it's my hope that we have sown the seeds of possibility and inspired you to go off and continue your Azure journey. We began by understanding the Microsoft proposition, then we looked at some of the more common infrastructure components, building our networks of VMs and containers. Then we looked at authentication and identity, before shifting focus and exploring DevTest environments and BCDR solutions. From there we began to explore platform components and touched on the world of serverless computing. Finally, we looked at the security and monitoring tools available in Microsoft Azure and the steps we employ to ensure that your experience is safe, reliable and consistent. If we look back at our services
chart, we have barely scratched the surface, leaving you a technical treasure hunt to go off and explore. Remember, you can sign up for a free trial at any time, and if you have questions then feel free to reach out to me, because I'm always happy to hear from you. We've covered so much in such a short period of time, leaving me with just one question: what could you build in a day?
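As a starting point for that treasure hunt, here are a few sketches of what the portal clicks in this session do behind the scenes. First, the RSS trigger: the feed title and primary feed link dynamic content that we tweeted are just fields of each RSS item. This minimal Python sketch pulls the same two fields from an RSS 2.0 document using only the standard library; the sample feed XML here is made up for illustration:

```python
import xml.etree.ElementTree as ET

def extract_items(rss_xml: str):
    """Return (title, link) pairs from an RSS 2.0 document,
    mirroring the feed title / primary feed link dynamic content."""
    root = ET.fromstring(rss_xml)
    return [
        (item.findtext("title"), item.findtext("link"))
        for item in root.iter("item")
    ]

# Hypothetical sample feed, standing in for the Azure blog's RSS output.
SAMPLE = """<rss version="2.0"><channel>
  <item><title>Announcing new VM sizes</title>
        <link>https://azure.microsoft.com/blog/vm-sizes</link></item>
</channel></rss>"""

print(extract_items(SAMPLE))
```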
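The detect sentiment connector from the social sentiment demo wraps the Text Analytics REST API. A rough sketch of the same call is below; the request body shape matches the documents format the service expects, but treat the exact endpoint path and API version as an assumption, since they vary by region and release (pass the endpoint and key 1 values we copied into Notepad):

```python
import json
import urllib.request

def build_sentiment_request(tweets, language="en"):
    """Body for the Text Analytics sentiment operation: a list of
    documents, each with an id, language and text."""
    return {
        "documents": [
            {"id": str(i), "language": language, "text": t}
            for i, t in enumerate(tweets, start=1)
        ]
    }

def detect_sentiment(endpoint, key, tweets):
    """POST to the sentiment operation with the account key in the
    Ocp-Apim-Subscription-Key header; the response carries a score
    between 0 (negative) and 1 (positive) per document."""
    body = json.dumps(build_sentiment_request(tweets)).encode("utf-8")
    req = urllib.request.Request(
        endpoint.rstrip("/") + "/sentiment", data=body,
        headers={"Content-Type": "application/json",
                 "Ocp-Apim-Subscription-Key": key})
    return json.load(urllib.request.urlopen(req))

print(build_sentiment_request(["Great session on Azure!"]))
```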
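The negative-tweet condition in that same logic app is nothing more than a threshold check on the returned score, using the same 0.4 value we typed into the condition box. The alert-formatting helper below is hypothetical, but it mirrors the email body template from the demo:

```python
NEGATIVE_THRESHOLD = 0.4  # same value used in the logic app condition

def is_negative(score: float) -> bool:
    """True when a sentiment score should trigger the alert email."""
    return score < NEGATIVE_THRESHOLD

def alert_body(name: str, tweet_text: str, followers: int) -> str:
    """Email body mirroring the template typed into the connector."""
    return (f"{name} tweeted a negative tweet. The tweet reads: "
            f"{tweet_text}. They have {followers} followers. "
            f"Please investigate.")

print(is_negative(0.12), is_negative(0.85))  # → True False
```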
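The add rows to a dataset action feeds our Power BI streaming dataset, which can also be driven directly over REST: streaming datasets expose a push URL (shown on the dataset's API info page in Power BI) that accepts a JSON rows payload. A sketch under those assumptions, with field names matching the schema we defined:

```python
import json
import urllib.request
from datetime import datetime, timezone

def build_row(tweet_text, location, created_at, followers, sentiment_score):
    """One row matching the streaming dataset schema defined earlier."""
    return {
        "tweet text": tweet_text,
        "location": location,
        "created at": created_at.isoformat(),
        "followers": followers,
        "sentiment score": sentiment_score,
    }

def push_rows(push_url, rows):
    """POST rows to a Power BI streaming dataset push URL; the URL,
    including its access key, comes from the dataset's API info page."""
    body = json.dumps({"rows": rows}).encode("utf-8")
    req = urllib.request.Request(
        push_url, data=body,
        headers={"Content-Type": "application/json"})
    return urllib.request.urlopen(req)

row = build_row("Loving #buildacompanyinaday!", "London, UK",
                datetime(2018, 3, 13, tzinfo=timezone.utc), 1500, 0.92)
print(row)
```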
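The published QnA Maker knowledge base answers questions through a generateAnswer operation, which is what the sample HTTP request code shown after publishing demonstrates. A hedged sketch of that call; the host, knowledge base ID and subscription key all come from that sample request, and the exact URL path can differ between QnA Maker versions:

```python
import json
import urllib.request

def build_question(text: str) -> dict:
    """generateAnswer takes a simple JSON body with the question."""
    return {"question": text}

def ask_bot(host, kb_id, key, question):
    """POST the question to the published knowledge base's
    generateAnswer endpoint and return the parsed answer JSON."""
    url = f"{host}/knowledgebases/{kb_id}/generateAnswer"
    body = json.dumps(build_question(question)).encode("utf-8")
    req = urllib.request.Request(
        url, data=body,
        headers={"Content-Type": "application/json",
                 "Ocp-Apim-Subscription-Key": key})
    return json.load(urllib.request.urlopen(req))

print(build_question("Tell me about virtual machines"))
```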
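Finally, the desktop sample from the vision demo wraps the Face API detect operation, posting image bytes and asking for the attributes we saw on screen. A sketch of the same call, assuming the endpoint URL copied from the main blade ends in the Face API base path:

```python
import json
import urllib.request

FACE_ATTRIBUTES = "age,gender,emotion"  # metadata shown in the demo

def build_detect_url(endpoint: str) -> str:
    """Detect operation URL requesting the demo's face attributes."""
    return (endpoint.rstrip("/") +
            "/detect?returnFaceAttributes=" + FACE_ATTRIBUTES)

def detect_faces(endpoint, key, image_bytes):
    """POST binary image data to the Face API detect operation with
    the access key in the Ocp-Apim-Subscription-Key header."""
    req = urllib.request.Request(
        build_detect_url(endpoint), data=image_bytes,
        headers={"Content-Type": "application/octet-stream",
                 "Ocp-Apim-Subscription-Key": key})
    return json.load(urllib.request.urlopen(req))

print(build_detect_url("https://westeurope.api.cognitive.microsoft.com/face/v1.0/"))
```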
Info
Channel: Daniel Baker
Views: 112,103
Keywords: Azure, Microsoft Cloud, IaaS, PaaS, SaaS, 70-533, 70-535, Containers, Virtual Machines, VMs, Networks, Active Directory, Azure AD, WebApps, LogicApps, Cognitive Services, Bots, API, Security, GDPR, Windows, Linux, Bash, Powershell, Machine Learning, Kubernetes, DevTest Labs, BCDR, Backup, ASR, Azure Site Recovery
Id: 0GvMwCFhk08
Length: 160min 36sec (9636 seconds)
Published: Tue Mar 13 2018