Day 4 - Sept 16 Cloud Summit 2021

Captions
Hi, I'm Anna Hoffman. Hey friends, I'm Nikola. Hi, I'm Tanya Janca. Hello, and I'm excited to be a speaker at Cloud Summit 2021. It's a fantastic event: 11 days of live streaming with more than 100 speakers from all over the world. I'm excited to speak at the summit about Power BI and Synapse Analytics. I'm going to talk about security. I'm also a Microsoft MVP for Azure, and I'm speaking at Cloud Summit about the automated release of a document application. And the best part is that this is a free event. Come join me live on Learn TV on September 14th. Come join me on Learn TV on the 14th of September this year, along with a bunch of other Microsoft and community speakers. So if you want to learn how to secure Azure, come to my talk. Join us, it will be a lot of fun. See you then. See you there. See you there. See you there.

So hi everyone, and welcome back to Cloud Summit 2021. I'm your host, Stephen Simon, and we are back with part two of day four of Cloud Summit 2021. And what an amazing part one of day four we have had: we talked about Azure SQL, we talked about Blazor, we talked about Azure security, DevOps, serverless. And I'm really excited for part two of day four. I can go as far as to say that day four of Cloud Summit is one of my favorite days, because I have absolutely amazing speakers lined up today. The second part is going to be power-packed, and we are streaming on multiple platforms: we're streaming on C# Corner, we're streaming on C# Live, we're streaming on the Microsoft Developers YouTube channel, we're streaming on Learn TV. And here's a quick fact: until now we have crossed over 60,000 unique viewers in just three and a half days. Can you believe that? There have been around 70,000 unique registrations for this event, and this is getting bigger and bigger thanks to all the speakers and attendees. Having said that, this reminds me that I have been looking at the comments. You all have been very kind to each other, and I want you all to continue that, and to follow our code of conduct. A quick reminder that there are always three contests going on: whenever you ask a question to a speaker using the comments, use Cloud Summit; this way we can find a winner towards the end of the day. Also, take a screen grab, take a selfie, be creative, tag us on Cloud Summit Live on Twitter and LinkedIn using Cloud Summit Live, and we'll make sure we pick one winner from each social media destination and announce it towards the end of the day.

Having said that, it's time to go ahead and invite our keynote speaker of the day, Vishwas Lele, who is a Microsoft Regional Director. His keynote is going to be on Driving Cloud Value: Lessons from the Last Decade in the Cloud. Absolutely amazing. So fasten your seat belts, this is going to be an absolutely amazing keynote. Hi Vishwas, welcome to Cloud Summit 2021. Hi Simon, thank you for having me, and thank you for the kind introduction. I really appreciate you accepting the invitation, Vishwas, and I'm really looking forward to your session, which is about the journey of the cloud over the past 10 years. I know we are already a couple of minutes into your session time, so we'll chat towards the end. I see you already have your screen shared; I'm going to add it to the stream so everybody can see it, and the next 25 minutes are all yours. Thank you, Simon, and thank you to the audience; the numbers Simon was talking about are amazing, they show you what the interest in the cloud is, and I thank you for your time. I was thinking a lot about today's session. I have about 20 minutes or so, and I do want to keep some time for Q&A at the end, so I was thinking about what we should talk about. There have been excellent sessions; yesterday Simon covered all of the various topics, and we have more sessions today and later in the week. So I was thinking about how I can give all of those attending today some of
the lessons that I've collected over the last few years working in the cloud. So let us get started. As you can see on the screen, the title of my talk, which I changed a few times, is Driving Cloud Value: Lessons from the Last Decade. So what do I mean by the last decade? September 2021 is an interesting month because it marks 15 years since AWS S3, the first service in AWS, was launched. And interestingly enough, my own humble journey with the cloud started in 2007, and I had no notion of this becoming this big. Somebody threw me into a project, I started working on it, liked it, and that's been the journey. Today, the transformational benefits of the cloud are well established; I don't think we need to talk about the importance and the benefits, those are well established. In fact, all you have to do is look at the revenue, or the consumption, of the top three cloud providers: something like 200 billion dollars in 2021. A significant number, but if you look at it against the global IT spend, which is something like two or three trillion dollars, you can take it from there that even though we've made a lot of progress in the cloud, many chapters of this book about cloud computing remain yet to be written. And one observation I'll share with you: when a technology like the cloud becomes popular, it becomes so pervasive that it ultimately becomes invisible. Already, if you're paying close attention to what is happening in the cloud, with the announcement of things like the Microsoft Cloud for Healthcare and for Retail, we're seeing a transition into dedicated vertical-focused clouds where you have value-add services: healthcare monitoring, healthcare payments and billing, these kinds of capabilities are part of the cloud, built in. So we're already seeing that transition happen. But in any case, let me bring the talk back to what I'm here to communicate over the next few minutes, which is: what are some of the lessons that I have learned? I've learned through interacting with my colleagues at AIS, through our customers, and more importantly through the community; we've collectively learned a lot. I just want to share 10 simple ideas. These are not exhaustive by any means, but they are the ones I came up with as I was preparing for today's talk.

One thing is very clear: cloud providers are creating so many new features, something like two to three thousand features rolled out every year, and you've probably heard of a term called cloud entropy. That cloud entropy is real; the changes that are happening are real. Cloud consumers, even large companies and enterprises, don't have the resources to keep track of what is happening and what new features are coming out. So as a result, I feel that what we should focus on is building a strong cloud foundation, and let me give you some examples of what I mean by that. There are about 10 things I want to talk about, so let's get started.

First and foremost, I would say you should look at your application portfolio and then make a decision about three to five architectural blueprints that can support 80% of your workloads. You have to take this craziness of new services all the time and boil it down to three or five key blueprints, and then match them up against the kinds of applications you have. Look at your application portfolio and say: I'm going to double down on these blueprints. Furthermore, once you have doubled down on those blueprints, you have to make sure you give your users a simple, consistent way to provision them. We've heard terms like cloud vending machine or IT vending machine: once you've taken these blueprints and packaged them up, you should have a way for people to walk up to this vending machine and simply get a copy of a blueprint. Let me give you a concrete example. You've determined that a certain class of applications is fit for serverless, and you've also determined that, to have the right security in place, you want that serverless function to be VNet-joined, to have private endpoints, and so on; you figure that part out. That's one architectural blueprint. You don't want everybody going in and spinning that up on their own; you take that blueprint, you write an automation script in the language of your choice (Terraform, ARM, PowerShell, whatever you want), and you make sure it's available for people to walk up and get a copy. That's what I mean by consistency. And you have to treat this automation as a first-class citizen. You have to think about resilience: even if you're deploying a serverless application with some storage account, you have to think about spreading it out across availability zones. You have to think about observability: do you have the right monitoring enabled already? You have to think about security and the zero-trust concept, where you assume a breach has happened. As you're writing these automation scripts in conjunction with those blueprints, these things are important to have. So that's number one: consistency.

Number two is cost optimization, and I'm seeing a lot of this. People get excited and jump into the cloud. First of all, some people are disappointed because they went into the cloud thinking it would reduce costs. It is really not about cost reduction; the cloud is about accelerating time to value. That's the mindset you have to go in with, and once you're in the cloud, as you mature, your cost maturity will come. But what I'm seeing today is that a lot of companies have gone into the cloud without putting the right controls in place: they don't have a tagging strategy, they don't have all of the governance policies, they don't have anomaly detection (why is this service suddenly costing me so much? Because somebody changed some knobs somewhere). And equally importantly, of course there are services like Cost Management today, but can the cost be broken up in a manner that aligns with your organizational hierarchy? Maybe you're doing product-based development and you've broken things up by product: is the cloud cost aligned with the hierarchies you've set up for your organization? Really important. And one word of caution: I was reading somewhere that a significant percentage, maybe even 20 to 30 percent, of cloud consumption is waste; people did not put the right controls in place, or let a resource stay on longer than it was needed. That's becoming very important, and CFOs across companies are getting stricter about spend going out of control, so you have to be very cognizant of that.

The next one is security. Resources in the cloud are dynamic, they come and go, we provision them through APIs, so your security monitoring has to be dynamic, no question about that. When I talked about the architectural blueprints, I said you must make sure those blueprints are secure from the start, but that's only the first step. You have to continuously monitor those resources so they stay compliant with your security needs, and the only way to achieve that in the cloud is thinking of security as code. What do I mean by that? You have to have jobs in place that are constantly running. So many of the failures in the cloud are configuration-related errors, so you have to have jobs that constantly scan the configurations of the applications you're deploying and make sure you're not drifting from a compliant state. So security, of course, is important. Let me talk
about data gravity; I don't know if you've heard of the term. It was introduced in 2013, and the idea is that, over time, your applications tend to get closer to the data. Makes sense, right? You want to reduce latency; that's the notion of data gravity. Now, we have started talking about multi-cloud, and there's an interesting paper that came out which said 80% of Fortune 500 companies are multi-cloud, either through acquisition or through different business units deciding on different technologies. Data gravity is one of the impediments you will run into. Of course, Kubernetes and similar technologies give you at least a chance (a lot more work needs to be done) to move your applications across different providers or even on-prem, and things like Azure Arc and AWS Outposts give you a single control plane for your cloud. But because of data gravity, because it's so hard to take your applications and move them further away from the data, it is an impediment to truly achieving multi-cloud. And I should be careful when I talk about multi-cloud: I'm talking about an organization that is deploying a similar class of applications across more than one cloud provider. I'm not talking about having Salesforce and M365 and Azure; that's not my definition, or the industry's definition, of multi-cloud. Let's keep going; I'm about 10 minutes into my session and we've covered about three or four, so I'll speed up a little bit.

The next topic I want to talk about is inclusiveness, and you'll probably be surprised by this, but what I mean is that the cloud is not just for professional developers or professional infrastructure engineers. When we are designing the cloud, the governance structure, the self-service capability, the network topology, we need to be inclusive towards a growing community of citizen developers. We know that IT is not able to keep up; the demand for applications is very high, and consequently citizen developers and low-code/no-code platforms are filling that gap. It's important, as we design these cloud constructs, that it be possible for someone to take a low-code/no-code construct and commingle it with a far lower-level platform construct. It's really important that we think about this, because to meet all of the application needs that organizations have, low-code/no-code and citizen developers are going to play an important part.

Let me go to the next one: data analytics. You're thinking about migrating your applications to the cloud, you're reimagining them for the cloud, maybe refactoring some of them, maybe rewriting them as cloud-native applications. Keep in mind that these applications you're moving to the cloud are going to be an immense source for data analytics down the road, so plan for those analytics scenarios up front. What do I mean by that? These applications that you're porting, rewriting, or refactoring for the cloud need to expose their data. So think about data ingestion: how are you going to take the data out and put it into an analytics store like a warehouse or a data lake? How are you going to do the transformation? Some of you have probably seen architectures like data mesh; think of data mesh as a microservices-style architecture for data. What it means is that you are treating data essentially as a product. There's always this chasm between operational data and analytics data, and architectures like data mesh are moving towards a model where a domain is essentially responsible for making its data, whether operational or analytical, available as a product to the other teams. So as you're migrating applications, think about this aspect as well.

The next thing I want to talk about is the cloud operating model. If you have traditional infrastructure, networking, and security needs, all of these skill sets need to be operating in a cloud model. What do I mean by that? You may be a network person, a security person, an infrastructure person, but if you are not embracing an iterative development model, if you're not embracing DevSecOps, if you're not treating network as code or security as code, with version control, check-ins, and unit tests, if you're not treating these as-code practices as first-class citizens, you simply cannot succeed in the cloud. If you take a traditional IT operating mindset to the cloud, you just cannot succeed there. So as you work with folks from these different communities, and some of you may be on network, infrastructure, and security teams, it's really important to start moving towards a cloud operating model.

I'm down to the last two items; we've covered eight lessons that are important to apply as we move into the next decade or more of the cloud. The next one is quite obvious: continuous learning. Many of you are working in the cloud already, as are many of your co-workers. You may have become fluent with cloud basics: you understand IaaS, you understand software-defined networking, you understand the APIs and the services. But you need to take your knowledge to the next level, and that can only happen through rigorous, continuous learning and upskilling programs. If you're a cloud professional and you're not subscribing to things like LinkedIn Learning or Pluralsight (you name it, there are dozens of these learning platforms, there are YouTube videos, there's great content put out by the providers; the Azure team puts out great content), if you're not spending time and embedding a culture of learning within your organization, it will be very hard to achieve your cloud transformation goals. The cloud is changing rapidly; you have to be able to keep up with that learning.

And with that I come to the last point in my presentation, which goes hand in hand with the previous one, continuous learning. Along with continuous learning, and I see this with many organizations as well, you have upskilling programs, but in addition to that you need some sort of cloud sandbox. I'm not just talking about MSDN accounts, the hundred dollars a month, or 150, depending on your MSDN subscription; I'm talking about a true sandbox that your organization has set up, which gives you the freedom to experiment and, frankly, the freedom to fail. You need an enterprise cloud sandbox which gives you the ability to collaborate with your co-workers, the freedom to experiment with services, and the ability to test out the preview capabilities that are coming out. For example, say you have an architecture already in place, and I'll take the example I used previously: private endpoints are becoming really important. I'm sure you've looked at them; if you have not, you should. Private endpoints give you the best of both worlds: you get a service like Azure SQL Database or Storage, but you can lock it down to your virtual network. There's more to private endpoints than what I've just described, but if you don't have a sandbox where you can try these things out within hours, not days and weeks, it will be really hard to incorporate these capabilities, learn about them, and bring them into your projects and applications. So that concludes a few of the lessons that I thought were important. I'm going to pause here and go back to Simon to see if there are questions we can take, or however he wants to address this. Go ahead, Simon. Thank you, Vishwas, that was
absolutely amazing. I loved how you covered some of the pillars of the cloud, talking about security and all of that. We don't have any questions for the keynote, but I really appreciate all that you have shared and your time today. Thank you so much, and is there any final thing you want to plug before we move to the next session? Yeah, thank you. One point: as I mentioned, I was thinking about this keynote until late last night, jotting these points down and tweaking what I had to say, because there's a lot you can say about each of those bullets, as you can imagine. One thing I left out yesterday, because I thought I might not have time, but now that we have a couple of minutes I do want to go back to it. My first bullet was: figure out key patterns, like three or five; don't confuse yourself with constantly changing architectural blueprints. Look at your application portfolio, figure out a set of architectural blueprints, stick to them, make them robust. I've said that, but one thing I'd like to add in conjunction with it is having some sort of CoE, or Center of Excellence. Center of Excellence sometimes means different things to different people and sounds like a fancy term; whatever you want to call it does not matter, but it is really important to have a core group of people who are constantly looking out for what is coming down the pike, what the new capabilities are. So this observation, pick three or five architectural patterns and stick to them, goes in conjunction with having a core group of people who are really interested in pushing the envelope, constantly looking at what is coming out. Because what you don't want is to be caught flat-footed when a new service comes around that is completely disruptive, or when an architecturally significant enhancement is made to a service you're already using. You may be using a service and not know there's an architecturally significant enhancement. I'll give you an example: Logic Apps. You may have been using Logic Apps for a long time, and there's a significant change that has happened to Logic Apps called the self-hosted model. Up until now, Logic Apps has been available as a PaaS platform, but now you can take Logic Apps and host it in a compute environment of your choice, whether that's Kubernetes, someplace else, or even on-prem. If you don't have a core group looking out for what is changing, you may miss an important benefit that could have a big impact on your project. So make sure you do both. That was the last point; thank you for this additional opportunity to mention it. Vishwas, I definitely agree that the continuous learning you had on your slide is super, super important. Things are changing very rapidly, especially in the cloud; we see Azure being updated every other day, or I should say every day, so it definitely makes sense to keep yourself and your team updated on the changes that are happening. Thank you so much once again, Vishwas; it's always lovely hosting you, and we'd love to have you back whenever you're available. You and your team are growing continuously (I stay updated on LinkedIn), so congratulations on that. Have a good day ahead, and thank you so much. Thank you, thank you, Simon.

All right, with that absolutely amazing keynote by Vishwas, we now move to our next session. But before we do, we have a couple of minutes, and I see many more people joining: people are joining us from Zimbabwe, people are joining us from Cuba. I don't know what this place is, but welcome, Elena, and welcome everyone. This is Cloud Summit 2021, day four, part two, and the second part is going to be really, really exciting. You had an amazing keynote by Vishwas Lele, and now we move to our next session, by Thomas Maurer, who is a Microsoft Cloud
Advocate. I remember I hosted him back in 2020, in the month of November, and it's been almost 10 months. He's a really busy person, and even last time it took me very long to bring him onto a live show. In today's session he's going to talk about Azure hybrid: learn about hybrid cloud management with Azure. And to be honest, what a rockstar he currently is; follow him on Twitter and LinkedIn, his tweets and posts always get hundreds of likes, while other interviews might not even get 10. Let's welcome Thomas Maurer. Hi Thomas, welcome to Cloud Summit 2021. Hi Simon, thank you very much for having me; it's an honor to speak today at Cloud Summit. Thank you so much for accepting the invitation. I've been watching you on social media; you are packed with events and you are really, really busy. By the way, how was your summer this year? Ah, it was very good, very exciting stuff happening for me personally as well as work-wise, so I'm doing really well; it was a very good summer. I hope you had a very good summer too. Yeah, here in India we don't do a lot about summer because it's very hot; we do, kind of, in the winter time. But yeah, Thomas, I'm really excited to learn about hybrid cloud, and let me tell you, hybrid cloud and DevOps were the two topics people really wanted to learn about when we took registrations, so I'm definitely looking forward to your session. I see you have already shared your screen; I've added it to the stream, everybody can see it, and the next 25 minutes are all yours. Thank you very much, Simon, this is awesome.

So yeah, my name is Thomas Maurer, and I'm going to speak to you today about hybrid cloud management. I probably need to extend that and say not just hybrid cloud but also multi-cloud management, because we're not limiting this to Azure and on-premises; we're also enabling this if you run in a multi-cloud environment, and we'll talk about that in just a bit. So let me dive into this presentation. I'm not going to show a lot of slides; I have a couple of them to illustrate a few things, but I want to show you some cool demos as well. Before I go into the demos, I want to quickly outline why we're doing this and why it's such a big deal. We did some research, and we found that customer environments are getting increasingly complex. There's a lot of change happening with the cloud, but also with new technologies, and so we are seeing customers working with hundreds if not thousands of applications, managing them and taking advantage of them. These can be very modern applications running on PaaS services, containerized, even serverless apps, but there are also a lot of traditional applications running in virtual machines or even on physical hosts. We also see customers using different types of infrastructure: their own data centers, branch offices, factories, retail stores, edge locations; they use hosters for their workloads. There's a ton happening, and they always need management for all of it. And last but not least, as I already mentioned, there's the multi-cloud approach. Some customers are strategically multi-cloud; others started with one cloud provider and later on switched, for example, to Azure (which, by the way, is a fantastic choice), but they're probably not going to migrate everything to Azure, and they still need to be in control of everything even though they use multiple providers. All of that adds a lot of complexity, and we want to make sure that not just management, but also security and compliance, app deployments, and
cloud architecture in general can make it easy uh or easier for customers in that sense so uh before i dive into azure arc i also want to quickly speak out and say hey um there's obviously a lot of reasons why customers are in a hybrid environment they're probably like there are reasons why they can't use the cloud and so we want to help them to like enable them wherever they are right if they have um data sovereignty challenges if they have network challenges where they have like not a lot of bandwidth to azure or they don't want to rely on the internet connectivity for their workloads to run um there are reasons and we offer us a large set of uh services and products to actually enable our customers it's not just one product which does hybrid but instead we have like a set of different solutions out there for our customers let's think about iot right we have a rich set of iot solutions we have our azure stack family which provides you with cloud inspired infrastructure wherever you need it and then obviously we talk about azure arc and azure arc is actually one part of that is bringing azure services to any infrastructure so when a customer cannot use an azure service in azure we try to bring that service to the customer and then last but not least that's what we're going to talk mostly about is the azure control plane using azure arc right so that you actually can manage everything in a single control plane and so let's dive into what azure arc is and i quickly one sentence um so if you look at what azure arc is it's really extending the azure management and azure services to anywhere right this is this is absolutely key because again customers are not just running their workloads in one place they're probably running their workloads in different locations they take advantage of the cloud they build hybrid and multi-cloud applications and architectures so that is where azure arc really comes into the game and we offer different um tool sets for this so for 
example one is about exactly having that control getting that central visibility um and operations and compliance tools uh but then also obviously we want to enable customers to bring these cloud native apps to anywhere and help them for example managing their kubernetes clusters or deploying apps to these and then obviously if customers want to build um applications running our past services for example they should also be able to do that not just in azure but also outside of azure so how do we do this and how do we look at this in general and and let's dive into the technology a little bit so with azure we have this great control plane right where we actually can manage all our azure resources now our customers told us okay hey this is so great i really get like a single control plane to manage all my azure cloud resources but what about the resources which are outside of azure and so that is what we actually brought in so we were speaking about two different things here so one is what we call the azure arc enabled infrastructure and that allows you to connect existing infrastructure such as servers and kubernetes clusters or microsoft sql server for example to the azure control plane using azure arc and then take advantage of the azure management services such as security center update management monitoring and many many more and then the other part where we gonna have a quick look at today is the azure arc enabled services part so this is actually where we then enable customers to actually deploy azure services wherever they need them so instead of talking all of the of that on slides let's just give you a quick look how that actually can look like so here i am in the azure portal and you can see here um i'm in the all resources page right and so you can see here every resource i have deployed in azure and everything basically has a name has a type you can see here uh it can be a vm it can be a virtual disk a database even public ip addresses for example are an 
object, and they're usually part of a resource group, they're deployed to a location, they're part of a subscription as well, and you can obviously use tagging on them. Now our customers told us: hey, this is great, but again, where are the resources which are outside of Azure? To illustrate that, what I want to show you here is: let's quickly filter to list all my servers, so servers which are running in Azure but also servers which are running outside of Azure. You can see if I look for Arc, I can see I have a couple of Arc servers here already joined, and I will tell you a little bit more about how I did that. But obviously I don't just want my Arc machines, I also want my Azure virtual machines here. So if I select these two types, I can now see all my servers side by side in that single view, and you can see the blue ones are the Azure virtual machines as you know them, and then we have the other ones, which are my servers running outside of Azure. But you can also see that they're part of a resource group, they're part of a subscription, and they show up as an Azure resource. I can even use features like tagging here: if I select a tag like, for example, cost center, I can say I want to see all my servers which are part of these three specific cost centers, and if I hit that, I can take advantage of the native Azure capabilities for tagging as well. That is because it is now a native Azure resource; even though the server is running outside of Azure, we still create that native resource in Azure, so you can take advantage of all of that. Now you might say: well, Thomas, this is great to get some visibility, but I need to do more. And I completely agree with that. So one thing I want to show you here is when it comes to compliance. For that we are using Azure Policy, and Azure Policy is really there
to get control over your Azure environment. But what a lot of people don't know is that there's a feature called Azure Policy guest configuration, which helps you not just configure the Azure environment but also, for example, if you run virtual machines within Azure, control the operating system; think of group policies on steroids, as one example. With Azure Arc we can now extend this to resources which are outside of Azure. I can simply go and use a set of policies which we already have built in for compliance reasons. If I go, for example, to create a new assignment here, I can select an initiative, which is basically a bundle of different policies. As you can see, you can define your custom ones, but we have a set of built-in ones which are probably very interesting to you. Some of them are technology focused, like "let's make sure the Defender for SQL agent is deployed on virtual machines" and so on, but we also have a lot of them based on industry certifications, so for example we have FedRAMP, ISO, PCI, IRS, UK NHS, and so on, which you can select. Now in my case I deployed a very simple one which goes out and audits virtual machines, or physical servers as well, servers in general, for insecure password settings. I could set this up now, but that would obviously take a while, so what I'm going to do here, like in a good cooking show, is use the policy I already prepared, so we don't have to wait for it. If I want to see the compliance state of my environment, I can simply hit compliance, and now you can see a couple of different things. First, you can see that I'm actually doing a very bad job when it comes to compliance, and secondly, you can see all the policies assigned. If I click on this one, this is the one I just showed you, and you can see here I'm not complying with
that, and if I scroll down I can even see the individual policy settings, for example password age, password length, and so on, which are not configured the right way. Now this is obviously interesting to know, but if I'm in charge of compliance I also want to see which resources are actually not compliant, which servers are not compliant. So if I click on the compliance button, I can now see: okay, these are the servers which are not compliant. If I have a closer look, I can see these are my Azure VMs, with resource type microsoft.compute/virtualMachines, as well as my Arc servers, which are microsoft.hybridcompute/machines. So I basically have a compliance view across all my resources, and again, this is not just limited to servers; we can also look at, for example, SQL servers or Kubernetes clusters, and do much, much more. Now, how do we actually do this, or what did I just show you? Let me go back to the slides quickly, and no worries, there are more demos to come. What I just showed you here is a high-level architecture of how Azure works. When we are working with Azure as a customer, we're using tools and experiences like the portal, the CLI, PowerShell, APIs, and so on to interact with Azure Resource Manager. Azure Resource Manager provides all these great benefits to manage resources at large scale: tags, grouping, subscription management, role-based access control, policies, logs, and so on. And then we also offer a ton of different management services, like monitoring, update management, containers, and so on, to manage these Azure resources. But again, customers do not just have Azure resources, and what they did before was use their own existing management tooling to manage resources which are running on premises or at other cloud providers. Now with Azure Arc we're basically building that bridge
between Azure Resource Manager and the resources which are outside of Azure, so you can get a representation of all this within Azure. And you can not just use it in the portal; you can also use it with ARM templates, with the CLI or PowerShell, and so on. And you can still keep using your existing tooling; we don't want to create any unneeded dependencies for your business-critical applications. You can still use your existing tools, but you get that extended benefit of using our management tools on top. So what can we actually manage when I talk about Azure Arc enabled infrastructure? We have, for example, Azure Arc enabled servers, Azure Arc enabled Kubernetes clusters, and Azure Arc enabled SQL servers, and I'm going to show you that in just a bit. So let me quickly go back to the demo. How do I actually manage that? If I go to the Azure Arc center within the Azure portal, this is the one place where you would go to manage all your Arc resources. What you can do here is onboard new servers, new infrastructure, new Kubernetes clusters, and then they will show up in the Azure portal. Now what does that actually mean, how do I onboard resources? It's very simple: you download the agent, install the agent, and register that machine or that Kubernetes cluster with Azure. We help you by generating a script to do so, and then after a couple of minutes the machine will pop up in the Azure portal. You can see here it looks like this: if I click on one of my servers, you can see it now looks like an Azure resource. It's part of a resource group, it's part of a subscription, and you can also see some additional local information; that server is actually running in my local data center, or basically underneath my desk here.
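The onboarding flow just described (download the agent, install it, register the machine) boils down to running a generated script on the server. As a rough sketch of what such a generated connect command contains, here is a small Python helper that assembles an `azcmagent connect` invocation. The flag names reflect common versions of the connected machine agent, and the resource names and IDs are placeholders, so always prefer the exact script the portal generates for your environment.

```python
# Sketch: assemble an onboarding command similar to the script the Azure
# portal generates for Azure Arc enabled servers. Flag names and values
# here are assumptions for illustration; use the portal-generated script.

def build_connect_command(resource_group, subscription_id, tenant_id,
                          location, tags=None):
    """Build an `azcmagent connect` invocation for a machine outside Azure."""
    parts = [
        "azcmagent", "connect",
        "--resource-group", resource_group,
        "--subscription-id", subscription_id,
        "--tenant-id", tenant_id,
        "--location", location,
    ]
    if tags:
        # Tags are rendered as key=value pairs, e.g. costcenter=1337,
        # so the machine shows up in Azure with native tags attached.
        parts += ["--tags", ",".join(f"{k}={v}" for k, v in sorted(tags.items()))]
    return " ".join(parts)

cmd = build_connect_command(
    "arc-demo-rg",
    "00000000-0000-0000-0000-000000000000",  # placeholder subscription ID
    "11111111-1111-1111-1111-111111111111",  # placeholder tenant ID
    "westeurope",
    tags={"costcenter": "1337"},
)
print(cmd)
```

Running this on the target server (with the agent installed) is what makes the machine appear as a `microsoft.hybridcompute/machines` resource in the portal a few minutes later.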
And then on the left side you get all the good stuff you get for any Azure resource, like the activity log to audit who has done what. We get role-based access control, so we can take advantage of Azure Active Directory to see who has access to that resource, or to give people access to that resource. We can get Security Center recommendations; for example, you can see here there are a couple of things I need to do. I can also take advantage of extensions, so I can deploy, for example, a custom script extension to run a script on that machine, or I can use Log Analytics and Azure Monitor to get monitoring information from that specific system. You can see this server doesn't do a lot, but you can see CPU utilization, memory utilization, and disk and network utilization as well. I can also get a dependency view and much, much more. I can also do update management: if I want to patch that server, I can do this directly and control it out of the Azure portal. You can see this server is actually not compliant because it has missing patches, so I could go out and schedule a new update deployment. Again, there's much, much more to show, but what I want to show you next is the Kubernetes cluster management. I also have a couple of Kubernetes clusters running outside of Azure, and I want to manage these as well. So what I can do here is go out and have a look at this cluster. You can see on the left side I also get all the good stuff like monitoring, policies, Security Center, a lot of the things I also get for a server. But what I want to show you here is how we can manage apps and do a GitOps integration. So if I look at GitOps here, I can actually
deploy a configuration and say: hey, I want to deploy an application to that Kubernetes cluster. What I did here is I have a Git repository where my application and my application configuration are stored, and I basically said to my Kubernetes cluster: pull that application from there, then look for changes to that application, and as soon as there is a change, apply it to the cluster as well. So, to show you, I took all my skills to create a new application here, which is a very simple application. Let me quickly refresh; it says "hello", that is what the application does. Now I want to change that: it should not just say "hello", it should say "hello cloud summit". So I go here and change that. This is my Git repo, and now I'm doing something, please don't repeat this at home, this is really just for the demo: I'm going to make a change directly in the main branch. So let's say "hello cloud summit". Well, since I'm still a good admin, I'm at least going to add a commit message, and then commit that change. Again, don't do that at home; it also works with approval steps and different branches and so on. Now if I go back, you can see in this config that it goes out every few seconds to see if there is a change: go to that Git repo and look for that change. And if I talk long enough and then refresh my application, you can see it now says "hello cloud summit". So I can do a couple of different things there, obviously not just changing a message, but you get the idea. There is much, much more we can do, but let's go quickly back to the slides for a quick summary of what we can do here. With Azure Arc we can manage servers; you saw how that works, how we can
organize them and take advantage of the security and management features we have with Azure Arc. We can do the same things, or similar things, or in some cases even more, for Azure Arc enabled Kubernetes clusters, where we can do, for example, that GitOps deployment, and we can do monitoring of our Kubernetes clusters just like we do with an AKS cluster, but with Kubernetes clusters outside of Azure, running on premises or even at other cloud providers. And then we can also connect to these clusters, so if an admin or developer needs to go out and make some configuration changes, we can do that as well. So that is pretty cool stuff we can do there. This is one part of hybrid cloud management: we get that single control plane, we can attach existing resources to it, like servers and Kubernetes clusters, and then go out and manage them, taking advantage of the cloud-native management tooling we have in Azure. The other part I promised you is the Azure Arc enabled services part, and that basically allows us to deploy Azure services outside of Azure. So let me quickly show you how that can look. Again, if I go back to the Azure Arc page, we have the infrastructure management, but at the bottom you can see the services; these are some of the services we currently have Arc enabled. And there is an interesting piece here: we call these custom locations. In my case I defined two Kubernetes clusters as custom locations where I can now go out and deploy Azure resources. To quickly show you how that could look: if I create a new App Service, I could just click create new, and then I would select a resource group, select this one, and maybe give it a name, so let's do "cloud summit", and then I
need to choose a region where I want to deploy this. This is nothing new, we've seen that before, but the cool part here is that I can not just select an Azure region, I can also select my custom locations. These are basically in my own data center; you can see here "tom's data center", and that doesn't really sound like an Azure region, right? Or I could even deploy to another cloud provider. So how cool is that: I can now deploy a large number of Azure services not just in Azure but also at other locations. I could go through the setup here, I can also use ARM templates to do that, I can use the CLI to deploy it, and I can use native tooling within Visual Studio Code as well. So that is pretty cool stuff we have there too. With that, let me quickly talk a little bit about what's actually there. In this case we talked about Azure application services, which are currently in preview, so you can go out and deploy, for example, App Service, Functions, Logic Apps, API Management, and Event Grid outside of Azure, which is pretty awesome. This enables a couple of different scenarios for developers, where now you can not just build these cloud-native applications in Azure but also take them and bring them back to basically any infrastructure where you want to run them. But it's not just about applications; most applications also need databases. So one thing which is already generally available is Azure Arc enabled data services, and this allows you, for example, to run Azure SQL Managed Instances in these places as well. And then you don't just get the SQL database, you actually get the Azure SQL experience as a managed service, where we take care of the updating process, of the security, and so on, and you can go out and deploy that. And the only thing you
actually need is a Kubernetes cluster. So you need an on-prem Kubernetes cluster, or a Kubernetes cluster running at another cloud provider, connect it, define it as a custom location as I just showed you, and then you can go out and deploy. In the case of Arc data services you can even do that in a disconnected environment. So what does that give you? If you are a cloud architect or developer, you can now use your existing tooling and your existing skills to deploy these services or these applications to your infrastructure, using for example the GitOps techniques to deploy to your Kubernetes cluster, and then also take advantage of the Azure services, the cloud-native services like Azure SQL, Functions, and Web Apps, and keep doing that no matter where they're actually running. So this gives you the kind of flexibility you want to have there. Now, one last thing I want to mention: if you, for example, want to deploy this in your own data center and you're saying, well, I don't really have the knowledge to build up a Kubernetes cluster, I don't want to do that myself, we also offer a solution called Azure Stack HCI. This is basically an infrastructure solution doing virtualization, where you can run Windows and Linux VMs and then run different applications inside these VMs; it's also a great solution if you look at VDI and so on. And a service we also bring to Azure Stack HCI is AKS: the Azure Kubernetes Service can not just run in Azure but can also run on Azure Stack HCI, and that enables you to run all the containerized applications you have, but then also the Azure Arc enabled services I just showed you, in your own data center locations, and then obviously build your cloud-native applications. And this is great because AKS on Azure Stack
HCI, again, is deployed kind of like a service on top of it, very easy to deploy and go forward with, and you can deploy it in your own data center without needing all that "I built it myself" experience. So you can have the full Microsoft stack if you want to, but again, it's not necessary; you can also just use other Kubernetes distributions like OpenShift and so on to do similar things. So with that, I just want to tell you a little bit about where you can find more before I'm done. If you're looking at this hybrid and multi-cloud approach and want to dive deeper into what I just told you, especially when it comes to management, have a look at the Cloud Adoption Framework; we now have a special scenario for hybrid and multi-cloud guidance where we bring you that information. This is a lot of awesome stuff we developed together with our customers, the engineering teams, and so on, focusing especially on these hybrid multi-cloud approaches, talking about the control plane and so on. And then we have a ton of additional information, like Microsoft Learn, which I highly recommend you check out, we have reference architectures, and so on, so there's much, much more to learn. Again, I have a couple of links in here where you can go and find more; also check out Microsoft Docs, and last but not least, check out the Azure Arc Jumpstart guide to quickly deploy these different technologies. With that I want to say thank you, and I hope that provided you a great overview. All right, Thomas, that was an absolutely amazing session; although it went over my head, it looks like you covered the entire ecosystem of Azure Arc. To be honest, I really loved your slides. So Thomas, you are in a very fancy position, where, as we discussed, you work with the engineers and you get to work with the customers, right? So
while working with this hybrid ecosystem, what is a big challenge or debate you face with the customers? Any one point you would like to bring from your experience? Yeah, so there's always a debate when it comes to security and things like that, and what I've seen happen over the past couple of years is that in the past we had this discussion: okay, is the cloud secure? Now this conversation has slightly shifted to: hey, how can I use the cloud to make my on-premises environment more secure? So people start to see the value of the cloud, and it's not just about moving everything to the cloud; it really comes down to taking advantage of the cloud to make your on-premises environment even better. All right, makes sense. Maybe let's answer this one from a bird's-eye view, Thomas: what are the requirements to connect private servers, those not directly connected to the internet, to the cloud? So Azure Arc currently needs connectivity to the Azure control plane. As I mentioned, you deploy that Arc agent, and that Arc agent can be connected in different ways. First of all, you can use outgoing traffic on port 443, so HTTPS, encrypted, in the sense that we just need to access some of the Azure APIs. However, we know that some customers don't want to allow direct internet access, so we have two other options. One is that you can connect through a proxy server if you have one, so you can enable that as well. Or, if you really don't want to go through the internet at all, and you have an ExpressRoute or a VPN connection, you can use a private endpoint, an Azure Private Link configuration, so that the Azure Arc agent takes advantage of your VPN or your ExpressRoute connection. So in that case,
in the case of Azure ExpressRoute, it doesn't even connect through the public internet. Well, that makes sense; it looks like the Azure Arc team has been working a lot, and whichever way you want, you can go ahead and connect your servers to Azure. Thomas, that was an absolutely amazing session. Any final thing you want to plug before we move to the next session? I just want to say thank you, and again, for everyone looking at these hybrid architectures, please make sure you check out the Cloud Adoption Framework, and you can also find a lot of content on Microsoft Learn. Yeah, thank you so much, Thomas, absolutely amazing as always; I would love to host you again, and I hope I don't have to wait another ten months. Thank you so much, have an absolutely amazing day ahead, and week, and the rest of your time, and I'll see you some other time. Thank you so much, and ta-da, bye, take care. Thank you very much, Simon. While we now move from one cloud advocate to a session by another cloud advocate, it's by Henk. For those who follow C# Corner: Henk and we work very closely together, but I don't get to host Henk very often, because I believe all advocates are very busy, and I'm hosting Henk only for the second time in the past 18 to 19 months. So I'm really excited for this session, because Henk is going to talk about what's new in Azure Machine Learning, and if you look at Microsoft's ecosystem of AI and machine learning, there's a lot you can do there: the designer, the notebooks, and whatnot. So talking to us today is Henk; let's welcome our next speaker. Hi Henk, welcome to Cloud Summit 2021. Hey, hello, thank you, thanks for having me. I just realized, I was looking at the video and I just checked your LinkedIn: how was the very first episode of, what we call, A Bit of AI season two? Oh yeah, that one was really good, it all still worked, so that was good, and we had a really amazing
guest who did some really cool things with AI, so yeah, you should check that out. Yeah, I got a notification on LinkedIn that Henk was live, so I just checked it, and your live shows are very creative, not just another live chat; I really love tuning in to your events, Henk. I know we are already a couple of minutes into your session; are you sharing your slides? Once you do, I'll add them to the stream. Yeah, I've added it to the stream now, everybody can see it, and the next 25 minutes are all yours. Perfect, thank you very much. Hello everyone, and welcome to my session, what's new in Azure Machine Learning. My name is Henk and I'm a cloud advocate at Microsoft, based in the Netherlands, and my job is to make you all successful on our cloud with Azure AI. In this session we are going to cover a few things. Azure Machine Learning is big and we only have 25 minutes, so I picked two new things I want to talk with you about. First, for the people who don't know what Azure Machine Learning is, I'm going to explain very quickly what our offering consists of: what is Azure Machine Learning? Then we're going to have a look at how we can use Visual Studio Code and the Azure command-line interface, the az CLI, to control Azure Machine Learning, and finally we're going to dive into online managed endpoints. So, machine learning on Azure: there is a lot, as you can see here. If we start at the top, those are our Cognitive Services; if you want to use artificial intelligence, use computer vision or speech in your applications, and you don't want to do the AI yourself, you can call one of our endpoints and we will do it for you. Then you can use any tool, like a Jupyter notebook, Visual Studio Code, or a command line, to train your own models. We support all the popular frameworks, so you can train PyTorch, TensorFlow, scikit-learn, and ONNX models, all on our cloud. And we have some services for you to make things easier, like Azure Databricks, Azure Machine
Learning, and machine learning virtual machines, and we're going to talk very specifically about how Azure Machine Learning can make your life a little bit easier. Finally, we have a lot of infrastructure to make your training go fast, and we can run things securely in the cloud for you. So what is Azure Machine Learning? Azure Machine Learning is a cloud service that helps you manage the lifecycle of your machine learning model, from the data all the way up to running your models in production and getting feedback from those models. So Azure Machine Learning covers the whole day-to-day workflow, including MLOps. What is it exactly? It is a set of cloud services combined; it is not just one thing. It is Azure Machine Learning, it is storage, it is a key vault, it is a container registry, it is containers: a bunch of cloud services together that you can control with a Python SDK and through the Azure CLI, and today we're going to talk about how we can do that using the Azure CLI. That set of cloud services, with that tooling, enables us to prepare data, train our models, track our experiments, and finally deploy our models. So, Visual Studio Code and the command-line interface: what does that have to do with machine learning? I will tell you. The az CLI has an extension for machine learning, and with that extension we can control the Azure Machine Learning service. So we can run a CLI command that will kick off a training, will create a dataset for us, will manage and update an endpoint, can register models; basically everything you can do in the Azure portal you can also do in the CLI with the ML extension. Visual Studio Code uses that extension to execute commands. Visual Studio Code also has an extension for Azure Machine Learning, and with that extension installed you can connect to your workspace and get a visual representation of what is in your workspace. So you can see what type of data
sets you have, you can go to your experiments, you can go to the endpoints, and the cool thing is that from here you can create configuration files that can be run using the Azure CLI. So you don't have to type the whole command; you can just create the configuration file, give it to the CLI, and run it. So how does it work? Here we have this slide: you have your workspace, an Azure Machine Learning workspace in the cloud, that is the big icon at the bottom, and Visual Studio Code understands and knows your workspace, so you see all the things, and with that knowledge it can generate YAML configuration files. With those configuration files you can use the CLI to create new resources in your workspace: kick off a training run, deploy a model, register a model, create a new dataset. And when that runs, you can see it back again in Visual Studio Code. A small recap of what we're going to do: we're going to use Visual Studio Code to show elements in your machine learning workspace; Visual Studio Code can generate configuration files for you; and Visual Studio Code uses the az CLI to execute these configuration files for you. That is all very handy to separate configuration from the code that will do the actual training. So now it is time for a demo, and we're going to actually dive into the code and see how that works. Let's dive into Visual Studio Code. Here is my directory, just a directory on my local hard disk, checked in to a GitHub repository, where I have configuration files, and here is the Azure Machine Learning extension that can browse through my workspace. I've already created the workspace; my workspace is called Cybertron. I have different datasets here, I have a LEGO dataset version one and version two, and I have a bunch of models; it connects now to my actual machine learning workspace and will fetch all my models, and I have a lot registered, so it will generate a list soon. Here we have my endpoints, and here underneath we have
the different types of compute that are available. So let's dive into some of these configuration files and what you can do. I'll go back to my files, and here I have my create-workspace YAML file: I tell it the name, tell it which location it has to be deployed to, give it a friendly name, and I can give it some tags, and then I can right-click on it and run it. I will not do that now, because we don't need a new workspace, but we can see if we can create a dataset. The data is all in here, the training data, and we're going to create version 3 of it. You have to save the file first, thank you for that, and then it will of course first check whether there is a connection to the workspace, which there probably isn't, otherwise it would have had this connection. So what I will do is restart Visual Studio Code and we will try it again. The extension is loading, so we can see if we have access to the machine learning workspace: yes, we have access, and it should start executing the command. Here you see it is running the az CLI command with the configuration file and the context about my subscription; Visual Studio Code generates that az CLI command for me, which is very handy. If we now go back to our Azure Machine Learning workspace, Cybertron, and go to the dataset, we see that it has created a new version of the data here. So this is all pretty handy and useful, but how about training a job? Here we have a YAML file that will help me train a model. We give it a name, we will log everything under the simpsons-demo experiment, and you can find my training scripts in the scripts folder, in the train folder. If we have a look in that folder, we have a Python file that actually does the training for me. This is an example I copied from the PyTorch website, and I've added a few things: some references to the Azure ML Python libraries, parsing of some incoming arguments, and at the bottom of the script I am uploading the generated
models, the models I created, to my experiment, tag them, and then register the model. So let's tag it here as "cloud summit" and kick off a training run, because that takes a while. The job has the train script, and this is the command it will execute when the train script is uploaded to the cloud: it runs train.py, and it gets passed where the data is coming from, the number of training epochs, and the model name. This command will run in an environment, and there are two types of environments: there are environments you can create yourself, bring your own Docker container, or you can pick one of the environments that are managed by us. I'm picking an Azure ML curated environment with PyTorch 1.7 installed on Ubuntu 18.04 with CUDA drivers, which is GPU enabled, and I'm picking version 3 of that container. This container is going to run on my compute cluster, Optimus Prime, which you can also create through a YAML file, and it's going to use my input data. And this is actually pretty cool, because Visual Studio Code is connected to my Azure Machine Learning workspace, so it will actually auto-complete my dataset: we are using, um, version 3.
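The training-job YAML just walked through might look roughly like the sketch below. This is illustrative only: the field names approximate the Azure ML CLI (v2) schema as it looked around its 2021 preview and may differ in current versions, and the environment tag, dataset reference, and compute name are taken from the demo as assumptions, so check the current `az ml job` schema reference before using it.

```yaml
# Illustrative sketch of a command-job YAML for the demo's training run.
# Field names approximate the Azure ML CLI (v2) preview schema; verify
# against the current schema reference before use.
$schema: https://azuremlschemas.azureedge.net/latest/commandJob.schema.json
experiment_name: simpsons-demo
command: >-
  python train.py
  --data ${{inputs.training_data}}
  --epochs 10
  --model-name simpsons-classifier
# Curated environment: PyTorch 1.7 on Ubuntu 18.04 with CUDA, version 3
environment: azureml:AzureML-pytorch-1.7-ubuntu18.04-py37-cuda11-gpu:3
# The demo's compute cluster, itself created from a YAML file
compute: azureml:optimus-prime
inputs:
  training_data:
    # Version 3 of the dataset created earlier in the demo
    dataset: azureml:lego-dataset:3
```

Run with something like `az ml job create --file job.yml`; Visual Studio Code generates and executes an equivalent command for you.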
So this makes creating these configuration files really easy, and here we already see the output for the job that has been training in the cloud: it's sending all the outputs from my training script back. This will take a little while, and then eventually my model should be completed and we should be able to see that in Azure Machine Learning itself. Here we see run 5 in the simpsons-demo experiment; it is finalizing, it was actually created with our scripts here, and it should deliver us a model, which we can find back under Models with our correct tag: conference, azure cloud summit (I still have to learn how to spell), but it is actually here. So here is our model, and using the CLI or the Python SDK both have the really big advantage that you can see how your model was created. This model has been created by this run; in this run we can see which command was executed and, very importantly, which dataset was used. If we click here we see the dataset, we see how many files are in the dataset and what the total size is. Yes, it is registered as the lego dataset; version 2 was being used there, we created version 3, and we can actually explore this dataset and see the images. It takes a while, but if we are patient we will see Bart Simpson here. These are the training images, and there are 500 of them, and that's all with some configuration files. So YAML is there for configuration, and the Python file, or whatever you want to train your model in, is separated from this configuration. Now that we have a model, we probably want to deploy that model. At the moment we can deploy models to a container instance or a linked Azure Kubernetes Service, but now in preview are managed online endpoints, and this is a really, really cool offering, because, like the title already tells you, it is completely managed for you. You don't have to do anything, you get automatic scaling, and you can manage traffic to your different deployments. So how does it work? You create an endpoint, and in that
endpoint you can do all types of different deployments. Let's say I have model one and model two. I deploy model one first, route ninety percent of the traffic to that, and make it run on cheap compute. It doesn't really perform, so I think: let's train another model with another framework or some different parameters, run that on a GPU-enabled machine, and put 10 percent of the traffic there. Then I add some logging and I can really see the differences between those deployments, and if my deployment for model 2 is better, I can route all the traffic to it by using the CLI, or just the visual interface in the ml.azure.com portal. So let's see how that works; I think I still have a few minutes to show you that. Okay, so on to deployment. For the deployment we can also use curated environments, or we can create an environment that is very specifically for my trained model. Here I'm creating a deployment environment called simpsons-scoring version 2, created from this Docker base image, and then I'm asking it to use this conda file to install some extra tooling for me: install Python 3.6, the Azure ML packages, PyTorch and Pillow, that's it, don't install anything more. Then I can start creating an endpoint, and when you create an endpoint you also specify your first deployment in it. This endpoint is called simpsons-demo, it's an online endpoint, and I'm saying: just add some authentication for me, so that's something I don't have to handle in my code, this managed endpoint handles authentication for me. Route 100 percent of the traffic to the first deployment, called v01, and here is that deployment: enable Application Insights for me out of the box, so you can track the performance of your container, and then upload all the code that is in scripts/score and use this scoring script, because it will run in a server and it needs a little bit of code where
you tell it what it should do to load your model and how to run data through your model to get a prediction. So there is an init method, which loads your model, and then there is a run method, which will parse the incoming JSON, download the image, do preprocessing on the image so that it is in the right tensor shape, then run it through the model and return the prediction as JSON to us. So that is score.py. Run this in the simpsons-scoring version 2 environment, which is the environment I created, and use some compute: the Standard_F4s_v2 virtual machines, or you could also say the Standard_NC6 machines if you want to have a GPU. Then you can define how you want this deployment to scale: do you want to do it manually, like me, or automatically, saying max 10 instances, minimum one, and now spin up five, please. You can say those kinds of things here. So then you have one deployment up and running, and then I can add another deployment. This was version one of the model, and now we're going to add version two: same thing, Application Insights, same instance type, same scaling, just a different version of the model. Version one I trained for only two rounds and the other one I trained for 12 rounds, so there should be a very distinct difference in outcome if I send the same image to one deployment and to the other. And then I can say: actually, update the endpoint with the second deployment and scale traffic 50/50.
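The init/run pattern described here can be sketched in Python. This is a minimal illustration, not the demo's actual score.py: AZUREML_MODEL_DIR is the environment variable Azure ML sets for deployments, but the label list and the stand-in predictor are made up for the sketch (a real script would load the PyTorch model and preprocess the image into a tensor).

```python
# Minimal sketch of an Azure ML scoring script (init/run pattern).
# AZUREML_MODEL_DIR is real; the label list and the stand-in model are illustrative.
import json
import os

model = None
CLASSES = ["homer_simpson", "bart_simpson"]  # hypothetical label list

def init():
    # Called once when the serving container starts: load the registered model.
    global model
    model_dir = os.getenv("AZUREML_MODEL_DIR", ".")
    # Real code: model = torch.load(os.path.join(model_dir, "model.pt"))
    model = lambda tensor: [0.95, 0.05]      # stand-in predictor for the sketch

def run(raw_data):
    # Called per request: parse the incoming JSON, run the model, return JSON.
    payload = json.loads(raw_data)
    tensor = payload["data"]                 # real code would download and preprocess an image
    scores = model(tensor)
    best = max(range(len(scores)), key=lambda i: scores[i])
    return json.dumps({"label": CLASSES[best], "score": scores[best]})
```

The managed endpoint calls init once at startup and run for every scoring request, which is why only these two functions need to be defined.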
So this I've done; it takes around 10 minutes for a deployment to be done at the moment, so we're not going to wait for that. Here we're going to look at the endpoint, and we see simpsons-demo: we have two deployments, version 1 and version 2, with fifty percent of the traffic each. Here we see which model is actually running in those environments, and we see version two is running in there. We can have a look at the deployment logs if anything goes wrong, we can go to the metrics and to the deployment, and back here is how we can actually consume it: there's a URL, there are some keys, and some simple code. So let's see what happens if we send an image of Homer Simpson to that endpoint. No, that's not what I wanted to click on, I wanted to click here: send request. So we've sent a request, and we see that we hit model deployment version 2 and the score is 95 for Homer Simpson. Now we've sent it to version 1 and it is 85. So if we keep doing that, we will see that the different requests alternate between deployments 1 and 2.
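Consuming the endpoint from code looks roughly like the sketch below, assuming you copy the scoring URI and a key from the endpoint's Consume tab. The URL, key, and image reference here are placeholders, not values from the demo.

```python
# Sketch of calling a managed online endpoint with key authentication.
# The scoring URI, key, and payload shape are placeholders for illustration.
import json
import urllib.request

def build_request(scoring_uri, key, image_url):
    # Managed endpoints with key auth expect a Bearer token in the
    # Authorization header and a JSON body the scoring script understands.
    body = json.dumps({"image": image_url}).encode("utf-8")
    return urllib.request.Request(
        scoring_uri,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {key}",
        },
    )

req = build_request(
    "https://simpsons-demo.westeurope.inference.ml.azure.com/score",  # placeholder URI
    "<endpoint-key>",
    "https://example.com/homer.png",
)
# urllib.request.urlopen(req) would then return the JSON prediction,
# and repeated calls would be split across the deployments per the traffic rules.
```

Because the 50/50 traffic split happens at the endpoint, the client code stays the same no matter which deployment actually serves the request.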
So that, in very short, are the two things I wanted to talk to you about: what is new with Azure Machine Learning. If you want to check out my demo code, or read more documentation and see more examples than my quick 20 minutes of technology allowed, you can follow the aka.ms short links for the AML VS Code demo, the AML VS Code docs, and AML online managed endpoints, and learn more on Microsoft Docs. So back to you, Simon. Absolutely great, Henk, I think you just covered an end-to-end solution in just about 25 minutes. Here's my question: I remember the last time I hosted you, you did everything inside Azure Machine Learning and you didn't write even a single line of code, whereas today you ended up writing a lot of code. So here's my quick question: if I don't know YAML and all that stuff, can I do exactly what you did today in Azure Machine Learning studio and the designer? Yeah, so there is also the Azure Machine Learning visual designer, and there you can drag and drop some basic steps. You can still write code in the visual designer, because sometimes you just need to bring your own code. But these configuration files are especially handy if you want to bring things into production, because everything you do is documented, in this case in YAML files, and Azure DevOps or GitHub Actions can run az CLI commands for you: they can create those jobs, they can do those deployments automatically, you can test it locally on your machine, and your data science team can work on the Python file that will actually train the model separately from how it is going to run in the cloud. Having said that, Henk, as we said from the very beginning, Azure machine learning and AI is a very big ecosystem and there are so many products out there. Now, as I said, you are also a cloud advocate and you work very closely with the
engineering team and with the customers. Out of all these products, which are a couple of products that customers really use a lot? I would like to know that. Yeah, so what I see is that the Azure Machine Learning service is really starting to be used in big environments to actually start working together on creating AI solutions, and this is also the product I focus on. Wow, that's great, that's absolutely amazing. Henk, before we move to the next session, I know you manage the Global AI Community; shameless plug-in, anything you want to plug about the upcoming events? I saw you did share a welcome back to the Global AI Community some time ago, so just go ahead and plug something. Yeah, perfect, thank you for that. Yes, so two things are going to happen in October from the Global AI Community. We will have another Global AI Student Conference, around eight hours of content from student ambassadors and people in the field, so that is really good. And there is going to be the Global AI Back Together; it has been quiet for a while on the Back Together front, and we are going to try and see if we can do three evenings in October, a workshop with each other, and get back together to learn all about AI again. Some countries will be lucky enough that it can be in person, some countries are going to do it virtually, and some people just love the whole virtual thing and are just going to continue being virtual. So it is up to every user group all over the world how they want to participate and deliver the sessions. Yeah, I saw that for India it was Pune, but unfortunately it was online; I just wish things get better here in India so that we can take things offline. I also saw one for Indonesia, in Latin America, and also in Europe, so you already know I have checked that website. So thank you so much, Henk, I really appreciate it, I absolutely love hosting you, and I don't want to wait another year to host you, so I'll keep on picking you
until then, all the best for all your live shows and events, and yeah, we'll stay connected, have a nice day ahead. Yes, thank you, good luck. All right, with that session on what's new with Azure Machine Learning you learned so much. We now move to our next session, which is around getting certified. It is so important to get certified, and there are so many things that you're going to learn; I believe this session was one of the featured sessions for Microsoft Build. Wow, so if you could attend Microsoft Build... we have Tiago joining us; I don't know from where he's joining, so I'll just ask him. Let's welcome our next speaker, Tiago. Hi Tiago, welcome to Cloud Summit 2021. Hi Simon. That was a great video, by the way, and I still need to figure out the last picture, from where it was, because I don't even remember it; I don't even know that you had to literally scrape my social media. Yeah, I have a small team who helps me out. But Tiago, you know I'm really looking forward to your session, which is all about Azure professional certifications, because not many people talk about this and people really want to get certified, maybe starting with a very fundamental certification. I see you're sharing your screen; I have added it to the stream, so the next 25 minutes are all yours. Okay, thank you very much. So yes, this session is about exactly that: all that you need to know in order to get Azure certified. My name is Tiago, and by the way, that's a picture of my hometown; I'm from Lisbon, Portugal. I have all my information here, so you have my Twitter handle, tiagocostapt; I usually share tons of stuff on Twitter, so if you want to follow me, please do. I share stuff around Azure and also about Azure certifications. I'm an MVP for Azure and a regional lead, I'm also a Microsoft Certified Trainer, and I help Microsoft as one of the members of the Certification Council; we are a very small team, and I'm
not a Microsoft employee, but I help them with some information and feedback about the certification program. What will be our agenda for today? We're going to talk very quickly about what Azure is today and how important Azure is in the world, then we're going to talk about how Microsoft builds the certification program, then we're going to focus on the Azure certifications: what certifications are out there and what you can, or should, want to achieve in terms of Azure certifications. And then I have a small part which is four steps to get certified, some small tips that I have around how to get certified. So there you go: what is Azure today? Azure today is the second largest cloud provider. Look, I'm not going to give you the sales speech on this: tons of regions, tons of fiber, tons and tons of edge sites. Azure is sometimes even closer to you than you think, because it's not only the main regions; you also have edge sites for services like CDN, and maybe they're even in your hometown. So this is what we have. And then, why should you care about Azure today? For sure you care, because you're attending this, but there is huge demand from customers. It's like nothing I have seen before in all my career, and I already have a few years working with Microsoft technologies. There is so much need for people who are capable of working with Azure, and what better way to show that you're capable than a Microsoft certification? You might not have years and years of experience, but I will care for you, because you cared to take a certification, you cared to learn, and that's very important, at least for me, when I need to hire people for my projects. And then, of course, look, it's fun to work with. I have to say I'm very passionate about what I do, but I'm also very lucky because I really do what I love, so
right now, for example, I'm designing an architecture for a customer and I'm having fun, to be honest; I hope that my customer is not watching this. And they pay me, which is also nice. And you pay for what you use, and that model, for the customer, is pretty cool, because you don't want to pay for services that you're not using. You don't want to pay for having, let's give a number, 20 servers when you only need two, but you have to have the 20 for when you have a peak in demand. That's the on-prem world. In the cloud world we don't have that: we're just going to have the two that we need now, and then when we need the 20, or even 30, 40, 50 servers, we will be able to have them. Of course you need to pay for them, but you only pay for what you use, and that's the perfect approach for all of this. So let's talk about the certifications. Today we basically live in a connected world; we have all these dots and everyone is connected, and this is not only in our lives, it's also when we do projects. Say we have a customer and we need to migrate that customer totally to Azure; that's a big challenge for customers of a certain dimension. I'm a solution architect and I basically lead those efforts, but I'm not a very deep-dive person in networking, for example, so I need to interact with and bring in people from the networking world. Same thing for databases: I'm not the best person in terms of databases; I have a big background there, but I didn't work with it in the last years, so let's grab someone from that field. We might have a Java application; I'm not the best person for that, for sure, so let's bring someone from that world. So all these connected dots: when we have projects today, and we all know this more and more because of the pandemic we live in, we can have teams that are
totally dispersed around the world, and that's great, because I can now get the top of the top, someone very specific to work in a role that is just networking or just databases, and I will get the best people to do the job. And with this you get why Microsoft changed, two or three years ago, from product-based certifications, where you were Windows Server 2016 certified, tied to that version ("oh, but we're working with 2019, you're not certified on that"; "yeah, but I'm certified on the previous version" — that was the world we lived in), to role-based certifications. Now you live in a role world, and the role-based approach is based on the roles that you want to achieve, or that you already do. If you're a networking engineer, and that's what you want to do on Azure, there is a certification for that; likewise Azure administrator or solutions architect. So that's what we have. And with this, let's talk about how Microsoft builds certifications; I think this is important for you to understand certifications a little bit better. There is basically work being done backstage that is called a JTA, a job task analysis. In that JTA, Microsoft identifies the roles that are in the market, and very early on they said: well, we have an Azure administrator, we have an Azure developer, and we have an Azure solutions architect; those were the first three roles they came up with. With that done, subject matter experts sit down and say: what does an Azure administrator (let's use the Azure administrator as our example for today) need to do, what does he need to know? Then they make a list of all the things that an Azure administrator needs to do, and it's with that document, which, by the way, we call an objective domain, that we are going to create what comes out on the exam. These are all the topics on the exam. It's based on that document that
Microsoft also creates the Microsoft Learn content, because they need to create Learn content for that; it's where the official courses come from; it's where other companies like LinkedIn Learning, Pluralsight, and all the other companies in the world grab that document and start to produce content around it, because that's what you need to know. And of course the exam questions are also created based on that. So these aligned learning experiences are something amazing that we have today. Nowadays we have digital skilling like Microsoft Learn, which is free: you don't pay for it, you just go to Learn and search for all the content that is there, and it is pretty good content, to be totally honest. You can go to events remotely, virtually, like this event, for example, which has tons of amazing sessions. I was watching the last two sessions, for example, and they're pretty good; we learn a lot, even people who are super experienced. To be totally honest, there's always something that we don't work on on a daily basis, and we go: oh, that's interesting. You can also attend classroom training, the traditional classroom training taught by Microsoft Certified Trainers; it's a model I'm very into, to be totally honest. I really love to teach; teaching people is one of the things that I do too, and I really love the classroom. There are also videos, so you can grab videos from LinkedIn Learning or from Pluralsight; there is tons of stuff there. The thing is, not everyone learns the same way. So if you prefer videos, great for you; if you prefer to attend an event, also great; there are people who prefer classroom training, same thing; or you can do a mix, a piece of this and a piece of the other, and I'm sure you will ace it. So what do we have in the Microsoft certifications? We have three, let's call them levels: the
fundamentals; the role-based, which are the ones that incorporate the associate and the expert levels, and we're going to understand what they are; and we also have the specialty certifications, which are focused on very specific roles, like, for example, Azure for SAP Workloads. You have an SAP workload? Great, we need to run that on Azure, and that's specific to the SAP world, so it didn't make sense to create an associate certification for that, and definitely not an expert one, so they created this other level, which is the specialty. What are the fundamentals? I tell everyone: start with the fundamentals. "Hey Tiago, but look, I'm already experienced with Azure." Do you have a Microsoft certification, or at least a role-based certification? And if the answer is "no, it's going to be my first one", start with the fundamentals. It might be an easy certification for you, I'm not going to lie, sure it will be, but you learn how questions are formulated, you learn how the exams are structured, and all of that is going to be super valuable when you take the associate or the expert exam. So that's a great way in. In the role-based we have the associate and the expert levels, and there is one thing I always love to say: you're not only going to be tested through questions, you also need to know how to do things, especially at the associate level; not so much at the expert level, but also there. At the associate level the exams usually give you a question with a small scenario, and now you have to say how you do something, and some of the questions are even live, with an Azure portal, and you really need to do things. It's not just answering questions; you really need to log into the Azure portal and do something there. And this is how you should test people nowadays, to be honest; this is how you really know if someone knows something or not:
by asking them to do it. So this is what we have in terms of the certifications, and if we turn the laser on here, we have this expert level and the associate level. At the associate level we have the Azure Administrator and the Azure Developer; I think it's pretty straightforward what they are, right? Creating virtual machines, creating things and resources on Azure, managing the resources on Azure. The developer is: let's develop applications, and on the exam you're mainly being tested in C# or Python; you can select one or the other. It's about coding applications using Azure services; no one is going to test you on how to code, you're not going to be tested on that, for sure. It's more about how you use Azure services. Let's give an example: Azure Storage. How do you access an Azure storage account through the SDK, and how do you store data there, like a PDF file, things like this. Then we have the Security Engineer, a certification around Azure security as a whole: Azure AD, and basically the Azure services, like SQL Database, how do you protect that; containers, how do you protect containers; virtual machines, of course, how do you protect those. So it's pretty cool what you can do with this. Then we have the Network Engineer; that's a new one, by the way, a very recent one, and the exam is still in beta. It's the AZ-700, and as the name says, it's just the networking component. There's a little bit of overlap between the network engineer and the networking part of the Azure administrator, so if you already took the Azure administrator, with a little bit more effort you can take the network engineer, and I'm sure you will pass. And then we have the expert level, where we have two certifications: the Solutions Architect, which is architecting and designing applications for Azure, or
cloud-native applications on Azure; and then we have the DevOps Engineer. It's a very special certification because it's the only one that currently has a prereq: you need to be certified as the administrator or the developer, but I'll talk about that later. So there's a pure prereq, and it's not just DevOps on Azure; there are tons of things that you need to know. It's not just the Azure part, it's more the Azure DevOps project, with Repos, with Pipelines, all of that. And then we have the specialties; we currently have three: SAP Workloads, IoT Developer, and Azure Virtual Desktop. Those are the three current ones, and I'm sure that if we had this same talk, I don't know, three or four months from now, this would be different, no question about that. So let's talk specifically about this. This is another way I have to show the certifications, where you can see that the DevOps engineer has this prereq, but for all the other ones there is no prereq; even the fundamentals is not a prereq, which is why it clearly says they're optional. I recommend you take it, for sure, but it's not something that you need to do. And then I have here the learning paths; this is the learning path for the Azure fundamentals, and there are four topics in here, and this is the exam, the AZ-900. But how do you know all this? There's always that big question that people have: "oh Tiago, how do I know that?" Okay, let's just drag this here. What is the exam? It's the AZ-900, right? So you just search "AZ-900" and you go to this page, Exam AZ-900, the title of the exam, and let me just make this a little bit bigger so that everyone can see it well. If you scroll down, you have this area, "Skills measured", and there is a link here that says download
the exam skills outline, which is a PDF file. You look at this PDF file and it has all the things that you're going to be tested on, like literally everything, because when I showed you just this, and you see there "cloud concepts", it's this part that says "describe cloud concepts", but then, as you can see, there are literally tons of small bullet points in this, so you really need to know what it covers. This is an easy certification, it's not that complex, to be totally honest with you, but this is what we have here. Then this is the administrator one, so we talk about Azure subscriptions, storage solutions, virtual machines, networking, and identity. If we want to see some of the key points on this one: yeah, definitely Azure AD is a big topic, creating users, groups, devices, multi-factor authentication, and a little bit of Azure AD Connect; even though we don't talk that much about Azure AD Connect on this exam, you need to understand it. Then we talk about governance, we talk about cost management: tags, role-based access control, policies. And if I'm saying topics like this and you're like "I'm not getting what Tiago is saying", well, just study first before you take the exam, that's my advice. Other stuff is around storage, like storage accounts, the Import/Export service, Data Box, Files, Backup; and I just have these here as key points, because there is tons of stuff that you need to know. This is why, okay, let me just grab it here: we also have the developer. You need to take the AZ-204, that's just one exam, and you get certified, and again, tons of topics here, just search for this. Then we also have the Solutions Architect. The solutions architect is a little bit different; as you see, there are two blue boxes here, because there are two exams: one exam which is the AZ-303, and a second exam which is the AZ-304.
The AZ-303 is a very "do it", very hands-on exam, how to do things, and there is a big overlap with the AZ-104, which is the Azure administrator. So if you already know the Azure administrator topics, maybe you're even already certified on that, taking the AZ-303 should be more accessible for you. But it's still a big exam, and there are still some things in there that you don't have on the AZ-104, so reach out for the exam objectives PDF, check it, compare, read it, see the stuff that you're already good at, move on, and read the other stuff, because there are always things you don't know, because you don't work with them all the time, which is perfectly valid, perfectly fine. And then there's the second exam, the AZ-304. The AZ-304 is a design exam, so as the name says, it's not how to do things, it's how you should design things. It's more conceptual: you get questions like "we have this scenario, what services should you put here?", or "should you use a private endpoint for this, or VNet integration?", things like that. How do you get certified? You have to pass both exams. A question that comes up all the time: "oh, do I need to do the 303 first and then the 304?" No, it doesn't matter; just do first the one that you want and then do the other one, and when you have the two, you get certified. A normal path for me? Yeah, just follow the numbers: 303 first, 304 after. That would be my choice and that's my recommendation. And with this we get to the DevOps path. As you can see, there's tons of content here; same thing, check the PDF for all the details. We have the AZ-400, but a great thing here, an important part of this, is that there is a prereq: you need to be certified as the Azure administrator or Azure developer. So you need to have that certification, and when you take the AZ-400, you get certified.
Again, loads of people sometimes take the AZ-400, don't get the certification, and kind of complain. Then I ask them: are you an administrator or a developer? Oh, no. You need to have it. You can take this exam first and then one of the others afterwards; again, it doesn't make much sense, but you can do it if you wish. And a question that sometimes comes up here: "okay, do I need to take the AZ-104? Tiago, I took the administrator but in the previous version of the exam, the AZ-103." Doesn't matter: it's the certification, not the exam, not how you got it; the certification is the prereq. So it doesn't matter if you got a previous generation of the certification; as long as it is the Azure Administrator Associate, you're good to go, there is no problem whatsoever. So with this, and we have three, two minutes left: how do you prepare for a certification? Loads of people ask me about this, so I have a method, the four steps to get Azure certified, which is pretty simple: read the exam objectives (you know, that file I showed you), get training resources, practice, and then take the exam. Let's talk about the exam objectives; we already saw it, this is the 104: check this, read all the details, and then find information around it. Get training resources: on my website, tiagocosta.com, under Azure certifications, you have study guides for each one of the exams. I'm still building the ones for the AZ-700 and the latest exams, but just go there and you have links to Microsoft Learn content around this. There are the Microsoft official courses; there are Microsoft Docs with tons of documentation around the services; there are practice tests, and I just put MeasureUp there because it's the official one, but there are other providers out there with amazing practice tests; and then you have LinkedIn and Pluralsight content, and again, other providers
okay uh out there with with amazing content i really didn't want it to put more over there because i will always going to miss one of them and there's always people that get mad with me because oh we forgot about this one uh yeah sure and there's even some that i even don't know okay step three practice okay go to the azure portal fire up visual studio code create powershell azure cli command look do things okay that's how you learn you sometimes you gotta practice question how do you do this and you don't know yeah just go to the azure portal and try doing it you're learning now you know the answer oh that's the answer you answer that you will never forget about that anymore because john now you know how to do and then fourth step an important one take the exam i will even going to tell you a secret before you start step one start step one and schedule the exam right away for a further date now you have a commitment that day you're going to take the exam you can always postpone it okay if you wish but at least you have like a personal goal that they take the exam don't be afraid to take the exam okay some people are i'm afraid of failing if you fail what's the problem take the exam again okay there is a fee that you need to pay that's the only bad part of the story because look i'm not going to lie all everyone that is certified and they have like years of certifications in somewhere in their story they have failed on an exam i have failed an exam no no problem in saying that okay just go for it i'm sure that sometimes you you know more than you feel that you know when you get to the exam there's people that pass on the exam with a good score and they were a little bit afraid of taking the exam okay so just just go for that okay and that's it okay so thank you very much i think we are on time my name is thiago and this is how you get azure certified that was absolutely great uh thiago i can bet that whosoever has watched past 25 minute session right they have a 
very good idea about Azure certifications — what they are, why they matter, how to go and schedule them, plus some of the best practices you shared. We do have some questions — there are always questions around certifications, aren't there? Here's the first one, which you partly covered in your session. Ferris says: I'm a student doing an undergraduate major in applied technology — what certification can best prepare me for a career in the cloud? So, first of all, I'd definitely try the fundamentals — they're the most accessible way into the technology — and then try to go for one of the associates. I'm not going to tell you which one to take, because it depends on the path you want to follow: if you lean towards the developer world, take the Azure Developer; if you prefer a career more around building solutions — virtual machines, containers, things like that — go for the Azure Administrator. So it depends, but the fundamentals are the same for everyone and a great way to enter this world. Yeah, I remember doing the fundamentals back in 2014, and I still like to go back and do it because it refreshes your basics. I took all the fundamentals that exist, even in areas that aren't mine, just to know a little bit — for example, I took the Power Platform one. I don't do Power Platform stuff, to be honest with you, but I loved doing it because I learned things, and that's the important part. I think I should do the Power Platform one too — I did the AI one, but I should do Power Platform as well.

Let's take another question; this one's from Sriram. He's asking: Tiago, is there any certification for Azure Kubernetes? No, there is not — and that's a valid point. I've already taken it to Microsoft, because I think — not Kubernetes-specific, I'd go a little broader, containers in general — but no, there's nothing. There are some Kubernetes certifications out there, but they're not for Azure Kubernetes Service, AKS. Okay, I think I lost Simon's voice, but I think I can read the question. No, no, you didn't lose me — I was just checking whether you can read my lips! That's a great save, by the way. So here's the question: John asks, how many years before an Azure certification becomes obsolete? That's a very good question, John. I understand what you mean — the technical term isn't "obsolete," it's how many years it stays valid. Currently, if you take a certification today, it's valid for one year, and every year you need to re-certify. But the re-certification is free, and it's something you do online — you don't need to go to a testing center, there's no formal proctoring, it just covers the differences from the last year. I renewed my certifications, and it took me maybe 20 or 30 minutes to renew each one — it's a pretty easy process. And you have, I think, six months to do it, so there's more than enough time. It's free; you get one shot, and if you fail you can take a second shot right away, and if you still fail, you can take one shot every 24 hours, I think. So, to be totally honest, it's pretty straightforward. You might see some people saying two years, by the way — it used to be two years — but currently it's one year.

All right, we're running a little late for the next session, so let's quickly take a couple more questions. The next one is from Akshita, who asks: is there a way to get a voucher for taking an exam? I know one way. Yes, there is a way: the Microsoft Virtual Training Days — search for that, go ahead and attend, and then, usually,
there are training days from Microsoft where, if you attend and complete a few things, you get a voucher. Perfect, that sounds great. Someone asked whether there's any certification for testing — well, the nearest one is the DevOps one, which touches on that; thank you, Damian. The next one: Tiago, what options do we have for databases and Cosmos DB? For databases you have the DP-300 — I don't recall the exact exam name, it's the database administration one — and for Cosmos DB there's no certification aligned directly with it, but that would be a great idea to have — a really good idea. Mark Brown, if you're listening: people want a Cosmos DB certification! Yeah, let's get one of those. I think we're all done — people are saying thank you for joining. Sebastian asked about re-certification, which you already covered, so I think we're good on questions. Tiago, any final thing you want to plug before we move to the next session? If you want to take a certification, just go for it, to be totally honest — don't be afraid. There are people who say, "I'm not capable, I don't have experience." Just start studying and go for it. You can do it; everyone can do it. Everyone is smart enough to study — you just need to put in the effort and take the certification. And sometimes you fail — I recently failed the AI Engineer exam, and I don't mind confessing that. Thank you so much, Tiago, it was an absolutely amazing session. I love your background — if we could barter your pillow for what I'm going to send you, that would be great; it's really the right size. And
I have two pillows of those, so we can exchange! All right, thank you — have a nice day, bye, take care, bye-bye.

All right, with all you've learned about Azure certifications, we now move back to the SQL side of things — we have so many sessions on SQL because, well, people love SQL. For this session we have Mara. Mara is going to talk about data replication in Azure SQL and beyond, and I'm really excited to host her once again — I believe I hosted her in the first quarter of this year at the SQL Server virtual conference. And here's the best part about the summit that Mara likes: our intro video, so let's play it.

Hi Mara, welcome to Cloud Summit 2021! Hello, how are you? I'm doing great, Mara — thank you so much, once again, for joining us on the live show. Quick question: how was your summer? It was really good — I traveled back home to Europe; I was in Romania for a long time, so really nice. How was your summer? So Mara, I know you have many Indian friends — here in India we don't do much in summer because it's very, very hot outside, so nothing very interesting, but yeah. All right, feel free to go ahead and share your screen — I know I'm already six minutes into your session, so we'll do the chit-chat towards the end. Just a second — I don't see your screen yet... yeah, I see it now; I'm going to add it to the stream. Maybe Mara would like to dismiss that pop-up that says the stream started — yeah, this one. All right, the next 25 minutes are all yours. Sounds good. Can you see my first slide, the blue one? Perfect.

Hi everyone, thanks for joining today's session. I'm going to talk about data replication in Azure SQL and beyond — and what I mean by "beyond" is that we'll also explore some solutions available on SQL Server, as well as Azure SQL Database and Azure SQL Managed Instance. I'm a program manager on the Azure SQL team at Microsoft. Okay, let's get started. For today's agenda: first, at a very high level, we'll quickly discuss why data replication is relevant; then we'll look at some common scenarios and solutions that we've seen our customers use for their replication needs; and then we'll explore some of those solutions in more depth. Given the limited time, we'll only be able to explore four of them, and we'll do two demos as well — we'll look at change data capture, SQL Data Sync, change tracking, and transactional replication — and lastly, we'll bring it all together and do a comparison of these data replication technologies.

So, at a very high level, why is data replication relevant? Well, there are so many scenarios for replication, all over the world, in so many industries. You might have a business with offices and inventory in different regions of the world, and you want to make sure the databases in all those regions keep their inventory synchronized — so you might use data replication technologies. You might be a 911 system that has to dispatch resources very quickly within a country and needs to be aware of all the needs at all points in time — so you would need replication as close to real time as possible, to react quickly to people's needs in different parts of the country. There might be a disaster in one of your regions, and you want your database to fail over quickly to another region to protect your data. So there are many scenarios for replication, and we're going to explore some of them today.

As I was saying, we've seen our customers use replication technologies for multiple scenarios. You might want to synchronize distributed applications, or workloads that are globally distributed — for instance, you might have two
different workloads — say, one production database and another database for an analytics workload — and you want to make sure those stay synchronized. For such needs you have several solutions, like Sync, change data capture, change tracking, and transactional replication, and we're going to explore them today. Then, on the business continuity side — this refers to the procedures enabling you to continue operating in the face of disruption, such as a data center outage or an application upgrade — there are solutions there too, but unfortunately we don't have time to go into them today. Then you might want to scale out your read-only workloads: offloading read-only workloads instead of running them on the read-write replica, for performance purposes. And lastly, you might be doing data migrations from on-premises to Azure and want to use some of our technologies for that. Just a quick heads-up: this is in no way a comprehensive list — there are many more replication technologies, both from Microsoft and from great third-party tools out there — but today we're going to focus on these.

Okay, let's get started with change data capture. Change data capture helps you record data modification language (DML) changes. As you can see in this diagram, you first enable change data capture at the level of the database whose changes you want to track. Once it's enabled at the database level, you look at the source tables and enable it at the level of each table you want to track for changes. Once you do that, all the changes that occur in your source tables go to the transaction log, and from there two jobs take over — a capture job and a cleanup job. These are SQL Server Agent jobs in Azure SQL Managed Instance and in the SQL Server version of CDC; we'll get to the Azure SQL Database version soon. The capture job takes changes from the transaction log and adds them to the change table: for every source table you enable for CDC, an associated change table — a CDC table — is created automatically, and all the changes are taken by the capture job from the transaction log and added to that associated CDC table. There's also a cleanup process — a cleanup job, also run by the SQL Server Agent — which cleans these change tables based on a time-based retention policy. Lastly, once all your changes are in the CDC-associated tables, there are query functions you can use to consume the change data from the CDC change tables; afterwards, you can stream these changes to external destinations if you prefer, or keep them in the same database and run analytics workloads on them — we'll look at this in more detail soon.

Okay, so as I was saying, for CDC in Azure SQL Database — which is in public preview as of this summer — we have a CDC scheduler that replaces the SQL Server Agent: the cleanup and capture jobs are replaced by the CDC scheduler, which automatically runs cleanup and capture on your database, so you can safely assume they're running in the background. However, if you still want to manually execute a scan or a cleanup, you can use the same stored procedures on demand. That's the main — and only — difference for change data capture in Azure SQL Database relative to CDC on SQL Server or Azure SQL Managed Instance. Here are some of the key CDC use cases: you might want to track data changes for audit purposes; you might want to propagate changes to other downstream subscribers; you can do ETL operations to move the changes from your OLTP system to a data lake or data warehouse — and if you want to move the changes downstream like that, you might decide to use Azure Data Factory, for instance. You might also want to
perform analytics on change data, or do event-based programming to get near-instantaneous responses to data changes — for example, an application that does dynamic pricing would find this very helpful. Great — so, bringing it all together for CDC, there are some key concepts worth understanding, and afterwards we'll go into a demo. First of all, as I was saying, we have the capture and cleanup processes: the capture one scans the log for new change data, while the cleanup one cleans the change tables based on the retention policy; you can also run these manually, or they're run by the scheduler if you're on Azure SQL Database. Then you have the change tables: for every source table enabled for CDC, an associated change table is created. For monitoring, there are two DMVs you can use to monitor CDC. Then you have table-valued functions: these let you take the changes over a specified range and return them as a filtered result set — you get all changes by default, and you can also enable net changes; we'll see in the demo how to do that. And lastly, we have log sequence numbers (LSNs), which are really important: the LSN identifies changes that were committed within the same transaction and orders those transactions — that's what CDC uses on the transaction log side.

Okay, great. If you want to actually enable CDC on your Azure SQL databases, as I was mentioning, you first enable it at the database level, and once you do, five artifacts — five system tables — get created in that same database. So it's very important, before you start using CDC, to make sure you have enough space, given all these additional artifacts CDC creates in your database. Also, something to note for Azure SQL Database: CDC won't work if your database is below the S3 tier — for instance, it won't work on a Basic-tier database. Once you enable CDC at the database level, you get these five system tables: the list of captured columns, the list of tables enabled for capture, the DDL history table, the index columns associated with the change tables, and the mapping of LSNs to time. Once CDC is enabled on the database, you also enable it at the table level — this is what you would run to do that — and the moment you do, your capture and cleanup get created, and there's a change table tracking changes on your source table. If you do this on SQL Server or Azure SQL Managed Instance, you must ensure that SQL Server Agent is enabled, so these two jobs can run.

Great, so now let's do a demo of CDC on Azure SQL Database, which as of now is in public preview. Here you can see I have a test database with just one table — a customers table — with one record in it. I enable the database for change data capture, as you can see, and now I want to show you that the system tables get created the moment you do that: you have five additional system tables — the captured-columns table (because you can select which columns to enable CDC for, and we'll do that soon), the change tables, the DDL history for DDL changes done in the past, the index columns, and the LSN-to-time mapping. Obviously these are mostly empty, because we've just started. Now let's look at our customers source table, which we want to enable for CDC as well — it's just one very simple record. I want to enable CDC at the level of this table, so I run this, and as you can see, this is also where you decide whether you want to support net changes (you automatically get all changes by default), and here you can select which columns to capture.
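The enable-and-consume cycle Mara walks through here can be sketched end to end in T-SQL. This is a hedged sketch rather than her demo script: `dbo.Customers` and its default capture instance name `dbo_Customers` are hypothetical stand-ins for whatever source table you track.

```sql
-- 1. Enable CDC at the database level (requires db_owner; on Azure SQL
--    Database the tier must be S3 or higher).
EXEC sys.sp_cdc_enable_db;

-- 2. Enable CDC per source table. This creates the associated change table
--    (cdc.dbo_Customers_CT by default) and the capture/cleanup jobs.
EXEC sys.sp_cdc_enable_table
    @source_schema        = N'dbo',
    @source_name          = N'Customers',
    @role_name            = NULL,   -- no gating role required to read changes
    @supports_net_changes = 1;      -- needs a primary key or unique index

-- 3. Read changes over an LSN range with the generated query functions.
DECLARE @from_lsn binary(10) = sys.fn_cdc_get_min_lsn(N'dbo_Customers');
DECLARE @to_lsn   binary(10) = sys.fn_cdc_get_max_lsn();

-- Every change row; __$operation: 1=delete, 2=insert, 3=before-update
-- image, 4=after-update image.
SELECT *
FROM cdc.fn_cdc_get_all_changes_dbo_Customers(@from_lsn, @to_lsn, N'all');

-- One net row per changed key (available because of @supports_net_changes).
SELECT *
FROM cdc.fn_cdc_get_net_changes_dbo_Customers(@from_lsn, @to_lsn, N'all');

-- 4. Tune the cleanup job's time-based retention (the value is in minutes).
EXEC sys.sp_cdc_change_job @job_type = N'cleanup', @retention = 2880;

-- 5. Disable CDC: at the table level, or simply at the database level,
--    which removes everything in one go.
EXEC sys.sp_cdc_disable_table
    @source_schema    = N'dbo',
    @source_name      = N'Customers',
    @capture_instance = N'dbo_Customers';
EXEC sys.sp_cdc_disable_db;
```

For completeness: the two monitoring DMVs the session alludes to are `sys.dm_cdc_log_scan_sessions` and `sys.dm_cdc_errors`, and the flag used in the demo to find which tables are CDC-enabled is the `is_tracked_by_cdc` column of `sys.tables`. Time-based ranges can be converted to LSNs with `sys.fn_cdc_map_time_to_lsn`.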
In my case, I want to track all columns. Once I do this, you can see a new change table, cdc.dbo_customers_CT, created from the associated source table that was enabled, and you can also see the jobs showing up — the capture and cleanup jobs — along with when the table was created and the default values of the job parameters, which you can change; we'll do that soon. Now I insert a new customer into my source table and watch it get picked up by change data capture — and you can see it happens automatically: the scheduler runs the scan automatically, takes the changes from the transaction log, and adds them to the associated CDC table. You can still run the scan manually if you prefer, and that works too. And here, as I was saying, I want to change some of the default CDC parameters — let's say I want the retention policy changed. I just did that, and if you go back to the CDC jobs table, you can see it has been changed there; so, based on your needs, you can alter these CDC parameters. Now, very simply, I want to disable CDC on my table. First I check which tables are enabled for CDC, in case I had more — that's why I run this command; there's a flag identifying tables tracked by CDC. I find the table, disable CDC at the table level, and then at the database level — though you can also just disable CDC at the database level directly, and that's fine; you don't have to do both. And that's pretty much what I wanted to show you — an idea of how CDC works, on a very simple example, on Azure SQL Database.

Okay, let's move on. Now we're going to look at SQL Data Sync, a different replication option, which allows uni-directional but also bi-directional replication. It lets you synchronize data, and the whole concept revolves around the idea of a sync group, which consists of a hub database and potentially one or more member databases. The one requirement is that your hub database is always an Azure SQL database; your member databases can be either on Azure SQL DB or on SQL Server on-premises, for instance — it doesn't really matter where a member DB lives, but the hub always has to be an Azure SQL database. There are some key concepts and properties for sync groups. First, you have the sync schema, which specifies which tables will be synchronized within a database. Then you can choose the sync direction — whether you want one-directional or bi-directional replication — so sync can happen from member to hub, hub to member, or both. What's very important to recognize is that Sync has a hub-and-spoke model: changes from one member reach the other members by first going through the hub; that's how SQL Data Sync works. You can also configure the sync frequency: you might decide to run sync manually, on demand, which is totally fine, or do it automatically — when you configure your sync group in the Azure portal, which you'll see in a second in the demo, you can say "I want sync triggered every 30 seconds," or even every 30 days, and that totally works. It's worth noticing, though, that there is latency for the sync changes on top of the frequency you set: frequency is not the same as latency — frequency is just how often sync is triggered. And lastly, there's a conflict resolution policy: there are two options in Sync, member wins or hub wins, for when you have conflicting data changes. Okay — and as you can see here, there is a local sync agent for SQL Server, so if you have member databases that are
on SQL Server on-premises, you must use this agent. Basically, when the SQL Data Sync service communicates with the agent, it does so over encrypted connections using a unique agent key; the SQL Server databases authenticate the agent using its connection string and agent key, and this design provides a high level of security for your data. One last thing I wanted to mention about Sync: there is also a private link, used by the sync service to connect to your hub and member databases, which offers a secure connection — however, you can only use the Sync private link if your member and hub databases are all hosted on Azure SQL and in the same cloud type, so basically all of them in the public cloud, or all of them in the government cloud. Some common use cases we've seen for Sync: hybrid data synchronization, distributed applications, and globally distributed applications — we already discussed these at the beginning.

Now I'm going to do a demo of the globally distributed replication scenario with Sync. In this demo, we have a business with inventory across two separate regions of the world — one in the US and one in Asia. What I want is to make sure those two databases stay constantly in sync, and I want bi-directional sync: my US database is going to be the hub, the Asia database will be a member, and changes will be shipped from one to the other and back, because it's bi-directional. So let's see how that works. In the Azure portal — just showing you what I have for the demo — the hub database has four items in my US inventory, and the Asia member database has only three; as you can see, I need to synchronize the last item, recently added in the US, over to the Asia database. I go into the hub database to create a new sync group, and I set the sync group name, "US to Asia." It's usually recommended to set up a new metadata database, which has to be in the same region as your hub database — that's something to keep in mind — and for the pricing tier I'll just pick a Basic database, since this is a very small, simple workload, with the default settings. Here I choose whether I want automatic sync — I say yes, triggered at a very short interval of a few seconds, with the hub-wins conflict resolution policy — and I want to use private link for security purposes (you can learn more about it here), and I create the sync group. Very important: for your sync group to be created, you must manually approve the private link between the hub and the sync service. So we go to the pending private endpoint connection and approve it manually — only once we do that does the sync group get created. Then I want to add members: I log in to my hub database with the server credentials, and I add a member database, which in my case is also on Azure, in Asia. I select the server the database belongs to, select the database, and choose bi-directional sync — hub to member and member to hub — and I log in on that server side as well. Again, I use private link between the sync service and this member database for security reasons, and again I have to approve it manually before the member gets added to the sync group. Because of that, I'll show you a different way to approve a private endpoint connection in the Azure portal: go to the server side for that Asia database, go into private endpoint connections, and there you'll see all your pending ones — I approve this one manually so I can continue using Sync. Great, all good — this takes a bit of time. Once I've created the sync group, with hub and member, I can
go and select which tables I want to synchronize. I refresh the schema, and as you can see, for our simple workload we have just the one table, with three columns and a primary key — I select it for synchronization, and then I do the same on the member database side; it's loading, same thing. And now I get to the sync group: it has been created, it shows up in my sync groups, it has two databases — member and hub — and one table being synchronized across them, and you can see some logs in the portal for monitoring purposes. Now I want to see whether my additional inventory item from the US has actually been synchronized to Asia — that should already have happened, since I set up such a short sync frequency of a few seconds. So I access both databases, and we expect to see the same items in both. As you can see — and similar to change data capture — Sync creates some artifact tables on your source database, so you must make sure you have enough space for that. We're on the member side, because we want to see whether that fourth inventory item from the US — the melon — has shown up, with its price. And it seems it did, so both databases are in sync right now, which is great. We've just seen sync go from hub to member, from the US to Asia; now I also want member to hub, since I set up bi-directional sync. So I insert a new item — a dragon fruit — in the Asia database; however, I made a spelling mistake, so we'll have to fix that, yes, as expected. Now let's see whether this gets replicated back to the hub in the US. As you can see, it did not show up immediately — there is sync latency as well, apart from the frequency — so we wait a few seconds, go back to the US, and indeed the dragon fruit shows up here. Perfect — we have bi-directional replication working between the two regions.
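The verification side of this demo is plain T-SQL, so it can be sketched roughly as below. This assumes a hypothetical `dbo.Inventory` table shaped like the demo's (item name plus price; Sync itself also requires a primary key on synced tables, and it is configured in the portal, not in T-SQL):

```sql
-- On the Asia member database: add the new item (with the demo's typo),
-- then fix it. Both changes flow member -> hub on a later sync run.
INSERT INTO dbo.Inventory (ItemName, Price)
VALUES (N'Dargon fruit', 5.00);

UPDATE dbo.Inventory
SET ItemName = N'Dragon fruit'
WHERE ItemName = N'Dargon fruit';

-- On the US hub database, after waiting out the sync interval PLUS the
-- sync latency (frequency alone is not a delivery guarantee):
SELECT ItemName, Price
FROM dbo.Inventory
ORDER BY ItemName;
```

The comment on the final query is the practical takeaway from Mara's frequency-versus-latency point: a row inserted on a member is only visible on the hub after a sync run has both been triggered and finished propagating.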
Asia and the US, and it all works — so let's move on. Okay, one other replication type I wanted to discuss today is change tracking. Change tracking lets you record which rows in a table were changed, but without capturing the actual data that was changed. If you remember, with CDC you also capture the historical data changes; with change tracking, as you can see here, you only see what type of DML change occurred on each of these rows. That's essentially how change tracking works: it records that rows in a table changed, without capturing the changed data itself. There are some key change tracking concepts. You have the change tracking table and — similar to CDC — change tracking query functions, which supply the details of the changes in an easily consumed relational format. For instance, CHANGETABLE(CHANGES ...) returns information about all the changes to a table that have occurred since a specified version, and you can use the CHANGETABLE(VERSION ...) function to return the latest change tracking information for a specified row. Then there's auto-cleanup: a task that scans user databases to identify change-tracking-enabled databases and, based on the retention policy — which again is time-based — purges expired records from each internal on-disk table. And lastly, you have the change tracking current version: every time a user accesses a table, they can ask for the version number active at that moment; you keep that version number safely stored, and the next time you query change tracking, you use it to get the changes since then. People sometimes mix up change tracking and change data capture because they are very similar, but they have some key differences I wanted to highlight.
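The change tracking concepts above map to T-SQL roughly as follows — again a sketch, reusing the hypothetical `dbo.Customers` table and assuming `CustomerId` as its primary key:

```sql
-- Enable change tracking on the database, with the time-based retention
-- policy that the auto-cleanup task uses to purge expired rows.
ALTER DATABASE CURRENT
SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);

-- Then enable it per table.
ALTER TABLE dbo.Customers
ENABLE CHANGE_TRACKING WITH (TRACK_COLUMNS_UPDATED = ON);

-- Store the current version before you start consuming changes...
DECLARE @last_sync_version bigint = CHANGE_TRACKING_CURRENT_VERSION();

-- ...and later ask only for what changed since that version. Note you get
-- back the keys and the operation (I/U/D), not the changed data itself.
SELECT ct.CustomerId, ct.SYS_CHANGE_OPERATION, ct.SYS_CHANGE_VERSION
FROM CHANGETABLE(CHANGES dbo.Customers, @last_sync_version) AS ct;
```

The row-level variant mentioned in the session would look like `CHANGETABLE(VERSION dbo.Customers, (CustomerId), (1))`, which returns the latest change tracking information for the single row whose key is 1.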
For change tracking: it shows which rows have been inserted, updated, or deleted, plus the net changes, based on the version number, as I was mentioning. Change data capture, however, keeps the historical data changes, and because of that, on the performance side it consumes more overhead. As for similarities: both change tracking and CDC are enabled at the database level and then at the table level. — Maybe you'd like to wrap up a little bit? — Yes, I'm wrapping up. — Oh, we have 30 minutes, right? So we're over by five. — Go for five minutes. — My slides were saying 29 minutes, because I was tracking — I'll wrap up very quickly, sorry. And yes: both change tracking and CDC can be enabled on the same database, no special considerations required. Great — here you can see how you enable change tracking; I believe these slides will be shared, so you can go through them. Transactional replication we don't have time to go through, but the key detail is that it has a Snapshot Agent, a Log Reader Agent, and a Distribution Agent, so it's quite different from what we've seen in Sync, change data capture, and change tracking, and these are some considerations to keep in mind for transactional replication. So this is what we discussed today, these are the scenarios we looked at, briefly, with some of the solutions — hopefully the demos and all the info were helpful to you. If you have any questions, feel free to reach out, and that's all — thank you!

Mara, that was an absolutely amazing session. I don't think there are many sessions on the internet about data replication in Azure SQL, so I'm sure the people watching live right now — and those who find it once we archive it on Channel 9 and on the website — will learn a lot from it. And I personally feel I should be able to go and deliver this session myself next time, because I have seen it so many
times i'll just let you say next time all right any final thing you want to plug in before we move to the next session oh no this is all great thank you and if there are questions feel free to reach out on linkedin or twitter i'm happy to respond to questions thank you so much i'd love to have you back once again have a nice day ahead and yeah bye take care you too bye-bye all right with that session by mara on azure sql data replication we now move to our next session and i find the title of the next session very interesting you know it's like from oops to ops incident response with notebooks so let's go ahead and welcome shafiq rahman and julie koesmarno for our next session hi everyone thanks for accepting the invitation so julie where are you joining us today from i'm actually currently in san diego so it's very sunny here yeah i can see it and what about you shafiq i'm from redmond washington from microsoft headquarters wow that's a great place to tune in from i won't take much of your time you know we're gonna do chit chat towards the end so take your entire 25 minutes i'm gonna add your slide to the stream everybody can see it and the next 25 minutes is all yours awesome all right thank you simon all right folks let's learn more about incident response with jupyter notebooks so some of you probably have heard me and others talk about jupyter notebooks quite a bit but today we're going to show you how to apply some of the basic things that you have done in jupyter notebooks as well as some of the software engineering practices to be able to apply it in the incident response world so as you may already know the world is getting more complex and systems are getting more complex as well how do we you know think about using troubleshooting guides in a different way because it might be the weakest link on the way to your happy customers all right so here we go so joining me today is shafiq who is my partner who is a software engineering manager at microsoft and so he and i work together in this world of tsg ops and i am julie koesmarno i'm a program manager at microsoft as well all right so this is some of the learning journey today so first of all i wanted to highlight if you actually go to that bitly link you will be able to take a look at the slides today so you can just follow through as i'm going through this with you if you'd like and i'll be updating the session notes as well on that link all right so our four learning items today are the first one is why incident response is so hard so let's talk about the problem statement and then we'll talk about how do we rethink troubleshooting guides in a different light and how do we think about using executable reusable automatable troubleshooting guides with jupyter notebooks and tying it all together so that's our last bit which is a demo and then we'll share with you the resources where you can get started as well now first of all we wanted to set a kind of baseline here so when shafiq and i talk about troubleshooting guides or tsg it really does mean something similar to a playbook and runbook or knowledge base so we don't distinguish them just for simplicity purposes but at times you would have to kind of think about it in two different lights which is you know steps to identify issues or procedures to achieve a specific outcome or essentially maybe how to mitigate as an example but in this context let's just refer to them as just tsg to make it simple and if you are new to notebooks check out other videos that i have created before for other conferences also if you're new to parameterization in notebooks i do also highly recommend checking out aaron nelson's parameterization in azure data studio he presented it at data exposed a really good coverage on
how to do parameterization in azure data studio for notebooks so if you're pretty new to notebooks this is the screen just showing a quick kind of overview of what you can do with notebooks essentially you know code and then the results and then documentation as well in one place so let's step back a little bit here so what is the problem that we're trying to solve in the incident response sort of world now firstly i'd like to begin with a question where do you store your troubleshooting guides or your knowledge base today whenever you have you know an incident response how do you go back to that documentation that helps you to troubleshoot issues so how do you store it do you store it in onenote do you store it in pdfs do you store it in a wiki do you store them as scripts and where do you store them is it in sharepoint is it actually version controlled is it on a network drive and then have you thought about the permission aspect of it the confidentiality sort of policy around it how about securing it and lastly how do you think about making it recoverable like what if the troubleshooting guides are gone how do you recover from that so these are some of the big questions that are sometimes quite forgotten but you should really look into them again especially in this you know 24 7 sort of world it's definitely worth revisiting that and there are two sets of keystrokes that are very often used and it actually can be quite dangerous which is control c control v so you do a lot of copying of code from one place perhaps your onenote or other places and then paste it to say for example sql server management studio or azure data studio etc so a lot of copying this code and then modifying so essentially in the sre or site reliability engineering world or devops world this is considered toil so it sounds subtle but it is actually very error-prone so something to consider there so if i were to summarize the troubleshooting guides challenges today it is the fact that you do a lot of copying and pasting with the static troubleshooting guides it's hard to discover it's hard to keep track of changes and think about the quality as well is it testable what if you actually ship a new system is your troubleshooting guide actually testable or in parity with that new version and what if you want to crowdsource troubleshooting guides and is it easy to search and it's actually also not reusable and also not automatable if you were looking at the state of devops research from 2019 there was a section about productivity and juggling work i thought this was absolutely relevant to where we are today as well in 2021 so essentially what they're saying in that research is that reducing toil is actually quite important and we want to make the work in this you know ops world to be more repeatable more consistent more scalable and auditable and that resonates with what we do here today in the tsg ops methodology i highly recommend checking out that study by the way it's a really good read all right so what if we take a step forward a little bit here and look at troubleshooting guides in a different light which is essentially looking at them from a software artifacts point of view or software engineering point of view so if we can make tsgs your troubleshooting guides content software artifacts what that really means is you can make them executable you can make them reusable and you can make them automatable as well so if they are software artifacts then that means you can also potentially get the goodness of auto build or do testing as well right and actually that's the world where shafiq is in today so that's why i'm so excited to have him talk more about how the tsg engineering loop works in a moment before we get there let's pause a little bit on the tsg
characteristics that we ideally think would work well so we like the aspect that you know the troubleshooting guides usually have documentation and the fact that they have code but we want them to be executable we want them to be auditable so that means you want to be able to include results perhaps visualization perhaps some analysis and interpretation and this is where notebooks really shine so imagine putting troubleshooting guides as software artifacts with notebooks and this is a short plug for azure data studio by the way as you may already know azure data studio actually supports jupyter notebooks so if you want to try that just go to aka.ms azure data studio but feel free to use any other jupyter notebook ide that works for you for us azure data studio works really really well because it allows us to connect to other data sources like sql server azure sql postgres azure data explorer and more so for troubleshooting especially in the data world it's really ideal for a lot of users today all right with that i'm going to hand it to shafiq thank you julie so as troubleshooting guides are very important artifacts to be able to deliver on the promises of reliability supportability and all the service level agreements you have we believe that we should have an engineering discipline behind them like if you think of onenote or word documents they're just stored there there's not much engineering discipline like there's no testing behind them there's not much review behind them so if you look at the diagram on the right those are a couple of loops we've identified where we think this is how troubleshooting guides are used and that's what we've seen on our teams so you add or update a troubleshooting guide now if you think about notebooks you would add or change the notebook now you can check it in and we are checking it into a github repo or a git repository so that allows people to review the changes and approve them and then we also believe in testing the troubleshooting guides and once it's all checked in and ready and tested you can manually execute it that's the inner loop you're seeing there so the developer when they're handling an incident would go and manually execute it and then as you do with incidents and tickets if you have postmortem meetings that's where you would also identify improvements you need to make to your tsgs or new tsgs which need to be added and you can create tasks for that and keep doing this loop and improving your process over time now the outer loop you're seeing there is about automation so once you've tested you could also automate things like deploy to an execution framework and then use those same notebooks you're using manually to do things like auto detect incidents or do auto root cause analysis and things like that right and so notebooks are a simple format it's a json document and you're using git for version control so it's a very simple system very flexible you can take it to any environment you have if you have some secure environments or if you have environments which are only on-prem without online access this can be modified to suit your needs and notebooks are easily shareable and the beauty of notebooks is they also store the results in them so once you execute something let's say there have been failures or successes those executions become an inherent part of the notebook when you save it and share it with people so that allows you to do debugging or forensic analysis of what happened while the incident was being mitigated so i'll give it back to you julie awesome all right thanks shafiq okay let's go to the next slides all right so how do we think about executable reusable automatable tsgs with jupyter notebooks here so essentially like we've talked about before if we can build from the executable we can achieve that
reusable and we can achieve that automatable and therefore it's going to be less toil so reduced manual execution right so when we think about executable you can think of it as simply it has to be able to be run in a user interface so that way you can develop your content your troubleshooting guides and you can also test it manually you can you know run it and it actually works and this is where notebooks and azure data studio work really well because you can do that and then the second one is being able to parameterize it so if you want to be able to make it a little bit more scalable it has to be parameterizable what that really means is say if you have a troubleshooting guide for a specific sql data source as an example it should also be able to be run against other data sources so you should be able to parameterize that and jupyter notebooks support that and azure data studio supports you to be able to parameterize your notebooks as well if you're running powershell or python and then the automation aspect of it is actually fairly simple today and it's going to be a lot better hopefully in the coming months so the first one is if you are using sql notebooks today and you want to run them against multiple servers or databases i would highly recommend checking out the invoke-sqlnotebook cmdlet which means you can run it in powershell and then if you are using powershell notebooks then i would highly recommend using invoke-executenotebook and that allows you to run powershell notebooks and then lastly if you're using python then papermill is the sort of go-to standard so essentially if you are using powershell or python today you're actually quite a way there as in you're almost set so essentially now when we think about automatable the question is what systems or automation solutions support powershell or python these are pretty popular languages right so you'll be able to kind of invoke these cmdlets or use papermill as an example the papermill package so i would highly recommend trying this out as well if you're new to automation or new to trying to automate notebooks all right so what have we covered today so troubleshooting guides could be a way for you to kind of improve your incident response today like rethinking how you approach troubleshooting guides using software engineering practices like shafiq has mentioned before thinking about troubleshooting guides as software artifacts using jupyter notebooks and building up that executable to reusable to automatable troubleshooting guides so next then let's wrap it up with what can we automate and then some demo all right so there are two types of automations that you can do so in the incident response world you have the diagnosis aspect and then you also have the detection and mitigation so with that i'll let shafiq go into some more on the automation aspect that we've thought about in tsg ops and we've also implemented it too so on the next slide you'll see the workflows in this diagram you'll see what auto diagnose and rca means so when a ticket is filed this is all automation without any human intervention so a notebook is executed you can run queries on your telemetry or even make some rest api calls to your service to get some information and enhance the ticket so you're getting information and adding it back to the ticket so what happens is when the developer or dri opens the ticket the information is ready for them so julie was mentioning a bunch of toil in the past so what this does is remove a bunch of the time or minutes you spend in the beginning of your investigation going to a bunch of systems running queries it's all automatically handled for you and this makes a huge difference in your time
to mitigate and time to detect and investigate so for this kind of workflow it's very simple you need only read-only access so the environment executing the notebook just needs read-only access to logs and read-only access to service endpoints and maybe write access to the ticketing system so that it can write back then the other thing julie mentioned was auto detection and auto mitigation so over here this is more a flow which is working on a timer so from time to time the system kicks off a notebook which can detect a problem like it could run a query and if it detects a problem it can go and log a failure and in addition to that the next step it could check is is there a safe mitigation and if it knows that then it could perform the action to take care of the incident so this is the happy path where everything is handled no human is involved and your system is back up and running now if you do not detect a safe mitigation you can file a ticket that becomes the auto detect scenario and if you don't find a problem you're also logging a success and you're getting some telemetry saying hey the system is looking good so in this scenario you need a little more permissions like you need read write permissions to logs because you're writing back and even on the service endpoint you need some write permissions so that you can take some actions to fix the service so over here you have to be a little careful where you have to identify actions which are safe and only do those because you don't want to open your system up because if you have issues or bugs in your tsgs you don't want them bringing down your system so a little more care needs to be taken for auto mitigation but it is possible to identify some high impact incidents and auto mitigate them so back to you julie all right thank you okay let's go through a simple demo to illustrate how it actually can work from the incident response sort of ticketing system or support case system all the way to the execution of notebooks so before i start with this let me show you one thing here so i have this db diagnostics notebook which i actually borrowed from glenn berry's azure sql database diagnostic information queries so i've taken some excerpts of this notebook and essentially think hey if there is an incident assigned to me i want this diagnostics notebook to be run before i read it so essentially pre-execute this so that i don't have to execute it manually i don't have to do that copying and pasting of code right so that's what i aim to do and in fact let me just switch over to here so i actually have published that db diagnostics .ipynb file to my github so i'm hoping that the incident ticketing system can actually pick up this notebook from github and then just run it and then when it's assigned to me i can just read it right i can read the outcome so as you can see here just kind of showing you there are no result sets the notebook is just kind of clean so there are no results it's just a bunch of code like such all right so what i wanted to do is let me just go back to powerpoint here when there is an incident or a support case that's assigned to me it will automatically execute the notebook and then it will also post the link of the notebook outcome so this is the pre-executed notebook with the results on the ticket and then hopefully it also gets emailed to me how wonderful is that or maybe not so much because you know your inbox might be pretty full but at least the notebook is being pre-executed right so let's go here a little bit just to kind of make it as real as we can never mind i was gonna make the browser a little bit bigger but it didn't let me anyhow
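as a minimal python sketch of the pattern being demoed here — pre-parse connection details from the ticket title, then pre-execute the tsg notebook — the helper name and the title format below are assumptions, and the actual demo uses a powershell runbook with invoke-sqlnotebook rather than papermill:

```python
import re

def parse_ticket_title(title):
    """Extract server and database from a ticket title.

    Assumes a hypothetical convention like
    'slow query [server=sql01;database=sales]' -- the real format
    is whatever your ticketing system puts in the title.
    """
    m = re.search(r"\[server=(?P<server>[^;]+);database=(?P<database>[^\]]+)\]", title)
    if not m:
        raise ValueError(f"no server/database found in title: {title!r}")
    return m.group("server"), m.group("database")

# after parsing, a runbook would fetch the tsg notebook from github and
# pre-execute it with the parsed parameters, e.g. via papermill (not run here):
#   import papermill as pm
#   pm.execute_notebook("dbdiagnostics.ipynb", "out.ipynb",
#                       parameters={"server": server, "database": database})

server, database = parse_ticket_title("slow query [server=sql01;database=sales]")
print(server, database)  # -> sql01 sales
```

the point is that by the time the ticket reaches a person, the notebook has already been run with the right parameters, so no copy-pasting of code is needed.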
i do have a ticket here which is called slow query it hasn't been assigned to anybody so i'm just opening it and it looks like it hasn't been assigned to anybody the notes here just say cloud summit test and that's pretty much it so now i'm going to assign it to me and then i'm just going to close this for a moment and then we'll get back to it because the execution should take a couple of minutes so what happens in the back end is essentially when that ticket that i just showed you so i'm using planner kind of to illustrate an incident response system or support case system right so in the back end when that ticket is assigned to me it's going to trigger this azure logic apps workflow which will run azure automation which will fetch this tsg notebook from github and then run it and then store it into azure blob storage so that i can view it later so with that let's take a look inside the azure logic apps workflow itself so this is what we have today so i have a demo tsg workflow a logic app and if i click on edit it will show me the flow so it will say that when a task is assigned to me on planner then i'm going to create a new instance of an azure automation job so i have my automation account execute notebook here and i have the runbook named invoke-sqlnotebook the magic actually happens in this invoke-sqlnotebook runbook which i will show you momentarily and what it's going to do is pass the title of that ticket into the invoke-sqlnotebook runbook as a parameter if you recall earlier the title of my ticket actually has a server name and database name right so it's going to parse that and then after that it's just going to update the planner task you know with some content here and then lastly it is going to send me an email so it's going to send me an email with a link to the planner and the link to the notebook itself so now let's dig into the invoke-sqlnotebook runbook which is part of the azure automation it's essentially just powershell so if i click on edit here it's just powershell so it takes the title parameter it's just doing some kind of fancy extraction of the server name and database name and then after that it's going to go to my github account in a moment there it is i'm going to my github account here and yeah just fetching it so that i can execute it with the invoke-sqlnotebook cmdlet all right so yeah so that's going to run against the server name and database name and then afterwards it's going to post it to the azure storage account so the rest of the code is essentially to post it to the azure storage account all right so what you've seen today is essentially someone it just happened to be me at the time but someone else could write a tsg notebook and publish it on github so that if say for example a support person or a team creates a task and assigns it to a dba it will invoke all this beautiful workflow and run the appropriate notebooks and so when it gets assigned to somebody say for example gloria or myself it's going to have all the details that are needed so with that i'm just going to show you quickly looks like it's done so take a look here a new notebook has just been created and then last but not least because this is probably the key thing that we wanted to show you essentially hey i've got an email it just got sent to me local time is 9 43 so if i click on that it will say do you want to open it in azure data studio and i say yes and then click open yeah download it so it's just opening the pre-executed notebook so if i scroll down then i can see that certain cells have been actually all the cells have been run so i can start doing the diagnosis so essentially less copying and pasting everything is
pre-run for me so we covered this a little bit before so i'm just going to speed it up since we're actually running out of time and i want to be respectful of the next speaker as well so essentially you can have an author and you can have a consumer and there is the pipeline of the tsg repo and validation that you can use and this is super important especially if you have a pipeline that can do a credential scan for passwords etc that might actually be accidentally included in your troubleshooting guides that should be something that you want to consider so yeah today i showed you a github repo and then the test workflows using planner and azure logic apps all right so i just wanted to wrap up with the useful resources or some of the things that you can do today so first is learn jupyter notebooks and if you're really really new to jupyter notebooks and really new to python i would highly recommend using azure data studio that's how i also got started like really diving into notebooks more and then think about how you can format your tsgs to be more executable and reusable and then think about automation as well because this is the part where it will save you time and hopefully give you that you know winning advantage over other companies or competitors as an example so just to wrap up i've got a bunch of references that i have shared here on the slides that you can take a look at in your you know free time i would like to especially thank the community because without them this wouldn't happen as well because for example glenn berry's diagnostic notebooks have been super helpful for a lot of people rob sewell has talked about jupyter notebooks as well doug finke talked about how to use powershell notebooks in fact he is the creator of the powershell notebook module with invoke-executenotebook and lastly emanuel also has created this sql diagnostic jupyter book so with that i think i'm going to end i might actually if i can go to let's see i wanted to show you this because in case you missed the slides go to this bitly link because it will contain the updated slide deck in a few minutes as well as all the session notes and follow me on twitter for any additional updates as well which is mssqlgirl all right thank you all right that was a great session you know i've always used notebooks just for writing some code but it looks like actually azure data studio is on steroids i mean you guys and girls have built such a great product my only quick question is how do you actually decide what new feature to go ahead and add i host many program managers product managers everyone follows a very different approach right so what approach do you follow because i see the jupyter notebook is something that is built on top right it is used very widely so how do you guys go ahead and say okay this is the feature we are going to build this quarter yeah great question so we take feedback from users so if you go to github microsoft azure data studio unfortunately i don't have the link handy but we are on github so you can actually raise issues there so we triage them we take a look at them we also present at you know places like this and then hear your feedback as well and we also do user research to decide you know which way should we go etc we have a lot of user base external as well as internal so yeah lots of great feedback from folks and keep them coming because that's the only way we can improve the product for sure all right that's absolutely great so we'll move to the next session now any final thing shafiq and julie want to plug in just wanted to say thank you simon for having us and hopefully we'll see you again next time and yeah don't forget to check out azure data studio yeah i'm here thank you simon yeah thank you
shafiq and thank you julie i believe azure data studio is also on twitter right so go ahead and follow them for the latest updates thank you both it's really been a joy hosting we'd love to have you back once again hopefully in person i hope i get the visa but until then take good care of yourself have a nice day ahead and see ya bye see you bye all right with that we now move to the final session of the day and i'm really excited for this one because i'm hosting shannon for the very first time and she's gonna talk about azure vmware solution overview and i have to be very very honest i did see one of her tweets where i always go ahead and type the vmware in the wrong way i still do it even when i was reading her session okay okay that's how we write it the first two letters are capital all right so let's welcome our next speaker shannon to talk about azure vmware solution overview welcome to cloud summit hi shannon how are you i'm well how are you i'm doing great i love your background there's so many posters what are they about so i'm a dj right that's how i got into technology and so a lot of it is concert stubs on the door a handful of posters of some of my favorite musical artists and none of them well i guess outside of the smashing pumpkins none of them are american-based bands so it's just kind of an accumulation over the course of probably my 20s and 30s just picking up you know memorabilia at concerts and things of that sort my twenties have gone without doing all these live shows so i only have the screenshots with me that's good that's perfectly fine some of these screenshots are great though they're really great background stuff you know i'll try to put some screenshots i know we're already 10 minutes into your session but it's all yours 25 30 minutes is all yours so thank you i'm going to add your screenshot to the stream everybody can see it and the next 25 30 minutes is all yours perfect thank you so much simon i appreciate it so hi everyone i'm shannon keane and yes it's pronounced keen it doesn't look like it i'm a senior cloud advocate at microsoft in my role i spend a lot of time talking about azure infrastructure and one of the big topics that i've been focusing on specifically the last year and a half has been azure vmware solution and yes a lot of people do capitalize the w it's not capitalized so if you want to make it seem like you know vmware just make sure that you spell it capital v capital m w-a-r-e so you know if you want to connect with me as time unfolds i tend to share a lot of good information on twitter and linkedin so these are great ways to get in touch with me my dms are always open as well whether that be on linkedin or twitter so if you've got questions or you want to understand something a little bit better feel free to connect with me and write me something if you don't want to be more public and ask me the questions online where everybody else can see so yeah thanks for having me simon i definitely appreciate it it's been a weird year and a half it keeps moving longer and longer so these virtual things are always kind of a fun reality so let's start with the introduction and the background of the service i think that's going to make a lot of sense here as folks become more familiar with the solution so azure vmware solution delivers a comprehensive vmware environment as a service this allows customers to run vmware native workloads exactly as they run on premises except it's in azure so customers get a chance to capitalize on their existing vmware investments skills and tools so all of that time customers have spent learning vcenter vsan vsphere you can take that and plug it into azure this option provides you with the most symmetry in terms of your on-premises workloads so most of our customers are running the version that gets deployed with azure vmware solution on-premises so you can take your existing
applications and you get to have that familiar platform and it's a great win-win for customers that have been a little hesitant about moving towards the cloud this is essentially running vmware natively on azure so you've got your on-premises data center which is comprised of vcenter vsphere vsan and nsx you connect that into azure the recommended pattern is to connect it with an express route circuit you'll then deploy the software defined data center or sddc you'll get vcenter vsphere vsan nsx and you can even think about extending what you're doing with site recovery manager or srm that reached general availability back in july so if you aren't yet ready to even migrate your on-premises workloads you could think about azure being a target for your disaster recovery environment the minimum deployment for azure vmware solution is three dedicated bare metal nodes those are your dedicated servers nobody else has access to them it's not logically separated like most of azure with that deployment comes an express route circuit that express route circuit gets peered into an azure vnet and that's what gives you access to the microsoft azure backbone network once all of this is connected and i think the hardest part is network connectivity you can start onboarding azure services like azure active directory any of the azure security services azure sql database azure monitor you can think about extending what you're doing with an application gateway or even artificial intelligence we had a first iteration of this back in i want to say it was late 2018 early 2019.
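as a hypothetical sketch of the minimum deployment just described (three dedicated bare-metal nodes in one cluster, with an express route circuit created for you), here is roughly what it could look like with the azure cli vmware extension — the resource names, region, and address block below are made up, and this assumes your subscription already has avs quota, so treat it as an illustration rather than a tested script:

```
# hypothetical names; requires the azure cli 'vmware' extension
az extension add --name vmware

# minimum deployment: three bare-metal nodes in a single cluster
az vmware private-cloud create \
  --resource-group avs-rg \
  --name avs-cloud \
  --location westus2 \
  --sku AV36 \
  --cluster-size 3 \
  --network-block 10.10.0.0/22
```

after the private cloud deploys, the generated express route circuit still has to be peered into a vnet with a gateway subnet, as described next in the session.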
we have since gone a different direction and we have this first-party service this reached general availability last september and if you've followed the service over the course of its lifespan at microsoft you'll notice that the first iteration didn't have nearly as many regional availability points hopefully what you're seeing here is that microsoft has invested in making this a global service all of the planned regions are available on our public website and just note that if you are hoping for one of these planned regions to come online and maybe the date slips that is a possibility so just be prepared to work with your microsoft counterparts to understand what the timetables look like if you are thinking about deploying azure vmware solution in one of the planned regions right now if you were to deploy azure vmware solution these are the software specifications and the hardware specifications i won't spend too much time talking about it but this shouldn't feel too foreign most of our customers are running this flavor of vsphere on premises and then in terms of the hardware specification we only have one sku at the present time this could always evolve and change it's going to be an interesting service as the entire ecosystem sort of pivots and looks at embracing this as a way to speed up transformation so the actual nodes themselves are hyper-converged nodes and so you're getting massive compute and massive resources to be able to run your workloads in azure the big part of the slide though i want to call out at the bottom there's a minimum of three nodes per vsphere cluster you can't go lower than that there's a maximum of 12 clusters per private cloud a maximum of 16 nodes per vsphere cluster and then a maximum of 96 nodes per azure private cloud instance hopefully you're seeing that we can think about a lot of different types of scale and you can scale up and scale down on demand let's spend some time talking about the deployment
process next so right now you will deploy the private cloud in azure it's just like a lot of other azure resources in fact i've got a video demo that walks through this you'll create a vnet and then you'll create an azure bastion and a jump host you could always create a jump host with a public ip that has just-in-time access as well bastion's not in every single region just note that you will need some sort of way to access the web gui once azure vmware solution deploys so you'll create a gateway subnet in that vnet and this is to link that expressroute circuit into your azure vnet and after deployment you'll link that sddc to the vnet from there you'll connect to the jump host behind the azure bastion service or you'll enable just-in-time access to get into that server you'll connect to the vcenter environment then you'll connect to the nsx-t environment those credentials are registered with azure you're not putting a password in that's all done via automation by way of deploying the service you'll next enable global reach for on-premises access so the recommended pattern and we'll talk about this is to have your expressroute circuit on premises peered into azure and then you've got an expressroute circuit from azure vmware solution that's peered into azure and you'll enable global reach to handle traffic routing between those two environments the last thing you'll want to do is join the domain and configure your identity source and that is done right now with a support ticket into microsoft just because of the fact that you don't technically have root access to your esxi hosts so let's talk about network architecture because i've hinted that it's probably the hardest thing to wrap your head around so there's two types of connectivity models with azure vmware solution there's basic interconnectivity which is what is shown on the slide this is what happens when you deploy the azure vmware solution service the expressroute circuit gets deployed you have a
virtual network and then you link the expressroute circuit into the azure vnet this is what helps you integrate with azure native services and sometimes customers don't need that hybrid connectivity right away as well so this is what's called basic interconnectivity the networking is interesting so there is a management network that azure vmware solution leans on and it should be a non-overlapping ip address space with your azure vnet as well as on premises it's just like extending your network connectivity if you were moving everything into iaas so the management network itself requires a minimum of a /22 cidr address block you can always make that bigger the recommended pattern though is just to make sure that nothing overlaps and then the corresponding azure vnet won't be an overlapping ip address space and it will have to have that gateway subnet to terminate the expressroute circuit into that azure vnet then there's full interconnectivity and this is what i've talked about right the idea that you need your expressroute circuit from on premises to connect into azure and then you need your expressroute circuit from azure vmware solution to be peered into an azure vnet when you enable global reach that's a fully supported sla-backed guarantee from a migration pattern so if you need to migrate vms from on premises all the way into azure you will need that expressroute circuit if it's a production-based environment i think one of the biggest caveats here is in the next slide there is an opportunity to set up a site-to-site vpn using a virtual wan hub now the virtual wan hub does the same thing that global reach will do it'll handle east-west traffic routing the big piece here though is if you are migrating production workloads and something happens and you call microsoft for support vmware might be engaged and vmware will ask almost immediately is there an expressroute circuit so we always stress that this is something you can do for testing before the
expressroute circuit shows up because sometimes that takes some time or if you just need hybrid connectivity this could be a recommended pattern as well when it comes to the migration side of things let's walk through those steps so you'll want to first assess your vmware environment and you can do that now with azure migrate that reached general availability right around the ignite time frame of this year that will help you identify the workloads to migrate a lot of times customers don't know what they have they've given folks access to build up servers at will and nobody really knows what's showing up so there's a lot of sprawl on premises once you have that assessment once you've identified the workloads you can define the migration approach there's a product that vmware uses called hcx this is the swiss army knife of migration this is what handles the vmotioning of a vm from on premises all the way into azure vmware solution and hcx supports live bulk or cold migration most customers pick live especially if it's a production-based workload because it's minimal downtime but if you do get a change window or maybe there's a dev environment that you need to move over and it's not on all the time those two patterns are also supported this helps you identify the path the steps to full production and that enables you to move to the proof-of-concept side of things this is where you create the private cloud you move a few vms using the preferred migration type and you sort of build up that familiarity as to what azure vmware solution looks like in terms of azure this then helps you transition to production so there's the at-scale migration and adoption right so once everything's been vetted tested and deployed you can start to think about migrating those vms pretty readily and if you don't have the cycles microsoft does have fasttrack and we've got a partner ecosystem that requires
certification from microsoft each year to be able to work with customers to migrate vms into azure vmware solution so let's talk a little bit about management and support next because i think that's where a number of questions still show up so oftentimes we see the shared responsibility matrix for iaas paas and saas this is the shared responsibility matrix for azure vmware solution hopefully what you're seeing here in the slide is that a lot of the burden of responsibility falls on microsoft so you're no longer getting the pages or the emails at 2am when your power supply fails that's usually when it likes to fail 2am or 3am you don't have to worry about patching your esxi hosts you don't have to worry about upgrading the hosts you don't have to worry about physical security microsoft is focusing on that for you this frees up your engineering cycles so you can focus on things like lifecycle management for vms so if you've got older operating systems still in your environment this allows your engineers to start thinking about what an upgrade path would look like to move to a newer os you could even think about bringing on configuration management a lot of customers because they are doing the full gamut of support related to their vmware environments on premises when they transition that burden of responsibility or a good chunk of it over to microsoft they can start to figure out what it would take to bring configuration management into consideration so the support structure is also cool here if you are running azure vmware solution and you run into a problem you are only calling microsoft so microsoft becomes that central point of support and escalation microsoft troubleshoots all of the azure native components is it something with networking is it something with access to the portal right once it has been determined that it's not an azure-specific issue microsoft on
behalf of you will go to vmware and open up a ticket with vmware you don't have to open up a ticket with vmware on your own behalf so then microsoft continues to sort of play point until you reach full resolution in your environment one of the coolest things here is talking through integrations so i often say it's like achievement unlocked because your azure vmware solution environment sits so much closer to the arm apis these are some of the popular azure vmware solution integrations at the moment so azure netapp files for file shares then there's the blob storage components so if you're familiar with content libraries and templatizing your vms you can have that live on an azure blob storage account which is kind of cool you don't have to move any of that to the vsan datastore because once you deploy this you get a vsan datastore and that's a finite resource right unless you add additional nodes then there's iscsi disk pools which is in public preview so it's another way to expand what you're doing related to your storage footprint so if you just need additional storage for a short period of time and don't want to add an additional node you could think about using iscsi disk pools then we've got a lot of customers who are looking at azure traffic manager and application gateway just for highly available highly resilient applications that's hard to do if your environment lives on premises in fact i don't think it's possible so having your azure vmware solution vms sit closer to the arm apis is impactful in the sense that you can make use of these azure native services that don't cost a lot of money just to spin up and to make use of there's support for hub and spoke so if you've got a hub and spoke already deployed in your environment you can extend what you're doing into azure vmware solution and that would just trickle into what you're doing from your full-blown enterprise-scale digital transformation then there's the microsoft azure
backup server one thing to mention here too is there's a number of third-party backup solutions that are also available at the moment so commvault veritas rubrik veeam i'm sure i'm missing one there's about four or five of them that are now certified with more going through the certification process right now with microsoft so you can either use microsoft azure backup server to back up your vms just like if they lived on premises and you were using microsoft azure backup server or you can lean on some of the third-party solutions that might already be in your environment you can onboard azure monitor as well as azure security center and azure sentinel so i think these are also awesome because a lot of times folks don't know what they have they don't know the security posture of their environment so being able to onboard and expand what you're doing from a security perspective alone helps a lot of customers out with making sure that everything's baselined correctly and configured correctly in terms of the resources because i always like to leave folks with a lot of good info a lot of customers and a lot of folks that i've talked to always ask about hands-on well it's a hard service to do hands-on with so microsoft partnered up with vmware to create hands-on labs now if you're familiar with vmware's hands-on labs it requires a registration and then you're able to go get familiarity with whatever service you're looking for it doesn't have to be just azure vmware solution but azure vmware solution has a couple of hands-on labs at the moment these are click-through guided tutorials that help you figure out how to use the service and configure everything correctly the first two labs deal with private cloud deployment and connectivity so you'll go through the notion of building out the azure vmware solution environment and then connecting it into an azure vnet and then the other lab deals with onboarding hcx because remember that's the piece that migrates
everything and establishing a site pairing from on premises into azure and then migrating vms so that's the recommended pattern in terms of being able to migrate vms from on premises all the way into azure if that doesn't work out because some customers get to that point where they realize hcx is not going to work for them for whatever reason there are a number of other ways you can get vms into azure vmware solution but the recommended pattern to follow until something doesn't work is hcx so this lab is really helpful in that aspect microsoft also authored a learn path i authored two of these three modules so these are great ways to understand everything you have to factor in related to planning so the first module deals with what azure vmware solution is it helps somebody out with the value proposition why would you bring this into an environment and it helps you understand how it's positioned how you could position it with your company and hopefully onboard it into your environment the next module deals with deployment so it's all of the prereqs that you have to factor in from a networking perspective from a subscription perspective from a whitelisting perspective so in order to onboard the service you have to whitelist a subscription and then you're able to onboard the environment into your subscription so a lot of these different components are covered in that deployment module and then it'll walk you through what a deployment looks like it'll have you walk through and deploy the azure vmware solution environment connect it into an azure vnet and then you're ready to go for the third module which deals with hcx so that walks through the same sort of prereqs required to establish a site pairing from on premises into azure over your expressroute circuit and then what it looks like to migrate a vm from on premises into azure so i want to walk through the demo next and i want to add a couple of
caveats here so azure vmware solution does take about three and a half to four hours to deploy so rather than keep you on here for three and a half to four hours when we've already been live for a little over three i sped some of this up so i wanted to show you what it was like to build out azure vmware solution how you build it out in the portal so this is a great little video demo and i will just kind of guide what's happening with my voice so you'll want to go look for the azure vmware solution environment and then once it shows up here nothing's been deployed right so you can go and you can click create at the top or click create private cloud in that blue box it doesn't really matter it'll take you to the prereqs page so if you somehow missed the prereqs let's say you just decided you were going to deploy it this would at least take you to where they show up in our documentation right so if you didn't take a look at the hands-on labs or the learn module this would help you out so you need an ea agreement or a csp agreement and then a valid non-overlapping cidr address block and the recommended pattern is a minimum of /22.
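that /22 non-overlap requirement is easy to sanity-check up front with python's standard ipaddress module the address blocks below are made-up examples from this demo's style, not recommendations:

```python
import ipaddress

# Hypothetical address plan (example values only).
avs_block  = ipaddress.ip_network("10.5.0.0/22")     # AVS private cloud /22
azure_vnet = ipaddress.ip_network("10.6.0.0/24")     # vnet with gateway subnet
on_prem    = ipaddress.ip_network("192.168.0.0/16")  # on-premises network

# The private cloud block must be a /22 or larger.
assert avs_block.prefixlen <= 22, "AVS block must be at least a /22"

# None of the three address spaces may overlap.
blocks = [avs_block, azure_vnet, on_prem]
for i, a in enumerate(blocks):
    for b in blocks[i + 1:]:
        assert not a.overlaps(b), f"{a} overlaps {b}"

# The deployment automation carves smaller networks out of the /22 you
# supply; for a feel of the arithmetic, a /22 splits into four /24s.
print(avs_block.num_addresses)  # 1024 addresses in a /22
for subnet in avs_block.subnets(new_prefix=24):
    print(subnet)               # 10.5.0.0/24 through 10.5.3.0/24
```

the exact subnets avs allocates are decided by the platform automation shown later in the demo, this just illustrates why the block has to be big enough and overlap-free before you hit create.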
it could always be bigger if it needs to be so it's just like a lot of other azure resources right tie it to an azure resource group give it a name my specific environment lives in north central and then you'll go down here to select the host again there's only one sku so it's the av36 node and then you'll supply the cidr address block and i picked 10.5.0.0 i mistyped it in the demo but it always works out just fine and then you can go in and you can add some tags so tags are a big deal for folks related to cost management and chargeback you can add tags into your azure vmware solution environment then once you're done you can select review and create and we'll go through the validation process before you hit create you can always go down and evaluate all of the settings you had configured this is always helpful because it does take like i said about four hours to deploy if anything wound up getting missed you can just hit previous and course correct and fix that before you deploy so it's as simple as hitting create which i don't know if you're like me that's a much easier reality than having to build out vcenter vmware environments on premises right so in a matter of four-ish hours you'll have three nodes a number of servers that are involved in the environment and everything's up and functional right so all of that was abstracted for you behind the azure resource manager portal so you see over here on the right all of the 10.5.0.0 addresses that wound up getting carved out by the automation i just supplied that cidr address block and you'll know it worked because you'll see succeeded so while this is building it'll say building so when it says succeeded this is where you need to go in and connect the expressroute circuit into the azure vnet so this is a brand new screen what you wind up doing is you need to have that virtual network that has a gateway subnet you can create a brand new one or in my demo i already have a vnet with a subnet
created you'll select that and it's a matter of going up to the top and hitting save and then a number of automation components get kicked off so go to the same kind of deployment overview and then you'll get a bunch of things like the vnet gateway the local connection point the expressroute auth key etc and you'll know it worked because you'll go down to the connectivity section underneath manage you'll go to the expressroute section and you will see that an auth key has been created and used to connect your expressroute circuit into an azure vnet so this demo encompasses basic interconnectivity there's more involved if you need to figure out how to deploy this with your on-premises environments and if you get to that point you can always reach out you can always talk to your microsoft counterparts there's a number of teams available at this point in time to help you realize the value of this journey and that is the end of what i have to share today so thank you for the opportunity simon i definitely appreciate it i guess i could flip over and see if there's any questions so let's see alright shannon that was great i think it was a very crisp presentation we do have a couple of questions from the comments let's take them yeah i'm seeing it yeah so let's see so it looks like okay so the licensing components are interesting you don't have to add licensing or deal with licensing for vcenter vsan nsx that's all taken care of for you even the add-on of hcx advanced that's taken care of for you if you need site recovery manager that's a byol license and that's something you would work on with vmware but all of that's built into the core package you don't have to worry about licensing which is honestly kind of nice yeah who wants to deal with licensing right i don't i used to always tell people go talk to your licensing people like i don't pay attention to it so that's helpful it's
all included all right so we have one from allen allen is saying shannon can you just confirm that express route is a prerequisite yeah so it's a prereq if you need to migrate your vms from on premises into azure so when we were talking about the different connectivity models full interconnectivity is the one you need if you are migrating vms from on premises all the way into azure vmware solution you could stand up a site-to-site vpn tunnel and you could migrate maybe dev workloads or anything that's non-production based but when it comes to production you want to make sure you've got the expressroute circuit because if you have a problem with migrating a vm from on premises all the way into azure vmware solution and you have to engage microsoft and microsoft has to go engage vmware vmware's first thing they're going to ask is how is that migration pattern happening so in order for them to back that migration pattern with their sla they want that to happen over an expressroute circuit that could change that's just where it stands right now so if you just need site connectivity right you just need hybrid connectivity you're not planning on migrating vms you could have a site-to-site vpn if you're trying to build out what that looks like you're trying to build up the notion of moving it from on premises into azure you could have a site-to-site vpn for kind of lower-hanging workloads something that's not production but when it comes to production and a true migration pattern of all of your heavier production-level workloads you'll want to make sure you have your expressroute circuit peered into azure from on premises that's perfect pradeep says amazing session ah thank you thanks for tuning in i don't see if you have any more questions the other thing people probably ask is is it vmware on azure solution i think yeah so it's called azure vmware solution or avs you might see it referenced inside of our
documentation you'll see it as azure vmware solution we don't abbreviate it it's weird we abbreviate a lot of things but we don't abbreviate azure vmware solution in our documentation so that's the way you'll see it referenced and i think vmware has an offering in most major clouds too so you might see it on aws you might see it in gcp i feel like they're realizing they need to be involved in a lot of different cloud deployments because customers aren't just in one cloud anymore yeah exactly we're moving towards multi-cloud and people have already moved towards multi-cloud right that makes sense all right that was shannon that was absolutely great i don't see any more questions over there any final words or anything you want to plug before we close just thanks for letting me come and present it's always an awesome opportunity to collaborate with somebody that i haven't yet really collaborated with i've just interacted with you on the internet right so it's nice to get a chance to come and collaborate with you and actually participate so thanks for the opportunity and thanks to everybody that tuned in thank you so much shannon thanks for accepting the invitation actually i really appreciate it we would love to have you back whenever you're available i'm your cloud advocate so i don't think they're going to say no to me i love it keep that in mind i put you on the spot right it's recorded all right shannon thank you so much have a nice day ahead we'd love to have you back and yeah take care and we'll see you soon all right take care bye everyone all right now that is the end of day four of cloud summit can you imagine there are still seven days to go days five six seven eight nine ten eleven yeah there's still seven days to go for cloud summit i'd even say that until now we have covered the entire ecosystem of azure but there's still a lot to come and especially on sunday we have an entire day
just dedicated for students how amazing is that we also have the microsoft imagine cup winning team joining us then we have microsoft learn student partners but yeah people oh my goodness shannon has gone but people say wow thanks for the session many thanks for the session great session she was such a treat yeah i really enjoyed hosting i appreciate your time shannon yeah that was absolutely great i mean i would say the rhythm that she had right there was a very good rhythm in that session yeah and you can just go back and watch it we're gonna put all these sessions once again on channel 9 and on the azure summit live website mostly after this summit ends so definitely go and check that out having said that if you're watching us on learn tv we have actually stopped streaming on learn tv because we have hello world going on but we are still live on the microsoft developers youtube channel so what i'm gonna do now is announce the giveaways how interesting is that we have had contests going on so the first giveaway that we are going to do is for the q a that people have asked we have always been telling you that whenever you ask questions to the speakers or share your thoughts use the cloud summit hashtag and now we are going to do a draw right no favorites we're going to do a draw so on the count of three two one let's click on this draw button boom some drum rolls let's see who wins the cloud summit swag kit no microsoft developer you cannot win it oh my goodness oh yeah i thought microsoft developer won it but hey rick intiwari tv congratulations you win the cloud summit swag kit please take a screenshot of this and drop me an email at simon at azure summit dot live and we'll make sure that we ship you the goodies in the next month having said that we do know that we have other goodies to be given away but we will do that later let's go ahead and quickly see what's on the
agenda for day number five okay let's start so on day number five to get started let me go ahead and zoom in okay so day number five september 17 we have as always a welcome note by me so there's nothing much over there we start the sessions at 6 30 a.m eastern time the first session is by tayo ali who is a microsoft mvp and who's going to talk about azure sql database where is my sql agent that's a pretty interesting topic so definitely go ahead and tune in tomorrow and i see comments still coming in saying oh wow people really loved shannon's session wow that's great shannon says thanks simon really glad to participate i'll be looking for your next event i'll get some other goodies to share thanks and i really enjoyed seeing you the next session after that now this is the schedule that i'm showing for tomorrow tomorrow is september 17 and we start at 6 30 a.m eastern time don't worry if you just cannot figure it out just visit azuresummit.live after this event ends and we're gonna update the website with the exact video and the chat so don't worry about that after that we have another sql session that's by christopher most probably that's on sql server and azure and there's so many sessions just around sql now that's amazing and after that we have a 30-minute break then we have tushar kumar who is a cloud solution architect am i showing the wrong day no i think i'm doing it right he's going to talk about has your arm now got biceps this is a very interesting session not a problem all right then for the following session we have mike van der gatt he's going to talk about how you can actually add additional security to your azure paas solutions then again we have a break a little too many breaks tomorrow i must say because tomorrow the day is a little long then we have mustafa toroman who is one of our featured speakers a c-sharp corner mvp and microsoft
mvp who recently moved to a new position he's an absolutely amazing community guy and he's gonna talk about azure devops and i know you all want to learn about azure devops so let's attempt to stay on top of that okay all right so the next session after that is by tanya and if it's tanya you know she's gonna talk about security and that's what it is cloud native security she is ceo and founder of we hack purple and yeah definitely check that session out then we have our next speaker who will talk about azure synapse serverless sql pool building a logical data warehouse over data lakes and databases it's going to be a very good hands-on session i believe so it's a good session that i'm looking forward to tomorrow then we have schweiten lola who is a c-sharp corner mvp data analyst and a full-stack developer and she's going to cover detect and analyze faces using azure cognitive services so we have database sessions we have ai going on we also have sessions around security and then after that we have a session on azure sentinel from the analyst perspective that would be by rod trent rod trent works at microsoft as a senior cloud security advocate and global azure sentinel sme then the final session for tomorrow is by jen who's a data engineer and he's going to talk about getting started with azure synapse so day five is going to be packed with sessions it's going to be a little longer than today today was i believe about eight hours of summit so yeah that's it for day five so for everyone who's watching us on the microsoft developers youtube channel don't worry you'll get the streams tomorrow or you can always visit the azuresummit.live website and we'll update everything over there having said that let's take a minute break and after this minute break if you're watching us on the microsoft developers youtube channel we're gonna stop streaming over there so
yeah if you want to continue watching for another five minutes and see who's going to win the giveaways visit azuresummit.live having said that let's take a minute break and i'll be back very soon hi i'm anna hoffman hey friends i'm nicola hi i am hi i'm tanya janca hello i'm excited to be a speaker at azure summit 2021 it's a fantastic event that will be 11 days of live streaming with more than 100 speakers from all over the world i'm excited to speak at azure summit about power bi and synapse analytics and i'm gonna talk about security and i'm also a microsoft mvp for azure i'm speaking at cloud summit about automated release of dotnet applications and the best part is that this is a free event come join me live come join me live on learn tv on september 14th come join me on learn tv on the 14th of september this year with a bunch of other microsoft and community speakers so if you want to learn how to secure azure come to my talk join us it will be a lot of fun see you there see you there see you there see you there
Info
Channel: Microsoft Developer
Views: 1,747
Rating: 5 out of 5
Id: KnNkR09zloo
Length: 238min 34sec (14314 seconds)
Published: Thu Sep 16 2021