Azure SQL July 2021 New Updates | Data Exposed Live

Captions
Hi, I'm Anna Hoffman, and welcome to this episode of Data Exposed Live. We're super excited to have you join us today, and we hope you all had a nice July 4th wherever you are. Whether you're streaming from Learn TV, Twitch, Twitter, or YouTube, remember to ask questions from wherever you're watching — we're looking at them and may even answer some live, so head over to Learn TV to ask questions and engage with us; it always makes it a little more fun. We have a lot of speakers today, so it's going to be a really exciting episode, and I think you're all going to enjoy learning about what's new across the Azure data suite. With that, let's get right into it.

I wanted to start with some brief product updates, as always. The first big update is about Azure Arc. Just last week there was a big Azure hybrid cloud event, and during that event we announced the general availability — and upcoming general availability — of Azure Arc and a few related services. Some of you have probably been hearing about Azure Arc-enabled data services, like Azure Arc-enabled SQL Managed Instance and Azure Arc-enabled SQL Server, so those are cool things to check out; there are also PostgreSQL Hyperscale options, as well as services to support your applications. I pulled a slide from the hybrid cloud event so you can see exactly what was announced — I'm not going to go through all of it, but a short reminder that everything you hear about today will be posted on the news update blog, which should have just gone live at aka.ms/newsupdate.

The other big announcement I wanted to talk about is Azure Active Directory-only authentication, which is specific to Azure SQL Database and Azure SQL Managed Instance. If you're familiar with these two services, you might recall that when you deploy an Azure SQL Database logical server or an Azure SQL Managed Instance, you have to deploy it with a SQL account, and only afterwards can you add Azure Active Directory accounts and an Azure AD admin. In this preview there are two important changes: we've added an option to allow sign-in using Azure Active Directory only, and, later in the preview, we added the ability to create a logical server or managed instance without that initial SQL admin at all. It's something we've been hearing a lot about from customers, so if it's relevant for your organization to move toward Azure AD-only authentication, this is one step closer to that.
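As context for the Azure AD-only option described above, here is a minimal sketch of connecting to Azure SQL with an Azure AD access token instead of a SQL login, using pyodbc and azure-identity; the server and database names are placeholders, and it assumes the Microsoft ODBC driver is installed.

```python
# Minimal sketch: connect to Azure SQL with an Azure AD access token (no SQL login).
# Assumes pyodbc, azure-identity, and the Microsoft ODBC Driver 18 are installed;
# the server and database names below are placeholders.
import struct

import pyodbc
from azure.identity import DefaultAzureCredential

# Acquire a token for the Azure SQL resource.
credential = DefaultAzureCredential()
raw_token = credential.get_token("https://database.windows.net/.default").token

# Pack the token the way the ODBC driver expects it (length-prefixed UTF-16-LE bytes).
token_bytes = raw_token.encode("utf-16-le")
token_struct = struct.pack(f"<I{len(token_bytes)}s", len(token_bytes), token_bytes)
SQL_COPT_SS_ACCESS_TOKEN = 1256  # connection attribute defined by the ODBC driver

conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:myserver.database.windows.net,1433;"  # placeholder server
    "Database=mydb;Encrypt=yes;",                     # placeholder database
    attrs_before={SQL_COPT_SS_ACCESS_TOKEN: token_struct},
)
print(conn.execute("SELECT SUSER_SNAME();").fetchone()[0])
```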
Now, the other really exciting announcement is Azure SQL Hyperscale named replicas. I'll link to this in the blog, but I wanted to show a brief video so you can see what it's all about, and then I have a cool surprise. From the video: so how else have you configured your Hyperscale implementation? Secondary replicas — not just read-only replicas, but named replicas — have been game changing for us. This allowed us to keep our primary instance for our compute-intensive ETL and data processing; we can then assign named replicas for specific purposes like analytics and reporting. These tap into a copy of the same data but have their own dedicated compute. If you recall the slider screen, we had a secondary replica configured for customer traffic, which is part of a scalable pool of replicas for load balancing, and we also have a named replica configured to handle analytics traffic. Routing traffic to these replicas is as simple as changing the connection string, which means critical customer workloads are not interrupted by our compute-intensive analytic operations.

All right — that video didn't play quite as well as I was hoping (first time playing a video in a live stream, we'll work on that), but hopefully you got the gist of it. And here's my special surprise: I'm bringing in a special guest today, the PM leading this feature, Davide Mauri. Davide, thanks so much for joining us. Thank you so much for having me. Of course — before you get into telling us about this cool new feature and why people might want to use it, could you give our viewers a brief overview of what you do on the Azure SQL team? Sure. I'm a PM on the Azure SQL team; as PMs we try to understand what customers need and want, what features we can add to the product, and then make it happen. This feature in particular came out of customer requests for a much better scalability story.

Awesome. So, Azure SQL Hyperscale named replicas — tell us all about it. As you may guess, I have some slides ready, so here we go. What are named replicas? Let's start from the easy part: named replicas are, as the name implies, replicas, and people are already used to replicas — there are geo-replicas, and what we now call high-availability replicas, which are logical (and sometimes physical) copies of the data, and those copies are usually read-only. Named replicas are different from the replicas we already had in Hyperscale. The existing replicas were already great because, thanks to the Hyperscale architecture, we don't have to move data around to create one — we essentially create a logical replica. But one issue customers told us to improve was that those replicas don't show up in the portal. With named replicas, you get a regular database that is visible in the portal, except that it is actually a replica of another database — so you have all the comfort of managing it like a regular database, but it's a replica, not a standalone database. You can create a named replica on the same server or on another server, and because we are not copying data around — that's why named replicas are so nice — we're just spinning up a new compute node and connecting it to the existing page servers. The Hyperscale architecture allows that because everything is a set of distributed services.
The only limitation of named replicas is that they must be in the same region, because that's the trick we use to avoid copying data around. If someone needs a physical copy in another region — for disaster recovery purposes, for example — they can use geo-DR, so between the two we cover all the possible scenarios. Also, and this is what the video was showing, each named replica can have its own service level objective. Maybe your primary has only four cores, which is more than enough for the workload it sustains, but for reporting you need 20 cores: you can create a named replica set to 20 cores — again, created without copying data, so it takes just a matter of seconds — run your reports, then destroy the named replica, and you're done. Very nice for reporting and HTAP workloads.

Some customers also asked for named replicas for security reasons: they wanted a logical copy of the data that can be accessed only by a specific principal, where that principal cannot access the primary. You can completely isolate access to your database and hand the replica to, say, a data scientist at another company — they can do whatever they want on the copy, but they have no access to the primary, so you're sure they're only working on the copy of the data.

Another very cool thing is pricing: named replicas benefit from Azure Hybrid Benefit, so the cost of a named replica is roughly 30 percent lower than the primary. If you have a massive OLTP solution and you architect your application carefully, you can take advantage of scaling out and save money instead of scaling up — instead of moving from 20 vCores to 80 vCores, you can create four additional 20 vCore named replicas at a lower price but with the same performance, so it's a win-win.

The only other thing I want to highlight is that we've now created a portal experience; we originally released named replicas in public preview without it. They're still in preview, so if customers find any issues it would be great to get that feedback so we can improve the product, but in general it already feels like a GA-quality experience: you can create and manage a named replica from the portal, or use PowerShell, the Azure CLI, or even the REST API to automate everything — it's a really complete feature set. And I just want to mention that we'll also have a dedicated Data Exposed session on named replicas.

Yes, we have a session coming up soon, so definitely stay tuned for that deep-dive episode. One question, Davide, just to round out how cool named replicas are: like you said, there's no real data movement and you can spin these up and delete them very quickly — does that mean I could say, "I'm going to run this big workload," and just scale up on the fly and then scale down on the fly? Yes — instead of scaling up and down, which is what people do today, you can do that very quickly, and you can also scale out. For example, if you have a massive number of mobile clients connecting to your API, instead of scaling up you might prefer to scale out, because you also need more connection capacity. In general you can have up to 30 named replicas, so if you do the math — 80 vCores times 30 named replicas — you can have more than 2,000 vCores (2,400, in fact) serving your data, so you can really sustain a huge amount of workload.
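To make the "routing is just a connection string change" point concrete, here is a minimal sketch (not from the episode) that sends analytics reads to a named replica while OLTP writes stay on the primary; the server, database, and replica names are hypothetical placeholders.

```python
# Minimal sketch: route read-heavy analytics to a Hyperscale named replica,
# OLTP writes to the primary. Server/database/replica names are placeholders,
# and authentication options are omitted for brevity.
import pyodbc

PRIMARY = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:myserver.database.windows.net,1433;"
    "Database=WideWorldSales;Encrypt=yes;"            # add your authentication options
)
ANALYTICS_REPLICA = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:myserver.database.windows.net,1433;"
    "Database=WideWorldSales_analytics;Encrypt=yes;"  # the named replica is just another database name
)

def execute(connection_string: str, sql: str) -> None:
    with pyodbc.connect(connection_string, autocommit=True) as conn:
        conn.execute(sql)

def query(connection_string: str, sql: str):
    with pyodbc.connect(connection_string, autocommit=True) as conn:
        return conn.execute(sql).fetchall()

# OLTP writes go to the primary...
execute(PRIMARY, "UPDATE dbo.Orders SET Status = 'Shipped' WHERE OrderId = 42;")
# ...while compute-intensive reporting reads hit the read-only named replica.
for status, total in query(ANALYTICS_REPLICA, "SELECT Status, COUNT(*) FROM dbo.Orders GROUP BY Status;"):
    print(status, total)
```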
So yeah, looking forward to customers taking advantage of this. Awesome — and for everyone wanting to learn more, check the news blog, because we put in a lot of references: you can learn more about the customer story you saw a snippet of, as well as some of the innovations Davide and team have been working on. Davide, thanks again so much for coming on the show. Absolutely, thank you for having me.

Okay, so that's really cool — named replicas are really changing the game. Two thousand-plus vCores is a lot of cores, but I know some of our customers need that much compute, so it's great that we can give it to them now.

Next we have another special guest coming on: Mitch van Sloot, who is going to talk to us about a new IP asset. Hey Mitch, how's it going? Great, glad to be here today. Let me give you a bit of background: my team is a group of worldwide architects that are part of the SQL engineering organization, and basically we unblock customers that are looking to modernize or migrate their data estate to Azure SQL. As part of that migration work we often end up developing workarounds to help unblock customers, and this particular tool is one of those.

Originally I developed this tool in SSIS (SQL Server Integration Services), and it compared a table at a time in the database. We built it for a bank that absolutely needed to make sure all the data got across, and they were willing to take the hit in terms of how long a table-at-a-time comparison took. But because that approach didn't scale, we redeveloped it as a multi-threaded C# application.

In terms of the sources it supports: Oracle, Db2, MySQL, PostgreSQL, Teradata, and Netezza. It will target pretty much anything SQL — whether that's Synapse, SQL Server on-premises or in a VM, or one of our PaaS services like SQL Managed Instance or Hyperscale. The tool itself is a console application; it doesn't have a pretty UI, unfortunately, and it uses red and green in the console window (which is unfortunate for the color-blind in the audience), but you can see the errors and error counts there. For inputs there's the application config, where most of the setup happens — really all you have to do is set the source and target databases, tell it what type they are, and then run the executable. For outputs there's the console output, a text file for each table that has mismatches, and an optional spreadsheet output report.
The way it works is that it computes an MD5 hash across all of the columns in all of the rows, so what goes over the wire is just the primary key (or unique key) plus the hash for each row, and it then does a streaming compare. I need either a primary key or a unique key — otherwise the only thing I can do is a row-count compare. In terms of local resource utilization it's pretty light.

So let me give you a quick demo of what it looks like. This is running against a local SQL Server database and a managed instance. I've borrowed some tables from AdventureWorks and added a table called "test table aa" — named so it sorts to the top of the list — that has a column of every data type the tool supports in SQL. If you look at the output, you can see in red that there are hash mismatches in that table, and because the tool is multi-threaded, the little number in brackets is the thread number, so the results are interleaved. We see there are 12 mismatches in the aa test table, and all the rest of the tables are green, so they're good. In that compare we were doing about 18,000 rows per second — that was going to MI, with the source database running on my local machine. Now if we look at what gets created: the log shows the mismatches — this is a very contrived example to prove the tool catches differences across the different column types, so you can see there's one bit different in this long binary, and so on. For debugging, and to help you understand what the tool is doing, I also write out the SELECT statements issued against the source and the target; in this case both are SQL, so the two commands are very similar. I have another example running against a Db2 database on a local VM, with that same test table, and the output is very similar — in that log you see the command issued to the Db2 source, formatted a bit differently.

The last slide has pointers to resources for getting the tool; it's on download.microsoft.com. The only thing missing from the download is the drivers — if you're going against Db2, Oracle, Teradata, and so on, those drivers are licensed, so we're not allowed to include them in the zip, but the user guide explains how to get them and how to update the binaries. There's an email address for feedback, and the IP we create within the team we try to share back with the community through the migration asset sections of the Azure database migration guides — there's a link for that as well.
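To illustrate the keyed row-hash approach Mitch describes — this is just a sketch of the idea, not the actual Microsoft utility, which streams its compare rather than loading everything into memory — here is a minimal comparison of two tables by hashing every column of each row and diffing only key-plus-hash pairs. Connection strings, table name, and key column are hypothetical.

```python
# Minimal sketch of the row-hash compare idea (not the Microsoft utility itself):
# hash all columns of each row, keep only (key, hash) pairs, then diff the two sides.
import hashlib

import pyodbc

SOURCE_CONN_STR = "Driver={ODBC Driver 18 for SQL Server};Server=source-host;Database=SourceDb;"  # placeholder
TARGET_CONN_STR = "Driver={ODBC Driver 18 for SQL Server};Server=target-host;Database=TargetDb;"  # placeholder

def row_hashes(connection_string: str, table: str, key_column: str) -> dict:
    """Return {key_value: md5_of_all_columns} for every row in the table."""
    hashes = {}
    with pyodbc.connect(connection_string) as conn:
        cursor = conn.execute(f"SELECT * FROM {table} ORDER BY {key_column}")
        columns = [c[0] for c in cursor.description]
        key_index = columns.index(key_column)
        for row in cursor:
            digest = hashlib.md5()
            for value in row:
                digest.update(repr(value).encode("utf-8"))  # normalize values to text before hashing
            hashes[row[key_index]] = digest.hexdigest()
    return hashes

source = row_hashes(SOURCE_CONN_STR, "dbo.Orders", "OrderId")  # table/key are placeholders
target = row_hashes(TARGET_CONN_STR, "dbo.Orders", "OrderId")

missing = source.keys() - target.keys()
extra = target.keys() - source.keys()
mismatched = [k for k in source.keys() & target.keys() if source[k] != target[k]]
print(f"missing on target: {len(missing)}, extra on target: {len(extra)}, hash mismatches: {len(mismatched)}")
```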
Mitch, that was really cool. I'd actually never seen this before, but I can definitely see how it's going to be valuable for a lot of customers. What I like about it is that it was developed out of a need one customer had, and then, because it was a positive experience for that customer, you scaled it out to everyone — it's very much customer driven, which is awesome. Thanks so much for joining us, Mitch.

Onward and forward — great show so far, with some cool demos, announcements, and things to go try out. Reminder: head to aka.ms/newsupdate to see all of the links, including the database compare utility. Next up, we're continuing with our awesome guests, and I'm bringing back our dear friend from the Azure Data Factory team, Wee Hyong. Thanks so much for coming on the show today. Thank you, Anna — it's always a pleasure to be on the show. It's a pleasure to have you, and you're starting to become a regular, which makes sense, because Azure SQL and Azure Data Factory often go hand in hand. Today I believe you're going to tell us about some things that are new, so I'll pass it over to you.

Sounds good. Today we're really excited to share the newly designed Azure Data Factory home page — and first of all, thank you to the design, engineering, and PM teams for working together really hard over the past few months to deliver not just a modern experience but an accessible one. As we switch to the demo, you'll see that the new home page is more fluid: as soon as you open Azure Data Factory Studio you land on this new home page, with better contrast, and if you view it on your mobile phone rather than the browser you'll see smooth reflow for all the controls. At the same time, developer velocity is always top of mind. As an ETL developer working in Azure Data Factory Studio, one of the things you want access to is the resources you've recently worked with, so you can get straight back into your data pipelines or data flows — so as you scroll down the home page you'll see your recent resources, but also a showcase of things we've just shipped, integrations with other products like Azure Purview, and access to tutorials, videos, community content, and much more.

One thing I want to call out is that we could not have done this redesign without the community's support — kudos to the whole community, whether MVPs or anyone passionate about data, for all the feedback over the past months that led us to redesign this home page. Since we launched it on Monday the feedback has been streaming in; much of it has been positive, and some of it has been really good input on what else we could add to make you more productive. Now, to end off, I just want to show you one more thing.
If you're going to create a pipeline, a data flow, or a Power Query experience in Data Factory, all you need to do is click New — and if you click Pipeline, it brings you straight to the pipeline design page, which is just awesome. If you're new to Azure Data Factory, you no longer have to hunt for where to get started creating a pipeline; it's all right there. Wow, that is really cool — love the new update.

If we switch back to the slides: besides this update, over the past month or so the team has also delivered several other features. One of them is support for Always Encrypted when connecting to Azure SQL, SQL Managed Instance, or SQL Server. Another thing that's top of mind, and that we get asked about a lot in customer conversations and product round tables, is "I want to be able to send an email when the pipeline fails, or when part of it runs successfully." Instead of email, customers told us how nice it would be to bring Azure Data Factory and Microsoft Teams together — so today, if your pipeline fails, you can have a notification sent to a Teams channel, and you can configure the format and how it looks by editing the JSON payload. Now you get all your notifications about Data Factory and the progress of your pipelines and data flows right there in Microsoft Teams. The last piece, which is really exciting, is the Power Query activity: we continue to make significant progress — including complex field mapping to Power Query columns — to make sure the visual data wrangling experience is the best it can be. We can't wait to see what you build with data integration in Azure. Thank you, Anna.

Awesome — thanks so much, Wee Hyong. It's exciting to see all these updates, and also to see how, when the Azure SQL team invests in something like Always Encrypted, Azure Data Factory follows, so data stays always encrypted whether you're working in Azure SQL or in Azure Data Factory. And of course the Teams integration is cool, so I'm happy to see that too. Thanks so much for joining us, Wee Hyong — for our viewers, we'll put a link to all of this in the blog. Thank you, Anna, for having us on the show.
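In Data Factory the Teams notification is configured inside the pipeline itself, but if you want a feel for the JSON-payload side of it, here is a minimal sketch (not ADF itself) that posts a pipeline-status message to a Teams incoming webhook; the webhook URL and pipeline details are hypothetical placeholders.

```python
# Minimal sketch: post a pipeline-status card to a Microsoft Teams incoming webhook.
# The webhook URL and pipeline details below are hypothetical placeholders.
import requests

TEAMS_WEBHOOK_URL = "https://example.webhook.office.com/webhookb2/your-webhook-id"  # placeholder

def notify_teams(pipeline_name: str, run_id: str, status: str) -> None:
    payload = {
        "@type": "MessageCard",
        "@context": "http://schema.org/extensions",
        "themeColor": "FF0000" if status == "Failed" else "00FF00",
        "summary": f"Data Factory pipeline {status}",
        "sections": [
            {
                "activityTitle": f"Pipeline '{pipeline_name}' {status}",
                "facts": [
                    {"name": "Run ID", "value": run_id},
                    {"name": "Status", "value": status},
                ],
            }
        ],
    }
    response = requests.post(TEAMS_WEBHOOK_URL, json=payload, timeout=10)
    response.raise_for_status()

notify_teams("CopySalesData", "0123-abcd", "Failed")
```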
All right, cool — great show so far, and still so much more to come, so I hope you're ready to stick around. The next thing I wanted to do is touch on a few blogs. If you're not familiar with this segment, I go and look at the gazillion blogs that get posted every month across different websites, bring them together, and call out the ones I think you, as a data or SQL professional, might be most interested in.

First, the Azure blog, which is usually reserved for big executive announcements. One I found interesting was about the new data center in Arizona — interesting because it's a sustainable data center, and you'll have to read the post to see exactly what that means. If you see a new region in the Azure portal called West US 3, you'll know it's that sustainable Arizona region; I just thought that was a fun fact. Two of the other blogs are about the Azure Arc announcement — the first by Rohan Kumar and the second by Guillermo — so, again, this Azure Arc announcement is a big deal for Microsoft and definitely worth reading more about, and you can watch the Azure hybrid and multicloud digital event on demand if you missed it live. Also, if you missed last week's Data Exposed Live, we did a really fun episode with Buck Woody, Dehaan, Jay, and Lior from different teams around Azure SQL, as well as the Azure Arc Jumpstart team: they showed us all about Azure Arc and the different data services, how to get started, and how the Jumpstart team makes it really easy to spin up an environment in just a few clicks so you can play with all this new stuff.

The other thing I wanted to mention is a bit tangential, but as I was reading it I thought it maps closely to what goes on in an Azure data modernization. (Quick aside — apologies, I just lost connectivity for a second; I've been having ethernet issues today, so bear with me while I re-share my screen, and thanks for your patience. That's the joy of streaming live: sometimes stuff goes wrong.) The blog is about scaling retail applications using the Cloud Adoption Framework. Hopefully you've heard of the Cloud Adoption Framework, so I won't go over it in detail, but the post describes three phases of innovation and digital transformation. The first is siloed retail: not much cloud adoption, so you don't necessarily have the updates or intelligence you could be adding to your applications — that's the baseline where they see a lot of retail companies today. Then there's connected retail, where you're starting to migrate workloads to the cloud and take advantage of the innovation and agility the cloud gives you. And finally — the one I really like — analytics-driven retail, where you've aligned to a common data model, you're preparing for future growth, and you're unlocking some pretty advanced analytics for a better retail story. (And thanks to those of you on the stream who understand the Wi-Fi issues — we've all been there, and we just do what we can.)

I also wanted to take some time to talk about the latest Data Exposed episodes that came out this month — lots of interesting things to check out. CDC (change data capture) went into public preview for Azure SQL a month or two ago, so you might want to check out that episode and learn how CDC in Azure SQL differs from — and is really similar to — what you may know in SQL Server.
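If you want to try CDC on a test database, enablement uses the same pair of system procedures you may know from SQL Server. Here is a minimal sketch run from Python; the connection string, schema, and table names are hypothetical placeholders, and @supports_net_changes assumes the table has a primary key.

```python
# Minimal sketch: enable change data capture on a database and one table,
# using the standard sys.sp_cdc_enable_* procedures.
import pyodbc

CONNECTION_STRING = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:myserver.database.windows.net,1433;"  # placeholder; add your auth options
    "Database=mydb;Encrypt=yes;"
)

conn = pyodbc.connect(CONNECTION_STRING, autocommit=True)

# Enable CDC at the database level (requires sufficient permissions and an eligible service tier).
conn.execute("EXEC sys.sp_cdc_enable_db;")

# Enable CDC for a specific table; changes land in a cdc.* change table.
conn.execute(
    """
    EXEC sys.sp_cdc_enable_table
        @source_schema = N'dbo',
        @source_name   = N'Orders',
        @role_name     = NULL,
        @supports_net_changes = 1;
    """
)

# Inspect which tables are now tracked.
for row in conn.execute("SELECT name FROM sys.tables WHERE is_tracked_by_cdc = 1;"):
    print(row.name)
```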
Jason Anderson also came on to talk about Azure SQL Database ledger — that one's a short-form video if you want the highlight reel of what Azure SQL Database ledger is, but if you want to go deeper, head over to the on-demand videos on our Azure SQL YouTube page and check out the Azure SQL security series: the data protection episode, which we did most recently, includes about a 45-minute dive into ledger. You'll hear me talk more about this later in the episode, because I just think that tech is so cool — bringing blockchain to SQL without you having to really learn blockchain is really impressive, and I'm really proud of our team for doing it. We also continued our Migrating to SQL series, so check that out, and we had a great MVP Edition episode with Monica Rathbun, who talked about performance tuning for your readable secondaries — there are some gotchas there you'll want to learn more about. I've already mentioned the deep dive that Buck, Dehaan, Jay, and Lior did on Azure Arc, so check that out too, and Buck and I did our most recent Ethics in AI episode of Something Old, Something New — a really fascinating, really hard conversation, but one worth looking into, especially as your organizations move toward using more analytics and more intelligence in your applications.

The last thing I wanted to cover before our next special guest is what's going on in the SQL Server tech community and blog. Two things to call out. First, you might have seen this already, but we released an update that the beta program for SQL Server on Windows containers has been suspended — if that's something you've been working on or looking at, I definitely recommend reading the blog to see what the suspension entails. Second, you all know I like to do some data science work, and we had a recent blog from James Rowland-Jones on our team about the future of R in Azure SQL and SQL Server. If you go back to 2016 and 2017, we introduced Machine Learning Services for R and then Python, which lets you do in-database machine learning — no moving data out of the database — and that has been a really powerful capability that a lot of our customers take advantage of. Before all that, Microsoft made an acquisition called Revolution Analytics — a very cool team; my previous team actually worked with some of the Revolution Analytics folks — and they created the RevoScale packages, which let you do in-database machine learning with very high performance and scale. One of the big announcements in this blog is that we're going to be open-sourcing those packages, which is really great: it should help more people do in-database machine learning, because the packages will be freely available and easy to access, learn, and work with. Once they're released, you can be sure I'll do a little demo on a Data Exposed Live news update.

All right, switching gears a bit: we're doing something new that we haven't done on the show before — bringing in someone from another company — and I'm really excited about it, because we have a great member of our community, Adrian Hall, here to tell us a little more about the Hasura GraphQL Engine.
So without further ado, let's bring Adrian on. Hey Adrian, how's it going? It's going quite well. Awesome — we're so stoked to have you on the show. Before you tell us about the GraphQL Engine, can you tell us a little about what you do and what Hasura is as a company? Sure — these days I'm a developer advocate at Hasura, so I get to build all sorts of cool stuff with the Hasura GraphQL API product, talk about it, show people, and work with customers to see what they're building. Lots of good stuff these days with SQL Server, Postgres, and the various things our GraphQL API connects to.

Awesome — let's get right into it. Can you tell us more about the GraphQL Engine and how it might be useful for people using SQL? The most useful thing is that you can get GraphQL APIs in a matter of seconds — literal seconds — on your SQL Server database, by doing nothing more than deploying a container (or a server, however you want to deploy the Hasura GraphQL API), pointing it at the SQL Server database with a standard connection string, and saying "track all these tables." Boom: you have GraphQL APIs for every single table in your database. I'll show a bit of that in just a second.

Just to tempt the demons of the live demo, I wanted to show that I destroyed my previous environment in Azure just minutes ago and have since redeployed it with Terraform, so I have a completely fresh database and Hasura API — migrations, seeds, and all deployed using the Hasura CLI tooling. What you're seeing is live and fresh; it really is that quick. Here's the actual API running in Azure at this very moment. I'm using the container service where I've just deployed a single container — not the elaborate Kubernetes option, which you could also use — and I have it pointed at a Postgres and a SQL Server database. If I go to the Hasura console and look at the Data tab, I can see the Postgres and SQL Server databases I'm connected to, along with some of the tables. I like trains, so I have a trains schema in the database with a table called railroads — Amtrak, Sound Transit, Caltrain, and so on. The Data tab is where you browse data, but you can also modify tables and connect up relationships, which is very important in GraphQL because that's the graph you query through. The good bit, of course, is the GraphQL itself: if I click on the API tab, this is where you explore your GraphQL via a standard GraphiQL interface, and here you see all the tables and the entities.
These can come from a single database, multiple databases, or other remote GraphQL schemas — you have a lot of options, and Hasura brings it all together basically like a gateway, so you have one gateway GraphQL API to all of your GraphQL entities. If I want to run a query, I can click over here to put one together very easily, but I'm going to hand-code this real quick just for fun: I'll type "query", ask for my railroads and the name of each railroad, then each railroad acts as an operator, which has the trains they operate, and I'll ask for the names of those trains. Now I'll run this query against the database whose migrations and seed data were recreated just minutes ago — and there we go, the GraphQL query executes, and you can see Amtrak has the Coast Starlight trains, Sound Transit has a bunch of commuter trains, and so on. That connected, related data — the relationships and everything — is already tied together, and that's what Hasura enables you to do in a matter of seconds: whether you have pre-existing data, build it out via the console, or build it out with whatever migration tools you want, all you have to do is point Hasura at it and boom, you have GraphQL APIs.

Wow, that was very cool — I'm going to have to go check this out myself. I know you shared a link with us, which we put in the blog; is that the best way to get started, or do you have any advice for folks just getting started? The quickest way to get started locally — and it's what I do myself every time — is to search for "hasura docker quick start". That takes you right to the docs page with the latest and greatest docker-compose, which spins up a Postgres and a Hasura API server, free to use; then you can add whichever databases you want, whether they're Docker containers or SQL Server databases running locally, point it at those, and start working. As for the repo I sent you, it's the terazera repo — I named it that because it's Azure plus Terraform, and since Hasura is itself a portmanteau, I decided to use a portmanteau for my repo too. It has the code I just deployed: the Terraform, the SQL, the migrations, the seeds, all of it. I also work against it a bit whenever I do live streams and sometimes add extra content — there's a local deployment as well as the actual Azure deployment, so if you have an Azure account you can deploy this whole setup into Azure too.

Cool, awesome — thanks so much, Adrian. To our viewers, be sure to check the blog to learn more about how to get started and how to take advantage of this very fast GraphQL engine. Again, thanks so much for joining us.
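Once a Hasura endpoint is tracking your tables, any HTTP client can query it. Here is a minimal sketch of running a query like the one in Adrian's demo from Python; the endpoint URL, admin secret, and the field/relationship names (railroads, operators, train) mirror his demo schema and are placeholders, not defaults you'd get automatically.

```python
# Minimal sketch: query a Hasura GraphQL endpoint over HTTP.
# Endpoint URL, admin secret, and field names are placeholders based on the demo schema.
import requests

HASURA_ENDPOINT = "https://my-hasura.example.com/v1/graphql"   # placeholder
HEADERS = {"x-hasura-admin-secret": "my-admin-secret"}         # placeholder; use proper auth in production

QUERY = """
query RailroadsWithTrains {
  railroads {
    name
    operators {
      train {
        name
      }
    }
  }
}
"""

response = requests.post(HASURA_ENDPOINT, json={"query": QUERY}, headers=HEADERS, timeout=10)
response.raise_for_status()
payload = response.json()
if "errors" in payload:
    raise RuntimeError(payload["errors"])
for railroad in payload["data"]["railroads"]:
    print(railroad["name"], "->", [op["train"]["name"] for op in railroad["operators"]])
```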
Okay, so we continue to have lots of cool stuff on the show — hopefully you're enjoying it so far. Let us know: we'd love to see a comment or question telling us whether this is the sort of thing you like to see and hear about.

Going forward, I wanted to talk briefly about some of the Azure SQL blogs. The first is the Azure Data Community — hopefully you've had a chance to see it and find a user group. We spun this up a little while ago, but we reposted some things about how to find upcoming events: on the site you can see podcasts, blogs, and so on, but also upcoming events, which is a new addition to the website based on feedback from the new advisory board. If you want to learn more about the Azure Data Community advisory board, how we're making it work, and how we're trying to get feedback from you all, there's a blog about that as well. Some of the other blogs here relate to things you've already seen or heard me talk about, so I'll skip those.

The SSRS one I wanted to demo. This is a demo I did a while ago — full disclosure, it's recorded — but it's still useful for seeing the SSRS opportunities that arise once you move to Azure. What I have here is an Azure SQL Managed Instance, with all the instance-scoped capabilities you're used to seeing in SQL Server, and I'm also using an SSRS virtual machine — that's what I'm logging into right now. There are marketplace images available for this when you move to the cloud, which makes it easier to say "I want to lift and shift, and I want just SQL Server Reporting Services, or SSRS plus SQL Server itself." You can see I'm in that SSRS VM, and if I switch over to the report server, I have some reports on performance, and I'm storing the report server databases — ReportServer and ReportServerTempDB — in Azure SQL Managed Instance, which helps with that piecemeal kind of migration. One of the things I'm showing here, which I won't spend a ton of time on, is the report itself: it's a paginated report, and the variance is very high, so you might think something's going on with the quota — and if you look at another report, you see there isn't even a quota for most of the months where people are doing their sales work. So in this demo I put on my data scientist hat and went over to Azure Data Studio to see whether I could take a look, do some data science, and make a better prediction for what the quota should be on a month-over-month basis, so we can set more accurate quotas for our sales folks. If I hop over to Azure Data Studio, you'll see the machine learning extension — a relatively new extension that makes it a lot easier to get started: you can manage your packages, import models, and create a notebook. I just created a notebook and started doing some data science.
In this demo I was also showing off how we've improved the markdown experience: if you don't like markdown, you can take screenshots and edit text very easily, and we convert it all to markdown for you, so you don't have to learn it unless you want to. Kind of cool if you haven't seen it before — and if you don't use Azure Data Studio, I highly recommend it.

Okay, back to the data science problem. Basically, we're missing a lot of quota values, and if you look at sales versus quota there doesn't seem to be a strong pattern. So next we do some modeling — I won't go through all of it, but we're using SQL machine learning services, in Python this time, to train a model, store that model in a serialized form, and then use it to make new predictions and insert them into a table. I know I'm going really fast, but you can see the adjusted quota and how we're able to replace those null values; and if we go back into SSRS and add the adjusted quota column in place of the original quota, we now get more realistic quotas that we can give to our salespeople to set realistic expectations. The final thing I wanted to show is how I also took advantage of Power BI Premium Per User: I was very easily able to migrate that paginated report into Power BI, and now I can start doing more there, maybe even building dashboards. I know that was a whirlwind of a demo, but the blog about moving SSRS to Azure is very useful and gives you a great way to take the phased migration approach you've heard me talk about so many times.
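The in-database training step in that demo runs Python inside the database through SQL Machine Learning Services. The exact notebook isn't shown here, but a minimal sketch of the pattern looks like this — the table, columns, and model choice are hypothetical, ML Services with Python must be enabled on the instance, and the outer call is made from a client with pyodbc.

```python
# Minimal sketch of in-database ML with SQL Machine Learning Services (Python).
# The T-SQL below runs server-side via sp_execute_external_script; table, columns,
# and model choice are hypothetical, not the exact notebook from the demo.
import pyodbc

CONNECTION_STRING = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:my-managed-instance.database.windows.net;"  # placeholder; add your auth options
    "Database=mydb;Encrypt=yes;"
)

TSQL = """
EXEC sp_execute_external_script
    @language = N'Python',
    @script = N'
from sklearn.linear_model import LinearRegression

# InputDataSet / OutputDataSet are the default pandas DataFrames exchanged with SQL.
known = InputDataSet.dropna(subset=["SalesQuota"])
model = LinearRegression().fit(known[["SalesAmount"]], known["SalesQuota"])
InputDataSet["AdjustedQuota"] = model.predict(InputDataSet[["SalesAmount"]])
OutputDataSet = InputDataSet[["SalesPersonKey", "MonthKey", "AdjustedQuota"]]
',
    @input_data_1 = N'SELECT SalesPersonKey, MonthKey, SalesAmount, SalesQuota FROM dbo.FactSalesQuota'
WITH RESULT SETS ((SalesPersonKey INT, MonthKey INT, AdjustedQuota FLOAT));
"""

with pyodbc.connect(CONNECTION_STRING) as conn:
    for row in conn.execute(TSQL):
        print(row.SalesPersonKey, row.MonthKey, round(row.AdjustedQuota, 2))
```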
Finally, before the next segment, I wanted to talk about some of the Azure Database Support Blog topics that came up — the support team always does a great job of surfacing these learnings. There are a few things here you'll want to look into yourself. One of the ones I got to play with was the automated bacpac export: the post includes code to create a PowerShell runbook, so I basically copied and pasted it, pointed it at my Azure SQL database, and on a schedule it runs the PowerShell commands to export the database as a bacpac file and store it in a storage container. If you have processes or requirements that call for that, you can do it, and you can even set a retention period after which the export is deleted. It was really easy to set up — I set it up just yesterday afternoon and had it export every hour, just so we'd have a lot to look at (definitely not required to run it that often) — and the blog steps through how to do it.

The other thing I found really interesting was the post on synonyms. There's so much to learn in SQL Server and Azure SQL, and as someone who's only been doing this for a few years, I'm always learning. The most recent thing I learned is that you can use synonyms in Azure SQL Managed Instance to make it easy to reference your linked server tables — a very short, simple blog, but a tip and trick I didn't know, so I thought I'd share it.
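The synonym trick is a one-liner in T-SQL; here is a minimal sketch of the idea run from Python, where the linked server, database, table, and connection string are hypothetical placeholders.

```python
# Minimal sketch: create a synonym over a linked-server table so queries can use a
# short local name. Linked server, database, table, and connection string are placeholders.
import pyodbc

CONNECTION_STRING = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:my-managed-instance.database.windows.net;"  # placeholder; add your auth options
    "Database=mydb;Encrypt=yes;"
)

with pyodbc.connect(CONNECTION_STRING, autocommit=True) as conn:
    # Four-part name: [linked server].[database].[schema].[object]
    conn.execute(
        "CREATE SYNONYM dbo.RemoteOrders "
        "FOR [MyLinkedServer].[SalesDb].[dbo].[Orders];"
    )
    # Now the linked-server table can be queried by its short local name.
    for row in conn.execute("SELECT TOP (5) * FROM dbo.RemoteOrders;"):
        print(row)
```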
Now, with that, I'm going to pass it over to Cheryl and Mark for SQL in a Minute.

Thank you! Welcome again to another segment of SQL in a Minute, where we chat about the work we do here on the SQL team. I'm especially excited about today's topic because it's about migrations, which is an area I focus on, and my special treat is that I'm joined by one of my favorites, a senior content developer on the SQL docs team: Mark. Hey, how are you doing today? I'm doing well, Cheryl, thank you. When I think about migrations, I can't help but think about the technology involved and the time, effort, and thoughtfulness it takes to make the right choices. One of the main features we've developed over time is the SQL migrations hub page, which covers many different areas. Mark, you were going to share that screen — there are so many gems on this page; can you tell me about it?

Of course. You're not wrong, Cheryl — this hub page has a lot to get people started if they're on one platform and want to get to another. It contains data migrations from different databases to different databases, different migration tools, open-source databases (not necessarily SQL Server — anything Azure SQL related), and customer stories and feedback from customers who want to know how to migrate from one form to another. Well, you know I don't bring you on unless there's something I want: there was a special thing you worked on in the MySQL space — can you share a little about that? Yes. If I scroll down the page, you can see there's a MySQL section. What we recently did: there was a very popular, very traffic-heavy PDF in a GitHub repo that was heavily viewed, but it was very long and easy to get lost in. So we took that same PDF and added it to the hub page. If I click this link, it first takes you to the introduction, which explains the whole process of taking an on-premises MySQL database and moving it to Azure Database for MySQL — basically taking your on-premises MySQL to the Azure cloud. We didn't want the converted version to be just as long as the PDF, so we took its table of contents and sections and split it up ourselves to make it more readable and easier to follow. From the hub page you land on the introduction, and on the left side we have our own table of contents; each page has a prerequisites section that references the previous page, and at the bottom there's a next-steps link that leads into the very next section of the guide. If I click Assessment, it leads into the pre-migration section, and you can follow through from there — or skip around — to the data migration section, post-migration, some samples, and a summary, and play with it yourself.

Wow, there is so much information there. I love how the sections are broken up and intuitively outlined, so as you go through the progression of a migration you're just clicking through to the next step, as you said. This is some great work — I could literally talk about this all day, but I want to thank you so much for joining me today. This content looks incredible, and I'm sure the community really appreciates the effort from you and your project team. I also want to call out Masha Thomas, who put together the hub page where all of these content pieces reside. You might be wondering how to get to all this great information: take a look at your screen — it's right there in the SQL documentation. Click the aka link, look for Migration, and you'll get to the migration guides, where you'll see the guide we showed today as well as all the others. Thank you for joining us on SQL in a Minute — take care and be safe. Thanks so much, Mark. I appreciate you having me. Thanks so much, folks.

Awesome — moving right along, our next (and regular) guest is our co-producer, Marisa, here to talk about events. Hey Anna, thanks for having me on again — although you probably don't have a choice. Always a pleasure. As always, we have a lot of events where we're speaking this month; I'll go through them, and there are links to all of these in the blog Anna will put up later, so if you want to register, attend, or find out more details, head to that blog. On July 14th, Bob Ward is speaking at a Dell webinar about how Microsoft SQL is evolving and whether your company is keeping up with the pace. On July 28th we have a few speakers at the EightKB SQL Server internals conference: Bob Ward on SQL waits, latches, and spinlocks, and Pam on Azure virtual machine storage. Also on July 28th, Julie is presenting Notebooks 101 at a database professionals virtual user group, so check the details on that. In addition, we always have our Data Exposed Live shows on Wednesdays at 9:00 AM Pacific: next week is the third episode on Azure SQL virtual machines, with Pam and David, and the following week is an Ask the Experts session with Bob Ward and Buck Woody.
They'll be talking about Azure Arc and Azure Arc-enabled SQL Managed Instance. If you want to submit questions, we can — possibly — answer them on screen: send your questions on Twitter to Ask BW, which covers both Bob Ward and Buck Woody (if you ask them, they'll argue about who it stands for), or use the forum linked in the blog. Then on July 28th we have an Azure Data Studio power hour, with a bunch of people from the tools team joining us for rapid-fire demos of new tooling in that hour. And as always, our on-demand Data Exposed shows release every Thursday at 9:00 AM Pacific; you can find those at aka.ms/dataexposedyt. That's it for this month — thanks, Anna. Awesome, thanks so much, Marisa. It's always cool to see what's coming up on Data Exposed Live — I'm really interested to see how the Ask BW episode goes, and the power hour should be a fun one too, so there's a fun month coming. Thanks so much for keeping us on track.

We have so much going on, but there are just two more things I want to share before we close out the show for the day. The first is the monthly Microsoft Learn module: today I'm recommending "Architect modern applications using Azure SQL Database." I'm not biased at all, but this is a cool module because you get hands-on: you configure your development environment with Visual Studio Code, deploy and configure Azure SQL Database using scripts, and automate updates to your Azure SQL database using a dacpac and the Azure SQL GitHub Action. This is the link where you can find it, and I also put a link in the blog. Finally, my pick of the month: you're probably tired of hearing me talk about how cool Azure SQL Database ledger is, but I think it's really cool. If you haven't had the chance to learn about it, go watch the short episode on our YouTube channel, or the deep-dive episode, or — if you're feeling really nerdy — there's a white paper that explains how we did what we did and how we're making it super easy for you to take advantage of those capabilities. Again, I think this is a step toward the future, and you'll continue to see our team take the latest tech investments, like blockchain, and bring them into Azure SQL in ways that make sense.

Taking a look at some of the questions: someone is looking for the link in the chat — it's a simple one, aka.ms/newsupdate — so definitely go check it out to get all of the references for everything you saw today. We also host this video on our YouTube channel, so if you subscribe you can go back and rewatch any pieces you want, and get notifications for the next time we stream. As we close out, I just want to thank you all for joining us. We stream every Wednesday at 9:00 AM Pacific and release new episodes on Thursdays at 9:00 AM Pacific — something to look forward to tomorrow. Again, thanks so much for joining us, and we hope to see you next time on Data Exposed.
Info
Channel: Azure SQL
Views: 662
Id: 4ag8zL53F2Q
Length: 60min 6sec (3606 seconds)
Published: Wed Jul 07 2021