Office Hours: Ask Me Anything About SQL Server at Hallgrímskirkja

Video Statistics and Information

Captions
Hello, party people, and welcome to Office Hours. Today I'm standing outside of the Hallgrímskirkja church in Reykjavik — a beautiful modern church, really sharp, just fantastic lines. Let's go see what you all have in store for SQL Server questions.

The first one actually works out pretty well. Normally Weird Sean says: "We've seen quite a bit of Iceland since you moved there via your Office Hours sessions, and while beautiful, it seems a little deserted. Can you dispel the now-prevailing myth that you and your wife are the only people in Iceland at the moment?" You'll probably see people throughout the course of this stream. It's about 7:00 a.m. here in Iceland. I usually prefer to stream in the mornings before my wife wakes up — it's one of the things I can do to get things done before she's up and around — so when I'm streaming there's usually nobody out and about. I'm usually up at three or four in the morning; it doesn't matter where I live, I just get up at that god-awful early hour. You'll probably see people walk past and look around at the church door during this broadcast, and you'll probably hear cars going by, because I'm in downtown Reykjavik. This is normally when I try to show y'all wonders of nature that you wouldn't otherwise get the chance to see, and I guess this is kind of wonderful too, so that's cool.

Next up, Andy Leonard asks: "What bugs, quote unquote, exist in SQL Server 2019?" I know from talking to Andy that he's asking because he's working with someone who's considering going to SQL Server 2019. Here's the thing: with open source projects, there are public bug repositories — bug trackers where you can go and submit issues. Microsoft's Azure Data Studio and Visual Studio Code are both great examples; they keep their issues lists public on GitHub. SQL Server, not so much. With SQL Server, if a fix hasn't been released publicly yet, the only way to find out about a bug is to have the problem happen, call Microsoft for support, go through the whole rigmarole of "have you rebooted, have you patched," and then they may tell you there's a known bug and that a fix is coming. For the rest of us, the way we find out about bugs is actually the fix list: go look at the cumulative updates — the way I'd suggest doing that is SQLServerUpdates.com, where we keep track of all the cumulative updates that go out — and then look at all of the bugs that were fixed in each one. Microsoft has gotten to the point where they're really phoning it in with these: they don't bother to list the specific scenarios that cause the issues, they just give a brain-dump list of things, and half the time there are spelling and grammatical errors in them. I'm like, I don't know that I can really trust the contents of this. SQL Server 2019 has been out for a couple of years now, and the bugs keep coming fast and furious. On one hand I should be happy that Microsoft is fixing them; on the other hand, I'm not happy that the quality has been this hit or miss. It's been rough over the last couple of years because they keep throwing all kinds of new things inside the engine — Linux support, Kubernetes, accelerated database recovery — and they make big changes. It's awesome that they're making those big changes, but the drawback is that if they're not really testing them — spoiler alert: they're not really testing them — they cause all kinds of problems for people who don't even intend to use those features. So you just have to be careful. Columnstore indexes are a great example: people have seen so many incorrect-results bugs there over the last couple of years.
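As an aside for anyone following along at home: before comparing your server against those cumulative update fix lists, it helps to know exactly which build you're on. Here's a minimal sketch using SERVERPROPERTY — the update-level properties return NULL on some older builds, so treat it as a starting point:

```sql
-- Which build and update level is this instance on? Compare the result
-- against the cumulative update fix lists (for example via SQLServerUpdates.com).
SELECT  SERVERPROPERTY('ProductVersion')        AS build_number,       -- e.g. 15.0.x for SQL Server 2019
        SERVERPROPERTY('ProductLevel')          AS release_level,      -- RTM / SPn
        SERVERPROPERTY('ProductUpdateLevel')    AS cumulative_update,  -- e.g. CU14; NULL if none applied
        SERVERPROPERTY('ProductUpdateReference') AS latest_update_kb;  -- KB article of the latest update
```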
Next up we have Michael. Michael asks: "You've said you want to retire relatively soon. How have you gone about preparing to support your wife, yourself, and your wife" — misread that, bit of a Freudian slip — "to support yourself and your wife in retirement? Consulting income would seem to dry up once one stops actually consulting." I think of income like a stool: the more legs your stool has, the more stable your support will be. One leg of my income is consulting, another leg is training sales, and another leg is online services — software and subscriptions like the Consultant Toolkit and SQL ConstantCare, plus other projects that Richie and I are working on. When I start to retire, my idea of retirement is to not have anything on my calendar. That means no consulting, and training will shift more toward recorded training. Here's a sneak peek, actually: this year's Black Friday sale, which starts in November, will be the last year you can buy a Live Class Season Pass from me — the last year you'll be able to pay once and then attend all my live online classes — because after 2022 I'm going to dramatically taper down the number of live online events I do, and in-person events too. Frankly, I just don't think the conference market is ready for live in-person stuff anytime soon.

Next up we have Kevin. Kevin says: "Have you encountered sp_BlitzCache, Query Store, and SQL Profiler showing a query returning a large row count, high CPU time, and a large read count, but when I run the same query from SSMS it shows fewer rows, fewer reads, and less CPU? Same parameter values, but often concurrent runs." Yes — if the amount of data in the tables keeps changing. Wait, hold on, I know what you're thinking: "Brent, that could never possibly happen to me." But very often these are queries working off of a queue — "show me all the orders that haven't been processed yet" — and every now and then there's a whole lot to do, and the rest of the time there's not really that much to do. That's a classic example. Another thing: I know you said you think it's the same parameter values. I'm not so sure. I've seen a lot of cases where people go, "Oh no, I know these are the parameters it's using." No, no, no — when you're looking at things like Query Store or sp_BlitzCache, you're not necessarily going to get every set of parameters that runs; you may only get the cached set of parameters, and people may be running it with different parameters all the time. So those are the two places I would start: make sure you're actually hitting the same parameters. I know the audience is going wild in the comments — "Brent, I can't believe you didn't mention parameter sniffing; that's why app queries are sometimes fast and sometimes slow" — but remember that Kevin's question said different numbers of rows returned, and that's where I go: I don't think that's just a parameter sniffing issue alone. I think it's genuinely different parameters, and we're not measuring the same thing.
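If you want to check, rather than guess, which parameter values the cached plan was compiled for, those values live in the plan XML that tools like sp_BlitzCache read from. A hedged sketch against the plan cache — the LIKE filter is a hypothetical placeholder for however you identify your query:

```sql
-- Pull the parameter values each cached plan was compiled with, plus the read
-- counters, so you can compare them to what you're running in SSMS.
WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT  qs.execution_count,
        qs.total_logical_reads,
        qs.last_rows,                                             -- rows returned by the most recent run
        p.n.value('@Column',                 'sysname')        AS parameter_name,
        p.n.value('@ParameterCompiledValue', 'nvarchar(256)')  AS compiled_value,
        st.text AS query_text
FROM    sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle)    AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
CROSS APPLY qp.query_plan.nodes('//ParameterList/ColumnReference') AS p(n)
WHERE   st.text LIKE N'%OrdersNotProcessed%';                    -- hypothetical marker for your query
```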
Next up, Phillip says: "Can I just say you're awesome, Brent? Hope I get to meet you in person one day." Well, you can say that. Because of the coronavirus stuff, I'm not doing any in-person events in 2021, I'm not booking any travel in 2021, and I'm not even buying tickets for work-related stuff in 2022. I'm waiting until January 1st of 2022 before I revisit that decision about buying tickets anywhere. I just have no interest in going where so many people are like, "I want my freedoms, I don't want to wear a mask." Okay, great: do you brush your teeth? You brush your teeth for other people too, right? You brush your teeth so the rest of us don't have to smell the salmon you ate this morning. Could you do me a favor and put a lid on that COVID-spewing mouth of yours? I get that you don't care whether or not you get infected, but the rest of us have things like asthma and immune deficiency problems, and we kind of care — and I do. I do so much free stuff for y'all out of the goodness of my heart; could you do me the little favor of putting on an ever-loving mask? It doesn't cost anything. Just ridiculous. So that's why I'm not interested in doing in-person events any time soon — there's going to be some guy in the corner yelling "I want my freedom" and spewing viruses all over the place.

Mike Bird asks: "Possibly a dumb question" — that's okay, I like the easy ones — "but is it possible to redirect output from SET STATISTICS IO ON to a file or a table?" Mike, dude, that is not a dumb question; that's a great question, and I've asked it myself. I'm not aware of a way to do that directly — somebody's going to catch me on this now — but there is a bit of a tricky way around it. You can't really get the output of SET STATISTICS IO ON itself, but modern versions of SQL Server do include the number of reads inside the execution plan: there are logical reads numbers inside the actual plan. Your next question is going to be, "Brent, how do I get the actual execution plan into a table?" The trickiness continues. SQL Server 2019 has a last-actual-plan switch: turn it on, and immediately after your query finishes you can pull that plan out of the plan cache, and it'll have things like the number of logical reads that were done. I'm not entirely sure the last actual plan includes the number of reads — it might not, but somebody in the audience will surely test that and correct me. I know, for example, it doesn't have CPU time, but it has a lot of other good stuff. There was another way — oh, of course: Profiler or Extended Events. You could fire up an Extended Events session or a Profiler session and capture the number of logical reads that way. sp_BlitzCache also pulls the number of logical reads — min, max, last, all kinds of stuff like that — and we get that from sys.dm_exec_query_stats, so you could also check sys.dm_exec_query_stats for your particular query hash or plan hash and pull the numbers that way. It's not as elegant as what you want — what both you and I want is some kind of magical command we could run — but unfortunately it just doesn't work that way.
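The closest thing to "SET STATISTICS IO into a table" that this answer points at is snapshotting the plan cache's read counters into a table you control. A minimal sketch — the table name and LIKE filter are placeholders, and remember it only sees whatever is still in the plan cache:

```sql
-- Hypothetical logging table for logical read counters.
CREATE TABLE dbo.QueryReadHistory
(
    captured_at         datetime2     NOT NULL DEFAULT SYSDATETIME(),
    query_text          nvarchar(max) NULL,
    execution_count     bigint        NOT NULL,
    total_logical_reads bigint        NOT NULL,
    last_logical_reads  bigint        NOT NULL,
    min_logical_reads   bigint        NOT NULL,
    max_logical_reads   bigint        NOT NULL
);

INSERT dbo.QueryReadHistory (query_text, execution_count, total_logical_reads,
                             last_logical_reads, min_logical_reads, max_logical_reads)
SELECT  st.text, qs.execution_count, qs.total_logical_reads,
        qs.last_logical_reads, qs.min_logical_reads, qs.max_logical_reads
FROM    sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE   st.text LIKE N'%YourQueryMarker%';   -- hypothetical marker comment in your query
```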
Next up, Random Name asks: "A certain security software vendor is pretty adamant that their app should be installed on the same VM as the SQL Server because, quote, 'there could be a lot of traffic' — I like that — end quote. Do you have any guiding principles for when, if ever, to have the full application installed on the database server?" You've got to pick your battles. If the vendor requires it, then that's what you do. Doesn't that suck? The vendor's probably wrong — vendors are probably wrong about all kinds of things — but you're just going to suck it up, and if that's what the vendor requires, you're probably going to deal with it. You can ask questions, but inside your organization you probably don't want to be seen as the person who throws up roadblocks everywhere. Now, I'll tell you one reason I've seen a security tool — maybe it's the same tool, maybe it's not — insist on being on the same server: network problems. If an attacker took the SQL Server down, or took down the network between the application and the SQL Server, they wanted to make sure that logging continued to work even during the outage. Okay, fine — I don't agree with it, but if that's what you believe you want, I'm going to give it to you; I'll give you enough rope to hang yourself. Plus, if they complain about performance problems, what's the first thing you're going to say? "Separate off the SQL Server." You can always say that if they complain — but they won't complain about performance, because it's their own application and they're the ones who told you to set it up this way.

Martin asks: "Hi Brent, I know that using the MERGE command is advised against. Are there any other operators that are problematic?" By "operators" I think you mean things you can do with T-SQL. There's a whole litany of them, and it's beyond what I can go into quickly — that's why I have classes like Fundamentals of Query Tuning and Mastering Query Tuning — but I'll brain-dump a bunch off the top of my head: multi-statement table-valued functions, scalar user-defined functions, non-SARGable things in the WHERE clause, non-SARGable things in your joins, table variables in some situations, temp tables in some situations. Wouldn't it be awesome if I could just give you a list of commands and say "don't ever use these"? Unfortunately, that's where the training class material comes in. One other thing: I'd advise you to keep an eye out for any feature Microsoft brings out and then doesn't improve at all in the very next version. I'm thinking, for example, of SQLCLR, Service Broker, and Java stored procedures — things where Microsoft does a whole song-and-dance pony show about "hey, check out our brand new feature" and then never actually improves it. Those make me nervous, especially when there's no instrumentation to tell you how they're performing.
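To make the non-SARGable point concrete, here's a small before-and-after sketch against a hypothetical Users table with an index on LastAccessDate. The two filters are only roughly equivalent; the point is the shape of the predicate:

```sql
-- Non-SARGable: wrapping the column in a function forces SQL Server to
-- evaluate every row, even though LastAccessDate is indexed.
SELECT  Id, DisplayName
FROM    dbo.Users
WHERE   DATEDIFF(DAY, LastAccessDate, GETDATE()) < 30;

-- SARGable rewrite: the column stands alone on one side of the comparison,
-- so an index on LastAccessDate can be used for a seek.
SELECT  Id, DisplayName
FROM    dbo.Users
WHERE   LastAccessDate >= DATEADD(DAY, -30, GETDATE());
```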
Next up, Fraud says: "Mr. Ozar, when using log shipping, can you still perform a full and log backup?" Dude, you already asked this — go back and check the past Office Hours questions.

Philip says: "What do you recommend these days in terms of processor — Intel versus AMD — and clock speed and number of cores?" He asked several more things in the follow-up, but I'll boil it down. It is so rare these days that people get to choose their hardware from scratch. What's far more common is that people are building VMs, either on-premises or in the cloud — Amazon EC2, Azure VMs, and so forth. It's so rare that I can sit down with a client and say, "All right, you're going to build a physical box, let's go with four 8-core CPUs at 3.5 GHz" and so on. In fact, I used to have a module in my Mastering Server Tuning class about exactly that — these are the CPUs I recommend right now, here are the server models I recommend right now — but I had to keep it updated every three months because server vendors and availability kept changing. I kept polling the Mastering Server Tuning classes, asking what percentage of students had built a new server and picked hardware in the last three months, and the numbers were really, really small. Because those numbers were so low, I stopped teaching that module; it doesn't make sense to use up valuable teaching time when so few students can leverage the material. For every hour in the class, I want it packed with useful stuff for as many students as possible. So I don't track that stuff anymore either; I just wait until a client needs to build a new server and then assess what's available at the time. I'm also really hesitant to put recommendations in a YouTube video, because the video is going to stick around for a long time. Instead, here are the general recommendations: get as few cores as practical to support your workload and the growth you need during the lifetime of the server — SQL Server Standard Edition is about $2,000 a core and Enterprise Edition is about $7,000 a core — but with the fastest clock speed possible, as much gigahertz as you can get, because that's essentially free. Sometimes sysadmins will say, "But I can get a 16-core CPU, 32 with hyperthreading — it just runs at 1.1 GHz," and I'm like, come on. We don't need that many cores at seven grand a core for Enterprise Edition.

Next up, Michael asks: "My friend has a very busy table — lots of selects, writes, updates, and inserts, about 30 gigabytes — with only two indexes, a clustered and a nonclustered, and both are heavily utilized. My highest wait" — see, you were supposed to say your friend's highest wait; you got busted there — "my highest wait by far on these is schema modification locks. Could you have too few indexes, and could that be my friend's issue?" Yes, and yes. They can't all be long answers, right? Yes, you can have too few indexes, and yes, that could be the problem. So how do you know? That's where my Mastering Index Tuning class comes in. In that class I give you exact examples of both too few indexes and too many: we work through them in presentations, then I work through them in demos in front of you, then you get hands-on labs with both — you have to figure out which tables have too few indexes and which have too many, and you have to fix it. That's one of the fun things about my Mastering classes: you have to prove that you actually learned the stuff. It's a ton of fun.
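If you can't wait for the class, two rough DMV sanity checks point in the same direction — what indexes SQL Server wishes the table had, and how hard the existing ones are being worked. sp_BlitzIndex wraps this up more politely; the table name here is a placeholder:

```sql
-- 1. What indexes does SQL Server wish dbo.YourBusyTable had?
SELECT  mid.equality_columns, mid.inequality_columns, mid.included_columns,
        migs.user_seeks, migs.avg_user_impact
FROM    sys.dm_db_missing_index_details     AS mid
JOIN    sys.dm_db_missing_index_groups      AS mig  ON mig.index_handle = mid.index_handle
JOIN    sys.dm_db_missing_index_group_stats AS migs ON migs.group_handle = mig.index_group_handle
WHERE   mid.object_id = OBJECT_ID(N'dbo.YourBusyTable');

-- 2. How hard are the existing indexes being worked?
SELECT  i.name, ius.user_seeks, ius.user_scans, ius.user_lookups, ius.user_updates
FROM    sys.indexes AS i
LEFT JOIN sys.dm_db_index_usage_stats AS ius
       ON  ius.object_id   = i.object_id
       AND ius.index_id    = i.index_id
       AND ius.database_id = DB_ID()
WHERE   i.object_id = OBJECT_ID(N'dbo.YourBusyTable');
```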
Next up, Seven says: "I asked a question earlier about stats full scans — maintaining the integrity of the full scan during auto updates. I guess what I meant was: if enough rows have been modified to trigger an auto stats update, could it append to the full scan rather than replace it?" I don't even understand your question — I'm sure it's me, not you; I'm just not getting what you're asking. What you might want to do is post a question over at dba.stackexchange.com, but include as many specifics as you can about the actual problem you're hitting. If you're going, "I'm just curious if X can happen," then go build an experiment, post that, and say, "I have a question about these results." I'm sorry, I'm sure it's me — I just don't get what the practical thing is that you're asking, especially when you tack on things like NORECOMPUTE. If you're using NORECOMPUTE, I'm really curious what these stats changes are, especially when you say auto stats.

Tell Me The Truth says: "How do you find which column stats SQL Server used to create an execution plan? I'm trying to find a list of column stats that are potentially useless." There are two parts to this. One is: how do you know which stats were used to create a specific plan? In modern versions of SQL Server — 2016 and newer; I don't remember the exact cumulative update where this dropped — both the estimated and the actual execution plans show it. Right-click the SELECT operator, go into Properties, and there's a section in the properties that lists the statistics that were used; I can't remember whether the section is called Statistics or something else, but that's the place to start. Now come the catches: that's only this version of the plan, it will change over time, and it can even be influenced by things like parameter sniffing. The second part of your question is: how do I find stats that are effectively useless? Here's the problem — what you're really asking for is, across all the queries that have ever been compiled, which stats haven't been used, and that just doesn't exist. What you could do, if you wanted to be ambitious — because it sounds like you're trying to prune some of your stats — is drop all of the system-created stats: any statistic whose name begins with _WA_Sys. You could programmatically go through and drop those, but SQL Server will re-add them if even just one query uses them; write just one query and it will re-create a system-created statistic. So you're kind of trying to roll a boulder up a hill — I want to say that's called a Sisyphean task — and I question the value you'd get out of it. If you were determined to do it, I could see it on tables over, say, a terabyte in size, because your stats updates would suck pretty badly there. So I admire where you're going, but you can see now why it sucks so bad.
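If you do decide to roll that boulder, a safer way to start is to generate the DROP STATISTICS commands and review them rather than dropping anything blindly — a minimal sketch:

```sql
-- Generate DROP STATISTICS statements for auto-created (_WA_Sys...) statistics.
-- Review the output before running any of it; SQL Server will quietly
-- re-create these the next time a query wants them.
SELECT  N'DROP STATISTICS '
        + QUOTENAME(SCHEMA_NAME(o.schema_id)) + N'.'
        + QUOTENAME(o.name) + N'.'
        + QUOTENAME(s.name) + N';' AS drop_command
FROM    sys.stats   AS s
JOIN    sys.objects AS o ON o.object_id = s.object_id
WHERE   s.auto_created = 1
  AND   o.is_ms_shipped = 0;
```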
Next up we have Daniel. Daniel says: "Hi Brent, we are monitoring sys.dm_os_sys_memory and didn't have any warnings. Since I activated Lock Pages in Memory, we now have several events per day that say we have high memory. Why is this a problem?" Okay — first, why are you monitoring it? You could pick anything randomly and start monitoring it. You don't monitor something and say, "Hey, I'm watching barometric pressure and it went to 870 — why is this a problem?" Dude, why are you monitoring it? Start from the other direction: what's the problem you're trying to solve, and then monitor metrics based on that. Don't just grab a random metric off the grocery shelf, because that isn't something people look at — high memory conditions, low memory conditions — people look at what's actually going on on your server. Just as trivia: when you turn on Lock Pages in Memory, assuming you've set everything correctly, SQL Server allocates all of its memory up to max memory on startup and locks it in memory. Well, your SQL Server may not need that much memory, and other applications might need memory too. Oh, there's a street sweeper coming — I'll turn the other way; this is going to be a real test of this microphone. So the thing I'd step back and ask is: why are you turning on Lock Pages in Memory at all? It feels like you have a lot of free time on your hands and you're randomly pushing buttons, flipping switches, and watching random metrics. I appreciate that you have enough time to watch YouTube videos — that's awesome — but maybe ease off on touching the production SQL Server, making changes for no reason, and monitoring random metrics. Maybe watch more videos rather than touching buttons. I think you're a nice person — you're here, and I need all the viewers I can get. That's not really true; I don't care, five viewers or one.

Falcon says: "Is there an automatic way to update OpenSSL, which SQL Server installs with Python services?" I have no idea — I don't deal with Python services at all.

Next up, Igor says: "What is the best way to reduce table size, after dropping columns and deleting rows?" Right — if you want to make the table smaller, you could delete rows... where do these people come from? Lord, give me strength. Okay, coming back over here: what are the other ways you could reduce table size? Delete or drop columns; right-size your data types — if there's stuff stored as char that should be varchar, for example; drop rows you're not using, in terms of archiving; and then you can also compress tables via table or index compression, either row or page. Columnstore is another one that's actually quite good — columnstore indexes have great compression ratios. So those give you a few ideas of where to start.
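To make the compression option concrete: estimate the savings first, then rebuild. A hedged sketch with a hypothetical table name — page compression costs CPU on reads and writes, so check the estimate before committing:

```sql
-- Estimate how much space page compression would save.
EXEC sys.sp_estimate_data_compression_savings
     @schema_name      = N'dbo',
     @object_name      = N'YourBigTable',   -- hypothetical table name
     @index_id         = NULL,              -- all indexes
     @partition_number = NULL,              -- all partitions
     @data_compression = N'PAGE';

-- If the estimate looks good, rebuild the table and its indexes compressed.
ALTER TABLE dbo.YourBigTable REBUILD WITH (DATA_COMPRESSION = PAGE);
ALTER INDEX ALL ON dbo.YourBigTable REBUILD WITH (DATA_COMPRESSION = PAGE);
```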
Next up, Normally Weird asks: "I have an interest in contributing to the community, to pay it forward as well as to selfishly gain experience toward a long-term goal of becoming a performance tuning consultant. How can I, as a mere mortal, positively impact the community beyond answering Stack questions?" Oh my gosh, that's a great question. So often — my nose is running; I don't know why I needed to tell you that, since you can see it — so often when we get stuck in one way of giving back to the community, we don't see the wider variety of options out there. So I'll brain-dump a bunch of things I see: recording YouTube videos and tutorials; writing presentations; volunteering at events — they don't have to be in-person events, online events use volunteers too; organizing conferences like SQLBits, the PASS Data Community Summit, SQL Saturdays, Data Saturdays, and GroupBy — there are all kinds of online events that need volunteers to help run them smoothly and to help coordinate the program, like which sessions get accepted; and mentoring — once you've done a few presentations, there are mentoring opportunities as well; I know GroupBy runs a mentoring program where you can help other speakers get better. Hopefully that gives you a rough idea of ways to get started. Also, if you search for Brent Ozar's 500-Level Guide to Career Internals, I've got a whole four-hour presentation out there on YouTube, free, that gives examples of how to do this and how it helps you become a performance tuning consultant over the longer term.

Next up we have Aaron. Aaron says: "I have several queries that generate plans with an index seek plus a key lookup on their respective tables. What are some things to take into consideration when deciding to leave it alone versus adding the key lookup's columns to the index's INCLUDE list?" Oh man, I wish I could boil this down to 60 seconds, but I have a whole module on it in my Mastering Query Tuning class. In that class I explain when key lookups are okay, when you need to consider covering them inside the index, and ways you can work around them — ways you can change your query to reduce the overhead of key lookups, reduce the number of times you do them, and deal with the kind of key lookups that really screw you: residual predicates. There's no way I can boil that session down to 60 seconds, but it's an excellent question, and it's the kind of question that says it's time for you to start attending training classes — that's why I teach them.
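For a taste of what covering a key lookup looks like, here's a minimal sketch against the same hypothetical Users table as earlier. The trade-off is that every column you add to the INCLUDE list makes the index larger and writes more expensive:

```sql
-- This query's plan shows Index Seek + Key Lookup if the index only covers
-- LastAccessDate: DisplayName and Reputation have to be fetched per row.
SELECT  DisplayName, Reputation
FROM    dbo.Users
WHERE   LastAccessDate >= '2021-08-01';

-- Covering index: the key stays narrow, and the looked-up columns move into
-- the INCLUDE list, so the key lookup disappears from the plan.
CREATE INDEX IX_Users_LastAccessDate
    ON dbo.Users (LastAccessDate)
    INCLUDE (DisplayName, Reputation);
```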
Next up — Marcus the German, good to see you again. Marcus says: "Hi Brent, a friend of mine would like to know how she can determine whether a parallel query executed on one NUMA node. Let's assume we have a 2x8 server and MAXDOP is set to 8." It's a great question, but what's the problem you're trying to solve? What pain are you experiencing, and what action might you take to correct it? Let's pretend the answer was yes, a query executed across multiple NUMA nodes: what are you going to do about it? If you don't have an answer for that, stop asking the question, because I don't think you're really going to change anything about the query or the server. So before you ask a follow-up question, step back: what's the problem you're trying to solve? Approach it from that direction.

Next up, Archer says: "I'm doing the D.E.A.T.H. method" — he's talking about the D.E.A.T.H. method from my Mastering Index Tuning class — "is it a good idea to test index changes in a non-production environment before I apply them in production?" There are two ways to think about that — that's a diesel Land Cruiser making all kinds of noise here. One way is: is the index change script going to work? Absolutely, you should test that. For example, when I do index tuning for clients, I don't usually have access to the production server; I work disconnected. I'll get a bunch of metadata from sp_BlitzIndex, work on the index changes on my own laptop, then hand them the list of scripts, and I have a little code block right at the top that says, "Hey, go test this in a development environment against the production database," because I might have had a typo, fat-fingered a column name, or put a column in twice. You should check that they actually compile. The second part of your question is: how do I know they make queries faster and not slower? In the follow-up inside the question, he asks, "Have you ever used Distributed Replay?" If you search for "Brent Ozar why are load tests so hard," I've got a blog post about why load tests are so hard. The problem with replaying queries: imagine you have a delete query that says delete all of the sales where customer name equals Brent Ozar. You can only run that once. Well, you can run it multiple times, but it won't have any performance impact after the first run, because you find all the rows on the first pass and then they're gone. Now imagine an insert query against a table with primary keys, foreign keys, and unique constraints: that insert may work the first time and fail the second time. Or take a query that pulls all the rows off a queue that need to be processed: it returns different numbers of rows depending on when it runs. These are some of the reasons running a load test is so hard — the setup labor is astronomical. And let's be honest: index changes are relatively easy to implement and back out, so I don't have a lot of clients who are willing to go to the expense of load-testing index changes when they're so easy to just undo or redo. Good on you for considering it, though.

Next up, Mike says: "I'm testing a solution using triggers to compress LOB data on insert with the COMPRESS function, with a view containing the DECOMPRESS function. I'm thinking of tricking the app by replacing the table with a view and avoiding code changes. It seems to work, but are there any gotchas to look out for?" Oh boy — my first reaction is no, no, no. It may be smart in your situation, but here's where I start to get nervous. I work with so many third-party applications where the application goes through a deployment script and runs an ALTER, adds a trigger, or changes a constraint on the table, and because somebody was tricking around with views and triggers, it ends up hosing the deployment script. You don't want to be the person who hoses the deployment script. If it's an app you control, then why not just fix the app so it does this itself? If it's an app you don't control, I don't think that's a route I would go down. I love the concept; I've just seen so many problems when developers do something the DBA wasn't expecting while altering the table.
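For readers trying to picture the pattern Mike described, here is a minimal sketch of it — compressed storage in the base table, a view under the old table name that decompresses, and an INSTEAD OF INSERT trigger that compresses on the way in. All object names are hypothetical, and the deployment-script caveat above still applies:

```sql
-- Base table stores the LOB column compressed (COMPRESS returns GZIP varbinary).
CREATE TABLE dbo.Documents_Compressed
(
    Id             int IDENTITY PRIMARY KEY,
    Title          nvarchar(200)  NOT NULL,
    BodyCompressed varbinary(max) NULL
);
GO
-- View presents the data under the name the application expects.
CREATE VIEW dbo.Documents
AS
SELECT  Id,
        Title,
        CAST(DECOMPRESS(BodyCompressed) AS nvarchar(max)) AS Body
FROM    dbo.Documents_Compressed;
GO
-- INSTEAD OF trigger compresses inserts coming through the view.
CREATE TRIGGER dbo.Documents_Insert
ON dbo.Documents
INSTEAD OF INSERT
AS
BEGIN
    SET NOCOUNT ON;
    INSERT dbo.Documents_Compressed (Title, BodyCompressed)
    SELECT Title, COMPRESS(Body)
    FROM   inserted;
END;
GO
```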
Next up, Andrew says: "I'm switching a recent Stack Overflow Users table into an older Stack database to do some tests. It takes a very long time and the system appears underutilized: very low CPU, low drive writes on the NVMe drive, lots of memory available — see the screenshot." I'm not going to see a screenshot here, buddy. Okay, let's zoom out a little and talk about: when I have a query that's slow, how do I find out why? One of my favorite ways is sp_BlitzWho. If the query is running live, sp_BlitzWho will show you — on some versions it shows live query stats — but it will show you the wait statistics: what that session is waiting on right now, plus the top waits for the session overall. It could be all kinds of things — waiting to write to the log file, growing the drives or the database files — but the query's top wait stats for the session overall is where I'd start. The other thing I'd say is: I don't ever move objects between databases. When I'm doing my Stack Overflow demos, I have a Stack 2010 database, a 2013 one, and a brand-new one, and I just use restores every time — a restore from scratch. Odds are you're going to be making so many other changes to your database structures and tables that it's safer to reset to a known point every time with a restore. Otherwise you build a house of cards, and one day it falls over because you run an experiment and somebody points out, "Hey, your experiment is wrong because you forgot you left this foreign key in place," or something.

QTechnic says: "How do I monitor CPU load by each database on a SQL Server? I don't like statistics from the plan cache because of the different lifetimes of each plan in the cache." You're not the first person to want this, but the simple problem is cross-database queries — you probably saw that coming. Where should you assign the CPU load when you join across two different databases? For SQL Server there's simply no such thing as tracking a query's workload by database. Now you might be saying, "But Brent, I have no cross-database queries." Okay, great, that's cool, but the rest of us do, so SQL Server just never put in metrics for that kind of thing. What you could do — if you swear on all that's holy that you have no cross-database queries, and you're lucky enough to have Enterprise Edition, aka Expensive Edition — is set up Resource Governor. When Resource Governor came out, the idea was that it would make queries slower; I can't imagine why that didn't catch on. But you don't have to use it for that. What you can also use it for is lumping logins into groups — workload groups — and then monitoring which groups use the most CPU, memory, and disk I/O, because Resource Governor has reporting DMVs you can query. I'm not saying use Resource Governor for what it was intended for — don't slow the queries down — just put each login into a different workload group, and then you can monitor which database, application, or login uses the most CPU.
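A rough sketch of that setup — one workload group per application login, no throttling, just measurement — with hypothetical login and group names. Resource Governor requires Enterprise Edition, and the classifier function lives in master:

```sql
USE master;
GO
-- One workload group per application, all sharing the default resource pool.
CREATE WORKLOAD GROUP SalesApp   USING [default];
CREATE WORKLOAD GROUP ReportsApp USING [default];
GO
-- Classifier routes each login into its group (login names are hypothetical).
CREATE FUNCTION dbo.rg_classifier() RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    RETURN CASE SUSER_SNAME()
             WHEN N'sales_app_login'   THEN N'SalesApp'
             WHEN N'reports_app_login' THEN N'ReportsApp'
             ELSE N'default'
           END;
END;
GO
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.rg_classifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO
-- Later: which group has burned the most CPU since the counters last reset?
SELECT  name, total_request_count, total_cpu_usage_ms
FROM    sys.dm_resource_governor_workload_groups
ORDER BY total_cpu_usage_ms DESC;
```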
Marcus says: "Hi Brent, do you have any experience or recommendations on Receive Side Scaling (RSS) on SQL Servers? It's a TCP/IP setting for the network" — it's a network card setting. It's been years since I've looked at this, and because it's been years, I have to go back to the documentation every single time. I want to say the Microsoft CAT team had good articles on it, but if you just search for "SQL Server Receive Side Scaling," that will turn up the articles. It hasn't been something I've had to look at in the last few years, though.

Next up, David says: "What are the options for using a source control system for versioning stored procedures — Git, SVN, etc.?" David, you just listed them. There are all kinds of helper applications you can use alongside version control for SQL Server, but come on now.

Next up, Technic asks the same question again — we'll skip past that.

Stephen says: "Brent, thank you for your answer on why a join spills to tempdb. The parallelism is nearly evenly distributed, so the operator wasn't allocated enough memory. Is it possible to allocate more memory to a specific operator in a query, or do I have to use MIN_GRANT_PERCENT?" I'm not aware of a command you could use to influence the memory grant that a specific operator gets; you're going to want the MIN_GRANT_PERCENT query hint. I don't recommend it often — it's super rare that I would ever recommend it — but just based on our interactions, it sounds like you've got enough knowledge that the hint makes sense for you.
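For reference, the hint goes in an OPTION clause at the end of the statement. The query and the 10 percent figure here are placeholders — the hint asks for at least that share of the maximum available memory grant, so it's easy to overdo:

```sql
-- Hypothetical reporting query that was spilling its hash join to tempdb.
SELECT  c.CustomerName, SUM(o.Amount) AS TotalAmount
FROM    dbo.Orders    AS o
JOIN    dbo.Customers AS c ON c.CustomerId = o.CustomerId
GROUP BY c.CustomerName
ORDER BY TotalAmount DESC
OPTION (MIN_GRANT_PERCENT = 10);   -- use sparingly, as discussed above
```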
Always Learning DBA says: "Hello Brent, since you mentioned a few days back that production DBA jobs will be in less demand over time, what do you suggest for transitioning an existing career as a production DBA into architecture or data science roles?" My thing would be: go with what you love. Whatever it is about your job that you really enjoy, chase after that part. The stuff that's highly paid changes from time to time, and just because something is highly paid doesn't mean you'll enjoy it. If something pays a lot of money and you hate doing it, you're not going to get excited about learning more — and when you're transitioning careers, it's really all about learning what you need to do next. The things you're passionate about learning, you'll eventually master, and that will pay you more money. So I'd go there and specialize in that.

And last up, we'll take one more. Kaya says: "Hi Brent, I have an app that works best when my databases are in SQL Server 2016 compatibility mode. My instance is SQL Server 2019. The app admin wants to set the system databases to 2016 compat level too; I'd prefer they stay at native 2019. I could test it out, but I'd appreciate your opinion." For me, I don't think there's ever been a situation where I felt compelled — or even felt the desire — to change the compat levels for system databases. I don't even know if you can for master, and I don't know why I'd care or want to. The simple question I'd ask your admin is: why? Why do you want to make that change? Can you point to some resources where you've read that it makes sense? I think that's always a good idea: any time someone wants to do something to your SQL Server, ask them why. There's some kind of rule about asking why five times to get to the root of the issue. Maybe they've got a great idea, but ask them to point to references for why they want to do it. I've never seen a need for it, and if somebody's running that many queries in the system databases, we have bigger problems — like, yo dog, why are you using master to host your applications?

All right, it's about eight o'clock here in Reykjavik. I'm going to pack up the camera and head down to my favorite local bakery and donut shop, Deig, down by the shore — phenomenal donuts and bagels, just really good stuff. Today is Sunday, so after this I've got to update the screenshots for Mastering Parameter Sniffing before Erika wakes up, and then we're not doing anything today — we're goofing off, because it's kind of a gloomy day here in Reykjavik and I don't think it makes much sense to go out driving around. We've been meaning to go shopping here in town; that might be something we do today. Thanks a lot for hanging out with me at Office Hours. I'll leave y'all with a view of the church. Adios!
Info
Channel: Brent Ozar Unlimited
Views: 4,793
Rating: 4.878788 out of 5
Keywords: sqlserver, brentozarunlimited, brent ozar, mssql
Id: slku6V9mWnw
Length: 44min 49sec (2689 seconds)
Published: Thu Sep 02 2021