Common performance testing problems and fears (k6 Office Hours #33)

Captions
Nicole: Hi everyone, and welcome to another k6 Office Hours. I'm Nicole van der Hoeven, a developer advocate at k6, and for this Halloween special I have Tom with me. Tom, do you want to introduce yourself for those who haven't watched the other office hours you've been on?

Tom: Sure. My name is Tom, I'm on the professional services team at k6, and I'm happy to be here.

Nicole: Great. I think we should start with the thing you mentioned earlier, Tom.

Tom: Oh yes, of course. It was just brought up in our internal Slack channel that we now have a cloud release notes page: the k6 Cloud release notes. So if you've ever wondered what's going on in k6 Cloud, you can now check it out online.

Nicole: Do you have the URL?

Tom: Yep, I just posted it in the chat as well.

Nicole: Nice. Floor says she's here for Halloween puns. Are there going to be Halloween puns? [Music] Tom's come up with some really interesting things he's going to lay on you later. [Laughter] You're welcome to make Halloween puns for us. This show was actually Floor's idea: because it's Halloween this weekend, we wanted to talk about something spooky, things that scare testers, and in particular load testers. One of my favorite things about testing is that it touches many aspects, from the business to development, testing, and operations; it goes through the entire cycle. A good tester is at least interested in every phase. Even if you're only brought on for a specific thing, that doesn't mean you can't question requirements. I love that, because I think all roads lead to testing. But the same thing I love about it can also be the thing that's most difficult. What do you think, Tom?

Tom: Yep, totally agree.

Nicole: There's a lot that can go wrong, and it also depends on the team you're on. If you're on a team where everybody thinks that quality is your responsibility alone, that's such a broad thing. The quality of an application in general is not a responsibility that can be borne by one person. Do you still find that, Tom, in professional services engagements with k6?

Tom: Quite often one becomes the deliverer of bad news, and that in itself carries a lot of pressure: this application that's been developed over the course of several years ultimately doesn't meet the performance criteria, and you could be the one delivering that news. It's also difficult because you have to be sure that what you're reporting is actually correct, that there were no mistakes in the testing methodology. That can be pretty scary.

Nicole: So let's talk about the planning phase, when you've just come onto a project and don't really know anything yet. What are some of the things that are the most scary for you?

Tom: I guess not having the complete picture, not fully understanding the architecture. That's a solvable problem, though: a lot of times you just need to ask the right questions to figure it out. Certain personas can also be challenging to work with. There are individuals who seem intent on making life difficult for no particular reason; maybe it's because they're under a lot of stress themselves and need someone to take it out on. But every application you test is initially going to be a bit scary as you discover its inner workings.

Nicole: Have you ever been on a project where people were willing to help, but it's a microservices-based architecture with disparate, siloed teams that don't really talk to each other? So even though they're willing to help, there's no one person who knows the entire flow of a message. I find that scary. It's not insurmountable, but it is scary to be the one who has to put all the pieces together.

Tom: That's true. You sometimes discover a lot of politics between the different teams, and you do have to tread lightly so you don't end up being seen as a threat.

Nicole: When I was doing contracting, I was often called in because there had already been a performance incident in production. When you come on after that, there's a preconception from the rest of the team that you're there to critique their work and solve the issues they created. That's the worst possible environment to come into, and the worst attitude, because I don't see it as our job to point the finger at somebody else. Any issue like that is the fault of many people, if not everybody involved.

Tom: That's right. People should really just be grateful that you're pointing out what the problems are, and that's often the difficult part. That's why load testing is considered one of the more difficult disciplines in testing: you need a pretty good understanding of client-server architectures and the protocols involved. And since you're the one delivering the bad news, it's assumed you've done your due diligence in ensuring the testing was done properly. If people catch any hint of that not being the case, they'll start doubting the output you're showing.

Nicole: Floor has a good point here: she says technical debt can be daunting. That's true, to come in and inherit all of that. And this is a little bit in the scripting part, but sometimes the team already has a lot of scripts in a different tool and you're tasked with reinventing all of it. That's a big ask; it can be daunting. Moaz Adel says: "the scary faces when trying to determine the baseline and load scenario or profiles." That is a fear; let's talk about that. One of the things I had thought of doing for this episode was to talk about recent public fails. The truth is, though, ever since I got started in tech and being on the building side of software, you get a lot of empathy for other teams. I don't want to call anyone out, because it's scary. Can you imagine working on a site that has the potential to go viral? How do you plan for that, when human irrationality plays such a big part in what goes viral and what doesn't? It doesn't always make sense; it's not predictable. So it is very difficult to come up with the load scenarios or profiles. There are ways to be more educated about our guesses, but in the end they're still guesses to some degree, right?

Tom: That's true. You could argue that if your site does go viral, in some ways that's a nice problem to have; if people start writing about it, well, any publicity is good publicity, so it's not the end of the world. But there are definitely some applications where, if they haven't been tested against stress scenarios and turn out not to work when they go live, it could be quite damaging. And that's really why I think there are different types of tests you want to run.
For a baseline, you just run a single virtual user through all of the flows and capture your best-case response times. Then what I typically do is run a realistic test, and there are also stress tests, where you go way beyond the realistic test to see when something breaks. If you at least run those different types of tests, then even if you don't know exactly how the application is going to be used when it goes live, you'll still have tested it well enough that you'll hopefully catch things before they get revealed in production.

Nicole: And it's still not the end of the world, but I'm always extra wary when I'm on a project for the government, or for mission-critical apps. I once worked for an airline, not as a tester; this was early on, and I think I was doing data entry. Mistakes that we made on the team could ground airplanes, which costs thousands of dollars an hour, because we didn't have an application that could update in real time which parts had gone through maintenance and which hadn't. That's a safety issue, and that's grounds for saying a plane can't take off. It's actually how I got into testing: there were engine components in particular that weren't updated in real time, and we needed a system that was reliable enough to get that through. Any sort of project like that is scary. It's exciting, but it's also scary.

Tom: Imagine testing air traffic control systems.

Nicole: Oh yeah. Or what if we got to test medical surgery software? That would be pretty scary. Let's talk about scripting. What are some scripting fears that you've faced?

Tom: Well, I have to go back to my early days in load testing, where I didn't know how to program. I didn't know very much at all, really, and I was asked to resurrect some scripts that hadn't been used for a long time and no longer worked, because the application they were targeting had changed.

Nicole: What language was it in?

Tom: C++.

Nicole: Oh wow.

Tom: I'd never really looked at code very much, and suddenly I was faced with C++. There was some Java as well; the Java stuff was a bit easier to get my head around. But it touched on all of the really common things you do when writing automation scripts: correlation and parameterization are probably the two main ones. I very quickly figured out that these things are very important to the functioning of your script. It was overwhelming at first, but once you get over that, you understand that it's very important to pay attention to what you're seeing. Attention to detail is one of the key things for load testers, or even automation engineers in general. You can run your script, observe the results, tweak it if you're not getting the right result, and run it again; the debugging loop is quite straightforward, and it doesn't really change much beyond that. In the projects I do these days, a large portion is still making sure you're doing correlation correctly. But it can be very challenging if you've never programmed before, especially if you don't know much about the HTTP protocol, or WebSocket, or gRPC. It's very front-loaded, isn't it, the amount of knowledge you need in order to be effective.

Nicole: Unless you're a permanent tester for an organization, there's this expectation that as a performance testing contractor you should be able to either remember your experience with different technologies or pick things up really easily. To a certain extent I think that's a good thing, because you don't need experience with every single tool: once you know the basics, once you understand the larger picture of what you're trying to do with a test, you can learn that particular tool or language. On the other hand, it can be very daunting to suddenly be faced with some weird proprietary language that you have to get up to speed with very quickly, one you've never used before. Have you ever used Silk Performer?

Tom: No. I've heard of it.

Nicole: It was one of our competitors. I don't know if they're still around.

Tom: I think they are, but since they were acquired I haven't really heard that much.

Nicole: It's actually a good tool, but it uses BDL (I can't remember what it stands for), and I haven't seen that language used anywhere else. For my first engagement with it, I thought, oh no, how am I going to do this? The way I get over that particular fear is to be open about it. I never go in saying I'm an expert when I've never even touched the tool. I'll say, no, I've never had experience with it, but here are the other tools I've used in the past that I also had to learn on the job; like anybody else, I'm going to Google things and ask people for help if I need it. But I do think it's still daunting to be the new person who's completely lost. And correlation is a big thing too.

Tom: I was going to say, you reach a certain point where you start becoming confident in what you don't know. Eventually you just know how everything fits together, what client-server architectures tend to look like, and they're pretty much always going to
share similarities. So when something new pops up, it's probably not going to take you very long to figure it out. It's still new, it's still a bit daunting, but again, if you pay attention to detail, and if you can tweak something and try again, you'll eventually figure it out. And then you realize there's nothing really that scary at that point.

Nicole: I was going to say that correlation is one of those things like the picture of an iceberg: you only see the top. Maybe you've gone through the requests in dev tools; that's usually what I do to scope out a site or web app. I open dev tools, look at the Network panel, and have a look at the requests. Sometimes they seem pretty straightforward, and then you start scripting and realize just how much correlation needs to be done. But maybe we should back up. How would you explain what correlation is?

Tom: A server sends some dynamic value to the client, and in order for the client to then send a valid request back at some later point (it doesn't necessarily have to be the next request; it could be at some point in the future conversation), it'll need to send that dynamic value. If you don't, the server is going to say: nope, not accepting that request. These values are used for a variety of purposes. State is probably one of the main ones, but also security, making sure no one's tampering with requests between the client and server. So dynamic IDs are very important. If you don't have the mandatory ones 100% correct, you're going to get unexpected responses and you won't be simulating a client properly; the server will just send you back an error instead of doing some processing. It's probably quicker for the server to throw an error page back at you than to actually do what it was meant to be doing with that request. In a "create order" flow, if you're not sending the correct dynamic IDs, it's just not going to create the order, which could involve tons of other back-end systems. So really making sure you're getting the expected responses is key, and doing correlation properly is one of the things you have to do to get there.

Nicole: Bill said "ghost requests." Do you know what a ghost request is? Is that a real thing, or is he just trolling us? [Laughter] It's responses that don't have an associated request, I guess. There was no request; they just knew we were there and sent a response. So the problem with correlation, and this is a good case for recording, is when you have an application with a lot of dynamic IDs, like you said, Tom. I like to have one recorded session that I can look through and check: okay, this was the request, and the response is here. Now the second request is passing a certain value in a header, and I don't know where it came from. So what I'll do is take that value, copy it, and do a find on the response of the previous request to see where it is. And the problem is, like you said, it's not necessarily the last response it came from; it could have been the very first one, when you first logged in. That's when it gets tricky. Sometimes there are really complex or very secure applications that do this for every single step, and multiple ones per step.

Tom: One example that pops to mind is ASP.NET, with view state values.

Nicole: Oh yeah.

Tom: Those are usually on pretty much every single request. I don't know if it's a feature of the framework, but quite often there will be some really complex POST data: web forms being basically bounced back and forth between the client and server, maintaining state in each and every request. That can get ugly pretty quickly. So if you're having to test an ASP.NET-based website: I'm sorry.

Nicole: Another one that comes to mind is the Oracle suite of apps, like banking apps. Of course, security is the reason they do that: in any sort of financial application, security is a very big deal. You don't want anybody to be able to access your accounts, or even your mortgage applications. But every move has several view states and many IDs that you have to correlate. That's when it's difficult to use a protocol-level tool; I think that's a good candidate for using something more browser-based, if that's at all possible.

Tom: Yes, agreed. I guess the good news is that for the most part, 95 percent of the time, you can say that if the browser can do it, then you should also be able to do it at the HTTP level. The only time that gets a bit tricky is when there's some kind of encoding or encryption going on over the actual HTTP payloads. You might have some POST data that looks like a bunch of random characters; but if you're able to interact with the service in the browser, it means there's some client-side JavaScript somewhere that's encoding the data, and you can literally go hunt for the JavaScript that's doing it and copy and paste it into your script. With k6 we're scripting in JavaScript, so quite often you'll just be able to copy and paste the encode and decode functions that are running in the browser and simulate it in your k6 script. In other languages there are even bridges to be able to run JavaScript in the context of Java, for example. I think I had to do that once at one point, with another tool.
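The copy-the-browser's-encoding trick, combined with the correlation workflow described above, can be sketched in plain JavaScript (the same language k6 scripts are written in). Everything here is hypothetical: `__VIEWSTATE` is just a familiar example of a dynamic value, and `encodePayload` is a stand-in for whatever encode function you would actually copy out of the application's client-side code.

```javascript
// Hypothetical sketch: extract a dynamic token from a previous response
// (correlation), then encode the next request's payload the way the
// application's own client-side JavaScript would.

// Pull a dynamic value out of an earlier response body with a regex.
// In a k6 script you would run this against http.get(...).body.
function extractToken(responseBody) {
  const match = responseBody.match(/name="__VIEWSTATE" value="([^"]+)"/);
  if (!match) {
    throw new Error('correlation failed: token not found in response');
  }
  return match[1];
}

// Stand-in for an encode function copied from the application's own
// client-side JavaScript -- here just base64 of JSON, but in practice
// you paste in whatever the browser actually runs.
function encodePayload(obj) {
  const json = JSON.stringify(obj);
  // btoa exists in browsers and recent Node versions; fall back to Buffer.
  return typeof btoa === 'function'
    ? btoa(json)
    : Buffer.from(json).toString('base64');
}

// Simulated previous response; a real script would capture this live.
const previousResponse = '<input name="__VIEWSTATE" value="dDwtMTMx" />';
const token = extractToken(previousResponse);
const body = encodePayload({ __VIEWSTATE: token, action: 'createOrder' });
console.log(body);
```

In an actual k6 script, the extracted token would go into the headers or body of the next `http.post` call; the point is that if the browser can produce the value, the script can reproduce it.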
But basically, if the browser can do it, you should be able to do it in a script as well.

Nicole: But then there's also the rise of single-page apps, which run in a browser, but when you look at the traffic there's barely any. Traditionally, applications did their processing on the application servers; with single-page apps, part of the appeal is that they offload the processing, and the resources it requires, to the client side. So there are lots of JavaScript or other scripts firing off on the client side. The problem is that protocol-level scripting tools might download those scripts, but they don't execute them the way a browser would. So there's a whole generation of browser-based tools now that have to do that, because there are applications that simply can't be tested at the protocol level.

Tom: Yep. It's funny, it's a battle that's been going on for decades now: how much to run on the client and how much to do on the server, and it just seems to keep bouncing back and forth. We go through waves of "no, let's rethink this, let's do it all in the client," then "now the client's doing too much, let's put it back on the server." It's like we can't figure out what the best arrangement is, so I guess a bit of both is probably the right answer.

Nicole: Is there a particular engagement that comes to mind when you think about particularly tricky protocols or scripting challenges?

Tom: Yeah, there was one, I forget the name of it, but it's one of these big enterprise applications.

Nicole: Citrix?

Tom: No.

Nicole: I guess we're different, because that's what I was going to say. Or something like SAP.

Tom: SAP does have a proprietary binary protocol that it uses, at least the rich client, the old-school SAP; I think these days it's all web-based. PeopleSoft? I think it was an IBM product, but basically it was really messy at the protocol level. No surprise that people were wanting to test it for performance. But even something like Salesforce can be very complex. Generally speaking, it's the applications that are very customizable that are going to be the most challenging at the protocol level; the customizations these applications cater for inherently make the protocol communication difficult.

Nicole: SAP is tricky because a lot of the legacy SAP apps are desktop apps, so that's already difficult to test. But then they have this thing now where you can put those apps on the web, to make them kind of like a web app. The problem is, I once had to test that, and the way to do it was using keyboard shortcuts. It was a very different way to test.

Tom: Wow. So it was literally like the green-screen version inside a browser?

Nicole: Yeah, it was virtualizing it, so there were no DOM elements you could use to interact with; the easiest way was keyboard shortcuts. Then you have to have a tool that supports that, where you can just send the keyboard shortcuts. I mentioned Citrix as well: I once had to test a Citrix implementation, to see how many virtual machines could be started through Citrix, and that was really difficult. I tried several tools for it, ended up using LoadRunner, and it was still a pain. It's just a hard one to test. I had to do bitmap comparisons: again, there are no DOM elements you can search for, nothing in the response, so what you do is take a screenshot of what you should see, then take a screenshot during the test and compare. But there are issues with that, because weird things happen when you're rendering a page, or in this case a machine. [Music]

Tom: Yeah, I've done a few Citrix projects in my time, because it was one of the apps that fixed our trade.

Nicole: Okay, that's pretty good. Oh, Floor is just on fire here with all of the Halloween puns: "spooky application platform." I'd agree with that; I think that's appropriate. Another thing that comes to mind was a system on the TCP protocol, and it was all binary data. It had dynamic data, so I needed to do correlation, but the responses were in binary, and I had to figure that out, decode it, and then encode the request. It was very interesting; I loved it, but it was definitely scary at the beginning.

Tom: That kind of stuff can be really tricky, and it requires a lot of trial and error as you figure out how to do the decryption, and the re-encryption as well, because you can't send the decrypted traffic to the server; it's still expecting it in its encrypted form. You have to be able to do both.

Nicole: Yeah, and it's also a pain to debug or troubleshoot, because you can't just look at the response and know what it's saying; you have to copy it, paste it, and then decode it. That's so difficult. Carlos Quiroz asks: is Socket.IO possible to test? I don't know what Socket.IO is.

Tom: Socket.IO is basically an implementation on top of WebSocket. It adds a bunch of functionality to the WebSocket interaction between the client and the server, to cater for some things that aren't part of WebSocket itself, I believe; I might be wrong. It's used quite a lot by people who want to do things over WebSocket and need more functionality. So that means yes: because we have support for WebSocket, we should in theory support Socket.IO as well.

Nicole: Yes, I just posted a response with a link to our documentation on how to use k6 to test that. I haven't used Socket.IO in particular, but if it's WebSockets, it should be fine. Okay, let's talk about execution. Any fears come to mind on this front? At this point we've done the planning... oh, we never addressed the load profiles that Moaz mentioned earlier. The load scenarios and profiles, workload modeling in general, is a huge thing, because it has to happen early on, when you might not know the application really well, and that can be difficult to get right.

Tom: Yep. One of the key topics there is: are you trying to run a realistic load test, where each virtual user maps nicely onto the activities of a real user? The thing that bridges the two, or attempts to, is the concept of think time: using sleep statements in your script to actually slow down its execution. If you don't use any sleeps, each virtual user will run the script as quickly as the JavaScript code can run and as quickly as the servers you're interacting with respond. So without a sleep statement, each virtual user in effect becomes a hyperactive user, and a thousand-virtual-user test is actually more equivalent to tens of thousands of real users. That's an important thing to bear in mind. But then you have the question: what is a realistic think time, how long should your sleep durations be? That's almost a science in its own right, trying to work out what that number should be. What do you feel is the number, Nicole?

Nicole: Oh, just like the number that answers everything: probably 42.
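The think-time arithmetic Tom describes can be made concrete with a bit of Little's Law: the number of concurrent virtual users needed to sustain a given request rate is that rate multiplied by how long each iteration takes (response time plus think time). This is a rough sketch with made-up numbers, not a k6 API:

```javascript
// Rough workload-model arithmetic (Little's Law): how many concurrent
// virtual users you need to generate a target iteration rate, given how
// long each iteration takes. All numbers here are illustrative.

function requiredVUs(targetIterationsPerSecond, responseTimeSec, thinkTimeSec) {
  // Each VU completes one iteration every (responseTime + thinkTime) seconds,
  // so VUs = rate * iteration duration, rounded up.
  return Math.ceil(targetIterationsPerSecond * (responseTimeSec + thinkTimeSec));
}

// With no think time, a VU hammers the server as fast as responses return:
const hyperactive = requiredVUs(100, 0.5, 0);  // 100 * 0.5  = 50 VUs
// With a realistic 30-second think time, the same rate needs far more VUs:
const realistic = requiredVUs(100, 0.5, 30);   // 100 * 30.5 = 3050 VUs

console.log(hyperactive, realistic);
```

Dropping the think time from 30 seconds to zero cuts the VUs needed for the same arrival rate roughly sixty-fold, which is exactly why a thousand-VU test with no sleeps can behave like tens of thousands of real users.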
[Laughter]

Tom: You know, you're probably not that far from what it actually ends up being, because the sleep time has to take into account that people go off for a cup of tea or lunch. They'll still have the application open on their machine, and they're not going to be using it for a good several minutes, whereas other times they'll be clicking through every few seconds. If you're designing a realistic test, you have to account for people doing nothing with the application for extended periods of time. I remember working with one guy who was convinced it was something like 30 seconds.

Nicole: One of my favorite projects that I worked on in terms of workload modeling was a gaming company. They did many things, but in particular I was on the team testing for horse racing, and the bets they were selling for a horse race called the Melbourne Cup in Australia. It's huge, at least in Australia. What was so interesting about that is that sometimes when you do workload modeling, you also have to understand a lot of the business. I'm personally not a gambler; I don't place bets on anything. That meant I had a lot to learn, because in order to figure out which transactions I was going to script, I had to understand the different betting types. There's a distinction between pari-mutuel bets and fixed-odds bets. With fixed-odds bets, the price is calculated by mathematicians on the team: that's the odds. Those tend to be simpler to test, because as long as that number gets updated in real time, there's no computation involved; the computation is on the part of the people setting the number. But pari-mutuel betting is different, because it takes all the bets, and based on how many people have bet on one horse versus another, the odds change. That requires a lot more processing power. So I had to think about these things and incorporate them into the load test, because the load those two bet types exert is not the same. How many of the bets are pari-mutuel and how many are fixed? Those are not things I ever would have thought of; that's not common knowledge. You wouldn't think it's something you'd need to know for performance testing, but sometimes it is necessary.

Tom: Ideally, someone else has done all the... sorry, we've got a bit of delay between us.

Nicole: Yeah, Portugal and the US. Or it might just be my internet connection; it's cutting out quite a bit. What were you saying?

Tom: That ideally, someone else is giving you the figures that you then need to transcribe into your performance test. I know of one occasion where that backfired, because someone had done the calculation wrong. They'd been given some number of requests per month and were trying to work out what that would be per second: divide by 30 to get per day, divide again by 24 to get per hour, then by 3,600 to get per second, and they missed one of the divisions. So the numbers I was given were just way higher than they should have been, and of course the site didn't handle it. There was this whole "is this actually a realistic test?" discussion, and it turned out not to be.

Nicole: I think most people tend to think that if your test was for more load than the real figure, that's a good thing, because you're erring on the side of caution.

Tom: Well, yes, but you've also wasted a lot of resources that weren't necessary, both in terms of time and actual money for running environments and having them available. Those things don't come free, and they're not cheap either.

Nicole: So one of the things I fear in the execution part is environment contention. Have you run into that as well, other people using the environment?

Tom: Yeah, it can affect your results, especially with the functional testers (it's usually the functional testers, I'm sorry to blame them). They'll be running some heavy batch process, and all of a sudden you see a huge spike in the performance test you've been running repeatedly without any issues, and you wonder what on earth is going on. But I think it's usually the load testers who annoy the functional testers more: you might have scripts creating lots of data, and all of a sudden they can't find their test case in the list of orders anymore, because all these load-generated orders are in there instead. So there can be a bit of contention over environments sometimes. Ideally you'd have your own completely separate performance test environment.

Nicole: The problem, though, is that sometimes there are other forms of performance tests you might want to run in staging or system integration environments. I've definitely been on that side: I trashed an environment and didn't realize there were functional testers who were so annoyed, because everything suddenly became really slow. It's a matter of communication, but communication can be hard. And of course the answer is always "well, why don't we just test in production?", which is a whole other scary thing.

Tom: I've done it plenty, and it's usually okay, because someone else is asking you to do it, and ultimately it's their responsibility if anything goes wrong. The only thing you can do is to be very fast in providing feedback on...
unusual things that are happening in the test. If you suddenly see a spike in response times or errors, you'd better be hovering your finger over the abort button to stop the test.

But there are other ways of dealing with that, too. You could be testing out of hours; I definitely remember many times running load tests at two in the morning with a group of guys, all sleepy, eating pizza and trying to stay awake. But then the really bad things can happen as well. You might suddenly see a problem, stop the test, and the problem is still there on the system. Then people start panicking, "let's just reboot everything and hope for the best," and I've seen production environments not come back up after a load test. It is scary when that happens, but ultimately someone else made the decision; you are just the person having to execute the test. So what can you do?

What's even scarier is that sometimes test environments are hooked up to production components. Sometimes only the UI, or maybe the database, is a test environment, but it gets hooked up to the production payment servers, or something really scary like that. You think you're testing in a test environment, so you're trying to break it; whether you're a functional tester or a performance tester, you sometimes use really strange combinations on purpose, and when that shows up in production, that can be a problem.

I know once I was on a team with someone who thought he was testing in a test environment, and we happened to sit next to the operations engineers, who had these big monitors of signups. He was doing a test data run, creating all of these users, and he didn't know he was creating them in production. So all the ops people were saying, look, there's an increase in traffic, and then marketing was brought in because they were trying to figure out where this traffic was coming from. Yep, it was a load test. But that's not necessarily the load tester's fault either, because he was using an endpoint that, to his knowledge, was going to a test environment, but it wasn't. It's a communication issue, and a lot of the scary things in a performance project, I think, have to do with human factors, things that aren't even related to computers.

You might treat it as a kind of penetration test: hey, I was able to bring down the production environment with a script anyone could have written, if it's a public website, for example. What is it called, white hat hacking?

Yeah, I love that, because sometimes that is what we're doing, right? There are tests where you are trying to bring down an application, but with consent. [Laughter] There's a big difference there. But also, a big part of our job is trying to simulate real user behavior, and that's based on the very big assumption that we are good judges of user behavior, and not just we as testers, but also the business. Sometimes people do things that are completely out of left field, and there's no way you could have predicted them. One thing I love to mention, still on the gambling theme: there was one application for live betting on live racing, but there was another in the same organization with virtually simulated horses. There were no real horses; they were graphically animated horses, which was super cool. The weird thing was that the real live horse race also drove traffic to the simulated horse race. They're completely different applications, so there's no overlap in terms of the architecture, but there was in terms of the load.
Why? Because maybe people didn't realize they weren't real horses; they weren't really sure what they were betting on. That kind of thing is not something you would think to test.

That's true. Or how about something like a sale? You might think, let's only test our site right after the sale goes live, but actually, if you've sent an email or a marketing campaign out beforehand, you should probably also be testing refreshes in maybe the last 30 minutes or the last hour. That could bring your site down before you even launch the thing, because of the anticipation: people are just refreshing and refreshing, and sometimes hard refreshing, so you can't even cache those results. That doesn't seem like rational or predictable behavior, but I've definitely seen it. So how are we supposed to account for the psychology of it?

With the mail shot as an example, I really feel that the tool actually sending out the emails should have some kind of stagger option, so that instead of sending everyone an email in the very same second, it goes out over a period of hours and you have a bit more control over when people are suddenly going to visit your site. I wonder if that's built-in functionality; I guess there is some kind of batch process that sends these emails out anyway, but whether it's configurable to spread that over a period of time, I don't know. With certain apps, if you tell everyone, hey, it's going to be available at this very specific time, people are going to be sitting there refreshing, and then all of a sudden everyone is going to want to navigate there at the same time. That's a recipe for disaster quite a lot of the time. So that's a very aggressive ramp-up that you have to design into your test to simulate that kind of behavior, and you might not necessarily think that as a performance tester you'd be talking to the marketing department, right? [Laughter]

Yeah, and the marketing people are just doing their job too. They're not aware that it could drive load at the same time and that that might be an issue. That's something you have to think about when you're thinking more holistically, outside of just the tech of it all. And then, in your example of sending out emails, what about customer support? If there's anything controversial or potentially broken, like if you're sending an email about a new feature, you should let customer support know that they may experience increased demand as well, because they might only have two people rostered on at that time. There's also a whole thing about scheduling people to make sure there's going to be someone to handle those requests.

I think COVID has given us lots of examples of these interactions between human psychology and software performance. In the Netherlands we have this corona line, and when it first started, well, they had very early initial problems with the application, but even after they solved those and people were able to book appointments for the COVID-19 test really quickly, the problem shifted to traffic, like actual car traffic, because people couldn't get to their appointments in time. There were so many cars and not enough parking. Is that a performance testing issue? Maybe not traditionally, but these human factors affect how users actually get to use your application, so maybe it's worth considering.

So, just to start wrapping up here, because I could keep talking to you about this: what's one small tip that you can share, that you use or have used in the past, to help you get over
some of these common tester problems or fears?

I guess it would be not to take anything personally. You're being put under a lot of pressure by other people, but you can't take it personally. People are under a lot of pressure when it comes to performance; performance testing is usually done so late in the day that it's just before release day, or weeks before release day, and sometimes you're going to be the deliverer of bad news. That's perfectly OK; no one has the right to be upset at you for doing your job properly.

In terms of the fear of never having done load testing before, that kind of unfamiliarity with what you're doing: just pay attention to details. That's really one of the key things with load testers. Having an eye for detail will allow you to notice really small things that might seem unimportant but chances are could matter, like not correlating a particular value. You have to be quite accurate in what you do. And some things are outside of your control when you're testing, so just be aware of them. Not necessarily a small tip, but a few small tips, maybe.

Yeah, there's almost like a tester's hat that you put on, where you look at everything with a critical eye, and I guess the goal is to critique without making it personal, right? You're not criticizing the person; you're critiquing code or processes or even the infrastructure. But do you find that you have difficulty taking off that hat outside of work? I feel like I bring it with me in other aspects of my life.

There are ways you can explain something or say something to someone in a nice way, even though what you're saying is loaded with other things. If someone's written a poor piece of software, you can just point out, "hey, this request took really long to respond; I thought you should know, because that might become a problem later on," instead of saying, "hey, you wrote this really terrible piece of code, what are you doing?" There are ways you can communicate that are more likely to get you the intended response.

Right. And just like testers, developers are people too. They are also sometimes given very big tasks with very short time frames, and they usually know they haven't tested everything, which isn't possible anyway; testing can never be completely exhaustive. So I think of it not as a matter of "you're a crappy developer, what are you doing?" I think of it as, "hey, you probably thought this one thing was not a big deal, and I wouldn't have either, but I talked to this person from the business, and she said that actually a lot of people do it, so maybe we could fix it." Sometimes it's a matter of priorities not matching up, and priorities change as the project goes on. So I think there are bugs that arise that are either nobody's fault or everybody's fault; it's usually not just one person making the mistake.

The tip I wanted to talk about is a really small, concrete one that's not going to solve all the problems, but one thing that gets a lot of stress out of my mind is keeping a test execution log. When you're testing, especially if you're still in that shakeout phase where you're testing quite often, just maybe not at high load, I really like to have a test execution log, which is just a list of all the tests you've run, where you put down the number of users you had, the environment you're running on, the scenario you tried, and how it worked out. Having that, even if it's just for you, is really useful, because then you can look back
and see, OK, I did try that, and this was the problem with it, so I should go and do something else now. But also, if you publish it, put it on some shared drive, or maybe include it in your email updates to stakeholders, I think communicating it out to everybody who needs to know is a really good way of making sure everybody's on the same page: yes, I was testing from this time to this time, and here are the usual times that I test. It kind of underscores the point about environment contention: being transparent about when you're testing, what you're testing, and what the results were makes your job easier, because people know you're testing at that time, and if you had any issues, calling them out in public means you're more likely to get those issues addressed, or to get help addressing them yourself.

Yeah, taking notes. That's a very good tip as well. I think we've been here before, haven't we?

It's funny, because I purposely tried not to talk about taking notes, but now that you've mentioned it, that is kind of taking notes, isn't it? There's a lot about common tester fears that taking notes addresses for me. Sometimes you get so lost in what you're doing, and you look up and four hours have gone by and you've just been testing the same thing, and you start to think: did I already test that, or was that yesterday, or was that another thing? Did I test it with a different user? You just get lost. Just write it down, and that way you know what you tested, and everybody else does too. You said in a comment to me today that you were trying to figure out if you'd done something, but it was pre-Obsidian, so you just didn't know.

Yeah, I was 19. I mean, try remembering what you did when you were 19.
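The test execution log described here doesn't need any special tooling. As a sketch of the idea, one row per run in a CSV works fine; the field names below are just one possible layout, not a standard, and in practice you might append rows to a shared file or wiki instead of printing them:

```javascript
// Minimal test execution log: one CSV row per test run.
// Field names are illustrative (users, environment, scenario, outcome, etc.).
const fields = ['date', 'environment', 'scenario', 'vus', 'duration', 'outcome'];

function toCsvRow(entry) {
  // Quote every value and double any embedded quotes, so commas
  // inside free-text notes don't break the row.
  return fields
    .map((f) => `"${String(entry[f] ?? '').replace(/"/g, '""')}"`)
    .join(',');
}

// A hypothetical run record, the kind of thing you'd jot down after each test.
const run = {
  date: '2021-10-29T14:00Z',
  environment: 'staging',
  scenario: 'checkout-spike',
  vus: 200,
  duration: '10m',
  outcome: 'p95 degraded after 150 VUs; DB CPU saturated',
};

const header = fields.join(',');
console.log(header);
console.log(toCsvRow(run));
```

Even this much is enough to answer "did I already test that, and what happened?" weeks later, and sharing the file doubles as the transparency about test windows mentioned above.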
Unless it was a photographic moment, you probably don't have a record of it, or of what you thought or what you learned. It's pretty much gone; it's just broad strokes. Take notes, people, take notes.

So, two things I want to end on. ObservabilityCON is not next week but the week after, November 8 to 10, so I'm going to put a link in the chat here. I'm going to be speaking at it, and so are our CEO, Robin Gustafsson, and one of our software engineers, Olha Yevtushenko. This is one to watch, because we're announcing something that's pretty big, and I'm very happy about it. I know Tom is a fan of it as well. We can't really talk about it yet, but it's free to register, so go and listen to us announce it live. And do people actually say it that way, did I just say "olly"? O11y, yeah. [Laughter]

Another thing is that we're hiring. Come join us if you think it would be cool to work with people like me and Tom. We have a few positions open right now: Python engineer, frontend engineer, k6 developer advocate, and we're also looking for a product marketing manager. Just a little plug: go to k6.io/jobs.

That's it for me, and as Floor says, happy Halloween, people and monsters alike. Thank you for coming on, Tom.

Thank you, it's always great to have a chat with you. Thank you everybody for listening, have a good weekend, and happy Halloween!
Info
Channel: k6
Views: 99
Id: jVa3noSOyg8
Length: 61min 6sec (3666 seconds)
Published: Fri Oct 29 2021