How to load test a GraphQL API built with Hasura using k6 (k6 Office Hours #28)

Captions
Nicole: Hello and welcome to another k6 Office Hours. I'm Nicole van der Hoeven, and I'm joined today by two of my colleagues at k6.

Simme: I'm Simme, as always.

Tom: And I'm Tom, for the second time in two weeks.

Nicole: Yay! Simme is back from his vacation, and Tom graciously agreed to come on because we wanted to talk about a topic he's personally been able to get some hands-on experience with. We were supposed to have a special guest today, but he hasn't joined yet, so we'll see — he might come on halfway through the stream. But first, let's talk about what you do, Tom, for people who didn't see the first episode.

Tom: Yeah, sure. I work on the professional services team, so I help people with various aspects of using k6 — from talking about how it works, how to run it, training, that kind of stuff, all the way to writing scripts for our users and actually running load tests for them. We've got a lot of experience in that, and some of our users won't have had any experience and are in a hurry to get something tested. That's where we can come in and help out.

Nicole: You're probably the one who uses k6 in the most production-like environments out of everybody, because you see different setups with different companies all the time. That's a lot of exposure.

Tom: Yeah, and I guess that's why I'm here now as well — I came across Hasura in a recent project.

Nicole: So that's today's topic — although you're mainly here because we want to have awesome people on the show, not just because of a particular topic. I think the topic of load testing databases is always a little tricky, because not all load testing tools support it. How would you go about doing this without using Hasura?

Tom: I guess it's more indirect. Even with Hasura, the underlying database is hidden behind the scenes — you're just interacting with it through this GraphQL language. I'm not sure "language" is the right word —

Nicole: I think it is.

Tom: It's a bit like SQL in that sense. When you interact with it, you're sending messages through a web service that then handles the communication with the database — by default, I think, a Postgres database — and it lets you get data out in a variety of ways. Ultimately, the goal is to remove the need for a long sequence of client-server requests. Today, with microservices and lots of REST APIs, you might find that a single user action results in 10, 20, 30 HTTP requests, each one getting data from a different microservice, and those round-trip times add up, even if the services respond very quickly. Part of the reason Hasura and GraphQL exist is to let you get everything you need in a single HTTP request-response, and you can design the query to grab exactly the data that you're after, which is cool.

Nicole: Without Hasura, without using GraphQL, we do also have another solution — let me share my screen — and that's an extension that one of our other colleagues, Ivan, created: xk6-sql. We've done a few episodes of Office Hours about xk6, but it's basically a way to extend k6 with functionality that isn't built into the core, for performance reasons — we want to keep k6 as streamlined and clean as possible, but we also want you to be able to build in things you need for your particular situation. This is a good example: Ivan built it specifically so you could write k6 tests against SQL databases. It supports a few different databases, including Postgres. I'll put the link in the chat so people can go to the repository. Once you build this custom version of k6, you'll be able to test against a database directly, and he has some sample tests in the repo. It had occurred to me to create a script for this beforehand, but, well, life happened, so instead I'll show you what it could have looked like — almost as good, right? You would import sql from the extension, and the cool thing about this example is that it uses the setup and teardown functions. For people who aren't aware, setup and teardown are special functions in k6: setup runs once at the beginning of the test, and teardown is the opposite — it runs once at the end. The idea is that only the default function is iterated on, and you leave the application or database in the same state you found it, which is more sustainable for repeated tests. So you could do something like the following, without using GraphQL or Hasura.
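A minimal sketch, based on the xk6-sql README — the driver, connection string, and table here are illustrative, and the extension has to be compiled into a custom k6 binary with xk6 first:

```javascript
import sql from 'k6/x/sql';

// Open a connection in the init context (connection string is illustrative).
const db = sql.open('postgres', 'postgres://user:password@localhost:5432/bikes?sslmode=disable');

export function setup() {
  // Runs once at the start of the test: prepare the table we will query.
  db.exec(`CREATE TABLE IF NOT EXISTS bike_brands (
    id SERIAL PRIMARY KEY,
    name VARCHAR NOT NULL
  );`);
  db.exec(`INSERT INTO bike_brands (name) VALUES ('Brand A');`);
}

export default function () {
  // The iterated part of the test: a simple parameterized read query.
  const results = sql.query(db, 'SELECT * FROM bike_brands WHERE id = $1;', 1);
  for (const row of results) {
    console.log(`brand: ${row.name}`);
  }
}

export function teardown() {
  // Leave the database as we found it, then close the connection.
  db.exec('DROP TABLE bike_brands;');
  db.close();
}
```

You'd then run it with the custom binary produced by xk6, e.g. `./k6 run script.js`.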
What do you guys think?

Simme: Yeah, for sure. Hasura in particular, like Tom said, lets you combine different Postgres databases or whatever and have that presented as one graph. But the whole point of GraphQL as such — at least in my experience — is to be able to combine multiple APIs that you already have: creating a graph and saying, okay, here are all the resources I already have in my system, combine them into one graph that you can query. So it serves a somewhat different purpose than the SQL extension, given that the extension clearly works against one particular database. But other than that, it's definitely an option, depending on how your application is structured.

Nicole: I think this approach only works if you have just the one thing to test and you're willing to make separate scripts, but that quickly gets unwieldy, right? The whole purpose of GraphQL is to unify that, so ideally you have just the one endpoint to figure out and test. So what does Hasura do differently — what does it add as a service?

Tom: That's a great question. I can only comment on my own experience having used it. It's got a really cool API explorer — I think it's also known as the console — that lets you design your GraphQL queries within the browser and run them. The fact that you can run them right in the console is actually quite useful for k6, because you can see the HTTP request that takes place when you execute your query. You can record that with the k6 browser recorder, or you could just manually look at the request and write your script from scratch using the data you see. It's like a little playground. You can probably show it, if you want.

Nicole: Sure — I've got it up anyway. Or did you want to share it?

Tom: No, go ahead.

Nicole: I thought this was a pretty cool console for Hasura. When I ran the Docker container and started it up, this just came up in my browser, and I can start to form queries. There's not that much data in this database yet, but you can still select, say, "view bike brands by id," run it, and see the response right there. It's just a nice interface, and when you get more complicated queries — with wheres in there and whatnot — it's nice to be able to see this part as well, because it's what you can put in your k6 script.

Tom: The interesting thing here is that when you send this request, that whole query appears as a string in the POST data — actually, it's a string within a JSON object that is then stringified.
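Concretely, the body of that POST is one JSON object whose query property holds the entire GraphQL document as a string — something like this sketch, where the table and fields are illustrative and /v1/graphql is Hasura's default GraphQL endpoint path:

```javascript
// What gets sent to Hasura's /v1/graphql endpoint:
const payload = JSON.stringify({
  query: `query {
    bike_brands(where: { id: { _eq: 1 } }) {
      id
      name
    }
  }`,
});
// i.e. a single string body: {"query":"query {\n  bike_brands(...) {...}\n}"}
```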
Nicole: Do you want to show a script? And we also have a question from Stefan Kolke — correct me if I'm mispronouncing your name: does Hasura aggregate multiple endpoint schemas into one aggregated schema?

Simme: That's kind of what I was expecting as well. Correct me if I'm wrong, Tom, but I think the point of adding Hasura on top of GraphQL, rather than working with raw GraphQL, is that you can just say: I have these databases, these remote schemas, whatever — combine them — and it will set up all the resources you need to be able to query that. As opposed to raw GraphQL, where you would have to do that yourself, adding them one by one and providing all the logic needed to make those queries possible. Is that correct, Tom?

Tom: I think you know more about GraphQL than I do — my first experience of it was through Hasura, so for me the two are like the same thing. But yeah, that sounds about right. It's designed to make it easy.

Nicole: It's unfortunate that we don't have our guest on, who knows all about these things, so it's a bit of the blind leading the blind here, but we're trying our best. If you have questions, send them in — keep us occupied with something we do know. You should be able to share your screen now, Tom, and I'll add it to the stream. Naveen Kumar asks whether "Hasura" is a Sanskrit word — I actually googled that because I got curious as well, and yes, apparently it's taken from "asura," a Sanskrit word for divine. I'll send a link with more information in the chat.

Tom: Cool. So I've got pretty much the same interface that Nicole was just showing.

Nicole: Could you zoom in a little bit?

Tom: Yes, and I'll also zoom in Visual Studio Code here.

Nicole: I love that this is all Electron, so it has the same zoom functionality as the browser. It's pretty amazing.

Tom: I think there are also dev tools you can access within Visual Studio Code — I've seen it at one point. I thought it was possible to bring them up and inspect elements as you would in the browser.

Nicole: Did not know that.

Tom: All right, so this is just the example code base. I've gotten into the flow of some things I always tend to do. I'll always have a main.js script, which is the script I run with the k6 command, and a utils file that contains a couple of functions I use very frequently, in pretty much every project these days. One is this checkStatus function: it checks the status code of all received responses and makes sure it matches what you're expecting. You provide the expected status and then two booleans. One controls whether to print the response body, if there is one, so we can see the body when we get something unexpected — that helps with debugging, because sometimes there's useful information in there. The other controls whether to fail on error, which calls k6's fail function and stops the virtual user from proceeding at that point. That's useful because without something like this, the virtual user may continue on and cause more errors, and then it gets difficult to know which error was the original one you need to worry about.
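A hedged sketch of what a helper like the one Tom describes might look like — the name, signature, and messages are illustrative reconstructions, not his actual project code:

```javascript
// utils.js -- illustrative reconstruction of the checkStatus helper.
import { check, fail } from 'k6';

export function checkStatus(response, expectedStatus, printOnError, failOnError) {
  const ok = check(response, {
    [`status is ${expectedStatus}`]: (res) => res.status === expectedStatus,
  });

  if (!ok) {
    if (printOnError && response.body) {
      // The body often contains useful debugging information.
      console.log(`Unexpected response body: ${response.body}`);
    }
    if (failOnError) {
      // Abort this iteration so follow-on errors don't bury the original one.
      fail(`Received status ${response.status}, expected ${expectedStatus}`);
    }
  }
}
```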
Tom: And then I have a folder for each different type of... I'm not sure what to call them.

Nicole: Queries?

Tom: Request types, sure. So here are my queries, and I just put together the subscription one as well — I think that's working too — but I'll start by showing the basic query type, which is the equivalent of a GET request. It isn't actually a GET request, though: everything you do, even a pure read operation, is still a POST request.

Nicole: Sorry to interrupt, but can you talk about the differences between queries, mutations, and subscriptions?

Tom: Sure. A query is a read operation — you're not modifying any data, you're just selecting something from some table. But you can parameterize it: in this case the query accepts an id parameter, and we've got a where clause that uses it, so we're selecting a bike brand by its id. A mutation, on the other hand, is like your PUT or POST requests — it modifies data in some way; you're mutating the data. And a subscription is essentially you asking for updates on a particular something, whatever it might be. One way it's used is in a chat application, where you might subscribe to a chat room and the server can then push chat messages to you at will, whenever they happen. I believe it uses WebSocket by default, but it might also support some of the pre-WebSocket techniques like long polling, or maybe even server-sent events — I'm not too sure about that one. But if WebSocket is available in the browser, I think it defaults to that, because it's the fastest of those.

Simme: We also got a question from Chuang Tran — sorry if I butchered that: could Hasura work with a schemaless database such as MongoDB?

Nicole: I actually had to look this up, and what I saw is that this support is coming soon. I've just posted a link in the chat — you can register for early access, but it's still in private beta.

Simme: I was going to say the exact same thing; I researched it as well. But I'm thinking about the different query types, Tom: if, for instance, you have a subscription for bikes that you're currently listening on, and I were to do a mutation at the same time, you would immediately get that data fed to you as an event, I guess, that you could react on and do something with in your app?

Tom: Yep, that's my understanding, which is pretty good — a system designed like that sounds very efficient. I like WebSocket a lot: being able to receive updates from a server at will, instead of having to poll the server for updates, definitely makes for very responsive applications. Should I continue going through the script?

Nicole: Sure.

Tom: One thing I should point out: I'm using httpx, a library we provide as part of our jslib collection. httpx is useful because it lets you set, for example, a base URL, so I don't have to keep repeating that portion of the URL all over the place — if I want to change the base URL, I make the change in one spot. One of the other benefits is that it can set global headers on all of the traffic: all of the requests will go out with a Content-Type of application/json, so instead of repeatedly adding that code for every request, I call this addHeader function once. There are also clearHeader and removeHeader functions, I believe. It's very useful for cutting down on code repetition. So, in my export default function, let's look at getBikeBrandsById — that'll be in this file over here. Here's the actual query itself; it's pretty much copy-pasted from the console UI, which is handy. I'm writing it as a template literal string — I'm still learning JavaScript as I go along, but I think that's what they're called — so you can have new lines in there and it maintains the indentation, which makes it easier to read. The alternative would be to write it out as one long string with newline characters, and that looks very messy; with template literals it's quite easy to read. I'm feeding in the parameter with this interpolation syntax, which at runtime translates to whichever id we passed in. Then `session` — this is normally where you would see http.post, but since I'm using httpx, it's `session`, the variable I created from httpx. The endpoint — the actual URL path — is the same for all GraphQL requests; I'll come back to that in a bit. We're stringifying this whole JSON object that gets sent in, which in this case just consists of a query property containing the actual query string. I add a tag — again, I'll come back to that — and then I'm checking that the status is the expected 200. There's also a debug statement to print the response body. And then, good practice: I'm sleeping for a random duration between minimum and maximum values, to pace the virtual user and make it more realistic — obviously, if you don't need that, you can leave it out. I've got my pauseMin and pauseMax at two to six seconds in this case. But that's basically it.
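Pieced together, the pattern Tom walks through might look roughly like this — the host, table, and function names are illustrative, not his actual project code, and the helper is the checkStatus sketch from earlier:

```javascript
import { sleep } from 'k6';
import { Httpx } from 'https://jslib.k6.io/httpx/0.0.6/index.js';
import { randomIntBetween } from 'https://jslib.k6.io/k6-utils/1.1.0/index.js';
import { checkStatus } from './utils.js';

// Base URL set once; every request reuses it. /v1/graphql is Hasura's default path.
const session = new Httpx({ baseURL: 'https://your-hasura-host.example.com' });
session.addHeader('Content-Type', 'application/json');

const PAUSE_MIN = 2; // seconds
const PAUSE_MAX = 6;

export function getBikeBrandsById(id) {
  // Template literal keeps the copy-pasted query readable; ${id} is
  // interpolated at runtime.
  const query = `
    query {
      bike_brands(where: { id: { _eq: ${id} } }) {
        id
        name
      }
    }`;

  // All GraphQL operations share the same path, so tag the request with a
  // unique name to tell operations apart in the results.
  const response = session.post('/v1/graphql', JSON.stringify({ query }), {
    tags: { name: 'GetBikeBrandsById' },
  });

  checkStatus(response, 200, true, true);
  // console.log(response.body); // debug output
}

export default function () {
  getBikeBrandsById(1);
  // Pace the virtual user with a random think time between 2 and 6 seconds.
  sleep(randomIntBetween(PAUSE_MIN, PAUSE_MAX));
}
```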
Tom: Now, the reason I added this tag has to do with the fact that all of the requests you make to Hasura share the same endpoint. All of them go to — let me find one — yeah, this is the actual URL, right here. So there's no way to distinguish between the different queries or mutations — or even subscriptions, although the subscriptions will be using WebSocket. Your mutations and queries all have the same URL if you look at them in your results after the test. So tagging them and giving each a unique name is actually quite important: that way you can tell that this was the request for getBikeBrandsById and not some other one.

Simme: We got another question from Stefan, asking whether it would be possible to read a query or mutation from a .graphql file. I just wanted to comment quickly on that: it would definitely be possible. You can use k6's open function to read whatever file you have as text, and then you'd have to use something like a regex, I guess, if you wanted to interpolate any parameters. But reading from a .graphql file is definitely possible.

Tom: I haven't actually seen the format of those files. It would be interesting to see whether it's literally just something like this query, or whether it contains other metadata that might be useful if we ever had GraphQL support in an extension that reads .graphql files and does some extra things.

Simme: It's exactly the same as what you have. But given that the file extension is different, you'd probably get syntax highlighting and things like that, which we lack now, so it could definitely be beneficial to do it that way. Although it wouldn't work as neatly if you wanted to do arguments the way you're doing them in your query.

Tom: Yes, and thank you for mentioning that, because there's another way of doing this that I wanted to show. You're right that I'm modifying the query string here, which makes it more difficult to use files like those. So there's another approach in this bikeModels script, where I'm using variables. It's a similar sort of query to the other one, but this time we're querying the bike models, and I've made it so that I'm querying by ids, where ids is actually an array of multiple ids — however many you want. In this case, there's no string concatenation going on in the query string itself; instead, there's this ids variable that's been declared, and when that's present in the query, you populate a variables property in the JSON that gets uploaded to the server. The query gets passed in as normal, but since it contains a variable, you also pass the variable's value down here.

Simme: Then it would definitely work with .graphql files as well, because you could use that approach to substitute whatever values you want to feed the API.

Tom: Yeah, I think this is probably the preferred approach: you don't have to touch the query at all — you can literally copy and paste the ones being used by the application itself. When you design this properly, you'll be using variables like this instead of baking values into the operations themselves.

Simme: Makes sense.

Tom: And that's really all that's different between the two — just different ways of doing it.
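Combining both ideas — reading the document from a .graphql file with open() and passing values through GraphQL variables — might look like this. The file name, operation, and types are illustrative:

```javascript
import { Httpx } from 'https://jslib.k6.io/httpx/0.0.6/index.js';
import { checkStatus } from './utils.js';

// open() runs in the init context and reads the file as text. The .graphql
// file contains only the operation itself, e.g.:
//   query getBikeModelsByIds($ids: [Int!]) {
//     bike_models(where: { id: { _in: $ids } }) { id name }
//   }
const query = open('./queries/getBikeModelsByIds.graphql');

const session = new Httpx({ baseURL: 'https://your-hasura-host.example.com' });
session.addHeader('Content-Type', 'application/json');

export default function () {
  // No string concatenation in the query: values travel in "variables".
  const payload = JSON.stringify({
    query: query,
    variables: { ids: [1, 2, 3] },
  });

  const response = session.post('/v1/graphql', payload, {
    tags: { name: 'GetBikeModelsByIds' },
  });

  checkStatus(response, 200, true, true);
}
```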
Nicole: You should probably share this somewhere — do you already have it in a repo?

Tom: No, I put it together rather quickly.

Nicole: That's all right. If you do end up putting it in a repo, I can add it to the description of the video later. Can you also show a subscription running?

Tom: Of course — we can see the whole thing running, if you want. I was able to get some pretty high concurrency on this.

Nicole: He tried it with 5,000!

Tom: Yeah — I tried it with 10 users: five-millisecond response times. I was like, what? That's really low. I tried it with 100, and it flew through. At 5,000, I think there was one test where it struggled at one point; I didn't really know what the problem was — it looked like some kind of DNS issue, actually. But it runs, to some degree. I'll just run it in the cloud — it's just a demo app, so it's okay to potentially bring it down. It's running on some kind of Linux instance, I think only dual-core with 8 GB of memory.

Nicole: Wow, that's pretty impressive then.

Tom: It still manages to take on quite a lot of load. While this starts up, I'll show you the previous run. This was a thousand virtual users with a three-millisecond response time — the response time line didn't even budge. I'm guessing there might be some query caching going on, since I'm running the same query over and over again, which isn't that great — it'd be better if it was randomized. And this is the previous 5,000-virtual-user test, where there was a drop in request rate and response times increased, so something definitely went wrong there. But 1,250 requests per second is really good to see.

Nicole: I wonder what it would have been like if you also had a POST in there, actually mutating the data.

Tom: Yeah — and having subscribed WebSocket users receiving those updates whenever a mutation takes place. That would be a very interesting test. It's still doing pretty well now, although I think I've got a tiny error in my script.

Nicole: Some errors coming through as well — "dial i/o timeout"?

Tom: I think that's actually a Go error, something to do with DNS. I haven't quite checked it out, so I'm not sure why it's happening, but the test is going anyway.

Nicole: It seems like it could have crashed, right? Because didn't the graph look like — oh, there we go. Here's a question for you as well: "Is the VM in a load zone dedicated or shared?" You said it was AWS — I imagine it was a dedicated one.

Simme: I think the question is whether the VMs or containers we use for load generation are dedicated.

Nicole: Or at least that's how I interpreted it, given that it uses the term "load zone," which I don't think AWS has. So yes: if you run it on k6 Cloud, we do use AWS — you're essentially using our AWS account. You can choose the load zone, but any one you use is dedicated, so you can be sure that any load generators being spun up are only for your use at that particular time. Otherwise it gets really difficult: if you share infrastructure at all, there are so many variables that you can't really be sure whether the results you're getting are being influenced by somebody else's test using up the CPU or memory of that box. So no, it's absolutely dedicated, from that perspective.

Simme: And also from a security standpoint, sharing would probably not be a good idea, in terms of keeping you as a customer safe with your data. What if you somehow got access to each other's environment variables or something like that? That would be devastating for most people. So we try to keep them separated — or rather, we are keeping them separated. We're not trying; we are.

Nicole: (unmuting) I wanted to ask about the peaks here — what are we seeing?

Tom: That's a great question. This is a demo app, so it's not like it was necessarily built to be performant.

Simme: Could that have been the errors we got before?

Tom: Possibly — it just ran out of handles, got sad, and refused to answer for a while until it made its way back.

Nicole: What were the checks that you had, again?

Tom: They're checking the status code — these are just checking that the response was an HTTP 200. For the most part we're getting our responses back, except on a few occasions.

Nicole: What is the failing response, though — an HTTP error?

Tom: It might be because I only just put the subscription script together and hadn't really tested it yet — I probably should have kept it out of the test.

Nicole: No — it's actually excellent that you already have a subscription script, given the short notice.
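For readers who want to script a subscription themselves: k6 can speak WebSocket via the k6/ws module. A rough sketch, assuming Hasura's WebSocket endpoint and the subscriptions-transport-ws message protocol it supports (operation, fields, and host are illustrative — verify the protocol your Hasura version uses):

```javascript
import ws from 'k6/ws';
import { check } from 'k6';

export default function () {
  // Hasura serves subscriptions over WebSocket on the same /v1/graphql path.
  const url = 'wss://your-hasura-host.example.com/v1/graphql';
  const params = { headers: { 'Sec-WebSocket-Protocol': 'graphql-ws' } };

  const res = ws.connect(url, params, (socket) => {
    socket.on('open', () => {
      // Handshake first, then start the subscription.
      socket.send(JSON.stringify({ type: 'connection_init', payload: {} }));
      socket.send(JSON.stringify({
        id: '1',
        type: 'start',
        payload: { query: 'subscription { bike_brands { id name } }' },
      }));
    });

    socket.on('message', (msg) => {
      // The server pushes "data" messages whenever a mutation changes the result.
      const event = JSON.parse(msg);
      if (event.type === 'data') {
        console.log(`update received: ${msg}`);
      }
    });

    // Listen for 10 seconds, then stop the subscription and disconnect.
    socket.setTimeout(() => {
      socket.send(JSON.stringify({ id: '1', type: 'stop' }));
      socket.close();
    }, 10000);
  });

  check(res, { 'ws status is 101': (r) => r && r.status === 101 });
}
```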
Simme: By the way, there's a reply from Mark regarding the load zones that we might want to highlight. Mark says: "The stability of the load gen is super important in a test — overutilization or saturation of CPU or memory will skew results." Absolutely — that's basically what you said, Nicole: with shared instances, your results would be skewed whenever anything else running on the machine weighed it down. And I'm thinking of something you taught me way back, Nicole, that I still think is super reasonable: if possible, have a threshold in your test that makes sure your CPU utilization stays below a certain level, so you detect any CPU starvation or saturation directly and don't have to correlate your skewed results with what happened on the server at that time. You get hard facts: okay, bam — at this point my CPU was saturated, so that's why I'm getting lower throughput than expected.

Nicole: I always like to do that, because there are so many things you might miss when running a load test. You can easily go down the rabbit hole of chasing errors, or what looks like performance bottlenecks, and forget the most obvious things, like the CPU and memory utilization of the load generator itself. It's easy to focus on the application servers and the resources on that side, but the load generation side should never be an issue. If you overutilize it, it's kind of like the observer effect in quantum physics, where the act of measuring the thing — of testing the thing — becomes what brings it down, or becomes the performance bottleneck. And that's absolutely not what we want; we want really clean results.

Simme: Mark has a follow-up: opinions on what that value should be for CPU or memory utilization?

Nicole: I'd go with 80 percent.

Simme: I think you could probably go to 90 or 95 without issues as well, but 80 should be safe, because you have to consider how big each bump is. If you put it at 99, then by the time the threshold trips you may already be starved or saturated — you don't know what each increment of utilization will be. Leaving some margin keeps you on the safe side, rather than being saturated without detecting it.

Nicole: I think it also matters when it happens. For me, if it goes over 80 percent at the beginning of a test, I'll let that slide, because a lot happens at the start — scripts being transferred over, connections being established. I'll forgive a blip at the beginning. But if it's sustained throughout the test, continually higher than 80, then I'd worry. Is that what you would do as well, Tom?

Tom: Yeah, pretty much. One of the useful things with k6 Cloud is that you can add utilization charts to your analysis tab, so you can keep an eye on it there. Obviously, if you're running on your own infrastructure, you need to figure out other ways to monitor utilization. But anything above 75–80 percent is when your CPU is probably, on some occasions at least, out of time, and you might see some impact on your response times.

Nicole: And on k6 Cloud you can see the thresholds there too: if you've set thresholds for the load generator CPU and memory, those will get called out as well.
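A sketch of the kind of threshold Simme describes. The load_generator_cpu_percent and load_generator_memory_percent metric names are what the k6 Cloud documentation lists for exactly this purpose — treat them as an assumption and check your version; they only exist for cloud runs, so on your own infrastructure you'd monitor utilization another way:

```javascript
export const options = {
  vus: 100,
  duration: '10m',
  thresholds: {
    // Fail the test if load generator CPU or memory utilization climbs too
    // high, so saturation of the load generator is caught directly instead of
    // showing up later as mysteriously skewed response times.
    load_generator_cpu_percent: ['value<=80'],
    load_generator_memory_percent: ['value<=80'],
    // A typical application-side threshold alongside, for comparison.
    http_req_duration: ['p(95)<500'],
  },
};
```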
Tom: Oh — this is now happily running at 5,000 users, with a tiny impact on response times, jumping between three and eight milliseconds.

Nicole: Oh no — actually, that's still really impressive.

Tom: Well, something just broke anyway. Maybe I spoke too soon. I'm not sure the server is supposed to handle 5,000 users doing things every five seconds, or whatever it was.

Nicole: So what was your experience when testing this in real life? I know you can't divulge confidential client information, but maybe you can talk in more general terms about what you were testing.

Tom: Sure. We didn't really know much about the inner workings of the thing that needed testing, so something like this Hasura console was really useful, because all of the queries, mutations, and subscriptions written by the customer were available to play around with — much like these bike brands, bike models, and bike types here. That gave us everything we needed to create scripts from what they'd built. Then it's just a case of working out how frequently the different queries are called in the app: some might be called once per user per day, others a lot more frequently. So there's still some effort in designing a realistic test, but at least you have the full list of all the operations in this UI, and being able to press the play button to execute one in the browser, see the results, and play around with the query makes things very easy. It's been a great experience, actually — I quite like Hasura.

Simme: It gets me thinking about the first time I stumbled upon Swagger and Swagger UI. I was like, wow, this is the future of testing, because now everything is so visible. This is basically the same thing, but for GraphQL APIs. Really nice — I like what I'm seeing.

Tom: It also has this Data section, which lets you see the database, and you can write SQL — it's maybe not working for me right now, it's looking a bit strange, but at least for the customer I was working with, you could see their database here, write your own SQL statements, and see all the data that's in there. I think you can even insert rows through this UI, which makes managing test data very easy. But I've only scratched the surface, really — the API and Data tabs. I haven't looked too deeply into the other sections, but I'm sure they're all very useful.

Nicole: Hopefully we'll be able to schedule our friend from Hasura for some other time, so we can dive deeper into the internals of the actual tool.

Simme: He'll be like: this is how you actually use it — you've been doing it wrong this whole time.

Nicole: He might have said that if he'd seen this, but I think we got the idea across. The cool thing about this is the alternative: if you were trying to test something like this end to end — in this case it looks like some sort of e-commerce app that sells bikes, where you can filter, go to different product pages, and view the information on each model — you could do it from the site itself. The good thing about this approach, though, is that you're able to zoom in on one part; you're isolating it, so you're not touching anything other than the database. You could do this earlier in the development cycle: if something hasn't been fully integrated with other components yet, this is a great way to already be doing some performance testing, even before the system integration phase of a project.

Tom: Yep, that's true.

Simme: I also think it's really cool from the perspective we talked about at the beginning: it allows you to combine different data sources or APIs into one single schema or graph. For example, if you're really into DevOps and you've shifted left into small cross-functional teams, and you build this bike shop, you might have one team solely responsible for the inventory of bikes in your warehouses, one team responsible for services related to cost, and one for the actual product registry. You could combine all three sources into a unified API — able to do queries and mutations across all three — while still allowing each team to operate and release their services independently. The future of computing! No, but seriously, it's also significantly easier for a performance tester: if you had to figure out a different way to test each component and each new database, that would get old really quickly.

Nicole: And what's even better, if we did have something like this — if we were going to switch businesses and sell bikes instead — is that we should also have it running in some sort of CI/CD pipeline, so that when changes are made to the database, like migrations, a load test also runs, and you know whether what you've changed has affected performance at this level. We do have some CI/CD integrations — Simme, do you want to talk about the Azure one?

Simme: Yeah, sure — I just need to share my screen.

Nicole: You don't have to demo it.

Simme: No, I just want to show it a little bit. Feeling brave today.

Nicole: We recently did an Office Hours with someone from GitLab who showed how to integrate k6 with GitLab, and I think I've also done a video on how to do it with GitHub Actions, but there's a whole bunch of integrations, and Simme is going to show the Azure one that he created.

Simme: The thing with Azure — or rather with Microsoft — is that we have two different offerings here. We have the Azure Pipelines extension, which lets you easily run your tests as part of your CI, and we also have an extension for Visual Studio Code that lets you run your k6 tests locally, directly from VS Code — it's almost like having an integrated terminal: you bring up the command palette, press "run k6 test," and your test runs directly in VS Code. But for the Azure one, let's look at the listing in the marketplace. The point of this is to make it as easy and efficient as possible to run a k6 test, whether in the cloud or locally on your CI runner. To get started, you install it to your Azure account just by clicking this button, and then you add a step for it in your pipeline definition. In this case we're running a local load test, using the JavaScript file — your k6 test script — as the input, so that's what gets executed at this step. If it fails, for a threshold reason or something like that, your CI workflow also fails, giving you a red light and prompting you to look at what happened. You can do the same thing with the cloud: you add the cloud parameter, flip it to true, add your k6 Cloud token to the pipeline's secrets, and run it again — it will use the cloud resources to do the same thing. And just as with regular k6 runs in the CLI, you can add arguments, so all the arguments available in the k6 CLI work as part of the pipeline as well. But instead of manually going through the hassle of downloading the k6 binary, making it executable, putting it in the path, and so on, you just install the extension and it takes care of all of that for you — all you have to add is which file to run.

Nicole: Nice.

Simme: So that's basically it. We've not had too much usage of this yet — we have some nice reviews and about 670 installs, which is quite nice — but we definitely want more feedback and more users, so we know what direction to take. If you're on Azure and you want to run k6 tests as part of your pipeline, go ahead and add it, hit us up on Slack, and let us know what you think of it.
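A pipeline step along the lines Simme describes might look like this. The task and input names are an assumption reconstructed from the marketplace extension's documentation — verify them against the current listing:

```yaml
# azure-pipelines.yml -- illustrative sketch of the k6 extension steps
pool:
  vmImage: 'ubuntu-latest'

steps:
  # Local run: executes the given script directly on the CI runner.
  - task: k6-load-test@0
    inputs:
      filename: 'test.js'

  # Cloud run: the same step with the cloud flag flipped on, plus a k6 Cloud
  # token stored as a secret pipeline variable.
  - task: k6-load-test@0
    inputs:
      filename: 'test.js'
      cloud: true
    env:
      K6_CLOUD_TOKEN: $(K6_CLOUD_TOKEN)
```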
Nicole: I will — actually, I have a need for it, so I'll definitely be trying it out, and I'll leave a review. We also have a question from Quang Tran: "Previously I had to use a library to convert the k6 result to JUnit, because Azure Pipelines only supports this format."

Simme: That's true. What you would have to do in that case is store the output of your test in a file, and then add another step to your pipeline that runs the actual k6-to-junit command and feeds in the results from the previous step. So it would definitely be possible to do in two separate steps and still use the extension, but there's no native support for it right now. Maybe that's something we could look at — adding support for automatically transforming the results into JUnit. That's actually a really good feature request for the project. How did you pronounce the name, Nicole?

Nicole: I'm not sure if it's Chuang or Quang — sorry if I got it wrong. In any case, feel free to raise an issue in the Azure Pipelines extension repo on GitHub, so that you're notified if we ever implement such a feature. That would be a good feature.
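The two-step workaround Simme describes could look roughly like this, assuming the community k6-to-junit npm package — check its README for the exact invocation:

```
# Step 1: install the converter on the CI runner
npm install -g k6-to-junit

# Step 2: run the test and pipe its console output through the converter
# to produce a JUnit XML report that Azure Pipelines can publish
k6 run test.js | k6-to-junit junit.xml
```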
Nicole: So, Simme and I were on a podcast recorded for Conf42 on Tuesday — it probably won't be out for a bit — and to close it off, Mikko, the interviewer, asked us what one piece of actionable advice we would give somebody just starting out in the industry: what's one thing you did that was a really good investment in your career? I want to ask you both that now too — shamelessly stealing from Mikko. Simme, do you have an answer you'd like to share?

Simme: I'll say the same thing I said there: the biggest return on investment in my career has probably come from doing things other than just working. Enjoy nature, exercise, make sure you actually use your body, so that you find some sort of balance between body and mind. That really allows you to elevate your thinking when you are at work, in a whole different way than if you were working all the time. So my suggestion — what has worked best for me — is to do other things: practice some sport, go out in nature, anything you enjoy that disconnects you.

Nicole: I'll say mine, to give Tom a bit more time to think. Mine is a little bit weird — maybe not as weird as Simme's — but I wish I had learned to take better notes earlier. If you're working in tech, things move so quickly, and because we have limited space in our brains, that space should be prioritized for making connections between seemingly unrelated things and pulling insights out of what we learn. I don't think it's best spent memorizing the right syntax for different languages — all that stuff should be downloaded into a second brain. So I really wish I had started taking notes earlier — taking notes and actually saving them, because sometimes I took notes and then never reviewed them, never processed them or thought about how that thing fits in with my view of the industry. If I were starting over, I would definitely do that earlier, because I do not remember everything I've come across. And if you're looking for a tool for that — completely unrelated, and this is a free product I have no interest in pushing; I don't get a kickback — I use Obsidian (obsidian.md). But there are different tools you can use for this purpose; really, anything that lets you save your notes in plain text files. I think Tom uses Obsidian as well, right?

Tom: I haven't been using it much lately — I'm letting myself down. Being a Windows guy, it's an automatic thing for me to hit Windows+R, type "notepad" — I'm still using Notepad way too much. I should be ashamed of myself.

Nicole: You should! It's really bad. Having said that, I was half expecting the Obsidian plug to come out. So what's your thing, Tom?

Tom: I'll take pages from both of your books. Taking notes is definitely very important — I have so many notes, and I do use them, particularly when I'm on a call with customers; that's one of the main times I take notes and actually review them. And taking a break: don't just stare at the screen and get frustrated when you can't figure out why you're getting an unexpected response. The number of times I've gone away, done something else, come back, and suddenly noticed something I didn't see before — it's weird how often that happens. So: take regular breaks, drink lots of water, and take notes. That's how I do things. Cheers — that's why I have this brown drink here.

Nicole: You actually covered the logo!

Tom: I know, but everyone knows what's underneath.

Simme: Come on — is it Pepsi or Coca-Cola?

Nicole: I do want to say that we won't have an Office Hours next week, because we're part of Grafana Labs now, and when they have a shutdown day, we shut down too. They started doing this, I believe, during the pandemic: everybody takes the same days off, because it's difficult to take leave when everybody else is still working. This way the entire company is off, so it's a true break. Oh — Mark has answered the advice question too: "Some people talk to rubber ducks to work out problems; I go running." Rubber ducking is a pretty good technique — it's basically talking out loud and pretending you have someone to bounce ideas off. I just call up Simme for that.

Simme: I was going to say — I've tried the rubber duck thing at some point, but it's just so awkward. I just call someone and pour it out. But I'm not going to be able to do that as often as I'd really want, because we have some sad updates — or at least it's sad for me: this is the last Office Hours I'll be part of. At the end of this month, I'm leaving k6. From what I've heard, Nicole will continue to do Office Hours —

Nicole: Absolutely.

Simme: — hopefully with a new sidekick. In any case, I've had so much fun with all of you and the whole community for the last year and a half — almost two years — and I'm sure we'll continue to see each other in settings other than Office Hours. And who knows, maybe I'll come back to Office Hours at some point as part of my next gig.

Nicole: That would be interesting. We are so sad to see you go — forgetting everybody else, I'm sad to see you go. Unless you change your mind?

Simme: I'm sad as well, but I'm the one who decided this, so I can't be too sad, right? I'm really going to miss all of you, and I hope we get lots of chances to collaborate going forward.

Nicole: You've been a critical part of Office Hours and our YouTube channel — you've always been supportive of my weird ideas, and I'm so glad you did this with me. I think this is the 28th episode now, and we've done most of them together. I'm absolutely going to miss you, as will the rest of the team. And you're not going to get away from me that easily, because I have your number and your Twitter handle, and I know where to find you. Anyway — thank you, everybody, for watching, and have a great weekend!

Simme and Tom: Thanks, everybody!
Info
Channel: k6
Views: 254
Id: sAPHPvmfdPQ
Length: 62min 32sec (3752 seconds)
Published: Fri Sep 17 2021