PANEL DISCUSSION: Successful Test Strategies

Video Statistics and Information

Captions
Stacy (moderator): All right, hello everyone, and welcome to this session on successful test strategies. I'm joined today by three incredible people whom I'm happy to introduce and then have share a little bit about themselves. First, Kristin Jackvony, Principal QA Engineer at Paylocity; then Alfred Lucero, Senior Software Engineer at Twilio; and Jeff Benton, Staff Software Engineer in Test. We'll go around the room for introductions, and each panelist will share a little about how their organization handles testing. We'll start with you, Kristin.

Kristin: Hello everybody. As Stacy mentioned, I'm Kristin Jackvony, and I'm the Principal Engineer for Quality at Paylocity. Paylocity is an HR and payroll software company, and we have about 45 product development teams. Each team has one or more software test engineers. The software test engineers do much of the test automation and also some of the manual testing along with product analysts, but it's very important to us that software developers participate in testing as well. Our software test engineers really guide the developers in thinking about what to test, how to test, and when to test, so the developers take part in the process and help with the automation.

Stacy: Very good, thank you. Now over to you, Alfred.

Alfred: Hey everyone, my name is Alfred Lucero and I'm a software engineer on the EI frontend team at Twilio, a SaaS company that provides APIs and tools for developers and marketers to send emails, SMS text messages, and phone calls, and to automate other forms of communication. My team is under the email platform, which used to be known as SendGrid, and I manage and build the web application that customers use to check their email data and settings. A little background about myself in relation to testing: I've written some blog posts on end-to-end testing with Cypress, helped lead the migration and adoption of Cypress at SendGrid a couple of years ago, and more recently have expanded our organization on the paid Cypress Dashboard service to include more Twilio Console teams. In terms of development and testing across Twilio, we have full-stack teams and more frontend-focused teams like mine, which are collectively in charge of adding the unit, integration, end-to-end, and visual regression tests. Each development team is empowered to write tests for its own web application; it's not solely on one person, it's a collective effort. We do have some developers with more of a QA or test background, but that isn't required of everyone. People learn and build off of each other, and it helps to have those with that background provide guidance when it comes to writing tests.

Stacy: Great, thanks Alfred. And now over to you, Jeff.

Jeff: My name is Jeff Benton. I'm a Staff Software Engineer in Test with RxSaver; we were recently acquired by GoodRx, so we're now part of that organization. Our philosophy at RxSaver is that everyone owns quality and testing. We do have software engineers in test as part of the team, but all engineers participate and contribute to the testing. The SETs are seen more as ambassadors of test and quality to the team, helping to keep testing in front of everyone, and also working on test-related projects as well as everything else the team is doing. That's how we're set up.

Stacy: Great, thank you. I'd like to go a little into tools. Alfred, you started on that, so maybe I'll come back to you: what are some of the testing tools and languages your team uses, and how did your team decide on them?

Alfred: For most of the web applications across Twilio we use JavaScript and/or TypeScript, especially TypeScript because it provides static type checking, so if the build succeeds we can be certain it at least passes the initial checks on the types involved. It also helps to be aligned on one language for building the web application and writing the tests; in the past we had to context-switch between Ruby for end-to-end tests and JavaScript or TypeScript for the rest of the web application, and it turned out to be a bit of a mess. We build our web applications with React and TypeScript, so we use Jest with Enzyme, or Jest with React Testing Library, for unit and integration tests. More teams are transitioning to React Testing Library because of the style of test you end up writing: you don't focus on the lower-level implementation details, you focus on how the application is actually used. For end-to-end browser automation tests we use Cypress. That's one where we really had to convince stakeholders, like product and engineering managers, that the tool could provide value and improve upon the existing end-to-end testing solutions we had. We used to have an in-house custom Ruby solution with Selenium, and we also tried WebdriverIO as an alternative, but neither was working out well for us. So we looked into Cypress, did a proof of concept by converting a subset of the prior WebdriverIO tests to Cypress tests to see how they fared against one another, and gathered metrics: test run time, overall passing rate, overall developer experience, and the features Cypress provides compared to the other solutions, such as the paid dashboard service, screenshots, and videos. We were able to definitively show management that Cypress was a good solution, so we adopted it as one of the first teams and then brought other teams on board into the organization we have on the Cypress Dashboard service. We also use a tool called Storybook, which is more for component development: it makes life a lot easier to build components out in isolation without having to integrate them onto a page, and to visually verify that the variations of your components match the designs and mock-ups. We use a lot of that style of development when building out features, new pages, or small components. For visual regression testing across browsers and devices, other teams have used Chromatic with Storybook to do component diffs of the styles, so if big changes are going on we can make sure all the components still look as expected, and other teams have also used Applitools for page diffs, to make sure everything still looks good if you update your style guide or your component library's version.
In general, when picking any tool or library, we always like to look at the overall developer experience: the documentation, the resources and examples available, the whole feature set compared and contrasted, the community support, how popular it is, and whether the repo is well maintained. We want to be sure it's a library or tool we can keep using going forward, not just for a couple of months, so that it can scale with us as we grow.
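To illustrate the React Testing Library style Alfred describes, where tests interact with the rendered UI the way a user would rather than reaching into component internals, here is a minimal sketch. The EmailSettingsForm component, its labels, and the onSave callback are hypothetical, not something from Twilio's codebase.

```tsx
// A hypothetical unit/integration test in the React Testing Library style:
// queries by accessible label and role, not by implementation details.
import React from "react";
import { render, screen, fireEvent } from "@testing-library/react";
import { EmailSettingsForm } from "./EmailSettingsForm"; // hypothetical component

test("saves the sender name the user types in", () => {
  const onSave = jest.fn();
  render(<EmailSettingsForm onSave={onSave} />);

  // Find the field the way a user (or screen reader) would: by its label.
  fireEvent.change(screen.getByLabelText(/sender name/i), {
    target: { value: "Ada Lovelace" },
  });
  fireEvent.click(screen.getByRole("button", { name: /save/i }));

  expect(onSave).toHaveBeenCalledWith({ senderName: "Ada Lovelace" });
});
```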
Stacy: Good stack. Jeff, over to you.

Jeff: I'll start with the different stacks, because we have different testing tools for each of them. We've got a web stack, a back-end feeds/jobs/API stack, and then an Android application and an iOS application. The web is JavaScript, React, and Next.js, and for testing we're using Jest for the unit tests and React Testing Library for the component tests, which is the next layer up. Then we're using Applitools and Cypress for the functional tests. We actually have that divided into three approaches: a functional approach that stubs out all the major API requests; our Applitools visual tests, which run against that stubbed version and sync up with their visual test grid; and then a small set of tests that don't mock out the API requests, which is our sanity check that everything works the way we want against the API. For the API and feeds, that's a Spring Boot application, so we're using the Spring testing framework with JUnit 5. One of the cool things we're using there is Testcontainers, to start up external dependencies the API relies on, like the database and Redis for caching, and we also use a really cool tool called MockServer (James Bloom's project), which mocks out our interactions with the external web APIs we depend on. Even though we're doing integration testing from outside our API, that gives us control over all the test dependencies, so our tests are really reliable and not flaky. For Android, it's a Kotlin application and we're using Espresso, which has worked out pretty well, with Applitools in conjunction to give us some visual testing. For iOS, a Swift application, we're using XCUITest, also with Applitools for the visual testing.
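A rough sketch of the stubbed "functional" layer Jeff describes for the web stack: major API requests are intercepted with fixtures so the UI renders deterministically, then Applitools Eyes captures a visual snapshot. This assumes the @applitools/eyes-cypress plugin is installed and configured; the route, fixture, page, and app/test names are hypothetical, not RxSaver's actual code.

```typescript
// Hypothetical Cypress spec: stub the pricing API, then take a visual checkpoint.
describe("drug price search (stubbed)", () => {
  it("renders search results visually as expected", () => {
    // Stub the major API request with a fixture so the page is deterministic.
    cy.intercept("GET", "/api/prices*", { fixture: "prices.json" }).as("prices");

    cy.eyesOpen({ appName: "RxSaver Web", testName: "search results" });
    cy.visit("/search?drug=atorvastatin");
    cy.wait("@prices");

    // Capture the page for comparison against the Applitools baseline.
    cy.eyesCheckWindow("Search results page");
    cy.eyesClose();
  });
});
```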
Stacy: Great stack. How about languages? Any challenges in deciding between those tools, Jeff?

Jeff: I think the key thing for us was not so much the language but the test strategy, and this is one of the things that was really cool. In my experience, when there's a separation between testers and developers you can end up with duplication, especially between the lower-level tests and the upper-level tests. So we collaborated and figured out what coverage we were getting from the lower-level tests, and then made sure our end-to-end and upper-level tests weren't duplicating it. That saved us a lot in terms of the tests we need to maintain and the amount of time we spend testing. There's a little bit of risk in that approach, but we're willing to fill in the gaps as issues come up.

Stacy: Kristin, could you tell us a little more about the tools and languages at Paylocity?

Kristin: Sure, absolutely. For front-end automation we had been using SpecFlow for about five years, and just in the last year we've switched over to Cypress, which we're really excited about; probably about one-third of our teams are on Cypress now and having a good experience with it. We also have some teams using straight Selenium in C# or Python. We have a concept at Paylocity called "paved roads," which is basically that each team decides for itself what its test automation should be like, but we also want people to use roads that have been paved for them. For example, for Cypress we have a sample project, all kinds of blog posts, getting-started guides, and a login npm package. We recommend that teams use the paved road, but if for some reason a team wants to go off-roading, it can, and some teams do when they feel it's necessary. The way we make decisions about which tools to use is something called an RFC, a request for comments: anybody interested in using a particular tool or framework fills out an RFC that says, this is the problem we're trying to solve, this is the approach we're proposing, these are the other things we evaluated, and here are the pros and cons. Then we invite everyone in our whole technology organization to comment on it, and we make a decision from there. It's like a lightweight architecture decision record, and we've used it for new tools; that's what we did with Cypress, and probably what we'll continue to do going forward. Another thing that's important to mention is that we focus a lot on API testing. Most of us are using Postman and Newman for that, but there are some teams using SpecFlow or Cypress to actually make their API calls.
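For teams following the Postman/Newman path Kristin mentions, the collection can be run from a small script so the same API checks run in a build. A minimal sketch, assuming the newman npm package; the collection and environment file names are hypothetical.

```typescript
// Hypothetical CI script: run a Postman collection with Newman and fail the build on any failed assertion.
import * as newman from "newman";

newman.run(
  {
    collection: require("./payroll-api.postman_collection.json"), // hypothetical collection
    environment: require("./staging.postman_environment.json"),   // hypothetical environment
    reporters: "cli",
  },
  (err, summary) => {
    if (err) throw err;
    // Non-zero exit fails the CI step if any request or assertion failed.
    process.exit(summary.run.failures.length > 0 ? 1 : 0);
  }
);
```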
Stacy: Great. I love the idea of the paved road: it gives people the tools they need but also some flexibility, so it doesn't feel forced. Do you recall some of the comments around the transition to Cypress? Cypress is hot and popular, but any transition from one automation framework to another is no small feat. What made you all make that choice?

Kristin: For me, the reason I started pushing Cypress was that I had tried it on my own because I wanted to write a blog post about it, and I thought, this is amazing: all I need to do is download it and it runs, because I didn't need to worry about getting the browser driver. So I started inviting teams to try it out; there was actually a team that had tried it even before I had. Some of the comments we got initially were, "well, we've invested so much in SpecFlow," and my answer was: you don't have to make the entire decision right this minute, you don't have to switch over right this minute, just try it out. I actually put out a challenge I called "take the Cypress challenge": just give me half an hour, here are your instructions, run through it. Then I was getting comments like "whoa, that was easy" or "wow, my tests run much faster now." So what teams are doing now is gradually making the switch.

Stacy: Great, thank you Kristin. I love that; it's experimentation, which is one of the best practices of a DevOps and agile team, and it's a good transition to my next question about cultural traits. As the owner of a consulting firm, we often get companies coming to us with no automation, or with twenty people who each did their own homegrown automation, and they ask us, "can any team do this?" So I want to put that question to all of you: is it possible for any team to be successful with automation, or are certain traits needed? Jeff, what are your thoughts?

Jeff: I think you need strong engineering management support, and also support from product, because test development takes time, and that needs to be incorporated into your estimates for getting work done and then for maintaining it once you have it. One of the key things is getting all your engineers to buy in: everyone is responsible for quality and testing, and it doesn't belong to some other group, even if there is a dedicated set of people within the team. Something else that helps an organization is being willing to take some risks. If you try to automate everything, you end up with a lot of tests, a tendency toward brittleness, longer run times, and more time spent on maintenance. So find the right balance between having the confidence to release, with your tests covering you the way you need, versus "we can't ever have anything bad happen in production," because then you're fighting an uphill battle. The good news is that even if you don't have these things in place, you can start working toward them. Cultural change takes time, but it's worth the effort; you've got to be willing to evangelize, talk to people, and do examples. I love what Kristin shared about "take the Cypress challenge." That's just fantastic.

Stacy: It sounds like you have that culture, with the teams agreeing to use the same technology. Alfred, how about you?

Alfred: I'd echo a lot of Kristin's story. We did something really similar with WebdriverIO and our prior custom solution, where we had to do a lot of the research and experimentation up front before moving on to Cypress, and convince stakeholders, as Jeff mentioned: we had to convince product and engineering managers that this was a good tool to use, with certain trade-offs in mind, but that it would help ensure a level of quality we didn't have before with our web applications and our prior testing solution. Part of being successful is being willing, as Jeff said, to really buy into the value these tests bring, and ensuring that quality is shared throughout the whole team, with everyone pitching in to write tests. You can start small: if you haven't started writing any unit or integration tests, just start adding those; they're the quickest and easiest wins. Then, if you can set aside some time to research end-to-end testing tools, start doing that and start paving the way for other teams to copy those patterns, because once it's successful in one area, another team can see that success and piggyback off of it. That's how we've seen it grow: a couple of teams using Cypress, then more teams using Cypress, or starting to use Jest and React Testing Library or Jest and Enzyme. It's really helpful to have one team do it first and share what they learned with others, and I think that's part of what makes test automation successful in any organization.

Stacy: Absolutely. Alfred, just hearing you share that gets me excited, so I'd say you probably need a good pitch person as well, to make it sound interesting. Kristin, is that something you've experienced? What are your thoughts: can any team be successful?

Kristin: I do think any team can be successful, and I definitely echo a lot of the sentiments Alfred and Jeff shared. I would say the two biggest things are these. First, everyone needs to understand that the whole team has to contribute to the decision about which test framework to use and to the implementation; it can't be one person told to pick a tool and go off and do it alone. Second, everyone needs to understand that test code should be treated with the same level of care as production code. Flaky tests basically mean nothing, because they're not providing good, actionable information. It's not enough to have an automated test that works sometimes but not all the time; you want it to be very, very reliable, and everyone needs to participate in that.

Stacy: I think that's so important. Even on flaky tests, I've experienced situations where a lot of the flakiness was on the development side, so you couldn't really get good results, and the teams that worked closely together were able to say, okay, if we make these couple of code changes, then we can get some valuable results. Thank you, Kristin. Now I want to turn to some other non-functional testing. I heard you all talk through your stacks: are you testing other areas like accessibility, security, and performance, and who is responsible for those within your organization?

Kristin: For security testing, we've trained all of our testers and developers in how to do security testing, which is really helpful because teams can do some of their own pen testing, but we also have a dedicated pen-testing team that sets up automated scans and does its own deeper pen testing. For performance testing, we have a dedicated performance testing team, but we're working right now on the same kind of empowerment for teams, so they can write and run their own performance tests at a low level, not stressing the application and taking it down, but getting good information about what the response times are. Finally, for accessibility testing, we have a small team right now that is volunteering to work with our UX people on what kinds of automated accessibility testing we can run, what kinds of manual testing we can run, and then educating teams about what they can do on their own products, so we're pretty excited about that.

Stacy: Wonderful. And does that fit into the test cycle? It sounds like those could work separately; has the company figured out how to make it work within a sprint or a testing iteration?

Kristin: Security testing has definitely been integrated into the test cycle; in fact, oftentimes on a work item, like a Jira item, there will be a little checkbox that says "did you do your security testing?" The accessibility testing and the performance testing are things we're still working on integrating, and I'm hoping that once teams have tools they can set up and use themselves, they'll be able to integrate those just as we did with the security tests.

Stacy: Great. Alfred? Jeff?

Jeff: One of the things we did that was really neat at RxSaver is that we got to the point where we could run our performance tests as part of our CI/CD pipeline. We didn't run them all the time, but it was a click of a button. Unfortunately we moved off of the platform provider we had, so we're back in a mode where it's very easy to run the performance tests but we still need to pick a new platform provider and get those up and running again. But that was a really great success, and it was important because we had some performance issues at the time, so having the confidence, at the push of a button, to get that feedback from a performance standpoint was valuable.
On the web side, we know page-speed metrics are an important aspect. Right now we have monitoring around that; it's not built into our overall testing feedback, so it can't fail a test and prevent something from being pushed out, but we do have that monitoring afterwards as part of our operations side. For accessibility, I know Applitools has some capability around that, and I had proposed that the web team look into it, but we haven't quite pushed into that area yet.

Stacy: It's still fairly new compared with those other areas and how to integrate it, so we're getting there as a community as a whole. How about you, Alfred?

Alfred: Pushing for more automation in all of these areas, security, accessibility, and performance, is something a lot of us would want ideally, as new libraries come out to support it. But to cover each of the boxes: for security testing we have a product security team that reaches out to all the backend services and web apps to make sure everything is up to a certain security standard. Recently, for example, we had to add security headers to our CloudFront application, so we had to make sure that was up to standard. We also have automated Snyk scanners for dependency upgrades on our web applications, to make sure we're always up to date. And as far as security is concerned, most of our developers know the basic web vulnerabilities to look out for, like XSS, and to always validate input, so we usually catch that in code review; but we would really love more tools around that to automate it in our usual CI/CD flow. For accessibility, we use the Storybook accessibility add-on, which gives you hints about what could be improved, such as the semantic HTML: that you need to actually use a button element for a button, or use HTML elements with the right semantic meaning. Our champions for accessibility are the Paste design system team: we have a component library shared across all of Twilio, and they're really great at making sure all of our components are accessible at the very small, basic component level, each of our buttons, each of our inputs, and at giving guidance on accessibility to all of our front-end teams. For performance testing, it's pretty much up to each team to manage it, whether they run occasional Lighthouse audits of accessibility and performance for their web applications or run the profiler in Chrome DevTools. It would be nice to have something like what Jeff mentioned, where performance is monitored in your CI/CD pipeline. Our general guidance for performance is to send as few bytes over the wire as possible: send less JavaScript to consumers so you get much faster page-load and render times. We monitor that with third-party applications like New Relic, Datadog, and Sentry, which a lot of our applications already include, so we can log in and check how fast our pages are doing and which ones are slow. We have so much room for improvement; I'd really love to see this integrated into our actual deployment flows, maybe something that gives more visible warnings or errors along the way before we deploy to each environment.
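The Storybook accessibility add-on Alfred mentions is typically enabled with a one-line addition to the Storybook configuration. A minimal sketch, assuming a recent Storybook version with @storybook/addon-a11y installed; exact configuration fields vary by Storybook version.

```typescript
// .storybook/main.ts (sketch): register the a11y add-on so every story gets axe-based accessibility checks.
const config = {
  stories: ["../src/**/*.stories.@(ts|tsx)"],
  addons: [
    "@storybook/addon-essentials",
    "@storybook/addon-a11y", // surfaces accessibility hints (semantic HTML, contrast, labels) per story
  ],
};

export default config;
```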
Stacy: That's really great. I think you've all done a lot, and there's always more we can do with technology. One thing I always tell the smaller and mid-size companies that are just getting started is that as long as you have something, you have a foundation to build on, even if it's one security, accessibility, or performance test; it's something, and if you can run it often, you can learn from it and keep building. Thank you all for sharing. We touched on CI when Jeff mentioned it, so I want to ask: it sounds like most of your companies now have continuous integration and deployment. How do your tests play into that, and is there anything you'd want the audience to be mindful of as they build this out? What are some best practices or tips? Let's start with you, Kristin.

Kristin: We are not quite at CI/CD yet, but one thing we have been working on is something we're calling hands-free deployments, and some teams have achieved it. The idea is that when you're ready to release, you've put in a change control and it has been accepted, and then the release begins, and it actually begins in an automated fashion: for each successive environment we deploy to, it deploys, runs the tests, and if those are successful it deploys again and runs the tests again. So we're kind of halfway to CI/CD. One thing I would say is so important is making sure you've got good-quality tests: tests that are not going to be flaky, tests that give you real, actionable information, because if you've got an automated deployment set to run at 11:30 at night, you don't want to get paged to find out that your tests failed in some environment. You want to make sure all of your environments are set up correctly, your test data is there, and your tests are really solid.

Stacy: Great. And Kristin, we all have a few flaky tests. Do you suggest removing them, at least for the deployment process, to keep it clean, or giving yourself a little leeway in terms of what the pass rate or the bar is?

Kristin: I'm a big fan of the 100% pass rate. If your tests are not passing during a deployment, you need to get a set of human eyes on it to make sure that any failure you saw was a fluke. Flaky tests, if you don't have time to fix them, should be muted or pulled out. I don't like the idea of "90% of the tests passed, so it's probably good enough for production." I don't like that at all.

Stacy: All right, Jeff, what are your thoughts on this topic, and what's your number? Is it 100?

Jeff: I'm shooting for 100, zero tolerance for flaky tests, and there are a number of different techniques you can use to deal with them,
from disabling them and filing a ticket for follow-up to having some kind of quarantine approach. We are CI/CD, though not so automated that a developer pushes and it goes through all the stages and out to production; we still have a manual gate to production, but it's "push the button when you're ready to go." A couple of other things are key to making this work. Make sure your tests are built in, because if you're relying on developers to push a button, there's a tendency to forget, or you're in a rush and you do it after the fact, and those tests could have given you valuable feedback saying "this is not ready to go." Your tests also need to be efficient: no one wants to push something out and come back two hours later to find out whether they can go to the next phase. That's where being efficient in how you organize your tests matters: fast unit tests and integration tests, and just the right number of end-to-end tests.

Stacy: Great. I was always a little sneaky. It's been a while since I've been hands-on, but I remember around 2010 setting up a Jenkins server just because I wanted my tests built in, and the only way to be sure was to build it myself so I could keep the admin rights. Who builds it in at your company? Is it development? Do you have a DevOps role? Who configures and makes sure those tests run in your pipeline?

Jeff: I think that gets back to the cultural question, where everybody owns it and it's part of the philosophy you want. Having test engineers or software developers in test around, that's one of your jobs, right? To make sure it's there, to propose that it's there, and to be able to do that work; that's the value of the role, and that's how we've handled it. If it's not getting done, I've got the ability to go in. We use GitLab currently, and that's been a fantastic environment; it has a lot of good hooks for deploying, testing, and getting that feedback, which is very valuable.

Stacy: I like that one too. Alfred, how about you?

Alfred: A lot of our teams on the SendGrid side use Buildkite as our CI provider, so we do continuous integration and continuous deployment; the Twilio Console side tends to use Jenkins. At a high level, we always run our unit tests in a Docker container, we build our web assets, and then we trigger our Cypress tests against, say, the feature branch or staging. If those are green, we deploy to staging, run the tests again against staging, and deploy to prod if everything is green. We're also big on making sure that if a test goes wrong, especially an end-to-end test, we're all hands on deck getting it fixed and addressed before we go to production; that's a really big thing for us to ensure quality, because we don't want an incident to make it from the staging environment into production. That's the common setup across a lot of our teams: the unit and integration tests should be the fastest and always at 100%, and the end-to-end tests, since they're talking to the back end, can be a little flakier and a little slower, but we try to hammer out that flakiness as much as possible. In those steps, we also like to share our pipeline steps and Dockerfiles with other teams, because they can reuse a lot of the same steps in their CI flows; it's about broadcasting that to the other teams in wikis and documentation so they have an easier time setting up Cypress in their flows or adding tests to their repos. Each front-end team can have slightly custom pipeline steps to fit its use case, but a lot of them follow the same strategy. In general we also love to leverage parallelization for the end-to-end Cypress tests, which really cuts down the waiting time, in some cases from 50 minutes to five minutes for over 400 tests; that saves months and years of waiting compared to running them serially. Little things like that make it less painful to wait when deploying to staging or production, and it helps empower developers to write more end-to-end and unit tests. Even before we push changes to a PR, we run Git hooks through Husky that run the linters and the unit tests locally and make sure they pass before we can push a change. So there are a lot of little gates along the way that ensure something is always being tested and checked before we push new code, to make sure nothing breaks in our web application.
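A simplified sketch of the kind of CI step Alfred describes: run the Cypress suite against whichever environment the pipeline just deployed, recording to the dashboard service and splitting specs across parallel agents. This uses the Cypress Node module API rather than their actual Buildkite pipeline code; the environment variables and group name are hypothetical.

```typescript
// Hypothetical CI step: run Cypress end-to-end tests against a target environment.
import cypress from "cypress";

async function main() {
  const results = await cypress.run({
    browser: "chrome",
    config: { baseUrl: process.env.TARGET_URL ?? "https://staging.example.com" }, // feature branch, staging, or prod
    record: true,
    key: process.env.CYPRESS_RECORD_KEY, // dashboard record key from CI secrets
    parallel: true, // dashboard balances specs across parallel CI agents
    group: `e2e-${process.env.STAGE ?? "staging"}`,
  });

  // Fail the pipeline step (and block the next deploy) on any failure.
  if ("status" in results && results.status === "failed") process.exit(1);
  if ("totalFailed" in results && results.totalFailed > 0) process.exit(1);
}

main();
```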
Stacy: That's wonderful. And is it hands-free? Can something go all the way to production without any manual validation?

Alfred: Similar to Jeff: in Buildkite you can do one-click-button deploys, so everything runs automatically up to a point, and then you click yes or no to deploy to staging or to production, assuming it's green. If it's red, you have to retry or push new changes to make the build green again before you can do another one-click deploy. There's a lot of cool stuff you can do with Buildkite that's pretty dynamic, almost choose-your-own-adventure: you choose which Cypress tests to run, do a lot of shell scripting, and set up dynamic pipeline triggers to run the tests you want and to say yes or no to deploys, which is really nice.

Stacy: It does sound really nice. Now I'd like to talk a little about scaling tests. I'm a user of SendGrid, so I can imagine, and I'll go back to you, Alfred, that scaling your tests and preparing for that has to be thought about as well. When it comes to running across multiple environments and multiple configurations, such as browsers and different devices, what are some techniques you've used or would recommend?

Alfred: I'll start off again, since I'm on a roll with Buildkite. With Buildkite we set up a lot of scheduled pipelines to run tests; for example, pipelines that trigger tests against Chrome or Firefox using Cypress, where you can select which browser to run against and also run them on a schedule. Some of our highest-priority tests we love to run on a schedule just to be sure the back end and front end are still working together, because there are so many different back-end services powering SendGrid and the Twilio APIs, and we want everything to work together so the customer doesn't get any unexpected surprises. We do that in addition to the usual CI/CD I mentioned, where we run our highest-priority tests, or a selected set, before deploying to staging or production. Our Paste design system team uses Chromatic to run visual regression tests against their common components in Storybook, showing before and after to make sure none of the styles changed. That's really important because there are a lot of direct consumers of the Paste design system: each team's web application still needs to look consistent and hopefully not break when a change is pushed to the base library everyone uses. Chromatic already handles the cross-browser testing and running on different devices. We've also used Applitools to do page diffs for our web applications, running on different devices and browsers to verify nothing changed; if we upgrade a style guide or bump our component library to the latest version, at least things look consistent and the user won't notice a difference. That's pretty much the goal of these automations across browsers and devices: making sure the user experience and what the UI looks like hasn't changed in a drastic way.

Stacy: Great, sounds good. How about you, Kristin?

Kristin: In terms of parallelization, our legacy SpecFlow tests run in parallel, or at least some of them do. Since we've switched to Cypress, we haven't had much need for parallelization because the tests run so fast. We're also a real microservices kind of company: we've got lots and lots of small APIs and small applications that work together, so each team runs its own automation at its own time against its own software. And we really encourage teams to test as much as they possibly can at the API level, because that's just so much faster; because they're limiting the number of UI tests they need to run, I think that gets us information more quickly as well.

Stacy: Great. And in terms of devices, I always want to know: is everything done virtually now, or does anyone still go old-school and have devices in hand, plugged in somewhere? Is everything in the cloud at the moment?

Kristin: For our mobile application we use real-device testing with devices that live in our home office and that all of us can connect to remotely, so we're looking at a screen, but it is connecting to a real device, and that's been really great. There's been a little emulator testing, but not too much. In terms of browser testing, it really depends on the team and how much they feel they need multi-browser support. Our product owners collect analytics about who's logging on, who's still using Internet Explorer, and who has made the switch over to Edge, those kinds of things. And then we have some teams that have pretty much an entire back end and almost no front end, so their testing looks very different.
Stacy: Okay, great. What's Internet Explorer? Just joking. [Laughter] All right, Jeff, how about you?

Jeff: This was an interesting journey for me, especially on the website, because we were using Nightwatch before we switched over to Cypress, and it was really the developers who proposed switching. I was skeptical, because I was still coming from the "you need to run it in this particular browser to see if it's going to work" mindset. But I had a really good conversation with a principal UI engineer on the team about what's actually going on when you run a Jest test or a React Testing Library test, and what that environment is. Because of the jsdom layer underneath, you can have a lot of confidence that if it runs in that test environment, it's going to run in the different browsers. What gives us the browser confidence is the Applitools visual testing grid: when we run our tests, we run them at a mobile display size, because that's what the majority of our users are on, and the results get pushed up to the visual grid, and that's where we get the different views. It was nice that they added some Internet Explorer on Windows support and some Safari support as well, which gives us an extra level of confidence. We were willing to go with that and say, if this gives us the confidence we need, we'll stick with it, rather than insisting on something that runs in each of those browser environments. On the app side, we use Espresso and XCUITest, which run within those environments as part of the Android and Apple development tooling, and we use Applitools with those as well for visual confidence. We also use Kobiton to do some extra checking, and we have some physical devices where we'll run through a very light regression. Dealing with apps is still a little scary, because once a release is out there, it's out there, unless you're willing to force your users to upgrade to the next version. On the website there's less risk: you can do a fast fix-forward if something comes up, so you can move a bit more agilely.

Stacy: Good, thank you, Jeff. You mentioned something I thought was really interesting: talking with the lead developer about the UI and how it really works. I think that's credit to the culture you have there. Oftentimes we test as a black box and don't really know, technically, whether we even need to test on 75,000 configurations or not. I'm glad that as browsers and technologies advance we don't necessarily have to do that anymore. Is there any advice, or any technical "aha," you can share with teams that may still be uncomfortable if they don't test everything in every combination?

Jeff: I think it comes down to having that conversation, and you need to know the risks around the application you're developing; each application is going to be different. Starting those conversations with the developers is a key role for a QA or quality engineer or software developer in test; those are the key conversations to have to reach good solutions, because you can save yourself so much effort and so much time. If we had relied on the prior approaches and assumptions, we would have tons of tests and we'd be fighting the test firefights that come up on a regular basis. Instead, I've actually been able to focus on a lot of other things, because our testing is so good. Having that vision and asking what we can do to make that happen is really where we all want to go.

Stacy: Absolutely, thank you. I had a post recently, and a QA automation manager at a very big company came to me and asked, "how do I get the development team to understand our value?" My answer was relationship building; that's the number one thing, really build that relationship, take them out, virtually or otherwise. But I can see from your experience, as a case study, that it saves a lot of tests and a lot of time. Thank you for sharing. Many managers and leaders ask this question, and it sounds like each of your organizations understands the return on investment of building automation, but my question is: how do you measure it? And for companies in the audience that are trying to figure out how to convince upper management to invest in automation, tell me a little about how you measure success. Kristin, I'll start with you.

Kristin: I am very excited about something we implemented last summer called the Quality Maturity Model. It's a list of behaviors that teams can exhibit that show how they're ensuring quality, and those behaviors are organized into seven categories: valuable, functional, reliable, secure, performant, usable, and maintainable. For example, under reliable, one of the behaviors might be that the team has implemented a retry model, so that if a request doesn't go through it gets retried; for maintainable, it would be something like the team making sure pull requests are reviewed in a timely fashion. We measure how successful a team is by how mature they are in those behaviors, and over the last year the teams have been working to assess where they are, see where they're lacking, and set action items for themselves to implement more of those behaviors. That's been a lot of fun, and it's been really exciting to see the teams grow. One of the other things we measure is how many software releases result in an incident in production, and we try to keep that rate as low as possible: every month I measure how many releases a team had versus how many times they had to call a red alert and pull their code back. The Quality Maturity Model and its adoption have been working really well to help reduce those release problems.

Stacy: Great, I love that. QMM, you heard it here first. How about you, Alfred?
Alfred: To measure the return on investment, we look at something similar to what Kristin described: how many support tickets escalate to us, say, if any front-end issues happen, how many actually bubble up in production as something serious that we could have prevented, and also how many issues we actually catch before they reach production. We see certain test failures that could come from us or from a downstream back-end service we rely on being down, but we catch them because of our Cypress tests running on a schedule or in our deployment flows. Those are some of the key things, just to make sure the web application is still working as intended. We also like to look at the overall health of our tests: how flaky our Cypress end-to-end tests are, which we can easily check in the dashboard service to see what percentage of the time our tests were passing, hopefully as close to 100% as possible, and how often we're going back to maintain and fix flaky tests, how much time we're spending on that maintenance. For unit and integration tests, we run the Jest code coverage tools to see how we're doing at actually testing different pages: are we covering them enough, should we cover them a little more? And hopefully we can tie all of those things to fewer incidents going forward.

Stacy: Great. How about you, Jeff?

Jeff: I think the real key, and this is one of the things I put forward with our team, is: can we release with confidence? Do you require someone to manually look at things and say "yep, this is okay, let's go ahead and release it," or do you have the right test coverage? I think that's a really good evaluation of how good your tests are and whether you can rely on them. I love the things Alfred and Kristin were talking about, the different feedback loops you can have. As a team, figure out which ones you need to give you that feedback, like whether your tests are meeting expectations; issues happening in production are a great feedback loop, and as long as you're learning from them and adapting your testing and everything else needed to address them, you're in a pretty good state. We did some work a while ago on "project test health": what does it mean for your project to be healthy, and what are the best practices you want to see in place? You list those out and then start figuring out how to go from theory to actual implementation. If, as a team, you're having those discussions and picking off the high-priority items, you're moving the team in a really good direction.

Stacy: Absolutely. I love all three of your answers and how they focus on the quality of how we build products and the end result. I was expecting some of you to say "we get things out twice as fast" or "we didn't need as much head count," but it sounds like that hasn't been necessary; I think it's implicit that automation is faster, and it doesn't seem like an immediate metric you have to track. Jump in if you have anything in those areas. Okay, well, I think you do have some good numbers from the move to Cypress, so that before-and-after would be a good one to show. We're coming to the end of the hour, so I want to thank you all for your time, and I'd like to go around and see if there's any parting advice. All three of you have mature systems and incredibly great backgrounds and experience, and we probably have people in this audience who are just trying out one of the 25 tools you mentioned. If you could leave us with your words of advice about where they can go within their organizations, based on where you've gone and what you've experienced. Alfred, I'll start with you.

Alfred: I think a big one is just to start testing. If you haven't started, start researching the different libraries we mentioned and start trying to integrate them into your flows, whether that's before you push code or in your CI/CD flow if you have one. There are a lot of ways to just get started; that's the big hurdle, getting started, and slowly you'll see adoption spread across the whole organization. Going across the different kinds of testing: for end-to-end testing, maintaining your tests and making sure they're not flaky is really huge, so always try to keep your resources as separate as possible so that when tests run in parallel, the order doesn't matter. Maybe look into having a user-creation service to create users in certain states to help you out there, or have some persistent users if the data is hard to set up. For unit and integration tests, pick the tool your team is most comfortable with, one that's really active in the community and popular, with a testing philosophy you can get behind, and start adding the quick wins you can: small helper files, small functions you can test, tests for your little components along the way. Those are the big takeaways from me: just try to start building quality today.
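One way to realize the "user-creation service" idea from Alfred's advice is a Cypress custom command that provisions an isolated user over an internal API before a spec runs, so parallel runs never fight over shared data. A sketch under assumptions: the /test-support/users endpoint, its payload, and the returned password are hypothetical, not an actual SendGrid or Twilio API.

```typescript
// Hypothetical Cypress support file: create a fresh, isolated test user per spec.
declare global {
  namespace Cypress {
    interface Chainable {
      createTestUser(state?: "trial" | "paid"): Chainable<{ email: string; password: string }>;
    }
  }
}

Cypress.Commands.add("createTestUser", (state: "trial" | "paid" = "trial") => {
  // Unique email per run and per spec file, so parallel agents never collide.
  const email = `e2e+${Date.now()}-${Cypress.spec.name}@example.com`;
  return cy
    .request("POST", "/test-support/users", { email, state }) // hypothetical internal endpoint
    .then((resp) => ({ email, password: resp.body.password }));
});

export {};
```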
Stacy: Just do it. Thank you, Alfred. Jeff, how about you?

Jeff: Have that test strategy conversation with your team. It doesn't have to stretch across the whole engineering organization, just the project you're working on. What does it take to be able to push that code out with confidence? What are the tests you need to have, and at what levels are you testing? I'd also advocate pushing your testing down: have as much of your testing covered at the lower levels, your unit tests and your standalone integration tests, and then your end-to-end tests on top. Don't put all your eggs in that end-to-end basket; it's good for testing the overall connectedness of your system and some of its configuration, but you really need to invest in those lower levels so you can have lots of tests that run really well. That's what I'd suggest.

Stacy: Great, thank you, Jeff. Keep it low. And Kristin?

Kristin: I would suggest starting small and getting buy-in from your team as quickly as possible. Let's say you're on a team right now with no UI automation whatsoever, but the team is already doing some unit tests and some integration tests. Choose a good UI automation test framework, like Cypress, maybe do a little proof of concept, and then take it to your team and say, "here's what I did." Get their feedback, get them to try it out, get them to buy in, and then you'll find they'll be willing to contribute and help you. They'll come up with ideas: "you know what, I think we need this little custom hook right here, let me write that for you," or they might have ideas about what could be tested, or about how you've organized your code. So it should be a whole-team activity. And then: no flaky tests. Get rid of your flaky tests.

Stacy: No flaky tests, there we go. Start small and collaborate. I want to thank you all for this hour; we've learned a lot. For the audience, I think we can see there's no silver bullet, but passion helps, and all three of you have that. I really appreciate you sharing your passion and your expertise with us today. Thanks.

Panelists: Thank you, Stacy, for having us.
Info
Channel: Applitools: Visual AI Powered Test Automation
Views: 228
Rating: 5 out of 5
Id: nqWem0FuBXQ
Length: 58min 46sec (3526 seconds)
Published: Wed Jun 23 2021