David Travis - The 7 deadly sins of user research - #NUX3 - @userfocus

Captions
but he has written a couple of books on usability and is here today to talk to you about the seven deadly sins of user research. Can we please give a big round of applause to David Travis.

Hello everyone, can you hear me at the back? Good. So I'm David Travis, and let's have a quick show of hands. What I'd like you to do is raise your hand if you work for an organization that would claim to design easy-to-use products and services. OK, a mixed response really. That's a bit sad, isn't it? Well, interestingly, let's try the question from the other direction. If I asked you to raise your hand if you think most of the technology that you use at the moment is easy to use (and by technology I mean things like websites, handheld gadgets, smartphone apps and so on), raise your hand if you think most of those are easy to use. Well, we've got certainly a few at least, and it's an interesting distinction, isn't it? It's interesting that we've got this divide between what companies think of their products or services and what most users think.

In my job I work with organizations to try and bridge that divide. I do it by showing them how their customers actually use their products and services. The short story is: it's not very pretty, and sometimes I feel like a counsellor working with a divorcing couple. "If only users would shut up and read the manual." "Just click here, stupid." There's an obvious reason for this disconnect between what organizations think users want and what users actually want, but like all obvious reasons, it's wrong. The obvious response would be for me to exhort you to do more user research. Now, you'll be pleased to know I'm not going to exhort you to do that. Well, actually I will: do more user research, it's the right thing to do. But it's not about the quantity of user research, because firms already do loads of user research; surveys, focus groups and usability tests abound. The problem isn't with the quantity of user research that's happening. The problem
is with the quality of user research: organizations struggle to distinguish good user research from bad user research. Here's my definition of good user research: good user research is research that gives us actionable and testable insights into users' needs. Today I'm going to share with you seven common blockers that prevent organizations from doing user research well, and here's how I'm going to do it: I'm going to review what I've jokingly called the seven deadly sins of user research. Personally, I like talks that have this kind of structure, because then I know when the speaker is getting close to the end. As I go through this list, here's what I'd like you to do: treat it a bit like Goal of the Month. Pick one favourite, choose the one that you think is most relevant to your organization, and make a pledge that when you go to work tomorrow you're going to work on that particular issue. I don't expect you to remember all seven of these, but you should walk away with at least one that will make you and your organization more effective, and as you'll see as we get towards the end of the talk, that's what's really behind a great design culture.

So let's start with the first one: credulity. Now, the dictionary tells me that credulity is a state of willingness to believe something without proper proof, and the form this takes in user research is asking users what they want and believing the answer. A couple of months ago I was attending a usability study on behalf of a client. I was there because the client thought the usability tests they were running weren't delivering much predictive value, and the client was concerned they weren't recruiting the right participants, or that maybe there was something wrong with the tasks they were asking people to do. As I sat in the observation room, I watched the administrator show three alternative designs of a user interface to the participant and ask, "Which of these three do you prefer, and why?" It seems to have lots of face
validity, doesn't it? Asking people what they want is very tempting, and it makes so much sense, yet it actually turns out to be the worst thing that you can do. Let's go back 40 years to explain why that's the case. Nearly 40 years ago, the psychologists Richard Nisbett and Timothy Wilson carried out some research outside a bargain store in Ann Arbor, Michigan. The researchers set a table up outside the store with a sign that read "Consumer evaluation survey: which is the best quality?" On the table were four pairs of ladies' stockings, labelled A, B, C and D, and they asked people to pick the one they felt was best. On the face of it, this is just like the usability test that I observed, but there's a twist: all the pairs of stockings were identical. It turned out that most people preferred D; far fewer preferred A. But when they asked people why they preferred D, they came up with some interesting answers, even though, remember, the stockings were exactly the same. The fact is, the psychologists knew something that the participants didn't. What they knew was that there's a natural bias, when we see a sequence of things in a line, to pick the one on the right-hand side. But nobody said that was the reason they picked the one on the right-hand side. Instead they would come up with answers. They would say things like "the elasticity of this pair is much better", or "these are sheerer", or "it's just the way they glisten in the light", I don't know. What's interesting about this is that there's an invisible thread joining that Nisbett and Wilson study to the usability test I observed, and the reason I call the thread invisible is that few user researchers appear to be aware of it, despite the fact that there's a whole sub-discipline of psychology, prospect theory, devoted to it, and that Daniel Kahneman won a Nobel Prize for his work in the area. The fact is, people don't have reliable insight into their mental processes, so there's no point asking them what they want. In
fact, this leads me to the first rule of finding out what people want. It's like Fight Club: the first rule of finding out what people want is don't ask people what they want. This quotation from Rob Fitzpatrick captures it perfectly, I think, where he says that trying to learn from customer conversations is like excavating a delicate archaeological site. The truth is down there somewhere, but it's fragile: while each blow with your shovel gets you closer to the truth, you're liable to smash it into a million pieces if you use too blunt an instrument.

So how can we overcome this problem? Remember, my definition of successful user research is research that gives us actionable and testable insights into users' needs. It's no good asking users what they like or dislike, what they'd do in the future, or what other people would do. The best way of achieving this isn't to ask; it's to observe, and your aim is to observe for long enough that you can make a reasonable guess about what's going on. Asking direct questions is going to encourage confabulation; it's not going to tell you what's actually going on. There are two ways you can observe. The first is to go to people's context and look at the way they're doing the thing at the moment: observe the way they currently solve the tasks that your system or product is aiming to solve. The second way is to teleport people to the future, and I'm not saying that facetiously: you can do that with prototypes and simulations, where you create the context of what it will be like in the future and you put people there. The key point is that what people say is much less useful than what people actually do. People are unreliable witnesses.

Let's move to the second sin: dogmatism. This is the tendency to lay down principles as undeniably true without consideration of evidence or the opinion of others, and the form this takes
in user research is believing that there's one right way to do user research. Now, I'm sure we've all worked with clients who think that a survey is the right way to understand user needs, perhaps because we hear about surveys every day in the news, or because you get them sent to you by email; people tend to think of them as being more reliable or more useful. Here are some that I found in my junk mail folder, and I'm sure if you did the search you'd find an equal number as well. The notion of using an alternative method, like a site visit or a customer interview, doesn't have the same face validity because the sample size is comparatively small. But sadly, having a large number of respondents in a survey isn't going to help you if you don't know the right questions to ask, and that's where site visits and customer interviews come in.

But site visits and customer interviews aren't the whole solution either. Although they're a great way to get insights into your users' needs, goals and behaviour, they shouldn't be the only method that you use. Recently I worked with a user researcher who seemed to think there was no room for any other research method than the customer interview. We wanted to validate some personas; she felt we should run more customer interviews. We wanted to identify the top tasks; she felt we should run more customer interviews. We wanted to compare two alternative landing page designs; the solution, she felt, was to run more customer interviews. Now, that kind of dogmatism is unhelpful. Site visits and customer interviews give you signposts; they don't give you definitive answers. It's kind of broad-brush stuff, a bit like the weather forecast. There may be some patterns in the data, but these aren't as useful as the conversations that you have with people when you're carrying out these interviews and the things you observe them do, and that's because the gap between what people say and what people do is often a design opportunity. But there comes a point
where you need to validate your findings, and you do that with site visits and customer interviews by using a different kind of technique. You might choose a survey, for example, to validate your personas, or you might choose multivariate or A/B testing to compare a couple of different landing pages. And that brings me to this notion of triangulation. Triangulation is where you combine methodologies in the study of the same phenomenon. Quantitative data tells us what people are doing, and qualitative data tells us why people are doing it, and with user research it's best to have a combination of both, because combining the two can be really quite powerful. For example, you can generate insights from your observations but then follow up with analytical data, like site analytics; or you can start with the quantitative data that you have about users and then ask whether you need more depth and more exploratory data. As an analogy, it's a bit like a movie: triangulation is like having different camera angles. It would be hard to understand the full picture of what's going on in this still that I've taken from Sonico if every scene in the movie was shot like this, as an extreme close-up. Similarly, it would be hard to understand what's going on if every image was shot as a wide-angle view, as in this still from the same movie. You can just about see a character in this shot, but it's not clear what's happening. Like a movie, you want your research to show the close-ups, but you also want to see the bigger picture.

The third sin of user research is around the notion of bias. Again, the dictionary tells me that bias means a special influence that sways one's thinking, especially in a way considered to be unfair, and user research is a continued fight against bias. There are really three kinds of bias that matter in user research. The first is sampling bias, and that occurs because you don't cover all
of the population that you're studying, or you cover only certain types of people because it's easier to do so. Surveys often fall into that trap: if you survey people with an online survey, you're obviously not going to get respondents who don't use the internet, and if they're an important part of your market, that means your sample is going to be biased. A second kind of bias is method bias, and that's where you collect data using just one kind of methodology and you don't triangulate, like I already discussed under the sin of dogmatism. But it's response bias that I want to discuss here, and that's caused by the way in which you collect the data. For example, if you ask poor questions, you're more likely to get participants to say what you want to hear rather than getting real insights into the needs that they have. Now, usually the bias is obvious in usability tests or user research, but sometimes it's a bit more subtle. For example, here we've got a usability test moderator simply saying, "I don't want to bias you toward liking the interface." You'd think on the face of it that's the right kind of approach, but there's an implicit bias in this statement: what she's saying is actually, "I do want to bias you toward liking the interface." Clearly what she should be saying here is, "I don't want to bias you toward liking or disliking the interface." Or take questions like, "Do you think this is a good idea?" Well, the user's got no skin in the game. It's easy for the user to say, "Yeah, it's a terrific idea, I'd buy seven", or to hate the idea; it doesn't really matter as far as the user is concerned. Similarly, "What would be the best way to design this?" The problem with that particular question is that users own the problem, but you own the solution. It's not their job to come up with a solution; it's your job. That's why you're paid so much. And a final example here: "It sounds like you'd definitely use this, wouldn't you?" This sounds to me like pleading: you would absolutely use
this system, wouldn't you? And obviously the problem with this is that users will willingly give you the answer you want, because users can be very polite when they want to be, and when they're being very polite they give you the answers you're looking for, but maybe not the answers that are important. Now, you can correct this bias by teaching people to ask the right questions, but there's an even more serious form of this bias which is actually harder to correct, and it happens when the design team carry out research and find that people don't really have a need for the product or service. Has that ever happened to anyone, where you've carried out research and felt that actually nobody really wants this thing? It's tempting to hide this from senior managers, because no one wants to be the purveyor of bad news. Take this particular example. It comes from MIT professor Justine Cassell, and she describes a series of focus groups that discovered that what teenage girls wanted from technology was technologically enhanced nail polish. It's amazing, isn't it? You'd never have guessed. Guess what business the firm that commissioned the research was in. It turns out the firm that commissioned the focus group research was in the business of making technologically enhanced nail polish. The reason this is interesting is that in Cassell's own work with over a thousand children in over a hundred countries, where she's asked children how they would use technology to make the world a better place, not a single one of them said, "You know what I need? Technologically enhanced nail polish." It never came up. If there's no need for your product, there's no point trying to convince senior managers that there is; you'll be found out in the end. It's a bad idea to cherry-pick your results simply to support what a senior manager or the design team wants you to come back with, so you shouldn't approach user interviews with a vested interest. Your job isn't to convince people to use a service, or to get the results
that management want. It's about digging for the truth, to use that Rob Fitzpatrick metaphor. Now, that doesn't mean you shouldn't have a point of view. You should have a point of view, but your point of view should be that you want the design team to understand the data. That's where you're coming from.

My fourth sin is obscurantism, which is the practice, the dictionary tells me, of deliberately preventing the full details of something from becoming known, and the form this sin takes in user research is keeping the findings from research in the head of a single person. User research is often assigned to a single person on a team: you have a user researcher, and it's their de facto role to go ahead and collect the data. That person becomes the spokesperson for the users' needs, the team's expert on users. Now, those of you with long memories might remember this kind of idea from PRINCE2. Has anybody here worked on projects where they used PRINCE2? Quite a few. It must have been a big thing in the north at the time. On those projects they had a user representative known as the senior user, and that person was supposed to represent user needs. Usually this senior user was an ex-user, or a user's manager, or someone who used to be a user in a previous life, and that person became the oracle of user needs. Basically, if that person said it was OK, then it got in the design, and that person was a bit like the precogs in Minority Report: they were meant to be able to predict the future. Now, this approach is a poor way to embed user needs, not just because it's obviously very difficult for a single person to understand everything that users want, but because it encourages the design team to delegate all responsibility for understanding users to a single person. If the senior user says it's OK, then it gets in the design; if not, we fix it according to the comments of the senior user. It's a cop-out, because it puts a barrier between the design team and end users. Now, I hope
I'm not stealing Leslie Fountain's thunder here, but the fact is, user research is a team sport. Caroline Jarrett captured it really well in this tweet. Your job as a user researcher, she says, isn't to learn about users, but to help your team learn about users. You're a facilitator as much as a researcher: you want your research findings to travel through the organization. An even worse example is where you delegate your user research to an outside firm. When you do this, it prevents critical knowledge about users from being embedded in the organization. You really want to keep this kind of thing in-house. Just a quick footnote to this, by the way: there's some evidence that clients are getting this idea. In the last month, Adaptive Path has been acquired to act as an internal team for Capital One, and Peter Merholz has linked the recent closure of Dan Saffer's design agency to this growth of in-house user experience teams as well. Now, I'm not saying that the industry is about to collapse, but I do think that we've reached peak agency in the UX field, and in the near future we're going to see a trend towards companies moving their UX work increasingly in-house. Now, historically the US
UX industry is usually a couple of years ahead of where we are in the UK, so if you work for a firm that sells UX research services, you've probably got a couple of years to rethink your strategy.

OK, back to the sin and ways of getting around it. One way you can prevent this sin on your own project is to encourage everyone on the team to get their exposure hours. Jared Spool has introduced us to this notion of exposure hours: his research shows that the most effective design teams spend at least two hours every six weeks observing users, in field visits or usability tests, for example. I've recently been working with a really effective Agile team in the North East, and I've encouraged them to adopt this idea by collecting certain metrics. That number on the right, which you can see says 95, shows the percentage of the team who have observed a user session in the last six weeks; that makes sure people are getting their exposure hours. The other metrics here show the number of users tested since inception, which simply makes sure there's a continual focus on users, and the days elapsed since the last usability test, which ensures that the project is practising iterative design. Here's another example of being able to do that within the design team. Here we've got a bunch of people who are planning a usability test: they're planning the research, they're going to be analysing it later, and they're even going to be carrying it out. This picture doesn't show a team of UX people; it shows developers and business analysts. There are some user researchers here, and even a scrum master is involved in planning this usability test, using a one-page usability test plan dashboard. How often do you see developers writing test tasks for usability tests, or thinking about what the objectives of a usability test should be? Because what you're
after here is trying to build a culture of user-centred design, and you do that by encouraging the whole design team to take ownership, not by putting it in the head of a single person. But you also need to design iteratively, and that takes me to my next sin: laziness, which is the state of being unwilling to exert oneself. The form this takes in user research is recycling old research data as if it's boilerplate that you can just plug into a new project. I want to talk about one specific example here, to do with personas, because I think it acts as a nice example. I find that clients often approach the process of developing personas as a one-time activity. They'll hire an outside firm to do field research with the requisite number of users; that firm will then analyse the data, create a set of beautifully presented personas, and give them back to the client, who then pays the invoice. Well, we already know this is a bad idea because of the sin of obscurantism: we want to keep that knowledge inside the heads of the people within the design team, not in an outside agency. But bear with me for the moment; let's ignore that issue. The reason I'm using personas in this example is that I'm often asked by clients whether they can reuse personas that they've developed in the past. They say they're working on a new project that's got a passing resemblance to something they worked on previously, and they just want to know: can we reuse those? Can we make some slight changes, maybe give them a different name? What do you think, would that be OK? And since basically their customers are essentially the same, it seems like it would be OK. But this idea so misses the point of why you develop personas that it serves as a good example. So here's a secret many people don't know: you don't need to create personas to be user-centred. Reusing them really misses the point of what personas
are about, because personas don't matter. Creating personas should never be your goal; personas are an outcome of research that you've done to get closer to users. Understanding users and user needs should be your goal, and in some ways a set of beautifully formatted personas, like we've all seen before on projects, is simply proof that you met with users, in the same way that a selfie you take with a celebrity proves you were at the same restaurant together. It's really about proving that you were there with the users, but that in itself isn't the goal. The world you want to move to is one where the design team know their users so well that the personas aren't needed. You don't get to this world by reusing old research; you get there by making use of research part of the culture. Eric Ries tells us that the measure of a good design team is how quickly the team learns. It's OK for people to hate your product, early on anyway, because your job isn't to create a great product; it's to help your users be more successful at the thing they do. It's not about your product; it's about the experience of using the product, and the purpose of multiple iterations is to find that sweet spot. Now, we've known for a long time that iterative design is the way to become user-centred: you build something, you measure its usability, you learn from it, and you redesign. Reusing old data, whether it's in the form of personas, usability tests or field visits, isn't iterating, and it's certainly not learning. I've called this sin laziness, but it could equally be called bureaucracy, or a kind of tick-box mentality, because that's the way it often exhibits itself. The great thing about iterative design is that it provides you with continual course corrections, so even if you're not doing the best user research in the world, so long as you're designing iteratively you'll find the problems, because with the next
iteration you'll see what you misunderstood in the previous work you went through. It acts as a great course corrector. Again, to avoid this sin you need to make user research part of the culture.

My sixth sin is vagueness, and this means not having clearly or explicitly stated or expressed goals. In terms of user research, I see this when a team fails to focus on a single key research question and instead tries to answer several questions at once. Now, this sin is really related to the laziness I touched on before: if you do research only occasionally, you need to answer lots of questions, because you've been storing up all of these questions that you want to ask, and this means you end up learning a little about a lot. In fact, you can learn a lot about user research from a dishwasher: the more you cram in, the less of it gets clean. In user research you actually want to learn a lot about a little, and that little is the specific question that's keeping you up at night. To uncover this question, I ask the design team to imagine the most useful, actionable research results possible. What would those results tell us? What would we learn from that particular study? Now, everyone on the team should agree on those key questions and the assumptions you plan to test, and those top questions should be the key driver of all of the research activities that you do. That means you need to get very specific with your research questions: if you can't articulate your research question on a couple of small sticky notes, then it's probably much too broad. In fact, that leads me to an interesting exercise you can do to uncover your focus question on projects. Sit the design team in a room, give each person a set of sticky notes, and tell them to imagine we've got an all-knowing, insightful user outside the room who will answer truthfully any question we put to them. What questions would you ask? I get the team to write a single question on a sticky note, and
they can have as many sticky notes as they want. At the end of five minutes we go to a whiteboard, we affinity-sort the sticky notes, and then we dot-vote on the group that's most important. I like that idea for a couple of reasons. First of all, it gives me the overall theme we need to cover, because that's going to be the group with the most votes. But the other reason I like it is that it gives me the specific questions that we need to answer in the research session as well, so we know specifically what we need to go out and get the answers to.

Here's another technique you can use, called the pre-mortem. I learnt this idea from Gary Klein. A pre-mortem is the hypothetical opposite of a post-mortem, and it's based on the finding that if you imagine an event has already occurred, you're much better at predicting why the event occurred. So try this for yourself. Imagine a future in which the product or service that you're working on at the moment is an abject failure. I know it's not a nice thought to have, but go there in your head; try it now. Then ask: why was it a failure? What caused the project to fail? You'll normally identify one or two key assumptions that you realize you haven't validated properly yet, and they're the ones that you can use to drive the user research that you go out and do. This approach works, I think, because it acts against the natural Pollyannaism that design teams have, where they can't imagine their product or service being anything other than a wonderful success. The fact is, most products or services fail, and this exercise makes us take that perspective.

My final sin of user research is hubris. Hubris means extreme pride or self-confidence, and in user research this takes the form of taking undue pride in your reports or your deliverables. All user researchers are guilty of this to some extent, but those with PhDs are the worst, and I say that as someone with a PhD. User researchers love
data. When you love something, you want to share it with people, so you create detailed reports packed with graphs and quotations and screenshots and call-outs: look at my data, look at how wonderful my data is. A long time ago an academic gave me some advice that stayed with me. He said: remember, you don't need to show everything that you've done. Sometimes it's like a child proudly showing a full potty to his mother; there's too much detail. I thought that gave a whole new meaning to the notion of a PhD thesis. Sadly, few other people are as fascinated by data as you are. As a user researcher, your challenge is to turn that data into information, and then turn that information into insight, because there are a couple of problems with excessive detail. The first problem is that people don't read the report: they turn the page, they see more data, they appreciate how clever you are, they get bored, they move on. But a more serious problem is that these kinds of report delay the design process. I've worked in organizations where it's typical to have a week between a usability test and the final report. A week! There's no justification for that kind of delay on a project. I've been observing usability tests for about 25 years, and in virtually every usability test I've seen, we knew what the top three problems were once the final user left the session, sometimes before the final user had left the session. You don't need to do extensive analysis in Morae or Excel to find the top problems. Now, that analysis is useful later, when you want to dig into the details, but the critical findings need to be fed back quickly, so the design can be modified and that build-measure-learn cycle can continue. Instead you need to create information radiators to get teams understanding the data so they can take action on it. As a general rule, if people need to turn the page, then your report is probably too long. So how can you capture the results in a single glance? This could be a concise visual
way of presenting research data, like a user journey map. Here are some examples of journey maps I've culled from a web search; if one of yours is here, thank you very much. The point I want to make here is that these are working documents that haven't been perfectly finished. You wouldn't look at these and say they're gorgeous documents. They're not meant to be gorgeous documents; they're meant to provide a quick visual summary of, in this instance, the user's journey with a product or service, or a persona. Personas don't need to be beautiful; think of them as design tools you can use to test your hypotheses. Or take this example of a usability dashboard from Redgate Software. What they've done here is simply list the problems on sticky notes below the screen they apply to. At a glance you get to see the problematic areas that need to be fixed: clearly that screen on the right-hand side has got some issues in this particular design. And just to show that this idea of short reports can work for formative usability tests too, here's an example of a one-page usability test infographic. We didn't actually deliver this one; if I showed you the results for the client that we did deliver it to, I'd have to shoot you before you left the auditorium today. Although this looks a bit more swish, the report actually took about 10 minutes to produce, because it's based on an Excel template where you enter the key results from the session and, hey presto, it's ready to release. There's zero delay between running the test and feeding the results back to the design team. It's a one-page PDF: it's easy to share, it has data for people who need it and summaries for people who just want the facts.

So as I've reviewed these sins, you may have noticed that many of them appear to have a common cause. The root cause is an organizational culture that can't distinguish good user research from bad user research. Companies say they value great design, but they assume that to do great design they need a rockstar
designer. Great design doesn't live inside designers; great design lives inside the heads of your users, and you get inside your users' heads by doing good user research: research that provides actionable and testable insights into user needs. Good design is a symptom: it's a symptom of a culture that values user-centred design. And bad design? Well, that's a symptom too: it's a symptom of an organization that can't distinguish good user research from bad user research. And perhaps that's the deadliest sin of them all. Thanks very much.
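The exposure-hours dashboard described in the talk (percentage of the team who observed a user session in the last six weeks, users tested since inception, days since the last test) is simple to compute from a session log. Here's a minimal Python sketch; the data structures, field names and figures are invented for illustration, not taken from the talk's actual project:

```python
from datetime import date, timedelta

# Hypothetical session log: each usability test or field visit records
# its date, which team members observed it, and how many users took part.
sessions = [
    {"date": date(2014, 10, 6), "observers": {"amy", "ben", "cai"}, "users": 5},
    {"date": date(2014, 11, 3), "observers": {"amy", "dev", "eli"}, "users": 6},
    {"date": date(2014, 11, 24), "observers": {"ben", "cai", "dev"}, "users": 4},
]
team = {"amy", "ben", "cai", "dev", "eli"}  # hypothetical roster

def dashboard(sessions, team, today):
    """Compute the three dashboard numbers mentioned in the talk."""
    six_weeks_ago = today - timedelta(weeks=6)
    recent_observers = set().union(
        *(s["observers"] for s in sessions if s["date"] >= six_weeks_ago)
    )
    return {
        # % of the team who observed a user session in the last six weeks
        "exposure_pct": round(100 * len(recent_observers & team) / len(team)),
        # total users tested since project inception
        "users_tested": sum(s["users"] for s in sessions),
        # days elapsed since the most recent session
        "days_since_last_test": (today - max(s["date"] for s in sessions)).days,
    }

print(dashboard(sessions, team, today=date(2014, 12, 1)))
# → {'exposure_pct': 100, 'users_tested': 15, 'days_since_last_test': 7}
```

Posting numbers like these on a team wall is one way to build the kind of information radiator the talk recommends.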
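The sticky-note exercise for finding a focus question (affinity-sort the questions into themes, then dot-vote on the most important theme) can also be sketched in code. In this hedged Python illustration the themes, questions and votes are all made up; the point is only the mechanics of picking the winning theme and its questions:

```python
from collections import Counter

# Hypothetical output of the affinity sort: each sticky-note question
# has been grouped under a theme by the team.
grouped = {
    "onboarding": ["Why do users abandon sign-up?",
                   "What do users expect on first launch?"],
    "pricing":    ["Which plan do small teams pick, and why?"],
    "search":     ["What words do users type when looking for help?"],
}

# Hypothetical dot votes: each team member places dots against themes.
votes = ["onboarding", "onboarding", "search",
         "onboarding", "pricing", "onboarding",
         "search", "onboarding", "onboarding"]

def focus_questions(grouped, votes):
    """Return the winning theme and its specific research questions."""
    top_theme, _ = Counter(votes).most_common(1)[0]
    return top_theme, grouped[top_theme]

theme, questions = focus_questions(grouped, votes)
print(theme)      # the overall theme the research should cover
print(questions)  # the specific questions to answer in the sessions
```

The winning group gives the overall theme, and its individual sticky notes become the concrete questions to take into the research sessions, exactly as the exercise describes.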
Info
Channel: northernux
Views: 16,369
Rating: 4.918644 out of 5
Keywords: usability, user experience, ux, nux, conference, design, nux3, userfocus, usability testing, user research
Id: sS81W1xHuVw
Length: 35min 20sec (2120 seconds)
Published: Mon Dec 01 2014