User Testing

Reporter: Tom. (Sorry, these notes are not formatted nicely!)

Why are you here?

  • Laura: Go around the room!
  • Tom: A personal point of pain.
  • Harvey: Need to get admins to listen to me
  • Debbie:
  • Trinh: Getting it right the first round sets things off right
  • Spenser: Attempting to go paperless, so more users are using systems; how to get their feedback.
  • Amanda: Large, diverse org.; need to collaborate & pull people in early.
  • Brianna: Same as Amanda.
  • David: More of the same :)
  • Jon: Eval & Measurement
  • Katey (?): Custom Dev. -- Design for development & how to do that.
  • Ken: Some tools that I've made I've shoved down users' throats; I want to get them swallowing on their own.
  • Karen: Interested in understanding the whole site from the user perspective.
  • Melinda: Figure out best way to get everyone all together on the goals for the database.

Laura's Experiences

  • Laura: Feel very passionate about this & have years of experience as a professional Info. Architect (designs the navigation & functionality of websites based on users). Did serious research on understanding what users want.
  • Laura: There's a whole span of user research. Up front, it's about understanding the incoming mental model for a site or application: what do users expect the system to do, how do they expect to interact with it, and what things in their life might it replace. It continues all the way through assessing how important certain things are compared to others. A big issue is understanding whether users want a feature or not; you can tell you totally missed the mark when they just don't get it. There are also techniques for very specific exercises, like card sorting to define categories (e.g. the navigation of a website). At the end, once you have something in some form, you can do user testing on it. It's futile to do it *only* at the end; better to make it an iterative process. Ultimately you do user testing to understand whether your website is doing the things you set out to do (using various metrics).
  • Laura: Questions, reactions, etc.
  • Karen: Card sorting?
  • Laura: I've done a card sorting project where there's a foundation with hundreds of publications and we needed to define a scheme to create 10 buckets to divide up the various publications (we actually did a faceted organization with multiple different categories).
 * Need to get a representative sample of the content (e.g. 100 out of thousands of items).  Optimal # is between 60 - 80.
 * Tools available online, but often done with actual cards.  Label cards with the names of *actual* publications.  You have a stack of cards that represent the publications.
 * Get people one at a time and get them to divide the cards into piles that make sense (e.g. 3 - 10 items per pile).
  • Melinda: I have been the subject... then what do you do?
  • Laura: There are utilities that will do statistical analysis to figure out clusters of things (see the sketch at the end of this section). Just as useful are the actual *buckets* that people have created (the categories themselves). You'll get drastically different kinds of categorization (white paper vs. summary as opposed to topical).
  • Karen: At what part of the process do you do this?
  • Laura: Design level after you have a scope for the process.
 * Open: they get to label buckets
 * Can also do a closed sort where you already have the buckets labelled to get cues as to where to put things.
  • Karen: These are the *end users* (either part of org. or otherwise)...do you pay 'em?
  • Laura: Yeah, you might need to pay them. NPOs often have the ability to bring people in. Have others done this?
  • Melinda: This is the best idea I've heard in 3 years!
  • Laura: It's a time-honoured technique. There are tools available online. It's preferable to do it in person because it's less abstract.
  • Melinda: Seems like an easy way to get everyone involved.
  • Jon: What are you trying to get...attitude, analysis?
  • Laura: You're trying to figure out how people categorize things in order to (e.g.) create a navigation for a website... how do we create a structure that makes sense given the data.
  • Melinda: What about a survey to do the same thing?
  • Tom: We've used questionnaires for features!
  • Katie: User testing doesn't always fall as the highest priority. Any publications to direct us to that would help argue its importance?
  • Laura: Jakob Nielsen. http://useit.com and several books. He's become a pundit (very black & white about this sort of stuff), but he has a background in it with data to back it up. Another group is http://uie.com, User Interface Engineering -- a consulting firm that does seminars and training on user interface design, but they also have nuanced material about doing this sort of work. Nielsen has the data & proof to back you up; if you want the answer to a specific question, UIE is probably better.
  • Melinda: Lots of research -- there's a lot of info out there, but I feel lucky when I find the info I need.
  • Laura: Blueprints for the Web by Christina Wodtke. Good overview of process of designing a website.
  • Tom: The Humane Interface
  • Melinda: Accessibility? Is that still out there & how does that interact with this?
  • Laura: This is often treated as a separate issue because there are understood standards behind accessibility, while usability is much fuzzier. Accessibility is a baseline. For anything beyond a very limited, known group of users you clearly need to consider it.
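The online card-sort tools Laura mentions do this kind of analysis for you; the following is a minimal sketch, assuming Python with SciPy and made-up pile data (none of this is from the session), of clustering cards by how often participants put them in the same pile:

  # Count how often two cards land in the same pile across participants,
  # then run hierarchical clustering on that similarity.
  from itertools import combinations

  from scipy.cluster.hierarchy import fcluster, linkage
  from scipy.spatial.distance import squareform

  # Hypothetical card-sort results: each participant's piles of card labels.
  sorts = [
      [["Annual Report", "Budget Summary"], ["Youth Programs", "Drug Prevention Guide"]],
      [["Annual Report"], ["Budget Summary", "Youth Programs", "Drug Prevention Guide"]],
  ]

  cards = sorted({card for piles in sorts for pile in piles for card in pile})
  index = {card: i for i, card in enumerate(cards)}
  n = len(cards)

  # co[i][j] = number of participants who put cards i and j in the same pile.
  co = [[0] * n for _ in range(n)]
  for piles in sorts:
      for pile in piles:
          for a, b in combinations(pile, 2):
              i, j = index[a], index[b]
              co[i][j] += 1
              co[j][i] += 1

  # Turn co-occurrence counts into a distance (0 = always sorted together).
  participants = len(sorts)
  dist = [[0.0 if i == j else 1 - co[i][j] / participants for j in range(n)]
          for i in range(n)]

  # Average-linkage clustering, cut into (say) 2 buckets.
  tree = linkage(squareform(dist), method="average")
  for card, bucket in zip(cards, fcluster(tree, t=2, criterion="maxclust")):
      print(bucket, card)

The clusters are only a starting point; as Laura notes, the labels people give their own piles are just as useful as the statistics.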

High Level Process

  • Laura: Surveys are one method for understanding users' desires.
  • Katie: Observation -- just watch users use the site and take notes. We were considering using gotomeeting(.com) to watch users use what you've developed for them.
  • Laura: User test the current software or competitor's sites (or similar sites) to see how people use them.
  • Brianna: What do you watch for?
  • Katie: We haven't yet, but I imagine we'd watch (based on specs) to see if they have to look around to be able to complete a task that you've given them. Also ad-hoc observation.
  • Laura: I'd give a few tasks, from the high level down to granular. I've asked people "Where would you go?" "If you wanted to buy your mom something for Mother's Day, find something here!" You can then see what they might do or what they would think about. Do tests on an iteration of your own site. Think through medium-level tasks -- probably not "click on the save button", but more like "You're looking for resources to help kids with anti-drug issues".
  • Melinda: It's important not to do pre-training.
  • Laura: You want to at least know what experience people have when they're doing this. It's very instructive to get users who don't have any knowledge of the org. Have 'em say what they think the organization *does* -- it can reveal a lack of mission statement, etc.
  • Amanda: I have an idea of what the user base is like & it's varied. Not just education level but the literacy level & do they understand technology. Also culturally. How to gain that kind of info?
  • Laura: Probably interviews or focus groups. I'm a huge fan of interviews. They can be expensive & time-consuming but you can get a whole bunch of info in a one-on-one interview. You'd probably need to do 4 or 5 in a particular group.
  • Melinda: I need to know how literate the potential users are.
  • Laura: I've done a lot of phone interviews that work well, but if people are more comfortable in person you need to do that. Many people would recommend a focus group to ask them questions in aggregate. I'm not a big fan, since focus groups give the illusion of group data, but it's not *actually* group data -- you've probably got only one or two real data points.
  • Brianna: In an interview series, what questions would you ask?
  • Laura: You need to define what you want to get out of it. If you're thinking about the website generally, you might ask "Do you use it?" "If so, what do you use it for?" "What websites do you use and which ones do you like?" If you're trying to provide info about what the cops are up to, find out how *they* figure that out currently. How are they currently meeting the need you're trying to solve for them? How do they think about what you're hoping to do?
  • Melinda: People are able to fill out a form and put data into FileMaker, but we have no way to search or tag, etc. So we don't have a way to search for an individual cop's name. The admin team came up with FileMaker with the theory that if there are issues down the line they will get fixed. And still it's not developed to work as a searchable db. But we're on the wrong track.
  • Laura: To get user input, you need to understand what people want to know. What *kinds* of things would you want to know about cops' activities in the neighborhood?
  • Melinda: We need to figure out how to include the content of the video tapes.
  • Jon: Is the goal to increase the broad usership of the site or to raise money?
  • Laura: A website has many goals and the point of getting user feedback is to further a particular goal that you've identified.
  • Jon: If you have navigational cues about a goal... when do you get diminishing returns on the data that you've gathered about who uses the site?
  • Melinda: At a certain point you need to ask people.
  • Spenser: ideas.salesforce.com -- if you have a feature request that you'd like to see made to the project, you submit it somewhere and every month you get X votes that you can allocate. If something gets enough votes it goes to the top and they use that to figure out what's important for users. Using the website to figure out what users want.
  • Laura: If you had a motivated user base, then that might work quite well.
  • Laura: I think it's important to talk about what the return will be. There's not a ton of user input you can get that's super-easy and "no big deal", so you need to ask yourself "Is this worth the effort?" It's better to talk to a few people than no people. Things like web analytics can give you interesting insights into what's working and what's not, but the "Why?" is tough to discern -- it's much more straightforward to just ask. You can go through "If I change X, what happens?" (see the sketch at the end of these notes).
  • Jon: One meta issue is to see where you are in the rankings. Then there are the perception issues with respect to the site itself. What motivates people in the non-profit world to be at a website in the first place? Some sites have a very small, dedicated user base and others have a huge one. You need to ask efficacy and user base questions first.
  • Melinda: Part of what you're doing is just getting general information out there that you feel people need to have exposure to.
  • Amanda: There are priority levels with various pieces of content.
  • Jon: Do you have questionnaires, do people fill 'em out?
  • Amanda: Good question. We often present the same info in the same ways. Distribute info in different formats; secondly, repurpose the same info in different ways.
  • Jon: Analyze by click?
  • Amanda: Yes, but one goal is to inform members and get new members. How do I learn about people who aren't already participating? I can't just pull things and then try to figure out which people haven't visited before. One way to do that is to identify your potential users and then figure out *how* to ask them. They are not likely to volunteer info, and even getting them to answer direct questions is hard.
  • Jon: It's not very personal to just have a generic questionnaire.
  • Laura: Other questions?
  • Ken: I think that to a certain degree one thing that hasn't been discussed is what you do with quantitative data. Is *that* the answer? Sometimes it's not, if you don't know your users. You can manipulate the data and maybe reach conclusions, but if we don't have a dialogue with users we're working without any direction. What are best practices in revisiting those metrics? What do you do when you see metrics change in response to something that you've done (e.g. re-organize a menu and see a surprising result)?
  • Laura: How are you using web stats?
  • Melinda: I removed an in-between step and people stopped going to the personal sites. It was an extra step that I removed, and yet people stopped taking that route.
  • Ken: How do you react when audiences do something counter-intuitive?
  • Brianna: One thing that works well is a very active blog that's separate but linked in. We found that more people visited the blog than the website. It was the analytics on both that made the case to move the blog to the website.
  • Jon: Membership orientation? You can maybe get folks to participate in forums? What if you have clicks that indicate you have 1K interested folks, and you might be able to derive 50 - 75 members who will communicate via forums? If you can identify a core group of users who care enough, then they can drive the development.
  • Spenser: Then maybe you create the form for recommendations.
  • Brianna: We're getting comments on the blog too and lots of people get very involved there.
  • Laura: In the realm of pulling out interested members: if Gunner were here, he has a very strong idea of pulling out really interested individuals -- finding the folks who feel very passionately that the project is not going well and then pulling them into your internal project team. There are two different schools: one is where you're being the researcher in an almost ethnographic way, and the other is to bring people in and get them involved in your day-to-day work. It means you can have real deep interactions and get priorities, but at the same time you're defining the project around the needs of folks who are very passionate but may not be the core user base, and sometimes the bulk are hard to get info from.
  • Debbie: One thing I was considering was how you negotiate between the feedback from the core group and the mission of the org. Sometimes you *want* users to see certain pages but the users are not seeing them. You have to actually *take* the opinions and use that information. One issue that we deal with is how to meet the needs of the network and the goals of our org.
  • Katie: It's scary to make yourself vulnerable to receive that feedback. As PMs you have to go back to the focus of the goal that you've set for the project and always to manage the scope and say "That's an awesome idea but probably not for the current scope. Save it for the next iteration."
  • Laura: There's a very real tension: say you're selling shoes and people come trying to buy clothes -- then you've got an issue. NPOs are often attempting to sway opinion, but sometimes people are there to get info on how they can help with the pollution in the local river while you're trying to convince them to stop drinking bottled water. All you can do is weigh the transaction to make sure that the user gets something that they wanted, but also make sure that what you're trying to provide is also happening. How do you meet their goals while serving your own? It comes up internally too, in terms of having people enter time when they just want a document! del.icio.us fills both needs simultaneously *really* well.
  • Brianna: Getting feedback from users and stakeholders in advance is tricky -- by the time you bring them in, the input can be so all over the place that it can derail where you've already gone. How do you bring in folks who are not designer-techie at an appropriate point?
  • Melinda: E.g. "What is tagging?"
  • Katie: I'd say when creating (I have a web dev background), we're changing our process: when we collect specs, we have QA include user testing, and we have clients sign off on what they want to have happen, so they are kinda defining the tests later on. It's not foolproof, but having that buy-in very early on in the design/dev process helps a lot.
  • Jon: Is that beta testing? Isn't there a big difference in the two audiences?
  • Katie: Also user testing on back-end and that's what I'm thinking of here.
  • Laura: I'd also say you've described a client-consultant model, but in an internal project it's about making sure that folks are involved from the beginning, trying to keep everything orderly, and avoiding having people popping in and out in the middle, which seems to cause the biggest problems. Also documenting important decisions in a way that's understandable.
  • Melinda: Establishing a framework up front, including terminology, might help.
  • Debbie: We've got an ED who is very opinionated. We kept her out of the process & got her approval to have the design team make the initial decisions, with her brought in at the very end, when we were down to our last three designs and it was *JUST* choosing. We took everything else off the table. We got buy-in from day one that she'd only have feedback on certain issues, and we kept the choosing options small.
  • Ken: How do you get the person to buy-in on that?
  • Debbie: It was trust on her part to trust the team. It was also a benefit for her because she was very limited on time. So it can be presented as a benefit too.
  • Ken: I can see that happening in other orgs where the person choosing needs to be carefully kept within the parameters.
  • Debbie: It was very clear from the start of the project.
  • Melinda: But the ED might have interfered with the collab.
  • Debbie: You can't talk her out of everything!
  • Jon: Did she need to suspend disbelief?
  • Debbie: She just has lots of opinions.
  • Jon: Boundary establishment.
  • Debbie: It's about how you present it -- we were saving her time. If the "client" trusts you, it can be very appealing.
  • Karen: You thought about her needs and your needs & compromised in advance too which allowed for rapid and yet careful collaborative development.
  • Debbie: It was a very fast turnaround, so everyone knew we needed to keep it as quick as possible.
  • Harvey: We've been doing websites for a long time and have had failures for one main reason, which is a lack of buy-in from admins. We can't get users interested. We're finally getting usage, but we had to sacrifice a lot, and eventually the admins just start tossing ideas out the window, so we have to do some give & take with those admins. They need to feel as though it's their project.
  • Laura: So were you cutting controversial things?
  • Harvey: It was all about how we were going to set it up. Because of an affiliation with another company we needed to work through their system and we *had* to use their server, and that gets us stuck with the tools they're giving us, but we have to in order to get buy-in -- otherwise no one will use the tools that just *we* developed!
  • Laura: There's getting input from the core team and the mass user base for the project, and then there are internal folks who aren't *on* the team but have input nonetheless. Other techniques to navigate this?
  • Tom: Make it very clear from the get-go as a consultant who can say "We *have* to have a final decision maker who can't be overridden by admin."
  • Laura: Will this work for internal projects?
  • All: NO!
  • Ken: I like tech committees with lots of people without a tech background! Am I *really* working on things that people will use? I'm a huge fan of external consultants, because the best situations have been a brokerage between the ultimate client, the external developer, and the internal tech director.
  • Tom: I have ruled with an iron fist and insisted on getting people to work with me a certain way.
  • Ken: Yeah, that won't work for me with my internal code of ethics.
  • Harvey: I'm agreeing a lot here. I *can* lock someone out, and it's up to whoever was my boss. I've had a new situation where I had everything set up and we were ready to go, and the dept. head overrode me as PM. A year later, being waaay over budget and timeline, they come back to me and ask why it is not working the way *you* said! I was able to say "when I lost control of the project, this is how the scope ballooned beyond what *I* said originally!"
  • Laura: End of Time. Takeaways/AH-HA's?
  • Katie: Card sorting
  • Ken: Thanks to Debbie, the possibility that senior managers are capable of playing nice on large tech projects.
  • Jon: It's too amorphous a subject.
  • Melinda: I like to hear that others are trying to use statistics
  • Debbie: It was informal, with nothing written. (Going live June 16th)
  • Katie: The resources that were thrown out there.
  • Debbie: TCG is design and CivicAction is doing back-end.
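
A final sketch, referenced above: the "If I change X, what happens?" analytics check Laura mentioned. This is an illustration only (hypothetical visit counts, Python standard library, and a two-proportion z-test as one reasonable choice), not something presented in the session.

  # Compare how often visitors reached a target page before and after a
  # change (e.g. removing an in-between navigation step), and gauge whether
  # the shift is more than noise.
  from math import sqrt
  from statistics import NormalDist

  def proportion_change(before_hits, before_visits, after_hits, after_visits):
      """Return (before rate, after rate, two-sided p-value) for the change."""
      p1 = before_hits / before_visits
      p2 = after_hits / after_visits
      pooled = (before_hits + after_hits) / (before_visits + after_visits)
      se = sqrt(pooled * (1 - pooled) * (1 / before_visits + 1 / after_visits))
      z = (p2 - p1) / se
      p_value = 2 * (1 - NormalDist().cdf(abs(z)))
      return p1, p2, p_value

  # Hypothetical numbers: total visits, and visits that reached the page.
  before_rate, after_rate, p = proportion_change(420, 5000, 310, 5200)
  print(f"before {before_rate:.1%}, after {after_rate:.1%}, p = {p:.3f}")
  # A small p-value suggests the drop is probably real; the analytics still
  # won't say *why*, which is where asking users directly comes in.

Even when the numbers say a change is real, as Laura points out, the "Why?" still has to come from talking to users.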