Numbers are powerful (even though they are often misused in user experience). They offer a simple way to communicate usability findings to a general audience. Saying, for example, that a site "complies with 72% of the e-commerce usability guidelines" is a much more specific statement than saying it "has great usability, but it doesn't do everything right."

Metrics are great for assessing long-term progress on a project and for setting goals. They are an integral part of a benchmarking program and can be used to assess whether the money you invested in your redesign project was well spent.

Unfortunately, there is a conflict between the need for numbers and the need for insight. Although numbers can help you communicate usability status and the need for improvements, the true purpose of a user experience practice is to set the design direction, not to generate numbers for reports and presentations. Thus, some of the best research methods for usability (and, in particular, qualitative usability testing) conflict with the demands of metrics collection.

The best usability tests involve frequent small tests rather than a few big ones. You gain maximum insight by working with 4–5 users and asking them to think out loud during the test. As soon as users identify a problem, you fix it immediately (rather than continue testing to see how bad it is). You then test again to see if the "fix" solved the problem. Although small tests give you ample insight into how to improve the design, such tests do not generate the sufficiently tight confidence intervals that traditional metrics require.

Think-aloud protocols are the best way to understand users' thinking, and thus how to design for them, but the extra time it takes for users to verbalize their thoughts contaminates task-time measures. Plus, qualitative tests often involve small tweaks from one session to the next, and, because of that, metrics collected in such tests are rarely measuring the same thing. Thus, the best usability methodology is the one least suited for generating detailed numbers.

One of the more common metrics used in user experience is task success, or completion. When we run a study with multiple users, we usually report the success (or task-completion) rate: the percentage of users who were able to complete a task in a study. Like most metrics, the success rate is fairly coarse: it says nothing about why users fail or how well they perform the tasks they did complete. Nonetheless, success rates are easy to collect and a very telling statistic. User success is the bottom line of usability. After all, if users can't accomplish their target task, all else is irrelevant.

Success rates are easy to measure, with one major exception: how do we account for cases of partial success? If users can accomplish part of a task but fail other parts, how should we score them?

Let's say, for example, that the users' task is to order twelve yellow roses to be delivered to their mothers on their birthday. True task success would mean just that: Mom receives a dozen roses on her birthday. If a test user leaves the site in a state where this event will occur, we can certainly score the task as a success. If the user fails to place any order, we can just as easily score the task as a failure. But there are other possibilities as well: users might order twelve yellow tulips, twenty-four yellow roses, or some other deviant bouquet; fail to specify a shipping address, and thus have the flowers delivered to their own billing address; or specify the correct address but the wrong date.
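As a small, hypothetical illustration (not from the article), the sketch below shows one way a study's task-completion rate could be computed, with a configurable partial-credit convention for users who complete only part of the task. The 0.5 credit, the function names, and the sample outcomes are assumptions for illustration, not a prescribed scoring rule.

```python
# A minimal sketch, assuming a three-way classification of each user's attempt
# ("success", "partial", "failure") and an assumed partial-credit convention.

from typing import List

def success_rate(outcomes: List[str], partial_credit: float = 0.5) -> float:
    """Return the task-completion rate as a percentage.

    outcomes: one entry per test user ("success", "partial", or "failure").
    partial_credit: credit awarded for a partial success (assumed convention,
    not specified by the article).
    """
    score = {"success": 1.0, "partial": partial_credit, "failure": 0.0}
    return 100.0 * sum(score[o] for o in outcomes) / len(outcomes)

# Hypothetical study of the flower-ordering task: 8 users, where "partial"
# might mean an order was placed but with the wrong date or wrong address.
outcomes = ["success", "failure", "partial", "success",
            "failure", "partial", "success", "failure"]

print(f"Strict success rate: {success_rate(outcomes, partial_credit=0.0):.0f}%")  # 38%
print(f"With partial credit: {success_rate(outcomes):.0f}%")                      # 50%
```

Treating partial successes as plain failures gives a stricter, easier-to-defend number; awarding partial credit captures more nuance but requires you to define, before the study, exactly what counts as a partial success.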