As part of our LMS selection process, we’re conducting some small-scale usability testing. We want to see how much people can figure out in each system without any instructions, to help determine which system will be most intuitive. We also want to identify specific places where we will definitely need to do training, and to get a better idea of the trouble spots for users.
Although I’ve read Jakob Nielsen’s Designing Web Usability and other books, this is the first time I’ve had an opportunity to design and conduct any usability testing. It’s definitely been an enlightening experience, so I want to reflect on the process and what I’ve learned in this first round of testing.
Test Design & Process
We are testing about a dozen people total, with everyone on the team conducting some tests. The testers are divided into 4 groups:
| | LMS A | LMS B |
|---|---|---|
| **Student** | Student: LMS A | Student: LMS B |
| **Facilitator** | Facilitator: LMS A | Facilitator: LMS B |
We also have 2 people who are testing both systems; one in the student role (thanks Mom!) and one as a facilitator. The testers have a pretty wide technical range, although we aren’t including anyone with extremely low technical experience. Partly, that’s because we’re using Adobe Connect to record the actions of most testers, and we need people who can manage multiple programs at once.
For each set of tasks, we have a script with a set of prompts. We tried to give the tasks a somewhat logical flow; for example, the facilitator testers grade the assignments right after viewing each type of student work. The tasks are grouped, and every few tasks we also ask the testers to rate the overall ease of use for those features. We could have asked after every task, but we knew we had a lot of tasks to test and wanted to reduce the time.
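To make the structure concrete, here is a minimal sketch of how a grouped task script like this could be represented and walked through. The group names, prompts, and 1–5 rating scale are hypothetical examples for illustration, not our actual test script.

```python
# Hypothetical task script: tasks are grouped by feature area, and one
# overall ease-of-use rating is collected per group rather than per task.
task_groups = [
    {
        "group": "Blogs",
        "tasks": [
            "Find and read Joe Smith's blog post for Activity 6-A-4.",
            "Leave a comment on the post.",
        ],
    },
    {
        "group": "Grading",
        "tasks": [
            "Grade the assignment you just viewed.",
        ],
    },
]

def run_script(task_groups, ask):
    """Walk through each group of task prompts, then collect a single
    ease-of-use rating for that group. `ask` is whatever mechanism
    delivers a prompt to the tester and returns their response."""
    ratings = {}
    for group in task_groups:
        for prompt in group["tasks"]:
            ask(prompt)  # tester performs the task; observer takes notes
        ratings[group["group"]] = ask(
            f"On a scale of 1-5, how easy were the {group['group']} tasks overall?"
        )
    return ratings
```

Keeping the script as data like this makes it easy to reorder tasks between rounds of testing without rewriting the flow logic.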
We used partial mock courses for the tests with dummy accounts and student work provided.
Writing the prompts for the facilitator tasks took much longer than I originally anticipated. Part of the problem was simply not having the actual site completed when I was writing the directions; we hadn’t made all the layout and design decisions when I started. I also caught myself repeatedly giving too many hints in the prompts, describing the process of how to do a task instead of just describing the desired end result and letting the tester figure it out. For example, I might write “Go to Joe Smith’s profile to view his blog and read the post for Activity 6-A-4” instead of simply “Find and read Joe Smith’s blog post for Activity 6-A-4.”
Although I had tested all of the tasks individually on my own, I hadn’t actually run through the entire test from start to finish. I should have. Today I discovered that I needed to reverse the order of 2 tasks; archiving a discussion prevents the facilitator from being able to grade it. Oops. Fortunately I think I was the only person who conducted tests with that order of directions, so no one else was affected. For the next round of testing though, I definitely need to complete a full walk-through myself.
During the actual testing, I had planned to keep track of the timing for each task. Between juggling the task prompts, keeping notes on the tester’s actions and thought processes, and orally guiding the tester as needed, I just wasn’t able to do that too. Fortunately, I have the recordings to review to determine the timing. With more practice, maybe I’d be able to manage. In the next round of testing, I’ll have one in-person tester (my dear husband was ~~coerced~~ volunteered to help). Since I won’t have a backup recording for that session, I’ll probably just have to make him wait between tasks so I can finish my notes.
We’re still a ways from having all the data collected and analyzed, but the testing has already given me ideas about training and places we could improve the interface. If you’re selecting a new LMS, I would highly recommend doing this kind of testing. When you spend so much time working with these systems, it’s easy to be blinded to some of the tricky spots for new users. Even just watching one or two newbies walk through your system for the first time can be really beneficial.
If you’ve done usability testing for an LMS or any e-learning before, I’d love to hear about your experiences. What did you learn about the process?