Improving UX on the Cheap
Digital Innovation Gazette: UI/UX
By John Moore for Digital Innovation Gazette
Being an app developer is a lot like being a parent: You never want to hear that you have an ugly baby.
But in the case of the developer, a bit of criticism can lead to improvement. App builders can avoid a lot of pain down the road if the bad news comes early enough in the software lifecycle. Smaller shops and individual developers can test early iterations of their apps on users, and the resulting feedback can inform subsequent versions that win over customers.
Usability testing is one technique for flagging dire problems before an app is unleashed on an unsuspecting world. Usability testing provides a snapshot of a user’s initial reaction to an app, while other methods may be employed to track longer-term usage.
Testing requires planning and the ability to craft focused and probing questions. As for monetary investment, tests don’t have to be expensive. There are plenty of low-cost DIY affairs, and even those involving an outside testing firm need not break the bank.
Usability testing can help developers unearth some scary app-killing deficiencies. “Usability testing is a wonderful, powerful research technique that finds the big hitters, the big problems that will shatter the user experience,” says Gavin Lew, managing director of User Centric Inc., a user experience research and design firm based in Oakbrook Terrace, Ill.
Developers should quickly build a simulator that users can test rather than taking the time to create a “gorgeous prototype,” suggests Lew. The objective is to focus on a handful of core app functions that first-time users can put through their paces. Users may stumble on some features and find that others work well. The resulting feedback should help guide subsequent app iterations.
“Usability testing and the interactive design process are all about making mistakes faster,” says Lew. And fortunately, the user test population doesn’t have to be enormous to obtain actionable results.
Blink Interactive Inc., a user experience research and design firm in Seattle, Wash., recommends having eight to 10 users for simple studies, says Tom Satwicz, user researcher at Blink, on the company’s blog.
Developers who are concerned with focus group size -- and, more to the point, cost -- can use Blink’s usability sample size calculator to get an idea of what to expect. The calculator lets developers adjust factors such as the number of user groups to be compared (e.g., novice and expert users) and the number of designs to be compared.
Lew, meanwhile, says seven participants will work for a usability test. With usability testing -- also called formative or iterative user research -- major issues that unravel the user experience tend to manifest quickly and often, he notes. “So, if a user can test these two to three features in a 60-minute, one-on-one usability testing session -- without introducing undue bias from the feature use -- pragmatically speaking, seven participants is sufficient,” says Lew.
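The small sample sizes Lew and Blink recommend line up with the classic problem-discovery model widely cited in usability research (the article itself doesn’t cite it, so treat this as background). A rough Python sketch, assuming each test user independently surfaces a given problem with probability p:

```python
# Back-of-the-envelope sketch of the problem-discovery model from
# usability research: the chance a given problem is seen at least once
# across n one-on-one test sessions, assuming each user independently
# hits it with probability p.

def problems_found(n: int, p: float = 0.31) -> float:
    """Expected share of problems surfaced by n users.

    p = 0.31 is the average per-user detection rate often quoted in
    the literature -- an assumption, not a property of your app.
    """
    return 1 - (1 - p) ** n

def users_needed(target: float, p: float = 0.31) -> int:
    """Smallest n whose expected discovery rate meets the target."""
    n = 1
    while problems_found(n, p) < target:
        n += 1
    return n

if __name__ == "__main__":
    for n in (3, 5, 7, 10):
        print(f"{n} users -> {problems_found(n):.0%} of problems found")
    print("users needed for 95% coverage:", users_needed(0.95))
```

Under the assumed p = 0.31, seven participants would be expected to surface roughly nine in ten problems, which is why the “big hitters” Lew describes tend to show up quickly and repeatedly in small studies.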
Lew’s company can conduct an application test involving two or three features in a single day, which shouldn’t be cost-prohibitive, he says. User Centric’s one-day test includes the user test and a workshop with developers to explain the results. Developers should sit in on the one-day test to get a firsthand look at the users’ experience and understand the context of use, says Lew.
The usability test isn’t the only one developers should consider, however. Usability testing, says Lew, works well when it comes to gauging the first hour or so of a user’s experience. After that point, other forms of testing will be required to determine how the user’s experience evolves over a week or a quarter of ownership.
“You have to recognize that usability testing is not the only technique,” he says. “We must use other research techniques to truly create engaging user experiences.” He cites soak-testing and longitudinal studies as examples of tests that track an app over an extended period of use.
Tests that focus on the user experience require planning to get the most out of the process.
Developers should take the time to define a meaningful research question before pursuing a user test, says Lew. Just asking users whether they like an app isn’t particularly revealing. The idea is to focus on specific instances that can generate design insight.
“The point is to not have blanket statements like, ‘What are users doing?’” says Lew. Align your questions to business goals (e.g., features) and to whether the users actually use the features as the business intended, suggests Lew. “This way, the research is grounded. We can relate observations and findings to business objectives. This makes context and recommendations much more relevant.”
Some companies offer assistance with creating tests. UserTesting.com, a Mountain View, Calif., company that provides a usability testing service for websites and mobile apps, offers a Task Bank that the company’s clients can draw upon when setting up their mobile tests, notes a company spokesman.
UserTesting.com also lets customers select a test’s participants from the company’s user panel. The company is still in the process of growing its mobile user panel, so at this point the only demographic selectors are age range and gender. But a strategy game developer seeking strategy gamers could list “special requirements/demographic requirements” in the “Scenario” of the test. Users self-select based on the information listed. UserTesting.com charges $39 per test participant.
John Moore has written about business and technology topics for more than 20 years. Moore’s articles have appeared in publications and on websites, including Baseline, CIO Insight, Federal Computer Week, Government Health IT and Tech Target. Areas of focus include cloud computing, health information technology, systems integration, and virtualization. Moore’s articles have previously appeared in Intelligence in Software.