User testing (or battling assumptions and the death of reason)

Author: Ash Mann

Everyone makes assumptions. I do, you do, everyone does. All the time. Look at us all, making an ass out of you and umptions.

However, it’s a problem when these assumptions make their way into your website in the form of language, design decisions, functionality, IA, etc.

A swift way of challenging (and usually puncturing) these assumptions is to let actual, real users with their actual, real opinions use your site, watch them use it, and ask them to tell you what they think.

There are numerous ways of asking for and capturing this feedback, from quick and straightforward remote testing (a suite of options is available at Usability Hub) through to more involved/sophisticated options (such as Optimal Workshop’s Treejack). You could use session recording on your site with a tool like Hotjar. Deploy micro-surveys on your site with Mare. You could run more straightforward surveys with Typeform or good old Google Forms. Or you could do some observed user testing with a tool like Silverback or at a facility like Fluent Studios or The Insight Rooms. Or you could just have a chat with some of your users in person, via email, or on social media.

Basically, there are loads of options and I’d thoroughly recommend you give at least some of them a go.

Of course, watching someone else use the internet is maddening. Try it: it’s incredibly frustrating (and doubly so if you’re related to them; that’s a universal rule). Even more so when they’re using a site that’s yours, or one you made.

But the insights you’ll gain from watching someone else (or lots of someone elses) using your site will be invaluable.

No site is perfect; there will always be things you can look to improve. And this needn’t always involve additional development or new features; we often find that simple changes to language or tweaks to design can solve a lot of user issues.

As with any testing, you need a specific thing that you’re focusing on, or you’ll just be collecting lots of fairly useless data.

Running this sort of testing will identify issues rather than tell you how to solve them. But if you don’t know that there’s a problem that needs solving, you can’t come up with potential solutions.

Examples of some of the issues this sort of testing has helped us identify include: the way the navigation worked on the English National Opera site (we originally had a burger menu, which was swiftly dropped); the mobile layout on the site for The Bridge Theatre (particularly on the what’s on and production pages); the navigation labels and the on-page navigation on the Directors UK site; and the UX of the purchase pathway on the Royal Court site.

Being aware of how users are actually using your site (rather than how you planned or hoped they might), where the knots are, and how you could go about untangling them means you can treat your website as the evolving, ongoing project that it is, rather than letting it wither on the vine under the weight of fixable, but perhaps hidden, user frustrations.

If you want help setting up, running, and/or interpreting the results of tests like those mentioned above on your site, drop us a line: