In our recent webchat alpha, we looked at whether there are common webchat needs across government, and what opportunities there might be to meet those needs in a more consistent way. In this post, I’ll explain how and why we deliberately broke things to observe the webchat user journey.

Making research realistic

Researching webchat in a realistic way is quite tricky. People only ever need to use webchat when they hit a problem. Once this has been resolved, their journey in a service comes to an abrupt end. So to research webchat we needed to test journeys that were impossible to complete without additional support.

We had some doubts about this methodology. Is it ethical to break a user’s journey to test webchat? Will we still gain a true sense of people’s use of the service? How could we know which support channel they’d be most likely to use? By creating a ‘broken’ journey, would we be inflicting too much stress on participants?

Testing webchat with broken journeys

Despite these doubts, we went ahead and sent our users off on tricky journeys.

We looked for research scenarios that were complex, but not too personal. We created prototypes that included a breaking point in a user journey. This meant giving people tasks they wouldn’t be able to complete and making information impossible to find.

We looked at the whole user journey for this research, from the point at which the person started using the service, right to the end of their webchat session.

We didn’t just want to test people’s use of webchat, we wanted to see if they would choose it in the first place. In particular, we looked at the journey from the point when people started to struggle and needed help.

We disabled phone and email support options in the prototypes, and we limited support exclusively to webchat. We did this discreetly because we didn’t want to influence users’ initial support channel choice too heavily.

Information links were also disabled or stripped from pages. This meant people had to seek help to complete the tasks they’d been set.

Although we didn’t explicitly tell people they’d be pushed to use webchat to get help, some users may have guessed this was our intention. We noticed that users who were resistant to using webchat seemed more likely to notice it was their only option.

To counteract this, we ran shorter research sessions with fewer tests. We limited the number of tasks to a maximum of 2 per person to make sure people wouldn’t be influenced by their first task. But a few users still realised we were testing webchat. It was especially clear when we had to prompt people to use it.

Learning from broken journeys

Testing these broken journeys felt strange at first, but it taught us a lot about webchat.

We found that when they were looking for help, only 2 of our 28 participants chose to use webchat. The rest looked at webchat only when we asked them to try it.

Some people didn’t notice the webchat icons and didn’t realise it was an option. Most saw the webchat option, but were reluctant to use it.

The reasons that people were reluctant to use webchat varied, but we saw common patterns. Our participants:

  • thought they might not talk to a human, and didn’t want to talk to a robot
  • preferred to call so they could speak to a person
  • didn’t use computers often and weren’t familiar with the idea of webchat
  • preferred to call so they could multitask whilst on a call
  • had previous bad experiences of webchat and didn’t want to use it again
  • didn’t like the idea of a webchat adviser seeing their bank details on screen (our test scenario included payment steps)

When we asked people to try webchat, we still learned a lot:

  • most people saw phone as the best support channel, especially for complex issues
  • call cost, waiting times and the position of webchat icons all drive channel choice
  • people want to talk to another person, not a robot
  • webchat adviser names and photos made people more likely to think they were talking to a person
  • webchat icons on the right-hand side of a screen aren’t generally visible
  • users were more comfortable putting personal details into webchat than bank details
  • timeout warnings shouldn’t appear when users are typing – these can cause panic and frustration

And once people had tried webchat, they often said they would be keen to use it in the future. This included people who’d never used webchat before, or who were anxious about using it.

Learning about user needs makes services better

Before we tried using these broken journeys, we weren’t sure whether they would work. But the approach taught us a lot about webchat, and about users’ wider support needs.

We’ll be using these findings to help make our services stronger and more user-friendly.

If you’d like to have a copy of the webchat alpha patterns and findings, please do get in touch.

Keep in touch. Sign up to email updates from this blog. Follow Emma on Twitter.
