
Self-Service or Not, What We Can’t Measure Is Important Too

Questions around the performance of IT self-service, compared to more traditional IT support access and communication channels, open up a “can of worms” regarding the IT issues that never make it to any IT support channel.

Self-service isn’t a new idea anymore – it’s mainstream IT service management (ITSM) behavior. Just about every organization – and, more importantly, just about every user of IT services – expects self-service to play an active role in diagnosing, recording, and resolving the issues they encounter.

Of course, most organizations still maintain the human alternatives too – a real person you can contact to explain your issues and get support from. And once organizations offer both of these IT access and communication channels, questions inevitably arise about which is the better option: whether to encourage more self-service or to emphasize the availability and attractions of asking for help rather than self-help. So we measure stuff.

What Can and Do We Measure?

In IT, we can – and do – measure all sorts of things, but can we ever really know the complete consequences of providing or using one channel compared to the other? And if we can’t, does it actually matter?

There’s already a vast range of things measured in pursuit of understanding and improving the end-user self-service experience, for instance:

  • The ratio of interactions with the self-service and FAQ systems compared to calls to support staff.
  • Satisfaction surveys of staff who access the self-service and FAQ capabilities. (And we can compare that to satisfaction levels in those who chose, or needed, to call the service desk instead.)
  • The number of times people try self-service but then have to call the service desk anyway.
  • The percentage of visitors who click through several self-service screens before they successfully get the information they need.

So, there are lots of potential measures – and all give us some useful information. They help us decide on the right balance between self-service and human-delivered IT support.
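
To make the ratio-style measures above concrete, here’s a minimal sketch of how they might be derived from raw interaction records. Everything here is hypothetical – the Interaction schema, its field names, and the channel_metrics function are invented for illustration, and every ITSM tool will expose this data differently.

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    """One support interaction record (hypothetical schema)."""
    channel: str     # "self_service" or "service_desk"
    escalated: bool  # user tried self-service first, then called the desk
    resolved: bool   # did the user get what they needed?

def channel_metrics(interactions: list[Interaction]) -> dict[str, float]:
    """Derive the ratio-style measures discussed above from raw records."""
    self_service = [i for i in interactions if i.channel == "self_service"]
    desk_calls = [i for i in interactions if i.channel == "service_desk"]
    total = len(interactions)
    return {
        # Share of all interactions handled via self-service.
        "self_service_ratio": len(self_service) / total if total else 0.0,
        # How often self-service failed first and the desk got the call anyway.
        "escalation_rate": (sum(i.escalated for i in desk_calls) / len(desk_calls)
                            if desk_calls else 0.0),
        # Of the self-service visits, how many found what they needed.
        "self_service_success_rate": (sum(i.resolved for i in self_service) / len(self_service)
                                      if self_service else 0.0),
    }

# A few made-up records to show the shape of the output.
records = [
    Interaction("self_service", escalated=False, resolved=True),
    Interaction("self_service", escalated=False, resolved=False),
    Interaction("service_desk", escalated=True, resolved=True),
    Interaction("service_desk", escalated=False, resolved=True),
]
print(channel_metrics(records))  # e.g. {'self_service_ratio': 0.5, ...}
```

Note what all of these numbers have in common, though: they can only be computed from interactions that actually happened.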

And What Can’t We Measure?

Truthfully, we can only really measure what does happen, but there’s a whole raft of things that don’t happen too. What can we do with that? Don’t worry, I’ll get to the answer after a brief detour.

Let’s visualize the two ends of a spectrum of perceived self-service satisfaction:

  • “Everything’s fine” – end users access the FAQs, log their own concerns, and call the service desk for help with complex issues or to complain.
  • “It’s not right” – very few calls or issues are logged via self-service, and end users don’t spend much time looking at FAQs, how-to videos, and the like.

But there’s a big space in-between, wherein lie some potentially insidious and hard-to-detect issues – issues that affect all channels, not just self-service. Addressing them can create real effectiveness and efficiency improvements for an organization.

A good example of what I’m talking about here is how often a service, system, or piece of IT equipment might be running below optimum performance, yet nobody reports it or seeks help in getting optimum performance restored. You might call them “can’t be bothered” issues – they won’t stop a company from continuing to work, but we should be concerned to find out how much they matter in terms of reduced performance, profit, or business capability.

In IT, we measure stuff constantly – but how do we measure things that are never recorded? What about the “can’t be bothered to report” issues?

Is It a Case of “Just Soldiering On?”

Why would people not seek help with an IT issue? There could be several reasons why someone decides just to live with a degraded level of performance rather than get the issue resolved.

It happens in our real lives at home too, sometimes even more than at work. I can think of a slew of reasons why we might not seek repair or improvement to services (or products). Reasons like:

  • “It’s good enough for what I need, I’m too busy to take time out to report it.”
  • “If I try to get it fixed, they might make it worse.”
  • “It’s too complicated/boring/unpleasant to report it – if it gets worse, I suppose I’ll have to though.”

These are attitudes that you probably recognize in terms of how you deal with your cars, or even your own health. So be sure to recognize that they also exist with your customers, and users, in terms of the services you provide to them.

Or is It a Case of “Just Not Knowing?”

Even worse, both in terms of business impact and the ability to detect, is where service users don’t even realize things aren’t right.

Nowadays we expect applications and services to be intuitive – and thus it’s less and less likely that training will be offered on new or changed services. This makes it even more likely that end users simply accept the performance a service delivers rather than questioning it. Ultimately, no one will complain that something they don’t know about is missing.

And now back to the answer I promised…

What to Do About the Unreported Stuff

There are some pretty obvious things that can help you better see (or imagine) situations where end users are being failed or lost (to IT support) altogether:

  • Accept that it’s possible – that just because your users are quiet, and apparently happy, it doesn’t mean they’re as happy as they could be, or even should be.
  • Be sure that you know how things should be working. You can’t expect others to if you don’t.
  • Go talk to people, find out what’s actually happening – measure the service being consumed and compare it with expectations. For instance, internet speed and availability metrics might show that service level targets are being met while end users are unhappy with, and adversely affected by, the quality of service they receive (a simple sketch of this kind of comparison follows this list).
  • Don’t just wait for complaints. Maybe even consider applying Chaos Monkey concepts to the people aspect by deliberately “breaking” things and seeing what happens – from the business impact to the end-user response to the service desk and/or self-service capability. If you degrade the service, or turn off peripheral aspects, then do you get a rush of issues being logged, people flocking to FAQs, and so on? If not, then find out why.
  • Test the self-service and issue-logging system frequently by getting someone who doesn’t already know the answers to try to find them.
  • Those responsible for the design and testing of services know the targets they are aiming to deliver against. Don’t put those targets aside after you’ve gone live. Keep them updated and compare service performance against them frequently.
  • Just ask end users simple, direct questions, like “Do you often not bother to log issues?” This will highlight both good and bad things – for instance, that some end users are highly self-sufficient, or that IT support capabilities are out of touch with end-user requirements and are thus ignored in favor of other support avenues (or end users quietly soldiering on with their issues never resolved).
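
As promised above, here’s a minimal sketch of comparing measured service quality with design-time targets. The TARGETS values, thresholds, and sample data are all invented for illustration – the point is that an SLA-friendly average can hide a tail of degraded experience that nobody bothers to report.

```python
import statistics

# Hypothetical design-time targets: minimum download speed (Mbps) and
# monthly availability (%). Real targets come from your service design docs.
TARGETS = {"download_mbps": 50.0, "availability_pct": 99.5}

def compare_to_targets(speed_samples_mbps, availability_pct):
    """Flag gaps between what was promised and what users actually get.

    Meeting the target on average can still hide a poor experience, so we
    also check the bottom decile of the measured samples.
    """
    findings = []
    mean_speed = statistics.mean(speed_samples_mbps)
    # First decile cut point, i.e. roughly the speed the worst 10% of samples see.
    p10_speed = statistics.quantiles(speed_samples_mbps, n=10)[0]

    if mean_speed < TARGETS["download_mbps"]:
        findings.append(f"Mean speed {mean_speed:.1f} Mbps misses the target.")
    elif p10_speed < TARGETS["download_mbps"]:
        findings.append(
            f"Target met on average, but the slowest 10% of samples sit around "
            f"{p10_speed:.1f} Mbps - the kind of degradation nobody reports."
        )
    if availability_pct < TARGETS["availability_pct"]:
        findings.append(f"Availability {availability_pct}% misses the target.")
    return findings or ["All targets met - now go ask users if they agree."]

# Made-up measurements: an SLA-friendly average with an ugly tail.
print(compare_to_targets([62, 58, 55, 61, 12, 64, 59, 60, 57, 14], 99.7))
```

Run against the made-up samples, the average clears the 50 Mbps target while the bottom decile sits near 12 Mbps – exactly the quiet, unreported degradation this article is about.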

If you don’t detect these hidden situations, then you are missing out on some of the cheapest possible improvement opportunities and possibly also allowing quiet discontent among end users to fester and grow.


Posted by Joe the IT Guy

Native New Yorker. Loves everything IT-related (and hugs). Passionate blogger and Twitter addict. Oh...and resident IT Guy at SysAid Technologies (almost forgot the day job!).