How well is your customer service operation doing?
(What do your customers really think?)
(c) Aleks de Gromoboy 1997
aleks@cix.co.uk

Looking for an easy way to measure how well
your customer service operation is performing?
I've set up satisfaction surveys for a number of
customer service operations. The bad news is that there
isn't one easy measurement you can make. If you go for
excessive simplicity it may hide as much as it reveals.
However, getting good value out of customer surveys isn't
that difficult. You need to identify what you want to
know and the key factors contributing to those
goals. Then look at both the average satisfaction
against each factor and the spread of responses.
Some responses saying you're excellent and some poor is
not the same as all users saying you're OK!
The graph below illustrates the kind of spread you
often see. It shows results of a helpdesk survey where
the average score was a bit below acceptable - but there
were clearly two different types of users. By focusing on
the lower segment the helpdesk substantially improved the
average perception. If they hadn't recognised the two
types of user they might never have achieved acceptable
results. (The lower segment was the group of users who
had home PCs and so already had some expertise.)
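To make the point concrete, here's a minimal sketch (in
Python, with invented scores on an assumed 1-6 scale) of
how looking at the spread as well as the average exposes
two distinct user groups:

    from collections import Counter

    # Invented scores on a 1-6 scale
    # (1 = highly dissatisfied, 6 = highly satisfied).
    scores = [2, 5, 2, 6, 3, 5, 2, 5, 2, 6, 3, 5, 2, 5, 2]

    print(f"Average: {sum(scores) / len(scores):.2f}")

    # A crude text histogram: a two-humped spread like this
    # means two kinds of user, which the average alone hides.
    counts = Counter(scores)
    for score in range(1, 7):
        print(f"{score}: {'#' * counts[score]}")

Here the average (about 3.7) looks merely mediocre, but
the histogram shows one cluster around 2 and another
around 5.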

Some key considerations:
1: Identify the goal of the survey
There are a number of common survey goals. Typically
these include:
* identifying weak spots
* noting trends from the last survey
* comparing with industry standards
* testing the results of specific changes
* finding out how satisfied customers are with you
* proving that you're meeting specification
All these have different needs.
Most importantly, beware of measuring aspects that
you can't or won't be able to change - that's a waste
of money and time.
If it's customer satisfaction you're looking at, do you know
what's important to your customers? How do you know? One
of the classic mistakes of customer surveys is measuring
the wrong thing.
There are a number of ways of identifying customers' real
needs. Don't just rely on gut feel. Qualitative
information can be gathered through a small
preliminary survey or focus groups. You'll get a mix of
factors - identify the ones they consider most important
(say the top 5). There are a number of techniques for
identifying their priorities, e.g.:
* asking for a top 10
* allocating 100 points amongst the categories (sketched
below)
* pairing ("if you had to choose between speed of
answering and technical expertise, which would you
prefer?")
* giving them a list and asking them to score each item
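As a rough sketch of the 100-points technique (the
factors and allocations below are invented, in Python):

    # Each respondent splits 100 points across the candidate
    # factors; summing the allocations gives a priority order.
    allocations = [
        {"speed of answering": 20, "technical expertise": 45,
         "keeping informed": 25, "courtesy": 10},
        {"speed of answering": 15, "technical expertise": 40,
         "keeping informed": 35, "courtesy": 10},
    ]

    totals = {}
    for response in allocations:
        for factor, points in response.items():
            totals[factor] = totals.get(factor, 0) + points

    # Print the factors, most important first.
    for factor, total in sorted(totals.items(),
                                key=lambda kv: -kv[1]):
        print(f"{factor}: {total}")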
For example, one place I surveyed was sure that it
did a great job because calls were answered quickly and
problems were fixed quickly. The log data bore this out.
However, users were dissatisfied because what they
actually wanted was to be kept informed about when the
engineer would turn up, not just told "within 2
hours". They also felt it was more important to have
technical expertise in the call centre, not just message
taking and customer care skills.
In that example a survey that asked "Was the call
answered quickly enough?", "Did the engineer
turn up in time?" and "Was the engineer
competent?" would score highly. But if you asked
"Is the service good?" the score would be low.
So make sure you're asking the right questions.
There are a number of theoretical models for this sort
of feedback. While not that useful on their own they
provide a "sanity check" once you have found
out your customers' specific needs.
2: Get a random sample.
You'll never get a truly random sample. There's always
some degree of self-selection - but if you (say) do a
postal survey with only a 10% response rate then it's a
very small and specific subset of your users that you're
analysing. The statistical validity is low.
Try to force a random sample by making it easy to get the
feedback and not allowing the person doing the analysis
to (perhaps unwittingly) skew the sample. For example, you
could call back everyone whose log number ends in
zero and who hasn't been surveyed in the last 6 months.
It's a simple field to add to the logging database and
produces a reasonable sample.
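A minimal sketch of that selection rule, assuming a call
log with a log number and a last-surveyed date (the field
names and records are invented):

    from datetime import date, timedelta

    # Invented call-log records; in practice these come from
    # the logging database, with a last_surveyed field added.
    calls = [
        {"log_number": 10480, "caller": "j.smith",
         "last_surveyed": date(1996, 11, 3)},
        {"log_number": 10497, "caller": "p.jones",
         "last_surveyed": None},
        {"log_number": 10510, "caller": "a.brown",
         "last_surveyed": date(1997, 4, 20)},
    ]

    six_months_ago = date.today() - timedelta(days=183)

    # Log number ends in zero AND not surveyed in the last
    # 6 months: cheap to query and hard to skew by
    # cherry-picking.
    to_call = [c for c in calls
               if c["log_number"] % 10 == 0
               and (c["last_surveyed"] is None
                    or c["last_surveyed"] < six_months_ago)]

    for c in to_call:
        print(c["caller"], c["log_number"])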
3: Ask measurable questions.
For example, ask the subjects to score the particular
call on a scale from highly satisfied to highly
dissatisfied. Don't offer an obvious central score - an
even number of points means there's no safe middle value
to retreat to.
Also remember you're asking for perceptions, not facts:
"Was the call answered quickly enough?" not
"Was the call answered in 3 rings?". You're
really trying to identify how you're doing against what
they consider acceptable and against what they consider
to be excellent. It's more important that you're
acceptable in all areas rather than excellent in some and
unacceptable in others.
I also usually ask them to think about their last
encounter rather than try to average things out - there
are pros and cons to this.
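One way to act on that "acceptable everywhere first"
idea, sketched with invented per-factor averages and an
assumed acceptability threshold of 4 on a 1-6 scale:

    # Invented per-factor averages; 4 as the "acceptable"
    # bar is an assumption, not a standard value.
    ACCEPTABLE = 4

    factor_averages = {
        "speed of answering": 5.4,   # excellent - leave alone
        "keeping informed": 3.1,     # below the bar - fix first
        "technical expertise": 4.3,  # acceptable
    }

    weak_spots = [factor
                  for factor, avg in factor_averages.items()
                  if avg < ACCEPTABLE]
    print("Below acceptable:", ", ".join(weak_spots) or "none")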
4: Segment
You will have different user groups: old/young,
callers at different times, business/residential,
frequent/occasional users, expert/newbie. Each group has
different needs, sometimes conflicting. If you can tune
into your segments you will have a definite advantage.
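The simplest way to tune in is to tag each response with
its segment and average per segment rather than overall -
a sketch with invented data:

    from collections import defaultdict

    # Invented responses tagged with a segment; per-segment
    # averages expose conflicting needs that a single
    # overall average would blur together.
    responses = [
        {"segment": "expert", "score": 2},
        {"segment": "expert", "score": 3},
        {"segment": "newbie", "score": 5},
        {"segment": "newbie", "score": 6},
    ]

    by_segment = defaultdict(list)
    for r in responses:
        by_segment[r["segment"]].append(r["score"])

    for segment, scores in sorted(by_segment.items()):
        print(f"{segment}: average "
              f"{sum(scores) / len(scores):.1f}")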
Finally
Once you've identified the issues that each customer
segment considers critical, there are a number of quite
powerful general questions you should keep asking to stay
ahead of the game:
* Is the service better or worse than the competition?
* What one thing would you improve?
* How much would you pay for the service?
* Would you recommend us to a friend?
I hope you've found this useful. If you would like to
know more, please email me. Thanks for dropping in.
Aleks de Gromoboy 