Criteria Requirement 3.1b.(1) How do you determine customer satisfaction and engagement?

Judges' Survey

 

Posted by Jeff Lucas

One of the ways we do this is by conducting an annual survey of the applicants for the Malcolm Baldrige National Quality Award. This is a limited but important segment of our overall customer group: these organizations have made a significant investment in putting together their applications, and they are the recipients of one of our core products, the Feedback Report. We do have other listening posts for our products and services (Bob Hunt recently reported on the evaluations from this year's Examiner training here), including the use of social media.

We have conducted a survey of our applicants every year for at least a decade, but starting in 2009 we changed the format. In the past, we had used a fairly traditional satisfaction survey of approximately 35 questions, asking for ratings of specific steps in the process, features of the Criteria, helpfulness of staff, elements of the Feedback Report, and overall satisfaction. There were several problems with that survey. First, the effort required to answer all the questions seemed to keep response rates down in what should have been a relatively motivated population; even with electronic distribution, multiple reminders, and the like, we were never able to top a 65% response rate. Second, while we were getting a lot of data about likes and dislikes, we had no mechanism for finding out which of the items were really important to our customers. Third, it generated a lot of suggestions that weren't really actionable ("give everyone a site visit"), so we didn't act on them, probably creating a feeling of non-responsiveness among respondents.

So in 2009 we began using a Net Promoter Score (NPS) type of approach. We had done some benchmarking of Baldrige applicants and recipients and decided that this method of assessing the overall relationship with customers through a single question was an appropriate way to get at our intended outcomes for the survey. The question typically takes the form "How likely is it that you would recommend our X to a friend/colleague?" It is rated on a 10-point scale from Extremely Likely to Extremely Unlikely, and the score is calculated by taking the percentage of respondents giving a 9 or 10 and subtracting the percentage giving a 6 or below. In other words, a pretty high standard.
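To make that calculation concrete, here is a minimal sketch in Python (the function and the sample ratings are hypothetical, included only to illustrate the arithmetic; they are not our actual survey tooling or data):

    def net_promoter_score(ratings):
        """Compute an NPS-style score from a list of 1-10 ratings.

        Promoters rate 9 or 10, detractors rate 6 or below; the score is
        the percentage of promoters minus the percentage of detractors.
        """
        if not ratings:
            raise ValueError("no ratings provided")
        promoters = sum(1 for r in ratings if r >= 9)
        detractors = sum(1 for r in ratings if r <= 6)
        return 100.0 * (promoters - detractors) / len(ratings)

    # Hypothetical example: 7 promoters, 2 passives (ratings of 7-8), 1 detractor
    print(net_promoter_score([10, 9, 9, 10, 9, 10, 9, 8, 7, 5]))  # prints 60.0

The result is a number between -100 and 100, and any positive score already means promoters outnumber detractors.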

There is then a follow-up opportunity to provide information on "what one thing kept you from giving a higher rating?" We did add a couple of specific questions on the Criteria and the Feedback Report, while keeping the same measurement methodology.

This change almost immediately achieved one of our goals: the response rate climbed to 79% in the first year. We also got some interesting feedback and a much clearer picture of what we could do to improve the applicant experience. You can see the presentation that we made to the Panel of Judges, including both the NPS scores and the aggregated themes for improvement, here. For those who don't want to click through, a quick summary would be:

  • We have established a strongly positive relationship with these customers (NPS of 65 in 2009 and 70 in 2010; these are really strong scores on this metric with a pretty high bar — equivalent to those reported for folks like Intuit and Southwest Airlines).
  • They love the relevance of the Criteria (71 and 82, respectively).
  • Their Feedback Reports? Not so much. Those scores were only 6 and 10 for the last two years. (Remember the calculation above: this doesn't mean we had single-digit satisfaction, just that support for that product is much softer than for participation in the program as a whole; see the quick illustration after this list.)
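To illustrate why a low score does not imply low satisfaction, take a purely hypothetical distribution (not our actual results): if 40% of respondents give a 9 or 10, 34% give a 6 or below, and the remaining 26% give a 7 or 8, the NPS is 40 minus 34, or 6, even though roughly two thirds of respondents rated the product a 7 or higher.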

Not surprisingly, on the overall question, the major reason for a lower score was "feedback issues." When we drilled down into the Feedback Report itself, the top three responses were "difficult to understand," "not Criteria based," and "didn't get our business." These data were one of the major drivers for the changes we made this year to what comments should look like. So, we do collect customer data and try to act on it. Have we made the right choices to improve the customer experience? Look for the definitive answer in this space six months from now, or let us know what you think.



2 Responses to Criteria Requirement 3.1b.(1) How do you determine customer satisfaction and engagement?

  1. Steve Guns says:

    I was interested in reviewing the feedback you received from the NPS survey and the improvement themes (second last link) but could not resolve the link when I clicked on it.

  2. Jeff Lucas says:

Steve and everyone else,
The link should take you here: http://www.nist.gov/baldrige/enter/benefits_applying.cfm
The PowerPoint with the survey results can then be accessed by clicking the "MS-Powerpoint" link above the chart in the right column. Sorry, it is not the most transparent navigation.
