Customer Experience Survey Insights: Key Findings and Opportunities for Improvement

As part of our ongoing commitment to service improvement, we analysed the latest set of customer experience surveys submitted during July and August. While the data set is relatively small, it offers useful insights into how our services are being received and where we can continue to grow.


Customer Ratings Breakdown

Across the submitted responses, a clear positive trend emerged. A significant portion of the surveys fell into the top scoring bracket, with high ratings throughout. A few surveys that just missed this bracket still reflected strong satisfaction, scoring close to the maximum on the questions that carried a point value. In contrast, lower-scoring responses tended to contain more "Not Applicable" (N/A) and blank answers, which reduced their overall scoring potential.


Section-by-Section Analysis

SAFE

This section received particularly high marks. A combined 80% of responses were either "Outstanding" or "Good", with "Outstanding" selected 67% of the time across most questions. Notably, the question about whether customers feel safe was unanimously rated "Outstanding", the only question on the survey to achieve this.

EFFECTIVE

Responses in this section were more evenly distributed: 60% “Outstanding,” 20% “Good,” and 20% “N/A.” This section also recorded the highest number of “N/A” responses—six in total, with five coming from a single survey.

Despite these variations, the scores remained fairly consistent, with a narrow 0.5-point range between the highest and lowest averages. Most questions aligned with a “Good” rating, with the exception of the first, which averaged slightly lower at 2.67.

CARING

The "Caring" section continued the positive trend, with 60% of responses rated "Outstanding" and 30% "Good", the highest "Good" share of any section. Only 10% of answers were marked "N/A."

Average scores mostly hovered between “Outstanding” and “Good.” The fourth question dipped to “Requires Improvement” due to receiving two “N/A” responses, lowering its average to 2.5.

RESPONSIVE

This was the highest-scoring section overall. A substantial 77% of responses rated it “Outstanding.” The remaining 23% was divided among “Good,” “N/A,” and “Blank.”

Despite one question averaging lower at 2.67—impacted by a combination of “N/A” and “Blank” responses—this section recorded the highest individual question averages in the entire survey.

WELL-LED

The “Well-Led” section saw a more mixed set of responses. One survey skipped four out of five questions, while another selected “N/A” for the final three. In total, five responses across this section were left blank.

Despite this, 50% of responses were still rated “Outstanding,” with 20% marked “Good.” It was also the only section to receive every possible response type, reflecting a broader range of experiences or uncertainty.

Most question averages here fell between "Requires Improvement" and "Good." The first question stood out with an average of 3.67; it was the only one in this section to avoid any "N/A" or "Blank" responses.


Overall Sentiment

When looking at the survey as a whole, “Outstanding” and “Good” together accounted for 82% of all responses, reflecting a generally positive customer sentiment. The remaining responses were composed of 17% “N/A” or “Blank,” with just 1% marked as “Inadequate.”

Analysis suggests that the lower average scores were driven primarily by the frequency of "N/A" and "Blank" responses, which carry no points. In one instance, a single survey contributed 11 such responses out of the 25 total questions. Notably, replacing all "N/A" and "Blank" answers with the lowest point-bearing category ("Inadequate") would raise the average score from 3.1 to 3.43, indicating how significantly these response types affect overall outcomes.
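The substitution analysis above can be sketched in a few lines of Python. The point values used here (Outstanding = 4, Good = 3, Requires Improvement = 2, Inadequate = 1, with "N/A" and blank answers scoring 0) are an assumption consistent with the averages quoted in this report, not a confirmed scoring key, and the sample answers are hypothetical rather than real survey data.

```python
# Assumed scoring key: the report does not state point values explicitly,
# but the quoted averages (e.g. "Good" at ~3, maximum near 4) fit this scale.
POINTS = {
    "Outstanding": 4,
    "Good": 3,
    "Requires Improvement": 2,
    "Inadequate": 1,
    "N/A": 0,
    "Blank": 0,
}

def average_score(responses, substitute_zeroes=False):
    """Mean score across responses; optionally treat N/A/Blank as 'Inadequate'."""
    scores = []
    for answer in responses:
        score = POINTS[answer]
        if substitute_zeroes and score == 0:
            # Substitute the lowest point-bearing category for zero-value answers.
            score = POINTS["Inadequate"]
        scores.append(score)
    return sum(scores) / len(scores)

# Hypothetical answer set, for illustration only.
sample = ["Outstanding", "Outstanding", "Good", "N/A", "Blank"]
print(round(average_score(sample), 2))                          # 2.2
print(round(average_score(sample, substitute_zeroes=True), 2))  # 2.6
```

The gap between the two figures shows the same effect described above: zero-value responses pull an otherwise strong average down, so surveys with many "N/A" or blank answers score lower than their substantive ratings would suggest.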


Final Thoughts

Overall, the feedback collected shows a strong positive trend, with customers frequently rating their experience as "Outstanding." While a broader data set would offer deeper insight, the trends observed here still reflect consistent customer satisfaction with the services delivered. Adjusting the survey design to improve relevance, for example by reducing the number of questions likely to draw "N/A" or blank answers, could further enhance the value of future feedback.