Results from an audit customer satisfaction survey:
"The review was more in-depth than what I was expecting."
"The scale of questions was more indicative of a mature process."
Last week I received results from an audit customer satisfaction survey.
I typically do not worry too much about these, but this time, I was concerned.
Our team reviewed a completely new area of the business, one established in the middle of last year, and the review spanned four departments with varying degrees of involvement.
The review took longer than expected. We had planned to finish around mid to late December, but we completed it a little over a week ago.
We failed.
I failed.
I want to share that because I'm tired of hearing only about successes. We need to share our failure stories as well; we can learn more from those than from our wins.
Let's dive into what happened.
As mentioned, this was a completely new and complex area. We began with pre-planning by benchmarking against a company in our industry that was already doing what we do. They operate in a different footprint, so no one was sharing trade secrets or giving competitors an edge; that is worth emphasizing.
From there, we started reviewing massive amounts of documentation (thanks, Microsoft Copilot Studio, for that) to compare industry best practices to our policies and practices. We met with the business, presented a proposed scope, and incorporated the feedback we received.
Then came the part where I think we did not do as good a job, and, as I mentioned earlier, the responsibility is mine: creating the test procedures for the proposed scope area. Two things about these test procedures:
They were not really "sharp": they were vague enough to be interpreted in more than one way during the review.
They were overengineered, going well beyond simply covering the key controls over the key risks.
The result?
A review that was more in-depth than expected, with questions that were more indicative of a mature process.
How do you prevent that?
With really good, concise test steps that cover the key risks.
How do you come up with good test steps?
You cover only the key risks.
This was a major point Linh Truong made on my podcast; she saw a 70% reduction in project hours after adopting this approach. David Dufek shared the same approach in his Be Lazier article.
As he puts it, "Knowing what not to audit is as important as knowing what to audit." He concludes by saying, "...be lazier...[make] sure our efforts correspond with the underlying risks. Do no more than that."
How about you?
What lessons have you learned from failures surfaced in recent audit customer satisfaction surveys?