Questionnaire Design – Why most questions get bad answers

“O Deep Thought computer” [Fook] said, “the task we have designed you to perform is this. We want you to tell us …” he paused, “the Answer!”

Seven and a half million years of processing later, Deep Thought pronounced that the answer to Life, the Universe, and Everything was, in fact, 42.

So goes Douglas Adams’ “The Hitchhiker’s Guide to the Galaxy”, which serves as a really splendid illustration of how a poorly fashioned question will quite often soak up resources, time, and opportunity, and then deliver an answer that is almost entirely useless to man or beast.

There are several reasons why a questionnaire may turn out to be a crock, but here are some of my favorites:

  • You didn’t have a clear problem in mind
  • You don’t have a purpose for the answer
  • You were just filling dead air

For example, here are some clangers that are apparently very popular with many recruiters.

“If you were a shoe, what kind would you be?”

Nothing against footwear fetishists, but unless this question reliably predicts performance or tenure, what on earth will you do with the answers?
If current and past performers’ profiles don’t correlate strongly with a specific set of answers, and you aren’t going to operationalize and score the answers in order to rank-order applicants and then track the predictive power of the question over time, then of what possible use is such a question?
Unless you really have a system to categorize and put values to “I would be a size 5 red Prada stiletto” and rank it against “I would be an Adidas Predator Powerswerve”, this kind of question is a non-starter, and will just make the brighter of your applicants wonder if you are perhaps a bit bored or dim. Really, does a size 6 Prada rank higher than a 5? Is a red one more desirable than a blue when employing a software engineer? And what do you do if one applicant nominated patent leather and another suggested suede?

It may be a “fun question”, but what will you do with the answers?

At best you just have some “fun” and get paid for it, and at worst your own biases and beliefs will fill in the gaps and color the answers, and you will tend to hire people you like and who are more “like you”, rather than hiring people who will give your employer a competitive advantage.

“Why should we hire you instead of one of the other candidates?”

Same as above, but introduces a whole new conceptual problem – that of sending the more analytical applicants down a rabbit-hole of wondering just how you expect them to know who else you have interviewed.
They will sit there wondering if this is some sort of trick or if you are really that stupid that you didn’t notice that the answer was impossible for them to know.
It is also likely to bias massively against introverts.

“Tell me about a time when you …?”

Well, this one mainly measures two things, neither of which is likely to be germane to the role.
It measures memory, and it measures story-telling ability.
It may also be measuring the ability to confabulate, which is a nice way of saying that it invites the person to spin a grandiloquent bit of fiction – which is fine if you are trying to hire a novelist, but not so hot if it is a Financial Controller that you are after. Yarn-spinning finance staff may not be such a great idea.

I could list dozens more, but the simple rule of thumb is: derive questions from real evidence such as high-performer profiles and critical incidents; have a clear idea of what you will do with the answers; and if you don’t have anything to ask, don’t ask anything.


By the time I wrote this paragraph, my questionnaire for measuring Knowledge Management behavior and climate had progressed nicely.

The questionnaire wiki had attracted some dialogue and I had progressed from just a raw list to a refined list of items, each with a rudimentary explanation, and results from 20 people who took the beta version.
From this I was able to determine how long the questionnaire typically took (10 minutes), and to get feedback on any questions that were difficult to understand or that felt wrong to the participants.
On the wiki I was therefore able to state average scores from an n=20 beta test, and to produce a first Release Candidate of the KM questionnaire – which is open for anybody to try out at the time of writing this blog post.

Why am I doing this all out in public?

Simple: it’s cheaper and faster. Beta testing and refining items is usually expensive and time-consuming, and by crowd-sourcing the beta, I could do it free and fast.
By giving something away for free, I get a lot more back in return, also for free.

The next step in the lifecycle of a questionnaire like this is to put it to use – after all, the purpose of this is to derive competitive advantage through knowledge.

Next Steps – Action Piloting

There is nothing quite like reality to put something to the test and to force fast refinement and evolution, so the next step is to recruit companies who want to get the benefit but are prepared to be part of a pilot.
They get value and I get a fast refinement process. (If you want to participate, simply contact me).

The most probable outcomes for them are insights, specific action items, and a benchmark against which to measure the effects of remedial actions.
What I expect out of it is to see which question items lacked variation, which duplicated another in effect, and which failed to give usable measurements.
I also expect to discover some gaps – unexpected twists that I hadn’t foreseen and which need additional thought or question items.
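Those item checks can be sketched as a quick pass over the beta response matrix: flag items whose scores barely vary (they tell you nothing about respondents), and flag item pairs whose scores track each other so closely that they are probably measuring the same thing. The data and thresholds below are entirely hypothetical, just to illustrate the idea:

```python
from statistics import pvariance
from itertools import combinations

# Hypothetical beta responses: rows are respondents, columns are items,
# each answered on a 1-5 Likert scale.
responses = [
    [4, 5, 5, 2],
    [4, 4, 4, 3],
    [4, 5, 5, 1],
    [4, 4, 4, 5],
    [4, 5, 5, 2],
]
n_items = len(responses[0])
items = [[row[j] for row in responses] for j in range(n_items)]

def pearson(x, y):
    """Pearson correlation between two item-score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5 if vx and vy else 0.0

# Items with (near-)zero variance lack the variation to discriminate.
flat_items = [j for j, col in enumerate(items) if pvariance(col) < 0.1]

# Very highly correlated item pairs likely duplicate each other in effect.
duplicates = [(i, j) for i, j in combinations(range(n_items), 2)
              if abs(pearson(items[i], items[j])) > 0.9]

print("zero-variance items:", flat_items)  # item 0 never varies
print("likely duplicates:", duplicates)    # items 1 and 2 move in lockstep
```

With real data you would also inspect flagged pairs by hand before dropping anything, since two items can correlate for reasons other than redundancy.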

Final Goal

The final desired outcome is to have two things:

  1. A reliable instrument with which to measure a defined set of constructs in order to know where to make changes and whether the changes had the desired effect in both magnitude and direction.
  2. A dataset of responses that can be used to further refine the instrument or to develop a new one, and against which more research could be mounted.
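On the first point, “reliable instrument” has a standard statistical reading: the items should hang together consistently. One conventional first check (not necessarily the one intended here) is Cronbach’s alpha, sketched below on made-up Likert-scale data:

```python
from statistics import pvariance

# Hypothetical responses: 5 respondents x 4 items, 1-5 Likert scale.
responses = [
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
]
k = len(responses[0])  # number of items
item_vars = [pvariance([r[j] for r in responses]) for j in range(k)]
total_var = pvariance([sum(r) for r in responses])

# Cronbach's alpha: internal-consistency reliability of the summed scale.
# Values around 0.7 or higher are conventionally taken as acceptable.
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")
```

A low alpha on the pilot data would suggest the items are not all measuring one construct, which feeds directly back into the refinement loop described above.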

So why, you might ask, should one do this at all?

Because it is fun, of course, but it also makes business sense: it reduces effort wasted on initiatives that don’t actually deliver, and it gives the visibility that allows targeted change.
Going by “gut feel” is also a lot of fun, but it isn’t typically very effective or very reliable, and by introducing solid questionnaires and other instruments, one can make interventions that are more targeted, more effective, and far more likely to reduce cost and improve performance.

That’s my story and I am sticking to it.


Please contribute to my self-knowledge and take this 1-minute survey that tells me what my blog tells you about me. – Completely anonymous.


Matthew Loxton is a Knowledge Management professional and holds a Master’s degree in Knowledge Management from the University of Canberra. Mr. Loxton has extensive international experience and is currently available as a Knowledge Management consultant or as a permanent employee at an organization that wishes to put knowledge to work.

