Posts Tagged ‘questionnaire design’

Interview Questions – A way to get better performers, or get sued?

January 20, 2011

I was going to post a nice article on a fashion game-show and what it can teach us about business, and my stand-in article was on how to get value out of all that ubiquitous company gossip and rumor.
… but then I got into another long exchange with several recruiters and HR professionals about interview questions, and I decided to talk about that instead.

Besides, I love yapping on about statistics, and I came to realize that this subject is foreign territory for many HR people – according to three HR professionals, statistics and questionnaire design are not typically part of the training for HR staff and recruiters.

Status Quo

Here’s the situation:

Many recruiters have lists of their favorite questions to ask candidates, and there are more blogs and articles online than you can shake a stick at with lists of “best questions”, “favorite questions”, and “most common questions”. Some recruiters have their own lists, some draw from those blogs and articles, and others make up new questions as they go, or even do it on the fly during an interview.
What gets my giddy-goat is that while they all wax lyrical about how wonderful their questions are and how happy they are with the results, almost none volunteer how they determine that their questions do anything whatsoever other than make them happy and take time.

The articles tend to be empty when it comes to explaining the reasoning behind the “top ten/twenty/forty-two” list, and can’t point to results other than (at best) a few hand-picked and probably fictitious anecdotes. Many recruiters also espouse questions to “throw” the candidate, catch them off-guard, or startle them, which is supposedly going to reveal a “true character” or do something else that is simply marvelous but undisclosed.
… and of course there are many examples of those “why is a manhole cover round” sort of question which are spoken of in hushed and reverent terms.

My question is why one is asking these questions at all.
I mean, it takes time, effort, and presumably one needs to take notes and then compare answers, and time is money.
The answer is that the questions are going to give insight into the applicant’s personality and abilities.

Fair enough, I say.

… but this is hiring, and we are presumably trying to get better performers than our competition – performers whose output translates into achievement of corporate objectives such as EBITDA.
In which case, I am not so sure that we are trying to discover “true character” as much as simply trying to match applicants to a role in such a way that we are more likely to achieve operational goals – in other words, performance.

What to Do

Firstly, don’t ask illegal questions – they can get your employer into a whole heap of pain.
I say this because over the years I have been asked about my religion, my age, my national origin, and even my political affiliations, and each time I made a mental note to eradicate that if I joined the firm.
There is silly, and then there is just plain ridiculously silly – don’t ask questions that expose your employer to legal action for improper discrimination. It costs money, it harms the reputation, and it just isn’t necessary.
Simple rule: if you aren’t sure of the legality of a question, leave it out!

Secondly, get a book on questionnaire design and interviewing (Scheaffer, Mendenhall III et al. 1996; Oppenheim 1998; Van Bennekom 2002; Swanson 2005), and maybe one on qualitative analysis (Ezzy 2002).

Here are some basic points before we get to my recommendations on designing a process for interview questions:

  1. Getting experts for technical or specialized tasks is a good idea, and you shouldn’t stop when you reach staffing – I/O Psychology was founded precisely to address staffing issues.
  2. Don’t fret unnecessarily about sample size – a sample is used to estimate a feature of the population it belongs to, and since you aren’t trying to generalize from your applicant pool to the population at large, sample size matters less here.
  3. There are robust statistical methods for non-parametric situations in which sample sizes are small, such as the Kolmogorov-Smirnov, Wilcoxon, and Kendall tests, among others.
  4. Statistical tests are pretty much always going to be better than gut-feel and guessing since that is precisely what they have been designed to do. They exist because of the many and various biases and errors that come factory-installed in our Neolithic brains.
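For illustration, here is a minimal sketch of the small-sample tests mentioned in point 3, using made-up interview data (scipy is assumed to be available):

```python
# Sketch: small-sample-friendly nonparametric tests on illustrative data.
from scipy.stats import mannwhitneyu, kendalltau

stayed = [7, 8, 6, 9, 7, 8]  # interview scores of hires still employed at review time
left = [5, 6, 4, 7, 5]       # interview scores of hires who were released

# Mann-Whitney U (the rank-sum form of the Wilcoxon test) asks whether one
# group's scores tend to be higher, without assuming a normal distribution.
u_stat, p_value = mannwhitneyu(stayed, left, alternative="two-sided")
print(f"Mann-Whitney U={u_stat}, p={p_value:.3f}")

# Kendall's tau measures rank agreement, e.g. interview score vs. a later
# performance rating for the same people.
scores = [7, 8, 6, 9, 5, 7]
ratings = [3, 4, 3, 5, 2, 4]
tau, p_tau = kendalltau(scores, ratings)
print(f"Kendall tau={tau:.2f}, p={p_tau:.3f}")
```

Neither test requires large samples or normality, which is exactly the situation an interview battery puts you in.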

I often encounter this chestnut – “Past performance is a better predictor of success than chance.”

Yes, it is a better predictor than chance, but that doesn’t mean that the question or the specific past behavior selected is better than chance.
While it is true that amongst the myriad past behaviors there are those that would predict specific future behavior, there is no reason to believe that we have selected the right predictors or that what we believe to have been a predictive behavior is going to be so.
In addition, don’t forget that people learn, and learning is exactly the opposite of past predicting future.

Dr. Shepell, the EAP expert, has suggested a regimen of measuring the predictive power of your questions over time. He suggested two-year tenure as a performance measure: correlate the scores from recruitment questions with whether the person is still employed at the two-year anniversary to see if the questions have higher predictive power than chance. This is a simple task that can be done with standard features in Excel.
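That tenure check amounts to a point-biserial correlation: a column of question scores against a yes/no tenure flag. A minimal sketch with made-up hires (scipy assumed available):

```python
# Sketch: correlate interview-question scores with two-year tenure (illustrative data).
from scipy.stats import pointbiserialr

question_scores = [4, 7, 5, 8, 3, 9, 6, 2, 8, 5]
still_employed = [0, 1, 0, 1, 0, 1, 1, 0, 1, 1]  # 1 = still there at the 2yr mark

# Point-biserial r is just the Pearson correlation between a continuous score
# and a binary outcome - the same number Excel's CORREL() would give you.
r, p = pointbiserialr(still_employed, question_scores)
print(f"r={r:.2f}, p={p:.3f}")  # r near zero means the question predicts no better than chance
```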

My suggestion is more complex and involves (a) post-diction to see if a question would have predicted known performers and (b) for prediction I choose the regular performance review scores. Predictive questions should correlate strongly with performance evaluation scores (unless the appraisals are rubbish).

An additional suggestion is to code the probationary outcome and either produce a dummy Boolean variable to correlate against the questions, or to expand the probationary result into a Likert scale with negative values if the person was released and positive values if they were retained. That allows a “no thanks” or “ok, sure” to be distinguished from a “Hell, No!” and a “Hell, YES” evaluation.
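The two coding schemes are a one-liner each; the outcome labels below are illustrative assumptions, not a prescribed scale:

```python
# Sketch: coding probationary outcomes as a dummy Boolean and as a signed Likert scale.
outcomes = ["Hell, YES", "ok, sure", "no thanks", "Hell, No!", "ok, sure"]

# Option 1: dummy Boolean - retained (1) vs. released (0).
retained = {"Hell, YES", "ok, sure"}
dummy = [1 if o in retained else 0 for o in outcomes]

# Option 2: signed Likert values that keep the strength of the evaluation:
# negative if released, positive if retained.
likert_map = {"Hell, No!": -2, "no thanks": -1, "ok, sure": 1, "Hell, YES": 2}
likert = [likert_map[o] for o in outcomes]

print(dummy)   # [1, 1, 0, 0, 1]
print(likert)  # [2, 1, -1, -2, 1]
```

The Likert coding is what lets a “Hell, No!” pull a question’s correlation down harder than a mere “no thanks”.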

Here’s what I am recommending:

  1. Derive interview questions from four sources:
    1. previous critical events in the company’s history
    2. desired operational outcomes or goals
    3. the characteristics of known performers
    4. Industry-specific authorities (but make sure you understand the heritage of the questions); like unpackaged drugs, do not take questions from anyone without solid credentials
  2. Test them before use
    1. Examine them for Content Validity and Construct Validity, i.e. do they test the things they are meant to, and do they do so exhaustively and exclusively – the whole truth and nothing but the truth
    2. Check with simple correlation that currently known high performers answer the questions as you would have expected. If you have a top-gun software engineer and want another, make sure the questions would be answered by them in the way you expected – if not, modify the question or drop it.
    3. Check that the answers by existing staff correlate to their performance reviews – unless you are making a pig’s ear of the regular performance reviews, you should have simple numerical ratings that can be correlated to the answers to your questions. If there isn’t a strong positive relationship between the appraisal scores and your questions, then one or both are a mess.
    4. Take your questions to the company lawyer who knows employment law in your locality. This is not a DIY step, get legal advice before you put the company’s neck on the block.
    5. Take them to the Marketing department and get them to give you a feel for whether you are damaging the brand in any way. You shouldn’t have many questions so they should be able to give you a feel in a few breaths.
  3. Use them in a consistent manner
    1. Don’t ad lib and don’t change the wording or delivery
    2. Explain how long the questioning will take, who will use the answers and for what purpose, and how long they will be kept on record
    3. Keep records – this is company property, not yours to discard or lose
  4. Test them over time
    1. Use Dr. Shepell’s criterion – if the results don’t predict tenure, then something is wrong, probably the question itself.
    2. At each performance review, run correlations again and see how the questions are doing at predicting performance – if the correlation isn’t higher than 0.5 then you might as well be flipping a coin! You should be refining the question battery to give you an overall predictive power of 0.85 or above.
    3. Once you have a few years of data, get a good statistician in to do some fancier tests like Discriminant and Factor analysis.
  5. Be Sneaky Observant
    1. See if you can get people at other companies and particularly your competitors to answer the questions – the objective is to get a competitive edge over other firms in your market space by hiring better people than they are.
    2. Put some of the questions online in forums where SMEs that you typically hire would congregate, and see if they correlate to how senior the respondents appear to be in their area of expertise
    3. Approach known experts in the field to answer some of your questions and check those correlations
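The periodic check in step 4 can be sketched in a few lines; the data, the 0.5 cut-off, and the three-question layout below are illustrative only:

```python
# Sketch: per-question correlation against the latest appraisal scores.
import numpy as np

# Rows = employees, columns = their scored answers to three interview questions.
answers = np.array([
    [8, 3, 6],
    [6, 7, 6],
    [9, 2, 4],
    [4, 8, 5],
    [7, 5, 4],
])
appraisal = np.array([4.2, 3.1, 4.8, 2.5, 3.9])  # latest performance review scores

for i in range(answers.shape[1]):
    r = np.corrcoef(answers[:, i], appraisal)[0, 1]
    # A strong negative r still carries signal (a reverse-keyed item);
    # it is the weak correlations that are coin flips.
    verdict = "keep" if abs(r) > 0.5 else "drop - no better than a coin flip"
    print(f"Question {i + 1}: r={r:+.2f} -> {verdict}")
```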

… but Why?

So why all this bother? After all, stats is hard, and isn’t this going to take a lot of effort and time?

If you are keeping records of the answers people gave and how you scored those answers (and please tell me you are keeping scrupulous records), and if you have six-monthly or annual performance reviews that include numerical scores for various categories of performance (and please tell me you are doing this and keeping records), then all you need is to spare a few paltry minutes on extracting the values and running a correlation between the scores from the questions and the performance scores.

The IT folks can write a script to do all that automatically if your appraisal system doesn’t already have that functionality.

The effort is therefore not all that great since you should be doing most of it anyway.

The benefit is that you …

  1. Don’t waste time and effort asking, coding, and using questions that don’t do anything – if the question is as effective as flipping a coin, leave it off the list
  2. Get a solid basis for a defense if your hiring practices wind up being challenged in court – it is a whole lot easier to defend if you can show statistical tracking over time for questions used than standing there looking earnest and saying how you really really believe they are good questions.
  3. Demonstrate in real and tangible terms the value of your profession – you can show in hard numbers how the hiring process leads to competitive advantage and shareholder value. Not a bad thing to be able to show these days!


Building interview and selection questions in a methodical way and tracking their predictive power eliminates many of the inbuilt biases that come with the standard-issue human brain, creates intellectual capital that moves the questioning process from a smoke-and-mirrors charade to a solid foundation, and translates into real operational advantage.
The costs of doing it are lower than simply carrying on a status quo based on belief and opinion, and the additional effort involved in running basic statistical correlations is negligible.

There is simply no reason not to do so.


Matthew Loxton is a Knowledge Management expert and holds a Master’s degree in Knowledge Management from the University of Canberra. Mr. Loxton has extensive international experience and is currently available as a Knowledge Management consultant or as a permanent employee at an organization that wishes to put knowledge assets to work.


Ezzy, D. (2002). Qualitative Analysis: Practice and Innovation. New South Wales, Allen & Unwin.

Oppenheim, A. N. (1998). Questionnaire design, interviewing and attitude measurement, Pinter Pub Ltd.

Scheaffer, R. L., W. Mendenhall III, et al. (1996). Elementary Survey Sampling. USA: IPT.

Swanson, R. A. (2005). Research in organizations: Foundations and methods of inquiry, Berrett-Koehler Publishers.

Van Bennekom, F. C. (2002). Customer surveying: A guidebook for service managers, Customer Service Press.


Knowledge Management Climate Survey – Consulting Packages Available

December 27, 2010

Provision of Services

Organizations that wish to contract the services of the author to perform customization, deployment, and analysis of the Climate Survey can do so by contacting Matthew Loxton directly or via eLance.

Two standard work packages are available, as well as customized or bespoke projects.

  • Basic Deployment Service ($1,200 USD)
    Benefits: A low-cost DIY approach for the budget-conscious that nevertheless provides a solid and tested instrument for measuring a baseline, plus norm-based evidence that can be used to initiate and focus intervention measures. 

    Includes set-up of up to four groups or categories and delivers raw un-analyzed data in spread-sheet format as well as:

    • Design & Setup of categories
    • Design & Setup of collectors
    • Collection & Packaging of respondent data
  • Standard Analysis Package ($4,800 USD)
    Benefits: Provides an expert, action-ready analysis that identifies specific action items and a skeleton action plan for immediate use, including both an Executive Overview and a Management Plan that can be used as the basis for a budget request or business plan.
    Includes all of the above plus Deployment and Analysis given below. 

    • Deployment
      • Information session with Managers
      • eMail campaign to participating managers and staff
      • Qualitative interviews (5)
    • Analysis
      • Executive overview highlighting risk and opportunity areas
      • Analysis for Operational Managers
      • Operational analysis and recommendations
      • Action Plan

Bespoke or tailored packages can be built on request.

Concept Map



The questionnaire instrument is designed around a KM Maturity Model with levels 0 through 6 that I built out from the basic CMMI model, and it highlights the climate in terms of internal drivers, environmental factors, and external contact.

The basic model looks like this:

  • Level 0 – Learned Incompetence
  • Level 1 – Awareness of Process
  • Level 2 – Repeatable Process
  • Level 3 – Defined Process
  • Level 4 – Managed Process
  • Level 5 – Optimized Process
  • Level 6 – Double-Loop Learning Process

As results are gathered across many organizations, the instrument will be refined – questions will be dropped if they duplicate others in construct or show poor variance, and even though the average respondent took around ten minutes, we should try to reduce the number of items needed to achieve validity. I may also do a split-form version if some questions mirror each other closely.
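That refinement pass – dropping items that show poor variance or duplicate another item’s construct – can be sketched as follows (made-up Likert responses, illustrative thresholds):

```python
# Sketch: flag survey items with poor variance or near-duplicate correlations.
import numpy as np

# Rows = respondents, columns = Likert responses to four items.
responses = np.array([
    [4, 4, 2, 3],
    [5, 5, 2, 1],
    [2, 2, 2, 4],
    [4, 4, 2, 2],
    [3, 3, 3, 5],
])

variances = responses.var(axis=0)  # population variance per item
corr = np.corrcoef(responses, rowvar=False)

for i, v in enumerate(variances):
    if v < 0.3:
        print(f"Item {i + 1}: variance {v:.2f} - everyone answers alike, drop")

# Items that correlate almost perfectly are probably the same construct twice.
n_items = responses.shape[1]
for i in range(n_items):
    for j in range(i + 1, n_items):
        if corr[i, j] > 0.9:
            print(f"Items {i + 1} and {j + 1}: r={corr[i, j]:.2f} - near-duplicates, keep one")
```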

The beta test was completed via Survey Monkey, and a 2nd Release Candidate is currently open for people to try out.

This questionnaire also needs to be correlated against business and performance measures – the basic assumptions are:

  1. The BFI portion will show specific profile regularities over large numbers of respondents
  2. Organizations that score highly on learning and sharing measures will have lower turnover and higher profit per headcount than comparable organizations that score lower
  3. High measures on the trust and sharing items should predict both higher job satisfaction and performance
  4. High measures on external awareness and learning should predict higher CoP maturity
  5. High efficacy measures should also predict both higher job satisfaction and performance

Another basic premise is the same as I articulated in a Knowledge Management blog post some time ago – we already “Do KM”; the question is whether the way we do it enables us to achieve organizational goals or reduces our ability to achieve them. This questionnaire measures, to a large extent, whether our KM behaviors and beliefs are congruent with positive outcomes.


The outcome of a survey with this tool will probably include various obvious gaps and inconsistencies, but we also need to offer a normative model of what the preferred profile would be.
Two possibilities:

  • A general or universal norm profile that matches all situations
  • Some kind of context-sensitive tool that builds the norm along the lines of how the Sebenza tool by PIA built job profiles, or perhaps a hand-crafted manual norm profile

Suggested manual norms are provided for each item.

If you wish to participate in the testing, please just go to the RC-2 survey link, and if you would like to put a little budget into using this tool to help your organization put its knowledge assets to work, please contact me.



Knowledge Management and Organizational Learning – How does your firm shape up?

December 8, 2010

I was going to publish a different post today, but the excitement of having responses to a questionnaire got the better of me.

This week I put up the first release candidate after doing beta testing of a KM/OL climate survey questionnaire designed to measure the Knowledge Management and Organizational Learning beliefs and activity in an organization.

The sample frame was mainly KM/OL people, so there are some inherent biases that limit the degree to which the results could be generalized to the whole working population and all firms – one would for instance expect KMers to be active in social networking, to use Twitter, and probably have a blog or a web page. However, there are still many aspects which would be generalizable and which indeed stood out in the survey results, such as whether the performance appraisal system in use “makes a big contribution to helping [them] learn and develop” – to which a full 50% of respondents said it didn’t. This alone should be a wake-up call to HR and managers, because if your performance appraisals don’t drive learning and development, you are shooting yourself in the foot.

It is a “hair on fire” kind of thing.

… but first, some technical points.

Questionnaire Design

This is a questionnaire, not the result of a magical truth-serum, so there is some margin of doubt as to whether people are telling the truth or fooling around – that said, people are usually quite serious about giving their point of view, and there are ways to detect horseplay.

The sample is small: I had 20 responses to the beta, and 21 to the RC-1 version of which I deleted three either because they were incomplete or because the person was clearly just walking through the questionnaire to see what was in it. With 18 responses there isn’t a lot of generalization that can be done, but it certainly is enough to generate some “hey, what’s this!” moments.

Finally, this is a fast-track survey that I designed in just a few weeks and I didn’t have the luxury of a team of analysts, survey technicians, statisticians, and a trove of existing question items with a known behavior and pedigree. I borrowed some questions from Dubrin (DuBrin and Dalglish 2003), Debowski (Debowski 2006), and some previous work I have done over the years including a survey on eLearning use which I blogged about a while back, but on the whole the question items have little provenance and so one cannot compare this survey too finely against other or previous research uses.
From the beta version I also got a few good ideas from the test respondents, so a big thanks to them in helping me create this instrument. Without their help this would have taken far longer and been far more difficult.

Bottom line, this instrument is a marvelous (says I) tool for identifying trends and issues within a single organization, but you cannot at this point draw any conclusions about industry trends etc. from it.

Now, let’s look at some of the more interesting findings from the sample we have, and discuss what the implications would be if this were your firm.

Let’s start with the big-ticket items.

Frowns and Smiles

Only 27.8% of respondents said that their manager would smile if they saw them doing self-study during work-time, versus 33% that said they would frown, and 39% that said they would be neutral.
This is actually a bit of a disaster, as my previous research showed – if people think their manager would smile, then they not only keep their skills up to date by improving them at work, but they also do it on their own time. In contrast, those that think their manager is neutral about it, do far less at work and none at home, and those who think their manager would frown tend to do nothing about their own professional development at all.
What I found previously was that there is no correlation between what staff think their manager would do and what the managers report they would do – managers almost to a person think they would smile and be supportive, but their staff typically think differently, much of which is based on whether the manager themselves put visible effort into their own ongoing professional development.
Simply put if the managers do not model learning behavior, their staff presume that learning is not valued.

Apropos of which are the 28.7% who thought their managers did not “make a visible effort to improve their own knowledge and skills” and the 33% that couldn’t tell.

If you don’t fix this in your organization, you had better like high staff turnover, low levels of discretionary effort, and morose, under-performing staff, because you are going to get a whole heap of all three!


It doesn’t take a genius to figure out that teamwork makes the difference between success and failure, and that when it comes to knowledge, sharing behavior is a critical component of achieving that end goal of maximizing shareholder value.

That is why it is alarming to see that while 61% feel at ease accessing others in the organization for help and guidance, the same percentage feel that it isn’t true that “Knowledge-sharing is incorporated in the regular staff performance reviews”.

So let’s get this right: we think it is important, our people want to do it, but we don’t make it part of how people are measured?
If your performance reviews don’t measure behavior that you definitely want, then what exactly is the point of a performance review?

HR, are you awake?

Organizational Assets

Here’s an IQ test: your company relies on some fancy factory equipment that costs about $100k a year in lease and maintenance; you have several hundred of these machines but don’t keep track of what they are and what they can do.
Is this a company that is going to survive over the long term?

Well over 60% report that their firms do not maintain “a current database of knowledge and skills of all employees”.
Ok, so let’s get this straight: the assets that account for about 80% of a company’s value, and we are in the dark about their location and capabilities?

HR, are you awake?

Organizational Learning

This one gets interesting.

Although the vast bulk of innovation and growth comes from learning from mistakes, 50% of respondents say their firms do not treat mistakes as learning opportunities, and a further 22.2% don’t know. 50% also say that their firms focus on fixing low performance rather than replicating high performance, and to cap it all, 55.5% report that each time their team encounters a problem, they seem to start from scratch to solve it, with the balance reporting that they feel only somewhat that they learned from previous experiences.

Put that into perspective with the 72% that feel that their firms celebrate “the Superman who saves the day rather than the person who prevents a situation in the first place” and you have a picture of an organization that doesn’t learn or even know what to learn from, reinvents the wheel constantly, and then celebrates disasters.
Remember, you get more of what you celebrate, so if you make a big fuss of people doing heroic things rather than preventing the need for heroism, you will get more occasions that are a crisis and require a hero.
If you ever get the feeling that your company seems to lurch from one disaster to the next, this is probably why.

Work Health & Safety

With over 60% reporting that their work environment was characterized by “interruptions, noises or other distractions” you have accidents waiting to happen, low productivity, high stress, and of course, higher costs.
Research into medical mistakes shows that a huge proportion can be laid at the door of interruptions, and I have no doubt that the same applies to other fields and activities.

HR, are you awake?

Now that you are feeling depressed, you might wonder if there was any good news.

The Good News

Firstly, people seem to be experts in what they do, with over 80% reporting that they regard themselves as an expert in their subject domain, 78% know who the other experts are in the organization, over 60% make the effort during breaks to discuss their work with others, and 50% report that they get good ideas from customers and business partners.
Furthermore, not only are 68% passionate about what their organization is trying to do, but 89% say that having specialized knowledge has cachet in their organization.
Over 77% say that they keep up to date with what goes on in their area of expertise and also regularly attend external seminars and events in that regard, while over 60% go as far as presenting papers or delivering addresses in public on their subject area.
Nearly 80% indicate that they feel at ease in asking for clarification in the event that somebody in the firm said something they didn’t understand, which truly is a triumph of culture.

Of course, this is somewhat offset by the fact that 60% also think that doing your job well means not having to care about what goes on in the rest of the company, and 56% think that if something works ok in their team, there is no need to experiment to make it better – a surefire way to become obsolescent through creeping conservatism. It also means that 20% feel inexpert at what they do, 22% don’t know who the experts are, 40% make little effort to talk about what they do, and a stunning 50% don’t see customers as sources of knowledge.

… but then who’s perfect, right?


The use of Web 2.0 technologies like blogs, Twitter, tagging, etc. was high in this sample, but that was to be expected – if anything it was a bit low, since fewer had their own web pages than I would have expected, and some don’t even subscribe to podcasts.
In a more typical cross-section of a workforce, this would be lower, but it would be important that there was some activity in this area and a solid training plan behind teaching people how to use the technologies without making fools of themselves, exposing the company, or getting themselves dooced.
If your senior staff and SMEs aren’t using podcasts to get domain-specific information while they are on the road and sitting in planes, trains and automobiles, then it is an early sign of trouble.


Knowledge management isn’t about buying a product – it’s about what you do with the knowledge at your disposal. Whether you put your knowledge assets to work in achieving your organizational goals, and whether your bad knowledge management habits work against you, is all up to you.
Whether your staff are keeping themselves at the peak of their game, know who to contact, and feel free to do so is as much a part of being competitive as having a good product, but it is often neglected.

This post covered a survey tool that has been developed to measure the knowledge management beliefs and habits within an organization, and by using the current respondent data as if they were all in a single company, provided an example of what it might discover and where the critical areas might be.

The survey questionnaire is made available under a copyleft attribution basis free of charge, and the author can be commissioned to provide guidance and assessment.
You can try the KM/OL climate survey questionnaire out online.

That’s my story and I am sticking to it.

Please contribute to my self-knowledge and take this 1-minute survey that tells me what my blog tells you about me. – Completely anonymous.




Debowski, S. (2006). Knowledge management. Milton Qld, John Wiley & Sons.

DuBrin, A. J. and C. Dalglish (2003). Leadership, an Australasian focus, John Wiley and Sons Australia.


Questionnaire Design – Why most questions get bad answers

November 30, 2010

“O Deep Thought computer” [Fook] said, “the task we have designed you to perform is this. We want you to tell us …” he paused, “the Answer!”

Seven and a half million years of processing later, Deep Thought pronounced that the answer to Life, the Universe, and Everything, was in fact, 42.

So goes Douglas Adams’ “The Hitchhiker’s Guide to the Galaxy”, and it serves as a really splendid illustration of how a poorly fashioned question will quite often soak up resources, time, and opportunity, and then deliver an answer that is almost entirely useless to man or beast.

There are several reasons why a questionnaire may turn out to be a crock, but here are some of my favorites:

  • You didn’t have a clear problem in mind
  • You don’t have a purpose for the answer
  • You were just filling dead air

For example, here are some clangers that are apparently very popular with many recruiters.

“If you were a shoe, what kind would you be?”

Nothing against footwear fetishists, but unless this question reliably predicts performance or tenure, what on earth will you do with the answers?
If current and past performers’ profiles didn’t suggest and correlate strongly with a specific set of answers, and you aren’t going to operationalize and score the answers in order to rank-order applicants and then track the predictive power of the question over time, then of what possible use is such a question?
Unless you really have a system to categorize and put values to “I would be a size 5 red Prada stiletto” and rank it against “I would be an Adidas Predator Powerswerve”, this kind of question is a non-starter, and will just make the brighter of your applicants wonder if you are perhaps a bit bored or dim. I mean really, does a size 6 Prada rank higher than a 5 and would a red one be more desirable when employing a software engineer than blue, and what to do if one applicant nominated patent leather and another suggested suede?

It may be a “fun question”, but what will you do with the answers?

At best you just have some “fun” and get paid for it, and at worst your own biases and beliefs will fill in the gaps and color the answers, and you will tend to hire people you like and who are more “like you”, rather than hiring people who will give your employer a competitive advantage.

“Why should we hire you instead of one of the other candidates?”

Same as above, but it introduces a whole new conceptual problem: it sends the more analytical applicants down a rabbit-hole of wondering just how you expect them to know who else you have interviewed.
They will sit there wondering whether this is some sort of trick, or whether you really didn’t notice that the answer is impossible for them to know.
It is also likely to bias heavily against introverts.

“Tell me about a time when you …?”

Well, this one mainly measures two things, neither of which is likely to be germane to the role.
It measures memory, and it measures story-telling ability.
It may also be measuring the ability to confabulate, which is a nice way of saying that it invites the person to spin a grandiloquent bit of fiction – which is fine if you are trying to hire a novelist, but not so hot if it is a Financial Controller that you are after. Yarn-spinning finance staff may not be such a great idea.

I could list dozens more, but the simple rule of thumb is: derive questions from existing evidence such as high-performer profiles and critical incidents, have a clear idea of what you will do with the answers, and if you don’t have anything to ask, don’t ask anything.


By the time I came to write this paragraph, my questionnaire for measuring Knowledge Management behavior and climate had progressed nicely.

The questionnaire wiki had attracted some dialogue, and I had progressed from a raw list to a refined list of items, each with a rudimentary explanation, along with results from 20 people who took the beta version.
From this I was able to determine how long the questionnaire typically took (10 minutes), and to get feedback on any questions that were difficult to understand or that felt wrong to the participants.
On the wiki I was therefore able to state average scores from an n=20 beta test, and to produce a 1st Release Candidate of the KM questionnaire – which is open for anybody to try out at time of writing this blog post.

Why am I doing this all out in public?

Simple: it’s cheaper and faster. Doing a beta test and refining items is usually expensive and time-consuming, but by crowd-sourcing the beta, I could do it free and fast.
By giving something away for free, I get a lot more free in return.

The next step in the lifecycle of a questionnaire like this is to put it to use – after all, the purpose of this is to derive competitive advantage through knowledge.

Next Steps – Action Piloting

There is nothing quite like reality to put something to the test and to force fast refinement and evolution, so the next step is to recruit companies who want to get the benefit but are prepared to be part of a pilot.
They get value and I get a fast refinement process. (If you want to participate, simply contact me).

The most probable outcomes for them are insights, specific action items, and a benchmark against which to measure the effects of remedial actions.
What I expect out of it is to see which question items lacked variation, which duplicated another in effect, and which failed to give usable measurements.
I also expect to discover some gaps – unexpected twists that I hadn’t foreseen and which need additional thought or question items.

Final Goal

The final desired outcome is to have two things:

  1. A reliable instrument with which to measure a defined set of constructs in order to know where to make changes and whether the changes had the desired effect in both magnitude and direction.
  2. A dataset of responses that can be used to further refine the instrument or to develop a new one, and against which more research could be mounted.
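The post doesn’t name a reliability statistic, but Cronbach’s alpha is the usual internal-consistency check for a Likert instrument like this one. A minimal sketch, with invented responses:

```python
# Cronbach's alpha: a common internal-consistency measure for a questionnaire.
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
# The response data below is invented for illustration.
from statistics import pvariance

def cronbach_alpha(responses):
    """responses[i][j] = respondent i's answer to item j."""
    k = len(responses[0])
    item_vars = [pvariance([row[j] for row in responses]) for j in range(k)]
    total_var = pvariance([sum(row) for row in responses])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

responses = [
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 5, 4, 5],
    [3, 3, 3, 4],
    [4, 4, 5, 4],
]
# Values above roughly 0.7 are conventionally treated as acceptable.
print(f"alpha = {cronbach_alpha(responses):.2f}")
```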

So why, you might ask, should one do this at all?

Well, because it is fun of course, but it also makes business sense – it reduces effort wasted on interventions that don’t actually deliver, and it gives visibility that allows targeted change.
Going by “gut feel” is also a lot of fun, but it is typically neither very effective nor very reliable; by introducing solid questionnaires and other instruments, one can mount more targeted and effective interventions that are far more likely to reduce cost and improve performance.

That’s my story and I am sticking to it.


Please contribute to my self-knowledge and take this 1-minute survey that tells me what my blog tells you about me. – Completely anonymous.


Matthew Loxton is a Knowledge Management professional and holds a Master’s degree in Knowledge Management from the University of Canberra. Mr. Loxton has extensive international experience and is currently available as a Knowledge Management consultant or as a permanent employee at an organization that wishes to put knowledge to work.


Building a Questionnaire – Why My Survey Stinks

November 24, 2010

Why do people ask such repetitive and stupid questions?

Simply put, questions are hard, answers are easy – coming up with a good question and being able to do something with the answer is far harder than one might think.
This will be a two-part post in which I use my Knowledge Management Survey questionnaire as a case study of what I am doing wrong, why recruiters ask so many dumb questions, and why people ask questions that have already been answered many times before.

More usefully though, these blog posts will guide you through a process of how to ask questions and design questionnaires that give you usable answers, lead to better questions, and make you look smart.

Firstly, my Knowledge Management survey.

I have many books on Knowledge Management – in fact they occupy two entire shelves – and several have questionnaires to scope out all kinds of things about Knowledge Management adoption, climate, culture, etc.
In fact, I have shamelessly re-used them (as any good KMer should) over the years and had pretty good results.
However, it always bothered me that not all the question items made sense, that some seemed to break question-writing rules (like being double-barreled), and that they didn’t cover all the areas I wanted to know about.
Plus of course, I wanted my own.

The problem, though, is that while I have the questions – and these are doubtless the work of clever and experienced people – I don’t have the handbook that they must have created in order to generate them; I only have the questions.
That’s right: a questionnaire isn’t a matter of banging out some good-sounding questions, checking the spelling, and launching.

The Right Way

You start by creating a handbook (Oppenheim 1998).

In fact you start with an initial Problem Area description, then move on to content – the Mental Model, Literature & Experience, and Process & Outcomes – and then formulate a Research Problem Statement.

Once that is securely tidied away, you move on to the specific Research Question, its Paradigm, and the applicable Research Method and Context (Swanson 2005), and once you have that in hand, you will know whether this is best done as qualitative or quantitative research.

If you decide a questionnaire is the best approach, you go through a whole other process that entails identifying the Constructs, determining the target population, and the survey methodology.
Then you bang out loads of ideas about the dimensions of the constructs and come up with candidate questions – questions which must obey a whole slew of criteria.

For instance:

  1. They must be in active voice
  2. They must be mutually exclusive and collectively exhaustive
  3. One question, one concept – no double-barreled questions (so avoid “or”, “and”, “therefore”, “either”, “both”, etc.)
  4. No slang
  5. No loaded language or leading questions
  6. Avoid negative statements
  7. The agent of action must be clear
  8. Etc.

(Oppenheim 1998; Collins, du Plooy et al. 2000)
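Some of these criteria can even be screened mechanically. The checker below is a rough sketch of my own devising – the trigger-word lists are illustrative, not taken from the cited texts:

```python
# Rough sketch: mechanically screen candidate question items against two of
# the rules above (double-barreled wording, negative statements).
# The word lists are illustrative, not exhaustive.
import re

DOUBLE_BARREL = {"and", "or", "either", "both", "therefore"}
NEGATIVES = {"not", "never", "don't", "doesn't", "isn't"}

def screen_item(item: str) -> list:
    """Return a list of rule warnings for one candidate question item."""
    words = set(re.findall(r"[a-z']+", item.lower()))
    flags = []
    if words & DOUBLE_BARREL:
        flags.append("possibly double-barreled: " + ", ".join(sorted(words & DOUBLE_BARREL)))
    if words & NEGATIVES:
        flags.append("negative phrasing: " + ", ".join(sorted(words & NEGATIVES)))
    return flags

print(screen_item("I share knowledge and document my work"))
# flags 'and' as possibly double-barreled - the item should probably be split in two
```

A checker like this can’t judge loaded language or unclear agency, but it catches the mechanical slips cheaply before human review.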

Collection and Analysis are two subjects all of their own, just like sampling techniques.

The “Easy” Way

You do what I did: grab nice questions from elsewhere (trusted sources, of course), create a questionnaire wiki like mine, invite a bunch of people to help for free (which I am doing with you), and add an explanation of what each question is intended to measure. Then put the questionnaire online, like I did, and invite people to take the survey (like I just did) and to tell you if anything didn’t make sense and how long it took.
Then you start rewording and deleting or splitting questions so they obey the rules above.

At this point you can write a blog post on your questionnaire and invite people to recruit others to help. Tell your readers that when the questionnaire is finished, they will gain greatly by being able to use the survey for their own purposes at their own firms, and of course, emphasize how this makes them look like members of a big Community of Practice.

Next week I will discuss how you helped.

Until then, happy Thanksgiving and remember to eat plenty of greens and to drink one glass of water for each glass of beer or wine.


Collins, K., G. du Plooy, et al. (2000). Research in the Social Science. Pretoria, University of South Africa.

Oppenheim, A. N. (1998). Questionnaire design, interviewing and attitude measurement, Pinter Pub Ltd.

Swanson, R. A. (2005). Research in organizations: Foundations and methods of inquiry, Berrett-Koehler Publishers.



