Interview Questions – A way to get better performers, or get sued?

I was going to post a nice article on a fashion game-show and what it can teach us about business, and my stand-in article was on how to get value out of all that ubiquitous company gossip and rumor.
… but then I got into another long exchange with several recruiters and HR professionals about interview questions, and I decided to talk about that instead.

Besides, I love yapping on about statistics, and I have come to realize that this is foreign territory for many HR people – according to three HR professionals I asked, statistics and questionnaire design are not typically part of the training for HR staff and recruiters.

Status Quo

Here’s the situation:

Many recruiters have lists of their favorite questions to ask candidates, and there are more blogs and articles online than you can shake a stick at with lists of “best questions”, “favorite questions”, and “most common questions”. Some recruiters have their own lists, some draw from those blogs and articles, and others make up new questions as they go, or even do it on the fly during an interview.
What gets my giddy-goat is that while they all wax lyrical about how wonderful their questions are and how happy they are with the results, almost none volunteer how they determined that their questions do anything whatsoever other than make them happy and take up time.

The articles tend to be empty when it comes to explaining the reasoning behind the “top ten/twenty/forty-two” list, and can’t point to results other than (at best) a few hand-picked and probably fictitious anecdotes. Many recruiters also espouse questions to “throw” the candidate, catch them off-guard, or startle them, which is supposedly going to reveal a “true character” or do something else that is simply marvelous but undisclosed.
… and of course there are many examples of the “why is a manhole cover round” sort of question, spoken of in hushed and reverent terms.

My question is: why ask these questions at all?
I mean, it takes time and effort, presumably one needs to take notes and then compare answers, and time is money.
The usual answer is that the questions will give insight into the applicant’s personality and abilities.

Fair enough, I say.

… but this is hiring, and we are presumably trying to get better performers than our competition – people whose performance translates into the achievement of corporate objectives – EBITDA, for example.
In which case, I am not so sure that we are trying to discover “true character” as much as simply trying to match applicants to a role in such a way that we are more likely to achieve operational goals – in other words, performance.

What to Do

Firstly, don’t ask illegal questions – they can get your employer into a whole heap of pain.
I say this because over the years I have been asked about my religion, my age, my national origin, and even my political affiliations, and each time I made a mental note to eradicate that practice if I joined the firm.
There is silly, and then there is just plain ridiculously silly – don’t ask questions that expose your employer to legal action for improper discrimination. It costs money, it harms the reputation, and it just isn’t necessary.
Simple rule: if you aren’t sure of the legality of a question, leave it out!

Secondly, get a book on questionnaire design and interviewing (Scheaffer, Mendenhall III et al. 1996; Oppenheim 1998; Van Bennekom 2002; Swanson 2005), and maybe one on qualitative analysis (Ezzy 2002).

Here are some basic points before we get to my recommendations on designing a process for interview questions:

  1. Getting experts for technical or specialized tasks is a good idea, and you shouldn’t stop when you reach staffing – Industrial/Organizational (I/O) Psychology was founded precisely to address staffing issues.
  2. Don’t fret unnecessarily about sample size – a sample is used to estimate a feature of the population from which it was drawn, and you aren’t trying to generalize from your pool of applicants to the population at large, so the usual sample-size worries are less pressing here.
  3. There are robust non-parametric statistical methods that cope with small sample sizes, such as the Kolmogorov-Smirnov, Wilcoxon, and Kendall tests (a sketch follows this list)
  4. Statistical tests are pretty much always going to beat gut-feel and guessing, because beating gut-feel and guessing is precisely what they were designed to do. They exist because of the many and various biases and errors that come factory-installed in our Neolithic brains.
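To make point 3 concrete, here is a minimal sketch in Python with entirely made-up scores: do known high performers answer a given question differently from everyone else? Neither test assumes normally distributed data, and both tolerate small samples:

```python
# A sketch with hypothetical 1-5 question scores for two small groups
from scipy import stats

high_performers = [5, 4, 5, 4, 5, 3, 5]
other_staff     = [3, 2, 4, 2, 3, 3, 1, 2]

# Wilcoxon rank-sum (Mann-Whitney U) test: no normality assumption
u_stat, p_value = stats.mannwhitneyu(high_performers, other_staff,
                                     alternative="two-sided")
print(f"Mann-Whitney U = {u_stat}, p = {p_value:.3f}")

# Two-sample Kolmogorov-Smirnov test compares the whole distributions
ks_stat, ks_p = stats.ks_2samp(high_performers, other_staff)
print(f"KS = {ks_stat:.3f}, p = {ks_p:.3f}")
```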

I often encounter this chestnut – “Past performance is a better predictor of success than chance.”

Yes, it is a better predictor than chance, but that doesn’t mean the question you ask, or the specific past behavior you selected, is better than chance.
While it is true that amongst the myriad past behaviors there are those that would predict specific future behavior, there is no reason to believe that we have selected the right predictors or that what we believe to have been a predictive behavior is going to be so.
In addition, don’t forget that people learn, and learning is exactly the opposite of past predicting future.

Dr. Shepell, the EAP expert, has suggested a regimen of measuring the predictive power of your questions over time. He suggested two-year tenure as a performance measure: correlate the scores from recruitment questions with whether the person is still employed at the two-year anniversary, to see whether the questions had higher predictive power than chance. This is a simple task that can be done with standard features in Excel.
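To show how little work that check involves, here is a minimal sketch with entirely hypothetical numbers – a point-biserial correlation between interview scores and two-year retention (Excel’s CORREL over the same two columns gives the same figure):

```python
# A sketch of Dr. Shepell's tenure check, with hypothetical data
from scipy import stats

question_scores = [4, 2, 5, 3, 1, 5, 4, 2, 3, 5]  # score given at interview
retained_at_2yr = [1, 0, 1, 1, 0, 1, 1, 0, 0, 1]  # 1 = still employed at 2 years

# Point-biserial correlation: continuous score vs. binary outcome.
# A value near zero means the question predicts no better than chance.
r, p = stats.pointbiserialr(retained_at_2yr, question_scores)
print(f"r = {r:.2f}, p = {p:.3f}")
```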

My suggestion is more complex and involves (a) post-diction, checking whether a question would have predicted your known performers, and (b) prediction, for which I choose the regular performance review scores as the criterion. Predictive questions should correlate strongly with performance evaluation scores (unless the appraisals are rubbish).

An additional suggestion is to code the probationary outcome and either produce a dummy Boolean variable to correlate against the questions, or expand the probationary result into a Likert scale with negative values if the person was released and positive values if they were retained. That allows a “no thanks” or “OK, sure” to be distinguished from a “Hell, no!” or a “Hell, YES!” evaluation.
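Here is a minimal sketch of that coding idea, with made-up outcome labels and scores; Spearman’s rank correlation is used since the scale is ordinal:

```python
# Expand the probationary outcome into a signed Likert-style scale,
# then correlate it with the interview-question scores.
from scipy import stats

OUTCOME_SCALE = {          # hypothetical labels and values
    "hell_no":   -2,       # released, emphatically
    "no_thanks": -1,       # released
    "ok_sure":   +1,       # retained
    "hell_yes":  +2,       # retained, emphatically
}

outcomes        = ["hell_yes", "no_thanks", "ok_sure", "hell_no", "hell_yes", "ok_sure"]
question_scores = [5, 2, 3, 1, 4, 3]

coded = [OUTCOME_SCALE[o] for o in outcomes]

# Spearman's rank correlation suits ordinal scales like this one
rho, p = stats.spearmanr(question_scores, coded)
print(f"rho = {rho:.2f}, p = {p:.3f}")
```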

Here’s what I am recommending:

  1. Derive interview questions from four sources – and, like unpackaged drugs, don’t take them from anyone without solid credentials:
    1. previous critical events in the company’s history
    2. desired operational outcomes or goals
    3. the characteristics of known performers
    4. industry-specific authorities (but make sure you understand the heritage of the questions)
  2. Test them before use
    1. Examine them for Content Validity and Construct Validity i.e. do they test the things they are meant to and do they do so exhaustively and exclusively – the whole truth and nothing but the truth
    2. Check with simple correlation that currently known high performers answer the questions as you would expect. If you have a top-gun software engineer and want another, make sure the questions would be answered by that engineer in the way you expected – if not, modify the question or drop it.
    3. Check that the answers by existing staff correlate to their performance reviews – unless you are making a pig’s ear of the regular performance reviews, you should have simple numerical ratings that can be correlated to the answers to your questions. If there isn’t a strong positive relationship between the appraisal scores and your questions, then one or both are a mess.
    4. Take your questions to the company lawyer who knows employment law in your locality. This is not a DIY step, get legal advice before you put the company’s neck on the block.
    5. Take them to the Marketing department and get them to give you a feel for whether you are damaging the brand in any way. You shouldn’t have many questions so they should be able to give you a feel in a few breaths.
  3. Use them in a consistent manner
    1. Don’t ad lib and don’t change the wording or delivery
    2. Explain how long the questioning will take, who will use the answers and for what purpose, and how long they will be kept on record
    3. Keep records – this is company property, not yours to discard or lose
  4. Test them over time
    1. Use Dr. Shepell’s criterion – if the results don’t predict tenure, then something is wrong, probably the question itself.
    2. At each performance review, run the correlations again and see how the questions are doing at predicting performance – if a question’s correlation isn’t higher than 0.5, then you might as well be flipping a coin! You should be refining the question battery to give you an overall predictive power of 0.85 or above.
    3. Once you have a few years of data, get a good statistician in to do some fancier tests like Discriminant and Factor analysis (a sketch of the former follows this list).
  5. Be Sneaky Observant
    1. See if you can get people at other companies, and particularly your competitors, to answer the questions – the objective is to get a competitive edge over other firms in your market space by hiring better people than they do.
    2. Put some of the questions online in forums where the subject-matter experts (SMEs) you typically hire congregate, and see if the answers correlate with how senior the respondents appear to be in their area of expertise
    3. Approach known experts in the field to answer some of your questions and check those correlations

… but Why?

So why all this bother? After all, stats is hard, and isn’t this going to take a lot of effort and time?

If you are keeping records of the answers people gave and how you scored those answers (and please tell me you are keeping scrupulous records), and if you have six-monthly or annual performance reviews that include numerical scores for various categories of performance (and please tell me you are doing this and keeping records), then all you need is to spare a few paltry minutes on extracting the values and running a correlation between the scores from the questions and the performance scores.

The IT folks can write a script to do all that automatically if your appraisal system doesn’t already have that functionality.
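As a sketch of what such a script might look like – the file names and column names here are hypothetical – a few lines of Python with pandas will merge the two record sets and print one correlation per question:

```python
# Assumes two CSV exports keyed by employee ID (names are illustrative)
import pandas as pd

answers    = pd.read_csv("interview_scores.csv")  # employee_id, q1, q2, ...
appraisals = pd.read_csv("appraisal_scores.csv")  # employee_id, review_score

merged = answers.merge(appraisals, on="employee_id")

# One correlation per question against the appraisal score
question_cols = [c for c in merged.columns if c.startswith("q")]
print(merged[question_cols].corrwith(merged["review_score"]))
```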

The effort is therefore not all that great since you should be doing most of it anyway.

The benefit is that you …

  1. Don’t waste time and effort asking, coding, and using questions that don’t do anything – if the question is as effective as flipping a coin, leave it off the list
  2. Get a solid basis for a defense if your hiring practices wind up being challenged in court – it is a whole lot easier to defend yourself by showing statistical tracking of the questions over time than by standing there looking earnest and saying how you really, really believe they are good questions.
  3. Demonstrate in real and tangible terms the value of your profession – you can show in hard numbers how the hiring process leads to competitive advantage and shareholder value. Not a bad thing to be able to show these days!

Conclusion

Building interview and selection questions in a methodical way and tracking their predictive power eliminates many of the inbuilt biases that come with the standard-issue human brain. It creates intellectual capital, moves the questioning process from a smoke-and-mirrors charade to a solid foundation, and translates into real operational advantage.
The costs of doing it are lower than those of simply carrying on with a status quo based on belief and opinion, and the additional effort involved in running basic statistical correlations is negligible.

There is simply no reason not to do so.

~~~


Matthew Loxton is a Knowledge Management expert and holds a Master’s degree in Knowledge Management from the University of Canberra. Mr. Loxton has extensive international experience and is currently available as a Knowledge Management consultant or as a permanent employee at an organization that wishes to put knowledge assets to work.

Bibliography

Ezzy, D. (2002). Qualitative Analysis: Practice and Innovation. New South Wales: Allen & Unwin.

Oppenheim, A. N. (1998). Questionnaire Design, Interviewing and Attitude Measurement. Pinter Pub Ltd.

Scheaffer, R. L., W. Mendenhall III, et al. (1996). Elementary Survey Sampling. USA: IPT.

Swanson, R. A. (2005). Research in Organizations: Foundations and Methods of Inquiry. Berrett-Koehler Publishers.

Van Bennekom, F. C. (2002). Customer Surveying: A Guidebook for Service Managers. Customer Service Press.

 
