Psychometric tests: Stop wasting your company's money
Would you use the toss of a coin to make a recruitment decision? Of course not! It is a bizarre proposition. Yet, if you did, you would make better decisions than if you used psychometric tests. Yes, you read that correctly. The predictive validity of psychometric tests, even under ideal conditions, is less than r = 0.5. Some tests can measure who has the ability to achieve a certain result. For instance, if you wanted to determine whether or not a member of the public had sufficient numerical reasoning ability to be a finance director, a suitable test could help you identify which members of the public have that skill, with a predictive validity of around r = 0.3.
However, and this is a major factor: which people are you going to invite to interview for the post? Mary or Joe Bloggs, picked at random off the street? Or people who are already finance directors, or who are now ready to take that role (successful one level down)? The latter, obviously.
Was any theoretically appropriate “test” designed to give meaningful information about people who are already known to have the skills being tested? Not usually. Does a test with a validity of r = 0.3 in the general population retain that level of predictive validity when used to assess high performers? Almost never. Why? Once the skills pass a certain threshold (and everyone under consideration has skills well beyond that level), the “test” has no value.
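A short simulation makes the point concrete. Psychometricians call this effect restriction of range: a correlation measured across the general population shrinks once the pool has been pre-filtered on the very ability being tested. The figures below (a validity of r = 0.3, a shortlist drawn from the top 10% of performers) are illustrative assumptions for the sketch, not data from any particular test:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 200_000, 0.3  # assumed population size and test validity

# True job performance, and a test score correlated with it at r = 0.3.
performance = rng.standard_normal(n)
test_score = r * performance + np.sqrt(1 - r**2) * rng.standard_normal(n)

# Validity measured in the unrestricted general population.
r_full = np.corrcoef(test_score, performance)[0, 1]

# Now restrict to the top 10% of performers: the already-capable
# shortlist a company would actually invite to interview.
cutoff = np.quantile(performance, 0.90)
mask = performance >= cutoff
r_restricted = np.corrcoef(test_score[mask], performance[mask])[0, 1]

print(f"validity, general population: {r_full:.2f}")
print(f"validity, top-10% shortlist:  {r_restricted:.2f}")
```

On a run like this, the shortlist correlation comes out well under half the population figure: the same test that discriminates usefully among the general public barely discriminates at all among people who already have the skill.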
Using a “test” designed for the general population, with such a specialised group, is about as predictive as tossing a coin. That means you will be right 50% of the time – with a coin, NOT with the “test.”
Anyone who has been in the field of professional psychology for more than 5 minutes knows that to be true.
The question then is, why have they not told you? Hmmm… well, if they are selling tests, it won’t take too much to figure out the reasons and financial motives for their silence on such damning facts.
When challenged with the above facts many proponents of psychometric testing claim: “Of course, we don’t just use one test. We use many for greater accuracy.”
Let’s examine that reasoning.
If you use a test that has a predictive validity of 0.3 in conjunction with another that has a predictive validity of 0.25, are you going to produce a more or less reliable result? Most people think the answer is more reliable. Alas, that is not the case. Here is why.
0.3 × 0.25 = 0.075. Meaning that, in combination, “tests” are more likely to get it wrong, unless used properly (and that almost never happens). Those of you who want to make a living selling blood pressure tablets could share those statistics with any passing advocate of psychometric “tests”. Alternatively, if you love free entertainment of a fireworks kind, use the statistics to light the touch paper and retreat to a safe distance.
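The arithmetic above rests on an assumption worth stating plainly: that each “test” independently gets it right with a probability equal to its validity figure, and that a candidate must clear every test in the battery. Under that assumption (and only under it) the probabilities multiply, so every additional test makes the combined verdict less likely to be right. A minimal sketch:

```python
# Probability that a chain of independent "tests" all get it right,
# treating each validity figure as a per-test probability of being
# correct -- the assumption behind the 0.3 x 0.25 = 0.075 figure above.
def combined_accuracy(per_test_accuracies):
    p = 1.0
    for accuracy in per_test_accuracies:
        p *= accuracy
    return p

print(f"{combined_accuracy([0.3]):.4f}")              # one test
print(f"{combined_accuracy([0.3, 0.25]):.4f}")        # two tests
print(f"{combined_accuracy([0.3, 0.25, 0.3]):.4f}")   # three tests
```

Each added test shrinks the product further, which is the article's point: stacking weak instruments does not average out their errors, it compounds them.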
What does that mean?
More seriously, what that means is that the more psychometric “tests” are used the greater the chances of getting it wrong, since they are almost never used properly.
“Tests” has repeatedly been put in inverted commas to indicate irony.
The word “test” implies a soundness that is not justified. Indeed, the phrase “psychometric tests” may be partly responsible for companies buying products that are clearly not fit for purpose. Or, to express it more cautiously, the misleading term “psychometric tests” may be partly responsible for companies buying apparently useful products that are not cost-effective for the purpose.
Paradoxically, it is known that companies that use “assessment centres” (properly) make better recruitment decisions than those that don’t, to a level of r = 0.55 to r = 0.65. Let’s express that observation in terms that make day-to-day sense. Such companies spend vast amounts of money to get results that are just marginally better than the tossing of a coin (r = 0.5). They achieve that marginal “improvement” at huge cost. Can the tiny improvement be attributed to the “tests”?
When you spend money on “assessment centres”, are you going to invite anyone to interview who couldn’t do the job? Of course not! What does that mean? That everyone attending the “assessment centre” could do the job, so whoever you select will make it APPEAR that the test was predictively accurate, when in fact anyone and everyone who attended the “assessment centre” could have done the job.
A second factor is what I once heard a cynic describe as “The Compliant Minion Factor.”
Upon asking for an explanation I was told: “What kind of person is going to jump through assessment centre hoops like a performing dog? Someone who is going to do as they are told: a compliant minion. When people are hired to do a job and they do exactly as they are told to, in the job, should you be surprised that the person can do the job?”
Toning down the cynic’s polemic: if a highly obedient person is required, in a company that seeks to recruit people who can do their job as long as they do as they are told, what better way to find such people than a recruitment process that puts off all but the most compliant? “Assessment centres” are very effective at driving away all but the most compliant.
The net effect is that the “tests” appear to have been predictive, when in fact the process caused high-initiative, creative, and independent thinkers to self-deselect.
What makes companies disposed to use methods that are not fit for purpose, and about as predictive as coin tossing?
An interplay of several well-known psychological phenomena.
Here’s one: the certainty illusion. People believe that psychometric tests give certainty; they believe that testing removes subjectivity and replaces it with certainty.
As must be clear from the typical predictive validity figures of r = 0.25 to 0.3 (which correspond to explaining only around 6% to 9% of the variance in job performance), psychometric tests leave the overwhelming majority of performance unaccounted for. Whatever psychometric tests provide (other than wasted money), it is certainly not certainty. It is the illusion of certainty.
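One standard way to translate a validity coefficient into plain terms is the coefficient of determination, r squared: the proportion of variance in job performance that a predictor actually accounts for. The loop below applies it to the validity figures quoted in this article:

```python
# Share of variance in job performance accounted for by a predictor
# with validity r (the coefficient of determination, r squared).
for r in (0.25, 0.30, 0.55, 0.65):
    print(f"r = {r:.2f} -> variance explained = {r**2:.0%}")
```

Even the expensive assessment-centre figures at the top of that range leave well over half of the variance in performance unexplained, which is what the “illusion of certainty” amounts to in numbers.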
Here is another phenomenon that accounts for the desire to use psychometric tests: the objectivity illusion. We like to believe that we can be, and are being objective. It appeals to our sense of rationality.
Again, anyone who has been in professional psychology for more than a few minutes has learned that human beings make their decisions emotionally and then set about justifying them “objectively” or “rationally.” Objectivity is, at best, an illusion, and one most usually rooted in emotion.
Assessing psychometric assessment
Companies that have assessed the psychometric approach, and that is VERY few, have found that performance is NOT predicted by their assessment centre/degree possession combination. Many have now adopted more effective mechanisms to recruit talent. It would be unfair on those companies to name them or their methods: they have expensively identified a competitive advantage and ought to benefit from it.
Some companies have stopped the private school + Oxbridge (Ivy League) recruitment sifting method and are now recruiting people from the opposite end of the social spectrum, and for good performance psychology reasons.
As more and more companies realise that psychometric tests are about as useful as tossing a coin when everyone interviewed is already known to be able to do the job, perhaps they, too, will start to adopt recruitment techniques based on performance psychology.
If you wish to speak to PsyPerform about effective selection methods, based on the Psychology of Performance, you can make contact here.