Psychometric “Tests”: A call to evidence or a call to action.
Of the thousands of psychometric “tests” in existence, the LSAT (administered by the Law School Admission Council, www.lsac.org) is one of the best and most accurate predictors of performance ever devised.
The LSAT’s mean predictive validity is 0.37, meaning it correlates at the 0.37 level with whether an applicant will complete the first year of law school. Since the variance explained is the square of the correlation (0.37² ≈ 0.14), most of the variance is caused by factors other than those measured by the test.
What does that tell us?
In an environment:
- where the “job” requirements are precisely known;
- where those requirements are remarkably stable and consistent across time;
- where the “test” is meticulously devised, drawing only on the known success factors;
- where the “test” has vast numbers of samples to draw on (around 100,000 annually);
- where the “test” is conducted under the most exacting conditions;
- and where the “test” results are interpreted only by highly experienced and highly competent people;
the best that one of the most successful “tests” ever devised can do is r = 0.37.
In short, under the most perfect possible conditions, one of the best psychometric “tests” ever known, the LSAT, can only predict at the r = 0.37 level, which amounts to explaining roughly 14% of the variance in outcomes. To remind readers, tossing a coin would give you a 50% chance of being right.
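For readers who want to check the arithmetic, the sketch below computes the variance explained by r = 0.37 and simulates how often a candidate with the higher test score also turns out to have the better outcome. The simulation assumes scores and outcomes are bivariate normal; that is an illustrative assumption, not a claim about real LSAT data.

```python
import numpy as np

rng = np.random.default_rng(0)
r = 0.37  # the LSAT's mean validity, as cited above

# Variance explained is the square of the validity coefficient.
print(round(r**2, 3))  # 0.137 -> roughly 14% of outcome variance

# Simulation: draw two independent candidates, each with a (score,
# outcome) pair correlated at r, and count how often the higher
# scorer also has the better outcome.
n = 200_000
cov = [[1.0, r], [r, 1.0]]
a = rng.multivariate_normal([0.0, 0.0], cov, size=n)  # candidate A
b = rng.multivariate_normal([0.0, 0.0], cov, size=n)  # candidate B
concordant = np.mean((a[:, 0] > b[:, 0]) == (a[:, 1] > b[:, 1]))
print(round(concordant, 2))  # ~0.62, versus 0.50 for a coin toss
```

Under the normality assumption this agrees with the closed-form result 1/2 + arcsin(r)/π ≈ 0.62: better than a coin toss, but a long way from certainty.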
Now let’s compare the highest possible standard of the LSAT to the setting of a typical leadership-job “testing” environment:
- The requirements of the role are NOT precisely known; at best, an overview of what is required has been written by an HR person in the form of a job description.
- The role almost certainly changed under the most recent of many “reorganisations,” meaning that there has not been enough time to gather success-factor data for that role. (In reality, that means that at best a wild guess can be made about the success factors.)
- Since the success factors are not known and no data has been gathered, no purpose-built “test” was, or could have been, designed; remember that the LSAT took decades to create.
- Instead, some generalised “test” designed for other purposes is applied.
- Almost certainly, whatever generalised “test” is chosen will not be applied under exacting conditions by the most competent of people.
- When the “test” “results” are “interpreted,” it will almost certainly not be by the most experienced and competent people.
Under such typical circumstances, what chance is there of getting a worthwhile ROI on conducting psychometric “tests”? Little or none. What chance of wasting company money on psychometric “tests”? Almost complete.
Perhaps this question will illustrate: how many times have you observed a company conducting an ROI analysis on its “testing” procedures?
The problem with “tests” is that they give the illusion of objectivity, but don’t deliver. In most circumstances they can’t deliver, but companies are prepared to pay for the illusion that they can and do. Why? They want to feel as though they are being objective, impartial, fair. They can make it look as though they are recruiting based on unbiased criteria… of course, that too is an illusion.
If those observations were not true, you would expect to see regular ROI analysis of the “testing” procedures, and as you have no doubt noticed, you do not.
It seems then, that the “testing” industry is not in the business of selling effective selection methods, but is in the business of selling the illusion of impartial selection methods. Or as some more cynical people might put it, they are selling an apparently legitimate cover to select based on the same old biases that they have always used.
Of course, as with all chains of reasoning, the above could be plain wrong. If so, then there ought to be many “tests” with a predictive validity of > 0.5. And a vast body of research on the compelling ROI of conducting “tests” for leadership selection should be available. Cough… cough!
If you have a “test” with a predictive validity of > 0.5 for real-world leadership job performance, under real-world job selection conditions, and can demonstrate a viable ROI for using it, please post the independently verified evidence here. Please do not post manufacturers’ figures; they are not independent.
If you cannot provide such evidence, or you are sure it does not exist, then perhaps you will wish to join the exploration of how we can make psychometric tools better than the tossing of a coin. PsyPerform is interested in talking to people who share that objective, for the purpose of helping companies recruit the best possible leaders using methods that are cost-effective and provide a viable, verifiable ROI. All suggestions welcome!