27/11/2012

Is Item Response a Measure of Conscientiousness?

Abstract. No.
Bryan Caplan promotes a new paper called "The Dog that Didn't Bark: What Item Non-Response Shows about Cognitive and Non-Cognitive Ability" by David Hedengren and Thomas Stratmann. He rather likes it: "I've never been more confident that a GMU student's working paper will end up in a top journal." Fabio Rojas says the paper presents "a great use of non-response". What's going on?

Conscientiousness, according to a definition that Hedengren and Stratmann quote (p. 5), is the "tendency to be organized, responsible, and hardworking". It's an interesting construct, as it predicts lots of outcomes that social scientists are interested in - most notably, economic success. Trouble is, not all survey data contain measures of conscientiousness, and including it in new surveys will come at the cost of leaving out something else.

The paper is based on a clever idea: Respondents differ in how many of the questions they answer. It seems reasonable to assume that the more conscientious a respondent is, the more questions she's going to answer. If so, then each and every survey dataset contains a measure of conscientiousness, because you can always calculate item response, the proportion of the questions that were answered.
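For concreteness, here is a minimal sketch of what such a measure looks like in practice: item response is simply the share of non-missing answers per respondent. The data and column names below are made up for illustration, not taken from the paper.

```python
import numpy as np
import pandas as pd

# Toy survey data: each row is a respondent, each column a question;
# NaN marks an item the respondent skipped. (Hypothetical example.)
survey = pd.DataFrame({
    "q1": [3, np.nan, 4, 5],
    "q2": [2, 1, np.nan, 4],
    "q3": [np.nan, np.nan, 5, 3],
})

# Item response: the fraction of the questions each respondent answered.
item_response = survey.notna().mean(axis=1)
print(item_response)
```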

In order to test this idea, the authors look at three datasets that contain standard measures of conscientiousness and regress them on the item response measure. In their verbal discussion, they emphasize that in all three cases item response is a significant positive predictor of the standard conscientiousness measure (p. 14):
The results in Table 2, Panel A and Panel B show that the beta coefficients on the fraction answered is statistically significant and positive when the dependent variable is Conscientiousness, indicating that individuals who are more conscientious answered more of the questions they were asked. The point estimates in Table 2, Panel A suggest that a one standard deviation increase in Conscientiousness is associated with about a one-third standard deviation increase in the fraction of questions answered. [etc.]
But now look at the actual results (p. 32):
I've put ovals around the numbers of central interest. In each of the surveys used, item response explains about one per mille of the variation in the conscientiousness measure, or in common parlance: nothing. The results are statistically significant only because the variance explained isn't exactly zero and the datasets are large. Nonetheless, these results show conclusively that the conscientiousness and item response variables measure two things that are completely different from each other.
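To see why a negligible R² can still clear conventional significance thresholds, here is a small simulation. This is not the authors' data; the sample size and the true correlation are assumptions chosen only to roughly match the magnitudes reported above.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 10_000   # survey-sized sample (assumed, not the authors' N)
r = 0.03     # tiny true correlation, implying R² of about 0.001

# Simulate item response and conscientiousness with that tiny correlation.
item_response = rng.standard_normal(n)
conscientiousness = r * item_response + np.sqrt(1 - r**2) * rng.standard_normal(n)

fit = stats.linregress(item_response, conscientiousness)
print(f"R² = {fit.rvalue**2:.4f}, p-value = {fit.pvalue:.4f}")
# With n this large, an R² of roughly 0.001 comes out "statistically significant"
# even though item response explains essentially none of the variance.
```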

It was a good idea, but it didn't work out. Frankly, I don't understand why the authors wrote up these results at all.

P.S.: If you click on the link to the paper, and are a very careful reader, you will see that the authors ask readers not to cite the paper without permission. I saw that just before I was going to publish the finished post, so I asked the first author for permission, which he kindly gave. Now I feel a little bad about writing such a negative post. But I guess Hedengren and Stratmann are academics enough to take some data-based criticism on a little-read blog. And who knows - maybe they'll point out why I'm all wrong.
