Technical Comments

Response to Comment on “Poverty Impedes Cognitive Function”


Science  06 Dec 2013:
Vol. 342, Issue 6163, pp. 1169
DOI: 10.1126/science.1246799

Abstract

Wicherts and Scholten criticized our study on statistical and psychometric grounds. We show that (i) with a continuous income variable, the interaction between income and experimental manipulation remains reliable across our experiments; (ii) our results in the cognitive control task do not appear to be driven by ceiling effects; and (iii) our observed post-harvest improvement is robust to the presence of learning.

Wicherts and Scholten (1) criticize our study (2) on three grounds: (i) the use of a binary rather than a continuous income variable, (ii) potential ceiling effects in the cognitive control test, and (iii) retesting effects in the field study. We address each point below.

(i) Wicherts and Scholten argue that when the continuous income variable is used, the interaction between income and condition (easy versus hard scenarios) is not significant. Our income data, as is typically the case, and as both Wicherts and Scholten and we point out, are noisy. It is standard to create binary variables when dealing with noisy data (38). It is heartening, furthermore, that across all experiments, even with the continuous income variable (which they report), the effects have the same sign. More important, when we run the same regression with data collapsed across the three core experiments (1, 3, and 4), the interaction between income and condition is significant (P < 0.02), a result they did not report. Results are shown in Table 1. It is also worth noting that our field study observed a similar effect in the absence of any income data.

Table 1 Regression of Raven’s accuracy on income and condition.

B indicates unstandardized regression weight, with standard error (SE).

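The logic of this test can be sketched in a few lines. The sketch below simulates data with a built-in income-by-condition interaction and fits an ordinary least squares regression with a continuous income term; all variable names and parameter values are illustrative assumptions, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical illustration (not the authors' data): simulate Raven's
# accuracy with a built-in income x condition interaction, then test
# whether OLS recovers the interaction with income kept continuous.
rng = np.random.default_rng(0)
n = 300
income = rng.normal(50_000, 20_000, n)       # continuous income (simulated)
condition = rng.integers(0, 2, n)            # 0 = easy, 1 = hard scenario
# Hard scenarios hurt accuracy more at low incomes (the claimed interaction).
accuracy = (0.7
            + 0.000002 * income
            - condition * (0.25 - 0.000003 * income)
            + rng.normal(0, 0.05, n))
df = pd.DataFrame({"accuracy": accuracy, "income": income,
                   "condition": condition})

# "income * condition" expands to main effects plus the interaction term.
model = smf.ols("accuracy ~ income * condition", data=df).fit()
print(model.pvalues["income:condition"])     # p-value of the interaction
```

Because the interaction is built into the simulated data, the `income:condition` term comes out significant; with noisier income measures its standard error grows, which is the attenuation concern the binary split addresses.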

(ii) Wicherts and Scholten are concerned about possible ceiling effects among rich participants on the cognitive control test. Many such studies have been conducted with adults, and error rates, just as we observed, are typically quite low. The possibility of a ceiling problem in the cognitive control task can be assessed by looking separately at compatible and incompatible trials in experiment 1. Because incompatible trials were more difficult than compatible trials, both the poor and the rich participants performed worse on incompatible trials than on compatible ones (Table 2).

Table 2 Performance on cognitive control test for compatible and incompatible trials.

If ceiling effects were driving our results, then the interaction between income and condition should be stronger for compatible trials than for incompatible trials. However, this was not the case. A three-way analysis of variance (income, poor versus rich; condition, easy versus hard; trial, compatible versus incompatible) showed no three-way interaction [F(1,194) = 0.03; P = 0.86], suggesting a similar interaction effect for compatible and incompatible trials. Importantly, the two-way interaction between income and condition remains significant for both compatible and incompatible trials [F(1,194) = 5.69; P < 0.02].
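The structure of this check can be sketched as follows. The simulation below builds a 2 x 2 x 2 design in which the income-by-condition interaction is, by construction, identical for compatible and incompatible trials, so the three-way term should be non-significant while the two-way term is significant; cell sizes, effect sizes, and variable names are illustrative assumptions, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical sketch (not the authors' data): simulate error rates with an
# income x condition interaction that is the same for compatible and
# incompatible trials, so no true three-way interaction exists.
rng = np.random.default_rng(1)
rows = []
for income in ("poor", "rich"):
    for cond in ("easy", "hard"):
        for trial in ("compatible", "incompatible"):
            base = 0.05 + (0.04 if trial == "incompatible" else 0.0)
            # Interaction: poor participants make extra errors when the
            # condition is hard, regardless of trial type.
            bump = 0.06 if (income == "poor" and cond == "hard") else 0.0
            for e in base + bump + rng.normal(0, 0.02, 25):
                rows.append({"income": income, "condition": cond,
                             "trial": trial, "error": e})
df = pd.DataFrame(rows)

# Full-factorial model; anova_lm reports an F test per term.
fit = smf.ols("error ~ income * condition * trial", data=df).fit()
table = anova_lm(fit, typ=2)
print(table.loc["income:condition", "PR(>F)"])        # two-way interaction
print(table.loc["income:condition:trial", "PR(>F)"])  # three-way interaction
```

In this setup the two-way `income:condition` term is reliably detected, while the three-way term hovers at chance levels, mirroring the pattern the text describes.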

We ran an additional analysis to address the possibility of ceiling effects. Some items on Raven's matrices are harder than others (as measured by the fraction of participants who answered each item correctly); if ceiling effects were responsible for our results, the interaction effect should be attenuated on the easiest items. We reanalyzed the Raven's data item by item in experiment 1, examining whether there was a correlation between item difficulty (percentage of people who answered the item correctly) and our interaction effect (the difference between rich and poor in the hard condition minus the difference between rich and poor in the easy condition). There is no such relationship between item difficulty and our effect (r = 0.18; P = 0.50). Finally, our field study replicated these effects within a single population.
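The item-level computation can be sketched as below: per item, compute a difficulty score and an interaction effect, then correlate the two across items. The per-item gaps here are drawn at random purely to show the shape of the calculation; the item count of 12 follows the text, and everything else is an illustrative assumption.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical sketch (not the authors' data): one difficulty score and one
# interaction effect per Raven's item, correlated across items.
rng = np.random.default_rng(2)
n_items = 12                                 # items per session, per the text
difficulty = rng.uniform(0.3, 0.9, n_items)  # fraction answering item right

# Simulated rich-minus-poor accuracy gaps in each condition, per item.
gap_easy = rng.normal(0.05, 0.02, n_items)   # rich - poor, easy condition
gap_hard = rng.normal(0.12, 0.02, n_items)   # rich - poor, hard condition
interaction_effect = gap_hard - gap_easy     # per-item interaction effect

# Ceiling effects would predict a negative difficulty-effect correlation
# (a smaller effect on the easiest items); independence predicts r near 0.
r, p = pearsonr(difficulty, interaction_effect)
print(f"r = {r:.2f}, p = {p:.2f}")
```

With only 12 items, even moderate sample correlations are unlikely to reach significance, which is why the reported r = 0.18 carries P = 0.50.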

(iii) Wicherts and Scholten’s comment on retesting effects appears to rest on a misunderstanding of our table S3. We are not arguing that there is no retesting effect. We are simply pointing out that the post-harvest improvement appears robust to the presence of learning. We have no informed perspective on the magnitude of the retesting effects to expect in this population, given the differences in time taken to complete the various tests and the drastically different testing conditions and populations. Among other things, the lack of stronger retest effects may be due to brief exposure, limited to a total of 12 items, whereas a full Raven's battery typically includes 60. Our only point is that the harvest effect persists over and above the learning effects, as originally reported. Of course, none of this applies to the laboratory studies, where there was no retesting.
