Report

The role of education interventions in improving economic rationality

Science  05 Oct 2018:
Vol. 362, Issue 6410, pp. 83-86
DOI: 10.1126/science.aar6987

Educating for economic rationality

The hypothesis that education enhances economic decision-making has been surprisingly underexplored. Kim et al. studied this question using a randomized controlled trial in a sample of 2812 girls in secondary schools in Malawi. Four years after providing financial support for a year's schooling, they presented the subjects with a set of decision problems (for example, allocating funds to immediate versus future expenses) that test economic rationality. The education intervention enhanced both educational outcomes and long-run economic rationality, as measured by consistency with utility maximization.

Science, this issue p. 83

Abstract

Schooling rewards people with labor market returns and nonpecuniary benefits in other realms of life. However, there is no experimental evidence showing that education interventions improve individual economic rationality. We examine this hypothesis by studying a randomized 1-year financial support program for education in Malawi that reduced absence and dropout rates and increased scores on a qualification exam of female secondary school students. We measure economic rationality 4 years after the intervention by using lab-in-the-field experiments to create scores of consistency with utility maximization that are derived from revealed preference theory. We find that students assigned to the intervention had higher scores of rationality. The results remain robust after controlling for changes in cognitive and noncognitive skills. Our results suggest that education enhances the quality of economic decision-making.

Rationality in human choices has been a cornerstone assumption in traditional economic analysis and yet one of the most controversial issues in social and behavioral sciences (1). Mounting evidence shows that people tend to make systematic errors in judgment and decision-making and that there is a high level of heterogeneity in the extent to which rationality is limited across decisions and individuals (2, 3). The welfare loss resulting from poor decisions can be substantial, which implies that policy-makers might want to rethink the role of public policy in response to the failure of rationality (4).

The behavioral science literature has accumulated evidence on ways of improving people’s capabilities and quality of decision-making: changing incentives, restructuring choice architecture, and debiasing training (5–7). Most of these approaches target the reduction of decision biases in particular contexts of economic activity but do not address the improvement of general decision-making capabilities that are transferable across decision domains. It is often controversial to judge whether decision biases are driven by the failure of rationality or by other factors such as anomalous preferences.

Schooling has been shown to influence a wide range of outcomes, including income, health, and crime (8, 9). One little-explored hypothesis is that education improves people’s decision-making abilities and leads them to make better decisions across various choice environments. The impacts of education on decision-making can then be a potential mechanism underlying the pecuniary and nonpecuniary returns to education.

We examine this hypothesis by studying a nongovernmental organization–implemented randomized controlled trial of education support in Malawi, an environment where, among young females, only 21.4% have received some secondary education and 9.8% have completed secondary school education (10). The program randomly provided financial support for education in a sample of 2812 female 9th and 10th graders from 83 classrooms in 33 public schools between the third semester of the academic year 2011–2012 and the second semester of the academic year 2012–2013. The program was randomized at the classroom level and consisted of the payment of school tuition and fees for 1 year, as well as a monthly cash stipend. The total amount of support was ~$70 per student as long as the student remained in school until the end of the program.

We conducted a short-term follow-up survey about 1 year later that measured short-term educational impacts. Four years after the intervention, we conducted a long-term follow-up survey that measured longer-term educational outcomes and implemented laboratory experiments presenting subjects with sets of decision problems under risk and over time, each defined on a two-dimensional budget set. The risk-domain experiment consists of 20 decision problems representing a set of portfolio options associated with two equally probable unknown states. The time-domain experiment consists of two frames wherein the budget set represents a set of money allocations between two payment dates. The near time frame comprises 15 decisions allocating money between tomorrow and 31 days from the time of the experiment. The distant time frame consists of 15 decisions allocating money between 1 year and 1 year and 30 days from the time of the experiment.
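
To make the structure of these budget-set decision problems concrete, the sketch below parameterizes the three blocks of decisions (20 risk problems and two blocks of 15 intertemporal problems) as two-good budget lines with the endowment normalized to 1. The price range and the random draws are illustrative assumptions only; the actual budget sets used in the study are specified in the supplementary materials (15).

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_budget_lines(n, price_low=0.5, price_high=2.0):
    """Draw n two-good budget lines (p1, p2) with the endowment normalized to 1.
    On each line a subject chooses a bundle (x1, x2) with p1*x1 + p2*x2 = 1.
    The price range here is a placeholder, not the study's actual design."""
    return rng.uniform(price_low, price_high, size=(n, 2))

risk_budgets = draw_budget_lines(20)      # portfolio choices over two equally likely states
near_budgets = draw_budget_lines(15)      # allocations between tomorrow and 31 days out
distant_budgets = draw_budget_lines(15)   # allocations between 1 year and 1 year + 30 days
```

A subject's chosen bundles on these budget lines are the raw data for the consistency tests described next.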

These laboratory experiments generate a rich set of individual choice data that are well suited to testing for consistency with utility maximization as the criterion for economic rationality (3, 11, 12). Classical revealed preference theory shows that choices from a finite collection of budget lines are consistent with maximizing a (well-behaved) utility function if and only if they satisfy the Generalized Axiom of Revealed Preference (GARP) (13). When the choice data do not satisfy GARP, we use Afriat’s critical cost efficiency index (CCEI) to measure how closely they comply with the utility maximization hypothesis (14). We compute the CCEI in each experimental domain to generate an index of the subject’s level of economic rationality. The CCEI is bounded between 0 and 1; the closer it is to 1, the more closely the choice data are consistent with utility maximization. As the summary index of economic rationality for the time domain, we use the minimum of the two CCEIs from the near and distant time frames. The classic money pump argument offers a reason why consistency with utility maximization may be key to economic survival and can thus serve as the basic criterion of economic rationality: inconsistent behavior can be exploited indefinitely by arbitrageurs. In addition, we consider two measures of compliance with stochastic dominance in the risk-domain experiment as an alternative criterion for economic rationality. Further details of the education intervention, laboratory experiment, and measurements are reported in the supplementary materials (15).
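
As a concrete illustration of the GARP test and Afriat's CCEI described above, here is a minimal Python sketch: it builds the revealed preference relation from a subject's prices and chosen bundles, checks GARP at a given efficiency level, and binary-searches for the CCEI. This is the textbook construction from revealed preference theory, not the authors' own code.

```python
import numpy as np

def garp_holds(prices, choices, e=1.0, eps=1e-9):
    """Check GARP at efficiency level e.
    prices[i] and choices[i] are 2-vectors: the prices defining budget line i
    and the bundle the subject chose on that line."""
    P = np.asarray(prices, dtype=float)
    X = np.asarray(choices, dtype=float)
    expend = np.einsum("ij,ij->i", P, X)          # p_i . x_i
    cost = P @ X.T                                # cost[i, j] = p_i . x_j
    # x_i directly revealed preferred to x_j at efficiency e: e * p_i.x_i >= p_i.x_j
    direct = e * expend[:, None] >= cost - eps
    # transitive closure of the direct relation (Floyd-Warshall over booleans)
    rp = direct.copy()
    for k in range(len(X)):
        rp |= rp[:, [k]] & rp[[k], :]
    # strict direct preference: e * p_i.x_i > p_i.x_j
    strict = e * expend[:, None] > cost + eps
    # GARP violation: x_i revealed preferred to x_j while x_j strictly preferred to x_i
    return not np.any(rp & strict.T)

def ccei(prices, choices, tol=1e-4):
    """Afriat's critical cost efficiency index: the largest e in [0, 1]
    at which the choice data still satisfy GARP (found by binary search)."""
    if garp_holds(prices, choices, 1.0):
        return 1.0
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if garp_holds(prices, choices, mid):
            lo = mid
        else:
            hi = mid
    return lo
```

For choice data that already satisfy GARP the index equals 1; otherwise the binary search converges to the largest proportional budget contraction at which all violations disappear, so 1 minus the CCEI can be read as the share of expenditure "wasted" through inconsistency.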

We present coefficients from regressions with baseline controls consisting of individual characteristics, parents’ education and occupation, and school type. We cluster our standard errors at the classroom level. Because we deal with multiple outcomes of education and economic rationality, as well as the heterogeneous effects for 9th and 10th graders, we account for multiple hypothesis testing by following the approach in our preanalysis plan (16). We group all outcomes for the whole sample, 9th graders, and 10th graders in each realm of education or economic rationality and report standardized treatment effects with baseline controls as in (17), as well as family-wise adjusted P values.
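
The estimation pipeline this paragraph describes can be sketched as follows: an OLS regression of each outcome on the treatment indicator plus baseline controls, with standard errors clustered by classroom, followed by a family-wise correction across the outcomes in each family. The column names ('treated', 'classroom') and the Holm step-down adjustment below are illustrative assumptions; the paper follows the procedure specified in its preanalysis plan (16), which may differ.

```python
import numpy as np
import statsmodels.formula.api as smf

def treatment_effect(df, outcome, controls):
    """OLS of one outcome on the treatment dummy plus a list of baseline controls,
    with standard errors clustered at the classroom level.
    Column names here are hypothetical placeholders."""
    formula = f"{outcome} ~ treated + " + " + ".join(controls)
    fit = smf.ols(formula, data=df).fit(
        cov_type="cluster", cov_kwds={"groups": df["classroom"]}
    )
    return fit.params["treated"], fit.bse["treated"], fit.pvalues["treated"]

def holm_adjust(pvalues):
    """Holm step-down adjustment, shown as one simple way to control
    the family-wise error rate across a family of outcomes."""
    p = np.asarray(pvalues, dtype=float)
    order = np.argsort(p)
    m = len(p)
    adj, running = np.empty(m), 0.0
    for rank, idx in enumerate(order):
        running = max(running, (m - rank) * p[idx])
        adj[idx] = min(1.0, running)
    return adj
```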

First we evaluate the impacts of the intervention on various education outcomes: number of days absent during the past semester, school dropout rate, taking the Junior Certificate Examination (JCE) in 10th grade, passing the JCE, and total years of education. The information on the JCE comes from administrative data, whereas absence, dropout rate, and years of education are self-reported. Results for the whole sample, 9th graders, and 10th graders are presented in Table 1.

Table 1 Impacts of education support program on education outcomes.

Coefficients are from linear regressions of each education outcome on the education intervention indicator. Standard errors (in parentheses) are clustered at the classroom level. FU, follow-up; N/A, not applicable.

Students in the treated classrooms (i.e., those that were assigned the intervention) have better education outcomes compared with those in the control classrooms; percentage changes below are relative to control-group means. Specifically, the treated students in the whole sample were absent 1.6 fewer days (40%) and are 7% (5.5 percentage points) and 14% (8.6 percentage points) more likely to take and pass the JCE, respectively. The self-reported dropout rate decreases by 3.4 percentage points (30%), and total years of education increase by ~0.1 of a year; however, neither change is statistically significant, and we expect substantial measurement error in these self-reported measures. These treatment effects are heterogeneous between the two cohorts and come mainly from 9th graders: Treated 9th graders were absent 1.5 fewer days (42%) per semester, are 61% (8.3 percentage points) less likely to have dropped out, and are 20% (12.3 percentage points) and 28% (14.1 percentage points) more likely to take and pass the JCE, respectively.

The standardized treatment effect shown in column 6 of Table 1 confirms that the intervention was successful in enhancing schooling, and this result is mainly driven by 9th graders. We attribute this heterogeneity to the fact that 9th graders are more vulnerable to dropping out of school and therefore could benefit more from the education intervention than 10th graders. Using data from our study, we estimate that the dropout rate was 26.4% in 9th grade compared with only 11.2% in 10th grade. This pattern is consistent with data from Malawian national statistics, as well as other settings (15).

Next we study whether the educational intervention affected economic rationality. Columns 1 and 2 of Table 2 present the average treatment effects on the CCEIs in the risk and time domains with baseline controls. For the overall sample, we observe that girls who received the intervention display an increase in CCEIs in the risk and time domains of 1.3 percentage points (1.6%) and 1.4 percentage points (1.7%), respectively, but only the time-domain effect is significant at the 5% level. Turning to the alternative measure of economic rationality, both the relative frequency and the expected payoff ratio of complying with stochastic dominance exhibit similar patterns, as shown in columns 3 and 4. The standardized treatment effect across all four measures indicates that the treatment is associated with a 0.02 standard deviation [standard error (SE) = 0.009] increase in economic rationality scores. Figure S2 shows that the intervention improves economic rationality throughout most of the range of CCEIs.

Table 2 Impacts of education support program on economic rationality.

Coefficients are from linear regressions of each rationality measure on the education intervention indicator. Standard errors (in parentheses) are clustered at the classroom level. N/A, not applicable.

Table 2 confirms that the intervention had heterogeneous impacts. For 9th graders in the control group, the mean CCEIs in the risk and time domains are 0.81 and 0.82, respectively. The CCEIs of the treatment group are 3.3 percentage points (4.0%) and 3.1 percentage points (3.7%) higher than those of the control group (columns 1 and 2, respectively). Treated 9th graders are also more likely to make decisions in conformity with stochastic dominance than those in the control group (columns 3 and 4). The standardized treatment effect for 9th graders across all measures is 0.038 standard deviations (SE = 0.011) (column 5). We do not find any treatment effect for 10th graders, but the statistical significance of the treatment effects on economic rationality remains robust when using family-wise adjusted P values to account for multiple hypothesis testing.

A natural question that arises is whether our measures of economic rationality are proxies correlated with other primitives of decision-making that might also be affected by the intervention. To address this issue, we first examined the treatment effects of the intervention on time and risk preferences, cognitive abilities, and personality (15) and found that the intervention did not affect risk attitudes or time impatience but did enhance cognitive skills measured by the math test score and some aspects of personality traits (table S5). We then investigated the effects of the intervention on economic rationality, controlling for measures of risk and time preferences, cognitive skills, and personality. Our rationality scores are only partially explained by these control variables (table S6): For example, for 9th graders, the control variables reduce the impacts on rationality scores by about one-third.

There could be other explanations for our findings. First, the intervention might help subjects better understand the experiment instructions. However, our results are robust when we drop the first three choices in each experiment, which suggests that differential learning during the experiment is not important (table S7). Second, beneficiaries might exert differential effort during the experimental games, despite the fact that they are incentivized. We created several measures aimed at capturing effort during the survey, including indexes of missing and “do not know” responses, and did not find differences between the treatment and control groups (table S8). Third, we cannot definitively determine whether the intervention improved rationality solely through its effect on increased education. For example, the monthly stipend that was part of the intervention could lead girls to think more rationally about how to spend their money.

Using a randomized controlled trial of education support and financially incentivized laboratory experiments, we established causal evidence that an education intervention increases not only educational outcomes but also economic rationality. The size of the treatment effects on CCEIs is economically meaningful and larger than the cross-sectional relationship between education and CCEIs in our control group and in a study from the Netherlands (3), as well as in recent work that exploits changes in compulsory schooling in England (18). Direct comparison of the results between our study and the two aforementioned studies (3, 18) is difficult for several reasons. First, measures of years of schooling coming from self-reported levels of educational attainment are generally noisy (8), especially so in a developing-country setting where dropout and grade repetition are frequent (19). Second, there are differences in laboratory experimental design, such as the number of choices per subject and the variation of the budget sets. Third, our treatment effects are measured after 4 years, whereas the other studies (3, 18) measure outcomes during adulthood. If program effects fade out over time, this could help reconcile the different results of these three studies (20). Fourth, the English and Dutch samples differ from our sample along many dimensions of socioeconomic status, and therefore our findings might not apply to populations in developed countries. For example, people in developed countries may have more opportunities to learn to make rational decisions outside of school. Finally, on a hopeful note, our relatively larger impacts of education interventions are consistent with the literature showing larger returns to cognitive and noncognitive investments in resource-constrained settings (21, 22). In our setting, the impact of the educational intervention is large, not just in terms of the effects on economic rationality but also on cognitive outcomes (15). More research is needed to determine the reproducibility and generalizability of our findings.

Supplementary Materials

www.sciencemag.org/content/362/6410/83/suppl/DC1

Materials and Methods

Figs. S1 and S2

Tables S1 to S8

Survey Instruments

References (24–59)

References and Notes

  1. See supplementary materials.
Acknowledgments: We thank the Africa Future Foundation (AFF), Korea International Cooperation Agency (KOICA), Bundang Cheil Women’s Hospital (BCWH), and Daeyang Luke Hospital (DLH) for implementing the interventions. AFF, KOICA, BCWH, and DLH supported data collection but played no role in analysis, the decision to publish, or manuscript preparation. We also thank the team members of Project Malawi of AFF and DLH—S. Kim, Y. Baek, D. Lungu, E. Kim, S. Park, E. Baek, J. Kim, J. Kim, T. Kim, H. So, S. Lee, N. Shenavai, and J. Jung—for support, project management, and research assistance. This research project received IRB approval from the Malawi National Health Science Research Committee (Malawi NHSRC 902), Cornell University (protocol ID 1310004153), and Columbia University (IRB-AAAL8400). The preanalysis plan of this study is registered in the American Economic Association’s registry for randomized controlled trials under ID AEARCTR-0001243.

Funding: This work is supported by grants from KOICA and BCWH to AFF (H.B.K.), Seoul National University (Creative-Pioneering Researchers Program) (S.C.), and KDI School of Public Policy and Management (KDIS Research Grant 20150070) (B.K.).

Author contributions: H.B.K., B.K., and C.P.-E. implemented the main education trial and designed the study; S.C. developed and implemented the laboratory experiment; H.B.K., S.C., B.K., and C.P.-E. conducted and analyzed the data and wrote the manuscript.

Competing interests: The authors declare no competing interests.

Data and materials availability: The data and code for both the manuscript and the supplementary materials are publicly available at the CISER Data Archive (23).