Report

An fMRI Investigation of Emotional Engagement in Moral Judgment

Science 14 Sep 2001:
Vol. 293, Issue 5537, pp. 2105-2108
DOI: 10.1126/science.1062872

Abstract

The long-standing rationalist tradition in moral psychology emphasizes the role of reason in moral judgment. A more recent trend places increased emphasis on emotion. Although both reason and emotion are likely to play important roles in moral judgment, relatively little is known about their neural correlates, the nature of their interaction, and the factors that modulate their respective behavioral influences in the context of moral judgment. In two functional magnetic resonance imaging (fMRI) studies using moral dilemmas as probes, we apply the methods of cognitive neuroscience to the study of moral judgment. We argue that moral dilemmas vary systematically in the extent to which they engage emotional processing and that these variations in emotional engagement influence moral judgment. These results may shed light on some puzzling patterns in moral judgment observed by contemporary philosophers.

The present study was inspired by a family of ethical dilemmas familiar to contemporary moral philosophers (1). One such dilemma is the trolley dilemma: A runaway trolley is headed for five people who will be killed if it proceeds on its present course. The only way to save them is to hit a switch that will turn the trolley onto an alternate set of tracks where it will kill one person instead of five. Ought you to turn the trolley in order to save five people at the expense of one? Most people say yes. Now consider a similar problem, the footbridge dilemma. As before, a trolley threatens to kill five people. You are standing next to a large stranger on a footbridge that spans the tracks, in between the oncoming trolley and the five people. In this scenario, the only way to save the five people is to push this stranger off the bridge, onto the tracks below. He will die if you do this, but his body will stop the trolley from reaching the others. Ought you to save the five others by pushing this stranger to his death? Most people say no.

Taken together, these two dilemmas create a puzzle for moral philosophers: What makes it morally acceptable to sacrifice one life to save five in the trolley dilemma but not in the footbridge dilemma? Many answers have been proposed. For example, one might suggest, in a Kantian vein, that the difference between these two cases lies in the fact that in the footbridge dilemma one literally uses a fellow human being as a means to some independent end, whereas in the trolley dilemma the unfortunate person just happens to be in the way. This answer, however, runs into trouble with a variant of the trolley dilemma in which the track leading to the one person loops around to connect with the track leading to the five people (1). Here we will suppose that without a body on the alternate track, the trolley would, if turned that way, make its way to the other track and kill the five people as well. In this variant, as in the footbridge dilemma, you would use someone's body to stop the trolley from killing the five. Most agree, nevertheless, that it is still appropriate to turn the trolley in this case in spite of the fact that here, too, we have a case of “using.” These are just one proposed solution and one counterexample, but together they illustrate the sort of dialectical difficulties that all proposed solutions to this problem have encountered. If a solution to this problem exists, it is not obvious. That is, there is no set of consistent, readily accessible moral principles that captures people's intuitions concerning what behavior is or is not appropriate in these and similar cases. This leaves psychologists with a puzzle of their own: How is it that nearly everyone manages to conclude that it is acceptable to sacrifice one life for five in the trolley dilemma but not in the footbridge dilemma, in spite of the fact that a satisfying justification for distinguishing between these two cases is remarkably difficult to find (2)?

We maintain that, from a psychological point of view, the crucial difference between the trolley dilemma and the footbridge dilemma lies in the latter's tendency to engage people's emotions in a way that the former does not. The thought of pushing someone to his death is, we propose, more emotionally salient than the thought of hitting a switch that will cause a trolley to produce similar consequences, and it is this emotional response that accounts for people's tendency to treat these cases differently. This hypothesis concerning these two cases suggests a more general hypothesis concerning moral judgment: Some moral dilemmas (those relevantly similar to the footbridge dilemma) engage emotional processing to a greater extent than others (those relevantly similar to the trolley dilemma), and these differences in emotional engagement affect people's judgments. The present investigation is an attempt to test this more general hypothesis. Drawing upon recent work concerning the neural correlates of emotion (3–5), we predicted that brain areas associated with emotion would be more active during contemplation of dilemmas such as the footbridge dilemma than during contemplation of dilemmas such as the trolley dilemma. In addition, we predicted a pattern of behavioral interference similar to that observed in cognitive tasks in which automatic processes can influence responses, such as the Stroop task (in which the identity of a color word can interfere with participants' ability to name the color in which it is displayed; e.g., the ability to say “green” in response to the word “red” written in green ink) (6, 7). In light of our proposal that people tend to have a salient, automatic emotional response to the footbridge dilemma that leads them to judge the action it proposes to be inappropriate, we would expect those (relatively rare) individuals who nevertheless judge this action to be appropriate to do so against a countervailing emotional response and to exhibit longer reaction times as a result of this emotional interference. More generally, we predicted longer reaction times for trials in which the participant's response is incongruent with the emotional response (e.g., saying “appropriate” to a dilemma such as the footbridge dilemma). We predicted the absence of such effects for dilemmas such as the trolley dilemma, which, according to our theory, are less likely to elicit a strong emotional response.

In each of two studies, Experiments 1 and 2, we used a battery of 60 practical dilemmas (8). These dilemmas were divided into “moral” and “non-moral” categories on the basis of the responses of pilot participants (8). (Typical examples of non-moral dilemmas posed questions about whether to travel by bus or by train given certain time constraints and about which of two coupons to use at a store.) Two independent coders evaluated each moral dilemma using three criteria designed to capture the difference between the intuitively “up close and personal” (and putatively more emotional) sort of violation exhibited by the footbridge dilemma and the more intuitively impersonal (and putatively less emotional) violation exhibited by the trolley dilemma (8, 9). Moral dilemmas meeting these criteria were assigned to the “moral-personal” condition, the others to the “moral-impersonal” condition. Typical moral-personal dilemmas included a version of the footbridge dilemma, a case of stealing one person's organs in order to distribute them to five others, and a case of throwing people off a sinking lifeboat. Typical moral-impersonal dilemmas included a version of the trolley dilemma, a case of keeping money found in a lost wallet, and a case of voting for a policy expected to cause more deaths than its alternatives. Participants responded to each dilemma by indicating whether they judged the action it proposes to be “appropriate” or “inappropriate.”

In each experiment, nine participants (10) responded to each of 60 dilemmas (11) while undergoing brain scanning using fMRI (12). Figures 1 and 2 describe brain areas identified in Experiment 1 by a thresholded omnibus analysis of variance (ANOVA) performed on the functional images (13). In each case, the ANOVA identified all brain areas differing in activity among the moral-personal, moral-impersonal, and non-moral conditions. Planned comparisons on these areas revealed that medial portions of Brodmann's Areas (BA) 9 and 10 (medial frontal gyrus), BA 31 (posterior cingulate gyrus), and BA 39 (angular gyrus, bilateral) were significantly more active in the moral-personal condition than in the moral-impersonal and the non-moral conditions. Recent functional imaging studies have associated each of these areas with emotion (5, 14–16). Areas associated with working memory have been found to become less active during emotional processing as compared to periods of cognitive processing (17). BA 46 (middle frontal gyrus, right) and BA 7/40 (parietal lobe, bilateral)—both associated with working memory (18, 19)—were significantly less active in the moral-personal condition than in the other two conditions. In BA 39 (bilateral), BA 46, and BA 7/40 (bilateral), there was no significant difference between the moral-impersonal and the non-moral condition (20, 21).
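To make the structure of such planned comparisons concrete, the sketch below illustrates the region-of-interest step in Python with simulated data: per-participant mean signal in one area is compared between the moral-personal condition and each of the other two conditions. This is only an illustration under stated assumptions, not the authors' imaging pipeline; the paper's analysis began with a thresholded voxelwise ANOVA, and paired t-tests stand in here for whatever comparison procedure was actually used. All values, array shapes, and variable names are hypothetical.

```python
# Minimal sketch (not the authors' imaging pipeline) of planned ROI comparisons:
# compare per-participant mean signal in one area between the moral-personal
# condition and each of the other two conditions with paired t-tests.
# All values and names below are hypothetical, simulated for illustration only.

import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)

# roi_signal[condition]: mean signal in one ROI (e.g., medial BA 9/10) for
# each of the nine participants.
roi_signal = {
    "moral_personal":   rng.normal(0.30, 0.10, size=9),
    "moral_impersonal": rng.normal(0.10, 0.10, size=9),
    "non_moral":        rng.normal(0.08, 0.10, size=9),
}

# Planned comparisons: moral-personal vs. each of the other two conditions.
for other in ("moral_impersonal", "non_moral"):
    t, p = ttest_rel(roi_signal["moral_personal"], roi_signal[other])
    print(f"moral_personal vs. {other}: t(8) = {t:.2f}, p = {p:.4f}")
```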

Experiment 2 served to replicate the results of Experiment 1 (22) and to provide behavioral data concerning participants' judgments and reaction times. Planned comparisons on the seven brain areas identified in Experiment 1 yielded results nearly identical to those of Experiment 1 with the following differences. In Experiment 2 there was no difference in BA 9/10 between the moral-impersonal and non-moral conditions, and no differences were found for BA 46 (23).

Reaction time data from Experiment 2 are described by Fig. 3. Our theory concerning emotional interference predicted longer reaction times for emotionally incongruent responses, which occur when a participant responds “appropriate” in the moral-personal condition (e.g., judging it “appropriate” to push the man off the footbridge in the footbridge dilemma) but which do not occur in the moral-impersonal and non-moral conditions. As predicted, responses of “appropriate” (emotionally incongruent) were significantly slower than responses of “inappropriate” (emotionally congruent) within the moral-personal condition, and there was no significant difference in reaction time between responses of “appropriate” and “inappropriate” in the other two conditions. In fact, the data exhibit a trend in the opposite direction for the other two conditions (24), with responses of “inappropriate” taking slightly longer than responses of “appropriate.”
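The sketch below illustrates, with simulated per-participant mean reaction times, the kind of analysis summarized in Fig. 3: a test of the condition-by-response interaction followed by a paired t-test within the moral-personal condition. It is not the authors' analysis code; a standard repeated-measures ANOVA stands in for the paper's mixed-effects analysis, and the data, variable names, and simulated effect are hypothetical.

```python
# Minimal sketch of the reaction-time analysis pattern summarized in Fig. 3,
# using simulated data rather than the study's measurements: a repeated-measures
# ANOVA for the condition x response interaction, then a paired t-test within
# the moral-personal condition. All names and values are hypothetical.

import numpy as np
import pandas as pd
from scipy.stats import ttest_rel
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(1)
conditions = ["moral_personal", "moral_impersonal", "non_moral"]
responses = ["appropriate", "inappropriate"]

# Long-format table: one mean RT per participant x condition x response cell.
rows = []
for subject in range(9):
    for cond in conditions:
        for resp in responses:
            # Simulate the predicted pattern: "appropriate" responses are slow
            # only in the moral-personal condition (emotional incongruence).
            slow = 1.5 if (cond == "moral_personal" and resp == "appropriate") else 0.0
            rows.append({"subject": subject, "condition": cond,
                         "response": resp, "rt": 5.0 + slow + rng.normal(0, 0.3)})
df = pd.DataFrame(rows)

# Repeated-measures ANOVA: condition x response interaction on mean RT.
anova = AnovaRM(df, depvar="rt", subject="subject",
                within=["condition", "response"]).fit()
print(anova.anova_table)

# Follow-up paired t-test within the moral-personal condition.
mp = df[df["condition"] == "moral_personal"].pivot(
    index="subject", columns="response", values="rt")
t, p = ttest_rel(mp["appropriate"], mp["inappropriate"])
print(f"moral-personal: t(8) = {t:.2f}, p = {p:.4f}")
```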

Figure 1

Effect of condition on activity in brain areas identified in Experiment 1. R, right; L, left; B, bilateral. Results for the middle frontal gyrus were not replicated in Experiment 2. The moral-personal condition was significantly different from the other two conditions in all other areas in both Experiments 1 and 2. In Experiment 1 the medial frontal and posterior cingulate gyri showed significant differences between the moral-impersonal and non-moral conditions. In Experiment 2 only the posterior cingulate gyrus was significantly different in this comparison. Brodmann's Areas and Talairach (28) coordinates (x, y, z) for each area are as follows (left to right in graph): 9/10 (1, 52, 17); 31 (–4, –54, 35); 46 (45, 36, 24); 7/40 (–48, –65, 26); 7/40 (50, –57, 20).

Figure 2

Brain areas exhibiting differences in activity between conditions shown in three axial slices of a standard brain (28). Slice location is indicated by Talairach (28) z coordinate. Data are for the main effect of condition in Experiment 1. Colored areas reflect the thresholded F scores. Images are reversed left to right to follow radiologic convention.

Figure 3

Mean reaction time by condition and response type in Experiment 2. A mixed-effects ANOVA revealed a significant interaction between condition and response type [F(2, 8) = 12.449, P < 0.0005]. Reaction times differed significantly between responses of “appropriate” and “inappropriate” in the moral-personal condition [t(8) = 4.530, P < 0.0005] but not in the other conditions (P > 0.05). Error bars indicate two standard errors of the mean.

In each of the brain areas identified in both Experiments 1 and 2, the moral-personal condition had an effect significantly different from both the moral-impersonal and the non-moral conditions. All three areas showing increased relative activation in the moral-personal condition have been implicated in emotional processing. The behavioral data provide further evidence for the increased emotional engagement in the moral-personal condition by revealing a reaction time pattern that is unique to that condition and that was predicted by our hypothesis concerning emotional interference. Moreover, the presence of this interference effect in the behavioral data strongly suggests that the increased emotional responses generated by the moral-personal dilemmas have an influence on and are not merely incidental to moral judgment (25). These data also suggest that, in terms of the psychological processes associated with their production, judgments concerning “impersonal” moral dilemmas more closely resemble judgments concerning non-moral dilemmas than they do judgments concerning “personal” moral dilemmas.

The trolley and footbridge dilemmas emerged as pieces of a puzzle for moral philosophers: Why is it acceptable to sacrifice one person to save five others in the trolley dilemma but not in the footbridge dilemma? Here we consider these dilemmas as pieces of a psychological puzzle: How do people manage to conclude that it is acceptable to sacrifice one for the sake of five in one case but not in the other? We maintain that emotional response is likely to be the crucial difference between these two cases. But this is an answer to the psychological puzzle, not the philosophical one. Our conclusion, therefore, is descriptive rather than prescriptive. We do not claim to have shown any actions or judgments to be morally right or wrong. Nor have we argued that emotional response is the sole determinant of judgments concerning moral dilemmas of the kind discussed in this study. On the contrary, the behavioral influence of these emotional responses is most strongly suggested by the performance of those participants who judge in spite of their emotions.

What has been demonstrated is that there are systematic variations in the engagement of emotion in moral judgment. The systematic nature of these variations is manifest in an observed correlation between (i) certain features that differ between the trolley dilemma and the footbridge dilemma and (ii) patterns of neural activity in emotion-related brain areas as well as patterns in reaction time. Methodological constraints led us to characterize these “certain features” by means of a highly regimented distinction between actions that are “personal” and “impersonal” (8). This personal-impersonal distinction has proven useful in generating the present results, but it is by no means definitive. We view this distinction as a useful “first cut,” an important but preliminary step toward identifying the psychologically essential features of circumstances that engage (or fail to engage) our emotions and that ultimately shape our moral judgments—judgments concerning hypothetical examples such as the trolley and footbridge dilemmas but also concerning the more complicated moral dilemmas we face in our public and private lives. A distinction such as this may allow us to steer a middle course between the traditional rationalism and more recent emotivism that have dominated moral psychology (26).

The present results raise but do not answer a more general question concerning the relation between the aforementioned philosophical and psychological puzzles: How will a better understanding of the mechanisms that give rise to our moral judgments alter our attitudes toward the moral judgments we make?

* To whom correspondence should be addressed. E-mail: jdgreene@princeton.edu

REFERENCES AND NOTES
