Report

Evidence for a Collective Intelligence Factor in the Performance of Human Groups

Science  29 Oct 2010:
Vol. 330, Issue 6004, pp. 686-688
DOI: 10.1126/science.1193147

Meeting of Minds

The performance of humans across a range of different kinds of cognitive tasks has been encapsulated as a common statistical factor called g, or the general intelligence factor. What intelligence actually is remains unclear and hotly debated, yet there is a reproducible association of g with performance outcomes, such as income and academic achievement. Woolley et al. (p. 686, published online 30 September) report a psychometric methodology for quantifying a factor termed “collective intelligence” (c), which reflects how well groups perform on a similarly diverse set of group problem-solving tasks. The primary contributors to c appear to be the g factors of the group members, along with a propensity toward social sensitivity—in essence, how well individuals work with others.

Abstract

Psychologists have repeatedly shown that a single statistical factor—often called “general intelligence”—emerges from the correlations among people’s performance on a wide variety of cognitive tasks. But no one has systematically examined whether a similar kind of “collective intelligence” exists for groups of people. In two studies with 699 people, working in groups of two to five, we find converging evidence of a general collective intelligence factor that explains a group’s performance on a wide variety of tasks. This “c factor” is not strongly correlated with the average or maximum individual intelligence of group members but is correlated with the average social sensitivity of group members, the equality in distribution of conversational turn-taking, and the proportion of females in the group.

As research, management, and many other kinds of tasks are increasingly accomplished by groups—working both face-to-face and virtually (1–3)—it is becoming ever more important to understand the determinants of group performance. Over the past century, psychologists have made considerable progress in defining and systematically measuring intelligence in individuals (4). We have used the statistical approach they developed for individual intelligence to systematically measure the intelligence of groups. Even though social psychologists and others have studied for decades how well groups perform specific tasks (5, 6), they have not attempted to measure group intelligence in the same way individual intelligence is measured—by assessing how well a single group can perform a wide range of different tasks and using that information to predict how that same group will perform other tasks in the future. The goal of the research reported here was to test the hypothesis that groups, like individuals, do have characteristic levels of intelligence, which can be measured and used to predict the groups’ performance on a wide variety of tasks.

Although controversy has surrounded it, the concept of measurable human intelligence is based on a fact that is still as remarkable as it was to Spearman when he first documented it in 1904 (7): People who do well on one mental task tend to do well on most others, despite large variations in the tests’ contents and methods of administration (4). In principle, performance on cognitive tasks could be largely uncorrelated, as one might expect if each relied on a specific set of capacities that was not used by other tasks (8). It could even be negatively correlated, if practicing to improve one task caused neglect of others (9). The empirical fact of general cognitive ability as first demonstrated by Spearman is now, arguably, the most replicated result in all of psychology (4).

Evidence of general intelligence comes from the observation that the average correlation among individuals’ performance scores on a relatively diverse set of cognitive tasks is positive, the first factor extracted in a factor analysis of these scores generally accounts for 30 to 50% of the variance, and subsequent factors extracted account for substantially less variance. This first factor extracted in an analysis of individual intelligence tests is referred to as general cognitive ability, or g, and it is the main factor that intelligence tests measure. What makes intelligence tests of substantial practical (not just theoretical) importance is that intelligence can be measured in an hour or less, and is a reliable predictor of a very wide range of important life outcomes over a long span of time, including grades in school, success in many occupations, and even life expectancy (4).
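To make this statistical pattern concrete, the following sketch (illustrative only, with simulated data and hypothetical variable names, not the analyses cited above) generates individual scores on a battery of tasks driven by a single latent ability and checks how much of the total variance the first extracted factor accounts for.

```python
import numpy as np

# Hypothetical data: rows are people, columns are scores on diverse cognitive tasks.
rng = np.random.default_rng(0)
n_people, n_tasks = 200, 8
ability = rng.normal(size=(n_people, 1))                   # latent general ability
scores = 0.6 * ability + 0.8 * rng.normal(size=(n_people, n_tasks))

# Correlation matrix of task scores and its eigenvalues (initial factor extraction).
R = np.corrcoef(scores, rowvar=False)
eigenvalues = np.sort(np.linalg.eigvalsh(R))[::-1]
variance_share = eigenvalues / n_tasks                     # each factor's share of total variance

print("mean inter-task correlation:", R[np.triu_indices(n_tasks, k=1)].mean().round(2))
print("variance explained by first factor:", variance_share[0].round(2))
print("variance explained by second factor:", variance_share[1].round(2))
```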

By analogy with individual intelligence, we define a group’s collective intelligence (c) as the general ability of the group to perform a wide variety of tasks. Empirically, collective intelligence is the inference one draws when the ability of a group to perform one task is correlated with that group’s ability to perform a wide range of other tasks. This kind of collective intelligence is a property of the group itself, not just the individuals in it. Unlike previous work that examined the effect on group performance of the average intelligence of individual group members (10), one of our goals is to determine whether the collective intelligence of the group as a whole has predictive power above and beyond what can be explained by knowing the abilities of the individual group members.

The first question we examined was whether collective intelligence—in this sense—even exists. Is there a single factor for groups, a c factor, that functions in the same way for groups as general intelligence does for individuals? Or does group performance, instead, have some other correlational structure, such as several equally important but independent factors, as is typically found in research on individual personality (11)?

To answer this question, we randomly assigned individuals to groups and asked them to perform a variety of different tasks (12). In Study 1, 40 three-person groups worked together for up to 5 hours on a diverse set of simple group tasks plus a more complex criterion task. To guide our task sampling, we drew tasks from all quadrants of the McGrath Task Circumplex (6, 12), a well-established taxonomy of group tasks based on the coordination processes they require. Tasks included solving visual puzzles, brainstorming, making collective moral judgments, and negotiating over limited resources. At the beginning of each session, we measured team members’ individual intelligence. And, as a criterion task at the end of each session, each group played checkers against a standardized computer opponent.

The results support the hypothesis that a general collective intelligence factor (c) exists in groups. First, the average inter-item correlation for group scores on different tasks is positive (r = 0.28) (Table 1). Next, factor analysis of team scores yielded one factor with an initial eigenvalue accounting for more than 43% of the variance (in the middle of the 30 to 50% range typical in individual intelligence tests), whereas the next factor accounted for only 18%. Confirmatory factor analysis supported the fit of a single latent factor model to the data [χ2 = 1.66, P = 0.89, df = 5; comparative fit index (CFI) = 0.99, root mean square error of approximation (RMSEA) = 0.01]. Furthermore, when the factor loadings for different tasks on the first general factor are used to calculate a c score for each group, this score strongly predicts performance on the criterion task (r = 0.52, P = 0.01). Finally, the average and maximum intelligence scores of individual group members are not significantly correlated with c [r = 0.19, not significant (ns); r = 0.27, ns, respectively] and not predictive of criterion task performance (r = 0.18, ns; r = 0.13, ns, respectively). In a regression using both individual intelligence and c to predict performance on the criterion task, c has a significant effect (β = 0.51, P = 0.001), but average individual intelligence (β = 0.08, ns) and maximum individual intelligence (β = 0.01, ns) do not (Fig. 1).
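As a rough illustration of this kind of analysis (a minimal sketch using simulated data, not the study’s data or exact procedure), one could compute a factor-weighted composite as a stand-in for each group’s c score and regress criterion performance on it together with average member intelligence to obtain standardized betas like those in Fig. 1.

```python
import numpy as np

# Hypothetical inputs (not the study data): one row per group.
rng = np.random.default_rng(1)
n_groups, n_tasks = 40, 5
task_scores = rng.normal(size=(n_groups, n_tasks))        # standardized group task scores
loadings = rng.uniform(0.3, 0.7, size=n_tasks)            # loadings on the first general factor
criterion = task_scores @ loadings + 0.5 * rng.normal(size=n_groups)
avg_iq = rng.normal(size=n_groups)                         # average member intelligence

# A factor-weighted composite as a simple stand-in for the c score.
c_score = task_scores @ loadings

# Standardize predictors and outcome, then fit ordinary least squares, so the
# coefficients can be read as standardized betas.
def z(x):
    return (x - x.mean()) / x.std()

X = np.column_stack([np.ones(n_groups), z(c_score), z(avg_iq)])
betas, *_ = np.linalg.lstsq(X, z(criterion), rcond=None)
print("standardized beta for c:", betas[1].round(2))
print("standardized beta for average member intelligence:", betas[2].round(2))
```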

Table 1

Correlations among group tasks and descriptive statistics for Study 1. n = 40 groups; *P ≤ 0.05; **P ≤ 0.001.

Fig. 1

Standardized regression coefficients for collective intelligence (c) and average individual member intelligence when both are regressed together on criterion task performance in Studies 1 and 2 (controlling for group size in Study 2). The coefficient for maximum member intelligence is also shown for comparison; it was calculated in a separate regression because maximum member intelligence is too highly correlated with average individual member intelligence to incorporate both in a single analysis (r = 0.73 and 0.62 in Studies 1 and 2, respectively). Error bars, mean ± SE.

In Study 2, we used 152 groups ranging from two to five members. Our goal was to replicate these findings in groups of different sizes, using a broader sample of tasks and an alternative measure of individual intelligence. As expected, this study replicated the findings of Study 1, yielding a first factor explaining 44% of the variance and a second factor explaining only 20%. In addition, a confirmatory factor analysis suggests an excellent fit of the single-factor model with the data (χ2 = 5.85, P = 0.32, df = 5; CFI = 0.98, NFI = 0.89, RMSEA = 0.03).

In addition, for a subset of the groups in Study 2, we included five additional tasks, for a total of ten. The results from analyses incorporating all ten tasks were also consistent with the hypothesis that a general c factor exists (see Fig. 2). The scree test (13) clearly suggests that a one-factor model is the best fit for the data from both studies [Akaike Information Criterion (AIC) = 0.00 for single-factor solution]. Furthermore, parallel analysis (13) suggests that only factors with an eigenvalue above 1.38 should be retained, and there is only one such factor in each sample. These conclusions are supported by examining the eigenvalues both before and after principal axis extraction, which yields a first factor explaining 31% of the variance in Study 1 and 35% of the variance in Study 2. Multiple-group confirmatory factor analysis suggests that the factor structures of the two studies are invariant (χ2 = 11.34, P = 0.66, df = 14; CFI = 0.99, RMSEA = 0.01). Taken together, these results provide strong support for the existence of a single dominant c factor underlying group performance.
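The logic of parallel analysis can be sketched as follows (hypothetical data; the study’s own procedure is documented in its supporting material): eigenvalues computed from random data of the same dimensions as the observed score matrix serve as a baseline, and only observed factors whose eigenvalues exceed that baseline are retained.

```python
import numpy as np

def parallel_analysis(scores, n_sims=1000, seed=0):
    """Compare observed correlation-matrix eigenvalues to those of random data
    of the same shape; retain factors whose eigenvalues exceed the random mean."""
    rng = np.random.default_rng(seed)
    n_obs, n_vars = scores.shape
    observed = np.sort(np.linalg.eigvalsh(np.corrcoef(scores, rowvar=False)))[::-1]
    random_eigs = np.empty((n_sims, n_vars))
    for i in range(n_sims):
        sim = rng.normal(size=(n_obs, n_vars))
        random_eigs[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(sim, rowvar=False)))[::-1]
    threshold = random_eigs.mean(axis=0)
    return observed, threshold, int(np.sum(observed > threshold))

# Hypothetical group-by-task score matrix standing in for the study data.
rng = np.random.default_rng(2)
groups = 0.5 * rng.normal(size=(152, 1)) * np.ones((1, 10)) + rng.normal(size=(152, 10))
observed, threshold, n_retain = parallel_analysis(groups)
print("observed eigenvalues:", observed.round(2))
print("number of factors to retain:", n_retain)
```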

Fig. 2

Scree plot showing that the first factor from each study accounts for more than twice as much variance as any subsequent factor. A factor analysis of items from the Wonderlic Personnel Test of individual intelligence, administered to 642 individuals, is included as a comparison.

The criterion task used in Study 2 was an architectural design task modeled after a complex research and development problem (14). We had a sample of 63 individuals complete this task working alone, and under these circumstances, individual intelligence was a significant predictor of performance on the task (r = 0.33, P = 0.009).

When the same task was done by groups, however, the average individual intelligence of the group members was not a significant predictor of group performance (r = 0.18, ns). When both individual intelligence and c are used to predict group performance, c is a significant predictor (β = 0.36, P = 0.0001), but average group member intelligence (β = 0.05, ns) and maximum member intelligence (β = 0.12, ns) are not (Fig. 1).

If c exists, what causes it? Combining the findings of the two studies, the average intelligence of individual group members was moderately correlated with c (r = 0.15, P = 0.04), and so was the intelligence of the highest-scoring team member (r = 0.19, P = 0.008). However, for both studies, c was still a much better predictor of group performance on the criterion tasks than the average or maximum individual intelligence (Fig. 1).

We also examined a number of group and individual factors that might be good predictors of c. We found that many of the factors one might have expected to predict group performance—such as group cohesion, motivation, and satisfaction—did not.

However, three factors were significantly correlated with c. First, there was a significant correlation between c and the average social sensitivity of group members, as measured by the “Reading the Mind in the Eyes” test (15) (r = 0.26, P = 0.002). Second, c was negatively correlated with the variance in the number of speaking turns by group members, as measured by the sociometric badges worn by a subset of the groups (16) (r = –0.41, P = 0.01). In other words, groups where a few people dominated the conversation were less collectively intelligent than those with a more equal distribution of conversational turn-taking.

Finally, c was positively and significantly correlated with the proportion of females in the group (r = 0.23, P = 0.007). However, this result appears to be largely mediated by social sensitivity (Sobel z = 1.93, P = 0.03), because (consistent with previous research) women in our sample scored better on the social sensitivity measure than men [t(441) = 3.42, P = 0.001]. In a regression analysis with the groups for which all three variables (social sensitivity, speaking turn variance, and percent female) were available, all had similar predictive power for c, although only social sensitivity reached statistical significance (β = 0.33, P = 0.05) (12).
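The mediation logic behind the Sobel test can be made explicit with a short sketch (simulated data and hypothetical variable names; the study used its own measures): the test combines the path from the predictor to the mediator with the path from the mediator to the outcome, controlling for the predictor.

```python
import numpy as np

def ols_coef_se(X, y):
    """OLS coefficients and their standard errors (X must include an intercept column)."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    cov = (resid @ resid / (n - k)) * np.linalg.inv(X.T @ X)
    return beta, np.sqrt(np.diag(cov))

def sobel_z(x, mediator, y):
    """Sobel test: path a (x -> mediator) times path b (mediator -> y, controlling for x)."""
    ones = np.ones(len(x))
    a_beta, a_se = ols_coef_se(np.column_stack([ones, x]), mediator)
    b_beta, b_se = ols_coef_se(np.column_stack([ones, x, mediator]), y)
    a, sa, b, sb = a_beta[1], a_se[1], b_beta[2], b_se[2]
    return (a * b) / np.sqrt(b**2 * sa**2 + a**2 * sb**2)

# Hypothetical data: proportion of women, social sensitivity, and c for each group.
rng = np.random.default_rng(3)
prop_female = rng.uniform(0, 1, size=150)
sensitivity = 0.5 * prop_female + rng.normal(scale=0.5, size=150)
c = 0.6 * sensitivity + rng.normal(scale=0.5, size=150)
print("Sobel z:", round(sobel_z(prop_female, sensitivity, c), 2))
```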

These results provide substantial evidence for the existence of c in groups, analogous to a well-known similar ability in individuals. Notably, this collective intelligence factor appears to depend both on the composition of the group (e.g., average member intelligence) and on factors that emerge from the way group members interact when they are assembled (e.g., their conversational turn-taking behavior) (17, 18).

These findings raise many additional questions. For example, could a short collective intelligence test predict a sales team’s or a top management team’s long-term effectiveness? More importantly, it would seem to be much easier to raise the intelligence of a group than an individual. Could a group’s collective intelligence be increased by, for example, better electronic collaboration tools?

Many previous studies have addressed questions like these for specific tasks, but by measuring the effects of specific interventions on a group’s c, one can predict the effects of those interventions on a wide range of tasks. Thus, the ability to measure collective intelligence as a stable property of groups provides both a substantial economy of effort and a range of new questions to explore in building a science of collective performance.

Supporting Online Material

www.sciencemag.org/cgi/content/full/science.1193147/DC1

Materials and Methods

Tables S1 to S4

References

References and Notes

  12. Materials and methods are available as supporting material on Science Online.
  19. This work was made possible by financial support from the National Science Foundation (grant IIS-0963451), the Army Research Office (grant 56692-MA), the Berkman Faculty Development Fund at Carnegie Mellon University, and Cisco Systems, Inc., through their sponsorship of the MIT Center for Collective Intelligence. We would especially like to thank S. Kosslyn for his invaluable help in the initial conceptualization and early stages of this work and I. Aggarwal and W. Dong for substantial help with data collection and analysis. We are also grateful for comments and research assistance from L. Argote, E. Anderson, J. Chapman, M. Ding, S. Gaikwad, C. Huang, J. Introne, C. Lee, N. Nath, S. Pandey, N. Peterson, H. Ra, C. Ritter, F. Sun, E. Sievers, K. Tenabe, and R. Wong. The hardware and software used in collecting sociometric data are the subject of an MIT patent application and will be provided for academic research via a not-for-profit arrangement through A.P. In addition to the affiliations listed above, T.W.M. is also a member of the Strategic Advisory Board at InnoCentive, Inc.; a director of Seriosity, Inc.; and chairman of Phios Corporation.