Perspective | Applied Physics

Directing Data Center Traffic

Science 11 Oct 2013:
Vol. 342, Issue 6155, pp. 202-203
DOI: 10.1126/science.1242906


Summary

The widespread adoption of cloud computing has led to the construction of large-scale data centers hosting applications that serve millions of users. Underpinning these data centers are tens to hundreds of thousands of servers that communicate internally at server-to-server bandwidths orders of magnitude greater than those of their connections to end users. Today's data centers consist of racks of 20 to 40 discrete servers, each configured with 8 to 16 CPU cores, hundreds of gigabytes of memory, and potentially tens of terabytes of storage. To meet cost and energy scaling requirements, a new data center design will be required in which a rack of multiple, discrete servers, including the top-of-rack network switch, is integrated into a single chip (see the figure). These integrated “rack-on-chips” will be networked, internally and externally, with both optical circuit switching (to support large data flows) and electronic packet switching (to support high-priority data flows).
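
The hybrid fabric described above hinges on matching each flow to the switching technology that suits it: an optical circuit offers very high capacity but pays a reconfiguration delay, so it amortizes well over large, long-lived flows, while an electronic packet switch forwards immediately and suits small or latency-sensitive traffic. As a rough illustration of that dispatch decision, the following is a minimal Python sketch; the Flow fields, the select_fabric function, and the 100-MB threshold are hypothetical placeholders for this note, not anything specified in the article.

    from dataclasses import dataclass

    # Hypothetical cutoff: flows at or above this size are treated as
    # "large" and steered to the optical circuit switch. A real cutoff
    # would be tuned to circuit reconfiguration time and link rates.
    LARGE_FLOW_BYTES = 100 * 1024 * 1024  # 100 MB (illustrative value)

    @dataclass
    class Flow:
        src: str              # source rack or server
        dst: str              # destination rack or server
        size_bytes: int       # estimated or observed flow size
        high_priority: bool   # latency-sensitive traffic

    def select_fabric(flow: Flow) -> str:
        """Pick a switching fabric for a flow in a hybrid network.

        Large, long-lived flows amortize the setup cost of an optical
        circuit; small or high-priority flows go through the electronic
        packet switch, which forwards with no setup delay.
        """
        if flow.high_priority:
            return "electronic_packet_switch"
        if flow.size_bytes >= LARGE_FLOW_BYTES:
            return "optical_circuit_switch"
        return "electronic_packet_switch"

    if __name__ == "__main__":
        flows = [
            Flow("rack1", "rack7", 2 * 1024**3, high_priority=False),  # bulk transfer
            Flow("rack3", "rack2", 64 * 1024, high_priority=True),     # RPC traffic
        ]
        for f in flows:
            print(f.src, "->", f.dst, ":", select_fabric(f))

In practice, knowing a flow's size in advance is itself a hard problem; hybrid designs of this era typically estimated demand from traffic measurements rather than relying on explicit size labels as assumed here.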