Report

An Economics Approach to Hard Computational Problems


Science  03 Jan 1997:
Vol. 275, Issue 5296, pp. 51-54
DOI: 10.1126/science.275.5296.51

Abstract

A general method for combining existing algorithms into new programs that are unequivocally preferable to any of the component algorithms is presented. This method, based on notions of risk in economics, offers a computational portfolio design procedure that can be used for a wide range of problems. Tested by solving a canonical NP-complete problem, the method can be used for problems ranging from the combinatorics of DNA sequencing to the completion of tasks in environments with resource contention, such as the World Wide Web.

Extremely hard computational problems are pervasive in fields ranging from molecular biology to physics and operations research. Examples include determining the most probable arrangement of cloned fragments of a DNA sequence (1), the global minima of complicated energy functions in physical and chemical systems (2), and the shortest path visiting a given set of cities (3), to name a few. Because of the combinatorics involved, their solution times grow exponentially with the size of the problem (a basic trait of the so-called NP-complete problems), making it impossible to solve very large instances in reasonable times (4).

In response to this difficulty, a number of efficient heuristic algorithms have been developed. These algorithms, although not always guaranteed to produce a good solution or to finish in a reasonable time, often provide satisfactory answers fairly quickly. In practice, their performance varies greatly from one problem instance to another. In many cases, the heuristics involve randomized algorithms (5), giving rise to performance variability even across repeated trials on a single problem instance.

In addition to combinatorial search problems, there are many other computational situations where performance varies from one trial to another. For example, programs operating in large distributed systems or interacting with the physical world can have unpredictable performance because of changes in their environment. A familiar example is the action of retrieving a particular page on the World Wide Web. In this case, the usual network congestion leads to a variability in the time required to retrieve the page, raising the dilemma of whether to restart the process or wait.

In all of these cases, the unpredictable variation in performance can be characterized by a distribution describing the probability of obtaining each possible performance value. The mean or expected values of these distributions are usually used as an overall measure of quality (6, 7, 8, 9). We point out, however, that expected performance is not the only relevant measure of the quality of an algorithm. The variance of a performance distribution also affects the quality of an algorithm because it determines how likely it is that a particular run's performance will deviate from the expected one. This variance implies that there is an inherent risk associated with the use of such an algorithm, a risk that, in analogy with the economic literature, we will identify with the standard deviation of its performance distribution (10).

Risk is an important additional characteristic of algorithms because one may be willing to settle for a lower average performance in exchange for increased certainty in obtaining a reasonable answer. This situation is often encountered in economics when trying to maximize a utility that has an associated risk. It is usually dealt with by constructing mixed strategies that have desired risk and performance (11). In analogy with this approach, we here present a widely applicable method for constructing “portfolios” that combine different programs in such a way that a whole range of performance and risk characteristics become available. Significantly, some of these portfolios are unequivocally preferable to any of the individual component algorithms running alone. We verify these results experimentally on graph-coloring, a canonical NP-complete problem, and by constructing a restart strategy for access to pages on the Web.

To illustrate this method, consider a simple portfolio of two Las Vegas algorithms, which, by definition, always produce a correct solution to a problem but with a distribution of solution times (5). Let t1 and t2 denote the random solution times of the two algorithms, with distributions p1(t) and p2(t). For simplicity, we focus on the case of discrete distributions, although our method applies to continuous distributions as well. The portfolio is constructed simply by letting both algorithms run concurrently but independently on a serial computer. Let f1 denote the fraction of clock cycles allocated to algorithm 1 and f2 = 1 − f1 be the fraction allocated to the other. As soon as one of the algorithms finds a solution, the run terminates. Thus, the solution time t is a random variable related to those of the individual algorithms by

t = min(t1/f1, t2/f2)     (1)
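As a concrete illustration of Eq. 1, the following Python sketch (ours, not the authors' code) simulates a single run of such a two-algorithm portfolio; the sampler functions stand in for draws from p1(t) and p2(t), and the bimodal distribution used later in the text serves as a placeholder example.

```python
import random

def portfolio_time(draw_t1, draw_t2, f1):
    """One simulated run of the two-algorithm portfolio (Eq. 1).

    draw_t1 and draw_t2 are stand-in samplers for the solution-time
    distributions p1(t) and p2(t); f1 is algorithm 1's share of the
    clock cycles (0 < f1 < 1).
    """
    f2 = 1.0 - f1
    t1, t2 = draw_t1(), draw_t2()
    # Each algorithm receives only a fraction of the cycles, so its
    # finishing time is stretched by 1/fraction; the run ends as soon
    # as the faster of the two (in stretched time) finds a solution.
    return min(t1 / f1, t2 / f2)

# Illustrative sampler: solve in 1 step with probability 0.7, otherwise
# in 10 steps (the bimodal case analyzed below).
draw = lambda: 1.0 if random.random() < 0.7 else 10.0
runs = [portfolio_time(draw, draw, f1=0.5) for _ in range(100_000)]
print("Monte Carlo estimate of <t>:", sum(runs) / len(runs))
```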

The resulting portfolio algorithm is characterized by the probability distribution p(t) that it finishes in a particular time t. This probability is given by the probability that both constituent algorithms finish in time ≥ t minus the probability that both algorithms finish in time > t:

p(t) = Pr(t1/f1 ≥ t) Pr(t2/f2 ≥ t) − Pr(t1/f1 > t) Pr(t2/f2 > t)     (2)

Each value of f1 therefore corresponds to a distribution whose mean and variance can be calculated. By varying f1 from 0 to 1 and defining the risk σ as the standard deviation of this distribution, σ = √(〈t²〉 − 〈t〉²), a curve in the plane of expected solution time 〈t〉 versus risk can be determined.

The simplest algorithm that exhibits interesting behavior is one with two possible solution times; that is, the algorithm can solve a problem in time t = 1 with probability p or in time t = T with probability 1 − p. By letting two independent instances of such an algorithm run, we obtain, using the above results, the risk versus expected time curve shown in Fig. 1 for particular values of p and T.

Fig. 1.

Expected solution time versus risk for the case of two independent algorithms with identical discrete bimodal distributions. Here p = 0.7 and T = 10. Each point on the curve represents the risk and performance of the “portfolio” algorithm as the fraction f1 ranges from 0 to 1: endpoint A marks f1 = 0 or 1 (a single algorithm running alone), and endpoint B marks f1 = f2 = ½. The solid segment corresponds to the efficient frontier. Note that the shorter the expected solution time, the higher the performance, and vice versa.
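For the parameters of Fig. 1 (p = 0.7, T = 10), the curve can be computed by direct enumeration. The following Python sketch (our illustration, not the authors' code) builds the portfolio's discrete distribution over the joint support of the two independent algorithms, which is equivalent to Eq. 2, and then sweeps f1 to trace out the expected time versus risk curve.

```python
from collections import defaultdict

def portfolio_distribution(p1, p2, f1):
    """Discrete distribution of t = min(t1/f1, t2/(1 - f1)) (Eqs. 1 and 2).

    p1 and p2 map solution times to probabilities for two independent
    algorithms; f1 is algorithm 1's share of the clock cycles.
    """
    f2 = 1.0 - f1
    dist = defaultdict(float)
    for t1, q1 in p1.items():
        for t2, q2 in p2.items():
            dist[min(t1 / f1, t2 / f2)] += q1 * q2  # independence
    return dict(dist)

def mean_and_risk(dist):
    """Expected solution time <t> and risk (standard deviation) of p(t)."""
    mean = sum(t * q for t, q in dist.items())
    var = sum(t * t * q for t, q in dist.items()) - mean * mean
    return mean, max(var, 0.0) ** 0.5

# Bimodal example from the text: t = 1 with probability p, t = T otherwise.
p, T = 0.7, 10.0
bimodal = {1.0: p, T: 1.0 - p}
for f1 in (0.1, 0.2, 0.3, 0.4, 0.5):
    mean, sigma = mean_and_risk(portfolio_distribution(bimodal, bimodal, f1))
    print(f"f1 = {f1:.1f}   <t> = {mean:5.2f}   risk = {sigma:5.2f}")
```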

There are several features of this curve that are worth noting. First, endpoint A corresponds to one algorithm running alone (f1 = 1 or 0), and point B corresponds to both algorithms sharing computer time equally (f1 = f2 = ½). Second, there exists a regime, the efficient frontier (indicated by the solid segment of the curve), defined by the fact that for every point not on it, there is at least one point on the efficient frontier that is preferable, that is, one with lower risk, higher performance, or both. Once this efficient frontier is determined, one can choose the desired performance-risk combination on it and calculate the corresponding fraction of computer cycles to be allocated to algorithm 1. This calculation can be done by plotting both the expected solution time and the risk as a function of f1 (Fig. 2).

Fig. 2.

Risk (solid) and expected solution time (dashed) versus fraction f1 for the situation depicted in Fig. 1.
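Numerically, the efficient frontier can be extracted by discarding every (risk, expected time) point that is dominated by another point with both values no larger and at least one strictly smaller. The sketch below (ours, with purely illustrative numbers) shows one way to do this.

```python
def efficient_frontier(points):
    """Keep only the non-dominated (risk, expected_time, f1) triples."""
    frontier = []
    for risk, mean, f1 in points:
        dominated = any(
            r <= risk and m <= mean and (r < risk or m < mean)
            for r, m, _ in points
        )
        if not dominated:
            frontier.append((risk, mean, f1))
    return sorted(frontier)

# Illustrative numbers only (not data from the paper): each triple is
# (risk, expected solution time, f1), e.g. produced by sweeping f1 as above.
toy = [(4.1, 3.7, 1.00), (3.4, 3.0, 0.25), (3.6, 3.0, 0.30), (5.2, 3.6, 0.50)]
print(efficient_frontier(toy))   # only the non-dominated points remain
```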

We point out that the efficient frontier for this distribution persists as T is reduced from 10 down to 3.5. This lower bound corresponds to a value of the ratio σ/〈t〉 = 0.65 for the component distribution; this ratio, known as the Sharpe ratio in the finance literature, indicates the kinds of distributions for which this approach is useful.
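As a quick arithmetic check of the quoted threshold (our calculation, not code from the paper), the ratio σ/〈t〉 for a single bimodal algorithm with p = 0.7 indeed evaluates to about 0.65 at T = 3.5:

```python
p, T = 0.7, 3.5
mean = p * 1.0 + (1.0 - p) * T            # <t>
second = p * 1.0 + (1.0 - p) * T ** 2     # <t^2>
sigma = (second - mean ** 2) ** 0.5
print(sigma / mean)                       # ~0.65, the value quoted in the text
```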

Because Eq. 2 applies to any discrete distribution and can readily be generalized to continuous ones, this procedure also applies to more complicated situations. For example, by extending Eq. 2 to include the minimum of more than two random variables, the portfolio approach can be used in the case of many algorithms, each with their own fractional share of computer cycles. In that case, varying the fractions allocated to each algorithm produces a two-dimensional region in the risk-expected time plane, rather than a single curve. The efficient frontier is then a subset of the boundary of that region.
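A sketch of this generalization (ours, assuming independent discrete distributions): with cycle shares f_i summing to 1, the portfolio time is the minimum over i of t_i/f_i, and its distribution follows from enumerating the joint support.

```python
from collections import defaultdict
from itertools import product

def portfolio_distribution_n(dists, fractions):
    """Distribution of min over i of t_i / f_i for independent algorithms.

    dists: list of dicts mapping solution time -> probability.
    fractions: matching list of cycle shares summing to 1.
    """
    out = defaultdict(float)
    for combo in product(*(d.items() for d in dists)):
        prob, t = 1.0, float("inf")
        for (ti, qi), fi in zip(combo, fractions):
            prob *= qi
            t = min(t, ti / fi)
        out[t] += prob
    return dict(out)

# Example: three copies of the bimodal algorithm with unequal cycle shares.
bimodal = {1.0: 0.7, 10.0: 0.3}
dist = portfolio_distribution_n([bimodal] * 3, [0.5, 0.3, 0.2])
print(sorted(dist.items()))
```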

To test this portfolio approach with more realistic distribution functions, we experimented with the often studied NP-complete problem of graph-coloring (12, 13, 14). This problem consists of a graph (a collection of points or nodes, some of which are connected by straight edges), a specified number of colors, and the requirement to assign a color to each node in the graph such that no two nodes linked by an edge have the same color. The average degree of the graph γ (the average number of edges coming from a node in the graph) is a convenient parameter describing the amount of constraint in the problem and determines the typical behavior of a variety of search methods (15). Here, we focus on the case of three-coloring with γ = 3.5, which has been shown to exhibit a large variance of finishing times over the class of such graphs (16).

We used a complete, depth-first backtracking search based on the Brelaz heuristic (13), which assigns the most constrained nodes first (that is, those with the most distinctly colored neighbors), breaking ties by choosing nodes with the most uncolored neighbors (with any remaining ties broken randomly). For each node, the smallest numbered color consistent with the previous assignments was chosen first, with successive choices made when the search was forced to backtrack. This search method is guaranteed to terminate eventually, either having found a valid coloring or having correctly concluded that no such coloring exists for the graph. Thus, it is a Las Vegas algorithm.

We first produced a probability distribution of solution times (Fig. 3) by running the Brelaz heuristic algorithm repeatedly on a graph-coloring instance (16). Because the distribution shows a large variability in solution time, we expect that our method will make accessible points in the risk-expected solution time plane that are preferable to the one point accessible by running the heuristic by itself.
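The sketch below (our own simplified implementation, not the code used in the paper, and run here on a much smaller random graph than the 100-node instances of Fig. 3) illustrates a Brelaz-style backtracking three-coloring search and the way repeated runs with different random seeds yield a distribution of search costs.

```python
import random
from collections import Counter

def brelaz_color(adj, n_colors=3, rng=random):
    """Depth-first backtracking coloring with Brelaz-style node ordering.

    adj maps each node to the set of its neighbors. Returns (coloring or
    None, number of search steps). Ordering: most distinctly colored
    neighbors first, then most uncolored neighbors, remaining ties random.
    """
    coloring, steps = {}, 0

    def choose_node():
        best, best_key = None, None
        for v in adj:
            if v in coloring:
                continue
            sat = len({coloring[u] for u in adj[v] if u in coloring})
            deg = sum(1 for u in adj[v] if u not in coloring)
            key = (sat, deg, rng.random())   # random tie-breaking
            if best_key is None or key > best_key:
                best, best_key = v, key
        return best

    def search():
        nonlocal steps
        steps += 1
        if len(coloring) == len(adj):
            return True
        v = choose_node()
        for c in range(n_colors):            # smallest numbered color first
            if all(coloring.get(u) != c for u in adj[v]):
                coloring[v] = c
                if search():
                    return True
                del coloring[v]              # backtrack
        return False

    found = search()
    return (dict(coloring) if found else None), steps

def random_graph(n, gamma, rng):
    """Random graph on n nodes with roughly n * gamma / 2 edges."""
    adj = {v: set() for v in range(n)}
    target_edges = int(n * gamma / 2)
    while sum(len(s) for s in adj.values()) // 2 < target_edges:
        a, b = rng.sample(range(n), 2)
        adj[a].add(b)
        adj[b].add(a)
    return adj

# Repeated randomized runs on one instance give an empirical distribution
# of search costs, analogous to (but far smaller than) Fig. 3.
g = random_graph(30, 3.5, random.Random(0))
costs = Counter()
for seed in range(100):
    _, steps = brelaz_color(g, rng=random.Random(seed))
    costs[steps] += 1
print(sorted(costs.items())[:10])
```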

Fig. 3.

A measured probability distribution for the Brelaz heuristic used on a particular 100-node graph-coloring problem with connectivity γ = 3.5. The distribution is the result of searching the graph 10,000 times with different random seeds.

Application of Eq. 2 to two independent instances of this algorithm produces the expected solution time versus risk curve shown in Fig. 4. The functional similarity between this curve and that in Fig. 1 is apparent, confirming the predictions of our simplified example. In this case, the efficient frontier is in the range 0.013 < f1 < 0.060. The low end of this range locates the point of minimum risk, and the high end locates the point of maximum performance. The smallness of f1 in this case shows that only a slight “mixing” of strategies is required to provide benefits because of the relatively high probability that the algorithm will find the solution fairly quickly. These results confirm the benefit of the portfolio: as the graph shows, performance can be increased by about 30%, at reduced risk, when the combined algorithm is used.

Fig. 4.

Experimental performance versus risk curve for graph-coloring by a search method based on the Brelaz heuristic. This curve was produced by varying f1 from 0 to 1. For each fraction, the risk and expected solution time were calculated from the distribution in Fig. 3 by using Eq. 2. Both axes are shown in units of 10³ time steps. The expected performance and risk for the heuristic running alone are indicated by point A; point B indicates f1 = ½.

We also tested whether an efficient portfolio constructed for this particular graph would be effective on other graphs whose distributions of solution times were unknown. Using a portfolio with f1 = 0.013 on 20 randomly chosen graphs yielded average speed and risk improvements of 22 and 10%, respectively.

Having established the utility of a portfolio approach when the component algorithms have highly variable performance, we point out that independent studies of a variety of NP-complete problems, using very different search algorithms, have discovered similar distributions in the context of phase transitions in search (15, 17). It has also been pointed out that any algorithm that performs a depth-first backtracking search through a hierarchical tree will have a highly extended distribution of performance because early, high-level choices can decide immediately whether a particular run will take a short time or a much longer time (7, 9). This variability in performance suggests that it is possible to predict when a particular instance of a heuristic is likely to have the right properties for this approach to be useful, thus making the approach broadly applicable.

So far we have assumed that the component algorithms are completely independent of each other and do not communicate. They can be thought of as “competing” with one another for machine resources. However, allowing for cooperation or dependencies among the individual algorithms while they are running simultaneously can improve performance (18, 19). This possibility raises the interesting question of the extent to which our economics approach to portfolio algorithms can also benefit from cooperation. Basically, cooperation will introduce statistical correlations between the performance of the individual algorithms, and we will accordingly define the correlation between them as

ρ = cov(t1, t2)/(σ1σ2)     (3)

where cov(t1, t2) denotes the covariance of the performance of the two algorithms and σ1 and σ2 denote their standard deviations. The effect of cooperation, when manifested in negative correlations, is to increase the performance as well as reduce the risk (Fig. 5). This change is easily understood in that negative correlations mean that one algorithm is particularly good precisely on those cases that are more difficult for the other one, and vice versa. This allows the portfolio, which terminates as soon as the first algorithm completes, to work even better than when the individual algorithms are independent. In the case of the graph-coloring problem, cooperation can be implemented by allowing an algorithm to use incomplete assignments of colors to nodes, posted to a common memory by another algorithm, as a “hint” in its own further explorations (19).
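To illustrate the effect quantitatively, the following sketch (our own model, not the paper's; the joint distribution below is just one simple way to impose a given ρ on the bimodal example) computes the portfolio's mean and risk for negatively correlated, independent, and positively correlated components.

```python
def correlated_portfolio(p=0.7, T=10.0, rho=0.0, f1=0.5):
    """Mean and risk of a two-algorithm portfolio whose bimodal components
    (t = 1 with probability p, t = T otherwise) have correlation rho (Eq. 3).
    The joint law below is one simple construction with that correlation."""
    f2, q = 1.0 - f1, 1.0 - p
    joint = {
        (1.0, 1.0): p * p + rho * p * q,
        (1.0, T):   p * q * (1.0 - rho),
        (T, 1.0):   p * q * (1.0 - rho),
        (T, T):     q * q + rho * p * q,
    }
    dist = {}
    for (t1, t2), prob in joint.items():
        t = min(t1 / f1, t2 / f2)            # Eq. 1
        dist[t] = dist.get(t, 0.0) + prob
    mean = sum(t * w for t, w in dist.items())
    var = sum(t * t * w for t, w in dist.items()) - mean ** 2
    return mean, max(var, 0.0) ** 0.5

# Negative correlation lowers both the expected time and the risk,
# positive correlation raises both (compare Fig. 5).
for rho in (-0.4, 0.0, 0.4):
    mean, risk = correlated_portfolio(rho=rho)
    print(f"rho = {rho:+.1f}   <t> = {mean:5.2f}   risk = {risk:5.2f}")
```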

Fig. 5.

Effect of cooperation among algorithms. For the case of two algorithms with the discrete bimodal distribution of solution times studied above, the correlation between the two distributions was varied to model the effect of cooperation between them. For values of ρ ranging from −0.42 at the lower left to 0.42 at the upper right, the efficient frontiers are plotted. The entire risk-expected solution time curve corresponding to ρ = 0 from Fig. 1 is superimposed.

This economics approach, emphasizing risk as well as expected performance, has applicability far beyond the solution of NP-complete problems with heuristics, for it addresses any problems that involve variability in performance. In the example of the World Wide Web, one can use a restart strategy where one collects access time statistics, which play the same role as time series in financial markets. The data can then be used to generate performance versus risk curves that specify how to resolve the dilemma of either restarting a request that is taking a long time or waiting in case the Web page will appear in the next few seconds. We tested this scheme by collecting access times for a periodically requested page on the Web. The results show that there are particular periods during the day when the distribution of the access times undergoes qualitative changes. During low congestion periods, the distribution has a relatively small variance in access time, and during high congestion periods, the distribution of access times has a larger variance with an extended tail. Using data from the high congestion period, we varied the time before a restart and found that although expected access time under such a strategy could only be reduced slightly, the standard deviation or risk was reduced by nearly 20%.
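A minimal sketch of such a restart strategy (ours; the sample access times below are purely illustrative, not the measurements described in the text): if a request has not completed within a chosen cutoff, it is abandoned and reissued, and the resulting mean and risk can be estimated from an empirical distribution of raw access times.

```python
import random

def restart_stats(access_times, cutoff, n_trials=50_000, seed=1):
    """Estimated mean and risk of total access time under a restart policy:
    abandon and reissue any request that has not completed within `cutoff`.
    access_times is an empirical sample of raw (no-restart) access times."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n_trials):
        total = 0.0
        while True:
            t = rng.choice(access_times)
            if t <= cutoff:
                total += t
                break
            total += cutoff              # give up after `cutoff` and restart
        totals.append(total)
    mean = sum(totals) / len(totals)
    var = sum((x - mean) ** 2 for x in totals) / len(totals)
    return mean, var ** 0.5

# Illustrative heavy-tailed sample (seconds): mostly quick, with a long tail.
sample = [0.5] * 70 + [2.0] * 20 + [15.0] * 10
for cutoff in (1.0, 3.0, 20.0):          # 20.0 effectively disables restarts
    mean, risk = restart_stats(sample, cutoff)
    print(f"cutoff = {cutoff:5.1f}   <t> = {mean:5.2f}   risk = {risk:5.2f}")
```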

Another interesting extension of this methodology is the possibility of dynamically changing the strategy online, so that it can either adapt to changing conditions or optimally exploit information gained over time about an unknown environment. For example, suppose one wishes to use a portfolio strategy with two identical algorithms whose distributions are, however, unknown. This situation is described by Eq. 2 when p1(t) = p2(t) is unknown. With a maximum likelihood estimate of p1(t) (20), it is possible to optimally exploit observations of p(t) while dynamically adjusting f1 in order to get progressively better estimates of p1(t) and thereby converge on an efficient value of f1 as more information is received.
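A much-simplified sketch of such an adaptive scheme (ours; unlike the text, it estimates p1 from occasional stand-alone runs rather than from the censored observations of p(t), and the risk-aversion weight and candidate grid for f1 are arbitrary choices): the empirical estimate of p1(t) is refined as samples accumulate, and f1 is periodically re-optimized against it using Eqs. 1 and 2.

```python
import random
from collections import Counter

def best_fraction(p_hat, candidates, risk_aversion=1.0):
    """f1 minimizing <t> + risk_aversion * sigma on the estimated p1."""
    def score(f1):
        f2, dist = 1.0 - f1, {}
        for t1, q1 in p_hat.items():
            for t2, q2 in p_hat.items():
                t = min(t1 / f1, t2 / f2)          # Eqs. 1 and 2
                dist[t] = dist.get(t, 0.0) + q1 * q2
        mean = sum(t * q for t, q in dist.items())
        var = sum(t * t * q for t, q in dist.items()) - mean ** 2
        return mean + risk_aversion * max(var, 0.0) ** 0.5
    return min(candidates, key=score)

# The "unknown" environment: the bimodal algorithm used earlier in the text.
rng = random.Random(42)
true_draw = lambda: 1.0 if rng.random() < 0.7 else 10.0
counts, f1 = Counter(), 0.5
for run in range(1, 501):
    counts[true_draw()] += 1                       # one observed solution time
    if run % 100 == 0:                             # periodically re-optimize f1
        total = sum(counts.values())
        p_hat = {t: c / total for t, c in counts.items()}
        f1 = best_fraction(p_hat, [i / 20 for i in range(1, 20)])
        print(f"after {run} samples: f1 = {f1:.2f}")
```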

An important generalization is provided by the way Monte Carlo algorithms for optimization are usually implemented. The time it takes to find an acceptable move in an optimization problem has the characteristics of a Las Vegas-type algorithm's solution time, so the portfolio approach applies there as well. We tested this idea by using a simulated annealing algorithm on a vector-quantization problem associated with clustering and found a speed improvement of 5 to 10 times (20).

A more exotic example of the applicability of this approach is the construction of a portfolio for a general database search that exploits the properties of quantum computation (21). Because the probability distribution of search times in such cases is known beforehand (22), the methods presented here can be used to optimize the tradeoff between risk and expected search time.

Given the present trend toward electronic commerce and the explosive use of the Internet, this economics framework can play an important role in allocating computational resources, and thus money, to complete tasks efficiently.

REFERENCES AND NOTES

1.
2.
3.
4.
5.
6.
7.
8.
9.
10.
11.
12.
13.
14.
15.
16.
17.
18.
19.
20.
21.
22.
23.