English, 1079 pages, 2019
Table of contents :
Introduction
Karger's min-cut algorithm
The algorithm
Analysis
The Karger–Stein algorithm
The algorithm
Analysis
Classical Median Finding Algorithms
Find Median With High Probability
Analysis
Runtime Analysis
Failure Probability
Bernoulli Random Variables and Chebyshev's Inequality
Proofs
Chernoff Bounds
Markov and Chebyshev Inequality
Chernoff "argument" for Bin(n, 1/2)
Chernoff Inequality
Warmup example: Median estimate
Problem definition
Solution
Analysis
Streaming
Problem definition
Solution
Analysis
Example: Frequency Moments
F2 Estimation
Distinct Elements
Pairwise Independent
Simple Construction
Hashing
Application: Streaming
The AMS algorithm
Analysis of Algorithm
Space Complexity
Failure Probability
Maximal Independent Sets
Expected Running Time
Derandomizing MIS
History and Open Questions
Data Streaming
The algorithm
Preliminaries: The Data Stream Model
The Basic Setup
The Quality of an Algorithm's Answer
Variations of the Basic Setup
Finding Frequent Items Deterministically
The Problem
Frequency Estimation: The Misra–Gries Algorithm
Analysis of the Algorithm
Finding the Frequent Items
Exercises
Estimating the Number of Distinct Elements
The Problem
The Tidemark Algorithm
The Quality of the Algorithm's Estimate
The Median Trick
Exercises
A Better Estimate for Distinct Elements
The Problem
The BJKST Algorithm
Analysis: Space Complexity
Analysis: The Quality of the Estimate
Optimality
Exercises
Approximate Counting
The Problem
The Algorithm
The Quality of the Estimate
The Median-of-Means Improvement
Exercises
Finding Frequent Items via (Linear) Sketching
The Problem
Sketches and Linear Sketches
CountSketch
The Quality of the Basic Sketch's Estimate
The Final Sketch
The CountMin Sketch
The Quality of the Algorithm's Estimate
Comparison of Frequency Estimation Methods
Exercises
Estimating Frequency Moments
Background and Motivation
The (Basic) AMS Estimator for Fk
Analysis of the Basic Estimator
The Final Estimator and Space Bound
The Soft-O Notation
The Tug-of-War Sketch
The Basic Sketch
The Quality of the Estimate
The Final Sketch
A Geometric Interpretation
Exercises
Estimating Norms Using Stable Distributions
A Different ℓ2 Algorithm
Stable Distributions
The Median of a Distribution and its Estimation
The Accuracy of the Estimate
Annoying Technical Details
Sparse Recovery
The Problem
Special case: 1-sparse recovery
Analysis: Correctness, Space, and Time
General Case: s-sparse Recovery
Exercises
Weight-Based Sampling
The Problem
The ℓ0-Sampling Problem
An Idealized Algorithm
The Quality of the Output
ℓ2 Sampling
An ℓ2-Sampling Algorithm
Analysis
Finding the Median
The Problem
Preliminaries for an Algorithm
The Munro–Paterson Algorithm
Computing a Core
Utilizing a Core
Analysis: Pass/Space Tradeoff
Exercises
Geometric Streams and Coresets
Extent Measures and Minimum Enclosing Ball
Coresets and Their Properties
A Coreset for MEB
Data Stream Algorithm for Coreset Construction
Exercises
Metric Streams and Clustering
Metric Spaces
The Cost of a Clustering: Summarization Costs
The Doubling Algorithm
Metric Costs and Threshold Algorithms
Guha's Cascading Algorithm
Space Bounds
The Quality of the Summary
Exercises
Graph Streams: Basic Algorithms
Streams that Describe Graphs
Semi-Streaming Space Bounds
The Connectedness Problem
The Bipartiteness Problem
Shortest Paths and Distance Estimation via Spanners
The Quality of the Estimate
Space Complexity: High-Girth Graphs and the Size of a Spanner
Exercises
Finding Maximum Matchings
Preliminaries
Maximum Cardinality Matching
Maximum Weight Matching
Exercises
Graph Sketching
The Value of Boundary Edges
Testing Connectivity Using Boundary Edges
Testing Bipartiteness
The AGM Sketch: Producing a Boundary Edge
Counting Triangles
A Sampling-Based Algorithm
A Sketch-Based Algorithm
Exercises
Communication Complexity and a First Look at Lower Bounds
Communication Games, Protocols, Complexity
Specific Two-Player Communication Games
Definitions
Results and Some Proofs: Deterministic Case
More Proofs: Randomized Case
Data Streaming Lower Bounds
Lower Bound for Majority
Lower Bound for Frequency Estimation
Exercises
Further Reductions and Lower Bounds
The Importance of Randomization
Multi-Pass Lower Bounds for Randomized Algorithms
Graph Problems
The Importance of Approximation
Space versus Approximation Quality
Exercises
Derandomization of an Algorithm
Maximal Independent Set Algorithm
Sequential Algorithm
Parallel Algorithm
Proof with Pairwise Independence
Maximal Independent Sets
Expected Running Time
Derandomizing MIS
History and Open Questions
Maximum Satisfiability (MAXSAT)
Problem Statement
Random Assignment
Derandomization
Randomized Rounding
Integer (Linear) Programming
Reduction
Algorithm
A Third Algorithm
MaxCut Problem
Simple 1/2-Approximation Algorithm
MaxCut as an Integer Linear Programming (ILP) Problem
Semidefinite Programming
Summary
Polynomial identity testing
Matrix multiplication
Polynomial Equality Testing
Perfect Matching
Preliminaries in number theory
RSA encryption algorithm
Correctness
Implementation
Primality testing
Fermat witness
Primality Test when n is not Carmichael
Correctness
Miller–Rabin Primality Test
Correctness
Dimension Reduction
The Johnson–Lindenstrauss lemma
The construction
Some properties of Gaussians
The proof
The proof, this time for real
The Expectation
Concentration about the Mean
Using Random Signs instead of Gaussians
Concentration Around the Mean via Subgaussianness
Compressive Sensing
The Curse of Dimensionality in the Nearest Neighbor Problem
Point of Lecture
Role Model: Fingerprints
L2 Distance and Random Projections
The HighLevel Idea
Review: Gaussian Distributions
Step 1: Unbiased Estimator of Squared L2 Distance
Step 2: The Magic of Independent Trials
The Johnson–Lindenstrauss Transform
Jaccard Similarity and MinHash
The HighLevel Idea
MinHash
A Glimpse of Locality Sensitive Hashing
Generic PrimalDual Algorithm
2(1 − 1/k)-approximation algorithm for the Steiner tree problem
Steiner Tree Problem
Primal
Dual
Algorithm
Analysis
Lovász Local Lemma
Lemma 1
Lemma 2
Lovász Local Lemma
Asymmetric Lovász Local Lemma
Algorithmic Lovász Local Lemma
Witness trees
Proof of Algorithmic Lovász Local Lemma
Introduction
Execution logs and witness trees
Random generation of witness trees
Analyzing the parallel algorithm
A deterministic variant
The Lopsided Local Lemma
Conclusion
Introduction
Bloom Filters
Bloom Filter Algorithms for Operations
Bloom Filter Querying Analysis
Bloom Filter Time Complexities
Cuckoo Hashing
Cuckoo Filter Algorithms for Operations
Cuckoo Hashing Analysis
Conclusion
#DNF
Problem Statement
The Monte Carlo Method
Applying the Monte Carlo Method to #DNF
Choosing a Better Sample Space
The Network Unreliability Problem
Problem Statement
Overview
pC n4
Enumerating small cuts
A FPRAS for #DNF
pC n4
Putting it all together
Open problems
2SAT
A randomized algorithm
Analysis
Markov Chain
Definitions
Properties
PageRank
Problem Statement
Sampling and Counting
Canonical Path
Example: Random walk on hypercube
CS 6550: Randomized algorithms
Spring 2019
Lecture 1: Karger's min-cut algorithm and the Karger–Stein algorithm, January 8, 2019. Lecturer: Eric Vigoda
Scribes: Yingjie Qian, John Lambert
Disclaimer: These notes have not been subjected to the usual scrutiny reserved for formal publications.
1.1 Introduction

Definition 1.1 Given an undirected graph G = (V, E), let S ⊆ V and S̄ = V \ S. We define the cut (S, S̄) to be the set of all edges with one end in S and the other in S̄. For simplicity we use the notation δ(S) = {(v, w) ∈ E : v ∈ S, w ∈ S̄}.

In this lecture, we want to solve the min-cut problem: given G = (V, E), find S ⊂ V such that |δ(S)| is minimum. One easy way is to use max-flow. Recall the min s,t-cut problem: given G and two vertices s, t ∈ V, find a cut (S, S̄) with s ∈ S, t ∈ S̄, minimizing |δ(S)|. To solve min-cut from min s,t-cut, fix s ∈ V and run it with each of the other n − 1 vertices as t. To solve min s,t-cut, we can reduce it to max s,t-flow and use the Edmonds–Karp algorithm. We refer the reader to the CS 6505 notes for details.

In this lecture, we present Karger's min-cut algorithm [1] in Section 1.2.1 and the Karger–Stein algorithm [2] in Section 1.3.1. We then analyze their running times and success probabilities in Sections 1.2.2 and 1.3.2, respectively.
1.2 Karger's min-cut algorithm

1.2.1 The algorithm

In this section, we first introduce a new operation on graphs called contraction; then we give some intuition before formally presenting Karger's min-cut algorithm.

Given a graph G = (V, E) and an edge e = (v, w) ∈ E, contracting e yields a new graph, denoted G/e:
1. Replace vertices v and w by a new vertex y.
2. For every edge (v, z) ∈ E in the original graph G, replace it by (y, z); likewise, replace every edge (w, z) ∈ E by (y, z).
3. Drop all edges (v, w) ∈ E.

Note that G/e is a multigraph without self-loops. We have the following observations on edge contractions. For a graph G = (V, E) and e = (v, w) ∈ E, let G′ = G/e. Every cut in G′ corresponds to a cut in G: for a subset S ⊂ V containing both endpoints of e, δ_G(S) = δ_{G′}(S).

Now consider contracting a sequence of edges e1, e2, ..., e_{n−2}. Since each contraction decreases the number of vertices by one, two big meta-vertices are left at the end. The (multi-)edges between those two vertices represent the cut (S′, S̄′) in G, where the vertices of S′ and S̄′ were contracted into the two meta-vertices respectively.

Fix a cut (S*, S̄*) of minimum size |δ(S*)| = k. We want all of the contracted edges e1, e2, ..., e_{n−2} ∉ δ(S*). A randomized algorithm simply guesses which edges to contract: the minimum cut is small, so a random edge of the whole graph is unlikely to lie in it. Below is the formal algorithm.
Algorithm 1: Karger's min-cut algorithm
input : Graph G = (V, E).
1  n ← |V(G)|, C ← E;
2  for i ← 1 to ℓ·(n choose 2) do
3      G′ ← G;
4      repeat
5          Choose a uniformly random edge e ∈ E(G′);
6          G′ ← G′/e;
7      until |V(G′)| = 2;
8      if |C| ≥ |E(G′)| then
9          C ← E(G′);
output: C.
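The contraction loop of Algorithm 1 can be sketched in Python. This is a minimal illustration, not the notes' implementation: meta-vertices are tracked with a union-find structure, so choosing a uniformly random original edge and rejecting those that have become self-loops is equivalent to choosing a uniformly random edge of the contracted multigraph (parallel edges remain distinct list entries).

```python
import random

def karger_one_run(n, edges, rng):
    """One run: contract random edges until two meta-vertices remain;
    return the size of the resulting cut."""
    parent = list(range(n))

    def find(v):
        # union-find representative of v's meta-vertex, with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    vertices = n
    while vertices > 2:
        u, v = rng.choice(edges)
        ru, rv = find(u), find(v)
        if ru != rv:          # skip self-loops (edges inside one meta-vertex)
            parent[ru] = rv   # contract: merge the two meta-vertices
            vertices -= 1
    # surviving edges = edges whose endpoints lie in different meta-vertices
    return sum(1 for u, v in edges if find(u) != find(v))

def karger_min_cut(n, edges, runs, seed=0):
    """Best cut over many independent runs, as in Algorithm 1."""
    rng = random.Random(seed)
    return min(karger_one_run(n, edges, rng) for _ in range(runs))
```

For example, on two triangles joined by a single bridge edge, enough runs find the bridge as a cut of size 1.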
1.2.2 Analysis
We can use adjacency lists or an adjacency matrix to represent the graph. A single edge contraction can then be implemented with a linear number of updates to the data structure, so contracting a graph down to two vertices takes O(n²) time, as we contract n − 2 edges. The total running time of the algorithm (for constant ℓ) is therefore O(n⁴).

Now we show that the success probability of the algorithm satisfies

    P* := Pr(algorithm outputs the global min cut) ≥ 1 − e^{−ℓ}.

Fix S* ⊂ V such that the cut (S*, S̄*) has minimum size k. It suffices to show that for a single run,

    P := Pr(algorithm outputs the min cut (S*, S̄*) in a single run) ≥ 1/(n choose 2).

Suppose we can prove this inequality. Since we run the algorithm ℓ(n choose 2) times and output the best cut found, the probability that all ℓ(n choose 2) runs miss the min cut is (1 − P)^{ℓ(n choose 2)}. Thus

    P* ≥ 1 − (1 − P)^{ℓ(n choose 2)} ≥ 1 − e^{−P·ℓ(n choose 2)} ≥ 1 − e^{−ℓ}.

We are left to show P ≥ 1/(n choose 2). By the observation above,

    P = Pr(e1, e2, ..., e_{n−2} ∉ δ(S*)).

First, Pr(e1 ∉ δ(S*)) = 1 − k/|E(G)|. We claim that G has minimum degree k: otherwise G would have a cut of size smaller than k, namely the set of all edges incident to a fixed vertex of minimum degree. Then |E(G)| = (1/2) Σ_{v∈V(G)} deg(v) ≥ kn/2. Thus

    Pr(e1 ∉ δ(S*)) = 1 − k/|E(G)| ≥ 1 − k/(kn/2) = 1 − 2/n.

G/e1 still has minimum degree k; otherwise G/e1 would have a cut of size smaller than k, which corresponds to a cut of size smaller than k in G. Then |V(G/e1)| = n − 1 and |E(G/e1)| ≥ k(n−1)/2. Thus

    Pr(e2 ∉ δ(S*) | e1 ∉ δ(S*)) ≥ 1 − k/(k(n−1)/2) = 1 − 2/(n−1).

Similarly, for i = 3, 4, ..., n − 2,

    Pr(ei ∉ δ(S*) | ej ∉ δ(S*) ∀ j < i) ≥ 1 − k/(k(n−i+1)/2) = 1 − 2/(n−i+1).

Thus

    P = Pr(e1, e2, ..., e_{n−2} ∉ δ(S*))
      = Pr(e1 ∉ δ(S*)) × Pr(e2 ∉ δ(S*) | e1 ∉ δ(S*)) × Pr(e3 ∉ δ(S*) | e1, e2 ∉ δ(S*)) × ...
      ≥ (1 − 2/n)(1 − 2/(n−1))(1 − 2/(n−2)) ··· (1 − 2/3)
      = (n−2)/n × (n−3)/(n−1) × (n−4)/(n−2) × ··· × 2/4 × 1/3
      = 2/(n(n−1))
      = 1/(n choose 2),

as desired.
Note that the algorithm also implies that for any graph G, there are at most (n choose 2) minimum cuts.

1.3 The Karger–Stein algorithm
Consider the calculations in the last section. During the sequence of edge contractions, the probability of choosing a safe edge is high at the beginning but low near the end, when only a small number of vertices remain. The Karger–Stein algorithm uses the same general framework as Karger's min-cut algorithm but repeats the "later" part of the contraction sequence more times.
1.3.1 The algorithm

Suppose we contract edges from n vertices down to l vertices, twice. We can choose l so that the probability of success of each such partial run is about 1/2, so in expectation one of the two runs works. We then recurse on each of the two resulting graphs, take the best of the two answers, and pop back up through the recursion.

Now we choose l. Similar to the previous calculations, the probability that a fixed min cut survives contraction from n vertices down to l vertices is at least

    (n−2)/n × (n−3)/(n−1) × ··· × (l−1)/(l+1) = l(l−1)/(n(n−1)) = (l choose 2)/(n choose 2).

Then (l choose 2)/(n choose 2) ≥ 1/2 by taking l = n/√2 + 1. Below is the algorithm:
Algorithm 2: The Karger–Stein algorithm: KargerStein(G)
input : Graph G with n vertices.
1  if n ≥ 6 then
2      Do 2 runs and output the best of the 2:
3          Use Karger's min-cut algorithm to contract edges until n/√2 + 1 vertices are left; call the result G′;
4          Recurse: KargerStein(G′).
Note that 6 is chosen in the algorithm so that 6/√2 + 1 ≥ 2.

1.3.2 Analysis
First, let us calculate the new running time for running the algorithm once. We have the recurrence

    T(n) = 2T(n/√2) + O(n²).

By the Master Theorem, T(n) = O(n² log n).

Now we can also write a recurrence relation for the success probability. The probability of success when contracting from n vertices down to n/√2 + 1 is no smaller than one half. Thus

    P(n) ≥ 1 − (1 − (1/2)P(n/√2 + 1))².

One can easily verify by induction that P(n) = Ω(1/ln n). So if we do O(log² n) runs, the success probability is at least 1 − 1/poly(n). Hence the total running time of the algorithm is O(n² log³ n).
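The recursion above can be sketched as follows. This is a hedged simplification of Algorithm 2 (the base case brute-forces small graphs with a few full contractions instead of the exact n < 6 rule), with all helper names our own; the state of a partial contraction is a union-find forest plus the count of remaining meta-vertices.

```python
import math
import random

def _find(parent, v):
    while parent[v] != v:
        parent[v] = parent[parent[v]]
        v = parent[v]
    return v

def _contract(parent, k, edges, target, rng):
    """Contract random non-self-loop edges (mutating `parent`)
    until `target` meta-vertices remain; k = current count."""
    while k > target:
        u, v = rng.choice(edges)
        ru, rv = _find(parent, u), _find(parent, v)
        if ru != rv:
            parent[ru] = rv
            k -= 1

def _cut_size(parent, edges):
    return sum(1 for u, v in edges if _find(parent, u) != _find(parent, v))

def karger_stein(parent, k, edges, rng):
    """Recursive Karger-Stein on a partially contracted state."""
    if k <= 6:  # base case (simplified): a few full contractions, keep the best
        best = math.inf
        for _ in range(10):
            p = parent[:]
            _contract(p, k, edges, 2, rng)
            best = min(best, _cut_size(p, edges))
        return best
    target = int(k / math.sqrt(2)) + 1
    best = math.inf
    for _ in range(2):  # two independent partial contractions, then recurse
        p = parent[:]
        _contract(p, k, edges, target, rng)
        best = min(best, karger_stein(p, target, edges, rng))
    return best

def min_cut(n, edges, runs=30, seed=0):
    rng = random.Random(seed)
    return min(karger_stein(list(range(n)), n, edges, rng) for _ in range(runs))
```

For example, on two copies of K4 joined by a single bridge edge, the minimum cut has size 1.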
References

[1] D. R. Karger. Global min-cuts in RNC, and other ramifications of a simple min-cut algorithm. In Proceedings of the Fourth Annual ACM-SIAM Symposium on Discrete Algorithms, 1993.
[2] D. R. Karger and C. Stein. A new approach to the minimum cut problem. Journal of the ACM, 43(4):601–640, 1996.
Lecture Notes on Karger's Min-Cut Algorithm. Eric Vigoda, Georgia Institute of Technology. Last updated for 7530 – Randomized Algorithms, Spring 2010.
Today's topic: Karger's min-cut algorithm.

Let G = (V, E) be an undirected graph. Let n = |V|, m = |E|. For S ⊂ V, the set δ(S) = {(u, v) ∈ E : u ∈ S, v ∈ S̄} is a cut, since removing these edges from G disconnects G into more than one component.

Goal: find the cut of minimum size.

Closely related is the minimum s-t-cut problem. In this problem, for specified vertices s and t we restrict attention to cuts δ(S) where s ∈ S, t ∉ S. Traditionally, the min-cut problem was solved by solving n − 1 min-s-t-cut problems. In the min-s-t-cut problem we are given as input two vertices s and t, and our aim is to find the set S with s ∈ S and t ∉ S which minimizes the size of the cut (S, S̄), i.e., |δ(S)|. The size of the min s-t-cut equals the value of the max s-t-flow (equivalent by linear programming duality). The fastest algorithm for solving max s-t-flow runs in O(nm log(n²/m)) time [1]. In fact, all n − 1 max s-t-flow computations can be done simultaneously within the same time bounds [2].

Karger [3] devised a simple, clever algorithm that solves the min-cut problem (without the s-t condition) without using any max-flow computations. A refinement of Karger's algorithm, due to Karger and Stein [4], is also faster than the above approach using max s-t-flow computations. We will present Karger's algorithm, followed by the refinement.

The basic operation in the algorithm is to contract edges, which is defined below. The algorithm starts with a simple graph as input, but then needs to consider multigraphs. In a multigraph there may be multiple edges between a pair of vertices. We do not allow edges of the form (v, v) in our multigraphs; we refer to such edges as "self-loops".
The key observation is that if we contract an edge (u, v) then we preserve those cuts where u and v are both in S or both in S. Observation 2 Let e = (u, v) ∈ E. There is a onetoone correspondence between cuts in G which do not contain any edge (u, v), and cuts in G/e. In fact, for S ⊂ V such that u, v ∈ S, δG (S) = δG/e (S) (with w substituted for u and v). The idea of the algorithm is to contract n − 2 edges and then two vertices remain. These two vertices correspond to a partition (S, S) of the original graph, and the edges remaining in the two vertex graph 1
Based on scribe notes first prepared by Tom Hayes at the University of Chicago in the winter quarter, 2003.
correspond to δ(S) in the original input graph. So we output this cut δ(S) as our guess for the minimum cut of the original input graph.

What edges do we contract? If we never contract edges from a minimum cut δ(S*), then, by Observation 2, this is the cut the algorithm ends up with. Since δ(S*) is of minimum size, it has relatively few edges, so if we contract a random edge it turns out that we have a reasonable probability of preserving this min cut δ(S*). Here is the formal algorithm.

Karger's min-cut algorithm: Starting from the input graph G = (V, E), repeat the following process until only two vertices remain:
1. Choose an edge e = (u, v) uniformly at random from E.
2. Set G = G/e.

These final two vertices correspond to sets S, S̄ of vertices in the original graph, and the edges remaining in the final graph correspond to the edges in δ(S), a cut of the original graph.

Claim: This process has a reasonable chance of ending at a minimum cut of the original graph. Fix some minimum cut δ(S) in the original graph, with |δ(S)| = k.

Lemma 3 Let δ(S) be a cut of minimum size of the graph G = (V, E). Then

    Pr(Karger's algorithm ends with the cut δ(S)) ≥ 1/(n choose 2).
Proof: Denote the edges we contract in the algorithm by e1, e2, ..., e_{n−2}. The algorithm succeeds if none of the contracted edges lie in δ(S). To lower bound the probability that the algorithm succeeds in the first contraction, i.e., that e1 ∉ δ(S), we need to lower bound the number of edges in the input graph G in terms of k. Note that the minimum degree of G is at least k; otherwise we would have a cut of size smaller than k, since we can disconnect a vertex from the rest of the graph by removing all edges incident to it. This implies that the original input graph G has at least nk/2 edges.

By Observation 2, since every cut in an intermediate multigraph corresponds to a cut of the original graph, we have the following observation.

Observation 4 The minimum degree in every intermediate multigraph is at least k. Otherwise, the edges incident to a (meta-)vertex of degree smaller than k would correspond to a cut of size < k in the original graph.

After j contractions, the multigraph, denoted G_j, contains n − j vertices, since we lose one vertex per contraction. Observation 4 then implies that G_j has at least (n − j)k/2 edges. We can now compute the probability that the algorithm successfully finds our specific minimum cut δ(S). For this, none of the contracted edges may lie in δ(S):

    Pr(final graph = δ(S)) = Pr(e1, e2, ..., e_{n−2} ∉ δ(S))
      = Pr(e1 ∉ δ(S)) · ∏_{j=1}^{n−3} Pr(e_{j+1} ∉ δ(S) | e1, ..., ej ∉ δ(S))
      ≥ ∏_{j=0}^{n−3} (1 − k/((n−j)k/2))
      = (n−2)/n × (n−3)/(n−1) × ··· × 2/4 × 1/3
      = 2/(n(n−1))
      = 1/(n choose 2).
In order to boost the probability of success, we simply run the algorithm ℓ(n choose 2) times. The probability that at least one run succeeds is at least

    1 − (1 − 1/(n choose 2))^{ℓ(n choose 2)} ≥ 1 − e^{−ℓ}.
Setting ℓ = c ln n gives error probability ≤ 1/n^c. It is easy to implement Karger's algorithm so that one run takes O(n²) time. Therefore, we have an O(n⁴ log n) time randomized algorithm with error probability 1/poly(n).

A faster version of this algorithm was devised by Karger and Stein [4]. The key idea comes from looking at the telescoping product: in the initial contractions it is very unlikely that we contract an edge of the minimum cut; towards the end of the algorithm, the probability of contracting an edge in the minimum cut grows. From the earlier analysis, for a fixed minimum cut δ(S), the probability that this cut survives down to ℓ vertices is at least (ℓ choose 2)/(n choose 2). Thus, for ℓ = n/√2 we have probability ≥ 1/2 of succeeding; hence, in expectation, two trials should suffice.

Improved algorithm: From a multigraph G, if G has at least 6 vertices, repeat twice:
1. run the original algorithm down to n/√2 + 1 vertices;
2. recurse on the resulting graph.
Return the minimum of the cuts found in the two recursive calls.

The choice of 6, as opposed to some other constant, only affects the running time by a constant factor. We can easily compute the running time via the following recurrence (which is straightforward to solve, e.g., by the standard Master theorem):

    T(n) = 2(n² + T(n/√2)) = O(n² log n).

Since we succeed down to n/√2 vertices with probability ≥ 1/2, we have the following recurrence for the probability of success, denoted P(n):

    P(n) ≥ 1 − (1 − (1/2)P(n/√2 + 1))².

This solves to P(n) = Ω(1/log n). Hence, similar to the earlier argument for the original algorithm, with O(log² n) runs of the algorithm the probability of success is ≥ 1 − 1/poly(n). Therefore, in O(n² log³ n) total time, we can find the minimum cut with probability ≥ 1 − 1/poly(n).

Before finishing, we observe an interesting corollary of Karger's original algorithm which we will use in the next lecture to estimate the (un)reliability of a network.
Corollary 5 Any graph has at most O(n²) minimum cuts.

This follows from Lemma 3, since it holds for any specified minimum cut. Note that we can also enumerate all of these cuts with the above algorithm.
References

[1] A. V. Goldberg and R. E. Tarjan. A new approach to the maximum-flow problem. J. Assoc. Comput. Mach., 35(4):921–940, 1988.
[2] J. Hao and J. B. Orlin. A faster algorithm for finding the minimum cut in a graph. In Proceedings of the Third Annual ACM-SIAM Symposium on Discrete Algorithms, pages 165–174, 1992.
[3] D. R. Karger. Global min-cuts in RNC, and other ramifications of a simple min-cut algorithm. In Proceedings of the Fourth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 21–30, 1993.
[4] D. R. Karger and C. Stein. A new approach to the minimum cut problem. J. ACM, 43(4):601–640, 1996.
CS 6550: Randomized Algorithms
Spring 2019
Lecture 2: Median Finding January 10, 2019 Lecturer: Eric Vigoda
Scribes: Daniel Hathcock, Shyamal Patel
Disclaimer: These notes have not been subjected to the usual scrutiny reserved for formal publications.
2.1 Classical Median Finding Algorithms

Given an unsorted list of n numbers S = [S1, ..., Sn], find the median (n is assumed odd for convenience). There are many algorithms to do this:

• Easy: sort in O(n log n) time, output the middle element.
• Non-trivial: the median-of-medians algorithm deterministically finds the median in O(n) time via divide and conquer, originally by Blum et al. [2].
• Trivial (randomized): the quickselect algorithm finds the median in O(n) expected time.
• Non-trivial (randomized): the main algorithm presented here finds the median in O(n) time with high probability (whp).

Definition 2.1 An event A_n which depends on some parameter n occurs with high probability (whp) if

    P(A_n) ≥ 1 − 1/n^c

for some c > 0; or sometimes, alternatively, if P(A_n) ≥ 1 − o(1).

The randomized quickselect approach to finding the median of S is to choose some "good" pivot p, then partition S into S_{<p} and S_{>p}, and recurse, using the sizes of the parts to determine which contains the median. What does it mean to have a good pivot? |S_{<p}| ≤ (3/4)n and |S_{>p}| ≤ (3/4)n. If this is the case, then the running time satisfies the recurrence

    T(n) ≤ T((3/4)n) + O(n),

which is O(n) by the master theorem. (Note that 3/4 is arbitrary; any constant larger than 1/2 and less than 1 is fine.)

To find a good pivot, choose one at random. Half of the elements are good pivots, so in expectation we will have to choose two pivots. Since the pivot is good in expectation after only O(1) trials, the algorithm finishes in expected O(n) time. However, if we want O(n) time with high probability instead, we need a different algorithm.
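The quickselect procedure just described can be sketched in Python (the three-way partition and the function names are our own choices, not the notes'):

```python
import random

def quickselect(S, k, rng=random):
    """Return the k-th smallest element (1-indexed) of S,
    in O(n) expected time via a uniformly random pivot."""
    assert 1 <= k <= len(S)
    p = rng.choice(S)                    # random pivot: "good" w.p. >= 1/2
    less = [x for x in S if x < p]
    equal = [x for x in S if x == p]
    greater = [x for x in S if x > p]
    if k <= len(less):                   # median lies in the small part
        return quickselect(less, k, rng)
    if k <= len(less) + len(equal):      # pivot itself has rank k
        return p
    return quickselect(greater, k - len(less) - len(equal), rng)

def median(S, rng=random):
    return quickselect(S, (len(S) + 1) // 2, rng)
```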
2.2 Find Median With High Probability

Idea: Suppose m is the median. Choose some ℓ and u such that ℓ ≤ m ≤ u, and both ℓ and u are "close" to m. Consider the set

    C = {x ∈ S : ℓ ≤ x ≤ u},  with |C| = o(n / log n).
C contains the median by our assumption on ℓ and u, and it is small enough to sort efficiently. So we sort C and find the ((n+1)/2 − |S_{<ℓ}|)-th smallest element of C, which is the median of S. Here S_{<ℓ} = {x ∈ S : x < ℓ} and S_{>u} = {x ∈ S : x > u}.

Algorithm 1: Find median whp
input : An unsorted list S of n distinct numbers.
1  R ← sample of n^{3/4} elements chosen uniformly at random from S, with replacement;
2  sort R;
3  ℓ ← the (n^{3/4}/2 − √n)-th smallest element of R;
4  u ← the (n^{3/4}/2 + √n)-th smallest element of R;
5  compute C = {x ∈ S : ℓ ≤ x ≤ u}, S_{<ℓ}, and S_{>u};
6  if |S_{<ℓ}| ≥ n/2 then output FAIL;
7  if |S_{>u}| ≥ n/2 then output FAIL;
8  if |C| > 4n^{3/4} then output FAIL;
9  else sort C;
10 output the ((n+1)/2 − |S_{<ℓ}|)-th smallest element of C.
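Under the reconstruction above (a sample of n^{3/4} elements and rank offsets ±√n in the sorted sample), the algorithm can be sketched in Python. This is an illustration, not the notes' exact pseudocode: it simply retries when a run outputs FAIL, which happens with probability O(n^{−1/4}).

```python
import math
import random

def median_whp(S, seed=0):
    """Median of a list of distinct numbers, via the sampling algorithm;
    retries whenever one of the FAIL checks fires."""
    rng = random.Random(seed)
    n = len(S)
    t = int(n ** 0.75)                      # sample size n^{3/4}
    s = int(math.sqrt(n))
    while True:
        R = sorted(rng.choice(S) for _ in range(t))   # with replacement
        lo = R[max(0, t // 2 - s)]          # l: rank t/2 - sqrt(n) in R
        hi = R[min(t - 1, t // 2 + s)]      # u: rank t/2 + sqrt(n) in R
        C = sorted(x for x in S if lo <= x <= hi)
        below = sum(1 for x in S if x < lo)   # |S_{<l}|
        above = sum(1 for x in S if x > hi)   # |S_{>u}|
        if below >= n / 2 or above >= n / 2 or len(C) > 4 * t:
            continue                        # FAIL: resample
        # ((n+1)/2 - |S_{<l}|)-th smallest element of C (1-indexed)
        return C[(n + 1) // 2 - below - 1]
```

Whenever the checks pass, the returned element is exactly the median: below < n/2 and above < n/2 force ℓ ≤ m ≤ u, so m ∈ C at sorted position (n+1)/2 − below.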
Suppose the algorithm does not output FAIL. Then |S_{<ℓ}| < n/2 and |S_{>u}| < n/2, so ℓ ≤ m ≤ u; hence m ∈ C and, by the rank computation, the algorithm outputs the median. It remains to bound the probability of the failure events:

    E1 : |S_{<ℓ}| ≥ n/2,
    E2 : |S_{>u}| ≥ n/2,
    E3 : |C| > 4n^{3/4}.

If |C| > 4n^{3/4}, then we must have either |{x ∈ C : x ≥ m}| > 2n^{3/4} or |{x ∈ C : x ≤ m}| > 2n^{3/4}. We will show that

    P(|{x ∈ C : x ≥ m}| > 2n^{3/4}) < 1/(4n^{1/4}).

Notice that if |{x ∈ C : x ≥ m}| > 2n^{3/4}, then u must be one of the n/2 − 2n^{3/4} largest elements of S. Hence at least n^{3/4}/2 − √n elements of R, namely the elements of R that are larger than u, are among the n/2 − 2n^{3/4} largest elements of S. So now let

    Xi = 1 if ri is at least the (n/2 + 2n^{3/4})-th smallest element of S, and Xi = 0 otherwise.

Then

    E[Xi] = (n/2 − 2n^{3/4})/n = 1/2 − 2/n^{1/4}.

If we again define Y = Σ_{i=1}^{n^{3/4}} Xi, then it follows that

    E[Y] = n^{3/4}(1/2 − 2/n^{1/4}) = n^{3/4}/2 − 2√n,
    Var(Y) = n^{3/4}(1/2 − 2/n^{1/4})(1/2 + 2/n^{1/4}) = n^{3/4}/4 − 4n^{1/4} < n^{3/4}/4.

So by Chebyshev's Inequality we have

    P(Y ≥ n^{3/4}/2 − √n) ≤ P(|Y − E[Y]| ≥ √n) ≤ Var(Y)/n < n^{3/4}/(4n) = 1/(4n^{1/4}).

By an analogous argument, P(|{x ∈ C : x ≤ m}| > 2n^{3/4}) < 1/(4n^{1/4}). So by a union bound, P(E3) < 1/(2n^{1/4}).
Lemma 3.1 (Markov Inequality) Let X be a non-negative random variable with µ = E[X]. For any a > 0,

    Pr(X > a) ≤ µ/a.

Lemma 3.2 (Chebyshev Inequality) Let X be a random variable for which Var(X) exists. Then for all k > 0,

    Pr(|X − µ| > kσ) ≤ 1/k².

A more general form: for all r > 0,

    Pr(|X − µ| > r) ≤ Var(X)/r².

Proof: Note that Y = (X − µ)² is a non-negative random variable, so we can apply the Markov Inequality to Y.
Proof: Note that Y = (X − µ)2 is a nonnegative random variable, we can then apply the Markov Inequality to Y . Note that Chebyshev does not always give a good bound. We give an example. Let ( 1 with probability 21 Xi = 0 with probability 12 Pn Let X = i=1 Xi , then from previous lecture we know E[X] = n2 and Var(X) = Note for n = 1000, X = Bin(1000, 21 ) by the Chebyshev Inequality we have, Pr(X ≥ 750) =
1 1 250 Pr(X − 500 ≥ 250) ≤ = 0.002 2 2 2502
We can calcultate this probability directly,
Pr(X ≥ 70) =
1000 X i=750
1000 −1000 2 ≈ 60 × 10−58 i
Note that the Chebyshev Inequality is significantly off. 31
n 4,
√
with σ =
n 2 .
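We can check both numbers exactly with big-integer arithmetic (a quick sanity check, not part of the notes):

```python
from math import comb

n = 1000
# exact tail probability Pr(X >= 750) for X ~ Bin(1000, 1/2)
tail = sum(comb(n, i) for i in range(750, n + 1)) / 2 ** n
# one-sided Chebyshev estimate: (1/2) * Var(X) / 250^2 = 0.002
chebyshev = 0.5 * (n / 4) / 250 ** 2
print(tail, chebyshev)
```

Python's integer division handles the astronomically large numerator and denominator exactly before rounding to a float, so `tail` is accurate even though it is far below `chebyshev`.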
Lecture 3: Chernoff Bounds

3.1.2 Chernoff "argument" for Bin(n, 1/2)
Note that if X = Bin(n, 1/2), with µ = E[X] = n/2, Chernoff bounds give

    Pr(X ≥ µ + t√n/2) ≤ e^{−t²/2},   (*)
    Pr(X ≤ µ − t√n/2) ≤ e^{−t²/2}.

We first argue (*) to show the intuition behind the general Chernoff bound.

Proof: We first want to transform X = X1 + ··· + Xn so that it has mean 0. Let

    Yi = 2Xi − 1 = +1 with probability 1/2, −1 with probability 1/2.

Then E[Y] = 0, and since Var(Yi) = 1, we have Var(Y) = n. Note that Y1 + ··· + Yn can be interpreted as an unbiased random walk on the integers starting at 0. The original bound we wanted was X ≥ n/2 + t√n/2, which (since Y = 2X − n) is equivalent to Y ≥ t√n.

When we are far away from 0, adding 1 can be approximated by instead multiplying by (1 + λ) for some tiny λ: if λ is small enough, then (1 + λ)(1 + λ) = 1 + 2λ + λ² ≈ 1 + 2λ. Let Zi = (1 + λ)^{Yi}, where we will choose λ later:

    Zi = 1 + λ with probability 1/2, 1/(1 + λ) with probability 1/2.

Note then that Z = Z1 · Z2 ··· Zn = (1 + λ)^{Y1} ··· (1 + λ)^{Yn} = (1 + λ)^Y. What we have done is transform the random walk: if the random walk was at u, in the new model it is at (1 + λ)^u. Since Z is a non-negative random variable, we can now utilize the Markov Inequality; and since the Yi are independent, so are the Zi. It follows that

    Pr(X ≥ n/2 + t√n/2) = Pr(Y ≥ t√n) = Pr(Z ≥ (1 + λ)^{t√n}).

For example, by a smart choice of λ and a Taylor series approximation, 1 + λ ≈ e^{1/√n}, so

    Pr(Z ≥ (1 + λ)^{100√n}) = Pr(Z ≥ e^{100}).

Note that e^{100} is a big number, thus the Markov Inequality gives a good bound. To make things rigorous,

    E[Zi] = (1/2)(1 + λ) + (1/2)·1/(1 + λ) = (1/2)·(λ² + 2λ + 2)/(1 + λ) = 1 + λ²/(2 + 2λ) ≤ 1 + λ²/2.

It follows that E[Z] ≤ (1 + λ²/2)^n. Then, for λ = t/√n,

    Pr(Z ≥ (1 + λ)^{t√n}) ≤ E[Z]/(1 + λ)^{t√n}
                          ≤ (1 + λ²/2)^n/(1 + λ)^{t√n}
                          = (1 + t²/(2n))^n/(1 + t/√n)^{t√n}
                          ≤ e^{t²/2}/e^{t²} = e^{−t²/2},

where we are cheating on the denominator of the last ≤ inequality, since (1 + t/√n)^{t√n} is slightly smaller than e^{t²}.
3.1.3 Chernoff Inequality

Now that we have given an intuition for the proof, we can move to the rigorous proof.

Let X1, ..., Xn be independent Bernoulli random variables, let X = Σ_{i=1}^n Xi, and let µ = E[X]. For all 0 < ε ≤ 1,

    Pr(X ≥ µ(1 + ε)) ≤ e^{−µε²/3},
    Pr(X ≤ µ(1 − ε)) ≤ e^{−µε²/2}.

We want to bound Pr(X ≥ µ(1 + ε)). Since X ≥ µ(1+ε) exactly when e^{tX} ≥ e^{tµ(1+ε)} for any t > 0, we may exponentiate both sides with an arbitrary t > 0 and apply Markov's inequality:

    Pr(X ≥ µ(1 + ε)) = Pr(e^{tX} ≥ e^{tµ(1+ε)}) ≤ E[e^{tX}]/e^{tµ(1+ε)}.   (A)

Because Xi = Bernoulli(pi) and 1 + x ≤ e^x,

    E[e^{tXi}] = pi e^t + (1 − pi) = 1 + pi(e^t − 1) ≤ e^{pi(e^t − 1)}.

Then, by independence, the moment generating function satisfies

    E[e^{tX}] ≤ ∏_{i=1}^n e^{pi(e^t − 1)} = e^{µ(e^t − 1)}.   (B)

Substituting (B) into (A), and plugging in t = ln(1 + ε) to minimize:

    Pr(e^{tX} ≥ e^{tµ(1+ε)}) ≤ (e^{e^t − 1}/e^{t(1+ε)})^µ = (e^{ε − (1+ε) log(1+ε)})^µ.   (C)

By the Taylor expansion log(1 + ε) = ε − ε²/2 + ε³/3 − ···, we get

    (1 + ε) log(1 + ε) = ε − ε²/2 + ε² + ε³/3 − ε³/2 + ··· ≥ ε + ε²/2 − ε³/6 ≥ ε + ε²/3,

where the last step uses ε ≤ 1. Using this in (C):

    (e^ε/(1 + ε)^{1+ε})^µ ≤ e^{−ε²µ/3}.
CS 6550: Randomized Algorithms
Spring 2019
Lecture 4: Streaming: Frequency moments January 17, 2019 Lecturer: Eric Vigoda
Scribes: Mengfei Yang, Yatharth Dubey
Disclaimer: These notes have not been subjected to the usual scrutiny reserved for formal publications.

Theorem 4.1 (Chernoff bounds) Let X1, X2, ..., Xn be independent random variables with 0 ≤ Xi ≤ 1. Let X = Σ_{i=1}^n Xi and µ = E[X]. Then for 0 ≤ δ ≤ 1,

    Pr[X ≥ (1 + δ)µ] ≤ e^{−δ²µ/3},
    Pr[X ≤ (1 − δ)µ] ≤ e^{−δ²µ/2}.

4.1 Warmup example: Median estimate

4.1.1 Problem definition
Definition 4.2 (ε-approximate median) Given an unordered list S = [X1, X2, ..., Xm] (for simplicity, assume the Xi are distinct), the rank of y is given by

    rank(y) = |{x ∈ S : x ≤ y}|.

The goal is to find an ε-approximate median of S. That is, given ε > 0, find y ∈ S where

    m/2 − εm < rank(y) < m/2 + εm.
4.1.2
Solution
The intuition is choose some random elements from the list, and output the median of these elements. Then prove this median is − approximate median. algorithm 1 select t ≥ 22 log 1δ random elements from S, then sort these random elements and output the median. Algorithm 1: Find median input : An unordered list S = [X1 , X2 , ..., Xm ]. output: One integer represents the median 1 2 3
R = [r1 , r2 , ..., rt ] ←−choose t random elements from S; sort(R); return median(R)
4.1.3 Analysis
Claim 4.3 Let p be the median returned by Algorithm 1. Then

    Pr[p is an ε-approximate median] ≥ 1 − δ.

This means p is an (ε, δ)-approximation of the median.

Proof: Divide S into 3 parts:

    S_L = {y ∈ S : rank(y) ≤ m/2 − εm}
    S_M = {y ∈ S : m/2 − εm < rank(y) < m/2 + εm}
    S_U = {y ∈ S : rank(y) ≥ m/2 + εm}

If both |R ∩ S_L| < t/2 and |R ∩ S_U| < t/2 hold, then p = r_{t/2} ∈ S_M, which means p is an ε-approximate median. We will only show Pr[|R ∩ S_L| ≥ t/2] ≤ δ/2; the other inequality follows by an analogous argument.

Let the random variable X_i indicate whether element r_i belongs to S_L, and let X be the sum of the X_i:

    X_i = 1 if r_i ∈ S_L, and 0 otherwise;    X = Σ_{i=1}^t X_i.

Then

    E[X_i] = (m/2 − εm)/m = 1/2 − ε,    µ = E[X] = t(1/2 − ε).

Now we can use Chernoff bounds:

    Pr[X ≥ t/2] = Pr[X ≥ µ + εt]
                ≤ Pr[X ≥ µ(1 + 2ε)]        (since 2εµ = εt(1 − 2ε) ≤ εt)
                ≤ e^{−(2ε)²(1/2 − ε)t/3}
                ≤ δ/2,

where the last inequality follows from the choice t ≥ (2/ε²) log(1/δ) (for ε a sufficiently small constant). Hence,

    Pr[|R ∩ S_L| ≥ t/2] ≤ δ/2.

Similarly,

    Pr[|R ∩ S_U| ≥ t/2] ≤ δ/2,

so by a union bound

    Pr[|R ∩ S_L| ≥ t/2 or |R ∩ S_U| ≥ t/2] ≤ δ/2 + δ/2 = δ,

and therefore
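Algorithm 1 can be sketched in a few lines of Python (our own transcription, sampling with replacement; the function name and use of Python's random module are illustrative):

```python
import math
import random

def approx_median(S, eps, delta, rng=random):
    """Algorithm 1: sample t >= (2/eps^2) log(1/delta) elements of S
    (with replacement) and return the median of the sample."""
    t = math.ceil((2 / eps**2) * math.log(1 / delta))
    R = sorted(rng.choice(S) for _ in range(t))
    return R[len(R) // 2]
```

By Claim 4.3, the returned element is an ε-approximate median with probability at least 1 − δ.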
4.2 Streaming

4.2.1 Problem definition
Definition 4.4 (Streaming) We receive one by one m elements X_1, X_2, ..., X_m, where X_i ∈ {1, 2, ..., n} (elements may repeat). m is huge, so we cannot store the entire stream. Let f_i be the frequency of the number i in the stream, and set f = (f_1, f_2, ..., f_n).

Definition 4.5 (Reservoir Sampling) Choose an element S uniformly at random from {X_1, X_2, ..., X_m} without knowing m beforehand.

The problem is: given a function g(f_i) with g(0) = 0, compute Σ_{i=1}^n g(f_i).
4.2.2 Solution

Algorithm 2 solves the Reservoir Sampling problem; a detailed analysis is given in a later section.

Algorithm 2: Reservoir Sampling
input : streaming elements X_1, X_2, ...
output: one randomly chosen element.
1  set S ← X_1;
2  for t > 1 do
3      upon seeing the t-th element X_t, with probability 1/t set S = X_t
4  return S

To compute Σ_{i=1}^n g(f_i), we introduce an unbiased estimator: a random variable X where

    E[X] = Σ_{i=1}^n g(f_i).

Algorithm 3 is the AMS algorithm [AMS]; it shows how to calculate X.

Algorithm 3: AMS algorithm
input : streaming elements X_1, X_2, ...
output: Integer X
1  use Reservoir Sampling to choose a random index J ∈ {1, 2, ..., m};
2  r ← |{j ≥ J : X_j = X_J}|;    // # of occurrences of X_J from position J on
3  X ← m × (g(r) − g(r − 1));
4  return X;
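Algorithm 2 translates directly to Python (our own transcription; the helper name and use of Python's random module are illustrative):

```python
import random

def reservoir_sample(stream, rng=random):
    """Algorithm 2: return a uniformly random element of the stream,
    without knowing its length m in advance."""
    S = None
    for t, x in enumerate(stream, start=1):
        if rng.random() < 1.0 / t:   # with probability 1/t, set S = X_t
            S = x
    return S
```

Each element ends up selected with probability exactly 1/m, as shown in the analysis below.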
4.2.3 Analysis

For Algorithm 2, S = X_i means that S was set upon seeing the i-th element and never reset after that. The probability that S = X_i at some time t ≥ i is

    Pr[S = X_i] = (1/i) × (1 − 1/(i+1)) × (1 − 1/(i+2)) × ··· × (1 − 1/t)
                = (1/i) × (i/(i+1)) × ((i+1)/(i+2)) × ··· × ((t−1)/t)
                = 1/t.

We only need to keep track of one number S, so it takes O(log n) bits of space to get S, and O(k log n) bits of space to get k samples.

Now we analyze Algorithm 3; let X be its output.

Claim 4.6 E[X] = Σ_{i=1}^n g(f_i).

Proof:

    E[X] = Σ_i Pr[X_J = i] E[X | X_J = i]
         = Σ_i (f_i/m) Σ_{r=1}^{f_i} (1/f_i) m (g(r) − g(r − 1))
         = Σ_i g(f_i).
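Claim 4.6 can be seen in action with a one-pass Python sketch of Algorithm 3, with the reservoir sampling inlined (our own illustration; names are not from the notes). Averaging many independent copies of X approaches Σ_i g(f_i):

```python
import random

def ams_sample(stream, g, rng=random):
    """Algorithm 3: one sample of the unbiased estimator X with
    E[X] = sum_i g(f_i), assuming g(0) = 0."""
    m = 0          # stream length seen so far
    chosen = None  # X_J, the reservoir-sampled element
    r = 0          # occurrences of X_J from position J onward
    for x in stream:
        m += 1
        if rng.random() < 1.0 / m:   # reservoir-sample the index J
            chosen, r = x, 0
        if x == chosen:
            r += 1
    return m * (g(r) - g(r - 1))
```

With g(r) = r², the average of many samples should approach F_2 (see the next section).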
4.3 Example: Frequency Moments

For integer k ≥ 1, the k-th frequency moment is denoted

    F_k = Σ_{i=1}^n f_i^k.

Computing an (ε, δ)-approximation of F_k will be the goal of this section. Note that g(r) = r^k, where g plays the same role as in the previous section. We can now apply the AMS algorithm from the previous section to this function g; then X = m(r^k − (r − 1)^k), and we know that E[X] = F_k. We can conduct l independent trials to get X_1, ..., X_l and output (1/l) Σ_{i=1}^l X_i, an unbiased estimator for E[X] and therefore for F_k. To show that these are close with high probability, we plan to show that Var[X] is small and apply Chebyshev's Inequality. For this we employ the following lemma.

Lemma 4.7 Var[X] ≤ k n^{1−1/k} F_k².

Then for

    l = 3 Var[X] / (ε² E[X]²) ≤ 3 k n^{1−1/k} F_k² / (ε² F_k²) = 3 k n^{1−1/k} ε^{−2},

let Y = (1/l) Σ_{i=1}^l X_i, the mean of l independent trials. Now we compute the expected value and variance of Y:

    E[Y] = E[X_i] = F_k,

    Var[Y] = (1/l²) Σ_{i=1}^l Var[X_i] = Var[X]/l = ε² E[X]²/3 = ε² F_k²/3.

Then by Chebyshev's Inequality, we have

    Pr[|Y − E[Y]| ≥ ε E[Y]] = Pr[|Y − F_k| ≥ ε F_k] ≤ Var[Y]/(ε F_k)² = 1/3.

So with probability at least 2/3, Y is an ε-approximation of F_k. How can we boost this probability to at least 1 − δ? We repeat the above procedure T times and take the median of the T estimates. Suppose we do this T = c log(1/δ) times and get estimates Y_1, ..., Y_T. Then consider the indicator random variable for Y_j being an ε-approximation:

    Z_j = 1 if |Y_j − F_k| ≤ ε F_k, and 0 otherwise.

Then, for Z = Σ_j Z_j, we have E[Z] ≥ (2/3)T. Note that if Z ≥ T/2, the median of Y_1, ..., Y_T must be an ε-approximation. We now analyze the probability of this event, Pr[Z < T/2].
... more ideas are required.
Linear Sketches

- A sketch algorithm stores a random matrix Z ∈ R^{k×n}, where k ≪ n, and computes the projection Zf of the frequency vector f.
- The sketch can be computed incrementally:
  - Suppose we have the sketch Zf of the current frequency vector f.
  - If we see an occurrence of i, the new frequency vector is f′ = f + e_i.
  - We can update the sketch by just adding the i-th column of Z to Zf:

        Zf′ = Z(f + e_i) = Zf + Z e_i = Zf + (i-th column of Z).

- Useful? We need to choose random matrices such that the relevant properties of f can be estimated with high probability from Zf.
Outline

- F2 Estimation
- Distinct Elements
F2

- Problem: Construct an (ε, δ)-approximation for F_2 = Σ_i f_i².
- Algorithm:
  - Let Z ∈ {−1, 1}^{k×n}, where the entries of each row are 4-wise independent and the rows are independent.
  - Compute Zf and average the squared entries appropriately.
- Analysis:
  - Let s = z·f be an entry of Zf, where z is a row of Z.
  - Lemma: E[s²] = F_2
  - Lemma: V[s²] ≤ 2F_2²
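The row estimator s² can be sketched in Python as follows (our own illustration, not from the slides; for simplicity it uses fully random signs, whereas the slides only require 4-wise independent rows, which need far fewer random bits):

```python
import random

def f2_estimate(stream, n, k, rng=random):
    """Estimate F_2 = sum_i f_i^2 by averaging the squared entries of
    Zf, where Z is a random k x n +/-1 matrix. Each entry s_j = z_j . f
    is maintained incrementally: seeing item x updates s_j += z_j[x]."""
    Z = [[rng.choice((-1, 1)) for _ in range(n + 1)] for _ in range(k)]
    s = [0] * k
    for x in stream:            # x is an item in {1, ..., n}
        for j in range(k):
            s[j] += Z[j][x]
    return sum(v * v for v in s) / k
```

By the lemmas above, each s_j² is unbiased with variance at most 2F_2², so averaging k = O(ε^{−2}) rows gives an ε-approximation with constant probability.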
Expectation Lemma

- s = z·f where the z_i ∈_R {−1, 1} are 4-wise independent.
- Then

      E[s²] = E[Σ_{i,j∈[n]} z_i z_j f_i f_j] = Σ_{i,j∈[n]} f_i f_j E[z_i z_j] = Σ_{i∈[n]} f_i²,

  since E[z_i z_j] = 0 unless i = j.
Variance Lemma

- E[z_i z_j z_k z_l] = 0 unless the indices match in pairs: i = j = k = l, or two distinct pairs (i = j, k = l), (i = k, j = l), or (i = l, j = k).
- Then

      V[s²] = E[s⁴] − E[s²]²
            = Σ_i f_i⁴ + 6 Σ_{i<j} f_i² f_j² − (Σ_i f_i²)²
            = 4 Σ_{i<j} f_i² f_j²
            ≤ 2 F_2².
Distinct Elements

- Simpler problem: given a threshold T, distinguish F_0 > (1 + ε)T from F_0 < (1 − ε)T.
- If we can solve this simpler problem, we can solve the original problem by trying O(ε^{−1} log n) values of T:

      T = 1, (1 + ε), (1 + ε)², ..., n

- Algorithm:
  - Choose random sets S_1, S_2, ..., S_k ⊂ [n] where P[i ∈ S_j] = 1/T
  - Compute s_j = Σ_{i∈S_j} f_i
  - If at least k/e of the s_j are zero, output F_0 < (1 − ε)T
- Analysis:
  - If F_0 > (1 + ε)T, then P[s_j = 0] < 1/e − ε/3
  - If F_0 < (1 − ε)T, then P[s_j = 0] > 1/e + ε/3
  - Chernoff: k = O(ε^{−2} log δ^{−1}) ensures correctness with probability 1 − δ.
Analysis

- Suppose T is large and ε is small:

      P[s_j = 0] = (1 − 1/T)^{F_0} ≈ e^{−F_0/T}

- If F_0 > (1 + ε)T:   e^{−F_0/T} ≤ e^{−(1+ε)} ≤ e^{−1} − ε/3
- If F_0 < (1 − ε)T:   e^{−F_0/T} ≥ e^{−(1−ε)} ≥ e^{−1} + ε/3
CS 6550: Randomized Algorithms
Spring 2019
Lecture 5: Pairwise Independence and streaming January 22, 2019 Lecturer: Eric Vigoda
Scribes: Aditi Laddha, He Jia
Disclaimer: These notes have not been subjected to the usual scrutiny reserved for formal publications.
5.1 Pairwise Independent

Suppose X_1, ..., X_n are n random variables on Ω.

Definition 5.1 X_1, ..., X_n are mutually independent if for all α_1, ..., α_n ∈ Ω,

    Pr(X_1 = α_1, ..., X_n = α_n) = ∏_{i=1}^n Pr(X_i = α_i).

Definition 5.2 X_1, ..., X_n are pairwise independent if for all i ≠ j ∈ {1, ..., n} and α, β ∈ Ω,

    Pr(X_i = α, X_j = β) = Pr(X_i = α) Pr(X_j = β).

Next we give two examples of pairwise independent variables.
5.1.1 Simple Construction

Suppose X_1, ..., X_m ∈ {0, 1} are m mutually independent random bits with Bernoulli distribution p = 1/2. We will make 2^m − 1 pairwise independent random variables Y_1, ..., Y_{2^m−1} ∈ {0, 1}. Enumerate all nonempty subsets of {1, ..., m} as S_1, ..., S_{2^m−1}, and let

    Y_i = ⊕_{j∈S_i} X_j = Σ_{j∈S_i} X_j mod 2.

Lemma 5.3 Y_1, ..., Y_{2^m−1} are pairwise independent.

Proof: (Uniform) First we show that Pr(Y_i = 0) = Pr(Y_i = 1) = 1/2. Suppose S_i = {t_1, ..., t_ℓ} ⊆ {1, ..., m}. Then

    Y_i = Σ_{j=1}^ℓ X_{t_j} mod 2 = (Σ_{j=1}^{ℓ−1} X_{t_j} mod 2 + X_{t_ℓ}) mod 2.

Reveal X_{t_1}, ..., X_{t_{ℓ−1}}. We can see that

    Pr(Y_i = 1) = Pr(X_{t_ℓ} = 0 ∩ Σ_{j=1}^{ℓ−1} X_{t_j} mod 2 = 1) + Pr(X_{t_ℓ} = 1 ∩ Σ_{j=1}^{ℓ−1} X_{t_j} mod 2 = 0) = 1/2.

(Pairwise independent) For any i ≠ j, without loss of generality we may assume S_i \ S_j ≠ ∅. Then

    Pr(Y_i = α, Y_j = β) = Pr(Y_i = α | Y_j = β) Pr(Y_j = β) = Pr(Y_i = α | Y_j = β) × 1/2.

Take t ∈ S_i \ S_j and reveal {X_1, ..., X_m} \ {X_t}. Then with probability 1/2, X_t = 1 and Y_i flips; with probability 1/2, X_t = 0 and Y_i stays the same (while Y_j is already determined, since t ∉ S_j). Therefore,

    Pr(Y_i = α | Y_j = β) = 1/2.

Thus

    Pr(Y_i = α, Y_j = β) = 1/4.
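Lemma 5.3 can be verified exhaustively for small m; the following Python snippet (our own, not from the notes) builds the family and checks that every pair (Y_i, Y_j) takes each value pair (α, β) on exactly a 1/4 fraction of the 2^m seeds:

```python
from itertools import product

def xor_family(bits):
    """From m independent fair bits, build the 2^m - 1 variables
    Y_i = XOR of the bits indexed by the i-th nonempty subset."""
    m = len(bits)
    ys = []
    for mask in range(1, 2 ** m):
        y = 0
        for j in range(m):
            if (mask >> j) & 1:
                y ^= bits[j]
        ys.append(y)
    return ys

def check_pairwise(m):
    """Exhaustively verify pairwise independence over all 2^m seeds."""
    n = 2 ** m - 1
    joint = {}
    for seed in product((0, 1), repeat=m):
        ys = xor_family(seed)
        for i in range(n):
            for j in range(i + 1, n):
                key = (i, j, ys[i], ys[j])
                joint[key] = joint.get(key, 0) + 1
    # each of the 4 value pairs should occur for exactly 2^m / 4 seeds
    pairs = n * (n - 1) // 2
    return len(joint) == 4 * pairs and all(c == 2 ** m // 4 for c in joint.values())
```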
Hashing
For prime p, given a, b which are independent and uniform over {0, . . . , p − 1}. We construct Y0 , . . . , Yp−1 which are pairwise independent and uniform over {0, . . . , p − 1}. Namely, let Yi = a + ib
mod p.
Lemma 5.4 Y0 , . . . , Yp−1 are pairwise independent. Proof: (Uniform) First show that Pr(Yi = α) = 1/p. For any b, i, α ∈ {0, . . . , p − 1} Pr(Yi = α) = Pr(a + ib ≡ α
mod p) = Pr(a ≡ α − ib
mod p) =
1 p
since there is a unique such a ∈ {0, . . . , p − 1}. (Pairwise independent) Consider i, j ∈ {0, . . . , p − 1}, i 6= j and α, β ∈ {0, . . . , p − 1}, we will show Pr(Yi = α, Yj = β) =
1 . p2
Yi = α ⇐⇒ a + ib ≡ α
mod p
Yj = β ⇐⇒ a + jb ≡ β
mod p
Thus α − β ≡ (a + ib) − (a + jb)
and
mod p
α − β ≡ b(i − j) mod p α−β b≡ i−j a ≡ α − ib mod p
So there is a unique (a, b) pair so that Yi = α, Yj = β. Therefore, Pr(Yi = α, Yj = β) = Pr(b ≡
α−β 1 , a ≡ α − ib mod p) = 2 . i−j p
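The uniqueness argument in this proof can be checked by brute force for a small prime (snippet and names our own):

```python
from itertools import product

def check_hash_family(p):
    """For prime p, verify that Y_i = a + i*b mod p (a, b uniform) is
    pairwise independent: for i != j, each outcome pair (alpha, beta)
    arises from exactly one seed (a, b), hence has probability 1/p^2."""
    for i, j in product(range(p), repeat=2):
        if i == j:
            continue
        seen = set()
        for a, b in product(range(p), repeat=2):
            pair = ((a + i * b) % p, (a + j * b) % p)
            if pair in seen:      # two seeds hit the same pair: not 1/p^2
                return False
            seen.add(pair)
        if len(seen) != p * p:
            return False
    return True
```

Since there are p² seeds and p² outcome pairs, injectivity of the seed-to-outcome map is exactly the uniqueness claimed in the proof.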
5.2 Application: Streaming

Given a stream S = {s_1, s_2, ..., s_m}, where each s_i ∈ {1, ..., n} and m is a very large number. The elements of the sequence are given one by one and cannot be stored. Define f_i = |{s_j ∈ S : s_j = i}|. For an integer k ≥ 1, the k-th frequency moment is defined as

    F_k = Σ_{i=1}^n f_i^k.

F_0 = |{i : f_i > 0}| is the number of distinct elements in S.

Definition 5.5 For an integer k,

    zeros(k) = # of trailing zeros in the binary representation of k = max{l ≥ 0 : 2^l divides k}.
5.2.1 The AMS algorithm

Find a prime p such that n ≤ p < 2n. Pad f such that f_i = 0 for all i ∈ {n + 1, ..., p}.

Algorithm 1: AMS Algorithm for estimating F_0
input : S = {s_1, s_2, ..., s_m} where s_i ∈ {1, ..., n}.
output: d̂, a (3, 0.96)-approximation of F_0.
1  Choose a, b randomly from {0, 1, ..., p − 1} and define h(k) = a + kb mod p;
2  z = 0;
3  for i ← 1 to m do
4      compute zeros(h(s_i));
5      if zeros(h(s_i)) > z then
6          z = zeros(h(s_i))
7  Output 2^{z+1/2};
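A Python sketch of Algorithm 1 (our own transcription; the prime p is passed in rather than searched for, and zeros(0) is treated as 0, a corner case the notes gloss over):

```python
import random

def zeros(x):
    """Number of trailing zeros of x in binary (0 for x = 0 here)."""
    z = 0
    while x > 0 and x % 2 == 0:
        x //= 2
        z += 1
    return z

def ams_f0(stream, p, rng=random):
    """AMS Algorithm: crude estimate of F_0 = # distinct elements,
    using the pairwise independent hash h(k) = a + k*b mod p."""
    a, b = rng.randrange(p), rng.randrange(p)
    z = 0
    for k in stream:
        z = max(z, zeros((a + k * b) % p))
    return 2 ** (z + 0.5)
```

As analyzed below, a single run is only a weak factor-3 estimate; the notes boost it by taking the median of independent runs, each with a fresh (a, b).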
5.3 Analysis of Algorithm

5.3.1 Space Complexity

a, b ≤ p ≤ 2n, so the space needed to store a and b is O(log n); z ≤ log n, so the space needed to store z is O(log log n). The overall space needed is O(log n).

5.3.2 Failure Probability

Let F_0 = d = |{i : f_i > 0}| and let d̂ be the output of Algorithm 1.

Lemma 5.6 Pr(d̂ ≥ 3d or d̂ ≤ d/3) ≤ 0.96.

For k ∈ {1, ..., p} and integer l ≥ 0,

    Pr(zeros(h(k)) ≥ l) = Pr(last l bits of h(k) are all 0) = 1/2^l

because h(k) is a uniformly random bit string. We cannot use Chernoff bounds here, as the values h(k) are not mutually independent, only pairwise independent.
For k ∈ {1, ..., n} and integer l ≥ 0, define the random variable

    X_{k,l} = 1 if zeros(h(k)) ≥ l, and 0 otherwise.

Let

    Y_l = Σ_{k: f_k > 0} X_{k,l}.

If 2^{z+1/2} is the output of Algorithm 1, then

    z ≥ l ⟺ Y_l > 0,
    z ≤ l − 1 ⟺ Y_l = 0.

For a fixed l, X_{1,l}, ..., X_{n,l} are pairwise independent due to the construction of h from the last section. We have

    E(X_{k,l}) = Pr[zeros(h(k)) ≥ l] = 1/2^l

and

    Var(X_{k,l}) = E(X_{k,l}²) − E(X_{k,l})² ≤ E(X_{k,l}²) = 1/2^l.

Therefore,

    E(Y_l) = Σ_{k: f_k > 0} E(X_{k,l}) = d/2^l,

    Var(Y_l) = Σ_{k: f_k > 0} Var(X_{k,l}) ≤ d/2^l,

since linearity of variance holds over pairwise independent variables too.

By Markov's inequality,

    Pr(Y_l > 0) = Pr(Y_l ≥ 1) ≤ E(Y_l) ≤ d 2^{−l}.

Using Chebyshev's inequality,

    Pr(Y_l = 0) ≤ Pr(|Y_l − E[Y_l]| ≥ d 2^{−l}) ≤ Var(Y_l)/(d 2^{−l})² ≤ 2^l/d.

We want d/3 ≤ d̂ ≤ 3d. Let a be the smallest integer such that 2^{a+1/2} ≥ 3d, and let b be the largest integer such that 2^{b+1/2} ≤ d/3. Then,

    Pr(d̂ ≥ 3d) = Pr[z ≥ a] = Pr(Y_a > 0) ≤ d/2^a ≤ (2^{a+1/2}/3)/2^a = √2/3 < 0.48,

and

    Pr(d̂ ≤ d/3) = Pr[z ≤ b] = Pr[Y_{b+1} = 0] ≤ 2^{b+1}/d ≤ 2^{b+1}/(3 · 2^{b+1/2}) = √2/3 < 0.48.

This gives a 3-approximation algorithm with error probability 0.96. To get a 3-approximation algorithm with error probability ≤ δ, we can boost the algorithm by running it r = O(log(1/δ)) times. Let the outputs be d̂_1, d̂_2, ..., d̂_r. Then with probability ≥ 1 − δ, the median of d̂_1, d̂_2, ..., d̂_r is a 3-approximation of d. This takes O(log(1/δ) log n) bits. For every run of the AMS algorithm, we select a new hash function, making each run independent of the others.
References

[1] N. Alon, Y. Matias, and M. Szegedy. The space complexity of approximating the frequency moments. Journal of Computer and System Sciences, 58(1):137–147, 1999.
Lecture Notes on a Parallel Algorithm for Generating a Maximal Independent Set
Eric Vigoda
Georgia Institute of Technology
Last updated for 7530 - Randomized Algorithms, Spring 2010.

In this lecture we present a randomized parallel algorithm for generating a maximal independent set (MIS), and we then show how to derandomize the algorithm using pairwise independence. For an input graph with n vertices, our goal is to devise an algorithm that runs in time polynomial in log n using a number of processors polynomial in n. See Chapter 12.1 of Motwani and Raghavan [5] for background on parallel models of computation (specifically the CREW PRAM model) and the associated complexity classes NC and RNC. Here we present the algorithm and proof of Luby [4]; see Section 4 for more on the history of this problem.
1 Maximal Independent Sets

For a graph G = (V, E), an independent set is a set S ⊆ V which contains no edges of G, i.e., for all (u, v) ∈ E either u ∉ S or v ∉ S. The independent set S is a maximal independent set if for all v ∈ V, either v ∈ S or N(v) ∩ S ≠ ∅, where N(v) denotes the neighbors of v.

It's easy to find a maximal independent set sequentially. For example, the following algorithm works:

1. I = ∅, V′ = V.
2. While (V′ ≠ ∅) do
   (a) Choose any v ∈ V′.
   (b) Set I = I ∪ {v}.
   (c) Set V′ = V′ \ ({v} ∪ N(v)).
3. Output I.

Our focus is finding a maximal independent set using a parallel algorithm. The idea is that in every round we find a set S which is an independent set. Then we add S to our current independent set I, and we remove S ∪ N(S) from the current graph V′. If S ∪ N(S) were a constant fraction of V′, we would only need O(log |V|) rounds. We will instead ensure that by removing S ∪ N(S) from the graph, we remove a constant fraction of the edges.
To choose S in parallel, each vertex v independently adds itself to S with a well-chosen probability p(v). We want to avoid adding adjacent vertices to S, hence we prefer to add low-degree vertices. But if for some edge (u, v) both endpoints were added to S, then we keep the higher-degree vertex. Here's the algorithm:

The Algorithm
Problem: Given a graph, find a maximal independent set.
1. I = ∅, V′ = V and G′ = G.
2. While (V′ ≠ ∅) do:
   (a) Choose a random set of vertices S ⊆ V′ by selecting each vertex v independently with probability 1/(2d_{G′}(v)), where d_{G′}(v) is the degree of v in the graph G′.
   (b) For every edge (u, v) ∈ E(G′), if both endpoints are in S then remove the vertex of lower degree from S (break ties arbitrarily). Call this new set S′.
   (c) I = I ∪ S′. Let V′ = V′ \ (S′ ∪ N_{G′}(S′)). Finally, let G′ be the subgraph induced on V′.
3. Output I.
Fig 1: The algorithm

Correctness: We see that at each stage the set S′ that is added is an independent set. Moreover, since at each stage we remove S′ ∪ N(S′), the set I remains an independent set. Also note that all the vertices removed from G′ at a particular stage are either vertices in I or neighbors of some vertex in I. So the algorithm always outputs a maximal independent set. We also note that it can be easily parallelized on a CREW PRAM.
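A sequential Python simulation of the algorithm in Fig 1 (our own; a real implementation would run each round's per-vertex steps in parallel, and our tie-breaking by vertex label is one arbitrary choice):

```python
import random

def luby_mis(adj, rng=random):
    """Simulate Luby's algorithm. adj maps each vertex to its neighbor set.
    Returns a maximal independent set of the graph."""
    I, V = set(), set(adj)
    while V:
        deg = {v: sum(1 for u in adj[v] if u in V) for v in V}
        # each vertex joins S with probability 1/(2 deg); isolated
        # vertices (degree 0) can safely join with probability 1
        S = {v for v in V if deg[v] == 0 or rng.random() < 1 / (2 * deg[v])}
        # for each edge inside S, drop the lower-degree endpoint
        Sp = set(S)
        for v in S:
            for u in adj[v]:
                if u in S:
                    Sp.discard(v if (deg[v], v) < (deg[u], u) else u)
        I |= Sp
        removed = set(Sp)
        for v in Sp:
            removed |= adj[v] & V
        V -= removed
    return I
```

Whatever the random choices, the output is always a maximal independent set, exactly as the correctness argument above shows.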
2 Expected Running Time

In this section we bound the expected running time of the algorithm, and in the next section we derandomize it. Let G_j = (V_j, E_j) denote the graph G′ after stage j.

Main Lemma: For some c < 1, E[|E_j| | E_{j−1}] < c|E_{j−1}|. Hence, in expectation, only O(log m) rounds will be required, where m = |E_0|.

For graph G_j, we classify the vertices and edges as GOOD and BAD to distinguish those that are likely to be removed. We say vertex v is BAD if more than 2/3 of the neighbors of v are of higher degree than v. We say an edge is BAD if both of its endpoints are bad; otherwise the edge is GOOD. The key claims are that at least half the edges are GOOD, and that each GOOD edge is deleted with a constant probability. The main lemma then follows immediately. Here are the main lemmas.

Lemma 1. At least half the edges are GOOD.

Lemma 2. If an edge e is GOOD then the probability that it gets deleted is at least α, where α := (1/2)(1 − e^{−1/6}). The constant α is approximately 0.07676.

We can now restate and prove the main lemma:

Main Lemma: E[|E_j| | E_{j−1}] ≤ |E_{j−1}|(1 − α/2).

Proof of Main Lemma.

    E[|E_j| | E_{j−1}] = Σ_{e∈E_{j−1}} (1 − Pr[e gets deleted])
                       ≤ |E_{j−1}| − α · #{GOOD edges}
                       ≤ |E_{j−1}|(1 − α/2).

Thus,

    E[|E_j|] ≤ |E_0|(1 − α/2)^j ≤ m exp(−jα/2) < 1

for j > (2/α) log m. Therefore, the expected number of rounds required is ≤ 30 log m = O(log m). Moreover, by Markov's inequality,

    Pr[E_j ≠ ∅] ≤ E[|E_j|] < m exp(−jα/2) ≤ 1/4

for j = (4/α) log m. Hence, with probability at least 3/4 the number of rounds is ≤ 60 log m, and therefore we have an RNC algorithm for MIS.
It remains to prove Lemmas 1 and 2. We begin with Lemma 1, which has a cute combinatorial proof.

Proof of Lemma 1. Denote the set of bad edges by E_B. We will define a map f from E_B to pairs of edges so that for all e_1 ≠ e_2 ∈ E_B, f(e_1) ∩ f(e_2) = ∅. This proves |E_B| ≤ |E|/2, and we're done.

The function f is defined as follows. For each (u, v) ∈ E, direct it to the higher-degree vertex, breaking ties as in the algorithm. Now, suppose e = (u, v) ∈ E_B is directed towards v. Since e is BAD, v is BAD. Therefore, by the definition of a BAD vertex, at least 2/3 of the edges incident to v are directed away from v, and at most 1/3 of the edges incident to v are directed into v. In other words, v has at least twice as many out-edges as in-edges. Hence for each edge into v we can assign a disjoint pair of edges out of v. This gives our mapping f, since each BAD edge directed into v is assigned a disjoint pair of edges directed out of v.

We now prove Lemma 2. To that end we prove the following lemmas, which say that GOOD vertices are likely to have a neighbor in S, and that vertices in S have probability at least 1/2 of being in S′. From these lemmas, Lemma 2 will easily follow, since the neighbors of S′ are deleted from the graph.

Lemma 3. If v is GOOD then Pr[N(v) ∩ S ≠ ∅] ≥ 2α, where α := (1/2)(1 − e^{−1/6}).

Proof. Define L(v) := {w ∈ N(v) : d(w) ≤ d(v)}. By definition, |L(v)| ≥ d(v)/3 if v is a GOOD vertex. Then

    Pr[N(v) ∩ S ≠ ∅] = 1 − Pr[N(v) ∩ S = ∅]
                     = 1 − ∏_{w∈N(v)} Pr[w ∉ S]        (using full independence)
                     ≥ 1 − ∏_{w∈L(v)} Pr[w ∉ S]
                     = 1 − ∏_{w∈L(v)} (1 − 1/(2d(w)))
                     ≥ 1 − ∏_{w∈L(v)} (1 − 1/(2d(v)))
                     ≥ 1 − exp(−|L(v)|/(2d(v)))
                     ≥ 1 − exp(−1/6) = 2α.

Note, the above lemma uses full independence in its proof.

Lemma 4. Pr[w ∉ S′ | w ∈ S] ≤ 1/2.
Proof. Let H(w) = N(w) \ L(w) = {z ∈ N(w) : d(z) > d(w)}. Then

    Pr[w ∉ S′ | w ∈ S] = Pr[H(w) ∩ S ≠ ∅ | w ∈ S]
                       ≤ Σ_{z∈H(w)} Pr[z ∈ S | w ∈ S]
                       = Σ_{z∈H(w)} Pr[z ∈ S, w ∈ S] / Pr[w ∈ S]
                       = Σ_{z∈H(w)} Pr[z ∈ S] Pr[w ∈ S] / Pr[w ∈ S]      (using pairwise independence)
                       = Σ_{z∈H(w)} Pr[z ∈ S]
                       = Σ_{z∈H(w)} 1/(2d(z))
                       ≤ Σ_{z∈H(w)} 1/(2d(w))
                       ≤ 1/2,

where the last step uses |H(w)| ≤ d(w).

From Lemmas 3 and 4 we get the following result, that GOOD vertices are likely to be deleted.

Lemma 5. If v is GOOD then Pr[v ∈ N(S′)] ≥ α.

Proof. Let V_G denote the set of GOOD vertices. We have

    Pr[v ∈ N(S′) | v ∈ V_G]
        = Pr[N(v) ∩ S′ ≠ ∅ | v ∈ V_G]
        = Pr[N(v) ∩ S′ ≠ ∅ | N(v) ∩ S ≠ ∅, v ∈ V_G] · Pr[N(v) ∩ S ≠ ∅ | v ∈ V_G]
        ≥ Pr[w ∈ S′ | w ∈ N(v) ∩ S, v ∈ V_G] · Pr[N(v) ∩ S ≠ ∅ | v ∈ V_G]
        ≥ (1/2)(2α)        (by Lemmas 4 and 3)
        = α.

Since vertices in N(S′) are deleted, an immediate corollary of Lemma 5 is the following.

Corollary 6. If v is GOOD then the probability that v gets deleted is at least α.

Finally, from Corollary 6 we can easily prove Lemma 2, which was our main remaining task in the analysis of the RNC algorithm.

Proof of Lemma 2. Let e = (u, v), where at least one of the endpoints is GOOD; assume v is GOOD. Then by Corollary 6 we have:

    Pr[e ∈ E_{j−1} \ E_j] ≥ Pr[v gets deleted] ≥ α.
3 Derandomizing MIS
The only step where we use full independence is in Lemma 3, for lower bounding the probability that a GOOD vertex gets picked. The argument we used was essentially the following:

Lemma 7. Let X_i, 1 ≤ i ≤ n, be {0, 1} random variables and p_i := Pr[X_i = 1]. If the X_i are fully independent then

    Pr[Σ_{i=1}^n X_i > 0] ≥ 1 − ∏_{i=1}^n (1 − p_i).

Here is the corresponding bound if the variables are pairwise independent.

Lemma 8. Let X_i, 1 ≤ i ≤ n, be {0, 1} random variables and p_i := Pr[X_i = 1]. If the X_i are pairwise independent then

    Pr[Σ_{i=1}^n X_i > 0] ≥ (1/2) min{1/2, Σ_{i=1}^n p_i}.

Proof. Suppose Σ_{i=1}^n p_i ≤ 1. Then we have the following (the condition Σ_i p_i ≤ 1 will only come into some algebra at the end):

    Pr[Σ_{i=1}^n X_i > 0] ≥ Pr[Σ_{i=1}^n X_i = 1]
        ≥ Σ_i Pr[X_i = 1] − (1/2) Σ_{i≠j} Pr[X_i = 1, X_j = 1]
        = Σ_i p_i − (1/2) Σ_{i≠j} p_i p_j
        ≥ Σ_i p_i − (1/2) (Σ_i p_i)²
        = (Σ_i p_i) (1 − (1/2) Σ_i p_i)
        ≥ (1/2) Σ_i p_i    when Σ_i p_i ≤ 1.

This proves the lemma when Σ_i p_i ≤ 1.

If Σ_i p_i > 1, then we restrict the index of summation to a set S ⊆ [n] = {1, ..., n} such that 1/2 ≤ Σ_{i∈S} p_i ≤ 1. Note that if Σ_i p_i > 1, such a subset S must always exist: either there is an index j with 1/2 ≤ p_j ≤ 1 and we can then choose S = {j}, or else every p_i < 1/2, in which case adding indices one at a time until the partial sum first exceeds 1/2 gives such a set S. Given this S, following the above proof we have

    Pr[Σ_{i=1}^n X_i > 0] ≥ Pr[Σ_{i∈S} X_i = 1] ≥ (1/2) Σ_{i∈S} p_i ≥ 1/4,

since Σ_{i∈S} p_i ≥ 1/2, and this proves the conclusion of the lemma in this case.
Using Lemma 8 in the proof of Lemma 3, the bound 2α gets replaced by 1/12. Hence, using the construction of pairwise independent random variables described in an earlier lecture, we can now derandomize the algorithm to get a deterministic algorithm that runs in O(mn log n) time (asymptotically almost as good as the sequential algorithm). The advantage of this method is that it can be easily parallelized to give an NC² algorithm (using O(m) processors).
4 History and Open Questions
The k-wise independence derandomization approach was developed in [2, 1, 4]. The maximal independent set problem (MIS) was first shown to be in NC by Karp and Wigderson [3], who showed that MIS is in NC⁴. Subsequently, improvements and simplifications of their result were found by Alon et al. [1] and Luby [4]. The algorithm and proof described in these notes are due to Luby [4]. The question of whether MIS is in NC¹ is still open.
References

[1] N. Alon, L. Babai, and A. Itai. A fast and simple randomized parallel algorithm for the maximal independent set problem. Journal of Algorithms, 7:567–583, 1986.

[2] B. Chor and O. Goldreich. On the power of two-point sampling. Journal of Complexity, 5:96–106, 1989.

[3] R.M. Karp and A. Wigderson. A fast parallel algorithm for the maximal independent set problem. Proc. 16th ACM Symposium on Theory of Computing (STOC), 266–272, 1984.

[4] M. Luby. A simple parallel algorithm for the maximal independent set problem. Proc. 17th ACM Symposium on Theory of Computing (STOC), 1–10, 1985.

[5] R. Motwani and P. Raghavan. Randomized Algorithms. Cambridge University Press, New York, 1995.
CS 6550: Randomized Algorithms
Spring 2019
Lecture 6: Data Streaming January 24, 2019 Lecturer: Eric Vigoda
Scribes: Caitlin Beecham, Qiaomei Li
Disclaimer: These notes have not been subjected to the usual scrutiny reserved for formal publications.
6.1 Data Streaming
General idea: We want to count d, the number of distinct values that appear in our stream S. However, storing the desired frequency vector might take a lot of space. So instead of storing it, we hash the values k that appear in the stream, one by one. (Side note: hashing takes the arbitrary distribution of values appearing in S and makes the hashed output roughly uniform over the set {0, 1, ..., p − 1}.) Then, one by one, we add the pair (h(k), zeros(h(k))), that is, the hashed value together with the number of trailing zeros of h(k) written in binary, to a set (NOT a multiset) B. The idea is that the number of hashed values with at least some number of trailing zeros gives us an estimate of the number of distinct values that appear in the stream. In particular, since the hashed distribution is uniform, the probability that h(k) has at least l trailing zeros is 1/2^l. So the probability that an element of B has ≥ l trailing zeros is about d/2^l ("about" because there could be hash collisions). Now, as the algorithm goes on, we make sure to keep only those elements of B whose number of trailing zeros is above this threshold l. So, at the end of the algorithm, B consists of only the elements with enough trailing zeros, and the expected number of these is E[|B|] = d/2^l. Thus, our algorithm outputting |B| · 2^l should be a reasonably good estimate of d.

Note: the answer to the question asked during class, about whether we could just store zeros(h(k)) instead of (h(k), zeros(h(k))), is that we can't. The crux of what makes this algorithm work is that we store the elements of B as a set and not a multiset, which is what relates |B| to the number of distinct elements d (because we're not keeping elements with multiplicity).
So, in particular, what allows us to keep exactly one copy of each value (really one copy of each hashed value, but distinct values give distinct hash values) is that we actually know the value h(k), because it is stored. If we only stored zeros(h(k)), we would end up counting many distinct hash values as one thing because they have the same number of trailing zeros. Basically, the whole reason we have a connection between d and |B| is that we store every hash value that appears, exactly once (until it gets thrown out); if we only kept track of zeros(h(k)) instead of (h(k), zeros(h(k))), we would lose that connection.

Let the stream be S = {s_1, ..., s_m}, where m is huge, and f = (f_1, ..., f_n), where f_i = |{j : 1 ≤ j ≤ m, s_j = i}|. Our goal is to compute d = F_0 = |{i : f_i > 0}|, the number of distinct items. From last class we found d̂ where Pr(d/3 ≤ d̂ ≤ 3d) ≥ 0.04, and boosted this to ≥ 1 − δ by taking the median of O(log(1/δ)) trials. Now we show that for all ε > 0 and δ > 0 we can achieve Pr(d̂(1 − ε) ≤ d ≤ d̂(1 + ε)) ≥ 1 − δ.

Given h : [n] → [n], Pr(zeros(h(k)) ≥ l) = 2^{−l}. Recall from last class the algorithm that finds l = max{zeros(h(k)) : k ∈ S} and outputs 2^{l+1/2}. Instead, consider the problem of counting the number of k in the stream with zeros(h(k)) ≥ l: we expect d/2^l = |B| of them, and the corresponding algorithm outputs |B| · 2^l.
The algorithm:

1  choose a random pairwise independent hash function h : {1, ..., n} → {1, ..., n}:
2      find a prime p s.t. n ≤ p ≤ 2n; choose a, b independently and uniformly from {0, 1, ..., p − 1} and define h(i) = a + bi mod p;
3  set the counter z = 0 and B = ∅;
4  go through the data stream S; at each element k, hash it and look at zeros(h(k));
5  if zeros(h(k)) ≥ z then
6      B = B ∪ {(h(k), zeros(h(k)))};
7      while |B| > 1000/ε² do
8          z = z + 1;
9          remove all (α, β) from B where β < z
10 output |B| · 2^z
6.1.1 The algorithm
For k ∈ {1, ..., n} and integer l ≥ 1, define

    X_{l,k} = 1 if zeros(h(k)) ≥ l, and 0 otherwise.    (6.1)

Let Y_l = Σ_{k: f_k > 0} X_{l,k}, and output d̂ = Y_z · 2^z. Then

    E[Y_l] = Σ_{k: f_k > 0} E[X_{l,k}] = d/2^l    (6.2)

and

    Var(Y_l) = Var(Σ_{k: f_k > 0} X_{l,k})    (6.3)
             = Σ_{k: f_k > 0} Var(X_{l,k})    (6.4)
             ≤ Σ_{k: f_k > 0} E[X_{l,k}²]    (6.5)
             = Σ_{k: f_k > 0} E[X_{l,k}] = d/2^l.    (6.6)

We want (1 − ε)d̂ ≤ d ≤ (1 + ε)d̂. Our algorithm fails if |d̂ − d| > εd. Note that |d̂ − d| = |Y_z 2^z − d| > εd if and only if |Y_z − d/2^z| > εd/2^z. Intuitively, we break this into two cases: if z < s (for a threshold s chosen below) we bound the deviation using Chebyshev, and if z ≥ s we use Markov's inequality to say that having such a large z is unlikely to happen.

Now, by Chebyshev,

    Pr(|Y_z − E[Y_z]| > εd/2^z) ≤ Var(Y_z)/(εd/2^z)² ≤ 2^z/(ε²d).

So,

    Pr(FAILURE) = Σ_{r=0}^{log n} Pr(|Y_r − E[Y_r]| > εd/2^r AND z = r).

We break this sum into two parts, using the facts Pr(A AND B) ≤ Pr(A) and Pr(A AND B) ≤ Pr(B):

    Σ_{r=0}^{log n} Pr(|Y_r − E[Y_r]| > εd/2^r AND z = r)
        = Σ_{r=0}^{s−1} Pr(|Y_r − E[Y_r]| > εd/2^r AND z = r) + Σ_{r=s}^{log n} Pr(|Y_r − E[Y_r]| > εd/2^r AND z = r)    (6.7, 6.8)
        ≤ Σ_{r=0}^{s−1} Pr(|Y_r − E[Y_r]| > εd/2^r) + Σ_{r=s}^{log n} Pr(z = r)    (6.9, 6.10)
        ≤ Σ_{r=0}^{s−1} 2^r/(ε²d) + Pr(z ≥ s)    (6.11)
        = (1/(ε²d)) Σ_{r=0}^{s−1} 2^r + Pr(Y_{s−1} > 1000/ε²)    (6.12)
        ≤ 2^s/(ε²d) + (ε²/1000) E[Y_{s−1}]    (6.13)
        = 2^s/(ε²d) + ε²d/(1000 · 2^{s−1}),    (6.14)

where (6.12) holds because z reaching s means the buffer at level s − 1 overflowed, and (6.13) is Markov's inequality. Choose s to be the smallest integer such that d/2^s < 24/ε², so that 12/ε² ≤ d/2^s < 24/ε². Then the failure probability is at most

    2^s/(ε²d) + ε²d/(1000 · 2^{s−1}) ≤ 1/12 + 48/1000 ≤ 1/12 + 1/12 = 1/6.

Since 2^s ≤ ε²d/12, we also have s = O(log(ε²d)).

What space does this algorithm use? Each h(k) uses O(log n) bits, and each zeros(h(k)) also fits in O(log n) bits, so storing the O(1/ε²) elements of B takes O((1/ε²) log n) bits; storing z takes O(log log n) bits, and storing a and b takes O(log n) bits. In total, the algorithm uses

    O((1/ε²) log n) + O(log log n) + O(log n) = O((1/ε²) log n)

bits. What if we want to do better than (1/ε²) log n? Really, we should spend only O(log(1/ε²)) bits to remember each item in B, which means we should use another hash function: we still use the original hash function h : [n] → [n], as well as a new hash function g : [n] → [10⁶/ε²] with some constant probability of collisions.
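Putting the pieces together, here is a Python sketch of the refined algorithm (our own transcription; the buffer cap constant follows the lecture's analysis, and the prime p is passed in):

```python
import random

def refined_f0(stream, p, eps, rng=random):
    """Refined distinct-elements estimator: keep the set B of hashed
    values whose trailing-zero count is at least a rising threshold z,
    capping |B| at 1000/eps^2, and output |B| * 2^z."""
    def zeros(x):
        z = 0
        while x > 0 and x % 2 == 0:
            x, z = x // 2, z + 1
        return z
    a, b = rng.randrange(p), rng.randrange(p)
    cap = int(1000 / eps**2)
    z = 0
    B = {}                        # h(k) -> zeros(h(k)); dict keys give set semantics
    for k in stream:
        hk = (a + b * k) % p
        zk = zeros(hk)
        if zk >= z:
            B[hk] = zk
            while len(B) > cap:   # buffer overflow: raise the threshold
                z += 1
                B = {v: t for v, t in B.items() if t >= z}
    return len(B) * 2 ** z
```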
Data Stream Algorithms
Lecture Notes
Amit Chakrabarti
Dartmouth College
Latest Update: July 2, 2020
Preface
This book grew out of lecture notes for offerings of a course on data stream algorithms at Dartmouth, beginning with a first offering in Fall 2009. Its primary goal is to be a resource for students and other teachers of this material. The choice of topics reflects this: the focus is on foundational algorithms for space-efficient processing of massive data streams. I have emphasized algorithms that are simple enough to permit a clean and fairly complete presentation in a classroom, assuming not much more background than an advanced undergraduate student would have. Where appropriate, I have provided pointers to more advanced research results that achieve improved bounds using heavier technical machinery.
I would like to thank the many Dartmouth students who took various editions of my course, as well as researchers around the world who told me that they had found my course notes useful. Their feedback inspired me to make a book out of this material. I thank Suman Kalyan Bera, Sagar Kale, and Jelani Nelson for numerous comments and contributions. Of special note are the 2009 students whose scribing efforts got this book started: Radhika Bhasin, Andrew Cherne, Robin Chhetri, Joe Cooley, Jon Denning, Alina Djamankulova, Ryan Kingston, Ranganath Kondapally, Adrian Kostrubiak, Konstantin Kutzkow, Aarathi Prasad, Priya Natarajan, and Zhenghui Wang. Amit Chakrabarti April 2020
Contents

0   Preliminaries: The Data Stream Model   6
    0.1   The Basic Setup   6
    0.2   The Quality of an Algorithm's Answer   6
    0.3   Variations of the Basic Setup   7

1   Finding Frequent Items Deterministically   8
    1.1   The Problem   8
    1.2   Frequency Estimation: The Misra–Gries Algorithm   8
    1.3   Analysis of the Algorithm   9
    1.4   Finding the Frequent Items   10
    Exercises   10

2   Estimating the Number of Distinct Elements   11
    2.1   The Problem   11
    2.2   The Tidemark Algorithm   11
    2.3   The Quality of the Algorithm's Estimate   12
    2.4   The Median Trick   13
    Exercises   13

3   A Better Estimate for Distinct Elements   15
    3.1   The Problem   15
    3.2   The BJKST Algorithm   15
    3.3   Analysis: Space Complexity   16
    3.4   Analysis: The Quality of the Estimate   16
    3.5   Optimality   17
    Exercises   17

4   Approximate Counting   19
    4.1   The Problem   19
    4.2   The Algorithm   19
    4.3   The Quality of the Estimate   20
    4.4   The Median-of-Means Improvement   21
    Exercises   22

5   Finding Frequent Items via (Linear) Sketching   23
    5.1   The Problem   23
    5.2   Sketches and Linear Sketches   23
    5.3   CountSketch   24
          5.3.1   The Quality of the Basic Sketch's Estimate   24
          5.3.2   The Final Sketch   25
    5.4   The Count-Min Sketch   26
          5.4.1   The Quality of the Algorithm's Estimate   26
    5.5   Comparison of Frequency Estimation Methods   27
    Exercises   28

6   Estimating Frequency Moments   29
    6.1   Background and Motivation   29
    6.2   The (Basic) AMS Estimator for F_k   29
    6.3   Analysis of the Basic Estimator   30
    6.4   The Final Estimator and Space Bound   32
          6.4.1   The Soft-O Notation   32

7   The Tug-of-War Sketch   33
    7.1   The Basic Sketch   33
          7.1.1   The Quality of the Estimate   34
    7.2   The Final Sketch   34
          7.2.1   A Geometric Interpretation   34
    Exercises   35

8   Estimating Norms Using Stable Distributions   36
    8.1   A Different ℓ2 Algorithm   36
    8.2   Stable Distributions   37
    8.3   The Median of a Distribution and its Estimation   38
    8.4   The Accuracy of the Estimate   38
    8.5   Annoying Technical Details   39

9   Sparse Recovery   40
    9.1   The Problem   40
    9.2   Special Case: 1-sparse Recovery   40
    9.3   Analysis: Correctness, Space, and Time   41
    9.4   General Case: s-sparse Recovery   42
    Exercises   44

10  Weight-Based Sampling   45
    10.1  The Problem   45
    10.2  The ℓ0 Sampling Problem   46
          10.2.1  An Idealized Algorithm   46
          10.2.2  The Quality of the Output   47
    10.3  ℓ2 Sampling   48
          10.3.1  An ℓ2 Sampling Algorithm   48
          10.3.2  Analysis   48

11  Finding the Median   51
    11.1  The Problem   51
    11.2  Preliminaries for an Algorithm   51
    11.3  The Munro–Paterson Algorithm   53
          11.3.1  Computing a Core   53
          11.3.2  Utilizing a Core   53
          11.3.3  Analysis: Pass/Space Tradeoff   53
    Exercises   55

12  Geometric Streams and Coresets   56
    12.1  Extent Measures and Minimum Enclosing Ball   56
    12.2  Coresets and Their Properties   56
    12.3  A Coreset for MEB   57
    12.4  Data Stream Algorithm for Coreset Construction   58
    Exercises   59

13  Metric Streams and Clustering   60
    13.1  Metric Spaces   60
    13.2  The Cost of a Clustering: Summarization Costs   61
    13.3  The Doubling Algorithm   61
    13.4  Metric Costs and Threshold Algorithms   63
    13.5  Guha's Cascading Algorithm   63
          13.5.1  Space Bounds   64
          13.5.2  The Quality of the Summary   64
    Exercises   65

14  Graph Streams: Basic Algorithms   66
    14.1  Streams that Describe Graphs   66
          14.1.1  Semi-Streaming Space Bounds   67
    14.2  The Connectedness Problem   67
    14.3  The Bipartiteness Problem   67
    14.4  Shortest Paths and Distance Estimation via Spanners   68
          14.4.1  The Quality of the Estimate   69
          14.4.2  Space Complexity: High-Girth Graphs and the Size of a Spanner   69
    Exercises   70

15  Finding Maximum Matchings   71
    15.1  Preliminaries   71
    15.2  Maximum Cardinality Matching   71
    15.3  Maximum Weight Matching   72
    Exercises   74

16  Graph Sketching   75
    16.1  The Value of Boundary Edges   75
          16.1.1  Testing Connectivity Using Boundary Edges   75
          16.1.2  Testing Bipartiteness   76
    16.2  The AGM Sketch: Producing a Boundary Edge   77

17  Counting Triangles   79
    17.1  A Sampling-Based Algorithm   79
    17.2  A Sketch-Based Algorithm   80
    Exercises   80

18  Communication Complexity and a First Look at Lower Bounds   81
    18.1  Communication Games, Protocols, Complexity   81
    18.2  Specific Two-Player Communication Games   82
          18.2.1  Definitions   82
          18.2.2  Results and Some Proofs: Deterministic Case   83
          18.2.3  More Proofs: Randomized Case   84
    18.3  Data Streaming Lower Bounds   86
          18.3.1  Lower Bound for Majority   86
          18.3.2  Lower Bound for Frequency Estimation   87
    Exercises   87

19  Further Reductions and Lower Bounds   88
    19.1  The Importance of Randomization   88
    19.2  Multi-Pass Lower Bounds for Randomized Algorithms   89
          19.2.1  Graph Problems   89
          19.2.2  The Importance of Approximation   90
    19.3  Space versus Approximation Quality   90
    Exercises   90
Unit 0

Preliminaries: The Data Stream Model

0.1  The Basic Setup
In this course, we are concerned with algorithms that compute some function of a massively long input stream σ. In the most basic model (which we shall call the vanilla streaming model), this is formalized as a sequence σ = ⟨a1, a2, …, am⟩, where the elements of the sequence (called tokens) are drawn from the universe [n] := {1, 2, …, n}. Note the two important size parameters: the stream length, m, and the universe size, n. If you read the literature in the area, you will notice that some authors interchange these two symbols. In this course, we shall consistently use m and n as we have just defined them.

Our central goal will be to process the input stream using a small amount of space s, i.e., to use s bits of random-access working memory. Since m and n are to be thought of as "huge," we want to make s much smaller than these; specifically, we want s to be sublinear in both m and n. In symbols, we want

s = o(min{m, n}) .   (1)

The holy grail is to achieve

s = O(log m + log n) ,   (2)

because this amount of space is what we need to store a constant number of tokens from the stream and a constant number of counters that can count up to the length of the stream. Sometimes we can only come close, achieving a space bound of the form s = polylog(min{m, n}), where f(n) = polylog(g(n)) means that there exists a constant c > 0 such that f(n) = O((log g(n))^c).

The reason for calling the input a stream is that we are only allowed to access the input in "streaming fashion." That is, we do not have random access to the tokens and we can only scan the sequence in the given order. We do consider algorithms that make p passes over the stream, for some "small" integer p, keeping in mind that the holy grail is to achieve p = 1. As we shall see, in our first few algorithms, we will be able to do quite a bit in just one pass.
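To make the access model concrete, here is a minimal sketch (ours, not from the notes) of the shape a one-pass streaming algorithm takes in code. The trivial "algorithm" shown merely computes the stream length m with a single counter, i.e., O(log m) bits of working state.

```python
def process_stream(stream):
    """One pass over the stream: no random access, only small working state."""
    m = 0  # a single counter: O(log m) bits of memory
    for token in stream:
        m += 1  # a per-token update may touch only the small state
    return m

# The stream is consumed strictly left to right; we never revisit a token.
print(process_stream([3, 1, 4, 1, 5]))  # prints 5
```

Any iterable (including a generator that can be read only once) works as the stream, which mirrors the single-pass restriction.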
0.2  The Quality of an Algorithm's Answer
The function we wish to compute, φ(σ) say, is often real-valued. We shall typically seek to compute only an estimate or approximation of the true value of φ(σ), because many basic functions can provably not be computed exactly using sublinear space. For the same reason, we shall often allow randomized algorithms that may err with some small, but controllable, probability. This motivates the following basic definition.

Definition 0.2.1. Let A(σ) denote the output of a randomized streaming algorithm A on input σ; note that this is a random variable. Let φ be the function that A is supposed to compute. We say that the algorithm (ε, δ)-estimates φ if we have

Pr[ |A(σ)/φ(σ) − 1| > ε ] ≤ δ .

Notice that the above definition insists on a multiplicative approximation. This is sometimes too strong a condition when the value of φ(σ) can be close to, or equal to, zero. Therefore, for some problems, we might instead seek an additive approximation, as defined below.

Definition 0.2.2. In the above setup, the algorithm A is said to (ε, δ)⁺-estimate φ if we have

Pr[ |A(σ) − φ(σ)| > ε ] ≤ δ .
We have mentioned that certain things are provably impossible in sublinear space. Later in the course, we shall study how to prove such impossibility results. Such impossibility results, also called lower bounds, are a rich field of study in their own right.

0.3  Variations of the Basic Setup
Quite often, the function we are interested in computing is some statistical property of the multiset of items in the input stream σ. This multiset can be represented by a frequency vector f = (f1, f2, …, fn), where

f_j = |{i : a_i = j}| = number of occurrences of j in σ .
In other words, σ implicitly defines this vector f, and we are then interested in computing some function of the form Φ(f). While processing the stream, when we scan a token j ∈ [n], the effect is to increment the frequency f_j. Thus, σ can be thought of as a sequence of update instructions, updating the vector f. With this in mind, it is interesting to consider more general updates to f: for instance, what if items could both "arrive" and "depart" from our multiset, i.e., if the frequencies f_j could be both incremented and decremented, and by variable amounts? This leads us to the turnstile model, in which the tokens in σ belong to [n] × {−L, …, L}, interpreted as follows:

Upon receiving token a_i = (j, c), update f_j ← f_j + c .

Naturally, the vector f is assumed to start out at 0. In this generalized model, it is natural to change the role of the parameter m: instead of the stream's length, it will denote the maximum number of items in the multiset at any point of time. More formally, we require that, at all times, we have

‖f‖₁ = |f₁| + · · · + |f_n| ≤ m .
A special case of the turnstile model that is sometimes important to consider is the strict turnstile model, in which we assume that f ≥ 0 at all times. A further special case is the cash register model, where we only allow positive updates: i.e., we require that every update (j, c) have c > 0.
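As a quick illustration, here is a small sketch (function and model names are our own labels for the three models just described) of how a token (j, c) updates the frequency vector:

```python
def apply_update(f, token, model="turnstile"):
    """Apply one stream token (j, c) to the frequency vector f."""
    j, c = token
    if model == "cash register" and c <= 0:
        raise ValueError("cash register model: every update must have c > 0")
    f[j] += c  # the turnstile update rule: f_j <- f_j + c
    if model == "strict turnstile" and f[j] < 0:
        raise ValueError("strict turnstile model: f must stay nonnegative")

f = [0] * 6                  # frequency vector for universe [n] with n = 5 (index 0 unused)
apply_update(f, (2, 3))      # three copies of item 2 arrive
apply_update(f, (2, -1))     # one copy of item 2 departs
print(f[2])                  # prints 2
```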
Unit 1

Finding Frequent Items Deterministically

Our study of data stream algorithms begins with statistical problems: the input stream describes a multiset of items and we would like to be able to estimate certain statistics of the empirical frequency distribution (see Section 0.3). We shall study a number of such problems over the next several units. In this unit, we consider one such problem that admits a particularly simple and deterministic (yet subtle and powerful) algorithm.
1.1  The Problem
We are in the vanilla streaming model. We have a stream σ = ⟨a1, …, am⟩, with each ai ∈ [n], and this implicitly defines a frequency vector f = (f1, …, fn). Note that f1 + · · · + fn = m.

In the MAJORITY problem, our task is as follows. If ∃ j : f_j > m/2, then output j; otherwise, output "⊥". This can be generalized to the FREQUENT problem, with parameter k, as follows: output the set {j : f_j > m/k}.

In this unit, we shall limit ourselves to deterministic algorithms for these problems. If we further limit ourselves to one-pass algorithms, even the simpler problem, MAJORITY, provably requires Ω(min{m, n}) space. However, we shall soon give a one-pass algorithm, the Misra–Gries Algorithm [MG82], that solves the related problem of estimating the frequencies f_j. As we shall see,

1. the properties of Misra–Gries are interesting in and of themselves, and
2. it is easy to extend Misra–Gries, using a second pass, to then solve the FREQUENT problem.

Thus, we now turn to the FREQUENCY ESTIMATION problem. The task is to process σ to produce a data structure that can provide an estimate f̂_a for the frequency f_a of a given token a ∈ [n]. Note that the query a is given to us only after we have processed σ. This problem can also be thought of as querying the frequency vector f at the point a, so it is also known as the POINT QUERY problem.
1.2  Frequency Estimation: The Misra–Gries Algorithm
As with all one-pass data stream algorithms, we shall have an initialization section, executed before we see the stream, a processing section, executed each time we see a token, and an output section, where we answer question(s) about the stream, perhaps in response to a given query.

This algorithm (see Algorithm 1) uses a parameter k that controls the quality of the answers it gives. (Looking ahead: to solve the FREQUENT problem with parameter k, we shall run the Misra–Gries algorithm with parameter k.) It maintains an associative array, A, whose keys are tokens seen in the stream, and whose values are counters associated with these tokens. We keep at most k − 1 counters at any time.
Algorithm 1  The Misra–Gries frequency estimation algorithm

Initialize:
 1: A ← (empty associative array)

Process (token j):
 2: if j ∈ keys(A) then
 3:   A[j] ← A[j] + 1
 4: else if |keys(A)| < k − 1 then
 5:   A[j] ← 1
 6: else
 7:   foreach ℓ ∈ keys(A) do
 8:     A[ℓ] ← A[ℓ] − 1
 9:     if A[ℓ] = 0 then remove ℓ from A

Output (query a):
10: if a ∈ keys(A) then report f̂_a = A[a] else report f̂_a = 0

1.3  Analysis of the Algorithm
To process each token quickly, we could maintain the associative array A using a balanced binary search tree. Each key requires ⌈log n⌉ bits to store and each value requires at most ⌈log m⌉ bits. Since there are at most k − 1 key/value pairs in A at any time, the total space required is O(k(log m + log n)). We now analyze the quality of the algorithm's output.
Pretend that A consists of n key/value pairs, with A[j] = 0 whenever j is not actually stored in A by the algorithm. Consider the increments and decrements to the A[j]s as the stream comes in. For bookkeeping, pretend that upon entering the loop at line 7, A[j] is incremented from 0 to 1, and then immediately decremented back to 0. Further, noting that each counter A[j] corresponds to a set of occurrences of j in the input stream, consider a variant of the algorithm (see Algorithm 2) that explicitly maintains this set. Of course, actually doing so is horribly space-inefficient!

Algorithm 2  Thought-experiment algorithm for analysis of Misra–Gries

Initialize:
 1: B ← (empty associative array)

Process (stream position i, token j):
 2: if j ∈ keys(B) then
 3:   B[j] ← B[j] ∪ {i}
 4: else
 5:   B[j] ← {i}
 6: if |keys(B)| = k then
 7:   foreach ℓ ∈ keys(B) do
 8:     B[ℓ] ← B[ℓ] \ {min(B[ℓ])}    ▷ forget earliest occurrence of ℓ
 9:     if B[ℓ] = ∅ then remove ℓ from B

Output (query a):
10: if a ∈ keys(B) then report f̂_a = |B[a]| else report f̂_a = 0
Notice that after each token is processed, each A[j] is precisely the cardinality of the corresponding set B[j]. The increments to A[j] (including the aforementioned pretend ones) correspond precisely to the occurrences of j in the stream. Thus, f̂_j ≤ f_j. On the other hand, decrements to counters (including the pretend ones) occur in sets of k. To be precise, referring to Algorithm 2, whenever the sets B[ℓ] are shrunk, exactly k of the sets shrink by one element each and the removed
elements are k distinct stream positions. Focusing on a particular item j, each decrement of A[j] is "witnessed" by a collection of k distinct stream positions. Since the stream length is m, there can be at most m/k such decrements. Therefore, f̂_j ≥ f_j − m/k. Putting these together we have the following theorem.

Theorem 1.3.1. The Misra–Gries algorithm with parameter k uses one pass and O(k(log m + log n)) bits of space, and provides, for any token j, an estimate f̂_j satisfying

f_j − m/k ≤ f̂_j ≤ f_j .

1.4  Finding the Frequent Items
Using the Misra–Gries algorithm, we can now easily solve the FREQUENT problem in one additional pass. By the above theorem, if some token j has f j > m/k, then its corresponding counter A[ j] will be positive at the end of the Misra–Gries pass over the stream, i.e., j will be in keys(A). Thus, we can make a second pass over the input stream, counting exactly the frequencies f j for all j ∈ keys(A), and then output the desired set of items. Alternatively, if limited to a single pass, we can solve FREQUENT in an approximate sense: we may end up outputting items that are below (but not too far below) the frequency threshold. We explore this in the exercises.
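The two-pass scheme just described can be sketched directly in code; this is a straightforward translation of Algorithm 1 plus a counting second pass (variable names are ours):

```python
def misra_gries(stream, k):
    """One pass of Algorithm 1: keep at most k - 1 counters."""
    A = {}
    for j in stream:
        if j in A:
            A[j] += 1
        elif len(A) < k - 1:
            A[j] = 1
        else:
            # decrement every stored counter (plus j's "pretend" counter)
            for l in list(A):
                A[l] -= 1
                if A[l] == 0:
                    del A[l]
    return A

def frequent(stream, k):
    """Second pass: among surviving keys, keep items with f_j > m/k."""
    A = misra_gries(stream, k)
    m = len(stream)
    exact = {j: sum(1 for a in stream if a == j) for j in A}
    return {j for j, f in exact.items() if f > m / k}

stream = [1, 1, 1, 2, 3, 1, 1, 2, 4]   # m = 9; item 1 has f_1 = 5 > 9/2
print(frequent(stream, k=2))           # prints {1}
```

By Theorem 1.3.1, every item with f_j > m/k survives the first pass, so the second pass never misses a frequent item.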
Exercises
1-1  Let m̂ be the sum of all counters maintained by the Misra–Gries algorithm after it has processed an input stream, i.e., m̂ = Σ_{ℓ∈keys(A)} A[ℓ]. Prove that the bound in Theorem 1.3.1 can be sharpened to

f_j − (m − m̂)/k ≤ f̂_j ≤ f_j .   (1.1)
1-2  Items that occur with high frequency in a dataset are sometimes called heavy hitters. Accordingly, let us define the HEAVY HITTERS problem, with real parameter ε > 0, as follows. The input is a stream σ. Let m, n, f have their usual meanings. Let

HH_ε(σ) = {j ∈ [n] : f_j ≥ εm}

be the set of ε-heavy hitters in σ. Modify Misra–Gries to obtain a one-pass streaming algorithm that outputs this set "approximately" in the following sense: the set H it outputs should satisfy

HH_ε(σ) ⊆ H ⊆ HH_{ε/2}(σ) .

Your algorithm should use O(ε⁻¹(log m + log n)) bits of space.
1-3  Suppose we have run the (one-pass) Misra–Gries algorithm on two streams σ1 and σ2, thereby obtaining a summary for each stream consisting of k counters. Consider the following algorithm for merging these two summaries to produce a single k-counter summary.

1: Combine the two sets of counters, adding up counts for any common items.
2: If more than k counters remain:
   2.1: c ← value of the (k + 1)-th counter, based on decreasing order of value.
   2.2: Reduce each counter by c and delete all keys with non-positive counters.

Prove that the resulting summary is good for the combined stream σ1 ◦ σ2 (here "◦" denotes concatenation of streams) in the sense that frequency estimates obtained from it satisfy the bounds given in Eq. (1.1).
Unit 2

Estimating the Number of Distinct Elements

We continue with our study of data streaming algorithms for statistical problems. Previously, we focused on identifying items that are particularly dominant in a stream, appearing with especially high frequency. Intuitively, we could solve this in sublinear space because only a few items can be dominant and we can afford to throw away information about non-dominant items. Now, we consider a very different statistic: namely, how many distinct tokens (elements) appear in the stream. This is a measure of how "spread out" the stream is. It is not intuitively clear that we can estimate this quantity well in sublinear space, because we can't afford to ignore rare items. In particular, merely sampling some tokens from the stream will mislead us, since a sample will tend to pick up frequent items rather than rare ones.
2.1  The Problem
As in Unit 1, we are in the vanilla streaming model. We have a stream σ = ⟨a1, …, am⟩, with each ai ∈ [n], and this implicitly defines a frequency vector f = (f1, …, fn). Let

d = |{j : f_j > 0}|

be the number of distinct elements that appear in σ. In the DISTINCT ELEMENTS problem, our task is to output an (ε, δ)-estimate (as in Definition 0.2.1) of d.

It is provably impossible to solve this problem in sublinear space if one is restricted to either deterministic algorithms (i.e., δ = 0) or exact algorithms (i.e., ε = 0). Thus, we shall seek a randomized approximation algorithm. In this unit, we give a simple algorithm for this problem that has interesting, but not optimal, quality guarantees. Despite being suboptimal, it is worth studying because
• it introduces us to two ingredients used in tons of randomized algorithms, namely, universal hashing and the median trick; • it introduces us to probability tail bounds, a basic technique for the analysis of randomized algorithms.
2.2  The Tidemark Algorithm
The idea behind the algorithm is originally due to Flajolet and Martin [FM85]. We give a slightly modified presentation, due to Alon, Matias and Szegedy [AMS99]. Since that paper designs several other algorithms as well (for other problems), it's good to give this particular algorithm a name more evocative than "AMS algorithm." I call it the tidemark algorithm because of how it remembers information about the input stream. Metaphorically speaking, each token has an opportunity to raise the "water level" and the algorithm simply keeps track of the high-water mark, just as a tidemark records the high-water mark left by tidal water.
For an integer p > 0, let zeros(p) denote the number of zeros that the binary representation of p ends with. Formally,

zeros(p) = max{i : 2^i divides p} .
Our algorithm's key ingredient is a 2-universal hash family, a very important concept that will come up repeatedly. If you are unfamiliar with the concept, working through Exercises 2-1 and 2-2 is strongly recommended. Once we have this key ingredient, our algorithm is very simple.

Algorithm 3  The tidemark algorithm for the number of distinct elements

Initialize:
 1: Choose a random hash function h : [n] → [n] from a 2-universal family
 2: z ← 0

Process (token j):
 3: if zeros(h(j)) > z then z ← zeros(h(j))

Output: 2^(z + 1/2)
The basic intuition here is that we expect 1 out of the d distinct tokens to hit zeros(h(j)) ≥ log d, and we don't expect any tokens to hit zeros(h(j)) ≫ log d. Thus, the maximum value of zeros(h(j)) over the stream, which is what we maintain in z, should give us a good approximation to log d. We now analyze this.
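Before the formal analysis, here is an executable sketch of Algorithm 3. For concreteness we draw h from a standard family of the form h(x) = (ax + b) mod p; this choice is ours, and for simplicity h maps into [p] for a fixed prime p rather than exactly into [n].

```python
import random

def zeros(p):
    """zeros(p) = max{i : 2^i divides p}, for p > 0."""
    z = 0
    while p % 2 == 0:
        p //= 2
        z += 1
    return z

def tidemark(stream, p=2**31 - 1):
    """Sketch of Algorithm 3 with h(x) = (a*x + b) mod p, p a Mersenne prime."""
    a = random.randrange(1, p)
    b = random.randrange(p)
    z = 0
    for j in stream:
        v = (a * j + b) % p
        if v != 0 and zeros(v) > z:   # guard: zeros(0) is undefined
            z = zeros(v)
    return 2 ** (z + 0.5)

est = tidemark(range(1, 1001))   # d = 1000 distinct tokens
```

Note that the state consists of just (a, b, z): two hash parameters and the high-water mark.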
2.3  The Quality of the Algorithm's Estimate
Formally, for each j ∈ [n] and each integer r ≥ 0, let X_{r,j} be an indicator random variable for the event "zeros(h(j)) ≥ r," and let Y_r = Σ_{j: f_j>0} X_{r,j}. Let T denote the value of z when the algorithm finishes processing the stream. Clearly,

Y_r > 0  ⟺  T ≥ r .   (2.1)

We can restate the above fact as follows (this will be useful later):

Y_r = 0  ⟺  T ≤ r − 1 .   (2.2)
Since h(j) is uniformly distributed over the (log n)-bit strings, we have

E[X_{r,j}] = P{zeros(h(j)) ≥ r} = P{2^r divides h(j)} = 1/2^r .
We now estimate the expectation and variance of Y_r as follows. The first step of Eq. (2.3) below uses the pairwise independence of the random variables {X_{r,j}}_{j∈[n]}, which follows from the 2-universality of the hash family from which h is drawn.

E[Y_r] = Σ_{j: f_j>0} E[X_{r,j}] = d/2^r ,
Var[Y_r] = Σ_{j: f_j>0} Var[X_{r,j}] ≤ Σ_{j: f_j>0} E[X_{r,j}²] = Σ_{j: f_j>0} E[X_{r,j}] = d/2^r .   (2.3)
Thus, using Markov's and Chebyshev's inequalities respectively, we have

P{Y_r > 0} = P{Y_r ≥ 1} ≤ E[Y_r]/1 = d/2^r , and   (2.4)

P{Y_r = 0} ≤ P{ |Y_r − E[Y_r]| ≥ d/2^r } ≤ Var[Y_r]/(d/2^r)² ≤ 2^r/d .   (2.5)
Let d̂ be the estimate of d that the algorithm outputs; then d̂ = 2^{T + 1/2}. Let a be the smallest integer such that 2^{a + 1/2} ≥ 3d. Using Eqs. (2.1) and (2.4), we have

P{d̂ ≥ 3d} = P{T ≥ a} = P{Y_a > 0} ≤ d/2^a ≤ √2/3 .   (2.6)

Similarly, let b be the largest integer such that 2^{b + 1/2} ≤ d/3. Using Eqs. (2.2) and (2.5), we have

P{d̂ ≤ d/3} = P{T ≤ b} = P{Y_{b+1} = 0} ≤ 2^{b+1}/d ≤ √2/3 .   (2.7)
These guarantees are weak in two ways. Firstly, the estimate d̂ is only of the "same order of magnitude" as d, and is not an arbitrarily good approximation. Secondly, the failure probabilities in Eqs. (2.6) and (2.7) are bounded only by the rather large √2/3 ≈ 47%. Of course, we could make the probabilities smaller by replacing the constant "3" above with a larger constant. But a better idea, one that does not further degrade the quality of the estimate d̂, is to use a standard "median trick" which will come up again and again.

2.4  The Median Trick
Imagine running k copies of this algorithm in parallel, using mutually independent random hash functions, and outputting the median of the k answers. If this median exceeds 3d, then at least k/2 of the individual answers must exceed 3d, whereas we only expect k·√2/3 of them to exceed 3d. By a standard Chernoff bound, this event has probability 2^{−Ω(k)}. Similarly, the probability that the median is below d/3 is also 2^{−Ω(k)}. Choosing k = Θ(log(1/δ)), we can make the sum of these two probabilities work out to at most δ. This gives us an (O(1), δ)-estimate for d. Later, we shall give a different algorithm that will provide an (ε, δ)-estimate with ε → 0.

The original algorithm requires O(log n) bits to store (and compute) a suitable hash function, and O(log log n) more bits to store z. Therefore, the space used by this final algorithm is O(log(1/δ) · log n). When we reattack this problem with a new algorithm, we will also improve this space bound.
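The median trick is fully generic: it needs only a black-box basic estimator whose individual answers land in the right range with probability bounded away from 1/2. A minimal sketch in Python (the helper name and the toy estimator below are ours, purely for illustration):

```python
from statistics import median

def median_trick(basic_estimate, k):
    # Run k independent copies of a basic estimator and return the
    # median of their k answers. Each call to basic_estimate() must use
    # fresh, independent randomness (e.g., a fresh hash function).
    return median(basic_estimate() for _ in range(k))

# Deterministic toy run: if a majority of the k answers lie in the
# desired range, then so does their median, no matter how wild the
# remaining answers are.
answers = iter([1, 100, 100, 100, 5000])
assert median_trick(lambda: next(answers), k=5) == 100
```

Note that averaging the k answers instead would be ruined by a single wildly large answer; the median is immune to any minority of outliers.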
Exercises
These exercises are designed to get you familiar with the very important concept of a 2-universal hash family, as well as to give you constructive examples of such families.

Let X and Y be finite sets and let Y^X denote the set of all functions from X to Y. We will think of these functions as "hash" functions. [The term "hash function" has no formal meaning; strictly speaking, one should say "family of hash functions" or "hash family" as we do here.] A family H ⊆ Y^X is said to be 2-universal if the following property holds, with h ∈_R H picked uniformly at random:

    ∀ x, x′ ∈ X  ∀ y, y′ ∈ Y :  x ≠ x′ ⇒ P_h{ h(x) = y ∧ h(x′) = y′ } = 1/|Y|² .

We shall give two examples of 2-universal hash families from the set X = {0,1}^n to the set Y = {0,1}^k (with k ≤ n).

2-1 Treat the elements of X and Y as column vectors with 0/1 entries. For a matrix A ∈ {0,1}^{k×n} and vector b ∈ {0,1}^k, define the function h_{A,b} : X → Y by h_{A,b}(x) = Ax + b, where all additions and multiplications are performed mod 2. Prove that the family of functions H = {h_{A,b} : A ∈ {0,1}^{k×n}, b ∈ {0,1}^k} is 2-universal.
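The family in Exercise 2-1 is straightforward to realize in code. The sketch below is an illustrative implementation (the names are ours): rows of A and the input x are packed into Python integers, the mod-2 inner product ⟨A_i, x⟩ becomes the parity of a bitwise AND, and adding b mod 2 becomes XOR.

```python
import random

def sample_matrix_hash(n, k, rng=random):
    # Sample h_{A,b}(x) = Ax + b over GF(2): A is a random k-by-n 0/1
    # matrix and b a random k-bit vector, both packed into integers.
    A = [rng.getrandbits(n) for _ in range(k)]
    b = rng.getrandbits(k)

    def h(x):  # x: an n-bit input, encoded as an integer
        y = 0
        for i, row in enumerate(A):
            parity = bin(row & x).count("1") % 2  # <A_i, x> mod 2
            y |= parity << i
        return y ^ b  # adding b mod 2 is bitwise XOR
    return h

h = sample_matrix_hash(n=32, k=8)
assert 0 <= h(0b1011) < 2**8   # outputs are k-bit strings
assert h(0b1011) == h(0b1011)  # a fixed draw of (A, b) is deterministic
```

Storing A and b costs k(n + 1) random bits, which hints at why the field-based family of Exercise 2-2, needing only 2n bits, can be preferable.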
2-2 Identify X with the finite field F_{2^n} using an arbitrary bijection—truly arbitrary: e.g., the bijection need not map the string 0^n to the zero element of F_{2^n}. For elements a, b ∈ X, define the function g_{a,b} : X → Y as follows:

    g_{a,b}(x) = rightmost k bits of f_{a,b}(x) ,  where  f_{a,b}(x) = ax + b ,

with addition and multiplication performed in F_{2^n}.

Prove that the family of functions G = {g_{a,b} : a, b ∈ F_{2^n}} is 2-universal. Is the family G better or worse than H in any sense? Why?
Unit 3

A Better Estimate for Distinct Elements

3.1 The Problem
We revisit the DISTINCT ELEMENTS problem from Unit 2, giving a better solution, in terms of both approximation guarantee and space usage. We also seek good time complexity. Thus, we are again in the vanilla streaming model. We have a stream σ = ⟨a_1, a_2, a_3, …, a_m⟩, with each a_i ∈ [n], and this implicitly defines a frequency vector f = (f_1, …, f_n). Let d = |{ j : f_j > 0 }| be the number of distinct elements that appear in σ. We want an (ε, δ)-approximation (as in Definition 0.2.1) to d.
3.2 The BJKST Algorithm
In this section we present the algorithm dubbed BJKST, after the names of the authors: Bar-Yossef, Jayram, Kumar, Sivakumar and Trevisan [BJK+02]. The original paper in which this algorithm is presented actually gives three algorithms, the third (and, in a sense, "best") of which we are presenting. The "zeros" notation below is the same as in Section 2.2. The values b and c are universal constants that will be determined later, based on the desired guarantees on the algorithm's estimate.

Algorithm 4 The BJKST algorithm for DISTINCT ELEMENTS
Initialize:
 1: Choose a random hash function h : [n] → [n] from a 2-universal family
 2: Choose a random hash function g : [n] → [bε⁻⁴ log² n] from a 2-universal family
 3: z ← 0
 4: B ← ∅
Process (token j):
 5: if zeros(h(j)) ≥ z then
 6:   B ← B ∪ {(g(j), zeros(h(j)))}
 7:   while |B| ≥ c/ε² do
 8:     z ← z + 1
 9:     shrink B by removing all (α, β) with β < z
Output: |B| · 2^z

Intuitively, this algorithm is a refined version of the tidemark algorithm from Section 2.2. This time, rather than simply tracking the maximum value of zeros(h(j)) in the stream, we try to determine the size of the bucket B consisting
of all tokens j with zeros(h(j)) ≥ z. Of the d distinct tokens in the stream, we expect d/2^z to fall into this bucket. Therefore |B| · 2^z should be a good estimate for d.
We want |B| to be small so that we can store enough information (remember, we are trying to save space) to track B accurately. At the same time, we want |B| to be large so that the estimate we produce is accurate enough. It turns out that letting B grow to about Θ(1/ε²) in size is the right tradeoff. Finally, as a space-saving trick, the algorithm does not store the actual tokens in B but only their hash values under g, together with the value of zeros(h(j)) that is needed to remove the appropriate elements from B when B must be shrunk. We now analyze the algorithm in detail.
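Before the formal analysis, a compact rendition of Algorithm 4 in code may help fix ideas. This sketch deliberately skips the final space-saving trick (it stores tokens in B directly rather than their g-hashes), and the (a·j + b) mod p hash and the constant c = 576 are illustrative stand-ins, not prescribed choices.

```python
import random

class BJKST:
    def __init__(self, n, eps, seed=0):
        rng = random.Random(seed)
        # (a*j + b) mod p stands in for a 2-universal hash; p is a
        # prime much larger than n.
        self.p = 2**61 - 1
        self.a = rng.randrange(1, self.p)
        self.b = rng.randrange(self.p)
        self.cap = int(576 / eps**2)  # the threshold c / eps^2
        self.z = 0
        self.B = {}                   # token -> zeros(h(token))

    def _zeros(self, j):
        x = (self.a * j + self.b) % self.p
        z = 0
        while x % 2 == 0 and z < 61:  # count trailing zero bits
            x //= 2
            z += 1
        return z

    def process(self, j):
        zj = self._zeros(j)
        if zj >= self.z:
            self.B[j] = zj
            while len(self.B) >= self.cap:  # bucket overflow: raise z
                self.z += 1
                self.B = {t: v for t, v in self.B.items() if v >= self.z}

    def estimate(self):
        return len(self.B) * 2**self.z

sk = BJKST(n=10**6, eps=1.0, seed=7)
for token in range(100):     # a stream with 100 distinct tokens
    sk.process(token)
assert sk.estimate() == 100  # exact here: d < c / eps^2 keeps z at 0
```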
3.3 Analysis: Space Complexity
We assume that 1/ε² = o(m): otherwise, there is no point to this algorithm! The algorithm has to store h, g, z, and B. Clearly, h and B dominate the space requirement. Using the finite-field-arithmetic hash family from Exercise 2-2 for our hash functions, we see that h requires O(log n) bits of storage. The bucket B has its size capped at O(1/ε²). Each tuple (α, β) in the bucket requires log(bε⁻⁴ log² n) = O(log(1/ε) + log log n) bits to store the hash value α, which dominates the ⌈log log n⌉ bits required to store the number of zeros β. Overall, this leads to a space requirement of O(log n + (1/ε²)(log(1/ε) + log log n)).
3.4 Analysis: The Quality of the Estimate
The entire analysis proceeds under the assumption that storing hash values (under g) in B, instead of the tokens themselves, does not change B. This is true whenever g does not have collisions on the set of tokens to which it is applied. By choosing the constant b large enough, we can ensure that the probability of a collision is at most 1/6, for each choice of h (you are asked to flesh this out in Exercise 3-1). Thus, making this assumption adds at most 1/6 to the error probability. We now analyze the rest of the error, under this no-collision assumption.

The basic setup is the same as in Section 2.3. For each j ∈ [n] and each integer r ≥ 0, let X_{r,j} be an indicator random variable for the event "zeros(h(j)) ≥ r," and let Y_r = ∑_{j: f_j > 0} X_{r,j}. Let T denote the value of z when the algorithm finishes processing the stream, and let d̂ denote the estimate output by the algorithm. Then Y_T equals the size of B when the algorithm finishes, and d̂ = Y_T · 2^T.
Proceeding as in Section 2.3, we obtain

    ∀ r :  E Y_r = d/2^r ;  Var Y_r ≤ d/2^r .    (3.1)
Notice that if T = 0, then the algorithm never incremented z, which means that d < c/ε² and d̂ = |B| = d. In short, the algorithm computes d exactly in this case. Otherwise (T ≥ 1), we say that a FAIL event occurs if d̂ is not a (1 ± ε)-approximation to d. That is,

    FAIL ⇐⇒ |Y_T · 2^T − d| ≥ εd ⇐⇒ |Y_T − d/2^T| ≥ εd/2^T .

We can estimate this probability by summing over all possible values r ∈ {1, 2, …, log n} of T. For the small values of r, a failure will be unlikely when T = r, because failure requires a large deviation of Y_r from its mean. For the large values of r, simply having T = r is unlikely. This is the intuition for splitting the summation into two parts below. We need to choose the threshold that separates "small" values of r from "large" ones, and we do it as follows. Let s be the unique integer such that

    12/ε² ≤ d/2^s < 24/ε² .    (3.2)
Then we calculate

    P(FAIL) = ∑_{r=1}^{log n} P{ |Y_r − d/2^r| ≥ εd/2^r ∧ T = r }
            ≤ ∑_{r=1}^{s−1} P{ |Y_r − d/2^r| ≥ εd/2^r } + ∑_{r=s}^{log n} P{T = r}
            = ∑_{r=1}^{s−1} P{ |Y_r − E Y_r| ≥ εd/2^r } + P{T ≥ s}            ▹ by eq. (3.1)
            = ∑_{r=1}^{s−1} P{ |Y_r − E Y_r| ≥ εd/2^r } + P{Y_{s−1} ≥ c/ε²} .    (3.3)

Bounding the terms in (3.3) using Chebyshev's inequality and Markov's inequality, respectively, we continue:

    P(FAIL) ≤ ∑_{r=1}^{s−1} Var Y_r / (εd/2^r)² + E Y_{s−1} / (c/ε²)
            ≤ ∑_{r=1}^{s−1} 2^r/(ε²d) + ε²d/(c·2^{s−1})            ▹ by eq. (3.1)
            ≤ 2^s/(ε²d) + (2ε²/c)·(d/2^s)
            ≤ (1/ε²)·(ε²/12) + (2ε²/c)·(24/ε²)            ▹ by eq. (3.2)
            ≤ 1/6 ,    (3.4)
where the final bound is achieved by choosing a large enough constant c.
Recalling that we had started with a no-collision assumption for g, the final probability of error is at most 1/6 + 1/6 = 1/3. Thus, the above algorithm (ε, 1/3)-approximates d. As before, by using the median trick, we can improve this to an (ε, δ)-approximation for any 0 < δ ≤ 1/3, at a cost of an O(log(1/δ))-factor increase in the space usage.
3.5 Optimality
This algorithm is very close to optimal in its space usage. Later in this course, when we study lower bounds, we shall show both an Ω(log n) and an Ω(ε⁻²) bound on the space required by an algorithm that (ε, 1/3)-approximates the number of distinct elements. The small gap between these lower bounds and the above upper bound was subsequently closed by Kane, Nelson, and Woodruff [KNW10]: using considerably more advanced ideas, they achieved a space bound of O(ε⁻² + log n).
Exercises
3-1 Let H ⊆ Y^X be a 2-universal hash family, with |Y| = cM², for some constant c > 0. Suppose we use a random function h ∈_R H to hash a stream σ of elements of X, and suppose that σ contains at most M distinct elements. Prove that the probability of a collision (i.e., the event that two distinct elements of σ hash to the same location) is at most 1/(2c).

3-2 Recall that we said in class that the buffer B in the BJKST algorithm for DISTINCT ELEMENTS can be implemented cleverly by not directly storing the elements of the input stream in B, but instead storing the hash values of these elements under a secondary hash function whose range is of size cM², for a suitable M.
Using the above result, flesh out the details of this clever implementation. (One important detail that you must describe: how do you implement the buffer-shrinking step?) Plug in c = 3, for a target collision probability bound of 1/(2c) = 1/6, and figure out what M should be. Compute the resulting upper bound on the space usage of the algorithm. It should work out to

    O( log n + (1/ε²)·(log(1/ε) + log log n) ) .
Unit 4

Approximate Counting
While observing a data stream, we would like to know how many tokens we have seen. Just maintain a counter and increment it upon seeing a token: how simple! For a stream of length m, the counter would use O(log m) space which we said (in eq. 2) was the holy grail. Now suppose, just for this one unit, that even this amount of space is too much and we would like to solve this problem using o(log m) space: maybe m is really huge or maybe we have to maintain so many counters that we want each one to use less than the trivial amount of space. Since the quantity we wish to maintain—the number of tokens seen—has m + 1 possible values, maintaining it exactly necessarily requires ≥ log(m + 1) bits. Therefore, our solution will have to be approximate.
4.1 The Problem
We are, of course, in the vanilla streaming model. In fact, the identities of the tokens are now immaterial. Therefore, we can think of the input as a stream (1, 1, 1, . . .). Our goal is to maintain an approximate count of n, the number of tokens so far, using o(log m) space, where m is some promised upper bound on n. Note that we’re overriding our usual meaning of n: we shall do so for just this unit.
4.2 The Algorithm
The idea behind the algorithm is originally due to Morris [Sr.78], so the approximate counter provided by it is called a "Morris counter." We now present it in a more general form, using modern notation.

Algorithm 5 The Morris counter
Initialize:
 1: x ← 0
Process (token):
 2: with probability 2^{−x} do x ← x + 1
Output: 2^x − 1
As stated, this algorithm requires ⌈log m⌉ bits of space after all, because the counter x could grow up to m. However, as we shall soon see, x is extremely unlikely to grow beyond 2 log m (say), so we can stay within a space bound of ⌈log log m⌉ + O(1) by aborting the algorithm if x does grow larger than that.
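Algorithm 5 is only a few lines in code. The following Python sketch (with a floating-point coin of bias 2^{−x} standing in for an exact one) can be used to check empirically the unbiasedness claim proved in the next section.

```python
import random

def morris_count(stream_length, rng=random):
    # Morris counter: increment x with probability 2^{-x} per token;
    # report 2^x - 1 as the estimate of the number of tokens seen.
    x = 0
    for _ in range(stream_length):
        if rng.random() < 2.0 ** (-x):
            x += 1
    return 2 ** x - 1

assert morris_count(0) == 0  # no tokens: x stays 0
assert morris_count(1) == 1  # the first increment happens with probability 1

# E[estimate] = n, so the empirical mean should hover around 100.
random.seed(0)
mean = sum(morris_count(100) for _ in range(10_000)) / 10_000
assert 80 < mean < 120
```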
4.3 The Quality of the Estimate
The interesting part of the analysis is understanding the quality of the algorithm's estimate. Let C_n denote the (random) value of 2^x after n tokens have been processed. Notice that n̂, the algorithm's estimate for n, equals C_n − 1. We shall prove that E C_n = n + 1, which shows that n̂ is an unbiased estimator for n. This by itself is not enough: we would like to prove a concentration result stating that n̂ is unlikely to be too far from n. Let's see how well we can do.
To aid the analysis, it will help to rewrite Algorithm 5 in the numerically equivalent (though space-inefficient) way shown below.

Algorithm 6 Thought-experiment algorithm for analysis of the Morris counter
Initialize:
 1: c ← 1
Process (token):
 2: with probability 1/c do c ← 2c
Output: c − 1
Clearly, the variable c in this version of the algorithm is just 2^x in Algorithm 5, so C_n equals c after n tokens. Here is the key lemma in the analysis.

Lemma 4.3.1. For all n ≥ 0, E C_n = n + 1 and Var C_n = n(n − 1)/2.
Proof. Let Z_i be an indicator random variable for the event that c is increased in line 2 upon processing the (i+1)th token. Then Z_i ∼ Bern(1/C_i) and C_{i+1} = (1 + Z_i)C_i. Using the law of total expectation and the simple formula for the expectation of a Bernoulli random variable,

    E C_{i+1} = E[ E[(1 + Z_i)C_i | C_i] ] = E[ (1 + 1/C_i)·C_i ] = 1 + E C_i .

Since E C_0 = 1, this gives us E C_n = n + 1 for all n. Similarly,

    E C_{i+1}² = E[ E[(1 + 2Z_i + Z_i²)C_i² | C_i] ]
              = E[ E[(1 + 3Z_i)C_i² | C_i] ]            ▹ since Z_i² = Z_i
              = E[ (1 + 3/C_i)·C_i² ]
              = E C_i² + 3(i + 1) .            ▹ by our above formula for E C_i

Since E C_0² = 1, this gives us E C_n² = 1 + ∑_{i=1}^n 3i = 1 + 3n(n + 1)/2. Therefore,

    Var C_n = E C_n² − (E C_n)² = 1 + 3n(n + 1)/2 − (n + 1)² = n(n − 1)/2 ,

upon simplification.
Unfortunately, this variance is too large for us to directly apply the median trick introduced in Section 2.4. (Exercise: try it out! See what happens if you try to bound P{n̂ < n/100} using Chebyshev's inequality and Lemma 4.3.1.) However, we can still get something good out of Algorithm 5, in two different ways.

• We could tune a parameter in the algorithm, leading to a lower variance at the cost of higher space complexity. This technique, specific to this algorithm, is explored further in the exercises.
• We could plug the algorithm into a powerful, widely-applicable template that we describe below.
4.4 The Median-of-Means Improvement
As noted above, we have an estimator (n̂, in this case) for our quantity of interest (n, in this case) that is unbiased but with variance so large that we are unable to get an (ε, δ)-estimate (as in Definition 0.2.1). This situation arises often. A generic way to deal with it is to take our original algorithm (Algorithm 5, in this case) to be a basic estimator and then build from it a final estimator that runs several independent copies of the basic estimator in parallel and combines their outputs. The key idea is to first bring the variance down by averaging a number of independent copies of the basic estimator, and then apply the median trick.

Lemma 4.4.1. There is a universal positive constant c such that the following holds. Let the random variable X be an unbiased estimator for a real quantity Q. Let {X_{ij}}_{i∈[t], j∈[k]} be a collection of independent random variables with each X_{ij} distributed identically to X, where

    t = c log(1/δ) ,  and  k = 3 Var X / (ε² (E X)²) .

Let Z = median_{i∈[t]} (1/k) ∑_{j=1}^k X_{ij}. Then we have P{|Z − Q| ≥ εQ} ≤ δ, i.e., Z is an (ε, δ)-estimate for Q.

Thus, if an algorithm can produce X using s bits of space, then there is an (ε, δ)-estimation algorithm using

    O( s · (Var X / (E X)²) · (1/ε²) log(1/δ) )

bits of space.
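In code, the lemma's combiner sits on top of any basic estimator. The sketch below is a generic illustration: sample() stands for one fresh, independent run of the basic estimator, and the concrete t, k, and toy distribution are our own choices for demonstration.

```python
import random
from statistics import median

def median_of_means(sample, t, k):
    # Average k independent copies within each of t groups (cutting the
    # variance by a factor of k), then take the median of the t group
    # means (cutting the failure probability exponentially in t).
    return median(sum(sample() for _ in range(k)) / k for _ in range(t))

# Toy basic estimator: unbiased for Q = 10, but with variance 900.
rng = random.Random(1)
def sample():
    return 100.0 if rng.random() < 0.1 else 0.0

# With eps = 1/2: k = 3 Var X / (eps^2 (E X)^2) = 3 * 900 / 25 = 108.
z = median_of_means(sample, t=30, k=108)
assert abs(z - 10) <= 5  # |Z - Q| >= eps*Q fails with prob. 2^{-Omega(t)}
```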
Proof. For each i ∈ [t], let Y_i = k⁻¹ ∑_{j=1}^k X_{ij}. Then, by linearity of expectation, we have E Y_i = Q. Since the variables X_{ij} are (at least) pairwise independent, we have

    Var Y_i = (1/k²) ∑_{j=1}^k Var X_{ij} = Var X / k .

Applying Chebyshev's inequality, we obtain

    P{ |Y_i − Q| ≥ εQ } ≤ Var Y_i / (εQ)² = Var X / (k ε² (E X)²) = 1/3 .
Now an application of a Chernoff bound (exactly as in the median trick from Section 2.4) tells us that for an appropriate choice of c, we have P{|Z − Q| ≥ εQ} ≤ δ.

Applying the above lemma to the basic estimator given by Algorithm 5 gives us the following result.

Theorem 4.4.2. For a stream of length at most m, the problem of approximately counting the number of tokens admits an (ε, δ)-estimation in O(log log m · ε⁻² log δ⁻¹) space.

Proof. If we run the basic estimator as is (according to Algorithm 5), using s bits of space, the analysis in Lemma 4.3.1 together with the median-of-means improvement (Lemma 4.4.1) gives us a final (ε, δ)-estimator using O(s ε⁻² log δ⁻¹) space. Now suppose we tweak the basic estimator to ensure s = O(log log m) by aborting if the stored variable x exceeds 2 log m. An abort would imply that C_n ≥ m² ≥ n². By Lemma 4.3.1 and Markov's inequality,

    P{C_n ≥ n²} ≤ E C_n / n² = (n + 1)/n² .

Then, by a simple union bound, the probability that any one of the Θ(ε⁻² log δ⁻¹) parallel runs of the basic estimator aborts is at most o(1). Thus, a final estimator based on this tweaked basic estimator produces an (ε, δ + o(1))-estimate within the desired space bound.
Exercises

4-1 Here is a different way to improve the accuracy of the basic estimator in Algorithm 5. Observe that the given algorithm roughly tracks the logarithm (to base 2) of the stream length. Let us change the base from 2 to 1 + β instead, where β is a small positive value, to be determined. Update the pseudocode using this idea, ensuring that the output value is still an unbiased estimator for n.
Analyze the variance of this new estimator and show that, for a suitable setting of β, it directly provides an (ε, δ)-estimate, using only log log m + O(log ε⁻¹ + log δ⁻¹) bits of space.

4-2 Show how to further generalize the version of the Morris counter given by the previous exercise to solve a more general version of the approximate counting problem, where the stream tokens are positive integers and a token j is to be interpreted as "add j to the counter." As usual, the counter starts at zero. Provide pseudocode and rigorously analyze the algorithm's output quality and space complexity.
4-3 Instead of using a median of means to improve the accuracy of an estimator, what if we use a mean of medians? Will it work just as well, or at all?
Unit 5

Finding Frequent Items via (Linear) Sketching

5.1 The Problem
We return to the FREQUENT problem that we studied in Unit 1: given a parameter k, we seek the set of tokens with frequency > m/k. The Misra–Gries algorithm, in a single pass, gave us enough information to solve FREQUENT with a second pass: namely, in one pass it computed a data structure which could be queried at any token j ∈ [n] to obtain a sufficiently accurate estimate fˆj to its frequency f j . We shall now give two other onepass algorithms for this same problem, that we can call FREQUENCY ESTIMATION.
5.2 Sketches and Linear Sketches
Let MG(σ) denote the data structure computed by Misra–Gries upon processing the stream σ. In Exercise 1-3, we saw a procedure for combining two instances of this data structure that would let us space-efficiently compute MG(σ₁ ◦ σ₂) from MG(σ₁) and MG(σ₂), where "◦" denotes concatenation of streams. Clearly, it would be desirable to be able to combine two data structures in this way, and when it can be done, such a data structure is called a sketch.

Definition 5.2.1. A data structure DS(σ) computed in streaming fashion by processing a stream σ is called a sketch if there is a space-efficient combining algorithm COMB such that, for every two streams σ₁ and σ₂, we have

    COMB(DS(σ₁), DS(σ₂)) = DS(σ₁ ◦ σ₂) .
However, the Misra–Gries algorithm has the drawback that it does not seem to extend to the turnstile (or even strict turnstile) model. In this unit, we shall design two different solutions to the FREQUENT problem that do generalize to turnstile streams. Each algorithm computes a sketch of the input stream in the above sense, but these sketches have an additional important property that we now explain.

Since algorithms for FREQUENT are computing functions of the frequency vector f(σ) determined by σ, their sketches will naturally be functions of f(σ). It turns out that for the two algorithms in this unit, the sketches will be linear functions. That's special enough to call out in another definition.

Definition 5.2.2. A sketching algorithm "sk" is called a linear sketch if, for each stream σ over a token universe [n], sk(σ) takes values in a vector space of dimension ℓ = ℓ(n), and sk(σ) is a linear function of f(σ). In this case, ℓ is called the dimension of the linear sketch.

Notice that the combining algorithm for linear sketches is to simply add the sketches (in the appropriate vector space). A data stream algorithm based on a linear sketch naturally generalizes from the vanilla to the turnstile model. If the arrival of a token j in the vanilla model causes us to add a vector v_j to the sketch, then an update (j, c) in the turnstile model is handled by adding c·v_j to the sketch: this handles both cases c ≥ 0 and c < 0.
We can make things more explicit. Put f = f(σ) ∈ R^n and y = sk(σ) ∈ R^ℓ, where "sk" is a linear sketch. Upon choosing a basis for R^n and one for R^ℓ, we can express the algorithm "sk" as left multiplication by a suitable sketch matrix S ∈ R^{ℓ×n}: i.e., y = S f. The vector v_j defined above is simply the jth column of S. Importantly, a sketching algorithm should not be storing S: that would defeat the purpose, which is to save space! Instead, the algorithm should be performing this multiplication implicitly.
All of the above will be easier to grasp upon seeing the concrete examples in this unit, so let us get on with it.

5.3 CountSketch
We now describe the first of our sketching algorithms, called CountSketch, which was introduced by Charikar, Chen and Farach-Colton [CCFC04]. We start with a basic sketch that already has most of the required ideas in it. This sketch takes an accuracy parameter ε which should be thought of as small and positive.

Algorithm 7 CountSketch: basic estimator
Initialize:
 1: C[1…k] ← 0⃗, where k := 3/ε²
 2: Choose a random hash function h : [n] → [k] from a 2-universal family
 3: Choose a random hash function g : [n] → {−1, 1} from a 2-universal family
Process (token (j, c)):
 4: C[h(j)] ← C[h(j)] + c·g(j)
Output (query a):
 5: report f̂_a = g(a)·C[h(a)]
The sketch computed by this algorithm is the array of counters C, which can be thought of as a vector in Z^k. Note that for two such sketches to be combinable, they must be based on the same hash functions h and g.
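Here is a sketch of the basic estimator in code; the (a·j + b) mod p hashes are illustrative stand-ins for draws from 2-universal families, and the class name is ours.

```python
import random

class BasicCountSketch:
    def __init__(self, eps, seed=0):
        rng = random.Random(seed)
        self.k = int(3 / eps**2)          # number of counters
        self.p = 2**61 - 1                # a large prime modulus
        self.ha, self.hb = rng.randrange(1, self.p), rng.randrange(self.p)
        self.ga, self.gb = rng.randrange(1, self.p), rng.randrange(self.p)
        self.C = [0] * self.k

    def _h(self, j):                      # bucket in [k]
        return ((self.ha * j + self.hb) % self.p) % self.k

    def _g(self, j):                      # random sign in {-1, +1}
        return 1 - 2 * (((self.ga * j + self.gb) % self.p) % 2)

    def update(self, j, c=1):             # turnstile token (j, c)
        self.C[self._h(j)] += c * self._g(j)

    def query(self, a):
        return self._g(a) * self.C[self._h(a)]

cs = BasicCountSketch(eps=0.5, seed=3)    # k = 12 counters
cs.update(7, 1000)
assert cs.query(7) == 1000                # a lone item is recovered exactly
cs.update(3, 5)
assert abs(cs.query(7) - 1000) <= 5       # a collision perturbs by at most 5
```

Combining two such sketches built with the same h and g is just coordinate-wise addition of the C arrays, which is exactly the linearity promised in Section 5.2.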
5.3.1 The Quality of the Basic Sketch's Estimate
Fix an arbitrary token a and consider the output X = f̂_a on query a. For each token j ∈ [n], let Y_j be the indicator for the event "h(j) = h(a)". Examining the algorithm's workings, we see that a token j contributes to the counter C[h(a)] iff h(j) = h(a), and the amount of the contribution is its frequency f_j times the random sign g(j). Thus,

    X = g(a) ∑_{j=1}^n f_j g(j) Y_j = f_a + ∑_{j∈[n]\{a}} f_j g(a) g(j) Y_j .
Since g and h are independent and g is drawn from a 2-universal family, for each j ≠ a, we have

    E[g(a) g(j) Y_j] = E g(a) · E g(j) · E Y_j = 0 · 0 · E Y_j = 0 .    (5.1)

Therefore, by linearity of expectation, we have

    E X = f_a + ∑_{j∈[n]\{a}} f_j E[g(a) g(j) Y_j] = f_a .    (5.2)

Thus, the output X = f̂_a is an unbiased estimator for the desired frequency f_a.

We still need to show that X is unlikely to deviate too much from its mean. For this, we analyze its variance. By 2-universality of the family from which h is drawn, we see that for each j ∈ [n] \ {a}, we have

    E Y_j² = E Y_j = P{h(j) = h(a)} = 1/k .    (5.3)
Next, we use 2-universality of the family from which g is drawn, and the independence of g and h, to conclude that for all i, j ∈ [n] with i ≠ j, we have

    E[g(i) g(j) Y_i Y_j] = E g(i) · E g(j) · E[Y_i Y_j] = 0 · 0 · E[Y_i Y_j] = 0 .    (5.4)
Thus, we calculate

    Var X = 0 + Var[ g(a) ∑_{j∈[n]\{a}} f_j g(j) Y_j ]
          = E[ g(a)² ∑_{j∈[n]\{a}} f_j² Y_j² + g(a)² ∑_{i,j∈[n]\{a}, i≠j} f_i f_j g(i) g(j) Y_i Y_j ] − ( ∑_{j∈[n]\{a}} f_j E[g(a) g(j) Y_j] )²
          = ∑_{j∈[n]\{a}} f_j² / k + 0 − 0            ▹ using g(a)² = 1, (5.3), (5.4), and (5.1)
          = ( ‖f‖_2² − f_a² ) / k ,    (5.5)

where f = f(σ) is the frequency distribution determined by σ. From (5.2) and (5.5), using Chebyshev's inequality, we obtain

    P{ |f̂_a − f_a| ≥ ε √(‖f‖_2² − f_a²) } = P{ |X − E X| ≥ ε √(‖f‖_2² − f_a²) } ≤ Var X / ( ε² (‖f‖_2² − f_a²) ) = 1/(k ε²) = 1/3 .

For j ∈ [n], let us define f_{−j} to be the (n − 1)-dimensional vector obtained by dropping the jth entry of f. Then ‖f_{−j}‖_2² = ‖f‖_2² − f_j². Therefore, we can rewrite the above statement in the following more memorable form:

    P{ |f̂_a − f_a| ≥ ε ‖f_{−a}‖_2 } ≤ 1/3 .    (5.6)

5.3.2 The Final Sketch
The sketch that is commonly referred to as "CountSketch" is in fact the sketch obtained by applying the median trick (see Section 2.4) to the above basic sketch, bringing its probability of error down to δ, for a given small δ > 0. Thus, the CountSketch can be visualized as a two-dimensional array of counters, with each token in the stream causing several counter updates. For the sake of completeness, we spell out this final algorithm in full below. As in Section 2.4, a standard Chernoff bound argument proves that this estimate f̂_a satisfies
    P{ |f̂_a − f_a| ≥ ε ‖f_{−a}‖_2 } ≤ δ .    (5.7)
With a suitable choice of hash family, we can store the hash functions above in O(t log n) space. Each of the tk counters in the sketch uses O(log m) space. This gives us an overall space bound of O(t log n + tk log m), which is

    O( (1/ε²) log(1/δ) · (log m + log n) ) .
Algorithm 8 CountSketch: final estimator
Initialize:
 1: C[1…t][1…k] ← 0⃗, where k := 3/ε² and t := O(log(1/δ))
 2: Choose t independent hash functions h₁, …, h_t : [n] → [k], each from a 2-universal family
 3: Choose t independent hash functions g₁, …, g_t : [n] → {−1, 1}, each from a 2-universal family
Process (token (j, c)):
 4: for i ← 1 to t do
 5:   C[i][h_i(j)] ← C[i][h_i(j)] + c·g_i(j)
Output (query a):
 6: report f̂_a = median_{1≤i≤t} g_i(a)·C[i][h_i(a)]
5.4 The CountMin Sketch

Another solution to FREQUENCY ESTIMATION is the so-called "CountMin Sketch", which was introduced by Cormode and Muthukrishnan [CM05]. As with the CountSketch, this sketch too takes an accuracy parameter ε and an error probability parameter δ. And as before, the sketch consists of a two-dimensional t × k array of counters, which are updated in a very similar manner, based on hash functions. The values of t and k are set, based on ε and δ, as shown below.

Algorithm 9 CountMin Sketch
Initialize:
 1: C[1…t][1…k] ← 0⃗, where k := 2/ε and t := ⌈log(1/δ)⌉
 2: Choose t independent hash functions h₁, …, h_t : [n] → [k], each from a 2-universal family
Process (token (j, c)):
 3: for i ← 1 to t do
 4:   C[i][h_i(j)] ← C[i][h_i(j)] + c
Output (query a):
 5: report f̂_a = min_{1≤i≤t} C[i][h_i(a)]
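The update and query steps above translate directly into code. The sketch below is an illustrative implementation for the cash-register model, again with (a·j + b) mod p hashes standing in for 2-universal draws.

```python
import math
import random

class CountMinSketch:
    def __init__(self, eps, delta, seed=0):
        rng = random.Random(seed)
        self.k = math.ceil(2 / eps)
        self.t = math.ceil(math.log2(1 / delta))
        self.p = 2**61 - 1
        self.hs = [(rng.randrange(1, self.p), rng.randrange(self.p))
                   for _ in range(self.t)]
        self.C = [[0] * self.k for _ in range(self.t)]

    def _h(self, i, j):
        a, b = self.hs[i]
        return ((a * j + b) % self.p) % self.k

    def update(self, j, c=1):
        for i in range(self.t):           # one counter update per row
            self.C[i][self._h(i, j)] += c

    def query(self, a):
        return min(self.C[i][self._h(i, a)] for i in range(self.t))

cms = CountMinSketch(eps=0.1, delta=0.01, seed=0)
for _ in range(10):
    cms.update(5)
cms.update(9, 3)
assert cms.query(5) >= 10  # never underestimates on cash-register streams
assert cms.query(5) <= 13  # any excess is bounded by the rest of the stream
```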
Note how much simpler this algorithm is, as compared to CountSketch! Also, note that its space usage is

    O( (1/ε) log(1/δ) · (log m + log n) ) ,

which is better than that of CountSketch by a 1/ε factor. The place where the CountMin Sketch is weaker is in its approximation guarantee, which we now analyze.
5.4.1 The Quality of the Algorithm's Estimate
We focus on the case when each token (j, c) in the stream satisfies c > 0, i.e., the cash register model. Clearly, in this case, every counter C[i][h_i(a)] corresponding to a token a is an overestimate of f_a. Thus, we always have f_a ≤ f̂_a, where f̂_a is the estimate of f_a output by the algorithm. For a fixed a, we now analyze the excess in one such counter, say in C[i][h_i(a)]. Let the random variable X_i denote this excess. For j ∈ [n] \ {a}, let Y_{i,j} be the indicator of the event "h_i(j) = h_i(a)". Notice that j makes a contribution to
the counter iff Y_{i,j} = 1, and when it does contribute, it causes f_j to be added to this counter. Thus,

    X_i = ∑_{j∈[n]\{a}} f_j Y_{i,j} .

By 2-universality of the family from which h_i is drawn, we compute that E Y_{i,j} = 1/k. Thus, by linearity of expectation,

    E X_i = ∑_{j∈[n]\{a}} f_j/k = ( ‖f‖_1 − f_a )/k = ‖f_{−a}‖_1 / k .

Since each f_j ≥ 0, we have X_i ≥ 0, and we can apply Markov's inequality to get

    P{ X_i ≥ ε ‖f_{−a}‖_1 } ≤ ‖f_{−a}‖_1 / ( k ε ‖f_{−a}‖_1 ) = 1/2 ,

by our choice of k.
The above probability is for one counter. We have t such counters, mutually independent. The excess in the output, f̂_a − f_a, is the minimum of the excesses X_i, over all i ∈ [t]. Thus,

    P{ f̂_a − f_a ≥ ε ‖f_{−a}‖_1 } = P{ min{X₁, …, X_t} ≥ ε ‖f_{−a}‖_1 }
                                 = P{ ⋀_{i=1}^t ( X_i ≥ ε ‖f_{−a}‖_1 ) }
                                 = ∏_{i=1}^t P{ X_i ≥ ε ‖f_{−a}‖_1 }
                                 ≤ 1/2^t ,

and using our choice of t, this probability is at most δ. Thus, we have shown that, with high probability,

    f_a ≤ f̂_a ≤ f_a + ε ‖f_{−a}‖_1 ,

where the left inequality always holds, and the right inequality fails with probability at most δ.
The reason this estimate is weaker than that of CountSketch is that its deviation is bounded by ε‖f_{−a}‖_1, rather than ε‖f_{−a}‖_2. For all vectors z ∈ R^n, we have ‖z‖_1 ≥ ‖z‖_2. The inequality is tight when z has a single nonzero entry. It is at its weakest when all entries of z are equal in absolute value: the two norms are then off by a factor of √n from each other. Thus, the quality of the estimate of CountSketch gets better (in comparison to the CountMin Sketch) as the stream's frequency vector gets more "spread out".
5.5 Comparison of Frequency Estimation Methods
At this point, we have studied three methods to estimate frequencies of tokens in a stream. The following table throws in a fourth method, and compares these methods by summarizing their key features.

    Method           | f̂_a − f_a ∈ ···                       | Space, O(·)                       | Error Probability  | Model
    Misra–Gries      | [ −ε‖f_{−a}‖_1 , 0 ]                  | (1/ε)(log m + log n)              | 0 (deterministic)  | Cash register
    CountSketch      | [ −ε‖f_{−a}‖_2 , ε‖f_{−a}‖_2 ]        | (1/ε²) log(1/δ) · (log m + log n) | δ (overall)        | Turnstile
    CountMin Sketch  | [ 0 , ε‖f_{−a}‖_1 ]                   | (1/ε) log(1/δ) · (log m + log n)  | δ (upper bound)    | Cash register
    Count/Median     | [ −ε‖f_{−a}‖_1 , ε‖f_{−a}‖_1 ]        | (1/ε) log(1/δ) · (log m + log n)  | δ (overall)        | Turnstile
The claims in the first row can be proved by analyzing the Misra–Gries algorithm from Lecture 1 slightly differently. The last row refers to an algorithm that maintains the same data structure as the CountMin Sketch, but answers queries by reporting the median of the values of the relevant counters, rather than the minimum. It is a simple (and instructive) exercise to analyze this algorithm and prove the claims in the last row.
Exercises

5-1 Prove that for every integer n ≥ 1 and every vector z ∈ R^n, we have ‖z‖_1/√n ≤ ‖z‖_2 ≤ ‖z‖_1. For both inequalities, determine when equality holds.
5-2 Write out the Count/Median algorithm formally and prove that it satisfies the properties claimed in the last row of the above table.
5-3 Our estimate of ε‖f_{−a}‖_2 on the absolute error of CountSketch is too pessimistic in many practical situations where the data are highly skewed, i.e., where most of the weight of the vector f is supported on a small "constant" number of elements. To make this precise, we define f_{−a}^{res(ℓ)} to be the (n − 1)-dimensional vector obtained by dropping the ath entry of f and then setting the ℓ largest (by absolute value) entries to zero. Now consider the CountSketch estimate f̂_a as computed by the algorithm in Section 5.3.2, with the only change being that k is set to 6/ε². Prove that

    P{ |f̂_a − f_a| ≥ ε ‖f_{−a}^{res(ℓ)}‖_2 } ≤ δ ,

where ℓ = 1/ε².
5-4 Consider a stream σ in the turnstile model, defining a frequency vector f ≥ 0. The CountMin Sketch solves the problem of estimating f_j, given j, but does not directly give us a quick way to identify, e.g., the set of elements with frequency greater than some threshold. Fix this. In greater detail: Let α be a constant with 0 < α < 1. We would like to maintain a suitable summary of the stream (some enhanced version of CountMin Sketch, perhaps?) so that we can, on demand, quickly produce a set S ⊆ [n] satisfying the following properties w.h.p.: (1) S contains every j such that f_j ≥ αF_1; (2) S does not contain any j such that f_j < (α − ε)F_1. Here, F_1 = F_1(σ) = ‖f‖_1. Design a data stream algorithm that achieves this. Your space usage, as well as the time taken to process each token and to produce the set S, should be polynomial in the usual parameters, log m, log n, and 1/ε, and may depend arbitrarily on α.
Unit 6
Estimating Frequency Moments

6.1 Background and Motivation
We are in the vanilla streaming model. We have a stream σ = ⟨a_1, ..., a_m⟩, with each a_j ∈ [n], and this implicitly defines a frequency vector f = f(σ) = (f_1, ..., f_n). Note that f_1 + ··· + f_n = m. The kth frequency moment of the stream, denoted F_k(σ) or simply F_k, is defined as follows:

F_k := ∑_{j=1}^n f_j^k = ‖f‖_k^k .   (6.1)

Using the terminology "kth" suggests that k is a positive integer. But in fact the definition above makes sense for every real k > 0. And we can even give it a meaning for k = 0, if we first rewrite the definition of F_k slightly: F_k = ∑_{j: f_j > 0} f_j^k. With this definition, we get

F_0 = ∑_{j: f_j > 0} f_j^0 = |{j : f_j > 0}| ,

which is the number of distinct tokens in σ. (We could have arrived at the same result by sticking with the original definition of F_k and adopting the convention 0⁰ = 0.)

We have seen that F_0 can be (ε, δ)-approximated using space logarithmic in m and n. And F_1 = m is trivial to compute exactly. Can we say anything for general F_k? We shall investigate this problem in this and the next few lectures.

By way of motivation, note that F_2 represents the size of the self join r ⋈ r, where r is a relation in a database, with f_j denoting the frequency of the value j of the join attribute. Imagine that we are in a situation where r is a huge relation and n, the size of the domain of the join attribute, is also huge; the tuples can only be accessed in streaming fashion (or perhaps it is much cheaper to access them this way than to use random access). Can we, with one pass over the relation r, compute a good estimate of the self join size? Estimation of join sizes is a crucial step in database query optimization.

The solution we shall eventually see for this F_2 estimation problem will in fact allow us to estimate arbitrary equijoin sizes (not just self joins). For now, though, we give an (ε, δ)-approximation for arbitrary F_k, provided k ≥ 2, using space sublinear in m and n. The algorithm we present is due to Alon, Matias and Szegedy [AMS99], and is not the best algorithm for the problem, though it is the easiest to understand and analyze.
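The definitions above are easy to pin down with a brute-force computation. The following sketch (my own illustration; it stores the whole frequency vector, which a streaming algorithm cannot afford) computes F_k directly, with F_0 falling out as the distinct-element count.

```python
from collections import Counter

def frequency_moment(stream, k):
    """Brute-force F_k = sum of f_j^k over tokens j that actually appear.

    Summing only over {j : f_j > 0} gives F_0 = number of distinct
    tokens, since f^0 = 1 for every f >= 1.
    """
    freq = Counter(stream)           # the frequency vector, sparsely stored
    return sum(f ** k for f in freq.values())

stream = [1, 2, 1, 3, 1, 2]          # f = (3, 2, 1), m = 6
assert frequency_moment(stream, 1) == len(stream)        # F_1 = m
assert frequency_moment(stream, 0) == 3                  # distinct tokens
assert frequency_moment(stream, 2) == 3**2 + 2**2 + 1**2
```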
6.2 The (Basic) AMS Estimator for F_k
We first describe a surprisingly simple basic estimator that gets the answer right in expectation, i.e., it is an unbiased estimator. Eventually, we shall run many independent copies of this basic estimator in parallel and combine the results to get our final estimator, which will have good error guarantees.
The estimator works as follows. Pick a token from the stream σ uniformly at random, i.e., pick a position J ∈_R [m]. Count the length, m, of the stream and the number, r, of occurrences of our picked token a_J in the stream from that point on: r = |{j ≥ J : a_j = a_J}|. The basic estimator is then defined to be m(r^k − (r − 1)^k).

The catch is that we don't know m beforehand, and picking a token uniformly at random requires a little cleverness, as seen in the pseudocode below. This clever way of picking a uniformly random stream element is called reservoir sampling.

Algorithm 10 AMS basic estimator for frequency moments
Initialize:
 1: (m, r, a) ← (0, 0, 0)
Process (token j):
 2: m ← m + 1
 3: with probability 1/m do
 4:   a ← j
 5:   r ← 0
 6: if j = a then
 7:   r ← r + 1
Output: m(r^k − (r − 1)^k)
This algorithm uses O(log m) bits to store m and r, plus ⌈log n⌉ bits to store the token a, for a total space usage of O(log m + log n). Although stated for the vanilla streaming model, it has a natural generalization to the cash register model. It is a good homework exercise to figure this out.
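For concreteness, here is a minimal Python rendering of Algorithm 10. The function name and the use of Python's random module are my own; the reservoir-sampling logic is exactly lines 2–7 of the pseudocode.

```python
import random

def ams_basic_estimate(stream, k, rng=random):
    """One run of the AMS basic estimator (Algorithm 10).

    Reservoir sampling keeps a uniformly random token a; r counts its
    occurrences from the sampled position onward.
    """
    m, r, a = 0, 0, None
    for j in stream:
        m += 1
        if rng.random() < 1.0 / m:      # replace the sample with prob. 1/m
            a, r = j, 0
        if j == a:
            r += 1
    return m * (r ** k - (r - 1) ** k)

# Unbiasedness check: the average of many runs approaches F_k.
rng = random.Random(42)
stream = [1] * 10 + [2] * 10            # F_2 = 10**2 + 10**2 = 200
avg = sum(ams_basic_estimate(stream, 2, rng) for _ in range(20_000)) / 20_000
assert 185 < avg < 215                  # many standard deviations of slack
```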
6.3 Analysis of the Basic Estimator
First of all, let us agree that the algorithm does indeed compute r as described above. For the analysis, it will be convenient to think of the algorithm as picking a random token from σ in two steps, as follows.

1. Pick a random token value, a ∈ [n], with P{a = j} = f_j/m for each j ∈ [n].
2. Pick one of the f_a occurrences of a in σ uniformly at random.
Let A and R denote the (random) values of a and r after the algorithm has processed σ, and let X denote its output. Taking the above viewpoint, let us condition on the event A = j, for some particular j ∈ [n]. Under this condition, R is equally likely to be any of the values {1, ..., f_j}, depending on which of the f_j occurrences of j was picked by the algorithm. Therefore,

E[X | A = j] = E[m(R^k − (R − 1)^k) | A = j] = ∑_{i=1}^{f_j} (1/f_j) · m(i^k − (i − 1)^k) = (m/f_j)(f_j^k − 0^k) .

By the law of total expectation,

E X = ∑_{j=1}^n P{A = j} E[X | A = j] = ∑_{j=1}^n (f_j/m) · (m/f_j) · f_j^k = F_k .

This shows that X is indeed an unbiased estimator for F_k. We shall now bound Var X from above. Calculating the expectation as before, we have

Var X ≤ E X² = ∑_{j=1}^n (f_j/m) ∑_{i=1}^{f_j} (1/f_j) · m²(i^k − (i − 1)^k)² = m ∑_{j=1}^n ∑_{i=1}^{f_j} (i^k − (i − 1)^k)² .   (6.2)
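The two-step sampling viewpoint also makes E X easy to verify mechanically. The check below (my own illustration, using exact rational arithmetic) enumerates every outcome (A = j, R = i) with its probability and confirms that the weighted sum of m(i^k − (i − 1)^k) telescopes to exactly F_k.

```python
from collections import Counter
from fractions import Fraction

def exact_expectation(stream, k):
    """E[X] for the basic estimator, by enumerating the two-step sampling.

    P{A = j} = f_j / m and, given A = j, R is uniform on {1, ..., f_j};
    the inner sum telescopes to f_j^k, so the total is F_k.
    """
    freq = Counter(stream)
    m = len(stream)
    total = Fraction(0)
    for fj in freq.values():
        for i in range(1, fj + 1):
            prob = Fraction(fj, m) * Fraction(1, fj)     # P{A = j, R = i}
            total += prob * m * (i ** k - (i - 1) ** k)
    return total

stream = [1, 1, 1, 2, 2, 3]
for k in (2, 3, 4):
    assert exact_expectation(stream, k) == sum(f ** k for f in Counter(stream).values())
```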
By the mean value theorem (from basic calculus), for all x ≥ 1, there exists ξ(x) ∈ [x − 1, x] such that

x^k − (x − 1)^k = k·ξ(x)^{k−1} ≤ k·x^{k−1} ,

where the last step uses k ≥ 1. Using this bound in (6.2), we get

Var X ≤ m ∑_{j=1}^n ∑_{i=1}^{f_j} k·i^{k−1} (i^k − (i − 1)^k)
      ≤ m ∑_{j=1}^n k·f_j^{k−1} ∑_{i=1}^{f_j} (i^k − (i − 1)^k)
      = m ∑_{j=1}^n k·f_j^{k−1} f_j^k
      = k F_1 F_{2k−1} .   (6.3)
For reasons we shall soon see, it will be convenient to bound Var X by a multiple of (E X)², i.e., F_k². To do so, we shall use the following lemma.

Lemma 6.3.1. Let n > 0 be an integer and let x_1, ..., x_n ≥ 0 and k ≥ 1 be reals. Then

(∑ x_i)(∑ x_i^{2k−1}) ≤ n^{1−1/k} (∑ x_i^k)² ,

where all the summations range over i ∈ [n].

Proof. We continue to use the convention that summations range over i ∈ [n]. Let x* = max_{i∈[n]} x_i. Then, we have

x*^{k−1} = (x*^k)^{(k−1)/k} ≤ (∑ x_i^k)^{(k−1)/k} .   (6.4)

Since k ≥ 1, by the power mean inequality (or directly, by the convexity of the function x ↦ x^k), we have

(1/n) ∑ x_i ≤ ((1/n) ∑ x_i^k)^{1/k} .   (6.5)

Using (6.4) and (6.5) in the second and third steps (respectively) below, we compute

(∑ x_i)(∑ x_i^{2k−1}) ≤ (∑ x_i) · x*^{k−1} (∑ x_i^k)
                      ≤ (∑ x_i) (∑ x_i^k)^{(k−1)/k} (∑ x_i^k)
                      ≤ n^{1−1/k} (∑ x_i^k)^{1/k} (∑ x_i^k)^{(k−1)/k} (∑ x_i^k)
                      = n^{1−1/k} (∑ x_i^k)² ,

which completes the proof.
Using the above lemma in (6.3), with x_j = f_j, we get

Var X ≤ k F_1 F_{2k−1} ≤ k n^{1−1/k} F_k² .   (6.6)
6.4 The Final Estimator and Space Bound

Our final F_k estimator, which gives us good accuracy guarantees, is obtained by combining several independent basic estimators and using the median-of-means improvement (Lemma 4.4.1 in Section 4.4). By that lemma, we can obtain an (ε, δ)-estimator for F_k by combining O(t ε⁻² log δ⁻¹) independent copies of the basic estimator X, where

t = Var X / (E X)² ≤ k n^{1−1/k} F_k² / F_k² = k n^{1−1/k} .

As noted above, the space used to compute each copy of X, using Algorithm 10, is O(log m + log n), leading to a final space bound of

O((1/ε²) log(1/δ) · k n^{1−1/k} (log m + log n)) = Õ(ε⁻² n^{1−1/k}) .   (6.7)

6.4.1 The Soft-O Notation
For the first time in these notes, we have a space bound that is sublinear, but not polylogarithmic, in n (or m). In such cases it is convenient to adopt an Õ-notation, also known as a "soft-O" notation, which suppresses factors polynomial in log m, log n, log ε⁻¹, and log δ⁻¹. We have adopted this notation in eq. (6.7), leading to the memorable form Õ(ε⁻² n^{1−1/k}). Note that we are also treating k as a constant here.

The above bound is good, but not optimal, as we shall soon see. The optimal bound (up to polylogarithmic factors) is Õ(ε⁻² n^{1−2/k}) instead; there are known lower bounds of Ω(n^{1−2/k}) and Ω(ε⁻²). We shall see how to achieve this better upper bound in a subsequent unit.
Unit 7
The Tug-of-War Sketch

At this point, we have seen a sublinear-space algorithm — the AMS estimator — for estimating the kth frequency moment, F_k = f_1^k + ··· + f_n^k, of a stream σ. This algorithm works for k ≥ 2, and its space usage depends on n as Õ(n^{1−1/k}). This fails to be polylogarithmic even in the important case k = 2, which we used as our motivating example when introducing frequency moments in the previous lecture. Also, the algorithm does not produce a sketch in the sense of Section 5.2. But Alon, Matias and Szegedy [AMS99] also gave an amazing algorithm that does produce a sketch—a linear sketch of merely logarithmic size—which allows one to estimate F_2. What is amazing about the algorithm is that it seems to do almost nothing.
7.1 The Basic Sketch
We describe the algorithm in the turnstile model.
Algorithm 11 Tug-of-War Sketch for F_2
Initialize:
 1: Choose a random hash function h : [n] → {−1, 1} from a 4-universal family
 2: z ← 0
Process (token (j, c)):
 3: z ← z + c·h(j)
Output: z²
The sketch is simply the random variable z. It is pulled in the positive direction by those tokens j that have h(j) = 1 and is pulled in the negative direction by the rest of the tokens; hence the name Tug-of-War Sketch. Clearly, the absolute value of z never exceeds f_1 + ··· + f_n = m, so it takes O(log m) bits to store this sketch. It also takes O(log n) bits to store the hash function h, for an appropriate 4-universal family.
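A minimal Python rendering of Algorithm 11 follows. For simplicity it draws a fully independent random sign for each token value, which is stronger than the 4-wise independence the analysis needs (a 4-universal hash family would keep the random bits small, as discussed above); the class and method names are my own.

```python
import random

class TugOfWarSketch:
    """Basic Tug-of-War Sketch (Algorithm 11) under turnstile updates.

    The signs h(j) are fully independent here, a simplification of the
    4-universal family the space analysis relies on.
    """
    def __init__(self, rng=random):
        self.rng = rng
        self.sign = {}                  # lazily sampled h(j) in {-1, +1}
        self.z = 0

    def process(self, j, c=1):          # token (j, c): z <- z + c * h(j)
        if j not in self.sign:
            self.sign[j] = self.rng.choice((-1, 1))
        self.z += c * self.sign[j]

    def estimate(self):                 # output z^2, unbiased for F_2
        return self.z ** 2

# Unbiasedness check: average many independent sketches of one stream.
rng = random.Random(7)
stream = [(1, 2), (2, 1), (3, 1)]       # f = (2, 1, 1), so F_2 = 6
avg = 0.0
for _ in range(5000):
    sk = TugOfWarSketch(rng)
    for j, c in stream:
        sk.process(j, c)
    avg += sk.estimate() / 5000
assert 5.0 < avg < 7.0
```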
7.1.1 The Quality of the Estimate
Let Z denote the value of z after the algorithm has processed σ. For convenience, define Y_j = h(j) for each j ∈ [n]. Then Z = ∑_{j=1}^n f_j Y_j. Note that Y_j² = 1 and E Y_j = 0, for each j. Therefore,

E Z² = E[ ∑_{j=1}^n f_j² Y_j² + ∑_{i=1}^n ∑_{j≠i} f_i f_j Y_i Y_j ] = ∑_{j=1}^n f_j² + ∑_{i=1}^n ∑_{j≠i} f_i f_j · E Y_i · E Y_j = F_2 ,

where we used the fact that {Y_j}_{j∈[n]} are pairwise independent (in fact, they are 4-wise independent, because h was picked from a 4-universal family). This shows that the algorithm's output, Z², is indeed an unbiased estimator for F_2.

The variance of the estimator is Var Z² = E Z⁴ − (E Z²)² = E Z⁴ − F_2². We bound this as follows. By linearity of expectation, we have

E Z⁴ = ∑_{i=1}^n ∑_{j=1}^n ∑_{k=1}^n ∑_{ℓ=1}^n f_i f_j f_k f_ℓ E[Y_i Y_j Y_k Y_ℓ] .
Suppose one of the indices in (i, j, k, ℓ) appears exactly once in that 4-tuple. Without loss of generality, we have i ∉ {j, k, ℓ}. By 4-wise independence, we then have E[Y_i Y_j Y_k Y_ℓ] = E Y_i · E[Y_j Y_k Y_ℓ] = 0, because E Y_i = 0. It follows that the only potentially nonzero terms in the above sum correspond to those 4-tuples (i, j, k, ℓ) that consist either of one index occurring four times, or else two distinct indices occurring twice each. Therefore we have

E Z⁴ = ∑_{j=1}^n f_j⁴ E Y_j⁴ + 6 ∑_{i=1}^n ∑_{j=i+1}^n f_i² f_j² E[Y_i² Y_j²] = F_4 + 6 ∑_{i=1}^n ∑_{j=i+1}^n f_i² f_j² ,

where the coefficient "6" corresponds to the (4 choose 2) = 6 permutations of (i, i, j, j) with i ≠ j. Thus,

Var Z² = F_4 − F_2² + 6 ∑_{i=1}^n ∑_{j=i+1}^n f_i² f_j²
       = F_4 − F_2² + 3[ (∑_{j=1}^n f_j²)² − ∑_{j=1}^n f_j⁴ ]
       = F_4 − F_2² + 3(F_2² − F_4)
       ≤ 2F_2² .
7.2 The Final Sketch
As before, having bounded the variance, we can design a final sketch from the above basic sketch by a median-of-means improvement. By Lemma 4.4.1, this will blow up the space usage by a factor of

(Var Z² / (E Z²)²) · O(ε⁻² log δ⁻¹) ≤ (2F_2²/F_2²) · O(ε⁻² log δ⁻¹) = O(ε⁻² log δ⁻¹)

in order to give an (ε, δ)-estimate. Thus, we have estimated F_2 using space O(ε⁻² log(δ⁻¹)(log m + log n)), with a sketching algorithm that in fact computes a linear sketch.
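The median-of-means combination invoked here is mechanical and worth seeing once in code. The helper below is my own sketch; the grouping constants (how many estimates per group, how many groups) come from Lemma 4.4.1, which I take as given.

```python
from statistics import median

def median_of_means(estimates, num_groups):
    """Combine i.i.d. unbiased estimates: mean within groups, median across.

    Chebyshev makes each group mean accurate with constant probability;
    a Chernoff bound on the median then drives the failure probability
    down exponentially in num_groups.
    """
    size = len(estimates) // num_groups      # assume an even split
    means = [sum(estimates[g * size:(g + 1) * size]) / size
             for g in range(num_groups)]
    return median(means)

# One wildly corrupted estimate barely moves the median of means.
data = [10.0] * 29 + [10_000.0]
assert median_of_means(data, 10) == 10.0
```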
7.2.1 A Geometric Interpretation
The AMS Tug-of-War Sketch has a nice geometric interpretation. Consider a final sketch that consists of t independent copies of the basic sketch. Let M ∈ ℝ^{t×n} be the matrix that "transforms" the frequency vector f into the t-dimensional sketch vector y. Note that M is not a fixed matrix but a random matrix with ±1 entries: it is drawn from a certain distribution described implicitly by the hash family. Specifically, if M_{ij} denotes the (i, j)-entry of M, then M_{ij} = h_i(j), where h_i is the hash function used by the ith basic sketch.
Let t = 6/ε². By stopping the analysis in Lemma 4.4.1 after the Chebyshev step (and before the "median trick" Chernoff step), we obtain that

P{ |(1/t) ∑_{i=1}^t y_i² − F_2| ≥ εF_2 } ≤ 1/3 ,
where the probability is taken with respect to the above distribution of M, resulting in a random sketch vector y = (y_1, ..., y_t). Thus, with probability at least 2/3, we have

‖(1/√t) M f‖_2 = (1/√t) ‖y‖_2 ∈ [√(1−ε) ‖f‖_2, √(1+ε) ‖f‖_2] ⊆ [(1−ε) ‖f‖_2, (1+ε) ‖f‖_2] .   (7.1)
This can be interpreted as follows. The (random) matrix M/√t performs a dimension reduction, simplifying an n-dimensional vector f to a t-dimensional sketch y—with t = O(1/ε²)—while preserving ℓ_2 norm within a (1 ± ε) factor. Of course, this is only guaranteed to happen with probability at least 2/3. But clearly this correctness probability can be boosted to an arbitrary constant less than 1, while keeping t = O(1/ε²).

The "amazing" AMS sketch now feels quite natural, under this geometric interpretation. We are using dimension reduction to maintain a low-dimensional image of the frequency vector. This image, by design, has the property that its ℓ_2 length approximates that of the frequency vector very well. Which of course is what we're after, because the second frequency moment, F_2, is just the square of the ℓ_2 length. Since the sketch is linear, we now also have an algorithm to estimate the ℓ_2 difference ‖f(σ) − f(σ′)‖_2 between two streams σ and σ′.
Exercises
7-1 In Section 6.1, we noted that F_2 represents the size of a self join in a relational database and remarked that our F_2 estimation algorithm would allow us to estimate arbitrary equijoin sizes (not just self joins). Justify this by designing a sketch that can scan a relation in one streaming pass such that, based on the sketches of two different relations, we can estimate the size of their join. Explain how to compute the estimate. Recall that for two relations (i.e., tables in a database) r(A, B) and s(A, C), with a common attribute (i.e., column) A, we define the join r ⋈ s to be a relation consisting of all tuples (a, b, c) such that (a, b) ∈ r and (a, c) ∈ s. Therefore, if f_{r,j} and f_{s,j} denote the frequencies of j in the first columns (i.e., "A"-columns) of r and s, respectively, and j can take values in [n], then the size of the join is ∑_{j=1}^n f_{r,j} f_{s,j}.

7-2 As noted in Section 7.2.1, the t × n matrix that realizes the tug-of-war sketch has every entry in {−1/√t, 1/√t}: in particular, every entry is nonzero. Therefore, in a streaming setting, updating the sketch in response to a token arrival takes Θ(t) time, under the reasonable assumption that the processor can perform arithmetic on Θ(log n)-bit integers in constant time. Consider the matrix P ∈ ℝ^{t×n} given by

P_{ij} = g(j), if i = h(j); and P_{ij} = 0, otherwise,

where hash functions g : [n] → {−1, 1} and h : [n] → [t] are drawn from a k_1-universal and a k_2-universal family, respectively. Show that using P as a sketch matrix (for some choice of constants k_1 and k_2) leads to dimension reduction guarantees similar to eq. (7.1). What is the per-token update time achievable using P as the sketch matrix?
Unit 8
Estimating Norms Using Stable Distributions

As noted at the end of Unit 7, the AMS Tug-of-War sketch allows us to estimate the ℓ_2 difference between two data streams. Estimating similarity metrics between streams is an important class of problems, so it is nice to have such a clean solution for this specific metric. However, this raises a burning question: Can we do the same for other ℓ_p norms, especially the ℓ_1 norm? The ℓ_1 difference between two streams can be interpreted (modulo appropriate scaling) as the total variation distance (a.k.a. statistical distance) between two probability distributions: a fundamental and important metric. Unfortunately, although our log-space F_2 algorithm automatically gave us a log-space ℓ_2 algorithm, the trivial log-space F_1 algorithm works only in the cash register model and does not give an ℓ_1 algorithm at all.

It turns out that thinking harder about the geometric interpretation of the AMS Tug-of-War Sketch leads us on a path to polylogarithmic-space ℓ_p norm estimation algorithms, for all p ∈ (0, 2]. Such algorithms were given by Indyk [Ind06], and we shall study them now. For the first time in this course, it will be necessary to gloss over several technical details of the algorithms, so as to have a clear picture of the important ideas.
8.1 A Different ℓ_2 Algorithm
The length-preserving dimension reduction achieved by the Tug-of-War Sketch is reminiscent of the famous Johnson–Lindenstrauss Lemma [JL84, FM88]. One high-level way of stating the JL Lemma is that the random linear map given by a t × n matrix whose entries are independently drawn from the standard normal distribution N(0, 1) is length-preserving (up to a scaling factor) with high probability. To achieve 1 ± ε error, it suffices to take t = O(1/ε²). Let us call such a matrix a JL Sketch matrix. Notice that the sketch matrix for the Tug-of-War sketch is a very similar object, except that

1. its entries are uniformly distributed in {−1, 1}: a much simpler distribution;
2. its entries do not have to be fully independent: 4-wise independence in each row suffices; and

3. it has a succinct description: it suffices to describe the hash functions that generate the rows.

The above properties make the Tug-of-War Sketch "data stream friendly". But as a thought experiment one can consider an algorithm that uses a JL Sketch matrix instead. It would give a correct algorithm for ℓ_2 estimation, except that its space usage would be very large, as we would have to store the entire sketch matrix. In fact, since this hypothetical algorithm calls for arithmetic with real numbers, it is unimplementable as stated. Nevertheless, this algorithm has something to teach us, and will generalize to give (admittedly unimplementable) ℓ_p algorithms for each p ∈ (0, 2]. Later we shall make these algorithms realistic and space-efficient.

For now, we consider the basic sketch version of this algorithm, i.e., we maintain just one entry of M f, where M is a JL Sketch matrix. The pseudocode below shows the operations involved.
Algorithm 12 Sketch for ℓ_2 based on normal distribution
Initialize:
 1: Choose Y_1, ..., Y_n independently, each from N(0, 1)
 2: z ← 0
Process (token (j, c)):
 3: z ← z + c·Y_j
Output: z²
Let Z denote the value of z when this algorithm finishes processing σ. Then Z = ∑_{j=1}^n f_j Y_j. From basic statistics, using the independence of the collection {Y_j}_{j∈[n]}, we know that Z has the same distribution as ‖f‖_2·Y, where Y ∼ N(0, 1). This is a fundamental property of the normal distribution.¹ Therefore, we have E Z² = ‖f‖_2² = F_2, which gives us our unbiased estimator for F_2.

8.2 Stable Distributions
The fundamental property of the normal distribution that was used above has a generalization, which is the key to generalizing this algorithm. The next definition captures the general property.

Definition 8.2.1. Let p > 0 be a real number. A probability distribution D_p over the reals is said to be p-stable if for all integers n ≥ 1 and all c = (c_1, ..., c_n) ∈ ℝⁿ, the following property holds. If X_1, ..., X_n are independent and each X_i ∼ D_p, then c_1 X_1 + ··· + c_n X_n has the same distribution as c̄X, where X ∼ D_p and

c̄ = (c_1^p + ··· + c_n^p)^{1/p} = ‖c‖_p .
The concept of stable distributions dates back to Lévy [Lév54] and is more general than what we need here. It is known that p-stable distributions exist for all p ∈ (0, 2], and do not exist for any p > 2. The fundamental property above can be stated simply as: "The standard normal distribution is 2-stable." Another important example of a stable distribution is the Cauchy distribution, which can be shown to be 1-stable. Just as the standard normal distribution has density function

φ(x) = (1/√(2π)) e^{−x²/2} ,

the Cauchy distribution also has a density function expressible in closed form as

c(x) = 1 / (π(1 + x²)) .

But what is really important to us is not so much that the density function of D_p be expressible in closed form, but that it be easy to generate random samples drawn from D_p. The Chambers–Mallows–Stuck method [CMS76] gives us the following simple algorithm. Let

X = (sin(pθ) / (cos θ)^{1/p}) · (cos((1 − p)θ) / ln(1/r))^{(1−p)/p} ,

where (θ, r) ∈_R [−π/2, π/2] × [0, 1]. Then the distribution of X is p-stable.
Replacing N(0, 1) with D_p in the above pseudocode, where D_p is p-stable, allows us to generate a random variable distributed according to D_p "scaled" by ‖f‖_p. Note that the scaling factor ‖f‖_p is the quantity we want to estimate. To estimate it, we shall simply take the median of a number of samples from the scaled distribution, i.e., we shall maintain a sketch consisting of several copies of the basic sketch and output the median of (the absolute values of) the entries of the sketch vector. Here is our final "idealized" sketch.

¹The proof of this fact is a nice exercise in calculus.
Algorithm 13 Indyk's sketch for ℓ_p estimation
Initialize:
 1: M[1...t][1...n] ← tn independent samples from D_p, where t = O(ε⁻² log(δ⁻¹))
 2: z[1...t] ← 0⃗
Process (token (j, c)):
 3: for i = 1 to t do
 4:   z[i] ← z[i] + c·M[i][j]
Output: median_{1≤i≤t} |z_i| / median(|D_p|)
8.3 The Median of a Distribution and its Estimation

To analyze this algorithm, we need the concept of the median of a probability distribution over the reals. Let D be an absolutely continuous distribution, let φ be its density function, and let X ∼ D. A median of D is a real number µ that satisfies

1/2 = P{X ≤ µ} = ∫_{−∞}^{µ} φ(x) dx .

The distributions that we are concerned with here are nice enough to have uniquely defined medians; we will simply speak of the median of a distribution. For such a distribution D, we will denote this unique median as median(D).

For a distribution D, with density function φ, we denote by |D| the distribution of the absolute value of a random variable drawn from D. It is easy to show that the density function of |D| is ψ, where

ψ(x) = 2φ(x), if x ≥ 0; and ψ(x) = 0, if x < 0 .

For p ∈ (0, 2] and c ∈ ℝ, let φ_{p,c} denote the density function of the distribution of |cX|, where X ∼ D_p, and let µ_{p,c} denote the median of this distribution. Note that

φ_{p,c}(x) = (1/c) φ_{p,1}(x/c) , and µ_{p,c} = c·µ_{p,1} .

Let Z_i denote the final value of z_i after Algorithm 13 has processed σ. By the earlier discussion, and the definition of p-stability, we see that Z_i ≡ ‖f‖_p Z, where Z ∼ D_p. Therefore, |Z_i|/median(|D_p|) has a distribution whose density function is φ_{p,λ}, where λ = ‖f‖_p/median(|D_p|) = ‖f‖_p/µ_{p,1}. Thus, the median of this distribution is µ_{p,λ} = λ·µ_{p,1} = ‖f‖_p. The algorithm—which seeks to estimate ‖f‖_p—can thus be seen as attempting to estimate the median of an appropriate distribution by drawing t = O(ε⁻² log δ⁻¹) samples from it and outputting the sample median. We now show that this does give a fairly accurate estimate.
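Putting Algorithm 13 and the median discussion together for p = 1: the median of |D_1| (the absolute value of a standard Cauchy) is exactly 1, so the output is simply the sample median of the |z_i|. The toy implementation below is my own instantiation; it stores the Cauchy matrix explicitly and ignores the precision and derandomization issues taken up in Section 8.5.

```python
import math
import random
from statistics import median

class CauchyL1Sketch:
    """Toy instance of Algorithm 13 for p = 1, with an explicit Cauchy matrix.

    Since median(|D_1|) = 1, the estimate is the median of the |z_i|
    with no rescaling.
    """
    def __init__(self, n, t, rng=random):
        # tan of a uniform angle in (-pi/2, pi/2) is standard Cauchy.
        self.M = [[math.tan(rng.uniform(-math.pi / 2, math.pi / 2))
                   for _ in range(n)] for _ in range(t)]
        self.z = [0.0] * t

    def process(self, j, c):                  # token (j, c): f_j <- f_j + c
        for i in range(len(self.z)):
            self.z[i] += c * self.M[i][j]

    def estimate(self):
        return median(abs(zi) for zi in self.z)

rng = random.Random(1)
sk = CauchyL1Sketch(n=50, t=2000, rng=rng)
for j, c in [(0, 4), (1, -3), (2, 2), (1, 1)]:    # f = (4, -2, 2, 0, ...), so l1 norm = 8
    sk.process(j, c)
assert 6.0 < sk.estimate() < 10.0                 # close to 8 w.h.p. at t = 2000
```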
8.4 The Accuracy of the Estimate

Lemma 8.4.1. Let ε > 0, and let D be a distribution over ℝ with density function φ, and with a unique median µ > 0. Suppose that φ is absolutely continuous on [(1−ε)µ, (1+ε)µ] and let φ* = min{φ(z) : z ∈ [(1−ε)µ, (1+ε)µ]}. Let Y = median_{1≤i≤t} Z_i, where Z_1, ..., Z_t are independent samples from D. Then

P{ |Y − µ| ≥ εµ } ≤ 2 exp(−(2/3) ε² µ² φ*² t) .

Proof. We bound P{Y < (1−ε)µ} from above. A similar argument bounds P{Y > (1+ε)µ} and to complete the proof we just add the two bounds.
Let Φ(y) = ∫_{−∞}^y φ(z) dz be the cumulative distribution function of D. Then, for each i ∈ [t], we have

P{Z_i < (1−ε)µ} = ∫_{−∞}^{µ} φ(z) dz − ∫_{(1−ε)µ}^{µ} φ(z) dz
               = 1/2 − (Φ(µ) − Φ((1−ε)µ))
               = 1/2 − εµ·φ(ξ) ,

for some ξ ∈ [(1−ε)µ, µ], where the last step uses the mean value theorem and the fundamental theorem of calculus: Φ′ = φ. Let α be defined by

(1/2 − εµ·φ(ξ)) (1 + α) = 1/2 .   (8.1)

Let N = |{i ∈ [t] : Z_i < (1−ε)µ}|. By linearity of expectation, we have E N = (1/2 − εµφ(ξ))t. If the sample median, Y, falls below a limit λ, then at least half the Z_i's must fall below λ. Therefore

P{Y < (1−ε)µ} ≤ P{N ≥ t/2} = P{N ≥ (1+α) E N} ≤ exp(−E N·α²/3) ,

by a standard Chernoff bound. Now, from (8.1), we derive E N·α = εµφ(ξ)t and α ≥ 2εµφ(ξ). Therefore

P{Y < (1−ε)µ} ≤ exp(−(2/3) ε² µ² φ(ξ)² t) ≤ exp(−(2/3) ε² µ² φ*² t) .

To apply the above lemma to our situation we need an estimate for φ*. We will be using the lemma with φ = φ_{p,λ} and µ = µ_{p,λ} = ‖f‖_p, where λ = ‖f‖_p/µ_{p,1}. Therefore,

µφ* = µ_{p,λ} · min{φ_{p,λ}(z) : z ∈ [(1−ε)µ_{p,λ}, (1+ε)µ_{p,λ}]}
    = λµ_{p,1} · (1/λ) min{φ_{p,1}(z/λ) : z ∈ [(1−ε)λµ_{p,1}, (1+ε)λµ_{p,1}]}
    = µ_{p,1} · min{φ_{p,1}(y) : y ∈ [(1−ε)µ_{p,1}, (1+ε)µ_{p,1}]} ,

which is a constant depending only on p: call it c_p. Thus, by Lemma 8.4.1, the output Y of the algorithm satisfies

P{ |Y − ‖f‖_p| ≥ ε‖f‖_p } ≤ 2 exp(−(2/3) ε² c_p² t) ≤ δ ,

for the setting t = (3/(2c_p²)) ε⁻² log(2δ⁻¹).
8.5 Annoying Technical Details
There are two glaring issues with the "idealized" sketch we have just discussed and proven correct. As stated, we do not have a proper algorithm to implement the sketch, because

• the sketch uses real numbers, and algorithms can only do bounded-precision arithmetic; and
• the sketch depends on a huge matrix — with n columns — that does not have a convenient implicit representation.

We will not go into the details of how these matters are resolved, but here is an outline. We can approximate all real numbers involved by rational numbers with sufficient precision, while affecting the output by only a small amount. The number of bits required per entry of the matrix M is only logarithmic in n, 1/ε and 1/δ. We can avoid storing the matrix M explicitly by using a pseudorandom generator (PRG) designed to work with space-bounded algorithms. One such generator is given by a theorem of Nisan [Nis90]. Upon reading an update to a token j, we use the PRG (seeded with j plus the initial random seed) to generate the jth column of M. This transformation blows up the space usage by a factor logarithmic in n and adds 1/n to the error probability.
Unit 9
Sparse Recovery
We have seen several algorithms that maintain a small linear sketch of an evolving vector f (that undergoes turnstile updates) and then answer questions about f using this sketch alone. When f is arbitrary, such sketches must necessarily throw away most of the information in f due to size constraints. However, when f itself holds only a small amount of information—for instance, by having at most s many nonzero coordinates—it is reasonable to ask whether such a linear sketch can allow a complete reconstruction of f on demand. This is the problem of sparse recovery. Sparse recovery is an important area of research first studied extensively in the world of signal processing (especially for the problem of image acquisition), where it also goes by such names as compressed sensing and compressive sensing. It has many connections to topics throughout computer science and mathematics [GI10]. We will focus on a clean version of the problem, rather than the most general version.
9.1 The Problem
We are in the turnstile streaming model. Stream tokens are of the form (j, c), where j ∈ [n] and c ∈ ℤ, and have the usual meaning "f_j ← f_j + c." The net result is to build a frequency vector f ∈ ℤⁿ. We will need a bound on how large the individual entries of f can get: we assume that ‖f‖_∞ := max_{j∈[n]} |f_j| ≤ M at all times. One often assumes that M = poly(n), so that log M = Θ(log n).

The support of a vector v = (v_1, ..., v_n) is defined to be the set of indices where the vector is nonzero:

supp v = {j ∈ [n] : v_j ≠ 0} .
We say that v is s-sparse if |supp v| ≤ s, and the cardinality |supp v| is called the sparsity of v. The streaming s-sparse recovery problem is to maintain a sketch y of the vector f, as the stream comes in, so that if f ends up being s-sparse, it can be recovered from y, and otherwise we can detect the non-sparsity. Think s ≪ n. We would like our sketch size to be "not much bigger" than s, with the dependence on the ambient dimension n being only polylogarithmic.
9.2 Special case: 1-sparse recovery
An important special case is s = 1, i.e., the problem of 1-sparse recovery. Let us first consider a promise version of the problem, where it is guaranteed that the final frequency vector f resulting from the stream is indeed 1-sparse. For this, consider the following very simple deterministic algorithm: maintain the population size ℓ (the net number of tokens seen) and the frequency-weighted sum z of all token values seen. In other words,

ℓ = ∑_{j=1}^n f_j ;    z = ∑_{j=1}^n j·f_j .

Since f is 1-sparse, f = λe_i for some i ∈ [n] and λ ∈ ℤ, where e_i is the standard basis vector that has a 1 entry at index i and 0 entries elsewhere. Therefore, ℓ = λ and z = i·f_i = iλ = iℓ. It follows that we can recover f using

f = 0,          if ℓ = 0 ;
f = ℓ·e_{z/ℓ} , otherwise.

Of course, in the absence of the promise, the above technique cannot detect whether f is indeed 1-sparse.
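In code, the promise version really is just two counters. The helper below (function name mine) consumes turnstile tokens and returns the pair (i, λ) with f = λe_i, or None for the zero vector; it is only guaranteed correct when the final f really is 1-sparse.

```python
def recover_1sparse(stream):
    """Promise version of 1-sparse recovery from turnstile tokens (j, c).

    Maintains l = sum_j f_j and z = sum_j j * f_j. If f = lambda * e_i,
    then l = lambda and z = i * lambda, so the index is i = z / l.
    """
    l = z = 0
    for j, c in stream:
        l += c
        z += c * j
    if l == 0:
        return None                  # f = 0 under the promise
    return (z // l, l)               # (index i, value lambda)

# f ends up as 7 * e_4 even though coordinate 9 was touched along the way.
assert recover_1sparse([(4, 3), (9, 2), (4, 4), (9, -2)]) == (4, 7)
assert recover_1sparse([]) is None
```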
To solve the more general problem of 1-sparse detection and recovery, we employ the important idea of fingerprinting. Informally speaking, a fingerprint is a randomized mapping from a large object to a much smaller sketch with the property that two distinct large objects will very likely have distinct sketches, just as two distinct humans will very likely have distinct fingerprints. In computer science, an often-used technique is to treat the large object as a polynomial and use a random evaluation of that polynomial as its fingerprint. These ideas are incorporated into the algorithm below, which is parametrized by a finite field F, the size of which affects the eventual error probability. As we shall see, a good choice for F is one with n³ < |F| ≤ 2n³. We know that we can always find a finite field (in fact a prime field) with size in this range.

Algorithm 14 Streaming 1-sparse detection and recovery
Initialize:
 1: (ℓ, z, p) ← (0, 0, 0)                           ▷ (population, sum, fingerprint)
 2: r ← uniform random element of finite field F    ▷ |F| controls the error probability
Process (token (j, c)):
 3: ℓ ← ℓ + c
 4: z ← z + c·j
 5: p ← p + c·r^j
Output:
 6: if ℓ = z = p = 0 then declare f = 0             ▷ very likely correct
 7: else if z/ℓ ∉ [n] then declare ‖f‖_0 > 1        ▷ definitely correct
 8: else if p ≠ ℓ·r^{z/ℓ} then declare ‖f‖_0 > 1    ▷ definitely correct
 9: else
10:   declare f = ℓ·e_{z/ℓ}                         ▷ very likely correct

9.3 Analysis: Correctness, Space, and Time
Let R be the random value of r picked in the initialization. The final values of ℓ, z, and p computed by Algorithm 14 are

ℓ = ∑_{j=1}^n f_j ,          (9.1)
z = ∑_{j=1}^n j·f_j ,        (9.2)
p = ∑_{j=1}^n f_j R^j = q(R) ,   (9.3)
Dartmouth: CS 35/135 Data Stream Algorithms
UNIT 9. SPARSE RECOVERY
where q(X) ∈ F[X] is the polynomial q(X) = ∑_{j=1}^{n} f_j X^j. This polynomial q(X) is naturally in one-to-one correspondence with the vector f: coefficients of the polynomial correspond to entries of the vector. Consider the case when f is indeed 1-sparse: say f = λe_i. As before, we have ℓ = λ and z = iℓ. So z/ℓ = i ∈ [n] and p = f_i R^i = ℓR^{z/ℓ}. In the subcase f = 0, this makes p = 0 and the algorithm gives a correct output, by the logic of line 6. In the subcase ‖f‖₀ = 1, the algorithm reaches line 10 and thus gives a correct output. Overall, we see that there are no false negatives.
Consider the case when f is not 1-sparse. Then, for certain “bad” choices of R, Algorithm 14 might err by giving a false positive. To be precise, an error happens when ℓ = z = p = 0 in line 6, or when z/ℓ happens to lie in [n] and then p = ℓR^{z/ℓ} in line 8. These cases can be “cleverly” combined by defining
\[
  i = \begin{cases} 0\,, & \text{if } \ell = z = 0 \text{ or } z/\ell \notin [n]\,, \\ z/\ell\,, & \text{otherwise.} \end{cases}
\]
A false positive happens only if q(R) − ℓR^i = 0, i.e., only if R is a root of the polynomial q(X) − ℓX^i. This latter polynomial has degree at most n and is not the zero polynomial, since q(X) has at least two nonzero coefficients. Recall the following basic theorem from algebra.

Theorem 9.3.1. Over any field, a nonzero polynomial of degree d has at most d roots.
By Theorem 9.3.1, q(X) − ℓX^i has at most n roots. Since R is drawn uniformly from F,
\[
  \Pr\{\text{false positive}\} \le \frac{n}{|F|} = O\!\left(\frac{1}{n^2}\right) \,, \tag{9.4}
\]
by choosing F such that n³ < |F| ≤ 2n³.
For the space and time cost analysis, suppose that |F| = Θ(n³). Algorithm 14 maintains the values ℓ, z, p, and r. Recall that ‖f‖∞ ≤ M at all times. By eqs. (9.1) and (9.2), |ℓ| ≤ nM and |z| ≤ n²M; so ℓ and z take up at most O(log n + log M) bits of space. Since p and r lie in F, they take up at most ⌈log |F|⌉ = O(log n) bits of space. Overall, the space usage is O(log n + log M), which is Õ(1) under the standard (and reasonable) assumption that M = poly(n). Assume, further, that the processor has a word size of Θ(log n). Then, by using repeated squaring for the computation of r^j, the processing of each token takes O(log n) arithmetic operations, and so does the postprocessing to produce the output. We have thus proved the following result.

Theorem 9.3.2. Given turnstile updates to a vector f ∈ Z^n where ‖f‖∞ = poly(n), the 1-sparse detection and recovery problem for f can be solved with error probability O(1/n) using a linear sketch of size Õ(1). The sketch can be updated in Õ(1) time per token and the final output can be computed in Õ(1) time from the sketch.
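Algorithm 14 can be sketched in code, working modulo a fixed prime P as a stand-in for the finite field F (the class name, the particular prime, and the string labels returned are illustrative choices, not prescribed by the notes).

```python
import random

# A sketch of Algorithm 14: 1-sparse detection and recovery via a polynomial
# fingerprint evaluated at a random point r, modulo a prime P. The notes
# suggest picking a prime P with n^3 < P <= 2n^3; the value below is just an
# illustrative stand-in.
class OneSparseSketch:
    def __init__(self, n, prime):
        self.n = n
        self.P = prime
        self.ell = 0                      # ell = sum_j f_j
        self.z = 0                        # z   = sum_j j * f_j
        self.p = 0                        # fingerprint sum_j f_j r^j mod P
        self.r = random.randrange(prime)  # random evaluation point

    def process(self, j, c):
        self.ell += c
        self.z += c * j
        self.p = (self.p + c * pow(self.r, j, self.P)) % self.P

    def output(self):
        if self.ell == 0 and self.z == 0 and self.p == 0:
            return "zero"                          # very likely correct
        if self.ell == 0 or self.z % self.ell != 0:
            return "not-1-sparse"                  # definitely correct
        i = self.z // self.ell                     # candidate index z / ell
        if not (1 <= i <= self.n):
            return "not-1-sparse"                  # definitely correct
        if self.p != self.ell * pow(self.r, i, self.P) % self.P:
            return "not-1-sparse"                  # definitely correct
        return (i, self.ell)                       # very likely correct
```

Note the extra guard for z/ℓ not being an integer: that, too, certifies ‖f‖₀ > 1, since a 1-sparse f always has z = iℓ.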
9.4 General Case: s-Sparse Recovery

We turn to the general case. Let s ≪ n be a parameter. We would like to recover the vector f exactly, provided ‖f‖₀ ≤ s, and detect that ‖f‖₀ > s if not. Our solution will make use of the above 1-sparse recovery sketch as a black-box data structure, denoted D. When an instance of D produces an actual 1-sparse vector as output, let's say that it reports positive; otherwise, let's say that it reports negative. Remember that there are no false negatives and that a false positive occurs with probability O(1/n). The basic idea behind the algorithm is to hash the universe [n] down to the range [2s], thereby splitting the input stream into 2s substreams, each consisting of the items that hash to a particular target. This range size is large enough to single out any particular item in supp f, causing the substream containing that item to have a 1-sparse frequency vector. By repeating this idea some number of times, using independent hashes, we get the desired outcome. The precise logic is given in Algorithm 15 below. Clearly, the space usage of this algorithm is dominated by the 2st copies of D. Plugging in the value of t and our previous space bound on D, the overall space usage is
\[
  O\bigl( s (\log s + \log \delta^{-1}) (\log n + \log M) \bigr) \,.
\]
Algorithm 15 Streaming s-sparse recovery

Initialize:
 1: t ← ⌈log(s/δ)⌉                                  ▷ δ controls the failure probability
 2: D[1…t][1…2s] ← 0                                ▷ each D[i][k] is a copy of D
 3: choose independent hash functions h₁, …, h_t : [n] → [2s], each from a 2-universal family

Process (token (j, c)):
 4: foreach i ∈ [t] do
 5:   update D[i][h_i(j)] with token (j, c)

Output:                                              ▷ query all copies of D and collate outputs
 6: A ← (empty associative array)
 7: foreach i ∈ [t], k ∈ [2s] do
 8:   if D[i][k] reports positive and outputs λe_a where λ ≠ 0 then
 9:     if a ∈ keys(A) and A[a] ≠ λ then abort
10:     A[a] ← λ
11: if |keys(A)| > s then abort
12: declare f = ∑_{a ∈ keys(A)} A[a] e_a
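To make the hashing idea concrete, here is an illustrative rendering of Algorithm 15 in which each of the t × 2s cells stores its substream's frequencies exactly. A real implementation would make each cell the small 1-sparse detector of Algorithm 14, and would draw the hash functions from a 2-universal family rather than fully at random; all names below are ours.

```python
import random

# An illustrative (exact-cell) rendering of Algorithm 15. Each cell stores the
# frequency vector of its substream as a dictionary, so "reports positive"
# simply means the cell holds exactly one item with nonzero net frequency.
class SSparseRecovery:
    def __init__(self, n, s, t):
        self.n, self.s, self.t = n, s, t
        # fully random hashes [n] -> [2s]; the notes only need 2-universality
        self.h = [[random.randrange(2 * s) for _ in range(n + 1)]
                  for _ in range(t)]
        self.cells = [[dict() for _ in range(2 * s)] for _ in range(t)]

    def process(self, j, c):
        for i in range(self.t):
            d = self.cells[i][self.h[i][j]]
            d[j] = d.get(j, 0) + c

    def recover(self):
        A = {}
        for row in self.cells:
            for cell in row:
                nz = [(a, v) for a, v in cell.items() if v != 0]
                if len(nz) == 1:                   # cell reports positive
                    a, lam = nz[0]
                    if A.get(a, lam) != lam:
                        return None                # abort: inconsistent outputs
                    A[a] = lam
        if len(A) > self.s:
            return None                            # abort: f is not s-sparse
        return A                                   # sparse representation of f
```

Recovery succeeds whenever every support item is isolated in some row, which happens with high probability once t = ⌈log(s/δ)⌉ rows are used.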
Turning to the error analysis, suppose that f is indeed s-sparse and that supp f ⊆ {j₁, …, j_s}. Notice that the data structure D[i][k] sketches the substream consisting of those tokens (j, c) for which h_i(j) = k. This substream is 1-sparse if there is at most one j ∈ supp f such that h_i(j) = k. Therefore, Algorithm 15 correctly outputs f if both of the following happen.

[SR1] For each j ∈ supp f, there exists i ∈ [t] such that h_i^{-1}(h_i(j)) ∩ supp f = {j}.

[SR2] None of the 2st copies of D gives a false positive.

Consider a particular item j ∈ supp f. For each i ∈ [t],
\begin{align*}
  \Pr\bigl\{ h_i^{-1}(h_i(j)) \cap \operatorname{supp} f \ne \{j\} \bigr\}
  &= \Pr\bigl\{ \exists\, j' \in \operatorname{supp} f : j' \ne j \wedge h_i(j') = h_i(j) \bigr\} \\
  &\le \sum_{\substack{j' \in \operatorname{supp} f \\ j' \ne j}} \Pr\bigl\{ h_i(j') = h_i(j) \bigr\}
   \le \sum_{\substack{j' \in \operatorname{supp} f \\ j' \ne j}} \frac{1}{2s}
   \le \frac{1}{2} \,,
\end{align*}
where the penultimate step uses the 2-universality property. By the mutual independence of the t hash functions,
\[
  \Pr\{\text{SR1 fails for item } j\}
  = \prod_{i=1}^{t} \Pr\bigl\{ h_i^{-1}(h_i(j)) \cap \operatorname{supp} f \ne \{j\} \bigr\}
  \le \left(\frac{1}{2}\right)^{t} \le \frac{\delta}{s} \,.
\]
By a union bound over the items in supp f,
\[
  \Pr(\neg\,\text{SR1}) \le |\operatorname{supp} f| \cdot \frac{\delta}{s} \le \delta \,.
\]
By another union bound over the copies of D, using the false positive rate bound from eq. (9.4),
\[
  \Pr(\neg\,\text{SR2}) \le 2st \cdot O\!\left(\frac{1}{n^2}\right) = o(1) \,,
\]
since s ≤ n. Overall, the probability that the recovery fails is at most δ + o(1).
Theorem 9.4.1. Given turnstile updates to a vector f ∈ Z^n where ‖f‖∞ = poly(n), the s-sparse recovery problem for f can be solved with error probability δ using a linear sketch of size O(s log(s/δ) log n).

With just a little more work (and analysis), we can solve the s-sparse detection and recovery problem as well. This is left as an exercise.
Exercises
9-1 Suppose that in Algorithm 14, we modified line 6 to declare f = 0 whenever ℓ = z = 0, without using the value of p. Would this still be correct? Why?
9-2 As described and analyzed, Algorithm 15 requires the vector f to be s-sparse. Explain clearly why it does not already solve the problem of detecting whether this s-sparsity condition holds. Make a simple enhancement to the algorithm to enable this detection and analyze your modified algorithm to prove that it works.
Unit 10

Weight-Based Sampling
The problem of sampling a coordinate at random from a high-dimensional vector is important both in its own right and as a key primitive for other, more advanced, data stream algorithms [JW18]. For instance, consider a stream of visits by customers to the busy website of some business or organization. An analyst might want to sample one or a few customers according to some distribution defined by the frequencies of the various customers' visits. Letting f ∈ Z^n be the vector of frequencies (as usual), we would like to pick a coordinate i ∈ [n] according to some distribution π = (π₁, …, π_n) ∈ [0, 1]^n. Here are three specific examples of distributions we may want to sample from.

• Sample a customer uniformly from the set of all distinct customers who visited the website. In our formalism, π_i = 1/‖f‖₀ for each i ∈ supp f and π_i = 0 for all other i ∈ [n]. This is called ℓ₀-sampling.

• Sample a customer with probability proportional to their visit frequency. In our formalism, π_i = |f_i|/‖f‖₁. This is called ℓ₁-sampling.

• Sample a customer with frequent visitors weighted disproportionately highly, for instance, proportional to the square of their visit frequency. In our formalism, π_i = f_i²/‖f‖₂². More generally, one can consider π_i = |f_i|^p/‖f‖_p^p, for a parameter p > 0. This is called ℓ_p-sampling.
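As a quick reference point, the three target distributions are easy to compute directly from f; the entire challenge in the streaming setting is to sample from (approximately) this π without storing f. The helper below is a toy illustration, not part of any streaming algorithm.

```python
# Computes the target distribution pi for l_p sampling directly from f.
def sampling_distribution(f, p):
    if p == 0:
        d = sum(1 for v in f if v != 0)        # ||f||_0
        return [1.0 / d if v != 0 else 0.0 for v in f]
    total = sum(abs(v) ** p for v in f)        # ||f||_p^p
    return [abs(v) ** p / total for v in f]
```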
10.1 The Problem
We shall study an algorithm for approximate ℓ₀-sampling and then one for approximate ℓ₂-sampling: these algorithms are "approximate" because the distribution over items (coordinates) that they induce is close to, but not identical to, the respective target distribution. To be precise, the task is as follows. The input is a turnstile stream σ that builds up a frequency vector f ∈ Z^n. Assume that at all times we have ‖f‖∞ ≤ M, for some bound M = poly(n). We must maintain a small-sized linear sketch of f from which we can produce a random index i ∈ [n] distributed approximately according to π, where π is one of the distributions listed above. With small probability, we may output FAIL instead of producing an index in [n]. If the output I of the algorithm satisfies
\begin{align}
  \Pr\{I = i\} &\in \bigl[ (1-\varepsilon)\pi_i - \theta ,\; (1+\varepsilon)\pi_i + \theta \bigr] \,, \quad \text{for each } i \in [n] \,, \tag{10.1} \\
  \Pr\{I = \text{FAIL}\} &\le \delta \,, \tag{10.2}
\end{align}
where π is the distribution corresponding to ℓ_p-sampling, then we say that the algorithm is an (ε, θ, δ)-ℓ_p-sampler. Monemizadeh and Woodruff [MW10] gave sampling algorithms that exploited this relaxation of producing the distribution π only approximately. More recently, Jayaram and Woodruff [JW18] showed that space-efficient perfect sampling—where there is no approximation, but a small failure probability is allowed—is possible.
UNIT 10. WEIGHTBASED SAMPLING
10.2 The ℓ₀-Sampling Problem
Our task is to design an (ε, θ, δ)-ℓ₀-sampler as defined in eqs. (10.1) and (10.2), with π defined by
\[
  \pi_i = \begin{cases} \dfrac{1}{\|f\|_0} \,, & \text{if } i \in \operatorname{supp} f \,, \\[4pt] 0 \,, & \text{otherwise.} \end{cases}
\]
It is promised that f ≠ 0, so that this problem is well-defined.
A key observation is that, thanks to the sparse recovery sketches designed in Unit 9, this problem becomes simple if f is promised to be s-sparse, for a small s. This leads to the following idea: choose a random subset S ⊆ [n] of coordinates so that if we form a subvector of f by zeroing out all coordinates outside S, the subvector is likely to be very sparse. If we filter the input stream σ and only retain tokens that update items in S, we can simulate the stream that builds up this subvector and feed this simulated stream to a sparse recovery data structure. Let d := ‖f‖₀. If S is formed by picking each coordinate in [n] with probability about 1/d, independently, then the subvector described above has a good—i.e., Ω(1)—chance of being 1-sparse. Of course, we don't know d in advance, so we have to prepare for several possibilities by using several sets S, with several different coordinate retention probabilities.
10.2.1 An Idealized Algorithm
It will be useful to assume, without loss of generality, that n is a power of 2.
We realize the idea outlined above by using hash functions to create the random sets: the set of items in [n] that are hashed to a particular location forms one of the sets S mentioned above. We begin with an idealized algorithm that uses uniformly random hash functions on the domain [n]. Of course, storing such a hash function requires Ω(n) bits, so this does not achieve sublinear storage. We can address this issue in one of the following ways.

• We can apply Nisan's space-bounded PRG to generate these hashes pseudorandomly, in limited space.

• We can use k-universal hash families, for a suitable not-too-large k, which is more space-efficient.

Either of these approaches will introduce a small additional error in the algorithm's output. The algorithm below uses several copies of a 1-sparse detection and recovery data structure, D. We can plug in the data structure developed in Unit 9, which uses O(log n + log M) space and has a false positive probability in O(1/n²).

Algorithm 16 Idealized ℓ₀-sampling algorithm using full randomness

Initialize:
 1: for ℓ ← 0 to log n do
 2:   choose h_ℓ : [n] → {0, 1}^ℓ uniformly at random from all such functions
 3:   D_ℓ ← 0                          ▷ 1-sparse detection and recovery data structure

Process (token (j, c)):
 4: for ℓ ← 0 to log n do
 5:   if h_ℓ(j) = 0 then               ▷ happens with probability 2^{-ℓ}
 6:     feed (j, c) to D_ℓ

Output:
 7: for ℓ ← 0 to log n do
 8:   if D_ℓ reports positive and outputs λe_i with λ ≠ 0 then
 9:     yield (i, λ) and stop
10: output FAIL
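For intuition, here is a deliberately space-inefficient rendering of Algorithm 16 in which each D_ℓ is replaced by an exact dictionary for its substream, and each h_ℓ is realized as a random retained subset. A true implementation would plug in the 1-sparse sketch of Unit 9; all names below are ours.

```python
import random

# An illustrative (non-streaming-space) rendering of Algorithm 16.
class IdealizedL0Sampler:
    def __init__(self, n):
        assert n & (n - 1) == 0, "assume n is a power of 2"
        self.n = n
        self.levels = n.bit_length()          # levels ell = 0, 1, ..., log n
        # j is retained at level ell iff h_ell(j) = 0, with probability 2^{-ell}
        self.retained = [{j for j in range(1, n + 1)
                          if random.random() < 2.0 ** -ell}
                         for ell in range(self.levels)]
        self.D = [dict() for _ in range(self.levels)]  # exact stand-in for D_ell

    def process(self, j, c):
        for ell in range(self.levels):
            if j in self.retained[ell]:
                self.D[ell][j] = self.D[ell].get(j, 0) + c

    def output(self):
        for ell in range(self.levels):
            nonzero = [(i, lam) for i, lam in self.D[ell].items() if lam != 0]
            if len(nonzero) == 1:             # D_ell reports positive
                return nonzero[0]             # (sampled index i, its frequency)
        return None                           # FAIL
```

Level 0 retains every item, so a stream whose support is a single item is always recovered there; larger supports are caught at the level whose retention probability is about 1/d.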
10.2.2 The Quality of the Output
Let d = |supp(f)|. Consider a level ℓ such that 1/(4d) ≤ 1/2^ℓ …

Let ε > 0 be an accuracy parameter for the algorithm (ε should be thought of as very small).
10.3.1 An ℓ₂-Sampling Algorithm
Algorithm 18 ℓ₂-sampling (precision sampling)

Initialize:
 1: u₁, u₂, …, u_n ∈_R [1/n², 1]
 2: count sketch data structure D for g = (g₁, g₂, …, g_n)

Process (token (j, c)):
 3: feed (j, c/√u_j) into D

Output:
 4: foreach j ∈ [n] do
 5:   ĝ_j ← estimate of g_j from D
 6:   f̂_j ← ĝ_j √u_j
 7:   X_j ← 1 if ĝ_j² = f̂_j²/u_j ≥ 4/ε, and X_j ← 0 otherwise
 8: if ∃ unique j with X_j = 1 then
 9:   output (j, f̂_j²)
10: else
11:   FAIL

Remarks:

• As written, we need to store u = (u₁, u₂, …, u_n).

• But in fact u is just random and input-independent, and the procedure is space-bounded. So we can use Nisan's PRG to reduce the random seed.

• As written, Pr[FAIL] ≈ 1 − ε, but just repeat Θ((1/ε) log(1/δ)) times for Pr[FAIL] ≤ δ.

• As written, Algorithm 18 will work as analyzed, assuming 1 ≤ F₂ ≤ 2.
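The precision-sampling step can be isolated in a few lines if we pretend the count sketch gives us g exactly. This is an illustrative simplification (a real implementation would substitute count-sketch estimates ĝ_j, and all names here are ours); with exact access, the returned estimate is f_j² itself.

```python
import random

# An illustrative rendering of Algorithm 18 with the count sketch replaced by
# exact access to the scaled vector g, isolating the precision-sampling step.
# The analysis in the notes assumes 1 <= F_2 <= 2.
def l2_sample(f, eps):
    n = len(f)
    u = [random.uniform(1.0 / n**2, 1.0) for _ in range(n)]
    g = [f[j] / u[j] ** 0.5 for j in range(n)]    # the vector fed to the sketch
    winners = [j for j in range(n) if g[j] ** 2 >= 4.0 / eps]   # X_j = 1
    if len(winners) == 1:
        j = winners[0]
        return j, g[j] ** 2 * u[j]                # (index j, estimate of f_j^2)
    return None                                    # FAIL: repeat O(1/eps) times
```

Each call succeeds with probability only about εF₂/4, so in practice one repeats the procedure until an index is produced; over many trials, index j is returned with frequency roughly proportional to f_j².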
10.3.2 Analysis

Let F₂ = ∑_{j=1}^{n} f_j² = ‖f‖₂² and F₂(g) = ∑_{i=1}^{n} g_i².
Claim 1. E[F₂(g)] ≤ 5 log n.

Proof.
\begin{align*}
  \operatorname{E}[F_2(g)]
  &= \sum_{j=1}^{n} \operatorname{E}\bigl[g_j^2\bigr]
   = \sum_{j=1}^{n} \operatorname{E}\left[\frac{f_j^2}{u_j}\right]
   = \sum_{j=1}^{n} f_j^2\, \operatorname{E}\left[\frac{1}{u_j}\right]
   && \text{since } \operatorname{E}[1/u_i] = \operatorname{E}[1/u_j]\ \forall\, i, j, \\
  &= F_2 \int_{1/n^2}^{1} \frac{1}{u} \cdot \frac{du}{1 - 1/n^2}
   = F_2 \cdot \frac{\ln n^2}{1 - 1/n^2}
   \le \left(4 + \Theta\!\left(\frac{1}{n^2}\right)\right) \ln n
   \le 5 \log n \,,
\end{align*}
where the penultimate inequality uses the standing assumption F₂ ≤ 2.
Consider the estimate ĝ_j derived from a 1 × k count sketch data structure: ĝ_j = g_j + Z_j, where Z_j is the contribution to the estimate from the items i ≠ j that collide with j. Recall that E[Z_j] = 0 and E[Z_j²] = (1/k) ∑_{i≠j} g_i² ≤ F₂(g)/k. Hence, by applying Markov's inequality, we get Pr[Z_j² ≥ 3F₂(g)/k] ≤ 1/3. Now consider the following two cases.

• If |g_j| ≥ 2/ε, then ĝ_j² = |g_j + Z_j|² = e^{±ε} g_j².

• Else |g_j| < 2/ε. Then,
\[
  \bigl| \hat g_j^2 - g_j^2 \bigr|
  \le \bigl| (|g_j| + |Z_j|)^2 - g_j^2 \bigr|
  = Z_j^2 + 2 |g_j| |Z_j|
  \le Z_j^2 \left(1 + \frac{4}{\varepsilon}\right) .
\]
So with probability at least 2/3,
\begin{align*}
  \bigl| \hat g_j^2 - g_j^2 \bigr|
  &\le \frac{3 F_2(g)}{k} \left(1 + \frac{4}{\varepsilon}\right)
   && \text{since } \Pr\left[ Z_j^2 \ge \frac{3 F_2(g)}{k} \right] \le \frac{1}{3} \,, \\
  &= \frac{3 F_2(g)}{k} \cdot \frac{\varepsilon + 4}{\varepsilon}
   \le \frac{13 F_2(g)}{\varepsilon k} \,.
\end{align*}
We pick k = 650 log n / ε = Θ(log n / ε). Using Claim 1 (and Markov's inequality), we get
\[
  \Pr\bigl[ F_2(g) \ge 50 \log n \bigr] \le \frac{5 \log n}{50 \log n} = \frac{1}{10} \,.
\]
Hence, with probability 1 − (1/3 + 1/10), |ĝ_j² − g_j²| ≤ 1.

So in both cases, ĝ_j² = e^{±ε} g_j² ± 1, which implies f̂_j² = e^{±ε} f_j² ± u_j. We observe that when X_j = 1, u_j ≤ ε f̂_j²/4. So
\[
  \hat f_j^2 = e^{\pm\varepsilon} f_j^2 \pm \frac{\varepsilon \hat f_j^2}{4} \,.
\]
Using the fact that 1 + ε ≈ e^ε for ε ≈ 0, we get, after rearranging, f̂_j² = e^{±2ε} f_j².
Now we are ready to lower bound the probability that Algorithm 18 produces an output:
\begin{align*}
  \Pr[j \text{ is output}]
  &= \Pr\left[ X_j = 1 \wedge \bigwedge_{i \ne j} X_i = 0 \right] \\
  &= \Pr[X_j = 1] \cdot \left( 1 - \Pr\left[ \bigvee_{i \ne j} (X_i = 1) \;\middle|\; X_j = 1 \right] \right) \\
  &= \Pr[X_j = 1] \cdot \left( 1 - \Pr\left[ \bigvee_{i \ne j} (X_i = 1) \right] \right)
   && \text{assuming the } X_i \text{ are pairwise independent,} \\
  &\ge \Pr[X_j = 1] \cdot \left( 1 - \sum_{i \ne j} \Pr[X_i = 1] \right)
   && \text{by a union bound,} \\
  &= \Pr\left[ u_j \le \frac{\varepsilon \hat f_j^2}{4} \right] \cdot \left( 1 - \sum_{i \ne j} \Pr\left[ u_i \le \frac{\varepsilon \hat f_i^2}{4} \right] \right) \\
  &\approx \frac{\varepsilon \hat f_j^2}{4} \cdot \left( 1 - \sum_{i \ne j} \frac{\varepsilon \hat f_i^2}{4} \right)
   && \text{pretending } u_j \in_R [0, 1] \,, \\
  &\ge \frac{\varepsilon \hat f_j^2}{4} \cdot \left( 1 - \sum_{i} \frac{\varepsilon \hat f_i^2}{4} \right) \\
  &= \frac{\varepsilon e^{\pm 2\varepsilon} f_j^2}{4} \cdot \left( 1 - \sum_{i} \frac{\varepsilon e^{2\varepsilon} f_i^2}{4} \right) \\
  &\ge \frac{\varepsilon e^{-2\varepsilon} f_j^2}{4} \cdot \left( 1 - \frac{\varepsilon e^{2\varepsilon}}{2} \right)
   && \text{using } F_2 \le 2 \,.
\end{align*}
Now we upper bound the probability that Algorithm 18 produces an output:
\begin{align*}
  \Pr[j \text{ is output}]
  \le \Pr\{X_j = 1\}
  \approx \frac{\varepsilon \hat f_j^2}{4}
  \le \frac{\varepsilon e^{2\varepsilon} f_j^2}{4}
  = \frac{e^{2\varepsilon}}{4} \cdot \varepsilon f_j^2 \,,
\end{align*}
again pretending u_j ∈_R [0, 1]. Hence, Pr{j is output} = (1/4) e^{±3ε} · ε f_j². So the success probability of Algorithm 18 is given by
\[
  \Pr\{\neg\,\text{FAIL}\} = \sum_j \Pr[j \text{ is output}] = \frac{1}{4} e^{\pm 3\varepsilon} \cdot \varepsilon F_2 \,.
\]
Finally, the sampling probability of j is given by
\[
  \Pr\{\text{sample } j \mid \neg\,\text{FAIL}\} = \frac{f_j^2}{F_2} \cdot e^{\pm 6\varepsilon} \,.
\]
Unit 11

Finding the Median
Finding the mean of a stream of numbers is trivial, requiring only logarithmic space. How about finding the median? There is a deterministic linear-time algorithm for finding the median of an in-memory array of numbers—already a nontrivial and beautiful result due to Blum et al. [BFP+73]—but it requires full access to the array and is not even close to being implementable on streamed input. The seminal paper of Munro and Paterson [MP80] provides a streaming, sublinear-space solution, provided at least two passes are allowed. In fact, their solution smoothly trades off the number of passes for space. Moreover, they prove a restricted lower bound showing that their pass/space tradeoff is essentially optimal for algorithms that operate in a certain way. More recent research has shown that this lower bound actually applies without restrictions, i.e., without assumptions about the algorithm. In hindsight, the Munro–Paterson paper should be recognized as the first serious paper on data stream algorithms.
11.1 The Problem

We consider a vanilla stream model. Our input is a stream of numbers σ = ⟨a₁, a₂, …, a_m⟩, with each a_i ∈ [n]. Our goal is to output the median of these numbers. Recall that
\[
  \operatorname{median}(\sigma) =
  \begin{cases}
    w_{\lceil m/2 \rceil}\,, & \text{if } m \text{ is odd,} \\[4pt]
    \dfrac{w_{m/2} + w_{m/2+1}}{2}\,, & \text{if } m \text{ is even,}
  \end{cases}
\]
where w = (w₁, …, w_m) := sort(σ) is the result of sorting σ in nondecreasing order. Thus, finding the median is essentially a special case of the more general goal of finding w_r, the rth element of sort(σ), where r ∈ [m] is a given rank. This generalization is called the SELECTION problem. We shall give a multi-pass streaming algorithm for SELECTION. To keep our presentation simple, we shall assume that the numbers in σ are distinct. Dealing with duplicates is messy but not conceptually deep, so this assumption helps us focus on the main algorithmic insights.
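The offline versions of these definitions make a convenient, non-streaming baseline (function names are ours):

```python
# Offline reference implementations of SELECTION and the median, mirroring the
# definitions above (the rank r is 1-indexed; w = sort(sigma)).
def select(sigma, r):
    return sorted(sigma)[r - 1]          # w_r

def median(sigma):
    w, m = sorted(sigma), len(sigma)
    if m % 2 == 1:
        return w[m // 2]                 # w_{ceil(m/2)}, converted to 0-indexing
    return (w[m // 2 - 1] + w[m // 2]) / 2
```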
11.2 Preliminaries for an Algorithm

In preparation for designing an algorithm for SELECTION, we now set up some terminology and prove a useful combinatorial lemma.

Definition 11.2.1. Let T be a set of (necessarily distinct) elements from a totally ordered universe U and let x ∈ U. The rank of x with respect to T is defined as
\[
  \operatorname{rank}(x; T) = \bigl| \{ w \in T : w \le x \} \bigr| \,.
\]
UNIT 11. FINDING THE MEDIAN
Notice that this definition does not require x ∈ T .
The algorithm is driven by an idea very close to that of a coreset, seen in many other data stream algorithms.¹ In computational geometry and adjacent fields, a set Q is called a coreset of a set P if Q ⊆ P and Q is a good proxy for P with respect to some cost measure. Often, elements of Q need to be given multiplicities or weights in order to preserve the cost measure. The precise definition can depend on the intended application. For our purposes, we shall define a notion of the "core" of a sequence, as follows. For a sequence ρ = (a₁, …, a_m), define
\begin{align*}
  \operatorname{evens}(\rho) &= \{a_2, a_4, \ldots\} = \{a_{2i} : 1 \le i \le m/2,\ i \in \mathbb{Z}\} \,, \\
  \operatorname{left}(\rho) &= (a_1, \ldots, a_{\lceil m/2 \rceil}) \,, \\
  \operatorname{right}(\rho) &= (a_{\lceil m/2 \rceil + 1}, \ldots, a_m) \,.
\end{align*}
Recall that ρ ◦ τ denotes concatenation of sequences: ρ followed by τ.
Definition 11.2.2. The i-core C_i(τ) of a sequence τ of distinct integers is defined recursively, as follows:
\[
  C_0(\tau) = \operatorname{sort}(\tau) \,; \qquad
  C_{i+1}(\tau) = \operatorname{sort}\bigl( \operatorname{evens}(C_i(\operatorname{left}(\tau))) \cup \operatorname{evens}(C_i(\operatorname{right}(\tau))) \bigr) \,, \quad \text{for each } i \ge 0 \,.
\]

The sequence C_i(τ) is typically not a subsequence of τ, though its elements do come from τ. Notice that if m, the length of τ, is divisible by 2^{i+1}, then the length of C_i(τ) is exactly m/2^i. Suppose that τ is in ascending order. Then the "sort" operations in Definition 11.2.2 have no effect and C_i(τ) is obtained by taking every 2^i-th element of τ. Thus, in this special case, if a appears in C_i(τ), we have
\[
  \operatorname{rank}(a; \tau) = 2^i \cdot \operatorname{rank}(a; C_i(\tau)) \,, \tag{11.1}
\]
where by rank(a; τ) we mean the rank of a with respect to the set of elements of τ.
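The recursion in Definition 11.2.2 is short to write down directly; this sketch (function names are ours) also makes the special case of eq. (11.1) easy to check.

```python
# A direct rendering of Definition 11.2.2.
def evens(seq):
    return seq[1::2]                     # a_2, a_4, ... of a sequence

def core(i, tau):
    # C_i(tau): recursively split into left/right halves, keep even-indexed
    # elements of each sub-core, and merge by sorting.
    if i == 0:
        return sorted(tau)
    half = (len(tau) + 1) // 2           # left(tau) gets ceil(m/2) elements
    return sorted(evens(core(i - 1, tau[:half])) +
                  evens(core(i - 1, tau[half:])))
```

For a sorted τ whose length is divisible by 2^{i+1}, this picks out every 2^i-th element, in accordance with eq. (11.1).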
The next (and crucial) lemma shows that an approximate version of eq. (11.1) holds in the general case.

Lemma 11.2.3. Let τ be a sequence of length 2^i s consisting of distinct integers. Suppose that C_i(τ) = (x₁, …, x_s). Then, for all j ∈ [s],
\[
  2^i j \le \operatorname{rank}(x_j; \tau) \le 2^i (i + j - 1) + 1 \,.
\]

Proof. We proceed by induction on i. The base case, i = 0, is immediate: C₀(τ) and τ have the same set of elements. Suppose the lemma has been proved for some i ≥ 0. Consider a sequence τ₁ ∘ τ₂ of distinct integers, where each of τ₁, τ₂ has length 2^i s. Suppose that C_{i+1}(τ₁ ∘ τ₂) = (x₁, …, x_s). By definition,
\[
  (x_1, \ldots, x_s) = \operatorname{sort}\bigl( \operatorname{evens}(A) \cup \operatorname{evens}(B) \bigr) \,,
\]
where A = C_i(τ₁) and B = C_i(τ₂). Consider a particular element x_j. It must appear in either evens(A) or evens(B), but not both. Suppose WLOG that it is the kth element of evens(A). Then the j − k elements of (x₁, …, x_j) that are not in evens(A) must instead appear in evens(B). Let y be the largest of these elements, i.e., the (j − k)th element of evens(B). Let z be the (j − k + 1)th element of evens(B). Then y < x_j < z. By the induction hypothesis,
\begin{align*}
  2^i \cdot 2k &\le \operatorname{rank}(x_j; \tau_1) \le 2^i (i + 2k - 1) + 1 \,; \\
  2^i (2j - 2k) &\le \operatorname{rank}(y; \tau_2) \le 2^i (i + 2j - 2k - 1) + 1 \,; \\
  2^i (2j - 2k + 2) &\le \operatorname{rank}(z; \tau_2) \le 2^i (i + 2j - 2k + 1) + 1 \,.
\end{align*}
Therefore,
\[
  \operatorname{rank}(x_j; \tau) \ge \operatorname{rank}(x_j; \tau_1) + \operatorname{rank}(y; \tau_2) \ge 2^i (2k + 2j - 2k) = 2^{i+1} j \,,
\]
and
\[
  \operatorname{rank}(x_j; \tau) \le \operatorname{rank}(x_j; \tau_1) + \operatorname{rank}(z; \tau_2) - 1 \le 2^i (i + 2k - 1 + i + 2j - 2k + 1) + 2 - 1 = 2^{i+1} (i + j) + 1 \,,
\]
which establishes the induction step.

¹ The original paper uses the term "sample," which can be confusing because there is nothing random about the construction. The term "coreset" is much more modern; it originates in computational geometry and is often given a precise meaning that doesn't quite fit the concept we are defining. Nevertheless, the strong similarities with the coresets of computational geometry are important to note.
11.3 The Munro–Paterson Algorithm
We are ready to describe an algorithm for SELECTION. This algorithm uses multiple passes (two or more) over the input stream. The computation inside each pass is interesting and elegant, though it is not so easy to describe in pseudocode. The postprocessing logic after each pass is also nontrivial.
11.3.1 Computing a Core
Having introduced the important idea of a core, the next key insight is that Definition 11.2.2 actually allows for a small-space streaming computation of a core in a single pass.

Theorem 11.3.1. There is a one-pass streaming algorithm that, given an input stream σ′ of length m′ = 2^t s (for integers t and s) consisting of distinct tokens from [n], computes the t-core C_t(σ′) using O(s log(m′/s) log n) bits of space.
Proof sketch. Buffer each contiguous chunk of s elements of σ′. When the buffer fills up, send its contents up the recursion tree and flush the buffer. We will have at most one buffer's worth of elements per level of recursion, and there are t + 1 = O(log(m′/s)) levels.

[ * * * Insert picture of binary tree showing core computation recursion * * * ]
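One way to realize the proof sketch is below; the binary-counter-style carry propagation is our framing of "send its contents up the recursion tree," and the function name is ours.

```python
# A sketch of the one-pass core computation of Theorem 11.3.1: buffer chunks
# of s elements and merge blocks up a binary recursion tree. At any moment,
# at most one block is pending per level.
def streaming_core(stream, s):
    buf = []        # current chunk of raw tokens (level-0 buffer)
    levels = {}     # levels[i]: a pending length-s block at recursion level i
    for x in stream:
        buf.append(x)
        if len(buf) == s:
            block, buf = sorted(buf), []         # C_0 of this chunk
            i = 0
            while i in levels:                   # carry propagation up the tree
                left = levels.pop(i)
                # C_{i+1} = sort(evens(C_i(left)) U evens(C_i(right)))
                block = sorted(left[1::2] + block[1::2])
                i += 1
            levels[i] = block
    assert not buf and len(levels) == 1, "stream length must be s * 2^t"
    (t, core_block), = levels.items()
    return t, core_block                         # (t, C_t of the stream)
```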
11.3.2 Utilizing a Core
The Munro–Paterson algorithm uses p such passes and operates in space O(m^{1/p} log^{2−2/p} m log n). Each pass has a corresponding active interval: a possibly infinite open interval (a⁻, a⁺). Tokens in the input stream that lie in the active interval are the active tokens, and they form a substream of the input stream σ called the active substream for this pass. The pass computes a core sequence of this active substream τ, using the algorithm given by Theorem 11.3.1, and does some postprocessing to update the active interval, maintaining the invariant
\[
  a^- < w_r < a^+ \,, \tag{11.2}
\]
where w_r is the rank-r element of w := sort(σ) that we seek. Initially, we set a⁻ = −∞ and a⁺ = +∞. During a pass, we ignore inactive tokens except to compute q := rank(a⁻; σ); clearly, w_r is the (r − q)th element of the active substream τ. We compute the t-core of τ, for a suitable integer t. Then, based on the approximate ranks of elements in this core (as given by Lemma 11.2.3), we compute a new active interval that is a subinterval of the one we started with. The precise logic used in a pass, and for determining when to stop making passes, is given in pseudocode form in Algorithm 19. This precision comes at the cost of simplicity, so for a full understanding of the pseudocode, it may be necessary to study the analysis that follows. Suppose that the invariant in eq. (11.2) indeed holds at the start of each pass. Examining Algorithm 19, it is clear that when the algorithm returns a final answer, it does so having stored all elements of the active substream, and therefore this answer is correct. Further, the logic of lines 18 and 20, together with the rank bounds in Lemma 11.2.3, shows that the invariant is correctly maintained.
11.3.3 Analysis: Pass/Space Tradeoff
It remains to analyze the number of passes over σ made by Algorithm 19.
Consider the ith pass over the stream (i.e., the ith call to SELECTIONPASS) and suppose that this isn't the final pass (i.e., the condition tested in line 10 fails). Suppose this pass computes a t_i-core of a stream consisting of the active substream τ_i—which has length m_i—padded up with some number of "+∞" tokens so that the core computation happens on a stream of length 2^{t_i} s. Let x = (x₁, …, x_s) be the core. Suppose that the desired answer w_r is the r_i th element of the active substream, i.e., r_i = r − rank(a⁻; σ). Notice that r_i is computed as the value r − q in Algorithm 19.
Algorithm 19 Munro–Paterson multi-pass algorithm for the SELECTION problem

 1: function SELECTIONPASS(stream σ, rank r, space parameter s, active interval (a⁻, a⁺))
 2:   initialize core computation data structure C with space parameter s
 3:   (m, m′, q) ← (0, 0, 0)                  ▷ (stream-length, active-length, num-below-active)
 4:   foreach token j from σ do
 5:     m ← m + 1
 6:     if j ≤ a⁻ then q ← q + 1              ▷ token too small: use it to update rank of a⁻
 7:     else if a⁻ < j < a⁺ then              ▷ active token
 8:       m′ ← m′ + 1
 9:       feed j to C
10:   if m′ ≤ ⌈s log m⌉ then
11:     ρ ← output of C                        ▷ the entire active substream, sorted
12:     return ρ[r − q]                        ▷ answer found
13:   else
14:     t ← ⌈log(m′/s)⌉                        ▷ depth of core computed
15:     feed (2^t s − m′) copies of "+∞" to C  ▷ pad stream so number of chunks is a power of 2
16:     ρ ← output of C
17:     ℓ ← max{ j ∈ [s] : 2^t(t + j − 1) + 1 < r − q }  ▷ largest element whose max possible rank is too low
18:     if ℓ is defined then a⁻ ← ρ[ℓ]
19:     h ← min{ j ∈ [s] : 2^t j > r − q }                ▷ smallest element whose min possible rank is too high
20:     if h is defined then a⁺ ← ρ[h]
21:     return (a⁻, a⁺)                        ▷ new active interval (answer not found yet)

22: function SELECT(stream σ, rank r, space bound s)
23:   (a⁻, a⁺, w) ← (−∞, +∞, ⊥)               ▷ (active interval, answer)
24:   while w = ⊥ do
25:     z ← SELECTIONPASS(σ, r, s, a⁻, a⁺)
26:     if z is a pair then (a⁻, a⁺) ← z else w ← z
27:   return w
Let ℓ_i = max{ j ∈ [s] : 2^{t_i}(t_i + j − 1) + 1 < r_i }. After the pass, we set a⁻ to the ℓ_i th element of the core ρ, i.e., to x_{ℓ_i}. Suppose that we are in the typical case, when ℓ_i is defined and 1 ≤ ℓ_i < s. By the maximality of ℓ_i, the maximum possible rank of x_{ℓ_i + 1} (as given by Lemma 11.2.3) is not too low, i.e.,
\[
  2^{t_i} (t_i + (\ell_i + 1) - 1) + 1 \ge r_i \,.
\]
Applying the lower bound in Lemma 11.2.3, we obtain
\[
  \operatorname{rank}(x_{\ell_i}; \sigma) \ge 2^{t_i} \ell_i \ge r_i - 2^{t_i} t_i - 1 \,.
\]
Analogously, let h_i = min{ j ∈ [s] : 2^{t_i} j > r_i }. We set a⁺ = x_{h_i} after the pass. By the minimality of h_i, 2^{t_i}(h_i − 1) ≤ r_i. Applying the upper bound in Lemma 11.2.3, we obtain
\[
  \operatorname{rank}(x_{h_i}; \sigma) \le 2^{t_i} (t_i + h_i - 1) + 1 \le r_i + 2^{t_i} t_i + 1 \,.
\]
Putting these together, the length m_{i+1} of the active substream for the next pass satisfies
\[
  m_{i+1} \le \operatorname{rank}(x_{h_i}; \sigma) - \operatorname{rank}(x_{\ell_i}; \sigma)
  \le 2 \bigl( 2^{t_i} t_i + 1 \bigr)
  = 2 \left( 2^{\lceil \log(m_i/s) \rceil} \left\lceil \log \frac{m_i}{s} \right\rceil + 1 \right)
  \le \frac{5 m_i \log m}{s} \,, \tag{11.3}
\]
where the final step tacitly uses m_i ≫ s. Initially, we have m₁ = m active tokens. Iterating eq. (11.3), we obtain
\[
  m_i = O\!\left( m \left( \frac{5 \log m}{s} \right)^{i-1} \right) .
\]
By the logic of line 10, Algorithm 19 executes its final pass—the pth, say—when m_p ≤ ⌈s log m⌉. For this value of p, we must have m((5 log m)/s)^{p−1} = Θ(s log m), i.e., s = Θ(m^{1/p} log^{1−2/p} m).
If we want to operate within a fixed number p of passes, we could give an alternative presentation of the Munro–Paterson algorithm that computes the chunk size s from m := length(σ) and p. For such a presentation, we would need to know m in advance. Suppose we then set s = Θ(m^{1/p} log^{1−2/p} m), as we just derived. The space used in each pass is dominated by the space requirements of C, the core computation data structure. By Theorem 11.3.1, this space is O(s log(m/s) log n) in every pass except the last. By the logic of line 10, the space is O(s log m log n) in the last pass. Putting these observations together gives us the following theorem.

Theorem 11.3.2. For each fixed p ≥ 1, the SELECTION problem, for a data stream consisting of m tokens from the universe [n], admits a p-pass algorithm that uses O(m^{1/p} log^{2−2/p} m log n) = Õ(m^{1/p}) bits of space.

Notice how this theorem trades off passes for space.
Strictly speaking, we have proved the above theorem only for streams whose tokens are distinct. If you wish to achieve a thorough understanding, you may want to extend the above algorithm to the general case.
Exercises
11-1 Give a detailed proof of Theorem 11.3.1, complete with pseudocode for the recursive algorithm described in the proof sketch.

11-2 We have stated Theorem 11.3.2 only for fixed p. Go through the analysis closely and show that if our goal is to compute the exact median of a length-m data stream in Õ(1) space, we can do so in O(log m / log log m) passes.
Unit 12

Geometric Streams and Coresets
The notes for this unit are in reasonable shape but I am not done vetting them. There may be instances of slight errors or lack of clarity. I am continually polishing these notes and when I’ve vetted them enough, I’ll remove this notice.
12.1 Extent Measures and Minimum Enclosing Ball
Up to this point, we have been interested in statistical computations. Our data streams have simply been "collections of tokens." We did not care about any structure amongst the tokens, because we were only concerned with functions of the token frequencies. The median and selection problems introduced some slight structure: the tokens were assumed to have a natural ordering. Streams represent data sets. The data points in many data sets do have natural structure. In particular, many data sets are naturally geometric: they can be thought of as points in R^d, for some dimension d. This motivates a host of computational geometry problems on data streams. We shall now study a simple problem of this sort, which is in fact a representative of a broader class of problems: namely, the estimation of extent measures of point sets. Broadly speaking, an extent measure estimates the "spread" of a collection of data points, in terms of the size of the smallest object of a particular shape that contains the collection. Here are some examples.

• Minimum bounding box (two variants: axis-parallel or not)

• Minimum enclosing ball, or MEB

• Minimum enclosing shell (a shell being the difference of two concentric balls)

• Minimum width (i.e., the minimum distance between two parallel hyperplanes that sandwich the data)

For this lecture, we focus on the MEB problem. Our solution will introduce the key concept of a coreset, which forms the basis of numerous approximation algorithms in computational geometry, including data stream algorithms for all of the above problems.
12.2 Coresets and Their Properties
The idea of a coreset was first formulated by Agarwal, Har-Peled, and Varadarajan [AHPV04]; the term "coreset" was not used in that work, but became popular later. The same authors have a survey [AHPV05] that is a great reference on the subject, with full historical context and a plethora of applications. Our exposition will be somewhat specialized to target the problems we are interested in.
Dartmouth: CS 35/135 Data Stream Algorithms
UNIT 12. GEOMETRIC STREAMS AND CORESETS
We define a "cost function" C to be a family of functions {C_P} parametrized by point sets P ⊆ R^d. For each P, we have a corresponding function C_P : R^d → R_+. We say that C is monotone if

∀ Q ⊆ P ⊆ R^d  ∀ x ∈ R^d :  C_Q(x) ≤ C_P(x) .

We are given a stream σ of points in R^d, and we wish to compute the minimum value of the corresponding cost function C_σ. To be precise, we want to estimate inf_{x∈R^d} C_σ(x); this will be our extent measure for σ. The minimum enclosing ball (or MEB) problem consists of finding the minimum of the cost function C_σ, where

C_σ(x) := max_{y∈σ} ‖x − y‖₂ ,    (12.1)

i.e., the radius of the smallest ball centered at x that encloses the points in σ.
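The cost function (12.1) is easy to state in code. The following Python sketch (function names are our own) evaluates C_σ(x), and, as a crude baseline, minimizes it over centers drawn from the data itself; restricting the center to data points loses at most a factor of 2, by the triangle inequality.

```python
import math

def meb_cost(points, x):
    # C_sigma(x): radius of the smallest ball centered at x enclosing all points
    return max(math.dist(x, y) for y in points)

def meb_radius_upper(points):
    # Brute-force baseline: try each data point as the center.  The true
    # optimal center need not be a data point, but this is a 2-approximation.
    return min(meb_cost(points, x) for x in points)
```

For example, on three points in the plane, `meb_radius_upper` returns the radius of the best data-point-centered ball.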
Definition 12.2.1. Fix a real number α ≥ 1 and a cost function C parametrized by point sets in R^d. We say Q is an α-coreset for P ⊆ R^d (with respect to C) if Q ⊆ P and

∀ T ⊆ R^d  ∀ x ∈ R^d :  C_{Q∪T}(x) ≤ C_{P∪T}(x) ≤ α · C_{Q∪T}(x) .    (12.2)
Clearly, if C is monotone, the left inequality always holds. The cost function for MEB, given by (12.1), is easily seen to be monotone.

Our data stream algorithm for estimating MEB will work as follows. First we shall show that, for small ε > 0, under the MEB cost function, every point set P ⊆ R^d has a (1 + ε)-coreset of size O(1/ε^{(d−1)/2}). The amazing thing is that the bound is independent of P. Then we shall give a data stream algorithm to compute a (1 + ε)-coreset of the input stream σ, using small space. Clearly, we can estimate the MEB of σ by computing the exact MEB of the coreset.

We now give names to three useful properties of coresets. The first two are universal: they hold for all coresets, under all cost functions C. The third happens to hold for the specific coreset construction we shall see later (but is also true for a large number of other coreset constructions).

Merge Property: If Q is an α-coreset for P and Q′ is a β-coreset for P′, then Q ∪ Q′ is an (αβ)-coreset for P ∪ P′.

Reduce Property: If Q is an α-coreset for P and R is a β-coreset for Q, then R is an (αβ)-coreset for P.

Disjoint Union Property: If Q, Q′ are α-coresets for P, P′ respectively, and P ∩ P′ = ∅, then Q ∪ Q′ is an α-coreset for P ∪ P′.

To repeat: every coreset satisfies the merge and reduce properties. The proof is left as an easy exercise.
12.3 A Coreset for MEB
For nonzero vectors u, v ∈ R^d, let ang(u, v) denote the angle between them, i.e.,

ang(u, v) := arccos( ⟨u, v⟩ / (‖u‖₂ ‖v‖₂) ) ,

where ⟨·, ·⟩ is the standard inner product. We shall call a collection of vectors {u₁, . . . , u_t} ⊆ R^d \ {0} a θ-grid if, for every nonzero vector x ∈ R^d, there is a j ∈ [t] such that ang(x, u_j) ≤ θ. We think of θ as being "close to zero." The following geometric theorem is well known; it is obvious for d = 2, but requires a careful proof for higher d.

Theorem 12.3.1. In R^d, there is a θ-grid consisting of O(1/θ^{d−1}) vectors. In particular, for R², this bound is O(1/θ).

Using this, we can construct a small coreset for MEB.

Theorem 12.3.2. In d ≥ 2 dimensions, the MEB cost function admits a (1 + ε)-coreset of size O(1/ε^{(d−1)/2}).
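For d = 2, a θ-grid as promised by Theorem 12.3.1 can be built explicitly: placing unit directions 2θ apart around the circle leaves every direction within θ of one of them, using ⌈π/θ⌉ = O(1/θ) vectors. A small Python sketch (our own naming):

```python
import math

def theta_grid_2d(theta):
    # Directions at angles 0, 2*theta, 4*theta, ...: any nonzero vector makes
    # an angle of at most theta with the nearest grid direction, using
    # O(1/theta) vectors, matching Theorem 12.3.1 for d = 2.
    t = math.ceil(math.pi / theta)
    return [(math.cos(2 * theta * i), math.sin(2 * theta * i)) for i in range(t)]

def ang(u, v):
    # ang(u, v): angle between nonzero vectors u and v
    dot = u[0] * v[0] + u[1] * v[1]
    c = dot / (math.hypot(*u) * math.hypot(*v))
    return math.acos(max(-1.0, min(1.0, c)))  # clamp against rounding
```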
Proof. Let {u₁, . . . , u_t} be a θ-grid in R^d, for a parameter θ = θ(ε) to be chosen later. By Theorem 12.3.1, we may take t = O(1/θ^{d−1}). Our proposed coreset for a point set P ⊆ R^d shall consist of the two extreme points of P along each of the directions u₁, . . . , u_t. To be precise, let P be given. We then define

Q := ⋃_{i=1}^{t} { arg max_{x∈P} ⟨x, u_i⟩ , arg min_{x∈P} ⟨x, u_i⟩ } .    (12.3)

We claim that Q is a (1 + ε)-coreset of P, for a suitable choice of θ.
Since the MEB cost function is monotone, the left inequality in (12.2) always holds. We prove the right inequality “by picture” (see below); making this rigorous is left as an exercise.
[Figure: a point x; the farthest point z ∈ P; the grid direction u_i; the point y on the ray from x along u_i with ‖x − y‖₂ = ‖x − z‖₂; and the projection z′ of z onto that ray.]
Take an arbitrary x ∈ R^d and T ⊆ R^d, and let z be the farthest point from x in the set P ∪ T. If z ∈ T, then

C_{Q∪T}(x) ≥ ‖x − z‖₂ = C_{P∪T}(x) .

Otherwise, we have z ∈ P. By the grid property, there is a direction u_i that makes an angle of at most θ with the vector from x to z. Let y be the point such that the vector from x to y is parallel to u_i and ‖x − y‖₂ = ‖x − z‖₂, and let z′ be the orthogonal projection of z onto the line through x and y. Then, Q contains a point from P whose orthogonal projection on this line lies to the right of z′ (by construction of Q) and to the left of y (because z is farthest). Therefore

C_{Q∪T}(x) ≥ C_Q(x) ≥ ‖x − z′‖₂ ≥ ‖x − z‖₂ cos θ = C_{P∪T}(x) cos θ .

Using sec θ ≤ 1 + θ² (which holds for θ small enough), we obtain C_{P∪T}(x) ≤ (1 + θ²) C_{Q∪T}(x). Since this holds for all x ∈ R^d, the right inequality in (12.2) holds with α = 1 + θ². Since we wanted Q to be a (1 + ε)-coreset, we may take θ = √ε. Finally, with this setting of θ, we have |Q| ≤ 2t = O(1/ε^{(d−1)/2}), which completes the proof.
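The construction in this proof is easy to exercise for d = 2. The sketch below (our own naming; points are tuples so they can be kept in a set) collects the extreme points of P along each direction of a √ε-grid, exactly as in (12.3):

```python
import math

def meb_coreset_2d(points, eps):
    # Extreme points of P along each direction of a sqrt(eps)-grid, as in
    # the proof of Theorem 12.3.2, specialized to d = 2.
    theta = math.sqrt(eps)
    t = math.ceil(math.pi / theta)
    dirs = [(math.cos(2 * theta * i), math.sin(2 * theta * i)) for i in range(t)]
    Q = set()
    for u in dirs:
        Q.add(max(points, key=lambda p: p[0] * u[0] + p[1] * u[1]))
        Q.add(min(points, key=lambda p: p[0] * u[0] + p[1] * u[1]))
    return Q

def meb_cost(points, x):
    return max(math.dist(x, y) for y in points)
```

With T = ∅, the coreset guarantee (12.2) says C_Q(x) ≤ C_P(x) ≤ (1 + ε) C_Q(x) for every query center x, which can be spot-checked numerically.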
12.4 Data Stream Algorithm for Coreset Construction
We now turn to algorithms. Fix a monotone cost function C for point sets in R^d, and consider the "C-minimization problem," i.e., the problem of estimating inf_{x∈R^d} C_σ(x).

Theorem 12.4.1. Suppose C admits (1 + ε)-coresets of size A(ε), and that these coresets have the disjoint union property. Then the C-minimization problem has a data stream algorithm that uses space O(A(ε/log m) · log m) and returns a (1 + ε)-approximation.

Proof. Our algorithm builds up a coreset for the input stream σ recursively from coresets for smaller and smaller substreams of σ, in a way that is strongly reminiscent of the Munro–Paterson algorithm for finding the median.
Set δ = ε/log m. We run a number of streaming algorithms in parallel, one at each "level": we denote the level-j algorithm by A_j. By design, A_j creates a virtual stream that is fed into A_{j+1}. Algorithm A₀ reads σ itself, placing each incoming point into a buffer of large enough size B. When this buffer is full, it computes a (1 + δ)-coreset of the points in the buffer, sends this coreset to A₁, and empties the buffer.

For j ≥ 1, A_j receives coresets, one at a time, from A_{j−1}. It maintains up to two such coresets. Whenever it has two of them, say P and P′, it computes a (1 + δ)-coreset of P ∪ P′, sends this coreset to A_{j+1}, and discards both P and P′. Thus, A₀ uses space O(B) and, for each j ≥ 1, A_j uses space O(A(δ)). The highest-numbered A_j that we need is at level ⌈log(m/B)⌉. This gives an overall space bound of O(B + A(ε/log m) · ⌈log(m/B)⌉), by our choice of δ.

Finally, by repeatedly applying the reduce property and the disjoint union property, we see that the final coreset, Q, computed at the highest ("root") level is an α-coreset, where

α = (1 + δ)^{1+⌈log(m/B)⌉} ≤ 1 + δ log m = 1 + ε .
To estimate inf_{x∈R^d} C_σ(x), we simply output inf_{x∈R^d} C_Q(x), which we can compute directly.

As a corollary, we see that we can estimate the radius of the minimum enclosing ball (MEB), in d dimensions, up to a factor of 1 + ε, by a data stream algorithm that uses space

O( (log m)^{(d+1)/2} / ε^{(d−1)/2} ) .

In two dimensions, this amounts to O(ε^{−1/2} log^{3/2} m).
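The merge-and-reduce scheme in the proof of Theorem 12.4.1 can be sketched generically. In the Python sketch below (our own naming), `coreset` stands for any routine mapping a list of points to a coreset of it; as a simplification not in the proof, the partial buffer and all pending levels are collapsed in a single final merge.

```python
def stream_coreset(stream, B, coreset):
    # Merge-and-reduce sketch: A_0 buffers B points; each level j >= 1 holds
    # at most one pending coreset and merges pairs upward.
    levels = []                       # levels[j]: pending coreset at level j, or None

    def push(j, Q):
        while len(levels) <= j:
            levels.append(None)
        if levels[j] is None:
            levels[j] = Q             # wait for a sibling at this level
        else:
            P, levels[j] = levels[j], None
            push(j + 1, coreset(P + Q))   # merge the pair and send it up

    buf = []
    for x in stream:
        buf.append(x)
        if len(buf) == B:
            push(0, coreset(buf))
            buf = []
    # Simplification for this sketch: one final merge of all leftovers.
    rest = buf + [p for Q in levels if Q is not None for p in Q]
    return coreset(rest) if rest else []
```

As a sanity check, taking `coreset` to be the exact two-point "coreset" for one-dimensional extent (the minimum and maximum) should recover the overall minimum and maximum of the stream.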
Exercises
12-1 In the proof of Theorem 12.3.2, the proof that the constructed set Q satisfies the right inequality in eq. (12.2) was done "by picture." Make this proof formal, i.e., do it using only algebraic or analytic steps, without appealing to intuitive reasoning involving terms like "to the left/right of."

12-2 Prove that the coreset for MEB constructed in the proof of Theorem 12.3.2 has the disjoint union property, described in Section 12.2.
Unit 13
Metric Streams and Clustering
In Unit 12, we considered streams representing geometric data, and considered one class of computations on such data: namely, estimating extent measures. The measure we studied in detail — minimum enclosing ball (MEB) — can be thought of as follows. The center of the MEB is a crude summary of the data stream, and the radius of the MEB is the cost of thus summarizing the stream. Often, our data is best summarized not by a single point, but by k points (k ≥ 1): one imagines that the data naturally falls into k clusters, each of which can be summarized by a representative point. For the problem we study today, these representatives will be required to come from the original data. In general, one can imagine relaxing this requirement. At any rate, a particular clustering has an associated summarization cost which should be small if the clusters have small extent (according to some extent measure) and large otherwise.
13.1 Metric Spaces
It turns out that clustering problems are best studied in a setting more general than the geometric one. The only aspect of geometry that matters for this problem is that we have a notion of "distance" between two points. This is abstracted out in the definition of a metric space, which we give below.

Definition 13.1.1. A metric space is a pair (M, d), where M is a nonempty set (of "points") and d : M × M → R_+ is a nonnegative-valued "distance" function satisfying the following properties for all x, y, z ∈ M.

1. d(x, y) = 0 ⇐⇒ x = y ;          (identity)
2. d(x, y) = d(y, x) ;              (symmetry)
3. d(x, y) ≤ d(x, z) + d(z, y) .    (triangle inequality)
Relaxing the first property to d(x, x) = 0 gives us a semimetric space instead.
A familiar example of a metric space is R^n, under the distance function d(x, y) = ‖x − y‖_p, where p ≥ 1. In fact, the case p = 2 (Euclidean distance) is especially familiar. Another example should be just about as familiar to computer scientists: take an (undirected) graph G = (V, E), let M = V, and for u, v ∈ V, define d(u, v) to be the length of the shortest path in G between u and v. At any rate, in this lecture, we shall think of the data stream as consisting of points in a metric space (M, d). The function d is made available to us through an oracle which, when queried with two points x, y ∈ M, returns the distance d(x, y) between them. To keep things simple, we will not bother with the issues of representing, in working memory, points in M or distance values. Instead, we will measure our space usage as the number of points and/or distances our algorithms store.
UNIT 13. METRIC STREAMS AND CLUSTERING
13.2 The Cost of a Clustering: Summarization Costs
Fix a metric space (M, d) and an integer k ≥ 1. Given a data set σ ⊆ M, we wish to cluster it into at most k clusters, and summarize it by choosing a representative from each cluster. Suppose our set of chosen representatives is R.
If we think of the elements of σ as being locations of "customers" seeking some service, and elements of R as locations of "service stations," then one reasonable objective to minimize is the maximum distance that a customer has to travel to receive service. This is formalized as the k-center objective.

If we think of the elements of R as being locations of conference centers (for a multi-location video conference) and elements of σ as being home locations for the participants at this conference, another reasonable objective to minimize is the total fuel spent in getting all participants to the conference centers. This is formalized as the k-median objective.

There is also another natural objective called k-means, which is even older, is motivated by statistics and machine learning applications, and dates back to the 1960s.
To formally define these objectives, extend the function d by defining

d(x, S) := min_{y∈S} d(x, y) ,

for x ∈ M and S ⊆ M. Then, having chosen representatives R, the best way to cluster σ is to assign each point in σ to its nearest representative in R. This then gives rise to (at least) the following three natural cost measures.

Δ_∞(σ, R) := max_{x∈σ} d(x, R) ;      (k-center)
Δ_1(σ, R) := Σ_{x∈σ} d(x, R) ;        (k-median)
Δ_2(σ, R) := Σ_{x∈σ} d(x, R)² .       (k-means)
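The three cost measures translate directly into code. A minimal Python sketch (our own naming; `d` is the distance oracle):

```python
def dist_to_set(d, x, S):
    # d(x, S) := min over y in S of d(x, y)
    return min(d(x, y) for y in S)

def k_center_cost(d, sigma, R):       # Delta_infinity
    return max(dist_to_set(d, x, R) for x in sigma)

def k_median_cost(d, sigma, R):       # Delta_1
    return sum(dist_to_set(d, x, R) for x in sigma)

def k_means_cost(d, sigma, R):        # Delta_2
    return sum(dist_to_set(d, x, R) ** 2 for x in sigma)
```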
Our goal is to choose R ⊆ σ with |R| ≤ k so as to minimize the cost Δ(σ, R). For the rest of this lecture we focus on the first of these costs (i.e., the k-center problem). We shall give an efficient data stream algorithm that reads σ as an input stream and produces a summary R whose cost is at most some constant α times the cost of the best summary. Such an algorithm is called an α-approximation. In fact, we shall give two such algorithms: the first will use just O(k) space and produce an 8-approximation. The second will improve the approximation ratio from 8 to 2 + ε, blowing up the space usage by a factor of about O(1/ε).

As noted earlier, when we produce a set of representatives, R, we have in fact produced a clustering of the data implicitly: to form the clusters, simply assign each data point to its nearest representative, breaking ties arbitrarily.
13.3 The Doubling Algorithm
We focus on the k-center problem. The following algorithm maintains a set R consisting of at most k representatives from the input stream; these representatives will be our cluster centers. The algorithm also maintains a "threshold" τ throughout; as we shall soon see, τ approximates the summarization cost Δ_∞(σ, R), which is the cost of the implied clustering.

To analyze this algorithm, we first record a basic fact about metric spaces and the cost function Δ_∞.

Lemma 13.3.1. Suppose x₁, . . . , x_{k+1} ∈ σ ⊆ M satisfy d(x_i, x_j) ≥ t for all distinct i, j ∈ [k + 1]. Then, for all R ⊆ M with |R| ≤ k, we have Δ_∞(σ, R) ≥ t/2.

Proof. Suppose, to the contrary, that there exists R ⊆ M with |R| ≤ k and Δ_∞(σ, R) < t/2. Then, by the pigeonhole principle, there exist distinct i, j ∈ [k + 1] such that rep(x_i, R) = rep(x_j, R) = r, say, where rep(x, R) denotes the representative in R nearest to x. Now

d(x_i, x_j) ≤ d(x_i, r) + d(x_j, r) < t/2 + t/2 = t ,

where the first inequality is a triangle inequality and the second follows from Δ_∞(σ, R) < t/2. But this contradicts the given property of {x₁, . . . , x_{k+1}}.
Algorithm 20 Doubling algorithm for metric k-center

Initialize:
1: R ← first k + 1 points in stream
2: (y, z) ← closest pair of points in R
3: τ ← d(y, z)                                   ▷ Initial threshold
4: R ← R \ {z}                                   ▷ Culling step: we may have at most k representatives

Process (token x):
5: if min_{r∈R} d(x, r) > 2τ then
6:     R ← R ∪ {x}
7:     while |R| > k do
8:         τ ← 2τ                                ▷ Raise (double) the threshold
9:         R ← maximal R′ ⊆ R such that ∀ r ≠ s ∈ R′ : d(r, s) ≥ τ   ▷ Culling step

Output: R
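Algorithm 20 translates almost line for line into Python. In the sketch below (our own naming), `d` is the distance oracle, and the maximal τ-separated subset in line 9 is found greedily, which is one valid way to realize "maximal."

```python
def doubling_k_center(stream, k, d):
    # Sketch of Algorithm 20.  Returns (R, tau) with |R| <= k; by the
    # analysis, the cost Delta_infty(sigma, R) is at most 2 * tau.
    it = iter(stream)
    R = [next(it) for _ in range(k + 1)]
    # initial threshold: distance between the closest pair of the first k+1 points
    tau, zi = min(
        ((d(R[i], R[j]), j) for i in range(k + 1) for j in range(i + 1, k + 1)),
        key=lambda pair: pair[0],
    )
    R.pop(zi)                         # culling step: keep at most k representatives
    for x in it:
        if min(d(x, r) for r in R) > 2 * tau:
            R.append(x)
            while len(R) > k:
                tau *= 2              # raise (double) the threshold
                culled = []
                for r in R:           # greedy maximal tau-separated subset
                    if all(d(r, s) >= tau for s in culled):
                        culled.append(r)
                R = culled
    return R, tau
```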
Next, we consider the algorithm's workings and establish certain invariants that it maintains.

Lemma 13.3.2. Algorithm 20 maintains the following invariants at the start of each call to the processing section.

1. (Separation invariant) For all distinct r, s ∈ R, we have d(r, s) ≥ τ.
2. (Cost invariant) We have Δ_∞(σ, R) ≤ 2τ.

Proof. At the end of initialization, Invariant 1 holds by definition of y, z, and τ. We also have Δ_∞(σ, R) = τ at this point, so Invariant 2 holds as well.

Suppose the invariants hold after we have read an input stream σ, and we are just about to process a new point x. Let us show that they continue to hold after processing x. Consider the case that the condition tested in line 5 does not hold. Then R and τ do not change, so Invariant 1 continues to hold. We do change σ by adding x to it. However, noting that d(x, R) ≤ 2τ, we have

Δ_∞(σ ∪ {x}, R) = max{Δ_∞(σ, R), d(x, R)} ≤ 2τ ,

and so, Invariant 2 also continues to hold.
Next, consider the case that the condition in line 5 holds. After line 6 executes, Invariant 1 continues to hold because the point x newly added to R satisfies its conditions. Further, Invariant 2 continues to hold since x is added to both σ and R, which means d(x, R) = 0 and d(y, R) does not increase for any y ∈ σ \ {x}; therefore Δ_∞(σ, R) does not change.

We shall now show that the invariants are satisfied after each iteration of the loop at line 7. Invariant 1 may be broken by line 8 but is explicitly restored in line 9. Immediately after line 8, with τ doubled, Invariant 2 is temporarily strengthened to Δ_∞(σ, R) ≤ τ. Now consider the set R′ computed in line 9. To prove that Invariant 2 holds after that line, we need to prove that Δ_∞(σ, R′) ≤ 2τ. Let u ∈ σ be an arbitrary data point. Then d(u, R) ≤ Δ_∞(σ, R) ≤ τ. Let r′ = rep(u, R) = arg min_{r∈R} d(u, r). If r′ ∈ R′, then d(u, R′) ≤ d(u, r′) = d(u, R) ≤ τ. Otherwise, by maximality of R′, there exists a representative s ∈ R′ such that d(r′, s) < τ. Now

d(u, R′) ≤ d(u, s) ≤ d(u, r′) + d(r′, s) < d(u, R) + τ ≤ 2τ .

Thus, for all x ∈ σ, we have d(x, R′) ≤ 2τ. Therefore, Δ_∞(σ, R′) ≤ 2τ, as required.
Having established the above properties, it is now simple to analyze the doubling algorithm.

Theorem 13.3.3. The doubling algorithm uses O(k) space and outputs a summary R whose cost is at most 8 times the optimum.

Proof. The space bound is obvious. Let R* be an optimum summary of the input stream σ, i.e., one that minimizes Δ_∞(σ, R*). Let R̂ and τ̂ be the final values of R and τ after processing σ. By Lemma 13.3.2 (Invariant 2), we have Δ_∞(σ, R̂) ≤ 2τ̂.
Let R̃ be the value of R just before the final culling step. Then |R̃| = k + 1 and R̃ ⊇ R̂. Further, the value of τ before it got doubled in that culling step was τ̂/2. By Lemma 13.3.2 (Invariant 1), every pair of distinct points in R̃ is at distance at least τ̂/2. Therefore, by Lemma 13.3.1, we have Δ_∞(σ, R*) ≥ τ̂/4.

Putting these together, we have Δ_∞(σ, R̂) ≤ 8 · Δ_∞(σ, R*).
13.4 Metric Costs and Threshold Algorithms
The following two notions are key for our improvement to the above approximation factor.
Definition 13.4.1. A summarization cost function Δ is said to be metric if for all streams σ, π and summaries S, T, we have

Δ(σ[S] ◦ π, T) − Δ(σ, S) ≤ Δ(σ ◦ π, T) ≤ Δ(σ[S] ◦ π, T) + Δ(σ, S) .    (13.1)
Here, σ [S] is the stream obtained by replacing each token of σ with its best representative from S.
Importantly, if we define "best representative" to be the "nearest representative," then the k-center cost function Δ_∞ is metric (an easy exercise). Also, because of the nature of the k-center cost function, we may as well replace σ[S] by S in (13.1).

Consider the following example of a stream, with '◦' representing "normal" elements in the stream and '⊗' representing elements that have been chosen as representatives. Suppose a clustering/summarization algorithm is running on this stream, has currently processed σ and computed its summary S, and is about to process the rest of the stream, π.

◦ ⊗ ◦ ◦ ⊗ ◦ ⊗ ◦ ◦ ◦ | ◦ ◦ ◦ ◦ ◦ ◦
└──────── σ ────────┘ └─── π ───┘
The definition of a metric cost function attempts to control the "damage" that would be done if the algorithm were to forget everything about σ at this point, except for the computed summary S. We think of T as summarizing the whole stream, σ ◦ π.

Definition 13.4.2. Let Δ be a summarization cost function and let α ≥ 1 be a real number. An α-threshold algorithm for Δ is one that takes as input a threshold t and a data stream σ, and does one of the following two things.

1. Produces a summary S; if so, we must have Δ(σ, S) ≤ αt.
2. Fails (producing no output); if so, we must have ∀ T : Δ(σ, T) > t.
The doubling algorithm contains the following simple idea for a 2-threshold algorithm for the k-center cost Δ_∞. Maintain a set S of representatives from σ that are pairwise more than 2t apart; if at any point we have |S| > k, then fail; otherwise, output S. Lemma 13.3.1 guarantees that this is a 2-threshold algorithm.
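This 2-threshold idea fits in a few lines of Python (our own naming; `d` is the distance oracle):

```python
def threshold_k_center(stream, k, t, d):
    # 2-threshold algorithm for Delta_infty: greedily keep points that are
    # more than 2t from all current representatives.
    S = []
    for x in stream:
        if all(d(x, s) > 2 * t for s in S):
            S.append(x)
            if len(S) > k:
                # k+1 points pairwise more than 2t apart: by Lemma 13.3.1,
                # every k-point summary has cost more than t.
                return None           # fail
    return S                          # every stream point is within 2t of S
```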
13.5 Guha's Cascading Algorithm
To describe Guha's algorithm, we generalize the k-center problem as follows. Our task is to summarize an input stream σ, minimizing a summarization cost given by Δ, which

• is a metric cost; and
• has an α-approximate threshold algorithm A, for some α ≥ 1.

As we have just seen, k-center has both these properties.
The idea behind Guha's algorithm is to run multiple copies of A in parallel, with geometrically increasing thresholds. Occasionally a copy of A will fail; when it does, we start a new copy of A with a much higher threshold to take over from the failed copy, using the failed copy's summary as its initial input stream. Here is an outline of the algorithm, which computes an (α + O(ε))-approximation of an optimal summary, for ε ≪ α. Let S* denote an optimal summary, i.e., one that minimizes Δ(σ, S*). It should be easy to flesh this out into complete pseudocode; we leave this as an exercise.
• Perform some initial processing to determine a lower bound, c, on Δ(σ, S*).
• Let p = ⌈log_{1+ε}(α/ε)⌉. From now on, keep p instances of A running at all times, with thresholds increasing geometrically by factors of (1 + ε). The lowest threshold is initially set to c(1 + ε).
• Whenever q ≤ p of the instances fail, start up q new instances of A, using the summaries from the failed instances to "replay" the stream so far. When an instance fails, kill all other instances with a lower threshold. Alternatively, we can pretend that when an instance with threshold c(1 + ε)^j fails, its threshold is raised to c(1 + ε)^{j+p}.
• Having processed all of σ, output the summary from the instance of A that has the lowest threshold.

13.5.1 Space Bounds
In the above algorithm, let s₀ denote the space required to determine the initial lower bound, c. Also, let s_A denote the space required by an instance of A; we assume that this quantity is independent of the threshold with which A is run. Then the space required by the above algorithm is

max{s₀, p · s_A} = O( s₀ + (s_A/ε) · log(α/ε) ) .

In the case of k-center, using the initialization section of the Doubling Algorithm to determine c gives us s₀ = O(k). Furthermore, using the 2-threshold algorithm given at the end of Section 13.4, we get s_A = O(k) and α = 2. Therefore, for k-center, we have an algorithm running in space O((k/ε) log(1/ε)).
13.5.2 The Quality of the Summary
Consider a run of Guha's algorithm on an input stream σ. Consider the instance of A that had the smallest threshold (among the non-failed instances) when the input ended. Let t be the final threshold being used by this instance. Suppose this instance had its threshold raised j times overall. Let σ_i denote the portion of the stream σ between the (i − 1)th and ith raising of the threshold, and let σ_{j+1} denote the portion after the last raising of the threshold. Then

σ = σ₁ ◦ σ₂ ◦ · · · ◦ σ_{j+1} .
Let S_i denote the summary computed by this instance of A after processing σ_i; then S_{j+1} is the final summary. During the processing of σ_i, the instance was using threshold t_i = t/(1 + ε)^{p(j−i+1)}. Since p = ⌈log_{1+ε}(α/ε)⌉, we have (1 + ε)^p ≥ α/ε, which gives t_i ≤ (ε/α)^{j−i+1} t. Now, by Property 1 of an α-threshold algorithm, we have

Δ(S_{i−1} ◦ σ_i, S_i) ≤ α(ε/α)^{j−i+1} t ,   for 1 ≤ i ≤ j + 1 ,    (13.2)

where we put S₀ = ∅. Since Δ is metric, by (13.1), after the simplification σ[S] = S, we have

Δ(σ₁ ◦ · · · ◦ σ_i, S_i) ≤ Δ(S_{i−1} ◦ σ_i, S_i) + Δ(σ₁ ◦ · · · ◦ σ_{i−1}, S_{i−1}) .    (13.3)
Using (13.3) repeatedly, we can bound the cost of the algorithm's final summary as follows.

Δ(σ, S_{j+1}) = Δ(σ₁ ◦ · · · ◦ σ_{j+1}, S_{j+1})
             ≤ Σ_{i=1}^{j+1} Δ(S_{i−1} ◦ σ_i, S_i)
             ≤ Σ_{i=1}^{j+1} α(ε/α)^{j−i+1} t        (by (13.2))
             ≤ αt · Σ_{i=0}^{∞} (ε/α)^i
             = (α + O(ε)) t .
Meanwhile, since t was the smallest threshold for a non-failed instance of A, we know that A fails when run with threshold t/(1 + ε). By Property 2 of an α-threshold algorithm, we have

Δ(σ, S*) ≥ t/(1 + ε) .
Strictly speaking, the above reasoning assumed that at least one instance of A failed while processing σ . But notice that, by our choice of c in Guha’s algorithm, the above inequality holds even if this isn’t true, because in that case, we would have t = c(1 + ε). Putting the last two inequalities together, we see that ∆(σ , S j+1 ) approximates the optimal cost ∆(σ , S∗ ) within a factor of (1 + ε)(α + O(ε)) = α + O(ε).
For k-center, since we have a 2-threshold algorithm, we get an overall approximation ratio of 2 + O(ε).
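Putting the pieces together, the cascading scheme for k-center can be sketched in Python. This is only a sketch under our own naming, with α = 2, the 2-threshold subroutine inlined, and one simplifying assumption not in the analysis above: a restarted instance is assumed not to fail again while replaying the old summary.

```python
import math

def guha_k_center(points, k, eps, d):
    # Sketch of Guha's cascading scheme for k-center (alpha = 2).
    points = list(points)             # offline list, to keep the sketch simple
    first = points[: k + 1]
    # lower bound c on the optimal cost, from the first k+1 points (Lemma 13.3.1)
    c = min(d(first[i], first[j])
            for i in range(k + 1) for j in range(i + 1, k + 1)) / 2
    p = math.ceil(math.log(2 / eps) / math.log(1 + eps))
    insts = [{"t": c * (1 + eps) ** (j + 1), "S": []} for j in range(p)]

    def feed(inst, x):
        # one step of the 2-threshold algorithm; False signals failure
        if all(d(x, s) > 2 * inst["t"] for s in inst["S"]):
            inst["S"].append(x)
            if len(inst["S"]) > k:
                return False
        return True

    for x in points:
        for inst in insts:
            if not feed(inst, x):
                old, inst["S"] = inst["S"], []
                inst["t"] *= (1 + eps) ** p       # jump p threshold levels
                for y in old + [x]:               # replay the failed summary
                    feed(inst, y)                 # (assumed not to fail again)
    best = min(insts, key=lambda i: i["t"])
    return best["S"]
```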
Exercises
13-1 Give a rigorous proof that the summarization cost function Δ_∞ (corresponding to the k-center objective) is metric, in the sense of Definition 13.4.1. Write out the steps of reasoning explicitly, using algebra. Your proof should explain how each nontrivial step was derived. At least one of the steps will use the triangle inequality in the underlying metric space.
Unit 14
Graph Streams: Basic Algorithms

14.1 Streams that Describe Graphs
We have been considering streams that describe data with some kind of structure, such as geometric structure, or more generally, metric structure. Another very important class of structured large data sets is large graphs. In this and the next few units, we shall study several streaming graph algorithms: in each case, the input is a stream that describes a graph. Since the terminology of graph theory is not totally uniform in the computer science and mathematics literature, it is useful to clearly define our terms. Definition 14.1.1. A graph (respectively, digraph) is a pair (V, E) where V is a nonempty finite set of vertices and E is a set of edges. Each edge is an unordered (respectively, ordered) pair of distinct vertices. If G is a graph or digraph, we denote its vertex set by V (G) and its edge set by E(G). Most streaming algorithms in the literature handle only undirected graphs (which, as our definition suggests, will simply be called “graphs”), although it makes sense to study algorithms that handle directed graphs (digraphs) too. Notice that our definition forbids parallel edges and loops in a graph. Several interesting algorithmic questions involve graphs where each edge has a realvalued weight or an integervalued multiplicity. In all graph algorithmic problems we study, the vertex set V (G) of the input graph G is known in advance, so we assume that V (G) = [n], for some (large) integer n. The stream’s tokens describe the edge set E(G) in one of the following ways. • In a vanilla or insertiononly graph stream, each token is an ordered pair (u, v) ∈ [n]×[n], such that the corresponding unordered pair {u, v} ∈ E(G). We assume that each edge of G appears exactly once in the stream. There is no easy way to check that this holds, so we have to take this as a promise. While n is known beforehand, the stream length m, which also equals E(G), is not. 
• In a vanilla or insertion-only graph stream, each token is an ordered pair (u, v) ∈ [n] × [n] such that the corresponding unordered pair {u, v} ∈ E(G). We assume that each edge of G appears exactly once in the stream. There is no easy way to check that this holds, so we have to take this as a promise. While n is known beforehand, the stream length m, which also equals |E(G)|, is not.

• In a vanilla weighted graph stream, with weights lying in some set S, each token is a triple (u, v, w) ∈ [n] × [n] × S and indicates that {u, v} is an edge of G with weight w.

• A dynamic graph stream represents an evolving graph, where edges can come and go. Each token is a triple (u, v, b) ∈ [n] × [n] × {−1, 1} and indicates that an edge {u, v} is being inserted into the graph if b = 1 and deleted from the graph if b = −1. It is promised that an edge being inserted isn't already in the graph and an edge being deleted is definitely in the graph.

• A turnstile graph stream represents an evolving multigraph. Each token is a triple (u, v, Δ) ∈ [n] × [n] × Z and indicates that Δ is being added to the multiplicity of edge {u, v}. Edges that end up with a negative multiplicity usually don't make graph-theoretic sense, so it may be convenient to assume a promise that that doesn't happen.
UNIT 14. GRAPH STREAMS: BASIC ALGORITHMS
In this unit, we will only study algorithms in a vanilla (i.e., insertiononly) streaming setting.
14.1.1 Semi-Streaming Space Bounds
Unfortunately, most of the interesting things we may want to compute for a graph provably require Ω(n) space in a streaming model, even allowing multiple passes over the input stream. These include such basic questions as “Is G connected?” and even “Is there a path from u to v in G?” where the vertices u and v are known beforehand. We shall prove such results when we study lower bounds, in later units.
Therefore, we have to reset our goal. Where (log n)^{O(1)} space used to be the holy grail for basic data stream algorithms, for several graph problems, the quest for a good space bound has to stop at O(n polylog n) space. Algorithms achieving such a space bound have come to be known as "semi-streaming" algorithms; alternatively, we say that such an algorithm runs in "semi-streaming space." Note that an algorithm that guarantees a space bound of O(n^α) for any constant α < 2 is already achieving sublinear space when the input graph is dense enough, because m could be as high as Ω(n²).

14.2 The Connectedness Problem
Our first problem is CONNECTEDNESS: decide whether or not the input graph G, which is given by a stream of edges, is connected. This is a Boolean problem—the answer is either 0 (meaning "no") or 1 (meaning "yes")—and so we require an exact answer. We could consider randomized algorithms, but we won't need to.

For this problem, as well as all others in this unit, the algorithm will consist of maintaining a subgraph of G satisfying certain conditions. For CONNECTEDNESS, the idea is to maintain a spanning forest, F. As G gets updated, F might or might not become a tree at some point. Clearly G is connected iff it does. The algorithm below maintains F as a set of edges. The vertex set is always [n].

Algorithm 21 Graph connectivity

Initialize:
1: F ← ∅, flag ← 0

Process (token {u, v}):
2: if ¬flag and F ∪ {{u, v}} does not contain a cycle then
3:     F ← F ∪ {{u, v}}
4:     if |F| = n − 1 then flag ← 1

Output: flag
We have already argued the algorithm's correctness. Its space usage is easily seen to be O(n log n), since we always have |F| ≤ n − 1, and each edge of F requires at most 2⌈log n⌉ = O(log n) bits to describe. The well known UNION-FIND data structure can be used to do the work in the processing section quickly. To test acyclicity of F ∪ {{u, v}}, we simply check whether root(u) and root(v) are distinct in the data structure.
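Algorithm 21, with the UNION-FIND shortcut for the acyclicity test, can be sketched in Python as follows (our own naming; vertices are numbered 0, . . . , n − 1 rather than 1, . . . , n):

```python
class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))

    def root(self, u):
        while self.parent[u] != u:
            self.parent[u] = self.parent[self.parent[u]]   # path halving
            u = self.parent[u]
        return u

    def union(self, u, v):
        ru, rv = self.root(u), self.root(v)
        if ru == rv:
            return False              # edge would close a cycle: not added to F
        self.parent[ru] = rv
        return True

def connected(n, edges):
    # Streaming connectivity (Algorithm 21); `edges` is the stream of tokens.
    uf = UnionFind(n)
    forest_size = 0                   # |F|
    for u, v in edges:
        if uf.union(u, v):
            forest_size += 1
            if forest_size == n - 1:
                return True           # flag <- 1: F is now a spanning tree
    return forest_size == n - 1
```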
14.3 The Bipartiteness Problem
A bipartite graph is one whose vertices can be partitioned into two disjoint sets—L and R, say—so that every edge is between a vertex in L and a vertex in R. Equivalently, a bipartite graph is one whose vertices can be properly colored using two colors.¹ Our next problem is BIPARTITENESS: determine whether or not the input graph G is bipartite.

¹A coloring is proper if, for every edge e, the endpoints of e receive distinct colors.
Note that being bipartite is a monotone property (just as connectedness is): that is, given a nonbipartite graph, adding edges to it cannot make it bipartite. Therefore, once a streaming algorithm detects that the edges seen so far make the graph nonbipartite, it can stop doing more work. Here is our proposed algorithm.
Algorithm 22 Bipartiteness testing

Initialize:
1: F ← ∅, flag ← 1

Process (token {u, v}):
2: if flag then
3:     if F ∪ {{u, v}} does not contain a cycle then
4:         F ← F ∪ {{u, v}}
5:     else if F ∪ {{u, v}} contains an odd cycle then
6:         flag ← 0

Output: flag
Just like our CONNECTEDNESS algorithm before, this one also maintains the invariant that F is a subgraph of G and is a forest. Therefore it uses O(n log n) space. Its correctness is guaranteed by the following theorem.

Theorem 14.3.1. Algorithm 22 outputs 1 (meaning "yes") iff the input graph G is bipartite.

Proof. Suppose the algorithm outputs 0. Then G must contain an odd cycle. This odd cycle does not have a proper 2-coloring, so neither does G. Therefore G is not bipartite.

Next, suppose the algorithm outputs 1. Let χ : [n] → {0, 1} be a proper 2-coloring of the final forest F (such a χ clearly exists, since forests are bipartite). We claim that χ is also a proper 2-coloring of G, which would imply that G is bipartite and complete the proof.

To prove the claim, consider an edge e = {u, v} of G. If e ∈ F, then we already have χ(u) ≠ χ(v). Otherwise, F ∪ {e} must contain an even cycle. Let π be the path in F obtained by deleting e from this cycle. Then π runs between u and v and has odd length. Since every edge on π is properly colored by χ, we again get χ(u) ≠ χ(v).

The above algorithm description and analysis focuses on space complexity (for a reason: it is the main issue here) and does not address the time required to process each token. It is a good exercise to figure out how to make this processing time efficient as well.
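One standard way to make token processing fast, not spelled out in the text, is union-find augmented with parity bits: each vertex stores its color relative to its parent, so an edge whose endpoints land in the same component with equal relative colors certifies an odd cycle. A Python sketch (our own naming; vertices numbered 0, . . . , n − 1):

```python
def bipartite(n, edges):
    # Streaming bipartiteness via union-find with parity bits.
    parent = list(range(n))
    parity = [0] * n                  # color of each vertex relative to its parent

    def find(u):
        if parent[u] == u:
            return u, 0
        r, p = find(parent[u])
        parent[u] = r                 # path compression
        parity[u] ^= p                # re-express parity relative to the root
        return r, parity[u]

    for u, v in edges:
        ru, pu = find(u)
        rv, pv = find(v)
        if ru == rv:
            if pu == pv:              # same color in same component: odd cycle
                return False          # flag <- 0
        else:
            parent[ru] = rv
            parity[ru] = pu ^ pv ^ 1  # force u and v to get opposite colors
    return True
```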
14.4 Shortest Paths and Distance Estimation via Spanners
Now consider the problem of estimating distances in G. Recall that every graph naturally induces a metric on its vertex set: given two vertices x, y ∈ V(G), the distance dG(x, y) between them is the length of the shortest x-to-y path in G:

dG(x, y) := min{ length(π) : π is a path in G from x to y } ,   (14.1)
where the minimum of an empty set defaults to ∞ (i.e., dG(x, y) = ∞ if there is no x-to-y path). In a streaming setting, our problem is to process an input (vanilla) graph stream describing G and build a data structure using which we can then answer distance queries of the form "what is the distance between x and y?"

The following algorithm computes an estimate d̂(x, y) for the distance dG(x, y). It maintains a suitable subgraph H of G which, as we shall see, satisfies the following property:

∀ x, y ∈ [n] :  dG(x, y) ≤ dH(x, y) ≤ t · dG(x, y) ,   (14.2)

where t ≥ 1 is an integer constant. The algorithm can then report d̂(x, y) = dH(x, y), and this answer will be correct up to an approximation factor of t. A subgraph H satisfying eq. (14.2) is called a t-spanner of G. Note that the left inequality trivially holds for every subgraph H of G.
Algorithm 23 Distance estimation using a t-spanner
Initialize:
 1: H ← ∅
Process (token {u, v}):
 2: if dH(u, v) ≥ t + 1 then
 3:   H ← H ∪ {{u, v}}
Output (query (x, y)):
 4: report d̂(x, y) = dH(x, y)
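Here is a minimal Python sketch of Algorithm 23 (class and method names are ours; vertices are 0-indexed). It stores H as an adjacency list and computes dH by breadth-first search on demand, so it prioritizes clarity over per-token processing time.

```python
class Spanner:
    """Maintains a t-spanner H of the streamed graph; queries report dH."""

    def __init__(self, n, t):
        self.t = t
        self.adj = {v: set() for v in range(n)}

    def _dist(self, x, y):
        # BFS distance in the current spanner H (infinity if disconnected).
        if x == y:
            return 0
        seen, frontier, d = {x}, [x], 0
        while frontier:
            d += 1
            nxt = []
            for u in frontier:
                for w in self.adj[u]:
                    if w == y:
                        return d
                    if w not in seen:
                        seen.add(w)
                        nxt.append(w)
            frontier = nxt
        return float('inf')

    def process(self, u, v):
        # Keep the edge only if it improves some distance beyond factor t.
        if self._dist(u, v) >= self.t + 1:
            self.adj[u].add(v)
            self.adj[v].add(u)

    def query(self, x, y):
        return self._dist(x, y)
```

For instance, with t = 3, streaming the four edges of a 4-cycle causes the last edge to be dropped, and the query for its endpoints returns 3, within the promised factor t of the true distance 1.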
14.4.1 The Quality of the Estimate
Theorem 14.4.1. The final graph H constructed by Algorithm 23 is a t-spanner of G. Therefore, the estimate d̂(x, y) is a t-approximation to the actual distance dG(x, y): more precisely, it lies in the interval [dG(x, y), t · dG(x, y)].

Proof. Pick any two distinct vertices x, y ∈ [n]. We shall show that eq. (14.2) holds. If dG(x, y) = ∞, then G has no x-to-y path, so neither does its subgraph H, whence dH(x, y) = ∞ as well. Otherwise, let π be the shortest x-to-y path in G and let x = v0, v1, v2, . . . , vk = y be the vertices on π, in order. Then dG(x, y) = k.

Pick an arbitrary i ∈ [k], and let e = {v_{i−1}, v_i}. If e ∈ H, then dH(v_{i−1}, v_i) = 1. Otherwise, e ∉ H, which means that at the time when e appeared in the input stream, we had dH′(v_{i−1}, v_i) ≤ t, where H′ was the value of H at that time. Since H′ is a subgraph of the final H, we have dH(v_{i−1}, v_i) ≤ t. Thus, in both cases, we have dH(v_{i−1}, v_i) ≤ t. By the triangle inequality, it now follows that

dH(x, y) ≤ ∑_{i=1}^{k} dH(v_{i−1}, v_i) ≤ tk = t · dG(x, y) ,

which completes the proof, and hence implies the claimed quality guarantee for the algorithm.
14.4.2 Space Complexity: High-Girth Graphs and the Size of a Spanner
How much space does Algorithm 23 use? Clearly, the answer is O(|H| log n), where |H| is the number of edges in the final graph H constructed by it. To estimate |H|, we note that, by construction, the shortest cycle in H has length at least t + 2. We can then appeal to a result in extremal graph theory to upper bound |H|.

The girth γ(G) of a graph G is defined to be the length of its shortest cycle; we set γ(G) = ∞ if G is acyclic. As noted above, the graph H constructed by our algorithm has γ(H) ≥ t + 2. The next theorem places an upper bound on the size of a graph with high girth.

Theorem 14.4.2. For sufficiently large n, if the n-vertex graph G has m edges and γ(G) ≥ k, for an integer k, then m ≤ n + n^{1+1/⌊(k−1)/2⌋}.
Proof. Let d := 2m/n be the average degree of G. If d ≤ 3, then m ≤ 3n/2 and we are done. Otherwise, let F be the subgraph of G obtained by repeatedly deleting from G all vertices of degree less than d/2. Then F has minimum degree at least d/2, and F is nonempty, because the total number of edges deleted is less than n · d/2 = m. Put ℓ = ⌊(k − 1)/2⌋. Clearly, γ(F) ≥ γ(G) ≥ k. Therefore, for any vertex v of F, the ball in F centered at v and of radius ℓ is a tree (if not, F would contain a cycle of length at most 2ℓ ≤ k − 1). By the minimum degree property of F, when we root this tree at v, its branching factor is at least d/2 − 1 ≥ 1. Therefore, the tree has at least (d/2 − 1)^ℓ vertices. It follows that

n ≥ (d/2 − 1)^ℓ = (m/n − 1)^ℓ ,

which implies m ≤ n + n^{1+1/ℓ}, as required.
Using ⌊(k − 1)/2⌋ ≥ (k − 2)/2, we can weaken the above bound to m = O(n^{1+2/(k−2)}).
Plugging in k = t + 2, we see that the t-spanner H constructed by Algorithm 23 has |H| = O(n^{1+2/t}). Therefore, the space used by the algorithm is O(n^{1+2/t} log n). In particular, we can 3-approximate all distances in a graph by a streaming algorithm in space Õ(n^{5/3}). Incidentally, more precise bounds on the size of a high-girth graph are known, though they do not lead to any asymptotic improvement in this space complexity analysis. See the paper by Alon, Hoory and Linial [AHL02] and the references therein.
Exercises
14-1 Suppose that we have a vanilla weighted graph stream, where each token is a triple (u, v, w_{uv}), specifying an edge {u, v} and its weight w_{uv} ∈ [W]. This number W is an integer parameter. Distances in G are defined using weighted shortest paths, i.e.,

d_{G,w}(x, y) := min{ ∑_{e∈π} w_e : π is a path from x to y } .

Give an algorithm that processes G using space Õ(n^{1+2/t} log W) so that, given x, y ∈ V(G), we can then return a (2t)-approximation of d_{G,w}(x, y). Give careful proofs of the quality and space guarantees of your algorithm.
Unit 15

Finding Maximum Matchings
In Unit 14, we considered graph problems that admitted especially simple streaming algorithms. Now we turn to maximum matching, where we will see a more sophisticated solution. Matchings are well-studied in the classical theory of algorithm design, and Edmonds's polynomial-time algorithm for the problem [Edm65] remains one of the greatest achievements in the field. For graphs described by streams, we cannot afford computations of the type performed by Edmonds's algorithm, but it turns out that we can achieve low (semi-streaming) space if we settle for approximation.
15.1 Preliminaries
Let G = (V, E) be a graph. A matching in G is a set M ⊆ E such that no two edges in M touch, i.e., e ∩ f = ∅ for all e ≠ f ∈ M. If G is weighted, with weight function wt : E → R giving a weight to each edge, then the weight of a subset S ⊆ E of edges is defined by

wt(S) = ∑_{e∈S} wt(e) .
In particular, this defines the weight of every matching.
In the maximum-cardinality matching (MCM) problem, the goal is to find a matching in the input graph G that has maximum possible cardinality. In the maximum-weight matching (MWM) problem, the goal is to find a matching in the weighted input graph G that has maximum possible weight; we assume, without loss of generality, that edge weights are positive. Clearly, MCM is a special case of MWM where every edge has weight 1. A particular matching M of G is said to be an A-approximate MWM, where A > 0 is a real number, if

wt(M*)/A ≤ wt(M) ≤ wt(M*) ,

where M* is an MWM of G. This also defines A-approximate MCMs. The right inequality is trivial, of course; it's all about the left inequality. In this unit, the input graph will be given as a vanilla stream or a vanilla weighted stream (as defined in Section 14.1), depending on the problem at hand.
15.2 Maximum Cardinality Matching
Let us briefly consider the MCM problem, before moving on to the more interesting weighted case. It should be said that in traditional (non-streaming) algorithms, MCM is already a rich problem calling for deep algorithmic ideas. However, in a one-pass semi-streaming setting, essentially the only algorithm we know is the following extremely simple one.
Algorithm 24 Greedy algorithm for maximum-cardinality matching (MCM)
Initialize:
 1: M ← ∅   ▷ invariant: M will always be a matching
Process (token {u, v}):
 2: if M ∪ {{u, v}} is a matching then
 3:   M ← M ∪ {{u, v}}
Output: M
A matching M of G is said to be maximal if no edge of G can be added to M to produce a larger matching. Note that this is not the same as saying that M is an MCM. For instance, if G is a path of length 3, then the singleton set consisting of the middle edge is a maximal matching but obviously not an MCM. It is easy to see that Algorithm 24 maintains a maximal matching of the input graph. This immediately implies that it uses at most O(n log n) space, since we have |M| ≤ n/2 at all times. The quality analysis is not much harder.

Theorem 15.2.1. The output of Algorithm 24 is a 2-approximate MCM.

Proof. Let M* be an MCM and suppose, for contradiction, that |M| < |M*|/2. Each edge in M "blocks" (prevents from being added to M) at most two edges in M*, so fewer than |M*| edges of M* are blocked. Therefore, there exists an unblocked edge in M* that could have been added to M, contradicting the maximality of M.

We won't discuss MCM any further, but we note that if we are prepared to use more passes over the stream, it is possible to improve the approximation factor. There are various ways to do so and they all use more sophisticated ideas. Of these, one set of algorithms [?, ?] uses the standard combinatorial idea of improving a matching M by finding an augmenting path, which is a path whose edges are alternately in M and not in M, beginning and ending in unmatched vertices. A different set of algorithms [AG13] uses linear programming duality. A key takeaway is that for each ε > 0, we can produce a (1 + ε)-approximate MCM using p_ε passes, where p_ε is a parameter depending only on ε. It remains open whether p_ε can be made to be at most poly(ε^{-1}).
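Algorithm 24 is short enough to state in a few lines of Python (function name is ours). Tracking matched vertices in a set makes the "is M ∪ {e} still a matching?" test O(1) per token.

```python
def greedy_matching(edge_stream):
    """One-pass greedy MCM: keep an edge iff both endpoints are unmatched."""
    matching, matched = [], set()
    for u, v in edge_stream:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching
```

On the length-3 path streamed middle edge first, greedy_matching([(2, 3), (1, 2), (3, 4)]) returns just the middle edge, half the size of the MCM {{1,2},{3,4}}, matching the factor-2 bound of Theorem 15.2.1.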
15.3 Maximum Weight Matching
We now turn to weighted graphs, where the goal is to produce a matching of high weight, specifically, an A-approximate MWM. Although it will be useful to think of an ideal model where edge weights are positive reals, for space analysis purposes we think of edge weights as lying in [W], for some positive integer W. This is reasonable, because we can approximate real numbers using rationals and then clear denominators.

The algorithm uses a strategy of eviction: it maintains a current matching, based on edges seen so far. A newly seen edge might be in conflict with some edges (at most two) from the current matching. This is undesirable if the new edge is significantly better than the ones it's in conflict with, so in such a case, we take in the new edge and evict those conflicting edges from the current matching. The full logic is given in Algorithm 25.

Algorithm 25 Eviction algorithm for maximum-weight matching (MWM)
Initialize:
 1: M ← ∅   ▷ invariant: M will always be a matching
Process (token (u, v, w)):
 2: C ← {e ∈ M : e ∩ {u, v} ≠ ∅}   ▷ edges conflicting with the current edge; note that |C| ≤ 2
 3: if w > (1 + α) wt(C) then   ▷ current edge is too good; take it and evict C
 4:   M ← (M \ C) ∪ {{u, v}}
Output: M
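A minimal Python sketch of Algorithm 25 follows (function name and representation are ours). Indexing M by matched vertex makes finding the conflicting set C an O(1) operation.

```python
def eviction_mwm(stream, alpha):
    """One-pass eviction algorithm for MWM; stream yields (u, v, w) tokens."""
    M = {}                                     # vertex -> (edge, weight)
    for u, v, w in stream:
        C = {M[x] for x in (u, v) if x in M}   # conflicting edges, |C| <= 2
        if w > (1 + alpha) * sum(wt for _, wt in C):
            for e, _ in C:                     # evict the conflicting edges
                for x in e:
                    M.pop(x, None)
            edge = frozenset((u, v))
            M[u] = M[v] = (edge, w)            # take the new, heavier edge
    return {e: wt for e, wt in M.values()}
```

For example, with α = 0.5 the stream (1,2,10), (2,3,20), (3,4,5) first takes {1,2}, then evicts it for {2,3} (since 20 > 1.5 · 10), and rejects {3,4}.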
The logic of Algorithm 25 makes it clear that M is always a matching. In particular, |M| ≤ n/2. Therefore, the algorithm stores at most O(n) edges and their weights, leading to a space usage of O(n(log n + log W)) = Õ(n), which makes it a semi-streaming algorithm.

Theorem 15.3.1. Let ŵ be the weight of the matching output by Algorithm 25 and let M* be a maximum-weight matching of the input graph G. Then,

wt(M*)/c_α ≤ ŵ ≤ wt(M*) ,

where c_α is a constant, depending only on α. Thus, the algorithm outputs a c_α-approximate MWM.
Proof. As the algorithm processes tokens, we say that an edge e is born when it is added to M; we say that e is killed by the edge f when it is removed from M upon processing f; we say that e survives if it is born and never killed; and we say that e is unborn if, when it is processed, it is not added to M.
We can define a parent-child relation among the born edges in E(G) by defining an edge's parent to be its killer (if it has one). This organizes the born edges into a collection of rooted trees that we call killing trees: each survivor is the root of such a tree, killed edges are non-root nodes in the trees, and unborn edges simply do not appear in the trees. Let S be the set of all survivors, let T(e) be the set of all strict descendants of e in its killing tree, and let T(S) = ∪_{e∈S} T(e). The proof of the theorem is based on two key claims about these trees.

Claim 2. The total weight of killed edges is bounded as follows: wt(T(S)) ≤ wt(S)/α.

Proof. By the logic of line 3 in Algorithm 25, for each edge e that is born,

wt(e) > (1 + α) ∑_{f child of e} wt(f) .

Let D_i(e) := {f : f is a level-i descendant of e}. Applying the above fact repeatedly, wt(D_i(e)) < wt(e)/(1 + α)^i, so that

wt(T(e)) = ∑_{i≥1} wt(D_i(e)) < wt(e) ∑_{i≥1} (1 + α)^{-i} = wt(e)/α .

Summing this over all survivors e ∈ S yields wt(T(S)) ≤ wt(S)/α.
Suppose ε > 0, and we run additional passes of an α-improving algorithm with the setting α = ε/3 (the first pass still uses α = 1/√2), using the output of the ith pass as the initial M for the (i + 1)th pass. Let M_i denote the output of the ith pass. We stop after the (i + 1)th pass when

wt(M_{i+1})/wt(M_i) ≤ 1 + α³/(1 + 2α + α² − α³) , where α = ε/3 .

Prove that this multi-pass algorithm makes O(ε^{-3}) passes and outputs a (2 + ε)-approximation to the MWM.
Unit 16

Graph Sketching
The graph algorithms in Unit 14 used especially simple logic: each edge was either retained or discarded, based on some easy-to-check criterion. This simplicity came at the cost of not being able to handle edge deletions. In this unit, we consider graph problems in a turnstile stream setting, so a token can either insert or delete an edge or, more generally, increase or decrease its multiplicity. Algorithms that can handle such streams use the basic observation, made back in Unit 5, that linear sketches naturally admit turnstile updates. The cleverness in these algorithms lies in constructing the right linear sketch. Such algorithms were first developed in the influential paper of Ahn, Guha, and McGregor [AGM12], and their underlying sketching technique has come to be known as the "AGM sketch." The sketch then allows us to solve CONNECTEDNESS and BIPARTITENESS (plus many other problems) in turnstile graph streams, using only semi-streaming space.
16.1 The Value of Boundary Edges
Our algorithms for CONNECTEDNESS and BIPARTITENESS will be based on the primitive of sampling an edge from the boundary of a given set of vertices. Later, we shall introduce a stream-friendly data structure, which will be a linear sketch, that implements this primitive.

Definition 16.1.1. Let G = (V, E) be a graph. For each S ⊆ V, the boundary of S is defined to be

∂S := {e ∈ E : |e ∩ S| = 1} ,

i.e., the set of edges that have exactly one endpoint in S. If G′ is a multigraph, we define boundaries based on the underlying simple graph consisting of the edges of G′ that have nonzero multiplicity.

In the trivial cases S = ∅ and S = V, we have ∂S = ∅. We shall say that S is a nontrivial subset of vertices if ∅ ≠ S ≠ V. Here is a simple lemma about such subsets that you can prove yourself.

Lemma 16.1.2. Let S be a nontrivial subset of vertices of G. Then ∂S = ∅ iff S is a union of some, but not all, connected components of G. In particular, if G is connected, then ∂S ≠ ∅.

Assume, now, that we have somehow processed our turnstile graph stream to produce a data structure B that can be queried for boundary edges. Specifically, when B is queried with parameter S, it either returns some edge in ∂S (possibly a random edge, according to some distribution) or declares that ∂S = ∅. We shall call B a boundary sampler for G. Let us see how we can use B to solve our graph problems.
16.1.1 Testing Connectivity Using Boundary Edges
To gain some intuition for why finding boundary edges is useful, let us consider an input graph G = (V, E) that is connected.
Suppose we have a partition of V into k ≥ 2 nonempty subsets C1, . . . , Ck, which we shall call clusters. Lemma 16.1.2 says that every one of these clusters must have a nonempty boundary. Suppose we query B with parameters C1, . . . , Ck (one by one) and collect the returned edges e1, . . . , ek. Each of these edges must necessarily be inter-cluster, and we can use these edges to merge clusters that are connected to one another. If (the subgraph induced by) each Ci happened to be connected to begin with, then after these mergers we again have a partition into connected clusters.
By starting with a partition into singleton clusters—which is trivially a partition into connected clusters—and iterating the above boundary query and merge procedure, we will eventually merge everything into one single cluster. At this point, we have convinced ourselves that G is indeed connected. In fact, we can say more: the edges we obtained by querying B suffice to connect up all the vertices in G and therefore contain a spanning tree of G.
We formalize the above intuition in Algorithm 26, which computes a spanning forest of a general (possibly disconnected) graph, given access to a boundary sampler B as described above. Note that this is not a streaming algorithm! It is to be called after the graph stream has been processed and B has been created. As usual, the vertex set of the input graph is assumed to be [n].

Algorithm 26 Compute edges of spanning forest using boundary sampler
 1: function SPANNINGFOREST(vertex set V, boundary sampler B)
 2:   F ← ∅
 3:   C ← {{v} : v ∈ V}   ▷ initial partition into singletons
 4:   repeat
 5:     query B(C) for each C ∈ C and collect returned edges in H   ▷ B(C) yields an arbitrary edge in ∂C
 6:     F ← spanning forest of graph (V, F ∪ H)
 7:     C ← {V(T) : T is a connected component of (V, F)}   ▷ new partition after merging clusters
 8:   until H = ∅   ▷ stop when no new boundary edges found
 9:   return F

Consider how Algorithm 26 handles a connected input graph G. Let F_i and C_i denote the values of F and C at the start of the ith iteration of the loop at line 4. Let H_i denote the set H collected in the ith iteration. Let k_i = |C_i|. Equivalently, k_i is the number of connected components of the forest (V, F_i). Then, k_{i+1} is exactly the number of connected components of the graph (C_i, H_i) that treats each cluster as a "supervertex." If k_i > 1, then by Lemma 16.1.2, this k_i-vertex graph has no isolated vertices, so k_{i+1} ≤ k_i/2. (It is instructive to spell out when exactly we would have k_{i+1} = k_i/2.) Since k_1 = n, it follows that k_i reaches 1 after at most ⌊log n⌋ iterations of the loop at line 4. Further, when k_i becomes 1, the graph (V, F_i) becomes connected, and so F_i is a spanning tree of G.

Finally, consider how Algorithm 26 handles a general input graph G whose connected components are G_1, . . . , G_ℓ. Applying the above argument to each component separately, we see that the algorithm terminates in O(max_{i∈[ℓ]} log |V(G_i)|) = O(log n) iterations and returns a spanning forest of G.
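The query-and-merge loop can be sketched in Python as follows. The function name is ours, and for illustration the boundary sampler is simulated naively from the full edge list; in the streaming setting it would instead be the AGM linear sketch of Section 16.2.

```python
def spanning_forest(n, edges):
    """Algorithm 26, with the boundary sampler B simulated from the edge list."""
    def sample_boundary(S):
        # Returns an arbitrary edge of the boundary of S, or None if empty.
        for u, v in edges:
            if (u in S) != (v in S):
                return (u, v)
        return None

    comp = {v: v for v in range(n)}            # vertex -> cluster label
    F = []
    while True:
        clusters = {}
        for v, c in comp.items():
            clusters.setdefault(c, set()).add(v)
        H = []
        for S in clusters.values():            # one boundary query per cluster
            e = sample_boundary(S)
            if e is not None:
                H.append(e)
        if not H:
            return F                           # no boundary edges: F spans
        for u, v in H:                         # merge clusters along new edges
            cu, cv = comp[u], comp[v]
            if cu != cv:
                F.append((u, v))               # F stays a forest: each kept
                for x in comp:                 # edge joins distinct clusters
                    if comp[x] == cu:
                        comp[x] = cv
```

Each iteration with a nonempty H strictly decreases the number of clusters, mirroring the halving argument in the analysis above.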
16.1.2 Testing Bipartiteness
A neat idea now allows us to solve BIPARTITENESS as well. The key observation is that the problem reduces to CONNECTEDNESS. The bipartite double cover of a graph G = (V, E) is defined to be the graph (V × {1, 2}, E′), where

E′ = {{(u, 1), (v, 2)} : {u, v} ∈ E} .

It is important to note that |E′| = 2|E|, since each {u, v} ∈ E in fact gives rise to both {(u, 1), (v, 2)} and {(v, 1), (u, 2)}. More succinctly, the double cover is the tensor product G × K2. An example is shown in Figure 16.1. Below is the important lemma we need about this construction. We skip the proof, which is a nice exercise in elementary graph theory.

Lemma 16.1.3. A connected graph G is bipartite iff G × K2 is disconnected. More generally, for a graph G that has k connected components:
Figure 16.1: An example of a bipartite double cover. This graph G is connected and bipartite, so its double cover is disconnected. Source: Wikimedia, Miym / CC BY-SA (https://creativecommons.org/licenses/by-sa/3.0).
• if G is bipartite, then G × K2 has exactly 2k connected components; • else, G × K2 has fewer than 2k connected components.
The lemma, combined with Algorithm 26, immediately gives us the following algorithm for bipartiteness testing, once we have a boundary sampler sketch B. As the stream is read, we maintain two copies of B: one for the input graph G and one for G × K2. At the end of the stream, we call SPANNINGFOREST on each of the copies of B to determine the number of connected components of G and of G × K2. Lemma 16.1.3 gives us our answer.
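Lemma 16.1.3 is easy to check in code. The following brute-force Python sketch (helper names are ours, not a streaming algorithm) builds the double cover and counts connected components with union-find.

```python
def double_cover(edges):
    """Edges of the bipartite double cover G x K2; vertices become (v, 1), (v, 2)."""
    return ([((u, 1), (v, 2)) for u, v in edges] +
            [((v, 1), (u, 2)) for u, v in edges])

def num_components(vertices, edges):
    """Number of connected components, via union-find with path halving."""
    comp = {v: v for v in vertices}
    def find(v):
        while comp[v] != v:
            comp[v] = comp[comp[v]]
            v = comp[v]
        return v
    for u, v in edges:
        comp[find(u)] = find(v)
    return len({find(v) for v in vertices})
```

For a connected bipartite graph such as the 4-cycle, the double cover has 2 components; for a connected non-bipartite graph such as the triangle, the double cover (a 6-cycle) stays connected, exactly as the lemma predicts.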
16.2 The AGM Sketch: Producing a Boundary Edge
We have arrived at the heart of the algorithm. It remains to implement the primitive on which the algorithms of Section 16.1.1 rest. We need to design a linear sketch of a turnstile edge stream that allows us to sample boundary edges.

As a start, consider the special-case problem of producing an edge from ∂{v}, where v is a vertex. In other words, we seek an edge incident to v. Let A be the adjacency matrix of G, i.e., the matrix whose entry A_{uv} is the multiplicity of edge {u, v}. Sampling an edge incident to v is the same as sampling an entry from the support of the vth row of A. We do know an efficient linear sketch for this: the ℓ0-sampling sketch, described in Unit 10. Therefore, if we maintain an ℓ0-sampling sketch per row of A, i.e., per vertex of G, then we use only Õ(n) space and can produce, on demand, an edge incident to any given vertex. There is a small probability that a query will fail, but this probability can be kept below O(1/n^c) for any constant c.

This idea does not, however, generalize to sampling from ∂S where |S| > 1. We would like to sample from the support of a vector obtained by "combining the rows indexed by S" in such a way that entries corresponding to edges within S are zeroed out. The clever trick that allows such a combination is to replace the adjacency matrix A with the signed incidence matrix B defined as follows. Let P = (V choose 2) be the set of unordered pairs of vertices, i.e., the set of all potential edges of G. Then B ∈ Z^{V×P} is the matrix whose entries are

B_{u,e} = A_{uv},   if e = {u, v} and u < v ;
B_{u,e} = −A_{uv},  if e = {u, v} and u > v ;
B_{u,e} = 0,        if u ∉ e .

[[ *** Insert example here *** ]]
Consider the column of B corresponding to a pair e = {x, y}. If e is not an edge of G (equivalently, e has multiplicity zero), then the column is all-zero. Otherwise, if e appears with multiplicity c, then the column has exactly two nonzero entries, in rows x and y, and these two entries sum to zero. This immediately gives us the following.

Observation 16.2.1. Let b_u denote the uth row of the signed incidence matrix B of a graph G. For all S ⊆ V(G),

supp( ∑_{u∈S} b_u ) = ∂S .
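A brute-force check of Observation 16.2.1 (helper name is ours): form the sum of the signed incidence rows indexed by S and read off its support.

```python
def boundary_vector_support(n, edges, S):
    """Support of sum over u in S of the signed incidence rows b_u.
    Edge keys are normalized as (x, y) with x < y; values are the signed
    multiplicities that the rows contribute."""
    col = {}
    for u, v in edges:
        x, y = min(u, v), max(u, v)
        for w, sign in ((x, +1), (y, -1)):     # +A_xy in row x, -A_xy in row y
            if w in S:
                col[(x, y)] = col.get((x, y), 0) + sign
    return {e for e, c in col.items() if c != 0}
```

An edge with both endpoints in S contributes +1 and −1 and cancels; an edge with exactly one endpoint in S survives, so the support is exactly ∂S (and ∂V = ∅, as it should be).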
Let L be a (random) ℓ0-sampling sketch matrix, as constructed in Unit 10. Then, by querying the sketch L ∑_{u∈S} b_u, we can (with high probability) produce a random edge from ∂S or detect that ∂S = ∅.¹ By linearity,

L ∑_{u∈S} b_u = ∑_{u∈S} L b_u ,

so it suffices to maintain a sketch L b_u corresponding to each vertex u. The construction in Unit 10 guarantees that each ℓ0-sampling sketch uses O(log² n · log δ^{-1}) space to achieve a failure probability of δ, leading to an overall space usage of Õ(n) if we use one such sketch per vertex. We almost have a complete algorithm now, but we need to address two issues.
1. In executing a “recovery procedure” such that Algorithm 26, we need to query `0 samplers multiple times. Considering this, how many independent samplers must we construct?
DR AF
2. In Section 16.1.1, we assumed that the boundary samplers always work, but our ℓ0-samplers have a certain failure probability. How do we account for this? [[ *** Wrap up analysis and provide pseudocode *** ]]
¹Moreover, the random edge we produce is nearly uniformly distributed in ∂S. The algorithms in this unit do not make use of this property.
Unit 17

Counting Triangles
When analyzing a large graph, an important class of algorithmic problems is to count how frequently some particular pattern occurs in the graph. Perhaps the most important such pattern is a triangle, i.e., a triple of vertices {u, v, w} such that {u, v}, {v, w} and {u, w} are all edges in the graph. Counting (or estimating) the number of triangles gives us a useful summary of the "community forming" tendency of the graph: this is especially relevant when the graph is a social network, for instance.

We shall consider the problem of counting (or estimating) the number of triangles in a graph. It turns out that this estimation problem is a rare example of a natural graph problem where a truly low-space (as opposed to semi-streaming) solution is possible. Moreover, a sketching algorithm is possible, which means that we can solve the problem even in a turnstile model.
17.1 A Sampling-Based Algorithm
Given a vanilla graph stream describing a graph G = (V, E), where V = [n], we want to estimate T(G), the number of triangles in G. Unfortunately, we cannot estimate T(G) efficiently, in the sense of multiplicative approximation, no matter how loose the factor. The reason is that, using a single pass, we require Ω(n²) space simply to distinguish the cases T(G) = 0 and T(G) ≥ 1. Later in the course, we shall see how to prove such a lower bound.

In light of the above, we will need to relax our goal. One way to do so is to aim for additive error in our estimate, instead of multiplicative error. Another way, the one that has become standard in the literature and that we shall use here, is to multiplicatively approximate T(G) under a promise that T(G) ≥ T, for some lower bound T ≥ 1. The higher this lower bound, the more restricted our space of valid inputs. Accordingly, we will aim for algorithms whose space complexity is a decreasing function of T.

Here is an algorithm (in outline form) that uses the simple primitive of reservoir sampling that we had seen in Section 6.2.

1. Pick an edge {u, v} uniformly at random from the stream, using reservoir sampling.

2. Based on the above, pick a vertex w uniformly at random from V \ {u, v}.
3. Then, if {u, w} and {v, w} appear after {u, v} in the stream, output m(n − 2); else output 0.

A little thought shows that a single pass suffices to carry out all of the above steps. One can show that the above algorithm, after suitable parallel repetition, provides an (ε, δ)-estimate for T(G) using space O(ε^{-2} log δ^{-1} · mn/T), under the promise that T(G) ≥ T. The outline is as follows. First, one shows that the algorithm's output is an unbiased estimator for T(G). Next, one computes a variance bound and uses Chebyshev's inequality to obtain the claimed space bound. The details are fairly straightforward given what has come before in the course.
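A single instance of the estimator can be sketched in one pass as follows (function name is ours; the stream is simulated as a Python iterable). Reservoir sampling keeps each edge seen so far with the correct probability, and whenever the sampled edge changes we pick a fresh w and reset the watch for the two completing edges.

```python
import random

def triangle_estimate(stream, n):
    """One instance of the reservoir-sampling triangle estimator.
    Unbiased for T(G) on an insert-only stream; repeat and average in practice."""
    m = 0
    uv = w = None
    seen_uw = seen_vw = False
    for edge in stream:
        m += 1
        if random.randrange(m) == 0:           # reservoir step: keep this edge
            uv = edge
            w = random.choice([x for x in range(n) if x not in edge])
            seen_uw = seen_vw = False
        else:                                   # watch for the completing edges
            u, v = uv
            if set(edge) == {u, w}:
                seen_uw = True
            if set(edge) == {v, w}:
                seen_vw = True
    return m * (n - 2) if (seen_uw and seen_vw) else 0
```

On a single triangle the output is m(n − 2) = 3 with probability 1/3 and 0 otherwise, so its expectation is 1 = T(G), matching the unbiasedness claim above.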
17.2 A Sketch-Based Algorithm
The following algorithm is due to Bar-Yossef, Kumar, and Sivakumar [BKS02]. It uses space

Õ( (1/ε²) · log(1/δ) · (mn/T)² ) ,   (17.1)

to produce an (ε, δ)-estimate of T(G). This is greater than the space used by the algorithm outlined in Section 17.1. However, this new algorithm has the advantage of being based on linear sketches. What it computes is not exactly a linear transformation of the stream, but nevertheless we can compose "sketches" computed by this algorithm for two different edge streams and we can handle edge deletions.

The high-level idea in the algorithm is to take the given (actual) stream σ of edges and transform it on the fly into a "virtual stream" τ of triples {u, v, w}, where u, v, w ∈ V. Specifically, upon seeing a single actual token, we pretend that we have seen n − 2 virtual tokens as indicated below:
actual token {u, v}  −→  virtual tokens {u, v, w1}, {u, v, w2}, . . . , {u, v, w_{n−2}}
where {w1, w2, . . . , w_{n−2}} = V \ {u, v}. Thus, if σ consists of m edge updates, then τ consists of the corresponding m(n − 2) virtual tokens. Here comes the ingenious idea: consider the frequency moments F_k(τ) of this virtual stream! Let us define

T_j = T_j(G) := |{ {u, v, w} : u, v, w ∈ V distinct and induce exactly j edges of G between themselves }| .

Then note that

T0 + T1 + T2 + T3 = (n choose 3) .
If we knew the frequency moments F_k(τ), we could derive further linear equations involving these T_j values. For example,

F2(τ) = ∑_{u,v,w} (number of occurrences of {u, v, w} in τ)²
      = 1² · T1 + 2² · T2 + 3² · T3
      = T1 + 4T2 + 9T3 .   (17.2)
We can derive another two linear equations using F1 (τ) and F0 (τ). It is a good exercise to derive them and then check that all four linear equations for the T j values are linearly independent, so we can solve for the T j values. Finally, note that T3 = T (G), the number of triangles in G. As we know from earlier in the course, we can compute good estimates for F0 (τ), F1 (τ), and F2 (τ), all using linear sketches. By carefully managing the accuracy guarantees for these sketches and analyzing how they impact the accuracy of the computed value of T (G), we can arrive at the space bound given in eq. (17.1).
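The relations for F2(τ), together with the F1 and F0 relations asked for in Exercise 17-2, are easy to sanity-check by brute force on small graphs. The following Python sketch (function name is ours, and it is decidedly not a streaming algorithm) computes the T_j values directly, builds the frequency table of the virtual stream τ, and verifies all three linear equations.

```python
from itertools import combinations

def check_virtual_stream_relations(n, edges):
    """Verify eq. (17.2) and the companion F1, F0 relations; return T3 = T(G)."""
    E = {frozenset(e) for e in edges}
    m = len(E)
    T = [0, 0, 0, 0]
    for trip in combinations(range(n), 3):
        j = sum(frozenset(p) in E for p in combinations(trip, 2))
        T[j] += 1                              # triple inducing exactly j edges
    freq = {}                                  # frequency of each triple in tau
    for e in E:
        for w in set(range(n)) - e:
            key = frozenset(e | {w})
            freq[key] = freq.get(key, 0) + 1
    F0 = len(freq)
    F1 = sum(freq.values())
    F2 = sum(c * c for c in freq.values())
    assert F1 == m * (n - 2) == T[1] + 2 * T[2] + 3 * T[3]
    assert F0 == T[1] + T[2] + T[3]
    assert F2 == T[1] + 4 * T[2] + 9 * T[3]    # eq. (17.2)
    return T[3]
```

Running this on K4 (four triangles) or on a path (no triangles) confirms that the three equations hold and that T3 recovers the triangle count.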
Exercises
17-1 Formally analyze the algorithm outlined in Section 17.1 and prove the claimed space bound.

17-2 Consider the operation of the algorithm outlined in Section 17.2 on a stream consisting of m edge insertions and no deletions, producing an n-vertex graph G that is promised to contain at least T triangles, i.e., T(G) ≥ T. Prove that, for the derived stream τ, we have the following relations, analogous to eq. (17.2):

F1(τ) = T1 + 2T2 + 3T3 ;
F0(τ) = T1 + T2 + T3 .

Based on these, work out an exact formula for T(G) in terms of n, m, F0(τ), and F2(τ). Then work out what guarantees you need on estimates for F0(τ) and F2(τ) so that your formula gives you a (1 ± ε)-approximation to T(G). Finally, based on these required guarantees, prove that the space upper bound given in eq. (17.1) holds.
Unit 18

Communication Complexity and a First Look at Lower Bounds

We have been making occasional references to impossibility results saying that such-and-such is not doable in a data streaming model. Such results are called lower bounds: formally, one shows that in order to accomplish some well-defined task in a streaming model, at least a certain number of bits of space is required. The standard approach to proving such lower bound results goes through communication complexity, itself a rich and sometimes deep area of computer science that we will only scratch the surface of. In these notes, our focus is to develop just enough of the subject to be able to apply it to many basic lower bound problems in data streaming. Readers interested in delving deeply into communication complexity are referred to a textbook on the subject [KN97, RY20]. For now, we shall develop just enough of communication complexity to be able to prove certain space lower bounds for the MAJORITY and FREQUENCY ESTIMATION problems, encountered in Unit 1.
18.1 Communication Games, Protocols, Complexity
A communication game is a cooperative game between two or more players where each player holds some portion of an overall input and their joint goal is to evaluate some function of this input. Most of the time we shall only care about two-player games, where the players (named Alice and Bob) seek to compute a function f : X × Y → Z, with Alice starting out with an input fragment x ∈ X and Bob with an input fragment y ∈ Y. Obviously, in order to carry out their computation, Alice will have to tell Bob something about x, and/or Bob will have to tell Alice something about y. How they do so is given by a predetermined protocol, which instructs players to send message bits by evaluating appropriate "message functions." For instance, a one-round or one-way protocol would involve a single message from Alice to Bob (say), given by a function msg_A : X → {0,1}*, following which Bob would produce an output based on his own input fragment and the message received from Alice. A three-round protocol where each message is ℓ bits long would be specified by four functions:

    msg_{A,1} : X → {0,1}^ℓ ,
    msg_{B,1} : Y × {0,1}^ℓ → {0,1}^ℓ ,
    msg_{A,2} : X × {0,1}^ℓ → {0,1}^ℓ ,
    out_B : Y × {0,1}^ℓ × {0,1}^ℓ → Z .

On input (x, y), the above protocol would cause Alice to send w_1 := msg_{A,1}(x), then Bob to send w_2 := msg_{B,1}(y, w_1), then Alice to send w_3 := msg_{A,2}(x, w_2), and finally Bob to output z := out_B(y, w_1, w_3). Naming the above protocol Π, we would define out_Π(x, y) to be this z. We say that Π computes f if out_Π(x, y) = f(x, y) for all (x, y) ∈ X × Y.
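To make the formalism concrete, here is a minimal Python sketch (the function and parameter names are ours, not from the notes) that executes a three-round protocol from its four message functions and tallies the bits communicated:

```python
def run_three_round_protocol(x, y, msg_a1, msg_b1, msg_a2, out_b):
    """Execute the three-round protocol described above.

    Messages are lists of bits; the returned cost is the total number
    of bits communicated (the output itself is not counted)."""
    w1 = msg_a1(x)          # round 1: Alice -> Bob
    w2 = msg_b1(y, w1)      # round 2: Bob -> Alice
    w3 = msg_a2(x, w2)      # round 3: Alice -> Bob
    z = out_b(y, w1, w3)    # Bob announces the output
    return z, len(w1) + len(w2) + len(w3)


# Toy instantiation with ell = 1: compute the parity of the combined
# input in a single useful round; the other two rounds send dummy bits.
parity = lambda bits: sum(bits) % 2
z, cost = run_three_round_protocol(
    x=[1, 0, 1], y=[1, 1, 1],
    msg_a1=lambda x: [parity(x)],           # Alice sends parity(x)
    msg_b1=lambda y, w1: [0],               # dummy bit
    msg_a2=lambda x, w2: [0],               # dummy bit
    out_b=lambda y, w1, w3: (w1[0] + parity(y)) % 2,
)
```

Here the cost is 3ℓ = 3 bits, matching the worst-case count used to define the cost of a protocol below.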
UNIT 18. COMMUNICATION COMPLEXITY AND A FIRST LOOK AT LOWER BOUNDS
Dartmouth: CS 35/135 Data Stream Algorithms
A fully formal definition that handles two-player protocols in general can be given by visualizing the protocol's execution as a descent down a binary decision tree, with each node (a) belonging to the player who is supposed to send the next bit and (b) being labeled with an appropriate message function that produces the bit to be sent, thereby directing the descent either to the left or the right child. We do not need this general treatment in these notes, so we shall not give more details. The interested reader can find these details in either of the aforementioned textbooks.
The cost of a protocol Π is the worst-case (over all inputs) number of bits communicated in an execution of Π. The deterministic communication complexity D(f) of a function f is defined to be the minimum cost of a protocol that correctly computes f on every input.
Thus far, we have been describing deterministic communication protocols. More generally, we can consider private-coin randomized protocols, where players may compute their messages based on random coin tosses. That is, each message function takes a random bit string as an additional input (besides the usual inputs: the sending player's fragment and the history of received messages). Thus, each player generates random bits as needed by tossing a virtual coin private to them. Still more generally, we have public-coin randomized protocols, where a sufficiently long random string is made available to all players at the start and the players may refer to its bits at any time. In these notes, when we say "randomized protocol" without qualification, we mean this stronger, public-coin, variant.

For a randomized protocol Π, we define its cost to be the worst-case (over all inputs and all settings of the random string) number of bits communicated in an execution. For each input (x, y), the output out_Π(x, y; R) is a random variable, depending on the random string R. We say that Π computes f with error δ if

    ∀ (x, y) ∈ X × Y :  Pr[out_Π(x, y; R) ≠ f(x, y)] ≤ δ .        (18.1)

The δ-error randomized communication complexity R_δ(f) is defined to be the minimum cost of a protocol that satisfies eq. (18.1). For a Boolean function f, it will be convenient to define R(f) := R_{1/3}(f). By a standard parallel repetition argument via a Chernoff bound (analogous to the "median trick" in Section 2.4), for any positive constant δ ≤ 1/3,

    R_δ(f) = Θ(R(f)) .        (18.2)
We shall use the notations D^→(f) and R^→(f) for deterministic and randomized communication complexities (respectively) restricted to one-way protocols, where the only message goes from Alice to Bob. Similarly, we shall use the notations D^k(f) and R^k(f) when restricting to k-round protocols, where the first-round sender is Alice. We shall use the notation R^priv(f) when restricting to private-coin protocols. Trivially, every two-player communication problem can be solved by having one player send their input to the other. Keep in mind that computation time and space do not count towards the cost; only communication does. Further, extra rounds of communication, or removing a restriction on the number of rounds, can only help. We record the result of these observations below.

Observation 18.1.1. For every two-player communication game given by a function f and integers k > ℓ > 0, the following inequalities hold:

• R(f) ≤ D(f) ≤ min{log |X|, log |Y|};
• D(f) ≤ D^k(f) ≤ D^ℓ(f);
• R(f) ≤ R^k(f) ≤ R^ℓ(f).
Typically, we will be interested in the asymptotic behavior of D(f) or R(f) for functions f whose inputs can be expressed in "about N bits." By Observation 18.1.1, an upper bound of O(N) would be trivial, so sublinear upper bounds would be interesting and, from a lower-bounds perspective, a bound of Ω(N) would be asymptotically the strongest possible.
18.2 Specific Two-Player Communication Games

18.2.1 Definitions
We shall now define a few two-player communication games that are canonical and often used in data stream lower bounds. Each of these is defined by a Boolean function. When an input to such a Boolean function is an N-bit string, we shall notate it like an N-dimensional vector: x for the whole string (vector) and x_1, ..., x_N for its bits (entries).
The N-bit EQUALITY game asks the players to compute the function EQ_N : {0,1}^N × {0,1}^N → {0,1}, defined by

    EQ_N(x, y) = 1, if x = y ;  0, otherwise.

Equivalently, using "⊕" to denote the XOR operation,

    ¬EQ_N(x, y) = ⋁_{i=1}^{N} x_i ⊕ y_i .        (18.3)

For the N-bit INDEX game, Alice has an N-bit string while Bob has an integer in [N] that indexes into this string. Their task is to compute the function IDX_N : {0,1}^N × [N] → {0,1}, defined by

    IDX_N(x, y) = x_y .        (18.4)

The N-bit SET DISJOINTNESS game treats the two input fragments as characteristic vectors of subsets of [N] and asks the players to decide whether these sets are disjoint. The corresponding function is DISJ_N : {0,1}^N × {0,1}^N → {0,1}, defined by

    DISJ_N(x, y) = 1, if x ∩ y = ∅ ;  0, otherwise.

Equivalently,

    ¬DISJ_N(x, y) = ⋁_{i=1}^{N} x_i ∧ y_i .        (18.5)
18.2.2 Results and Some Proofs: Deterministic Case
We summarize several key results about the communication complexity of the above games. We will encounter some more canonical communication games later.

    Game      Model                    Complexity bound                 Notes
    EQ_N      deterministic            D(EQ_N) ≥ N                      Theorem 18.2.4
    EQ_N      one-way randomized       R^{priv,→}(EQ_N) = O(log N)      Theorem 18.2.5
    IDX_N     one-way deterministic    D^→(IDX_N) ≥ N                   Theorem 18.2.1
    IDX_N     one-way randomized       R^→(IDX_N) = Ω(N)                Theorem 18.2.7
    IDX_N     deterministic            D^2(IDX_N) ≤ ⌈log N⌉             trivial
    DISJ_N    randomized               R(DISJ_N) = Ω(N)                 Theorem 19.2.1

Let's see a few things that we can prove easily.

Theorem 18.2.1. D^→(IDX_N) ≥ N.
Proof. Let Π be a one-way deterministic protocol for IDX_N in which M is the set of all possible messages from Alice, msg : {0,1}^N → M is Alice's message function, and out : [N] × M → {0,1} is Bob's output function. By the correctness of Π, for all (x, y) ∈ {0,1}^N × [N], we have out(y, msg(x)) = IDX_N(x, y) = x_y. Given a message string m ∈ M from Alice, Bob can use it N times, pretending that his input is 1, then 2, and so on, to recover all the bits of x. Formally, the function rec : M → {0,1}^N defined by

    rec(m) = (out(1, m), out(2, m), . . . , out(N, m))        (18.6)
satisfies rec(msg(x)) = x for all x ∈ {0,1}^N. Thus, rec is the inverse function of msg, which means that msg is bijective, implying |M| = |{0,1}^N| = 2^N. Suppose that the cost of Π is ℓ ≤ N − 1. Then every message in M is at most N − 1 bits long, implying

    |M| ≤ 2^1 + 2^2 + ··· + 2^{N−1} ≤ 2^N − 2 ,

a contradiction.

Theorem 18.2.2. D^→(EQ_N) ≥ N and D^→(DISJ_N) ≥ N.

Proof sketch. Similarly to the proof of Theorem 18.2.1, Bob can use Alice's message in a correct protocol Π to recover her input entirely. As before, this implies that the cost of Π is at least N.
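The recovery argument of these two proofs is easy to see in code. The sketch below (hypothetical names, not from the notes) builds the function rec of eq. (18.6) from Bob's output function and checks, for the trivial one-way protocol in which Alice sends x itself, that rec ∘ msg is the identity, so msg must be injective:

```python
from itertools import product

def build_rec(out, N):
    """rec from eq. (18.6): replay Bob's output at every index y = 1..N."""
    return lambda m: tuple(out(y, m) for y in range(1, N + 1))

# Trivial correct one-way protocol for IDX_N: Alice's message is x itself,
# and Bob outputs the y-th bit of the message he received.
msg = lambda x: x
out = lambda y, m: m[y - 1]

N = 3
rec = build_rec(out, N)
# For every input x, Bob recovers x exactly; hence msg is injective and
# the message space must have size at least 2^N.
assert all(rec(msg(x)) == x for x in product((0, 1), repeat=N))
```

Any correct one-way protocol, not just this trivial one, supports the same construction; that is exactly what forces N-bit messages.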
To generalize the above result to non-round-restricted protocols, we need a key combinatorial fact called the rectangle property of deterministic communication protocols. First, let's define the transcript Π(x, y) of a protocol Π on a particular input (x, y) to be the sequence of bits communicated when the protocol executes, followed by the output bits. By definition, protocols have to be self-punctuating, in the sense that any prefix of the transcript should determine which player speaks next. Thus, the cost of a protocol is the maximum length of a transcript, not counting the output bits.

Theorem 18.2.3 (Rectangle property). Let τ be a possible transcript of a deterministic two-player protocol Π on input domain X × Y. Then the set {(x, y) ∈ X × Y : Π(x, y) = τ} is of the form A × B, where A ⊆ X and B ⊆ Y. Thus, if Π(x, y) = Π(x′, y′) = τ, then also Π(x′, y) = Π(x, y′) = τ. In particular, if Π computes a function f and Π(x, y) = Π(x′, y′), then f(x, y) = f(x′, y′) = f(x′, y) = f(x, y′).

Proof sketch. Given any prefix of a possible transcript, the set of inputs consistent with that prefix is a rectangle (i.e., of the form A × B), as can be proved by an easy induction on the length of the prefix.

Theorem 18.2.4. D(EQ_N) ≥ N and D(DISJ_N) ≥ N.

Proof. We'll prove this partway, leaving the rest as an exercise.
Consider a protocol Π for EQ_N. An input of the form (x, x), where x ∈ {0,1}^N, is called a positive input. Suppose w ≠ x. Then EQ_N(w, x) ≠ EQ_N(x, x), so by Theorem 18.2.3, Π(w, w) ≠ Π(x, x). It follows that Π has at least 2^N different possible transcripts, corresponding to the 2^N positive inputs. Then, reasoning as in the proof of Theorem 18.2.1, at least one of these transcripts must be N bits long. Subtracting off the one output bit leaves us with a communication cost of at least N − 1. Therefore, D(EQ_N) ≥ N − 1. The proof for DISJ_N is similar, using a well-chosen subset of positive inputs.
18.2.3 More Proofs: Randomized Case
The most remarkable thing about the EQUALITY game is that it is maximally expensive for deterministic protocols whereas, for randomized protocols, it admits a rather cheap one-way protocol. There are a few different ways to see this, but in the context of data stream algorithms, the most natural way to obtain the upper bound is to use the fingerprinting idea we encountered in Section 9.2, when studying 1-sparse recovery.

Theorem 18.2.5. R^{priv,→}(EQ_N) = O(log N).

Proof. The players agree on a finite field F with |F| = Θ(N²) and Alice picks a uniformly random element r ∈_R F. Given an input x, Alice computes the fingerprint

    p := φ(x; r) := Σ_{i=1}^{N} x_i r^i ,

which is an evaluation of the polynomial φ(x; Z) at Z = r. She sends (r, p) to Bob, spending 2⌈log |F|⌉ = O(log N) bits. Bob outputs 1 if φ(y; r) = p and 0 otherwise. If x = y, Bob always correctly outputs 1. If x ≠ y, Bob may output 1 wrongly, but this happens only if φ(x; r) = φ(y; r), i.e., r is a root of the nonzero polynomial φ(x − y; Z). Since this latter polynomial has degree at most N, by Theorem 9.3.1, it has at most N roots. So the probability of an error is at most N/|F| = O(1/N).
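The proof translates directly into code. Below is a minimal sketch of the one-way fingerprint protocol (the helper names are ours), using the prime field Z_q for a prime q = Θ(N²) in place of an abstract finite field F:

```python
import random

def _next_prime(n):
    """Smallest prime >= n (naive trial division; fine for small n)."""
    def is_prime(k):
        if k < 2:
            return False
        d = 2
        while d * d <= k:
            if k % d == 0:
                return False
            d += 1
        return True
    while not is_prime(n):
        n += 1
    return n

def eq_fingerprint(x, y, seed=None):
    """One-way randomized protocol for EQ_N: Alice sends (r, phi(x; r)) for
    a random r in F_q; Bob compares it with his own fingerprint phi(y; r).
    An error (output 1 on x != y) needs r to be a root of phi(x - y; Z),
    which has at most N roots, so the error probability is at most N/q."""
    N = len(x)
    q = _next_prime(N * N)  # field size Theta(N^2)
    r = random.Random(seed).randrange(q)
    phi = lambda v: sum(vi * pow(r, i + 1, q) for i, vi in enumerate(v)) % q
    return 1 if phi(x) == phi(y) else 0
```

When x = y the protocol never errs; when x ≠ y, repeated runs with fresh randomness err on at most roughly an N/q fraction of them.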
To obtain randomized communication lower bounds, we need a fundamental reasoning tool for randomized algorithms, known as Yao's minimax principle. To set this up, we need to consider a different mode of computation, where the goal is not to get a correctness guarantee on every input, but only on an "average" input. In the context of communication games, consider a distribution µ on the domain of a function f. A deterministic protocol Π computes f up to error δ with respect to µ if

    Pr_{(X,Y)∼µ}[out_Π(X, Y) ≠ f(X, Y)] ≤ δ .        (18.7)
Even though we're considering random inputs when talking of Π's correctness, when it comes to cost, it will be convenient to continue to use the worst-case number of bits communicated. The δ-error µ-distributional communication complexity D_δ^µ(f) is defined to be the minimum cost of a protocol satisfying eq. (18.7). Incidentally, the restriction to deterministic protocols above is without loss of generality, since if a randomized protocol were to achieve a guarantee analogous to eq. (18.7), we could pick the setting of its random string that minimizes the left-hand side and produce a deterministic protocol satisfying eq. (18.7).

Theorem 18.2.6 (Yao's minimax principle for communication complexity). R_δ(f) = max_µ D_δ^µ(f), where the maximum is over all probability distributions µ on the domain of f.

Proof. Given a randomized protocol Π with cost C satisfying eq. (18.1) and an input distribution µ, Π also satisfies the analog of eq. (18.7). As observed above, by suitably fixing Π's random string, we obtain a deterministic protocol Π′ with cost C satisfying eq. (18.7). This proves that R_δ(f) ≥ D_δ^µ(f) for every µ. Therefore, LHS ≥ RHS.

The proof that LHS ≤ RHS is more complicated and we won't go into it here. The standard argument, available in a textbook, goes through the von Neumann minimax principle for zero-sum games, which itself is a consequence of the linear programming duality theorem. Fortunately, the weaker statement that LHS ≥ RHS is often all we need for proving lower bounds.

Armed with this simple yet powerful tool, we prove our first randomized communication lower bound.

Theorem 18.2.7. R^→(IDX_N) = Ω(N).

Proof. Let µ be the uniform distribution on {0,1}^N × [N]. Thanks to eq. (18.2) and Theorem 18.2.6, we can prove our lower bound for D_{1/10}^{µ,→}(IDX_N) instead. To this end, let (X, Y) ∼ µ and let Π be a deterministic protocol with cost C such that

    Pr[out_Π(X, Y) ≠ X_Y] ≤ 1/10 .

By two applications of Markov's inequality, there exist sets X′ ⊆ {0,1}^N and Y′ ⊆ [N] such that

    |X′| ≥ 2^N/2 ,  |Y′| ≥ 8N/10 ,  and  ∀ (x, y) ∈ X′ × Y′ : out_Π(x, y) = x_y .        (18.8)

Let the set M and the functions msg : {0,1}^N → M and rec : M → {0,1}^N be as in the proof of Theorem 18.2.1; recall that rec is the input recovery function defined in eq. (18.6). By eq. (18.8),

    ∀ x ∈ X′ : ‖rec(msg(x)) − x‖₁ ≤ 2N/10 .

It follows that if w, x ∈ X′ are such that rec(msg(w)) = rec(msg(x)), then by a triangle inequality,

    ‖w − x‖₁ ≤ ‖w − rec(msg(w))‖₁ + ‖rec(msg(w)) − rec(msg(x))‖₁ + ‖rec(msg(x)) − x‖₁ ≤ 4N/10 .

An elementary fact in coding theory, readily proved by estimating the volume of a Hamming ball, is that a set with cardinality as large as that of X′ contains a subset X″ with |X″| = 2^{Ω(N)} that is a code with distance greater than 4N/10, i.e., for all distinct w, x ∈ X″, we have ‖w − x‖₁ > 4N/10. By the above reasoning, the function rec ∘ msg is injective on such a code X″. In particular, msg is injective on X″. Therefore, |M| ≥ |X″| = 2^{Ω(N)} and so, one of the messages in M must have length Ω(N).

We did not try to optimize constants in the above proof, but it should be clear that, by a careful accounting, one can obtain R_δ^→(IDX_N) ≥ c_δ N, for some specific constant c_δ. The optimal constant turns out to be c_δ = 1 − H₂(δ), where H₂(x) = −x log x − (1 − x) log(1 − x) is the binary entropy function.
18.3 Data Streaming Lower Bounds
It is time to connect all of the above with the data streaming model. The starting point is the following easy observation that a streaming algorithm gives rise to some natural communication protocols.
Let A be a streaming algorithm that processes an input stream σ using S bits of space and p passes, producing some possibly random output A(σ). If we cut σ into t contiguous pieces σ_1, ..., σ_t, not necessarily of equal lengths, and give each piece to a distinct player (player i gets σ_i), this sets up a t-player communication game: the common goal of the players is to evaluate A(σ).
There is a natural protocol for this game that simply simulates A. Player 1 begins by processing the tokens in σ_1 as in A, keeping track of A's memory contents. When she gets to the end of her input, she sends these memory contents as a message to player 2, who continues the simulation by processing the tokens in σ_2, and so on. After player t is done, we have reached the end of the stream σ. If there are more passes to be executed, player t sends A's memory contents to player 1 so she can begin the next pass. Eventually, after p such passes, player t is able to produce the output A(σ). This construction is illustrated in fig. 18.1 using three players (Alice, Bob, and Charlie).

[Figure 18.1: Communication protocol that simulates a streaming algorithm]
Notice that this protocol causes tp − 1 messages to be communicated and has a total cost of (tp − 1)S. In particular, if t = 2 and p = 1 (the most common case), the protocol is one-way. We instantiate these abstract observations using two familiar data streaming problems.
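As a sanity check on this accounting, here is a toy Python simulation (the init/process/output interface and names are ours, not from the notes) in which t players forward an algorithm's memory state; the message count comes out to tp − 1:

```python
def simulate_as_protocol(init, process, output, pieces, passes=1):
    """Players 1..t each hold one piece of the stream. The only
    'communication' is the memory state handed to the next player, so the
    total cost is (number of messages) x (state size in bits)."""
    state = init()
    messages = 0
    for _ in range(passes):
        for piece in pieces:
            for token in piece:
                state = process(state, token)
            messages += 1          # state forwarded to the next player
    messages -= 1                  # player t announces the output instead
    return output(state), messages

# Toy "streaming algorithm": exact distinct-element counting with a set
# as the memory state (linear space, but it illustrates the simulation).
ans, msgs = simulate_as_protocol(
    init=frozenset,
    process=lambda st, tok: st | {tok},
    output=len,
    pieces=[[1, 2], [2, 3], [3, 4]],   # t = 3 players
    passes=2,                          # p = 2 passes
)
```

With t = 3 and p = 2, the simulation sends tp − 1 = 5 messages, matching the formula above.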
18.3.1 Lower Bound for Majority
Recall our standard notation for data stream problems. The input σ is a stream of m tokens, each from the universe [n].

Theorem 18.3.1. A one-pass algorithm for MAJORITY requires Ω(min{m, n}) space.

Proof. We shall prove the (stronger) result that even the simpler problem MAJ-DETECT, which asks whether the input vanilla stream contains a token of frequency greater than m/2, already requires this much space. Let A be a one-pass S-space algorithm for MAJ-DETECT. We use the construction above to reduce from the two-player IDX_N problem. Given an input (x, y) for IDX_N, Alice creates the stream σ_1 = (a_1, ..., a_N), where a_i = 2i − x_i, while Bob creates the stream σ_2 = (b, b, ..., b) of length N, where b = 2y − 1. Clearly, the only possible majority element in the combined stream σ := σ_1 ∘ σ_2 is b. Observe that b is in fact a majority iff a_y = 2y − 1, i.e., IDX_N(x, y) = x_y = 1. Therefore, simulating A enables Alice and Bob to solve IDX_N using one S-bit message from Alice to Bob. By Theorem 18.2.7, S = Ω(N). By construction, the stream σ has m = n = 2N. Therefore, we have proven a lower bound of Ω(N) for all settings where m ≥ 2N and n ≥ 2N. In particular, we have shown that S = Ω(min{m, n}).
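The reduction in the proof of Theorem 18.3.1 can be checked exhaustively for small N. The sketch below (helper names are ours) builds the combined stream and confirms it has a majority token exactly when x_y = 1:

```python
from collections import Counter
from itertools import product

def majority_stream(x, y):
    """Streams from the proof: Alice contributes a_i = 2i - x_i and Bob
    contributes N copies of b = 2y - 1."""
    N = len(x)
    sigma1 = [2 * i - x[i - 1] for i in range(1, N + 1)]
    sigma2 = [2 * y - 1] * N
    return sigma1 + sigma2

def has_majority(stream):
    """True iff some token occurs more than m/2 times (MAJ-DETECT)."""
    return max(Counter(stream).values()) > len(stream) / 2

# b = 2y - 1 occurs N + 1 > m/2 times iff x_y = 1, so MAJ-DETECT on the
# combined stream computes IDX_N(x, y).
N = 4
for x in product((0, 1), repeat=N):
    for y in range(1, N + 1):
        assert has_majority(majority_stream(x, y)) == (x[y - 1] == 1)
```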
18.3.2 Lower Bound for Frequency Estimation
Recall that the FREQUENCY ESTIMATION problem with parameter ε asks us to process an input stream σ and answer point queries to the frequency vector f = f(σ): given a query j ∈ [n], we should output an estimate f̂_j that lies in the interval [f_j − ε‖f‖₁, f_j + ε‖f‖₁] with probability at least 2/3. The Misra–Gries algorithm from Unit 1 solves this for vanilla and cash register streams, while the Count/Median algorithm from Unit 5 solves it for turnstile streams. In each case, the space usage is Õ(ε⁻¹).

Two natural questions arise. First, must we estimate in order to achieve sublinear space? Can't we get exact answers? Second, if we wanted an accuracy of ε in the above sense, can the space usage be any better than O(ε⁻¹)? We have the tools to answer these questions.

Theorem 18.3.2. A one-pass algorithm for FREQUENCY ESTIMATION that achieves an accuracy parameter ε must use Ω(min{m, n, ε⁻¹}) space. In particular, in order to get exact answers (i.e., ε = 0) it must use Ω(min{m, n}) space.
Proof sketch. An idea similar to the above, reducing from IDX_N for an appropriate N, works. The details are left to the reader as an instructive exercise.
Exercises

18-1 Write out a formal proof of Theorem 18.2.2 analogous to the above proof of Theorem 18.2.1.

18-2 Complete the proof of Theorem 18.2.4.

18-3 Formally prove Theorem 18.3.2.
Unit 19

Further Reductions and Lower Bounds
We have seen how space lower bounds on data streaming algorithms follow from lower bounds on communication complexity. In Section 18.3, we gave two basic streaming lower bounds, each using a reduction from the INDEX problem. We shall now give several additional data streaming lower bounds, almost all of which are based on the communication games introduced in Unit 18.
19.1 The Importance of Randomization
Quite a large number of the data streaming algorithms we have studied are randomized. In fact, randomization is usually crucial and we can prove that no deterministic algorithm can achieve similar space savings. The most common way to prove such a result is by reduction from the EQUALITY communication game: recall, from Unit 18, that this game has high deterministic communication complexity but much lower randomized communication complexity. The most basic result of this sort is for the DISTINCT ELEMENTS problem, i.e., F₀. It holds even in a vanilla streaming model and even if we are allowed multiple passes over the input stream.

Theorem 19.1.1. A deterministic p-pass streaming algorithm that solves the DISTINCT ELEMENTS problem exactly must use space Ω(min{m, n}/p).

Proof. Given an input (x, y) for EQ_N, Alice creates the stream σ = (a_1, ..., a_N), where a_i = 2i − x_i, while Bob creates the stream τ = (b_1, ..., b_N), where b_i = 2i − y_i. Observe that

    F₀(σ ∘ τ) = N + ‖x − y‖₁ = N + d (say) .

If x = y, then d = 0; else d ≥ 1. Thus, by computing F₀(σ ∘ τ) exactly, the players can solve EQ_N. Notice that the stream σ ∘ τ is of length m = 2N and its tokens come from a universe of size n = 2N. By the reasoning in Section 18.3, two players simulating a deterministic p-pass algorithm that uses S bits of space would use (2p − 1)S bits of communication. Thus, by Theorem 18.2.4, if DISTINCT ELEMENTS had such an algorithm, then (2p − 1)S ≥ N, i.e., S = Ω(N/p) = Ω(min{m, n}/p).

There is something unsatisfactory about the above result. When we designed algorithms for DISTINCT ELEMENTS in Units 2 and 3, our solutions were randomized and they provided approximate answers. The above lower bound does not show that both relaxations are necessary. However, we can strengthen the result by a simple application of error-correcting codes.

Theorem 19.1.2. There is a positive constant ε∗ such that every deterministic p-pass streaming algorithm that (ε∗, 0)-estimates DISTINCT ELEMENTS requires Ω(min{m, n}/p) space.
Proof. We reuse the elementary fact in coding theory that we encountered in the proof of Theorem 18.2.7. This time, we use it in the following form: for all large enough integers M, there exists a code C ⊆ {0,1}^M with distance greater than 3M/10 and cardinality |C| ≥ 2^{M/10}. This enables us to solve EQ_N as follows.

Take M = 10N and let C be the above code. Fix an injective function enc : {0,1}^N → C; it must exist, thanks to the lower bound on |C|. Given an instance (x, y) of EQ_N, Alice and Bob apply the construction in the proof of Theorem 19.1.1 to the vectors enc(x) and enc(y), obtaining streams σ and τ, respectively. As above,

    F₀(σ ∘ τ) = M + ‖enc(x) − enc(y)‖₁  which  = M, if x = y ;  ≥ M + 3M/10, if x ≠ y .

Given an (ε∗, 0)-estimate for F₀(σ ∘ τ), where (1 + ε∗)/(1 − ε∗) < 13/10 (in particular, ε∗ = 1/8 works), Alice and Bob can distinguish which of the two cases above has occurred and can therefore determine EQ_N(x, y). As before, we conclude a space lower bound of Ω(N/p) = Ω(M/p) = Ω(min{m, n}/p), where m and n have their usual meanings.
In fact, a still further strengthening is possible. Chakrabarti and Kale [CK16] showed that Ω(min{m, n}/(Ap)) space is required to estimate DISTINCT ELEMENTS within a multiplicative factor of A, even for A ≫ 1. This result is considerably harder to prove, and necessarily so, for reasons explored in the exercises.

19.2 Multi-Pass Lower Bounds for Randomized Algorithms
Thus far, we have obtained data streaming lower bounds via reduction from either the INDEX problem (which becomes easy with two rounds of communication allowed) or the EQUALITY problem (which becomes easy with randomized communication allowed). To prove randomized and multi-pass lower bounds, we need a more powerful communication complexity result. The following celebrated result about the SET DISJOINTNESS game, which we defined in Section 18.2.1, is the source of most such lower bounds.

Theorem 19.2.1. R(DISJ_N) = Ω(N).

Theorem 19.2.1 is one of the most important results in communication complexity, with far-reaching applications (well beyond data streaming). It was first proved by Kalyanasundaram and Schnitger [KS92]. Their proof was conceptually simplified by Razborov [Raz92] and by Bar-Yossef et al. [BJKS04]. Nevertheless, even the simpler proofs of this result are much too elaborate for us to cover in these notes, so we refer the interested reader to textbooks instead [KN97, RY20].
19.2.1 Graph Problems
As a first application of Theorem 19.2.1, let us show that for the basic graph problem CONNECTEDNESS, Ω(n/p) space is required to handle n-vertex graphs using p passes, even under vanilla streaming. This shows that aiming for semi-streaming space, as we did in Unit 14, was the right thing to do.

Theorem 19.2.2. Every (possibly randomized) p-pass streaming algorithm that solves CONNECTEDNESS on n-vertex graphs in the vanilla streaming model must use Ω(n/p) space.

Proof. We reduce from DISJ_N. Given an instance (x, y), Alice and Bob construct edge sets A and B for a graph on vertex set V = {s, t, v_1, ..., v_N}, as follows:

    A = {{s, v_i} : x_i = 0, i ∈ [N]} ,
    B = {{v_i, t} : y_i = 0, i ∈ [N]} ∪ {{s, t}} .

Let G = (V, A ∪ B). It is easy to check that if x ∩ y = ∅, then G is connected. On the other hand, if there is an element j ∈ x ∩ y, i.e., x_j = y_j = 1, then v_j is an isolated vertex and so G is disconnected. It follows that Alice and Bob can simulate a streaming algorithm for CONNECTEDNESS to solve DISJ_N. Graph G has n = N + 2 vertices. Thus, as before, we derive a space lower bound of Ω(N/p) = Ω(n/p).

The exercises explore some other lower bounds that are proved along similar lines.
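A quick exhaustive check of the gadget from this proof (our own code, using union-find for the connectivity test):

```python
from itertools import product

def disj_via_connectivity(x, y):
    """Theorem 19.2.2's reduction: the graph G on {s, t, v_1..v_N} is
    connected iff the sets encoded by x and y are disjoint."""
    N = len(x)
    vertices = ['s', 't'] + list(range(1, N + 1))
    edges = [('s', 't')]
    edges += [('s', i) for i in range(1, N + 1) if x[i - 1] == 0]  # Alice's A
    edges += [(i, 't') for i in range(1, N + 1) if y[i - 1] == 0]  # Bob's B
    # Union-find with path halving to count connected components.
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, v in edges:
        parent[find(u)] = find(v)
    connected = len({find(v) for v in vertices}) == 1
    return 1 if connected else 0

# DISJ_N(x, y) = 1 iff no index has x_i = y_i = 1 (no isolated v_i).
N = 3
for x in product((0, 1), repeat=N):
    for y in product((0, 1), repeat=N):
        disjoint = all(not (xi and yi) for xi, yi in zip(x, y))
        assert disj_via_connectivity(x, y) == int(disjoint)
```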
19.2.2 The Importance of Approximation
Theorems such as Theorem 19.1.2 convey the important message that randomization is crucial to certain data stream computations. Equally importantly, approximation is also crucial. We can prove this by a reduction from DISJ to the exact-computation version of the problem at hand. Here is a classic example, from the classic paper of Alon, Matias, and Szegedy [AMS99].

Theorem 19.2.3. Every (possibly randomized) p-pass streaming algorithm that solves DISTINCT ELEMENTS exactly requires Ω(min{m, n}/p) space.

Proof. Given an instance (x, y) of DISJ_N, Alice and Bob construct streams σ = (a_1, ..., a_N) and τ = (b_1, ..., b_N), respectively, where, for each i ∈ [N]:

    a_i = 3i + x_i − 1 ;
    b_i = 3i + 2y_i − 2 .

This construction has the following easily checked property:

    F₀(σ ∘ τ) = 2N − |x ∩ y| .

Thus, an algorithm that computes F₀(σ ∘ τ) exactly solves DISJ_N. The combined stream has length m = 2N and its tokens come from a universe of size n = 3N. The theorem now follows along the usual lines.

Arguing along similar lines, one can show that approximation is necessary in order to compute any frequency moment F_k in sublinear space, with the exception of F₁, which is just a count of the stream's length. It is an instructive exercise to work out the details of the proof and observe where it breaks when k = 1.
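The "easily checked property" in the proof of Theorem 19.2.3 can indeed be checked mechanically (helper names are ours):

```python
from itertools import product

def disj_streams(x, y):
    """Theorem 19.2.3's construction over the universe [3N]:
    a_i = 3i + x_i - 1 and b_i = 3i + 2y_i - 2."""
    N = len(x)
    sigma = [3 * i + x[i - 1] - 1 for i in range(1, N + 1)]     # Alice
    tau = [3 * i + 2 * y[i - 1] - 2 for i in range(1, N + 1)]   # Bob
    return sigma + tau

# a_i and b_i collide (both equal 3i) exactly when x_i = y_i = 1, hence
# F_0(sigma . tau) = 2N - |x intersect y|, and disjointness iff F_0 = 2N.
N = 3
for x in product((0, 1), repeat=N):
    for y in product((0, 1), repeat=N):
        overlap = sum(xi and yi for xi, yi in zip(x, y))
        assert len(set(disj_streams(x, y))) == 2 * N - overlap
```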
19.3 Space versus Approximation Quality

[ *** Lower bounds via GHD, to be inserted here *** ]
Exercises

19-1 Consider a graph stream describing an unweighted, undirected n-vertex graph G. Prove that Ω(n²) space is required to determine, in one pass, whether or not G contains a triangle, even with randomization allowed.

19-2 A lower bound argument based on splitting a stream into two pieces and assigning each piece to a player in a communication game can never prove a result of the sort mentioned just after the proof of Theorem 19.1.2. Show that using such an argument, when A ≥ √2, we cannot rule out sublinear-space solutions for producing an estimate d̂ ∈ [A⁻¹d, Ad], where d is the number of distinct elements in the input stream.

19-3 Prove that computing F₂ exactly, in p streaming passes with randomization allowed, requires Ω(min{m, n}/p) space. Generalize the result to the exact computation of F_k for any real constant k > 0 except k = 1.

19-4 The diameter of a graph G = (V, E) is defined as diam(G) = max{d_G(x, y) : x, y ∈ V}, i.e., the largest vertex-to-vertex distance in the graph. A real number d̂ satisfying diam(G) ≤ d̂ ≤ α·diam(G) is called an α-approximation to the diameter. Suppose that 1 ≤ α < 1.5. Prove that, in the vanilla graph streaming model, a 1-pass randomized algorithm that α-approximates the diameter of a connected graph must use Ω(n) space. How does the result generalize to p passes?
Bibliography
[AG13] Kook Jin Ahn and Sudipto Guha. Linear programming in the semi-streaming model with application to the maximum matching problem. Inf. Comput., 222:59–79, 2013.

[AGM12] Kook Jin Ahn, Sudipto Guha, and Andrew McGregor. Analyzing graph structure via linear measurements. In Proc. 23rd Annual ACM-SIAM Symposium on Discrete Algorithms, pages 459–467, 2012.

[AHL02] Noga Alon, Shlomo Hoory, and Nathan Linial. The Moore bound for irregular graphs. Graphs and Combinatorics, 18(1):53–57, 2002.

[AHPV04] Pankaj K. Agarwal, Sariel Har-Peled, and Kasturi R. Varadarajan. Approximating extent measures of points. J. ACM, 51(4):606–635, 2004.

[AHPV05] Pankaj K. Agarwal, Sariel Har-Peled, and Kasturi R. Varadarajan. Geometric approximation via coresets. Available online at http://valis.cs.uiuc.edu/~sariel/research/papers/04/survey/survey.pdf, 2005.

[AMS99] Noga Alon, Yossi Matias, and Mario Szegedy. The space complexity of approximating the frequency moments. J. Comput. Syst. Sci., 58(1):137–147, 1999. Preliminary version in Proc. 28th Annual ACM Symposium on the Theory of Computing, pages 20–29, 1996.

[BFP+73] Manuel Blum, Robert W. Floyd, Vaughan R. Pratt, Ronald L. Rivest, and Robert Endre Tarjan. Time bounds for selection. J. Comput. Syst. Sci., 7(4):448–461, 1973.

[BJK+02] Ziv Bar-Yossef, T. S. Jayram, Ravi Kumar, D. Sivakumar, and Luca Trevisan. Counting distinct elements in a data stream. In Proc. 6th International Workshop on Randomization and Approximation Techniques in Computer Science, pages 128–137, 2002.

[BJKS04] Ziv Bar-Yossef, T. S. Jayram, Ravi Kumar, and D. Sivakumar. An information statistics approach to data stream and communication complexity. J. Comput. Syst. Sci., 68(4):702–732, 2004.

[BKS02] Ziv Bar-Yossef, Ravi Kumar, and D. Sivakumar. Reductions in streaming algorithms, with an application to counting triangles in graphs. In Proc. 13th Annual ACM-SIAM Symposium on Discrete Algorithms, pages 623–632, 2002.

[CCFC04] Moses Charikar, Kevin Chen, and Martin Farach-Colton. Finding frequent items in data streams. Theor. Comput. Sci., 312(1):3–15, 2004.

[CK16] Amit Chakrabarti and Sagar Kale. Strong fooling sets for multi-player communication with applications to deterministic estimation of stream statistics. In Proc. 57th Annual IEEE Symposium on Foundations of Computer Science, pages 41–50, 2016.

[CM05] Graham Cormode and S. Muthukrishnan. An improved data stream summary: the count-min sketch and its applications. J. Alg., 55(1):58–75, 2005. Preliminary version in Proc. 6th Latin American Theoretical Informatics Symposium, pages 29–38, 2004.

[CMS76] J. M. Chambers, C. L. Mallows, and B. W. Stuck. A method for simulating stable random variables. J. Amer. Stat. Assoc., 71(354):340–344, 1976.

[CS14] Michael Crouch and Daniel Stubbs. Improved streaming algorithms for weighted matching, via unweighted matching. In Proc. 17th International Workshop on Approximation Algorithms for Combinatorial Optimization Problems, pages 96–104, 2014.

[Edm65] Jack Edmonds. Paths, trees, and flowers. Canad. J. Math., 17:449–467, 1965.

[ELMS11] L. Epstein, A. Levin, J. Mestre, and D. Segev. Improved approximation guarantees for weighted matching in the semi-streaming model. SIAM Journal on Discrete Mathematics, 25(3):1251–1265, 2011.

[FKM+05] Joan Feigenbaum, Sampath Kannan, Andrew McGregor, Siddharth Suri, and Jian Zhang. On graph problems in a semi-streaming model. Theor. Comput. Sci., 348(2–3):207–216, 2005. Preliminary version in Proc. 31st International Colloquium on Automata, Languages and Programming, pages 531–543, 2004.

[FM85] Philippe Flajolet and G. Nigel Martin. Probabilistic counting algorithms for data base applications. J. Comput. Syst. Sci., 31(2):182–209, 1985.

[FM88] Péter Frankl and Hiroshi Maehara. The Johnson–Lindenstrauss lemma and the sphericity of some graphs. J. Combin. Theory Ser. B, 44(3):355–362, 1988.

[GI10] Anna C. Gilbert and Piotr Indyk. Sparse recovery using sparse matrices. Proceedings of the IEEE, 98(6):937–947, 2010.

[Ind06] Piotr Indyk. Stable distributions, pseudorandom generators, embeddings, and data stream computation. J. ACM, 53(3):307–323, 2006.

[JL84] W. B. Johnson and J. Lindenstrauss. Extensions of Lipschitz mapping into Hilbert space. Contemp. Math., 26:189–206, 1984.

[JW18] Rajesh Jayaram and David P. Woodruff. Perfect lp sampling in a data stream. In Proc. 59th Annual IEEE Symposium on Foundations of Computer Science, pages 544–555, 2018.

[KN97] Eyal Kushilevitz and Noam Nisan. Communication Complexity. Cambridge University Press, Cambridge, 1997.

[KNW10] Daniel M. Kane, Jelani Nelson, and David P. Woodruff. An optimal algorithm for the distinct elements problem. In Proc. 29th ACM Symposium on Principles of Database Systems, pages 41–52, 2010.

[KS92] Bala Kalyanasundaram and Georg Schnitger. The probabilistic communication complexity of set intersection. SIAM J. Disc. Math., 5(4):547–557, 1992.

[Lév54] Paul Lévy. Théorie de l'addition des variables aléatoires. Jacques Gabay, 2e edition, 1954.

[McG05] Andrew McGregor. Finding graph matchings in data streams. In Proc. 8th International Workshop on Approximation Algorithms for Combinatorial Optimization Problems, pages 170–181, 2005.

[MG82] Jayadev Misra and David Gries. Finding repeated elements. Sci. Comput. Program., 2(2):143–152, 1982.

[MP80] J. Ian Munro and Mike Paterson. Selection and sorting with limited storage. TCS, 12:315–323, 1980. Preliminary version in Proc. 19th Annual IEEE Symposium on Foundations of Computer Science, pages 253–258, 1978.

[MW10] Morteza Monemizadeh and David P. Woodruff. 1-pass relative-error lp sampling with applications. In Proc. 21st Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1143–1160, 2010.
[Nis90]
Noam Nisan. Pseudorandom generators for spacebounded computation. In Proc. 22nd Annual ACM Symposium on the Theory of Computing, pages 204–212, 1990.
92
Dartmouth: CS 35/135 Data Stream Algorithms
BIBLIOGRAPHY
Ami Paz and Gregory Schwartzman. A (2+ε)approximation for maximum weight matching in the semistreaming model. ACM Trans. Alg., 15(2):18:1–18:15, 2019. Preliminary version in Proc. 28th Annual ACMSIAM Symposium on Discrete Algorithms, pages 2153–2161, 2017.
[Raz92]
Alexander Razborov. On the distributional complexity of disjointness. Theor. Comput. Sci., 106(2):385– 390, 1992. Preliminary version in Proc. 17th International Colloquium on Automata, Languages and Programming, pages 249–253, 1990.
[RY20]
Anup Rao and Amir Yehudayoff. Communication Complexity: and Applications. Cambridge University Press, 2020.
[Sr.78]
Robert H. Morris Sr. Counting large numbers of events in small registers. Commun. ACM, 21(10):840–842, 1978.
[Zel08]
Mariano Zelke. Weighted matching in the semistreaming model. In Proc. 25th International Symposium on Theoretical Aspects of Computer Science, pages 669–680, 2008.
DR AF
T
[PS19]
93
CS 6550: Randomized Algorithms
Spring 2019
Lecture 7: Derandomization via Pairwise Independence
January 31, 2019
Lecturer: Eric Vigoda
Scribes: Zhen Liu
Disclaimer: These notes have not been subjected to the usual scrutiny reserved for formal publications.
7.1 Derandomization of an Algorithm
Idea: Present a randomized algorithm that works with constant probability and only uses pairwise independent random variables. Then we can iterate through all possible choices of those pairwise independent random variables to find a deterministic choice that is guaranteed to succeed.
7.2 Maximal Independent Set Algorithm

7.2.1 Sequential Algorithm
Definition 7.1 For a graph G = (V, E), an independent set (IS) is a set of vertices S ⊆ V such that for every (v, w) ∈ E, v ∉ S or w ∉ S.

Definition 7.2 A maximal independent set (MIS) S of a graph G = (V, E) is an independent set that is not a proper subset of any other independent set. In other words, it satisfies: for all v ∈ V, either v ∈ S or N(v) ∩ S ≠ ∅ (N(v) is the neighborhood of v in G).

Consider the following sequential algorithm, which may take O(n) rounds:

Algorithm 1: Sequential Algorithm for Maximal Independent Set
input : A graph G = (V, E)
output: A maximal independent set I
1 Initialization: I = ∅, V′ = V;
2 while V′ ≠ ∅ do
3   Choose any v ∈ V′;
4   Set I ← I ∪ {v};
5   Set V′ ← V′ \ ({v} ∪ N(v));
6 Output I;
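As a concrete illustration, the sequential algorithm can be sketched in a few lines of Python (my own translation; the graph is assumed to be given as a dict mapping each vertex to its neighbor set):

```python
def sequential_mis(adj):
    """Greedy maximal independent set (Algorithm 1).

    adj: dict mapping each vertex to the set of its neighbors.
    Returns a set I that is independent and maximal.
    """
    I = set()
    remaining = set(adj)                # V'
    while remaining:
        v = next(iter(remaining))       # choose any v in V'
        I.add(v)                        # I <- I ∪ {v}
        remaining -= {v} | adj[v]       # V' <- V' \ ({v} ∪ N(v))
    return I
```

Each iteration removes v together with N(v), so the loop runs at most n times.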
7.2.2 Parallel Algorithm
[Luby '85] gave a parallel algorithm for MIS with O(log n) rounds and poly(n) processors in the CREW PRAM model (concurrent read, exclusive write). Goal: instead of adding a single vertex to I in each round, we add an independent set S of G′ to I. If S ∪ N(S) is a constant fraction of G′, then we only need O(log n) rounds.
How to find S? Every vertex v ∈ G′ adds itself to S with probability p(v), independently (or pairwise independently). To make sure that S is an independent set: for every edge (v, w) ∈ E with both v and w in S, remove the lower-degree vertex from S. This idea yields the following algorithm:

Algorithm 2: Parallel Algorithm for Maximal Independent Set
input : A graph G = (V, E)
output: A maximal independent set I
1  Initialization: I = ∅, G′ = G, V′ = V;
2  while V′ ≠ ∅ do
3    Set S = ∅;
4    for each v ∈ V′ do
5      Add v to S with probability 1/(2dG′(v)), where dG′(v) is the degree of v in G′;
6    for each edge (v, w) ∈ G′ do
7      if v ∈ S and w ∈ S then
8        Drop the lower-degree vertex in {v, w} (if tie, pick a random one);
9    Let S′ be the remaining vertices;
10   I ← I ∪ S′; V′ ← V′ \ (S′ ∪ N(S′)); let G′ be the induced subgraph on V′;
11 Output I;
Lemma 7.3 Let Gj = (Vj, Ej) be the graph after round j, with G0 = G. Then E[|Ej+1| | Ej] ≤ |Ej| (1 − 1/48).

Corollary 7.4 With ℓ = O(log n) rounds, Gℓ = ∅ with probability at least 3/4.

Proof: [Proof of Corollary 7.4] Let m = |E0|. Then

E[|Ej|] ≤ |E0| (1 − 1/48)^j ≤ m exp(−j/48) ≤ 1 for j > 48 ln m.    (7.1)

Moreover,

Pr(Ej ≠ ∅) = Pr(|Ej| ≥ 1) ≤ E[|Ej|] ≤ m exp(−j/48) ≤ 1/4 for j ≥ 48 ln(4m).    (7.2)

Thus, with probability at least 3/4, the algorithm finishes within O(log m) = O(log n) rounds. (This is an RNC algorithm for MIS.)

Proof: [Proof of Lemma 7.3] For v ∈ Vj, define H(v) = {w ∈ NGj(v) : dGj(w) > dGj(v)} and L(v) = {w ∈ NGj(v) : dGj(w) ≤ dGj(v)}, where NGj(v) denotes the neighborhood of v in Gj.
We say that a vertex v ∈ Vj is BAD if |H(v)| ≥ (2/3) dGj(v), and GOOD if |L(v)| > (1/3) dGj(v). Further, we say an edge (v, w) ∈ Ej is BAD if both v and w are BAD, and GOOD otherwise. Noting that a vertex w is deleted iff w ∈ S′ ∪ N(S′), it suffices to show the following two claims.

Claim 1 Let EG be the set of GOOD edges. Then |EG| ≥ |Ej|/2.

Claim 2 Pr(edge e is deleted | e is GOOD) ≥ 1/24.

With these two claims,

E[|Ej+1| | Ej] = Σ_{e∈Ej} (1 − Pr(e gets deleted))
  ≤ |Ej| − (1/24) |EG|
  ≤ |Ej| (1 − 1/48).    (7.3)
Proof: [Proof of Claim 1] Let EB be the set of BAD edges of Gj. We will define a mapping f from EB to pairs of edges of Ej such that for all e1 ≠ e2 ∈ EB, f(e1) ∩ f(e2) = ∅. Thus each e ∈ EB gets a distinct pair of edges in Ej, and hence |EB| ≤ |Ej|/2, which proves the claim. The mapping f is defined by the following procedure. Direct each (v, w) ∈ Ej from the lower-degree endpoint to the higher-degree one (choose arbitrarily if tied). Suppose (v, w) ∈ EB is directed as v → w, so dGj(v) ≤ dGj(w). Since (v, w) ∈ EB, both v and w are BAD. Since w is BAD, at least 2/3 of the edges incident to w are directed out of w, and at most 1/3 are directed into w; hence w has at least twice as many out-edges as in-edges. So for each BAD edge e directed into w, we can uniquely assign a pair of edges directed out of w, and let f(e) be that pair.

Proof: [Proof of Claim 2] We will show:
(1) Pr(w ∈ S′ | w ∈ S) ≥ 1/2;
(2) Pr(N(v) ∩ S ≠ ∅ | v is GOOD) ≥ 1 − e^{−1/6} ≥ 1/12.
Proof of (1):

Pr(w ∉ S′ | w ∈ S) = Pr(H(w) ∩ S ≠ ∅ | w ∈ S)
  ≤ Σ_{z∈H(w)} Pr(z ∈ S | w ∈ S)    (union bound)
  = Σ_{z∈H(w)} Pr(z ∈ S, w ∈ S) / Pr(w ∈ S)
  = Σ_{z∈H(w)} Pr(z ∈ S)    (pairwise independence)
  = Σ_{z∈H(w)} 1/(2dG′(z))
  ≤ Σ_{z∈H(w)} 1/(2dG′(w))
  ≤ 1/2,    (7.4)

since every z ∈ H(w) has dG′(z) > dG′(w) and |H(w)| ≤ dG′(w).
Proof of (2):

Pr(N(v) ∩ S ≠ ∅ | v is GOOD) = 1 − Pr(N(v) ∩ S = ∅ | v is GOOD)
  = 1 − Π_{z∈N(v)} Pr(z ∉ S)    (mutual independence)
  = 1 − Π_{z∈N(v)} (1 − 1/(2dG′(z)))
  ≥ 1 − Π_{z∈L(v)} (1 − 1/(2dG′(z)))
  ≥ 1 − Π_{z∈L(v)} (1 − 1/(2dG′(v)))
  ≥ 1 − exp(−|L(v)|/(2dG′(v)))
  ≥ 1 − exp(−1/6)
  ≥ 1/12.    (7.5)

With both (1) and (2), we have:

Pr(v ∈ NG′(S′) | v is GOOD)
  ≥ Pr(NG′(v) ∩ S′ ≠ ∅ | N(v) ∩ S ≠ ∅, v is GOOD) · Pr(N(v) ∩ S ≠ ∅ | v is GOOD)
  ≥ (1/2) · (1/12)
  = 1/24.    (7.6)

Since an edge e = (v, w) is GOOD if at least one endpoint is GOOD, and e is deleted whenever that endpoint is deleted, this proves Claim 2.
7.2.3 Proof with Pairwise Independence
Instead of using mutual independence in (7.5) to lower-bound Pr(N(v) ∩ S ≠ ∅ | v is GOOD), we can relax the requirement to pairwise independence via the following lemma:

Lemma 7.5 For pairwise independent random variables X1, ..., Xl ∈ {0, 1} with Pr(Xi = 1) = pi, we have

Pr(Σ_{i=1}^l Xi > 0) ≥ (1/2) min{1/2, Σ_{i=1}^l pi}.

Corollary 7.6 (Lower bound on Pr(N(v) ∩ S ≠ ∅ | v is GOOD) with pairwise independence.) Let w1, ..., w_{dG′(v)} be the neighbors of v and let Xi = 1 if wi ∈ S and Xi = 0 otherwise. We have

Pr(N(v) ∩ S ≠ ∅ | v is GOOD) ≥ (1/2) min{1/2, Σ_{i=1}^{dG′(v)} 1/(2dG′(wi))}
  ≥ (1/2) min{1/2, Σ_{wi∈L(v)} 1/(2dG′(v))}
  ≥ (1/2) min{1/2, 1/6} = 1/12.    (7.7)
Proof: [Proof of Lemma 7.5]

Case 1: Σ_i pi ≤ 1. By inclusion–exclusion (Bonferroni),

Pr(Σ_{i=1}^l Xi > 0) ≥ Σ_{i=1}^l Pr(Xi = 1) − (1/2) Σ_{i≠j} Pr(Xi = Xj = 1)
  = Σ_{i=1}^l pi − (1/2) Σ_{i≠j} pi pj    (pairwise independence)
  ≥ Σ_{i=1}^l pi (1 − (1/2) Σ_{j=1}^l pj)
  ≥ (1/2) Σ_{i=1}^l pi.    (7.8)

Case 2: Σ_i pi > 1. We can always find some S ⊆ {1, 2, ..., l} such that 1/2 ≤ Σ_{i∈S} pi ≤ 1: either some pj ≥ 1/2 (then pick S = {j}), or all pi < 1/2, in which case adding indices one at a time increases the partial sum by less than 1/2 per step, so some partial sum lands in [1/2, 1]. Then, by Case 1 applied to the variables indexed by S,

Pr(Σ_{i=1}^l Xi > 0) ≥ Pr(Σ_{i∈S} Xi > 0) ≥ (1/2) Σ_{i∈S} pi ≥ 1/4.    (7.9)

Hence Pr(Σ_{i=1}^l Xi > 0) ≥ (1/2) min{1/2, Σ_{i=1}^l pi}.
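The bound of Lemma 7.5 can be sanity-checked exactly on the standard pairwise independent family h(i) = (a + b·i) mod p with a uniformly random seed (a, b); the encoding, thresholding, and function name below are my own choices for illustration:

```python
from fractions import Fraction

def lemma_bound_check(p, idxs, t):
    """Exact check of Pr(sum X_i > 0) >= (1/2) min{1/2, sum p_i}
    for X_i = [ (a + b*i) mod p < t ], seed (a, b) uniform on Z_p^2.

    For distinct i, j in {0, ..., p-1}, the pair (h(i), h(j)) with
    h(i) = (a + b*i) mod p is uniform on Z_p^2, so the X_i are pairwise
    independent with Pr(X_i = 1) = t/p.  Returns (exact_prob, bound).
    """
    hits = sum(1 for a in range(p) for b in range(p)
               if any((a + b * i) % p < t for i in idxs))
    prob = Fraction(hits, p * p)
    bound = Fraction(1, 2) * min(Fraction(1, 2), Fraction(t, p) * len(idxs))
    return prob, bound
```

Enumerating all p² seeds gives the exact probability, so the comparison with the lemma's bound is exact rational arithmetic, not sampling.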
Lecture Notes on a Parallel Algorithm for Generating a Maximal Independent Set
Eric Vigoda
Georgia Institute of Technology
Last updated for 7530 – Randomized Algorithms, Spring 2010.

In this lecture we present a randomized parallel algorithm for generating a maximal independent set, and then show how to derandomize it using pairwise independence. For an input graph with n vertices, our goal is to devise an algorithm that runs in time polynomial in log n using polynomially (in n) many processors. See Chapter 12.1 of Motwani and Raghavan [5] for background on parallel models of computation (specifically the CREW PRAM model) and the associated complexity classes NC and RNC. Here we present the algorithm and proof of Luby [4]; see Section 4 for more on the history of this problem.
1 Maximal Independent Sets
For a graph G = (V, E), an independent set is a set S ⊆ V which contains no edges of G, i.e., for all (u, v) ∈ E either u ∉ S or v ∉ S. The independent set S is a maximal independent set if for all v ∈ V, either v ∈ S or N(v) ∩ S ≠ ∅, where N(v) denotes the neighbors of v. It's easy to find a maximal independent set. For example, the following algorithm works:

1. I = ∅, V′ = V.
2. While (V′ ≠ ∅) do
   (a) Choose any v ∈ V′.
   (b) Set I = I ∪ {v}.
   (c) Set V′ = V′ \ ({v} ∪ N(v)).
3. Output I.

Our focus is finding a maximal independent set using a parallel algorithm. The idea is that in every round we find a set S which is an independent set. Then we add S to our current independent set I, and we remove S ∪ N(S) from the current graph. If S ∪ N(S) were a constant fraction of V′, we would only need O(log |V|) rounds. We will instead ensure that by removing S ∪ N(S) from the graph, we remove a constant fraction of the edges.
To choose S in parallel, each vertex v independently adds itself to S with a well-chosen probability p(v). We want to avoid adding adjacent vertices to S, hence we prefer to add low-degree vertices. But if for some edge (u, v) both endpoints were added to S, then we keep only the higher-degree vertex. Here's the algorithm:

The Algorithm
Problem: Given a graph, find a maximal independent set.
1. I = ∅, V′ = V and G′ = G.
2. While (V′ ≠ ∅) do:
   (a) Choose a random set of vertices S ⊆ V′ by selecting each vertex v independently with probability 1/(2dG′(v)), where dG′(v) is the degree of v in the graph G′.
   (b) For every edge (u, v) ∈ E(G′), if both endpoints are in S then remove the vertex of lower degree from S (break ties arbitrarily). Call this new set S′.
   (c) I = I ∪ S′. Let V′ = V′ \ (S′ ∪ NG′(S′)). Finally, let G′ be the induced subgraph on V′.
3. Output I.
Fig 1: The algorithm

Correctness: At each stage the set S′ that is added is an independent set, and since we remove S′ ∪ N(S′) at each stage, the set I remains an independent set. Also note that all the vertices removed from G′ at a particular stage are either vertices in I or neighbours of some vertex in I, so the algorithm always outputs a maximal independent set. It can also be easily parallelized on a CREW PRAM.
2 Expected Running Time

In this section we bound the expected running time of the algorithm, and in the next section we derandomize it. Let Gj = (Vj, Ej) denote the graph G′ after stage j.
Main Lemma: For some c < 1, E[|Ej| | Ej−1] < c|Ej−1|. Hence, in expectation, only O(log m) rounds are required, where m = |E0|.

For the graph Gj, we classify the vertices and edges as GOOD and BAD to distinguish those that are likely to be removed. We say vertex v is BAD if more than 2/3 of the neighbors of v are of higher degree than v. We say an edge is BAD if both of its endpoints are bad; otherwise the edge is GOOD. The key claims are that at least half the edges are GOOD, and each GOOD edge is deleted with constant probability. The main lemma then follows immediately. Here are the main lemmas.

Lemma 1. At least half the edges are GOOD.

Lemma 2. If an edge e is GOOD then the probability that it gets deleted is at least α, where α := (1/2)(1 − e^{−1/6}).

The constant α is approximately 0.07676. We can now restate and prove the main lemma:

Main Lemma: E[|Ej| | Ej−1] ≤ |Ej−1|(1 − α/2).

Proof of Main Lemma.

E[|Ej| | Ej−1] = Σ_{e∈Ej−1} (1 − Pr[e gets deleted])
  ≤ |Ej−1| − α · #(GOOD edges)
  ≤ |Ej−1|(1 − α/2).

Thus,

E[|Ej|] ≤ |E0|(1 − α/2)^j ≤ m exp(−jα/2) < 1,

for j > (2/α) ln m. Therefore, the expected number of rounds required is ≤ 30 log m = O(log m). Moreover, by Markov's inequality,

Pr[Ej ≠ ∅] ≤ E[|Ej|] < m exp(−jα/2) ≤ 1/4,

for j = (4/α) ln m. Hence, with probability at least 3/4 the number of rounds is ≤ 60 log m, and therefore we have an RNC algorithm for MIS.
It remains to prove Lemmas 1 and 2. We begin with Lemma 1, which has a cute combinatorial proof.
Proof of Lemma 1. Denote the set of bad edges by EB. We will define a mapping f from EB to pairs of edges of E so that for all e1 ≠ e2 ∈ EB, f(e1) ∩ f(e2) = ∅. This proves |EB| ≤ |E|/2, and we're done. The function f is defined as follows. Direct each (u, v) ∈ E toward the higher-degree vertex, breaking ties as in the algorithm. Now suppose e = (u, v) ∈ EB is directed towards v. Since e is BAD, v is BAD. Therefore, by the definition of a BAD vertex, at least 2/3 of the edges incident to v are directed away from v, and at most 1/3 of the edges incident to v are directed into v. In other words, v has at least twice as many out-edges as in-edges. Hence for each edge into v we can assign a distinct pair of edges out of v. This gives our mapping f, since each BAD edge directed into v gets a disjoint pair of edges directed out of v.

We now prove Lemma 2. To that end we prove the following lemmas, which say that GOOD vertices are likely to have a neighbor in S, and vertices in S have probability at least 1/2 of being in S′. From these lemmas, Lemma 2 will easily follow, since the neighbors of S′ are deleted from the graph.

Lemma 3. If v is GOOD then Pr[N(v) ∩ S ≠ ∅] ≥ 2α, where α := (1/2)(1 − e^{−1/6}).

Proof. Define L(v) := {w ∈ N(v) : d(w) ≤ d(v)}. By definition, |L(v)| ≥ d(v)/3 if v is a GOOD vertex.

Pr[N(v) ∩ S ≠ ∅] = 1 − Pr[N(v) ∩ S = ∅]
  = 1 − Π_{w∈N(v)} Pr[w ∉ S]    (using full independence)
  ≥ 1 − Π_{w∈L(v)} Pr[w ∉ S]
  = 1 − Π_{w∈L(v)} (1 − 1/(2d(w)))
  ≥ 1 − Π_{w∈L(v)} (1 − 1/(2d(v)))
  ≥ 1 − exp(−|L(v)|/(2d(v)))
  ≥ 1 − exp(−1/6) = 2α.

Note, the above proof uses full independence.

Lemma 4. Pr[w ∉ S′ | w ∈ S] ≤ 1/2.
Proof. Let H(w) = N(w) \ L(w) = {z ∈ N(w) : d(z) > d(w)}.

Pr[w ∉ S′ | w ∈ S] = Pr[H(w) ∩ S ≠ ∅ | w ∈ S]
  ≤ Σ_{z∈H(w)} Pr[z ∈ S | w ∈ S]
  = Σ_{z∈H(w)} Pr[z ∈ S, w ∈ S] / Pr[w ∈ S]
  = Σ_{z∈H(w)} Pr[z ∈ S] Pr[w ∈ S] / Pr[w ∈ S]    (using pairwise independence)
  = Σ_{z∈H(w)} Pr[z ∈ S]
  = Σ_{z∈H(w)} 1/(2d(z))
  ≤ Σ_{z∈H(w)} 1/(2d(w))
  ≤ 1/2.
From Lemmas 3 and 4 we get the following result, that GOOD vertices are likely to be deleted.

Lemma 5. If v is GOOD then Pr[v ∈ N(S′)] ≥ α.

Proof. Let VG denote the GOOD vertices. We have

Pr[v ∈ N(S′) | v ∈ VG] = Pr[N(v) ∩ S′ ≠ ∅ | v ∈ VG]
  ≥ Pr[N(v) ∩ S′ ≠ ∅ | N(v) ∩ S ≠ ∅, v ∈ VG] · Pr[N(v) ∩ S ≠ ∅ | v ∈ VG]
  ≥ Pr[w ∈ S′ | w ∈ N(v) ∩ S, v ∈ VG] · Pr[N(v) ∩ S ≠ ∅ | v ∈ VG]
  ≥ (1/2)(2α)    (by Lemmas 4 and 3)
  = α.
Since vertices in N(S′) are deleted, an immediate corollary of Lemma 5 is the following.

Corollary 6. If v is GOOD then the probability that v gets deleted is at least α.

Finally, from Corollary 6 we can easily prove Lemma 2, which was our main remaining task in the analysis of the RNC algorithm.
Proof of Lemma 2. Let e = (u, v); since e is GOOD, at least one of its endpoints is GOOD, say v. Therefore, by Corollary 6 we have:

Pr[e = (u, v) ∈ Ej−1 \ Ej] ≥ Pr[v gets deleted] ≥ α.
3 Derandomizing MIS
The only step where we use full independence is in Lemma 3, for lower-bounding the probability that a GOOD vertex gets a neighbor picked. The argument we used was essentially the following:

Lemma 7. Let Xi, 1 ≤ i ≤ n, be {0, 1} random variables and pi := Pr[Xi = 1]. If the Xi are fully independent then

Pr[Σ_{i=1}^n Xi > 0] ≥ 1 − Π_{i=1}^n (1 − pi).
Here is the corresponding bound if the variables are pairwise independent.

Lemma 8. Let Xi, 1 ≤ i ≤ n, be {0, 1} random variables and pi := Pr[Xi = 1]. If the Xi are pairwise independent then

Pr[Σ_{i=1}^n Xi > 0] ≥ (1/2) min{1/2, Σ_{i=1}^n pi}.
Proof. Suppose Σ_{i=1}^n pi ≤ 1. (The condition Σ_i pi ≤ 1 will only come into some algebra at the end.) By inclusion–exclusion (Bonferroni),

Pr[Σ_{i=1}^n Xi > 0] ≥ Σ_i Pr[Xi = 1] − (1/2) Σ_{i≠j} Pr[Xi = 1, Xj = 1]
  = Σ_i pi − (1/2) Σ_{i≠j} pi pj    (pairwise independence)
  ≥ Σ_i pi − (1/2) (Σ_i pi)^2
  = (Σ_i pi)(1 − (1/2) Σ_i pi)
  ≥ (1/2) Σ_i pi, when Σ_i pi ≤ 1.

This proves the lemma when Σ_i pi ≤ 1.

If Σ_i pi > 1, then we restrict our index of summation to a set S ⊆ [n] = {1, ..., n} such that 1/2 ≤ Σ_{i∈S} pi ≤ 1. Note that if Σ_i pi > 1, there must always exist such a subset S: either there is an index j with 1/2 ≤ pj ≤ 1 and we can then choose S = {j}, or for all i we have pi < 1/2, in which case adding indices one at a time increases the partial sum by less than 1/2 per step, so some partial sum lies in [1/2, 1]. Given this S, following the above proof we have

Pr[Σ_{i=1}^n Xi > 0] ≥ Pr[Σ_{i∈S} Xi > 0] ≥ (1/2) Σ_{i∈S} pi ≥ 1/4,

since Σ_{i∈S} pi ≥ 1/2, and this proves the conclusion of the lemma in this case.
Using Lemma 8 in place of full independence in the proof of Lemma 3, the bound 2α is replaced by 1/12. Hence, using the construction of pairwise independent random variables described in an earlier lecture, we can now derandomize the algorithm to get a deterministic algorithm that runs in O(mn log n) time (asymptotically almost as good as the sequential algorithm). The advantage of this method is that it can be easily parallelized to give an NC² algorithm (using O(m) processors).
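A toy sequential sketch of this derandomization follows. The construction choices here are mine: the pairwise-independent family h(v) = (a + b·v) mod p, selection threshold p/(2 deg v), and exhaustively trying all p² seeds each round (the real NC algorithm evaluates the seeds in parallel).

```python
def derand_mis(adj, p):
    """Deterministic MIS via Luby's algorithm + pairwise independence.

    adj: dict mapping vertices (integers in 0..p-1, p prime) to neighbor
    sets.  Each round tries every seed (a, b) of the family
    h(v) = (a + b*v) mod p, marking v when h(v) < p / (2 deg(v)), and
    keeps the seed whose round removes the most vertices.  The analysis
    above guarantees some seed makes constant-fraction progress, so
    O(log m) rounds suffice.
    """
    adj = {v: set(ns) for v, ns in adj.items()}   # don't mutate caller's graph
    I = set()
    while adj:
        deg = {v: len(ns) for v, ns in adj.items()}
        best, best_removed = set(), -1
        for a in range(p):
            for b in range(p):
                S = {v for v in adj
                     if deg[v] == 0 or ((a + b * v) % p) * 2 * deg[v] < p}
                # Resolve conflicts: keep the higher-degree endpoint
                # (ties broken deterministically by vertex id).
                Sp = {v for v in S
                      if all(w not in S or (deg[w], w) < (deg[v], v)
                             for w in adj[v])}
                removed = Sp | {w for v in Sp for w in adj[v]}
                if len(removed) > best_removed:
                    best, best_removed = Sp, len(removed)
        I |= best
        removed = best | {w for v in best for w in adj[v]}
        adj = {v: adj[v] - removed for v in adj if v not in removed}
    return I
```

Progress is guaranteed in every round: the seed a = b = 0 selects every vertex, and after conflict resolution the vertex maximizing (degree, id) always survives, so the best seed removes at least one vertex.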
4 History and Open Questions
The k-wise independence derandomization approach was developed in [2, 1, 4]. The maximal independent set problem (MIS) was first shown to be in NC by Karp and Wigderson [3], who showed that MIS is in NC⁴. Subsequently, improvements and simplifications of their result were found by Alon et al. [1] and Luby [4]. The algorithm and proof described in these notes are due to Luby [4]. The question of whether MIS is in NC¹ is still open.
References

[1] N. Alon, L. Babai, and A. Itai. A fast and simple randomized parallel algorithm for the maximal independent set problem. Journal of Algorithms, 7:567–583, 1986.
[2] B. Chor and O. Goldreich. On the power of two-point sampling. Journal of Complexity, 5:96–106, 1989.
[3] R. M. Karp and A. Wigderson. A fast parallel algorithm for the maximal independent set problem. In Proc. 16th ACM Symposium on Theory of Computing (STOC), pages 266–272, 1984.
[4] M. Luby. A simple parallel algorithm for the maximal independent set problem. In Proc. 17th ACM Symposium on Theory of Computing (STOC), pages 1–10, 1985.
[5] R. Motwani and P. Raghavan. Randomized Algorithms. Cambridge University Press, New York, 1995.
CS 6550: Randomized Algorithms
Spring 2019
Lecture 8: Method of Conditional Expectations & Randomized Rounding
February 5, 2019
Lecturer: Eric Vigoda
Scribes: Osman Dai & Alex Forney
Disclaimer: These notes have not been subjected to the usual scrutiny reserved for formal publications.
8.1 Maximum Satisfiability (MAX-SAT)

8.1.1 Problem Statement

Let x1, x2, ..., xn ∈ {TRUE, FALSE} be Boolean variables and f a Boolean formula in conjunctive normal form (CNF), i.e., it decomposes into clauses as f = C1 ∧ C2 ∧ ... ∧ Cm, where each clause is an OR (disjunction) of literals. (Simply put, f is written as an 'AND of ORs'.)

Example 8.1 The following Boolean formula is written in CNF:

f = C1 ∧ C2 = (x1 ∨ x̄2 ∨ x3) ∧ (x1 ∨ x2 ∨ x4 ∨ x5)

We want to find an assignment of the Boolean variables that satisfies as many of the clauses C1, ..., Cm as possible.
8.1.2 Random Assignment

Consider a random assignment of the variables such that Pr[Xi = TRUE] = Pr[Xi = FALSE] = 1/2 for each i ∈ {1, 2, ..., n}, independently of all others. Define Y1, Y2, ..., Ym ∈ {0, 1} such that Yi = 1 if Ci = TRUE and Yi = 0 otherwise. Then Yi = 0 if and only if every Boolean variable in Ci has the 'bad' assignment; e.g., in Example 8.1, Y1 = 0 ⟺ (X1, X2, X3) = (FALSE, TRUE, FALSE). Thus for any clause Ci consisting of k literals, the probability of not being satisfied is Pr[Yi = 0] = Pr[Ci = FALSE] = (1/2)^k, and so Pr[Yi = 1] = 1 − 2^{−k}.

Lemma 8.2 Let f be a Boolean formula in CNF with m clauses, and let k be the minimum number of literals in any clause. Then there exists some assignment that satisfies at least m(1 − 2^{−k}) of the clauses.

Proof: Let Y = Σi Yi denote the number of satisfied clauses, and let ki denote the number of literals in clause Ci. Then E[Y] = Σi E[Yi] = Σi Pr[Yi = 1] = Σi (1 − 2^{−ki}) ≥ m(1 − 2^{−k}).

Since every clause has at least 1 literal, i.e., k ≥ 1, we have m(1 − 2^{−k}) ≥ m/2, which gives us the following corollary.

Corollary 8.3 Let f be a Boolean formula in CNF with m clauses. There exists some assignment that satisfies at least m/2 of the clauses.
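The expectation computed in the proof of Lemma 8.2 is easy to check directly. In the hypothetical encoding below (my own), a clause is a list of signed integers: +i stands for xi and −i for x̄i, with literals in a clause over distinct variables.

```python
from itertools import product

def expected_satisfied(clauses):
    """E[# satisfied clauses] under a uniform random assignment.

    clauses: list of clauses, each a list of signed literals
    (+i means x_i, -i means NOT x_i), distinct variables per clause.
    By linearity of expectation, this equals sum_i (1 - 2^{-k_i}).
    """
    return sum(1 - 2.0 ** -len(c) for c in clauses)

def brute_average(clauses, n):
    """Average satisfied count over all 2^n assignments (sanity check)."""
    total = 0
    for bits in product([False, True], repeat=n):
        total += sum(any(bits[abs(l) - 1] == (l > 0) for l in c)
                     for c in clauses)
    return total / 2 ** n
```

The closed-form expectation and the brute-force average agree exactly, which is just linearity of expectation in action.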
Algorithm 1: Derandomized 1/2-MAXSAT
input : Clauses C1, C2, ..., Cm
output: Boolean variable assignments x1, x2, ..., xn
1 for i ← 1 to n do
2   Compute E[Y | (X1, ..., Xi) = (x1, ..., xi−1, TRUE)] and E[Y | (X1, ..., Xi) = (x1, ..., xi−1, FALSE)];
3   Pick the larger one and assign xi accordingly.
8.1.3 Derandomization

We can derandomize this algorithm to achieve the same guarantee on the number of satisfied clauses using a deterministic assignment.

Lemma 8.4 Let k be the minimum number of literals in any clause. Then Algorithm 1 returns an assignment that satisfies at least (1 − 2^{−k})m clauses.

Proof: E[Y] = (1/2) E[Y | X1 = TRUE] + (1/2) E[Y | X1 = FALSE]. Thus E[Y | X1 = TRUE] and E[Y | X1 = FALSE] cannot both be smaller than E[Y], and the larger one satisfies E[Y | X1 = x1] ≥ E[Y]. We can apply this conditioning recursively, at each step getting a conditional expectation at least as large as the one in the previous step. This finally gives us

E[Y | (X1, X2, ..., Xn) = (x1, x2, ..., xn)] ≥ E[Y],

where the left-hand side is no longer random, as it is conditioned on the assignment of all variables: it is exactly the number of clauses satisfied by (x1, ..., xn). By the proof of Lemma 8.2, E[Y] ≥ (1 − 2^{−k})m, so this final value is at least (1 − 2^{−k})m.
8.1.4 Randomized Rounding

8.1.4.1 Integer (Linear) Programming

Recall that a linear program is an optimization problem whose goal is to maximize some linear objective function subject to linear constraints. An integer linear program additionally requires that some or all of the decision variables take integer values. Given c ∈ R^n, A ∈ R^{m×n}, and b ∈ R^m, the integer linear programming (ILP) problem can be written in canonical form as

max  c^T x
s.t. Ax ≤ b
     x ≥ 0
     x ∈ Z^n
8.1.4.2 Reduction

We will now show that ILP is NP-hard via a reduction from SAT. Before we proceed with the reduction, we introduce some notation. For a clause Cj in an instance of SAT, let Cj+ denote the variables appearing in positive form in Cj and Cj− denote the variables appearing in negated form in Cj. Consider the following example:

Example 8.5 Suppose we have a clause C1 = x5 ∨ x̄3 ∨ x7. Then C1+ = {x5, x7} and C1− = {x3}.
Suppose we are given a SAT instance given by a formula f with n variables x1, ..., xn and m clauses C1, ..., Cm. We reduce SAT to ILP in the following way:

• For each SAT variable xi, create an ILP variable yi, and add the constraint yi ∈ [0, 1]. Along with the integrality constraint, this forces yi ∈ {0, 1}. We will choose additional constraints so that, when we solve the ILP, yi = 1 corresponds to the assignment xi = T and yi = 0 corresponds to xi = F.

• For each SAT clause Cj, create an ILP variable zj. As above, add the constraint zj ∈ [0, 1], so that zj = 1 corresponds to the clause Cj being satisfied and zj = 0 to Cj being unsatisfied.

• For each SAT clause Cj, add the constraint

Σ_{i∈Cj+} yi + Σ_{i∈Cj−} (1 − yi) ≥ zj.

To see the necessity of the third set of constraints, we return to Example 8.5. In order to satisfy clause C1, we must have at least one of x5 = T, x3 = F, or x7 = T. Using the variables y5, y3, and y7 defined in the reduction, this is equivalent to having at least one of y5 = 1, y3 = 0, or y7 = 1. Additionally, C1 being satisfied corresponds to z1 = 1. The third set of constraints gives the inequality y5 + (1 − y3) + y7 ≥ z1, so z1 can be 1 if and only if y5 = 1, y3 = 0, or y7 = 1. We have reduced SAT to ILP, which shows that ILP is NP-hard. To apply our ILP to MAX-SAT, we will maximize the value of z1 + ... + zm, which corresponds to maximizing the number of satisfied clauses of f. Thus, we have the integer linear program
max  Σ_{j=1}^m zj
s.t. Σ_{i∈Cj+} yi + Σ_{i∈Cj−} (1 − yi) ≥ zj   ∀j ∈ {1, ..., m}
     yi ∈ [0, 1]   ∀i ∈ {1, ..., n}
     zj ∈ [0, 1]   ∀j ∈ {1, ..., m}
     yi ∈ Z   ∀i ∈ {1, ..., n}
     zj ∈ Z   ∀j ∈ {1, ..., m}

Solving this integer linear program is equivalent to solving MAX-SAT, so MAX-SAT reduces to ILP.

8.1.4.3 Algorithm
We can drop the integrality constraints in the above integer program to obtain a linear program, the LP relaxation of our problem. Unlike the integer program, we can solve the LP relaxation in polynomial time, so we would like to use the LP solution to find a solution to the ILP (which, in turn, gives us a valid assignment for MAX-SAT). Let ŷi*, ẑj* denote an optimal solution to the LP, which we can find, and let yi*, zj* denote an optimal solution to the ILP, which we don't know. Notice that yi*, zj* are feasible for the LP relaxation, which implies that

Σ_{j=1}^m zj* ≤ Σ_{j=1}^m ẑj*.    (8.1)
Algorithm 2: Randomized Rounding
input : MAX-SAT instance f with n variables x1, ..., xn and m clauses C1, ..., Cm
output: T/F assignment for each xi
1 Formulate the ILP corresponding to f as above;
2 Solve the LP relaxation to obtain an optimal solution ŷi*, ẑj*;
3 for i = 1, ..., n do
4   Set xi = T with probability ŷi* and xi = F with probability 1 − ŷi*;
5 Output x1, ..., xn;
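The rounding step of Algorithm 2 in isolation looks like this (a sketch: the fractional solution ŷ* is assumed to come from an LP solver, which is not shown, and the dict-based interface is my own):

```python
import random

def round_lp_solution(y_hat, rng):
    """Randomized-rounding step of Algorithm 2.

    y_hat: dict mapping variable index to its fractional LP value
    in [0, 1].  Returns {i: bool}: x_i = True with probability y_hat[i].
    """
    return {i: rng.random() < p for i, p in y_hat.items()}
```

Variables at the extremes 0 and 1 are rounded deterministically; only genuinely fractional values introduce randomness.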
That is, the ILP optimum is at most the LP optimum; this is why we say that the LP is a relaxation of the ILP. Notice that Algorithm 2 indeed gives a feasible solution to the ILP, because our choice of each xi is equivalent to setting each ILP variable yi = 1 with probability ŷi* and yi = 0 with probability 1 − ŷi*, which is valid because 0 ≤ ŷi* ≤ 1.

Theorem 8.6 Algorithm 2 is a (1 − 1/e)-approximation for MAX-SAT.

Before we prove Theorem 8.6, we introduce additional notation. Let W denote the number of clauses satisfied by the assignment output by the algorithm. For each j ∈ {1, ..., m}, let Wj = 1 if clause Cj is satisfied and Wj = 0 if it is not. Notice that W =
Σ_{j=1}^m Wj. We also prove the following preliminary result.

Claim 8.7 Let k ≥ 1 be an integer. For each α ∈ [0, 1],

1 − (1 − α/k)^k ≥ (1 − (1 − 1/k)^k) α.

Proof of Claim 8.7: Define f : R → R by f(α) = 1 − (1 − α/k)^k. Notice that

f″(α) = −((k − 1)/k) (1 − α/k)^{k−2} ≤ 0

if α ∈ [0, 1], so f is concave on [0, 1]. Moreover,

f(0) = 0 = (1 − (1 − 1/k)^k) · 0  and  f(1) = 1 − (1 − 1/k)^k = (1 − (1 − 1/k)^k) · 1.
Since (1 − (1 − 1/k)^k) α is linear in α on [0, 1] and equals f at the endpoints, it follows by concavity that

f(α) = 1 − (1 − α/k)^k ≥ (1 − (1 − 1/k)^k) α
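Claim 8.7 is easy to spot-check numerically (a small sketch; the function names are mine):

```python
def f_claim(alpha, k):
    """Left-hand side of Claim 8.7: 1 - (1 - alpha/k)^k."""
    return 1 - (1 - alpha / k) ** k

def chord(alpha, k):
    """Right-hand side: the line through f at alpha = 0 and alpha = 1."""
    return (1 - (1 - 1 / k) ** k) * alpha
```

A concave function lies above the chord between two of its points, which is exactly what the grid check below confirms.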
if α ∈ [0, 1].

Lemma 8.8 Let Cj be a clause, and let k denote the number of literals in Cj. Then E[Wj] ≥ βk ẑj*, where

βk = 1 − (1 − 1/k)^k.
Proof of Lemma 8.8: Consider the LP relaxation and its optimal solution ŷi*, ẑj*. By feasibility, we have 0 ≤ ẑj* ≤ 1 and

Σ_{i∈Cj+} ŷi* + Σ_{i∈Cj−} (1 − ŷi*) ≥ ẑj*.

By the AM–GM inequality,

(Π_{i∈Cj+} (1 − ŷi*) · Π_{i∈Cj−} ŷi*)^{1/k} ≤ (1/k) (Σ_{i∈Cj+} (1 − ŷi*) + Σ_{i∈Cj−} ŷi*)
  = 1 − (1/k) (Σ_{i∈Cj+} ŷi* + Σ_{i∈Cj−} (1 − ŷi*))
  ≤ 1 − ẑj*/k.

Hence,

Π_{i∈Cj+} (1 − ŷi*) · Π_{i∈Cj−} ŷi* ≤ (1 − ẑj*/k)^k.

Thus,

E[Wj] = Pr(Wj = 1)
  = 1 − Π_{i∈Cj+} (1 − ŷi*) · Π_{i∈Cj−} ŷi*
  ≥ 1 − (1 − ẑj*/k)^k    (by the AM–GM inequality)
  ≥ (1 − (1 − 1/k)^k) ẑj*    (by Claim 8.7)
  = βk ẑj*.
Proof of Theorem 8.6: For each integer k ≥ 1, recall that 1 − 1/k ≤ e^{−1/k}, and so

(1 − 1/k)^k ≤ 1/e.

Hence,

βk = 1 − (1 − 1/k)^k ≥ 1 − 1/e.
For each j ∈ {1, ..., m}, let kj denote the number of literals in clause Cj. Applying the above observation to each kj, the expected number of satisfied clauses is

E[W] = E[Σ_{j=1}^m Wj] = Σ_{j=1}^m E[Wj] ≥ Σ_{j=1}^m βkj ẑj* ≥ (1 − 1/e) Σ_{j=1}^m ẑj* ≥ (1 − 1/e) Σ_{j=1}^m zj*,

where we have used Equation (8.1) to obtain the final inequality. However, Σ_{j=1}^m zj* is the maximum number of satisfied clauses in the MAX-SAT instance. This implies that Algorithm 2 is a (1 − 1/e)-approximation for MAX-SAT.
8.1.5 A Third Algorithm

We have presented two algorithms for approximating MAX-SAT. On clauses of size k, Algorithm 1 gives a (1 − 2^{−k})-approximation, while Algorithm 2 yields a βk = 1 − (1 − 1/k)^k approximation. Thus, we have the following approximation factors for MAX-kSAT:

k     Simple (1 − 2^{−k})   LP (βk)      max          avg
1     1/2                   1            1            3/4
2     3/4                   3/4          3/4          3/4
3     7/8                   19/27        7/8          > 3/4
≥4    1 − 2^{−k}            ≥ 1 − 1/e    1 − 2^{−k}   > 3/4
To obtain the best result, we may as well run both algorithms and take the better outcome. This idea was proposed by Goemans and Williamson [1].

Algorithm 3: Best of Two
input : MAX-SAT instance f with n variables x1, ..., xn and m clauses C1, ..., Cm
output: T/F assignment for each xi
1 Run Algorithm 1 to get assignments x1¹, ..., xn¹;
2 Run Algorithm 2 to get assignments x1², ..., xn²;
3 Output the assignment that satisfies more clauses;
Theorem 8.9 Algorithm 3 is a 3/4-approximation algorithm for MAXSAT.

Proof: Let $m_1$ denote the expected number of clauses satisfied by Algorithm 1, $m_2$ the expected number satisfied by Algorithm 2, and $m^*$ the optimal number of satisfied clauses. Also, let $\hat{y}_i^*, \hat{z}_j^*$ denote the optimal solution to the LP relaxation, and, for each integer $k \ge 1$, let $S_k$ be the set of clauses with exactly $k$ literals. Since $0 \le \hat{z}_j^* \le 1$,
$$m_1 = \sum_{k=1}^{\infty} \sum_{C_j \in S_k} (1 - 2^{-k}) \ge \sum_{k=1}^{\infty} \sum_{C_j \in S_k} (1 - 2^{-k}) \hat{z}_j^*$$
and
$$m_2 \ge \sum_{k=1}^{\infty} \sum_{C_j \in S_k} \left( 1 - \left( 1 - \frac{1}{k} \right)^k \right) \hat{z}_j^*.$$
Notice that
$$\left( 1 - 2^{-1} \right) + \left( 1 - \left( 1 - \frac{1}{1} \right)^1 \right) = \left( 1 - 2^{-2} \right) + \left( 1 - \left( 1 - \frac{1}{2} \right)^2 \right) = \frac{3}{2}$$
and, for each integer $k \ge 3$,
$$\left( 1 - 2^{-k} \right) + \left( 1 - \left( 1 - \frac{1}{k} \right)^k \right) \ge \frac{7}{8} + 1 - \frac{1}{e} \ge \frac{3}{2}.$$
Hence, for each integer $k \ge 1$, we have
$$\left( 1 - 2^{-k} \right) + \left( 1 - \left( 1 - \frac{1}{k} \right)^k \right) \ge \frac{3}{2}.$$
Thus,
$$\max\{m_1, m_2\} \ge \frac{m_1 + m_2}{2} \ge \sum_{k=1}^{\infty} \sum_{C_j \in S_k} \frac{\left( 1 - 2^{-k} \right) + \left( 1 - \left( 1 - \frac{1}{k} \right)^k \right)}{2} \, \hat{z}_j^* \ge \frac{3}{4} \sum_{j=1}^m \hat{z}_j^* \ge \frac{3}{4} m^*,$$
which shows that Algorithm 3 is a 3/4-approximation algorithm for MAXSAT.
References

[1] M. X. Goemans and D. P. Williamson. New 3/4-Approximation Algorithms for the Maximum Satisfiability Problem. SIAM Journal on Discrete Mathematics, 7(4):656-666, Nov 1994.
CS 6550: Randomized Algorithms
Spring 2019
Lecture 11: MaxCut Approximation Algorithm via Semidefinite Programming
February 7, 2019
Lecturer: Eric Vigoda
Scribes: Kenny Scharm and Ben Polk
Disclaimer: These notes have not been subjected to the usual scrutiny reserved for formal publications.
11.1 MaxCut Problem

Given a weighted graph $G = (V, E)$ with weights $w(e) > 0$ for $e \in E$, find a cut $(S, \bar{S})$ which maximizes
$$\sum_{(v,y) \in E,\, v \in S,\, y \in \bar{S}} w(v, y).$$
11.1.1 Simple 1/2-approximation Algorithm

For each $v \in V$, assign $v$ to $S$ with probability $\frac{1}{2}$ and to $\bar{S}$ with probability $\frac{1}{2}$. Denote this cut by $Y : V \to \{0, 1\}$:
$$Y(v) = \begin{cases} 1 & \text{with prob. } 1/2 \\ 0 & \text{with prob. } 1/2 \end{cases}$$
Thus,
$$E[\text{cut weight}] = \sum_{(v,z) \in E} w(v,z) \Pr(Y(v) \ne Y(z)) = \sum_{(v,z) \in E} w(v,z) \cdot \frac{1}{2} = \frac{W}{2}, \quad \text{where } W = \sum_{(v,z) \in E} w(v,z) \text{ is the total weight of all edges.}$$
Hence, the expected weight of the cut is $\ge \frac{1}{2}$ of the optimal. As we saw in the last lecture, we can derandomize by the method of conditional expectations [MR]. We will show that we can do better than this in the next section.
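A minimal sketch of this simple algorithm (the graph representation and names are ours):

```python
import random

def random_cut(vertices, edges):
    """Assign each vertex to S (1) or S-bar (0) independently with prob. 1/2.

    edges: dict mapping a pair (u, v) to its weight w(u, v) > 0.
    Returns (side, cut_weight); E[cut_weight] = W/2 by the analysis above.
    """
    side = {v: random.randint(0, 1) for v in vertices}
    cut_weight = sum(w for (u, v), w in edges.items() if side[u] != side[v])
    return side, cut_weight
```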
11.1.2 MaxCut as an Integer Linear Programming (ILP) Problem

For each vertex $v$, create a variable $y_v \in \{0, 1\}$. For each edge $(u, v)$, create a variable $z_{uv} \in \{0, 1\}$ ($z_{uv} = 1$ iff $y_u \ne y_v$).

Constraints:
$$y_u + y_v \ge z_{uv} \qquad (11.1)$$
$$2 - (y_u + y_v) \ge z_{uv} \qquad (11.2)$$
If $y_u = y_v = 0$, then (11.1) forces $z_{uv} = 0$; if $y_u = y_v = 1$, then (11.2) forces $z_{uv} = 0$; if $y_u \ne y_v$, then $z_{uv} \in \{0, 1\}$ is unconstrained, but maximization will set $z_{uv} = 1$.

Objective function:
$$\max \sum_{(u,v) \in E} w(u,v) \, z_{uv} \quad \text{s.t. } \forall (u,v) \in E: \; y_u + y_v \ge z_{uv}, \;\; 2 - (y_u + y_v) \ge z_{uv}, \;\; z_{uv} \in \{0, 1\}.$$

Consider the LP relaxation obtained by replacing $y_v \in \{0,1\}$ with $0 \le y_v \le 1$ and $z_{uv} \in \{0,1\}$ with $0 \le z_{uv} \le 1$. However, this LP is a poor estimate of the ILP: setting $y_v = \frac{1}{2}$ for all $v \in V$ allows $z_{uv} = 1$ for every edge, so the LP optimum is the total weight $W$, and rounding this solution is equivalent to doing randomized rounding in the simple random algorithm. Since this does not work, let us try a different method. Instead of $Y : V \to \{0, 1\}$, do $Y : V \to \{-1, +1\}$. Then $Y(u) \ne Y(v) \iff Y(u)Y(v) = -1$, and
$$\frac{1 - Y(u)Y(v)}{2} = \begin{cases} 1 & \text{if } Y(u) \ne Y(v) \\ 0 & \text{if } Y(u) = Y(v) \end{cases}$$
Now we can write the MaxCut Problem as an Integer Quadratic Program (IQP) with the following objective function:
$$\max \sum_{(i,j) \in E} w(i,j) \left( \frac{1 - v_i v_j}{2} \right) \quad \text{s.t. } \forall i \in V: \; v_i^2 = 1, \; v_i \in \mathbb{R}.$$
Even though the IQP is NP-hard, we can relax it so that each $v_i$ is a unit vector in $\mathbb{R}^n$ instead of one dimension. Thus, $v_i v_j$ becomes the dot product $v_i \cdot v_j$.

Theorem 11.1 For $a, b \in \mathbb{R}^n$, $a \cdot b = \sum_{i=1}^n a_i b_i = \|a\| \|b\| \cos\theta$.

In our setting, $\|a\| = \|b\| = 1$, so $a \cdot b = \cos\theta$. This yields a Semidefinite Program (SDP), which can be solved in polynomial time.
11.2 Semidefinite Programming

Objective function:
$$\max \sum_{(u,v) \in E} w(u,v) \, \frac{1 - y_u \cdot y_v}{2} \quad \text{s.t. } \forall v \in V: \; y_v \cdot y_v = 1, \; y_v \in \mathbb{R}^n.$$

Because any solution to the IQP corresponds to a feasible point in the SDP, the optimal value of the SDP must be at least as good as that of the IQP. Algorithms exist to find a solution to an SDP in polynomial time, so we can take a solution to this program and "round" it to a feasible point in our IQP. We perform this rounding by selecting a random hyperplane $H$ in $\mathbb{R}^n$ that passes through the origin. Variables in the IQP corresponding to vectors on one side of the hyperplane are assigned $+1$, and those corresponding to vectors on the other side are assigned $-1$. Since the hyperplane passes through the origin, we can select it by simply choosing a random unit vector $r \in \mathbb{R}^n$ to be its normal vector. For each of our vectors $y_i$, we set the corresponding IQP variable to $\mathrm{sgn}(r \cdot y_i)$: if $r \cdot y_i > 0$ then $y_i$ and $r$ fall on the same side of the hyperplane, and if $r \cdot y_i < 0$ they fall on opposite sides. Now, imagine two of these vectors $y_u$ and $y_v$ projected into 2D space. This would look like two vectors coming out of the origin with angle $\theta$ between them. If we project $H$ into 2D, it will look like a line through the origin. This line splits $y_u$ and $y_v$ with probability $\theta/\pi$, which means that the edge $(u, v)$ crosses the cut with
this probability. Note that because $y_u$ and $y_v$ are unit vectors, $\theta = \cos^{-1}(y_u \cdot y_v)$, so the expected value of the weight of the cut is given as:
$$E[\text{cut weight}] = \sum_{(u,v) \in E} w(u,v) \, \frac{\cos^{-1}(y_u \cdot y_v)}{\pi}.$$
We will now present a lemma that will help us show that this is a $0.87856\ldots$-approximation:

Theorem 11.2 For $\alpha \approx 0.87856$ and all $\sigma_{uv} \in [-1, 1]$,
$$\frac{\cos^{-1}(\sigma_{uv})}{\pi} \ge \alpha \left( \frac{1 - \sigma_{uv}}{2} \right).$$
The proof will not be included here, but further explanation can be found in this paper [GT]. Recall that the SDP optimum
$$\sum_{(u,v) \in E} w(u,v) \, \frac{1 - y_u \cdot y_v}{2}$$
is at least as large as the weight of the true max cut. As such, we can write the following inequality:
$$E[\text{cut weight}] = \sum_{(u,v) \in E} w(u,v) \, \frac{\cos^{-1}(y_u \cdot y_v)}{\pi} \ge \alpha \sum_{(u,v) \in E} w(u,v) \, \frac{1 - y_u \cdot y_v}{2} \ge \alpha \cdot \mathrm{OPT},$$
thus proving that the solution we found is a $0.87856\ldots$-approximation of the true max cut.
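The hyperplane-rounding step can be sketched as below. This sketch assumes the unit vectors $y_v$ have already been produced by an SDP solver (solving the SDP itself is not shown), and the names are ours:

```python
import random

def round_hyperplane(vectors, edges):
    """Round SDP unit vectors to a +/-1 cut with a random hyperplane.

    vectors: dict v -> unit vector in R^n (list of floats, from an SDP solver)
    edges:   dict (u, v) -> weight w(u, v)
    """
    n = len(next(iter(vectors.values())))
    # i.i.d. Gaussian coordinates give a uniformly random normal direction r.
    r = [random.gauss(0, 1) for _ in range(n)]
    side = {v: 1 if sum(ri * yi for ri, yi in zip(r, y)) > 0 else -1
            for v, y in vectors.items()}
    cut_weight = sum(w for (u, v), w in edges.items() if side[u] != side[v])
    return side, cut_weight
```

For two antipodal unit vectors, every hyperplane through the origin separates them, so an edge between them is always cut; in general an edge is cut with probability $\theta/\pi$, as derived above.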
11.3 Summary

We saw a 3/4-approximation algorithm for the MaxSAT Problem. For the Max-3SAT Problem, we can use SDP to get a 7/8-approximation algorithm [KZ]. This is the best possible approximation under the Unique Games Conjecture [GT]. In this lecture, we saw that we can use SDP to get a $0.87856\ldots$-approximation algorithm for the MaxCut Problem.
References

[GT] A. Gupta and K. Talwar. Approximating Unique Games. In Proceedings of the Seventeenth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA '06), pages 99-106, 2006.

[KZ] H. Karloff and U. Zwick. A 7/8-Approximation Algorithm for MAX 3SAT? In Proceedings of the 38th FOCS, pages 406-415, 1997.

[MR] S. Mahajan and H. Ramesh. Derandomizing approximation algorithms based on semidefinite programming. SIAM Journal on Computing, 28:1641-1663, 1999.
The Design of Approximation Algorithms
David P. Williamson
David B. Shmoys

Copyright © 2010 by David P. Williamson and David B. Shmoys. All rights reserved. To be published by Cambridge University Press.
This electronic-only manuscript is published on www.designofapproxalgs.com with the permission of Cambridge University Press. One copy per user may be taken for personal use only and any other use you wish to make of the work is subject to the permission of Cambridge University Press ([email protected]). You may not post this file on any other website.
Electronic web edition. Copyright 2011 by David P. Williamson and David B. Shmoys. To be published by Cambridge University Press
Preface
This book is designed to be a textbook for graduatelevel courses in approximation algorithms. After some experience teaching minicourses in the area in the mid1990s, we sat down and wrote out an outline of the book. Then one of us (DPW), who was at the time an IBM Research Staﬀ Member, taught several iterations of the course following the outline we had devised, in Columbia University’s Department of Industrial Engineering and Operations Research in Spring 1998, in Cornell University’s School of Operations Research and Industrial Engineering in Fall 1998, and at the Massachusetts Institute of Technology’s Laboratory for Computer Science in Spring 2000. The lecture notes from these courses were made available, and we got enough positive feedback on them from students and from professors teaching such courses elsewhere that we felt we were on the right track. Since then, there have been many exciting developments in the area, and we have added many of them to the book; we taught additional iterations of the course at Cornell in Fall 2006 and Fall 2009 in order to ﬁeld test some of the writing of the newer results. The courses were developed for students who have already had a class, undergraduate or graduate, in algorithms, and who were comfortable with the idea of mathematical proofs about the correctness of algorithms. The book assumes this level of preparation. The book also assumes some basic knowledge of probability theory (for instance, how to compute the expected value of a discrete random variable). Finally, we assume that the reader knows something about NPcompleteness, at least enough to know that there might be good reason for wanting fast, approximate solutions to NPhard discrete optimization problems. 
At one or two points in the book, we do an NP-completeness reduction to show that it can be hard to find approximate solutions to such problems; we include a short appendix on the problem class NP and the notion of NP-completeness for those unfamiliar with the concepts. However, the reader unfamiliar with such reductions can also safely skip over such proofs.

In addition to serving as a graduate textbook, this book is a way for students to get the background to read current research in the area of approximation algorithms. In particular, we wanted a book that we could hand our own Ph.D. students just starting in the field and say, "Here, read this." We further hope that the book will serve as a reference to the area of approximation algorithms for researchers who are generally interested in the heuristic solution of discrete optimization problems; such problems appear in areas as diverse as traditional operations research planning problems (such as facility location and network design) to computer science problems in database and programming language design to advertising issues in viral marketing. We hope that the book helps researchers understand the techniques available in the area of approximation algorithms for approaching such problems.

We have taken several particular perspectives in writing the book. The first is that we wanted to organize the material around certain principles of designing approximation algorithms, around algorithmic ideas that have been used in different ways and applied to different optimization problems. The title The Design of Approximation Algorithms was carefully chosen. The book is structured around these design techniques. The introduction applies several of them to a single problem, the set cover problem. The book then splits into two parts. In the first part, each chapter is devoted to a single algorithmic idea (e.g., "greedy and local search algorithms," "rounding data and dynamic programming"), and the idea is then applied to several different problems. The second part revisits all of the same algorithmic ideas, but gives more sophisticated treatments of them; the results covered here are usually much more recent. The layout allows us to look at several central optimization problems repeatedly throughout the book, returning to them as a new algorithmic idea leads to a better result than the previous one. In particular, we revisit such problems as the uncapacitated facility location problem, the prize-collecting Steiner tree problem, the bin-packing problem, and the maximum cut problem several times throughout the course of the book.

The second perspective is that we treat linear and integer programming as a central aspect in the design of approximation algorithms. This perspective is from our background in the operations research and mathematical programming communities.
It is a little unusual in the computer science community, and students coming from a computer science background may not be familiar with the basic terminology of linear programming. We introduce the terms we need in the first chapter, and we include a brief introduction to the area in an appendix.

The third perspective we took in writing the book is that we have limited ourselves to results that are simple enough for classroom presentation while remaining central to the topic at hand. Most of the results in the book are ones that we have taught ourselves in class at one point or another. We bent this rule somewhat in order to cover the recent, exciting work by Arora, Rao, and Vazirani [22] applying semidefinite programming to the uniform sparsest cut problem. The proof of this result is the most lengthy and complicated of the book.

We are grateful to a number of people who have given us feedback about the book at various stages in its writing. We are particularly grateful to James Davis, Lisa Fleischer, Isaac Fung, Rajiv Gandhi, Igor Gorodezky, Nick Harvey, Anna Karlin, Vijay Kothari, Katherine Lai, Gwen Spencer, and Anke van Zuylen for very detailed comments on a number of sections of the book. Additionally, the following people spotted typos, gave us feedback, helped us understand particular papers, and made useful suggestions: Bruno Abrahao, Hyung-Chan An, Matthew Andrews, Eliot Anshelevich, Sanjeev Arora, Ashwinkumar B.V., Moses Charikar, Chandra Chekuri, Joseph Cheriyan, Chao Ding, Dmitriy Drusvyatskiy, Michel Goemans, Sudipto Guha, Anupam Gupta, Sanjeev Khanna, Lap Chi Lau, Renato Paes Leme, Jan Karel Lenstra, Roman Rischke, Gennady Samorodnitsky, Daniel Schmand, Jiawei Qian, Yogeshwer Sharma, Viktor Simjanoski, Mohit Singh, Éva Tardos, Mike Todd, Di Wang, and Ann Williamson. We also thank a number of anonymous reviewers who made useful comments.

Eliot Anshelevich, Joseph Cheriyan, Lisa Fleischer, Michel Goemans, Nicole Immorlica, and Anna Karlin used various drafts of the book in their courses on approximation algorithms and gave us useful feedback about the experience of using the book. We received quite a number of useful comments from the students in Anna's class: Benjamin Birnbaum, Punyashloka Biswal, Elisa Celis, Jessica Chang, Mathias Hallman, Alyssa Joy Harding, Trinh Huynh, Alex Jaffe, Karthik Mohan, Katherine Moore, Cam Thach Nguyen, Richard Pang, Adrian Sampson, William Austin Webb, and Kevin Zatloukal. Frans Schalekamp generated the image on the cover; it is an illustration of the tree metric algorithm of Fakcharoenphol, Rao, and Talwar [106] discussed in Section 8.5. Our editor at Cambridge, Lauren Cowles, impressed us with her patience in waiting for this book to be completed and gave us a good deal of useful advice.

We would like to thank the institutions that supported us during the writing of this book, including our home institution, Cornell University, and the IBM T.J. Watson and Almaden Research Centers (DPW), as well as TU Berlin (DPW) and the Sloan School of Management at MIT and the Microsoft New England Research Center (DBS), where we were on sabbatical leave when the final editing of the book occurred. We are grateful to the National Science Foundation for supporting our research in approximation algorithms. Additional materials related to the book (such as contact information and errata) can be found at the website www.designofapproxalgs.com.

We are also grateful to our wives and children — to Ann, Abigail, Daniel, and Ruth, and to Éva, Rebecca, and Amy — for their patience and support during the writing of this volume. Finally, we hope the book conveys some of our enthusiasm and enjoyment of the area of approximation algorithms. We hope that you, dear reader, will enjoy it too.

David P. Williamson
David B. Shmoys
January 2011
Table of Contents

Preface  3

Part I: An introduction to the techniques  11

1  An introduction to approximation algorithms  13
   1.1  The whats and whys of approximation algorithms  13
   1.2  An introduction to the techniques and to linear programming: the set cover problem  16
   1.3  A deterministic rounding algorithm  19
   1.4  Rounding a dual solution  20
   1.5  Constructing a dual solution: the primal-dual method  23
   1.6  A greedy algorithm  24
   1.7  A randomized rounding algorithm  28

2  Greedy algorithms and local search  35
   2.1  Scheduling jobs with deadlines on a single machine  36
   2.2  The k-center problem  37
   2.3  Scheduling jobs on identical parallel machines  39
   2.4  The traveling salesman problem  43
   2.5  Maximizing float in bank accounts  47
   2.6  Finding minimum-degree spanning trees  49
   2.7  Edge coloring  54

3  Rounding data and dynamic programming  65
   3.1  The knapsack problem  65
   3.2  Scheduling jobs on identical parallel machines  68
   3.3  The bin-packing problem  73

4  Deterministic rounding of linear programs  81
   4.1  Minimizing the sum of completion times on a single machine  82
   4.2  Minimizing the weighted sum of completion times on a single machine  84
   4.3  Solving large linear programs in polynomial time via the ellipsoid method  86
   4.4  The prize-collecting Steiner tree problem  88
   4.5  The uncapacitated facility location problem  91
   4.6  The bin-packing problem  95

5  Random sampling and randomized rounding of linear programs  105
   5.1  Simple algorithms for MAX SAT and MAX CUT  106
   5.2  Derandomization  108
   5.3  Flipping biased coins  110
   5.4  Randomized rounding  111
   5.5  Choosing the better of two solutions  114
   5.6  Non-linear randomized rounding  116
   5.7  The prize-collecting Steiner tree problem  118
   5.8  The uncapacitated facility location problem  120
   5.9  Scheduling a single machine with release dates  124
   5.10  Chernoff bounds  128
   5.11  Integer multicommodity flows  132
   5.12  Random sampling and coloring dense 3-colorable graphs  133

6  Randomized rounding of semidefinite programs  141
   6.1  A brief introduction to semidefinite programming  141
   6.2  Finding large cuts  143
   6.3  Approximating quadratic programs  147
   6.4  Finding a correlation clustering  150
   6.5  Coloring 3-colorable graphs  153

7  The primal-dual method  161
   7.1  The set cover problem: a review  161
   7.2  Choosing variables to increase: the feedback vertex set problem in undirected graphs  164
   7.3  Cleaning up the primal solution: the shortest s-t path problem  168
   7.4  Increasing multiple variables at once: the generalized Steiner tree problem  170
   7.5  Strengthening inequalities: the minimum knapsack problem  178
   7.6  The uncapacitated facility location problem  180
   7.7  Lagrangean relaxation and the k-median problem  184

8  Cuts and metrics  195
   8.1  The multiway cut problem and a minimum-cut-based algorithm  196
   8.2  The multiway cut problem and an LP rounding algorithm  197
   8.3  The multicut problem  203
   8.4  Balanced cuts  208
   8.5  Probabilistic approximation of metrics by tree metrics  211
   8.6  An application of tree metrics: Buy-at-bulk network design  216
   8.7  Spreading metrics, tree metrics, and linear arrangement  220

Part II: Further uses of the techniques  231

9  Further uses of greedy and local search algorithms  233
   9.1  A local search algorithm for the uncapacitated facility location problem  234
   9.2  A local search algorithm for the k-median problem  239
   9.3  Minimum-degree spanning trees  243
   9.4  A greedy algorithm for the uncapacitated facility location problem  247

10  Further uses of rounding data and dynamic programming  257
   10.1  The Euclidean traveling salesman problem  257
   10.2  The maximum independent set problem in planar graphs  269

11  Further uses of deterministic rounding of linear programs  281
   11.1  The generalized assignment problem  282
   11.2  Minimum-cost bounded-degree spanning trees  286
   11.3  Survivable network design and iterated rounding  297

12  Further uses of random sampling and randomized rounding of linear programs  309
   12.1  The uncapacitated facility location problem  310
   12.2  The single-source rent-or-buy problem  313
   12.3  The Steiner tree problem  316
   12.4  Everything at once: finding a large cut in a dense graph  322

13  Further uses of randomized rounding of semidefinite programs  333
   13.1  Approximating quadratic programs  334
   13.2  Coloring 3-colorable graphs  340
   13.3  Unique games  344

14  Further uses of the primal-dual method  355
   14.1  The prize-collecting Steiner tree problem  355
   14.2  The feedback vertex set problem in undirected graphs  360

15  Further uses of cuts and metrics  369
   15.1  Low distortion embeddings and the sparsest cut problem  369
   15.2  Oblivious routing and cut-tree packings  376
   15.3  Cut-tree packings and the minimum bisection problem  382
   15.4  The uniform sparsest cut problem  385

16  Techniques in proving the hardness of approximation  407
   16.1  Reductions from NP-complete problems  407
   16.2  Reductions that preserve approximation  412
   16.3  Reductions from probabilistically checkable proofs  420
   16.4  Reductions from label cover  425
   16.5  Reductions from unique games  437

17  Open Problems  447

A  Linear programming  453

B  NP-completeness  457

Bibliography  461

Author index  481
Part I
An introduction to the techniques
Chapter 1
An introduction to approximation algorithms
1.1 The whats and whys of approximation algorithms
Decisions, decisions. The diﬃculty of sifting through large amounts of data in order to make an informed choice is ubiquitous in today’s society. One of the promises of the information technology era is that many decisions can now be made rapidly by computers, from deciding inventory levels, to routing vehicles, to organizing data for eﬃcient retrieval. The study of how to make decisions of these sorts in order to achieve some best possible goal, or objective, has created the ﬁeld of discrete optimization. Unfortunately, most interesting discrete optimization problems are NPhard. Thus, unless P = NP, there are no eﬃcient algorithms to ﬁnd optimal solutions to such problems, where we follow the convention that an eﬃcient algorithm is one that runs in time bounded by a polynomial in its input size. This book concerns itself with the answer to the question “What should we do in this case?” An old engineering slogan says, “Fast. Cheap. Reliable. Choose two.” Similarly, if P ̸= NP, we can’t simultaneously have algorithms that (1) ﬁnd optimal solutions (2) in polynomial time (3) for any instance. At least one of these requirements must be relaxed in any approach to dealing with an NPhard optimization problem. One approach relaxes the “for any instance” requirement, and ﬁnds polynomialtime algorithms for special cases of the problem at hand. This is useful if the instances one desires to solve fall into one of these special cases, but this is not frequently the case. A more common approach is to relax the requirement of polynomialtime solvability. The goal is then to ﬁnd optimal solutions to problems by clever exploration of the full set of possible solutions to a problem. This is often a successful approach if one is willing to take minutes, or even hours, to ﬁnd the best possible solution; perhaps even more importantly, one is never certain that for the next input encountered, the algorithm will terminate in any reasonable amount of time. 
This is the approach taken by those in the field of operations research and mathematical programming who solve integer programming formulations of discrete optimization problems, or those in the area of artificial intelligence who consider techniques such as A* search or constraint programming.
By far the most common approach, however, is to relax the requirement of finding an optimal solution, and instead settle for a solution that is "good enough," especially if it can be found in seconds or less. There has been an enormous study of various types of heuristics and metaheuristics such as simulated annealing, genetic algorithms, and tabu search, to name but a few. These techniques often yield good results in practice.

The approach of this book falls into this third class. We relax the requirement of finding an optimal solution, but our goal is to relax this as little as we possibly can. Throughout this book, we will consider approximation algorithms for discrete optimization problems. We try to find a solution that closely approximates the optimal solution in terms of its value. We assume that there is some objective function mapping each possible solution of an optimization problem to some nonnegative value, and an optimal solution to the optimization problem is one that either minimizes or maximizes the value of this objective function. Then we define an approximation algorithm as follows.

Definition 1.1: An α-approximation algorithm for an optimization problem is a polynomial-time algorithm that for all instances of the problem produces a solution whose value is within a factor of α of the value of an optimal solution.

For an α-approximation algorithm, we will call α the performance guarantee of the algorithm. In the literature, it is also often called the approximation ratio or approximation factor of the algorithm. In this book we will follow the convention that α > 1 for minimization problems, while α < 1 for maximization problems. Thus, a 1/2-approximation algorithm for a maximization problem is a polynomial-time algorithm that always returns a solution whose value is at least half the optimal value.

Why study approximation algorithms? We list several reasons.

• Because we need algorithms to get solutions to discrete optimization problems.
As we mentioned above, with our current information technology there are an increasing number of optimization problems that need to be solved, and most of these are NP-hard. In some cases, an approximation algorithm is a useful heuristic for finding near-optimal solutions when the optimal solution is not required.

• Because algorithm design often focuses first on idealized models rather than the "real-world" application. In practice, many discrete optimization problems are quite messy, and have many complicating side constraints that make it hard to find an approximation algorithm with a good performance guarantee. But often approximation algorithms for simpler versions of the problem give us some idea of how to devise a heuristic that will perform well in practice for the actual problem. Furthermore, the push to prove a theorem often results in a deeper mathematical understanding of the problem's structure, which then leads to a new algorithmic approach.

• Because it provides a mathematically rigorous basis on which to study heuristics. Typically, heuristics and metaheuristics are studied empirically; they might work well, but we might not understand why. The field of approximation algorithms brings mathematical rigor to the study of heuristics, allowing us to prove how well the heuristic performs on all instances, or giving us some idea of the types of instances on which the heuristic will not perform well. Furthermore, the mathematical analyses of many of the approximation algorithms in this book have the property that not only is there an a priori guarantee for any input, but there is also an a fortiori guarantee that is provided on an input-by-input
basis, which allows us to conclude that specific solutions are in fact much more nearly optimal than promised by the performance guarantee.

• Because it gives a metric for stating how hard various discrete optimization problems are. Over the course of the twentieth century, the study of the power of computation has steadily evolved. In the early part of the century, researchers were concerned with what kinds of problems could be solved at all by computers in finite time, with the halting problem as the canonical example of a problem that could not be solved. The latter part of the century concerned itself with the efficiency of solution, distinguishing between problems that could be solved in polynomial time, and those that are NP-hard and (perhaps) cannot be solved efficiently. The field of approximation algorithms gives us a means of distinguishing between various optimization problems in terms of how well they can be approximated.

• Because it's fun. The area has developed some very deep and beautiful mathematical results over the years, and it is inherently interesting to study these.

It is sometimes objected that requiring an algorithm to have a near-optimal solution for all instances of the problem — having an analysis for what happens to the algorithm in the worst possible instance — leads to results that are too loose to be practically interesting. After all, in practice, we would greatly prefer solutions within a few percent of optimal rather than, say, twice optimal. From a mathematical perspective, it is not clear that there are good alternatives to this worst-case analysis. It turns out to be quite difficult to define a "typical" instance of any given problem, and often instances drawn randomly from given probability distributions have very special properties not present in real-world data. Since our aim is mathematical rigor in the analysis of our algorithms, we must content ourselves with this notion of worst-case analysis.
We note that the worst-case bounds are often due to pathological cases that do not arise in practice, so that approximation algorithms often give rise to heuristics that return solutions much closer to optimal than indicated by their performance guarantees.

Given that approximation algorithms are worth studying, the next natural question is whether there exist good approximation algorithms for problems of interest. In the case of some problems, we are able to obtain extremely good approximation algorithms; in fact, these problems have polynomial-time approximation schemes.

Definition 1.2: A polynomial-time approximation scheme (PTAS) is a family of algorithms {A_ϵ}, where there is an algorithm for each ϵ > 0, such that A_ϵ is a (1 + ϵ)-approximation algorithm (for minimization problems) or a (1 − ϵ)-approximation algorithm (for maximization problems).

Many problems have polynomial-time approximation schemes. In later chapters we will encounter the knapsack problem and the Euclidean traveling salesman problem, each of which has a PTAS. However, there exists a class of problems that is not so easy. This class is called MAX SNP; although we will not define it, it contains many interesting optimization problems, such as the maximum satisfiability problem and the maximum cut problem, which we will discuss later in the book. The following has been shown.

Theorem 1.3: For any MAX SNP-hard problem, there does not exist a polynomial-time approximation scheme, unless P = NP.

Finally, some problems are very hard. In the maximum clique problem, we are given as input an undirected graph G = (V, E). The goal is to find a maximum-size clique; that is,
we wish to find S ⊆ V that maximizes |S| so that for each pair i, j ∈ S, it must be the case that (i, j) ∈ E. The following theorem demonstrates that almost any nontrivial approximation guarantee is most likely unattainable.

Theorem 1.4: Let n denote the number of vertices in an input graph, and consider any constant ϵ > 0. Then there does not exist an O(n^{ϵ−1})-approximation algorithm for the maximum clique problem, unless P = NP.

To see how strong this theorem is, observe that it is very easy to get an n^{−1}-approximation algorithm for the problem: just output a single vertex. This gives a clique of size 1, whereas the size of the largest clique can be at most n, the number of vertices in the input. The theorem states that finding something only slightly better than this completely trivial approximation algorithm implies that P = NP!
1.2 An introduction to the techniques and to linear programming: the set cover problem
One of the theses of this book is that there are several fundamental techniques used in the design and analysis of approximation algorithms. The goal of this book is to help the reader understand and master these techniques by applying each technique to many different problems of interest. We will visit some problems several times; when we introduce a new technique, we may see how it applies to a problem we have seen before, and show how we can obtain a better result via this technique. The rest of this chapter will be an illustration of several of the central techniques of the book applied to a single problem, the set cover problem, which we define below. We will see how each of these techniques can be used to obtain an approximation algorithm, and how some techniques lead to improved approximation algorithms for the set cover problem.

In the set cover problem, we are given a ground set of elements E = {e_1, . . . , e_n}, some subsets of those elements S_1, S_2, . . . , S_m where each S_j ⊆ E, and a nonnegative weight w_j ≥ 0 for each subset S_j. The goal is to find a minimum-weight collection of subsets that covers all of E; that is, we wish to find an I ⊆ {1, . . . , m} that minimizes ∑_{j∈I} w_j subject to ∪_{j∈I} S_j = E. If w_j = 1 for each subset j, the problem is called the unweighted set cover problem.

The set cover problem is an abstraction of several types of problems; we give two examples here. The set cover problem was used in the development of an antivirus product, which detects computer viruses. In this case it was desired to find salient features that occur in viruses designed for the boot sector of a computer, such that the features do not occur in typical computer applications. These features were then incorporated into another heuristic for detecting these boot sector viruses, a neural network. The elements of the set cover problem were the known boot sector viruses (about 150 at the time).
Each set corresponded to some three-byte sequence occurring in these viruses but not in typical computer programs; there were about 21,000 such sequences. Each set contained all the boot sector viruses that had the corresponding three-byte sequence somewhere in it. The goal was to find a small number of such sequences (much smaller than 150) that would be useful for the neural network. By using an approximation algorithm to solve the problem, a small set of sequences was found, and the neural network was able to detect many previously unanalyzed boot sector viruses.

The set cover problem also generalizes the vertex cover problem. In the vertex cover problem, we are given an undirected graph G = (V, E) and a nonnegative weight w_i ≥ 0 for each vertex i ∈ V. The goal is to find a minimum-weight subset of vertices C ⊆ V such that for each edge (i, j) ∈ E, either i ∈ C or j ∈ C. As in the set cover problem, if w_i = 1 for each vertex i,
the problem is an unweighted vertex cover problem. To see that the vertex cover problem is a special case of the set cover problem, for any instance of the vertex cover problem, create an instance of the set cover problem in which the ground set is the set of edges, and a subset S_i of weight w_i is created for each vertex i ∈ V containing the edges incident to i. It is not difficult to see that for any vertex cover C, there is a set cover I = C of the same weight, and vice versa.

A second thesis of this book is that linear programming plays a central role in the design and analysis of approximation algorithms. Many of the techniques introduced will use the theory of integer and linear programming in one way or another. Here we will give a very brief introduction to the area in the context of the set cover problem; we give a slightly less brief introduction in Appendix A, and the notes at the end of this chapter provide suggestions of other, more in-depth, introductions to the topic.

Each linear program or integer program is formulated in terms of some number of decision variables that represent some sort of decision that needs to be made. The variables are constrained by a number of linear inequalities and equalities called constraints. Any assignment of real numbers to the variables such that all of the constraints are satisfied is called a feasible solution. In the case of the set cover problem, we need to decide which subsets S_j to use in the solution. We create a decision variable x_j to represent this choice. In this case we would like x_j to be 1 if the set S_j is included in the solution, and 0 otherwise. Thus, we introduce constraints x_j ≤ 1 for all subsets S_j, and x_j ≥ 0 for all subsets S_j. This is not sufficient to guarantee that x_j ∈ {0, 1}, so we will formulate the problem as an integer program to exclude fractional solutions (that is, nonintegral solutions); in this case, we are also allowed to constrain the decision variables to be integers.
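The vertex-cover-to-set-cover reduction described above can be sketched in a few lines of Python. This is an illustrative sketch, not code from the text: the function name, the toy graph, and the weights are all invented for the example.

```python
# Reduction described above: a vertex cover instance (G, w) becomes a set
# cover instance whose ground set is the edge set of G, and whose set S_i
# (of weight w_i) contains the edges incident to vertex i.

def vertex_cover_to_set_cover(vertices, edges, weights):
    """Return (ground_set, sets, set_weights) for the equivalent set cover."""
    ground_set = set(edges)
    # S_i = all edges incident to vertex i (an edge is a tuple of endpoints).
    sets = {i: {e for e in edges if i in e} for i in vertices}
    return ground_set, sets, dict(weights)

vertices = [1, 2, 3]
edges = [(1, 2), (2, 3), (1, 3)]           # a triangle
weights = {1: 1.0, 2: 1.0, 3: 1.0}
ground, sets, w = vertex_cover_to_set_cover(vertices, edges, weights)

# A vertex cover C corresponds to the set cover {S_i : i in C} of equal weight.
C = {1, 2}                                 # vertices 1 and 2 touch every edge
assert set().union(*(sets[i] for i in C)) == ground
```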
Requiring x_j to be integer along with the constraints x_j ≥ 0 and x_j ≤ 1 is sufficient to guarantee that x_j ∈ {0, 1}. We also want to make sure that any feasible solution corresponds to a set cover, so we introduce additional constraints. In order to ensure that every element e_i is covered, it must be the case that at least one of the subsets S_j containing e_i is selected. This will be the case if

∑_{j: e_i ∈ S_j} x_j ≥ 1,

for each e_i, i = 1, . . . , n. In addition to the constraints, linear and integer programs are defined by a linear function of the decision variables called the objective function. The linear or integer program seeks to find a feasible solution that either maximizes or minimizes this objective function. Such a solution is called an optimal solution. The value of the objective function for a particular feasible solution is called the value of that solution. The value of the objective function for an optimal solution is called the value of the linear (or integer) program. We say we solve the linear program if we find an optimal solution. In the case of the set cover problem, we want to find a set cover of minimum weight. Given the decision variables x_j and constraints described above, the weight of a set cover given the x variables is ∑_{j=1}^m w_j x_j. Thus, the objective function of the integer program is ∑_{j=1}^m w_j x_j, and we wish to minimize this function.

Integer and linear programs are usually written in a compact form stating first the objective function and then the constraints. Given the discussion above, the problem of finding a
minimum-weight set cover is equivalent to the following integer program:

minimize    ∑_{j=1}^m w_j x_j

subject to  ∑_{j: e_i ∈ S_j} x_j ≥ 1,      i = 1, . . . , n,                    (1.1)

            x_j ∈ {0, 1},                  j = 1, . . . , m.
Let Z*_IP denote the optimum value of this integer program for a given instance of the set cover problem. Since the integer program exactly models the problem, we have that Z*_IP = OPT, where OPT is the value of an optimum solution to the set cover problem. In general, integer programs cannot be solved in polynomial time. This is clear because the set cover problem is NP-hard, so solving the integer program above for any set cover input in polynomial time would imply that P = NP. However, linear programs are polynomial-time solvable. In linear programs we are not allowed to require that decision variables are integers. Nevertheless, linear programs are still extremely useful: even in cases such as the set cover problem, we are still able to derive useful information from linear programs. For instance, if we replace the constraints x_j ∈ {0, 1} with the constraints x_j ≥ 0, we obtain the following linear program, which can be solved in polynomial time:
minimize    ∑_{j=1}^m w_j x_j

subject to  ∑_{j: e_i ∈ S_j} x_j ≥ 1,      i = 1, . . . , n,                    (1.2)

            x_j ≥ 0,                       j = 1, . . . , m.
We could also add the constraints x_j ≤ 1, for each j = 1, . . . , m, but they would be redundant: in any optimal solution to the problem, we can reduce any x_j > 1 to x_j = 1 without affecting the feasibility of the solution and without increasing its cost.

The linear program (1.2) is a relaxation of the original integer program. By this we mean two things: first, every feasible solution for the original integer program (1.1) is feasible for this linear program; and second, the value of any feasible solution for the integer program has the same value in the linear program. To see that the linear program is a relaxation, note that any solution for the integer program such that x_j ∈ {0, 1} for each j = 1, . . . , m and ∑_{j: e_i ∈ S_j} x_j ≥ 1 for each i = 1, . . . , n will certainly satisfy all the constraints of the linear program. Furthermore, the objective functions of both the integer and linear programs are the same, so that any feasible solution for the integer program has the same value for the linear program. Let Z*_LP denote the optimum value of this linear program. Any optimal solution to the integer program is feasible for the linear program and has value Z*_IP. Thus, any optimal solution to the linear program will have value Z*_LP ≤ Z*_IP = OPT, since this minimization linear program finds a feasible solution of lowest possible value. Using a polynomial-time solvable relaxation of a problem in order to obtain a lower bound (in the case of minimization problems) or an upper bound (in the case of maximization problems) on the optimum value of the problem is a concept that will appear frequently in this book.

In the following sections, we will give some examples of how the linear programming relaxation can be used to derive approximation algorithms for the set cover problem. In the next section, we will show that a fractional solution to the linear program can be rounded to
a solution to the integer program of objective function value that is within a certain factor f of the value of the linear program Z*_LP. Thus, the integer solution will cost no more than f · OPT. In the following section, we will show how one can similarly round the solution to something called the dual of the linear programming relaxation. In Section 1.5, we will see that one does not in fact need to solve the dual of the linear programming relaxation, but can quickly construct a dual feasible solution with the properties needed to allow a good rounding. In Section 1.6, a type of algorithm called a greedy algorithm will be given; in this case, linear programming need not be used at all, but one can use the dual to improve the analysis of the algorithm. Finally, in Section 1.7, we will see how randomized rounding of the solution to the linear programming relaxation can lead to an approximation algorithm for the set cover problem.

Because we will frequently be referring to linear programs and linear programming, we will often abbreviate these terms by the acronym LP. Similarly, IP stands for either integer program or integer programming.
1.3 A deterministic rounding algorithm
Suppose that we solve the linear programming relaxation of the set cover problem. Let x* denote an optimal solution to the LP. How then can we recover a solution to the set cover problem? Here is a very easy way to obtain a solution: given the LP solution x*, we include subset S_j in our solution if and only if x*_j ≥ 1/f, where f is the maximum number of sets in which any element appears. More formally, let f_i = |{j : e_i ∈ S_j}| be the number of sets in which element e_i appears, i = 1, . . . , n; then f = max_{i=1,...,n} f_i. Let I denote the indices j of the subsets in this solution. In effect, we round the fractional solution x* to an integer solution x̂ by setting x̂_j = 1 if x*_j ≥ 1/f, and x̂_j = 0 otherwise. We shall see that it is straightforward to prove that x̂ is a feasible solution to the integer program, and I indeed indexes a set cover.

Lemma 1.5: The collection of subsets S_j, j ∈ I, is a set cover.
Proof. Consider the solution specified by the lemma, and call an element e_i covered if this solution contains some subset containing e_i. We show that each element e_i is covered. Because the optimal solution x* is a feasible solution to the linear program, we know that ∑_{j: e_i ∈ S_j} x*_j ≥ 1 for element e_i. By the definition of f_i and of f, there are f_i ≤ f terms in the sum, so at least one term must be at least 1/f. Thus, for some j such that e_i ∈ S_j, x*_j ≥ 1/f. Therefore, j ∈ I, and element e_i is covered.
We can also show that this rounding procedure yields an approximation algorithm.

Theorem 1.6: The rounding algorithm is an f-approximation algorithm for the set cover problem.
Proof. It is clear that the algorithm runs in polynomial time. By our construction, 1 ≤ f · x*_j for each j ∈ I. From this, and the fact that each term f w_j x*_j is nonnegative for j = 1, . . . , m,
we see that

∑_{j∈I} w_j ≤ ∑_{j=1}^m w_j · (f · x*_j)
            = f ∑_{j=1}^m w_j x*_j
            = f · Z*_LP
            ≤ f · OPT,

where the final inequality follows from the argument above that Z*_LP ≤ OPT.
In the special case of the vertex cover problem, f_i = 2 for each vertex i ∈ V, since each edge is incident to exactly two vertices. Thus, the rounding algorithm gives a 2-approximation algorithm for the vertex cover problem.

This particular algorithm allows us to have an a fortiori guarantee for each input. While we know that for any input, the solution produced has cost at most a factor of f more than the cost of an optimal solution, we can for any input compare the value of the solution we find with the value of the linear programming relaxation. If the algorithm finds a set cover I, let α = ∑_{j∈I} w_j / Z*_LP. From the proof above, we know that α ≤ f. However, for any given input, it could be the case that α is significantly smaller than f; in this case we know that ∑_{j∈I} w_j = α Z*_LP ≤ α · OPT, and the solution is within a factor of α of optimal. The algorithm can easily compute α, given that it computes I and solves the LP relaxation.
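The rounding rule of this section can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not the book's code: the instance is made up, and the fractional solution `x_star` is hard-coded rather than produced by an LP solver, which is where it would come from in practice.

```python
# Deterministic rounding: include S_j whenever x*_j >= 1/f, where f is the
# maximum number of sets containing any single element.

def round_lp_solution(elements, sets, x_star):
    """Return the chosen indices I and the frequency parameter f."""
    f = max(sum(1 for S in sets if e in S) for e in elements)
    I = [j for j, xj in enumerate(x_star) if xj >= 1.0 / f]
    return I, f

elements = {'a', 'b', 'c'}
sets = [{'a', 'b'}, {'b', 'c'}, {'a', 'c'}]
x_star = [0.5, 0.5, 0.5]   # LP-feasible: each element is covered to total 1

I, f = round_lp_solution(elements, sets, x_star)
# Every element appears in f = 2 sets, so every set with x*_j >= 1/2 is chosen.
assert f == 2 and I == [0, 1, 2]
# The chosen sets indeed form a cover, as Lemma 1.5 guarantees.
assert set().union(*(sets[j] for j in I)) == elements
```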
1.4 Rounding a dual solution
Often it will be useful to consider the dual of the linear programming relaxation of a given problem. Again, we will give a very brief introduction to the concept of the dual of a linear program in the context of the set cover problem, and more in-depth introductions to the topic will be cited in the notes at the end of this chapter. To begin, we suppose that each element e_i is charged some nonnegative price y_i ≥ 0 for its coverage by a set cover. Intuitively, it might be the case that some elements can be covered with low-weight subsets, while other elements might require high-weight subsets to cover them; we would like to be able to capture this distinction by charging low prices to the former and high prices to the latter. In order for the prices to be reasonable, it cannot be the case that the sum of the prices of elements in a subset S_j is more than the weight of the set, since we are able to cover all of those elements by paying weight w_j. Thus, for each subset S_j we have the following limit on the prices:

∑_{i: e_i ∈ S_j} y_i ≤ w_j.
We can find the highest total price that the elements can be charged by the following linear program:

maximize    ∑_{i=1}^n y_i

subject to  ∑_{i: e_i ∈ S_j} y_i ≤ w_j,    j = 1, . . . , m,                    (1.3)

            y_i ≥ 0,                       i = 1, . . . , n.
This linear program is the dual linear program of the set cover linear programming relaxation (1.2). We can in general derive a dual linear program for any given linear program, but we will not go into the details of how to do so; see Appendix A or the references in the notes at the end of the chapter. If we derive a dual for a given linear program, the given program is sometimes called the primal linear program. For instance, the original linear programming relaxation (1.2) of the set cover problem is the primal linear program of the dual (1.3). Notice that this dual has a variable y_i for each constraint of the primal linear program (that is, for the constraint ∑_{j: e_i ∈ S_j} x_j ≥ 1), and has a constraint for each variable x_j of the primal. This is true of dual linear programs in general.

Dual linear programs have a number of very interesting and useful properties. For example, let x be any feasible solution to the set cover linear programming relaxation, and let y be any feasible set of prices (that is, any feasible solution to the dual linear program). Then consider the value of the dual solution y:

∑_{i=1}^n y_i ≤ ∑_{i=1}^n y_i ∑_{j: e_i ∈ S_j} x_j,

since for any e_i, ∑_{j: e_i ∈ S_j} x_j ≥ 1 by the feasibility of x. Then rewriting the right-hand side of this inequality, we have

∑_{i=1}^n y_i ∑_{j: e_i ∈ S_j} x_j = ∑_{j=1}^m x_j ∑_{i: e_i ∈ S_j} y_i.

Finally, noticing that since y is a feasible solution to the dual linear program, we know that ∑_{i: e_i ∈ S_j} y_i ≤ w_j for any j, so that

∑_{j=1}^m x_j ∑_{i: e_i ∈ S_j} y_i ≤ ∑_{j=1}^m x_j w_j.

So we have shown that

∑_{i=1}^n y_i ≤ ∑_{j=1}^m w_j x_j;
that is, any feasible solution to the dual linear program has a value no greater than the value of any feasible solution to the primal linear program. In particular, any feasible solution to the dual linear program has a value no greater than the value of the optimal solution to the primal linear program, so for any feasible y, ∑_{i=1}^n y_i ≤ Z*_LP. This is called the weak duality property of linear programs. Since we previously argued that Z*_LP ≤ OPT, we have that for any feasible y, ∑_{i=1}^n y_i ≤ OPT. This is a very useful property that will help us in designing approximation algorithms.

Additionally, there is a quite amazing strong duality property of linear programs. Strong duality states that as long as there exist feasible solutions to both the primal and dual linear programs, their optimal values are equal. Thus, if x* is an optimal solution to the set cover linear programming relaxation, and y* is an optimal solution to the dual linear program, then

∑_{j=1}^m w_j x*_j = ∑_{i=1}^n y*_i.
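The weak duality inequality just derived can be checked numerically on a toy instance. The instance and the feasible solutions below are invented for illustration; they are not from the text.

```python
# Weak duality on a tiny set cover instance: any feasible dual prices y have
# total value at most the weight of any feasible fractional primal solution x.

elements = ['e1', 'e2']
sets = [{'e1'}, {'e2'}, {'e1', 'e2'}]
w = [2.0, 2.0, 3.0]

x = [0.0, 0.0, 1.0]              # primal feasible: the third set covers both
y = {'e1': 1.5, 'e2': 1.5}       # dual feasible: 1.5 <= 2, 1.5 <= 2, 3 <= 3

# Primal feasibility: sum_{j: e in S_j} x_j >= 1 for every element e.
assert all(sum(x[j] for j, S in enumerate(sets) if e in S) >= 1 for e in elements)
# Dual feasibility: sum_{i: e_i in S_j} y_i <= w_j for every set S_j.
assert all(sum(y[e] for e in S) <= wj for S, wj in zip(sets, w))
# Weak duality: sum_i y_i <= sum_j w_j x_j (here both sides equal 3, so these
# particular solutions are in fact optimal, as strong duality promises).
assert sum(y.values()) <= sum(wj * xj for wj, xj in zip(w, x))
```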
Information from a dual linear program solution can sometimes be used to derive good approximation algorithms. Let y* be an optimal solution to the dual LP (1.3), and consider
the solution in which we choose all subsets for which the corresponding dual inequality is tight; that is, the inequality is met with equality for subset S_j, and ∑_{i: e_i ∈ S_j} y*_i = w_j. Let I′ denote the indices of the subsets in this solution. We will prove that this algorithm also is an f-approximation algorithm for the set cover problem.

Lemma 1.7: The collection of subsets S_j, j ∈ I′, is a set cover.

Proof. Suppose that there exists some uncovered element e_k. Then for each subset S_j containing e_k, it must be the case that

∑_{i: e_i ∈ S_j} y*_i < w_j.    (1.4)
Let ϵ be the smallest difference between the right-hand side and left-hand side of all constraints involving e_k; that is, ϵ = min_{j: e_k ∈ S_j} ( w_j − ∑_{i: e_i ∈ S_j} y*_i ). By inequality (1.4), we know that ϵ > 0. Consider now a new dual solution y′ in which y′_k = y*_k + ϵ and every other component of y′ is the same as in y*. Then y′ is a dual feasible solution, since for each j such that e_k ∈ S_j,

∑_{i: e_i ∈ S_j} y′_i = ∑_{i: e_i ∈ S_j} y*_i + ϵ ≤ w_j,

by the definition of ϵ. For each j such that e_k ∉ S_j,

∑_{i: e_i ∈ S_j} y′_i = ∑_{i: e_i ∈ S_j} y*_i ≤ w_j,

as before. Furthermore, ∑_{i=1}^n y′_i > ∑_{i=1}^n y*_i, which contradicts the optimality of y*. Thus, it must be the case that all elements are covered and I′ is a set cover.

Theorem 1.8: The dual rounding algorithm described above is an f-approximation algorithm for the set cover problem.

Proof. The central idea is the following "charging" argument: when we choose a set S_j to be in the cover, we "pay" for it by charging y*_i to each of its elements e_i; each element is charged at most once for each set that contains it (and hence at most f times), and so the total cost is at most f ∑_{i=1}^n y*_i, or f times the dual objective function.

More formally, since j ∈ I′ only if w_j = ∑_{i: e_i ∈ S_j} y*_i, we have that the cost of the set cover I′ is

∑_{j∈I′} w_j = ∑_{j∈I′} ∑_{i: e_i ∈ S_j} y*_i
             = ∑_{i=1}^n |{j ∈ I′ : e_i ∈ S_j}| · y*_i
             ≤ ∑_{i=1}^n f_i y*_i
             ≤ f ∑_{i=1}^n y*_i
             ≤ f · OPT.

The second equality follows from the fact that when we interchange the order of summation, the coefficient of y*_i is, of course, equal to the number of times that this term occurs overall. The final inequality follows from the weak duality property discussed previously.
In fact, it is possible to show that this algorithm can do no better than the algorithm of the previous section; to be precise, we can show that if I indexes the solution returned by the primal rounding algorithm of the previous section, then I ⊆ I′. This follows from a property of optimal linear programming solutions called complementary slackness. We showed earlier the following string of inequalities for any feasible solution x to the set cover linear programming relaxation, and any feasible solution y to the dual linear program:

∑_{i=1}^n y_i ≤ ∑_{i=1}^n y_i ∑_{j: e_i ∈ S_j} x_j = ∑_{j=1}^m x_j ∑_{i: e_i ∈ S_j} y_i ≤ ∑_{j=1}^m x_j w_j.
Furthermore, we claimed that strong duality implies that for optimal solutions x* and y*, ∑_{i=1}^n y*_i = ∑_{j=1}^m w_j x*_j. Thus, for any optimal solutions x* and y*, the two inequalities in the chain of inequalities above must in fact be equalities. The only way this can happen is that whenever y*_i > 0 then ∑_{j: e_i ∈ S_j} x*_j = 1, and whenever x*_j > 0, then ∑_{i: e_i ∈ S_j} y*_i = w_j. That is, whenever a linear programming variable (primal or dual) is nonzero, the corresponding constraint in the dual or primal is tight. These conditions are known as the complementary slackness conditions. Thus, if x* and y* are optimal solutions, the complementary slackness conditions must hold. The converse is also true: if x* and y* are feasible primal and dual solutions, respectively, then if the complementary slackness conditions hold, the values of the two objective functions are equal and therefore the solutions must be optimal.

In the case of the set cover program, if x*_j > 0 for any primal optimal solution x*, then the corresponding dual inequality for S_j must be tight for any dual optimal solution y*. Recall that in the algorithm of the previous section, we put j ∈ I when x*_j ≥ 1/f. Thus, j ∈ I implies that j ∈ I′, so that I′ ⊇ I.
1.5 Constructing a dual solution: the primal-dual method
One of the disadvantages of the algorithms of the previous two sections is that they require solving a linear program. While linear programs are efficiently solvable, and algorithms for them are quick in practice, special purpose algorithms are often much faster. Although in this book we will not usually be concerned with the precise running times of the algorithms, we will try to indicate their relative practicality.

The basic idea of the algorithm in this section is that the dual rounding algorithm of the previous section uses relatively few properties of an optimal dual solution. Instead of actually solving the dual LP, we can construct a feasible dual solution with the same properties. In this case, constructing the dual solution is much faster than solving the dual LP, and hence leads to a much faster algorithm.

The algorithm of the previous section used the following properties. First, we used the fact that ∑_{i=1}^n y_i ≤ OPT, which is true for any feasible dual solution y. Second, we include j ∈ I′ precisely when ∑_{i: e_i ∈ S_j} y_i = w_j, and I′ is a set cover. These two facts together gave the proof that the cost of I′ is no more than f times optimal. Importantly, it is the proof of Lemma 1.7 (that we have constructed a feasible cover) that shows how to obtain an algorithm that constructs a dual solution. Consider any feasible dual solution y, and let T be the set of the indices of all tight dual constraints; that is, T = {j : ∑_{i: e_i ∈ S_j} y_i = w_j}. If T is a set cover, then we are done. If T is not a set cover, then some item e_i is uncovered, and as shown in the proof of Lemma 1.7 it is possible to improve the dual objective function by increasing y_i by some ϵ > 0. More specifically, we can increase y_i
by min_{j: e_i ∈ S_j} ( w_j − ∑_{k: e_k ∈ S_j} y_k ), so that the constraint becomes tight for the subset S_j that attains the minimum. Additionally, the modified dual solution remains feasible. Thus, we can add j to T, and element e_i is now covered by the sets in T. We repeat this process until T is a set cover. Since an additional element e_i is covered each time, the process is repeated at most n times. To complete the description of the algorithm, we need to give only an initial dual feasible solution. We can use the solution y_i = 0 for each i = 1, . . . , n; this is feasible since each w_j, j = 1, . . . , m, is nonnegative. A formal description is given in Algorithm 1.1. This yields the following theorem.

y ← 0
I ← ∅
while there exists e_i ∉ ∪_{j∈I} S_j do
    Increase the dual variable y_i until there is some ℓ with e_i ∈ S_ℓ such that ∑_{j: e_j ∈ S_ℓ} y_j = w_ℓ
    I ← I ∪ {ℓ}

Algorithm 1.1: Primal-dual algorithm for the set cover problem.

Theorem 1.9: Algorithm 1.1 is an f-approximation algorithm for the set cover problem.

This type of algorithm is called a primal-dual algorithm by analogy with the primal-dual method used in other combinatorial algorithms. Linear programming problems, network flow problems, and shortest path problems (among others) all have primal-dual optimization algorithms; we will see an example of a primal-dual algorithm for the shortest s-t path problem in Section 7.3. Primal-dual algorithms start with a dual feasible solution, and use dual information to infer a primal, possibly infeasible, solution. If the primal solution is indeed infeasible, the dual solution is modified to increase the value of the dual objective function. The primal-dual method has been very useful in designing approximation algorithms, and we will discuss it extensively in Chapter 7.

We observe again that this particular algorithm allows us to have an a fortiori guarantee for each input, since we can compare the value of the solution obtained with the value of the dual solution generated by the algorithm.
This ratio is guaranteed to be at most f by the proof above, but it might be signiﬁcantly better.
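Algorithm 1.1 can be sketched in Python as follows. This is a minimal illustration under stated assumptions: sets are represented as Python sets over a list of elements, the instance at the bottom is made up, and no attempt is made to handle instances where some element is in no set.

```python
# Sketch of Algorithm 1.1: raise the dual variable of an uncovered element
# until some set's dual constraint becomes tight, then add that set to I.

def primal_dual_set_cover(elements, sets, w):
    y = {e: 0.0 for e in elements}       # initial dual feasible solution y = 0
    I = []
    covered = set()
    while covered != set(elements):
        e = next(x for x in elements if x not in covered)   # uncovered element
        # Slack of each dual constraint for the sets containing e.
        candidates = [j for j, S in enumerate(sets) if e in S]
        slack = {j: w[j] - sum(y[i] for i in sets[j]) for j in candidates}
        ell = min(slack, key=slack.get)  # constraint that becomes tight first
        y[e] += slack[ell]               # raise y_e until that set is tight
        I.append(ell)
        covered |= sets[ell]
    return I, y

elements = ['e1', 'e2', 'e3']
sets = [{'e1', 'e2'}, {'e2', 'e3'}, {'e3'}]
w = [1.0, 3.0, 1.0]
I, y = primal_dual_set_cover(elements, sets, w)
assert set().union(*(sets[j] for j in I)) == set(elements)        # I is a cover
assert all(sum(y[i] for i in S) <= wj for S, wj in zip(sets, w))  # y is feasible
```

On this instance the algorithm picks sets of total weight 2 while the dual value is also 2, so here the a fortiori guarantee certifies that the solution is actually optimal.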
1.6 A greedy algorithm
At this point, the reader might be forgiven for feeling a slight sense of futility: we have examined several techniques for designing approximation algorithms for the set cover problem, and they have all led to the same result, an approximation algorithm with performance guarantee f . But, as in life, perseverance and some amount of cleverness often pay dividends in designing approximation algorithms. We show in this section that a type of algorithm called a greedy algorithm gives an approximation algorithm with a performance guarantee that is often signiﬁcantly better than f . Greedy algorithms work by making a sequence of decisions; each decision is made to optimize that particular decision, even though this sequence of locally optimal (or “greedy”) decisions might not lead to a globally optimal solution. The advantage of greedy algorithms is that they are typically very easy to implement, and hence greedy algorithms are a commonly used heuristic, even when they have no performance guarantee. We now present a very natural greedy algorithm for the set cover problem. Sets are chosen Electronic web edition. Copyright 2011 by David P. Williamson and David B. Shmoys. To be published by Cambridge University Press
    I ← ∅
    Ŝ_j ← S_j  ∀j
    while I is not a set cover do
        ℓ ← arg min_{j: Ŝ_j ≠ ∅} w_j / |Ŝ_j|
        I ← I ∪ {ℓ}
        Ŝ_j ← Ŝ_j − S_ℓ  ∀j

Algorithm 1.2: A greedy algorithm for the set cover problem.
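For comparison, Algorithm 1.2 admits an equally short Python sketch, under the same illustrative conventions (elements 0 to n − 1, sets[j] = S_j, weights[j] = w_j); these are assumptions for the example, not part of the text.

```python
# Illustrative sketch of Algorithm 1.2 (greedy set cover).
# Assumed conventions: elements are 0..n-1, sets[j] is S_j as a Python set,
# and weights[j] = w_j >= 0; the instance is assumed feasible.

def greedy_set_cover(n, sets, weights):
    uncovered = set(range(n))
    shat = [set(s) for s in sets]       # S-hat_j: still-uncovered part of S_j
    I = []
    while uncovered:
        # choose the set minimizing weight per newly covered element
        l = min((j for j in range(len(sets)) if shat[j]),
                key=lambda j: weights[j] / len(shat[j]))
        I.append(l)
        uncovered -= sets[l]
        for j in range(len(sets)):
            shat[j] -= sets[l]          # remove the newly covered elements
    return I
```

Each iteration covers at least one new element (the arg min ranges only over sets with nonempty Ŝ_j), so on a feasible instance the loop runs at most n times.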
in a sequence of rounds. In each round, we choose the set that gives us the most bang for the buck; that is, the set that minimizes the ratio of its weight to the number of currently uncovered elements it contains. In the event of a tie, we pick an arbitrary set that achieves the minimum ratio. We continue choosing sets until all elements are covered. Obviously, this will yield a polynomial-time algorithm, since there can be no more than m rounds, and in each we compute O(m) ratios, each in constant time. A formal description is given in Algorithm 1.2.

Before we state the theorem, we need some notation and a useful mathematical fact. Let H_k denote the kth harmonic number: that is, H_k = 1 + 1/2 + 1/3 + · · · + 1/k. Note that H_k ≈ ln k. The following fact is one that we will use many times in the course of this book. It can be proven with simple algebraic manipulations.

Fact 1.10: Given positive numbers a_1, …, a_k and b_1, …, b_k, then

    min_{i=1,…,k} a_i/b_i ≤ (∑_{i=1}^{k} a_i) / (∑_{i=1}^{k} b_i) ≤ max_{i=1,…,k} a_i/b_i.

Theorem 1.11: Algorithm 1.2 is an H_n-approximation algorithm for the set cover problem.

Proof. The basic intuition for the analysis of the algorithm is as follows. Let OPT denote the value of an optimal solution to the set cover problem. We know that an optimal solution covers all n elements with a solution of weight OPT; therefore, there must be some subset that covers its elements with an average weight of at most OPT/n. Similarly, after k elements have been covered, the optimal solution can cover the remaining n − k elements with a solution of weight OPT, which implies that there is some subset that covers its remaining uncovered elements with an average weight of at most OPT/(n − k). So in general the greedy algorithm pays about OPT/(n − k + 1) to cover the kth uncovered element, giving a performance guarantee of ∑_{k=1}^{n} 1/(n − k + 1) = H_n.

We now formalize this intuition. Let n_k denote the number of elements that remain uncovered at the start of the kth iteration.
If the algorithm takes ℓ iterations, then n_1 = n, and we set n_{ℓ+1} = 0. Pick an arbitrary iteration k. Let I_k denote the indices of the sets chosen in iterations 1 through k − 1, and for each j = 1, …, m, let Ŝ_j denote the set of uncovered elements in S_j at the start of this iteration; that is, Ŝ_j = S_j − ∪_{p∈I_k} S_p. Then we claim that for the set j chosen in the kth iteration,

    w_j ≤ ((n_k − n_{k+1}) / n_k) · OPT.      (1.5)
Given the claimed inequality (1.5), we can prove the theorem. Let I contain the indices of the sets in our final solution. Then

    ∑_{j∈I} w_j ≤ ∑_{k=1}^{ℓ} ((n_k − n_{k+1}) / n_k) · OPT
                ≤ OPT · ∑_{k=1}^{ℓ} ( 1/n_k + 1/(n_k − 1) + · · · + 1/(n_{k+1} + 1) )      (1.6)
                = OPT · ∑_{i=1}^{n} 1/i
                = H_n · OPT,

where the inequality (1.6) follows from the fact that 1/n_k ≤ 1/(n_k − i) for each 0 ≤ i < n_k. To prove the claimed inequality (1.5), we shall first argue that in the kth iteration,

    min_{j: Ŝ_j ≠ ∅} w_j / |Ŝ_j| ≤ OPT / n_k.      (1.7)
If we let O contain the indices of the sets in an optimal solution, then inequality (1.7) follows from Fact 1.10, by observing that

    min_{j: Ŝ_j ≠ ∅} w_j / |Ŝ_j| ≤ (∑_{j∈O} w_j) / (∑_{j∈O} |Ŝ_j|) = OPT / (∑_{j∈O} |Ŝ_j|) ≤ OPT / n_k,

where the last inequality follows from the fact that since O is a set cover, the set ∪_{j∈O} Ŝ_j must include all remaining n_k uncovered elements. Let j index a subset that minimizes this ratio, so that w_j / |Ŝ_j| ≤ OPT / n_k. If we add the subset S_j to our solution, then there will be |Ŝ_j| fewer uncovered elements, so that n_{k+1} = n_k − |Ŝ_j|. Thus,

    w_j ≤ (|Ŝ_j| / n_k) · OPT = ((n_k − n_{k+1}) / n_k) · OPT.
We can improve the performance guarantee of the algorithm slightly by using the dual of the linear programming relaxation in the analysis. Let g be the maximum size of any subset S_j; that is, g = max_j |S_j|. Recall that Z*_LP is the optimum value of the linear programming relaxation for the set cover problem. The following theorem immediately implies that the greedy algorithm is an H_g-approximation algorithm, since Z*_LP ≤ OPT.

Theorem 1.12: Algorithm 1.2 returns a solution indexed by I such that ∑_{j∈I} w_j ≤ H_g · Z*_LP.

Proof. To prove the theorem, we will construct an infeasible dual solution y such that ∑_{j∈I} w_j = ∑_{i=1}^{n} y_i. We will then show that y′ = (1/H_g) y is a feasible dual solution. By the weak duality theorem, ∑_{i=1}^{n} y′_i ≤ Z*_LP, so that ∑_{j∈I} w_j = ∑_{i=1}^{n} y_i = H_g ∑_{i=1}^{n} y′_i ≤ H_g · Z*_LP ≤ H_g · OPT. We will see at the end of the proof the reason we choose to divide the infeasible dual solution y by H_g. The name dual fitting has been given to this technique of constructing an infeasible dual solution whose value is equal to the value of the primal solution constructed, and such that scaling the dual solution by a single value makes it feasible. We will return to this technique in Section 9.4.
To construct the infeasible dual solution y, suppose we choose to add subset S_j to our solution in iteration k. Then for each e_i ∈ Ŝ_j, we set y_i = w_j/|Ŝ_j|. Since each e_i ∈ Ŝ_j is uncovered in iteration k, and is then covered for the remaining iterations of the algorithm (because we added subset S_j to the solution), the dual variable y_i is set to a value exactly once; in particular, it is set in the iteration in which element e_i is covered. Furthermore, w_j = ∑_{i: e_i ∈ Ŝ_j} y_i; that is, the weight of the subset S_j chosen in the kth iteration is equal to the sum of the duals y_i of the uncovered elements that are covered in the kth iteration. This immediately implies that ∑_{j∈I} w_j = ∑_{i=1}^{n} y_i.

It remains to prove that the dual solution y′ = (1/H_g) y is feasible. We must show that for each subset S_j, ∑_{i: e_i ∈ S_j} y′_i ≤ w_j. Pick an arbitrary subset S_j. Let a_k be the number of elements in this subset that are still uncovered at the beginning of the kth iteration, so that a_1 = |S_j|, and a_{ℓ+1} = 0. Let A_k be the uncovered elements of S_j covered in the kth iteration, so that |A_k| = a_k − a_{k+1}. If subset S_p is chosen in the kth iteration, then for each element e_i ∈ A_k covered in the kth iteration,

    y′_i = w_p / (H_g |Ŝ_p|) ≤ w_j / (H_g a_k),

where Ŝ_p is the set of uncovered elements of S_p at the beginning of the kth iteration. The inequality follows because if S_p is chosen in the kth iteration, it must minimize the ratio of its weight to the number of uncovered elements it contains. Thus,

    ∑_{i: e_i ∈ S_j} y′_i = ∑_{k=1}^{ℓ} ∑_{i: e_i ∈ A_k} y′_i
                          ≤ ∑_{k=1}^{ℓ} (a_k − a_{k+1}) · w_j / (H_g a_k)
                          = (w_j / H_g) ∑_{k=1}^{ℓ} (a_k − a_{k+1}) / a_k
                          ≤ (w_j / H_g) ∑_{k=1}^{ℓ} ( 1/a_k + 1/(a_k − 1) + · · · + 1/(a_{k+1} + 1) )
                          ≤ (w_j / H_g) ∑_{i=1}^{|S_j|} 1/i
                          = (w_j / H_g) · H_{|S_j|}
                          ≤ w_j,
where the final inequality follows because |S_j| ≤ g. Here we see the reason for scaling the dual solution by H_g, since we know that H_{|S_j|} ≤ H_g for all sets j.

It turns out that no approximation algorithm for the set cover problem with performance guarantee better than H_n is possible, under an assumption slightly stronger than P ≠ NP.

Theorem 1.13: If there exists a c ln n-approximation algorithm for the unweighted set cover problem for some constant c < 1, then there is an O(n^{O(log log n)})-time deterministic algorithm for each NP-complete problem.
Theorem 1.14: There exists some constant c > 0 such that if there exists a c ln n-approximation algorithm for the unweighted set cover problem, then P = NP.

We will discuss results of this sort at more length in Chapter 16; in Theorem 16.32 we show how a slightly weaker version of these results can be derived. Results of this type are sometimes called hardness theorems, as they show that it is NP-hard to provide near-optimal solutions for a certain problem with certain performance guarantees.

The f-approximation algorithms for the set cover problem imply a 2-approximation algorithm for the special case of the vertex cover problem. No algorithm with a better constant performance guarantee is known at this point in time. Additionally, two hardness theorems, Theorems 1.15 and 1.16 below, have been shown.

Theorem 1.15: If there exists an α-approximation algorithm for the vertex cover problem with α < 10√5 − 21 ≈ 1.36, then P = NP.

The following theorem mentions a conjecture called the unique games conjecture that we will discuss more in Section 13.3 and Section 16.5. The conjecture is roughly that a particular problem (called unique games) is NP-hard.

Theorem 1.16: Assuming the unique games conjecture, if there exists an α-approximation algorithm for the vertex cover problem with constant α < 2, then P = NP.

Thus, assuming P ≠ NP and the NP-completeness of the unique games problem, we have found essentially the best possible approximation algorithm for the vertex cover problem.
1.7  A randomized rounding algorithm
In this section, we consider one final technique for devising an approximation algorithm for the set cover problem. Although the algorithm is slower and has no better guarantee than the greedy algorithm of the previous section, we include it here because it introduces the notion of using randomization in approximation algorithms, an idea we will cover in depth in Chapter 5.

As with the algorithm in Section 1.3, the algorithm will solve a linear programming relaxation for the set cover problem, and then round the fractional solution to an integral solution. Rather than doing so deterministically, however, the algorithm will do so randomly using a technique called randomized rounding. Let x* be an optimal solution to the LP relaxation. We would like to round fractional values of x* to either 0 or 1 in such a way that we obtain a solution x̂ to the integer programming formulation of the set cover problem without increasing the cost too much. The central idea of randomized rounding is that we interpret the fractional value x*_j as the probability that x̂_j should be set to 1. Thus, each subset S_j is included in our solution with probability x*_j, where these m events (that S_j is included in our solution) are independent random events. We assume some basic knowledge of probability theory throughout this text; for those who need some additional background, see the notes at the end of the chapter for suggested references.

Let X_j be a random variable that is 1 if subset S_j is included in the solution, and 0 otherwise. Then the expected value of the solution is

    E[∑_{j=1}^{m} w_j X_j] = ∑_{j=1}^{m} w_j Pr[X_j = 1] = ∑_{j=1}^{m} w_j x*_j = Z*_LP,
or just the value of the linear programming relaxation, which is no more than OPT! As we will see, however, it is quite likely that the solution is not a set cover. Nevertheless, this illustrates
why randomized rounding can provide such good approximation algorithms in some cases, and we will see further examples of this in Chapter 5.

Let us now calculate the probability that a given element e_i is not covered by this procedure. This is the probability that none of the subsets containing e_i are included in the solution, or

    ∏_{j: e_i ∈ S_j} (1 − x*_j).
We can bound this probability by using the fact that 1 − x ≤ e^{−x} for any x, where e is the base of the natural logarithm. Then

    Pr[e_i not covered] = ∏_{j: e_i ∈ S_j} (1 − x*_j)
                        ≤ ∏_{j: e_i ∈ S_j} e^{−x*_j}
                        = e^{−∑_{j: e_i ∈ S_j} x*_j}
                        ≤ e^{−1},
where the final inequality follows from the LP constraint that ∑_{j: e_i ∈ S_j} x*_j ≥ 1.

Although e^{−1} is an upper bound on the probability that a given element is not covered, it is possible to approach this bound arbitrarily closely, so in the worst case it is quite likely that this randomized rounding procedure does not produce a set cover. How small would this probability have to be in order for it to be very likely that a set cover is produced? And perhaps even more fundamentally, what is the "right" notion of "very likely"? The latter question has a number of possible answers; one natural way to think of the situation is to impose a guarantee in keeping with our focus on polynomial-time algorithms. Suppose that, for any constant c, we could devise a polynomial-time algorithm whose chance of failure is at most an inverse polynomial n^{−c}; then we say that we have an algorithm that works with high probability. To be more precise, we would have a family of algorithms, since it might be necessary to give progressively slower algorithms, or ones with worse performance guarantees, to achieve analogously more failsafe results. If we could devise a randomized procedure such that Pr[e_i not covered] ≤ 1/n^c for some constant c ≥ 2, then

    Pr[there exists an uncovered element] ≤ ∑_{i=1}^{n} Pr[e_i not covered] ≤ 1/n^{c−1},
and we would have a set cover with high probability. In fact, we can achieve such a bound in the following way: for each subset S_j, we imagine a coin that comes up heads with probability x*_j, and we flip the coin c ln n times. If it comes up heads in any of the c ln n trials, we include S_j in our solution, otherwise not. Thus, the probability that S_j is not included is (1 − x*_j)^{c ln n}. Furthermore,

    Pr[e_i not covered] = ∏_{j: e_i ∈ S_j} (1 − x*_j)^{c ln n}
                        ≤ ∏_{j: e_i ∈ S_j} e^{−x*_j (c ln n)}
                        = e^{−(c ln n) ∑_{j: e_i ∈ S_j} x*_j}
                        ≤ 1/n^c,
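As a concrete illustration of this boosted rounding step, here is a short Python sketch. It assumes the LP solution x* is already available as a list x_star of values in [0, 1]; the function and variable names are hypothetical, chosen for the example.

```python
import math
import random

# Illustrative sketch of boosted randomized rounding: flip a coin with heads
# probability x*_j about c ln n times, and include S_j if any flip is heads,
# so Pr[S_j not included] = (1 - x*_j)^(c ln n).

def randomized_round(x_star, n, c=2):
    trials = max(1, math.ceil(c * math.log(n)))
    chosen = set()
    for j, xj in enumerate(x_star):
        if any(random.random() < xj for _ in range(trials)):
            chosen.add(j)
    return chosen
```

Rounding c ln n up to an integer number of flips only increases each inclusion probability, so the coverage bound derived above still holds.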
as desired. We now need to prove only that the algorithm has a good expected value given that it produces a set cover.

Theorem 1.17: The algorithm is a randomized O(ln n)-approximation algorithm that produces a set cover with high probability.

Proof. Let p_j(x*_j) be the probability that a given subset S_j is included in the solution as a function of x*_j. By construction of the algorithm, we know that p_j(x*_j) = 1 − (1 − x*_j)^{c ln n}. Observe that if x*_j ∈ [0, 1] and c ln n ≥ 1, then we can bound the derivative p′_j at x*_j by

    p′_j(x*_j) = (c ln n)(1 − x*_j)^{(c ln n)−1} ≤ c ln n.

Then since p_j(0) = 0, and the slope of the function p_j is bounded above by c ln n on the interval [0, 1], p_j(x*_j) ≤ (c ln n) x*_j on the interval [0, 1]. If X_j is a random variable that is 1 if the subset S_j is included in the solution, and 0 otherwise, then the expected value of the random procedure is

    E[∑_{j=1}^{m} w_j X_j] = ∑_{j=1}^{m} w_j Pr[X_j = 1]
                           ≤ ∑_{j=1}^{m} w_j (c ln n) x*_j
                           = (c ln n) ∑_{j=1}^{m} w_j x*_j = (c ln n) Z*_LP.
However, we would like to bound the expected value of the solution given that a set cover is produced. Let F be the event that the solution obtained by the procedure is a feasible set cover, and let F̄ be the complement of this event. We know from the previous discussion that Pr[F] ≥ 1 − 1/n^{c−1}, and we also know that

    E[∑_{j=1}^{m} w_j X_j] = E[∑_{j=1}^{m} w_j X_j | F] · Pr[F] + E[∑_{j=1}^{m} w_j X_j | F̄] · Pr[F̄].

Since w_j ≥ 0 for all j,

    E[∑_{j=1}^{m} w_j X_j | F̄] ≥ 0.

Thus,

    E[∑_{j=1}^{m} w_j X_j | F] = (1/Pr[F]) · ( E[∑_{j=1}^{m} w_j X_j] − E[∑_{j=1}^{m} w_j X_j | F̄] · Pr[F̄] )
                               ≤ (1/Pr[F]) · E[∑_{j=1}^{m} w_j X_j]
                               ≤ (c ln n) Z*_LP / (1 − 1/n^{c−1})
                               ≤ 2c(ln n) Z*_LP
for n ≥ 2 and c ≥ 2.

While in this case there is a simpler and faster approximation algorithm that achieves a better performance guarantee, we will see in Chapter 5 that sometimes randomized algorithms are simpler to describe and analyze than deterministic algorithms. In fact, most of the randomized algorithms we present in this book can be derandomized: that is, a deterministic variant of them can be created that achieves the expected performance guarantee of the randomized algorithm. However, these deterministic algorithms are sometimes more complicated to describe. In addition, there are some cases in which the deterministic variant is easy to state, but the only way in which we know how to analyze the algorithm is by analyzing a corresponding randomized algorithm.

This brings us to the end of our introduction to approximation algorithms. In subsequent chapters, we will look at the techniques introduced here — as well as a few others — in greater depth, and see their application to many other problems.
Exercises

1.1 In the set cover problem, the goal is to find a collection of subsets indexed by I that minimizes ∑_{j∈I} w_j such that

    ∪_{j∈I} S_j = E.

Consider the partial cover problem, in which one finds a collection of subsets indexed by I that minimizes ∑_{j∈I} w_j such that

    |∪_{j∈I} S_j| ≥ p|E|,

where 0 < p < 1 is some constant.

(a) Give a polynomial-time algorithm to find a solution to the partial cover problem in which the value is no more than c(p) · OPT, where c(p) is a constant that depends on p, and OPT is the value of the optimal solution to the set cover problem.

(b) Give an f(p)-approximation algorithm for the partial cover problem, such that f is nondecreasing in p and f(1) ≤ H_{|E|}.

1.2 In the directed Steiner tree problem, we are given as input a directed graph G = (V, A), nonnegative costs c_ij ≥ 0 for arcs (i, j) ∈ A, a root vertex r ∈ V, and a set of terminals T ⊆ V. The goal is to find a minimum-cost tree such that for each i ∈ T there exists a directed path from r to i. Prove that for some constant c there can be no c log |T|-approximation algorithm for the directed Steiner tree problem, unless P = NP.

1.3 In the metric asymmetric traveling salesman problem, we are given as input a complete directed graph G = (V, A) with costs c_ij ≥ 0 for all arcs (i, j) ∈ A, such that the arc costs obey the triangle inequality: for all i, j, k ∈ V, we have that c_ij + c_jk ≥ c_ik. The goal is to find a tour of minimum cost; that is, a directed cycle that contains each vertex exactly once, such that the sum of the cost of the arcs in the cycle is minimized.
One approach to finding an approximation algorithm for this problem is to first find a minimum-cost strongly connected Eulerian subgraph of the input graph. A directed graph is strongly connected if for any pair of vertices i, j ∈ V there is a path from i to j and a path from j to i. A directed graph is Eulerian if the indegree of each vertex equals its outdegree. Given a strongly connected Eulerian subgraph of the input to the problem, it is possible to use a technique called "shortcutting" (discussed in Section 2.4) to turn this into a tour of no greater cost by using the triangle inequality.

One way to find a strongly connected Eulerian subgraph is as follows: We first find a minimum mean-cost cycle in the graph. A minimum mean-cost cycle is a directed cycle that minimizes the ratio of the cost of the arcs in the cycle to the number of arcs in the cycle. Such a cycle can be found in polynomial time. We then choose one vertex of the cycle arbitrarily, remove all other vertices of the cycle from the graph, and repeat. We do this until only one vertex of the graph is left. Consider the subgraph consisting of all the arcs from all the cycles found.

(a) Prove that the subgraph found by the algorithm is a strongly connected Eulerian subgraph of the input graph.

(b) Prove that the cost of this subgraph is at most 2H_n · OPT, where n = |V| and OPT is the cost of the optimal tour. Conclude that this algorithm is a 2H_n-approximation algorithm for the metric asymmetric traveling salesman problem.

1.4 In the uncapacitated facility location problem, we have a set of clients D and a set of facilities F. For each client j ∈ D and facility i ∈ F, there is a cost c_ij of assigning client j to facility i. Furthermore, there is a cost f_i associated with each facility i ∈ F. The goal of the problem is to choose a subset of facilities F′ ⊆ F so as to minimize the total cost of the facilities in F′ and the cost of assigning each client j ∈ D to the nearest facility in F′.
In other words, we wish to find F′ so as to minimize ∑_{i∈F′} f_i + ∑_{j∈D} min_{i∈F′} c_ij.

(a) Show that there exists some c such that there is no (c ln |D|)-approximation algorithm for the uncapacitated facility location problem unless P = NP.

(b) Give an O(ln |D|)-approximation algorithm for the uncapacitated facility location problem.

1.5 Consider the vertex cover problem.

(a) Prove that any extreme point of the linear program

        minimize    ∑_{i∈V} w_i x_i
        subject to  x_i + x_j ≥ 1,   ∀(i, j) ∈ E,
                    x_i ≥ 0,         i ∈ V,

has the property that x_i ∈ {0, 1/2, 1} for all i ∈ V. (Recall that an extreme point x is a feasible solution that cannot be expressed as λx¹ + (1 − λ)x² for 0 < λ < 1 and feasible solutions x¹ and x² distinct from x.)

(b) Give a 3/2-approximation algorithm for the vertex cover problem when the input graph is planar. You may use the facts that polynomial-time LP solvers return extreme points, and that there is a polynomial-time algorithm to 4-color any planar graph (i.e., the algorithm assigns each vertex one of four colors such that for any edge (i, j) ∈ E, vertices i and j have been assigned different colors).
1.6 In the node-weighted Steiner tree problem, we are given as input an undirected graph G = (V, E), node weights w_i ≥ 0 for all i ∈ V, edge costs c_e ≥ 0 for all e ∈ E, and a set of terminals T ⊆ V. The cost of a tree is the sum of the weights of the nodes plus the sum of the costs of the edges in the tree. The goal of the problem is to find a minimum-weight tree that spans all the terminals in T.

(a) Show that there exists some c such that there is no (c ln |T|)-approximation algorithm for the node-weighted Steiner tree problem unless P = NP.

(b) Give a greedy O(ln |T|)-approximation algorithm for the node-weighted Steiner tree problem.
Chapter Notes

The term "approximation algorithm" was coined by David S. Johnson [179] in an influential and prescient 1974 paper. However, earlier papers had proved the performance guarantees of heuristics, including a 1967 paper of Erdős [99] on the maximum cut problem (to be discussed in Section 6.2), a 1966 paper of Graham [142] on a scheduling problem (to be discussed in Section 2.3), and a 1964 paper of Vizing [284] on the edge coloring problem (to be discussed in Section 2.7). Johnson's paper gave an O(log n)-approximation algorithm for the unweighted set cover problem, as well as approximation algorithms for the maximum satisfiability problem (to be discussed in Section 5.1), vertex coloring (Sections 5.12, 6.5, and 13.2), and the maximum clique problem. At the end of the paper, Johnson [179] speculates about the approximability of these various problems:

    The results described in this paper indicate a possible classification of optimization problems as to the behavior of their approximation algorithms. Such a classification must remain tentative, at least until the existence of polynomial-time algorithms for finding optimal solutions has been proved or disproved. In the meantime, many questions can be asked. Are there indeed O(log n) coloring algorithms? Are there any clique finding algorithms better than O(n^ε) for all ε > 0? Where do other optimization problems fit into the scheme of things? What is it that makes algorithms for different problems behave in the same way? Is there some stronger kind of reducibility than the simple polynomial reducibility that will explain these results, or are they due to some structural similarity between the problems as we define them? And what other types of behavior and ways of analyzing and measuring it are possible? (p. 278)

There has been substantial work done in attempting to answer these questions in the decades since Johnson's paper appeared, with significant progress; for instance, Theorem 1.4 shows that no clique algorithm of the kind Johnson mentions is possible unless P = NP.

Other books on approximation algorithms are available, including the textbooks of Ausiello, Crescenzi, Gambosi, Kann, Marchetti-Spaccamela, and Protasi [27] and of Vazirani [283], and the collection of surveys edited by Hochbaum [162]. Many books on algorithms and combinatorial optimization now contain sections on approximation algorithms, including the textbooks of Bertsimas and Tsitsiklis [47], Cook, Cunningham, Pulleyblank, and Schrijver [81], Cormen, Leiserson, Rivest, and Stein [82], Kleinberg and Tardos [198], and Korte and Vygen [203]. For solid introductions to linear programming, we suggest the books of Bertsimas and Tsitsiklis [47], Chvátal [79], and Ferris, Mangasarian, and Wright [112]. Bertsekas and Tsitsiklis [45], Durrett [93, 94], and Ross [256] provide basic introductions to probability theory; the
first few chapters of the book of Mitzenmacher and Upfal [226] provide a brief introduction to probability theory in the context of computer algorithms.

The antivirus application of the set cover problem mentioned is due to Kephart, Sorkin, Arnold, Chess, Tesauro, and White [188]. Theorem 1.3 on the nonexistence of approximation schemes for problems in MAX SNP is due to Arora, Lund, Motwani, Sudan, and Szegedy [19], building on earlier work of Feige, Goldwasser, Lovász, Safra, and Szegedy [108] and Arora and Safra [23]. Theorem 1.4 on the hardness of approximating the maximum clique problem is due to Håstad [158], with a strengthening due to Zuckerman [296].

The LP rounding algorithm of Section 1.3 and the dual rounding algorithm of Section 1.4 are due to Hochbaum [160]. The primal-dual algorithm of Section 1.5 is due to Bar-Yehuda and Even [35]. The greedy algorithm and the LP-based analysis of Section 1.6 are due to Chvátal [78]. The randomized rounding algorithm of Section 1.7 is apparently folklore. Johnson [179] and Lovász [218] give earlier greedy O(log n)-approximation algorithms for the unweighted set cover problem. Theorem 1.13 on the hardness of approximating the set cover problem is due to Lund and Yannakakis [220], with a strengthening due to Bellare, Goldwasser, Lund, and Russell [43]. Theorem 1.14 on the hardness of the set cover problem is due to Feige [107]. Theorem 1.15 on the hardness of the vertex cover problem is due to Dinur and Safra [91], while Theorem 1.16, which uses the unique games conjecture, is due to Khot and Regev [194].

Exercise 1.3 is an unpublished result of Kleinberg and Williamson. The algorithm in Exercise 1.4 is due to Hochbaum [161]. Nemhauser and Trotter [231] show that all extreme points of the linear programming relaxation of vertex cover have value {0, 1/2, 1} (used in Exercise 1.5). Exercise 1.6 is due to Klein and Ravi [196].
Chapter 2
Greedy algorithms and local search
In this chapter, we will consider two standard and related techniques for designing algorithms and heuristics, namely, greedy algorithms and local search algorithms. Both algorithms work by making a sequence of decisions that optimize some local choice, though these local choices might not lead to the best overall solution. In a greedy algorithm, a solution is constructed step by step, and at each step of the algorithm the next part of the solution is constructed by making some decision that is locally the best possible. In Section 1.6, we gave an example of a greedy algorithm for the set cover problem that constructs a set cover by repeatedly choosing the set that minimizes the ratio of its weight to the number of currently uncovered elements it contains. A local search algorithm starts with an arbitrary feasible solution to the problem, and then checks if some small, local change to the solution results in an improved objective function. If so, the change is made. When no further change can be made, we have a locally optimal solution, and it is sometimes possible to prove that such locally optimal solutions have value close to that of the optimal solution. Unlike other approximation algorithm design techniques, the most straightforward implementation of a local search algorithm typically does not run in polynomial time. The algorithm usually requires some restriction to the local changes allowed in order to ensure that enough progress is made during each improving step so that a locally optimal solution is found in polynomial time. Thus, while both types of algorithm optimize local choices, greedy algorithms are typically primal infeasible algorithms: they construct a solution to the problem during the course of the algorithm. Local search algorithms are primal feasible algorithms: they always maintain a feasible solution to the problem and modify it during the course of the algorithm. 
Both greedy algorithms and local search algorithms are extremely popular choices for heuristics for NP-hard problems. They are typically easy to implement and have good running times in practice. In this chapter, we will consider greedy and local search algorithms for scheduling problems, clustering problems, and others, including the most famous problem in combinatorial optimization, the traveling salesman problem. Because greedy and local search algorithms are natural choices for heuristics, some of these algorithms were among the very first approximation algorithms devised; in particular, the greedy algorithm for the parallel machine scheduling problem in Section 2.3 and the greedy algorithm for edge coloring in Section 2.7 were both given and analyzed in the 1960s, before the concept of NP-completeness was invented.
Figure 2.1: An instance of a schedule for the one-machine scheduling problem in which p_1 = 2, r_1 = 0, p_2 = 1, r_2 = 2, p_3 = 4, r_3 = 1. In this schedule, C_1 = 2, C_2 = 3, and C_3 = 7. If the deadlines for the jobs are such that d_1 = −1, d_2 = 1, and d_3 = 10, then L_1 = 2 − (−1) = 3, L_2 = 3 − 1 = 2, and L_3 = 7 − 10 = −3, so that L_max = L_1 = 3.
2.1  Scheduling jobs with deadlines on a single machine
One of the most common types of problems in combinatorial optimization is that of creating a schedule. We are given some type of work that must be done, and some resources to do the work, and from this we must create a schedule to complete the work that optimizes some objective; perhaps we want to finish all the work as soon as possible, or perhaps we want to make sure that the average time at which we complete the various pieces of work is as small as possible. We will often consider the problem of scheduling jobs (the work) on machines (the resources).

We start this chapter by considering one of the simplest possible versions of this problem. Suppose that there are n jobs to be scheduled on a single machine, where the machine can process at most one job at a time, and must process a job until its completion once it has begun processing; suppose that each job j must be processed for a specified p_j units of time, where the processing of job j may begin no earlier than a specified release date r_j, j = 1, …, n. We assume that the schedule starts at time 0, and each release date is nonnegative. Furthermore, assume that each job j has a specified due date d_j, and if we complete its processing at time C_j, then its lateness L_j is equal to C_j − d_j; we are interested in scheduling the jobs so as to minimize the maximum lateness, L_max = max_{j=1,…,n} L_j. A sample instance of this problem is shown in Figure 2.1.

Unfortunately, this problem is NP-hard, and in fact, even deciding if there is a schedule for which L_max ≤ 0 (i.e., deciding if all jobs can be completed by their due date) is strongly NP-hard (the reader unfamiliar with strong NP-hardness can consult Appendix B). Of course, this is a problem that we often encounter in everyday life, and many of us schedule our lives with the following simple greedy heuristic: focus on the task with the earliest due date. We will show that in certain circumstances this is a provably good thing to do.
However, we first argue that as stated, this optimization problem is not particularly amenable to obtaining near-optimal solutions. If there were a ρ-approximation algorithm, then for any input with optimal value 0, the algorithm must still find a schedule of objective function value at most ρ · 0 = 0, and hence (given the NP-hardness result stated above) this would imply that P = NP. (There is the further complication of what to do if the objective function of the optimal solution is negative!) One easy workaround to this is to assume that all due dates are negative, which implies that the optimal value is always positive. We shall give a 2-approximation algorithm for this special case.

We first provide a good lower bound on the optimal value for this scheduling problem. Let S denote a subset of jobs, and let r(S) = min_{j∈S} rj, p(S) = Σ_{j∈S} pj, and d(S) = max_{j∈S} dj.

Electronic web edition. Copyright 2011 by David P. Williamson and David B. Shmoys. To be published by Cambridge University Press
Let L*_max denote the optimal value.

Lemma 2.1: For each subset S of jobs, L*_max ≥ r(S) + p(S) − d(S).

Proof. Consider the optimal schedule, and view this simply as a schedule for the jobs in the subset S. Let job j be the last job in S to be processed. Since none of the jobs in S can be processed before r(S), and in total they require p(S) time units of processing, it follows that job j cannot complete any earlier than time r(S) + p(S). The due date of job j is d(S) or earlier, and so the lateness of job j in this schedule is at least r(S) + p(S) − d(S); hence, L*_max ≥ r(S) + p(S) − d(S).

A job j is available at time t if its release date rj ≤ t. We consider the following natural algorithm: at each moment that the machine is idle, start processing next an available job with the earliest due date. This is known as the earliest due date (EDD) rule.

Theorem 2.2: The EDD rule is a 2-approximation algorithm for the problem of minimizing the maximum lateness on a single machine subject to release dates with negative due dates.

Proof. Consider the schedule produced by the EDD rule, and let job j be a job of maximum lateness in this schedule; that is, Lmax = Cj − dj. Focus on the time Cj in this schedule; find the earliest point in time t ≤ Cj such that the machine was processing without any idle time for the entire period [t, Cj). Several jobs may be processed in this time interval; we require only that the machine not be idle for some interval of positive length within it. Let S be the set of jobs that are processed in the interval [t, Cj). By our choice of t, we know that just prior to t, none of these jobs were available (and clearly at least one job in S is available at time t); hence, r(S) = t. Furthermore, since only jobs in S are processed throughout this time interval, p(S) = Cj − t = Cj − r(S). Thus, Cj ≤ r(S) + p(S); since d(S) < 0, we can apply Lemma 2.1 to get that

    L*_max ≥ r(S) + p(S) − d(S) ≥ r(S) + p(S) ≥ Cj.    (2.1)

On the other hand, by applying Lemma 2.1 with S = {j},

    L*_max ≥ rj + pj − dj ≥ −dj.    (2.2)

Adding inequalities (2.1) and (2.2), we see that the maximum lateness of the schedule computed is Lmax = Cj − dj ≤ 2 L*_max, which completes the proof of the theorem.
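The EDD rule can be sketched directly in code. The following is a sketch of our own (the function name and the heap-based bookkeeping are not from the text): jobs are admitted as they are released, and whenever the machine goes idle the available job with the earliest due date is run to completion.

```python
import heapq

def edd_max_lateness(jobs):
    """EDD rule for one machine with release dates. Each job is a tuple
    (r, p, d): release date, processing time, due date. Whenever the
    machine is idle, run the available job with the earliest due date.
    Returns the maximum lateness Lmax of the resulting schedule."""
    pending = sorted(jobs, key=lambda job: job[0])  # by release date
    available = []          # min-heap keyed on due date: (d, p)
    t, i = 0, 0
    L_max = float("-inf")
    n = len(pending)
    while i < n or available:
        # admit every job released by the current time
        while i < n and pending[i][0] <= t:
            r, p, d = pending[i]
            heapq.heappush(available, (d, p))
            i += 1
        if not available:           # machine idle: jump to next release
            t = pending[i][0]
            continue
        d, p = heapq.heappop(available)   # earliest due date first
        t += p                            # process to completion
        L_max = max(L_max, t - d)         # lateness Cj - dj
    return L_max
```

With all due dates negative, Theorem 2.2 guarantees the returned value is at most twice the optimal maximum lateness.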
2.2 The k-center problem
The problem of finding similarities and dissimilarities in large amounts of data is ubiquitous: companies wish to group customers with similar purchasing behavior, political consultants group precincts by their voting behavior, and search engines group webpages by their similarity of topic. Usually we speak of clustering data, and there has been extensive study of the problem of finding good clusterings. Here we consider a particular variant of clustering, the k-center problem. In this problem, we are given as input an undirected, complete graph G = (V, E), with a distance dij ≥ 0
Pick arbitrary i ∈ V
S ← {i}
while |S| < k do
    j ← arg max_{j∈V} d(j, S)
    S ← S ∪ {j}

Algorithm 2.1: A greedy 2-approximation algorithm for the k-center problem.
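In code, Algorithm 2.1 might look like the following sketch (the function names and the `radius` helper are ours, not part of the text); maintaining the array of distances to the nearest chosen center makes each iteration linear in the number of vertices.

```python
def greedy_k_center(dist, k):
    """Algorithm 2.1: start from an arbitrary center, then repeatedly add
    the vertex farthest from the current centers. `dist` is a symmetric
    distance matrix. Returns the chosen centers (a 2-approximation by
    Theorem 2.3)."""
    n = len(dist)
    centers = [0]                 # arbitrary first center
    d = dist[0][:]                # d[v] = distance from v to nearest center
    while len(centers) < k:
        j = max(range(n), key=lambda v: d[v])   # vertex attaining the radius
        centers.append(j)
        d = [min(d[v], dist[j][v]) for v in range(n)]
    return centers

def radius(dist, centers):
    """Maximum distance of any vertex to its nearest chosen center."""
    return max(min(dist[i][j] for j in centers) for i in range(len(dist)))
```

For four points 0, 1, 10, 11 on a line with k = 2, the sketch picks 0 and 11 and achieves the optimal radius 1.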
Figure 2.2: An instance of the k-center problem where k = 3 and the distances are given by the Euclidean distances between points. The execution of the greedy algorithm is shown; the nodes 1, 2, 3 are the nodes selected by the greedy algorithm, whereas the nodes 1*, 2*, 3* are the three nodes in an optimal solution.

between each pair of vertices i, j ∈ V. We assume dii = 0, dij = dji for each i, j ∈ V, and that the distances obey the triangle inequality: for each triple i, j, l ∈ V, it is the case that dij + djl ≥ dil. In this problem, distances model similarity: vertices that are closer to each other are more similar, whereas those farther apart are less similar. We are also given a positive integer k as input. The goal is to find k clusters, grouping together the vertices that are most similar into clusters together.

In this problem, we will choose a set S ⊆ V, |S| = k, of k cluster centers. Each vertex will assign itself to its closest cluster center, grouping the vertices into k different clusters. For the k-center problem, the objective is to minimize the maximum distance of a vertex to its cluster center. Geometrically speaking, the goal is to find the centers of k different balls of the same radius that cover all points so that the radius is as small as possible.

More formally, we define the distance of a vertex i from a set S ⊆ V of vertices to be d(i, S) = min_{j∈S} dij. Then the corresponding radius for S is equal to max_{i∈V} d(i, S), and the goal of the k-center problem is to find a set of size k of minimum radius. In later chapters we will consider other objective functions, such as minimizing the sum of distances of vertices to their cluster centers, that is, minimizing Σ_{i∈V} d(i, S). This is called the k-median problem, and we will consider it in Sections 7.7 and 9.2. We shall also consider another variant on clustering called correlation clustering in Section 6.4.

We give a greedy 2-approximation algorithm for the k-center problem that is simple and intuitive.
Our algorithm first picks a vertex i ∈ V arbitrarily, and puts it in our set S of cluster centers. Then it makes sense for the next cluster center to be as far away as possible from all the other cluster centers. Hence, while |S| < k, we repeatedly find a vertex j ∈ V that determines the current radius (or in other words, for which the distance d(j, S) is maximized) and add it to S. Once |S| = k, we stop and return S. Our algorithm is given in Algorithm 2.1. An execution of the algorithm is shown in Figure 2.2. We will now prove that the algorithm
is a good approximation algorithm.

Theorem 2.3: Algorithm 2.1 is a 2-approximation algorithm for the k-center problem.

Proof. Let S* = {j1, . . . , jk} denote the optimal solution, and let r* denote its radius. This solution partitions the nodes V into clusters V1, . . . , Vk, where each point j ∈ V is placed in Vi if it is closest to ji among all of the points in S* (and ties are broken arbitrarily). Each pair of points j and j′ in the same cluster Vi are at most 2r* apart: by the triangle inequality, the distance djj′ between them is at most the sum of djji, the distance from j to the center ji, plus djij′, the distance from the center ji to j′ (that is, djj′ ≤ djji + djij′); since djji and dj′ji are each at most r*, we see that djj′ is at most 2r*.

Now consider the set S ⊆ V of points selected by the greedy algorithm. If one center in S is selected from each cluster of the optimal solution S*, then every point in V is clearly within 2r* of some selected point in S. However, suppose that the algorithm selects two points within the same cluster. That is, in some iteration, the algorithm selects a point j ∈ Vi, even though the algorithm had already selected a point j′ ∈ Vi in an earlier iteration. Again, the distance between these two points is at most 2r*. The algorithm selects j in this iteration because it is currently the furthest from the points already in S. Hence, all points are within a distance of at most 2r* of some center already selected for S. Clearly, this remains true as the algorithm adds more centers in subsequent iterations, and we have proved the theorem.

The instance in Figure 2.2 shows that this analysis is tight. We shall argue next that this result is the best possible; if there exists a ρ-approximation algorithm with ρ < 2, then P = NP. To see this, we consider the dominating set problem, which is NP-complete.
In the dominating set problem, we are given a graph G = (V, E) and an integer k, and we must decide if there exists a set S ⊆ V of size k such that each vertex is either in S, or adjacent to a vertex in S. Given an instance of the dominating set problem, we can define an instance of the k-center problem by setting the distance between adjacent vertices to 1, and nonadjacent vertices to 2: there is a dominating set of size k if and only if the optimal radius for this k-center instance is 1. Furthermore, any ρ-approximation algorithm with ρ < 2 must always produce a solution of radius 1 if such a solution exists, since any solution of radius ρ < 2 must actually be of radius 1. This implies the following theorem.

Theorem 2.4: There is no α-approximation algorithm for the k-center problem for α < 2 unless P = NP.
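The reduction above is mechanical enough to write down directly; the following sketch (function name ours) builds the k-center distance matrix from a graph, with 1 for adjacent pairs and 2 for non-adjacent pairs, which is a metric since every triple sums correctly.

```python
def k_center_instance_from_graph(n, edges):
    """Reduction in the text: adjacent vertices get distance 1,
    non-adjacent vertices distance 2 (a metric). The graph has a
    dominating set of size k iff the optimal k-center radius of this
    instance is 1."""
    adj = {(u, v) for u, v in edges} | {(v, u) for u, v in edges}
    return [[0 if i == j else (1 if (i, j) in adj else 2)
             for j in range(n)] for i in range(n)]
```

For a star on four vertices centered at 0, the center {0} alone has radius 1, matching the dominating set {0} of size 1.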
2.3 Scheduling jobs on identical parallel machines
In Section 2.1, we considered the problem of scheduling jobs on a single machine to minimize lateness. Here we consider a variation on that problem, in which we now have multiple machines and no release dates, but our goal is to minimize the time at which all jobs are finished.

Suppose that there are n jobs to be processed, and there are m identical machines (running in parallel) to which each job may be assigned. Each job j = 1, . . . , n, must be processed on one of these machines for pj time units without interruption, and each job is available for processing at time 0. Each machine can process at most one job at a time. The aim is to complete all jobs as soon as possible; that is, if job j completes at a time Cj (presuming that the schedule starts at time 0), then we wish to minimize Cmax = max_{j=1,...,n} Cj, which is often called the makespan or length of the schedule. An equivalent view of the same problem is as a load-balancing problem: there are n items, each of a given weight pj, and they are to be distributed among m machines;
Figure 2.3: An example of a local move in the local search algorithm for scheduling jobs on parallel machines. The gray job on machine 2 finishes last in the schedule on the top, but the schedule can be improved by moving the gray job to machine 4. No further local moves are possible after this one since again the gray job finishes last.

the aim is to assign each item to one machine so as to minimize the maximum total weight assigned to one machine.

This scheduling problem has the property that even the simplest algorithms compute reasonably good solutions. In particular, we will show that both a local search algorithm and a very simple greedy algorithm find solutions that have makespan within a factor of 2 of the optimum. In fact, the analyses of these two algorithms are essentially identical.

Local search algorithms are defined by a set of local changes or local moves that change one feasible solution to another. The simplest local search procedure for this scheduling problem works as follows: Start with any schedule; consider the job ℓ that finishes last; check whether or not there exists a machine to which it can be reassigned that would cause this job to finish earlier. If so, transfer job ℓ to this other machine. We can determine whether to transfer job ℓ by checking if there exists a machine that finishes its currently assigned jobs earlier than Cℓ − pℓ. The local search algorithm repeats this procedure until the last job to complete cannot be transferred. An illustration of this local move is shown in Figure 2.3.

In order to analyze the performance of this local search algorithm, we first provide some natural lower bounds on the length of an optimal schedule, C*_max. Since each job must be processed, it follows that

    C*_max ≥ max_{j=1,...,n} pj.    (2.3)

On the other hand, there is, in total, P = Σ_{j=1}^n pj units of processing to accomplish, and only m machines to do this work. Hence, on average, a machine will be assigned P/m units of work, and consequently, there must exist one machine that is assigned at least that much work. Thus,

    C*_max ≥ Σ_{j=1}^n pj / m.    (2.4)
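The local move just described can be sketched as follows (a sketch with names of our own choosing; we check one job on the machine finishing last, which suffices for the guarantee, since at termination every machine is then busy until that job's start time).

```python
def local_search_makespan(p, m):
    """Local search for makespan on m identical machines: repeatedly take
    a job on the machine that finishes last and, if the machine finishing
    first can start it earlier, move it there. Assumes positive processing
    times. Returns (makespan, assignment)."""
    n = len(p)
    assign = [j % m for j in range(n)]       # arbitrary starting schedule
    loads = [0] * m
    for j in range(n):
        loads[assign[j]] += p[j]
    while True:
        i_max = max(range(m), key=lambda i: loads[i])   # finishes last
        j = next(j for j in range(n) if assign[j] == i_max)
        i_min = min(range(m), key=lambda i: loads[i])   # finishes first
        if loads[i_min] < loads[i_max] - p[j]:          # j can start earlier
            assign[j] = i_min
            loads[i_max] -= p[j]
            loads[i_min] += p[j]
        else:
            return max(loads), assign
```

Each successful move strictly decreases the sorted load vector lexicographically, so the loop terminates.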
Consider the solution produced by the local search algorithm. Let ℓ be a job that completes last in this final schedule; the completion time of job ℓ, Cℓ, is equal to this solution's objective function value. By the fact that the algorithm terminated with this schedule, every other machine must be busy from time 0 until the start of job ℓ at time Sℓ = Cℓ − pℓ. We can partition the schedule into two disjoint time intervals, from time 0 until Sℓ, and the time during which job ℓ is being processed. By (2.3), the latter interval has length at most C*_max. Now consider the former time interval; we know that each machine is busy processing jobs throughout this period. The total amount of work being processed in this interval is mSℓ, which is clearly no more than the total work to be done, Σ_{j=1}^n pj. Hence,

    Sℓ ≤ Σ_{j=1}^n pj / m.    (2.5)

By combining this with (2.4), we see that Sℓ ≤ C*_max. But now, we see that the length of the schedule before the start of job ℓ is at most C*_max, as is the length of the schedule afterward; in total, the makespan of the schedule computed is at most 2 C*_max.

Now consider the running time of this algorithm. This local search procedure has the property that the value of Cmax for the sequence of schedules produced, iteration by iteration, never increases (it can remain the same, but then the number of machines that achieve the maximum value decreases). One natural assumption to make is that when we transfer a job to another machine, then we reassign that job to the machine that is currently finishing earliest. We will analyze the running time of this variant instead. Let Cmin be the completion time of a machine that completes all its processing the earliest. One consequence of focusing on the new variant is that Cmin never decreases (and if it remains the same, then the number of machines that achieve this minimum value decreases). We argue next that this implies that we never transfer a job twice. Suppose this claim is not true, and consider the first time that a job j is transferred twice, say, from machine i to i′, and then later to i*. When job j is reassigned from machine i to machine i′, it then starts at Cmin for the current schedule. Similarly, when job j is reassigned from machine i′ to i*, it then starts at the current C′min. Furthermore, no change occurred to the schedule on machine i′ in between these two moves for job j. Hence, C′min must be strictly smaller than Cmin (in order for the transfer to be an improving move), but this contradicts our claim that the Cmin value is nondecreasing over the iterations of the local search algorithm. Hence, no job is transferred twice, and after at most n iterations, the algorithm must terminate. We have thus shown the following theorem.
Theorem 2.5: The local search procedure for scheduling jobs on identical parallel machines is a 2-approximation algorithm.

In fact, it is not hard to see that the analysis of the approximation ratio can be refined slightly. In deriving the inequality (2.5), we included job ℓ among the work to be done prior to the start of job ℓ. Hence, we actually derived that

    Sℓ ≤ Σ_{j≠ℓ} pj / m,

and hence the total length of the schedule produced is at most

    pℓ + Σ_{j≠ℓ} pj / m = (1 − 1/m) pℓ + Σ_{j=1}^n pj / m.
By applying the two lower bounds (2.3) and (2.4) to these two terms, we see that the schedule has length at most (2 − 1/m) C*_max. Of course, the difference between this bound and 2 is significant only if there are very few machines.

Another natural algorithm to compute a schedule is a greedy algorithm that assigns the jobs as soon as there is machine availability to process them: whenever a machine becomes idle, then one of the remaining jobs is assigned to start processing on that machine. This algorithm is often called the list scheduling algorithm, since one can equivalently view the algorithm as first ordering the jobs in a list (arbitrarily), and the next job to be processed is the one at the top of the list. Another viewpoint, from the load-balancing perspective, is that the next job on the list is assigned to the machine that is currently the least heavily loaded. It is in this sense that one can view the algorithm as a greedy algorithm. The analysis of this algorithm is now quite trivial; if one uses this schedule as the starting point for the local search procedure, that algorithm would immediately declare that the solution cannot be improved! To see this, consider a job ℓ that is (one of the jobs) last to complete its processing. Each machine is busy until Cℓ − pℓ, since otherwise we would have assigned job ℓ to that other machine. Hence, no transfers are possible.

Theorem 2.6: The list scheduling algorithm for the problem of minimizing the makespan on m identical parallel machines is a 2-approximation algorithm.

It is not hard to obtain a stronger result by improving this list scheduling algorithm. Not all lists yield the same schedule, and it is natural to use an additional greedy rule that first sorts the jobs in nonincreasing order. One way to view the results of Theorems 2.5 and 2.6 is that the relative error in the length of the schedule produced is entirely due to the length of the last job to finish. If that job is short, then the error is not too big.
This greedy algorithm is called the longest processing time rule, or LPT.

Theorem 2.7: The longest processing time rule is a 4/3-approximation algorithm for scheduling jobs to minimize the makespan on identical parallel machines.

Proof. Suppose that the theorem is false, and consider an input that provides a counterexample to the theorem. For ease of notation, assume that p1 ≥ · · · ≥ pn. First, we can assume that the last job to complete is indeed the last (and smallest) job in the list. This follows without loss of generality: any counterexample for which the last job ℓ to complete is not the smallest can yield a smaller counterexample, simply by omitting all of the jobs ℓ + 1, . . . , n; the length of the schedule produced is the same, and the optimal value of the reduced input can be no larger. Hence the reduced input is also a counterexample.

So we know that the last job to complete in the schedule is job n. If this is a counterexample, what do we know about pn (= pℓ)? If pℓ ≤ C*_max/3, then the analysis of Theorem 2.6 implies that the schedule length is at most (4/3) C*_max, and so this is not a counterexample. Hence, we know that in this purported counterexample, job n (and therefore all of the jobs) has a processing requirement strictly greater than C*_max/3. This has the following simple corollary. In the optimal schedule, each machine may process at most two jobs (since otherwise the total processing assigned to that machine is more than C*_max).

However, we have now reduced our assumed counterexample to the point where it simply cannot exist. For inputs of this structure, we have the following lemma.

Lemma 2.8: For any input to the problem of minimizing the makespan on identical parallel machines for which the processing requirement of each job is more than one-third the optimal makespan, the longest processing time rule computes an optimal schedule.
This lemma can be proved by some careful case checking, and we defer this to an exercise (Exercise 2.2). However, the consequence of the lemma is clear; no counterexample to the theorem can exist, and hence the theorem must be true.

In Section 3.2, we will see that it is possible to give a polynomial-time approximation scheme for this problem.
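Both greedy rules from this section can be sketched compactly (the function names are ours): list scheduling is the algorithm of Theorem 2.6, and sorting the list in nonincreasing order first gives the LPT rule of Theorem 2.7.

```python
import heapq

def list_schedule(p, m):
    """Graham's list scheduling: take the jobs in the given (arbitrary)
    order and always assign the next job to the machine that is currently
    least heavily loaded. Returns the makespan (Theorem 2.6: at most
    twice optimal)."""
    heap = [(0, i) for i in range(m)]     # (current load, machine id)
    for pj in p:
        load, i = heapq.heappop(heap)     # least heavily loaded machine
        heapq.heappush(heap, (load + pj, i))
    return max(load for load, _ in heap)

def lpt_makespan(p, m):
    """Longest processing time rule: sort the jobs in nonincreasing order
    of processing time, then list-schedule (Theorem 2.7: at most 4/3
    times optimal)."""
    return list_schedule(sorted(p, reverse=True), m)
```

On the jobs (2, 3, 4, 5) with two machines, an arbitrary order gives makespan 8, while LPT achieves the optimal makespan 7.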
2.4 The traveling salesman problem
In the traveling salesman problem, or TSP, there is a given set of cities {1, 2, . . . , n}, and the input consists of a symmetric n by n matrix C = (cij) that specifies the cost of traveling from city i to city j. By convention, we assume that the cost of traveling from any city to itself is equal to 0, and costs are nonnegative; the fact that the matrix is symmetric means that the cost of traveling from city i to city j is equal to the cost of traveling from j to i. (The asymmetric traveling salesman problem, where the restriction that the cost matrix be symmetric is relaxed, has already made an appearance in Exercise 1.3.) If we instead view the input as an undirected complete graph with a cost associated with each edge, then a feasible solution, or tour, consists of a Hamiltonian cycle in this graph; that is, we specify a cyclic permutation of the cities or, equivalently, a traversal of the cities in the order k(1), k(2), . . . , k(n), where each city j is listed as a unique image k(i). The cost of the tour is equal to

    c_{k(n)k(1)} + Σ_{i=1}^{n−1} c_{k(i)k(i+1)}.
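The tour-cost expression above amounts to summing costs along the cyclic order; a one-line sketch (function name ours):

```python
def tour_cost(c, k):
    """Cost of the tour visiting cities in the order k[0], k[1], ...,
    k[n-1] and returning to k[0], where c is the symmetric cost matrix:
    sum of c[k(i)][k(i+1)] plus the closing edge c[k(n)][k(1)]."""
    n = len(k)
    return sum(c[k[i]][k[(i + 1) % n]] for i in range(n))
```

Note the cost is unchanged under rotating the order, which is exactly the observation that follows.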
Observe that each tour has n distinct representations, since it does not matter which city is selected as the one in which the tour starts.

The traveling salesman problem is one of the most well-studied combinatorial optimization problems, and this is certainly true from the point of view of approximation algorithms as well. There are severe limits on our ability to compute near-optimal tours, and we start with a discussion of these results. It is NP-complete to decide whether a given undirected graph G = (V, E) has a Hamiltonian cycle. An approximation algorithm for the TSP can be used to solve the Hamiltonian cycle problem in the following way: Given a graph G = (V, E), form an input to the TSP by setting, for each pair i, j, the cost cij equal to 1 if (i, j) ∈ E, and equal to n + 2 otherwise. If there is a Hamiltonian cycle in G, then there is a tour of cost n, and otherwise each tour costs at least 2n + 1. If there were to exist a 2-approximation algorithm for the TSP, then we could use this algorithm to distinguish graphs with Hamiltonian cycles from those without any: run the approximation algorithm on the new TSP input, and if the tour computed has cost at most 2n, then there exists a Hamiltonian cycle in G, and otherwise there does not. Of course, there is nothing special about setting the cost for the “nonedges” to be n + 2; setting the cost to be αn + 2 has a similarly inflated consequence, and we obtain an input to the TSP of polynomial size provided that, for example, α = O(2^n). As a result, we obtain the following theorem.

Theorem 2.9: For any α > 1, there does not exist an α-approximation algorithm for the traveling salesman problem on n cities, provided P ≠ NP. In fact, the existence of an O(2^n)-approximation algorithm for the TSP would similarly imply that P = NP.

But is this the end of the story? Clearly not.
A natural assumption to make about the input to the TSP is to restrict attention to those inputs that are metric; that is, for each triple i, j, k ∈ V,
Figure 2.4: Illustration of a greedy step of the nearest addition algorithm.

we have that the triangle inequality cik ≤ cij + cjk holds. This assumption rules out the construction used in the reduction for the Hamiltonian cycle problem above; the nonedges can be given cost at most 2 if we want the triangle inequality to hold, and this value is too small to yield a nontrivial nonapproximability result. We next give three approximation algorithms for this metric traveling salesman problem.

Here is a natural greedy heuristic to consider for the traveling salesman problem; this is often referred to as the nearest addition algorithm. Find the two closest cities, say, i and j, and start by building a tour on that pair of cities; the tour consists of going from i to j and then back to i again. This is the first iteration. In each subsequent iteration, we extend the tour on the current subset S by including one additional city, until we include the full set of cities. In each iteration, we find a pair of cities i ∈ S and j ∉ S for which the cost cij is minimum; let k be the city that follows i in the current tour on S. We add j to S, and insert j into the current tour between i and k. An illustration of this algorithm is shown in Figure 2.4.

The crux of the analysis of this algorithm is the relationship of this algorithm to Prim's algorithm for the minimum spanning tree in an undirected graph. A spanning tree of a connected graph G = (V, E) is a minimal subset of edges F ⊆ E such that each pair of nodes in G is connected by a path using edges only in F. A minimum spanning tree is a spanning tree for which the total edge cost is minimized. Prim's algorithm computes a minimum spanning tree by iteratively constructing a set S along with a tree T, starting with S = {v} for some (arbitrarily chosen) node v ∈ V and T = (S, F) with F = ∅. In each iteration, it determines the edge (i, j) of minimum cost such that i ∈ S and j ∉ S, and adds the edge (i, j) to F.
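Both procedures select a minimum-cost pair crossing from the current set to its complement in every iteration; a direct sketch of nearest addition (function name ours, assuming a symmetric cost matrix c):

```python
def nearest_addition_tour(c):
    """Nearest addition heuristic: begin with the two closest cities, then
    repeatedly take the pair (i in tour, j outside) of minimum cost
    c[i][j] and insert j into the tour right after i. A 2-approximation
    on metric inputs (Theorem 2.11)."""
    n = len(c)
    i, j = min(((a, b) for a in range(n) for b in range(n) if a != b),
               key=lambda e: c[e[0]][e[1]])      # two closest cities
    tour = [i, j]
    while len(tour) < n:
        i, j = min(((a, b) for a in tour for b in range(n) if b not in tour),
                   key=lambda e: c[e[0]][e[1]])
        tour.insert(tour.index(i) + 1, j)        # between i and its successor
    return tour
```

On four equally spaced points on a line, the sketch grows the tour outward and returns the cities in order.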
Clearly, this is the same sequence of vertex pairs identified by the nearest addition algorithm. Furthermore, there is another important relationship between the minimum spanning tree problem and the traveling salesman problem.

Lemma 2.10: For any input to the traveling salesman problem, the cost of the optimal tour is at least the cost of the minimum spanning tree on the same input.
Proof. The proof is quite simple. For any input with n ≥ 2, start with the optimal tour. Delete any one edge from the tour. The result is a spanning tree (albeit a very special one), and this costs no more than the optimal tour. But the minimum spanning tree must cost no more than this special tree. Hence, the cost of the minimum spanning tree is at most the cost of the optimal tour.

By combining these observations, we can obtain the following result with just a bit more work.

Theorem 2.11: The nearest addition algorithm for the metric traveling salesman problem is a 2-approximation algorithm.

Proof. Let S2, S3, . . . , Sn = {1, . . . , n} be the subsets identified at the end of each iteration of the nearest addition algorithm (where |Sℓ| = ℓ), and let F = {(i2, j2), (i3, j3), . . . , (in, jn)}, where (iℓ, jℓ) is the edge identified in iteration ℓ − 1 (with iℓ ∈ Sℓ−1, ℓ = 3, . . . , n). As indicated above, we also know that ({1, . . . , n}, F) is a minimum spanning tree for the original input, when viewed as a complete undirected graph with edge costs. Thus, if OPT is the optimal value for the TSP input, then

    OPT ≥ Σ_{ℓ=2}^n c_{iℓ jℓ}.
The cost of the tour on the first two nodes i2 and j2 is exactly 2 c_{i2 j2}. Consider an iteration in which a city j is inserted between cities i and k in the current tour. How much does the length of the tour increase? An easy calculation gives cij + cjk − cik. By the triangle inequality, we have that cjk ≤ cji + cik or, equivalently, cjk − cik ≤ cji. Hence, the increase in cost in this iteration is at most cij + cji = 2 cij. Thus, overall, we know that the final tour has cost at most

    2 Σ_{ℓ=2}^n c_{iℓ jℓ} ≤ 2 OPT,
and the theorem is proved.

In fact, this algorithm can be viewed from another perspective. Although this new perspective deviates from viewing the algorithm as a “greedy” procedure, this approach ultimately leads to a better algorithm. First we need some graph-theoretic preliminaries. A graph is said to be Eulerian if there exists a permutation of its edges of the form (i0, i1), (i1, i2), . . . , (ik−1, ik), (ik, i0); we will call this permutation a traversal of the edges, since it allows us to visit every edge exactly once. A graph is Eulerian if and only if it is connected and each node has even degree (where the degree of a node v is the number of edges with v as one of its endpoints). Furthermore, if a graph is Eulerian, one can easily construct the required traversal of the edges, given the graph.

To find a good tour for a TSP input, suppose that we first compute a minimum spanning tree (for example, by Prim's algorithm). Suppose that we then replace each edge by two copies of itself. The resulting (multi)graph has cost at most 2 OPT and is Eulerian. We can construct a tour of the cities from the Eulerian traversal of the edges, (i0, i1), (i1, i2), . . . , (ik−1, ik), (ik, i0). Consider the sequence of nodes, i0, i1, . . . , ik, and remove all but the first occurrence of each city in this sequence. This yields a tour containing each city exactly once (assuming we then return to i0 at the end). To bound the length of this tour, consider two consecutive cities in this tour, iℓ and im. We have omitted iℓ+1, . . . , im−1 because these cities have already been visited “earlier” in the tour. However, by the triangle inequality, the cost of the edge c_{iℓ,im} can be upper bounded by the total cost of the edges traversed in the Eulerian traversal between
iℓ and im, that is, the total cost of the edges (iℓ, iℓ+1), . . . , (im−1, im). In total, the cost of the tour is at most the total cost of all of the edges in the Eulerian graph, which is at most 2 OPT. Hence, we have also analyzed this double-tree algorithm.

Theorem 2.12: The double-tree algorithm for the metric traveling salesman problem is a 2-approximation algorithm.

This technique of “skipping over” previously visited cities and bounding the cost of the resulting tour in terms of the total cost of all the edges is sometimes called shortcutting. The bigger message of the analysis of the double-tree algorithm is also quite useful; if we can efficiently construct an Eulerian subgraph of the complete input graph, for which the total edge cost is at most α times the optimal value of the TSP input, then we have derived an α-approximation algorithm as well. This strategy can be carried out to yield a 3/2-approximation algorithm.

Consider the output from the minimum spanning tree computation. This graph is certainly not Eulerian, since any tree must have nodes of degree one, but it is possible that not many nodes have odd degree. Let O be the set of odd-degree nodes in the minimum spanning tree. For any graph, the sum of its node degrees must be even, since each edge in the graph contributes 2 to this total. The total degree of the even-degree nodes must also be even (since we are adding a collection of even numbers), but then the total degree of the odd-degree nodes must also be even. In other words, we must have an even number of odd-degree nodes; |O| = 2k for some positive integer k. Suppose that we pair up the nodes in O: (i1, i2), (i3, i4), . . . , (i2k−1, i2k). Such a collection of edges that contain each node in O exactly once is called a perfect matching of O.
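As a small illustration (function names ours), the following sketch finds the odd-degree set O of a spanning tree and computes a minimum-cost perfect matching on it by exhaustive bitmask dynamic programming; this brute force is only workable for tiny sets, in contrast to the polynomial-time matching algorithm invoked next.

```python
from functools import lru_cache

def odd_degree_nodes(n, tree_edges):
    """The set O of odd-degree nodes of a spanning tree on n nodes;
    as argued above, |O| is always even."""
    deg = [0] * n
    for u, v in tree_edges:
        deg[u] += 1
        deg[v] += 1
    return [v for v in range(n) if deg[v] % 2 == 1]

def min_cost_perfect_matching(nodes, c):
    """Minimum-cost perfect matching on the even-sized set `nodes`, by
    dynamic programming over subsets (exponential; for tiny instances
    only). c is the cost matrix over the original vertices."""
    k = len(nodes)

    @lru_cache(maxsize=None)
    def solve(mask):                     # mask: nodes already matched
        if mask == (1 << k) - 1:
            return 0
        i = next(b for b in range(k) if not mask & (1 << b))
        return min(solve(mask | 1 << i | 1 << j) + c[nodes[i]][nodes[j]]
                   for j in range(i + 1, k) if not mask & (1 << j))

    return solve(0)
```

For the path tree 0-1, 1-2, 1-3 on four line points, every node has odd degree and the cheapest pairing matches neighbors at total cost 2.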
One of the classic results of combinatorial optimization is that given a complete graph (on an even number of nodes) with edge costs, it is possible to compute the perfect matching of minimum total cost in polynomial time. Given the minimum spanning tree, we identify the set O of odddegree nodes with even cardinality, and then compute a minimumcost perfect matching on O. If we add this set of edges to our minimum spanning tree, we have constructed an Eulerian graph on our original set of cities: it is connected (since the spanning tree is connected) and has even degree (since we added a new edge incident to each node of odd degree in the spanning tree). As in the doubletree algorithm, we can shortcut this graph to produce a tour of no greater cost. This algorithm is known as Christoﬁdes’ algorithm. Theorem 2.13: Christoﬁdes’ algorithm for the metric traveling salesman problem is a 3/2approximation algorithm. Proof. We want to show that the edges in the Eulerian graph produced by the algorithm have total cost at most 32 OPT. We know that the minimum spanning tree edges have total cost at most OPT. So we need only show that the perfect matching on O has cost at most OPT /2. This is surprisingly simple. First observe that there is a tour on just the nodes in O of total cost at most OPT. This again uses the shortcutting argument. Start with the optimal tour on the entire set of cities, and if for two cities i and j, the optimal tour between i and j contains only cities that are not in O, then include edge (i, j) in the tour on O. Each edge in the tour corresponds to disjoint paths in the original tour, and hence by the triangle inequality, the total length of the tour on O is no more than the length of the original tour. Now consider this “shortcut” tour on the node set O. Color these edges red and blue, alternating colors as the tour is traversed. 
This partitions the edges into two sets, the red set and the blue set; each of these is a perfect matching on the node set O. In total, these two Electronic web edition. Copyright 2011 by David P. Williamson and David B. Shmoys. To be published by Cambridge University Press
2.5
Maximizing ﬂoat in bank accounts
47
S←∅ while S < k do i ← arg maxi∈B v(S ∪ {i}) − v(S) S ← S ∪ {i} return S Algorithm 2.2: A greedy approximation algorithm for the ﬂoat maximization problem. edge sets have cost at most OPT. Thus, the cheaper of these two sets has cost at most OPT /2. Hence, there is a perfect matching on O of cost at most OPT /2. Therefore, the algorithm to ﬁnd the minimumcost perfect matching must ﬁnd a matching of cost at most OPT /2, and this completes the proof of the theorem. Remarkably, no better approximation algorithm for the metric traveling salesman problem is known. However, substantially better algorithms might yet be found, since the strongest negative result is as follows. Theorem 2.14: Unless P = NP, for any constant α < algorithm for the metric TSP exists.
220 219
≈ 1.0045, no αapproximation
We can give better approximation algorithms for the problem in special cases. In Section 10.1, we will see that it is possible to obtain a polynomial-time approximation scheme in the case that cities correspond to points in the Euclidean plane and the cost of traveling between two cities is equal to the Euclidean distance between the corresponding two points.
2.5 Maximizing float in bank accounts
In the days before quick electronic check clearing, it was often advantageous for large corporations to maintain checking accounts in various locations in order to maximize float. The float is the time between making a payment by check and the time that the funds for that payment are deducted from the company's banking account. During that time, the company can continue to accrue interest on the money. Float can also be used by scam artists for check kiting: covering a deficit in the checking account in one bank by writing a check against another account in another bank that also has insufficient funds, then a few days later covering this deficit with a check written against the first account.

We can model the problem of maximizing float as follows. Suppose we wish to open up to k bank accounts so as to maximize our float. Let B be the set of banks where we can potentially open accounts, and let P be the set of payees to whom we regularly make payments. Let vij ≥ 0 be the value of the float created by paying payee j ∈ P from bank account i ∈ B; this may take into account the amount of time it takes for a check written to j to clear at i, the interest rate at bank i, and other factors. Then we wish to find a set S ⊆ B of banks at which to open accounts such that |S| ≤ k. Clearly we will pay payee j ∈ P from the account i ∈ S that maximizes vij. So we wish to find S ⊆ B, |S| ≤ k, that maximizes Σ_{j∈P} max_{i∈S} vij. We define v(S) to be the value of this objective function for S ⊆ B.

A natural greedy algorithm is as follows: we start with S = ∅, and while |S| < k, find the bank i ∈ B that most increases the objective function, and add it to S. This algorithm is summarized in Algorithm 2.2. We will show that this algorithm has a performance guarantee of 1 − 1/e. To do this, we require the following lemma. We let O denote an optimal solution, so that O ⊆ B and |O| ≤ k.
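A direct implementation of Algorithm 2.2 is short. The sketch below (identifiers such as `greedy_float` are ours) stores the values vij as a matrix `value[i][j]` and assumes k is at most the number of banks.

```python
def v(value, S):
    """Objective: v(S) = sum over payees j of max_{i in S} value[i][j]; v(empty) = 0."""
    if not S:
        return 0
    return sum(max(value[i][j] for i in S) for j in range(len(value[0])))

def greedy_float(value, k):
    """Algorithm 2.2: repeatedly add the bank with the largest marginal gain."""
    S = []
    for _ in range(k):
        gain = lambda b: v(value, S + [b]) - v(value, S)
        i = max((b for b in range(len(value)) if b not in S), key=gain)
        S.append(i)
    return S
```

In general the returned set is only guaranteed to achieve a (1 − 1/e) fraction of the optimal value, as Theorem 2.16 below shows.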
Lemma 2.15: Let S be the set of banks at the start of some iteration of Algorithm 2.2, and let i ∈ B be the bank chosen in the iteration. Then

v(S ∪ {i}) − v(S) ≥ (1/k) (v(O) − v(S)).
To get some intuition for why this is true, consider the optimal solution O. We can allocate shares of the value of the objective function v(O) to each bank i ∈ O: the value vij for each j ∈ P can be allocated to a bank i ∈ O that attains the maximum of max_{i∈O} vij. Since |O| ≤ k, some bank i ∈ O is allocated at least v(O)/k. So after choosing the first bank i to add to S, we have v({i}) ≥ v(O)/k. Intuitively speaking, there is also another bank i′ ∈ O that is allocated at least a 1/k fraction of whatever wasn't allocated to the first bank, so that there is an i′ such that v(S ∪ {i′}) − v(S) ≥ (1/k)(v(O) − v(S)), and so on.

Given the lemma, we can prove the performance guarantee of the algorithm.

Theorem 2.16: Algorithm 2.2 gives a (1 − 1/e)-approximation algorithm for the float maximization problem.

Proof. Let S^t be our greedy solution after t iterations of the algorithm, so that S^0 = ∅ and S = S^k. Let O be an optimal solution. We set v(∅) = 0. Note that Lemma 2.15 implies that v(S^t) ≥ (1/k) v(O) + (1 − 1/k) v(S^{t−1}). By applying this inequality repeatedly, we have

v(S) = v(S^k)
     ≥ (1/k) v(O) + (1 − 1/k) v(S^{k−1})
     ≥ (1/k) v(O) + (1 − 1/k) [ (1/k) v(O) + (1 − 1/k) v(S^{k−2}) ]
     ≥ (1/k) v(O) [ 1 + (1 − 1/k) + (1 − 1/k)^2 + · · · + (1 − 1/k)^{k−1} ]
     = (1/k) v(O) · (1 − (1 − 1/k)^k) / (1 − (1 − 1/k))
     = v(O) (1 − (1 − 1/k)^k)
     ≥ v(O) (1 − 1/e),

where in the final inequality we use the fact that 1 − x ≤ e^{−x}, setting x = 1/k.

To prove Lemma 2.15, we first prove the following.

Lemma 2.17: For the objective function v, for any X ⊆ Y and any ℓ ∉ Y,

v(Y ∪ {ℓ}) − v(Y) ≤ v(X ∪ {ℓ}) − v(X).

Proof. Consider any payee j ∈ P. Either j is paid from the same bank account in both X ∪ {ℓ} and X, or it is paid by ℓ from X ∪ {ℓ} and some other bank in X. Consequently,

v(X ∪ {ℓ}) − v(X) = Σ_{j∈P} ( max_{i∈X∪{ℓ}} vij − max_{i∈X} vij ) = Σ_{j∈P} max{ 0, vℓj − max_{i∈X} vij }.   (2.6)

Similarly,

v(Y ∪ {ℓ}) − v(Y) = Σ_{j∈P} max{ 0, vℓj − max_{i∈Y} vij }.   (2.7)

Now since X ⊆ Y, for a given j ∈ P we have max_{i∈Y} vij ≥ max_{i∈X} vij, so that

max{ 0, vℓj − max_{i∈Y} vij } ≤ max{ 0, vℓj − max_{i∈X} vij }.
By summing this inequality over all j ∈ P and using the equalities (2.6) and (2.7), we obtain the desired result.

The property of the value function v that we have just proved is one that plays a central role in a number of algorithmic settings, and is often called submodularity, though the usual definition of this property is somewhat different (see Exercise 2.10). This definition captures the intuitive property of decreasing marginal benefits: as the set includes more elements, the marginal value of adding a new element decreases.

Finally, we prove Lemma 2.15.

Proof of Lemma 2.15. Let O − S = {i1, . . . , ip}. Note that since |O − S| ≤ |O| ≤ k, we have p ≤ k. Since adding more bank accounts can only increase the overall value of the solution, we have v(O) ≤ v(O ∪ S), and a simple rewriting gives

v(O ∪ S) = v(S) + Σ_{j=1}^{p} [ v(S ∪ {i1, . . . , ij}) − v(S ∪ {i1, . . . , ij−1}) ].

By applying Lemma 2.17, we can upper bound the right-hand side by

v(S) + Σ_{j=1}^{p} [ v(S ∪ {ij}) − v(S) ].

Since the algorithm chooses i ∈ B to maximize v(S ∪ {i}) − v(S), we have that for any j, v(S ∪ {i}) − v(S) ≥ v(S ∪ {ij}) − v(S). We can use this bound to see that

v(O) ≤ v(O ∪ S) ≤ v(S) + p [ v(S ∪ {i}) − v(S) ] ≤ v(S) + k [ v(S ∪ {i}) − v(S) ].

This inequality can be rewritten to yield the inequality of the lemma, and this completes the proof.

This greedy approximation algorithm and its analysis can be extended to similar problems in which the objective function v(S) is given by a set of items S, and is monotone and submodular. We leave the definition of these terms and the proofs of the extensions to Exercise 2.10.
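The submodularity property of Lemma 2.17 can be spot-checked exhaustively on small instances. The following sketch (helper names ours) verifies v(Y ∪ {ℓ}) − v(Y) ≤ v(X ∪ {ℓ}) − v(X) for every X ⊆ Y and every ℓ ∉ Y.

```python
from itertools import combinations

def v(value, S):
    """v(S) = sum over payees j of max_{i in S} value[i][j]; v(empty) = 0."""
    if not S:
        return 0
    return sum(max(value[i][j] for i in S) for j in range(len(value[0])))

def is_submodular(value):
    """Brute-force check of Lemma 2.17 over all X subset of Y and l not in Y."""
    B = range(len(value))
    subsets = [c for r in range(len(value) + 1) for c in combinations(B, r)]
    for Y in subsets:
        for l in B:
            if l in Y:
                continue
            for r in range(len(Y) + 1):
                for X in combinations(Y, r):
                    if v(value, Y + (l,)) - v(value, Y) > v(value, X + (l,)) - v(value, X):
                        return False
    return True
```

Any value matrix passes this check, since the float objective is always submodular; the checker simply confirms the lemma on concrete data.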
2.6 Finding minimum-degree spanning trees
We now turn to a local search algorithm for the problem of minimizing the maximum degree of a spanning tree. The problem we consider is the following: given a graph G = (V, E), we wish to find a spanning tree T of G so as to minimize the maximum degree of nodes in T. We will call this the minimum-degree spanning tree problem. This problem is NP-hard. A special type of spanning tree of a graph is a path that visits all nodes of the graph; this is called a Hamiltonian path. A spanning tree has maximum degree two if and only if it is a Hamiltonian path. Furthermore, deciding if a graph G has a Hamiltonian path is NP-complete. Thus, we have the following theorem.

Theorem 2.18: It is NP-complete to decide whether or not a given graph has a minimum-degree spanning tree of maximum degree two.

For a given graph G, let T* be the spanning tree that minimizes the maximum degree, and let OPT be the maximum degree of T*. We will give a polynomial-time local search algorithm that finds a tree T with maximum degree at most 2 OPT + ⌈log2 n⌉, where n = |V| is the number of vertices in the graph. To simplify notation, throughout this section we will let ℓ = ⌈log2 n⌉.

The local search algorithm starts with an arbitrary spanning tree T. We will give a local move to change T into another spanning tree in which the degree of some vertex has been reduced. Let dT(u) be the degree of u in T. The local move picks a vertex u and tries to reduce its degree by looking at all edges (v, w) that are not in T but if added to T create a cycle C containing u. Suppose max(dT(v), dT(w)) ≤ dT(u) − 2. For example, consider the graph in Figure 2.5, in which the edges of the tree T are shown in bold. In this case, the degree of node u is 5, but those of v and w are 3. Let T′ be the result of adding (v, w) and removing an edge from C incident to u. In the example, if we delete edge (u, y), then the degrees of u, v, and w will all be 4 after the move.

Figure 2.5: Illustration of a local move for minimizing the maximum degree of a spanning tree. The bold solid lines are in the tree, and the dashed lines are graph edges not in the tree.
The conditions ensure that this move provides improvement in general; the degree of u is reduced by one in T′ (that is, dT′(u) = dT(u) − 1) and the degrees of v and w in T′ are not greater than the reduced degree of u; that is, max(dT′(v), dT′(w)) ≤ dT′(u).

The local search algorithm carries out local moves on nodes that have high degree. It makes sense for us to carry out a local move to reduce the degree of any node, since it is possible that if we reduce the degree of a low-degree node, it may make possible another local move that reduces the degree of a high-degree node. However, we do not know how to show that such an algorithm terminates in polynomial time. To get a polynomial-time algorithm, we apply the local moves only to nodes whose degree is relatively high. Let ∆(T) be the maximum degree of T; that is, ∆(T) = max_{u∈V} dT(u). The algorithm picks a node in T that has degree at least ∆(T) − ℓ and attempts to reduce its degree using the local move. If there is no move that can reduce the degree of any node having degree between ∆(T) − ℓ and ∆(T), then the algorithm stops. We say that the algorithm has found a locally optimal tree. By applying local moves only to nodes whose degree is between ∆(T) − ℓ and ∆(T), we will be able to show that the algorithm runs in polynomial time.

We now need to prove two things. First, we need to show that any locally optimal tree has maximum degree at most 2 OPT + ℓ. Second, we need to show that we can find a locally optimal tree in polynomial time. For most approximation algorithms, the proof that the algorithm runs in polynomial time is relatively straightforward, but this is often not the case for local search algorithms. In fact, we usually need to restrict the set of local moves in order to prove that the algorithm converges to a locally optimal solution in polynomial time. Here we do this by restricting the local moves to apply only to nodes with high degree.

Theorem 2.19: Let T be a locally optimal tree. Then ∆(T) ≤ 2 OPT + ℓ, where ℓ = ⌈log2 n⌉.

Proof. We first explain how we will obtain a lower bound on OPT. Suppose that we remove k edges of the spanning tree. This breaks the tree into k + 1 different connected components. Suppose we also find a set of nodes S such that each edge in G connecting two of the k + 1 connected components is incident on a node in S. For example, consider the graph in Figure 2.6, which shows the connected components remaining after the bold edges are deleted, along with an appropriate choice of the set S. Observe that any spanning tree of the graph must have at least k edges with endpoints in different components. Thus, the average degree of nodes in S is at least k/|S| for any spanning tree, and OPT ≥ k/|S|. Now we show how to find the set of edges to remove and the set of nodes S so that we can apply this lower bound.
Let Si be the set of nodes of degree at least i in the locally optimal tree T. We claim that for each Si with i ≥ ∆(T) − ℓ + 1, there are at least (i − 1)|Si| + 1 distinct edges of T incident on the nodes of Si, and after removing these edges, each edge that connects distinct connected components is incident on a node of Si−1. Furthermore, we claim there exists an i such that |Si−1| ≤ 2|Si|, so that the value of OPT implied by removing these edges with S = Si−1 is

OPT ≥ ((i − 1)|Si| + 1) / |Si−1| ≥ ((i − 1)|Si| + 1) / (2|Si|) > (i − 1)/2 ≥ (∆(T) − ℓ)/2.

Rearranging terms proves the desired inequality.

We turn to the proofs of the claims. We first show that there must exist some i ≥ ∆(T) − ℓ + 1 such that |Si−1| ≤ 2|Si|. Suppose not. Then |S_{∆(T)−ℓ}| > 2^ℓ |S_{∆(T)}| ≥ n |S_{∆(T)}| ≥ n, since 2^ℓ ≥ n and |S_{∆(T)}| ≥ 1. This is a contradiction, since any Si can have at most n nodes.

Now we show that there are at least (i − 1)|Si| + 1 distinct edges of T incident on the nodes of Si, and after removing these edges, any edge connecting different connected components is incident on a node of Si−1. Figure 2.6 gives an example of this construction for i = 4. Each edge that connects distinct connected components after removing the edges of T incident on nodes of Si either must be one of the edges of T incident on Si, or must close a cycle C in T containing some node in Si. Because the tree is locally optimal, it must be the case that at least one of the endpoints has degree at least i − 1, and so is in Si−1. In removing the edges in T incident on nodes in Si, there are at least i|Si| edges incident on nodes in Si, since each node has degree at least i. At most |Si| − 1 such edges can join two nodes in Si, since T is a spanning tree. Thus, there are at least i|Si| − (|Si| − 1) distinct edges of T incident on the nodes of Si, and this proves the claim.
Figure 2.6: Illustration of lower bound on OPT. The vertices in S have white centers. If the bold edges are deleted, every potential edge for joining the resulting connected components has an endpoint in S.
Theorem 2.20: The algorithm finds a locally optimal tree in polynomial time.

Proof. To prove that the algorithm runs in polynomial time, we will use a potential function argument. The idea of such an argument is that the function captures the current state of the algorithm, and that we can determine upper and lower bounds on this function for any feasible solution, as well as a lower bound on the amount that the function must decrease after each move. In this way, we can bound the number of moves possible before the algorithm must terminate, and of course, the resulting tree must therefore be locally optimal.

For a tree T, we let the potential of T be Φ(T) = Σ_{v∈V} 3^{dT(v)}. Note that Φ(T) ≤ n·3^{∆(T)}, and so the initial potential is at most n·3^n. On the other hand, the lowest possible potential is for a Hamiltonian path, which has potential 2·3 + (n − 2)·3^2 > n. We will show that for each move, the potential function of the resulting tree is at most (1 − 2/(27n^3)) times the potential function previously. After (27/2) n^4 ln 3 local moves, the conditions above imply that the potential of the resulting tree is at most

(1 − 2/(27n^3))^{(27/2) n^4 ln 3} · (n·3^n) ≤ e^{−n ln 3} · (n·3^n) = n,

using the fact that 1 − x ≤ e^{−x}. Since the potential of a tree is greater than n, after O(n^4) local moves there must be no further local moves possible, and the tree must be locally optimal.

We must still prove the claimed potential reduction in each iteration. Suppose the algorithm reduces the degree of a vertex u from i to i − 1, where i ≥ ∆(T) − ℓ, and adds an edge (v, w). Then the increase in the potential function due to increasing the degrees of v and w is at most 2·(3^{i−1} − 3^{i−2}) = 4·3^{i−2}, since the degrees of v and w can be increased to at most i − 1. The decrease in the potential function due to decreasing the degree of u is 3^i − 3^{i−1} = 2·3^{i−1}. Observe that 3^ℓ ≤ 3·3^{log2 n} ≤ 3·2^{2 log2 n} = 3n^2. Therefore, the overall decrease in the potential function is at least

2·3^{i−1} − 4·3^{i−2} = (2/9)·3^i ≥ (2/9)·3^{∆(T)−ℓ} ≥ (2/(27n^2))·3^{∆(T)} ≥ (2/(27n^3))·Φ(T).

Thus, for the resulting tree T′ we have that Φ(T′) ≤ (1 − 2/(27n^3)) Φ(T). This completes the proof.
By slightly adjusting the parameters within the same proof outline, we can actually prove a stronger result. Given some constant b > 1, suppose we perform local changes on nodes of degree at least ∆(T) − ⌈logb n⌉. Then it is possible to show the following.

Corollary 2.21: The local search algorithm runs in polynomial time and results in a spanning tree T such that ∆(T) ≤ b OPT + ⌈logb n⌉.

In Section 9.3, we will prove a still stronger result: we can give a polynomial-time algorithm that finds a spanning tree T with ∆(T) ≤ OPT + 1. Given that it is NP-hard to determine whether a spanning tree has degree exactly OPT, this is clearly the best possible result that can be obtained. In the next section, we give another result of this type for the edge coloring problem. In Section 11.2, we will show that there are interesting extensions of these results to the case of spanning trees with costs on the edges.
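The local search procedure of this section can be sketched in Python as follows. This is an illustrative implementation (all identifiers ours): it starts from a DFS spanning tree and repeatedly applies the local move of Figure 2.5 to nodes of degree at least ∆(T) − ⌈log2 n⌉, exactly as in the analysis above.

```python
import math

def local_search_mdst(n, edges):
    """Local search for a spanning tree of low maximum degree.

    n: number of vertices (graph must be connected); edges: list of (u, v)."""
    adj = [[] for _ in range(n)]
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    # Arbitrary starting spanning tree via DFS from vertex 0.
    tree, seen, stack = set(), [False] * n, [0]
    seen[0] = True
    while stack:
        a = stack.pop()
        for b in adj[a]:
            if not seen[b]:
                seen[b] = True
                tree.add(frozenset((a, b)))
                stack.append(b)
    assert all(seen), "graph must be connected"
    ell = math.ceil(math.log2(n))

    def deg(x):
        return sum(1 for e in tree if x in e)

    def path_in_tree(a, b):
        """Nodes on the unique tree path from a to b (BFS over tree edges)."""
        par, frontier = {a: None}, [a]
        while frontier:
            nxt = []
            for x in frontier:
                for y in adj[x]:
                    if frozenset((x, y)) in tree and y not in par:
                        par[y] = x
                        nxt.append(y)
            frontier = nxt
        path = [b]
        while path[-1] != a:
            path.append(par[path[-1]])
        return path

    moved = True
    while moved:
        moved = False
        delta = max(deg(x) for x in range(n))
        for u in range(n):
            if deg(u) < delta - ell:
                continue  # only high-degree nodes are eligible for a move
            for a, b in edges:
                e = frozenset((a, b))
                if e in tree or max(deg(a), deg(b)) > deg(u) - 2:
                    continue
                cycle = path_in_tree(a, b)
                if u not in cycle:
                    continue
                # u is interior to the path (the degree test rules out u in {a, b}),
                # so drop a cycle edge incident to u and add (a, b).
                i = cycle.index(u)
                tree.remove(frozenset((u, cycle[i - 1])))
                tree.add(e)
                moved = True
                break
            if moved:
                break
    return tree
```

On a star with extra edges, for example, the moves peel degree off the center until the tree becomes a Hamiltonian path, mirroring the potential-function argument of Theorem 2.20.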
Figure 2.7: A graph with a 3-edge-coloring.

Figure 2.8: The Petersen graph. This graph is not 3-edge-colorable.
2.7 Edge coloring
To conclude this chapter, we give an algorithm that has the elements of both a greedy algorithm and a local search algorithm: it attempts to make progress in a greedy way, but when blocked it makes local changes until progress can be made again. The algorithm is for the problem of finding an edge coloring of a graph. An undirected graph is k-edge-colorable if each edge can be assigned exactly one of k colors in such a way that no two edges with the same color share an endpoint. We call the assignment of colors to edges a k-edge-coloring. For example, Figure 2.7 shows a graph with a 3-edge-coloring. An analogous notion of vertex coloring will be discussed in Sections 5.12, 6.5, and 13.2.

For a given graph, we would like to obtain a k-edge-coloring with k as small as possible. Let ∆ be the maximum degree of a vertex in the given graph. Clearly, we cannot hope to find a k-edge-coloring with k < ∆, since at least ∆ different colors must be incident to any vertex of maximum degree. Note that this shows that the coloring given in Figure 2.7 is optimal. On the other hand, consider the example in Figure 2.8, which is called the Petersen graph; it is not too hard to show that this graph is not 3-edge-colorable, and yet it is easy to color it with four colors. Furthermore, the following has been shown.

Theorem 2.22: For graphs with ∆ = 3, it is NP-complete to decide whether the graph is 3-edge-colorable or not.

In this section, we give a polynomial-time algorithm that will find a (∆ + 1)-edge-coloring for any graph. Given the NP-completeness result, it is clearly the best we can hope to do unless P = NP. We give the algorithm and its analysis in the proof of the theorem below. We repeatedly find an uncolored edge (u, v) and attempt to color it with one of the ∆ + 1 colors. If no color is available such that coloring (u, v) would result in a (∆ + 1)-edge-coloring, then we show that it is possible to locally change some of the edge colors in such a way that we can correctly color (u, v).

Theorem 2.23: There is a polynomial-time algorithm to find a (∆ + 1)-edge-coloring of a graph.

Proof. Our algorithm will start with a completely uncolored graph. In each iteration of the algorithm we will take some uncolored edge and color it. This will be done in such a way that the algorithm maintains a legal coloring of the graph at the beginning of each iteration of the main loop; that is, for any vertex v in the graph, no two colored edges incident on v have the same color, and at most ∆ + 1 distinct colors have been used. Clearly this is true initially, when the entire graph is uncolored. We show that this invariant is maintained throughout the course of each iteration. In the argument that follows, we say that a vertex v lacks color c if no edge of color c is incident on v. We summarize the algorithm in Algorithm 2.3, and now explain it in detail.

Let (u, v0) be the selected uncolored edge. We then construct a sequence of edges (u, v0), (u, v1), . . . and colors c0, c1, . . . . We will use this sequence to do some local recoloring of edges so that we can correctly color the uncolored edge. Note that since we are maintaining a legal coloring with ∆ + 1 colors, and the maximum degree is ∆, each vertex must lack at least one of the ∆ + 1 colors.
To build this sequence of edges, consider the current vertex vi, starting initially with v0; if vi lacks a color that u also lacks, then we let ci be this color, and the sequence is complete. If not, then choose the color ci arbitrarily from among those that vi lacks; note that u will not lack this color. If this color has not appeared in the sequence c0, c1, . . . to this point, then we let vi+1 be the vertex such that (u, vi+1) is the edge incident to u of color ci, and this extends our sequence (sometimes called a fan sequence) one edge further. If we have that ci = cj, where j < i, then we also stop building our sequence. (See Figure 2.9 for an example of this construction.)

We need to argue that this process terminates either in the case that u and vi lack the same color ci, or in the case that ci = cj for some j < i. Let d be the number of edges incident to u that are currently colored. Suppose the sequence reaches vertex vd without terminating. If there is no color cd that both u and vd lack, then any color that vd lacks is one that u does not lack, which must be one of the colors on the edges (u, v1), . . . , (u, vd−1). Hence, the color we choose for cd must be the same as one of the previous colors c0, . . . , cd−1.

Suppose we complete the sequence because we find some ci that both u and vi lack. This case is easy: we recolor edges (u, vj) with color cj for j = 0, . . . , i; call this shifting recoloring. This situation and the resulting recoloring are depicted in Figure 2.10. In effect, we shift the uncolored edge to (u, vi) and color it with ci, since both u and vi lack ci. The recoloring is correct since we know that each vj lacks cj and, for j < i, cj was incident on u previously via the edge (u, vj+1), which we now give another color.

Now consider the remaining case, where we complete the sequence because ci = cj for some j < i. This situation and its recoloring are given in Figure 2.11. We shift the uncolored edge to (u, vj) by recoloring edges (u, vk) with color ck for 0 ≤ k < j and uncoloring edge (u, vj); this is correct by the same argument as above. Now vi and vj lack the same color c = ci = cj.
Figure 2.9: An example of building the fan sequence. Edge (u, v0) is uncolored. Vertex v0 lacks gray, but u does not lack gray due to (u, v1), so c0 is gray. Vertex v1 lacks black, but u does not lack black due to (u, v2), so c1 is black. Vertex v2 lacks the dashed color, but u does not lack dashed due to (u, v3), so c2 is dashed. Vertex v3 lacks black, so c3 is black, and the sequence repeats a color.
Figure 2.10: A slightly different fan sequence and its recoloring. As before, edge (u, v0) is uncolored. Vertex v0 lacks black, but u does not lack black due to (u, v1), so c0 is black. Vertex v1 lacks gray, but u does not lack gray due to (u, v2), so c1 is gray. Vertex v2 lacks dashed, but u does not lack dashed due to (u, v3), so c2 is dashed. Vertex v3 lacks dotted, and u also lacks dotted, and thus c3 is dotted. Therefore, we shift colors as shown and color the edge (u, v3) with dotted.
Figure 2.11: The fan sequence from Figure 2.9. We start by shifting the uncolored edge to (u, v1). Now both v1 and v3 lack black. The dotted color can be the color cu that u lacks but that v1 and v3 do not lack.

We let cu be a color that u lacks; by our selection (and the fact that we did not fall into the first case), we know that both vi and vj do not lack cu. Consider the subgraph induced by taking all of the edges with colors c and cu; since we have a legal coloring, this subgraph must be a collection of paths and simple cycles. Since each of u, vi, and vj has exactly one of these two colors incident to it, each is an endpoint of one of the path components, and since a path has only two endpoints, at least one of vi and vj must be in a different component than u.

Suppose that vj is in a different component than u. Suppose we recolor every edge of color c with color cu and every edge of color cu with color c in the component containing u; call this path recoloring. Afterward, u now lacks c (and this does not affect the colors incident to vj at all), and so we may color the uncolored edge (u, vj) with c. See Figure 2.12 for an example.

Finally, suppose that u and vj are endpoints of the same path, and so vi must be in a different component. In this case, we can apply the previous shifting recoloring technique to first uncolor the edge (u, vi). We then apply the path recoloring technique on the u–vj path to make u lack c; this does not affect any of the colors incident on vi, and it allows us to color edge (u, vi) with c.

Clearly we color a previously uncolored edge in each iteration of the algorithm, and each iteration can be implemented in polynomial time.
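The claim above that the Petersen graph is not 3-edge-colorable, yet is 4-edge-colorable, can be verified mechanically by backtracking over edge colors. This brute-force checker is ours, not part of the book's algorithm; it uses the standard vertex labeling with outer cycle 0–4, spokes (i, i+5), and inner edges (5+i, 5+((i+2) mod 5)).

```python
def edge_colorable(edges, n, k):
    """Backtracking test of whether the graph admits a proper k-edge-coloring."""
    incident = [[] for _ in range(n)]  # indices of edges touching each vertex
    for idx, (a, b) in enumerate(edges):
        incident[a].append(idx)
        incident[b].append(idx)
    color = [-1] * len(edges)

    def ok(idx, c):
        a, b = edges[idx]
        return all(color[e] != c for e in incident[a] + incident[b])

    def solve(idx):
        if idx == len(edges):
            return True
        for c in range(k):
            if ok(idx, c):
                color[idx] = c
                if solve(idx + 1):
                    return True
        color[idx] = -1  # undo before backtracking
        return False

    return solve(0)

PETERSEN = ([(i, (i + 1) % 5) for i in range(5)]             # outer 5-cycle
            + [(i, i + 5) for i in range(5)]                 # spokes
            + [(5 + i, 5 + (i + 2) % 5) for i in range(5)])  # inner pentagram
```

The search is small here because every vertex of the Petersen graph has degree 3, so each partial coloring prunes aggressively.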
Exercises

2.1 The k-suppliers problem is similar to the k-center problem given in Section 2.2. The input to the problem is a positive integer k, and a set of vertices V, along with distances dij between any two vertices i, j that obey the same properties as in the k-center problem. However, now the vertices are partitioned into suppliers F ⊆ V and customers D = V − F. The goal is to find k suppliers such that the maximum distance from a supplier to a customer is minimized. In other words, we wish to find S ⊆ F, |S| ≤ k, that minimizes max_{j∈D} d(j, S).

(a) Give a 3-approximation algorithm for the k-suppliers problem.

(b) Prove that there is no α-approximation algorithm for α < 3 unless P = NP.
Figure 2.12: The example of Figure 2.11 continued, now showing the components of black and dotted edges containing u, v1, and v3. Since u and v3 are at endpoints of the same black/dotted path, we switch the colors black and dotted on this path, then color (u, v1) black.
while G is not completely colored do
    Pick an uncolored edge (u, v0)
    i ← −1
    repeat  // build the fan sequence
        i ← i + 1
        if there is a color that both vi and u lack then
            Let ci be this color
        else
            Pick some color ci that vi lacks
            Let vi+1 be the vertex such that (u, vi+1) is the edge of color ci
    until ci is a color u lacks or ci = cj for some j < i
    if u and vi lack color ci then
        Shift the uncolored edge to (u, vi) and color (u, vi) with ci
    else
        Let j < i be such that ci = cj
        Shift the uncolored edge to (u, vj)
        Pick a color cu that u lacks
        Let c = ci
        Let E′ be the set of edges colored c or cu
        if u and vj are in different connected components of (V, E′) then
            Switch colors cu and c in the component of (V, E′) containing u
            Color (u, vj) with color c
        else  // u and vi are in different components of (V, E′)
            Shift the uncolored edge to (u, vi)
            Switch colors cu and c in the component of (V, E′) containing u
            Color (u, vi) with color c

Algorithm 2.3: A greedy algorithm to compute a (∆ + 1)-edge-coloring of a graph.
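For contrast with the (∆ + 1) guarantee of Algorithm 2.3, here is a far simpler greedy baseline (ours, not the book's): give each edge the smallest color missing at both endpoints. An edge shares an endpoint with at most 2∆ − 2 other edges, so this never needs more than 2∆ − 1 colors; that is weaker than ∆ + 1, but it makes a useful sanity check.

```python
def greedy_edge_coloring(edges, n):
    """Color each edge with the smallest color absent at both endpoints.

    Uses at most 2*Delta - 1 colors, versus Delta + 1 for Algorithm 2.3."""
    used = [set() for _ in range(n)]  # colors already incident to each vertex
    coloring = {}
    for a, b in edges:
        c = 0
        while c in used[a] or c in used[b]:
            c += 1
        coloring[(a, b)] = c
        used[a].add(c)
        used[b].add(c)
    return coloring
```

Because each edge sees at most 2∆ − 2 already-colored neighbors, the `while` loop always stops at some color index in 0, . . . , 2∆ − 2.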
2.2 Prove Lemma 2.8: show that for any input to the problem of minimizing the makespan on identical parallel machines for which the processing requirement of each job is more than one-third the optimal makespan, the longest processing time rule computes an optimal schedule.

2.3 We consider scheduling jobs on identical machines as in Section 2.3, but jobs are now subject to precedence constraints. We say i ≺ j if in any feasible schedule, job i must be completely processed before job j begins processing. A natural variant on the list scheduling algorithm is one in which whenever a machine becomes idle, any remaining job that is available is assigned to start processing on that machine. A job j is available if all jobs i such that i ≺ j have already been completely processed. Show that this list scheduling algorithm is a 2-approximation algorithm for the problem with precedence constraints.

2.4 In this problem, we consider a variant of the problem of scheduling on parallel machines so as to minimize the length of the schedule. Now each machine i has an associated speed si, and it takes pj/si units of time to process job j on machine i. Assume that machines are numbered from 1 to m and ordered such that s1 ≥ s2 ≥ · · · ≥ sm. We call these related machines.

(a) A ρ-relaxed decision procedure for a scheduling problem is an algorithm that, given an instance of the scheduling problem and a deadline D, either produces a schedule of length at most ρ·D or correctly states that no schedule of length D is possible for the instance. Show that given a polynomial-time ρ-relaxed decision procedure for the problem of scheduling related machines, one can produce a ρ-approximation algorithm for the problem.

(b) Consider the following variant of the list scheduling algorithm, now for related machines. Given a deadline D, we label every job j with the slowest machine i such that the job could complete on that machine in time D; that is, pj/si ≤ D.
If there is no such machine for a job j, it is clear that no schedule of length D is possible. If machine i becomes idle at a time D or later, it stops processing. If machine i becomes idle at a time before D, it takes the next job of label i that has not been processed, and starts processing it. If no job of label i is available, it looks for jobs of label i + 1; if no jobs of label i + 1 are available, it looks for jobs of label i + 2, and so on. If no such jobs are available, it stops processing. If not all jobs are processed by this procedure, then the algorithm states that no schedule of length D is possible. Prove that this algorithm is a polynomial-time 2-relaxed decision procedure.

2.5 In the minimum-cost Steiner tree problem, we are given as input a complete, undirected graph G = (V, E) with nonnegative costs cij ≥ 0 for all edges (i, j) ∈ E. The set of vertices is partitioned into terminals R and nonterminals (or Steiner vertices) V − R. The goal is to find a minimum-cost tree containing all terminals.

(a) Suppose initially that the edge costs obey the triangle inequality; that is, cij ≤ cik + ckj for all i, j, k ∈ V. Let G[R] be the graph induced on the set of terminals; that is, G[R] contains the vertices in R and all edges from G that have both endpoints in R. Consider computing a minimum spanning tree in G[R]. Show that this gives a 2-approximation algorithm for the minimum-cost Steiner tree problem.
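As an illustration of part (a), and of the metric completion used in part (b), the construction might be sketched as follows. This is our own sketch, not the book's code; the function name, the edge-list graph representation, and the choice of Floyd-Warshall plus Prim's algorithm are assumptions made for the example.

```python
def steiner_2approx(n, edges, terminals):
    # Exercise 2.5 sketch: all-pairs shortest paths give the metric
    # completion; a minimum spanning tree on the terminals then costs
    # at most twice the cost of an optimal Steiner tree.
    INF = float("inf")
    d = [[INF] * n for _ in range(n)]
    for i in range(n):
        d[i][i] = 0
    for u, v, c in edges:
        d[u][v] = min(d[u][v], c)
        d[v][u] = min(d[v][u], c)
    for k in range(n):  # Floyd-Warshall metric completion
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    # Prim's algorithm restricted to the terminal set R
    R = list(terminals)
    in_tree, cost = {R[0]}, 0
    while len(in_tree) < len(R):
        c, v = min((d[u][w], w) for u in in_tree for w in R if w not in in_tree)
        cost += c
        in_tree.add(v)
    return cost
```

On a star with three unit-cost spokes and the three leaves as terminals, the terminal MST costs 4 while the optimal Steiner tree (the star itself) costs 3, consistent with the factor of 2.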
60
Greedy algorithms and local search
(b) Now we suppose that edge costs do not obey the triangle inequality, and that the input graph G is connected but not necessarily complete. Let c′ij be the cost of the shortest path from i to j in G using input edge costs c. Consider running the algorithm above in the complete graph G′ on V with edge costs c′ to obtain a tree T′. To compute a tree T in the original graph G, for each edge (i, j) ∈ T′, we add to T all edges in a shortest path from i to j in G using input edge costs c. Show that this is still a 2-approximation algorithm for the minimum-cost Steiner tree problem on the original (incomplete) input graph G. G′ is sometimes called the metric completion of G.

2.6 Prove that there can be no α-approximation algorithm for the minimum-degree spanning tree problem for α < 3/2 unless P = NP.

2.7 Suppose that an undirected graph G has a Hamiltonian path. Give a polynomial-time algorithm to find a path of length at least Ω(log n/(log log n)).

2.8 Consider the local search algorithm of Section 2.6 for finding a minimum-degree spanning tree, and suppose we apply a local move to a node whenever it is possible to do so; that is, we don't restrict local moves to nodes with degrees between ∆(T) − ℓ and ∆(T). What kind of performance guarantee can you obtain for a locally optimal tree in this case?

2.9 As given in Exercise 2.5, in the Steiner tree problem we are given an undirected graph G = (V, E) and a set of terminals R ⊆ V. A Steiner tree is a tree in G in which all the terminals are connected; a nonterminal need not be spanned. Show that the local search algorithm of Section 2.6 can be adapted to find a Steiner tree whose maximum degree is at most 2 OPT + ⌈log2 n⌉, where OPT is the maximum degree of a minimum-degree Steiner tree.

2.10 Let E be a set of items, and for S ⊆ E, let f(S) give the value of the subset S. Suppose we wish to find a maximum value subset of E of at most k items. Furthermore, suppose that f(∅) = 0, and that f is monotone and submodular.
We say that f is monotone if for any S and T with S ⊆ T ⊆ E, then f(S) ≤ f(T). We say that f is submodular if for any S, T ⊆ E, then f(S) + f(T) ≥ f(S ∪ T) + f(S ∩ T). Show that the greedy (1 − 1/e)-approximation algorithm of Section 2.5 extends to this problem.

2.11 In the maximum coverage problem, we have a set of elements E, and m subsets of elements S1, . . . , Sm ⊆ E, each with a nonnegative weight wj ≥ 0. The goal is to choose k elements such that we maximize the weight of the subsets that are covered. We say that a subset is covered if we have chosen some element from it. Thus we want to find S ⊆ E such that |S| = k and such that we maximize the total weight of the subsets j with S ∩ Sj ̸= ∅.

(a) Give a (1 − 1/e)-approximation algorithm for this problem.

(b) Show that if an approximation algorithm with performance guarantee better than 1 − 1/e + ϵ exists for the maximum coverage problem for some constant ϵ > 0, then every NP-complete problem has an O(n^{O(log log n)})-time algorithm. (Hint: Recall Theorem 1.13.)
2.12 A matroid (E, I) is a set E of ground elements together with a collection I of subsets of E; that is, if S ∈ I, then S ⊆ E. A set S ∈ I is said to be independent. The independent sets of a matroid obey the following two axioms:

• If S is independent, then any S′ ⊆ S is also independent.
• If S and T are independent, and |S| < |T|, then there is some e ∈ T − S such that S ∪ {e} is also independent.

An independent set S is a base of the matroid if no set strictly containing it is also independent.

(a) Given an undirected graph G = (V, E), show that the forests of G form a matroid; that is, show that if E is the ground set, and I the set of forests of G, then the matroid axioms are obeyed.

(b) Show that for any matroid, every base of the matroid has the same number of ground elements.

(c) For any given matroid, suppose that for each e ∈ E, we have a nonnegative weight we ≥ 0. Give a greedy algorithm for the problem of finding a maximum-weight base of a matroid.

2.13 Let (E, I) be a matroid as defined in Exercise 2.12, and let f be a monotone, submodular function as defined in Exercise 2.10 such that f(∅) = 0. Consider the following local search algorithm for finding a maximum-value base of the matroid: First, start with an arbitrary base S. Then consider all pairs e ∈ S and e′ ∉ S. If S − {e} ∪ {e′} is a base, and f(S − {e} ∪ {e′}) > f(S), then set S ← S − {e} ∪ {e′}. Repeat until a locally optimal solution is reached. The goal of this problem is to show that a locally optimal solution has value at least half the optimal value.

(a) We begin with a simple case: suppose that the matroid is a uniform matroid; that is, S ⊆ E is independent if |S| ≤ k for some fixed k. Prove that for a locally optimal solution S, f(S) ≥ ½ OPT.

(b) To prove the general case, it is useful to know that for any two bases of a matroid, X and Y, there exists a bijection g : X → Y such that for any e ∈ X, X − {e} ∪ {g(e)} is independent.
Use this to prove that for any locally optimal solution S, f(S) ≥ ½ OPT.

(c) For any ϵ > 0, give a variant of this algorithm that is a (½ − ϵ)-approximation algorithm.

2.14 In the edge-disjoint paths problem in directed graphs, we are given as input a directed graph G = (V, A) and k source-sink pairs si, ti ∈ V. The goal of the problem is to find edge-disjoint paths so that as many source-sink pairs as possible have a path from si to ti. More formally, let S ⊆ {1, . . . , k}. We want to find S and paths Pi for all i ∈ S such that |S| is as large as possible and for any i, j ∈ S, i ̸= j, Pi and Pj are edge-disjoint (Pi ∩ Pj = ∅).

Consider the following greedy algorithm for the problem. Let ℓ be the maximum of √m and the diameter of the graph (where m = |A| is the number of input arcs). For each i from 1 to k, we check to see if there exists an si-ti path of length at most ℓ in the graph. If there is such a path Pi, we add i to S and remove the arcs of Pi from the graph.
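The greedy routine of Exercise 2.14 might be sketched as follows. This is our own illustrative implementation, not code from the text; the adjacency representation and names are assumptions, and breadth-first search supplies a shortest si-ti path in the remaining graph.

```python
from collections import deque

def greedy_disjoint_paths(n, arcs, pairs, ell):
    # For each (s_i, t_i) in turn, route it along a path of at most
    # ell arcs if one exists, then delete the arcs of that path.
    adj = {u: set() for u in range(n)}
    for u, v in arcs:
        adj[u].add(v)
    routed = []
    for idx, (s, t) in enumerate(pairs):
        # BFS for a shortest s-t path in the remaining graph
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in adj[u]:
                if v not in parent:
                    parent[v] = u
                    q.append(v)
        if t in parent:
            path, v = [], t
            while parent[v] is not None:
                path.append((parent[v], v))
                v = parent[v]
            if len(path) <= ell:
                routed.append(idx)
                for u, v in path:  # make these arcs unavailable later
                    adj[u].discard(v)
    return routed
```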
Show that this greedy algorithm is an Ω(1/ℓ)-approximation algorithm for the edge-disjoint paths problem in directed graphs.

2.15 Prove that there is no α-approximation algorithm for the edge coloring problem for α < 4/3 unless P = NP.

2.16 Let G = (V, E) be a bipartite graph; that is, V can be partitioned into two sets A and B, such that each edge in E has one endpoint in A and the other in B. Let ∆ be the maximum degree of a node in G. Give a polynomial-time algorithm for finding a ∆-edge-coloring of G.
Chapter Notes

As discussed in the introduction to this chapter, greedy algorithms and local search algorithms are very popular choices for heuristics for discrete optimization problems. Thus it is not surprising that they are among the earliest algorithms analyzed for a performance guarantee. The greedy edge coloring algorithm in Section 2.7 is from a 1964 paper due to Vizing [284]. To the best of our knowledge, this is the earliest polynomial-time algorithm known for a combinatorial optimization problem that proves that its performance is close to optimal, with an additive performance guarantee. In 1966, Graham [142] gave the list scheduling algorithm for scheduling identical parallel machines found in Section 2.3. To our knowledge, this is the first appearance of a polynomial-time algorithm with a relative performance guarantee. The longest processing time algorithm and its analysis is from a 1969 paper of Graham [143]. Other early examples of the analysis of greedy approximation algorithms include a 1977 paper of Cornuejols, Fisher, and Nemhauser [83], who introduce the float maximization problem of Section 2.5, as well as the algorithm presented there. The analysis of the algorithm presented follows that given in Nemhauser and Wolsey [232]. The earliest due date rule given in Section 2.1 is from a 1955 paper of Jackson [174], and is sometimes called Jackson's rule. The analysis of the algorithm in the case of negative due dates was given by Kise, Ibaraki, and Mine [195] in 1979. The nearest addition algorithm for the metric traveling salesman problem given in Section 2.4 and the analysis of the algorithm are from a 1977 paper of Rosenkrantz, Stearns, and Lewis [255]. The double-tree algorithm from that section is folklore, while Christofides' algorithm is due, naturally enough, to Christofides [73]. The hardness result of Theorem 2.9 is due to Sahni and Gonzalez [257], while the result of Theorem 2.14 is due to Papadimitriou and Vempala [239].
There is an enormous literature on the traveling salesman problem. For book-length treatments of the problem, see the book edited by Lawler, Lenstra, Rinnooy Kan, and Shmoys [211] and the book of Applegate, Bixby, Chvátal, and Cook [9]. Of course, greedy algorithms for polynomial-time solvable discrete optimization problems have also been studied for many years. The greedy algorithm for finding a maximum-weight base of a matroid in Exercise 2.12 was given by Rado [246] in 1957, Gale [121] in 1968, and Edmonds [96] in 1971; matroids were first defined by Whitney [285]. Analysis of the performance guarantees of local search algorithms has been relatively rare, at least until some work on facility location problems from the late 1990s and early 2000s that will be discussed in Chapter 9. The local search algorithm for scheduling parallel machines given in Section 2.3 is a simplification of a local search algorithm given in a 1979 paper of Finn and Horowitz [113]; Finn and Horowitz show that their algorithm has performance guarantee of at most 2. The local search algorithm for finding a maximum-value base of Exercise 2.13 was given by Fisher, Nemhauser, and Wolsey [114] in 1978. The local search algorithm for finding
a minimum-degree spanning tree in Section 2.6 is from 1992 and can be found in Fürer and Raghavachari [118].

Now to discuss the other results presented in the chapter: The algorithm and analysis of the k-center problem in Section 2.2 is due to Gonzalez [141]. An alternative 2-approximation algorithm for the problem is due to Hochbaum and Shmoys [163]. Theorem 2.4, which states that getting a performance guarantee better than 2 is NP-hard, is due to Hsu and Nemhauser [172]. Theorem 2.22, which states that deciding if a graph is 3-edge-colorable or not is NP-complete, is due to Holyer [170]. The k-supplier problem of Exercise 2.1, as well as a 3-approximation algorithm for it, were introduced in Hochbaum and Shmoys [164]. The list scheduling variant for problems with precedence constraints in Exercise 2.3 is due to Graham [142]. The idea of a ρ-relaxed decision procedure in Exercise 2.4 is due to Hochbaum and Shmoys [165]; the 2-relaxed decision procedure for related machines in that exercise is due to Shmoys, Wein, and Williamson [265]. Exercise 2.7 was suggested to us by Nick Harvey. Exercise 2.9 is due to Fürer and Raghavachari [118]. Exercises 2.10 and 2.11 are due to Nemhauser, Wolsey, and Fisher [233]. The hardness result of Exercise 2.11 is due to Feige [107]; Feige also shows that the same result can be obtained under the assumption that P ̸= NP. Kleinberg [199] gives the greedy algorithm for the edge-disjoint paths problem in directed graphs given in Exercise 2.14. Kőnig [201] shows that it is possible to ∆-edge-color a bipartite graph as in Exercise 2.16.
Chapter 3
Rounding data and dynamic programming
Dynamic programming is a standard technique in algorithm design in which an optimal solution for a problem is built up from optimal solutions for a number of subproblems, normally stored in a table or multidimensional array. Approximation algorithms can be designed using dynamic programming in a variety of ways, many of which involve rounding the input data in some way. For instance, sometimes weakly NP-hard problems have dynamic programming algorithms that run in time polynomial in the input size if the input is represented in unary rather than in binary (so, for example, the number 7 would be encoded as 1111111). If so, we say that the algorithm is pseudopolynomial. Then by rounding the input values so that the number of distinct values is polynomial in the input size and an error parameter ϵ > 0, this pseudopolynomial algorithm can be made to run in time polynomial in the size of the original instance. We can often show that the rounding does not sacrifice too much in the quality of the solution produced. We will use this technique in discussing the knapsack problem in Section 3.1.

For other problems, such as scheduling problems, we can often make distinctions between "large" and "small" parts of the input instance; for instance, in scheduling problems, we distinguish between jobs that have large and small processing times. We can then show that by rounding the sizes of the large inputs so that, again, the number of distinct, large input values is polynomial in the input size and an error parameter, we can use dynamic programming to find an optimal solution on just the large inputs. Then this solution must be augmented to a solution for the whole input by dealing with the small inputs in some way. Using these ideas, we will devise polynomial-time approximation schemes for the problem of scheduling parallel machines introduced in the last chapter, and for a new problem of packing bins.
3.1
The knapsack problem
A traveler with a knapsack comes across a treasure hoard. Unfortunately, his knapsack can hold only so much. What items should he place in his knapsack in order to maximize the value of the items he takes away? This unrealistic scenario gives the name to the knapsack problem. In the knapsack problem, we are given a set of n items I = {1, . . . , n}, where each item i has a value vi and a size si. All sizes and values are positive integers. The knapsack has capacity B, where B
A(1) ← {(0, 0), (s1, v1)}
for j ← 2 to n do
    A(j) ← A(j − 1)
    for each (t, w) ∈ A(j − 1) do
        if t + sj ≤ B then
            Add (t + sj, w + vj) to A(j)
    Remove dominated pairs from A(j)
return max{w : (t, w) ∈ A(n)}

Algorithm 3.1: A dynamic programming algorithm for the knapsack problem.

is also a positive integer. The goal is to find a subset of items S ⊆ I that maximizes the value ∑_{i∈S} vi of the items in the knapsack subject to the constraint that the total size of these items is no more than the capacity; that is, ∑_{i∈S} si ≤ B. We assume that we consider only items that could actually fit in the knapsack (by themselves), so that si ≤ B for each i ∈ I. Although the application stated above is unlikely to be useful in real life, the knapsack problem is well studied because it is a simplified model of a problem that arises in many realistic scenarios.

We now argue that we can use dynamic programming to find the optimal solution to the knapsack problem. We maintain an array entry A(j) for j = 1, . . . , n. Each entry A(j) is a list of pairs (t, w). A pair (t, w) in the list of entry A(j) indicates that there is a set S from the first j items that uses space exactly t ≤ B and has value exactly w; that is, there exists a set S ⊆ {1, . . . , j} such that ∑_{i∈S} si = t ≤ B and ∑_{i∈S} vi = w. Each list does not contain all possible such pairs, but instead keeps track of only the most efficient ones. To do this, we introduce the notion of one pair dominating another one; a pair (t, w) dominates another pair (t′, w′) if t ≤ t′ and w ≥ w′; that is, the solution indicated by the pair (t, w) uses no more space than (t′, w′), but has at least as much value. Note that domination is a transitive property; that is, if (t, w) dominates (t′, w′) and (t′, w′) dominates (t′′, w′′), then (t, w) also dominates (t′′, w′′). We will ensure that in any list, no pair dominates another one; this means that we can assume each list A(j) is of the form (t1, w1), . . .
, (tk, wk) with t1 < t2 < · · · < tk and w1 < w2 < · · · < wk. Since the sizes of the items are integers, this implies that there are at most B + 1 pairs in each list. Furthermore, if we let V = ∑_{i=1}^n vi be the maximum possible value for the knapsack, then there can be at most V + 1 pairs in the list. Finally, we ensure that for each feasible set S ⊆ {1, . . . , j} (with ∑_{i∈S} si ≤ B), the list A(j) contains some pair (t, w) that dominates (∑_{i∈S} si, ∑_{i∈S} vi).

In Algorithm 3.1, we give the dynamic program that constructs the lists A(j) and solves the knapsack problem. We start out with A(1) = {(0, 0), (s1, v1)}. For each j = 2, . . . , n, we do the following. We first set A(j) ← A(j − 1), and for each (t, w) ∈ A(j − 1), we also add the pair (t + sj, w + vj) to the list A(j) if t + sj ≤ B. We finally remove from A(j) all dominated pairs by sorting the list with respect to their space component, retaining the best value for each space total possible, and removing any larger space total that does not have a corresponding larger value. One way to view this process is to generate two lists, A(j − 1) and the one augmented by (sj, vj), and then perform a type of merging of these two lists. We return the pair (t, w) from A(n) of maximum value as our solution. Next we argue that this algorithm is correct.

Theorem 3.1: Algorithm 3.1 correctly computes the optimal value of the knapsack problem.

Proof. By induction on j we prove that A(j) contains all nondominated pairs corresponding to feasible sets S ⊆ {1, . . . , j}. Certainly this is true in the base case by setting A(1) to
{(0, 0), (s1, v1)}. Now suppose it is true for A(j − 1). Let S ⊆ {1, . . . , j}, and let t = ∑_{i∈S} si ≤ B and w = ∑_{i∈S} vi. We claim that there is some pair (t′, w′) ∈ A(j) such that t′ ≤ t and w′ ≥ w. First, suppose that j ∉ S. Then the claim follows by the induction hypothesis and by the fact that we initially set A(j) to A(j − 1) and removed dominated pairs. Now suppose that j ∈ S. Then for S′ = S − {j}, by the induction hypothesis, there is some (t̂, ŵ) ∈ A(j − 1) that dominates (∑_{i∈S′} si, ∑_{i∈S′} vi), so that t̂ ≤ ∑_{i∈S′} si and ŵ ≥ ∑_{i∈S′} vi. Then the algorithm will add the pair (t̂ + sj, ŵ + vj) to A(j), where t̂ + sj ≤ t ≤ B and ŵ + vj ≥ w. Thus, there will be some pair (t′, w′) ∈ A(j) that dominates (t, w).

Algorithm 3.1 takes O(n · min(B, V)) time. This is not a polynomial-time algorithm, since we assume that all input numbers are encoded in binary; thus, the size of the input number B is essentially log2 B, and so the running time O(nB) is exponential in the size of the input number B, not polynomial. If we were to assume that the input is given in unary, then O(nB) would be a polynomial in the size of the input. It is sometimes useful to make this distinction between problems.

Definition 3.2: An algorithm for a problem Π is said to be pseudopolynomial if its running time is polynomial in the size of the input when the numeric part of the input is encoded in unary.

If the maximum possible value V were some polynomial in n, then the running time would indeed be a polynomial in the input size. We now show how to get a polynomial-time approximation scheme for the knapsack problem by rounding the values of the items so that V is indeed a polynomial in n. The rounding induces some loss of precision in the value of a solution, but we will show that this does not affect the final value by too much. Recall the definition of an approximation scheme from Chapter 1.
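As a concrete aside, Algorithm 3.1 might be implemented along the following lines (a Python sketch with our own names, not code from the text). Each round merges A(j − 1) with its shifted copy and removes dominated pairs in one sorted sweep.

```python
def knapsack_dp(sizes, values, B):
    # Algorithm 3.1 sketch: maintain nondominated (size, value) pairs.
    pairs = [(0, 0)]
    for s, v in zip(sizes, values):
        merged = pairs + [(t + s, w + v) for t, w in pairs if t + s <= B]
        # sort by size, larger value first on ties, then keep only pairs
        # whose value strictly improves on everything seen so far
        merged.sort(key=lambda p: (p[0], -p[1]))
        pairs, best = [], -1
        for t, w in merged:
            if w > best:
                pairs.append((t, w))
                best = w
    return max(w for _, w in pairs)
```

For sizes (1, 2, 3), values (6, 10, 12), and B = 5, the optimum packs the last two items for a value of 22.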
Definition 3.3: A polynomial-time approximation scheme (PTAS) is a family of algorithms {Aϵ}, where there is an algorithm for each ϵ > 0, such that Aϵ is a (1 + ϵ)-approximation algorithm (for minimization problems) or a (1 − ϵ)-approximation algorithm (for maximization problems).

Note that the running time of the algorithm Aϵ is allowed to depend arbitrarily on 1/ϵ: this dependence could be exponential in 1/ϵ, or worse. We often focus attention on algorithms for which we can give a good bound on the dependence of the running time of Aϵ on 1/ϵ. This motivates the following definition.

Definition 3.4: A fully polynomial-time approximation scheme (FPAS, FPTAS) is an approximation scheme such that the running time of Aϵ is bounded by a polynomial in 1/ϵ.

We can now give a fully polynomial-time approximation scheme for the knapsack problem. Suppose that we measure value in (integer) multiples of µ (where we shall set µ below), and convert each value vi by rounding down to the nearest integer multiple of µ; more precisely, we set vi′ to be ⌊vi/µ⌋ for each item i. We can then run the dynamic programming algorithm of Algorithm 3.1 on the items with sizes si and values vi′, and output the optimal solution for the rounded data as a near-optimal solution for the true data. The main idea here is that we wish to show that the accuracy we lose in rounding is not so great, and yet the rounding enables us to have the algorithm run in polynomial time. Let us first do a rough estimate; if we used values ṽi = vi′µ instead of vi, then each value is inaccurate by at most µ, and so each feasible solution has its value changed by at most nµ. We want the error introduced to be at most ϵ times a lower bound on the optimal value (and so be sure that the true relative error is at most ϵ). Let M be the maximum value of an item; that is, M = maxi∈I vi. Then M is a lower bound
M ← maxi∈I vi
µ ← ϵM/n
vi′ ← ⌊vi/µ⌋ for all i ∈ I
Run Algorithm 3.1 on the knapsack instance with values vi′

Algorithm 3.2: An approximation scheme for the knapsack problem.

on OPT, since one possible solution is to pack the most valuable item in the knapsack by itself. Thus, it makes sense to set µ so that nµ = ϵM or, in other words, to set µ = ϵM/n. Note that with the modified values, V′ = ∑_{i=1}^n vi′ = ∑_{i=1}^n ⌊vi/(ϵM/n)⌋ = O(n²/ϵ). Thus, the running time of the algorithm is O(n · min(B, V′)) = O(n³/ϵ) and is bounded by a polynomial in 1/ϵ. We can now prove that the algorithm returns a solution whose value is at least (1 − ϵ) times the value of an optimal solution.

Theorem 3.5: Algorithm 3.2 is a fully polynomial-time approximation scheme for the knapsack problem.

Proof. We need to show that the algorithm returns a solution whose value is at least (1 − ϵ) times the value of an optimal solution. Let S be the set of items returned by the algorithm. Let O be an optimal set of items. Certainly M ≤ OPT, since one possible solution is to put the most valuable item in a knapsack by itself. Furthermore, by the definition of vi′, µvi′ ≤ vi ≤ µ(vi′ + 1), so that µvi′ ≥ vi − µ. Applying the definitions of the rounded data, along with the fact that S is an optimal solution for the values vi′, we can derive the following chain of inequalities:

  ∑_{i∈S} vi ≥ µ ∑_{i∈S} vi′
            ≥ µ ∑_{i∈O} vi′
            ≥ ∑_{i∈O} (vi − µ)
            ≥ ∑_{i∈O} vi − nµ
            = ∑_{i∈O} vi − ϵM
            ≥ OPT − ϵ OPT = (1 − ϵ) OPT.
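Algorithm 3.2 might be sketched as follows (our own illustrative code, not the book's). To check the guarantee we also carry the chosen item set through the dynamic program, which Algorithm 3.1 itself does not need to do.

```python
def knapsack_fptas(sizes, values, B, eps):
    # Round values down to multiples of mu = eps*M/n, then run the
    # nondominated-pair dynamic program on the rounded values.
    n = len(values)
    mu = eps * max(values) / n
    vr = [int(v // mu) for v in values]  # rounded values v'_i
    pairs = [(0, 0, [])]                 # (size, rounded value, items)
    for i in range(n):
        merged = pairs + [(t + sizes[i], w + vr[i], S + [i])
                          for t, w, S in pairs if t + sizes[i] <= B]
        merged.sort(key=lambda p: (p[0], -p[1]))
        pairs, best = [], -1
        for t, w, S in merged:
            if w > best:
                pairs.append((t, w, S))
                best = w
    S = max(pairs, key=lambda p: p[1])[2]
    # the true value of the returned set is at least (1 - eps) * OPT
    return S, sum(values[i] for i in S)
```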
3.2
Scheduling jobs on identical parallel machines
We return to the problem of scheduling a collection of n jobs on m identical parallel machines; in Section 2.3 we presented a result showing that by first sorting the jobs in order of nonincreasing processing requirement, and then using a list scheduling rule, we find a schedule of length guaranteed to be at most 4/3 times the optimum. In this section, we will show that this result contains the seeds of a polynomial-time approximation scheme: for any given value of ρ > 1, we give an algorithm that runs in polynomial time and finds a solution of objective function value at most ρ times the optimal value.
As in Section 2.3, we let the processing requirement of job j be pj, j = 1, . . . , n, and let Cmax denote the length (or makespan) of a given schedule with job completion times Cj, j = 1, . . . , n; the optimal value is denoted C*max. We shall assume that each processing requirement is a positive integer. The key idea of the analysis of the list scheduling rule was that its error can be upper bounded by the processing requirement of the last job to complete. The 4/3-approximation algorithm was based on this fact, combined with the observation that when each job's processing requirement is more than C*max/3, this natural greedy-type algorithm actually finds the optimal solution. We present an approximation scheme for this problem based on a similar principle: we focus on a specified subset of the longest jobs, and compute the optimal schedule for that subset; then we extend that partial schedule by using list scheduling on the remaining jobs. We will show that there is a trade-off between the number of long jobs and the quality of the solution found.

More precisely, let k be a fixed positive integer; we will derive a family of algorithms, and focus on the algorithm Ak among them. Suppose that we partition the job set into two parts: the long jobs and the short jobs, where a job ℓ is considered short if pℓ ≤ (1/(km)) ∑_{j=1}^n pj. Note that this implies that there are at most km long jobs. Enumerate all possible schedules for the long jobs, and choose one with the minimum makespan. Extend this schedule by using list scheduling for the short jobs; that is, given an arbitrary order of the short jobs, schedule these jobs in order, always assigning the next job to the machine currently least loaded.

Consider the running time of algorithm Ak. To specify a schedule for the long jobs, we simply indicate to which of the m machines each long job is assigned; thus, there are at most m^{km} distinct assignments (since the order of processing on each machine is unimportant).
If we focus on the special case of this problem in which the number of machines is a constant (say, 100, 1,000, or even 1,000,000), then this number is also a constant, not depending on the size of the input. Thus, we can check each schedule, and determine the optimal length schedule in polynomial time in this special case. As in the analysis of the local search algorithm in Section 2.3, we focus on the last job ℓ to finish. Recall that we derived the inequality

  Cmax ≤ pℓ + ∑_{j≠ℓ} pj/m;   (3.1)
the validity of this inequality relied only on the fact that each machine is busy up until the time that job ℓ starts. To analyze the algorithm that starts by finding the optimal schedule for the long jobs, we distinguish now between two cases. If the last job to finish (in the entire schedule), job ℓ, is a short job, then this job was scheduled by the list scheduling rule, and it follows that inequality (3.1) holds. Since job ℓ is short, and hence pℓ ≤ ∑_{j=1}^n pj/(mk), it also follows that

  Cmax ≤ ∑_{j=1}^n pj/(mk) + ∑_{j=1}^n pj/m ≤ (1 + 1/k) ∑_{j=1}^n pj/m ≤ (1 + 1/k) C*max.
If job ℓ is a long job, then the schedule delivered by the algorithm is optimal, since its makespan is equal to the length of the optimal schedule for just the long jobs, which is clearly no more than C*max for the entire input. The algorithm Ak can easily be implemented to run in polynomial time (treating m as a constant), and hence we have obtained the following theorem.

Theorem 3.6: The family of algorithms {Ak} is a polynomial-time approximation scheme for the problem of minimizing the makespan on any constant number of identical parallel machines.
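For a constant number of machines, the algorithm Ak can be sketched directly (our own code; the brute-force loop ranges over the at most m^{km} assignments of long jobs to machines, as in the text).

```python
from itertools import product

def ptas_makespan(jobs, m, k):
    # A job is short if its size is at most (1/(km)) * sum of all sizes.
    threshold = sum(jobs) / (k * m)
    long_jobs = [p for p in jobs if p > threshold]
    short_jobs = [p for p in jobs if p <= threshold]
    # Enumerate every assignment of long jobs to machines; keep the best.
    best = None
    for assign in product(range(m), repeat=len(long_jobs)):
        loads = [0] * m
        for p, i in zip(long_jobs, assign):
            loads[i] += p
        if best is None or max(loads) < max(best):
            best = loads
    loads = best if best is not None else [0] * m
    # List-schedule each short job onto the currently least-loaded machine.
    for p in short_jobs:
        i = min(range(m), key=lambda j: loads[j])
        loads[i] += p
    return max(loads)
```

The returned makespan is at most (1 + 1/k) times optimal; the running time is exponential in km, so this is practical only when m is a small constant.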
Of course, it is a significant limitation of this theorem that the number of machines is restricted to be a constant. In fact, it is not too hard to extend these techniques to obtain a polynomial-time approximation scheme even if the number of machines is allowed to be an input parameter (and hence the algorithm must also have running time polynomially bounded in the number of machines m). The key idea is that we didn't really need the schedule for the long jobs to be optimal. We used the optimality of the schedule for the long jobs only when the last job to finish was a long job. If we had found a schedule for the long jobs that had makespan at most (1 + 1/k) times the optimal value, then that clearly would have been sufficient. We will show how to obtain this near-optimal schedule for long jobs by rounding input sizes and dynamic programming, as we saw in the previous section on the knapsack problem.

It will be convenient to first set a target length T for the schedule. As before, we also fix a positive integer k; we will design a family of algorithms {Bk} where Bk either proves that no schedule of length T exists, or else finds a schedule of length at most (1 + 1/k)T. Later we will show how such a family of algorithms also implies the existence of a polynomial-time approximation scheme. We can assume that T ≥ (1/m) ∑_{j=1}^n pj, since otherwise no feasible schedule exists.

The algorithm Bk is quite simple. We again partition the jobs into long and short jobs, but in this case, we require that pj > T/k for job j to be considered long. We round down the processing requirement of each long job to its nearest multiple of T/k². We will determine in polynomial time whether or not there is a feasible schedule for these rounded long jobs that completes within time T. If there is such a schedule, we then interpret it as a schedule for the long jobs with their original processing requirements. If not, we conclude that no feasible schedule of length T exists for the original input.
Finally, we extend this schedule to include the short jobs by using the list scheduling algorithm for the short jobs.

We need to prove that the algorithm Bk always produces a schedule of length at most (1 + 1/k)T whenever there exists a schedule of length at most T. When the original input has a schedule of length T, then so does the reduced input consisting only of the rounded long jobs (which is why we rounded down the processing requirements); in this case, the algorithm does compute a schedule for the original input. Suppose that a schedule is found. It starts with a schedule of length at most T for the rounded long jobs. Let S be the set of jobs assigned by this schedule to one machine. Since each job in S is long, and hence has rounded size at least T/k, it follows that |S| ≤ k. Furthermore, for each job j ∈ S, the difference between its true processing requirement and its rounded one is at most T/k². Hence,

  ∑_{j∈S} pj ≤ T + k(T/k²) = (1 + 1/k)T.
Now consider the effect of assigning the short jobs: each job ℓ, in turn, is assigned to a machine for which the current load is smallest. Since ∑_{j=1}^{n} pj/m ≤ T, we also know that ∑_{j≠ℓ} pj/m < T. Since the average load assigned to a machine is less than T, there must exist a machine that is currently assigned jobs of total processing requirement less than T. So, when we choose the machine that currently has the lightest load, and then add job ℓ, this machine's new load is at most

pℓ + ∑_{j≠ℓ} pj/m < T/k + T = (1 + 1/k) T.
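The list scheduling rule used here (assign each job, in the given order, to the machine with the smallest current load) can be sketched with a heap of machine loads (helper name is ours):

```python
import heapq

def list_schedule(jobs, m):
    """Greedy list scheduling: assign each job, in order, to the machine
    with the smallest current load. Returns the final machine loads."""
    loads = [0.0] * m
    heap = [(0.0, i) for i in range(m)]  # (load, machine index)
    for p in jobs:
        load, i = heapq.heappop(heap)    # machine with the lightest load
        loads[i] = load + p
        heapq.heappush(heap, (loads[i], i))
    return loads
```

For example, scheduling jobs of sizes 3, 3, 3, 2 on two machines yields loads 6 and 5.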
Hence, the schedule produced by list scheduling will also be of length at most (1 + 1/k)T.

Electronic web edition. Copyright 2011 by David P. Williamson and David B. Shmoys. To be published by Cambridge University Press

To complete the description of the algorithm Bk, we must still show that we can use dynamic programming to decide if there is a schedule of length T for the rounded long jobs. Clearly if
there is a rounded long job of size greater than T, then there is no such schedule. Otherwise, we can describe an input by a k²-dimensional vector, where the ith component specifies the number of long jobs of rounded size equal to iT/k², for each i = 1, …, k². (In fact, we know that for i < k, there are no such jobs, since that would imply that their original processing requirement was less than T/k, and hence not long.) So there are at most n^{k²} distinct inputs — a polynomial number! How many distinct ways are there to feasibly assign long jobs to one machine? Each rounded long job still has processing time at least T/k. Hence, at most k jobs are assigned to one machine. Again, an assignment to one machine can be described by a k²-dimensional vector, where again the ith component specifies the number of long jobs of rounded size equal to iT/k² that are assigned to that machine. Consider the vector (s_1, s_2, …, s_{k²}); we shall call it a machine configuration if

∑_{i=1}^{k²} s_i · iT/k² ≤ T.
Let C denote the set of all machine configurations. Note that there are at most (k + 1)^{k²} distinct configurations, since each machine must process a number of rounded long jobs that is in the set {0, 1, …, k}. Since k is fixed, this means that there are a constant number of configurations. Let OPT(n_1, …, n_{k²}) denote the minimum number of (identical) machines sufficient to schedule this arbitrary input. This value is governed by the following recurrence relation, based on the idea that a schedule consists of assigning some jobs to one machine, and then using as few machines as possible for the rest:

OPT(n_1, …, n_{k²}) = 1 + min_{(s_1,…,s_{k²}) ∈ C} OPT(n_1 − s_1, …, n_{k²} − s_{k²}).
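The recurrence can be evaluated by memoization over input vectors, enumerating the configurations explicitly. The following is an illustrative sketch (function name and encoding are ours), with rounded sizes represented as multiples iT/k², so the configuration constraint ∑ s_i · iT/k² ≤ T becomes ∑ s_i · i ≤ k²:

```python
from functools import lru_cache
from itertools import product

def min_machines(counts, k):
    """counts[i-1] = number of rounded long jobs of size i*T/k^2.
    Evaluates OPT(n_1,...,n_{k^2}) by memoized search over machine
    configurations (s_1,...,s_{k^2}) with sum_i s_i * i <= k^2."""
    K = k * k
    # all nonempty configurations: each s_i in {0..k}, total size <= T
    configs = [s for s in product(range(k + 1), repeat=K)
               if any(s) and sum(si * (i + 1) for i, si in enumerate(s)) <= K]

    @lru_cache(maxsize=None)
    def opt(n):
        if not any(n):
            return 0
        best = float('inf')
        for s in configs:
            if all(si <= ni for si, ni in zip(s, n)):
                rest = tuple(ni - si for ni, si in zip(n, s))
                best = min(best, 1 + opt(rest))  # one more machine
        return best

    return opt(tuple(counts))
```

For k = 2 (so sizes are multiples of T/4), two jobs of size T/2 and one of size T need two machines: one machine takes both T/2 jobs, one takes the size-T job.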
This can be viewed as a table with a polynomial number of entries (one for each possible input type), and to compute each entry, we need to find the minimum over a constant number of previously computed values. The desired schedule exists exactly when the corresponding optimal value is at most m.

Finally, we need to show that we can convert the family of algorithms {Bk} into a polynomial-time approximation scheme. Fix the relative error ϵ > 0. We use a bisection search procedure to determine a suitable choice of the target value T (which is required as part of the input for each algorithm Bk). We know that the optimal makespan for our scheduling input is within the interval [L_0, U_0], where

L_0 = max{ ⌈∑_{j=1}^{n} pj/m⌉, max_{j=1,…,n} pj }

and

U_0 = ⌈∑_{j=1}^{n} pj/m⌉ + max_{j=1,…,n} pj.

(We strengthen the lower bound with the ⌈·⌉ by relying on the fact that the processing requirements are integral.) Throughout the bisection search, we maintain such an interval [L, U], with the algorithmic invariants (1) that L ≤ C*_max, and (2) that we can compute a schedule with makespan at most (1 + ϵ)U. This is clearly true initially: by the arguments of Section 2.3, L_0 is a lower bound on the length of an optimal schedule, and using a list scheduling algorithm we can compute a schedule of length at most U_0.
In each iteration where the current interval is [L, U], we set T = ⌊(L + U)/2⌋, and run the algorithm Bk, where k = ⌈1/ϵ⌉. If the algorithm Bk produces a schedule, then update U ← T; otherwise, update L ← T + 1. The bisection continues until L = U, at which point the algorithm outputs the schedule associated with U (of length at most (1 + ϵ)U). It is easy to see that the claimed invariant is maintained: (1) when we update the lower limit L, the algorithm Bk has just shown that no feasible schedule of length T exists, and by the integrality of the processing times, we know then that T + 1 is a valid lower bound; (2) when we update the upper limit U, the algorithm Bk has just produced a schedule of length at most (1 + ϵ)T, which is exactly what is required to justify this update. The difference between the upper and lower limits is initially at most max_{j=1,…,n} pj, and is halved in each iteration. Hence, after a polynomial number of iterations, the difference becomes less than 1 and, by integrality, is therefore 0. Since both invariants must still hold for this trivial interval [L, L], we know that C*_max ≥ L, and that the final schedule output by the algorithm is of length at most (1 + ϵ)L ≤ (1 + ϵ)C*_max. The algorithm is a (1 + ϵ)-approximation algorithm.

Theorem 3.7: There is a polynomial-time approximation scheme for the problem of minimizing the makespan when the number of identical parallel machines is part of the input.
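The bisection search in the proof above can be sketched as follows; here B_k is assumed to be a callable that returns a schedule of length at most (1 + 1/k)T, or None when no length-T schedule exists (names and interfaces are ours):

```python
import math

def bisection_ptas(p, m, eps, B_k):
    """Bisection search for a target length T, as in the proof above.
    B_k(p, m, T, k) returns a schedule of length at most (1 + 1/k)T,
    or None if no schedule of length T exists."""
    k = math.ceil(1 / eps)
    L = max(math.ceil(sum(p) / m), max(p))   # lower bound L0
    U = math.ceil(sum(p) / m) + max(p)       # upper bound U0
    best = None
    while L < U:
        T = (L + U) // 2
        sched = B_k(p, m, T, k)
        if sched is not None:
            best, U = sched, T               # invariant (2)
        else:
            L = T + 1                        # invariant (1), by integrality
    return best if best is not None else B_k(p, m, L, k)
```

With a stand-in oracle that succeeds exactly when T is at least the true optimum (5 for jobs 3, 3, 2 on two machines), the search converges to that value.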
Note that since we consider (k + 1)^{k²} configurations and k = ⌈1/ϵ⌉, the running time in the worst case is exponential in O(1/ϵ²). Thus, in this case, we did not obtain a fully polynomial-time approximation scheme (in contrast to the knapsack problem). This is for a fundamental reason. This scheduling problem is strongly NP-complete; that is, even if we require that the processing times be restricted to values at most q(n), a polynomial function of the number of jobs, this special case is still NP-complete. We claim that if a fully polynomial-time approximation scheme exists for this problem, it could be used to solve this special case in polynomial time, which would imply that P = NP. How can we use a fully polynomial-time approximation scheme to solve the special case with polynomially bounded processing times? Let P be the maximum processing time for an input satisfying this special structure. This implies that the optimal makespan is at most nP ≤ nq(n). If there were a fully polynomial-time approximation scheme {Ak}, suppose that we use the algorithm Ak that guarantees a solution with relative error at most 1/k, where k = ⌈2nq(n)⌉. This implies that the algorithm finds a solution of makespan at most
(1 + 1/k) C*_max ≤ C*_max + 1/2.
But since any feasible schedule is simply an assignment of jobs to machines, the makespan is clearly integer, and hence the algorithm must find the optimal assignment. We know that q(n) is a polynomial; the requirements of a fully polynomial-time approximation scheme imply that the running time must be bounded by a polynomial in k, and hence we have computed this optimal schedule in polynomial time. Therefore, the existence of such a scheme implies that P = NP. This result is one special case of a much more general result; we know that (with very mild assumptions on the nature of the objective function) for any optimization problem, if the problem is strongly NP-complete, then it does not have a fully polynomial-time approximation scheme. We give this as an exercise (Exercise 3.9).
3.3 The bin-packing problem
In the bin-packing problem, we are given n pieces (or items) with specified sizes a1, a2, …, an, such that 1 > a1 ≥ a2 ≥ ⋯ ≥ an > 0; we wish to pack the pieces into bins, where each bin can hold any subset of pieces of total size at most 1, so as to minimize the number of bins used.

The bin-packing problem is related to a decision problem called the partition problem. In the partition problem, we are given n positive integers b1, …, bn whose sum B = ∑_{i=1}^{n} bi is even, and we wish to know if we can partition the set of indices {1, …, n} into sets S and T such that ∑_{i∈S} bi = ∑_{i∈T} bi. The partition problem is well known to be NP-complete. Notice that we can reduce this problem to a bin-packing problem by setting ai = 2bi/B and checking whether we can pack all the pieces into two bins or not. This gives the following theorem.

Theorem 3.8: Unless P = NP, there cannot exist a ρ-approximation algorithm for the bin-packing problem for any ρ < 3/2.

However, consider the First-Fit-Decreasing algorithm, where the pieces are packed in order of nonincreasing size, and the next piece is always packed into the first bin in which it fits; that is, we first open bin 1, and we start bin k + 1 only when the current piece does not fit into any of the bins 1, …, k. If FFD(I) denotes the number of bins used by this algorithm on input I, and OPT(I) denotes the number of bins used in the optimal packing, then a celebrated classic result shows that FFD(I) ≤ (11/9) OPT(I) + 4 for any input I. Thus, significantly stronger results can be obtained by relaxing the notion of the performance guarantee to allow for small additive terms. In fact, it is completely consistent with our current understanding of complexity theory that there is an algorithm that always produces a packing with at most OPT(I) + 1 bins.
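A minimal sketch of First-Fit-Decreasing (with sizes given as floats in (0, 1); names are ours):

```python
def first_fit_decreasing(sizes):
    """First-Fit-Decreasing: consider pieces in nonincreasing size order,
    place each piece in the first bin it fits, opening a new bin only when
    it fits in none. Returns the number of bins used."""
    bins = []  # current total size in each open bin
    for a in sorted(sizes, reverse=True):
        for i, load in enumerate(bins):
            if load + a <= 1:
                bins[i] += a
                break
        else:
            bins.append(a)  # no open bin fits: open a new one
    return len(bins)
```

For instance, pieces of sizes 0.5, 0.5, 0.5, 0.25 fit into two bins. Dropping the initial sort gives the First-Fit algorithm analyzed below.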
Why is it that we have bothered to mention hardness results of the form "there does not exist a ρ-approximation algorithm unless P = NP" if such a result can be so easily circumvented? The reason is that for all of the weighted problems that we have discussed, any distinction between the two types of guarantees disappears; any algorithm guaranteed to produce a solution of value at most ρ OPT + c can be converted to a ρ-approximation algorithm. Each of these problems has a natural rescaling property: for any input I and any value κ, we can construct an essentially identical instance I′ such that the objective function value of any feasible solution is rescaled by κ. For example, for the scheduling problem of Section 3.2, one can accomplish this rescaling by simply multiplying each processing time pj by κ; for a combinatorial problem such as the unweighted vertex cover problem, one can consider an input with κ disjoint copies of the original input graph. Such rescaling makes it possible to blunt the effect of any small additive term c < κ in the guarantee, and make it effectively 0. Observe that the bin-packing problem does not have this rescaling property; there is no obvious way to "multiply" the instance in a way that does not blur the combinatorial structure of the original input. (Think about what happens if you construct a new input that contains two copies of each piece of I!) Thus, whenever we consider designing approximation algorithms for a new combinatorial optimization problem, it is important to consider first whether the problem has the rescaling property, since that will indicate what sort of performance guarantee one might hope for.

Although we will not prove the performance guarantee for the First-Fit-Decreasing algorithm for the bin-packing problem, we shall show that an exceedingly simple algorithm does perform reasonably well.
Consider the First-Fit algorithm, which works exactly as First-Fit-Decreasing, except that we don't first sort the pieces in nonincreasing size order. We can analyze its
performance in the following way. If we pair up bins 1 and 2, then 3 and 4, and so forth, then any such pair must have the property that the total piece size in them is at least 1: in fact, the first item placed in bin 2k is put there only because it did not fit in bin 2k − 1. Thus, if we used ℓ bins, then the total size of the pieces in the input, SIZE(I) = ∑_{i=1}^{n} ai, is at least ⌊ℓ/2⌋. However, it is clear that OPT(I) ≥ SIZE(I), and hence, the number of bins used by First-Fit is

FF(I) = ℓ ≤ 2 SIZE(I) + 1 ≤ 2 OPT(I) + 1.

Of course, this analysis did not use practically any information about the algorithm; we used only that there should not be two bins whose contents can be feasibly combined into one bin.

We shall present a family of polynomial-time approximation algorithms parameterized by ϵ > 0, where each algorithm has the performance guarantee of computing a packing with at most (1 + ϵ) OPT(I) + 1 bins for each input I. Throughout this discussion, we shall view ϵ as a positive constant. Note that this family of algorithms does not meet the definition of a polynomial-time approximation scheme because of the additive constant. This motivates the following definition.

Definition 3.9: An asymptotic polynomial-time approximation scheme (APTAS) is a family of algorithms {Aϵ} along with a constant c where there is an algorithm Aϵ for each ϵ > 0 such that Aϵ returns a solution of value at most (1 + ϵ) OPT + c for minimization problems.

One of the key ingredients of this asymptotic polynomial-time approximation scheme is the dynamic programming algorithm used in the approximation scheme for scheduling jobs on identical parallel machines, which was presented in Section 3.2. As stated earlier, that algorithm computed the minimum number of machines needed to assign jobs, so that each machine was assigned jobs of total processing requirement at most T. However, by rescaling each processing time by dividing by T, we have an input for the bin-packing problem.
The dynamic programming algorithm presented solves the bin-packing problem in polynomial time in the special case in which there are only a constant number of distinct piece sizes, and only a constant number of pieces can fit into one bin. Starting with an arbitrary input to the bin-packing problem, we first construct a simplified input of this special form, which we then solve by dynamic programming. The simplified input will also have the property that we can transform the resulting packing into a packing for the original input, without introducing too many additional bins.

As in the scheduling result of Section 3.2, the first key observation is that one may ignore small pieces of size less than any given threshold, and can analyze the effect of doing so in a relatively simple way.

Lemma 3.10: Any packing of all pieces of size greater than γ into ℓ bins can be extended to a packing for the entire input with at most max{ℓ, (1/(1 − γ)) SIZE(I) + 1} bins.

Proof. Suppose that one uses the First-Fit algorithm, starting with the given packing, to compute a packing that also includes these small pieces. If First-Fit never starts a new bin in packing the small pieces, then clearly the resulting packing has ℓ bins. If it does start a new bin, then each bin in the resulting packing (with the lone exception of the last bin started) must not have been able to fit one additional small piece. Let k + 1 denote the number of bins used in this latter case. In other words, each of the first k bins must have pieces totaling at least 1 − γ, and hence SIZE(I) ≥ (1 − γ)k. Equivalently, k ≤ SIZE(I)/(1 − γ), which completes the proof of the lemma.

Suppose that we were aiming to design an algorithm with a performance guarantee that is better than the one proved for First-Fit (which is truly a modest goal); in particular, we are
trying to design an algorithm with performance guarantee 1 + ϵ with ϵ < 1. If we apply Lemma 3.10 by setting γ = ϵ/2, then since 1/(1 − ϵ/2) ≤ 1 + ϵ, we see that the composite algorithm produces a packing with at most max{ℓ, (1 + ϵ) OPT(I) + 1} bins. The elimination of the small pieces from the input also enforces one of our requirements for the special case solvable by dynamic programming: if each piece has size greater than ϵ/2, then each bin must contain fewer than 2/ϵ pieces in total. We shall assume for the rest of the discussion that our input I does not have such small pieces to begin with.

The last element of the algorithm is a technique to reduce the number of distinct piece sizes, which is accomplished by a linear grouping scheme. This scheme works as follows, and is based on a parameter k, which will be set later. Group the pieces of the given input I as follows: the first group consists of the k largest pieces, the next group consists of the next k largest pieces, and so on, until all pieces have been placed in a group. The last group contains h pieces, where h ≤ k. The rounded instance I′ is constructed by discarding the first group, and for each other group, rounding the size of its pieces up to the size of its largest piece. An input I and its transformed version I′ are shown in Figure 3.1. We can prove the following lemma relating the optimal number of bins needed to pack I to the optimal number of bins needed to pack I′.

Lemma 3.11: Let I′ be the input obtained from an input I by applying linear grouping with group size k. Then

OPT(I′) ≤ OPT(I) ≤ OPT(I′) + k,

and furthermore, any packing of I′ can be used to generate a packing of I with at most k additional bins.

Proof. For the first inequality, observe that any packing of the input I yields a packing of the input I′: for I′, pack its k largest pieces wherever the k largest pieces of I were packed.
(There is a one-to-one correspondence between these two sets of k pieces, and each piece in the group from I′ is no larger than the piece to which it corresponds in I.) The second group of k largest pieces of I′ is packed wherever the second group of I was packed, and so forth. It is clear that this yields a feasible packing of I′, and the first inequality follows.

To obtain the second inequality, we show how to take a packing of I′ and use it to obtain a packing of I. This is also quite simple. To pack I, we pack each of the k largest pieces in its own bin. Now, the next group of k largest pieces in I can be packed wherever the largest group of pieces in I′ is packed. Again, to do this, we use the fact that one can construct a one-to-one mapping between these two sets of k pieces, where each piece in I is no larger than its corresponding piece in I′. The fact that this last inequality is proved algorithmically is important, since this means that if we do obtain a good packing for I′, then we can use it to obtain an "almost-as-good" packing for I.

It is relatively straightforward to complete the derivation of an asymptotic polynomial-time approximation scheme for the bin-packing problem. If we consider the input I′, then the number of distinct piece sizes is at most n/k, where n is the number of pieces in I. Since there are no small pieces in I, SIZE(I) ≥ ϵn/2. If we set k = ⌊ϵ SIZE(I)⌋, then we see that n/k ≤ 2n/(ϵ SIZE(I)) ≤ 4/ϵ², where we crudely bound ⌊α⌋ ≥ α/2 when α ≥ 1; we can assume ϵ SIZE(I) ≥ 1, since otherwise there are at most (1/ϵ)/(ϵ/2) = 2/ϵ² (large) pieces and we could apply the dynamic programming algorithm to solve the input optimally without applying the linear grouping scheme.
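The linear grouping scheme described above can be sketched as follows (names are ours; the first group is discarded, as in the construction of I′, and the k largest pieces are handled separately with one bin each):

```python
def linear_grouping(sizes, k):
    """Linear grouping with group size k: sort pieces in nonincreasing
    order, discard the k largest, and round every remaining piece UP to
    the largest size in its group of k. Returns the rounded instance I'."""
    s = sorted(sizes, reverse=True)
    rounded = []
    for g in range(k, len(s), k):            # skip the first (discarded) group
        group = s[g:g + k]
        rounded += [group[0]] * len(group)   # round up to the group maximum
    return rounded
```

For example, with k = 2 the pieces 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3 become 0.7, 0.7, 0.5, 0.5, 0.3: the two largest are dropped, and each remaining group is rounded up to its maximum.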
Consequently, after performing the linear grouping, we are left with a bin-packing input in which there are a constant number of distinct piece sizes, and only a constant number of pieces can fit in each bin; hence, we can obtain an optimal packing for the
Figure 3.1: An input before and after linear grouping with group size k = 4.
input I′ by the dynamic programming algorithm of Section 3.2. This packing for I′ can then be used to get a packing for the ungrouped input, and then be extended to include all of the small pieces with at most (1 + ϵ) OPT(I) + 1 bins, as we show next.

Theorem 3.12: For any ϵ > 0, there is a polynomial-time algorithm for the bin-packing problem that computes a solution with at most (1 + ϵ) OPT(I) + 1 bins; that is, there is an APTAS for the bin-packing problem.

Proof. As we discussed earlier, the algorithm will open max{ℓ, (1 + ϵ) OPT(I) + 1} bins, where ℓ is the number of bins used to pack the large pieces. By Lemma 3.11, we use at most OPT(I′) + k ≤ OPT(I) + k bins to pack these pieces, where k = ⌊ϵ SIZE(I)⌋, so that

ℓ ≤ OPT(I) + ϵ SIZE(I) ≤ (1 + ϵ) OPT(I),

which completes the proof.

It is possible to improve both the running time of this general approach and its performance guarantee; we shall return to this problem in Section 4.6.
Exercises

3.1 Consider the following greedy algorithm for the knapsack problem. We initially sort all the items in order of nonincreasing ratio of value to size, so that v1/s1 ≥ v2/s2 ≥ ⋯ ≥ vn/sn. Let i* be the index of an item of maximum value, so that v_{i*} = max_{i∈I} vi. The greedy algorithm puts items in the knapsack in index order until the next item no longer fits; that is, it finds k such that ∑_{i=1}^{k} si ≤ B but ∑_{i=1}^{k+1} si > B. The algorithm returns either {1, …, k} or {i*}, whichever has greater value. Prove that this algorithm is a 1/2-approximation algorithm for the knapsack problem.

3.2 One key element in the construction of the fully polynomial-time approximation scheme for the knapsack problem was the ability to compute lower and upper bounds for the optimal value that are within a factor of n of each other (using the maximum-value piece that fits in the knapsack to get the lower bound). Use the result of the previous exercise to derive a refined approximation scheme that eliminates one factor of n in the running time of the algorithm.

3.3 Consider the following scheduling problem: there are n jobs to be scheduled on a single machine, where each job j has a processing time pj, a weight wj, and a due date dj, j = 1, …, n. The objective is to schedule the jobs so as to maximize the total weight of the jobs that complete by their due date. First prove that there always exists an optimal schedule in which all on-time jobs complete before all late jobs, and the on-time jobs complete in an earliest-due-date order; use this structural result to show how to solve this problem using dynamic programming in O(nW) time, where W = ∑_j wj. Now use this result to derive a fully polynomial-time approximation scheme.

3.4 Instead of maximizing the total weight of the on-time set of jobs, as in the previous exercise, one could equivalently minimize the total weight of the set of jobs that complete late.
This equivalence is only valid when one thinks of optimization, not approximation: if only one job needs to be late, then our approximation for the minimization problem can make only a small error, whereas for the maximization problem the situation is quite different. Or even worse, suppose that all jobs can complete on time. The good news is that there is an O(n²) algorithm to solve the following problem: minimize the weight of the maximum-weight job that completes late. First devise this algorithm, and then show
how to use it to derive a fully polynomial-time approximation scheme for the problem of minimizing the total weight of the jobs that complete late.

3.5 Consider the following scheduling problem: there are n jobs to be scheduled on a constant number m of machines, where each job j has a processing time pj and a weight wj, j = 1, …, n; once started, each job must be processed to completion on that machine without interruption. For a given schedule, let Cj denote the completion time of job j, j = 1, …, n, and the objective is to minimize ∑_j wj Cj over all possible schedules. First show that there exists an optimal schedule where, for each machine, the jobs are scheduled in nondecreasing pj/wj order. Then use this property to derive a dynamic programming algorithm that can then be used to obtain a fully polynomial-time approximation scheme.

3.6 Suppose we are given a directed acyclic graph with specified source node s and sink node t, and each arc e has an associated cost ce and length ℓe. We are also given a length bound L. Give a fully polynomial-time approximation scheme for the problem of finding a minimum-cost path from s to t of total length at most L.

3.7 In the proof of Theorem 3.7, we rounded down each processing time to the nearest multiple of T/k². Suppose that instead of constructing identical-length intervals that get rounded to the same value, we have intervals that are geometrically increasing in length. Design an alternative polynomial-time approximation scheme, where the term in the running time that is O(n^{k²}) is reduced to O(n^{k log k}).

3.8 The makespan minimization problem discussed in Section 3.2 can be viewed as minimizing the L∞ norm of the machine loads. One can instead minimize the L2 norm of the machine loads or, equivalently, the sum of the squares of the machine loads.
To extend the framework discussed for the makespan objective, first give a dynamic programming algorithm that solves, in polynomial time, the special case in which there are only a constant number of job lengths, and each job length is at least a constant fraction of the average machine load (that is, of ∑_j pj/m, where m is the number of machines). Then use this to derive a polynomial-time approximation scheme. (One additional idea might be useful: for some notion of a "small" job, clump the small jobs into relatively "small" clumps of jobs that are then assigned to machines together.)

3.9 Suppose we have a strongly NP-hard minimization problem Π with the following two properties. First, any feasible solution has a nonnegative, integral objective function value. Second, there is a polynomial p such that if it takes n bits to encode the input instance I in unary, then OPT(I) ≤ p(n). Prove that if there is a fully polynomial-time approximation scheme for Π, then there is a pseudopolynomial algorithm for Π. Since there is no pseudopolynomial algorithm for a strongly NP-hard problem unless P = NP, conclude that this would imply P = NP.
Chapter Notes

The approach to describing the dynamic program for the knapsack problem in Section 3.1 is due to Lawler [210], as is Exercise 3.2. The approximation scheme is due to Ibarra and Kim [173]. Exercises 3.3 and 3.5 are due to Sahni [258]. Gens and Levner [129] proved the result stated in Exercise 3.4. Throughout the 1970s, much of the work in this area was being done in parallel in both the United States and the Soviet Union (although not recognized at the time);
Gens and Levner [128] give a detailed comparison of the evolution of this area at that time. Exercise 3.6 is due to Hassin [157]. The approximation scheme for scheduling parallel machines with a constant number of machines given in Section 3.2 is due to Graham [143]. The approximation scheme for the case in which the number of machines is part of the input is due to Hochbaum and Shmoys [165]. Exercise 3.8 is due to Alon, Azar, Woeginger, and Yadid [6]. Fernandez de la Vega and Lueker [111] gave the first approximation scheme for the bin-packing problem. The approximation scheme given here is a variant of theirs. The analysis of the First-Fit-Decreasing algorithm, showing it to be an 11/9-approximation algorithm, is in the Ph.D. thesis of Johnson [178]. The analysis of First-Fit we give is also in this thesis, although we use the analysis there of another algorithm called Next-Fit. Exercise 3.9 is due to Garey and Johnson [122].
Chapter 4
Deterministic rounding of linear programs
In the introduction, we said that one of the principal theses of this book is the central role played by linear programming in the design and analysis of approximation algorithms. In the previous two chapters, we have not used linear programming at all, but starting with this chapter we will be using it extensively.

In this chapter, we will look at one of the most straightforward uses of linear programming. Given an integer programming formulation of a problem, we can relax it to a linear program. We solve the linear program to obtain a fractional solution, then round it to an integer solution via some process. The easiest way to round the fractional solution to an integer solution in which all values are 0 or 1 is to take variables with relatively large values and round them up to 1, while rounding all other variables down to 0. We saw this technique in Section 1.3 applied to the set cover problem, in which we chose sets whose corresponding linear programming variables were sufficiently large. We will see another application of this technique when we introduce the prize-collecting Steiner tree problem in Section 4.4. We will revisit this problem several times in the course of the book. For this problem we give an integer programming relaxation in which there are 0-1 variables for both nodes and edges. We round up the node variables that are sufficiently large in order to decide which nodes should be spanned in a solution; we then find a tree spanning these nodes.

In Sections 4.1 and 4.2, we consider a single-machine scheduling problem, and see another way of rounding fractional solutions to integer solutions. We will see that by solving a relaxation, we are able to get information on how the jobs might be ordered. Then we construct a solution in which we schedule jobs in the same order as given by the relaxation, and we are able to show that this leads to a good solution.
In the first section, we use a relaxation of the problem to a preemptive scheduling problem, rather than a linear program. The analysis of this algorithm gives some ideas for an algorithm in the next section that uses a linear programming relaxation for a scheduling problem with a more general objective function.

In Section 4.5, we introduce the uncapacitated facility location problem, another problem that we will revisit several times in the course of this book. In this problem we have clients and facilities; we must choose a subset of facilities to open, and every client must be assigned to some open facility. Here the rounding procedure is much more complex than simply choosing
large values and rounding up. The fractional values indicate to us which facilities we might consider opening and which assignments we might think about making. Finally, in Section 4.6, we revisit the bin-packing problem. We give an integer programming formulation in which we have an integer variable to indicate how many bins should be packed in a certain way. Then we round variables down, in the sense that we take the integer part of each variable and pack bins as indicated, then take all the pieces corresponding to the fractional part and iterate.

In several cases in this chapter and later in the book, we will need to solve very large linear programs in which the number of constraints is exponential in the input size of the problem. These linear programs can be solved in polynomial time using an algorithm called the ellipsoid method. We introduce the ellipsoid method in Section 4.3, and discuss the cases in which it can be used to solve exponentially large linear programs in polynomial time.

Figure 4.1: An example of a nonpreemptive schedule, in which p1 = 2, r1 = 0, p2 = 1, r2 = 4, p3 = 4, r3 = 1. In this schedule, C1 = 2, C2 = 5, and C3 = 9, so that ∑_j Cj = 2 + 5 + 9 = 16.
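The schedule of Figure 4.1 can be reproduced with a small sketch that computes nonpreemptive completion times for a given processing order (names are ours; job data taken from the figure caption):

```python
def completion_times(order, p, r):
    """Completion time of each job when processed nonpreemptively in the
    given order: each job starts as soon as the machine is free and the
    job has been released."""
    t, C = 0, {}
    for j in order:
        t = max(t, r[j]) + p[j]
        C[j] = t
    return C
```

Processing the jobs in the order 1, 2, 3 with p1 = 2, r1 = 0, p2 = 1, r2 = 4, p3 = 4, r3 = 1 gives C1 = 2, C2 = 5, C3 = 9, matching the figure.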
4.1 Minimizing the sum of completion times on a single machine
In this section, we consider the problem of scheduling jobs on a single machine so as to minimize the sum of the job completion times. In particular, we are given as input n jobs, each of which has a processing time p_j and release date r_j. The values p_j and r_j are integers such that r_j ≥ 0 and p_j > 0. We must construct a schedule for these jobs on a single machine such that at most one job is processed at each point in time, no job is processed before its release date, and each job is processed nonpreemptively; that is, once a job begins to be processed, it must be processed completely before any other job begins its processing. See Figure 4.1 for an example. If C_j denotes the time at which job j finishes processing, then the goal is to find the schedule that minimizes ∑_{j=1}^n C_j. Observe that this objective is equivalent to the problem of minimizing the average completion time, since the average completion time just rescales the objective function of each feasible solution by a factor of 1/n. Below we will show that we can convert any preemptive schedule into a nonpreemptive schedule in such a way that the completion time of each job at most doubles. In a preemptive schedule, we can still schedule only one job at a time on the machine, but we do not need to complete each job's required processing consecutively; we can interrupt the processing of a job with the processing of other jobs. In the preemptive version of this problem, the goal is to find a preemptive schedule that minimizes the sum of the completion times. An optimal solution to the preemptive version of the scheduling problem can be found in polynomial time via the shortest remaining processing time (SRPT) rule, which is as follows.

Electronic web edition. Copyright 2011 by David P. Williamson and David B. Shmoys. To be published by Cambridge University Press
Figure 4.2: An example of a preemptive schedule created using the SRPT rule, with the same instance as in Figure 4.1. In this schedule, C_1 = 2, C_2 = 5, and C_3 = 7, so that ∑_j C_j = 2 + 5 + 7 = 14. Note that we do not interrupt the processing of job 1 when job 3 arrives at time 1, since job 1 has less remaining processing time, but we do interrupt the processing of job 3 when job 2 arrives.

We start at time 0 and schedule the job with the smallest amount of remaining processing time, as long as the job is past its release date and we have not already completed it. We schedule it until either it is completed or a new job is released. We then iterate. See Figure 4.2 for an example. Let C_j^P be the completion time of job j in an optimal preemptive schedule, and let OPT be the sum of completion times in an optimal nonpreemptive schedule. We have the following observation.

Observation 4.1:

∑_{j=1}^n C_j^P ≤ OPT.
Proof. This immediately follows from the fact that an optimal nonpreemptive schedule is feasible for the preemptive scheduling problem.

Now consider the following scheduling algorithm. Find an optimal preemptive schedule using SRPT. We schedule the jobs nonpreemptively in the same order that they complete in this preemptive schedule. To be more precise, suppose that the jobs are indexed such that C_1^P ≤ C_2^P ≤ · · · ≤ C_n^P. Then we schedule job 1 from its release date r_1 to time r_1 + p_1. We schedule job 2 to start as soon as possible after job 1; that is, we schedule it from max(r_1 + p_1, r_2) to max(r_1 + p_1, r_2) + p_2. The remaining jobs are scheduled analogously. If we let C_j^N denote the completion time of job j in the nonpreemptive schedule that we construct, for j = 1, . . . , n, then job j is processed from max{C_{j−1}^N, r_j} to max{C_{j−1}^N, r_j} + p_j. We show below that scheduling nonpreemptively in this way does not delay the jobs by too much.

Lemma 4.2: For each job j = 1, . . . , n, C_j^N ≤ 2C_j^P.

Proof. Let us first derive some easy lower bounds on C_j^P. Since we know that j is processed in the optimal preemptive schedule after jobs 1, . . . , j − 1, we have

C_j^P ≥ max_{k=1,...,j} r_k   and   C_j^P ≥ ∑_{k=1}^j p_k.
By construction it is also the case that C_j^N ≥ max_{k=1,...,j} r_k.
Consider the nonpreemptive schedule constructed by the algorithm, and focus on any period of time that the machine is idle; idle time occurs only when the next job to be processed has not yet been released. Consequently, in the time interval from max_{k=1,...,j} r_k to C_j^N, there cannot be any point in time at which the machine is idle. Therefore, this interval can be of length at most ∑_{k=1}^j p_k, since otherwise we would run out of jobs to process. This implies that

C_j^N ≤ max_{k=1,...,j} r_k + ∑_{k=1}^j p_k ≤ 2C_j^P,
where the last inequality follows from the two lower bounds on C_j^P derived above.

This leads easily to the following theorem.

Theorem 4.3: Scheduling in order of the completion times of an optimal preemptive schedule is a 2-approximation algorithm for scheduling a single machine with release dates to minimize the sum of completion times.

Proof. We have that

∑_{j=1}^n C_j^N ≤ 2 ∑_{j=1}^n C_j^P ≤ 2 OPT,

where the first inequality follows by Lemma 4.2 and the second by Observation 4.1.
4.2 Minimizing the weighted sum of completion times on a single machine
We now consider a generalization of the problem from the previous section. In this generalization, each job has a weight w_j ≥ 0, and our goal is to minimize the weighted sum of completion times. If C_j denotes the time at which job j finishes processing, then the goal is to find a schedule that minimizes ∑_{j=1}^n w_j C_j. We will call the problem of the previous section the unweighted case, and the problem in this section the weighted case. Unlike the unweighted case, it is NP-hard to find an optimal schedule for the preemptive version of the weighted case. Although the algorithm and analysis of the previous section give us a way to round any preemptive schedule to a nonpreemptive one whose weighted sum of completion times is at most twice as large, we cannot use the same technique of finding a lower bound on the cost of the optimal nonpreemptive schedule by finding an optimal preemptive schedule. Nevertheless, we can still get a constant approximation algorithm for this problem by using some of the ideas from the previous section. To obtain the 2-approximation algorithm in the previous section, we used that C_j^N ≤ 2C_j^P; if we look at the proof of Lemma 4.2, to prove this inequality we used only that the completion times C_j^P satisfied C_j^P ≥ max_{k=1,...,j} r_k and C_j^P ≥ ∑_{k=1}^j p_k (assuming that jobs are indexed such that C_1^P ≤ C_2^P ≤ · · · ≤ C_n^P). Furthermore, in order to obtain an approximation algorithm, we needed that ∑_{j=1}^n C_j^P ≤ OPT. We will show below that we can give a linear programming relaxation of the problem with variables C_j such
that these inequalities hold within a constant factor, which in turn will lead to a constant factor approximation algorithm for the problem.

To construct our linear programming relaxation, we will let the variable C_j denote the completion time of job j. Then our objective function is clear: we want to minimize ∑_{j=1}^n w_j C_j. We now need to give constraints. The first set of constraints is easy: for each job j = 1, . . . , n, job j cannot complete before it is released and processed, so that C_j ≥ r_j + p_j. In order to introduce the second set of constraints, consider some set S ⊆ {1, . . . , n} of jobs, and consider the sum ∑_{j∈S} p_j C_j. This sum is minimized when all the jobs in S have a release date of zero and all the jobs in S finish first in the schedule. Assuming these two conditions hold, any completion time C_j for j ∈ S is equal to p_j plus the sum of the processing times of the jobs in S that precede j in the schedule. Then in the product p_j C_j, p_j multiplies itself and the processing times of all jobs in S preceding j in the schedule. The sum ∑_{j∈S} p_j C_j must therefore contain the term p_j p_k for every pair j, k ∈ S, since either k precedes j or j precedes k in the schedule. Thus,

∑_{j∈S} p_j C_j = ∑_{j,k∈S: j≤k} p_j p_k = (1/2) (∑_{j∈S} p_j)^2 + (1/2) ∑_{j∈S} p_j^2 ≥ (1/2) (∑_{j∈S} p_j)^2.
Simplifying notation somewhat, let N = {1, . . . , n}, and let p(S) = ∑_{j∈S} p_j, so that the inequality above becomes ∑_{j∈S} p_j C_j ≥ (1/2) p(S)^2. As we said above, the sum ∑_{j∈S} p_j C_j can only be greater if the jobs in S have a release date greater than zero or do not finish first in the schedule, so the inequality must hold unconditionally for any S ⊆ N. Thus, our second set of constraints is

∑_{j∈S} p_j C_j ≥ (1/2) p(S)^2   for each S ⊆ N.

This gives our linear programming relaxation for the scheduling problem:

minimize ∑_{j=1}^n w_j C_j                                        (4.1)
subject to C_j ≥ r_j + p_j,                   ∀j ∈ N,             (4.2)
           ∑_{j∈S} p_j C_j ≥ (1/2) p(S)^2,    ∀S ⊆ N.             (4.3)
By the arguments above, this LP is a relaxation of the problem, so that for an optimal LP solution C^*, ∑_{j=1}^n w_j C_j^* ≤ OPT, where OPT denotes the value of the optimal solution to the problem. There are an exponential number of the second type of constraint (4.3), but we will show in Section 4.3 that we can use an algorithm called the ellipsoid method to solve this linear program in polynomial time. Let C^* be an optimal solution for the relaxation that we obtain in polynomial time. Our algorithm is then almost the same as in the previous section: we schedule the jobs nonpreemptively in the same order as the completion times of C^*. That is, suppose that the jobs are reindexed so that C_1^* ≤ C_2^* ≤ · · · ≤ C_n^*. Then, as in the previous section, we schedule job 1 from its release date r_1 to time r_1 + p_1. We schedule job 2 to start as soon as possible after job 1; that is, we schedule it from max(r_1 + p_1, r_2) to max(r_1 + p_1, r_2) + p_2. The remaining jobs are scheduled similarly. We claim that this gives a 3-approximation algorithm for the problem.
Theorem 4.4: Scheduling in order of the completion times of C^* is a 3-approximation algorithm for scheduling jobs on a single machine with release dates to minimize the sum of weighted completion times.

Proof. Again, assume that the jobs are reindexed so that C_1^* ≤ C_2^* ≤ · · · ≤ C_n^*. Let C_j^N be the completion time of job j in the schedule we construct. We will show that C_j^N ≤ 3C_j^* for each j = 1, . . . , n. Then we have that ∑_{j=1}^n w_j C_j^N ≤ 3 ∑_{j=1}^n w_j C_j^* ≤ 3 OPT, which gives the desired result.

As in the proof of Lemma 4.2, there cannot be any idle time between max_{k=1,...,j} r_k and C_j^N, and therefore it must be the case that C_j^N ≤ max_{k=1,...,j} r_k + ∑_{k=1}^j p_k. Let ℓ ∈ {1, . . . , j} be the index of a job that maximizes max_{k=1,...,j} r_k, so that r_ℓ = max_{k=1,...,j} r_k. By the indexing of the jobs, C_j^* ≥ C_ℓ^*, and C_ℓ^* ≥ r_ℓ by the LP constraint (4.2); thus, C_j^* ≥ max_{k=1,...,j} r_k. Let [j] denote the set {1, . . . , j}. We will argue below that C_j^* ≥ (1/2) p([j]), and from these simple facts it follows that

C_j^N ≤ p([j]) + max_{k=1,...,j} r_k ≤ 2C_j^* + C_j^* = 3C_j^*.

Let S = [j]. From the fact that C^* is a feasible LP solution, we know that

∑_{k∈S} p_k C_k^* ≥ (1/2) p(S)^2.

However, by our relabeling, C_j^* ≥ · · · ≥ C_1^*, and hence

C_j^* ∑_{k∈S} p_k = C_j^* · p(S) ≥ ∑_{k∈S} p_k C_k^*.

By combining these two inequalities and rewriting, we see that

C_j^* · p(S) ≥ (1/2) p(S)^2.

Dividing both sides by p(S) shows that C_j^* ≥ (1/2) p(S) = (1/2) p([j]).

In Section 5.9, we will show how randomization can be used to obtain a 2-approximation algorithm for this problem.
4.3 Solving large linear programs in polynomial time via the ellipsoid method
We now turn to the question of how to solve the linear program (4.1) in polynomial time. The most popular and practical algorithm for solving linear programs is known as the simplex method. Although the simplex method is quite fast in practice, there is no known variant of the algorithm that runs in polynomial time. Interior-point methods are a class of algorithms for solving linear programs; while typically not as fast or as popular as the simplex method, interior-point methods do solve linear programs in polynomial time. However, even this is not sufficient for solving the linear program above, because the size of the linear program is exponential in the size of the input scheduling instance. Therefore, we will use a linear programming algorithm called the ellipsoid method. Because we will use this technique frequently, we will discuss the general technique before turning to how to solve our particular linear program.
Suppose we have the following general linear program:

minimize ∑_{j=1}^n d_j x_j                                   (4.4)
subject to ∑_{j=1}^n a_{ij} x_j ≥ b_i,    i = 1, . . . , m,
           x_j ≥ 0,                        ∀j.
Suppose that we can give a bound ϕ on the number of bits needed to encode any inequality ∑_{j=1}^n a_{ij} x_j ≥ b_i. Then the ellipsoid method for linear programming allows us to find an optimal solution to the linear program in time polynomial in n (the number of variables) and ϕ, given a polynomial-time separation oracle (which we will define momentarily). It is sometimes desirable to obtain an optimal solution that has the additional property of being a basic solution; we do not define basic solutions here, but the ellipsoid method will return such basic optimal solutions (see Chapter 11 or Appendix A for a definition). Note that this running time does not depend on m, the number of constraints of the linear program. Thus, as in the case of the previous linear program for the single-machine scheduling problem, we can solve linear programs with exponentially many constraints in polynomial time, given that we have a polynomial-time separation oracle. A separation oracle takes as input a supposedly feasible solution x to the linear program, and either verifies that x is indeed a feasible solution or, if it is infeasible, produces a constraint that is violated by x. That is, if it is not the case that ∑_{j=1}^n a_{ij} x_j ≥ b_i for each i = 1, . . . , m, then the separation oracle returns some constraint i such that ∑_{j=1}^n a_{ij} x_j < b_i. In the notes at the end of the chapter, we sketch how a polynomial-time separation oracle leads to a polynomial-time algorithm for solving LPs with exponentially many constraints. It is truly remarkable that such LPs are efficiently solvable. Here, however, efficiency is a relative term: the ellipsoid method is not a practical algorithm. For exponentially large LPs that we solve via the ellipsoid method, it is sometimes the case that the LP can be rewritten as a polynomially sized LP, but it is more convenient to discuss the larger LP. We will indicate when this is the case.
Even if there is no known way of rewriting the exponentially sized LP, one can heuristically find an optimal solution to the LP by repeatedly using any LP algorithm (typically the simplex method) on a small subset of the constraints and using the separation oracle to check whether the current solution is feasible. If it is feasible, then we have solved the LP; if not, we add the violated constraints to the LP and re-solve. Practical experience shows that this approach is much more efficient than can be theoretically justified, and it should be in the algorithm designer's toolkit.

We now turn to the scheduling problem at hand, and give a polynomial-time separation oracle for the constraints (4.3). Given a solution C, let us reindex the variables so that C_1 ≤ C_2 ≤ · · · ≤ C_n. Let S_1 = {1}, S_2 = {1, 2}, . . . , S_n = {1, . . . , n}. We claim that it is sufficient to check whether the constraints are violated for the n sets S_1, . . . , S_n. If any of these n constraints is violated, then we return that set as a violated constraint. If not, we show below that all constraints are satisfied.

Lemma 4.5: Given variables C_j, if constraints (4.3) are satisfied for the n sets S_1, . . . , S_n, then they are satisfied for all S ⊆ N.

Proof. Let S be a set whose constraint is not satisfied; that is, ∑_{j∈S} p_j C_j < (1/2) p(S)^2. We will show
that then there must be some set S_i that is also not satisfied. We do this by considering changes to S that decrease the difference ∑_{j∈S} p_j C_j − (1/2) p(S)^2. Any such change will result in another set S′ that also does not satisfy the constraint. Note that removing a job k from S decreases this difference if

−p_k C_k + p_k p(S − k) + (1/2) p_k^2 < 0,

or if C_k > p(S − k) + (1/2) p_k. Adding a job k to S decreases this difference if

p_k C_k − p_k p(S) − (1/2) p_k^2 < 0,

or if C_k < p(S) + (1/2) p_k. Now let ℓ be the highest indexed job in S. We remove ℓ from S if C_ℓ > p(S − ℓ) + (1/2) p_ℓ; by the reasoning above, the resulting set S − ℓ also does not satisfy the corresponding constraint (4.3). We continue to remove the highest indexed job of the resulting set until finally we have a set S′ such that its highest indexed job ℓ has C_ℓ ≤ p(S′ − ℓ) + (1/2) p_ℓ < p(S′) (using p_ℓ > 0). Now suppose S′ ≠ S_ℓ = {1, . . . , ℓ}. Then there is some k < ℓ such that k ∉ S′. Then since C_k ≤ C_ℓ < p(S′) < p(S′) + (1/2) p_k, adding k to S′ can only decrease the difference. Thus, we can add all k < ℓ to S′, and the resulting set S_ℓ will also not satisfy constraint (4.3).
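Since Lemma 4.5 reduces the check to the n prefix sets, the separation oracle for constraints (4.2) and (4.3) can be sketched as follows (the function name and return convention are our own illustration, not from the text):

```python
def separation_oracle(C, p, r):
    """Check constraints (4.2), then constraints (4.3) on the prefix sets
    S_1, ..., S_n of jobs sorted by C_j (sufficient by Lemma 4.5).

    C, p, r: lists of completion times, processing times, and release dates.
    Returns None if C is feasible, else a description of a violated constraint.
    """
    n = len(C)
    for j in range(n):
        if C[j] < r[j] + p[j]:                     # constraint (4.2)
            return ("release", j)
    order = sorted(range(n), key=lambda j: C[j])   # reindex: C_1 <= ... <= C_n
    lhs, p_S = 0.0, 0.0                            # sum of p_j C_j over S_i; p(S_i)
    for i, j in enumerate(order):
        lhs += p[j] * C[j]
        p_S += p[j]
        if lhs < 0.5 * p_S * p_S:                  # constraint (4.3) for S_i
            return ("prefix", order[: i + 1])
    return None

# The nonpreemptive completion times [2, 5, 9] of Figure 4.1 are feasible,
# while C_j = 1 for three unit jobs released at time 0 violates (4.3) on S_3,
# since 1 + 1 + 1 < (1/2) * 3^2:
assert separation_oracle([2, 5, 9], [2, 1, 4], [0, 4, 1]) is None
assert separation_oracle([1, 1, 1], [1, 1, 1], [0, 0, 0]) == ("prefix", [0, 1, 2])
```

Each violated prefix set can then be handed back to the LP solver as a cutting plane, in the style of the heuristic described above.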
4.4 The prize-collecting Steiner tree problem
We now turn to a variation of the Steiner tree problem introduced in Exercise 2.5. This variation is called the prize-collecting Steiner tree problem. As input, we are given an undirected graph G = (V, E), an edge cost c_e ≥ 0 for each e ∈ E, a selected root vertex r ∈ V, and a penalty π_i ≥ 0 for each i ∈ V. The goal is to find a tree T that contains the root vertex r so as to minimize the cost of the edges in the tree plus the penalties of all vertices not in the tree. In other words, if V(T) is the set of vertices in the tree T, the goal is to minimize ∑_{e∈T} c_e + ∑_{i∈V−V(T)} π_i. The Steiner tree problem of Exercise 2.5 is the special case of this problem in which for every i ∈ V either π_i = ∞ (and thus i must be included in the tree) or π_i = 0 (and thus i may be omitted from the tree with no penalty). The vertices that must be included in the solution to the Steiner tree problem are called terminals. One application of the prize-collecting Steiner tree problem is deciding how to extend cable access to new customers. In this case, each vertex represents a potential new customer, the cost of edge (i, j) represents the cost of connecting i to j by cable, the root vertex r represents a site that is already connected to the cable network, and the "penalties" π_i represent the potential profit to be gained by connecting customer i to the network. Thus, the goal is to minimize the total cost of connecting new customers to the network plus the profits lost by leaving the unconnected customers out of the network.

The prize-collecting Steiner tree problem can be modeled as an integer program. We use a 0-1 variable y_i for each i ∈ V and a 0-1 variable x_e for each e ∈ E. Variable y_i is set to 1 if i is in the solution tree and 0 if it is not, while x_e is set to 1 if e is in the solution tree and 0 if it is not. The objective function is then

minimize ∑_{e∈E} c_e x_e + ∑_{i∈V} π_i (1 − y_i).
We now need constraints to enforce that the tree connects the root r to each vertex i with y_i = 1. Given a nonempty set of vertices S ⊂ V, let δ(S) denote the set of edges in the cut
defined by S; that is, δ(S) is the set of all edges with exactly one endpoint in S. We will introduce the constraints

∑_{e∈δ(S)} x_e ≥ y_i
for each i ∈ S, and each S ⊆ V − r. To see that these constraints enforce that the tree connects the root to each vertex i with y_i = 1, take any feasible solution x and consider the graph G′ = (V, E′) with E′ = {e ∈ E : x_e = 1}. Pick any i such that y_i = 1. The constraints ensure that for any r-i cut S, there must be at least one edge of E′ in δ(S); that is, the size of the minimum r-i cut in G′ must be at least one. Thus, by the max-flow/min-cut theorem, the maximum r-i flow in G′ is at least one, which implies that there is a path from r to i in G′. Similarly, if x is not feasible, then there is some r-i cut S for which there are no edges of E′ in δ(S), which implies that the size of the minimum r-i cut is zero, and thus the maximum r-i flow is zero. Hence, there is a path from r to each i with y_i = 1 if and only if these constraints are satisfied, and if they are all satisfied, there will be a tree connecting r to all i such that y_i = 1. Thus, the following integer program models the prize-collecting Steiner tree problem:

minimize ∑_{e∈E} c_e x_e + ∑_{i∈V} π_i (1 − y_i)
subject to ∑_{e∈δ(S)} x_e ≥ y_i,    ∀S ⊆ V − r, S ≠ ∅, ∀i ∈ S,    (4.5)
           y_r = 1,
           y_i ∈ {0, 1},    ∀i ∈ V,
           x_e ∈ {0, 1},    ∀e ∈ E.
In order to apply deterministic rounding to the problem, we relax the integer programming formulation to a linear programming relaxation by replacing the constraints y_i ∈ {0, 1} and x_e ∈ {0, 1} with y_i ≥ 0 and x_e ≥ 0. We can now apply the ellipsoid method, introduced in Section 4.3, to solve the linear program in polynomial time. The separation oracle for the constraints ∑_{e∈δ(S)} x_e ≥ y_i is as follows: given a solution (x, y), we construct a network flow problem on the graph G in which the capacity of each edge e is set to x_e. For each vertex i, we check whether the maximum flow from i to the root r is at least y_i. If not, then the minimum cut S separating i from r gives a violated constraint with ∑_{e∈δ(S)} x_e < y_i for i ∈ S, S ⊆ V − r. If the flow is at least y_i, then for all cuts S separating i from r, with i ∈ S and S ⊆ V − r, we have ∑_{e∈δ(S)} x_e ≥ y_i by the max-flow/min-cut theorem. Hence, given a solution (x, y), we can find a violated constraint, if any exists, in polynomial time.

Given an optimal solution (x^*, y^*) to the linear programming relaxation, there is a very simple deterministic rounding algorithm for the problem, which we give in Algorithm 4.1. It is similar to the deterministic rounding algorithm for the set cover problem given in Section 1.3: for some value α ∈ [0, 1), let U be the set of all vertices i such that y_i^* ≥ α. Since y_r = 1, the root r is in U. Now build a tree on the vertices in U in the graph G. Since we want to build a tree with cost as low as possible, this is just a Steiner tree problem on the graph G in which the terminals are the set U. We could apply the algorithm given in Exercise 2.5. Instead, we use another algorithm, given in Exercise 7.6 of Chapter 7, whose analysis will be very easy once we have studied the primal-dual method of that chapter. Let T be the tree produced by that algorithm. We now begin the analysis of the algorithm in terms of the parameter α.
Later we will fix the value of α to give the best possible performance guarantee for the deterministic rounding algorithm.

Solve LP, get optimal primal solution (x^*, y^*)
U ← {i ∈ V : y_i^* ≥ α}
Use algorithm of Exercise 7.6 to build tree T on U
return T

Algorithm 4.1: Deterministic rounding algorithm for the prize-collecting Steiner tree problem.

We will analyze the cost of the constructed tree T and the penalties on the vertices not in T separately, comparing them to their contributions to the objective function of the linear programming relaxation. We use the result of Exercise 7.6 to analyze the cost of the tree T.

Lemma 4.6 (Exercise 7.6): The tree T returned by the algorithm of Exercise 7.6 has cost

∑_{e∈T} c_e ≤ (2/α) ∑_{e∈E} c_e x_e^*.

The analysis for the penalties is simple.

Lemma 4.7:

∑_{i∈V−V(T)} π_i ≤ (1/(1 − α)) ∑_{i∈V} π_i (1 − y_i^*).
Proof. If i is not in the tree T, then clearly it was not in the set of vertices U, and so y_i^* < α. Thus, 1 − y_i^* > 1 − α, and (1 − y_i^*)/(1 − α) > 1. The lemma statement follows.

Combining the two lemmas immediately gives the following theorem and corollary.

Theorem 4.8: The cost of the solution produced by Algorithm 4.1 is

∑_{e∈T} c_e + ∑_{i∈V−V(T)} π_i ≤ (2/α) ∑_{e∈E} c_e x_e^* + (1/(1 − α)) ∑_{i∈V} π_i (1 − y_i^*).
Corollary 4.9: Using Algorithm 4.1 with α = 2/3 gives a 3-approximation algorithm for the prize-collecting Steiner tree problem.
Proof. Clearly the algorithm runs in polynomial time. We can bound the performance guarantee by max{2/α, 1/(1 − α)}. We can minimize this maximum by setting 2/α = 1/(1 − α); thus, we obtain α = 2/3. Then, by Theorem 4.8, the cost of the tree obtained is

∑_{e∈T} c_e + ∑_{i∈V−V(T)} π_i ≤ 3 (∑_{e∈E} c_e x_e^* + ∑_{i∈V} π_i (1 − y_i^*)) ≤ 3 OPT,

since ∑_{e∈E} c_e x_e^* + ∑_{i∈V} π_i (1 − y_i^*) is the objective function value of the linear programming relaxation of the problem.

In Algorithm 4.1, we choose a set of vertices U such that y_i^* ≥ α = 2/3 and construct a Steiner tree on that set of vertices. A very natural idea is to try all possible values of α: since there are |V| variables y_i^*, there are at most |V| distinct values of y_i^*. Thus, we can construct |V| sets U_j = {i ∈ V : y_i^* ≥ y_j^*}. For each j ∈ V, construct a Steiner tree T_j on the vertices U_j, and return the best overall solution out of the |V| solutions constructed. Unfortunately, we do
not know how to analyze such an algorithm directly. However, we will see in Chapter 5 that this algorithm can be analyzed as a deterministic variant of a randomized algorithm. We will return to the prize-collecting Steiner tree problem in Section 5.7.
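The rounding step of Algorithm 4.1 and the try-all-thresholds variant just described can be sketched as follows. This is only an illustration of the vertex-selection step: the Steiner tree subroutine of Exercise 7.6 is abstracted away, and the function names and sample values are ours:

```python
def threshold_round(y_star, alpha=2 / 3):
    """Keep every vertex whose fractional LP value y* is at least alpha.
    The root always survives, since the LP forces y*_r = 1."""
    return {i for i, y in y_star.items() if y >= alpha}

def all_threshold_sets(y_star):
    """The variant that tries every distinct y*-value as the threshold:
    one candidate vertex set U_j per distinct value, largest threshold first."""
    return [{i for i, y in y_star.items() if y >= t}
            for t in sorted(set(y_star.values()), reverse=True)]

# A hypothetical fractional solution; with alpha = 2/3 we keep r, a, and c.
y = {"r": 1.0, "a": 0.9, "b": 0.5, "c": 0.7}
assert threshold_round(y) == {"r", "a", "c"}
assert all_threshold_sets(y)[0] == {"r"}   # threshold 1.0 keeps only the root
```

In the full algorithm, one would then build a Steiner tree on each candidate set and keep the cheapest resulting solution.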
4.5 The uncapacitated facility location problem
We now start our consideration of the uncapacitated facility location problem. We will return to this problem many times, since many of the techniques we discuss in this book are useful for devising approximation algorithms for it. In the uncapacitated facility location problem, we have a set of clients or demands D and a set of facilities F. For each client j ∈ D and facility i ∈ F, there is a cost c_{ij} of assigning client j to facility i. Furthermore, there is a cost f_i associated with each facility i ∈ F. The goal of the problem is to choose a subset of facilities F′ ⊆ F so as to minimize the total cost of the facilities in F′ plus the cost of assigning each client j ∈ D to its nearest facility in F′. In other words, we wish to find F′ so as to minimize ∑_{i∈F′} f_i + ∑_{j∈D} min_{i∈F′} c_{ij}. We call the first part of the cost the facility cost and the second part the assignment cost or service cost. We say that we open the facilities in F′.

The uncapacitated facility location problem is a simplified variant of a problem that arises in many different contexts. In one large computer company, a version of this problem occurs in deciding where to build or lease facilities to warehouse expensive parts needed for computer repair. The clients represent customers with service agreements that might require the use of the part. The facility cost is the cost of building or leasing the warehouse, and the assignment cost is the distance from the warehouse to the customer. In this problem it is also important to ensure that almost all of the customers are within a four-hour trip of a facility containing the needed part. Other typical complications include limits on the number of clients that can be served by a given facility (the capacitated facility location problem), and multiple types of facilities (for example, both distribution centers and warehouses, with clients assigned to warehouses, and warehouses assigned to distribution centers).
In its full generality, the uncapacitated facility location problem is as hard to approximate as the set cover problem (see Exercise 1.4). However, it is relatively common for facilities and clients to be points in a metric space, with assignment costs c_{ij} representing the distance from facility i to client j. For this reason, we will from here on consider the metric uncapacitated facility location problem, in which clients and facilities correspond to points in a metric space, and assignment costs obey the triangle inequality. More precisely, given clients j, l and facilities i, k, we have that c_{ij} ≤ c_{il} + c_{kl} + c_{kj} (see Figure 4.3). Since the clients and facilities correspond to points in a metric space, we assume that we also have distances c_{ii′} between two facilities i and i′, and c_{jj′} between two clients j and j′; we will not need this assumption in this section, but it will prove useful in later sections. We are able to give much better approximation algorithms for the metric version of the problem than we can for the more general version. Our first approach to this problem will be to apply a deterministic rounding technique, from which we will get a 4-approximation algorithm. In subsequent chapters we will get improved performance guarantees by applying randomized rounding, the primal-dual method, local search, and greedy techniques.

We begin by defining an integer programming formulation for the problem. We will have decision variables y_i ∈ {0, 1} for each facility i ∈ F; if we decide to open facility i, then y_i = 1, and y_i = 0 otherwise. We also introduce decision variables x_{ij} ∈ {0, 1} for all i ∈ F and all j ∈ D; if we assign client j to facility i, then x_{ij} = 1, while x_{ij} = 0 otherwise. Then the objective function is relatively simple: we wish to minimize the total facility cost plus the total assignment cost. This can be expressed as

minimize ∑_{i∈F} f_i y_i + ∑_{i∈F, j∈D} c_{ij} x_{ij}.

Figure 4.3: An illustration of the inequality obeyed by the assignment costs. The circles represent clients, and the squares represent facilities. For clients j, l and facilities i, k, c_{ij} ≤ c_{il} + c_{kl} + c_{kj}.
We need to ensure that each client j ∈ D is assigned to exactly one facility. This can be expressed via the following constraint:

∑_{i∈F} x_{ij} = 1.

Finally, we need to make sure that clients are assigned to facilities that are open. We achieve this by introducing the constraint x_{ij} ≤ y_i for all i ∈ F and j ∈ D; this ensures that whenever x_{ij} = 1 and client j is assigned to facility i, then y_i = 1 and the facility is open. Thus, the integer programming formulation of the uncapacitated facility location problem can be summarized as follows:

minimize ∑_{i∈F} f_i y_i + ∑_{i∈F, j∈D} c_{ij} x_{ij}     (4.6)
subject to ∑_{i∈F} x_{ij} = 1,      ∀j ∈ D,                (4.7)
           x_{ij} ≤ y_i,            ∀i ∈ F, j ∈ D,          (4.8)
           x_{ij} ∈ {0, 1},         ∀i ∈ F, j ∈ D,
           y_i ∈ {0, 1},            ∀i ∈ F.
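The objective (4.6) of an integral solution can be evaluated directly: pay f_i for each open facility, and assign each client to its nearest open facility. A small sketch on a made-up instance (all names and numbers here are our own illustration):

```python
def ufl_cost(open_facilities, f, c, clients):
    """Facility cost plus assignment cost of an integral solution to the
    uncapacitated facility location problem.

    f: facility -> opening cost; c: (facility, client) -> assignment cost.
    Each client is served by its cheapest open facility, as in objective (4.6).
    """
    facility_cost = sum(f[i] for i in open_facilities)
    assignment_cost = sum(min(c[i, j] for i in open_facilities) for j in clients)
    return facility_cost + assignment_cost

# Two facilities i, k and two clients j, l.
f = {"i": 3, "k": 1}
c = {("i", "j"): 1, ("i", "l"): 4, ("k", "j"): 2, ("k", "l"): 1}
assert ufl_cost({"k"}, f, c, ["j", "l"]) == 4        # 1 + (2 + 1)
assert ufl_cost({"i", "k"}, f, c, ["j", "l"]) == 6   # 4 + (1 + 1)
```

Opening more facilities lowers assignment costs but raises facility costs; the integer program trades the two off exactly.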
As usual, we obtain a linear programming relaxation from the integer program by replacing the constraints x_{ij} ∈ {0, 1} and y_i ∈ {0, 1} with x_{ij} ≥ 0 and y_i ≥ 0. It will be useful for us to consider the dual of the linear programming relaxation. Although one can derive the dual in a purely mechanical way, we will instead motivate it as a natural lower bound on the uncapacitated facility location problem. The most trivial lower bound for this problem is obtained by ignoring the facility costs entirely (that is, by pretending that the cost f_i = 0 for each i ∈ F). In that case, the optimal solution is to open all facilities and to assign each client to its nearest facility: if we set v_j = min_{i∈F} c_{ij}, then this lower bound is ∑_{j∈D} v_j. How can this be improved? Suppose that each facility takes its cost f_i and divides it into shares apportioned among the clients: f_i = ∑_{j∈D} w_{ij}, where each w_{ij} ≥ 0. The meaning of this share is that client
j needs to pay this share only if it uses facility i. In this way, we no longer charge explicitly for opening a facility, but still recoup some of its cost. But then we still have the situation that all of the (explicit) facility costs are 0, and so the optimal solution is to assign each client to the facility where the net cost is minimum; that is, we now can set vj = min_{i∈F} (cij + wij), and then ∑_{j∈D} vj is a lower bound on the true optimal value. Of course, we did not specify a way to determine each client's share of facility cost fi; we can make this an optimization problem by setting the shares so as to maximize the value of the resulting lower bound. If we allow vj to be any value for which vj ≤ cij + wij and maximize ∑_{j∈D} vj, then at the optimum vj will be set to the smallest such right-hand side. Thus, we have derived the following form for this lower bound, which is the dual linear program of the primal linear program (4.6):
    maximize ∑_{j∈D} vj
    subject to ∑_{j∈D} wij ≤ fi,    ∀i ∈ F,
               vj − wij ≤ cij,      ∀i ∈ F, j ∈ D,
               wij ≥ 0,             ∀i ∈ F, j ∈ D.
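Although the text works purely with the LP, the share-based lower bound can be sketched numerically. The function name and the tiny instance below are illustrative assumptions, not from the text; any nonnegative shares summing to at most each facility's cost yield a valid lower bound by weak duality.

```python
# Sketch of the share-based lower bound: given nonnegative shares w[i][j]
# with sum_j w[i][j] <= f[i], the quantity sum_j min_i (c[i][j] + w[i][j])
# lower-bounds the optimal facility location cost.

def share_lower_bound(f, c, w):
    """f[i]: facility cost; c[i][j]: assignment cost; w[i][j]: shares."""
    n_fac, n_cli = len(f), len(c[0])
    # Feasibility of the shares: each facility's cost is at most fully apportioned.
    for i in range(n_fac):
        assert sum(w[i]) <= f[i] + 1e-9
    # v_j = min_i (c_ij + w_ij), the cheapest "net" cost for client j.
    return sum(min(c[i][j] + w[i][j] for i in range(n_fac)) for j in range(n_cli))

# Hypothetical toy instance: 2 facilities, 2 clients.
f = [3.0, 5.0]
c = [[1.0, 2.0], [2.0, 1.0]]
w = [[1.5, 1.5], [2.5, 2.5]]  # each facility splits its cost between the clients
print(share_lower_bound(f, c, w))  # min(2.5, 4.5) + min(3.5, 3.5) = 6.0
```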
If Z∗_LP is the optimal value of the primal linear program (4.6) and OPT is the value of the optimal solution to the instance of the uncapacitated facility location problem, then for any feasible solution (v, w) to the dual linear program, by weak duality we have that ∑_{j∈D} vj ≤ Z∗_LP ≤ OPT. Of course, as with any primal-dual linear programming pair, we have a correspondence between primal constraints and dual variables, and vice versa. For example, the dual variable wij corresponds to the primal constraint xij ≤ yi, and the primal variable xij corresponds to the dual constraint vj − wij ≤ cij. We would like to use the information from an optimal solution (x∗, y∗) to the primal LP to determine a low-cost integral solution. In particular, if a client j is fractionally assigned to some facility i — that is, x∗ij > 0 — then perhaps we should also consider assigning j to i. We formalize this by saying that i is a neighbor of j (see Figure 4.4).
Definition 4.10: Given an LP solution x∗, we say that facility i neighbors client j if x∗ij > 0. We let N(j) = {i ∈ F : x∗ij > 0}.

The following lemma shows a connection between the cost of assigning j to a neighboring facility and the value of the dual variables.

Lemma 4.11: If (x∗, y∗) is an optimal solution to the facility location LP and (v∗, w∗) is an optimal solution to its dual, then x∗ij > 0 implies cij ≤ v∗j.

Proof. By complementary slackness, x∗ij > 0 implies v∗j − w∗ij = cij. Furthermore, since w∗ij ≥ 0, we have that cij ≤ v∗j.
The following is an intuitive argument about why neighboring facilities are useful. If we open a set of facilities S such that for all clients j ∈ D there exists an open facility i ∈ N(j), then the cost of assigning j to i is no more than v∗j by Lemma 4.11. Thus, the total assignment cost is no more than ∑_{j∈D} v∗j ≤ OPT, since this is the dual objective function. Unfortunately, such a set of facilities S might have a very high facility cost. What can we do to have a good facility cost? Here is an idea: suppose we can partition some subset F′ ⊆ F
Figure 4.4: A representation of the neighborhoods N(j) and N²(j). The circles represent clients, the squares represent facilities, and edges are drawn between facilities i and clients k such that xik > 0. The central client is j. The surrounding facilities are the facilities N(j). The shaded clients are in N²(j).

of the facilities into sets Fk such that each Fk = N(jk) for some client jk. Then if we open the cheapest facility ik in N(jk), we can bound the cost of ik by

    fik = fik ∑_{i∈N(jk)} x∗ijk ≤ ∑_{i∈N(jk)} fi x∗ijk,

where the equality follows from constraint (4.7) of the linear program, and the inequality follows from the choice of ik as the cheapest facility in Fk. Using the LP constraints (4.8) that xij ≤ yi, we see that

    fik ≤ ∑_{i∈N(jk)} fi x∗ijk ≤ ∑_{i∈N(jk)} fi y∗i.

If we sum this inequality over all facilities that we open, then we have

    ∑_k fik ≤ ∑_k ∑_{i∈N(jk)} fi y∗i = ∑_{i∈F′} fi y∗i ≤ ∑_{i∈F} fi y∗i,
where the equality follows since the N(jk) partition F′. This scheme bounds the facility costs of the open facilities by the facility cost of the linear programming solution. Opening facilities in this way does not guarantee that every client will be a neighbor of an open facility. However, we can take advantage of the fact that assignment costs obey the triangle inequality and make sure that clients are not too far away from an open facility. We first define an augmented version of neighborhood (see also Figure 4.4).

Definition 4.12: Let N²(j) denote all neighboring clients of the neighboring facilities of client j; that is, N²(j) = {k ∈ D : client k neighbors some facility i ∈ N(j)}.

Consider Algorithm 4.2. The algorithm loops until all clients are assigned to some facility. In each loop it picks the client jk that minimizes v∗j; we will see in a little bit why this is helpful. It then opens the cheapest facility ik in the neighborhood N(jk), and assigns jk and
Solve the LP, obtaining an optimal primal solution (x∗, y∗) and dual solution (v∗, w∗)
C ← D
k ← 0
while C ≠ ∅ do
    k ← k + 1
    Choose jk ∈ C that minimizes v∗j over all j ∈ C
    Choose ik ∈ N(jk) to be the cheapest facility in N(jk)
    Assign jk and all unassigned clients in N²(jk) to ik
    C ← C − {jk} − N²(jk)

Algorithm 4.2: Deterministic rounding algorithm for the uncapacitated facility location problem.

all previously unassigned clients in N²(jk) to ik. Note that by assigning the clients in N²(jk) to ik, we ensure that the neighborhoods N(jk) form a partition of a subset of the facilities: because no client in the neighborhood of any facility of N(jk) is unassigned after iteration k, no facility of N(jk) is a neighbor of some client jl chosen in later iterations (l > k). We can now analyze the performance of this algorithm.

Theorem 4.13: Algorithm 4.2 is a 4-approximation algorithm for the uncapacitated facility location problem.

Proof. We have shown above that ∑_k fik ≤ ∑_{i∈F} fi y∗i ≤ OPT. Fix an iteration k, and let j = jk and i = ik. By Lemma 4.11, the cost of assigning j to i is cij ≤ v∗j. As depicted in Figure 4.4, consider the cost of assigning an unassigned client l ∈ N²(j) to facility i, where client l neighbors facility h that neighbors client j; then, applying the triangle inequality and Lemma 4.11, we see that

    cil ≤ cij + chj + chl ≤ v∗j + v∗j + v∗l.

Recall that we have selected client j in this iteration because, among all currently unassigned clients, it has the smallest dual variable v∗j. However, l is also still unassigned, and so we know that v∗j ≤ v∗l. Thus, we can conclude that cil ≤ 3v∗l. Combining all of these bounds, we see that the solution constructed has facility cost at most OPT and assignment cost at most 3 ∑_{j∈D} v∗j ≤ 3 OPT (by weak duality), for an overall cost of at most 4 times the optimal. In Section 5.8, we will see how randomization can help us improve this algorithm to a 3-approximation algorithm.
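Although the text presents Algorithm 4.2 only as pseudocode, its clustering loop can be sketched in Python. The sketch below assumes the fractional LP values x[i][j] and the dual values v[j] are already available (e.g., from an external LP solver); the function name and instance data are illustrative.

```python
# Sketch of Algorithm 4.2's clustering loop: repeatedly pick the unassigned
# client with the smallest dual value, open the cheapest facility in its
# neighborhood, and assign it and every unassigned client in N^2 of it.

def round_facility_location(f, x, v):
    """f[i]: facility cost; x[i][j]: fractional assignment; v[j]: duals.
    Returns a dict mapping each client to its opened facility."""
    n_fac, n_cli = len(f), len(v)
    # N(j): facilities fractionally serving client j.
    neighbors = [{i for i in range(n_fac) if x[i][j] > 1e-9} for j in range(n_cli)]
    unassigned = set(range(n_cli))
    assignment = {}
    while unassigned:
        # Client with the smallest dual value (ties broken by index).
        jk = min(unassigned, key=lambda j: (v[j], j))
        # Cheapest facility in its neighborhood.
        ik = min(neighbors[jk], key=lambda i: f[i])
        # Assign jk and all unassigned clients sharing a neighbor with jk.
        for l in list(unassigned):
            if l == jk or neighbors[l] & neighbors[jk]:
                assignment[l] = ik
                unassigned.discard(l)
    return assignment

# Tiny illustrative instance: both clients fractionally split between two facilities,
# so the first iteration clusters both onto the cheaper facility.
print(round_facility_location([2.0, 5.0], [[0.5, 0.5], [0.5, 0.5]], [1.0, 1.0]))
```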
In subsequent chapters we will improve the performance guarantee still further. The following is known about the hardness of approximating the metric uncapacitated facility location problem, via a reduction from the set cover problem.

Theorem 4.14: There is no α-approximation algorithm for the metric uncapacitated facility location problem with constant α < 1.463 unless each problem in NP has an O(n^{O(log log n)})-time algorithm.

We will prove this theorem in Section 16.2.
4.6 The bin-packing problem
We return to the bin-packing problem, for which we gave an asymptotic polynomial-time approximation scheme in Section 3.3. Recall that in the bin-packing problem, we are given a collection of n items (or pieces), where item j is of size aj ∈ (0, 1], j = 1, . . . , n, and we wish
to assign each item to a bin, so that the total size assigned to each bin is at most 1; the objective is to use as few bins as possible. We showed previously that for each ϵ > 0, there is a polynomial-time algorithm that computes a solution with at most (1 + ϵ) OPT_BP(I) + 1 bins, where OPT_BP(I) is the optimal value for instance I of the problem. We shall improve on that result significantly; we shall give a polynomial-time algorithm that finds a solution that uses at most OPT_BP(I) + O(log²(OPT_BP(I))) bins. In obtaining this new bound, there will be three key components: an integer programming formulation to replace the dynamic programming formulation, where the integer program will be approximately solved by rounding its LP relaxation; an improved grouping scheme, which we call the harmonic grouping scheme; and an ingenious recursive application of the previous two components. We first present the integer programming formulation on which the new algorithm will be based. Suppose that we group pieces of identical size, so that there are b1 pieces of the largest size s1, b2 pieces of the second largest size s2, . . . , and bm pieces of the smallest size sm. Consider the ways in which a single bin can be packed. The contents of each bin can be described by an m-tuple (t1, . . . , tm), where ti indicates the number of pieces of size si that are included. We shall call such an m-tuple a configuration if ∑_i ti si ≤ 1; that is, the total size of the contents of the bin is at most 1, and so each configuration corresponds to one feasible way to pack a bin. There might be an exponential number of configurations. Let N denote the number of configurations, and let T1, . . . , TN be a complete enumeration of them, where tij denotes the ith component of Tj. We introduce a variable xj for each Tj that specifies the number of bins packed according to configuration Tj; that is, xj is an integer variable. The total number of bins used can be computed by summing these variables.
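For a tiny instance, the set of configurations can be enumerated by brute force; the sketch below is illustrative only, since for realistic m this set is exponentially large, which is exactly why the algorithm works with the LP relaxation rather than the configurations themselves.

```python
# Sketch: enumerate all bin configurations (t_1, ..., t_m) with
# sum_i t_i * s_i <= 1, for distinct piece sizes s_1 > ... > s_m.

from itertools import product

def configurations(sizes):
    """Yields every feasible m-tuple of piece counts for a bin of capacity 1."""
    caps = [int(1.0 // s) for s in sizes]          # max copies of each size alone
    for t in product(*(range(c + 1) for c in caps)):
        if sum(ti * si for ti, si in zip(t, sizes)) <= 1.0 + 1e-9:
            yield t

# Two piece sizes, 0.6 and 0.4: five feasible configurations.
print(sorted(configurations([0.6, 0.4])))
# [(0, 0), (0, 1), (0, 2), (1, 0), (1, 1)]
```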
If we pack xj bins according to configuration Tj, then this packs tij xj pieces of size si, i = 1, . . . , m. In this way, we can restrict attention to feasible solutions by requiring that we pack at least bi pieces of size si in total. This leads to the following integer programming formulation of the bin-packing problem; this is sometimes called the configuration integer program:

    minimize ∑_{j=1}^{N} xj    (4.9)
    subject to ∑_{j=1}^{N} tij xj ≥ bi,    i = 1, . . . , m,
               xj ∈ ℕ,                     j = 1, . . . , N.
This formulation was introduced in the context of designing practical algorithms to find optimal solutions to certain bin-packing problems. Our algorithm is based on solving the linear programming relaxation of this integer program; we let OPT_LP(I) denote the optimal value of this linear program. If we recall that SIZE(I) = ∑_{i=1}^{m} si bi, then clearly we have that SIZE(I) ≤ OPT_LP(I) ≤ OPT_BP(I). Recall from Section 4.3 that the ellipsoid method is a polynomial-time algorithm for linear programs that have a polynomial number of variables, and for which there is a polynomial-time separation oracle to find a violated constraint. The configuration LP has few constraints, but an exponential number of variables. However, its dual linear program will then have m variables
and an exponential number of constraints, and is as follows:

    maximize ∑_{i=1}^{m} bi yi
    subject to ∑_{i=1}^{m} tij yi ≤ 1,    j = 1, . . . , N,
               yi ≥ 0,                    i = 1, . . . , m.
We observe that the problem of deciding whether or not there is a violated dual constraint, given dual variables y, is simply a knapsack problem. If we view yi as the value of piece i, then the dual constraints say that for all possible ways of packing pieces into a bin (or knapsack) of size 1, the total value is at most 1. Hence, to obtain a separation oracle, we can use an algorithm for the knapsack problem to decide, given values y, whether or not there is a way of packing pieces into a bin of size 1 that has value more than 1; such a packing will correspond to a violated dual constraint. Since the knapsack problem is NP-hard, it would seem that it is not possible to obtain a polynomial-time separation oracle. However, by delving deeper into the analysis of the ellipsoid method, one can show that a fully polynomial-time approximation scheme for the knapsack problem (as given in Section 3.1) is sufficient to ensure the polynomial-time convergence of the ellipsoid method; we defer the details of how this can be done until Section 15.3, after which we give the problem as an exercise (Exercise 15.8). Hence, one can approximately solve the configuration linear program, within an additive error of 1, in time bounded by a polynomial in m and log(n/sm). In Section 3.3, one of the key ingredients for the asymptotic polynomial-time approximation scheme was a result that enables us to ignore pieces smaller than a given threshold γ. By applying this result, Lemma 3.10, we can assume, without loss of generality, that the smallest piece size satisfies sm ≥ 1/SIZE(I), since smaller pieces can again be packed later without changing the order of magnitude of the additive error. The harmonic grouping scheme works as follows: process the pieces in order of decreasing size, close the current group whenever its total size is at least 2, and then start a new group with the next piece. Let r denote the number of groups, let Gi denote the ith group, and let ni denote the number of pieces in Gi.
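The grouping rule and the construction of the rounded instance I′ described above and below can be sketched directly in Python (the function names are illustrative, not from the text):

```python
def harmonic_grouping(sizes):
    """Group piece sizes (each in (0, 1]) from largest to smallest,
    closing the current group once its total size reaches 2."""
    groups, current, total = [], [], 0.0
    for s in sorted(sizes, reverse=True):
        current.append(s)
        total += s
        if total >= 2:               # close this group, start a fresh one
            groups.append(current)
            current, total = [], 0.0
    if current:                      # the final group G_r may have total size < 2
        groups.append(current)
    return groups

def rounded_instance(groups):
    """Build I': discard G_1 and G_r; from each G_i (i = 2, ..., r-1)
    keep n_{i-1} pieces, each rounded to the largest size in G_i."""
    pieces = []
    for i in range(1, len(groups) - 1):           # 0-indexed: groups[1] .. groups[-2]
        pieces += [groups[i][0]] * len(groups[i - 1])
    return pieces

groups = harmonic_grouping([0.9, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3])
print([len(g) for g in groups])   # group sizes n_i → [3, 4, 1]
print(rounded_instance(groups))   # n_1 = 3 copies of G_2's largest size, 0.7
```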
Observe that since we sort the pieces from largest to smallest, it follows that for i = 2, . . . , r − 1, we have that ni ≥ ni−1. (For i = r, the group Gr need not have total size at least 2.) As in the proof of Lemma 3.11, from a given input I we form a new instance I′, in which a number of pieces are discarded and packed separately. For each i = 2, 3, . . . , r − 1, we put ni−1 pieces in I′ of size equal to the largest piece in Gi and discard the remaining pieces. In effect, we discard G1, Gr, and the ni − ni−1 smallest pieces in Gi, i = 2, . . . , r − 1, while increasing the size of the remaining pieces in a group to be equal to the largest one. Figure 4.5 shows the effect of the harmonic grouping on one input. First of all, it should be clear that any packing of the rounded instance I′ can be used to pack those pieces of the original instance that were not discarded; there is a natural mapping from any non-discarded piece in the original instance to a piece at least as large in I′. The following lemma gives two key properties of the harmonic grouping scheme.

Lemma 4.15: Let I′ be the bin-packing input obtained from an input I by applying the harmonic grouping scheme. Then the number of distinct piece sizes in I′ is at most SIZE(I)/2, and the total size of all discarded pieces is O(log SIZE(I)).

Proof. The first claim of the lemma is easy: each distinct piece size in I′ corresponds to one
[Figure 4.5: An input before and after harmonic grouping. The upper diagram shows the pieces of instance I (sizes on a 0–1 scale) partitioned into groups G1 (n1 = 2), G2 (n2 = 3), G3 (n3 = 3), G4 (n4 = 4), G5 (n5 = 6), and G6 (n6 = 7); the lower diagram shows the rounded instance I′.]
BinPack(I):
    if SIZE(I) < 10 then
        Pack the remaining pieces using First Fit
    else
        Apply the harmonic grouping scheme to create instance I′; pack the discards in O(log SIZE(I)) bins using First Fit
        Let x be an optimal solution to the configuration LP for instance I′
        Pack ⌊xj⌋ bins in configuration Tj for j = 1, . . . , N; call the packed pieces instance I1
        Let I2 be the remaining pieces from I′
        Pack I2 via BinPack(I2)

Algorithm 4.3: Deterministic rounding algorithm for packing large pieces of the bin-packing problem.

of the groups G2, . . . , Gr−1; each of these groups has total size at least 2, and so there are at most SIZE(I)/2 of them. To prove the second property, suppose, for the moment, that each group Gi has at most one more piece than the previous group Gi−1, i = 2, . . . , r − 1. Consequently, we discard at most one piece from each of the groups G2, . . . , Gr−1. To bound the total size of the discarded pieces, note that the total size of each group is at most 3, and so the total size of G1 and Gr is at most 6. Furthermore, the size of the smallest piece in group Gi is at most 3/ni. Since we discard a piece from Gi, i = 2, . . . , r − 1, only when Gi is the first group that has ni pieces, and we discard its smallest piece, the total size of these discarded pieces is at most ∑_{j=1}^{nr} 3/j. Recall from Section 1.6 that the kth harmonic number Hk is defined to be Hk = 1 + 1/2 + 1/3 + · · · + 1/k, and that Hk = O(log k). Since each piece has size at least 1/SIZE(I), we have that nr ≤ 3 SIZE(I), and so we see that the total size of the discarded pieces is O(log SIZE(I)). Now consider the general case, in which each group need not be only one larger than the previous one. Consider any group Gi that contains more pieces than Gi−1, i = 2, . . . , r − 1; more precisely, suppose that it contains k = ni − ni−1 more pieces. The k discarded pieces have total size at most 3(k/ni), since they are the k smallest of ni pieces of total size at most 3.
We can upper bound this value by adding k terms, each of which is at least 3/ni; that is, the total size of these discarded pieces is at most ∑_{j=ni−1+1}^{ni} 3/j. Adding these terms together (for all groups), we get that the total size of the discarded pieces is at most ∑_{j=1}^{nr} 3/j, which we have already seen is O(log SIZE(I)).

The harmonic grouping scheme can be used to design an approximation algorithm that always finds a packing with OPT_LP(I) + O(log²(SIZE(I))) bins. This algorithm uses the harmonic grouping scheme recursively. The algorithm applies the grouping scheme, packs the discarded pieces using the First-Fit algorithm (or virtually any other simple algorithm), and solves the linear program for the rounded instance. The fractional variables of the LP are rounded down to the nearest integer to obtain a packing of a subset of the pieces. This leaves some pieces unpacked, and these are handled by a recursive call until the total size of the remaining pieces is less than a specified constant (say, for example, 10). The algorithm is summarized in Algorithm 4.3. We shall use I to denote the original instance, I′ to denote the instance on which the first linear program is solved, and I1 and I2 to denote the pieces packed based on the integer part of the fractional solution and those left over for the recursive call, respectively. The key to the analysis of this algorithm is the following lemma.

Lemma 4.16: For any bin-packing input I, from which the harmonic grouping scheme produces
an input I′, which is then partitioned into I1 and I2 based on the integer and fractional parts of an optimal solution to the configuration linear program, we have that

    OPT_LP(I1) + OPT_LP(I2) ≤ OPT_LP(I′) ≤ OPT_LP(I).    (4.10)
Proof. The crucial property of the harmonic grouping scheme is that each piece in the input I′ can be mapped to a distinct piece in the original input I of no lesser size. By inverting this mapping, any feasible solution for I can be interpreted as a feasible solution for I′ (where each piece in I that is not the image of a piece in I′ is simply deleted). Hence, the optimal value for the bin-packing problem for I′ is at most the optimal value for I. Similarly, each configuration for I induces a corresponding configuration for I′, and so we see that OPT_LP(I′) ≤ OPT_LP(I). By definition, if x is an optimal solution to the configuration LP for I′ used to construct I1 and I2, then ⌊xj⌋, j = 1, . . . , N, is a feasible solution for the configuration LP for I1 (which is in fact integral), and xj − ⌊xj⌋, j = 1, . . . , N, is a feasible solution for the configuration LP for I2. Hence, the sum of the optimal values for these two LPs is at most ∑_{j=1}^{N} xj = OPT_LP(I′). In fact, one can prove that OPT_LP(I1) + OPT_LP(I2) = OPT_LP(I′), but this is not needed.

Each level of the recursion partitions the pieces into one of three sets: those pieces packed by the integer part of the LP solution, those pieces packed after having been discarded by the grouping scheme, and those pieces packed by a recursive call of the algorithm. (So, for the top level of the recursion, the first corresponds to I1 and the last corresponds to I2.) If one focuses on those pieces that fall in the first category over all recursive calls of the algorithm, the inequality (4.10) implies that the sum of the LP values of these inputs is at most OPT_LP(I). This implies that the only error introduced in each level of recursion is caused by the discarded pieces. We have bounded the total size of the pieces discarded in one level of recursion, and so we need only bound the number of levels of recursion.
We will do this by showing that the total size of the input to the recursive call is at most half the total size of the original input. The recursive input I2 is the leftover that corresponds to the fractional part of the optimal configuration LP solution x; hence, we can bound its total size by the sum of the fractions ∑_{j=1}^{N} (xj − ⌊xj⌋). A very crude upper bound on this sum is the number of nonzeroes in the optimal LP solution x. We claim that the number of nonzeroes in an optimal solution x can be bounded by the number of constraints in the configuration LP. We leave this as an exercise to the reader (Exercise 4.5), though it follows directly from the properties of basic optimal solutions, which we discuss in Chapter 11 and Appendix A. The number of constraints is the number of distinct piece sizes in I′. By Lemma 4.15, this is at most SIZE(I)/2. By combining all of these arguments, we see that the total size of I2, that is, SIZE(I2), is at most SIZE(I)/2. Hence, the size of the instance decreases by a factor of 2 in each level of recursion; the recursion terminates when the total size remaining is less than 10, and so there are O(log SIZE(I)) levels. In each of these levels, we use O(log SIZE(I)) bins to pack the discarded pieces; since SIZE(I) ≤ OPT_LP(I) ≤ OPT_BP(I), we obtain the following theorem.

Theorem 4.17: The recursive application of the harmonic grouping scheme yields a polynomial-time approximation algorithm for the bin-packing problem that uses OPT_BP(I) + O(log²(OPT_BP(I))) bins.

It is an important open question whether this result can be improved; it is possible that the bin-packing problem can be approximated within an additive error of 1; however, showing that this is impossible would also be a striking advance.
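Algorithm 4.3 falls back on First Fit both in its base case and for packing the discarded pieces; the scheme is simple enough to sketch directly (the floating-point tolerance is an assumption added for this sketch):

```python
# First Fit: place each piece in the first open bin with room for it,
# opening a new bin when no open bin fits.

def first_fit(sizes, capacity=1.0):
    """Returns a list of bins; each bin is the list of piece sizes it holds."""
    bins, loads = [], []
    for s in sizes:
        for b, load in enumerate(loads):
            if load + s <= capacity + 1e-9:   # first open bin with room
                bins[b].append(s)
                loads[b] += s
                break
        else:                                  # no open bin fits: open a new one
            bins.append([s])
            loads.append(s)
    return bins

print(first_fit([0.5, 0.7, 0.5, 0.3]))  # [[0.5, 0.5], [0.7, 0.3]] — 2 bins
```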
Exercises

4.1 The following problem arises in telecommunications networks, and is known as the SONET ring loading problem. The network consists of a cycle on n nodes, numbered 0 through n − 1 clockwise around the cycle. Some set C of calls is given; each call is a pair (i, j) originating at node i and destined to node j. The call can be routed either clockwise or counterclockwise around the ring. The objective is to route the calls so as to minimize the total load on the network. The load Li on link (i, i + 1 (mod n)) is the number of calls routed through that link, and the total load is max_{1≤i≤n} Li. Give a 2-approximation algorithm for the SONET ring loading problem.

4.2 Consider the scheduling problem of Section 4.2, but without release dates. That is, we must schedule jobs nonpreemptively on a single machine to minimize the weighted sum of completion times ∑_{j=1}^{n} wj Cj. Suppose that jobs are indexed such that w1/p1 ≥ w2/p2 ≥ · · · ≥ wn/pn. Then show it is optimal to schedule job 1 first, job 2 second, and so on. This scheduling rule is called Smith's rule.

4.3 Recall from Exercise 2.3 the concept of precedence constraints between jobs in a scheduling problem: we say i ≺ j if in any feasible schedule, job i must be completely processed before job j begins processing. Consider a variation of the single-machine scheduling problem from Section 4.2 in which we have precedence constraints but no release dates. That is, we are given n jobs with processing times pj > 0 and weights wj ≥ 0, and the goal is to find a nonpreemptive schedule on a single machine that is feasible with respect to the precedence constraints ≺ and that minimizes the weighted sum of completion times ∑_{j=1}^{n} wj Cj. Use the ideas of Section 4.2 to give a 2-approximation algorithm for this problem.

4.4 In the algorithm for the bin-packing problem, we used harmonic grouping in each iteration of the algorithm to create an instance I′ from I. Consider the following grouping scheme: for i = 0, . . . , ⌈log₂ SIZE(I)⌉, create a group Gi such that all pieces from I of size in (2^{−(i+1)}, 2^{−i}] are placed in group Gi. In each group Gi, create subgroups Gi,1, Gi,2, . . . , Gi,ki by arranging the pieces in Gi from largest to smallest and putting the 4 · 2^i largest in the first subgroup, the next 4 · 2^i in the next subgroup, and so on. Now create instance I′ from I in the following manner: for each i = 0, . . . , ⌈log₂ SIZE(I)⌉, discard subgroups Gi,1 and Gi,ki (i.e., the first and last subgroups of Gi), and for j = 2, . . . , ki − 1 round each piece of subgroup Gi,j to the size of the largest piece in Gi,j. Prove that by using this grouping scheme within the bin-packing algorithm, we obtain an algorithm that uses at most OPT_BP(I) + O(log²(OPT_BP(I))) bins for all instances I.

4.5 Show that there exists an optimal solution x to the linear programming relaxation of the integer program (4.9) that has at most m nonzero entries, where m is the number of different piece sizes.

4.6 Let G = (A, B, E) be a bipartite graph; that is, each edge (i, j) ∈ E has i ∈ A and j ∈ B. Assume that |A| ≤ |B| and that we are given nonnegative costs cij ≥ 0 for each edge (i, j) ∈ E. A complete matching of A is a subset of edges M ⊆ E such that each vertex in A has exactly one edge of M incident on it, and each vertex in B has at most one edge of M incident on it. We wish to find a minimum-cost complete matching. We can formulate an integer program for this problem in which we have an integer variable xij ∈ {0, 1} for
each edge (i, j) ∈ E, where xij = 1 if (i, j) is in the matching, and 0 otherwise. Then the integer program is as follows:

    minimize ∑_{(i,j)∈E} cij xij
    subject to ∑_{j∈B:(i,j)∈E} xij = 1,    ∀i ∈ A,
               ∑_{i∈A:(i,j)∈E} xij ≤ 1,    ∀j ∈ B,
               xij ∈ {0, 1},               ∀(i, j) ∈ E.
Consider the linear programming relaxation of the integer program in which we replace the integer constraints xij ∈ {0, 1} with xij ≥ 0 for all (i, j) ∈ E.

(a) Show that given any fractional solution to the linear programming relaxation, it is possible to find in polynomial time an integer solution that costs no more than the fractional solution. (Hint: Given a set of fractional variables, find a way to modify their values repeatedly such that the solution stays feasible, the overall cost does not increase, and at least one additional fractional variable becomes 0 or 1.) Conclude that there is a polynomial-time algorithm for finding a minimum-cost complete matching.

(b) Show that any extreme point of the linear programming relaxation has the property that xij ∈ {0, 1} for all (i, j) ∈ E. (Recall that an extreme point x is a feasible solution that cannot be expressed as λx1 + (1 − λ)x2 for 0 < λ < 1 and feasible solutions x1 and x2 distinct from x.)

4.7 This exercise introduces a deterministic rounding technique called pipage rounding, which builds on ideas similar to those used in Exercise 4.6. To illustrate this technique, we will consider the problem of finding a maximum cut in a graph with a constraint on the size of each part. In the maximum cut problem, we are given as input an undirected graph G = (V, E) with nonnegative weights wij ≥ 0 for all (i, j) ∈ E. We wish to partition the vertex set into two parts U and W = V − U so as to maximize the weight of the edges whose two endpoints are in different parts. We will also assume that we are given an integer k ≤ |V|/2, and we must find a partition such that |U| = k (we will consider the maximum cut problem without this constraint in Sections 5.1 and 6.2).

(a) Show that the following nonlinear integer program models the maximum cut problem with a constraint on the size of the parts:

    maximize ∑_{(i,j)∈E} wij (xi + xj − 2 xi xj)
    subject to ∑_{i∈V} xi = k,
               xi ∈ {0, 1},    ∀i ∈ V.
(b) Show that the following linear program is a relaxation of the problem:

    maximize ∑_{(i,j)∈E} wij zij
    subject to zij ≤ xi + xj,        ∀(i, j) ∈ E,
               zij ≤ 2 − xi − xj,    ∀(i, j) ∈ E,
               ∑_{i∈V} xi = k,
               0 ≤ zij ≤ 1,          ∀(i, j) ∈ E,
               0 ≤ xi ≤ 1,           ∀i ∈ V.
(c) Let F(x) = ∑_{(i,j)∈E} wij (xi + xj − 2 xi xj) be the objective function from the nonlinear integer program. Show that for any (x, z) that is a feasible solution to the linear programming relaxation, F(x) ≥ (1/2) ∑_{(i,j)∈E} wij zij.

(d) Argue that given a fractional solution x, for two fractional variables xi and xj, it is possible to increase one by ϵ > 0 and decrease the other by ϵ such that F(x) does not decrease and one of the two variables becomes integer.

(e) Use the arguments above to devise a 1/2-approximation algorithm for the maximum cut problem with a constraint on the size of the parts.
Chapter Notes

Early deterministic LP rounding algorithms include the set cover algorithm given in Section 1.3, due to Hochbaum [160], and the bin-packing algorithm of Section 4.6, due to Karmarkar and Karp [187]. Both of these papers appeared in 1982. Work on deterministic rounding approximation algorithms did not precede this date by much, because the first polynomial-time algorithms for solving linear programs did not appear until the late 1970s with the publication of the ellipsoid method. As discussed in the introduction to the chapter, the easiest way to round a fractional solution is to round some variables up to 1 and others down to 0. This is the case for the prize-collecting Steiner tree problem introduced in Section 4.4. The prize-collecting Steiner tree problem is a variant of a problem introduced by Balas [31]. This version of the problem and the algorithm of this section are due to Bienstock, Goemans, Simchi-Levi, and Williamson [48]. The ellipsoid method discussed in Section 4.3 as a polynomial-time algorithm for solving linear programs was first given by Khachiyan [189], building on earlier work for nonlinear programs by Shor [267]. The algorithm that uses a separation oracle for solving linear programs with an exponential number of constraints is due to Grötschel, Lovász, and Schrijver [144]. An extended treatment of the topic can be found in the book of Grötschel, Lovász, and Schrijver [145]; a survey-level treatment can be found in Bland, Goldfarb, and Todd [50]. At a high level, the ellipsoid method works as follows. Suppose we are trying to solve the linear program (4.4) from Section 4.3. Initially, the algorithm finds an ellipsoid in ℝⁿ containing all basic feasible solutions for the linear program (see Chapter 11 or Appendix A for a discussion of basic feasible and basic optimal solutions). Let x̃ be the center of the ellipsoid. The algorithm calls the separation oracle with x̃.
If x̃ is feasible, it creates a constraint ∑_{j=1}^n d_j x_j ≤ ∑_{j=1}^n d_j x̃_j, since a basic optimal solution must have objective function value no greater than the feasible solution x̃ (this constraint is sometimes called an objective function

Electronic web edition. Copyright 2011 by David P. Williamson and David B. Shmoys. To be published by Cambridge University Press
cut). If x̃ is not feasible, the separation oracle returns a constraint ∑_{j=1}^n a_{ij} x_j ≥ b_i that is violated by x̃. In either case, we have a hyperplane through x̃ such that a basic optimal solution to the linear program (if one exists) must lie on one side of the hyperplane; in the case of a feasible x̃ the hyperplane is ∑_{j=1}^n d_j x_j ≤ ∑_{j=1}^n d_j x̃_j, and in the case of an infeasible x̃ the hyperplane is ∑_{j=1}^n a_{ij} x_j ≥ ∑_{j=1}^n a_{ij} x̃_j. The hyperplane containing x̃ splits the ellipsoid in two. The algorithm then finds a new ellipsoid containing the appropriate half of the original ellipsoid, and then considers the center of the new ellipsoid. This process repeats until the ellipsoid is sufficiently small that it can contain at most one basic feasible solution, if any; this then must be a basic optimal solution if one exists. The key to the proof of the running time of the algorithm is to show that after O(n) iterations, the volume of the ellipsoid has dropped by a constant factor; then by relating the size of the initial to the final ellipsoid, the polynomial bound on the running time can be obtained. As we saw in the chapter, sometimes rounding algorithms are more sophisticated than simply choosing large fractional variables and rounding up. The algorithm for the unweighted single-machine scheduling problem in Section 4.1 is due to Phillips, Stein, and Wein [241]. The algorithm for the weighted case in Section 4.2 is due to Hall, Schulz, Shmoys, and Wein [155]; the linear programming formulation used in the section was developed by Wolsey [292] and Queyranne [243], and the separation oracle for the linear program in Section 4.3 is due to Queyranne [243]. The algorithm for the uncapacitated facility location problem in Section 4.5 is due to Chudak and Shmoys [77], building on earlier work of Shmoys, Tardos, and Aardal [264]. The hardness result of Theorem 4.14 is due to Guha and Khuller [146].
The pipage rounding technique in Exercise 4.7 for the maximum cut problem with size constraints is due to Ageev and Sviridenko [3, 2]. Smith's rule in Exercise 4.2 is due to Smith [270]. The result of Exercise 4.3 is due to Hall, Schulz, Shmoys, and Wein [155]. Exercise 4.4 is due to Karmarkar and Karp [187]. Exercise 4.6 is essentially due to Birkhoff [49].
Chapter 5
Random sampling and randomized rounding of linear programs
Sometimes it turns out to be useful to allow our algorithms to make random choices; that is, the algorithm can flip a coin, or flip a biased coin, or draw a value uniformly from a given interval. The performance guarantee of an approximation algorithm that makes random choices is then the expected value of the solution produced relative to the value of an optimal solution, where the expectation is taken over the random choices of the algorithm. At first this might seem like a weaker class of algorithm. In what sense is there a performance guarantee if it only holds in expectation? However, in most cases we will be able to show that randomized approximation algorithms can be derandomized: that is, we can use a certain algorithmic technique known as the method of conditional expectations to produce a deterministic version of the algorithm that has the same performance guarantee as the randomized version. Of what use then is randomization? It turns out that it is often much simpler to state and analyze the randomized version of the algorithm than to state and analyze the deterministic version that results from derandomization. Thus randomization gains us simplicity in our algorithm design and analysis, while derandomization ensures that the performance guarantee can be obtained deterministically. In a few cases, it is easy to state the deterministic, derandomized version of an algorithm, but we only know how to analyze the randomized version. Here the randomized algorithm allows us to analyze an algorithm that we are unable to analyze otherwise. We will see an example of this when we revisit the prize-collecting Steiner tree problem in Section 5.7. It is also sometimes the case that we can prove that the performance guarantee of a randomized approximation algorithm holds with high probability. By this we mean that the probability that the performance guarantee does not hold is one over some polynomial in the input size of the problem.
Usually we can make this polynomial as large as we want (and thus the probability as small as we want) by weakening the performance guarantee by some constant factor. Here derandomization is less necessary, though sometimes still possible by using more sophisticated techniques. We begin the chapter by looking at very simple randomized algorithms for two problems, the maximum satisfiability problem and the maximum cut problem. Here we show that sampling a solution uniformly at random from the set of all possible solutions gives a good randomized approximation algorithm. For the maximum satisfiability problem, we are able to go still further
and show that biasing our choice yields a better performance guarantee. We then revisit the idea of using randomized rounding of linear programming relaxations introduced in Section 1.7, and show that it leads to still better approximation algorithms for the maximum satisfiability problem, as well as better algorithms for other problems we have seen previously, such as the prize-collecting Steiner tree problem, the uncapacitated facility location problem, and a single-machine scheduling problem. We then give Chernoff bounds, which allow us to bound the probability that a sum of random variables is far away from its expected value. We show how these bounds can be applied to an integer multicommodity flow problem, which is historically the first use of randomized rounding. We end with a much more sophisticated use of drawing a random sample, and show that this technique can be used to 3-color certain kinds of dense 3-colorable graphs with high probability.
5.1
Simple algorithms for MAX SAT and MAX CUT
Two problems will play an especially prominent role in our discussion of randomization in the design and analysis of approximation algorithms: the maximum satisfiability problem, and the maximum cut problem. The former will be highlighted in this chapter, whereas the central developments for the latter will be deferred to the next chapter. However, in this section we will give a simple randomized 1/2-approximation algorithm for each problem. In the maximum satisfiability problem (often abbreviated as MAX SAT), the input consists of n Boolean variables x_1, ..., x_n (each of which may be set to either true or false), m clauses C_1, ..., C_m (each of which consists of a disjunction (that is, an "or") of some number of the variables and their negations – for example, x_3 ∨ x̄_5 ∨ x_{11}, where x̄_i is the negation of x_i), and a nonnegative weight w_j for each clause C_j. The objective of the problem is to find an assignment of true/false to the x_i that maximizes the weight of the satisfied clauses. A clause is said to be satisfied if one of the unnegated variables is set to true, or one of the negated variables is set to false. For example, the clause x_3 ∨ x̄_5 ∨ x_{11} is not satisfied only if x_3 is set to false, x_5 to true, and x_{11} to false. Some terminology will be useful in discussing the MAX SAT problem. We say that a variable x_i or a negated variable x̄_i is a literal, so that each clause consists of some number of literals. A variable x_i is called a positive literal and a negated variable x̄_i is called a negative literal. The number of literals in a clause is called its size or length. We will denote the length of a clause C_j by l_j. Clauses of length one are sometimes called unit clauses.
Without loss of generality, we assume that no literal is repeated in a clause (since this does not affect the satisfiability of the instance), and that at most one of x_i and x̄_i appears in a clause (since if both x_i and x̄_i are in a clause, it is trivially satisfiable). Finally, it is natural to assume that the clauses are distinct, since we can simply sum the weights of two identical clauses. A very straightforward use of randomization for MAX SAT is to set each x_i to true independently with probability 1/2. An alternate perspective on this algorithm is that we choose a setting of the variables uniformly at random from the space of all possible settings. It turns out that this gives a reasonable approximation algorithm for this problem.

Theorem 5.1: Setting each x_i to true with probability 1/2 independently gives a randomized 1/2-approximation algorithm for the maximum satisfiability problem.

Proof. Consider a random variable Y_j such that Y_j is 1 if clause j is satisfied and 0 otherwise. Let W be a random variable that is equal to the total weight of the satisfied clauses, so that W = ∑_{j=1}^m w_j Y_j. Let OPT denote the optimum value of the MAX SAT instance. Then, by
linearity of expectation, and the definition of the expectation of a 0-1 random variable, we know that

E[W] = ∑_{j=1}^m w_j E[Y_j] = ∑_{j=1}^m w_j Pr[clause C_j satisfied].
For each clause C_j, j = 1, ..., m, the probability that it is not satisfied is the probability that each positive literal in C_j is set to false and each negative literal in C_j is set to true, each of which happens with probability 1/2 independently; hence

Pr[clause C_j satisfied] = 1 − (1/2)^{l_j} ≥ 1/2,

where the last inequality is a consequence of the fact that l_j ≥ 1. Hence,

E[W] ≥ (1/2) ∑_{j=1}^m w_j ≥ (1/2) OPT,
where the last inequality follows from the fact that the total weight is an easy upper bound on the optimal value, since each weight is assumed to be nonnegative. that if lj ≥ k for each clause j, then the analysis above shows that the algorithm is a ( Observe ( 1 )k ) 1− 2 approximation algorithm for such instances. Thus the performance of the algorithm is better on MAX SAT instances consisting of long clauses. This observation will be useful to us later on. Although this seems like a pretty naive algorithm, a hardness theorem shows that this is the best that can be done in some cases. Consider the case in which lj = 3 for all clauses j; this restriction of the problem is sometimes called MAX E3SAT, since there are exactly 3 literals in each clause. The analysis above shows that randomized algorithm gives an approximation ( the ( 1 )3 ) = 78 . A truly remarkable result shows that algorithm with performance guarantee 1 − 2 nothing better is possible for these instances unless P = NP. Theorem 5.2: If there is an ( 78 +ϵ)approximation algorithm for MAX E3SAT for any constant ϵ > 0, then P = NP. We discuss this result further in Section 16.3. In the maximum cut problem (sometimes abbreviated MAX CUT), the input is an undirected graph G = (V, E), along with a nonnegative weight wij ≥ 0 for each edge (i, j) ∈ E. The goal is to partition the vertex set into two parts, U and W = V − U , so as to maximize the weight of the edges whose two endpoints are in diﬀerent parts, one in U and one in W . We say that an edge with endpoints in both U and W is in the cut. In the case wij = 1 for each edge (i, j) ∈ E, we have an unweighted MAX CUT problem. It is easy to give a 12 approximation algorithm for the MAX CUT problem along the same lines as the previous randomized algorithm for MAX SAT. Here we place each vertex v ∈ V into U independently with probability 1/2. 
As with the MAX SAT algorithm, this can be viewed as sampling a solution uniformly from the space of all possible solutions.

Theorem 5.3: If we place each vertex v ∈ V into U independently with probability 1/2, then we obtain a randomized 1/2-approximation algorithm for the maximum cut problem.
Proof. Consider a random variable X_{ij} that is 1 if the edge (i, j) is in the cut, and 0 otherwise. Let Z be the random variable equal to the total weight of edges in the cut, so that Z = ∑_{(i,j)∈E} w_{ij} X_{ij}. Let OPT denote the optimal value of the maximum cut instance. Then, as before, by linearity of expectation and the definition of expectation of a 0-1 random variable, we get that

E[Z] = ∑_{(i,j)∈E} w_{ij} E[X_{ij}] = ∑_{(i,j)∈E} w_{ij} Pr[edge (i, j) in cut].
In this case, the probability that a specific edge (i, j) is in the cut is easy to calculate: since the two endpoints are placed in the sets independently, they are in different sets with probability equal to 1/2. Hence,

E[Z] = (1/2) ∑_{(i,j)∈E} w_{ij} ≥ (1/2) OPT,
where the inequality follows directly from the fact that the sum of the (nonnegative) weights of all edges is obviously an upper bound on the weight of the edges in an optimal cut. We will show in Section 6.2 that by using more sophisticated techniques we can get a substantially better performance guarantee for the MAX CUT problem.
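Both random-sampling algorithms of this section can be stated in a few lines of code. The Python below is an illustrative sketch of ours, not code from the text; the literal encoding (+i for x_i, −i for x̄_i, variables 1-indexed) and the edge-triple representation are assumed conventions.

```python
import random

def random_assignment_maxsat(n, clauses, weights, seed=None):
    """Theorem 5.1: set each x_i true with probability 1/2 and return
    the total weight of satisfied clauses.  A clause is a list of
    literals: +i stands for x_i, -i for its negation."""
    rng = random.Random(seed)
    truth = [None] + [rng.random() < 0.5 for _ in range(n)]
    return sum(w for clause, w in zip(clauses, weights)
               if any(truth[l] if l > 0 else not truth[-l] for l in clause))

def random_cut(vertices, weighted_edges, seed=None):
    """Theorem 5.3: place each vertex in U with probability 1/2 and
    return (U, weight of the cut).  weighted_edges is a list of (i, j, w)."""
    rng = random.Random(seed)
    U = {v for v in vertices if rng.random() < 0.5}
    return U, sum(w for i, j, w in weighted_edges if (i in U) != (j in U))
```

In expectation, the first returns at least half the total clause weight and the second at least half the total edge weight, hence both are within a factor 1/2 of OPT in expectation.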
5.2
Derandomization
As we mentioned in the introduction to the chapter, it is often possible to derandomize a randomized algorithm; that is, to obtain a deterministic algorithm whose solution value is as good as the expected value of the randomized algorithm. To illustrate, we will show how the algorithm of the preceding section for the maximum satisfiability problem can be derandomized by replacing the randomized decision of whether to set x_i to true with a deterministic one that will preserve the expected value of the solution. These decisions will be made sequentially: the value of x_1 is determined first, then x_2, and so on. How should x_1 be set so as to preserve the expected value of the algorithm? Assume for the moment that we will make the choice of x_1 deterministically, and all other variables will be set true with probability 1/2 as before. Then the best way to set x_1 is that which will maximize the expected value of the resulting solution; that is, we should determine the expected value of W, the weight of satisfied clauses, given that x_1 is set to true, and the expected value of W given that x_1 is set to false, and set x_1 to whichever value maximizes the expected value of W. It makes intuitive sense that this should work, since the maximum is always greater than an average, and the expected value of W is the average of its expected value given the two possible settings of x_1. In this way, we maintain an algorithmic invariant that the expected value is at least half the optimum, while having fewer random variables left. More formally, if E[W | x_1 ← true] ≥ E[W | x_1 ← false], then we set x_1 true, otherwise we set it to false.
Since by the definition of conditional expectations,

E[W] = E[W | x_1 ← true] Pr[x_1 ← true] + E[W | x_1 ← false] Pr[x_1 ← false]
     = (1/2) (E[W | x_1 ← true] + E[W | x_1 ← false]),

if we set x_1 to truth value b_1 so as to maximize the conditional expectation, then E[W | x_1 ← b_1] ≥ E[W]; that is, the deterministic choice of how to set x_1 guarantees an expected value no less than the expected value of the completely randomized algorithm.
Assuming for the moment that we can compute these conditional expectations, the deterministic decision of how to set the remaining variables is similar. Assume that we have set variables x_1, ..., x_i to truth values b_1, ..., b_i respectively. How shall we set variable x_{i+1}? Again, assume that the remaining variables are set randomly. Then the best way to set x_{i+1} is so as to maximize the expected value given the previous settings of x_1, ..., x_i. So if

E[W | x_1 ← b_1, ..., x_i ← b_i, x_{i+1} ← true] ≥ E[W | x_1 ← b_1, ..., x_i ← b_i, x_{i+1} ← false],

we set x_{i+1} to true (thus setting b_{i+1} to true), otherwise we set x_{i+1} to false (thus setting b_{i+1} to false). Then since

E[W | x_1 ← b_1, ..., x_i ← b_i] = E[W | x_1 ← b_1, ..., x_i ← b_i, x_{i+1} ← true] Pr[x_{i+1} ← true] + E[W | x_1 ← b_1, ..., x_i ← b_i, x_{i+1} ← false] Pr[x_{i+1} ← false]
 = (1/2) (E[W | x_1 ← b_1, ..., x_i ← b_i, x_{i+1} ← true] + E[W | x_1 ← b_1, ..., x_i ← b_i, x_{i+1} ← false]),

setting x_{i+1} to truth value b_{i+1} as described above ensures that

E[W | x_1 ← b_1, ..., x_i ← b_i, x_{i+1} ← b_{i+1}] ≥ E[W | x_1 ← b_1, ..., x_i ← b_i].

By induction, this implies that

E[W | x_1 ← b_1, ..., x_i ← b_i, x_{i+1} ← b_{i+1}] ≥ E[W].

We continue this process until all n variables have been set. Then since the conditional expectation given the setting of all n variables, E[W | x_1 ← b_1, ..., x_n ← b_n], is simply the value of the solution given by the deterministic algorithm, we know that the value of the solution returned is at least E[W] ≥ (1/2) OPT. Therefore, the algorithm is a 1/2-approximation algorithm. These conditional expectations are not difficult to compute. By definition,

E[W | x_1 ← b_1, ..., x_i ← b_i] = ∑_{j=1}^m w_j E[Y_j | x_1 ← b_1, ..., x_i ← b_i] = ∑_{j=1}^m w_j Pr[clause C_j satisfied | x_1 ← b_1, ..., x_i ← b_i].
Furthermore, the probability that clause C_j is satisfied given that x_1 ← b_1, ..., x_i ← b_i is easily seen to be 1 if the settings of x_1, ..., x_i already satisfy the clause, and is 1 − (1/2)^k otherwise, where k is the number of literals in the clause that remain unset by this procedure. For example, consider the clause x_3 ∨ x̄_5 ∨ x̄_7. It is the case that

Pr[clause satisfied | x_1 ← true, x_2 ← false, x_3 ← true] = 1,

since setting x_3 to true satisfies the clause. On the other hand,

Pr[clause satisfied | x_1 ← true, x_2 ← false, x_3 ← false] = 1 − (1/2)^2 = 3/4,

since the clause will be unsatisfied only if x_5 and x_7 are set true, an event that occurs with probability 1/4. This technique for derandomizing algorithms works with a wide variety of randomized algorithms in which variables are set independently and the conditional expectations are polynomial-time computable. It is sometimes called the method of conditional expectations, due to its use of conditional expectations. In particular, an almost identical argument leads to a derandomized version of the randomized 1/2-approximation algorithm for the MAX CUT problem. Most of the randomized algorithms we discuss in this chapter can be derandomized via this method. The randomized versions of the algorithms are easier to present and analyze, and so we will frequently not discuss their deterministic variants.
5.3
Flipping biased coins
How might we improve the randomized algorithm for MAX SAT? We will show here that biasing the probability with which we set x_i is actually helpful; that is, we will set x_i true with some probability not equal to 1/2. To do this, it is easiest to start by considering only MAX SAT instances with no unit clauses x̄_i, that is, no negated unit clauses. We will later show that we can remove this assumption. Suppose now we set each x_i to be true independently with probability p > 1/2. As in the analysis of the previous randomized algorithm, we will need to analyze the probability that any given clause is satisfied.

Lemma 5.4: If each x_i is set to true with probability p > 1/2 independently, then the probability that any given clause is satisfied is at least min(p, 1 − p^2) for MAX SAT instances with no negated unit clauses.

Proof. If the clause is a unit clause, then the probability the clause is satisfied is p, since it must be of the form x_i, and the probability x_i is set true is p. If the clause has length at least two, then the probability that the clause is satisfied is 1 − p^a (1 − p)^b, where a is the number of negated variables in the clause and b is the number of unnegated variables in the clause, so that a + b = l_j ≥ 2. Since p > 1/2 > 1 − p, this probability is at least 1 − p^{a+b} = 1 − p^{l_j} ≥ 1 − p^2, and the lemma is proved.

We can obtain the best performance guarantee by setting p = 1 − p^2. This yields p = (1/2)(√5 − 1) ≈ .618. The lemma immediately implies the following theorem.

Theorem 5.5: Setting each x_i to true with probability p independently gives a randomized min(p, 1 − p^2)-approximation algorithm for MAX SAT instances with no negated unit clauses.

Proof. This follows since
m ∑
wj Pr[clause Cj satisﬁed] ≥ min(p, 1 − p2 )
j=1
m ∑
wj ≥ min(p, 1 − p2 ) OPT .
j=1
We would like to extend this result to all MAX SAT instances. To do this, we will use a better bound on OPT than ∑_{j=1}^m w_j. Assume that for every i the weight of the unit clause x_i appearing in the instance is at least the weight of the unit clause x̄_i; this is without loss of generality since we could negate all occurrences of x_i if the assumption is not true. Let v_i be the weight of the unit clause x̄_i if it exists in the instance, and let v_i be zero otherwise.

Lemma 5.6: OPT ≤ ∑_{j=1}^m w_j − ∑_{i=1}^n v_i.

Proof. For each i, the optimal solution can satisfy exactly one of x_i and x̄_i. Thus the weight of the optimal solution cannot include both the weight of the clause x_i and the clause x̄_i. Since v_i is the smaller of these two weights, the lemma follows. We can now extend the result.
Theorem 5.7: We can obtain a randomized (1/2)(√5 − 1)-approximation algorithm for MAX SAT.
Proof. Let U be the set of indices of clauses of the instance excluding unit clauses of the form x̄_i. As above, we assume without loss of generality that the weight of each clause x̄_i is no greater than the weight of clause x_i. Thus ∑_{j∈U} w_j = ∑_{j=1}^m w_j − ∑_{i=1}^n v_i. Then set each x_i to be true independently with probability p = (1/2)(√5 − 1). Then

E[W] = ∑_{j=1}^m w_j Pr[clause C_j satisfied]
     ≥ ∑_{j∈U} w_j Pr[clause C_j satisfied]
     ≥ p · ∑_{j∈U} w_j                                      (5.1)
     = p · (∑_{j=1}^m w_j − ∑_{i=1}^n v_i) ≥ p · OPT,

where (5.1) follows by Theorem 5.5 and the fact that p = min(p, 1 − p^2). This algorithm can be derandomized using the method of conditional expectations.
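The biased algorithm of Theorem 5.7 differs from the unbiased one only in the coin used; the normalization step (negating variables so that no negated unit clause outweighs its positive counterpart) is assumed to have been done beforehand and is not shown. A minimal sketch:

```python
import random

# p solves p = 1 - p^2, i.e. p = (sqrt(5) - 1)/2, approximately 0.618
P = (5 ** 0.5 - 1) / 2

def biased_assignment(n, seed=None):
    """Set each of x_1, ..., x_n true independently with probability P.
    Returns a 1-indexed list of booleans (index 0 unused).  Assumes the
    instance has been normalized as described in the text."""
    rng = random.Random(seed)
    return [None] + [rng.random() < P for _ in range(n)]
```

The defining identity p = 1 − p² is exactly what equalizes the two cases of Lemma 5.4 (unit clauses versus clauses of length at least two).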
5.4
Randomized rounding
The algorithm of the previous section shows that biasing the probability with which we set x_i true yields an improved approximation algorithm. However, we gave each variable the same bias. In this section, we show that we can do still better by giving each variable its own bias. We do this by returning to the idea of randomized rounding, which we examined briefly in Section 1.7 in the context of the set cover problem. Recall that in randomized rounding, we first set up an integer programming formulation of the problem at hand in which there are 0-1 integer variables. In this case we will create an integer program with a 0-1 variable y_i for each Boolean variable x_i such that y_i = 1 corresponds to x_i set true. The integer program is relaxed to a linear program by replacing the constraints y_i ∈ {0, 1} with 0 ≤ y_i ≤ 1, and the linear programming relaxation is solved in polynomial time. Recall that the central idea of randomized rounding is that the fractional value y_i^* is interpreted as the probability that y_i should be set to 1. In this case, we set each x_i to true with probability y_i^* independently. We now give an integer programming formulation of the MAX SAT problem. In addition to the variables y_i, we introduce a variable z_j for each clause C_j that will be 1 if the clause is satisfied and 0 otherwise. For each clause C_j let P_j be the indices of the variables x_i that occur positively in the clause, and let N_j be the indices of the variables x_i that are negated in the clause. We denote the clause C_j by

⋁_{i∈P_j} x_i ∨ ⋁_{i∈N_j} x̄_i.

Then the inequality

∑_{i∈P_j} y_i + ∑_{i∈N_j} (1 − y_i) ≥ z_j
must hold for clause C_j since if each variable that occurs positively in the clause is set to false (and its corresponding y_i is set to 0) and each variable that occurs negatively is set to true (and its corresponding y_i is set to 1), then the clause is not satisfied, and z_j must be 0. This inequality yields the following integer programming formulation of the MAX SAT problem:

maximize   ∑_{j=1}^m w_j z_j
subject to ∑_{i∈P_j} y_i + ∑_{i∈N_j} (1 − y_i) ≥ z_j,   ∀C_j = ⋁_{i∈P_j} x_i ∨ ⋁_{i∈N_j} x̄_i,
           y_i ∈ {0, 1},   i = 1, ..., n,
           0 ≤ z_j ≤ 1,    j = 1, ..., m.

If Z*_IP is the optimal value of this integer program, then it is not hard to see that Z*_IP = OPT.
The corresponding linear programming relaxation of this integer program is

maximize   ∑_{j=1}^m w_j z_j
subject to ∑_{i∈P_j} y_i + ∑_{i∈N_j} (1 − y_i) ≥ z_j,   ∀C_j = ⋁_{i∈P_j} x_i ∨ ⋁_{i∈N_j} x̄_i,
           0 ≤ y_i ≤ 1,   i = 1, ..., n,
           0 ≤ z_j ≤ 1,   j = 1, ..., m.
If Z*_LP is the optimal value of this linear program, then clearly Z*_LP ≥ Z*_IP = OPT. Let (y*, z*) be an optimal solution to the linear programming relaxation. We now consider the result of using randomized rounding, and setting x_i to true with probability y_i^* independently. Before we can begin the analysis, we will need two facts. The first is commonly called the arithmetic-geometric mean inequality, because it compares the arithmetic and geometric means of a set of numbers.

Fact 5.8 (Arithmetic-geometric mean inequality): For any nonnegative a_1, ..., a_k,

(∏_{i=1}^k a_i)^{1/k} ≤ (1/k) ∑_{i=1}^k a_i.
Fact 5.9: If a function f(x) is concave on the interval [0, 1] (that is, f''(x) ≤ 0 on [0, 1]), and f(0) = a and f(1) = b + a, then f(x) ≥ bx + a for x ∈ [0, 1] (see Figure 5.1).

Theorem 5.10: Randomized rounding gives a randomized (1 − 1/e)-approximation algorithm for MAX SAT.

Proof. As in the analyses of the algorithms in the previous sections, the main difficulty is analyzing the probability that a given clause C_j is satisfied. Pick an arbitrary clause C_j. Then, by applying the arithmetic-geometric mean inequality, we see that

Pr[clause C_j not satisfied] = ∏_{i∈P_j} (1 − y_i^*) ∏_{i∈N_j} y_i^* ≤ [ (1/l_j) ( ∑_{i∈P_j} (1 − y_i^*) + ∑_{i∈N_j} y_i^* ) ]^{l_j}.
Figure 5.1: An illustration of Fact 5.9.
By rearranging terms, we can derive that

[ (1/l_j) ( ∑_{i∈P_j} (1 − y_i^*) + ∑_{i∈N_j} y_i^* ) ]^{l_j} = [ 1 − (1/l_j) ( ∑_{i∈P_j} y_i^* + ∑_{i∈N_j} (1 − y_i^*) ) ]^{l_j}.
By invoking the corresponding inequality from the linear program,

∑_{i∈P_j} y_i^* + ∑_{i∈N_j} (1 − y_i^*) ≥ z_j^*,
we see that

Pr[clause C_j not satisfied] ≤ (1 − z_j^*/l_j)^{l_j}.

The function f(z_j^*) = 1 − (1 − z_j^*/l_j)^{l_j} is concave for l_j ≥ 1. Then by using Fact 5.9,

Pr[clause C_j satisfied] ≥ 1 − (1 − z_j^*/l_j)^{l_j} ≥ [1 − (1 − 1/l_j)^{l_j}] z_j^*.
Therefore the expected value of the randomized rounding algorithm is

E[W] = ∑_{j=1}^m w_j Pr[clause C_j satisfied]
     ≥ ∑_{j=1}^m w_j z_j^* [1 − (1 − 1/l_j)^{l_j}]
     ≥ min_{k≥1} [1 − (1 − 1/k)^k] ∑_{j=1}^m w_j z_j^*.

Note that [1 − (1 − 1/k)^k] is a nonincreasing function in k and that it approaches 1 − 1/e from above as k tends to infinity. Since ∑_{j=1}^m w_j z_j^* = Z*_LP ≥ OPT, we have that

E[W] ≥ min_{k≥1} [1 − (1 − 1/k)^k] ∑_{j=1}^m w_j z_j^* ≥ (1 − 1/e) OPT.
This randomized rounding algorithm can be derandomized in the standard way using the method of conditional expectations.
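Given an optimal fractional solution y* of the linear programming relaxation (solving the LP itself is left to any external solver and is not shown here), the rounding step and the exact per-clause satisfaction probability are easy to write down. A sketch under the same assumed ±i literal convention as before:

```python
import random

def randomized_rounding(y_star, seed=None):
    """Given fractional LP values y_star[i] in [0, 1] (1-indexed, with a
    dummy entry at index 0), set each x_i true with probability y_star[i]."""
    rng = random.Random(seed)
    return [None] + [rng.random() < y for y in y_star[1:]]

def clause_sat_prob(clause, y_star):
    """Exact probability that one clause is satisfied under this rounding.
    The clause fails only if every positive literal's variable comes up
    false and every negated literal's variable comes up true."""
    p_unsat = 1.0
    for lit in clause:
        p_unsat *= (1 - y_star[lit]) if lit > 0 else y_star[-lit]
    return 1 - p_unsat
```

The analysis in the proof of Theorem 5.10 lower-bounds exactly the quantity `clause_sat_prob` computes, clause by clause.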
5.5
Choosing the better of two solutions
In this section we observe that choosing the best solution from the two given by the randomized rounding algorithm of the previous section and the unbiased randomized algorithm of the first section gives a better performance guarantee than that of either algorithm. This happens because, as we shall see, the algorithms have contrasting bad cases: when one algorithm is far from optimal, the other is close, and vice versa. This technique can be useful in other situations, and does not require using randomized algorithms. In this case, consider a given clause C_j of length l_j. The randomized rounding algorithm of Section 5.4 satisfies the clause with probability at least [1 − (1 − 1/l_j)^{l_j}] z_j^*, while the unbiased randomized algorithm of Section 5.1 satisfies the clause with probability 1 − 2^{−l_j} ≥ (1 − 2^{−l_j}) z_j^*. Thus when the clause is short, it is very likely to be satisfied by the randomized rounding algorithm, though not by the unbiased randomized algorithm, and when the clause is long the opposite is true. This observation is made precise and rigorous in the following theorem.

Theorem 5.11: Choosing the better of the two solutions given by the randomized rounding algorithm and the unbiased randomized algorithm yields a randomized 3/4-approximation algorithm for MAX SAT.

Proof. Let W_1 be a random variable denoting the value of the solution returned by the randomized rounding algorithm, and let W_2 be a random variable denoting the value of the solution returned by the unbiased randomized algorithm. Then we wish to show that

E[max(W_1, W_2)] ≥ (3/4) OPT.
Figure 5.2: Illustration of the proof of Theorem 5.11. The "randomized rounding" line is the function 1 − (1 − 1/k)^k. The "flipping coins" line is the function 1 − 2^{−k}. The "average" line is the average of these two functions, which is at least 3/4 for all integers k ≥ 1.

To obtain this inequality, observe that

E[max(W_1, W_2)] ≥ E[(1/2) W_1 + (1/2) W_2]
 = (1/2) E[W_1] + (1/2) E[W_2]
 ≥ (1/2) ∑_{j=1}^m w_j z_j^* [1 − (1 − 1/l_j)^{l_j}] + (1/2) ∑_{j=1}^m w_j (1 − 2^{−l_j})
 ≥ ∑_{j=1}^m w_j z_j^* [ (1/2) (1 − (1 − 1/l_j)^{l_j}) + (1/2) (1 − 2^{−l_j}) ].
We claim that

(1/2) [1 − (1 − 1/l_j)^{l_j}] + (1/2) (1 − 2^{−l_j}) ≥ 3/4

for all positive integers l_j. We will prove this shortly, but this can be seen in Figure 5.2. Given the claim, we have that

E[max(W_1, W_2)] ≥ (3/4) ∑_{j=1}^m w_j z_j^* = (3/4) Z*_LP ≥ (3/4) OPT.
Now to prove the claim. Observe that the claim holds for l_j = 1, since

(1/2) · 1 + (1/2) · (1/2) = 3/4,
and the claim holds for l_j = 2, since

(1/2) · (1 − (1/2)^2) + (1/2) (1 − 2^{−2}) = 3/4.

For all l_j ≥ 3, 1 − (1 − 1/l_j)^{l_j} ≥ 1 − 1/e and 1 − 2^{−l_j} ≥ 7/8, and

(1/2) (1 − 1/e) + (1/2) · (7/8) ≈ .753 ≥ 3/4,
so the claim is proven. Notice that taking the best solution of the two derandomized algorithms gives at least max(E[W_1], E[W_2]) ≥ (1/2) E[W_1] + (1/2) E[W_2]. The proof above shows that this quantity is at least (3/4) OPT. Thus taking the best solution of the two derandomized algorithms is a deterministic 3/4-approximation algorithm.
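The per-clause claim in the proof of Theorem 5.11 can be spot-checked numerically (a finite check of small cases, not a substitute for the proof above):

```python
def avg_guarantee(k):
    """Average of the two per-clause guarantees for a clause of length k:
    randomized rounding contributes 1 - (1 - 1/k)^k and the unbiased
    algorithm contributes 1 - 2^(-k)."""
    return 0.5 * (1 - (1 - 1 / k) ** k) + 0.5 * (1 - 2 ** (-k))

# exactly 3/4 at k = 1 and k = 2, strictly above 3/4 for larger k
assert avg_guarantee(1) == 0.75
assert avg_guarantee(2) == 0.75
assert all(avg_guarantee(k) > 0.75 for k in range(3, 1000))
```

The two tight cases k = 1 and k = 2 are exactly the ones checked by hand in the proof; the range bound 1000 is arbitrary.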
5.6
Nonlinear randomized rounding
Thus far in our applications of randomized rounding, we have used the variable y_i^* from the linear programming relaxation as a probability to decide whether to set y_i to 1 in the integer programming formulation of the problem. In the case of the MAX SAT problem, we set x_i to true with probability y_i^*. There is no reason, however, that we cannot use some function f : [0, 1] → [0, 1] to set x_i to true with probability f(y_i^*). Sometimes this yields approximation algorithms with better performance guarantees than using the identity function, as we will see in this section. In this section we will show that a 3/4-approximation algorithm for MAX SAT can be obtained directly by using randomized rounding with a nonlinear function f. In fact, there is considerable freedom in choosing such a function f: let f be any function such that f : [0, 1] → [0, 1] and

1 − 4^{−x} ≤ f(x) ≤ 4^{x−1}.     (5.2)
See Figure 5.3 for a plot of the bounding functions. We will see that this ensures that the probability a clause $C_j$ is satisfied is at least $1-4^{-z_j^*} \geq \frac{3}{4}z_j^*$, which will give the $\frac{3}{4}$-approximation algorithm.

Theorem 5.12: Randomized rounding with the function $f$ is a randomized $\frac{3}{4}$-approximation algorithm for MAX SAT.
Proof. Once again, we only need analyze the probability that a given clause $C_j$ is satisfied. By the definition of $f$,
\[
\Pr[\text{clause } C_j \text{ not satisfied}] = \prod_{i\in P_j}\left(1-f(y_i^*)\right)\prod_{i\in N_j} f(y_i^*) \leq \prod_{i\in P_j} 4^{-y_i^*}\prod_{i\in N_j} 4^{y_i^*-1}.
\]
Rewriting the product and using the linear programming constraint for clause $C_j$ gives us
\[
\Pr[\text{clause } C_j \text{ not satisfied}] \leq \prod_{i\in P_j} 4^{-y_i^*}\prod_{i\in N_j} 4^{y_i^*-1} = 4^{-\left(\sum_{i\in P_j} y_i^* + \sum_{i\in N_j}(1-y_i^*)\right)} \leq 4^{-z_j^*}.
\]
Figure 5.3: A plot of the functions of (5.2). The upper curve is $4^{x-1}$ and the lower curve is $1-4^{-x}$.

Then using Fact 5.9 and observing that the function $g(z) = 1-4^{-z}$ is concave on $[0,1]$, we have
\[
\Pr[\text{clause } C_j \text{ satisfied}] \geq 1-4^{-z_j^*} \geq \left(1-4^{-1}\right)z_j^* = \frac{3}{4}z_j^*.
\]
It follows that the expected performance of the algorithm is
\[
E[W] = \sum_{j=1}^m w_j \Pr[\text{clause } C_j \text{ satisfied}] \geq \frac{3}{4}\sum_{j=1}^m w_j z_j^* \geq \frac{3}{4}\,\mathrm{OPT}.
\]
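As a concrete illustration of this rounding, here is a small sketch; the choice $f(x) = 1-4^{-x}$, the lower bounding curve of (5.2), is just one valid choice, and the clause encoding is our own:

```python
import random

def f(x):
    # The lower bounding curve of (5.2); any f with 1 - 4^{-x} <= f(x) <= 4^{x-1} works.
    return 1.0 - 4.0 ** (-x)

def round_assignment(y_star, rng):
    """Set x_i true with probability f(y_i*)."""
    return [rng.random() < f(yi) for yi in y_star]

def expected_weight(clauses, weights, y_star):
    """Exact expected weight of satisfied clauses under this rounding.

    A clause is a pair (P_j, N_j) of index lists of positive and negated variables;
    Pr[C_j not satisfied] = prod_{i in P_j} (1 - f(y_i*)) * prod_{i in N_j} f(y_i*).
    """
    total = 0.0
    for (pos, neg), w in zip(clauses, weights):
        p_unsat = 1.0
        for i in pos:
            p_unsat *= 1.0 - f(y_star[i])
        for i in neg:
            p_unsat *= f(y_star[i])
        total += w * (1.0 - p_unsat)
    return total

# The four-clause instance discussed at the end of this section:
# x1 v x2, x1 v ~x2, ~x1 v x2, ~x1 v ~x2, each of weight 1.
clauses = [([0, 1], []), ([0], [1]), ([1], [0]), ([], [0, 1])]
weights = [1.0, 1.0, 1.0, 1.0]
print(expected_weight(clauses, weights, [0.5, 0.5]))  # 3.0: each clause satisfied w.p. 3/4
```

Since the LP optimum on this instance has $z_j^* = 1$ for all four clauses and LP value 4, the computed expectation of 3 matches the guaranteed $\frac{3}{4}$ factor exactly.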
Once again, the algorithm can be derandomized using the method of conditional expectations. There are other choices of the function $f$ that also lead to a $\frac{3}{4}$-approximation algorithm for MAX SAT. Some other possibilities are presented in the exercises at the end of the chapter.

Is it possible to get an algorithm with a performance guarantee better than $\frac{3}{4}$ by using some more complicated form of randomized rounding? It turns out that the answer is no, at least for any algorithm that derives its performance guarantee by comparing its value to that of the linear programming relaxation. To see this, consider the instance of the maximum satisfiability problem with two variables $x_1$ and $x_2$ and four clauses of weight one each, $x_1 \vee x_2$, $x_1 \vee \bar{x}_2$, $\bar{x}_1 \vee x_2$, and $\bar{x}_1 \vee \bar{x}_2$. Any feasible solution, including the optimal solution, satisfies exactly three of the four clauses. However, if we set $y_1 = y_2 = \frac{1}{2}$ and $z_j = 1$ for all four clauses $C_j$, then this solution is feasible for the linear program and has value 4. Thus the value of any solution to the MAX SAT instance can be at most $\frac{3}{4}\sum_{j=1}^m w_j z_j^*$. We say that this integer programming formulation of the maximum satisfiability problem has an integrality gap of $\frac{3}{4}$.

Definition 5.13: The integrality gap of an integer program is the worst-case ratio over all instances of the problem of the value of an optimal solution to the integer programming formulation to the value of an optimal solution to its linear programming relaxation.
The example above shows that the integrality gap is at most $\frac{3}{4}$, whereas the proof of Theorem 5.12 shows that it is at least $\frac{3}{4}$. If an integer programming formulation has integrality gap $\rho$, then any algorithm for a maximization problem whose performance guarantee $\alpha$ is proven by showing that the value of its solution is at least $\alpha$ times the value of the linear programming relaxation can have performance guarantee at most $\rho$. A similar statement holds for minimization problems.
5.7 The prize-collecting Steiner tree problem
We now consider other problems for which randomized techniques are useful. In particular, in the next few sections, we revisit some of the problems we studied earlier in the book and show that we can obtain improved performance guarantees by using the randomized methods that we have developed in this chapter.

We start by returning to the prize-collecting Steiner tree problem we discussed in Section 4.4. Recall that in this problem we are given an undirected graph $G=(V,E)$, an edge cost $c_e \geq 0$ for each $e\in E$, a selected root vertex $r\in V$, and a penalty $\pi_i \geq 0$ for each $i\in V$. The goal is to find a tree $T$ that contains the root vertex $r$ so as to minimize $\sum_{e\in T} c_e + \sum_{i\in V-V(T)} \pi_i$, where $V(T)$ is the set of vertices in the tree. We used the following linear programming relaxation of the problem:
\[
\begin{array}{lll}
\text{minimize} & \displaystyle\sum_{e\in E} c_e x_e + \sum_{i\in V} \pi_i(1-y_i) & \\
\text{subject to} & \displaystyle\sum_{e\in\delta(S)} x_e \geq y_i, & \forall S\subseteq V-r,\ S\neq\emptyset,\ \forall i\in S,\\
& y_r = 1, & \\
& y_i \geq 0, & \forall i\in V,\\
& x_e \geq 0, & \forall e\in E.
\end{array}
\]
In the 3-approximation algorithm given in Section 4.4, we found an optimal LP solution $(x^*,y^*)$, and for a specified value of $\alpha$, built a Steiner tree on all nodes such that $y_i^* \geq \alpha$. We claimed that the cost of the edges in the tree is within a factor of $2/\alpha$ of the fractional cost of the tree edges in the LP solution, while the cost of the penalties is within a factor of $1/(1-\alpha)$ of the cost of the penalties in the LP solution. Thus if $\alpha$ is close to 1, the cost of the tree edges is within a factor of two of the corresponding cost of the LP, while if $\alpha$ is close to 0, our penalty cost is close to the corresponding cost of the LP. In Section 4.4 we set $\alpha = 2/3$ to trade off the costs of the tree edges and the penalties, resulting in a performance guarantee of 3. Suppose instead that we choose the value of $\alpha$ randomly rather than considering just one value of $\alpha$. We will see that this improves the performance guarantee from 3 to about 2.54.

Recall Lemma 4.6 from Section 4.4, which allowed us to bound the cost of the spanning tree $T$ constructed in terms of $\alpha$ and the LP solution $(x^*,y^*)$.

Lemma 5.14 (Lemma 4.6):
\[
\sum_{e\in T} c_e \leq \frac{2}{\alpha}\sum_{e\in E} c_e x_e^*.
\]
Because the bound becomes infinite as $\alpha$ tends to zero, we don't want to choose $\alpha$ too close to zero. Instead, we will choose $\alpha$ uniformly from the range $[\gamma, 1]$, and will later decide
how to set $\gamma$. Recall that in computing the expected value of a continuous random variable $X$, if that variable has a probability density function $f(x)$ over the domain $[a,b]$ (specifying the probability that $X = x$), then we compute the expectation of $X$ by integrating $f(x)x\,dx$ over that interval. The probability density function for a uniform random variable over $[\gamma,1]$ is the constant $1/(1-\gamma)$. We can then analyze the expected cost of the tree for the randomized algorithm below.

Lemma 5.15:
\[
E\left[\sum_{e\in T} c_e\right] \leq \left(\frac{2}{1-\gamma}\ln\frac{1}{\gamma}\right)\sum_{e\in E} c_e x_e^*.
\]

Proof. Using simple algebra and calculus, we obtain
\[
E\left[\sum_{e\in T} c_e\right] \leq E\left[\frac{2}{\alpha}\sum_{e\in E} c_e x_e^*\right]
= E\left[\frac{2}{\alpha}\right]\sum_{e\in E} c_e x_e^*
= \left(\frac{1}{1-\gamma}\int_\gamma^1 \frac{2}{x}\,dx\right)\sum_{e\in E} c_e x_e^*
= \frac{2}{1-\gamma}\Big[\ln x\Big]_\gamma^1 \sum_{e\in E} c_e x_e^*
= \left(\frac{2}{1-\gamma}\ln\frac{1}{\gamma}\right)\sum_{e\in E} c_e x_e^*.
\]
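The instance-independent factor $E[2/\alpha]$ computed in this proof can be checked numerically; the Monte Carlo estimate below is our own illustration:

```python
import math
import random

def expected_factor_mc(gamma, samples=200_000, seed=0):
    """Monte Carlo estimate of E[2/alpha] for alpha uniform on [gamma, 1]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        alpha = gamma + (1.0 - gamma) * rng.random()
        total += 2.0 / alpha
    return total / samples

gamma = math.exp(-0.5)  # the choice used in Corollary 5.18
closed_form = 2.0 / (1.0 - gamma) * math.log(1.0 / gamma)
print(closed_form)                # ≈ 2.541: the coefficient (2/(1-gamma)) ln(1/gamma)
print(expected_factor_mc(gamma))  # should agree to about two decimal places
```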
The expected penalty for the vertices not in the tree is also easy to analyze.

Lemma 5.16:
\[
E\left[\sum_{i\in V-V(T)} \pi_i\right] \leq \frac{1}{1-\gamma}\sum_{i\in V}\pi_i(1-y_i^*).
\]
Proof. Let $U = \{i\in V : y_i^* \geq \alpha\}$; any vertex not in the tree must not be in $U$, so we have $\sum_{i\in V-V(T)}\pi_i \leq \sum_{i\notin U}\pi_i$. Observe that if $y_i^* \geq \gamma$, then the probability that $i\notin U$ is $(1-y_i^*)/(1-\gamma)$. If $y_i^* < \gamma$, then $i\notin U$ with probability 1. But then $1 \leq (1-y_i^*)/(1-\gamma)$, and so the lemma statement follows.

Thus we have the following theorem and corollary.

Theorem 5.17: The expected cost of the solution produced by the randomized algorithm is
\[
E\left[\sum_{e\in T} c_e + \sum_{i\in V-V(T)} \pi_i\right] \leq \left(\frac{2}{1-\gamma}\ln\frac{1}{\gamma}\right)\sum_{e\in E} c_e x_e^* + \frac{1}{1-\gamma}\sum_{i\in V}\pi_i(1-y_i^*).
\]
Corollary 5.18: Using the randomized rounding algorithm with $\gamma = e^{-1/2}$ gives a $(1-e^{-1/2})^{-1}$-approximation algorithm for the prize-collecting Steiner tree problem, where $(1-e^{-1/2})^{-1} \approx 2.54$.
Proof. We want to ensure that the maximum of the coefficients of the two terms in the bound of Theorem 5.17 is as small as possible. The first coefficient is a decreasing function of $\gamma$, and the second is an increasing function. We can minimize the maximum by setting them equal. By setting $\frac{2}{1-\gamma}\ln\frac{1}{\gamma} = \frac{1}{1-\gamma}$, we obtain $\gamma = e^{-1/2}$. Then by Theorem 5.17, the expected cost of the tree obtained is no greater than
\[
\frac{1}{1-e^{-1/2}}\left(\sum_{e\in E} c_e x_e^* + \sum_{i\in V}\pi_i(1-y_i^*)\right) \leq \frac{1}{1-e^{-1/2}}\cdot\mathrm{OPT}.
\]

The derandomization of the algorithm is straightforward: since there are $|V|$ variables $y_i^*$, there are at most $|V|$ distinct values of $y_i^*$. Thus, consider the $|V|$ sets $U_j = \left\{i\in V : y_i^* \geq y_j^*\right\}$. Any possible value of $\alpha$ corresponds to one of these $|V|$ sets. Thus if a random choice of $\alpha$ has a certain expected performance guarantee, the algorithm which tries each set $U_j$ and chooses the best solution generated will have a deterministic performance guarantee at least as good as that of the expectation of the randomized algorithm. Interestingly, the use of randomization allows us to analyze this natural deterministic algorithm, whereas we know of no means of analyzing the deterministic algorithm directly.

Recall from the end of the previous section that we defined the integrality gap of an integer programming formulation to be the worst-case ratio, over all instances of a problem, of the value of an optimal solution to the integer programming formulation to the value of an optimal solution to the linear programming relaxation. We also explained that the integrality gap bounds the performance guarantee that we can get via LP rounding arguments. Consider a graph $G$ that is a cycle on $n$ nodes, and let the penalty for each node be infinite and the cost of each edge be 1. Then there is a feasible solution to the linear programming relaxation of cost $n/2$ obtained by setting each edge variable to $1/2$, while an optimal integral solution has cost $n-1$, obtained by taking every edge of the cycle except one (see Figure 5.4). Hence the integrality gap for this instance of the problem is at least $(n-1)/(n/2) = 2 - 2/n$. Thus we cannot expect a performance guarantee for the prize-collecting Steiner tree problem better than $2-\frac{2}{n}$ using LP rounding arguments with this formulation. In Chapter 14, we will return to the prize-collecting Steiner tree problem and show that the primal-dual method can be used to obtain a 2-approximation algorithm for the problem.
The argument there will show that the integrality gap of the integer programming formulation is at most 2.
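The balancing argument in the proof of Corollary 5.18 can also be checked by a simple grid search; this sketch is our own illustration of the coefficient trade-off in Theorem 5.17:

```python
import math

def tree_coeff(gamma):
    # Coefficient on the fractional edge cost in Theorem 5.17.
    return 2.0 / (1.0 - gamma) * math.log(1.0 / gamma)

def penalty_coeff(gamma):
    # Coefficient on the fractional penalty cost in Theorem 5.17.
    return 1.0 / (1.0 - gamma)

# Find the gamma minimizing the worse of the two coefficients.
best_gamma, best_val = None, float("inf")
for k in range(1, 1000):
    g = k / 1000.0
    val = max(tree_coeff(g), penalty_coeff(g))
    if val < best_val:
        best_gamma, best_val = g, val

print(best_gamma, best_val)  # gamma near e^{-1/2} ≈ 0.607, guarantee near 2.54
```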
5.8 The uncapacitated facility location problem
In this section, we revisit the metric uncapacitated facility location problem introduced in Section 4.5. Recall that in this problem we are given a set of clients $D$ and a set of facilities $F$, along with facility costs $f_i$ for all facilities $i\in F$, and assignment costs $c_{ij}$ for all facilities $i\in F$ and clients $j\in D$. All clients and facilities are points in a metric space, and given clients $j, l$ and facilities $i, k$, we have that $c_{ij} \leq c_{il} + c_{kl} + c_{kj}$. The goal of the problem is to select a subset of facilities to open and an assignment of clients to open facilities so as to minimize the total cost of the open facilities plus the assignment costs. We used the following linear programming
Figure 5.4: Example of the integrality gap of the integer programming formulation for the prize-collecting Steiner tree problem. On the left is a feasible solution for the linear program in which each edge has value 1/2. On the right is an optimal solution for the integer programming formulation in which each edge shown has value 1.

relaxation of the problem:
\[
\begin{array}{lll}
\text{minimize} & \displaystyle\sum_{i\in F} f_i y_i + \sum_{i\in F, j\in D} c_{ij}x_{ij} & \\
\text{subject to} & \displaystyle\sum_{i\in F} x_{ij} = 1, & \forall j\in D,\\
& x_{ij} \leq y_i, & \forall i\in F,\ j\in D,\\
& x_{ij} \geq 0, & \forall i\in F,\ j\in D,\\
& y_i \geq 0, & \forall i\in F,
\end{array}
\]
where the variable $x_{ij}$ indicates whether client $j$ is assigned to facility $i$, and the variable $y_i$ indicates whether facility $i$ is open or not. We also used the dual of the LP relaxation:
\[
\begin{array}{lll}
\text{maximize} & \displaystyle\sum_{j\in D} v_j & \\
\text{subject to} & \displaystyle\sum_{j\in D} w_{ij} \leq f_i, & \forall i\in F,\\
& v_j - w_{ij} \leq c_{ij}, & \forall i\in F,\ j\in D,\\
& w_{ij} \geq 0, & \forall i\in F,\ j\in D.
\end{array}
\]
Finally, given an LP solution $(x^*,y^*)$, we said that a client $j$ neighbors a facility $i$ if $x_{ij}^* > 0$. We denote the neighbors of $j$ as $N(j) = \{i\in F : x_{ij}^* > 0\}$, and the neighbors of the neighbors of $j$ as $N^2(j) = \{k\in D : \exists i\in N(j), x_{ik}^* > 0\}$. Recall that we showed in Lemma 4.11 that if $(v^*,w^*)$ is an optimal dual solution and $i\in N(j)$, then the cost of assigning $j$ to $i$ is bounded by $v_j^*$ (that is, $c_{ij} \leq v_j^*$).

We gave a 4-approximation algorithm in Algorithm 4.2 of Section 4.5 that works by choosing an unassigned client $j$ that minimizes the value of $v_j^*$ among all remaining unassigned clients, opening the cheapest facility in the neighborhood $N(j)$, and then assigning $j$ and all clients in $N^2(j)$ to this facility. We showed that for an optimal LP solution $(x^*,y^*)$ and optimal dual solution $v^*$, this gave a solution of cost at most $\sum_{i\in F} f_i y_i^* + 3\sum_{j\in D} v_j^* \leq 4\cdot\mathrm{OPT}$.
Solve LP, get optimal primal solution $(x^*,y^*)$ and dual solution $(v^*,w^*)$
$C \leftarrow D$
$k \leftarrow 0$
while $C \neq \emptyset$ do
    $k \leftarrow k+1$
    Choose $j_k \in C$ that minimizes $v_j^* + C_j^*$ over all $j\in C$
    Choose $i_k \in N(j_k)$ according to the probability distribution $x_{ij_k}^*$
    Assign $j_k$ and all unassigned clients in $N^2(j_k)$ to $i_k$
    $C \leftarrow C - \{j_k\} - N^2(j_k)$

Algorithm 5.1: Randomized rounding algorithm for the uncapacitated facility location problem.
Figure 5.5: Illustration of the proof of Theorem 5.19.

This analysis is a little unsatisfactory in the sense that we bound $\sum_{i\in F} f_i y_i^*$ by OPT, whereas we know that we have the stronger bound $\sum_{i\in F} f_i y_i^* + \sum_{i\in F, j\in D} c_{ij}x_{ij}^* \leq \mathrm{OPT}$. In this section, we show that by using randomized rounding we can modify the algorithm of Section 4.5 slightly and improve the analysis to a 3-approximation algorithm. The basic idea is that once we have selected a client $j$ in the algorithm, instead of opening the cheapest facility in $N(j)$, we use randomized rounding to choose the facility, and open facility $i\in N(j)$ with probability $x_{ij}^*$ (note that $\sum_{i\in N(j)} x_{ij}^* = 1$). This improves the analysis since in the previous version of the algorithm we had to make worst-case assumptions about how far away the cheapest facility would be from the clients assigned to it. In this algorithm we can amortize the costs over all possible choices of facilities in $N(j)$. In order to get our analysis to work, we modify the choice of client selected in each iteration as well. We define $C_j^* = \sum_{i\in F} c_{ij}x_{ij}^*$; that is, the assignment cost incurred by client $j$ in the LP solution $(x^*,y^*)$. We now choose the unassigned client that minimizes $v_j^* + C_j^*$ over all unassigned clients in each iteration. Our new algorithm is given in Algorithm 5.1. Note that the only changes from the previous algorithm of Section 4.5 (Algorithm 4.2) are in the third-to-last and second-to-last lines. We can now analyze this new algorithm.
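The clustering loop of Algorithm 5.1 can be sketched as follows; the fractional and dual values are supplied as input (in practice they would come from an LP solver), and the small instance at the end is our own toy example:

```python
import random

def ufl_round(F, D, x, v, C_lp, rng):
    """One run of the clustering loop of Algorithm 5.1.

    F, D     : lists of facility and client identifiers
    x[i][j]  : fractional assignment x*_{ij} (summing to 1 over i for each client j)
    v[j]     : optimal dual value v*_j
    C_lp[j]  : fractional assignment cost C*_j = sum_i c_{ij} x*_{ij}
    Returns a dict mapping each client to the facility it is assigned to.
    """
    unassigned = set(D)
    assignment = {}
    while unassigned:
        # Choose j_k in C minimizing v*_j + C*_j.
        jk = min(unassigned, key=lambda j: v[j] + C_lp[j])
        # N(j_k): facilities that fractionally serve j_k; sample i_k from x*_{. j_k}.
        nbrs = [i for i in F if x[i][jk] > 0]
        ik = rng.choices(nbrs, weights=[x[i][jk] for i in nbrs])[0]
        # Assign j_k and all unassigned clients in N^2(j_k) to i_k.
        cluster = {j for j in unassigned if any(x[i][j] > 0 for i in nbrs)}
        for j in cluster:
            assignment[j] = ik
        unassigned -= cluster
    return assignment

# Toy fractional solution: facility 0 fully serves client 'a', facility 1 serves 'c',
# and client 'b' is split between them. (Illustrative values, not from a real LP.)
x = {0: {'a': 1.0, 'b': 0.5, 'c': 0.0}, 1: {'a': 0.0, 'b': 0.5, 'c': 1.0}}
v = {'a': 1.0, 'b': 2.0, 'c': 1.5}
C_lp = {'a': 0.5, 'b': 1.0, 'c': 0.5}
print(ufl_round([0, 1], ['a', 'b', 'c'], x, v, C_lp, random.Random(0)))
# 'a' is chosen first, so 'b' lands in its cluster at facility 0; 'c' goes to facility 1
```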
Theorem 5.19: Algorithm 5.1 is a randomized 3-approximation algorithm for the uncapacitated facility location problem.

Proof. In an iteration $k$, the expected cost of the facility opened is
\[
\sum_{i\in N(j_k)} f_i x_{ij_k}^* \leq \sum_{i\in N(j_k)} f_i y_i^*,
\]
using the LP constraint $x_{ij_k}^* \leq y_i^*$. As we argued in Section 4.5, the neighborhoods $N(j_k)$ form a partition of a subset of the facilities, so that the overall expected cost of facilities opened is at most
\[
\sum_k \sum_{i\in N(j_k)} f_i y_i^* \leq \sum_{i\in F} f_i y_i^*.
\]
We now fix an iteration $k$ and let $j$ denote the client $j_k$ selected and let $i$ denote the facility $i_k$ opened. The expected cost of assigning $j$ to $i$ is
\[
\sum_{i\in N(j)} c_{ij}x_{ij}^* = C_j^*.
\]
As can be seen from Figure 5.5, the expected cost of assigning an unassigned client $l\in N^2(j)$ to $i$, where the client $l$ neighbors facility $h$ which neighbors client $j$, is at most
\[
c_{hl} + c_{hj} + \sum_{i\in N(j)} c_{ij}x_{ij}^* = c_{hl} + c_{hj} + C_j^*.
\]
By Lemma 4.11, $c_{hl} \leq v_l^*$ and $c_{hj} \leq v_j^*$, so that this cost is at most $v_l^* + v_j^* + C_j^*$. Then since we chose $j$ to minimize $v_j^* + C_j^*$ among all unassigned clients, we know that $v_j^* + C_j^* \leq v_l^* + C_l^*$. Hence the expected cost of assigning $l$ to $i$ is at most $v_l^* + v_j^* + C_j^* \leq 2v_l^* + C_l^*$. Thus we have that our total expected cost is no more than
\[
\sum_{i\in F} f_i y_i^* + \sum_{j\in D}\left(2v_j^* + C_j^*\right) = \sum_{i\in F} f_i y_i^* + \sum_{i\in F, j\in D} c_{ij}x_{ij}^* + 2\sum_{j\in D} v_j^* \leq 3\,\mathrm{OPT}.
\]
Note that we were able to reduce the performance guarantee from 4 to 3 because the random choice of facility allows us to include the assignment cost $C_j^*$ in the analysis; instead of bounding only the facility cost by OPT, we can bound both the facility cost and part of the assignment cost by OPT.

One can imagine a different type of randomized rounding algorithm: suppose we obtain an optimal LP solution $(x^*,y^*)$ and open each facility $i\in F$ with probability $y_i^*$. Given the open facilities, we then assign each client to the closest open facility. This algorithm has the nice feature that the expected facility cost is $\sum_{i\in F} f_i y_i^*$. However, this simple algorithm clearly has the difficulty that with nonzero probability the algorithm opens no facilities at all, and hence the expected assignment cost is unbounded. We consider a modified version of this algorithm later in the book, in Section 12.1.
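The failure mode just described is easy to quantify: under independent rounding, the probability that no facility at all opens is $\prod_{i\in F}(1-y_i^*)$, which is positive whenever every $y_i^* < 1$. A small sketch (the fractional values are a toy example of our own):

```python
def prob_no_facility_opens(y_star):
    """Pr[no facility opens] when each facility i opens independently w.p. y_i*."""
    p = 1.0
    for yi in y_star:
        p *= 1.0 - yi
    return p

# One unit of fractional "openness" spread evenly over four facilities.
print(prob_no_facility_opens([0.25, 0.25, 0.25, 0.25]))  # (3/4)^4 ≈ 0.316
```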
5.9 Scheduling a single machine with release dates
In this section, we return to the problem considered in Section 4.2 of scheduling a single machine with release dates so as to minimize the weighted sum of completion times. Recall that we are given as input $n$ jobs, each of which has a processing time $p_j > 0$, weight $w_j \geq 0$, and release date $r_j \geq 0$. The values $p_j$, $r_j$, and $w_j$ are all nonnegative integers. We must construct a schedule for these jobs on a single machine such that at most one job is processed at any point in time, no job is processed before its release date, and once a job begins to be processed, it must be processed nonpreemptively; that is, it must be processed completely before any other job can be scheduled. If $C_j$ denotes the time at which job $j$ is finished processing, then the goal is to find the schedule that minimizes $\sum_{j=1}^n w_j C_j$.

In Section 4.2, we gave a linear programming relaxation of the problem. In order to apply randomized rounding, we will use a different integer programming formulation of this problem. In fact, we will not use an integer programming formulation of the problem, but an integer programming relaxation. Solutions in which jobs are scheduled preemptively are feasible; however, the contribution of job $j$ to the objective function is less than $w_j C_j$ unless job $j$ is scheduled nonpreemptively. Thus the integer program is a relaxation, since for any solution corresponding to a nonpreemptive schedule, the objective function value is equal to the sum of weighted completion times of the schedule. Furthermore, although this relaxation is an integer program and has a number of constraints and variables exponential in the size of the problem instance, we will be able to find a solution to it in polynomial time.

We now give the integer programming relaxation. Let $T$ equal $\max_j r_j + \sum_{j=1}^n p_j$, which is the latest possible time any job can be processed in any schedule that processes a job nonpreemptively whenever it can. We introduce variables $y_{jt}$ for $j = 1,\ldots,n$, $t = 1,\ldots,T$, where
\[
y_{jt} = \begin{cases} 1 & \text{if job } j \text{ is processed in time } [t-1,t)\\ 0 & \text{otherwise.} \end{cases}
\]
We derive a series of constraints for the integer program to capture the constraints of the scheduling problem. Since at most one job can be processed at any point in time, for each time $t = 1,\ldots,T$ we impose the constraint
\[
\sum_{j=1}^n y_{jt} \leq 1.
\]
Since each job $j$ must be processed for $p_j$ units of time, for each job $j = 1,\ldots,n$ we impose the constraint
\[
\sum_{t=1}^T y_{jt} = p_j.
\]
Since no job $j$ can be processed before its release date, we set $y_{jt} = 0$ for each job $j = 1,\ldots,n$ and each time $t = 1,\ldots,r_j$. For $t = r_j+1,\ldots,T$, obviously we want $y_{jt}\in\{0,1\}$. Note that this integer program has size that is exponential in the size of the scheduling instance, because $T$ is exponential in the number of bits used to encode $r_j$ and $p_j$.
Finally, we need to express the completion time of job $j$ in terms of the variables $y_{jt}$, $j = 1,\ldots,n$. Given a nonpreemptive schedule, suppose that job $j$ completes at time $D$, and we set the variables $y_{jt}$ to indicate the times at which jobs are processed in the schedule, so that $y_{jt} = 1$ for $t = D-p_j+1,\ldots,D$, whereas $y_{jt} = 0$ otherwise. Observe that if we take the average of the midpoints of each unit of time at which job $j$ is processed ($t-\frac12$ for $t = D-p_j+1,\ldots,D$), we get the midpoint of the processing time of the job, namely $D-\frac{p_j}{2}$. Thus
\[
\frac{1}{p_j}\sum_{t=D-p_j+1}^{D}\left(t-\frac12\right) = D - \frac{p_j}{2}.
\]
Given the settings of the variables $y_{jt}$, we can rewrite this as
\[
\frac{1}{p_j}\sum_{t=1}^{T} y_{jt}\left(t-\frac12\right) = D - \frac{p_j}{2}.
\]
We wish to have variable $C_j$ represent the completion time of job $j$. Rearranging terms, then, we set the variable $C_j$ as follows:
\[
C_j = \frac{1}{p_j}\sum_{t=1}^{T} y_{jt}\left(t-\frac12\right) + \frac{p_j}{2}.
\]
This variable $C_j$ underestimates the completion time of job $j$ when all of the variables $y_{jt}$ that are set to 1 are not consecutive in time. To see this, first start with the case above in which $y_{jt} = 1$ for $t = D-p_j+1,\ldots,D$ for a completion time $D$. By the arguments above, $C_j = D$. If we then modify the variables $y_{jt}$ by successively setting $y_{jt} = 0$ for some $t\in[D-p_j+1, D-1]$ and $y_{jt} = 1$ for some $t \leq D-p_j$, it is clear that the variable $C_j$ only decreases.

The overall integer programming relaxation of the problem we will use is
\[
\begin{array}{llrr}
\text{minimize} & \displaystyle\sum_{j=1}^n w_j C_j & & (5.3)\\
\text{subject to} & \displaystyle\sum_{j=1}^n y_{jt} \leq 1, & t=1,\ldots,T, & (5.4)\\
& \displaystyle\sum_{t=1}^T y_{jt} = p_j, & j=1,\ldots,n, & (5.5)\\
& y_{jt} = 0, & j=1,\ldots,n;\ t=1,\ldots,r_j, & \\
& y_{jt} \in \{0,1\}, & j=1,\ldots,n;\ t=1,\ldots,T, & \\
& C_j = \displaystyle\frac{1}{p_j}\sum_{t=1}^T y_{jt}\left(t-\frac12\right) + \frac{p_j}{2}, & j=1,\ldots,n. & (5.6)
\end{array}
\]
Even though we restrict the variables $y_{jt}$ to take on integer values, the corresponding linear programming relaxation is well known to have optimal solutions for which these variables have integer values (for a simpler case of this phenomenon, see Exercise 4.6).

We can now consider a randomized rounding algorithm for this problem. Let $(y^*, C^*)$ be an optimal solution to the integer programming relaxation. For each job $j$, let $X_j$ be a random variable which is $t-\frac12$ with probability $y_{jt}^*/p_j$; observe that by (5.5), $\sum_{t=1}^T \frac{y_{jt}^*}{p_j} = 1$, so that
the values $y_{jt}^*/p_j$ give a probability distribution on the time $t$ for each job $j$. We defer for the moment a discussion of how to make this run in randomized polynomial time, since $T$ is exponential in the input size and we need to solve the integer program in order to perform the randomized rounding. As in the algorithms of Sections 4.1 and 4.2, we now schedule the jobs as early as possible in the same relative order as the values of the $X_j$. Without loss of generality, suppose that $X_1 \leq X_2 \leq \cdots \leq X_n$. Then we schedule job 1 as early as possible (that is, not before $r_1$), then job 2, and in general schedule job $j$ to start at the maximum of the completion time of job $j-1$ and $r_j$. Let $\hat{C}_j$ be a random variable denoting the completion time of job $j$ in this schedule. We begin the analysis of this algorithm by considering the expected value of $\hat{C}_j$ given a fixed value of $X_j$.
Lemma 5.20: $E[\hat{C}_j \mid X_j = x] \leq p_j + 2x$.

Proof. As we argued in the proof of Lemma 4.2, there cannot be any idle time between $\max_{k=1,\ldots,j} r_k$ and $\hat{C}_j$, and therefore it must be the case that $\hat{C}_j \leq \max_{k=1,\ldots,j} r_k + \sum_{k=1}^j p_k$. Because the ordering of the jobs results from randomized rounding, we let $R$ be a random variable such that $R = \max_{k=1,\ldots,j} r_k$, and let random variable $P = \sum_{k=1}^{j-1} p_k$, so that the bound on $\hat{C}_j$ becomes $\hat{C}_j \leq R + P + p_j$.

First, we bound the value of $R$ given that $X_j = x$. Note that since $y_{kt}^* = 0$ for $t \leq r_k$, it must be the case that $X_k \geq r_k + \frac12$ for any job $k$. Thus,
\[
R \leq \max_{k : X_k \leq X_j} r_k \leq \max_{k : X_k \leq X_j} X_k - \frac12 \leq X_j - \frac12 = x - \frac12.
\]
Now we bound the expected value of $P$ given that $X_j = x$. We can bound it as follows:
\[
E[P \mid X_j = x] = \sum_{k : k\neq j} p_k \Pr[\text{job } k \text{ is processed before } j \mid X_j = x]
= \sum_{k : k\neq j} p_k \Pr[X_k \leq X_j \mid X_j = x]
= \sum_{k : k\neq j} p_k \sum_{t=1}^{x+\frac12} \Pr\left[X_k = t - \frac12\right].
\]
Since we set $X_k = t-\frac12$ with probability $y_{kt}^*/p_k$, then
\[
E[P \mid X_j = x] = \sum_{k : k\neq j} p_k \sum_{t=1}^{x+\frac12} \frac{y_{kt}^*}{p_k} = \sum_{t=1}^{x+\frac12}\sum_{k : k\neq j} y_{kt}^*.
\]
Constraint (5.4) of the integer programming relaxation imposes that $\sum_{k : k\neq j} y_{kt}^* \leq 1$ for all times $t$, so then
\[
E[P \mid X_j = x] = \sum_{t=1}^{x+\frac12}\sum_{k : k\neq j} y_{kt}^* \leq x + \frac12.
\]
Therefore,
\[
E[\hat{C}_j \mid X_j = x] \leq p_j + E[R \mid X_j = x] + E[P \mid X_j = x] \leq p_j + \left(x-\frac12\right) + \left(x+\frac12\right) = p_j + 2x.
\]
Electronic web edition. Copyright 2011 by David P. Williamson and David B. Shmoys. To be published by Cambridge University Press
Given the lemma above, we can prove the following theorem.

Theorem 5.21: The randomized rounding algorithm is a randomized 2-approximation algorithm for the single-machine scheduling problem with release dates minimizing the sum of weighted completion times.

Proof. Using the lemma above, we have that
\[
E[\hat{C}_j] = \sum_{t=1}^T E\left[\hat{C}_j \,\Big|\, X_j = t-\tfrac12\right]\Pr\left[X_j = t-\tfrac12\right]
\leq \sum_{t=1}^T \left(p_j + 2\left(t-\tfrac12\right)\right)\Pr\left[X_j = t-\tfrac12\right]
= p_j + 2\sum_{t=1}^T \left(t-\tfrac12\right)\frac{y_{jt}^*}{p_j}
= 2\left(\frac{p_j}{2} + \frac{1}{p_j}\sum_{t=1}^T y_{jt}^*\left(t-\tfrac12\right)\right)
= 2C_j^*, \tag{5.7}
\]
where equation (5.7) follows from the definition of $C_j^*$ in the integer programming relaxation (equation (5.6)). Thus we have that
\[
E\left[\sum_{j=1}^n w_j\hat{C}_j\right] = \sum_{j=1}^n w_j E[\hat{C}_j] \leq 2\sum_{j=1}^n w_j C_j^* \leq 2\,\mathrm{OPT},
\]
since $\sum_{j=1}^n w_j C_j^*$ is the objective function of the integer programming relaxation and thus a lower bound on OPT.

Unlike the previous randomized algorithms of the section, we do not know directly how to derandomize this algorithm, although a deterministic 2-approximation algorithm for this problem does exist.

We now show that the integer programming relaxation can be solved in polynomial time, and that the randomized rounding algorithm can be made to run in polynomial time. First, sort the jobs in order of nonincreasing ratio of weight to processing time; we assume jobs are then indexed in this order so that $\frac{w_1}{p_1} \geq \frac{w_2}{p_2} \geq \cdots \geq \frac{w_n}{p_n}$. Now we use the following rule to create a (possibly preemptive) schedule: we always schedule the job of minimum index which is available but not yet completely processed. More formally, as $t$ varies from 1 to $T$, let $j$ be the smallest index such that $r_j \leq t-1$ and $\sum_{z=1}^{t-1} y_{jz}^* < p_j$, if such a job $j$ exists. Then set $y_{jt}^* = 1$ for job $j$ and $y_{kt}^* = 0$ for all jobs $k\neq j$. If no such $j$ exists, we set $y_{jt}^* = 0$ for all jobs $j$. Since in creating this schedule there are only $n$ points in time corresponding to release dates $r_j$ and $n$ points in time at which a job has finished being processed, there are at most $2n$ points in time at which the index attaining the minimum might change. Thus we can actually give the schedule as a sequence of at most $2n$ intervals of time, specifying which job (if any) is scheduled for each interval. It is not hard to see that we can compute this set of intervals in polynomial time without explicitly enumerating each time $t$. Furthermore, from the discussion above in which we explained how to express the variable $C_j$ in terms of the $y_{jt}$, we know that if $y_{jt} = 1$
for $t = a+1,\ldots,b$, then $\sum_{t=a+1}^b y_{jt}\left(t-\frac12\right) = (b-a)\left(b - \frac12(b-a)\right)$, so that we can compute the values of the variables $C_j^*$ from these intervals.

The randomized rounding algorithm can be made to run in polynomial time because it is equivalent to the following algorithm: for each job $j$, choose a value $\alpha_j \in [0,1]$ independently and uniformly. Let $X_j$ be the $\alpha_j$-point of job $j$: that is, the time when $\alpha_j p_j$ units of job $j$ have been processed in the preemptive schedule. Observe that it is easy to compute this point in time from the intervals describing the preemptive schedule. Then schedule the jobs according to the ordering of the $X_j$, as in the randomized rounding algorithm. To see why this is equivalent to the original algorithm, consider the probability that $X_j \in [t-1,t)$. This is simply the probability that $\alpha_j p_j$ units of job $j$ have finished processing in this interval, which is the probability that
\[
\sum_{s=1}^{t-1} y_{js}^* \leq \alpha_j p_j < \sum_{s=1}^{t} y_{js}^*.
\]
This, then, is the probability that $\alpha_j \in \left[\frac{1}{p_j}\sum_{s=1}^{t-1} y_{js}^*,\ \frac{1}{p_j}\sum_{s=1}^{t} y_{js}^*\right)$; since $\alpha_j$ is chosen uniformly, this probability is $y_{jt}^*/p_j$. So the probability that $X_j \in [t-1,t)$ in the $\alpha$-point algorithm is the same as in the original algorithm, and the proof of the performance guarantee goes through with some small modifications.

Interestingly, one can also prove that if a single value of $\alpha$ is chosen uniformly from $[0,1]$, and $X_j$ is the $\alpha$-point of job $j$, then scheduling jobs according to the ordering of the $X_j$ is also a 2-approximation algorithm. Proving this fact is beyond the scope of this section. However, the algorithm in which a single value of $\alpha$ is chosen is easy to derandomize, because it is possible to show that at most $n$ different schedules can result from all possible choices of $\alpha\in[0,1]$. Then to derive a deterministic 2-approximation algorithm, we need only enumerate the $n$ different schedules, and choose the one that minimizes the weighted sum of the completion times.

We need only show that the constructed solution $(y^*, C^*)$ is in fact optimal. This is left to the reader to complete by a straightforward interchange argument in Exercise 5.11.

Lemma 5.22: The solution $(y^*, C^*)$ given above to the integer program is an optimal solution.
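The $\alpha$-point construction can be sketched as follows. This is a simplified illustration of our own: it enumerates unit time slots rather than using the polynomial-size interval representation described above, and it assumes jobs are given pre-sorted by nonincreasing $w_j/p_j$:

```python
import random

def preemptive_schedule(jobs):
    """Ratio-rule preemptive schedule, built one unit slot at a time.

    jobs: list of (p_j, r_j) pairs, assumed already sorted by nonincreasing w_j/p_j.
    Returns slots, where slots[t] is the job run in [t, t+1), or None if idle.
    """
    T = max(r for _, r in jobs) + sum(p for p, _ in jobs)
    done = [0] * len(jobs)
    slots = []
    for t in range(T):
        # Smallest-index job that is released and not yet fully processed.
        j = next((j for j, (p, r) in enumerate(jobs) if r <= t and done[j] < p), None)
        if j is not None:
            done[j] += 1
        slots.append(j)
    return slots

def alpha_points(jobs, slots, alphas):
    """X_j: the slot by which alpha_j * p_j units of job j finish preemptively."""
    points = [None] * len(jobs)
    processed = [0] * len(jobs)
    for t, j in enumerate(slots):
        if j is not None:
            processed[j] += 1
            if points[j] is None and processed[j] >= alphas[j] * jobs[j][0]:
                points[j] = t + 1  # the alpha-point lies in slot [t, t+1)
    return points

def alpha_point_schedule(jobs, rng):
    """Schedule jobs nonpreemptively in order of independent alpha-points."""
    slots = preemptive_schedule(jobs)
    alphas = [rng.random() for _ in jobs]
    points = alpha_points(jobs, slots, alphas)
    completion = [0] * len(jobs)
    t = 0
    for j in sorted(range(len(jobs)), key=lambda j: points[j]):
        t = max(t, jobs[j][1]) + jobs[j][0]  # start no earlier than r_j
        completion[j] = t
    return completion

# Two jobs released at time 0: job 0 has p = 2 (higher w/p ratio), job 1 has p = 1.
print(alpha_point_schedule([(2, 0), (1, 0)], random.Random(42)))  # [2, 3]
```

On this tiny instance the preemptive schedule runs job 0 in slots $[0,2)$ and job 1 in $[2,3)$, so job 0's $\alpha$-point always precedes job 1's and the nonpreemptive schedule is the same for every draw of the $\alpha_j$.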
5.10 Chernoff bounds
This section introduces some theorems that are extremely useful for analyzing randomized rounding algorithms. In essence, the theorems say that it is very likely that the sum of $n$ independent 0-1 random variables is not far away from the expected value of the sum. In the subsequent two sections, we will illustrate the usefulness of these bounds. We begin by stating the main theorems.

Theorem 5.23: Let $X_1,\ldots,X_n$ be $n$ independent 0-1 random variables, not necessarily identically distributed. Then for $X = \sum_{i=1}^n X_i$ and $\mu = E[X]$, $L \leq \mu \leq U$, and $\delta > 0$,
\[
\Pr[X \geq (1+\delta)U] < \left(\frac{e^\delta}{(1+\delta)^{(1+\delta)}}\right)^U
\]
and
\[
\Pr[X \leq (1-\delta)L] < \left(\frac{e^{-\delta}}{(1-\delta)^{(1-\delta)}}\right)^L.
\]

Theorem 5.24: Let $X_1,\ldots,X_n$ be $n$ independent random variables, not necessarily identically distributed, such that each $X_i$ takes the value $a_i$ with some probability and the value 0 otherwise, where $0 < a_i \leq 1$. Then for $X = \sum_{i=1}^n X_i$ and $\mu = E[X]$, $L \leq \mu \leq U$, and $\delta > 0$,
\[
\Pr[X \geq (1+\delta)U] < \left(\frac{e^\delta}{(1+\delta)^{(1+\delta)}}\right)^U
\]
and
\[
\Pr[X \leq (1-\delta)L] < \left(\frac{e^{-\delta}}{(1-\delta)^{(1-\delta)}}\right)^L.
\]

The proofs use Markov's inequality.

Lemma 5.25 (Markov's inequality): If $X$ is a random variable taking only nonnegative values, then for any $a > 0$,
\[
\Pr[X \geq a] \leq \frac{E[X]}{a}.
\]

Proof. Since $X$ takes on nonnegative values, $E[X] \geq a\Pr[X \geq a]$, and the inequality follows.

Now we can prove Theorem 5.24.

Proof of Theorem 5.24. We only prove the first bound in the theorem; the proof of the other bound is analogous. Note that if $E[X] = 0$, then $X = 0$ and the bound holds trivially, so we can assume $E[X] > 0$ and $E[X_i] > 0$ for some $i$. We ignore all $i$ such that $E[X_i] = 0$, since $X_i = 0$ for such $i$. Let $p_i = \Pr[X_i = a_i]$. Since $E[X_i] > 0$, $p_i > 0$. Then $\mu = E[X] = \sum_{i=1}^n p_i a_i \leq U$. For any $t > 0$,
\[
\Pr[X \geq (1+\delta)U] = \Pr\left[e^{tX} \geq e^{t(1+\delta)U}\right].
\]
By Markov's inequality,
\[
\Pr\left[e^{tX} \geq e^{t(1+\delta)U}\right] \leq \frac{E[e^{tX}]}{e^{t(1+\delta)U}}. \tag{5.8}
\]
Now
\[
E[e^{tX}] = E\left[e^{t\sum_{i=1}^n X_i}\right] = E\left[\prod_{i=1}^n e^{tX_i}\right] = \prod_{i=1}^n E[e^{tX_i}], \tag{5.9}
\]
where the last equality follows by the independence of the $X_i$. Then for each $i$,
\[
E[e^{tX_i}] = (1-p_i) + p_i e^{ta_i} = 1 + p_i\left(e^{ta_i}-1\right).
\]
We will later show that $e^{ta_i}-1 \leq a_i(e^t-1)$ for $t > 0$, so that $E[e^{tX_i}] \leq 1 + p_i a_i(e^t-1)$. Using that $1+x < e^x$ for $x > 0$, and that $t, a_i, p_i > 0$, we obtain
\[
E[e^{tX_i}] < e^{p_i a_i(e^t-1)}.
\]
Plugging these back into equation (5.9), we have
\[
E[e^{tX}] < \prod_{i=1}^n e^{p_i a_i(e^t-1)} = e^{\mu(e^t-1)} \leq e^{U(e^t-1)}.
\]
Setting $t = \ln(1+\delta) > 0$, we see that
\[
\Pr[X \geq (1+\delta)U] \leq \frac{E[e^{tX}]}{e^{t(1+\delta)U}} < \frac{e^{U(e^t-1)}}{e^{t(1+\delta)U}} = \left(\frac{e^\delta}{(1+\delta)^{(1+\delta)}}\right)^U,
\]
as desired.

Finally, to see that e^{a_i t} − 1 ≤ a_i (e^t − 1) for t > 0, let f(t) = a_i (e^t − 1) − (e^{a_i t} − 1). Then f′(t) = a_i e^t − a_i e^{a_i t} ≥ 0 for any t ≥ 0 given that 0 < a_i ≤ 1; thus f(t) is nondecreasing for t ≥ 0. Since f(0) = 0 and the function f is nondecreasing, the inequality holds.

The right-hand sides of the inequalities in Theorems 5.23 and 5.24 are a bit complicated, and so it will be useful to consider variants of the results in which the right-hand side is simpler, at the cost of restricting the results somewhat.

Lemma 5.26: For 0 ≤ δ ≤ 1, we have that

    ( e^δ / (1 + δ)^{(1+δ)} )^U ≤ e^{−Uδ²/3},

and for 0 ≤ δ < 1, we have that

    ( e^{−δ} / (1 − δ)^{(1−δ)} )^L ≤ e^{−Lδ²/2}.
Proof. For the first inequality, we take the logarithm of both sides. We would like to show that U(δ − (1 + δ) ln(1 + δ)) ≤ −Uδ²/3. We observe that the inequality holds for δ = 0; if we can show that the derivative of the left-hand side is no more than that of the right-hand side for 0 ≤ δ ≤ 1, the inequality will hold for 0 ≤ δ ≤ 1. Taking derivatives of both sides, we need to show that −U ln(1 + δ) ≤ −2Uδ/3. Letting f(δ) = −U ln(1 + δ) + 2Uδ/3, we need to show that f(δ) ≤ 0 on [0, 1]. Note that f(0) = 0 and f(1) ≤ 0 since −ln 2 ≈ −0.693 < −2/3. As long as the function f(δ) is convex on [0, 1] (that is, f″(δ) ≥ 0), we may conclude that f(δ) ≤ 0 on [0, 1], in the convex analog of Fact 5.9. We observe that f′(δ) = −U/(1 + δ) + 2U/3 and f″(δ) = U/(1 + δ)², so that f″(δ) ≥ 0 for δ ∈ [0, 1], and the inequality is shown.

Electronic web edition. Copyright 2011 by David P. Williamson and David B. Shmoys. To be published by Cambridge University Press

We turn to the second inequality for the case 0 ≤ δ < 1. Taking the logarithm of both sides, we would like to show that L(−δ − (1 − δ) ln(1 − δ)) ≤ −Lδ²/2. The inequality holds for δ = 0, and will hold for 0 < δ < 1 if the derivative of the left-hand side is no more than the derivative of the right-hand side for 0 ≤ δ < 1. Taking derivatives of both sides, we would like to show that L ln(1 − δ) ≤ −Lδ
for 0 ≤ δ < 1. Again the inequality holds for δ = 0 and will hold for 0 ≤ δ < 1 if the derivative of the left-hand side is no more than the derivative of the right-hand side for 0 ≤ δ < 1. Taking derivatives of both sides again, we obtain −L/(1 − δ) ≤ −L, which holds for 0 ≤ δ < 1.

It will sometimes be useful to provide a bound on the probability that X ≤ (1 − δ)L in the case δ = 1. Notice that since the variables X_i are either 0-1 or 0-a_i, this is asking for a bound on the probability that X = 0. We can give a bound as follows.

Lemma 5.27: Let X_1, …, X_n be n independent random variables, not necessarily identically distributed, such that each X_i takes either the value 0 or the value a_i for some 0 < a_i ≤ 1. Then for X = ∑_{i=1}^n X_i and μ = E[X], L ≤ μ,

    Pr[X = 0] < e^{−L}.

Proof. We assume μ = E[X] > 0, since otherwise X = 0 and L ≤ μ = 0 and the bound holds trivially. Let p_i = Pr[X_i = a_i]. Then μ = ∑_{i=1}^n a_i p_i and

    Pr[X = 0] = ∏_{i=1}^n (1 − p_i).
Applying the arithmetic/geometric mean inequality from Fact 5.8, we get that

    ∏_{i=1}^n (1 − p_i) ≤ [ (1/n) ∑_{i=1}^n (1 − p_i) ]^n = [ 1 − (1/n) ∑_{i=1}^n p_i ]^n.

Since each a_i ≤ 1, we then obtain

    [ 1 − (1/n) ∑_{i=1}^n p_i ]^n ≤ [ 1 − (1/n) ∑_{i=1}^n a_i p_i ]^n = [ 1 − μ/n ]^n.

Then using the fact that 1 − x < e^{−x} for x > 0, we get

    [ 1 − μ/n ]^n < e^{−μ} ≤ e^{−L}.
As a corollary, we can then extend the bound of Lemma 5.26 to the case δ = 1. In fact, since the probability that X < (1 − δ)L for δ > 1 and L ≥ 0 is 0, we can extend the bound to any positive δ.

Corollary 5.28: Let X_1, …, X_n be n independent random variables, not necessarily identically distributed, such that each X_i takes either the value 0 or the value a_i for some a_i ≤ 1. Then for X = ∑_{i=1}^n X_i and μ = E[X], 0 ≤ L ≤ μ, and δ > 0,

    Pr[X ≤ (1 − δ)L] < e^{−Lδ²/2}.
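These tail bounds are easy to check numerically. The following Python sketch (not part of the text; the distribution and the parameters are chosen only for illustration) estimates Pr[X ≥ (1 + δ)μ] for a sum of independent 0-1 random variables by sampling, and compares it against the simplified bound e^{−μδ²/3} of Lemma 5.26 with U = μ:

```python
import math
import random

def chernoff_demo(probs, delta, trials=5000, seed=0):
    """Empirically compare Pr[X >= (1+delta)*mu] with the bound e^{-mu*delta^2/3},
    where X is a sum of independent 0-1 random variables with Pr[X_i = 1] = probs[i]."""
    rng = random.Random(seed)
    mu = sum(probs)                      # E[X], used as U in Theorem 5.23
    threshold = (1 + delta) * mu
    hits = 0
    for _ in range(trials):
        x = sum(1 for p in probs if rng.random() < p)
        if x >= threshold:
            hits += 1
    empirical = hits / trials
    bound = math.exp(-mu * delta ** 2 / 3)   # Lemma 5.26 form of the upper-tail bound
    return empirical, bound

emp, bnd = chernoff_demo(probs=[0.1] * 200, delta=0.5)
assert emp <= bnd  # the observed tail frequency never exceeds the Chernoff bound
```

Here μ = 20 and δ = 1/2, so the bound is e^{−5/3} ≈ 0.19, while the observed tail frequency is far smaller; the bound is loose but, as the proof guarantees, never violated.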
Random sampling and randomized rounding of linear programs
5.11
Integer multicommodity ﬂows
To see how Chernoff bounds can be used in the context of randomized rounding, we will apply them to the minimum-capacity multicommodity flow problem. In this problem, we are given as input an undirected graph G = (V, E) and k pairs of vertices s_i, t_i ∈ V, i = 1, …, k. The goal is to find, for each i = 1, …, k, a single simple path from s_i to t_i so as to minimize the maximum number of paths containing the same edge. This problem arises in routing wires on chips. In this problem, k wires need to be routed, each wire from some point s_i on the chip to another point t_i on the chip. Wires must be routed through channels on the chip, which correspond to edges in a graph. The goal is to route the wires so as to minimize the channel capacity needed; that is, the number of wires routed through the same channel. The problem as stated here is a special case of a more interesting problem, since often the wires must join up three or more points on a chip.

We give an integer programming formulation of the problem. Let P_i be the set of all possible simple paths P in G from s_i to t_i, where P is the set of edges in the path. We create a 0-1 variable x_P for each path P ∈ P_i to indicate when path P from s_i to t_i is used. Then the total number of paths using an edge e ∈ E is simply ∑_{P: e∈P} x_P. We create another decision variable W to denote the maximum number of paths using an edge, so that our objective function is to minimize W. We have the constraint that

    ∑_{P: e∈P} x_P ≤ W

for each edge e ∈ E. Finally, we need to choose some path P ∈ P_i for every s_i-t_i pair, so that

    ∑_{P∈P_i} x_P = 1

for each i = 1, …, k. This gives us the following integer programming formulation of the minimum-capacity multicommodity flow problem:

    minimize    W                                              (5.10)
    subject to  ∑_{P∈P_i} x_P = 1,        i = 1, …, k,
                ∑_{P: e∈P} x_P ≤ W,       e ∈ E,               (5.11)
                x_P ∈ {0, 1},             ∀P ∈ P_i, i = 1, …, k.
The integer program can be relaxed to a linear program by replacing the constraints x_P ∈ {0, 1} with x_P ≥ 0. We claim for now that this linear program can be solved in polynomial time and that at most a polynomial number of the variables x_P of an optimal solution can be nonzero. We now apply randomized rounding to obtain a solution to the problem. For i = 1, …, k, we choose exactly one path P ∈ P_i according to the probability distribution x*_P on paths P ∈ P_i, where x* is an optimal solution of value W*. Assuming W* is large enough, we can show that the total number of paths going through any edge is close to W* by using the Chernoff bound from Theorem 5.23. Let n be the number of vertices in the graph. Recall that in Section 1.7 we said that a probabilistic event happens with high probability if the probability that it does not occur is at most n^{−c} for some integer c ≥ 1.
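The rounding step itself is straightforward to implement once the fractional solution is in hand. Below is a Python sketch (illustrative only; the path lists and LP values are hypothetical inputs, and `round_paths` is not a name from the text) that chooses one path per commodity according to the distribution x*_P and reports the resulting maximum edge congestion:

```python
import random
from collections import Counter

def round_paths(paths_per_commodity, lp_values, seed=0):
    """For each commodity i, pick exactly one s_i-t_i path, choosing path P
    with probability x*_P (lp_values[i] sums to 1 over commodity i's paths).
    Each path is a list of edges, each edge a hashable tuple. Returns the
    chosen paths and the maximum number of chosen paths sharing one edge."""
    rng = random.Random(seed)
    chosen = []
    for paths, weights in zip(paths_per_commodity, lp_values):
        # sample a single path according to the LP probability distribution
        chosen.append(rng.choices(paths, weights=weights, k=1)[0])
    congestion = Counter(e for path in chosen for e in path)
    return chosen, max(congestion.values())
```

Theorem 5.29 below says that when W* is at least logarithmic in n, the congestion returned by such a rounding is, with high probability, not much larger than W*.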
Theorem 5.29: If W* ≥ c ln n for some constant c, then with high probability, the total number of paths using any edge is at most W* + √(c W* ln n).

Proof. For each e ∈ E, define random variables X_e^i, where X_e^i = 1 if the chosen s_i-t_i path uses edge e, and X_e^i = 0 otherwise. Then the number of paths using edge e is Y_e = ∑_{i=1}^k X_e^i. We want to bound max_{e∈E} Y_e, and show that this is close to the LP value W*. Certainly

    E[Y_e] = ∑_{i=1}^k ∑_{P∈P_i: e∈P} x*_P = ∑_{P: e∈P} x*_P ≤ W*,

by constraint (5.11) from the LP. For a fixed edge e, the random variables X_e^i are independent, so we can apply the Chernoff bound of Theorem 5.23. Set δ = √((c ln n)/W*). Since W* ≥ c ln n by assumption, it follows that δ ≤ 1. Then by Theorem 5.23 and Lemma 5.26 with U = W*,

    Pr[Y_e ≥ (1 + δ)W*] < e^{−W*δ²/3} = e^{−(c ln n)/3} = 1/n^{c/3}.

Also (1 + δ)W* = W* + √(c W* ln n). Since there can be at most n² edges,

    Pr[ max_{e∈E} Y_e ≥ (1 + δ)W* ] ≤ ∑_{e∈E} Pr[Y_e ≥ (1 + δ)W*] ≤ n² · (1/n^{c/3}) = n^{2−c/3}.
For a constant c ≥ 12, this ensures that the theorem statement fails to hold with probability at most 1/n², and by increasing c we can make the probability as small as we like. Observe that since W* ≥ c ln n, the theorem above guarantees that the randomized algorithm produces a solution of no more than 2W* ≤ 2 OPT. However, the algorithm might produce a solution considerably closer to optimal if W* ≫ c ln n. We also observe the following corollary.

Corollary 5.30: If W* ≥ 1, then with high probability, the total number of paths using any edge is O(log n) · W*.

Proof. We repeat the proof above with U = (c ln n)W* and δ = 1.

In fact, the statement of the corollary can be sharpened by replacing the O(log n) with O(log n / log log n) (see Exercise 5.13). To solve the linear program in polynomial time, we show that it is equivalent to a polynomially-sized linear program; we leave this as an exercise to the reader (Exercise 5.14, to be precise).
5.12
Random sampling and coloring dense 3colorable graphs
In this section we turn to another application of Chernoff bounds. We consider coloring a δ-dense 3-colorable graph. We say that a graph is dense if for some constant α the number of edges in the graph is at least α (n choose 2); in other words, some constant fraction of the edges that could exist in the graph do exist. A δ-dense graph is a special case of a dense graph. A graph is δ-dense if every node has at least δn neighbors for some constant δ; that is, every node has some constant fraction of the neighbors it could have. Finally, a graph is k-colorable if each of the
nodes can be assigned exactly one of k colors in such a way that no edge has its two endpoints assigned the same color. In general, it is NP-complete to decide whether a graph is 3-colorable; in fact, the following is known.

Theorem 5.31: It is NP-hard to decide if a graph can be colored with only 3 colors, or needs at least 5 colors.

Theorem 5.32: Assuming a variant of the unique games conjecture, for any constant k > 3, it is NP-hard to decide if a graph can be colored with only 3 colors, or needs at least k colors.

In Sections 6.5 and 13.2 we will discuss approximation algorithms for coloring any 3-colorable graph. Here we will show that with high probability we can properly color any δ-dense 3-colorable graph in polynomial time. While this is not an approximation algorithm, it is a useful application of Chernoff bounds, which we will use again in Section 12.4. In this case, we use the bounds to show that if we know the correct coloring for a small, randomly chosen sample of a δ-dense graph, we can give a polynomial-time algorithm that with high probability successfully colors the rest of the graph. This would seem to pose a problem, though, since we do not know the coloring for the sample. Nevertheless, if the sample is no larger than O(log n), we can enumerate all possible colorings of the sample in polynomial time, and run the algorithm above for each coloring. Since one of the colorings of the sample will be the correct one, for at least one of the possible colorings of the sample the algorithm will result in a correct coloring of the graph with high probability. More specifically, given a δ-dense graph, we will select a random subset S ⊆ V of O((ln n)/δ) vertices by including each vertex in the subset with probability (3c ln n)/(δn) for some constant c. We will show first that the set size is no more than (6c ln n)/δ with high probability, and then that with high probability, every vertex has at least one neighbor in S.
Thus given a correct coloring of S, we can use the information about the coloring of S to deduce the colors of the rest of the vertices. Since each vertex has a neighbor in S, its color is restricted to be one of the two remaining colors, and this turns out to be enough of a restriction that we can infer the remaining coloring. Finally, although we do not know the correct coloring of S, we can run this algorithm for each of the 3^{(6c ln n)/δ} = n^{O(c/δ)} possible colorings of S. One of the colorings of S will be the correct coloring, and thus in at least one run of the algorithm we will be able to color the graph successfully.

Lemma 5.33: With probability at most n^{−c/δ}, the set S has size |S| ≥ (6c ln n)/δ.

Proof. We use the Chernoff bound (Theorem 5.23 and Lemma 5.26). Let X_i be a 0-1 random variable indicating whether vertex i is included in S. Then since each vertex is included with probability (3c ln n)/(δn), μ = E[∑_{i=1}^n X_i] = (3c ln n)/δ. Applying the lemma with U = (3c ln n)/δ, the probability that |S| ≥ 2U is at most e^{−μ/3} = n^{−c/δ}.

Lemma 5.34: The probability that a given vertex v ∉ S has no neighbor in S is at most n^{−3c}.

Proof. Let X_i be a 0-1 random variable indicating whether the ith neighbor of v is in S or not. Then μ = E[∑_i X_i] ≥ 3c ln n, since v has at least δn neighbors. Then applying Lemma 5.27 with L = 3c ln n, the probability that v has no neighbors in S is no more than Pr[∑_i X_i = 0] ≤ e^{−L} = n^{−3c}.

Corollary 5.35: With probability at least 1 − 2n^{−(c−1)}, |S| ≤ (6c ln n)/δ and every v ∉ S has at least one neighbor in S.
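The sampling step behind Lemmas 5.33 and 5.34 can be sketched directly in Python. This fragment is illustrative, not from the text: the adjacency structure and the constant c are hypothetical choices.

```python
import math
import random

def sample_seed_set(adj, delta, c=4, seed=0):
    """adj maps each vertex of a delta-dense graph to its set of neighbors.
    Include each vertex in S independently with probability (3c ln n)/(delta n),
    then check whether every vertex outside S has at least one neighbor in S."""
    rng = random.Random(seed)
    n = len(adj)
    p = min(1.0, 3 * c * math.log(n) / (delta * n))
    S = {v for v in adj if rng.random() < p}
    # the coverage property that Lemma 5.34 shows holds with high probability
    covered = all(adj[v] & S for v in adj if v not in S)
    return S, covered

# On a complete graph (so delta can be taken close to 1), S stays small
# and, with overwhelming probability, covers every remaining vertex.
adj = {v: set(range(500)) - {v} for v in range(500)}
S, covered = sample_seed_set(adj, delta=0.9, seed=1)
assert covered
```

On this 500-vertex example the inclusion probability is about 0.17, so |S| is near 80, in line with the O((ln n)/δ) bound of Lemma 5.33.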
Proof. This follows from Lemmas 5.33 and 5.34. The probability that both statements of the lemma are true is at least one minus the sum of the probabilities that either statement is false. The probability that some v ∉ S has no neighbor in S is at worst n times the probability that a given vertex v ∉ S has no neighbor in S. Since δ ≤ 1, the overall probability that both statements are true is at least 1 − n^{−c/δ} − n · n^{−3c} ≥ 1 − 2n^{−(c−1)}.
Now we assume we have some coloring of the vertices in S, not necessarily one that is consistent with the correct coloring of the graph. We also assume that every vertex not in S has at least one neighbor in S. We further assume that the coloring of S is such that every edge with both endpoints in S has differently colored endpoints, since otherwise this is clearly not a correct coloring of the graph. Assume we color the graph with colors {0, 1, 2}. Given a vertex v ∉ S, because it has some neighbor in S colored with some color n(v) ∈ {0, 1, 2}, we know that v cannot be colored with color n(v). Possibly v has other neighbors in S with colors other than n(v). Either this forces the color of v or there is no way we can successfully color v; in the latter case our current coloring of S must not have been correct, and we terminate. If the color of v is not determined, then we create a binary variable x(v), which if true indicates that we color v with color n(v) + 1 (mod 3), and if false indicates that we color v with color n(v) − 1 (mod 3). Now every edge (u, v) ∈ E with u, v ∉ S imposes the constraint that u and v receive different colors. To capture this, we create an instance of the maximum satisfiability problem such that all clauses are satisfiable if and only if the vertices not in S can be correctly colored. For each possible setting of the Boolean variables x(u) and x(v) that would cause u and v to receive the same color, we create a clause on x(u) and x(v) that is false exactly for that setting; for example, if x(u) = false and x(v) = false would give u and v the same color, then we create the clause (x(u) ∨ x(v)). Since G is 3-colorable, given a correct coloring of S, there exists a setting of the variables x(v) which satisfies all the clauses. Since each clause has two variables, it is possible to determine in polynomial time whether the instance is satisfiable or not; we leave it as an exercise to the reader to prove this (Exercise 6.3).
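To make the clause construction concrete, here is a small Python sketch (illustrative; the function names and literal encoding are my own, not the text's). A literal is a pair (vertex, polarity), and the clause built for a bad assignment (x(u) = a, x(v) = b) is false exactly for that assignment:

```python
def color_of(v, forbidden, value):
    """Color vertex v receives: n(v)+1 mod 3 if x(v) is true, n(v)-1 mod 3 if false."""
    return (forbidden[v] + (1 if value else -1)) % 3

def clauses_for_edge(u, v, forbidden):
    """Return 2-SAT clauses forbidding every truth assignment to (x(u), x(v))
    that would give u and v the same color. A literal (w, pol) is true when
    x(w) == pol, so the clause ((u, not a), (v, not b)) is false exactly
    when x(u) = a and x(v) = b."""
    clauses = []
    for a in (False, True):
        for b in (False, True):
            if color_of(u, forbidden, a) == color_of(v, forbidden, b):
                clauses.append(((u, not a), (v, not b)))
    return clauses

# Both endpoints forbidden color 0: x(u)=x(v)=true and x(u)=x(v)=false both clash.
cl = clauses_for_edge("u", "v", {"u": 0, "v": 0})
assert (("u", False), ("v", False)) in cl and (("u", True), ("v", True)) in cl
```

Since each clause has exactly two literals, the resulting instance is a 2-SAT formula, which (as Exercise 6.3 asks the reader to show) can be decided in polynomial time.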
Obviously if we ﬁnd a setting of the variables that satisﬁes all constraints, this implies a correct coloring of the entire graph, whereas if the constraints are not satisﬁable, our current coloring of S must not have been correct. In Section 12.4, we’ll revisit the idea from this section of drawing a small random sample of a graph and using it to determine the overall solution for the maximum cut problem in dense graphs.
Exercises

5.1 In the maximum k-cut problem, we are given an undirected graph G = (V, E), and nonnegative weights w_ij ≥ 0 for all (i, j) ∈ E. The goal is to partition the vertex set V into k parts V_1, …, V_k so as to maximize the weight of all edges whose endpoints are in different parts (i.e., to maximize ∑_{(i,j)∈E: i∈V_a, j∈V_b, a≠b} w_ij). Give a randomized (k−1)/k-approximation algorithm for the MAX k-CUT problem.
5.2 Consider the following greedy algorithm for the maximum cut problem. We suppose the vertices are numbered 1, …, n. In the first iteration, the algorithm places vertex 1 in U. In the kth iteration of the algorithm, we will place vertex k in either U or in W. In order to decide which choice to make, we will look at all the edges F that have the vertex k as one
endpoint and whose other endpoint is 1, …, k−1, so that F = {(j, k) ∈ E : 1 ≤ j ≤ k − 1}. We choose to put vertex k in U or W depending on which of these two choices maximizes the number of edges of F being in the cut.

(a) Prove that this algorithm is a 1/2-approximation algorithm for the maximum cut problem.

(b) Prove that this algorithm is equivalent to the derandomization of the maximum cut algorithm of Section 5.1 via the method of conditional expectations.

5.3 In the maximum directed cut problem (sometimes called MAX DICUT) we are given as input a directed graph G = (V, A). Each directed arc (i, j) ∈ A has nonnegative weight w_ij ≥ 0. The goal is to partition V into two sets U and W = V − U so as to maximize the total weight of the arcs going from U to W (that is, arcs (i, j) with i ∈ U and j ∈ W). Give a randomized 1/4-approximation algorithm for this problem.

5.4 Consider the nonlinear randomized rounding algorithm for MAX SAT as given in Section 5.6. Prove that using randomized rounding with the linear function f(y_i) = (1/2)y_i + 1/4 also gives a 3/4-approximation algorithm for MAX SAT.

5.5 Consider the nonlinear randomized rounding algorithm for MAX SAT as given in Section 5.6. Prove that using randomized rounding with the piecewise linear function

    f(y_i) = (3/4)y_i + 1/4    for 0 ≤ y_i ≤ 1/3,
             1/2               for 1/3 ≤ y_i ≤ 2/3,
             (3/4)y_i          for 2/3 ≤ y_i ≤ 1,

also gives a 3/4-approximation algorithm for MAX SAT.

5.6 Consider again the maximum directed cut problem from Exercise 5.3.

(a) Show that the following integer program models the maximum directed cut problem:

    maximize    ∑_{(i,j)∈A} w_ij z_ij
    subject to  z_ij ≤ x_i,         ∀(i, j) ∈ A,
                z_ij ≤ 1 − x_j,     ∀(i, j) ∈ A,
                x_i ∈ {0, 1},       ∀i ∈ V,
                0 ≤ z_ij ≤ 1,       ∀(i, j) ∈ A.
(b) Consider a randomized rounding algorithm for the maximum directed cut problem that solves a linear programming relaxation of the integer program and puts vertex i ∈ U with probability 1/4 + x_i/2. Show that this gives a randomized 1/2-approximation algorithm for the maximum directed cut problem.

5.7 In this exercise, we consider how to derandomize the randomized rounding algorithm for the set cover problem given in Section 1.7. We would like to apply the method of conditional expectations, but we need to ensure that at the end of the process we obtain a valid set cover. Let X_j be a random variable indicating whether set S_j is included in the solution. Then if w_j is the weight of set S_j, let W be the weight of the set cover
obtained by randomized rounding, so that W = ∑_{j=1}^m w_j X_j. Let Z be a random variable such that Z = 1 if randomized rounding does not produce a valid set cover, and Z = 0 if it does. Then consider applying the method of conditional expectations to the objective function W + λZ for some choice of λ ≥ 0. Show that for the proper choice of λ, the method of conditional expectations applied to the randomized rounding algorithm yields an O(ln n)-approximation algorithm for the set cover problem that always produces a set cover.

5.8 Consider a variation of the maximum satisfiability problem in which all variables occur positively in each clause, and there is an additional nonnegative weight v_i ≥ 0 for each Boolean variable x_i. The goal is now to set the Boolean variables to maximize the total weight of the satisfied clauses plus the total weight of variables set to be false. Give an integer programming formulation for this problem, with 0-1 variables y_i to indicate whether x_i is set true. Show that a randomized rounding of the linear program in which variable x_i is set true with probability 1 − λ + λy*_i gives a 2(√2 − 1)-approximation algorithm for some appropriate setting of λ; note that 2(√2 − 1) ≈ .828.

5.9 Recall the maximum coverage problem from Exercise 2.11; in it, we are given a set of elements E, and m subsets of elements S_1, …, S_m ⊆ E with a nonnegative weight w_j ≥ 0 for each subset S_j. We would like to find a subset S ⊆ E of size k that maximizes the total weight of the subsets covered by S, where S covers S_j if S ∩ S_j ≠ ∅.

(a) Show that the following nonlinear integer program models the maximum coverage problem:

    maximize    ∑_{j∈[m]} w_j ( 1 − ∏_{e∈S_j} (1 − x_e) )
    subject to  ∑_{e∈E} x_e = k,
                x_e ∈ {0, 1},    ∀e ∈ E.

(b) Show that the following linear program is a relaxation of the maximum coverage problem:

    maximize    ∑_{j∈[m]} w_j z_j
    subject to  ∑_{e∈S_j} x_e ≥ z_j,    ∀j ∈ [m],
                ∑_{e∈E} x_e = k,
                0 ≤ z_j ≤ 1,            ∀j ∈ [m],
                0 ≤ x_e ≤ 1,            ∀e ∈ E.
(c) Using the pipage rounding technique from Exercise 4.7, give an algorithm that deterministically rounds the optimal LP solution to an integer solution and has a performance guarantee of 1 − 1/e.
5.10 In the uniform labeling problem, we are given a graph G = (V, E), costs c_e ≥ 0 for all e ∈ E, and a set of labels L that can be assigned to the vertices of V. There is a nonnegative cost c_v^i ≥ 0 for assigning label i ∈ L to vertex v ∈ V, and an edge e = (u, v) incurs cost c_e if u and v are assigned different labels. The goal of the problem is to assign each vertex in V a label so as to minimize the total cost. We give an integer programming formulation of the problem. Let the variable x_v^i ∈ {0, 1} be 1 if vertex v is assigned label i ∈ L, and 0 otherwise. Let the variable z_e^i be 1 if exactly one of the two endpoints of the edge e is assigned label i, and 0 otherwise. Then the integer programming formulation is as follows:

    minimize    (1/2) ∑_{e∈E} ∑_{i∈L} c_e z_e^i + ∑_{v∈V, i∈L} c_v^i x_v^i
    subject to  ∑_{i∈L} x_v^i = 1,          ∀v ∈ V,
                z_e^i ≥ x_u^i − x_v^i,      ∀e = (u, v) ∈ E, ∀i ∈ L,
                z_e^i ≥ x_v^i − x_u^i,      ∀e = (u, v) ∈ E, ∀i ∈ L,
                z_e^i ∈ {0, 1},             ∀e ∈ E, ∀i ∈ L,
                x_v^i ∈ {0, 1},             ∀v ∈ V, ∀i ∈ L.
(a) Prove that the integer programming formulation models the uniform labeling problem.

Consider now the following algorithm. First, the algorithm solves the linear programming relaxation of the integer program above. The algorithm then proceeds in phases. In each phase, it picks a label i ∈ L uniformly at random, and a number α ∈ [0, 1] uniformly at random. For each vertex v ∈ V that has not yet been assigned a label, we assign it label i if α ≤ x_v^i.

(b) Suppose that vertex v ∈ V has not yet been assigned a label. Prove that the probability that v is assigned label i ∈ L in the next phase is exactly x_v^i/|L|, and the probability that it is assigned a label in the next phase is exactly 1/|L|. Further prove that the probability that v is assigned label i by the algorithm is exactly x_v^i.

(c) We say that an edge e is separated by a phase if both endpoints were not assigned labels prior to the phase, and exactly one of the endpoints is assigned a label in this phase. Prove that the probability that an edge e is separated by a phase is (1/|L|) ∑_{i∈L} z_e^i.

(d) Prove that the probability that the endpoints of edge e receive different labels is at most ∑_{i∈L} z_e^i.

(e) Prove that the algorithm is a 2-approximation algorithm for the uniform labeling problem.

5.11 Prove Lemma 5.22, and show that the integer programming solution (y*, C*) described at the end of Section 5.9 must be optimal for the integer program (5.3).

5.12 Using randomized rounding and first fit, give a randomized polynomial-time algorithm for the bin-packing problem that uses ρ · OPT(I) + k bins for some ρ < 2 and some small constant k. One idea is to consider the linear program from Section 4.6.
5.13 Show that the O(log n) factor in Corollary 5.30 can be replaced with O(log n / log log n) by using Theorem 5.23.

5.14 Show that there is a linear programming relaxation for the integer multicommodity flow problem of Section 5.11 that is equivalent to the linear program (5.10) but has a number of variables and constraints that are bounded by a polynomial in the input size of the flow problem.
Chapter Notes

The textbooks of Mitzenmacher and Upfal [226] and Motwani and Raghavan [228] give more extensive treatments of randomized algorithms. A 1967 paper of Erdős [99] on the maximum cut problem showed that sampling a solution uniformly at random as in Theorem 5.3 gives a solution whose expected value is at least half the sum of the edge weights. This is one of the first randomized approximation algorithms of which we are aware. This algorithm can also be viewed as a randomized version of a deterministic algorithm given by Sahni and Gonzalez [257] (the deterministic algorithm of Sahni and Gonzalez is given in Exercise 5.2). Raghavan and Thompson [247] were the first to introduce the idea of the randomized rounding of a linear programming relaxation. The result for integer multicommodity flows in Section 5.11 is from their paper.

Random sampling and randomized rounding are most easily applied to unconstrained problems, such as the maximum satisfiability problem and the maximum cut problem, in which any solution is feasible. Even problems such as the prize-collecting Steiner tree problem and the uncapacitated facility location problem can be viewed as unconstrained problems: we merely need to select a set of vertices to span or facilities to open. Randomized approximation algorithms for constrained problems exist, but are much rarer.

The results for the maximum satisfiability problem in this chapter are due to a variety of authors. The simple randomized algorithm of Section 5.1 is given by Yannakakis [293] as a randomized variant of an earlier deterministic algorithm introduced by Johnson [179]. The “biased coins” algorithm of Section 5.3 is a similar randomization of an algorithm of Lieberherr and Specker [216]. The randomized rounding, “better of two”, and nonlinear randomized rounding algorithms in Sections 5.4, 5.5, and 5.6 respectively are due to Goemans and Williamson [137].

The derandomization of randomized algorithms is a major topic of study.
The method of conditional expectations given in Section 5.2 is implicit in the work of Erdős and Selfridge [101], and has been developed by Spencer [271]. The randomized algorithm for the prize-collecting Steiner tree problem in Section 5.7 is an unpublished result of Goemans. The algorithm of Section 5.8 for uncapacitated facility location is due to Chudak and Shmoys [77]. The scheduling algorithm of Section 5.9 is due to Schulz and Skutella [261]. The algorithm for solving the integer programming relaxation of this problem is due to Goemans [132], and the α-point algorithm that uses a single value of α is also due to Goemans [133]. Chernoff [71] gives the general ideas used in the proof of Chernoff bounds. Our proofs of these bounds follow those of Mitzenmacher and Upfal [226] and Motwani and Raghavan [228]. The randomized algorithm for 3-coloring a dense 3-colorable graph is due to Arora, Karger, and Karpinski [16]; a deterministic algorithm for the problem had earlier been given by Edwards
[97]. Theorems 5.31 and 5.32 are due to Khanna, Linial and Safra [190] (see also Guruswami and Khanna [152]) and Dinur, Mossel, and Regev [90] respectively. Exercises 5.4 and 5.5 are due to Goemans and Williamson [137]. Exercise 5.6 is due to Trevisan [279, 280]. Exercise 5.7 is due to Norton [237]. Ageev and Sviridenko [1] gave the algorithm and analysis in Exercise 5.8, while Exercise 5.9 is also due to Ageev and Sviridenko [2, 3]. The algorithm for the uniform labeling problem in Exercise 5.10 is due to Kleinberg and Tardos [197]; the uniform labeling problem models a problem arising in image processing. Exercise 5.12 is an unpublished result of Williamson.
Chapter 6
Randomized rounding of semideﬁnite programs
We now turn to a new tool which gives substantially improved performance guarantees for some problems. So far we have used linear programming relaxations to design and analyze various approximation algorithms. In this chapter, we show how nonlinear programming relaxations can give us better algorithms than we know how to obtain via linear programming; in particular we use a type of nonlinear program called a semidefinite program. Part of the power of semidefinite programming is that semidefinite programs can be solved in polynomial time. We begin with a brief overview of semidefinite programming. Throughout the chapter we assume some basic knowledge of vectors and linear algebra; see the notes at the end of the chapter for suggested references on these topics. We then give an application of semidefinite programming to approximating the maximum cut problem. The algorithm for this problem introduces a technique of rounding the semidefinite program by choosing a random hyperplane. We then explore other problems for which choosing a random hyperplane, or multiple random hyperplanes, is useful, including approximating quadratic programs, approximating clustering problems, and coloring 3-colorable graphs.
6.1
A brief introduction to semideﬁnite programming
Semidefinite programming uses symmetric, positive semidefinite matrices, so we briefly review a few properties of these matrices. In what follows, X^T is the transpose of the matrix X, and vectors v ∈ ℜ^n are assumed to be column vectors, so that v^T v is the inner product of v with itself, while vv^T is an n by n matrix.

Definition 6.1: A matrix X ∈ ℜ^{n×n} is positive semidefinite iff for all y ∈ ℜ^n, y^T X y ≥ 0.

Sometimes we abbreviate “positive semidefinite” as “psd.” Sometimes we will write X ≽ 0 to denote that a matrix X is positive semidefinite. Symmetric positive semidefinite matrices have some special properties which we list below. From here on, we will generally assume (unless otherwise stated) that any psd matrix X is also symmetric.

Fact 6.2: If X ∈ ℜ^{n×n} is a symmetric matrix, then the following statements are equivalent:

1. X is psd;
2. X has nonnegative eigenvalues;

3. X = V^T V for some V ∈ ℜ^{m×n} where m ≤ n;

4. X = ∑_{i=1}^n λ_i w_i w_i^T for some λ_i ≥ 0 and vectors w_i ∈ ℜ^n such that w_i^T w_i = 1 and w_i^T w_j = 0 for i ≠ j.

A semidefinite program (SDP) is similar to a linear program in that there is a linear objective function and linear constraints. In addition, however, a square symmetric matrix of variables can be constrained to be positive semidefinite. Below is an example in which the variables are x_ij for 1 ≤ i, j ≤ n.

    maximize or minimize  ∑_{i,j} c_ij x_ij                        (6.1)
    subject to            ∑_{i,j} a_ijk x_ij = b_k,    ∀k,
                          x_ij = x_ji,                 ∀i, j,
                          X = (x_ij) ≽ 0.

Given some technical conditions, semidefinite programs can be solved to within an additive error of ϵ in time that is polynomial in the size of the input and log(1/ϵ). We explain the technical conditions in more detail in the notes at the end of the chapter. We will usually ignore the additive error when discussing semidefinite programs and assume that the SDPs can be solved exactly, since the algorithms we will use do not assume exact solutions, and one can usually analyze the algorithm that has additive error in the same way with only a small loss in performance guarantee.

We will often use semidefinite programming in the form of vector programming. The variables of vector programs are vectors v_i ∈ ℜ^n, where the dimension n of the space is the number of vectors in the vector program. The vector program has an objective function and constraints that are linear in the inner product of these vectors. We write the inner product of v_i and v_j as v_i · v_j, or sometimes as v_i^T v_j. Below we give an example of a vector program.

    maximize or minimize  ∑_{i,j} c_ij (v_i · v_j)                 (6.2)
    subject to            ∑_{i,j} a_ijk (v_i · v_j) = b_k,   ∀k,
                          v_i ∈ ℜ^n,    i = 1, …, n.
We claim that in fact the SDP (6.1) and the vector program (6.2) are equivalent. This follows from Fact 6.2; in particular, it follows since a symmetric X is psd if and only if X = V^T V for some matrix V. Given a solution to the SDP (6.1), we can take the solution X, compute in polynomial time a matrix V for which X = V^T V (to within small error, which we again will ignore), and set vi to be the ith column of V. Then xij = vi^T vj = vi · vj, and the vi are a feasible solution of the same value to the vector program (6.2). Similarly, given a solution vi to the vector program, we construct a matrix V whose ith column is vi, and let X = V^T V. Then X is symmetric and psd, with xij = vi · vj, so that X is a feasible solution of the same value for the SDP (6.1).

Electronic web edition. Copyright 2011 by David P. Williamson and David B. Shmoys. To be published by Cambridge University Press.
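The conversion from an SDP solution X to vectors vi can be illustrated with a small hand-rolled Cholesky factorization: writing X = L L^T, row i of L plays the role of vi. This is our own sketch for a tiny, strictly positive definite X, not the polynomial-time routine the text refers to; production code would use a library factorization with pivoting to handle merely semidefinite matrices.

```python
import math

def cholesky(X):
    """Return lower-triangular L with X = L * L^T (X must be positive definite)."""
    n = len(X)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(X[i][i] - s)
            else:
                L[i][j] = (X[i][j] - s) / L[j][j]
    return L

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# a psd matrix with unit diagonal, as would arise from the relaxation (6.4) below
X = [[1.0, -0.5], [-0.5, 1.0]]
L = cholesky(X)
# row i of L serves as vi: every entry xij is recovered as vi . vj
assert all(abs(dot(L[i], L[j]) - X[i][j]) < 1e-9 for i in range(2) for j in range(2))
```

In the book's setting X may be only semidefinite, which is why the text allows factoring "to within small error."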
6.2 Finding large cuts

In this section, we show how to use semidefinite programming to find an improved approximation algorithm for the maximum cut problem, or MAX CUT problem, which we introduced in Section 5.1. Recall that for this problem, the input is an undirected graph G = (V, E), and nonnegative weights wij ≥ 0 for each edge (i, j) ∈ E. The goal is to partition the vertex set into two parts, U and W = V − U, so as to maximize the weight of the edges whose two endpoints are in different parts, one in U and one in W. In Section 5.1, we gave a 1/2-approximation algorithm for the maximum cut problem. We will now use semidefinite programming to give a .878-approximation algorithm for the problem in general graphs.

We start by considering the following formulation of the maximum cut problem:

maximize (1/2) ∑_{(i,j)∈E} wij (1 − yi yj)    (6.3)
subject to yi ∈ {−1, +1},   i = 1, . . . , n.

We claim that if we can solve this formulation, then we can solve the MAX CUT problem.

Lemma 6.3: The program (6.3) models the maximum cut problem.

Proof. Consider the cut U = {i : yi = −1} and W = {i : yi = +1}. Note that if an edge (i, j) is in this cut, then yi yj = −1, while if the edge is not in the cut, yi yj = 1. Thus

(1/2) ∑_{(i,j)∈E} wij (1 − yi yj)

gives the weight of all the edges in the cut. Hence finding the setting of the yi to ±1 that maximizes this sum gives the maximum-weight cut.

We can now consider the following vector programming relaxation of the program (6.3):

maximize (1/2) ∑_{(i,j)∈E} wij (1 − vi · vj)    (6.4)
subject to vi · vi = 1,   i = 1, . . . , n,
           vi ∈ ℜn,   i = 1, . . . , n.
This program is a relaxation of (6.3), since we can take any feasible solution y and produce a feasible solution to this program of the same value by setting vi = (yi, 0, 0, . . . , 0): clearly vi · vi = 1 and vi · vj = yi yj for this solution. Thus if ZVP is the value of an optimal solution to the vector program, it must be the case that ZVP ≥ OPT. We can solve (6.4) in polynomial time. We would now like to round the solution to obtain a near-optimal cut. To do this, we introduce a form of randomized rounding suitable for vector programming. In particular, we pick a random vector r = (r1, . . . , rn) by drawing each component from N(0, 1), the normal distribution with mean 0 and variance 1. The normal distribution can be simulated by an algorithm that draws repeatedly from the uniform distribution on [0,1]. Then given a solution to (6.4), we iterate through all the vertices and put i ∈ U if vi · r ≥ 0 and i ∈ W otherwise.
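The rounding step just described is only a few lines of code. The sketch below assumes the unit vectors vi have already been obtained from an SDP solver; here we simply hard-code three unit vectors at mutual 120° angles (an optimal solution of (6.4) for a unit-weight triangle), and the function name is ours.

```python
import math
import random

random.seed(0)

def hyperplane_round(vectors):
    """Split indices into (U, W) by the sign of vi . r for a random Gaussian r."""
    dim = len(vectors[0])
    r = [random.gauss(0.0, 1.0) for _ in range(dim)]
    U = [i for i, v in enumerate(vectors) if sum(a * b for a, b in zip(v, r)) >= 0]
    W = [i for i in range(len(vectors)) if i not in U]
    return U, W

# three unit vectors at 120-degree angles in the plane
vecs = [(math.cos(2 * math.pi * k / 3), math.sin(2 * math.pi * k / 3)) for k in range(3)]
U, W = hyperplane_round(vecs)
# every vertex lands on exactly one side of the hyperplane
assert sorted(U + W) == [0, 1, 2]
```

By the analysis that follows, each pair of these vectors is separated with probability arccos(−1/2)/π = 2/3.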
Figure 6.1: An illustration of a random hyperplane.

Another way of looking at this algorithm is that we consider the hyperplane with normal r containing the origin. All vectors vi lie on the unit sphere, since vi · vi = 1 and they are unit vectors. The hyperplane with normal r containing the origin splits the sphere in half; all vertices in one half (the half such that vi · r ≥ 0) are put into U, and all vertices in the other half are put into W (see Figure 6.1). As we will see below, the vector r/∥r∥ is uniform over the unit sphere, so this is equivalent to randomly splitting the unit sphere in half. For this reason, this technique is sometimes called choosing a random hyperplane. To prove that this is a good approximation algorithm, we need the following facts.

Fact 6.4: The normalization of r, r/∥r∥, is uniformly distributed over the n-dimensional unit sphere.

Fact 6.5: The projections of r onto two unit vectors e1 and e2 are independent and are normally distributed with mean 0 and variance 1 iff e1 and e2 are orthogonal.

Corollary 6.6: Let r′ be the projection of r onto a two-dimensional plane. Then the normalization of r′, r′/∥r′∥, is uniformly distributed on a unit circle in the plane.

We now begin the proof that choosing a random hyperplane gives a .878-approximation algorithm for the problem. We will need the following two lemmas.

Lemma 6.7: The probability that edge (i, j) is in the cut is (1/π) arccos(vi · vj).
Proof. Let r′ be the projection of r onto the plane defined by vi and vj. If r = r′ + r′′, then r′′ is orthogonal to both vi and vj, and vi · r = vi · (r′ + r′′) = vi · r′. Similarly vj · r = vj · r′. Consider Figure 6.2, where line AC is perpendicular to the vector vi and line BD is perpendicular to the vector vj. By Corollary 6.6, the vector r′ with its tail at the origin O is oriented with respect to the vector vi by an angle α chosen uniformly from [0, 2π). If r′ is to the right of the line AC, vi will have a nonnegative inner product with r′, otherwise not. If r′ is above the line BD, vj will have a nonnegative inner product with r′, otherwise not. Thus we have i ∈ W and j ∈ U if r′ is in the sector AB, and i ∈ U and j ∈ W if r′ is in the sector CD. If the angle formed by vi and vj is θ radians, then the angles ∠AOB and ∠COD are also θ radians. Hence the fraction of values for which α, the angle of r′, corresponds to the event
Figure 6.2: Figure for proof of Lemma 6.7.

in which (i, j) is in the cut is 2θ/2π. Thus the probability that (i, j) is in the cut is θ/π. We know that vi · vj = ∥vi∥∥vj∥ cos θ. Since vi and vj are both unit-length vectors, we have that θ = arccos(vi · vj), which completes the proof of the lemma.

Lemma 6.8: For x ∈ [−1, 1],

(1/π) arccos(x) ≥ 0.878 · (1/2)(1 − x).

Proof. The proof follows from simple calculus. See Figure 6.3 for an illustration.

Theorem 6.9: Rounding the vector program (6.4) by choosing a random hyperplane is a .878-approximation algorithm for the maximum cut problem.

Proof. Let Xij be a random variable for edge (i, j) such that Xij = 1 if (i, j) is in the cut given by the algorithm, and 0 otherwise. Let W be a random variable which gives the weight of the cut; that is, W = ∑_{(i,j)∈E} wij Xij. Then by Lemma 6.7,

E[W] = ∑_{(i,j)∈E} wij · Pr[edge (i, j) is in cut] = ∑_{(i,j)∈E} wij · (1/π) arccos(vi · vj).

By Lemma 6.8, we can bound each term (1/π) arccos(vi · vj) below by 0.878 · (1/2)(1 − vi · vj), so that

E[W] ≥ 0.878 · (1/2) ∑_{(i,j)∈E} wij (1 − vi · vj) = 0.878 · ZVP ≥ 0.878 · OPT.

We know that ZVP ≥ OPT. The proof of the theorem above shows that there is a cut of value at least 0.878 · ZVP, so that OPT ≥ 0.878 · ZVP. Thus we have that OPT ≤ ZVP ≤ (1/0.878) OPT. It has been shown that there are graphs for which the upper inequality is met with equality. This implies that we can get no better performance guarantee for the maximum cut problem
Figure 6.3: Illustration of Lemma 6.8. The upper figure shows plots of the functions (1/π) arccos(x) and (1/2)(1 − x). The lower figure shows a plot of the ratio of the two functions.
by using ZVP as an upper bound on OPT. Currently, .878 is the best performance guarantee known for the maximum cut problem. The following theorems show that this is either close to, or exactly, the best performance guarantee that is likely attainable.

Theorem 6.10: If there is an α-approximation algorithm for the maximum cut problem with α > 16/17 ≈ 0.941, then P = NP.

Theorem 6.11: Given the unique games conjecture, there is no α-approximation algorithm for the maximum cut problem with constant

α > min_{−1≤x≤1} [(1/π) arccos(x)] / [(1/2)(1 − x)] ≥ .878

unless P = NP.

We sketch the proof of the second theorem in Section 16.5. So far we have only discussed a randomized algorithm for the maximum cut problem. It is possible to derandomize the algorithm by using a sophisticated application of the method of conditional expectations that iteratively determines the various coordinates of the random vector. The derandomization incurs a loss in the performance guarantee that can be made as small as desired (by increasing the running time).
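The constant appearing in Lemma 6.8 and in the bound above can be checked numerically; the grid search below is our own sanity check, not part of the text's argument.

```python
import math

def ratio(x):
    # ((1/pi) arccos x) / ((1/2)(1 - x))
    return (math.acos(x) / math.pi) / ((1.0 - x) / 2.0)

# scan x in [-1, 1); the ratio blows up as x -> 1, so the minimum is interior
alpha = min(ratio(-1.0 + k * 1e-4) for k in range(19999))
assert 0.878 < alpha < 0.879
```

At x = −1 both numerator and denominator equal 1, so the ratio is exactly 1 there; the minimum is attained near x ≈ −0.69.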
6.3 Approximating quadratic programs

We can extend the algorithm above for the maximum cut problem to the following more general problem. Suppose we wish to approximate the quadratic program below:

maximize ∑_{1≤i,j≤n} aij xi xj    (6.5)
subject to xi ∈ {−1, +1},   i = 1, . . . , n.

We need to be slightly careful in this case since, as stated, it is possible that the value of an optimal solution is negative (for instance, if the values of aii are negative and all other aij are zero). Thus far we have only been considering problems in which all feasible solutions have nonnegative value, so that the definition of an α-approximation algorithm makes sense. To see that the definition might not make sense in the case of negative solution values, suppose we have an α-approximation algorithm for a maximization problem with α < 1, and suppose we have a problem instance in which OPT is negative. Then the approximation algorithm guarantees the value of our solution is at least α · OPT, which means that the value of our solution will be greater than that of OPT. In order to get around this difficulty, in this case we will restrict the objective function matrix A = (aij) in (6.5) to itself be positive semidefinite. Observe then that for any feasible solution x, the value of the objective function will be x^T A x, and will be nonnegative by the definition of positive semidefinite matrices.

As in the case of the maximum cut problem, we can then have the following vector programming relaxation:

maximize ∑_{1≤i,j≤n} aij (vi · vj)    (6.6)
subject to vi · vi = 1,   i = 1, . . . , n,
           vi ∈ ℜn,   i = 1, . . . , n.
Let ZVP be the value of an optimal solution for this vector program. By the same argument as in the previous section, ZVP ≥ OPT. We can also use the same algorithm as we did for the maximum cut problem. We solve the vector program (6.6) in polynomial time and obtain vectors vi. We choose a random hyperplane with normal r, and generate a solution x̄ for the quadratic program (6.5) by setting x̄i = 1 if r · vi ≥ 0 and x̄i = −1 otherwise. We will show below that this gives a 2/π-approximation algorithm for the quadratic program (6.5).

Lemma 6.12: E[x̄i x̄j] = (2/π) arcsin(vi · vj).

Proof. Recall from Lemma 6.7 that the probability that vi and vj will be on different sides of the random hyperplane is (1/π) arccos(vi · vj). Thus the probability that x̄i and x̄j have different values is (1/π) arccos(vi · vj). Observe that if x̄i and x̄j have different values, then their product must be −1, while if they have the same value, their product must be 1. Thus the probability that the product is 1 must be 1 − (1/π) arccos(vi · vj). Hence

E[x̄i x̄j] = Pr[x̄i x̄j = 1] − Pr[x̄i x̄j = −1]
         = (1 − (1/π) arccos(vi · vj)) − (1/π) arccos(vi · vj)
         = 1 − (2/π) arccos(vi · vj).

Using arcsin(x) + arccos(x) = π/2, we get

E[x̄i x̄j] = 1 − (2/π)(π/2 − arcsin(vi · vj)) = (2/π) arcsin(vi · vj).

We would like to make the same argument as we did for the maximum cut problem in Theorem 6.9, but there is a difficulty with the analysis. Let

α = min_{−1≤x≤1} [(2/π) arcsin(x)] / x.

Then we would like to argue that the expected value of the solution is

E[∑_{i,j} aij x̄i x̄j] = ∑_{i,j} aij E[x̄i x̄j]
                      = (2/π) ∑_{i,j} aij arcsin(vi · vj)
                      ≥ α ∑_{i,j} aij (vi · vj)
                      ≥ α · OPT

by the same reasoning as in Theorem 6.9. However, the penultimate inequality is not necessarily correct, since it may be the case that some of the aij are negative. We are assuming that the inequality holds on a term-by-term basis, and this is not true when some of the aij < 0. Thus in order to analyze this algorithm, we will have to use a global analysis rather than a term-by-term analysis. To do so, we will need the following fact, called the Schur product theorem, and the subsequent two corollaries.
Fact 6.13 (Schur product theorem): For matrices A = (aij) and B = (bij), define A ◦ B = (aij bij). Then if A ≽ 0 and B ≽ 0, then A ◦ B ≽ 0.

Corollary 6.14: If A ≽ 0 and B ≽ 0, then ∑_{i,j} aij bij ≥ 0.

Proof. By Fact 6.13, we know that A ◦ B ≽ 0, so that for the vector ⃗1 of all ones,

∑_{i,j} aij bij = ⃗1^T (A ◦ B) ⃗1 ≥ 0,

by the definition of psd matrices.

Corollary 6.15: If X ≽ 0, |xij| ≤ 1 for all i, j, and Z = (zij) is such that zij = arcsin(xij) − xij, then Z ≽ 0.

Proof. We recall that the Taylor series expansion for arcsin x around zero is

arcsin x = x + (1/(2·3)) x^3 + ((1·3)/(2·4·5)) x^5 + · · · + ((1·3·5 · · · (2n−1))/(2·4·6 · · · (2n)·(2n+1))) x^{2n+1} + · · · ,

and it converges for |x| ≤ 1. Since the matrix Z = (zij) has zij = arcsin(xij) − xij, we can express it as

Z = (1/(2·3)) ((X ◦ X) ◦ X) + ((1·3)/(2·4·5)) (((((X ◦ X) ◦ X) ◦ X) ◦ X)) + · · · .

By Fact 6.13, because X ≽ 0, each term on the right-hand side is a positive semidefinite matrix, and therefore their sum will be also.

We can now show the following theorem.

Theorem 6.16: Rounding the vector program (6.6) by choosing a random hyperplane is a 2/π-approximation algorithm for the quadratic program (6.5) when the objective function matrix A is positive semidefinite.

Proof. We want to show that

E[∑_{i,j} aij x̄i x̄j] ≥ (2/π) ∑_{i,j} aij (vi · vj) ≥ (2/π) · OPT.

We know that

E[∑_{i,j} aij x̄i x̄j] = (2/π) ∑_{i,j} aij arcsin(vi · vj).

Thus we need to show that

(2/π) ∑_{i,j} aij arcsin(vi · vj) − (2/π) ∑_{i,j} aij (vi · vj) ≥ 0.

By setting xij = vi · vj and letting θij denote the angle between the two vectors, we obtain that X = (xij) ≽ 0 and |xij| ≤ 1, since |vi · vj| = ∥vi∥∥vj∥ |cos θij| = |cos θij| ≤ 1. Thus we want to show that

(2/π) ∑_{i,j} aij (arcsin(xij) − xij) ≥ 0.
If we set zij = arcsin(xij) − xij, then it is equivalent to show that

(2/π) ∑_{i,j} aij zij ≥ 0.

This follows since Z = (zij) ≽ 0 by Corollary 6.15, and thus, since A ≽ 0, ∑_{i,j} aij zij ≥ 0 by Corollary 6.14.

6.4 Finding a correlation clustering
In this section, we show that semidefinite programming can be used to obtain a good correlation clustering in an undirected graph. In this problem we are given an undirected graph in which each edge (i, j) ∈ E is given two nonnegative weights, wij+ ≥ 0 and wij− ≥ 0. The goal is to cluster the vertices into sets of similar vertices; the degree to which i and j are similar is given by wij+ and the degree to which they are different is given by wij−. We represent a clustering of the vertices by a partition S of the vertex set into nonempty subsets. Let δ(S) be the set of edges that have endpoints in different sets of the partition, and let E(S) be the set of edges that have both endpoints in the same part of the partition. Then the goal of the problem is to find a partition S that maximizes the total w+ weight of edges inside the sets of the partition plus the total w− weight of edges between sets of the partition; in other words, we find S to maximize

∑_{(i,j)∈E(S)} wij+ + ∑_{(i,j)∈δ(S)} wij−.

Observe that it is easy to get a 1/2-approximation algorithm for this problem. If we put all the vertices into a single cluster (that is, S = {V}), then the value of this solution is ∑_{(i,j)∈E} wij+. If we make each vertex its own cluster (that is, S = {{i} : i ∈ V}), then the value of this solution is ∑_{(i,j)∈E} wij−. Since OPT ≤ ∑_{(i,j)∈E} (wij+ + wij−), at least one of these two solutions has a value of at least (1/2) OPT. We now show that we can obtain a 3/4-approximation algorithm by using semidefinite programming.

Here we model the problem as follows. Let ek be the kth unit vector; that is, it has a one in the kth coordinate and zeros elsewhere. We will have a vector xi for each vertex i ∈ V; we set xi = ek if i is in the kth cluster. The model of the problem then becomes

maximize ∑_{(i,j)∈E} (wij+ (xi · xj) + wij− (1 − xi · xj))
subject to xi ∈ {e1, . . . , en},   ∀i,

since ek · el = 1 if k = l and 0 otherwise. We can then relax this model to the following vector program:

maximize ∑_{(i,j)∈E} (wij+ (vi · vj) + wij− (1 − vi · vj))    (6.7)
subject to vi · vi = 1,   ∀i,
           vi · vj ≥ 0,   ∀i, j,
           vi ∈ ℜn,   ∀i.
Let ZCC be the value of an optimal solution for the vector program. Observe that the vector program is a relaxation, since any feasible solution to the model above is feasible for the vector program and has the same value. Thus ZCC ≥ OPT.

In the previous two sections, we chose a random hyperplane to partition the vectors into two sets; in the case of the maximum cut problem, this gave the two sides of the cut, and in the case of quadratic programming this gave the variables of value +1 and −1. In this case, we will choose two random hyperplanes, with two independent random vectors r1 and r2 as their normals. This partitions the vertices into 2^2 = 4 sets: in particular, we partition the vertices into the sets

R1 = {i ∈ V : r1 · vi ≥ 0, r2 · vi ≥ 0},
R2 = {i ∈ V : r1 · vi ≥ 0, r2 · vi < 0},
R3 = {i ∈ V : r1 · vi < 0, r2 · vi ≥ 0},
R4 = {i ∈ V : r1 · vi < 0, r2 · vi < 0}.

We will show that the solution consisting of these four clusters, with S = {R1, R2, R3, R4}, comes within a factor of 3/4 of the optimal value. We first need the following lemma.

Lemma 6.17: For x ∈ [0, 1],

(1 − (1/π) arccos(x))^2 / x ≥ .75

and

(1 − (1 − (1/π) arccos(x))^2) / (1 − x) ≥ .75.
Proof. These statements follow by simple calculus. See Figure 6.4 for an illustration.

We can now give the main theorem.

Theorem 6.18: Rounding the vector program (6.7) by using two random hyperplanes as above gives a 3/4-approximation algorithm for the correlation clustering problem.

Proof. Let Xij be a random variable which is 1 if vertices i and j end up in the same cluster, and is 0 otherwise. Note that the probability that a single random hyperplane has the vectors vi and vj on different sides of the hyperplane is (1/π) arccos(vi · vj) by Lemma 6.7. Then the probability that vi and vj are on the same side of a single random hyperplane is 1 − (1/π) arccos(vi · vj). Furthermore, the probability that the vectors are both on the same sides of the two random hyperplanes defined by r1 and r2 is (1 − (1/π) arccos(vi · vj))^2, since r1 and r2 are chosen independently. Thus E[Xij] = (1 − (1/π) arccos(vi · vj))^2.

Let W be a random variable denoting the weight of the partition. Observe that

W = ∑_{(i,j)∈E} (wij+ Xij + wij− (1 − Xij)).

Thus

E[W] = ∑_{(i,j)∈E} (wij+ E[Xij] + wij− (1 − E[Xij]))
     = ∑_{(i,j)∈E} [wij+ (1 − (1/π) arccos(vi · vj))^2 + wij− (1 − (1 − (1/π) arccos(vi · vj))^2)].
Figure 6.4: Illustration of Lemma 6.17. The upper figure shows a plot of (1 − (1/π) arccos(x))^2 / x, and the lower figure shows a plot of (1 − (1 − (1/π) arccos(x))^2) / (1 − x).
We now want to use Lemma 6.17 to bound each term in the sum from below; we can do so because the constraints of the vector program imply that vi · vj ∈ [0, 1]. Thus

E[W] ≥ .75 ∑_{(i,j)∈E} (wij+ (vi · vj) + wij− (1 − vi · vj)) = .75 · ZCC ≥ .75 · OPT.
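The two-hyperplane rounding of this section is a direct extension of the single-hyperplane code: each vertex receives a two-bit signature. In the sketch below we hard-code toy unit vectors with pairwise nonnegative inner products (feasible for (6.7)) rather than solving the SDP, and the names are ours.

```python
import math
import random

random.seed(2)

def two_hyperplane_clusters(vectors):
    """Partition indices into the four regions R1..R4 cut out by two random hyperplanes."""
    dim = len(vectors[0])
    r1 = [random.gauss(0.0, 1.0) for _ in range(dim)]
    r2 = [random.gauss(0.0, 1.0) for _ in range(dim)]
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    regions = {(s1, s2): [] for s1 in (True, False) for s2 in (True, False)}
    for i, v in enumerate(vectors):
        regions[(dot(r1, v) >= 0, dot(r2, v) >= 0)].append(i)
    return list(regions.values())

# unit vectors with pairwise nonnegative inner products
vecs = [(1.0, 0.0), (0.0, 1.0), (math.sqrt(0.5), math.sqrt(0.5))]
parts = two_hyperplane_clusters(vecs)
assert len(parts) == 4                                   # some parts may be empty
assert sorted(i for part in parts for i in part) == [0, 1, 2]
```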
6.5 Coloring 3-colorable graphs

In Section 5.12, we saw that with high probability we can 3-color a δ-dense 3-colorable graph. The situation for arbitrary 3-colorable graphs is much worse, however. We will give a quite simple algorithm that colors a 3-colorable graph G = (V, E) with O(√n) colors, where n = |V|. Then by using semidefinite programming, we will obtain an algorithm that uses Õ(n^0.387) colors, where Õ is defined below.

Definition 6.19: A function g(n) = Õ(f(n)) if there exists some constant c ≥ 0 and some n0 such that for all n ≥ n0, g(n) = O(f(n) log^c n).

The best known algorithm does not use much fewer than Õ(n^0.387) colors. Graph coloring is one of the most difficult problems to approximate. Some coloring problems are quite simple: it is known that a 2-colorable graph can be 2-colored in polynomial time, and that an arbitrary graph with maximum degree ∆ can be (∆ + 1)-colored in polynomial time. We leave these results as exercises (Exercise 6.4).

Given these results, it is quite simple to give an algorithm that colors a 3-colorable graph with O(√n) colors. As long as there exists a vertex in the graph with degree at least √n, we pick three new colors, color the vertex with one of the new colors, and use the 2-coloring algorithm to 2-color the neighbors of the vertex with the other two new colors; we know that we can do this because the graph is 3-colorable. We remove all these vertices from the graph and repeat. When we have no vertices in the graph of degree at least √n, we use the algorithm that colors a graph with ∆ + 1 colors to color the remaining graph with √n new colors. We can prove the following.

Theorem 6.20: The algorithm above colors any 3-colorable graph with at most 4√n colors.

Proof. Each time the algorithm finds a vertex of degree at least √n, we use three new colors. This can happen at most n/√n = √n times, since we remove at least √n vertices from the graph each time we do this. Hence this loop uses at most 3√n colors. The final step uses an additional √n colors.

We now turn to semidefinite programming to help improve our coloring algorithms. We will use the following vector program, in which we have a vector vi for each i ∈ V:

minimize λ    (6.8)
subject to vi · vj ≤ λ,   ∀(i, j) ∈ E,
           vi · vi = 1,   ∀i ∈ V,
           vi ∈ ℜn,   ∀i ∈ V.

The lemma below suggests why the vector program will be useful in deriving an algorithm for coloring 3-colorable graphs.
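The O(√n)-coloring algorithm of Theorem 6.20 can be sketched as follows. The neighborhood 2-coloring is the standard BFS bipartition, and all function names are ours; the sketch assumes the input graph really is 3-colorable, so every neighborhood induces a bipartite subgraph.

```python
import math
from collections import deque

def two_color(vertices, adj):
    """BFS bipartition of the subgraph induced on `vertices` (assumed bipartite)."""
    vs, side = set(vertices), {}
    for s in vertices:
        if s in side:
            continue
        side[s] = 0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w in vs and w not in side:
                    side[w] = 1 - side[u]
                    queue.append(w)
    return side

def sqrt_n_coloring(adj):
    n = len(adj)
    live, color, next_color = set(adj), {}, 0
    while True:
        hub = next((u for u in live
                    if sum(w in live for w in adj[u]) >= math.sqrt(n)), None)
        if hub is None:
            break
        # color the hub and 2-color its (bipartite) neighborhood with three new colors
        nbrs = [w for w in adj[hub] if w in live]
        side = two_color(nbrs, adj)
        color[hub] = next_color
        for w in nbrs:
            color[w] = next_color + 1 + side[w]
        next_color += 3
        live -= {hub} | set(nbrs)
    # remaining vertices have low degree: greedy coloring with fresh colors
    for u in live:
        used = {color[w] for w in adj[u] if w in color}
        c = next_color
        while c in used:
            c += 1
        color[u] = c
    return color

# complete tripartite graph on parts {0,1,2}, {3,4,5}, {6,7,8} (3-colorable)
adj = {u: [w for w in range(9) if w // 3 != u // 3] for u in range(9)}
coloring = sqrt_n_coloring(adj)
assert all(coloring[u] != coloring[w] for u in adj for w in adj[u])
```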
Figure 6.5: Proof of Lemma 6.21.

Lemma 6.21: For any 3-colorable graph, there is a feasible solution to (6.8) with λ ≤ −1/2.

Proof. Consider an equilateral triangle, and associate the unit vectors for all the vertices with the three different colors with the three different vertices of the triangle (see Figure 6.5). Note that the angle between any two vectors of the same color is 0, while the angle between vectors of different colors is 2π/3. Then for vi, vj such that (i, j) ∈ E, we have

vi · vj = ∥vi∥∥vj∥ cos(2π/3) = −1/2.

Since we have given a feasible solution to the vector program with λ = −1/2, it must be the case that for the optimal solution λ ≤ −1/2.

Note that the proof of the lemma has actually shown the following corollary; this will be useful later on.

Corollary 6.22: For any 3-colorable graph, there is a feasible solution to (6.8) such that vi · vj = −1/2 for all edges (i, j) ∈ E.

To achieve our result, we will show how to obtain a randomized algorithm that produces a semicoloring. A semicoloring is a coloring of nodes such that at most n/4 edges have endpoints with the same color. This implies that at least n/2 vertices are colored such that any edge between them has endpoints that are colored differently. We claim that an algorithm for producing a semicoloring is sufficient, for if we can semicolor a graph with k colors, then we can color the entire graph with k log n colors in the following way. We first semicolor the graph with k colors, and take the vertices that are colored correctly. We then semicolor the vertices left over (no more than n/2) with k new colors, take the vertices that are colored correctly, and repeat. This takes log n iterations, after which the entire graph is colored correctly with k log n colors.

Now we give the randomized algorithm for producing a semicoloring. The basic idea is similar to that used in the correlation clustering algorithm in Section 6.4. We solve the vector program (6.8), and choose t = 2 + log_3 ∆ random vectors r1, . . . , rt, where ∆ is the maximum degree of any vertex in the graph. The t random vectors define 2^t different regions into which the vectors vi can fall: one region for each distinct possibility of whether rj · vi ≥ 0 or rj · vi < 0 for all j = 1, . . . , t. We then color the vectors in each region with a distinct color.

Theorem 6.23: This coloring algorithm produces a semicoloring of 4∆^{log_3 2} colors with probability at least 1/2.
Proof. Since we use 2^t colors for t = 2 + log_3 ∆, we use 4 · 2^{log_3 ∆} = 4∆^{log_3 2} colors. We now need to show that this produces a semicoloring with probability at least 1/2.

First, we consider the probability that vertices i and j get the same color for a given edge (i, j). This probability is the probability that both i and j fall into the same region; that is, the probability that none of the t random hyperplanes separate i and j. Note that by Lemma 6.7, the probability that a single random hyperplane separates i and j is (1/π) arccos(vi · vj). Therefore the probability that t independently chosen hyperplanes fail to separate i and j is (1 − (1/π) arccos(vi · vj))^t. Thus

Pr[i and j get the same color for edge (i, j)] = (1 − (1/π) arccos(vi · vj))^t ≤ (1 − (1/π) arccos(λ))^t,

where the last inequality follows from the inequalities of the vector program (6.8) and since arccos is a nonincreasing function. Then by Lemma 6.21,

(1 − (1/π) arccos(λ))^t ≤ (1 − (1/π) arccos(−1/2))^t.

Finally, using some simple algebra and the definition of t,

(1 − (1/π) arccos(−1/2))^t = (1 − (1/π) · (2π/3))^t = (1/3)^t ≤ 1/(9∆).

Therefore,

Pr[i and j get the same color for edge (i, j)] ≤ 1/(9∆).

If m denotes the number of edges in the graph, then m ≤ n∆/2. Thus the expected number of edges which have both endpoints colored the same is no more than m/(9∆), which is at most n/18. Let X be a random variable denoting the number of edges which have both endpoints colored the same. By Markov's inequality (Lemma 5.25), the probability that there are more than n/4 edges which have both endpoints colored the same is at most

Pr[X ≥ n/4] ≤ E[X]/(n/4) ≤ (n/18)/(n/4) ≤ 1/2.
If we use n as a bound on the maximum degree ∆, then we obtain an algorithm that produces a semicoloring with O(n^{log_3 2}) colors, and thus a coloring with Õ(n^{log_3 2}) colors. Since log_3 2 ≈ .631, this is worse than the algorithm we presented at the beginning of the section that uses O(n^{1/2}) colors. However, we can use some ideas from that algorithm to do better. Let σ be some parameter we will choose later. As long as there is a vertex in the graph of degree at least σ, we pick three new colors, color the vertex with one of the new colors, and use the 2-coloring algorithm to 2-color the neighbors of the vertex with the other two new colors; we know that we can do this because the graph is 3-colorable. We remove all these vertices from the graph and repeat. When we have no vertices in the graph of degree at least σ, we use the algorithm above to semicolor the remaining graph with O(σ^{log_3 2}) new colors. We can now show the following.

Theorem 6.24: The algorithm above semicolors a 3-colorable graph with O(n^0.387) colors with probability at least 1/2.
Proof. The first part of this algorithm uses 3n/σ colors in total, since we remove at least σ vertices from the graph each time. To balance the two parts of the algorithm, we set σ such that n/σ = σ^{log_3 2}, which gives σ = n^{log_6 3}, or σ ≈ n^0.613. Thus both parts use O(n^0.387) colors, which gives the theorem.

From the theorem we get an overall algorithm that colors a graph with Õ(n^0.387) colors. In Section 13.2, we will show how to use semidefinite programming to obtain an algorithm that 3-colors a graph with Õ(∆^{1/3} √(ln ∆)) colors. Using the same ideas as above, this can be converted into an algorithm that colors using Õ(n^{1/4}) colors.
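The balancing calculation in the proof of Theorem 6.24 can be verified with a few lines of arithmetic (our own check):

```python
import math

log3_2 = math.log(2, 3)   # exponent from the semicoloring phase: sigma**log3_2 colors
log6_3 = math.log(3, 6)   # the choice sigma = n**log6_3

# balancing n/sigma = sigma**log3_2 means 1 - log6_3 = log6_3 * log3_2
assert abs((1 - log6_3) - log6_3 * log3_2) < 1e-12
# both parts then use about n**log6_2 = n**0.387 colors
assert abs((1 - log6_3) - math.log(2, 6)) < 1e-12
assert abs(math.log(2, 6) - 0.387) < 1e-3
```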
Exercises

6.1 As with linear programs, semidefinite programs have duals. The dual of the MAX CUT SDP (6.4) is:

minimize (1/2) ∑ wij + (1/4) ∑_i γi

0. How large an ϵ can you get?

6.3 Given a MAX 2SAT instance as defined in Exercise 6.2, prove that it is possible to decide in polynomial time whether all the clauses can be satisfied or not.

6.4 Give polynomial-time algorithms for the following:

(a) Coloring a 2-colorable graph with 2 colors.

(b) Coloring a graph of maximum degree ∆ with ∆ + 1 colors.

6.5 An important quantity in combinatorial optimization is called the Lovász Theta Function. The theta function is defined on undirected graphs G = (V, E). One of its many definitions
is given below as a semidefinite program:

    ϑ(Ḡ) = maximize  ∑_{i,j} b_{ij}
           subject to ∑_i b_{ii} = 1,
                      b_{ij} = 0,        ∀i ≠ j, (i, j) ∉ E,
                      B = (b_{ij}) ≽ 0,  B symmetric.

Lovász showed that ω(G) ≤ ϑ(Ḡ) ≤ χ(G), where ω(G) is the size of the largest clique in G and χ(G) is the minimum number of colors needed to color G.

(a) Show that ω(G) ≤ ϑ(Ḡ).

(b) The following is a small variation in the vector program we used for graph coloring:

    minimize   α
    subject to v_i · v_j = α,  ∀(i, j) ∈ E,
               v_i · v_i = 1,  ∀i ∈ V,
               v_i ∈ ℜ^n,      ∀i ∈ V.

Its dual is

    maximize  − ∑_i u_i · u_i
    subject to ∑_{i≠j} u_i · u_j ≥ 1,
               u_i · u_j = 0,  ∀(i, j) ∉ E, i ≠ j,
               u_i ∈ ℜ^n,      ∀i ∈ V.
Show that the value of the dual is 1/(1 − ϑ(Ḡ)). By strong duality, this is also the value of the primal; however, see the chapter notes for a discussion of conditions under which strong duality holds. The value of this vector program is sometimes called the strict vector chromatic number of the graph, and the value of the original vector programming relaxation (6.8) is the vector chromatic number of the graph.

6.6 Recall the maximum directed cut problem from Exercises 5.3 and 5.6: we are given as input a directed graph G = (V, A), with a nonnegative weight w_{ij} ≥ 0 for all arcs (i, j) ∈ A. The goal is to partition V into two sets U and W = V − U so as to maximize the total weight of the arcs going from U to W (that is, arcs (i, j) with i ∈ U and j ∈ W).

(a) As in the case of the maximum cut problem, we'd like to express the maximum directed cut problem as an integer quadratic program in which the only constraints are y_i ∈ {−1, 1} and the objective function is quadratic in y_i. Show that the maximum directed cut problem can be expressed in this way. (Hint: as is the case for MAX 2SAT in Exercise 6.2, it may help to introduce a variable y_0 which indicates whether the value −1 or 1 means that y_i is in the set U.)
(b) Find an α-approximation algorithm for the maximum directed cut problem using a vector programming relaxation of the integer quadratic program above. Find the best value of α that you can.

6.7 Recall the MAX 2SAT problem from Exercise 6.2 above. We consider a variant, called MAX E2SAT, in which every clause has exactly two literals in it; that is, there are no unit clauses. We say that a MAX E2SAT instance is balanced if for each i, the weight of clauses in which x_i appears is equal to the weight of the clauses in which x̄_i appears. Give a β-approximation algorithm for balanced MAX E2SAT instances, where

    β = min_{x:−1≤x≤1} [ (1/2) + (1/(2π)) arccos x ] / [ 3/4 − x/4 ] ≈ .94394.
6.8 Consider again the maximum directed cut problem given in Exercise 6.6. We say that an instance of the directed cut problem is balanced if for each vertex i ∈ V, the total weight of arcs entering i is equal to the total weight of arcs leaving i. Give an α-approximation algorithm for balanced maximum directed cut instances, where α is the same performance guarantee as for the maximum cut problem; that is,

    α = min_{x:−1≤x≤1} [ (1/π) arccos(x) ] / [ (1/2)(1 − x) ].
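Both constants in Exercises 6.7 and 6.8 can be checked numerically. The sketch below evaluates the two ratios on a fine grid over (−1, 1); the function names are ours, and a grid search only approximates the true minimization:

```python
import math

def ratio_beta(x):
    # balanced MAX E2SAT ratio: (1/2 + arccos(x)/(2*pi)) / (3/4 - x/4)
    return (0.5 + math.acos(x) / (2 * math.pi)) / (0.75 - x / 4)

def ratio_alpha(x):
    # maximum cut ratio: (arccos(x)/pi) / ((1 - x)/2)
    return (math.acos(x) / math.pi) / ((1 - x) / 2)

# grid over (-1, 1); endpoints excluded since ratio_alpha degenerates as x -> 1
xs = [i / 10000 for i in range(-9999, 10000)]
beta = min(ratio_beta(x) for x in xs)
alpha = min(ratio_alpha(x) for x in xs)
print(beta, alpha)  # should be close to .94394 and .87856, respectively
```

The minimizers lie near x ≈ −0.74 for β and x ≈ −0.69 for α, which is why neither ratio is attained at the endpoints.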
Chapter Notes

Strang [273, 274] provides introductions to linear algebra that are useful in the context of our discussion of semidefinite programming and various operations on vectors and matrices. A 1979 paper of Lovász [219] gives an early application of SDP to combinatorial optimization with his ϑ-number for graphs (see Exercise 6.5). The algorithmic implications of the ϑ-number were highlighted in the work of Grötschel, Lovász, and Schrijver [144]; they showed that the ellipsoid method could be used to solve the associated semidefinite program for the ϑ-number, and that in general the ellipsoid method could be used to solve convex programs in polynomial time given a polynomial-time separation oracle. Alizadeh [5] and Nesterov and Nemirovskii [236] showed that polynomial-time interior-point methods for linear programming could be extended to SDP. A good overview of SDP can be found in the edited volume of Wolkowicz, Saigal, and Vandenberghe [290].

Unlike linear programs, the most general case of semidefinite programs is not solvable in polynomial time without additional assumptions. There are examples of SDPs in which the coefficients of all variables are either 1 or 2, but whose optimal value is doubly exponential in the number of variables (see [5, Section 3.3]), and so no polynomial-time algorithm is possible. Even when SDPs are solvable in polynomial time, they are only solvable to within an additive error of ϵ; this is in part due to the fact that exact solutions can be irrational, and thus not expressible in polynomial space. To show that an SDP is solvable in polynomial time to within an additive error of ϵ, it is sufficient for the feasible region to be nonempty and to be contained in a polynomially-sized ball centered on the origin. These conditions are met for the problems we discuss. Weak duality always holds for semidefinite programs.
If there are points in the interior of the feasible region of the primal and dual (called the "Slater conditions"), strong duality also holds. As a quick way to see the solvability of semidefinite programs in polynomial time, we observe that the constraint X ≽ 0 is equivalent to the infinite family of constraints yᵀXy ≥ 0 for all y ∈ ℜ^n. Suppose we compute the minimum eigenvalue λ and corresponding
eigenvector v of the matrix X. If λ ≥ 0, then X is positive semidefinite, whereas if λ < 0, then vᵀ(Xv) = vᵀ(λv) = λvᵀv < 0 gives a violated constraint. Thus we have a separation oracle and can apply the ellipsoid method to find an approximate solution, assuming the initial feasible region can be bounded by a polynomially-sized ellipsoid and assuming that we can carry out the eigenvalue and eigenvector computation to reasonable precision in polynomial time. Grötschel, Lovász, and Schrijver [144] avoid the issue of computing an eigenvector in polynomial time by giving a polynomial-time separation oracle that computes a basis of the columns of X, and then computes determinants to check whether X is positive semidefinite and to return a constraint yᵀXy < 0 if not.

The SDP-based algorithm of Section 6.2 for the maximum cut problem is due to Goemans and Williamson [139]; they gave the first use of semidefinite programming for approximation algorithms. Knuth [200, Section 3.4.1C] gives algorithms for sampling from the normal distribution via samples from the uniform [0,1] distribution. Fact 6.4 is from Knuth [200, p. 135–136] and Fact 6.5 is a paraphrase of Theorem IV.16.3 of Rényi [251]. Feige and Schechtman [109] give graphs for which Z_VP = (1/0.878) OPT. Theorem 6.10 is due to Håstad [159]. Theorem 6.11 is due to Khot, Kindler, Mossel, and O'Donnell [193] together with a result of Mossel, O'Donnell, and Oleszkiewicz [227]. The derandomization of the maximum cut algorithm is due to Mahajan and Ramesh [222]. This derandomization technique works for many of the randomized algorithms in this chapter that use random hyperplanes.

Subsequent to the work of Goemans and Williamson, Nesterov [235] gave the algorithm for quadratic programs found in Section 6.3, Swamy [277] gave the algorithm of Section 6.4 on correlation clustering, and Karger, Motwani, and Sudan [182] gave the SDP-based algorithm for coloring 3-colorable graphs in Section 6.5. The O(√n)-approximation algorithm for coloring 3-colorable graphs given at the beginning of Section 6.5 is due to Wigderson [286]. Fact 6.13 is known as the Schur product theorem; see, for example, Theorem 7.5.3 of Horn and Johnson [171]. Exercises 6.1, 6.2, and 6.6 are from Goemans and Williamson [139]. Exercise 6.3 has been shown by Even, Itai, and Shamir [104], who also point to previous work on the 2SAT problem. Exercise 6.5 is due to Tardos and Williamson as cited in [182]. Exercises 6.7 and 6.8 on balanced MAX 2SAT and balanced maximum directed cut problem instances are due to Khot, Kindler, Mossel, and O'Donnell [193].
Chapter 7
The primal-dual method
We introduced the primal-dual method in Section 1.5, and showed how it gave an approximation algorithm for the set cover problem. Although there it did not give a better performance guarantee than various LP rounding algorithms, we observed that in practice the primal-dual method gives much faster algorithms than those that require solving a linear program. In this chapter, we will cover primal-dual algorithms in more depth.

We begin by reviewing the primal-dual algorithm for the set cover problem from Section 1.5. We then apply the primal-dual method to a number of problems, gradually developing a number of principles in deciding how to apply the technique so as to get good performance guarantees. In discussing the feedback vertex set problem in Section 7.2, we see that it is sometimes useful to focus on increasing particular dual variables that correspond to small or minimal constraints not satisfied by the current primal solution. In Section 7.3, we discuss the shortest s-t path problem, and see that to obtain a good performance guarantee it is sometimes necessary to remove unneeded elements in the primal solution returned by the algorithm. In Section 7.4 we introduce the generalized Steiner tree problem (also known as the Steiner forest problem), and show that to obtain a good performance guarantee it can be helpful to increase multiple dual variables at the same time. In Section 7.5, we see that it can be useful to consider alternate integer programming formulations in order to obtain improved performance guarantees. We conclude the chapter with an application of the primal-dual method to the uncapacitated facility location problem, and an extension of this algorithm to the related k-median problem. For the latter problem, we use a technique called Lagrangean relaxation to obtain the appropriate relaxation for the approximation algorithm.
7.1 The set cover problem: a review
We begin by reviewing the primal-dual algorithm and its analysis for the set cover problem from Section 1.5. Recall that in the set cover problem, we are given as input a ground set of elements E = {e_1, …, e_n}, some subsets of those elements S_1, S_2, …, S_m ⊆ E, and a nonnegative weight w_j for each subset S_j. The goal is to find a minimum-weight collection of subsets that covers all of E; that is, we wish to find an I ⊆ {1, …, m} that minimizes ∑_{j∈I} w_j subject to ∪_{j∈I} S_j = E. We observed in Section 1.5 that the set cover problem can be modelled as the following
y ← 0
I ← ∅
while there exists e_i ∉ ∪_{j∈I} S_j do
    Increase the dual variable y_i until there is some ℓ such that ∑_{j:e_j∈S_ℓ} y_j = w_ℓ
    I ← I ∪ {ℓ}
return I

Algorithm 7.1: Primal-dual algorithm for the set cover problem.

integer program:

    minimize   ∑_{j=1}^m w_j x_j                       (7.1)
    subject to ∑_{j:e_i∈S_j} x_j ≥ 1,  i = 1, …, n,    (7.2)
               x_j ∈ {0, 1},           j = 1, …, m.    (7.3)
If we relax the integer program to a linear program by replacing the constraints x_j ∈ {0, 1} with x_j ≥ 0, and take the dual, we obtain

    maximize   ∑_{i=1}^n y_i
    subject to ∑_{i:e_i∈S_j} y_i ≤ w_j,  j = 1, …, m,
               y_i ≥ 0,                  i = 1, …, n.

We then gave the following algorithm, which we repeat in Algorithm 7.1. We begin with the dual solution y = 0; this is a feasible solution since w_j ≥ 0 for all j. We also have an infeasible primal solution I = ∅. As long as there is some element e_i not covered by I, we look at all the sets S_j that contain e_i, and consider the amount by which we can increase the dual variable y_i associated with e_i and still maintain dual feasibility. This amount is ϵ = min_{j:e_i∈S_j} (w_j − ∑_{k:e_k∈S_j} y_k) (note that possibly this is zero). We then increase y_i by ϵ. This will cause some dual constraint associated with some set S_ℓ to become tight; that is, after increasing y_i we will have for this set S_ℓ

    ∑_{k:e_k∈S_ℓ} y_k = w_ℓ.

We add the set S_ℓ to our cover (by adding ℓ to I) and continue until all elements are covered. In Section 1.5, we argued that this algorithm is an f-approximation algorithm for the set cover problem, where f = max_i |{j : e_i ∈ S_j}|. We repeat the analysis here, since there are several features of the analysis that are used frequently in analyzing primal-dual approximation algorithms.

Theorem 7.1: Algorithm 7.1 is an f-approximation algorithm for the set cover problem.

Proof. For the cover I constructed by the algorithm, we would like to show that ∑_{j∈I} w_j ≤ f · OPT. Let Z*_LP be the optimal value of the linear programming relaxation of (7.1). It is
sufficient to show that ∑_{j∈I} w_j ≤ f · ∑_{i=1}^n y_i for the final dual solution y, since by weak duality we know that for any dual feasible solution y, ∑_{i=1}^n y_i ≤ Z*_LP, and since the LP is a relaxation, Z*_LP ≤ OPT.

Because we only added a set S_j to our cover when its corresponding dual inequality was tight, we know that for any j ∈ I, w_j = ∑_{i:e_i∈S_j} y_i. Thus we have that

    ∑_{j∈I} w_j = ∑_{j∈I} ∑_{i:e_i∈S_j} y_i = ∑_{i=1}^n y_i · |{j ∈ I : e_i ∈ S_j}|,

where the second equality comes from rewriting the double sum. We then observe that since |{j ∈ I : e_i ∈ S_j}| ≤ f, we get that

    ∑_{j∈I} w_j ≤ f · ∑_{i=1}^n y_i ≤ f · OPT.
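The analysis above tracks exactly the quantities an implementation needs: the dual values y_i, the sets whose dual constraints become tight, and the bound cost ≤ f · ∑_i y_i. The following is a small illustrative sketch of Algorithm 7.1; the data layout and function name are our own, and every element is assumed to appear in some set:

```python
def primal_dual_set_cover(elements, sets, weights):
    """Primal-dual set cover in the style of Algorithm 7.1.

    `sets` maps a set index j to a frozenset of elements; `weights[j]` >= 0.
    Returns (cover, y): the chosen set indices and the final dual solution.
    """
    y = {e: 0.0 for e in elements}
    cover, covered = [], set()
    while covered != set(elements):
        e = next(iter(set(elements) - covered))   # any uncovered element
        # largest increase of y_e keeping every constraint sum_{i: e_i in S_j} y_i <= w_j
        eps = min(weights[j] - sum(y[i] for i in sets[j])
                  for j in sets if e in sets[j])
        y[e] += eps
        # add a set containing e whose dual constraint is now tight
        ell = next(j for j in sets
                   if e in sets[j]
                   and abs(sum(y[i] for i in sets[j]) - weights[j]) < 1e-9)
        cover.append(ell)
        covered |= sets[ell]
    return cover, y
```

On any instance, the returned cover's cost can be checked directly against the f · ∑_i y_i bound from Theorem 7.1.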
We'll be using several features of the algorithm and the analysis repeatedly in this chapter. In particular: we maintain a feasible dual solution, and increase dual variables until a dual constraint becomes tight. This indicates an object that we need to add to our primal solution (a set, in this case). When we analyze the cost of the primal solution, each object in the solution was given by a tight dual inequality. Thus we can rewrite the cost of the primal solution in terms of the dual variables. We then compare this cost with the dual objective function and show that the primal cost is within a certain factor of the dual objective, which shows that we are close to the value of an optimal solution.

In this case, we increase dual variables until w_j = ∑_{i:e_i∈S_j} y_i for some set S_j, which we then add to our primal solution. When we have a feasible primal solution I, we can rewrite its cost in terms of the dual variables by using the tight dual inequalities, so that

    ∑_{j∈I} w_j = ∑_{j∈I} ∑_{i:e_i∈S_j} y_i.

By exchanging the double summation, we have that

    ∑_{j∈I} w_j = ∑_{i=1}^n y_i · |{j ∈ I : e_i ∈ S_j}|.

Then by bounding the value of |{j ∈ I : e_i ∈ S_j}| by f, we get that the cost is at most f times the dual objective function, proving a performance guarantee on the algorithm. Because we will use this form of analysis frequently in this chapter, we will call it the standard primal-dual analysis.

This method of analysis is strongly related to the complementary slackness conditions discussed at the end of Section 1.4. Let I be the set cover returned by the primal-dual algorithm, and consider an integer primal solution x* for the integer programming formulation (7.1) of the set cover problem in which we set x*_j = 1 for each set j ∈ I. Then we know that whenever x*_j > 0, the corresponding dual inequality is tight, so this part of the complementary slackness
conditions is satisfied. If it were also the case that whenever y*_i > 0, the corresponding primal inequality were tight (namely, ∑_{j:e_i∈S_j} x*_j = 1), then the complementary slackness conditions would imply that x* is an optimal solution. This is not the case, but an approximate form of the complementary slackness condition holds; namely, whenever y*_i > 0,

    ∑_{j:e_i∈S_j} x*_j = |{j ∈ I : e_i ∈ S_j}| ≤ f.

Whenever we can show that these complementary slackness conditions hold within a factor of α, we can then obtain an α-approximation algorithm.

Recall from Section 5.6 that we defined the integrality gap of an integer programming formulation to be the worst-case ratio over all instances of the problem of the optimal value of the integer program to the optimal value of the linear programming relaxation. Standard primal-dual algorithms construct a primal integer solution and a solution to the dual of the linear programming relaxation, and the performance guarantee of the algorithm gives an upper bound on the ratio of these two values over all possible instances of the problem. Therefore, the performance guarantee of a primal-dual algorithm provides an upper bound on the integrality gap of an integer programming formulation. However, the opposite is also true: the integrality gap gives a lower bound on the performance guarantee that can be achieved via a standard primal-dual algorithm and analysis, or indeed any algorithm that compares the value of its solution with the value of the linear programming relaxation. In this chapter we will sometimes be able to show limits on primal-dual algorithms that use particular integer programming formulations due to an instance of the problem with a bad integrality gap.
7.2 Choosing variables to increase: the feedback vertex set problem in undirected graphs
In the feedback vertex set problem in undirected graphs, we are given an undirected graph G = (V, E) and nonnegative weights w_i ≥ 0 for vertices i ∈ V. The goal is to choose a minimum-cost subset of vertices S ⊆ V such that every cycle C in the graph contains some vertex of S. We sometimes say that S hits every cycle of the graph. Another way to view the problem is that the goal is to find a minimum-cost subset of vertices S such that removing S from the graph leaves the graph acyclic. Let G[V − S] be the graph on the set of vertices V − S with the edges from G that have both endpoints in V − S; we say that G[V − S] is the graph induced by V − S. A third way to view the problem is to find a minimum-cost set of vertices S such that the induced graph G[V − S] is acyclic.

We will give a primal-dual algorithm for this problem. In the case of the set cover problem, it did not matter which dual variable was increased; we could increase any variable y_i corresponding to an uncovered element e_i and obtain the same performance guarantee. However, it is often helpful to carefully select the dual variable to increase, and we will see this principle in the course of devising an algorithm for the feedback vertex set problem.

If we let C denote the set of all cycles C in the graph, we can formulate the feedback vertex
set problem in undirected graphs as the following integer program:

    minimize   ∑_{i∈V} w_i x_i          (7.4)
    subject to ∑_{i∈C} x_i ≥ 1,   ∀C ∈ C,
               x_i ∈ {0, 1},      ∀i ∈ V.
Initially this might not seem like such a good choice of a formulation: the number of cycles in the graph can be exponential in the size of the graph. However, with the primal-dual method we do not actually need to solve either the integer program or its linear programming relaxation; our use of the linear program and the dual only guides the algorithm and its analysis, so having an exponential number of constraints is not problematic. If we relax the integer program to a linear program by replacing the constraints x_i ∈ {0, 1} with x_i ≥ 0, and take its dual, we obtain

    maximize   ∑_{C∈C} y_C
    subject to ∑_{C∈C:i∈C} y_C ≤ w_i,  ∀i ∈ V,
               y_C ≥ 0,                ∀C ∈ C.
Again, it might seem worrisome that we now have an exponential number of dual variables, since the primal-dual method maintains a feasible dual solution. In the course of the algorithm, however, only a polynomial number of these will become nonzero, so we only need to keep track of these nonzero variables.

By analogy with the primal-dual algorithm for the set cover problem, we obtain Algorithm 7.2. We start with the dual feasible solution in which all y_C are set to zero, and with the primal infeasible solution S = ∅. We see if there is any cycle C left in the induced graph G[V − S]. If there is, then we determine the amount by which we can increase the dual variable y_C while still maintaining dual feasibility. This amount is ϵ = min_{i∈C} (w_i − ∑_{C′:i∈C′} y_{C′}). Increasing y_C causes a dual inequality to become tight for some vertex ℓ ∈ C; in particular, it becomes tight for a vertex ℓ ∈ C that attains the minimum in the expression for ϵ. We add this vertex ℓ to our solution S and we remove ℓ from the graph. We also repeatedly remove any vertices of degree one from the graph (since they cannot be in any cycle) until we are left with a graph that contains only vertices of degree two or higher. Let n = |V| be the number of vertices in the graph. Then note that we can add at most n vertices to our solution, so that we only go through the main loop at most n times, and at most n dual variables are nonzero.

Suppose we now analyze the algorithm as we did for the set cover problem. Let S be the final set of vertices chosen. We know that for any i ∈ S, w_i = ∑_{C:i∈C} y_C. Thus we can write the cost of our chosen solution as

    ∑_{i∈S} w_i = ∑_{i∈S} ∑_{C:i∈C} y_C = ∑_{C∈C} |S ∩ C| y_C.

Note that |S ∩ C| is simply the number of vertices of the solution S in the cycle C. If we can show that |S ∩ C| ≤ α whenever y_C > 0, then we will have ∑_{i∈S} w_i ≤ α ∑_{C∈C} y_C ≤ α · OPT.

Unfortunately, if in the main loop of the algorithm we choose an arbitrary cycle C and increase its dual variable y_C, it is possible that |S ∩ C| can be quite large. In order to do better,
y ← 0
S ← ∅
while there exists a cycle C in G do
    Increase y_C until there is some ℓ ∈ C such that ∑_{C′∈C:ℓ∈C′} y_{C′} = w_ℓ
    S ← S ∪ {ℓ}
    Remove ℓ from G
    Repeatedly remove vertices of degree one from G
return S

Algorithm 7.2: Primal-dual algorithm for the feedback vertex set problem (first attempt).

we need to make a careful choice of cycle C. If we can always choose a short cycle, with |C| ≤ α, then certainly we will have |S ∩ C| ≤ |C| ≤ α. This isn't always possible either: the graph itself can simply be one large cycle through all n vertices. In such a case, however, we only need to choose one vertex from the cycle in order to have a feasible solution. This leads to the following observation.

Observation 7.2: For any path P of vertices of degree two in graph G, Algorithm 7.2 will choose at most one vertex from P; that is, |S ∩ P| ≤ 1 for the final solution S given by the algorithm.

Proof. Once S contains a vertex of P, we remove that vertex from the graph. Its neighbors in P will then have degree one and be removed. Iteratively, the entire path P will be removed from the graph, and so no further vertices of P will be added to S.

Suppose that in the main loop of Algorithm 7.2 we choose the cycle C that minimizes the number of vertices that have degree three or higher. Note that in such a cycle, vertices of degree three or more alternate with paths of vertices of degree two (possibly paths of a single edge). Thus by Observation 7.2, the value of |S ∩ C| for the final solution S will be at most twice the number of vertices of degree three or higher in C. The next lemma shows us that we can find a cycle C with at most O(log n) vertices of degree three or higher.

Lemma 7.3: In any graph G that has no vertices of degree one, there is a cycle with at most 2⌈log₂ n⌉ vertices of degree three or more, and it can be found in linear time.

Proof. If G has only vertices of degree two, then the statement is trivially true. Otherwise, pick an arbitrary vertex of degree three or higher.
We start a variant of a breadth-first search from this vertex, in which we treat every path of vertices of degree two as a single edge joining vertices of degree three or more; note that since there are no degree-one vertices, every such path joins vertices of degree at least three. Thus each level of the breadth-first search tree consists only of vertices of degree three or more. This implies that the number of vertices at each level is at least twice the number of vertices of the previous level. Observe then that the depth of the breadth-first search can be at most ⌈log₂ n⌉: once we reach level ⌈log₂ n⌉, we have reached all n vertices of the graph. We continue the breadth-first search until we close a cycle; that is, we find a path of vertices of degree two from a node on the current level to a previously visited node. This cycle has at most 2⌈log₂ n⌉ vertices of degree three or more, since at worst the cycle is closed by an edge joining two vertices at depth ⌈log₂ n⌉.

We give the revised algorithm in Algorithm 7.3. We can now show the following theorem.
y ← 0
S ← ∅
Repeatedly remove vertices of degree one from G
while there exists a cycle in G do
    Find cycle C with at most 2⌈log₂ n⌉ vertices of degree three or more
    Increase y_C until there is some ℓ ∈ C such that ∑_{C′∈C:ℓ∈C′} y_{C′} = w_ℓ
    S ← S ∪ {ℓ}
    Remove ℓ from G
    Repeatedly remove vertices of degree one from G
return S

Algorithm 7.3: Primal-dual algorithm for the feedback vertex set problem (second attempt).
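To make the moving parts concrete, here is a small sketch of the primal-dual loop in Python. For simplicity it takes an arbitrary cycle found by depth-first search, so it corresponds to the first-attempt Algorithm 7.2 rather than the careful cycle choice of Algorithm 7.3; the helper names and the adjacency-set representation are our own, and simple graphs are assumed.

```python
def find_cycle(adj):
    """Return the vertex set of some cycle in a simple undirected graph, or None."""
    visited, parent = set(), {}

    def dfs(u, p):
        visited.add(u)
        parent[u] = p
        for v in adj[u]:
            if v == p:
                continue
            if v in visited:
                # back edge (u, v): v is an ancestor of u, so walking
                # parent pointers from u back to v recovers the cycle
                cycle, w = {v}, u
                while w != v:
                    cycle.add(w)
                    w = parent[w]
                return cycle
            found = dfs(v, u)
            if found is not None:
                return found
        return None

    for root in list(adj):
        if root not in visited:
            found = dfs(root, None)
            if found is not None:
                return found
    return None

def prune_degree_one(adj):
    """Repeatedly delete vertices of degree at most one (they lie on no cycle)."""
    stack = [v for v in adj if len(adj[v]) <= 1]
    while stack:
        v = stack.pop()
        if v not in adj:
            continue
        for u in adj.pop(v):
            adj[u].discard(v)
            if len(adj[u]) <= 1:
                stack.append(u)

def primal_dual_fvs(adj, w):
    """Primal-dual feedback vertex set in the style of Algorithm 7.2.

    `adj` (dict of vertex -> set of neighbors) is consumed; `w` gives weights.
    """
    paid = {v: 0.0 for v in adj}   # paid[v] = sum of y_C over chosen cycles C containing v
    S = []
    prune_degree_one(adj)
    while True:
        C = find_cycle(adj)
        if C is None:
            return S
        eps = min(w[v] - paid[v] for v in C)   # raise y_C by eps
        for v in C:
            paid[v] += eps
        ell = next(v for v in C if abs(w[v] - paid[v]) < 1e-9)   # a now-tight vertex
        S.append(ell)
        for u in adj.pop(ell):                 # remove ell from the graph
            adj[u].discard(ell)
        prune_degree_one(adj)
```

The dual variables y_C never need to be stored individually: only the per-vertex totals paid[v] matter for detecting tight constraints, mirroring how the analysis only uses ∑_{C:i∈C} y_C.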
Theorem 7.4: Algorithm 7.3 is a (4⌈log₂ n⌉)-approximation algorithm for the feedback vertex set problem in undirected graphs.
Proof. As we showed above, the cost of our final solution S is

    ∑_{i∈S} w_i = ∑_{i∈S} ∑_{C:i∈C} y_C = ∑_{C∈C} |S ∩ C| y_C.

By construction, y_C > 0 only when C contains at most 2⌈log₂ n⌉ vertices of degree three or more in the graph at the time we increased y_C. Note that at this time S ∩ C = ∅. By Observation 7.2, each path of vertices of degree two joining two vertices of degree three or more in C can contain at most one vertex of S. Thus if C has at most 2⌈log₂ n⌉ vertices of degree three or more, it can have at most 4⌈log₂ n⌉ vertices of S overall: possibly each vertex of degree three or more is in S, and then at most one of the vertices in the path joining adjacent vertices of degree three or more can be in S. Since as the algorithm proceeds, the degree of a vertex can only go down, we obtain that whenever y_C > 0, |S ∩ C| ≤ 4⌈log₂ n⌉. Thus we have that

    ∑_{i∈S} w_i = ∑_{C∈C} |S ∩ C| y_C ≤ (4⌈log₂ n⌉) ∑_{C∈C} y_C ≤ (4⌈log₂ n⌉) OPT.
The important observation of this section is that in order to get a good performance guarantee, one must choose carefully the dual variable to increase, and it is frequently useful to choose a dual variable that is small or minimal in some sense. The integrality gap of the integer programming formulation (7.4) is known to be Ω(log n), and so a performance guarantee of O(log n) is the best we can hope for from a primal-dual algorithm using this formulation. However, this does not rule out obtaining a better performance guarantee by a primal-dual algorithm using a different integer programming formulation of the problem. In Section 14.2, we will show that we can obtain a primal-dual 2-approximation algorithm for the feedback vertex set problem in undirected graphs by considering a more sophisticated integer programming formulation of the problem.
7.3 Cleaning up the primal solution: the shortest s-t path problem
In the shortest s-t path problem, we are given an undirected graph G = (V, E), nonnegative costs c_e ≥ 0 on all edges e ∈ E, and a pair of distinguished vertices s and t. The goal is to find the minimum-cost path from s to t. It is well known that the optimal solution can be found in polynomial time; for instance, Dijkstra's algorithm finds an optimal solution in polynomial time. However, it is instructive to think about applying the primal-dual method to this problem, in part because it gives us insight into using the primal-dual method for related problems that are NP-hard; we will see this in the next section. Additionally, the algorithm we find using the primal-dual method turns out to be equivalent to Dijkstra's algorithm.

Let S = {S ⊆ V : s ∈ S, t ∉ S}; that is, S is the set of all s-t cuts in the graph. Then we can model the shortest s-t path problem with the following integer program:

    minimize   ∑_{e∈E} c_e x_e
    subject to ∑_{e∈δ(S)} x_e ≥ 1,  ∀S ∈ S,
               x_e ∈ {0, 1},        ∀e ∈ E,
where δ(S) is the set of all edges that have one endpoint in S and the other endpoint not in S. To see that this integer program models the shortest s-t path problem, take any feasible solution x and consider the graph G′ = (V, E′) with E′ = {e ∈ E : x_e = 1}. The constraints ensure that for any s-t cut S, there must be at least one edge of E′ in δ(S); that is, the size of the minimum s-t cut in G′ must be at least one. Thus by the max-flow min-cut theorem the maximum s-t flow in G′ is at least one, which implies that there is a path from s to t in G′. Similarly, if x is not feasible, then there is some s-t cut S for which there are no edges of E′ in δ(S), which implies that the size of the minimum s-t cut is zero, and thus the maximum s-t flow is zero. Hence there is no path from s to t in G′.

Once again the number of constraints in the integer program is exponential in the size of the problem, and as with the feedback vertex set problem in Section 7.2, this is not problematic since we only use the formulation to guide the algorithm and its analysis. If we replace the constraints x_e ∈ {0, 1} with x_e ≥ 0 to obtain a linear programming relaxation and take the dual of this linear program, we obtain:

    maximize   ∑_{S∈S} y_S
    subject to ∑_{S∈S:e∈δ(S)} y_S ≤ c_e,  ∀e ∈ E,
               y_S ≥ 0,                   ∀S ∈ S.
The dual variables y_S have a nice geometric interpretation; they can be interpreted as "moats" surrounding the set S of width y_S; see Figure 7.1 for an illustration. Any path from s to t must cross this moat, and hence must have cost at least y_S. Moats must be nonoverlapping, and thus for any edge e, we cannot have edge e crossing moats of total width more than c_e; thus we have the dual constraint that ∑_{S:e∈δ(S)} y_S ≤ c_e. We can have many moats, and any s-t path must cross them all, and so must have total cost at least ∑_{S∈S} y_S.
Figure 7.1: An illustration of a moat separating s from t. The moat contains the nodes in S and its width is y_S.

y ← 0
F ← ∅
while there is no s-t path in (V, F) do
    Let C be the connected component of (V, F) containing s
    Increase y_C until there is an edge e′ ∈ δ(C) such that ∑_{S∈S:e′∈δ(S)} y_S = c_{e′}
    F ← F ∪ {e′}
Let P be an s-t path in (V, F)
return P

Algorithm 7.4: Primal-dual algorithm for the shortest s-t path problem.
We now give a primal-dual algorithm for the shortest s-t path problem in Algorithm 7.4, which follows the general lines of the primal-dual algorithms we have given previously for the set cover and feedback vertex set problems. We start with a dual feasible solution y = 0 and the primal infeasible solution F = ∅. While we don't yet have a feasible solution, we increase the dual variable y_C associated with an s-t cut C which the current solution does not satisfy; that is, for which F ∩ δ(C) = ∅. We call such constraints violated constraints. Following the lesson of the previous section, we carefully choose the "smallest" such constraint: we let C be the set of vertices of the connected component containing s in the set of edges F. Because F does not contain an s-t path, we know that t ∉ C, and by the definition of a connected component we know that δ(C) ∩ F = ∅. We increase the dual variable y_C until some constraint of the dual becomes tight for some edge e′ ∈ E, and we add e′ to F.

Once our primal solution F is feasible and contains an s-t path, we end the main loop. Now we do something slightly different than before: we don't return the solution F, but rather a subset of F. Let P be any s-t path such that P ⊆ F. Our algorithm returns P. We do this since it may be too expensive to return all the edges in F, so we effectively delete any edge we
do not need and return only the edges in P.

We begin the analysis by showing that the set of edges F found by the algorithm forms a tree; this implies that the s-t path P is unique.

Lemma 7.5: At any point in Algorithm 7.4, the set of edges in F forms a tree containing the vertex s.

Proof. We prove this by induction on the number of edges added to F. In each step of the main loop we consider the connected component C of (V, F) containing s, and add an edge e′ from the set δ(C) to our solution F. Since exactly one endpoint of e′ is in C, e′ cannot close a cycle in the tree F, and it causes F to span a new vertex not previously spanned.

We can now show that this algorithm is an optimal algorithm for the shortest s-t path problem.

Theorem 7.6: Algorithm 7.4 finds a shortest path from s to t.

Proof. We prove this using the standard primal-dual analysis. An edge is added to F only when its dual constraint becomes tight, so for every edge e ∈ P ⊆ F we have c_e = ∑_{S : e∈δ(S)} y_S, and we know that

    ∑_{e∈P} c_e = ∑_{e∈P} ∑_{S : e∈δ(S)} y_S = ∑_{S : s∈S, t∉S} |P ∩ δ(S)| y_S.

If we can now show that |P ∩ δ(S)| = 1 whenever y_S > 0, then we will have shown that

    ∑_{e∈P} c_e = ∑_{S : s∈S, t∉S} y_S ≤ OPT

by weak duality. But of course since P is an s-t path of cost no less than OPT, it must then have cost exactly OPT.

We now show that if y_S > 0, then |P ∩ δ(S)| = 1. Suppose otherwise, so that |P ∩ δ(S)| > 1. Then there must be a subpath P′ of P joining two vertices of S such that the only vertices of P′ in S are its start and end vertices; see Figure 7.2. Since y_S > 0, we know that at the time we increased y_S, F was a tree spanning just the vertices in S. Thus F ∪ P′ must contain a cycle. Since P is a subset of the final set of edges F, this implies that the final F contains a cycle, which contradicts Lemma 7.5. Thus it must be the case that |P ∩ δ(S)| = 1.

As we stated previously, one can show that this algorithm behaves in exactly the same way as Dijkstra's algorithm for solving the shortest s-t path problem; proving this equivalence is given as Exercise 7.1.
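To make the mechanics concrete, here is a small Python sketch of Algorithm 7.4. It is an illustration only, not code from the text: it tracks the accumulated dual load ∑_{S : e∈δ(S)} y_S on each edge explicitly, and raises the single moat y_C by the minimum remaining slack of the edges crossing δ(C).

```python
def primal_dual_shortest_path(edges, s, t):
    """Sketch of Algorithm 7.4: grow a tree F from s, raising the dual
    y_C of the component C containing s until a crossing edge goes tight.

    edges: dict mapping frozenset({u, v}) -> nonnegative cost c_e.
    Returns the unique s-t path in the final tree F (cf. Lemma 7.5).
    """
    tight = {e: 0.0 for e in edges}   # accumulated dual load on each edge
    component = {s}                   # vertices of the component containing s
    parent = {s: None}                # tree structure of F
    while t not in component:
        # edges crossing the moat delta(C) around the current component
        crossing = [e for e in edges if len(e & component) == 1]
        if not crossing:
            return None               # t is unreachable from s
        # raise y_C by the minimum remaining slack of a crossing edge
        eps = min(edges[e] - tight[e] for e in crossing)
        for e in crossing:
            tight[e] += eps
        e_new = min(crossing, key=lambda e: edges[e] - tight[e])  # now tight
        u, v = tuple(e_new)
        inside, outside = (u, v) if u in component else (v, u)
        parent[outside] = inside
        component.add(outside)
    # walk back from t along the tree F to recover the path P
    path, x = [], t
    while x is not None:
        path.append(x)
        x = parent[x]
    return path[::-1]
```

On small instances the returned path matches what Dijkstra's algorithm would find, in line with the equivalence mentioned above (Exercise 7.1).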
7.4 Increasing multiple variables at once: the generalized Steiner tree problem
We now turn to a problem known as the generalized Steiner tree problem or the Steiner forest problem. In this problem we are given an undirected graph G = (V, E), nonnegative costs c_e ≥ 0 for all edges e ∈ E, and k pairs of vertices s_i, t_i ∈ V. The goal is to find a minimum-cost subset of edges F ⊆ E such that every s_i-t_i pair is connected in the set of selected edges; that is, s_i and t_i are connected in (V, F) for all i.
Figure 7.2: Proof of Theorem 7.6. The heavy line is the path P; the dotted heavy line is the subpath P′.

Let S_i be the collection of subsets of vertices separating s_i and t_i; that is, S_i = {S ⊆ V : |S ∩ {s_i, t_i}| = 1}. Then we can model this problem with the following integer program:

    minimize ∑_{e∈E} c_e x_e                                      (7.5)
    subject to ∑_{e∈δ(S)} x_e ≥ 1,   ∀S ⊆ V : S ∈ S_i for some i,
               x_e ∈ {0, 1},          ∀e ∈ E.
The set of constraints enforces that for any s_i-t_i cut S, with s_i ∈ S, t_i ∉ S or vice versa, we must select at least one edge from δ(S). The argument that this models the generalized Steiner tree problem is similar to that for the shortest s-t path problem in the previous section. Given the linear programming relaxation obtained by dropping the constraints x_e ∈ {0, 1} and replacing them with x_e ≥ 0, the dual of this linear program is

    maximize ∑_{S⊆V : ∃i, S∈S_i} y_S
    subject to ∑_{S : e∈δ(S)} y_S ≤ c_e,   ∀e ∈ E,
               y_S ≥ 0,                     ∀S ⊆ V : ∃i, S ∈ S_i.
As in the case of the shortest s-t path problem, the dual has a natural geometric interpretation as moats. In this case, however, we can have moats around any set of vertices S ∈ S_i for any i. See Figure 7.3 for an illustration. Our initial attempt at a primal-dual algorithm is given in Algorithm 7.5, and is similar to the shortest s-t path algorithm given in the previous section. In every iteration, we choose some connected component C such that |C ∩ {s_i, t_i}| = 1 for some i. We increase the dual variable y_C associated with C until the dual inequality associated with some edge e′ ∈ δ(C) becomes tight, and we add this edge to our primal solution F.
Figure 7.3: Illustration of moats for the generalized Steiner tree problem. Each of s_1, s_2, and t_3 has a white moat surrounding just that node; additionally, there is a grey moat depicting y_{{s_1,s_2,t_3}} = δ.

    y ← 0
    F ← ∅
    while not all s_i-t_i pairs are connected in (V, F) do
        Let C be a connected component of (V, F) such that |C ∩ {s_i, t_i}| = 1 for some i
        Increase y_C until there is an edge e′ ∈ δ(C) such that ∑_{S∈S_i : e′∈δ(S)} y_S = c_e′
        F ← F ∪ {e′}
    return F

Algorithm 7.5: Primal-dual algorithm for the generalized Steiner tree problem (first attempt).

Using the standard primal-dual analysis, we can analyze the cost of the final solution F as follows:

    ∑_{e∈F} c_e = ∑_{e∈F} ∑_{S : e∈δ(S)} y_S = ∑_S |δ(S) ∩ F| y_S.

In order to compare the term ∑_S |δ(S) ∩ F| y_S with the dual objective function ∑_S y_S, we will need to show that |δ(S) ∩ F| ≤ α for some α whenever y_S > 0. Unfortunately, it is possible to give an example showing that |δ(S) ∩ F| = k (where k is the number of s_i-t_i pairs) for some S with y_S > 0, no matter what connected component is chosen in the main loop of Algorithm 7.5. To see this, consider the complete graph on k + 1 vertices. Let each edge have cost 1, let one vertex correspond to all the s_i, and let the remaining k vertices be t_1, . . . , t_k (see Figure 7.4). The algorithm must choose one of the k + 1 vertices initially for the set C. The dual y_C gets value y_C = 1, and this is the only nonzero dual at the end of the algorithm. The final solution F has edges from the vertex chosen for C to all k other vertices, giving |δ(C) ∩ F| = k. However, observe that if in the final solution we take the average of |δ(C) ∩ F| over all the k + 1 vertices we could have chosen for the initial set C, we get 2k/(k + 1) ≈ 2, since ∑_{j∈V} |δ({j}) ∩ F| = 2k.

This observation suggests that perhaps we should increase the dual variables for several sets C at once. We present this variation in Algorithm 7.6. Let C be the set of all the connected
Figure 7.4: Bad example for Algorithm 7.5. If the algorithm chooses the vertex s_1, . . . , s_4 as its initial connected component C, then it will eventually add all solid edges to F, and |δ(C) ∩ F| = 4.

components C such that |C ∩ {s_i, t_i}| = 1 for some i. We increase the associated dual variables y_C for all C ∈ C at the same rate until a dual inequality becomes tight for some edge e ∈ δ(C) for a set C whose dual we increased. We then add e to our solution F and continue. Additionally, we index the edges we add as we go along: e_1 is added in the first iteration, e_2 in the second, and so on. Once we have a feasible solution F such that all s_i-t_i pairs are connected in F, we go through the edges in the reverse of the order in which they were added, from the edge added in the last iteration to the edge added in the first iteration. If an edge can be removed without affecting the feasibility of the solution, it is deleted. The final set of edges returned after this “reverse deletion” step is F′. This reverse deletion step is solely to simplify our analysis; in Exercise 7.4 the reader is asked to show that removing all unnecessary edges in any order gives an equally good approximation algorithm. Given the geometric interpretation of the dual variables as moats, we give an illustration of the algorithm in Figure 7.5.

We can now show that our intuition gathered from the bad example in Figure 7.4 is essentially correct, and that the algorithm is a 2-approximation algorithm for the generalized Steiner tree problem. In order to do this, we first state a lemma, whose proof we defer for a moment.

Lemma 7.7: For any C in any iteration of the algorithm,

    ∑_{C∈C} |δ(C) ∩ F′| ≤ 2|C|.

In terms of the geometric interpretation, we wish to show that the number of times that the edges in the solution cross a moat is at most twice the number of moats; see Figure 7.6. The main intuition of the proof is that the total degree of the nodes in a forest is at most twice the number of nodes, where we treat each connected component C as a node of the tree and the edges in δ(C) as the edges of the tree. The proof is slightly more complicated than this because only some components C are in C, but we show that every “leaf” of the tree is a component in C, and this is sufficient to prove the result. We now show that the lemma implies the desired performance guarantee.

Theorem 7.8: Algorithm 7.6 is a 2-approximation algorithm for the generalized Steiner tree problem.
Figure 7.5: Illustration of the primal-dual algorithm for the generalized Steiner tree problem. Two edges go tight simultaneously in the last iteration before the deletion step. The deletion step removes the edge (s_1, s_2) added in the first iteration. The final set of edges is shown in the last figure.
    y ← 0
    F ← ∅
    ℓ ← 0
    while not all s_i-t_i pairs are connected in (V, F) do
        ℓ ← ℓ + 1
        Let C be the set of all connected components C of (V, F) such that |C ∩ {s_i, t_i}| = 1 for some i
        Increase y_C for all C ∈ C uniformly until for some e_ℓ ∈ δ(C′), C′ ∈ C, c_{e_ℓ} = ∑_{S : e_ℓ∈δ(S)} y_S
        F ← F ∪ {e_ℓ}
    F′ ← F
    for k ← ℓ downto 1 do
        if F′ − e_k is a feasible solution then
            Remove e_k from F′
    return F′

Algorithm 7.6: Primal-dual algorithm for the generalized Steiner tree problem (second attempt).

Proof. As usual, we begin by expressing the cost of our primal solution in terms of the dual variables:

    ∑_{e∈F′} c_e = ∑_{e∈F′} ∑_{S : e∈δ(S)} y_S = ∑_S |F′ ∩ δ(S)| y_S.

We would like to show that

    ∑_S |F′ ∩ δ(S)| y_S ≤ 2 ∑_S y_S,        (7.6)

since by weak duality this will imply that the algorithm is a 2-approximation algorithm. As is suggested by the bad example in Figure 7.4, we cannot simply prove this by showing that |F′ ∩ δ(S)| ≤ 2 whenever y_S > 0. Instead, we show inequality (7.6) by induction on the number of iterations of the algorithm. Initially all dual variables y_S = 0, so inequality (7.6) holds. Suppose that the inequality holds at the beginning of some iteration of the main loop of the algorithm. Then in this iteration we increase each y_C for C ∈ C by the same amount (call it ϵ). This increases the left-hand side of (7.6) by

    ϵ ∑_{C∈C} |F′ ∩ δ(C)|

and the right-hand side by 2ϵ|C|. By the inequality of Lemma 7.7, the increase in the left-hand side is no greater than the increase in the right-hand side. Thus if the inequality held before the increase of the dual variables in this iteration, it also holds afterwards.

We now turn to the proof of Lemma 7.7. We first need an observation.

Observation 7.9: At any point in the algorithm, the set of edges F is a forest.

Proof. We prove the statement by induction on the number of iterations of the algorithm. The set F = ∅ is initially a forest. Any time we add an edge to the solution, it has at most one
Figure 7.6: Illustration of Lemma 7.7. Consider the five thin dark moats. The edges of F′ cross these moats 8 times (twice each by the edges joining s_1-t_1, s_2-t_2, t_2-t_3, and t_2-s_3). Each moat corresponds to a connected component C ∈ C in the iteration in which the corresponding duals y_C were increased. Thus in this iteration we see that 8 = ∑_{C∈C} |δ(C) ∩ F′| ≤ 2|C| = 10.

endpoint in any given connected component of (V, F). Thus it joins two connected components of the forest F, and the forest remains a forest.

Proof of Lemma 7.7. Consider the iteration in which edge e_i is added to F. Let F_i be the edges already in F at this point; that is, F_i = {e_1, . . . , e_{i−1}}. Let H = F′ − F_i. Observe that F_i ∪ H = F_i ∪ F′ is a feasible solution to the problem, since F′ by itself is a feasible solution to the problem. We claim also that if we remove any edge e ∈ H from F_i ∪ H, it will not be a feasible solution. This follows from the deletion procedure at the end of the algorithm: at the time we consider edge e_{i−1} for deletion, the edges in F′ at that point in the procedure are exactly those in F_i ∪ H. Hence it must be the case that any edge already considered and remaining in F′ is necessary for the solution to be feasible. These edges are exactly the edges in H.

We form a new graph by contracting each connected component of (V, F_i) to a single vertex. Let V′ be this new set of vertices. Observe that since at any point F is a forest, once we have contracted all the edges in F_i into vertices V′, no edge in H can have both endpoints corresponding to the same vertex in V′. Thus we can consider the forest of edges H on the set of vertices V′, where each edge in H is connected to the two vertices in V′ corresponding to the two connected components it joined in (V, F_i). See Figure 7.7 for an illustration. We let deg(v) for v ∈ V′ represent the degree of vertex v in this forest. We also color the vertices in V′ with two different colors, red and blue.
The red vertices in V′ are those connected components C of (V, F_i) that are in the set C in this iteration (that is, for some j, |C ∩ {s_j, t_j}| = 1). The blue vertices are all the other vertices of V′. Let R denote the set of red vertices in V′, and let B denote the set of blue vertices v that have deg(v) > 0. We now observe that we can rewrite the desired inequality

    ∑_{C∈C} |δ(C) ∩ F′| ≤ 2|C|
Figure 7.7: Proof of Lemma 7.7. On the left is the current set of connected components, with the edges in H shown in dashed lines. The right side shows the contracted graph.

in terms of this forest. The right-hand side is 2|R|. Since F′ ⊆ F_i ∪ H and no edge of F_i can appear in δ(C) for C ∈ C (since C is a connected component of (V, F_i)), the left-hand side is no greater than ∑_{v∈R} deg(v). So we wish to prove that ∑_{v∈R} deg(v) ≤ 2|R|. To prove this, we claim that no blue vertex can have degree exactly one. If this is true, then we have that

    ∑_{v∈R} deg(v) = ∑_{v∈R∪B} deg(v) − ∑_{v∈B} deg(v).

Then since the total degree of a forest is no more than twice the number of vertices in it, and since all blue vertices of nonzero degree have degree at least two by the claim, we have that

    ∑_{v∈R} deg(v) ≤ 2(|R| + |B|) − 2|B| = 2|R|,
as desired.

It remains to prove that no blue vertex can have degree one. Suppose otherwise; let v ∈ V′ be a blue vertex of degree one, let C be the connected component of the uncontracted graph corresponding to v, and let e ∈ H be the incident edge. By our initial discussion, e must be necessary for the feasibility of the solution. Thus it must be on some path between s_j and t_j for some j, and |C ∩ {s_j, t_j}| = 1. But in this case, C must be in C, and v must be red, which is a contradiction.

Since the proof shows that the algorithm finds a solution to the integer programming formulation (7.5) of cost at most twice the value of a dual solution to the dual of the linear programming relaxation, the proof implies that the integrality gap of the formulation is at most 2. We can use the integrality gap instance for the prize-collecting Steiner tree problem in Section 5.7 (see Figure 5.4) to show that the integrality gap for this formulation is essentially 2, so that no better performance guarantee can be obtained by using a primal-dual algorithm with this formulation. In Section 16.4, we will consider a directed version of the generalized Steiner tree problem, and we will show that it is substantially more difficult to approximate the directed version of the problem.
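As an illustration only (not code from the text), Algorithm 7.6 can be sketched in Python. The uniform dual increase is simulated by tracking each edge's accumulated dual load, where an edge's load grows at a rate equal to the number of violated components it crosses; the reverse deletion is done exactly as described. The instance is assumed to admit a feasible solution.

```python
def steiner_forest(n, edges, pairs):
    """Sketch of Algorithm 7.6 (primal-dual 2-approximation for the
    generalized Steiner tree problem). edges: frozenset({u,v}) -> cost."""
    tight = {e: 0.0 for e in edges}   # accumulated dual load per edge
    F = []                            # edges e_1, e_2, ... in order added

    def components(es):
        """Connected components of (V, es) via union-find."""
        comp = list(range(n))
        def find(v):
            while comp[v] != v:
                comp[v] = comp[comp[v]]
                v = comp[v]
            return v
        for e in es:
            u, w = tuple(e)
            comp[find(u)] = find(w)
        groups = {}
        for v in range(n):
            groups.setdefault(find(v), set()).add(v)
        return list(groups.values())

    def feasible(es):
        comps = components(es)
        return all(any(s in C and t in C for C in comps) for s, t in pairs)

    while not feasible(F):
        comps = components(F)
        # violated components separate some pair s_i, t_i
        viol = [C for C in comps if any((s in C) != (t in C) for s, t in pairs)]
        # each edge's dual load grows at rate = #violated components it crosses
        rate = {e: sum(1 for C in viol if len(e & C) == 1) for e in edges}
        cand = [e for e in edges if rate[e] > 0 and e not in F]
        eps = min((edges[e] - tight[e]) / rate[e] for e in cand)
        for e in edges:
            tight[e] += eps * rate[e]
        e_new = min(cand, key=lambda e: edges[e] - tight[e])  # a tight edge
        F.append(e_new)
    # reverse deletion: scan edges from last added to first added
    Fp = list(F)
    for e in reversed(F):
        if feasible([f for f in Fp if f != e]):
            Fp.remove(e)
    return Fp
```

On a triangle with s_1 = 0, t_1 = 2 and edge costs c_{01} = c_{12} = 1, c_{02} = 3, the algorithm returns the two unit-cost edges, at most twice the optimal cost.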
7.5 Strengthening inequalities: the minimum knapsack problem
In this section, we turn to a minimization version of the knapsack problem introduced in Section 3.1. As in the previous version of the knapsack problem, we are given a set I of n items, I = {1, 2, . . . , n}. Each item has a value v_i and a size s_i. In the previous version of the problem, we additionally had a knapsack capacity B, and the goal was to find the maximum-value subset of items such that the total size of the items was at most the capacity of the knapsack. In the minimization version of the problem, we are given a demand D, and the goal is to find a subset of items having minimum total size such that the total value is at least the demand. That is, we try to find a subset X ⊆ I of items minimizing s(X) = ∑_{i∈X} s_i subject to the constraint that v(X) = ∑_{i∈X} v_i ≥ D. We can formulate the problem as the following integer program, in which the primal variable x_i indicates whether item i is chosen to be in the solution or not:

    minimize ∑_{i∈I} s_i x_i
    subject to ∑_{i∈I} v_i x_i ≥ D,
               x_i ∈ {0, 1},   ∀i ∈ I.
We obtain a linear programming relaxation for the problem by replacing the constraints x_i ∈ {0, 1} with linear constraints 0 ≤ x_i ≤ 1 for all i ∈ I. However, this linear programming relaxation is not a particularly good one, in the sense that it has a bad integrality gap. Consider a two-item set I = {1, 2}, where v_1 = D − 1, v_2 = D, s_1 = 0, and s_2 = 1. Any feasible integer solution must take item 2 (x_2 = 1), for a total size of 1. But the solution x_1 = 1, x_2 = 1/D is feasible for the linear programming relaxation and has total size 1/D. This example shows that the integrality gap of this formulation is at least 1/(1/D) = D. To get a primal-dual algorithm with a reasonable performance guarantee, we must use a different integer programming formulation of the same problem, one with a better integrality gap. We introduce a new set of constraints, one for every subset A ⊆ I of items such that v(A) < D. Let D_A = D − v(A); we can think of D_A as the demand left over if the items in A are added to the knapsack. Notice that even if we select every item in the set A, we must still choose additional items of total value at least D_A. Given the set A, this is simply another minimum knapsack problem on the set of items I − A, where the desired demand is now D_A. Given A, we can also reduce the value of each item in I − A to the minimum of its value and D_A; since the desired demand is D_A, we don't need the value of the item to be any larger. We let v_i^A = min(v_i, D_A). Then for any A ⊆ I and any set of items X ⊆ I such that v(X) ≥ D, it is the case that ∑_{i∈X−A} v_i^A ≥ D_A. We can then give the following integer programming formulation of the problem:

    minimize ∑_{i∈I} s_i x_i
    subject to ∑_{i∈I−A} v_i^A x_i ≥ D_A,   ∀A ⊆ I,
               x_i ∈ {0, 1},                 ∀i ∈ I.
As we argued above, any integer solution corresponding to a knapsack of value at least D is
    y ← 0
    A ← ∅
    while v(A) < D do
        Increase y_A until for some i ∈ I − A, ∑_{B⊆I : i∉B} v_i^B y_B = s_i
        A ← A ∪ {i}
    return A

Algorithm 7.7: Primal-dual algorithm for the minimum knapsack problem.

a feasible solution to the integer program, and thus the integer program models the minimum knapsack problem. We now consider the linear programming relaxation of the integer program above, replacing the constraints x_i ∈ {0, 1} with x_i ≥ 0:

    minimize ∑_{i∈I} s_i x_i
    subject to ∑_{i∈I−A} v_i^A x_i ≥ D_A,   ∀A ⊆ I,
               x_i ≥ 0,                      ∀i ∈ I.
Observe that for the instance that gave a bad integrality gap for the previous integer programming formulation, the given LP solution is no longer feasible. In that example, we had I = {1, 2}, with v_1 = D − 1, v_2 = D, s_1 = 0, and s_2 = 1. Now consider the constraint corresponding to A = {1}. We have D_A = D − v_1 = 1 and v_2^A = min(v_2, D_A) = 1, so that the constraint ∑_{i∈I−A} v_i^A x_i ≥ D_A is x_2 ≥ 1 for this choice of A. The constraint forces us to take item 2; we cannot take a 1/D fraction of item 2 to meet the missing single unit of demand. The dual of the linear programming relaxation is

    maximize ∑_{A⊆I} D_A y_A
    subject to ∑_{A⊆I : i∉A} v_i^A y_A ≤ s_i,   ∀i ∈ I,
               y_A ≥ 0,                          ∀A ⊆ I.
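As a quick numeric sanity check (an illustration, not part of the text), the two-item instance can be tested against the strengthened constraint for A = {1}:

```python
# Two-item bad instance: v_1 = D - 1, v_2 = D, s_1 = 0, s_2 = 1.
D = 10
v = {1: D - 1, 2: D}
A = {1}
DA = D - sum(v[i] for i in A)                  # leftover demand D_A
vA = {i: min(v[i], DA) for i in v if i not in A}
assert DA == 1 and vA == {2: 1}                # constraint reads 1 * x_2 >= 1
# the old fractional solution x_2 = 1/D now violates it:
assert vA[2] * (1 / D) < DA
```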
We can now consider a primal-dual algorithm for the problem using the primal and dual formulations given above. We start with an empty set of selected items and the dual solution y_A = 0 for all A ⊆ I. Now we must select some dual variable to increase. Which one should it be? Following the idea introduced previously of choosing a variable corresponding to a minimal object of some sort, we increase the dual variable y_∅. A dual constraint will become tight for some item i ∈ I, and we will add this item to our set of selected items. Which variable should we increase next? Notice that in order to maintain dual feasibility, it will have to be a variable y_A for some set A such that i ∈ A; if i ∉ A, then we cannot increase y_A without violating the tight dual constraint for i. The most natural choice is to increase y_A for A = {i}. We continue in this fashion, letting A be our set of selected items; whenever a dual constraint becomes tight for some new item j ∈ I, we add j to A, and in the next iteration increase the dual variable y_A. The algorithm is given in Algorithm 7.7, and it terminates when v(A) ≥ D. We can now show that the algorithm is a 2-approximation algorithm for the minimum knapsack problem.
Theorem 7.10: Algorithm 7.7 is a 2-approximation algorithm for the minimum knapsack problem.

Proof. Let ℓ be the final item selected by the algorithm, and let X be the set of items returned at the end of the algorithm. We know that v(X) ≥ D; since item ℓ was added to X, it must be the case that before ℓ was added, the total value of the set of items was less than D, so that v(X − ℓ) < D. Following the standard primal-dual analysis, we know that

    ∑_{i∈X} s_i = ∑_{i∈X} ∑_{A⊆I : i∉A} v_i^A y_A.
Reversing the double sum, we have that

    ∑_{i∈X} ∑_{A⊆I : i∉A} v_i^A y_A = ∑_{A⊆I} y_A ∑_{i∈X−A} v_i^A.
Note that in any iteration of the algorithm except the last one, adding the next item i to the current set of items A did not cause the value of the knapsack to become at least D; that is, v_i < D − v(A) = D_A at that point in the algorithm. Thus for all items i ∈ X except ℓ, v_i^A = min(v_i, D_A) = v_i for the point in the algorithm at which A was the current set of items. Thus we can rewrite

    ∑_{i∈X−A} v_i^A = v_ℓ^A + ∑_{i∈X−A : i≠ℓ} v_i^A = v_ℓ^A + v(X − ℓ) − v(A).
Note that v_ℓ^A ≤ D_A by definition, and, as argued at the beginning of the proof, v(X − ℓ) < D, so that v(X − ℓ) − v(A) < D − v(A) = D_A; thus we have that v_ℓ^A + v(X − ℓ) − v(A) < 2D_A. Therefore
    ∑_{i∈X} s_i = ∑_{A⊆I} y_A ∑_{i∈X−A} v_i^A < 2 ∑_{A⊆I} D_A y_A ≤ 2 OPT,

where the final inequality follows by weak duality, since ∑_{A⊆I} D_A y_A is the dual objective function.
The proof of the performance guarantee of the algorithm shows that the integrality gap of the new integer programming formulation must be at most 2.
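A compact Python sketch of Algorithm 7.7 (hypothetical code, for illustration): the accumulated left-hand side of each item's dual constraint, ∑_{B : i∉B} v_i^B y_B, is tracked in `load`, and each iteration raises y_A for the current set A until some remaining item's constraint goes tight.

```python
def min_knapsack(values, sizes, D):
    """Sketch of Algorithm 7.7 (primal-dual 2-approximation for the
    minimum knapsack problem). Returns the selected set A of item indices."""
    n = len(values)
    A = set()
    load = [0.0] * n   # sum over past sets B (with i not in B) of v_i^B * y_B
    while sum(values[i] for i in A) < D:
        DA = D - sum(values[i] for i in A)           # leftover demand D_A
        rate = {i: min(values[i], DA)                # v_i^A for i not in A
                for i in range(n) if i not in A}
        # y_A rises until s_i = load_i + v_i^A * y_A for the first item i
        new = min((i for i in rate if rate[i] > 0),
                  key=lambda i: (sizes[i] - load[i]) / rate[i])
        eps = (sizes[new] - load[new]) / rate[new]   # increase in y_A
        for i in rate:
            load[i] += eps * rate[i]
        A.add(new)
    return A
```

On the two-item bad instance (v = (D − 1, D), s = (0, 1)), the size-0 item goes tight immediately and the algorithm then takes item 2 as well, for total size 1 = OPT.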
7.6 The uncapacitated facility location problem
We now return to the uncapacitated facility location problem introduced in Section 4.5, for which we gave a randomized approximation algorithm in Section 5.8. Recall that the input to the problem is a set of clients D and a set of facilities F, with facility costs f_i for all facilities i ∈ F, and assignment costs c_ij for all facilities i ∈ F and clients j ∈ D. The goal is to select a subset of facilities to open and an assignment of clients to open facilities so as to minimize the total cost of the open facilities plus the assignment costs. As before, we consider the metric uncapacitated facility location problem and assume that the clients and facilities are points in a metric
space, and the assignment cost c_ij is the distance between client j and facility i. In particular, given clients j, l and facilities i, k, we have that c_ij ≤ c_il + c_kl + c_kj by the triangle inequality. We will now give a primal-dual approximation algorithm for the problem. Recall the linear programming relaxation of the problem we used in previous sections:

    minimize ∑_{i∈F} f_i y_i + ∑_{i∈F, j∈D} c_ij x_ij
    subject to ∑_{i∈F} x_ij = 1,   ∀j ∈ D,
               x_ij ≤ y_i,          ∀i ∈ F, j ∈ D,
               x_ij ≥ 0,            ∀i ∈ F, j ∈ D,
               y_i ≥ 0,             ∀i ∈ F,
in which variable x_ij indicates whether client j is assigned to facility i, and variable y_i indicates whether facility i is open or not. The dual of the LP relaxation is

    maximize ∑_{j∈D} v_j
    subject to ∑_{j∈D} w_ij ≤ f_i,   ∀i ∈ F,
               v_j − w_ij ≤ c_ij,     ∀i ∈ F, j ∈ D,
               w_ij ≥ 0,              ∀i ∈ F, j ∈ D.
It is also useful to recall the intuition for the dual which we gave in Section 4.5. We can view the dual variable v_j as the amount that each client j will pay towards its part of the cost of the solution. If facility costs are all zero, then v_j = min_{i∈F} c_ij. To handle nonzero facility costs, the cost f_i is split into nonnegative cost shares w_ij apportioned among the clients, so that ∑_{j∈D} w_ij ≤ f_i. A client j only needs to pay this share if it uses facility i. In this way, we no longer charge explicitly for opening a facility, but still recover some of its cost. Each client j is willing to pay the lowest cost over all facilities of its service cost plus its share of the facility cost, so that v_j = min_{i∈F} (c_ij + w_ij). By allowing v_j to be any value for which v_j ≤ c_ij + w_ij, the objective function maximizing ∑_{j∈D} v_j forces v_j to be equal to the smallest right-hand side over all facilities i ∈ F. Thus any feasible solution to the dual is a lower bound on the cost of an optimal solution to the facility location problem.

To get some intuition for why a primal-dual analysis will be useful for this problem, let us first consider a dual feasible solution (v∗, w∗) that is maximal; that is, we cannot increase any v_j∗ by any positive amount and then derive a set of w_ij∗ that gives a dual feasible solution. Such a dual solution has some very nice structure. To discuss this further, we modify a definition used in previous sections, and introduce a new one.

Definition 7.11: Given a dual solution (v∗, w∗), we say that a client j neighbors a facility i (or that i neighbors j) if v_j∗ ≥ c_ij. We let N(j) = {i ∈ F : v_j∗ ≥ c_ij} be the neighbors of a client j and N(i) = {j ∈ D : v_j∗ ≥ c_ij} be the neighbors of a facility i.

Definition 7.12: Given a dual solution (v∗, w∗), we say that a client j contributes to a facility i if w_ij∗ > 0.
In other words, a client j contributes to facility i if it has a nonzero cost share w_ij∗ for that facility. We observe that given dual variables v∗, we can derive a feasible set of cost shares w∗ (if they exist) by setting w_ij∗ = max(0, v_j∗ − c_ij). Observe that if we derive w∗ in this way and client j contributes to facility i, then j neighbors i (j ∈ N(i)), since w_ij∗ > 0 implies v_j∗ > c_ij. Furthermore, if j ∈ N(i), then v_j∗ = c_ij + w_ij∗.

Let T be the set of all facilities such that the sum of the cost shares is equal to the cost of the facility; in other words, the corresponding dual inequality is tight. Then T = {i ∈ F : ∑_{j∈D} w_ij∗ = f_i}. First we claim that in a maximal dual solution (v∗, w∗), every client must neighbor some facility in T. To see this, we claim that in a maximal dual solution it must be the case that v_j∗ = min_{i∈F} (c_ij + w_ij∗), and that some facility i ∈ F attaining the minimum must be in T. Then for this facility i, v_j∗ ≥ c_ij, and j neighbors i ∈ T. To see the claim: clearly if v_j∗ < min_{i∈F} (c_ij + w_ij∗), we can feasibly increase v_j∗ and the solution is not maximal. If v_j∗ = min_{i∈F} (c_ij + w_ij∗) and no facility i attaining the minimum is in T, then since ∑_{k∈D} w_ik∗ < f_i for these facilities i, we can feasibly increase v_j∗ and the w_ij∗ for these facilities, and once again the solution is not maximal.

Given some facility i ∈ T, the cost of the facility plus the cost of assigning the neighbors N(i) to i is exactly equal to the sum of the dual variables of the neighboring clients; that is,

    f_i + ∑_{j∈N(i)} c_ij = ∑_{j∈N(i)} (w_ij∗ + c_ij) = ∑_{j∈N(i)} v_j∗,

where the first equality follows since w_ij∗ > 0 implies that j ∈ N(i), and the second equality follows since j ∈ N(i) implies that w_ij∗ + c_ij = v_j∗. Since all clients have a neighbor in T, it would then seem that we could get an optimal algorithm by opening all facilities in T and assigning each client to its neighbor in T. The difficulty with this approach is that a given client j might neighbor several facilities in T and might contribute to many of them; we would then use v_j∗ multiple times to pay for the cost of these facilities. We can fix this by opening only a subset T′ of T such that each client contributes to the cost of at most one facility in T′. If we do this in such a way that clients not neighboring a facility in T′ are nonetheless not too far away from a facility in T′, we can get a good performance guarantee for the algorithm.

We now give the primal-dual algorithm in Algorithm 7.8. We generate a maximal dual solution by increasing the dual variables v_j. We let S be the set of clients whose duals we are increasing and T be the set of facilities whose dual inequality is tight. Initially S = D and T = ∅. We increase v_j uniformly for all j ∈ S. Once v_j = c_ij for some i, we increase w_ij uniformly with v_j. We increase v_j until one of two things happens: either j becomes a neighbor of a facility in T, or a dual inequality becomes tight for some facility i. In the first case, we remove j from S; in the second case, we add i to T and remove all neighboring clients N(i) from S. Once S is empty and every client neighbors a facility in T, we select a subset T′ of T by iteratively picking an arbitrary i ∈ T, then deleting all facilities i′ in T such that there exists some client j that contributes to both i and i′. We then open all facilities in T′ and assign every client to the closest open facility.

We claim the following lemma about the set of facilities T′ and the dual solution (v, w) produced by the algorithm. It shows that if a client does not have a neighbor in T′, then it is not far away from a facility in T′.

Lemma 7.13: If a client j does not have a neighbor in T′, then there exists a facility i ∈ T′ such that c_ij ≤ 3v_j.
    v ← 0, w ← 0
    S ← D
    T ← ∅
    while S ≠ ∅ do  // while not all clients neighbor a facility in T
        Increase v_j for all j ∈ S, and w_ij for all i ∈ N(j), j ∈ S, uniformly until some j ∈ S neighbors some i ∈ T or some i ∉ T has a tight dual inequality
        if some j ∈ S neighbors some i ∈ T then
            S ← S − {j}
        if some i ∉ T has a tight dual inequality then  // facility i is added to T
            T ← T ∪ {i}
            S ← S − N(i)
    T′ ← ∅
    while T ≠ ∅ do
        Pick some i ∈ T; T′ ← T′ ∪ {i}
        // remove i and all facilities h such that some client j contributes to both h and i
        T ← T − ({i} ∪ {h ∈ T : ∃j ∈ D, w_ij > 0 and w_hj > 0})

Algorithm 7.8: Primal-dual algorithm for the uncapacitated facility location problem.

The intuition is that if a client j does not have a neighbor in T′, then it must have neighbored some tight facility h ∉ T′ such that some other client k contributed both to h and to another facility i ∈ T′ (see Figure 7.8). Applying the triangle inequality then yields the factor of 3. We defer the proof of the lemma for the moment and show that it implies a performance guarantee of 3 for the algorithm.

Theorem 7.14: Algorithm 7.8 is a 3-approximation algorithm for the uncapacitated facility location problem.

Proof. For any client that contributes to a facility in T′, we assign it to this facility. Note that by construction of the algorithm, any client contributes to at most one facility in T′, so this assignment is unique. For clients that neighbor facilities in T′ but do not contribute to any of them, assign each to some arbitrary neighboring facility in T′. Let A(i) ⊆ N(i) be the neighboring clients assigned to a facility i ∈ T′. Then as discussed above, the cost of opening the facilities in T′ plus the cost of assigning the neighboring clients is

    ∑_{i∈T′} ( f_i + ∑_{j∈A(i)} c_ij ) = ∑_{i∈T′} ∑_{j∈A(i)} (w_ij + c_ij) = ∑_{i∈T′} ∑_{j∈A(i)} v_j,
∑ where the ﬁrst equality holds because i ∈ T ′ implies j∈D wij = fi and wij > 0 implies j ∈ A(i). ∪ Let Z be the set of all clients not neighboring a facility in T ′ , so that Z = D − i∈T ′ A(i). We have by Lemma 7.13 that the cost of assigning any j ∈ Z to some facility in T ′ is at most 3vj . Thus the total assignment cost for these clients is at most 3
∑
vj .
j∈Z
[Figure 7.8: Proof of Lemma 7.13. Client j neighbors a tight facility h ∉ T′ (so vj ≥ chj), while another client k contributes to both h and a facility i ∈ T′ (whk > 0 implies vk > chk, and wik > 0 implies vk > cik).]

Putting everything together, we have that the cost of the solution is at most

∑_{i∈T′} ∑_{j∈A(i)} vj + 3 ∑_{j∈Z} vj ≤ 3 ∑_{j∈D} vj ≤ 3 OPT,

where the final inequality follows by weak duality. Now we finish the proof of the lemma.

Proof of Lemma 7.13. Let j be an arbitrary client that does not neighbor a facility in T′. During the course of the algorithm, we stopped increasing vj because j neighbored some h ∈ T. Obviously h ∉ T′, since otherwise j would neighbor a facility in T′. The facility h must have been removed from T during the final phase of the algorithm because there exists another client k such that k contributes to both h and another facility i ∈ T′. See Figure 7.8. We would now like to show that the cost of assigning j to this facility i is at most 3vj. In particular, we will show that each of the three terms in chj + chk + cik is no more than vj, which will then prove by the triangle inequality that cij ≤ 3vj. We know that chj ≤ vj simply because j neighbors h. Now consider the point in the algorithm at which we stop increasing vj. By our choice of h, at this point in the algorithm either h is already in T or the algorithm adds h to T. Because client k contributes to facility h, it must be the case that either vk has already stopped increasing or we stop increasing it at the same point that we stop increasing vj. Because the dual variables are increased uniformly, we must have that vj ≥ vk. Since client k contributes to both facilities h and i, we know that vk ≥ chk and vk ≥ cik. Thus vj ≥ vk ≥ chk and vj ≥ vk ≥ cik, as claimed.
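Both phases of Algorithm 7.8 can be sketched in code. The sketch below discretizes the uniform dual increase into steps: it repeatedly computes the smallest increase Δ that makes some edge tight or pays for some facility, applies it, and then processes the resulting events. The event-driven structure, all names, and the floating-point tolerance are our own choices (and we assume strictly positive opening costs fi); this illustrates the algorithm's logic rather than reproducing any particular implementation.

```python
import math

EPS = 1e-9

def primal_dual_ufl(f, c):
    """Sketch of Algorithm 7.8. f[i] > 0 is the opening cost of facility i;
    c[i][j] is the assignment cost of client j to facility i.
    Returns (T_prime, v): the pruned open facilities and the duals v_j."""
    nF, nD = len(f), len(c[0])
    v = [0.0] * nD                       # dual variables v_j
    w = [[0.0] * nD for _ in range(nF)]  # dual variables w_ij
    S = set(range(nD))                   # active (not yet connected) clients
    T = []                               # tentatively opened facilities
    while S:
        # smallest uniform increase that triggers an event
        delta = math.inf
        for j in S:
            for i in range(nF):
                if v[j] < c[i][j] - EPS:             # edge (i, j) not yet tight
                    delta = min(delta, c[i][j] - v[j])
        for i in range(nF):
            if i not in T:
                paid = sum(w[i])
                rate = sum(1 for j in S if v[j] >= c[i][j] - EPS)
                if rate > 0:                         # dual inequality filling up
                    delta = min(delta, (f[i] - paid) / rate)
        for j in S:                                  # apply the uniform increase
            v[j] += delta
            for i in range(nF):
                if i not in T:
                    w[i][j] = max(0.0, v[j] - c[i][j])
        for i in range(nF):                          # event: facility i becomes tight
            if i not in T and sum(w[i]) >= f[i] - EPS:
                T.append(i)
                S -= {j for j in S if v[j] >= c[i][j] - EPS}
        for j in set(S):                             # event: client meets an open facility
            if any(v[j] >= c[i][j] - EPS for i in T):
                S.discard(j)
    # pruning phase: keep a facility, drop all others sharing a contributing client
    T_prime, rest = [], list(T)
    while rest:
        i = rest.pop(0)
        T_prime.append(i)
        rest = [h for h in rest
                if not any(w[i][j] > EPS and w[h][j] > EPS for j in range(nD))]
    return T_prime, v
```

On the small symmetric instance f = [3, 3], c = [[1, 4], [4, 1]], the sketch opens both facilities with duals v = (4, 4), and the guarantee of Theorem 7.14 (opening plus assignment cost at most 3 ∑_{j∈D} vj) holds with room to spare.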
7.7 Lagrangean relaxation and the k-median problem

In this section, we look at a variant of the uncapacitated facility location problem called the k-median problem. As in the uncapacitated facility location problem, we are given as input a set of clients D and a set of facilities F, with assignment costs cij for all facilities i ∈ F and clients j ∈ D. However, there are no longer costs for opening facilities; instead, we are given as input a positive integer k that is an upper bound on the number of facilities that can be opened. The goal is to select a subset of at most k facilities to open and an assignment of clients to open facilities so as to minimize the total assignment costs. As before, we assume that the
clients and facilities are points in a metric space, and the assignment cost cij is the distance between client j and facility i. Since the facilities are points in a metric space, we also have distances between pairs of facilities, a fact we will use in our algorithm. For facilities h, i ∈ F, let chi denote the distance between h and i.

An alternate perspective on the k-median problem is that it is a type of clustering problem. In Section 2.2, we saw the k-center problem, in which we wished to find k clusters of a set of vertices. Each cluster was defined by a cluster center; each vertex assigned itself to the closest cluster center, and the goal was to find a set of k cluster centers such that the maximum distance of a vertex to its cluster center was minimized. In the k-median problem, the facilities correspond to potential cluster centers, and the clients correspond to vertices. As in the k-center problem, we choose k cluster centers, and each vertex assigns itself to the closest cluster center. However, rather than trying to minimize the maximum distance of a vertex to its cluster center, we minimize the sum of the distances of the vertices to their cluster centers. For the rest of this section, we will discuss the k-median problem in terms of a facility location problem; since the clustering problem is completely equivalent, this is just a choice of terminology.

We can formulate the k-median problem as an integer program very similar to the one used for the uncapacitated facility location problem in Section 4.5. If we let yi ∈ {0, 1} indicate whether we open facility i, then in order to limit the number of open facilities to k, we introduce the constraint ∑_{i∈F} yi ≤ k. This gives the following integer programming formulation:

minimize ∑_{i∈F, j∈D} cij xij
subject to ∑_{i∈F} xij = 1, ∀j ∈ D,
xij ≤ yi, ∀i ∈ F, j ∈ D,
∑_{i∈F} yi ≤ k,
xij ∈ {0, 1}, ∀i ∈ F, j ∈ D,
yi ∈ {0, 1}, ∀i ∈ F.
The only differences from the uncapacitated facility location integer program of Section 4.5 are the extra constraint and the objective function, which has no facility costs. We use the idea of Lagrangean relaxation to reduce the k-median problem to the uncapacitated facility location problem. In Lagrangean relaxation, we eliminate complicating constraints but add penalties for their violation to the objective function. For example, consider the linear programming relaxation of the integer program for the k-median problem:

minimize ∑_{i∈F, j∈D} cij xij   (7.7)
subject to ∑_{i∈F} xij = 1, ∀j ∈ D,
xij ≤ yi, ∀i ∈ F, j ∈ D,
∑_{i∈F} yi ≤ k,
xij ≥ 0, ∀i ∈ F, j ∈ D,
yi ≥ 0, ∀i ∈ F.

To make the problem more closely resemble the uncapacitated facility location problem, we would like to get rid of the constraint ∑_{i∈F} yi ≤ k. To do this, we add a penalty λ(∑_{i∈F} yi − k) to the objective function for some constant λ ≥ 0. The penalty term favors solutions that obey the constraint. So our new linear program is then

minimize ∑_{i∈F, j∈D} cij xij + ∑_{i∈F} λyi − λk   (7.8)
subject to ∑_{i∈F} xij = 1, ∀j ∈ D,
xij ≤ yi, ∀i ∈ F, j ∈ D,
xij ≥ 0, ∀i ∈ F, j ∈ D,
yi ≥ 0, ∀i ∈ F.
First, observe that any feasible solution for the linear programming relaxation (7.7) of the k-median problem is also feasible for this linear program (7.8). Also, for any λ ≥ 0, any feasible solution for the linear programming relaxation of the k-median problem (7.7) has an objective function value in this linear program (7.8) at most that of its value in (7.7). Therefore, this linear program (7.8) gives a lower bound on the cost of an optimal solution to the k-median problem. We will denote the optimum cost of the k-median problem as OPTk. Observe that except for the constant term of −λk, the linear program now looks exactly like the linear programming relaxation of the uncapacitated facility location problem in which each facility cost fi = λ. If we take the dual of this linear program, we obtain

maximize ∑_{j∈D} vj − λk   (7.9)
subject to ∑_{j∈D} wij ≤ λ, ∀i ∈ F,
vj − wij ≤ cij, ∀i ∈ F, j ∈ D,
wij ≥ 0, ∀i ∈ F, j ∈ D.
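As a concrete illustration of the weak duality at work here, any feasible solution to (7.9) lower-bounds OPTk. The sketch below builds one trivially feasible dual (wij = 0 and vj = min_i cij, a choice of ours purely for illustration) and compares its objective with the brute-force optimum on a toy instance; all function names are our own.

```python
from itertools import combinations

def kmedian_cost(c, S):
    """Total assignment cost when the facilities in S are open."""
    return sum(min(c[i][j] for i in S) for j in range(len(c[0])))

def brute_force_opt(c, k):
    """Optimal k-median cost by enumerating all k-subsets of facilities."""
    return min(kmedian_cost(c, S) for S in combinations(range(len(c)), k))

def simple_dual_objective(c, lam, k):
    """Objective of the feasible dual wij = 0, vj = min_i cij for (7.9):
    every constraint vj - wij <= cij and sum_j wij <= lam is satisfied,
    so sum_j vj - lam * k lower-bounds OPTk by weak duality."""
    v = [min(c[i][j] for i in range(len(c))) for j in range(len(c[0]))]
    return sum(v) - lam * k
```

This particular dual is weak (it ignores λ on the constraint side entirely), but it already certifies a nontrivial lower bound at λ = 0; the stronger duals used in the analysis are those produced by the primal-dual algorithm itself.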
Again, this is the same as the dual of the linear programming relaxation of the uncapacitated facility location problem except that each facility cost is λ and there is an extra constant term of −λk in the objective function.

We would like to use the primal-dual algorithm for the uncapacitated facility location problem from the previous section in which all facility costs fi are set to λ for some choice of λ ≥ 0. While this will open some set of facilities, how can we then obtain a performance guarantee? In the previous section, we showed that the algorithm opens a set S of facilities and constructs a feasible dual (v, w) so that

∑_{j∈D} min_{i∈S} cij + ∑_{i∈S} fi ≤ 3 ∑_{j∈D} vj.

For notational convenience, let c(S) = ∑_{j∈D} min_{i∈S} cij. In Exercise 7.8, we observe that this claim can be strengthened slightly to

c(S) + 3 ∑_{i∈S} fi ≤ 3 ∑_{j∈D} vj.
Substituting fi = λ and rearranging, this gives us that

c(S) ≤ 3 (∑_{j∈D} vj − λ|S|).   (7.10)

Note that if, by chance, the algorithm opens a set of facilities S such that |S| = k, we then have that

c(S) ≤ 3 (∑_{j∈D} vj − λk) ≤ 3 · OPTk.

This follows since (v, w) is a feasible solution to the dual program (7.9), which is a lower bound on the cost of an optimal solution to the k-median problem, and since ∑_{j∈D} vj − λk is the dual objective function. A natural idea is to try to find some value of λ such that the facility location algorithm opens a set of facilities S with |S| = k. We will do this via a bisection search. To initialize the search, we need two initial values of λ, one for which the facility location algorithm opens at least k facilities, and one for which it opens at most k facilities. Consider what happens with the facility location algorithm when λ = 0. If the algorithm opens fewer than k facilities, then we can open an additional k − |S| facilities at no cost, and apply the previous reasoning to get a solution of cost at most 3 OPTk. So we assume that with λ = 0, the algorithm opens more than k facilities. It is also not hard to show that if λ = ∑_{j∈D} ∑_{i∈F} cij, then the algorithm opens a single facility. Thus we can run the bisection search on λ as follows. We initially set λ1 = 0 and λ2 = ∑_{j∈D} ∑_{i∈F} cij; as discussed above, these two values of λ return solutions S1 and S2 (respectively) in which |S1| > k and |S2| < k. We run the algorithm on the value λ = (λ1 + λ2)/2. If the algorithm returns a solution S with exactly k facilities, then by the arguments above, we are done, and we have a solution of cost at most 3 OPTk. If S has more than k facilities, then we set λ1 = λ and S1 = S; otherwise S has fewer than k facilities and we set λ2 = λ and S2 = S. We then repeat until either the algorithm finds a solution with exactly k facilities or the interval λ2 − λ1 becomes suitably small, at which point we will be able to obtain a solution to the k-median problem by appropriately combining the solutions S1 and S2. If we let cmin be the smallest nonzero assignment cost, we run the bisection search until either the algorithm finds a solution with exactly k facilities, or λ2 − λ1 ≤ ϵ·cmin/(3|F|).
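The bisection search just described can be sketched generically. Here `open_facilities` stands for a run of the facility location algorithm with all facility costs set to λ; the callable interface and all names are our own.

```python
def bisection_on_lambda(open_facilities, k, lam_hi, tol):
    """Bisection search on the Lagrangean penalty.

    open_facilities(lam) runs the facility location algorithm with every
    facility cost set to lam and returns the set of opened facilities.
    Returns (S, None) with |S| = k if the search hits k exactly, or
    (None, (S1, S2)) with |S1| > k > |S2| once the interval shrinks to tol."""
    lam1, lam2 = 0.0, lam_hi
    S1, S2 = open_facilities(lam1), open_facilities(lam2)
    assert len(S1) > k > len(S2), "endpoints must bracket k"
    while lam2 - lam1 > tol:
        lam = (lam1 + lam2) / 2.0
        S = open_facilities(lam)
        if len(S) == k:
            return S, None              # exactly k facilities: done
        if len(S) > k:                  # too many opened: raise the penalty
            lam1, S1 = lam, S
        else:                           # too few opened: lower the penalty
            lam2, S2 = lam, S
    return None, (S1, S2)               # bracketing pair, to be combined
```

Note that nothing here requires the number of opened facilities to vary smoothly with λ: if the loop never hits exactly k, it simply returns the bracketing pair (S1, S2) for the combining step.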
In the latter case, we will use S1 and S2 to obtain a solution S in polynomial time such that |S| = k and c(S) ≤ 2(3 + ϵ) OPTk. This will give us a 2(3 + ϵ)-approximation algorithm for the k-median problem. If we have not terminated with a solution with exactly k facilities, the algorithm terminates with solutions S1 and S2 and corresponding dual solutions (v1, w1) and (v2, w2) such that |S1| > k > |S2| and c(Sℓ) ≤ 3 (∑_{j∈D} vjℓ − λℓ|Sℓ|) for ℓ = 1, 2 by inequality (7.10). Also, by the termination condition, λ2 − λ1 ≤ ϵ·cmin/(3|F|). Without loss of generality, we can assume that 0 < cmin ≤ OPTk, since otherwise OPTk = 0; we leave it as an exercise to show that we can find an optimal solution to the k-median problem in polynomial time if OPTk = 0 (Exercise 7.9). Note that the binary search on λ makes O(log(|F| ∑_{i∈F, j∈D} cij / (ϵ·cmin))) calls to the facility location algorithm, and thus the overall algorithm runs in polynomial time. We will now show how we can use the two solutions S1 and S2 to obtain a solution S in polynomial time such that |S| = k and c(S) ≤ 2(3 + ϵ) OPTk. To do this, we first need to relate the costs of the two solutions to OPTk. Let α1 and α2 be such that α1|S1| + α2|S2| = k,
α1 + α2 = 1, with α1, α2 ≥ 0. Note that this implies that

α1 = (k − |S2|)/(|S1| − |S2|) and α2 = (|S1| − k)/(|S1| − |S2|).
We can then get a dual solution (ṽ, w̃) by letting ṽ = α1 v1 + α2 v2 and w̃ = α1 w1 + α2 w2. Note that (ṽ, w̃) is feasible for the dual linear program (7.9) with facility costs λ2, since it is a convex combination of two feasible dual solutions. We can now prove the following lemma, which states that the convex combination of the costs of S1 and S2 must be close to the cost of an optimal solution.

Lemma 7.15: α1 c(S1) + α2 c(S2) ≤ (3 + ϵ) OPTk.

Proof. We first observe that

c(S1) ≤ 3 (∑_{j∈D} vj1 − λ1|S1|)
      = 3 (∑_{j∈D} vj1 − (λ1 + λ2 − λ2)|S1|)
      = 3 (∑_{j∈D} vj1 − λ2|S1|) + 3(λ2 − λ1)|S1|
      ≤ 3 (∑_{j∈D} vj1 − λ2|S1|) + ϵ OPTk,

where the last inequality follows from our bound on the difference λ2 − λ1. Now if we take the convex combination of the inequality above and our bound on c(S2), we obtain

α1 c(S1) + α2 c(S2) ≤ 3α1 (∑_{j∈D} vj1 − λ2|S1|) + α1 ϵ OPTk + 3α2 (∑_{j∈D} vj2 − λ2|S2|).

Recalling that ṽ = α1 v1 + α2 v2 is a dual feasible solution for facility costs λ2, that α1|S1| + α2|S2| = k, and that α1 ≤ 1, we have that

α1 c(S1) + α2 c(S2) ≤ 3 (∑_{j∈D} ṽj − λ2 k) + α1 ϵ OPTk ≤ (3 + ϵ) OPTk.
Our algorithm then splits into two cases: a simple case when α2 ≥ 1/2, and a more complicated case when α2 < 1/2. If α2 ≥ 1/2, we return S2 as our solution. Note that |S2| < k, and thus S2 is a feasible solution. Using α2 ≥ 1/2 and Lemma 7.15, we obtain

c(S2) ≤ 2α2 c(S2) ≤ 2 (α1 c(S1) + α2 c(S2)) ≤ 2(3 + ϵ) OPTk,

as desired. Before we give our algorithm for the remaining case, we let c(j, S) = min_{i∈S} cij, so that ∑_{j∈D} c(j, S) = c(S). Now for each facility i ∈ S2, we open the closest facility h ∈ S1; that is, the facility h ∈ S1 that minimizes cih. If this does not open |S2| facilities of S1 because some facilities in S2 are close to the same facility in S1, we open some arbitrary facilities in S1 so that exactly |S2| are opened. We then choose a random subset of k − |S2| of the |S1| − |S2| facilities of S1 remaining, and open these. Let S be the resulting set of facilities opened. We now show the following lemma.

Lemma 7.16: If α2 < 1/2, then opening the facilities as above has cost E[c(S)] ≤ 2(3 + ϵ) OPTk.

Proof. To prove the lemma, we consider the expected cost of assigning a given client j ∈ D to a facility opened by the randomized algorithm. Let us suppose that the facility 1 ∈ S1 is the open facility in S1 closest to j; that is, c1j = c(j, S1); similarly, let 2 ∈ S2 be the open facility in S2 closest to j. Note that with probability (k − |S2|)/(|S1| − |S2|) = α1, the facility 1 ∈ S1 is opened in the randomized step of the algorithm if it has not already been opened by the first step of the algorithm. Thus with probability at least α1, the cost of assigning j to the closest open facility in S is at most c1j = c(j, S1). With probability at most 1 − α1 = α2, the facility 1 is not opened. In this case, we can at worst assign j to a facility opened in the first step of the algorithm; in particular, we assign j to the facility in S1 closest to 2 ∈ S2. Let i ∈ S1 be the closest facility to 2 ∈ S2; see Figure 7.9. Then cij ≤ ci2 + c2j by the triangle inequality. We know that ci2 ≤ c12 since i is the closest facility in S1 to 2, so that cij ≤ c12 + c2j. Finally, by the triangle inequality, c12 ≤ c1j + c2j, so that we have cij ≤ c1j + c2j + c2j = c(j, S1) + 2c(j, S2). Thus the expected cost of assigning j to the closest facility in S is

E[c(j, S)] ≤ α1 c(j, S1) + α2 (c(j, S1) + 2c(j, S2)) = c(j, S1) + 2α2 c(j, S2).

By the assumption α2 < 1/2, so that α1 = 1 − α2 > 1/2, we obtain E[c(j, S)] ≤ 2(α1 c(j, S1) + α2 c(j, S2)). Then summing over all j ∈ D and using Lemma 7.15, we obtain

E[c(S)] ≤ 2(α1 c(S1) + α2 c(S2)) ≤ 2(3 + ϵ) OPTk.
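The opening procedure analyzed in Lemma 7.16 can be sketched as follows. The interface, with facilities as hashable identifiers and a distance callable standing in for chi, is our own choice.

```python
import random

def open_k_facilities(S1, S2, k, dist):
    """Combine solutions with |S1| > k > |S2| into exactly k open facilities,
    as in the randomized step of the algorithm. dist(h, i) is the
    facility-to-facility distance c_hi."""
    assert len(S1) > k > len(S2)
    opened = set()
    for i in S2:                # open the facility of S1 closest to each i in S2
        opened.add(min(S1, key=lambda h: dist(h, i)))
    for h in S1:                # top up so exactly |S2| facilities of S1 are open
        if len(opened) == len(S2):
            break
        opened.add(h)
    remaining = [h for h in S1 if h not in opened]
    opened.update(random.sample(remaining, k - len(S2)))  # random k - |S2| more
    return opened
```

With facilities as points on a line and S1 = [0, 10, 20, 30], S2 = [1, 11], k = 3, the first step opens 0 and 10 (the S1-facilities closest to 1 and 11), and the randomized step opens one of the remaining facilities.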
[Figure 7.9: The bad case of assigning clients. Client j is closest to 1 in S1 and closest to 2 in S2, but gets assigned to i in S1, since this is the closest facility in S1 to 2 in S2.]

This algorithm can be derandomized by the method of conditional expectations; we leave this as an exercise (Exercise 7.10). In Chapter 9, we will see improved algorithms for the k-median problem using local search and greedy algorithms. In particular, in Section 9.4, we will see a dual fitting greedy algorithm for the uncapacitated facility location problem that opens a set S of facilities such that

c(S) + 2 ∑_{i∈S} fi ≤ 2 ∑_{j∈D} vj,

where v is a feasible solution to the dual of the linear programming relaxation of the uncapacitated facility location problem. Then we will be able to follow the same logic as the algorithm above to get a 2(2 + ϵ)-approximation algorithm for the k-median problem. Note that it is crucial for the analysis that we have an uncapacitated facility location algorithm that returns a solution S such that

c(S) + α ∑_{i∈S} fi ≤ α ∑_{j∈D} vj

for some α. If this is the case, then when we set fi = λ, we are able to derive that

c(S) ≤ α (∑_{j∈D} vj − λ|S|),
which then allows us to use as a bound the objective function of the dual of the Lagrangean relaxation. Such algorithms are called Lagrangean multiplier preserving. In Chapter 14, we will see another example of a Lagrangean multiplier preserving algorithm: a primal-dual 2-approximation algorithm for the prize-collecting Steiner tree problem.

The following hardness result is known for the k-median problem via a reduction from the set cover problem. We discuss this result further in Section 16.2.

Theorem 7.17: There is no α-approximation algorithm for the k-median problem with constant α < 1 + 2/e ≈ 1.736 unless each problem in NP has an O(n^{O(log log n)}) time algorithm.
Exercises

7.1 Prove that the shortest s-t path algorithm in Section 7.3 is equivalent to Dijkstra's algorithm: that is, in each step, it adds the same edge that Dijkstra's algorithm would add.

7.2 Consider the multicut problem in trees. In this problem, we are given a tree T = (V, E), k pairs of vertices si-ti, and costs ce ≥ 0 for each edge e ∈ E. The goal is to find a minimum-cost set of edges F such that for all i, si and ti are in different connected components of G′ = (V, E − F). Let Pi be the set of edges in the unique path in T between si and ti. Then we can formulate the problem as the following integer program:

minimize ∑_{e∈E} ce xe
subject to ∑_{e∈Pi} xe ≥ 1, 1 ≤ i ≤ k,
xe ∈ {0, 1}, e ∈ E.
Suppose we root the tree at an arbitrary vertex r. Let depth(v) be the number of edges on the path from v to r. Let lca(si, ti) be the vertex v on the path from si to ti whose depth is minimum. Suppose we use the primal-dual method to solve this problem, where the dual variable we increase in each iteration corresponds to the violated constraint that maximizes depth(lca(si, ti)). Prove that this gives a 2-approximation algorithm for the multicut problem in trees.

7.3 The local ratio technique is another technique that is highly related to the primal-dual method; however, its use of duality is implicit. Consider the following local ratio algorithm for the set cover problem. As with the primal-dual algorithm, we compute a collection I of indices of a set cover, where I is initially empty. In each iteration, we find some element ei not covered by the current collection I. Let ϵ be the minimum weight of any set containing ei. We subtract ϵ from the weight of each set containing ei; some such set now has weight zero, and we add the index of this set to I. We now analyze the performance of this algorithm. To do this, let ϵj be the value of ϵ from the jth iteration of the algorithm.

(a) Show that the cost of the solution returned is at most f ∑_j ϵj, where f = max_i |{j : ei ∈ Sj}|.
(b) Show that the cost of the optimal solution must be at least ∑_j ϵj.
(c) Conclude that the algorithm is an f-approximation algorithm.

In its most general application, the local ratio technique depends upon the local ratio theorem, stated below. For a minimization problem Π with weights w, we say that a feasible solution F is α-approximate with respect to w if the weight of F is at most α times the weight of an optimal solution given weights w. Then the local ratio theorem states that if we have nonnegative weights w such that w = w1 + w2, where w1 and w2 are also nonnegative weights, and we have a feasible solution F such that F is α-approximate with respect to both w1 and w2, then F is α-approximate with respect to w.
(d) Prove the local ratio theorem.
(e) Explain how the set cover algorithm above can be analyzed in terms of the local ratio theorem to prove that it is an f-approximation algorithm.

Most of the algorithms of this chapter have local ratio variants.

7.4 In the 2-approximation algorithm for the generalized Steiner tree problem that we gave in Section 7.4, we first add certain edges, then remove unnecessary edges in the order opposite of the order in which they were added. Prove that one can in fact remove unnecessary edges in any order and still obtain a 2-approximation algorithm for the problem. That is, we replace the edge removal steps in Algorithm 7.6 with a step that checks if there exists any edge e in F′ such that F′ − e is feasible. If so, e is removed from F′, and if not, F′ is returned as the final solution. Prove that ∑_{e∈F′} ce ≤ 2 ∑_S yS for the dual y generated by the algorithm.

7.5 In the minimum-cost branching problem we are given a directed graph G = (V, A), a root vertex r ∈ V, and weights wij ≥ 0 for all (i, j) ∈ A. The goal of the problem is to find a minimum-cost set of arcs F ⊆ A such that for every v ∈ V, there is exactly one directed path in F from r to v. Use the primal-dual method to give an optimal algorithm for this problem.

7.6 Recall that in our algorithms of Sections 4.4 and 5.7 for the prize-collecting Steiner tree problem, we used the following linear programming relaxation of the problem:

minimize ∑_{e∈E} ce xe + ∑_{i∈V} πi (1 − yi)
subject to ∑_{e∈δ(S)} xe ≥ yi, ∀i ∈ S, ∀S ⊆ V − r, S ≠ ∅,
yr = 1,
yi ≥ 0, ∀i ∈ V,
xe ≥ 0, ∀e ∈ E.

Given an optimal solution (x*, y*) to the linear program, we then selected a set of vertices U such that U = {i ∈ V : yi* ≥ α} for some value of α > 0. Give a primal-dual algorithm that finds a Steiner tree T on the set of terminals U such that

∑_{e∈T} ce ≤ (2/α) ∑_{e∈E} ce xe*.
(Hint: you should not need to design a new primal-dual algorithm.)

7.7 In the k-path partition problem, we are given a complete, undirected graph G = (V, E) with edge costs ce ≥ 0 that obey the triangle inequality (that is, c(u,v) ≤ c(u,w) + c(w,v) for all u, v, w ∈ V), and a parameter k such that |V| is a multiple of k. The goal is to find a minimum-cost collection of paths of k vertices each such that each vertex is on exactly one path. A related problem is that of partitioning a graph into 0 (mod k)-trees. The input to this problem is the same as that above, except that the graph is not necessarily complete and edge costs do not necessarily obey the triangle inequality. The goal is to find a minimum-cost collection of trees such that each tree has 0 (mod k) vertices, and each vertex is in exactly one tree.

(a) Given an α-approximation algorithm for the second problem, produce a 2α(1 − 1/k)-approximation algorithm for the first.
(b) Use the primal-dual method to give a 2-approximation algorithm for the second problem.
(c) Give a 4(1 − 1/k)-approximation algorithm for the problem of partitioning a graph into cycles of length exactly k.

7.8 Show that the performance guarantee of the primal-dual algorithm for the uncapacitated facility location problem in Section 7.6 can be strengthened in the following way. Suppose that the algorithm opens the set T′ of facilities and constructs the dual solution (v, w). Show that

∑_{j∈D} min_{i∈T′} cij + 3 ∑_{i∈T′} fi ≤ 3 ∑_{j∈D} vj.

7.9 Show that for the k-median problem as defined in Section 7.7, the optimal solution can be found in polynomial time if the optimum cost OPTk = 0.

7.10 By using the method of conditional expectations, show that the randomized algorithm for choosing k facilities in the k-median algorithm of Section 7.7 can be made deterministic.
Chapter Notes

The primal-dual method for approximation algorithms is a generalization of the primal-dual method used for linear programming and combinatorial optimization problems such as the shortest s-t path problem, the maximum flow problem, the assignment problem, the minimum-cost branching problem, and others. For an overview of the primal-dual method and its application to these problems, see Papadimitriou and Steiglitz [238]. Edmonds [95] gives the primal-dual algorithm for the minimum-cost branching problem in Exercise 7.5. The idea of Section 7.3 that the shortest s-t path problem can be solved by an algorithm that greedily increases dual variables is due to Hoffman [168]. Dijkstra's algorithm for the same problem is due, of course, to Dijkstra [88]. The first use of the primal-dual method for approximation algorithms is due to Bar-Yehuda and Even [35]; they gave the algorithm of Section 7.1 for the set cover problem. Work in primal-dual approximation algorithms was revived by work on the generalized Steiner tree problem of Section 7.4. The first 2-approximation algorithm for the generalized Steiner tree problem is due to Agrawal, Klein, and Ravi [4], and the algorithm presented in Section 7.4 is essentially that of [4]. The use of linear programming and LP duality in the algorithm was made explicit by Goemans and Williamson [138], who extended the technique to other problems (such as the k-path partition problem of Exercise 7.7). The idea of depicting dual variables as moats is due to Jünger and Pulleyblank [180]. Several uses of the primal-dual method for approximation algorithms then followed. Bar-Yehuda, Geiger, Naor, and Roth [37] gave the feedback vertex set algorithm of Section 7.2, using Lemma 7.3, which is due to Erdős and Pósa [100]. Jain and Vazirani [177] developed the primal-dual algorithm for the uncapacitated facility location problem and the use of Lagrangean
relaxation for the k-median problem in Sections 7.6 and 7.7. Carnes and Shmoys [61] gave the primal-dual algorithm for the minimum knapsack problem in Section 7.5; their algorithm uses an integer programming formulation of the problem due to Carr, Fleischer, Leung, and Phillips [62], who gave an LP-rounding 2-approximation algorithm for the problem based on their formulation. Surveys of the primal-dual method for approximation algorithms are given by Bertsimas and Teo [46], Goemans and Williamson [140], and Williamson [288]. The local ratio technique of Exercise 7.3 is due to Bar-Yehuda and Even [36]. All of the algorithms in Sections 7.1 through 7.4 are known to have local ratio equivalents. A formal proof of the equivalence of the primal-dual method and the local ratio technique for a defined set of problems is given in Bar-Yehuda and Rawitz [38]. Surveys of the local ratio technique have been given by Bar-Yehuda [33], Bar-Yehuda, Bendel, Freund, and Rawitz [34], and Bar-Yehuda and Rawitz [38]. The hardness result for the k-median problem in Theorem 7.17 is due to Jain, Mahdian, Markakis, Saberi, and Vazirani [176], following work of Guha and Khuller [146] for the hardness of the uncapacitated facility location problem. The result of Exercise 7.2 is due to Garg, Vazirani, and Yannakakis [127]. The results of Exercises 7.4 and 7.7 are from Goemans and Williamson [138]. The result of Exercise 7.8 is from Jain and Vazirani [177]; Exercise 7.10 is also from this paper.
Chapter 8
Cuts and metrics
In this chapter, we think about problems involving metrics. A metric (V, d) on a set of vertices V gives a distance duv for each pair of vertices u, v ∈ V such that three properties are obeyed: (1) duv = 0 if and only if v = u; (2) duv = dvu for all u, v ∈ V; and (3) duv ≤ duw + dwv for all u, v, w ∈ V. The final property is sometimes called the triangle inequality. We will sometimes simply refer to the metric d instead of (V, d) if the set of vertices V is clear from the context. A concept related to a metric is a semimetric (V, d), in which properties (2) and (3) are obeyed, but not necessarily (1), so that if duv = 0, then possibly u ≠ v (a semimetric maintains that duu = 0). We may sometimes ignore this distinction between metrics and semimetrics, and call them both metrics.

Metrics turn out to be a useful way of thinking about graph problems involving cuts. Many important problems in discrete optimization require finding cuts in graphs of various types. To see the connection between cuts and metrics, note that for any cut S ⊆ V, we can define d where duv = 1 if exactly one of u and v is in S, and duv = 0 otherwise. Note that (V, d) is then a semimetric; it is sometimes called the cut semimetric associated with S. Then a problem in a graph G = (V, E) in which we are trying to find a cut to minimize or maximize the sum of weights we of the edges e in the cut becomes one of finding a cut semimetric that minimizes or maximizes ∑_{e=(u,v)∈E} we duv. In several examples in this chapter, we set up a linear programming relaxation with variables duv in which we have constraints on d corresponding to the properties of a semimetric. Then we use the metric properties of duv to help find the desired cut. In many cases, we consider for some vertex s ∈ V all the vertices within some small distance r of s (using the LP variables duv as distances) and put them on the same side of the cut and all other vertices on the other side; we view this as taking a ball of radius r around s.
This is a technique we will use extensively. We will also consider the notion of approximating a given metric (V, d) by a metric of a simpler form. In particular, we will consider tree metrics; tree metrics are metrics that are defined by shortest paths in a tree. Sometimes we wish to approximate problems involving distances in a graph, in which the problem becomes straightforward in a tree metric. We can sometimes get good approximation algorithms by first approximating the graph distances by a tree metric, then solving the problem easily in the tree metric.

We begin the chapter by considering a number of different cut problems. We start in Section 8.1 with the multiway cut problem, and first show that a simple algorithm involving finding a number of minimum s-t cuts gives a 2-approximation algorithm. We then show that the randomized rounding of an LP solution along the lines described above improves this to a 3/2-approximation algorithm. We consider the multicut problem in Section 8.3, which also uses an LP relaxation and a rounding technique as described above. For this problem, we introduce an important technique called “region growing” that relates the edges in the cut formed by a ball to the value of the LP solution on edges inside the ball. In the following section, we apply the region growing technique to the problem of finding small balanced cuts; balanced cuts are ones in which the two parts of the cut have roughly equal numbers of vertices. The final three sections of the chapter discuss the technique of approximating metrics by tree metrics, and present applications of this technique to the buy-at-bulk network design problem and the linear arrangement problem.
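As a quick sanity check on the definitions above, the cut semimetric of a set S can be written down in a few lines and its semimetric properties verified by brute force (a toy sketch; the helper names are ours):

```python
from itertools import product

def cut_semimetric(V, S):
    """d_uv = 1 when the cut S separates u from v, else 0."""
    return {(u, v): int((u in S) != (v in S)) for u, v in product(V, V)}

def is_semimetric(V, d):
    """Check symmetry, d_uu = 0, and the triangle inequality."""
    return (all(d[u, v] == d[v, u] for u, v in product(V, V))
            and all(d[u, u] == 0 for u in V)
            and all(d[u, v] <= d[u, w] + d[w, v]
                    for u, v, w in product(V, V, V)))
```

The triangle inequality holds because whenever the cut separates u from v, it must separate u from w or w from v; note also that the cut semimetric is not a metric, since duv = 0 for distinct u, v on the same side of the cut.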
8.1 The multiway cut problem and a minimum-cut-based algorithm
We begin by considering a simple variety of cut problem, and give an approximation algorithm that does not require using a linear programming relaxation. We then show that using a linear programming relaxation and treating its solution as a metric on the set of vertices gives a better performance guarantee.

The problem is known as the multiway cut problem. We are given an undirected graph G = (V, E), costs c_e ≥ 0 for all edges e ∈ E, and k distinguished vertices s_1, ..., s_k. The goal is to remove a minimum-cost set of edges F such that no pair of distinguished vertices s_i and s_j for i ≠ j are in the same connected component of (V, E − F).

One application of this problem arises in distributed computing. Each vertex represents an object, and an edge e of cost c_e between two objects represents the amount of communication between them. The objects need to be partitioned to reside on k different machines, with special object s_i needing to reside on the ith machine. The goal is to partition the objects onto the k machines in such a way that communication between the machines is minimized.

Our algorithm for the multiway cut problem begins with some observations about the structure of any feasible solution F. Given a feasible F, let C_i be the set of vertices reachable in (V, E − F) from each distinguished vertex s_i. Let F_i = δ(C_i), where δ(S) is the set of all edges in E with exactly one endpoint in S. Observe that each F_i is a cut separating s_i from s_1, ..., s_{i−1}, s_{i+1}, ..., s_k. We call F_i an isolating cut: it isolates s_i from the other distinguished vertices. Observe also that some edges might appear in two different F_i: an edge e can have one endpoint in C_i and the other in C_j for some j ≠ i, so that e ∈ F_i and e ∈ F_j.

Our algorithm will compute a minimum isolating cut F_i between s_i and s_1, ..., s_{i−1}, s_{i+1}, ..., s_k for each i: we can do this by adding a sink vertex t to the graph with infinite cost edges from the distinguished vertices (other than s_i) to the sink, and then computing a minimum s_i-t cut. We return as our solution F = ∪_{i=1}^k F_i.

Theorem 8.1: The algorithm of computing a minimum cut between each s_i and the other distinguished vertices is a 2-approximation algorithm for the multiway cut problem.

Proof. As above, let F_i be the minimum isolating cut between s_i and the other distinguished vertices. Let F* be an optimal solution, and let F_i* be the isolating cut in the optimal solution for s_i. For a subset of edges A ⊆ E, let c(A) = ∑_{e∈A} c_e. Because F_i is a minimum isolating cut for s_i, we know that c(F_i) ≤ c(F_i*). Hence the cost of the solution of the algorithm is at most ∑_{i=1}^k c(F_i) ≤ ∑_{i=1}^k c(F_i*). We observed above that each edge in the optimal solution F* can be in at most two F_i*, so that

    ∑_{i=1}^k c(F_i) ≤ ∑_{i=1}^k c(F_i*) ≤ 2 c(F*) = 2 OPT.

Electronic web edition. Copyright 2011 by David P. Williamson and David B. Shmoys. To be published by Cambridge University Press.
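To make the reduction concrete, here is a self-contained Python sketch (an editor's illustration, not from the text; the instance at the bottom is made up) that computes each minimum isolating cut with a textbook Edmonds-Karp max-flow, merging the other terminals into a single super-sink exactly as described above:

```python
from collections import defaultdict, deque

def min_cut_side(cap, s, t):
    """Source side of a minimum s-t cut via Edmonds-Karp max-flow."""
    res = defaultdict(dict)
    for u in cap:
        for v, c in cap[u].items():
            res[u][v] = res[u].get(v, 0.0) + c
            res[v].setdefault(u, 0.0)          # make sure the reverse arc exists
    while True:
        parent, q = {s: None}, deque([s])
        while q and t not in parent:           # BFS for an augmenting path
            u = q.popleft()
            for v, c in res[u].items():
                if c > 1e-12 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            break
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(res[u][v] for u, v in path)  # bottleneck capacity
        for u, v in path:
            res[u][v] -= aug
            res[v][u] += aug
    S, q = {s}, deque([s])                     # vertices reachable in the residual graph
    while q:
        u = q.popleft()
        for v, c in res[u].items():
            if c > 1e-12 and v not in S:
                S.add(v)
                q.append(v)
    return S

def multiway_cut_2approx(edges, terminals):
    """Union of the k minimum isolating cuts; edges: dict {(u, v): cost}."""
    F = set()
    for s in terminals:
        cap = defaultdict(dict)
        for (u, v), c in edges.items():        # undirected edge = arcs both ways
            cap[u][v] = c
            cap[v][u] = c
        for other in terminals:                # merge other terminals into a sink
            if other != s:
                cap[other]["_t"] = float("inf")
        S = min_cut_side(cap, s, "_t")
        F |= {(u, v) for (u, v) in edges if (u in S) != (v in S)}
    return F

# Made-up star instance: terminals s1, s2, s3 around a center a, unit costs.
edges = {("s1", "a"): 1.0, ("a", "s2"): 1.0, ("a", "s3"): 1.0}
F = multiway_cut_2approx(edges, ["s1", "s2", "s3"])
# Here each isolating cut is one unit edge, so F has cost 3, while the
# optimum (removing any two of the edges) has cost 2, within the factor 2.
print(sorted(F), sum(edges[e] for e in F))
```

The name `_t` for the super-sink is an arbitrary choice for this sketch and assumes no vertex uses that name.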
By being only slightly cleverer, we can improve the performance guarantee a little bit. Without loss of generality, let F_k be the costliest cut of F_1, ..., F_k. Note that the union of the first k − 1 isolating cuts, F = ∪_{i=1}^{k−1} F_i, is also a feasible solution for the problem: if s_k can reach any other distinguished vertex s_i in (V, E − F), then F_i was not an isolating cut for s_i. Then we have the following.

Corollary 8.2: The algorithm of returning the cheapest k − 1 minimum isolating cuts is a (2 − 2/k)-approximation algorithm for the multiway cut problem.

Proof. We use the same notation as in the proof of the theorem above. Observe that the cost of our new solution F is at most (1 − 1/k) ∑_{i=1}^k c(F_i). Thus its cost is at most

    (1 − 1/k) ∑_{i=1}^k c(F_i) ≤ (1 − 1/k) ∑_{i=1}^k c(F_i*) ≤ 2 (1 − 1/k) OPT.

8.2 The multiway cut problem and an LP rounding algorithm
We now show that one can obtain a better approximation algorithm for the multiway cut problem via LP rounding. First, we need to strengthen some of our observations above. We noted that for any feasible solution F to the problem, we can compute sets C_i of vertices reachable from each distinguished vertex s_i. We claim that for any minimal solution F, the C_i must be a partition of the vertices V. To see this, suppose we are given some solution F such that the corresponding sets C_i do not partition V. Let S be all vertices not reachable from any s_i. Pick some j arbitrarily and add S to C_j. Let the new solution be F′ = ∪_{i=1}^k δ(C_i). Then we claim that F′ ⊆ F. Observe that for any i ≠ j, δ(C_i) is in F. Furthermore, any edge e ∈ δ(C_j) must have some endpoint in some C_i with i ≠ j. Thus e ∈ δ(C_i) and is in F also. Therefore, another way of looking at the multiway cut problem is finding an optimal partition of V into sets C_i such that s_i ∈ C_i for all i and such that the cost of F = ∪_{i=1}^k δ(C_i) is minimized.

Given this perspective, we formulate the problem as an integer program. For each vertex u ∈ V, we have k different variables, x_u^i. The variable x_u^i = 1 if u is assigned to the set C_i and is 0 otherwise. We create a variable z_e^i which will be 1 if e ∈ δ(C_i) and 0 otherwise. Since if e ∈ δ(C_i), it is also the case that e ∈ δ(C_j) for some j ≠ i, the objective function of the integer program is then

    (1/2) ∑_{e∈E} ∑_{i=1}^k c_e z_e^i;

this will give exactly the cost of the edges in the solution F = ∪_{i=1}^k δ(C_i) for the assignment of vertices to sets C_i given by the variables x_u^i. Now we consider constraints for the program.
Obviously s_i must be assigned to C_i, so we have x_{s_i}^i = 1 for all i. To enforce that z_e^i = 1 when e ∈ δ(C_i) for e = (u, v), we add constraints z_e^i ≥ x_u^i − x_v^i and z_e^i ≥ x_v^i − x_u^i; this enforces that z_e^i ≥ |x_u^i − x_v^i|. Since the integer program will minimize the objective function and z_e^i appears with a nonnegative coefficient in the objective function, at optimality we will have z_e^i = |x_u^i − x_v^i|. Thus z_e^i = 1 if one of the two endpoints of the edge e = (u, v) is assigned to the set C_i and the other is not. Then the overall integer program is as follows:

    minimize   (1/2) ∑_{e∈E} ∑_{i=1}^k c_e z_e^i                          (8.1)
    subject to ∑_{i=1}^k x_u^i = 1,      ∀u ∈ V,                          (8.2)
               z_e^i ≥ x_u^i − x_v^i,    ∀e = (u, v) ∈ E, i = 1, ..., k,
               z_e^i ≥ x_v^i − x_u^i,    ∀e = (u, v) ∈ E, i = 1, ..., k,
               x_{s_i}^i = 1,            i = 1, ..., k,
               x_u^i ∈ {0, 1},           ∀u ∈ V, i = 1, ..., k.
The linear programming relaxation of this integer program is closely connected with the ℓ1 metric for measuring distances in Euclidean space. Let x, y ∈ ℝ^n, and suppose that x^i, y^i are the ith coordinates of x and y. Then the ℓ1 metric is as follows.

Definition 8.3: Given x, y ∈ ℝ^n, the ℓ1 metric is a metric such that the distance between x and y is ∥x − y∥₁ = ∑_{i=1}^n |x^i − y^i|.

We relax the integer program above to a linear program by replacing the integrality condition x_u^i ∈ {0, 1} with x_u^i ≥ 0. Observe then that the linear program can be given a much more compact formulation. The variable x_u^i is the ith coordinate of a vector x_u. Each x_u ∈ ℝ^k; in fact, because of constraint (8.2), each x_u lies in the k-simplex Δ_k, where Δ_k = {x ∈ ℝ^k : ∑_{i=1}^k x^i = 1}. Each distinguished vertex s_i has x_{s_i} = e_i, where e_i is the vector with 1 in the ith coordinate and zeroes elsewhere. Finally, we observe that ∑_{i=1}^k z_e^i = ∑_{i=1}^k |x_u^i − x_v^i| = ∥x_u − x_v∥₁, which is just the distance between the points x_u and x_v under the ℓ1 metric. Thus the linear programming relaxation becomes

    minimize   (1/2) ∑_{e=(u,v)∈E} c_e ∥x_u − x_v∥₁                       (8.3)
    subject to x_{s_i} = e_i,   i = 1, ..., k,
               x_u ∈ Δ_k,       ∀u ∈ V.
In Figure 8.1, we give a geometric representation of an instance with k = 3. We will give an approximation algorithm by the randomized rounding of the solution of the linear program. In particular, we will take all vertices that are close to the distinguished vertex s_i in the LP solution and put them in C_i. For any r ≥ 0 and 1 ≤ i ≤ k, let B(e_i, r) be the set of vertices corresponding to the points x_u in a ball of radius r in the ℓ1 metric around e_i; that is, B(e_i, r) = {u ∈ V : (1/2)∥e_i − x_u∥₁ ≤ r}. We will sometimes write B(s_i, r) instead of B(e_i, r). We include the factor of 1/2 in the definition so that all vertices are within a ball of radius 1: B(e_i, 1) = V for all i. See Figure 8.1 for an illustration.
Figure 8.1: A geometric representation of an LP solution for k = 3. The distinguished vertices s_1, s_2, and s_3 are given by the coordinates (1, 0, 0), (0, 1, 0), and (0, 0, 1) respectively. Any other vertex lies in the triangle defined by these three coordinates. The dotted line represents a ball around s_1.

    Let x be an LP solution to (8.3)
    for all 1 ≤ i ≤ k do
        C_i ← ∅
    Pick r ∈ (0, 1) uniformly at random
    Pick a random permutation π of {1, ..., k}
    X ← ∅   // X keeps track of all currently assigned vertices
    for i ← 1 to k − 1 do
        C_{π(i)} ← B(s_{π(i)}, r) − X
        X ← X ∪ C_{π(i)}
    C_{π(k)} ← V − X
    return F = ∪_{i=1}^k δ(C_i)

Algorithm 8.1: Randomized rounding algorithm for the multiway cut problem.

We now consider Algorithm 8.1. The algorithm selects r ∈ (0, 1) uniformly at random, and a random permutation π of the indices {1, ..., k}. The algorithm proceeds through the indices in the order given by the permutation. For index π(i) in the ordering, the algorithm assigns all previously unassigned vertices in B(s_{π(i)}, r) to C_{π(i)}. At the end of the order, any unassigned vertices are assigned to C_{π(k)}. See Figure 8.2 for an example.

Before we get into the details of the proof, let's look at an example that will show why picking a random radius and a random permutation π are useful in giving a good performance guarantee. To simplify matters, suppose that k is large, and suppose we have an edge (u, v) where x_v = (0, 1, 0, 0, ...) and x_u = (1/2, 1/2, 0, 0, ...); see Figure 8.3 for an illustration. We suppose that x_{s_1} = (1, 0, 0, ...) and x_{s_2} = (0, 1, 0, ...). Note that in this case, u can only be in the balls B(s_1, r) or B(s_2, r), since r < 1 and (1/2)∥e_i − x_u∥₁ = 1 for i ≠ 1, 2. Thus u can only be assigned to C_1, C_2, or C_{π(k)}, with the last case occurring if x_u ∉ B(s_1, r) and x_u ∉ B(s_2, r). Somewhat similarly, v can only be assigned to C_1 or C_2; in this case, x_v is always in B(s_2, r), since r > 0 and ∥e_2 − x_v∥₁ = 0.
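Algorithm 8.1 is short enough to transcribe directly. The Python sketch below (illustrative only; the LP solution is a hand-made feasible point of (8.3), not solver output) rounds it once, using the membership test 1 − x_u^i ≤ r for B(s_i, r) that Lemma 8.5 below justifies:

```python
import random

def round_multiway_cut(x, k, vertices, rng):
    """One run of the randomized rounding; x[u] is a point of the k-simplex."""
    r = rng.random()                      # radius chosen uniformly in (0, 1)
    perm = list(range(k))
    rng.shuffle(perm)                     # random permutation of the indices
    assignment, assigned = {}, set()
    for i in perm[:-1]:
        # u lies in B(s_i, r) if and only if 1 - x_u^i <= r (Lemma 8.5).
        ball = {u for u in vertices if 1 - x[u][i] <= r}
        for u in ball - assigned:
            assignment[u] = i
        assigned |= ball
    for u in set(vertices) - assigned:    # leftovers go to the last part
        assignment[u] = perm[-1]
    return assignment

# Made-up feasible LP solution for k = 3: terminals at the simplex corners,
# vertex u halfway between s1 and s2 (the situation of Figure 8.3).
x = {"s1": (1, 0, 0), "s2": (0, 1, 0), "s3": (0, 0, 1), "u": (0.5, 0.5, 0)}
rng = random.Random(0)
a = round_multiway_cut(x, 3, list(x), rng)
# Each terminal always lands in its own part, since 1 - x_{s_i}^i = 0 <= r
# while its distance to every other corner is 1 > r. With this seed,
# r > 1/2, so u is grabbed by whichever of indices 0, 1 the permutation
# tries first; u can never join part 2, because 1 - x_u^3 = 1 > r.
assert a["s1"] == 0 and a["s2"] == 1 and a["s3"] == 2
assert a["u"] in (0, 1)
print(a)
```

Running this many times with fresh randomness would estimate the cut probability of each edge, which the analysis that follows bounds by (3/4)∥x_u − x_v∥₁.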
Figure 8.2: A geometric representation of the algorithm for k = 3. The random permutation is 1, 3, 2. The ball around s_3 assigns to C_3 all vertices in the ball not already assigned to C_1.

Now, suppose we have a fixed permutation π. If the permutation orders s_1 before s_2, and s_2 is not last in the permutation, then we claim (u, v) enters the cut with probability ∥x_u − x_v∥₁ = 1. To see this, note that if 1/2 ≤ r < 1, then u ∈ B(s_1, r) but v ∉ B(s_1, r), so that u ∈ C_1 and v ∈ C_2. If 0 < r < 1/2, then v ∈ B(s_2, r) but u ∉ B(s_2, r), so that v ∈ C_2; we observed above that u can only be assigned to C_1, C_2, and C_{π(k)}, so that if u ∉ B(s_2, r) and π(k) ≠ 2, it must be the case that u ∉ C_2 and (u, v) is in the cut. Note that if s_2 is last in the permutation, this can only lower the probability that (u, v) is in the cut. In general, we can upper bound the probability that an edge (u, v) ends up in the cut by ∥x_u − x_v∥₁, but analyzing the algorithm with this probability is only good enough to give a performance guarantee of 2, since the contribution of edge e = (u, v) to the objective function is (1/2) c_e ∥x_u − x_v∥₁. However, if the permutation π orders s_2 before s_1, then the edge is in the cut if 0 < r < 1/2, since v ∈ C_2 but u ∉ C_2 as before, while if r > 1/2, then the edge cannot end up in the cut at all, because both u and v are in B(s_2, r) and hence both are assigned to C_2. Since the probability that s_2 is ordered before s_1 is 1/2 in a random permutation, the overall probability that (u, v) is in the cut is at most

    Pr[(u, v) in cut | s_1 before s_2] · Pr[s_1 before s_2] + Pr[(u, v) in cut | s_2 before s_1] · Pr[s_2 before s_1]
        ≤ ∥x_u − x_v∥₁ · (1/2) + (1/2)∥x_u − x_v∥₁ · (1/2)
        = (3/2) · (1/2)∥x_u − x_v∥₁,

which will give us a performance guarantee of 3/2.

To start the analysis, we need some lemmas that will be helpful later on. The first lemma observes that any coordinate accounts for at most half of the ℓ1 distance. The second lemma gives us a condition under which a vertex is in a ball of radius r.

Lemma 8.4: For any index ℓ and any two vertices u, v ∈ V,

    |x_u^ℓ − x_v^ℓ| ≤ (1/2)∥x_u − x_v∥₁.
Figure 8.3: An illustration of the ideas of the analysis.

Proof. Without loss of generality, assume that x_u^ℓ ≥ x_v^ℓ. Then

    |x_u^ℓ − x_v^ℓ| = x_u^ℓ − x_v^ℓ = (1 − ∑_{j≠ℓ} x_u^j) − (1 − ∑_{j≠ℓ} x_v^j) = ∑_{j≠ℓ} (x_v^j − x_u^j) ≤ ∑_{j≠ℓ} |x_u^j − x_v^j|.

By adding |x_u^ℓ − x_v^ℓ| to both sides, we have 2|x_u^ℓ − x_v^ℓ| ≤ ∥x_u − x_v∥₁, which gives the lemma.

Lemma 8.5: u ∈ B(s_i, r) if and only if 1 − x_u^i ≤ r.
Proof. We have u ∈ B(s_i, r) if and only if (1/2)∥e_i − x_u∥₁ ≤ r. This is equivalent to (1/2) ∑_{ℓ=1}^k |e_i^ℓ − x_u^ℓ| ≤ r, which is equivalent to (1/2) ∑_{ℓ≠i} x_u^ℓ + (1/2)(1 − x_u^i) ≤ r. Since ∑_{ℓ≠i} x_u^ℓ = 1 − x_u^i, this is equivalent to 1 − x_u^i ≤ r.

Theorem 8.6: Algorithm 8.1 is a randomized 3/2-approximation algorithm for the multiway cut problem.

Proof. Pick an arbitrary edge (u, v) ∈ E. We claim that the probability that the endpoints lie in different parts of the partition is at most (3/4)∥x_u − x_v∥₁. Let W be a random variable denoting the value of the cut, and let Z_uv be a 0-1 random variable which is 1 if u and v are in different parts of the partition, so that W = ∑_{e=(u,v)∈E} c_e Z_uv. Given the claim, we have

    E[W] = E[∑_{e=(u,v)∈E} c_e Z_uv]
         = ∑_{e=(u,v)∈E} c_e E[Z_uv]
         = ∑_{e=(u,v)∈E} c_e · Pr[(u, v) in cut]
         ≤ ∑_{e=(u,v)∈E} c_e · (3/4)∥x_u − x_v∥₁
         = (3/2) · (1/2) ∑_{e=(u,v)∈E} c_e ∥x_u − x_v∥₁
         ≤ (3/2) OPT,

where the final inequality follows since (1/2) ∑_{e=(u,v)∈E} c_e ∥x_u − x_v∥₁ is the objective function of the LP relaxation.
Now to prove the claim. We say that an index i settles edge (u, v) if i is the first index in the random permutation such that at least one of u, v ∈ B(s_i, r). We say that index i cuts edge (u, v) if exactly one of u, v ∈ B(s_i, r). Let S_i be the event that i settles (u, v) and X_i be the event that i cuts (u, v). Note that S_i depends on the random permutation, while X_i is independent of the random permutation. In order for (u, v) to be in the multiway cut, there must be some index i that both settles and cuts (u, v); if this happens, then (u, v) ∈ δ(C_i). Thus the probability that edge (u, v) is in the multiway cut is at most ∑_{i=1}^k Pr[S_i ∧ X_i]. We will now show that ∑_{i=1}^k Pr[S_i ∧ X_i] ≤ (3/4)∥x_u − x_v∥₁.

By Lemma 8.5,

    Pr[X_i] = Pr[min(1 − x_u^i, 1 − x_v^i) ≤ r < max(1 − x_u^i, 1 − x_v^i)] = |x_u^i − x_v^i|.

Let ℓ be the index that minimizes min(1 − x_u^i, 1 − x_v^i) over all i; in other words, s_ℓ is the distinguished vertex that is closest to one of the two endpoints of (u, v). We claim that index i ≠ ℓ cannot settle edge (u, v) if ℓ is ordered before i in the random permutation π: by Lemma 8.5 and the definition of ℓ, if at least one of u, v ∈ B(e_i, r), then at least one of u, v ∈ B(e_ℓ, r). Note that the probability that ℓ occurs after i in the random permutation π is 1/2. Hence for i ≠ ℓ,

    Pr[S_i ∧ X_i] = Pr[S_i ∧ X_i | ℓ occurs after i in π] · Pr[ℓ occurs after i in π]
                  + Pr[S_i ∧ X_i | ℓ occurs before i in π] · Pr[ℓ occurs before i in π]
                  ≤ Pr[X_i | ℓ occurs after i in π] · (1/2) + 0.

Since the event X_i is independent of the choice of random permutation, Pr[X_i | ℓ occurs after i in π] = Pr[X_i], and therefore for i ≠ ℓ

    Pr[S_i ∧ X_i] ≤ Pr[X_i] · (1/2) = (1/2)|x_u^i − x_v^i|.
We also have that Pr[S_ℓ ∧ X_ℓ] ≤ Pr[X_ℓ] ≤ |x_u^ℓ − x_v^ℓ|. Therefore, the probability that (u, v) is in the multiway cut is

    ∑_{i=1}^k Pr[S_i ∧ X_i] ≤ |x_u^ℓ − x_v^ℓ| + (1/2) ∑_{i≠ℓ} |x_u^i − x_v^i| = (1/2)|x_u^ℓ − x_v^ℓ| + (1/2)∥x_u − x_v∥₁.

Using Lemma 8.4, (1/2)|x_u^ℓ − x_v^ℓ| ≤ (1/4)∥x_u − x_v∥₁, so that

    ∑_{i=1}^k Pr[S_i ∧ X_i] ≤ (3/4)∥x_u − x_v∥₁,

as desired.

With only slightly more work, the performance guarantee of the algorithm can be improved to 3/2 − 1/k; this is left to Exercise 8.1. One can also obtain a 3/2-approximation algorithm by choosing between two fixed permutations; we explore this in Exercise 8.2. The idea of partitioning vertices by taking balls of fixed radius in the order given by a random permutation is a useful one which we will apply again in Section 8.5.
8.3 The multicut problem
In this section, we consider a slightly different cut problem, called the multicut problem. Rather than having a set of distinguished vertices s_1, ..., s_k, we now have a set of distinguished source-sink pairs of vertices (s_1, t_1), ..., (s_k, t_k). Given an undirected graph G = (V, E) with nonnegative costs c_e ≥ 0 for all e ∈ E, our goal is to find a minimum-cost set of edges F whose removal disconnects all pairs; that is, for every i, 1 ≤ i ≤ k, there is no path connecting s_i and t_i in (V, E − F). Unlike the previous problem, there can be paths connecting s_i and s_j or s_i and t_j for i ≠ j. We previously considered a special case of the multicut problem in trees in Exercise 7.2.

Given a graph G, let P_i be the set of all paths P joining s_i and t_i. Then an integer programming formulation of the problem is:

    minimize   ∑_{e∈E} c_e x_e
    subject to ∑_{e∈P} x_e ≥ 1,   ∀P ∈ P_i, 1 ≤ i ≤ k,
               x_e ∈ {0, 1},      ∀e ∈ E.
The constraints ensure that for each i, for each path P ∈ P_i, some edge is selected from P. To obtain a linear programming relaxation, we replace the constraints x_e ∈ {0, 1} with x_e ≥ 0. Although the formulation is exponential in the size of the input (since there could be an exponential number of paths P ∈ P_i), we can solve the linear program in polynomial time by using the ellipsoid method described in Section 4.3. The separation oracle works as follows: given a solution x, we consider the graph G in which the length of each edge e is x_e. For each i, 1 ≤ i ≤ k, we compute the length of the shortest path between s_i and t_i. If for some i the length of the shortest path P is less than 1, we return it as a violated constraint, since we have ∑_{e∈P} x_e < 1 for P ∈ P_i. If for each i the length of the shortest path between s_i and t_i is at least 1, then the length of every path P ∈ P_i is at least 1, and the solution is feasible. Thus we have a polynomial-time separation oracle for the linear program. Alternatively, it is possible to give an equivalent, polynomially-sized linear program that can be solved directly in polynomial time, and whose solution can be transformed into a solution of this linear program in polynomial time; we leave this as an exercise (Exercise 8.4).

As in the LP rounding algorithm for the multiway cut problem, we will build our solution by taking balls around the vertex s_i for each i. In order to do this, we must define a notion of distance. Given an optimal solution x to the LP, we will let x_e denote the length of edge e for the purposes of our algorithm. We then let d_x(u, v) denote the length of the shortest path from vertex u to v using the x_e as edge lengths. Observe that d_x will obey the triangle inequality by the definition of shortest paths. Also, in any feasible LP solution we have d_x(s_i, t_i) ≥ 1 for all i. Let B_x(s_i, r) = {v ∈ V : d_x(s_i, v) ≤ r} be the ball of radius r around vertex s_i using these distances.
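The separation oracle described above amounts to k shortest-path computations. A stdlib-only Python sketch (an editor's illustration; the fractional solution below is made up) with Dijkstra's algorithm:

```python
import heapq

def dijkstra(adj, src):
    """Shortest-path distances from src; adj[u] = list of (v, length)."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def violated_pair(edges, x, pairs):
    """Return (i, d) for some pair with d_x(s_i, t_i) < 1, else None.

    edges: iterable of (u, v); x: dict edge -> LP value, used as edge length.
    The shortest s_i-t_i path then certifies a violated path constraint.
    """
    adj = {}
    for (u, v) in edges:
        adj.setdefault(u, []).append((v, x[u, v]))
        adj.setdefault(v, []).append((u, x[u, v]))
    for i, (s, t) in enumerate(pairs):
        d = dijkstra(adj, s).get(t, float("inf"))
        if d < 1.0:
            return i, d
    return None

# Made-up fractional point on a 4-cycle s1 - a - t1 - b - s1, all x_e = 1/4.
edges = [("s1", "a"), ("a", "t1"), ("t1", "b"), ("b", "s1")]
x = {e: 0.25 for e in edges}
print(violated_pair(edges, x, [("s1", "t1")]))  # d_x(s1, t1) = 0.5 < 1
```

Raising every x_e to 1/2 makes both s1-t1 paths have length 1, and the oracle then reports feasibility (returns None).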
Additionally, it will be useful to think of each edge e ∈ E as being a pipe with length x_e and cross-sectional area c_e. Then the product c_e x_e gives the volume of edge e. The LP produces the minimum-volume system of pipes such that d_x(s_i, t_i) ≥ 1 for all i. See Figure 8.4 for an illustration of the pipe system, and Figure 8.5 for an illustration of a ball in the pipe system. Given an optimal solution x to the LP, we let V* = ∑_{e∈E} c_e x_e be the total volume of the pipes. We know that V* ≤ OPT. Let V_x(s_i, r) be the volume of pipes within distance r of s_i plus an extra V*/k term; that is,

    V_x(s_i, r) = V*/k + ∑_{e=(u,v): u,v ∈ B_x(s_i,r)} c_e x_e + ∑_{e=(u,v): u ∈ B_x(s_i,r), v ∉ B_x(s_i,r)} c_e (r − d_x(s_i, u)).

Figure 8.4: An illustration of a pipe system. The width of the pipe corresponds to its cost c_e, and the number next to it gives its length. The distance between each source/sink pair is at least 1 along any path.
The first term ensures that V_x(s_i, 0) is nonzero, and that the sum of V_x(s_i, 0) over all s_i is V*; it will later be clear why this is useful. The second term adds up the volume of all edges (u, v) such that both u and v are inside the ball of radius r around s_i, while the third term adds up the volume of the pipes that fall partially within the ball.

Let δ(S) be the set of all edges that have exactly one endpoint in the set of vertices S. For a given radius r, let c(δ(B_x(s_i, r))) be the cost of the edges in δ(B_x(s_i, r)); that is, c(δ(B_x(s_i, r))) = ∑_{e∈δ(B_x(s_i,r))} c_e. We will first claim that it is always possible to find some radius r < 1/2 such that the cost c(δ(B_x(s_i, r))) of the cut induced by the ball of radius r around s_i is not much more than the volume of the ball; finding a ball of this sort is sometimes called region growing.

Lemma 8.7: Given a feasible solution x to the linear program, for any s_i one can find in polynomial time a radius r < 1/2 such that

    c(δ(B_x(s_i, r))) ≤ (2 ln(k + 1)) V_x(s_i, r).

This leads to the following algorithm, which is summarized in Algorithm 8.2. We start out with an empty set of edges F, and we sequence through each i from 1 to k. If s_i and t_i are not already separated by the cut F, we invoke Lemma 8.7 to find an appropriate radius r < 1/2 around s_i. We then add the edges of δ(B_x(s_i, r)) to F. We remove all the vertices of B_x(s_i, r) and all incident edges from the graph and continue. We note that in any iteration, the balls B_x(s_i, r) and volumes V_x(s_i, r) are taken with respect to the edges and vertices remaining in the current graph.

    Let x be an optimal solution to the LP
    F ← ∅
    for i ← 1 to k do
        if s_i and t_i are connected in (V, E − F) then
            Choose radius r < 1/2 around s_i as in Lemma 8.7
            F ← F ∪ δ(B_x(s_i, r))
            Remove B_x(s_i, r) and incident edges from graph
    return F

Algorithm 8.2: Algorithm for the multicut problem.

Figure 8.5: An illustration of a ball in the pipe system. The ball has radius 3/8 around s_i. Note that the volume of the ball will jump from radius r = 1/4 − ϵ to r = 1/4 due to the presence of edge (u, v). Observe also that if we consider a second ball of radius r + dr for sufficiently small dr, then the volume of the difference of the two balls is just the cross-sectional area of the pipes with one endpoint inside the ball and one outside; that is, it is c(δ(B_x(s_i, r))) dr.

We first show that the algorithm produces a feasible solution.

Lemma 8.8: Algorithm 8.2 produces a feasible solution for the multicut problem.

Proof. The only possible way in which the solution might not be feasible is if we have some s_j-t_j pair in the ball B_x(s_i, r) when the vertices in the ball get removed from the graph. However, if s_j, t_j ∈ B_x(s_i, r) for r < 1/2, then d_x(s_i, s_j) < 1/2 and d_x(s_i, t_j) < 1/2, so that d_x(s_j, t_j) < 1. This contradicts the feasibility of the LP solution x. So it must be the case that whenever we remove the vertices in a ball B_x(s_i, r) from the graph, it can contain at most one of the pair of vertices s_j, t_j for all j.

Assuming Lemma 8.7, we can now prove that the algorithm is a good approximation algorithm for the multicut problem.

Theorem 8.9: Algorithm 8.2 is a (4 ln(k + 1))-approximation algorithm for the multicut problem.

Proof. Let B_i be the set of vertices in the ball B_x(s_i, r) chosen by the algorithm when the pair s_i-t_i is selected for separation; we set B_i = ∅ if no ball is chosen for i. Let F_i be the edges in δ(B_i) when B_i and its incident edges are removed from the graph (where F_i = ∅ if B_i = ∅). Then F = ∪_{i=1}^k F_i. Let V_i be the total volume of edges removed when the vertices of B_i and its incident edges are removed from the graph. Note that V_i ≥ V_x(s_i, r) − V*/k, since V_i contains the full volume of the edges in F_i, while V_x(s_i, r) only contains part of the volume of these edges, but has an extra term of V*/k. Note that by the choice of r in Lemma 8.7, we have that c(F_i) ≤ (2 ln(k + 1)) V_x(s_i, r) ≤ (2 ln(k + 1)) (V_i + V*/k). Further, observe that the volume of each edge belongs to at most one V_i; once the edge is part of V_i, it is removed from the graph and cannot be part of the volume of a ball B_j removed in a later iteration. This observation implies that ∑_{i=1}^k V_i ≤ V*. Putting all of this together, we have that

    ∑_{e∈F} c_e = ∑_{i=1}^k ∑_{e∈F_i} c_e ≤ (2 ln(k + 1)) ∑_{i=1}^k (V_i + V*/k) ≤ (4 ln(k + 1)) V* ≤ (4 ln(k + 1)) OPT.
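The region-growing step itself can be sketched in a few dozen lines (an editor's illustration; the instance is made up, and a small `eps` stands in for the infinitesimal r^−). It evaluates V_x(s_i, r) and c(δ(B_x(s_i, r))) at radii just below each distance breakpoint, which the proof of Lemma 8.7 below shows must succeed at some breakpoint:

```python
import math

def shortest_dists(vertices, edges, x, s):
    """Bellman-Ford style relaxation; fine for small undirected graphs."""
    dist = {v: math.inf for v in vertices}
    dist[s] = 0.0
    for _ in range(len(vertices)):
        for (u, v) in edges:
            dist[v] = min(dist[v], dist[u] + x[u, v])
            dist[u] = min(dist[u], dist[v] + x[u, v])
    return dist

def ball_volume_and_cut(edges, x, c, dist, r, V_star, k):
    """V_x(s_i, r) and c(delta(B_x(s_i, r))), as defined in the text."""
    vol, cut = V_star / k, 0.0
    for (u, v) in edges:
        du, dv = sorted((dist[u], dist[v]))
        if dv <= r:                 # both endpoints inside: full volume c_e x_e
            vol += c[u, v] * x[u, v]
        elif du <= r:               # pipe only partially inside the ball
            vol += c[u, v] * (r - du)
            cut += c[u, v]          # this edge crosses the ball boundary
    return vol, cut

def grow_region(vertices, edges, x, c, s, k, eps=1e-9):
    """Find r < 1/2 with c(r) <= 2 ln(k+1) V(r), checking only breakpoints."""
    V_star = sum(c[e] * x[e] for e in edges)
    dist = shortest_dists(vertices, edges, x, s)
    bound = 2 * math.log(k + 1)
    breakpoints = sorted({d for d in dist.values() if 0 < d < 0.5} | {0.5})
    for b in breakpoints:
        r = b - eps                 # just below the breakpoint (the r_{j+1}^-)
        vol, cut = ball_volume_and_cut(edges, x, c, dist, r, V_star, k)
        if cut <= bound * vol:
            return r
    raise AssertionError("impossible by Lemma 8.7")

# Made-up LP solution: path s - a - t, both edges of length 1/2 and cost 1.
vertices = ["s", "a", "t"]
edges = [("s", "a"), ("a", "t")]
x = {("s", "a"): 0.5, ("a", "t"): 0.5}
c = {("s", "a"): 1.0, ("a", "t"): 1.0}
r = grow_region(vertices, edges, x, c, "s", k=1)
dist = shortest_dists(vertices, edges, x, "s")
vol, cut = ball_volume_and_cut(edges, x, c, dist, r, 1.0, 1)
assert cut <= 2 * math.log(2) * vol
print(round(r, 6))
```

On this tiny instance the only breakpoint below 1/2 is r = 1/2 itself, and the ball {s} with half of the pipe (s, a) already satisfies the guarantee.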
We finally turn to the proof of Lemma 8.7.

Proof of Lemma 8.7. For simplicity we write V(r) = V_x(s_i, r) and c(r) = c(δ(B_x(s_i, r))). Our proof will show that for r chosen uniformly at random from [0, 1/2), the expected value of c(r)/V(r) is no more than 2 ln(k + 1). This implies that for some value of r, we have that c(r) ≤ (2 ln(k + 1)) V(r). We will then show how we can quickly find such a value of r deterministically.

To set up the computation of the expectation, we need the following observations. For any value of r such that V(r) is differentiable, note that the derivative is exactly the cost of the edges in the cut given by the ball of radius r around s_i; that is, dV/dr = c(r) (see Figure 8.5). We observe that the points at which V(r) is not differentiable are values of r at which B_x(s_i, r) changes; that is, values of r such that d_x(s_i, v) = r for some vertex v. Furthermore, V(r) may not even be continuous at these values: if there are two vertices u and v such that there is an edge (u, v) of positive volume δ > 0 and d_x(s_i, u) = d_x(s_i, v) = r, then V(r) − V(r − ϵ) ≥ δ as ϵ ↓ 0 (see Figure 8.5). Nevertheless, note that V(r) is nondecreasing in r.

Essentially we would now like to invoke the mean value theorem from calculus. Recall that the mean value theorem states that for a function f continuous on an interval [a, b] and differentiable on (a, b), there exists some c ∈ (a, b) such that f′(c) = (f(b) − f(a))/(b − a). If we set f(r) = ln V(r), we note that

    f′(r) = V′(r)/V(r) = c(r)/V(r).
Observe that V(1/2) ≤ V* + V*/k, since V(1/2) cannot be more than the entire volume plus V*/k, and V(0) is exactly V*/k. Following the mean value theorem, we want to show that there is some r ∈ (0, 1/2) such that

    f′(r) ≤ (ln V(1/2) − ln V(0)) / (1/2 − 0) = 2 ln(V(1/2)/V(0)) ≤ 2 ln((V* + V*/k)/(V*/k)) = 2 ln(k + 1).   (8.4)

However, the mean value theorem doesn't apply, since as we noticed earlier V(r) may not be continuous and differentiable in (0, 1/2). Nevertheless, we show a form of the mean value theorem still holds in this case. We now sort and label the vertices in B_x(s_i, 1/2) according to their distance from s_i; let r_j = d_x(s_i, v_j) be such that 0 = r_0 ≤ r_1 ≤ ··· ≤ r_{l−1} ≤ 1/2. Then the vertices are labelled s_i = v_0, v_1, v_2, ..., v_{l−1}. For notational simplicity define r_l = 1/2. Let r_j^− be a value infinitesimally smaller than r_j. The expected value of c(r)/V(r) for r chosen uniformly from [0, 1/2) is then

    (1/(1/2)) ∑_{j=0}^{l−1} ∫_{r_j}^{r_{j+1}^−} (c(r)/V(r)) dr = 2 ∑_{j=0}^{l−1} ∫_{r_j}^{r_{j+1}^−} (1/V(r)) (dV/dr) dr
        = 2 ∑_{j=0}^{l−1} [ln V(r)]_{r_j}^{r_{j+1}^−}
        = 2 ∑_{j=0}^{l−1} [ln V(r_{j+1}^−) − ln V(r_j)].
Since V(r) is nondecreasing, this last sum is at most

    2 ∑_{j=0}^{l−1} [ln V(r_{j+1}) − ln V(r_j)].

Then the sum telescopes, so that

    2 ∑_{j=0}^{l−1} [ln V(r_{j+1}) − ln V(r_j)] = 2 (ln V(1/2) − ln V(0)),
and this can be bounded above by 2 ln(k + 1) as shown in (8.4). Thus it must be the case that there exists r ∈ [0, 1/2) such that c(r) ≤ (2 ln(k + 1)) V(r).

How can we find this value of r quickly? Observe that for r ∈ [r_j, r_{j+1}^−], c(r) stays constant (since B_x(s_i, r) is unchanged), while V(r) is nondecreasing. Thus if the inequality holds at any point in this interval, it must also hold at r_{j+1}^−. Therefore, we need to check the inequality only at r_{j+1}^− for j = 0, ..., l − 1; by the argument above, the inequality must hold for some value of j.

While no better approximation algorithm for the multicut problem is known, given the unique games conjecture we cannot do significantly better.

Theorem 8.10: Assuming the unique games conjecture, for any constant α ≥ 1, there is no α-approximation algorithm for the multicut problem unless P = NP.
We will prove this theorem in Section 16.5.

It will be useful in later sections to have a slight generalization of Lemma 8.7. We observe that by modifying the proof of Lemma 8.7, one can get the following corollary.

Corollary 8.11: Given lengths x_e on the edges e ∈ E and a vertex u, one can find in polynomial time a radius r ∈ [a, b) such that

    c(δ(B_x(u, r))) ≤ (1/(b − a)) ln(V_x(u, b)/V_x(u, a)) · V_x(u, r).

8.4 Balanced cuts
Thus the algorithm is not a true approximation algorithm, since we compare the cost of the solution found by the algorithm with the optimum OPT(1/2) of a problem whose optimum could be much larger than the optimum OPT(1/3) of the problem for which the algorithm finds a solution. While we give an algorithm to find a 1/3-balanced cut of cost at most O(log n) OPT(1/2), the discussion that follows can be generalized to give a b-balanced cut of cost at most O((1/(b′ − b)) log n) OPT(b′) for b ≤ 1/3 and b < b′.

Electronic web edition. Copyright 2011 by David P. Williamson and David B. Shmoys. To be published by Cambridge University Press.

Our approach to the problem follows that used for the multicut problem in the previous section. We claim that the following is a linear programming relaxation of the minimum bisection problem; we will shortly prove that this is true. Given a graph G, let Puv be the set of all
paths between u and v in G. Consider the following linear program:

minimize Σ_{e∈E} ce xe

subject to:

    duv ≤ Σ_{e∈P} xe,    ∀u, v ∈ V, ∀P ∈ Puv,

    Σ_{v∈S} duv ≥ (2/3 − 1/2) n,    ∀S ⊆ V : |S| ≥ ⌈2n/3⌉ + 1, ∀u ∈ S,

    duv ≥ 0,    ∀u, v ∈ V,

    xe ≥ 0,    ∀e ∈ E.
In this relaxation, we have a variable duv that is intended to denote the distance between u and v using edge lengths xe; if we let dx(u, v) be the actual shortest path length between u and v using edge lengths xe, then duv ≤ dx(u, v). To solve the linear program in polynomial time, we once again apply the ellipsoid method as described in Section 4.3. Given a solution (d, x), we can first check if the first set of constraints is obeyed by computing the shortest path distance between u and v using edge lengths xe, and ensuring that duv ≤ dx(u, v). For each vertex u, we find the closest ⌈2n/3⌉ + 1 vertices using distances duv; this will include u itself since duu = 0. Let u = v0, v1, v2, . . . , v⌈2n/3⌉ be the ⌈2n/3⌉ + 1 vertices that minimize duv. If Σ_{j=0}^{⌈2n/3⌉} d(u,vj) < (2/3 − 1/2)n, then clearly the constraint is violated for S = {v0, . . . , v⌈2n/3⌉} and this choice of u. Note that if Σ_{j=0}^{⌈2n/3⌉} d(u,vj) ≥ (2/3 − 1/2)n, then there is no
other set S with |S| ≥ ⌈2n/3⌉ + 1 and u ∈ S such that the constraint is violated, since any S containing v0, . . . , v⌈2n/3⌉ can only give rise to a larger sum, and since these vertices were chosen to make the sum as small as possible. Thus in polynomial time we can check whether (d, x) is a feasible solution and find a violated constraint if it is not feasible. We now argue that the linear program is a relaxation.

Lemma 8.12: The linear program above is a relaxation of the minimum bisection problem.

Proof. Given an optimal bisection S, we construct a solution (d̄, x̄) for the linear program by setting x̄e = 1 if e ∈ δ(S) and x̄e = 0 otherwise. We then set d̄uv = 1 for all u ∈ S, v ∉ S, and d̄uv = 0 otherwise. We argue that this solution is feasible, which is sufficient to prove the lemma. Clearly the first set of constraints is obeyed, since any path P from u ∈ S to v ∉ S must use an edge e ∈ δ(S). Now consider any set S′ such that |S′| ≥ ⌈2n/3⌉ + 1. Notice that |S′ − S| ≥ ⌈2n/3⌉ + 1 − ⌈n/2⌉ ≥ (2/3 − 1/2)n and |S′ ∩ S| ≥ ⌈2n/3⌉ + 1 − ⌈n/2⌉ ≥ (2/3 − 1/2)n; call S′ − S and S′ ∩ S the two parts of S′. See Figure 8.6. One part (namely, S′ ∩ S) is contained in S and the other (namely, S′ − S) has no vertices of S, so for u and v in different parts, d̄uv = 1. Pick any u ∈ S′. Since u is in one of the two parts of S′, there are at least (2/3 − 1/2)n vertices v in the other part of S′, which does not contain u. Thus Σ_{v∈S′} d̄uv ≥ (2/3 − 1/2)n.
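The separation procedure just described is simple enough to sketch directly; in this illustration (the function name and the matrix representation of d are our own choices), we test, for each u, only the ⌈2n/3⌉ + 1 vertices nearest to u:

```python
import math

def find_violated_ball_constraint(d, n):
    """Check the second family of LP constraints: for every u and every
    S with |S| >= ceil(2n/3) + 1 containing u, sum_{v in S} d[u][v] must
    be at least (2/3 - 1/2) n = n/6.  It suffices to test, for each u,
    the ceil(2n/3) + 1 vertices closest to u (d[u][u] = 0, so u is among
    them).  Returns (u, S) for a violated constraint, or None if the
    solution is feasible.  `d` is an n x n matrix of the LP's d_uv values."""
    k = math.ceil(2 * n / 3) + 1
    for u in range(n):
        nearest = sorted(range(n), key=lambda v: d[u][v])[:k]
        if sum(d[u][v] for v in nearest) < n / 6:
            return u, set(nearest)
    return None
```

If no u yields a violated constraint over its nearest vertices, no larger set S can either, since any other S containing u only increases the sum.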
Given an optimal solution (d, x) to the linear program, let Bx(u, r) be the set of vertices in the ball of radius r around u; that is, Bx(u, r) = {v ∈ V : dx(u, v) ≤ r}. As with the multicut problem, we let V∗ = Σ_{e∈E} ce xe, so that we know V∗ ≤ OPT(1/2) by Lemma 8.12. We define
Figure 8.6: An illustration of the proof of Lemma 8.12. At least (2/3 − 1/2)n vertices must be both in S ∩ S′ and S′ − S, so for any u ∈ S′, there must be at least (2/3 − 1/2)n vertices v which lie on the other side of the bisection S.

the volume of the ball of radius r around a vertex u as we did in the previous section:

Vx(u, r) = V∗/n + Σ_{e=(v,w): v,w ∈ Bx(u,r)} ce xe + Σ_{e=(v,w): v ∈ Bx(u,r), w ∉ Bx(u,r)} ce (r − dx(u, v)).
As before, define c(δ(Bx(u, r))) to be the cost of the edges with exactly one endpoint in Bx(u, r). We can then prove the following lemma, which is analogous to Lemma 8.7.

Lemma 8.13: Given a feasible solution (d, x) to the linear program and two vertices u and v such that dx(u, v) ≥ 1/12, one can find in polynomial time a radius r < 1/12 such that c(δ(Bx(u, r))) ≤ (12 ln(n + 1)) Vx(u, r).

Proof. We apply Corollary 8.11 to the interval [0, 1/12), and observe that

ln(Vx(u, 1/12)/Vx(u, 0)) ≤ ln((V∗ + V∗/n)/(V∗/n)) = ln(n + 1).
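The radius search of Lemma 8.13 (via Corollary 8.11) can be sketched as follows. This is an illustrative implementation under our own data layout (`dist`, `edges`, and the grid of candidate radii are assumptions, not the book's code); it scans the breakpoints of the piecewise-linear volume function:

```python
import math

def grow_ball(dist, edges, V_star, n, R=1/12):
    """Find a radius r < R around u with c(delta(B_x(u, r))) bounded by
    12 ln(n+1) V_x(u, r), as guaranteed by Lemma 8.13.  `dist` is a dict
    mapping each vertex v to d_x(u, v); `edges` is a list of tuples
    (v, w, c_e, x_e)."""
    def volume(r):
        vol = V_star / n
        for v, w, c, x in edges:
            dv, dw = sorted((dist[v], dist[w]))
            if dw <= r:            # edge entirely inside the ball
                vol += c * x
            elif dv <= r:          # edge cut by the ball boundary
                vol += c * (r - dv)
        return vol

    def cut(r):
        return sum(c for v, w, c, x in edges if (dist[v] <= r) != (dist[w] <= r))

    bound = 12 * math.log(n + 1)
    # The cut cost changes only at distance values, and the volume is
    # nondecreasing in r, so it suffices to test these breakpoints plus
    # a radius just below R.
    cands = sorted({d for d in dist.values() if d < R} | {R - 1e-9})
    for r in cands:
        if cut(r) <= bound * volume(r):
            return r
    raise AssertionError("no good radius found; inputs violate the lemma's hypotheses")
```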
Our algorithm for the 1/3-balanced cut problem is given in Algorithm 8.3. We let S be the set of vertices that will be in the cut; S is initially empty. We let F be a superset of the edges in the cut, and it is also initially empty. Our final solution requires that ⌊n/3⌋ ≤ |S| ≤ ⌈2n/3⌉. As long as |S| < ⌊n/3⌋, we will show that there must exist two vertices u, v ∉ S such that dx(u, v) ≥ 1/6. We apply Lemma 8.13 to find balls of radius less than 1/12 around both u and v. We take the ball of smaller cardinality, add it to S, add the corresponding cut to F, and remove the ball and its incident edges from the graph. We now need to prove that the algorithm is correct, and returns a 1/3-balanced cut.

Lemma 8.14: For any iteration of the algorithm in which |S| < ⌊n/3⌋, there exist u, v ∉ S such that dx(u, v) ≥ 1/6.

Proof. Consider S′ = V − S. Then |S′| ≥ ⌈2n/3⌉ + 1. Then by the linear programming constraint, for any u ∈ S′, Σ_{v∈S′} duv ≥ (2/3 − 1/2)n = n/6. Since there are at most n vertices in S′, it must be the case that for some v ∈ S′, duv ≥ 1/6. Finally, we know that dx(u, v) ≥ duv.
Let (d, x) be an optimal solution to the LP relaxation
F ← ∅; S ← ∅
while |S| < ⌊n/3⌋ do
    Choose some u, v ∉ S such that dx(u, v) ≥ 1/6
    Choose radius r < 1/12 around u as in Lemma 8.13
    Choose radius r′ < 1/12 around v as in Lemma 8.13
    if |Bx(u, r)| ≤ |Bx(v, r′)| then
        S ← S ∪ Bx(u, r)
        F ← F ∪ δ(Bx(u, r))
        Remove Bx(u, r) and incident edges from graph
    else
        S ← S ∪ Bx(v, r′)
        F ← F ∪ δ(Bx(v, r′))
        Remove Bx(v, r′) and incident edges from graph
return S

Algorithm 8.3: Algorithm for the 1/3-balanced cut problem.

Lemma 8.15: The algorithm returns S such that ⌊n/3⌋ ≤ |S| ≤ ⌈2n/3⌉.

Proof. By construction of the algorithm, |S| ≥ ⌊n/3⌋. Thus we only need to show that |S| ≤ ⌈2n/3⌉. Let Ŝ be the contents of S at the beginning of the iteration before the algorithm terminated. Then |Ŝ| < ⌊n/3⌋, and we added the smaller of the two balls around u and v to Ŝ to obtain the final solution S. Since we considered Bx(u, r) and Bx(v, r′) for r < 1/12, r′ < 1/12, and dx(u, v) ≥ 1/6, it must be the case that these two balls are disjoint; that is, Bx(u, r) ∩ Bx(v, r′) = ∅. Since we chose the smaller ball to add to Ŝ, this implies that the size of the ball added to Ŝ was no more than half the remaining vertices, or no more than (1/2)(n − |Ŝ|) vertices. Thus

|S| = |Ŝ| + min(|Bx(u, r)|, |Bx(v, r′)|) ≤ |Ŝ| + (1/2)(n − |Ŝ|) = (1/2)n + (1/2)|Ŝ| ≤ ⌈2n/3⌉,

since |Ŝ| < ⌊n/3⌋.

The proof of the performance guarantee is nearly the same as that for the multicut algorithm, and so we omit it.

Theorem 8.16: Algorithm 8.3 returns a 1/3-balanced cut of cost no more than (24 ln(n + 1))V∗ ≤ (24 ln(n + 1)) OPT(1/2).

As mentioned previously, the algorithm above can be generalized to give a b-balanced cut of cost at most O((1/(b′ − b)) log n) OPT(b′) for b ≤ 1/3 and b < b′.
In Section 15.3, we will give an O(log n)-approximation algorithm for the minimum bisection problem; this algorithm is not a pseudo-approximation algorithm, but is an algorithm that produces a bisection of cost at most O(log n) OPT(1/2).
8.5
Probabilistic approximation of metrics by tree metrics
In this section, we introduce the idea of tree metrics, which we will consider for the remainder of the chapter. Tree metrics have become an important tool for devising approximation algorithms
for a wide variety of problems. We use tree metrics to approximate a given distance metric d on a set of vertices V. A tree metric (V′, T) for a set of vertices V is a tree T defined on some set of vertices V′ ⊇ V, together with nonnegative lengths on each edge of T. The distance Tuv between vertices u, v ∈ V′ is the length of the unique path between u and v in T. We would like to have a tree metric (V′, T) that approximates d on V in the sense that duv ≤ Tuv ≤ α · duv for all u, v ∈ V for some value of α. The parameter α is called the distortion of the embedding of d into the tree metric (V′, T). Given a low-distortion embedding, we can often reduce problems on general metric spaces to problems on tree metrics with a loss of a factor of α in the performance guarantee. It is often much easier to produce an algorithm for a tree metric than for a general metric. We will see examples of this in the following two sections, in which we discuss the buy-at-bulk network design problem and the linear arrangement problem. Further examples can be found in the exercises. Unfortunately, it can be shown that for a cycle on n vertices, no tree metric has distortion less than (n − 1)/8 (see Exercise 8.7 for a restricted proof of this). However, we can give a randomized algorithm for producing a tree T such that for each u, v ∈ V, duv ≤ Tuv and E[Tuv] ≤ O(log n)duv; that is, the expected distortion is O(log n). Another way to view this is that we can give a probability distribution on trees such that the expected distortion is O(log n). We refer to this as the probabilistic approximation of the metric d by a tree metric.

Theorem 8.17: Given a distance metric (V, d) such that duv ≥ 1 for all u ≠ v, u, v ∈ V, there is a randomized, polynomial-time algorithm that produces a tree metric (V′, T), V ⊆ V′, such that for all u, v ∈ V, duv ≤ Tuv and E[Tuv] ≤ O(log n)duv.
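To see why a single tree can be bad for the cycle, consider embedding C_n into the spanning path obtained by deleting one cycle edge: the endpoints of the deleted edge are at distance 1 in the cycle but n − 1 in the tree. A small sketch (the function name is ours):

```python
def cycle_vs_path_distortion(n):
    """The cycle C_n (unit edge lengths) embedded into the spanning path
    0-1-...-(n-1) obtained by deleting edge (n-1, 0).  Returns the
    distortion max_{u != v} T_uv / d_uv of this particular tree; the text's
    stronger claim is that *every* tree metric suffers distortion at least
    (n - 1)/8 on the cycle."""
    def d(u, v):            # cycle metric
        return min(abs(u - v), n - abs(u - v))
    def T(u, v):            # path (tree) metric
        return abs(u - v)
    return max(T(u, v) / d(u, v) for u in range(n) for v in range(u + 1, n))

print(cycle_vs_path_distortion(8))  # -> 7.0, attained by the pair (0, 7)
```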
It is known that there exist metrics such that any probabilistic approximation of the metric by tree metrics must have distortion Ω(log n), so the result above is the best possible to within constant factors. We now begin to give the details of how the algorithm of Theorem 8.17 works. The tree is constructed via a hierarchical cut decomposition of the metric d. Let ∆ be the smallest power of 2 greater than 2 maxu,v duv. The hierarchical cut decomposition is a rooted tree with log2 ∆ + 1 levels. The nodes at each level of the tree correspond to some partitioning of the vertex set V: the root node (at level log2 ∆) corresponds to V itself, while each leaf of the tree (at level 0) corresponds to a single vertex of V. A given node at level i corresponds to some subset S of vertices V; the children of the node corresponding to S correspond to a partitioning of S. For a given node at level i that corresponds to a set S, the vertices in S will be the vertices in a ball of radius less than 2^i and at least 2^(i−1) centered on some vertex. Notice that each leaf of the tree at level 0 is in a ball of radius less than 2^0 = 1 centered on some vertex u, and so u is in the ball by itself since duv ≥ 1 for all v ≠ u. Observe also that by the definition of ∆, the set V is contained in a ball of radius ∆ centered on any vertex, since the radius of the ball at level log2 ∆ is at least ∆/2 ≥ maxu,v duv. The length of the tree edge joining the children at level i − 1 to their parent at level i is 2^i. See Figure 8.7 for an illustration of the hierarchical cut decomposition. Each node of the tree will be a vertex in V′, so that the tree is a tree on the vertices in V′. We let each leaf correspond to the unique v ∈ V that it contains, so that the leaves are the vertex set V, while the internal vertices are the remaining nodes in V′, and V ⊆ V′. Before we state precisely how to obtain the decomposition, we observe the following properties of the tree obtained in this way.
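The parameters of the decomposition are easy to compute; a small sketch (the function name is ours):

```python
import math

def decomposition_parameters(dmax):
    """Delta is the smallest power of 2 greater than 2 * max_{u,v} d_uv,
    and the hierarchical cut decomposition has log2(Delta) + 1 levels
    (i = 0, ..., log2(Delta)); a level-i node is a ball of radius in
    [2^(i-1), 2^i)."""
    delta = 1
    while delta <= 2 * dmax:
        delta *= 2
    levels = int(math.log2(delta)) + 1
    return delta, levels

print(decomposition_parameters(5))   # Delta = 16, 5 levels (i = 0..4)
```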
[Figure 8.7: the hierarchical cut decomposition, with levels from log2 ∆ down to 0.]
1 + √2, the rescaled local search algorithm using bigger steps yields a ρ-approximation algorithm for the uncapacitated facility location problem.
9.2
A local search algorithm for the k-median problem
In this section, we shall consider again the k-median problem originally considered in Section 7.7; however, we shall consider a slightly simpler variant in which we have a set of locations
N, each of which is both a client and a potential facility location. For each pair of locations i and j, there is a cost cij of assigning location j to a facility at location i. We can select at most k locations at which to open facilities, where k is part of the input. The goal is to find a set of locations S at which to open facilities, where |S| ≤ k, such that the assignment cost Σ_{j∈N} min_{i∈S} cij is minimized. Without loss of generality, we can assume that |S| = k. In other words, this problem can be viewed as the min-sum analogue of the k-center problem, previously considered in Section 2.2, which is a min-max problem. As in the k-center problem, we shall assume that the distance matrix is symmetric, satisfies the triangle inequality, and has zeroes on the diagonal (i.e., cii = 0 for each i ∈ N). We will give a local search algorithm for the k-median problem. We will let S ⊆ N denote the set of open facilities for the current solution, and let S∗ ⊆ N denote the set of open facilities in a fixed optimal solution. For each of these two solutions, each client j is assigned to its nearest open facility (breaking ties arbitrarily); we let this mapping be denoted σ(j) and σ∗(j), respectively, for these two assignments. Analogously, we let C and C∗ denote, respectively, the total cost of the current and optimal solutions. The local search algorithm that we shall consider is the most natural one. Each current solution is specified by a subset S ⊆ N of exactly k locations. To move from one feasible solution to a neighboring one, we swap two locations; that is, we select one location i ∈ S to delete from the current set, and choose one location i′ ∈ N − S to add to the current set of facilities. Afterward, we reassign each client to its nearest open facility. In our local search procedure, we repeatedly check to see if any swap move yields a solution of lower cost; if so, the resulting solution is our new current solution.
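The swap-based local search just described can be sketched as follows (a naive illustration with our own function names; a real implementation would cache assignment costs rather than recompute them from scratch):

```python
def kmedian_cost(S, c):
    """Assignment cost of facility set S: each client j in N goes to
    its nearest open facility."""
    return sum(min(c[i][j] for i in S) for j in range(len(c)))

def kmedian_local_search(c, k):
    """Single-swap local search for the k-median variant in the text:
    every location in N is both a client and a potential facility.
    Starts from an arbitrary set of k locations and swaps while some
    swap strictly decreases the assignment cost."""
    n = len(c)
    S = set(range(k))                       # arbitrary initial solution
    improved = True
    while improved:
        improved = False
        for i, ip in ((i, ip) for i in S for ip in range(n) if ip not in S):
            T = (S - {i}) | {ip}
            if kmedian_cost(T, c) < kmedian_cost(S, c):
                S, improved = T, True
                break                       # restart the scan from the new solution
    return S
```

For example, with four locations at positions 0, 1, 10, 11 on a line and k = 2, any locally optimal solution opens one facility in each cluster, for total cost 2; Theorem 9.6 below shows that in general a locally optimal solution costs at most five times the optimum.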
We repeat this step until, from the current solution, no swap move decreases the cost. The current solution at this point is said to be locally optimal. We shall prove the following theorem.

Theorem 9.6: For any input to the k-median problem, any feasible solution S that is locally optimal with respect to the pairwise swap move has a cost that is at most five times the optimal value.

Proof. The proof will focus on first constructing a set of k special swaps, which we shall call the crucial swaps. Since the current solution S is locally optimal, we know that each of these swaps does not improve the objective function of the resulting solution. These swaps will all be constructed by swapping into the solution one location i∗ in S∗ and swapping out of the solution one location i in S. Each i∗ ∈ S∗ will participate in exactly one of these k swaps, and each i ∈ S will participate in at most two of these k swaps. (We will allow the possibility that i∗ = i, and hence the swap move is degenerate, but clearly such a “change” would also not improve the objective function of the current solution, even if we change the corresponding assignment.) Observe that the current solution provides a mapping from each facility i∗ ∈ S∗ to a facility σ(i∗) ∈ S. As Figure 9.2 shows, this mapping allows us to categorize the facilities in S:

• let O ⊆ S consist of those facilities i ∈ S that have exactly one facility i∗ ∈ S∗ with σ(i∗) = i;

• let Z ⊆ S consist of those facilities i ∈ S for which none of the facilities i∗ ∈ S∗ have σ(i∗) = i;

• and let T ⊆ S consist of those facilities i ∈ S such that i has at least two locations in S∗ assigned to it in the current solution.
Figure 9.2: The mapping of locations in S∗ to S used to construct the crucial swaps.
We make a few simple observations. The mapping σ provides a matching between a subset O∗ ⊆ S∗ and the set O ⊆ S (by the definition of O). Hence, if ℓ denotes the number of locations in R∗ = S∗ − O∗, then ℓ must also equal |Z ∪ T| (since |S∗| = |S| = k). Since each location in T is the image of at least two locations in R∗, it follows that |T| ≤ ℓ/2. Hence |Z| ≥ ℓ/2. We now construct the crucial swaps as follows: first, for each i∗ ∈ O∗, we swap i∗ with σ(i∗); second, we build a collection of ℓ swaps, each of which swaps into the solution a distinct location in R∗, and swaps out a location in Z, where each location in Z appears in at most two swaps. For the swaps involving locations in R∗ and Z, we are free to choose any mapping provided that each element of R∗ is swapped in exactly once, and each element of Z is swapped out once or twice.
Consider one crucial swap, where i∗ ∈ S∗ and i ∈ S denote the swapped locations; we analyze the change of cost incurred by no longer opening a facility at location i ∈ S, and using i∗ instead. Let S′ denote the set of selected locations after this swap; that is, S′ = S − {i} ∪ {i∗}. To complete the analysis, we also specify the assignment of each location in N to an open facility in S′. For each location j such that σ∗(j) = i∗, we assign j to i∗ (since i∗ is in S′). For each location j such that σ∗(j) ≠ i∗ but σ(j) = i, we need a new facility to serve j, and we now assign it to σ(σ∗(j)). All other locations j remain served by σ(j). One must argue that σ(σ∗(j)) ≠ i when σ∗(j) ≠ i∗ (since we need that this location must remain in S′). Assume, for a contradiction, that σ(σ∗(j)) = i; then i ∈ O (since each location swapped out by a crucial swap is either in Z or O, and the former is clearly not possible by the definition of Z). Since i ∈ O, it is σ’s image of exactly one element in O∗, and we build a crucial swap by swapping i with that one element. Hence σ∗(j) = i∗, but this is a contradiction. We have now constructed a new set of facilities S′ and an assignment of each location in N to one of the facilities in S′. There may be some location j that is not served by its closest facility in S′ by this assignment. However, since there are no improving swaps from the current solution S, any swap combined with a suboptimal assignment must also not decrease the overall cost; hence, the change to S′ along with the specified assignment also must not improve over using S with the assignment given by the function σ. What is the change of cost for this swap? By focusing only on the clients j for which
Figure 9.3: Bounding the length of the edge from j to σ(σ∗(j)).

σ∗(j) = i∗, or both σ∗(j) ≠ i∗ and σ(j) = i, we can compute the change in the cost to be

Σ_{j: σ∗(j)=i∗} (c(σ∗(j),j) − c(σ(j),j)) + Σ_{j: σ∗(j)≠i∗ & σ(j)=i} (c(σ(σ∗(j)),j) − c(σ(j),j)).

We can simplify the terms in the second summation by considering Figure 9.3. By the triangle inequality, we have that c(σ(σ∗(j)),j) ≤ c(σ(σ∗(j)),σ∗(j)) + c(σ∗(j),j). In the current solution S, σ∗(j) is assigned to σ(σ∗(j)) instead of to σ(j), and so we know that c(σ(σ∗(j)),σ∗(j)) ≤ c(σ(j),σ∗(j)). Once again, we can apply the triangle inequality to get c(σ(j),σ∗(j)) ≤ c(σ∗(j),j) + c(σ(j),j), and so, putting these pieces together, we have that c(σ(σ∗(j)),j) ≤ 2c(σ∗(j),j) + c(σ(j),j), or equivalently, c(σ(σ∗(j)),j) − c(σ(j),j) ≤ 2c(σ∗(j),j). (In fact, a little reflection indicates that we had already proved this, in Lemma 9.2, where now σ plays the same role as ϕ.) This yields a more compact upper bound on the change in the cost, which we know must be nonnegative; that is, for each crucial swap i∗ and i, we have that

0 ≤ Σ_{j: σ∗(j)=i∗} (c(σ∗(j),j) − c(σ(j),j)) + Σ_{j: σ∗(j)≠i∗ & σ(j)=i} 2c(σ∗(j),j).    (9.11)

Now we add inequality (9.11) over all k crucial swaps. Consider the contribution of each of the two terms in the first summation. Recall that each i∗ ∈ S∗ participates in exactly one crucial swap. For the first term, we add c(σ∗(j),j) over all clients j for which σ∗(j) = i∗, and this sum is then added for each choice of i∗ in S∗. Each client j has a unique facility σ∗(j) ∈ S∗ to which it is assigned by the fixed optimal solution, and so the net effect is to compute the
sum Σ_{j∈N} c(σ∗(j),j) = C∗. However, the same is true for the second term, the double summation