# Latest Articles

## Faster Carry Bit Computation for Adder Circuits with Prescribed Arrival Times

We consider the fundamental problem of constructing fast circuits for the carry bit computation in binary addition. Up to a small additive constant,...

## Dynamic Beats Fixed: On Phase-based Algorithms for File Migration

We construct a deterministic 4-competitive algorithm for the online file migration problem, beating the currently best 20-year-old, 4.086-competitive Move-To-Local-Min (MTLM) algorithm by Bartal et al. (SODA 1997). Like MTLM, our algorithm also operates in phases, but it adapts their lengths dynamically depending on the geometry of...

## Faster Approximation Schemes for the Two-Dimensional Knapsack Problem

For geometric optimization problems we often understand the computational complexity on a rough scale, but not very well on a finer scale. One example...

## An Optimal O(nm) Algorithm for Enumerating All Walks Common to All Closed Edge-covering Walks of a Graph

In this article, we consider the following problem. Given a directed graph G, output all walks of G...

##### NEWS

The ACM Transactions on Algorithms (TALG) publishes original research of the highest quality dealing with algorithms that are inherently discrete and finite and that have mathematical content in a natural way, either in the objective or in the analysis.

#### SODA'18 Editorial

## More Logarithmic-Factor Speedups for 3SUM, (median,+)-Convolution, and Some Geometric 3SUM-Hard...

We present an algorithm that solves the 3SUM problem for $n$ real numbers in $O((n^2/\log^2n) (\log\log n)^{O(1)})$ time, improving previous solutions by about a logarithmic factor. Our framework for shaving off two logarithmic factors can be applied to other problems, such as (median,+)-convolution/matrix multiplication and algebraic generalizations of 3SUM. We also obtain the first subquadratic results on some 3SUM-hard problems in computational geometry, for example, deciding whether (the interiors of) a constant number of simple polygons have a common intersection.
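For contrast, the textbook quadratic-time 3SUM baseline that this line of work improves on can be sketched as follows (a minimal sort-and-two-pointers version, not the paper's algorithm):

```python
def three_sum(nums):
    """Textbook O(n^2) 3SUM: is there a triple with a + b + c == 0?"""
    nums = sorted(nums)
    n = len(nums)
    for i in range(n - 2):
        # Fix nums[i]; scan the rest with two pointers.
        lo, hi = i + 1, n - 1
        while lo < hi:
            s = nums[i] + nums[lo] + nums[hi]
            if s == 0:
                return True
            if s < 0:
                lo += 1   # sum too small: move left pointer right
            else:
                hi -= 1   # sum too large: move right pointer left
    return False
```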

## Proximity Results and Faster Algorithms for Integer Programming Using the Steinitz Lemma

We consider integer programming problems in standard form $\max \{c^Tx : Ax = b,\ x \ge 0,\ x \in \mathbb{Z}^n\}$ where $A \in \mathbb{Z}^{m \times n}$, $b \in \mathbb{Z}^m$ and $c \in \mathbb{Z}^n$. We show that such an integer program can be solved in time $(m \cdot \Delta)^{O(m)} \cdot \|b\|_\infty^2$, where $\Delta$ is an upper bound on each absolute value of an entry in $A$. This improves upon the longstanding best bound of Papadimitriou (1981) of $(m\cdot\Delta)^{O(m^2)}$, where, in addition, the absolute values of the entries of $b$ also need to be bounded by $\Delta$. Our result relies on a lemma of Steinitz that states that a set of vectors in $\mathbb{R}^m$ that is contained in the unit ball of a norm and that sums to zero can be ordered such that all partial sums have norm bounded by $m$. We also use the Steinitz lemma to show that the $\ell_1$-distance of an optimal integer and fractional solution, also under the presence of upper bounds on the variables, is bounded by $m \cdot (2m\Delta+1)^m$. Here $\Delta$ is again an upper bound on the absolute values of the entries of $A$. The novel strength of our bound is that it is independent of $n$. We provide evidence for the significance of our bound by applying it to general knapsack problems, where we obtain structural and algorithmic results that improve upon the recent literature.
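The statement of the Steinitz lemma can be illustrated with a small sketch: given zero-sum vectors in the $\ell_\infty$ unit ball, reorder them so partial sums stay small. The greedy rule below is only a heuristic illustration of the statement, not the lemma's proof, which guarantees a bound of $m$ for some ordering:

```python
def greedy_steinitz_order(vectors):
    """Greedily order zero-sum vectors so partial sums stay small.

    The Steinitz lemma guarantees some ordering whose partial sums
    all have norm at most m (the dimension).  This greedy rule,
    which appends the vector minimizing the new partial sum's
    l_infinity norm, merely illustrates the statement.
    """
    remaining = list(vectors)
    partial = [0.0] * len(vectors[0])
    order = []
    while remaining:
        v = min(remaining,
                key=lambda u: max(abs(p + x) for p, x in zip(partial, u)))
        remaining.remove(v)
        partial = [p + x for p, x in zip(partial, v)]
        order.append(v)
    return order
```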

## Faster Replacement Paths and Distance Sensitivity Oracles

Given a directed n-node graph G with integer edge weights in [-M,M], a source node s, a target node t, and an edge e, a replacement path for the triple (s,t,e) is a shortest s-t path in G avoiding e. In this paper we achieve the following main results:

1. We describe an algorithm with runtime $\tilde{O}(Mn^{\omega})$ that computes the lengths of all the replacement paths for a given source-target pair (s,t). Here $\omega<2.373$ is the fast matrix multiplication exponent. For comparison, the previous fastest algorithms have runtime $\tilde{O}(Mn^{1+2\omega/3})$ [Weimann, Yuster - FOCS'10] and $\tilde{O}(n^{2.5})$ in the unweighted case [Roditty, Zwick - ICALP'05].
2. We describe an algorithm with runtime $\tilde{O}(M^{\frac{1}{4-\omega}}n^{2+\frac{1}{4-\omega}})$ that computes the lengths of all the replacement paths for a given source s. Our runtime reduces to $\tilde{O}(Mn^\omega)$ for positive weights, hence matching our first result for the simpler case where the target t is also fixed.
3. We present a data structure (a.k.a. distance sensitivity oracle) that answers queries of the form (s,t,e) with the length of the associated replacement path. For a given parameter $\alpha\in [0,1]$, our oracle has preprocessing time $\tilde{O}(Mn^{\omega+\frac{1}{2}}+Mn^{\omega+\alpha(4-\omega)})$ and query time $\tilde{O}(n^{1-\alpha})$. In particular, we achieve for the first time simultaneously subcubic preprocessing time and sublinear query time. For comparison, the previous best oracle has $\tilde{O}(Mn^{\omega+1-\alpha})$ preprocessing time and (superlinear) $\tilde{O}(n^{1+\alpha})$ query time [Weimann, Yuster - FOCS'10].

All the mentioned results are randomized: a wrong distance is output with probability polynomially small in n.
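As a point of reference, the naive replacement-paths baseline, one shortest-path computation per excluded edge, can be sketched as below (assuming nonnegative weights so Dijkstra applies; the paper's algorithms handle weights in [-M,M] and are far faster):

```python
import heapq

def shortest_path_len(n, edges, s, t, banned=None):
    """Dijkstra on a directed graph with nonnegative weights,
    skipping the edge at index `banned` (if any)."""
    adj = [[] for _ in range(n)]
    for i, (u, v, w) in enumerate(edges):
        if i != banned:
            adj[u].append((v, w))
    dist = [float('inf')] * n
    dist[s] = 0
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue   # stale queue entry
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist[t]

def replacement_paths(n, edges, s, t):
    """Naive baseline: one Dijkstra per edge (here for every edge,
    though only edges on the shortest s-t path actually matter)."""
    return {i: shortest_path_len(n, edges, s, t, banned=i)
            for i in range(len(edges))}
```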

## Distributed Edge Coloring with Small Palettes

The complexity of distributed edge coloring depends heavily on the palette size as a function of the maximum degree $\Delta$. In this paper we explore the complexity of edge coloring in the LOCAL model in different palette size regimes:

1. We simplify the \emph{round elimination} technique of Brandt et al. and prove that $(2\Delta-2)$-edge coloring requires $\Omega(\log_\Delta \log n)$ time w.h.p.
2. We give a randomized edge coloring algorithm that can use palette sizes as small as $\Delta + \tilde{O}(\sqrt{\Delta})$, which is a natural barrier for randomized approaches.
3. We develop a new distributed Lovász local lemma algorithm for tree-structured dependency graphs, which leads to a $(1+\epsilon)\Delta$-edge coloring algorithm for trees running in $O(\log\log n)$ time.
4. A natural approach to computing $(\Delta+1)$-edge colorings (Vizing's theorem) is to extend partial colorings by iteratively re-coloring parts of the graph, e.g., via "augmenting paths." We prove that this approach may be viable, but in the worst case requires recoloring subgraphs of diameter $\Omega(\Delta\log n)$.
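For orientation, the easy sequential argument behind the $2\Delta-1$ palette bound (the regime the paper pushes below) can be sketched as:

```python
from collections import defaultdict

def greedy_edge_coloring(edges):
    """Sequential greedy edge coloring with at most 2*Delta - 1 colors.

    Each edge is adjacent to at most 2*(Delta - 1) other edges, so
    the smallest color missing at both endpoints always exists in
    {0, ..., 2*Delta - 2}.
    """
    used = defaultdict(set)   # vertex -> colors already on incident edges
    color = {}
    for u, v in edges:
        c = 0
        while c in used[u] or c in used[v]:
            c += 1
        color[(u, v)] = c
        used[u].add(c)
        used[v].add(c)
    return color
```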

## A Faster Algorithm for Minimum-Cost Bipartite Perfect Matching in Planar Graphs

Given a weighted planar bipartite graph $G(A \cup B, E)$ where each edge has an integer edge cost, we give an $O(n^{4/3} \log^2 n \log(nC))$ time algorithm to compute a minimum-cost perfect matching; here $C$ is the maximum edge cost in the graph. The previous best known planarity-exploiting algorithm has a running time of $O(n^{3/2}\log n)$ and is achieved by using planar separators (Lipton and Tarjan '80). Our algorithm is based on the bit-scaling paradigm (Gabow and Tarjan '89). For each scale, our algorithm first executes $O(n^{1/3})$ iterations of Gabow and Tarjan's algorithm in $O(n^{4/3})$ time, leaving only $O(n^{2/3})$ vertices unmatched. Next, it constructs a compressed residual graph $H$ with $O(n^{2/3})$ vertices and $O(n)$ edges. This is achieved by using an $r$-division of the planar graph $G$ with $r = n^{2/3}$. For each piece of the $r$-division, there is an edge between two vertices of $H$ if and only if they are connected by a directed path inside the piece. Using existing efficient shortest-path data structures, the remaining $O(n^{2/3})$ vertices are matched by iteratively computing a minimum-cost augmenting path, each taking $O(n^{2/3} \log^2 n)$ time. Augmentation changes the residual graph, so the algorithm updates the compressed representation for each piece affected by the change in $O(n^{2/3} \log n)$ time. We bound the total number of affected pieces over all the augmenting paths by $O(n^{2/3} \log n)$. Therefore, the total time taken by the algorithm is $O(n^{4/3}\log^2 n \log(nC))$.

## Scheduling When You Don't Know the Number of Machines

Often in a scheduling problem there is uncertainty about the jobs to be processed. The issue of uncertainty regarding the machines has been much less studied. In this paper, we study a scheduling environment in which jobs first need to be grouped into sets before the number of machines is known, and then the sets must be scheduled on machines without being separated. To evaluate algorithms in such an environment, we introduce the notion of an $\alpha$-robust algorithm, one that is guaranteed to return, for any number $m$ of machines, a schedule that is within an $\alpha$ factor of the optimal schedule on $m$ machines, where the optimum is not subject to the restriction that the sets cannot be separated. In this environment, we give a 5/3-robust algorithm for scheduling on parallel machines to minimize makespan, and show a lower bound of 4/3. For the special case when the jobs are infinitesimal, we give a 1.233-robust algorithm, with an asymptotic lower bound of 1.207. We also study a case of fair allocation, where the objective is to minimize the difference between the maximum and minimum machine loads.
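The $\alpha$-robustness measure can be illustrated with a small sketch: schedule the pre-formed groups as indivisible units for each possible $m$ and compare against a lower bound on the unrestricted optimum. This is only an evaluation harness using simple LPT scheduling, not the paper's 5/3-robust algorithm; since it divides by a lower bound rather than the true optimum, it reports an upper estimate of the achieved $\alpha$:

```python
def lpt_makespan(weights, m):
    """Greedy LPT: place indivisible weights on m machines,
    largest first, always onto the least-loaded machine."""
    loads = [0.0] * m
    for w in sorted(weights, reverse=True):
        i = loads.index(min(loads))
        loads[i] += w
    return max(loads)

def robustness(jobs, groups, machine_range):
    """Empirical alpha: worst ratio, over m, of scheduling the
    pre-formed groups versus a lower bound on the unrestricted
    optimum (max of average machine load and largest single job)."""
    alpha = 1.0
    for m in machine_range:
        opt_lb = max(sum(jobs) / m, max(jobs))
        alpha = max(alpha, lpt_makespan(groups, m) / opt_lb)
    return alpha
```

For example, grouping jobs {3, 3} and {2, 2} into indivisible sets of size 6 and 4 costs nothing on one machine but a 1.2 factor on two.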

## Algorithms to Approximate Column-Sparse Packing Problems

Column-sparse packing problems arise in several contexts in both deterministic and stochastic discrete optimization. We present two unifying ideas, (non-uniform) attenuation and multiple-chance algorithms, to obtain improved approximation algorithms for some well-known families of such problems. As three main examples, we attain the integrality gap, up to lower-order terms, for known LP relaxations for k-column sparse packing integer programs (Bansal et al., Theory of Computing, 2012) and stochastic k-set packing (Bansal et al., Algorithmica, 2012), and go "half the remaining distance" to optimal for a major integrality-gap conjecture of Füredi, Kahn, and Seymour on hypergraph matching (Combinatorica, 1993).

## Randomized Memoryless Algorithms for the Weighted and the Generalized $k$-server Problems

The weighted $k$-server problem is a generalization of the $k$-server problem, wherein the cost of moving a server of weight $\beta_i$ through a distance $d$ is $\beta_i\cdot d$. On uniform metric spaces, this models caching with caches having different page replacement costs. A memoryless algorithm is an online algorithm whose behavior is independent of the history given the positions of its $k$ servers. In this paper, we develop a framework to analyze the competitiveness of randomized memoryless algorithms. The key technical contribution is a method for working with potential functions defined implicitly as the solution of a linear system. Using this, we establish tight bounds on the competitive ratio achievable by randomized memoryless algorithms for the weighted $k$-server problem on uniform metrics. We first prove that there is an $\alpha_k$-competitive memoryless algorithm for this problem, where $\alpha_k=\alpha_{k-1}^2+3\alpha_{k-1}+1$; $\alpha_1=1$. We complement this result by proving that no randomized memoryless algorithm can have a competitive ratio less than $\alpha_k$. Finally, we prove that the above bounds also hold for the generalized $k$-server problem on weighted uniform metrics.
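The recurrence for $\alpha_k$ stated above is easy to tabulate; note that it grows doubly exponentially in $k$:

```python
def weighted_k_server_ratio(k):
    """Competitive ratio alpha_k of the optimal randomized memoryless
    algorithm for weighted k-server on uniform metrics, per the
    abstract's recurrence:
        alpha_1 = 1,  alpha_k = alpha_{k-1}^2 + 3*alpha_{k-1} + 1.
    """
    a = 1
    for _ in range(k - 1):
        a = a * a + 3 * a + 1
    return a
```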

## Solving the Sigma-Tau Problem

## Approximation Schemes for Low-Rank Binary Matrix Approximation Problems

We provide a randomized linear-time approximation scheme for a generic problem about clustering binary vectors subject to additional constraints. The new constrained clustering problem generalizes a number of problems, and by solving it we obtain the first linear-time approximation schemes for a number of well-studied fundamental problems concerning clustering of binary vectors and low-rank approximation of binary matrices. Among the problems solvable by our approach are \textsc{Low GF(2)-Rank Approximation}, \textsc{Low Boolean-Rank Approximation}, and various versions of \textsc{Binary Clustering}. For example, for the \textsc{Low GF(2)-Rank Approximation} problem, where for an $m\times n$ binary matrix $A$ and integer $r>0$ we seek a binary matrix $B$ of GF(2) rank at most $r$ such that the $\ell_0$ norm of the matrix $A-B$ is minimum, our algorithm, for any $\epsilon>0$, in time $f(r,\epsilon)\cdot n\cdot m$, where $f$ is some computable function, outputs a $(1+\epsilon)$-approximate solution with probability at least $(1-\frac{1}{e})$. These are the first linear-time approximation schemes for these problems. We also give (deterministic) PTASes for these problems running in time $n^{f(r)\frac{1}{\epsilon^2}\log \frac{1}{\epsilon}}$, where $f$ is some function depending on the problem. Our algorithm for the constrained clustering problem is based on a novel sampling lemma, which is interesting in its own right.

## A Time- and Message-Optimal Distributed Algorithm for Minimum Spanning Trees

This paper presents a randomized (Las Vegas) distributed algorithm that constructs a minimum spanning tree (MST) in weighted networks with optimal (up to polylogarithmic factors) time and message complexity. This algorithm runs in $\tilde{O}(D + \sqrt{n})$ time and exchanges $\tilde{O}(m)$ messages (both with high probability), where $n$ is the number of nodes of the network, $D$ is the hop-diameter, and $m$ is the number of edges. This is the \emph{first} distributed MST algorithm that matches \emph{simultaneously} the time lower bound of $\tilde{\Omega}(D + \sqrt{n})$ [Elkin, SIAM J.\ Comput.\ 2006] and the message lower bound of $\Omega(m)$ [Kutten et al., J.\ ACM 2015], which both apply to randomized Monte Carlo algorithms. The prior time and message lower bounds are derived using two completely different graph constructions; the existing lower bound construction that shows one lower bound does not work for the other. To complement our algorithm, we present a new lower bound graph construction for which any distributed MST algorithm requires both $\tilde{\Omega}(D + \sqrt{n})$ rounds and $\Omega(m)$ messages.
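For background, many distributed MST algorithms (including GHS-style ones) are organized around the classical Borůvka phases, in which every component picks its minimum-weight outgoing edge and components merge. A sequential sketch, assuming distinct edge weights so no cycles can form:

```python
def boruvka_mst(n, edges):
    """Sequential Boruvka phases on edges given as (weight, u, v).

    Assumes the graph is connected and edge weights are distinct
    (distinctness guarantees the chosen edges never form a cycle).
    O(log n) phases suffice since components at least halve in number.
    """
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    mst_weight, components = 0, n
    while components > 1:
        best = {}   # component root -> (weight, root_u, root_v)
        for w, u, v in edges:
            ru, rv = find(u), find(v)
            if ru == rv:
                continue
            for r in (ru, rv):
                if r not in best or w < best[r][0]:
                    best[r] = (w, ru, rv)
        if not best:
            break   # graph is disconnected
        for w, ru, rv in best.values():
            ru, rv = find(ru), find(rv)
            if ru != rv:
                parent[ru] = rv
                mst_weight += w
                components -= 1
    return mst_weight
```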
