
ACM Transactions on Algorithms (TALG)

Latest Articles

Approximation Schemes for Machine Scheduling with Resource (In-)dependent Processing Times

We consider two related scheduling problems: single resource-constrained scheduling on identical parallel machines and a generalization with...


About TALG

The ACM Transactions on Algorithms (TALG) publishes original research of the highest quality dealing with algorithms that are inherently discrete and finite, and having mathematical content in a natural way, either in the objective or in the analysis.

A 4/3 Approximation for the Minimum 2-Edge Connected Subgraph Problem

We present a factor 4/3 approximation algorithm for the problem of finding a minimum 2-edge connected spanning subgraph of a given undirected multigraph. The algorithm is based on a simple graph decomposition followed by comparing with the smallest 2-edge cover.

Proximity results and faster algorithms for Integer Programming using the Steinitz Lemma

We consider integer programming problems in standard form $\max \{c^T x : Ax = b,\; x \ge 0,\; x \in \mathbb{Z}^n\}$ where $A \in \mathbb{Z}^{m \times n}$, $b \in \mathbb{Z}^m$ and $c \in \mathbb{Z}^n$. We show that such an integer program can be solved in time $(m \cdot \Delta)^{O(m)} \cdot \|b\|_\infty^2$, where $\Delta$ is an upper bound on the absolute value of each entry of $A$. This improves upon the longstanding best bound of Papadimitriou (1981) of $(m \cdot \Delta)^{O(m^2)}$, where in addition the absolute values of the entries of $b$ also need to be bounded by $\Delta$. Our result relies on a lemma of Steinitz which states that a set of vectors in $\mathbb{R}^m$ that is contained in the unit ball of a norm and that sums to zero can be ordered such that all partial sums have norm bounded by $m$. We also use the Steinitz lemma to show that the $\ell_1$-distance of an optimal integer and fractional solution, also in the presence of upper bounds on the variables, is bounded by $m \cdot (2m\Delta + 1)^m$. Here $\Delta$ is again an upper bound on the absolute values of the entries of $A$. The novel strength of our bound is that it is independent of $n$. We provide evidence for the significance of our bound by applying it to general knapsack problems, where we obtain structural and algorithmic results that improve upon the recent literature.
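
For reference, the Steinitz lemma invoked above can be stated precisely as follows (a restatement in standard amsthm LaTeX, using the norm bound $m$ quoted in the abstract):

```latex
% The Steinitz lemma, as used in the abstract above: vectors in the
% unit ball of an arbitrary norm that sum to zero can be reordered so
% that every prefix sum stays within a ball of radius m.
\begin{lemma}[Steinitz]
  Let $\|\cdot\|$ be a norm on $\mathbb{R}^m$, and let
  $v_1, \dots, v_t \in \mathbb{R}^m$ satisfy $\|v_i\| \le 1$ for all $i$
  and $\sum_{i=1}^{t} v_i = 0$. Then there is a permutation $\pi$ of
  $\{1, \dots, t\}$ such that
  \[
    \Bigl\| \sum_{i=1}^{k} v_{\pi(i)} \Bigr\| \le m
    \qquad \text{for all } k = 1, \dots, t.
  \]
\end{lemma}
```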

Generalized Center Problems with Outliers

We study the F-center problem with outliers: given a metric space (X,d), a general down-closed family F of subsets of X, and a parameter m, we need to locate a subset S ∈ F of centers such that the maximum distance among the closest m points in X to S is minimized. Our main result is a dichotomy theorem. Colloquially, we prove that there is an efficient 3-approximation for the F-center problem with outliers if and only if we can efficiently optimize a poly-bounded linear function over F subject to a partition constraint. One concrete upshot of our result is a polynomial-time 3-approximation for the knapsack center problem with outliers, for which no (true) approximation algorithm was previously known.

Faster approximation schemes for the two-dimensional knapsack problem

For geometric optimization problems we often understand the computational complexity on a rough scale, but not very well on a finer scale. One such example is the two-dimensional knapsack problem for squares. There is a polynomial time (1+epsilon)-approximation algorithm for it (i.e., a PTAS) but the running time of this algorithm is triple exponential in 1/epsilon, i.e., Omega(n^{2^{2^{1/epsilon}}}). A double or triple exponential dependence on 1/epsilon is inherent in how this and various other algorithms for other geometric problems work. In this paper, we present an EPTAS for knapsack for squares, i.e., a (1+epsilon)-approximation algorithm with a running time of O_{epsilon}(1)*n^{O(1)}. In particular, the exponent of n in the running time does not depend on epsilon at all! Since there can be no FPTAS for the problem (unless P=NP) this is the best kind of approximation scheme we can hope for. To achieve this improvement, we introduce two new key ideas: We present a fast method to guess the Omega(2^{2^{1/epsilon}}) relatively large squares of a suitable near-optimal packing instead of using brute-force enumeration. Secondly, we introduce an indirect guessing framework to define sizes of cells for the remaining squares. In the previous PTAS each of these steps needs a running time of Omega(n^{2^{2^{1/epsilon}}}) and we improve both to O_{epsilon}(1)*n^{O(1)}. We complete our result by giving an algorithm for two-dimensional knapsack for rectangles under (1+epsilon)-resource augmentation. We improve the previous double-exponential PTAS to an EPTAS and compute even a solution with optimal weight, while the previous PTAS computes only an approximation.

A Tractable Class of Binary VCSPs via M-Convex Intersection

A binary VCSP is a general framework for the minimization problem of a function represented as the sum of unary and binary cost functions. An important line of VCSP research is to investigate which functions can be minimized in polynomial time. Cooper and Živný classified the tractability of binary VCSP instances according to the concept of a "triangle," and showed that the only interesting tractable case is the one induced by the joint winner property (JWP). Recently, Iwamasa, Murota, and Živný made a link between VCSPs and discrete convex analysis, showing that a function satisfying the JWP can be transformed into a function represented as the sum of two M-convex functions, which can be minimized in polynomial time via an M-convex intersection algorithm if a value oracle for each M-convex function is given. In this paper, we give an algorithmic answer to a natural question: which binary VCSP instances can be solved in polynomial time via an M-convex intersection algorithm? Under a natural condition, we solve this problem by devising a polynomial-time algorithm that obtains a concrete form of the representation in the representable case. Our result presents a larger tractable class of binary VCSPs, which properly contains the JWP class. We also show the co-NP-hardness of testing whether a given binary VCSP instance is representable as the sum of two M-convex functions.

Exponential Separations in the Energy Complexity of Leader Election

Energy is often the most constrained resource for battery-powered wireless devices, and the lion's share of energy is often spent on transceiver usage (sending/receiving packets), not on computation. In this paper we study the energy complexity of LeaderElection and ApproximateCounting in several models of wireless radio networks. It turns out that energy complexity is very sensitive to whether the devices can generate random bits and to their ability to detect collisions. We consider four collision-detection models: Strong-CD (in which both transmitters and listeners detect collisions), Sender-CD and Receiver-CD (in which only transmitters or only listeners detect collisions), and No-CD (in which no one detects collisions). The take-away message of our results is quite surprising. For randomized LeaderElection algorithms, there is an exponential gap between the energy complexity of Sender-CD and Receiver-CD, and for deterministic LeaderElection algorithms there is another exponential gap, but in the reverse direction. In particular, the randomized energy complexity of LeaderElection is $\Theta(\log^* n)$ in Sender-CD but $\Theta(\log(\log^* n))$ in Receiver-CD, where $n$ is the (unknown) number of devices. Its deterministic complexity is $\Theta(\log N)$ in Receiver-CD but $\Theta(\log\log N)$ in Sender-CD, where $N$ is the (known) size of the devices' ID space. There is also a tradeoff between time and energy: we give a new upper bound on the time-energy tradeoff curve for randomized LeaderElection and ApproximateCounting. A critical component of this algorithm is a new deterministic LeaderElection algorithm for dense instances (when $n=\Theta(N)$) with inverse-Ackermann-type ($O(\alpha(N))$) energy complexity.
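
For readers unfamiliar with these models, here is a minimal Python simulation of a folklore leader-election protocol in the Strong-CD model: every surviving candidate transmits with probability 1/2 each round, and collision feedback decides who stays. This is our own illustration of what rounds, transmissions, and collision detection mean; it is not the paper's energy-efficient algorithm (here every device is awake in every round, so its energy cost equals the round count, which is exactly what the paper's algorithms avoid).

```python
import random

def strong_cd_leader_election(n, rng=random.Random(0)):
    """Folklore leader election in the Strong-CD model (illustrative only).

    Each round, every surviving candidate transmits with probability 1/2.
    Strong-CD feedback tells all devices whether the round had silence,
    a single transmission, or a collision. Requires n >= 1.
    """
    candidates = list(range(n))
    rounds = 0
    while True:
        rounds += 1
        transmitters = [v for v in candidates if rng.random() < 0.5]
        if len(transmitters) == 1:
            return transmitters[0], rounds   # unique transmitter wins
        if len(transmitters) >= 2:
            candidates = transmitters        # collision: listeners drop out
        # silence (nobody transmitted): everyone stays, round repeats

leader, rounds = strong_cd_leader_election(1000)
print(f"leader {leader} elected after {rounds} rounds")
```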

Scheduling When You Don't Know the Number of Machines

Often in a scheduling problem there is uncertainty about the jobs to be processed. The issue of uncertainty regarding the machines has been much less studied. In this paper, we study a scheduling environment in which jobs first need to be grouped into sets before the number of machines is known, and the sets must then be scheduled on machines without being separated. To evaluate algorithms in such an environment, we introduce the notion of an α-robust algorithm: one that is guaranteed to return, for any number m of machines, a schedule that is within an α factor of the optimal schedule on m machines, where the optimum is not subject to the restriction that the sets cannot be separated. In this environment, we give a 5/3-robust algorithm for scheduling on parallel machines to minimize makespan, and show a lower bound of 4/3. For the special case where the jobs are infinitesimal, we give a 1.233-robust algorithm and an asymptotic lower bound of 1.207. We also study a case of fair allocation, where the objective is to minimize the difference between the maximum and minimum machine loads.

Optimal streaming and tracking distinct elements with high probability

The distinct elements problem is one of the fundamental problems in streaming algorithms: given a stream of integers in the range $\{1,\ldots,n\}$, we wish to provide a $(1+\varepsilon)$-approximation to the number of distinct elements in the input. After a long line of research, an optimal solution for this problem with constant probability of success, using $\mathcal{O}(\frac{1}{\varepsilon^2}+\log n)$ bits of space, was given by Kane, Nelson, and Woodruff in 2010. The standard approach to achieving a low failure probability $\delta$ is to run $\log \delta^{-1}$ parallel repetitions of the original algorithm and report the median of the computed answers. We show that such a multiplicative space blow-up is unnecessary: we provide an optimal algorithm using $\mathcal{O}(\frac{\log \delta^{-1}}{\varepsilon^2} + \log n)$ bits of space, matching known lower bounds for this problem. That is, the $\log\delta^{-1}$ factor does not multiply the $\log n$ term. This completely settles the space complexity of the distinct elements problem with respect to all standard parameters. We also consider the \emph{strong tracking} (or \emph{continuous monitoring}) variant of the distinct elements problem, where we want an algorithm that provides an approximation of the number of distinct elements seen so far, at all times of the stream. We show that this variant can be solved using $\mathcal{O}(\frac{\log \log n + \log \delta^{-1}}{\varepsilon^2} + \log n)$ bits of space, which we show to be optimal.
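
To make the "median of parallel repetitions" baseline concrete, here is a minimal Python sketch using a bottom-k (KMV) distinct-elements estimator; the estimator choice and all identifiers are ours for illustration, and this is the standard approach the abstract improves upon, not the paper's algorithm.

```python
import hashlib
import heapq
import statistics

def kmv_estimate(stream, k, seed):
    """Bottom-k (KMV) estimator: track the k smallest distinct hash values
    in [0, 1); if the k-th smallest is h, estimate the count as (k - 1) / h."""
    smallest = []    # max-heap via negation, holds at most k hash values
    members = set()  # current contents of `smallest`, for membership tests
    for x in stream:
        digest = hashlib.blake2b(f"{seed}:{x}".encode(), digest_size=8).digest()
        h = int.from_bytes(digest, "big") / 2.0**64  # pseudo-uniform in [0, 1)
        if h in members:
            continue
        if len(smallest) < k:
            heapq.heappush(smallest, -h)
            members.add(h)
        elif h < -smallest[0]:  # h beats the current k-th smallest
            members.discard(-heapq.heappushpop(smallest, -h))
            members.add(h)
    if len(smallest) < k:
        return len(smallest)         # fewer than k distinct items: exact
    return (k - 1) / (-smallest[0])  # k-th smallest hash h -> (k - 1) / h

def median_of_repetitions(stream, k=256, reps=9):
    """The 'standard approach' from the abstract: run independent
    repetitions and report the median to drive the failure probability down."""
    return statistics.median(kmv_estimate(stream, k, seed=r) for r in range(reps))

stream = [i % 10_000 for i in range(100_000)]  # 10,000 distinct elements
print(round(median_of_repetitions(stream)))
```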

Dynamic Beats Fixed: On Phase-based Algorithms for File Migration

We construct a deterministic 4-competitive algorithm for the online file migration problem, beating the currently best 20-year-old, 4.086-competitive MTLM algorithm by Bartal et al. (SODA 1997). Like MTLM, our algorithm also operates in phases, but it adapts their lengths dynamically depending on the geometry of requests seen so far. The improvement was obtained by carefully analyzing a linear model (factor-revealing LP) of a single phase of the algorithm. We also show that if an online algorithm operates in phases of fixed length and the adversary is able to modify the graph between phases, then the competitive ratio is at least 4.086.

Tight Analysis of Parallel Randomized Greedy MIS

We provide a tight analysis which settles the round complexity of the well-studied \emph{parallel randomized greedy MIS} algorithm, thus answering the main open question of Blelloch, Fineman, and Shun [SPAA'12]. The parallel/distributed randomized greedy Maximal Independent Set (MIS) algorithm works as follows. An order of the vertices is chosen uniformly at random. Then, in each round, all vertices that appear before their neighbors in the order are added to the independent set and removed from the graph along with their neighbors. The main question of interest is the number of rounds it takes until the graph is empty. This algorithm has been studied since 1987, initiated by Coppersmith, Raghavan, and Tompa [FOCS'87], and the previously best known bounds were $O(\log n)$ rounds in expectation for Erd\H{o}s-R\'{e}nyi random graphs by Calkin and Frieze [Random Struc. \& Alg. '90] and $O(\log^2 n)$ rounds with high probability for general graphs by Blelloch, Fineman, and Shun [SPAA'12]. We prove a high probability upper bound of $O(\log n)$ on the round complexity of this algorithm in general graphs, and that this bound is tight. This also shows that parallel randomized greedy MIS is as fast as the celebrated algorithm of Luby [STOC'85, JALG'86].
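
The algorithm as described above is short enough to state directly in code. A minimal Python simulation (sequentially executing the parallel rounds; the graph representation and names are ours):

```python
import random

def greedy_mis_rounds(adj, rng=random.Random(0)):
    """Simulate parallel randomized greedy MIS and count its rounds.

    adj: dict mapping each vertex to the set of its neighbors.
    In each round, every vertex that precedes all of its remaining
    neighbors in the random order joins the MIS; it and its neighbors
    are then removed from the graph.
    """
    order = {v: i for i, v in enumerate(rng.sample(list(adj), len(adj)))}
    remaining = {v: set(nbrs) for v, nbrs in adj.items()}
    mis, rounds = set(), 0
    while remaining:
        rounds += 1
        # All local minima of the random order join simultaneously.
        joiners = {v for v, nbrs in remaining.items()
                   if all(order[v] < order[u] for u in nbrs)}
        mis |= joiners
        removed = joiners | {u for v in joiners for u in remaining[v]}
        remaining = {v: nbrs - removed
                     for v, nbrs in remaining.items() if v not in removed}
    return mis, rounds

# Example: a sparse random graph on 200 vertices.
n, p, rng = 200, 0.05, random.Random(1)
adj = {v: set() for v in range(n)}
for u in range(n):
    for v in range(u + 1, n):
        if rng.random() < p:
            adj[u].add(v); adj[v].add(u)
print(greedy_mis_rounds(adj)[1], "rounds")
```

The quantity analyzed in the paper is exactly the number of iterations of this while-loop until the graph is empty.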

Optimal Hashing-Based Time-Space Trade-offs for Approximate Near Neighbors

We show tight upper and lower bounds for time-space trade-offs for the $c$-Approximate Near Neighbor Search problem. For the $d$-dimensional Euclidean space and $n$-point datasets, we develop a data structure with space $n^{1+\rho_u+o(1)} + O(dn)$ and query time $n^{\rho_q+o(1)} + dn^{o(1)}$ for every $\rho_u, \rho_q \ge 0$ with $c^2\sqrt{\rho_q} + (c^2-1)\sqrt{\rho_u} = \sqrt{2c^2-1}$. For example, for the approximation $c=2$ we achieve: space $n^{1.77...}$ and query time $n^{o(1)}$, significantly improving upon known data structures that support very fast queries; space $n^{1.14...}$ and query time $n^{0.14...}$, matching the optimal data-dependent Locality-Sensitive Hashing (LSH); and space $n^{1+o(1)}$ and query time $n^{0.43...}$, making significant progress in the regime of near-linear space, which is arguably of the most interest for practice. This is the first data structure that achieves sublinear query time and near-linear space for every approximation factor, improving upon [Kap15]. The data structure is a culmination of a long line of work on the problem for all space regimes; it builds on spherical Locality-Sensitive Filtering [BDGL16] and data-dependent hashing [AR15]. Our matching lower bounds are of two types: conditional and unconditional. We prove tightness of the whole trade-off in a restricted model of computation. We show unconditional cell-probe lower bounds for one and two probes that match the trade-off for $\rho_q = 0$. This is the first space lower bound (for any static data structure) for two probes that is not polynomially smaller than the corresponding one-probe bound. To show the result for two probes, we establish and exploit a connection to locally-decodable codes.
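
The three $c=2$ operating points quoted above follow directly from the trade-off equation. A small Python check (our own, simply evaluating the formula from the abstract at its extremes):

```python
import math

def tradeoff_points(c):
    """Evaluate c^2*sqrt(rho_q) + (c^2 - 1)*sqrt(rho_u) = sqrt(2c^2 - 1)
    at three regimes: fastest query, balanced (LSH), near-linear space."""
    rhs = math.sqrt(2 * c**2 - 1)
    rho_u_max = (rhs / (c**2 - 1)) ** 2       # rho_q = 0: fastest queries
    rho_bal = (rhs / (2 * c**2 - 1)) ** 2     # rho_q = rho_u: LSH point
    rho_q_max = (rhs / c**2) ** 2             # rho_u = 0: near-linear space
    return rho_u_max, rho_bal, rho_q_max

rho_u, rho_bal, rho_q = tradeoff_points(2)
print(f"space n^{1 + rho_u:.4f}, query n^o(1)")             # n^1.77...
print(f"space n^{1 + rho_bal:.4f}, query n^{rho_bal:.4f}")  # n^1.14..., n^0.14...
print(f"space n^(1+o(1)), query n^{rho_q:.4f}")             # n^0.43...
```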

Faster Carry Bit Computation for Adder Circuits with Prescribed Arrival Times

We consider the fundamental problem of constructing fast circuits for the carry bit computation in binary addition. Up to a small additive constant, the carry bit computation reduces to computing an And-Or path, i.e., a formula of type $t_0 \lor (t_1 \land (t_2 \lor ( \dots t_{m-1}) \dots ))$ or $t_0 \land (t_1 \lor (t_2 \land ( \dots t_{m-1}) \dots ))$. We present an algorithm that computes the fastest known Boolean circuit for an And-Or path with given arrival times $a(t_0), \dots, a(t_{m-1})$ for the input signals. Our objective function is delay, a natural generalization of depth with respect to arrival times. The maximum delay of the circuit we compute is $\log_2 W + \log_2 \log_2 m + \log_2 \log_2 \log_2 m + 4.3$, where $W := \sum_{i=0}^{m-1} 2^{a(t_i)}$. Note that $\lceil \log_2 W \rceil$ is a lower bound on the delay of any circuit depending on inputs $t_0, \dots, t_{m-1}$ with prescribed arrival times. Our method yields the fastest circuits known so far for And-Or paths, carry bit computation, and adders in terms of delay.
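
For intuition about how tight the guarantee is, the delay bound and the $\lceil \log_2 W \rceil$ lower bound are easy to compare numerically. A small Python helper (our own, simply evaluating the formulas from the abstract):

```python
import math

def delay_upper_bound(arrival_times):
    """Upper bound from the abstract:
    log2(W) + log2(log2(m)) + log2(log2(log2(m))) + 4.3,
    where W = sum over inputs of 2^(arrival time). Needs m > 2."""
    m = len(arrival_times)
    W = sum(2.0 ** a for a in arrival_times)
    return (math.log2(W) + math.log2(math.log2(m))
            + math.log2(math.log2(math.log2(m))) + 4.3)

def delay_lower_bound(arrival_times):
    """ceil(log2 W): a lower bound for any circuit on these inputs."""
    W = sum(2.0 ** a for a in arrival_times)
    return math.ceil(math.log2(W))

uniform = [0] * 1024  # all 1024 inputs arrive at time 0, so W = 1024
print(delay_lower_bound(uniform), round(delay_upper_bound(uniform), 2))
```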

Improving TSP tours using dynamic programming over tree decompositions

Given a traveling salesman problem (TSP) tour $H$ in a graph $G$, a $k$-move is an operation which removes $k$ edges from $H$ and adds $k$ edges of $G$ so that a new tour $H'$ is formed. The popular $k$-OPT heuristic for TSP finds a local optimum by starting from an arbitrary tour $H$ and then improving it by a sequence of $k$-moves. Until 2016, the only known algorithm to find an improving $k$-move for a given tour was the naive solution running in time $O(n^k)$. At ICALP'16, de Berg, Buchin, Jansen and Woeginger showed an $O(n^{\lfloor 2k/3 \rfloor + 1})$-time algorithm. We show an algorithm which runs in $O(n^{(1/4+\epsilon_k)k})$ time, where $\lim_{k\rightarrow\infty} \epsilon_k = 0$. It improves over the state of the art for every $k\ge 5$. For the most practically relevant case $k=5$, we provide a slightly refined algorithm running in $O(n^{3.4})$ time. We also show that for the $k=4$ case, improving over the $O(n^3)$-time algorithm of de Berg et al. would be a major breakthrough: an $O(n^{3-\epsilon})$-time algorithm for any $\epsilon>0$ would imply an $O(n^{3-\delta})$-time algorithm for the All-Pairs Shortest Paths problem, for some $\delta>0$.
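
To fix ideas, for $k=2$ a $k$-move is the classic 2-opt exchange: remove two edges of the tour and reconnect the two resulting paths, which amounts to reversing a segment. A minimal Python sketch of the naive improving-move search (our illustration of the definition, not the paper's algorithm; for general $k$ the naive search costs $O(n^k)$ as stated above):

```python
def find_improving_2_move(tour, dist):
    """Naive O(n^2) scan for an improving 2-move on a tour.

    tour: list of vertices, visited cyclically.
    dist: dist[u][v] = weight of edge (u, v).
    Removing edges (tour[i], tour[i+1]) and (tour[j], tour[j+1]) and
    adding (tour[i], tour[j]) and (tour[i+1], tour[j+1]) is the same as
    reversing the segment tour[i+1 .. j].
    """
    n = len(tour)
    for i in range(n - 1):
        # Skip adjacent edge pairs (and the wraparound pair when i == 0).
        for j in range(i + 2, n - (1 if i == 0 else 0)):
            a, b = tour[i], tour[i + 1]
            c, d = tour[j], tour[(j + 1) % n]
            if dist[a][c] + dist[b][d] < dist[a][b] + dist[c][d]:
                return tour[:i + 1] + tour[i + 1:j + 1][::-1] + tour[j + 1:]
    return None  # the tour is 2-opt locally optimal
```

Each candidate pair of removed edges is tested in O(1) time, giving the naive quadratic baseline that the results above generalize and improve upon for larger $k$.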

Deciding the Confusability of Words under Tandem Repeats in Linear Time

Tandem duplication in DNA is the process of inserting a copy of a segment of DNA adjacent to its original position. Motivated by applications that store data in living organisms, Jain et al. (2016) proposed the study of codes that correct tandem duplications to improve the reliability of data storage. We investigate algorithms associated with the study of these codes. Two words are said to be ${\le}k$-confusable if there exist two sequences of tandem duplications of lengths at most $k$ such that the resulting words are equal. We demonstrate that the problem of deciding whether two words are ${\le}k$-confusable is linear-time solvable for $k=3$ through a characterisation that can be checked efficiently. Combined with previous results, the decision problem is linear-time solvable for $k\le 3$, and we conjecture that it is undecidable for $k>3$. Using insights gained from the algorithm, we study the size of tandem-duplication codes. We improve the previously known upper bound and then construct codes with larger sizes than the previous constructions. We determine the sizes of optimal tandem-duplication codes for lengths up to twenty, develop recursive methods to construct tandem-duplication codes for all word lengths, and compute explicit lower bounds for the size of optimal tandem-duplication codes for lengths from 21 to 30.
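
A minimal brute-force Python sketch of the definitions above (ours, for illustration only): enumerate tandem duplications of length at most $k$ and test ${\le}k$-confusability of two short words by breadth-first search up to a length cap. This is exponential; the point of the paper is the linear-time characterisation.

```python
from collections import deque

def tandem_duplications(word, k):
    """All words obtained from `word` by one tandem duplication of
    length <= k: copy a segment and insert the copy right after it."""
    for i in range(len(word)):
        for L in range(1, min(k, len(word) - i) + 1):
            seg = word[i:i + L]
            yield word[:i + L] + seg + word[i + L:]

def confusable(u, v, k, max_len=12):
    """Brute-force test: are u and v <=k-confusable, considering only
    derived words of length up to max_len? (A cap is unavoidable here;
    the actual decision procedure is the paper's characterisation.)"""
    def reachable(w):
        seen, queue = {w}, deque([w])
        while queue:
            x = queue.popleft()
            for y in tandem_duplications(x, k):
                if len(y) <= max_len and y not in seen:
                    seen.add(y)
                    queue.append(y)
        return seen
    return not reachable(u).isdisjoint(reachable(v))

print(confusable("ab", "abab", k=2))  # True: duplicating "ab" gives "abab"
print(confusable("ab", "ba", k=3))    # False (within the length cap)
```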

Derandomized concentration bounds for polynomials, and hypergraph maximal independent set

A parallel algorithm for maximal independent set (MIS) in hypergraphs has been a long-standing algorithmic challenge, dating back nearly 30 years to a survey of Karp & Ramachandran (1990). Despite its apparent simplicity, there have been no general sub-polynomial-time algorithms or hardness reductions. The best randomized parallel algorithm for hypergraphs of fixed rank r was developed by Beame & Luby (1990) and Kelsen (1992), running in time roughly (log n)^(r!). The key probabilistic tool underlying this algorithm is a concentration bound for low-degree polynomials applied to independent input variables; this is a natural generalization of concentration bounds for sums of independent random variables, which are ubiquitous in combinatorics and computer science. These concentration bounds for polynomials do not lend themselves to standard derandomization techniques, so the algorithm of Kelsen cannot be derandomized in any known way, and there are no deterministic parallel algorithms for hypergraph MIS for any fixed rank r > 3. We improve the randomized algorithm of Kelsen to obtain a running time of (log n)^(2^r). We also give a method for derandomizing concentration bounds for polynomials, thus obtaining a deterministic algorithm running in (log n)^(2^(r+3)) time on (mn)^O(1) processors. Our analysis also applies when r is slowly growing; using this in conjunction with a strategy of Bercea et al. (2015) gives a deterministic MIS algorithm running in time exp(O(log m / log log m + log log n)).
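
For context, the object being computed is the same as in the sequential greedy baseline, which is easy to state in Python (our own sketch; the contribution above is computing this in parallel, in polylogarithmic time, and deterministically):

```python
import random

def greedy_hypergraph_mis(num_vertices, edges, rng=random.Random(0)):
    """Sequential greedy MIS in a hypergraph.

    edges: list of hyperedges, each a frozenset of vertices. A set S is
    independent iff it contains no hyperedge entirely; greedy insertion
    in a random order yields a maximal independent set.
    """
    incident = {v: [] for v in range(num_vertices)}
    for e in edges:
        for v in e:
            incident[v].append(e)
    mis = set()
    for v in rng.sample(range(num_vertices), num_vertices):
        # Adding v is unsafe iff some hyperedge through v is already
        # fully inside `mis` except for v itself.
        if all(not (e - {v} <= mis) for e in incident[v]):
            mis.add(v)
    return mis

edges = [frozenset(e) for e in [{0, 1, 2}, {1, 2, 3}, {2, 3, 4}]]
print(sorted(greedy_hypergraph_mis(5, edges)))  # a maximal independent set
```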

An optimal O(nm) algorithm enumerating all walks common to all closed edge-covering walks of a graph

In this paper we consider the following problem: given a directed graph G, output all walks of G that are sub-walks of all closed edge-covering walks of G. This problem was first considered by Tomescu and Medvedev (RECOMB 2016), who characterized these walks through the notion of omnitig. Omnitigs were shown to be relevant for the genome assembly problem from bioinformatics, where a genome sequence must be assembled from a set of reads from a sequencing experiment. Tomescu and Medvedev (RECOMB 2016) also proposed an algorithm for listing all maximal omnitigs, by launching an exhaustive visit from every edge. In this paper, we prove new insights about the structure of omnitigs and solve several open questions about them. We combine these to achieve an O(nm)-time algorithm for outputting all the maximal omnitigs of a graph (with n nodes and m edges). This is also optimal, as we show families of graphs whose total omnitig length is Ω(nm). We implement this algorithm and show that it is 9-12 times faster in practice than the one of Tomescu and Medvedev (RECOMB 2016).
