ACM Transactions on Algorithms (TALG)

Latest Articles

Approximation Schemes for Machine Scheduling with Resource (In-)dependent Processing Times

We consider two related scheduling problems: single resource-constrained scheduling on identical parallel machines and a generalization with...

NEWS

In Memoriam: David S. Johnson

http://dl.acm.org/citation.cfm?id=2907073

About TALG

The ACM Transactions on Algorithms (TALG) publishes original research of the highest quality dealing with algorithms that are inherently discrete and finite and that have mathematical content in a natural way, either in the objective or in the analysis.

Distributed Dominating Set Approximations beyond Planar Graphs

The Minimum Dominating Set (MDS) problem is one of the most fundamental and challenging problems in distributed computing. While it is well known that minimum dominating sets cannot be approximated locally on general graphs, there has been much progress in recent years on computing local approximations on sparse graphs, and in particular planar graphs. In this paper, we study distributed and deterministic MDS approximation algorithms for graph classes beyond planar graphs. In particular, we show that existing approximation bounds for planar graphs can be lifted to bounded-genus graphs, and we present (1) a local constant-time, constant-factor MDS approximation algorithm and (2) a local $O(\log^* n)$-time approximation scheme. Our main technical contribution is a new analysis of a slightly modified variant of an existing algorithm by Lenzen et al. Interestingly, unlike existing proofs for planar graphs, our analysis does not rely on direct topological arguments.

Locating Errors in Faulty Formulas

Given a drawing of a read-once formula (called the blueprint) and a blackbox implementation with the same topology as the blueprint that purports to compute the formula, can we tell whether it does? Under a fault model where the only faults in the implementation are gates that complement their outputs, we show that there is an efficient algorithm that makes a linear number of probes to the blackbox implementation and determines whether the blueprint and implementation are identical. We also show a matching lower bound. We further ask whether we can diagnose where the faults are using blackbox testing. We prove that if the implementation has a property called polynomial balance, then it is possible to do this efficiently. To complement this result, we show that even if the blueprint is polynomially balanced and there are only logarithmically many errors in the implementation, the implementation could be unbalanced and the diagnosis problem provably requires super-polynomially many tests. We point out that this problem is one instance of a general class of problems of learning deviations from a blueprint, which we call conformance learning. Conformance learning seems worthy of further investigation in a broader context.
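
To make the fault model (not the testing algorithm) concrete, here is a minimal Python sketch of our own devising: a read-once formula as a gate tree, where a faulty gate complements its output. The `Gate` encoding and field names are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Gate:
    """One node of a read-once formula: op is 'AND', 'OR', or 'VAR'."""
    op: str
    children: list = field(default_factory=list)
    var: int = 0           # input index, used only when op == 'VAR'
    faulty: bool = False   # a faulty gate complements its output

def evaluate(gate, assignment):
    """Evaluate the (possibly faulty) formula on a 0/1 assignment."""
    if gate.op == 'VAR':
        out = bool(assignment[gate.var])
    elif gate.op == 'AND':
        out = all(evaluate(c, assignment) for c in gate.children)
    else:  # 'OR'
        out = any(evaluate(c, assignment) for c in gate.children)
    return not out if gate.faulty else out

# (x0 AND x1) OR x2, with a complemented AND gate in the implementation:
g_and = Gate('AND', [Gate('VAR', var=0), Gate('VAR', var=1)], faulty=True)
impl = Gate('OR', [g_and, Gate('VAR', var=2)])
print(evaluate(impl, [1, 1, 0]))  # False, though the blueprint would output True
```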

Ascending-Price Algorithms for Unknown Markets

We design a simple ascending-price algorithm to compute a $(1+\varepsilon)$-approximate equilibrium in Arrow-Debreu markets with the weak gross substitutes property. It applies in an unknown-market setting, without exact knowledge of the number of agents or of their individual utilities and endowments; instead, our algorithm only uses price queries to a global demand oracle. This is the first polynomial-time algorithm for most of the known tractable classes of Arrow-Debreu markets that computes such an equilibrium with a number of calls to the demand oracle that is polynomial in $\log(1/\varepsilon)$, and it avoids heavy machinery such as the ellipsoid method. Demands can be real-valued functions of prices, but the oracle only returns demand values of bounded precision. Due to this more realistic assumption, the precision and representation of prices and demands become a major technical challenge, and we develop new tools and insights that may be of independent interest. Furthermore, we give the first polynomial-time algorithm to compute an exact equilibrium for markets with spending-constraint utilities. This resolves an open problem posed by Duan and Mehlhorn (2015).
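
To illustrate the query model only, here is a bare-bones, hedged sketch of a generic ascending-price (tâtonnement-style) loop that sees the market solely through price queries. The callable `demand_oracle` is a hypothetical stand-in for the paper's global demand oracle, and the paper's actual update and termination rules are substantially more refined.

```python
def ascending_prices(demand_oracle, num_goods, eps, supply=1.0):
    """Generic ascending-price loop over a demand oracle (a sketch,
    not the paper's algorithm).  Prices only ever increase, and the
    market is accessed solely via demand_oracle(prices); termination
    is not guaranteed without further structural assumptions."""
    prices = [1.0] * num_goods
    while True:
        demand = demand_oracle(prices)  # aggregate demand at current prices
        over = [g for g in range(num_goods) if demand[g] > (1 + eps) * supply]
        if not over:
            return prices  # no good is significantly over-demanded
        for g in over:
            prices[g] *= 1 + eps  # raise prices of over-demanded goods only
```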

A Tractable Class of Binary VCSPs via M-Convex Intersection

A binary VCSP is a general framework for the minimization of a function represented as the sum of unary and binary cost functions. An important line of VCSP research is to investigate which functions can be minimized in polynomial time. Cooper and Živný classified the tractability of binary VCSP instances according to the concept of a "triangle," and showed that the only interesting tractable case is the one induced by the joint winner property (JWP). Recently, Iwamasa, Murota, and Živný made a link between VCSPs and discrete convex analysis, showing that a function satisfying the JWP can be transformed into a function represented as the sum of two M-convex functions, which can be minimized in polynomial time via an M-convex intersection algorithm if a value oracle for each M-convex function is given. In this paper, we give an algorithmic answer to a natural question: which binary VCSP instances can be solved in polynomial time via an M-convex intersection algorithm? Under a natural condition, we solve this problem by devising a polynomial-time algorithm that obtains a concrete form of the representation in the representable case. Our result presents a larger tractable class of binary VCSPs, which properly contains the JWP class. We also show the co-NP-hardness of testing whether a given binary VCSP instance is representable as the sum of two M-convex functions.

A Lottery Model for Center-type Problems With Outliers

In this paper, we give tight approximation algorithms for the $k$-center and matroid center problems with outliers. Unfairness arises naturally in this setting: certain clients could always be considered outliers. To address this issue, we introduce a lottery model in which each client $j$ is allowed to submit a parameter $p_j \in [0,1]$, and we look for a random solution that covers every client $j$ with probability at least $p_j$. Our techniques include a randomized rounding procedure that rounds a point inside a matroid intersection polytope to a basis plus at most one extra item, such that all marginal probabilities are preserved and a certain linear function of the variables does not decrease in the process with probability one.

Optimal streaming and tracking distinct elements with high probability

The distinct elements problem is one of the fundamental problems in streaming algorithms: given a stream of integers in the range $\{1,\ldots,n\}$, we wish to provide a $(1+\varepsilon)$-approximation to the number of distinct elements in the input. After a long line of research, an optimal solution for this problem with constant success probability, using $\mathcal{O}(\frac{1}{\varepsilon^2}+\log n)$ bits of space, was given by Kane, Nelson, and Woodruff in 2010. The standard approach to achieving a low failure probability $\delta$ is to run $\log \delta^{-1}$ parallel repetitions of the original algorithm and report the median of the computed answers. We show that such a multiplicative space blow-up is unnecessary: we provide an optimal algorithm using $\mathcal{O}(\frac{\log \delta^{-1}}{\varepsilon^2} + \log n)$ bits of space, matching known lower bounds for this problem. That is, the $\log\delta^{-1}$ factor does not multiply the $\log n$ term. This completely settles the space complexity of the distinct elements problem with respect to all standard parameters. We also consider the strong tracking (or continuous monitoring) variant of the distinct elements problem, in which the algorithm must provide an approximation of the number of distinct elements seen so far at all times of the stream. We show that this variant can be solved using $\mathcal{O}(\frac{\log \log n + \log \delta^{-1}}{\varepsilon^2} + \log n)$ bits of space, which we show to be optimal.
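
The median amplification step described above is easy to state in code. The sketch below is illustrative: `base_estimator` is a hypothetical factory producing sketches with `update` and `estimate` methods that succeed with constant probability; running $\Theta(\log \delta^{-1})$ copies and reporting the median drives the failure probability down to $\delta$, at the multiplicative space cost the paper shows to be unnecessary.

```python
import math
import statistics

def estimate_distinct_whp(stream, base_estimator, delta):
    """Median-of-repetitions amplification (the standard approach the
    paper improves on): run O(log(1/delta)) independent constant-success
    estimators in parallel and report the median of their answers."""
    copies = max(1, math.ceil(math.log(1.0 / delta)))
    sketches = [base_estimator() for _ in range(copies)]
    for x in stream:
        for s in sketches:        # feed every copy the same stream element
            s.update(x)
    return statistics.median(s.estimate() for s in sketches)
```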

Faster Pseudopolynomial Time Algorithms for Subset Sum

Given a multiset $S$ of $n$ positive integers and a target integer $t$, the subset sum problem is to decide if there is a subset of $S$ that sums up to $t$. We present a new divide-and-conquer algorithm that computes all the realizable subset sums up to an integer $u$ in $\widetilde{O}(\min\{\sqrt{n}u, u^{4/3}, \sigma\})$ time, where $\sigma$ is the sum of all elements of $S$ and $\widetilde{O}$ hides polylogarithmic factors. This result improves upon the standard dynamic programming algorithm that runs in $O(nu)$ time. To the best of our knowledge, the new algorithm is the fastest general deterministic algorithm for this problem. We also present a modified algorithm for finite cyclic groups, which computes all the realizable subset sums within the group in $\widetilde{O}(\min\{\sqrt{n}m, m^{5/4}\})$ time, where $m$ is the order of the group.
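
For reference, the $O(nu)$ dynamic program that serves as the baseline above fits in a few lines. The bit-parallel encoding below (bit $i$ of `reach` is set iff some subset sums to $i$) is a standard implementation trick, not the paper's divide-and-conquer algorithm.

```python
def realizable_sums(S, u):
    """Standard O(nu) subset-sum DP, bit-parallel: returns all sums
    in [0, u] realizable by some subset of the multiset S."""
    reach = 1                      # bit 0 set: the empty subset sums to 0
    mask = (1 << (u + 1)) - 1      # keep only sums up to u
    for x in S:
        reach = (reach | (reach << x)) & mask
    return [i for i in range(u + 1) if (reach >> i) & 1]

# Example: realizable_sums([3, 5, 9], 10) -> [0, 3, 5, 8, 9]
```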

Bloom Filters in Adversarial Environments

Many efficient data structures use randomness, allowing them to improve upon deterministic ones. Usually, their efficiency and/or correctness are analyzed using probabilistic tools under the assumption that the inputs and queries are independent of the internal randomness of the data structure. In this work, we consider data structures in a more robust model, which we call the adversarial model. Roughly speaking, this model allows an adversary to choose inputs and queries adaptively according to previous responses. Specifically, we consider the data structure known as a "Bloom filter" and prove a tight connection between Bloom filters in this model and cryptography. A Bloom filter represents a set $S$ of elements approximately, using fewer bits than a precise representation. The price for succinctness is allowing some errors: for any $x \in S$ it should always answer 'Yes', and for any $x \notin S$ it should answer 'Yes' only with small probability. In the adversarial model, we consider both efficient adversaries (that run in polynomial time) and computationally unbounded adversaries that are bounded only in the number of queries they can make. For computationally bounded adversaries, we show that non-trivial (memory-wise) Bloom filters exist if and only if one-way functions exist. For unbounded adversaries, we show that there exists a Bloom filter for sets of size $n$ and error $\varepsilon$ that is secure against $t$ queries and uses only $O(n \log \frac{1}{\varepsilon} + t)$ bits of memory. In comparison, $n \log \frac{1}{\varepsilon}$ is the best possible under a non-adaptive adversary.
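
For readers unfamiliar with the data structure, here is a minimal textbook Bloom filter in Python. It is the non-adaptive object of study only; the hash choice and parameters are illustrative assumptions, and nothing here reflects the paper's adversarially secure constructions.

```python
import hashlib

class BloomFilter:
    """Textbook Bloom filter: 'Yes' is guaranteed for inserted elements;
    a false 'Yes' for a non-member happens only with small probability."""
    def __init__(self, m_bits, k_hashes):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray((m_bits + 7) // 8)

    def _positions(self, x):
        # k bit positions derived from salted SHA-256 (an arbitrary choice)
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{x}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, x):
        for p in self._positions(x):
            self.bits[p // 8] |= 1 << (p % 8)

    def query(self, x):
        return all(self.bits[p // 8] >> (p % 8) & 1 for p in self._positions(x))
```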

Online Submodular Maximization with Preemption

Submodular function maximization has been studied extensively in recent years under various constraints and models, and the problem plays a major role in several disciplines. We study a natural online variant of this problem in which elements arrive one by one and the algorithm has to maintain a solution obeying certain constraints at all times. Upon arrival of an element, the algorithm has to decide whether to accept the element into its solution, and it may preempt previously chosen elements. The goal is to maximize a submodular function over the set of elements in the solution. We study two special cases of this general problem and derive upper and lower bounds on the competitive ratio. Specifically, we design a $1/e$-competitive algorithm for the unconstrained case, in which the algorithm may hold any subset of the elements, and constant-competitive algorithms for the case where the algorithm may hold at most $k$ elements in its solution.
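
The arrival/preemption model can be made concrete with a short sketch. The swap rule below is a naive marginal-gain heuristic chosen purely for illustration; it is not the paper's competitive algorithm. Here `f` is assumed to be a set-function oracle.

```python
def preemptive_streaming(f, stream, k):
    """Illustrates the online model only: elements arrive one by one,
    the algorithm keeps at most k of them at all times, and it may
    preempt (discard) a previously accepted element to make room."""
    S = set()
    for e in stream:
        gain = f(S | {e}) - f(S)          # marginal value of the newcomer
        if len(S) < k:
            if gain > 0:
                S = S | {e}
        else:
            # element of S whose removal loses the least value
            worst = min(S, key=lambda x: f(S) - f(S - {x}))
            if gain > f(S) - f(S - {worst}):
                S = (S - {worst}) | {e}   # preempt `worst` in favor of e
    return S
```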

Tight Analysis of Parallel Randomized Greedy MIS

We provide a tight analysis which settles the round complexity of the well-studied parallel randomized greedy MIS algorithm, thus answering the main open question of Blelloch, Fineman, and Shun [SPAA'12]. The parallel/distributed randomized greedy Maximal Independent Set (MIS) algorithm works as follows. An order of the vertices is chosen uniformly at random. Then, in each round, all vertices that appear before their neighbors in the order are added to the independent set and removed from the graph along with their neighbors. The main question of interest is the number of rounds it takes until the graph is empty. This algorithm has been studied since 1987, initiated by Coppersmith, Raghavan, and Tompa [FOCS'87], and the previously best known bounds were $O(\log n)$ rounds in expectation for Erdős–Rényi random graphs by Calkin and Frieze [Random Struct. & Alg. '90] and $O(\log^2 n)$ rounds with high probability for general graphs by Blelloch, Fineman, and Shun [SPAA'12]. We prove a high-probability upper bound of $O(\log n)$ on the round complexity of this algorithm in general graphs, and show that this bound is tight. This also shows that parallel randomized greedy MIS is as fast as the celebrated algorithm of Luby [STOC'85, JALG'86].
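
Since the abstract fully specifies the process, it is easy to simulate. The Python sketch below (our own illustrative code, using an adjacency-set representation) counts the rounds until the graph is empty:

```python
import random

def greedy_mis_rounds(adj):
    """Simulate parallel randomized greedy MIS on a graph given as
    {vertex: set_of_neighbors}; returns (independent_set, rounds)."""
    perm = list(adj)
    random.shuffle(perm)                       # uniformly random order
    order = {v: i for i, v in enumerate(perm)}
    alive, mis, rounds = set(adj), set(), 0
    while alive:
        rounds += 1
        # every vertex preceding all of its live neighbors joins the MIS
        winners = {v for v in alive
                   if all(order[v] < order[u] for u in adj[v] if u in alive)}
        mis |= winners
        alive -= winners | {u for v in winners for u in adj[v]}
    return mis, rounds
```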

Optimal Hashing-Based Time-Space Trade-offs for Approximate Near Neighbors

We show tight upper and lower bounds for time-space trade-offs for the $c$-Approximate Near Neighbor Search problem. For the $d$-dimensional Euclidean space and $n$-point datasets, we develop a data structure with space $n^{1+\rho_u+o(1)} + O(dn)$ and query time $n^{\rho_q+o(1)} + d n^{o(1)}$ for every $\rho_u, \rho_q \ge 0$ satisfying $c^2 \sqrt{\rho_q} + (c^2 - 1) \sqrt{\rho_u} = \sqrt{2c^2 - 1}$. For example, for the approximation $c = 2$ we achieve:

- space $n^{1.77\ldots}$ and query time $n^{o(1)}$, significantly improving upon known data structures that support very fast queries;
- space $n^{1.14\ldots}$ and query time $n^{0.14\ldots}$, matching the optimal data-dependent Locality-Sensitive Hashing (LSH);
- space $n^{1+o(1)}$ and query time $n^{0.43\ldots}$, making significant progress in the regime of near-linear space, which is arguably of the most interest for practice.

This is the first data structure that achieves sublinear query time and near-linear space for every approximation factor, improving upon [Kap15]. The data structure is a culmination of a long line of work on the problem for all space regimes; it builds on spherical Locality-Sensitive Filtering [BDGL16] and data-dependent hashing [AR15]. Our matching lower bounds are of two types: conditional and unconditional. We prove tightness of the whole trade-off in a restricted model of computation. We also show unconditional cell-probe lower bounds for one and two probes that match the trade-off for $\rho_q = 0$. This is the first space lower bound (for any static data structure) for two probes that is not polynomially smaller than the corresponding one-probe bound. To show the result for two probes, we establish and exploit a connection to locally decodable codes.

Online Vertex-Weighted Bipartite Matching: Beating 1-1/e with Random Arrivals

We introduce a weighted version of the ranking algorithm by Karp et al. (STOC 1990) and prove a competitive ratio of 0.6534 for the vertex-weighted online bipartite matching problem when online vertices arrive in random order. Our result shows that random arrivals help beat the $1-1/e$ barrier even in the vertex-weighted case. We build on the randomized primal-dual framework by Devanur et al. (SODA 2013) and design a two-dimensional gain-sharing function, which depends not only on the rank of the offline vertex but also on the arrival time of the online vertex. To our knowledge, this is the first competitive ratio strictly larger than $1-1/e$ for an online bipartite matching problem achieved under the randomized primal-dual framework. Our algorithm has a natural interpretation: offline vertices offer a larger portion of their weights to the online vertices as time goes by, and each online vertex matches the neighbor with the highest offer at its arrival.
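
For intuition, the classic vertex-weighted perturbed-greedy rule of Aggarwal et al. (SODA 2011), which this paper refines with a time-dependent offer function, can be sketched as follows; the paper's own two-dimensional gain-sharing function is not reproduced here.

```python
import math
import random

def perturbed_greedy(weights, arrivals):
    """Classic vertex-weighted online matching sketch: each offline u
    gets a random rank y_u and 'offers' w_u * (1 - e^{y_u - 1}); every
    online vertex takes the free neighbor with the highest offer.
    weights: {u: w_u}; arrivals: neighbor sets in arrival order."""
    y = {u: random.random() for u in weights}
    offer = lambda u: weights[u] * (1 - math.exp(y[u] - 1))
    matched, value = set(), 0.0
    for neighbors in arrivals:
        free = [u for u in neighbors if u not in matched]
        if free:
            u = max(free, key=offer)   # highest current offer wins
            matched.add(u)
            value += weights[u]
    return value
```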

P-FPT algorithms for bounded clique-width graphs

Recently, hardness results for problems in P were achieved using reasonable complexity-theoretic assumptions such as the Strong Exponential Time Hypothesis. Under these assumptions, many graph-theoretic problems do not admit truly subquadratic algorithms. A central technique for tackling the difficulty of the above-mentioned problems is fixed-parameter algorithms with polynomial dependence on the fixed parameter (P-FPT). This technique had not yet been applied to clique-width, an important graph parameter. In this paper, we study several graph-theoretic problems for which hardness results exist, such as cycle problems, distance problems, and maximum matching. We give hardness results and P-FPT algorithms, using clique-width and some of its upper bounds as parameters. We believe that our most important result is an $\mathcal{O}(k^4 \cdot n + m)$-time algorithm for computing a maximum matching, where $k$ is either the modular-width or the $P_4$-sparseness. The latter generalizes many algorithms that have been introduced so far for specific subclasses, such as cographs. Our algorithms are based on preprocessing methods using modular decomposition and split decomposition, and thus they can also be generalized to some graph classes of unbounded clique-width.

Derandomized concentration bounds for polynomials, and hypergraph maximal independent set

A parallel algorithm for maximal independent set (MIS) in hypergraphs has been a long-standing algorithmic challenge, dating back nearly 30 years to a survey of Karp & Ramachandran (1990). Despite its apparent simplicity, there have been no general sub-polynomial-time algorithms or hardness reductions. The best randomized parallel algorithm for hypergraphs of fixed rank $r$ was developed by Beame & Luby (1990) and Kelsen (1992), running in time roughly $(\log n)^{r!}$. The key probabilistic tool underlying this algorithm is a concentration bound for low-degree polynomials applied to independent input variables; this is a natural generalization of concentration bounds for sums of independent random variables, which are ubiquitous in combinatorics and computer science. These concentration bounds for polynomials do not lend themselves to standard derandomization techniques, so the algorithm of Kelsen cannot be derandomized in any known way, and there are no deterministic parallel algorithms for hypergraph MIS for any fixed rank $r > 3$. We improve the randomized algorithm of Kelsen to obtain a running time of $(\log n)^{2^r}$. We also give a method for derandomizing concentration bounds for polynomials, thus obtaining a deterministic algorithm running in $(\log n)^{2^{r+3}}$ time on $(mn)^{O(1)}$ processors. Our analysis also applies when $r$ is slowly growing; using this in conjunction with a strategy of Bercea et al. (2015) gives a deterministic MIS algorithm running in time $\exp(O(\frac{\log m}{\log \log m} + \log \log n))$.

A new algorithm for fast generalized DFTs

We give a new arithmetic algorithm to compute the generalized Discrete Fourier Transform (DFT) over finite groups $G$. The new algorithm uses $O(|G|^{\omega/2 + o(1)})$ operations to compute the generalized DFT over finite groups of Lie type, including the linear, orthogonal, and symplectic families and their variants, as well as all finite simple groups of Lie type. Here $\omega$ is the exponent of matrix multiplication, so the exponent $\omega/2$ is optimal if $\omega = 2$. Previously, "exponent one" algorithms were known for supersolvable groups and for the symmetric and alternating groups. No exponent-one algorithms were known, even under the assumption $\omega = 2$, for families of linear groups of fixed dimension, and indeed the previous best known algorithm for $\mathrm{SL}_2(\mathbb{F}_q)$ had exponent $4/3$ despite being the focus of significant effort. We unconditionally achieve exponent at most $1.19$ for this group, and exponent one if $\omega = 2$. Our algorithm also yields an improved exponent for computing the generalized DFT over general finite groups $G$, beating the longstanding previous best upper bound for any $\omega$. In particular, assuming $\omega = 2$, we achieve exponent $\sqrt{2}$, while the previous best was $3/2$.

Sparse Approximation via Generating Point Sets

For a set $P$ of $n$ points in the unit ball $\mathbf{b} \subseteq \mathbb{R}^d$, consider the problem of finding a small subset $T \subseteq P$ such that its convex hull $\varepsilon$-approximates the convex hull of the original set. We present an efficient algorithm to compute such an $\varepsilon_2$-approximation of size $k_{\mathrm{alg}}$, where $\varepsilon_2$ is a function of $\varepsilon$, and $k_{\mathrm{alg}}$ is a function of the minimum size $k_{\mathrm{opt}}$ of such an $\varepsilon$-approximation. Surprisingly, there is no dependency on the dimension $d$ in either bound. Furthermore, every point of $P$ can be $\varepsilon$-approximated by a convex combination of points of $T$ that is $O(1/\varepsilon_2)$-sparse. Our result can be viewed as a method for sparse, convex autoencoding: approximately representing the data in a compact way using sparse combinations of a small subset $T$ of the original data. The new algorithm can be kernelized, and it preserves sparsity in the original input.
