#### Approximation Guarantees for the Minimum Linear Arrangement Problem by Higher Eigenvalues

Suguru Tamaki (Kyoto University); Yuichi Yoshida (National Institute of Informatics, and Preferred Infrastructure, Inc.)

The SINR model attempts to predict whether a particular transmitter is heard at a specific location, in a setting consisting of n simultaneous transmitters and background noise. The SINR model gives rise to the SINR diagram, which partitions the space into n regions, one per transmitter, and the remaining space where no transmitter can be heard. Point location in the SINR diagram, i.e., determining which transmitter is heard at a query point (if any), has been investigated in several papers. The data structures proposed in these papers are planar, are constructed in time at least quadratic in n, and support logarithmic-time approximate queries. Moreover, the performance of some of them depends also on geometric parameters that cannot be bounded as a function of n or epsilon. In this paper, we address the question of batched point-location queries, i.e., answering many queries simultaneously. In one dimension, we can answer n queries exactly in amortized polylogarithmic time per query, while in the plane we can do it approximately. These results can handle arbitrary power assignments to the transmitters. Moreover, the amortized query time depends only on n and epsilon. We also show how to speed up the preprocessing in a previously proposed point-location structure for uniform-power sites, by almost a full order of magnitude. For this we obtain results on the sensitivity of the reception regions to slight changes in the reception threshold, which are of independent interest. Finally, these results demonstrate the power of combining algebraic tools with those of computational geometry and other fields.
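For concreteness, one standard way to formulate the reception rule behind the SINR diagram (the notation below — powers P_i, path-loss exponent alpha, noise N, threshold beta — is assumed here, not quoted from the paper) is:

```latex
% One common SINR reception rule (assumed notation): transmitter s_i with power P_i
% is heard at a point p exactly when
\mathrm{SINR}_i(p) \;=\;
  \frac{P_i\,\|p - s_i\|^{-\alpha}}
       {N + \sum_{j \neq i} P_j\,\|p - s_j\|^{-\alpha}}
  \;\ge\; \beta,
% where \alpha is the path-loss exponent, N the background noise, and \beta the
% reception threshold; the i-th region of the SINR diagram is the set of points
% satisfying this inequality, and the leftover region is where no transmitter is heard.
```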

This paper shows that the weighted matching problem on general graphs
can be solved in time $O(n(m + n\log n))$ for $n$ and $m$ the
number of vertices and edges, respectively. This was previously known
only for bipartite graphs. The crux is a data structure for blossom creation. It uses a dynamic nearest-common-ancestor algorithm
to simplify blossom steps, so they involve only back edges rather than arbitrary nontree edges.
The rest of the paper presents direct extensions of Edmonds'
blossom algorithm to weighted $b$-matching and $f$-factors. Again
the time bound is the one previously known for
bipartite graphs: for $b$-matching the time is $O(\min\{b(V), n\log n\}(m + n\log n))$ and for $f$-factors the time is $O(\min\{f(V), m\log n\}(m + n\log n))$,
where $b(V)$ and $f(V)$ denote the sum of all degree constraints.
Several immediate applications of the $f$-factor algorithm
are given: The generalized shortest path structure of \cite{GS13}, i.e., the analog of the shortest path tree for
conservative undirected graphs, is shown to be a version of
the blossom structure for $f$-factors. This structure is found in time $O(|N|(m+n\log n))$ for $N$ the set of negative edges ($0<|N|$).

We provide evidence that computing the maximum flow value between every pair of nodes in a directed graph on $n$ nodes, $m$ edges, and capacities in the range $[1..n]$, which we call the All-Pairs Max-Flow problem, cannot be solved significantly faster than $O(n^2 m)$ time. Since a single maximum $st$-flow in such graphs can be computed in time $\tilde{O}(m\sqrt{n})$ [Lee and Sidford, FOCS 2014], we conclude that the all-pairs version might require time equivalent to $\tilde\Omega(n^{3/2})$ computations of maximum $st$-flow, which strongly separates the directed case from the undirected one. Moreover, if maximum $st$-flow can be solved in time $\tilde{O}(m)$, then the time of $\tilde\Omega(n^2)$ such computations is needed. This is in contrast to a conjecture of Lacki, Nussbaum, Sankowski, and Wulff-Nilsen [FOCS 2012] that All-Pairs Max-Flow in general graphs can be solved faster than the time of $O(n^2)$ computations of maximum $st$-flow. Specifically, we show that in sparse graphs $G=(V,E,w)$, if one can compute All-Pairs Max-Flow in time $O((n^2 m)^{1-\varepsilon})$ for some constant $\varepsilon>0$, then MAX-CNF-SAT with $n'$ variables and $m'$ clauses can be solved in time ${m'}^{O(1)}2^{(1-\delta)n'}$ for a constant $\delta(\varepsilon)>0$, a problem for which not even $2^{n'}/\mathrm{poly}(n')$-time algorithms are known. Such a runtime for MAX-CNF-SAT would in particular refute the Strong Exponential Time Hypothesis (SETH). Hence, we improve the lower bound of Abboud, Vassilevska-Williams, and Yu [STOC 2015], who showed that for every fixed $\varepsilon>0$, if one can compute the maximum $st$-flow for all pairs $(s,t)\in S\times T$ with $|S|=|T|=O(\sqrt{n})$ in time $O(n^{3/2-\varepsilon})$, then some incomparable (and intuitively weaker) conjecture is false.
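To make the All-Pairs Max-Flow problem and the baseline of $O(n^2)$ single-pair computations concrete, here is a minimal sketch of the naive approach (a hypothetical toy instance using networkx; this is only the trivial baseline, not any algorithm or reduction from the paper):

```python
# Naive All-Pairs Max-Flow baseline: one single-pair max-flow computation per
# ordered pair of nodes (the O(n^2) computations discussed above).
import networkx as nx

def all_pairs_max_flow(G, capacity="capacity"):
    """Return a dict mapping (s, t) to the maximum s-t flow value in digraph G."""
    values = {}
    for s in G.nodes:
        for t in G.nodes:
            if s != t:
                values[(s, t)] = nx.maximum_flow_value(G, s, t, capacity=capacity)
    return values

# Toy instance with small integer capacities.
G = nx.DiGraph()
G.add_edge("a", "b", capacity=3)
G.add_edge("b", "c", capacity=2)
G.add_edge("a", "c", capacity=1)
print(all_pairs_max_flow(G))
```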

We show how to compute, for n-vertex planar graphs, the diameter and the sum of the pairwise distances in O(n^{11/6} polylog(n)) expected time. The algorithms work for directed graphs with real weights and no negative cycles. In O(n^{15/8} polylog(n)) expected time we can also compute the number of pairs of vertices at distance smaller than a given threshold. These are the first algorithms for these problems running in time O(n^c) for some constant c < 2, even when restricted to undirected, unweighted planar graphs.

Dynamic Time Warping (DTW) and Geometric Edit Distance (GED) are basic similarity measures between curves or general temporal sequences (e.g., time series) that are represented as sequences of points in some metric space $(X, dist)$. The DTW and GED measures are widely used in various fields of computer science and computational biology. Consequently, the tasks of computing these measures are among the core problems in P. Despite extensive efforts to find more efficient algorithms, the best-known algorithms for computing the DTW or GED between two sequences of points in $X = \mathbb{R}^d$ are long-standing dynamic programming algorithms that require quadratic runtime, even for the one-dimensional case $d = 1$, which is perhaps one of the most commonly used in practice. In this paper, we break the nearly 50-year-old quadratic time bound for computing DTW or GED between two sequences of $n$ points in $\mathbb{R}$, by presenting deterministic algorithms that run in $O\left( n^2 \log\log\log n / \log\log n \right)$ time. Our algorithms can be extended to work also for higher-dimensional spaces $\mathbb{R}^d$, for any constant $d$, when the underlying distance metric $dist$ is polyhedral (e.g., $L_1$, $L_\infty$).
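For reference, the long-standing quadratic-time dynamic program mentioned above looks as follows in the one-dimensional case (a textbook sketch of the baseline, not the paper's improved algorithm):

```python
# Classical O(n*m) dynamic program for DTW between two 1-D point sequences.
def dtw(a, b, dist=lambda x, y: abs(x - y)):
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # a point may be matched to a run of consecutive points in the other sequence
            D[i][j] = dist(a[i - 1], b[j - 1]) + min(D[i - 1][j - 1],  # advance both
                                                     D[i - 1][j],      # advance a only
                                                     D[i][j - 1])      # advance b only
    return D[n][m]

print(dtw([1.0, 2.0, 3.0, 4.0], [1.0, 3.0, 4.0]))  # 1.0
```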

A filtration over a simplicial complex K is an ordering of the simplices of K such that all prefixes in the ordering are subcomplexes of K. Filtrations are at the core of Persistent Homology, a major tool in Topological Data Analysis. To represent the filtration of a simplicial complex, the entire filtration can be appended to any data structure that explicitly stores all the simplices of the complex, such as the Hasse diagram or the recently introduced Simplex Tree [Algorithmica '14]. However, with the popularity of various computational methods that need to handle simplicial complexes, and with the rapidly increasing size of the complexes, the task of finding a compact data structure that can still support efficient queries is of great interest. In this paper, we propose a new data structure called the Critical Simplex Diagram (CSD), which is a variant of the Simplex Array List (SAL) [SoCG '15]. Our data structure allows us to store the filtration of a simplicial complex compactly, and supports the efficient implementation of a large range of basic operations. Moreover, we prove that our data structure is essentially optimal with respect to the requisite storage space. Finally, we show that the CSD representation admits fast construction algorithms for Flag complexes and relaxed Delaunay complexes.
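To make the definition concrete, the small sketch below (illustrative only; it is neither the CSD nor the SAL) checks that an ordering of simplices is a filtration, i.e., that every proper face of a simplex appears strictly earlier, so that every prefix is a subcomplex:

```python
# Check that an ordering of simplices (given as sets of vertices) is a filtration:
# every proper face of each simplex must occur strictly earlier in the ordering.
from itertools import combinations

def is_filtration(ordering):
    position = {frozenset(s): i for i, s in enumerate(ordering)}
    for i, simplex in enumerate(ordering):
        for k in range(1, len(simplex)):
            for face in combinations(sorted(simplex), k):
                if position.get(frozenset(face), len(ordering)) >= i:
                    return False
    return True

# Vertices, then edges, then the triangle: a valid filtration of a 2-simplex.
print(is_filtration([{0}, {1}, {2}, {0, 1}, {0, 2}, {1, 2}, {0, 1, 2}]))  # True
print(is_filtration([{0}, {1}, {0, 1}, {0, 2}]))  # False: the vertex {2} never appears
```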

Properties definable in first-order logic are algorithmically interesting for both theoretical and pragmatic reasons. Many of the most studied algorithmic problems, such as Hitting Set and Orthogonal Vectors, are first-order, and first-order properties naturally arise as relational database queries. A relatively straightforward algorithm for evaluating a property with k+1 quantifiers takes time $O(m^k)$, and, assuming the Strong Exponential Time Hypothesis (SETH), some such properties cannot be solved in time $O(m^{k-\epsilon})$ for any $\epsilon > 0$. (Here, m represents the size of the input structure, i.e. the number of tuples in all relations.) We give algorithms for every first-order property that improve this upper bound to $m^k/2^{\Theta(\sqrt{\log n})}$, i.e., an improvement by more than any poly-logarithmic factor, but less than the polynomial required to refute SETH. Moreover, we show that further improvement is equivalent to improving algorithms for sparse instances of the well-studied Orthogonal Vectors problem. Surprisingly, both results are obtained by showing completeness of the Sparse Orthogonal Vectors problem for the class of first-order properties under fine-grained reductions. To obtain improved algorithms, we apply the fast Orthogonal Vectors algorithm of [AWY15,CW16]. While fine-grained reductions (reductions that closely preserve the conjectured complexities of problems) have been used to relate the hardness of disparate specific problems both within P and beyond, this is the first such completeness result for a standard complexity class.
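For concreteness, the (Sparse) Orthogonal Vectors problem that plays the role of the complete problem asks, given two sets of Boolean vectors, whether some pair (one from each set) has inner product zero. The trivial quadratic check is sketched below (the baseline only, not the fast algorithm of [AWY15,CW16]):

```python
# Brute-force quadratic check for the Orthogonal Vectors problem.
def has_orthogonal_pair(A, B):
    """A, B: lists of 0/1 vectors of equal dimension; True iff some a in A and b in B
    satisfy <a, b> = 0, i.e. no coordinate is 1 in both."""
    return any(all(x * y == 0 for x, y in zip(a, b)) for a in A for b in B)

print(has_orthogonal_pair([(1, 0, 1), (0, 1, 1)], [(0, 1, 0), (1, 1, 0)]))  # True
```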

A simple digraph is semi-complete if for any two of its vertices u and v, at least one of the arcs (u,v) and (v,u) is present. We study the complexity of computing two layout parameters of semi-complete digraphs: cutwidth and Optimal Linear Arrangement (OLA). We prove that: (i) both parameters are NP-hard to compute, and the known exact and parameterized algorithms for them have essentially optimal running times, assuming the Exponential Time Hypothesis; (ii) the cutwidth parameter admits a quadratic Turing kernel, whereas it does not admit any polynomial kernel unless NP ⊆ coNP/poly; by contrast, OLA admits a linear kernel. These results essentially complete the complexity analysis of computing cutwidth and OLA on semi-complete digraphs. Our techniques can also be used to analyze the sizes of minimal obstructions for having small cutwidth under the induced subdigraph relation.

In the Interval Completion problem we are given an n-vertex graph G and an integer k, and the task is to transform G into an interval graph by adding at most k edges. This is a fundamental graph modification problem with applications in sparse matrix multiplication and molecular biology. The question of fixed-parameter tractability of Interval Completion was asked by Kaplan, Shamir and Tarjan [FOCS 1994; SIAM J. Comput. 1999] and was answered affirmatively more than a decade later by Villanger et al. [STOC 2007; SIAM J. Comput. 2009], who presented an algorithm with running time O(k^{2k} n^3 m). We give the first subexponential parameterized algorithm solving Interval Completion, in time k^{O(sqrt(k))} n^{O(1)}. This adds Interval Completion to a very small list of parameterized graph modification problems solvable in subexponential time.

We study a path-planning problem amid a set *O* of obstacles in *R*^{2}, in which we wish to compute a short path between two points while also maintaining a high clearance from *O*; the clearance of a point is its distance from a nearest obstacle in *O*. Specifically, the problem asks for a path minimizing the reciprocal of the clearance integrated over the length of the path. We present the first polynomial-time approximation scheme for this problem. Let *n* be the total number of obstacle vertices and let ε ∈ (0,1]. Our algorithm computes in time *O*((*n*^{2} / ε^{2}) log(*n* / ε)) a path of total cost at most (1+ε) times the cost of the optimal path.
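In symbols, the cost functional described above can be written as follows (the notation is assumed here, not quoted from the paper):

```latex
% Cost of a path \pi amid the obstacle set O, parameterized by arc length s,
% with clearance c(x) = \min_{o \in O} \mathrm{dist}(x, o):
\mathrm{cost}(\pi) \;=\; \int_{\pi} \frac{ds}{c(\pi(s))},
% and the algorithm returns a path of cost at most (1+\epsilon) times the minimum.
```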

The Swap-Insert Correction distance from a string S of length n to another string L of length m ≥ n over the alphabet [1..d] is the minimum number of insertions and swaps of pairs of adjacent symbols needed to convert S into L. Contrary to other correction distances, computing it is NP-hard in the size d of the alphabet. We describe an algorithm computing this distance in time within O(d^{2} n m g^{d-1}), where for each a ∈ [1..d] there are n_{a} occurrences of a in S and m_{a} occurrences of a in L, and where g = max_{a ∈ [1..d]} min{n_{a}, m_{a}-n_{a}} measures the difficulty of the instance. The difficulty g is bounded from above by various terms, such as the length n of the shorter string S and the maximum number of occurrences of a single character in S. These results illustrate how, in many cases, the correction distance between two strings can be easier to compute than in the worst-case scenario.
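The definition can be made concrete with a brute-force breadth-first search over edit sequences; this exponential-time sketch only illustrates the distance being computed and is unrelated to the paper's adaptive algorithm:

```python
# Brute-force BFS for the Swap-Insert Correction distance (tiny instances only).
from collections import deque

def swap_insert_distance(S, L):
    """Minimum number of single-symbol insertions and adjacent swaps turning S into L,
    or None if S cannot be converted into L."""
    alphabet = set(L)                      # inserting other symbols can never help
    start, target = tuple(S), tuple(L)
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        cur, d = queue.popleft()
        if cur == target:
            return d
        moves = []
        for i in range(len(cur) - 1):      # swap a pair of adjacent symbols
            moves.append(cur[:i] + (cur[i + 1], cur[i]) + cur[i + 2:])
        if len(cur) < len(target):         # insert a symbol at any position
            for i in range(len(cur) + 1):
                for a in alphabet:
                    moves.append(cur[:i] + (a,) + cur[i:])
        for s in moves:
            if s not in seen:
                seen.add(s)
                queue.append((s, d + 1))
    return None

print(swap_insert_distance("ba", "aba"))  # 1: a single insertion suffices
```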

How efficiently can we find an unknown graph using distance or shortest path queries between its vertices? We assume that the unknown graph G is connected, unweighted, and has bounded degree. In the reconstruction problem, the goal is to find the graph G. In the verification problem, we are given a hypothetical graph $\hat G$ and want to check whether G is equal to $\hat G$. We provide a randomized algorithm for reconstruction using $\tilde O(n^{3/2})$ distance queries, based on Voronoi cell decomposition. Next, we analyze natural greedy algorithms for reconstruction using a shortest path oracle and also for verification using either oracle, and show that their query complexity is $n^{1+o(1)}$. We further improve the query complexity when the graph is chordal or outerplanar. Finally, we show some lower bounds, and consider an approximate version of the reconstruction problem.
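As a point of reference for the query model: in an unweighted graph, two vertices are adjacent exactly when their distance is 1, so n(n-1)/2 distance queries always suffice for exact reconstruction; the point of the paper is to get far below this trivial bound. A minimal sketch of the trivial baseline (not the Voronoi-based algorithm):

```python
# Trivial distance-query reconstruction: query every pair and keep the pairs at distance 1.
from itertools import combinations

def reconstruct(vertices, dist_query):
    """dist_query(u, v) -> shortest-path distance between u and v in the hidden graph."""
    return {frozenset((u, v)) for u, v in combinations(vertices, 2)
            if dist_query(u, v) == 1}

# Example: the hidden graph is the path 0-1-2; the oracle answers from a lookup table.
hidden = {frozenset((0, 1)): 1, frozenset((1, 2)): 1, frozenset((0, 2)): 2}
print(reconstruct([0, 1, 2], lambda u, v: hidden[frozenset((u, v))]))
```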

Many randomized algorithms can be derandomized efficiently using either the method of conditional expectations or probability spaces with low (almost-) independence. A series of papers, beginning with Luby (1988) and continuing with Berger & Rompel (1991) and Chari et al. (1994), showed that these techniques can be combined to give deterministic parallel algorithms for combinatorial optimization problems involving sums of $w$-juntas. We improve these algorithms through derandomized variable partitioning and a new code construction for fooling Fourier characters over $GF(2)$. This makes the processor complexity essentially independent of $w$, while the running time is reduced from exponential in $w$ to linear in $w$. As a key subroutine, we give a new algorithm to generate a probability space which can fool a given set of neighborhoods, each of size at most $w$. Schulman (1992) gave an NC algorithm to do so for $w \leq O(\log n)$. Our new algorithm is in NC1, with essentially optimal time and processor complexity, when $w = O(\log n)$; it remains in NC up to $w = \text{polylog}(n)$. This answers an open problem of Schulman. One major application of these algorithms is an NC algorithm for the Lovász Local Lemma. Previous NC algorithms, including the seminal algorithm of Moser & Tardos (2010) and the work of Chandrasekaran et al. (2013), required that (essentially) the bad events span only $O(\log n)$ variables; we relax this to $\text{polylog}(n)$ variables. We use this to give algorithms for defective vertex coloring and domatic graph partition in graphs of maximum degree $\text{polylog}(n)$.

The Weighted Tree Augmentation Problem (WTAP) is a fundamental well-studied problem in the field of network design. Given an undirected tree $G=(V,E)$, an additional set of edges $L \subseteq V\times V$ disjoint from $E$ called \textit{links} and a cost vector $c\in \mathbb{R}_{\geq 0}^L$, WTAP asks to find a minimum-cost set $F\subseteq L$ with the property that $(V,E\cup F)$ is $2$-edge connected. The special case where $c_\ell = 1$ for all $\ell\in L$ is called the Tree Augmentation Problem (TAP). For the class of bounded cost vectors, we present the first improved approximation algorithm for WTAP in more than three decades. Concretely, for any $M\in \mathbb{R}_{\geq 1}$ and $\epsilon > 0$, we present an LP-based $(\delta+\epsilon)$-approximation for WTAP restricted to cost vectors $c$ in $[1,M]^L$ for $\delta \approx 1.96417$. More generally, our result is a $(\delta+\epsilon)$-approximation algorithm with running time $n^{r^{O(1)}}$, where $r = c_{\max}/c_{\min}$ is the ratio between the largest and the smallest cost of any link. For the special case of TAP we improve this factor to $\frac{5}{3}+\epsilon$. Our results rely on several new ideas, including a new LP relaxation of WTAP and a two-phase rounding algorithm.
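To pin down the objective, the sketch below solves a toy WTAP instance by exhaustive search over link subsets, using the fact that a connected graph on at least two vertices is 2-edge-connected exactly when it has no bridges (illustrative only, with exponential running time; the paper's algorithm is an LP-based approximation, not this search):

```python
# Exhaustive search for a minimum-cost WTAP solution on a toy instance.
from itertools import combinations
import networkx as nx

def wtap_brute_force(tree_edges, links, cost):
    best, best_cost = None, float("inf")
    for r in range(len(links) + 1):
        for F in combinations(links, r):
            G = nx.Graph(tree_edges)
            G.add_edges_from(F)
            # 2-edge-connected = connected with no bridge edges
            if nx.is_connected(G) and not any(nx.bridges(G)):
                c = sum(cost[l] for l in F)
                if c < best_cost:
                    best, best_cost = F, c
    return best, best_cost

tree = [(0, 1), (1, 2), (2, 3)]                 # the input tree (a path)
links = [(0, 2), (1, 3), (0, 3)]                # the candidate links L
cost = {(0, 2): 1.0, (1, 3): 1.0, (0, 3): 1.5}
print(wtap_brute_force(tree, links, cost))      # ((0, 3),) with cost 1.5
```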

We introduce the dependent doors problem as an abstraction for situations in which one must perform a sequence of dependent decisions, without receiving feedback information on the effectiveness of previously made actions. Informally, the problem considers a set of d doors that are initially closed. To open a door, the algorithm knocks on it and it might open or not according to some probability distribution. This distribution may depend on which other doors are currently open, as well as on which other doors were open during each of the previous knocks on that door. The algorithm aims to minimize the expected time until all doors open, even though it never learns which doors have already opened. Here, we focus on scenarios where dependencies are positively correlated and acyclic. The fundamental distribution of a door describes the probability it opens in the best of conditions. We show that if in two configurations corresponding doors share the same fundamental distribution, then these configurations have the same optimal running time up to a universal constant, regardless of the dependencies between doors and of the actual distributions. We also identify algorithms that are optimal up to a universal constant factor. We then turn our attention to precise bounds. Even for the case of two doors, identifying the optimal sequence is an intriguing combinatorial question. Here, we study the case of two cascading memoryless doors and solve it almost completely.

A spanner H of a weighted undirected graph G is a "sparse" subgraph that approximately preserves distances between every pair of vertices in G. We refer to H as a \delta-spanner of G for some parameter \delta \geq 1 if the distance in H between every vertex pair is at most a factor \delta bigger than in G. In this case, we say that H has stretch \delta. Two main measures of the sparseness of a spanner are the size (number of edges) and the total weight (the sum of weights of the edges in the spanner). Recently, Elkin, Neiman and Solomon [ICALP 14] presented an improved analysis of the greedy algorithm, proving that the greedy algorithm admits (2k-1) \cdot (1+\epsilon) stretch and total edge weight of O_{\epsilon}((k/\log{k}) \cdot \omega(MST(G)) \cdot n^{1/k}), where \omega(MST(G)) is the weight of a minimum spanning tree of G. The previous analysis by Chandra et al. [SOCG 92] admitted (2k-1) \cdot (1+\epsilon) stretch and total edge weight of O_{\epsilon}(k \omega(MST(G)) n^{1/k}). Hence, Elkin et al. improved the weight of the spanner by a \log{k} factor. In this work, we completely remove the k factor from the weight, presenting a spanner with (2k-1) \cdot (1+\epsilon) stretch, O_{\epsilon}(\omega(MST(G)) n^{1/k}) total weight, and O(n^{1+1/k}) edges. Up to a (1+\epsilon) factor in the stretch, this matches the girth conjecture of Erdős [Er64].
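For reference, the greedy algorithm whose analysis is discussed above is the classical procedure: scan the edges by non-decreasing weight and keep an edge only if the spanner built so far does not already connect its endpoints within stretch t. A short sketch (the paper's contribution is the analysis, not the procedure; the instance and stretch value below are arbitrary):

```python
# Textbook greedy t-spanner construction.
import heapq

def bounded_dijkstra(adj, s, goal, cutoff):
    """Distance from s to goal in adjacency dict adj, or +inf if it exceeds cutoff."""
    dist = {s: 0.0}
    heap = [(0.0, s)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")) or d > cutoff:
            continue
        if u == goal:
            return d
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")

def greedy_spanner(vertices, weighted_edges, t):
    """weighted_edges: iterable of (w, u, v); returns the edge list of a t-spanner."""
    adj = {v: [] for v in vertices}
    spanner = []
    for w, u, v in sorted(weighted_edges):
        if bounded_dijkstra(adj, u, v, t * w) > t * w:   # no short path yet: keep edge
            spanner.append((u, v, w))
            adj[u].append((v, w))
            adj[v].append((u, w))
    return spanner

# Toy instance: a 4-cycle plus a heavy chord; with stretch t = 2 the chord is dropped.
vertices = [0, 1, 2, 3]
edges = [(1.0, 0, 1), (1.0, 1, 2), (1.0, 2, 3), (1.0, 0, 3), (2.5, 0, 2)]
print(greedy_spanner(vertices, edges, t=2))
```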

We consider the problem of estimating the size of a maximum matching when the edges are revealed in a streaming fashion. When the input graph is planar, we present a simple and elegant streaming algorithm that with high probability estimates the size of a maximum matching within a constant factor using O(n^{2/3}) space, where n is the number of vertices. The approach generalizes to the family of graphs that have bounded arboricity, which includes graphs with an excluded constant-size minor. To the best of our knowledge, this is the first result for estimating the size of a maximum matching in the adversarial-order streaming model in o(n) space. We circumvent the barriers inherent in the adversarial-order model by exploiting several structural properties of planar graphs, and more generally, graphs with bounded arboricity. We further reduce the required memory size to O(\sqrt{n}) in restricted settings: (i) when the input graph is a forest, and (ii) when we have 2 passes and the input graph has bounded arboricity. Finally, we design a reduction from the Boolean Hidden Matching Problem to show that there is no randomized streaming algorithm that estimates the size of the maximum matching to within a factor better than 3/2 and uses only o(n^{1/2}) bits of space. Using the same reduction, we show that there is no deterministic algorithm that computes this kind of estimate in $o(n)$ bits of space. The lower bounds hold even for graphs that are collections of paths of constant length.

The main result of this paper is a generalization of the classical blossom algorithm for finding perfect matchings. Our algorithm can efficiently solve Boolean CSPs where each variable appears in exactly two constraints (we call this edge CSP) and all constraints are even Δ-matroid relations (represented by lists of tuples). As a consequence, we settle the complexity classification of planar Boolean CSPs started by Dvořák and Kupec. Knowing that edge CSP is tractable for even Δ-matroid constraints allows us to extend the tractability result to a larger class of Δ-matroids that includes many classes that were known to be tractable before, namely co-independent, compact, local and binary.