Given a simple polygon $\cal P$, the goal in the Art Gallery problem is to find the minimum number of guards needed to cover all of $\cal P$, where a guard is a point $p$ that can see another point $q$ when the segment $\overline{pq}$ does not cross the edges of $\cal P$. This paper studies a variant of the Art Gallery problem in which guards are restricted to lie on a dense grid inside $\cal P$. In the general problem, guards can be anywhere inside or on the boundary of $\cal P$; this general version is called the \emph{point} guarding problem. The point guarding problem is known to be APX-hard, meaning that we cannot do better than a constant-factor approximation algorithm unless $P = NP$. A large body of research is devoted to the combinatorial and algorithmic aspects of this problem, yet no constant-factor approximation algorithm for simple polygons is known. The best-known approximation factor for point guarding a simple polygon is $\mathcal{O}(\log |OPT|)$, introduced by E. Bonnet and T. Miltzow in 2020, where $|OPT|$ is the size of an optimal solution. Here, we propose an algorithm with a constant approximation factor for the point guarding problem where the location of guards is restricted to a grid. The running time of the proposed algorithm depends on the number of cells of the grid: the approximation factor is constant regardless of the grid used, but the running time can become super-polynomial if the number of grid cells is exponential.
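For reference on the visibility model used above, here is a minimal sketch (not taken from the paper) of the predicate "guard $p$ sees point $q$": the segment $\overline{pq}$ is tested against every polygon edge for a proper crossing. The polygon representation and function names are hypothetical, and degenerate configurations (the segment grazing a vertex or lying along an edge) are ignored.

def orient(a, b, c):
    """Sign of the cross product (b - a) x (c - a)."""
    v = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    return (v > 0) - (v < 0)

def segments_cross(p, q, a, b):
    """True if segments pq and ab cross (general position assumed)."""
    return (orient(p, q, a) != orient(p, q, b)
            and orient(a, b, p) != orient(a, b, q))

def sees(guard, point, polygon):
    """Guard sees point iff the segment between them crosses no polygon edge.
    polygon: list of (x, y) vertices of the simple polygon, in order."""
    n = len(polygon)
    for i in range(n):
        a, b = polygon[i], polygon[(i + 1) % n]
        if segments_cross(guard, point, a, b):
            return False
    return True

A grid-restricted algorithm evaluates such a predicate only with the guard ranging over grid points, which is what makes the candidate guard set finite.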
In the F-minor-free deletion problem we want to find a minimum vertex set in a given graph that intersects all minor models of graphs from the family F. The Vertex planarization problem is a special case of F-minor-free deletion for the family F = {K_5, K_{3,3}}. Whenever the family F contains at least one planar graph, F-minor-free deletion is known to admit a constant-factor approximation algorithm and a polynomial kernelization [Fomin, Lokshtanov, Misra, and Saurabh, FOCS'12]. The Vertex planarization problem is arguably the simplest setting for which F does not contain a planar graph, and the existence of a constant-factor approximation or a polynomial kernelization remains a major open problem. In this work we show that Vertex planarization admits an algorithm that combines both approaches. Namely, we present a polynomial A-approximate kernelization, for some constant A > 1, based on the framework of lossy kernelization [Lokshtanov, Panolan, Ramanujan, and Saurabh, STOC'17]. Roughly speaking, when given a graph G and integer k, we show how to compute a graph G' on poly(k) vertices so that any B-approximate solution to G' can be lifted to an (A*B)-approximate solution to G, as long as A*B*OPT(G) <= k. In order to achieve this, we develop a framework for sparsification of planar graphs which approximately preserves all separators and near-separators between subsets of the given terminal set. Our result yields an improvement over the state-of-the-art approximation algorithms for Vertex planarization. The problem admits a polynomial-time O(n^eps)-approximation algorithm, for any eps > 0, and a quasi-polynomial-time (log n)^O(1)-approximation algorithm, both randomized [Kawarabayashi and Sidiropoulos, FOCS'17]. By pipelining these algorithms with our approximate kernelization, we improve the approximation factors to O(OPT^eps) and (log OPT)^O(1), respectively.
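For concreteness, the guarantee stated above can be written schematically as follows (a simplified rendering, not the full definition of lossy kernelization from Lokshtanov et al., which is phrased via a solution-lifting algorithm and capped objective values; the map $\mathrm{lift}$ below is a hypothetical name for that lifting):
\[
|V(G')| \le \mathrm{poly}(k), \qquad
\mathrm{cost}_{G'}(S') \le B \cdot \mathrm{OPT}(G') \;\;\Longrightarrow\;\; \mathrm{cost}_{G}\bigl(\mathrm{lift}(S')\bigr) \le A\,B \cdot \mathrm{OPT}(G),
\quad \text{provided } A\,B\,\mathrm{OPT}(G) \le k.
\]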
Compact Approximate Taylor (CAT) methods for systems of conservation laws were introduced by Carrillo and Pares in 2019. These methods, based on a strategy that allows one to extend high-order Lax-Wendroff methods to nonlinear systems without using the Cauchy-Kovalevskaya procedure, have arbitrary even order of accuracy 2p and use (2p + 1)-point stencils, where p is an arbitrary positive integer. More recently, in 2021, Carrillo, Macca, Pares, Russo and Zorio introduced a strategy to get rid of the spurious oscillations close to discontinuities produced by CAT methods. This strategy led to the so-called Adaptive CAT (ACAT) methods, in which the order of accuracy, and thus the width of the stencils, is adapted to the local smoothness of the solution. The goal of this paper is to extend CAT and ACAT methods to systems of balance laws. To do this, the source term is written as the derivative of its indefinite integral, which is formally treated as a flux function. The well-balanced property of the methods is discussed, and a variant that, in principle, allows any stationary solution to be preserved is presented. The resulting methods are then applied to a number of systems, ranging from a linear scalar conservation law to the 2D Euler equations with gravity, and including the Burgers equation with a source term and the 1D shallow water equations; the order and well-balanced properties are checked in several numerical tests.
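To make the reformulation above concrete, in the one-dimensional scalar case (a schematic version only; the paper treats general systems) the balance law is rewritten as
\[
u_t + f(u)_x = s(u,x), \qquad
B(x,t) := \int_{x_0}^{x} s\bigl(u(\xi,t),\xi\bigr)\,d\xi
\;\;\Longrightarrow\;\;
u_t + \bigl(f(u) - B\bigr)_x = 0,
\]
so that $-B$ plays the role of an additional flux and can be discretized with the same approximate Taylor machinery as $f$.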
We consider the watchman route problem for a $k$-transmitter watchman: standing at point $p$ in a polygon $P$, the watchman can see $q\in P$ if $\overline{pq}$ intersects $P$'s boundary at most $k$ times -- $q$ is $k$-visible to $p$. Traveling along the $k$-transmitter watchman route, either all points in $P$ or a discrete set of points $S\subset P$ must be $k$-visible to the watchman. We aim to minimize the length of the $k$-transmitter watchman route. We show that even in simple polygons the shortest $k$-transmitter watchman route problem for a discrete set of points $S\subset P$ is NP-complete and cannot be approximated to within a logarithmic factor (unless P=NP), both with and without a given starting point. Moreover, we present a polylogarithmic approximation for the $k$-transmitter watchman route problem with a given starting point and $S\subset P$, with approximation ratio $O(\log^2(|S|\cdot n) \log\log (|S|\cdot n) \log(|S|+1))$, where $n$ is the number of vertices of $P$.
Quantum Annealing (QA) is a computational framework where a quantum system's continuous evolution is used to find the global minimum of an objective function over an unstructured search space. It can be seen as a general metaheuristic for optimization problems, including NP-hard ones if we allow an exponentially large running time. While QA is widely studied from a heuristic point of view, little is known about theoretical guarantees on the quality of the solutions obtained in polynomial time. In this paper we use a technique borrowed from theoretical physics, the Lieb-Robinson (LR) bound, and develop new tools to prove that short, constant-time quantum annealing guarantees constant-factor approximation ratios for some optimization problems when restricted to bounded-degree graphs. Informally, on bounded-degree graphs the LR bound allows us to retrieve a (relaxed) locality argument, through which the approximation ratio can be deduced by studying subgraphs of bounded radius. We illustrate our tools on the MaxCut and Maximum Independent Set problems for cubic graphs, providing explicit approximation ratios and the runtimes needed to obtain them. Our results are of a similar flavor to the well-known ones obtained in the different but related QAOA (Quantum Approximate Optimization Algorithm) framework. Finally, we discuss theoretical and experimental arguments for further improvements.
We consider a singularly perturbed time-dependent problem with a shift term in space. On appropriately defined layer-adapted meshes of Dur\'an- and S-type we derive a priori error estimates for the stationary problem. Using a discontinuous Galerkin method in time, we obtain error estimates for the full discretisation. The introduction of weighted scalar products and norms allows us to estimate the time-dependent problem in the energy and balanced norms; proving such a result had so far been open. Some numerical results are given to confirm the predicted theory and to show the effect of the shifts on the solution.
Stochastic gradient methods have enabled variational inference for high-dimensional models and large data sets. However, the steepest ascent direction in the parameter space of a statistical model is given not by the commonly used Euclidean gradient, but by the natural gradient, which premultiplies the Euclidean gradient by the inverse of the Fisher information matrix. Use of natural gradients can improve convergence significantly, but inverting the Fisher information matrix is daunting in high dimensions. In Gaussian variational approximation, natural gradient updates of the natural parameters (expressed in terms of the mean and precision matrix) of the Gaussian distribution can be derived analytically, but do not ensure that the precision matrix remains positive definite. To tackle this issue, we consider Cholesky decomposition of the covariance or precision matrix and derive explicit natural gradient updates of the Cholesky factor, which depend only on the first instead of the second derivative of the log posterior density, by finding the inverse of the Fisher information matrix analytically. Efficient natural gradient updates of the Cholesky factor are also derived under sparsity constraints incorporating different posterior independence structures.
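Schematically, for variational parameters $\lambda$ of the Gaussian approximation $q_\lambda$ and evidence lower bound $\mathcal{L}(\lambda)$, the natural-gradient step referred to above is
\[
\lambda^{(t+1)} = \lambda^{(t)} + \rho_t\, F\bigl(\lambda^{(t)}\bigr)^{-1} \nabla_\lambda \mathcal{L}\bigl(\lambda^{(t)}\bigr),
\qquad
F(\lambda) = \mathrm{E}_{q_\lambda}\!\bigl[\nabla_\lambda \log q_\lambda(\theta)\,\nabla_\lambda \log q_\lambda(\theta)^\top\bigr],
\]
where $\rho_t$ is a step size. The point of the analytic derivations described above is to obtain $F(\lambda)^{-1}\nabla_\lambda \mathcal{L}$ in closed form when $\lambda$ collects the mean and the Cholesky factor, so that the Fisher information matrix never has to be inverted numerically.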
In a realistic wireless environment, the multi-antenna channel usually exhibits spatially correlated fading. This effect is more pronounced when a large number of antennas is densely deployed, a setup known as holographic massive MIMO (multiple-input multiple-output). In the first part of this letter, we develop a channel model for holographic massive MIMO by considering both non-isotropic scattering and directive antennas. With a large number of antennas, it is difficult to obtain full knowledge of the spatial correlation matrix. In this case, channel estimation is conventionally done using the least-squares (LS) estimator, which requires no prior information about the channel statistics or array geometry. In the second part of this letter, we propose a novel channel estimation scheme that exploits the array geometry to identify a subspace of reduced rank that covers the eigenspace of any spatial correlation matrix. The proposed estimator outperforms the LS estimator, without using any user-specific channel statistics.
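As a schematic illustration of the reduced-rank idea (not the letter's exact estimator), suppose $\mathbf{U} \in \mathbb{C}^{M\times r}$ has orthonormal columns spanning a subspace that, from the array geometry alone, is known to contain the eigenspace of every admissible spatial correlation matrix. Given a pilot observation $\mathbf{y} = \sqrt{p}\,\mathbf{h} + \mathbf{n}$ over $M$ antennas, one can project the LS estimate onto this subspace:
\[
\hat{\mathbf{h}}_{\mathrm{LS}} = \tfrac{1}{\sqrt{p}}\,\mathbf{y},
\qquad
\hat{\mathbf{h}} = \mathbf{U}\mathbf{U}^{\mathrm{H}}\hat{\mathbf{h}}_{\mathrm{LS}},
\]
which discards the noise lying in the orthogonal complement of the subspace while leaving the channel component untouched, and requires no user-specific statistics.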
This note describes the full approximation storage (FAS) multigrid scheme for an elementary one-dimensional nonlinear boundary value problem. The problem is discretized by a simple finite element (FE) scheme. We apply both FAS V-cycles and F-cycles, with a nonlinear Gauss-Seidel smoother, to solve the resulting finite-dimensional problem. The mathematics of the FAS restriction and prolongation operators, in the FE case, is explained. A self-contained Python program implements the scheme. Optimal performance, i.e., work proportional to the number of unknowns, is demonstrated for both kinds of cycles, including convergence nearly to discretization error in a single F-cycle.
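Independently of the note's own Python program, the following is a minimal sketch of the FAS V-cycle structure it describes, using a hypothetical finite-difference (rather than FE) discretization of the model problem $-u'' + u^3 = g$ on $(0,1)$ with homogeneous Dirichlet conditions; the problem, grid sizes, and function names are illustrative assumptions, not the note's code.

import numpy as np

def apply_F(u, h):
    """Nonlinear operator F(u) = -u'' + u^3 at interior nodes (2nd-order FD)."""
    Fu = np.zeros_like(u)
    Fu[1:-1] = -(u[2:] - 2.0*u[1:-1] + u[:-2]) / h**2 + u[1:-1]**3
    return Fu

def ngs_sweep(u, g, h, sweeps=2):
    """Nonlinear Gauss-Seidel smoother: one scalar Newton step per interior node."""
    for _ in range(sweeps):
        for i in range(1, len(u) - 1):
            Fi = -(u[i+1] - 2.0*u[i] + u[i-1]) / h**2 + u[i]**3 - g[i]
            u[i] -= Fi / (2.0 / h**2 + 3.0*u[i]**2)   # dFi/du_i
    return u

def restrict(v):
    """Full weighting onto the next coarser grid (grids have 2^j + 1 nodes)."""
    return np.concatenate(([v[0]],
                           0.25*v[1:-2:2] + 0.5*v[2:-1:2] + 0.25*v[3::2],
                           [v[-1]]))

def prolong(v):
    """Linear interpolation onto the next finer grid."""
    w = np.zeros(2*len(v) - 1)
    w[::2] = v
    w[1::2] = 0.5*(v[:-1] + v[1:])
    return w

def fas_vcycle(u, g, h, sweeps=2):
    """One FAS V-cycle: smooth, form the coarse FAS problem, recurse, correct, smooth."""
    if len(u) <= 3:                                        # coarsest grid: smooth to convergence
        return ngs_sweep(u, g, h, sweeps=30)
    u = ngs_sweep(u, g, h, sweeps)                         # pre-smoothing
    uc = restrict(u)                                       # restricted iterate
    gc = apply_F(uc, 2.0*h) + restrict(g - apply_F(u, h))  # FAS coarse right-hand side
    vc = fas_vcycle(uc.copy(), gc, 2.0*h, sweeps)          # approximately solve coarse problem
    u = u + prolong(vc - uc)                               # coarse-grid correction
    return ngs_sweep(u, g, h, sweeps)                      # post-smoothing

# manufactured-solution test: u_exact = sin(pi x), so g = pi^2 sin(pi x) + sin^3(pi x)
m = 129
x = np.linspace(0.0, 1.0, m)
h = x[1] - x[0]
uex = np.sin(np.pi*x)
g = np.pi**2 * np.sin(np.pi*x) + uex**3
u = np.zeros(m)
for cycle in range(8):
    u = fas_vcycle(u, g, h)
    print(cycle, np.max(np.abs(u - uex)))                  # stalls near the O(h^2) discretization error

An F-cycle, as discussed in the note, would nest such V-cycles: solve on the coarsest grid first, then prolong the result upward as the initial iterate on each successively finer level.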
This paper studies Quasi Maximum Likelihood estimation of Dynamic Factor Models for large panels of time series. Specifically, we consider the case in which the autocorrelation of the factors is explicitly accounted for, and therefore the model has a state-space form. Estimation of the factors and their loadings is implemented through the Expectation Maximization (EM) algorithm, jointly with the Kalman smoother. We prove that as both the dimension of the panel $n$ and the sample size $T$ diverge to infinity, up to logarithmic terms: (i) the estimated loadings are $\sqrt T$-consistent and asymptotically normal if $\sqrt T/n\to 0$; (ii) the estimated factors are $\sqrt n$-consistent and asymptotically normal if $\sqrt n/T\to 0$; (iii) the estimated common component is $\min(\sqrt n,\sqrt T)$-consistent and asymptotically normal regardless of the relative rate of divergence of $n$ and $T$. Although the model is estimated as if the idiosyncratic terms were cross-sectionally and serially uncorrelated and normally distributed, we show that these misspecifications do not affect consistency. Moreover, the estimated loadings are asymptotically as efficient as those obtained with the Principal Components estimator, while the estimated factors are more efficient if the idiosyncratic covariance is sparse enough. We then propose robust estimators of the asymptotic covariances, which can be used to conduct inference on the loadings and to compute confidence intervals for the factors and common components. Finally, we study the performance of our estimators and compare them with the traditional Principal Components approach through Monte Carlo simulations and analysis of US macroeconomic data.
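For orientation, a schematic version of the state-space form referred to above is (the paper's specification, e.g. the lag order of the factor dynamics, may be more general)
\[
x_t = \Lambda F_t + \xi_t, \qquad F_t = A F_{t-1} + u_t, \qquad t = 1,\dots,T,
\]
where $x_t$ is the $n$-dimensional vector of observed series, $F_t$ the latent factors with loadings $\Lambda$, and $\xi_t$ the idiosyncratic components. The EM algorithm then alternates Kalman smoothing of $F_t$ given the current parameters with updating of the parameters given the smoothed factors.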
We study the problem of learning in the stochastic shortest path (SSP) setting, where an agent seeks to minimize the expected cost accumulated before reaching a goal state. We design a novel model-based algorithm, EB-SSP, that carefully skews the empirical transitions and perturbs the empirical costs with an exploration bonus to guarantee both optimism and convergence of the associated value iteration scheme. We prove that EB-SSP achieves the minimax regret rate $\widetilde{O}(B_{\star} \sqrt{S A K})$, where $K$ is the number of episodes, $S$ is the number of states, $A$ is the number of actions and $B_{\star}$ bounds the expected cumulative cost of the optimal policy from any state, thus closing the gap with the lower bound. Interestingly, EB-SSP obtains this result while being parameter-free, i.e., it does not require any prior knowledge of $B_{\star}$, nor of $T_{\star}$, which bounds the expected time-to-goal of the optimal policy from any state. Furthermore, we illustrate various cases (e.g., positive costs, or general costs when an order-accurate estimate of $T_{\star}$ is available) where the regret contains only a logarithmic dependence on $T_{\star}$, thus yielding the first horizon-free regret bound beyond the finite-horizon MDP setting.
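For reference, the regret quantity being bounded is, schematically (the paper's formal definition may differ in details),
\[
R_K \;=\; \sum_{k=1}^{K} \sum_{h=1}^{I^k} c\bigl(s_h^k, a_h^k\bigr) \;-\; K\, V^{\star}(s_{\mathrm{init}}),
\]
i.e., the total cost accumulated over the $K$ episodes (with $I^k$ the number of steps needed to reach the goal in episode $k$) minus $K$ times the optimal expected cost-to-goal $V^{\star}(s_{\mathrm{init}}) \le B_{\star}$ from the initial state.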