Let $f$ be analytic on $[0,1]$ with $|f^{(k)}(1/2)|\leq A\alpha^kk!$ for some constant $A$ and some $\alpha<2$. We show that the median estimate of $\mu=\int_0^1f(x)\,\mathrm{d}x$ under random linear scrambling with $n=2^m$ points converges at the rate $O(n^{-c\log(n)})$ for any $c< 3\log(2)/\pi^2\approx 0.21$. We also get a super-polynomial convergence rate for the sample median of $2k-1$ random linearly scrambled estimates, when $k=\Omega(m)$. When $f$ has a $p$'th derivative that satisfies a $\lambda$-H\"older condition, the median-of-means has error $O( n^{-(p+\lambda)+\epsilon})$ for any $\epsilon>0$, if $k\to\infty$ as $m\to\infty$.
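For illustration, here is a minimal numerical sketch of the median-of-replicates estimator; SciPy's scrambled Sobol' points (a linear matrix scramble plus a digital shift) stand in for the random linear scrambling analyzed above, and the integrand and parameters are our own choices.

```python
# Minimal sketch: median of 2k-1 independent scrambled-net estimates of
# mu = int_0^1 f(x) dx. SciPy's scrambled Sobol' points stand in for
# random linear scrambling; f, m, and k are illustrative choices.
import numpy as np
from scipy.stats import qmc

f = lambda x: np.exp(x)                 # analytic on [0,1]; mu = e - 1
m, k = 10, 11                           # n = 2^m points, 2k - 1 replicates

estimates = []
for _ in range(2 * k - 1):
    x = qmc.Sobol(d=1, scramble=True).random_base2(m=m).ravel()
    estimates.append(f(x).mean())       # one randomized QMC estimate

median_est = np.median(estimates)       # the median estimate of mu
print(abs(median_est - (np.e - 1)))     # error against the exact value
```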
This paper proposes a method to judge whether a point is inside or outside a simple convex polygon using intersections with a vertical line. The point is first localized to an area enclosed by two straight lines, which converts the problem of determining whether a point is inside or outside a convex polygon into the problem of determining whether it is inside or outside a quadrilateral; the ray method is then used for that test. The complexity of this algorithm ranges from O(1) to O(n). As the experimental results show, the algorithm requires fewer intersection computations and greatly improves the efficiency of the judgment.
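For reference, the sketch below shows the generic ray method (crossing-number test) that the final stage applies to the reduced quadrilateral; it is the standard test, not the paper's full vertical-line scheme.

```python
# Illustrative sketch of the ray method: count crossings of a horizontal
# ray from the query point with the polygon edges; odd count = inside.
def point_in_polygon(pt, poly):
    """Return True if pt lies inside the polygon given as a vertex list."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Edge crosses the ray's horizontal line iff the endpoints
        # straddle y; then test whether the crossing lies to the right.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

quad = [(0, 0), (4, 0), (5, 3), (1, 4)]
print(point_in_polygon((2, 2), quad))   # True
print(point_in_polygon((6, 2), quad))   # False
```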
We propose a dimension reduction technique for Bayesian inverse problems with nonlinear forward operators, non-Gaussian priors, and non-Gaussian observation noise. The likelihood function is approximated by a ridge function, i.e., a map which depends non-trivially only on a few linear combinations of the parameters. We build this ridge approximation by minimizing an upper bound on the Kullback--Leibler divergence between the posterior distribution and its approximation. This bound, obtained via logarithmic Sobolev inequalities, allows one to certify the error of the posterior approximation. Computing the bound requires the second moment matrix of the gradient of the log-likelihood function, which in practice must be approximated from samples. We provide an analysis that enables control of the posterior approximation error due to this sampling. Numerical and theoretical comparisons with existing methods illustrate the benefits of the proposed methodology.
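A sketch of the sample-based computation is below: estimate the second moment matrix of the log-likelihood gradient from prior samples and take its leading eigenvectors as the ridge directions. The toy model, dimensions, and names here are our own illustrative stand-ins, not the paper's test cases.

```python
# Estimate H = E[grad log L (grad log L)^T] by Monte Carlo over prior
# samples; the top-r eigenvectors of H span the ridge subspace.
import numpy as np

def grad_log_lik(theta):
    # Toy nonlinear forward model G(theta) = sin(theta[0]) + theta[1]**2
    # observed with unit-variance Gaussian noise and datum y = 1.0.
    resid = 1.0 - (np.sin(theta[0]) + theta[1] ** 2)
    g = np.zeros_like(theta)
    g[0] = resid * np.cos(theta[0])
    g[1] = resid * 2.0 * theta[1]
    return g            # gradient is zero in all remaining coordinates

d, N, r = 10, 2000, 2
rng = np.random.default_rng(0)
samples = rng.standard_normal((N, d))          # draws from the prior

H = np.zeros((d, d))
for theta in samples:
    g = grad_log_lik(theta)
    H += np.outer(g, g) / N                    # Monte Carlo estimate

eigvals, eigvecs = np.linalg.eigh(H)
U_r = eigvecs[:, ::-1][:, :r]   # top-r directions span the ridge subspace
print(eigvals[::-1][: r + 2])   # sharp drop after r suggests a good ridge
```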
In recent work (Maierhofer & Huybrechs, 2022, Adv. Comput. Math.), the authors showed that least-squares oversampling can improve the convergence properties of collocation methods for boundary integral equations involving operators of certain pseudo-differential form. The underlying principle is that the discrete method approximates a Bubnov--Galerkin method in a suitable sense. In the present work, we extend this analysis to the case when the integral operator is perturbed by a compact operator $\mathcal{K}$ which is continuous as a map on Sobolev spaces on the boundary, $\mathcal{K}:H^{p}\rightarrow H^{q}$ for all $p,q\in\mathbb{R}$. This study is complicated by the fact that both the test and trial functions in the discrete Bubnov--Galerkin orthogonality conditions are modified over the unperturbed setting. Our analysis guarantees that previous results concerning optimal convergence rates and sufficient rates of oversampling are preserved in the more general case. Indeed, for the first time, this analysis provides a complete explanation of the advantages of least-squares oversampled collocation for boundary integral formulations of the Laplace equation on arbitrary smooth Jordan curves in 2D. Our theoretical results are shown to be in very good agreement with numerical experiments.
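To convey the basic mechanism, the toy sketch below applies least-squares oversampled collocation to a second-kind integral equation $u - \mathcal{K}u = f$ on $[0,1]$: more collocation points than trial functions, solved in the discrete least-squares sense. The kernel, monomial basis, and sizes are our own illustrative choices, not the paper's boundary-integral setting.

```python
# Toy least-squares oversampled collocation for u - Ku = f on [0,1].
import numpy as np

N, M = 8, 24                        # N trial functions, M > N collocation pts
y = np.linspace(0.0, 1.0, 400)      # fine grid for the inner quadrature
w = np.full_like(y, y[1] - y[0]); w[[0, -1]] *= 0.5   # trapezoid weights

K = lambda s, t: 0.1 * np.exp(-np.abs(s - t))         # smooth toy kernel
u_exact = lambda s: np.cos(3.0 * s)

x_col = np.linspace(0.0, 1.0, M)    # oversampled collocation points
A = np.zeros((M, N))
for j in range(N):                  # column j: (phi_j - K phi_j)(x_i)
    for i, xi in enumerate(x_col):
        A[i, j] = xi ** j - np.sum(w * K(xi, y) * y ** j)

# Right-hand side manufactured from the chosen exact solution.
f = np.array([u_exact(xi) - np.sum(w * K(xi, y) * u_exact(y))
              for xi in x_col])

c, *_ = np.linalg.lstsq(A, f, rcond=None)   # discrete least squares
u_h = sum(c[j] * y ** j for j in range(N))
print(np.max(np.abs(u_h - u_exact(y))))     # small uniform error
```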
We give a nearly-linear time reduction that encodes any linear program as a 2-commodity flow problem with only a small blow-up in size. Under mild assumptions similar to those employed by modern fast solvers for linear programs, our reduction causes only a polylogarithmic multiplicative increase in the size of the program and runs in nearly-linear time. Our reduction applies to high-accuracy approximation algorithms and exact algorithms. Given an approximate solution to the 2-commodity flow problem, we can extract a solution to the linear program in linear time with only a polynomial factor increase in the error. This implies that any algorithm that solves the 2-commodity flow problem can solve linear programs in essentially the same time. Given a directed graph with edge capacities and two source-sink pairs, the goal of the 2-commodity flow problem is to maximize the sum of the flows routed between the two source-sink pairs subject to edge capacities and flow conservation. A 2-commodity flow can be directly written as a linear program, and thus we establish a nearly-tight equivalence between these two classes of problems. Our proof follows the outline of Itai's polynomial-time reduction of a linear program to a 2-commodity flow problem (JACM'78). Itai's reduction shows that exactly solving 2-commodity flow and exactly solving linear programming are polynomial-time equivalent. We improve Itai's reduction to nearly preserve the problem representation size in each step. In addition, we establish an error bound for approximately solving each intermediate problem in the reduction, and show that the accumulated error is polynomially bounded. We remark that our reduction does not run in strongly polynomial time and that it is open whether 2-commodity flow and linear programming are equivalent in strongly polynomial time.
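For concreteness, the 2-commodity flow problem just described can be written as the following linear program (a standard formulation, in our own notation):
\begin{align*}
\max_{f^1,f^2\geq 0}\quad & \sum_{e\in\delta^+(s_1)}f^1_e-\sum_{e\in\delta^-(s_1)}f^1_e+\sum_{e\in\delta^+(s_2)}f^2_e-\sum_{e\in\delta^-(s_2)}f^2_e\\
\text{subject to}\quad & f^1_e+f^2_e\leq c_e\quad\text{for every edge }e,\\
& \sum_{e\in\delta^+(v)}f^i_e=\sum_{e\in\delta^-(v)}f^i_e\quad\text{for }i\in\{1,2\}\text{ and every vertex }v\notin\{s_i,t_i\},
\end{align*}
where $\delta^+(v)$ and $\delta^-(v)$ denote the edges leaving and entering $v$.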
An incremental approach for computing the convex hull of two-dimensional data points is presented. The algorithm is not output-sensitive and runs in time linear in the number of input points. Graham's scan is applied only to a subset of the data points, namely those near the extremes of the dataset. Points are classified as extremal in proportion to their radial distance about an imaginary point interior to the region bounded by the convex hull of the dataset, which is taken as the origin (center) of a polar coordinate system. The subset is obtained by iterating over exponentially decreasing angular intervals and terminating once no change in the maximal point per bin is observed.
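The sketch below illustrates the filtering idea under our reading of the abstract: bin points by polar angle about an interior point (here the centroid), keep the radially farthest point per bin, refine the bins until the per-bin maxima stop changing, and run the scan only on the survivors. SciPy's ConvexHull stands in for Graham's scan, and the bin schedule and stopping rule are simplified stand-ins.

```python
# Illustrative filter-then-scan sketch; details are simplified.
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(1)
pts = rng.standard_normal((10000, 2))

center = pts.mean(axis=0)                     # interior "origin"
rel = pts - center
ang = np.arctan2(rel[:, 1], rel[:, 0])        # polar angle per point
rad = np.linalg.norm(rel, axis=1)             # radial distance per point

kept, prev = set(), None
for bins in (32, 64, 128, 256, 512):          # shrinking angular intervals
    idx = np.digitize(ang, np.linspace(-np.pi, np.pi, bins))
    for b in np.unique(idx):
        members = np.flatnonzero(idx == b)
        kept.add(int(members[np.argmax(rad[members])]))
    if len(kept) == prev:                     # per-bin maxima unchanged
        break
    prev = len(kept)

cand = pts[sorted(kept)]
hull = ConvexHull(cand)                       # scan only the candidates
print(len(cand), "candidates,", len(hull.vertices), "hull vertices")
```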
We consider the allocation of $m$ balls into $n$ bins with incomplete information. In the classical Two-Choice process, a ball first queries the load of two randomly chosen bins and is then placed in the least loaded of the two. In our setting, each ball also samples two random bins but can only estimate a bin's load by sending binary queries of the form "Is the load at least the median?" or "Is the load at least 100?". For the lightly loaded case $m=O(n)$, Feldheim and Gurel-Gurevich (2021) showed that with one query it is possible to achieve a maximum load of $O(\sqrt{\log n/\log \log n})$, and posed the question whether a maximum load of $m/n+O(\sqrt{\log n/\log \log n})$ is possible for any $m = \Omega(n)$. In this work, we resolve this open problem by proving a lower bound of $m/n+\Omega( \sqrt{\log n})$ for a fixed $m=\Theta(n \sqrt{\log n})$, and a lower bound of $m/n+\Omega(\log n/\log \log n)$ for some $m$ depending on the strategy used. We complement this negative result with a positive result for multiple queries. In particular, we show that with only two binary queries per chosen bin, there is an oblivious strategy which ensures a maximum load of $m/n+O(\sqrt{\log n})$ for any $m \geq 1$. Further, for any $k = O(\log \log n)$ binary queries, the upper bound on the maximum load improves to $m/n + O(k(\log n)^{1/k})$ for any $m \geq 1$. This result for $k$ queries implies (i) new bounds for the $(1+\beta)$-process introduced by Peres et al. (2015), (ii) new bounds for the graphical balanced allocation process on dense expander graphs, and (iii) the bound of $m/n+O(\log \log n)$ on the maximum load achieved by the Two-Choice process, including the heavily loaded case $m=\Omega(n)$ derived by Berenbrink et al. (2006). One novel aspect of our proofs is the use of multiple super-exponential potential functions, which may be of use in future work.
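A small simulation sketch of the setting is below, contrasting full Two-Choice with a one-query strategy that can only ask "is the load at least the median?"; the tie-breaking rule is one plausible choice of ours, not the strategies analyzed in the paper.

```python
# Compare the load gap (max load minus m/n) of Two-Choice against a
# strategy restricted to a single binary median query per sampled bin.
import numpy as np

def simulate(n, m, queries, rng):
    load = np.zeros(n, dtype=int)
    for _ in range(m):
        i, j = rng.integers(n, size=2)         # two sampled bins
        if queries == "full":                  # classical Two-Choice
            pick = i if load[i] <= load[j] else j
        else:                                  # one binary median query
            med = np.median(load)
            hi_i, hi_j = load[i] >= med, load[j] >= med
            if hi_i == hi_j:                   # answers indistinguishable
                pick = i if rng.random() < 0.5 else j
            else:
                pick = i if not hi_i else j    # prefer the "below" bin
        load[pick] += 1
    return load.max() - m / n                  # gap above the average

rng = np.random.default_rng(0)
n = 1000
m = int(n * np.sqrt(np.log(n)))                # the lower-bound regime
print("Two-Choice gap:", simulate(n, m, "full", rng))
print("one-query gap: ", simulate(n, m, "median", rng))
```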
In supersingular isogeny-based cryptography, the path-finding problem reduces to the endomorphism ring problem. Can path-finding be reduced to knowing just one endomorphism? It is known that a small endomorphism enables polynomial-time path-finding and endomorphism ring computation (Love-Boneh [36]). As this paper neared completion, it was shown that the endomorphism ring problem in the presence of one known endomorphism reduces to a vectorization problem (Wesolowski [54]). In this paper, we give explicit classical and quantum algorithms for path-finding to an initial curve using the knowledge of one endomorphism. An endomorphism gives an explicit orientation of a supersingular elliptic curve. We use the theory of oriented supersingular isogeny graphs and algorithms for taking ascending/descending/horizontal steps on such graphs. Although the most general runtimes are subexponential, we show that every supersingular elliptic curve has (potentially large) endomorphisms whose exposure would lead to a classical polynomial-time path-finding algorithm.
We initiate the study of Boolean function analysis on high-dimensional expanders. We give a random-walk based definition of high-dimensional expansion, which coincides with the earlier definition in terms of two-sided link expanders. Using this definition, we describe an analog of the Fourier expansion and the Fourier levels of the Boolean hypercube for simplicial complexes. Our analog is a decomposition into approximate eigenspaces of random walks associated with the simplicial complexes. Our random-walk definition and the decomposition have the additional advantage that they extend to the more general setting of posets, encompassing both high-dimensional expanders and the Grassmann poset, which appears in recent work on the unique games conjecture. We then use this decomposition to extend the Friedgut-Kalai-Naor theorem to high-dimensional expanders. Our results demonstrate that a constant-degree high-dimensional expander can sometimes serve as a sparse model for the Boolean slice or hypercube, and quite possibly additional results from Boolean function analysis can be carried over to this sparse model. Therefore, this model can be viewed as a derandomization of the Boolean slice, containing only $|X(k-1)|=O(n)$ points in contrast to $\binom{n}{k}$ points in the $k$-slice (which consists of all $n$-bit strings with exactly $k$ ones).
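As background, the classical object being generalized is the Fourier expansion $f=\sum_S \hat{f}(S)\chi_S$ on the Boolean hypercube, with level $j$ collecting the weight of all $|S|=j$. The brute-force sketch below (our own illustration, not the paper's construction on simplicial complexes) computes the Fourier weight at each level for a small example.

```python
# Fourier weights per level for a Boolean function on {-1,1}^n, via
# f_hat(S) = E_x[f(x) * chi_S(x)]; by Parseval the weights sum to 1.
import itertools
import numpy as np

n = 4
cube = list(itertools.product([-1, 1], repeat=n))

def f(x):                       # example function: a majority-style vote
    return 1.0 if sum(x) > 0 else -1.0

for size in range(n + 1):       # Fourier "levels" 0, 1, ..., n
    weight = 0.0
    for S in itertools.combinations(range(n), size):
        chi = lambda x: np.prod([x[i] for i in S])   # character chi_S
        f_hat = np.mean([f(x) * chi(x) for x in cube])
        weight += f_hat ** 2
    print(f"level {size}: Fourier weight {weight:.4f}")
```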
Motivated by applications in reinforcement learning (RL), we study a nonlinear stochastic approximation (SA) algorithm under Markovian noise, and establish its finite-sample convergence bounds under various stepsizes. Specifically, we show that when using a constant stepsize (i.e., $\alpha_k\equiv \alpha$), the algorithm achieves exponentially fast convergence to a neighborhood (with radius $O(\alpha\log(1/\alpha))$) of the desired limit point. When using diminishing stepsizes with an appropriate decay rate, the algorithm converges at rate $O(\log(k)/k)$. Our proof is based on Lyapunov drift arguments, and to handle the Markovian noise we exploit the fast mixing of the underlying Markov chain. To demonstrate the generality of our theoretical results on Markovian SA, we use them to derive finite-sample bounds for the popular $Q$-learning with linear function approximation algorithm, under a condition on the behavior policy. Importantly, we do not need to assume that the samples are i.i.d., and do not require an artificial projection step in the algorithm to maintain the boundedness of the iterates. Numerical simulations corroborate our theoretical results.
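A minimal sketch of the application just mentioned is below: $Q$-learning with linear function approximation driven by a single Markovian trajectory under a fixed behavior policy. The tiny MDP, features, and stepsize are our own illustrative choices.

```python
# Q-learning with linear function approximation on a random toy MDP,
# run along one Markovian trajectory (no i.i.d. sampling, no projection).
import numpy as np

rng = np.random.default_rng(0)
nS, nA, d = 5, 2, 3
phi = rng.standard_normal((nS, nA, d)) / np.sqrt(d)   # features phi(s, a)
P = rng.dirichlet(np.ones(nS), size=(nS, nA))         # transition kernel
R = rng.random((nS, nA))                              # rewards in [0, 1]
gamma, alpha = 0.9, 0.05                              # constant stepsize

w = np.zeros(d)
s = 0
for k in range(20000):                     # single Markovian trajectory
    a = rng.integers(nA)                   # uniform behavior policy
    s2 = rng.choice(nS, p=P[s, a])
    target = R[s, a] + gamma * max(phi[s2, b] @ w for b in range(nA))
    td = target - phi[s, a] @ w            # temporal-difference error
    w += alpha * td * phi[s, a]            # SA update with step alpha
    s = s2                                 # noise is Markovian, not i.i.d.
print(w)    # with constant alpha, iterates settle near the limit point
```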
We study the fundamental problem of ReLU regression, where the goal is to fit Rectified Linear Units (ReLUs) to data. This supervised learning task is efficiently solvable in the realizable setting, but is known to be computationally hard with adversarial label noise. In this work, we focus on ReLU regression in the Massart noise model, a natural and well-studied semi-random noise model. In this model, the label of every point is generated according to a function in the class, but an adversary is allowed to change this value arbitrarily with some probability, which is {\em at most} $\eta < 1/2$. We develop an efficient algorithm that achieves exact parameter recovery in this model under mild anti-concentration assumptions on the underlying distribution. Such assumptions are necessary for exact recovery to be information-theoretically possible. We demonstrate that our algorithm significantly outperforms naive applications of $\ell_1$ and $\ell_2$ regression on both synthetic and real data.
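For orientation, the sketch below sets up the ReLU regression task itself and fits it with plain gradient descent on the squared loss; this is the naive $\ell_2$-style baseline the paper compares against, not its Massart-robust algorithm, and the corruption shown is random while a Massart adversary can act adaptively.

```python
# Baseline: fit y ~ relu(w.x) by gradient descent on the squared loss,
# with an eta-fraction of labels corrupted (Massart-style budget).
import numpy as np

rng = np.random.default_rng(0)
d, n, eta = 5, 5000, 0.1
w_star = rng.standard_normal(d)
X = rng.standard_normal((n, d))             # anti-concentrated covariates
y = np.maximum(X @ w_star, 0.0)

flip = rng.random(n) < eta                  # corrupt at most eta of labels
y[flip] = rng.standard_normal(flip.sum())

w = 0.1 * rng.standard_normal(d)            # nonzero init so ReLUs activate
for _ in range(1000):
    pred = np.maximum(X @ w, 0.0)
    grad = ((pred - y) * (X @ w > 0)) @ X / n   # squared-loss gradient
    w -= 0.2 * grad
print(np.linalg.norm(w - w_star))           # recovery degrades under noise
```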