For a finite abstract simplicial complex $\Sigma$, an initial realization $\alpha$ in $\mathbb{E}^d$, and desired edge lengths $L$, we give practical sufficient conditions for the existence of a non-self-intersecting perturbation of $\alpha$ realizing the lengths $L$. We provide software to verify these conditions by computer and, optionally, to assist in the creation of an initial realization from the abstract simplicial data. Applications include proving the existence of a planar embedding of a graph with specified edge lengths and proving the existence of polyhedra (or higher-dimensional polytopes) with specified edge lengths.
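A basic ingredient of such a verification is checking how far a given realization is from attaining the target edge lengths. The sketch below is an illustrative helper under assumed data conventions (vertex coordinates as rows, edges as index pairs); it is not the paper's software interface.

```python
import numpy as np

def edge_length_residuals(coords, edges, target_lengths):
    """For a realization given by vertex coordinates in E^d, return the
    deviation of each edge's Euclidean length from its target length.
    (Illustrative helper; the data layout is an assumption, not the
    interface of the paper's software.)"""
    residuals = {}
    for e, (u, v) in enumerate(edges):
        length = np.linalg.norm(coords[u] - coords[v])
        residuals[(u, v)] = length - target_lengths[e]
    return residuals

# A unit square in the plane with its four boundary edges: every
# residual should vanish, since each edge already has length 1.
coords = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
res = edge_length_residuals(coords, edges, [1.0, 1.0, 1.0, 1.0])
```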
In this paper we develop a classical algorithm of complexity $O(K \, 2^n)$ to simulate parametrized quantum circuits (PQCs) of $n$ qubits, where $K$ is the total number of one-qubit and two-qubit control gates. The algorithm is obtained by explicitly constructing, for any single-qubit or two-qubit control gate in an $n$-qubit system, the corresponding $2$-sparse unitary matrix of order $2^n$. Finally, we determine analytical expressions of the Hamiltonians for any such gate, and consequently obtain a local Hamiltonian decomposition of any PQC. All results are validated with numerical simulations.
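The $2$-sparsity is what makes the $O(K\,2^n)$ cost plausible: the full $2^n \times 2^n$ unitary $I \otimes \cdots \otimes U \otimes \cdots \otimes I$ of a single-qubit gate has at most two nonzero entries per row, so applying it to a state vector takes $O(2^n)$ operations. A minimal NumPy sketch of this application step (qubit-ordering convention is an assumption):

```python
import numpy as np

def apply_single_qubit_gate(state, gate, target, n):
    """Apply a 2x2 gate to qubit `target` of an n-qubit state vector in
    O(2^n) time, exploiting the 2-sparsity of the full 2^n x 2^n unitary
    I (x) ... (x) U (x) ... (x) I. Illustrative sketch only; qubit 0 is
    taken to be the most significant bit."""
    psi = state.reshape((2,) * n)
    psi = np.moveaxis(psi, target, 0)
    psi = np.tensordot(gate, psi, axes=([1], [0]))  # contract gate index
    psi = np.moveaxis(psi, 0, target)
    return psi.reshape(-1)

# Hadamard on qubit 0 of |00> gives (|00> + |10>)/sqrt(2).
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
state = np.zeros(4)
state[0] = 1.0
out = apply_single_qubit_gate(state, H, 0, 2)
```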
We consider the problem of approximating a function from $L^2$ by an element of a given $m$-dimensional space $V_m$, associated with some feature map $\varphi$, using evaluations of the function at random points $x_1,\dots,x_n$. After recalling some results on optimal weighted least-squares using independent and identically distributed points, we consider weighted least-squares using projection determinantal point processes (DPP) or volume sampling. These distributions introduce dependence between the points that promotes diversity in the selected features $\varphi(x_i)$. We first provide a generalized version of volume-rescaled sampling yielding quasi-optimality results in expectation with a number of samples $n = O(m\log(m))$, meaning that the expected $L^2$ error is bounded by a constant times the best approximation error in $L^2$. Further assuming that the function belongs to some normed vector space $H$ continuously embedded in $L^2$, we prove that the approximation is almost surely bounded by the best approximation error measured in the $H$-norm. This includes the cases of functions from $L^\infty$ or reproducing kernel Hilbert spaces. Finally, we present an alternative strategy that uses independent repetitions of projection DPP (or volume sampling), yielding error bounds similar to those for i.i.d. points or volume sampling, but in practice with a much lower number of samples. Numerical experiments illustrate the performance of the different strategies.
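For orientation, the weighted least-squares projection itself is a small computation: form the weighted empirical Gram matrix and right-hand side from the sampled features and solve for the coefficients in $V_m$. The sketch below is illustrative only, with an ad hoc weight choice (inverse of the pointwise squared feature norm); the paper's sampling distributions (i.i.d. optimal sampling, projection DPPs, volume sampling) are not reproduced here.

```python
import numpy as np

def weighted_least_squares(phi, x, w, y):
    """Weighted least-squares approximation in V_m = span(phi) from point
    evaluations. phi returns the m feature values at a point; w holds the
    weights w(x_i). Illustrative sketch: the choice of sampling measure
    and weights (the crux of the paper) is left out."""
    Phi = np.array([phi(xi) for xi in x])        # n x m design matrix
    W = np.diag(w)
    G = Phi.T @ W @ Phi / len(x)                 # empirical Gram matrix
    b = Phi.T @ W @ y / len(x)
    return np.linalg.solve(G, b)                 # coefficients in V_m

# Approximate f(x) = x^2 on [0,1] in span{1, x, x^2}: since f lies in
# V_m, the weighted projection recovers it exactly for any positive weights.
rng = np.random.default_rng(0)
phi = lambda t: np.array([1.0, t, t * t])
x = rng.uniform(0.0, 1.0, 200)
w = np.array([3.0 / (phi(xi) @ phi(xi)) for xi in x])  # assumed toy weights
c = weighted_least_squares(phi, x, w, x**2)
```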
We consider the $h$-version of the finite-element method, where accuracy is increased by decreasing the meshwidth $h$ while keeping the polynomial degree $p$ constant, applied to the Helmholtz equation. Although the question "how quickly must $h$ decrease as the wavenumber $k$ increases to maintain accuracy?" has been studied intensively since the 1990s, none of the existing rigorous wavenumber-explicit analyses take into account the approximation of the geometry. In this paper we prove that for nontrapping problems solved using straight elements the geometric error is order $kh$, which is then less than the pollution error $k(kh)^{2p}$ when $k$ is large; this fact is then illustrated in numerical experiments. More generally, we prove that, even for problems with strong trapping, using degree four (in 2-d) or degree five (in 3-d) polynomials and isoparametric elements ensures that the geometric error is smaller than the pollution error for most large wavenumbers.
A $k$-subcolouring of a graph $G$ is a function $f:V(G) \to \{0,\ldots,k-1\}$ such that each set of vertices coloured $i$ induces a disjoint union of cliques. The subchromatic number, $\chi_{\textrm{sub}}(G)$, is the minimum $k$ such that $G$ admits a $k$-subcolouring. Ne\v{s}et\v{r}il, Ossona de Mendez, Pilipczuk, and Zhu (2020) recently raised the problem of finding tight upper bounds for $\chi_{\textrm{sub}}(G^2)$ when $G$ is planar. We show that $\chi_{\textrm{sub}}(G^2)\le 43$ when $G$ is planar, improving their bound of 135. We give even better bounds when the planar graph $G$ has larger girth. Moreover, we show that $\chi_{\textrm{sub}}(G^{3})\le 95$, improving the previous bound of 364. For these we adapt some recent techniques of Almulhim and Kierstead (2022), while also extending the decompositions of triangulated planar graphs of Van den Heuvel, Ossona de Mendez, Quiroz, Rabinovich and Siebertz (2017) to planar graphs of arbitrary girth. Note that these decompositions are the precursors of the graph product structure theorem of planar graphs. We give improved bounds for $\chi_{\textrm{sub}}(G^p)$ for all $p$, whenever $G$ has bounded treewidth, bounded simple treewidth, bounded genus, or excludes a clique or biclique as a minor. For this we introduce a family of parameters which form a gradation between the strong and the weak colouring numbers. We give upper bounds for these parameters for graphs coming from such classes. Finally, we give a 2-approximation algorithm for the subchromatic number of graphs coming from any fixed class with bounded layered cliquewidth. In particular, this implies a 2-approximation algorithm for the subchromatic number of powers $G^p$ of graphs coming from any fixed class with bounded layered treewidth (such as the class of planar graphs). This algorithm works even if the power $p$ and the graph $G$ are unknown.
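The defining condition is easy to verify directly: a colour class induces a disjoint union of cliques exactly when every connected component of the induced subgraph is a clique. A minimal checker along these lines (an illustrative verifier only, not the 2-approximation algorithm of the paper):

```python
from itertools import combinations

def is_subcolouring(adj, colouring, k):
    """Check that `colouring` (vertex -> {0,...,k-1}) is a k-subcolouring
    of the graph given by `adj` (vertex -> set of neighbours): every
    colour class must induce a disjoint union of cliques, i.e. each
    connected component of the induced subgraph is a clique.
    (Illustrative checker, not the paper's approximation algorithm.)"""
    for c in range(k):
        cls = {v for v in adj if colouring[v] == c}
        seen = set()
        for v in cls:
            if v in seen:
                continue
            comp, stack = {v}, [v]        # DFS inside the colour class
            while stack:
                u = stack.pop()
                for w in adj[u] & cls:
                    if w not in comp:
                        comp.add(w)
                        stack.append(w)
            seen |= comp
            if any(b not in adj[a] for a, b in combinations(comp, 2)):
                return False              # a component is not a clique
    return True

# The path 0-1-2 needs two colours: {0,2} and {1} both induce clique unions.
p3 = {0: {1}, 1: {0, 2}, 2: {1}}
bad = is_subcolouring(p3, {0: 0, 1: 0, 2: 0}, 1)   # P3 itself is not a clique union
good = is_subcolouring(p3, {0: 0, 1: 1, 2: 0}, 2)
```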
In this paper we consider an initial-boundary value problem with a Caputo time derivative of order $\alpha\in(0,1)$. The solution typically exhibits a weak singularity near the initial time and this causes a reduction in the orders of convergence of standard schemes. To deal with this singularity, the solution is computed with a fitted difference scheme on a graded mesh. The convergence of this scheme is analysed using a discrete maximum principle and carefully chosen barrier functions. Sharp error estimates are proved, which show an enhancement in the convergence rate compared with the standard L1 approximation on uniform meshes, and also indicate an optimal choice for the mesh grading. This optimal mesh grading is less severe than the optimal grading for the standard L1 scheme. Furthermore, the dependence of the error on the final time forms part of our error estimate. Numerical experiments are presented which corroborate our theoretical results.
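The graded mesh itself is the standard one-parameter family $t_j = T(j/N)^r$: larger grading exponents $r$ concentrate mesh points near $t=0$, where the weak singularity sits. A short sketch (illustrative; the paper's optimal $r$ depends on $\alpha$ and on the fitted scheme, and is not encoded here):

```python
import numpy as np

def graded_mesh(T, N, r):
    """Graded temporal mesh t_j = T*(j/N)^r on [0, T]. A grading exponent
    r > 1 clusters points near t = 0 to resolve the weak initial
    singularity; r = 1 recovers the uniform mesh. (Sketch only: the
    optimal r of the paper depends on alpha and the scheme.)"""
    j = np.arange(N + 1)
    return T * (j / N) ** r

mesh = graded_mesh(1.0, 8, 2.0)
steps = np.diff(mesh)   # step sizes grow monotonically away from t = 0
```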
We consider the success probability of the $L_0$-regularized box-constrained Babai point, which is a suboptimal solution to the $L_0$-regularized box-constrained integer least squares problem and can be used for MIMO detection. First, we derive formulas for the success probability of both $L_0$-regularized and unregularized box-constrained Babai points. Then we investigate the properties of the $L_0$-regularized box-constrained Babai point, including the optimality of the regularization parameter, the monotonicity of its success probability, and the monotonicity of the ratio of the two success probabilities. A bound on the success probability of the $L_0$-regularized Babai point is derived. After that, we analyze the effect of the LLL-P permutation strategy on the success probability of the $L_0$-regularized Babai point. Then we propose some success-probability-based column permutation strategies to increase the success probability of the $L_0$-regularized box-constrained Babai point. Finally, we present numerical tests to confirm our theoretical results and to show the advantage of the $L_0$ regularization and the effectiveness of the proposed column permutation algorithms compared to existing strategies.
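For readers unfamiliar with Babai points: given an upper-triangular factor $R$ (from a QR factorization of the channel matrix), the box-constrained Babai point is computed by back-substitution with per-entry rounding and clamping to the box. The sketch below shows the unregularized variant only; the $L_0$-regularized point modifies the per-entry decision to penalize nonzeros, which is not reproduced here.

```python
import numpy as np

def box_babai(R, y, lo, hi):
    """Box-constrained Babai point for the upper-triangular system
    R z ~ y: back-substitution with rounding, clamping each entry to
    the integer box [lo, hi]. (Illustrative sketch of the unregularized
    variant; the L0-regularized point alters the rounding step.)"""
    n = R.shape[0]
    z = np.zeros(n, dtype=int)
    for i in range(n - 1, -1, -1):
        c = (y[i] - R[i, i + 1:] @ z[i + 1:]) / R[i, i]
        z[i] = int(np.clip(round(c), lo, hi))
    return z

# Small 2x2 example: the Babai point recovers z_true under mild noise.
R = np.array([[2.0, 1.0], [0.0, 3.0]])
z_true = np.array([1, -1])
z_hat = box_babai(R, R @ z_true + np.array([0.05, -0.05]), -1, 1)
```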
Let $(X_t)$ be a reflected diffusion process in a bounded convex domain in $\mathbb R^d$, solving the stochastic differential equation $$dX_t = \nabla f(X_t) dt + \sqrt{2f (X_t)} dW_t, ~t \ge 0,$$ with $W_t$ a $d$-dimensional Brownian motion. The data $X_0, X_D, \dots, X_{ND}$ consist of discrete measurements and the time interval $D$ between consecutive observations is fixed so that one cannot `zoom' into the observed path of the process. The goal is to infer the diffusivity $f$ and the associated transition operator $P_{t,f}$. We prove injectivity theorems and stability inequalities for the maps $f \mapsto P_{t,f} \mapsto P_{D,f}, t<D$. Using these estimates we establish the statistical consistency of a class of Bayesian algorithms based on Gaussian process priors for the infinite-dimensional parameter $f$, and show optimality of some of the convergence rates obtained. We discuss an underlying relationship between the degree of ill-posedness of this inverse problem and the `hot spots' conjecture from spectral geometry.
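To make the data-generating mechanism concrete, one can discretize the SDE with an Euler-Maruyama scheme and enforce reflection by folding overshoots back into the domain. The sketch below is a one-dimensional toy on $[0,1]$ (the paper works with a bounded convex domain in $\mathbb{R}^d$), with an arbitrary positive diffusivity chosen for illustration.

```python
import numpy as np

def reflected_euler(f, grad_f, x0, T, dt, rng):
    """Euler-Maruyama discretization of
        dX_t = grad_f(X_t) dt + sqrt(2 f(X_t)) dW_t,
    reflected at the boundary of [0, 1] by folding overshoots back in.
    One-dimensional illustrative sketch only."""
    x, path = x0, [x0]
    for _ in range(int(round(T / dt))):
        x = x + grad_f(x) * dt + np.sqrt(2.0 * f(x) * dt) * rng.standard_normal()
        x = abs(x)                  # reflect at 0
        if x > 1.0:
            x = 2.0 - x             # reflect at 1 (small overshoots)
        x = min(max(x, 0.0), 1.0)   # guard against rare double overshoots
        path.append(x)
    return np.array(path)

rng = np.random.default_rng(1)
f = lambda x: 0.5 + 0.25 * x        # an assumed smooth, positive diffusivity
grad_f = lambda x: 0.25
path = reflected_euler(f, grad_f, 0.5, 1.0, 1e-3, rng)
```

Subsampling such a path at multiples of a fixed $D$ mimics the low-frequency observation regime of the paper.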
We consider the community detection problem in a sparse $q$-uniform hypergraph $G$, assuming that $G$ is generated according to the Hypergraph Stochastic Block Model (HSBM). We prove that a spectral method based on the non-backtracking operator for hypergraphs works with high probability down to the generalized Kesten-Stigum detection threshold conjectured by Angelini et al. (2015). We characterize the spectrum of the non-backtracking operator for the sparse HSBM and provide an efficient dimension reduction procedure using the Ihara-Bass formula for hypergraphs. As a result, community detection for the sparse HSBM on $n$ vertices can be reduced to an eigenvector problem of a $2n\times 2n$ non-normal matrix constructed from the adjacency matrix and the degree matrix of the hypergraph. To the best of our knowledge, this is the first provable and efficient spectral algorithm that achieves the conjectured threshold for HSBMs with $r$ blocks generated according to a general symmetric probability tensor.
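The dimension-reduction idea is easiest to see in the ordinary graph case, where the Ihara-Bass formula reduces the non-backtracking spectrum to that of the $2n \times 2n$ companion matrix $M = \begin{pmatrix} A & I - D \\ I & 0 \end{pmatrix}$, built from the adjacency and degree matrices; the paper constructs the analogous matrix for hypergraphs. A graph-case sketch:

```python
import numpy as np

def ihara_bass_companion(A):
    """2n x 2n companion matrix M = [[A, I - D], [I, 0]] encoding the
    quadratic eigenvalue problem lambda^2 v = lambda A v - (D - I) v,
    whose solutions are the non-backtracking eigenvalues of the graph
    (Ihara-Bass formula). Graph case shown as an illustrative sketch;
    the paper's reduction is the hypergraph analogue."""
    n = A.shape[0]
    D = np.diag(A.sum(axis=1))
    I = np.eye(n)
    return np.block([[A, I - D], [I, np.zeros((n, n))]])

# 4-cycle: every non-backtracking walk just circles the cycle, so all
# non-backtracking eigenvalues lie on the unit circle.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
eigs = np.linalg.eigvals(ihara_bass_companion(A))
```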
In this paper, we combine the stabilizer free weak Galerkin (SFWG) method with implicit $\theta$-schemes in time, for $\theta\in [\frac{1}{2},1]$, to solve the fourth-order parabolic problem. In particular, the fully discrete scheme is the first-order backward Euler scheme when $\theta =1$ and the second-order Crank-Nicolson scheme when $\theta =\frac{1}{2}$. Next, we analyze the well-posedness of the schemes and deduce the optimal convergence orders of the error in the $H^2$ and $L^2$ norms. Finally, numerical examples confirm the theoretical results.
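The time-stepping family can be summarized in one update: for a semidiscrete system $u' = -Au$, the $\theta$-scheme solves $(I + \theta\,\Delta t\,A)\,u_{k+1} = (I - (1-\theta)\,\Delta t\,A)\,u_k$. A time-stepping-only sketch (the SFWG spatial discretization is not reproduced; $A$ here is a generic stiffness-type matrix):

```python
import numpy as np

def theta_step(A, u, dt, theta):
    """One step of the implicit theta-scheme for u' = -A u:
        (I + theta*dt*A) u_{k+1} = (I - (1 - theta)*dt*A) u_k.
    theta = 1 is backward Euler (first order in time); theta = 1/2 is
    Crank-Nicolson (second order). Sketch of the time discretization
    only; the paper couples it with an SFWG method in space."""
    I = np.eye(A.shape[0])
    return np.linalg.solve(I + theta * dt * A,
                           (I - (1.0 - theta) * dt * A) @ u)

# Scalar test problem u' = -u with exact solution e^{-t}: one
# Crank-Nicolson step matches the exponential to third order in dt.
dt = 0.01
u_cn = theta_step(np.array([[1.0]]), np.array([1.0]), dt, 0.5)
```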
Determining the capacity $\alpha_c$ of the Binary Perceptron is a long-standing problem. Krauth and M\'ezard (1989) conjectured an explicit value of $\alpha_c$, approximately equal to $0.833$, and a rigorous lower bound matching this prediction was recently established by Ding and Sun (2019). Regarding the upper bound, Kim and Roche (1998) and Talagrand (1999) independently showed that $\alpha_c < 0.996$, while Krauth and M\'ezard outlined an argument which can be used to show that $\alpha_c < 0.847$. The purpose of this expository note is to record a complete proof of the bound $\alpha_c < 0.847$. The proof is a conditional first moment method combined with known results on the spherical perceptron.
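As background, the plain (unconditional) first moment method already gives a crude bound: with $m = \alpha n$ i.i.d. Gaussian half-space constraints, each sign vector $\sigma \in \{-1,+1\}^n$ satisfies a given constraint with probability $1/2$ by symmetry, so $\mathbb{E}[Z] = 2^n \cdot 2^{-m}$, which vanishes for $\alpha > 1$ and yields $\alpha_c \le 1$; the $0.847$ bound sharpens this by conditioning before taking the moment. A toy Monte Carlo check of the identity $\mathbb{E}[Z] = 2^{n-m}$ (illustration only):

```python
import numpy as np
from itertools import product

def expected_solutions(n, m, trials, rng):
    """Monte Carlo estimate of E[Z], where Z counts sign vectors sigma in
    {-1,+1}^n with <g_a, sigma> >= 0 for all m i.i.d. Gaussian constraint
    vectors g_a. By symmetry E[Z] = 2^n * 2^(-m) exactly; the first
    moment method bounds P(Z >= 1) <= E[Z]. (Toy illustration of the
    unconditional first moment, not the note's conditional argument.)"""
    total = 0
    for _ in range(trials):
        G = rng.standard_normal((m, n))
        for sigma in product([-1, 1], repeat=n):
            if np.all(G @ np.array(sigma) >= 0):
                total += 1
    return total / trials

# n = 4, m = 2: E[Z] = 2^4 * 2^(-2) = 4.
est = expected_solutions(4, 2, 1500, np.random.default_rng(0))
```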