We investigate the approximation of weighted integrals over $\mathbb{R}^d$ for integrands from weighted Sobolev spaces of mixed smoothness. We prove upper and lower bounds on the convergence rate of optimal quadratures with respect to $n$ integration nodes for functions from these spaces. In the one-dimensional case $(d=1)$, we obtain the exact convergence rate of optimal quadratures. For $d \ge 2$, the upper bound is achieved by sparse-grid quadratures with integration nodes on step hyperbolic crosses in the function domain $\mathbb{R}^d$.
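As a hedged illustration of the sparse-grid idea, the sketch below combines one-dimensional Gauss-Hermite rules via the standard Smolyak formula for a Gaussian-weighted integral over $\mathbb{R}^2$. This is the classical construction, not necessarily the paper's step-hyperbolic-cross node sets; the function names and level-to-node mapping are our own choices.

```python
import numpy as np
from itertools import product
from math import comb

def gauss_hermite_rule(level):
    # 1D Gauss-Hermite rule with 2**level - 1 nodes (weight exp(-x^2)).
    return np.polynomial.hermite.hermgauss(2**level - 1)

def smolyak_quadrature(f, d, q):
    # Smolyak combination: sum over multi-indices i with q-d+1 <= |i| <= q
    # of (-1)^(q-|i|) * C(d-1, q-|i|) times the tensor-product rule.
    total = 0.0
    for idx in product(range(1, q + 1), repeat=d):
        s = sum(idx)
        if not (q - d + 1 <= s <= q):
            continue
        coeff = (-1)**(q - s) * comb(d - 1, q - s)
        rules = [gauss_hermite_rule(i) for i in idx]
        for x, w in zip(product(*[r[0] for r in rules]),
                        product(*[r[1] for r in rules])):
            total += coeff * np.prod(w) * f(np.array(x))
    return total

# Example: integrate exp(-|x|^2) * cos(x1 + x2) over R^2.
approx = smolyak_quadrature(lambda x: np.cos(x.sum()), d=2, q=5)
exact = np.pi * np.exp(-0.5)   # closed form for this Gaussian integral
print(approx, exact)
```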
We study a class of orbit recovery problems in which we observe independent copies of an unknown element of $\mathbb{R}^p$, each linearly acted upon by a random element of some group (such as $\mathbb{Z}/p$ or $\mathrm{SO}(3)$) and then corrupted by additive Gaussian noise. We prove matching upper and lower bounds on the number of samples required to approximately recover the group orbit of this unknown element with high probability. These bounds, based on quantitative techniques in invariant theory, give a precise correspondence between the statistical difficulty of the estimation problem and algebraic properties of the group. Furthermore, we give computer-assisted procedures to certify these properties that are computationally efficient in many cases of interest. The model is motivated by geometric problems in signal processing, computer vision, and structural biology, and applies to the reconstruction problem in cryo-electron microscopy (cryo-EM), a problem of significant practical interest. Our results allow us to verify (for a given problem size) that if cryo-EM images are corrupted by noise with variance $\sigma^2$, the number of images required to recover the molecular structure scales as $\sigma^6$. We match this bound with a novel (albeit computationally expensive) algorithm for ab initio reconstruction in cryo-EM, based on invariant features of degree at most 3. We further discuss how to recover multiple molecular structures from mixed (or heterogeneous) cryo-EM samples.
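A hedged sketch of the degree-$\le 3$ invariant-feature idea in its simplest instance: multireference alignment over the cyclic group $\mathbb{Z}/p$ rather than the $\mathrm{SO}(3)$ setting of cryo-EM. The shift-invariant mean (degree 1), power spectrum (degree 2), and bispectrum (degree 3) of the DFT are averaged over noisy shifted copies; at high noise the bispectrum estimate has variance scaling like $\sigma^6$, which is the source of the sample-complexity scaling. All parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
p, sigma, n = 16, 1.0, 100_000
x = rng.standard_normal(p)                                 # unknown signal
obs = np.stack([np.roll(x, s) for s in rng.integers(0, p, n)])
obs += sigma * rng.standard_normal((n, p))

F = np.fft.fft(obs, axis=1)
mean_inv = F[:, 0].mean().real / p                         # degree-1 invariant
power = (np.abs(F)**2).mean(axis=0) - p * sigma**2         # degree-2, debiased
bispec = np.empty((p, p), dtype=complex)                   # degree-3 invariant
for k in range(p):
    for l in range(p):
        bispec[k, l] = (F[:, k] * F[:, l] * F[:, (k + l) % p].conj()).mean()
# (Gaussian noise-bias terms, which only affect entries with a zero
# frequency index, are omitted in this sketch.)

Fx = np.fft.fft(x)
print("mean err:", abs(mean_inv - x.mean()))
print("power err:", np.abs(power - np.abs(Fx)**2).max())
print("bispectrum err:", np.abs(bispec[1, 2] - Fx[1] * Fx[2] * np.conj(Fx[3])))
```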
Coverings of undirected graphs are used in distributed computing, and unfoldings of directed graphs in semantics of programs. We study these two notions from a graph theoretical point of view so as to highlight their similarities, as they are both defined in terms of surjective graph homomorphisms. In particular, universal coverings and complete unfoldings are infinite trees that are regular if the initial graphs are finite. Regularity means that a tree has finitely many subtrees up to isomorphism. Two important theorems have been established by Leighton and Norris for coverings. We prove similar statements for unfoldings. Our study of the difficult proof of Leighton's Theorem led us to generalize coverings and, similarly, unfoldings, by attaching finite or infinite weights to edges of the covered or unfolded graphs. This generalization yields a canonical factorization of the universal covering of any finite graph, which provably does not exist without using weights. Introducing infinite weights provides us with finite descriptions of regular trees having nodes of countably infinite degree. We also generalize to weighted graphs and their coverings a classical factorization theorem of their characteristic polynomials.
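A minimal sketch of the unfolding notion, assuming a directed graph given as an adjacency dictionary: the complete unfolding from a root is the tree of all paths, and the sketch below computes its depth-truncations. The full unfolding is the limit as the depth grows, and it is a regular tree whenever the graph is finite. All names are our own.

```python
from itertools import count

def unfold(graph, root, depth):
    """Depth-truncated unfolding of a directed graph from `root`.

    Tree nodes correspond to paths in `graph` starting at `root`; sending
    each path to its endpoint is the unfolding homomorphism.
    """
    ids = count()
    tree = {}                          # node id -> (graph vertex, child ids)

    def rec(v, d):
        node = next(ids)
        children = [] if d == 0 else [rec(w, d - 1) for w in graph[v]]
        tree[node] = (v, children)
        return node

    root_id = rec(root, depth)
    return tree, root_id

# Two vertices and a loop: even this tiny graph has an infinite unfolding.
g = {'a': ['a', 'b'], 'b': ['a']}
tree, r = unfold(g, 'a', 4)
print(len(tree), "nodes in the depth-4 truncation; root maps to", tree[r][0])
```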
In this work we propose tailored model order reduction for varying boundary optimal control problems governed by parametric partial differential equations. By varying boundary control we mean that a specific parameter changes where the boundary control acts on the system. This peculiar formulation might benefit from model order reduction: fast and reliable simulations of this model can be of utmost usefulness in many applied fields, such as geophysics and energy engineering. However, varying boundary control features very complicated and diversified parametric behaviour for the state and adjoint variables. For example, as the boundary control parameter changes, the state solution might exhibit transport phenomena. Moreover, the problem loses its affine structure. It is well known that classical model order reduction techniques fail in this setting, both in accuracy and in efficiency. Thus, we propose reduced approaches inspired by those used when dealing with wave-like phenomena. Specifically, we compare standard proper orthogonal decomposition with two tailored strategies: geometric recasting and local proper orthogonal decomposition. Geometric recasting solves the optimization system in a reference domain, simplifying the problem at hand and avoiding hyper-reduction, while local proper orthogonal decomposition builds local bases to increase the accuracy of the reduced solution in very general settings (where geometric recasting is unfeasible). We compare the various approaches on two numerical experiments based on geometries of increasing complexity.
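A hedged sketch contrasting global POD with a simple local-POD variant. Clustering snapshots and building one basis per cluster is one common way to realize local proper orthogonal decomposition, not necessarily the paper's exact strategy; the toy snapshots mimic a transported bump, the regime where a single global basis degrades. All names and parameters are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def pod_basis(snapshots, tol=1e-4):
    # snapshots: (n_dofs, n_snap); keep modes capturing 1 - tol of the energy.
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    return U[:, :np.searchsorted(energy, 1 - tol) + 1]

def local_pod(snapshots, n_clusters=4, tol=1e-4):
    # Cluster the snapshots (e.g., by parameter regime), one basis per cluster.
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(snapshots.T)
    return {c: pod_basis(snapshots[:, labels == c], tol)
            for c in range(n_clusters)}

# Toy transport-like snapshots: a Gaussian bump whose location is the parameter.
x = np.linspace(0, 1, 400)
S = np.stack([np.exp(-((x - mu) / 0.02)**2)
              for mu in np.linspace(0.1, 0.9, 80)]).T
print("global POD modes:", pod_basis(S).shape[1])
print("local POD modes per cluster:",
      [V.shape[1] for V in local_pod(S).values()])
```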
It is often of interest to assess whether a function-valued statistical parameter, such as a density function or a mean regression function, is equal to any function in a class of candidate null parameters. This can be framed as a statistical inference problem where the target estimand is a scalar measure of dissimilarity between the true function-valued parameter and the closest function among all candidate null values. These estimands are typically defined to be zero when the null holds and positive otherwise. While there is well-established theory and methodology for performing efficient inference when one assumes a parametric model for the function-valued parameter, methods for inference in the nonparametric setting are limited. When the null holds, the target estimand resides at the boundary of the parameter space, and existing nonparametric estimators then either have a non-standard limiting distribution or converge at a sub-optimal rate, making inference challenging. In this work, we propose a strategy for constructing nonparametric estimators with improved asymptotic performance. Notably, our estimators converge at the parametric rate at the boundary of the parameter space and also achieve a tractable null limiting distribution. As illustrations, we discuss how this framework can be applied to perform inference in nonparametric regression problems, and to perform nonparametric assessment of stochastic dependence.
The Poisson-Boltzmann equation (PBE) is an implicit solvent continuum model for calculating the electrostatic potential and energies of ionic solvated biomolecules. However, its numerical solution remains a significant challenge due to the strong singularities and nonlinearity caused by the singular source terms and the exponential nonlinear terms, respectively. An efficient method for the treatment of singularities in the linear PBE, based on the RS tensor decomposition of both the electrostatic potential and the discretized Dirac delta distribution, was introduced in \cite{BeKKKS:18}. In this paper, we extend this regularization method to the nonlinear PBE. We apply the PBE only to the regular part of the solution corresponding to the modified right-hand side, obtained via extraction of the long-range part of the discretized Dirac delta distribution. The total electrostatic potential is obtained by adding the long-range solution to the directly precomputed short-range potential. The main computational benefit of the approach is that continuity of the Cauchy data on the solute-solvent interface is maintained automatically. The boundary conditions are also obtained from the long-range component of the precomputed canonical tensor representation of the Newton kernel. In the numerical experiments, we illustrate the accuracy of the nonlinear regularized PBE (NRPBE) over the classical variant.
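The paper's regularization rests on splitting the Newton kernel into a smooth long-range part and a singular short-range part via the RS canonical tensor decomposition. As a hedged stand-in for that decomposition, the sketch below shows the classical Ewald-style erf/erfc splitting of $1/r$, which realizes the same long-range/short-range idea in its simplest analytic form; it is not the paper's tensor construction.

```python
import numpy as np
from scipy.special import erf, erfc

def newton_split(r, a=1.0):
    """Split 1/r = erf(a r)/r + erfc(a r)/r.

    erf(a r)/r is analytic at r = 0 (value 2 a / sqrt(pi)), so it carries
    no singularity; erfc(a r)/r keeps the singularity but decays rapidly.
    """
    safe_r = np.where(r > 0, r, 1.0)              # avoid division warnings
    long_range = np.where(r > 0, erf(a * r) / safe_r, 2 * a / np.sqrt(np.pi))
    short_range = np.where(r > 0, erfc(a * r) / safe_r, np.inf)
    return long_range, short_range

r = np.array([0.0, 0.5, 1.0, 2.0])
lr, sr = newton_split(r)
print(lr[0])                          # finite: long-range part is smooth at 0
print((lr + sr)[1:], 1.0 / r[1:])     # sum recovers 1/r away from the origin
```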
In this work, we aim at constructing numerical schemes that are as efficient as possible, in terms of cost and conservation of invariants, for the Vlasov--Fokker--Planck system coupled with the Poisson or Amp\`ere equation. Splitting methods are used in which the linear terms in space are treated by spectral or semi-Lagrangian methods, while the nonlinear velocity diffusion in the collision operator is treated using a stabilized Runge--Kutta--Chebyshev (RKC) integrator, a powerful alternative to implicit schemes. The new schemes are shown to exactly preserve mass and momentum. The conservation of total energy is obtained using a suitable approximation of the electric field. An H-theorem is proved in the semi-discrete case, while the entropy decay is illustrated numerically for the fully discretized problem. Numerical experiments, including an investigation of the Landau damping phenomenon and the bump-on-tail instability, are performed to illustrate the efficiency of the new schemes.
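A hedged sketch of one step of the standard first-order $s$-stage Runge-Kutta-Chebyshev method (in the usual damped recurrence form), applied to a toy 1D discrete diffusion. The paper couples such a stabilized integrator with spectral/semi-Lagrangian transport steps via splitting; the recurrence and parameters below follow the textbook RKC1 scheme, not the paper's specific variant.

```python
import numpy as np

def rkc1_step(F, y, h, s, eps=0.05):
    """One step of the s-stage first-order Runge-Kutta-Chebyshev method.

    Stabilized explicit scheme: its real stability interval grows roughly
    like 2 s^2, so s can be chosen to match the diffusion stiffness.
    """
    w0 = 1.0 + eps / s**2
    # Chebyshev values T_j(w0) and U_j(w0) by three-term recurrence.
    T, U = np.empty(s + 1), np.empty(s + 1)
    T[0], T[1], U[0], U[1] = 1.0, w0, 1.0, 2.0 * w0
    for j in range(2, s + 1):
        T[j] = 2.0 * w0 * T[j - 1] - T[j - 2]
        U[j] = 2.0 * w0 * U[j - 1] - U[j - 2]
    w1 = T[s] / (s * U[s - 1])            # since T_s'(w0) = s U_{s-1}(w0)
    y_jm2, y_jm1 = y, y + (w1 / w0) * h * F(y)
    for j in range(2, s + 1):
        mu, nu = 2.0 * w0 * T[j - 1] / T[j], -T[j - 2] / T[j]
        mu_t = 2.0 * w1 * T[j - 1] / T[j]
        y_j = mu * y_jm1 + nu * y_jm2 + mu_t * h * F(y_jm1)
        y_jm2, y_jm1 = y_jm1, y_j
    return y_jm1

# Toy: explicit diffusion step on a periodic grid with a time step far
# beyond the forward-Euler stability limit, thanks to the 20 stages.
n, D = 200, 1.0
dx = 1.0 / n
lap = lambda u: D * (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
u = np.exp(-100 * (np.linspace(0, 1, n) - 0.5)**2)
u = rkc1_step(lap, u, h=2e-4, s=20)
print(u.min(), u.max())
```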
The purpose of this work is to employ the Optimal Auxiliary Function Method (OAFM) for obtaining numerical approximations of time-dependent nonlinear partial differential equations (PDEs) that arise in many disciplines of science and engineering. The initial and first approximations of parabolic nonlinear PDEs with given initial conditions are generated by this method. The Galerkin method is then applied to estimate the coefficients that remain unknown, as sketched below. Finally, the coefficient values produced by the Galerkin method are inserted into the first approximation. In each example, all numerical computations and the corresponding absolute errors are provided in graphical and tabular form, and the rate of convergence attained by the proposed method is reported in a table.
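A hedged generic illustration of the Galerkin step only: unknown coefficients in an approximate solution are fixed by requiring the residual to be orthogonal to the basis functions. The sketch uses a standard linear two-point boundary value problem with a sine basis, not OAFM's auxiliary functions; all names and the toy problem are our own.

```python
import numpy as np

# Galerkin determination of unknown coefficients: make the residual of
# -u'' = f, u(0) = u(1) = 0 orthogonal to each basis function sin(k*pi*x).
x = np.linspace(0, 1, 2001)
dx = x[1] - x[0]
f = lambda x: x * (1 - x)                      # toy right-hand side
K = 5
phi = np.array([np.sin(k * np.pi * x) for k in range(1, K + 1)])

# Weak form with the orthogonal sine basis: (k*pi)^2 * c_k / 2 = \int f phi_k.
rhs = (f(x) * phi).sum(axis=1) * dx
c = rhs / (0.5 * (np.pi * np.arange(1, K + 1))**2)
u = c @ phi

exact = x**4 / 12 - x**3 / 6 + x / 12          # closed-form solution
print("max error:", np.abs(u - exact).max())
```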
In this paper, all the possibilities for the value distribution of a perfect nonlinear function from $\mathbb{F}_{p^m}$ to $\mathbb{F}_p$ are determined, where $p$ is an odd prime number and $m\in\mathbb{N}_+$. As an application, we determine the weight distributions of two classes of linear codes over $\mathbb{F}_p$ constructed from perfect nonlinear functions.
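A small sketch of the objects involved, for the classical example $f(x) = x^2$ on $\mathbb{F}_p$ (the case $m = 1$; the paper treats general perfect nonlinear maps from $\mathbb{F}_{p^m}$ to $\mathbb{F}_p$): the code verifies the perfect nonlinear property, that $x \mapsto f(x+a) - f(x)$ is balanced for every $a \neq 0$, and prints the value distribution.

```python
from collections import Counter

p = 7
f = lambda x: (x * x) % p

# PN property: for every a != 0, x -> f(x + a) - f(x) is a bijection of F_p
# (here f(x + a) - f(x) = 2 a x + a^2 mod p, which is affine with slope != 0).
for a in range(1, p):
    diffs = {(f((x + a) % p) - f(x)) % p for x in range(p)}
    assert diffs == set(range(p)), "not perfect nonlinear"

print(Counter(f(x) for x in range(p)))
# 0 is hit once; each nonzero square is hit twice (by +/- its square roots).
```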
We present a new approach to semiparametric inference using corrected posterior distributions. The method allows us to leverage the adaptivity, regularization and predictive power of nonparametric Bayesian procedures to estimate low-dimensional functionals of interest without being restricted by the holistic Bayesian formalism. Starting from a conventional nonparametric posterior, we target the functional of interest by transforming the entire distribution with a Bayesian bootstrap correction. We provide conditions for the resulting $\textit{one-step posterior}$ to possess calibrated frequentist properties and specialize the results for several canonical examples: the integrated squared density, the mean of a missing-at-random outcome, and the average causal treatment effect on the treated. The procedure is computationally attractive, requiring only a simple, efficient post-processing step that can be attached to any posterior sampling algorithm. Using the ACIC 2016 causal data analysis competition, we illustrate that our approach can outperform the existing state-of-the-art through the propagation of Bayesian uncertainty.
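A hedged sketch of the correction idea for the first canonical example, the integrated squared density $\psi(P) = \int p^2$, whose efficient influence function is $\phi(x) = 2(p(x) - \psi(P))$: each posterior draw of the density yields a plug-in value plus a correction term weighted by Dirichlet (Bayesian bootstrap) weights. The KDE-on-resample "posterior draws" below are a crude stand-in for a genuine nonparametric Bayesian posterior (e.g., a Dirichlet process mixture), and the details of the paper's one-step posterior may differ.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
X = rng.standard_normal(500)        # data; truth is 1 / (2 sqrt(pi))
n, n_draws = len(X), 200
grid = np.linspace(-6, 6, 1001)
dg = grid[1] - grid[0]

corrected = np.empty(n_draws)
for b in range(n_draws):
    # Stand-in for a nonparametric posterior draw of the density.
    kde = gaussian_kde(rng.choice(X, size=n, replace=True))
    psi_b = (kde(grid)**2).sum() * dg            # plug-in functional value
    # One-step correction with Bayesian-bootstrap (Dirichlet) weights.
    W = rng.dirichlet(np.ones(n))
    phi = 2 * (kde(X) - psi_b)                   # efficient influence function
    corrected[b] = psi_b + W @ phi

print("posterior mean:", corrected.mean(), "truth:", 1 / (2 * np.sqrt(np.pi)))
```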
Hamilton and Moitra (2021) showed that, in certain regimes, it is not possible to accelerate Riemannian gradient descent in the hyperbolic plane if we restrict ourselves to algorithms which make queries in a (large) bounded domain and which receive gradients and function values corrupted by a (small) amount of noise. We show that acceleration remains unachievable for any deterministic algorithm which receives exact gradient and function-value information (unbounded queries, no noise). Our results hold for the classes of strongly and nonstrongly geodesically convex functions, and for a large class of Hadamard manifolds including hyperbolic spaces and the symmetric space $\mathrm{SL}(n) / \mathrm{SO}(n)$ of positive definite $n \times n$ matrices of determinant one. This cements a surprising gap between the complexity of convex optimization and geodesically convex optimization: for hyperbolic spaces, Riemannian gradient descent is optimal on the class of smooth and strongly geodesically convex functions, in the regime where the condition number scales with the radius of the optimization domain. The key idea for proving the lower bound consists of perturbing the hard functions of Hamilton and Moitra (2021) with sums of bump functions chosen by a resisting oracle.