We propose two approaches, based on Riemannian optimization, for computing a stochastic approximation of the $p$th root of a stochastic matrix $A$. In the first approach, the approximation is found in the Riemannian manifold of positive stochastic matrices. In the second approach, we introduce the Riemannian manifold of positive stochastic matrices sharing with $A$ the Perron eigenvector and compute the approximation of the $p$th root of $A$ in this manifold. In this way, unlike the available methods based on constrained optimization, $A$ and its $p$th root approximation share the Perron eigenvector. This property is relevant, from a modelling point of view, in the embedding problem for Markov chains. Extensive numerical experiments show that, in the first approach, the Riemannian optimization methods are generally faster and more accurate than the available methods based on constrained optimization. In the second approach, even though the stochastic approximation of the $p$th root is found in a smaller set, the approximation is generally more accurate than the one obtained by standard constrained optimization.
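To make the underlying optimization problem concrete, the following is a minimal sketch that approximates the $p$th root of a small stochastic matrix by minimizing $\|X^p-A\|_F^2$; it replaces the Riemannian machinery of the paper with an unconstrained row-wise softmax reparametrization, and the function names and test matrix are purely illustrative.

```python
# Minimal sketch (not the paper's Riemannian algorithm): approximate the
# p-th root of a stochastic matrix A via an unconstrained reparametrization.
# Rows of X are kept positive and stochastic by a row-wise softmax, and we
# minimize ||X^p - A||_F^2 with a generic quasi-Newton solver.
import numpy as np
from scipy.optimize import minimize

def row_softmax(theta):
    """Map an unconstrained n x n array to a positive stochastic matrix."""
    e = np.exp(theta - theta.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def approximate_pth_root(A, p, n_restarts=3, seed=0):
    n = A.shape[0]
    rng = np.random.default_rng(seed)

    def cost(flat_theta):
        X = row_softmax(flat_theta.reshape(n, n))
        return np.linalg.norm(np.linalg.matrix_power(X, p) - A, "fro") ** 2

    best = None
    for _ in range(n_restarts):
        res = minimize(cost, rng.normal(size=n * n), method="L-BFGS-B")
        if best is None or res.fun < best.fun:
            best = res
    return row_softmax(best.x.reshape(n, n)), best.fun

if __name__ == "__main__":
    A = np.array([[0.7, 0.2, 0.1],
                  [0.3, 0.5, 0.2],
                  [0.1, 0.3, 0.6]])
    X, residual = approximate_pth_root(A, p=2)
    print("X =\n", X)
    print("||X^2 - A||_F^2 =", residual)
```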
We show that the problem of counting the number of $n$-variable unate functions reduces to the problem of counting the number of $n$-variable monotone functions. Using recently obtained results on $n$-variable monotone functions, we obtain counts of $n$-variable unate functions up to $n=9$. We use an enumeration strategy to obtain the number of $n$-variable balanced monotone functions up to $n=7$. We show that the problem of counting the number of $n$-variable balanced unate functions reduces to the problem of counting the number of $n$-variable balanced monotone functions, and consequently, we obtain the number of $n$-variable balanced unate functions up to $n=7$. Using enumeration, we obtain the numbers of equivalence classes of $n$-variable balanced monotone functions, unate functions and balanced unate functions up to $n=6$. Further, for each of the considered sub-classes of $n$-variable monotone and unate functions, we also obtain the corresponding numbers of $n$-variable non-degenerate functions.
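For small $n$, the quantities in question can be checked by brute-force enumeration of all truth tables. The sketch below is illustrative only and feasible roughly up to $n=4$; the paper's counts for larger $n$ rely on the reduction to monotone functions, not on exhaustive search.

```python
# Brute-force counts of monotone, balanced monotone, unate, and balanced
# unate Boolean functions of n variables (illustrative, small n only).
from itertools import product

def is_monotone(table, n):
    # f is monotone iff flipping any input bit from 0 to 1 never decreases f.
    for x in range(1 << n):
        for i in range(n):
            if not (x >> i) & 1 and table[x] > table[x | (1 << i)]:
                return False
    return True

def is_unate(table, n):
    # f is unate iff, in each variable, it is either non-decreasing or
    # non-increasing (checked independently per variable).
    for i in range(n):
        up = down = True
        for x in range(1 << n):
            if not (x >> i) & 1:
                lo, hi = table[x], table[x | (1 << i)]
                if lo > hi:
                    up = False
                if lo < hi:
                    down = False
        if not (up or down):
            return False
    return True

def count_classes(n):
    counts = {"monotone": 0, "balanced monotone": 0,
              "unate": 0, "balanced unate": 0}
    half = 1 << (n - 1)
    for table in product((0, 1), repeat=1 << n):
        balanced = sum(table) == half
        if is_monotone(table, n):
            counts["monotone"] += 1
            counts["balanced monotone"] += balanced
        if is_unate(table, n):
            counts["unate"] += 1
            counts["balanced unate"] += balanced
    return counts

if __name__ == "__main__":
    for n in (1, 2, 3):
        print(n, count_classes(n))
```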
In highly diffusive regimes, when the mean free path $\varepsilon$ tends to zero, the radiative transfer equation has an asymptotic behavior governed by a diffusion equation with a corresponding boundary condition. Generally, a numerical scheme for this problem has a truncation error containing an $\varepsilon^{-1}$ contribution, which leads to nonuniform convergence for small $\varepsilon$. Such phenomena require highly resolved discretizations, which degrades the performance of the numerical scheme in the diffusion limit. In this paper, we first provide a priori estimates for the scaled spherical harmonic ($P_N$) radiative transfer equation. Then we present an error analysis for the spherical harmonic discontinuous Galerkin (DG) method applied to the scaled radiative transfer equation, showing that, under some mild assumptions, its solutions converge uniformly in $\varepsilon$ to the solution of the scaled radiative transfer equation. We further present an optimal convergence result for the DG method with the upwind flux on Cartesian grids. Error estimates of $\left(1+\mathcal{O}(\varepsilon)\right)h^{k+1}$ (where $h$ is the maximum element length) are obtained when tensor product polynomials of degree at most $k$ are used.
We study an optimal control problem governed by elliptic PDEs with an interface, in which the control acts on the interface. Due to the jump of the coefficient across the interface and the control acting on the interface, the regularity of the solution of the control problem is limited on the whole domain, though it is higher on each subdomain. The control function, subject to pointwise inequality constraints, serves as the flux jump condition; we call this Neumann interface control. We use a simple uniform mesh that is independent of the interface. The standard linear finite element method cannot achieve optimal convergence on such a mesh. Therefore, the state and adjoint state equations are discretized by the piecewise linear immersed finite element method (IFEM). The accuracy of the piecewise constant approximation of the optimal control on the interface is improved by a postprocessing step that possesses superconvergence properties, and the variational discretization concept for the optimal control is used to improve the error estimates. Optimal error estimates for the control and suboptimal error estimates for the state and adjoint state are derived. Numerical examples with and without constraints are provided to illustrate the effectiveness of the proposed scheme and the correctness of the theoretical analysis.
Consider the problem of estimating a random variable $X$ from noisy observations $Y = X+ Z$, where $Z$ is standard normal, under the $L^1$ fidelity criterion. It is well known that the optimal Bayesian estimator in this setting is the conditional median. This work shows that the only prior distribution on $X$ that induces linearity in the conditional median is Gaussian. Along the way, several other results are presented. In particular, it is demonstrated that if the conditional distribution $P_{X|Y=y}$ is symmetric for all $y$, then $X$ must follow a Gaussian distribution. Additionally, we consider other $L^p$ losses and observe the following phenomenon: for $p \in [1,2]$, Gaussian is the only prior distribution that induces a linear optimal Bayesian estimator, and for $p \in (2,\infty)$, infinitely many prior distributions on $X$ can induce linearity. Finally, extensions are provided to encompass noise models leading to conditional distributions from certain exponential families.
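The Gaussian case can be illustrated numerically: with $X\sim\mathcal{N}(0,\sigma^2)$ the posterior $P_{X|Y=y}$ is Gaussian with mean $\frac{\sigma^2}{\sigma^2+1}y$, so its median is that same linear function of $y$, whereas a non-Gaussian prior generally produces a nonlinear median. The grid-based sketch below (an assumed setup, not taken from the paper) computes conditional medians for a Gaussian and a uniform prior.

```python
# Numerical illustration: with a Gaussian prior the posterior median of X
# given Y = y is the linear map (sigma^2 / (sigma^2 + 1)) * y, while a
# non-Gaussian prior (here Uniform[-1, 1]) gives a visibly nonlinear median.
import numpy as np
from scipy.stats import norm, uniform

def conditional_median(prior_pdf, y, grid):
    # Unnormalized posterior density on a grid: prior(x) * N(y - x; 0, 1).
    post = prior_pdf(grid) * norm.pdf(y - grid)
    cdf = np.cumsum(post)
    cdf /= cdf[-1]
    return np.interp(0.5, cdf, grid)

if __name__ == "__main__":
    grid = np.linspace(-10, 10, 20001)
    sigma = 2.0
    gauss_prior = lambda x: norm.pdf(x, scale=sigma)
    unif_prior = lambda x: uniform.pdf(x, loc=-1, scale=2)  # Uniform[-1, 1]
    for y in (-2.0, 0.0, 1.0, 3.0):
        med_gauss = conditional_median(gauss_prior, y, grid)
        med_unif = conditional_median(unif_prior, y, grid)
        print(f"y={y:+.1f}  Gaussian median={med_gauss:+.4f} "
              f"(linear prediction {sigma**2 / (sigma**2 + 1) * y:+.4f})  "
              f"uniform median={med_unif:+.4f}")
```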
We consider the weighted least squares spline approximation of a noisy dataset. By interpreting the weights as a probability distribution, we maximize the associated entropy subject to the constraint that the mean squared error is prescribed to a desired (small) value. Acting on this error value yields a robust regression method that automatically detects and removes outliers from the data during the fitting procedure, by assigning them a very small weight. We discuss the use of both spline functions and spline curves. A number of numerical illustrations are included to demonstrate the potential of the maximum-entropy approach in different application fields.
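One plausible realization of this idea, sketched below under our own assumptions rather than as the authors' exact algorithm, alternates a weighted fit with an entropy-maximizing weight update: the weights that maximize entropy under a prescribed weighted mean squared error take the Gibbs form $w_i\propto e^{-\lambda r_i^2}$, with $\lambda$ fixed by the error constraint. A polynomial basis is used in place of splines for brevity.

```python
# Hedged sketch of a maximum-entropy reweighting loop (not necessarily the
# authors' exact scheme): alternate a weighted least squares fit with a
# Gibbs-form weight update w_i ~ exp(-lam * r_i^2), where lam is found by
# bisection so that the weighted mean squared error matches the target.
import numpy as np

def weighted_fit(x, y, w, degree=5):
    # Weighted least squares with a polynomial design matrix.
    V = np.vander(x, degree + 1)
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(V * sw[:, None], y * sw, rcond=None)
    return coef, V @ coef

def gibbs_weights(r2, target):
    # Find lam >= 0 so that sum(w * r2) = target with w_i ~ exp(-lam * r2_i).
    def wmse(lam):
        w = np.exp(-lam * (r2 - r2.min()))
        w /= w.sum()
        return w, np.dot(w, r2)
    lo, hi = 0.0, 1.0
    while wmse(hi)[1] > target:
        hi *= 2.0
        if hi > 1e12:
            break
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if wmse(mid)[1] > target:
            lo = mid
        else:
            hi = mid
    return wmse(hi)[0]

def maxent_fit(x, y, target_mse, n_iter=30, degree=5):
    w = np.full(len(x), 1.0 / len(x))
    for _ in range(n_iter):
        coef, yhat = weighted_fit(x, y, w, degree)
        w = gibbs_weights((y - yhat) ** 2, target_mse)
    return coef, w

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x = np.linspace(0, 1, 60)
    y = np.sin(2 * np.pi * x) + 0.05 * rng.normal(size=x.size)
    y[[10, 35]] += 3.0            # inject two outliers
    coef, w = maxent_fit(x, y, target_mse=0.05 ** 2)
    print("smallest weights at indices:", np.argsort(w)[:2])  # should flag 10, 35
```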
A general a posteriori error analysis applies to five lowest-order finite element methods for two fourth-order semi-linear problems with trilinear non-linearity and a general source. A quasi-optimal smoother extends the source term to the discrete trial space, and more importantly, modifies the trilinear term in the stream-function vorticity formulation of the incompressible 2D Navier-Stokes and the von K\'{a}rm\'{a}n equations. This enables the first efficient and reliable a posteriori error estimates for the 2D Navier-Stokes equations in the stream-function vorticity formulation for Morley, two discontinuous Galerkin, $C^0$ interior penalty, and WOPSIP discretizations with piecewise quadratic polynomials.
In this paper, we study the problem of maximizing $k$-submodular functions subject to a knapsack constraint. For monotone objective functions, we present a $\frac{1}{2}(1-e^{-2})\approx 0.432$ greedy approximation algorithm. For the non-monotone case, we are the first to consider the knapsack-constrained problem and provide a greedy-type combinatorial algorithm with approximation ratio $\frac{1}{3}(1-e^{-3})\approx 0.317$.
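The flavor of such greedy-type combinatorial algorithms can be conveyed by a generic cost-effectiveness greedy, sketched below on a toy coverage-style objective. This is an illustration only; the paper's algorithms contain additional ingredients needed to certify the stated approximation ratios.

```python
# Generic cost-effectiveness greedy sketch for (k-)submodular maximization
# under a knapsack constraint: repeatedly add the (item, type) pair with the
# largest marginal gain per unit cost that still fits the budget.
def greedy_knapsack(items, k, cost, budget, f):
    """Assign each item at most one of k types, respecting the budget.

    items  : iterable of item identifiers
    cost   : dict item -> positive cost
    f      : oracle taking a dict {item: type} and returning a value
    """
    assignment, spent = {}, 0.0
    while True:
        base = f(assignment)
        best, best_ratio = None, 0.0
        for it in items:
            if it in assignment or spent + cost[it] > budget:
                continue
            for t in range(k):
                gain = f({**assignment, it: t}) - base
                ratio = gain / cost[it]
                if ratio > best_ratio:
                    best, best_ratio = (it, t), ratio
        if best is None:
            return assignment
        assignment[best[0]] = best[1]
        spent += cost[best[0]]

if __name__ == "__main__":
    # Toy coverage-style objective (illustrative only): assigning item i to
    # type t covers a fixed set of ground elements; f counts covered elements.
    covers = {("a", 0): {1, 2}, ("a", 1): {3},
              ("b", 0): {2, 3}, ("b", 1): {4, 5},
              ("c", 0): {1},    ("c", 1): {5, 6}}
    f = lambda asg: len(set().union(*(covers[(i, t)] for i, t in asg.items())))
    cost = {"a": 1.0, "b": 2.0, "c": 1.5}
    print(greedy_knapsack(["a", "b", "c"], k=2, cost=cost, budget=3.0, f=f))
```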
Transition amplitudes and transition probabilities are relevant to many areas of physics simulation, including the calculation of response properties and correlation functions. These quantities can also be related to solving linear systems of equations. Here we present three related algorithms for calculating transition probabilities. First, we extend a previously published short-depth algorithm, allowing for the two input states to be non-orthogonal. Building on this first procedure, we then derive a higher-depth algorithm based on Trotterization and Richardson extrapolation that requires fewer circuit evaluations. Third, we introduce a tunable algorithm that allows for trading off circuit depth and measurement complexity, yielding an algorithm that can be tailored to specific hardware characteristics. Finally, we implement proof-of-principle numerics for models in physics and chemistry and for a subroutine in variational quantum linear solving (VQLS). The primary benefits of our approaches are that (a) arbitrary non-orthogonal states may now be used with small increases in quantum resources, (b) we (like another recently proposed method) entirely avoid subroutines such as the Hadamard test that may require three-qubit gates to be decomposed, and (c) in some cases fewer quantum circuit evaluations are required as compared to the previous state-of-the-art in NISQ algorithms for transition probabilities.
In this paper we introduce a multilevel Picard approximation algorithm for semilinear parabolic partial integro-differential equations (PIDEs). We prove that the numerical approximation scheme converges to the unique viscosity solution of the PIDE under consideration. To that end, we derive a Feynman-Kac representation for the unique viscosity solution of the semilinear PIDE, extending the classical Feynman-Kac representation for linear PIDEs. Furthermore, we show that the algorithm does not suffer from the curse of dimensionality, i.e., the computational complexity of the algorithm is bounded polynomially in the dimension $d$ and the reciprocal of the prescribed accuracy $\varepsilon$. We also provide a numerical example in up to 10,000 dimensions to demonstrate its applicability.
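The linear Feynman-Kac representation underlying such schemes is easy to illustrate numerically: for $\partial_t u + \tfrac12\Delta u = 0$ with terminal condition $u(T,x)=g(x)$, one has $u(t,x)=\mathbb{E}[g(x+W_{T-t})]$. The sketch below covers only this linear, jump-free case (the paper's MLP recursion handles the semilinear PIDE setting) and checks the identity in $d=100$ dimensions for $g(x)=\|x\|^2$.

```python
# Monte Carlo check of the linear Feynman-Kac representation
# u(t, x) = E[g(x + W_{T-t})] for the backward heat equation.
import numpy as np

def feynman_kac_mc(g, t, x, T, n_samples=100_000, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=np.sqrt(T - t), size=(n_samples, x.size))
    return g(x + W).mean()

if __name__ == "__main__":
    d, t, T = 100, 0.0, 1.0
    x = np.ones(d)
    g = lambda z: (z ** 2).sum(axis=-1)       # g(x) = ||x||^2
    mc = feynman_kac_mc(g, t, x, T)
    exact = (x ** 2).sum() + d * (T - t)      # closed form for this g
    print(f"Monte Carlo: {mc:.2f}   exact: {exact:.2f}")
```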
We study a finite volume scheme approximating a parabolic-elliptic Keller-Segel system with power law diffusion with exponent $\gamma \in [1,3]$ and periodic boundary conditions. We derive conditional a posteriori bounds for the error measured in the $L^\infty(0,T;H^1(\Omega))$ norm for the chemoattractant and by a quasi-norm-like quantity for the density. These results are based on stability estimates and suitable conforming reconstructions of the numerical solution. We perform numerical experiments showing that our error bounds are linear in mesh width and elucidating the behaviour of the error estimator under changes of $\gamma$.