The formulation of Bayesian inverse problems involves choosing prior distributions; choices that seem equally reasonable may lead to significantly different conclusions. We develop a computational approach to better understand the impact of the hyperparameters defining the prior on the posterior statistics of the quantities of interest. Our approach relies on global sensitivity analysis (GSA) of Bayesian inverse problems with respect to the prior hyperparameters. This, however, is a challenging problem: a naive double-loop sampling approach would require running a prohibitive number of Markov chain Monte Carlo (MCMC) sampling procedures. The present work takes a foundational step toward making such a sensitivity analysis practical through (i) a judicious combination of efficient surrogate models and (ii) a tailored importance sampling method. In particular, we can perform accurate GSA of posterior prediction statistics with respect to prior hyperparameters without having to repeat MCMC runs. We demonstrate the effectiveness of the approach on a simple Bayesian linear inverse problem and a nonlinear inverse problem governed by an epidemiological model.
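The core idea of reusing one set of posterior samples across many prior hyperparameter values can be illustrated on a conjugate Gaussian toy problem (this is a minimal sketch, not the paper's method; all model choices and numbers below are illustrative assumptions). Samples drawn under a base prior are reweighted by the prior ratio to estimate posterior statistics under a perturbed prior, with no new sampling run:

```python
import numpy as np

# Toy model (assumed for illustration): x ~ N(0, tau^2) prior,
# observation y = x + N(0, sigma^2) noise, observed y = 2.0.
y, sigma = 2.0, 1.0
tau0, tau1 = 1.0, 2.0  # base and perturbed prior standard deviations

# Exact conjugate posterior mean under prior std tau (for checking)
post_mean = lambda tau: y * tau**2 / (tau**2 + sigma**2)

rng = np.random.default_rng(0)
# Stand-in for MCMC samples under tau0: draw from the known posterior
m0 = post_mean(tau0)
v0 = tau0**2 * sigma**2 / (tau0**2 + sigma**2)
xs = rng.normal(m0, np.sqrt(v0), size=200_000)

# Self-normalized importance weights: prior ratio p(x | tau1) / p(x | tau0)
# (normalizing constants cancel after self-normalization)
logw = -xs**2 / (2 * tau1**2) + xs**2 / (2 * tau0**2)
w = np.exp(logw - logw.max())
est = np.sum(w * xs) / np.sum(w)  # reweighted posterior mean under tau1
```

The reweighted estimate `est` matches the exact posterior mean under the perturbed prior; sweeping `tau1` over a hyperparameter grid in this fashion is what makes a double loop over MCMC runs unnecessary.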
The hyperbolic secant distribution has several generalizations with applications in finance. In this study, we explore the dual geometric structure of one such generalization, namely the beta-logistic distribution. Recent findings also interpret Bernoulli and Euler polynomials as moments of specific random variables, treating them as special cases within the framework of the beta-logistic distribution. We further show that the beta-logistic distribution admits an $\alpha$-parallel prior for any real number $\alpha$, which has potential applications in geometric statistical inference.
In this paper, a two-sided variable-coefficient space-fractional diffusion equation with a fractional Neumann boundary condition is considered. To overcome the weak singularity caused by the nonlocal space-fractional differential operators, a fractional block-centered finite difference (BCFD) method on general nonuniform grids is proposed by introducing an auxiliary fractional flux variable and using piecewise linear interpolations. However, like other numerical methods, the proposed method still produces linear algebraic systems with unstructured dense coefficient matrices on general nonuniform grids. Consequently, traditional direct solvers such as Gaussian elimination require $\mathcal{O}(M^2)$ memory and $\mathcal{O}(M^3)$ computational work per time level, where $M$ is the number of spatial unknowns in the numerical discretization. To address this issue, we combine the well-known sum-of-exponentials (SOE) approximation technique with the fractional BCFD method to propose a fast fractional BCFD algorithm. Based upon Krylov subspace iterative methods, fast matrix-vector multiplications of the resulting coefficient matrices with any vector are developed, which can be carried out in only $\mathcal{O}(MN_{exp})$ operations per iteration, where $N_{exp}\ll M$ is the number of exponentials in the SOE approximation. Moreover, the coefficient matrices need not be generated explicitly: they can be stored in $\mathcal{O}(MN_{exp})$ memory by storing only a few coefficient vectors. Numerical experiments are provided to demonstrate the efficiency and accuracy of the method.
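The mechanism behind the $\mathcal{O}(MN_{exp})$ matrix-vector product can be sketched in isolation (a minimal illustration, not the paper's discretization): once the kernel is a short sum of exponentials, the history sum satisfies a one-step recurrence per exponential. For simplicity the kernel below is taken to be exactly a sum of exponentials with assumed weights, exponents, and grid spacing; in the actual method these would come from an SOE fit to the fractional kernel on the given grid:

```python
import numpy as np

# Assumed SOE data (illustrative only)
weights = np.array([0.6, 0.3, 0.1])     # SOE weights w_j
exponents = np.array([1.0, 5.0, 25.0])  # SOE exponents s_j
h = 0.05                                # uniform grid spacing

def dense_matvec(v):
    """Direct O(M^2) evaluation of y_i = sum_{l<i} k((i-l)h) v_l,
    with kernel k(d) = sum_j w_j exp(-s_j d)."""
    M = len(v)
    y = np.zeros(M)
    for i in range(M):
        for l in range(i):
            y[i] += weights @ np.exp(-exponents * (i - l) * h) * v[l]
    return y

def soe_matvec(v):
    """Same product in O(M * N_exp): carry one history state per
    exponential via H_j(i+1) = e^{-s_j h} (H_j(i) + v_i), and read off
    y_i = sum_j w_j H_j(i)."""
    M = len(v)
    y = np.zeros(M)
    H = np.zeros_like(weights)
    decay = np.exp(-exponents * h)
    for i in range(M):
        y[i] = weights @ H
        H = decay * (H + v[i])
    return y

v = np.sin(np.arange(200) * 0.1)
```

Both routines return the same vector, but `soe_matvec` never forms the dense matrix, which is exactly the property exploited inside a Krylov iteration.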
Block majorization-minimization (BMM) is a simple iterative algorithm for nonconvex optimization that sequentially minimizes a majorizing surrogate of the objective function in each block coordinate while the other block coordinates are held fixed. We consider a family of BMM algorithms for minimizing smooth nonconvex objectives, where each parameter block is constrained within a subset of a Riemannian manifold. We establish that this algorithm converges asymptotically to the set of stationary points, and attains an $\epsilon$-stationary point within $\widetilde{O}(\epsilon^{-2})$ iterations. In particular, the assumptions for our complexity results are completely Euclidean when the underlying manifold is a product of Euclidean or Stiefel manifolds, although our analysis makes explicit use of the Riemannian geometry. Our general analysis applies to a wide range of algorithms with Riemannian constraints: Riemannian MM, block projected gradient descent, optimistic likelihood estimation, geodesically constrained subspace tracking, robust PCA, and Riemannian CP-dictionary-learning. We experimentally validate that our algorithm converges faster than standard Euclidean algorithms applied to the Riemannian setting.
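The BMM template described above can be sketched in the simplest Euclidean setting (a toy illustration, not the paper's Riemannian algorithm): for $f(X, Y) = \|A - XY\|_F^2$, each block update minimizes the standard Lipschitz quadratic majorizer, which amounts to a gradient step with step size $1/L$ and guarantees a monotonically nonincreasing objective. All dimensions and data below are arbitrary assumptions:

```python
import numpy as np

# Block majorization-minimization on f(X, Y) = ||A - X Y||_F^2.
# Each block is updated by minimizing the quadratic majorizer
# f(X) <= f(X_k) + <grad, X - X_k> + (L/2) ||X - X_k||^2,
# whose minimizer is a gradient step of size 1/L.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 15))
X = rng.standard_normal((20, 3))
Y = rng.standard_normal((3, 15))

f = lambda X, Y: np.linalg.norm(A - X @ Y) ** 2

vals = [f(X, Y)]
for _ in range(100):
    # Block 1: gradient in X is 2 (X Y - A) Y^T, Lipschitz const 2 ||Y Y^T||_2
    L_X = 2 * np.linalg.norm(Y @ Y.T, 2)
    X = X - (2 * (X @ Y - A) @ Y.T) / L_X
    # Block 2: gradient in Y is 2 X^T (X Y - A), Lipschitz const 2 ||X^T X||_2
    L_Y = 2 * np.linalg.norm(X.T @ X, 2)
    Y = Y - (2 * X.T @ (X @ Y - A)) / L_Y
    vals.append(f(X, Y))
```

The recorded objective values `vals` decrease monotonically, which is the defining descent property of majorization-minimization; the paper's contribution is establishing convergence and complexity when each block is further constrained to a Riemannian manifold.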
We investigate the performance of two approximation algorithms for the Hafnian of a nonnegative square matrix, namely the Barvinok and Godsil-Gutman estimators. We observe that, while there are examples of matrices for which these algorithms fail to provide a good approximation, the algorithms perform surprisingly well for adjacency matrices of random graphs. In most cases, the Godsil-Gutman estimator provides far superior accuracy. For dense graphs, however, both estimators exhibit slow growth of the variance. For complete graphs, we show analytically that the relative standard deviation $\sigma / \mu$ grows as the square root of the size of the graph. Finally, we simulate a Gaussian Boson Sampling experiment using the Godsil-Gutman estimator and show that the technique can successfully reproduce low-order correlation functions.
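The Godsil-Gutman estimator is short enough to sketch directly (a minimal implementation under the standard construction, not code from the paper): fill a skew-symmetric matrix with random signs times $\sqrt{A_{ij}}$, so that $\det W = \mathrm{Pf}(W)^2$ averages to the hafnian because the cross terms between perfect matchings vanish in expectation:

```python
import numpy as np

def godsil_gutman_hafnian(A, n_samples=50_000, seed=None):
    """Monte Carlo estimate of haf(A) for a symmetric nonnegative matrix A.

    For each sample, build a skew-symmetric W with
    W[i, j] = w_ij * sqrt(A[i, j]) for i < j, w_ij independent Rademacher
    (+/-1) signs.  Then det(W) = Pf(W)^2 and E[det(W)] = haf(A), since
    Pf(W) is a signed sum over perfect matchings and cross terms vanish.
    Returns the sample mean and sample standard deviation.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    sqrtA = np.sqrt(np.triu(A, k=1))
    ests = np.empty(n_samples)
    for s in range(n_samples):
        signs = rng.choice([-1.0, 1.0], size=(n, n))
        W = np.triu(signs * sqrtA, k=1)
        W = W - W.T                      # skew-symmetric
        ests[s] = np.linalg.det(W)
    return ests.mean(), ests.std()

# Adjacency matrix of the complete graph K4: its hafnian counts the
# perfect matchings of K4, of which there are 3.
A = np.ones((4, 4)) - np.eye(4)
mean, std = godsil_gutman_hafnian(A, seed=0)
```

Replacing the Rademacher signs with standard Gaussians gives the Barvinok estimator under the same construction.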
In this article, we study nonparametric inference for a covariate-adjusted regression function. This parameter captures the average association between a continuous exposure and an outcome after adjusting for other covariates. In particular, under certain causal conditions, this parameter corresponds to the average outcome had all units been assigned to a specific exposure level, known as the causal dose-response curve. We propose a debiased local linear estimator of the covariate-adjusted regression function, and demonstrate that our estimator converges pointwise to a mean-zero normal limit distribution. We use this result to construct asymptotically valid confidence intervals for function values and differences thereof. In addition, we use approximation results for the distribution of the supremum of an empirical process to construct asymptotically valid uniform confidence bands. Our methods do not require undersmoothing and permit the use of data-adaptive estimators of nuisance functions, and our estimator attains the optimal rate of convergence for a twice-differentiable function. We illustrate the practical performance of our estimator using numerical studies and an analysis of the effect of air pollution exposure on cardiovascular mortality.
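The local linear building block underlying such estimators can be sketched on its own (the paper's estimator adds a debiasing step and covariate adjustment not shown here; kernel, bandwidth, and data below are illustrative assumptions): at a point $x_0$, fit an intercept and slope by kernel-weighted least squares and report the intercept:

```python
import numpy as np

def local_linear(x0, X, Y, h):
    """Local linear regression estimate of E[Y | X = x0].

    Fits intercept + slope in (X - x0) by weighted least squares with
    Gaussian kernel weights of bandwidth h; the fitted intercept is the
    estimate at x0.
    """
    sw = np.exp(-0.25 * ((X - x0) / h) ** 2)   # sqrt of kernel weights
    D = np.column_stack([np.ones_like(X), X - x0])
    beta, *_ = np.linalg.lstsq(D * sw[:, None], Y * sw, rcond=None)
    return beta[0]

# On exactly linear data the local linear fit is exact at every point,
# which is why it has no bias term of first order.
X = np.linspace(0.0, 1.0, 50)
Y = 2 * X + 1
est = local_linear(0.3, X, Y, h=0.1)
```

Exactness on linear functions is the key design property of the local linear (as opposed to local constant) smoother, and it is what keeps the boundary and first-order bias under control before any debiasing is applied.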
We introduce an extension of first-order logic that comes equipped with additional predicates for reasoning about an abstract state. Sequents in the logic comprise a main formula together with pre- and postconditions in the style of Hoare logic, and the axioms and rules of the logic ensure that the assertions about the state compose in the correct way. The main result of the paper is a realizability interpretation of our logic that extracts programs into a mixed functional/imperative language. All programs expressible in this language act on the state in a sequential manner, and we make this intuition precise by interpreting them in a semantic metatheory using the state monad. Our basic framework is very general, and our intention is that it can be instantiated and extended in a variety of different ways. We outline one such extension in detail: a monadic version of Heyting arithmetic with a well-founded while rule. We conclude by outlining several other directions for future work.
With the increasing demand for intelligent systems capable of operating in different contexts (e.g., users on the move), correctly interpreting the user's need has become crucial for such systems to give consistent answers to user questions. The most effective applications addressing this task lie in the fields of natural language processing and semantic expansion of terms. These techniques aim to estimate the goal of an input query by reformulating it as an intent, commonly relying on textual resources built by exploiting different semantic relations such as \emph{synonymy}, \emph{antonymy}, and many others. The aim of this paper is to generate such resources using the labels of a given taxonomy as the source of information. The obtained resources are integrated into a plain classifier that reformulates a set of input queries as intents while tracking the effect of each relation, in order to quantify the impact of each semantic relation on the classification. As an extension, we evaluate the best tradeoff between improvement and noise introduction when combining such relations. The assessment is made by generating the resources and their combinations and using them to tune the classifier, which reformulates the user questions as labels. The evaluation employs a wide and varied taxonomy as a use case, exploiting its labels as the basis for the semantic expansion and producing several corpora with the purpose of enhancing the pseudo-query estimation.
This paper develops some theory of the matrix Dyson equation (MDE) for correlated linearizations and uses it to solve the problem of finding an asymptotic deterministic equivalent for the test error in random features regression. The theory developed for the correlated MDE includes existence and uniqueness, spectral support bounds, and stability properties. This theory is new in that it constructs deterministic equivalents for pseudoresolvents of a class of correlated linear pencils. As an application, the theory is used to give a deterministic equivalent of the test error in random features ridge regression, in a proportional scaling regime, conditioned on both the training and test datasets.
Quantum computing devices are believed to be powerful in solving the prime factorization problem, which is at the heart of widely deployed public-key cryptographic tools. However, implementing Shor's quantum factorization algorithm requires significant resources, scaling linearly with the size of the number to be factored; taking into account the overhead required for quantum error correction, current estimates indicate that about 20 million (noisy) physical qubits would be needed to factor a 2048-bit RSA key in 8 hours. A recent proposal by Yan et al. claims the possibility of solving the factorization problem with sublinear quantum resources. As we demonstrate in our work, this proposal lacks a systematic analysis of the computational complexity of the classical part of the algorithm, which exploits Schnorr's lattice-based approach. We provide several examples illustrating the need for additional resource analysis of the proposed quantum factorization algorithm.
This manuscript investigates the conservation laws of the incompressible Navier-Stokes equations (NSEs), written in the energy-momentum-angular momentum conserving (EMAC) formulation, after linearization by two-level methods. With appropriate correction steps (e.g., Stokes/Newton corrections), we show that the two-level methods, discretized from the EMAC NSEs, preserve momentum and angular momentum and asymptotically preserve energy. Error estimates and (asymptotic) conservation properties are analyzed and obtained, and numerical experiments are conducted to validate the theoretical results, mainly confirming that the two-level linearized methods indeed (almost) retain the conservation laws. Moreover, experimental error estimates and optimal convergence rates for two newly defined types of pressure approximation in the EMAC NSEs are also obtained.