The recent statistical finite element method (statFEM) provides a coherent statistical framework for synthesising finite element models with observed data. By embedding uncertainty within the governing equations, finite element solutions are updated to give a posterior distribution that quantifies all sources of uncertainty associated with the model. However, to incorporate all sources of uncertainty, one must integrate over the uncertainty associated with the model parameters: the well-known forward problem of uncertainty quantification. In this paper, we make use of Langevin dynamics to solve the statFEM forward problem, studying the utility of the unadjusted Langevin algorithm (ULA), a Metropolis-free Markov chain Monte Carlo sampler, to build a sample-based characterisation of this otherwise intractable measure. Due to the structure of the statFEM problem, these methods are able to solve the forward problem without explicit full PDE solves, requiring only sparse matrix-vector products. ULA is also gradient-based, and hence provides a scalable approach for high numbers of degrees of freedom. Leveraging the theory behind Langevin-based samplers, we provide theoretical guarantees on sampler performance, demonstrating convergence, for both the prior and posterior, in the Kullback-Leibler divergence and in the Wasserstein-2 distance, with further results on the effect of preconditioning. Numerical experiments are also provided, for both the prior and posterior, to demonstrate the efficacy of the sampler, and an accompanying Python package is included.
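To make the sampler concrete, the following is a minimal sketch of ULA targeting a Gaussian measure with a sparse precision matrix, the structure statFEM priors typically exhibit; the tridiagonal matrix below is an illustrative stand-in, not the accompanying package's API. Note that each iteration costs a single sparse matrix-vector product, with no linear solve.

```python
import numpy as np
import scipy.sparse as sp

def ula_gaussian(K, n_steps, h, rng, x0=None):
    """Unadjusted Langevin algorithm for pi(x) proportional to exp(-x^T K x / 2).

    Each step needs only one sparse matrix-vector product K @ x,
    never a full linear solve."""
    d = K.shape[0]
    x = np.zeros(d) if x0 is None else x0.copy()
    samples = np.empty((n_steps, d))
    for k in range(n_steps):
        noise = rng.standard_normal(d)
        x = x - h * (K @ x) + np.sqrt(2.0 * h) * noise  # Euler-Maruyama step
        samples[k] = x
    return samples

# Toy sparse SPD precision: 1-D Laplacian plus a mass term (hypothetical stand-in).
d = 200
K = sp.diags([-1.0, 2.2, -1.0], offsets=[-1, 0, 1], shape=(d, d), format="csr")
rng = np.random.default_rng(0)
samples = ula_gaussian(K, n_steps=5000, h=0.2, rng=rng)
```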
There is much recent interest in solving nonconvex min-max optimization problems owing to their broad applications in many areas, including machine learning, networked resource allocation, and distributed optimization. Perhaps the most popular first-order method for solving min-max optimization is the so-called simultaneous (or single-loop) gradient descent-ascent algorithm, due to its simplicity of implementation. However, theoretical guarantees on the convergence of this algorithm are very sparse, since it can diverge even on a simple bilinear problem. In this paper, our focus is to characterize the finite-time performance (or convergence rates) of the continuous-time variant of the simultaneous gradient descent-ascent algorithm. In particular, we derive the rates of convergence of this method under a number of different conditions on the underlying objective function, namely, two-sided Polyak-Łojasiewicz (PL), one-sided PL, nonconvex-strongly-concave, and strongly-convex-nonconcave conditions. Our convergence results improve on those in prior works under the same conditions on the objective function. The key idea in our analysis is to use classic singular perturbation theory and coupling Lyapunov functions to address the time-scale difference and interactions between the gradient descent and ascent dynamics. Our results on the behavior of the continuous-time algorithm may be used to enhance the convergence properties of its discrete-time counterpart.
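As an illustration of the dynamics being analysed, here is a hedged sketch: a forward-Euler integration of the simultaneous gradient descent-ascent flow on a toy quadratic saddle, with a timescale ratio eps mimicking the singular-perturbation viewpoint. The objective and all parameter values are illustrative only.

```python
import numpy as np

def f_grad(x, y, a=1.0, b=2.0, c=1.0):
    """Gradients of the toy saddle f(x, y) = a x^2 / 2 + b x y - c y^2 / 2."""
    return a * x + b * y, b * x - c * y

def simultaneous_gda(x0, y0, eps=0.1, dt=1e-3, T=10.0):
    """Forward-Euler integration of x' = -grad_x f, y' = (1/eps) grad_y f.

    eps < 1 makes the ascent dynamics faster: the time-scale separation
    exploited by singular perturbation arguments."""
    x, y = x0, y0
    traj = []
    for _ in range(int(T / dt)):
        gx, gy = f_grad(x, y)
        x, y = x - dt * gx, y + (dt / eps) * gy  # simultaneous update
        traj.append((x, y))
    return np.array(traj)

traj = simultaneous_gda(x0=1.0, y0=-1.0)
print("final iterate:", traj[-1])  # should approach the saddle point (0, 0)
```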
We study the temperature control problem for Langevin diffusions in the context of non-convex optimization. The classical optimal control for such a problem is of bang-bang type, which is overly sensitive to errors. A remedy is to allow the diffusions to explore other temperature values and hence smooth out the bang-bang control. We accomplish this with a stochastic relaxed control formulation that randomizes the temperature control and regularizes its entropy. We derive, in terms of the solution to a Hamilton-Jacobi-Bellman (HJB) partial differential equation, a state-dependent truncated exponential distribution from which temperatures can be sampled within a Langevin algorithm. We carry out a numerical experiment on a one-dimensional baseline example, in which the HJB equation can be easily solved, to compare the performance of the algorithm with three other available algorithms in search of a global optimum.
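The following sketch illustrates the resulting algorithmic pattern: at each Langevin step the temperature is redrawn from a state-dependent truncated exponential distribution. Since the actual rate and truncation interval come from the HJB solution, the rate_fn, lo, and hi below are hypothetical placeholders.

```python
import numpy as np

def sample_truncated_exp(rate, lo, hi, rng):
    """Inverse-CDF draw from Exp(rate) truncated to [lo, hi]."""
    u = rng.random()
    z = 1.0 - np.exp(-rate * (hi - lo))
    return lo - np.log(1.0 - u * z) / rate

def langevin_random_temperature(grad_f, x0, rate_fn, lo, hi, h, n_steps, rng):
    """Langevin steps whose temperature is redrawn each iteration from a
    state-dependent truncated exponential (rate_fn stands in for the
    distribution obtained from the HJB solution in the paper)."""
    x = x0
    for _ in range(n_steps):
        temp = sample_truncated_exp(rate_fn(x), lo, hi, rng)  # temperature draw
        x = x - h * grad_f(x) + np.sqrt(2.0 * h * temp) * rng.standard_normal()
    return x

# Toy double-well objective f(x) = (x^2 - 1)^2 with a hypothetical rate function.
grad_f = lambda x: 4.0 * x * (x * x - 1.0)
rng = np.random.default_rng(1)
x_final = langevin_random_temperature(
    grad_f, x0=-2.0, rate_fn=lambda x: 1.0 + x * x, lo=0.01, hi=1.0,
    h=1e-3, n_steps=20_000, rng=rng)
```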
We discretize a tangential tensor field equation using a surface finite element approach with a penalization term to ensure almost-tangentiality. It is natural to measure the quality of such a discretization intrinsically, i.e., to examine the tangential convergence behavior in contrast to the normal behavior. We show optimal-order convergence with respect to the tangential quantities, in particular for an isogeometric penalization term that is based only on the geometric information of the discrete surface.
Discretization of continuous-time diffusion processes is a widely recognized method for sampling. However, the common requirement that potentials be smooth (gradient Lipschitz) is a considerable restriction. This paper studies the problem of sampling through Euler discretization, where the potential function is assumed to be a mixture of weakly smooth distributions and to satisfy a weak dissipativity condition. We establish convergence in Kullback-Leibler (KL) divergence, with the number of iterations needed to reach an $\epsilon$-neighborhood of the target distribution depending only polynomially on the dimension. We relax the degenerate convexity-at-infinity conditions of \citet{erdogdu2020convergence} and prove convergence guarantees under a Poincar\'{e} inequality or non-strong convexity outside a ball. In addition, we also provide convergence in the $L_{\beta}$-Wasserstein metric for the smoothed potential.
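As a minimal illustration of the setting, the sketch below runs the Euler (Langevin Monte Carlo) scheme on the single-component weakly smooth potential $U(x)=\|x\|^{\alpha}$ with $1<\alpha<2$, whose gradient is only Hölder continuous at the origin; the parameters are illustrative, not those analysed in the paper.

```python
import numpy as np

def lmc_weakly_smooth(x0, h, n_steps, rng, alpha=1.5):
    """Euler (Langevin Monte Carlo) iterates for U(x) = ||x||^alpha, 1 < alpha < 2.

    The gradient alpha * ||x||^(alpha - 2) * x is only Hölder continuous at the
    origin, i.e. the potential is weakly smooth rather than gradient Lipschitz."""
    x = x0.copy()
    for _ in range(n_steps):
        nrm = np.linalg.norm(x)
        grad = alpha * nrm ** (alpha - 2.0) * x if nrm > 0 else np.zeros_like(x)
        x = x - h * grad + np.sqrt(2.0 * h) * rng.standard_normal(x.shape)
    return x

rng = np.random.default_rng(2)
draws = np.array([lmc_weakly_smooth(np.ones(5), h=1e-2, n_steps=2000, rng=rng)
                  for _ in range(100)])
```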
Chi-squared tests for lack of fit are traditionally employed to find evidence against a hypothesized model, with the model accepted if the Pearson statistic comparing observed and expected numbers of observations falling within cells is not significantly large. However, if one really wants evidence for goodness of fit, it is better to adopt an equivalence testing approach in which small values of the chi-squared statistic are evidence for the desired model. This method requires one to define what is meant by equivalence to the desired model, and guidelines are proposed. A simple extension of the classical normalizing transformation for the non-central chi-squared distribution then places these values on a simple-to-interpret calibration scale for evidence. It is shown that the evidence can distinguish between normal and nearby models, as well as between the Poisson and over-dispersed models. Applications to the evaluation of random number generators and to the uniformity of the digits of pi are included. Sample sizes required to obtain a desired expected evidence for goodness of fit are also provided.
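To fix ideas, here is a hedged sketch of the equivalence-testing direction: small Pearson statistics yield evidence for the model, calibrated against a non-central chi-squared margin. The probit transformation used below as an evidence scale is an illustrative stand-in for the paper's normalizing transformation, and the margin lam0 is hypothetical.

```python
import numpy as np
from scipy import stats

def equivalence_evidence(observed, expected, lam0):
    """Pearson chi-squared equivalence test against a noncentrality margin.

    Small statistics favor equivalence: under H0 ("fit is at least lam0 away")
    the statistic is stochastically larger than noncentral chi2(df, lam0), so
    p = P(ncx2 <= T) is the one-sided equivalence p-value.  The probit of p is
    an illustrative normal-scale evidence measure, not the paper's exact
    transformation."""
    observed = np.asarray(observed, float)
    expected = np.asarray(expected, float)
    T = np.sum((observed - expected) ** 2 / expected)  # Pearson statistic
    df = observed.size - 1
    p = stats.ncx2.cdf(T, df, lam0)
    return T, p, stats.norm.ppf(p)

# Digits of a random number generator binned into 10 cells, n = 10000 draws.
rng = np.random.default_rng(3)
counts = np.bincount(rng.integers(0, 10, size=10_000), minlength=10)
T, p, z = equivalence_evidence(counts, np.full(10, 1_000.0), lam0=20.0)
print(f"T = {T:.2f}, equivalence p-value = {p:.4f}, evidence z = {z:.2f}")
```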
Modern statistical applications often involve minimizing an objective function that may be nonsmooth and/or nonconvex. This paper focuses on a broad Bregman-surrogate algorithm framework that includes local linear approximation, mirror descent, iterative thresholding, DC programming, and many others as particular instances. The recharacterization via generalized Bregman functions enables us to construct suitable error measures and establish global convergence rates for nonconvex and nonsmooth objectives in possibly high dimensions. For sparse learning problems with a composite objective, under some regularity conditions, the obtained estimators, as the surrogate's fixed points, though not necessarily local minimizers, enjoy provable statistical guarantees, and the sequence of iterates can be shown to approach the statistical truth within the desired accuracy geometrically fast. The paper also studies how to design adaptive momentum-based accelerations without assuming convexity or smoothness, by carefully controlling stepsize and relaxation parameters.
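As one concrete instance of the framework, the sketch below implements iterative (soft) thresholding for the lasso, where each step exactly minimizes a quadratic surrogate of the composite objective; the data and tuning parameters are illustrative.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal map of t * ||.||_1 (elementwise soft thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(X, y, lam, step=None, n_iter=500):
    """Iterative soft thresholding for 0.5 * ||y - X b||^2 + lam * ||b||_1,
    one concrete instance of the Bregman-surrogate framework (the quadratic
    surrogate is minimized exactly at each step)."""
    n, p = X.shape
    if step is None:
        step = 1.0 / np.linalg.norm(X, 2) ** 2  # 1 / Lipschitz constant of grad
    b = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y)
        b = soft_threshold(b - step * grad, step * lam)
    return b

# Sparse-recovery toy: 5 nonzero coefficients out of 50 (illustrative data).
rng = np.random.default_rng(4)
X = rng.standard_normal((200, 50))
b_true = np.zeros(50)
b_true[:5] = 2.0
y = X @ b_true + 0.1 * rng.standard_normal(200)
b_hat = ista(X, y, lam=5.0)
```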
This paper considers the finite element solution of the boundary value problem for Poisson's equation and proposes a guaranteed a posteriori local error estimation based on the hypercircle method. Compared to the existing literature on qualitative error estimation, the proposed error estimation provides an explicit and sharp bound for the approximation error in the subdomain of interest, and its efficiency can be enhanced by further utilizing a non-uniform mesh. The result is applicable to problems without $H^2$-regularity, since it only utilizes the first-order derivative of the solution. The efficiency of the proposed method is demonstrated by numerical experiments for both convex and non-convex 2D domains with uniform or non-uniform meshes.
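The hypercircle idea is easiest to see in one dimension. The sketch below, a simplified 1-D analogue rather than the paper's 2-D subdomain estimator, solves $-u''=f$ with linear finite elements and evaluates the guaranteed Prager-Synge bound $\|u'-u_h'\|\le\|\tau-u_h'\|$ for an equilibrated flux $\tau$ satisfying $\tau'=-f$.

```python
import numpy as np

def poisson_fem_hypercircle(n_el, f=1.0):
    """Linear FEM for -u'' = f on (0,1), u(0) = u(1) = 0, plus a guaranteed
    energy-error bound from the Prager-Synge (hypercircle) identity:
    ||u' - u_h'|| <= ||tau - u_h'|| for any flux tau with tau' = -f."""
    h = 1.0 / n_el
    x = np.linspace(0.0, 1.0, n_el + 1)
    # Tridiagonal stiffness matrix and load vector for the interior nodes.
    A = (np.diag(2.0 * np.ones(n_el - 1))
         - np.diag(np.ones(n_el - 2), 1) - np.diag(np.ones(n_el - 2), -1)) / h
    b = f * h * np.ones(n_el - 1)
    U = np.zeros(n_el + 1)
    U[1:-1] = np.linalg.solve(A, b)
    # Equilibrated flux tau(x) = c - f*x; c chosen to minimize the bound.
    slopes = np.diff(U) / h                   # u_h' on each element
    c = np.sum(slopes * h) + f * 0.5          # L2-optimal constant
    # ||tau - u_h'||^2 element by element (exact integration of a linear fn).
    g_left = slopes - c + f * x[:-1]
    g_right = slopes - c + f * x[1:]
    err2 = np.sum((g_right ** 3 - g_left ** 3) / (3.0 * f))
    return U, np.sqrt(err2)

U, bound = poisson_fem_hypercircle(16)
print(f"guaranteed energy-error bound: {bound:.2e}")
```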
The radiation magnetohydrodynamics (RMHD) system couples the ideal magnetohydrodynamics equations with a gray radiation transfer equation. The main challenge is that the radiation travels at the speed of light, while the magnetohydrodynamics evolves on the time scale of the fluid; the time scales of these two processes can differ dramatically. In order to use mesh sizes and time steps that are independent of the speed of light, asymptotic preserving (AP) schemes in both space and time are desired. In this paper, we develop an AP scheme in both space and time for the RMHD system. Two different scalings are considered: one results in an equilibrium diffusion limit system, while the other results in a non-equilibrium system. The main idea is to decompose the radiative intensity into three parts, each of which is treated differently with suitable combinations of explicit and implicit discretizations, guaranteeing favorable stability conditions and computational efficiency. The performance of the AP method is presented for both optically thin and thick regions, as well as for the radiative shock problem.
We give an overview of Bayesian estimation, hypothesis testing, and model averaging, and illustrate how they benefit parametric survival analysis. We contrast the Bayesian framework with the currently dominant frequentist approach and highlight advantages such as seamless incorporation of historical data, continuous monitoring of evidence, and acknowledgment of uncertainty about the true data-generating process. We illustrate the application of the Bayesian approaches on an example data set from a colon cancer trial. Using a simulation study, we compare Bayesian parametric survival analysis with frequentist models using AIC/BIC model selection, in both fixed-n and sequential designs. In the example data set, the Bayesian framework provided evidence for the absence of a positive treatment effect on disease-free survival in patients with resected colon cancer. Furthermore, the Bayesian sequential analysis would have terminated the trial 13.3 months earlier than the standard frequentist analysis. In the simulation study with sequential designs, the Bayesian framework on average reached a decision in almost half the time required by its frequentist counterparts, while maintaining the same power and an appropriate false-positive rate. Under model misspecification, the Bayesian framework showed a higher false-negative rate than the frequentist counterparts, resulting in a higher proportion of undecided trials. In fixed-n designs, the Bayesian framework showed slightly higher power, slightly elevated error rates, and lower bias and RMSE when estimating treatment effects in small samples. We have made the analytic approach readily available in the RoBSA R package. The outlined Bayesian framework provides several benefits when applied to parametric survival analyses: it uses data more efficiently, is capable of greatly shortening the length of clinical trials, and provides a richer set of inferences.
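RoBSA is an R package; for illustration only, the following Python sketch shows the simplest conjugate building block of Bayesian parametric survival analysis, an exponential model with right censoring and a Gamma prior on the hazard rate, together with a Monte Carlo posterior comparison of two hypothetical arms. It is not the package's methodology.

```python
import numpy as np
from scipy import stats

def exponential_survival_posterior(times, events, a0=1.0, b0=1.0):
    """Conjugate Gamma posterior for the hazard rate of an exponential
    survival model with right censoring: rate ~ Gamma(a0, b0) prior gives
    posterior Gamma(a0 + #events, b0 + total follow-up time)."""
    d = np.sum(events)   # number of observed deaths
    t = np.sum(times)    # total time at risk (events + censored)
    return stats.gamma(a=a0 + d, scale=1.0 / (b0 + t))

# Hypothetical two-arm data: follow-up times (months) and event indicators.
control = exponential_survival_posterior(
    times=np.array([12.0, 30.5, 24.0, 8.2, 40.0]),
    events=np.array([1, 0, 1, 1, 0]))
treated = exponential_survival_posterior(
    times=np.array([25.0, 36.0, 18.5, 42.0, 33.0]),
    events=np.array([0, 1, 0, 0, 1]))

# Posterior probability that treatment lowers the hazard (Monte Carlo).
rng = np.random.default_rng(5)
p_benefit = np.mean(treated.rvs(10_000, random_state=rng)
                    < control.rvs(10_000, random_state=rng))
print(f"P(hazard_treated < hazard_control | data) = {p_benefit:.3f}")
```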
This paper presents an approach for trajectory-centric learning control based on contraction metrics and disturbance estimation for nonlinear systems subject to matched uncertainties. The approach allows a broad class of model learning tools, including deep neural networks, to be used to learn uncertain dynamics while still providing guarantees of transient tracking performance throughout the learning phase, including the special case of no learning. Within the proposed approach, a disturbance estimation law is designed to estimate the pointwise value of the uncertainty, with pre-computable estimation error bounds (EEBs). The learned dynamics, the estimated disturbances, and the EEBs are then incorporated into a robust Riemannian energy condition to compute the control law, which guarantees exponential convergence of actual trajectories to desired ones throughout the learning phase, even when the learned model is poor. On the other hand, as its accuracy improves, the learned model can be incorporated into a high-level planner to plan better trajectories with improved performance, e.g., lower energy consumption and shorter travel time. The proposed framework is validated on a planar quadrotor navigation example.
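The paper's estimation law is tied to its contraction-based analysis; as a generic illustration of pointwise disturbance estimation, the sketch below runs an extended-state observer on a scalar plant with a constant matched disturbance. All dynamics and gains are hypothetical stand-ins.

```python
import numpy as np

def simulate_eso(d_true=0.8, l1=20.0, l2=100.0, dt=1e-3, T=2.0):
    """Extended-state observer estimating a matched disturbance d in the
    scalar plant x' = -x + u + d.  This is a generic stand-in for the
    paper's estimation law; gains l1, l2 trade estimation bandwidth
    against noise amplification."""
    x, x_hat, d_hat, u = 0.0, 0.0, 0.0, 0.0
    hist = []
    for _ in range(int(T / dt)):
        e = x - x_hat                        # measurable state error
        dx = -x + u + d_true                 # true plant dynamics
        dx_hat = -x + u + d_hat + l1 * e     # observer copy + correction
        x += dt * dx
        x_hat += dt * dx_hat
        d_hat += dt * l2 * e                 # disturbance estimate update
        hist.append(d_hat)
    return np.array(hist)

d_hat_traj = simulate_eso()
print(f"final disturbance estimate: {d_hat_traj[-1]:.3f} (true value 0.8)")
```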