Uniform error estimates of a bi-fidelity method for a kinetic-fluid coupled model with random initial inputs in the fine particle regime are proved in this paper. The model is a system coupling the incompressible Navier-Stokes equations to the Vlasov-Fokker-Planck equations for a mixture of flows with distinct particle sizes. The main analytic tool is a hypocoercivity analysis of the multi-phase Navier-Stokes-Vlasov-Fokker-Planck system with uncertainties, considering solutions in a perturbative setting near the global equilibrium. This allows us to obtain error estimates in both the kinetic and hydrodynamic regimes.
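For orientation, the generic bi-fidelity surrogate (a standard construction recalled here for context; the notation $u^L$, $u^H$, $z_k$, $c_k$ is illustrative and not taken from the paper) reuses coefficients computed from cheap low-fidelity solves to combine a few expensive high-fidelity snapshots:
\[
u^H(z)\;\approx\;u^B(z):=\sum_{k=1}^{n}c_k(z)\,u^H(z_k),\qquad \sum_{l=1}^{n}\big\langle u^L(z_k),u^L(z_l)\big\rangle\,c_l(z)=\big\langle u^L(z_k),u^L(z)\big\rangle,
\]
where $z_1,\dots,z_n$ are sample points in the random space selected from low-fidelity runs.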
We develop sampling algorithms to fit Bayesian hierarchical models whose computational complexity scales linearly with the number of observations and the number of parameters in the model. We focus on crossed random effects and nested multilevel models, which are used ubiquitously in the applied sciences. The posterior dependence in both classes is sparse: in crossed random effects models it resembles a random graph, whereas in nested multilevel models it is tree-structured. For each class we identify a framework for scalable computation, building on previous work. Methods for crossed models are based on extensions of appropriately designed collapsed Gibbs samplers, where we introduce the idea of local centering; methods for nested models are based on sparse linear algebra and data augmentation. We provide a theoretical analysis of the proposed algorithms in some simplified settings, including a comparison with previously proposed methodologies and an average-case analysis based on random graph theory. Numerical experiments, including two challenging real data analyses on predicting electoral results and real estate prices, compare the proposed methods with off-the-shelf Hamiltonian Monte Carlo and show drastic improvements in performance.
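As a minimal, hedged sketch of the setting (a plain, uncollapsed Gibbs sweep for a two-factor crossed random effects model with known variances; the paper's collapsed and locally centered samplers are more elaborate, and all names below are illustrative):

```python
import numpy as np

# Minimal illustrative sketch (assumed notation, not the paper's algorithm):
# a plain Gibbs sweep for y[n] = mu + a[i_idx[n]] + b[j_idx[n]] + noise,
# with known variances sigma2, tau2_a, tau2_b and a flat prior on mu.
def gibbs_sweep(y, i_idx, j_idx, mu, a, b, sigma2, tau2_a, tau2_b, rng):
    # Update each row effect a_i from its conjugate normal full conditional.
    for i in range(a.size):
        mask = i_idx == i
        prec = mask.sum() / sigma2 + 1.0 / tau2_a
        mean = (y[mask] - mu - b[j_idx[mask]]).sum() / sigma2 / prec
        a[i] = rng.normal(mean, prec ** -0.5)

    # Update each column effect b_j symmetrically.
    for j in range(b.size):
        mask = j_idx == j
        prec = mask.sum() / sigma2 + 1.0 / tau2_b
        mean = (y[mask] - mu - a[i_idx[mask]]).sum() / sigma2 / prec
        b[j] = rng.normal(mean, prec ** -0.5)

    # Update the global mean given the effects (flat prior).
    resid = y - a[i_idx] - b[j_idx]
    mu = rng.normal(resid.mean(), (sigma2 / y.size) ** 0.5)
    return mu, a, b
```

The sparse posterior dependence mentioned above is visible here: each row effect interacts only with the column effects it shares observations with, which is the random bipartite graph structure exploited for scalability.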
Certifying the positivity of trigonometric polynomials is of central importance for design problems in discrete-time signal processing. It is well known from the Riesz-Fej\'er spectral factorization theorem that any univariate trigonometric polynomial positive on the unit circle can be decomposed as a Hermitian square with complex coefficients. Here we focus on the case of polynomials with Gaussian integer coefficients, i.e., with real and imaginary parts being integers. We design, analyze and compare, theoretically and practically, three hybrid numeric-symbolic algorithms computing weighted sums of Hermitian squares decompositions for univariate trigonometric polynomials with Gaussian integer coefficients that are positive on the unit circle. The numerical steps that the first and second algorithms rely on are complex root isolation and semidefinite programming, respectively. An exact sum of Hermitian squares decomposition is then obtained thanks to compensation techniques. The third algorithm, also based on complex semidefinite programming, is an adaptation of the rounding and projection algorithm of Peyrl and Parrilo. For all three algorithms, we prove bit complexity and output size estimates that are polynomial in the degree of the input and linear in the maximum bitsize of its coefficients. We compare their performance on randomly chosen benchmarks, and further design a certified finite impulse response filter.
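As a hand-checkable illustration (not one of the paper's benchmarks), the Laurent polynomial $5 + 2z + 2z^{-1}$, which has Gaussian integer coefficients and equals $5 + 4\cos\theta > 0$ on the unit circle, is a single Hermitian square:
\[
5 + 2z + 2z^{-1} \;=\; (2+z)\,\overline{(2+z)} \;=\; |2+z|^2 \qquad \text{on } |z| = 1,\ \text{where } \bar z = z^{-1}.
\]
The algorithms above produce such certificates, in general as weighted sums of several Hermitian squares, with exact coefficient data recovered from the numerical output via compensation.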
Recently, advances in deep learning-based superpixel segmentation have improved both the efficiency and the performance of segmentation. However, a significant challenge remains: generating superpixels that strictly adhere to object boundaries while conveying rich visual significance, especially when cross-surface color correlations interfere with object boundaries. Drawing inspiration from neural structure and visual mechanisms, we propose a biologically inspired network architecture comprising an Enhanced Screening Module (ESM) and a novel Boundary-Aware Label (BAL) for superpixel segmentation. The ESM enhances semantic information by simulating the interactive projection mechanisms of the visual cortex. Additionally, the BAL emulates the spatial frequency characteristics of visual cortical cells to facilitate the generation of superpixels with strong boundary adherence. We demonstrate the effectiveness of our approach through evaluations on the BSDS500 and NYUv2 datasets.
We tackle the problem of sampling from intractable high-dimensional density functions, a fundamental task that arises frequently in machine learning and statistics. We extend recent sampling-based approaches that leverage controlled stochastic processes to model approximate samples from these target densities. The main drawback of these approaches is that the training objective requires full trajectories to compute, resulting in sluggish credit assignment because the learning signal is present only at the terminal time. In this work, we present Diffusion Generative Flow Samplers (DGFS), a sampling-based framework in which the learning process can be tractably broken down into short partial trajectory segments by parameterizing an additional "flow function". Our method takes inspiration from the theory developed for generative flow networks (GFlowNets), allowing us to make use of intermediate learning signals and to benefit from off-policy exploration capabilities. Through a variety of challenging experiments, we demonstrate that DGFS yields more accurate estimates of the normalization constant than closely related prior methods.
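Schematically, and borrowing the sub-trajectory balance idea from the GFlowNet literature that DGFS builds on (the precise DGFS objective may differ in its parameterization), learning a flow function $F$ lets every partial segment $(s_m, \dots, s_n)$ of a trajectory provide a training signal through the consistency condition
\[
F(s_m)\prod_{k=m}^{n-1} P_F(s_{k+1}\mid s_k)\;=\;F(s_n)\prod_{k=m}^{n-1} P_B(s_k\mid s_{k+1}),
\]
typically enforced as a squared log-ratio loss, so that credit no longer has to propagate from the terminal time alone.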
We present a priori error estimates for a multirate time-stepping scheme for coupled differential equations. The discretization is based on Galerkin methods in time using two different time meshes for the two parts of the problem. We aim at surface-coupled multiphysics problems like two-phase flows. Special focus is on the handling of the interface coupling to guarantee a coercive formulation as the key to optimal-order error estimates. In a sequence of increasing complexity, we consider the coupling of two ordinary differential equations, a coupled heat conduction equation, and finally a coupled Stokes problem. For the latter we show optimal multirate estimates in the velocity and a suboptimal result in the pressure. The a priori estimates prove that the multirate method decouples the two subproblems exactly. This is the basis for adaptive methods that can choose optimal time meshes for the respective subproblems.
A finite element-based computational scheme is developed and employed to assess a duality-based variational approach to the solution of the linear heat and transport PDEs in one space dimension and time, and of the nonlinear system of ODEs of Euler for the rotation of a rigid body about a fixed point. The formulation turns initial-(boundary) value problems into degenerate elliptic boundary value problems in (space-)time domains representing the Euler-Lagrange equations of suitably designed dual functionals for each of the above problems. We demonstrate reasonable success in approximating solutions of this range of parabolic, hyperbolic, and ODE primal problems, which includes energy dissipation as well as conservation, by a unified dual strategy lending itself to a variational formulation. The scheme naturally associates a family of dual solutions with a unique primal solution; this `gauge invariance' is demonstrated in our computed solutions of the heat and transport equations, including the case of a transient dual solution corresponding to a steady primal solution of the heat equation. Primal evolution problems with causality are shown to be correctly approximated by non-causal dual problems.
Generalized cross-validation (GCV) is a widely used method for estimating the squared out-of-sample prediction risk that applies a scalar degrees-of-freedom adjustment (in a multiplicative sense) to the squared training error. In this paper, we examine the consistency of GCV for estimating the prediction risk of arbitrary ensembles of penalized least squares estimators. We show that GCV is inconsistent for any finite ensemble of size greater than one. To repair this shortcoming, we identify a correction that involves an additional scalar adjustment (in an additive sense) based on degrees-of-freedom-adjusted training errors from each ensemble component. The proposed estimator (termed CGCV) maintains the computational advantages of GCV and requires no sample splitting, model refitting, or out-of-bag risk estimation. The estimator stems from a finer inspection of the ensemble risk decomposition and of two intermediate risk estimators for the components in this decomposition. We provide a non-asymptotic analysis of CGCV and of the two intermediate risk estimators for ensembles of convex penalized estimators under Gaussian features and a linear response model. In the special case of ridge regression, we extend the analysis to general feature and response distributions using random matrix theory, which establishes model-free uniform consistency of CGCV.
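For reference, the baseline estimator being corrected is standard GCV: for a linear smoother $\hat y = S y$ fit on $n$ observations,
\[
\widehat{\mathrm{GCV}} \;=\; \frac{\frac{1}{n}\,\lVert y - \hat y\rVert_2^2}{\bigl(1 - \operatorname{tr}(S)/n\bigr)^2},
\]
i.e., the squared training error inflated multiplicatively by a scalar function of the degrees of freedom $\operatorname{tr}(S)$; CGCV adds to this a further additive term built from the degrees-of-freedom-adjusted training errors of the individual ensemble components.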
We give a fully polynomial-time randomized approximation scheme (FPRAS) for two-terminal reliability in directed acyclic graphs.
Active Flux is an extension of the Finite Volume method that additionally incorporates point values located at cell boundaries, giving rise to a globally continuous approximation of the solution. The method is third-order accurate. We demonstrate that a new semi-discrete Active Flux method (first described in Abgrall & Barsukow, 2023 for one space dimension) can easily be used to solve nonlinear hyperbolic systems in multiple dimensions, such as the compressible Euler equations of inviscid hydrodynamics. Originally, the Active Flux method emerged as a fully discrete method and required an exact or approximate evolution operator for the point value update. For nonlinear problems such an operator is often difficult to obtain, in particular in multiple spatial dimensions. The new semi-discrete approach avoids these difficulties. We introduce a multi-dimensional limiting strategy and demonstrate the performance of the new method on both Riemann problems and subsonic flows.
The alpha complex is a fundamental data structure from computational geometry, which encodes the topological type of a union of balls $B(x; r) \subset \mathbb{R}^m$ for $x\in S$, including a weighted version that allows for varying radii. It consists of the collection of "simplices" $\sigma = \{x_0, ..., x_k \} \subset S$, which correspond to nonempty $(k + 1)$-fold intersections of cells in a radius-restricted version of the Voronoi diagram. Existing algorithms for computing the alpha complex require that the points reside in low dimension because they begin by computing the entire Delaunay complex, which rapidly becomes intractable, even when the alpha complex is of a reasonable size. This paper presents a method for computing the alpha complex without computing the full Delaunay triangulation by applying Lagrangian duality; specifically, an algorithm based on dual quadratic programming that seeks to rule simplices out rather than to rule them in.
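To make the geometric test concrete, here is a brute-force primal sketch (assuming the unweighted case and using scipy; names are illustrative, and this is not the paper's Lagrangian-dual algorithm, which works with a dual quadratic program to rule simplices out): a simplex $\sigma \subset S$ enters the alpha complex at radius $r$ exactly when the radius-restricted Voronoi cells of its vertices have a common point.

```python
import numpy as np
from scipy.optimize import minimize

# Primal feasibility sketch (NOT the paper's dual algorithm): sigma is in the
# (unweighted) alpha complex at radius r iff some point y lies in the Voronoi
# cell of every vertex of sigma and within distance r of those vertices.
def in_alpha_complex(points, sigma, r):
    S = np.asarray(points, dtype=float)
    idx = list(sigma)

    # Linear Voronoi constraints: ||y - x_i||^2 <= ||y - x_j||^2 for i in sigma
    # and j != i, which rearranges to 2 y.(x_j - x_i) <= ||x_j||^2 - ||x_i||^2.
    cons = []
    for i in idx:
        for j in range(len(S)):
            if j == i:
                continue
            a = 2.0 * (S[j] - S[i])
            b = np.dot(S[j], S[j]) - np.dot(S[i], S[i])
            cons.append({"type": "ineq", "fun": lambda y, a=a, b=b: b - a @ y})

    # On the common Voronoi face all vertices of sigma are equidistant from y,
    # so minimizing the distance to one vertex suffices.
    x0 = S[idx].mean(axis=0)
    obj = lambda y: np.sum((y - S[idx[0]]) ** 2)
    res = minimize(obj, x0, constraints=cons, method="SLSQP")
    return res.success and obj(res.x) <= r ** 2 + 1e-9
```

For points in the plane, in_alpha_complex(pts, (0, 1), r) checks whether the edge $\{x_0, x_1\}$ is present at radius $r$; the dual viewpoint of the paper instead certifies infeasibility of such programs in order to discard simplices without enumerating the full Delaunay complex.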