In this paper, we prove uniform error bounds for proper orthogonal decomposition (POD) reduced order modeling (ROM) of the Burgers equation, considering difference quotients (DQs), introduced in [26]. In particular, we study the behavior of the DQ ROM error bounds by considering $L^2(\Omega)$ and $H^1_0(\Omega)$ POD spaces and $l^{\infty}(L^2)$ and natural-norm errors. We present numerical tests that check the behavior of the error bounds. In our numerical results, the DQ ROM errors are several orders of magnitude smaller than the noDQ ones (i.e., those obtained when the POD basis is constructed in the standard way, without DQs) in terms of the energy retained by the ROM basis. Furthermore, the noDQ ROM errors behave optimally, whereas the DQ ROM errors, for which the DQs are added to the POD process, exhibit optimal or even super-optimal behavior. We conjecture that this occurs because the DQ inner products allow the time dependency of the ROM spaces to make an impact.
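The abstract fixes no notation, so the following is a minimal sketch under simplifying assumptions: snapshots are stored as the columns of a matrix, the $L^2(\Omega)$ inner product is approximated by the Euclidean one (the mass matrix is omitted), and the DQ variant augments the snapshot set with the first-order difference quotients $(u_j - u_{j-1})/\Delta t$ before computing the SVD. All names and the toy data are illustrative.

```python
import numpy as np

def pod_basis(snapshots, r):
    """First r POD modes (left singular vectors) of a snapshot matrix."""
    modes, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    return modes[:, :r]

def pod_basis_dq(snapshots, dt, r):
    """DQ POD: augment the snapshots with the difference quotients
    (u_j - u_{j-1}) / dt before taking the SVD."""
    dq = (snapshots[:, 1:] - snapshots[:, :-1]) / dt
    return pod_basis(np.hstack([snapshots, dq]), r)

# toy snapshot data: a travelling pulse standing in for Burgers solutions
x = np.linspace(0.0, 1.0, 200)
t = np.linspace(0.0, 1.0, 50)
U = np.array([np.exp(-200.0 * (x - 0.2 - 0.5 * s) ** 2) for s in t]).T
Phi_nodq = pod_basis(U, r=10)                     # standard ("noDQ") basis
Phi_dq = pod_basis_dq(U, dt=t[1] - t[0], r=10)    # DQ basis
```

For the $L^2(\Omega)$ or $H^1_0(\Omega)$ POD spaces studied in the paper, the SVD would instead be taken with respect to the corresponding weighted inner product.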
Numerical vector aggregation plays a crucial role in privacy-sensitive applications, such as distributed gradient estimation in federated learning and statistical analysis of key-value data. Under local differential privacy, this study provides a tight minimax error bound of $O(\frac{ds}{n\epsilon^2})$, where $d$ is the dimension of the numerical vector and $s$ is the number of non-zero entries. By converting the conditional/unconditional numerical mean estimation problem into a frequency estimation problem, we develop an optimal and efficient mechanism called Collision; in contrast, existing methods exhibit sub-optimal error rates of $O(\frac{d^2}{n\epsilon^2})$ or $O(\frac{ds^2}{n\epsilon^2})$. For unconditional mean estimation, we further leverage the negative correlation between two frequencies in each dimension and propose the CoCo mechanism, which further reduces the estimation error for mean values compared to Collision. Moreover, to surpass the error barrier of local privacy, we examine privacy amplification in the shuffle model for the proposed mechanisms and derive tight amplification bounds. Our experiments validate and compare our mechanisms with existing approaches, demonstrating significant error reductions for frequency estimation and mean estimation on numerical vectors.
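The Collision and CoCo mechanisms are not specified in the abstract, so the sketch below instead illustrates the building block the reduction targets: frequency estimation under $\epsilon$-LDP, here via textbook $k$-ary (generalized) randomized response with its standard unbiased estimator. This is a generic mechanism for orientation only, not the paper's.

```python
import numpy as np

def grr_perturb(v, k, eps, rng):
    """k-ary randomized response: an eps-LDP randomizer for values in {0..k-1}."""
    p = np.exp(eps) / (np.exp(eps) + k - 1)
    if rng.random() < p:
        return v                      # report the true value
    other = rng.integers(k - 1)       # otherwise one of the other k-1 values
    return other if other < v else other + 1

def grr_frequencies(reports, k, eps):
    """Unbiased frequency estimates from the perturbed reports."""
    p = np.exp(eps) / (np.exp(eps) + k - 1)
    q = 1.0 / (np.exp(eps) + k - 1)
    counts = np.bincount(reports, minlength=k) / len(reports)
    return (counts - q) / (p - q)

rng = np.random.default_rng(0)
data = rng.integers(4, size=100_000)                       # private values
reports = np.array([grr_perturb(v, 4, 1.0, rng) for v in data])
print(grr_frequencies(reports, 4, 1.0))                    # close to [0.25]*4
```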
It is well known that for general linear systems, only optimal Krylov methods with long recurrences exist. For special classes of linear systems it is possible to find optimal Krylov methods with short recurrences. In this paper we consider the important class of linear systems with a shifted skew-symmetric coefficient matrix. We present the MRS3 solver, a minimal residual method that solves these problems using short vector recurrences. We give an overview of existing Krylov solvers that can be used to solve these problems, and compare them with the MRS3 method, both theoretically and in numerical experiments. From this comparison we argue that the MRS3 solver is the fastest and most robust of these Krylov methods for systems with a shifted skew-symmetric coefficient matrix.
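The MRS3 recurrences are not given in the abstract. For orientation, the sketch below merely sets up a shifted skew-symmetric system $A = \sigma I + S$ with $S^T = -S$ and solves it with GMRES, the generic optimal method with long recurrences that MRS3 is designed to match using short ones; the problem data are illustrative.

```python
import numpy as np
from scipy.sparse import random as sparse_random, identity
from scipy.sparse.linalg import gmres

rng = np.random.default_rng(1)
n, sigma = 500, 2.0
B = sparse_random(n, n, density=0.01, random_state=1, format="csr")
S = B - B.T                            # skew-symmetric part: S^T = -S
A = sigma * identity(n) + S            # shifted skew-symmetric coefficient matrix
b = rng.standard_normal(n)

x, info = gmres(A, b, atol=1e-10)      # long-recurrence reference solver
print(info, np.linalg.norm(A @ x - b)) # info == 0 on convergence
```

Since the eigenvalues of $S$ are purely imaginary, those of $A$ all have real part $\sigma$; this is the structure that a dedicated short-recurrence method can exploit.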
Nonsmooth composite optimization with orthogonality constraints has a broad spectrum of applications in statistical learning and data science. However, this problem is generally challenging to solve due to its non-convex and non-smooth nature. Existing solutions are limited by one or more of the following restrictions: (i) they are full gradient methods that require high computational costs in each iteration; (ii) they are not capable of solving general nonsmooth composite problems; (iii) they are infeasible methods and can only achieve feasibility of the solution at the limit point; (iv) they lack rigorous convergence guarantees; (v) they only obtain the weak optimality of critical points. In this paper, we propose \textit{\textbf{OBCD}}, a new Block Coordinate Descent method for solving general nonsmooth composite problems under Orthogonality constraints. \textit{\textbf{OBCD}} is a feasible method with a low per-iteration computational footprint. In each iteration, our algorithm updates $k$ rows of the solution matrix ($k\geq2$ is a parameter) so as to preserve the constraints, and then solves a small-sized nonsmooth composite optimization problem under orthogonality constraints either exactly or approximately. We demonstrate that any exact block-$k$ stationary point is always an approximate block-$k$ stationary point, which in turn is equivalent to a critical point. We are particularly interested in the case $k=2$, where the resulting subproblem reduces to a one-dimensional nonconvex problem. We propose a breakpoint searching method and a fifth-order iterative method to solve this problem efficiently and effectively. We also propose two novel greedy strategies for finding a good working set, which further accelerate the convergence of \textit{\textbf{OBCD}}. Finally, we conduct extensive experiments on several tasks to demonstrate the superiority of our approach.
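A minimal sketch of the feasibility-preserving update described above for $k=2$: left-multiplying two rows of $X$ by an orthogonal $2\times 2$ matrix leaves $X^\top X$ unchanged, so every iterate stays feasible. The grid search over the rotation/reflection angle below stands in for the paper's breakpoint searching and fifth-order methods, and the sparse-PCA-style objective and random working set are illustrative choices only.

```python
import numpy as np

def f(X, A, lam):
    """Illustrative composite objective: smooth part plus nonsmooth l1 term."""
    return -0.5 * np.linalg.norm(A @ X) ** 2 + lam * np.abs(X).sum()

def obcd_step(X, rows, A, lam, n_grid=360):
    """One block update on two rows: X[rows] <- V @ X[rows] with V in O(2).
    This leaves X^T X unchanged, so every iterate stays feasible."""
    best_V, best_val = np.eye(2), f(X, A, lam)
    for theta in np.linspace(0.0, 2.0 * np.pi, n_grid, endpoint=False):
        c, s = np.cos(theta), np.sin(theta)
        for V in (np.array([[c, -s], [s, c]]),    # rotation
                  np.array([[c, s], [s, -c]])):   # reflection
            Y = X.copy()
            Y[rows] = V @ X[rows]
            val = f(Y, A, lam)
            if val < best_val:
                best_V, best_val = V, val
    X[rows] = best_V @ X[rows]
    return X

rng = np.random.default_rng(0)
n, r = 20, 3
A = rng.standard_normal((50, n))
X, _ = np.linalg.qr(rng.standard_normal((n, r)))   # feasible start: X^T X = I
for _ in range(200):
    rows = rng.choice(n, size=2, replace=False)    # random working set
    X = obcd_step(X, rows, A, lam=0.1)
print(np.linalg.norm(X.T @ X - np.eye(r)))         # ~1e-15: still feasible
```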
Classical methods for model order selection often fail in scenarios with low SNR or few snapshots. Deep learning based methods are promising alternatives for such challenging situations, as they compensate for the lack of information in the available observations through training on large datasets. This manuscript proposes an approach that uses a variational autoencoder (VAE) for model order selection. The idea is to learn a parameterized conditional covariance matrix at the VAE decoder that approximates the true signal covariance matrix. The method itself is unsupervised and only requires a small representative dataset for calibration purposes after the VAE has been trained. Numerical simulations show that the proposed method clearly outperforms classical methods and matches or even beats a supervised approach, depending on the number of snapshots considered.
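For context, a classical baseline of the kind the abstract refers to is the information-theoretic MDL criterion of Wax and Kailath, which estimates the model order from the eigenvalues of the sample covariance matrix; a compact version follows (the VAE-based method itself is not reproduced here, and the toy scenario is illustrative).

```python
import numpy as np

def mdl_order(X):
    """Wax-Kailath MDL criterion. X: (d, N) array of N snapshots.
    Returns the estimated number of sources."""
    d, N = X.shape
    R = (X @ X.conj().T) / N                    # sample covariance matrix
    lam = np.sort(np.linalg.eigvalsh(R))[::-1]  # eigenvalues, descending
    mdl = np.empty(d)
    for k in range(d):
        tail = lam[k:]                          # smallest d-k eigenvalues
        g = np.exp(np.mean(np.log(tail)))       # geometric mean
        a = np.mean(tail)                       # arithmetic mean
        mdl[k] = -N * (d - k) * np.log(g / a) + 0.5 * k * (2 * d - k) * np.log(N)
    return int(np.argmin(mdl))

# toy scenario: 3 sources, 8 sensors; performance degrades for low SNR / few N
rng = np.random.default_rng(0)
d, N, k_true = 8, 200, 3
H = rng.standard_normal((d, k_true))
X = H @ rng.standard_normal((k_true, N)) + 0.1 * rng.standard_normal((d, N))
print(mdl_order(X))                             # 3 at this SNR
```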
Log-concave sampling has witnessed remarkable algorithmic advances in recent years, but the corresponding problem of proving lower bounds for this task has remained elusive, with lower bounds previously known only in dimension one. In this work, we establish the following query lower bounds: (1) sampling from strongly log-concave and log-smooth distributions in dimension $d\ge 2$ requires $\Omega(\log \kappa)$ queries, which is sharp in any constant dimension, and (2) sampling from Gaussians in dimension $d$ (hence also from general log-concave and log-smooth distributions in dimension $d$) requires $\widetilde \Omega(\min(\sqrt\kappa \log d, d))$ queries, which is nearly sharp for the class of Gaussians. Here $\kappa$ denotes the condition number of the target distribution. Our proofs rely upon (1) a multiscale construction inspired by work on the Kakeya conjecture in harmonic analysis, and (2) a novel reduction that demonstrates that block Krylov algorithms are optimal for this problem, as well as connections to lower bound techniques based on Wishart matrices developed in the matrix-vector query literature.
Inferring the parameters of ordinary differential equations (ODEs) from noisy observations is an important problem in many scientific fields. Currently, most parameter estimation methods that bypass numerical integration rely on basis functions or Gaussian processes to approximate the ODE solution and its derivatives. Because the ODE solution is sensitive to errors in its estimated derivatives, these methods can be hindered by estimation error, especially when only sparse time-course observations are available. We present a Bayesian collocation framework that operates on the integrated form of the ODEs and also avoids the expensive use of numerical solvers. Our methodology can handle general nonlinear ODE systems. We demonstrate the accuracy of the proposed method through a simulation study, in which the estimated parameters and recovered system trajectories are compared with other recent methods. A real data example is also provided.
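The paper's Bayesian collocation framework is not spelled out in the abstract; the sketch below illustrates only the underlying integrated-form idea on a toy logistic ODE: smooth the data, then fit $\theta$ by matching $y(t_i) - y(t_0)$ against a quadrature of $\int f(y, \theta)\,ds$, avoiding both derivative estimation and repeated ODE solves. The spline smoother and least-squares fit are stand-ins for the paper's Bayesian machinery (the ODE solver here is used only to generate synthetic data).

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import least_squares
from scipy.integrate import solve_ivp

def f(y, theta):                 # logistic growth: y' = a * y * (1 - y / K)
    a, K = theta
    return a * y * (1.0 - y / K)

# sparse, noisy observations of the true trajectory
rng = np.random.default_rng(0)
theta_true, y0 = (1.5, 2.0), 0.1
t_obs = np.linspace(0.0, 6.0, 15)
truth = solve_ivp(lambda t, y: f(y, theta_true), (0.0, 6.0), [y0], t_eval=t_obs)
y_obs = truth.y[0] + 0.02 * rng.standard_normal(t_obs.size)

# integrated form: y(t_i) - y(t_0) = int_{t_0}^{t_i} f(y(s), theta) ds,
# with the unknown trajectory replaced by a spline smoother of the data
spline = CubicSpline(t_obs, y_obs)
t_q = np.linspace(0.0, 6.0, 400)
y_q = spline(t_q)

def residual(theta):
    F = f(y_q, theta)
    cum = np.concatenate(([0.0], np.cumsum(0.5 * (F[1:] + F[:-1]) * np.diff(t_q))))
    return (y_obs - y_obs[0]) - np.interp(t_obs, t_q, cum)

fit = least_squares(residual, x0=[1.0, 1.0], bounds=([0.1, 0.1], [10.0, 10.0]))
print(fit.x)                     # close to (1.5, 2.0)
```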
This work proposes a nonlinear finite element method whose nodal values preserve bounds known for the exact solution. The discrete problem involves a nonlinear projection operator mapping arbitrary nodal values into bound-preserving ones and seeks the numerical solution in the range of this projection. As the projection is not injective, a stabilisation based upon the complementary projection is added in order to restore well-posedness. Within the framework of elliptic problems, the discrete problem may be viewed as a reformulation of a discrete obstacle problem, incorporating the inequality constraints through Lipschitz projections. The derivation of the proposed method is exemplified for linear and nonlinear reaction-diffusion problems. Near-best approximation results in suitable norms are established. In particular, we prove that, in the linear case, the numerical solution is the best approximation in the energy norm among all nodally bound-preserving finite element functions. A series of numerical experiments for such problems showcase the good behaviour of the proposed bound-preserving finite element method.
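A hedged 1D finite-difference sketch of the structure just described (standing in for the finite element setting): a nodal clipping projection $P$, a stabilisation term built from the complementary projection $v - P(v)$, and the numerical solution recovered as $u = P(v)$. The stabilisation weight and the active-set solver are illustrative choices, not the paper's.

```python
import numpy as np

n = 99
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
# finite-difference stand-in for P1 finite elements: -u'' + u, Dirichlet BCs
A = ((np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
      - np.diag(np.ones(n - 1), -1)) / h**2 + np.eye(n))
b = 200.0 * np.sin(3.0 * np.pi * x)   # source pushing the solution below zero

lo = 0.0                              # lower bound known for the exact solution
gamma = 1.0 / h**2                    # stabilisation weight (one possible choice)

# active-set iteration for A @ P(v) + gamma * (v - P(v)) = b, where P clips
# nodal values at the bound; the numerical solution is u = P(v)
v = np.linalg.solve(A, b)             # start from the unconstrained solve
for _ in range(50):
    D = np.diag((v < lo).astype(float))            # currently clipped nodes
    M = A @ (np.eye(n) - D) + gamma * D
    rhs = b - lo * (A @ D - gamma * D) @ np.ones(n)
    v_new = np.linalg.solve(M, rhs)
    done = np.array_equal(v_new < lo, v < lo)
    v = v_new
    if done:
        break
u = np.maximum(v, lo)                 # bound-preserving nodal solution
print(u.min() >= lo)                  # True by construction of the projection
```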
Schr\"odinger Bridge (SB) is an entropy-regularized optimal transport problem that has received increasing attention in deep generative modeling for its mathematical flexibility compared to the Scored-based Generative Model (SGM). However, it remains unclear whether the optimization principle of SB relates to the modern training of deep generative models, which often rely on constructing log-likelihood objectives.This raises questions on the suitability of SB models as a principled alternative for generative applications. In this work, we present a novel computational framework for likelihood training of SB models grounded on Forward-Backward Stochastic Differential Equations Theory - a mathematical methodology appeared in stochastic optimal control that transforms the optimality condition of SB into a set of SDEs. Crucially, these SDEs can be used to construct the likelihood objectives for SB that, surprisingly, generalizes the ones for SGM as special cases. This leads to a new optimization principle that inherits the same SB optimality yet without losing applications of modern generative training techniques, and we show that the resulting training algorithm achieves comparable results on generating realistic images on MNIST, CelebA, and CIFAR10. Our code is available at //github.com/ghliu/SB-FBSDE.
The K-receiver wiretap channel is a channel model in which a transmitter broadcasts K independent messages to K intended receivers while keeping them secret from an eavesdropper. The capacity region of the K-receiver multiple-input multiple-output (MIMO) wiretap channel has been characterized by using dirty-paper coding and stochastic encoding. However, $K!$ encoding orders may need to be enumerated to evaluate the capacity region, which makes the problem intractable. In addition, even when the capacity region is known, the optimal signaling to achieve it is unknown. In this paper, we determine one optimal encoding order that achieves every point on the capacity region, and thus reduce the encoding complexity by a factor of $K!$. We prove that the optimal encoding order for the K-receiver MIMO wiretap channel is the same as that for the MIMO broadcast channel without secrecy. To be specific, the descending weight ordering in the weighted sum-rate (WSR) maximization problem determines the optimal encoding order. Next, to reach the boundary of the secrecy capacity region, we formulate a WSR maximization problem and apply the block successive maximization method to solve this nonconvex problem and find the input covariance matrices corresponding to each message. Numerical results verify the optimality of the encoding order and demonstrate the efficacy of the proposed signaling design.
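In code, the stated rule is a one-liner; assuming illustrative weights $w$ from the WSR problem for K = 4 receivers, the encoding order is simply their descending sort rather than a search over all $K! = 24$ permutations.

```python
import numpy as np

w = np.array([0.2, 0.5, 0.1, 0.3])   # illustrative WSR weights, one per receiver
encoding_order = np.argsort(-w)      # descending weights: encode 1, 3, 0, 2
print(encoding_order)
```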
In the present paper we consider the initial data, external force, viscosity coefficients, and heat conductivity coefficient as random data for the compressible Navier--Stokes--Fourier system. The Monte Carlo method, which is frequently used for the approximation of statistical moments, is combined with a suitable deterministic discretisation method in physical space and time. Under the assumption that numerical densities and temperatures are bounded in probability, we prove the convergence of random finite volume solutions to a statistical strong solution by applying genuine stochastic compactness arguments. Further, we prove convergence of, and error estimates for, the Monte Carlo estimators of the expectation and deviation. We present several numerical experiments to illustrate the theoretical results.
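A toy illustration of the Monte Carlo layer only: a 1D heat equation with a random viscosity stands in for the compressible Navier--Stokes--Fourier system and its space-time discretisation, and the sample mean and standard deviation stand in for the estimators of the expectation and deviation, with the usual $O(M^{-1/2})$ statistical error in the number of samples $M$.

```python
import numpy as np

def heat_fv(nu, n=100, T=0.1):
    """Explicit finite-volume solve of u_t = nu * u_xx on (0,1), periodic BCs."""
    h = 1.0 / n
    dt = 0.4 * h**2 / nu                        # CFL-stable time step
    x = (np.arange(n) + 0.5) * h
    u = np.sin(2.0 * np.pi * x)
    t = 0.0
    while t < T:
        step = min(dt, T - t)
        u = u + (step * nu / h**2) * (np.roll(u, 1) - 2.0 * u + np.roll(u, -1))
        t += step
    return u

rng = np.random.default_rng(0)
M = 200                                          # number of Monte Carlo samples
samples = np.array([heat_fv(nu) for nu in rng.uniform(0.5, 1.5, size=M)])
mean = samples.mean(axis=0)                      # Monte Carlo expectation estimator
dev = samples.std(axis=0, ddof=1)                # Monte Carlo deviation estimator
print(mean[:3], dev[:3])
```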