We establish several convexity properties for the entropy and Fisher information of mixtures of centered Gaussian distributions. First, we prove that if $X_1, X_2$ are independent scalar Gaussian mixtures, then the entropy of $\sqrt{t}X_1 + \sqrt{1-t}X_2$ is concave in $t \in [0,1]$, thus confirming a conjecture of Ball, Nayar and Tkocz (2016) for this class of random variables. In fact, we prove a generalisation of this assertion which also strengthens a result of Eskenazis, Nayar and Tkocz (2018). For the Fisher information, we extend a convexity result of Bobkov (2022) by showing that the Fisher information matrix is operator convex as a matrix-valued function acting on densities of mixtures in $\mathbb{R}^d$. As an application, we establish rates for the convergence of the Fisher information matrix of the sum of weighted i.i.d. Gaussian mixtures in the operator norm along the central limit theorem under mild moment assumptions.
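In the notation above, writing $h$ for differential entropy, the confirmed concavity statement reads:

\[
t \;\longmapsto\; h\big(\sqrt{t}\,X_1 + \sqrt{1-t}\,X_2\big) \quad \text{is concave on } [0,1],
\]

for independent scalar Gaussian mixtures $X_1, X_2$.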
We present an asymptotic expansion formula for an estimator of the drift coefficient of the fractional Ornstein-Uhlenbeck process. As machinery, we apply the general expansion scheme for Wiener functionals recently developed by the authors [26]. The central limit theorem in the principal part of the expansion has the classical scaling $T^{1/2}$. However, the asymptotic expansion formula is intricate in that the order of the correction term is the classical $T^{-1/2}$ for $H \in (1/2, 5/8)$, but $T^{4H-3}$ for $H \in [5/8, 3/4)$.
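Schematically, and in notation of our own choosing ($\hat\alpha_T$ for the drift estimator, $\sigma^2$ for the asymptotic variance, $q$ for the correction kernel), the expansion described above takes the form

\[
\mathbb{P}\big(T^{1/2}(\hat\alpha_T - \alpha) \le z\big) = \Phi(z/\sigma) + r_T\, q(z) + o(r_T),
\qquad
r_T =
\begin{cases}
T^{-1/2}, & H \in (1/2, 5/8),\\
T^{4H-3}, & H \in [5/8, 3/4).
\end{cases}
\]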
We establish a theoretical framework for the particle relaxation method used to generate uniform particle distributions in Smoothed Particle Hydrodynamics. We achieve this by reformulating particle relaxation as an optimization problem whose objective function is the integral difference between the discrete particle-based and the smoothed-analytical volume fractions. The analysis demonstrates that, in the domain interior, the particle relaxation method is essentially equivalent to solving this optimization problem by gradient descent, and the equivalence extends to bounded domains once a proper boundary term is introduced. Additionally, each periodic particle distribution has a spatially uniform particle volume, which we call the characteristic volume. The relaxed particle distribution attains the largest characteristic volume, and this volume is determined by the kernel cut-off radius. This insight enables us to control the relaxed particle distribution by selecting the target kernel cut-off radius for a given kernel function.
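As an illustration of the gradient-descent reading of particle relaxation, here is a minimal sketch in a 2D periodic box; the truncated Gaussian kernel, step size, and all names are our illustrative choices, not the paper's implementation:

```python
import numpy as np

def dkernel(r, h):
    """Radial derivative W'(r) of a truncated Gaussian kernel
    (an illustrative choice; cut-off radius 2h)."""
    w = -2.0 * r / h**2 * np.exp(-(r / h) ** 2)
    w[r > 2.0 * h] = 0.0
    return w

def relax(x, h, box, steps=500, dt=2e-4):
    """Particle relaxation as gradient descent in a periodic box:
    each particle moves against the summed kernel gradients of its
    neighbours, driving the distribution toward uniform volumes."""
    n = len(x)
    idx = np.arange(n)
    for _ in range(steps):
        d = x[:, None, :] - x[None, :, :]           # pairwise displacements
        d -= box * np.round(d / box)                # minimum-image convention
        r = np.linalg.norm(d, axis=-1) + np.eye(n)  # dummy 1 on the diagonal
        f = -dkernel(r, h)[..., None] * d / r[..., None]
        f[idx, idx] = 0.0                           # no self-interaction
        x = (x + dt * f.sum(axis=1)) % box          # descent step, wrapped
    return x

rng = np.random.default_rng(0)
x = relax(rng.uniform(0.0, 1.0, size=(64, 2)), h=0.08, box=1.0)
```

The descent direction here is the familiar repulsive "relaxation force" of SPH preprocessing, which is precisely the equivalence the framework formalises in the domain interior.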
We consider minimizers of the $N$-particle interaction potential energy and briefly review numerical methods used to calculate them. We consider simple pair potentials which are repulsive at short distances and attractive at long distances, focusing on examples which are sums of two powers. The range of powers we look at includes the well-known case of the Lennard-Jones potential, but we are also interested in less singular potentials which are relevant in collective behavior models. We report on results using the software GMIN developed by Wales and collaborators for problems in chemistry. For all cases, this algorithm gives good candidates for the minimizers for relatively low values of the particle number $N$. This is well known for potentials similar to Lennard-Jones, but not for the range which is of interest in collective behavior. Standard minimization procedures have been used in the literature in this range, but they are likely to yield stationary states which are not minimizers. We illustrate numerically some properties of the minimizers in 2D, such as lattice structure, Wulff shapes, and the continuous large-$N$ limit for locally integrable (that is, less singular) potentials.
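For a concrete instance, the following sketch minimises a two-power interaction energy with SciPy's basin-hopping, the global-optimisation strategy underlying GMIN; it is a stand-in for GMIN itself, and the exponents and parameters are illustrative choices of ours:

```python
import numpy as np
from scipy.optimize import basinhopping

def energy(flat, n, a=2.0, b=1.5):
    """N-particle energy for the power-law pair potential
    V(r) = r**a / a - r**b / b with a > b > 0, which is repulsive at
    short range and attractive at long range (locally integrable)."""
    x = flat.reshape(n, 2)
    d = x[:, None, :] - x[None, :, :]
    r = np.sqrt((d ** 2).sum(-1))[np.triu_indices(n, k=1)]
    return (r ** a / a - r ** b / b).sum()

n = 20
rng = np.random.default_rng(1)
result = basinhopping(energy, rng.normal(size=2 * n), niter=50,
                      minimizer_kwargs={"args": (n,), "method": "L-BFGS-B"})
print(result.fun)  # energy of the best candidate minimiser found
```

Basin-hopping alternates random perturbations with local minimisations, which is why it tends to escape the non-minimizing stationary states that plain descent methods get stuck in.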
We present some basic elements of the theory of generalised Br\`{e}gman relative entropies over nonreflexive Banach spaces. Using nonlinear embeddings of Banach spaces together with the Euler--Legendre functions, this approach unifies two earlier approaches to Br\`{e}gman relative entropy: one based on reflexive Banach spaces, the other based on differential geometry. This construction allows us to extend Br\`{e}gman relative entropies, and the related geometric and operator structures, to arbitrary-dimensional state spaces of probability, quantum, and postquantum theory. We give several examples, not considered previously in the literature.
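For orientation, the classical template being generalised is the Br\`{e}gman relative entropy associated with a convex function $\Psi$,

\[
D_{\Psi}(x, y) \;=\; \Psi(x) - \Psi(y) - \langle \nabla\Psi(y),\, x - y \rangle,
\]

with $\langle\cdot,\cdot\rangle$ the dual pairing, defined when $\Psi$ is suitably differentiable; the approach above reaches nonreflexive spaces via nonlinear embeddings and Euler--Legendre functions.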
With the rapid growth of multimedia information, multimodal recommendation has received extensive attention. It utilizes multimodal information to alleviate the data sparsity problem in recommendation systems, thus improving recommendation accuracy. However, the reliance on labeled data severely limits the performance of multimodal recommendation models. Recently, self-supervised learning has been used in multimodal recommendation to mitigate the label sparsity problem. Nevertheless, state-of-the-art methods cannot avoid modality noise when aligning multimodal information, owing to the large differences between the distributions of different modalities. To this end, we propose a Multi-level sElf-supervised learNing for mulTimOdal Recommendation (MENTOR) method to address the label sparsity and modality alignment problems. Specifically, MENTOR first enhances the modality-specific features of each modality using a graph convolutional network (GCN) and fuses the visual and textual modalities. It then enhances the item representation via the item semantic graph for all modalities, including the fused modality. Finally, it introduces two multi-level self-supervised tasks: multi-level cross-modal alignment and general feature enhancement. The former aligns each modality under the guidance of the ID embedding from multiple levels while preserving historical interaction information; the latter enhances the general features from both the graph and feature perspectives to improve the robustness of our model. Extensive experiments on three publicly available datasets demonstrate the effectiveness of our method. Our code is publicly available at //github.com/Jinfeng-Xu/MENTOR.
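As a hedged sketch of the ID-guided cross-modal alignment idea at a single level, here is an InfoNCE-style formulation of our own; MENTOR's exact losses and multi-level structure are in the repository above:

```python
import torch
import torch.nn.functional as F

def align_loss(id_emb, vis_emb, txt_emb, tau=0.2):
    """One level of ID-guided cross-modal alignment (illustrative
    InfoNCE form): each modality embedding is pulled toward its own
    item's ID embedding and pushed away from other items'."""
    ids = F.normalize(id_emb, dim=-1)
    labels = torch.arange(len(ids))              # positives on the diagonal
    loss = torch.tensor(0.0)
    for modality in (vis_emb, txt_emb):
        m = F.normalize(modality, dim=-1)
        logits = m @ ids.t() / tau               # item-to-item similarities
        loss = loss + F.cross_entropy(logits, labels)
    return loss

# usage: a batch of 128 items with 64-dimensional embeddings
loss = align_loss(torch.randn(128, 64), torch.randn(128, 64),
                  torch.randn(128, 64))
```

Anchoring both modalities to the shared ID embedding, rather than to each other directly, is one way to sidestep the modality noise that arises when distributionally dissimilar modalities are aligned head-on.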
We consider the statistical linear inverse problem of making inference on an unknown source function in an elliptic partial differential equation from noisy observations of its solution. We employ nonparametric Bayesian procedures based on Gaussian priors, leading to convenient conjugate formulae for posterior inference. We review recent results providing theoretical guarantees on the quality of the resulting posterior-based estimation and uncertainty quantification, and we discuss the application of the theory to the important classes of Gaussian series priors defined on the Dirichlet-Laplacian eigenbasis and Mat\'ern process priors. We provide an implementation of posterior inference for both classes of priors, and investigate its performance in a numerical simulation study.
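The conjugate formulae referred to above reduce, after discretisation, to standard Gaussian conditioning; here is a minimal sketch, in which the forward matrix, noise level, and prior covariance are illustrative placeholders for the PDE setting:

```python
import numpy as np

def gaussian_posterior(A, y, sigma2, prior_cov):
    """Posterior for y = A f + N(0, sigma2 I) under the prior
    f ~ N(0, prior_cov): standard Gaussian conjugacy."""
    K = prior_cov @ A.T                             # cross-covariance
    S = A @ K + sigma2 * np.eye(len(y))             # marginal covariance of y
    mean = K @ np.linalg.solve(S, y)                # posterior mean
    cov = prior_cov - K @ np.linalg.solve(S, K.T)   # posterior covariance
    return mean, cov

# usage: a random stand-in for the discretized forward map
rng = np.random.default_rng(0)
A = rng.normal(size=(30, 50)) / 50
f_true = rng.normal(size=50)
y = A @ f_true + 0.01 * rng.normal(size=30)
mean, cov = gaussian_posterior(A, y, sigma2=1e-4, prior_cov=np.eye(50))
```

In practice the prior covariance would encode one of the two classes discussed above, for instance a diagonal covariance in the Dirichlet-Laplacian eigenbasis for series priors, or a Mat\'ern covariance matrix evaluated on the discretization grid.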
We study Langevin-type algorithms for sampling from Gibbs distributions whose potentials are dissipative and whose weak gradients have finite moduli of continuity, not necessarily convergent to zero. Our main result is a non-asymptotic upper bound on the 2-Wasserstein distance between a Gibbs distribution and the law of general Langevin-type algorithms, based on the Liptser--Shiryaev theory and Poincar\'{e} inequalities. We apply this bound to show that the Langevin Monte Carlo algorithm can approximate Gibbs distributions with arbitrary accuracy if the potentials are dissipative and their gradients are uniformly continuous. We also propose Langevin-type algorithms with spherical smoothing for distributions whose potentials are not convex or continuously differentiable.
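For reference, the basic Langevin Monte Carlo iteration is the Euler--Maruyama discretisation of the Langevin diffusion; the minimal sketch below shows the plain update (the spherical-smoothing variants would modify the drift term):

```python
import numpy as np

def lmc(grad_U, x0, eta=1e-2, steps=10_000, rng=None):
    """Langevin Monte Carlo targeting the Gibbs distribution
    proportional to exp(-U): Euler--Maruyama discretisation of
    dX_t = -grad U(X_t) dt + sqrt(2) dW_t with step size eta."""
    rng = rng or np.random.default_rng()
    x = np.asarray(x0, dtype=float)
    samples = np.empty((steps, x.size))
    for k in range(steps):
        noise = rng.standard_normal(x.size)
        x = x - eta * grad_U(x) + np.sqrt(2.0 * eta) * noise
        samples[k] = x
    return samples

# usage: sample the double-well Gibbs distribution exp(-(x^2 - 1)^2)
samples = lmc(lambda x: 4.0 * x * (x**2 - 1.0), x0=[0.0])
```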
In this work, we present new constructions of topological subsystem codes from semi-regular Euclidean and hyperbolic tessellations. These yield new families of codes, and we also obtain a further family through an existing construction due to Sarvepalli and Brown. In addition, we prove new results that allow us to determine the parameters of these new codes.
Versatile mixed finite element methods were originally developed by Chen and Williams for isothermal incompressible flows in "Versatile mixed methods for the incompressible Navier-Stokes equations," Computers & Mathematics with Applications, Volume 80, 2020. Thereafter, these methods were extended by Miller, Chen, and Williams to non-isothermal incompressible flows in "Versatile mixed methods for non-isothermal incompressible flows," Computers & Mathematics with Applications, Volume 125, 2022. The main advantage of these methods lies in their flexibility. Unlike traditional mixed methods, they retain the divergence terms in the momentum and temperature equations. As a result, the favorable properties of the schemes are maintained even in the presence of non-zero divergence. This makes them an ideal candidate for an extension to compressible flows, in which the divergence does not generally vanish. In the present article, we finally construct the fully compressible extension of the methods. In addition, we demonstrate the excellent performance of the resulting methods for weakly compressible flows that arise near the incompressible limit, as well as more strongly compressible flows that arise near Mach 0.5.
We consider the problem of discovering $K$ related Gaussian directed acyclic graphs (DAGs), where the involved graph structures share a consistent causal order and sparse unions of supports. Under the multi-task learning setting, we propose an $\ell_1/\ell_2$-regularized maximum likelihood estimator (MLE) for learning $K$ linear structural equation models. We theoretically show that the joint estimator, by leveraging data across related tasks, can achieve a better sample complexity for recovering the causal order (or topological order) than separate estimations. Moreover, the joint estimator is able to recover non-identifiable DAGs, by estimating them together with some identifiable DAGs. Lastly, our analysis also shows the consistency of union support recovery of the structures. To allow practical implementation, we design a continuous optimization problem whose optimizer is the same as the joint estimator and can be approximated efficiently by an iterative algorithm. We validate the theoretical analysis and the effectiveness of the joint estimator in experiments.
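To make the penalty concrete, here is a minimal sketch of a joint objective for $K$ linear SEMs; the equal-variance Gaussian likelihood and all names are our illustrative simplifications, not the paper's estimator:

```python
import numpy as np

def group_penalty(Bs, lam):
    """l1/l2 regularizer: l2-norm of each edge's coefficients across
    the K tasks, summed (l1) over edges. This couples the tasks and
    promotes a shared sparse union of supports."""
    stacked = np.stack(Bs)                           # shape (K, d, d)
    return lam * np.sqrt((stacked ** 2).sum(axis=0)).sum()

def joint_objective(Bs, Xs, lam):
    """Penalized joint fit (illustrative, equal noise variances):
    least-squares SEM residual X - X B per task, plus the penalty."""
    nll = sum(((X - X @ B) ** 2).sum() / (2 * len(X))
              for B, X in zip(Bs, Xs))
    return nll + group_penalty(Bs, lam)

# usage: K = 2 tasks, d = 5 variables, candidate coefficient matrices
rng = np.random.default_rng(0)
Xs = [rng.normal(size=(100, 5)) for _ in range(2)]
Bs = [np.zeros((5, 5)) for _ in range(2)]
value = joint_objective(Bs, Xs, lam=0.1)
```

Because the group norm is taken across tasks for each edge, an edge enters or leaves the support for all $K$ graphs jointly, which is what drives the improved sample complexity over separate estimation.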