B-series and their generalizations are a powerful tool for the analysis of numerical integrators. An extension named exotic aromatic B-series was introduced to study the order conditions for sampling the invariant measure of ergodic SDEs. Introducing a new symmetry normalization coefficient, we analyze the algebraic structures related to exotic B-series and S-series. More precisely, we prove the relationship between the Grossman-Larson algebras over exotic and grafted forests and the corresponding duals to the Connes-Kreimer coalgebras, and use it to study the natural composition laws on exotic S-series. Applying this algebraic framework to the derivation of order conditions for a class of stochastic Runge-Kutta methods, we present a multiplicative property that ensures that some order conditions are satisfied automatically.
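For orientation, here is the classical (non-exotic) B-series formalism that these structures extend, stated in the standard convention; this is background, not the paper's new construction. A B-series is a formal series $B(a, y) = y + \sum_{\tau \in T} \frac{h^{|\tau|}}{\sigma(\tau)}\, a(\tau)\, F(\tau)(y)$, where $T$ is the set of rooted trees, $|\tau|$ is the number of vertices of $\tau$, $\sigma(\tau)$ is the symmetry coefficient (the order of the automorphism group of $\tau$), and $F(\tau)(y)$ is the elementary differential associated with $\tau$. Exotic and grafted forests enlarge this index set, and the symmetry normalization mentioned above plays the role of $\sigma$.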
Fitness functions map large combinatorial spaces of biological sequences to properties of interest. Inferring these multimodal functions from experimental data is a central task in modern protein engineering. Global epistasis models are an effective and physically grounded class of models for estimating fitness functions from observed data. These models assume that a sparse latent function is transformed by a monotonic nonlinearity to emit measurable fitness. Here we demonstrate that minimizing contrastive loss functions, such as the Bradley-Terry loss, is a simple and flexible technique for extracting the sparse latent function implied by global epistasis. We argue by way of a fitness-epistasis uncertainty principle that the nonlinearities in global epistasis models can produce observed fitness functions that do not admit sparse representations, and which may therefore be inefficient to learn from observations with a Mean Squared Error (MSE) loss, as is common practice. We show that contrastive losses are able to accurately estimate a ranking function from limited data even in regimes where MSE is ineffective. We validate the practical utility of this insight by showing that contrastive loss functions result in consistently improved performance on benchmark tasks.
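As a concrete illustration of this technique, here is a minimal sketch, assuming a linear latent fitness model on toy binary sequence features; the data, pair-sampling scheme, and plain gradient descent loop are illustrative choices, not the paper's setup:

```python
import numpy as np

# Sketch: fit a linear latent score f(x) = x @ w by minimizing the
# Bradley-Terry pairwise loss over sampled pairs (i, j) with y[i] > y[j].
def bradley_terry_loss_grad(w, X, pairs):
    f = X @ w
    i, j = pairs[:, 0], pairs[:, 1]
    margin = f[i] - f[j]                        # latent score difference
    loss = np.mean(np.log1p(np.exp(-margin)))   # -log sigmoid(margin)
    coef = -1.0 / (1.0 + np.exp(margin)) / len(pairs)
    grad = (X[i] - X[j]).T @ coef
    return loss, grad

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 50)).astype(float)  # toy binary sequences
y = np.tanh(X @ rng.normal(size=50))                  # monotone warp of a latent function
idx = rng.integers(0, 200, size=(2000, 2))
pairs = np.array([(a, b) if y[a] > y[b] else (b, a)
                  for a, b in idx if y[a] != y[b]])

w = np.zeros(50)
for _ in range(500):                                  # plain gradient descent
    loss, g = bradley_terry_loss_grad(w, X, pairs)
    w -= 0.5 * g
```

Because the loss depends on the measurements only through the ranking of pairs, the fit is unchanged by any monotonic nonlinearity applied to the observed fitnesses, which is the property the abstract exploits.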
A nonlinear-manifold reduced order model (NM-ROM) is an effective way of incorporating underlying physics principles into a neural network-based data-driven approach. We combine NM-ROMs with domain decomposition (DD) for efficient computation. NM-ROMs offer benefits over linear-subspace ROMs (LS-ROMs) but can be costly to train because the number of parameters scales with the size of the full-order model (FOM). To address this, we apply DD to the FOM, compute subdomain NM-ROMs, and then merge them into a global NM-ROM. This approach has several advantages: parallel training of subdomain NM-ROMs, fewer parameters than a global NM-ROM, and adaptability to subdomain-specific FOM features. Each subdomain NM-ROM uses a shallow, sparse autoencoder, enabling hyper-reduction (HR) for improved computational speed. In this paper, we detail an algebraic DD formulation for the FOM, train HR-equipped NM-ROMs for the subdomains, and numerically compare them to DD LS-ROMs with HR. Results show an accuracy improvement of roughly an order of magnitude for the proposed DD NM-ROMs over DD LS-ROMs in solving the 2D steady-state Burgers' equation.
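A minimal sketch of the kind of architecture this refers to, with assumed dimensions and a random sparsity mask; the authors' actual layer sizes, sparsity pattern, and training loop are not reproduced here:

```python
import numpy as np

# Sketch of a shallow autoencoder whose decoder is sparsified by a fixed
# binary mask: each reconstructed (FOM) degree of freedom depends on only a
# few latent coordinates, which is the structure that enables hyper-reduction.
rng = np.random.default_rng(0)
n_fom, n_rom = 64, 4                        # subdomain FOM / ROM dimensions

W_enc = 0.1 * rng.normal(size=(n_rom, n_fom))
W_dec = 0.1 * rng.normal(size=(n_fom, n_rom))
mask = rng.random((n_fom, n_rom)) < 0.3     # decoder sparsity pattern

def encode(x):
    return np.tanh(W_enc @ x)               # single shallow nonlinear layer

def decode(z):
    return (mask * W_dec) @ z               # sparse decoder layer

x = rng.normal(size=n_fom)                  # a subdomain state snapshot
x_rec = decode(encode(x))                   # reconstruction through the ROM
```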
We develop a novel discontinuous Galerkin method for solving the thermal rotating shallow water equations (TRSW) on a curvilinear mesh. Our method is provably entropy stable and conserves mass, buoyancy, and vorticity, while also conserving energy semi-discretely. This is achieved by using novel numerical fluxes and splitting the pressure and convection operators. We implement our method on a cubed-sphere mesh and numerically verify our theoretical results. Our experiments demonstrate the robustness of the method in a regime of well-developed turbulence, where it can be run stably without any dissipation. The entropy stable fluxes are sufficient to control the grid-scale noise generated by geostrophic turbulence, eliminating the need for artificial stabilization.
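The paper's fluxes target the thermal system; as a point of reference, below is the classical entropy-conservative two-point flux for the plain 1D shallow water equations, a standard building block of this kind of scheme (not the paper's novel thermal fluxes):

```python
import numpy as np

# Classical entropy-conservative two-point flux for the plain 1D shallow
# water equations (Fjordholm-Mishra-Tadmor). Averaging h, u, and h^2 in
# exactly this combination makes the discrete entropy (energy) production
# of the two-point flux vanish identically.
G = 9.81  # gravitational acceleration

def ec_flux(hL, uL, hR, uR):
    h_avg = 0.5 * (hL + hR)
    u_avg = 0.5 * (uL + uR)
    h2_avg = 0.5 * (hL**2 + hR**2)
    f_mass = h_avg * u_avg
    f_mom = h_avg * u_avg**2 + 0.5 * G * h2_avg
    return np.array([f_mass, f_mom])

# Consistency check: for equal left/right states the flux reduces to the
# exact flux (h*u, h*u^2 + g*h^2/2).
print(ec_flux(2.0, 1.0, 2.0, 1.0))  # [2.0, 2.0 + 0.5*9.81*4.0]
```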
The classification of galaxies as spirals or ellipticals is a crucial task in understanding their formation and evolution. With the arrival of large-scale astronomical surveys, such as the Sloan Digital Sky Survey (SDSS), astronomers now have access to images of a vast number of galaxies. However, visual inspection of these images is an impossible task for humans due to the sheer number of galaxies to be analyzed. To solve this problem, the Galaxy Zoo project was created to engage thousands of citizen scientists in classifying galaxies based on their visual features. In this paper, we present a machine learning model for galaxy classification using data from the Galaxy Zoo project. Our model utilizes a convolutional neural network architecture to extract features from galaxy images and classify them as spirals or ellipticals. We demonstrate the effectiveness of our model by comparing its performance with that of human classifiers on a subset of the Galaxy Zoo dataset. Our results show that our model achieves high accuracy in classifying galaxies and has the potential to significantly enhance our understanding of the formation and evolution of galaxies.
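A minimal sketch of a CNN of this kind, with assumed input size and layer widths (the paper's exact architecture is not specified in the abstract):

```python
import torch
import torch.nn as nn

# Illustrative small CNN for binary spiral-vs-elliptical classification of
# single-channel galaxy cutouts.
class GalaxyCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, 2),            # logits: spiral vs elliptical
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = GalaxyCNN()
x = torch.randn(8, 1, 64, 64)            # batch of 64x64 grayscale cutouts
logits = model(x)                        # shape (8, 2)
```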
The numerical integration of stiff equations is a challenging problem that needs to be approached by specialized numerical methods. Exponential integrators form a popular class of such methods since they are provably robust to stiffness and have been successfully applied to a variety of problems. Dynamical low-rank approximation is a recent technique for solving high-dimensional differential equations by means of low-rank approximations. However, the field lacks numerical methods for stiff equations, since existing methods are either not robust to stiffness or have unreasonably large hidden constants. In this paper, we focus on solving large-scale stiff matrix differential equations with a Sylvester-like structure that admit good low-rank approximations. We propose two new methods that have good convergence properties, a small memory footprint, and are fast to compute. The theoretical analysis shows that the new methods have order one and two, respectively. We also propose a practical implementation based on Krylov techniques. The approximation error is analyzed, leading to a priori error bounds and, therefore, a means of choosing the size of the Krylov space. Numerical experiments are performed on several examples, confirming the theory and showing good speedup in comparison to existing techniques.
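To illustrate the underlying idea, here is a deliberately naive first-order sketch, not the authors' schemes: one exponential-Euler-type step that treats the stiff Sylvester part exactly and retracts back to low rank by truncated SVD. A practical large-scale variant would replace the dense matrix exponentials with the Krylov techniques mentioned above:

```python
import numpy as np
from scipy.linalg import expm

# One step for dX/dt = A X + X B + f(X): the stiff Sylvester part is treated
# exactly via matrix exponentials (hence robust to stiffness), the nonlinear
# part explicitly, and the truncated SVD retracts back to the rank-r manifold.
def exp_euler_lowrank_step(X, A, B, f, h, r):
    Y = expm(h * A) @ (X + h * f(X)) @ expm(h * B)
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]      # best rank-r approximation

rng = np.random.default_rng(0)
n = 50
A = -np.diag(np.logspace(0, 4, n))          # stiff spectrum
B = A.copy()
C = np.outer(rng.normal(size=n), rng.normal(size=n))  # rank-1 source term
X = np.zeros((n, n))
for _ in range(10):
    X = exp_euler_lowrank_step(X, A, B, lambda X: C, 0.01, r=5)
```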
Parameter identification problems in partial differential equations (PDEs) consist in determining one or more unknown functional parameters in a PDE. Here, the Bayesian nonparametric approach to such problems is considered. Focusing on the representative example of inferring the diffusivity function in an elliptic PDE from noisy observations of the PDE solution, the performance of Bayesian procedures based on Gaussian process priors is investigated. Recent asymptotic theoretical guarantees establishing posterior consistency and convergence rates are reviewed and expanded upon. An implementation of the associated posterior-based inference is provided and illustrated via a numerical simulation study in which two different discretisation strategies are devised. The reproducible code is available at: //github.com/MattGiord.
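A minimal sketch of the forward map in this inverse problem, under simplifying assumptions (1D domain, finite differences, squared-exponential GP kernel; none of these choices are prescribed by the text):

```python
import numpy as np

# Sample a log-diffusivity theta from a GP prior and solve the 1D elliptic
# problem -(exp(theta) u')' = f with zero Dirichlet boundary conditions.
n = 200
x = np.linspace(0, 1, n)

# GP prior on theta (squared-exponential kernel; jitter for stable Cholesky).
K = np.exp(-0.5 * (x[:, None] - x[None, :])**2 / 0.1**2)
theta = np.linalg.cholesky(K + 1e-6 * np.eye(n)) \
        @ np.random.default_rng(0).normal(size=n)
a = np.exp(theta)                       # diffusivity, positive by construction

# Finite-difference discretisation of -(a u')' = f on the interior nodes.
h = x[1] - x[0]
f = np.ones(n - 2)                      # constant source term
a_half = 0.5 * (a[:-1] + a[1:])         # diffusivity at cell interfaces
main = (a_half[:-1] + a_half[1:]) / h**2
off = -a_half[1:-1] / h**2
L = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
u = np.zeros(n)
u[1:-1] = np.linalg.solve(L, f)         # noisy observations of u feed the posterior
```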
High-dimensional, higher-order tensor data are gaining prominence in a variety of fields, including but not limited to computer vision and network analysis. Tensor factor models, induced from noisy versions of tensor decomposition or factorization, are natural and potent instruments for studying collections of tensor-variate objects that may be dependent or independent. However, statistical inference theory for estimating the various low-rank structures that customarily play the role of signals in tensor factor models is still at an early stage of development. In this paper, starting from tensor matricization, we aim to ``decode'' the estimation of a higher-order tensor factor model by recasting it into mode-wise traditional high-dimensional vector/fiber factor models, so as to deploy conventional principal component analysis (PCA) for estimation. Demonstrated on the Tucker tensor factor model (TuTFaM), which is induced from the most popular Tucker decomposition, we show that the estimation of signal components is essentially a mode-wise PCA technique, and that incorporating projection and iteration enhances the signal-to-noise ratio to various extents. We establish the inferential theory of the proposed estimators, conduct rich simulation experiments under the TuTFaM, and illustrate how the proposed estimators work in tensor reconstruction and in clustering of video and economic datasets.
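A minimal sketch of the mode-wise PCA idea on a toy Tucker signal; the paper's projected and iterative refinements are omitted, and all dimensions are illustrative:

```python
import numpy as np

# Unfold the tensor along each mode and take the top eigenvectors of the
# unfolding's sample covariance as the loading estimate for that mode.
def mode_unfold(T, k):
    return np.moveaxis(T, k, 0).reshape(T.shape[k], -1)

def mode_wise_pca(T, ranks):
    loadings = []
    for k, r in enumerate(ranks):
        M = mode_unfold(T, k)
        cov = M @ M.T / M.shape[1]
        eigvals, eigvecs = np.linalg.eigh(cov)   # ascending eigenvalues
        loadings.append(eigvecs[:, -r:])         # top-r eigenvectors
    return loadings

rng = np.random.default_rng(0)
# Toy Tucker signal plus noise: a (2,2,2) core and loadings for a (20,30,40) tensor.
core = rng.normal(size=(2, 2, 2))
U = [rng.normal(size=(d, 2)) for d in (20, 30, 40)]
signal = np.einsum('abc,ia,jb,kc->ijk', core, *U)
T = signal + 0.1 * rng.normal(size=signal.shape)
U_hat = mode_wise_pca(T, ranks=(2, 2, 2))
```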
We propose novel optimal and parameter-free algorithms for computing an approximate solution with small (projected) gradient norm. Specifically, for computing an approximate solution such that the norm of its (projected) gradient does not exceed $\varepsilon$, we obtain the following results: a) for the convex case, the total number of gradient evaluations is bounded by $O(1)\sqrt{L\|x_0 - x^*\|/\varepsilon}$, where $L$ is the Lipschitz constant of the gradient and $x^*$ is any optimal solution; b) for the strongly convex case, the total number of gradient evaluations is bounded by $O(1)\sqrt{L/\mu}\log(\|\nabla f(x_0)\|/\varepsilon)$, where $\mu$ is the strong convexity modulus; and c) for the nonconvex case, the total number of gradient evaluations is bounded by $O(1)\sqrt{Ll}(f(x_0) - f(x^*))/\varepsilon^2$, where $l$ is the lower curvature constant. Our complexity results match the lower complexity bounds of the convex and strongly convex cases, and achieve the best-known complexity bound for the nonconvex case for the first time in the literature. Moreover, for all the convex, strongly convex, and nonconvex cases, we propose parameter-free algorithms that do not require the input of any problem parameters. To the best of our knowledge, no such parameter-free methods existed previously, especially for the strongly convex and nonconvex cases. Since most regularity conditions (e.g., strong convexity and lower curvature) are imposed over a global scope, the corresponding problem parameters are notoriously difficult to estimate. However, gradient norm minimization equips us with a convenient tool to monitor the progress of algorithms and thus the ability to estimate such parameters in situ.
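For intuition only, here is a generic parameter-free scheme in this spirit: plain backtracking gradient descent that estimates $L$ on the fly and stops once the observable gradient norm falls below $\varepsilon$. The paper's accelerated algorithms achieve the stated optimal complexities, which this simple sketch does not:

```python
import numpy as np

# Backtracking gradient descent: double the local L estimate until the
# standard sufficient-decrease test passes, step, then let the estimate
# shrink again. The stopping rule uses only the observable gradient norm.
def gd_parameter_free(f, grad, x0, eps, L0=1.0):
    x, L = x0, L0
    while np.linalg.norm(grad(x)) > eps:
        g = grad(x)
        while f(x - g / L) > f(x) - np.dot(g, g) / (2 * L):
            L *= 2.0                    # current L estimate too small
        x = x - g / L
        L /= 2.0                        # allow the estimate to decrease
    return x

# Example: minimize a convex quadratic without knowing its curvature.
A = np.diag([1.0, 10.0, 100.0])
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x
x_star = gd_parameter_free(f, grad, np.ones(3), eps=1e-6)
```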
We study the numerical approximation of multidimensional stochastic differential equations (SDEs) with distributional drift, driven by a fractional Brownian motion. We work under the Catellier-Gubinelli condition for strong well-posedness, which assumes that the regularity of the drift is strictly greater than $1-1/(2H)$, where $H$ is the Hurst parameter of the noise. The focus here is on the case $H<1/2$, allowing the drift $b$ to be a distribution. We compare the solution $X$ of the SDE with drift $b$ to its tamed Euler scheme with mollified drift $b^n$, and obtain an explicit rate of convergence for the strong error. This extends previous results where $b$ was assumed to be a bounded measurable function. In addition, we investigate the limit case when the regularity of the drift is equal to $1-1/(2H)$, and obtain a non-explicit rate of convergence. As a byproduct of this convergence, we obtain the existence of a strong solution that is pathwise unique in a class of H\"older continuous solutions. The proofs rely on stochastic sewing techniques, in particular to deduce new regularising properties of the discrete-time fractional Brownian motion. In the limit case, we introduce a critical Gr\"onwall-type lemma to quantify the error. We also present several examples and numerical simulations that illustrate our results.
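A minimal simulation sketch under concrete stand-in choices (1D, a smooth surrogate for the mollified drift $b^n$, exact fBm sampling by Cholesky factorisation); the paper treats genuinely distributional drifts:

```python
import numpy as np

rng = np.random.default_rng(0)
H, T, N = 0.3, 1.0, 1000                        # Hurst index H < 1/2
t = np.linspace(T / N, T, N)

# Exact fBm path via Cholesky of the covariance of (B_{t_1}, ..., B_{t_N}):
# Cov(B_s, B_t) = (s^{2H} + t^{2H} - |t - s|^{2H}) / 2.
cov = 0.5 * (t[:, None]**(2 * H) + t[None, :]**(2 * H)
             - np.abs(t[:, None] - t[None, :])**(2 * H))
B = np.linalg.cholesky(cov + 1e-10 * np.eye(N)) @ rng.normal(size=N)
dB = np.diff(np.concatenate([[0.0], B]))

def b_n(x):                                     # smooth stand-in for the mollified drift
    return -10.0 * np.tanh(10.0 * x)

# Tamed Euler scheme: the drift increment is damped so it stays bounded.
h = T / N
X = np.empty(N + 1)
X[0] = 1.0
for k in range(N):
    drift = b_n(X[k])
    X[k + 1] = X[k] + h * drift / (1.0 + h * abs(drift)) + dB[k]
```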
Many generalised distributions exist for modelling data with vastly diverse characteristics. However, very few of these generalisations of the normal distribution have shape parameters with clear roles that determine, for instance, skewness and tail shape. In this chapter, we review existing skewing mechanisms and their properties in detail. Using the knowledge acquired, we add a skewness parameter to the body-tail generalised normal distribution \cite{BTGN}, which yields the \ac{FIN} with parameters for location, scale, body shape, skewness, and tail weight. Basic statistical properties of the \ac{FIN} are provided, such as the \ac{PDF}, cumulative distribution function, moments, and likelihood equations. Additionally, the \ac{FIN} \ac{PDF} is extended to a multivariate setting using a Student t-copula, yielding the \ac{MFIN}. The \ac{MFIN} is applied to stock returns data, where it outperforms the t-copula multivariate generalised hyperbolic, Azzalini skew-t, hyperbolic, and normal inverse Gaussian distributions.
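As background for the skewing mechanisms reviewed here, a minimal sketch of the classical Azzalini construction, shown with the normal distribution (giving the skew-normal), not the \ac{FIN} itself:

```python
import numpy as np
from scipy.stats import norm

# Azzalini skewing: 2 * f(x) * G(alpha * x) is a valid density whenever f is
# a symmetric PDF and G a symmetric CDF; alpha controls the skewness.
def skew_pdf(x, alpha, f=norm.pdf, G=norm.cdf):
    return 2.0 * f(x) * G(alpha * x)

x = np.linspace(-4, 4, 401)
for alpha in (-3.0, 0.0, 3.0):          # left-skewed, symmetric, right-skewed
    p = skew_pdf(x, alpha)
    print(alpha, np.trapz(p, x))        # each integrates to approximately 1
```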