Effective application of mathematical models to interpret biological data and make accurate predictions often requires that model parameters are identifiable. Approaches to assess the so-called structural identifiability of models are well-established for ordinary differential equation models, yet there are no commonly adopted approaches for assessing the structural identifiability of the partial differential equation (PDE) models that are needed to capture the spatial features inherent to many phenomena. The differential algebra approach to structural identifiability has recently been demonstrated to be applicable to several specific PDE models. In this brief article, we present a general methodology for performing structural identifiability analysis on partially observed linear reaction-advection-diffusion (RAD) PDE models. We show that the differential algebra approach can always, in theory, be applied to linear RAD models. Moreover, despite the perceived complexity introduced by the advection and diffusion terms, passing from a non-spatial model to its spatial analogue cannot decrease structural identifiability. Finally, we show that our approach can also be applied to a class of non-linear PDE models that are linear in the unobserved variables, and we conclude by discussing future possibilities and the computational cost of performing structural identifiability analysis on more general PDE models in mathematical biology.
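As a hedged illustration of the differential algebra idea (the toy model below is our own, not one taken from the article): consider a two-component linear RAD system in which only $u$ is observed,
\[
\partial_t u = D\,\partial_x^2 u - \nu\,\partial_x u - a u + b v, \qquad \partial_t v = c u - d v, \qquad y = u.
\]
Solving the first equation for $bv = \partial_t y - D\,\partial_x^2 y + \nu\,\partial_x y + a y$ and substituting into the second eliminates the unobserved variable, giving a single input-output PDE in $y$ alone:
\[
\partial_t^2 y - D\,\partial_t\partial_x^2 y + \nu\,\partial_t\partial_x y + (a+d)\,\partial_t y - dD\,\partial_x^2 y + d\nu\,\partial_x y + (ad - bc)\,y = 0.
\]
The structurally identifiable quantities are exactly the coefficient combinations of this equation: here $D$, $\nu$, $a$, and $d$ are identifiable, while $b$ and $c$ are identifiable only through their product $bc$.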
Factor models have been widely used to summarize the variability of high-dimensional data through a set of factors with much lower dimensionality. Gaussian linear factor models have been particularly popular due to their interpretability and ease of computation. However, in practice, data often violate the multivariate Gaussian assumption. To characterize higher-order dependence and nonlinearity, models that include factors as predictors in flexible multivariate regressions are popular, with Gaussian process latent variable models (GP-LVMs) using Gaussian process priors for the regression function and variational autoencoders (VAEs) using deep neural networks. Unfortunately, such approaches lack identifiability and interpretability and tend to produce brittle, non-reproducible results. To address these problems, we propose NIFTY, a framework that simplifies the nonparametric factor model while maintaining flexibility: it parsimoniously transforms uniform latent variables through one-dimensional nonlinear mappings and then applies a linear generative model. The induced multivariate distribution falls into a flexible class while maintaining simple computation and interpretation. We prove that this model is identifiable and empirically study NIFTY using simulated data, observing good performance in density estimation and data visualization. We then apply NIFTY to bird song data in an environmental monitoring application.
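A minimal sketch of the generative structure described above (the particular nonlinear maps and dimensions below are illustrative choices of ours, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, p = 1000, 2, 5          # samples, latent dim, observed dim

# Uniform latent variables on [0, 1]^k.
u = rng.uniform(size=(n, k))

# One-dimensional nonlinear transforms of each latent coordinate
# (illustrative fixed maps; in the framework these are learned).
eta = np.column_stack([np.sin(2 * np.pi * u[:, 0]),
                       u[:, 1] ** 3])

# Linear generative model applied to the transformed latents.
Lambda = rng.normal(size=(p, k))                     # factor loadings
x = eta @ Lambda.T + 0.1 * rng.normal(size=(n, p))   # observed data
```

Because each latent coordinate passes through its own one-dimensional map before the linear stage, the induced distribution is non-Gaussian while estimation and interpretation remain close to those of a linear factor model.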
We present a formulation for high-order generalized periodicity conditions in the context of a high-order electromechanical theory including flexoelectricity, strain gradient elasticity and gradient dielectricity, with the goal of studying periodic architected metamaterials. This theory results in fourth-order governing partial differential equations, and the periodicity conditions involve continuity across the periodic boundary of the primal fields (displacement and electric potential) and their normal derivatives, as well as continuity of the corresponding dual generalized forces (tractions, double tractions, surface charge density and double surface charge density). Rather than imposing these conditions numerically as explicit constraints, we develop an approximation space which fulfils generalized periodicity by construction. Our method naturally allows us to impose general macroscopic fields (strains/stresses and electric fields/electric displacements) along arbitrary directions, enabling the characterization of the material anisotropy. We apply the proposed method to study periodic architected metamaterials with apparent piezoelectricity. We first verify the method by directly comparing the results with those of a large periodic structure, then apply it to evaluate the anisotropic apparent piezoelectricity of a geometrically polarized 2D lattice, and finally demonstrate the application of the method in a 3D architected metamaterial.
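In schematic form (our notation, not necessarily the paper's), the primal generalized periodicity conditions for a lattice translation $\boldsymbol{a}$ under prescribed macroscopic strain $\bar{\boldsymbol{\varepsilon}}$ and electric field $\bar{\boldsymbol{E}}$ read
\[
u_i(\boldsymbol{x}+\boldsymbol{a}) = u_i(\boldsymbol{x}) + \bar{\varepsilon}_{ij}\, a_j, \qquad
\phi(\boldsymbol{x}+\boldsymbol{a}) = \phi(\boldsymbol{x}) - \bar{E}_j\, a_j,
\]
with the normal derivatives $\partial u_i/\partial n$ and $\partial\phi/\partial n$ periodic across opposite faces of the unit cell, and the dual quantities (tractions, double tractions, surface charge density and double surface charge density) anti-periodic so that generalized forces balance across the boundary.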
We propose a second-order exponential scheme suitable for two-component coupled systems of stiff evolutionary advection--diffusion--reaction equations in two and three space dimensions. It is based on a directional splitting of the involved matrix functions, which allows for a simple yet efficient implementation through the computation of small-sized exponential-like functions and tensor-matrix products. The procedure extends straightforwardly to an arbitrary number of components and to any space dimension. Several numerical examples in 2D and 3D with physically relevant (advective) Schnakenberg, FitzHugh--Nagumo, DIB, and advective Brusselator models clearly show the advantage of the approach over state-of-the-art techniques.
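A minimal sketch of the mechanism that makes directional splitting cheap, for the pure diffusion part of one component (our example; for the Kronecker-sum structure below the factorization is exact, while the paper's scheme applies the same directional structure to exponential-like functions of the full operator, where it is a second-order approximation):

```python
import numpy as np
from scipy.linalg import expm

m, h, D = 50, 1e-2, 1.0
# 1D second-order finite-difference Laplacian (Dirichlet), one per direction.
A1 = D * (np.diag(-2 * np.ones(m)) + np.diag(np.ones(m - 1), 1)
          + np.diag(np.ones(m - 1), -1)) * (m + 1) ** 2
A2 = A1.copy()

U = np.random.rand(m, m)          # grid values of one component

# For the Kronecker sum A = I (x) A1 + A2 (x) I the exponential factorizes
# exactly, so the 2D action reduces to two small matrix products:
E1, E2 = expm(h * A1), expm(h * A2)
U_new = E1 @ U @ E2.T             # equals unvec(expm(h*A) @ vec(U))
```

Only exponentials of small one-dimensional matrices are ever formed; the full $m^2 \times m^2$ (or $m^3 \times m^3$ in 3D) matrix exponential never appears.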
Inference for functional linear models in the presence of heteroscedastic errors has received insufficient attention given its practical importance; indeed, even a central limit theorem has not been established in this case. The difficulty is that estimates of conditional means (projections of the slope function) have complicated sampling distributions due to the infinite-dimensional regressors, which create truncation bias and scaling problems that are compounded by non-constant variance under heteroscedasticity. As a foundation for distributional inference, we establish a central limit theorem for the estimated projection under general dependent errors, and we then develop a paired bootstrap method to approximate sampling distributions. The proposed paired bootstrap does not follow the standard bootstrap algorithm for finite-dimensional regressors, as that version fails outside of a narrow window of implementations with functional regressors; the reason is a bias that arises with functional regressors in the naive bootstrap construction. Our bootstrap proposal incorporates debiasing and thereby attains much broader validity and flexibility in the choice of truncation parameters for inference under heteroscedasticity; even when the naive approach is valid, the proposed bootstrap method performs better numerically. The bootstrap is applied to construct confidence intervals for projections and to conduct hypothesis tests for the slope function. Our theoretical results on bootstrap consistency are demonstrated through simulation studies and illustrated with real data examples.
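For orientation, here is a generic paired bootstrap in a truncated representation of the regressor (our illustrative sketch only: it resamples $(X_i, Y_i)$ pairs, which is what gives validity under heteroscedasticity, but it does not reproduce the paper's debiasing step for functional regressors):

```python
import numpy as np

def paired_bootstrap_ci(scores, y, B=2000, alpha=0.05, rng=None):
    """Basic paired-bootstrap CI for a slope coefficient in a model
    represented by k truncated basis scores. Illustrative sketch;
    the paper's debiased construction is not reproduced here."""
    rng = rng or np.random.default_rng(0)
    n, k = scores.shape
    X = np.column_stack([np.ones(n), scores])
    beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
    stats = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, n, n)             # resample (X_i, Y_i) pairs
        bb = np.linalg.lstsq(X[idx], y[idx], rcond=None)[0]
        stats[b] = bb[1] - beta_hat[1]          # centred bootstrap draw
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return beta_hat[1] - hi, beta_hat[1] - lo   # basic bootstrap interval
```

The bias discussed in the abstract arises precisely because, with functional regressors, the truncated scores are themselves estimated, which is what the paper's debiasing addresses.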
We extend our formulation of Merge and Minimalism in terms of Hopf algebras to an algebraic model of a syntactic-semantic interface. We show that methods adopted in the formulation of renormalization (extraction of meaningful physical values) in theoretical physics are relevant to describe the extraction of meaning from syntactic expressions. We show how this formulation relates to computational models of semantics, and we address some recent controversies about the implications of the current functioning of large language models for generative linguistics.
We show that the maximum likelihood estimator for mixtures of elliptically symmetric distributions is consistent for its population version, where the underlying distribution $P$ is nonparametric and does not necessarily belong to the class of mixtures on which the estimator is based. When $P$ is a mixture of sufficiently well-separated but otherwise nonparametric distributions, we show that the components of the population version of the estimator correspond to the well-separated components of $P$. This provides some theoretical justification for the use of such estimators in cluster analysis when $P$ has well-separated subpopulations, even if these subpopulations differ from what the mixture model assumes.
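For concreteness, the population version referred to here can be written as the maximizer of the expected log-likelihood under $P$ (our schematic rendering):
\[
\theta^*(P) \in \arg\max_{\theta} \int \log f_\theta(x)\, dP(x), \qquad
f_\theta(x) = \sum_{j=1}^{k} \pi_j\, f_{\mu_j, \Sigma_j}(x),
\]
where the $f_{\mu_j,\Sigma_j}$ are elliptically symmetric component densities; the maximum likelihood estimator is the empirical counterpart obtained by replacing $P$ with the empirical distribution $P_n$, and consistency means $\theta^*(P_n) \to \theta^*(P)$ even when $P$ itself is not of the form $f_\theta$.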
Solutions to many important partial differential equations satisfy bounds constraints, but approximations computed by finite element or finite difference methods typically fail to respect the same conditions. Chang and Nakshatrala enforce such bounds in finite element methods through the solution of variational inequalities rather than linear variational problems. Here, we provide a theoretical justification for this method, including higher-order discretizations. We prove an abstract best approximation result for the linear variational inequality and estimates showing that bounds-constrained polynomials provide approximation power comparable to that of standard spaces. For any unconstrained approximation to a function, there exists a constrained approximation that is comparable in the $W^{1,p}$ norm. In practice, one cannot efficiently represent and manipulate the entire family of bounds-constrained polynomials, but applying bounds constraints to the coefficients of a polynomial in the Bernstein basis guarantees those constraints on the polynomial. Although our theoretical results do not guarantee high accuracy for this subset of bounds-constrained polynomials, numerical results indicate optimal orders of accuracy for smooth solutions and sharp resolution of features in convection-diffusion problems, all subject to bounds constraints.
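The Bernstein-basis sufficiency mentioned above rests on two facts: the Bernstein basis polynomials are nonnegative and form a partition of unity, so any polynomial whose Bernstein coefficients lie in $[0,1]$ is a convex combination of those coefficients pointwise. A small illustrative check (our example):

```python
import numpy as np
from math import comb

def bernstein_eval(c, x):
    """Evaluate p(x) = sum_i c[i] * B_{i,n}(x) on [0, 1]."""
    n = len(c) - 1
    return sum(c[i] * comb(n, i) * x**i * (1 - x)**(n - i)
               for i in range(n + 1))

c = np.array([0.0, 0.9, 0.2, 1.0])      # coefficients all in [0, 1]
x = np.linspace(0.0, 1.0, 1001)
p = bernstein_eval(c, x)
# Since B_{i,n} >= 0 and sum_i B_{i,n}(x) = 1, p(x) is a convex
# combination of the coefficients, hence 0 <= p(x) <= 1 everywhere.
assert p.min() >= 0.0 and p.max() <= 1.0
```

The converse fails in general (a polynomial can satisfy the bounds while some Bernstein coefficients violate them), which is why coefficient constraints select only a subset of the bounds-constrained polynomials.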
This document describes an algorithm to scale a complex vector by the reciprocal of a complex value. The algorithm computes the reciprocal of the complex value and then scales the vector by the reciprocal. Some scaling may be necessary due to this two-step strategy, and the proposed algorithm takes such scaling into account. This algorithm is designed to be faster than the naive approach of dividing each entry of the vector by the complex value, without losing much accuracy. It also serves as a single strategy for scaling vectors by the reciprocal of a complex value, which improves software maintainability.
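A minimal NumPy sketch of the two-step idea, with Smith-style scaling of the reciprocal computation (our illustration; the reference algorithm's full over/underflow bookkeeping is omitted):

```python
import numpy as np

def reciprocal_scale(x, z):
    """Scale the vector x by 1/z using one reciprocal and one
    multiply per entry, instead of one division per entry.
    Minimal sketch; production code needs fuller scaling logic."""
    a, b = z.real, z.imag
    s = max(abs(a), abs(b))       # scale to avoid overflow in a*a + b*b
    a, b = a / s, b / s
    d = s * (a * a + b * b)       # equals |z|**2 / s
    r = complex(a / d, -b / d)    # equals conj(z)/|z|**2 = 1/z
    return x * r

x = np.array([1 + 2j, 3 - 4j, 5j])
z = 2 - 1j
print(np.allclose(reciprocal_scale(x, z), x / z))   # True
```

Computing the reciprocal once replaces $n$ complex divisions with $n$ complex multiplications, which is where the speed advantage over entrywise division comes from.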
The development of cubical type theory inspired the idea of "extension types", which has been found to have applications in other type theories unrelated to homotopy type theory or cubical type theory. This article describes these applications, including records, metaprogramming, controlling unfolding, and some more exotic ones.
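Schematically (our rendering, in the style of cubical type theory): given a type $A$ and a partial element $a : A$ defined whenever a cofibration $\varphi$ holds, the extension type
\[
\{\, A \mid \varphi \hookrightarrow a \,\}
\]
classifies elements $u : A$ that are judgmentally equal to $a$ whenever $\varphi$ holds. It is this strengthening to a definitional, rather than merely propositional, boundary condition that powers the applications listed above.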
We derive information-theoretic generalization bounds for supervised learning algorithms based on the information contained in predictions rather than in the output of the training algorithm. These bounds improve upon the existing information-theoretic bounds, are applicable to a wider range of algorithms, and address two key challenges: (a) they give meaningful results for deterministic algorithms and (b) they are significantly easier to estimate. We show experimentally that the proposed bounds closely follow the generalization gap in practical scenarios for deep learning.
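A representative bound of this prediction-based type (our paraphrase, in the conditional-mutual-information style of Steinke and Zakynthinou; not necessarily the paper's exact statement): for a supersample of $n$ candidate pairs $(\tilde{x}_{i,1}, \tilde{x}_{i,2})$ with selector variables $U_i$ indicating which element of each pair was used for training,
\[
\bigl|\, \mathbb{E}[\mathrm{gen}] \,\bigr| \;\le\; \frac{1}{n} \sum_{i=1}^{n} \mathbb{E}\,\sqrt{\, 2\, I\!\bigl( f(\tilde{x}_{i,1}),\, f(\tilde{x}_{i,2});\; U_i \bigr)\,},
\]
where only the learned predictions $f(\cdot)$ on the two candidate points enter. Because the mutual information is between low-dimensional predictions and a single bit, rather than involving the full weight vector, the bound remains finite for deterministic algorithms and is far easier to estimate empirically.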