This chapter reviews methods for linear shrinkage of the sample covariance matrix (SCM) under elliptical distributions, in both the single- and multiple-population settings. In the single-sample setting a popular linear shrinkage estimator is defined as a linear combination of the SCM with a scaled identity matrix. The optimal shrinkage coefficients minimizing the mean squared error (MSE) under elliptical sampling are shown to be functions of only a few key parameters, such as the elliptical kurtosis and the sphericity parameter. Similar results and estimators are derived for the multiple-population setting, and applications of the studied shrinkage estimators are illustrated in portfolio optimization.
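For orientation, the single-sample estimator has the form $\hat{\Sigma} = (1-\rho)\,\mathbf{S} + \rho\,\eta\,\mathbf{I}$, where $\mathbf{S}$ is the SCM and $\eta = \mathrm{tr}(\mathbf{S})/p$ matches the scale of the target. The minimal Python sketch below implements only this generic form; the shrinkage coefficient `rho` is left as a user input, since the chapter's MSE-optimal formulas (functions of the elliptical kurtosis and the sphericity) are not reproduced here.

```python
import numpy as np

def linear_shrinkage_scm(X, rho):
    """Shrink the sample covariance matrix toward a scaled identity.

    X   : (n, p) data matrix, rows are observations
    rho : shrinkage coefficient in [0, 1]; the MSE-optimal value under
          elliptical sampling depends on the elliptical kurtosis and
          the sphericity parameter (not computed in this sketch)
    """
    n, p = X.shape
    S = np.cov(X, rowvar=False)    # sample covariance matrix (SCM)
    eta = np.trace(S) / p          # scale of the identity target
    return (1.0 - rho) * S + rho * eta * np.eye(p)
```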
We introduce a flexible method to simultaneously infer both the drift and volatility functions of a discretely observed scalar diffusion. We represent these functions in spline bases and develop a Markov chain Monte Carlo algorithm to infer, a posteriori, the coefficients of these functions in the spline basis. A key innovation is that we use spline bases to model transformed versions of the drift and volatility functions rather than the functions themselves. The output of the algorithm is a posterior sample of plausible drift and volatility functions that are not constrained to any particular parametric family. The flexibility of this approach provides practitioners with a powerful investigative tool, allowing them to posit a variety of parametric models to better capture the underlying dynamics of their processes of interest. We illustrate the versatility of our method by applying it to challenging datasets from finance, paleoclimatology, and astrophysics. In view of the parametric diffusion models widely employed in the literature for these examples, some of our results are surprising, as they call into question aspects of those models.
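As one plausible reading of the transformed-basis construction (the exponential link is our assumption; the paper's exact transformations may differ), the drift is expanded directly in a spline basis while the volatility is expanded on the log scale, which keeps it positive:
\[
\mu(x) = \sum_{j=1}^{J} a_j B_j(x), \qquad \sigma(x) = \exp\!\Big( \sum_{j=1}^{J} b_j B_j(x) \Big),
\]
for the diffusion $\mathrm{d}X_t = \mu(X_t)\,\mathrm{d}t + \sigma(X_t)\,\mathrm{d}W_t$, with the MCMC sampler targeting the joint posterior of the coefficients $(a_j, b_j)$.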
In this paper, we perform a roundoff error analysis of an integration-based method for computing the matrix sign function recently proposed by Nakaya and Tanaka. The method expresses the matrix sign function using an integral representation and computes the integral numerically by the double-exponential formula. While the method has large-grain parallelism and works well for well-conditioned matrices, its accuracy deteriorates when the input matrix is ill-conditioned or highly nonnormal. We investigate the reason for this phenomenon through a detailed roundoff error analysis.
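For concreteness, one standard integral representation is $\mathrm{sign}(A) = \frac{2}{\pi} A \int_0^{\infty} (t^2 I + A^2)^{-1}\, dt$. The sketch below discretises it with a generic double-exponential substitution $t = \exp\big(\frac{\pi}{2}\sinh u\big)$; Nakaya and Tanaka's exact transformation, step size, and truncation rules may differ.

```python
import numpy as np

def sign_de(A, h=0.1, K=60):
    """Matrix sign via sign(A) = (2/pi) A * int_0^inf (t^2 I + A^2)^{-1} dt,
    discretised by the double-exponential (DE) substitution
    t = exp((pi/2) sinh(u)) with step h and truncation |u| <= K*h."""
    n = A.shape[0]
    I = np.eye(n)
    A2 = A @ A
    total = np.zeros((n, n))
    for k in range(-K, K + 1):
        u = k * h
        t = np.exp((np.pi / 2) * np.sinh(u))   # DE abscissa
        w = t * (np.pi / 2) * np.cosh(u)       # DE weight dt/du
        total += w * np.linalg.solve(t * t * I + A2, I)
    return (2 / np.pi) * A @ (h * total)
```

Each quadrature node contributes an independent linear solve, which is the source of the method's large-grain parallelism; these resolvent-type solves $(t^2 I + A^2)^{-1}$ are also where ill-conditioning and nonnormality can amplify roundoff.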
We develop a numerical method for the Westervelt equation, an important equation in nonlinear acoustics, in a form in which the attenuation is represented by a class of non-local-in-time operators. A semi-discretisation in time based on the trapezoidal rule and A-stable convolution quadrature is stated and analysed. Existence and regularity analysis of the continuous equations informs the stability and error analysis of the semi-discrete system. The error analysis includes consideration of the singularity at $t = 0$, which is addressed by the use of a correction in the numerical scheme. Extensive numerical experiments confirm the theory.
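For orientation, convolution quadrature based on the trapezoidal rule approximates the non-local term through weights generated by the Laplace transform $K$ of the convolution kernel $k$:
\[
\int_0^{t_n} k(t_n - s)\, u(s)\, ds \;\approx\; \sum_{j=0}^{n} \omega_{n-j}\, u(t_j), \qquad \sum_{n \geq 0} \omega_n \zeta^n = K\!\left(\frac{\delta(\zeta)}{\tau}\right), \qquad \delta(\zeta) = 2\,\frac{1 - \zeta}{1 + \zeta},
\]
where $\tau$ is the time step and $\delta$ is the quotient of the generating polynomials of the (A-stable) trapezoidal rule. The correction for the singularity at $t = 0$ mentioned above amounts to adjusting the first few quadrature weights; the precise correction used in the paper is not reproduced here.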
In this paper, we revisit McFadden's (1978) correction factor for sampling of alternatives in multinomial logit (MNL) and mixed multinomial logit (MMNL) models. McFadden (1978) proved that consistent parameter estimates are obtained when estimating MNL models using a sampled subset of alternatives, including the chosen alternative, in combination with a correction factor. We decompose this correction factor into i) a correction for overestimating the MNL choice probability due to using a smaller subset of alternatives, and ii) a correction reflecting which subset of alternatives is contrasted through utility differences, and thereby the extent to which we learn about the parameters of interest in MNL. Keane and Wasi (2016) proved that the overall expected positive information divergence between the true and sampled likelihood, comprising the above two elements, is minimised when applying a sampling protocol satisfying uniform conditioning. We generalise their result to the case of positive conditioning and show that whilst McFadden's (1978) correction factor may not minimise the overall expected information divergence, it does minimise the expected information loss with respect to the parameters of interest. We apply this result in the context of Bayesian analysis and show that McFadden's (1978) correction factor minimises the expected information loss regarding the parameters of interest across the entire posterior density, irrespective of sample size. In other words, McFadden's (1978) correction factor has desirable small- and large-sample properties. We also show that our results for Bayesian MNL models transfer to MMNL, and that McFadden's (1978) correction factor alone suffices to minimise the expected information loss in the parameters of interest. Monte Carlo simulations illustrate the successful application of sampling of alternatives in Bayesian MMNL models.
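Concretely, with a sampled choice set $D_n \ni i$ for decision-maker $n$, McFadden's corrected sampled choice probability takes the form
\[
\Pr(i \mid D_n) \;=\; \frac{\exp\{ V_{in} + \ln \pi(D_n \mid i) \}}{\sum_{j \in D_n} \exp\{ V_{jn} + \ln \pi(D_n \mid j) \}},
\]
where $V_{jn}$ is the systematic utility and $\pi(D_n \mid j)$ is the probability of drawing the set $D_n$ given that $j$ is the chosen alternative. Under uniform conditioning the $\ln \pi(D_n \mid j)$ terms are equal across $j \in D_n$ and cancel; under the weaker positive conditioning requirement, $\pi(D_n \mid j) > 0$ for all $j \in D_n$, they do not cancel and the correction matters.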
Effective application of mathematical models to interpret biological data and make accurate predictions often requires that model parameters are identifiable. Approaches to assess the so-called structural identifiability of models are well established for ordinary differential equation models, yet there are no commonly adopted approaches for assessing the structural identifiability of the partial differential equation (PDE) models that are required to capture the spatial features inherent to many phenomena. The differential algebra approach to structural identifiability has recently been demonstrated to be applicable to several specific PDE models. In this brief article, we present a general methodology for performing structural identifiability analysis on partially observed linear reaction-advection-diffusion (RAD) PDE models. We show that the differential algebra approach can always, in theory, be applied to linear RAD models. Moreover, despite the perceived complexity introduced by the addition of advection and diffusion terms, passing from a non-spatial model to its spatial analogue cannot decrease structural identifiability. Finally, we show that our approach can also be applied to a class of non-linear PDE models that are linear in the unobserved variables, and conclude by discussing future possibilities and the computational cost of performing structural identifiability analysis on more general PDE models in mathematical biology.
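As a concrete instance of the model class (a generic toy example, not one taken from the article), consider a one-dimensional linear RAD model coupled to an unobserved compartment:
\[
\frac{\partial u}{\partial t} = D \frac{\partial^2 u}{\partial x^2} - a \frac{\partial u}{\partial x} + r_1 u + r_2 v, \qquad \frac{\partial v}{\partial t} = r_3 u - r_4 v, \qquad y(x, t) = u(x, t),
\]
where only $u$ is observed through the output $y$. Structural identifiability then asks which of the parameters $(D, a, r_1, r_2, r_3, r_4)$ are uniquely determined by perfect input-output data.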
The subpower membership problem SMP(A) of a finite algebraic structure A asks whether a given partial function from A^k to A can be interpolated by a term operation of A. While this problem can be EXPTIME-complete in general, Willard asked whether it is always solvable in polynomial time if A is a Mal'tsev algebra. This class includes many important structures studied in abstract algebra, such as groups, quasigroups, rings, and Boolean algebras. In this paper we give an affirmative answer to Willard's question for a large class of 2-nilpotent Mal'tsev algebras. We furthermore develop tools that might be essential in answering the question for general nilpotent Mal'tsev algebras in the future.
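For illustration (a toy instance, not drawn from the paper): let $A = (\mathbb{Z}_3; +, -, 0)$, whose binary term operations are exactly the maps $t(x, y) = mx + ny$ with $m, n \in \mathbb{Z}$. The partial function specified by $f(1, 0) = 2$ and $f(0, 1) = 1$ is interpolated by $t(x, y) = 2x + y$, so this SMP instance is a yes-instance. In general, naively searching through all term operations consistent with the given tuples can take exponential time, which is the source of the EXPTIME-hardness in the unrestricted case.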
We introduce the modified planar rotator method (MPRS), a physically inspired machine learning method for spatial/temporal regression. MPRS is a non-parametric model which incorporates spatial or temporal correlations via short-range, distance-dependent ``interactions'' without assuming a specific form for the underlying probability distribution. Predictions are obtained by means of a fully autonomous learning algorithm which employs equilibrium conditional Monte Carlo simulations. MPRS is able to handle scattered data and arbitrary spatial dimensions. We report tests on various synthetic and real-world data in one, two and three dimensions which demonstrate that the MPRS prediction performance (without parameter tuning) is competitive with standard interpolation methods such as ordinary kriging and inverse distance weighting. MPRS is an especially effective gap-filling method for rough and non-Gaussian data (e.g., daily precipitation time series). MPRS shows superior computational efficiency and scalability for large samples: massive data sets involving millions of nodes can be processed in a few seconds on a standard personal computer.
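To make the mechanics concrete, here is a deliberately simplified one-dimensional sketch of conditional Monte Carlo gap filling with planar-rotator (XY-type) interactions; the mapping to angles, the energy, and all schedule choices below are our illustrative assumptions, not the published MPRS specification.

```python
import numpy as np

def mprs_fill(z, mask, n_sweeps=500, burn_in=100, beta=2.0, rng=None):
    """Toy sketch of planar-rotator gap filling on a 1-D grid.

    z    : data values on a regular grid (entries at gaps are ignored)
    mask : boolean array, True where z is observed
    beta : inverse "temperature" controlling smoothness (fixed here;
           the actual MPRS method selects its parameters autonomously)

    Observed values are mapped linearly to spin angles in [0, pi],
    unknown angles are relaxed by conditional Metropolis updates with
    the observed spins held fixed, and time-averaged angles are mapped
    back to data space.
    """
    rng = rng or np.random.default_rng(0)
    z = np.asarray(z, dtype=float)
    lo, hi = z[mask].min(), z[mask].max()
    theta = np.where(mask, (z - lo) / (hi - lo) * np.pi,
                     rng.uniform(0.0, np.pi, z.size))
    mean_theta = np.zeros(z.size)
    for sweep in range(n_sweeps):
        for i in np.flatnonzero(~mask):
            nbrs = [j for j in (i - 1, i + 1) if 0 <= j < z.size]
            energy = lambda t: -sum(np.cos(t - theta[j]) for j in nbrs)
            prop = (theta[i] + rng.uniform(-0.5, 0.5)) % np.pi
            if rng.random() < np.exp(-beta * (energy(prop) - energy(theta[i]))):
                theta[i] = prop
        if sweep >= burn_in:
            mean_theta += theta / (n_sweeps - burn_in)
    out = z.copy()
    out[~mask] = lo + mean_theta[~mask] / np.pi * (hi - lo)
    return out
```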
We present the interpretable meta neural ordinary differential equation (iMODE) method to rapidly learn generalizable (i.e., not parameter-specific) dynamics from trajectories of multiple dynamical systems that vary in their physical parameters. The iMODE method learns meta-knowledge, namely the functional variation of the force field across dynamical system instances, without knowing the physical parameters, by adopting a bi-level optimization framework: an outer level captures the common force-field form among the studied instances, and an inner level adapts to individual instances. A priori physical knowledge can be conveniently embedded in the neural network architecture as an inductive bias, such as a conservative force field or Euclidean symmetry. With the learned meta-knowledge, iMODE can model an unseen system within seconds, inversely reveal knowledge about the physical parameters of a system, and act as a Neural Gauge to "measure" the physical parameters of an unseen system from observed trajectories. We test the validity of the iMODE method on bistable, double pendulum, Van der Pol, Slinky, and reaction-diffusion systems.
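Schematically (in our notation, not necessarily the paper's), the bi-level structure is
\[
\min_{\theta} \sum_{s} \mathcal{L}_s\big(\theta, \phi_s^{*}(\theta)\big) \quad \text{subject to} \quad \phi_s^{*}(\theta) = \arg\min_{\phi} \mathcal{L}_s(\theta, \phi),
\]
where $\theta$ parameterises the force-field form shared across system instances $s$ and $\phi_s$ adapts to each instance. At test time only the inner problem is solved for the unseen system, which is why adaptation takes seconds, and the fitted $\phi$ provides the Neural Gauge readout of the system's physical parameters.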
We study the multivariate deconvolution problem of recovering the distribution of a signal from independent and identically distributed observations additively contaminated with random errors (noise) from a known distribution. For errors with independent coordinates having ordinary smooth densities, we derive an inversion inequality relating the $L^1$-Wasserstein distance between two distributions of the signal to the $L^1$-distance between the corresponding mixture densities of the observations. This smoothing inequality outperforms existing inversion inequalities. As an application of the inversion inequality to the Bayesian framework, we consider $1$-Wasserstein deconvolution with Laplace noise in dimension one, using a Dirichlet process mixture of normal densities as a prior measure on the mixing distribution (or distribution of the signal). We construct an adaptive approximation of the sampling density by convolving the Laplace density with a well-chosen mixture of normal densities and show that the posterior measure concentrates around the sampling density at a nearly minimax rate, up to a log-factor, in the $L^1$-distance. The same posterior law is also shown to automatically adapt to the unknown Sobolev regularity of the mixing density, thus leading to a new Bayesian adaptive estimation procedure for mixing distributions with regular densities under the $L^1$-Wasserstein metric. We illustrate the utility of the inversion inequality also in a frequentist setting by showing that an appropriate isotone approximation of the classical kernel deconvolution estimator attains the minimax rate of convergence for $1$-Wasserstein deconvolution in any dimension $d\geq 1$ when only a tail condition is imposed on the latent mixing density. We also derive sharp lower bounds for these problems.
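For orientation, the observation model is
\[
Y_i = X_i + \varepsilon_i, \qquad X_i \sim \mu, \qquad f_Y = f_\varepsilon * \mu,
\]
with the error density $f_\varepsilon$ known and ordinary smooth, meaning its Fourier transform decays polynomially, $|\widehat{f_\varepsilon}(t)| \asymp (1 + |t|)^{-\beta}$ in each coordinate. The inversion inequality controls $W_1(\mu, \nu)$ by a function of $\|f_\varepsilon * \mu - f_\varepsilon * \nu\|_{L^1}$, so that rates for estimating the mixture density $f_Y$ transfer to rates for the deconvolution problem; the precise exponent in the inequality is the paper's contribution and is not reproduced here.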
We derive information-theoretic generalization bounds for supervised learning algorithms based on the information contained in predictions rather than in the output of the training algorithm. These bounds improve over the existing information-theoretic bounds, are applicable to a wider range of algorithms, and solve two key challenges: (a) they give meaningful results for deterministic algorithms and (b) they are significantly easier to estimate. We show experimentally that the proposed bounds closely follow the generalization gap in practical scenarios for deep learning.
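For context, the classical weight-based bound of Xu and Raginsky (2017) for $\sigma$-sub-Gaussian losses reads
\[
\big| \mathbb{E}[\mathrm{gen}(S, W)] \big| \;\leq\; \sqrt{\frac{2 \sigma^2 \, I(W; S)}{n}},
\]
where $S$ is the training sample of size $n$ and $W$ the output of the training algorithm. The term $I(W; S)$ can be infinite for deterministic algorithms and is hard to estimate for high-dimensional $W$; measuring the information through the model's predictions instead, as done here, yields bounds that remain finite and can be estimated from samples.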