In Bayesian theory, the role of information is central. The influence exerted by the prior on posterior outcomes can undermine Bayesian studies, owing to the potentially subjective nature of the prior choice. When the model at hand is not enriched with sufficient a priori information, reference prior theory emerges as a powerful tool. Based on the mutual information criterion, the theory provides the construction of a noninformative prior whose choice can be called objective. We propose a generalization of the definition of mutual information, motivating our choice by an interpretation based on an analogy with Global Sensitivity Analysis. We study a class of our generalized criteria, and our results reinforce the choice of the Jeffreys prior, which satisfies our extended definition of a reference prior.
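For context, the two standard objects behind this construction can be sketched as follows (standard notation, not specific to this paper): the reference prior maximizes an asymptotic version of the mutual information between the parameter $\theta$ and the data $X$, while the Jeffreys prior is built from the Fisher information matrix $\mathcal{I}(\theta)$:
$$ I(\pi) = \int \pi(\theta) \int p(x \mid \theta) \log \frac{p(x \mid \theta)}{\int p(x \mid \theta')\, \pi(\theta')\, d\theta'}\, dx\, d\theta, \qquad \pi_J(\theta) \propto \sqrt{\det \mathcal{I}(\theta)}. $$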
High-dimensional variable selection, with many more covariates than observations, is widely documented for standard regression models, but few tools exist to address it in non-linear mixed-effects models, where data are collected repeatedly on several individuals. In this work, variable selection is approached from a Bayesian perspective, and a selection procedure is proposed that combines a spike-and-slab prior with the Stochastic Approximation version of the Expectation Maximisation (SAEM) algorithm. As in Lasso regression, the set of relevant covariates is selected by exploring a grid of values for the penalisation parameter. The SAEM approach is much faster than a classical MCMC (Markov chain Monte Carlo) algorithm, and our method shows very good selection performance on simulated data. Its flexibility is demonstrated by implementing it for a variety of non-linear mixed-effects models. The usefulness of the proposed method is illustrated on a problem of genetic marker identification, relevant for genomic-assisted selection in plant breeding.
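As a rough illustration of the prior involved, here is a minimal sketch assuming a continuous spike-and-slab with Gaussian spike and slab components; the parameter names `alpha`, `nu0`, and `nu1` are illustrative, not the paper's:

```python
# Minimal sketch (not the authors' code): log-density of a continuous
# spike-and-slab prior on a covariate effect beta, as commonly used in
# Bayesian variable selection. A narrow Gaussian "spike" at 0 shrinks
# irrelevant effects; a wide Gaussian "slab" accommodates relevant ones.
import numpy as np
from scipy.stats import norm

def spike_and_slab_logpdf(beta, alpha=0.5, nu0=1e-3, nu1=1.0):
    """alpha: prior inclusion probability; nu0/nu1: spike/slab scales."""
    spike = norm.pdf(beta, loc=0.0, scale=nu0)
    slab = norm.pdf(beta, loc=0.0, scale=nu1)
    return np.log((1 - alpha) * spike + alpha * slab)
```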
We show that the known list-decoding algorithms for univariate multiplicity and folded Reed-Solomon (FRS) codes can be made to run in nearly linear time. This yields, to our knowledge, the first known family of codes that can be decoded in nearly linear time, even as they approach the list-decoding capacity. Univariate multiplicity codes and FRS codes are natural variants of Reed-Solomon codes that were discovered and studied for their applications to list decoding. It is known that for every $\epsilon >0$ and rate $R \in (0,1)$, there exist explicit families of these codes that have rate $R$ and can be list-decoded from a $(1-R-\epsilon)$ fraction of errors with constant list size in polynomial time (Guruswami & Wang (IEEE Trans. Inform. Theory, 2013) and Kopparty, Ron-Zewi, Saraf & Wootters (SIAM J. Comput., 2023)). In this work, we present randomized algorithms that perform the above tasks in nearly linear time. Our algorithms have two main components. The first builds upon the lattice-based approach of Alekhnovich (IEEE Trans. Inform. Theory, 2005), who designed a nearly linear time list-decoding algorithm for Reed-Solomon codes approaching the Johnson radius. As part of the second component, we design nearly linear time algorithms for two natural algebraic problems. The first algorithm solves linear differential equations of the form $Q\left(x, f(x), \frac{df}{dx}, \dots,\frac{d^m f}{dx^m}\right) \equiv 0$, where $Q$ has the form $Q(x,y_0,\dots,y_m) = \tilde{Q}(x) + \sum_{i = 0}^m Q_i(x)\cdot y_i$. The second solves functional equations of the form $Q\left(x, f(x), f(\gamma x), \dots,f(\gamma^m x)\right) \equiv 0$, where $\gamma$ is a high-order field element. These algorithms can be viewed as generalizations of the classical algorithms of Sieveking (Computing, 1972) and Kung (Numer. Math., 1974) for computing the modular inverse of a power series, and might be of independent interest.
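For intuition on the classical building block mentioned at the end, here is a minimal sketch of the Sieveking-Kung style Newton iteration for inverting a power series. Coefficients are floats purely for illustration; the codes above live over finite fields, and a fast (FFT-based) truncated product is what yields the nearly linear running time:

```python
# Minimal sketch: compute g with f * g = 1 mod x^n via Newton iteration,
# doubling the precision at each step: g <- g * (2 - f * g) mod x^k.
def series_inverse(f, n):
    """Return g with f * g = 1 mod x^n, assuming f[0] != 0."""
    g = [1.0 / f[0]]
    k = 1
    while k < n:
        k = min(2 * k, n)
        fg = poly_mul_mod(f[:k], g, k)
        two_minus_fg = [-c for c in fg]
        two_minus_fg[0] += 2.0
        g = poly_mul_mod(g, two_minus_fg, k)
    return g

def poly_mul_mod(a, b, k):
    """Naive truncated product mod x^k (replace with an FFT-based product
    to obtain the nearly linear time bound)."""
    out = [0.0] * k
    for i, ai in enumerate(a[:k]):
        for j, bj in enumerate(b[: k - i]):
            out[i + j] += ai * bj
    return out
```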
Regularization of inverse problems is of paramount importance in computational imaging. The ability of neural networks to learn efficient image representations has recently been exploited to design powerful data-driven regularizers. While state-of-the-art plug-and-play methods rely on an implicit regularization provided by neural denoisers, alternative Bayesian approaches consider Maximum A Posteriori (MAP) estimation in the latent space of a generative model, thus providing an explicit regularization. However, state-of-the-art deep generative models require a huge amount of training data compared to denoisers. Moreover, their complexity hampers the optimization of the latent MAP. In this work, we propose to use compressive autoencoders for latent estimation. These networks, which can be seen as variational autoencoders with a flexible latent prior, are smaller and easier to train than state-of-the-art generative models. We then introduce the Variational Bayes Latent Estimation (VBLE) algorithm, which performs this estimation within the framework of variational inference. This allows for fast and easy (approximate) posterior sampling. Experimental results on the BSD and FFHQ image datasets demonstrate that VBLE reaches performance similar to that of state-of-the-art plug-and-play methods, while quantifying uncertainties faster than other existing posterior sampling techniques.
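To make the latent MAP step concrete, here is a minimal sketch (not the authors' VBLE code; the forward operator `A`, the `decoder`, and the Gaussian latent prior used as a stand-in are all illustrative assumptions):

```python
# Minimal sketch: MAP estimation in the latent space of a decoder.
# Minimizes a data-fit term plus a latent prior term by gradient descent.
import torch

def latent_map(y, A, decoder, z_dim, lam=1.0, sigma=0.05, steps=500):
    z = torch.zeros(z_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=1e-2)
    for _ in range(steps):
        opt.zero_grad()
        data_fit = ((A(decoder(z)) - y) ** 2).sum() / (2 * sigma**2)
        prior = lam * (z**2).sum() / 2  # Gaussian latent prior as a stand-in
        (data_fit + prior).backward()
        opt.step()
    return decoder(z).detach()
```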
Physics-informed neural networks (PINNs) effectively embed physical principles into machine learning, but often struggle with complex or varying geometries. We propose a novel method for integrating geometric transformations within PINNs to robustly accommodate geometric variations. Our method incorporates a diffeomorphism as a mapping from a reference domain and adapts the derivative computation of the physics-informed loss function accordingly. This generalizes the applicability of PINNs not only to smoothly deformed domains but also to lower-dimensional manifolds, and allows for direct shape optimization while training the network. We demonstrate the effectiveness of our approach on several problems: (i) the Eikonal equation on an Archimedean spiral, (ii) a Poisson problem on a surface manifold, (iii) incompressible Stokes flow in a deformed tube, and (iv) shape optimization with the Laplace operator. Through these examples, we demonstrate the enhanced flexibility over traditional PINNs, especially under geometric variations. The proposed framework offers an outlook on training deep neural operators over parametrized geometries, paving the way for advanced modeling of PDEs on complex geometries in science and engineering.
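The derivative adaptation amounts to a chain rule through the mapping: if the network is trained on reference coordinates $\xi$ with $u(\Phi(\xi)) = \tilde{u}(\xi)$, then $\nabla_x u = J_\Phi(\xi)^{-T} \nabla_\xi \tilde{u}(\xi)$. A minimal sketch, assuming a scalar network output and an illustrative deformation `phi` (names and the specific mapping are not from the paper):

```python
# Minimal sketch: pull spatial derivatives of a PINN back through a
# diffeomorphism from a reference domain via the chain rule.
import torch

def phi(xi):
    """Illustrative diffeomorphism: reference square -> deformed domain."""
    return torch.stack([xi[0] + 0.1 * torch.sin(torch.pi * xi[1]), xi[1]])

def physical_grad(net, xi):
    """Gradient of u w.r.t. physical coordinates x = phi(xi), where
    u(phi(xi)) = net(xi): grad_x u = J_phi(xi)^{-T} grad_xi net(xi)."""
    J = torch.autograd.functional.jacobian(phi, xi)  # Jacobian of mapping
    g = torch.autograd.functional.jacobian(net, xi)  # reference gradient
    return torch.linalg.solve(J.T, g)
```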
In semi-supervised learning, the prevailing understanding suggests that observing additional unlabeled samples improves estimation accuracy for linear parameters only in the case of model misspecification. This paper challenges this notion, demonstrating its inaccuracy in high dimensions. Initially focusing on a dense scenario, we introduce robust semi-supervised estimators for the regression coefficient without relying on sparse structures in the population slope. Even when the true underlying model is linear, we show that leveraging information from large-scale unlabeled data improves both estimation accuracy and inference robustness. Moreover, we propose semi-supervised methods with further enhanced efficiency in scenarios with a sparse linear slope. Diverging from the standard semi-supervised literature, we also allow for covariate shift. The performance of the proposed methods is illustrated through extensive numerical studies, including simulations and a real-data application to the AIDS Clinical Trials Group Protocol 175 (ACTG175).
We consider M-estimators and derive supremal inequalities of exponential or polynomial type, according to whether a boundedness or a moment condition is fulfilled. This enables us to derive rates of r-complete convergence and also to establish r-quick convergence in the sense of Strasser.
This paper considers the quantification of the tail risk of cryptocurrencies. The statistical methods used in the study are those concerning recent developments in Extreme Value Theory (EVT) for weakly dependent data. This research proposes an expectile-based approach for assessing the tail risk of dependent data. The expectile is a summary statistic that generalizes the mean, as the quantile generalizes the median. We present empirical findings for a dataset of cryptocurrencies. We propose a method for dynamically evaluating the level of the expectiles by estimating the expectiles of the residuals of a heteroscedastic regression, such as a GARCH model. Finally, we introduce the Marginal Expected Shortfall (MES) as a tool for measuring the marginal impact of single assets on systemic shortfalls; specifically, we focus on the impact of a single cryptocurrency on the systemic risk of the whole cryptocurrency market. In particular, we present an expectile-based MES for dependent data.
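For concreteness, a sample $\tau$-expectile can be computed by iterating the first-order condition of the asymmetric least-squares problem, as in this minimal sketch (a generic textbook computation, not the paper's estimator):

```python
# Minimal sketch: sample tau-expectile via iteratively reweighted means.
# The expectile e solves: tau * mean((x - e)_+) = (1 - tau) * mean((e - x)_+).
import numpy as np

def expectile(x, tau=0.95, tol=1e-10, max_iter=100):
    e = x.mean()
    for _ in range(max_iter):
        w = np.where(x > e, tau, 1 - tau)  # asymmetric weights
        e_new = np.sum(w * x) / np.sum(w)  # weighted-mean fixed point
        if abs(e_new - e) < tol:
            break
        e = e_new
    return e
```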
Helmholtz decompositions of elastic fields are a common approach to the solution of Navier scattering problems. Used in the context of Boundary Integral Equations (BIE), this approach affords solutions of Navier problems via the simpler Helmholtz boundary integral operators (BIOs). Approximations of Helmholtz Dirichlet-to-Neumann (DtN) maps can be employed within a regularizing combined-field strategy to deliver BIE formulations of the second kind for the solution of Navier scattering problems in two dimensions with Dirichlet boundary conditions, at least in the case of smooth boundaries. Unlike the case of scattering and transmission Helmholtz problems, the approximations of the DtN maps we use in the Helmholtz-decomposition BIE in the Navier case require the incorporation of lower-order terms in their pseudodifferential asymptotic expansions. The presence of these lower-order terms in the Navier regularized BIE formulations complicates the stability analysis of their Nystr\"om discretizations in the framework of global trigonometric interpolation and the Kussmaul-Martensen kernel-singularity splitting strategy. The main difficulty stems from compositions of pseudodifferential operators of opposite orders, whose Nystr\"om discretization must be performed with care via pseudodifferential expansions beyond the principal symbol. The error analysis is significantly simpler in the case of arclength boundary parametrizations and considerably more involved in the case of the general smooth parametrizations typically encountered in the description of one-dimensional closed curves.
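For reference, the decomposition in question takes the following standard form in 2D (with Lam\'e parameters $\lambda, \mu$, density $\rho$, and frequency $\omega$; notation not specific to this paper):
$$ \mathbf{u} = \nabla \varphi + \overrightarrow{\mathrm{curl}}\, \psi, \qquad (\Delta + k_p^2)\varphi = 0, \quad (\Delta + k_s^2)\psi = 0, \qquad k_p^2 = \frac{\rho \omega^2}{\lambda + 2\mu}, \quad k_s^2 = \frac{\rho \omega^2}{\mu}, $$
so each potential satisfies a Helmholtz equation with the pressure or shear wavenumber and can be treated with Helmholtz BIOs.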
Likelihood-based inference in stochastic non-linear dynamical systems, such as those found in chemical reaction networks and biological clock systems, is inherently complex and has largely been limited to small and unrealistically simple systems. Recent advances in analytically tractable approximations to the underlying conditional probability distributions enable long-term dynamics to be accurately modelled and make the large number of model evaluations required for exact Bayesian inference much more feasible. We propose a new methodology for inference in stochastic non-linear dynamical systems exhibiting oscillatory behaviour and show that the parameters in these models can be realistically estimated from simulated data. Preliminary analyses based on the Fisher Information Matrix of the model can guide the implementation of Bayesian inference, and we show that this parameter sensitivity analysis can predict which parameters are practically identifiable. Several Markov chain Monte Carlo algorithms are compared, with our results suggesting that a parallel tempering algorithm consistently gives the best approach for these systems, which are shown to frequently exhibit multi-modal posterior distributions.
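For concreteness, the replica-swap move at the heart of parallel tempering can be sketched as follows (a generic textbook version, not the paper's implementation; `log_post`, `betas`, and the chain states are illustrative):

```python
# Minimal sketch: one round of replica swaps between chains targeting
# tempered posteriors pi_i(x) ~ exp(beta_i * log_post(x)). Adjacent chains
# exchange states with probability min(1, exp((b_i - b_j)(lp_j - lp_i))),
# helping cold chains escape modes of a multi-modal posterior.
import numpy as np

def pt_swap(states, log_post, betas, rng):
    lps = np.array([log_post(s) for s in states])
    for i in range(len(states) - 1):
        log_alpha = (betas[i] - betas[i + 1]) * (lps[i + 1] - lps[i])
        if np.log(rng.uniform()) < log_alpha:
            states[i], states[i + 1] = states[i + 1], states[i]
            lps[i], lps[i + 1] = lps[i + 1], lps[i]
    return states
```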
Linear regression and classification methods for repeated functional data are considered. For each statistical unit in the sample, a real-valued parameter is observed over time under several different conditions. Two regression methods based on fusion penalties are presented. The first is a generalization of the variable fusion methodology based on the 1-nearest neighbor. The second, called group fusion lasso, assumes a grouping structure on the conditions and enforces homogeneity among the regression coefficient functions within groups. A finite-sample numerical simulation and an application to EEG data are presented.
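Schematically, a fusion-penalized criterion of this kind can be written as follows (illustrative notation, not the paper's exact formulation):
$$ \widehat{\beta} = \operatorname*{arg\,min}_{\beta_1,\dots,\beta_K} \sum_{i=1}^{n} \Big( y_i - \sum_{k=1}^{K} \langle X_{ik}, \beta_k \rangle \Big)^{2} + \lambda \sum_{k \sim k'} \big\| \beta_k - \beta_{k'} \big\|, $$
where $k \sim k'$ ranges over neighboring conditions (variable fusion) or conditions within the same group (group fusion lasso), so that the penalty shrinks differences between the coefficient functions of related conditions.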