This article proposes a new truncated estimation method for the tail index $\alpha$ of extremely heavy-tailed distributions with infinite mean or variance. We present two truncated estimators, $\hat{\alpha}$ and $\hat{\alpha}^{\prime}$, for estimating $\alpha$ over $0<\alpha \leq 1$ and $1<\alpha \leq 2$ respectively, and prove their asymptotic statistical properties. Numerical simulations comparing them with six known estimators in terms of estimation error, Type I error, and power show that the two new truncated estimators perform well overall.
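The abstract does not spell out the form of the truncated estimators; as a point of reference only, the following is a minimal sketch of the classical Hill estimator, a standard baseline for tail-index estimation, applied to a synthetic Pareto sample with infinite mean (the sample and the choice of $k$ are illustrative assumptions, not objects from the paper).

```python
# Illustrative baseline only: the classical Hill estimator for the tail index,
# NOT the truncated estimators proposed in the paper (their form is not given here).
import numpy as np

def hill_estimator(sample: np.ndarray, k: int) -> float:
    """Hill estimate of the tail index alpha from the k largest order statistics."""
    x = np.sort(sample)[::-1]            # descending order statistics
    logs = np.log(x[:k]) - np.log(x[k])  # log-spacings above the (k+1)-th largest value
    return 1.0 / logs.mean()

# Example: Pareto sample with tail index 0.8, which has an infinite mean.
rng = np.random.default_rng(0)
alpha_true = 0.8
sample = rng.pareto(alpha_true, size=10_000) + 1.0
print(hill_estimator(sample, k=500))     # roughly 0.8
```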
We study deviation inequalities for sums of high-dimensional random matrices and operators with dependence and arbitrary heavy tails. Estimating high-dimensional matrices is a problem of growing importance, and dependence and heavy tails in the data are among the most critical issues it currently faces. In this paper, we derive a dimension-free upper bound on the deviation, that is, a bound that does not depend explicitly on the dimension of the matrices but rather on their effective rank. Our result generalizes several existing studies on the deviation of sums of matrices. Our proof is based on two techniques: (i) a variational approximation of the dual of moment generating functions, and (ii) robustification through truncation of the eigenvalues of the matrices. We show that our results are applicable to several problems, such as covariance matrix estimation, hidden Markov models, and overparameterized linear regression models.
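For illustration, a minimal sketch of the effective rank $\mathrm{tr}(\Sigma)/\|\Sigma\|_{\mathrm{op}}$, the quantity the dimension-free bound depends on; the covariance matrix below is an illustrative assumption, not an object from the paper.

```python
# Minimal sketch: effective rank tr(Sigma) / ||Sigma||_op of a covariance matrix.
import numpy as np

def effective_rank(sigma: np.ndarray) -> float:
    eigvals = np.linalg.eigvalsh(sigma)   # eigenvalues of the symmetric matrix
    return eigvals.sum() / eigvals.max()  # trace divided by operator norm

# A covariance with fast-decaying spectrum has small effective rank even in high dimension.
d = 1000
sigma = np.diag(1.0 / np.arange(1, d + 1) ** 2)
print(effective_rank(sigma))              # roughly pi^2/6, essentially independent of d
```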
Digital sensors can produce noisy measurements under many circumstances. To remove the undesired noise from images, proper noise modeling and accurate noise parameter estimation are crucial. In this project, we use a Poisson-Gaussian noise model for the raw images captured by the sensor, as it closely fits the physical characteristics of the sensor. Moreover, we limit ourselves to the case where pairs of observed (noisy) and ground-truth (noise-free) images are available. Using such pairs is beneficial for noise estimation and has not been widely studied in the literature. Based on this model, we derive the theoretical maximum likelihood solution and discuss its practical implementation and optimization. Further, we propose two algorithms based on variance and cumulant statistics. Finally, we compare our methods with two different approaches: a CNN we trained ourselves and another taken from the literature. The comparison shows that our algorithms outperform the others in terms of MSE and have desirable additional properties.
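As a hedged illustration of why noisy/clean pairs help under this model: with $\mathrm{Var}(y\mid x)=a\,x+b$, the per-pixel squared residuals can simply be regressed on the clean intensity to recover $(a,b)$. The sketch below shows this generic moment-based fit; it is not the maximum-likelihood or cumulant-based algorithms developed in the paper, and the synthetic parameters are assumptions.

```python
# Generic moment-based fit under the Poisson-Gaussian model Var(y | x) = a*x + b,
# given paired noisy/clean images. Illustration only, not the paper's algorithms.
import numpy as np

def fit_poisson_gaussian(noisy: np.ndarray, clean: np.ndarray) -> tuple[float, float]:
    x = clean.ravel()
    r2 = (noisy.ravel() - x) ** 2                  # squared residuals, E[r2 | x] = a*x + b
    A = np.stack([x, np.ones_like(x)], axis=1)
    (a, b), *_ = np.linalg.lstsq(A, r2, rcond=None)
    return a, b

# Synthetic example with known parameters.
rng = np.random.default_rng(0)
clean = rng.uniform(10.0, 200.0, size=(256, 256))
a_true, b_true = 0.5, 4.0
noisy = a_true * rng.poisson(clean / a_true) + rng.normal(0.0, np.sqrt(b_true), clean.shape)
print(fit_poisson_gaussian(noisy, clean))          # roughly (0.5, 4.0)
```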
Causal discovery from observational and interventional data is challenging due to limited data and non-identifiability: both factors introduce uncertainty in estimating the underlying structural causal model (SCM). Selecting experiments (interventions) based on the uncertainty arising from both factors can expedite identification of the SCM. Existing methods in experimental design for causal discovery from limited data either rely on linear assumptions about the SCM or select only the intervention target. This work incorporates recent advances in Bayesian causal discovery into the Bayesian optimal experimental design framework, allowing for active causal discovery of large, nonlinear SCMs while selecting both the intervention target and its value. We demonstrate the performance of the proposed method on synthetic graphs (Erd\H{o}s-R\'enyi, scale-free) for both linear and nonlinear SCMs, as well as on the \emph{in-silico} single-cell gene regulatory network dataset DREAM.
We prove a new generalization bound that shows for any class of linear predictors in Gaussian space, the Rademacher complexity of the class and the training error under any continuous loss $\ell$ can control the test error under all Moreau envelopes of the loss $\ell$. We use our finite-sample bound to directly recover the "optimistic rate" of Zhou et al. (2021) for linear regression with the square loss, which is known to be tight for minimal $\ell_2$-norm interpolation, but we also handle more general settings where the label is generated by a potentially misspecified multi-index model. The same argument can analyze noisy interpolation of max-margin classifiers through the squared hinge loss, and establish consistency results in spiked-covariance settings. More generally, when the loss is only assumed to be Lipschitz, our bound effectively improves Talagrand's well-known contraction lemma by a factor of two, and we prove uniform convergence of interpolators (Koehler et al. 2021) for all smooth, non-negative losses. Finally, we show that application of our generalization bound using localized Gaussian width will generally be sharp for empirical risk minimizers, establishing a non-asymptotic Moreau envelope theory for generalization that applies outside of proportional scaling regimes, handles model misspecification, and complements existing asymptotic Moreau envelope theories for M-estimation.
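For reference, the standard definition of the Moreau envelope of a loss is recalled below (the paper's exact parameterization may differ); the bound controls the test error under every such envelope simultaneously:
\[
  \ell_{\lambda}(y,\hat y) \;=\; \inf_{u \in \mathbb{R}}
  \Bigl\{ \ell(y,u) + \tfrac{1}{2\lambda}\,(u-\hat y)^{2} \Bigr\},
  \qquad \lambda > 0 .
\]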
In this paper, we propose new geometrically unfitted space-time Finite Element methods of higher order accuracy in space and time for partial differential equations posed on moving domains. As a model problem, the convection-diffusion problem on a moving domain is studied. For geometrically higher order accuracy, we apply a parametric mapping on a background space-time tensor-product mesh. Concerning discretisation in time, we consider discontinuous Galerkin, as well as related continuous (Petrov-)Galerkin and Galerkin collocation methods. For stabilisation with respect to bad cut configurations, and as the extension mechanism required for the latter two schemes, a ghost penalty stabilisation is employed. The article puts an emphasis on the techniques that allow for robust, higher order geometry handling for smooth domains. We investigate the computational properties of the respective methods in a series of numerical experiments. These include studies in different dimensions for different polynomial degrees in space and time, validating the higher order accuracy in both variables.
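For concreteness, a generic statement of the convection-diffusion model problem on a moving domain (the precise coefficients, data, and boundary conditions used in the paper may differ):
\[
  \partial_t u + \mathbf{w}\cdot\nabla u - \nu\,\Delta u = f
  \quad \text{in } \Omega(t),\ t\in(0,T],
\]
supplemented with suitable initial and boundary conditions on the evolving domain $\Omega(t)$.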
We introduce an $r$-adaptive algorithm to solve Partial Differential Equations using a Deep Neural Network. The proposed method is restricted to tensor product meshes and optimizes the boundary node locations in one dimension, from which we build two- or three-dimensional meshes. The method allows the definition of fixed interfaces to design conforming meshes, and enables changes in the topology, i.e., some nodes can jump across fixed interfaces. The method simultaneously optimizes the node locations and the PDE solution values over the resulting mesh. To numerically illustrate the performance of our proposed $r$-adaptive method, we apply it in combination with a collocation method, a Least Squares Method, and a Deep Ritz Method. We focus on the latter to solve one- and two-dimensional problems whose solutions are smooth, singular, and/or exhibit strong gradients.
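As background for the Deep Ritz component, the standard Ritz energy minimized for a model problem $-\Delta u = f$ with homogeneous Dirichlet boundary conditions is
\[
  u^{*} \;=\; \operatorname*{arg\,min}_{u}\; \int_{\Omega}\Bigl(\tfrac12\,|\nabla u|^{2} - f\,u\Bigr)\,dx ;
\]
in the $r$-adaptive setting described above, the node locations enter this minimization as additional optimization variables alongside the solution values.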
In this document, we present some elements of the theory and algorithmics concerning the existence and computability of approximate joint eigenpairs of finite collections of matrices, with applications to model order reduction. More specifically, we are given a finite collection $X_1,\ldots,X_d$ of Hermitian matrices in $\mathbb{C}^{n\times n}$, a positive integer $r\ll n$, and a collection of complex numbers $\hat{x}_{j,k}\in \mathbb{C}$ for $1\leq j\leq d$, $1\leq k\leq r$. First, we study the computability of a set of $r$ vectors $w_1,\ldots,w_r\in \mathbb{C}^{n}$ such that $w_k=\arg\min_{w\in \mathbb{C}^n}\sum_{j=1}^d\|X_jw-\hat{x}_{j,k} w\|^2$ for each $1\leq k \leq r$; then we present a model order reduction procedure based on the truncated joint approximate eigenbases computed with the aforementioned techniques. Some prototypical algorithms together with some numerical examples are presented as well.
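A minimal numerical sketch of the per-index minimization above, under the added assumption that $w$ is constrained to have unit norm (otherwise $w=0$ is trivially optimal): the minimizer is then an eigenvector for the smallest eigenvalue of $M_k=\sum_{j=1}^d (X_j-\hat{x}_{j,k}I)^{*}(X_j-\hat{x}_{j,k}I)$. The matrices in the example are illustrative, not from the document.

```python
# Sketch: approximate joint eigenvector for prescribed values xhat_{j,k}, assuming
# a unit-norm constraint on w; minimizer = eigenvector of the smallest eigenvalue of M.
import numpy as np

def approximate_joint_eigvec(Xs: list[np.ndarray], xhats: list[complex]) -> np.ndarray:
    n = Xs[0].shape[0]
    M = sum((X - x * np.eye(n)).conj().T @ (X - x * np.eye(n)) for X, x in zip(Xs, xhats))
    eigvals, eigvecs = np.linalg.eigh(M)   # M is Hermitian positive semidefinite
    return eigvecs[:, 0]                   # unit eigenvector for the smallest eigenvalue

# Example: two commuting Hermitian matrices share exact joint eigenpairs.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))
X1 = Q @ np.diag([1.0, 2, 3, 4, 5]) @ Q.T
X2 = Q @ np.diag([2.0, 1, 0, -1, -2]) @ Q.T
w = approximate_joint_eigvec([X1, X2], [1.0, 2.0])
print(np.linalg.norm(X1 @ w - 1.0 * w), np.linalg.norm(X2 @ w - 2.0 * w))  # both near 0
```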
In this paper, we integrate isotonic regression with Stone's cross-validation-based method to estimate a discrete distribution with infinite support. We prove that the estimator is strongly consistent, derive its rate of convergence for any underlying distribution, and, for the one-dimensional case, derive a Marshall-type inequality for the cumulative distribution function of the estimator. We also construct an asymptotically correct conservative global confidence band for the estimator. It is shown that, first, the estimator performs well even for small data sets; second, it outperforms the Grenander estimator when the underlying distribution is non-monotone; and, third, it performs almost as well as the Grenander estimator when the true distribution is isotonic. The new estimator therefore provides a trade-off between goodness of fit, monotonicity, and quality of probabilistic forecasts. We apply the estimator to time-to-onset data of visceral leishmaniasis in Brazil collected from 2007 to 2014.
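To illustrate the isotonic-regression ingredient only (the cross-validation component of the proposed estimator is not reproduced here), a minimal sketch of an antitonic fit to the empirical pmf of a discrete sample; the geometric sample is an illustrative assumption.

```python
# Grenander-type illustration: a non-increasing fit to the empirical pmf of a
# discrete sample. The cross-validated estimator of the paper is not shown here.
import numpy as np
from sklearn.isotonic import IsotonicRegression

def monotone_pmf_estimate(sample: np.ndarray) -> np.ndarray:
    support = np.arange(sample.max() + 1)
    counts = np.bincount(sample, minlength=support.size)
    empirical = counts / counts.sum()
    iso = IsotonicRegression(increasing=False)     # enforce a non-increasing pmf
    fitted = iso.fit_transform(support, empirical)
    return fitted / fitted.sum()                   # renormalize to a probability vector

rng = np.random.default_rng(0)
sample = rng.geometric(0.3, size=500) - 1          # geometric pmf on {0, 1, 2, ...}
print(monotone_pmf_estimate(sample)[:5])
```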
Tie-breaker designs trade off a statistical design objective with short-term gain from preferentially assigning a binary treatment to those with high values of a running variable $x$. The design objective is any continuous function of the expected information matrix in a two-line regression model, and short-term gain is expressed as the covariance between the running variable and the treatment indicator. We investigate how to specify design functions indicating treatment probabilities as a function of $x$ to optimize these competing objectives, under external constraints on the number of subjects receiving treatment. Our results include sharp existence and uniqueness guarantees, while accommodating the ethically appealing requirement that treatment probabilities are non-decreasing in $x$. Under such a constraint, there always exists an optimal design function that is constant below and above a single discontinuity. When the running variable distribution is not symmetric or the fraction of subjects receiving the treatment is not $1/2$, our optimal designs improve upon a $D$-optimality objective without sacrificing short-term gain, compared to the three-level tie-breaker designs of Owen and Varian (2020) that fix treatment probabilities at $0$, $1/2$, and $1$. We illustrate our optimal designs with data from Head Start, an early childhood government intervention program.
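For illustration, minimal sketches of the two treatment-probability shapes discussed above: the three-level Owen-Varian design with probabilities $0$, $1/2$, $1$, and the general form of an optimal monotone design that is constant below and above a single discontinuity. All thresholds and probability levels below are illustrative placeholders, not fitted values.

```python
# Two design-function shapes from the abstract; thresholds/levels are placeholders.
import numpy as np

def three_level_tiebreaker(x: np.ndarray, lo: float, hi: float) -> np.ndarray:
    """Treatment probability 0 below lo, 1/2 on [lo, hi), and 1 at or above hi."""
    return np.where(x < lo, 0.0, np.where(x < hi, 0.5, 1.0))

def two_level_step(x: np.ndarray, cut: float, p_lo: float, p_hi: float) -> np.ndarray:
    """Non-decreasing design that is constant below and above one discontinuity."""
    return np.where(x < cut, p_lo, p_hi)

x = np.linspace(-2, 2, 9)
print(three_level_tiebreaker(x, lo=-0.5, hi=0.5))
print(two_level_step(x, cut=0.0, p_lo=0.2, p_hi=0.9))
```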
This paper focuses on the expected difference in borrowers' repayment when there is a change in the lender's credit decisions. Classical estimators overlook confounding effects, and hence the estimation error can be substantial. We therefore propose an alternative approach to constructing the estimators so that the error can be greatly reduced. The proposed estimators are shown to be unbiased, consistent, and robust through a combination of theoretical analysis and numerical testing. Moreover, we compare the power of the classical and the proposed estimators in estimating the causal quantities. The comparison is carried out across a wide range of models, including linear regression models, tree-based models, and neural network-based models, under simulated datasets that exhibit different levels of causality, degrees of nonlinearity, and distributional properties. Most importantly, we apply our approach to a large observational dataset provided by a global technology firm that operates in both the e-commerce and the lending business. We find that the relative reduction in estimation error is strikingly substantial when the causal effects are accounted for correctly.