
This paper provides a new semi-parametric estimator for LARCH($\infty$) processes, and therefore also for LARCH($p$) or GLARCH($p,q$) processes. The estimator is obtained by minimizing a contrast that leads to a least squares estimator of the absolute values of the process. Strong consistency and asymptotic normality are shown, and the convergence holds at rate $\sqrt{n}$ in both the short and long memory cases. Numerical experiments confirm the theoretical results and show that this new estimator clearly outperforms the smoothed quasi-maximum likelihood estimators and the weighted least squares estimators often used for such processes.
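
As a rough illustration of the approach (not the paper's exact contrast), the following Python sketch fits a LARCH(1) model by minimizing a least-squares contrast on the absolute values of the process. The simplified contrast, simulation settings, and all names are our own; the estimates carry a sign ambiguity and a scale factor tied to $E|\zeta_0|$.

```python
import numpy as np
from scipy.optimize import minimize

def larch_abs_contrast(theta, x, p):
    """Least-squares contrast on |X_t| for a LARCH(p) model.

    Sketch only: sum_t (|x_t| - |theta_0 + sum_j theta_j x_{t-j}|)^2,
    a simplified stand-in for the paper's exact contrast.
    """
    n = len(x)
    sigma = theta[0] + np.array([theta[1:] @ x[t - p:t][::-1] for t in range(p, n)])
    return np.sum((np.abs(x[p:]) - np.abs(sigma)) ** 2)

# Simulate a LARCH(1) path: X_t = zeta_t * (b0 + b1 * X_{t-1}).
rng = np.random.default_rng(0)
b0, b1, n = 1.0, 0.3, 5000
x = np.zeros(n)
for t in range(1, n):
    x[t] = rng.standard_normal() * (b0 + b1 * x[t - 1])

fit = minimize(larch_abs_contrast, x0=[0.5, 0.0], args=(x, 1), method="Nelder-Mead")
# Estimates of (b0, b1), up to sign and a scale factor tied to E|zeta_0|.
print(fit.x)
```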

Related content

For a continuous random variable $Z$, testing the conditional independence $X \perp\!\!\!\perp Y \mid Z$ is known to be a particularly hard problem. It constitutes a key ingredient of many constraint-based causal discovery algorithms. These algorithms are often applied to datasets containing binary variables, which indicate the 'context' of the observations, e.g. a control or treatment group within an experiment. In these settings, conditional independence testing with $X$ or $Y$ binary (and the other continuous) is paramount to the performance of the causal discovery algorithm. To our knowledge, no nonparametric 'mixed' conditional independence test currently exists, and in practice tests that assume all variables to be continuous are used instead. In this paper we aim to fill this gap by combining elements of Holmes et al. (2015) and Teymur and Filippi (2020) into a novel Bayesian nonparametric conditional two-sample test. Applying it within the Local Causal Discovery algorithm, we investigate its performance on both synthetic and real-world data, and compare it with state-of-the-art conditional independence tests.
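
The paper's Bayesian nonparametric test is not reproduced here. As a plainly swapped-in stand-in that illustrates the problem setup (binary $Y$, continuous $X$, conditioning variable $Z$), the following sketch runs a crude frequentist permutation test that shuffles the binary labels within bins of $Z$; the binning, statistic, and all names are our choices.

```python
import numpy as np

def conditional_two_sample_perm_test(x, y, z, n_bins=10, n_perm=999, seed=0):
    """Crude permutation test of X independent of Y given Z, with Y binary (0/1).

    Bins Z, permutes the binary labels within bins (approximately preserving
    the Y|Z distribution), and compares the between-group difference of X means
    to its permutation distribution.
    """
    rng = np.random.default_rng(seed)
    bins = np.digitize(z, np.quantile(z, np.linspace(0, 1, n_bins + 1)[1:-1]))

    def stat(labels):
        return abs(x[labels == 1].mean() - x[labels == 0].mean())

    t_obs = stat(y)
    count = 0
    for _ in range(n_perm):
        y_perm = y.copy()
        for b in np.unique(bins):
            idx = np.flatnonzero(bins == b)
            y_perm[idx] = rng.permutation(y_perm[idx])
        count += stat(y_perm) >= t_obs
    return (1 + count) / (1 + n_perm)  # permutation p-value

rng = np.random.default_rng(1)
z = rng.normal(size=500)
y = (rng.random(500) < 0.5).astype(int)
x = z + 0.5 * y + rng.normal(size=500)  # X depends on Y given Z: expect a small p-value
print(conditional_two_sample_perm_test(x, y, z))
```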

We introduce an original way to estimate the memory parameter of the elephant random walk, a fascinating discrete-time random walk on the integers with complete memory of its entire history. Our estimator is simply a quasi-maximum likelihood estimator based on a second-order Taylor approximation of the log-likelihood function. We show the almost sure convergence of our estimator in the diffusive, critical and superdiffusive regimes. Local asymptotic normality of our statistical procedure is established in the diffusive regime, while local asymptotic mixed normality is proven in the superdiffusive regime. Asymptotic and exact confidence intervals as well as statistical tests are also provided. Our whole analysis relies on asymptotic results for martingales and their associated quadratic variations.
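
The dynamics of the elephant random walk are standard and easy to simulate. The estimator below is a plain numerical maximum likelihood estimator built from the well-known conditional step law $P(X_{n+1} = +1 \mid \mathcal{F}_n) = (1 + (2p-1) S_n / n)/2$, not the paper's closed-form second-order quasi-MLE.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def simulate_erw(n, p, seed=0):
    """Elephant random walk: step n+1 copies a uniformly chosen past step
    with probability p and reverses it with probability 1 - p."""
    rng = np.random.default_rng(seed)
    steps = np.empty(n, dtype=int)
    steps[0] = rng.choice([-1, 1])
    for t in range(1, n):
        past = steps[rng.integers(t)]
        steps[t] = past if rng.random() < p else -past
    return steps

def mle_memory(steps):
    """Numerical MLE of p from P(X_{n+1}=+1 | F_n) = (1 + (2p-1) S_n / n) / 2."""
    s = np.cumsum(steps)
    x_next, s_prev, n_prev = steps[1:], s[:-1], np.arange(1, len(steps))
    def negloglik(p):
        probs = 0.5 * (1.0 + x_next * (2 * p - 1) * s_prev / n_prev)
        return -np.sum(np.log(np.clip(probs, 1e-12, None)))
    return minimize_scalar(negloglik, bounds=(1e-6, 1 - 1e-6), method="bounded").x

steps = simulate_erw(10_000, p=0.7, seed=1)
print(mle_memory(steps))  # close to 0.7 (p < 3/4, i.e. the diffusive regime)
```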

In fields such as finance, insurance, and system reliability, it is often of interest to measure the dependence among variables by modeling a multivariate distribution with a copula. Copula models with parametric assumptions are easy to estimate but can be highly biased when those assumptions are false, while empirical copulas are non-smooth and often not genuine copulas, making inference about dependence challenging in practice. As a compromise, the empirical Bernstein copula provides a smooth estimator, but the choice of its tuning parameters remains elusive. In this paper, using the so-called empirical checkerboard copula, we build a hierarchical empirical Bayes model that enables the estimation of a smooth copula function in arbitrary dimension. The proposed estimator, based on multivariate Bernstein polynomials, is itself a genuine copula, and the selection of its dimension-varying degrees is data-dependent. We also show that the proposed copula estimator provides a more accurate estimate of several multivariate dependence measures, which can be obtained in closed form. We investigate the asymptotic and finite-sample performance of the proposed estimator and compare it with some nonparametric estimators through simulation studies. An application to portfolio risk management is presented, along with a quantification of the estimation uncertainty.
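
For concreteness, here is a minimal bivariate sketch of the basic empirical Bernstein copula with a fixed degree $m$. The paper's estimator instead starts from the empirical checkerboard copula and selects dimension-varying degrees through a hierarchical empirical Bayes model, none of which is implemented here.

```python
import numpy as np
from scipy.stats import binom

def empirical_bernstein_copula(data, m):
    """Bivariate empirical Bernstein copula evaluator.

    Smooths the empirical copula C_n with Bernstein polynomials of degree m:
    C_B(u, v) = sum_{j,k} C_n(j/m, k/m) * Binom(m, u).pmf(j) * Binom(m, v).pmf(k).
    """
    n = len(data)
    # Pseudo-observations: normalized ranks in (0, 1].
    u_hat = (np.argsort(np.argsort(data, axis=0), axis=0) + 1) / n
    grid = np.arange(m + 1) / m
    # Empirical copula evaluated on the (m+1) x (m+1) grid.
    c_n = np.array([[np.mean((u_hat[:, 0] <= a) & (u_hat[:, 1] <= b))
                     for b in grid] for a in grid])
    def copula(u, v):
        w_u = binom.pmf(np.arange(m + 1), m, u)
        w_v = binom.pmf(np.arange(m + 1), m, v)
        return w_u @ c_n @ w_v
    return copula

rng = np.random.default_rng(0)
z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=500)
c_hat = empirical_bernstein_copula(z, m=15)
print(c_hat(0.5, 0.5))  # ~0.35 for the Gaussian copula with rho = 0.6
```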

The distributed Hill estimator is a divide-and-conquer algorithm for estimating the extreme value index when data are stored on multiple machines. In applications, estimates based on the distributed Hill estimator can be sensitive to the choice of the number of exceedance ratios used in each machine. Even when this number is chosen at a low level, a high asymptotic bias may arise. We overcome this potential drawback by designing a bias correction procedure for the distributed Hill estimator which adheres to the setup of distributed inference. The resulting asymptotically unbiased distributed estimator is, on the one hand, applicable to distributively stored data and, on the other hand, inherits all known advantages of bias correction methods in extreme value statistics.
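
A minimal sketch of the distributed Hill estimator itself (without the paper's bias correction), assuming each machine holds an i.i.d. Pareto sample and uses the same number $k$ of exceedance ratios:

```python
import numpy as np

def hill(sample, k):
    """Hill estimator of the extreme value index from the k largest order statistics."""
    x = np.sort(sample)[-(k + 1):]
    return np.mean(np.log(x[1:] / x[0]))

def distributed_hill(machines, k):
    """Divide-and-conquer Hill estimator: average the per-machine Hill
    estimates, each computed from k exceedance ratios on that machine."""
    return np.mean([hill(m, k) for m in machines])

rng = np.random.default_rng(0)
# 20 machines, each storing 10_000 Pareto(alpha=2) observations: true index 1/alpha = 0.5.
machines = [rng.pareto(2.0, 10_000) + 1.0 for _ in range(20)]
print(distributed_hill(machines, k=200))  # close to 0.5
```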

In this paper we study nonparametric estimators of copulas and copula densities. We first focus on a copula density estimator based on an orthogonal polynomial projection of the joint density, from which a new copula estimator is deduced. Its asymptotic properties are studied: we exhibit a large functional class for which this construction is optimal in the minimax and maxiset senses, and we propose a selection method for the smoothing parameter. An intensive simulation study shows the very good performance of both the copula and copula density estimators, which we compare to a large panel of competitors. A real dataset from actuarial science illustrates the approach.
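
As a hedged sketch of the projection idea, the following estimator expands the copula density on a tensor product of shifted Legendre polynomials, one possible orthonormal basis on $[0,1]$. The paper's basis and its smoothing-parameter selection method are not reproduced; a fixed degree is used instead, and projection estimates can be locally negative.

```python
import numpy as np
from scipy.special import eval_sh_legendre

def projection_copula_density(data, degree):
    """Copula density estimator by orthogonal projection onto shifted
    Legendre polynomials, orthonormalized on [0, 1].

    c_hat(u, v) = sum_{j,k <= degree} a_hat[j, k] * phi_j(u) * phi_k(v),
    a_hat[j, k] = mean_i phi_j(U_i) * phi_k(V_i) over pseudo-observations.
    """
    n = len(data)
    u_hat = (np.argsort(np.argsort(data, axis=0), axis=0) + 1) / (n + 1)
    js = np.arange(degree + 1)
    norm = np.sqrt(2 * js + 1)                         # orthonormalization on [0, 1]
    phi_u = norm * eval_sh_legendre(js, u_hat[:, :1])  # shape (n, degree + 1)
    phi_v = norm * eval_sh_legendre(js, u_hat[:, 1:2])
    a_hat = phi_u.T @ phi_v / n                        # projection coefficients
    def density(u, v):
        return (norm * eval_sh_legendre(js, u)) @ a_hat @ (norm * eval_sh_legendre(js, v))
    return density

rng = np.random.default_rng(0)
z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=1000)
c_hat = projection_copula_density(z, degree=4)
print(c_hat(0.5, 0.5))  # roughly 1 / sqrt(1 - 0.6**2) ~ 1.25 for this Gaussian copula
```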

We introduce deep learning models to estimate the masses of the binary components of black hole mergers, $(m_1, m_2)$, and three astrophysical properties of the post-merger compact remnant, namely, the final spin, $a_f$, and the frequency and damping time of the ringdown oscillations of the fundamental $\ell=m=2$ bar mode, $(\omega_R, \omega_I)$. Our neural networks combine a modified $\texttt{WaveNet}$ architecture with contrastive learning and a normalizing flow. We validate these models against a Gaussian conjugate prior family whose posterior distribution is described by a closed analytical expression. Upon confirming that our models produce statistically consistent results, we use them to estimate the astrophysical parameters $(m_1, m_2, a_f, \omega_R, \omega_I)$ of five binary black holes: $\texttt{GW150914}$, $\texttt{GW170104}$, $\texttt{GW170814}$, $\texttt{GW190521}$ and $\texttt{GW190630}$. We use $\texttt{PyCBC Inference}$ to directly compare traditional Bayesian methodologies for parameter estimation with our deep-learning-based posterior distributions. Our results show that our neural network models predict posterior distributions that encode physical correlations, and that our data-driven median results and 90\% confidence intervals are similar to those produced with gravitational wave Bayesian analyses. This methodology requires a single $\texttt{NVIDIA}$ V100 GPU to produce median values and posterior distributions within two milliseconds for each event. This neural network, and a tutorial for its use, are available at the $\texttt{Data and Learning Hub for Science}$.
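
The abstract does not spell out the conjugate validation family, so the following is only a minimal sketch of the kind of closed-form check described, for a Gaussian mean with known noise variance and a Gaussian prior:

```python
import numpy as np

def normal_conjugate_posterior(y, mu0, tau0, sigma):
    """Closed-form posterior for a Gaussian mean with a Gaussian prior.

    Model: y_i ~ N(mu, sigma^2) with known sigma; prior mu ~ N(mu0, tau0^2).
    Returns the posterior mean and standard deviation of mu, against which a
    learned posterior can be compared.
    """
    n, ybar = len(y), np.mean(y)
    post_var = 1.0 / (1.0 / tau0**2 + n / sigma**2)
    post_mean = post_var * (mu0 / tau0**2 + n * ybar / sigma**2)
    return post_mean, np.sqrt(post_var)

rng = np.random.default_rng(0)
y = rng.normal(2.0, 1.0, size=50)
print(normal_conjugate_posterior(y, mu0=0.0, tau0=10.0, sigma=1.0))
```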

Let $P$ be a bounded polyhedron defined as the intersection of the non-negative orthant ${\Bbb R}^n_+$ and an affine subspace of codimension $m$ in ${\Bbb R}^n$. We show that a simple and computationally efficient formula approximates the volume of $P$ within a factor of $\gamma^m$, where $\gamma >0$ is an absolute constant. The formula provides the best known estimate for the volume of transportation polytopes from a wide family.

This work focuses on combining nonparametric topic models with Auto-Encoding Variational Bayes (AEVB). Specifically, we first propose iTM-VAE, where the topics are treated as trainable parameters and the document-specific topic proportions are obtained by a stick-breaking construction. The inference of iTM-VAE is modeled by neural networks such that it can be computed in a simple feed-forward manner. We also describe how to introduce a hyper-prior into iTM-VAE so as to model the uncertainty of the prior parameter. In fact, the hyper-prior technique is quite general, and we show that it can be applied to other AEVB-based models to elegantly alleviate the {\it collapse-to-prior} problem. Moreover, we propose HiTM-VAE, where the document-specific topic distributions are generated in a hierarchical manner. HiTM-VAE is even more flexible and can generate topic distributions with better variability. Experimental results on the 20News and Reuters RCV1-V2 datasets show that the proposed models outperform state-of-the-art baselines significantly. The advantages of the hyper-prior technique and the hierarchical model construction are also confirmed by experiments.
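
The stick-breaking construction used for the document-specific topic proportions is easy to sketch. In iTM-VAE the stick fractions would be produced by the inference network; the toy example below simply draws them from a Beta distribution.

```python
import numpy as np

def stick_breaking(v):
    """Map stick fractions v_1..v_{K-1} in (0, 1) to topic proportions pi on
    the K-simplex: pi_k = v_k * prod_{j<k} (1 - v_j), with the remaining
    stick assigned to pi_K."""
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - v)))
    return np.concatenate((v, [1.0])) * remaining

# In an AEVB topic model, v would come from the inference network
# (e.g. reparameterized Kumaraswamy samples); here we draw Beta(1, alpha).
rng = np.random.default_rng(0)
v = rng.beta(1.0, 5.0, size=9)
pi = stick_breaking(v)
print(pi, pi.sum())  # 10 proportions summing to 1
```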

In this work, we consider the distributed optimization of non-smooth convex functions using a network of computing units. We investigate this problem under two regularity assumptions: (1) the Lipschitz continuity of the global objective function, and (2) the Lipschitz continuity of local individual functions. Under the local regularity assumption, we provide the first optimal first-order decentralized algorithm called multi-step primal-dual (MSPD) and its corresponding optimal convergence rate. A notable aspect of this result is that, for non-smooth functions, while the dominant term of the error is in $O(1/\sqrt{t})$, the structure of the communication network only impacts a second-order term in $O(1/t)$, where $t$ is time. In other words, the error due to limits in communication resources decreases at a fast rate even in the case of non-strongly-convex objective functions. Under the global regularity assumption, we provide a simple yet efficient algorithm called distributed randomized smoothing (DRS) based on a local smoothing of the objective function, and show that DRS is within a $d^{1/4}$ multiplicative factor of the optimal convergence rate, where $d$ is the underlying dimension.
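
The smoothing primitive behind DRS is randomized (Gaussian) smoothing of the objective. The following single-machine sketch estimates a gradient of the smoothed function by Monte Carlo and uses it to minimize a non-smooth toy objective; the distributed aspects of DRS are not reproduced, and the step sizes are illustrative.

```python
import numpy as np

def smoothed_gradient(f, x, gamma=0.1, n_samples=100, seed=0):
    """Monte Carlo gradient of the smoothing f_gamma(x) = E[f(x + gamma * eps)].

    Uses grad f_gamma(x) = E[(f(x + gamma * eps) - f(x)) * eps] / gamma,
    with eps ~ N(0, I); subtracting f(x) reduces variance since E[eps] = 0.
    """
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal((n_samples, len(x)))
    deltas = np.array([f(x + gamma * e) for e in eps]) - f(x)
    return (deltas[:, None] * eps).mean(axis=0) / gamma

# Minimize the non-smooth f(x) = ||x - 1||_1 by gradient steps on its smoothing.
f = lambda x: np.abs(x - 1.0).sum()
x = np.zeros(3)
for t in range(1, 201):
    x -= (0.5 / np.sqrt(t)) * smoothed_gradient(f, x, seed=t)
print(x)  # approaches the minimizer (1, 1, 1)
```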

In this paper, we study the optimal convergence rates for distributed convex optimization problems over networks. We model the communication restrictions imposed by the network as a set of affine constraints and provide optimal complexity bounds for four different setups, namely when the function $F(\mathbf{x}) \triangleq \sum_{i=1}^{m} f_i(\mathbf{x})$ is: strongly convex and smooth, strongly convex, smooth, or just convex. Our results show that Nesterov's accelerated gradient descent on the dual problem can be executed in a distributed manner and obtains the same optimal rates as in the centralized version of the problem (up to constant or logarithmic factors), with an additional cost related to the spectral gap of the interaction matrix. Finally, we discuss extensions of the proposed setup such as proximal-friendly functions, time-varying graphs, and improvements of the condition numbers.
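
For reference, here is a minimal single-machine sketch of Nesterov's accelerated gradient iteration for a smooth, strongly convex objective. In the paper these updates run on the dual of the consensus-constrained problem, where each gradient evaluation corresponds to a round of communication through the network; that distributed machinery is not modeled here.

```python
import numpy as np

def nesterov(grad, x0, L, mu, n_iter=200):
    """Nesterov's accelerated gradient for an L-smooth, mu-strongly convex f,
    with constant momentum (sqrt(L) - sqrt(mu)) / (sqrt(L) + sqrt(mu))."""
    q = (np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))
    x, y = x0.copy(), x0.copy()
    for _ in range(n_iter):
        x_new = y - grad(y) / L        # gradient step from the extrapolated point
        y = x_new + q * (x_new - x)    # momentum extrapolation
        x = x_new
    return x

# Toy quadratic f(x) = 0.5 x^T A x - b^T x; mu and L are the extreme eigenvalues of A.
A = np.diag([1.0, 10.0, 100.0])
b = np.ones(3)
x_star = nesterov(lambda x: A @ x - b, np.zeros(3), L=100.0, mu=1.0)
print(x_star)  # close to A^{-1} b = (1, 0.1, 0.01)
```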
