
We are interested in the nonparametric estimation of the probability density of price returns, using the kernel approach. The output of the method relies heavily on the selection of a bandwidth parameter. Many selection methods have been proposed in the statistical literature. We put forward an alternative selection method based on a criterion coming from information theory and from the physics of complex systems: the selected bandwidth maximizes a new measure of complexity, with the aim of avoiding both overfitting and underfitting. We review existing methods of bandwidth selection and show that they lead to contradictory conclusions regarding the complexity of the probability distribution of price returns. This also has striking consequences for evaluating the relevance of the efficient market hypothesis. We apply these methods to real financial data, focusing on Bitcoin.
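
As a rough illustration of how strongly a kernel density estimate depends on the bandwidth, the sketch below compares two standard selectors, Silverman's rule of thumb and likelihood cross-validation, on synthetic heavy-tailed "returns". The Student-t data, the candidate bandwidth grid, and the use of scikit-learn are illustrative assumptions; the complexity-maximizing criterion proposed above is not reproduced.

```python
import numpy as np
from sklearn.neighbors import KernelDensity
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
# Synthetic heavy-tailed "returns" (Student-t), standing in for real price returns.
returns = rng.standard_t(df=3, size=2000) * 0.01

# Silverman's rule of thumb (assumes near-Gaussian data, tends to oversmooth heavy tails).
n, sigma = returns.size, returns.std(ddof=1)
h_silverman = 1.06 * sigma * n ** (-1 / 5)

# Likelihood cross-validation: pick the bandwidth maximizing held-out log-likelihood.
grid = GridSearchCV(
    KernelDensity(kernel="gaussian"),
    {"bandwidth": np.logspace(-4, -1, 30)},
    cv=5,
)
grid.fit(returns.reshape(-1, 1))
h_cv = grid.best_params_["bandwidth"]

print(f"Silverman bandwidth:        {h_silverman:.5f}")
print(f"Cross-validated bandwidth:  {h_cv:.5f}")
```

The two rules typically disagree on heavy-tailed data, which is exactly the kind of ambiguity the abstract's complexity-based criterion is meant to arbitrate.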

Related content

We observe n possibly dependent random variables, the distribution of which is presumed to be stationary even though this might not be true, and we aim at estimating the stationary distribution. We establish a non-asymptotic deviation bound for the Hellinger distance between the target distribution and our estimator. If the dependence within the observations is small, the estimator performs as well as if the data were independent and identically distributed. In addition, our estimator is robust to misspecification and contamination. If the dependence is too high but the observed process is mixing, we can select a subset of observations that is almost independent and retrieve results similar to those in the i.i.d. case. We apply our procedure to the estimation of the invariant distribution of a diffusion process and to finite state space hidden Markov models.
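
The Hellinger distance used as the loss above is easy to evaluate numerically on a grid. The sketch below computes it between a simple histogram density estimate and a reference Gaussian density; the histogram estimator, the grid, and the Gaussian target are illustrative stand-ins, not the paper's estimator or its deviation bound.

```python
import numpy as np
from scipy.stats import norm

def hellinger_distance(p, q, dx):
    """Hellinger distance between two densities sampled on a common grid."""
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2) * dx)

rng = np.random.default_rng(1)
sample = rng.normal(loc=0.0, scale=1.0, size=5000)

# Simple histogram density estimate versus the true N(0, 1) density.
grid = np.linspace(-5, 5, 501)
dx = grid[1] - grid[0]
hist, edges = np.histogram(sample, bins=50, range=(-5, 5), density=True)
bin_idx = np.clip(np.searchsorted(edges, grid, side="right") - 1, 0, hist.size - 1)
p_hat = hist[bin_idx]
q_true = norm.pdf(grid)

print(f"Hellinger distance: {hellinger_distance(p_hat, q_true, dx):.4f}")
```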

This work is concerned with numerically recovering multiple parameters simultaneously in the subdiffusion model from a single lateral measurement on part of the boundary, in an incompletely known medium. We prove that the boundary measurement corresponding to a fairly general boundary excitation uniquely determines the order of the fractional derivative and the polygonal support of the diffusion coefficient, without knowledge of either the initial condition or the source. The uniqueness analysis further inspires the development of a robust numerical algorithm for recovering the fractional order and diffusion coefficient. The proposed algorithm combines small-time asymptotic expansion, analytic continuation of the solution, and the level set method. We present extensive numerical experiments to illustrate the feasibility of the simultaneous recovery. In addition, we discuss the uniqueness of recovering general diffusion and potential coefficients from a single partial boundary measurement when the boundary excitation is more specialized.

Electrical distribution systems are extensively penetrated by Distributed Energy Resources (DERs) to meet energy demands, with the general perception that this enhances the system's resilience. However, integration of DERs may adversely affect grid operation and system resilience due to various factors such as their intermittent availability, the dynamics of weather conditions, non-linearity, complexity, the number of malicious threats, and the increasing reliability requirements of consumers. This paper proposes a methodology to evaluate the planning and operational resilience of power distribution systems under extreme events and to determine the withstand capability of the electrical network. The proposed framework is developed by employing complex network theory. Correlated networks for undesirable configurations are developed from the time series of active power monitored at nodes of the electrical network. For these correlated networks, we compute network parameters such as the clustering coefficient, assortativity coefficient, average degree, and power-law exponent for anticipation, and the percolation threshold for determining the network's withstand capability under extreme conditions. The proposed methodology is also suitable for identifying the hosting capacity of solar panels in the system while maintaining resilience under different unfavourable conditions, and for identifying the most critical nodes that could drive the system into non-resilience. This framework is demonstrated on the IEEE 123-node test feeder by generating active-power time-series data for a variety of electrical conditions using the simulation software GridLAB-D. The percolation threshold proved to be an effective metric for determining the planning and operational resilience of the power distribution system.
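
A minimal sketch of the correlation-network construction described above, using synthetic stand-in time series rather than GridLAB-D output: nodes whose active-power series are strongly correlated are connected, and networkx then provides the clustering coefficient, average degree, and assortativity named in the abstract. The correlation threshold of 0.6 and the random-walk data are assumptions for illustration only.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(2)
n_nodes, n_steps = 20, 500
# Stand-in for active-power time series monitored at distribution-system nodes.
power = rng.normal(size=(n_nodes, n_steps)).cumsum(axis=1)

# Build a correlation network: connect nodes whose series are strongly correlated.
corr = np.corrcoef(power)
threshold = 0.6  # illustrative; the paper derives its networks from simulated scenarios
adjacency = (np.abs(corr) >= threshold) & ~np.eye(n_nodes, dtype=bool)
G = nx.from_numpy_array(adjacency.astype(int))

print("average degree:        ", 2 * G.number_of_edges() / G.number_of_nodes())
print("clustering coefficient:", nx.average_clustering(G))
if G.number_of_edges() > 0:
    print("assortativity:         ", nx.degree_assortativity_coefficient(G))
```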

In a two-way contingency table analysis with explanatory and response variables, the analyst is interested in the independence of the two variables. However, if the test of independence does not show independence, or clearly shows a relationship, the analyst is interested in the degree of their association. Various measures have been proposed to quantify this association, one of which is the proportional reduction in variation (PRV) measure, which describes the reduction in variation from the marginal distribution to the conditional distribution of the response. Conventional PRV measures can assess the association of the entire contingency table, but they cannot accurately assess the association for each explanatory variable. In this paper, we propose a geometric mean type of PRV (geoPRV) measure that aims to sensitively capture the association of each explanatory variable with the response variable by using a geometric mean; it enables analysis without underestimation when there is partial bias in the cells of the contingency table. Furthermore, the geoPRV measure can be constructed from any functions that satisfy specific conditions, which offers practical advantages and makes it possible to express conventional PRV measures as geometric mean types in special cases.
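
For context, a conventional PRV measure such as Goodman and Kruskal's tau can be computed directly from a two-way table, as in the sketch below. The table values are made up for illustration; the geoPRV measure itself is not reproduced here, only the classical quantity it generalizes.

```python
import numpy as np

def goodman_kruskal_tau(table):
    """Conventional PRV measure (Goodman-Kruskal tau) for an r x c table,
    with rows as the explanatory variable and columns as the response."""
    p = table / table.sum()
    p_row = p.sum(axis=1)                              # explanatory marginal
    p_col = p.sum(axis=0)                              # response marginal
    v_marginal = 1.0 - np.sum(p_col ** 2)              # Gini variation of the response
    v_conditional = 1.0 - np.sum(p ** 2 / p_row[:, None])
    return (v_marginal - v_conditional) / v_marginal

# Hypothetical 3 x 3 contingency table of observed counts.
table = np.array([[30.0, 10.0, 5.0],
                  [8.0, 25.0, 12.0],
                  [4.0, 9.0, 27.0]])
print(f"Goodman-Kruskal tau: {goodman_kruskal_tau(table):.4f}")
```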

The demand for computational resources in the modeling process increases with the scale of the dataset, since traditional approaches to regression involve inverting huge data matrices. The main problem lies in the large data size, so a standard approach is subsampling, which aims at obtaining the most informative portion of the big data. In the current paper, we explore an existing approach based on leverage scores, proposed for subdata selection in linear model discrimination. Our objective is to apply this approach to select the most informative data points for estimating the unknown parameters of both the first-order linear model and a model with interactions. We conclude that the approach based on leverage scores improves on existing approaches, and we support this with simulation experiments as well as a real-data application.
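
A minimal sketch of leverage-score-based subdata selection for a first-order linear model, under assumptions of my own: leverage scores are the diagonal entries of the hat matrix, obtained here from a thin QR decomposition, and the highest-leverage rows are kept before fitting ordinary least squares. The synthetic data, the subdata size k, and the keep-the-top-k rule are illustrative rather than the exact algorithm studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, k = 100_000, 5, 1_000           # full data size, predictors, subdata size
X = rng.normal(size=(n, p))
beta = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ beta + rng.normal(scale=0.5, size=n)

# Leverage scores = diagonal of the hat matrix H = X (X'X)^{-1} X',
# computed via a thin QR decomposition to avoid forming H explicitly.
Q, _ = np.linalg.qr(X)
leverage = np.sum(Q ** 2, axis=1)

# Keep the k highest-leverage rows as the informative subdata, then fit OLS on them.
idx = np.argpartition(leverage, -k)[-k:]
beta_sub, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
print("estimate from subdata:", np.round(beta_sub, 3))
```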

The Ridgeless minimum $\ell_2$-norm interpolator in overparametrized linear regression has attracted considerable attention in recent years. While it seems to defy the conventional wisdom that overfitting leads to poor prediction, recent research reveals that its norm minimizing property induces an `implicit regularization' that helps prediction in spite of interpolation. This renders the Ridgeless interpolator a theoretically tractable proxy that offers useful insights into the mechanisms of modern machine learning methods. This paper takes a different perspective that aims at understanding the precise stochastic behavior of the Ridgeless interpolator as a statistical estimator. Specifically, we characterize the distribution of the Ridgeless interpolator in high dimensions, in terms of a Ridge estimator in an associated Gaussian sequence model with positive regularization, which plays the role of the prescribed implicit regularization in the context of prediction risk. Our distributional characterizations hold for general random designs and extend uniformly to positively regularized Ridge estimators. As a demonstration of the analytic power of these characterizations, we derive approximate formulae for a general class of weighted $\ell_q$ risks for Ridge(less) estimators that were previously available only for $\ell_2$. Our theory also provides certain further conceptual reconciliation with the conventional wisdom: given any data covariance, a certain amount of regularization in Ridge regression remains beneficial for `most' signals across various statistical tasks including prediction, estimation and inference, as long as the noise level is non-trivial. Surprisingly, optimal tuning can be achieved simultaneously for all the designated statistical tasks by a single generalized or $k$-fold cross-validation scheme, despite being designed specifically for tuning prediction risk.
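
The ridgeless interpolator itself is straightforward to compute: it is the minimum $\ell_2$-norm solution $X^+ y$, and it is the limit of ridge estimators as the regularization vanishes. The sketch below checks both facts numerically on a synthetic overparametrized design; the dimensions, signal, and noise level are arbitrary assumptions, and none of the paper's distributional results are reproduced.

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 50, 200                      # overparametrized: more features than samples
X = rng.normal(size=(n, p))
beta_star = rng.normal(size=p) / np.sqrt(p)
y = X @ beta_star + 0.1 * rng.normal(size=n)

# Ridgeless (minimum l2-norm) interpolator: beta = X^+ y.
beta_ridgeless = np.linalg.pinv(X) @ y
assert np.allclose(X @ beta_ridgeless, y)      # it interpolates the training data exactly

def ridge(X, y, lam):
    """Ridge estimator (X'X + lam I)^{-1} X'y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# As the regularization vanishes, ridge converges to the ridgeless interpolator.
for lam in (1.0, 1e-3, 1e-6):
    gap = np.linalg.norm(ridge(X, y, lam) - beta_ridgeless)
    print(f"lambda={lam:g}  distance to ridgeless: {gap:.2e}")
```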

Causal inference for extreme events has many potential applications in fields such as climate science, medicine and economics. We study the extremal quantile treatment effect of a binary treatment on a continuous, heavy-tailed outcome. Existing methods are limited to the case where the quantile of interest is within the range of the observations. For applications in risk assessment, however, the most relevant cases relate to extremal quantiles that go beyond the data range. We introduce an estimator of the extremal quantile treatment effect that relies on asymptotic tail approximation, and use a new causal Hill estimator for the extreme value indices of potential outcome distributions. We establish asymptotic normality of the estimators and propose a consistent variance estimator to achieve valid statistical inference. We illustrate the performance of our method in simulation studies, and apply it to a real data set to estimate the extremal quantile treatment effect of college education on wage.
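
As background for the tail machinery above, the sketch below implements the classical Hill estimator of the extreme value index and a Weissman-type extrapolated quantile for a single heavy-tailed sample. The causal Hill estimator and the treatment-effect construction of the paper are not reproduced; the Pareto sample and the choice of k are illustrative assumptions.

```python
import numpy as np

def hill_estimator(x, k):
    """Classical Hill estimator of the extreme value (tail) index,
    based on the k largest order statistics of a positive sample."""
    x_sorted = np.sort(x)
    top = x_sorted[-k:]
    threshold = x_sorted[-k - 1]
    return np.mean(np.log(top) - np.log(threshold))

def weissman_quantile(x, k, p):
    """Extrapolated extreme quantile at level 1 - p (Weissman-type estimator)."""
    n = x.size
    gamma = hill_estimator(x, k)
    threshold = np.sort(x)[-k - 1]
    return threshold * (k / (n * p)) ** gamma

rng = np.random.default_rng(5)
sample = rng.pareto(a=2.5, size=5000) + 1.0   # Pareto tail, true index 1/2.5 = 0.4
print(f"Hill estimate of the tail index: {hill_estimator(sample, k=200):.3f}")
print(f"Estimated 0.9999-quantile:       {weissman_quantile(sample, k=200, p=1e-4):.2f}")
```

Extrapolation beyond the data range, as in the last line, is exactly the regime the abstract targets: the 0.9999-quantile is not observable in a sample of size 5000.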

Probability density estimation is a core problem of statistics and signal processing. Moment methods are an important means of density estimation, but they generally depend strongly on the choice of feasible functions, which severely affects performance. In this paper, we propose a non-classical parametrization for density estimation using sample moments, which does not require the choice of such functions. The parametrization is induced by the squared Hellinger distance, and its solution, which is proved to exist and to be unique subject to a simple prior that does not depend on the data, can be obtained by convex optimization. Statistical properties of the density estimator, together with an asymptotic upper bound on the error, are derived for the estimator by power moments. Applications of the proposed density estimator to signal processing tasks are given. Simulation results validate the performance of the estimator through comparison with several prevailing methods. To the best of our knowledge, the proposed estimator is the first in the literature for which the power moments up to an arbitrary even order exactly match the sample moments, while the true density is not assumed to fall within specific function classes.
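
A small sketch related to the moment-matching constraint: it computes sample power moments up to an even order and checks that the associated Hankel matrix is positive semidefinite, a classical necessary condition for a moment sequence to come from a probability distribution. The paper's Hellinger-induced parametrization and convex program are not reproduced; the Gaussian sample and the chosen order are illustrative.

```python
import numpy as np

def sample_power_moments(x, order):
    """Sample power moments m_k = mean(x^k) for k = 0, ..., order."""
    return np.array([np.mean(x ** k) for k in range(order + 1)])

def hankel_moment_matrix(moments):
    """Hankel matrix H[i, j] = m_{i+j}; it must be positive semidefinite
    for the moments to come from some probability distribution."""
    d = (len(moments) - 1) // 2
    return np.array([[moments[i + j] for j in range(d + 1)] for i in range(d + 1)])

rng = np.random.default_rng(6)
x = rng.normal(size=10_000)
m = sample_power_moments(x, order=8)        # moments up to an even order
H = hankel_moment_matrix(m)
eigvals = np.linalg.eigvalsh(H)
print("smallest Hankel eigenvalue (nonnegative up to rounding):", eigvals.min())
```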

The accurate and efficient simulation of Partial Differential Equations (PDEs) in and around arbitrarily defined geometries is critical for many application domains. Immersed boundary methods (IBMs) alleviate the usually laborious and time-consuming process of creating body-fitted meshes around complex geometry models (described by CAD or other representations, e.g., STL, point clouds), especially when high levels of mesh adaptivity are required. In this work, we advance the field of IBM in the context of the recently developed Shifted Boundary Method (SBM). In the SBM, the location where boundary conditions are enforced is shifted from the actual boundary of the immersed object to a nearby surrogate boundary, and boundary conditions are corrected utilizing Taylor expansions. This approach allows choosing surrogate boundaries that conform to a Cartesian mesh without losing accuracy or stability. Our contributions in this work are as follows: (a) we show that the SBM numerical error can be greatly reduced by an optimal choice of the surrogate boundary, (b) we mathematically prove the optimal convergence of the SBM for this optimal choice of the surrogate boundary, (c) we deploy the SBM on massively parallel octree meshes, including algorithmic advances to handle incomplete octrees, and (d) we showcase the applicability of these approaches with a wide variety of simulations involving complex shapes, sharp corners, and different topologies. Specific emphasis is given to Poisson's equation and the linear elasticity equations.
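
A one-dimensional toy version of the shifted-boundary idea can make the Taylor correction concrete: solve -u'' = f on a mesh whose last node (the surrogate boundary) stops short of the true boundary, and impose the Dirichlet value there through a Taylor expansion carrying the first derivative. The sketch below is an illustrative finite-difference analogue under these assumptions, not the paper's finite-element, octree-based implementation; the mesh size, surrogate location, and manufactured solution are all my own choices.

```python
import numpy as np

# Toy SBM-style problem: -u'' = f on (0, 1), u(0) = 0, u(1) = g,
# with exact solution u(x) = sin(pi x / 2), so g = 1.
# The mesh deliberately stops at a surrogate boundary x_s < 1; the Dirichlet
# value at the true boundary is transferred to x_s via u(x_s) + (1 - x_s) u'(x_s) ~ g.

h = 0.01
x_s = 0.973                        # surrogate boundary, short of the true boundary at 1
x = np.arange(0.0, x_s + 1e-12, h)
N = x.size - 1
d = 1.0 - x[-1]                    # gap between surrogate and true boundary
f = (np.pi / 2) ** 2 * np.sin(np.pi * x / 2)
g = 1.0                            # true boundary value u(1)

A = np.zeros((N + 1, N + 1))
b = np.zeros(N + 1)
A[0, 0], b[0] = 1.0, 0.0           # exact Dirichlet condition at x = 0
for i in range(1, N):              # standard second-order interior stencil for -u''
    A[i, i - 1], A[i, i], A[i, i + 1] = -1.0 / h**2, 2.0 / h**2, -1.0 / h**2
    b[i] = f[i]
# Shifted boundary condition at the surrogate node, with u'(x_s) ~ (u_N - u_{N-1}) / h.
A[N, N], A[N, N - 1], b[N] = 1.0 + d / h, -d / h, g

u = np.linalg.solve(A, b)
error = np.max(np.abs(u - np.sin(np.pi * x / 2)))
print(f"max error on the truncated mesh: {error:.2e}")
```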

Statistical data by their very nature are indeterminate in the sense that if one repeated the process of collecting the data the new data set would be somewhat different from the original. Therefore, a statistical method, a map $\Phi$ taking a data set $x$ to a point in some space F, should be stable at $x$: Small perturbations in $x$ should result in a small change in $\Phi(x)$. Otherwise, $\Phi$ is useless at $x$ or -- and this is important -- near $x$. So one doesn't want $\Phi$ to have "singularities," data sets $x$ such that the limit of $\Phi(y)$ as $y$ approaches $x$ doesn't exist. (Yes, the same issue arises elsewhere in applied math.) However, broad classes of statistical methods have topological obstructions to continuity: They must have singularities. We show why and give lower bounds on the Hausdorff dimension, even Hausdorff measure, of the set of singularities of such data maps. There seem to be numerous examples. We apply mainly topological methods to study the (topological) singularities of functions defined (on dense subsets of) "data spaces" and taking values in spaces with nontrivial homology. At least in this book, data spaces are usually compact manifolds. The purpose is to gain insight into the numerical conditioning of statistical description, data summarization, and inference and learning methods. We prove general results that can often be used to bound below the dimension of the singular set. We apply our topological results to develop lower bounds on the Hausdorff measure of the singular set. We apply these methods to the study of plane fitting and measuring location of data on spheres. This is not a "final" version, merely another attempt.
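
A concrete numerical instance of such a singularity, in the spirit of the "measuring location of data on spheres" example: the mean direction of points on a circle is the angle of their Euclidean average and is undefined when that average is zero, so near such configurations an arbitrarily small perturbation of the data flips the output by pi. The sketch below is an illustrative example of my own, not taken from the text.

```python
import numpy as np

def mean_direction(angles):
    """Extrinsic mean direction of points on the unit circle: the angle of the
    Euclidean average of the embedded points (undefined when that average is 0)."""
    z = np.exp(1j * np.asarray(angles)).mean()
    return float(np.angle(z))

# Two nearly antipodal points on the circle, nudged by +/- delta.
delta = 1e-8
toward_zero = [np.pi / 2 - delta, -np.pi / 2 + delta]   # leans slightly toward angle 0
toward_pi = [np.pi / 2 + delta, -np.pi / 2 - delta]     # leans slightly toward angle pi

print(mean_direction(toward_zero))   # ~0.0
print(mean_direction(toward_pi))     # ~3.14159: an O(1) jump from an O(delta) change
```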
