
We consider the problem of estimating volatility based on high-frequency data when the observed price process is a continuous Itô semimartingale contaminated by microstructure noise. Assuming that the noise process is compatible across different sampling frequencies, we argue that it typically has a similar local behavior to fractional Brownian motion. For the resulting class of processes, which we call mixed semimartingales, we derive consistent estimators and asymptotic confidence intervals for the roughness parameter of the noise and the integrated price and noise volatilities, in all cases where these quantities are identifiable. Our model can explain key features of recent stock price data, most notably divergence rates in volatility signature plots that vary considerably over time and between assets.
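The volatility signature plot mentioned above is simply realized variance computed at a range of sampling intervals. A minimal sketch, using i.i.d. additive noise rather than the fractional noise treated in the abstract (all function names and parameter values are ours):

```python
import numpy as np

def realized_variance(log_prices, step):
    """Realized variance of log-prices subsampled every `step` observations."""
    sub = log_prices[::step]
    return np.sum(np.diff(sub) ** 2)

def signature_plot(log_prices, steps):
    """Map each sampling interval to its realized variance."""
    return {s: realized_variance(log_prices, s) for s in steps}

rng = np.random.default_rng(0)
n = 23400                                   # one trading day at 1-second resolution
sigma = 0.02                                # daily volatility of the efficient price
efficient = np.cumsum(sigma / np.sqrt(n) * rng.standard_normal(n))
noise = 1e-4 * rng.standard_normal(n)       # i.i.d. microstructure noise (illustrative)
observed = efficient + noise

sig = signature_plot(observed, steps=[1, 5, 30, 60, 300])
# With i.i.d. noise, realized variance at the highest frequency is inflated by
# roughly 2 * n * Var(noise), so the plot diverges as the interval shrinks.
```

Under the fractional noise of the abstract, the divergence rate as the interval shrinks depends on the roughness parameter instead of being fixed by the i.i.d. structure.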


We say that a continuous real-valued function $x$ admits the Hurst roughness exponent $H$ if the $p^{\text{th}}$ variation of $x$ converges to zero if $p>1/H$ and to infinity if $p<1/H$. For the sample paths of many stochastic processes, such as fractional Brownian motion, the Hurst roughness exponent exists and equals the standard Hurst parameter. In our main result, we provide a mild condition on the Faber--Schauder coefficients of $x$ under which the Hurst roughness exponent exists and is given as the limit of the classical Gladyshev estimates $\widehat H_n(x)$. This result can be viewed as a strong consistency result for the Gladyshev estimators in an entirely model-free setting, because no assumption whatsoever is made on the possible dynamics of the function $x$. Nonetheless, our proof is probabilistic and relies on a martingale that is hidden in the Faber--Schauder expansion of $x$. Since the Gladyshev estimators are not scale-invariant, we construct several scale-invariant estimators that are derived from the sequence $(\widehat H_n)_{n\in\mathbb{N}}$. We also discuss how a dynamic change in the Hurst roughness parameter of a time series can be detected. Our results are illustrated by means of high-frequency financial time series.
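The Gladyshev estimates above are built from sums of squared increments on dyadic grids. A numerical sketch, assuming the common form $\widehat H_n = 1/2 - \log_2(S_n)/(2n)$ with $S_n$ the sum of squared increments at mesh $2^{-n}$ (our reading of the estimator; names are ours):

```python
import numpy as np

def gladyshev_estimate(x, n):
    """Gladyshev-type estimate H_n = 1/2 - log2(S_n) / (2n).

    S_n is the sum of squared increments of x over the dyadic grid of mesh
    2^{-n}; for a path with Hurst exponent H, S_n scales like 2^{n(1 - 2H)}.
    Assumes len(x) - 1 is a power of two that is at least 2^n.
    """
    step = (len(x) - 1) >> n
    s_n = np.sum(np.diff(x[::step]) ** 2)
    return 0.5 - np.log2(s_n) / (2 * n)

# Sanity check on Brownian motion (H = 1/2) sampled on a dyadic grid.
rng = np.random.default_rng(1)
n_max = 16
incs = rng.standard_normal(2**n_max) * 2 ** (-n_max / 2)
bm = np.concatenate([[0.0], np.cumsum(incs)])
h_hat = gladyshev_estimate(bm, n_max)

# The estimate is not scale-invariant: rescaling x by c shifts it by
# exactly -log2(c**2) / (2 * n), which motivates the scale-invariant
# variants discussed in the abstract.
h_scaled = gladyshev_estimate(2.0 * bm, n_max)
```

The exact additive shift under rescaling makes the lack of scale invariance easy to verify numerically.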

In this paper, we treat estimation and prediction problems where negative multinomial variables are observed and in particular consider unbalanced settings. First, the problem of estimating multiple negative multinomial parameter vectors under the standardized squared error loss is treated and a new empirical Bayes estimator which dominates the UMVU estimator under suitable conditions is derived. Second, we consider estimation of the joint predictive density of several multinomial tables under the Kullback-Leibler divergence and obtain a sufficient condition under which the Bayesian predictive density with respect to a hierarchical shrinkage prior dominates the Bayesian predictive density with respect to the Jeffreys prior. Third, our proposed Bayesian estimator and predictive density give risk improvements in simulations. Finally, the problem of estimating the joint predictive density of negative multinomial variables is discussed.

Identifying cause-effect relations among variables is a key step in the decision-making process. While causal inference requires randomized experiments, researchers and policymakers are increasingly using observational studies to test causal hypotheses due to the wide availability of observational data and the infeasibility of experiments. The matching method is the most widely used technique for making causal inferences from observational data. However, the pair assignment process in one-to-one matching creates uncertainty in the inference because of the different choices made by the experimenter. Recently, discrete optimization models have been proposed to tackle such uncertainty. Although robust inference is possible with discrete optimization models, they produce nonlinear problems and lack scalability. In this work, we propose greedy algorithms to solve the robust causal inference test instances from observational data with continuous outcomes. We propose a unique framework to reformulate the nonlinear binary optimization problems as feasibility problems. By leveraging the structure of the feasibility formulation, we develop greedy schemes that are efficient in solving robust test problems. In many cases, the proposed algorithms achieve globally optimal solutions. We perform experiments on three real-world datasets to demonstrate the effectiveness of the proposed algorithms and compare our results with those of a state-of-the-art solver. Our experiments show that the proposed algorithms significantly outperform the exact method in terms of computation time while reaching the same conclusion for causal tests. Both numerical experiments and complexity analysis demonstrate that the proposed algorithms ensure the scalability required for harnessing the power of big data in the decision-making process.

Key to effective generic, or "black-box", variational inference is the selection of an approximation to the target density that balances accuracy and calibration speed. Copula models are promising options, but calibration of the approximation can be slow for some choices. Smith et al. (2020) suggest using "implicit copula" models that are formed by element-wise transformation of the target parameters. We show here why these are a tractable and scalable choice, and propose adjustments to increase their accuracy. We also show how a sub-class of elliptical copulas has a generative representation that allows easy application of the re-parameterization trick and efficient first order optimization methods. We demonstrate the estimation methodology using two statistical models as examples. The first is a mixed effects logistic regression, and the second is a regularized correlation matrix. For the latter, standard Markov chain Monte Carlo estimation methods can be slow or difficult to implement, yet our proposed variational approach provides an effective and scalable estimator. We illustrate by estimating a regularized Gaussian copula model for income inequality in U.S. states between 1917 and 2018.
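The generative representation that makes the re-parameterization trick applicable can be sketched as follows: draw a Gaussian vector through its Cholesky factor and push each coordinate through an element-wise transformation. This is an illustrative toy, not the paper's estimator; the exponential transform and all names are our choices:

```python
import numpy as np

def sample_implicit_copula(mu, L, transform, rng, n_samples):
    """Reparameterized draws from a Gaussian-copula-style approximation.

    z = mu + L @ eps with eps ~ N(0, I), then theta_i = transform(z_i)
    element-wise.  Because each draw is a deterministic function of
    (mu, L) and eps, gradients with respect to (mu, L) pass through the
    sample -- the re-parameterization trick.
    """
    eps = rng.standard_normal((n_samples, len(mu)))
    z = mu + eps @ L.T
    return transform(z)

rng = np.random.default_rng(2)
mu = np.zeros(3)
L = np.tril(0.3 * np.ones((3, 3))) + np.eye(3)   # illustrative Cholesky factor
# An exponential transform yields positive parameters with Gaussian copula.
theta = sample_implicit_copula(mu, L, np.exp, rng, n_samples=1000)
```

In a variational routine, the Monte Carlo average of the log-density over such draws would be differentiated directly with respect to `mu` and `L`.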

We study a new method for estimating the risk of an arbitrary estimator of the mean vector in the classical normal means problem. The key idea is to generate two auxiliary data vectors, by adding two carefully constructed normal noise vectors to the original data vector. We then train the estimator of interest on the first auxiliary data vector and test it on the second. In order to stabilize the estimate of risk, we average this procedure over multiple draws of the synthetic noise. A key aspect of this coupled bootstrap approach is that it delivers an unbiased estimate of risk under no assumptions on the estimator of the mean vector, albeit for a slightly "harder" version of the original normal means problem, where the noise variance is inflated. We show that, under the assumptions required for Stein's unbiased risk estimator (SURE), a limiting version of the coupled bootstrap estimator recovers SURE exactly (with an infinitesimal auxiliary noise variance and infinite bootstrap samples). We also analyze a bias-variance decomposition of the error of our risk estimator, to elucidate the effects of the variance of the auxiliary noise and the number of bootstrap samples on the accuracy of the risk estimator. Lastly, we demonstrate that our coupled bootstrap risk estimator performs quite favorably in simulated experiments and in an image denoising example.
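A minimal sketch of the coupled bootstrap as we read it: the two auxiliary vectors are `y + sqrt(alpha) * w` (training) and `y - w / sqrt(alpha)` (testing), which are independent when `w` is drawn from the same normal law as the data noise. The soft-thresholding example and all names are ours:

```python
import numpy as np

def coupled_bootstrap_risk(y, g, sigma, alpha, n_boot, rng):
    """Coupled-bootstrap risk estimate for an estimator g of the mean vector.

    Each draw builds an independent train/test pair from the same noise
    vector w; the squared test error, minus the test vector's own noise
    level n * sigma^2 * (1 + 1/alpha), is unbiased for the risk of g in
    the noise-inflated problem with variance (1 + alpha) * sigma^2.
    Averaging over draws stabilizes the estimate.
    """
    n = len(y)
    ests = []
    for _ in range(n_boot):
        w = sigma * rng.standard_normal(n)
        y_train = y + np.sqrt(alpha) * w
        y_test = y - w / np.sqrt(alpha)
        err = np.sum((y_test - g(y_train)) ** 2)
        ests.append(err - n * sigma**2 * (1 + 1 / alpha))
    return np.mean(ests)

rng = np.random.default_rng(3)
theta = np.zeros(50)
y = theta + rng.standard_normal(50)   # sigma = 1
soft = lambda v: np.sign(v) * np.maximum(np.abs(v) - 1.0, 0.0)  # soft-thresholding
risk_hat = coupled_bootstrap_risk(y, soft, sigma=1.0, alpha=0.1, n_boot=200, rng=rng)
```

For the identity estimator the construction can be checked by hand: its risk in the inflated problem is exactly `n * (1 + alpha) * sigma**2`.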

We present a unified technique for sequential estimation of convex divergences between distributions, including integral probability metrics like the kernel maximum mean discrepancy, $\varphi$-divergences like the Kullback-Leibler divergence, and optimal transport costs, such as powers of Wasserstein distances. This is achieved by observing that empirical convex divergences are (partially ordered) reverse submartingales with respect to the exchangeable filtration, coupled with maximal inequalities for such processes. These techniques appear to be complementary and powerful additions to the existing literature on both confidence sequences and convex divergences. We construct an offline-to-sequential device that converts a wide array of existing offline concentration inequalities into time-uniform confidence sequences that can be continuously monitored, providing valid tests or confidence intervals at arbitrary stopping times. The resulting sequential bounds pay only an iterated logarithmic price over the corresponding fixed-time bounds, retaining the same dependence on problem parameters (like dimension or alphabet size if applicable). These results are also applicable to more general convex functionals, like the negative differential entropy, suprema of empirical processes, and V-statistics.
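The offline-to-sequential idea can be illustrated with the classical "stitching" device: apply an offline bound on geometric epochs with a summable failure-probability schedule. This is a simplified stand-in for the paper's reverse-submartingale construction; the sub-Gaussian offline bound and all names are illustrative assumptions:

```python
import numpy as np

def offline_radius(n, delta):
    """Illustrative offline bound: sub-Gaussian mean deviation sqrt(2 log(1/delta) / n)."""
    return np.sqrt(2 * np.log(1 / delta) / n)

def sequential_radius(n, delta):
    """Time-uniform radius via geometric 'stitching'.

    Apply the offline bound on each epoch [2^k, 2^{k+1}) with failure
    probability delta / ((k+1)(k+2)) (these sum to delta over all k),
    evaluated at the epoch start.  A union bound over epochs makes the
    band valid simultaneously for all n; the price is the extra
    log(k) ~ log log n factor inside the square root.
    """
    k = int(np.floor(np.log2(n)))
    delta_k = delta / ((k + 1) * (k + 2))
    return offline_radius(2**k, delta_k)

# The sequential radius exceeds the offline one by an iterated-log factor.
ratio = sequential_radius(10**6, 0.05) / offline_radius(10**6, 0.05)
```

The ratio stays bounded by a slowly growing multiple of the fixed-time radius, matching the "iterated logarithmic price" described above.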

High-dimensional non-Gaussian time series data are increasingly encountered in a wide range of applications. Conventional estimation methods and technical tools are inadequate when it comes to ultra-high-dimensional and heavy-tailed data. We investigate robust estimation of high-dimensional autoregressive models with fat-tailed innovation vectors by solving a regularized regression problem using a convex robust loss function. As a significant improvement, the dimension can be allowed to increase exponentially with the sample size to ensure consistency under very mild moment conditions. To develop the consistency theory, we establish a new Bernstein-type inequality for sums arising from autoregressive models. Numerical results indicate good performance of the robust estimates.
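A hedged sketch of this kind of estimator: autoregressive coefficients fit by proximal gradient descent on a Huber loss with an $\ell_1$ penalty. The Huber loss, step sizes, and names are our illustrative choices, not the paper's exact procedure:

```python
import numpy as np

def huber_grad(r, c):
    """Derivative of the Huber loss: linear near zero, clipped in the tails."""
    return np.clip(r, -c, c)

def robust_lasso_ar(x, p, lam, c, lr=0.01, n_iter=3000):
    """AR(p) coefficients via proximal gradient on Huber loss + l1 penalty.

    Rows of the lagged design are (x_{t-1}, ..., x_{t-p}); the
    soft-thresholding step is the proximal operator of the l1 penalty.
    """
    n = len(x)
    X = np.column_stack([x[p - j - 1 : n - j - 1] for j in range(p)])
    y = x[p:]
    beta = np.zeros(p)
    for _ in range(n_iter):
        r = y - X @ beta
        grad = -X.T @ huber_grad(r, c) / len(y)
        beta = beta - lr * grad
        beta = np.sign(beta) * np.maximum(np.abs(beta) - lr * lam, 0.0)  # prox step
    return beta

rng = np.random.default_rng(5)
n = 2000
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.6 * x[t - 1] + rng.standard_t(df=3)   # heavy-tailed innovations
beta_hat = robust_lasso_ar(x, p=3, lam=0.01, c=1.0)
```

Clipping the residual gradient keeps single extreme innovations from dominating the fit, which is the point of the robust loss in the heavy-tailed regime.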

Implicit probabilistic models are models defined naturally in terms of a sampling procedure, and they often induce a likelihood function that cannot be expressed explicitly. We develop a simple method for estimating parameters in implicit models that does not require knowledge of the form of the likelihood function or any derived quantities, yet can be shown to be equivalent to maximizing likelihood under some conditions. Our result holds in the non-asymptotic parametric setting, where both the capacity of the model and the number of data examples are finite. We also demonstrate encouraging experimental results.

Owing to the recent advances in "Big Data" modeling and prediction tasks, variational Bayesian estimation has gained popularity due to its ability to provide tractable approximations to otherwise intractable posteriors. One key technique for approximate inference is stochastic variational inference (SVI). SVI poses variational inference as a stochastic optimization problem and solves it iteratively using noisy gradient estimates. It aims to handle massive data for predictive and classification tasks by applying complex Bayesian models that have observed as well as latent variables. This paper aims to decentralize SVI, allowing parallel computation, secure learning, and robustness benefits. We use the Alternating Direction Method of Multipliers (ADMM) in a top-down setting to develop a distributed SVI algorithm such that independent learners running inference algorithms need only share their estimated model parameters instead of their private datasets. Our work extends the distributed SVI-ADMM algorithm that we first propose to an ADMM-based networked SVI algorithm, in which the learners not only work distributively but also share information according to the rules of a graph by which they form a network. This kind of work lies under the umbrella of `deep learning over networks', and we verify our algorithm on a topic-modeling problem for a corpus of Wikipedia articles. We illustrate the results on a latent Dirichlet allocation (LDA) topic model for large document classification, compare performance with the centralized algorithm, and use numerical experiments to corroborate the analytical results.
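The parameter-sharing idea — learners exchange iterates, never private data — can be sketched with consensus ADMM on a toy quadratic problem (a stand-in for the SVI updates; all names are ours):

```python
import numpy as np

def consensus_admm(local_targets, rho=1.0, n_iter=100):
    """Consensus ADMM sketch: learners agree on shared parameters.

    Each learner i holds private data summarized (in this toy) by a
    quadratic objective ||x_i - a_i||^2 / 2.  Only the local iterates x_i
    and the consensus variable z are exchanged; the private a_i never
    leave the learner.
    """
    a = np.asarray(local_targets, dtype=float)
    x = np.zeros_like(a)
    u = np.zeros_like(a)
    z = np.zeros(a.shape[1])
    for _ in range(n_iter):
        x = (a + rho * (z - u)) / (1 + rho)   # local (private) updates
        z = (x + u).mean(axis=0)              # consensus: average shared iterates
        u = u + x - z                         # dual ascent
    return z

learners = [[1.0, 2.0], [3.0, 4.0], [5.0, 0.0]]
z = consensus_admm(learners)   # converges to the average of the local targets
```

In the networked variant described above, the global averaging step would be replaced by averaging over graph neighbors only.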

We develop an approach to risk minimization and stochastic optimization that provides a convex surrogate for variance, allowing near-optimal and computationally efficient trading between approximation and estimation error. Our approach builds on techniques for distributionally robust optimization and Owen's empirical likelihood, and we provide a number of finite-sample and asymptotic results characterizing the theoretical performance of the estimator. In particular, we show that our procedure comes with certificates of optimality, achieving (in some scenarios) faster rates of convergence than empirical risk minimization by virtue of automatically balancing bias and variance. We give corroborating empirical evidence showing that, in practice, the estimator indeed trades between variance and absolute performance on a training sample, improving out-of-sample (test) performance over standard empirical risk minimization for a number of classification problems.
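The variance surrogate can be sketched through the well-known expansion of the chi-square-ball robust objective: mean loss plus `sqrt(2 * rho * Var / n)`. This is our simplified reading; the paper works with the exact convex robust objective, and the names and parameters below are illustrative:

```python
import numpy as np

def variance_regularized_risk(losses, rho):
    """Mean loss plus a variance penalty: mean + sqrt(2 * rho * Var / n).

    This is the first-order expansion of the distributionally robust
    objective over a chi-square ball of radius rho/n around the empirical
    distribution; the exact robust objective is convex in the losses.
    """
    n = len(losses)
    return losses.mean() + np.sqrt(2 * rho * losses.var(ddof=1) / n)

rng = np.random.default_rng(6)
# Two hypothetical predictors with the same mean loss but different variance.
losses_a = rng.normal(0.30, 0.05, size=2000)
losses_b = rng.normal(0.30, 0.50, size=2000)
score_a = variance_regularized_risk(losses_a, rho=50.0)
score_b = variance_regularized_risk(losses_b, rho=50.0)
# The criterion prefers the lower-variance predictor at equal mean loss.
```

Selecting models by this criterion rather than the raw empirical mean is what "automatically balancing bias and variance" amounts to in this sketch.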
