
Higher-dimensional autoregressive models can describe econometric processes fairly generically if they incorporate heterogeneity in the dependence on time. This paper analyzes the stationarity of an autoregressive process of dimension $k>1$ whose coefficient sequence is a constant $\beta$ multiplied by successively increasing powers of $0<\delta<1$. The theorem gives conditions for stationarity as simple relations between the coefficients and $k$ in terms of $\delta$. Computationally, the evidence of stationarity depends on the parameters: the choice of $\delta$ sets the bounds on $\beta$ and the number of time lags needed for prediction with the model.
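
A minimal sketch of such a stationarity check (illustrative only; the lag structure $\phi_j = \beta\delta^j$ is our reading of the coefficient sequence described above) via the eigenvalues of the companion matrix:

```python
# Checks stationarity of x_t = sum_{j=1..k} beta * delta**j * x_{t-j} + e_t.
import numpy as np

def is_stationary(beta, delta, k):
    """Stationary iff every eigenvalue of the companion matrix has modulus < 1,
    i.e. all roots of the lag polynomial 1 - sum_j phi_j z**j lie outside the unit circle."""
    phi = beta * delta ** np.arange(1, k + 1)     # phi_1, ..., phi_k
    companion = np.zeros((k, k))
    companion[0, :] = phi
    companion[1:, :-1] = np.eye(k - 1)
    return bool(np.all(np.abs(np.linalg.eigvals(companion)) < 1.0))

print(is_stationary(beta=0.3, delta=0.5, k=5))   # True: total coefficient mass below one
print(is_stationary(beta=1.5, delta=0.9, k=5))   # False: coefficient mass too large
```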

Related content

Rough path theory provides one with the notion of signature, a graded family of tensors which characterise, up to a negligible equivalence class, an ordered stream of vector-valued data. In the last few years, use of the signature has gained traction in time-series analysis, machine learning, deep learning and, more recently, in kernel methods. In this article, we lay down the theoretical foundations for a connection between signature asymptotics, the theory of empirical processes, and Wasserstein distances, opening up the landscape and toolkit of the second and third in the study of the first. Our main contribution is to show that the Hambly-Lyons limit can be reinterpreted as a statement about the asymptotic behaviour of Wasserstein distances between two independent empirical measures of samples from the same underlying distribution. In the setting studied here, these measures are derived from samples of a probability distribution determined by geometrical properties of the underlying path. The general question of rates of convergence for these objects has been studied in depth in the recent monograph of Bobkov and Ledoux. By using these results, we generalise the original result of Hambly and Lyons from $C^3$ curves to a broad class of $C^2$ ones. We conclude by providing an explicit way to compute the limit in terms of a second-order differential equation.
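
A small numerical sketch of the empirical-measure viewpoint invoked above (illustrative only; a standard normal stands in for the path-derived distribution): the 1-Wasserstein distance between two independent empirical measures drawn from the same distribution, which shrinks as the sample size grows.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
for n in (10, 100, 1000, 10000):
    # two independent samples from the same underlying distribution
    u = rng.normal(size=n)
    v = rng.normal(size=n)
    print(f"n = {n:>6}:  W1(empirical_u, empirical_v) = {wasserstein_distance(u, v):.4f}")
```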

Reward-Weighted Regression (RWR) belongs to a family of widely known iterative Reinforcement Learning algorithms based on the Expectation-Maximization framework. In this family, learning at each iteration consists of sampling a batch of trajectories using the current policy and fitting a new policy to maximize a return-weighted log-likelihood of actions. Although RWR is known to yield monotonic improvement of the policy under certain circumstances, whether and under which conditions RWR converges to the optimal policy have remained open questions. In this paper, we provide for the first time a proof that RWR converges to a global optimum when no function approximation is used, in a general compact setting. Furthermore, for the simpler case with finite state and action spaces we prove R-linear convergence of the state-value function to the optimum.
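
For intuition, here is a minimal tabular sketch of a single RWR iteration (illustrative, not the paper's analysis): with finite state and action spaces and no function approximation, maximizing the return-weighted log-likelihood of actions reduces to a normalized return-weighted action-count update; returns are assumed nonnegative, as is standard for RWR.

```python
import numpy as np

def rwr_update(trajectories, n_states, n_actions):
    """trajectories: iterable of (states, actions, total_return) triples collected
    with the current policy; returns the return-weighted MLE tabular policy."""
    weights = np.zeros((n_states, n_actions))
    for states, actions, ret in trajectories:
        for s, a in zip(states, actions):
            weights[s, a] += ret                      # return-weighted action counts
    totals = weights.sum(axis=1, keepdims=True)
    uniform = np.full((n_states, n_actions), 1.0 / n_actions)
    # normalized weight table; unvisited states fall back to the uniform policy
    return np.where(totals > 0, weights / np.maximum(totals, 1e-12), uniform)

# toy usage: two trajectories in a 2-state, 2-action problem
trajs = [([0, 1], [1, 0], 3.0), ([0, 1], [0, 0], 1.0)]
print(rwr_update(trajs, n_states=2, n_actions=2))
```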

The problem of optimal estimation of linear functionals constructed from the unobserved values of a stochastic sequence with periodically stationary increments, based on observations of the sequence with stationary noise, is considered. For sequences with known spectral densities, we obtain formulas for calculating the values of the mean square errors and the spectral characteristics of the optimal estimates of the functionals. Formulas that determine the least favorable spectral densities and the minimax-robust spectral characteristics of the optimal linear estimates of the functionals are proposed for the case where the spectral densities of the sequence are not exactly known but some sets of admissible spectral densities are specified.

We investigate the computational performance of Artificial Neural Networks (ANNs) in semi-nonparametric instrumental variables (NPIV) models with high-dimensional covariates that are relevant to empirical work in economics. We focus on efficient estimation of, and inference on, expectation functionals (such as weighted average derivatives) and use optimal criterion-based procedures (sieve minimum distance, or SMD) and novel efficient score-based procedures (ES). Both classes of procedures use ANNs to approximate the unknown functions. We then provide a detailed practitioner's recipe for implementing these two classes of estimators. This involves the choice of tuning parameters not only for the unknown functions (which include conditional expectations) but also for the estimation of the optimal weights in SMD and of the Riesz representers used with the ES estimators. We next conduct a large set of Monte Carlo experiments comparing the finite-sample performance of the estimators in complicated designs that involve a large set of regressors (up to 13 continuous) and various underlying nonlinearities and covariate correlations. Some of the takeaways from our results include: 1) tuning and optimization are delicate, especially as the problem is nonconvex; 2) various architectures of the ANNs do not seem to matter for the designs we consider, and given proper tuning, ANN methods perform well; 3) stable inferences are more difficult to achieve with ANN estimators; 4) optimal SMD-based estimators perform adequately; 5) there seems to be a gap between implementation and approximation theory. Finally, we apply ANN NPIV to estimate average price elasticity and average derivatives in two demand examples.
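
A minimal sketch of an SMD-style NPIV estimator with a small neural network (illustrative only; the simulated design, the polynomial instrument sieve, and the identity weighting are our assumptions, not the paper's implementation):

```python
import numpy as np
import torch
import torch.nn as nn

torch.manual_seed(0)
rng = np.random.default_rng(0)

# Simulated design: Z instruments an endogenous X; the structural function is sin.
n = 2000
z = rng.normal(size=(n, 1))
u = rng.normal(size=(n, 1))
x = z + 0.5 * u + 0.3 * rng.normal(size=(n, 1))   # X correlated with the error u
y = np.sin(x) + u

Z = torch.tensor(z, dtype=torch.float32)
X = torch.tensor(x, dtype=torch.float32)
Y = torch.tensor(y, dtype=torch.float32)

# Polynomial sieve basis for the instrument space, B(Z) = [1, Z, Z^2, Z^3].
B = torch.cat([Z ** k for k in range(4)], dim=1)
P = B @ torch.linalg.pinv(B.T @ B) @ B.T           # projection onto span{B(Z)}

# Small ANN approximating the unknown structural function h(X).
h = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(h.parameters(), lr=1e-3)

for step in range(2000):
    opt.zero_grad()
    resid = Y - h(X)
    m_hat = P @ resid                  # projected residual, a proxy for E[Y - h(X) | Z]
    loss = (m_hat ** 2).mean()         # SMD criterion with identity weighting
    loss.backward()
    opt.step()

# Example expectation functional: the average derivative E[h'(X)].
X_req = X.clone().requires_grad_(True)
avg_deriv = torch.autograd.grad(h(X_req).sum(), X_req)[0].mean()
print(f"estimated average derivative: {avg_deriv.item():.3f} (truth ~ E[cos(X)])")
```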

The sandwiched R\'enyi $\alpha$-divergences of two finite-dimensional quantum states play a distinguished role among the many quantum versions of R\'enyi divergences as the tight quantifiers of the trade-off between the two error probabilities in the strong converse domain of state discrimination. In this paper we show the same for the sandwiched R\'enyi divergences of two normal states on an injective von Neumann algebra, thereby establishing the operational significance of these quantities. Moreover, we show that in this setting, again similarly to the finite-dimensional case, the sandwiched R\'enyi divergences coincide with the regularized measured R\'enyi divergences, another distinctive feature of the former quantities. Our main tool is an approximation theorem (martingale convergence) for the sandwiched R\'enyi divergences, which may be used for the extension of various further results from the finite-dimensional to the von Neumann algebra setting. We also initiate the study of the sandwiched R\'enyi divergences of pairs of states on a $C^*$-algebra, and show that the above operational interpretation, as well as the equality to the regularized measured R\'enyi divergence, holds more generally for pairs of states on a nuclear $C^*$-algebra.
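
For readers less familiar with the finite-dimensional starting point, the sketch below numerically evaluates the standard sandwiched R\'enyi divergence $\widetilde{D}_\alpha(\rho\|\sigma)=\frac{1}{\alpha-1}\log\mathrm{Tr}\big[\big(\sigma^{\frac{1-\alpha}{2\alpha}}\rho\,\sigma^{\frac{1-\alpha}{2\alpha}}\big)^{\alpha}\big]$ for two density matrices (finite dimensions only; the von Neumann and $C^*$-algebraic extensions discussed above are beyond such a sketch).

```python
import numpy as np

def mat_power(A, t):
    """Fractional power of a Hermitian positive semi-definite matrix."""
    w, V = np.linalg.eigh(A)
    w = np.clip(w, 0.0, None)
    return (V * w ** t) @ V.conj().T

def sandwiched_renyi(rho, sigma, alpha):
    # for alpha > 1, sigma is assumed full rank (invertible)
    s = mat_power(sigma, (1.0 - alpha) / (2.0 * alpha))
    inner = s @ rho @ s
    return np.log(np.trace(mat_power(inner, alpha)).real) / (alpha - 1.0)

# Example: two qubit density matrices.
rho = np.array([[0.7, 0.2], [0.2, 0.3]])
sigma = np.array([[0.5, 0.0], [0.0, 0.5]])
print(sandwiched_renyi(rho, sigma, alpha=2.0))
```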

Popular network models such as the mixed membership and standard stochastic block model are known to exhibit distinct geometric structure when embedded into $\mathbb{R}^{d}$ using spectral methods. The resulting point cloud concentrates around a simplex in the first model, whereas it separates into clusters in the second. By adopting the formalism of generalised random dot-product graphs, we demonstrate that both of these models, and different mixing regimes in the case of mixed membership, may be distinguished by the persistent homology of the underlying point distribution in the case of adjacency spectral embedding. Moreover, despite non-identifiability issues, we show that the persistent homology of the support of the distribution and its super-level sets can be consistently estimated. As an application of our consistency results, we provide a topological hypothesis test for distinguishing the standard and mixed membership stochastic block models.
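
A minimal sketch of the pipeline referred to above (illustrative; the SBM parameters and the use of the ripser package are our choices): adjacency spectral embedding of a two-block stochastic block model, followed by persistent homology of the embedded point cloud.

```python
import numpy as np
from ripser import ripser

rng = np.random.default_rng(0)

# Two-block SBM: within-block probability 0.5, between-block probability 0.1.
n, B = 400, np.array([[0.5, 0.1], [0.1, 0.5]])
labels = rng.integers(0, 2, size=n)
P = B[labels][:, labels]
A = (rng.random((n, n)) < P).astype(float)
A = np.triu(A, 1); A = A + A.T                     # symmetric, no self-loops

# Adjacency spectral embedding into R^d: top-d eigenpairs of A, scaled by sqrt|eigenvalue|.
d = 2
vals, vecs = np.linalg.eigh(A)
idx = np.argsort(np.abs(vals))[::-1][:d]
X = vecs[:, idx] * np.sqrt(np.abs(vals[idx]))      # n points in R^d

# Persistent homology (H0 and H1) of the embedded cloud; for the SBM the points
# concentrate around two clusters, visible as two long-lived H0 classes.
dgms = ripser(X, maxdim=1)['dgms']
print("H0 classes:", len(dgms[0]), " H1 classes:", len(dgms[1]))
```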

We study stochastic sequences $\xi(k)$ with periodically stationary generalized multiple increments of fractional order, which combine cyclostationary, multi-seasonal, integrated and fractionally integrated patterns. We solve the filtering problem for linear functionals constructed from unobserved values of a stochastic sequence $\xi(k)$ based on observations with a periodically stationary noise sequence. For sequences with known matrices of spectral densities, we obtain formulas for calculating the values of the mean square errors and the spectral characteristics of the optimal estimates of the functionals. Formulas that determine the least favorable spectral densities and the minimax (robust) spectral characteristics of the optimal linear estimates of the functionals are proposed for the case where the spectral densities of the sequences are not exactly known but some sets of admissible spectral densities are given.

This paper studies the $\tau$-coherence of an $(n \times p)$ observation matrix in a Gaussian framework. The $\tau$-coherence is defined as the largest magnitude, outside a diagonal band of width $\tau$, of the empirical correlation coefficients associated with the observations. Using the Chen-Stein method, we derive the limiting law of the normalized coherence and show convergence towards a Gumbel distribution, thereby generalizing the results of Cai and Jiang [CJ11a]. We assume that the covariance matrix of the model is banded. Moreover, we provide numerical considerations highlighting issues that arise from the high-dimensional setting. We numerically illustrate the asymptotic behaviour of the coherence with Monte Carlo experiments, using an HPC splitting strategy for high-dimensional correlation matrices.
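
A small sketch of the statistic itself (illustrative; independent Gaussian columns stand in for the banded-covariance model): the $\tau$-coherence as the largest absolute empirical correlation outside a diagonal band of half-width $\tau$.

```python
import numpy as np

def tau_coherence(X, tau):
    """Largest |correlation| between columns i, j with |i - j| > tau."""
    R = np.corrcoef(X, rowvar=False)           # p x p empirical correlation matrix
    p = R.shape[0]
    i, j = np.indices((p, p))
    outside_band = np.abs(i - j) > tau
    return np.max(np.abs(R[outside_band]))

rng = np.random.default_rng(0)
n, p, tau = 200, 500, 3
X = rng.standard_normal((n, p))                # independent columns for illustration
print(tau_coherence(X, tau))
```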

A classifier processes input text in one of two ways: it either misclassifies the text or classifies it correctly. Misclassified texts comprise both texts on which the classifier simply makes incorrect predictions and adversarial texts generated specifically to fool it; we call this classifier the victim. Both types are misclassified by the victim but can still be recognized by other classifiers, which induces large gaps in predicted probabilities between the victim and the other classifiers. In contrast, text correctly classified by the victim is usually also predicted correctly by the others, inducing only small gaps. In this paper, we propose an ensemble model based on similarity estimation of predicted probabilities (SEPP) that exploits the large gaps on misclassified predictions in contrast to the small gaps on correct classifications. SEPP then corrects the incorrect predictions of the misclassified texts. We demonstrate the resilience of SEPP in detecting and defending against adversarial texts across different types of victim classifiers, classification tasks, and adversarial attacks.
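
A minimal sketch of the core gap signal (illustrative; not the paper's SEPP implementation): score each input by the average absolute gap between the victim's predicted probabilities and those of the other classifiers, and flag large gaps as likely misclassifications or adversarial inputs.

```python
import numpy as np

def probability_gap(victim_probs, other_probs_list):
    """Mean absolute gap between the victim's class probabilities and each
    other classifier's, averaged over classifiers and classes."""
    gaps = [np.abs(victim_probs - p).mean(axis=1) for p in other_probs_list]
    return np.mean(gaps, axis=0)               # one gap score per input text

def detect_suspicious(victim_probs, other_probs_list, threshold=0.3):
    """Flag inputs whose gap exceeds a threshold as likely misclassified/adversarial."""
    return probability_gap(victim_probs, other_probs_list) > threshold

# Toy example: 3 texts, 2 classes; the victim disagrees strongly on the last text.
victim = np.array([[0.9, 0.1], [0.8, 0.2], [0.95, 0.05]])
others = [np.array([[0.85, 0.15], [0.75, 0.25], [0.2, 0.8]]),
          np.array([[0.9, 0.1],  [0.7, 0.3],  [0.3, 0.7]])]
print(detect_suspicious(victim, others))        # -> [False False  True]
```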

Implicit probabilistic models are models defined naturally in terms of a sampling procedure; they often induce a likelihood function that cannot be expressed explicitly. We develop a simple method for estimating parameters in implicit models that does not require knowledge of the form of the likelihood function or any derived quantities, but can be shown to be equivalent to maximizing likelihood under some conditions. Our result holds in the non-asymptotic parametric setting, where both the capacity of the model and the number of data examples are finite. We also demonstrate encouraging experimental results.
