
Working with so-called linkages allows one to define a copula-based, $[0,1]$-valued multivariate dependence measure $\zeta^1(\boldsymbol{X},Y)$ quantifying the scale-invariant extent of dependence of a random variable $Y$ on a $d$-dimensional random vector $\boldsymbol{X}=(X_1,\ldots,X_d)$, a measure that exhibits various natural and desirable properties. In particular, $\zeta^1(\boldsymbol{X},Y)=0$ if and only if $\boldsymbol{X}$ and $Y$ are independent, $\zeta^1(\boldsymbol{X},Y)$ is maximal exclusively if $Y$ is a function of $\boldsymbol{X}$, and ignoring one or several coordinates of $\boldsymbol{X}$ cannot increase the resulting dependence value. After introducing and analyzing the metric $D_1$ underlying the construction of the dependence measure, and after deriving examples showing how much information can be lost by considering only the pairwise dependence values $\zeta^1(X_1,Y),\ldots,\zeta^1(X_d,Y)$, we derive a so-called checkerboard estimator for $\zeta^1(\boldsymbol{X},Y)$ and show that it is strongly consistent in full generality, i.e., without any smoothness restrictions on the underlying copula. Simulations illustrating the small-sample performance of the estimator complement the established theoretical results.
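
As a concrete illustration, the following minimal sketch implements a pairwise ($d=1$) checkerboard-type estimate, assuming the representation $\zeta^1(X,Y) = 3\,D_1(C,\Pi)$, i.e., three times the integrated distance between the empirical conditional distribution of $Y$ given $X$ and the uniform one. The resolution choice $N \approx \sqrt{n}$ and the midpoint quadrature are illustrative assumptions, not the paper's construction.

```python
import numpy as np
from scipy.stats import rankdata

def zeta1_checkerboard(x, y, N=None):
    """Pairwise (d = 1) checkerboard-type estimate of zeta^1(X, Y).

    Sketch only: assumes zeta^1 = 3 * D_1(C, Pi), approximating D_1 by
    the average of |F_{Y|X in strip}(y) - y| over an N x N rank grid.
    """
    n = len(x)
    if N is None:
        N = max(2, int(np.sqrt(n)))               # illustrative resolution
    u = (rankdata(x) - 0.5) / n                   # pseudo-observations in (0, 1)
    v = (rankdata(y) - 0.5) / n
    iu = np.minimum((u * N).astype(int), N - 1)   # x-strip index
    iv = np.minimum((v * N).astype(int), N - 1)   # y-bin index
    counts = np.zeros((N, N))
    np.add.at(counts, (iu, iv), 1.0)

    row = counts.sum(axis=1, keepdims=True)
    row[row == 0] = 1.0                           # guard against empty strips
    # conditional cdf of Y given the x-strip, evaluated at bin midpoints
    F_mid = (np.cumsum(counts, axis=1) - 0.5 * counts) / row
    y_mid = (np.arange(N) + 0.5) / N              # independence cdf values
    return 3.0 * np.mean(np.abs(F_mid - y_mid))   # midpoint quadrature of D_1
```

On independent samples the estimate is close to 0, while for $Y = X$ it approaches the maximal value 1, which is what the normalizing factor 3 achieves.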

Related content

We propose the first near-optimal quantum algorithm for estimating, in Euclidean norm, the mean of a vector-valued random variable with finite mean and covariance. Our result extends the theory of multivariate sub-Gaussian estimators to the quantum setting. Unlike in the classical setting, where any univariate estimator can be turned into a multivariate estimator with at most a logarithmic overhead in the dimension, no similar result can be proved in the quantum setting. Indeed, Heinrich ruled out the existence of a quantum advantage for the mean estimation problem when the sample complexity is smaller than the dimension. Our main result shows that, outside this low-precision regime, there is a quantum estimator that outperforms any classical estimator. Our approach is substantially more involved than in the univariate setting, where most quantum estimators rely only on phase estimation. We exploit a variety of additional algorithmic techniques, such as amplitude amplification, the Bernstein-Vazirani algorithm, and quantum singular value transformation. Our analysis also uses concentration inequalities for multivariate truncated statistics. We develop our quantum estimators in two different input models that have previously appeared in the literature. The first provides coherent access to the binary representation of the random variable and encompasses the classical setting. In the second model, the random variable is directly encoded into the phases of quantum registers. This model arises naturally in many quantum algorithms, but it is often incomparable to having classical samples. We adapt our techniques to these two settings, and we show that the second model is strictly weaker for solving the mean estimation problem. Finally, we describe several applications of our algorithms, notably to measuring the expectation values of commuting observables and to machine learning.
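
For contrast with the quantum setting, here is a minimal sketch of the classical fact cited above: a univariate sub-Gaussian estimator (median-of-means) applied coordinate-wise, where a union bound over the $d$ coordinates costs only a logarithmic overhead in the confidence parameter. The group count `k` is a free parameter.

```python
import numpy as np

def median_of_means(samples, k):
    """Univariate median-of-means: split the n samples into k groups and
    take the median of the group means; deviation bounds of sub-Gaussian
    type hold under a finite-variance assumption only."""
    groups = np.array_split(np.asarray(samples, dtype=float), k)
    return np.median([g.mean() for g in groups])

def coordinatewise_estimator(samples, k):
    """Multivariate estimator obtained from the univariate one, applied
    per coordinate; a union bound over the d coordinates costs only a
    log(d) factor, the classical overhead the abstract contrasts with."""
    X = np.asarray(samples, dtype=float)          # shape (n, d)
    return np.array([median_of_means(X[:, j], k) for j in range(X.shape[1])])
```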

We study the complexity of approximating the partition function of the $q$-state Potts model and the closely related Tutte polynomial for complex values of the underlying parameters. Apart from the classical connections with quantum computing and phase transitions in statistical physics, recent work in approximate counting has shown that the behaviour in the complex plane, and more precisely the location of zeros, is strongly connected with the complexity of the approximation problem, even for positive real-valued parameters. Previous work in the complex plane by Goldberg and Guo focused on $q=2$, which corresponds to the case of the Ising model; for $q>2$, the behaviour in the complex plane is not as well understood and most work applies only to the real-valued Tutte plane. Our main result is a complete classification of the complexity of the approximation problems for all non-real values of the parameters, by establishing \#P-hardness results that apply even when restricted to planar graphs. Our techniques apply to all $q\geq 2$ and further complement/refine previous results both for the Ising model and the Tutte plane, answering in particular a question raised by Bordewich, Freedman, Lov\'{a}sz and Welsh in the context of quantum computations.
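
To make the object of study concrete, the following brute-force sketch evaluates the $q$-state Potts partition function $Z = \sum_{\sigma} \prod_{\{u,v\} \in E} y^{[\sigma_u = \sigma_v]}$ at a (possibly complex) edge activity $y$. It is exponential in the number of vertices and only meant to illustrate the quantity whose approximation the paper classifies.

```python
from itertools import product

def potts_partition(edges, n_vertices, q, y):
    """Brute-force q-state Potts partition function:
    Z = sum over colourings sigma of y^(# monochromatic edges),
    where the edge activity y may be complex. Exponential in
    n_vertices; for illustration only."""
    Z = 0
    for sigma in product(range(q), repeat=n_vertices):
        w = 1
        for u, v in edges:
            if sigma[u] == sigma[v]:
                w *= y
        Z += w
    return Z

# example: a triangle at q = 3 with a non-real activity
print(potts_partition([(0, 1), (1, 2), (0, 2)], 3, 3, 0.5 + 1.0j))
```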

In directional statistics, the von Mises distribution is a key element in the analysis of circular data. While there is general agreement regarding the estimation of its location parameter $\mu$, several methods have been proposed to estimate the concentration parameter $\kappa$. Here we provide a thorough evaluation of the behavior of 12 such estimators on datasets of size $N$ ranging from 2 to 8\,192, generated with $\kappa$ ranging from 0 to 100. We provide detailed results as well as a global analysis, showing that (1) for a given $\kappa$, most estimators behave very similarly on large datasets ($N \geq 16$) and more variably on small ones, and (2) for a given estimator, results are very similar whether we consider the mean absolute error for $\kappa \leq 1$ or the mean relative absolute error for $\kappa \geq 1$.
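
As an example of the kind of estimator being compared, here is a sketch of one classical choice: Fisher's piecewise approximation to the maximum-likelihood equation $I_1(\kappa)/I_0(\kappa) = \bar{R}$, with formulas as given in Fisher's circular statistics text. It is one of many candidates such a study might include, not necessarily one of the 12 evaluated.

```python
import numpy as np

def kappa_fisher(theta):
    """Fisher's piecewise approximation to the ML estimate of kappa,
    inverting I_1(kappa)/I_0(kappa) = Rbar (Fisher, 1993)."""
    C, S = np.mean(np.cos(theta)), np.mean(np.sin(theta))
    Rbar = np.hypot(C, S)                          # mean resultant length
    if Rbar < 0.53:
        return 2 * Rbar + Rbar**3 + 5 * Rbar**5 / 6
    if Rbar < 0.85:
        return -0.4 + 1.39 * Rbar + 0.43 / (1 - Rbar)
    return 1.0 / (Rbar**3 - 4 * Rbar**2 + 3 * Rbar)
```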

The Sum-of-Squares (SoS) hierarchy of semidefinite programs is a powerful algorithmic paradigm which captures state-of-the-art algorithmic guarantees for a wide array of problems. In the average-case setting, SoS lower bounds provide strong evidence of algorithmic hardness or information-computation gaps. Prior to this work, SoS lower bounds had been obtained for problems in the "dense" input regime, where the input is a collection of independent Rademacher or Gaussian random variables, while the sparse regime has remained out of reach. We make the first progress in this direction by obtaining strong SoS lower bounds for the problem of Independent Set on sparse random graphs. We prove that with high probability over an Erd\H{o}s-R\'enyi random graph $G\sim G_{n,\frac{d}{n}}$ with average degree $d>\log^2 n$, degree-$D_{SoS}$ SoS fails to refute the existence of an independent set of size $k = \Omega\left(\frac{n}{\sqrt{d}(\log n)(D_{SoS})^{c_0}} \right)$ in $G$ (where $c_0$ is an absolute constant), whereas the true size of the largest independent set in $G$ is $O\left(\frac{n\log d}{d}\right)$. Our proof involves several significant extensions of the techniques used for proving SoS lower bounds in the dense setting. Previous lower bounds are based on the pseudo-calibration heuristic of Barak et al.\ [FOCS 2016], which produces a candidate SoS solution using a planted distribution indistinguishable from the input distribution via low-degree tests. In the sparse case the natural planted distribution does admit low-degree distinguishers, and we show how to adapt the pseudo-calibration heuristic to overcome this. Another notorious technical challenge in the sparse regime is obtaining matrix norm bounds. In this paper, we obtain new norm bounds for graph matrices in the sparse setting.
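
As a toy illustration of the scales involved (not of the SoS machinery itself), the sketch below samples $G \sim G_{n,\frac{d}{n}}$ and runs the naive greedy algorithm; its output is of order $(n \log d)/d$, the same order as the true maximum independent set quoted above, and well below the size that SoS fails to refute. The function name and parameters are ours, for illustration only.

```python
import numpy as np

def greedy_is_size(n, d, seed=0):
    """Sample G ~ G(n, d/n) and return the size of the independent set
    found by the naive greedy algorithm (~ n*log(d)/d for large d)."""
    rng = np.random.default_rng(seed)
    adj = [set() for _ in range(n)]
    for u in range(n):                             # sample each edge once
        nbrs = np.flatnonzero(rng.random(n - u - 1) < d / n) + u + 1
        for v in nbrs:
            adj[u].add(int(v))
            adj[int(v)].add(u)
    chosen = set()
    for v in rng.permutation(n):                   # greedy in random order
        if adj[int(v)].isdisjoint(chosen):
            chosen.add(int(v))
    return len(chosen)

print(greedy_is_size(n=5000, d=100))               # compare with n * np.log(100) / 100
```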

This paper presents a new parameter estimation algorithm for the adaptive control of a class of time-varying plants. The main feature of the algorithm is a matrix of time-varying learning rates, which enables the parameter estimation error trajectories to tend exponentially fast towards a compact set whenever excitation conditions are satisfied. The algorithm applies to a large class of problems in which the unknown parameters are time-varying. It is shown that the algorithm guarantees global boundedness of the state and parameter errors of the system while avoiding an often-used filtering approach for constructing key regressor signals. In addition, intervals of time over which these errors tend exponentially fast toward a compact set are provided, in the presence of both finite and persistent excitation. A projection operator is used to ensure the boundedness of the learning rate matrix, in place of a time-varying forgetting factor. Numerical simulations complement the theoretical analysis.
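
The paper's exact update is not reproduced here, but the following sketch shows the general shape of such schemes: a gradient-type parameter update driven by the prediction error, a recursive-least-squares-style matrix learning rate in place of a scalar gain, and a projection that caps the norm of the learning-rate matrix instead of a forgetting factor. All names and constants are illustrative.

```python
import numpy as np

def estimator_step(theta, Gamma, phi, y, gamma_max=1e3):
    """One step of a gradient estimator with a matrix learning rate:
    RLS-style gain adaptation plus a projection capping the spectral
    norm of Gamma (in place of a forgetting factor). Sketch only."""
    e = y - phi @ theta                            # prediction error
    denom = 1.0 + phi @ Gamma @ phi
    theta = theta + (Gamma @ phi) * (e / denom)    # parameter update
    Gamma = Gamma - np.outer(Gamma @ phi, Gamma @ phi) / denom
    s = np.linalg.norm(Gamma, 2)                   # spectral norm of Gamma
    if s > gamma_max:                              # crude projection step
        Gamma *= gamma_max / s
    return theta, Gamma
```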

Utility-Based Shortfall Risk (UBSR) is a risk metric that is increasingly popular in financial applications, owing to certain desirable properties that it enjoys. We consider the problem of estimating UBSR in a recursive setting, where samples from the underlying loss distribution arrive one at a time. We cast UBSR estimation as a root-finding problem and propose stochastic approximation-based estimation schemes. We derive non-asymptotic bounds on the estimation error as a function of the number of samples. We also consider the problem of UBSR optimization within a parameterized class of random variables, propose a stochastic gradient descent-based algorithm for it, and derive non-asymptotic bounds on its convergence.
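
A minimal sketch of the root-finding view: writing $\text{UBSR}(X) = \inf\{t : \mathbb{E}[\ell(X - t)] \leq \lambda\}$ under the loss convention for $X$, a Robbins-Monro recursion moves $t$ up while the sampled risk exceeds $\lambda$. The sign convention and the step sizes $a_k = c/k$ are illustrative assumptions, not the paper's scheme.

```python
import numpy as np

def ubsr_sa(loss_samples, ell, lam, t0=0.0, c=1.0):
    """Robbins-Monro recursion for UBSR(X) = inf{t : E[ell(X - t)] <= lam},
    processing one loss sample at a time with steps a_k = c / k."""
    t = t0
    for k, x in enumerate(loss_samples, start=1):
        t += (c / k) * (ell(x - t) - lam)          # move t up while risk > lam
    return t

# example with an exponential (entropic-type) loss: for X ~ N(0, 1) and
# lam = 1, the root of E[exp(X - t)] = 1 is t = 1/2
rng = np.random.default_rng(0)
print(ubsr_sa(rng.normal(size=100_000), np.exp, lam=1.0))
```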

High-dimensional non-Gaussian time series data are increasingly encountered in a wide range of applications. Conventional estimation methods and technical tools are inadequate for ultra-high-dimensional and heavy-tailed data. We investigate robust estimation of high-dimensional autoregressive models with fat-tailed innovation vectors by solving a regularized regression problem with a convex robust loss function. As a significant improvement, the dimension is allowed to increase exponentially with the sample size while consistency is ensured under very mild moment conditions. To develop the consistency theory, we establish a new Bernstein-type inequality for sums arising from autoregressive processes. Numerical results indicate good performance of the robust estimates.
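
A hedged sketch of the kind of regularized robust regression involved, with the Huber loss standing in for the generic convex robust loss and a VAR(1) model for concreteness; the proximal-gradient solver and all tuning constants are illustrative assumptions, not the paper's procedure.

```python
import numpy as np

def robust_var1(X, lam=0.1, delta=1.0, lr=1e-2, iters=500):
    """l1-regularized Huber regression for a VAR(1) model
    X_t = A X_{t-1} + eps_t, solved by proximal gradient descent."""
    Y, Z = X[1:], X[:-1]                           # responses / lagged design
    T, d = Z.shape
    A = np.zeros((d, d))
    for _ in range(iters):
        R = Y - Z @ A.T                            # residuals, shape (T, d)
        psi = np.clip(R, -delta, delta)            # Huber score function
        G = -psi.T @ Z / T                         # gradient of the smooth part
        A -= lr * G
        A = np.sign(A) * np.maximum(np.abs(A) - lr * lam, 0.0)  # soft-threshold
    return A
```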

The best polynomial approximation and the Chebyshev approximation are both important in numerical analysis. Traditionally, the best approximation has been regarded as superior to the Chebyshev approximation, because it is usually considered in the uniform norm. However, as noticed by Trefethen \cite{Trefethen11sixmyths,Trefethen2020} for functions with algebraic singularities, it is not always superior to the latter; recently, Wang \cite{Wang2021best} proved this in theory. In this paper, we find that for functions with logarithmic endpoint singularities, the pointwise errors of the Chebyshev approximation are smaller than those of the best approximation of the same degree, except in very narrow boundary layers. The pointwise error of the Chebyshev series truncated at degree $n$ is $O(n^{-\kappa})$ with $\kappa = \min\{2\gamma+1, 2\delta + 1\}$, but is worse by one power of $n$ in the narrow boundary layers near the weakly singular endpoints. Theorems are given to explain this effect.
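
The effect is easy to probe numerically. The sketch below uses a Chebyshev interpolant (as a proxy for the truncated Chebyshev series) of $f(x) = (1+x)\log(1+x)$, which has a logarithmic endpoint singularity at $x = -1$; the maximum error over a narrow boundary layer near $x = -1$ decays roughly one power of $n$ slower than the error over the interior. The test function and the grids are illustrative choices, not the paper's examples.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

f = lambda x: (1 + x) * np.log1p(x)                # log singularity at x = -1

for n in (32, 64, 128, 256):
    c = C.chebinterpolate(f, n)                    # degree-n interpolant
    x_in = np.linspace(-0.9, 0.9, 2001)            # interior grid
    x_bd = np.linspace(-1 + 1e-12, -0.9, 2001)     # boundary layer near -1
    err_in = np.max(np.abs(f(x_in) - C.chebval(x_in, c)))
    err_bd = np.max(np.abs(f(x_bd) - C.chebval(x_bd, c)))
    print(n, err_in, err_bd)                       # boundary error decays slower
```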

Implicit probabilistic models are models defined naturally in terms of a sampling procedure, and they often induce a likelihood function that cannot be expressed explicitly. We develop a simple method for estimating parameters in implicit models that does not require knowledge of the form of the likelihood function or any derived quantities, but that can be shown to be equivalent to maximizing the likelihood under some conditions. Our result holds in the non-asymptotic parametric setting, where both the capacity of the model and the number of data examples are finite. We also demonstrate encouraging experimental results.

In this paper, we study the optimal convergence rate for distributed convex optimization problems over networks. We model the communication restrictions imposed by the network as a set of affine constraints and provide optimal complexity bounds for four different setups, namely when the function $F(\mathbf{x}) \triangleq \sum_{i=1}^{m}f_i(\mathbf{x})$ is (i) strongly convex and smooth, (ii) strongly convex, (iii) smooth, or (iv) just convex. Our results show that Nesterov's accelerated gradient descent on the dual problem can be executed in a distributed manner and attains the same optimal rates as in the centralized version of the problem (up to constant or logarithmic factors), with an additional cost related to the spectral gap of the interaction matrix. Finally, we discuss some extensions of the proposed setup, such as proximal-friendly functions, time-varying graphs, and improvement of the condition numbers.
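
For reference, a generic sketch of the accelerated recursion in question, for an $L$-smooth, $\mu$-strongly convex objective; in the distributed setting of the abstract, the same recursion is run on the dual of the consensus-constrained problem, where the gradient computation decomposes across nodes. This is textbook Nesterov momentum, not the paper's full scheme.

```python
import numpy as np

def nesterov(grad, x0, L, mu, iters=1000):
    """Nesterov's accelerated gradient for an L-smooth, mu-strongly convex
    objective, with constant momentum (sqrt(L)-sqrt(mu))/(sqrt(L)+sqrt(mu))."""
    x = np.asarray(x0, dtype=float)
    y = x.copy()
    q = (np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))
    for _ in range(iters):
        x_new = y - grad(y) / L                    # gradient step from y
        y = x_new + q * (x_new - x)                # momentum extrapolation
        x = x_new
    return x
```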
