
Recently, we have classified Hermitian random matrix ensembles that are invariant under the conjugate action of the unitary group and stable with respect to matrix addition. Apart from a scaling and a shift, the whole information of such an ensemble is encoded in the stability exponent, which determines the ``heaviness'' of the tail, and the spectral measure, which describes the anisotropy of the probability distribution. In the present work, we address the question of how these ensembles can be generated from the knowledge of these two quantities alone. We consider a sum of independent and identically distributed random matrices of a specific construction, built from Haar distributed unitary matrices and stable random vectors. For this construction, we derive the rate of convergence in the supremum norm and show that this rate is optimal within the class of all stable invariant random matrices for a fixed stability exponent. As a consequence, we also obtain the rate of convergence in the total variation distance.
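A minimal sketch of a sum-of-i.i.d. construction in this spirit, assuming SciPy's levy_stable and unitary_group samplers; it pairs Haar unitaries with alpha-stable weight vectors and rescales the sum, and is meant only as an illustration, not the paper's exact construction.

```python
import numpy as np
from scipy.stats import levy_stable, unitary_group

def stable_invariant_matrix(n, alpha, num_terms=200, rng=None):
    """Schematic construction of a unitarily invariant, heavy-tailed Hermitian
    random matrix as a rescaled sum of i.i.d. summands, each built from a Haar
    unitary and a vector of alpha-stable weights (illustrative only)."""
    rng = np.random.default_rng(rng)
    H = np.zeros((n, n), dtype=complex)
    for _ in range(num_terms):
        U = unitary_group.rvs(n, random_state=rng)                 # Haar distributed unitary
        w = levy_stable.rvs(alpha, 0.0, size=n, random_state=rng)  # symmetric stable weights
        H += U @ np.diag(w) @ U.conj().T                           # unitarily invariant summand
    return H / num_terms ** (1.0 / alpha)                          # stable rescaling of the sum

H = stable_invariant_matrix(n=8, alpha=1.5, num_terms=500, rng=0)
print(np.allclose(H, H.conj().T))   # Hermitian by construction
```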

Related Content

Gradient clipping is a popular modification of standard (stochastic) gradient descent which, at every iteration, caps the gradient norm at a certain value $c > 0$. It is widely used, for example, for stabilizing the training of deep learning models (Goodfellow et al., 2016) or for enforcing differential privacy (Abadi et al., 2016). Despite the popularity and simplicity of the clipping mechanism, its convergence guarantees often require specific values of $c$ and strong noise assumptions. In this paper, we give convergence guarantees that show precise dependence on arbitrary clipping thresholds $c$ and show that our guarantees are tight for both deterministic and stochastic gradients. In particular, we show that (i) for deterministic gradient descent, the clipping threshold only affects the higher-order terms of convergence, and (ii) in the stochastic setting, convergence to the true optimum cannot be guaranteed under the standard noise assumption, even with arbitrarily small step-sizes. We give matching upper and lower bounds for the convergence of the gradient norm when running clipped SGD, and illustrate these results with experiments.
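A minimal sketch of the clipping rule described above; the quadratic objective in the usage snippet is an arbitrary assumption for illustration, not an objective from the paper.

```python
import numpy as np

def clipped_sgd_step(w, grad, lr, c):
    """One step of clipped (stochastic) gradient descent: rescale the gradient
    so that its norm never exceeds the clipping threshold c > 0."""
    g_norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, c / g_norm) if g_norm > 0 else grad
    return w - lr * clipped

# toy usage on f(w) = 0.5 * ||w||^2 (illustrative objective)
w = np.array([10.0, -4.0])
for _ in range(100):
    w = clipped_sgd_step(w, grad=w, lr=0.1, c=1.0)
print(w)   # approaches the minimizer at the origin
```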

The conic bundle implementation of the spectral bundle method for large scale semidefinite programming solves in each iteration a semidefinite quadratic subproblem by an interior point approach. For larger cutting model sizes, the limiting operation is collecting and factorizing a Schur complement of the primal-dual KKT system. We explore possibilities to improve on this with an iterative approach that exploits structural low-rank properties. Two preconditioning approaches are proposed and analyzed; both may be of interest for rank-structured positive definite systems in general. The first employs projections onto random subspaces, the second projects onto a subspace that is chosen deterministically based on structural interior point properties. For both approaches, theoretical bounds on the associated condition number are derived. In the instances tested, the deterministic preconditioner provides surprisingly effective control of the actual condition number. The results suggest that, for large scale instances, the iterative solver is usually the better choice if precision requirements are moderate or if the size of the Schur complemented system clearly exceeds the active dimension within the subspace giving rise to the cutting model of the bundle method.
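The following is a generic sketch of the "projection onto a random subspace" idea for a rank-structured positive definite system, using a toy matrix A = I + VV^T and a hand-written preconditioned CG loop; it is not ConicBundle's KKT Schur complement nor the paper's specific preconditioners, only an illustration of why capturing the low-rank part in a sketched subspace can tame the iteration count.

```python
import numpy as np

def pcg(A_mv, b, M_inv_mv, tol=1e-8, maxiter=500):
    """Plain preconditioned conjugate gradient; A_mv and M_inv_mv apply the
    system matrix and the preconditioner inverse to a vector."""
    x = np.zeros_like(b)
    r = b - A_mv(x)
    z = M_inv_mv(r)
    p = z.copy()
    rz = r @ z
    for it in range(1, maxiter + 1):
        Ap = A_mv(p)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            return x, it
        z = M_inv_mv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxiter

rng = np.random.default_rng(0)
n, r, k = 2000, 30, 40
V = rng.standard_normal((n, r))                      # low-rank factor of the toy system
A_mv = lambda x: x + V @ (V.T @ x)                   # A = I + V V^T, SPD and rank structured
b = rng.standard_normal(n)

# random-subspace preconditioner: sketch the low-rank part and project onto it
Omega = rng.standard_normal((n, k))
Q, _ = np.linalg.qr(V @ (V.T @ Omega))               # span contains range(V) w.h.p. when k >= r
B = Q.T @ (Q + V @ (V.T @ Q))                        # B = Q^T A Q (small k x k matrix)
B_inv = np.linalg.inv(B)
M_inv_mv = lambda x: Q @ (B_inv @ (Q.T @ x)) + (x - Q @ (Q.T @ x))

_, it_plain = pcg(A_mv, b, lambda x: x)
_, it_prec = pcg(A_mv, b, M_inv_mv)
print(it_plain, it_prec)   # the preconditioned solve needs only a handful of iterations
```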

Reproducing kernel Hilbert spaces (RKHSs) are special Hilbert spaces in one-to-one correspondence with positive definite maps called kernels. They are widely employed in machine learning to reconstruct unknown functions from sparse and noisy data. In the last two decades, a subclass known as stable RKHSs has also been introduced in the setting of linear system identification. Stable RKHSs contain only absolutely integrable impulse responses over the positive real line; hence, they can be adopted as hypothesis spaces to estimate linear, time-invariant and BIBO stable dynamic systems from input-output data. Necessary and sufficient conditions for RKHS stability are available in the literature, and it is known that kernel absolute integrability implies stability. Working in discrete time, we recently proved that this latter condition is only sufficient, i.e. not necessary. The purpose of this note is to prove that the same result also holds in continuous time for Mercer kernels.
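To make the sufficient condition concrete, here is a small numerical check of kernel absolute integrability, assuming the standard first-order stable-spline (TC) kernel K(s,t) = exp(-beta*max(s,t)) as an example kernel; this kernel choice and the truncation horizon are assumptions for illustration and are not taken from the note.

```python
import numpy as np

beta, T, n = 1.0, 30.0, 1500        # truncation horizon T and grid size (assumed values)
s = np.linspace(0.0, T, n)
ds = s[1] - s[0]
S, U = np.meshgrid(s, s, indexing="ij")
K = np.exp(-beta * np.maximum(S, U))          # TC / first-order stable-spline kernel
integral = np.abs(K).sum() * ds * ds          # Riemann-sum estimate of the double integral
print(integral, 2.0 / beta**2)                # close to the exact untruncated value 2/beta^2
```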

This paper investigates the asymptotic behavior of structural break tests in the harmonic domain for time-dependent spherical random fields. In particular, we prove a functional central limit theorem for the fluctuations over time of the sample spherical harmonic coefficients under the null of isotropy and stationarity; furthermore, we prove consistency of the corresponding CUSUM test under a broad range of alternatives. Our results are then applied to NCEP data on global temperature: our estimates suggest that climate change does not simply affect global average temperatures, but also the nature of spatial fluctuations at different scales.
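As a rough illustration of the CUSUM idea on a single coefficient path, the sketch below computes a normalized maximum-of-partial-sums statistic for a univariate time series; the synthetic data and the naive i.i.d.-style variance estimate are assumptions for illustration, and the paper's harmonic-domain test, which aggregates over multipoles, is not reproduced.

```python
import numpy as np

def cusum_stat(a):
    """Normalized CUSUM statistic for a univariate series, e.g. one sample
    spherical harmonic coefficient a_{lm}(t) observed at t = 1..n."""
    n = a.size
    a = a - a.mean()
    partial = np.cumsum(a)              # S_k = sum_{t<=k} (a_t - mean)
    sigma = a.std(ddof=1)               # naive variance estimate (i.i.d.-style)
    return np.max(np.abs(partial)) / (sigma * np.sqrt(n))

rng = np.random.default_rng(1)
no_break = rng.standard_normal(500)
with_break = np.concatenate([rng.standard_normal(250), 1.0 + rng.standard_normal(250)])
print(cusum_stat(no_break), cusum_stat(with_break))   # the break inflates the statistic
```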

An inner-product Hilbert space formulation of the Kemeny distance is defined over the domain of all permutations with ties on the extended real line, and yields an unbiased, minimum-variance (Gauss-Markov) correlation estimator for a homogeneous i.i.d. sample. In this work, we construct and prove the requirements necessary to extend this linear topology to both Spearman's \(\rho\) and Kendall's \(\tau_{b}\), showing both spaces to be biased and inefficient on practical data domains. A probability distribution is defined for the Kemeny \(\tau_{\kappa}\) estimator, and a Studentisation adjustment for finite samples is provided as well. This work allows a general-purpose linear model duality to be identified as a unique, consistent solution to many biased and unbiased estimation scenarios.
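For orientation, the sketch below computes a pairwise sign-disagreement count between two tied rankings (one common definition of the Kemeny distance) alongside Kendall's tau_b from SciPy; the Hilbert-space \(\tau_{\kappa}\) estimator and its Studentisation from the paper are not reproduced, and the toy rankings are assumed for illustration.

```python
import numpy as np
from scipy.stats import kendalltau

def kemeny_distance(x, y):
    """Pairwise sign-disagreement count between two tied rankings
    (a common elementary definition of the Kemeny distance)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sx = np.sign(x[:, None] - x[None, :])
    sy = np.sign(y[:, None] - y[None, :])
    return np.abs(sx - sy).sum() / 2.0     # each unordered pair counted once

x = [1, 2, 2, 3, 5]                        # rankings with ties (assumed toy data)
y = [2, 1, 2, 4, 5]
print(kemeny_distance(x, y))
print(kendalltau(x, y)[0])                 # Kendall's tau_b for comparison
```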

This paper studies the causal representation learning problem when the latent causal variables are observed indirectly through an unknown linear transformation. The objectives are: (i) recovering the unknown linear transformation (up to scaling) and (ii) determining the directed acyclic graph (DAG) underlying the latent variables. Sufficient conditions for DAG recovery are established, and it is shown that a large class of non-linear models in the latent space (e.g., causal mechanisms parameterized by two-layer neural networks) satisfies these conditions. These sufficient conditions ensure that the effect of an intervention can be detected correctly from changes in the score. Capitalizing on this property, recovering a valid transformation is facilitated by the following key observation: under any valid transformation, the score function of the latent variables necessarily exhibits minimal variation across different interventional environments. This property is leveraged for perfect recovery of the latent DAG structure using only \emph{soft} interventions. For the special case of stochastic \emph{hard} interventions, with an additional hypothesis testing step, one can also uniquely recover the linear transformation up to scaling and a valid causal ordering.
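A toy illustration (not the paper's algorithm) of how intervention effects show up in the score: for a linear Gaussian SEM the score is s(x) = -Theta x with Theta the precision matrix, so comparing Theta across environments reveals which coordinates of the score change; the chain graph, coefficients, and the particular soft intervention below are all assumed for illustration.

```python
import numpy as np

def precision(B, noise_var):
    """Precision matrix Theta = (I - B)^T D^{-1} (I - B) of a linear Gaussian SEM."""
    I = np.eye(B.shape[0])
    D_inv = np.diag(1.0 / noise_var)
    return (I - B).T @ D_inv @ (I - B)

# chain x1 -> x2 -> x3 with assumed coefficients (observational environment)
B_obs = np.array([[0.0, 0.0, 0.0],
                  [0.8, 0.0, 0.0],
                  [0.0, 1.2, 0.0]])
var_obs = np.array([1.0, 1.0, 1.0])

# soft intervention on x2: its mechanism (edge weight, noise variance) is altered
B_int = B_obs.copy()
B_int[1, 0] = 0.3
var_int = np.array([1.0, 2.0, 1.0])

delta = precision(B_int, var_int) - precision(B_obs, var_obs)
changed = np.where(np.abs(delta).sum(axis=1) > 1e-12)[0]
print(changed + 1)   # the score difference is confined to x2 and its parent x1 here
```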

Fern\'andez-Dur\'an and Gregorio-Dom\'inguez (2014) defined a family of probability distributions for a vector of circular random variables by considering multiple nonnegative trigonometric sums. These distributions are highly flexible and can exhibit numerous modes and skewness. Several operations on these multivariate distributions were translated into operations on the vector of parameters; for instance, marginalization involves calculating the eigenvectors and eigenvalues of a matrix, and independence among subsets of the vector of circular variables translates into a Kronecker product of the corresponding subsets of the vector of parameters. The derivation of marginal and conditional densities from the joint multivariate density is important when applying this model in practice to real datasets. A goodness-of-fit test based on the characteristic function and an alternative parameter estimation algorithm for high-dimensional circular data are presented and applied to a real dataset on the daily times of occurrence of maxima and minima of prices in financial markets.
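A small numerical sketch of the independence-as-Kronecker-product property, assuming the usual NNTS density form f(theta) = |sum_k c_k e^{i k theta}|^2 with normalization sum_k |c_k|^2 = 1/(2*pi); the parameter values are random assumptions used only to check that a Kronecker-product parameter vector yields a joint density that factorizes.

```python
import numpy as np

def nnts_density(c, theta):
    """NNTS density |sum_k c_k exp(i k theta)|^2 evaluated at angles theta."""
    k = np.arange(len(c))
    return np.abs(np.exp(1j * np.outer(theta, k)) @ c) ** 2

rng = np.random.default_rng(2)
def random_params(M):
    c = rng.standard_normal(M + 1) + 1j * rng.standard_normal(M + 1)
    return c / (np.linalg.norm(c) * np.sqrt(2 * np.pi))   # enforce sum |c_k|^2 = 1/(2*pi)

c1, c2 = random_params(3), random_params(2)
c_joint = np.kron(c1, c2)        # independence <=> joint parameter vector is a Kronecker product

t1, t2 = 1.3, 4.0
k1, k2 = np.arange(len(c1)), np.arange(len(c2))
joint = np.abs(np.sum(c_joint.reshape(len(c1), len(c2))
                      * np.exp(1j * (k1[:, None] * t1 + k2[None, :] * t2)))) ** 2
print(np.isclose(joint, nnts_density(c1, [t1])[0] * nnts_density(c2, [t2])[0]))   # True
```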

This paper aims to construct an efficient and highly accurate numerical method for a class of parabolic integro-fractional differential equations based on wavelets and the $L2$-$1_\sigma$ scheme; specifically, the Haar wavelet decomposition is used for grid adaptation and efficient computation, while the high-order $L2$-$1_\sigma$ scheme is used to discretize the time-fractional operator. For the one-dimensional problem, second-order discretizations are used to approximate the spatial derivatives, while a composite quadrature rule based on the trapezoidal approximation is employed to discretize the integral operator. For the proposed two-dimensional model, we use a semi-discretization based on the $L2$-$1_\sigma$ scheme for the fractional operator and the composite trapezoidal approximation for the integral part; the spatial derivatives are then approximated by the two-dimensional Haar wavelet. We investigate the behavior of the proposed higher-order numerical method theoretically and verify it numerically; in particular, its stability and convergence analysis is carried out. The obtained results are compared with some existing techniques through several graphs and tables, showing that the proposed higher-order methods achieve better accuracy and produce smaller errors than the $L1$ scheme.
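For readers unfamiliar with Haar wavelet collocation, the sketch below builds the standard Haar wavelet family evaluated at the usual uniform collocation points x_l = (l - 0.5)/(2M); it is a generic ingredient of such methods, and the paper's adaptive grid refinement and the $L2$-$1_\sigma$ time discretization are not reproduced.

```python
import numpy as np

def haar_matrix(J):
    """Haar wavelet family h_i (scaling function plus wavelets up to level J)
    evaluated at the collocation points x_l = (l - 0.5)/(2M), M = 2^J."""
    M = 2 ** J
    x = (np.arange(1, 2 * M + 1) - 0.5) / (2 * M)
    H = np.zeros((2 * M, 2 * M))
    H[0, :] = 1.0                                  # scaling function: 1 on [0, 1)
    i = 1
    for j in range(J + 1):                         # resolution levels m = 2^j
        m = 2 ** j
        for k in range(m):                         # translations k = 0, ..., m-1
            left, mid, right = k / m, (k + 0.5) / m, (k + 1) / m
            H[i, (x >= left) & (x < mid)] = 1.0
            H[i, (x >= mid) & (x < right)] = -1.0
            i += 1
    return H, x

H, x = haar_matrix(J=2)
print(H.shape)                          # (8, 8): 2M basis functions at 2M points
print(np.linalg.matrix_rank(H))         # full rank, so collocation systems are solvable
```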

The use of orthonormal polynomial bases has been found to be effective in preventing ill-conditioning of the system matrix in the primal formulation of Virtual Element Methods (VEM) for high polynomial degrees and in the presence of badly-shaped polygons. However, we show that using the natural extension of an orthogonal polynomial basis built for the primal formulation is not sufficient to cure ill-conditioning in the mixed case. Thus, in the present work, we introduce an orthogonal vector-polynomial basis built ad hoc for the mixed formulation of VEM, which leads to very high-quality solutions in all tested cases. Furthermore, a numerical experiment related to simulations in Discrete Fracture Networks (DFN), which are often characterised by very badly-shaped elements, is proposed to validate our procedures.
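The following sketch illustrates the basic orthonormalization idea behind such bases on the reference square with tensor Gauss-Legendre quadrature (an assumption made for simplicity): the monomial Gram matrix is Cholesky-factorized and the basis is rotated so that its Gram matrix becomes the identity. The paper instead constructs an analogous vector-valued orthogonal basis on general polygons for the mixed formulation, which this toy example does not reproduce.

```python
import numpy as np

p = 4                                                   # polynomial degree (ill-conditioning worsens rapidly as p grows)
gx, gw = np.polynomial.legendre.leggauss(p + 1)         # 1D Gauss nodes/weights on [-1, 1]
X, Y = np.meshgrid(gx, gx, indexing="ij")
W = np.outer(gw, gw).ravel()
pts = np.column_stack([X.ravel(), Y.ravel()])

# monomial basis x^a * y^b with a + b <= p, evaluated at the quadrature points
expo = [(a, b) for a in range(p + 1) for b in range(p + 1 - a)]
V = np.column_stack([pts[:, 0] ** a * pts[:, 1] ** b for a, b in expo])

G = V.T @ (W[:, None] * V)                              # Gram (mass) matrix of the monomials
L = np.linalg.cholesky(G)
V_orth = np.linalg.solve(L, V.T).T                      # basis orthonormal w.r.t. the L2 inner product

G_orth = V_orth.T @ (W[:, None] * V_orth)
print(np.linalg.cond(G), np.linalg.cond(G_orth))        # conditioning improves dramatically
print(np.allclose(G_orth, np.eye(len(expo))))           # True: the new basis is L2-orthonormal
```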

In this paper, we present theoretical work to explain why simple gradient descent methods are so successful in solving the non-convex optimization problems that arise in learning large-scale neural networks (NNs). After introducing a mathematical tool called the canonical space, we prove that the objective functions in learning NNs are convex in the canonical model space. We further show that the gradients between the original NN model space and the canonical space are related by a pointwise linear transformation, represented by the so-called disparity matrix. Furthermore, we prove that gradient descent methods are guaranteed to converge to a global minimum with zero loss provided that the disparity matrices maintain full rank. If this full-rank condition holds, the learning of NNs behaves in the same way as ordinary convex optimization. Finally, we show that the chance of encountering singular disparity matrices is extremely slim in large NNs. In particular, when over-parameterized NNs are randomly initialized, gradient descent algorithms converge to a global minimum of zero loss in probability.
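A toy experiment consistent with the claim (not the paper's proof or construction): a randomly initialized, over-parameterized two-layer network trained with plain full-batch gradient descent typically drives the training loss close to zero; all sizes, learning rate, and data below are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, width, lr = 20, 5, 512, 0.05           # few samples, many hidden units

X = rng.standard_normal((n, d))
y = rng.standard_normal(n)                   # arbitrary targets to be fit exactly

W1 = rng.standard_normal((d, width)) / np.sqrt(d)
w2 = rng.standard_normal(width) / np.sqrt(width)

for step in range(2000):
    H = np.tanh(X @ W1)                      # hidden activations
    pred = H @ w2
    err = pred - y
    loss = 0.5 * np.mean(err ** 2)
    grad_w2 = H.T @ err / n                  # backprop through the two layers
    grad_W1 = X.T @ ((err[:, None] * w2) * (1.0 - H ** 2)) / n
    w2 -= lr * grad_w2
    W1 -= lr * grad_W1

print(loss)   # typically orders of magnitude below the initial loss
```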
