
Matrix perturbation bounds (such as Weyl and Davis-Kahan) are frequently used in many branches of mathematics. Most of the classical results in this area are optimal in a worst-case analysis. However, in modern applications, both the ground and the noise matrices frequently have extra structural properties. For instance, it is often assumed that the ground matrix is essentially low rank and the noise matrix is random or pseudo-random. We aim to rebuild a part of perturbation theory, adapting to these modern assumptions. The key idea is to exploit the skewness between the leading eigenvectors of the ground matrix and the noise matrix. We do this by combining the classical contour integration method with new combinatorial ideas, resulting in new machinery with a wide range of applications. Our new bounds are optimal under mild assumptions, with direct applications to central problems in many different areas. Among others, we derive a sharp result for the perturbation of a low-rank matrix by random noise, answering an open question in this area. Next, we derive new, optimal results concerning covariance estimation in the spiked model, an important model in statistics, bridging two different directions of current research. Finally, we use our results on the perturbation of eigenspaces to derive new results concerning eigenvalues of deterministic and random matrices. In particular, we obtain new results concerning the outliers in the deformed Wigner model and the least singular value of random matrices with non-zero mean.
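To make the contrast concrete, here is a minimal numerical sketch (not from the paper) comparing the observed rotation of the leading eigenvector of a rank-one ground matrix under Wigner-type noise against one standard Davis-Kahan bound; the dimension, leading eigenvalue, and noise scale are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam = 1000, 60.0                       # dimension and leading eigenvalue (illustrative)

# Rank-one "ground" matrix A = lam * u u^T (essentially low rank).
u = rng.standard_normal(n)
u /= np.linalg.norm(u)
A = lam * np.outer(u, u)

# Wigner-type noise: symmetric Gaussian matrix with entries of scale 1/sqrt(n).
G = rng.standard_normal((n, n)) / np.sqrt(n)
E = (G + G.T) / np.sqrt(2)

# Leading eigenvector of the perturbed matrix (eigh sorts eigenvalues ascending).
w = np.linalg.eigh(A + E)[1][:, -1]

sin_theta = np.sqrt(1.0 - min(1.0, abs(u @ w)) ** 2)
bound = 2.0 * np.linalg.norm(E, 2) / lam  # one standard Davis-Kahan variant (gap = lam)

print(f"sin(theta) = {sin_theta:.4f}, Davis-Kahan bound = {bound:.4f}")
```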

Related content

The ACM/IEEE 23rd International Conference on Model Driven Engineering Languages and Systems is the premier conference series for model-driven software and systems engineering, organized with the support of ACM SIGSOFT and IEEE TCSE. Since 1998, MODELS has covered all aspects of modeling, from languages and methods to tools and applications. MODELS attendees come from diverse backgrounds, including researchers, academics, engineers, and industry professionals. MODELS 2019 is a forum where participants can exchange cutting-edge research results and innovative practical experiences around modeling and model-driven software and systems. This year's edition will give the modeling community further opportunities to advance the foundations of modeling and to propose innovative applications of modeling in emerging areas such as cyber-physical systems, embedded systems, socio-technical systems, cloud computing, big data, machine learning, security, open source, and sustainability.
December 13, 2024

In reinsurance, Poisson and negative binomial distributions are employed to model claim frequency. However, the incompleteness of the data on reported incurred claims above a priority level makes estimation challenging. This paper focuses on frequency estimation using Schnieper's framework for claim counts. We demonstrate that Schnieper's model is consistent with a Poisson distribution for the total number of claims above the priority in each development year, providing a robust basis for parameter estimation. Additionally, we explain how to build an alternative assumption based on a negative binomial distribution, which yields similar results. The study includes a bootstrap procedure to manage uncertainty in parameter estimation and a case study comparing the assumptions and evaluating the impact of the bootstrap approach.
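As a toy illustration of the estimation pipeline (not Schnieper's full framework), the sketch below fits a Poisson frequency to per-year counts of claims above a priority and quantifies parameter uncertainty with a parametric bootstrap; the data and sample sizes are made up.

```python
import numpy as np

rng = np.random.default_rng(42)

# Observed numbers of claims above the priority, one count per origin year
# (illustrative data, not from the paper's case study).
counts = np.array([4, 7, 5, 6, 3, 8, 5])

lam_hat = counts.mean()   # Poisson MLE of the expected claim frequency

# Parametric bootstrap: resample counts from Poisson(lam_hat) and re-estimate.
B = 10_000
boot = rng.poisson(lam_hat, size=(B, counts.size)).mean(axis=1)
lo, hi = np.percentile(boot, [2.5, 97.5])

print(f"lambda_hat = {lam_hat:.3f}, 95% bootstrap CI = ({lo:.3f}, {hi:.3f})")
```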

We consider the two-dimensional Stokes problem for incompressible fluids in the stream function-vorticity formulation. For this problem, the classical finite element method of degree one converges only at order one-half for the L2 norm of the vorticity. We propose to use harmonic functions to approximate the vorticity along the boundary. Discrete harmonic functions are the ones used in practice, and they lead to a new numerical method. We prove that this numerical scheme yields an error of order one for the L2 norm of the vorticity.

Subclasses of TFNP (total functional NP) are usually defined by specifying a complete problem, which is necessarily in TFNP, and including all problems many-one reducible to it. We study two notions of how a TFNP problem can be reduced to an object, such as a complexity class, outside TFNP. This gives rise to subclasses of TFNP which capture some properties of that outside object. We show that well-known subclasses can arise in this way, for example PPA from reducibility to parity P and PLS from reducibility to P^NP. We study subclasses arising from PSPACE and the polynomial hierarchy, and show that they are characterized by the propositional proof systems Frege and constant-depth Frege, extending the known pairings between natural TFNP subclasses and proof systems. We study approximate counting from this point of view, and look for a subclass of TFNP that gives a natural home to combinatorial principles such as Ramsey which can be proved using approximate counting. We relate this to the recently-studied Long choice and Short choice problems.

Next-generation reservoir computing (NG-RC) has attracted much attention due to its excellent performance in spatio-temporal forecasting of complex systems and its ease of implementation. This paper shows that NG-RC can be encoded as a kernel ridge regression that makes training efficient and feasible even when the space of chosen polynomial features is very large. Additionally, an extension to an infinite number of covariates is possible, which makes the methodology agnostic with respect to the lags into the past that are considered as explanatory factors, as well as with respect to the number of polynomial covariates, an important hyperparameter in traditional NG-RC. We show that this approach has solid theoretical backing and good behavior based on kernel universality properties previously established in the literature. Various numerical illustrations show that these generalizations of NG-RC outperform the traditional approach in several forecasting applications.
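A minimal sketch of the kernel view, under simplifying assumptions: delay-embed a scalar series and fit kernel ridge regression with a finite-degree polynomial kernel, which plays the role of NG-RC's polynomial feature map (the infinite-covariate extension would correspond to other universal kernels). The series, lags, degree, and regularizer below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy series: noisy logistic map (illustrative stand-in for a complex system).
T = 600
x = np.empty(T)
x[0] = 0.3
for t in range(T - 1):
    x[t + 1] = 3.9 * x[t] * (1 - x[t]) + 1e-4 * rng.standard_normal()

# Delay embedding: predict x[t+k] from the previous k lags (NG-RC-style state).
k = 3
Z = np.stack([x[i:T - k + i] for i in range(k)], axis=1)   # (T-k, k) lag vectors
y = x[k:]

Ztr, ytr, Zte, yte = Z[:500], y[:500], Z[500:], y[500:]

# A polynomial kernel of degree p plays the role of NG-RC's polynomial features.
def poly_kernel(A, B, p=2, c=1.0):
    return (A @ B.T + c) ** p

lam = 1e-8                                  # ridge regularizer
K = poly_kernel(Ztr, Ztr)
alpha = np.linalg.solve(K + lam * np.eye(len(ytr)), ytr)

pred = poly_kernel(Zte, Ztr) @ alpha        # one-step-ahead forecasts
print("one-step RMSE:", np.sqrt(np.mean((pred - yte) ** 2)))
```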

Gradient descent is one of the most widely used iterative algorithms in modern statistical learning. However, its precise algorithmic dynamics in high-dimensional settings remain only partially understood, which has limited its broader potential for statistical inference applications. This paper provides a precise, non-asymptotic distributional characterization of gradient descent iterates in a broad class of empirical risk minimization problems, in the so-called mean-field regime where the sample size is proportional to the signal dimension. Our non-asymptotic state evolution theory holds for both general non-convex loss functions and non-Gaussian data, and reveals the central role of two Onsager correction matrices that precisely characterize the non-trivial dependence among all gradient descent iterates in the mean-field regime. Although the Onsager correction matrices are typically analytically intractable, our state evolution theory facilitates a generic gradient descent inference algorithm that consistently estimates these matrices across a broad class of models. Leveraging this algorithm, we show that the state evolution can be inverted to construct (i) data-driven estimators for the generalization error of gradient descent iterates and (ii) debiased gradient descent iterates for inference of the unknown signal. Detailed applications to two canonical models--linear regression and (generalized) logistic regression--are worked out to illustrate model-specific features of our general theory and inference methods.
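The Onsager correction matrices and the full state-evolution machinery do not fit in a short snippet, but the sketch below generates the objects the theory characterizes: gradient descent iterates for logistic regression with the sample size proportional to the dimension, recording the whole trajectory so that summary statistics of the iterates can be tracked. All sizes and the step size are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 1200, 400                     # proportional (mean-field) regime: n/p fixed
theta_star = rng.standard_normal(p) / np.sqrt(p)

X = rng.standard_normal((n, p))
prob = 1.0 / (1.0 + np.exp(-X @ theta_star))
y = rng.binomial(1, prob).astype(float)

def grad(theta):
    # Gradient of the average logistic loss.
    return X.T @ (1.0 / (1.0 + np.exp(-X @ theta)) - y) / n

eta, steps = 0.5, 50
theta = np.zeros(p)
iterates = [theta.copy()]
for _ in range(steps):
    theta -= eta * grad(theta)
    iterates.append(theta.copy())

# The paper's state evolution characterizes the joint law of these iterates;
# here we only record them and a simple summary statistic.
align = [it @ theta_star / (np.linalg.norm(it) * np.linalg.norm(theta_star) + 1e-12)
         for it in iterates]
print("cosine alignment with the signal at steps 0, 10, 50:",
      [round(align[t], 3) for t in (0, 10, 50)])
```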

A Nehari manifold optimization method (NMOM) is introduced for finding 1-saddles, i.e., saddle points with Morse index equal to one, of a generic nonlinear functional in Hilbert spaces. It is based on the variational characterization that 1-saddles of the functional are local minimizers of the same functional restricted to the associated Nehari manifold. The framework contains two important ingredients: one is the retraction mapping, which keeps the iterates on the Nehari manifold; the other is the tangential search direction, which decreases the functional under suitable step-size search rules. In particular, global convergence is rigorously established by means of some crucial analysis techniques (including a weak convergence argument) that overcome difficulties in the infinite-dimensional setting. In practice, combined with an easy-to-implement Nehari retraction and the negative Riemannian gradient direction, the NMOM is successfully applied to compute the unstable ground-state solutions of a class of typical semilinear elliptic PDEs, such as the H\'enon equation and the stationary nonlinear Schr\"odinger equation. In particular, the symmetry-breaking phenomenon of the ground states of the H\'enon equation is explored numerically in 1D and 2D, with interesting numerical findings reported on the critical value at which symmetry breaking occurs.
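A finite-dimensional sketch of the two ingredients, on a toy energy rather than the paper's PDE setting: for E(u) = 0.5 u^T A u - 0.25 sum_i u_i^4 with A positive definite, the Nehari manifold is {u != 0 : u^T A u = sum_i u_i^4}, and scaling by t = sqrt(u^T A u / sum_i u_i^4) is an easy retraction onto it; descent then alternates a gradient step with this retraction. The matrix, initial guess, and line-search constants are illustrative.

```python
import numpy as np

# Toy energy E(u) = 0.5 u^T A u - 0.25 sum(u^4) with A a graph Laplacian:
# a finite-dimensional stand-in for the paper's semilinear elliptic energies.
n = 100
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # positive definite

def E(u):     return 0.5 * u @ A @ u - 0.25 * np.sum(u**4)
def gradE(u): return A @ u - u**3

def retract(u):
    # Nehari retraction: scale so <E'(tu), tu> = 0, i.e.
    # t^2 u^T A u = t^4 sum(u^4)  =>  t = sqrt(u^T A u / sum(u^4)).
    return u * np.sqrt((u @ A @ u) / np.sum(u**4))

u = retract(np.sin(np.pi * np.arange(1, n + 1) / (n + 1)))  # positive initial guess
for _ in range(500):
    d = -gradE(u)                       # negative gradient as search direction
    step = 1.0
    while E(retract(u + step * d)) > E(u) - 1e-4 * step * np.dot(d, d):
        step *= 0.5                     # simple backtracking line search
        if step < 1e-12:
            break
    u = retract(u + step * d)

print("E(u) =", E(u), " residual ||E'(u)|| =", np.linalg.norm(gradE(u)))
```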

Unlabeled sensing is a linear inverse problem with permuted measurements. We propose an alternating minimization (AltMin) algorithm with a suitable initialization for two widely considered permutation models: partially shuffled/$k$-sparse permutations and $r$-local/block diagonal permutations. Key to the performance of the AltMin algorithm is the initialization. For the exact unlabeled sensing problem, assuming either a Gaussian measurement matrix or a sub-Gaussian signal, we bound the initialization error in terms of the number of blocks $s$ and the number of shuffles $k$. Experimental results show that our algorithm is fast, applicable to both permutation models, and robust to the choice of measurement matrix. We also test our algorithm on several real datasets for the linked linear regression problem and show superior performance compared to baseline methods.
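A minimal sketch of the AltMin loop for the $k$-sparse (partially shuffled) model, using a naive least-squares initialization rather than the paper's more refined one: given the current signal estimate, the best permutation pairs the sorted entries of $y$ with the sorted entries of $Ax$; given the permutation, the signal is a least-squares solve. Problem sizes and the shuffle level are illustrative, and harder regimes may need the paper's initialization.

```python
import numpy as np

rng = np.random.default_rng(7)
m, d, k = 200, 10, 20                     # measurements, signal dim, shuffled rows

A = rng.standard_normal((m, d))
x_true = rng.standard_normal(d)

perm = np.arange(m)                       # k-sparse permutation: shuffle k rows only
idx = rng.choice(m, size=k, replace=False)
perm[idx] = idx[rng.permutation(k)]
y = (A @ x_true)[perm]

# Naive initialization that ignores the permutation entirely.
x = np.linalg.lstsq(A, y, rcond=None)[0]

for _ in range(50):
    z = A @ x
    ranks = np.argsort(np.argsort(z))     # rank of each entry of A @ x
    y_aligned = np.sort(y)[ranks]         # optimal matching pairs sorted y, sorted z
    x_new = np.linalg.lstsq(A, y_aligned, rcond=None)[0]
    if np.allclose(x_new, x):
        break
    x = x_new

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```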

Two sequential estimators are proposed for the odds p/(1-p) and log odds log(p/(1-p)) respectively, using independent Bernoulli random variables with parameter p as inputs. The estimators are unbiased, and guarantee that the variance of the estimation error divided by the true value of the odds, or the variance of the estimation error of the log odds, are less than a target value for any p in (0,1). The estimators are close to optimal in the sense of Wolfowitz's bound.
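For context, the sketch below implements a classical fixed-failure-count (inverse binomial) scheme: sampling Bernoulli(p) until r failures occur makes the success count S negative binomial with E[S] = r p/(1-p), so S/r is an unbiased odds estimator. Its relative variance is not uniform in p, which is exactly the kind of guarantee the paper's sequential estimators add; this is an illustration, not the proposed method.

```python
import numpy as np

rng = np.random.default_rng(3)

def odds_estimate(p, r, rng):
    # Sample Bernoulli(p) until r failures occur; the success count S is
    # negative binomial with E[S] = r * p/(1-p), so S/r is unbiased for the odds.
    failures = successes = 0
    while failures < r:
        if rng.random() < p:
            successes += 1
        else:
            failures += 1
    return successes / r

p, r = 0.7, 200
true_odds = p / (1 - p)
est = [odds_estimate(p, r, rng) for _ in range(2000)]
print(f"true odds {true_odds:.3f}, mean estimate {np.mean(est):.3f}, "
      f"std {np.std(est):.3f}")
```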

Normalizing flow-based variational inference (flow VI) is a promising approximate inference approach, but its performance remains inconsistent across studies. Numerous algorithmic choices influence flow VI's performance. We conduct a step-by-step analysis to disentangle the impact of some of the key factors: capacity, objectives, gradient estimators, number of gradient estimates (batch size), and step-sizes. Each step examines one factor while neutralizing others using insights from the previous steps and/or using extensive parallel computation. To facilitate high-fidelity evaluation, we curate a benchmark of synthetic targets that represent common posterior pathologies and allow for exact sampling. We provide specific recommendations for different factors and propose a flow VI recipe that matches or surpasses leading turnkey Hamiltonian Monte Carlo (HMC) methods.
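For concreteness, here is a minimal flow VI sketch in PyTorch, assuming a single planar flow layer over a standard normal base and a reparameterized ELBO gradient estimator; the target, capacity, batch size, and step size are illustrative stand-ins for the factors the paper analyzes, and the planar-flow invertibility condition (u.w >= -1) is not enforced.

```python
import torch

torch.manual_seed(0)
d = 2

# Target: a correlated Gaussian, up to its normalizing constant
# (a stand-in for one of the benchmark posteriors).
prec = torch.tensor([[1.0, 0.9], [0.9, 1.0]]).inverse()
def log_p(z):
    return -0.5 * torch.einsum("bi,ij,bj->b", z, prec, z)

# One planar flow layer, f(z) = z + u * tanh(w.z + b), over a N(0, I) base.
u = torch.zeros(d, requires_grad=True)
w = torch.randn(d, requires_grad=True)
b = torch.zeros(1, requires_grad=True)

def flow(z0):
    lin = z0 @ w + b                                  # (batch,)
    z = z0 + torch.tanh(lin)[:, None] * u
    psi = (1 - torch.tanh(lin) ** 2)[:, None] * w     # h'(w.z + b) * w
    logdet = torch.log(torch.abs(1 + psi @ u) + 1e-9)
    return z, logdet

opt = torch.optim.Adam([u, w, b], lr=5e-2)
for step in range(2000):
    z0 = torch.randn(256, d)              # reparameterized base samples
    z, logdet = flow(z0)
    log_q0 = -0.5 * (z0 ** 2).sum(dim=1)  # base log-density, up to a constant
    elbo = (log_p(z) - log_q0 + logdet).mean()
    opt.zero_grad()
    (-elbo).backward()
    opt.step()

print("final ELBO estimate (up to constants):", elbo.item())
```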

Incomplete LU factorizations of sparse matrices are widely used as preconditioners in Krylov subspace methods to speed up solving linear systems. Unfortunately, computing the preconditioner itself can be time-consuming and sensitive to hyper-parameters. Instead, we replace the hand-engineered algorithm with a graph neural network that is trained to approximate the matrix factorization directly. To apply the output of the neural network as a preconditioner, we propose an output activation function that guarantees that the predicted factorization is invertible. Further, applying a graph neural network architecture allows us to ensure that the output itself is sparse which is desirable from a computational standpoint. We theoretically analyze and empirically evaluate different loss functions to train the learned preconditioners and show their effectiveness in decreasing the number of GMRES iterations and improving the spectral properties on synthetic data. The code is available at //github.com/paulhausner/neural-incomplete-factorization.
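Not the paper's learned preconditioner, but a minimal baseline showing the role it replaces: SciPy's incomplete LU used as a GMRES preconditioner on a 2D Poisson system, counting iterations with and without preconditioning (SciPy >= 1.14 gmres signature assumed; the drop tolerance and fill factor below are the hand-tuned hyper-parameters the paper calls sensitive).

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

rng = np.random.default_rng(0)
k = 50                                    # grid size; A is k^2 x k^2

# 2D Poisson matrix: a standard sparse test problem.
I = sp.identity(k)
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(k, k))
A = (sp.kron(I, T) + sp.kron(T, I)).tocsc()
b = rng.standard_normal(A.shape[0])

iters = {"none": 0, "ilu": 0}
def counter(key):
    def cb(_):
        iters[key] += 1
    return cb

# Unpreconditioned GMRES.
spla.gmres(A, b, rtol=1e-8, restart=50, callback=counter("none"),
           callback_type="pr_norm")

# ILU-preconditioned GMRES: M applies the approximate inverse via the ILU factors.
ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)
M = spla.LinearOperator(A.shape, ilu.solve)
spla.gmres(A, b, M=M, rtol=1e-8, restart=50, callback=counter("ilu"),
           callback_type="pr_norm")

print("GMRES iterations:", iters)
```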
