
Approximate Message Passing (AMP) algorithms have seen widespread use across a variety of applications. However, the precise forms for their Onsager corrections and state evolutions depend on properties of the underlying random matrix ensemble, limiting the extent to which AMP algorithms derived for white noise may be applicable to data matrices that arise in practice. In this work, we study more general AMP algorithms for random matrices $W$ that satisfy orthogonal rotational invariance in law, where $W$ may have a spectral distribution that is different from the semicircle and Marcenko-Pastur laws characteristic of white noise. The Onsager corrections and state evolutions in these algorithms are defined by the free cumulants or rectangular free cumulants of the spectral distribution of $W$. Their forms were derived previously by Opper, Çakmak, and Winther using non-rigorous dynamic functional theory techniques, and we provide rigorous proofs. Our motivating application is a Bayes-AMP algorithm for Principal Components Analysis, when there is prior structure for the principal components (PCs) and possibly non-white noise. For sufficiently large signal strengths and any non-Gaussian prior distributions for the PCs, we show that this algorithm provably achieves higher estimation accuracy than the sample PCs.
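
As a point of reference for the white-noise setting that the paper generalizes, the sketch below shows a standard symmetric AMP iteration with its single memory-one Onsager correction; in the rotationally invariant setting studied here, that correction is instead built from the free cumulants of the spectrum of $W$. The helpers `denoise` and `denoise_prime` are illustrative placeholders (e.g., a posterior-mean denoiser and its derivative), not part of the paper.

```python
import numpy as np

def amp_white_noise(W, x0, denoise, denoise_prime, n_iter=20):
    # Sketch of symmetric AMP for a Wigner (white-noise) matrix W.
    # denoise / denoise_prime: an entrywise denoiser and its derivative.
    n = W.shape[0]
    x = x0.copy()
    f_prev = np.zeros(n)
    for _ in range(n_iter):
        f = denoise(x)
        b = denoise_prime(x).mean()   # Onsager coefficient (semicircle-law case)
        x = W @ f - b * f_prev        # matrix step plus memory-one correction
        f_prev = f
    return denoise(x)

# Example usage with a soft-thresholding denoiser:
# thresh = 0.1
# denoise = lambda x: np.sign(x) * np.maximum(np.abs(x) - thresh, 0.0)
# denoise_prime = lambda x: (np.abs(x) > thresh).astype(float)
```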

Related Content

We develop a group of robust, nonparametric hypothesis tests, called FKWC tests, which detect differences between the covariance operators of several populations of functional data. These tests are based on functional data depth ranks and work well even when the data are heavy-tailed, which we show both in simulation and theoretically. They offer several other benefits: a simple distribution under the null hypothesis, low computational cost, and transformation invariance properties. We show that under general alternative hypotheses these tests are consistent under mild, nonparametric assumptions. As a result of this work, we introduce a new functional depth function, called L2-root depth, which works well for detecting differences in magnitude between covariance kernels. We present an analysis of the FKWC test using L2-root depth under local alternatives. In simulation, when the true covariance kernels have strictly positive eigenvalues, these tests have higher power than their competitors while still maintaining their nominal size. We also provide methods for computing sample size and performing multiple comparisons.
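
To make the rank-based construction concrete, the sketch below pools several samples of discretized curves, assigns each curve a crude norm-based depth value (a stand-in for the paper's L2-root depth, which is not reproduced here), and applies a Kruskal-Wallis test, which uses only the ranks of those depth values.

```python
import numpy as np
from scipy.stats import kruskal

def fkwc_style_test(samples):
    # samples: list of arrays, each of shape (n_i, n_grid), holding discretized curves.
    pooled = np.vstack(samples)
    center = pooled.mean(axis=0)
    # Crude depth proxy: curves far from the pooled mean in L2 norm get low depth.
    depth = -np.linalg.norm(pooled - center, axis=1)
    groups, start = [], 0
    for s in samples:
        groups.append(depth[start:start + len(s)])
        start += len(s)
    # Kruskal-Wallis is a rank test, so only the depth ranks matter.
    return kruskal(*groups)

# rng = np.random.default_rng(0)
# a, b = rng.normal(size=(50, 100)), 2.0 * rng.normal(size=(50, 100))
# print(fkwc_style_test([a, b]))
```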

The Bayesian persuasion paradigm of strategic communication models interaction between a privately informed agent, called the sender, and an ignorant but rational agent, called the receiver. The goal is typically to design a (near-)optimal communication (or signaling) scheme for the sender, which enables the sender to disclose information to the receiver so as to incentivize her to take an action that the sender prefers. Finding the optimal signaling scheme is known to be computationally difficult in general. This hardness is further exacerbated when there is also a constraint on the size of the message space, leading to NP-hardness of approximating the optimal sender utility within any constant factor. In this paper, we show that in several natural and prominent cases the optimization problem is tractable even when the message space is limited. In particular, we study signaling under a symmetry or an independence assumption on the distribution of utility values for the actions. For symmetric distributions, we provide a novel characterization of the optimal signaling scheme, which yields a polynomial-time algorithm to compute an optimal scheme for many compactly represented symmetric distributions. In the independent case, we design a constant-factor approximation algorithm, which stands in marked contrast to the hardness of approximation in the general case.
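
For intuition about the object being optimized: when states and actions are explicitly enumerated and the message space is unconstrained, an optimal (direct, obedient) signaling scheme is the solution of a small linear program. The sketch below is that textbook LP written with scipy; it is not the paper's algorithm for limited message spaces or compactly represented distributions.

```python
import numpy as np
from scipy.optimize import linprog

def optimal_signaling(mu, u_sender, u_receiver):
    # mu: prior over states, shape (S,); u_sender, u_receiver: utilities, shape (S, A).
    # Variables z[s, a] = mu[s] * Pr(recommend action a | state s).
    S, A = u_sender.shape
    c = -u_sender.reshape(-1)                 # maximize expected sender utility
    A_eq = np.zeros((S, S * A))
    for s in range(S):
        A_eq[s, s * A:(s + 1) * A] = 1.0      # recommendations form a distribution per state
    rows, rhs = [], []
    for a in range(A):                        # obedience: following recommendation a
        for b in range(A):                    # is at least as good as deviating to b
            if a == b:
                continue
            row = np.zeros(S * A)
            for s in range(S):
                row[s * A + a] = u_receiver[s, b] - u_receiver[s, a]
            rows.append(row)
            rhs.append(0.0)
    res = linprog(c, A_ub=np.array(rows), b_ub=np.array(rhs),
                  A_eq=A_eq, b_eq=mu, bounds=(0, None))
    return res.x.reshape(S, A), -res.fun
```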

In the Two-Bar Charts Packing Problem (2-BCPP), the goal is to pack bar charts (BCs), each consisting of two bars, into a horizontal unit-height strip of minimal length. The bars may move vertically within the strip, but changing the order of a chart's bars or separating them is forbidden. Recently, several approximation algorithms with performance guarantees were proposed for this new problem, which generalizes the Bin Packing Problem (BPP) and the 2-Dimensional Vector Packing Problem (2-DVPP). However, a preliminary analysis of the solutions constructed by these approximation algorithms indicated that the guaranteed bounds are not tight. This observation motivated a numerical experiment in which the approximate solutions are compared with each other and with optimal solutions. To construct optimal solutions or lower bounds on the optimum, we use the Boolean Linear Programming (BLP) formulation of 2-BCPP proposed earlier and apply the CPLEX package. We also use a database of BPP instances with known optimal solutions to construct 2-BCPP instances with known minimal packing length. The results of this simulation make up the main content of the paper.
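
For readers unfamiliar with how such packing problems are posed as Boolean programs, the sketch below gives the classical BPP formulation (which 2-BCPP generalizes) written with PuLP; the paper's own BLP model of 2-BCPP, solved with CPLEX, is more involved and is not reproduced here.

```python
import pulp

def bin_packing_blp(sizes, capacity, n_bins):
    # Classical Bin Packing as a Boolean linear program (illustrative only).
    prob = pulp.LpProblem("BPP", pulp.LpMinimize)
    items, bins = range(len(sizes)), range(n_bins)
    x = pulp.LpVariable.dicts("x", (items, bins), cat="Binary")  # item i placed in bin j
    y = pulp.LpVariable.dicts("y", bins, cat="Binary")           # bin j is opened
    prob += pulp.lpSum(y[j] for j in bins)                       # minimize opened bins
    for i in items:
        prob += pulp.lpSum(x[i][j] for j in bins) == 1           # every item packed once
    for j in bins:
        prob += pulp.lpSum(sizes[i] * x[i][j] for i in items) <= capacity * y[j]
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return int(pulp.value(prob.objective))

# print(bin_packing_blp([0.5, 0.4, 0.3, 0.7, 0.1], capacity=1.0, n_bins=5))  # expect 2
```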

We improve the best known running time for inverting sparse matrices over finite fields, lowering it to an expected $O\big(n^{2.2131}\big)$ time for the current values of the fast rectangular matrix multiplication exponents. We achieve the same running time for the computation of the rank and nullspace of a sparse matrix over a finite field. This improvement relies on two key techniques. First, we adopt the decomposition of an arbitrary matrix into block Krylov and Hankel matrices from Eberly et al. (ISSAC 2007). Second, we show how to recover the explicit inverse of a block Hankel matrix using low displacement rank techniques for structured matrices and fast rectangular matrix multiplication algorithms. We generalize our inversion method to block structured matrices with other displacement operators and strengthen the best known upper bounds for explicit inversion of block Toeplitz-like and block Hankel-like matrices, as well as for explicit inversion of block Vandermonde-like matrices with structured blocks. As a further application, we improve the complexity of several algorithms in topological data analysis and in finite group theory.
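
As a small, scalar, real-arithmetic illustration of why displacement structure helps (unlike the block, finite-field setting of the paper): reversing the rows of a Hankel matrix yields a Toeplitz matrix, so a Hankel system can be handed to a fast Toeplitz solver based on Levinson recursion.

```python
import numpy as np
from scipy.linalg import hankel, solve_toeplitz

def solve_hankel(c, r, b):
    # Solve H x = b where H = hankel(c, r): c is the first column, r the last row.
    # Reversing the rows of H gives a Toeplitz matrix with first column c[::-1] and
    # first row equal to the last row of H; that matrix maps x to b[::-1], and
    # Levinson recursion solves it in O(n^2) instead of O(n^3).
    return solve_toeplitz((np.asarray(c)[::-1], np.asarray(r)), np.asarray(b)[::-1])

# rng = np.random.default_rng(0)
# c, r, b = rng.normal(size=6), rng.normal(size=6), rng.normal(size=6)
# x = solve_hankel(c, r, b)
# print(np.allclose(hankel(c, r) @ x, b))   # True
```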

We analyze the Lanczos method for matrix function approximation (Lanczos-FA), an iterative algorithm for computing $f(\mathbf{A}) \mathbf{b}$ when $\mathbf{A}$ is a Hermitian matrix and $\mathbf{b}$ is a given vector. Assuming that $f : \mathbb{C} \rightarrow \mathbb{C}$ is piecewise analytic, we give a framework, based on the Cauchy integral formula, which can be used to derive {\em a priori} and \emph{a posteriori} error bounds for Lanczos-FA in terms of the error of Lanczos used to solve linear systems. Unlike many error bounds for Lanczos-FA, these bounds account for fine-grained properties of the spectrum of $\mathbf{A}$, such as clustered or isolated eigenvalues. Our results are derived assuming exact arithmetic, but we show that they are easily extended to finite precision computations using existing theory about the Lanczos algorithm in finite precision. We also provide generalized bounds for the Lanczos method used to approximate quadratic forms $\mathbf{b}^\textsf{H} f(\mathbf{A}) \mathbf{b}$, and demonstrate the effectiveness of our bounds with numerical experiments.
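
For reference, the basic Lanczos-FA recipe runs $k$ steps of Lanczos on $\mathbf{A}$ with starting vector $\mathbf{b}$ to build an orthonormal basis $\mathbf{Q}$ and a tridiagonal $\mathbf{T}$, and returns $\|\mathbf{b}\|\, \mathbf{Q} f(\mathbf{T}) \mathbf{e}_1$. Below is a dense sketch with full reorthogonalization, assuming no early breakdown; the finite-precision behavior and quadratic forms treated in the paper are not covered.

```python
import numpy as np

def lanczos_fa(A, b, f, k):
    # Approximate f(A) @ b with k Lanczos steps (full reorthogonalization for clarity).
    n = len(b)
    Q = np.zeros((n, k))
    alpha, beta = np.zeros(k), np.zeros(k - 1)
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        w = A @ Q[:, j]
        alpha[j] = Q[:, j] @ w
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)   # orthogonalize against earlier vectors
        if j < k - 1:
            beta[j] = np.linalg.norm(w)            # assumes beta[j] > 0 (no breakdown)
            Q[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    evals, evecs = np.linalg.eigh(T)               # f(T) e_1 via the eigendecomposition of T
    fT_e1 = evecs @ (f(evals) * evecs[0, :])
    return np.linalg.norm(b) * (Q @ fT_e1)

# Example: approximate A^{-1} b and compare against a direct solve.
# A = np.diag(np.linspace(1.0, 10.0, 200)); b = np.ones(200)
# print(np.linalg.norm(lanczos_fa(A, b, lambda x: 1.0 / x, 30) - np.linalg.solve(A, b)))
```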

Embedding methods for product spaces are powerful techniques for low-distortion and low-dimensional representation of complex data structures. Nevertheless, little is known regarding downstream learning and optimization problems in such spaces. Here, we address the problem of linear classification in a product space form -- a mix of Euclidean, spherical, and hyperbolic spaces. First, we describe new formulations for linear classifiers on a Riemannian manifold using geodesics and Riemannian metrics which generalize straight lines and inner products in vector spaces, respectively. Second, we prove that linear classifiers in $d$-dimensional space forms of any curvature have the same expressive power, i.e., they can shatter exactly $d+1$ points. Third, we formalize linear classifiers in product space forms, describe the first corresponding perceptron and SVM classification algorithms, and establish rigorous convergence results for the former. We support our theoretical findings with simulation results on several datasets, including synthetic data, CIFAR-100, MNIST, Omniglot, and single-cell RNA sequencing data. The results show that learning methods applied to small-dimensional embeddings in product space forms outperform their algorithmic counterparts in each space form.
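
For context, the three constant-curvature space forms being mixed differ in their geodesic distance formulas, which replace the straight-line distance of Euclidean space. A minimal sketch, with the hyperbolic case written in the hyperboloid (Lorentz) model:

```python
import numpy as np

def euclidean_dist(x, y):
    # Zero curvature: geodesics are straight lines.
    return np.linalg.norm(x - y)

def spherical_dist(x, y):
    # Positive curvature: x, y on the unit sphere; distance is the angle between them.
    return np.arccos(np.clip(np.dot(x, y), -1.0, 1.0))

def hyperbolic_dist(x, y):
    # Negative curvature: x, y on the hyperboloid {z : <z, z>_L = -1, z[0] > 0},
    # with Lorentz inner product <u, v>_L = -u[0] v[0] + u[1:] . v[1:].
    lorentz = -x[0] * y[0] + np.dot(x[1:], y[1:])
    return np.arccosh(np.clip(-lorentz, 1.0, None))
```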

Despite the recent success of graph neural networks (GNNs), common architectures often exhibit significant limitations, including sensitivity to oversmoothing, long-range dependencies, and spurious edges, e.g., as can occur as a result of graph heterophily or adversarial attacks. To at least partially address these issues within a simple transparent framework, we consider a new family of GNN layers designed to mimic and integrate the update rules of two classical iterative algorithms, namely, proximal gradient descent and iterative reweighted least squares (IRLS). The former defines an extensible base GNN architecture that is immune to oversmoothing while nonetheless capturing long-range dependencies by allowing arbitrary propagation steps. In contrast, the latter produces a novel attention mechanism that is explicitly anchored to an underlying end-to-end energy function, contributing stability with respect to edge uncertainty. When combined, we obtain an extremely simple yet robust model that we evaluate across disparate scenarios, including standardized benchmarks, adversarially perturbed graphs, graphs with heterophily, and graphs involving long-range dependencies. In doing so, we compare against SOTA GNN approaches that have been explicitly designed for the respective task, achieving competitive or superior node classification accuracy.
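
A minimal version of the proximal-gradient idea treats each layer as one descent step on a graph-regularized energy such as $E(X) = \|X - X_0\|_F^2 + \lambda\, \mathrm{tr}(X^\top L X)$, where $X_0$ holds transformed input features and $L$ is the graph Laplacian; this particular energy is a common choice used only for illustration and is not claimed to be the paper's exact objective.

```python
import numpy as np

def descent_layer(X, X0, L, lam=1.0, alpha=0.1):
    # One gradient step on E(X) = ||X - X0||_F^2 + lam * tr(X^T L X), with L symmetric:
    # the first term anchors every layer to the input features (no oversmoothing),
    # the second term propagates information along graph edges.
    grad = 2.0 * (X - X0) + 2.0 * lam * (L @ X)
    return X - alpha * grad

def propagate(X0, L, lam=1.0, alpha=0.1, n_steps=16):
    # Arbitrarily many steps can be stacked, capturing long-range dependencies
    # while converging toward the minimizer of E rather than a constant signal.
    X = X0.copy()
    for _ in range(n_steps):
        X = descent_layer(X, X0, L, lam, alpha)
    return X
```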

Graph neural networks (GNNs) are typically applied to static graphs that are assumed to be known upfront. This static input structure is often informed purely by the insight of the machine learning practitioner and might not be optimal for the actual task the GNN is solving. In the absence of reliable domain expertise, one might resort to inferring the latent graph structure, which is often difficult due to the vast search space of possible graphs. Here we introduce Pointer Graph Networks (PGNs), which augment sets or graphs with additional inferred edges for improved model expressivity. PGNs allow each node to dynamically point to another node, followed by message passing over these pointers. The sparsity of this adaptable graph structure makes learning tractable while still being sufficiently expressive to simulate complex algorithms. Critically, the pointing mechanism is directly supervised to model long-term sequences of operations on classical data structures, incorporating useful structural inductive biases from theoretical computer science. Qualitatively, we demonstrate that PGNs can learn parallelisable variants of pointer-based data structures, namely disjoint set unions and link/cut trees. PGNs generalise out-of-distribution to 5x larger test inputs on dynamic graph connectivity tasks, outperforming unrestricted GNNs and Deep Sets.
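
For reference, one of the classical structures whose operation sequences supervise the pointer mechanism is the disjoint set union; a standard implementation with path compression and union by rank is sketched below.

```python
class DisjointSetUnion:
    # Classical union-find: near-constant amortized time per operation.
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, u):
        while self.parent[u] != u:
            self.parent[u] = self.parent[self.parent[u]]   # path halving
            u = self.parent[u]
        return u

    def union(self, u, v):
        ru, rv = self.find(u), self.find(v)
        if ru == rv:
            return False
        if self.rank[ru] < self.rank[rv]:
            ru, rv = rv, ru
        self.parent[rv] = ru                               # attach the smaller-rank root
        if self.rank[ru] == self.rank[rv]:
            self.rank[ru] += 1
        return True

# dsu = DisjointSetUnion(5); dsu.union(0, 1); dsu.union(3, 4)
# print(dsu.find(0) == dsu.find(1), dsu.find(0) == dsu.find(3))  # True False
```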

In this paper, from a theoretical perspective, we study how powerful graph neural networks (GNNs) can be for learning approximation algorithms for combinatorial problems. To this end, we first establish a new class of GNNs that can solve a strictly wider variety of problems than existing GNNs. Then, we bridge the gap between GNN theory and the theory of distributed local algorithms to theoretically demonstrate that the most powerful GNN can learn approximation algorithms for the minimum dominating set problem and the minimum vertex cover problem with certain approximation ratios, and that no GNN can achieve better ratios. This paper is the first to elucidate the approximation ratios of GNNs for combinatorial problems. Furthermore, we prove that adding coloring or weak-coloring to each node feature improves these approximation ratios. This indicates that preprocessing and feature engineering theoretically strengthen model capabilities.
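
For a sense of the kind of algorithm whose ratio is at stake, the classical maximal-matching routine achieves a 2-approximation for minimum vertex cover; it is shown here in centralized form purely for reference and is not claimed to be what a GNN learns.

```python
def vertex_cover_2approx(edges):
    # Greedy maximal matching: take both endpoints of every uncovered edge.
    # The returned cover is at most twice the size of an optimal vertex cover.
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

# print(vertex_cover_2approx([(0, 1), (1, 2), (2, 3)]))  # e.g. {0, 1, 2, 3}
```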

This paper describes a suite of algorithms for constructing low-rank approximations of an input matrix from a random linear image of the matrix, called a sketch. These methods can preserve structural properties of the input matrix, such as positive-semidefiniteness, and they can produce approximations with a user-specified rank. The algorithms are simple, accurate, numerically stable, and provably correct. Moreover, each method is accompanied by an informative error bound that allows users to select parameters a priori to achieve a given approximation quality. These claims are supported by numerical experiments with real and synthetic data.
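
The sketch below is a bare-bones two-pass randomized low-rank approximation built from a Gaussian sketch (range finding, projection, fixed-rank truncation), included for intuition; the paper's algorithms reconstruct the approximation from the sketch alone and include variants that preserve structure such as positive semidefiniteness.

```python
import numpy as np

def sketch_low_rank(A, r, oversample=10, seed=None):
    # Two-pass randomized low-rank approximation via a Gaussian sketch.
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Omega = rng.standard_normal((n, r + oversample))
    Q, _ = np.linalg.qr(A @ Omega)           # orthonormal basis for the sketched range
    B = Q.T @ A                              # project A onto that range (second pass)
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    U, s, Vt = U[:, :r], s[:r], Vt[:r, :]    # truncate to the user-specified rank r
    return (Q @ U) * s, Vt                   # factors L, R with A ≈ L @ R

# A = np.random.default_rng(0).standard_normal((200, 100))
# L, R = sketch_low_rank(A, r=5)
# print(np.linalg.norm(A - L @ R) / np.linalg.norm(A))
```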
