
The Nystr\"om method is a popular choice for finding a low-rank approximation to a symmetric positive semi-definite matrix. The method can fail when applied to symmetric indefinite matrices, for which the error can be unboundedly large. In this work, we first identify the main challenges in finding a Nystr\"om approximation to symmetric indefinite matrices. We then prove the existence of a variant that overcomes the instability, and establish relative-error nuclear norm bounds of the resulting approximation that hold when the singular values decay rapidly. The analysis naturally leads to a practical algorithm, whose robustness is illustrated with experiments.
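For context, the standard Nyström approximation for a symmetric positive semi-definite matrix (the stable baseline that this work extends to the indefinite case) can be sketched in a few lines of NumPy. The sketch size and test matrix below are illustrative choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def nystrom_psd(A, r):
    """Rank-r Nystrom approximation A ~ Y @ pinv(Om.T @ Y) @ Y.T, Y = A @ Om."""
    Om = rng.standard_normal((A.shape[0], r))
    Y = A @ Om
    return Y @ np.linalg.pinv(Om.T @ Y) @ Y.T

# symmetric PSD test matrix with rapidly decaying spectrum
n = 100
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(0.5 ** np.arange(n)) @ Q.T

A_hat = nystrom_psd(A, 20)
rel_err = np.linalg.norm(A - A_hat) / np.linalg.norm(A)
```

For an indefinite matrix (negative eigenvalues in the diagonal above), this naive formula can fail badly, which is exactly the instability the abstract addresses.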

Related content

The problem of optimally recovering high-order mixed derivatives of bivariate functions of finite smoothness is studied. Based on the truncation method, an algorithm for numerical differentiation is constructed that is order-optimal both in the sense of accuracy and in the amount of Galerkin information involved.

We consider generalized operator eigenvalue problems in variational form with random perturbations in the bilinear forms. This setting is motivated by variational forms of partial differential equations with random input data. The considered eigenpairs can be of higher but finite multiplicity. We investigate stochastic quantities of interest of the eigenpairs and discuss why, for multiplicity greater than 1, only the stochastic properties of the eigenspaces are meaningful, but not the ones of individual eigenpairs. To that end, we characterize the Fr\'echet derivatives of the eigenpairs with respect to the perturbation and provide a new linear characterization for eigenpairs of higher multiplicity. As a side result, we prove local analyticity of the eigenspaces. Based on the Fr\'echet derivatives of the eigenpairs we discuss a meaningful Monte Carlo sampling strategy for multiple eigenvalues and develop an uncertainty quantification perturbation approach. Numerical examples are presented to illustrate the theoretical results.

The Sinkhorn algorithm is a numerical method for the solution of optimal transport problems. Here, I give a brief survey of this algorithm, with a strong emphasis on its geometric origin: it is natural to view it as a discretization, by standard methods, of a non-linear integral equation. In the appendix, I also provide a short summary of an early result of Beurling on product measures, directly related to the Sinkhorn algorithm.
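The basic Sinkhorn iteration surveyed here alternates row and column rescalings of the kernel $K = \exp(-C/\varepsilon)$ until the transport plan matches both marginals. A minimal NumPy sketch, with illustrative regularization and iteration count:

```python
import numpy as np

def sinkhorn(C, a, b, eps=0.5, iters=300):
    """Entropic-regularized OT: alternately rescale K = exp(-C/eps)
    so the transport plan has row marginals a and column marginals b."""
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]   # transport plan

rng = np.random.default_rng(1)
x, y = rng.random(5), rng.random(6)
C = (x[:, None] - y[None, :]) ** 2       # squared-distance cost
a, b = np.full(5, 1 / 5), np.full(6, 1 / 6)
P = sinkhorn(C, a, b)
```

The diagonal scalings `u` and `v` are the discrete counterparts of the densities in Beurling's product-measure result mentioned in the appendix.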

Hierarchical matrices approximate a given matrix by a decomposition into low-rank submatrices that can be handled efficiently in factorized form. $\mathcal{H}^2$-matrices refine this representation following the ideas of fast multipole methods in order to achieve linear, i.e., optimal complexity for a variety of important algorithms. The matrix multiplication, a key component of many more advanced numerical algorithms, has so far proven tricky: the only known linear-time algorithms either require the very special structure of HSS-matrices or need to know a suitable basis for all submatrices in advance. In this article, a new and fairly general algorithm for multiplying $\mathcal{H}^2$-matrices in linear complexity with adaptively constructed bases is presented. The algorithm consists of two phases: first an intermediate representation with a generalized block structure is constructed, then this representation is re-compressed in order to match the structure prescribed by the application. The complexity and accuracy are analysed, and numerical experiments indicate that the new algorithm can indeed be significantly faster than previous attempts.

Symmetry is a unifying concept in physics. In quantum information and beyond, it is known that quantum states possessing symmetry are not useful for certain information-processing tasks. For example, states that commute with a Hamiltonian realizing a time evolution are not useful for timekeeping during that evolution, and bipartite states that are highly extendible are not strongly entangled and thus not useful for basic tasks like teleportation. Motivated by this perspective, this paper details several quantum algorithms that test the symmetry of quantum states and channels. For the case of testing Bose symmetry of a state, we show that there is a simple and efficient quantum algorithm, while the tests for other kinds of symmetry rely on the aid of a quantum prover. We prove that the acceptance probability of each algorithm is equal to the maximum symmetric fidelity of the state being tested, thus giving a firm operational meaning to these latter resource quantifiers. Special cases of the algorithms test for incoherence or separability of quantum states. We evaluate the performance of these algorithms on choice examples by using the variational approach to quantum algorithms, replacing the quantum prover with a parameterized circuit. We demonstrate this approach for numerous examples using the IBM quantum noiseless and noisy simulators, and we observe that the algorithms perform well in the noiseless case and exhibit noise resilience in the noisy case. We also show that the maximum symmetric fidelities can be calculated by semi-definite programs, which is useful for benchmarking the performance of these algorithms for sufficiently small examples. Finally, we establish various generalizations of the resource theory of asymmetry, with the upshot being that the acceptance probabilities of the algorithms are resource monotones and thus well motivated from the resource-theoretic perspective.

Optimal transport and Wasserstein distances are flourishing in many scientific fields as a means for comparing and connecting random structures. Here we pioneer the use of an optimal transport distance between L\'{e}vy measures to solve a statistical problem. Dependent Bayesian nonparametric models provide flexible inference on distinct, yet related, groups of observations. Each component of a vector of random measures models a group of exchangeable observations, while their dependence regulates the borrowing of information across groups. We derive the first statistical index of dependence in $[0,1]$ for (completely) random measures that accounts for their whole infinite-dimensional distribution, which is assumed to be equal across different groups. This is accomplished by using the geometric properties of the Wasserstein distance to solve a max-min problem at the level of the underlying L\'{e}vy measures. The Wasserstein index of dependence sheds light on the models' deep structure and has desirable properties: (i) it is $0$ if and only if the random measures are independent; (ii) it is $1$ if and only if the random measures are completely dependent; (iii) it simultaneously quantifies the dependence of $d \ge 2$ random measures, avoiding the need for pairwise comparisons; (iv) it can be evaluated numerically. Moreover, the index allows for informed prior specifications and fair model comparisons for Bayesian nonparametric models.

The Johnson--Lindenstrauss (JL) lemma is a powerful tool for dimensionality reduction in modern algorithm design. The lemma states that any set of high-dimensional points in a Euclidean space can be flattened to lower dimensions while approximately preserving pairwise Euclidean distances. Random matrices satisfying this lemma are called JL transforms (JLTs). Inspired by existing $s$-hashing JLTs with exactly $s$ nonzero elements in each column, the present work introduces an ensemble of sparse matrices encompassing so-called $s$-hashing-like matrices whose expected number of nonzero elements in each column is~$s$. The independence of the sub-Gaussian entries of these matrices and the knowledge of their exact distribution play an important role in their analyses. Using properties of independent sub-Gaussian random variables, these matrices are demonstrated to be JLTs, and their smallest and largest singular values are estimated non-asymptotically using a technique from geometric functional analysis. As the dimensions of the matrix grow to infinity, these singular values are proved to converge almost surely to fixed quantities (by using the universal Bai--Yin law), and in distribution to the Gaussian orthogonal ensemble (GOE) Tracy--Widom law after proper rescalings. Understanding the behavior of extreme singular values is important in general because they are often used to define a measure of stability of matrix algorithms. For example, JLTs were recently used in derivative-free optimization algorithmic frameworks to select random subspaces in which random models or poll directions are constructed to achieve scalability, so estimating their smallest singular value in particular helps determine the dimension of these subspaces.
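One plausible construction of such an $s$-hashing-like matrix (each entry independently nonzero with probability $s/m$, so every column has $s$ nonzeros in expectation) can be sketched as follows; the normalization $\pm 1/\sqrt{s}$ and the dimensions are illustrative assumptions, not the paper's exact ensemble:

```python
import numpy as np

rng = np.random.default_rng(2)

def s_hashing_like(m, d, s):
    """m x d sketch with entries 0 or +-1/sqrt(s); each entry is nonzero
    with probability s/m, giving s expected nonzeros per column and
    E[||S x||^2] = ||x||^2 for any fixed x."""
    signs = rng.choice([-1.0, 1.0], size=(m, d))
    mask = rng.random((m, d)) < s / m
    return signs * mask / np.sqrt(s)

m, d, s = 200, 1000, 8
S = s_hashing_like(m, d, s)
X = rng.standard_normal((d, 20))          # 20 points in R^d (columns)
distortion = np.max(np.abs(np.linalg.norm(S @ X, axis=0)
                           / np.linalg.norm(X, axis=0) - 1.0))
```

The measured norm distortion shrinks on the order of $1/\sqrt{m}$, which is the JL behavior the abstract refers to.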

Women are at increased risk of bone loss during the menopausal transition; in fact, nearly 50\% of women's lifetime bone loss occurs during this time. The longitudinal relationships between bone health outcomes and estradiol (E2) and follicle-stimulating hormone (FSH), two hormones that change characteristically during the menopausal transition, are complex. In addition to the level and rate of change of E2 and FSH, the variability of these hormones across the menopausal transition may be an important predictor of bone health, but this question has yet to be well explored. We introduce a joint model that characterizes individual mean E2 trajectories and individual residual variances and links these variances to bone health trajectories. In our application, we found that higher FSH variability was associated with declines in bone mineral density (BMD) before menopause, but this association was moderated over time after the menopausal transition. Additionally, higher mean E2, but not E2 variability, was associated with slower decreases in BMD during the menopausal transition. We also include a simulation study showing that naive two-stage methods often fail to propagate uncertainty in the individual-level variance estimates, resulting in estimation bias and invalid interval coverage.

Quantum computing is a growing field in which information is processed by two-level quantum states known as qubits. Current physical realizations of qubits require careful calibration, composed of different experiments, due to noise and decoherence phenomena. Among the different characterization experiments, a crucial step is to develop a model that classifies the measured state by discriminating the ground state from the excited state. In these proceedings we benchmark multiple classification techniques applied to real quantum devices.
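As a minimal illustration of the classification task described, one of the simplest techniques is a nearest-centroid discriminator on the two readout clouds in the IQ plane. The data below are simulated with hypothetical cluster centers and widths, not taken from a real device:

```python
import numpy as np

rng = np.random.default_rng(4)

# simulated single-shot IQ readout clouds (hypothetical centers and spread)
ground = rng.normal([0.0, 0.0], 0.3, size=(500, 2))    # |0> shots
excited = rng.normal([1.0, 1.0], 0.3, size=(500, 2))   # |1> shots
X = np.vstack([ground, excited])
y = np.r_[np.zeros(500), np.ones(500)]

# nearest-centroid discriminator: assign each shot to the closer centroid
c0, c1 = ground.mean(axis=0), excited.mean(axis=0)
pred = (np.linalg.norm(X - c1, axis=1)
        < np.linalg.norm(X - c0, axis=1)).astype(float)
accuracy = float((pred == y).mean())
```

On real devices the clouds overlap more and may be non-Gaussian, which is why benchmarking several classifiers is worthwhile.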

Gaussian elimination with partial pivoting (GEPP) is a widely used method to solve dense linear systems. Each GEPP step uses a row transposition pivot movement if needed to ensure the leading pivot entry is maximal in magnitude within the leading column of the remaining untriangularized subsystem. We use theoretical and numerical approaches to study how often this pivot movement is needed. We provide full distributional descriptions of the number of pivot movements needed by GEPP applied to particular Haar random ensembles, and compare these models to other common transformations from randomized numerical linear algebra. Additionally, we introduce new random ensembles with fixed pivot movement counts and fixed sparsity $\alpha$. Experiments estimating the empirical spectral density (ESD) of these random ensembles lead to a new conjecture on a universality class of random matrices with fixed sparsity whose scaled ESD converges to a measure on the complex unit disk that depends on $\alpha$ and interpolates between the uniform measure on the unit disk and the Dirac measure at the origin.
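Counting pivot movements is easy to do empirically: run the GEPP elimination loop and increment a counter whenever the maximal-magnitude entry is not already in the pivot position. A small sketch with illustrative sizes (a Gaussian ensemble, not the Haar ensembles analyzed in the paper):

```python
import numpy as np

rng = np.random.default_rng(3)

def gepp_swap_count(A):
    """LU factorization with partial pivoting; return the number of row swaps."""
    A = A.astype(float).copy()
    n = A.shape[0]
    swaps = 0
    for k in range(n - 1):
        p = k + int(np.argmax(np.abs(A[k:, k])))
        if p != k:                           # pivot movement needed
            A[[k, p]] = A[[p, k]]
            swaps += 1
        A[k + 1:, k] /= A[k, k]
        A[k + 1:, k + 1:] -= np.outer(A[k + 1:, k], A[k, k + 1:])
    return swaps

counts = [gepp_swap_count(rng.standard_normal((50, 50))) for _ in range(20)]
```

For an $n \times n$ matrix there are at most $n - 1$ swaps, and for continuous i.i.d. entries a swap at step $k$ occurs with probability $1 - 1/(n - k)$, so nearly every step moves a pivot.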
