
Matrix functions play an increasingly important role in many areas of scientific computing and engineering. In such real-world applications, matrix functions are computed by algorithms working in floating-point arithmetic, and the input data might additionally be unreliable, e.g., due to measurement errors. It is therefore crucial to understand the sensitivity of matrix functions to perturbations, which is measured by condition numbers. However, the condition number itself cannot, in general, be computed exactly either, due to round-off and errors in the input. The sensitivity of the condition number is measured by the so-called level-2 condition number. For the usual (level-1) condition number, it is well known that structured condition numbers (i.e., where only perturbations that preserve the structure of the input matrix are taken into account) might be much smaller than unstructured ones, which suggests, e.g., that structure-preserving algorithms for matrix functions might yield much more accurate results than general-purpose algorithms. In this work, we examine structured level-2 condition numbers in the particular case of restricting the perturbation matrix to an automorphism group, a Lie or Jordan algebra, or the space of quasi-triangular matrices. In numerical experiments, we then compare the unstructured level-2 condition number with the structured one for specific matrix functions such as the matrix logarithm, matrix square root, and matrix exponential.
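To make the quantities above concrete, the following minimal sketch (our illustration, not an algorithm from the paper) estimates a Monte Carlo lower bound on the level-1 relative condition number of the matrix exponential, optionally restricting perturbations to symmetric matrices as one example of a structured (Jordan-algebra) perturbation set; the function name and sampling scheme are our own.

```python
import numpy as np
from scipy.linalg import expm

def cond_expm(A, structured=False, n_samples=200, eps=1e-6, rng=None):
    """Monte Carlo lower bound on the relative condition number of expm at A.

    With structured=True, perturbations are projected onto the symmetric
    matrices, one example of a structured perturbation set.
    """
    rng = np.random.default_rng(rng)
    FA = expm(A)
    nFA, nA = np.linalg.norm(FA), np.linalg.norm(A)
    best = 0.0
    for _ in range(n_samples):
        E = rng.standard_normal(A.shape)
        if structured:
            E = (E + E.T) / 2              # keep the perturbation symmetric
        E *= eps / np.linalg.norm(E)       # perturbation of norm eps
        change = np.linalg.norm(expm(A + E) - FA)
        best = max(best, (change / nFA) * (nA / eps))
    return best

A = np.array([[1.0, 2.0], [2.0, -1.0]])   # symmetric test matrix
print(cond_expm(A), cond_expm(A, structured=True))
```

With structured perturbations the estimate can only be smaller, since the maximization runs over a subset of all perturbations; the paper's question is how much smaller the analogous level-2 quantities can be.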

Related Content

Knowledge graph embedding (KGE) aims at learning powerful representations that benefit various artificial intelligence applications. Meanwhile, contrastive learning has been widely leveraged in graph learning as an effective mechanism to enhance the discriminative capacity of the learned representations. However, the complex structures of KGs make it hard to construct appropriate contrastive pairs. Only a few attempts have integrated contrastive learning strategies with KGE, and most of them rely on language models (e.g., BERT) for contrastive pair construction instead of fully mining the information underlying the graph structure, which hinders expressive ability. Surprisingly, we find that the entities within a relational symmetrical structure are usually similar and correlated. To this end, we propose KGE-SymCL, a knowledge graph contrastive learning framework based on relation-symmetrical structure, which mines symmetrical structure information in KGs to enhance the discriminative ability of KGE models. Concretely, a plug-and-play approach is proposed that takes entities in relation-symmetrical positions as positive pairs. Besides, a self-supervised alignment loss is designed to pull positive pairs together. Experimental results on link prediction and entity classification datasets demonstrate that KGE-SymCL can be easily combined with various KGE models for performance improvements. Moreover, extensive experiments show that our model outperforms other state-of-the-art baselines.
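As a rough illustration of the alignment idea (an InfoNCE-style stand-in of our own devising, not the paper's exact loss), the sketch below pulls together embeddings of entities assumed to sit in relation-symmetrical positions while contrasting them against all other entities:

```python
import numpy as np
from scipy.special import logsumexp

def alignment_loss(emb, pos_pairs, temperature=0.1):
    """emb: (n, d) entity embeddings; pos_pairs: (i, j) index pairs of
    entities in relation-symmetrical positions (assumed given)."""
    z = emb / np.linalg.norm(emb, axis=1, keepdims=True)  # cosine similarity
    sim = (z @ z.T) / temperature
    loss = 0.0
    for i, j in pos_pairs:
        negatives = np.delete(sim[i], i)     # contrast against all other entities
        loss += logsumexp(negatives) - sim[i, j]
    return loss / len(pos_pairs)

rng = np.random.default_rng(0)
emb = rng.standard_normal((6, 16))
print(alignment_loss(emb, [(0, 3), (1, 4)]))
```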

While research on the geometry of planar graphs has been active in the past decades, many properties of planar metrics remain mysterious. This paper studies a fundamental aspect of planar graph geometry: covering planar metrics by a small collection of simpler metrics. Specifically, a \emph{tree cover} of a metric space $(X, \delta)$ is a collection of trees such that every pair of points $u$ and $v$ in $X$ has a low-distortion path in at least one of the trees. The celebrated ``Dumbbell Theorem'' [ADMSS95] states that any low-dimensional Euclidean space admits a tree cover with $O(1)$ trees and distortion $1+\varepsilon$, for any fixed $\varepsilon \in (0,1)$. This result has found numerous algorithmic applications and has been generalized to the wider family of doubling metrics [BFN19]. Does the same result hold for planar metrics? A positive answer would add further evidence to the well-observed connection between Euclidean/doubling metrics and planar metrics. In this work, we answer this fundamental question affirmatively. Specifically, we show that for any fixed $\varepsilon \in (0,1)$, any planar metric can be covered by $O(1)$ trees with distortion $1+\varepsilon$. Our result for planar metrics follows from a rather general framework: first, we reduce the problem to constructing tree covers with \emph{additive distortion}; then we introduce the notion of a \emph{shortcut partition} and draw a connection between shortcut partitions and additive tree covers; finally, we prove the existence of a shortcut partition for any planar metric, using new insights regarding the grid-like structure of planar graphs. [...]
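The definition of a tree cover can be checked mechanically; the toy sketch below (our illustration, unrelated to the paper's construction) measures the distortion achieved by a given collection of spanning trees on a small planar graph:

```python
import itertools
import networkx as nx

def cover_distortion(G, trees):
    """Max over pairs (u, v) of the best (min over trees) stretch d_T(u,v)/d_G(u,v)."""
    dG = dict(nx.all_pairs_shortest_path_length(G))
    dTs = [dict(nx.all_pairs_shortest_path_length(T)) for T in trees]
    worst = 1.0
    for u, v in itertools.combinations(G.nodes, 2):
        best = min(dT[u][v] for dT in dTs)   # best tree for this pair
        worst = max(worst, best / dG[u][v])
    return worst

G = nx.grid_2d_graph(3, 3)                    # a small planar graph
trees = [nx.bfs_tree(G, r).to_undirected() for r in [(0, 0), (2, 2), (0, 2), (2, 0)]]
print(cover_distortion(G, trees))             # distortion of this 4-tree cover
```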

Studying conditional independence structure among many variables with few observations is a challenging task. Gaussian graphical models (GGMs) tackle this problem by encouraging sparsity in the precision matrix through an $l_p$ regularization with $p\leq1$. However, since the objective is highly non-convex for sub-$l_1$ pseudo-norms, most approaches rely on the $l_1$ norm. In this case, frequentist approaches allow one to compute the solution path elegantly as a function of the shrinkage parameter $\lambda$. Instead of optimizing the penalized likelihood, the Bayesian formulation introduces a Laplace prior on the precision matrix; however, posterior inference for different $\lambda$ values requires repeated runs of expensive Gibbs samplers. We propose a very general framework for variational inference in GGMs that unifies the benefits of the frequentist and Bayesian frameworks. Specifically, we approximate the posterior with a matrix-variate normalizing flow defined on the space of symmetric positive definite matrices. As a key improvement over previous work, we train a continuum of sparse regression models jointly for all regularization parameters $\lambda$ and all $l_p$ norms, including non-convex sub-$l_1$ pseudo-norms. This is achieved by conditioning the flow on $p>0$ and on the shrinkage parameter $\lambda$. A single model then gives access to (i) the evolution of the posterior for any $\lambda$ and any $l_p$ (pseudo-) norm, (ii) the marginal log-likelihood for model selection, and (iii) the frequentist solution paths, recovered as the MAP through simulated annealing.
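For contrast with the flow-based approach, the frequentist $l_1$ solution path mentioned above can be traced with off-the-shelf tools; the sketch below (using scikit-learn's graphical lasso, not the paper's method) shows how the sparsity of the precision matrix varies with $\lambda$:

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10))            # few observations, many variables

for lam in [0.05, 0.1, 0.2, 0.4]:
    model = GraphicalLasso(alpha=lam).fit(X) # l1-penalized precision estimate
    nnz = np.count_nonzero(np.triu(model.precision_, k=1))
    print(f"lambda={lam:.2f}: {nnz} nonzero off-diagonal entries")
```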

This study demonstrates the existence of a testable condition for the identification of the causal effect of a treatment on an outcome in observational data, which relies on two sets of variables: observed covariates to be controlled for and a suspected instrument. Under a causal structure commonly found in empirical applications, the testable conditional independence of the suspected instrument and the outcome given the treatment and the covariates has two implications. First, the instrument is valid, i.e., it does not directly affect the outcome (other than through the treatment) and is unconfounded conditional on the covariates. Second, the treatment is unconfounded conditional on the covariates, so that the treatment effect is identified. We suggest tests of this conditional independence based on machine learning methods that account for covariates in a data-driven way and investigate their asymptotic behavior and finite-sample performance in a simulation study. We also apply our testing approach to evaluating the impact of fertility on female labor supply, using the sibling sex ratio of the first two children as the supposed instrument, which by and large points to a violation of our testable implication for the moderate set of socio-economic covariates considered.
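A minimal version of such a test (our simplification; the paper's tests include refinements such as cross-fitting and formal asymptotics) residualizes both the suspected instrument and the outcome on the treatment and covariates with a machine learner, then tests whether the residuals are correlated:

```python
import numpy as np
from scipy import stats
from sklearn.ensemble import RandomForestRegressor

def ci_test(Z, Y, D, X, seed=0):
    """Crude check of Z independent of Y given (D, X).
    In-sample fits for brevity (no cross-fitting)."""
    W = np.column_stack([D, X])
    rZ = Z - RandomForestRegressor(random_state=seed).fit(W, Z).predict(W)
    rY = Y - RandomForestRegressor(random_state=seed).fit(W, Y).predict(W)
    return stats.pearsonr(rZ, rY)   # small p-value suggests a violation

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 3))
Z = rng.integers(0, 2, 500).astype(float)    # suspected instrument
D = (Z + X[:, 0] + rng.standard_normal(500) > 0).astype(float)
Y = D + X @ np.ones(3) + rng.standard_normal(500)
print(ci_test(Z, Y, D, X))
```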

Among randomized numerical linear algebra strategies, so-called sketching procedures are emerging as effective dimension-reduction tools for accelerating Krylov subspace methods for, e.g., the solution of linear systems, eigenvalue computations, and the approximation of matrix functions. While there is plenty of experimental evidence showing that sketched Krylov solvers may dramatically improve performance over standard Krylov methods, many features of these schemes are still unexplored. We derive new theoretical results that allow us to significantly improve our understanding of sketched Krylov methods and to identify, among several possible equivalent formulations, the most suitable sketched approximations according to their numerical stability properties. These results are also employed to analyze the error of sketched Krylov methods in the approximation of the action of matrix functions, significantly contributing to the theory available in the current literature.
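As a flavor of how sketching enters a Krylov method, here is a toy sketch-and-solve linear solver (our illustration only; the paper analyzes more refined formulations and their stability): a non-orthogonalized Krylov basis is compressed by a Gaussian sketch, and the small least-squares problem is solved instead of the full one.

```python
import numpy as np

def sketched_krylov_solve(A, b, m=20, s=80, rng=None):
    """Solve min_y ||S(A V y - b)|| over a Krylov basis V; return x = V y.
    The normalized monomial basis below is for brevity and is ill-conditioned
    for large m; practical methods use (partially) orthogonalized bases."""
    rng = np.random.default_rng(rng)
    n = len(b)
    V = np.empty((n, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(1, m):
        w = A @ V[:, j - 1]
        V[:, j] = w / np.linalg.norm(w)
    S = rng.standard_normal((s, n)) / np.sqrt(s)   # Gaussian sketching matrix
    y, *_ = np.linalg.lstsq(S @ (A @ V), S @ b, rcond=None)
    return V @ y

rng = np.random.default_rng(1)
n = 400
A = np.eye(n) + 0.1 * rng.standard_normal((n, n)) / np.sqrt(n)
b = np.ones(n)
x = sketched_krylov_solve(A, b)
print(np.linalg.norm(A @ x - b) / np.linalg.norm(b))   # relative residual
```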

We present an extended validation of semi-analytical, semi-empirical covariance matrices for the two-point correlation function (2PCF) on simulated catalogs representative of Luminous Red Galaxies (LRG) data collected during the initial two months of operations of the Stage-IV ground-based Dark Energy Spectroscopic Instrument (DESI). We run the pipeline on multiple extended Zel'dovich (EZ) mock galaxy catalogs with the corresponding cuts applied and compare the results with the mock sample covariance to assess the accuracy and its fluctuations. We propose an extension of the previously developed formalism for catalogs processed with standard reconstruction algorithms. We consider methods for comparing covariance matrices in detail, highlighting their interpretation and statistical properties caused by sample variance, in particular, nontrivial expectation values of certain metrics even when the external covariance estimate is perfect. With improved mocks and validation techniques, we confirm a good agreement between our predictions and sample covariance. This allows one to generate covariance matrices for comparable datasets without the need to create numerous mock galaxy catalogs with matching clustering, only requiring 2PCF measurements from the data itself. The code used in this paper is publicly available at https://github.com/oliverphilcox/RascalC.
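One simple instance of the covariance-comparison subtlety noted above (our example; the paper considers its own metrics in detail) is the Gaussian KL divergence between an estimated and a reference covariance, whose expectation is strictly positive even when the reference is exact, purely because of sample variance:

```python
import numpy as np

def gaussian_kl(C, C_ref):
    """KL( N(0, C) || N(0, C_ref) ) for covariance matrices C, C_ref."""
    n = C.shape[0]
    M = np.linalg.solve(C_ref, C)
    _, logdet = np.linalg.slogdet(M)
    return 0.5 * (np.trace(M) - n - logdet)

rng = np.random.default_rng(0)
C_true = np.eye(5)
samples = rng.multivariate_normal(np.zeros(5), C_true, size=100)
C_hat = np.cov(samples, rowvar=False)
print(gaussian_kl(C_hat, C_true))  # > 0 even though C_true is the exact covariance
```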

The nonlinear inverse problem of exponential data fitting is separable, since the fitting function is a linear combination of parameterized exponential functions, thus allowing one to solve for the linear coefficients separately from the nonlinear parameters. The matrix pencil method, which reformulates the problem statement into a generalized eigenvalue problem for the nonlinear parameters and a structured linear system for the linear parameters, is generally considered the more stable method for solving the problem computationally. In Section 2, the matrix pencil associated with the classical complex exponential fitting or sparse interpolation problem is summarized, and the concepts of dilation and translation are introduced to obtain matrix pencils at different scales. Exponential analysis was earlier generalized to the use of several polynomial basis functions and some operator eigenfunctions. However, in most generalizations a computational scheme in terms of an eigenvalue problem is lacking. In the subsequent Sections 3--6, the matrix pencil formulation, including the dilation and translation paradigm, is generalized to more functions. Each of these periodic, polynomial, or special function classes needs a tailored approach, in which optimal use is made of the properties of the parameterized elementary or special function used in the sparse interpolation problem under consideration. With each generalization a structured linear matrix pencil is associated, immediately leading to a computational scheme for the nonlinear and linear parameters, respectively from a generalized eigenvalue problem and one or more structured linear systems. Finally, in Section 7 we illustrate the new methods.
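For readers unfamiliar with the classical method of Section 2, the following compact sketch implements the textbook matrix pencil for complex exponential fitting (without the dilation/translation extensions): from samples $f_k = \sum_i c_i z_i^k$, the generalized eigenvalues of a shifted Hankel pencil recover the nodes $z_i$, after which the coefficients follow from a Vandermonde system.

```python
import numpy as np
from scipy.linalg import eig, hankel

def matrix_pencil(f, n):
    """Recover nodes z_i and coefficients c_i from >= 2n samples
    f[k] = sum_i c_i * z_i**k."""
    H0 = hankel(f[:n], f[n - 1 : 2 * n - 1])  # H0[i, j] = f[i + j]
    H1 = hankel(f[1 : n + 1], f[n : 2 * n])   # H1[i, j] = f[i + j + 1]
    z = eig(H1, H0, right=False)              # generalized eigenvalues = nodes
    V = np.vander(z, N=len(f), increasing=True).T  # V[k, i] = z_i**k
    c, *_ = np.linalg.lstsq(V, f, rcond=None)      # structured linear system
    return z, c

z_true = np.array([0.9, 0.6 + 0.3j])
c_true = np.array([2.0, 1.0 - 1.0j])
f = (c_true * z_true ** np.arange(8)[:, None]).sum(axis=1)
print(matrix_pencil(f, 2))
```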

Direct volume rendering is often used to compare different 3D scalar fields. The choice of the transfer function, which maps scalar values to color and opacity, plays a critical role in this task. We present a technique for the automatic optimization of a transfer function so that rendered images of a second field match, as closely as possible, images of a first field rendered with some other transfer function. This enables users to see whether differences in the visualizations can be attributed solely to the choice of transfer function or remain after optimization. We propose and compare two different approaches to this problem: a voxel-based solution that solves a least-squares problem, and an image-based solution that uses differentiable volume rendering for optimization. We further propose a residual-based visualization to emphasize the differences in information content.
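A voxel-based fit of this kind can reduce to a simple least-squares problem per transfer-function bin; the sketch below (our simplification with hypothetical names, assuming scalar fields normalized to [0, 1] and an RGBA lookup-table transfer function) conveys the idea:

```python
import numpy as np

def match_transfer_function(field_a, field_b, tf_a):
    """Fit tf_b so field_b's voxel colors best match field_a's under tf_a.
    tf_a: (n_bins, 4) RGBA lookup table; fields: arrays of values in [0, 1]."""
    n_bins = tf_a.shape[0]
    bins_a = np.clip((field_a.ravel() * n_bins).astype(int), 0, n_bins - 1)
    bins_b = np.clip((field_b.ravel() * n_bins).astype(int), 0, n_bins - 1)
    target = tf_a[bins_a]                        # per-voxel reference RGBA
    tf_b = np.zeros_like(tf_a)
    for k in range(n_bins):
        mask = bins_b == k
        if mask.any():
            tf_b[k] = target[mask].mean(axis=0)  # per-bin least-squares solution
    return tf_b

rng = np.random.default_rng(0)
field_a = rng.random((16, 16, 16))
field_b = np.clip(field_a + 0.05 * rng.standard_normal(field_a.shape), 0, 1)
tf_a = rng.random((64, 4))
tf_b = match_transfer_function(field_a, field_b, tf_a)
```

The mean per bin is the exact least-squares minimizer when each voxel contributes an independent squared color error; the image-based variant instead differentiates through the renderer.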

In this paper, we present a notion of differential privacy (DP) for data that comes from different classes. Here, the class membership is private information that needs to be protected. The proposed method is an output perturbation mechanism that adds noise to released query responses such that the analyst is unable to infer the underlying class label. The proposed DP method not only protects the privacy of class-based data but also meets accuracy requirements, and it is computationally efficient and practical. We illustrate the efficacy of the proposed method empirically, showing that it outperforms the baseline additive Gaussian noise mechanism. We also examine a real-world application, applying the proposed DP method to autoregressive moving average (ARMA) forecasting while protecting the privacy of the underlying data source. Case studies on real-world advanced metering infrastructure (AMI) measurements of household power consumption validate the excellent performance of the proposed DP method while maintaining the accuracy of the forecasted power consumption measurements.
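As a point of reference for the baseline being outperformed, the standard additive Gaussian mechanism for output perturbation looks as follows (a generic textbook sketch, not the paper's class-aware calibration):

```python
import numpy as np

def gaussian_mechanism(query_value, sensitivity, epsilon, delta, rng=None):
    """Release query_value with noise giving (epsilon, delta)-DP for the
    given l2 sensitivity (classical Gaussian-mechanism calibration)."""
    rng = rng or np.random.default_rng()
    sigma = sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / epsilon
    return query_value + rng.normal(0.0, sigma)

# e.g., releasing a household's mean power reading with bounded sensitivity
print(gaussian_mechanism(1.42, sensitivity=0.1, epsilon=1.0, delta=1e-5))
```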

With the advent of deep neural networks, learning-based approaches for 3D reconstruction have gained popularity. However, unlike for images, in 3D there is no canonical representation which is both computationally and memory efficient yet allows for representing high-resolution geometry of arbitrary topology. Many of the state-of-the-art learning-based 3D reconstruction approaches can hence only represent very coarse 3D geometry or are limited to a restricted domain. In this paper, we propose occupancy networks, a new representation for learning-based 3D reconstruction methods. Occupancy networks implicitly represent the 3D surface as the continuous decision boundary of a deep neural network classifier. In contrast to existing approaches, our representation encodes a description of the 3D output at infinite resolution without excessive memory footprint. We validate that our representation can efficiently encode 3D structure and can be inferred from various kinds of input. Our experiments demonstrate competitive results, both qualitatively and quantitatively, for the challenging tasks of 3D reconstruction from single images, noisy point clouds and coarse discrete voxel grids. We believe that occupancy networks will become a useful tool in a wide variety of learning-based 3D tasks.
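Schematically, an occupancy network is a conditioned classifier over 3D points; the minimal PyTorch sketch below (our own reduction of the idea, not the authors' architecture) maps a point and a latent code from some encoder to an occupancy probability, with the surface given by the 0.5 decision boundary:

```python
import torch
import torch.nn as nn

class OccupancyNetwork(nn.Module):
    def __init__(self, code_dim=128, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + code_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, points, code):
        # points: (B, N, 3); code: (B, code_dim) from an image/point-cloud encoder
        code = code.unsqueeze(1).expand(-1, points.shape[1], -1)
        logits = self.mlp(torch.cat([points, code], dim=-1))
        return torch.sigmoid(logits).squeeze(-1)  # occupancy probability per point

net = OccupancyNetwork()
occ = net(torch.rand(2, 1024, 3), torch.randn(2, 128))  # query at arbitrary points
```

Because the network can be evaluated at arbitrary continuous coordinates, the resolution of the extracted surface is limited only by the number of query points, not by a stored grid.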
