
We present an extended validation of semi-analytical, semi-empirical covariance matrices for the two-point correlation function (2PCF) on simulated catalogs representative of the Luminous Red Galaxy (LRG) data collected during the initial two months of operations of the Stage-IV ground-based Dark Energy Spectroscopic Instrument (DESI). We run the pipeline on multiple extended Zel'dovich (EZ) mock galaxy catalogs with the corresponding cuts applied and compare the results with the mock sample covariance to assess the accuracy of the predictions and its fluctuations. We propose an extension of the previously developed formalism to catalogs processed with standard reconstruction algorithms. We consider methods for comparing covariance matrices in detail, highlighting their interpretation and the statistical properties induced by sample variance, in particular the nontrivial expectation values of certain metrics even when the external covariance estimate is perfect. With improved mocks and validation techniques, we confirm good agreement between our predictions and the sample covariance. This allows one to generate covariance matrices for comparable datasets without creating numerous mock galaxy catalogs with matching clustering, requiring only 2PCF measurements from the data itself. The code used in this paper is publicly available at https://github.com/oliverphilcox/RascalC.
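
As a rough illustration of the comparison idea (not part of the RascalC pipeline; the array shapes, bin count, and KL-style metric below are assumptions), the following sketch estimates a mock sample covariance from 2PCF measurements and compares it to a model covariance. Even when the model covariance is exact, the metric has a strictly positive expectation because the sample covariance itself carries noise, which is the kind of nontrivial expectation value discussed above.

```python
# Illustrative sketch (not part of RascalC): mock sample covariance of 2PCF
# measurements compared to a model covariance via a Gaussian KL-style metric.
import numpy as np

def sample_covariance(xi_mocks):
    """Unbiased sample covariance from an (n_mocks, n_bins) array of 2PCF measurements."""
    diff = xi_mocks - xi_mocks.mean(axis=0)
    return diff.T @ diff / (xi_mocks.shape[0] - 1)

def gaussian_kl(cov_sample, cov_model):
    """KL divergence between zero-mean Gaussians N(0, cov_sample) || N(0, cov_model)."""
    n_bins = cov_sample.shape[0]
    inv_model = np.linalg.inv(cov_model)
    _, logdet_model = np.linalg.slogdet(cov_model)
    _, logdet_sample = np.linalg.slogdet(cov_sample)
    return 0.5 * (np.trace(inv_model @ cov_sample) - n_bins
                  + logdet_model - logdet_sample)

rng = np.random.default_rng(0)
xi_mocks = rng.normal(size=(1000, 20))   # stand-in for EZ-mock 2PCF measurements
cov_model = np.eye(20)                   # stand-in for the semi-analytic prediction (here exact)
print(gaussian_kl(sample_covariance(xi_mocks), cov_model))
```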

Related content

We examine the last-iterate convergence rate of Bregman proximal methods - from mirror descent to mirror-prox and its optimistic variants - as a function of the local geometry induced by the prox-mapping defining the method. For generality, we focus on local solutions of constrained, non-monotone variational inequalities, and we show that the convergence rate of a given method depends sharply on its associated Legendre exponent, a notion that measures the growth rate of the underlying Bregman function (Euclidean, entropic, or other) near a solution. In particular, we show that boundary solutions exhibit a stark separation of regimes between methods with a zero and non-zero Legendre exponent: the former converge at a linear rate, while the latter converge, in general, sublinearly. This dichotomy becomes even more pronounced in linearly constrained problems where methods with entropic regularization achieve a linear convergence rate along sharp directions, compared to convergence in a finite number of steps under Euclidean regularization.
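
For intuition (a minimal sketch with an assumed gradient and step size, not the paper's analysis), the snippet below contrasts the prox-mappings induced by two common Bregman functions on the probability simplex: the Euclidean function yields a projected gradient step, while negative entropy yields a multiplicative update.

```python
# Sketch of one Bregman proximal step on the simplex for two Bregman functions.
import numpy as np

def euclidean_prox(x, grad, step):
    """Projected gradient step: argmin_y <grad, y> + ||y - x||^2 / (2*step) over the simplex."""
    y = x - step * grad
    u = np.sort(y)[::-1]                                   # sorting-based simplex projection
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, y.size + 1) > (css - 1))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1)
    return np.maximum(y - theta, 0.0)

def entropic_prox(x, grad, step):
    """Mirror-descent step with negative entropy: multiplicative-weights update."""
    y = x * np.exp(-step * grad)
    return y / y.sum()

x = np.full(3, 1.0 / 3.0)
grad = np.array([1.0, 0.5, -0.2])     # placeholder gradient of the objective at x
print(euclidean_prox(x, grad, 0.1), entropic_prox(x, grad, 0.1))
```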

Learning distance functions between complex objects, such as the Wasserstein distance to compare point sets, is a common goal in machine learning applications. However, functions on such complex objects (e.g., point sets and graphs) are often required to be invariant to a wide variety of group actions, e.g., permutation or rigid transformation. Therefore, continuous and symmetric product functions (such as distance functions) on such complex objects must also be invariant to the product of such group actions. We call these functions symmetric and factor-wise group invariant (SFGI functions for short). In this paper, we first present a general neural network architecture for approximating SFGI functions. The main contribution of this paper combines this general architecture with a sketching idea to develop a specific and efficient neural network which can approximate the $p$-th Wasserstein distance between point sets. Importantly, the required model complexity is independent of the sizes of the input point sets. On the theoretical front, to the best of our knowledge, this is the first result showing that there exists a neural network with the capacity to approximate the Wasserstein distance with bounded model complexity. Our work provides an interesting integration of sketching ideas for geometric problems with the universal approximation of symmetric functions. On the empirical front, we present a range of results showing that our newly proposed neural network architecture performs comparably to or better than other models (including a SOTA Siamese-autoencoder-based approach). In particular, our neural network generalizes significantly better and trains much faster than the SOTA Siamese AE. Finally, this line of investigation could be useful in exploring effective neural network design for solving a broad range of geometric optimization problems (e.g., $k$-means in a metric space).
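
As background on the learning target (an assumption about the training setup, not the paper's architecture), for equal-size point sets with uniform weights the $p$-th Wasserstein distance reduces to an optimal one-to-one assignment, which can serve as ground-truth supervision for a distance-approximating network:

```python
# p-Wasserstein distance between equal-size point sets via optimal assignment.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def wasserstein_p(X, Y, p=2):
    """p-Wasserstein distance between point sets X, Y of equal size (uniform weights)."""
    cost = cdist(X, Y) ** p                      # pairwise Euclidean costs raised to p
    rows, cols = linear_sum_assignment(cost)     # optimal one-to-one matching
    return cost[rows, cols].mean() ** (1.0 / p)

rng = np.random.default_rng(0)
X, Y = rng.normal(size=(50, 3)), rng.normal(size=(50, 3))
print(wasserstein_p(X, Y, p=2))
```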

The proliferation of generated data has spurred advancements in functional data analysis. With the ability to analyze multiple variables simultaneously, the demand for working with multivariate functional data has increased. This study proposes a novel formulation of the epigraph and hypograph indexes, as well as their generalized expressions, specifically tailored to the multivariate functional context. These definitions take into account the interrelations between components. Furthermore, the proposed indexes are employed to cluster multivariate functional data. In the clustering process, the indexes are applied to both the data and their first and second derivatives. This generates a reduced-dimension dataset from the original multivariate functional data, enabling the application of well-established multivariate clustering techniques that have been extensively studied in the literature. The methodology has been tested on simulated and real datasets, with comparative analyses against state-of-the-art methods to assess its performance.
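
For the univariate case only (the paper's multivariate definitions and generalized expressions differ), a minimal sketch of the idea is given below: compute epigraph/hypograph indexes per curve and feed the resulting low-dimensional features to an off-the-shelf clustering routine. The index definitions used here are the standard sample versions and are an assumption for illustration.

```python
# Epigraph/hypograph indexes of sampled univariate curves as clustering features.
import numpy as np
from sklearn.cluster import KMeans

def epigraph_index(curves):
    """For each curve, fraction of sample curves lying entirely above it (pointwise >=)."""
    above = (curves[None, :, :] >= curves[:, None, :]).all(axis=2)
    return above.mean(axis=1)

def hypograph_index(curves):
    """For each curve, fraction of sample curves lying entirely below it (pointwise <=)."""
    below = (curves[None, :, :] <= curves[:, None, :]).all(axis=2)
    return below.mean(axis=1)

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 100)
low = [np.sin(2 * np.pi * t) + rng.normal(0, 0.1, t.size) for _ in range(20)]
high = [np.sin(2 * np.pi * t) + 1.5 + rng.normal(0, 0.1, t.size) for _ in range(20)]
curves = np.vstack(low + high)                       # two groups of curves
features = np.column_stack([epigraph_index(curves), hypograph_index(curves)])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(labels)
```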

The marginal structural quantile model (MSQM) provides a unique lens to understand the causal effect of a time-varying treatment on the full distribution of potential outcomes. Under the semiparametric framework, we derive the efficient influence function for the MSQM, from which a doubly robust estimator is proposed. We show that the doubly robust estimator is consistent if either the model for treatment assignment or the model for the potential outcome distributions is correctly specified, and is semiparametrically efficient if both models are correct. We also develop a novel sensitivity analysis framework for the assumption of no unmeasured time-varying confounding.
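
For concreteness, one simple form such a model can take (an illustrative assumption, not the paper's specification) is linear in the cumulative treatment over a history $\bar{a} = (a_1, \dots, a_K)$; doubly robust estimation then targets $\beta(\tau)$ using working models for the treatment-assignment process and for the potential-outcome distributions, remaining consistent when either is correct.

```latex
\[
  Q_{Y^{\bar{a}}}(\tau) \;=\; \beta_0(\tau) \;+\; \beta_1(\tau) \sum_{k=1}^{K} a_k ,
  \qquad \tau \in (0, 1).
\]
```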

A non-intrusive proper generalized decomposition (PGD) strategy, coupled with an overlapping domain decomposition (DD) method, is proposed to efficiently construct surrogate models of parametric linear elliptic problems. A parametric multi-domain formulation is presented, with local subproblems featuring arbitrary Dirichlet interface conditions represented through the traces of the finite element functions used for spatial discretization at the subdomain level, with no need for additional auxiliary basis functions. The linearity of the operator is exploited to devise low-dimensional problems with only a few active boundary parameters. An overlapping Schwarz method is used to glue the local surrogate models, solving a linear system for the nodal values of the parametric solution at the interfaces, without introducing Lagrange multipliers to enforce the continuity in the overlapping region. The proposed DD-PGD methodology relies on a fully algebraic formulation allowing for real-time computation based on the efficient interpolation of the local surrogate models in the parametric space, with no additional problems to be solved during the execution of the Schwarz algorithm. Numerical results for parametric diffusion and convection-diffusion problems are presented to showcase the accuracy of the DD-PGD approach, its robustness in different regimes and its superior performance with respect to standard high-fidelity DD methods.
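
The overlapping Schwarz coupling can be illustrated on a textbook one-dimensional Poisson problem (a minimal sketch with assumed discretization and subdomains, unrelated to the PGD surrogate construction itself): local Dirichlet solves alternate, exchanging interface values until the global solution is recovered.

```python
# Alternating overlapping Schwarz for -u'' = 1 on (0, 1) with u(0) = u(1) = 0.
import numpy as np

n = 101                                  # global grid points, h = 1/(n-1)
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.ones(n)                           # right-hand side f(x) = 1
u = np.zeros(n)                          # global iterate, boundary values already zero

def local_solve(u, i_left, i_right, f):
    """Solve -u'' = f on nodes i_left+1 .. i_right-1 with Dirichlet data from the iterate."""
    m = i_right - i_left - 1
    A = (np.diag(2.0 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
         - np.diag(np.ones(m - 1), -1)) / h**2
    b = f[i_left + 1:i_right].copy()
    b[0] += u[i_left] / h**2
    b[-1] += u[i_right] / h**2
    u[i_left + 1:i_right] = np.linalg.solve(A, b)

i_a, i_b = 40, 60                        # overlapping region: nodes 40..60
for _ in range(20):                      # alternating Schwarz sweeps
    local_solve(u, 0, i_b, f)            # left subdomain (0, x[i_b])
    local_solve(u, i_a, n - 1, f)        # right subdomain (x[i_a], 1)

exact = 0.5 * x * (1.0 - x)              # exact solution for f = 1
print(np.max(np.abs(u - exact)))         # converges geometrically with the overlap
```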

We propose a new concept of codivergence, which quantifies the similarity between two probability measures $P_1, P_2$ relative to a reference probability measure $P_0$. In the neighborhood of the reference measure $P_0$, a codivergence behaves like an inner product between the measures $P_1 - P_0$ and $P_2 - P_0$. Codivergences of covariance-type and correlation-type are introduced and studied, with a focus on two specific correlation-type codivergences, the $\chi^2$-codivergence and the Hellinger codivergence. We derive explicit expressions of these codivergences for several common parametric families of probability distributions. Moreover, for a codivergence, we introduce the divergence matrix as an analogue of the Gram matrix. It is shown that the $\chi^2$-divergence matrix satisfies a data-processing inequality.
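
One natural bilinear form consistent with this inner-product behaviour (a hedged illustration, not necessarily the paper's exact definition) extends the $\chi^2$-divergence as follows:

```latex
\[
  \chi^2(P_1, P_2 \mid P_0)
  \;=\; \int \Bigl(\tfrac{\mathrm{d}P_1}{\mathrm{d}P_0} - 1\Bigr)
             \Bigl(\tfrac{\mathrm{d}P_2}{\mathrm{d}P_0} - 1\Bigr)\,\mathrm{d}P_0 ,
\]
```

which reduces to the usual $\chi^2$-divergence when $P_1 = P_2$ and acts like an inner product between $P_1 - P_0$ and $P_2 - P_0$ near the reference measure.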

Human interactions create social networks forming the backbone of societies. Individuals adjust their opinions by exchanging information through social interactions. Two recurrent questions are whether social structures promote opinion polarisation or consensus in societies and whether polarisation can be avoided, particularly on social media. In this paper, we hypothesise that not only the network structure but also the timings of social interactions regulate the emergence of opinion clusters. We devise a temporal version of the Deffuant opinion model in which pairwise interactions follow temporal patterns, and show that burstiness alone is sufficient to prevent consensus and polarisation by promoting the reinforcement of local opinions. Individuals self-organise into a multi-partisan society due to network clustering, but the diversity of opinion clusters further increases with burstiness, particularly when individuals have low tolerance and prefer to adjust to similar peers. The emergent opinion landscape is well balanced in terms of cluster sizes, with only a small fraction of individuals converging to extreme opinions. We thus argue that polarisation is more likely to emerge on social media than in offline social networks because of the relatively low social clustering observed online. Counter-intuitively, strengthening online social networks by increasing social redundancy may be an avenue to reduce polarisation and promote opinion diversity.
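
A minimal simulation sketch of the ingredients described above (assumed network, tolerance, and waiting-time distribution; not the paper's exact model): Deffuant-style bounded-confidence updates applied along edges whose activation times have heavy-tailed, bursty inter-event intervals.

```python
# Temporal Deffuant-style model with bursty edge activations on a clustered network.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
G = nx.watts_strogatz_graph(200, 6, 0.1, seed=0)    # clustered social network
opinions = rng.uniform(0, 1, G.number_of_nodes())
tolerance, mu, horizon = 0.2, 0.5, 500.0

# Bursty event sequence: each edge activates with Pareto-distributed waiting times.
events = []
for u, v in G.edges():
    t = 0.0
    while t < horizon:
        t += rng.pareto(1.5) + 1.0                  # heavy-tailed inter-event gap
        events.append((t, u, v))
events.sort()

for _, u, v in events:
    if abs(opinions[u] - opinions[v]) < tolerance:  # bounded confidence
        shift = mu * (opinions[v] - opinions[u])
        opinions[u] += shift
        opinions[v] -= shift

print(np.unique(np.round(opinions, 1)))             # approximate opinion cluster locations
```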

We consider parametrized linear-quadratic optimal control problems and provide their online-efficient solutions by combining greedy reduced basis methods and machine learning algorithms. To this end, we first extend the greedy control algorithm, which builds a reduced basis for the manifold of optimal final time adjoint states, to the setting where the objective functional consists of a penalty term measuring the deviation from a desired state and a term describing the control energy. Afterwards, we apply machine learning surrogates to accelerate the online evaluation of the reduced model. The error estimates proven for the greedy procedure are further transferred to the machine learning models and thus allow for efficient a posteriori error certification. We discuss the computational costs of all considered methods in detail and show by means of two numerical examples the tremendous potential of the proposed methodology.
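
Schematically (with a placeholder full-order solver and a plain projection-error indicator standing in for the rigorous a posteriori estimator), the offline greedy construction and the online machine-learning surrogate might look as follows; all names and parameter choices here are assumptions for illustration.

```python
# Greedy reduced-basis construction plus an ML surrogate for the reduced coefficients.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

def high_fidelity_solve(mu):
    """Placeholder full-order solve: a parameter-dependent vector in R^200."""
    xs = np.linspace(0, 1, 200)
    return np.sin(mu * np.pi * xs) + mu * xs

params = np.linspace(0.5, 3.0, 40)                 # training parameter set
snapshots = np.array([high_fidelity_solve(m) for m in params])
basis = np.empty((0, snapshots.shape[1]))          # reduced basis (rows orthonormal)

for _ in range(5):                                 # greedy iterations
    proj = snapshots @ basis.T @ basis             # projection onto current basis
    errors = np.linalg.norm(snapshots - proj, axis=1)   # surrogate error indicator
    worst = int(np.argmax(errors))                 # select worst-approximated parameter
    residual = snapshots[worst] - proj[worst]
    basis = np.vstack([basis, residual / np.linalg.norm(residual)])

coeffs = snapshots @ basis.T                       # reduced coordinates of the snapshots
surrogate = KernelRidge(kernel="rbf", alpha=1e-6, gamma=5.0).fit(params[:, None], coeffs)

mu_test = 1.7                                      # online stage: surrogate + basis expansion
u_rb = surrogate.predict([[mu_test]]) @ basis
print(np.linalg.norm(u_rb.ravel() - high_fidelity_solve(mu_test)))
```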

The majority of fault-tolerant distributed algorithms are designed assuming a nominal corruption model, in which at most a fraction $f_n$ of parties can be corrupted by the adversary. However, due to the infamous Sybil attack, nominal models are not sufficient to express the trust assumptions in open (i.e., permissionless) settings. Instead, permissionless systems typically operate in a weighted model, where each participant is associated with a weight and the adversary can corrupt a set of parties holding at most a fraction $f_w$ of the total weight. In this paper, we suggest a simple way to transform a large class of protocols designed for the nominal model into the weighted model. To this end, we formalize and solve three novel optimization problems, which we collectively call the weight reduction problems, that allow us to map large real weights into small integer weights while preserving the properties necessary for the correctness of the protocols. In all cases, we manage to keep the sum of the integer weights at most linear in the number of parties, resulting in extremely efficient protocols for the weighted model. Moreover, we demonstrate that, on weight distributions that emerge in practice, the sum of the integer weights tends to be far below the theoretical worst case and often even smaller than the number of participants. While for some protocols our transformation requires an arbitrarily small reduction in resilience (i.e., $f_w = f_n - \epsilon$), surprisingly, for many important problems we obtain weighted solutions with the same resilience ($f_w = f_n$) as nominal ones. Notable examples include asynchronous consensus, verifiable secret sharing, erasure-coded distributed storage, and broadcast protocols.
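
As a naive illustration of the goal (simple ceiling rounding plus a brute-force check, not the paper's optimization-based weight-reduction schemes), one can map real weights onto a coarse integer grid and verify on a small example that every coalition holding at most a fraction $f_w$ of the real weight also holds less than a fraction $f_n$ of the integer weight:

```python
# Naive weight quantization and brute-force verification of the threshold property.
import math
from itertools import combinations

def quantize(weights, resolution):
    """Map real weights to small non-negative integers via ceiling on a coarse grid."""
    total = sum(weights)
    return [math.ceil(w * resolution / total) for w in weights]

def threshold_preserved(weights, int_weights, f_w, f_n):
    """Check on all coalitions that real weight <= f_w implies integer weight < f_n."""
    n, tot_w, tot_q = len(weights), sum(weights), sum(int_weights)
    for r in range(1, n + 1):
        for coalition in combinations(range(n), r):
            real = sum(weights[i] for i in coalition)
            integer = sum(int_weights[i] for i in coalition)
            if real <= f_w * tot_w and integer >= f_n * tot_q:
                return False
    return True

weights = [120.5, 80.0, 55.3, 10.2, 9.7, 3.1, 1.4]
int_weights = quantize(weights, resolution=64)
print(int_weights, threshold_preserved(weights, int_weights, f_w=1/4, f_n=1/3))
```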

In this paper, we first present the notions and certain fundamental properties of the proximal and limiting normal cones with respect to a set. Based on these normal cones, we introduce the limiting coderivative with respect to a set for multifunctions and the limiting subdifferential with respect to a set for single-valued mappings. Necessary and sufficient conditions for the Aubin property with respect to a set are then derived for multifunctions, using the limiting coderivative with respect to a set. Using the limiting subdifferential with respect to a set, we provide necessary optimality conditions for local solutions of optimization problems. In addition, we provide examples to illustrate the results.
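
For reference, the classical (unrestricted) definitions in finite dimensions, which the "with respect to a set" notions generalize, read as follows for a closed set $\Omega \subset \mathbb{R}^n$ and $\bar{x} \in \Omega$:

```latex
\begin{align*}
  N^{P}_{\Omega}(\bar{x}) &= \{\, v \in \mathbb{R}^n : \exists\, \sigma > 0 \text{ such that }
      \langle v, x - \bar{x} \rangle \le \sigma \|x - \bar{x}\|^{2} \ \ \forall x \in \Omega \,\},
      && \text{(proximal normal cone)} \\
  N_{\Omega}(\bar{x}) &= \{\, v \in \mathbb{R}^n : \exists\, x_k \in \Omega,\ x_k \to \bar{x},\
      v_k \in N^{P}_{\Omega}(x_k),\ v_k \to v \,\}.
      && \text{(limiting normal cone)}
\end{align*}
```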
