
In this article, we propose a one-sample test of whether the support of the unknown distribution generating the data is homologically equivalent to the support of a specified distribution; the corresponding two-sample test checks whether the supports of two unknown distributions are homologically equivalent. In the course of this study, test statistics based on Betti numbers are formulated, and the consistency of the tests is established under the critical and supercritical regimes. Moreover, simulation studies are conducted and the results are compared with existing methodologies such as Robinson's permutation test and a test based on mean persistent landscape functions. Furthermore, the practicability of the tests is demonstrated on two well-known real data sets.
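
To make the ingredients concrete, below is a minimal numpy sketch of a two-sample permutation test on the zeroth Betti number (number of connected components) of Vietoris-Rips 1-skeletons. This is only an illustration of the general recipe: the paper's statistics also involve higher Betti numbers, and the scale r and the toy data here are assumptions.

```python
import numpy as np

def betti_0(adj):
    """Zeroth Betti number (connected components) from a boolean adjacency matrix."""
    n = len(adj)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if adj[i, j]:
                parent[find(i)] = find(j)
    return len({find(i) for i in range(n)})

def two_sample_betti_test(X, Y, r, n_perm=499, seed=0):
    """Permutation p-value for |b0(X) - b0(Y)| at Vietoris-Rips scale r."""
    rng = np.random.default_rng(seed)
    pooled = np.vstack([X, Y])
    dist = np.linalg.norm(pooled[:, None] - pooled[None, :], axis=-1)
    adj, n_x = dist <= r, len(X)
    observed = abs(betti_0(adj[:n_x, :n_x]) - betti_0(adj[n_x:, n_x:]))
    exceed = 0
    for _ in range(n_perm):
        p = rng.permutation(len(pooled))
        a, b = p[:n_x], p[n_x:]
        exceed += abs(betti_0(adj[np.ix_(a, a)]) - betti_0(adj[np.ix_(b, b)])) >= observed
    return (1 + exceed) / (1 + n_perm)

# Toy example: supports with different numbers of connected components.
rng = np.random.default_rng(1)
one_blob = 0.1 * rng.normal(size=(60, 2))
two_blobs = np.vstack([0.1 * rng.normal(size=(30, 2)),
                       [3.0, 0.0] + 0.1 * rng.normal(size=(30, 2))])
print("p-value:", two_sample_betti_test(one_blob, two_blobs, r=0.8))
```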

Related Content

This paper considers the classical problem of using Monte Carlo methods to sample a target probability distribution obtained by conditioning on a rare event, defined by the level set of a real-valued score function that is very expensive to compute. We also consider a context in which, with each new evaluation of the true score function, a method is available that iteratively builds a sequence of reduced scores; these reduced scores are moreover certified with pointwise error bounds. This work proposes a fully adaptive algorithm that iteratively: i) builds a sequence of proposal distributions obtained by conditioning on the reduced score above an adaptively well-chosen level, and ii) draws from the latter both for importance sampling of the true rare-event target and for proposing relevant (expensive) updates to the reduced score. An essential contribution is the adaptive choice of the level in i) and ii): it is calculated solely from the reduced score and its error bound, and is interpreted as the first non-achievable level, as quantified by a given cost (in a pessimistic scenario) of importance sampling of the associated true target distribution. From a practical point of view, sampling from the proposal sequence is performed by extending the framework of the popular Adaptive Multilevel Splitting (AMS) algorithm to the use of score-function reduction. Numerical experiments evaluate the proposed importance sampling algorithm in terms of computational complexity versus squared error. In particular, we investigate the performance of the algorithm when simulating rare events related to the solution of a parametric PDE approximated by a reduced basis.
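
For readers unfamiliar with AMS, here is a minimal sketch of the plain algorithm (no reduced-score surrogate, one particle killed per iteration) estimating p = P(S(X) > L) for X standard Gaussian. The Metropolis kernel, dimensions, and target level are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from math import erfc, sqrt

def mh_move(x, level, score, rng, steps=10, sigma=0.5):
    """Metropolis steps targeting N(0, I) conditioned on {score > level}."""
    for _ in range(steps):
        prop = x + sigma * rng.normal(size=x.shape)
        log_ratio = 0.5 * (x @ x - prop @ prop)       # Gaussian log-density ratio
        if score(prop) > level and np.log(rng.uniform()) < log_ratio:
            x = prop
    return x

def ams(score, L, n=200, d=2, seed=1):
    """Estimate P(score(X) > L), X ~ N(0, I_d), killing one particle per step."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(size=(n, d))
    scores = np.array([score(x) for x in particles])
    n_killed = 0
    while scores.min() < L:
        i = int(scores.argmin())
        level = scores[i]                              # adaptive level = current minimum
        j = rng.choice([k for k in range(n) if k != i])
        particles[i] = mh_move(particles[j].copy(), level, score, rng)
        scores[i] = score(particles[i])
        n_killed += 1
    return (1 - 1 / n) ** n_killed                     # standard AMS estimator

score = lambda x: x[0]        # rare event: first coordinate exceeds L
L = 3.0
print("AMS:", ams(score, L), " exact:", 0.5 * erfc(L / sqrt(2)))
```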

Since the development of the conjugate gradient (CG) method by Hestenes and Stiefel in 1952, CG has become an indispensable tool in computational mathematics for solving positive definite linear systems. The conjugate residual (CR) method, closely related to CG and introduced by Stiefel in 1955 for the same setting, remains comparatively little known outside the numerical linear algebra community. Since their inception, these methods -- henceforth collectively referred to as conjugate direction methods -- have been extended beyond positive definite settings to indefinite, albeit consistent, ones. Going one step further, in this paper we investigate theoretical and empirical properties of these methods for inconsistent systems. Among other things, we show that small modifications to the original algorithms allow them to recover the pseudo-inverse solution. Furthermore, we show that CR is essentially equivalent to the minimum residual method (MINRES), proposed by Paige and Saunders in 1975, in such contexts. Lastly, we conduct a series of numerical experiments to shed light on their numerical stability (or lack thereof) and their performance on inconsistent systems. Surprisingly, we demonstrate that, unlike CR and contrary to popular belief, CG can exhibit significant numerical instability, bordering on catastrophe in some instances.
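
The phenomenon is easy to probe numerically. The sketch below (illustrative, not the paper's experiments) runs scipy's CG and MINRES on a made-up singular, symmetric, inconsistent system and compares both against the pseudo-inverse solution; on such systems CG's iterates can drift far from it while MINRES stays well behaved.

```python
import numpy as np
from scipy.sparse.linalg import cg, minres

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(50, 50)))
eigs = np.r_[np.linspace(1, 10, 40), np.zeros(10)]   # rank-40 PSD matrix
A = Q @ np.diag(eigs) @ Q.T
b = rng.normal(size=50)                              # generic b: system is inconsistent

x_pinv = np.linalg.pinv(A) @ b                       # minimum-norm least-squares solution
x_cg, _ = cg(A, b, maxiter=200)
x_mr, _ = minres(A, b, maxiter=200)

for name, x in [("CG", x_cg), ("MINRES", x_mr)]:
    print(name, "residual:", np.linalg.norm(A @ x - b),
          " distance to pinv solution:", np.linalg.norm(x - x_pinv))
```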

Privacy, security, and data-governance constraints rule out brute-force integration of cross-silo data, the need for which has grown with the development of the Internet of Things. Federated learning was proposed to let all parties collaboratively complete a training task without the data ever leaving its local silo. Vertical federated learning is a specialization of federated learning for distributed features. To preserve privacy, homomorphic encryption is applied to enable operations on encrypted data without decryption. Nevertheless, alongside its robust security guarantee, homomorphic encryption brings extra communication and computation overhead. In this paper, we analyze the current bottlenecks of vertical federated learning under homomorphic encryption comprehensively and numerically. We propose a straggler-resilient and computation-efficient accelerating system that reduces the communication overhead in heterogeneous scenarios by up to 65.26% and the computation overhead caused by homomorphic encryption by up to 40.66%. Our system improves the robustness and efficiency of the current vertical federated learning framework without loss of security.
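
A back-of-the-envelope illustration of where that overhead comes from (not the proposed system): time a plain aggregation against the same aggregation under Paillier homomorphic encryption, as used in many vertical federated learning pipelines. This uses the third-party `phe` (python-paillier) package; the key size and vector length are assumptions.

```python
import time
import numpy as np
from phe import paillier  # third-party: pip install phe

public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)
grads = np.random.default_rng(0).normal(size=200)

t0 = time.perf_counter()
plain_sum = grads.sum()
t_plain = time.perf_counter() - t0

t0 = time.perf_counter()
enc = [public_key.encrypt(float(g)) for g in grads]  # "party A" encrypts its values
enc_sum = sum(enc[1:], start=enc[0])                 # "party B" adds ciphertexts only
result = private_key.decrypt(enc_sum)                # "party A" decrypts the aggregate
t_he = time.perf_counter() - t0

print(f"plaintext: {t_plain:.2e}s   encrypted: {t_he:.2f}s   "
      f"error: {abs(result - plain_sum):.2e}")
```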

In this paper, we investigate tradeoffs between differential privacy (DP) and several voting axioms for approval-based committee voting, including proportionality, Pareto efficiency, the Condorcet criterion, and strategyproofness. For all the axioms except strategyproofness, we show their incompatibility with DP and provide both upper and lower bounds for their tradeoffs with DP. Furthermore, we show that any $\epsilon$-DP mechanism satisfies $e^{-\epsilon}$-cardinality strategyproofness, and that this guarantee can be further improved if the mechanism satisfies monotonicity.
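
For concreteness, here is one generic $\epsilon$-DP committee rule of the kind such tradeoffs apply to: the exponential mechanism over fixed-size committees, with total approval score as the utility. This is a standard textbook construction used as an illustration, not the paper's mechanism; the profile and parameters are made up.

```python
import numpy as np
from itertools import combinations

def dp_committee(approvals, k, epsilon, seed=0):
    """approvals: (n_voters, n_candidates) 0/1 matrix. Returns a size-k committee.

    Utility u(W) = total approvals of W. Changing one voter's ballot shifts
    u by at most k, so the sensitivity is k and we sample with probability
    proportional to exp(epsilon * u(W) / (2 * k)).
    """
    rng = np.random.default_rng(seed)
    counts = approvals.sum(axis=0)
    committees = list(combinations(range(approvals.shape[1]), k))
    utils = np.array([counts[list(W)].sum() for W in committees], dtype=float)
    logits = epsilon * utils / (2 * k)
    probs = np.exp(logits - logits.max())        # stable softmax
    probs /= probs.sum()
    return committees[rng.choice(len(committees), p=probs)]

approvals = np.array([[1, 1, 0, 0],
                      [1, 1, 0, 0],
                      [0, 0, 1, 1]])
print(dp_committee(approvals, k=2, epsilon=1.0))
```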

This book is the result of a seminar in which we reviewed multimodal approaches and attempted to create a solid overview of the field, starting with the current state-of-the-art approaches in the two subfields of Deep Learning individually. Further, modeling frameworks are discussed in which one modality is transformed into the other, as well as models in which one modality is utilized to enhance representation learning for the other. To conclude the second part, architectures with a focus on handling both modalities simultaneously are introduced. Finally, we cover other modalities as well as general-purpose multi-modal models, which are able to handle different tasks on different modalities within one unified architecture. One interesting application, Generative Art, caps off this booklet.

In order to overcome the expressive limitations of graph neural networks (GNNs), we propose the first method that exploits vector flows over graphs to develop globally consistent directional and asymmetric aggregation functions. We show that our directional graph networks (DGNs) generalize convolutional neural networks (CNNs) when applied to a grid. Whereas recent theoretical works focus on understanding local neighbourhoods, local structures, and local isomorphism with no global information flow, our novel theoretical framework allows directional convolutional kernels in any graph. First, by defining a vector field on the graph, we develop a method of applying directional derivatives and smoothing by projecting node-specific messages onto the field. Then we propose the use of the Laplacian eigenvectors as such a vector field, and we show that the method generalizes CNNs on an n-dimensional grid and is provably more discriminative than standard GNNs with respect to the 1-Weisfeiler-Lehman (1-WL) test. Finally, we bring the power of CNN data augmentation to graphs by providing a means of applying reflection, rotation, and distortion to the underlying directional field. We evaluate our method on different standard benchmarks and see a relative error reduction of 8% on the CIFAR10 graph dataset and of 11% to 32% on the molecular ZINC dataset. An important outcome of this work is that it enables the translation of any physical or biological problem with intrinsic directional axes into a graph-network formalism with an embedded directional field.
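
A simplified sketch of the core idea (not the full DGN layer): take the first non-trivial Laplacian eigenvector as a vector field on the graph and aggregate messages as a directional derivative along it. On a path graph, a linear node feature then has a constant derivative (up to the eigenvector's arbitrary sign), mirroring a 1-D convolution axis. The toy graph and features are assumptions.

```python
import numpy as np

def directional_aggregate(A, X):
    """Directional-derivative aggregation y_i = sum_j Fhat_ij (x_j - x_i),
    with the field given by the Fiedler eigenvector of L = D - A."""
    D = np.diag(A.sum(axis=1))
    _, vecs = np.linalg.eigh(D - A)
    phi = vecs[:, 1]                               # first non-trivial eigenvector
    F = A * (phi[None, :] - phi[:, None])          # signed flow along each edge
    norm = np.abs(F).sum(axis=1, keepdims=True)
    Fhat = F / np.where(norm == 0, 1, norm)        # L1 row normalization
    return Fhat @ X - Fhat.sum(axis=1, keepdims=True) * X

# 6-node path graph with a linear scalar feature per node.
A = np.zeros((6, 6))
for i in range(5):
    A[i, i + 1] = A[i + 1, i] = 1.0
X = np.arange(6, dtype=float)[:, None]
print(directional_aggregate(A, X).ravel())         # constant +/-1: the "derivative"
```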

We present a new method to learn video representations from large-scale unlabeled video data. Ideally, this representation will be generic and transferable, directly usable for new tasks such as action recognition and zero- or few-shot learning. We formulate unsupervised representation learning as a multi-modal, multi-task learning problem, where the representations are shared across different modalities via distillation. Further, we introduce the concept of loss-function evolution, using an evolutionary search algorithm to automatically find an optimal combination of loss functions capturing many (self-supervised) tasks and modalities. Third, we propose an unsupervised representation evaluation metric that uses distribution matching to a large unlabeled dataset as a prior constraint, based on Zipf's law. This unsupervised constraint, which is not guided by any labeling, produces results similar to weakly-supervised, task-specific ones. The proposed unsupervised representation learning results in a single RGB network that outperforms previous methods. Notably, it is also more effective than several label-based methods (e.g., ImageNet pretraining), with the exception of those using large, fully labeled video datasets.
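
To illustrate the Zipf-matching principle (this is an illustration of the idea only, not the paper's exact metric): cluster the learned embeddings of unlabeled examples and score how closely the cluster-size distribution follows a 1/rank law. Uses scikit-learn; the cluster count and stand-in features are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def zipf_match_score(embeddings, n_clusters=20, seed=0):
    """KL divergence between sorted cluster-size frequencies and a Zipf prior.
    Lower means the representation's cluster structure is more Zipf-like."""
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=seed).fit_predict(embeddings)
    sizes = np.sort(np.bincount(labels, minlength=n_clusters))[::-1]
    p = sizes / sizes.sum()
    zipf = 1.0 / np.arange(1, n_clusters + 1)
    q = zipf / zipf.sum()
    eps = 1e-12
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

emb = np.random.default_rng(0).normal(size=(1000, 32))  # stand-in for learned features
print(zipf_match_score(emb))
```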

Knowledge graph (KG) embedding encodes the entities and relations of a KG into low-dimensional vector spaces to support various applications such as KG completion, question answering, and recommender systems. In the real world, KGs are dynamic and evolve over time through the addition or deletion of triples. However, most existing models focus on embedding static KGs and neglect these dynamics: to adapt to changes in a KG, they need to be re-trained on the whole KG at a high time cost. In this paper, to tackle this problem, we propose a new context-aware Dynamic Knowledge Graph Embedding (DKGE) method that supports embedding learning in an online fashion. DKGE introduces two different representations (i.e., knowledge embeddings and contextual element embeddings) for each entity and each relation, and jointly models entities and relations together with their contexts by employing two attentive graph convolutional networks, a gating strategy, and translation operations. This effectively limits the impact of a KG update to certain regions rather than the entire graph, so that DKGE can rapidly acquire the updated KG embedding through the proposed online learning algorithm. Furthermore, DKGE can also learn KG embeddings from scratch. Experiments on the tasks of link prediction and question answering in a dynamic environment demonstrate the effectiveness and efficiency of DKGE.
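
A minimal sketch of the translation operation underlying such embeddings (TransE-style scoring, score(h, r, t) = -||h + r - t||). DKGE composes knowledge and contextual embeddings before this step; here we show only the translation scoring, on made-up vectors.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8
entity = {"Paris": rng.normal(size=dim), "France": rng.normal(size=dim)}
relation = {"capital_of": entity["France"] - entity["Paris"]}  # ideal translation

def score(h, r, t):
    """Higher (closer to 0) means the triple is more plausible."""
    return -np.linalg.norm(entity[h] + relation[r] - entity[t])

print(score("Paris", "capital_of", "France"))   # ~0: plausible triple
print(score("France", "capital_of", "Paris"))   # much lower: implausible triple
```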

We investigate a lattice-structured LSTM model for Chinese NER, which encodes a sequence of input characters as well as all potential words that match a lexicon. Compared with character-based methods, our model explicitly leverages word and word sequence information. Compared with word-based methods, lattice LSTM does not suffer from segmentation errors. Gated recurrent cells allow our model to choose the most relevant characters and words from a sentence for better NER results. Experiments on various datasets show that lattice LSTM outperforms both word-based and character-based LSTM baselines, achieving the best results.
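
A small sketch of the lattice-construction step: enumerate every lexicon word that matches a subsequence of the input characters, which is the extra input a lattice LSTM consumes alongside the character sequence. The lexicon and sentence below are toy assumptions.

```python
def lattice_words(chars, lexicon, max_len=4):
    """Return (start, end, word) spans for lexicon words found in the char sequence."""
    spans = []
    for i in range(len(chars)):
        for j in range(i + 2, min(i + max_len, len(chars)) + 1):  # words of length >= 2
            word = "".join(chars[i:j])
            if word in lexicon:
                spans.append((i, j, word))
    return spans

lexicon = {"南京", "南京市", "市长", "长江", "长江大桥", "大桥"}
sentence = list("南京市长江大桥")
for start, end, word in lattice_words(sentence, lexicon):
    print(start, end, word)
```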

This paper proposes a method to modify traditional convolutional neural networks (CNNs) into interpretable CNNs, in order to clarify the knowledge representations in high conv-layers of a CNN. In an interpretable CNN, each filter in a high conv-layer represents a specific object part. No annotations of object parts or textures are needed to supervise the learning process; instead, the interpretable CNN automatically assigns each filter in a high conv-layer an object part during learning. Our method can be applied to different types of CNNs with various structures. The clear knowledge representation in an interpretable CNN helps people understand the logic inside a CNN, i.e., which patterns the CNN bases its decisions on. Experiments showed that filters in an interpretable CNN were more semantically meaningful than those in traditional CNNs.
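
A hedged illustration of the localization idea (the paper's actual mechanism is a mutual-information-based filter loss, which is more involved): mask a filter's feature map around its peak activation so that the filter responds like a single, localized part detector. The shapes here are assumptions.

```python
import numpy as np

def peak_mask(feature_map, radius=2):
    """Zero out activations outside a square window around the peak response."""
    h, w = feature_map.shape
    i, j = np.unravel_index(np.argmax(feature_map), (h, w))
    mask = np.zeros_like(feature_map)
    mask[max(0, i - radius):i + radius + 1, max(0, j - radius):j + radius + 1] = 1
    return feature_map * mask

fmap = np.random.default_rng(0).random((14, 14))   # one filter's response map
masked = peak_mask(fmap)
print("kept activations:", int((masked > 0).sum()), "of", fmap.size)
```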
