
In this paper, we propose a numerical method that uniformly handles the random genetic drift model for pure drift, with or without natural selection and mutation. In the pure drift and natural selection cases, Dirac $\delta$ singularities develop at the two boundary ends, and the mass lumped there gives the fixation probability. In the one-way mutation case, known as Muller's ratchet, the accumulation of deleterious mutations leads to the loss of the fittest gene: the Dirac $\delta$ singularity spikes at only one boundary end, representing fixation of the deleterious gene and loss of the fittest one. In the two-way mutation case, a singularity with a negative power law may emerge near the boundary points. We first rewrite the original model, posed for the probability density function (PDF), as one for the cumulative distribution function (CDF); the Dirac $\delta$ singularity of the PDF then becomes a discontinuity of the CDF. We then establish an upwind scheme that conserves the total probability, is positivity preserving, and is unconditionally stable. For pure drift, the scheme also conserves the expectation. It captures the discontinuous jump of the CDF and thus accurately predicts the fixation probability for pure drift with or without natural selection and for one-way mutation. For the two-way mutation case, it captures the power law of the singularity. Moreover, no artificial algorithms or additional boundary criteria are needed in the numerical simulation. The numerical results show the effectiveness of the scheme.
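The abstract does not give the scheme itself, but the fixation behaviour it describes is easy to illustrate independently. The following is a minimal Monte Carlo sketch (not the paper's CDF-based upwind scheme) of a haploid Wright-Fisher model under pure drift, with hypothetical parameters N=100 and initial frequency 0.3; the probability mass accumulating at the two ends corresponds to the Dirac $\delta$ singularities, and for pure drift the fixation probability at 1 should match the initial frequency by conservation of expectation.

```python
import numpy as np

def wright_fisher_pure_drift(N=100, p0=0.3, n_rep=20000, n_gen=2000, seed=0):
    """Monte Carlo Wright-Fisher sampling for pure genetic drift.

    Returns the fraction of replicates fixed at frequency 1 and at 0,
    i.e. the probability mass that concentrates (as Dirac deltas)
    at the two boundary ends.
    """
    rng = np.random.default_rng(seed)
    k = np.full(n_rep, int(round(p0 * N)))        # allele counts per replicate
    for _ in range(n_gen):
        active = (k > 0) & (k < N)                # replicates not yet fixed
        if not active.any():
            break
        k[active] = rng.binomial(N, k[active] / N)
    return (k == N).mean(), (k == 0).mean()

fix1, fix0 = wright_fisher_pure_drift()
print(f"P(fix at 1) ~ {fix1:.3f} (theory 0.3), P(fix at 0) ~ {fix0:.3f} (theory 0.7)")
```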

Related content

In this paper, we give an overview of a recently developed method for dynamic domain adaptation, named DIRA, which relies on a few samples together with a regularisation approach, named elastic weight consolidation, to achieve state-of-the-art (SOTA) domain adaptation results. DIRA has previously been shown to perform competitively with SOTA unsupervised adaptation techniques. However, a limitation of DIRA is that it relies on labels being provided for the few samples used in adaptation, which makes it a supervised technique. In this paper, we propose a modification to the DIRA method to make it self-supervised, i.e., to remove the need for providing labels. Our proposed approach will be evaluated experimentally in future work.
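As a concrete reference point for the regularisation component mentioned above, here is a minimal PyTorch sketch of an elastic-weight-consolidation penalty (a generic EWC term, not DIRA's exact loss). `fisher` and `ref_params` are assumed to be dictionaries of diagonal Fisher estimates and source-domain parameter values keyed by parameter name, and `lam` is a hypothetical regularisation strength; during adaptation this term would be added to the few-sample task loss.

```python
import torch

def ewc_penalty(model, fisher, ref_params, lam=100.0):
    """Generic elastic-weight-consolidation term: penalize each parameter's
    drift from its source-domain value, weighted by its (diagonal) Fisher
    information estimate."""
    penalty = 0.0
    for name, p in model.named_parameters():
        if name in fisher:
            penalty = penalty + (fisher[name] * (p - ref_params[name]) ** 2).sum()
    return 0.5 * lam * penalty
```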

In this paper, we study the (decentralized) distributed optimization problem with high-dimensional sparse structure. Building upon the FedDA algorithm, we propose a (Decentralized) FedDA-GT algorithm, which incorporates the \textbf{gradient tracking} technique. It eliminates the heterogeneity among different clients' objective functions while ensuring a dimension-free convergence rate. Compared to the vanilla FedDA approach, (D)FedDA-GT significantly reduces the communication complexity, from ${O}(s^2\log d/\varepsilon^{3/2})$ to a more efficient ${O}(s^2\log d/\varepsilon)$. In cases where strong convexity is applicable, we introduce a multistep mechanism resulting in the Multistep ReFedDA-GT algorithm, a slightly modified version of FedDA-GT. This approach achieves a communication complexity of ${O}\left(s\log d \log \frac{1}{\varepsilon}\right)$ through repeated calls to the ReFedDA-GT algorithm. Finally, we conduct numerical experiments illustrating that our proposed algorithms enjoy the dual advantage of being dimension-free and heterogeneity-free.
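The gradient tracking primitive referred to above can be sketched in a few lines. The loop below is a plain decentralized gradient-tracking iteration (not the full (D)FedDA-GT algorithm, which additionally uses dual averaging and soft-thresholding for sparsity); `grads` is a list of per-node gradient oracles and `W` is assumed to be a doubly stochastic mixing matrix of the communication graph. The point of the tracker `y` is that it converges to the average gradient across nodes, which is what removes the client heterogeneity.

```python
import numpy as np

def decentralized_gradient_tracking(grads, W, x0, step=0.1, iters=200):
    """Minimal gradient-tracking loop on m nodes.

    grads : list of callables, grads[i](x) -> local gradient at node i
    W     : (m, m) doubly stochastic mixing matrix
    x0    : (m, d) initial iterates, one row per node
    """
    m, d = x0.shape
    x = x0.copy()
    y = np.stack([grads[i](x[i]) for i in range(m)])   # trackers start at local grads
    g_old = y.copy()
    for _ in range(iters):
        x = W @ x - step * y                           # consensus step + descent
        g_new = np.stack([grads[i](x[i]) for i in range(m)])
        y = W @ y + g_new - g_old                      # track the average gradient
        g_old = g_new
    return x.mean(axis=0)
```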

In this paper we consider the inverse problem of electrical conductivity retrieval starting from boundary measurements, in the framework of Electrical Resistance Tomography (ERT). In particular, the focus is on non-iterative reconstruction algorithms compatible with real-time applications. We present a new non-iterative reconstruction method for Electrical Resistance Tomography, termed the Kernel Method. The imaging algorithm addresses the problem of retrieving the shape of one or more anomalies embedded in a known background. The method is founded on the following idea: if there exists a current flux at the boundary (Neumann data) able to produce the same voltage measurements on two different configurations, with and without the anomaly, respectively, then the corresponding electric current density for the problem involving only the background material vanishes in the region occupied by the anomaly. Consistently with this observation, the Kernel Method consists of (i) evaluating a proper current flux $g$ at the boundary, (ii) solving one direct problem on a configuration without the anomaly and driven by $g$, and (iii) reconstructing the anomaly from the spatial plot of the power density as the region in which the power density vanishes. This new tomographic method has a very simple numerical implementation and a very low computational cost. Besides theoretical results and justifications of our method, we present a large number of numerical examples to show the potential of this new algorithm.
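Steps (i) and (ii) require an ERT forward solver and are not reproduced here; the sketch below only illustrates step (iii), i.e., one way to flag the region where the computed power density (numerically) vanishes. The relative tolerance `rel_tol` is a hypothetical choice, not a value prescribed by the method.

```python
import numpy as np

def anomaly_mask(power_density, rel_tol=1e-2):
    """Step (iii) sketch: mark the points where the power density of the
    anomaly-free problem is numerically negligible; this region is the
    estimate of the anomaly's support."""
    p = np.asarray(power_density, dtype=float)
    return p <= rel_tol * p.max()
```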

In this work, we investigate the margin-maximization bias exhibited by gradient-based algorithms in classifying linearly separable data. We present an in-depth analysis of the specific properties of the velocity field associated with (normalized) gradients, focusing on their role in margin maximization. Inspired by this analysis, we propose a novel algorithm called Progressive Rescaling Gradient Descent (PRGD) and show that PRGD can maximize the margin at an {\em exponential rate}. This stands in stark contrast to all existing algorithms, which maximize the margin at a slow {\em polynomial rate}. Specifically, we identify mild conditions on data distribution under which existing algorithms such as gradient descent (GD) and normalized gradient descent (NGD) {\em provably fail} in maximizing the margin efficiently. To validate our theoretical findings, we present both synthetic and real-world experiments. Notably, PRGD also shows promise in enhancing the generalization performance when applied to linearly non-separable datasets and deep neural networks.
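The PRGD update itself is not spelled out in the abstract, so the sketch below instead shows the baseline it is compared against: normalized gradient descent on the logistic loss for linearly separable data, with the normalized margin tracked at every step. All hyper-parameters here (step size, iteration count, clipping) are illustrative.

```python
import numpy as np

def _sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def normalized_gd_margin(X, y, steps=500, lr=1.0):
    """Normalized gradient descent (NGD) on the logistic loss.

    X : (n, d) features, y : (n,) labels in {-1, +1}.
    Returns the final weights and the normalized margin after each step,
    min_i y_i <x_i, w> / ||w||, the quantity PRGD is designed to drive
    to its maximum much faster than NGD or GD.
    """
    n, d = X.shape
    w = np.zeros(d)
    margins = []
    for _ in range(steps):
        z = np.clip(y * (X @ w), -50.0, 50.0)                # avoid overflow in exp
        grad = -(X * (y * _sigmoid(-z))[:, None]).mean(axis=0)
        w = w - lr * grad / (np.linalg.norm(grad) + 1e-12)   # unit-norm step
        margins.append((y * (X @ w)).min() / (np.linalg.norm(w) + 1e-12))
    return w, margins
```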

In this work, we discover a phenomenon of community bias amplification in graph representation learning, which refers to the exacerbation of performance bias between different classes by graph representation learning. We conduct an in-depth theoretical study of this phenomenon from a novel spectral perspective. Our analysis suggests that structural bias between communities results in varying local convergence speeds for node embeddings. This phenomenon leads to bias amplification in the classification results of downstream tasks. Based on the theoretical insights, we propose random graph coarsening, which is proved to be effective in dealing with the above issue. Finally, we propose a novel graph contrastive learning model called Random Graph Coarsening Contrastive Learning (RGCCL), which utilizes random coarsening as data augmentation and mitigates community bias by contrasting the coarsened graph with the original graph. Extensive experiments on various datasets demonstrate the advantage of our method when dealing with community bias amplification.
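Random graph coarsening as a data augmentation can be realized in many ways; the version below contracts a random matching of edges until a target size is reached, producing a coarse adjacency matrix and a node-to-supernode assignment. It is only an illustrative construction under those assumptions, not necessarily the coarsening scheme used by RGCCL.

```python
import numpy as np

def random_edge_contraction(adj, keep_ratio=0.7, seed=0):
    """Randomly contract a matching of edges to coarsen a graph.

    adj : dense (n, n) symmetric 0/1 adjacency matrix.
    Returns (coarse_adj, assignment) where assignment[i] is the index of the
    supernode that original node i is merged into.
    """
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    target = int(keep_ratio * n)
    assign = np.arange(n)
    matched = np.zeros(n, dtype=bool)
    edges = np.argwhere(np.triu(adj, 1) > 0)
    rng.shuffle(edges)
    n_super = n
    for u, v in edges:
        if n_super <= target:
            break
        if not matched[u] and not matched[v]:
            matched[u] = matched[v] = True
            assign[v] = u                      # merge v into u
            n_super -= 1
    # relabel supernodes to 0..m-1 and build the coarse adjacency
    roots, assign = np.unique(assign, return_inverse=True)
    m = roots.size
    P = np.zeros((n, m))
    P[np.arange(n), assign] = 1.0
    coarse = (P.T @ adj @ P) > 0
    np.fill_diagonal(coarse, False)
    return coarse.astype(int), assign
```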

In this paper we consider the problem of predicting survey response rates using a family of flexible and interpretable nonparametric models. The study is motivated by the US Census Bureau's well-known ROAM application, which uses a linear regression model trained on the US Census Planning Database to identify hard-to-survey areas. A crowdsourcing competition (Erdman and Bates, 2016) organized around ten years ago revealed that machine learning methods based on ensembles of regression trees led to the best performance in predicting survey response rates; however, the corresponding models could not be adopted for the intended application due to their black-box nature. We consider nonparametric additive models with a small number of main and pairwise interaction effects using $\ell_0$-based penalization. From a methodological viewpoint, we study both computational and statistical aspects of our estimator, and discuss variants that incorporate strong hierarchical interactions. Our algorithms (open-sourced on GitHub) extend the computational frontiers of existing algorithms for sparse additive models so that they can handle datasets relevant to the application we consider. We discuss and interpret findings from our model on the US Census Planning Database. In addition to being useful from an interpretability standpoint, our models lead to predictions that appear to be better than popular black-box machine learning methods based on gradient boosting and feedforward neural networks, suggesting that it is possible to have models with the best of both worlds: good accuracy and interpretability.
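Since the abstract only names the model class, the sketch below is a crude stand-in: it builds a piecewise-linear basis per feature and greedily selects a handful of main-effect blocks by residual fit, which mimics (but does not implement) $\ell_0$-penalized selection; pairwise interaction blocks would be formed analogously from products of two features' bases. Knot placement and the number of terms are illustrative choices.

```python
import numpy as np

def hinge_basis(x, n_knots=5):
    """Simple piecewise-linear (hinge) basis for one feature."""
    knots = np.quantile(x, np.linspace(0.1, 0.9, n_knots))
    return np.column_stack([x] + [np.maximum(x - k, 0.0) for k in knots])

def _rss(B, r):
    coef, *_ = np.linalg.lstsq(B, r, rcond=None)
    return float(np.sum((r - B @ coef) ** 2))

def greedy_additive_fit(X, y, max_terms=5):
    """Greedy forward selection of main-effect blocks for an additive model,
    a rough proxy for L0-penalized selection."""
    n, d = X.shape
    blocks = [hinge_basis(X[:, j]) for j in range(d)]
    y0 = y - y.mean()
    selected, resid = [], y0.copy()
    for _ in range(min(max_terms, d)):
        j_best = min((j for j in range(d) if j not in selected),
                     key=lambda j: _rss(blocks[j], resid))
        selected.append(j_best)
        B = np.column_stack([blocks[j] for j in selected])
        coef, *_ = np.linalg.lstsq(B, y0, rcond=None)
        resid = y0 - B @ coef
    return selected, coef
```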

In this paper, we present an accurate and scalable approach to the face clustering task. We aim at grouping a set of faces by their potential identities. We formulate this task as a link prediction problem: a link exists between two faces if they are of the same identity. The key idea is that the local context in the feature space around an instance (face) contains rich information about the linkage relationship between that instance and its neighbors. By constructing sub-graphs around each instance as input data, which depict the local context, we utilize a graph convolution network (GCN) to perform reasoning and infer the likelihood of linkage between pairs in the sub-graphs. Experiments show that our method is more robust to the complex distribution of faces than conventional methods, yields results comparable to state-of-the-art methods on standard face clustering benchmarks, and is scalable to large datasets. Furthermore, we show that the proposed method does not need the number of clusters as a prior, is aware of noise and outliers, and can be extended to a multi-view version for higher clustering accuracy.
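The sub-graph construction described above can be sketched directly from the features; the version below takes the pivot's k1 nearest neighbours as nodes, connects each node to its k2 nearest neighbours inside the node set, and uses pivot-relative node features. Here k1 and k2 are hypothetical hyper-parameters, and the GCN that scores linkage is not shown; this is an illustrative construction, not the authors' exact one.

```python
import numpy as np

def pivot_subgraph(features, pivot, k1=20, k2=5):
    """Build the local sub-graph around one instance (the pivot).

    Nodes are the pivot's k1 nearest neighbours; edges connect each node to
    its k2 nearest neighbours within the node set; node features are taken
    relative to the pivot, which a GCN could then score for linkage.
    """
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sims = f @ f[pivot]
    order = np.argsort(-sims)
    nodes = order[1:k1 + 1]                       # exclude the pivot itself
    sub = f[nodes]
    local_sims = sub @ sub.T
    np.fill_diagonal(local_sims, -np.inf)
    nn = np.argsort(-local_sims, axis=1)[:, :k2]
    adj = np.zeros((k1, k1))
    rows = np.repeat(np.arange(k1), k2)
    adj[rows, nn.ravel()] = 1.0
    adj = np.maximum(adj, adj.T)                  # symmetrize
    node_feats = sub - f[pivot]                   # features relative to pivot
    return nodes, adj, node_feats
```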

In this paper, we propose a deep reinforcement learning framework called GCOMB to learn algorithms that can solve combinatorial problems over large graphs. GCOMB mimics the greedy algorithm for the original problem and incrementally constructs a solution. The framework utilizes a Graph Convolutional Network (GCN) to generate node embeddings that predict which nodes in the entire node set are likely to belong to the solution set. These embeddings enable an efficient training process to learn the greedy policy via Q-learning. Through extensive evaluation on several real and synthetic datasets containing up to a million nodes, we establish that GCOMB is up to 41% better than the state of the art, up to seven times faster than the greedy algorithm, and robust and scalable to large dynamic networks.
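The incremental, greedy construction that GCOMB learns can be summarized with the loop below; the learned GCN + Q-network is abstracted as a scoring callable `q_value`, and the toy `coverage_gain` scorer shown after it simply recovers the classical greedy heuristic for maximum coverage. Both names are placeholders for illustration, not the authors' code.

```python
import numpy as np

def greedy_construct(adj, budget, q_value):
    """Incrementally build a solution set the way a learned greedy policy does:
    at each step pick the node with the highest predicted marginal value.

    q_value(node, solution, adj) -> float stands in for the trained scorer.
    """
    n = adj.shape[0]
    solution = []
    candidates = set(range(n))
    for _ in range(budget):
        best = max(candidates, key=lambda v: q_value(v, solution, adj))
        solution.append(best)
        candidates.remove(best)
    return solution

def coverage_gain(v, solution, adj):
    """Toy scorer: marginal coverage gain of adding node v (classical greedy)."""
    covered = set(solution)
    for u in solution:
        covered.update(np.flatnonzero(adj[u]))
    return len({v, *np.flatnonzero(adj[v])} - covered)
```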

In this paper, we introduce the Reinforced Mnemonic Reader for machine reading comprehension tasks, which enhances previous attentive readers in two aspects. First, a reattention mechanism is proposed to refine current attentions by directly accessing past attentions that are temporally memorized in a multi-round alignment architecture, so as to avoid the problems of attention redundancy and attention deficiency. Second, a new optimization approach, called dynamic-critical reinforcement learning, is introduced to extend the standard supervised method. It always encourages the model to predict a more acceptable answer, so as to address the convergence suppression problem that occurs in traditional reinforcement learning algorithms. Extensive experiments on the Stanford Question Answering Dataset (SQuAD) show that our model achieves state-of-the-art results. Meanwhile, our model outperforms previous systems by over 6% in terms of both Exact Match and F1 metrics on two adversarial SQuAD datasets.
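The reattention idea can be illustrated schematically: refine the current alignment logits using the attention distribution memorized from the previous round before renormalizing. The gating scalar `gamma` and the additive form below are simplifying assumptions for illustration, not the authors' exact formulation.

```python
import torch.nn.functional as F

def reattention(sim_logits, past_attn, gamma=0.5):
    """Schematic re-attention over a (batch, query_len, context_len) alignment:
    mix the current similarity logits with the memorized past attention so that
    previously attended positions inform the new attention distribution."""
    refined = sim_logits + gamma * past_attn
    return F.softmax(refined, dim=-1)
```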

In this paper, we propose a conceptually simple and geometrically interpretable objective function, i.e. additive margin Softmax (AM-Softmax), for deep face verification. In general, the face verification task can be viewed as a metric learning problem, so learning large-margin face features whose intra-class variation is small and inter-class difference is large is of great importance in order to achieve good performance. Recently, Large-margin Softmax and Angular Softmax have been proposed to incorporate the angular margin in a multiplicative manner. In this work, we introduce a novel additive angular margin for the Softmax loss, which is intuitively appealing and more interpretable than the existing works. We also emphasize and discuss the importance of feature normalization in the paper. Most importantly, our experiments on LFW BLUFR and MegaFace show that our additive margin softmax loss consistently performs better than the current state-of-the-art methods using the same network architecture and training dataset. Our code has also been made available at //github.com/happynear/AMSoftmax
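For reference, the additive-margin logit manipulation is compact enough to state as code. The PyTorch sketch below follows the standard AM-Softmax recipe (cosine similarities of L2-normalized features and class weights, subtract the margin m from the target class only, scale by s), with s=30 and m=0.35 as commonly used illustrative values. In use, the module replaces the final fully connected layer plus cross-entropy, taking the backbone's embedding and the label as input.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AMSoftmaxLoss(nn.Module):
    """Additive margin Softmax: apply the margin m to the target-class cosine
    similarity and scale all logits by s before the standard cross-entropy."""
    def __init__(self, feat_dim, n_classes, s=30.0, m=0.35):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(n_classes, feat_dim))
        self.s, self.m = s, m

    def forward(self, feats, labels):
        # cosine similarity between normalized features and class weights
        cos = F.linear(F.normalize(feats), F.normalize(self.weight))
        # subtract the additive margin from the target-class cosine only
        onehot = F.one_hot(labels, cos.size(1)).to(cos.dtype)
        logits = self.s * (cos - self.m * onehot)
        return F.cross_entropy(logits, labels)
```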
