
In this paper we give the first efficient algorithms for the $k$-center problem on dynamic graphs undergoing edge updates. In this problem, the goal is to partition the input into $k$ sets by choosing $k$ centers such that the maximum distance from any data point to the closest center is minimized. It is known that it is NP-hard to obtain a better-than-$2$ approximation for this problem. While in many applications the input may naturally be modeled as a graph, all prior work on the $k$-center problem in dynamic settings is on metric spaces. In this paper, we give a deterministic decremental $(2+\epsilon)$-approximation algorithm and a randomized incremental $(4+\epsilon)$-approximation algorithm, both with amortized update time $kn^{o(1)}$ for weighted graphs. Moreover, we show a reduction that leads to a fully dynamic $(2+\epsilon)$-approximation algorithm for the $k$-center problem, with worst-case update time that is within a factor $k$ of the state-of-the-art upper bound for maintaining $(1+\epsilon)$-approximate single-source distances in graphs. Matching this bound is a natural goalpost because the approximate distances of each vertex to its center can be used to maintain a $(2+\epsilon)$-approximation of the graph diameter, and the fastest known algorithms for such a diameter approximation also rely on maintaining approximate single-source distances.
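For context, the classical static metric version of $k$-center admits a simple greedy $2$-approximation, Gonzalez's farthest-first traversal. Below is a minimal sketch on Euclidean points; the function name and the Euclidean setup are illustrative and are not taken from the paper, which concerns dynamic graph metrics.

```python
import math

def k_center_greedy(points, k):
    """Gonzalez farthest-first traversal: a classical static 2-approximation
    for metric k-center. Points are tuples; distances are Euclidean."""
    centers = [points[0]]  # arbitrary first center
    dist = [math.dist(p, centers[0]) for p in points]  # distance to nearest center
    while len(centers) < k:
        i = max(range(len(points)), key=dist.__getitem__)  # farthest point
        centers.append(points[i])
        dist = [min(d, math.dist(p, points[i])) for d, p in zip(dist, points)]
    return centers, max(dist)  # chosen centers and the achieved radius
```

On four points forming two distant pairs, the traversal picks one center per pair and achieves radius equal to the within-pair distance.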

Related content

In this paper, we address the unsupervised speech enhancement problem based on recurrent variational autoencoders (RVAE). This approach offers promising generalization performance over its supervised counterpart. Nevertheless, the iterative variational expectation-maximization (VEM) process involved at test time, which relies on a variational inference method, results in high computational complexity. To tackle this issue, we present efficient sampling techniques based on Langevin dynamics and the Metropolis-Hastings algorithm, adapted to EM-based speech enhancement with RVAE. By directly sampling from the intractable posterior distribution within the EM process, we circumvent the intricacies of variational inference. We conduct a series of experiments, comparing the proposed methods with VEM and a state-of-the-art supervised speech enhancement approach based on diffusion models. The results reveal that our sampling-based algorithms significantly outperform VEM, not only in terms of computational efficiency but also in overall performance. Furthermore, when compared to the supervised baseline, our methods showcase robust generalization performance in mismatched test conditions.
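As a toy illustration of the Langevin-dynamics component (not the paper's RVAE posterior), the unadjusted Langevin algorithm draws approximate samples from a density $p$ using only $\nabla \log p$; the sketch below targets a standard normal, for which $\nabla \log p(x) = -x$. All names and the step size are illustrative assumptions.

```python
import random, math

def ula_samples(grad_log_p, x0, step, n):
    """Unadjusted Langevin algorithm: x <- x + step * grad_log_p(x)
    + sqrt(2 * step) * gaussian_noise. A minimal sampling sketch;
    a Metropolis-Hastings correction step would remove its bias."""
    x, out = x0, []
    for _ in range(n):
        x = x + step * grad_log_p(x) + math.sqrt(2 * step) * random.gauss(0, 1)
        out.append(x)
    return out

# Sample from a standard normal: grad log p(x) = -x.
random.seed(0)
xs = ula_samples(lambda x: -x, 0.0, 0.05, 20000)
```

The empirical mean and variance of `xs` should be close to $0$ and $1$ (the step size introduces a small, controllable bias).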

An extension of Cencov's categorical description of classical inference theory to the domain of quantum systems is presented. It provides a novel categorical foundation for the theory of quantum information that embraces both classical and quantum information theory in a natural way, while also making it possible to formalise the notion of a quantum environment. A first application of these ideas is provided by extending the notion of statistical manifold to incorporate categories, and by investigating a possible uniparametric Cramer-Rao inequality in this setting.

We study the behavior of a label propagation algorithm (LPA) on the Erd\H{o}s-R\'enyi random graph $\mathcal{G}(n,p)$. Initially, given a network, each vertex starts with a random label in the interval $[0,1]$. Then, in each round of LPA, every vertex switches its label to the majority label in its neighborhood (including its own label). At the first round, ties are broken towards smaller labels, while at each of the next rounds, ties are broken uniformly at random. The algorithm terminates once all labels stay the same in two consecutive iterations. LPA is successfully used in practice for detecting communities in networks (corresponding to vertex sets with the same label after termination of the algorithm). Perhaps surprisingly, LPA's performance on dense random graphs is hard to analyze, and so far convergence to consensus was known only when $np\ge n^{3/4+\varepsilon}$. By a very careful multi-stage exposure of the edges, we break this barrier and show that, when $np \ge n^{5/8+\varepsilon}$, a.a.s. the algorithm terminates with a single label. Moreover, we show that, if $np\gg n^{2/3}$, a.a.s. this label is the smallest one, whereas if $n^{5/8+\varepsilon}\le np\ll n^{2/3}$, the surviving label is a.a.s. not the smallest one.
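The update rule described above can be sketched directly as a synchronous loop over an adjacency-list graph. The function name and graph representation are illustrative; the tie-breaking and termination rules follow the description in the text.

```python
import random
from collections import Counter

def lpa(adj, seed=0):
    """Synchronous label propagation: each vertex adopts the majority label
    among itself and its neighbors. Ties are broken toward the smaller label
    in round 1 and uniformly at random afterwards; the loop stops when two
    consecutive rounds produce identical labels."""
    rng = random.Random(seed)
    labels = {v: rng.random() for v in adj}  # initial labels in [0,1]
    rnd = 0
    while True:
        rnd += 1
        new = {}
        for v in adj:
            counts = Counter([labels[v]] + [labels[u] for u in adj[v]])
            top = max(counts.values())
            cands = [l for l, c in counts.items() if c == top]
            new[v] = min(cands) if rnd == 1 else rng.choice(cands)
        if new == labels:
            return labels
        labels = new
```

On a complete graph every vertex sees all labels once, so the round-1 tie-break sends all vertices to the smallest label and the algorithm terminates in consensus.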

We develop the contour integral method for numerically solving the Feynman-Kac equation with two internal states [P. B. Xu and W. H. Deng, Math. Model. Nat. Phenom., 13 (2018), 10], which describes the functional distribution of a particle's internal states. Striking benefits are obtained, including spectral accuracy, low computational complexity, and small memory requirements. We perform error estimates and stability analyses, which are confirmed by numerical experiments.

The problem of optimally recovering high-order mixed derivatives of bivariate functions with finite smoothness is studied. On the basis of the truncation method, an algorithm for numerical differentiation is constructed, which is order-optimal both in the sense of accuracy and in terms of the amount of Galerkin information involved.

In this paper, we propose an energy stable network (EStable-Net) for solving gradient flow equations. The solution update scheme in our neural network EStable-Net is inspired by a proposed auxiliary-variable-based equivalent form of the gradient flow equation. EStable-Net enforces the decrease of a discrete energy along the network, which is consistent with the corresponding property of the evolution process of the gradient flow equation. The architecture of EStable-Net consists of a few energy decay blocks, and the output of each block can be interpreted as an intermediate state of the evolution process of the gradient flow equation. This design provides a stable, efficient and interpretable network structure. Numerical experimental results demonstrate that our network is able to generate highly accurate and stable predictions.

This paper presents a reduced version of the classical projection method for the solution of $d$-dimensional quasiperiodic problems, particularly Schr\"{o}dinger eigenvalue problems. Using the properties of the Schr\"{o}dinger operator in higher-dimensional space via a projection matrix of size $d\times n$, we rigorously prove that the generalized Fourier coefficients of the eigenfunctions decay exponentially along a fixed direction associated with the projection matrix. An efficient reduction strategy for the basis space is then proposed to reduce the degrees of freedom from $O(N^{n})$ to $O(N^{n-d}D^d)$, where $N$ is the number of Fourier grids in one dimension and the truncation coefficient $D$ is much less than $N$. Correspondingly, the computational complexity of the proposed algorithm for solving the first $k$ eigenpairs using the Krylov subspace method decreases from $O(kN^{2n})$ to $O(kN^{2(n-d)}D^{2d})$. Rigorous error estimates of the proposed reduced projection method are provided, indicating that a small $D$ is sufficient to achieve the same level of accuracy as the classical projection method. We present numerical examples of quasiperiodic Schr\"{o}dinger eigenvalue problems in one and two dimensions to demonstrate the accuracy and efficiency of our proposed method.

In this paper we present an abstract nonsmooth optimization problem for which we recall existence and uniqueness results. We show a numerical scheme to approximate its solution. The theory is later applied to a sample static contact problem describing an elastic body in frictional contact with a foundation. This problem leads to a hemivariational inequality which we solve numerically. Finally, we compare three computational methods of solving contact mechanical problems: direct optimization method, augmented Lagrangian method and primal-dual active set strategy.

In this paper, we study the problem of maximizing $k$-submodular functions subject to a knapsack constraint. For monotone objective functions, we present a $\frac{1}{2}(1-e^{-2})\approx 0.432$ greedy approximation algorithm. For the non-monotone case, we are the first to consider the knapsack-constrained problem and provide a greedy-type combinatorial algorithm with approximation ratio $\frac{1}{3}(1-e^{-3})\approx 0.317$.

Hashing has been widely used in approximate nearest neighbor search for large-scale database retrieval, owing to its computation and storage efficiency. Deep hashing, which devises convolutional neural network architectures to exploit and extract the semantic information or features of images, has received increasing attention recently. In this survey, several deep supervised hashing methods for image retrieval are evaluated, and I identify three main directions for deep supervised hashing methods. Several comments are made at the end. Moreover, to break through the bottleneck of existing hashing methods, I propose a Shadow Recurrent Hashing (SRH) method as an attempt. Specifically, I devise a CNN architecture to extract the semantic features of images and design a loss function that encourages similar images to be projected close to each other. To this end, I propose a concept: the shadow of the CNN output. During the optimization process, the CNN output and its shadow guide each other so as to approach the optimal solution as closely as possible. Several experiments on the CIFAR-10 dataset show the satisfying performance of SRH.
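As a generic illustration of the "encourage similar images to project close" objective common in deep supervised hashing (the function name and exact form below are hypothetical, not SRH's actual loss):

```python
import numpy as np

def pairwise_hash_loss(codes, sim):
    """Pairwise similarity loss sketch for deep hashing: push the codes of
    similar pairs (sim[i,j] == 1) toward inner product 1 and penalize
    positive inner products for dissimilar pairs (sim[i,j] == 0).
    `codes` is an (n, bits) array of real-valued network outputs."""
    n = codes.shape[0]
    inner = codes @ codes.T / codes.shape[1]  # bit-normalized inner products
    loss = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            if sim[i, j]:
                loss += (1 - inner[i, j]) ** 2       # similar: pull together
            else:
                loss += max(0.0, inner[i, j]) ** 2   # dissimilar: push apart
    return loss
```

On codes that are already identical for similar pairs and opposite for dissimilar pairs, the loss is zero.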
