
The single-site dynamics are a canonical class of Markov chains for sampling from high-dimensional probability distributions, e.g. those represented by graphical models. We give a simple and generic parallel algorithm that faithfully simulates single-site dynamics. When the chain asymptotically satisfies the $\ell_p$-Dobrushin condition, that is, when the Dobrushin influence matrix has $\ell_p$-induced operator norm bounded by a constant for an arbitrary $p\in[1,\infty]$, the parallel simulation of $N$ steps of single-site updates succeeds within $O\left({N}/{n}+\log n\right)$ depth of parallel computing using $\tilde{O}(m)$ processors, where $n$ is the number of sites and $m$ is the size of the graphical model. Since the Dobrushin condition is almost always satisfied asymptotically by mixing chains, this parallel simulation algorithm essentially transforms single-site dynamics with optimal $O(n\log n)$ mixing time into RNC algorithms for sampling. In particular, we obtain RNC samplers for the Ising model on general graphs in the uniqueness regime, and for satisfying solutions of CNF formulas in a local lemma regime. With non-adaptive simulated annealing, these RNC samplers can be routinely transformed into RNC algorithms for approximate counting. A key step in our parallel simulation algorithm is a so-called "universal coupling" procedure, which simultaneously couples all distributions over the same sample space. We construct a universal coupling under which, for every pair of distributions, the probability that their samples coincide is at least the Jaccard similarity of the two distributions, and we prove that this guarantee is optimal in the worst case. The universal coupling and its applications are of independent interest.
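
The universal coupling is easiest to picture via shared randomness. The paper's construction is not reproduced here; the following is a minimal Python sketch of the folklore "exponential races" coupling, in which every distribution over a common sample space is sampled from the same shared Exp(1) variates, so any two distributions produce the same sample exactly when their races have the same winner (the function name and the toy sample space are illustrative):

```python
import random

def exponential_race_sample(probs, shared):
    """Draw an exact sample from `probs` (outcome -> probability) using
    the shared Exp(1) variates in `shared` (outcome -> float):
    argmin_x shared[x] / probs[x] is distributed exactly as `probs`."""
    best_x, best_key = None, float("inf")
    for x, p in probs.items():
        if p > 0 and shared[x] / p < best_key:
            best_x, best_key = x, shared[x] / p
    return best_x

# One draw of shared randomness couples ALL distributions on the sample
# space simultaneously: two distributions output the same value whenever
# their races share a winner.
space = ["a", "b", "c"]
shared = {x: random.expovariate(1.0) for x in space}
mu = {"a": 0.5, "b": 0.5, "c": 0.0}
nu = {"a": 0.4, "b": 0.4, "c": 0.2}
print(exponential_race_sample(mu, shared), exponential_race_sample(nu, shared))
```

Whether this particular coupling attains the Jaccard guarantee is not claimed here; the point is only the interface: fix the randomness once, then any distribution over the space can be sampled against it.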

Related content

This paper devises a novel lowest-order conforming virtual element method (VEM) for planar linear elasticity with pure displacement/traction boundary conditions. The main trick is to view a generic polygon $K$ as a new polygon $\widetilde{K}$ with additional vertices consisting of interior points on the edges of $K$, so that the discrete admissible space is taken as the $V_1$-type virtual element space associated with the partition $\{\widetilde{K}\}$ instead of $\{K\}$. The method is shown to be uniformly convergent, with optimal rates in both the $H^1$ and $L^2$ norms, with respect to the Lam\'{e} constant $\lambda$. Numerical tests are presented to illustrate the good performance of the proposed VEM and to confirm the theoretical results.

The additive hazards model specifies the effect of covariates on the hazard in an additive way, in contrast to the popular Cox model, in which it is multiplicative. As a non-parametric model, it offers a very flexible way of modeling time-varying covariate effects. It is most commonly estimated by ordinary least squares. In this paper we consider the case where covariates are bounded, and derive the maximum likelihood estimator under the constraint that the hazard is non-negative for all covariate values in their domain. We describe an efficient algorithm to find the maximum likelihood estimator. The method is contrasted with the ordinary least squares approach in a simulation study and illustrated on a realistic data set.
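
For context, here is a minimal numpy sketch of the ordinary least squares baseline (Aalen's estimator of the cumulative regression functions) that the constrained maximum likelihood estimator is compared against; the constrained MLE itself is not reproduced, and variable names are illustrative:

```python
import numpy as np

def aalen_ols(event_times, status, Z):
    """Aalen's least-squares estimator for the additive hazards model:
    at each observed event time, regress the event indicator on the
    at-risk subjects' covariates to get the jump of the cumulative
    regression function B(t); the cumsum of the jumps estimates B."""
    n, p = Z.shape
    X_full = np.column_stack([np.ones(n), Z])       # intercept + covariates
    times, jumps = [], []
    for i in np.argsort(event_times):
        if not status[i]:                           # censored: no jump
            continue
        at_risk = event_times >= event_times[i]     # risk set at this time
        X = X_full[at_risk]
        dN = (np.flatnonzero(at_risk) == i).astype(float)
        dB, *_ = np.linalg.lstsq(X, dN, rcond=None) # (X'X)^{-1} X' dN
        times.append(event_times[i]); jumps.append(dB)
    return np.array(times), np.cumsum(jumps, axis=0)
```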

Machine learning algorithms perform well at identifying patterns in many different datasets due to their versatility. However, as the size of the dataset increases, the computation time for training and using these statistical models grows quickly. Quantum computing offers a new paradigm which may have the ability to overcome these computational difficulties. Here, we propose a quantum analogue of K-means clustering, implement it on simulated superconducting qubits, and compare it to a previously developed quantum support vector machine. We find the algorithm's accuracy to be comparable to that of the classical K-means algorithm for clustering and classification problems, and find that it has asymptotic complexity $O(N^{3/2}K^{1/2}\log{P})$, where $N$ is the number of data points, $K$ is the number of clusters, and $P$ is the dimension of the data points, giving a significant speedup over the classical analogue.
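
As a reference point for the claimed speedup, here is the classical analogue in plain numpy: Lloyd's K-means, whose assignment step costs $O(NKP)$ distance evaluations per iteration (a generic baseline, not the paper's implementation):

```python
import numpy as np

def kmeans(X, K, iters=50, seed=0):
    """Classical K-means (Lloyd's algorithm).  Each iteration costs
    O(N*K*P), versus the quoted quantum O(N^{3/2} K^{1/2} log P)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), K, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest center: O(N*K*P)
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        # recompute each center as the mean of its cluster
        for k in range(K):
            if (labels == k).any():
                centers[k] = X[labels == k].mean(axis=0)
    return labels, centers
```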

The program-over-monoid model of computation originates with Barrington's proof that the model captures the complexity class $\mathsf{NC^1}$. Here we make progress in understanding the subtleties of the model. First, we identify a new tameness condition on a class of monoids that entails a natural characterization of the regular languages recognizable by programs over monoids from the class. Second, we prove that the class known as $\mathbf{DA}$ satisfies tameness and hence that the regular languages recognized by programs over monoids in $\mathbf{DA}$ are precisely those recognizable in the classical sense by morphisms from $\mathbf{QDA}$. Third, we show by contrast that the well-studied class of monoids called $\mathbf{J}$ is not tame. Finally, we exhibit a program-length-based hierarchy within the class of languages recognized by programs over monoids from $\mathbf{DA}$.
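
For readers unfamiliar with the model, here is a minimal sketch of how a program over a monoid computes, in the usual Barrington sense (the instruction format and the toy example are illustrative):

```python
def run_program(program, x, identity, mul, accept):
    """Evaluate a program over a monoid: each instruction (i, m0, m1)
    reads input bit x[i] and emits monoid element m0 if the bit is 0,
    m1 otherwise; the program accepts iff the product of all emitted
    elements lies in the accepting set."""
    acc = identity
    for i, m0, m1 in program:
        acc = mul(acc, m1 if x[i] else m0)
    return acc in accept

# Toy example over the monoid ({0, 1}, OR): accepts iff some bit is 1.
program = [(i, 0, 1) for i in range(4)]
print(run_program(program, x=[0, 0, 1, 0],
                  identity=0, mul=lambda a, b: a | b, accept={1}))  # True
```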

We study the class of first-order locally-balanced Metropolis--Hastings algorithms introduced in Livingstone & Zanella (2021). To choose a specific algorithm within the class, the user must select a balancing function $g:\mathbb{R} \to \mathbb{R}$ satisfying $g(t) = tg(1/t)$, and a noise distribution for the proposal increment. Popular choices within the class are the Metropolis-adjusted Langevin algorithm and the recently introduced Barker proposal. We first establish a universal limiting optimal acceptance rate of 57% and scaling of $n^{-1/3}$, as the dimension $n$ tends to infinity, among all members of the class, under mild smoothness assumptions on $g$ and when the target distribution is of product form. In particular, we obtain an explicit expression for the asymptotic efficiency of an arbitrary algorithm in the class, as measured by expected squared jumping distance. We then consider how to optimise this expression under various constraints. We derive the optimal choice of noise distribution for the Barker proposal, the optimal choice of balancing function under a Gaussian noise distribution, and the optimal choice of first-order locally-balanced algorithm among the entire class, which turns out to depend on the specific target distribution. Numerical simulations confirm our theoretical findings and in particular show that a bi-modal choice of noise distribution in the Barker proposal gives rise to a practical algorithm that is consistently more efficient than the original Gaussian version.
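
A minimal sketch of one member of the class may help fix ideas: the Barker proposal with Gaussian noise, i.e. the balancing function $g(t) = t/(1+t)$. This is the vanilla version, not the optimised bimodal variant the paper derives, and the function names and step size are illustrative:

```python
import numpy as np

def barker_step(x, log_pi, grad_log_pi, sigma=1.0, rng=None):
    """One Barker-proposal step with N(0, sigma^2) noise: draw an
    increment, flip its sign coordinate-wise with probability
    1/(1 + exp(-z * grad log pi(x))), then Metropolis--Hastings correct."""
    rng = rng or np.random.default_rng()
    g_x = grad_log_pi(x)
    z = sigma * rng.standard_normal(x.shape)
    p = 1.0 / (1.0 + np.exp(-z * g_x))          # Barker sign probability
    b = np.where(rng.random(x.shape) < p, 1.0, -1.0)
    y = x + b * z                                # skewed proposal
    g_y = grad_log_pi(y)
    log_alpha = (log_pi(y) - log_pi(x)
                 + np.log1p(np.exp(-(y - x) * g_x)).sum()
                 - np.log1p(np.exp(-(x - y) * g_y)).sum())
    return y if np.log(rng.random()) < log_alpha else x
```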

We present a novel finite element analysis of inelastic structures containing Shape Memory Alloys (SMAs). Phenomenological constitutive models for SMAs lead to material nonlinearities that require substantial computational effort to resolve. Finite element analysis methods that rely on Gauss quadrature integration schemes must solve two sets of coupled differential equations: one at the global level and the other at the local, i.e. Gauss point, level. In contrast to the conventional return mapping algorithm, which solves these two sets of coupled differential equations separately using a nested Newton procedure, we propose a scheme that solves the local and global differential equations simultaneously. In the process we also derive closed-form expressions used to update the internal/constitutive state variables, and unify the popular closest-point and cutting-plane methods through our formulas. Numerical testing indicates that our method allows for larger thermomechanical loading steps and provides increased computational efficiency over the standard return mapping algorithm.
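
To make the contrast concrete, here is the textbook local solve that the conventional approach performs at every Gauss point, sketched for 1D rate-independent plasticity with linear isotropic hardening. This illustrates the nested structure being replaced, not the paper's SMA constitutive model; the material parameters are made up:

```python
def return_mapping_1d(eps, state, E=200e3, H=10e3, sigma_y=250.0):
    """Classical elastic-predictor / plastic-corrector return mapping.
    `state` = (plastic strain, hardening variable); stresses in MPa."""
    eps_p, alpha = state
    sigma_trial = E * (eps - eps_p)                     # elastic predictor
    f_trial = abs(sigma_trial) - (sigma_y + H * alpha)  # yield check
    if f_trial <= 0.0:
        return sigma_trial, (eps_p, alpha)              # elastic step
    dgamma = f_trial / (E + H)                          # plastic corrector
    sign = 1.0 if sigma_trial >= 0.0 else -1.0
    sigma = sigma_trial - E * dgamma * sign             # return to the surface
    return sigma, (eps_p + dgamma * sign, alpha + dgamma)
```

In the conventional scheme this local solve sits inside every global Newton iteration; the paper's proposal removes the nesting by treating both levels as a single system.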

Implicit probabilistic models are models defined naturally in terms of a sampling procedure; they often induce a likelihood function that cannot be expressed explicitly. We develop a simple method for estimating parameters in implicit models that does not require knowledge of the form of the likelihood function or any derived quantities, but can be shown to be equivalent to maximizing likelihood under some conditions. Our result holds in the non-asymptotic parametric setting, where both the capacity of the model and the number of data examples are finite. We also demonstrate encouraging experimental results.

In this work, we consider the distributed optimization of non-smooth convex functions using a network of computing units. We investigate this problem under two regularity assumptions: (1) the Lipschitz continuity of the global objective function, and (2) the Lipschitz continuity of the local individual functions. Under the local regularity assumption, we provide the first optimal first-order decentralized algorithm, called multi-step primal-dual (MSPD), together with its corresponding optimal convergence rate. A notable aspect of this result is that, for non-smooth functions, while the dominant term of the error is in $O(1/\sqrt{t})$, the structure of the communication network only impacts a second-order term in $O(1/t)$, where $t$ is time. In other words, the error due to limits in communication resources decreases at a fast rate even in the case of non-strongly-convex objective functions. Under the global regularity assumption, we provide a simple yet efficient algorithm called distributed randomized smoothing (DRS), based on a local smoothing of the objective function, and show that DRS is within a $d^{1/4}$ multiplicative factor of the optimal convergence rate, where $d$ is the underlying dimension.
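
The local smoothing device behind DRS is easy to sketch: Gaussian smoothing $f_\gamma(x) = \mathbb{E}[f(x+\gamma Z)]$, $Z \sim \mathcal{N}(0, I)$, is differentiable even when $f$ is not, and its gradient admits an unbiased Monte Carlo estimator. A minimal centralized sketch (the distributed machinery of DRS is not reproduced):

```python
import numpy as np

def smoothed_grad(f, x, gamma=0.1, n_samples=64, rng=None):
    """Monte Carlo gradient of the Gaussian smoothing of f, via the
    identity grad f_gamma(x) = E[f(x + gamma Z) Z] / gamma."""
    rng = rng or np.random.default_rng()
    Z = rng.standard_normal((n_samples, x.shape[0]))
    fx = np.array([f(x + gamma * z) for z in Z])
    return (fx[:, None] * Z).mean(axis=0) / gamma

# example: the non-smooth f(x) = ||x||_1; away from zero coordinates the
# estimate approximates the subgradient sign(x)
f = lambda v: np.abs(v).sum()
print(smoothed_grad(f, np.array([1.0, -2.0, 0.0])))
```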

Intensively studied, visual object tracking has witnessed great advances in either speed (e.g., with correlation filters) or accuracy (e.g., with deep features). Tracking algorithms that are both real-time and highly accurate, however, remain scarce. In this paper we study the problem from a new perspective and present a novel parallel tracking and verifying (PTAV) framework, taking advantage of the ubiquity of multi-threading and borrowing ideas from the success of parallel tracking and mapping in visual SLAM. The PTAV framework is composed of two components, a (base) tracker T and a verifier V, working in parallel on two separate threads. The tracker T aims to provide super-real-time tracking inference and is expected to perform well most of the time; by contrast, the verifier V validates the tracking results and corrects T when needed. The key innovation is that V does not work on every frame but only upon requests from T; in turn, T may adjust its tracking according to the feedback from V. With such collaboration, PTAV enjoys both the high efficiency provided by T and the strong discriminative power of V. Meanwhile, to adapt V to object appearance changes over time, we maintain a dynamic target template pool for adaptive verification, resulting in further performance improvements. In extensive experiments on popular benchmarks including OTB2015, TC128, UAV20L and VOT2016, PTAV achieves the best tracking accuracy among all real-time trackers, and in fact even outperforms many deep learning based algorithms. Moreover, as a general framework, PTAV is very flexible, with great potential for future improvement and generalization.
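
The two-thread coordination pattern can be sketched as below; the actual tracker and verifier models are not reproduced, and `track`, `verify`, the request cadence, and the frame source are all stand-ins:

```python
import queue
import threading

def track(frame, state):       # stand-in for the fast base tracker T
    return state

def verify(frame, state):      # stand-in for the slower verifier V
    return True, state         # (is_valid, corrected_state)

requests, feedback = queue.Queue(), queue.Queue()

def tracker_loop(frames):
    state = None
    for t, frame in enumerate(frames):
        while not feedback.empty():      # apply corrections from V
            state = feedback.get()
        state = track(frame, state)      # fast per-frame inference
        if t % 10 == 0:                  # request verification sparsely
            requests.put((frame, state))
    requests.put(None)                   # shut the verifier down

def verifier_loop():
    while (req := requests.get()) is not None:
        ok, corrected = verify(*req)     # slow but discriminative check
        if not ok:
            feedback.put(corrected)      # T will pick this up

v = threading.Thread(target=verifier_loop)
v.start()
tracker_loop(range(100))
v.join()
```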

In this paper, we study the optimal convergence rate for distributed convex optimization problems in networks. We model the communication restrictions imposed by the network as a set of affine constraints and provide optimal complexity bounds for four different setups, namely when the function $F(\mathbf{x}) \triangleq \sum_{i=1}^{m}f_i(\mathbf{x})$ is strongly convex and smooth, strongly convex only, smooth only, or merely convex. Our results show that Nesterov's accelerated gradient descent on the dual problem can be executed in a distributed manner and obtains the same optimal rates as in the centralized version of the problem (up to constant or logarithmic factors), with an additional cost related to the spectral gap of the interaction matrix. Finally, we discuss some extensions of the proposed setup, such as proximal-friendly functions, time-varying graphs, and improvement of the condition numbers.
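
The centralized building block is standard: Nesterov's accelerated gradient for a smooth, strongly convex objective, which the paper executes in a distributed manner on the dual problem. A generic sketch of that building block (not the paper's distributed construction):

```python
import numpy as np

def nesterov_agd(grad, x0, L, mu, iters=200):
    """Accelerated gradient descent for an L-smooth, mu-strongly convex
    objective, with the standard constant momentum weight."""
    x, y = x0.copy(), x0.copy()
    beta = (np.sqrt(L / mu) - 1) / (np.sqrt(L / mu) + 1)
    for _ in range(iters):
        x_next = y - grad(y) / L            # gradient step at the lookahead
        y = x_next + beta * (x_next - x)    # momentum extrapolation
        x = x_next
    return x

# example: quadratic f(x) = 0.5 x'Ax - b'x, gradient Ax - b
A = np.diag([1.0, 10.0]); b = np.array([1.0, 1.0])
print(nesterov_agd(lambda v: A @ v - b, np.zeros(2), L=10.0, mu=1.0))
# ≈ A^{-1} b = [1.0, 0.1]
```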
