
A $k$-submodular function is a pairwise monotone function that, given $k$ disjoint subsets, outputs a value that is submodular in every orthant. In this paper, we provide a new framework for $k$-submodular maximization problems by relaxing the optimization to the continuous space with the multilinear extension of $k$-submodular functions and rounding the fractional solution to recover the discrete solution. The multilinear extension introduces new techniques to analyze and optimize $k$-submodular functions. When the variables are unconstrained, we propose simple algorithms that achieve an almost $\frac{1}{2}$-approximation, which is asymptotically optimal. For finite $k$, the factors can be improved to almost as good as any combinatorial algorithm based on the meta-framework of Iwata, Tanigawa, and Yoshida ($\frac{k}{2k-1}$-approximation for the monotone case and $\frac{k^2+1}{2k^2+1}$-approximation for the non-monotone case). For monotone functions, almost $\frac{1}{2}$-approximations are obtained under total size and knapsack constraints.
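
A minimal sketch of how the multilinear extension mentioned above can be evaluated in practice: for a fractional point $x \in [0,1]^{n \times k}$ with row sums at most 1, $F(x)$ is the expected value of the $k$-submodular function when each element is assigned to part $i$ with probability $x_{e,i}$ (and left unassigned otherwise), so it can be estimated by Monte Carlo sampling. The helper names and the toy coverage function below are illustrative assumptions, not the paper's algorithm.

```python
# Monte Carlo estimation of the multilinear extension of a k-submodular function.
# All names (estimate_multilinear, coverage_factory) are illustrative.
import numpy as np

def estimate_multilinear(f, x, num_samples=2000, rng=None):
    """Estimate F(x) = E[f(S_1,...,S_k)], where element e is assigned to part i
    with probability x[e, i] and to no part otherwise; x is (n, k) with row sums <= 1."""
    rng = np.random.default_rng(rng)
    n, k = x.shape
    total = 0.0
    for _ in range(num_samples):
        parts = [set() for _ in range(k)]
        for e in range(n):
            probs = np.append(x[e], 1.0 - x[e].sum())  # index k means "unassigned"
            i = rng.choice(k + 1, p=probs)
            if i < k:
                parts[i].add(e)
        total += f(parts)
    return total / num_samples

def coverage_factory(topics):
    """Toy monotone k-submodular function: element e placed in part i covers
    the topic set topics[i][e]; the value is the number of topics covered."""
    def f(parts):
        covered = set()
        for i, S in enumerate(parts):
            for e in S:
                covered |= topics[i][e]
        return len(covered)
    return f

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, k, m = 6, 3, 10                       # elements, parts, topics
    topics = [[set(rng.choice(m, size=3, replace=False)) for _ in range(n)]
              for _ in range(k)]
    f = coverage_factory(topics)
    x = rng.dirichlet(np.ones(k + 1), size=n)[:, :k]   # valid fractional point
    print("F(x) ~", estimate_multilinear(f, x, rng=1))
```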

Related content

App extension points introduced in iOS 8 for interaction between apps and between apps and the system:
  • Today (iOS and OS X): widgets for the Today view of Notification Center
  • Share (iOS and OS X): post content to web services or share content with others
  • Actions (iOS and OS X): app extensions to view or manipulate inside another app
  • Photo Editing (iOS): edit a photo or video in Apple's Photos app with extensions from third-party apps
  • Finder Sync (OS X): remote file storage in the Finder with support for Finder content annotation
  • Storage Provider (iOS): an interface between files inside an app and other apps on a user's device
  • Custom Keyboard (iOS): system-wide alternative keyboards


This work proposes an adaptive structure-preserving model order reduction method for finite-dimensional parametrized Hamiltonian systems modeling non-dissipative phenomena. To overcome the slowly decaying Kolmogorov width typical of transport problems, the full model is approximated on local reduced spaces that are adapted in time using dynamical low-rank approximation techniques. The reduced dynamics is prescribed by approximating the symplectic projection of the Hamiltonian vector field in the tangent space to the local reduced space. This ensures that the canonical symplectic structure of the Hamiltonian dynamics is preserved during the reduction. In addition, accurate approximations with low-rank reduced solutions are obtained by allowing the dimension of the reduced space to change during the time evolution. Whenever the quality of the reduced solution, assessed via an error indicator, is not satisfactory, the reduced basis is augmented in the parameter direction that is worst approximated by the current basis. Extensive numerical tests involving wave interactions, nonlinear transport problems, and the Vlasov equation demonstrate the superior stability properties and considerable runtime speedups of the proposed method as compared to global and traditional reduced basis approaches.
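To make the "dynamical low-rank approximation" ingredient above concrete, here is a minimal sketch of the standard first-order projector-splitting integrator (Lubich and Oseledets) for a generic matrix ODE $dA/dt = F(A)$ with $A \approx USV^\top$. It only illustrates how a low-rank factorization can be evolved in time; the paper's method additionally preserves the canonical symplectic structure and adapts the rank, which this sketch does not do. The linear right-hand side and all names are illustrative assumptions.

```python
# Projector-splitting dynamical low-rank step for dA/dt = F(A), A ~ U S V^T.
import numpy as np
from scipy.linalg import expm

def projector_splitting_step(U, S, Vt, F, h):
    """One Lie-Trotter K-S-L step of rank-r dynamical low-rank approximation."""
    V = Vt.T
    # K-step: evolve K = U S with the co-range fixed by V (explicit Euler substep)
    K = U @ S + h * F(U @ S @ V.T) @ V
    U1, S_hat = np.linalg.qr(K)
    # S-step: evolve the core backwards (note the minus sign)
    S_tilde = S_hat - h * U1.T @ F(U1 @ S_hat @ V.T) @ V
    # L-step: evolve L = V S^T with the range fixed by U1
    L = V @ S_tilde.T + h * F(U1 @ (V @ S_tilde.T).T).T @ U1
    V1, S1t = np.linalg.qr(L)
    return U1, S1t.T, V1.T

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, r = 40, 4
    B = 0.1 * rng.standard_normal((n, n))
    F = lambda A: B @ A                       # simple linear right-hand side
    A0 = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # rank-r start
    U, S, Vt = np.linalg.svd(A0, full_matrices=False)
    U, S, Vt = U[:, :r], np.diag(S[:r]), Vt[:r]
    for _ in range(100):                      # integrate up to t = 1
        U, S, Vt = projector_splitting_step(U, S, Vt, F, h=0.01)
    exact = expm(B) @ A0
    print("relative error:", np.linalg.norm(exact - U @ S @ Vt) / np.linalg.norm(exact))
```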

We consider the problem of sketching a stochastic valuation function, defined as the expectation of a valuation function of independent random item values. We show that for monotone subadditive or submodular valuation functions that satisfy a weak homogeneity condition, or certain other conditions, there exist discretized distributions of item values with $O(k\log(k))$ support sizes that yield a sketch valuation function which is a constant-factor approximation, for any value query for a set of items of cardinality less than or equal to $k$. These discretized distributions can be efficiently computed by an algorithm for each item's value distribution separately. The obtained sketch results are of interest for various optimization problems such as best set selection and welfare maximization problems.
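A minimal sketch (not the paper's construction) of the discretization idea described above: each item's value distribution is replaced by a small number of equally likely atoms obtained by quantile bucketing, and the sketch valuation is then evaluated on the discretized distributions. The support size `m` (playing the role of $O(k\log k)$), the toy budget-additive valuation, and all helper names are illustrative assumptions.

```python
# Quantile discretization of item-value distributions and evaluation of the
# induced sketch of a stochastic valuation E[v(values of items in S)].
import numpy as np

def discretize(samples, m):
    """Replace an empirical value distribution by m quantile buckets,
    each represented by its mean and its empirical probability."""
    s = np.sort(samples)
    buckets = np.array_split(s, m)
    atoms = np.array([b.mean() for b in buckets])
    probs = np.array([len(b) for b in buckets], dtype=float) / len(s)
    return atoms, probs

def sketch_value(valuation, item_dists, S, num_samples=5000, rng=0):
    """Monte Carlo estimate of the sketch valuation for the item set S,
    drawing each item's value independently from its discretized distribution."""
    rng = np.random.default_rng(rng)
    total = 0.0
    for _ in range(num_samples):
        vals = {i: rng.choice(item_dists[i][0], p=item_dists[i][1]) for i in S}
        total += valuation(vals)
    return total / num_samples

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    k, m = 4, 8                                  # query-set bound and support size
    # toy monotone submodular valuation: budget-additive (capped sum)
    valuation = lambda vals: min(sum(vals.values()), 3.0)
    item_dists = {i: discretize(rng.exponential(1.0, size=10000), m)
                  for i in range(10)}
    print(sketch_value(valuation, item_dists, S=[0, 2, 5, 7]))  # |S| = k
```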

This paper studies the problem of computing a linear approximation of the quadratic Wasserstein distance $W_2$. In particular, we compute an approximation of the negative homogeneous weighted Sobolev norm whose connection to the Wasserstein distance follows from a classic linearization of a general Monge-Amp\`ere equation. Our contribution is threefold. First, we provide expository material on this classic linearization of the Wasserstein distance, including a quantitative error estimate. Second, we reduce the computational problem to solving an elliptic boundary value problem involving the Witten Laplacian, which is a Schr\"odinger operator of the form $H = -\Delta + V$, and describe an associated embedding. Third, for the case of probability distributions on the unit square $[0,1]^2$ represented by $n \times n$ arrays, we present a fast code demonstrating our approach. Several numerical examples are presented.
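As a simplified illustration of the linearization (not the paper's method): for two nearby densities on the square with a constant reference density and periodic boundary conditions, $W_2(\mu,\nu) \approx \|\mu - \nu\|_{\dot H^{-1}}$, and the negative homogeneous Sobolev norm diagonalizes in Fourier space, so it can be computed with the FFT. The paper instead works with a weighted norm via the Witten Laplacian and appropriate boundary conditions; everything below is an illustrative simplification.

```python
# Negative homogeneous Sobolev norm on the periodic unit square via FFT,
# used here as a rough proxy for the linearized W_2 distance between nearby densities.
import numpy as np

def h_minus_one_norm(f):
    """|| f ||_{\\dot H^{-1}} for a zero-mean function f sampled on an n x n periodic grid."""
    n = f.shape[0]
    k = np.fft.fftfreq(n, d=1.0 / n)             # integer frequencies
    kx, ky = np.meshgrid(k, k, indexing="ij")
    ksq = (2 * np.pi) ** 2 * (kx ** 2 + ky ** 2)  # |2 pi xi|^2
    chat = np.fft.fft2(f) / n ** 2                # approximate Fourier coefficients
    ksq[0, 0] = np.inf                            # drop the zero (mean) mode
    return float(np.sqrt(np.sum(np.abs(chat) ** 2 / ksq)))

if __name__ == "__main__":
    n = 128
    x = (np.arange(n) + 0.5) / n
    X, Y = np.meshgrid(x, x, indexing="ij")
    mu = 1.0 + 0.2 * np.sin(2 * np.pi * X)        # two nearby densities
    nu = 1.0 + 0.2 * np.sin(2 * np.pi * (X - 0.01))
    print(h_minus_one_norm(mu - nu))              # small: mu and nu are close
```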

Numerical Generalized Randomized Hamiltonian Monte Carlo is introduced as a robust, easy-to-use, and computationally fast alternative to conventional Markov chain Monte Carlo methods for continuous target distributions. First, a wide class of piecewise deterministic Markov processes is defined, generalizing Randomized HMC (Bou-Rabee and Sanz-Serna, 2017) by allowing for state-dependent event rates. Under very mild restrictions, such processes have the desired target distribution as an invariant distribution. Second, the numerical implementation of such processes, based on adaptive numerical integration of second-order ordinary differential equations (ODEs), is considered. The numerical implementation yields an approximate, yet highly robust algorithm that, unlike conventional Hamiltonian Monte Carlo, enables the exploitation of the complete Hamiltonian trajectories (hence the title). The proposed algorithm may yield large speedups and improvements in stability relative to relevant benchmarks, while incurring numerical biases that are negligible relative to the overall Monte Carlo errors. Granted access to a high-quality ODE code, the proposed methodology is both easy to implement and use, even for highly challenging and high-dimensional target distributions.
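A minimal sketch of the baseline being generalized: Randomized HMC follows Hamiltonian trajectories for exponentially distributed durations (a constant event rate) and fully refreshes the momentum at each event. The sketch below uses a fixed leapfrog step and no Metropolis correction, so it carries a small discretization bias; the paper's method instead uses state-dependent rates and adaptive ODE integration. Names and parameters are illustrative.

```python
# Randomized HMC with constant event rate and fixed leapfrog step (illustrative only).
import numpy as np

def randomized_hmc(grad_neg_logp, q0, rate=1.0, step=0.05, n_events=2000, rng=0):
    rng = np.random.default_rng(rng)
    q = np.asarray(q0, dtype=float)
    p = rng.standard_normal(q.shape)
    samples = []
    for _ in range(n_events):
        duration = rng.exponential(1.0 / rate)        # time until next momentum refresh
        for _ in range(max(1, int(duration / step))):
            # leapfrog integration of the Hamiltonian ODE
            p = p - 0.5 * step * grad_neg_logp(q)
            q = q + step * p
            p = p - 0.5 * step * grad_neg_logp(q)
            samples.append(q.copy())
        p = rng.standard_normal(q.shape)              # full momentum refresh at the event
    return np.array(samples)

if __name__ == "__main__":
    grad = lambda q: q                                # -grad log of a standard normal
    s = randomized_hmc(grad, q0=np.zeros(2))
    print("mean", s.mean(axis=0), "var", s.var(axis=0))   # approximately 0 and 1
```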

Maximizing a monotone submodular function is a fundamental task in machine learning. In this paper, we study the deletion-robust version of the problem under the classic matroid constraint. Here the goal is to extract a small-size summary of the dataset that contains a high-value independent set even after an adversary has deleted some elements. We present constant-factor approximation algorithms whose space complexity depends on the rank $k$ of the matroid and the number $d$ of deleted elements. In the centralized setting we present a $(3.582+O(\varepsilon))$-approximation algorithm with summary size $O(k + \frac{d \log k}{\varepsilon^2})$. In the streaming setting we provide a $(5.582+O(\varepsilon))$-approximation algorithm with summary size and memory $O(k + \frac{d \log k}{\varepsilon^2})$. We complement our theoretical results with an in-depth experimental analysis showing the effectiveness of our algorithms on real-world datasets.
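For context, here is a minimal sketch of the classic greedy for monotone submodular maximization under a matroid constraint (a 1/2-approximation), shown only as the basic non-robust building block. It is not the paper's deletion-robust algorithm, which additionally maintains a larger summary to survive adversarial deletions; the toy coverage instance is an illustrative assumption.

```python
# Classic matroid greedy for monotone submodular maximization (non-robust baseline).
def greedy_matroid(elements, f, is_independent):
    """f maps a set to its value; is_independent tests matroid independence."""
    S = set()
    remaining = set(elements)
    while remaining:
        best, best_gain = None, 0.0
        for e in remaining:
            if is_independent(S | {e}):
                gain = f(S | {e}) - f(S)
                if gain > best_gain:
                    best, best_gain = e, gain
        if best is None:                      # no feasible element with positive gain
            break
        S.add(best)
        remaining.discard(best)
        remaining = {e for e in remaining if is_independent(S | {e})}
    return S

if __name__ == "__main__":
    # toy coverage function under a uniform (cardinality-2) matroid
    universe = {0: {1, 2}, 1: {2, 3}, 2: {4}, 3: {1, 2, 3, 4}}
    f = lambda S: len(set().union(*(universe[e] for e in S))) if S else 0
    is_indep = lambda S: len(S) <= 2
    print(greedy_matroid(universe.keys(), f, is_indep))   # {3}: already covers everything
```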

In this paper, we introduce the Hessian-Schatten total variation (HTV) -- a novel seminorm that quantifies the total "rugosity" of multivariate functions. Our motivation for defining HTV is to assess the complexity of supervised-learning schemes. We start by specifying the adequate matrix-valued Banach spaces that are equipped with suitable classes of mixed norms. We then show that the HTV is invariant to rotations, scalings, and translations. Additionally, its minimum value is achieved for linear mappings, which supports the common intuition that linear regression is the least complex learning model. Next, we present closed-form expressions of the HTV for two general classes of functions. The first one is the class of Sobolev functions with a certain degree of regularity, for which we show that the HTV coincides with the Hessian-Schatten seminorm that is sometimes used as a regularizer for image reconstruction. The second one is the class of continuous and piecewise-linear (CPWL) functions. In this case, we show that the HTV reflects the total change in slopes between linear regions that have a common facet. Hence, it can be viewed as a convex relaxation (l1-type) of the number of linear regions (l0-type) of CPWL mappings. Finally, we illustrate the use of our proposed seminorm.
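A minimal sketch of how the HTV of a function sampled on a grid could be approximated numerically: form a finite-difference Hessian at each interior point, take its Schatten 1-norm (sum of absolute eigenvalues, since the Hessian is symmetric), and sum over the grid with the cell area as weight. This is an illustrative discretization, not the paper's definition or closed-form expressions; it does reproduce the expected behavior that linear maps have (essentially) zero HTV.

```python
# Discrete Hessian-Schatten total variation of a 2-D array (illustrative discretization).
import numpy as np

def htv(f, h=1.0):
    """Approximate HTV of a 2-D array f with grid spacing h."""
    fxx = (f[2:, 1:-1] - 2 * f[1:-1, 1:-1] + f[:-2, 1:-1]) / h ** 2
    fyy = (f[1:-1, 2:] - 2 * f[1:-1, 1:-1] + f[1:-1, :-2]) / h ** 2
    fxy = (f[2:, 2:] - f[2:, :-2] - f[:-2, 2:] + f[:-2, :-2]) / (4 * h ** 2)
    total = 0.0
    for hxx, hxy, hyy in zip(fxx.ravel(), fxy.ravel(), fyy.ravel()):
        H = np.array([[hxx, hxy], [hxy, hyy]])
        total += np.abs(np.linalg.eigvalsh(H)).sum()    # Schatten-1 (nuclear) norm
    return total * h ** 2                               # cell-area weighting

if __name__ == "__main__":
    n = 64
    x = np.linspace(0, 1, n)
    X, Y = np.meshgrid(x, x, indexing="ij")
    print(htv(2 * X + 3 * Y, h=1 / (n - 1)))     # linear map: HTV ~ 0
    print(htv(X ** 2 + Y ** 2, h=1 / (n - 1)))   # Hessian = 2I everywhere: HTV ~ 4
```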

Variational Bayesian inference is an important machine-learning tool that finds application from statistics to robotics. The goal is to find an approximate probability density function (PDF) from a chosen family that is in some sense 'closest' to the full Bayesian posterior. Closeness is typically defined through the selection of an appropriate loss functional such as the Kullback-Leibler (KL) divergence. In this paper, we explore a new formulation of variational inference by exploiting the fact that (most) PDFs are members of a Bayesian Hilbert space under careful definitions of vector addition, scalar multiplication and an inner product. We show that variational inference based on KL divergence then amounts to an iterative projection, in the Euclidean sense, of the Bayesian posterior onto a subspace corresponding to the selected approximation family. We work through the details of this general framework for the specific case of the Gaussian approximation family and show the equivalence to another Gaussian variational inference approach. We furthermore discuss the implications for systems that exhibit sparsity, which is handled naturally in Bayesian space, and give an example of a high-dimensional robotic state estimation problem that can be handled as a result. Finally, we provide some preliminary examples of how the approach could be applied to non-Gaussian inference.
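A minimal sketch of the underlying task in its simplest form: fitting a one-dimensional Gaussian $q = \mathcal{N}(m, s^2)$ to an unnormalized target by minimizing $\mathrm{KL}(q \| p)$ with reparameterized Monte Carlo gradients. This is standard Gaussian variational inference, shown only to make the "projection onto an approximation family" concrete; it is not the Bayesian-Hilbert-space iterative projection developed in the paper, and all names are illustrative.

```python
# Gaussian variational inference in 1-D via reparameterized gradients of KL(q || p).
import numpy as np

def gaussian_vi(log_p, steps=3000, lr=0.01, n_mc=64, rng=0):
    rng = np.random.default_rng(rng)
    m, log_s = 0.0, 0.0                       # variational mean and log std-dev
    for _ in range(steps):
        s = np.exp(log_s)
        eps = rng.standard_normal(n_mc)
        x = m + s * eps                       # reparameterization trick
        h = 1e-4                              # finite-difference gradient of log p
        dlogp = (log_p(x + h) - log_p(x - h)) / (2 * h)
        # d/dm KL = -E[dlogp]; d/dlog_s KL = -E[dlogp * s * eps] - 1
        m -= lr * (-dlogp.mean())
        log_s -= lr * (-(dlogp * s * eps).mean() - 1.0)
    return m, np.exp(log_s)

if __name__ == "__main__":
    log_p = lambda x: -0.5 * (x - 2.0) ** 2 / 1.5   # unnormalized N(2, 1.5) target
    print(gaussian_vi(log_p))                        # recovers roughly (2, sqrt(1.5))
```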

Zero-free-based algorithms are a major technique for deterministic approximate counting. In Barvinok's original framework [Bar17], a quasi-polynomial-time algorithm for estimating zero-free partition functions was given by calculating truncated Taylor expansions. Patel and Regts [PR17] later gave a refinement of Barvinok's framework, yielding a polynomial-time algorithm for a class of zero-free graph polynomials that can be expressed as counting induced subgraphs in bounded-degree graphs. In this paper, we give a polynomial-time algorithm for estimating classical and quantum partition functions specified by local Hamiltonians with bounded maximum degree, assuming a zero-free property for the temperature. Consequently, when the inverse temperature is sufficiently close to zero (within a constant gap), we obtain a polynomial-time approximation algorithm for all such partition functions. Our result is based on a new abstract framework that extends and generalizes the approach of Patel and Regts.
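A minimal sketch of the truncated-Taylor idea behind Barvinok's framework: for a partition function $Z(x) = \sum_j a_j x^j$ with $a_0 = 1$ that is zero-free in a disk containing $x$, the Taylor coefficients of $\log Z$ follow from a Newton-type recursion on the $a_j$, and truncating the series gives an additive approximation to $\log Z(x)$. The example polynomial (the independence polynomial of a 4-cycle) and all names are illustrative; the paper's contribution is a more general abstract framework, not this toy computation.

```python
# Barvinok-style truncated Taylor estimation of log Z for a zero-free polynomial.
import numpy as np

def log_z_taylor(a, m):
    """First m Taylor coefficients c_1..c_m of log Z, where Z(x) = sum_j a[j] x^j, a[0] = 1.
    Recursion: c_j = a_j - (1/j) * sum_{i=1}^{j-1} i * c_i * a_{j-i}."""
    a = list(a) + [0.0] * m
    c = [0.0] * (m + 1)
    for j in range(1, m + 1):
        c[j] = a[j] - sum(i * c[i] * a[j - i] for i in range(1, j)) / j
    return c[1:]

def estimate_log_z(a, x, m):
    return sum(cj * x ** j for j, cj in enumerate(log_z_taylor(a, m), start=1))

if __name__ == "__main__":
    a = [1.0, 4.0, 2.0]          # independence polynomial of a 4-cycle: 1 + 4x + 2x^2
    x = 0.1                      # inside the zero-free disk (zeros at ~ -0.29 and -1.71)
    for m in (2, 4, 8, 16):
        print(m, estimate_log_z(a, x, m), "exact:", np.log(1 + 4 * x + 2 * x ** 2))
```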

In online learning problems, exploiting low variance plays an important role in obtaining tight performance guarantees yet is challenging because variances are often not known a priori. Recently, considerable progress has been made by Zhang et al. (2021) where they obtain a variance-adaptive regret bound for linear bandits without knowledge of the variances and a horizon-free regret bound for linear mixture Markov decision processes (MDPs). In this paper, we present novel analyses that improve their regret bounds significantly. For linear bandits, we achieve $\tilde O(d^{1.5}\sqrt{\sum_{k=1}^K \sigma_k^2} + d^2)$ where $d$ is the dimension of the features, $K$ is the time horizon, and $\sigma_k^2$ is the noise variance at time step $k$, and $\tilde O$ ignores polylogarithmic dependence, which is a factor of $d^3$ improvement. For linear mixture MDPs with the assumption of maximum cumulative reward in an episode being in $[0,1]$, we achieve a horizon-free regret bound of $\tilde O(d \sqrt{K} + d^2)$ where $d$ is the number of base models and $K$ is the number of episodes. This is a factor of $d^{3.5}$ improvement in the leading term and $d^7$ in the lower order term. Our analysis critically relies on a novel elliptical potential `count' lemma. This lemma allows a novel regret analysis in conjunction with the peeling trick, which is of independent interest.
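A minimal sketch of the variance-weighted ridge-regression estimate that variance-adaptive linear-bandit algorithms build on: each observation is down-weighted by its noise variance $\sigma_k^2$ before the usual elliptical confidence set is formed. This shows only the estimator update, not the algorithms or analyses of the paper; function names and the synthetic data are illustrative.

```python
# Variance-weighted ridge regression for a linear model with heteroscedastic noise.
import numpy as np

def weighted_ridge(X, y, variances, lam=1.0):
    """theta_hat = (sum_k x_k x_k^T / sigma_k^2 + lam I)^{-1} (sum_k x_k y_k / sigma_k^2)."""
    d = X.shape[1]
    w = 1.0 / np.asarray(variances)
    V = X.T @ (w[:, None] * X) + lam * np.eye(d)
    b = X.T @ (w * y)
    theta_hat = np.linalg.solve(V, b)
    return theta_hat, V          # V also defines the elliptical confidence set

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, K = 5, 2000
    theta = rng.standard_normal(d)
    X = rng.standard_normal((K, d))
    variances = rng.uniform(0.01, 1.0, size=K)        # known per-step noise variances
    y = X @ theta + rng.standard_normal(K) * np.sqrt(variances)
    est, V = weighted_ridge(X, y, variances)
    print("estimation error:", np.linalg.norm(est - theta))
```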

While Generative Adversarial Networks (GANs) have empirically produced impressive results on learning complex real-world distributions, recent work has shown that they suffer from lack of diversity or mode collapse. The theoretical work of Arora et al.~\cite{AroraGeLiMaZh17} suggests a dilemma about GANs' statistical properties: powerful discriminators cause overfitting, whereas weak discriminators cannot detect mode collapse. In contrast, we show in this paper that GANs can in principle learn distributions in Wasserstein distance (or KL-divergence in many cases) with polynomial sample complexity, if the discriminator class has strong distinguishing power against the particular generator class (instead of against all possible generators). For various generator classes such as mixture of Gaussians, exponential families, and invertible neural networks generators, we design corresponding discriminators (which are often neural nets of specific architectures) such that the Integral Probability Metric (IPM) induced by the discriminators can provably approximate the Wasserstein distance and/or KL-divergence. This implies that if the training is successful, then the learned distribution is close to the true distribution in Wasserstein distance or KL divergence, and thus cannot drop modes. Our preliminary experiments show that on synthetic datasets the test IPM is well correlated with KL divergence, indicating that the lack of diversity may be caused by the sub-optimality in optimization instead of statistical inefficiency.
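To make the notion of an IPM induced by a restricted discriminator class concrete, here is a toy example: with linear discriminators $f_w(x) = \langle w, x\rangle$, $\|w\| \le 1$, the IPM between two sample sets reduces to the Euclidean norm of the difference of their means. This is only an illustration of the definition; the paper designs far richer (neural-network) discriminator classes tailored to each generator class.

```python
# IPM induced by unit-norm linear discriminators: reduces to the mean discrepancy.
import numpy as np

def linear_ipm(samples_p, samples_q):
    """sup_{||w|| <= 1} E_p[<w, x>] - E_q[<w, x>] = || mean_p - mean_q ||_2."""
    return np.linalg.norm(samples_p.mean(axis=0) - samples_q.mean(axis=0))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    p = rng.normal(loc=0.0, size=(10000, 3))
    q = rng.normal(loc=0.5, size=(10000, 3))   # distribution shifted by 0.5 per coordinate
    print(linear_ipm(p, q))                    # approximately sqrt(3) * 0.5 ~ 0.87
```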
