
Given a collection of independent events each of which has strictly positive probability, the probability that all of them occur is also strictly positive. The Lov\'asz local lemma (LLL) asserts that this remains true if the events are not too strongly negatively correlated. The formulation of the lemma involves a graph with one vertex per event, with edges indicating potential negative dependence. The word "Local" in LLL reflects that the condition for the negative correlation can be expressed solely in terms of the neighborhood of each vertex. In contrast to this local view, Shearer developed an exact criterion for the avoidance probability to be strictly positive, but it involves summing over all independent sets of the graph. In this work we make two contributions. The first is to develop a hierarchy of increasingly powerful, increasingly non-local lemmata for bounding the avoidance probability from below, each lemma associated with a different set of walks in the graph. Already, at its second level, our hierarchy is stronger than all known local lemmata. To demonstrate its power we prove new bounds for the negative-fugacity singularity of the hard-core model on several lattices, a central problem in statistical physics. Our second contribution is to prove that Shearer's connection between the probabilistic setting and the independent set polynomial holds for \emph{arbitrary supermodular} functions, not just probability measures. This means that all LLL machinery can be employed to bound from below an arbitrary supermodular function, based only on information regarding its value at singleton sets and partial information regarding their interactions. We show that this readily implies both the quantum LLL of Ambainis, Kempe, and Sattath~[JACM 2012], and the quantum Shearer criterion of Sattath, Morampudi, Laumann, and Moessner~[PNAS 2016].
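For small dependency graphs, the signed independent-set sum at the heart of Shearer's criterion can be evaluated by brute force: $\sum_{I \text{ indep.}} (-1)^{|I|} \prod_{i \in I} p_i$, whose positivity (for the graph and its induced subgraphs) is Shearer's condition and whose value lower-bounds the avoidance probability. A minimal sketch; the function name `shearer_bound` and the edge-list input format are our own, not from the paper:

```python
from itertools import combinations

def shearer_bound(n, edges, p):
    """Signed independent-set sum: sum over independent sets I of
    (-1)^|I| * prod_{i in I} p[i].  Brute force over all vertex subsets,
    so only feasible for small graphs."""
    edge_set = {frozenset(e) for e in edges}
    total = 0.0
    for k in range(n + 1):
        for subset in combinations(range(n), k):
            # skip subsets that contain an edge (not independent)
            if any(frozenset(pair) in edge_set
                   for pair in combinations(subset, 2)):
                continue
            prod = 1.0
            for i in subset:
                prod *= p[i]
            total += (-1) ** k * prod
    return total
```

For instance, two events of probability 0.25 joined by a single dependency edge give `1 - 0.25 - 0.25 = 0.5`, a positive lower bound on the probability of avoiding both.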

Related content

We give an axiomatic foundation to $\Lambda$-quantiles, a family of generalized quantiles introduced by Frittelli et al. (2014) under the name of Lambda Value at Risk. Under mild assumptions, we show that these functionals are characterized by a property that we call "locality", meaning that any change in the distribution of the probability mass that arises entirely above or below the value of the $\Lambda$-quantile does not modify its value. We compare with a related axiomatization of the usual quantiles given by Chambers (2009), based on the stronger property of "ordinal covariance", meaning that quantiles are covariant with respect to increasing transformations. Further, we present a systematic treatment of the properties of $\Lambda$-quantiles, refining some of the results of Frittelli et al. (2014) and Burzoni et al. (2017) and showing that in the case of a nonincreasing $\Lambda$ the properties of $\Lambda$-quantiles closely resemble those of the usual quantiles.
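Using the definition common in this literature, $q^\Lambda(F) = \inf\{x : F(x) > \Lambda(x)\}$, a $\Lambda$-quantile of an empirical distribution can be sketched as a scan over the sample points. The helper name `lambda_quantile` and the discretization are our own illustrative assumptions:

```python
def lambda_quantile(samples, lam):
    """Empirical Lambda-quantile sketch: inf{x : F(x) > lam(x)}, where F
    is the (right-continuous) empirical CDF and lam is a callable level
    function with values in (0, 1).  Scans the sorted sample points."""
    xs = sorted(samples)
    n = len(xs)
    for i, x in enumerate(xs):
        F = (i + 1) / n  # empirical CDF evaluated at x
        if F > lam(x):
            return x
    return float('inf')  # infimum over the empty set
```

With a constant level function `lam(x) = c` this reduces to the usual (upper) quantile; a nonincreasing `lam` illustrates the regime in which, per the abstract, $\Lambda$-quantiles behave most like ordinary quantiles.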

The vast majority of the work on adaptive data analysis focuses on the case where the samples in the dataset are independent. Several approaches and tools have been successfully applied in this context, such as differential privacy, max-information, compression arguments, and more. The situation is far less well-understood without the independence assumption. We embark on a systematic study of the possibilities of adaptive data analysis with correlated observations. First, we show that, in some cases, differential privacy guarantees generalization even when there are dependencies within the sample, which we quantify using a notion we call Gibbs-dependence. We complement this result with a tight negative example. Second, we show that the connection between transcript-compression and adaptive data analysis can be extended to the non-iid setting.
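The differential-privacy route to generalization mentioned above rests on primitives like the Laplace mechanism for statistical queries. A standard-mechanism sketch (not the paper's construction; the function name `dp_mean` is hypothetical), assuming queries take values in $[0,1]$:

```python
import math
import random

def dp_mean(sample, query, epsilon, rng):
    """Answer a [0,1]-valued statistical query with the Laplace mechanism.

    Changing one of n records moves the empirical mean by at most 1/n,
    so Laplace noise of scale 1/(n*epsilon) gives epsilon-differential
    privacy for a single query (standard sketch)."""
    n = len(sample)
    true_answer = sum(query(x) for x in sample) / n
    b = 1.0 / (n * epsilon)
    u = rng.random() - 0.5  # uniform on [-0.5, 0.5)
    # Laplace(0, b) draw via the inverse-CDF formula
    noise = -b * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_answer + noise
```

For example, `dp_mean([0.0, 1.0] * 50, lambda x: x, 10.0, random.Random(0))` returns a noisy estimate of the true mean 0.5.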

With the advent of Network Function Virtualization (NFV), network services that traditionally run on proprietary dedicated hardware can now be realized using Virtual Network Functions (VNFs) that are hosted on general-purpose commodity hardware. This new network paradigm offers a great flexibility to Internet service providers (ISPs) for efficiently operating their networks (collecting network statistics, enforcing management policies, etc.). However, introducing NFV requires an investment to deploy VNFs at certain network nodes (called VNF-nodes), which has to account for practical constraints such as the deployment budget and the VNF-node capacity. To that end, it is important to design a joint VNF-nodes placement and capacity allocation algorithm that can maximize the total amount of network flows that are fully processed by the VNF-nodes while respecting such practical constraints. In contrast to most prior work that often neglects either the budget constraint or the capacity constraint, we explicitly consider both of them. We prove that accounting for these constraints introduces several new challenges. Specifically, we prove that the studied problem is not only NP-hard but also non-submodular. To address these challenges, we introduce a novel relaxation method such that the objective function of the relaxed placement subproblem becomes submodular. Leveraging this useful submodular property, we propose two algorithms that achieve an approximation ratio of $\frac{1}{2}(1-1/e)$ and $\frac{1}{3}(1-1/e)$ for the original non-relaxed problem, respectively. Finally, we corroborate the effectiveness of the proposed algorithms through extensive evaluations using trace-driven simulations.
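The kind of submodular maximization that the abstract's approximation guarantees build on can be illustrated with the classic greedy rule: repeatedly add the VNF-node with the best marginal flow coverage per unit cost until the budget runs out. This is a generic sketch, not the paper's two algorithms, and all names (`greedy_placement`, `covers`, `cost`) are hypothetical:

```python
def greedy_placement(candidates, covers, budget, cost):
    """Greedy VNF-node selection under a deployment budget.

    covers[v] is the set of flows fully processed if node v hosts a VNF;
    cost[v] is its deployment cost.  Iterates in sorted order so ties
    break deterministically."""
    chosen, covered, spent = set(), set(), 0
    while True:
        best, best_ratio = None, 0.0
        for v in sorted(candidates - chosen):
            if spent + cost[v] > budget:
                continue  # would exceed the deployment budget
            gain = len(covers[v] - covered)  # marginal coverage
            ratio = gain / cost[v]
            if ratio > best_ratio:
                best, best_ratio = v, ratio
        if best is None:
            break
        chosen.add(best)
        covered |= covers[best]
        spent += cost[best]
    return chosen, covered
```

Greedy rules of this type achieve constant-factor guarantees only once the objective is submodular, which is exactly why the relaxation step in the abstract matters.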

In this note, we introduce a general version of the well-known elliptical potential lemma, a widely used technique in the analysis of algorithms in sequential learning and decision-making problems. We consider a stochastic linear bandit setting where a decision-maker sequentially chooses among a set of given actions, observes their noisy rewards, and aims to maximize her cumulative expected reward over a decision-making horizon. The elliptical potential lemma is a key tool for quantifying uncertainty in estimating parameters of the reward function, but it requires the noise and the prior distributions to be Gaussian. Our general elliptical potential lemma relaxes this Gaussian requirement, which is a highly non-trivial extension for a number of reasons: unlike the Gaussian case, there is no closed-form solution for the covariance matrix of the posterior distribution, the covariance matrix is not a deterministic function of the actions, and the covariance matrix is not decreasing with respect to the semidefinite inequality. While this result is of broad interest, we showcase an application of it to prove an improved Bayesian regret bound for the well-known Thompson sampling algorithm in stochastic linear bandits with changing action sets where prior and noise distributions are general. This bound is minimax optimal up to constants.
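The classical (Gaussian-case) lemma that the abstract generalizes states that, with $V_0 = \lambda I$ and $V_t = V_{t-1} + x_t x_t^\top$, the potential $\sum_t \min(1, x_t^\top V_{t-1}^{-1} x_t)$ is at most $2 \log(\det V_T / \det V_0)$. A small numerical sketch of this classical statement (the function name is ours):

```python
import numpy as np

def elliptical_potential(actions, lam=1.0):
    """Return (potential, bound) for the classical elliptical potential
    lemma: potential = sum_t min(1, x_t^T V_{t-1}^{-1} x_t) and
    bound = 2 * log(det V_T / det V_0), with V_0 = lam * I."""
    d = len(actions[0])
    V = lam * np.eye(d)
    total = 0.0
    for x in actions:
        x = np.asarray(x, dtype=float)
        total += min(1.0, x @ np.linalg.inv(V) @ x)
        V += np.outer(x, x)  # rank-one design-matrix update
    bound = 2.0 * (np.linalg.slogdet(V)[1] - d * np.log(lam))
    return total, bound
```

The inequality holds for any action sequence; the paper's contribution is precisely that an analogue survives when the posterior covariance is no longer this deterministic, monotone function of the actions.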

In this work, we introduce a time memory formalism in a poroelasticity model that couples the pressure and displacement. We assume this multiphysics process occurs in multicontinuum media. The mathematical model contains a coupled system of equations for pressures in each continuum and elasticity equations for displacements of the medium. We assume that the temporal dynamics is governed by fractional derivatives following some works in the literature. We derive an implicit finite difference approximation for time discretization based on the Caputo time fractional derivative. A Discrete Fracture Model (DFM) is used to model fluid flow through fractures and treat the complex network of fractures. We assume different fractional powers in fractures and matrix due to slow and fast dynamics. We develop a coarse grid approximation based on the Generalized Multiscale Finite Element Method (GMsFEM), where we solve local spectral problems for construction of the multiscale basis functions. We present numerical results for the two-dimensional model problems in fractured heterogeneous porous media. We investigate the approximation error between the reference (fine-scale) solution and the multiscale solution with different numbers of multiscale basis functions. The results show that the proposed method can provide good accuracy on a coarse grid.
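The time discretization mentioned above rests on approximating the Caputo derivative of order $\alpha \in (0,1)$; the standard L1 scheme writes $D_t^\alpha u(t_n) \approx \sum_{j=0}^{n-1} b_j \,(u^{j+1} - u^j)$ with explicit weights. A sketch of those weights (the helper name `caputo_l1_weights` is ours; the abstract does not specify which discretization the paper uses beyond "implicit finite difference"):

```python
import math

def caputo_l1_weights(alpha, n, tau):
    """Weights b_j of the L1 scheme for the Caputo derivative of order
    alpha in (0, 1) on a uniform grid with step tau:
        D_t^alpha u(t_n) ~ sum_{j=0}^{n-1} b_j * (u^{j+1} - u^j)."""
    c = tau ** (-alpha) / math.gamma(2.0 - alpha)
    return [c * ((n - j) ** (1.0 - alpha) - (n - j - 1) ** (1.0 - alpha))
            for j in range(n)]
```

A useful sanity check: the L1 scheme is exact for linear `u(t) = t`, where the Caputo derivative equals $t^{1-\alpha}/\Gamma(2-\alpha)$.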

In this work we propose and unify classes of different models for information propagation over graphs. In a first class, propagation is modeled as a wave which emanates from a set of known nodes at an initial time, to all other unknown nodes at later times with an ordering determined by the time at which the information wave front reaches nodes. A second class of models is based on the notion of a travel time along paths between nodes. The time of information propagation from an initial known set of nodes to a node is defined as the minimum of a generalized travel time over subsets of all admissible paths. A final class is given by imposing a local equation of an eikonal form at each unknown node, with boundary conditions at the known nodes. The solution value of the local equation at a node is coupled to the neighbouring nodes with smaller solution values. We provide precise formulations of the model classes in this graph setting, and prove equivalences between them. Motivated by the connection between the first arrival time model and the eikonal equation in the continuum setting, we demonstrate that, for graphs in the particular form of grids in Euclidean space, mean field limits under grid refinement of certain graph models lead to Hamilton-Jacobi PDEs. For a specific parameter setting, we demonstrate that the solution on the grid approximates the Euclidean distance.
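In its simplest instantiation (travel time = sum of edge weights), the first arrival time model reduces to shortest-path distances from the known set, computable with a Dijkstra-style sweep. A minimal sketch under that simplifying assumption; the generalized travel times in the paper need not be additive:

```python
import heapq

def first_arrival(adj, sources):
    """First-arrival-time sketch: information leaves `sources` at time 0
    and reaches each node at the minimum travel time over paths, here
    taken to be additive edge weights (multi-source Dijkstra).

    adj maps each node to a list of (neighbour, weight) pairs."""
    t = {v: float('inf') for v in adj}
    heap = []
    for s in sources:
        t[s] = 0.0
        heapq.heappush(heap, (0.0, s))
    while heap:
        d, u = heapq.heappop(heap)
        if d > t[u]:
            continue  # stale heap entry
        for v, w in adj[u]:
            nd = d + w
            if nd < t[v]:
                t[v] = nd
                heapq.heappush(heap, (nd, v))
    return t
```

The monotone pop order of the heap mirrors the causal structure in the eikonal class: each node's value is determined only by neighbours with smaller solution values.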

Bayesian approaches are appealing for constrained inference problems by allowing a probabilistic characterization of uncertainty, while providing a computational machinery for incorporating complex constraints in hierarchical models. However, the usual Bayesian strategy of placing a prior on the constrained space and conducting posterior computation with Markov chain Monte Carlo algorithms is often intractable. An alternative is to conduct inference for a less constrained posterior and project samples to the constrained space through a minimal distance mapping. We formalize and provide a unifying framework for such posterior projections. For theoretical tractability, we initially focus on constrained parameter spaces corresponding to closed and convex subsets of the original space. We then consider non-convex Stiefel manifolds. We provide a general formulation of projected posteriors in a Bayesian decision-theoretic framework. We show that asymptotic properties of the unconstrained posterior are transferred to the projected posterior, leading to asymptotically correct credible intervals. We demonstrate numerically that projected posteriors can have better performance than competitor approaches in real data examples.
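The projection step itself is mechanically simple once a minimal-distance map is available. A toy sketch for the closed convex case, using the nonnegative orthant (whose Euclidean projection is coordinatewise clipping) as the constraint set; both helper names are our own:

```python
def projected_posterior(draws, project):
    """Posterior projection sketch: push unconstrained posterior draws
    through a minimal-distance projection onto the constrained space."""
    return [project(x) for x in draws]

def project_nonneg(x):
    """Euclidean projection of a vector onto the nonnegative orthant:
    clip each coordinate at zero (the unique closest feasible point)."""
    return [max(0.0, xi) for xi in x]
```

Credible intervals are then read off the projected draws; the abstract's theoretical results concern when this inherits the asymptotic correctness of the unconstrained posterior.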

Graph Neural Networks (GNN) come in many flavors, but should always be either invariant (permutation of the nodes of the input graph does not affect the output) or equivariant (permutation of the input permutes the output). In this paper, we consider a specific class of invariant and equivariant networks, for which we prove new universality theorems. More precisely, we consider networks with a single hidden layer, obtained by summing channels formed by applying an equivariant linear operator, a pointwise non-linearity and either an invariant or equivariant linear operator. Recently, Maron et al. (2019) showed that by allowing higher-order tensorization inside the network, universal invariant GNNs can be obtained. As a first contribution, we propose an alternative proof of this result, which relies on the Stone-Weierstrass theorem for algebras of real-valued functions. Our main contribution is then an extension of this result to the equivariant case, which appears in many practical applications but has been less studied from a theoretical point of view. The proof relies on a new generalized Stone-Weierstrass theorem for algebras of equivariant functions, which is of independent interest. Finally, unlike many previous settings that consider a fixed number of nodes, our results show that a GNN defined by a single set of parameters can approximate uniformly well a function defined on graphs of varying size.
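The single-hidden-layer architecture described above can be made concrete with the simplest choices: message passing `A @ X @ W` as the equivariant linear operator, a ReLU non-linearity, and a sum over nodes as the invariant readout. This is an illustrative special case, not the general linear layers the paper analyzes:

```python
import numpy as np

def invariant_gnn_layer(A, X, W, w_out):
    """One-channel invariant layer sketch: equivariant linear map
    (message passing), pointwise ReLU, then an invariant linear readout
    (sum over nodes).  A: adjacency (n, n); X: node features (n, f);
    W: (f, h); w_out: (h,)."""
    H = np.maximum(0.0, A @ X @ W)  # equivariant: rows permute with nodes
    return H.sum(axis=0) @ w_out    # summing over nodes kills the permutation
```

Invariance can be checked directly: relabeling the nodes as `P @ A @ P.T`, `P @ X` for a permutation matrix `P` leaves the output unchanged.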

Many resource allocation problems in the cloud can be described as a basic Virtual Network Embedding Problem (VNEP): finding mappings of request graphs (describing the workloads) onto a substrate graph (describing the physical infrastructure). In the offline setting, the two natural objectives are profit maximization, i.e., embedding a maximal number of request graphs subject to the resource constraints, and cost minimization, i.e., embedding all requests at minimal overall cost. The VNEP can be seen as a generalization of classic routing and call admission problems, in which requests are arbitrary graphs whose communication endpoints are not fixed. Due to its applications, the problem has been studied intensively in the networking community. However, the underlying algorithmic problem is hardly understood. This paper presents the first fixed-parameter tractable approximation algorithms for the VNEP. Our algorithms are based on randomized rounding. Due to the flexible mapping options and the arbitrary request graph topologies, we show that a novel linear program formulation is required. Only with this novel formulation can convex combinations of valid mappings be computed, as the formulation needs to account for the structure of the request graphs. Accordingly, to capture the structure of request graphs, we introduce the graph-theoretic notion of extraction orders and extraction width and show that our algorithms have runtime exponential in the request graphs' maximal extraction width. Hence, for request graphs of fixed extraction width, we obtain the first polynomial-time approximations. Studying the new notion of extraction orders we show that (i) computing extraction orders of minimal width is NP-hard and (ii) that computing decomposable LP solutions is in general NP-hard, even when restricting request graphs to planar ones.
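Once the LP solution for a request is decomposed into a convex combination of valid mappings, the randomized rounding step is just a weighted draw per request. A generic sketch of that final step only (the decomposition itself is the hard part the paper's formulation enables; the data layout and mapping labels here are hypothetical):

```python
import random

def round_embeddings(decompositions, rng):
    """Randomized rounding sketch: for each request, the LP solution is a
    convex combination [(prob, mapping), ...] of valid mappings; select
    one mapping per request with those probabilities."""
    chosen = []
    for combo in decompositions:
        r = rng.random()
        acc = 0.0
        for prob, mapping in combo:
            acc += prob
            if r <= acc:
                chosen.append(mapping)
                break
        else:
            chosen.append(combo[-1][1])  # guard against float round-off
    return chosen
```

In the full algorithms, the rounded solution is then checked against the substrate's resource constraints and the draw repeated if necessary; that resampling loop is omitted here.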

Robust estimation is much more challenging in high dimensions than it is in one dimension: Most techniques either lead to intractable optimization problems or estimators that can tolerate only a tiny fraction of errors. Recent work in theoretical computer science has shown that, in appropriate distributional models, it is possible to robustly estimate the mean and covariance with polynomial time algorithms that can tolerate a constant fraction of corruptions, independent of the dimension. However, the sample and time complexity of these algorithms is prohibitively large for high-dimensional applications. In this work, we address both of these issues by establishing sample complexity bounds that are optimal, up to logarithmic factors, as well as giving various refinements that allow the algorithms to tolerate a much larger fraction of corruptions. Finally, we show on both synthetic and real data that our algorithms have state-of-the-art performance and suddenly make high-dimensional robust estimation a realistic possibility.
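To make "tolerate a constant fraction of corruptions" concrete, here is the most naive filter imaginable: repeatedly discard the sample farthest from the current mean until an $\varepsilon$-fraction is gone. This one-dimensional illustration is emphatically not the paper's method, whose point is dimension-independent spectral filtering:

```python
def trimmed_mean(points, eps):
    """Naive robust-mean illustration for scalar data: iteratively remove
    the point farthest from the running mean until an eps-fraction of the
    sample has been discarded, then average the rest."""
    pts = list(points)
    to_remove = int(eps * len(pts))
    for _ in range(to_remove):
        m = sum(pts) / len(pts)
        pts.remove(max(pts, key=lambda x: abs(x - m)))
    return sum(pts) / len(pts)
```

In high dimensions this coordinate-free "farthest point" heuristic breaks down, since corruptions can hide at moderate distance in many directions at once; detecting them via the empirical covariance's top eigenvector is the idea behind the filtering algorithms the abstract refines.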
