
We revisit two well-established verification techniques, $k$-induction and bounded model checking (BMC), in the more general setting of fixed point theory over complete lattices. Our main theoretical contribution is latticed $k$-induction, which (i) generalizes classical $k$-induction for verifying transition systems, (ii) generalizes Park induction for bounding fixed points of monotonic maps on complete lattices, and (iii) extends from naturals $k$ to transfinite ordinals $\kappa$, thus yielding $\kappa$-induction. The lattice-theoretic understanding of $k$-induction and BMC enables us to apply both techniques to the fully automatic verification of infinite-state probabilistic programs. Our prototypical implementation manages to automatically verify non-trivial specifications for probabilistic programs taken from the literature that, using existing techniques, cannot be verified without first synthesizing a stronger inductive invariant.
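To make the central notion concrete, the following is a minimal sketch of the latticed $k$-induction check for finite $k$; the names `phi`, `meet`, `leq`, and the toy lattice are illustrative stand-ins under our reading of the abstract, not the paper's implementation.

```python
# A minimal sketch of latticed k-induction, assuming a complete lattice
# given by a meet operation and a partial order, and a monotone map phi.
def k_inductive(phi, meet, leq, u, k):
    """Check the k-induction premise Phi(Psi^{k-1}(u)) <= u, where
    Psi(v) = u meet Phi(v). If it holds, u is an upper bound on the
    least fixed point of Phi; k = 1 is exactly Park induction."""
    v = u
    for _ in range(k - 1):
        v = meet(u, phi(v))          # Psi(v) = u ⊓ Φ(v)
    return leq(phi(v), u)

# Toy lattice: the nonnegative reals ordered by <=, with meet = min.
# Phi(x) = 0.5*x + 1 has least fixed point 2; u = 2.5 is 1-inductive.
print(k_inductive(lambda x: 0.5 * x + 1, min, lambda a, b: a <= b, 2.5, 1))
```

Note that on a totally ordered toy lattice like this one, $k$-induction succeeds exactly when Park induction does; the extra power of larger $k$ (and transfinite $\kappa$) shows up on lattices that are not totally ordered, such as the expectation lattices arising from probabilistic programs.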

Related content

We present a parallelized geometric multigrid (GMG) method, based on the cell-based Vanka smoother, for higher-order space-time finite element (STFEM) discretizations of the incompressible Navier--Stokes equations. The STFEM is implemented as a time marching scheme. The GMG solver is applied as a preconditioner for GMRES iterations. Its performance properties are demonstrated for 2d and 3d benchmarks of flow around a cylinder. The key ingredients of the GMG approach are the construction of the local Vanka smoother over all degrees of freedom in time of the respective subinterval and its efficient application. To this end, we generate data structures that store pre-computed cell inverses of the Jacobian on all hierarchical levels while requiring only a reasonable amount of memory overhead. The GMG method is implemented on top of the \emph{deal.II} finite element library. The concepts are flexible and can be transferred to similar software platforms.
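The following sketch illustrates, in dense linear algebra for readability, the two ingredients named above: pre-computing and storing the cell-local inverses once per multigrid level, and applying them in a damped multiplicative Vanka sweep. It is a schematic reading of the abstract, not deal.II code; the actual implementation operates on sparse, distributed structures.

```python
import numpy as np

def precompute_cell_inverses(cells):
    """cells: list of (cell_dof_indices, local_jacobian_block) pairs for
    one multigrid level; store the inverse of each local block once."""
    return [(np.asarray(dofs), np.linalg.inv(A_loc)) for dofs, A_loc in cells]

def vanka_sweep(x, b, A, cell_inverses, omega=0.8):
    """One damped multiplicative cell-based Vanka sweep on A x = b."""
    for dofs, A_inv in cell_inverses:
        r = b[dofs] - A[dofs] @ x        # residual restricted to cell dofs
        x[dofs] += omega * (A_inv @ r)   # correction via the stored inverse
    return x
```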

We consider density estimation for Besov spaces when each sample is quantized to only a limited number of bits. We provide a noninteractive adaptive estimator that exploits the sparsity of wavelet bases, along with a simulate-and-infer technique from parametric estimation under communication constraints. We show that our estimator is nearly rate-optimal by deriving minimax lower bounds that hold even when interactive protocols are allowed. Interestingly, while our wavelet-based estimator is almost rate-optimal for Sobolev spaces as well, it is unclear whether the standard Fourier basis, which arises naturally for those spaces, can be used to achieve the same performance.

In this paper we develop a plane-wave-type method for the discretization of homogeneous Helmholtz equations with variable wave numbers. In the proposed method, local basis functions (on each element) are constructed by the geometric optics ansatz such that they approximately satisfy a homogeneous Helmholtz equation without boundary condition. More precisely, each basis function is expressed as the product of an exponential plane wave function and a polynomial function, where the phase function in the exponential factor approximately satisfies the eikonal equation and the polynomial factor is recursively determined by transport equations associated with the considered Helmholtz equation. We prove that the resulting plane wave spaces possess the same high-order $h$-approximation properties as the standard plane wave spaces (which are available only in the constant-wave-number case). We apply the proposed plane wave spaces to the discretization of nonhomogeneous Helmholtz equations with variable wave numbers and establish the corresponding error estimates for their finite element solutions. We report some numerical results to illustrate the efficiency of the proposed method.
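For orientation, the geometric optics (WKB) ansatz behind such basis functions can be written schematically as follows; the notation is generic and need not match the paper's exactly.

```latex
% For \Delta u + \omega^2 n(x)^2 u = 0, an exponential phase factor
% times an amplitude expansion; collecting powers of \omega yields the
% eikonal equation for the phase and transport equations for the
% amplitudes (A_{-1} \equiv 0):
\[
  u(x) = e^{\mathrm{i}\omega\phi(x)} \sum_{\ell \ge 0} (\mathrm{i}\omega)^{-\ell} A_\ell(x),
  \qquad |\nabla\phi|^2 = n^2,
  \qquad 2\,\nabla\phi \cdot \nabla A_\ell + (\Delta\phi)\,A_\ell = -\,\Delta A_{\ell-1}.
\]
```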

We extend the theory of PAC learning in a way that allows us to model a rich variety of learning tasks where the data satisfy special properties that ease the learning process. For example, tasks where the distance of the data from the decision boundary is bounded away from zero. The basic and simple idea is to consider partial concepts: these are functions that can be undefined on certain parts of the space. When learning a partial concept, we assume that the source distribution is supported only on points where the partial concept is defined. This way, one can naturally express assumptions on the data such as lying on a lower dimensional surface or margin conditions. In contrast, it is not at all clear that such assumptions can be expressed by the traditional PAC theory. In fact, we exhibit easy-to-learn partial concept classes which provably cannot be captured by the traditional PAC theory. This also resolves a question posed by Attias, Kontorovich, and Mansour (2019). We characterize PAC learnability of partial concept classes and reveal an algorithmic landscape which is fundamentally different from the classical one. For example, in the classical PAC model, learning boils down to Empirical Risk Minimization (ERM). In stark contrast, we show that the ERM principle fails to explain the learnability of partial concept classes. In fact, we demonstrate classes that are incredibly easy to learn, but such that any algorithm that learns them must use a hypothesis space with unbounded VC dimension. We also find that the sample compression conjecture fails in this setting. Thus, this theory features problems that can be neither represented nor solved in the traditional way. We view this as evidence that it might provide insights on the nature of learnability in realistic scenarios which the classical theory fails to explain.
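As a concrete illustration of a partial concept, consider a halfspace that is undefined inside its margin band; the encoding below (with `None` standing for "undefined") is our own illustration, not the paper's formalism.

```python
# A partial concept: a gamma-margin halfspace in R^d that is undefined
# ("None") on points inside the margin band. A source distribution for
# this concept is supported only where the function is defined.
def margin_halfspace(w, gamma):
    def h(x):
        s = sum(wi * xi for wi, xi in zip(w, x))
        if abs(s) < gamma:
            return None            # undefined: inside the margin band
        return 1 if s > 0 else 0
    return h

h = margin_halfspace(w=(1.0, -1.0), gamma=0.5)
print(h((2.0, 0.0)), h((0.2, 0.0)))   # 1 None
```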

We study a variant of Min Cost Flow in which the flow needs to be connected. Specifically, in the Connected Flow problem one is given a directed graph $G$, along with a set of demand vertices $D \subseteq V(G)$ with demands $\mathsf{dem}: D \rightarrow \mathbb{N}$, and costs and capacities for each edge. The goal is to find a minimum cost flow that satisfies the demands, respects the capacities, and induces a (strongly) connected subgraph. This generalizes previously studied problems like the (Many Visits) TSP. We study the parameterized complexity of Connected Flow parameterized by $|D|$, by the treewidth $tw$, and by the vertex cover size $k$ of $G$, and provide: (i) $\mathsf{NP}$-completeness already for the case $|D|=2$ with only unit demands and capacities and no edge costs, together with fixed-parameter tractability if there are no capacities, (ii) a fixed-parameter tractable $\mathcal{O}^{\star}(k^{\mathcal{O}(k)})$ time algorithm for the general case, and a kernel of size polynomial in $k$ for the special case of Many Visits TSP, (iii) a $|V(G)|^{\mathcal{O}(tw)}$ time algorithm and a matching $|V(G)|^{o(tw)}$ time lower bound under the Exponential Time Hypothesis. To achieve some of our results, we significantly extend an approach by Kowalik et al.~[ESA'20].
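To pin down the feasibility conditions, here is one plausible formalization as a checker (not a solver); the exact demand semantics is our reading of the abstract, and `networkx` is used purely for illustration.

```python
import networkx as nx

def is_feasible(G, flow, dem):
    """G: nx.DiGraph with edge attribute 'cap'; flow: dict edge -> int;
    dem: dict vertex -> required throughput. Checks capacities, flow
    conservation, demands, and strong connectivity of the support."""
    for (u, v), f in flow.items():
        if f < 0 or f > G[u][v]["cap"]:
            return False
    inflow = {v: 0 for v in G}
    outflow = {v: 0 for v in G}
    for (u, v), f in flow.items():
        outflow[u] += f
        inflow[v] += f
    if any(inflow[v] != outflow[v] for v in G):      # flow conservation
        return False
    if any(inflow[v] < d for v, d in dem.items()):   # demands satisfied
        return False
    support = nx.DiGraph([e for e, f in flow.items() if f > 0])
    return support.number_of_nodes() > 0 and nx.is_strongly_connected(support)
```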

Finding the minimum approximation ratio for Nash equilibria of bimatrix games has driven a series of studies, starting with 3/4, followed by 1/2, 0.38 and 0.36, and culminating in the best known approximation ratio of 0.3393 by Tsaknakis and Spirakis (the TS algorithm for short). Efforts to improve this bound have remained unsuccessful for the past 14 years. This work makes the first progress by showing that the bound of 0.3393 is indeed tight for the TS algorithm. Next, we characterize all possible tight game instances for the TS algorithm. This allows us to conduct extensive experiments to study the behavior of the TS algorithm and to compare it with other algorithms. We find that this lower bound is not robust to smoothing for the TS algorithm, in the sense that any perturbation of the initial point may cause the algorithm to deviate from the tight-bound approximate solution. Other approximation algorithms, such as Fictitious Play and Regret Matching, also find better approximate solutions on these instances. However, the new distributed algorithm for approximate Nash equilibrium by Czumaj et al. performs consistently at the same bound of 0.3393. This shows that the lower-bound instances generated against the TS algorithm can serve as a benchmark for the design and analysis of approximate Nash equilibrium algorithms.
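For reference, the quantity being bounded is the approximation quality of a strategy pair: the larger of the two players' regrets. The helper below merely evaluates this quantity for a bimatrix game $(R, C)$; it is a toy aid for reading the abstract, not the TS descent procedure itself.

```python
import numpy as np

def regret(R, C, x, y):
    """Approximation quality of mixed strategies (x, y): the pair is an
    eps-approximate Nash equilibrium iff this value is at most eps."""
    row_regret = np.max(R @ y) - x @ R @ y   # row player's gain from deviating
    col_regret = np.max(x @ C) - x @ C @ y   # column player's gain from deviating
    return max(row_regret, col_regret)
```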

Bayesian nonparametric priors based on completely random measures (CRMs) offer a flexible modeling approach when the number of latent components in a dataset is unknown. However, managing the infinite dimensionality of CRMs typically requires practitioners to derive ad-hoc algorithms, preventing the use of general-purpose inference methods and often leading to long compute times. We propose a general but explicit recipe to construct a simple finite-dimensional approximation that can replace the infinite-dimensional CRMs. Our independent finite approximation (IFA) is a generalization of important cases that are used in practice. The independence of atom weights in our approximation (i) makes the construction well-suited for parallel and distributed computation and (ii) facilitates more convenient inference schemes. We quantify the approximation error between IFAs and the target nonparametric prior. We compare IFAs with an alternative approximation scheme -- truncated finite approximations (TFAs), where the atom weights are constructed sequentially. We prove that, for worst-case choices of observation likelihoods, TFAs are a more efficient approximation than IFAs. However, in real-data experiments with image denoising and topic modeling, we find that IFAs perform very similarly to TFAs in terms of task-specific accuracy metrics.
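As one concrete instance of an independent finite approximation, the classical $K$-atom construction for a beta process draws i.i.d. $\mathrm{Beta}(\alpha/K, 1)$ weights (the construction underlying the Indian buffet process); the sketch below uses it for illustration only and is not the paper's general recipe.

```python
import numpy as np

def beta_process_ifa(alpha, K, base_sampler, rng=None):
    """K independent atoms approximating a beta process: i.i.d. weights
    Beta(alpha/K, 1) and i.i.d. atom locations from the base measure."""
    rng = rng or np.random.default_rng()
    weights = rng.beta(alpha / K, 1.0, size=K)   # independent atom weights
    atoms = base_sampler(K, rng)                 # i.i.d. base-measure draws
    return atoms, weights

atoms, w = beta_process_ifa(2.0, 1000, lambda k, r: r.standard_normal(k))
```

Because the atom weights are mutually independent, the $K$ atoms can be instantiated and updated in parallel, which is exactly the property the abstract highlights for distributed computation.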

Rational verification is the problem of determining which temporal logic properties will hold in a multi-agent system, under the assumption that agents in the system act rationally, by choosing strategies that collectively form a game-theoretic equilibrium. Previous work in this area has largely focussed on deterministic systems. In this paper, we develop the theory and algorithms for rational verification in probabilistic systems. We focus on concurrent stochastic games (CSGs), which can be used to model uncertainty and randomness in complex multi-agent environments. We study the rational verification problem for both non-cooperative games and cooperative games in the qualitative probabilistic setting. In the former case, we consider LTL properties satisfied by the Nash equilibria of the game and in the latter case LTL properties satisfied by the core. In both cases, we show that the problem is 2EXPTIME-complete, thus not harder than the much simpler verification problem of model checking LTL properties of systems modelled as Markov decision processes (MDPs).

Discretization of the uniform norm of functions from a given finite dimensional subspace of continuous functions is studied. We pay special attention to the case of trigonometric polynomials with frequencies from an arbitrary finite set with fixed cardinality. We give two different proofs of the fact that for any $N$-dimensional subspace of the space of continuous functions it is sufficient to use $e^{CN}$ sample points for an accurate upper bound for the uniform norm. Previously known results show that one cannot improve on the exponential growth of the number of sampling points for a good discretization theorem in the uniform norm. Also, we prove a general result, which connects the upper bound on the number of sampling points in the discretization theorem for the uniform norm with the best $m$-term bilinear approximation of the Dirichlet kernel associated with the given subspace. We illustrate the application of our technique with the example of trigonometric polynomials.
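Written out, the discretization inequality in question takes the following schematic form (the constants $C, C'$ are indicative only):

```latex
% For any N-dimensional subspace X_N of C(\Omega) there exist sample
% points \xi^1, \dots, \xi^m with m \le e^{CN} such that
\[
  \|f\|_\infty \;\le\; C' \max_{1 \le j \le m} \bigl|f(\xi^j)\bigr|
  \qquad \text{for all } f \in X_N .
\]
```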

Embedding methods which enforce a partial order or lattice structure over the concept space, such as Order Embeddings (OE) (Vendrov et al., 2016), are a natural way to model transitive relational data (e.g. entailment graphs). However, OE learns a deterministic knowledge base, limiting the expressiveness of queries and the ability to use uncertainty for both prediction and learning (e.g. learning from expectations). Probabilistic extensions of OE (Lai and Hockenmaier, 2017) have provided the ability to somewhat calibrate these denotational probabilities while retaining the consistency and inductive bias of ordered models, but they lack the ability to model the negative correlations found in real-world knowledge. In this work we show that a broad class of models that assign probability measures to OE can never capture negative correlation. This motivates our construction of a novel box lattice and an accompanying probability measure that capture anticorrelation and even disjoint concepts, while still providing the benefits of probabilistic modeling, such as the ability to perform rich joint and conditional queries over arbitrary sets of concepts and to both learn from and predict calibrated uncertainty. We show improvements over previous approaches in modeling the Flickr and WordNet entailment graphs, and investigate the power of the model.
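A minimal sketch of the box-lattice measure, under the natural reading that each concept is an axis-aligned box in $[0,1]^d$, the lattice meet is box intersection, and a concept's probability is its volume (our illustrative encoding, not the paper's training code):

```python
import numpy as np

def volume(lo, hi):
    """Volume of an axis-aligned box; empty boxes get volume 0."""
    return float(np.prod(np.clip(hi - lo, 0.0, None)))

def joint(lo_a, hi_a, lo_b, hi_b):
    """P(A, B): volume of the intersection box (the lattice meet)."""
    return volume(np.maximum(lo_a, lo_b), np.minimum(hi_a, hi_b))

def conditional(lo_a, hi_a, lo_b, hi_b):
    """P(A | B) = P(A, B) / P(B)."""
    return joint(lo_a, hi_a, lo_b, hi_b) / volume(lo_b, hi_b)

a = (np.array([0.00, 0.00]), np.array([0.5, 0.5]))
b = (np.array([0.25, 0.25]), np.array([1.0, 1.0]))
print(joint(*a, *b), conditional(*a, *b))   # 0.0625 0.111...
```

Disjoint boxes receive zero joint probability, which is precisely the kind of anticorrelation that probability measures assigned to order embeddings cannot express.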
