
For probabilistic programs, it is usually not possible to automatically derive exact information about their properties, such as the distribution of states at a given program point. Instead, one can attempt to derive approximations, such as upper bounds on tail probabilities. Such bounds can be obtained via concentration inequalities, which rely on the moments of a distribution, such as the expectation (the first raw moment) or the variance (the second central moment). Tail bounds obtained using central moments are often tighter than the ones obtained using raw moments, but automatically analyzing central moments is more challenging. This paper presents an analysis for probabilistic programs that automatically derives symbolic upper and lower bounds on variances, as well as higher central moments, of cost accumulators. To overcome the challenges of higher-moment analysis, it generalizes analyses for expectations with an algebraic abstraction that simultaneously analyzes different moments, utilizing relations between them. A key innovation is the notion of moment-polymorphic recursion, together with a practical derivation system that handles recursive functions. The analysis has been implemented using a template-based technique that reduces the inference of polynomial bounds to linear programming. Experiments with our prototype central-moment analyzer show that, even though the analyzer computes only upper/lower bounds on the quantities of interest rather than their exact values, it obtains tighter tail bounds than an existing system that uses only raw moments, such as expectations.
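As a worked illustration (not drawn from the paper) of why central moments yield tighter tail bounds, compare Markov's inequality, which uses only the first raw moment, with Cantelli's inequality, which also uses the variance; the moments and threshold below are hypothetical:

```python
# Hypothetical moments of a nonnegative cost accumulator X:
mean, var, t = 10.0, 4.0, 20.0   # E[X], Var[X], tail threshold

# Markov's inequality uses only the first raw moment:
#   Pr[X >= t] <= E[X] / t
markov = mean / t

# Cantelli's inequality also uses the second central moment:
#   Pr[X - E[X] >= d] <= Var[X] / (Var[X] + d^2)
d = t - mean
cantelli = var / (var + d * d)

print(f"Markov bound:   {markov:.4f}")    # 0.5000
print(f"Cantelli bound: {cantelli:.4f}")  # 0.0385
```

For these hypothetical moments, the central-moment bound is more than an order of magnitude tighter than the raw-moment bound.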

Related content

Various verification techniques for temporal properties transform temporal verification to safety verification. For infinite-state systems, these transformations are inherently imprecise. That is, for some instances, the temporal property holds, but the resulting safety property does not. This paper introduces a mechanism for tackling this imprecision. This mechanism, which we call temporal prophecy, is inspired by prophecy variables. Temporal prophecy refines an infinite-state system using first-order linear temporal logic formulas, via a suitable tableau construction. For a specific liveness-to-safety transformation based on first-order logic, we show that using temporal prophecy strictly increases the precision. Furthermore, temporal prophecy leads to robustness of the proof method, which is manifested by a cut elimination theorem. We integrate our approach into the Ivy deductive verification system, and show that it can handle challenging temporal verification examples.

Automatic verification of array-manipulating programs is a challenging problem because it often amounts to the inference of inductive quantified loop invariants which, in some cases, may not even be first-order expressible. In this paper, we suggest a novel verification technique that is based on induction on a user-defined rank of program states as an alternative to loop invariants. Our technique, dubbed inductive rank reduction, works in two steps. First, we simplify the verification problem and prove that the program is correct when the input state contains an input array of length $B$ or less, using the length of the array as the rank of the state. Second, we employ a squeezing function $g$ which converts a program state $\sigma$ with an array of length $\ell > B$ to a state $g(\sigma)$ containing an array of length $\ell - 1$ or less. We prove that when $g$ satisfies certain natural conditions, if the program violates its specification on $\sigma$ then it does so also on $g(\sigma)$. The correctness of the program on inputs with arrays of arbitrary lengths follows by induction. We make our technique automatic for array programs whose length of execution is proportional to the length of the input arrays by (i) performing the first step using symbolic execution, (ii) verifying the conditions required of $g$ using Z3, and (iii) providing a heuristic procedure for synthesizing $g$. We implemented our technique and applied it successfully to several interesting array-manipulating programs, including a bidirectional summation program whose loop invariant cannot be expressed in first-order logic while its specification is quantifier-free.
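To make the two steps concrete, here is a minimal, hypothetical sketch (not the paper's implementation) for a toy summation program: the base case is discharged by exhaustive checking over a small value domain, and the squeezing function $g$ simply drops the last array element. The bound $B$, the value domain, and $g$ are all illustrative assumptions; the paper discharges these obligations with symbolic execution and Z3.

```python
from itertools import product

B = 3                        # illustrative base-case bound on array length
DOMAIN = range(-2, 3)        # small value domain for exhaustive checking

def program(a):              # toy array program: iterative summation
    acc = 0
    for x in a:
        acc += x
    return acc

def spec(a):                 # quantifier-free specification of the program
    return program(a) == sum(a)

def squeeze(a):              # hypothetical squeezing function g: drop last element
    return a[:-1]

# Step 1 (symbolic execution in the paper; exhaustive checking here):
# the spec holds on every input array of length B or less.
base = all(spec(a) for n in range(B + 1) for a in product(DOMAIN, repeat=n))

# Step 2 (discharged with Z3 in the paper; sampled here at length B+1):
# if the spec is violated on a, it is also violated on squeeze(a).
step = all(spec(a) or not spec(squeeze(a))
           for a in product(DOMAIN, repeat=B + 1))

print("base case:", base, "| squeezing condition:", step)
```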

We consider the sequential composite binary hypothesis testing problem in which one of the hypotheses is governed by a single distribution while the other is governed by a family of distributions whose parameters belong to a known set $\Gamma$. We would like to design a test to decide which hypothesis is in effect. Under the constraint that the probability that the length of the test, a stopping time, exceeds $n$ is bounded by a given threshold $\epsilon$, we obtain fundamental limits on the asymptotic behavior of the sequential test as $n$ tends to infinity. Assuming that $\Gamma$ is a convex and compact set, we obtain the set of all first-order error exponents for the problem. We also prove a strong converse. Additionally, we obtain the set of second-order error exponents under the assumption that the observation alphabet $\mathcal{X}$ is finite. In the proof of the second-order asymptotics, a main technical contribution is the derivation of a central-limit-type result for the maximum of an uncountable set of log-likelihood ratios under suitable conditions. This result may be of independent interest. We also show that some important statistical models satisfy these conditions.
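A schematic Monte Carlo sketch of such a sequential test is given below; the composite alternative is handled by maximizing the cumulative log-likelihood ratio over a discretization of $\Gamma$, echoing the maximum of log-likelihood ratios mentioned above. The Gaussian model, the set $\Gamma$, and the stopping rule are illustrative assumptions, not the paper's construction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: H0: N(0,1) vs composite H1: N(theta,1), theta in Gamma.
GAMMA = np.linspace(0.5, 1.5, 21)   # discretized parameter set (illustrative)
THRESHOLD, HORIZON = 5.0, 200       # stopping-rule parameters (illustrative)

def sequential_test(sample_stream):
    """Stop once the max log-likelihood ratio over Gamma crosses a threshold."""
    llr = np.zeros_like(GAMMA)      # cumulative LLR for each theta in Gamma
    for n, x in enumerate(sample_stream, start=1):
        # log[N(x; theta, 1) / N(x; 0, 1)] = theta*x - theta^2/2
        llr += GAMMA * x - GAMMA**2 / 2
        if llr.max() >= THRESHOLD:
            return "reject H0", n
        if llr.max() <= -THRESHOLD or n >= HORIZON:
            return "accept H0", n   # truncation keeps Pr[length > n] small

decision, stop_time = sequential_test(rng.normal(1.0, 1.0, HORIZON))
print(decision, "at time", stop_time)
```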

Probabilistic deep learning is deep learning that accounts for uncertainty, both model uncertainty and data uncertainty. It is based on the use of probabilistic models and deep neural networks. We distinguish two approaches to probabilistic deep learning: probabilistic neural networks and deep probabilistic models. The former employs deep neural networks that utilize probabilistic layers which can represent and process uncertainty; the latter uses probabilistic models that incorporate deep neural network components which capture complex non-linear stochastic relationships between the random variables. We discuss some major examples of each approach, including Bayesian neural networks and mixture density networks (for probabilistic neural networks), and variational autoencoders, deep Gaussian processes, and deep mixed effects models (for deep probabilistic models). TensorFlow Probability is a library for probabilistic modeling and inference that can be used for both approaches to probabilistic deep learning. We include code examples for illustration.
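For instance, a probabilistic neural network for regression can be built by composing standard Keras layers with a TensorFlow Probability distribution layer, so that the model outputs a full predictive distribution rather than a point estimate; the architecture and hyperparameters below are illustrative:

```python
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

# A probabilistic neural network for 1-D regression: the final layer
# parameterizes a Normal distribution, so the model predicts a
# distribution (capturing data uncertainty) instead of a point value.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(1,)),
    tf.keras.layers.Dense(2),               # predicts mean and raw scale
    tfp.layers.DistributionLambda(
        lambda t: tfd.Normal(loc=t[..., :1],
                             scale=1e-3 + tf.math.softplus(t[..., 1:]))),
])

# Train by maximizing the likelihood of the observations under the
# predicted distribution (negative log-likelihood loss).
negloglik = lambda y, dist: -dist.log_prob(y)
model.compile(optimizer=tf.keras.optimizers.Adam(0.01), loss=negloglik)
# model.fit(x_train, y_train, epochs=500)   # x_train, y_train: your data
```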

We consider the following problem: given a program, find tight asymptotic bounds on the values of some variables at the end of the computation (or at any given program point) in terms of its input values. We focus on the case of polynomially-bounded variables, and on a weak programming language for which we have recently shown that tight bounds for polynomially-bounded variables are computable. These bounds are sets of multivariate polynomials. While their computability has been settled, the complexity of this program-analysis problem remained open. In this paper, we show the problem to be PSPACE-complete. The main contribution is a new, space-efficient analysis algorithm. This algorithm is obtained in a few steps. First, we develop an algorithm for univariate bounds, a sub-problem which is already PSPACE-hard. Then, a decision procedure for multivariate bounds is achieved by reducing this problem to the univariate case; this reduction is orthogonal to the solution of the univariate problem and uses observations on the geometry of a set of vectors that represent multivariate bounds. Finally, we transform the univariate-bound algorithm to produce multivariate bounds.

The COVID-19 pandemic left its unique mark on the 21st century as one of the most significant disasters in history, triggering governments all over the world to respond with a wide range of interventions. However, these interventions come with a substantial price tag. It is crucial for governments to form anti-virus strategies that balance the trade-off between protecting public health and minimizing the economic cost. This work proposes a probabilistic programming method to quantify the efficiency of major non-pharmaceutical interventions. We present a generative simulation model that accounts for the economic and human capital cost of adopting such strategies, and provide an end-to-end pipeline to simulate the virus spread and the incurred loss of various policy combinations. By investigating the national responses in 10 countries covering four continents, we find that social distancing coupled with contact tracing is the most successful policy, reducing the virus transmission rate by 96\% along with a 98\% reduction in economic and human capital loss. Alongside the experimental results, we open-source a framework for testing the efficacy of each policy combination.
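The following schematic sketch (not the paper's generative model) conveys the idea of such a pipeline: a discrete-time SIR-style simulation in which each policy scales the base transmission rate and accrues a daily economic cost. All policy multipliers and cost figures are illustrative assumptions:

```python
# Schematic discrete-time SIR simulation (not the paper's model): each
# policy scales the base transmission rate and accrues a daily economic
# cost. All numbers below are illustrative assumptions.
POLICIES = {
    "none":                       (1.00, 0.0),   # (rate multiplier, daily cost)
    "social_distancing":          (0.45, 2.0),
    "distancing+contact_tracing": (0.10, 2.5),
}

def simulate(policy, days=180, n=1_000_000, beta0=0.3, gamma=0.1, i0=100):
    mult, daily_cost = POLICIES[policy]
    s, i, r, cost = n - i0, i0, 0, 0.0
    for _ in range(days):
        new_inf = beta0 * mult * s * i / n
        new_rec = gamma * i
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        cost += daily_cost          # abstract units of economic loss per day
    return r + i, cost              # total ever infected, total policy cost

for p in POLICIES:
    infected, cost = simulate(p)
    print(f"{p:>28}: infected={infected:,.0f} cost={cost:.0f}")
```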

We consider the problem of estimating the joint distribution of $n$ independent random variables. Our approach is based on a family of candidate probabilities, which we call a model, chosen either to contain the true distribution of the data or at least to provide a good approximation of it with respect to some loss function. The aim of the present paper is to describe a general estimation strategy that makes it possible to adapt to both the specific features of the model and the choice of the loss function, in view of designing an estimator with good estimation properties. The losses we have in mind are based on the total variation, Hellinger, Wasserstein, and $\mathbb{L}_p$-distances, to name a few. We show that the risk of the resulting estimator with respect to the loss function can be bounded by the sum of an approximation term, accounting for the loss between the true distribution and the model, and a complexity term that corresponds to the bound we would get if this distribution did belong to the model. Our results hold under mild assumptions on the true distribution of the data and are based on exponential deviation inequalities that are non-asymptotic and involve explicit constants. When the model reduces to two distinct probabilities, we show how our estimation strategy leads to a robust test whose errors of the first and second kinds only depend on the losses between the true distribution and the two tested probabilities.
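Schematically, the main risk bound has the following shape, where $P^\star$ denotes the true distribution, $\hat P_n$ the estimator, $\mathcal{M}$ the model, and $\mathcal{D}_n(\mathcal{M})$ a complexity functional; this display is our paraphrase for illustration, and the constant $C$ and the exact form of the complexity term depend on the model and the loss:

$$\mathbb{E}\bigl[\ell(P^\star, \hat P_n)\bigr] \;\le\; C\Bigl[\underbrace{\inf_{Q \in \mathcal{M}} \ell(P^\star, Q)}_{\text{approximation term}} \;+\; \underbrace{\mathcal{D}_n(\mathcal{M})}_{\text{complexity term}}\Bigr].$$

When the true distribution lies in the model, the approximation term vanishes and only the complexity term remains, matching the in-model bound described above.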

We revisit two well-established verification techniques, $k$-induction and bounded model checking (BMC), in the more general setting of fixed point theory over complete lattices. Our main theoretical contribution is latticed $k$-induction, which (i) generalizes classical $k$-induction for verifying transition systems, (ii) generalizes Park induction for bounding fixed points of monotonic maps on complete lattices, and (iii) extends from naturals $k$ to transfinite ordinals $\kappa$, thus yielding $\kappa$-induction. The lattice-theoretic understanding of $k$-induction and BMC enables us to apply both techniques to the fully automatic verification of infinite-state probabilistic programs. Our prototypical implementation manages to automatically verify non-trivial specifications for probabilistic programs taken from the literature that existing techniques cannot verify without first synthesizing a stronger inductive invariant.
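As a concrete instance of point (i), the sketch below checks classical $k$-induction over an explicit finite transition system; the system and candidate property are illustrative. The property fails to be 1-inductive because of an unreachable bad transition, but is 2-inductive:

```python
# Explicit finite transition system (illustrative). State 4 is bad but
# unreachable; state 3 is its only route in and has no predecessors.
STATES = range(5)
INIT = {0}
TRANS = {0: {1}, 1: {2}, 2: {0}, 3: {4}, 4: {4}}

def P(s):                                # candidate safety property
    return s != 4

def paths(k):
    """All paths s0..sk consistent with TRANS (length k+1), any start."""
    ps = [[s] for s in STATES]
    for _ in range(k):
        ps = [p + [t] for p in ps for t in TRANS[p[-1]]]
    return ps

def k_inductive(k):
    # Base case (bounded model checking): P holds along every
    # initialized path with k states.
    base = all(all(P(s) for s in p)
               for p in paths(k - 1) if p[0] in INIT)
    # Inductive step: k consecutive P-states are followed by a P-state.
    step = all(P(p[-1]) for p in paths(k) if all(P(s) for s in p[:-1]))
    return base and step

print([k for k in range(1, 5) if k_inductive(k)])   # -> [2, 3, 4]
```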

The Neural Tangent Kernel (NTK) is the wide-network limit of a kernel defined using neural networks at initialization, whose embedding is the gradient of the output of the network with respect to its parameters. We study the "after kernel", which is defined using the same embedding, except after training, for neural networks with standard architectures, on binary classification problems extracted from MNIST and CIFAR-10, trained using SGD in a standard way. For some dataset-architecture pairs, after a few epochs of neural network training, a hard-margin SVM using the network's after kernel is much more accurate than when the network's initial kernel is used. For networks with an architecture similar to VGG, the after kernel is more "global", in the sense that it is less invariant to transformations of input images that disrupt the global structure of the image while leaving the local statistics largely intact. For fully connected networks, the after kernel is less global in this sense. The after kernel tends to be more invariant to small shifts, rotations and zooms; data augmentation does not improve these invariances. The (finite approximation to the) conjugate kernel, obtained using the last layer of hidden nodes, sometimes, but not always, provides a good approximation to the NTK and the after kernel.
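One entry of the empirical tangent kernel can be computed directly from parameter gradients: evaluated at initialization it approximates the NTK, and evaluated after training it gives the after kernel studied here. A minimal PyTorch sketch, where the tiny network is an illustrative stand-in for the paper's architectures:

```python
import torch

def kernel_entry(model, x1, x2):
    """<grad_theta f(x1), grad_theta f(x2)> for a scalar-output model.
    At initialization this is the empirical NTK; after training it is
    the after kernel."""
    def flat_grad(x):
        out = model(x.unsqueeze(0)).squeeze()
        grads = torch.autograd.grad(out, list(model.parameters()))
        return torch.cat([g.reshape(-1) for g in grads])
    return torch.dot(flat_grad(x1), flat_grad(x2)).item()

# Illustrative usage with a tiny fully connected net on 4-dim inputs:
net = torch.nn.Sequential(torch.nn.Linear(4, 8), torch.nn.ReLU(),
                          torch.nn.Linear(8, 1))
x1, x2 = torch.randn(4), torch.randn(4)
print(kernel_entry(net, x1, x2))
```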

Identification of parameters in ordinary differential equations (ODEs) is an important and challenging task when modeling dynamic systems in biomedical research and other scientific areas, especially in the presence of time-varying parameters. This article proposes a fast and accurate method, TVMAGI (Time-Varying MAnifold-constrained Gaussian process Inference), to estimate both time-constant and time-varying parameters in ODEs using noisy and sparse observation data. TVMAGI imposes a Gaussian process model over the time series of system components as well as the time-varying parameters, and restricts the derivative process to satisfy ODE conditions. Consequently, TVMAGI completely bypasses numerical integration and achieves substantial savings in computation time. By incorporating the ODE structure through manifold constraints, TVMAGI enjoys a principled statistical construction under the Bayesian paradigm, which further enables it to handle systems with missing data or unobserved components. The Gaussian process prior also alleviates the identifiability issue often associated with time-varying parameters in ODEs. Unlike existing approaches, TVMAGI assumes no specific linearity of the ODE structure, and can be applied to general nonlinear systems. We demonstrate the robustness and efficiency of our method through three simulation examples, including an infectious disease compartmental model.
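TVMAGI itself is more involved, but the key idea of bypassing numerical integration can be conveyed by a toy gradient-matching sketch: smooth the noisy observations, differentiate the smoother (a role the Gaussian process posterior mean plays in TVMAGI), and read the time-varying parameter off the ODE pointwise. The logistic ODE and all modeling choices below are illustrative assumptions, not the TVMAGI algorithm:

```python
import numpy as np

# Toy gradient-matching sketch (not TVMAGI): recover a time-varying rate
# theta(t) in the logistic ODE  x' = theta(t) * x * (1 - x/K)  without
# numerically integrating the ODE during inference.
K, ts = 10.0, np.linspace(0.0, 10.0, 201)
theta_true = 0.5 + 0.2 * np.sin(ts)                 # ground-truth parameter

# Generate noisy data by forward Euler (for data generation only).
x = np.empty_like(ts); x[0] = 0.5
for i in range(len(ts) - 1):
    x[i + 1] = x[i] + (ts[1] - ts[0]) * theta_true[i] * x[i] * (1 - x[i] / K)
obs = x + np.random.default_rng(1).normal(0, 0.02, x.shape)

# Smooth the observations, differentiate the smoother, and solve the
# ODE for theta(t) pointwise.
window = 9
smooth = np.convolve(obs, np.ones(window) / window, mode="same")
dxdt = np.gradient(smooth, ts)
theta_hat = dxdt / (smooth * (1 - smooth / K))

mid = slice(window, -window)                        # ignore edge effects
print("max error:", np.abs(theta_hat[mid] - theta_true[mid]).max())
```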
