
We consider an extension of linear-time temporal logic (LTL) with both local and remote data constraints interpreted over a concrete domain. It naturally generalizes constraint LTL and the Temporal Logic of Repeating Values, both of which have been studied before. We use previous results to prove that the satisfiability problem for this logic is decidable. Further, we show that attempting to make the logic more expressive can lead to undecidability.

Related content

App extension points introduced in iOS 8 for interaction between apps and between apps and the system:
  • Today (iOS and OS X): widgets for the Today view of Notification Center
  • Share (iOS and OS X): post content to web services or share content with others
  • Actions (iOS and OS X): app extensions to view or manipulate inside another app
  • Photo Editing (iOS): edit a photo or video in Apple's Photos app with extensions from third-party apps
  • Finder Sync (OS X): remote file storage in the Finder with support for Finder content annotation
  • Storage Provider (iOS): an interface between files inside an app and other apps on a user's device
  • Custom Keyboard (iOS): system-wide alternative keyboards


The proliferation of automated data collection schemes and the advances in sensorics are increasing the amount of data we are able to monitor in real-time. However, given the high annotation costs and the time required by quality inspections, data is often available in an unlabeled form. This is fostering the use of active learning for the development of soft sensors and predictive models. In production, instead of performing random inspections to obtain product information, labels are collected by evaluating the information content of the unlabeled data. Several query strategy frameworks for regression have been proposed in the literature but most of the focus has been dedicated to the static pool-based scenario. In this work, we propose a new strategy for the stream-based scenario, where instances are sequentially offered to the learner, which must instantaneously decide whether to perform the quality check to obtain the label or discard the instance. The approach is inspired by the optimal experimental design theory and the iterative aspect of the decision-making process is tackled by setting a threshold on the informativeness of the unlabeled data points. The proposed approach is evaluated using numerical simulations and the Tennessee Eastman Process simulator. The results confirm that selecting the examples suggested by the proposed algorithm allows for a faster reduction in the prediction error.
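To make the stream-based decision rule concrete, here is a minimal Python sketch of a threshold-based query strategy in the spirit of optimal experimental design. The leverage-style informativeness score, the fixed threshold, and the ridge-style model update are illustrative assumptions, not the exact criterion proposed in the paper.

```python
import numpy as np

class StreamActiveLearner:
    """Sketch of stream-based active learning for a linear regression model.

    An incoming point x is scored by its leverage under the current design,
    x^T (X^T X + reg*I)^{-1} x (a D-optimality-flavored gain); the label is
    queried only when the score exceeds a fixed threshold.
    """

    def __init__(self, n_features, threshold=0.8, reg=1e-3):
        self.threshold = threshold
        self.info = reg * np.eye(n_features)   # regularized information matrix X^T X
        self.Xy = np.zeros(n_features)         # accumulated X^T y
        self.coef_ = np.zeros(n_features)

    def informativeness(self, x):
        # High leverage means x lies in a poorly explored direction of the design.
        return float(x @ np.linalg.solve(self.info, x))

    def offer(self, x, oracle):
        """Decide on the fly whether to pay for the quality check (label)."""
        if self.informativeness(x) < self.threshold:
            return False                       # discard the instance
        y = oracle(x)                          # costly inspection / lab analysis
        self.info += np.outer(x, x)
        self.Xy += y * x
        self.coef_ = np.linalg.solve(self.info, self.Xy)   # ridge-style refit
        return True


# Toy usage: noisy linear process; only informative points trigger a label query.
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])
learner = StreamActiveLearner(n_features=3)
n_queried = sum(
    learner.offer(x, lambda z: z @ true_w + 0.1 * rng.normal())
    for x in rng.normal(size=(500, 3))
)
print(n_queried, learner.coef_)
```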

Importance sampling (IS) is valuable in reducing the variance of Monte Carlo sampling in many areas, including finance, rare event simulation, and Bayesian inference. It is natural to combine quasi-Monte Carlo (QMC) methods with IS to achieve a faster rate of convergence. However, naively replacing Monte Carlo with QMC may not work well. This paper investigates the convergence rates of randomized QMC-based IS for estimating integrals with respect to a Gaussian measure, in which the IS measure is a Gaussian or $t$ distribution. We prove that if the target function satisfies the so-called boundary growth condition and the covariance matrix of the IS density has eigenvalues no smaller than 1, then randomized QMC with the Gaussian proposal has a root mean squared error of $O(N^{-1+\epsilon})$ for arbitrarily small $\epsilon>0$. Similar results are established for the $t$ distribution as the proposal. These sufficient conditions help to assess the effectiveness of IS in QMC. For some particular applications, we find that Laplace IS, a very general approach that approximates the target function by a quadratic Taylor expansion around its mode, yields a proposal covariance with eigenvalues smaller than 1, making the resulting integrand less favorable for QMC. From this point of view, when using Gaussian distributions as the IS proposal, a change of measure via Laplace IS may transform a favorable integrand into an unfavorable one for QMC, even though the variance of Monte Carlo sampling is reduced. We also give examples to verify our propositions and warn against naively replacing MC with QMC under IS proposals. Numerical results suggest that using Laplace IS with $t$ distributions is more robust than with Gaussian distributions.
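As a concrete illustration of the setting, the following Python sketch estimates a Gaussian integral with randomized (scrambled Sobol') QMC under a Gaussian IS proposal whose covariance eigenvalues are at least 1. The test integrand, proposal covariance, and error proxy are assumptions made for illustration, not the paper's experiments.

```python
import numpy as np
from scipy.stats import norm, qmc, multivariate_normal

def rqmc_is_gaussian(f, dim, cov_proposal, m=12, n_reps=10, seed=0):
    """Randomized QMC importance sampling sketch for E[f(X)], X ~ N(0, I_dim).

    Scrambled Sobol' points are mapped to the Gaussian proposal N(0, cov_proposal)
    through the inverse normal CDF and a Cholesky factor; independent scramblings
    give a mean estimate and a crude RMSE proxy.
    """
    target = multivariate_normal(np.zeros(dim), np.eye(dim))
    proposal = multivariate_normal(np.zeros(dim), cov_proposal)
    chol = np.linalg.cholesky(cov_proposal)
    estimates = []
    for r in range(n_reps):
        sampler = qmc.Sobol(d=dim, scramble=True, seed=seed + r)
        u = np.clip(sampler.random_base2(m), 1e-12, 1 - 1e-12)  # 2^m points in (0,1)^dim
        y = norm.ppf(u) @ chol.T                                 # proposal samples
        w = np.exp(target.logpdf(y) - proposal.logpdf(y))        # IS weights
        estimates.append(np.mean(w * f(y)))
    estimates = np.array(estimates)
    return estimates.mean(), estimates.std(ddof=1) / np.sqrt(n_reps)


# Toy check: E[exp(sum X_i)] for X ~ N(0, I_3) equals exp(3/2).
est, err = rqmc_is_gaussian(lambda y: np.exp(y.sum(axis=1)), dim=3,
                            cov_proposal=2.0 * np.eye(3))
print(est, err, np.exp(1.5))
```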

This paper first rigorously proves that, for a large class of Gaussian processes, the second moment grows no faster than a power function and the covariance matrix is strictly positive definite. Under these two conditions, the maximum likelihood estimators of the mean and variance of such drift Gaussian processes are strongly consistent under broader growth conditions on $t_n$. At the same time, the asymptotic normality of the bivariate random vectors and Berry-Ess\'{e}en bounds for the estimators are obtained using Stein's method via Malliavin calculus.

We describe a new algorithm for vertex cover with runtime $O^*(1.25298^k)$, where $k$ is the size of the desired solution and $O^*$ hides polynomial factors in the input size. This improves on the previous best runtime of $O^*(1.2738^k)$ due to Chen, Kanj, & Xia (2010), which had stood for more than a decade. The key to our algorithm is a potential function that simultaneously tracks $k$ as well as the optimal value $\lambda$ of the vertex cover LP relaxation. This approach also allows us to make use of prior algorithms for Maximum Independent Set in bounded-degree graphs and Above-Guarantee Vertex Cover. The main step in the algorithm is to branch on high-degree vertices, while ensuring that both $k$ and $\mu = k - \lambda$ decrease at each step. There can be local obstructions in the graph that prevent $\mu$ from decreasing in this process; we develop a number of novel branching steps to handle these situations.
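For contrast with the refined potential-function approach, here is a minimal Python sketch of the classic parameterized branching on a maximum-degree vertex: either the vertex or its whole neighborhood joins the cover. It illustrates the high-degree branching mentioned above, not the paper's algorithm with the LP-based measure $\mu = k - \lambda$.

```python
def vc_branch(adj, k):
    """Return True iff the graph has a vertex cover of size at most k.

    `adj` maps each vertex to the set of its neighbors.  Classic branching:
    pick a maximum-degree vertex v and recurse on "v is in the cover"
    (budget k-1) or "all of N(v) is in the cover" (budget k-|N(v)|).
    """
    if k < 0:
        return False
    adj = {v: set(nbrs) for v, nbrs in adj.items() if nbrs}  # drop isolated vertices
    if not adj:
        return True            # no edges left to cover
    if k == 0:
        return False

    def delete(graph, removed):
        # Remove a set of vertices and all incident edges.
        return {u: nbrs - removed for u, nbrs in graph.items() if u not in removed}

    v = max(adj, key=lambda u: len(adj[u]))      # branch on a max-degree vertex
    if vc_branch(delete(adj, {v}), k - 1):       # branch 1: v joins the cover
        return True
    nbrs = adj[v]
    return vc_branch(delete(adj, nbrs | {v}), k - len(nbrs))  # branch 2: N(v) joins


# Toy usage: a 4-cycle has a vertex cover of size 2 but not of size 1.
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(vc_branch(c4, 2), vc_branch(c4, 1))  # True False
```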

Dedicated to Tony Hoare. In a paper published in 1972 Hoare articulated the fundamental notions of hiding invariants and simulations. Hiding: invariants on encapsulated data representations need not be mentioned in specifications that comprise the API of a module. Simulation: correctness of a new data representation and implementation can be established by proving simulation between the old and new implementations using a coupling relation defined on the encapsulated state. These results were formalized semantically and for a simple model of state, though the paper claimed this could be extended to encompass dynamically allocated objects. In recent years, progress has been made towards formalizing the claim, for simulation, though mainly in semantic developments. In this article, hiding and simulation are combined with the idea in Hoare's 1969 paper: a logic of programs. For an object-based language with dynamic allocation, we introduce a relational Hoare logic with stateful frame conditions that formalizes encapsulation, hiding of invariants, and couplings that relate two implementations. Relations and other assertions are expressed in first-order logic. Specifications can express a wide range of relational properties such as conditional equivalence and noninterference with declassification. The proof rules facilitate relational reasoning by means of convenient alignments and are shown sound with respect to a conventional operational semantics. A derived proof rule for equivalence of linked programs directly embodies representation independence. Applicability to representative examples is demonstrated using an SMT-based implementation.

We introduce a practical method to enforce linear partial differential equation (PDE) constraints for functions defined by neural networks (NNs), up to a desired tolerance. By combining methods in differentiable physics and applications of the implicit function theorem to NN models, we develop a differentiable PDE-constrained NN layer. During training, our model learns a family of functions, each of which defines a mapping from PDE parameters to PDE solutions. At inference time, the model finds an optimal linear combination of the functions in the learned family by solving a PDE-constrained optimization problem. Our method provides continuous solutions over the domain of interest that exactly satisfy desired physical constraints. Our results show that incorporating hard constraints directly into the NN architecture achieves much lower test error, compared to training on an unconstrained objective.
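The inference-time step can be illustrated with a small sketch: choose the linear combination of a family of basis functions that fits observed targets while exactly satisfying a discretized linear PDE constraint, via a KKT system. Here a hand-picked sine basis and a 1-D Poisson constraint $u'' = f$ stand in for the learned NN family and the differentiable PDE layer, so everything below is an illustrative assumption rather than the paper's implementation.

```python
import numpy as np

def constrained_combination(B, L, f_vals, targets):
    """Solve  min_c ||B c - targets||^2  subject to  L c = f_vals  via a KKT system.

    B: basis function values at observation points; L: the discretized PDE operator
    applied to the basis at collocation points (assumed not to over-constrain c).
    """
    m, p = B.shape[1], L.shape[0]
    kkt = np.block([[2.0 * B.T @ B, L.T],
                    [L, np.zeros((p, p))]])
    rhs = np.concatenate([2.0 * B.T @ targets, f_vals])
    return np.linalg.lstsq(kkt, rhs, rcond=None)[0][:m]   # drop the multipliers


# Basis phi_j(x) = sin(j*pi*x), with exact Laplacian -(j*pi)^2 sin(j*pi*x).
xs = np.linspace(0.05, 0.95, 40)             # observation points
xc = np.array([0.25, 0.5, 0.75])             # collocation points for the constraint
js = np.arange(1, 6)
B = np.sin(np.outer(xs, js) * np.pi)
L = -(js * np.pi) ** 2 * np.sin(np.outer(xc, js) * np.pi)

true_c = np.array([1.0, 0.0, 0.3, 0.0, 0.0])
rng = np.random.default_rng(0)
targets = B @ true_c + 0.01 * rng.normal(size=xs.size)
f_vals = L @ true_c                           # right-hand side u'' = f at xc

c = constrained_combination(B, L, f_vals, targets)
print(np.max(np.abs(L @ c - f_vals)))         # constraint residual (numerically ~0)
```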

Rule set learning has long been studied and has recently been frequently revisited due to the need for interpretable models. Still, existing methods have several shortcomings: 1) most recent methods require a binary feature matrix as input, while learning rules directly from numeric variables is understudied; 2) existing methods impose orders among rules, either explicitly or implicitly, which harms interpretability; and 3) currently no method exists for learning probabilistic rule sets for multi-class target variables (there is only one for probabilistic rule lists). We propose TURS, for Truly Unordered Rule Sets, which addresses these shortcomings. We first formalize the problem of learning truly unordered rule sets. To resolve conflicts caused by overlapping rules, i.e., instances covered by multiple rules, we propose a novel approach that exploits the probabilistic properties of our rule sets. We next develop a two-phase heuristic algorithm that learns rule sets by carefully growing rules. An important innovation is that we use a surrogate score to take the global potential of the rule set into account when learning a local rule. Finally, we empirically demonstrate that, compared to non-probabilistic and (explicitly or implicitly) ordered state-of-the-art methods, our method learns rule sets that not only have better interpretability but also better predictive performance.

The supremum of the standardized empirical process is a promising statistic for testing whether the distribution function $F$ of i.i.d. real random variables is either equal to a given distribution function $F_0$ (hypothesis) or $F \ge F_0$ (one-sided alternative). Since \cite{r5} it has been well known that an affine-linear transformation of the suprema converges in distribution to the Gumbel law as the sample size tends to infinity. This enables the construction of an asymptotic level-$\alpha$ test. However, the rate of convergence is extremely slow. As a consequence, the probability of a type I error is much larger than $\alpha$ even for sample sizes beyond $10{,}000$. The standardization consists of the weight function $1/\sqrt{F_0(x)(1-F_0(x))}$. Replacing the weight function with a suitable random constant leads to a new test statistic, for which we can derive the exact distribution (and the limit distribution) under the hypothesis. A comparison via a Monte Carlo simulation shows that the new test is uniformly better than the Smirnov test and an appropriately modified test due to \cite{r20}. Our methodology also works for the two-sided alternative $F \neq F_0$.
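For reference, the classical weighted (standardized) one-sided statistic can be computed as in the Python sketch below; the supremum is attained at the order statistics. This is the textbook statistic with the weight $1/\sqrt{F_0(x)(1-F_0(x))}$, not the modified statistic with a random constant weight proposed here.

```python
import numpy as np
from scipy.stats import norm

def weighted_sup_statistic(sample, F0):
    """Compute  sup_x  sqrt(n) (F_n(x) - F0(x)) / sqrt(F0(x)(1 - F0(x))).

    For a continuous F0 the supremum over x is attained at the order statistics,
    where the empirical CDF F_n jumps, so it suffices to evaluate there.
    """
    x = np.sort(np.asarray(sample))
    n = x.size
    u = F0(x)                                   # F0 at the order statistics
    fn = np.arange(1, n + 1) / n                # F_n at the order statistics
    with np.errstate(divide="ignore", invalid="ignore"):
        vals = np.sqrt(n) * (fn - u) / np.sqrt(u * (1.0 - u))
    return np.nanmax(vals)


# Toy usage: data from N(0, 1), tested against F0 = standard normal CDF.
rng = np.random.default_rng(1)
print(weighted_sup_statistic(rng.normal(size=200), norm.cdf))
```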

The kernel herding algorithm is used to construct quadrature rules in a reproducing kernel Hilbert space (RKHS). While the computational efficiency of the algorithm and the stability of the output quadrature formulas are advantages of this method, the convergence speed of the integration error for a given number of nodes is slow compared to that of other quadrature methods. In this paper, we propose a modified kernel herding algorithm, building on a framework introduced in a previous study, that aims to obtain sparser solutions while preserving the advantages of standard kernel herding. In the proposed algorithm, the negative gradient is approximated by several vertex directions, and the current solution is updated by moving in the approximate descent direction in each iteration. We show that the convergence speed of the integration error is directly determined by the cosine of the angle between the negative gradient and the approximate gradient. Based on this, we propose new gradient approximation algorithms and analyze them theoretically, including through convergence analysis. In numerical experiments, we confirm the effectiveness of the proposed algorithms in terms of sparsity of nodes and computational efficiency. Moreover, we provide a new theoretical analysis of kernel quadrature rules with fully corrective weights, which achieve faster convergence speeds than those of previous studies.
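As background, the sketch below implements standard kernel herding over a finite candidate set with a Gaussian kernel, i.e., the greedy baseline that the proposed algorithm modifies. The candidate set, kernel, bandwidth, and Monte Carlo estimate of the mean embedding are illustrative assumptions.

```python
import numpy as np

def kernel_herding(candidates, n_nodes, bandwidth=0.5):
    """Standard kernel herding over a finite candidate set (Gaussian kernel).

    The next node maximizes the estimated mean embedding minus the average kernel
    similarity to the nodes chosen so far:
        x_{t+1} = argmax_x  mu_P(x) - (1/(t+1)) * sum_i k(x, x_i).
    The mean embedding mu_P(x) = E_{Y~P} k(x, Y) is approximated by averaging
    the kernel over the candidate set itself.
    """
    X = np.asarray(candidates)
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq_dists / (2.0 * bandwidth ** 2))    # Gram matrix on candidates
    mu = K.mean(axis=1)                               # estimated mean embedding
    chosen, running = [], np.zeros(len(X))            # running sum of k(x, x_i)
    for t in range(n_nodes):
        idx = int(np.argmax(mu - running / (t + 1)))
        chosen.append(idx)
        running += K[:, idx]
    return X[chosen]


# Toy usage: 10 herding nodes for the uniform distribution on [0, 1]^2.
rng = np.random.default_rng(0)
print(kernel_herding(rng.uniform(size=(500, 2)), n_nodes=10))
```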

Imitation learning aims to extract knowledge from human experts' demonstrations or artificially created agents in order to replicate their behaviors. Its success has been demonstrated in areas such as video games, autonomous driving, robotic simulation, and object manipulation. However, this replication process can be problematic: performance is highly dependent on demonstration quality, and most trained agents perform well only in task-specific environments. In this survey, we provide a systematic review of imitation learning. We first introduce the background knowledge, from development history to preliminaries, followed by the different taxonomies within imitation learning and key milestones of the field. We then detail challenges in learning strategies and present research opportunities in learning policies from suboptimal demonstrations, voice instructions, and other associated optimization schemes.
