
Semi-unification is the combination of first-order unification and first-order matching. The undecidability of semi-unification was proven by Kfoury, Tiuryn, and Urzyczyn in the 1990s via a Turing reduction from Turing machine immortality (the existence of a diverging configuration). The particular Turing reduction is intricate, uses non-computational principles, and involves various intermediate models of computation. The present work gives a constructive many-one reduction from the Turing machine halting problem to semi-unification. This establishes RE-completeness of semi-unification under many-one reductions. Computability of the reduction function, constructivity of the argument, and correctness of the argument are witnessed by an axiom-free mechanization in the Coq proof assistant. Arguably, this serves as comprehensive, precise, and surveyable evidence for the result at hand. The mechanization is incorporated into the existing, well-maintained Coq library of undecidability proofs. Notably, a variant of Hooper's argument for the undecidability of Turing machine immortality is part of the mechanization.
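
To make the problem statement concrete, here is a minimal, illustrative Python sketch (not part of the paper's Coq mechanization) of terms, substitutions, and the matching check that a semi-unification solution must pass: a substitution sigma solves an instance of inequalities s ≤ t if, for every pair, sigma(s) can be matched onto sigma(t).

```python
# Illustrative sketch: terms are variables (strings) or tuples (f, arg1, ..., argn).
# A semi-unification instance is a list of inequalities (s, t); a candidate solution
# sigma is accepted if for each pair some rho with rho(sigma(s)) == sigma(t) exists.

def apply_subst(sigma, term):
    """Apply a substitution (dict: variable -> term) to a term."""
    if isinstance(term, str):
        return sigma.get(term, term)
    return (term[0],) + tuple(apply_subst(sigma, a) for a in term[1:])

def match(pattern, target, rho=None):
    """First-order matching: return rho with apply_subst(rho, pattern) == target, or None."""
    rho = dict(rho or {})
    if isinstance(pattern, str):                      # variable: bind consistently
        if pattern in rho and rho[pattern] != target:
            return None
        rho[pattern] = target
        return rho
    if isinstance(target, str) or pattern[0] != target[0] or len(pattern) != len(target):
        return None
    for p, t in zip(pattern[1:], target[1:]):
        rho = match(p, t, rho)
        if rho is None:
            return None
    return rho

def is_solution(sigma, instance):
    """Check that sigma solves every inequality s <= t of the instance."""
    return all(match(apply_subst(sigma, s), apply_subst(sigma, t)) is not None
               for s, t in instance)

# Example: x <= f(y, y) is solved by sigma = {} (match x onto f(y, y)),
# whereas f(x, x) <= f(a, b) with distinct constants a, b has no solution
# (here we check sigma = {}).
print(is_solution({}, [("x", ("f", "y", "y"))]))                     # True
print(is_solution({}, [(("f", "x", "x"), ("f", ("a",), ("b",)))]))   # False
```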

Related content

This work aims at making a comprehensive contribution in the general area of parametric inference for discretely observed diffusion processes. Established approaches for likelihood-based estimation invoke a time-discretisation scheme for the approximation of the intractable transition dynamics of the Stochastic Differential Equation (SDE) model over finite time periods. The scheme is applied for a step-size that is either user-selected or determined by the data. Recent research has highlighted the critical effect of the choice of numerical scheme on the behaviour of derived parameter estimates in the setting of hypo-elliptic SDEs. In brief, in our work we first develop two weak second-order sampling schemes (to cover both hypo-elliptic and elliptic SDEs) and produce a small-time expansion for the density of the schemes to form a proxy for the true intractable SDE transition density. Then, we establish a collection of analytic results for likelihood-based parameter estimates obtained via the formed proxies, thus providing a theoretical framework that showcases advantages from the use of the developed methodology for SDE calibration. We present numerical results from carrying out classical or Bayesian inference, for both elliptic and hypo-elliptic SDEs.
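
As background only, the sketch below shows the classical first-order (Euler-Maruyama) Gaussian proxy for the SDE transition density, the baseline that weak second-order schemes improve upon; the Ornstein-Uhlenbeck model and all numbers are illustrative assumptions, not the paper's experiments.

```python
# Illustrative sketch: for a 1-D SDE dX = b(X; theta) dt + s(X; theta) dW observed at
# step size h, the Euler scheme approximates the one-step transition by a Gaussian,
# giving a tractable pseudo-likelihood that can be maximised over theta.
import numpy as np

def euler_log_likelihood(theta, x, h, drift, diffusion):
    """Sum of Gaussian log transition densities under the Euler scheme."""
    x0, x1 = x[:-1], x[1:]
    mean = x0 + drift(x0, theta) * h
    var = diffusion(x0, theta) ** 2 * h
    return np.sum(-0.5 * np.log(2 * np.pi * var) - (x1 - mean) ** 2 / (2 * var))

# Example: Ornstein-Uhlenbeck dX = -theta * X dt + dW, simulated and re-estimated.
rng = np.random.default_rng(0)
h, n, theta_true = 0.01, 5000, 1.5
x = np.zeros(n)
for i in range(1, n):
    x[i] = x[i-1] - theta_true * x[i-1] * h + np.sqrt(h) * rng.standard_normal()

grid = np.linspace(0.1, 3.0, 300)
ll = [euler_log_likelihood(t, x, h, lambda x, t: -t * x, lambda x, t: 1.0) for t in grid]
print("estimated theta:", grid[int(np.argmax(ll))])
```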

Occasional deadline misses are acceptable for soft real-time systems. Quantifying probabilistic and deterministic characteristics of deadline misses is therefore essential to ensure that deadline misses indeed happen only occasionally. This is supported by recent research activities on probabilistic worst-case execution time, worst-case deadline failure probability, the maximum number of deadline misses, upper bounds on the deadline miss probability, and the deadline miss rate. This paper focuses on the deadline miss rate of a periodic soft real-time task in the long run. Our model assumes that this soft real-time task has an arbitrary relative deadline and that a job can still be executed after a deadline miss until a dismiss point. This model generalizes the existing models that either dismiss a job immediately after its deadline miss or never dismiss a job. We provide a precise mathematical formulation of the convergence of the deadline miss rate in the long run, together with the essential properties needed to calculate it. Specifically, we use a Markov chain to model the execution behavior of a periodic soft real-time task. We present the required ergodicity property to ensure that the deadline miss rate in the long run is described by a stationary distribution.
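
A minimal numerical sketch of the final step, with hypothetical numbers rather than the paper's task model: once the job-level behaviour is modelled as a finite ergodic Markov chain, the long-run deadline miss rate is read off the stationary distribution.

```python
# Illustrative sketch: a finite ergodic Markov chain whose states record the backlog
# at job release; state 2 is assumed to mean "the released job misses its deadline".
import numpy as np

P = np.array([[0.90, 0.08, 0.02],
              [0.60, 0.30, 0.10],
              [0.20, 0.50, 0.30]])   # hypothetical transition probabilities
miss_states = [2]

# Stationary distribution: solve pi P = pi with sum(pi) = 1.
A = np.vstack([P.T - np.eye(3), np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print("stationary distribution:", pi)
print("long-run deadline miss rate:", pi[miss_states].sum())
```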

The assumption that data are invariant under the action of a compact group is implicit in many statistical modeling assumptions such as normality, or the assumption of independent and identically distributed observations. Hence, testing for the presence of such invariances offers a principled way to falsify various statistical models. In this article, we develop sequential, anytime-valid tests of distributional symmetry under the action of general compact groups. The tests that are developed allow for the continuous monitoring of data as they are collected while keeping type-I error guarantees, and include tests for exchangeability and rotational symmetry as special cases. The main tool to this end is the machinery developed for conformal prediction. The resulting test statistic, called a conformal martingale, can be interpreted as a likelihood ratio. We use this interpretation to show that the test statistics are optimal -- in a specific log-optimality sense -- against certain alternatives. Furthermore, we draw a connection between conformal prediction, anytime-valid tests of distributional invariance, and current developments on anytime-valid testing. In particular, we extend existing anytime-valid tests of independence, which leverage exchangeability, to work under general group invariances. Additionally, we discuss testing for invariance under subgroups of the permutation group and orthogonal group, the latter of which corresponds to testing the assumptions behind linear regression models.
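
For illustration, a minimal sketch of the standard conformal test martingale for exchangeability (one special case of the tests discussed above); the nonconformity score, betting function, and data are assumptions for the example.

```python
# Illustrative sketch: under exchangeability the smoothed conformal p-values are
# i.i.d. uniform, so the running product of calibrated bets is a nonnegative
# martingale; it grows when exchangeability fails, giving an anytime-valid test.
import numpy as np

def conformal_p_values(x, rng):
    """Smoothed conformal p-values with the identity nonconformity score."""
    p = []
    for n in range(1, len(x) + 1):
        scores = x[:n]
        gt = np.sum(scores > scores[n-1])
        eq = np.sum(scores == scores[n-1])
        p.append((gt + rng.uniform() * eq) / n)
    return np.array(p)

def betting_martingale(p, kappa=0.5):
    """Product of bets f(p) = kappa * p**(kappa-1), each integrating to 1 on [0, 1]."""
    return np.cumprod(kappa * p ** (kappa - 1))

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1, 200), rng.normal(3, 1, 200)])  # change point
M = betting_martingale(conformal_p_values(x, rng))
print("final martingale value (evidence against exchangeability):", M[-1])
```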

The development of nonlinear optimization algorithms capable of performing reliably in the presence of noise has garnered considerable attention lately. This paper advocates for strategies to create noise-tolerant nonlinear optimization algorithms by adapting classical deterministic methods. These adaptations follow certain design guidelines described here, which make use of estimates of the noise level in the problem. The application of our methodology is illustrated by the development of a line search gradient projection method, which is tested on an engineering design problem. It is shown that a new self-calibrated line search and noise-aware finite-difference techniques are effective even in the high noise regime. Numerical experiments investigate the resiliency of key algorithmic components. A convergence analysis of the line search gradient projection method establishes convergence to a neighborhood of the solution.
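
As a sketch of one ingredient, the snippet below shows a common way to make a backtracking Armijo line search noise tolerant by relaxing the sufficient-decrease test with an estimated noise level; this is an illustrative variant under assumed parameters, not necessarily the paper's exact rule.

```python
# Illustrative sketch: the Armijo condition is relaxed by a margin proportional to
# the estimated noise level eps_f, so the search does not stall when noise dominates
# small function decreases.
import numpy as np

def noisy_armijo_step(f, x, g, d, eps_f, c1=1e-4, alpha=1.0, shrink=0.5, max_iter=30):
    """Backtracking line search along direction d with a noise-aware Armijo test."""
    fx = f(x)
    for _ in range(max_iter):
        if f(x + alpha * d) <= fx + c1 * alpha * np.dot(g, d) + 2 * eps_f:
            return alpha
        alpha *= shrink
    return alpha

# Example: noisy quadratic, steepest-descent direction.
rng = np.random.default_rng(0)
eps_f = 1e-3
f = lambda x: np.dot(x, x) + eps_f * rng.uniform(-1, 1)   # function values carry noise
x = np.array([2.0, -1.0])
g = 2 * x                       # gradient of the noiseless part, for illustration
print("accepted step length:", noisy_armijo_step(f, x, g, -g, eps_f))
```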

Long patch validation time is a limiting factor for automated program repair (APR). Though the duality between patch validation and mutation testing is recognized, so far there exists no study of systematically adapting mutation testing techniques to general-purpose patch validation. To address this gap, we investigate existing mutation testing techniques and identify five classes of acceleration techniques that are suitable for general-purpose patch validation. Among them, mutant schemata and mutant deduplication have not been adapted to general-purpose patch validation due to the arbitrary changes that third-party APR approaches may introduce. This presents two problems for adaptation: 1) the difficulty of implementing the static equivalence analysis required by the state-of-the-art mutant deduplication approach; 2) the difficulty of capturing the changes of patches to the system state at runtime. To overcome these problems, we propose two novel approaches: 1) execution scheduling, which detects the equivalence between patches online, avoiding the static equivalence analysis and its imprecision; 2) interception-based instrumentation, which intercepts the changes of patches to the system state, avoiding a full interpreter and its overhead. Based on the contributions above, we implement ExpressAPR, a general-purpose patch validator for Java that integrates all recognized classes of techniques suitable for patch validation. Our large-scale evaluation with four APR approaches shows that ExpressAPR accelerates patch validation by 137.1x over plain validation or 8.8x over the state-of-the-art approach, making patch validation no longer the time bottleneck of APR. Patch validation time for a single bug can be reduced to within a few minutes on mainstream CPUs.
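
To illustrate the mutant-schemata idea in a language-agnostic way (ExpressAPR itself targets Java), the sketch below weaves several hypothetical candidate patches into one program guarded by a patch id, so the test suite is set up once and each patch is selected at run time.

```python
# Illustrative sketch of mutant schemata for patch validation: all candidate patches
# live in one program; an environment variable selects which patch is active, so no
# per-patch recompilation is needed.
import os

PATCH_ID = int(os.environ.get("PATCH_ID", "0"))   # hypothetical selection mechanism

def max_of(a, b):
    if PATCH_ID == 1:           # candidate patch 1
        return a if a > b else b
    if PATCH_ID == 2:           # candidate patch 2
        return b if b > a else a
    return a                    # original (buggy) code

def test_max_of():
    assert max_of(1, 2) == 2
    assert max_of(3, 2) == 3

if __name__ == "__main__":
    try:
        test_max_of()
        print(f"patch {PATCH_ID}: PASS")
    except AssertionError:
        print(f"patch {PATCH_ID}: FAIL")
```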

We use Markov categories to develop generalizations of the theory of Markov chains and hidden Markov models in an abstract setting. This comprises characterizations of hidden Markov models in terms of local and global conditional independences as well as existing algorithms for Bayesian filtering and smoothing applicable in all Markov categories with conditionals. We show that these algorithms specialize to existing ones such as the Kalman filter, forward-backward algorithm, and the Rauch-Tung-Striebel smoother when instantiated in appropriate Markov categories. Under slightly stronger assumptions, we also prove that the sequence of outputs of the Bayes filter is itself a Markov chain with a concrete formula for its transition maps. There are two main features of this categorical framework. The first is its generality, as it can be used in any Markov category with conditionals. In particular, it provides a systematic unified account of hidden Markov models and algorithms for filtering and smoothing in discrete probability, Gaussian probability, measure-theoretic probability, possibilistic nondeterminism and others at the same time. The second feature is the intuitive visual representation of information flow in these algorithms in terms of string diagrams.
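
As a concrete instance of the abstract Bayes filter, the sketch below is the classical scalar Kalman filter for a linear-Gaussian model; the model and parameters are illustrative assumptions, whereas the paper's construction is categorical and model-agnostic.

```python
# Illustrative sketch: the Bayes filter instantiated in Gaussian probability is the
# Kalman filter.  Scalar state-space model: x_t = a x_{t-1} + N(0, q), y_t = h x_t + N(0, r).
import numpy as np

def kalman_filter(ys, a, q, h, r, m0, p0):
    """Return the filtered means for a scalar linear-Gaussian model."""
    m, p, means = m0, p0, []
    for y in ys:
        m, p = a * m, a * a * p + q                   # predict: push the state forward
        k = p * h / (h * h * p + r)                   # Kalman gain
        m, p = m + k * (y - h * m), (1 - k * h) * p   # update: condition on y
        means.append(m)
    return np.array(means)

rng = np.random.default_rng(0)
x, ys = 0.0, []
for _ in range(100):
    x = 0.9 * x + rng.normal(scale=0.5)
    ys.append(x + rng.normal(scale=1.0))
print(kalman_filter(ys, a=0.9, q=0.25, h=1.0, r=1.0, m0=0.0, p0=1.0)[-5:])
```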

Recent semantic communication methods explore effective ways to expand the communication paradigm and improve the performance of communication systems. Nonetheless, a common problem of these methods is that the essence of semantics is not explicitly identified and directly utilized. A new epistemology suggests that synonymy, revealed as the fundamental feature of semantics, guides the establishment of semantic information theory from a novel viewpoint. Building on this theoretical basis, this paper proposes a semantic arithmetic coding (SAC) method for semantic lossless compression using intuitive semantic synonymy. By constructing reasonable synonymous mappings and performing arithmetic coding procedures over synonymous sets, SAC can achieve higher compression efficiency for meaning-bearing source sequences at the semantic level and thereby approach the semantic entropy limit. Experimental results on edge texture map compression show an evident improvement in coding efficiency using SAC without semantic losses, compared to traditional arithmetic coding, which demonstrates its effectiveness.
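
A minimal illustration of the synonymy idea (not the paper's SAC implementation): mapping symbols to synonymous sets before coding reduces the ideal arithmetic-coding length from the symbol-level entropy towards the set-level entropy. The synonym map and sequence below are made-up examples.

```python
# Illustrative sketch: compare the ideal code length (arithmetic coding's limit) of a
# symbol sequence with that of the corresponding synonymous-set sequence.
from collections import Counter
from math import log2

synonym_map = {"car": "vehicle", "auto": "vehicle", "automobile": "vehicle",
               "dog": "dog", "hound": "dog"}
sequence = ["car", "auto", "dog", "automobile", "hound", "car", "dog", "auto"]

def ideal_code_length(tokens):
    """Sum of -log2 p(token) with empirical probabilities."""
    counts = Counter(tokens)
    n = len(tokens)
    return sum(-log2(counts[t] / n) for t in tokens)

semantic_sequence = [synonym_map[w] for w in sequence]
print("symbol-level bits:  ", round(ideal_code_length(sequence), 2))
print("semantic-level bits:", round(ideal_code_length(semantic_sequence), 2))
```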

Sequence-independent lifting is a procedure for strengthening valid inequalities of an integer program. We generalize the sequence-independent lifting method of Gu, Nemhauser, and Savelsbergh (GNS lifting) for cover inequalities and correct an error in their proposed generalization. We obtain a new sequence-independent lifting technique -- piecewise-constant (PC) lifting -- with a number of interesting properties. We derive a broad set of sufficient conditions under which PC lifting is facet defining. To our knowledge, this is the first characterization of facet-defining sequence-independent liftings that are efficiently computable from the underlying cover. Finally, we demonstrate via experiments that PC lifting can be a useful alternative to GNS lifting. We test our new lifting techniques atop a number of novel cover cut generation routines, which prove to be effective in experiments with CPLEX.
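
For context, the standard objects being lifted (textbook definitions, not the paper's new results) are the cover inequality of a knapsack constraint and its lifted form with a lifting function g:

```latex
% Standard definitions: for a knapsack set X = { x in {0,1}^n : \sum_j a_j x_j <= b }
% with a_j > 0, a cover C satisfies \sum_{j in C} a_j > b.  The cover inequality and
% its lifted form are
\sum_{j \in C} x_j \;\le\; |C| - 1,
\qquad
\sum_{j \in C} x_j \;+\; \sum_{j \notin C} g(a_j)\, x_j \;\le\; |C| - 1,
% where g is a lifting function; sequence-independent lifting chooses a superadditive g
% so that validity holds regardless of the order in which variables outside C are lifted.
```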

The existence of representative datasets is a prerequisite of many successful artificial intelligence and machine learning models. However, the subsequent application of these models often involves scenarios that are inadequately represented in the data used for training. The reasons for this are manifold and range from time and cost constraints to ethical considerations. As a consequence, the reliable use of these models, especially in safety-critical applications, is a huge challenge. Leveraging additional, already existing sources of knowledge is key to overcome the limitations of purely data-driven approaches, and eventually to increase the generalization capability of these models. Furthermore, predictions that conform with knowledge are crucial for making trustworthy and safe decisions even in underrepresented scenarios. This work provides an overview of existing techniques and methods in the literature that combine data-based models with existing knowledge. The identified approaches are structured according to the categories integration, extraction and conformity. Special attention is given to applications in the field of autonomous driving.
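
As a toy illustration of the conformity category (not a method surveyed here), a data-driven loss can be augmented with a penalty that activates whenever a prediction violates prior knowledge; the assumed knowledge below is that the target is nondecreasing in its input.

```python
# Illustrative sketch: quadratic model fitted to noisy data, with a knowledge-based
# penalty on negative slopes of the fitted curve.
import numpy as np

def total_loss(w, x, y, lam=10.0):
    pred = w[0] + w[1] * x + w[2] * x**2
    data_loss = np.mean((pred - y) ** 2)
    slope = w[1] + 2 * w[2] * x
    knowledge_penalty = np.mean(np.maximum(0.0, -slope) ** 2)  # penalize decreasing fit
    return data_loss + lam * knowledge_penalty

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = x + 0.3 * rng.standard_normal(50)    # underlying truth is increasing, data are noisy

# Crude random search, enough to illustrate the effect of the penalty.
best = min((rng.standard_normal(3) for _ in range(5000)),
           key=lambda w: total_loss(w, x, y))
print("fitted coefficients:", best)
```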

A community reveals the features and connections of its members that are different from those in other communities in a network. Detecting communities is of great significance in network analysis. Alongside classical spectral clustering and statistical inference methods, deep learning techniques for community detection have developed significantly in recent years, owing to their advantages in handling high-dimensional network data. Hence, a comprehensive overview of community detection's latest progress through deep learning is timely for both academics and practitioners. This survey proposes a new taxonomy covering different categories of the state-of-the-art methods, including deep learning-based models upon deep neural networks, deep nonnegative matrix factorization and deep sparse filtering. The main category, i.e., deep neural networks, is further divided into convolutional networks, graph attention networks, generative adversarial networks and autoencoders. The survey also summarizes the popular benchmark data sets, model evaluation metrics, and open-source implementations to address experimentation settings. We then discuss the practical applications of community detection in various domains and point to implementation scenarios. Finally, we outline future directions by suggesting challenging topics in this fast-growing deep learning field.
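
As a small, illustrative baseline for the nonnegative matrix factorization family mentioned above (a single shallow layer rather than a deep model), communities can be read off a factorization of the adjacency matrix.

```python
# Illustrative sketch: factorize the adjacency matrix A ~= W H with multiplicative
# updates and assign each node to the community with the largest entry in H.
import numpy as np

def nmf_communities(A, k, iters=500, eps=1e-9, seed=0):
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    W, H = rng.random((n, k)), rng.random((k, n))
    for _ in range(iters):
        H *= (W.T @ A) / (W.T @ W @ H + eps)   # multiplicative update for H
        W *= (A @ H.T) / (W @ H @ H.T + eps)   # multiplicative update for W
    return H.argmax(axis=0)                    # community assignment per node

# Two planted communities: dense blocks on the diagonal, empty off-diagonal.
A = np.block([[np.ones((5, 5)), np.zeros((5, 5))],
              [np.zeros((5, 5)), np.ones((5, 5))]])
print(nmf_communities(A, k=2))
```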
