
We generalize several propositional preprocessing techniques to higher-order logic, building on existing first-order generalizations. These techniques eliminate literals, clauses, or predicate symbols from the problem, with the aim of making it more amenable to automatic proof search. We also introduce a new technique, which we call quasipure literal elimination, that strictly subsumes pure literal elimination. The new techniques are implemented in the Zipperposition theorem prover. Our evaluation shows that they sometimes help prove problems originating from Isabelle formalizations and the TPTP library.
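For readers unfamiliar with the baseline technique, pure literal elimination can be sketched for plain propositional clauses as follows. This is a minimal illustration of the classical propositional version, not the higher-order generalization described above; the integer-literal clause encoding and the function name are our own.

```python
def pure_literal_elimination(clauses):
    """Remove every clause containing a pure literal (one whose
    negation never occurs anywhere), repeating to a fixed point.

    Clauses are lists of nonzero integers; -x denotes the negation of x.
    """
    changed = True
    while changed:
        changed = False
        literals = {lit for clause in clauses for lit in clause}
        pure = {lit for lit in literals if -lit not in literals}
        if pure:
            remaining = [c for c in clauses if not (set(c) & pure)]
            if len(remaining) < len(clauses):
                clauses = remaining
                changed = True
    return clauses
```

For example, in `[[1, 2], [-2, 3], [-1, 3]]` the literal `3` is pure, so both clauses containing it are deleted; the surviving clause `[1, 2]` then contains only pure literals and vanishes as well, leaving an empty (trivially satisfiable) problem.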

Related content

SAT is the premier annual conference for researchers studying the theory and applications of the propositional satisfiability problem. Beyond plain propositional satisfiability, it covers Boolean optimization (such as MaxSAT and pseudo-Boolean (PB) constraints), quantified Boolean formulas (QBF), satisfiability modulo theories (SMT), and constraint programming (CP) for problems with a clear connection to Boolean-level reasoning.
May 19, 2023

Off-policy evaluation (OPE) is the problem of estimating the value of a target policy using historical data collected under a different logging policy. OPE methods typically assume overlap between the target and logging policy, enabling solutions based on importance weighting and/or imputation. In this work, we approach OPE without assuming either overlap or a well-specified model by considering a strategy based on partial identification under non-parametric assumptions on the conditional mean function, focusing especially on Lipschitz smoothness. Under such smoothness assumptions, we formulate a pair of linear programs whose optimal values upper and lower bound the contributions of the no-overlap region to the off-policy value. We show that these linear programs have a concise closed-form solution that can be computed efficiently and that their solutions converge, under the Lipschitz assumption, to the sharp partial identification bounds on the off-policy value. Furthermore, we show that the rate of convergence is minimax optimal, up to log factors. We deploy our methods on two semi-synthetic examples, and obtain informative and valid bounds that are tighter than those possible without smoothness assumptions.
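The paper's linear programs are not reproduced here, but the classical Lipschitz (McShane-Whitney) envelope illustrates how a smoothness constant yields closed-form pointwise bounds on an unobserved function value. The one-dimensional helper below is our own illustrative construction, not the authors' estimator.

```python
def lipschitz_envelope(x, data, L):
    """Tightest pointwise bounds on an L-Lipschitz function f at x,
    given observed pairs (x_i, f(x_i)).

    Any L-Lipschitz f consistent with the data satisfies, for each i,
    f(x_i) - L*|x - x_i| <= f(x) <= f(x_i) + L*|x - x_i|;
    intersecting these constraints gives the McShane-Whitney envelopes.
    """
    upper = min(y + L * abs(x - xi) for xi, y in data)
    lower = max(y - L * abs(x - xi) for xi, y in data)
    return lower, upper
```

With observations f(0) = 0 and f(1) = 1 and Lipschitz constant L = 1, the bounds at x = 0.5 collapse to the single point 0.5, while a looser constant L = 2 widens the interval to [0, 1]: smoother assumptions give sharper partial identification.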

Software engineering techniques increasingly rely on deep learning approaches to support many software engineering tasks, from bug triaging to code generation. To assess the efficacy of such techniques, researchers typically perform controlled experiments. Conducting these experiments, however, is particularly challenging given the complexity of the space of variables involved, from specialized and intricate architectures and algorithms to a large number of training hyper-parameters and choices of evolving datasets, all compounded by how rapidly machine learning technology is advancing and by the inherent sources of randomness in the training process. In this work we conduct a mapping study, examining 194 experiments with techniques that rely on deep neural networks, appearing in 55 papers published in premier software engineering venues, to characterize the state of the practice and to pinpoint common trends and pitfalls in these experiments. Our study reveals that most of the experiments, including those that have received ACM artifact badges, have fundamental limitations that raise doubts about the reliability of their findings. More specifically, we find: weak analyses to determine that there is a true relationship between independent and dependent variables (87% of the experiments); limited control over the space of DNN-relevant variables, which can render the relationship between dependent variables and treatments correlational rather than causal (100% of the experiments); and a lack of specificity about the DNN variables and their values used to define the treatments being applied (86% of the experiments), which makes it unclear whether the techniques designed are the ones being assessed, or how the sources of extraneous variation are controlled. We provide practical recommendations to address these limitations.

Robots often rely on a repertoire of previously-learned motion policies for performing tasks of diverse complexities. When facing unseen task conditions or when new task requirements arise, robots must adapt their motion policies accordingly. In this context, policy optimization is the \emph{de facto} paradigm to adapt robot policies as a function of task-specific objectives. Most commonly-used motion policies carry particular structures that are often overlooked in policy optimization algorithms. We instead propose to leverage the structure of probabilistic policies by casting the policy optimization as an optimal transport problem. Specifically, we focus on robot motion policies that build on Gaussian mixture models (GMMs) and formulate the policy optimization as a Wasserstein gradient flow over the space of GMMs. This naturally allows us to constrain the policy updates via the $L^2$-Wasserstein distance between GMMs to enhance the stability of the policy optimization process. Furthermore, we leverage the geometry of the Bures-Wasserstein manifold to optimize the Gaussian distributions of the GMM policy via Riemannian optimization. We evaluate our approach on common robotic settings: Reaching motions, collision-avoidance behaviors, and multi-goal tasks. Our results show that our method outperforms common policy optimization baselines in terms of task success rate and low-variance solutions.
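The closed-form $L^2$-Wasserstein (Bures-Wasserstein) distance between two Gaussian distributions underpins the distance constraint mentioned above. The small numpy sketch below is our own illustration of that standard formula (using an eigendecomposition-based matrix square root), not the authors' implementation.

```python
import numpy as np

def sqrtm_psd(A):
    """Matrix square root of a symmetric PSD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return (V * np.sqrt(np.clip(w, 0, None))) @ V.T

def w2_gaussians(m1, S1, m2, S2):
    """Squared L2-Wasserstein distance between N(m1, S1) and N(m2, S2):
    ||m1 - m2||^2 + tr(S1 + S2 - 2 (S1^{1/2} S2 S1^{1/2})^{1/2}),
    where the trace term is the squared Bures distance between covariances."""
    r1 = sqrtm_psd(S1)
    cross = sqrtm_psd(r1 @ S2 @ r1)
    return float(np.sum((m1 - m2) ** 2) + np.trace(S1 + S2 - 2 * cross))
```

For two unit-covariance Gaussians whose means differ by one unit, the squared distance is exactly 1; a Wasserstein gradient flow over a GMM can use this quantity componentwise to keep each Gaussian's update step small.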

Works in quantum machine learning (QML) over the past few years indicate that QML algorithms can function just as well as their classical counterparts, and even outperform them in some cases. Among the corpus of recent work, many current QML models take advantage of variational quantum algorithm (VQA) circuits, given that their scale is typically small enough to be compatible with NISQ devices and the method of automatic differentiation for optimizing circuit parameters is familiar to machine learning (ML). While the results hold interesting promise for an era when quantum machines are more readily accessible, if one can achieve similar results through non-quantum methods then there may be a more near-term advantage available to practitioners. To this end, this work investigates stochastic methods inspired by a variational quantum version of the long short-term memory (LSTM) model in an attempt to approach the reported successes in performance and rapid convergence. By analyzing the performance of classical, stochastic, and quantum methods, this work aims to elucidate whether it is possible to achieve some of QML's major reported benefits on classical machines by incorporating aspects of its stochasticity.

Quantum repeaters are a critical technology for scalable quantum networking. One of the key challenges is managing how repeaters provide quantum entanglement to distant quantum computers. We focus on the RuleSet architecture, a decentralized way to manage repeaters. The RuleSet concept is designed to scale the management of future quantum repeaters, and is well suited to this because of its flexibility and asynchronous operation; however, it is still defined only at the conceptual level, and writing raw RuleSets by hand is very hard. In this thesis, we introduce a new programming language, called "RuLa", for writing RuleSets in an intuitive and coherent way. The way RuLa defines Rules and RuleSets closely mirrors how they are executed, so that programmers can construct RuleSets the way they want repeaters to execute them. We provide examples of how RuleSets are defined in RuLa and of the compiler's output. We also discuss future use cases and applications of this language.

Long-distance quantum communication presents a significant challenge, as maintaining the fidelity of qubits can be difficult. This issue can be addressed through the use of quantum repeaters, which extend entanglement via Bell measurements. However, despite their necessity for enabling a wide-area quantum internet, the deployment cost of quantum repeaters can be prohibitively expensive, so it is important to develop a quantum repeater deployment model that strikes a balance between cost and effectiveness. In this work, we present novel heuristic models to quickly determine a minimum number of quantum repeaters to deploy in large-scale networks to provide end-to-end connectivity between all end hosts. The results show that, compared to the linear programming approach, the heuristic methods can find near-optimal solutions while reducing the execution time from days to seconds when evaluated against several synthetic and real-world networks such as SURFnet and ESnet. As reliability is key for any network, we also demonstrate that the heuristic method can determine deployment models that can endure up to two link/node failures.

Sequential Latin hypercube designs have recently received great attention for computer experiments. Much of the work has been restricted to invariant spaces. The related systematic construction methods are inflexible, while algorithmic methods are ineffective for large designs. For such designs in space contraction, systematic construction methods have not been investigated yet. This paper proposes a new method for constructing sequential Latin hypercube designs via good lattice point sets in a variety of experimental spaces. These designs are called sequential good lattice point sets. Moreover, we provide fast and efficient approaches for identifying the (nearly) optimal sequential good lattice point sets under a given criterion. Combining with the linear level permutation technique, we obtain a class of asymptotically optimal sequential Latin hypercube designs in invariant spaces, where the $L_1$-distance in each stage is either optimal or asymptotically optimal. Numerical results demonstrate that the sequential good lattice point set has a better space-filling property than existing sequential Latin hypercube designs in the invariant space. It is also shown that the sequential good lattice point sets have lower computational complexity and greater adaptability.
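As an illustration of the underlying construction, a basic (non-sequential) good lattice point set can be generated as follows. This is a textbook-style sketch, not the paper's sequential variant; the function name and generator vector are our own choices.

```python
from math import gcd

def good_lattice_points(n, gens):
    """Good lattice point (GLP) set: n runs in [0, 1)^s generated by a
    vector of integers each coprime to n.

    Because gcd(h, n) = 1, the map i -> i*h (mod n) is a bijection on
    {0, ..., n-1}, so every column is a permutation of {0/n, ..., (n-1)/n}
    and the design is a Latin hypercube by construction.
    """
    assert all(gcd(h, n) == 1 for h in gens)
    return [[(i * h % n) / n for h in gens] for i in range(1, n + 1)]
```

For instance, `good_lattice_points(7, (1, 3))` yields 7 two-dimensional points whose coordinates in each column are exactly the multiples of 1/7, one per run.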

Deep reinforcement learning algorithms can perform poorly in real-world tasks due to the discrepancy between source and target environments. This discrepancy is commonly viewed as the disturbance in transition dynamics. Many existing algorithms learn robust policies by modeling the disturbance and applying it to source environments during training, which usually requires prior knowledge about the disturbance and control of simulators. However, these algorithms can fail in scenarios where the disturbance from target environments is unknown or is intractable to model in simulators. To tackle this problem, we propose a novel model-free actor-critic algorithm -- namely, state-conservative policy optimization (SCPO) -- to learn robust policies without modeling the disturbance in advance. Specifically, SCPO reduces the disturbance in transition dynamics to that in state space and then approximates it by a simple gradient-based regularizer. The appealing features of SCPO include that it is simple to implement and does not require additional knowledge about the disturbance or specially designed simulators. Experiments in several robot control tasks demonstrate that SCPO learns robust policies against the disturbance in transition dynamics.
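The gradient-based regularizer can be pictured as an adversarial perturbation in state space: nudge the state in the direction that most hurts the agent, then train the policy to remain performant there. The sketch below is our own illustration of that idea (with a finite-difference gradient standing in for autodiff), not the paper's exact algorithm.

```python
import numpy as np

def gradient_perturbed_state(state, value_fn, eps=0.1, h=1e-5):
    """Approximate the worst-case state within an eps-ball around `state`
    by a single signed gradient step against the value function.

    The gradient of value_fn w.r.t. the state is estimated with central
    finite differences; a learned critic and autodiff would replace this
    in a practical implementation.
    """
    grad = np.zeros_like(state)
    for k in range(state.size):
        e = np.zeros_like(state)
        e[k] = h
        grad[k] = (value_fn(state + e) - value_fn(state - e)) / (2 * h)
    # Step opposite the gradient: the direction that decreases the value,
    # i.e. the adversarial disturbance in state space.
    return state - eps * np.sign(grad)
```

Training the actor and critic on such perturbed states, instead of modeling the dynamics disturbance directly in a simulator, is the intuition behind reducing a transition-dynamics disturbance to a state-space one.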

Link prediction is a fundamental task on graphs. Inspired by traditional path-based methods, in this paper we propose a general and flexible representation learning framework based on paths for link prediction. Specifically, we define the representation of a pair of nodes as the generalized sum of all path representations, with each path representation as the generalized product of the edge representations in the path. Motivated by the Bellman-Ford algorithm for solving the shortest path problem, we show that the proposed path formulation can be efficiently solved by the generalized Bellman-Ford algorithm. To further improve the capacity of the path formulation, we propose the Neural Bellman-Ford Network (NBFNet), a general graph neural network framework that solves the path formulation with learned operators in the generalized Bellman-Ford algorithm. The NBFNet parameterizes the generalized Bellman-Ford algorithm with three neural components, namely the INDICATOR, MESSAGE, and AGGREGATE functions, which correspond to the boundary condition, multiplication operator, and summation operator respectively. The NBFNet is very general, covers many traditional path-based methods, and can be applied to both homogeneous graphs and multi-relational graphs (e.g., knowledge graphs) in both transductive and inductive settings. Experiments on both homogeneous graphs and knowledge graphs show that the proposed NBFNet outperforms existing methods by a large margin in both transductive and inductive settings, achieving new state-of-the-art results.
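The generalized Bellman-Ford iteration with pluggable boundary, multiplication, and aggregation operators can be sketched as follows. This is a hand-rolled scalar illustration (NBFNet replaces the operators with learned neural components); with the default operators it reduces to the ordinary (min, +) shortest-path recurrence.

```python
def generalized_bellman_ford(num_nodes, edges, source,
                             indicator=(0.0, float('inf')),
                             multiply=lambda a, b: a + b,
                             aggregate=min):
    """Generalized Bellman-Ford over a semiring.

    indicator = (one, zero): the boundary condition assigns `one` to the
    source node and `zero` to all others (INDICATOR).  Each relaxation
    combines a node representation with an edge weight via `multiply`
    (MESSAGE) and merges candidates via `aggregate` (AGGREGATE).
    edges is a list of directed (u, v, weight) triples.
    """
    one, zero = indicator
    rep = [one if v == source else zero for v in range(num_nodes)]
    for _ in range(num_nodes - 1):
        new = list(rep)
        for u, v, w in edges:
            new[v] = aggregate([new[v], multiply(rep[u], w)])
        rep = new
    return rep
```

Swapping in, say, `multiply=lambda a, b: a * b` and `aggregate=max` with `indicator=(1.0, 0.0)` would instead compute the most reliable path over edge probabilities, illustrating how one recurrence covers many traditional path-based methods.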

Deep convolutional neural networks (CNNs) have recently achieved great success in many visual recognition tasks. However, existing deep neural network models are computationally expensive and memory intensive, hindering their deployment in devices with low memory resources or in applications with strict latency requirements. Therefore, a natural thought is to perform model compression and acceleration in deep networks without significantly decreasing the model performance. During the past few years, tremendous progress has been made in this area. In this paper, we survey recently developed techniques for compacting and accelerating CNN models. These techniques are roughly categorized into four schemes: parameter pruning and sharing, low-rank factorization, transferred/compact convolutional filters, and knowledge distillation. Methods of parameter pruning and sharing are described first, and the other techniques are introduced afterward. For each scheme, we provide insightful analysis regarding the performance, related applications, advantages, and drawbacks. We then go through a few very recent additional successful methods, for example, dynamic capacity networks and stochastic depth networks. After that, we survey the evaluation metrics, the main datasets used for evaluating model performance, and recent benchmarking efforts. Finally, we conclude this paper and discuss remaining challenges and possible directions on this topic.
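As a concrete instance of the parameter-pruning scheme, magnitude-based weight pruning can be sketched in a few lines of numpy. This is an illustrative sketch of the general idea, not the implementation of any specific surveyed method.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Magnitude-based parameter pruning: zero out roughly the given
    fraction of weights with the smallest absolute values.

    Returns the pruned weights and a boolean keep-mask; in practice the
    mask is reapplied after each fine-tuning step so pruned weights stay
    zero.  Ties at the threshold may prune slightly more than requested.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy(), np.ones_like(weights, dtype=bool)
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    mask = np.abs(weights) > threshold
    return weights * mask, mask
```

For a 2x2 weight matrix pruned at 50% sparsity, the two smallest-magnitude entries are zeroed while the dominant weights survive, typically followed by fine-tuning to recover accuracy.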
