
We present a simple programming language based on Gödel numbering and prime factorization, enhanced with explicit, scoped loops that allow for easy program composition. Further, we present a theorem prover for expressing and working with formal systems. The theorem prover is simple, relying merely on a substitution rule and set equality to derive theorems. Finally, we represent the programming language in the theorem prover. We show the syntax and semantics of both, and then provide a few example programs and their evaluation.
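As an illustration of the encoding idea, the following is a minimal sketch of Gödel numbering via prime factorization, assuming the standard textbook construction (a sequence of naturals maps to a product of prime powers); it does not reproduce the paper's actual language or its scoped loops. The helpers `prime` and `factorint` come from sympy.

```python
from sympy import prime, factorint  # n-th prime and integer factorization

def godel_encode(values):
    """Encode a sequence of naturals as prod_i p_i^(v_i + 1), p_i the i-th prime."""
    n = 1
    for i, v in enumerate(values):
        n *= prime(i + 1) ** (v + 1)  # +1 so that zero entries are still recorded
    return n

def godel_decode(n):
    """Recover the sequence from the exponents of the prime factorization."""
    exps = factorint(n)  # {prime: exponent}
    out, i = [], 1
    while prime(i) in exps:
        out.append(exps[prime(i)] - 1)
        i += 1
    return out

assert godel_decode(godel_encode([3, 0, 7])) == [3, 0, 7]
```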

Related content

Covert communication is focused on hiding the mere existence of communication from unwanted listeners via the physical layer. In this work, we consider the problem of perfect covert communication in wireless networks. Specifically, harnessing an Intelligent Reflecting Surface (IRS), we turn our attention to schemes that allow the transmitter to completely hide the communication, with zero energy at the unwanted listener (Willie) and hence zero probability of detection. Applications of such schemes go beyond simple covertness, as we prevent detectability or decoding even when the codebook, timings and channel characteristics are known to Willie. That is, perfect covertness also ensures Willie is unable to decode, even assuming communication took place and knowing the codebook. We define perfect covertness, give a necessary and sufficient condition for it in IRS-assisted communication, and define the corresponding optimization problem. For N = 2 IRS elements, we analyze the probability that a solution exists and derive it in closed form. We then investigate the case of N > 2 IRS elements by analyzing the probability of such a zero-detection solution, and we prove that this probability converges to 1 as the number of IRS elements tends to infinity. We provide an iterative algorithm to find a perfectly covert scheme and prove its convergence. The results are also supported by simulations, showing that a small number of IRS elements allows for a positive rate at the legitimate user yet with zero probability of detection at an unwanted listener.
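The sketch below illustrates the zero-detection condition under an assumed standard narrowband IRS model, in which the effective channel at Willie is the direct path plus a sum of unit-modulus-phased cascade gains, h_d + Σ_k r_k e^{jθ_k}; cancellation is feasible exactly when |h_d| lies in the reachable magnitude interval of the reflected sum. The channel statistics and the feasibility test are illustrative assumptions, not the paper's formulation or algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def perfectly_covert_feasible(h_direct, cascade_gains):
    """Can unit-modulus phases theta_k make h_direct + sum_k r_k e^{j theta_k} = 0?
    The reachable magnitudes of the reflected sum form the interval
    [max(0, 2*max(r) - sum(r)), sum(r)], so cancellation is feasible iff
    |h_direct| falls inside it."""
    r = np.abs(cascade_gains)
    lo, hi = max(0.0, 2 * r.max() - r.sum()), r.sum()
    return lo <= abs(h_direct) <= hi

# Monte Carlo estimate of the feasibility probability for N = 2 and N = 8 elements,
# assuming Rayleigh-faded direct and cascaded (Tx-IRS times IRS-Willie) channels.
for N in (2, 8):
    trials, hits = 10_000, 0
    for _ in range(trials):
        h_d = (rng.normal() + 1j * rng.normal()) / np.sqrt(2)
        casc = ((rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)
                * (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2))
        hits += perfectly_covert_feasible(h_d, casc)
    print(N, hits / trials)   # the estimate grows toward 1 as N increases
```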

Class invariants -- consistency constraints preserved by every operation on objects of a given type -- are fundamental to building, understanding and verifying object-oriented programs. For verification, however, they raise difficulties, which have not yet received a generally accepted solution. The present work introduces a proof rule meant to address these issues and allow verification tools to benefit from invariants. It clarifies the notion of invariant and identifies the three associated problems: callbacks, furtive access and reference leak. As an example, the 2016 Ethereum DAO bug, in which $50 million was stolen, resulted from a callback invalidating an invariant. The discussion starts with a simplified model of computation and an associated proof rule, demonstrating its soundness. It then removes one by one the three simplifying assumptions, each removal raising one of the three issues, and leading to a corresponding adaptation to the proof rule. The final version of the rule can tackle tricky examples, including "challenge problems" listed in the literature.
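The following toy Python sketch (not the actual Ethereum/Solidity code) shows the callback problem in miniature: client code is called back while the object's invariant is temporarily broken, and re-enters the object to exploit the inconsistent state, in the spirit of the DAO incident.

```python
class Account:
    """Toy ledger whose intended invariant is balance >= 0 at every stable point
    (i.e., whenever no method of the class is executing)."""

    def __init__(self):
        self.balance = 100

    def withdraw(self, amount, notify):
        if amount <= self.balance:
            # The external callback runs while the invariant is temporarily broken:
            # the caller has been promised `amount`, but balance is not yet updated.
            notify(self)              # callback into untrusted client code
            self.balance -= amount

class Attacker:
    def __init__(self):
        self.reentered = False

    def __call__(self, account):
        if not self.reentered:        # re-enter exactly once, DAO-style
            self.reentered = True
            account.withdraw(100, self)

acct, thief = Account(), Attacker()
acct.withdraw(100, thief)
print(acct.balance)  # -100: the callback let the same funds be withdrawn twice
```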

Humans understand language by extracting information (meaning) from sentences, combining it with existing commonsense knowledge, and then performing reasoning to draw conclusions. While large language models (LLMs) such as GPT-3 and ChatGPT are able to leverage patterns in the text to solve a variety of NLP tasks, they fall short in problems that require reasoning. They also cannot reliably explain the answers generated for a given question. In order to emulate humans better, we propose STAR, a framework that combines LLMs with Answer Set Programming (ASP). We show how LLMs can be used to effectively extract knowledge -- represented as predicates -- from language. Goal-directed ASP is then employed to reliably reason over this knowledge. We apply the STAR framework to three different NLU tasks requiring reasoning: qualitative reasoning, mathematical reasoning, and goal-directed conversation. Our experiments reveal that STAR is able to bridge the reasoning gap in NLU tasks, leading to significant performance improvements, especially for smaller LLMs, i.e., LLMs with a smaller number of parameters. NLU applications developed using the STAR framework are also explainable: along with the predicates generated, a justification in the form of a proof tree can be produced for a given output.
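A rough sketch of the pipeline shape is given below. The extracted facts are hard-coded stand-ins for what an LLM prompt would return, and the tiny backward-chainer only mimics the idea of a goal-directed query with a printed proof trace; a real STAR-style deployment would hand the predicates to an ASP system such as s(CASP) rather than this toy loop.

```python
# Hypothetical output of the LLM extraction step (predicate facts as tuples).
facts = {("holds", "bird", "tweety"), ("holds", "not_injured", "tweety")}

# Commonsense rule: birds that are not injured can fly (head, body).
rules = [
    (("holds", "can_fly", "?x"),
     [("holds", "bird", "?x"), ("holds", "not_injured", "?x")]),
]

def prove(goal, depth=0):
    """Toy goal-directed proof search that prints a proof tree as it goes."""
    indent = "  " * depth
    if goal in facts:
        print(f"{indent}{goal} [fact]")
        return True
    for head, body in rules:
        if head[0] == goal[0] and head[1] == goal[1]:
            binding = {head[2]: goal[2]}                 # bind ?x to the query constant
            subgoals = [(p, a, binding.get(v, v)) for (p, a, v) in body]
            print(f"{indent}{goal} [rule]")
            if all(prove(g, depth + 1) for g in subgoals):
                return True
    return False

print(prove(("holds", "can_fly", "tweety")))
```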

The specifics of data layout can be important for the efficiency of functional programs and interaction with external libraries. In this paper, we develop a type-theoretic approach to data layout that could be used as a typed intermediate language in a compiler or to give a programmer more control. Our starting point is a computational interpretation of the semi-axiomatic sequent calculus for intuitionistic logic that defines abstract notions of cells and addresses. We refine this semantics so addresses have more structure to reflect possible alternative layouts without fundamentally departing from intuitionistic logic. We then add recursive types and explore example programs and properties of the resulting language.
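To make "alternative layouts of the same logical type" concrete, the sketch below shows two ctypes layouts of the pair type int * (int * int): one boxes the nested pair behind an address, the other flattens it into a single contiguous cell. This only illustrates the design space the paper types; it does not reflect its sequent-calculus semantics of cells and addresses.

```python
import ctypes

class Inner(ctypes.Structure):
    _fields_ = [("y", ctypes.c_int64), ("z", ctypes.c_int64)]

class BoxedPair(ctypes.Structure):      # layout A: nested pair lives in a separate cell
    _fields_ = [("x", ctypes.c_int64), ("inner", ctypes.POINTER(Inner))]

class FlatPair(ctypes.Structure):       # layout B: nested pair laid out inline
    _fields_ = [("x", ctypes.c_int64), ("inner", Inner)]

# Same logical value, different memory footprints and access paths.
print(ctypes.sizeof(BoxedPair), ctypes.sizeof(FlatPair))   # e.g. 16 vs 24 on a 64-bit machine
```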

Simple temporal problems represent a powerful class of models capable of describing the temporal relations between events that arise in many real-world applications such as logistics, robot planning and management systems. The classic simple temporal problem permits each event to have only a single release and due date. In this paper, we focus on the case where events may have an arbitrarily large number of release and due dates. This type of problem, however, has been referred to by various names. In order to simplify and standardize nomenclature, we introduce the name Simple Disjunctive Temporal Problem. We provide three mathematical models to describe this problem using constraint programming and linear programming. To efficiently solve simple disjunctive temporal problems, we design two new algorithms inspired by previous research, both of which exploit the problem's structure to significantly reduce their space complexity. Additionally, we implement algorithms from the literature and provide the first in-depth empirical study comparing methods to solve simple disjunctive temporal problems across a wide range of experiments. Our analysis and conclusions offer guidance for future researchers and practitioners when tackling similar temporal constraint problems in new applications. All results, source code and instances are made publicly available to further assist future research.
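One standard way to write such a problem as a mixed-integer model is a big-M formulation with a binary selector per time window, sketched below with the pulp library; the events, windows, and difference constraint are invented for illustration, and the paper's three models may be formulated differently.

```python
import pulp

# Each event must fall inside one of several [release, due] windows,
# and event time differences are bounded.
windows = {
    "load":   [(0, 10), (20, 30)],
    "depart": [(5, 15), (40, 50)],
}
diff_constraints = [("load", "depart", 2, 8)]   # depart happens 2..8 units after load

M = 1000                                         # larger than any feasible time value
prob = pulp.LpProblem("sdtp", pulp.LpMinimize)

t = {e: pulp.LpVariable(f"t_{e}", lowBound=0) for e in windows}
b = {(e, i): pulp.LpVariable(f"b_{e}_{i}", cat="Binary")   # which window each event uses
     for e in windows for i in range(len(windows[e]))}

for e, ws in windows.items():
    prob += pulp.lpSum(b[e, i] for i in range(len(ws))) == 1
    for i, (lo, hi) in enumerate(ws):
        prob += t[e] >= lo - M * (1 - b[e, i])   # bounds active only if window i is selected
        prob += t[e] <= hi + M * (1 - b[e, i])

for a, c, lo, hi in diff_constraints:
    prob += t[c] - t[a] >= lo
    prob += t[c] - t[a] <= hi

prob += pulp.lpSum(t.values())                   # any objective; feasibility is the point
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.LpStatus[prob.status], {e: t[e].value() for e in windows})
```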

The problem of computing minimally sparse solutions of under-determined linear systems is $NP$-hard in general. Subsets with extra properties may allow efficient algorithms; most notably, problems with the restricted isometry property (RIP) can be solved by convex $\ell_1$-minimization. While these classes have been very successful, they leave out many practical applications. In this paper, we consider adaptable classes that are tractable after training on a curriculum of increasingly difficult samples. The setup is intended as a candidate model for a human mathematician, who may not be able to tackle an arbitrary proof right away, but may be successful in relatively flexible subclasses, or areas of expertise, after training on a suitable curriculum.
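For reference, the tractable baseline mentioned above is the classical basis-pursuit reformulation of $\ell_1$-minimization as a linear program, sketched here with scipy; the dimensions and sparsity level are arbitrary, and this is the textbook convex relaxation rather than anything specific to the paper's curriculum setup.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)

# Under-determined system A x = b with a sparse ground truth.
m, n, k = 20, 60, 3
A = rng.normal(size=(m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
b = A @ x_true

# Basis pursuit:  min ||x||_1  s.t.  A x = b,  as an LP with the split x = u - v, u, v >= 0.
c = np.ones(2 * n)                       # minimize sum(u) + sum(v) = ||x||_1
A_eq = np.hstack([A, -A])                # A (u - v) = b
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * n))
x_hat = res.x[:n] - res.x[n:]
print(np.max(np.abs(x_hat - x_true)))    # small when A behaves well (e.g., satisfies RIP)
```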

Even for known nonlinear dynamical systems, feedback controller synthesis is a difficult problem that often requires leveraging the particular structure of the dynamics to induce a stable closed-loop system. For general nonlinear models, including those fit to data, there may not be enough known structure to reliably synthesize a stabilizing feedback controller. In this paper, we propose a novel nonlinear tracking controller formulation based on a state-dependent Riccati equation for general nonlinear control-affine systems. Our formulation depends on a nonlinear factorization of the system of vector fields defining the control-affine dynamics, which we show always exists under mild smoothness assumptions. We discuss how this factorization can be learned from a finite set of data. On a variety of simulated nonlinear dynamical systems, we demonstrate the efficacy of learned versions of our controller in stable trajectory tracking. Alongside our method, we evaluate recent ideas in jointly learning a controller and stabilizability certificate for known dynamical systems; we show empirically that such methods can be data-inefficient in comparison.
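The sketch below shows the structure of such a state-dependent Riccati equation (SDRE) controller on a pendulum with a hand-written factorization A(x) of the control-affine dynamics, solving the Riccati equation with scipy at each step. The paper learns the factorization from data and handles tracking more generally, so this is only an illustration of the control law's shape under assumed dynamics.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Pendulum  x1' = x2,  x2' = -(g/l) sin(x1) + u,  written in the state-dependent
# linear-like form  x' = A(x) x + B u  with  A(x)[1,0] = -(g/l) * sin(x1)/x1.
g_over_l = 9.81

def A_of_x(x):
    s = np.sinc(x[0] / np.pi)            # sin(x1)/x1, well defined at x1 = 0
    return np.array([[0.0, 1.0], [-g_over_l * s, 0.0]])

B = np.array([[0.0], [1.0]])
Q, R = np.diag([10.0, 1.0]), np.array([[0.1]])

def sdre_control(x, x_ref):
    """Solve the Riccati equation at the current state, then apply LQR-style feedback."""
    P = solve_continuous_are(A_of_x(x), B, Q, R)
    K = np.linalg.solve(R, B.T @ P)      # K = R^{-1} B^T P
    return -K @ (x - x_ref)

# Simple Euler rollout toward the reference x_ref = [0, 0]
x, x_ref, dt = np.array([1.0, 0.0]), np.zeros(2), 0.01
for _ in range(500):
    u = sdre_control(x, x_ref)
    x = x + dt * np.array([x[1], -g_over_l * np.sin(x[0]) + u[0]])
print(x)  # approaches the reference as the controller stabilizes the system
```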

We tackle the problem of joint frequency and power allocation while emphasizing the generalization capability of a deep reinforcement learning model. Most existing methods solve reinforcement learning-based wireless problems for a specific, pre-determined wireless network scenario; the performance of a trained agent tends to be very specific to that network and deteriorates when the agent is used in a different operating scenario (e.g., one differing in size, neighborhood, or mobility, among others). We demonstrate an approach that enhances training so that the deployed model generalizes better at inference time in a distributed multi-agent setting under a hostile jamming environment. With these enhancements, we show improved training and inference performance of the proposed methods when tested on previously unseen simulated wireless networks of different sizes and architectures. More importantly, to demonstrate practical impact, the end-to-end solution was implemented on an embedded software-defined radio and validated using over-the-air evaluation.
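To fix ideas, the toy sketch below uses tabular Q-learning over a joint (channel, power) action space against a made-up sweeping jammer. It is a deliberately tiny stand-in for the deep, distributed, multi-agent approach described above, intended only to show the action space and the throughput-versus-power trade-off in the reward.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

channels, powers = range(4), [0.1, 0.5, 1.0]
actions = list(itertools.product(channels, powers))      # joint (channel, power) choices

Q = np.zeros((len(channels), len(actions)))               # tabular stand-in for a deep network
alpha, gamma, eps = 0.1, 0.9, 0.1

state = 0                                                  # state = jammer's last position
for step in range(20_000):
    a = rng.integers(len(actions)) if rng.random() < eps else int(Q[state].argmax())
    ch, p = actions[a]
    jammed_ch = (state + 1) % len(channels)                # jammer sweeps deterministically
    interference = 10.0 if ch == jammed_ch else 0.1
    reward = np.log2(1 + p / interference) - 0.2 * p       # throughput minus a power cost
    next_state = jammed_ch
    Q[state, a] += alpha * (reward + gamma * Q[next_state].max() - Q[state, a])
    state = next_state

print(actions[int(Q[0].argmax())])   # learned (channel, power) when the jammer was last on channel 0
```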

Empirical game-theoretic analysis (EGTA) is a general framework for reasoning about complex games using agent-based simulation. Data from simulating select strategy profiles is employed to estimate a cogent and tractable game model approximating the underlying game. To date, EGTA methodology has focused on game models in normal form; though the simulations play out in sequential observations and decisions over time, the game model abstracts away this temporal structure. Richer models of \textit{extensive-form games} (EFGs) provide a means to capture temporal patterns in action and information, using tree representations. We propose \textit{tree-exploiting EGTA} (TE-EGTA), an approach to incorporate EFG models into EGTA\@. TE-EGTA constructs game models that express observations and temporal organization of activity, albeit at a coarser grain than the underlying agent-based simulation model. The idea is to exploit key structure while maintaining tractability. We establish theoretically and experimentally that exploiting even a little temporal structure can vastly reduce estimation error in strategy-profile payoffs compared to the normal-form model. Further, we explore the implications of EFG models for iterative approaches to EGTA, where strategy spaces are extended incrementally. Our experiments on several game instances demonstrate that TE-EGTA can also improve performance in the iterative setting, as measured by the quality of equilibrium approximation as the strategy spaces are expanded.
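The stylized numerical sketch below illustrates the variance argument only: when profiles share a noisy early stage, pooling those samples across profiles (as a tree-aware estimator can) reduces payoff-estimation error relative to averaging whole-episode returns per profile, as in the normal-form model. The two-stage game and both estimators are invented for this sketch and are far simpler than TE-EGTA's actual game-model construction.

```python
import numpy as np

rng = np.random.default_rng(0)

profiles = range(5)
stage2_mean = {p: 0.1 * p for p in profiles}   # profile-specific late-stage payoff
n_per_profile = 20

def simulate(p):
    stage1 = rng.normal(0.0, 5.0)              # high-variance stage shared by all profiles
    stage2 = rng.normal(stage2_mean[p], 0.5)   # low-variance profile-specific stage
    return stage1, stage2

errs_nf, errs_te = [], []
for _ in range(2_000):
    data = {p: [simulate(p) for _ in range(n_per_profile)] for p in profiles}
    shared = np.mean([s1 for d in data.values() for s1, _ in d])   # pooled stage-1 estimate
    for p in profiles:
        true_payoff = stage2_mean[p]                               # stage 1 has mean 0
        nf = np.mean([s1 + s2 for s1, s2 in data[p]])              # normal-form estimate
        te = shared + np.mean([s2 for _, s2 in data[p]])           # tree-exploiting estimate
        errs_nf.append((nf - true_payoff) ** 2)
        errs_te.append((te - true_payoff) ** 2)

print(np.mean(errs_nf), np.mean(errs_te))   # pooling the shared stage cuts the error sharply
```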

Dynamic programming (DP) solves a variety of structured combinatorial problems by iteratively breaking them down into smaller subproblems. In spite of their versatility, DP algorithms are usually non-differentiable, which hampers their use as a layer in neural networks trained by backpropagation. To address this issue, we propose to smooth the max operator in the dynamic programming recursion, using a strongly convex regularizer. This makes it possible to relax both the optimal value and the optimal solution of the original combinatorial problem, and turns a broad class of DP algorithms into differentiable operators. Theoretically, we provide a new probabilistic perspective on backpropagating through these DP operators, and relate them to inference in graphical models. We derive two particular instantiations of our framework, a smoothed Viterbi algorithm for sequence prediction and a smoothed DTW algorithm for time-series alignment. We showcase these instantiations on two structured prediction tasks and on structured and sparse attention for neural machine translation.
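A minimal numpy sketch of the idea, assuming the negative-entropy regularizer: the smoothed min is a log-sum-exp, and plugging it into the DTW recursion gives a differentiable alignment cost. Gradients, which the paper obtains by backpropagating through the recursion, are omitted here.

```python
import numpy as np

def softmin(args, gamma):
    """Entropy-smoothed min:  min_gamma(a) = -gamma * log(sum_i exp(-a_i / gamma)).
    Differentiable for gamma > 0; recovers the hard min as gamma -> 0."""
    a = -np.asarray(args) / gamma
    m = a.max()
    return -gamma * (m + np.log(np.exp(a - m).sum()))

def soft_dtw(x, y, gamma=1.0):
    """Soft-DTW value between two 1-D series with squared-distance costs."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (x[i - 1] - y[j - 1]) ** 2
            D[i, j] = cost + softmin([D[i - 1, j], D[i, j - 1], D[i - 1, j - 1]], gamma)
    return D[n, m]

x = np.array([0.0, 1.0, 2.0, 1.0])
y = np.array([0.0, 1.2, 1.9, 0.9])
print(soft_dtw(x, y, gamma=1.0), soft_dtw(x, y, gamma=1e-3))   # the latter approximates hard DTW
```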
