Automatic verification of array-manipulating programs is a challenging problem because it often amounts to the inference of inductive quantified loop invariants which, in some cases, may not even be first-order expressible. In this paper, we suggest a novel verification technique that is based on induction on a user-defined rank of program states as an alternative to loop invariants. Our technique, dubbed inductive rank reduction, works in two steps. Firstly, we simplify the verification problem and prove that the program is correct when the input state contains an input array of length B or less, using the length of the array as the rank of the state. Secondly, we employ a squeezing function g which converts a program state σ with an array of length > B to a state g(σ) containing an array whose length is smaller by at least one. We prove that when g satisfies certain natural conditions, then if the program violates its specification on σ, it does so also on g(σ). The correctness of the program on inputs with arrays of arbitrary lengths follows by induction. We make our technique automatic for array programs whose length of execution is proportional to the length of the input arrays by (i) performing the first step using symbolic execution, (ii) verifying the conditions required of g using Z3, and (iii) providing a heuristic procedure for synthesizing g. We implemented our technique and applied it successfully to several interesting array-manipulating programs, including a bidirectional summation program whose loop invariant cannot be expressed in first-order logic while its specification is quantifier-free.
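
The proof structure can be made concrete on a toy example. Below is a minimal Python sketch (not the authors' tool, which uses symbolic execution and Z3): a small summation program, an assumed bound B, a squeezing function that drops the last array element, and plain exhaustive/random testing in place of formal checks of the base case and the squeezing condition.

```python
# Illustrative sketch of inductive rank reduction on a toy summation program.
# Hypothetical example: the real technique uses symbolic execution and Z3;
# here the checks are plain exhaustive/random testing.
import itertools, random

B = 3  # assumed base-case bound on array length

def program(a):
    # toy loop program: sums the array front-to-back
    s = 0
    for x in a:
        s += x
    return s

def spec(a, result):
    # specification: result equals the mathematical sum
    return result == sum(a)

def violates(a):
    return not spec(a, program(a))

def squeeze(a):
    # squeezing function g: drop the last element, shrinking the rank
    # (the array length) by one
    return a[:-1]

def check_base_case(domain=(-1, 0, 1)):
    # check correctness for every input array of length <= B over a small domain
    for n in range(B + 1):
        for a in itertools.product(domain, repeat=n):
            if violates(list(a)):
                return False
    return True

def check_squeeze_condition(trials=10_000, max_len=8):
    # condition on g: a violation on sigma must also be a violation on g(sigma)
    for _ in range(trials):
        n = random.randint(B + 1, max_len)
        sigma = [random.randint(-3, 3) for _ in range(n)]
        if violates(sigma) and not violates(squeeze(sigma)):
            return False
    return True

if __name__ == "__main__":
    # base case + squeezing condition => correctness for all lengths, by induction
    print("base case holds:", check_base_case())
    print("squeeze condition holds (tested):", check_squeeze_condition())
```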

Related content

In multi-cell non-orthogonal multiple access (NOMA) systems, designing an appropriate user grouping strategy is an open problem due to diverse quality of service (QoS) requirements and inter-cell interference. In this paper, we exploit both game theory and graph theory to study QoS-aware user grouping strategies, aiming at minimizing power consumption in downlink multi-cell NOMA systems. Under different QoS requirements, we derive the optimal successive interference cancellation (SIC) decoding order with inter-cell interference, which differs from the existing SIC decoding order of increasing channel gains, and obtain the corresponding power allocation strategy. Based on this, the exact potential game model of the user grouping strategies adopted by multiple cells is formulated. We prove that, in this game, the problem for each player to find a grouping strategy can be converted into the problem of searching for specific negative loops in the graph composed of users. The Bellman-Ford algorithm is extended to find these negative loops. Furthermore, we design a greedy-based suboptimal strategy to approach the optimal solution in polynomial time. Extensive simulations confirm the effectiveness of grouping users with consideration of QoS and inter-cell interference, and show that the proposed strategies can considerably reduce total power consumption compared with reference strategies.
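
The negative-loop search at the heart of the grouping step can be illustrated with a plain Bellman-Ford-style detector. The sketch below works on an arbitrary toy graph with made-up weights; it omits the NOMA-specific construction of the user graph and the paper's extensions of the algorithm.

```python
# Minimal Bellman-Ford negative-cycle detection on a toy directed graph.
# The edge weights here are arbitrary placeholders; in the paper they would
# encode the power cost of regrouping users, which is not modelled here.

def find_negative_cycle(num_nodes, edges):
    """edges: list of (u, v, w). Returns a list of nodes on a negative
    cycle, or None if the graph has no negative cycle."""
    dist = [0.0] * num_nodes          # start all at 0: detects any negative cycle
    pred = [-1] * num_nodes
    last_updated = -1
    for _ in range(num_nodes):
        last_updated = -1
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                pred[v] = u
                last_updated = v
        if last_updated == -1:
            return None               # converged: no negative cycle
    # walk back num_nodes steps to land inside the cycle, then recover it
    x = last_updated
    for _ in range(num_nodes):
        x = pred[x]
    cycle, cur = [x], pred[x]
    while cur != x:
        cycle.append(cur)
        cur = pred[cur]
    return cycle[::-1]

if __name__ == "__main__":
    # toy 4-user graph with one negative loop: 1 -> 2 -> 3 -> 1
    edges = [(0, 1, 4.0), (1, 2, -2.0), (2, 3, -1.0), (3, 1, 1.0)]
    print(find_negative_cycle(4, edges))   # [1, 2, 3]
```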

Rational verification is the problem of determining which temporal logic properties will hold in a multi-agent system, under the assumption that agents in the system act rationally, by choosing strategies that collectively form a game-theoretic equilibrium. Previous work in this area has largely focussed on deterministic systems. In this paper, we develop the theory and algorithms for rational verification in probabilistic systems. We focus on concurrent stochastic games (CSGs), which can be used to model uncertainty and randomness in complex multi-agent environments. We study the rational verification problem for both non-cooperative games and cooperative games in the qualitative probabilistic setting. In the former case, we consider LTL properties satisfied by the Nash equilibria of the game and in the latter case LTL properties satisfied by the core. In both cases, we show that the problem is 2EXPTIME-complete, thus not harder than the much simpler verification problem of model checking LTL properties of systems modelled as Markov decision processes (MDPs).
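
For reference, the equilibrium assumption underlying rational verification can be illustrated in the simplest possible setting: the sketch below checks whether a pure strategy profile is a Nash equilibrium of a small two-player normal-form game with made-up payoffs. It does not model concurrent stochastic games, LTL objectives, or the 2EXPTIME procedures studied in the paper.

```python
# Toy check of the Nash-equilibrium condition on a 2-player normal-form game.
# Payoff matrices are invented; rational verification in the paper works on
# concurrent stochastic games and LTL objectives, which this does not model.

def is_nash(payoffs, profile):
    """payoffs[i][a0][a1] is player i's payoff; profile = (a0, a1)."""
    a0, a1 = profile
    n0 = len(payoffs[0])        # number of actions of player 0
    n1 = len(payoffs[0][0])     # number of actions of player 1
    # no unilateral deviation of player 0 may improve its payoff
    if any(payoffs[0][d][a1] > payoffs[0][a0][a1] for d in range(n0)):
        return False
    # same for player 1
    if any(payoffs[1][a0][d] > payoffs[1][a0][a1] for d in range(n1)):
        return False
    return True

if __name__ == "__main__":
    # classic coordination game: both (0, 0) and (1, 1) are equilibria
    p0 = [[2, 0], [0, 1]]
    p1 = [[2, 0], [0, 1]]
    for prof in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        print(prof, is_nash((p0, p1), prof))
```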

Mixed Integer Programming (MIP) solvers rely on an array of sophisticated heuristics developed with decades of research to solve large-scale MIP instances encountered in practice. Machine learning offers to automatically construct better heuristics from data by exploiting shared structure among instances in the data. This paper applies learning to the two key sub-tasks of a MIP solver, generating a high-quality joint variable assignment, and bounding the gap in objective value between that assignment and an optimal one. Our approach constructs two corresponding neural network-based components, Neural Diving and Neural Branching, to use in a base MIP solver such as SCIP. Neural Diving learns a deep neural network to generate multiple partial assignments for its integer variables, and the resulting smaller MIPs for un-assigned variables are solved with SCIP to construct high quality joint assignments. Neural Branching learns a deep neural network to make variable selection decisions in branch-and-bound to bound the objective value gap with a small tree. This is done by imitating a new variant of Full Strong Branching we propose that scales to large instances using GPUs. We evaluate our approach on six diverse real-world datasets, including two Google production datasets and MIPLIB, by training separate neural networks on each. Most instances in all the datasets combined have $10^3-10^6$ variables and constraints after presolve, which is significantly larger than previous learning approaches. Comparing solvers with respect to primal-dual gap averaged over a held-out set of instances, the learning-augmented SCIP is 2x to 10x better on all datasets except one on which it is $10^5$x better, at large time limits. To the best of our knowledge, ours is the first learning approach to demonstrate such large improvements over SCIP on both large-scale real-world application datasets and MIPLIB.
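
The "predict, fix the confident variables, then solve the smaller problem" idea behind Neural Diving can be sketched schematically. In the toy Python example below the learned model is replaced by a hard-coded probability stub and the residual problem is brute-forced instead of being handed to SCIP, so it only illustrates the decomposition.

```python
# Schematic "Neural Diving"-style warm start on a tiny binary program:
#   maximize c.x  subject to A.x <= b, x in {0,1}^n
# The "neural" assignment probabilities are a hard-coded stub; a real system
# would use a trained network and hand the residual MIP to SCIP.
import itertools

c = [5, 4, 3, 2, 1]
A = [[2, 3, 1, 4, 2]]
b = [6]

def feasible(x):
    return all(sum(a_i * x_i for a_i, x_i in zip(row, x)) <= rhs
               for row, rhs in zip(A, b))

def predicted_probs(_c, _A, _b):
    # stub for the learned model: probability that each variable equals 1
    return [0.95, 0.9, 0.4, 0.05, 0.5]

def dive_and_solve(threshold=0.8):
    probs = predicted_probs(c, A, b)
    fixed = {i: 1 if p >= threshold else 0
             for i, p in enumerate(probs)
             if p >= threshold or p <= 1 - threshold}   # only confident vars
    free = [i for i in range(len(c)) if i not in fixed]
    best_val, best_x = None, None
    for assignment in itertools.product([0, 1], repeat=len(free)):
        x = [0] * len(c)
        for i, v in fixed.items():
            x[i] = v
        for i, v in zip(free, assignment):
            x[i] = v
        if feasible(x):
            val = sum(ci * xi for ci, xi in zip(c, x))
            if best_val is None or val > best_val:
                best_val, best_x = val, x
    return best_x, best_val

if __name__ == "__main__":
    print(dive_and_solve())   # ([1, 1, 1, 0, 0], 12) on this toy instance
```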

An information technology system (ITS) is, informally, a set of workstations, servers, laptops, installed software, databases, LANs, firewalls, etc. Nowadays, every company has an ITS, but information about it is rarely available outside the company that owns it. However, there are many situations where the availability of such data would be beneficial. For example, cyber ranges emulate IT systems and need their description. Machine learning, and in particular the use of ML to automate attack and defense, would also benefit from descriptions of ITSs. In this paper, we describe a system we call the Generator, which takes as input requirements such as the number of employees and the vertical to which the company belongs, and produces as output a model of an ITS that satisfies the given requirements. A property we have put special emphasis on is that the generated ITS looks like a model of a real system to anyone who analyzes it. To the best of our knowledge, we are the first to have attempted to build something like this. We validate the Generator by generating an ITS model for a fictional financial institution, and analyze its performance with respect to the problem size. The conducted experiments show that our approach is feasible. In the future, we intend to extend this prototype to allow probabilistic generation of IT systems when only a subset of parameters is explicitly defined.
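
A toy version of the requirements-to-model idea is sketched below: given an employee count and a vertical, it emits a small dictionary describing workstations, servers, a LAN, and a firewall. The field names, scaling rules, and vertical-specific services are invented for illustration and are not those of the actual Generator.

```python
# Toy "Generator": turn high-level requirements into a small ITS model.
# All scaling rules and component names below are invented placeholders.
import json, random

def generate_its(num_employees, vertical, seed=0):
    rng = random.Random(seed)
    # assumption: one workstation per employee, one server per ~25 employees
    num_servers = max(2, num_employees // 25)
    services_by_vertical = {
        "finance": ["core-banking-db", "payment-gateway", "mail"],
        "retail": ["pos-backend", "inventory-db", "mail"],
    }
    model = {
        "vertical": vertical,
        "workstations": [{"id": f"ws-{i}", "os": rng.choice(["win10", "win11"])}
                         for i in range(num_employees)],
        "servers": [{"id": f"srv-{i}",
                     "service": rng.choice(services_by_vertical.get(vertical, ["mail"]))}
                    for i in range(num_servers)],
        "lans": [{"id": "lan-office", "subnet": "10.0.0.0/24"}],
        "firewalls": [{"id": "fw-edge", "policy": "default-deny"}],
    }
    return model

if __name__ == "__main__":
    print(json.dumps(generate_its(50, "finance"), indent=2)[:400], "...")
```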

Low rank matrix recovery problems appear widely in statistics, combinatorics, and imaging. One celebrated method for solving these problems is to formulate and solve a semidefinite program (SDP). It is often known that the exact solution to the SDP with perfect data recovers the solution to the original low rank matrix recovery problem. It is more challenging to show that an approximate solution to the SDP formulated with noisy problem data acceptably solves the original problem; arguments are usually ad hoc for each problem setting, and can be complex. In this note, we identify a set of conditions that we call simplicity that limit the error due to noisy problem data or incomplete convergence. In this sense, simple SDPs are robust: simple SDPs can be (approximately) solved efficiently at scale; and the resulting approximate solutions, even with noisy data, can be trusted. Moreover, we show that simplicity holds generically, and also for many structured low rank matrix recovery problems, including the stochastic block model, $\mathbb{Z}_2$ synchronization, and matrix completion. Formally, we call an SDP simple if it has a surjective constraint map, admits a unique primal and dual solution pair, and satisfies strong duality and strict complementarity. However, simplicity is not a panacea: we show the Burer-Monteiro formulation of the SDP may have spurious second-order critical points, even for a simple SDP with a rank 1 solution.
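
One of the simplicity conditions, strict complementarity, is easy to check numerically once a primal-dual pair is available: for an SDP over n x n matrices it asks that rank(X) + rank(Z) = n for the primal solution X and dual slack Z, on top of complementary slackness X Z = 0. The sketch below runs this check on fabricated matrices; it is a numerical sanity check, not a proof of simplicity.

```python
# Numerical check of strict complementarity for an SDP solution pair (X, Z):
# complementary slackness gives X @ Z = 0, and strict complementarity
# additionally asks rank(X) + rank(Z) = n. The matrices below are fabricated.
import numpy as np

def strictly_complementary(X, Z, tol=1e-8):
    n = X.shape[0]
    rank_X = np.linalg.matrix_rank(X, tol=tol)
    rank_Z = np.linalg.matrix_rank(Z, tol=tol)
    slack = np.linalg.norm(X @ Z)
    return slack < 1e-6 and rank_X + rank_Z == n

if __name__ == "__main__":
    # rank-1 primal solution and rank-(n-1) dual slack on complementary eigenspaces
    n = 4
    u = np.array([1.0, 0.0, 0.0, 0.0])
    X = np.outer(u, u)                        # rank 1, supported on span(u)
    Z = np.diag([0.0, 1.0, 2.0, 3.0])         # rank 3, supported on span(u)^perp
    print(strictly_complementary(X, Z))       # True: ranks sum to n and X @ Z = 0

    Z_degenerate = np.diag([0.0, 0.0, 2.0, 3.0])   # rank 2: strictness fails
    print(strictly_complementary(X, Z_degenerate)) # False
```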

This letter investigates a downlink multiple-input single-output (MISO) system based on a transmissive reconfigurable metasurface (RMS) transmitter. Specifically, a transmitter design based on a transmissive RMS equipped with a feed antenna is first proposed. Then, in order to maximize the achievable sum-rate of the system, the beamforming design and power allocation are jointly optimized. Since the optimization variables are coupled, the formulated optimization problem is non-convex and difficult to solve directly. To solve this problem, we propose an alternating optimization (AO) technique based on difference-of-convex (DC) programming and successive convex approximation (SCA). Simulation results verify that the proposed algorithm converges and improves the achievable sum-rate of the system.
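
The alternating structure (optimize one block with the other fixed, iterate until the objective stalls) can be shown on a toy two-block problem with closed-form block updates. The sketch below mirrors only that AO loop; it does not implement DC programming or SCA, nor the actual beamforming and power-allocation problem.

```python
# Generic alternating-optimization (block coordinate descent) skeleton on a
# toy strictly convex two-block objective with closed-form block updates.
# This mirrors only the AO loop structure, not DC programming or SCA.

def f(x, y):
    return (x - 1) ** 2 + (y - 2) ** 2 + x * y

def alternating_optimization(x=0.0, y=0.0, tol=1e-10, max_iter=100):
    cur = prev = f(x, y)
    for it in range(max_iter):
        x = 1.0 - y / 2.0        # argmin over x with y fixed: 2(x - 1) + y = 0
        y = 2.0 - x / 2.0        # argmin over y with x fixed: 2(y - 2) + x = 0
        cur = f(x, y)
        if abs(prev - cur) < tol:   # stop once the objective stalls
            break
        prev = cur
    return x, y, cur, it + 1

if __name__ == "__main__":
    x, y, val, iters = alternating_optimization()
    print(f"x={x:.6f}, y={y:.6f}, f={val:.6f}, iterations={iters}")
    # converges to the global minimizer (0, 2) with f = 1
```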

We are interested in solving the decision problem $\exists? t \in \mathbb{N}, \cos t \theta = c$, where $\cos \theta$ and $c$ are algebraic numbers. We call this the $\cos t \theta$ problem. This is an exploration of Diophantine equations with analytic functions. Polynomials, exponentials with real base, and the cosine function are closely related to the following decision problem: $\exists ? t \in \mathbb{N}, u^T M^t v = 0$, where $u, v \in \mathbb{Q}^n, M \in \mathbb{Q}^{n\times n}$. This problem, known as the Skolem problem, is useful in the verification of linear systems; its decidability remains unknown. Single-variable Diophantine equations involving an exponential function with real algebraic base and the $\cos t \theta$ function with $\theta$ a rational multiple of $\pi$ are decidable. This idea is central in proving the decidability of the Skolem problem when the eigenvalues of $M$ are roots of real numbers. The main difficulty with the cases where the eigenvalues are not roots of reals is that, even for small orders, decidability requires an application of transcendental number theory which does not scale to higher orders. We provide a first attempt to overcome that by giving a $PTIME$ algorithm for $\cos t \theta$ when $\theta$ is not a rational multiple of $\pi$. We do so without using techniques from transcendental number theory. One of the main difficulties with Diophantine equations is that tools from calculus cannot be used to solve them, since the domain of the variable is $\mathbb{N}$. We also provide an attempt to overcome that by giving a reduction of the Skolem problem to solving, over the reals, a one-variable equation (which involves polynomials, exponentials with real bases, and the $\cos t \theta$ function, with $t$ ranging over the reals and $\theta \in [0, \pi]$).
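
The decidable case mentioned above is easy to make concrete: when $\theta = (p/q)\pi$, the sequence $\cos t\theta$ is periodic in $t$ with period dividing $2q$, so deciding whether $\cos t\theta = c$ for some natural $t$ reduces to checking finitely many values. The sketch below does exactly that, using exact rationals for the angle but a numeric comparison of the cosine values (a genuine decision procedure would compare algebraic numbers exactly); it is not the paper's PTIME algorithm for the irrational-multiple case.

```python
# Decide "exists t in N with cos(t*theta) = c" when theta = (p/q)*pi:
# cos(t*theta) is then periodic in t with period dividing 2q, so it suffices
# to check t = 0, ..., 2q - 1. Comparison is numeric here; an exact procedure
# would compare algebraic numbers symbolically.
from fractions import Fraction
from math import cos, pi, isclose

def exists_t(p_over_q: Fraction, c: float, tol=1e-9):
    theta = float(p_over_q) * pi
    q = p_over_q.denominator
    for t in range(2 * q):                 # one full period of cos(t*theta)
        if isclose(cos(t * theta), c, abs_tol=tol):
            return True, t
    return False, None

if __name__ == "__main__":
    print(exists_t(Fraction(1, 3), 0.5))    # cos(pi/3) = 1/2      -> (True, 1)
    print(exists_t(Fraction(1, 3), -0.5))   # cos(2*pi/3) = -1/2   -> (True, 2)
    print(exists_t(Fraction(1, 3), 0.3))    # not attained         -> (False, None)
```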

This paper proposes a model-free Reinforcement Learning (RL) algorithm to synthesise policies for an unknown Markov Decision Process (MDP), such that a linear-time property is satisfied. We convert the given property into a Limit-Deterministic Büchi Automaton (LDBA), then construct a synchronized (product) MDP between the automaton and the original MDP. According to the resulting LDBA, a reward function is then defined over the state-action pairs of the product MDP. With this reward function, our algorithm synthesises a policy whose traces satisfy the linear-time property: as such, the policy synthesis procedure is "constrained" by the given specification. Additionally, we show that the RL procedure sets up an online value iteration method to calculate the maximum probability of satisfying the given property, at any given state of the MDP; a convergence proof for the procedure is provided. Finally, the performance of the algorithm is evaluated via a set of numerical examples. We observe an improvement of one order of magnitude in the number of iterations required for the synthesis compared to existing approaches.
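
The quantity the online value iteration converges to, the maximal probability of satisfying the property from each state, can be illustrated by a plain value-iteration computation of maximal reachability of an accepting state on a tiny hand-built product MDP. The states, transition probabilities, and accepting set below are invented placeholders, and the sketch does not construct an LDBA or run the RL procedure.

```python
# Value iteration for the maximal probability of reaching an accepting state
# in a tiny hand-built "product MDP". States, actions, transition
# probabilities, and the accepting set are invented for illustration only.

# transitions[state][action] = list of (next_state, probability)
transitions = {
    0: {"a": [(1, 0.8), (2, 0.2)], "b": [(2, 1.0)]},
    1: {"a": [(3, 1.0)],           "b": [(0, 1.0)]},
    2: {"a": [(2, 1.0)],           "b": [(0, 0.5), (2, 0.5)]},
    3: {"a": [(3, 1.0)],           "b": [(3, 1.0)]},   # accepting sink
}
accepting = {3}

def max_reach_probability(eps=1e-10, max_iter=10_000):
    v = {s: (1.0 if s in accepting else 0.0) for s in transitions}
    for _ in range(max_iter):
        new_v = {}
        for s in transitions:
            if s in accepting:
                new_v[s] = 1.0
                continue
            new_v[s] = max(sum(p * v[t] for t, p in succ)
                           for succ in transitions[s].values())
        if max(abs(new_v[s] - v[s]) for s in transitions) < eps:
            return new_v
        v = new_v
    return v

if __name__ == "__main__":
    # every state reaches acceptance with probability 1 under the best policy
    print(max_reach_probability())
```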

This paper addresses the problem of formally verifying desirable properties of neural networks, i.e., obtaining provable guarantees that neural networks satisfy specifications relating their inputs and outputs (robustness to bounded-norm adversarial perturbations, for example). Most previous work on this topic was limited in its applicability by the size of the network, the network architecture, and the complexity of the properties to be verified. In contrast, our framework applies to a general class of activation functions and specifications on neural network inputs and outputs. We formulate verification as an optimization problem (seeking to find the largest violation of the specification) and solve a Lagrangian relaxation of the optimization problem to obtain an upper bound on the worst-case violation of the specification being verified. Our approach is anytime, i.e., it can be stopped at any time and a valid bound on the maximum violation can be obtained. We develop specialized verification algorithms with provable tightness guarantees under special assumptions and demonstrate the practical significance of our general verification approach on a variety of verification tasks.
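
To make the "bound the worst-case violation" idea concrete with something much simpler than the paper's Lagrangian relaxation, the sketch below propagates interval bounds through a small random ReLU network to bound its outputs over an l-infinity ball around an input (interval bound propagation). It is generally looser than optimization-based bounds, but cheap and sound.

```python
# Interval bound propagation (IBP) through a small ReLU network: a cheap way
# to bound the network outputs over an l_inf ball around an input. This is a
# simpler bounding technique than the Lagrangian relaxation in the paper,
# shown only to make the "bound the worst-case violation" idea concrete.
import numpy as np

rng = np.random.default_rng(0)
weights = [rng.standard_normal((4, 2)), rng.standard_normal((3, 4))]
biases = [rng.standard_normal(4), rng.standard_normal(3)]

def ibp_bounds(x, eps):
    lo, hi = x - eps, x + eps
    for layer, (W, b) in enumerate(zip(weights, biases)):
        W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
        new_lo = W_pos @ lo + W_neg @ hi + b    # smallest possible pre-activation
        new_hi = W_pos @ hi + W_neg @ lo + b    # largest possible pre-activation
        if layer < len(weights) - 1:            # ReLU on hidden layers only
            new_lo, new_hi = np.maximum(new_lo, 0.0), np.maximum(new_hi, 0.0)
        lo, hi = new_lo, new_hi
    return lo, hi

if __name__ == "__main__":
    x = np.array([0.5, -0.2])
    lo, hi = ibp_bounds(x, eps=0.1)
    print("output lower bounds:", lo)
    print("output upper bounds:", hi)
    # e.g. "output 0 never exceeds output 1" is certified if hi[0] < lo[1]
```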

In recent years, deep learning techniques have been developed to improve the performance of program synthesis from input-output examples. Despite significant progress, the programs that can be synthesized by state-of-the-art approaches are still simple in terms of their complexity. In this work, we move a significant step forward along this direction by proposing a new class of challenging tasks in the domain of program synthesis from input-output examples: learning a context-free parser from pairs of input programs and their parse trees. We show that this class of tasks is much more challenging than previously studied tasks, and the test accuracy of existing approaches is almost 0%. We tackle the challenges by developing three novel techniques inspired by three novel observations, which reveal the key ingredients of using deep learning to synthesize a complex program. First, the use of a non-differentiable machine is the key to effectively restricting the search space. Thus, our proposed approach learns a neural program operating a domain-specific non-differentiable machine. Second, recursion is the key to achieving generalizability. Thus, we bake the notion of recursion into the design of our non-differentiable machine. Third, reinforcement learning is the key to learning how to operate the non-differentiable machine, but it is also hard to train the model effectively with existing reinforcement learning algorithms from a cold start. We develop a novel two-phase reinforcement-learning-based search algorithm to overcome this issue. In our evaluation, we show that using our novel approach, neural parsing programs can be learned to achieve 100% test accuracy on test inputs that are 500x longer than the training samples.
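
The "neural program operating a non-differentiable machine" ingredient can be made concrete with a tiny shift-reduce machine: SHIFT pushes the next token and REDUCE pops k items into a labelled tree node. In the sketch below the instruction trace is hand-written for a single toy input; in the paper that trace would be produced by the learned program, and the machine, grammar, and trace here are invented.

```python
# A tiny non-differentiable parsing machine: SHIFT pushes the next token,
# ("REDUCE", k, label) pops k stack items and pushes a labelled tree node.
# The instruction trace below is hand-written; in the paper it would be the
# output of a learned neural program, and the grammar is a toy placeholder.

def run_machine(tokens, instructions):
    stack, pos = [], 0
    for instr in instructions:
        if instr == "SHIFT":
            stack.append(tokens[pos])
            pos += 1
        else:                                  # ("REDUCE", k, label)
            _, k, label = instr
            children = stack[-k:]
            del stack[-k:]
            stack.append((label, children))
    assert pos == len(tokens) and len(stack) == 1, "trace does not parse input"
    return stack[0]

if __name__ == "__main__":
    tokens = ["1", "+", "2"]
    trace = ["SHIFT", ("REDUCE", 1, "num"),    # 1         -> num
             "SHIFT",                          # +
             "SHIFT", ("REDUCE", 1, "num"),    # 2         -> num
             ("REDUCE", 3, "expr")]            # num + num -> expr
    print(run_machine(tokens, trace))
    # ('expr', [('num', ['1']), '+', ('num', ['2'])])
```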
