
A barrier certificate, defined over the states of a dynamical system, is a real-valued function whose zero level set characterizes an inductively verifiable state invariant separating reachable states from unsafe ones. When combined with powerful decision procedures, such as sum-of-squares (SOS) programming or satisfiability modulo theories (SMT) solvers, barrier certificates enable an automated deductive verification approach to safety. The barrier certificate approach has been extended to refute omega-regular specifications by separating consecutive transitions of omega-automata in the hope of denying all accepting runs. Unsurprisingly, such tactics are bound to be conservative, as refuting recurrence properties requires reasoning about the well-foundedness of the transitive closure of the transition relation. This paper introduces the notion of closure certificates as a natural extension of barrier certificates from state invariants to transition invariants. We provide SOS- and SMT-based characterizations for automating the search for closure certificates and demonstrate their effectiveness via a paradigmatic case study.
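To make the SMT route concrete, here is a minimal sketch (ours, not the paper's closure-certificate construction) that uses Z3 to verify the three classical barrier conditions for a toy discrete-time system $x_{k+1} = x_k/2$ with initial set $[0, 0.5]$ and unsafe set $x \ge 1$; the system, the sets, and the candidate $B(x) = x - 0.75$ are all illustrative assumptions.

```python
from z3 import And, Real, Solver, unsat

x = Real('x')
B = lambda v: v - 0.75   # candidate barrier certificate (illustrative)
f = lambda v: v / 2      # toy discrete-time dynamics: x' = x/2

s = Solver()
# Each formula negates one barrier condition; `unsat` means that condition holds.
violations = [
    And(0 <= x, x <= 0.5, B(x) > 0),   # an initial state with B > 0?
    And(x >= 1, B(x) <= 0),            # an unsafe state with B <= 0?
    And(B(x) <= 0, B(f(x)) > 0),       # a transition leaving the set B <= 0?
]
for v in violations:
    s.push()
    s.add(v)
    assert s.check() == unsat
    s.pop()
print("all three barrier conditions verified")
```

Closure certificates extend this recipe from single transitions to the transitive closure of the transition relation, which is what allows them to refute recurrence properties.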

Related Content

Automator is an application developed by Apple for their Mac OS X operating system. Through simple point-and-click and drag-and-drop operations, a series of actions can be combined into a workflow, helping you automate complex (and repetitive) tasks. Automator also works across many different kinds of applications, including Finder, the Safari web browser, iCal, Address Book, and others. It can also work with third-party applications such as Microsoft Office, Adobe Photoshop, or Pixelmator.

Power management in multi-server data centers, especially at scale, is an increasingly important issue in the cloud computing paradigm. Existing studies mostly consider thresholds on the number of idle servers for switching servers on or off, and they suffer from scalability issues. As a natural approach in view of the Markovian assumption, we present a multi-level continuous-time Markov decision process (CTMDP) model based on state aggregation of multi-server data centers with setup times, which overcomes the inherent intractability of traditional MDP approaches caused by their colossal state-action spaces. A key feature of the presented model is that, while it remains faithful to the Markovian behavior, it approximates the calculation of the transition probabilities in a way that keeps the accuracy of the results at a desirable level. Moreover, near-optimal performance is attained, at the expense of increased state-space dimensionality, by tuning the number of levels in the multi-level approach. The simulation results are promising and confirm that in many scenarios of interest the proposed approach attains noticeable improvements, namely a nearly 50% reduction in the size of the CTMDP while yielding better rewards than existing fixed-threshold policies and aggregation methods.
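For context, the fixed-threshold baseline that the paper compares against can be sketched as a simple switching rule; the thresholds below are hypothetical parameters for illustration, not values from the paper.

```python
def threshold_policy(idle: int, queue_len: int, k_on: int = 1, k_off: int = 4) -> str:
    """Fixed-threshold baseline: start a server's setup when too few servers
    are idle while jobs wait, and power one off when idle capacity exceeds
    the upper threshold. (Hypothetical thresholds, for illustration.)"""
    if idle < k_on and queue_len > 0:
        return "switch_on"    # incurs the setup time before the server can serve
    if idle > k_off:
        return "switch_off"
    return "hold"
```

Because such rules look only at instantaneous counts, they scale poorly as the number of servers grows, which is the gap the multi-level CTMDP aggregation targets.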

Variable independence and decomposability are algorithmic techniques for simplifying logical formulas by tearing apart connections between free variables. These techniques were originally proposed to speed up query evaluation in constraint databases, in particular by representing the query as a Boolean combination of formulas with no interconnected variables. They also have many other applications in SMT, string analysis, databases, automata theory and other areas. However, the precise complexity of variable independence and decomposability has been left open, especially for the quantifier-free theory of linear real arithmetic (LRA), which is central in database applications. We introduce a novel characterization of formulas admitting decompositions and use it to show that deciding variable decomposability over LRA is coNP-complete. As a corollary, we obtain that deciding variable independence is in $ \Sigma_2^p $. These results substantially improve the best known double-exponential time algorithms for variable decomposability and independence. In many practical applications, it is crucial to be able to efficiently eliminate connections between variables whenever possible. We design and implement an algorithm for this problem, which is theoretically optimal, exponentially faster than the current state-of-the-art algorithm, and efficient on various microbenchmarks. In particular, our algorithm is the first one to overcome a fundamental barrier between non-discrete and discrete first-order theories. Formulas arising in practice often have few or even no free variables that are perfectly independent. In this case, our algorithm can compute a best-possible approximation of a decomposition, which can be used to optimize database queries by exploiting the partial variable independence present in almost every logical formula or database query constraint.
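As an illustrative example (ours, not from the paper): over LRA, the formula

$$\varphi(x, y) \;\equiv\; x \ge 0 \,\wedge\, y \ge 0 \,\wedge\, x + y \ge 0$$

is decomposable, since the conjunct connecting $x$ and $y$ is semantically redundant and $\varphi$ is equivalent to $x \ge 0 \wedge y \ge 0$; by contrast, $x + y \le 1$ admits no Boolean combination of one-variable LRA constraints with the same solution set. Deciding which case one is in is a semantic rather than syntactic question, which is what makes the problem nontrivial.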

We study pseudo-polynomial time algorithms for the fundamental \emph{0-1 Knapsack} problem. In terms of $n$ and $w_{\max}$, previous algorithms for 0-1 Knapsack have cubic time complexities: $O(n^2w_{\max})$ (Bellman 1957), $O(nw_{\max}^2)$ (Kellerer and Pferschy 2004), and $O(n + w_{\max}^3)$ (Polak, Rohwedder, and W\k{e}grzycki 2021). On the other hand, fine-grained complexity only rules out $O((n+w_{\max})^{2-\delta})$ running time, and it is an important question in this area whether $\tilde O(n+w_{\max}^2)$ time is achievable. Our main result makes significant progress towards resolving this question:

- The 0-1 Knapsack problem has a deterministic algorithm running in $\tilde O(n + w_{\max}^{2.5})$ time.

Our techniques also apply to the easier \emph{Subset Sum} problem:

- The Subset Sum problem has a randomized algorithm running in $\tilde O(n + w_{\max}^{1.5})$ time. This improves (and simplifies) the previous $\tilde O(n + w_{\max}^{5/3})$-time algorithm by Polak, Rohwedder, and W\k{e}grzycki (2021) (based on Galil and Margalit (1991), and Bringmann and Wellnitz (2021)).

Similar to recent works on Knapsack (and integer programs in general), our algorithms also utilize the \emph{proximity} between optimal integral solutions and fractional solutions. Our new ideas are as follows:

- Previous works used an $O(w_{\max})$ proximity bound in the $\ell_1$-norm. As our main conceptual contribution, we use an additive-combinatorial theorem by Erd\H{o}s and S\'{a}rk\"{o}zy (1990) to derive an $\ell_0$-proximity bound of $\tilde O(\sqrt{w_{\max}})$.

- The main technical component of our Knapsack result is then a dynamic programming algorithm that exploits both $\ell_0$- and $\ell_1$-proximity. It is based on a vast extension of the ``witness propagation'' method, originally designed by Deng, Mao, and Zhong (2023) for the easier \emph{unbounded} setting only.
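For reference, the $O(n^2 w_{\max})$ baseline attributed to Bellman is the textbook dynamic program below (a sketch of the classical algorithm, not the paper's new method), using the fact that the capacity may be assumed to be at most $n \cdot w_{\max}$:

```python
def knapsack(items, capacity):
    """Classical Bellman DP for 0-1 Knapsack: O(n * capacity) time, i.e.
    O(n^2 * w_max) once capacity <= n * w_max is assumed w.l.o.g."""
    best = [0] * (capacity + 1)   # best[c] = max profit within weight budget c
    for weight, profit in items:
        for c in range(capacity, weight - 1, -1):  # reverse scan: each item used once
            best[c] = max(best[c], best[c - weight] + profit)
    return best[capacity]

# items are (weight, profit) pairs
print(knapsack([(3, 4), (4, 5), (2, 3)], 6))   # -> 8: take the items of weight 4 and 2
```

Roughly, the proximity-based algorithms replace this dense sweep over all capacities with updates concentrated near the fractional optimum, which is where the $\ell_0$- and $\ell_1$-proximity bounds enter.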

We introduce an operator on classes of regular languages, the star-free closure. Our motivation is to generalize standard results of automata theory within a unified framework. Given an arbitrary input class $C$, the star-free closure operator outputs the least class closed under Boolean operations and language concatenation, and containing all languages of $C$ as well as all finite languages. We establish several equivalent characterizations of star-free closure: in terms of regular expressions, first-order logic, pure future and future-past temporal logic, and recognition by finite monoids. A key ingredient is that star-free closure coincides with another closure operator, defined in terms of regular operations where Kleene stars are allowed in restricted contexts. A consequence of this first result is that we can decide membership of a regular language in the star-free closure of a class whose separation problem is decidable. Moreover, we prove that separation itself is decidable for the star-free closure of any finite class, and of any class of group languages having itself decidable separation (plus mild additional properties). We actually show decidability of a stronger property, called covering.
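A standard example of star-freeness (the case where the input class is trivial): over $\Sigma = \{a, b\}$, the language $(ab)^*$ needs no Kleene star once complement is available, because its words are exactly those that do not start with $b$, do not end with $a$, and contain no factor $aa$ or $bb$:

$$(ab)^* \;=\; \overline{\,b\Sigma^* \;\cup\; \Sigma^* a \;\cup\; \Sigma^* aa\,\Sigma^* \;\cup\; \Sigma^* bb\,\Sigma^*\,}, \qquad \text{where } \Sigma^* = \overline{\emptyset}.$$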

We combine dependent types with linear type systems that soundly and completely capture polynomial time computation. We explore two systems for capturing polynomial time: one system that disallows construction of iterable data, and one, based on the LFPL system of Martin Hofmann, that controls construction via a payment method. Both of these are extended to full dependent types via Quantitative Type Theory, allowing for arbitrary computation in types alongside guaranteed polynomial time computation in terms. We prove the soundness of the systems using a realisability technique due to Dal Lago and Hofmann. Our long-term goal is to combine the extensional reasoning of type theory with intensional reasoning about the resources intrinsically consumed by programs. This paper is a step along this path, which we hope will lead both to practical systems for reasoning about programs' resource usage, and to theoretical use as a form of synthetic computational complexity theory.
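The need to restrict construction of iterable data is visible in a classic counterexample (a generic illustration, not an example from the paper): iterating a size-doubling step, each invocation of which runs in polynomial time, produces exponentially large output.

```python
def double(xs):
    """Runs in time linear in its input, but constructs new iterable data,
    which is exactly the kind of step the first system forbids iterating over."""
    return xs + xs

data = [0]
for _ in range(20):   # 20 individually-cheap steps ...
    data = double(data)
print(len(data))      # ... yield 2**20 = 1048576 elements: exponential growth
```

LFPL-style payment instead makes each constructor cost a resource token, so a program can only build as much data as it was given credit for.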

Automated certificate authorities (CAs) have expanded the reach of public key infrastructure on the web and for software signing. The certificates that these CAs issue attest to control of some digital identity. Some of these automated CAs issue certificates in response to client authentication via OpenID Connect (OIDC, an extension of OAuth 2.0), which places them in a position to impersonate any identity. Mitigations for this risk, like certificate transparency and signature thresholds, have emerged, but they only detect compromise or raise its difficulty. Researchers have proposed alternatives to CAs in this setting, but many of these alternatives would require prohibitive changes to deployed authentication protocols. In this work, we propose a cryptographic technique for reducing trust in these automated CAs: when issuing a certificate, the CA embeds a proof of authentication from the subject of the certificate, without enabling replay attacks. We describe multiple methods for achieving this, with tradeoffs between user privacy, performance, and changes to existing infrastructure. We implement a proof of concept for a method using Guillou-Quisquater signatures that works out of the box with existing OIDC deployments for the open-source Sigstore CA, finding that only minimal modifications are required.
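As a toy sketch of the freshness idea (a hash-commitment stand-in of our own; the paper's proof of concept uses Guillou-Quisquater signatures, which this does not implement), the certificate can carry a commitment that binds the subject's authentication to a single issuance:

```python
import hashlib
import secrets

def issuance_commitment(id_token: bytes) -> tuple[bytes, bytes]:
    """Bind the subject's OIDC proof to one issuance. The commitment is embedded
    in the certificate; replaying the bare id_token is useless without the
    per-issuance nonce. (Toy construction, for illustration only.)"""
    nonce = secrets.token_bytes(32)   # fresh randomness per issuance
    commitment = hashlib.sha256(nonce + id_token).digest()
    return commitment, nonce          # commitment goes in the cert; nonce to the subject
```

Opening such a commitment reveals the token itself, illustrating the privacy, performance, and compatibility tradeoffs that the different methods negotiate.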

Quadratic programming is a fundamental problem in the field of convex optimization. Many practical tasks can be formulated as quadratic programs, for example, the support vector machine (SVM). Linear SVM was one of the most popular tools in machine learning over the three decades before deep learning methods came to dominate. In general, a quadratic program has input size $\Theta(n^2)$ (where $n$ is the number of variables) and thus takes $\Omega(n^2)$ time to solve. Nevertheless, quadratic programs arising from SVMs have input size $O(n)$, allowing the possibility of designing nearly-linear time algorithms. Two important classes of SVMs are programs admitting low-rank kernel factorizations and low-treewidth programs. Low-treewidth convex optimization has gained increasing interest in the past few years (e.g., linear programming [Dong, Lee and Ye 2021] and semidefinite programming [Gu and Song 2022]). Therefore, an important open question is whether there exist nearly-linear time algorithms for quadratic programs with these nice structures. In this work, we provide the first nearly-linear time algorithm for solving quadratic programs with low-rank factorization or low treewidth and a small number of linear constraints. Our results imply nearly-linear time algorithms for low-treewidth or low-rank SVMs.
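For concreteness, the textbook soft-margin SVM dual (with labels $y_i \in \{\pm 1\}$, regularization parameter $C$, and kernel matrix $K$) is the quadratic program

$$\max_{\alpha \in \mathbb{R}^n} \;\sum_{i=1}^n \alpha_i \;-\; \frac{1}{2}\sum_{i,j=1}^n \alpha_i \alpha_j y_i y_j K_{ij} \qquad \text{s.t.}\quad 0 \le \alpha_i \le C, \quad \sum_{i=1}^n \alpha_i y_i = 0,$$

and a low-rank factorization $K = UU^{\top}$ with $U \in \mathbb{R}^{n \times k}$, $k \ll n$, specifies it with $O(nk)$ numbers rather than $\Theta(n^2)$, which is what opens the door to nearly-linear time.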

The fundamental theorem of Tur\'{a}n from Extremal Graph Theory determines the exact bound $t_r(n)$ on the number of edges in an $n$-vertex graph that does not contain a clique of size $r+1$. We establish an interesting link between Extremal Graph Theory and Algorithms by providing a simple compression algorithm that, in linear time, reduces the problem of finding a clique of size $\ell$ in an $n$-vertex graph $G$ with $m \ge t_r(n)-k$ edges, where $\ell\leq r+1$, to the problem of finding a maximum clique in a graph on at most $5k$ vertices. This also gives us an algorithm deciding in time $2.49^{k}\cdot(n + m)$ whether $G$ has a clique of size $\ell$. As a byproduct of the new compression algorithm, we give an algorithm that in time $2^{\mathcal{O}(td^2)} \cdot n^2$ decides whether a graph contains an independent set of size at least $n/(d+1) + t$, where $d$ is the average vertex degree of the graph $G$. The multivariate complexity analysis, based on ETH, indicates that the asymptotic dependence on several parameters in the running times of our algorithms is tight.
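For reference, $t_r(n)$ counts the edges of the Tur\'{a}n graph $T(n,r)$, the complete $r$-partite graph on $n$ vertices with parts as equal as possible: writing $s = n \bmod r$,

$$t_r(n) \;=\; \binom{n}{2} \;-\; s\binom{\lceil n/r \rceil}{2} \;-\; (r-s)\binom{\lfloor n/r \rfloor}{2} \;\le\; \left(1 - \frac{1}{r}\right)\frac{n^2}{2},$$

with equality on the right when $r \mid n$; for instance, $t_2(5) = 10 - 3 - 1 = 6$, the edge count of $K_{2,3}$.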

We study the fundamental problem of fairly allocating a set of indivisible goods among $n$ agents with additive valuations using the desirable fairness notion of maximin share (MMS). MMS is the most popular share-based notion, in which an agent finds an allocation fair to her if she receives goods worth at least her MMS value. An allocation is called MMS if all agents receive at least their MMS value. However, since MMS allocations need not exist when $n>2$, a series of works showed the existence of approximate MMS allocations with the current best factor of $\frac{3}{4} + O(\frac{1}{n})$. The recent work by Akrami et al. showed the limitations of existing approaches and proved that they cannot improve this factor to $3/4 + \Omega(1)$. In this paper, we bypass these barriers to show the existence of $(\frac{3}{4} + \frac{3}{3836})$-MMS allocations by developing new reduction rules and analysis techniques.
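Concretely, agent $i$'s MMS value over the set $M$ of goods is the value she could guarantee by partitioning the goods herself and taking the worst bundle:

$$\mathrm{MMS}_i \;=\; \max_{(A_1, \dots, A_n) \in \Pi_n(M)} \; \min_{j \in [n]} \, v_i(A_j),$$

where $\Pi_n(M)$ ranges over partitions of $M$ into $n$ bundles; an allocation $(B_1, \dots, B_n)$ is $\rho$-MMS if $v_i(B_i) \ge \rho \cdot \mathrm{MMS}_i$ for every agent $i$.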

When is heterogeneity in the composition of an autonomous robotic team beneficial and when is it detrimental? We investigate and answer this question in the context of a minimally viable model that examines the role of heterogeneous speeds in perimeter defense problems, where defenders share a total allocated speed budget. We consider two distinct problem settings and develop strategies based on dynamic programming and on local interaction rules. We present a theoretical analysis of both approaches and our results are extensively validated using simulations. Interestingly, our results demonstrate that the viability of heterogeneous teams depends on the amount of information available to the defenders. Moreover, our results suggest a universality property: across a wide range of problem parameters the optimal ratio of the speeds of the defenders remains nearly constant.
