
The analysis of formal models that include quantitative aspects such as timing or probabilistic choices is performed by quantitative verification tools. Broad and mature tool support is available for computing basic properties such as expected rewards on basic models such as Markov chains. Previous editions of QComp, the comparison of tools for the analysis of quantitative formal models, focused on this setting. Many application scenarios, however, require more advanced property types such as LTL and parameter synthesis queries as well as advanced models like stochastic games and partially observable MDPs. For these, tool support is in its infancy today. This paper presents the outcomes of QComp 2023: a survey of the state of the art in quantitative verification tool support for advanced property types and models. With tools ranging from first research prototypes to well-supported integrations into established toolsets, this report highlights today's active areas and tomorrow's challenges in tool-focused research for quantitative verification.

Related content

This new edition of the TOOLS conference series revives the tradition of the 50 conferences held between 1989 and 2012. TOOLS originally stood for "Technology of Object-Oriented Languages and Systems" and later grew to cover all innovative aspects of software technology; many of today's most important software concepts were first introduced there. TOOLS 50+1, held near Kazan, Russia, in 2019, continued the series with the same spirit of innovation, enthusiasm for all things software, combination of scientific rigor and industrial applicability, and openness to all trends and communities in the field.

Previous active inference accounts of emotion translate fluctuations in free energy into a sense of emotion, focusing mainly on valence. In affective science, however, emotions are often represented as multi-dimensional. In this paper, we propose to adopt a Circumplex Model of emotion by mapping emotions onto a two-dimensional spectrum of valence and arousal. We show how one can derive valence and arousal signals from an agent's expected free energy, relating arousal to the entropy of posterior beliefs and valence to the difference between utility and expected utility. Under this formulation, we simulate artificial agents engaged in a search task and show that manipulating priors and object presence results in commonsense variability in emotional states.
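As a reading aid, the sketch below implements the mapping described above (arousal as the entropy of posterior beliefs, valence as utility minus expected utility) in Python. The function names and the toy numbers are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def arousal(posterior):
    """Arousal mapped to the entropy of the agent's posterior beliefs:
    more uncertainty about hidden states -> higher arousal."""
    p = np.asarray(posterior, dtype=float)
    p = p / p.sum()
    return float(-np.sum(p * np.log(p + 1e-12)))

def valence(utility, expected_utility):
    """Valence mapped to realised utility minus expected utility:
    doing better than expected -> positive valence, and vice versa."""
    return utility - expected_utility

# Hypothetical search-task snapshot: beliefs over three candidate locations.
posterior = [0.7, 0.2, 0.1]
print(arousal(posterior))                           # moderate arousal
print(valence(utility=1.0, expected_utility=0.4))   # positive valence
```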

The Schur-Horn theorem is a well-known result that characterizes the relationship between the diagonal elements and eigenvalues of a symmetric (Hermitian) matrix. In this paper, we extend this theorem by exploring the eigenvalue perturbation of a symmetric (Hermitian) matrix with fixed diagonals, which is referred to as the continuity of the Schur-Horn mapping. We introduce a concept called strong Schur-Horn continuity, characterized by minimal constraints on the perturbation. We demonstrate that several categories of matrices exhibit strong Schur-Horn continuity. Leveraging this notion, along with a majorization constraint on the perturbation, we prove the Schur-Horn continuity for general symmetric (Hermitian) matrices. The Schur-Horn continuity finds applications in oblique manifold optimization related to quantum computing.
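For reference, the classical Schur-Horn theorem (standard background, not part of this abstract) states that a real vector $d \in \mathbb{R}^n$ is the diagonal of some symmetric (Hermitian) matrix with eigenvalues $\lambda \in \mathbb{R}^n$ if and only if $d$ is majorized by $\lambda$:
\[
\sum_{i=1}^{k} d_{[i]} \le \sum_{i=1}^{k} \lambda_{[i]} \quad (1 \le k \le n-1), \qquad \sum_{i=1}^{n} d_i = \sum_{i=1}^{n} \lambda_i,
\]
where $d_{[1]} \ge \cdots \ge d_{[n]}$ and $\lambda_{[1]} \ge \cdots \ge \lambda_{[n]}$ are the entries sorted in decreasing order. The continuity question above then asks how the eigenvalues can move under perturbations that keep the diagonal fixed.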

Capability ontologies are increasingly used to model functionalities of systems or machines. Creating such ontological models with all the properties and constraints of capabilities is very complex and can only be done by ontology experts. However, Large Language Models (LLMs) have shown that they can generate machine-interpretable models from natural-language input and can thus support engineers and ontology experts. This paper therefore investigates how LLMs can be used to create capability ontologies. We present a study with a series of experiments in which capabilities of varying complexity are generated using different prompting techniques and different LLMs. Errors in the generated ontologies are recorded and compared. To analyze the quality of the generated ontologies, a semi-automated approach based on RDF syntax checking, OWL reasoning, and SHACL constraints is used. The results of this study are very promising: even for complex capabilities, the generated ontologies are almost free of errors.
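To make the three-step check concrete, here is a minimal Python sketch of such a semi-automated pipeline. The use of rdflib, owlrl, and pyshacl, as well as the file names, are assumptions for illustration; the paper does not prescribe these particular libraries.

```python
# Sketch of the semi-automated ontology check described above:
# 1) RDF syntax check, 2) OWL reasoning, 3) SHACL constraint validation.
from rdflib import Graph
import owlrl
from pyshacl import validate

def check_generated_ontology(ontology_path: str, shapes_path: str) -> bool:
    # Step 1: syntax check -- parsing fails on malformed RDF/Turtle.
    g = Graph()
    g.parse(ontology_path, format="turtle")

    # Step 2: OWL RL reasoning to materialise entailments before
    # constraint checking.
    owlrl.DeductiveClosure(owlrl.OWLRL_Semantics).expand(g)

    # Step 3: SHACL validation against capability-model shapes.
    shapes = Graph().parse(shapes_path, format="turtle")
    conforms, _, report_text = validate(g, shacl_graph=shapes)
    if not conforms:
        print(report_text)
    return conforms

# Hypothetical usage:
# check_generated_ontology("generated_capability.ttl", "capability_shapes.ttl")
```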

Formalization of real analysis offers a chance to rebuild traditional proofs of important theorems as unambiguous theories that can be interactively explored. This paper provides a comprehensive overview of the Lebesgue Differentiation Theorem formalized in the Coq proof assistant, from which the first Fundamental Theorem of Calculus (FTC) for the Lebesgue integral is obtained as a corollary. Proving the first FTC in this way has the advantage of decomposing into loosely-coupled theories of moderate size and of independent interest that lend themselves well to incremental and collaborative development. We explain how we formalize all the topological constructs and all the standard lemmas needed to eventually relate the definitions of derivability and of Lebesgue integration of MathComp-Analysis, a formalization of analysis developed on top of the Mathematical Components library. In the course of this experiment, we substantially enrich MathComp-Analysis and even devise a new proof for Urysohn's lemma.
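For readers less familiar with the results being formalized, the textbook statements (in standard notation, not MathComp-Analysis syntax) are as follows. The Lebesgue Differentiation Theorem says that for a locally integrable $f$,
\[
\lim_{r \to 0^+} \frac{1}{\lambda(B(x,r))} \int_{B(x,r)} \lvert f(y) - f(x) \rvert \,\mathrm{d}\lambda(y) = 0 \quad \text{for $\lambda$-almost every } x,
\]
and the first FTC follows as a corollary: if $f$ is Lebesgue-integrable on $[a,b]$, then $F(x) = \int_a^x f \,\mathrm{d}\lambda$ is differentiable almost everywhere on $[a,b]$ with $F'(x) = f(x)$ for almost every $x$.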

To analyze the worst-case running time of branching algorithms, the majority of work in exponential-time algorithms focuses on designing complicated branching rules rather than on developing better analysis methods for simple algorithms. In the mid-2000s, Fomin et al. [2005] introduced measure & conquer, an advanced general analysis method, sparking widespread adoption for obtaining tighter worst-case running time upper bounds for many fundamental NP-complete problems. Yet much potential in this direction remains untapped, as most subsequent work applied the method without advancing it further. Motivated by this, we present piecewise analysis, a new general method for analyzing the running time of branching algorithms. Our approach is to define a similarity ratio that divides instances into groups and then to analyze the running time within each group separately. The similarity ratio is a scale between two parameters of an instance $I$. Instead of relying on a single measure and a single analysis for the whole instance space, our method allows us to take advantage of different intrinsic properties of instances with different similarity ratios. To showcase its potential, we reanalyze two $17$-year-old algorithms from Fomin et al. [2007] that solve $4$-Coloring and #$3$-Coloring, respectively. The original analysis gave running times of $O(1.7272^n)$ and $O(1.6262^n)$, respectively; our analysis improves these to $O(1.7207^n)$ and $O(1.6225^n)$.
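As background (the standard branching-vector analysis, not the paper's new piecewise method): if a branching rule decreases a chosen measure $\mu(I)$ of the instance by at least $t_1, \dots, t_r$ in its $r$ branches, the running time obeys
\[
T(\mu) \le \sum_{i=1}^{r} T(\mu - t_i) + \mathrm{poly}(n), \qquad \text{hence} \qquad T(\mu) = O^{*}(c^{\mu}),
\]
where $c$ is the unique positive root of $1 = \sum_{i=1}^{r} x^{-t_i}$. Measure & conquer refines how $\mu$ is chosen; the piecewise analysis proposed here additionally partitions the instance space by similarity ratio and bounds the running time on each part separately.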

Existing debiasing methods inevitably make unreasonable or undesired predictions because they are designed and evaluated to achieve parity across different social groups while leaving individual facts aside, which ends up modifying existing knowledge. In this paper, we first establish a new bias-mitigation benchmark, BiasKE, leveraging existing and newly constructed datasets; it systematically assesses debiasing performance with complementary metrics on fairness, specificity, and generalization. We also propose a novel debiasing method, Fairness Stamp (FAST), which enables editable fairness through fine-grained calibration of individual biased knowledge. Comprehensive experiments demonstrate that FAST surpasses state-of-the-art baselines with remarkable debiasing performance while not hampering overall model capability for knowledge preservation, highlighting the prospect of fine-grained debiasing strategies for editable fairness in LLMs.

An inference procedure is proposed to provide consistent estimators of parameters in a modal regression model with a covariate prone to measurement error. A score-based diagnostic tool exploiting the parametric bootstrap is developed to assess the adequacy of parametric assumptions imposed on the regression model. The proposed estimation method and diagnostic tool are applied to synthetic data generated from simulation experiments and to data from real-world applications to demonstrate their implementation and performance. These empirical examples illustrate the importance of adequately accounting for measurement error in the error-prone covariate when inferring the association between a response and covariates based on a modal regression model, which is especially suitable for skewed and heavy-tailed response data.
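To make the diagnostic idea concrete, here is a generic parametric-bootstrap sketch in Python. It illustrates the general recipe (fit, simulate from the fit, compare test statistics), not the paper's specific score-based statistic; the arguments fit_model, simulate, and statistic are hypothetical placeholders supplied by the user.

```python
import numpy as np

def parametric_bootstrap_pvalue(data, fit_model, simulate, statistic, B=500, rng=None):
    """Generic parametric-bootstrap diagnostic: fit the assumed model,
    simulate B datasets from the fit, and compare the observed statistic
    with its bootstrap distribution."""
    rng = np.random.default_rng(rng)
    theta_hat = fit_model(data)
    t_obs = statistic(data, theta_hat)
    t_boot = []
    for _ in range(B):
        data_b = simulate(theta_hat, len(data), rng)
        t_boot.append(statistic(data_b, fit_model(data_b)))
    # Right-tailed p-value: large statistics indicate model misfit.
    return float(np.mean(np.asarray(t_boot) >= t_obs))
```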

Denotational models of type theory, such as set-theoretic, domain-theoretic, or category-theoretic models, use (actual) infinite sets of objects in one way or another. The potential infinite, seen as an extensible finite, requires a dynamic understanding of these infinite sets of objects. It follows that the type $nat$ cannot be interpreted as a set of all natural numbers, $\lbrack\!\lbrack nat \rbrack\!\rbrack = \mathbb{N}$, but rather as an increasing family of finite sets $\mathbb{N}_i = \{0, \dots, i-1\}$. Any reference to $\lbrack\!\lbrack nat \rbrack\!\rbrack$, whether by the formal syntax or by meta-level concepts, must be a reference to a (sufficiently large) set $\mathbb{N}_i$. We present the basic concepts for interpreting a fragment of the simply typed $\lambda$-calculus within such a dynamic model. A type $\varrho$ is thereby interpreted as a process, which is formally a factor system together with a limit of it. A factor system is very similar to a direct or an inverse system, and its limit is likewise defined by a universal property. It is crucial to recognize that a limit is not necessarily an unreachable end beyond the process; rather, it can be regarded as an intermediate state within the factor system, one that can still be extended. The logical type $bool$ plays an important role, and we interpret it classically as the set $\{true, false\}$. We provide an interpretation of simply typed $\lambda$-terms in these factor systems and limits. The main result is a reflection principle, which states that an element in the limit has a ``full representative'' at a sufficiently large stage within the factor system. For propositions, that is, terms of type $bool$, this implies that statements about the limit are true if and only if they are true at that sufficiently large stage.

Threshold automata are a computational model that has proven to be versatile in modeling threshold-based distributed algorithms and enabling their completely automatic parameterized verification. We present novel techniques for the verification of threshold automata, based on well-structured transition systems, that allow us to extend the expressiveness of both the computational model and the specifications that can be verified. In particular, we extend the model to allow decrements and resets of shared variables, possibly on cycles, and the specifications to general coverability. While these extensions of the model in general lead to undecidability, our algorithms provide a semi-decision procedure. We demonstrate the benefit of our extensions by showing that we can model complex round-based algorithms such as the phase king consensus algorithm and the Red Belly Blockchain protocol (published in 2019), and verify them fully automatically for the first time.

As artificial intelligence (AI) models continue to scale up, they are becoming more capable and more integrated into various forms of decision-making systems. For models involved in moral decision-making, also known as artificial moral agents (AMAs), interpretability provides a way to trust and understand an agent's internal reasoning mechanisms for effective use and error correction. In this paper, we provide an overview of this rapidly evolving sub-field of AI interpretability, introduce the concept of the Minimum Level of Interpretability (MLI), and recommend an MLI for various types of agents to aid their safe deployment in real-world settings.
