
We explore the notion of history-determinism in the context of timed automata (TA) over infinite timed words. History-deterministic (HD) automata are those in which nondeterminism can be resolved on the fly, based on the run constructed thus far. History-determinism is a robust property that admits different game-based characterisations, and HD specifications allow for game-based verification without an expensive determinisation step. We show that the class of timed $\omega$-languages recognised by HD timed automata strictly extends that of deterministic ones and is strictly included in that recognised by fully non-deterministic TA. For non-deterministic TA, universality is known to be undecidable already for B\"uchi acceptance. For history-deterministic TA with arbitrary parity acceptance, in contrast, we show that timed universality, inclusion, and synthesis all remain decidable and are EXPTIME-complete. For the subclass of TA with safety or reachability acceptance, one can decide (in EXPTIME) whether such an automaton is history-deterministic; if so, it can be effectively determinised without introducing new automaton states.
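
To make the notion of a resolver concrete, here is a minimal, untimed sketch (our own toy example, not from the paper): history-determinism asks for a strategy that, given only the run built so far and the next input, commits to a single transition of the nondeterministic automaton.

# Toy illustration of a resolver for history-determinism.
# The automaton, states, and transitions below are hypothetical.
delta = {
    ("q0", "a"): {"q0", "q1"},
    ("q0", "b"): {"q0"},
    ("q1", "b"): {"q1"},
}

def resolver(run, letter):
    """Choose one successor using only the history (the run so far).
    Here we always prefer q0, which can still do everything q1 can."""
    succs = sorted(delta.get((run[-1], letter), set()))
    return succs[0] if succs else None

def run_word(word, initial="q0"):
    run = [initial]
    for letter in word:
        nxt = resolver(run, letter)
        if nxt is None:
            return None          # the resolver is stuck on this word
        run.append(nxt)
    return run

print(run_word("aabb"))          # ['q0', 'q0', 'q0', 'q0', 'q0']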

Related content

The Non-dominated Sorting Genetic Algorithm II (NSGA-II) is the most prominent multi-objective evolutionary algorithm for real-world applications. While it evidently performs well on bi-objective optimization problems, empirical studies suggest that it is less effective when applied to problems with more than two objectives. A recent mathematical runtime analysis confirmed this observation by proving that, for an exponential number of iterations, the NSGA-II misses a constant fraction of the Pareto front of the simple 3-objective OneMinMax problem. In this work, we provide the first mathematical runtime analysis of the NSGA-III, a refinement of the NSGA-II aimed at better handling more than two objectives. We prove that the NSGA-III with sufficiently many reference points (a small constant factor more than the size of the Pareto front, as suggested for this algorithm) computes the complete Pareto front of the 3-objective OneMinMax benchmark in an expected number of O(n log n) iterations. This result holds for all population sizes that are at least the size of the Pareto front. It shows a drastic advantage of the NSGA-III over the NSGA-II on this benchmark. The mathematical arguments used here and in previous work on the NSGA-II suggest that similar findings are likely for other benchmarks with three or more objectives.
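
As a concrete illustration of the "sufficiently many reference points" hypothesis, the following sketch (ours, with hypothetical parameter names) generates the standard Das-Dennis reference points that the NSGA-III distributes over the unit simplex; the parameter p controls how many there are.

from itertools import product

def reference_points(m=3, p=12):
    """All points (i1/p, ..., im/p) with nonnegative integer
    coordinates summing to p: C(p+m-1, m-1) points in total."""
    return [tuple(i / p for i in c)
            for c in product(range(p + 1), repeat=m)
            if sum(c) == p]

print(len(reference_points(m=3, p=12)))   # 91 = C(14, 2)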

We introduce the study of designing allocation mechanisms for fairly allocating indivisible goods in settings with interdependent valuation functions. In our setting, there is a set of goods that needs to be allocated to a set of agents (without disposal). Each agent is given a private signal, and his valuation function depends on the signals of all agents. Without the use of payments, there are strong impossibility results for designing strategyproof allocation mechanisms, even in settings without interdependent values. We therefore turn to designing mechanisms that always admit equilibria that are fair with respect to the true signals, despite the agents' potentially distorted perceptions. To do so, we first extend the definitions of pure Nash equilibrium and of well-studied fairness notions from the literature to the interdependent setting. We devise simple allocation mechanisms that always admit a fair equilibrium with respect to the true signals. We complement this result by showing that, even in very simple cases with binary additive interdependent valuation functions, no allocation mechanism that always admits an equilibrium can guarantee that all equilibria are fair with respect to the true signals.
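
To fix ideas, here is a small sketch (our own illustration; the valuation form, weights, and fairness check are hypothetical and not the paper's mechanism) of a binary additive interdependent valuation and an envy-freeness-up-to-one-good (EF1) test evaluated at the true signal profile.

def value(i, bundle, s, weight):
    """Binary additive interdependent value: good g contributes
    weight[i][g] iff at least one agent's true signal marks g."""
    return sum(weight[i][g] for g in bundle
               if any(s[j][g] for j in range(len(s))))

def is_ef1(alloc, s, weight):
    """EF1 judged at the true signal profile s."""
    n = len(alloc)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            vi_own = value(i, alloc[i], s, weight)
            bundles = [alloc[j]] if not alloc[j] else \
                [[h for h in alloc[j] if h != g] for g in alloc[j]]
            if not any(vi_own >= value(i, b, s, weight) for b in bundles):
                return False
    return True

s = [[1, 0], [0, 1]]        # true binary signals, one per agent and good
weight = [[2, 1], [1, 2]]   # hypothetical additive weights
print(is_ef1([[0], [1]], s, weight))   # True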

The Internet revolution of the 1990s, followed by the data-driven and information revolution, has transformed the world as we know it. Ideas that seemed like science fiction 10 to 20 years ago (e.g., machines dominating the world) are now seen as possible. This revolution has also brought a need for new regulatory practices in which user trust and the Artificial Intelligence (AI) discourse have a central role. This work aims to clarify some misconceptions about user trust in the AI discourse and to counter the tendency to design vulnerable interactions that lead to further breaches of trust, both real and perceived. Our findings illustrate the lack of clarity in understanding user trust and its effects on computer science, especially in measuring user trust characteristics. We argue for clarifying these notions to avoid potential trust gaps and misinterpretations in AI adoption and appropriation.

Bayesian probabilistic numerical methods for numerical integration offer significant advantages over their non-Bayesian counterparts: they can encode prior information about the integrand, and can quantify uncertainty over estimates of an integral. However, the most popular algorithm in this class, Bayesian quadrature, is based on Gaussian process models and is therefore associated with a high computational cost. To improve scalability, we propose an alternative approach based on Bayesian neural networks which we call Bayesian Stein networks. The key ingredients are a neural network architecture based on Stein operators, and an approximation of the Bayesian posterior based on the Laplace approximation. We show that this leads to orders of magnitude speed-ups on the popular Genz functions benchmark, and on challenging problems arising in the Bayesian analysis of dynamical systems, and the prediction of energy production for a large-scale wind farm.
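
The core idea can be sketched in a few lines (our simplification: a linear-in-parameters model with Gaussian-bump features instead of a neural network, and a one-dimensional standard normal in place of a general density). The Langevin Stein operator (L u)(x) = u'(x) + u(x) d/dx log p(x) has zero mean under p, so when the integrand f is fitted by theta0 + (L u)(x), the constant theta0 estimates the integral of f against p.

import numpy as np

rng = np.random.default_rng(0)

def stein_feature(x, c):
    """(L u)(x) for the bump u(x) = exp(-(x-c)^2) under p = N(0,1),
    where d/dx log p(x) = -x."""
    u = np.exp(-(x - c) ** 2)
    return -2.0 * (x - c) * u + u * (-x)

f = lambda x: x ** 2                       # E_p[f] = 1 is the target
x = rng.standard_normal(200)
centers = np.linspace(-3, 3, 15)
Phi = np.column_stack([np.ones_like(x)]
                      + [stein_feature(x, c) for c in centers])
theta, *_ = np.linalg.lstsq(Phi, f(x), rcond=None)
print(theta[0])                            # close to 1: the integral estimate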

We develop PROTES, a new method for black-box optimization based on probabilistic sampling from a probability density function given in the low-parametric tensor train format. We test it on complex multidimensional arrays and on discretized multivariable functions taken, among others, from real-world applications, including unconstrained binary optimization and optimal control problems, for which the possible number of elements is up to $2^{100}$. In numerical experiments, both on analytic model functions and on complex problems, PROTES outperforms popular existing discrete optimization methods (Particle Swarm Optimization, Covariance Matrix Adaptation, Differential Evolution, and others).
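
The overall loop has a simple shape, sketched below with a drastic simplification (ours): an independent-Bernoulli density stands in for the tensor-train density, purely to show the sample / select-elites / re-fit structure; all parameter names are hypothetical.

import numpy as np

def protes_like(f, d, iters=100, batch=50, k=10, lr=0.3, seed=0):
    rng = np.random.default_rng(seed)
    p = np.full(d, 0.5)                    # stand-in for the TT density
    best_x, best_y = None, np.inf
    for _ in range(iters):
        X = (rng.random((batch, d)) < p).astype(int)   # sample candidates
        y = np.array([f(xi) for xi in X])
        elite = X[np.argsort(y)[:k]]                   # top-k by objective
        p = (1 - lr) * p + lr * elite.mean(axis=0)     # re-fit the density
        if y.min() < best_y:
            best_y, best_x = y.min(), X[np.argmin(y)]
    return best_x, best_y

print(protes_like(lambda xi: xi.sum(), d=30)[1])       # typically 0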

In [4], we introduced an extension of team semantics (causal teams) which assigns an interpretation to interventionist counterfactuals and to causal notions based on them (as, e.g., in Pearl's and Woodward's manipulationist approaches to causation). We now present a further extension of this framework (causal multiteams) which allows us to talk about probabilistic causal statements. We analyze the expressive resources of two causal-probabilistic languages, one finitary and one infinitary. We show that many causal-probabilistic notions from the field of causal inference can be expressed already in the finitary language, and we prove a normal form theorem that sheds new light on Pearl's ``ladder of causation''. On the other hand, we provide an exact semantic characterization of the infinitary language, which shows that this language captures precisely those causal-probabilistic statements that do not commit us to any specific interpretation of probability; and we prove that no usual, countable language is apt for this task.
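
For readers unfamiliar with the setting, here is a minimal computational reading (our own toy example; the model and variable names are hypothetical): a causal multiteam can be viewed as a multiset of assignments together with structural functions, an intervention do(X=x) overwrites X and recomputes its descendants, and probabilities are relative frequencies in the resulting multiteam.

from fractions import Fraction

# Toy model: U exogenous, X := U, Y := X; three-row multiteam.
team = [{"U": 0, "X": 0, "Y": 0},
        {"U": 0, "X": 0, "Y": 0},
        {"U": 1, "X": 1, "Y": 1}]
functions = {"X": lambda a: a["U"], "Y": lambda a: a["X"]}
order = ["X", "Y"]                          # a topological order

def intervene(team, var, val):
    """do(var = val): overwrite var and recompute the other
    endogenous variables from their structural functions."""
    out = []
    for a in team:
        b = dict(a, **{var: val})
        for v in order:
            if v != var:
                b[v] = functions[v](b)
        out.append(b)
    return out

def prob(team, pred):
    return Fraction(sum(pred(a) for a in team), len(team))

print(prob(team, lambda a: a["Y"] == 1))                      # 1/3
print(prob(intervene(team, "X", 1), lambda a: a["Y"] == 1))   # 1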

In this paper, we present a new methodology for developing arbitrarily high-order structure-preserving methods for solving the quantum Zakharov system. The key ingredients of our method are: (i) the original Hamiltonian energy is reformulated into a quadratic form by introducing a new quadratic auxiliary variable; (ii) based on the energy variational principle, the original system is then rewritten into a new equivalent system which inherits the mass conservation law and a quadratic energy; (iii) the resulting system is discretized by a symplectic Runge-Kutta method in time combined with the Fourier pseudo-spectral method in space. The proposed method achieves arbitrarily high-order accuracy in time and preserves the discrete mass and the original Hamiltonian energy exactly. Moreover, an efficient iterative solver is presented to solve the resulting discrete nonlinear equations. Finally, ample numerical examples are presented to demonstrate the theoretical claims and to illustrate the efficiency of our methods.
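
To illustrate ingredient (iii) in isolation (our sketch, on a simpler stand-in model rather than the quantum Zakharov system), the following combines the one-stage Gauss Runge-Kutta method (implicit midpoint, which is symplectic) with a Fourier pseudo-spectral discretisation, applied to the free Schr\"odinger equation $iu_t = -u_{xx}$; the discrete mass $\|u\|^2$ is preserved to machine precision.

import numpy as np

N, L, dt = 128, 2 * np.pi, 1e-2
x = np.linspace(0, L, N, endpoint=False)
k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi     # spectral wavenumbers

u = np.exp(1j * x) + 0.5 * np.exp(-2j * x)     # initial data
mass0 = np.sum(np.abs(u) ** 2)

lam = -1j * k ** 2                              # mode-wise ODE: v' = lam v
step = (1 + dt * lam / 2) / (1 - dt * lam / 2)  # implicit midpoint map

for _ in range(1000):
    u = np.fft.ifft(step * np.fft.fft(u))

print(abs(np.sum(np.abs(u) ** 2) - mass0))      # ~1e-12: mass preserved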

In this paper, we analyze a computational method for thermal radiative transfer (TRT) problems based on the multilevel quasidiffusion (variable Eddington factor) method, with the method of long characteristics (ray tracing) for the Boltzmann transport equation (BTE). The method is formulated with a multilevel set of moment equations of the BTE which are coupled to the material energy balance (MEB). The moment equations are exactly closed via the Eddington tensor defined by the BTE solution. Two discrete spatial meshes are defined: a material grid on which the MEB and the low-order moment equations are discretized, and a grid of characteristics for solving the BTE. Numerical testing of the method is carried out on the well-known Fleck-Cummings test problem, which models supersonic radiation wave propagation. Mesh refinement studies are performed on each of the two spatial grids independently, holding one mesh width constant while refining the other. We also present data on the convergence of iterations.
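
As a pointwise illustration of the closure (our sketch, in 1D slab geometry with a hypothetical Gauss angular quadrature): the Eddington factor is the ratio of the second to the zeroth angular moment of the transport solution.

import numpy as np

mu, w = np.polynomial.legendre.leggauss(8)   # angular quadrature on [-1, 1]

def eddington_factor(I):
    """I: intensities at the quadrature angles in one cell."""
    phi = np.sum(w * I)                      # zeroth angular moment
    p = np.sum(w * mu ** 2 * I)              # second angular moment
    return p / phi

print(eddington_factor(np.ones_like(mu)))    # isotropic limit: 1/3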

Lexicase selection is a widely used parent selection algorithm in genetic programming, known for its success in task domains such as program synthesis, symbolic regression, and machine learning. Due to its non-parametric and recursive nature, calculating the probability of each individual being selected by lexicase selection has been proven to be an NP-hard problem, which discourages deeper theoretical understanding and practical improvement of the algorithm. In this work, we introduce probabilistic lexicase selection (plexicase selection), a novel parent selection algorithm that efficiently approximates the probability distribution of lexicase selection. Our method not only demonstrates superior problem-solving capabilities as a semantic-aware selection method, but also benefits from having a probabilistic representation of the selection process, which enables greater efficiency and flexibility. Experiments are conducted in two prevalent domains of genetic programming, program synthesis and symbolic regression, using standard benchmarks including PSB and SRBench. The empirical results show that plexicase selection achieves state-of-the-art problem-solving performance, competitive with that of lexicase selection, while significantly outperforming lexicase selection in computational efficiency.
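
For reference, here is the standard lexicase selection procedure whose selection probabilities plexicase approximates (a minimal sketch in our own notation): the candidate pool is filtered through the test cases in random order, keeping only the elites on each case.

import random

def lexicase_select(pop, errors):
    """pop: individuals; errors[i][t] = error of individual i on
    test case t (lower is better)."""
    candidates = list(range(len(pop)))
    cases = list(range(len(errors[0])))
    random.shuffle(cases)
    for t in cases:
        best = min(errors[i][t] for i in candidates)
        candidates = [i for i in candidates if errors[i][t] == best]
        if len(candidates) == 1:
            break
    return pop[random.choice(candidates)]

pop = ["A", "B", "C"]
errors = [[0, 3], [1, 0], [2, 2]]
print(lexicase_select(pop, errors))   # 'A' or 'B', never the dominated 'C'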

Binary multirelations can model alternating nondeterminism, for instance, in games or nondeterministically evolving systems interacting with an environment. Such systems can show partial or total functional behaviour at both levels of alternation, so that nondeterministic behaviour may occur only at one level or both levels, or not at all. We study classes of inner and outer partial and total functional multirelations in a multirelational language based on relation algebra and power allegories. While it is known that general multirelations do not form a category, we show that the classes of deterministic multirelations mentioned form categories with respect to Peleg composition from concurrent dynamic logic, and sometimes quantaloids. Some of these are isomorphic to the category of binary relations. We also introduce determinisation maps that approximate multirelations either by binary relations or by deterministic multirelations. Such maps are useful for defining modal operators on multirelations.
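
Peleg composition is easy to state operationally; here is a small executable sketch (ours) over finite multirelations represented as sets of pairs (element, frozenset of outcomes): (a, V) belongs to R;S iff some (a, U) is in R and V is the union of one chosen S-successor set W_b for each b in U.

from itertools import product

def peleg(R, S):
    out = set()
    for a, U in R:
        choices = [[W for (b2, W) in S if b2 == b] for b in U]
        for pick in product(*choices):   # one W_b per b in U; no picks
            out.add((a, frozenset().union(*pick)))  # if some b has no W_b
    return out

R = {("a", frozenset({"b1", "b2"}))}
S = {("b1", frozenset({"c1"})),
     ("b2", frozenset({"c1"})), ("b2", frozenset({"c2"}))}
print(peleg(R, S))   # {('a', {'c1'}), ('a', {'c1', 'c2'})}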
