
Kleene's computability theory based on the S1-S9 computation schemes constitutes a model for computing with objects of any finite type and extends Turing's 'machine model', which formalises computing with real numbers. A fundamental distinction in Kleene's framework is between normal and non-normal functionals, where the former compute the associated Kleene quantifier $\exists^n$ and the latter do not. Historically, the focus was on normal functionals, but recently new non-normal functionals have been studied based on well-known theorems, the weakest among which seems to be the uncountability of the reals. These new non-normal functionals are fundamentally different from historical examples like Tait's fan functional: the latter is computable from $\exists^2$, while the former are computable in $\exists^3$ but not in weaker oracles. Of course, a great divide or abyss separates $\exists^2$ and $\exists^3$, and we identify slight variations of our new non-normal functionals that are again computable in $\exists^2$, i.e., that fall on the other side of this abyss. Our examples are based on mainstream mathematical notions from real analysis, like quasi-continuity, Baire classes, bounded variation, and semi-continuity.
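For concreteness, the Kleene quantifier at type two can be specified as follows (a standard formulation, up to an inessential choice of normalisation):

\[
(\forall f \in \mathbb{N}^{\mathbb{N}})\big[\, \exists^2(f) = 1 \;\leftrightarrow\; (\exists n \in \mathbb{N})(f(n) = 0) \,\big],
\]

and $\exists^3$ is the analogue one type level up, deciding for any $Y \colon \mathbb{N}^{\mathbb{N}} \to \mathbb{N}$ whether $(\exists f \in \mathbb{N}^{\mathbb{N}})(Y(f) = 0)$. A functional is then called normal, in the sense of the abstract, when the quantifier of the corresponding type is computable from it.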

Related content

We extend classical work by Janusz Czelakowski on the closure properties of the class of matrix models of entailment relations (nowadays more commonly called multiple-conclusion logics) to the setting of non-deterministic matrices (Nmatrices), characterizing the Nmatrix models of an arbitrary logic through a generalization of the standard class operators to the non-deterministic setting. We highlight the main differences that appear in this more general setting, in particular: the possibility of obtaining Nmatrix quotients using any compatible equivalence relation (not necessarily a congruence); the problem of determining when strict homomorphisms preserve the logic of a given Nmatrix; and the fact that the operations of taking images and preimages cannot be swapped, which determines the exact sequence of operators that generates, from any complete semantics, the class of all Nmatrix models of a logic. Many results, on the other hand, generalize smoothly to the non-deterministic setting: we show, for instance, that a logic is finitely based if and only if both the class of its Nmatrix models and its complement are closed under ultraproducts. We conclude by mentioning possible developments in adapting the Abstract Algebraic Logic approach to logics induced by Nmatrices and the associated equational reasoning over non-deterministic algebras.
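To fix notation for readers new to Nmatrices (the standard Avron-Lev definition, not a construction specific to this paper): an Nmatrix interprets each $n$-ary connective $c$ as a multifunction into non-empty sets of truth values, and valuations are the 'legal' choices within it,

\[
\tilde{c} \colon V^n \to \mathcal{P}(V) \setminus \{\emptyset\}, \qquad v\big(c(\varphi_1, \ldots, \varphi_n)\big) \in \tilde{c}\big(v(\varphi_1), \ldots, v(\varphi_n)\big).
\]

It is precisely this freedom of choice at each formula that breaks the usual congruence-based quotient theory mentioned above.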

Approximated forms of the RII and RIII redistribution matrices are frequently applied to simplify the numerical solution of the radiative transfer problem for polarized radiation, taking partial frequency redistribution (PRD) effects into account. A widely used approximation for RIII is to consider its expression under the assumption of complete frequency redistribution (CRD) in the observer frame (RIII-CRD). The adequacy of this approximation for modeling intensity profiles has been firmly established. By contrast, its suitability for modeling scattering polarization signals has only been analyzed in a few studies, considering simplified settings. In this work, we quantitatively assess the impact and the range of validity of the RIII-CRD approximation in the modeling of scattering polarization. We first present an analytic comparison between RIII and RIII-CRD. We then compare the results of radiative transfer calculations, out of local thermodynamic equilibrium, performed with RIII and RIII-CRD in realistic 1D atmospheric models, focusing on the chromospheric Ca I line at 4227 Å and on the photospheric Sr I line at 4607 Å.
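The structural content of the CRD assumption can be summarised in scalar form as follows (a standard textbook relation quoted for orientation, not a formula taken from the paper; the polarized case carries an additional scattering phase matrix):

\[
R_{\rm III}^{\rm CRD}(\nu', \nu) \;\propto\; \varphi(\nu')\, \varphi(\nu),
\]

where $\varphi$ is the absorption profile and $\nu'$, $\nu$ are the incoming and outgoing frequencies: the two frequencies decouple completely, so the scattered radiation retains no memory of the absorbed frequency.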

Conditional independence plays a foundational role in database theory, probability theory, information theory, and graphical models. In databases, conditional independence appears in database normalization and is known as the (embedded) multivalued dependency. Many properties of conditional independence are shared across various domains, and to some extent these commonalities can be studied through a measure-theoretic approach. The present paper proposes an alternative approach via semiring relations, defined by extending database relations with tuple annotations from some commutative semiring. Integrating various interpretations of conditional independence in this context, we investigate how the choice of the underlying semiring impacts the corresponding axiomatic and decomposition properties. We specifically identify positivity and multiplicative cancellativity as the key semiring properties that enable extending results from the relational context to the broader semiring framework. Additionally, we explore the relationships between different conditional independence notions through model theory, and consider how methods to test logical consequence and validity generalize from database theory and information theory to semiring relations.
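As a concrete illustration (our sketch, not taken from the paper): over the probability semiring, a semiring relation is a table of tuples annotated with non-negative weights, and $X \perp Y \mid Z$ becomes the familiar factorization test; read over the Boolean semiring, the same test becomes the multivalued dependency, where the relation equals the join of its $XZ$- and $YZ$-projections. All names below are illustrative.

```python
from collections import defaultdict
from itertools import product

# A semiring relation over attributes (X, Y, Z): tuples annotated with
# weights from the probability semiring (non-negative reals, +, *).
R = {
    ("a", "u", 0): 0.1, ("a", "v", 0): 0.1,
    ("b", "u", 0): 0.2, ("b", "v", 0): 0.2,
    ("a", "u", 1): 0.2, ("b", "u", 1): 0.2,
}

def marginal(rel, keep):
    """Project onto the attribute positions in `keep`, combining weights with +."""
    out = defaultdict(float)
    for t, w in rel.items():
        out[tuple(t[i] for i in keep)] += w
    return out

def ci_holds(rel, tol=1e-12):
    """X ⊥ Y | Z  iff  R(x,y,z) * R(z) == R(x,z) * R(y,z) for all x, y, z."""
    rz, rxz, ryz = marginal(rel, (2,)), marginal(rel, (0, 2)), marginal(rel, (1, 2))
    doms = [sorted({t[i] for t in rel}) for i in range(3)]
    return all(
        abs(rel.get((x, y, z), 0.0) * rz[(z,)] - rxz[(x, z)] * ryz[(y, z)]) <= tol
        for x, y, z in product(*doms)
    )

print(ci_holds(R))  # True: X and Y are conditionally independent given Z
```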

We construct and analyze a message-passing algorithm for random constraint satisfaction problems (CSPs) at large clause density, generalizing work of El Alaoui, Montanari, and Sellke for Maximum Cut [arXiv:2111.06813] through a connection between random CSPs and mean-field Ising spin glasses. For CSPs with even predicates, the algorithm asymptotically solves a stochastic optimal control problem dual to an extended Parisi variational principle. This gives an optimal fraction of satisfied constraints among algorithms obstructed by the branching overlap gap property of Huang and Sellke [arXiv:2110.07847], notably including the Quantum Approximate Optimization Algorithm and all quantum circuits on a bounded-degree architecture of up to $\epsilon \cdot \log n$ depth.
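For orientation, algorithms in this family instantiate the general approximate message passing (AMP) template (our schematic; the cited works use an incremental variant): for a random matrix $A$, one iterates a nonlinearity with an Onsager correction term that keeps the iterates asymptotically Gaussian,

\[
\boldsymbol{x}^{t+1} = A\, f_t(\boldsymbol{x}^{t}) - b_t\, f_{t-1}(\boldsymbol{x}^{t-1}), \qquad b_t = \frac{1}{n} \sum_{i=1}^{n} f_t'\!\left(x_i^{t}\right),
\]

with the nonlinearities $f_t$ chosen, in the setting above, by solving the stochastic optimal control problem arising from the extended Parisi variational principle.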

A finite-element-based computational scheme is developed and employed to assess a duality-based variational approach to the solution of the linear heat and transport PDEs in one space dimension and time, and of the nonlinear system of Euler ODEs for the rotation of a rigid body about a fixed point. The formulation turns initial-(boundary) value problems into degenerate elliptic boundary value problems in (space-)time domains, representing the Euler-Lagrange equations of suitably designed dual functionals for each of the above problems. We demonstrate reasonable success in approximating solutions of this range of parabolic, hyperbolic, and ODE primal problems, which includes both energy-dissipating and energy-conserving dynamics, by a unified dual strategy lending itself to a variational formulation. The scheme naturally associates a family of dual solutions to a unique primal solution; such `gauge invariance' is demonstrated in our computed solutions of the heat and transport equations, including the case of a transient dual solution corresponding to a steady primal solution of the heat equation. Primal evolution problems with causality are shown to be correctly approximated by non-causal dual problems.
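To see the mechanism in miniature (our illustration of the generic adjoint pattern, not the paper's specific dual functionals), consider the transport equation $u_t + c\,u_x = 0$: pairing the residual with a dual field $\lambda$ and integrating by parts moves all derivatives onto $\lambda$,

\[
\int_0^T \!\! \int_\Omega \lambda\, (u_t + c\, u_x)\, dx\, dt = -\int_0^T \!\! \int_\Omega u\, (\lambda_t + c\, \lambda_x)\, dx\, dt + \text{boundary terms}.
\]

A functional built this way is varied with respect to $\lambda$ alone, its Euler-Lagrange equations pose a boundary value problem over the whole space-time domain, and shifts of $\lambda$ that leave the paired terms unchanged produce distinct dual solutions for the same primal field, consistent with the 'gauge invariance' referred to above.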

We propose a new method to compare survival data based on Higher Criticism (HC) of P-values obtained from many exact hypergeometric tests. The method can accommodate censoring and is sensitive to moderate differences in some unknown and relatively few time intervals, attaining much better power against such differences than the log-rank test and other tests that are popular under non-proportional hazard alternatives. We demonstrate the usefulness of the HC-based test in detecting rare differences, compared to existing tests, using simulated data and actual gene expression data. Additionally, we analyze the asymptotic power of our method under a piecewise homogeneous exponential decay model with rare and weak departures, describing two groups whose failure rates are identical over time except in a few unknown instances in which the second group's failure rate is higher. Under an asymptotic calibration of the model's parameters, the power of the HC-based test exhibits a phase transition across the plane of the rarity and intensity parameters that mirrors the phase transition in a two-sample rare and weak normal means setting. In particular, the phase transition curve of our test bounds a larger fully powered region than the corresponding curve of the log-rank test.
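For reference, the Higher Criticism statistic of Donoho and Jin applied to a collection of P-values has the following standard form (the paper's exact normalisation and search range may differ):

```python
import numpy as np

def higher_criticism(pvalues, alpha0=0.5):
    """HC statistic: maximal standardized gap between the empirical
    distribution of the P-values and the uniform distribution,
    searched over the smallest alpha0-fraction of P-values."""
    p = np.sort(np.asarray(pvalues, dtype=float))
    n = len(p)
    i = np.arange(1, n + 1)
    hc = np.sqrt(n) * (i / n - p) / np.sqrt(p * (1.0 - p))
    k = max(1, int(alpha0 * n))  # restrict to the smallest P-values
    return float(np.max(hc[:k]))

# Under the global null (uniform P-values) HC stays moderate; a few
# very small P-values -- rare, weak departures -- push it up sharply.
rng = np.random.default_rng(0)
null_p = rng.uniform(size=1000)
spiked = null_p.copy()
spiked[:10] = 1e-6
print(higher_criticism(null_p), higher_criticism(spiked))
```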

Meta-analysis aims to combine effect measures from several studies. For continuous outcomes, the most popular effect measures use simple or standardized differences in sample means. However, a number of applications focus on the absolute values of these effect measures (i.e., unsigned magnitude effects). We provide statistical methods for meta-analysis of magnitude effects based on standardized mean differences. We propose a suitable statistical model for random-effects meta-analysis of absolute standardized mean differences (ASMD), investigate a number of statistical methods for point and interval estimation, and provide practical recommendations for choosing among them.
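As a baseline to build from (our sketch of the standard signed-SMD machinery, not the paper's proposed estimator for ASMD): the usual random-effects pooling via DerSimonian-Laird looks as follows; the complication the paper addresses is that taking absolute values folds the effect distribution, so naively feeding $|d_i|$ into this code would be biased.

```python
import numpy as np

def dersimonian_laird(y, v):
    """Random-effects meta-analysis of effect estimates y with variances v."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    w = 1.0 / v                                   # fixed-effect weights
    mu_fe = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - mu_fe) ** 2)              # Cochran's Q
    k = len(y)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)            # between-study variance
    w_re = 1.0 / (v + tau2)                       # random-effects weights
    mu = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return mu, se, tau2

# Signed SMDs from five hypothetical studies and their sampling variances.
d = [0.30, -0.10, 0.45, 0.20, -0.05]
v = [0.02, 0.03, 0.025, 0.04, 0.03]
mu, se, tau2 = dersimonian_laird(d, v)
print(f"pooled SMD = {mu:.3f} +/- {1.96 * se:.3f}, tau^2 = {tau2:.3f}")
```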

The dichromatic number of a digraph is the minimum integer $k$ such that it admits a $k$-dicolouring, i.e., a partition of its vertices into $k$ acyclic subdigraphs. We say that a digraph $D$ is a super-orientation of an undirected graph $G$ if $G$ is the underlying graph of $D$. If $D$ does not contain any pair of symmetric arcs, we simply say that $D$ is an orientation of $G$. In this work, we give both lower and upper bounds on the dichromatic number of super-orientations of chordal graphs. We also exhibit a family of orientations of cographs for which the dichromatic number equals the clique number of the underlying graph.
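A $k$-dicolouring is straightforward to verify: each colour class must induce an acyclic subdigraph, which can be checked with Kahn's algorithm. A minimal self-contained sketch (our illustration):

```python
from collections import defaultdict, deque

def is_acyclic(vertices, arcs):
    """Kahn's algorithm: a digraph is acyclic iff every vertex can be
    removed in topological order."""
    indeg = {v: 0 for v in vertices}
    succ = defaultdict(list)
    for u, v in arcs:
        succ[u].append(v)
        indeg[v] += 1
    queue = deque(v for v in vertices if indeg[v] == 0)
    seen = 0
    while queue:
        u = queue.popleft()
        seen += 1
        for v in succ[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return seen == len(vertices)

def is_dicolouring(arcs, colour):
    """Check that every colour class induces an acyclic subdigraph."""
    classes = defaultdict(set)
    for v, c in colour.items():
        classes[c].add(v)
    return all(
        is_acyclic(vs, [(u, v) for u, v in arcs if u in vs and v in vs])
        for vs in classes.values()
    )

# A directed 3-cycle is not 1-dicolourable, but 2 colours suffice.
arcs = [(1, 2), (2, 3), (3, 1)]
print(is_dicolouring(arcs, {1: 0, 2: 0, 3: 0}))  # False
print(is_dicolouring(arcs, {1: 0, 2: 0, 3: 1}))  # True
```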

We study functional and concurrent calculi with non-determinism, along with type systems to control resources based on linearity. The interplay between non-determinism and linearity is delicate: careless handling of branches can discard resources meant to be used exactly once. Here we go beyond prior work by considering non-determinism in its standard sense: once a branch is selected, the rest are discarded. Our technical contributions are three-fold. First, we introduce a $\pi$-calculus with non-deterministic choice, governed by session types. Second, we introduce a resource $\lambda$-calculus, governed by intersection types, in which non-determinism concerns fetching of resources from bags. Finally, we connect our two typed non-deterministic calculi via a correct translation.
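The tension between non-determinism and linearity is visible in the expected shape of the typing rule for non-deterministic choice (a schematic rule in the usual session-typed style, not quoted from the paper): both branches are typed under the same linear context $\Gamma$, so whichever branch is selected at runtime, every linear resource is still used exactly as its type prescribes.

\[
\frac{\Gamma \vdash P \qquad \Gamma \vdash Q}{\Gamma \vdash P \oplus Q}
\]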

The goal of explainable Artificial Intelligence (XAI) is to generate human-interpretable explanations, but there are no computationally precise theories of how humans interpret AI-generated explanations. The lack of theory means that validation of XAI must be done empirically, on a case-by-case basis, which prevents systematic theory-building in XAI. We propose a psychological theory of how humans draw conclusions from saliency maps, the most common form of XAI explanation, which for the first time allows precise prediction of an explainee's inferences conditioned on the explanation. Our theory posits that, absent an explanation, humans expect the AI to make decisions similar to their own, and that they interpret an explanation by comparing it to the explanations they themselves would give. Comparison is formalized via Shepard's universal law of generalization in a similarity space, a classic theory from cognitive science. A pre-registered user study on AI image classifications with saliency map explanations demonstrates that our theory quantitatively matches participants' predictions of the AI.
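Shepard's law states that generalization decays exponentially with distance in psychological similarity space. A minimal sketch of how the theory above can be operationalized (the feature encoding, distance, and all names below are our illustrative assumptions):

```python
import numpy as np

def shepard_similarity(x, y, scale=1.0):
    """Shepard's universal law: generalization strength decays
    exponentially with distance in similarity space."""
    return np.exp(-np.linalg.norm(x - y) / scale)

def predicted_inference(ai_saliency, own_explanations, scale=1.0):
    """Compare the AI's saliency map against the explanations the
    participant would give for each candidate label; normalize the
    similarities into a distribution over predicted AI decisions."""
    sims = np.array([shepard_similarity(ai_saliency, e, scale)
                     for e in own_explanations])
    return sims / sims.sum()

# Toy flattened 2x2 'saliency maps' for two candidate labels.
ai_map = np.array([0.9, 0.1, 0.0, 0.0])
own = [np.array([0.8, 0.2, 0.0, 0.0]),   # participant's map for label A
       np.array([0.0, 0.0, 0.5, 0.5])]   # participant's map for label B
print(predicted_inference(ai_map, own))  # mass concentrates on label A
```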
