
We consider the linear lambda-calculus extended with the sup type constructor, which provides an additive conjunction along with a non-deterministic destructor. The sup type constructor was introduced in the context of quantum computing. In this paper, we study this type constructor within a simple categorical model of linear logic, namely the category of semimodules over a commutative semiring. We demonstrate that the non-deterministic destructor finds a suitable model in a "weighted" codiagonal map. This approach offers a valid and insightful alternative for interpreting non-determinism, especially in settings where the conventional Powerset Monad interpretation does not align with the category's structure, as is the case for the category of semimodules. The validity of this alternative relies on the presence of biproducts in the category.
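To make the interpretation concrete, here is a minimal sketch, assuming only the standard biproduct structure of the category of semimodules over a commutative semiring S; the particular weights used in the paper are not fixed here. A "weighted" codiagonal scales each component of the biproduct by a scalar of the semiring before summing, and the ordinary codiagonal is recovered at \alpha = \beta = 1:

\[
\nabla_{\alpha,\beta} \;=\; [\,\alpha \cdot \mathrm{id}_A,\; \beta \cdot \mathrm{id}_A\,] : A \oplus A \longrightarrow A,
\qquad
\nabla_{\alpha,\beta}(x, y) \;=\; \alpha \cdot x + \beta \cdot y,
\qquad \alpha, \beta \in S.
\]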

Related Content

We construct a bipartite generalization of Alon and Szegedy's nearly orthogonal vectors, thereby obtaining strong bounds for several extremal problems involving the Lovász theta function, vector chromatic number, minimum semidefinite rank, nonnegative rank, and extension complexity of polytopes. In particular, we derive some general lower bounds for the vector chromatic number which may be of independent interest.

Generative diffusion models have achieved spectacular performance in many areas of generative modeling. While the fundamental ideas behind these models come from non-equilibrium physics, in this paper we show that many aspects of these models can be understood using the tools of equilibrium statistical mechanics. Using this reformulation, we show that generative diffusion models undergo second-order phase transitions corresponding to symmetry-breaking phenomena. We argue that this leads to a form of instability that lies at the heart of their generative capabilities and that can be described by a set of mean-field critical exponents. We conclude by analyzing recent work connecting diffusion models and associative memory networks in view of these thermodynamic formulations.

The prevailing statistical approach to analyzing persistence diagrams is concerned with filtering out topological noise. In this paper, we adopt a different viewpoint and aim at estimating the actual distribution of a random persistence diagram, which captures both topological signal and noise. To that end, Chazal and Divol (2019) proved that, under general conditions, the expected value of a random persistence diagram is a measure admitting a Lebesgue density, called the persistence intensity function. In this paper, we are concerned with estimating the persistence intensity function and a novel, normalized version of it, called the persistence density function. We present a class of kernel-based estimators based on an i.i.d. sample of persistence diagrams and derive estimation rates in the supremum norm. As a direct corollary, we obtain uniform consistency rates for estimating linear representations of persistence diagrams, including Betti numbers and persistence surfaces. Interestingly, the persistence density function delivers stronger statistical guarantees.
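As an illustration of the estimator class, here is a minimal sketch, assuming an isotropic Gaussian kernel and ignoring the boundary effects near the diagonal that the paper's analysis must handle; the names diagrams, grid, and h are our stand-ins for the i.i.d. sample, the evaluation points, and the bandwidth.

import numpy as np

def intensity_estimate(diagrams, grid, h):
    """Kernel estimate of the persistence intensity function.
    diagrams: list of (m_i, 2) arrays of (birth, death) points;
    grid: (G, 2) array of evaluation points; h: bandwidth."""
    est = np.zeros(len(grid))
    for D in diagrams:
        # pairwise differences between evaluation points and diagram points
        diff = (grid[:, None, :] - D[None, :, :]) / h            # shape (G, m_i, 2)
        # isotropic Gaussian kernel, normalized in two dimensions
        k = np.exp(-0.5 * (diff ** 2).sum(axis=-1)) / (2 * np.pi * h ** 2)
        est += k.sum(axis=1)                                     # sum over diagram points
    return est / len(diagrams)                                   # average over the sample

Normalizing this estimate so that it integrates to one over the grid would give the analogous plug-in estimate of the persistence density function.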

The categorical models of the differential lambda-calculus are additive categories, because the Leibniz rule requires the summation of two expressions. This means that, as far as the differential lambda-calculus and differential linear logic are concerned, these models feature finite non-determinism, and indeed these languages are essentially non-deterministic. In a previous paper we introduced a categorical framework for differentiation which does not require additivity and is compatible with deterministic models such as coherence spaces and with probabilistic models such as probabilistic coherence spaces. Based on this semantics, we develop the syntax of a deterministic version of the differential lambda-calculus. One nice feature of this new approach to differentiation is that it is compatible with general fixpoints of terms, so our language is actually a differential extension of PCF, for which we provide a fully deterministic operational semantics.

We propose an augmented Lagrangian-based preconditioner to accelerate the convergence of Krylov subspace methods applied to linear systems of equations with a block three-by-three structure, such as those arising from mixed finite element discretizations of the coupled Stokes-Darcy flow problem. We analyze the spectrum of the preconditioned matrix and show how the new preconditioner can be applied efficiently. Numerical experiments illustrate the effectiveness of the preconditioner in conjunction with flexible GMRES for solving linear systems of equations arising from a 3D test problem.
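The paper treats a block three-by-three Stokes-Darcy system; as a hedged illustration of the same mechanism on the simpler two-by-two saddle-point system [[A, B^T], [B, 0]], the sketch below applies a block upper-triangular augmented Lagrangian preconditioner, assuming a diagonal weight matrix W (the names, the two-block reduction, and the diagonal-W assumption are ours, not the paper's).

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def al_preconditioner(A, B, Wdiag, gamma):
    """Inverse of the block upper-triangular AL preconditioner
    P = [[A + gamma*B^T W^{-1} B, B^T], [0, -(1/gamma)*W]],
    applied by block back-substitution; W is assumed diagonal."""
    n, m = A.shape[0], B.shape[0]
    Winv = sp.diags(1.0 / Wdiag)
    A_gamma = (A + gamma * (B.T @ Winv @ B)).tocsc()
    solve_A = spla.factorized(A_gamma)       # sparse LU, reused at every application
    def apply(r):
        r1, r2 = r[:n], r[n:]
        y2 = -gamma * r2 / Wdiag             # second block row: -(1/gamma) W y2 = r2
        y1 = solve_A(r1 - B.T @ y2)          # first block row by back-substitution
        return np.concatenate([y1, y2])
    return spla.LinearOperator((n + m, n + m), matvec=apply)

Passed as the M argument of a (flexible) GMRES routine, preconditioners of this family tend to cluster the spectrum as gamma grows, at the price of a more ill-conditioned solve with A_gamma; the paper's spectral analysis addresses the corresponding three-by-three case.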

Feature attribution is a fundamental task in both machine learning and data analysis: it determines the contribution of individual features or variables to a model's output, thereby identifying the features most important for predicting an outcome. The history of feature attribution methods can be traced back to Generalized Additive Models (GAMs), which extend linear regression by incorporating non-linear relationships between dependent and independent variables. In recent years, gradient-based methods and surrogate models have been applied to unravel complex Artificial Intelligence (AI) systems, but these methods have limitations: GAMs tend to achieve lower accuracy, gradient-based methods can be difficult to interpret, and surrogate models often suffer from stability and fidelity issues. Furthermore, most existing methods do not consider users' contexts, which can significantly influence their preferences. To address these limitations and advance the current state of the art, we define a novel feature attribution framework called Context-Aware Feature Attribution Through Argumentation (CA-FATA). Our framework harnesses the power of argumentation by treating each feature as an argument that can support, attack, or neutralize a prediction. Moreover, CA-FATA formulates feature attribution as an argumentation procedure in which each computation has explicit semantics, which makes the framework inherently interpretable. CA-FATA also easily integrates side information, such as users' contexts, resulting in more accurate predictions.
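As a purely illustrative toy, not CA-FATA's actual semantics, one can picture the tri-polar role of a feature as a thresholding of its signed contribution; the rule and the threshold tau below are invented for illustration.

def feature_polarity(contribution, tau=0.1):
    """Toy polarity rule (hypothetical, not the paper's): a feature argues for
    the prediction if its signed contribution exceeds +tau, against it below
    -tau, and neutralizes (neither supports nor attacks) in between."""
    if contribution > tau:
        return "support"
    if contribution < -tau:
        return "attack"
    return "neutralize"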

We study the stability of randomized Taylor schemes for ODEs. We consider three notions of probabilistic stability: asymptotic stability, mean-square stability, and stability in probability. We prove fundamental properties of the probabilistic stability regions and benchmark them against the absolute stability regions for deterministic Taylor schemes.
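For the deterministic benchmark, the absolute stability region of the order-p Taylor scheme on the test equation y' = \lambda y is {z = h\lambda : |R_p(z)| <= 1}, where R_p is the degree-p truncation of the exponential; the sketch below tabulates it on a grid (the probabilistic stability regions studied in the paper involve the randomized schemes and are not reproduced here).

import math
import numpy as np

def taylor_stability_mask(p, re=(-5.0, 2.0), im=(-4.0, 4.0), n=400):
    """Boolean mask of the absolute stability region of the order-p
    deterministic Taylor scheme: |R_p(z)| <= 1 with R_p(z) = sum_{k<=p} z^k/k!."""
    x = np.linspace(re[0], re[1], n)
    y = np.linspace(im[0], im[1], n)
    Z = x[None, :] + 1j * y[:, None]
    R = sum(Z ** k / math.factorial(k) for k in range(p + 1))
    return np.abs(R) <= 1.0

For p = 1 this recovers the unit disk centred at -1, the familiar region of the explicit Euler scheme.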

The multispecies Landau collision operator describes two-particle, small-scattering-angle (grazing) collisions in a plasma made up of different species of particles, such as electrons and ions. Recently, a structure-preserving deterministic particle method (arXiv:1910.03080) was developed for the single-species spatially homogeneous Landau equation. This method relies on a regularization of the Landau collision operator so that an approximate solution, which is a linear combination of Dirac delta distributions, is well-defined. Based on a weak form of the regularized Landau equation, the time-dependent locations of the Dirac delta functions satisfy a system of ordinary differential equations. In this work, we extend this particle method to the multispecies case and examine its conservation of mass, momentum, and energy, as well as its entropy-decay properties. We show that the equilibrium distribution of the regularized multispecies Landau equation is a Maxwellian distribution, and we state a critical condition on the regularization parameters that guarantees a species-independent equilibrium temperature. A convergence study comparing an exact multispecies BKW solution to the particle solution shows approximately second-order accuracy. Important physical properties of the particle method, such as conservation, entropy decay, and the equilibrium distribution, are demonstrated with several numerical examples.
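For orientation, the kernel at the heart of the (regularized) weak form is the standard Landau projection matrix; a minimal sketch follows, with the species masses, particle weights, and the mollifier-based regularization of the particle method deliberately omitted.

import numpy as np

def landau_kernel(z, gamma=0.0, C=1.0):
    """Landau projection kernel A(z) = C |z|^(gamma+2) (I - z z^T / |z|^2):
    a scaled projection onto the plane orthogonal to the relative velocity z.
    gamma = 0 corresponds to Maxwell molecules; gamma = -3 to Coulomb in 3D."""
    r2 = float(np.dot(z, z))
    d = z.size
    if r2 == 0.0:
        return np.zeros((d, d))
    return C * r2 ** ((gamma + 2) / 2) * (np.eye(d) - np.outer(z, z) / r2)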

Quantum computing promises transformational gains for solving some problems, but little to none for others. For anyone hoping to use quantum computers now or in the future, it is important to know which problems will benefit. In this paper, we introduce a framework for answering this question both intuitively and quantitatively. The underlying structure of the framework is a race between quantum and classical computers, where their relative strengths determine when each wins. While classical computers operate faster, quantum computers can sometimes run more efficient algorithms. Whether the speed advantage or the algorithmic advantage dominates determines whether a problem will benefit from quantum computing or not. Our analysis reveals that many problems, particularly those of small to moderate size that can be important for typical businesses, will not benefit from quantum computing. Conversely, larger problems or those with particularly big algorithmic gains will benefit from near-term quantum computing. Since very large algorithmic gains are rare in practice and theorized to be rare even in principle, our analysis suggests that the benefits from quantum computing will flow either to users of these rare cases, or practitioners processing very large data.
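The race can be quantified with a back-of-the-envelope model; the exponents and speed gap below are hypothetical illustrations, not the paper's calibrated values.

def crossover_size(a=3.0, b=2.0, speed_ratio=1e6):
    """Smallest problem size n at which a quantum computer running an O(n^b)
    algorithm overtakes a classical computer running an O(n^a) algorithm,
    when each quantum step is `speed_ratio` times slower than a classical one:
    speed_ratio * n^b = n^a  =>  n = speed_ratio ** (1 / (a - b))."""
    return speed_ratio ** (1.0 / (a - b))

print(crossover_size())        # cubic vs. quadratic across a 10^6 speed gap: n ~ 10^6
print(crossover_size(b=1.5))   # a larger algorithmic gain pulls the crossover to n ~ 10^4

Below the crossover, the classical machine's raw speed wins; above it, the quantum algorithmic advantage dominates, which matches the paper's intuition for why small and moderate problem sizes see no benefit.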

We propose a method for computing the Lyapunov exponents of renewal equations (delay equations of Volterra type) and of coupled systems of renewal and delay differential equations. The method consists in the reformulation of the delay equation as an abstract differential equation, the reduction of the latter to a system of ordinary differential equations via pseudospectral collocation, and the application of the standard discrete QR method. The effectiveness of the method is shown experimentally and a MATLAB implementation is provided.
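The last ingredient, the standard discrete QR method, is sketched below for a generic ODE system x' = f(t, x) with Jacobian jac; the paper's actual contribution, the pseudospectral reduction of the renewal/delay equation to such a system, is not reproduced, and this Python sketch is ours rather than the authors' MATLAB implementation.

import numpy as np
from scipy.integrate import solve_ivp

def discrete_qr_lyapunov(f, jac, x0, t1, dt, k):
    """Approximate the k leading Lyapunov exponents of x' = f(t, x) by the
    standard discrete QR method: propagate k orthonormal tangent vectors with
    the variational equation Y' = jac(t, x(t)) @ Y, re-orthonormalize by QR
    after each window of length dt, and time-average log|diag(R)|."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    Q = np.eye(n)[:, :k]
    sums = np.zeros(k)
    t = 0.0
    steps = int(round(t1 / dt))
    for _ in range(steps):
        def rhs(s, z):
            xs, Y = z[:n], z[n:].reshape(n, k)
            return np.concatenate([f(s, xs), (jac(s, xs) @ Y).ravel()])
        sol = solve_ivp(rhs, (t, t + dt), np.concatenate([x, Q.ravel()]),
                        rtol=1e-9, atol=1e-12)
        x = sol.y[:n, -1]
        Q, R = np.linalg.qr(sol.y[n:, -1].reshape(n, k))
        sums += np.log(np.abs(np.diag(R)))
        t += dt
    return sums / (steps * dt)

On a standard chaotic benchmark such as the Lorenz system, this recovers the familiar spectrum of approximately (0.9, 0, -14.6).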
