
It often happens that free algebras for a given theory satisfy useful reasoning principles that are not preserved under homomorphisms of algebras, and hence need not hold in an arbitrary algebra. For instance, if $M$ is the free monoid on a set $A$, then the scalar multiplication function $A\times M \to M$ is injective. Therefore, when reasoning in the formal theory of monoids under $A$, it is possible to use this injectivity law to make sound deductions even about monoids under $A$ for which scalar multiplication is not injective -- a principle known in algebra as the permanence of identity. Properties of this kind are of fundamental practical importance to the logicians and computer scientists who design and implement computerized proof assistants like Lean and Coq, as they enable the formal reductions of equational problems that make type checking tractable. As type theories have become increasingly sophisticated, it has become correspondingly more difficult to establish the useful properties of their free models that enable effective implementation. These obstructions have prompted a fruitful return to foundational work in type theory, which has taken on a more geometrical flavor than ever before. Here we expose a modern way to prove a highly non-trivial injectivity law for free models of Martin-L\"of type theory, paying special attention to the ways that contemporary methods in type theory have been influenced by three important ideas of the Grothendieck school: the relative point of view, the language of universes, and the recollement of generalized spaces.
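
To make the motivating example concrete, here is a minimal Lean 4 sketch (ours, for illustration only): modeling the free monoid on a type `α` as `List α`, the scalar multiplication $(a, m) \mapsto a :: m$ is injective simply because list constructors are injective.

```lean
-- The free monoid on α, modeled as List α with concatenation as the
-- monoid operation. "Scalar multiplication" A × M → M sends (a, m) to
-- a :: m, and its injectivity is exactly injectivity of List.cons.
example {α : Type} (a b : α) (s t : List α)
    (h : a :: s = b :: t) : a = b ∧ s = t := by
  injection h with h₁ h₂
  exact ⟨h₁, h₂⟩
```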

G\'acs' coarse-grained algorithmic entropy leverages universal computation to quantify the information content of any given physical state. Unlike the Boltzmann and Shannon-Gibbs entropies, it requires no prior commitment to macrovariables or probabilistic ensembles. Whereas earlier work had made loose connections between the entropy of thermodynamic systems and information-processing systems, the algorithmic entropy formally unifies them both. After adapting G\'acs' definition to Markov processes, we prove a very general second law of thermodynamics, and discuss its advantages over previous formulations. Finally, taking inspiration from Maxwell's demon, we model an information engine powered by compressible data.
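
Although algorithmic entropy itself is uncomputable, compressed length gives a computable upper bound on it, which is enough to convey the idea of an engine powered by compressible data. The following Python toy (our illustration, not G\'acs' construction; `zlib` stands in for a universal machine) compares the work extractable from an all-zero tape and a random tape.

```python
import zlib
import random

def entropy_upper_bound_bits(state: bytes) -> int:
    """Compressed length as a computable upper bound on algorithmic entropy."""
    return 8 * len(zlib.compress(state, level=9))

def extractable_work_bits(state: bytes) -> int:
    """Toy information engine: a tape of n bits with algorithmic entropy K
    can power the extraction of roughly n - K bits' worth of work (in units
    of kT ln 2), in the spirit of Bennett's analysis of Maxwell's demon."""
    n_bits = 8 * len(state)
    return max(0, n_bits - entropy_upper_bound_bits(state))

compressible = bytes(1000)                  # all-zero tape
random_tape = random.randbytes(1000)        # incompressible tape
print(extractable_work_bits(compressible))  # close to 8000 bits
print(extractable_work_bits(random_tape))   # 0: compression gains nothing
```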

The spectral clustering algorithm is often used as a binary clustering method for unclassified data by applying principal component analysis. To study theoretical properties of the algorithm, existing studies often assume homoscedasticity. However, this assumption is restrictive and often unrealistic in practice. Therefore, in this paper, we consider the allometric extension model, in which the first eigenvectors of the two covariance matrices and the difference of the two mean vectors share a common direction, and we provide a non-asymptotic bound on the error probability of the spectral clustering algorithm under this model. As a byproduct, we obtain the consistency of the clustering method in high-dimensional settings.
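
A minimal sketch of the binary clustering step the abstract refers to (variable names and toy data are ours): project the centered observations onto the first principal component of the sample covariance and split by sign.

```python
import numpy as np

def pca_binary_cluster(X: np.ndarray) -> np.ndarray:
    """Assign each row of X to one of two clusters by the sign of its
    projection onto the first principal component."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / (len(X) - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    v1 = eigvecs[:, -1]                      # first principal direction
    return (Xc @ v1 >= 0).astype(int)

# Two elongated Gaussian clusters whose first eigenvector is aligned with
# the difference of the means, as in the allometric extension model.
rng = np.random.default_rng(0)
A = rng.normal([3, 0], [2.0, 0.5], size=(200, 2))
B = rng.normal([-3, 0], [2.0, 0.5], size=(200, 2))
labels = pca_binary_cluster(np.vstack([A, B]))
```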

We explore a link between complexity and physics for circuits of given functionality. Taking advantage of the connection between circuit counting problems and the derivation of ensembles in statistical mechanics, we tie the entropy of circuits of a given functionality and fixed number of gates to circuit complexity. We use thermodynamic relations to connect the quantity analogous to the equilibrium temperature to the exponent describing the exponential growth of the number of distinct functionalities as a function of complexity. This connection is intimately related to the finite compressibility of typical circuits. Finally, we use the thermodynamic approach to formulate a framework for the obfuscation of programs of arbitrary length -- an important problem in cryptography -- as thermalization through recursive mixing of neighboring sections of a circuit, which can be viewed as the mixing of two containers with ``gases of gates''. This recursive process equilibrates the average complexity and leads to the saturation of the circuit entropy, while preserving the functionality of the overall circuit. The thermodynamic arguments hinge on ergodicity in the space of circuits, which we conjecture is limited to disconnected ergodic sectors due to fragmentation. The notion of fragmentation has important implications for circuit obfuscation, as it implies that there are circuits of the same size and functionality that cannot be connected via local moves. Furthermore, we argue that fragmentation is unavoidable unless the complexity classes NP and coNP coincide, a statement that implies the collapse of the polynomial hierarchy of complexity theory to its first level.
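
To make the circuit-ensemble picture concrete, here is a toy Python experiment (ours, not the paper's): sample random NAND circuits with a fixed number of gates, map each circuit to its truth table (its functionality), and compute the entropy of the induced distribution over functionalities.

```python
import random
from collections import Counter
from math import log2

def random_nand_circuit(n_gates: int, n_inputs: int = 2):
    """A circuit is a list of gates; each gate NANDs two earlier wires."""
    gates = []
    for g in range(n_gates):
        wires = n_inputs + g
        gates.append((random.randrange(wires), random.randrange(wires)))
    return gates

def truth_table(gates, n_inputs: int = 2) -> tuple:
    """Functionality = output of the last wire on every input assignment."""
    rows = []
    for x in range(2 ** n_inputs):
        wires = [(x >> i) & 1 for i in range(n_inputs)]
        for a, b in gates:
            wires.append(1 - (wires[a] & wires[b]))   # NAND
        rows.append(wires[-1])
    return tuple(rows)

counts = Counter(truth_table(random_nand_circuit(6)) for _ in range(20_000))
total = sum(counts.values())
entropy = -sum(c / total * log2(c / total) for c in counts.values())
print(f"{len(counts)} functionalities, ensemble entropy {entropy:.2f} bits")
```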

We develop a hybrid scheme, based on a finite difference scheme and a rescaling technique, to approximate the solution of a nonlinear wave equation. In order to numerically reproduce the blow-up phenomenon, we propose a scaling-transformation rule that is a variant of the one used successfully for nonlinear parabolic equations. A careful study of the convergence of the proposed scheme is carried out, and several numerical examples are presented as illustrations.
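
The abstract does not spell out the scheme, but the following Python toy conveys the flavor of the rescaling idea for the focusing equation $u_{tt} = u_{xx} + u^3$: integrate explicitly and shrink the time step as the sup-norm of the solution grows. This is our crude stand-in for the paper's scaling transformation, not a reproduction of it.

```python
import numpy as np

def blowup_wave_toy(nx=201, T=5.0, cfl=0.4, u_star=1e6):
    """Explicit scheme for u_tt = u_xx + u^3 on [0,1] with zero boundary
    values, written as a first-order system in (u, v = u_t). The time step
    shrinks as ||u||_inf grows, mimicking a rescaling near blow-up."""
    dx = 1.0 / (nx - 1)
    x = np.linspace(0.0, 1.0, nx)
    u = 10.0 * np.sin(np.pi * x)        # large data: negative energy, blow-up
    v = np.zeros_like(u)
    t = 0.0
    while t < T and np.max(np.abs(u)) < u_star:
        m = np.max(np.abs(u))
        dt = cfl * dx / max(1.0, m)     # rescaled step: dt ~ dx / ||u||_inf
        lap = np.zeros_like(u)
        lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
        v = v + dt * (lap + u**3)       # symplectic Euler: velocity first,
        u = u + dt * v                  # then position, for CFL stability
        u[0] = u[-1] = 0.0
        t += dt
    return t, np.max(np.abs(u))

t_stop, amp = blowup_wave_toy()
print(f"stopped at t = {t_stop:.4f} with ||u||_inf = {amp:.3e}")
```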

We study a variant of quantum hypothesis testing wherein an additional 'inconclusive' measurement outcome is added, allowing one to abstain from attempting to discriminate the hypotheses. The error probabilities are then conditioned on a successful attempt, with inconclusive trials disregarded. We completely characterise this task in both the single-shot and asymptotic regimes, providing exact formulas for the optimal error probabilities. In particular, we prove that the asymptotic error exponent of discriminating any two quantum states $\rho$ and $\sigma$ is given by the Hilbert projective metric $D_{\max}(\rho\|\sigma) + D_{\max}(\sigma \| \rho)$ in asymmetric hypothesis testing, and by the Thompson metric $\max \{ D_{\max}(\rho\|\sigma), D_{\max}(\sigma \| \rho) \}$ in symmetric hypothesis testing. This endows these two quantities with fundamental operational interpretations in quantum state discrimination. Our findings extend to composite hypothesis testing, where we show that the asymmetric error exponent with respect to any convex set of density matrices is given by a regularisation of the Hilbert projective metric. We apply our results also to quantum channels, showing that no advantage is gained by employing adaptive or even more general discrimination schemes over parallel ones, in both the asymmetric and symmetric settings. Our state discrimination results make use of no properties specific to quantum mechanics and are also valid in general probabilistic theories.
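
For full-rank states, $D_{\max}(\rho\|\sigma) = \log \lambda_{\max}(\sigma^{-1/2}\rho\,\sigma^{-1/2})$, so both error exponents from the abstract can be evaluated in a few lines of NumPy (a sketch assuming invertible $\sigma$; function names are ours, and logarithms are taken base 2).

```python
import numpy as np

def d_max(rho: np.ndarray, sigma: np.ndarray) -> float:
    """Max-relative entropy D_max(rho||sigma): log2 of the largest
    eigenvalue of sigma^{-1/2} rho sigma^{-1/2} (full-rank sigma assumed)."""
    w, V = np.linalg.eigh(sigma)
    s_inv_half = V @ np.diag(w ** -0.5) @ V.conj().T
    return float(np.log2(np.max(np.linalg.eigvalsh(s_inv_half @ rho @ s_inv_half))))

def hilbert_projective(rho, sigma):   # asymmetric error exponent
    return d_max(rho, sigma) + d_max(sigma, rho)

def thompson(rho, sigma):             # symmetric error exponent
    return max(d_max(rho, sigma), d_max(sigma, rho))

rho = np.array([[0.7, 0.1], [0.1, 0.3]])
sigma = np.array([[0.5, 0.0], [0.0, 0.5]])
print(hilbert_projective(rho, sigma), thompson(rho, sigma))
```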

Very recently, Qi and Cui extended Perron-Frobenius theory to dual number matrices with primitive and irreducible nonnegative standard parts, and proved that such matrices possess a Perron eigenpair and a Perron-Frobenius eigenpair. The Collatz method was also extended to compute the Perron eigenpair. Qi and Cui proposed two conjectures: one states that the $k$-th power of a dual number matrix tends to zero as $k \to \infty$ if and only if the spectral radius of its standard part is less than one; the other concerns the linear convergence of the Collatz method. In this paper, we confirm both conjectures and provide theoretical proofs. The main contribution is to show that the Collatz method converges R-linearly with an explicit rate.
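
For the nonnegative standard part, the classical Collatz (Collatz-Wielandt) iteration that the paper extends can be sketched in Python as follows (this is the classical method only, not the dual-number extension).

```python
import numpy as np

def collatz_perron(A: np.ndarray, tol: float = 1e-12, max_iter: int = 10_000):
    """Collatz-Wielandt iteration for the Perron eigenpair of a primitive
    nonnegative matrix A: the bounds
        min_i (Ax)_i / x_i  <=  rho(A)  <=  max_i (Ax)_i / x_i
    tighten as x is replaced by the normalized Ax."""
    x = np.ones(A.shape[0])
    for _ in range(max_iter):
        y = A @ x
        ratios = y / x
        lo, hi = ratios.min(), ratios.max()
        if hi - lo < tol:
            break
        x = y / y.sum()                # renormalize to avoid overflow
    return (lo + hi) / 2, x

A = np.array([[0.0, 2.0], [3.0, 1.0]])
rho, v = collatz_perron(A)
print(rho)                             # Perron root of A is exactly 3
```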

We consider the linear lambda-calculus extended with the sup type constructor, which provides an additive conjunction along with a non-deterministic destructor. The sup type constructor was introduced in the context of quantum computing. In this paper, we study this type constructor within a simple categorical model of linear logic, namely the category of semimodules over a commutative semiring. We demonstrate that the non-deterministic destructor finds a suitable model in a "weighted" codiagonal map. This approach offers a valid and insightful alternative interpretation of non-determinism, especially in instances where the conventional powerset monad interpretation does not align with the category's structure, as is the case for the category of semimodules. The validity of this alternative relies on the presence of biproducts in the category.
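
As a loose illustration (ours, under the drastic simplification that the semiring is $\mathbb{N}$ and semimodules are vectors of naturals), one can render the additive pairing and its "weighted" codiagonal destructor in Python: the pair inhabits the biproduct $M \oplus M$, and the destructor collapses it back to $M$ by a weighted sum instead of a powerset of outcomes.

```python
from typing import List

Vec = List[int]   # a free semimodule over the semiring (N, +, *)

def pair(t: Vec, u: Vec) -> Vec:
    """Additive conjunction <t, u>: an element of the biproduct M (+) M,
    represented here as the concatenation of the two vectors."""
    return t + u

def weighted_codiagonal(v: Vec, alpha: int, beta: int) -> Vec:
    """Destructor for sup: collapse M (+) M to M as alpha*x + beta*y,
    a 'weighted' version of the codiagonal map (x, y) |-> x + y."""
    n = len(v) // 2
    x, y = v[:n], v[n:]
    return [alpha * xi + beta * yi for xi, yi in zip(x, y)]

v = pair([1, 0], [0, 1])
print(weighted_codiagonal(v, 1, 1))   # plain codiagonal: [1, 1]
print(weighted_codiagonal(v, 2, 3))   # weighted variant: [2, 3]
```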

We disprove Tokareva's conjecture that any balanced Boolean function of appropriate degree is a derivative of some bent function. The result is based on new upper bounds on the numbers of bent and plateaued functions.

The semi-empirical nature of the best-estimate models closing the balance equations of thermal-hydraulic (TH) system codes is a well-known and significant source of uncertainty in output predictions. This uncertainty, called model uncertainty, is usually represented by multiplicative (log-)Gaussian variables, whose estimation requires solving an inverse problem based on a set of adequately chosen real experiments. One method from the TH field, called CIRCE, addresses this problem. In this paper we present a generalization of the method to several groups of experiments, each with its own properties, including different ranges of input conditions and different geometries. An individual (log-)Gaussian distribution is estimated for each group in order to investigate whether the model uncertainty is homogeneous across the groups or should depend on the group. To this end, a multi-group CIRCE is proposed in which a variance parameter is estimated for each group jointly with a mean parameter common to all groups, preserving the uniqueness of the best-estimate model. The ECME algorithm for maximum likelihood estimation is adapted to this context and applied to relevant demonstration cases. Finally, the method is tested on a practical case to assess the uncertainty of the critical mass flow, assuming two groups owing to the difference in geometry between the experimental setups.
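
A simplified numerical sketch of the multi-group idea (ours; the actual CIRCE setting involves code-dependent sensitivities and the full ECME algorithm): if the log-multipliers in group $g$ are $N(\mu, \sigma_g^2)$ with a common mean $\mu$, the maximum likelihood estimates can be found by coordinate ascent, alternating a precision-weighted update of $\mu$ with per-group variance updates.

```python
import numpy as np

def multigroup_mle(groups, n_iter=200):
    """Coordinate-ascent MLE for log-multipliers z_gi ~ N(mu, sigma_g^2)
    with a common mean mu and one variance per experiment group.
    `groups` is a list of 1-D arrays of observed log(measured/predicted)."""
    mu = np.mean(np.concatenate(groups))
    var = np.array([np.var(g) + 1e-12 for g in groups])
    for _ in range(n_iter):
        # mu-step: precision-weighted combination of the group means
        w = np.array([len(g) / v for g, v in zip(groups, var)])
        mu = sum(wi * np.mean(g) for wi, g in zip(w, groups)) / w.sum()
        # sigma-step: per-group variance about the common mean
        var = np.array([np.mean((g - mu) ** 2) + 1e-12 for g in groups])
    return mu, np.sqrt(var)

rng = np.random.default_rng(1)
g1 = rng.normal(0.05, 0.10, size=40)   # group 1: small model uncertainty
g2 = rng.normal(0.05, 0.30, size=25)   # group 2: larger spread
mu, sigmas = multigroup_mle([g1, g2])
print(mu, sigmas)                      # common mean, per-group sigma
```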

Simulating physical problems that couple multiple time scales is challenging because the processes must be solved simultaneously. In response to this challenge, this paper proposes an explicit multi-time-step algorithm coupled with a solid dynamic relaxation scheme. The explicit scheme simplifies the equation system compared with an implicit scheme, while the multi-time-step algorithm allows the equations of different physical processes to be solved with different time step sizes. Furthermore, an implicit viscous damping relaxation technique is applied to significantly reduce the number of iterations required to reach equilibrium in the comparatively fast solid response. To validate the accuracy and efficiency of the proposed algorithm, two distinct scenarios are simulated: the stretching of a nonlinearly hardening bar, and fluid diffusion coupled with Nafion membrane flexure. The results agree well with experimental data and with other numerical methods, and the simulation time is reduced, first by addressing the different processes independently through the multi-time-step algorithm, and second by shortening the solid dynamic relaxation time through the damping technique.
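
A toy Python sketch of the two ingredients (our own illustration, not the paper's solver): a slow process advanced with a large time step, a fast damped solid degree of freedom subcycled with a smaller step, and viscous damping that drives the fast process toward quasi-static equilibrium within each large step.

```python
import numpy as np

def coupled_multistep(T=10.0, dt_slow=0.1, n_sub=50, c_damp=2.0):
    """Multi-time-step toy: a slow load p(t) decaying on its own time scale,
    coupled to a fast solid DOF u obeying  m u'' + c u' + k u = p.
    The solid is subcycled n_sub times per slow step; the viscous damping
    c_damp makes it settle quickly toward the quasi-static solution p/k."""
    m, k = 1.0, 100.0
    p, u, v = 1.0, 0.0, 0.0
    dt_fast = dt_slow / n_sub
    t = 0.0
    while t < T:
        p *= np.exp(-0.2 * dt_slow)          # slow process, big step
        for _ in range(n_sub):               # fast process, subcycled
            a = (p - c_damp * v - k * u) / m
            v += dt_fast * a                 # symplectic Euler: velocity,
            u += dt_fast * v                 # then displacement
        t += dt_slow
    return u, p / k

u, u_static = coupled_multistep()
print(u, u_static)                           # u should track p/k closely
```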
