
We give generators and relations for the hypergraph props of Gaussian relations and positive affine Lagrangian relations. The former extends Gaussian probabilistic processes with completely uninformative priors, and the latter extends Gaussian quantum mechanics with infinitely squeezed states. These presentations are obtained by adding to the presentations of real affine relations and of real affine Lagrangian relations a generator which freely codiscards effects, as well as certain rotations. The presentation of positive affine Lagrangian relations provides a rigorous justification for many common yet informal calculations in the quantum physics literature involving infinite squeezing. Our presentation naturally extends Menicucci et al.'s graph-theoretic representation of Gaussian quantum states with a representation for Gaussian transformations. Using this graphical calculus, we also give a graphical proof of Braunstein and Kimble's continuous-variable quantum teleportation protocol. We also interpret the LOv-calculus, a diagrammatic calculus for reasoning about passive linear-optical quantum circuits, in our graphical calculus. Moreover, we show how our presentation allows for additional optical operations such as active squeezing.
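For context, the graph-theoretic representation referred to above encodes a Gaussian pure state on $n$ modes by a complex symmetric "adjacency" matrix. The following standard form is recalled from Menicucci et al.'s graphical calculus rather than from this abstract, so take the normalization as an assumption:

\[
\bigl(\hat{p} - Z\,\hat{x}\bigr)\,\lvert\psi_Z\rangle = 0,
\qquad
Z = V + iU, \quad U \succ 0,
\]

where $\hat{x}$ and $\hat{p}$ are the vectors of quadrature operators. Infinitely squeezed states arise as limits in which eigenvalues of $U$ tend to $0$, which is exactly the regime that the positive affine Lagrangian relations above make rigorous.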

Related content

We develop further the theory of monoidal bicategories by introducing and studying bicategorical counterparts of the notions of a linear exponential comonad, as considered in the study of linear logic, and of a codereliction transformation, introduced to study differential linear logic via differential categories. As an application, we extend the differential calculus of Joyal's analytic functors to analytic functors between presheaf categories, just as ordinary calculus extends from a single variable to many variables.
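For orientation, here is the one-variable calculus being generalized, recalled from the theory of combinatorial species rather than from the abstract itself: an analytic functor $F$ and its derivative may be written as

\[
F(X) \;=\; \sum_{n \ge 0} F[n] \times_{S_n} X^{n},
\qquad
(\partial F)(X) \;=\; \sum_{n \ge 0} F[n+1] \times_{S_n} X^{n},
\]

so differentiation shifts the structures by one slot, mirroring $\frac{d}{dx}\,\frac{x^{n}}{n!} = \frac{x^{n-1}}{(n-1)!}$ for exponential generating functions.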

The logistic regression model is one of the most popular data generation models for noisy binary classification problems. In this work, we study the sample complexity of estimating the parameters of the logistic regression model up to a given $\ell_2$ error, in terms of the dimension and the inverse temperature, with standard normal covariates. The inverse temperature controls the signal-to-noise ratio of the data generation process. While both generalization bounds and the asymptotic performance of the maximum-likelihood estimator for logistic regression are well studied, a non-asymptotic sample complexity that captures the dependence on the error and the inverse temperature for parameter estimation is absent from previous analyses. We show that the sample complexity curve has two change-points in terms of the inverse temperature, clearly separating the low, moderate, and high temperature regimes.
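To make the setting concrete, here is a minimal simulation sketch of the model as described: standard normal covariates and an inverse temperature $\beta$ scaling the signal. The exact parameterization (here, $\beta$ multiplying a unit-norm parameter vector) is an assumption, as is the use of a generic BFGS solver for the maximum-likelihood estimator.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def simulate(theta, beta, n):
    """x ~ N(0, I_d),  P(y = 1 | x) = sigmoid(beta * <theta, x>)."""
    X = rng.standard_normal((n, theta.size))
    p = 1.0 / (1.0 + np.exp(-beta * (X @ theta)))
    return X, (rng.random(n) < p).astype(float)

def neg_log_lik(theta, X, y, beta):
    """Negative logistic log-likelihood; logaddexp keeps it numerically stable."""
    z = beta * (X @ theta)
    return np.sum(np.logaddexp(0.0, z)) - y @ z

theta_star = np.array([0.6, -0.8])             # unit-norm ground truth
for beta in (0.1, 1.0, 10.0):                  # high / moderate / low temperature
    X, y = simulate(theta_star, beta, n=20_000)
    fit = minimize(neg_log_lik, np.zeros(2), args=(X, y, beta), method="BFGS")
    print(f"beta = {beta:5.1f}   l2 error = {np.linalg.norm(fit.x - theta_star):.3f}")
```

Running this at a fixed sample size across a sweep of $\beta$ values is one way to see the qualitative regime changes the abstract refers to.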

We propose a new simple and explicit numerical scheme for time-homogeneous stochastic differential equations. The scheme is based on sampling increments at each time step from a skew-symmetric probability distribution, with the level of skewness determined by the drift and volatility of the underlying process. We show that, as the step-size decreases, the scheme converges weakly to the diffusion of interest. We then consider the problem of simulating from the limiting distribution of an ergodic diffusion process using the numerical scheme with a fixed step-size. We establish conditions under which the numerical scheme converges to equilibrium at a geometric rate, and quantify the bias between the equilibrium distributions of the scheme and of the true diffusion process. Notably, our results do not require a global Lipschitz assumption on the drift, in contrast to those required for the Euler--Maruyama scheme for long-time simulation at fixed step-sizes. Our weak convergence result relies on an extension of the theory of Milstein \& Tretyakov to stochastic differential equations with non-Lipschitz drift, which could also be of independent interest. We support our theoretical results with numerical simulations.
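The abstract does not spell out the skewing rule, so the following 1-D sketch is only one plausible instantiation, modelled on Barker-type dynamics: the magnitude of each increment is a folded Gaussian, and only its sign is tilted by the drift. The function names and the choice of test drift are mine, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)

def skew_step(x, h, drift, sigma):
    """One step: |increment| ~ sigma * sqrt(h) * |N(0,1)|, and the sign is
    positive with probability (1 + tanh(drift(x) * z / sigma^2)) / 2, so the
    mean matches drift(x)*h and the variance matches sigma^2*h to first order."""
    z = sigma * np.sqrt(h) * np.abs(rng.standard_normal(x.shape))
    p_plus = 0.5 * (1.0 + np.tanh(drift(x) * z / sigma**2))
    s = np.where(rng.random(x.shape) < p_plus, 1.0, -1.0)
    return x + s * z

drift = lambda x: -x**3            # non-globally-Lipschitz; pi ∝ exp(-x^4 / 2)
x = np.zeros(10_000)               # 10k independent chains, fixed step-size
for _ in range(5_000):
    x = skew_step(x, h=0.05, drift=drift, sigma=1.0)
print("second moment under the scheme's equilibrium ~", np.mean(x**2))
```

Note the design point this sketch illustrates: the drift influences only the sign of the increment, never its magnitude, so steps stay bounded in scale however large $|x|$ gets. This is consistent with the abstract's contrast with Euler--Maruyama, whose step $x - h x^3$ can explode at a fixed step-size.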

Classical algorithms for market equilibrium computation, such as proportional response dynamics, face scalability issues in Internet-based applications such as auctions, recommender systems, and fair division, despite having an almost linear runtime in the product of the numbers of buyers and goods. In this work, we provide the first quantum algorithm for market equilibrium computation with sub-linear performance. Our algorithm achieves a polynomial runtime speedup in the product of the numbers of buyers and goods while reaching the same optimization objective value as the classical algorithm. Numerical simulations of a system with 16384 buyers and goods support our theoretical result that the quantum algorithm provides a significant speedup.
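The quantum speedup itself cannot be sketched in a few lines, but the classical baseline can. Below is a minimal proportional response dynamics loop for a linear Fisher market; the market size and utilities are made up for illustration, and this is the textbook update rather than anything specific to the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n_buyers, n_goods = 64, 64
U = rng.random((n_buyers, n_goods))                   # utilities u_ij > 0
B = np.ones(n_buyers)                                 # unit budgets B_i
bids = np.full((n_buyers, n_goods), 1.0 / n_goods)    # initial bids b_ij

for _ in range(500):
    prices = bids.sum(axis=0)                         # p_j = sum_i b_ij
    alloc = bids / prices                             # x_ij = b_ij / p_j
    gains = U * alloc                                 # utility earned per good
    # Proportional response: split each budget in proportion to utility earned.
    bids = B[:, None] * gains / gains.sum(axis=1, keepdims=True)

print("equilibrium prices (first 5 goods):", np.round(bids.sum(axis=0)[:5], 4))
```

Each iteration touches all n_buyers * n_goods entries, which is exactly the "product of buyers and goods" bottleneck the quantum algorithm targets.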

We propose a new approach to discretizing the von Neumann equation that remains efficient in the semi-classical limit. The method first passes to the so-called Weyl variables to address the stiffness associated with the equation, and then handles this stiffness by applying a truncated Hermite expansion of the density operator. Additionally, we develop a finite volume approximation for practical implementation and conduct numerical simulations to illustrate the efficiency of our approach. This asymptotic-preserving numerical approximation, combined with the use of Hermite polynomials, provides an efficient tool for solving the von Neumann equation in all regimes, near classical or not.
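For reference, in the standard semiclassical scaling (an assumption here; the paper's normalization may differ) the von Neumann equation for the density operator $\varrho^{\varepsilon}$ reads

\[
i\varepsilon\,\partial_t \varrho^{\varepsilon} \;=\; \bigl[H_\varepsilon,\, \varrho^{\varepsilon}\bigr],
\qquad
H_\varepsilon = -\tfrac{\varepsilon^{2}}{2}\,\Delta + V(x),
\]

where $\varepsilon$ is the scaled Planck constant. The commutator generates oscillations of frequency $O(1/\varepsilon)$ as $\varepsilon \to 0$, which is the stiffness that the Weyl variables and the Hermite truncation are designed to tame.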

This work presents several new results concerning the convergence analysis of binary, univariate, linear subdivision schemes, all related to the {\it contractivity factor} of a convergent scheme. First, we prove that a convergent scheme cannot have a contractivity factor smaller than $\frac{1}{2}$. Since a lower contractivity factor implies faster convergence of the scheme, schemes with contractivity factor $\frac{1}{2}$, such as those generating spline functions, have the optimal convergence rate. Additionally, we provide further insights and conditions for the convergence of linear schemes and demonstrate their applicability in an improved algorithm for determining the convergence of such subdivision schemes.
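A quick numerical illustration of the optimal case (my example, not the paper's): Chaikin's corner-cutting scheme, which generates quadratic splines, has contractivity factor $\frac{1}{2}$, visible as the ratio of successive maximal first differences of the control points.

```python
import numpy as np

def chaikin(p):
    """One round of Chaikin's corner cutting (mask 1/4, 3/4, 3/4, 1/4)."""
    q = np.empty(2 * (len(p) - 1))
    q[0::2] = 0.75 * p[:-1] + 0.25 * p[1:]
    q[1::2] = 0.25 * p[:-1] + 0.75 * p[1:]
    return q

pts = np.array([0.0, 3.0, -1.0, 2.0, 5.0])
prev = np.max(np.abs(np.diff(pts)))
for level in range(1, 7):
    pts = chaikin(pts)
    cur = np.max(np.abs(np.diff(pts)))
    print(f"level {level}: max |difference| shrinks by {cur / prev:.3f}")
    prev = cur
```

The printed ratios are $0.500$ at every level; by the first result above, no convergent scheme can contract faster.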

We marshal the arguments for preferring Bayesian hypothesis testing and confidence sets to frequentist ones. We define admissible solutions to inference problems, noting that Bayesian solutions are admissible. We give seven weaker common-sense criteria for solutions to inference problems, all failed by these frequentist methods but satisfied by any admissible method. We note that handicapping Bayesian methods to satisfy criteria on the type I error rate yields pseudo-Bayesian methods that are frequentist, not Bayesian, in nature. We give five examples showing the differences between Bayesian and frequentist methods: the first requires little calculus; the second shows abstractly what is wrong with these frequentist methods; the third illustrates information conservation; the fourth shows that the same problems arise in everyday statistical problems; and the fifth illustrates how, on some real-life inference problems, Bayesian methods require less data than fixed sample-size (resp. pseudo-Bayesian) frequentist hypothesis testing by factors exceeding 3000 (resp. 300), without recourse to informative priors. To address the issue of different parties with opposing interests reaching agreement on a prior, we illustrate the beneficial effects of a Bayesian "let the data decide" policy, both on results under a wide variety of conditions and on the motivation to reach a common prior by consent. We show that in general the frequentist confidence level contains less relevant Shannon information than the Bayesian posterior, and we give an example where no deterministic frequentist critical region gives any relevant information even though the Bayesian posterior contains up to the maximum possible amount. In contrast, use of the Bayesian prior allows the construction of non-deterministic critical regions for which the Bayesian posterior can be recovered from the frequentist confidence.
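As a toy contrast in the same spirit (my illustration, not one of the paper's five examples): for $n$ coin flips with $k$ heads, compare the one-sided p-value against the posterior probability of the alternative under a uniform prior.

```python
import numpy as np
from scipy import stats

n, k = 20, 15                      # 15 heads in 20 flips; H1: theta > 0.5

# Frequentist: one-sided p-value under the point null theta = 0.5.
p_value = stats.binom.sf(k - 1, n, 0.5)          # P(X >= k | theta = 0.5)

# Bayesian: uniform Beta(1,1) prior gives posterior Beta(1 + k, 1 + n - k).
post_h1 = stats.beta.sf(0.5, 1 + k, 1 + n - k)   # P(theta > 0.5 | data)

print(f"one-sided p-value     = {p_value:.4f}")
print(f"P(theta > 0.5 | data) = {post_h1:.4f}")
```

The two numbers answer different questions, which is the kind of gap the paper's examples quantify in terms of required sample size.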

We propose a local, past-oriented fragment of propositional dynamic logic for reasoning about concurrent scenarios modelled as Mazurkiewicz traces, and we prove it to be expressively complete with respect to regular trace languages. Because of locality, specifications in this logic can be translated efficiently into asynchronous automata, in a way that reflects the structure of formulas. In particular, we obtain a new proof of Zielonka's fundamental theorem, and we prove that any regular trace language can be implemented by a cascade product of localized asynchronous automata, which essentially operate on a single process. These results refine earlier results by Adsul et al., which involved a larger fragment of past propositional dynamic logic and used Mukund and Sohoni's gossip automaton. Our new results avoid using this automaton or Zielonka's timestamping mechanism, and, in particular, they show how to implement a gossip automaton as a cascade product.

Rodents employ a broad spectrum of ultrasonic vocalizations (USVs) for social communication. As these vocalizations offer valuable insights into affective states, social interactions, and developmental stages of animals, various deep learning approaches have aimed to automate both the quantitative (detection) and qualitative (classification) analysis of USVs. Here, we present the first systematic evaluation of different types of neural networks for USV classification. We assessed various feedforward networks, including a custom-built fully connected network, a convolutional neural network, different residual neural networks (ResNets), an EfficientNet, and a Vision Transformer (ViT). Paired with a refined, entropy-based detection algorithm (achieving a recall of 94.9% and a precision of 99.3%), the best architecture (achieving 86.79% accuracy) was integrated into a fully automated pipeline capable of analyzing extensive USV datasets with high reliability. Additionally, users can specify an individual minimum accuracy threshold based on their research needs. In this semi-automated setup, the pipeline selectively classifies calls with high pseudo-probability, leaving the rest for manual inspection. Our study focuses exclusively on neonatal USVs. As part of an ongoing phenotyping study, our pipeline has proven to be a valuable tool for identifying key differences in USVs produced by mice with autism-like behaviors.
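A minimal sketch of the semi-automated thresholding step as I read it (the function and threshold names are mine, and the "pseudo-probability" is taken here to be the classifier's softmax confidence, which the abstract does not confirm):

```python
import numpy as np

def route_calls(logits, threshold=0.9):
    """Auto-classify a detected USV only when the top softmax
    'pseudo-probability' clears the user-chosen threshold;
    return -1 to flag the call for manual inspection instead."""
    z = logits - logits.max(axis=1, keepdims=True)            # stable softmax
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    confident = probs.max(axis=1) >= threshold
    return np.where(confident, probs.argmax(axis=1), -1)

logits = np.array([[4.0, 0.1, 0.2],     # clear call     -> class 0
                   [1.0, 0.9, 0.8]])    # ambiguous call -> manual (-1)
print(route_calls(logits))              # [ 0 -1]
```

Raising the threshold trades automation for accuracy on the auto-classified subset, which is the user-controlled trade-off the abstract describes.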

The goal of explainable Artificial Intelligence (XAI) is to generate human-interpretable explanations, but there are no computationally precise theories of how humans interpret AI-generated explanations. The lack of theory means that validation of XAI must be done empirically, on a case-by-case basis, which prevents systematic theory-building in XAI. We propose a psychological theory of how humans draw conclusions from saliency maps, the most common form of XAI explanation, which for the first time allows for precise prediction of explainee inference conditioned on the explanation. Our theory posits that, absent an explanation, humans expect the AI to make decisions similar to their own, and that they interpret an explanation by comparing it to the explanations they themselves would give. Comparison is formalized via Shepard's universal law of generalization in a similarity space, a classic theory from cognitive science. A pre-registered user study on AI image classifications with saliency map explanations demonstrates that our theory quantitatively matches participants' predictions of the AI.
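The formal core borrowed from cognitive science is Shepard's universal law of generalization: the probability that a response to a stimulus $x$ generalizes to a stimulus $y$ decays exponentially with their distance in psychological similarity space,

\[
g(x, y) \;=\; e^{-c\, d(x, y)}, \qquad c > 0,
\]

where $d$ is a metric on the similarity space. How the theory embeds saliency-map explanations into that space is specified in the paper, not here.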
