
Previous papers give accounts of quests for satisfactory formalizations of the classical informal notion of an algorithm and the contemporary informal notion of an interactive algorithm. In this paper, an attempt is made to generalize the results of the former quest to the contemporary informal notion of a concurrent algorithm. The notion of a concurrent proto-algorithm is introduced; the idea is that concurrent algorithms are equivalence classes of concurrent proto-algorithms under an appropriate equivalence relation. Three equivalence relations are defined: two of them are deemed to be bounds for an appropriate equivalence relation, and the third is likely an appropriate one. The connection between concurrency and non-determinism in the presented setting is also addressed.


Sanz-Serna and Zygalakis, in ``Wasserstein distance estimates for the distributions of numerical approximations to ergodic stochastic differential equations'', present a method for analyzing non-asymptotic guarantees of numerical discretizations of ergodic SDEs in Wasserstein-2 distance. They analyze the UBU integrator, which is of strong order two and requires only one gradient evaluation per step, and obtain desirable non-asymptotic guarantees: in particular, $\mathcal{O}(d^{1/4}\epsilon^{-1/2})$ steps suffice to come within Wasserstein-2 distance $\epsilon > 0$ of the target distribution. However, there is a mistake in the local error estimates of Sanz-Serna and Zygalakis (2021); in particular, a stronger assumption is needed to achieve these complexity estimates. This note reconciles the theory with the dimension dependence observed in practice in many applications of interest.
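As a minimal illustration of the quantity being controlled above, the sketch below discretizes an ergodic SDE and measures the empirical Wasserstein-2 distance to the target. It uses plain Euler-Maruyama on an Ornstein-Uhlenbeck process (a stand-in for the UBU integrator, which is not reproduced here); the step size, sample counts, and the 1-D sorting formula for $W_2$ are illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def euler_maruyama_ou(n_paths, n_steps, h):
    """Euler-Maruyama for dX = -X dt + sqrt(2) dW, whose invariant law is N(0,1)."""
    x = np.zeros(n_paths)
    for _ in range(n_steps):
        x += -x * h + np.sqrt(2 * h) * rng.standard_normal(n_paths)
    return x

def wasserstein2_1d(a, b):
    """Empirical W2 between two 1-D samples: in 1-D the optimal coupling sorts both."""
    a, b = np.sort(a), np.sort(b)
    return np.sqrt(np.mean((a - b) ** 2))

samples = euler_maruyama_ou(n_paths=20000, n_steps=1000, h=0.01)
target = rng.standard_normal(20000)  # exact draws from the invariant distribution
w2 = wasserstein2_1d(samples, target)
# w2 is small: the discretization bias at this step size is below the
# Monte-Carlo fluctuation of the empirical distance itself.
```

The non-asymptotic analyses discussed in the abstract bound exactly this kind of distance as a function of the step size, the dimension, and the number of steps.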

G\'acs' coarse-grained algorithmic entropy leverages universal computation to quantify the information content of any given physical state. Unlike the Boltzmann and Gibbs-Shannon entropies, it requires no prior commitment to macrovariables or probabilistic ensembles, rendering it applicable to settings arbitrarily far from equilibrium. For measure-preserving dynamical systems equipped with a Markovian coarse-graining, we prove a number of fluctuation inequalities. These include algorithmic versions of Jarzynski's equality, Landauer's principle, and the second law of thermodynamics. In general, the algorithmic entropy determines a system's actual capacity to do work from an individual state, whereas the Gibbs-Shannon entropy only gives the mean capacity to do work from a state ensemble that is known a priori.

Under a multinormal distribution with an arbitrary unknown covariance matrix, the main purpose of this paper is to propose a framework that reconciles Bayesian, frequentist, and Fisherian reporting of $p$-values, Neyman-Pearson optimal theory, and Wald's decision theory for the problems of testing the mean against restricted alternatives (closed convex cones). To proceed, the tests constructed via the likelihood ratio (LR) and the union-intersection (UI) principles are studied. For the problems of testing against restricted alternatives, we first show that the LRT and the UIT are not proper Bayes tests; they are, however, shown to be the integrated LRT and the integrated UIT, respectively. For the problem of testing against the positive-orthant alternative, the null distributions of both the LRT and the UIT depend on the unknown nuisance covariance matrix, so it is difficult to adopt Fisher's approach of reporting $p$-values. On the other hand, according to the definition of the level of significance, the LRT and the UIT are shown to be power-dominated by the corresponding LRT and UIT for testing against the half-space alternative, respectively. Hence both the LRT and the UIT are $\alpha$-inadmissible, a result that runs against common statistical sense. Neither Fisher's approach of reporting $p$-values alone nor Neyman-Pearson optimal theory for the power function alone is a satisfactory criterion for evaluating the performance of tests. Wald's decision theory, via $d$-admissibility, may shed light on resolving these challenging issues by striking a balance between Type I error and power.

Approximating a univariate function on the interval $[-1,1]$ with a polynomial is among the most classical problems in numerical analysis. When the function evaluations come with noise, a least-squares fit is known to reduce the effect of noise as more samples are taken. The generic algorithm for the least-squares problem requires $O(Nn^2)$ operations, where $N+1$ is the number of sample points and $n$ is the degree of the polynomial approximant. This algorithm is unstable when $n$ is large, for example $n\gg \sqrt{N}$ for equispaced sample points. In this study, we blend numerical analysis and statistics to introduce a stable and fast $O(N\log N)$ algorithm, called NoisyChebtrunc, based on Chebyshev interpolation. It has the same error-reduction effect as least squares, and the convergence is spectral until the error reaches $O(\sigma \sqrt{{n}/{N}})$, where $\sigma$ is the noise level, after which the error continues to decrease at the Monte-Carlo rate $O(1/\sqrt{N})$. To determine the polynomial degree, NoisyChebtrunc employs a statistical criterion, namely Mallows' $C_p$. We analyze NoisyChebtrunc in terms of the variance and the concentration, in the infinity norm, of its error about the underlying noiseless function. These results show that, with high probability, the infinity-norm error is bounded by a small constant times $\sigma \sqrt{{n}/{N}}$ when the noise is independent and follows a subgaussian or subexponential distribution. We illustrate the performance of NoisyChebtrunc with numerical experiments.
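The noise-averaging effect described above is easy to observe with the $O(Nn^2)$ least-squares baseline that NoisyChebtrunc accelerates (the sketch below is that baseline, not NoisyChebtrunc itself; the test function, degree $n$, and sample count $N$ are arbitrary choices, and Mallows' $C_p$ degree selection is omitted).

```python
import numpy as np
from numpy.polynomial import chebyshev as C

rng = np.random.default_rng(1)
sigma = 0.1                  # noise level
N, n = 2000, 20              # sample count and polynomial degree (illustrative)

x = np.cos(np.pi * np.arange(N) / (N - 1))      # Chebyshev-spaced sample points
y = np.cos(3 * x) + sigma * rng.standard_normal(N)  # noisy evaluations

coef = C.chebfit(x, y, deg=n)     # O(N n^2) Chebyshev least-squares fit
xx = np.linspace(-1, 1, 500)
err = np.max(np.abs(C.chebval(xx, coef) - np.cos(3 * xx)))
# err is on the order of sigma * sqrt(n / N) = 0.01, far below the
# pointwise noise level sigma = 0.1: the fit averages the noise away.
```

Increasing $N$ at fixed $n$ drives the error down at the $O(\sigma\sqrt{n/N})$ rate quoted in the abstract.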

This paper proposes a novel canonical correlation analysis for semiparametric inference in $I(1)/I(0)$ systems via functional approximation. The approach can be applied coherently to panels of $p$ variables with a generic number $s$ of stochastic trends, as well as to subsets or aggregations of variables. This study discusses inferential tools on $s$ and on the loading matrix $\psi$ of the stochastic trends (and on their duals $r$ and $\beta$, the cointegration rank and the cointegrating matrix): asymptotically pivotal test sequences and consistent estimators of $s$ and $r$, $T$-consistent, mixed Gaussian and efficient estimators of $\psi$ and $\beta$, Wald tests thereof, and misspecification tests for checking model assumptions. Monte Carlo simulations show that these tools have reliable performance uniformly in $s$ for small, medium and large-dimensional systems, with $p$ ranging from 10 to 300. An empirical analysis of 20 exchange rates illustrates the methods.

This article presents examples of applying the finite field method to the computation of the characteristic polynomial of the matching arrangement of a graph. Weight functions on the edges of a graph, with weights drawn from a finite field, are divided into proper and improper functions, in connection with proper colorings of the vertices of the matching polytope of the graph. An improper weight function problem is introduced, its NP-completeness is proved, and a knapsack-like public-key cryptosystem is constructed based on this problem.

In this paper we introduce a multilevel Picard approximation algorithm for general semilinear parabolic PDEs with gradient-dependent nonlinearities whose coefficient functions do not need to be constant. We also provide a full convergence and complexity analysis of our algorithm. To obtain our main results, we consider a particular stochastic fixed-point equation (SFPE) motivated by the Feynman-Kac representation and the Bismut-Elworthy-Li formula. We show that the PDE under consideration has a unique viscosity solution which coincides with the first component of the unique solution of the stochastic fixed-point equation. Moreover, the gradient of the unique viscosity solution of the PDE exists and coincides with the second component of the unique solution of the stochastic fixed-point equation. Furthermore, we also provide a numerical example in up to $300$ dimensions to demonstrate the practical applicability of our multilevel Picard algorithm.
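The fixed-point principle underlying the algorithm above can be seen in its simplest form with classical (single-level) Picard iteration for an ODE, $u = u_0 + \int_0^t f(u(s))\,ds$; the sketch below is this textbook special case, not the multilevel stochastic algorithm of the paper, and the choice $f(u)=u$ with exact solution $e^t$ is an arbitrary test problem.

```python
import numpy as np

# Classical Picard iteration for u'(t) = f(u(t)), u(0) = 1, with f(u) = u,
# i.e. the fixed-point equation u = 1 + \int_0^t u(s) ds; exact solution e^t.
t = np.linspace(0.0, 1.0, 1001)
dt = t[1] - t[0]

def cumtrapz(g):
    """Cumulative trapezoidal integral of g on the grid t, starting at 0."""
    return np.concatenate(([0.0], np.cumsum(dt * (g[1:] + g[:-1]) / 2)))

u = np.ones_like(t)            # initial iterate u_0(t) = u(0) = 1
for _ in range(25):
    u = 1.0 + cumtrapz(u)      # u_{k+1} = u0 + \int_0^t f(u_k)

err = np.max(np.abs(u - np.exp(t)))
# After 25 iterations the Picard error is negligible; err is dominated
# by the O(dt^2) trapezoidal quadrature error.
```

Multilevel Picard methods replace the deterministic quadrature by nested Monte-Carlo estimators of the corresponding stochastic fixed-point equation, which is what breaks the curse of dimensionality in the semilinear PDE setting.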

We present a simple universal algorithm for high-dimensional integration which has the optimal error rate (independent of the dimension) in all weighted Korobov classes both in the randomized and the deterministic setting. Our theoretical findings are complemented by numerical tests.
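For context, the classical QMC device for weighted Korobov classes is the randomly shifted rank-1 lattice rule; the sketch below shows the mechanism behind Korobov-class error rates (exact integration of trigonometric polynomials whose frequencies avoid the dual lattice) and is a generic illustration, not the paper's universal algorithm. The Fibonacci generating vector and the test integrand are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(2)

# Randomly shifted rank-1 lattice rule in d = 2 with a Fibonacci
# generating vector (n = 89, z = (1, 55)).
n, z = 89, np.array([1, 55])
shift = rng.random(2)
pts = np.mod(np.outer(np.arange(n), z) / n + shift, 1.0)  # n points in [0,1)^2

# Trigonometric test integrand with exact integral 1 over [0,1]^2.
f = lambda x: np.prod(1.0 + np.sin(2 * np.pi * x), axis=1)
estimate = np.mean(f(pts))
# All nonconstant frequencies of f (namely (1,0), (0,1), (1,-1), (1,1))
# lie outside the dual lattice of this rule, so the geometric character
# sums vanish and the estimate equals 1 up to rounding error.
```

For smooth periodic integrands in a Korobov class, the same cancellation yields the dimension-independent error rates the abstract refers to.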

We propose a quadrature-based formula for computing the exponential function of matrices that combines a non-oscillatory integral on an infinite interval with an oscillatory integral on a finite interval. In the literature, existing quadrature-based formulas are based on the inverse Laplace transform or the Fourier transform. We show that these expressions are essentially equivalent in terms of complex integrals, and choose the former as a starting point to reduce computational cost. By choosing a simple integral path, we derive the integral expression described above. We can then easily apply the double-exponential formula and the Gauss-Legendre formula, which have rigorous error bounds. As numerical experiments show, the proposed formula outperforms the existing formulas when the imaginary parts of the eigenvalues of the matrices have large absolute values.
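The family of methods discussed above all discretize a complex contour integral for $\exp(A)$. A minimal member of that family, sketched below, applies the trapezoidal rule to Cauchy's formula $\exp(A) = \frac{1}{2\pi i}\oint e^z (zI - A)^{-1}\,dz$ on a circle enclosing the spectrum; this is a generic illustration, not the double-exponential/Gauss-Legendre formula proposed in the paper, and the toy matrix, contour centre, radius, and node count are assumptions.

```python
import numpy as np

A = np.diag([1.0, 2.0])        # toy matrix with known exponential diag(e, e^2)
c, r, N = 1.5, 2.0, 64         # contour centre, radius, number of nodes (assumed)
I = np.eye(2)

# Trapezoidal rule on z(theta) = c + r e^{i theta}, which converges
# geometrically for this periodic analytic integrand.
E = np.zeros((2, 2), dtype=complex)
for theta in 2 * np.pi * np.arange(N) / N:
    z = c + r * np.exp(1j * theta)
    dz = 1j * r * np.exp(1j * theta)                 # z'(theta)
    E += np.exp(z) * np.linalg.solve(z * I - A, I) * dz
E = (E / (1j * N)).real        # (1/2πi) * (2π/N) * sum; imaginary part ~ rounding
# E agrees with diag(e, e^2) to machine precision.
```

The practical difficulty the abstract addresses is exactly the choice of path and quadrature when eigenvalues with large imaginary parts make such integrands highly oscillatory.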

Parameter inference is essential when interpreting observational data using mathematical models. Standard inference methods for differential equation models typically rely on obtaining repeated numerical solutions of the differential equation(s). Recent results have explored how numerical truncation error can have major, detrimental, and sometimes hidden impacts on likelihood-based inference by introducing false local maxima into the log-likelihood function. We present a straightforward approach for inference that eliminates the need for solving the underlying differential equations, thereby completely avoiding the impact of truncation error. Open-access Jupyter notebooks, available on GitHub, allow others to implement this method for a broad class of widely-used models to interpret biological data.
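One simple way to fit ODE parameters without ever solving the ODE is gradient matching: regress estimated derivatives of the data directly on the model's right-hand side. The sketch below does this for a logistic model; it is a generic illustration of the solve-free idea under that assumption, not the likelihood-based construction of the paper, and the model, parameter values, and noiseless "data" are all hypothetical.

```python
import numpy as np

# Logistic model y' = r y (1 - y/K), which is linear in (a, b) = (r, -r/K)
# after expanding: y' = a*y + b*y^2.  So no ODE solver is needed.
r_true, K_true, y0 = 1.0, 2.0, 0.5
t = np.linspace(0.0, 5.0, 501)
y = K_true / (1 + (K_true / y0 - 1) * np.exp(-r_true * t))  # exact solution as data

dt = t[1] - t[0]
dy = (y[2:] - y[:-2]) / (2 * dt)   # central-difference derivative estimates
yi = y[1:-1]
X = np.column_stack([yi, yi**2])
(a, b), *_ = np.linalg.lstsq(X, dy, rcond=None)  # linear least squares
r_hat, K_hat = a, -a / b           # recover (r, K) from the coefficients
# r_hat and K_hat match (1.0, 2.0) to within the O(dt^2) differencing error.
```

With noisy data the derivative estimates would need smoothing, but the key property survives: each evaluation of the fitting objective is free of numerical truncation error from an ODE solver, which is the failure mode highlighted in the abstract.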
