
The Collatz hypothesis is a theorem of the algorithmic theory of natural numbers. We prove the (algorithmic) formula that expresses the halting property of the Collatz algorithm. The observation that Collatz's theorem cannot be proved in any elementary number theory completes the main result.
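The halting property in question is that of the familiar Collatz iteration; a minimal sketch in Python (illustrative only, with an iteration cap added as a practical safeguard):

    def collatz_halts(n: int, max_steps: int = 10**6) -> bool:
        """Iterate the Collatz map: n -> n/2 if n is even, n -> 3n+1 if n is odd.

        Returns True if the orbit of n reaches 1 within max_steps iterations.
        The Collatz hypothesis asserts that this happens for every n >= 1;
        the max_steps cap exists only so this sketch always terminates.
        """
        assert n >= 1
        for _ in range(max_steps):
            if n == 1:
                return True
            n = n // 2 if n % 2 == 0 else 3 * n + 1
        return False

    # Check the first ten thousand starting values.
    print(all(collatz_halts(n) for n in range(1, 10**4)))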

Related content

Penalized complexity (PC) priors are a principled framework for designing priors that penalize model complexity. PC priors penalize the Kullback-Leibler divergence (KLD) between the distribution induced by a ``simple'' base model and that induced by a more complex model. However, in many common cases, it is impossible to construct a prior in this way because the KLD is infinite. Various approximations are used to mitigate this problem, but the resulting priors then fail to follow the design principles. We propose a new class of priors, the Wasserstein complexity penalization (WCP) priors, obtained by replacing the KLD with the Wasserstein distance in the PC prior framework. These priors avoid the infinite-model-distance issue and can be derived by following the principles exactly, making them more interpretable. Furthermore, we propose principles and recipes for constructing joint WCP priors for multiple parameters, and show that they can be obtained, either analytically or numerically, for a general class of models. The methods are illustrated through several examples for which PC priors have previously been applied.
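For orientation, the PC prior recipe of Simpson et al. measures complexity by a distance derived from the KLD and places an exponential prior on that distance; schematically (our notation, for a single parameter $\xi$ with base value $\xi_0$):
\[
d(\xi) = \sqrt{2\,\mathrm{KLD}\!\left(f(\cdot\mid\xi)\,\middle\|\,f(\cdot\mid\xi_0)\right)}, \qquad
\pi(\xi) = \lambda\, e^{-\lambda\, d(\xi)} \left|\frac{\partial d(\xi)}{\partial \xi}\right|,
\]
where $\lambda$ controls the rate of penalization. The WCP construction described above keeps the exponential penalization of distance but replaces $d(\xi)$ by a Wasserstein distance $W\!\left(f(\cdot\mid\xi), f(\cdot\mid\xi_0)\right)$, which remains finite in the cases where the KLD blows up.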

This work puts forth low-complexity Riemannian subspace descent algorithms for the minimization of functions over the symmetric positive definite (SPD) manifold. Different from the existing Riemannian gradient descent variants, the proposed approach utilizes carefully chosen subspaces that allow the update to be written as a product of the Cholesky factor of the iterate and a sparse matrix. The resulting updates avoid costly matrix operations such as matrix exponentiation and dense matrix multiplication, which are required in almost all other Riemannian optimization algorithms on the SPD manifold. We further identify a broad class of functions, arising in diverse applications such as kernel matrix learning, covariance estimation of Gaussian distributions, maximum likelihood parameter estimation of elliptically contoured distributions, and parameter estimation in Gaussian mixture models, over which the Riemannian gradients can be calculated efficiently. The proposed uni-directional and multi-directional Riemannian subspace descent variants incur per-iteration complexities of $\mathcal{O}(n)$ and $\mathcal{O}(n^2)$, respectively, compared with the $\mathcal{O}(n^3)$ or higher complexity incurred by all existing Riemannian gradient descent variants. The superior runtime and low per-iteration complexity of the proposed algorithms are also demonstrated via numerical tests on large-scale covariance estimation and matrix square root problems.
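The structural point, that an SPD iterate can be updated through its Cholesky factor times a sparse matrix and therefore without matrix exponentials, can be illustrated with a small toy sketch; the direction, indices, and step size below are placeholders for illustration, not the subspace construction proposed in the paper:

    import numpy as np

    def elementary_factor_update(L, i, j, eta):
        """Toy update of an SPD iterate X = L @ L.T through its factor L.

        Right-multiplying L by T = I + eta * e_i e_j^T (a sparse matrix with a
        single off-diagonal nonzero) changes only column j of the factor, so the
        factor update is O(n).  X_new = (L T)(L T)^T is SPD by construction
        whenever T is nonsingular, and no matrix exponential or dense retraction
        is ever formed.  Choosing i > j keeps the factor lower triangular.
        """
        L_new = L.copy()
        L_new[:, j] += eta * L[:, i]      # single-column (rank-one) change
        return L_new

    rng = np.random.default_rng(0)
    n = 5
    A = rng.standard_normal((n, n))
    X = A @ A.T + n * np.eye(n)           # an SPD starting point
    L = np.linalg.cholesky(X)

    L = elementary_factor_update(L, i=3, j=1, eta=0.1)
    X_new = L @ L.T
    print(np.all(np.linalg.eigvalsh(X_new) > 0))   # True: the iterate stays SPD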

Fredholm integral equations of the second kind that are defined on a finite or infinite interval arise in many applications. This paper discusses Nystr\"om methods based on Gauss quadrature rules for the solution of such integral equations. It is important to be able to estimate the error in the computed solution, because this allows the choice of an appropriate number of nodes in the Gauss quadrature rule used. This paper explores the application of averaged and weighted averaged Gauss quadrature rules for this purpose, and introduces new stability properties for them.
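As a point of reference, a basic Nystr\"om discretization of a second-kind equation $u(x) - \int_{-1}^{1} k(x,t)\,u(t)\,dt = f(x)$ with an $m$-point Gauss rule collocates at the quadrature nodes and solves a small linear system; a generic sketch (not the averaged or weighted averaged rules studied here, which serve to estimate the error of such a solution):

    import numpy as np

    def nystrom_solve(kernel, f, m):
        """Nystrom discretization of u(x) - int_{-1}^{1} k(x,t) u(t) dt = f(x).

        Collocating at the m Gauss-Legendre nodes gives the linear system
        (I - K) u = f with K[i, j] = w_j * k(x_i, x_j).
        """
        x, w = np.polynomial.legendre.leggauss(m)   # nodes and weights on [-1, 1]
        K = w[np.newaxis, :] * kernel(x[:, np.newaxis], x[np.newaxis, :])
        u = np.linalg.solve(np.eye(m) - K, f(x))
        return x, u

    # Toy problem with kernel k(x,t) = x*t/2; comparing rules of different
    # sizes gives a crude error estimate of the kind discussed above.
    kernel = lambda x, t: 0.5 * x * t
    f = lambda x: np.exp(x)
    x5, u5 = nystrom_solve(kernel, f, 5)
    x20, u20 = nystrom_solve(kernel, f, 20)
    print(np.abs(u5 - np.interp(x5, x20, u20)).max())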

We extend the use of piecewise orthogonal collocation to computing periodic solutions of renewal equations, which are particularly important in modeling population dynamics. We prove convergence through a rigorous error analysis. Finally, we present some numerical experiments confirming the theoretical results, together with a couple of applications in the context of bifurcation analysis.
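A prototype of the class of problems involved is a scalar renewal equation with finite memory, in which the unknown appears under the integral through its recent history (a generic form; the paper's setting may be more general):
\[
x(t) = \int_{0}^{\tau} k(s)\, g\big(x(t-s)\big)\, \mathrm{d}s, \qquad x(t+T) = x(t) \ \ \text{for all } t,
\]
with the period $T$ typically unknown and computed together with the periodic solution $x$.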

Neufeld and Wu (arXiv:2310.12545) developed a multilevel Picard (MLP) algorithm which can approximately solve general semilinear parabolic PDEs with gradient-dependent nonlinearities, also allowing the coefficient functions of the corresponding PDE to be non-constant. By introducing a particular stochastic fixed-point equation (SFPE) motivated by the Feynman-Kac representation and the Bismut-Elworthy-Li formula, and by identifying the first and second components of the unique fixed point of the SFPE with the unique viscosity solution of the PDE and its gradient, they proved convergence of their algorithm. However, it remained an open question whether the proposed MLP scheme in arXiv:2310.12545 suffers from the curse of dimensionality. In this paper, we prove that the MLP algorithm in arXiv:2310.12545 indeed overcomes the curse of dimensionality, i.e., its computational complexity grows only polynomially in the dimension $d\in \mathbb{N}$ and the reciprocal of the accuracy $\varepsilon$, under suitable assumptions on the nonlinear part of the corresponding PDE.
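For context, the PDEs in question are of the generic semilinear parabolic form (written schematically here; see arXiv:2310.12545 for the precise assumptions):
\[
\frac{\partial u}{\partial t}(t,x) + \tfrac{1}{2}\operatorname{Trace}\!\big(\sigma(t,x)\sigma(t,x)^{\top} \operatorname{Hess}_x u(t,x)\big) + \big\langle \mu(t,x), \nabla_x u(t,x) \big\rangle + f\big(t,x,u(t,x),\nabla_x u(t,x)\big) = 0,
\]
with terminal condition $u(T,x) = g(x)$; the gradient dependence enters through the nonlinearity $f$. Overcoming the curse of dimensionality means that the computational cost of achieving accuracy $\varepsilon$ is bounded by $C\, d^{c} \varepsilon^{-c}$ for constants $c, C > 0$ independent of $d$ and $\varepsilon$.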

We describe a novel algorithm for solving general parametric (nonlinear) eigenvalue problems. Our method has two steps: first, high-accuracy solutions of non-parametric versions of the problem are gathered at selected values of the parameters; these are then combined to obtain global approximations of the parametric eigenvalues. To gather the non-parametric data, we use non-intrusive contour-integration-based methods, which, however, cannot track eigenvalues that migrate into or out of the contour as the parameter changes. Special strategies are described for performing the combination-over-parameter step despite having only partial information on such migrating eigenvalues. Moreover, we devote special attention to the approximation of eigenvalues that undergo bifurcations. Finally, we propose an adaptive strategy that allows one to apply our method effectively even without any a priori information on the behavior of the sought eigenvalues. Numerical tests show that our algorithm can achieve remarkably high approximation accuracy.
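Concretely, a parametric nonlinear eigenvalue problem asks, for each parameter value $p$, for pairs $(\lambda(p), v(p))$ with
\[
T\big(\lambda(p); p\big)\, v(p) = 0, \qquad v(p) \neq 0,
\]
and non-intrusive contour-integration methods (in the spirit of Beyn's method) recover the eigenvalues inside a fixed contour $\Gamma$ from resolvent moments of the schematic form
\[
A_k(p) = \frac{1}{2\pi \mathrm{i}} \oint_{\Gamma} z^{k}\, T(z; p)^{-1} V \, \mathrm{d}z, \qquad k = 0, 1, \dots,
\]
where $V$ is a probing matrix; eigenvalues crossing $\Gamma$ as $p$ varies are precisely the migrating eigenvalues that the combination step must handle.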

The incompressibility method is a counting argument in the framework of algorithmic complexity that permits discovering properties satisfied by most objects of a class. This paper gives a preliminary insight into the Kolmogorov complexity of groupoids and some other algebras. The incompressibility method shows that almost all groupoids are asymmetric and simple: only trivial or constant homomorphisms are possible. However, highly random groupoids admit subgroupoids with interesting restrictions that reveal intrinsic structural properties. We also study algebraic varieties and ask which equational identities are compatible with randomness.
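The counting argument at the heart of the incompressibility method is the standard one: for any $c \ge 0$,
\[
\#\{\, x \in \{0,1\}^{n} : K(x) < n - c \,\} \;\le\; \sum_{i=0}^{n-c-1} 2^{i} \;=\; 2^{\,n-c} - 1,
\]
since there are too few short descriptions; hence at least a fraction $1 - 2^{-c}$ of all length-$n$ strings are $c$-incompressible, and a property that holds for every incompressible string holds for almost all suitably encoded objects, such as the multiplication tables of groupoids.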

Probabilistic variants of Model Order Reduction (MOR) methods have recently emerged for improving the stability and computational performance of classical approaches. In this paper, we propose a probabilistic Reduced Basis Method (RBM) for the approximation of a family of parameter-dependent functions. It relies on a probabilistic greedy algorithm with an error indicator that can be written as an expectation of some parameter-dependent random variable. Practical algorithms relying on Monte Carlo estimates of this error indicator are discussed. In particular, when a Probably Approximately Correct (PAC) bandit algorithm is used, the resulting procedure is proven to be a weak greedy algorithm with high probability. The intended applications concern the approximation of a parameter-dependent family of functions for which we only have access to (noisy) pointwise evaluations. As a particular application, we consider the approximation of solution manifolds of linear parameter-dependent partial differential equations with a probabilistic interpretation through the Feynman-Kac formula.
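For reference, the weak greedy property invoked here is the standard one from reduced-basis theory: at step $n$ the selected parameter $\mu_{n+1}$ need only be near-maximal for the error indicator $\Delta_n$,
\[
\Delta_n(\mu_{n+1}) \;\ge\; \gamma \, \sup_{\mu \in \mathcal{P}} \Delta_n(\mu), \qquad \gamma \in (0,1],
\]
and the point above is that this inequality can be guaranteed with high probability even when $\Delta_n$ is only accessible through Monte Carlo estimates built from (noisy) pointwise evaluations.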

We provide numerical bounds on the Crouzeix ratio for KLS matrices $A$ which have a line segment on the boundary of the numerical range. The Crouzeix ratio is the supremum over all polynomials $p$ of the spectral norm of $p(A)$ divided by the maximum absolute value of $p$ on the numerical range of $A$. Our bounds confirm the conjecture that this ratio is less than or equal to $2$. We also give a precise description of these numerical ranges.
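In symbols, with $W(A) = \{\, v^{*}Av : \|v\|_2 = 1 \,\}$ denoting the numerical range, the quantity under study is
\[
\psi(A) \;=\; \sup_{p \neq 0} \frac{\|p(A)\|_2}{\max_{z \in W(A)} |p(z)|},
\]
where the supremum is over nonzero polynomials $p$; Crouzeix's conjecture asserts $\psi(A) \le 2$ for every square matrix $A$, while the best bound known in general is $1 + \sqrt{2}$, due to Crouzeix and Palencia.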

The goal of explainable Artificial Intelligence (XAI) is to generate human-interpretable explanations, but there are no computationally precise theories of how humans interpret AI-generated explanations. The lack of theory means that validation of XAI must be done empirically, on a case-by-case basis, which prevents systematic theory-building in XAI. We propose a psychological theory of how humans draw conclusions from saliency maps, the most common form of XAI explanation, which for the first time allows precise prediction of explainee inference conditioned on explanation. Our theory posits that, absent an explanation, humans expect the AI to make decisions similar to their own, and that they interpret an explanation by comparing it to the explanations they themselves would give. Comparison is formalized via Shepard's universal law of generalization in a similarity space, a classic theory from cognitive science. A pre-registered user study on AI image classifications with saliency map explanations demonstrates that our theory quantitatively matches participants' predictions of the AI.
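Shepard's universal law of generalization, the formal core referred to above, states that the probability of generalizing a response from stimulus $x$ to stimulus $y$ decays exponentially with their distance in a psychological similarity space:
\[
g(x, y) \;=\; e^{-\, d(x, y)},
\]
where $d$ is a suitable metric on that space; how the law is instantiated for comparing saliency maps is specified by the theory in the paper.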
