The Collatz hypothesis is a theorem of the algorithmic theory of natural numbers. We prove the (algorithmic) formula that expresses the halting property of the Collatz algorithm. The observation that Collatz's theorem cannot be proved in any elementary number theory completes the main result.
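
The halting property in question is that of the usual Collatz iteration. A minimal Python sketch of that iteration (the step budget and the range tested below are illustrative choices, not taken from the paper):

```python
def collatz_halts(n: int, max_steps: int = 10**6) -> bool:
    """Run the Collatz iteration from n and report whether it reaches 1.

    The max_steps cap is only a practical safeguard; the Collatz conjecture
    asserts that the loop terminates for every positive integer n.
    """
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
        if steps >= max_steps:
            return False  # undecided within the step budget
    return True


if __name__ == "__main__":
    # every starting value in this range reaches 1
    print(all(collatz_halts(n) for n in range(1, 10_000)))
```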


In recent years a great deal of attention has been paid to discretizations of the incompressible Stokes equations that exactly preserve the incompressibility constraint. These are of substantial interest because they are pressure-robust, i.e. the error estimates for the velocity do not depend on the error in the pressure. Similar considerations arise in nearly incompressible linear elastic solids. Conforming discretizations with this property are now well understood in two dimensions, but remain poorly understood in three dimensions. In this work we state two conjectures on this subject. The first is that the Scott-Vogelius element pair is inf-sup stable on uniform meshes for velocity degree $k \ge 4$; the best result available in the literature is for $k \ge 6$. The second is that there exists a stable space decomposition of the kernel of the divergence for $k \ge 5$. We present numerical evidence supporting our conjectures.
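
For reference, the inf-sup (LBB) condition asserted in the first conjecture, for a discrete velocity space $V_h$ and pressure space $Q_h$, reads

\[
\inf_{0 \neq q_h \in Q_h} \; \sup_{0 \neq v_h \in V_h} \frac{\int_\Omega q_h \, (\nabla \cdot v_h) \, dx}{\|q_h\|_{L^2(\Omega)} \, \|v_h\|_{H^1(\Omega)}} \;\ge\; \beta > 0,
\]

with $\beta$ independent of the mesh size. For the Scott-Vogelius pair, $V_h$ consists of continuous vector-valued piecewise polynomials of degree $k$ and $Q_h$ of discontinuous piecewise polynomials of degree $k-1$, so that discretely divergence-free velocities are exactly divergence-free.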

The DTW Barycenter Averaging (DBA) algorithm is a widely used algorithm for estimating the mean of a given set of point sequences. In this context, the mean is defined as a point sequence that minimises the sum of dynamic time warping (DTW) distances to the input sequences. The algorithm is similar to the $k$-means algorithm in the sense that it alternately repeats two steps: (1) computing an optimal assignment to the points of the current mean, and (2) computing an optimal mean under the current assignment. The popularity of DBA can be attributed to the fact that it works well in practice, despite the absence of known theoretical guarantees. In our paper, we aim to initiate a theoretical study of the number of iterations that DBA performs until convergence. We assume the algorithm is given $n$ sequences of $m$ points in $\mathbb{R}^d$ and a parameter $k$ that specifies the length of the mean sequence to be computed. We show that, in contrast to its fast running time in practice, the number of iterations can be exponential in $k$ in the worst case, even if the number of input sequences is $n=2$. We complement these findings with experiments on real-world data that suggest this worst-case behaviour is likely degenerate. To better understand the performance of the algorithm on non-degenerate inputs, we study DBA in the model of smoothed analysis, upper-bounding the expected number of iterations in the worst case under random perturbations of the input. Our smoothed upper bound is polynomial in $k$, $n$ and $d$, and for constant $n$ it is also polynomial in $m$. For our analysis, we adapt the set of techniques that were developed for analysing $k$-means and observe that this set of techniques is not sufficient to obtain tight bounds for general $n$.
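
A compact Python sketch of the two alternating steps described above; the initialization by resampling one input sequence and the stopping test are simplified illustrative choices, not the variants analysed in the paper:

```python
import numpy as np


def dtw_path(a: np.ndarray, b: np.ndarray):
    """Optimal DTW warping path between sequences a (k x d) and b (m x d),
    returned as a list of index pairs (i, j)."""
    k, m = len(a), len(b)
    cost = np.full((k + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, k + 1):
        for j in range(1, m + 1):
            d = np.sum((a[i - 1] - b[j - 1]) ** 2)
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    path, i, j = [], k, m
    while i > 0 and j > 0:  # backtrack from the end of both sequences
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]


def dba(sequences, k: int, max_iters: int = 50):
    """DBA: alternate between (1) DTW assignments of input points to the
    current mean and (2) per-index averaging of the assigned points."""
    first = sequences[0]
    mean = first[np.linspace(0, len(first) - 1, k).astype(int)].astype(float)
    for _ in range(max_iters):
        buckets = [[] for _ in range(k)]
        for seq in sequences:
            for i, j in dtw_path(mean, seq):
                buckets[i].append(seq[j])
        new_mean = np.array([np.mean(b, axis=0) if b else mean[i]
                             for i, b in enumerate(buckets)])
        if np.allclose(new_mean, mean):
            break  # the assignment step no longer changes the mean
        mean = new_mean
    return mean
```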

We are interested in generating surfaces with arbitrary roughness and in forming patterns on these surfaces. Two methods are applied to construct rough surfaces. In the first method, a superposition of wave functions with random frequencies and propagation angles is used to obtain periodic rough surfaces with analytic parametric equations. The amplitude of such surfaces is also an important variable in the provided eigenvalue analysis for the Laplace-Beltrami operator and in the generation of pattern formation. Numerical experiments show that the patterns become irregular as the amplitude and frequency of the rough surface increase. For the sake of easy generalization to closed manifolds, we propose a second construction method for rough surfaces, which uses random nodal values and discretized heat filters. We provide numerical evidence that both surface construction methods yield patterns comparable to those observed in real-life animals.
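
A minimal Python sketch of the first construction (a superposition of plane waves with random wave vectors, i.e. random frequencies and propagation angles, plus random phases); the number of waves, frequency range, and normalization are illustrative assumptions:

```python
import numpy as np


def rough_surface(x, y, n_waves=30, amplitude=0.1, max_freq=8, seed=0):
    """Height field z(x, y) built as a superposition of plane waves with
    random wave vectors and random phases. Integer wave-vector components
    keep the surface periodic on the unit square."""
    rng = np.random.default_rng(seed)
    kx = rng.integers(-max_freq, max_freq + 1, n_waves)
    ky = rng.integers(-max_freq, max_freq + 1, n_waves)
    phases = rng.uniform(0.0, 2.0 * np.pi, n_waves)
    z = np.zeros_like(x, dtype=float)
    for a, b, ph in zip(kx, ky, phases):
        z += np.cos(2.0 * np.pi * (a * x + b * y) + ph)
    return amplitude * z / np.sqrt(n_waves)


# example: evaluate the height field on a 256 x 256 grid over the unit square
X, Y = np.meshgrid(np.linspace(0.0, 1.0, 256), np.linspace(0.0, 1.0, 256))
Z = rough_surface(X, Y)
```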

We introduce a new type of influence function, the asymptotic expected sensitivity function, which is often equivalent to but mathematically more tractable than the traditional one based on the Gateaux derivative. To illustrate, we study the robustness of some important rank correlations, including Spearman's and Kendall's correlations, and the recently developed Chatterjee's correlation.
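
The three rank correlations under study can be computed directly; the sketch below uses SciPy for Spearman's and Kendall's coefficients and the standard no-ties formula for Chatterjee's correlation (the simulated data are purely illustrative):

```python
import numpy as np
from scipy.stats import kendalltau, spearmanr


def chatterjee_xi(x, y):
    """Chatterjee's rank correlation (no-ties formula): sort the pairs by x,
    rank the y values, and sum the absolute successive rank differences."""
    x, y = np.asarray(x), np.asarray(y)
    n = len(x)
    order = np.argsort(x)
    r = np.argsort(np.argsort(y[order])) + 1  # ranks of y in x-sorted order
    return 1.0 - 3.0 * np.sum(np.abs(np.diff(r))) / (n**2 - 1)


rng = np.random.default_rng(1)
x = rng.normal(size=500)
y = x**2 + 0.1 * rng.normal(size=500)  # non-monotone dependence
print(spearmanr(x, y)[0], kendalltau(x, y)[0], chatterjee_xi(x, y))
```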

Stabbing Planes (also known as Branch and Cut) is a proof system introduced very recently which, informally speaking, extends the DPLL method by branching on integer linear inequalities instead of single variables. The techniques known so far to prove size and depth lower bounds for Stabbing Planes are generalizations of those used for the Cutting Planes proof system: size lower bounds are established by monotone circuit arguments, while depth lower bounds are found via communication complexity. As such, these bounds apply to lifted versions of combinatorial statements. Rank lower bounds for Cutting Planes are also obtained by geometric arguments called protection lemmas. In this work we introduce two new geometric approaches to prove size/depth lower bounds in Stabbing Planes working for any formula: (1) the antichain method, relying on Sperner's Theorem, and (2) the covering method, which uses results on essential coverings of the Boolean cube by linear polynomials, which in turn rely on Alon's combinatorial Nullstellensatz. We demonstrate their use on classes of combinatorial principles such as the Pigeonhole principle, the Tseitin contradictions and the Linear Ordering Principle. By the first method we prove almost linear size lower bounds and optimal logarithmic depth lower bounds for the Pigeonhole principle, and analogous lower bounds for the Tseitin contradictions over the complete graph and for the Linear Ordering Principle. By the covering method we obtain a superlinear size lower bound and a logarithmic depth lower bound for Stabbing Planes proofs of Tseitin contradictions over a grid graph.
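
For context, a Stabbing Planes refutation repeatedly branches on an integer linear form: at each node one picks $a \in \mathbb{Z}^n$ and $b \in \mathbb{Z}$ and splits the current polytope $P$ into

\[
P \cap \{x : a \cdot x \le b - 1\} \qquad \text{and} \qquad P \cap \{x : a \cdot x \ge b\},
\]

stopping a branch once its polytope is empty. Size and depth then refer to the number of nodes and the length of the longest root-to-leaf path of this branching tree.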

The summary receiver operating characteristic (SROC) curve has been recommended as one important meta-analytical summary to represent the accuracy of a diagnostic test in the presence of heterogeneous cutoff values. However, selective publication of diagnostic studies for meta-analysis can induce publication bias (PB) on the estimate of the SROC curve. Several sensitivity analysis methods have been developed to quantify PB on the SROC curve, and all of these methods utilize parametric selection functions to model the selective publication mechanism. The main contribution of this article is to propose a new sensitivity analysis approach that derives worst-case bounds for the SROC curve by adopting nonparametric selection functions under minimal assumptions. The estimation procedure for the worst-case bounds uses the Monte Carlo method to obtain the SROC curves, along with the corresponding areas under the curve, in the worst case, where the maximum possible PB is considered under a range of marginal selection probabilities. We apply the proposed method to a real-world meta-analysis to show that the worst-case bounds of the SROC curves can provide useful insights for discussing the robustness of meta-analytical findings on diagnostic test accuracy.

Regularization promotes well-posedness in solving an inverse problem with incomplete measurement data. The regularization term is typically designed based on an a priori characterization of the unknown signal, such as sparsity or smoothness. The standard inhomogeneous regularization incorporates a spatially varying exponent $p$ in the $\ell_p$ norm-based regularization to recover a signal whose characteristics vary spatially. This study proposes a weighted inhomogeneous regularization that extends the standard inhomogeneous regularization through a new exponent design and spatially varying weights. The new exponent design avoids misclassification when regions with different characteristics lie close to each other. The weights address a further issue, namely when the region of one characteristic is too small to be recovered effectively by the $\ell_p$ norm-based regularization even after it is identified correctly. A suite of numerical tests, including synthetic image experiments and the recovery of real sea ice from incomplete wave measurements, shows the efficacy of the proposed weighted inhomogeneous regularization.
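
Schematically, and only as one reading of the description above (the paper's precise functional may differ), the weighted inhomogeneous term assigns each component a spatially varying exponent $p_i$ and weight $w_i$:

\[
\min_{u} \; \|A u - f\|_2^2 + \lambda \sum_{i} w_i \, |u_i|^{p_i},
\]

where $A$ is the incomplete measurement operator, $f$ the measured data, and $\lambda$ the regularization parameter; the standard inhomogeneous regularization corresponds to $w_i \equiv 1$.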

We consider the parallel-in-time solution of scalar nonlinear conservation laws in one spatial dimension. The equations are discretized in space with a conservative finite-volume method using weighted essentially non-oscillatory (WENO) reconstructions, and in time with high-order explicit Runge-Kutta methods. The solution of the global, discretized space-time problem is sought via a nonlinear iteration that uses a novel linearization strategy in cases of non-differentiable equations. Under certain choices of discretization and algorithmic parameters, the nonlinear iteration coincides with Newton's method, although, more generally, it is a preconditioned residual correction scheme. At each nonlinear iteration, the linearized problem takes the form of a certain discretization of a linear conservation law over the space-time domain in question. An approximate parallel-in-time solution of the linearized problem is computed with a single multigrid reduction-in-time (MGRIT) iteration. The MGRIT iteration employs a novel coarse-grid operator that is a modified conservative semi-Lagrangian discretization and generalizes those we have developed previously for non-conservative scalar linear hyperbolic problems. Numerical tests are performed for the inviscid Burgers and Buckley--Leverett equations. For many test problems, the solver converges in just a handful of iterations with convergence rate independent of mesh resolution, including problems with (interacting) shocks and rarefactions.
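
The two scalar conservation laws used in the tests have the standard fluxes (the Buckley-Leverett constant $M$ below is a generic choice, not necessarily the value used in the experiments):

\[
u_t + \partial_x f(u) = 0, \qquad
f_{\mathrm{Burgers}}(u) = \tfrac{1}{2} u^2, \qquad
f_{\mathrm{BL}}(u) = \frac{u^2}{u^2 + M (1 - u)^2}.
\]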

Using the stochastic particle method, we propose the truncated Euler-Maruyama (TEM) method for numerically solving McKean-Vlasov stochastic differential equations (MV-SDEs), possibly with both drift and diffusion coefficients having super-linear growth in the state variable. Firstly, the propagation of chaos in the $L^q$ ($q \ge 2$) sense is obtained under general assumptions. Then, the standard $1/2$-order strong convergence rate in the $L^q$ sense of the proposed method applied to the particle system is derived by utilizing a stopping time analysis technique. Furthermore, long-time dynamical properties of MV-SDEs, including moment boundedness, stability, and the existence and uniqueness of the invariant probability measure, can be numerically realized by the TEM method. Additionally, it is proven that the numerical invariant measure converges to the underlying invariant measure of the MV-SDEs in the $L^2$-Wasserstein metric. Finally, the conclusions obtained in this paper are verified through examples and numerical simulations.
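
A heavily simplified Python sketch of the particle-system TEM idea for a scalar MV-SDE $dX_t = b(X_t, \mu_t)\,dt + \sigma(X_t, \mu_t)\,dW_t$, with the law $\mu_t$ replaced by the empirical measure of the particles; the truncation radius, toy coefficients, and parameters are illustrative assumptions, not those calibrated in the paper:

```python
import numpy as np


def truncate(x, radius):
    """Project each component of x onto the interval [-radius, radius]."""
    return np.clip(x, -radius, radius)


def tem_particles(b, sigma, x0, T, dt, n_particles, alpha=0.25, seed=0):
    """Truncated Euler-Maruyama for the interacting particle system.

    b and sigma take (state vector, particle cloud); the cloud stands in for
    the law of the solution. The truncation radius dt**(-alpha) is a simple
    illustrative choice that grows as the step size shrinks.
    """
    rng = np.random.default_rng(seed)
    radius = dt ** (-alpha)
    x = np.full(n_particles, float(x0))
    for _ in range(int(T / dt)):
        xt = truncate(x, radius)                      # tame super-linear coefficients
        dw = rng.normal(0.0, np.sqrt(dt), n_particles)
        x = x + b(xt, xt) * dt + sigma(xt, xt) * dw   # empirical measure via the cloud
    return x


# toy MV-SDE with super-linear drift: dX = (X - X^3 + E[X]) dt + X dW
b = lambda x, cloud: x - x**3 + np.mean(cloud)
sigma = lambda x, cloud: x
samples = tem_particles(b, sigma, x0=1.0, T=1.0, dt=1e-3, n_particles=2000)
print(samples.mean(), samples.var())
```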

The goal of explainable Artificial Intelligence (XAI) is to generate human-interpretable explanations, but there are no computationally precise theories of how humans interpret AI-generated explanations. The lack of theory means that validation of XAI must be done empirically, on a case-by-case basis, which prevents systematic theory-building in XAI. We propose a psychological theory of how humans draw conclusions from saliency maps, the most common form of XAI explanation, which for the first time allows precise prediction of explainee inference conditioned on the explanation. Our theory posits that, absent an explanation, humans expect the AI to make decisions similar to their own, and that they interpret an explanation by comparing it to the explanations they themselves would give. Comparison is formalized via Shepard's universal law of generalization in a similarity space, a classic theory from cognitive science. A pre-registered user study on AI image classifications with saliency map explanations demonstrates that our theory quantitatively matches participants' predictions of the AI.
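
Shepard's law, as used in the comparison step, states that similarity (and hence generalization) decays exponentially with distance in the psychological similarity space,

\[
s(x, y) = \exp\bigl(-c \, d(x, y)\bigr),
\]

where $d(x, y)$ is the distance between the explanation a participant would give and the explanation shown, and $c > 0$ is a sensitivity parameter; how the theory instantiates $d$ and $c$ is specified in the paper, not here.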
