
The regularity of refinable functions has been analysed in an extensive literature and is well understood in two cases: 1) univariate; 2) multivariate with an isotropic dilation matrix. The general (non-isotropic) case long resisted analysis and was settled only recently with the development of the matrix method. In this paper we take the next step and extend the Littlewood-Paley type method, which is very efficient in the aforementioned special cases, to general equations with arbitrary dilation matrices. This yields formulas for the higher-order regularity in $W_2^k(\mathbb{R}^n)$ in terms of the Perron eigenvalue of a finite-dimensional linear operator on a special cone. Applying these results to the recently introduced tile B-splines, we prove that they can have higher smoothness than the classical B-splines of the same order. Moreover, the two-digit tile B-splines have the minimal support of the mask among all refinable functions with the same order of approximation. This proves, in particular, the lowest algorithmic complexity of the corresponding subdivision schemes. Examples and numerical results are provided.
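As background for the subdivision-scheme connection, the classical univariate case with dilation factor 2 can be illustrated by iterating the two-scale refinement rule: starting from a delta sequence, the mask of the linear B-spline reproduces the hat function's values at dyadic points. This is a minimal one-dimensional sketch of ours, not the multivariate tile B-spline construction of the paper.

```python
import numpy as np

def subdivide(c, mask):
    # one two-scale refinement step: c_new[i] = sum_k mask[i - 2k] * c[k]
    out = np.zeros(2 * len(c) + len(mask) - 2)
    for k, ck in enumerate(c):
        out[2 * k:2 * k + len(mask)] += ck * mask
    return out

mask = np.array([1.0, 2.0, 1.0]) / 2.0   # subdivision mask of the linear B-spline
c = np.array([1.0])                      # start from a delta sequence
for _ in range(6):
    c = subdivide(c, mask)
# c now holds the hat function sampled at dyadic points of step 2^-6
```

The sum of the sequence doubles at each step (the mask sums to 2), while the maximum converges to the peak value 1 of the limit function.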


We propose a new concept of lifts of reversible diffusion processes and show that various well-known non-reversible Markov processes arising in applications are lifts in this sense of simple reversible diffusions. Furthermore, we introduce a concept of non-asymptotic relaxation times and show that these can at most be reduced by a square root through lifting, generalising a related result in discrete time. Finally, we demonstrate how the recently developed approach to quantitative hypocoercivity based on space-time Poincar\'e inequalities can be rephrased and simplified in the language of lifts and how it can be applied to find optimal lifts.
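To make the notion of a lift concrete: the standard example is kinetic (underdamped) Langevin dynamics, which augments the reversible overdamped Langevin diffusion with a velocity variable, is non-reversible, yet has the same spatial marginal in equilibrium. The sketch below simulates both for the quadratic potential $V(x) = x^2/2$ with Euler-Maruyama; the step size and friction parameter are illustrative choices of ours, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
dt, steps = 0.01, 200_000
gamma = 1.0                      # friction of the kinetic chain

# reversible overdamped Langevin: dX = -X dt + sqrt(2) dW   (V(x) = x^2/2)
x = 0.0
xs = np.empty(steps)
for t in range(steps):
    x += -x * dt + np.sqrt(2.0 * dt) * rng.normal()
    xs[t] = x

# non-reversible lift: kinetic Langevin in (position, velocity)
q, v = 0.0, 0.0
qs = np.empty(steps)
for t in range(steps):
    q += v * dt
    v += (-q - gamma * v) * dt + np.sqrt(2.0 * gamma * dt) * rng.normal()
    qs[t] = q
```

Both trajectories have the standard Gaussian as stationary spatial marginal, so their long-run empirical variances agree even though the dynamics are qualitatively different.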

We consider nonparametric Bayesian inference in a multidimensional diffusion model with reflecting boundary conditions based on discrete high-frequency observations. We prove a general posterior contraction rate theorem in $L^2$-loss, which is applied to Gaussian priors. The resulting posteriors, as well as their posterior means, are shown to converge to the ground truth at the minimax optimal rate over H\"older smoothness classes in any dimension. Of independent interest and as part of our proofs, we show that certain frequentist penalized least squares estimators are also minimax optimal.

One of the most fundamental problems in tiling theory is the domino problem: given a set of tiles and tiling rules, decide whether there exists a way to tile the plane using copies of the tiles and following their rules. The problem is known to be undecidable in general, and even for sets of Wang tiles, i.e. unit square tiles bearing colours on their edges, which may be assembled provided they share the same colour on their common edge, as proven by Berger in the 1960s. In this paper, we focus on Wang tilesets. We prove that the domino problem is decidable for robust tilesets, i.e. tilesets that either cannot tile the plane or can do so while provably satisfying some particular invariant. We establish that several famous tilesets considered in the literature are robust. We give arguments that this is true for all tilesets unless they are produced from non-robust Turing machines: a Turing machine is said to be non-robust if it does not halt and, moreover, this fact is not provable. As a side effect of our work, we provide a sound and relatively complete method for proving that a tileset can tile the plane. Our analysis also explains the observed similarities between proofs in the literature for various tilesets, as well as phenomena observed experimentally in the systematic study of tilesets by computer methods.
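For intuition, here is a hypothetical brute-force sketch (our own illustration, not the paper's method) of the classical semi-decision procedure: if a tileset admits a valid tiling of a $k \times k$ torus, that tiling extends periodically to the whole plane. Failure for a given $k$ proves nothing, consistent with the undecidability of the general problem.

```python
from itertools import product

def tiles_torus(tiles, k):
    """Brute-force search for a valid k x k torus tiling.

    A Wang tile is (top, right, bottom, left); horizontally adjacent tiles
    must agree on their shared right/left colour and vertically adjacent
    ones on their bottom/top colour, with wrap-around. Any valid torus
    tiling extends periodically to the whole plane."""
    def valid(grid):
        for i, j in product(range(k), repeat=2):
            if grid[i][j][1] != grid[i][(j + 1) % k][3]:   # right vs left
                return False
            if grid[i][j][2] != grid[(i + 1) % k][j][0]:   # bottom vs top
                return False
        return True
    for choice in product(tiles, repeat=k * k):
        grid = [choice[i * k:(i + 1) * k] for i in range(k)]
        if valid(grid):
            return True
    return False
```

A single tile whose opposite edges match tiles the $1 \times 1$ torus, hence the plane; a tile with mismatched opposite edges cannot tile any torus on its own.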

We present and analyze a new finite volume scheme of Godunov type for a nonlinear scalar conservation law whose flux function has a discontinuous coefficient due to time-dependent changes in its sign along a Lipschitz continuous curve.
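For readers unfamiliar with the method, the following is a minimal sketch of a Godunov-type finite volume update in the classical case of a smooth convex flux (the Burgers flux $f(u) = u^2/2$), not the discontinuous-coefficient flux treated in the paper; the grid, time step, and initial data are illustrative choices of ours.

```python
import numpy as np

def godunov_step(u, dx, dt, flux=lambda u: 0.5 * u**2):
    # Godunov numerical flux for a convex flux function:
    # minimize over [ul, ur] if ul <= ur, maximize over [ur, ul] otherwise
    ul, ur = u[:-1], u[1:]
    F = np.where(ul <= ur,
                 np.minimum(flux(ul), flux(ur)),
                 np.maximum(flux(ul), flux(ur)))
    # transonic rarefaction: the minimum of the Burgers flux sits at u = 0
    F = np.where((ul < 0) & (0 < ur), flux(np.zeros_like(ul)), F)
    unew = u.copy()
    unew[1:-1] -= dt / dx * (F[1:] - F[:-1])   # conservative update
    return unew

x = np.linspace(0.0, 1.0, 201)
dx = x[1] - x[0]
u = np.where((x > 0.2) & (x < 0.4), 1.0, 0.0)  # square pulse
m0 = u.sum() * dx                              # initial mass
for _ in range(100):
    u = godunov_step(u, dx, 0.4 * dx)          # CFL number 0.4
```

Because the update is conservative and the scheme is monotone under the CFL condition, the total mass is preserved exactly and the solution stays within the initial bounds $[0, 1]$.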

Constant (naive) imputation is still widely used in practice, as it is an easy-to-use first technique for dealing with missing data. Yet this simple method could be expected to induce a large bias for prediction purposes, as the imputed input may strongly differ from the true underlying data. However, recent works suggest that this bias is low in the context of high-dimensional linear predictors when data are assumed to be missing completely at random (MCAR). This paper completes the picture for linear predictors by confirming the intuition that the bias is negligible, and that, surprisingly, naive imputation remains relevant even in very low dimension. To this end, we consider a unique underlying random features model, which offers a rigorous framework for studying predictive performance while the dimension of the observed features varies. Building on these theoretical results, we establish finite-sample bounds on stochastic gradient descent (SGD) predictors applied to zero-imputed data, a strategy particularly well suited for large-scale learning. Although the MCAR assumption may appear strong, we show that similar favorable behavior occurs for more complex missing-data scenarios.
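A hypothetical toy version of this setting (our own construction, not the paper's experiments): generate data from a linear model, delete entries completely at random, impute zeros, and run plain SGD on the imputed inputs.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 5000, 10
X = rng.normal(size=(n, d))
beta = rng.normal(size=d)
y = X @ beta + 0.1 * rng.normal(size=n)

# MCAR: each entry observed independently with probability 0.7;
# missing entries are naively imputed by the constant 0
observed = rng.random((n, d)) < 0.7
X_imp = np.where(observed, X, 0.0)

# one pass of plain SGD on the zero-imputed squared loss
w = np.zeros(d)
lr = 0.01
for i in range(n):
    g = (X_imp[i] @ w - y[i]) * X_imp[i]   # pointwise gradient
    w -= lr * g

risk_sgd = np.mean((X_imp @ w - y) ** 2)
risk_zero = np.mean(y ** 2)                # risk of the trivial zero predictor
```

Despite 30% of the entries being zeroed out, the SGD predictor on imputed data is clearly better than the trivial predictor, illustrating why the naive strategy remains usable.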

We consider finite element approximations to the optimal constant for the Hardy inequality with exponent $p=2$ in bounded domains of dimension $n=1$ or $n \geq 3$. For finite element spaces of piecewise linear and continuous functions on a mesh of size $h$, we prove that the approximate Hardy constant converges to the optimal Hardy constant at a rate proportional to $1/| \log h |^2$. This result holds in dimension $n=1$, in any dimension $n \geq 3$ if the domain is the unit ball and the finite element discretization exploits the rotational symmetry of the problem, and in dimension $n=3$ for general finite element discretizations of the unit ball. In the first two cases, our estimates show excellent quantitative agreement with values of the discrete Hardy constant obtained computationally.
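To illustrate the one-dimensional case, one can assemble the $P_1$ stiffness matrix and the mass matrix weighted by $1/x^2$ on $(0,1)$ with $u(0)=0$, and take the smallest generalized eigenvalue as the discrete Hardy constant. This is a sketch with our own mesh, boundary treatment, and quadrature choices; consistent with the slow $1/|\log h|^2$ rate, the computed value sits well above the optimal constant $1/4$.

```python
import numpy as np

# P1 finite elements on (0,1): uniform mesh, u(0) = 0, u free at x = 1
N = 200
h = 1.0 / N
x = h * np.arange(1, N + 1)           # interior and right-boundary nodes

# stiffness matrix K_ij = int phi_i' phi_j'
K = (np.diag(np.r_[2.0 * np.ones(N - 1), 1.0])
     - np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)) / h

# weighted mass matrix M_ij = int phi_i phi_j / x^2, via 2-point Gauss
M = np.zeros((N, N))
gauss = np.array([-1.0, 1.0]) / np.sqrt(3.0)
nodes = np.r_[0.0, x]
for e in range(N):                    # element (nodes[e], nodes[e+1])
    a, b = nodes[e], nodes[e + 1]
    for g in gauss:
        xq = 0.5 * (a + b) + 0.5 * (b - a) * g
        wq = 0.5 * (b - a)
        vals = {e - 1: (b - xq) / h, e: (xq - a) / h}   # local hat values
        for i, vi in vals.items():
            for j, vj in vals.items():
                if i >= 0 and j >= 0:  # index -1 is the node fixed at x = 0
                    M[i, j] += wq * vi * vj / xq**2

# discrete Hardy constant: smallest eigenvalue of K u = lam M u
lam = np.linalg.eigvals(np.linalg.solve(M, K)).real.min()
```

Since the discrete Rayleigh quotient is taken over a subspace of the admissible functions, the computed constant lies above $1/4$ and decreases toward it only as the mesh is refined.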

We consider the problem of optimising the expected value of a loss functional over a nonlinear model class of functions, assuming that we only have access to realisations of the gradient of the loss. This is a classical task in statistics, machine learning and physics-informed machine learning. A straightforward solution is to replace the exact objective with a Monte Carlo estimate before employing standard first-order methods like gradient descent, which yields the classical stochastic gradient descent method. But replacing the true objective with an estimate incurs a ``generalisation error''. Rigorous bounds for this error typically require strong compactness and Lipschitz continuity assumptions, while providing only a very slow decay with sample size. We propose a different optimisation strategy relying on natural gradient descent, in which the true gradient is approximated in local linearisations of the model class via (quasi-)projections based on optimal sampling methods. Under classical assumptions on the loss and the nonlinear model class, we prove that this scheme converges almost surely and monotonically to a stationary point of the true objective, and we provide convergence rates.
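A toy caricature of the projection idea, in the degenerate case where the model class is itself linear so that the local linearisation coincides with the class and the (quasi-)projection reduces to an ordinary least-squares fit of the residual on each fresh sample. All names, features, and step-size choices here are our own illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# features of the (here globally linear) model class: f_theta = theta . phi
def phi(x):
    return np.stack([np.ones_like(x), x, np.sin(x)], axis=-1)

theta_true = np.array([1.0, -2.0, 0.5])
theta = np.zeros(3)

for it in range(50):
    x = rng.uniform(-np.pi, np.pi, size=64)            # fresh i.i.d. sample
    y = phi(x) @ theta_true + 0.05 * rng.normal(size=64)
    A = phi(x)
    # projection step: least-squares estimate of the residual in the
    # local linearisation, used directly as the update direction
    # (full step size, since the linearisation is exact here)
    theta += np.linalg.lstsq(A, y - A @ theta, rcond=None)[0]
```

Each step refits the residual on an independent sample, so the iterate fluctuates around the true parameter at the noise level rather than accumulating a fixed-sample generalisation error.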

For a sequence of random structures with $n$-element domains over a relational signature, we define its first-order (FO) complexity as a certain subset of the Banach space $\ell^{\infty}/c_0$. The well-known FO zero-one law and FO convergence law correspond to FO complexities equal to $\{0,1\}$ and a subset of $\mathbb{R}$, respectively. We present a hierarchy of FO complexity classes, introduce a stochastic FO reduction that allows one to transfer complexity results between different random structures, and use this tool to deduce several new logical limit laws for binomial random structures. Finally, we introduce a conditional distribution on graphs, subject to an FO sentence $\varphi$, that generalises certain well-known random graph models; we exhibit instances of this distribution for every complexity class, and prove that the set of all $\varphi$ for which the 0--1 law holds is not recursively enumerable.
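As a concrete instance of the FO zero-one law for the binomial random graph $G(n, 1/2)$: the probability of the first-order sentence "there exist three pairwise adjacent vertices" tends to 1. A quick simulation (our illustration, not part of the paper):

```python
import numpy as np

rng = np.random.default_rng(4)

def has_triangle(adj):
    # trace(A^3) counts closed walks of length 3; positive iff a triangle exists
    return np.trace(adj @ adj @ adj) > 0

n, trials = 30, 200
hits = 0
for _ in range(trials):
    upper = np.triu(rng.random((n, n)) < 0.5, 1)   # each edge with prob 1/2
    adj = (upper | upper.T).astype(float)
    hits += has_triangle(adj)
```

Already at $n = 30$ the expected number of triangles is in the hundreds, so every sampled graph satisfies the sentence.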

The joint bidiagonalization (JBD) process iteratively reduces a matrix pair $\{A,L\}$ to two bidiagonal forms simultaneously, which can be used for computing a partial generalized singular value decomposition (GSVD) of $\{A,L\}$. The process has a nested inner-outer iteration structure, where the inner iteration usually cannot be computed exactly. In this paper, we study the inaccurately computed inner iterations of JBD by first investigating the influence of the computational error of the inner iteration on the outer iteration, and then proposing a reorthogonalized JBD (rJBD) process that maintains the orthogonality of a part of the Lanczos vectors. An error analysis of the rJBD is carried out to build connections with Lanczos bidiagonalizations. The results are then used to investigate the convergence and accuracy of rJBD-based GSVD computation. It is shown that the accuracy of the computed GSVD components depends on the computing accuracy of the inner iterations and the condition number of $(A^T,L^T)^T$, while the convergence rate is not much affected. For practical JBD-based GSVD computations, our results provide a guideline for choosing a proper computing accuracy of the inner iterations in order to obtain approximate GSVD components with a desired accuracy. Numerical experiments confirm our theoretical results.
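The JBD process builds on the Golub-Kahan (Lanczos) bidiagonalization; as background, here is a sketch of that single-matrix process with optional full reorthogonalization, a simpler device than the partial reorthogonalization strategy of the rJBD and entirely our own illustration.

```python
import numpy as np

def gk_bidiag(A, k, reorth=True):
    """Golub-Kahan bidiagonalization: A V_k = U_{k+1} B_k, B_k lower bidiagonal."""
    m, n = A.shape
    U = np.zeros((m, k + 1))
    V = np.zeros((n, k))
    alpha = np.zeros(k)
    beta = np.zeros(k + 1)
    u = np.random.default_rng(2).normal(size=m)   # random starting vector
    beta[0] = np.linalg.norm(u)
    U[:, 0] = u / beta[0]
    v = np.zeros(n)
    for j in range(k):
        r = A.T @ U[:, j] - beta[j] * v
        if reorth:                      # full reorthogonalization against V
            r -= V[:, :j] @ (V[:, :j].T @ r)
        alpha[j] = np.linalg.norm(r)
        v = r / alpha[j]
        V[:, j] = v
        p = A @ v - alpha[j] * U[:, j]
        if reorth:                      # and against U
            p -= U[:, :j + 1] @ (U[:, :j + 1].T @ p)
        beta[j + 1] = np.linalg.norm(p)
        U[:, j + 1] = p / beta[j + 1]
    return U, V, alpha, beta

A = np.random.default_rng(3).normal(size=(60, 30))
U, V, alpha, beta = gk_bidiag(A, 10)
B = np.zeros((11, 10))                  # assemble the lower bidiagonal factor
B[np.arange(10), np.arange(10)] = alpha
B[np.arange(1, 11), np.arange(10)] = beta[1:]
```

With reorthogonalization on, the computed bases stay orthonormal to machine precision and the three-term recurrence $A V_k = U_{k+1} B_k$ holds.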

A rectangulation is a decomposition of a rectangle into finitely many rectangles. Via natural equivalence relations, rectangulations can be seen as combinatorial objects with a rich structure, with links to lattice congruences, flip graphs, polytopes, lattice paths, Hopf algebras, etc. In this paper, we first revisit the structure of the respective equivalence classes: weak rectangulations that preserve rectangle-segment adjacencies, and strong rectangulations that preserve rectangle-rectangle adjacencies. We thoroughly investigate posets defined by adjacency in rectangulations of both kinds, and unify and simplify known bijections between rectangulations and permutation classes. This yields a uniform treatment of mappings between permutations and rectangulations that unifies the results from earlier contributions, and emphasizes parallelism and differences between the weak and the strong cases. Then, we consider the special case of guillotine rectangulations, and prove that they can be characterized - under all known mappings between permutations and rectangulations - by avoidance of two mesh patterns that correspond to "windmills" in rectangulations. This yields new permutation classes in bijection with weak guillotine rectangulations, and the first known permutation class in bijection with strong guillotine rectangulations. Finally, we address enumerative issues and prove asymptotic bounds for several families of strong rectangulations.
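To make the guillotine property concrete, here is a hypothetical recursive check (our sketch, unrelated to the paper's pattern-avoidance characterization): a rectangulation is guillotine iff some full horizontal or vertical cut splits it into two guillotine pieces, and the five-rectangle "windmill" is the basic obstruction.

```python
def is_guillotine(rects):
    """rects: list of (x1, y1, x2, y2) rectangles tiling their bounding box."""
    if len(rects) <= 1:
        return True
    for axis in (0, 1):                      # 0: vertical cuts, 1: horizontal cuts
        lo = min(r[axis] for r in rects)
        hi = max(r[axis + 2] for r in rects)
        for c in {r[axis] for r in rects} | {r[axis + 2] for r in rects}:
            if c in (lo, hi):
                continue
            if any(r[axis] < c < r[axis + 2] for r in rects):
                continue                     # some rectangle straddles the line
            side1 = [r for r in rects if r[axis + 2] <= c]
            side2 = [r for r in rects if r[axis] >= c]
            if is_guillotine(side1) and is_guillotine(side2):
                return True
    return False

# a 2x2 grid is guillotine; the 5-rectangle windmill is not
grid = [(0, 0, 1, 1), (1, 0, 2, 1), (0, 1, 1, 2), (1, 1, 2, 2)]
windmill = [(0, 0, 2, 1), (2, 0, 3, 2), (1, 2, 3, 3), (0, 1, 1, 3), (1, 1, 2, 2)]
```

In the windmill every candidate cut line is straddled by one of the four outer rectangles, which is exactly the configuration the mesh patterns of the paper are designed to detect in the corresponding permutations.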
