
The $L_p$-discrepancy is a quantitative measure for the irregularity of distribution of an $N$-element point set in the $d$-dimensional unit cube, which is closely related to the worst-case error of quasi-Monte Carlo algorithms for numerical integration. Its inverse for dimension $d$ and error threshold $\varepsilon \in (0,1)$ is the minimal number of points in $[0,1)^d$ for which the minimal normalized $L_p$-discrepancy is less than or equal to $\varepsilon$. It is well known that the inverse of the $L_2$-discrepancy grows exponentially fast with the dimension $d$, i.e., we have the curse of dimensionality, whereas the inverse of the $L_{\infty}$-discrepancy depends exactly linearly on $d$. The behavior of the inverse of the $L_p$-discrepancy for general $p \not\in \{2,\infty\}$ has been an open problem for many years. In this paper we show that the $L_p$-discrepancy suffers from the curse of dimensionality for all $p \in (1,2]$ of the form $p=2 \ell/(2 \ell -1)$ with $\ell \in \mathbb{N}$. This result follows from a more general result that we show for the worst-case error of numerical integration in an anchored Sobolev space with anchor 0 of functions that are once differentiable in each variable and whose first derivative has finite $L_q$-norm, where $q$ is an even positive integer satisfying $1/p+1/q=1$.
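As an illustration (not taken from the paper), the $L_2$ star discrepancy of a finite point set can be evaluated in closed form via Warnock's formula, which expands the squared $L_2$-norm of the local discrepancy function into sums of products over coordinates:

```python
import numpy as np

def l2_star_discrepancy(points):
    """L2 star discrepancy of an N x d point set in [0,1)^d via Warnock's formula:
    D^2 = 3^{-d} - (2/N) sum_i prod_k (1 - x_ik^2)/2
          + (1/N^2) sum_{i,j} prod_k (1 - max(x_ik, x_jk))."""
    pts = np.asarray(points, dtype=float)
    n, d = pts.shape
    term1 = 3.0 ** (-d)
    term2 = (2.0 / n) * np.sum(np.prod((1.0 - pts ** 2) / 2.0, axis=1))
    # pairwise products over coordinates of 1 - max(x_ik, x_jk)
    maxs = np.maximum(pts[:, None, :], pts[None, :, :])
    term3 = np.sum(np.prod(1.0 - maxs, axis=2)) / n ** 2
    return float(np.sqrt(max(term1 - term2 + term3, 0.0)))
```

For example, the single point $\{0\}$ in dimension one has squared discrepancy $\int_0^1 (1-t)^2\,dt = 1/3$, which the formula reproduces.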

Related content

The curse of dimensionality refers to various phenomena that arise when analyzing and organizing data in high-dimensional spaces and that do not occur in low-dimensional settings, such as the three-dimensional physical space of everyday experience.

We investigate the error of the Euler scheme in the case when the right-hand side function of the underlying ODE satisfies nonstandard assumptions such as a local one-sided Lipschitz condition and local H\"older continuity. Moreover, we consider two cases regarding information availability: exact and noisy evaluations of the right-hand side function. An optimality analysis of the Euler scheme is also provided. Finally, we present the results of some numerical experiments.
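A minimal sketch of the two information settings contrasted here, assuming a generic scalar ODE (this is an illustration of the explicit Euler scheme, not the paper's specific error analysis): in the noisy case each evaluation of the right-hand side is perturbed by a bounded disturbance of size at most $\delta$.

```python
import random

def euler(f, t0, y0, t_end, n):
    """Explicit Euler for y' = f(t, y) with n uniform steps; returns y(t_end)."""
    h = (t_end - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

def euler_noisy(f, t0, y0, t_end, n, delta, seed=0):
    """Same scheme, but each evaluation of f carries a bounded perturbation
    of size at most delta, mimicking the noisy-information setting."""
    rng = random.Random(seed)
    h = (t_end - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y += h * (f(t, y) + rng.uniform(-delta, delta))
        t += h
    return y
```

For $y'=y$, $y(0)=1$, both variants approximate $e$ at $t=1$, with the noisy variant's extra error controlled by $\delta$.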

We study the problem of maximizing a non-negative monotone $k$-submodular function $f$ under a knapsack constraint, where a $k$-submodular function is a natural generalization of a submodular function to $k$ dimensions. We present a deterministic $(\frac12-\frac{1}{2e})\approx 0.316$-approximation algorithm that evaluates $f$ $O(n^4k^3)$ times, based on the result of Sviridenko (2004) on submodular knapsack maximization.
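For intuition (this is the classical cost-effectiveness greedy for ordinary, one-dimensional submodular coverage under a knapsack constraint, not the paper's $k$-submodular algorithm): repeatedly pick the affordable set with the best marginal gain per unit cost.

```python
def greedy_knapsack_coverage(sets, costs, budget):
    """Cost-effectiveness greedy for (unweighted) coverage under a knapsack
    constraint. sets: name -> set of elements; costs: name -> positive cost."""
    chosen, covered, spent = [], set(), 0.0
    remaining = dict(sets)
    while remaining:
        best, best_ratio = None, 0.0
        for name, s in remaining.items():
            gain = len(s - covered)   # marginal coverage gain
            if gain > 0 and spent + costs[name] <= budget:
                ratio = gain / costs[name]
                if ratio > best_ratio:
                    best, best_ratio = name, ratio
        if best is None:
            break
        chosen.append(best)
        covered |= remaining.pop(best)
        spent += costs[best]
    return chosen, covered, spent
```

Sviridenko's algorithm strengthens this greedy with partial enumeration to obtain a $(1-1/e)$-approximation for the one-dimensional case.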

This paper concerns an expansion of first-order Belnap-Dunn logic which is called $\mathrm{BD}^{\supset,\mathsf{F}}$. Its connectives and quantifiers are all familiar from classical logic and its logical consequence relation is very closely connected to that of classical logic. Results that convey this close connection are established. Fifteen classical laws of logical equivalence are used to distinguish $\mathrm{BD}^{\supset,\mathsf{F}}$ from all other four-valued logics with the same connectives and quantifiers whose logical consequence relation is as closely connected to the logical consequence relation of classical logic. It is shown that several interesting non-classical connectives added to Belnap-Dunn logic in its expansions that have been studied earlier are definable in $\mathrm{BD}^{\supset,\mathsf{F}}$. It is also established that $\mathrm{BD}^{\supset,\mathsf{F}}$ is both paraconsistent and paracomplete. Moreover, a sequent calculus proof system that is sound and complete with respect to the logical consequence relation of $\mathrm{BD}^{\supset,\mathsf{F}}$ is presented.
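The four truth values of Belnap-Dunn logic can be encoded as pairs (told-true, told-false), with $\mathbf{T}=(1,0)$, $\mathbf{F}=(0,1)$, $\mathbf{B}=(1,1)$, $\mathbf{N}=(0,0)$. The following sketch covers only the basic propositional connectives of the underlying Belnap-Dunn logic, not the additional $\supset$ and $\mathsf{F}$ of $\mathrm{BD}^{\supset,\mathsf{F}}$; it verifies, e.g., that the De Morgan laws (among the classical equivalences the paper discusses) hold for these truth tables.

```python
from itertools import product

# truth values as (told-true, told-false) pairs
T, F, B, N = (True, False), (False, True), (True, True), (False, False)
VALUES = [T, F, B, N]

def neg(v):
    """Negation swaps the two components."""
    return (v[1], v[0])

def conj(u, v):
    """Told true iff both told true; told false iff at least one told false."""
    return (u[0] and v[0], u[1] or v[1])

def disj(u, v):
    """Dual of conjunction."""
    return (u[0] or v[0], u[1] and v[1])
```

Under this encoding, double negation and both De Morgan laws hold for all 16 value pairs, while, e.g., $\mathbf{B}\wedge\mathbf{N}$ evaluates to $\mathbf{F}$.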

We prove lower bounds for the randomized approximation of the embedding $\ell_1^m \rightarrow \ell_\infty^m$ based on algorithms that use arbitrary linear (hence non-adaptive) information provided by a (randomized) measurement matrix $N \in \mathbb{R}^{n \times m}$. These lower bounds reflect the increasing difficulty of the problem as $m \to \infty$, namely a $\sqrt{\log m}$ term in the complexity $n$. This result implies that non-compact operators between arbitrary Banach spaces are not approximable using non-adaptive Monte Carlo methods. We also compare these lower bounds for non-adaptive methods with upper bounds based on adaptive, randomized methods for recovery, for which the complexity $n$ exhibits only a $(\log\log m)$-dependence. In doing so we give an example of linear problems where the error of adaptive versus non-adaptive Monte Carlo methods shows a gap of order $n^{1/2} (\log n)^{-1/2}$.

We consider finite element approximations to the optimal constant for the Hardy inequality with exponent $p=2$ in bounded domains of dimension $n=1$ or $n\geq 3$. For finite element spaces of piecewise linear and continuous functions on a mesh of size $h$, we prove that the approximate Hardy constant, $S_h^n$, converges to the optimal Hardy constant $S^n$ no slower than $O(1/\vert \log h \vert)$. We also show that the convergence is no faster than $O(1/\vert \log h \vert^2)$ if $n=1$, or if $n\geq 3$ and the domain is the unit ball and the finite element discretization exploits the rotational symmetry of the problem. Our estimates are compared to exact values for $S_h^n$ obtained computationally.

In this paper, we derive a variant of Taylor's theorem that yields a new, minimized remainder. For a given function $f$ defined on the interval $[a,b]$, this formula is derived by introducing a linear combination of $f'$ computed at $n+1$ equally spaced points in $[a,b]$, together with $f''(a)$ and $f''(b)$. We then consider two classical applications of this Taylor-like expansion: the interpolation error and the numerical quadrature formula. We show that this approach improves both the Lagrange $P_2$-interpolation error estimate and the error bound of Simpson's rule in numerical integration.

The monotonicity of the discrete Laplacian, i.e., the inverse positivity of the stiffness matrix, implies a discrete maximum principle, which in general does not hold for high order accurate schemes on unstructured meshes. On the other hand, it is possible to construct high order accurate monotone schemes on structured meshes. All previously known high order accurate inverse positive schemes are fourth order accurate, with a stiffness matrix that is either an M-matrix or a product of two M-matrices. For the $Q^3$ spectral element method for the two-dimensional Laplacian, we prove that its stiffness matrix is a product of four M-matrices and is thus monotone. Such a scheme can be regarded as a fifth order accurate finite difference scheme.
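The key property can be checked numerically for small matrices (an illustration of inverse positivity, not the paper's $Q^3$ factorization): the standard second-order 3-point Laplacian is an M-matrix, so its inverse is entrywise nonnegative.

```python
import numpy as np

def is_inverse_positive(A, tol=1e-12):
    """Entrywise nonnegativity of A^{-1}: inverse positivity / monotonicity."""
    return bool((np.linalg.inv(A) >= -tol).all())

# classical M-matrix: 3-point Laplacian stencil (-1, 2, -1) on a uniform grid
n = 20
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
```

A product of M-matrices inherits this property, since the inverse of a product of entrywise-nonnegative-inverse factors is a product of nonnegative matrices.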

Let $F_q$ be the finite field with $q$ elements and $F_q[x_1,\ldots, x_n]$ the ring of polynomials in $n$ variables over $F_q$. In this paper we consider permutation polynomials and local permutation polynomials over $F_q[x_1,\ldots, x_n]$, which define interesting generalizations of permutations over finite fields. We are able to construct permutation polynomials in $F_q[x_1,\ldots, x_n]$ of maximum degree $n(q-1)-1$ and local permutation polynomials in $F_q[x_1,\ldots, x_n]$ of maximum degree $n(q-2)$ when $q>3$, extending previous results.
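The defining properties can be tested by brute force for small prime $q$ (an illustrative check, not the paper's constructions): a univariate polynomial is a permutation polynomial if it permutes $F_q$, and a bivariate polynomial is a local permutation polynomial if every section obtained by fixing one variable permutes $F_q$.

```python
def is_permutation_poly(f, q):
    """Does x -> f(x) mod q permute F_q?  (q assumed prime)"""
    return len({f(x) % q for x in range(q)}) == q

def is_local_permutation_poly2(f, q):
    """Bivariate local permutation polynomial: every section x -> f(x, a)
    and y -> f(a, y) is a permutation of F_q."""
    return (all(is_permutation_poly(lambda x, a=a: f(x, a), q) for a in range(q))
            and all(is_permutation_poly(lambda y, a=a: f(a, y), q) for a in range(q)))
```

For example, $x^3$ permutes $F_5$ (since $\gcd(3, 4) = 1$) while $x^2$ does not, and $x+y$ is a local permutation polynomial over any $F_q$ while $xy$ is not (the section $y=0$ is constant).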

We provide a framework for the numerical approximation of distributed optimal control problems, based on least-squares finite element methods. Our proposed method simultaneously solves the state and adjoint equations and is $\inf$--$\sup$ stable for any choice of conforming discretization spaces. A reliable and efficient a posteriori error estimator is derived for problems where box constraints are imposed on the control. It can be localized and therefore used to steer an adaptive algorithm. For unconstrained optimal control problems, i.e., the set of controls being a Hilbert space, we obtain a coercive least-squares method and, in particular, quasi-optimality for any choice of discrete approximation space. For constrained problems we derive and analyze a variational inequality where the PDE part is tackled by least-squares finite element methods. We show that the abstract framework can be applied to a wide range of problems, including scalar second-order PDEs, the Stokes problem, and parabolic problems on space-time domains. Numerical examples for some selected problems are presented.

Multivariate histograms are difficult to construct due to the curse of dimensionality. Motivated by $k$-d trees in computer science, we show how to construct an efficient data-adaptive partition of Euclidean space that possesses the following two properties: With high confidence the distribution from which the data are generated is close to uniform on each rectangle of the partition; and despite the data-dependent construction we can give guaranteed finite sample simultaneous confidence intervals for the probabilities (and hence for the average densities) of each rectangle in the partition. This partition will automatically adapt to the sizes of the regions where the distribution is close to uniform. The methodology produces confidence intervals whose widths depend only on the probability content of the rectangles and not on the dimensionality of the space, thus avoiding the curse of dimensionality. Moreover, the widths essentially match the optimal widths in the univariate setting. The simultaneous validity of the confidence intervals allows one to use this construction, which we call {\sl Beta-trees}, for various data-analytic purposes. We illustrate this by using Beta-trees for visualizing data and for multivariate mode-hunting.
