
We give a necessary and sufficient condition for an infinite collection of axis-parallel boxes in $\mathbb{R}^{d}$ to be pierceable by finitely many axis-parallel $k$-flats, where $0 \leq k < d$. We also consider colorful generalizations of the above result and establish their feasibility. The problem considered in this paper is an infinite variant of the Hadwiger-Debrunner $(p,q)$-problem.

Related content

The computation of a matrix function $f(A)$ is an important task in scientific computing, appearing in machine learning, network analysis and the solution of partial differential equations. In this work, we use only matrix-vector products $x\mapsto Ax$ to approximate functions of sparse matrices and of matrices with similar structure, such as sparse matrices $A$ themselves or matrices with a decay property similar to that of matrix functions. We show that when $A$ is a sparse matrix with an unknown sparsity pattern, techniques from compressed sensing can be used under natural assumptions. Moreover, if $A$ is a banded matrix, then certain deterministic matrix-vector products can efficiently recover the large entries of $f(A)$. We describe an algorithm for each of the two cases and give an error analysis based on decay bounds for the entries of $f(A)$. We finish with numerical experiments showing the accuracy of our algorithms.
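As a rough illustration of the banded case, the following Python sketch (a simplification, not the paper's algorithm) recovers an approximately banded $f(A)$ from deterministic probing products: with column spacing $s = 2b+1$, the bands of the selected columns do not overlap, so $s$ products suffice. The function name, the bandwidth, and the use of a dense `expm` standing in for the black-box matrix-vector oracle are all assumptions made for the demo.

```python
import numpy as np
from scipy.linalg import expm

def probe_banded_fA(matvec, n, bandwidth):
    """Recover the banded part of f(A) from deterministic probing matvecs.

    With spacing s = 2*bandwidth + 1, the probing vector v_c sums the unit
    vectors e_j with j = c (mod s); the bands of those columns are disjoint,
    so each entry of f(A) v_c comes from a single column.
    """
    s = 2 * bandwidth + 1
    F = np.zeros((n, n))
    for c in range(s):
        v = np.zeros(n)
        v[c::s] = 1.0
        y = matvec(v)                     # one black-box product f(A) v
        for j in range(c, n, s):          # distribute y back to column j's band
            lo, hi = max(0, j - bandwidth), min(n, j + bandwidth + 1)
            F[lo:hi, j] = y[lo:hi]
    return F

# Toy check: f = exp and A tridiagonal, so f(A) decays away from the diagonal.
n = 200
A = np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
fA = expm(A)                              # stands in for the matvec oracle
approx = probe_banded_fA(lambda v: fA @ v, n, bandwidth=8)
print(np.linalg.norm(approx - fA) / np.linalg.norm(fA))  # small relative error
```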

We consider the problem of approximating the solution to $A(\mu) x(\mu) = b$ for many different values of the parameter $\mu$. Here we assume $A(\mu)$ is large, sparse, and nonsingular, with a nonlinear dependence on $\mu$. Our method is based on a companion linearization derived from an accurate Chebyshev interpolation of $A(\mu)$ on the interval $[-a,a]$, $a \in \mathbb{R}$. The solution to the linearization is approximated in a preconditioned BiCG setting for shifted systems, where the Krylov basis matrix is formed once. This leads to a short-term recurrence method in which one execution of the algorithm produces the approximation to $x(\mu)$ for many different values of the parameter $\mu \in [-a,a]$ simultaneously. In particular, this work proposes one algorithm that applies a shift-and-invert preconditioner exactly and another that applies the preconditioner inexactly. The competitiveness of the algorithms is illustrated with large-scale problems arising from a finite element discretization of a Helmholtz equation with a parameterized material coefficient. The software used in the simulations is publicly available online, and thus all our experiments are reproducible.
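The companion linearization and the short-term BiCG recurrence are beyond a short snippet, but the Chebyshev-interpolation ingredient is easy to sketch. The hypothetical Python code below interpolates a parameter-dependent matrix at first-kind Chebyshev nodes and then, unlike the paper's method, simply solves each system separately per parameter; the toy matrices `K`, `M` and all names are invented for the demo.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

def chebyshev_matrix_interpolant(A_of_mu, a, N):
    """Build A(mu) ~ sum_k C_k T_k(mu/a) on [-a, a] from samples of A
    at N Chebyshev nodes of the first kind (assumes N >= 2)."""
    theta = np.pi * (np.arange(N) + 0.5) / N
    samples = [A_of_mu(a * np.cos(th)) for th in theta]
    coeffs = []
    for k in range(N):
        Ck = sum(np.cos(k * theta[j]) * samples[j] for j in range(N)) * (2.0 / N)
        coeffs.append(Ck / 2.0 if k == 0 else Ck)   # halve the k = 0 term
    def evaluate(mu):
        t_prev, t_cur = 1.0, mu / a                 # T_0 and T_1 by recurrence
        out = coeffs[0] + coeffs[1] * t_cur
        for k in range(2, N):
            t_prev, t_cur = t_cur, 2.0 * (mu / a) * t_cur - t_prev
            out = out + coeffs[k] * t_cur
        return out
    return evaluate

# Toy nonlinear dependence: A(mu) = K + sin(mu) * M with sparse K, M.
n = 500
K = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
M = diags([1.0], [0], shape=(n, n), format="csc")
A_hat = chebyshev_matrix_interpolant(lambda mu: K + np.sin(mu) * M, a=1.0, N=16)
b = np.ones(n)
for mu in [-0.9, 0.0, 0.7]:
    x = spsolve(A_hat(mu).tocsc(), b)               # naive per-parameter solve
    print(mu, np.linalg.norm((K + np.sin(mu) * M) @ x - b))
```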

Let $(X_t)_{t \ge 0}$ be the solution of the stochastic differential equation $$dX_t = b(X_t) dt+A dZ_t, \quad X_{0}=x,$$ where $b: \mathbb{R}^d \rightarrow \mathbb R^d$ is a Lipschitz function, $A \in \mathbb R^{d \times d}$ is a positive definite matrix, $(Z_t)_{t\geq 0}$ is a $d$-dimensional rotationally invariant $\alpha$-stable L\'evy process with $\alpha \in (1,2)$ and $x\in\mathbb{R}^{d}$. We use two Euler-Maruyama schemes with decreasing step sizes $\Gamma = (\gamma_n)_{n\in \mathbb{N}}$ to approximate the invariant measure of $(X_t)_{t \ge 0}$: one with i.i.d. $\alpha$-stable distributed random variables as its innovations and the other with i.i.d. Pareto distributed random variables as its innovations. We study the convergence rate of these two approximation schemes in the Wasserstein-1 distance. For the first scheme, when the function $b$ is Lipschitz and satisfies a certain dissipation condition, we show that the convergence rate is $\gamma^{1/\alpha}_n$. Under an additional assumption on the second order directional derivatives of $b$, this convergence rate can be improved to $\gamma^{1+\frac 1 {\alpha}-\frac{1}{\kappa}}_n$ for any $\kappa \in [1,\alpha)$. For the second scheme, when the function $b$ is twice continuously differentiable, we obtain a convergence rate of $\gamma^{\frac{2-\alpha}{\alpha}}_n$. We show that the rate $\gamma^{\frac{2-\alpha}{\alpha}}_n$ is optimal for the one dimensional stable Ornstein-Uhlenbeck process. Our theorems indicate that the recent remarkable result about the unadjusted Langevin algorithm with additive innovations can be extended to the SDEs driven by an $\alpha$-stable L\'evy process and the corresponding convergence rate has a similar behaviour. Compared with the previous result, we have relaxed the second order differentiability condition to the Lipschitz condition for the first scheme.
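A minimal one-dimensional sketch of the first scheme may help fix ideas; it uses `scipy.stats.levy_stable`, the step-size schedule $\gamma_n = n^{-1/2}$, and the hypothetical dissipative Lipschitz drift $b(x) = -x$, for which the limit is a stable Ornstein-Uhlenbeck process. All sizes are illustrative.

```python
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(0)
alpha, n_steps, n_chains = 1.5, 2_000, 500
x = np.zeros(n_chains)                    # independent chains run in parallel
for n in range(1, n_steps + 1):
    gamma = n ** -0.5                     # decreasing step sizes
    # symmetric alpha-stable innovations scaled by gamma^{1/alpha};
    # the second scheme would draw Pareto innovations here instead
    xi = levy_stable.rvs(alpha, 0.0, size=n_chains, random_state=rng)
    x = x + gamma * (-x) + gamma ** (1.0 / alpha) * xi
print("empirical median / IQR across chains:",
      np.median(x), np.subtract(*np.percentile(x, [75, 25])))
```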

We propose a new method for estimating the minimizer $\boldsymbol{x}^*$ and the minimum value $f^*$ of a smooth and strongly convex regression function $f$ from observations contaminated by random noise. Our estimator $\boldsymbol{z}_n$ of the minimizer $\boldsymbol{x}^*$ is based on a version of the projected gradient descent with the gradient estimated by a regularized local polynomial algorithm. Next, we propose a two-stage procedure for estimation of the minimum value $f^*$ of the regression function $f$. At the first stage, we construct an accurate enough estimator of $\boldsymbol{x}^*$, which can be, for example, $\boldsymbol{z}_n$. At the second stage, we estimate the function value at the point obtained in the first stage using a rate optimal nonparametric procedure. We derive non-asymptotic upper bounds for the quadratic risk and optimization error of $\boldsymbol{z}_n$, and for the risk of estimating $f^*$. We establish minimax lower bounds showing that, under certain choice of parameters, the proposed algorithms achieve the minimax optimal rates of convergence on the class of smooth and strongly convex functions.
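As a toy illustration of the first stage, the sketch below runs projected gradient descent with the gradient estimated by a plain local least-squares linear fit, a simplification of the regularized local polynomial estimator used in the paper; the quadratic target, noise level, ball radius, and step sizes are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
f = lambda x: np.sum((x - 0.3) ** 2)          # hypothetical strongly convex target
noisy_f = lambda x: f(x) + 0.01 * rng.standard_normal()

def local_linear_gradient(x, h=0.05, m=40):
    """Estimate grad f(x) from a least-squares linear fit to noisy
    evaluations at m random points on a sphere of radius h around x."""
    U = rng.standard_normal((m, x.size))
    U /= np.linalg.norm(U, axis=1, keepdims=True)
    X = h * U
    y = np.array([noisy_f(x + dx) for dx in X])
    D = np.hstack([np.ones((m, 1)), X])       # intercept + linear part
    coef, *_ = np.linalg.lstsq(D, y, rcond=None)
    return coef[1:]                           # the slope estimates the gradient

def project(x, radius=1.0):                   # projection onto a Euclidean ball
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else radius * x / nrm

z = np.zeros(2)
for t in range(1, 200):
    z = project(z - (0.5 / t ** 0.5) * local_linear_gradient(z))
print("estimate of the minimizer:", z)        # true minimizer is (0.3, 0.3)
```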

We consider the problem of estimating an expectation $\mathbb{E}\left[ h(W)\right]$ by quasi-Monte Carlo (QMC) methods, where $h$ is an unbounded smooth function on $\mathbb{R}^d$ and $W$ is a $d$-dimensional standard normal random vector. To study rates of convergence for QMC on unbounded integrands, we use a smoothed projection operator to project the output of $W$ to a bounded region, which differs from the strategy of avoiding the singularities along the boundary of the unit cube $[0,1]^d$ in 10.1137/S00363. The error is then bounded by the quadrature error of the transformed integrand and the projection error. If the function $h(\boldsymbol{x})$ and its mixed partial derivatives do not grow too fast as the Euclidean norm $|\boldsymbol{x}|$ goes to infinity, we obtain an error rate of $O(n^{-1+\epsilon})$ for QMC and randomized QMC (RQMC) with a sample size $n$ and an arbitrarily small $\epsilon>0$. However, the rate turns out to be $O(n^{-1+2M+\epsilon})$ if the functions grow exponentially with a rate of $O(\exp\{M|\boldsymbol{x}|^2\})$ for a constant $M\in(0,1/2)$. Surprisingly, we find that using importance sampling with a $t$ distribution as the proposal can improve the root mean squared error of RQMC from $O(n^{-1+2M+\epsilon})$ to $O(n^{-3/2+\epsilon})$ for any $M\in(0,1/2)$.
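The importance-sampling construction is easy to sketch: map scrambled Sobol' points through the inverse CDF of a heavier-tailed $t$ proposal and reweight by the density ratio. In the hypothetical Python sketch below, the integrand grows like $\exp\{M|\boldsymbol{x}|^2\}$ with $M = 0.3$, so the exact value $(1-2M)^{-d/2} = 6.25$ is available for checking; the degrees of freedom and sample size are illustrative choices.

```python
import numpy as np
from scipy.stats import norm, t, qmc

# RQMC with importance sampling for E[h(W)], W ~ N(0, I_d): transform
# scrambled Sobol' points by the t inverse CDF (independent coordinates)
# and reweight by the Gaussian-to-t density ratio.
d, nu, m = 4, 5.0, 2 ** 14
M = 0.3                                              # growth constant, M < 1/2
h = lambda x: np.exp(M * np.sum(x ** 2, axis=1))     # exact mean: (1 - 2M)^{-d/2}
u = qmc.Sobol(d, scramble=True, seed=7).random(m)
x = t.ppf(u, df=nu)                                  # sample from the t proposal
log_w = norm.logpdf(x).sum(axis=1) - t.logpdf(x, df=nu).sum(axis=1)
print("IS-RQMC estimate:", np.mean(h(x) * np.exp(log_w)), "(exact: 6.25)")
```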

We study the numerical integration of functions from isotropic Sobolev spaces $W_p^s([0,1]^d)$ using finitely many function evaluations within randomized algorithms, aiming for the smallest possible probabilistic error guarantee $\varepsilon > 0$ at confidence level $1-\delta \in (0,1)$. For spaces consisting of continuous functions, non-linear Monte Carlo methods with optimal confidence properties have already been known, and in a few cases even linear methods succeed in that respect. In this paper we propose a new method called stratified control variates (SCV) and use it to show that linear methods already achieve optimal probabilistic error rates in the high-smoothness regime, without the need to adjust algorithmic parameters to the uncertainty $\delta$. We also analyse a version of SCV in the low-smoothness regime, where $W_p^s([0,1]^d)$ may contain functions with singularities. Here, we observe a polynomial dependence of the error on $\delta^{-1}$, which cannot be avoided for linear methods. This is worse than what is known to be possible using non-linear algorithms, where only a logarithmic dependence on $\delta^{-1}$ occurs if the algorithm is tuned for a specific value of $\delta$.
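A one-dimensional caricature of the SCV idea (a sketch under simplifying assumptions, not the paper's method): within each stratum, the linear interpolant of the integrand at the stratum endpoints serves as a control variate with a known integral, and a single uniform sample per stratum corrects the residual. All names and parameters are hypothetical.

```python
import numpy as np

def scv_integrate(f, m, rng):
    """Stratified control variates on [0, 1]: per stratum, integrate the
    linear interpolant exactly (trapezoid) and correct by one uniform
    sample of the residual f - interpolant."""
    edges = np.linspace(0.0, 1.0, m + 1)
    a, b = edges[:-1], edges[1:]
    fa, fb = f(a), f(b)
    trapezoid = 0.5 * (fa + fb) * (b - a)        # known integral of the CV
    u = rng.uniform(a, b)                        # one sample per stratum
    g_u = fa + (fb - fa) * (u - a) / (b - a)     # CV value at the sample
    return np.sum(trapezoid + (b - a) * (f(u) - g_u))

rng = np.random.default_rng(3)
f = lambda x: np.sin(2 * np.pi * x) ** 2 + x     # smooth test integrand
print(scv_integrate(f, m=256, rng=rng), "(exact: 1.0)")
```

The estimator is unbiased for any $m$, and for smooth $f$ the residual within each stratum is small, which is what drives the improved error rates.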

We expound on some known lower bounds on the quadratic Wasserstein distance between random vectors in $\mathbb{R}^n$, with an emphasis on affine transformations that have been used in manifold learning of data in Wasserstein space. In particular, we give concrete lower bounds for rotated copies of random vectors in $\mathbb{R}^2$ with uncorrelated components by computing the Bures metric between the covariance matrices. We also derive upper bounds for compositions of affine maps, which yield a fruitful variety of diffeomorphisms applied to an initial data measure. We apply these bounds to various distributions, including those lying on a 1-dimensional manifold in $\mathbb{R}^2$, and illustrate the quality of the bounds. Finally, we give a framework for mimicking handwritten digit or alphabet datasets that can be applied in a manifold learning framework.
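The covariance-based (Gelbrich) bound $W_2^2 \geq |m_1 - m_2|^2 + B^2(\Sigma_1, \Sigma_2)$, with $B$ the Bures metric, is straightforward to evaluate; the sketch below computes it for a rotated copy of a random vector in $\mathbb{R}^2$ with uncorrelated components. The helper names are invented for the demo.

```python
import numpy as np
from scipy.linalg import sqrtm

def bures(S1, S2):
    """Bures distance between covariance matrices S1, S2."""
    r = sqrtm(S1)
    cross = np.real(np.trace(sqrtm(r @ S2 @ r)))   # sqrtm may leave tiny imaginary parts
    return np.sqrt(max(np.trace(S1) + np.trace(S2) - 2.0 * cross, 0.0))

def w2_lower_bound(m1, S1, m2, S2):
    """Gelbrich bound: W_2^2 >= |m1 - m2|^2 + Bures(S1, S2)^2."""
    return np.sqrt(np.sum((m1 - m2) ** 2) + bures(S1, S2) ** 2)

# Rotated copy of a random vector in R^2 with uncorrelated components.
theta = np.pi / 4
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
S = np.diag([4.0, 1.0])                            # uncorrelated, unequal variances
print("W2 lower bound:", w2_lower_bound(np.zeros(2), S, np.zeros(2), R @ S @ R.T))
```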

We consider Gibbs distributions, which are families of probability distributions over a discrete space $\Omega$ with probability mass function of the form $\mu^\Omega_\beta(\omega) \propto e^{\beta H(\omega)}$ for $\beta$ in an interval $[\beta_{\min}, \beta_{\max}]$ and $H( \omega ) \in \{0 \} \cup [1, n]$. The partition function is the normalization factor $Z(\beta)=\sum_{\omega \in\Omega}e^{\beta H(\omega)}$. Two important parameters of these distributions are the log partition ratio $q = \log \tfrac{Z(\beta_{\max})}{Z(\beta_{\min})}$ and the counts $c_x = |H^{-1}(x)|$. These are correlated with system parameters in a number of physical applications and sampling algorithms. Our first main result is to estimate the counts $c_x$ using roughly $\tilde O( \frac{q}{\varepsilon^2})$ samples for general Gibbs distributions and $\tilde O( \frac{n^2}{\varepsilon^2} )$ samples for integer-valued distributions (ignoring some second-order terms and parameters), and we show this is optimal up to logarithmic factors. We illustrate with improved algorithms for counting connected subgraphs, independent sets, and perfect matchings. As a key subroutine, we also develop algorithms to compute the partition function $Z$ using $\tilde O(\frac{q}{\varepsilon^2})$ samples for general Gibbs distributions and using $\tilde O(\frac{n^2}{\varepsilon^2})$ samples for integer-valued distributions.
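The telescoping estimator that underlies such sample-complexity questions is easy to sketch: $Z(\beta_{i+1})/Z(\beta_i) = \mathbb{E}_{\beta_i}\left[e^{(\beta_{i+1}-\beta_i)H}\right]$, multiplied along a schedule of inverse temperatures. The toy below (not the paper's optimized algorithm) takes $\Omega = \{0,1\}^n$ with $H$ the number of ones, so exact Gibbs sampling and the exact value of $q$ are both available for checking.

```python
import numpy as np

# Toy Gibbs family: Omega = {0,1}^n, H = number of ones, so
# Z(beta) = (1 + e^beta)^n is known and H ~ Binomial(n, p(beta))
# under mu_beta with p(beta) = e^beta / (1 + e^beta).
rng = np.random.default_rng(5)
n = 30

def sample_H(beta, size):
    p = np.exp(beta) / (1.0 + np.exp(beta))
    return rng.binomial(n, p, size=size)

def log_partition_ratio(betas, samples_per_step):
    """Telescoping estimator of q = log(Z(beta_max) / Z(beta_min))."""
    total = 0.0
    for b0, b1 in zip(betas[:-1], betas[1:]):
        Hs = sample_H(b0, samples_per_step)
        total += np.log(np.mean(np.exp((b1 - b0) * Hs)))
    return total

betas = np.linspace(-1.0, 1.0, 41)          # fine schedule keeps each ratio O(1)
est = log_partition_ratio(betas, samples_per_step=4_000)
exact = n * (np.log1p(np.exp(1.0)) - np.log1p(np.exp(-1.0)))  # equals n here
print(est, "vs exact q =", exact)
```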

The property that the velocity $\boldsymbol{u}$ belongs to $L^\infty(0,T;L^2(\Omega)^d)$ is an essential requirement in the definition of energy solutions of models for incompressible fluids. It is, therefore, highly desirable that the solutions produced by discretisation methods are uniformly stable in the $L^\infty(0,T;L^2(\Omega)^d)$-norm. In this work, we establish that this is indeed the case for Discontinuous Galerkin (DG) discretisations (in time and space) of non-Newtonian models with $p$-structure, assuming that $p\geq \frac{3d+2}{d+2}$; the time discretisation is equivalent to the Radau IIA implicit Runge-Kutta method. We also prove (weak) convergence of the numerical scheme to the weak solution of the system; this type of convergence result for schemes based on quadrature seems to be new. As an auxiliary result, we also derive Gagliardo-Nirenberg-type inequalities on DG spaces, which might be of independent interest.

We provide sufficient conditions for the existence of viscosity solutions of fractional semilinear elliptic PDEs of index $\alpha \in (1,2)$ with polynomial gradient nonlinearities on $d$-dimensional balls, $d\geq 2$. Our approach uses a tree-based probabilistic representation based on $\alpha$-stable branching processes, and allows us to take into account gradient nonlinearities not covered so far by deterministic finite difference methods. Numerical illustrations demonstrate the accuracy of the method in dimension $d=10$, addressing a challenge encountered by deterministic finite difference methods in high-dimensional settings.
