
This paper develops a general asymptotic theory of local polynomial (LP) regression for spatial data observed at irregularly spaced locations in a sampling region $R_n \subset \mathbb{R}^d$. We adopt a stochastic sampling design that can generate irregularly spaced sampling sites in a flexible manner, including both the pure increasing domain and mixed increasing domain frameworks. We first introduce a nonparametric regression model for spatial data defined on $\mathbb{R}^d$ and then establish the asymptotic normality of LP estimators of general order $p \geq 1$. We also propose methods for constructing confidence intervals and establish uniform convergence rates of LP estimators. Our dependence conditions on the underlying processes cover a wide class of random fields, including L\'evy-driven continuous autoregressive moving average random fields. As an application of our main results, we discuss a two-sample testing problem for mean functions and their partial derivatives.
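For orientation, here is a minimal Python sketch of a local polynomial fit of order $p=1$ (local linear) at a single location, using kernel-weighted least squares on irregularly spaced sites. The Gaussian kernel, the bandwidth, and the toy mean function are illustrative assumptions rather than the paper's specification; the intercept estimates the mean function at the evaluation point and the slope coefficients estimate its gradient.

```python
# Minimal sketch of local polynomial regression of order p = 1 (local linear)
# at a single location x0 in R^2, on irregularly spaced sites.
import numpy as np

def local_linear_fit(sites, y, x0, h):
    """Estimate m(x0) and its gradient by kernel-weighted least squares."""
    u = (sites - x0) / h                                 # rescaled displacements
    w = np.exp(-0.5 * np.sum(u**2, axis=1))              # Gaussian kernel weights (assumed)
    X = np.column_stack([np.ones(len(y)), sites - x0])   # design: [1, s - x0]
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return beta[0], beta[1:]                             # m_hat(x0), grad_hat(x0)

rng = np.random.default_rng(0)
sites = rng.uniform(0, 10, size=(500, 2))                # irregular sampling sites
m = lambda s: np.sin(s[:, 0]) + 0.5 * s[:, 1]            # toy mean function (assumed)
y = m(sites) + 0.1 * rng.standard_normal(500)
m_hat, grad_hat = local_linear_fit(sites, y, x0=np.array([5.0, 5.0]), h=1.0)
```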

Related content

The theory of generalized locally Toeplitz (GLT) sequences is a powerful apparatus for computing the asymptotic spectral distribution of matrices $A_n$ arising from numerical discretizations of differential equations. Indeed, when the mesh fineness parameter $n$ tends to infinity, these matrices $A_n$ give rise to a sequence $\{A_n\}_n$, which often turns out to be a GLT sequence. In this paper, we extend the theory of GLT sequences in several directions: we show that every GLT sequence enjoys a normal form, we identify the spectral symbol of every GLT sequence formed by normal matrices, and we prove that, for every GLT sequence $\{A_n\}_n$ formed by normal matrices and every continuous function $f:\mathbb C\to\mathbb C$, the sequence $\{f(A_n)\}_n$ is again a GLT sequence whose spectral symbol is $f(\kappa)$, where $\kappa$ is the spectral symbol of $\{A_n\}_n$. In addition, using the theory of GLT sequences, we prove a spectral distribution result for perturbed normal matrices.
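As a small, self-contained illustration of the spectral-symbol idea behind GLT theory (not taken from the paper), the snippet below compares the eigenvalues of the classical second-difference Toeplitz matrix with uniform samples of its generating symbol $f(\theta) = 2 - 2\cos\theta$; the matrix size and the example itself are assumptions made for illustration.

```python
# The 1D second-difference matrix A_n (a Toeplitz matrix generated by
# f(theta) = 2 - 2 cos(theta)) has eigenvalues that sample f on a uniform
# grid of (0, pi), the prototype of a spectral distribution result.
import numpy as np

n = 200
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # discretized -u''
eigs = np.sort(np.linalg.eigvalsh(A))

theta = np.pi * np.arange(1, n + 1) / (n + 1)          # uniform grid of (0, pi)
symbol_samples = 2 - 2 * np.cos(theta)                 # samples of the symbol

print(np.max(np.abs(eigs - symbol_samples)))           # tiny: eigenvalues sample the symbol
```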

This paper explores the application of the multiscale finite element method (MsFEM) to steady-state Stokes-Darcy problems with Beavers-Joseph-Saffman (BJS) interface conditions in highly heterogeneous porous media. We assume the existence of multiscale features in the Darcy region and propose an algorithm for the multiscale Stokes-Darcy model. During the offline phase, we employ MsFEM to construct permeability-dependent offline bases for efficient coarse-grid simulation; this process is carried out in parallel for efficiency. In the online phase, we use the Robin-Robin algorithm to obtain the model's solution. We then carry out an error analysis in the $L^2$ and $H^1$ norms, assuming certain periodic coefficients in the Darcy region. To validate our approach, we present extensive numerical tests on highly heterogeneous media that corroborate the error analysis.

In this paper, we provide a simple proof of a generalization of the Gauss-Lucas theorem. Using the D-companion matrix method, we obtain a majorization relationship between the zeros of convex combinations of incomplete polynomials and those of the original polynomial. Moreover, we prove that the set of all zeros of all convex combinations of incomplete polynomials coincides with the closed convex hull of the zeros of the original polynomial. The location of the zeros of convex combinations of incomplete polynomials is thereby determined.
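The following snippet numerically checks the classical Gauss-Lucas theorem, which the paper generalizes: the zeros of $p'$ lie in the closed convex hull of the zeros of $p$. The particular polynomial is an arbitrary example, and testing hull membership through a Delaunay triangulation is just one convenient implementation choice.

```python
# Numerical check of the classical Gauss-Lucas theorem on an example polynomial.
import numpy as np
from scipy.spatial import Delaunay

zeros_p = np.array([0.0, 4.0, 2.0 + 3.0j, 1.0 - 2.0j, -1.0 + 1.0j])
p = np.poly(zeros_p)                      # monic polynomial with these zeros
zeros_dp = np.roots(np.polyder(p))        # zeros of the derivative p'

hull = Delaunay(np.column_stack([zeros_p.real, zeros_p.imag]))
queries = np.column_stack([zeros_dp.real, zeros_dp.imag])
print(np.all(hull.find_simplex(queries) >= 0))   # True: all critical points lie in the hull
```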

In the present paper, we propose a block variant of the extended Hessenberg process for computing approximations of matrix functions and for other problems involving large-scale matrices. We present applications to the computation of matrix functions of the form $f(A)V$, where $A$ is an $n \times n$ large sparse matrix, $V$ is an $n \times p$ block with $p \ll n$, and $f$ is a function. The solution of shifted linear systems with multiple right-hand sides is also addressed. Such matrix problems arise in many scientific and engineering applications. Various numerical experiments illustrate the effectiveness of the proposed method for these problems.
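As a point of reference only, here is a standard (non-block, non-extended) Arnoldi sketch for approximating $f(A)v$ with $f=\exp$; the paper's block extended Hessenberg process generalizes this setting to block vectors $V$ and extended Krylov subspaces. The test matrix, the choice $f=\exp$, and the subspace dimension are illustrative assumptions.

```python
# Standard Arnoldi approximation f(A)v ~ ||v|| V_m f(H_m) e_1, shown as a
# simple Krylov baseline; it is not the block extended Hessenberg process.
import numpy as np
from scipy.linalg import expm
from scipy.sparse import diags
from scipy.sparse.linalg import expm_multiply

def arnoldi_fAv(A, v, m):
    n = len(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(v)
    V[:, 0] = v / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):                 # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return beta * V[:, :m] @ expm(H[:m, :m])[:, 0]   # beta * V_m exp(H_m) e_1

n = 2000
A = diags([-2 * np.ones(n), np.ones(n - 1), np.ones(n - 1)], [0, -1, 1]).tocsr()
v = np.random.default_rng(1).standard_normal(n)
approx = arnoldi_fAv(A, v, m=30)
print(np.linalg.norm(approx - expm_multiply(A, v)))  # error vs. a reference solver
```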

Personalized recommendations form an important part of today's internet ecosystem, helping artists and creators to reach interested users, and helping users to discover new and engaging content. However, many users today are skeptical of platforms that personalize recommendations, in part due to historically careless treatment of personal data and data privacy. Businesses that rely on personalized recommendations are now entering a new paradigm in which many of their systems must be overhauled to be privacy-first. In this article, we propose an algorithm for personalized recommendations that facilitates both precise and differentially private measurement. We consider advertising as an example application and conduct offline experiments to quantify how the proposed privacy-preserving algorithm affects key metrics related to user experience, advertiser value, and platform revenue, compared with the two extremes of a private, non-personalized implementation and a non-private, personalized one.
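To make the measurement side concrete, the snippet below sketches a generic $\varepsilon$-differentially private release via the Laplace mechanism; it is not the article's recommendation algorithm, and the sensitivity and $\varepsilon$ values are illustrative assumptions.

```python
# Generic epsilon-differentially private measurement via the Laplace mechanism.
import numpy as np

def private_sum(values, epsilon, sensitivity):
    """Release sum(values) with epsilon-differential privacy."""
    rng = np.random.default_rng()
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return float(np.sum(values)) + noise

# e.g. counting ad conversions, each user contributing at most one event
conversions = np.ones(1200)                       # 1200 conversion events (synthetic)
print(private_sum(conversions, epsilon=1.0, sensitivity=1.0))
```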

This paper proposes several approaches, as baselines, for computing a shared active subspace for multivariate vector-valued functions. The goal is to minimize the deviation between the function evaluations on the original space and those on the reconstructed one. This is done either by manipulating the gradients or the symmetric positive (semi-)definite (SPD) matrices computed from the gradients of each component function, so as to obtain a single structure common to all component functions. These approaches can be applied to any data irrespective of the underlying distribution, unlike the existing vector-valued approach, which is constrained to a normal distribution. We test the effectiveness of these methods on five optimization problems. The experiments show that, in general, the SPD-level methods are superior to the gradient-level ones and come close to the vector-valued approach in the case of a normal distribution. Interestingly, in most cases it suffices to take the sum of the SPD matrices to identify the best shared active subspace.
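The simplest SPD-level construction mentioned above can be sketched as follows: estimate the per-component matrices $C_i = \mathbb{E}[\nabla f_i \nabla f_i^{\top}]$ by Monte Carlo, sum them, and take the leading eigenvectors of the sum as the shared active subspace. The toy function, dimensions, and sampling distribution below are illustrative assumptions.

```python
# Shared active subspace from the sum of per-component SPD matrices.
import numpy as np

rng = np.random.default_rng(0)
d, n_samples, k = 6, 2000, 2                      # input dim, samples, subspace dim

def grad_f(x):
    """Gradients of a toy vector-valued function f: R^6 -> R^2."""
    g1 = np.array([2 * x[0], 1.0, 0.0, 0.0, 0.0, 0.0])           # f1 = x0^2 + x1
    g2 = np.array([0.0, 0.0, np.cos(x[2]), 0.0, 2 * x[4], 0.0])  # f2 = sin(x2) + x4^2
    return np.stack([g1, g2])

X = rng.uniform(-1, 1, size=(n_samples, d))
C_sum = np.zeros((d, d))
for x in X:
    G = grad_f(x)                                 # (2, d) stacked gradients
    C_sum += G.T @ G / n_samples                  # Monte Carlo sum of C_i
eigvals, eigvecs = np.linalg.eigh(C_sum)
W = eigvecs[:, ::-1][:, :k]                       # shared active subspace basis
```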

This paper explores an iterative coupling approach for solving linear thermo-poroelasticity problems, used as a high-fidelity finite element discretization for training projection-based reduced order models. One of the main challenges in addressing coupled multi-physics problems is their complexity and computational expense. In this study, we introduce a decoupled iterative solution approach, integrated with reduced order modeling, aimed at improving the efficiency of the computational algorithm. The iterative coupling technique we employ builds upon the established fixed-stress splitting scheme that has been extensively investigated for Biot's poroelasticity. Leveraging solutions derived from this coupled iterative scheme, the reduced order model applies an additional Galerkin projection onto a reduced basis space spanned by a small number of modes obtained through proper orthogonal decomposition. The effectiveness of the proposed algorithm is demonstrated through numerical experiments that showcase its computational efficiency.
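A minimal sketch of the proper orthogonal decomposition (POD) and Galerkin projection step is given below; the snapshot matrix and full-order operator are synthetic stand-ins rather than outputs of the fixed-stress thermo-poroelastic solver.

```python
# POD basis from snapshots plus a Galerkin-projected reduced solve.
import numpy as np

rng = np.random.default_rng(0)
n_dof, n_snap, r = 500, 40, 8                  # full dofs, snapshots, kept modes

snapshots = rng.standard_normal((n_dof, n_snap))   # stand-in for solver output
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
Phi = U[:, :r]                                 # POD basis (leading modes)

A_full = np.diag(np.linspace(1.0, 2.0, n_dof)) # stand-in full-order operator
b_full = rng.standard_normal(n_dof)
A_red = Phi.T @ A_full @ Phi                   # Galerkin-projected operator
u_red = np.linalg.solve(A_red, Phi.T @ b_full) # reduced solve
u_approx = Phi @ u_red                         # lift back to the full space
```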

Despite the fundamental role the Quantum Satisfiability (QSAT) problem has played in quantum complexity theory, a central question remains open: At which local dimension does the complexity of QSAT transition from "easy" to "hard"? Here, we study QSAT with each constraint acting on a $k$-dimensional and $l$-dimensional qudit pair, denoted $(k,l)$-QSAT. Our first main result shows that, surprisingly, QSAT on qubits can remain $\mathsf{QMA}_1$-hard, in that $(2,5)$-QSAT is $\mathsf{QMA}_1$-complete. In contrast, $2$-SAT on qubits is well-known to be poly-time solvable [Bravyi, 2006]. Our second main result proves that $(3,d)$-QSAT on the 1D line with $d\in O(1)$ is also $\mathsf{QMA}_1$-hard. Finally, we initiate the study of 1D $(2,d)$-QSAT by giving a frustration-free 1D Hamiltonian with a unique, entangled ground state. Our first result uses a direct embedding, combining a novel clock construction with the 2D circuit-to-Hamiltonian construction of [Gosset, Nagaj, 2013]. Of note is a new simplified and analytic proof for the latter (as opposed to a partially numeric proof in [GN13]). This exploits Unitary Labelled Graphs [Bausch, Cubitt, Ozols, 2017] together with a new "Nullspace Connection Lemma", allowing us to break low energy analyses into small patches of projectors, and to improve the soundness analysis of [GN13] from $\Omega(1/T^6)$ to $\Omega(1/T^2)$, for $T$ the number of gates. Our second result goes via black-box reduction: Given an arbitrary 1D Hamiltonian $H$ on $d'$-dimensional qudits, we show how to embed it into an effective null-space of a 1D $(3,d)$-QSAT instance, for $d\in O(1)$. Our approach may be viewed as a weaker notion of "simulation" (\`a la [Bravyi, Hastings 2017], [Cubitt, Montanaro, Piddock 2018]). As far as we are aware, this gives the first "black-box simulation"-based $\mathsf{QMA}_1$-hardness result, i.e. for frustration-free Hamiltonians.

We solve the problem of estimating the distribution of presumed i.i.d. observations under the total variation loss. Our approach is based on density models and is versatile enough to cope with many different ones, including some density models for which the Maximum Likelihood Estimator (MLE for short) does not exist. We mainly illustrate the properties of our estimator on models of densities on the line that satisfy a shape constraint. We show that it possesses optimality properties, with regard to global rates of convergence, similar to those of the MLE when the latter exists. It also enjoys adaptation properties with respect to some specific target densities in the model, for which our estimator is proven to converge at the parametric rate. More importantly, our estimator is robust not only to model misspecification, but also to contamination, to the presence of outliers in the dataset, and to departures from the equidistribution assumption. This means that the estimator performs almost as well as if the data were i.i.d. with density $p$ in a situation where these data are only independent and most of their marginals are close enough in total variation to a distribution with density $p$. We also show that our estimator converges to the average density of the data, when this density belongs to the model, even when none of the marginal densities belongs to it. Our main result on the risk of the estimator takes the form of a non-asymptotic exponential deviation inequality with explicit numerical constants. We deduce from it several global rates of convergence, including bounds for the minimax $\mathbb{L}_{1}$-risks over the sets of concave and log-concave densities. These bounds derive from specific results on the approximation of densities that are monotone, convex, concave, or log-concave. Such results may be of independent interest.

We present in this paper the numerical analysis of Lagrange-Galerkin methods of high order in both time and space for the conservative formulation of the advection-diffusion equation. As the time discretization scheme we consider the Backward Differentiation Formulas up to order $q=5$. The development and analysis of the methods are performed in the framework of time-evolving finite elements presented in C. M. Elliott and T. Ranner, IMA Journal of Numerical Analysis \textbf{41}, 1696-1845 (2021). The error estimates reveal, through their dependence on the parameters of the equation, the existence of different regimes in the behavior of the numerical solution: in the diffusive regime, that is, when the diffusion parameter $\mu$ is large, the error is $O(h^{k+1}+\Delta t^{q})$, whereas in the advective regime, $\mu \ll 1$, the convergence is $O(\min (h^{k},\frac{h^{k+1}}{\Delta t})+\Delta t^{q})$. It is worth remarking that the error constant does not depend exponentially on $\mu^{-1}$.
