In this paper we present an asymptotically compatible meshfree method for solving nonlocal equations with random coefficients, describing diffusion in heterogeneous media. In particular, the random diffusivity coefficient is described by a finite-dimensional random variable or a truncated combination of random variables obtained from the Karhunen-Lo\`{e}ve decomposition; a probabilistic collocation method (PCM) with sparse grids is then employed to sample the stochastic process. For each sample, the deterministic nonlocal diffusion problem is discretized with an optimization-based meshfree quadrature rule. We present a rigorous analysis of the proposed scheme and demonstrate convergence for a number of benchmark problems, showing that the method sustains asymptotic compatibility in space and achieves an algebraic or sub-exponential convergence rate in the random-coefficient space as the number of collocation points grows. Finally, to validate the applicability of this approach, we consider a randomly heterogeneous nonlocal problem with a given spatial correlation structure, demonstrating that the proposed PCM approach achieves a substantial speed-up compared to conventional Monte Carlo simulations.
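As a minimal illustration of the sampling step described above, the following sketch draws a realization of a one-dimensional random field from a truncated Karhunen-Lo\`{e}ve expansion. The exponential covariance kernel, the grid, and all parameter values here are illustrative assumptions, not the paper's actual setup; the eigenpairs are approximated from a discretized covariance matrix rather than solved analytically.

```python
import numpy as np

def kl_truncated_field(x, n_terms, corr_len, rng):
    """Sample a random field via a truncated Karhunen-Loeve expansion.

    Eigenpairs of an exponential covariance kernel are approximated
    numerically from the discretized covariance matrix on the grid x.
    """
    # Discretized covariance matrix C_ij = exp(-|x_i - x_j| / corr_len)
    C = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
    h = x[1] - x[0]  # uniform grid spacing, used as a quadrature weight
    eigvals, eigvecs = np.linalg.eigh(h * C)
    # Keep the n_terms largest eigenpairs (eigh returns ascending order)
    idx = np.argsort(eigvals)[::-1][:n_terms]
    lam, phi = eigvals[idx], eigvecs[:, idx] / np.sqrt(h)
    xi = rng.standard_normal(n_terms)  # i.i.d. standard normal coefficients
    return phi @ (np.sqrt(np.maximum(lam, 0.0)) * xi)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
field = kl_truncated_field(x, n_terms=10, corr_len=0.3, rng=rng)
```

In a collocation setting, the standard-normal draw `xi` would instead be replaced by sparse-grid collocation points in the finite-dimensional random-coefficient space, with the deterministic nonlocal problem solved once per point.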

Retraction note: After posting the manuscript on arXiv, we were informed by Erik Jan van Leeuwen that both results were already known and appeared in his thesis [vL09]: a PTAS for MDS is Theorem 6.3.21 on page 79, and a PTAS for MCDS is Theorem 6.3.31 on page 82. The techniques used are very similar. He noted that the idea of handling the connected version using a constant number of extra layers in the shifting technique appeared not only in Zhang et al. [ZGWD09] but also in his 2005 paper [vL05]. Finally, van Leeuwen also informed us that the open problem we posed has been resolved by Marx [Mar06, Mar07], who showed that an efficient PTAS for MDS does not exist [Mar06] and that, under ETH, a running time of $n^{O(1/\epsilon)}$ is best possible [Mar07]. We thank Erik Jan van Leeuwen for this information and regret our mistake. Abstract before retraction: We present two (exponentially) faster PTASs for dominating set problems in unit disk graphs. Given a geometric representation of a unit disk graph, our PTASs find $(1+\epsilon)$-approximate solutions to the Minimum Dominating Set (MDS) and the Minimum Connected Dominating Set (MCDS) of the input graph in time $n^{O(1/\epsilon)}$. This compares favorably with the best known $n^{O(1/\epsilon \log {1/\epsilon})}$-time PTAS by Nieberg and Hurink [WAOA'05] for MDS, which uses only graph structure, and the $n^{O(1/\epsilon^2)}$-time PTAS for MCDS by Zhang, Gao, Wu, and Du [J Glob Optim'09]. Our key ingredients are improved dynamic programming algorithms that depend exponentially on more essential one-dimensional "widths" of the problems.

We consider the null controllability problem for the wave equation, and analyse a stabilized finite element method formulated on a global, unstructured spacetime mesh. We prove error estimates for the approximate control given by the computational method. The proofs are based on the regularity properties of the control given by the Hilbert Uniqueness Method, together with the stability properties of the numerical scheme. Numerical experiments illustrate the results.

The celebrated Bernstein-von Mises theorem ensures that credible regions from the Bayesian posterior are well calibrated when the model is correctly specified, in the frequentist sense that their coverage probabilities tend to the nominal values as data accrue. However, this conventional Bayesian framework is known to lack robustness when the model is misspecified or only partly specified, as in quantile regression, risk-minimization-based supervised/unsupervised learning, and robust estimation. To overcome this difficulty, we propose a new Bayesian inferential approach that substitutes the (misspecified or partly specified) likelihood with a proper exponentially tilted empirical likelihood plus a regularization term. Our surrogate empirical likelihood is carefully constructed by using the first-order optimality condition of the empirical risk minimization as the moment condition. We show that the Bayesian posterior obtained by combining this surrogate empirical likelihood and the prior is asymptotically close to a normal distribution centered at the empirical risk minimizer, with a covariance matrix of the appropriate sandwich form. Consequently, the resulting Bayesian credible regions are automatically calibrated to deliver valid uncertainty quantification. Computationally, the proposed method can be easily implemented with Markov chain Monte Carlo sampling algorithms. Our numerical results show that the proposed method tends to be more accurate than existing state-of-the-art competitors.

We consider boundary element methods where the Calder\'on projector is used for the system matrix and boundary conditions are weakly imposed using a particular variational boundary operator designed using techniques from augmented Lagrangian methods. Regardless of the boundary conditions, both the primal trace variable and the flux are approximated. We focus on the imposition of Dirichlet conditions on the Helmholtz equation, and extend the analysis of the Laplace problem from \emph{Boundary element methods with weakly imposed boundary conditions} to this case. The theory is illustrated by a series of numerical examples.

We propose a new method for the analysis of competing risks data with long-term survivors. The proposed method enables us to estimate the overall survival probability and the cure fraction simultaneously. We formulate the effect of covariates on cumulative incidence functions using linear transformation models. Estimating equations based on counting processes are developed to estimate the regression coefficients. The asymptotic properties of the estimators are studied using martingale theory. An extensive Monte Carlo simulation study is carried out to assess the finite-sample performance of the proposed estimators. Finally, we illustrate our method using a real data set.

We study the problem of approximating the eigenspectrum of a symmetric matrix $A \in \mathbb{R}^{n \times n}$ with bounded entries (i.e., $\|A\|_{\infty} \leq 1$). We present a simple sublinear time algorithm that approximates all eigenvalues of $A$ up to additive error $\pm \epsilon n$ using those of a randomly sampled $\tilde{O}(\frac{1}{\epsilon^4}) \times \tilde O(\frac{1}{\epsilon^4})$ principal submatrix. Our result can be viewed as a concentration bound on the full eigenspectrum of a random principal submatrix. It significantly extends existing work which shows concentration of just the spectral norm [Tro08]. It also extends work on sublinear time algorithms for testing the presence of large negative eigenvalues in the spectrum [BCJ20]. To complement our theoretical results, we provide numerical simulations, which demonstrate the effectiveness of our algorithm in approximating the eigenvalues of a wide range of matrices.
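The core estimator described in this abstract is simple enough to sketch: sample a random principal submatrix, compute its eigenvalues, and rescale by $n/s$. The sketch below is an illustrative simplification (uniform sampling, a rank-one test matrix with $\pm 1$ entries); the paper's precise sampling scheme and error analysis are not reproduced here.

```python
import numpy as np

def estimate_eigenvalues(A, s, rng):
    """Estimate eigenvalues of a symmetric matrix A from a random
    s x s principal submatrix, rescaled by n/s."""
    n = A.shape[0]
    idx = rng.choice(n, size=s, replace=False)
    sub = A[np.ix_(idx, idx)]        # principal submatrix on sampled indices
    return (n / s) * np.linalg.eigvalsh(sub)

rng = np.random.default_rng(1)
n = 2000
# Rank-1 test matrix with entries in {-1, +1}: A = v v^T, so ||A||_inf <= 1
# and the true spectrum is {n, 0, ..., 0}.
v = rng.choice([-1.0, 1.0], size=n)
A = np.outer(v, v)
est = estimate_eigenvalues(A, s=200, rng=rng)
# The largest rescaled submatrix eigenvalue should be close to n = 2000.
```

For this rank-one example the submatrix is exactly $v_S v_S^\top$, whose top eigenvalue is $s$, so the rescaled estimate recovers $n$ exactly; for general bounded matrices the abstract's guarantee is additive error $\pm \epsilon n$ across the whole spectrum.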

A general adaptive refinement strategy for solving a linear elliptic partial differential equation with random data is proposed and analysed herein. The adaptive strategy extends the a posteriori error estimation framework introduced by Guignard and Nobile in 2018 (SIAM J. Numer. Anal., 56, 3121--3143) to cover problems with a nonaffine parametric coefficient dependence. A suboptimal, but nonetheless reliable and convenient, implementation of the strategy involves approximation of the decoupled PDE problems with a common finite element approximation space. Computational results obtained using such a single-level strategy are presented in this paper (part I). Results obtained using a potentially more efficient multilevel approximation strategy, where meshes are individually tailored, will be discussed in part II of this work. The codes used to generate the numerical results are available online.

We present a space-time multiscale method for a parabolic model problem with an underlying coefficient that may be highly oscillatory with respect to both the spatial and the temporal variables. The method is based on the framework of the Variational Multiscale Method in the context of a space-time formulation and computes a coarse-scale representation of the differential operator that is enriched by auxiliary space-time corrector functions. Once computed, the coarse-scale representation allows us to efficiently obtain well-approximating discrete solutions for multiple right-hand sides. We prove first-order convergence independently of the oscillation scales in the coefficient and illustrate how the space-time correctors decay exponentially in both space and time, making it possible to localize the corresponding computations. This localization allows us to define a practical and computationally efficient method in terms of complexity and memory, for which we provide a posteriori error estimates and present numerical examples.

Stochastic gradient Markov chain Monte Carlo (SGMCMC) has become a popular method for scalable Bayesian inference. These methods are based on sampling a discrete-time approximation to a continuous-time process, such as the Langevin diffusion. When applied to distributions defined on a constrained space, such as the simplex, the time-discretisation error can dominate when we are near the boundary of the space. We demonstrate that while current SGMCMC methods for the simplex perform well in certain cases, they struggle with sparse simplex spaces, where many of the components are close to zero. However, most popular large-scale applications of Bayesian inference on simplex spaces, such as network or topic models, are sparse. We argue that this poor performance is due to the biases of SGMCMC caused by the discretisation error. To get around this, we propose the stochastic CIR process, which removes all discretisation error, and we prove that samples from the stochastic CIR process are asymptotically unbiased. Use of the stochastic CIR process within an SGMCMC algorithm is shown to give substantially better performance for a topic model and a Dirichlet process mixture model than existing SGMCMC approaches.
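The key property exploited above is that the CIR diffusion has a known exact transition law (a scaled noncentral chi-squared distribution), so it can be simulated without any time discretisation. The sketch below shows this generic exact CIR sampler; it is not the paper's SGMCMC algorithm, and all parameter values are illustrative.

```python
import numpy as np

def cir_exact_step(x, a, b, sigma, dt, rng):
    """One exact transition of the CIR process
        dx = a(b - x) dt + sigma * sqrt(x) dW,
    drawn from its known noncentral chi-squared law, so there is
    no time-discretisation error."""
    c = sigma**2 * (1.0 - np.exp(-a * dt)) / (4.0 * a)
    df = 4.0 * a * b / sigma**2          # degrees of freedom
    nc = x * np.exp(-a * dt) / c         # noncentrality parameter
    return c * rng.noncentral_chisquare(df, nc)

rng = np.random.default_rng(2)
x = 0.5
path = [x]
for _ in range(1000):
    x = cir_exact_step(x, a=2.0, b=1.0, sigma=0.5, dt=0.01, rng=rng)
    path.append(x)
# Samples stay strictly positive and hover around the stationary mean b = 1.
```

Contrast this with an Euler-Maruyama step, which can propose negative values near the boundary of the positive half-line; it is exactly this boundary discretisation error that the abstract argues biases standard SGMCMC on sparse simplices.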

Network embedding has attracted considerable research attention recently. However, the existing methods are incapable of handling billion-scale networks, because they are computationally expensive and, at the same time, difficult to accelerate with distributed computing schemes. To address these problems, we propose RandNE, a novel and simple billion-scale network embedding method. Specifically, we propose a Gaussian random projection approach to map the network into a low-dimensional embedding space while preserving the high-order proximities between nodes. To reduce the time complexity, we design an iterative projection procedure that avoids the explicit calculation of the high-order proximities. Theoretical analysis shows that our method is extremely efficient and friendly to distributed computing schemes, incurring no communication cost in the calculation. We demonstrate the efficacy of RandNE over state-of-the-art methods in network reconstruction and link prediction tasks on multiple datasets with different scales, ranging from thousands to billions of nodes and edges.
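The iterative projection idea above can be sketched in a few lines: start from a Gaussian random matrix and repeatedly multiply by the (sparse) adjacency matrix, so that a weighted sum of the iterates captures high-order proximities without ever forming matrix powers. This is a simplified illustration; RandNE's actual design (e.g., orthogonalization of the projection and its specific proximity weights) is not reproduced here, and the graph and weights below are assumptions.

```python
import numpy as np

def random_projection_embed(A, dim, order, weights, rng):
    """Embed a graph by iterated Gaussian random projection.

    U_0 is a random Gaussian matrix; U_q = A @ U_{q-1} so that U_q
    captures q-th order proximities without computing A^q explicitly.
    The embedding is the weighted sum of the iterates.
    """
    n = A.shape[0]
    U = rng.standard_normal((n, dim)) / np.sqrt(dim)
    emb = weights[0] * U
    for q in range(1, order + 1):
        U = A @ U                    # one sparse multiply per order
        emb = emb + weights[q] * U
    return emb

rng = np.random.default_rng(3)
n = 100
# Symmetric adjacency matrix of a small random graph (illustrative only)
A = (rng.random((n, n)) < 0.05).astype(float)
A = np.triu(A, 1)
A = A + A.T
emb = random_projection_embed(A, dim=16, order=3,
                              weights=[1.0, 0.5, 0.25, 0.125], rng=rng)
```

Because each iterate only needs the previous one and a sparse matrix-vector product, the rows of `U` can be partitioned across workers, which is consistent with the abstract's claim of communication-free distributed computation.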
