
In this paper, we study the numerical method for the bi-Laplace problems with inhomogeneous coefficients; particularly, we propose finite element schemes on rectangular grids respectively for an inhomogeneous fourth-order elliptic singular perturbation problem and for the Helmholtz transmission eigenvalue problem. The new methods use the reduced rectangle Morley (RRM for short) element space with piecewise quadratic polynomials, which are of the lowest degree possible. For the finite element space, a discrete analogue of an equality by Grisvard is proved for the stability issue and a locally-averaged interpolation operator is constructed for the approximation issue. Optimal convergence rates of the schemes are proved, and numerical experiments are given to verify the theoretical analysis.


In this paper, we present new high-probability PAC-Bayes bounds for different types of losses. First, for losses with a bounded range, we recover a strengthened version of Catoni's bound that holds uniformly over all parameter values. This leads to new fast-rate and mixed-rate bounds that are interpretable and tighter than previous bounds in the literature; in particular, the fast-rate bound is equivalent to the Seeger--Langford bound. Second, for losses with more general tail behavior, we introduce two new parameter-free bounds: a PAC-Bayes analogue of the Chernoff bound when the loss's cumulant generating function is bounded, and a bound when the loss's second moment is bounded. These two bounds are obtained using a new technique based on discretizing the space of possible events for the "in probability" parameter optimization problem. This technique is both simpler and more general than previous approaches that optimize over a grid on the parameter space. Finally, using a simple argument applicable to any existing bound, we extend all of the previous results to anytime-valid bounds.
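The Seeger--Langford-style bound mentioned above is stated implicitly through the binary KL divergence and must be inverted numerically. As an illustration only (the function names and the exact log term are assumptions, not taken from the paper), the standard inversion by bisection can be sketched as:

```python
import math

def kl_bernoulli(p, q):
    """Binary KL divergence kl(p || q) between Bernoulli(p) and Bernoulli(q)."""
    eps = 1e-12
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def kl_inverse_upper(emp_loss, budget, tol=1e-9):
    """Largest q >= emp_loss with kl(emp_loss || q) <= budget, via bisection."""
    lo, hi = emp_loss, 1.0 - 1e-12
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if kl_bernoulli(emp_loss, mid) <= budget:
            lo = mid    # mid still satisfies the bound; move up
        else:
            hi = mid
    return lo

def seeger_bound(emp_loss, kl_post_prior, n, delta):
    """Illustrative Seeger-style upper bound on the population loss:
    kl(empirical loss || true loss) <= (KL(rho||pi) + log(2 sqrt(n)/delta)) / n."""
    budget = (kl_post_prior + math.log(2 * math.sqrt(n) / delta)) / n
    return kl_inverse_upper(emp_loss, budget)
```

The bound tightens as the sample size grows, since the KL budget shrinks at rate $1/n$.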

In this paper, we introduce and analyze a variant of the Thompson sampling (TS) algorithm for contextual bandits. At each round, traditional TS requires samples from the current posterior distribution, which is usually intractable. To circumvent this issue, approximate inference techniques can be used to provide samples whose distribution is close to the posterior. However, current approximate techniques yield either poor estimation (Laplace approximation) or high computational cost (MCMC methods, ensemble sampling, etc.). In this paper, we propose a new algorithm, Variational Inference Thompson Sampling (VITS), based on Gaussian variational inference. This scheme provides powerful posterior approximations that are easy to sample from, and it is computationally efficient, making it an ideal choice for TS. In addition, we show that for linear contextual bandits, VITS achieves a sub-linear regret bound of the same order in the dimension and the number of rounds as traditional TS. Finally, we demonstrate experimentally the effectiveness of VITS on both synthetic and real-world datasets.
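To make the TS loop concrete, here is a minimal sketch for the linear-Gaussian case, where the posterior over the weight vector is itself Gaussian and can be sampled exactly; a variational scheme such as VITS would replace this exact posterior with a fitted Gaussian approximation. All function and parameter names are illustrative, not the paper's implementation:

```python
import numpy as np

def linear_ts(contexts_fn, reward_fn, d, n_rounds, noise_var=1.0, prior_var=1.0, rng=None):
    """Thompson sampling for a linear contextual bandit with a Gaussian
    posterior over the weight vector theta (exact in the linear-Gaussian
    model).  contexts_fn(t) returns an (n_arms, d) feature matrix;
    reward_fn(t, a) returns the observed reward of the pulled arm."""
    rng = np.random.default_rng(rng)
    precision = np.eye(d) / prior_var      # posterior precision matrix
    b = np.zeros(d)                        # precision-weighted mean accumulator
    total_reward = 0.0
    for t in range(n_rounds):
        arms = contexts_fn(t)
        cov = np.linalg.inv(precision)
        mean = cov @ b
        theta = rng.multivariate_normal(mean, cov)   # posterior sample
        a = int(np.argmax(arms @ theta))             # greedy w.r.t. the sample
        x, r = arms[a], reward_fn(t, a)
        precision += np.outer(x, x) / noise_var      # Bayesian linear update
        b += x * r / noise_var
        total_reward += r
    return total_reward
```

On a toy two-arm instance where one arm always pays 1 and the other 0, the sampler quickly concentrates on the paying arm.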

In this paper, we propose novel proper orthogonal decomposition (POD)-based model reduction methods that effectively address the issue of inverse crime in solving parabolic inverse problems. Both inverse initial value problems and inverse source problems are studied. By leveraging the inherent low-dimensional structures present in the data, our approach reduces the complexity of the forward model without compromising the accuracy of the inverse problem solution. In addition, we prove convergence of the proposed methods for solving parabolic inverse problems. Through extensive experimentation and comparative analysis, we demonstrate the effectiveness of our method in overcoming inverse crime and achieving improved inverse problem solutions. The proposed POD model reduction method offers a promising direction for improving the reliability and applicability of inverse problem-solving techniques in various domains.
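The core POD step (not the paper's full inverse-problem pipeline) is an SVD of a snapshot matrix, truncated to capture a prescribed fraction of the snapshot energy. A minimal sketch, with illustrative names and a toy heat-equation snapshot set:

```python
import numpy as np

def pod_basis(snapshots, energy=0.999):
    """Compute a POD basis from a snapshot matrix (n_dof x n_snapshots).
    Keeps the smallest number of left singular vectors whose squared
    singular values capture the requested fraction of total energy."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cum, energy)) + 1
    return U[:, :r], s

# Toy usage: solutions of a 1-D heat equation built from two Fourier modes
# lie in a 2-dimensional subspace, which POD recovers from the snapshots.
x = np.linspace(0.0, 1.0, 200)
t = np.linspace(0.01, 1.0, 50)
snaps = np.array([np.exp(-(np.pi**2) * tt) * np.sin(np.pi * x)
                  + 0.3 * np.exp(-(2.0 * np.pi)**2 * tt) * np.sin(2.0 * np.pi * x)
                  for tt in t]).T
basis, s = pod_basis(snaps)
```

The reduced forward model is then obtained by Galerkin projection of the PDE onto the columns of `basis`.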

In this paper, we investigate the Walk on Spheres (WoS) algorithm for motion planning in robotics. WoS is a Monte Carlo method for solving the Dirichlet problem, developed in the 1950s by Muller and recently repopularized by Sawhney and Crane, who showed its applicability to geometry processing in volumetric domains. This paper provides a first study of the applicability of WoS to robot motion planning in configuration spaces, with potential fields defined as solutions of screened Poisson equations. The experiments in this paper empirically demonstrate the method's trivial parallelization and its dimension-independent convergence rate of $O(1/N)$ in the number of walks, and include a validation experiment on the RR platform.
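The basic WoS recursion is short enough to sketch in full. The version below solves the plain Laplace Dirichlet problem (the screened Poisson variant used in the paper additionally weights each step); the names and the disk-domain example are illustrative:

```python
import math, random

def walk_on_spheres(x, y, dist_fn, boundary_fn, eps=1e-3, n_walks=2000, rng=None):
    """Monte Carlo estimate at (x, y) of the harmonic function with
    Dirichlet data boundary_fn.  dist_fn gives the distance to the domain
    boundary; each step jumps uniformly on the largest inscribed circle,
    which preserves the expectation of a harmonic function (mean-value
    property).  A walk terminates within eps of the boundary."""
    rng = rng or random.Random()
    total = 0.0
    for _ in range(n_walks):
        px, py = x, y
        while True:
            r = dist_fn(px, py)
            if r < eps:
                total += boundary_fn(px, py)
                break
            theta = rng.uniform(0.0, 2.0 * math.pi)
            px += r * math.cos(theta)
            py += r * math.sin(theta)
    return total / n_walks

# Unit disk with boundary data g(x, y) = x; the harmonic extension is u = x,
# so the estimate at (0.3, 0.2) should be close to 0.3.
dist = lambda px, py: 1.0 - math.hypot(px, py)
est = walk_on_spheres(0.3, 0.2, dist, lambda px, py: px,
                      n_walks=5000, rng=random.Random(0))
```

Each walk is independent, which is exactly why the method parallelizes trivially, and the $O(1/N)$ Monte Carlo error rate does not depend on the ambient dimension.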

In this work, we detail the GPU porting of an in-house pseudo-spectral solver tailored to large-scale, interface-resolved simulations of drop- and bubble-laden turbulent flows. The code relies on direct numerical simulation of the Navier-Stokes equations, used to describe the flow field, coupled with a phase-field method, used to describe the shape, deformation, and topological changes of the interfaces of the drops or bubbles. The governing equations (the Navier-Stokes and Cahn-Hilliard equations) are solved using a pseudo-spectral method that transforms the variables into wavenumber space. To handle large-scale simulations, the code relies on multilevel parallelism. The first level is based on the message-passing interface (MPI) and is used on multi-core, CPU-based infrastructures. A second level relies on OpenACC directives and the cuFFT libraries and is used to accelerate code execution on GPU-based infrastructures. The resulting multiphase flow solver can be executed efficiently on heterogeneous computing infrastructures and exhibits a remarkable speed-up when GPUs are employed. Thanks to the modular structure of the code and the use of a directive-based strategy to offload execution to GPUs, only minor code modifications are required when targeting different computing architectures. This improves code maintenance, version control, and the implementation of additional modules or governing equations.
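The "transform into wavenumber space" step can be illustrated on the simplest possible example: in Fourier space, linear diffusion acts diagonally on each mode, so one time step is an FFT, a per-mode multiplication, and an inverse FFT. This is only a one-dimensional NumPy sketch of the principle, not the solver described above:

```python
import numpy as np

def spectral_diffusion_step(u, dt, nu, L):
    """One exact time step of u_t = nu * u_xx on a periodic domain of
    length L, performed in wavenumber space: transform, multiply each
    Fourier mode by its analytic decay factor exp(-nu k^2 dt), transform
    back.  This is the diagonalization that pseudo-spectral methods exploit."""
    n = u.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)   # wavenumbers
    u_hat = np.fft.fft(u)
    u_hat *= np.exp(-nu * k**2 * dt)               # per-mode decay
    return np.real(np.fft.ifft(u_hat))

# A single sine mode sin(3x) decays as exp(-nu * 9 * t), which the
# spectral step reproduces to machine precision.
n, L, nu, dt = 128, 2.0 * np.pi, 0.1, 0.05
x = np.linspace(0.0, L, n, endpoint=False)
u0 = np.sin(3 * x)
u1 = spectral_diffusion_step(u0, dt, nu, L)
```

In a production solver, the per-mode multiply is exactly the kind of embarrassingly parallel kernel that cuFFT plus OpenACC offloading accelerates.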

In this paper, we deal with sequential testing of multiple hypotheses. Within the general scheme for constructing optimal tests based on backward induction, we propose a modification that provides a simplified (generally speaking, suboptimal) version of the optimal test for any particular optimization criterion. We call this the DBC (Dropped Backward Control) version of the optimal test. In particular, for two simple hypotheses, dropping backward control in the Bayesian test produces the classical sequential probability ratio test (SPRT). Similarly, dropping backward control in the modified Kiefer-Weiss solutions produces Lorden's 2-SPRTs. In the case of more than two hypotheses, we obtain in this way new classes of sequential multi-hypothesis tests and investigate their properties. The efficiency of the DBC tests is evaluated with respect to the optimal Bayesian multi-hypothesis test and with respect to Armitage's matrix sequential probability ratio test (MSPRT). In a multi-hypothesis variant of the Kiefer-Weiss problem for binomial proportions, the performance of the DBC test is numerically compared with that of the exact solution. In a model of normal observations with a linear trend, the performance of the DBC test is numerically compared with that of the MSPRT. Some other numerical examples are also presented. In all cases the proposed tests exhibit very high efficiency with respect to the optimal tests (more than 99.3% when sampling from Bernoulli populations) and/or with respect to the MSPRT (even outperforming the latter in some scenarios).
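For readers unfamiliar with the SPRT that the DBC construction recovers in the two-hypothesis case, here is Wald's classical test for Bernoulli data. This is textbook material sketched with illustrative names, not the DBC machinery itself:

```python
import math, random

def sprt_bernoulli(p0, p1, alpha, beta, sample_fn, max_n=100000):
    """Wald's sequential probability ratio test of H0: p = p0 vs
    H1: p = p1 for Bernoulli observations.  The log-likelihood ratio is
    accumulated until it leaves the interval (log(beta/(1-alpha)),
    log((1-beta)/alpha)); returns (decision, sample size), decision 1
    meaning 'accept H1'."""
    a = math.log(beta / (1.0 - alpha))       # lower (accept-H0) threshold
    b = math.log((1.0 - beta) / alpha)       # upper (accept-H1) threshold
    llr, n = 0.0, 0
    while a < llr < b and n < max_n:
        x = sample_fn()
        n += 1
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
    return (1 if llr >= b else 0), n

rng = random.Random(0)
decision, n_used = sprt_bernoulli(0.2, 0.8, 0.05, 0.05,
                                  lambda: rng.random() < 0.8)
```

With well-separated hypotheses the test typically stops after only a handful of observations, which is the sample-size saving that the multi-hypothesis DBC tests generalize.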

In this work, we propose two information generating functions, the general weighted information generating function and the relative information generating function, and study their properties. It is shown that the general weighted information generating function (GWIGF) is shift-dependent and can be expressed in terms of the weighted Shannon entropy. The GWIGF of a transformed random variable is obtained in terms of the GWIGF of a known distribution. Several bounds on the GWIGF are proposed. We obtain sufficient conditions under which the GWIGFs of two distributions are comparable. Further, we establish a connection between the weighted varentropy and the varentropy via the proposed GWIGF. An upper bound for the GWIGF of the sum of two independent random variables is derived. The effect of the general weighted relative information generating function (GWRIGF) for two transformed random variables under strictly monotone functions is studied. These generating functions are then studied for escort, generalized escort, and mixture distributions. In particular, we propose the weighted $\beta$-cross informational energy and establish a close connection with the GWIGF for escort distributions. The residual versions of the newly proposed generating functions are considered and several analogous properties are explored. A non-parametric estimator of the residual general weighted information generating function is proposed. A simulated data set and two real data sets are considered for illustration. Finally, we compare the non-parametric approach with a parametric approach in terms of absolute bias and mean squared error.

In this work, we propose extropy measures based on the density copula, the distributional copula, and the survival copula, and explore their properties. We study the effect of monotone transformations on the proposed measures and obtain bounds. We establish connections between the cumulative copula extropy and three dependence measures: Spearman's rho, Kendall's tau, and Blest's measure of rank correlation. Finally, we propose estimators for the cumulative copula extropy and the survival copula extropy, with an illustration using real-life datasets.

In this paper, to address the optimization problem on a compact matrix manifold, we introduce a novel algorithmic framework called the Transformed Gradient Projection (TGP) algorithm, using the projection onto this compact matrix manifold. Compared with existing algorithms, the key innovation in our approach lies in the utilization of a new class of search directions and various stepsizes, including the Armijo, nonmonotone Armijo, and fixed stepsizes, to guide the selection of the next iterate. Our framework offers flexibility by encompassing the classical gradient projection algorithms as special cases and intersecting with the class of retraction-based line-search algorithms. Notably, our focus is on the Stiefel and Grassmann manifolds, revealing that many existing algorithms in the literature can be seen as specific instances within our proposed framework, and this algorithmic framework also induces several new special cases. We then conduct a thorough exploration of the convergence properties of these algorithms, considering various search directions and stepsizes. To achieve this, we extensively analyze the geometric properties of the projection onto compact matrix manifolds, allowing us to extend classical inequalities related to retractions from the literature. Building upon these insights, we establish the weak convergence, convergence rate, and global convergence of TGP algorithms under three distinct stepsizes. In cases where the compact matrix manifold is the Stiefel or Grassmann manifold, our convergence results either encompass or surpass those found in the literature. Finally, through a series of numerical experiments, we observe that the TGP algorithms, owing to their increased flexibility in choosing search directions, outperform classical gradient projection and retraction-based line-search algorithms in several scenarios.
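As a point of reference for the framework above, here is the classical gradient projection algorithm on the Stiefel manifold that TGP generalizes, with a fixed stepsize and the projection computed via the polar factor of the SVD. This is a minimal classical instance with illustrative names, not the TGP algorithm itself:

```python
import numpy as np

def stiefel_project(y):
    """Project onto the Stiefel manifold via the polar factor of the SVD:
    the nearest matrix with orthonormal columns in the Frobenius norm."""
    u, _, vt = np.linalg.svd(y, full_matrices=False)
    return u @ vt

def gradient_projection_stiefel(a, p, step=0.1, iters=500, rng=None):
    """Fixed-stepsize gradient projection for
        max trace(X^T A X)  subject to  X^T X = I_p,
    whose maximizers span the dominant p-dimensional eigenspace of the
    symmetric matrix A.  Each iterate takes a Euclidean gradient step
    (gradient = 2 A X) and projects back onto the manifold."""
    rng = np.random.default_rng(rng)
    n = a.shape[0]
    x = stiefel_project(rng.standard_normal((n, p)))
    for _ in range(iters):
        x = stiefel_project(x + step * 2.0 * a @ x)
    return x

# For a diagonal A the dominant 2-dimensional eigenspace is span(e1, e2),
# so trace(X^T A X) should converge to 5 + 4 = 9.
a = np.diag([5.0, 4.0, 1.0, 0.5])
x = gradient_projection_stiefel(a, p=2, rng=0)
```

TGP-style methods replace the raw gradient direction here with a transformed search direction and admit Armijo-type stepsizes, but the project-after-step structure is the same.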

In this paper, we investigate the problem of strong approximation of solutions of SDEs whose drift coefficient is given in integral form. Such drifts often appear in the analysis of the stochastic dynamics of optimization procedures in machine learning. We discuss connections between the proposed randomized Euler approximation scheme and a perturbed version of the stochastic gradient descent (SGD) algorithm. We investigate its upper error bounds, in terms of the discretization parameter $n$ and the size $M$ of the random sample drawn at each step of the algorithm, for different subclasses of coefficients of the underlying SDE. Finally, we report the results of numerical experiments performed on GPU architectures.
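The randomized Euler idea is that a drift of the form $a(x) = \mathbb{E}[f(x,\xi)]$ is replaced at each step by a Monte Carlo average over $M$ fresh samples of $\xi$. A minimal sketch under assumed names, using an Ornstein-Uhlenbeck toy example rather than any model from the paper:

```python
import numpy as np

def randomized_euler(x0, drift_sample, sigma, h, n_steps, m, rng):
    """Randomized Euler scheme for dX = a(X) dt + sigma dW, where the
    drift a(x) = E[f(x, xi)] is only accessible through samples: at each
    step the drift is replaced by the average of f over m freshly drawn
    xi's, mirroring a mini-batch gradient estimate in SGD."""
    x = x0
    for _ in range(n_steps):
        xi = rng.standard_normal(m)               # random sample of size m
        a_hat = np.mean(drift_sample(x, xi))      # Monte Carlo drift estimate
        x = x + a_hat * h + sigma * np.sqrt(h) * rng.standard_normal()
    return x

# Toy check: f(x, xi) = -(x - xi) with xi ~ N(0, 1) gives a(x) = -x, i.e.
# an Ornstein-Uhlenbeck process whose stationary variance is sigma^2 / 2.
rng = np.random.default_rng(0)
ends = np.array([randomized_euler(0.0, lambda x, xi: -(x - xi), 1.0,
                                  h=0.01, n_steps=300, m=20, rng=rng)
                 for _ in range(1000)])
```

The two error sources visible here, the time discretization (controlled by $n$, via $h$) and the drift sampling (controlled by $M$), are exactly the parameters in which the paper's error bounds are stated.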
