
In this paper we consider an orthonormal basis generated by a tensor product of Fourier basis functions, half-period cosine basis functions, and Chebyshev basis functions. We deal with the high-dimensional approximation problem related to this basis and design a fast algorithm for multiplying with the underlying matrix, consisting of rows of the non-equidistant Fourier matrix, the non-equidistant cosine matrix, and the non-equidistant Chebyshev matrix, and with its transpose. This leads us to an ANOVA (analysis of variance) decomposition for functions with partially periodic boundary conditions, obtained by using the Fourier basis in some dimensions and the half-period cosine basis or the Chebyshev basis in others. We consider sensitivity analysis in this setting in order to find a basis adapted to the underlying approximation problem; more precisely, we determine the underlying index set of the multidimensional series expansion. Additionally, we test this mixed-basis ANOVA approximation in numerical experiments and highlight the advantage of interpretable results.
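To make the mixed-basis setup concrete, the following sketch evaluates such a tensor-product expansion at arbitrary nodes by direct summation. The function names, normalizations, and domain conventions are illustrative assumptions, and this naive O(M · |I|) loop is precisely the computation that a fast transform of the kind described above is designed to replace.

```python
import numpy as np

def basis_1d(kind, k, x):
    # One-dimensional basis functions (illustrative normalizations/domains).
    if kind == "fourier":        # periodic, x in [0, 1)
        return np.exp(2j * np.pi * k * x)
    if kind == "cosine":         # half-period cosine, x in [0, 1]
        return (np.sqrt(2.0) if k > 0 else 1.0) * np.cos(np.pi * k * x)
    if kind == "chebyshev":      # x in [-1, 1]
        return (np.sqrt(2.0) if k > 0 else 1.0) * np.cos(k * np.arccos(x))
    raise ValueError(kind)

def evaluate_mixed(coeffs, index_set, kinds, nodes):
    # Naive evaluation of the mixed-basis expansion at M nodes: O(M * |I|).
    # Each term is a product of per-dimension basis functions chosen by `kinds`.
    vals = np.zeros(len(nodes), dtype=complex)
    for c, k in zip(coeffs, index_set):
        phi = np.ones(len(nodes), dtype=complex)
        for d, kind in enumerate(kinds):
            phi = phi * basis_1d(kind, k[d], nodes[:, d])
        vals += c * phi
    return vals
```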


Black holes are believed to be fast scramblers of information, as they rapidly destroy local correlations and spread information throughout the system. Unscrambling this information is in principle possible given perfect knowledge of the black hole's internal dynamics [arXiv:1710.03363]. This work shows that, even without knowing the internal dynamics of the black hole, information can be efficiently decoded from an unknown black hole by observing the outgoing Hawking radiation. We show that, surprisingly, black holes with unknown internal dynamics that are rapidly scrambling but not fully chaotic admit Clifford decoders: the salient properties of a scrambling unitary can be efficiently recovered even if it is exponentially complex. This recovery is possible because all the redundant complexity can be described by an entropy, the stabilizer entropy. We show how, for non-chaotic black holes, the stabilizer entropy can be efficiently pumped away, just as in a refrigerator.

In this study, we consider a class of non-autonomous time-fractional advection-diffusion-reaction (TF-ADR) equations with a Caputo-type fractional derivative. To obtain a numerical solution of the model problem, we apply the non-symmetric interior penalty Galerkin (NIPG) method in space on a uniform mesh and the L1 scheme in time on a graded mesh. The computed solution is shown to be discretely stable, and superconvergent error estimates are obtained in the discrete energy norm. We also apply the proposed method to semilinear problems after Newton linearization. The theoretical results are verified through numerical experiments.
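The time discretization can be sketched as follows: a graded mesh combined with the standard L1 approximation of the Caputo derivative, built from piecewise-linear interpolation. The grading exponent and function names are illustrative; this is a generic sketch of the scheme's building blocks, not the paper's implementation.

```python
import numpy as np
from math import gamma

def graded_mesh(T, N, r):
    # t_j = T * (j/N)^r; grading r > 1 clusters points near t = 0 to
    # compensate the weak initial-layer singularity typical of
    # time-fractional solutions.
    return T * (np.arange(N + 1) / N) ** r

def l1_caputo(u_vals, t, alpha, n):
    # L1 approximation of the Caputo derivative of order alpha (0 < alpha < 1)
    # at t[n], from piecewise-linear interpolation of u on the mesh t.
    acc = 0.0
    c = 1.0 / gamma(2.0 - alpha)
    for j in range(1, n + 1):
        w = (t[n] - t[j - 1]) ** (1 - alpha) - (t[n] - t[j]) ** (1 - alpha)
        acc += c * w * (u_vals[j] - u_vals[j - 1]) / (t[j] - t[j - 1])
    return acc
```

Because the scheme interpolates u piecewise linearly, it is exact for linear functions, which gives a convenient sanity check.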

We consider the case of performing Bayesian inference for stochastic epidemic compartment models, using incomplete time course data consisting of incidence counts that are either the number of new infections or removals in time intervals of fixed length. We eschew the most natural Markov jump process representation for reasons of computational efficiency, and focus on a stochastic differential equation representation. This is further approximated to give a tractable Gaussian process, that is, the linear noise approximation (LNA). Unless the observation model linking the LNA to data is both linear and Gaussian, the observed data likelihood remains intractable. It is in this setting that we consider two approaches for marginalising over the latent process: a correlated pseudo-marginal method and analytic marginalisation via a Gaussian approximation of the observation model. We compare and contrast these approaches using synthetic data before applying the best performing method to real data consisting of removal incidence of oak processionary moth nests in Richmond Park, London. Our approach further allows comparison between various competing compartment models.
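When the observation model is linear and Gaussian, the analytic marginalisation mentioned above reduces to standard Kalman filter recursions. The sketch below computes the log marginal likelihood for a generic linear-Gaussian state-space model; the matrix names and the time-homogeneous setup are simplifying assumptions, not the paper's exact LNA construction.

```python
import numpy as np

def kalman_loglik(ys, F, Q, H, R, m0, C0):
    # Log marginal likelihood of observations under the linear-Gaussian
    # state-space model x_t = F x_{t-1} + w_t,  y_t = H x_t + v_t,
    # with w_t ~ N(0, Q) and v_t ~ N(0, R).  Under the LNA with a
    # linear-Gaussian observation model, the latent process is
    # marginalised analytically by exactly this forward recursion.
    m, C = m0, C0
    loglik = 0.0
    for y in ys:
        m = F @ m                          # predict mean
        C = F @ C @ F.T + Q                # predict covariance
        S = H @ C @ H.T + R                # innovation covariance
        Sinv = np.linalg.inv(S)
        e = y - H @ m                      # innovation
        loglik += -0.5 * (len(y) * np.log(2.0 * np.pi)
                          + np.log(np.linalg.det(S)) + e @ Sinv @ e)
        K = C @ H.T @ Sinv                 # Kalman gain
        m = m + K @ e                      # update mean
        C = (np.eye(len(m)) - K @ H) @ C   # update covariance
    return loglik
```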

In this paper, to the best of our knowledge, we make the first attempt to study parametric semilinear elliptic eigenvalue problems with a parametric coefficient and power-type nonlinearities. The parametric coefficient is assumed to depend affinely on countably many parameters through an appropriate class of sequences of functions. We obtain an upper bound on the mixed derivatives of the ground eigenpairs that has the same form as the bound obtained recently for the linear eigenvalue problem. The three essential ingredients for this estimate are the parametric analyticity of the ground eigenpairs, their uniform boundedness, and uniformly positive gaps between the ground eigenvalues of the linear operators. All three ingredients require new techniques and a careful investigation of the nonlinear eigenvalue problem, which we present in this paper. As an application, treating each parameter as a uniformly distributed random variable, we estimate the expectation of the eigenpairs using a randomly shifted quasi-Monte Carlo lattice rule and show a dimension-independent error bound.
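A randomly shifted rank-1 lattice rule of the kind mentioned at the end of the abstract can be sketched as follows; the generating vector z and the shift-averaging scheme below are illustrative placeholders, not the specific rule constructed in the paper.

```python
import numpy as np

def shifted_lattice_mean(f, z, N, n_shifts=8, rng=None):
    # Randomly shifted rank-1 lattice rule: points x_i = frac(i * z / N + shift).
    # Averaging over independent uniform shifts gives an unbiased estimate of
    # the expectation of f over the unit cube [0, 1]^s, and the spread over
    # shifts gives a practical error estimate.
    rng = np.random.default_rng(rng)
    s = len(z)
    i = np.arange(N)[:, None]
    estimates = []
    for _ in range(n_shifts):
        shift = rng.random(s)
        pts = np.mod(i * np.asarray(z)[None, :] / N + shift, 1.0)
        estimates.append(np.mean([f(x) for x in pts]))
    return float(np.mean(estimates))
```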

The purpose of this paper is to introduce a new numerical method for multi-marginal optimal transport problems with pairwise interaction costs. The complexity of multi-marginal optimal transport generally scales exponentially in the number of marginals $m$. We introduce a one-parameter family of cost functions that interpolates between the original cost and a special cost function for which the problem's complexity scales linearly in $m$. We then show that the solution to the original problem can be recovered by solving an ordinary differential equation in the parameter $\epsilon$, whose initial condition corresponds to the solution for the special cost function mentioned above. Finally, we present simulations using both explicit Euler and explicit higher-order Runge-Kutta schemes to compute solutions to the ODE and, as a result, to the multi-marginal optimal transport problem.
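The continuation idea can be sketched generically: starting from the solution of the easy problem at $\epsilon = 0$, march an ODE in $\epsilon$ up to the original problem at $\epsilon = 1$. The right-hand side G below is a placeholder for the paper's derived vector field; explicit Euler is shown, and a higher-order Runge-Kutta step could be substituted in the same loop.

```python
def euler_continuation(G, x0, n_steps=100):
    # Explicit Euler for the continuation ODE dx/d(eps) = G(eps, x):
    # start from the solution x0 of the easy problem at eps = 0 and march
    # to eps = 1, where x approximates the original problem's solution.
    x, eps = x0, 0.0
    h = 1.0 / n_steps
    for _ in range(n_steps):
        x = x + h * G(eps, x)
        eps += h
    return x
```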

In this paper we propose a new finite element discretization for the two-field formulation of poroelasticity, which uses the elastic displacement and the pore pressure as primary variables. The main goal is to develop a numerical method with small problem sizes that still achieves key features such as parameter-robustness, local mass conservation, and robust preconditioner construction. To this end we combine a nonconforming finite element method and the interior over-stabilized enriched Galerkin method with a suitable stabilization term. Robust a priori error estimates and parameter-robust preconditioner construction are proved, and numerical results illustrate our theoretical findings.

This paper addresses the problem of providing robust estimators under a functional logistic regression model. Logistic regression is a popular tool in classification problems with two populations. As in functional linear regression, regularization tools are needed to compute estimators of the functional slope. The traditional methods are based on dimension reduction or penalization combined with maximum likelihood or quasi-likelihood techniques, and for that reason they may be affected by misclassified points, especially those associated with functional covariates of atypical behaviour. The proposal in this paper adapts some of the best practices used when the covariates are finite-dimensional to provide reliable estimates. Under regularity conditions, consistency of the resulting estimators and rates of convergence for the predictions are derived. A numerical study illustrates the finite-sample performance of the proposed method and reveals its stability under different contamination scenarios. A real data example is also presented.

Partially linear additive models generalize linear models: they relate a response variable to covariates by assuming that some covariates enter linearly while each of the others enters through an unknown univariate smooth function. The harmful effect of outliers, either in the residuals or in the covariates of the linear component, has been described for partially linear models, that is, when only one nonparametric component is involved. When dealing with additive components, the problem of providing reliable estimators in the presence of atypical data is of practical importance, motivating the need for robust procedures. We therefore propose a family of robust estimators for partially linear additive models by combining $B$-splines with robust linear regression estimators. We obtain consistency results, rates of convergence, and asymptotic normality for the linear components under mild assumptions. A Monte Carlo study compares the performance of the robust proposal with its classical counterpart under different models and contamination schemes; the numerical experiments show the advantage of the proposed methodology for finite samples. We also illustrate the usefulness of the proposed approach on a real data set.
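A minimal sketch of the general recipe, combining a spline expansion with an M-type robust fit: here a truncated-power basis stands in for B-splines and Huber iteratively reweighted least squares for the paper's robust estimators, so every choice below is an illustrative assumption rather than the proposed method.

```python
import numpy as np

def spline_basis(t, n_knots=6, degree=3):
    # Truncated-power spline basis on [0, 1]; a simple stand-in for B-splines.
    knots = np.linspace(0.0, 1.0, n_knots + 2)[1:-1]
    cols = [t ** d for d in range(1, degree + 1)]
    cols += [np.maximum(t - k, 0.0) ** degree for k in knots]
    return np.column_stack(cols)

def huber_irls(Z, y, delta=1.345, n_iter=50):
    # M-estimation with the Huber loss via iteratively reweighted least
    # squares: observations with large residuals receive weight < 1.
    Z1 = np.column_stack([np.ones(len(y)), Z])                    # add intercept
    beta = np.linalg.lstsq(Z1, y, rcond=None)[0]
    for _ in range(n_iter):
        r = y - Z1 @ beta
        s = np.median(np.abs(r - np.median(r))) / 0.6745 + 1e-12  # robust scale (MAD)
        w = np.minimum(1.0, delta * s / (np.abs(r) + 1e-12))      # Huber weights
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(Z1 * sw[:, None], y * sw, rcond=None)[0]
    return beta

def fit_plam(x_lin, t_np, y):
    # Partially linear additive model with one linear covariate x_lin and one
    # spline-expanded nonparametric covariate t_np; returns the robust
    # estimate of the linear coefficient.
    Z = np.column_stack([x_lin, spline_basis(t_np)])
    return huber_irls(Z, y)[1]
```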

In this paper, we consider the problem of parameter estimation for a family of exponential distributions. We develop an improved estimation method that generalizes the James-Stein approach to a wide class of distributions. The proposed estimator dominates the classical maximum likelihood estimator under quadratic risk. The estimation procedure is applied to special cases of distributions, and numerical simulation results are given.
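For reference, in the Gaussian location model the classical James-Stein shrinkage that this line of work generalizes takes the following form (the positive-part variant is shown; the exponential-family generalization in the paper differs in detail).

```python
import numpy as np

def james_stein(x, sigma2=1.0):
    # Positive-part James-Stein shrinkage of the MLE x of a p-variate
    # normal mean with known variance sigma2, for p >= 3; it dominates
    # the MLE under quadratic risk.
    p = len(x)
    shrink = max(0.0, 1.0 - (p - 2) * sigma2 / float(np.dot(x, x)))
    return shrink * x
```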

Computing the marginal likelihood (also called the Bayesian model evidence) is an important task in Bayesian model selection, providing a principled quantitative way to compare models. The learned harmonic mean estimator solves the exploding-variance problem of the original harmonic mean estimator of the marginal likelihood by learning an importance sampling target distribution that approximates the optimal distribution. While the approximation need not be highly accurate, it is critical that the probability mass of the learned distribution is contained within the posterior in order to avoid the exploding-variance problem. In previous work, a bespoke optimisation problem was introduced when training models in order to ensure this property is satisfied. In the current article we introduce the use of normalizing flows to represent the importance sampling target distribution. A flow-based model is trained on samples from the posterior by maximum likelihood estimation. The probability density of the flow is then concentrated by lowering the variance of the base distribution, i.e. by lowering its "temperature", ensuring its probability mass is contained within the posterior. This approach avoids the need for a bespoke optimisation problem and careful fine-tuning of parameters, resulting in a more robust method. Moreover, the use of normalizing flows has the potential to scale to high-dimensional settings. We present preliminary experiments demonstrating the effectiveness of flows for the learned harmonic mean estimator. The publicly available harmonic code implementing the learned harmonic mean has been updated to support normalizing flows.
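The temperature trick can be sketched in a few lines: sample the flow's Gaussian base with its variance scaled by T < 1 and push the samples through the trained flow. The `forward` map below is a placeholder for a trained flow, and this sketch is not the harmonic package's API.

```python
import numpy as np

def sample_tempered_flow(forward, base_dim, temperature, n, rng=None):
    # Draw samples from a flow whose Gaussian base has been "cooled":
    # z ~ N(0, T * I) with T < 1, pushed through the trained flow map.
    # Lowering T concentrates the flow's probability mass, keeping it
    # inside the bulk of the posterior, as the learned harmonic mean
    # estimator requires.  `forward` stands in for a trained flow.
    rng = np.random.default_rng(rng)
    z = np.sqrt(temperature) * rng.standard_normal((n, base_dim))
    return forward(z)
```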
