
Markov-modulated L\'evy processes lead to matrix integral equations of the form $A_0 + A_1 X + A_2 X^2 + A_3(X) = 0$, where $A_0$, $A_1$, $A_2$ are given matrix coefficients, while $A_3(X)$ is a nonlinear function expressed in terms of integrals involving the exponential of the matrix $X$ itself. In this paper we propose numerical methods for the solution of this class of matrix equations, perform a theoretical convergence analysis, and demonstrate the effectiveness of the new methods through extensive numerical experiments.
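
To make the shape of such a problem concrete, the sketch below runs a plain fixed-point iteration $X_{k+1} = -A_1^{-1}\big(A_0 + A_2 X_k^2 + A_3(X_k)\big)$. The specific choice of $A_3$ (a single quadrature-discretized exponential integral) and all input data are hypothetical stand-ins, not the equations or the methods studied in the paper.

```python
import numpy as np
from scipy.linalg import expm, solve

def a3(X, B, t_grid, weights):
    # Hypothetical nonlinear term: a quadrature approximation of an
    # integral involving the matrix exponential, e.g. int exp(t X) B dt.
    return sum(w * expm(t * X) @ B for t, w in zip(t_grid, weights))

def fixed_point(A0, A1, A2, B, t_grid, weights, tol=1e-10, maxit=500):
    # Iterate X_{k+1} = -A1^{-1} (A0 + A2 X_k^2 + A3(X_k)) until the
    # relative update falls below tol.
    X = np.zeros_like(A0)
    for _ in range(maxit):
        X_new = solve(A1, -(A0 + A2 @ X @ X + a3(X, B, t_grid, weights)))
        if np.linalg.norm(X_new - X) <= tol * (1 + np.linalg.norm(X)):
            return X_new
        X = X_new
    raise RuntimeError("fixed-point iteration did not converge")
```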


We consider the problem of inference for nonlinear, multivariate diffusion processes satisfying It\^o stochastic differential equations (SDEs), using data at discrete times that may be incomplete and subject to measurement error. Our starting point is a state-of-the-art correlated pseudo-marginal Metropolis-Hastings algorithm, which uses correlated particle filters to induce strong and positive correlation between successive likelihood estimates. However, unless the measurement error or the dimension of the SDE is small, this correlation can be eroded by the resampling steps in the particle filter. We therefore propose a novel augmentation scheme that allows for conditioning on values of the latent process at the observation times, completely avoiding the need for resampling steps. We integrate over the uncertainty at the observation times with an additional Gibbs step. Connections between the resulting pseudo-marginal scheme and existing inference schemes for diffusion processes are made, giving a unified inference framework that encompasses Gibbs sampling and pseudo-marginal schemes. The methodology is applied in three examples of increasing complexity. We find that our approach offers substantial gains in overall efficiency compared to competing methods.
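
For orientation, here is a minimal generic sketch of the correlated pseudo-marginal Metropolis-Hastings move that the abstract builds on: a Crank-Nicolson update of the auxiliary Gaussian variates is what correlates successive likelihood estimates. The random-walk proposal and the interface of `loglike_hat` are illustrative assumptions, not the authors' augmentation scheme.

```python
import numpy as np

def cpm_mh(loglike_hat, logprior, theta0, n_u, n_iter,
           rw_scale=0.1, rho=0.99, rng=None):
    # loglike_hat(theta, u) must return an unbiased log-likelihood
    # estimate driven by the auxiliary Gaussian variates u (e.g. the
    # random inputs of a particle filter or an Euler-Maruyama bridge).
    rng = rng or np.random.default_rng()
    theta, u = np.asarray(theta0, float), rng.standard_normal(n_u)
    lp = logprior(theta) + loglike_hat(theta, u)
    chain = [theta.copy()]
    for _ in range(n_iter):
        theta_p = theta + rw_scale * rng.standard_normal(theta.size)
        # Crank-Nicolson move: keeps u' ~ N(0, I) exactly while
        # correlating the likelihood estimates at theta and theta_p.
        u_p = rho * u + np.sqrt(1.0 - rho**2) * rng.standard_normal(n_u)
        lp_p = logprior(theta_p) + loglike_hat(theta_p, u_p)
        if np.log(rng.uniform()) < lp_p - lp:   # symmetric proposal
            theta, u, lp = theta_p, u_p, lp_p
        chain.append(theta.copy())
    return np.array(chain)
```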

Piecewise deterministic Markov processes (PDMPs) are a class of stochastic processes with applications in several fields of applied mathematics, ranging from the mathematical modeling of physical phenomena to computational methods. A PDMP is specified by three characteristic quantities: the deterministic motion, the law of the random event times, and the jump kernels. The applicability of PDMPs to real-world scenarios is currently limited by the fact that these processes can be simulated only when all three characteristics can be simulated exactly. To overcome this problem, we introduce discretisation schemes for PDMPs which make their approximate simulation possible. In particular, we design both first-order and higher-order schemes that rely on approximations of one or more of the three characteristics. For the proposed approximation schemes we study both pathwise convergence to the continuous PDMP as the step size tends to zero and convergence in law to the invariant measure of the PDMP in the long-time limit. Moreover, we apply our theoretical results to several PDMPs that arise in the computational statistics and mathematical biology literature.
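
As a toy illustration of a first-order scheme of this flavour, the sketch below advances the deterministic motion with forward Euler and approximates the event law by a one-step thinning probability $1 - e^{-\lambda(x)\Delta t}$. This particular combination is a generic stand-in, not one of the schemes proposed in the paper.

```python
import numpy as np

def pdmp_euler(x0, drift, rate, jump, dt, t_end, rng=None):
    # drift(x): deterministic vector field, integrated by forward Euler.
    # rate(x):  event intensity lambda(x).
    # jump(x, rng): draws the post-jump state from the jump kernel.
    rng = rng or np.random.default_rng()
    x, t = np.asarray(x0, float), 0.0
    path = [x.copy()]
    while t < t_end:
        x = x + dt * drift(x)                         # flow step
        if rng.uniform() < -np.expm1(-rate(x) * dt):  # P(event in dt)
            x = jump(x, rng)                          # jump step
        t += dt
        path.append(x.copy())
    return np.array(path)
```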

In this paper, we consider the multi-armed bandit problem with high-dimensional features. First, we prove a minimax lower bound, $\mathcal{O}\big((\log d)^{\frac{\alpha+1}{2}}T^{\frac{1-\alpha}{2}}+\log T\big)$, for the cumulative regret, in terms of the horizon $T$, the dimension $d$ and a margin parameter $\alpha\in[0,1]$, which controls the separation between the optimal and the sub-optimal arms. This new lower bound unifies existing regret bounds whose differing dependencies on $T$ stem from the different values of the margin parameter $\alpha$ implied by their assumptions. Second, we propose a simple and computationally efficient algorithm, inspired by the general Upper Confidence Bound (UCB) strategy, that achieves a regret upper bound matching the lower bound. The proposed algorithm uses a properly centered $\ell_1$-ball as the confidence set, in contrast to the commonly used ellipsoid confidence set. In addition, the algorithm does not require any forced-sampling step and is thereby adaptive to the practically unknown margin parameter. Simulations and a real data analysis are conducted to compare the proposed method with existing ones in the literature.
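
The computational appeal of an $\ell_1$-ball confidence set is that the resulting UCB index has a closed form: the support function of an $\ell_1$-ball is the $\ell_\infty$ norm, so $\max_{\|\theta-\hat\theta\|_1\le r} x^\top\theta = x^\top\hat\theta + r\|x\|_\infty$. The sketch below illustrates this; the Lasso centering and the fixed radius are simplifying assumptions, not the paper's exact estimator or radius schedule.

```python
import numpy as np
from sklearn.linear_model import Lasso

def l1_ucb_index(x, theta_hat, radius):
    # Maximum of x . theta over the l1-ball of the given radius
    # centered at theta_hat, in closed form via the support function.
    return x @ theta_hat + radius * np.max(np.abs(x))

def choose_arm(contexts, X_hist, y_hist, radius, lasso_penalty=0.05):
    # Pick the arm whose context maximizes the l1-ball UCB index,
    # centering the ball at a Lasso estimate of the reward parameter.
    theta_hat = np.zeros(contexts.shape[1])
    if len(y_hist) > 0:
        theta_hat = Lasso(alpha=lasso_penalty).fit(X_hist, y_hist).coef_
    scores = [l1_ucb_index(x, theta_hat, radius) for x in contexts]
    return int(np.argmax(scores))
```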

As a consequence of Bloch's theorem, the numerical computation of the fermionic ground state density matrices and energies of periodic Schr\"odinger operators involves integrals over the Brillouin zone. These integrals are difficult to compute numerically in metals due to discontinuities in the integrand. We perform an error analysis of several widely-used quadrature rules and smearing methods for Brillouin zone integration. We precisely identify the assumptions implicit in these methods and rigorously prove error bounds. Numerical results for two-dimensional periodic systems are also provided. Our results shed light on the properties of these numerical schemes, and provide guidance as to the appropriate choice of numerical parameters.
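
As a toy example of the smearing approach, the sketch below averages a Gaussian-smeared occupation factor over a uniform $k$-grid for a two-dimensional tight-binding band; the band, the smearing width and the quadrature rule are illustrative choices, not those analysed in the paper.

```python
import numpy as np
from scipy.special import erf

def smeared_occupation(e, mu, sigma):
    # Gaussian smearing: a smooth surrogate for the sharp occupation
    # step 1_{e < mu} that causes the Fermi-surface discontinuity.
    return 0.5 * (1.0 - erf((e - mu) / sigma))

def bz_density(mu, sigma, n_k):
    # Average the smeared occupation of a toy 2D tight-binding band
    # epsilon(k) = -2(cos kx + cos ky) over a uniform midpoint k-grid.
    k = 2.0 * np.pi * (np.arange(n_k) + 0.5) / n_k
    kx, ky = np.meshgrid(k, k)
    eps = -2.0 * (np.cos(kx) + np.cos(ky))
    return smeared_occupation(eps, mu, sigma).mean()
```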

Derivative-based optimization methods are efficient at solving optimal control problems near local optima. However, they cease to make progress when derivative information vanishes. The inference approach to optimal control does not place strict requirements on the objective landscape; however, sampling, the primary tool for solving such problems, tends to be much slower in computation time. We propose a new method that combines second-order methods with inference. We utilise the Kullback-Leibler (KL) control framework to formulate an inference problem that computes the optimal controls from an adaptive distribution approximating the solution of the second-order method. Our method allows for combining simple convex and non-convex cost functions, which simplifies cost function design and leverages the strengths of both inference and second-order optimization. We compare our method to Model Predictive Path Integral (MPPI) control and the iterative Linear Quadratic Gaussian (iLQG) method, outperforming both in sample efficiency and solution quality on manipulation and obstacle avoidance tasks.
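
For context, the core of the MPPI baseline that the paper compares against is a single KL-weighted update of the control mean; a minimal sketch follows. The interface and the temperature parameter are generic choices, and this is the baseline update, not the proposed second-order combination.

```python
import numpy as np

def kl_weighted_update(u_mean, sample_costs, noises, temperature):
    # sample_costs[i] is the rollout cost of the perturbed control
    # u_mean + noises[i]; the new mean is the exponentially weighted
    # average of the samples, which minimizes expected cost plus a
    # KL penalty to the sampling distribution.
    c = sample_costs - sample_costs.min()   # shift for numerical stability
    w = np.exp(-c / temperature)
    w /= w.sum()
    return u_mean + np.tensordot(w, noises, axes=1)
```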

When statistical analyses consider multiple data sources, Markov melding provides a method for combining the source-specific Bayesian models. Markov melding joins together submodels that share a common quantity. One challenge is that the prior for this quantity can be implicit, so its prior density must be estimated. We show that error in this density estimate makes the two-stage Markov chain Monte Carlo sampler employed by Markov melding unstable and unreliable. We propose a robust two-stage algorithm that estimates the required prior marginal self-density ratios using weighted samples, dramatically improving accuracy in the tails of the distribution. The stabilised algorithm is computationally practical and provides reliable inference. We demonstrate our approach on an evidence synthesis for inferring HIV prevalence and an evidence synthesis for A/H1N1 influenza.
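
One generic way to estimate such a self-density ratio from weighted samples is a weighted kernel density estimate, as in the one-dimensional sketch below. This is a stand-in to show where the weights enter, not the authors' estimator.

```python
import numpy as np
from scipy.stats import gaussian_kde   # the weights argument needs scipy >= 1.2

def self_density_ratio(phi_new, phi_cur, samples, weights):
    # Weighted KDE of the implicit prior marginal (1-d); the weights
    # are intended to shore up the estimate in the tails, where a
    # plain KDE from unweighted samples is least reliable.
    kde = gaussian_kde(samples, weights=weights)
    return kde(phi_new)[0] / kde(phi_cur)[0]
```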

We study the robust recovery of a low-rank matrix from sparsely and grossly corrupted Gaussian measurements, with no prior knowledge of the intrinsic rank. We consider the robust matrix factorization approach. We employ a robust $\ell_1$ loss function and deal with the challenge of the unknown rank by using an overspecified factored representation of the matrix variable. We then solve the associated nonconvex nonsmooth problem using a subgradient method with diminishing stepsizes. We show that under a regularity condition on the sensing matrices and corruption, which we call the restricted direction preserving property (RDPP), the subgradient method converges to the exact low-rank solution at a sublinear rate even when the rank is overspecified. Moreover, our result is more general in the sense that the convergence automatically speeds up to a linear rate once the factor rank matches the unknown rank. On the other hand, we show that the RDPP condition holds in generic settings, such as Gaussian measurements under independent or adversarial sparse corruptions, a result that could be of independent interest. Both the exact recovery and the convergence rate of the proposed subgradient method are numerically verified in the overspecified regime. Moreover, our experiments further show that our particular design of diminishing stepsize effectively prevents overfitting for robust recovery under overparameterized models, such as robust matrix sensing and learning robust deep image priors. This regularization effect is worth further investigation.
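
A minimal sketch of such a subgradient method, assuming the sensing operator is given as a matrix acting on $\mathrm{vec}(UV^\top)$ and the stepsize diminishes geometrically as $\eta_0 q^k$ (the constants and the initialization scale below are arbitrary illustrative choices, not the paper's tuned values):

```python
import numpy as np

def robust_mf_subgradient(A, y, n1, n2, r_over, eta0=0.1, q=0.98,
                          n_iter=500, rng=None):
    # Subgradient method for min_{U,V} (1/m) ||A vec(U V^T) - y||_1
    # with an overspecified factor rank r_over and geometrically
    # diminishing stepsize eta0 * q**k.
    rng = rng or np.random.default_rng()
    U = rng.standard_normal((n1, r_over)) / np.sqrt(n1)
    V = rng.standard_normal((n2, r_over)) / np.sqrt(n2)
    m = len(y)
    for k in range(n_iter):
        r = A @ (U @ V.T).ravel() - y
        G = (A.T @ np.sign(r)).reshape(n1, n2) / m  # subgradient matrix
        step = eta0 * q**k
        U, V = U - step * (G @ V), V - step * (G.T @ U)
    return U, V
```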

We propose a monotone discretization for the integral fractional Laplace equation on bounded Lipschitz domains with the homogeneous Dirichlet boundary condition. The method is inspired by a quadrature-based finite difference method of Huang and Oberman, but is defined on unstructured grids in arbitrary dimensions, with a more flexible domain for approximating the singular integral. The scale of the singular-integral domain depends not only on the local grid size, but also on the distance to the boundary, since the H\"{o}lder coefficient of the solution deteriorates as the boundary is approached. By using a discrete barrier function that also reflects the distance to the boundary, we show optimal pointwise convergence rates in terms of the H\"{o}lder regularity of the data on both quasi-uniform and graded grids. Several numerical examples are provided to illustrate the sharpness of the theoretical results.
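
For reference, the operator being discretized is the integral fractional Laplacian in its standard pointwise form (a textbook definition, not notation specific to this paper):

```latex
(-\Delta)^{s} u(x)
  = C_{d,s}\,\mathrm{p.v.}\!\int_{\mathbb{R}^{d}}
    \frac{u(x)-u(y)}{|x-y|^{d+2s}}\,dy,
\qquad
C_{d,s} = \frac{4^{s}\,\Gamma(d/2+s)}{\pi^{d/2}\,\lvert\Gamma(-s)\rvert},
\quad s\in(0,1).
```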

In recent years, increasing attention has been paid to accelerated degradation testing, which aims at accurate estimation of the reliability properties of systems designed to work properly for years or even decades. In this approach, degradation data from particular testing levels of the stress variable(s) are extrapolated, via an appropriate statistical model, to obtain estimates of lifetime quantiles at normal use levels. In this paper we propose optimal experimental designs for repeated-measures accelerated degradation tests with competing failure modes that correspond to multiple response components. The observation time points are assumed to be fixed and known in advance. The marginal degradation paths are expressed using linear mixed-effects models. The optimal design is obtained by minimizing the asymptotic variance of the estimator of some quantile of the failure-time distribution at the normal use conditions. Numerical examples are introduced to assess the robustness of the proposed optimal designs and to compare their efficiency with standard experimental designs.
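
A generic random-intercept, random-slope instance of such a marginal degradation path, for unit $i$ observed at the fixed time point $t_j$, reads as follows; the paper's exact parametrization, including the dependence on stress levels and the coupling of multiple response components, is not reproduced here.

```latex
y_{ij} = (\beta_0 + b_{0i}) + (\beta_1 + b_{1i})\,t_j + \varepsilon_{ij},
\qquad
(b_{0i}, b_{1i})^{\top} \sim \mathcal{N}(0,\Sigma_b),
\quad
\varepsilon_{ij} \sim \mathcal{N}(0,\sigma^{2}).
```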

Implicitly filtered large-eddy simulation (LES) is by nature numerically under-resolved. With the sole exception of Fourier-spectral methods, discrete numerical derivative operators cannot accurately represent the dynamics of all of the represented scales. Since the resolution scale in an LES usually lies in the inertial range, these poorly represented scales are dynamically significant, and errors in their dynamics can affect all resolved scales. This Letter focuses on characterizing the effects of numerical dispersion error by studying the energy cascade in LES of convecting homogeneous isotropic turbulence. Numerical energy and transfer spectra reveal that energy is not transferred at the appropriate rate to wavemodes where significant dispersion error is present. This leads to a deficiency of energy in highly dispersive modes and an accompanying pile-up of energy in the well-resolved modes, since dissipation by the subgrid model is diminished. An asymptotic analysis indicates that dispersion error causes a phase decoherence between triad-interacting wavemodes, leading to a reduction in the mean energy transfer rate for these scales. These findings are relevant to a wide range of LES, since turbulence commonly convects through the grid in practical simulations. Further, these results indicate that the resolved scales should be defined so as to exclude the dispersive modes.
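
The mechanism can be made concrete with the textbook notion of a modified wavenumber: a finite-difference derivative advects the Fourier mode $e^{ikx}$ with an effective wavenumber $k'$ rather than $k$, and the gap between the two is the dispersion error. The formulas below are standard results for central schemes, not computations from the Letter.

```python
import numpy as np

def modified_wavenumber(k, dx, scheme="central2"):
    # Modified wavenumber k' of common central-difference first
    # derivatives; the dispersion error is the gap between k' and k.
    if scheme == "central2":   # (u[i+1] - u[i-1]) / (2 dx)
        return np.sin(k * dx) / dx
    if scheme == "central4":   # five-point, fourth-order stencil
        return (8.0 * np.sin(k * dx) - np.sin(2.0 * k * dx)) / (6.0 * dx)
    raise ValueError(scheme)

# Example: at k dx = pi/2 the second-order scheme propagates this mode
# with its phase speed reduced by a factor of 2/pi, decohering triad
# interactions that involve poorly resolved wavemodes.
```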
