
We construct a fast exact algorithm for the simulation of the first-passage time, jointly with the undershoot and overshoot, of a tempered stable subordinator over an arbitrary non-increasing absolutely continuous function. We prove that the running time of our algorithm has finite exponential moments and provide bounds on its expected running time with explicit dependence on the characteristics of the process and the initial value of the function. The expected running time grows at most cubically in the stability parameter (as it approaches either $0$ or $1$) and is linear in the tempering parameter and the initial value of the function. Numerical performance, based on the implementation in the dedicated GitHub repository, exhibits good agreement with our theoretical bounds. We provide numerical examples to illustrate the performance of our algorithm in Monte Carlo estimation.
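
The exact sampler itself is involved; as a rough illustration of the objects being simulated, the sketch below approximates the first-passage triplet on a fixed time grid. It assumes the subordinator is normalized so that $E[e^{-sX_t}] = \exp(-t s^\alpha)$ before tempering, uses Kanter's representation for stable increments and exponential tilting by rejection for the tempering. The grid-based crossing detection is an approximation, not the paper's exact method.

```python
import numpy as np

rng = np.random.default_rng(0)

def stable_increment(alpha, h, rng):
    """Kanter's representation: positive alpha-stable increment over time h,
    normalized so that E[exp(-s * X_h)] = exp(-h * s**alpha)."""
    theta = rng.uniform(0.0, np.pi)
    e = rng.exponential(1.0)
    a = (np.sin(alpha * theta) ** (alpha / (1 - alpha))
         * np.sin((1 - alpha) * theta)
         / np.sin(theta) ** (1 / (1 - alpha)))
    return h ** (1 / alpha) * (a / e) ** ((1 - alpha) / alpha)

def tempered_stable_increment(alpha, q, h, rng):
    """Exponential tilting by rejection: accept a stable proposal X with
    probability exp(-q * X); the per-proposal acceptance rate is
    exp(-h * q**alpha), close to 1 for small h."""
    while True:
        x = stable_increment(alpha, h, rng)
        if rng.uniform() <= np.exp(-q * x):
            return x

def first_passage(alpha, q, b, h=1e-3, t_max=10.0, rng=rng):
    """Approximate first passage of the subordinator over the non-increasing
    boundary b(t); returns (time, undershoot, overshoot) on the grid."""
    t, x = 0.0, 0.0
    while t < t_max:
        jump = tempered_stable_increment(alpha, q, h, rng)
        t, x_prev, x = t + h, x, x + jump
        if x >= b(t):
            return t, b(t) - x_prev, x - b(t)
    return None

print(first_passage(alpha=0.7, q=1.0, b=lambda t: 2.0 - 0.5 * t))
```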

Related content

Multiphysics simulations frequently require transferring solution fields between subproblems with non-matching spatial discretizations, typically using interpolation techniques. Standard methods are usually based on measuring the closeness between points by means of the Euclidean distance, which does not account for curvature, cuts, cavities or other non-trivial geometrical or topological features of the domain. This may lead to spurious oscillations in the interpolant in proximity to these features. To overcome this issue, we propose a modification to rescaled localized radial basis function (RL-RBF) interpolation that accounts for the geometry of the interpolation domain, yielding conformity and fidelity to geometrical and topological features. The proposed method, referred to as RL-RBF-G, relies on measuring the geodesic distance between data points. RL-RBF-G removes spurious oscillations appearing in the RL-RBF interpolant, resulting in increased accuracy in domains with complex geometries. We demonstrate the effectiveness of RL-RBF-G interpolation through a convergence study in an idealized setting. Furthermore, we discuss the algorithmic aspects and the implementation of RL-RBF-G interpolation in a distributed-memory parallel framework, and present the results of a strong scalability test yielding nearly ideal results. Finally, we show the effectiveness of RL-RBF-G interpolation in multiphysics simulations by considering an application to a whole-heart cardiac electromechanics model.
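
A minimal sketch of the rescaled-RBF idea with a pluggable metric (the helper names are illustrative; the actual RL-RBF-G implementation is mesh-based and distributed-memory parallel). Passing the Euclidean distance recovers plain RL-RBF; swapping in a geodesic distance, e.g. Dijkstra shortest paths on the mesh graph, gives the RL-RBF-G variant:

```python
import numpy as np

def wendland_c2(r, radius):
    """Compactly supported Wendland C2 kernel; phi(r) = 0 for r >= radius."""
    r = np.clip(r / radius, 0.0, 1.0)
    return (1 - r) ** 4 * (4 * r + 1)

def rl_rbf_fit(points, values, dist, radius):
    """Rescaled RBF: interpolate f and the constant 1 with the same kernel,
    then take their ratio. `dist(a, b)` is any metric: Euclidean for plain
    RL-RBF, a geodesic distance for RL-RBF-G."""
    n = len(points)
    A = np.array([[wendland_c2(dist(points[i], points[j]), radius)
                   for j in range(n)] for i in range(n)])
    gamma_f = np.linalg.solve(A, values)
    gamma_1 = np.linalg.solve(A, np.ones(n))
    def evaluate(x):
        phi = np.array([wendland_c2(dist(x, p), radius) for p in points])
        return phi @ gamma_f / (phi @ gamma_1)
    return evaluate

# 1D toy example with the Euclidean metric.
pts = [np.array([x]) for x in np.linspace(0, 1, 11)]
f = rl_rbf_fit(pts, np.sin(2 * np.pi * np.linspace(0, 1, 11)),
               dist=lambda a, b: np.linalg.norm(a - b), radius=0.35)
print(f(np.array([0.23])))
```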

A component-splitting method is proposed to improve convergence characteristics for implicit time integration of compressible multicomponent reactive flows. The characteristic decomposition of the flux Jacobian of the multicomponent Navier-Stokes equations yields a large sparse eigensystem, presenting challenges of slow convergence and high computational costs for implicit methods. To address this issue, the component-splitting method segregates the implicit operator into two parts: one for the flow equations (density/momentum/energy) and the other for the component equations. Each part's implicit operator employs flux-vector splitting based on its respective spectral radius to achieve accelerated convergence. This approach improves the computational efficiency of implicit iteration, mitigating the quadratic increase in time cost with the number of species. Two consistency corrections are developed to reduce the introduced component-splitting error and to ensure the numerical consistency of the mass fractions. Importantly, the impact of the component-splitting method on accuracy is minimal as the residual approaches convergence. The accuracy, efficiency, and robustness of the component-splitting method are thoroughly investigated and compared with the coupled implicit scheme through several numerical cases involving thermo-chemical nonequilibrium hypersonic flows. The results demonstrate that the component-splitting method decreases the number of iteration steps required for convergence of the residual and wall heat flux, decreases the computation time per iteration step, and drives the residual to a lower magnitude. The acceleration efficiency is enhanced with increases in the CFL number and the number of species.
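
A toy linear analogue of the segregation idea (not the actual flux-vector-splitting scheme): the implicit operator of a backward-Euler step is split into a flow block and a species block, each inverted separately with the cross-coupling lagged. All matrices and sizes here are illustrative placeholders; the printed norm is the splitting error relative to the fully coupled solve.

```python
import numpy as np

# Toy coupled system du/dt = J u with a "flow" block (first 3 unknowns:
# density/momentum/energy) and a "species" block (remaining unknowns).
rng = np.random.default_rng(1)
n_flow, n_spec = 3, 5
J = -np.eye(n_flow + n_spec) + 0.1 * rng.standard_normal((n_flow + n_spec,) * 2)
u = np.ones(n_flow + n_spec)
dt = 0.5
I = np.eye(n_flow + n_spec)

# Coupled implicit Euler: solve the full (n_flow + n_spec) system at once.
u_coupled = np.linalg.solve(I - dt * J, u)

# Component-split implicit Euler: invert only the diagonal blocks and treat
# the cross-coupling explicitly (lagged), so each solve is small.
A = I - dt * J
ff, ss = slice(0, n_flow), slice(n_flow, None)
u_new = u.copy()
u_new[ff] = np.linalg.solve(A[ff, ff], u[ff] + dt * J[ff, ss] @ u[ss])
u_new[ss] = np.linalg.solve(A[ss, ss], u[ss] + dt * J[ss, ff] @ u_new[ff])
print(np.linalg.norm(u_new - u_coupled))  # splitting error, O(dt^2 * coupling)
```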

We consider the problem of zero-error function computation with side information. Alice has a source $X$ and Bob has a correlated source $Y$, and they can communicate via either a classical or a quantum channel. Bob wants to calculate $f(X,Y)$ with zero error. We aim to characterize the minimum amount of information that Alice needs to send to Bob for this to happen. In the classical setting, this quantity depends on the asymptotic growth of $\chi(G^{(m)})$, the chromatic number of an appropriately defined $m$-instance "confusion graph". In this work we present structural characterizations of $G^{(m)}$ and demonstrate two function computation scenarios that have the same single-instance confusion graph. However, in one case there is a strict advantage in using quantum transmission over classical transmission, whereas there is no such advantage in the other case.
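
For concreteness, here is a small sketch of the single-instance confusion graph and the greedy-coloring upper bound on its chromatic number (toy function and support; `networkx` assumed available). A proper coloring of the confusion graph is exactly a zero-error encoding for Alice:

```python
import itertools
import networkx as nx

def confusion_graph(xs, ys, support, f):
    """Single-instance confusion graph: x1 and x2 are confusable when some y
    is jointly possible with both and f distinguishes them."""
    G = nx.Graph()
    G.add_nodes_from(xs)
    for x1, x2 in itertools.combinations(xs, 2):
        if any((x1, y) in support and (x2, y) in support
               and f(x1, y) != f(x2, y) for y in ys):
            G.add_edge(x1, x2)
    return G

# Toy example: Bob knows y and wants f(x, y) = (x + y) % 2 with full support.
xs, ys = range(4), range(4)
support = {(x, y) for x in xs for y in ys}
G = confusion_graph(xs, ys, support, lambda x, y: (x + y) % 2)

# Greedy coloring upper-bounds the chromatic number; here two messages
# suffice (Alice sends the parity of x).
coloring = nx.coloring.greedy_color(G, strategy="largest_first")
print(max(coloring.values()) + 1)  # number of distinct messages needed
```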

Iterated conditional expectation (ICE) g-computation is an estimation approach for addressing time-varying confounding for both longitudinal and time-to-event data. Unlike other g-computation implementations, ICE avoids the need to specify models for each time-varying covariate. For variance estimation, previous work has suggested the bootstrap. However, bootstrapping can be computationally intensive and sensitive to the number of resamples used. Here, we present ICE g-computation as a set of stacked estimating equations. Therefore, the variance for the ICE g-computation estimator can be consistently estimated using the empirical sandwich variance estimator. Performance of the variance estimator was evaluated empirically with a simulation study. The proposed approach is also demonstrated with an illustrative example on the effect of cigarette smoking on the prevalence of hypertension. In the simulation study, the empirical sandwich variance estimator appropriately estimated the variance. When comparing runtimes between the sandwich variance estimator and the bootstrap for the applied example, the sandwich estimator was substantially faster, even when bootstraps were run in parallel. The empirical sandwich variance estimator is a viable option for variance estimation with ICE g-computation.
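
To show the sandwich mechanics in isolation, here is a minimal stacked M-estimation example (mean and variance, not the ICE g-computation equations themselves), with the bread matrix obtained by numerical differentiation:

```python
import numpy as np

def ee(theta, x):
    """Stacked estimating functions: theta[0] targets the mean, theta[1] the
    variance; the second equation depends on the first (stacking)."""
    return np.array([x - theta[0], (x - theta[0]) ** 2 - theta[1]])

rng = np.random.default_rng(0)
x = rng.normal(2.0, 1.5, size=500)
theta_hat = np.array([x.mean(), x.var()])  # solves the stacked EEs exactly

# Empirical sandwich: V = B^{-1} M B^{-T} / n with
# B = E[-d(psi)/d(theta)] ("bread") and M = E[psi psi^T] ("meat").
psi = np.stack([ee(theta_hat, xi) for xi in x])  # shape (n, 2)
M = psi.T @ psi / len(x)
B = np.zeros((2, 2))
eps = 1e-5
for j in range(2):
    d = np.zeros(2)
    d[j] = eps
    plus = np.stack([ee(theta_hat + d, xi) for xi in x]).mean(axis=0)
    minus = np.stack([ee(theta_hat - d, xi) for xi in x]).mean(axis=0)
    B[:, j] = -(plus - minus) / (2 * eps)
Binv = np.linalg.inv(B)
V = Binv @ M @ Binv.T / len(x)
print(np.sqrt(np.diag(V)))  # sandwich standard errors for mean and variance
```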

We use backward error analysis for differential equations to obtain modified or distorted equations describing the behaviour of the Newmark scheme applied to the transient structural dynamics equation. Using these results, we show how to construct compensation terms from the original parameters of the system, which improve the performance of Newmark simulations without changing the time step or modifying the scheme itself. Two such compensations are given: one eliminates numerical damping, while the other achieves fourth-order accurate calculations using the traditionally second-order Newmark method.
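
For reference, a plain Newmark integrator without the compensation terms proposed above; $\gamma > 1/2$ is what introduces the numerical damping that one of the compensations eliminates:

```python
import numpy as np

def newmark(M, C, K, f, u0, v0, dt, n_steps, beta=0.25, gamma=0.5):
    """Newmark time integration for M u'' + C u' + K u = f(t).
    beta = 1/4, gamma = 1/2 is the unconditionally stable
    average-acceleration variant (second-order accurate)."""
    u, v = np.array(u0, float), np.array(v0, float)
    a = np.linalg.solve(M, f(0.0) - C @ v - K @ u)
    lhs = M + gamma * dt * C + beta * dt**2 * K
    for n in range(n_steps):
        t = (n + 1) * dt
        u_pred = u + dt * v + (0.5 - beta) * dt**2 * a
        v_pred = v + (1 - gamma) * dt * a
        a_new = np.linalg.solve(lhs, f(t) - C @ v_pred - K @ u_pred)
        u = u_pred + beta * dt**2 * a_new
        v = v_pred + gamma * dt * a_new
        a = a_new
    return u, v

# Undamped unit oscillator: exact solution u(t) = cos(t).
M = np.eye(1); C = np.zeros((1, 1)); K = np.eye(1)
u, v = newmark(M, C, K, lambda t: np.zeros(1), [1.0], [0.0], dt=0.05, n_steps=200)
print(u, np.cos(10.0))  # small phase error, second order without compensation
```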

As data from monitored structures become increasingly available, the demand grows for them to be used efficiently to add value to structural operation and management. One way in which this can be achieved is to use structural response measurements to assess the usefulness of models employed to describe deterioration processes acting on a structure, as well as the mechanical behavior of the latter. This work aims to achieve this by first framing Structural Health Monitoring as a Bayesian model updating problem, in which the quantities of inferential interest characterize the deterioration process and/or structural state. Then, using the posterior estimates of these quantities, a decision-theoretic definition is proposed to assess the structural and/or deterioration models based on (a) their ability to explain the data and (b) their performance on downstream decision-support tasks. The proposed framework is demonstrated on strain response data obtained from a test specimen which was subjected to three-point bending while simultaneously exposed to accelerated corrosion leading to thickness loss. Results indicate that the level of \textit{a priori} domain knowledge on the form of the deterioration is critical.
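
As a schematic of the Bayesian model updating step only, the sketch below infers a single deterioration-rate parameter from noisy measurements under a hypothetical linear thickness-loss model, using a simple parameter-grid posterior; the actual framework handles richer deterioration and structural models.

```python
import numpy as np

# Hypothetical setup: thickness loss d(t) = k * t with unknown rate k;
# noisy measurements y_i = k_true * t_i + noise.
rng = np.random.default_rng(3)
t = np.linspace(0, 10, 20)
k_true, sigma = 0.12, 0.05
y = k_true * t + rng.normal(0, sigma, t.size)

# Grid-based Bayesian update: flat prior encodes weak domain knowledge.
k_grid = np.linspace(0.0, 0.3, 601)
log_prior = np.zeros_like(k_grid)
log_lik = np.array([-0.5 * np.sum((y - k * t) ** 2) / sigma**2 for k in k_grid])
log_post = log_prior + log_lik
post = np.exp(log_post - log_post.max())
post /= post.sum() * (k_grid[1] - k_grid[0])  # normalize to a density
print(k_grid[np.argmax(post)])  # MAP estimate of the deterioration rate
```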

Weakly modular graphs are defined as the class of graphs that satisfy the \emph{triangle condition ($TC$)} and the \emph{quadrangle condition ($QC$)}. We study an interesting subclass of weakly modular graphs that satisfies a stronger version of the triangle condition, known as the \emph{triangle diamond condition ($TDC$)}, and term this subclass the \emph{diamond-weakly modular graphs}. It is observed that this class contains the class of bridged graphs and the class of weakly bridged graphs. The interval function $I_G$ of a connected graph $G$ with vertex set $V$ is an important concept in metric graph theory and is one of the prime examples of a transit function; that is, a set function from the Cartesian product $V\times V$ to the power set of $V$ satisfying the expansive, symmetric and idempotent axioms. In this paper, we derive an interesting axiom, denoted as $(J0')$, obtained from a well-known axiom introduced by Marlow Sholander in 1952, denoted as $(J0)$. It is proved that the axiom $(J0')$ characterizes the diamond-weakly modular graphs. We propose certain types of independent first-order betweenness axioms on an arbitrary transit function $R$ and prove that an arbitrary transit function becomes the interval function of a diamond-weakly modular graph if and only if $R$ satisfies these betweenness axioms. Similar characterizations are obtained for the interval functions of bridged graphs and weakly bridged graphs.
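
For readability, the standard definitions referenced above (the interval function and the transit-function axioms), written out:

```latex
% Interval function of a connected graph G = (V, E) with shortest-path metric d:
\[
  I_G(u,v) = \{\, w \in V : d(u,w) + d(w,v) = d(u,v) \,\}
\]
% Transit function axioms for R : V \times V \to 2^V:
\begin{align*}
  &\text{(expansive)}  & u, v  &\in R(u,v), \\
  &\text{(symmetric)}  & R(u,v) &= R(v,u), \\
  &\text{(idempotent)} & R(u,u) &= \{u\}.
\end{align*}
```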

We survey recent developments in the field of complexity of pathwise approximation in $p$-th mean of the solution of a stochastic differential equation at the final time based on finitely many evaluations of the driving Brownian motion. First, we briefly review the case of equations with globally Lipschitz continuous coefficients, for which an error rate of at least $1/2$ in terms of the number of evaluations of the driving Brownian motion is always guaranteed by using the equidistant Euler-Maruyama scheme. Then we illustrate that giving up the global Lipschitz continuity of the coefficients may lead to a non-polynomial decay of the error for the Euler-Maruyama scheme or even to an arbitrarily slow decay of the smallest possible error that can be achieved on the basis of finitely many evaluations of the driving Brownian motion. Finally, we turn to recent positive results for equations with a drift coefficient that is not globally Lipschitz continuous. Here we focus on scalar equations with a Lipschitz continuous diffusion coefficient and a drift coefficient that satisfies piecewise smoothness assumptions or has fractional Sobolev regularity, and we present corresponding complexity results.
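
The equidistant Euler-Maruyama scheme mentioned above, in a few lines; geometric Brownian motion serves as the globally Lipschitz example, for which the scheme achieves the strong rate $1/2$ in the number of Brownian increments:

```python
import numpy as np

def euler_maruyama(mu, sigma, x0, T, n, rng):
    """Equidistant Euler-Maruyama approximation of dX = mu(X)dt + sigma(X)dW
    at the final time T, using n evaluations of the driving Brownian motion."""
    dt = T / n
    x = x0
    for _ in range(n):
        dw = rng.normal(0.0, np.sqrt(dt))
        x = x + mu(x) * dt + sigma(x) * dw
    return x

# Geometric Brownian motion: linear, hence globally Lipschitz, coefficients.
rng = np.random.default_rng(42)
xT = euler_maruyama(mu=lambda x: 0.05 * x, sigma=lambda x: 0.2 * x,
                    x0=1.0, T=1.0, n=1000, rng=rng)
print(xT)
```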

Many applications rely on solving time-dependent partial differential equations (PDEs) that include second derivatives. Summation-by-parts (SBP) operators are crucial for developing stable, high-order accurate numerical methodologies for such problems. Conventionally, SBP operators are tailored to the assumption that polynomials accurately approximate the solution, and SBP operators should thus be exact for them. However, this assumption falls short for a range of problems for which other approximation spaces are better suited. We recently addressed this issue and developed a theory for first-derivative SBP operators based on general function spaces, coined function-space SBP (FSBP) operators. In this paper, we extend the innovation of FSBP operators to accommodate second derivatives. The developed second-derivative FSBP operators maintain the desired mimetic properties of existing polynomial SBP operators while allowing for greater flexibility by being applicable to a broader range of function spaces. We establish the existence of these operators and detail a straightforward methodology for constructing them. By exploring various function spaces, including trigonometric, exponential, and radial basis functions, we illustrate the versatility of our approach. The work presented here opens up possibilities for using second-derivative SBP operators based on suitable function spaces, paving the way for a wide range of applications in the future.
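
The mimetic property being preserved is the discrete integration-by-parts identity. The sketch below verifies it for the classical second-order polynomial SBP pair, shown for the first-derivative case for brevity; FSBP operators keep the same algebraic structure while changing the space on which $D$ is exact, and the second-derivative operators of this paper satisfy an analogous identity.

```python
import numpy as np

# Classical second-order SBP pair (P, Q) on n nodes: D1 = P^{-1} Q mimics
# d/dx, and Q + Q^T = B = diag(-1, 0, ..., 0, 1) encodes integration by parts.
n, h = 21, 1.0 / 20
P = h * np.eye(n); P[0, 0] = P[-1, -1] = h / 2
Q = np.zeros((n, n))
for i in range(n - 1):
    Q[i, i + 1], Q[i + 1, i] = 0.5, -0.5
Q[0, 0], Q[-1, -1] = -0.5, 0.5
D1 = np.linalg.solve(P, Q)

# Summation by parts: u^T P (D1 v) + (D1 u)^T P v = u_N v_N - u_0 v_0,
# the discrete analogue of  int u v' + int u' v = [u v].
x = np.linspace(0, 1, n)
u, v = np.exp(x), np.sin(3 * x)
lhs = u @ P @ (D1 @ v) + (D1 @ u) @ P @ v
print(lhs, u[-1] * v[-1] - u[0] * v[0])  # agree to machine precision
```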

We propose an algorithm that predicts each subsequent time step of an intractable short rate model relative to the previous time step (adjusted for drift and for the overall distribution of the previous percentile result), and we show that the method achieves superior outcomes to the unbiased estimate both on the training dataset and on separate validation data.
