We provide full theoretical guarantees for the convergence behaviour of diffusion-based generative models under the assumption of strongly log-concave data distributions, while the approximating class of functions used for score estimation consists of Lipschitz continuous functions. We demonstrate the power of our approach via a motivating example: sampling from a Gaussian distribution with unknown mean. In this case, explicit estimates are provided for the associated optimization problem, i.e. score approximation, which are then combined with the corresponding sampling estimates. As a result, we obtain the best known upper bound estimates in terms of key quantities of interest, such as the dimension and rates of convergence, for the Wasserstein-2 distance between the data distribution (Gaussian with unknown mean) and our sampling algorithm. Beyond the motivating example, and in order to allow for the use of a diverse range of stochastic optimizers, we present our results using an $L^2$-accurate score estimation assumption, which, crucially, is formed under an expectation with respect to the stochastic optimizer and our novel auxiliary process that uses only known information. This approach yields the best known convergence rate for our sampling algorithm.
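For intuition, the score in the motivating Gaussian example admits a simple closed form. Assuming, for illustration only, that the forward noising process is the standard Ornstein--Uhlenbeck dynamics $dX_t = -X_t\,dt + \sqrt{2}\,dW_t$ started at the data distribution $N(\mu, I_d)$ with unit covariance, the perturbed marginals remain Gaussian with identity covariance, so
\[
X_t \sim N\bigl(\mu e^{-t},\, I_d\bigr), \qquad \nabla \log p_t(x) = -\bigl(x - \mu e^{-t}\bigr),
\]
and estimating the score reduces to estimating the unknown mean $\mu$, which is why explicit estimates for the associated optimization problem are available in this case.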

Related content

We study the Euler scheme for scalar non-autonomous stochastic differential equations whose diffusion coefficient is not globally Lipschitz but a fractional power of a globally Lipschitz function. We analyse the strong error and establish a criterion that relates the convergence order of the Euler scheme to an inverse moment condition for the diffusion coefficient. Our result in particular applies to Cox-Ingersoll-Ross-, Chan-Karolyi-Longstaff-Sanders-, or Wright-Fisher-type stochastic differential equations and thus provides a unifying framework.
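As a concrete illustration, the sketch below applies the Euler scheme to a Cox-Ingersoll-Ross-type equation $dX_t = \kappa(\theta - X_t)\,dt + \sigma\sqrt{X_t}\,dW_t$, whose diffusion coefficient is the $1/2$-power of the globally Lipschitz function $x \mapsto x$. The parameter values and the use of the positive part inside the square root are illustrative choices, not prescriptions from the text.

```python
import numpy as np

def euler_cir(x0, kappa, theta, sigma, T, n_steps, rng):
    """Euler scheme for the CIR-type SDE dX = kappa*(theta - X) dt + sigma*sqrt(X) dW.

    The square root of the positive part is taken so the scheme stays well defined
    even if an iterate dips below zero.
    """
    dt = T / n_steps
    x = x0
    for _ in range(n_steps):
        dw = rng.normal(scale=np.sqrt(dt))
        x = x + kappa * (theta - x) * dt + sigma * np.sqrt(max(x, 0.0)) * dw
    return x

rng = np.random.default_rng(0)
# Illustrative parameters only; the strong error is typically estimated by comparing
# against a fine-grid reference path driven by the same Brownian increments.
samples = [euler_cir(x0=0.5, kappa=2.0, theta=0.5, sigma=0.4, T=1.0, n_steps=2**10, rng=rng)
           for _ in range(1000)]
print(np.mean(samples))
```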

The construction of coherent prediction models holds great importance in medical research, as such models enable health researchers to gain deeper insights into disease epidemiology and clinicians to identify patients at higher risk of adverse outcomes. One commonly employed approach to developing prediction models is variable selection through penalized regression techniques. Integrating natural variable structures into this process not only enhances model interpretability but can also increase the likelihood of recovering the true underlying model and boost prediction accuracy. However, a challenge lies in determining how to effectively integrate potentially complex selection dependencies into the penalized regression. In this work, we demonstrate how to represent selection dependencies mathematically, provide algorithms for deriving the complete set of potential models, and offer a structured approach for integrating complex rules into variable selection through the latent overlapping group Lasso. To illustrate our methodology, we apply these techniques to construct a coherent prediction model for major bleeding in hypertensive patients recently hospitalized for atrial fibrillation and subsequently prescribed oral anticoagulants. In this application, we account for a proxy of anticoagulant adherence and its interaction with dosage and the type of oral anticoagulants in addition to drug-drug interactions.
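As a toy illustration of how a selection dependency translates into overlapping groups, consider the hypothetical rule that an interaction x1:x2 may enter the model only if both main effects are selected. Because the support selected by the latent overlapping group Lasso is always a union of groups, one way to encode the rule is shown below; the grouping is a sketch for two variables, not the one used in the application.

```python
from itertools import combinations

# Groups for the latent overlapping group Lasso.  Because the selected support is
# always a union of groups, any model containing the interaction x1:x2 must also
# contain both main effects.
groups = [frozenset({"x1"}),
          frozenset({"x2"}),
          frozenset({"x1", "x2", "x1:x2"})]

# Enumerate the complete set of potential models as unions of groups.
models = {frozenset().union(*subset)
          for r in range(len(groups) + 1)
          for subset in combinations(groups, r)}

for model in sorted(models, key=len):
    print(sorted(model) or "(empty model)")
```

Enumerating all unions of groups yields the complete set of potential models, which here excludes any model containing the interaction without both main effects.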

We present accurate and mathematically consistent formulations of a diffuse-interface model for two-phase flow problems involving rapid evaporation. The model addresses challenges such as density jumps of several orders of magnitude across the liquid-vapor interface, which lead to large velocity and pressure jumps, along with dynamically changing interface topologies. To this end, we integrate an incompressible Navier--Stokes solver combined with a conservative level-set formulation and a regularized, i.e., diffuse, representation of discontinuities into a matrix-free adaptive finite element framework. The achievements are three-fold: First, this work proposes mathematically consistent definitions for the level-set transport velocity in the diffuse interface region by extrapolating the velocity from the liquid or gas phase, which exhibit superior prediction accuracy for the evaporated mass and the resulting interface dynamics compared to a local velocity evaluation, especially for highly curved interfaces. Second, we show that accurate prediction of the evaporation-induced pressure jump requires a consistent, namely a reciprocal, density interpolation across the interface, which satisfies local mass conservation. Third, the combination of diffuse interface models for evaporation with standard Stokes-type constitutive relations for viscous flows leads to significant pressure artifacts in the diffuse interface region. To mitigate these artifacts, we propose a modification of such constitutive models. Through selected analytical and numerical examples, the aforementioned properties are validated. The presented model promises new insights into the simulation-based prediction of melt-vapor interactions in thermal multiphase flows such as in laser-based powder bed fusion of metals.
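To make the notion of a reciprocal density interpolation concrete, one common way to write it (a sketch in generic notation; the precise form used in the model may differ) employs the regularized indicator $H \in [0,1]$ obtained from the level-set field:
\[
\frac{1}{\rho(H)} = \frac{H}{\rho_l} + \frac{1 - H}{\rho_v},
\]
in contrast to the arithmetic interpolation $\rho(H) = H\rho_l + (1-H)\rho_v$. The reciprocal (harmonic) form keeps the interpolated specific volume affine in $H$, consistent with the local mass conservation requirement mentioned above.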

It is well known that a multilinear system with a nonsingular M-tensor and a positive right-hand side has a unique positive solution. Tensor splitting methods generalizing the classical iterative methods for linear systems have been proposed for finding this unique positive solution. The Alternating Anderson-Richardson (AAR) method is an effective way to accelerate the classical iterative methods. In this study, we apply the idea of AAR to find the unique positive solution quickly. We first present a tensor Richardson method based on tensor regular splittings, then apply Anderson acceleration to the tensor Richardson method to derive a tensor Anderson-Richardson method, and finally employ the tensor Anderson-Richardson method periodically within the tensor Richardson method to propose a tensor AAR method. Numerical experiments show that the proposed method is effective in accelerating tensor splitting methods.
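For readers unfamiliar with AAR, the sketch below shows the alternating pattern for an ordinary linear system, used here as a simplified stand-in: the tensor method follows the same pattern with an update based on a tensor regular splitting in place of the matrix Richardson step. The relaxation weight, mixing depth, and period are illustrative.

```python
import numpy as np

def aar(A, b, x0, omega=0.2, m=4, p=5, tol=1e-10, max_iter=500):
    """Alternating Anderson-Richardson iteration for the linear system A x = b.

    Plain Richardson updates x <- x + omega*(b - A x) are performed, and every
    p-th step is replaced by an Anderson step that mixes the last m residuals.
    """
    x = x0.copy()
    X_hist, R_hist = [], []                      # recent iterates and residuals
    for k in range(1, max_iter + 1):
        r = b - A @ x
        if np.linalg.norm(r) < tol:
            break
        X_hist.append(x.copy()); R_hist.append(r.copy())
        if len(X_hist) > m:
            X_hist.pop(0); R_hist.pop(0)
        if k % p == 0 and len(R_hist) > 1:
            # Anderson step: least-squares mixing of residual differences.
            dR = np.column_stack([R_hist[i + 1] - R_hist[i] for i in range(len(R_hist) - 1)])
            dX = np.column_stack([X_hist[i + 1] - X_hist[i] for i in range(len(X_hist) - 1)])
            gamma, *_ = np.linalg.lstsq(dR, r, rcond=None)
            x = x + omega * r - (dX + omega * dR) @ gamma
        else:
            x = x + omega * r                    # plain Richardson step
    return x

# Small symmetric positive definite example; the exact solution is [5/11, 9/11].
A = np.array([[4.0, -1.0], [-1.0, 3.0]])
b = np.array([1.0, 2.0])
print(aar(A, b, np.zeros(2)))
```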

In a network of reinforced stochastic processes, for certain values of the parameters, all the agents' inclinations synchronize and converge almost surely toward a certain random variable. The present work aims to clarify when the agents can asymptotically polarize, i.e. when the common limit inclination takes the extreme values 0 or 1 with probability zero, strictly positive probability, or probability one. Moreover, we present a suitable technique to estimate this probability; this technique, together with the theoretical results, is framed in the more general setting of a class of [0, 1]-valued martingales following a specific dynamics.
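As a crude illustration of the kind of [0, 1]-valued martingale dynamics involved, the sketch below simulates a single Polya-type reinforced process (the networked, multi-agent dynamics studied in the work are more general) and records how often the trajectory ends near an extreme; all modelling choices here are assumptions made for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def reinforced_path(z0=0.5, c=1.0, n_steps=10_000):
    """One Polya-type reinforced process: Z_{n+1} = (1 - r_n) Z_n + r_n * xi_{n+1},
    with xi_{n+1} ~ Bernoulli(Z_n) and step sizes r_n = c / (n + 1).
    Since E[Z_{n+1} | Z_n] = Z_n, the process is a martingale with values in [0, 1]."""
    z = z0
    for n in range(1, n_steps + 1):
        r = c / (n + 1)
        xi = rng.random() < z
        z = (1 - r) * z + r * xi
    return z

# Crude Monte Carlo estimate of how often the trajectory appears to sit at 0 or 1.
finals = np.array([reinforced_path() for _ in range(500)])
print("fraction near an extreme:", np.mean((finals < 1e-3) | (finals > 1 - 1e-3)))
```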

The numerical solution of continuum damage mechanics (CDM) problems suffers from convergence-related challenges during the material softening stage, and consequently existing iterative solvers are subject to a trade-off between computational expense and solution accuracy. In this work, we present a novel unified arc-length (UAL) method and derive the formulation of the analytical tangent matrix and governing system of equations for both local and non-local gradient damage problems. Unlike existing arc-length solvers, which monolithically scale the external force vector, the proposed method treats that vector as an independent variable and determines the position of the system on the equilibrium path based on all the nodal variations of the external force vector. This approach renders the proposed solver substantially more efficient and robust than existing solvers used in CDM problems. We demonstrate the considerable advantages of the proposed algorithm through several benchmark 1D problems with sharp snap-backs and 2D examples under various boundary conditions and loading scenarios. The proposed UAL approach exhibits a superior ability to overcome critical increments along the equilibrium path. Moreover, in the presented examples, the proposed UAL method is 1-2 orders of magnitude faster than force-controlled arc-length and monolithic Newton-Raphson solvers.
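For context, the classical arc-length formulation being contrasted here scales a reference external force vector $\mathbf{f}_{\mathrm{ext}}$ by a single load factor $\lambda$ and closes the system with a constraint of the form (generic textbook notation, not the formulation proposed in this work):
\[
\Delta\mathbf{u}^{\mathsf T}\Delta\mathbf{u} + \psi^{2}\,\Delta\lambda^{2}\,\mathbf{f}_{\mathrm{ext}}^{\mathsf T}\mathbf{f}_{\mathrm{ext}} = \Delta\ell^{2},
\]
so the external load enters only through the scalar $\Delta\lambda$, whereas the UAL method keeps the nodal entries of the external force vector as independent unknowns along the equilibrium path.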

Numerical integration over the real line for analytic functions is studied. Our main focus is on the sharpness of the error bounds. We first derive two general lower estimates for the worst-case integration error and then apply these to establish lower bounds for various quadrature rules. These bounds are either novel or improve upon existing results, leading to lower bounds that closely match upper bounds for various formulas. Specifically, for the suitably truncated trapezoidal rule, we improve upon the general lower bounds on the worst-case error obtained by Sugihara [\textit{Numer. Math.}, 75 (1997), pp.~379--395] and provide lower bounds that are sharp up to a polynomial factor; in particular, we show that Sugihara's worst-case error bound for the trapezoidal rule cannot be improved by more than a polynomial factor. Additionally, our research reveals a discrepancy between the error decay of the trapezoidal rule and Sugihara's lower bound for general numerical integration rules, introducing a new open problem. Moreover, Gauss--Hermite quadrature is proven sub-optimal under the decay conditions on integrands we consider, a result not deducible from upper-bound arguments alone. Furthermore, to establish the near-optimality of the suitably scaled Gauss--Legendre and Clenshaw--Curtis quadratures, we generalize a recent result of Trefethen [\textit{SIAM Rev.}, 64 (2022), pp.~132--150] on upper error bounds under the decay conditions.
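The suitably truncated trapezoidal rule referred to above is simply $h\sum_{|k|\le N} f(kh)$. The sketch below evaluates it for a Gaussian integrand to show the rapid error decay as $h$ shrinks; the integrand, step sizes, and truncation level are illustrative choices only.

```python
import numpy as np

def truncated_trapezoid(f, h, N):
    """Truncated trapezoidal rule on the real line: h * sum_{|k| <= N} f(k h)."""
    k = np.arange(-N, N + 1)
    return h * np.sum(f(k * h))

f = lambda x: np.exp(-x ** 2)          # analytic integrand with integral sqrt(pi)
exact = np.sqrt(np.pi)
for h in (1.0, 0.75, 0.5):
    N = int(np.ceil(10.0 / h))         # truncate where the integrand is negligible
    print(f"h = {h}: error = {abs(truncated_trapezoid(f, h, N) - exact):.2e}")
```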

We present an accelerated greedy strategy for training projection-based reduced-order models for parametric steady and unsteady partial differential equations. Our approach exploits a hierarchical approximate proper orthogonal decomposition to speed up the construction of the empirical test space for least-squares Petrov-Galerkin formulations, a progressive construction of the empirical quadrature rule based on a warm start of the non-negative least-squares algorithm, and a two-fidelity sampling strategy to reduce the number of expensive greedy iterations. We illustrate the performance of our method on two test cases: a two-dimensional compressible inviscid flow past a LS89 blade at moderate Mach number, and a three-dimensional nonlinear mechanics problem to predict the long-time structural response of the standard section of a nuclear containment building under external loading.
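As a reminder of the basic building block, the sketch below computes a plain POD basis from a snapshot matrix via the SVD and truncates it by an energy criterion. The hierarchical approximate POD used in the work updates such a basis incrementally instead of recomputing a full SVD; the tolerance and snapshot data here are illustrative.

```python
import numpy as np

def pod_basis(snapshots, tol=1e-6):
    """Proper orthogonal decomposition of a snapshot matrix (columns = snapshots).

    Returns the leading left singular vectors capturing a 1 - tol fraction of the
    snapshot energy.
    """
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s ** 2) / np.sum(s ** 2)
    n_modes = int(np.searchsorted(energy, 1.0 - tol) + 1)
    return U[:, :n_modes]

# Illustrative snapshot matrix: 200 spatial dofs, 30 parameter samples, rank 5.
rng = np.random.default_rng(0)
snapshots = rng.normal(size=(200, 5)) @ rng.normal(size=(5, 30))
print(pod_basis(snapshots).shape)
```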

Fully understanding a complex high-resolution satellite or aerial imagery scene often requires spatial reasoning over a broad relevant context. The human object recognition system is able to understand objects in a scene over a long-range relevant context. For example, if a human observes an aerial scene that shows sections of road broken up by tree canopy, they are unlikely to conclude that the road has actually been broken into disjoint pieces by trees; instead, they infer that the canopy of nearby trees is occluding the road. However, there has been limited research on the long-range context understanding of modern machine learning models. In this work we propose a road segmentation benchmark dataset, Chesapeake Roads Spatial Context (RSC), for evaluating the spatial long-range context understanding of geospatial machine learning models, and show how commonly used semantic segmentation models can fail at this task. For example, we show that a U-Net trained to segment roads from background in aerial imagery achieves 84% recall on unoccluded roads, but just 63.5% recall on roads covered by tree canopy, despite being trained to treat both cases identically. We further analyze how the performance of models changes as the relevant context for a decision (unoccluded roads in our case) varies in distance. We release the code to reproduce our experiments and the dataset of imagery and masks to encourage future research in this direction -- //github.com/isaaccorley/ChesapeakeRSC.
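The occlusion-stratified recall quoted above can be computed from three boolean rasters: the predicted road mask, the ground-truth road mask, and a tree-canopy mask. The sketch below uses randomly generated arrays as hypothetical stand-ins for these rasters.

```python
import numpy as np

def recall(pred_road, true_road, mask):
    """Recall of road pixels restricted to the pixels selected by `mask`."""
    tp = np.sum(pred_road & true_road & mask)
    fn = np.sum(~pred_road & true_road & mask)
    return tp / (tp + fn)

# Hypothetical boolean rasters: model prediction, ground-truth roads, and a
# tree-canopy mask marking which road pixels are occluded.
rng = np.random.default_rng(0)
pred_road = rng.random((256, 256)) > 0.5
true_road = rng.random((256, 256)) > 0.7
canopy    = rng.random((256, 256)) > 0.6

print("recall, unoccluded roads:", recall(pred_road, true_road, ~canopy))
print("recall, canopy-covered roads:", recall(pred_road, true_road, canopy))
```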

Artificial neural networks thrive at solving the classification problem for a particular rigid task, acquiring knowledge through generalized learning behaviour from a distinct training phase. The resulting network resembles a static entity of knowledge, and attempts to extend this knowledge without targeting the original task result in catastrophic forgetting. Continual learning shifts this paradigm towards networks that can continually accumulate knowledge over different tasks without the need to retrain from scratch. We focus on task-incremental classification, where tasks arrive sequentially and are delineated by clear boundaries. Our main contributions are 1) a taxonomy and extensive overview of the state-of-the-art, 2) a novel framework to continually determine the stability-plasticity trade-off of the continual learner, and 3) a comprehensive experimental comparison of 11 state-of-the-art continual learning methods and 4 baselines. We empirically scrutinize method strengths and weaknesses on three benchmarks, considering Tiny ImageNet, the large-scale unbalanced iNaturalist dataset, and a sequence of recognition datasets. We study the influence of model capacity, weight decay and dropout regularization, and the order in which the tasks are presented, and we qualitatively compare methods in terms of required memory, computation time, and storage.
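To make the task-incremental setting concrete, the sketch below outlines a multi-head training loop in which tasks arrive one at a time with known boundaries and each task receives its own output head on a shared backbone. This is a generic illustration only (architecture, data, and hyperparameters are hypothetical), not any of the compared methods; a specific continual learning method would typically add a stability term at the marked line.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Shared backbone plus one classification head per task (task identity is known
# at train and test time in the task-incremental setting).
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU())
heads = nn.ModuleList()

def train_task(loader, n_classes, epochs=1, lr=1e-2):
    head = nn.Linear(256, n_classes)
    heads.append(head)
    opt = torch.optim.SGD(list(backbone.parameters()) + list(head.parameters()), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss = nn.functional.cross_entropy(head(backbone(x)), y)
            # A continual-learning method would add a stability term here, e.g. a
            # penalty on drifting away from parameters important for earlier tasks.
            loss.backward()
            opt.step()

# Hypothetical synthetic task: 100 RGB 32x32 images with 5 classes.
x = torch.randn(100, 3, 32, 32)
y = torch.randint(0, 5, (100,))
loader = DataLoader(TensorDataset(x, y), batch_size=20)
train_task(loader, n_classes=5)
print(f"{len(heads)} task head(s) trained")
```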
