
In this work we demonstrate that SVD-based model reduction techniques known for ordinary differential equations, such as the proper orthogonal decomposition, can be extended to stochastic differential equations in order to reduce the computational cost arising from both the high dimension of the considered stochastic system and the large number of independent Monte Carlo runs. We also extend the proper symplectic decomposition method to stochastic Hamiltonian systems, both with and without external forcing, and argue that preserving the underlying symplectic or variational structures results in more accurate and stable solutions that conserve energy better than when the non-geometric approach is used. We validate our proposed techniques with numerical experiments for a semi-discretization of the stochastic nonlinear Schr\"odinger equation and the Kubo oscillator.
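Below is a minimal sketch of the SVD-based reduction step described above (proper orthogonal decomposition applied to Monte Carlo snapshots of a high-dimensional stochastic system). The snapshot data, dimensions, and energy tolerance are illustrative stand-ins, not the paper's semi-discretized stochastic Schr\"odinger equation, and the symplectic variant is not reproduced here.

```python
# Sketch of POD model reduction from Monte Carlo snapshot data (assumed setup).
import numpy as np

rng = np.random.default_rng(0)
n, n_snapshots = 400, 200                           # full state dimension, snapshot count
snapshots = rng.standard_normal((n, n_snapshots))   # stand-in for SDE trajectory snapshots

# SVD of the snapshot matrix; the left singular vectors are the POD modes.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)

# Truncate so that the retained modes capture, say, 99% of the snapshot energy.
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.99)) + 1
Phi = U[:, :r]                                      # reduced basis, n x r

# Reduced coordinates z = Phi^T x; the reduced drift and diffusion would then be
# obtained by Galerkin projection, e.g. f_r(z) = Phi^T f(Phi z).
x = rng.standard_normal(n)
z = Phi.T @ x
print(r, np.linalg.norm(x - Phi @ z))
```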

Related content

The 23rd ACM/IEEE International Conference on Model Driven Engineering Languages and Systems (MODELS) is the premier conference series on model-driven software and systems engineering, organized with the support of ACM SIGSOFT and IEEE TCSE. Since 1998, MODELS has covered all aspects of modeling, from languages and methods to tools and applications. Its participants come from diverse backgrounds, including researchers, academics, engineers, and industry professionals. MODELS 2019 is a forum in which participants can exchange cutting-edge research results and innovative practical experiences around modeling and model-driven software and systems. This year's edition offers the modeling community an opportunity to further advance the foundations of modeling and to present innovative applications of modeling in emerging areas such as cyber-physical systems, embedded systems, socio-technical systems, cloud computing, big data, machine learning, security, open source, and sustainability.
March 12, 2024

We propose a new approach to the autoregressive spatial functional model, based on the notion of signature, which represents a function as an infinite series of its iterated integrals. It has the advantage of being applicable to a wide range of processes. After providing theoretical guarantees for the proposed model, we show in a simulation study and on a real data set that this new approach achieves competitive performance compared to the traditional model.
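As a minimal illustration of the signature ingredient only (the autoregressive spatial functional model built on top of it is not reproduced), the sketch below computes the levels 1 and 2 of the signature of a sampled path as iterated integrals of its piecewise-linear interpolation; the helper name and test path are hypothetical choices.

```python
# Truncated path signature (levels 1 and 2) via Chen's identity, assumed setup.
import numpy as np

def signature_level2(path):
    """path: array of shape (T, d); returns the level-1 and level-2 signature terms."""
    increments = np.diff(path, axis=0)
    d = path.shape[1]
    S1 = np.zeros(d)
    S2 = np.zeros((d, d))
    for dx in increments:
        # Concatenating one linear segment: the level-2 term picks up
        # (signature so far) x (new increment) plus the segment's own 0.5 dx dx^T.
        S2 += np.outer(S1, dx) + 0.5 * np.outer(dx, dx)
        S1 += dx
    return S1, S2

t = np.linspace(0.0, 1.0, 200)
path = np.column_stack([np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)])
S1, S2 = signature_level2(path)
# For a closed loop the level-1 term vanishes and the antisymmetric part of the
# level-2 term equals the signed (Levy) area enclosed by the path.
print(S1, 0.5 * (S2[0, 1] - S2[1, 0]))
```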

In this work, we build on recent research on the fully mixed virtual element method (mixed-VEM) in Banach spaces for the stationary Boussinesq equation to propose and analyze a new mixed-VEM for the stationary two-dimensional Boussinesq equation with temperature-dependent parameters, formulated in terms of the pseudostress, vorticity, velocity, pseudoheat vector and temperature fields. The well-posedness of the continuous formulation is analyzed using a fixed-point strategy, a smallness assumption on the data, and some additional regularity of the solution. The discretization of these variables couples $\mathbb{H}(\mathbf{div}_{6/5})$- and $\mathbf{H}(\mathrm{div}_{6/5})$-conforming virtual element techniques. The proposed scheme is rewritten as an equivalent fixed-point operator equation, from which existence and stability estimates are proven. In addition, an a priori convergence analysis is established by means of the C\'ea estimate and a suitable assumption on the data, showing an optimal rate of convergence for all variables in their natural norms. Finally, several numerical examples are presented to illustrate the performance of the proposed method.
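The sketch below only illustrates the generic fixed-point (Picard) strategy referred to above, in which the problem is recast as $T(u) = u$ and solved iteratively, converging when $T$ is a contraction (the role played by the smallness assumption on the data); the toy map is entirely hypothetical and has nothing to do with the virtual element discretization itself.

```python
# Banach fixed-point iteration on a contrived contraction map (illustrative only).
import numpy as np

def T(u, data_scale=0.1):
    # For small data_scale this nonlinear map is a contraction near the origin.
    return data_scale * np.array([np.cos(u[1]), np.sin(u[0]) + u[0] * u[1]])

u = np.zeros(2)
for k in range(100):
    u_new = T(u)
    if np.linalg.norm(u_new - u) < 1e-12:
        break
    u = u_new
print(k, u)        # iteration count and the approximate fixed point of T
```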

In this paper, we harness a result in point process theory, namely the expectation of the weighted $K$-function, where the weighting is done by the true first-order intensity function. This theoretical result can be employed as an estimation method to derive parameter estimates for a particular model assumed for the data. The underlying motivation is to avoid the difficulties associated with complex likelihoods in point process models and their maximization. The exploited result makes our method theoretically applicable to any model specification. In this paper, we restrict our study to Poisson models, whose likelihood forms the basis of many more complex point process models. In this context, our proposed method can estimate the vector of local parameters corresponding to the points within the analyzed point pattern without introducing any additional complexity compared to global estimation. We illustrate the method through simulation studies for both purely spatial and spatio-temporal point processes, and we show complex scenarios based on the Poisson model through the analysis of two real datasets concerning environmental problems.
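To make the estimation idea concrete, the sketch below exploits the fact that, for a Poisson process, the $K$-function weighted by the true intensity has expectation $\pi r^2$, so parameters of an assumed intensity model can be estimated by matching the empirical weighted $K$-function to that expectation. The log-linear intensity, the unit window, and the absence of edge correction are simplifying assumptions of this sketch, not the paper's methodology for local parameters.

```python
# Parameter estimation by matching the intensity-weighted K-function to pi*r^2.
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(1)
theta_true = np.array([5.0, 1.5])                 # log-intensity: theta0 + theta1 * x

# Simulate an inhomogeneous Poisson process on [0,1]^2 by thinning.
lam_max = np.exp(theta_true[0] + theta_true[1])
prop = rng.random((rng.poisson(lam_max), 2))
keep = rng.random(len(prop)) < np.exp(theta_true[0] + theta_true[1] * prop[:, 0]) / lam_max
pts = prop[keep]

D = squareform(pdist(pts))
radii = np.linspace(0.02, 0.15, 10)

def weighted_K(theta, r):
    lam = np.exp(theta[0] + theta[1] * pts[:, 0])
    w = 1.0 / np.outer(lam, lam)
    np.fill_diagonal(w, 0.0)                      # exclude i = j pairs
    return np.array([np.sum(w[D <= ri]) for ri in r])   # window area |W| = 1

def loss(theta):
    return np.sum((weighted_K(theta, radii) - np.pi * radii**2) ** 2)

fit = minimize(loss, x0=np.array([4.0, 0.0]), method="Nelder-Mead")
print("estimate:", fit.x, "truth:", theta_true)
```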

High-performing out-of-distribution (OOD) detection, of both anomalies and novel classes, is an important prerequisite for the practical use of classification models. In this paper, we focus on image-based species recognition, which involves large databases, a large number of fine-grained hierarchical classes, severe class imbalance, and varying image quality. We propose a framework for combining individual OOD measures into one combined OOD (COOD) measure using a supervised model. The individual measures are several existing state-of-the-art measures and several novel OOD measures developed with novel class detection and the hierarchical class structure in mind. COOD was extensively evaluated on three large-scale (500k+ images) biodiversity datasets in the context of anomaly and novel class detection. We show that COOD outperforms individual OOD measures, including state-of-the-art ones, by a large margin in terms of TPR@1% FPR in the majority of experiments, e.g., improving the detection of ImageNet images (OOD) from 54.3% to 85.4% for the iNaturalist 2018 dataset. SHAP (feature contribution) analysis shows that different individual OOD measures are essential for different tasks, indicating that multiple OOD measures and their combinations are needed to generalize. Additionally, we show that explicitly considering ID images that are incorrectly classified for the original (species) recognition task is important for constructing high-performing OOD detection methods and for practical applicability. The framework can easily be extended or adapted to other tasks and media modalities.
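The sketch below shows the combination idea in its simplest form: several individual OOD scores are stacked as features and a supervised combiner predicts a single COOD score, evaluated at TPR@1% FPR. The synthetic scores, the gradient-boosting combiner, and the split are illustrative assumptions; in practice the features would be real measures such as max softmax or energy scores, and the paper's exact configuration is not reproduced.

```python
# Supervised combination of individual OOD scores into one COOD score (assumed setup).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
n_id, n_ood, n_scores = 5000, 5000, 4

# Synthetic individual OOD scores: ID and OOD overlap differently per score,
# so no single score separates the classes well on its own.
id_scores = rng.normal(0.0, 1.0, (n_id, n_scores))
ood_scores = rng.normal(0.6, 1.0, (n_ood, n_scores))
X = np.vstack([id_scores, ood_scores])
y = np.concatenate([np.zeros(n_id), np.ones(n_ood)])    # 1 = OOD

# Supervised combiner trained on a held-out split with known ID/OOD labels.
idx = rng.permutation(len(y))
train, test = idx[: len(y) // 2], idx[len(y) // 2 :]
combiner = GradientBoostingClassifier().fit(X[train], y[train])
cood = combiner.predict_proba(X[test])[:, 1]

# TPR at 1% FPR for the combined score.
fpr, tpr, _ = roc_curve(y[test], cood)
print("TPR@1%FPR:", tpr[np.searchsorted(fpr, 0.01)])
```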

The Sum-of-Squares (SOS) approximation method is a technique used in optimization problems to derive lower bounds on the optimal value of an objective function. By representing the objective function as a sum of squares in a feature space, the SOS method transforms non-convex global optimization problems into solvable semidefinite programs. This note presents an overview of the SOS method. We start with its application in finite-dimensional feature spaces and, subsequently, we extend it to infinite-dimensional feature spaces using reproducing kernels (k-SOS). Additionally, we highlight the utilization of SOS for estimating some relevant quantities in information theory, including the log-partition function.
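As a concrete instance of the finite-dimensional case, the sketch below computes an SOS lower bound on a univariate quartic by maximizing $c$ such that $p(x) - c$ is a sum of squares, written as a semidefinite program over a Gram matrix in the monomial basis $[1, x, x^2]$. The polynomial is a toy example and cvxpy with an SDP-capable default solver is assumed available; the kernelized (k-SOS) extension is not shown.

```python
# SOS lower bound for a univariate polynomial via a small SDP (toy example).
import cvxpy as cp

# p(x) = x^4 - 3x^3 + 2x^2 + 1, coefficients ordered from degree 0 to 4.
p = [1.0, 0.0, 2.0, -3.0, 1.0]

Q = cp.Variable((3, 3), PSD=True)      # Gram matrix in the basis z = [1, x, x^2]
c = cp.Variable()

# Match the coefficients of p(x) - c with those of z^T Q z.
constraints = [
    Q[0, 0] == p[0] - c,               # constant term
    2 * Q[0, 1] == p[1],               # x
    2 * Q[0, 2] + Q[1, 1] == p[2],     # x^2
    2 * Q[1, 2] == p[3],               # x^3
    Q[2, 2] == p[4],                   # x^4
]
prob = cp.Problem(cp.Maximize(c), constraints)
prob.solve()
print("SOS lower bound on min p(x):", c.value)
```

For univariate polynomials nonnegativity coincides with being a sum of squares, so in this toy case the bound is tight.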

First-order methods are often analyzed via their continuous-time models, where their worst-case convergence properties are usually approached via Lyapunov functions. In this work, we provide a systematic and principled approach to find and verify Lyapunov functions for classes of ordinary and stochastic differential equations. More precisely, we extend the performance estimation framework, originally proposed by Drori and Teboulle [10], to continuous-time models. We retrieve convergence results comparable to those of discrete methods using fewer assumptions and convexity inequalities, and provide new results for stochastic accelerated gradient flows.
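The sketch below only verifies numerically a classical Lyapunov function of the kind such analyses produce: for the gradient flow $x'(t) = -\nabla f(x(t))$ on a convex $f$, the function $V(t) = t\,(f(x(t)) - f_\star) + \tfrac{1}{2}\|x(t) - x_\star\|^2$ is nonincreasing, which yields $f(x(t)) - f_\star \le \|x_0 - x_\star\|^2 / (2t)$. The quadratic objective is an illustrative assumption, and the performance-estimation machinery itself is not reproduced.

```python
# Numerical check of a Lyapunov function for gradient flow on a convex quadratic.
import numpy as np
from scipy.integrate import solve_ivp

A = np.diag([1.0, 10.0])                      # f(x) = 0.5 x^T A x, convex
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x
x_star, f_star = np.zeros(2), 0.0
x0 = np.array([3.0, -2.0])

sol = solve_ivp(lambda t, x: -grad(x), (0.0, 5.0), x0,
                dense_output=True, rtol=1e-9, atol=1e-12)

ts = np.linspace(1e-3, 5.0, 200)
V = np.array([t * (f(sol.sol(t)) - f_star) + 0.5 * np.sum((sol.sol(t) - x_star) ** 2)
              for t in ts])
rate_ok = all(f(sol.sol(t)) - f_star <= np.sum((x0 - x_star) ** 2) / (2 * t) + 1e-9
              for t in ts)
print("V nonincreasing:", bool(np.all(np.diff(V) <= 1e-8)), "O(1/t) rate holds:", rate_ok)
```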

In the present study, we investigate the efficiency of preconditioners for solving linear systems associated with the discretized variable-density incompressible Navier-Stokes equations, with semi-implicit second-order accuracy in time and spectral accuracy in space. The method, in which the inverse operator of the constant-density flow system acts as preconditioner, is implemented for three iterative solvers: the Generalized Minimal Residual, the Conjugate Gradient, and the Richardson Minimal Residual methods. We first discuss the method in the context of the one-dimensional flow case, where a top-hat-like profile is used for the density. Numerical evidence shows that convergence is significantly improved owing to the notable decrease in the condition number of the operators. Most importantly, we then validate the robustness and convergence properties of the method on two more realistic problems: the two-dimensional Rayleigh-Taylor instability and the three-dimensional variable-density swirling jet.
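The sketch below illustrates the preconditioning idea in a deliberately simplified setting: a crude one-dimensional variable-density model operator (the constant-coefficient Laplacian scaled by $1/\rho$, with a top-hat-like density) is solved with GMRES, preconditioned by the exact inverse of the constant-density operator. Finite differences stand in for the paper's spectral discretization, and the operator itself is an assumption made only for illustration.

```python
# Constant-density operator used as a preconditioner for a variable-density model problem.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 256
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)

# Constant-coefficient Laplacian (Dirichlet) and a top-hat-like density profile.
lap = sp.diags([-np.ones(n - 1), 2.0 * np.ones(n), -np.ones(n - 1)], [-1, 0, 1]) / h**2
rho = np.where(np.abs(x - 0.5) < 0.2, 3.0, 1.0)
A_var = sp.diags(1.0 / rho) @ lap              # crude variable-density model operator

# Preconditioner: exact inverse of the constant-density operator, factorized once.
const_solve = spla.factorized(sp.csc_matrix(lap))
M = spla.LinearOperator((n, n), matvec=const_solve)

b = np.sin(np.pi * x)
count = [0]
sol, info = spla.gmres(sp.csr_matrix(A_var), b, M=M,
                       callback=lambda rk: count.__setitem__(0, count[0] + 1))
print("GMRES info:", info, "preconditioned iterations:", count[0])
```

With this preconditioner the iteration matrix has eigenvalues clustered around the few values taken by $1/\rho$, which is the mechanism behind the improved condition number reported above.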

This article is concerned with multilevel Monte Carlo (MLMC) methods for approximating expectations of functions of the solution to the Heston 3/2-model from mathematical finance, which takes values in $(0, \infty)$ and possesses superlinearly growing drift and diffusion coefficients. To discretize the SDE model, a new Milstein-type scheme is proposed to produce independent sample paths. The proposed scheme can be solved explicitly and is unconditionally positivity-preserving, i.e., for any time step-size $h>0$. This positivity-preserving property for large discretization time steps is particularly desirable in the MLMC setting. Furthermore, a mean-square convergence rate of order one is proved in the non-globally Lipschitz regime, which is non-trivial because the diffusion coefficient grows superlinearly. The obtained order-one convergence in turn ensures the desired variance decay of the level corrections in the multilevel estimator and justifies the optimal complexity $\mathcal{O}(\epsilon^{-2})$ of the MLMC approach, where $\epsilon > 0$ is the required target accuracy. Numerical experiments are finally reported to confirm the theoretical findings.
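The sketch below shows only the generic MLMC estimator structure, with coupled coarse and fine paths sharing the same Brownian increments on each level. As a safe stand-in it uses a plain Milstein scheme for geometric Brownian motion and fixed sample sizes per level; it is not the positivity-preserving Milstein-type scheme for the Heston 3/2-model proposed in the paper, nor an adaptive MLMC implementation.

```python
# Generic MLMC skeleton with coupled coarse/fine paths (stand-in model and scheme).
import numpy as np

rng = np.random.default_rng(0)
T, mu, sigma, S0 = 1.0, 0.05, 0.2, 1.0
payoff = lambda S: np.maximum(S - 1.0, 0.0)          # European call, strike 1

def milstein_paths(n_paths, n_steps, dW=None):
    h = T / n_steps
    if dW is None:
        dW = np.sqrt(h) * rng.standard_normal((n_paths, n_steps))
    S = np.full(n_paths, S0)
    for k in range(n_steps):
        S = S * (1.0 + mu * h + sigma * dW[:, k]
                 + 0.5 * sigma**2 * (dW[:, k] ** 2 - h))
    return S, dW

def level_estimator(level, n_paths, M=2):
    """Sample mean of P_fine - P_coarse on one level (no coarse term on level 0)."""
    n_fine = M ** (level + 1)
    S_fine, dW = milstein_paths(n_paths, n_fine)
    if level == 0:
        return np.mean(payoff(S_fine))
    dW_coarse = dW.reshape(n_paths, n_fine // M, M).sum(axis=2)   # same Brownian path
    S_coarse, _ = milstein_paths(n_paths, n_fine // M, dW_coarse)
    return np.mean(payoff(S_fine) - payoff(S_coarse))

# Telescoping sum over levels approximates E[payoff] at the finest discretization.
n_per_level = [40000, 20000, 10000, 5000, 2500]
estimate = sum(level_estimator(l, n) for l, n in enumerate(n_per_level))
print("MLMC estimate:", estimate)
```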

In this manuscript, we combine non-intrusive reduced order models (ROMs) with space-dependent aggregation techniques to build a mixed-ROM. The prediction of the mixed formulation is a convex linear combination of the predictions of several previously trained ROMs, where each model is assigned a space-dependent weight. The ROMs used to build the mixed model exploit different reduction techniques, such as Proper Orthogonal Decomposition (POD) and AutoEncoders (AE), and/or different approximation techniques, namely Radial Basis Function interpolation (RBF), Gaussian Process Regression (GPR) or a feed-forward Artificial Neural Network (ANN). Each model's contribution is retained with a higher weight in the regions where it performs best, and with a smaller weight where it is less accurate than the other models. Finally, a regression technique, namely a Random Forest, is used to evaluate the weights for unseen conditions. The performance of the aggregated model is evaluated on two test cases: the 2D flow past a NACA 4412 airfoil at a 5-degree angle of attack, with the Reynolds number varying between 1e5 and 1e6 as parameter, and the transonic flow over a NACA 0012 airfoil, with the angle of attack as parameter. In both cases, the mixed-ROM provides improved accuracy with respect to each individual ROM technique.
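The sketch below illustrates only the space-dependent aggregation step: two stand-in "ROM" predictors are combined pointwise with convex weights, and a Random Forest learns to map (parameter, spatial coordinate) to those weights so they can be evaluated at unseen parameter values. The synthetic solutions, the inverse-error weighting rule, and the one-dimensional domain are illustrative assumptions rather than the paper's airfoil configurations.

```python
# Space-dependent convex aggregation of two stand-in ROMs with learned weights.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)                       # spatial coordinate

truth = lambda mu: np.sin(2 * np.pi * mu * x)        # reference solution
rom_a = lambda mu: truth(mu) + 0.3 * (x > 0.5) * x             # accurate on the left
rom_b = lambda mu: truth(mu) + 0.3 * (x < 0.5) * (1.0 - x)     # accurate on the right

# Training: at known parameters, weight each model by its inverse local error,
# normalized so the combination is convex (weights in [0, 1] summing to 1).
features, targets = [], []
for mu in np.linspace(0.5, 2.0, 8):
    err_a = np.abs(rom_a(mu) - truth(mu)) + 1e-8
    err_b = np.abs(rom_b(mu) - truth(mu)) + 1e-8
    features.append(np.column_stack([np.full(x.size, mu), x]))
    targets.append((1.0 / err_a) / (1.0 / err_a + 1.0 / err_b))
reg = RandomForestRegressor(n_estimators=100).fit(np.vstack(features),
                                                  np.concatenate(targets))

# Prediction at an unseen parameter: convex, space-dependent combination.
mu_new = 1.3
w_a = np.clip(reg.predict(np.column_stack([np.full(x.size, mu_new), x])), 0.0, 1.0)
mixed = w_a * rom_a(mu_new) + (1.0 - w_a) * rom_b(mu_new)
print("mixed-ROM max error:", np.max(np.abs(mixed - truth(mu_new))))
```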

The theory of two projections is utilized to study two-component Gibbs samplers. Through this theory, previously intractable problems regarding the asymptotic variances of two-component Gibbs samplers are reduced to elementary matrix algebra exercises. It is found that, in terms of asymptotic variance, the two-component random-scan Gibbs sampler is never much worse than, and can be considerably better than, its deterministic-scan counterpart, provided that the selection probability is appropriately chosen. This is especially the case when there is a large discrepancy in computation cost between the two components. The result contrasts with the known fact that the deterministic-scan version has a faster convergence rate, which can also be derived from the method herein. On the other hand, a modified version of the deterministic-scan sampler that accounts for computation cost can outperform the random-scan version.
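For a concrete feel of the objects being compared, the sketch below runs deterministic-scan and random-scan two-component Gibbs samplers on a bivariate normal target and estimates the asymptotic variance of the sample mean of $f(x, y) = x$ by batch means. The target, the selection probability, and the per-iteration cost bookkeeping (two conditional draws per deterministic-scan iteration versus one per random-scan iteration) are illustrative choices, not the paper's projection-based analysis.

```python
# Empirical asymptotic-variance comparison of two-component Gibbs samplers.
import numpy as np

rng = np.random.default_rng(0)
rho, n_iter, burn = 0.9, 200_000, 5_000
cond_sd = np.sqrt(1.0 - rho**2)        # sd of X|Y and Y|X for a standard bivariate normal

def batch_means_asvar(chain, n_batches=100):
    b = len(chain) // n_batches
    means = chain[: b * n_batches].reshape(n_batches, b).mean(axis=1)
    return b * means.var(ddof=1)       # estimate of the asymptotic variance

def deterministic_scan():
    x = y = 0.0
    out = np.empty(n_iter)
    for i in range(n_iter):
        x = rho * y + cond_sd * rng.standard_normal()   # draw x | y
        y = rho * x + cond_sd * rng.standard_normal()   # draw y | x
        out[i] = x
    return out[burn:]

def random_scan(p=0.5):
    x = y = 0.0
    out = np.empty(n_iter)
    for i in range(n_iter):
        if rng.random() < p:                            # one component per iteration
            x = rho * y + cond_sd * rng.standard_normal()
        else:
            y = rho * x + cond_sd * rng.standard_normal()
        out[i] = x
    return out[burn:]

print("deterministic scan (2 updates/iter):", batch_means_asvar(deterministic_scan()))
print("random scan       (1 update/iter):", batch_means_asvar(random_scan()))
```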
