
We present an approach for the efficient implementation of self-adjusting multi-rate Runge-Kutta methods and we extend the previously available stability analyses of these methods to the case of an arbitrary number of sub-steps for the active components. We propose a physically motivated model problem that can be used to assess the stability of different multi-rate versions of standard Runge-Kutta methods and the impact of different interpolation methods for the latent variables. Finally, we present the results of several numerical experiments, performed with implementations of the proposed methods in the framework of the \textit{OpenModelica} open-source modelling and simulation software, which demonstrate the efficiency gains deriving from the use of the proposed multi-rate approach for physical modelling problems with multiple time scales.
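As a minimal illustration of the multi-rate idea (not the paper's OpenModelica implementation), the sketch below advances the slow latent component by one macro-step while the fast active component takes m sub-steps, interpolating the latent variable linearly in between; the explicit Euler scheme, the step sizes, and the toy two-scale ODE are all assumptions chosen for brevity:

```python
import numpy as np

def multirate_step(f_fast, f_slow, y_fast, y_slow, t, H, m):
    """One multi-rate step: the slow (latent) component takes a single
    macro-step of size H, while the fast (active) component takes m
    sub-steps of size H/m, seeing a linear interpolation of the latent
    variable at intermediate times."""
    y_slow_new = y_slow + H * f_slow(t, y_fast, y_slow)   # slow macro-step
    h = H / m
    yf = y_fast
    for k in range(m):
        tau = t + k * h
        theta = (tau - t) / H                 # interpolation weight in [0, 1)
        ys = (1.0 - theta) * y_slow + theta * y_slow_new
        yf = yf + h * f_fast(tau, yf, ys)     # fast sub-step
    return yf, y_slow_new

# Toy two-scale problem: the fast variable relaxes quickly toward the slow one.
f_fast = lambda t, yf, ys: -50.0 * (yf - ys)
f_slow = lambda t, yf, ys: -ys

yf, ys, t, H = 0.0, 1.0, 0.0, 0.02
for _ in range(50):                           # integrate to t = 1 with m = 10 sub-steps
    yf, ys = multirate_step(f_fast, f_slow, yf, ys, t, H, m=10)
    t += H
```

With the fast right-hand side evaluated only on the cheap sub-steps and the slow one once per macro-step, the cost savings scale with the fraction of latent components, which is the efficiency mechanism the abstract exploits.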


In these notes we propose and analyze an inertial-type method for obtaining stable approximate solutions to nonlinear ill-posed operator equations. The method is based on the Levenberg-Marquardt (LM) iteration. The main results are: monotonicity and convergence for exact data, and stability and semi-convergence for noisy data. In the numerical experiments we consider: i) a parameter identification problem in elliptic PDEs, and ii) a parameter identification problem in machine learning; the computational efficiency of the proposed method is compared with canonical implementations of the LM method.
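A stripped-down sketch of the inertial LM idea for a well-posed toy problem F(x) = y: extrapolate with a momentum ("inertial") term, then apply one regularized LM correction. The fixed regularization alpha, momentum beta, and the toy problem are assumptions for illustration; the notes' method adjusts these and treats noisy, ill-posed data.

```python
import numpy as np

def inertial_lm(F, J, y, x0, alpha=1.0, beta=0.3, iters=100):
    """Inertial Levenberg-Marquardt sketch: momentum extrapolation
    followed by a damped Gauss-Newton/LM correction."""
    x_prev = x0.copy()
    x = x0.copy()
    for _ in range(iters):
        z = x + beta * (x - x_prev)                    # inertial extrapolation
        r = F(z) - y                                   # residual
        Jz = J(z)
        A = Jz.T @ Jz + alpha * np.eye(len(x))         # regularized normal matrix
        s = np.linalg.solve(A, -Jz.T @ r)              # LM step
        x_prev, x = x, z + s
    return x

# Hypothetical toy problem with an analytic Jacobian.
F = lambda x: np.array([x[0]**2 + x[1]**2, x[0] - x[1]])
J = lambda x: np.array([[2*x[0], 2*x[1]], [1.0, -1.0]])
y = F(np.array([1.0, 2.0]))
x_hat = inertial_lm(F, J, y, x0=np.array([1.5, 1.5]))
```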

Molecular and genomic technological advancements have greatly enhanced our understanding of biological processes by allowing us to quantify key biological variables such as gene expression, protein levels, and microbiome compositions. These breakthroughs have enabled us to achieve increasingly higher levels of resolution in our measurements, exemplified by our ability to comprehensively profile biological information at the single-cell level. However, the analysis of such data faces several critical challenges: a limited number of individuals, non-normality, potential dropouts, outliers, and repeated measurements from the same individual. In this article, we propose a novel method, which we call the U-statistic-based latent variable (ULV) method. Our proposed method takes advantage of the robustness of rank-based statistics and exploits the statistical efficiency of parametric methods for small sample sizes. It is a computationally feasible framework that addresses all the issues mentioned above simultaneously. An additional advantage of ULV is its flexibility in modeling various types of single-cell data, including both RNA and protein abundance. The usefulness of our method is demonstrated in two studies: a single-cell proteomics study of acute myelogenous leukemia (AML) and a single-cell RNA study of COVID-19 symptoms. In the AML study, ULV successfully identified differentially expressed proteins that would have been missed by the pseudobulk version of the Wilcoxon rank-sum test. In the COVID-19 study, ULV identified genes associated with covariates such as age and gender, and genes that would be missed without adjusting for covariates. The differentially expressed genes identified by our method are less biased toward genes with high expression levels. Furthermore, ULV identified additional gene pathways likely contributing to the mechanisms of COVID-19 severity.
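ULV itself is new, but its rank-based building block can be illustrated with the classical Mann-Whitney kernel written explicitly as a U-statistic; this generic sketch is not the ULV estimator, only the kind of kernel it builds on:

```python
import numpy as np

def mann_whitney_u(x, y):
    """Two-sample Mann-Whitney statistic as a rank-based U-statistic:
    the kernel h(x_i, y_j) = 1{x_i > y_j} + 0.5 * 1{x_i = y_j} is summed
    over all pairs, so the statistic depends only on ranks."""
    x = np.asarray(x, dtype=float)[:, None]
    y = np.asarray(y, dtype=float)[None, :]
    return float(np.sum((x > y) + 0.5 * (x == y)))

u = mann_whitney_u([3.1, 4.2, 5.0], [1.0, 2.5, 4.2])   # 7.5
```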

Mathematical modeling offers the opportunity to test hypotheses concerning the emergence and development of myeloproliferative neoplasms (MPNs). We tested different mathematical models on a training cohort (n=264 patients; Registre de la C\^ote d'Or) to determine the emergence and evolution times before classical JAK2V617F myeloproliferative disorders (Polycythemia Vera and Essential Thrombocythemia, respectively) are diagnosed. We dissected the time before diagnosis into two main periods: the time from embryonic development for the JAK2V617F mutation to occur, persist, and enter proliferation, and a second period corresponding to the expansion of the clonal population until diagnosis. Using progressively more complex models, we demonstrate that the rate of active mutation occurrence is not constant and does not depend solely on individual variability, but rather increases with age, taking a median time of 63.1 +/- 13 years. By contrast, the expansion time can be considered constant: 8.8 years once the mutation has emerged. Results were validated in an external cohort (national FIMBANK cohort, n=1248 patients). Comparing JAK2V617F Essential Thrombocythemia with Polycythemia Vera, we observed that the first period (the rate of active homozygous mutation occurrence) takes approximately 1.5 years longer for PV than for ET, whereas the expansion time is quasi-identical. In conclusion, our multi-step approach and the resulting time-dependent model of MPN emergence and development demonstrate that the emergence of a JAK2V617F mutation is linked to an aging mechanism, and indicate an 8-9 year period to develop a full MPN.
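The two-phase structure of this model (an age-increasing emergence rate followed by a roughly constant expansion time) can be mimicked by a simple simulation. The Weibull hazard, its shape parameter, and the scale are hypothetical choices made only so the median emergence age lands near the ~63 years reported above; this is not the fitted model from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_age_at_diagnosis(n, expansion_years=8.8):
    """Illustrative two-phase simulation: an age-increasing occurrence
    rate for an active JAK2V617F clone (Weibull hazard with shape > 1,
    a hypothetical choice), then a constant clonal-expansion time."""
    shape = 4.0                                    # hypothetical; >1 => rate grows with age
    scale = 63.0 / np.log(2) ** (1.0 / shape)      # median emergence age = 63 by construction
    emergence_age = scale * rng.weibull(shape, size=n)
    return emergence_age + expansion_years

ages = simulate_age_at_diagnosis(100_000)
median_dx_age = float(np.median(ages))             # close to 63 + 8.8
```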

This work considers two boundary correction techniques to mitigate the reduction in the temporal order of convergence in the PDE sense (i.e., when the space and time resolutions tend to zero independently of each other) for $d$-dimensional space-discretized parabolic problems on a rectangular domain subject to time-dependent boundary conditions. We use the method of lines (MoL) approach, where the space discretization is made with central differences of order four and the time integration is carried out with $s$-stage AMF-W-methods. The time integrators are of ADI type (alternating direction implicit, using a directional splitting) and of higher order than the usual ones in the literature, which only reach order 2. Moreover, the techniques explained here also work for most splitting methods when directional splitting is used. A remarkable fact is that with these techniques the time integrators recover the temporal order of PDE-convergence attained for time-independent boundary conditions.
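Of the ingredients named above, the fourth-order central space discretization is standard and easy to show in isolation; the sketch below verifies the interior stencil against a smooth test function. The AMF-W time integrators and the boundary corrections themselves are not reproduced here.

```python
import numpy as np

def laplacian_4th(u, dx):
    """Fourth-order central difference for u_xx at interior points,
    using the stencil (-1, 16, -30, 16, -1) / (12 dx^2)."""
    return (-u[:-4] + 16.0*u[1:-3] - 30.0*u[2:-2] + 16.0*u[3:-1] - u[4:]) / (12.0 * dx**2)

# Check against u = sin(x), whose exact second derivative is -sin(x).
x = np.linspace(0.0, np.pi, 401)
dx = x[1] - x[0]
err = float(np.max(np.abs(laplacian_4th(np.sin(x), dx) + np.sin(x[2:-2]))))
```

With 401 points the truncation error is O(dx^4) ~ 1e-11, far below second-order accuracy on the same grid.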

In this work we aim at developing a new class of high order accurate well-balanced finite difference (FD) Weighted Essentially Non-Oscillatory (WENO) methods for numerical general relativity, which can be applied to any first-order reduction of the Einstein field equations, even if non-conservative terms are present. We choose the first-order non-conservative Z4 formulation of the Einstein equations, which has a built-in cleaning procedure that accounts for the Einstein constraints and that has already shown its ability in keeping stationary solutions stable over long timescales. Upon the introduction of auxiliary variables, the vacuum Einstein equations in first order form constitute a ...
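The generic building block of any FD-WENO scheme is the nonlinear reconstruction; the sketch below implements the classical fifth-order WENO-JS variant (Jiang & Shu), not the well-balanced scheme for the Z4 system described above, whose specific weights and balancing terms are beyond this snippet.

```python
import numpy as np

def weno5_reconstruct(v):
    """Classical fifth-order WENO-JS reconstruction of the right-interface
    value from five consecutive values v[i-2..i+2]."""
    vm2, vm1, v0, vp1, vp2 = v
    # Three third-order candidate reconstructions.
    p0 = (2.0*vm2 - 7.0*vm1 + 11.0*v0) / 6.0
    p1 = (-vm1 + 5.0*v0 + 2.0*vp1) / 6.0
    p2 = (2.0*v0 + 5.0*vp1 - vp2) / 6.0
    # Jiang-Shu smoothness indicators.
    b0 = 13.0/12.0*(vm2 - 2.0*vm1 + v0)**2 + 0.25*(vm2 - 4.0*vm1 + 3.0*v0)**2
    b1 = 13.0/12.0*(vm1 - 2.0*v0 + vp1)**2 + 0.25*(vm1 - vp1)**2
    b2 = 13.0/12.0*(v0 - 2.0*vp1 + vp2)**2 + 0.25*(3.0*v0 - 4.0*vp1 + vp2)**2
    eps = 1e-6
    # Nonlinear weights: near discontinuities they suppress oscillatory stencils.
    a = np.array([0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2])
    w = a / a.sum()
    return w[0]*p0 + w[1]*p1 + w[2]*p2

# On linear data every candidate stencil agrees, so the reconstruction is exact.
val = weno5_reconstruct(np.array([1.0, 2.0, 3.0, 4.0, 5.0]))   # 3.5
```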

There has been a resurgence of interest in the asymptotic normality of incomplete U-statistics that only sum over roughly as many kernel evaluations as there are data samples, due to its computational efficiency and usefulness in quantifying the uncertainty for ensemble-based predictions. In this paper, we focus on the normal convergence of one such construction, the incomplete U-statistic with Bernoulli sampling, based on a raw sample of size $n$ and a computational budget $N$ in the same order as $n$. Under a minimalistic third moment assumption on the kernel, we offer an accompanying Berry-Esseen bound of the natural rate $1/\sqrt{\min(N, n)}$ that characterizes the normal approximating accuracy involved. Our key techniques include Stein's method specialized for the so-called Studentized nonlinear statistics, and an exponential lower tail bound for non-negative kernel U-statistics.
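The Bernoulli-sampled construction studied here is easy to state in code: each pair enters the sum independently with probability chosen so that about N kernel evaluations are made. The mean-estimating kernel below is a toy choice for demonstration, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def incomplete_u_bernoulli(x, kernel, N):
    """Incomplete U-statistic (order-2 kernel) with Bernoulli sampling:
    each of the n(n-1)/2 pairs is included independently with probability
    p = N / #pairs, so roughly N kernel evaluations are performed."""
    n = len(x)
    p = N / (n * (n - 1) / 2)
    total, count = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                total += kernel(x[i], x[j])
                count += 1
    return total / count

# Toy check: the kernel h(a, b) = (a + b)/2 estimates the population mean.
x = rng.normal(loc=2.0, size=500)
est = incomplete_u_bernoulli(x, lambda a, b: (a + b) / 2.0, N=500)
```

Here n = N = 500, matching the paper's regime of a computational budget in the same order as the sample size; the Berry-Esseen rate 1/sqrt(min(N, n)) then governs the normal approximation of `est`.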

This paper concerns a class of DC composite optimization problems which, as an extension of convex composite optimization problems and DC programs with nonsmooth components, often arises in robust factorization models of low-rank matrix recovery. For this class of nonconvex and nonsmooth problems, we propose an inexact linearized proximal algorithm (iLPA) by computing in each step an inexact minimizer of a strongly convex majorization constructed with a partial linearization of the objective function at the current iterate, and establish the convergence of the generated iterate sequence under the Kurdyka-\L\"ojasiewicz (KL) property of a potential function. In particular, by leveraging the composite structure, we provide a verifiable condition for the potential function to have the KL property of exponent $1/2$ at the limit point, and hence for the iterate sequence to have a local R-linear convergence rate. Finally, we apply the proposed iLPA to a robust factorization model for matrix completion with outliers and non-uniform sampling, and a numerical comparison with a proximal alternating minimization (PAM) method confirms that iLPA yields comparable relative errors and NMAEs in less running time, especially for large-scale real data.
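The core mechanism of a linearized proximal algorithm, in its simplest convex incarnation, is: linearize the smooth part at the current iterate and solve the resulting strongly convex subproblem in closed form. The sketch below is the plain ISTA special case on a hypothetical sparse recovery instance; the paper's iLPA additionally handles DC composite objectives and inexact subproblem solves.

```python
import numpy as np

def soft_threshold(v, tau):
    """Prox of tau * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def linearized_proximal(A, b, lam, x0, t, iters=500):
    """Linearized proximal iteration for min 0.5||Ax - b||^2 + lam*||x||_1:
    the smooth term is linearized at the current iterate and the strongly
    convex subproblem is solved exactly via the l1 prox."""
    x = x0.copy()
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                      # gradient of smooth part
        x = soft_threshold(x - t * grad, t * lam)     # proximal step
    return x

# Hypothetical noiseless sparse recovery instance.
rng = np.random.default_rng(2)
A = rng.normal(size=(40, 10))
x_true = np.zeros(10); x_true[:3] = [3.0, -2.0, 1.5]
b = A @ x_true
x_hat = linearized_proximal(A, b, lam=0.1, x0=np.zeros(10),
                            t=1.0 / np.linalg.norm(A, 2) ** 2)
```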

Bayesian inference for survival regression modeling offers numerous advantages, especially for decision-making and external data borrowing, but demands the specification of the baseline hazard function, which may be a challenging task. We propose an alternative approach that does not require the specification of this function. Our approach combines pseudo-observations, which convert censored data into longitudinal data, with the Generalized Method of Moments (GMM) to estimate the parameters of interest directly from the survival function. GMM may be viewed as an extension of the Generalized Estimating Equations (GEE) currently used for frequentist pseudo-observation analysis and can be extended to the Bayesian framework using a pseudo-likelihood function. We assessed the behavior of the frequentist and Bayesian GMM in the new context of analyzing pseudo-observations. We compared their performance with the Cox, GEE, and Bayesian piecewise exponential models through a simulation study of two-arm randomized clinical trials. Frequentist and Bayesian GMM gave valid inferences with performance similar to the three benchmark methods, except for small sample sizes and high censoring rates. For illustration, three post-hoc efficacy analyses were performed on randomized clinical trials involving patients with Ewing Sarcoma, producing results similar to those of the benchmark methods. Through a simple application of estimating hazard ratios, these findings confirm the effectiveness of this new Bayesian approach based on pseudo-observations and the generalized method of moments. This offers new insights into using pseudo-observations for Bayesian survival analysis.
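The pseudo-observation step that feeds the GEE/GMM machinery is the standard jackknife construction on the Kaplan-Meier estimator; a minimal sketch on a tiny hypothetical dataset (not from the trials analyzed in the paper):

```python
import numpy as np

def km_surv(time, event, t):
    """Kaplan-Meier estimate of S(t) from right-censored data
    (event = 1 for an observed event, 0 for censoring)."""
    s = 1.0
    for u in np.unique(time[event == 1]):
        if u > t:
            break
        at_risk = np.sum(time >= u)
        d = np.sum((time == u) & (event == 1))
        s *= 1.0 - d / at_risk
    return s

def pseudo_observations(time, event, t):
    """Jackknife pseudo-observations of survival at time t:
    theta_i = n * S(t) - (n - 1) * S_{-i}(t).  Censored outcomes become
    one value per subject, which can then be modeled with GEE/GMM."""
    n = len(time)
    s_full = km_surv(time, event, t)
    idx = np.arange(n)
    return np.array([n * s_full - (n - 1) * km_surv(time[idx != i], event[idx != i], t)
                     for i in range(n)])

# Tiny hypothetical dataset (times in years; event 0 = censored).
time = np.array([2.0, 3.0, 5.0, 7.0, 11.0])
event = np.array([1, 0, 1, 1, 0])
po = pseudo_observations(time, event, t=6.0)
```

For this dataset the pseudo-observations average exactly to the Kaplan-Meier estimate S(6) = 8/15, which is what makes them usable as plug-in responses in estimating-equation models.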

In this paper we present a family of high order cut finite element methods with bound preserving properties for hyperbolic conservation laws in one space dimension. The methods are based on the discontinuous Galerkin framework and use a regular background mesh, where interior boundaries are allowed to cut through the mesh arbitrarily. Our methods include ghost penalty stabilization to handle small cut elements and a new reconstruction of the approximation on macro-elements, which are local patches consisting of cut and un-cut neighboring elements that are connected by stabilization. We show that the reconstructed solution retains conservation and order of convergence. Our lowest-order scheme results in a piecewise constant solution that satisfies a maximum principle for scalar hyperbolic conservation laws. When the lowest order scheme is applied to the Euler equations, the scheme is positivity preserving in the sense that positivity of pressure and density are retained. For the high-order schemes, suitable bound preserving limiters are applied to the reconstructed solution on macro-elements. In the scalar case, a maximum principle limiter is applied, which ensures that the limited approximation satisfies the maximum principle. Correspondingly, we use a positivity preserving limiter for the Euler equations and show that our scheme is positivity preserving. In the presence of shocks, additional limiting is needed to avoid oscillations, hence we apply a standard TVB limiter to the reconstructed solution. The time step restrictions are of the same order as for the corresponding discontinuous Galerkin methods on the background mesh. Numerical computations illustrate accuracy, bound preservation, and shock capturing capabilities of the proposed schemes.
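The bound-preserving limiters mentioned above typically act by scaling the local polynomial toward the cell average, which preserves conservation by construction. This is a generic Zhang-Shu style scaling sketch, not the macro-element reconstruction of the cut-cell method itself:

```python
import numpy as np

def scaling_limiter(node_vals, cell_avg, m, M):
    """Scaling limiter: shrink the nodal values of the local polynomial
    toward the cell average until all lie in [m, M].  Only the deviation
    from the average is scaled, so the cell average (and hence
    conservation) is preserved."""
    vmin, vmax = node_vals.min(), node_vals.max()
    theta = 1.0
    if vmax > M:
        theta = min(theta, (M - cell_avg) / (vmax - cell_avg))
    if vmin < m:
        theta = min(theta, (m - cell_avg) / (vmin - cell_avg))
    return cell_avg + theta * (node_vals - cell_avg)

vals = np.array([-0.2, 0.4, 1.3])        # overshoots the bounds [0, 1]
avg = float(vals.mean())                  # 0.5, inside the bounds
limited = scaling_limiter(vals, avg, m=0.0, M=1.0)
```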

In this paper, we introduce the neural empirical interpolation method (NEIM), a neural network-based alternative to the discrete empirical interpolation method for reducing the time complexity of computing the nonlinear term in a reduced order model (ROM) for a parameterized nonlinear partial differential equation. NEIM is a greedy algorithm which accomplishes this reduction by approximating an affine decomposition of the nonlinear term of the ROM, where the vector terms of the expansion are given by neural networks depending on the ROM solution, and the coefficients are given by an interpolation of some "optimal" coefficients. Because NEIM is based on a greedy strategy, we are able to provide a basic error analysis to investigate its performance. NEIM has the advantages of being easy to implement in models with automatic differentiation, of being a nonlinear projection of the ROM nonlinearity, of being efficient for both nonlocal and local nonlinearities, and of relying solely on data and not the explicit form of the ROM nonlinearity. We demonstrate the effectiveness of the methodology on solution-dependent and solution-independent nonlinearities, a nonlinear elliptic problem, and a nonlinear parabolic model of liquid crystals.
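For contrast with NEIM, the classical DEIM greedy point selection it replaces can be sketched in a few lines; NEIM substitutes neural-network terms for this interpolation-based construction, so the code below shows only the baseline, on a hypothetical snapshot basis:

```python
import numpy as np

def deim_indices(U):
    """Greedy DEIM point selection (Chaturantabut & Sorensen) for a basis
    U of shape (n, m): at each step, interpolate the next basis vector at
    the points chosen so far and pick the row of largest residual."""
    n, m = U.shape
    idx = [int(np.argmax(np.abs(U[:, 0])))]
    for j in range(1, m):
        P = np.array(idx)
        c = np.linalg.solve(U[P, :j], U[P, j])   # interpolate at selected points
        r = U[:, j] - U[:, :j] @ c               # residual vanishes at those points
        idx.append(int(np.argmax(np.abs(r))))
    return idx

# Hypothetical orthonormal snapshot basis.
rng = np.random.default_rng(3)
Q, _ = np.linalg.qr(rng.normal(size=(50, 4)))
pts = deim_indices(Q)
```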
