
Modeling biological processes is a highly demanding task because not all processes are fully understood. Mathematical models allow us to test hypotheses about possible mechanisms of biological processes. The mathematical mechanisms often abstract away from the biological micro-scale mechanisms, which makes experimental parameter calibration extremely challenging: the connection between the abstract and the micro-scale mechanisms is unknown. Even if some microscopic parameters can be determined by isolated experiments, relating them to the abstract mathematical model remains difficult. We present ideas for overcoming these difficulties by using long-time characteristics of solutions to, first, identify abstract mechanisms that reproduce large-scale observations and, second, determine parameter values for those mechanisms. The parameter values are not directly connected to experimental data but serve as a link between known mechanisms and observations. The framework combines machine learning techniques with the characteristic solution behavior of differential equations. This setting gives insight into the challenges of working with only scarce data and can later be carried over to partial differential equations.
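As a toy illustration of the calibration idea (a minimal sketch under assumptions of our own: the logistic model, the "half-saturation time" observable, and the observed value are purely illustrative and not from the paper), one can fit an abstract model parameter so that a long-time characteristic of the solution matches a sparse large-scale observation:

```python
# Minimal sketch: calibrate an abstract ODE parameter against a long-time
# characteristic of the solution instead of a full trajectory.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize_scalar

def logistic_rhs(t, x, r):
    # abstract growth mechanism; r is the parameter to be calibrated
    return r * x * (1.0 - x)

def half_saturation_time(r, t_end=60.0):
    # long-time characteristic: first time the solution reaches half capacity
    sol = solve_ivp(logistic_rhs, (0.0, t_end), [0.01], args=(r,),
                    dense_output=True, rtol=1e-8)
    ts = np.linspace(0.0, t_end, 4000)
    xs = sol.sol(ts)[0]
    return ts[np.argmax(xs >= 0.5)]

observed_half_time = 9.0   # hypothetical sparse, large-scale observation
loss = lambda r: (half_saturation_time(r) - observed_half_time) ** 2
fit = minimize_scalar(loss, bounds=(0.1, 5.0), method="bounded")
print("calibrated abstract parameter r:", fit.x)
```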

Related content

The ACM/IEEE 23rd International Conference on Model Driven Engineering Languages and Systems (MODELS) is the premier conference series for model-driven software and systems engineering, organized with the support of ACM SIGSOFT and IEEE TCSE. Since 1998, MODELS has covered all aspects of modeling, from languages and methods to tools and applications. Its attendees come from diverse backgrounds, including researchers, academics, engineers, and industry professionals. MODELS 2019 is a forum in which participants can exchange cutting-edge research results and innovative practical experience around modeling and model-driven software and systems. This year's edition will give the modeling community an opportunity to further advance the foundations of modeling and to propose innovative applications of modeling in emerging areas such as cyber-physical systems, embedded systems, socio-technical systems, cloud computing, big data, machine learning, security, open source, and sustainability.
January 29, 2024

This overview is devoted to splitting methods, a class of numerical integrators intended for differential equations that can be subdivided into different problems that are easier to solve than the original system. Closely connected with this class of integrators are composition methods, in which one or several low-order schemes are composed to construct higher-order numerical approximations to the exact solution. We analyze in detail the order conditions that have to be satisfied by these classes of methods to achieve a given order, and provide some insight into their qualitative properties in connection with geometric numerical integration and the treatment of highly oscillatory problems. Since splitting methods have received considerable attention in the realm of partial differential equations, we also cover this subject in the present survey, with special attention to parabolic equations and the problems they raise. An exhaustive list of methods of different orders is collected and tested on simple examples. Finally, some applications of splitting methods in different areas, ranging from celestial mechanics to statistics, are also provided.
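As a concrete, deliberately elementary example of the composition idea (the matrices below are arbitrary illustrations, not taken from the survey), the following sketch applies Strang splitting to a linear system $\dot{x} = (A+B)x$ whose two parts are integrated exactly and compares the result with the exact flow:

```python
# Minimal sketch of Strang splitting for dx/dt = (A + B) x: compose the exact
# flows of the two "easy" parts as A/2, B, A/2 over each time step.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-1.0, 0.0]])   # part 1: rotation
B = np.array([[-0.1, 0.0], [0.0, -0.2]])  # part 2: damping
x0 = np.array([1.0, 0.0])
T, n = 1.0, 100
h = T / n

phiA, phiB = expm(0.5 * h * A), expm(h * B)   # half-step in A, full step in B
x = x0.copy()
for _ in range(n):
    x = phiA @ (phiB @ (phiA @ x))            # Strang composition (second order)

exact = expm(T * (A + B)) @ x0
print("splitting error:", np.linalg.norm(x - exact))
```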

This manuscript summarizes the outcome of the focus groups at "The f(A)bulous workshop on matrix functions and exponential integrators", held at the Max Planck Institute for Dynamics of Complex Technical Systems in Magdeburg, Germany, on 25-27 September 2023. There were three focus groups in total, each with a different theme: knowledge transfer, high-performance and energy-aware computing, and benchmarking. We collect insights, open issues, and perspectives from each focus group, as well as from general discussions throughout the workshop. Our primary aim is to highlight ripe research directions and continue to build on the momentum from a lively meeting.

We design in this work a discrete de Rham complex on manifolds. This complex, written in the framework of exterior calculus, is applicable on meshes on the manifold with generic elements, and has the same cohomology as the continuous de Rham complex. Notions of local (full and trimmed) polynomial spaces are developed, with compatibility requirements between polynomials on mesh entities of various dimensions. Explicit examples of polynomial spaces are presented. The discrete de Rham complex is then used to set up a scheme for the Maxwell equations on a 2D manifold without boundary, and we show that a natural discrete version of the constraint linking the electric field and the electric charge density is satisfied. Numerical examples are provided on the sphere and the torus, based on a bespoke analytical solution and mesh design on each manifold.
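For orientation (standard background, not a statement from the paper), the continuous complex whose cohomology the discrete construction reproduces reads, on a 2D manifold $M$ without boundary,

```latex
% Continuous de Rham complex on a 2D manifold M without boundary;
% the discrete complex is built to have the same cohomology.
0 \longrightarrow \Omega^0(M) \xrightarrow{\ \mathrm{d}\ } \Omega^1(M)
  \xrightarrow{\ \mathrm{d}\ } \Omega^2(M) \longrightarrow 0,
\qquad \mathrm{d} \circ \mathrm{d} = 0 .
```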

Multi-distribution learning generalizes the classic PAC learning to handle data coming from multiple distributions. Given a set of $k$ data distributions and a hypothesis class of VC dimension $d$, the goal is to learn a hypothesis that minimizes the maximum population loss over $k$ distributions, up to $\epsilon$ additive error. In this paper, we settle the sample complexity of multi-distribution learning by giving an algorithm of sample complexity $\widetilde{O}((d+k)\epsilon^{-2}) \cdot (k/\epsilon)^{o(1)}$. This matches the lower bound up to a sub-polynomial factor and resolves the COLT 2023 open problem of Awasthi, Haghtalab and Zhao [AHZ23].
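For reference, the guarantee implicit in this setup (stated in the abstract's notation, with a generic bounded loss $\ell$) is that the returned hypothesis $\hat{h}$ has worst-case population loss within $\epsilon$ of the best hypothesis in the class:

```latex
% Multi-distribution learning guarantee: worst-case population loss of the
% returned hypothesis is within epsilon of the best achievable in the class.
\max_{1 \le i \le k} \mathbb{E}_{(x,y) \sim D_i}\!\big[\ell(\hat{h}(x), y)\big]
  \;\le\; \min_{h \in \mathcal{H}} \, \max_{1 \le i \le k}
  \mathbb{E}_{(x,y) \sim D_i}\!\big[\ell(h(x), y)\big] + \epsilon .
```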

A discrete spatial lattice can be cast as a network structure over which spatially-correlated outcomes are observed. A second network structure may also capture similarities among measured features, when such information is available. Incorporating the network structures when analyzing such doubly-structured data can improve predictive power, and lead to better identification of important features in the data-generating process. Motivated by applications in spatial disease mapping, we develop a new doubly regularized regression framework to incorporate these network structures for analyzing high-dimensional datasets. Our estimators can easily be implemented with standard convex optimization algorithms. In addition, we describe a procedure to obtain asymptotically valid confidence intervals and hypothesis tests for our model parameters. We show empirically that our framework provides improved predictive accuracy and inferential power compared to existing high-dimensional spatial methods. These advantages hold given fully accurate network information, and also with networks which are partially misspecified or uninformative. The application of the proposed method to modeling COVID-19 mortality data suggests that it can improve prediction of deaths beyond standard spatial models, and that it selects relevant covariates more often.
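The following sketch shows the general flavor of doubly network-regularized regression (a minimal, hypothetical variant with ridge-type Laplacian penalties and a closed-form solve; it does not reproduce the paper's estimators, lasso component, or inference procedure). One Laplacian penalty comes from the spatial network and acts on the fitted values, the other comes from the feature-similarity network and acts on the coefficients:

```python
# Minimal, hypothetical sketch of doubly network-regularized least squares.
# Data, networks, and penalty weights are all illustrative.
import numpy as np

def graph_laplacian(adj):
    return np.diag(adj.sum(axis=1)) - adj

def random_symmetric_adjacency(m, density, rng):
    upper = np.triu((rng.random((m, m)) < density).astype(float), 1)
    return upper + upper.T

rng = np.random.default_rng(0)
n, p = 60, 8
X = rng.normal(size=(n, p))
true_beta = np.array([1.5, -1.0, 0.0, 0.0, 0.8, 0.0, 0.0, 0.0])
y = X @ true_beta + 0.1 * rng.normal(size=n)

L_sp = graph_laplacian(random_symmetric_adjacency(n, 0.05, rng))  # spatial network
L_ft = graph_laplacian(random_symmetric_adjacency(p, 0.30, rng))  # feature network

lam_sp, lam_ft = 0.5, 0.5
# beta = argmin ||y - X b||^2 + lam_sp (X b)^T L_sp (X b) + lam_ft b^T L_ft b
beta = np.linalg.solve(X.T @ X + lam_sp * X.T @ L_sp @ X + lam_ft * L_ft,
                       X.T @ y)
print("estimated coefficients:", np.round(beta, 3))
```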

We analyze the optimized adaptive importance sampler (OAIS) for performing Monte Carlo integration with general proposals. We leverage a classical result which shows that the bias and the mean-squared error (MSE) of importance sampling scale with the $\chi^2$-divergence between the target and the proposal, and develop a scheme which performs global optimization of the $\chi^2$-divergence. While this quantity is known to be convex for exponential family proposals, the case of general proposals has been an open problem. We close this gap by utilizing nonasymptotic bounds for stochastic gradient Langevin dynamics (SGLD) for the global optimization of the $\chi^2$-divergence, and derive nonasymptotic bounds for the MSE by leveraging recent results from the non-convex optimization literature. The resulting AIS schemes have explicit theoretical guarantees that are uniform-in-time.
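To illustrate the general adaptive-importance-sampling loop (a toy sketch, not the paper's OAIS scheme or its analysis; the Gaussian target, proposal family, step sizes, and gradient clipping are our own assumptions), one can adapt a proposal mean by noisy gradient steps on the $\chi^2$-type objective $\rho(\theta) = \mathbb{E}_{q_\theta}[w(x)^2]$ with $w = \pi/q_\theta$, using the identity $\mathrm{d}\rho/\mathrm{d}\theta = -\mathbb{E}_{q_\theta}[w^2\, \partial_\theta \log q_\theta]$:

```python
# Toy sketch: adapt a Gaussian proposal q_theta = N(theta, 1) towards the
# target pi = N(2, 1) by SGLD-style noisy gradient steps on
# rho(theta) = E_{q_theta}[w^2], which equals the chi^2-divergence plus one.
import numpy as np

rng = np.random.default_rng(1)
target_mean = 2.0
log_pi = lambda x: -0.5 * (x - target_mean) ** 2      # up to a constant
theta, step, beta_inv = 0.0, 0.05, 1e-3               # mean, step size, temperature

for k in range(2000):
    x = rng.normal(theta, 1.0, size=200)              # sample from the proposal
    w = np.exp(log_pi(x) + 0.5 * (x - theta) ** 2)    # density ratio pi/q_theta
    grad = -np.mean(w**2 * (x - theta))               # stochastic gradient of rho
    grad = np.clip(grad, -10.0, 10.0)                 # keep the toy example stable
    theta += -step * grad + np.sqrt(2.0 * step * beta_inv) * rng.normal()

x = rng.normal(theta, 1.0, size=10_000)
w = np.exp(log_pi(x) + 0.5 * (x - theta) ** 2)
print("adapted proposal mean:", round(theta, 3))
print("self-normalized IS estimate of E_pi[x]:", round(np.sum(w * x) / np.sum(w), 3))
```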

We derive bounds on the moduli of the eigenvalues of a special type of matrix rational function using the following techniques: (1) the Bauer-Fike theorem applied to an associated block matrix of the given matrix rational function, (2) an associated real rational function together with Rouché's theorem for the matrix rational function, and (3) a numerical radius inequality for a block matrix associated with the matrix rational function. These bounds are compared when the coefficients are unitary matrices. Numerical examples are given to illustrate the results obtained.
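To illustrate the kind of statement involved (a hedged sketch; the rational function, the block linearization, and the bound below are generic textbook tools, not the paper's results), consider a small matrix rational function $R(z) = zI - A - B/(z - c)$: clearing the denominator gives a monic quadratic matrix polynomial whose eigenvalues can be read off a block companion matrix and compared with the classical Cauchy-type bound $|z| \le 1 + \max_i \|A_i\|_2$ for monic matrix polynomials:

```python
# Illustrative sketch: eigenvalues of R(z) = z*I - A - B/(z - c) via a block
# companion matrix of the monic quadratic (z - c) R(z) = z^2 I + z A1 + A0.
import numpy as np

rng = np.random.default_rng(2)
n, c = 3, 0.5
A, B = rng.normal(size=(n, n)), rng.normal(size=(n, n))

A1, A0 = -(A + c * np.eye(n)), c * A - B      # coefficients after clearing (z - c)

# Block companion linearization of the monic quadratic matrix polynomial
C = np.block([[np.zeros((n, n)), np.eye(n)],
              [-A0,              -A1      ]])
eigs = np.linalg.eigvals(C)

bound = 1.0 + max(np.linalg.norm(A0, 2), np.linalg.norm(A1, 2))
print("max |eigenvalue| :", np.abs(eigs).max())
print("Cauchy-type bound:", bound)
```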

It is known that standard stochastic Galerkin methods encounter challenges when solving partial differential equations with high-dimensional random inputs, which are typically caused by the large number of stochastic basis functions required. It becomes crucial to properly choose effective basis functions, such that the dimension of the stochastic approximation space can be reduced. In this work, we focus on the stochastic Galerkin approximation associated with generalized polynomial chaos (gPC), and explore the gPC expansion based on the analysis of variance (ANOVA) decomposition. A concise form of the gPC expansion is presented for each component function of the ANOVA expansion, and an adaptive ANOVA procedure is proposed to construct the overall stochastic Galerkin system. Numerical results demonstrate the efficiency of our proposed adaptive ANOVA stochastic Galerkin method for both diffusion and Helmholtz problems.
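The sketch below conveys one common flavor of ANOVA-based adaptivity (an anchored first-order criterion with a variance threshold; the toy function, anchor, and tolerance are our own assumptions and the paper's adaptive procedure differs): rank input dimensions by the variance of their first-order component functions and keep only those above a tolerance, so that only the corresponding gPC basis functions enter the stochastic Galerkin space.

```python
# Illustrative sketch: select "active" random dimensions from the variances of
# first-order anchored-ANOVA components, computed by Gauss-Legendre quadrature.
import numpy as np

def f(xi):
    # toy parametric quantity: only xi[0] and xi[2] matter much
    return 1.0 + 0.9 * xi[0] + 0.5 * xi[2] + 0.05 * xi[1] * xi[3]

d, anchor, tol = 4, np.zeros(4), 1e-3
nodes, weights = np.polynomial.legendre.leggauss(10)   # quadrature on [-1, 1]
weights = weights / 2.0                                # uniform density on [-1, 1]

f0 = f(anchor)
first_order_var = np.zeros(d)
for i in range(d):
    vals = []
    for node in nodes:
        xi = anchor.copy()
        xi[i] = node
        vals.append(f(xi) - f0)                        # anchored component f_i
    vals = np.array(vals)
    mean_i = np.sum(weights * vals)
    first_order_var[i] = np.sum(weights * (vals - mean_i) ** 2)

active = np.where(first_order_var > tol)[0]
print("first-order variances:", np.round(first_order_var, 4))
print("active dimensions kept for the Galerkin space:", active)
```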

In arXiv:2305.03945 [math.NA], a first-order optimization algorithm has been introduced to solve time-implicit schemes of reaction-diffusion equations. In this research, we conduct theoretical studies on this first-order algorithm equipped with a quadratic regularization term. We provide sufficient conditions under which the proposed algorithm and its time-continuous limit converge exponentially fast to a desired time-implicit numerical solution. We show both theoretically and numerically that the convergence rate is independent of the grid size, which makes our method suitable for large-scale problems. The efficiency of our algorithm has been verified via a series of numerical examples conducted on various types of reaction-diffusion equations. The choice of optimal hyperparameters as well as comparisons with some classical root-finding algorithms are also discussed in the numerical section.
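A crude illustration of the underlying idea (not the algorithm of arXiv:2305.03945, whose regularization and convergence analysis are the point of the paper; the equation, grid, and step sizes below are illustrative choices) is to view one implicit time step as an optimization problem and attack it with a plain first-order method:

```python
# Crude sketch: one backward-Euler step of a 1D Allen-Cahn-type
# reaction-diffusion equation, viewed as minimizing
#   J(u) = 0.5*||u - u_old||^2 + dt*( 0.5*D*|grad u|^2 + sum W(u) ),
# with W(u) = u^4/4 - u^2/2, and solved by plain gradient descent.
import numpy as np

N, D, dt, lr = 100, 1e-2, 0.1, 0.02
h = 1.0 / N
x = np.arange(N) * h                              # periodic grid on [0, 1)
u_old = 0.1 * np.sin(2 * np.pi * x)               # previous time level

def laplacian(u):
    # second-order centered differences with periodic boundary conditions
    return (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / h**2

def grad_J(u):
    # gradient of the implicit-step objective; its root is the backward-Euler update
    return (u - u_old) + dt * (-D * laplacian(u) + u**3 - u)

u = u_old.copy()
for _ in range(1000):
    u -= lr * grad_J(u)                           # first-order (gradient) step

print("residual of the implicit step:", np.linalg.norm(grad_J(u)))
```

The admissible step size of this naive iteration shrinks as the spatial grid is refined; the grid-independent convergence rate stated in the abstract is precisely what distinguishes the paper's algorithm from such a baseline.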

The goal of explainable Artificial Intelligence (XAI) is to generate human-interpretable explanations, but there are no computationally precise theories of how humans interpret AI-generated explanations. The lack of theory means that validation of XAI must be done empirically, on a case-by-case basis, which prevents systematic theory-building in XAI. We propose a psychological theory of how humans draw conclusions from saliency maps, the most common form of XAI explanation, which for the first time allows for precise prediction of explainee inference conditioned on explanation. Our theory posits that, absent an explanation, humans expect the AI to make decisions similar to their own, and that they interpret an explanation by comparing it to the explanations they themselves would give. Comparison is formalized via Shepard's universal law of generalization in a similarity space, a classic theory from cognitive science. A pre-registered user study on AI image classifications with saliency map explanations demonstrates that our theory quantitatively matches participants' predictions of the AI.
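As a minimal illustration of the comparison mechanism (not the study's fitted model; the saliency maps, the L1 distance, and the decay constant are assumptions for illustration), Shepard's law turns a distance between the AI's saliency map and the explanation a person would give themselves into an exponentially decaying similarity score:

```python
# Toy sketch of Shepard's law: similarity s(a, b) = exp(-c * d(a, b)) between
# an AI saliency map and the explanation a participant would give themselves.
import numpy as np

def shepard_similarity(map_ai, map_human, c=2.0):
    a = map_ai / map_ai.sum()                     # normalize both maps
    b = map_human / map_human.sum()
    d = np.abs(a - b).sum()                       # L1 distance in "similarity space"
    return np.exp(-c * d)                         # Shepard's exponential decay

rng = np.random.default_rng(3)
human_map = rng.random((8, 8))                    # explanation the person would give
ai_map_close = human_map + 0.05 * rng.random((8, 8))
ai_map_far = rng.random((8, 8))                   # unrelated saliency pattern

for name, m in [("close", ai_map_close), ("far", ai_map_far)]:
    print(f"{name} AI saliency map: similarity to own explanation = "
          f"{shepard_similarity(m, human_map):.3f}")
```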
