
We present discontinuous Galerkin (DG) methods for solving a first-order semi-linear hyperbolic system, which was originally proposed as a continuum model for a one-dimensional dimer lattice of topological resonators. We examine the energy-conserving or energy-dissipating property in relation to the choices of simple, mesh-independent numerical fluxes. We demonstrate that, with certain numerical flux choices, our DG method achieves optimal convergence in the $L^2$ norm. We provide numerical experiments that validate and illustrate the effectiveness of our proposed numerical methods.
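
As a small illustration of how the flux choice drives the energy behaviour (on a toy linear advection problem, not the semi-linear system from the paper), the sketch below applies a nodal P1 DG discretization with either a central or an upwind interface flux and reports the change in the discrete L2 energy; the mesh size, time step, and initial data are illustrative assumptions.

```python
# Minimal sketch (not the paper's scheme): a nodal P1 DG discretization of the linear
# advection equation u_t + a u_x = 0 on a periodic domain, illustrating how the interface
# flux choice (central vs. upwind) affects the discrete L2 energy over time.
import numpy as np

a = 1.0                       # advection speed (a > 0)
ne, T = 64, 1.0               # number of elements, final time
h = 1.0 / ne                  # uniform mesh width on [0, 1]
M = h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])    # local mass matrix
K = 0.5 * np.array([[-1.0, -1.0], [1.0, 1.0]])      # local convection matrix, K_ij = int phi_i' phi_j
Minv = np.linalg.inv(M)

def rhs(u, flux):
    """Semi-discrete DG right-hand side; u has shape (ne, 2) = endpoint values per element."""
    uL = u[:, 1]                        # trace from the left of each right interface
    uR = np.roll(u[:, 0], -1)           # trace from the right (next element, periodic)
    if flux == "upwind":
        fhat = a * uL                   # take the upwind (left) value for a > 0
    else:                               # central flux
        fhat = 0.5 * a * (uL + uR)
    fR = fhat                           # flux at the right face of each element
    fL = np.roll(fhat, 1)               # flux at the left face of each element
    r = a * u @ K.T
    r[:, 0] += fL
    r[:, 1] -= fR
    return r @ Minv.T

def energy(u):
    return np.einsum('ei,ij,ej->', u, M, u)    # discrete L2 norm squared

x = np.linspace(0.0, 1.0, ne + 1)
nodes = np.stack([x[:-1], x[1:]], axis=1)      # element endpoints as P1 nodes
dt = 0.1 * h / a
nsteps = int(round(T / dt))
for flux in ("central", "upwind"):
    u = np.sin(2 * np.pi * nodes)
    e0 = energy(u)
    for _ in range(nsteps):                    # classical RK4 in time
        k1 = rhs(u, flux); k2 = rhs(u + 0.5*dt*k1, flux)
        k3 = rhs(u + 0.5*dt*k2, flux); k4 = rhs(u + dt*k3, flux)
        u = u + dt / 6.0 * (k1 + 2*k2 + 2*k3 + k4)
    print(flux, "flux, relative energy change:", (energy(u) - e0) / e0)
```

At the semi-discrete level the central flux conserves this energy exactly and the upwind flux dissipates it; the RK4 time stepping adds only a small additional dissipation.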

Related content

The ACM/IEEE 23rd International Conference on Model Driven Engineering Languages and Systems (MODELS) is the premier conference series on model-driven software and systems engineering, organized with the support of ACM SIGSOFT and IEEE TCSE. Since 1998, MODELS has covered all aspects of modeling, from languages and methods to tools and applications. Its participants come from diverse backgrounds, including researchers, academics, engineers, and industry professionals. MODELS 2019 is a forum in which participants exchange cutting-edge research results and innovative practical experience around modeling and model-driven software and systems. This year's edition offers the modeling community an opportunity to further advance the foundations of modeling and to propose innovative applications of modeling in emerging areas such as cyber-physical systems, embedded systems, socio-technical systems, cloud computing, big data, machine learning, security, open source, and sustainability.
July 10, 2023

We consider a second-order linear system of ordinary differential equations (ODEs) including random variables. A stochastic Galerkin method yields a larger deterministic linear system of ODEs. We apply a model order reduction (MOR) of this high-dimensional linear dynamical system, where its internal energy represents a quadratic quantity of interest. We investigate the properties of this MOR with respect to stability, passivity, and energy dissipation. Numerical results are shown for a system modelling a mass-spring-damper configuration.
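
To make the energy-dissipation statement concrete, here is a minimal sketch of a deterministic mass-spring-damper chain with the internal energy as a quadratic quantity of interest, numerically checking the power balance for a collocated output; the chain, its parameters, and the input signal are illustrative assumptions, and no stochastic Galerkin expansion or model order reduction is performed.

```python
# Minimal sketch (toy example, not the paper's stochastic Galerkin system): a deterministic
# mass-spring-damper chain  M q'' + D q' + K q = B u(t)  with the internal energy
#   E(q, q') = 0.5 q'^T M q' + 0.5 q^T K q
# as a quadratic quantity of interest.  We check the passivity/energy-dissipation balance
#   dE/dt = u^T y - q'^T D q'  <=  u^T y,   with collocated output y = B^T q'.
import numpy as np
from scipy.integrate import solve_ivp

n = 5                                     # number of masses
M = np.eye(n)                             # unit masses
K = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # spring-chain stiffness
D = 0.1 * K                               # stiffness-proportional (Rayleigh-type) damping
B = np.zeros((n, 1)); B[0, 0] = 1.0       # force acting on the first mass
u = lambda t: np.array([np.sin(2.0 * t)]) # input signal (assumed)

Minv = np.linalg.inv(M)
def f(t, x):
    q, v = x[:n], x[n:]
    return np.concatenate([v, Minv @ (B @ u(t) - D @ v - K @ q)])

sol = solve_ivp(f, (0.0, 20.0), np.zeros(2 * n), max_step=1e-2, dense_output=True)
t = np.linspace(0.0, 20.0, 2001)
x = sol.sol(t); q, v = x[:n], x[n:]
E = 0.5 * np.einsum('it,ij,jt->t', v, M, v) + 0.5 * np.einsum('it,ij,jt->t', q, K, q)
supplied = np.array([u(s) @ (B.T @ v[:, k]) for k, s in enumerate(t)])   # u^T y
dissipated = np.einsum('it,ij,jt->t', v, D, v)                           # q'^T D q'
# Power balance: dE/dt should match supplied - dissipated up to integration and
# finite-difference error.
dEdt = np.gradient(E, t)
print("max |dE/dt - (u^T y - q'^T D q')| =", np.max(np.abs(dEdt - (supplied - dissipated))))
```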

It is well known that the Euler method for approximating the solutions of a random ordinary differential equation $\mathrm{d}X_t/\mathrm{d}t = f(t, X_t, Y_t)$ driven by a stochastic process $\{Y_t\}_t$ with $\theta$-H\"older sample paths is estimated to be of strong order $\theta$ with respect to the time step, provided $f=f(t, x, y)$ is sufficiently regular and with suitable bounds. Here, it is proved that, in many typical cases, further conditions on the noise can be exploited so that the strong convergence is actually of order 1, regardless of the H\"older regularity of the sample paths. This applies for instance to additive or multiplicative It\^o process noises (such as Wiener, Ornstein-Uhlenbeck, and geometric Brownian motion processes); to point-process noises (such as Poisson point processes and Hawkes self-exciting processes, which even have jump-type discontinuities); and to transport-type processes with sample paths of bounded variation. The result is based on a novel approach, estimating the global error as an iterated integral over both large and small mesh scales, and switching the order of integration to move the critical regularity to the large scale. The work is complemented with numerical simulations illustrating the strong order 1 convergence in those cases, and with an example with fractional Brownian motion noise with Hurst parameter $0 < H < 1/2$ for which the order of convergence is $H + 1/2$, hence lower than the attained order 1 in the examples above, but still higher than the order $H$ of convergence expected from previous works.
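
The following sketch, a toy random ODE with Ornstein-Uhlenbeck noise chosen here for illustration rather than taken from the paper, estimates the strong order of the Euler scheme empirically by comparing coarse-step solutions against a fine-grid solution driven by the same noise path.

```python
# Minimal sketch (assumed toy problem): empirical strong order of the Euler scheme for the
# random ODE  dX_t/dt = -X_t + sin(Y_t),  where Y_t is an Ornstein-Uhlenbeck process.
# The error at step size h is measured against Euler on a much finer grid with the same noise.
import numpy as np

rng = np.random.default_rng(0)
T, nf = 1.0, 2**12                   # final time and fine-grid resolution
hf = T / nf

def ou_path(theta=1.0, sigma=1.0):
    """OU path on the fine grid using the exact-in-distribution one-step update (Y_0 = 0)."""
    y = np.zeros(nf + 1)
    a = np.exp(-theta * hf)
    s = sigma * np.sqrt((1 - a**2) / (2 * theta))
    for k in range(nf):
        y[k + 1] = a * y[k] + s * rng.standard_normal()
    return y

def euler(y, n):
    """Euler scheme with n steps, sampling the fine-grid noise path y at the step endpoints."""
    stride, h, x = nf // n, T / n, 0.0
    for k in range(n):
        x = x + h * (-x + np.sin(y[k * stride]))
    return x

levels = [2**j for j in range(3, 9)]             # coarse step counts 8, ..., 256
errs = np.zeros(len(levels))
npaths = 100
for _ in range(npaths):                          # average the strong error over sample paths
    y = ou_path()
    xref = euler(y, nf)                          # fine-grid reference solution
    errs += np.array([abs(euler(y, n) - xref) for n in levels])
errs /= npaths
hs = np.array([T / n for n in levels])
slopes = np.diff(np.log(errs)) / np.diff(np.log(hs))
print("observed strong orders between consecutive levels:", np.round(slopes, 2))
```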

Physics informed neural networks (PINNs) have emerged as a powerful tool for providing robust and accurate approximations of solutions to partial differential equations (PDEs). However, PINNs face serious difficulties when approximating PDEs with a dominant hyperbolic character. This research develops a physics informed deep learning framework to approximate solutions of nonlinear PDEs that can develop shocks or discontinuities, without any a priori knowledge of the solution or the location of the discontinuities. The work takes motivation from the finite element method, which solves for solution values at nodes of the discretized domain and uses these nodal values to obtain a globally defined solution field. Built on the rigorous mathematical foundations of the discontinuous Galerkin method, the framework naturally handles the imposition of boundary conditions (Neumann/Dirichlet), entropy conditions, and regularity requirements. Several numerical experiments and validations against analytical solutions demonstrate the accuracy, robustness, and effectiveness of the proposed framework.
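
For contrast, here is a minimal sketch of the standard strong-form collocation PINN for the viscous Burgers equation, i.e. the kind of baseline whose difficulties near shocks motivate the DG-based framework above; it is not the proposed framework, and the network size, sampling, and training schedule are illustrative assumptions.

```python
# Minimal sketch of a *standard* collocation PINN for the viscous Burgers equation
#   u_t + u u_x = nu u_xx,   u(0, x) = -sin(pi x),   u(t, -1) = u(t, 1) = 0.
# This is the strong-form baseline, not the DG-based framework described above.
import math
import torch

torch.manual_seed(0)
nu = 0.01 / math.pi
net = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def pde_residual(t, x):
    """Strong-form PDE residual at collocation points (t, x)."""
    t.requires_grad_(True); x.requires_grad_(True)
    u = net(torch.cat([t, x], dim=1))
    du = lambda y, z: torch.autograd.grad(y, z, torch.ones_like(y), create_graph=True)[0]
    u_t, u_x = du(u, t), du(u, x)
    u_xx = du(u_x, x)
    return u_t + u * u_x - nu * u_xx

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(5000):
    t_c = torch.rand(256, 1); x_c = 2 * torch.rand(256, 1) - 1     # interior collocation points
    x0 = 2 * torch.rand(128, 1) - 1                                 # initial-condition points
    tb = torch.rand(128, 1)                                         # boundary points
    loss = (pde_residual(t_c, x_c).pow(2).mean()
            + (net(torch.cat([torch.zeros_like(x0), x0], 1)) + torch.sin(math.pi * x0)).pow(2).mean()
            + net(torch.cat([tb, torch.ones_like(tb)], 1)).pow(2).mean()
            + net(torch.cat([tb, -torch.ones_like(tb)], 1)).pow(2).mean())
    opt.zero_grad(); loss.backward(); opt.step()
    if step % 1000 == 0:
        print(step, float(loss))
```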

We present results of numerical simulations of the tensor-valued elliptic-parabolic PDE model for biological network formation. The numerical method is based on a nonlinear finite difference scheme on a uniform Cartesian grid in a 2D domain. The focus is on the impact of different discretization methods and choices of regularization parameters on the symmetry of the numerical solution. In particular, we show that using the symmetric alternating-direction implicit (ADI) method for time discretization helps preserve the symmetry of the solution, compared to the non-symmetric ADI method. Moreover, we study the effect of regularization by an isotropic background permeability $r>0$, showing that the increased condition number of the elliptic problem caused by decreasing $r$ leads to a loss of symmetry. We show that in this case, not even the symmetric ADI method preserves the symmetry of the solution. Finally, we perform a numerical error analysis of our method using the Wasserstein distance.
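
As a sketch of the time-stepping ingredient only (a constant-coefficient 2D heat equation, not the tensor-valued elliptic-parabolic model), the code below implements one Peaceman-Rachford ADI step and a symmetrized driver that alternates the x-first/y-first sweep order between steps; for this linear toy problem both variants stay symmetric to roundoff, and the distinction only becomes visible for the nonlinear model studied above.

```python
# Minimal sketch (toy linear heat equation): one Peaceman-Rachford ADI step and a
# "symmetric" driver that alternates the x-first / y-first sweep order on consecutive steps.
import numpy as np

N, L, dt = 64, 1.0, 1e-3
h = L / (N + 1)
A = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)) / h**2
I = np.eye(N)
r = 0.5 * dt
Lhalf = I - r * A                       # implicit operator for one half-sweep (same in x and y)

def adi_step(u, x_first=True):
    """One Peaceman-Rachford step for u_t = u_xx + u_yy with homogeneous Dirichlet BCs."""
    if x_first:
        u_star = np.linalg.solve(Lhalf, u + r * (u @ A))                   # implicit in x, explicit in y
        return np.linalg.solve(Lhalf, (u_star + r * (A @ u_star)).T).T     # implicit in y, explicit in x
    u_star = np.linalg.solve(Lhalf, (u + r * (A @ u)).T).T                 # implicit in y, explicit in x
    return np.linalg.solve(Lhalf, u_star + r * (u_star @ A))               # implicit in x, explicit in y

x = np.linspace(h, L - h, N)
X, Y = np.meshgrid(x, x, indexing="ij")
u0 = np.exp(-200.0 * ((X - 0.5)**2 + (Y - 0.5)**2))       # radially symmetric initial bump

u_plain, u_sym = u0.copy(), u0.copy()
for n in range(200):
    u_plain = adi_step(u_plain, x_first=True)             # fixed sweep order
    u_sym = adi_step(u_sym, x_first=(n % 2 == 0))         # alternate the sweep order each step
# Deviation from mirror symmetry about the diagonal (near roundoff for this linear toy problem):
print("fixed-order ADI :", np.max(np.abs(u_plain - u_plain.T)))
print("symmetric ADI   :", np.max(np.abs(u_sym - u_sym.T)))
```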

Variable screening is a useful methodology for dealing with ultrahigh-dimensional data. When both marginally and jointly dependent predictors of the response are present, existing approaches such as conditional screening and iterative screening often suffer, respectively, from instability with respect to the choice of the conditioning set or from a heavy computational burden. In this article, we propose a new independence measure, named conditional martingale difference divergence (CMDH), that can be treated as either a conditional or a marginal independence measure. Under regularity conditions, we show that the sure screening property of CMDH holds for both marginally and jointly active variables. Based on this measure, we propose a kernel-based, model-free variable screening method that is efficient, flexible, and stable under high correlation among predictors and heterogeneity of the response. In addition, we provide a data-driven method to select the conditioning set. In simulations and real data applications, we demonstrate the superior performance of the proposed method.
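
As a simplified illustration, the sketch below performs plain marginal screening with the unconditional martingale difference divergence of Shao and Zhang (2014), a related but different measure; the paper's CMDH additionally conditions on a data-driven variable set, which is not implemented here, and the toy data-generating model is an assumption.

```python
# Minimal sketch: model-free marginal screening with the (unconditional) martingale
# difference divergence; this is NOT the paper's CMDH, which also conditions on a chosen set.
import numpy as np

def mdd2(y, x):
    """Sample MDD^2(y | x) in V-statistic form: large when E[y | x] depends on x."""
    yc = y - y.mean()
    d = np.abs(x[:, None] - x[None, :])            # pairwise distances |x_i - x_j|
    return -np.mean(np.outer(yc, yc) * d)

def screen(Y, X, top_k=10):
    """Rank the columns of X by marginal MDD^2 with respect to Y and keep the top_k."""
    scores = np.array([mdd2(Y, X[:, j]) for j in range(X.shape[1])])
    return np.argsort(scores)[::-1][:top_k], scores

# Toy check: only the first two of 500 predictors affect the (nonlinear) response.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 500))
Y = 2 * X[:, 0] + np.sin(X[:, 1]) + 0.5 * rng.standard_normal(200)
keep, _ = screen(Y, X)
print("selected predictors:", sorted(keep.tolist()))
```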

This paper is devoted to the statistical and numerical properties of the geometric median and its applications to robust mean estimation via the median-of-means principle. Our main theoretical results include (a) an upper bound on the distance between the mean and the median for general absolutely continuous distributions on $\mathbb{R}^d$, together with examples of specific classes of distributions for which this bound does not depend on the ambient dimension $d$; and (b) exponential deviation inequalities for the distance between the sample and population versions of the geometric median, which again depend only on trace-type quantities and not on the ambient dimension. As a corollary, we deduce improved bounds for the (geometric) median-of-means estimator that hold for large classes of heavy-tailed distributions. Finally, we address the error of numerical approximation, an important practical aspect of any statistical estimation procedure. We demonstrate that the objective function minimized by the geometric median satisfies a "local quadratic growth" condition that allows one to translate suboptimality bounds for the objective function into corresponding bounds for the numerical approximation of the median itself. As a corollary, we propose a simple stopping rule (applicable to any optimization method) that yields explicit error guarantees. We conclude with numerical experiments, including an application to the estimation of mean log-returns for S&P 500 data.
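
A minimal sketch of the computational side: Weiszfeld's algorithm for the geometric median with a simple relative-improvement stopping rule (standing in for the paper's rule with explicit error guarantees), used inside a median-of-means estimator on heavy-tailed toy data; block counts, tolerances, and the data model are illustrative assumptions.

```python
# Minimal sketch: geometric median via Weiszfeld's algorithm and its use in a
# median-of-means mean estimator for heavy-tailed data.
import numpy as np

def geometric_median(X, tol=1e-8, max_iter=500):
    """Weiszfeld iteration for argmin_m sum_i ||X_i - m||; X has shape (n, d)."""
    m = X.mean(axis=0)
    for _ in range(max_iter):
        d = np.maximum(np.linalg.norm(X - m, axis=1), 1e-12)   # guard against division by zero
        w = 1.0 / d
        m_new = (w[:, None] * X).sum(axis=0) / w.sum()
        if np.linalg.norm(m_new - m) <= tol * (1.0 + np.linalg.norm(m)):
            return m_new
        m = m_new
    return m

def median_of_means(X, n_blocks=10, seed=None):
    """Split the sample into blocks, average each block, take the geometric median."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    block_means = np.array([X[b].mean(axis=0) for b in np.array_split(idx, n_blocks)])
    return geometric_median(block_means)

# Toy comparison on heavy-tailed data (multivariate Student t with 2.5 degrees of freedom, true mean 0).
rng = np.random.default_rng(0)
Z = rng.standard_t(df=2.5, size=(2000, 5))
print("sample mean error     :", np.linalg.norm(Z.mean(axis=0)))
print("median-of-means error :", np.linalg.norm(median_of_means(Z, n_blocks=20, seed=1)))
```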

In this paper, we construct an efficient, linear, and fully decoupled finite difference scheme for wormhole propagation with a heat transmission process on staggered grids, which only requires solving a sequence of linear elliptic equations at each time step. We first derive positivity-preserving properties for the discrete porosity and its difference quotient in time, and then rigorously obtain optimal error estimates for the velocity, pressure, concentration, porosity, and temperature in different norms by establishing several auxiliary lemmas for the highly coupled nonlinear system. Numerical experiments in two and three dimensions verify our theoretical results and illustrate the capabilities of the constructed method.
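
To illustrate only the decoupled structure (one linear elliptic solve per time step, followed by explicit updates), here is a drastically simplified 1D toy in the same spirit; the permeability-porosity law, reaction rate, and boundary data are assumptions, heat transmission is omitted, and this is not the paper's staggered-grid scheme.

```python
# Minimal sketch (1D toy, heavily simplified): each time step solves a linear elliptic
# pressure problem, updates the Darcy velocity on faces, transports the acid concentration
# with an explicit upwind step, and then updates the porosity.  All closure laws are assumed.
import numpy as np

N, T, dt = 100, 0.5, 1e-3
hx = 1.0 / N
phi = 0.2 * np.ones(N)                     # initial porosity (cell centers)
c = np.zeros(N)                            # acid concentration (cell centers)
beta = 2.0                                 # assumed dissolution rate

for _ in range(int(T / dt)):
    kappa = phi**3                         # assumed permeability-porosity law
    kf = np.zeros(N + 1)                   # face permeabilities (harmonic averages)
    kf[1:-1] = 2 * kappa[:-1] * kappa[1:] / (kappa[:-1] + kappa[1:])
    kf[0], kf[-1] = kappa[0], kappa[-1]
    # Linear elliptic pressure problem  -(kappa p')' = 0,  p(0) = 1, p(1) = 0 (finite volumes).
    A = np.zeros((N, N)); b = np.zeros(N)
    for i in range(N):
        if i > 0:     A[i, i] += kf[i] / hx**2;     A[i, i - 1] -= kf[i] / hx**2
        else:         A[i, i] += 2 * kf[0] / hx**2; b[i] += 2 * kf[0] / hx**2 * 1.0
        if i < N - 1: A[i, i] += kf[i + 1] / hx**2; A[i, i + 1] -= kf[i + 1] / hx**2
        else:         A[i, i] += 2 * kf[-1] / hx**2
    p = np.linalg.solve(A, b)
    # Darcy velocity on faces, then explicit upwind transport of the concentration.
    u = np.zeros(N + 1)
    u[1:-1] = -kf[1:-1] * (p[1:] - p[:-1]) / hx
    u[0] = -2 * kf[0] * (p[0] - 1.0) / hx
    u[-1] = -2 * kf[-1] * (0.0 - p[-1]) / hx
    cin = np.concatenate([[1.0], c])       # inject c = 1 at the inflow boundary
    flux = u * cin                         # upwind values (the flow is left-to-right here)
    c = c - dt / (phi * hx) * (flux[1:] - flux[:-1])
    # Porosity update driven by the local acid concentration.
    phi = np.minimum(phi + dt * beta * c * (1 - phi), 0.99)

print("porosity range after simulation:", round(float(phi.min()), 3), round(float(phi.max()), 3))
```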

Many food products involve mixtures of ingredients, where the mixtures can be expressed as combinations of ingredient proportions. In many cases, the quality and the consumer preference may also depend on the way in which the mixtures are processed. The processing is generally defined by the settings of one or more process variables. Experimental designs studying the joint impact of the mixture ingredient proportions and the settings of the process variables are called mixture-process variable experiments. In this article, we show how to combine mixture-process variable experiments and discrete choice experiments, to quantify and model consumer preferences for food products that can be viewed as processed mixtures. First, we describe the modeling of data from such combined experiments. Next, we describe how to generate D- and I-optimal designs for choice experiments involving mixtures and process variables, and we compare the two kinds of designs using two examples.
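
As a simplified illustration of the design criterion, the sketch below evaluates the D-criterion of a candidate mixture-process variable design under a combined Scheffé quadratic by process variable linear model; the paper's choice designs instead use the multinomial logit information matrix, which this linear-model surrogate does not implement, and the candidate point sets are assumptions.

```python
# Minimal sketch (linear-model surrogate, hypothetical point sets): the D-criterion of a
# mixture-process variable design under the combined model
#   f(x, z) = (x1, x2, x3, x1*x2, x1*x3, x2*x3, x1*z, x2*z, x3*z).
# The paper's choice designs replace F'F below with the multinomial logit information matrix.
import numpy as np
from itertools import product

def model_row(x, z):
    x1, x2, x3 = x
    return np.array([x1, x2, x3, x1*x2, x1*x3, x2*x3, x1*z, x2*z, x3*z])

def d_criterion(points):
    F = np.array([model_row(x, z) for x, z in points])
    p = F.shape[1]
    sign, logdet = np.linalg.slogdet(F.T @ F)
    return np.exp(logdet / p) if sign > 0 else 0.0      # (det F'F)^(1/p), 0 if singular

# Candidate design: simplex-centroid-style blends crossed with two process-variable levels.
blends = [(1, 0, 0), (0, 1, 0), (0, 0, 1),
          (0.5, 0.5, 0), (0.5, 0, 0.5), (0, 0.5, 0.5), (1/3, 1/3, 1/3)]
design = list(product(blends, [-1.0, 1.0]))
print("design size:", len(design), " D-criterion:", round(d_criterion(design), 4))

# A design with pure blends only cannot estimate the blending (cross-product) terms: D = 0.
poor = list(product(blends[:3], [-1.0, 1.0]))
print("pure blends only, D-criterion:", d_criterion(poor))
```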

Graph neural networks generalize conventional neural networks to graph-structured data and have received widespread attention owing to their impressive representation ability. In spite of these remarkable achievements, the performance of Euclidean models on graph-related learning tasks is still bounded by the representation ability of Euclidean geometry, especially for datasets with a highly non-Euclidean latent anatomy. Recently, hyperbolic space has gained increasing popularity for processing graph data with tree-like structure and power-law distributions, owing to its exponential growth property. In this survey, we comprehensively revisit the technical details of current hyperbolic graph neural networks (HGNNs), unifying them into a general framework and summarizing the variants of each component. More importantly, we present various HGNN-related applications. Finally, we identify several challenges that could serve as guidelines for further advancing graph learning in hyperbolic spaces.
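
The sketch below shows the basic primitives most HGNNs share: the exponential and logarithmic maps at the origin of the Poincaré ball, the induced geodesic distance, and a tangent-space-style "hyperbolic linear layer". The formulas are the standard ones for the Poincaré ball and are not tied to any particular model from the survey.

```python
# Minimal sketch (standard Poincare-ball formulas): the exponential and logarithmic maps at
# the origin of the ball with curvature -c, which HGNNs use to move features between the
# Euclidean tangent space (where linear layers act) and hyperbolic space (where embeddings live).
import numpy as np

def expmap0(v, c=1.0, eps=1e-12):
    """Tangent vector at the origin -> point in the Poincare ball of curvature -c."""
    sqrt_c = np.sqrt(c)
    norm = np.maximum(np.linalg.norm(v, axis=-1, keepdims=True), eps)
    return np.tanh(sqrt_c * norm) * v / (sqrt_c * norm)

def logmap0(x, c=1.0, eps=1e-12):
    """Point in the Poincare ball -> tangent vector at the origin (inverse of expmap0)."""
    sqrt_c = np.sqrt(c)
    norm = np.maximum(np.linalg.norm(x, axis=-1, keepdims=True), eps)
    return np.arctanh(np.minimum(sqrt_c * norm, 1 - eps)) * x / (sqrt_c * norm)

def poincare_distance(x, y, c=1.0):
    """Geodesic distance in the Poincare ball of curvature -c."""
    diff2 = np.sum((x - y) ** 2, axis=-1)
    denom = (1 - c * np.sum(x**2, axis=-1)) * (1 - c * np.sum(y**2, axis=-1))
    return np.arccosh(1 + 2 * c * diff2 / denom) / np.sqrt(c)

# A tangent-space-style "hyperbolic linear layer": log map -> Euclidean linear map -> exp map.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3)) * 0.1
h = expmap0(rng.standard_normal((5, 3)) * 0.1)        # 5 node features on the ball
h_next = expmap0(logmap0(h) @ W.T)                    # shape (5, 4), back on the ball
print("distance between first two embeddings:", poincare_distance(h[0], h[1]))
print("all outputs stay inside the unit ball:", bool(np.all(np.linalg.norm(h_next, axis=1) < 1.0)))
```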

This PhD thesis contains several contributions to the field of statistical causal modeling. Statistical causal models are statistical models embedded with causal assumptions that allow for the inference and reasoning about the behavior of stochastic systems affected by external manipulation (interventions). This thesis contributes to the research areas concerning the estimation of causal effects, causal structure learning, and distributionally robust (out-of-distribution generalizing) prediction methods. We present novel and consistent linear and non-linear causal effects estimators in instrumental variable settings that employ data-dependent mean squared prediction error regularization. Our proposed estimators show, in certain settings, mean squared error improvements compared to both canonical and state-of-the-art estimators. We show that recent research on distributionally robust prediction methods has connections to well-studied estimators from econometrics. This connection leads us to prove that general K-class estimators possess distributional robustness properties. We, furthermore, propose a general framework for distributional robustness with respect to intervention-induced distributions. In this framework, we derive sufficient conditions for the identifiability of distributionally robust prediction methods and present impossibility results that show the necessity of several of these conditions. We present a new structure learning method applicable in additive noise models with directed trees as causal graphs. We prove consistency in a vanishing identifiability setup and provide a method for testing substructure hypotheses with asymptotic family-wise error control that remains valid post-selection. Finally, we present heuristic ideas for learning summary graphs of nonlinear time-series models.
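
As a concrete reference point for the K-class discussion, the sketch below implements the textbook K-class estimator in a linear instrumental-variable model on simulated confounded data; kappa = 0 recovers OLS and kappa = 1 recovers 2SLS, while the data-generating process is an illustrative assumption.

```python
# Minimal sketch (standard econometric formulas, illustrative data): the general K-class
# estimator  beta(kappa) = (X'(I - kappa M_Z) X)^{-1} X'(I - kappa M_Z) y,  M_Z = I - P_Z.
import numpy as np

def k_class(y, X, Z, kappa):
    """K-class estimator: kappa = 0 is OLS, kappa = 1 is 2SLS."""
    XtPX = X.T @ Z @ np.linalg.solve(Z.T @ Z, Z.T @ X)     # X' P_Z X
    XtPy = X.T @ Z @ np.linalg.solve(Z.T @ Z, Z.T @ y)     # X' P_Z y
    A = (1 - kappa) * (X.T @ X) + kappa * XtPX             # X'(I - kappa M_Z) X
    b = (1 - kappa) * (X.T @ y) + kappa * XtPy             # X'(I - kappa M_Z) y
    return np.linalg.solve(A, b)

# Toy IV data: instrument Z drives the regressor X; a hidden variable H confounds X and y.
rng = np.random.default_rng(0)
n = 2000
Z = rng.standard_normal((n, 1))
H = rng.standard_normal(n)                                  # hidden confounder
X = (2.0 * Z[:, 0] + H + rng.standard_normal(n)).reshape(-1, 1)
y = 1.0 * X[:, 0] + 2.0 * H + rng.standard_normal(n)        # true causal effect is 1.0
for kappa in (0.0, 0.5, 1.0):
    print(f"kappa = {kappa}: beta_hat = {k_class(y, X, Z, kappa)[0]:+.3f}")
```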
