
The Wright function arises in the theory of fractional differential equations. It is a very general mathematical object with diverse connections to other special and elementary functions, and it provides a unified treatment of several classes of special functions, such as the Gaussian, Airy, Bessel, and error functions. The manuscript presents a novel numerical technique for approximating the Wright function using quadratures. The algorithm is implemented as a standalone library based on double-exponential quadrature combined with the method of stationary phase. Function plots for a variety of parameter values are demonstrated.
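Since the abstract names double-exponential quadrature, here is a minimal, self-contained sketch of that rule (tanh-sinh on $(-1,1)$), together with the defining series of the Wright function as a reference for moderate arguments. The parameter choices and helper names are illustrative, not taken from the paper's library.

```python
import numpy as np
from scipy.special import gammaln

def wright_series(z, lam, mu, kmax=100):
    """Truncated defining series W_{lam,mu}(z) = sum_k z^k / (k! Gamma(lam*k + mu)),
    for real z and lam, mu > 0; usable as a reference for moderate |z|."""
    k = np.arange(kmax)
    log_terms = k * np.log(np.abs(z) + 1e-300) - gammaln(k + 1.0) - gammaln(lam * k + mu)
    return float(np.sum(np.sign(z) ** k * np.exp(log_terms)))

def tanh_sinh_quad(f, n=38, h=0.08):
    """Double-exponential (tanh-sinh) quadrature on (-1, 1). Nodes cluster
    doubly exponentially at the endpoints; n and h are chosen so the nodes
    stay strictly inside (-1, 1) in double precision."""
    t = np.arange(-n, n + 1) * h
    x = np.tanh(0.5 * np.pi * np.sinh(t))
    w = h * 0.5 * np.pi * np.cosh(t) / np.cosh(0.5 * np.pi * np.sinh(t)) ** 2
    return float(np.sum(w * f(x)))

# Endpoint-singular test integral: int_{-1}^{1} dx / sqrt(1 - x^2) = pi.
approx = tanh_sinh_quad(lambda x: 1.0 / np.sqrt(1.0 - x * x))
```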

Related content

The hazard function represents one of the main quantities of interest in the analysis of survival data. We propose a general approach for modelling the dynamics of the hazard function using systems of autonomous ordinary differential equations (ODEs). This modelling approach can be used to provide qualitative and quantitative analyses of the evolution of the hazard function over time. Our proposal capitalises on the extensive literature of ODEs which, in particular, allow for establishing basic rules or laws on the dynamics of the hazard function via the use of autonomous ODEs. We show how to implement the proposed modelling framework in cases where there is an analytic solution to the system of ODEs or where an ODE solver is required to obtain a numerical solution. We focus on the use of a Bayesian modelling approach, but the proposed methodology can also be coupled with maximum likelihood estimation. A simulation study is presented to illustrate the performance of these models and the interplay of sample size and censoring. Two case studies using real data are presented to illustrate the use of the proposed approach and to highlight the interpretability of the corresponding models. We conclude with a discussion on potential extensions of our work and strategies to include covariates into our framework.
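As a concrete instance of the framework described above, the sketch below assumes a logistic law for the hazard (one of many possible autonomous ODEs) and augments the state with the cumulative hazard, from which the survival and density functions follow; the parameter values are purely illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

def hazard_ode(t, y, a, b):
    """y[0] = hazard h(t), y[1] = cumulative hazard H(t); h follows a
    logistic law dh/dt = a*h*(1 - h/b). Any autonomous ODE could be used."""
    h, H = y
    return [a * h * (1.0 - h / b), h]

a, b, h0 = 1.5, 2.0, 0.1          # hypothetical parameter values
sol = solve_ivp(hazard_ode, (0.0, 10.0), [h0, 0.0], args=(a, b),
                dense_output=True)

t = np.linspace(0.0, 10.0, 200)
h, H = sol.sol(t)
S = np.exp(-H)                    # survival S(t) = exp(-H(t))
f = h * S                         # density f(t) = h(t) S(t)
```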

In classical logic, "P implies Q" is equivalent to "not-P or Q". It is well known that this equivalence is problematic. From "P implies Q", "not-P or Q" can be inferred ("Implication-to-disjunction" is valid), whereas from "not-P or Q", "P implies Q" cannot be inferred in general ("Disjunction-to-implication" is not valid), so the equivalence between them fails. This work aims to remove exactly the incorrect Disjunction-to-implication from classical logic (CL). The paper proposes a logical system (IRL) with the following properties: (1) adding Disjunction-to-implication to IRL yields exactly CL, and (2) Disjunction-to-implication is independent of IRL, i.e. neither Disjunction-to-implication nor its negation can be derived in IRL. In other words, IRL is precisely the subsystem of CL with Disjunction-to-implication removed.
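The classical facts the abstract starts from can be checked mechanically; the Lean snippet below derives both directions in classical logic, the second being exactly the Disjunction-to-implication rule that IRL is designed to reject as a primitive inference.

```lean
-- Both directions are derivable classically; IRL removes the second.
example (P Q : Prop) : (P → Q) → (¬P ∨ Q) :=
  fun h => (Classical.em P).elim (fun hp => Or.inr (h hp)) (fun hnp => Or.inl hnp)

example (P Q : Prop) : (¬P ∨ Q) → (P → Q) :=
  fun h hp => h.elim (fun hnp => absurd hp hnp) (fun hq => hq)
```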

It is well known that the Euler method for approximating the solutions of a random ordinary differential equation $\mathrm{d}X_t/\mathrm{d}t = f(t, X_t, Y_t)$ driven by a stochastic process $\{Y_t\}_t$ with $\theta$-H\"older sample paths is estimated to be of strong order $\theta$ with respect to the time step, provided $f=f(t, x, y)$ is sufficiently regular and suitably bounded. Here, it is proved that, in many typical cases, further conditions on the noise can be exploited so that the strong convergence is actually of order 1, regardless of the H\"older regularity of the sample paths. This applies, for instance, to additive or multiplicative It\^o process noises (such as Wiener, Ornstein-Uhlenbeck, and geometric Brownian motion processes); to point-process noises (such as Poisson point processes and Hawkes self-exciting processes, which even have jump-type discontinuities); and to transport-type processes with sample paths of bounded variation. The result is based on a novel approach that estimates the global error as an iterated integral over both large and small mesh scales, and switches the order of integration to move the critical regularity to the large scale. The work is complemented with numerical simulations illustrating the strong order 1 convergence in those cases, and with an example driven by fractional Brownian motion with Hurst parameter $0 < H < 1/2$, for which the order of convergence is $H + 1/2$, hence lower than the order 1 attained in the examples above, but still higher than the order $H$ expected from previous works.
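The scheme analyzed above is plain Euler applied pathwise; the sketch below sets it up for a linear test equation with Ornstein-Uhlenbeck noise (one of the It\^o process noises covered by the result) and measures the pathwise error against a fine-grid reference. The test equation and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def ou_path(t, theta=1.0, sigma=0.5):
    """Exact sampling of an Ornstein-Uhlenbeck path on the grid t."""
    y = np.zeros_like(t)
    for i in range(len(t) - 1):
        dt = t[i + 1] - t[i]
        mean = y[i] * np.exp(-theta * dt)
        std = sigma * np.sqrt((1.0 - np.exp(-2.0 * theta * dt)) / (2.0 * theta))
        y[i + 1] = mean + std * rng.standard_normal()
    return y

def euler_rode(f, x0, t, y):
    """Pathwise Euler scheme X_{n+1} = X_n + dt * f(t_n, X_n, Y_{t_n})."""
    x = np.empty_like(t)
    x[0] = x0
    for n in range(len(t) - 1):
        x[n + 1] = x[n] + (t[n + 1] - t[n]) * f(t[n], x[n], y[n])
    return x

# Test RODE dX/dt = -X + Y_t with OU noise, a case covered by the result.
f = lambda t, x, y: -x + y
t_fine = np.linspace(0.0, 1.0, 2**12 + 1)
y_fine = ou_path(t_fine)
x_ref = euler_rode(f, 1.0, t_fine, y_fine)       # fine-grid reference

t_coarse = t_fine[::16]                          # same noise path, coarser grid
x_coarse = euler_rode(f, 1.0, t_coarse, y_fine[::16])
err = abs(x_coarse[-1] - x_ref[-1])              # pathwise endpoint error
```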

Operator splitting is a popular divide-and-conquer strategy for solving differential equations. Typically, the right-hand side of the differential equation is split into a number of parts that are then integrated separately. Many methods are known that split the right-hand side into two parts. This approach is limiting, however, and there are situations when 3-splitting is more natural and ultimately more advantageous. The second-order Strang operator-splitting method readily generalizes to a right-hand side split into any number of operators. It is arguably the most popular method for 3-splitting because of its efficiency, ease of implementation, and intuitive nature. Other 3-splitting methods exist, but they are less well known, and analysis and evaluation of their performance in practice are scarce. We demonstrate the effectiveness of some alternative second-order 3-split methods relative to Strang splitting on two problems: the reaction-diffusion Brusselator, which can be split into three parts that each have closed-form solutions, and the kinetic Vlasov--Poisson equations used in semi-Lagrangian plasma simulations. We find alternative second-order 3-operator-splitting methods that realize efficiency gains of 10\%--20\% over traditional Strang splitting. Our analysis for the practical assessment of the efficiency of operator-splitting methods includes the computational cost of the integrators and can be used in method design.
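For concreteness, a 3-operator Strang step composes the three sub-flows symmetrically; the sketch below, with hypothetical scalar sub-flows, shows the palindromic structure that makes the method second order.

```python
import numpy as np

def strang3(phi_a, phi_b, phi_c, u, h):
    """One step of 3-operator Strang splitting,
    S(h) = phi_A(h/2) o phi_B(h/2) o phi_C(h) o phi_B(h/2) o phi_A(h/2);
    second order when each phi solves its own sub-problem exactly."""
    u = phi_a(u, 0.5 * h)
    u = phi_b(u, 0.5 * h)
    u = phi_c(u, h)
    u = phi_b(u, 0.5 * h)
    return phi_a(u, 0.5 * h)

# Toy scalar problem du/dt = (a + b + c) u, split into three linear parts
# whose exact sub-flows are exponentials. Linear flows commute, so the
# splitting is exact here; non-commuting operators incur an O(h^2) error.
a, b, c = -1.0, 0.5, -0.25
flow = lambda k: (lambda u, h: u * np.exp(k * h))
u, h = 1.0, 0.01
for _ in range(100):
    u = strang3(flow(a), flow(b), flow(c), u, h)
# u is (up to roundoff) exp(a + b + c), the exact solution at t = 1.
```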

For terminal value problems of fractional differential equations of order $\alpha \in (0,1)$ with Caputo derivatives, shooting methods are a well-developed and thoroughly investigated approach. Based on recently established analytic properties of such problems, we develop a new technique to select the required initial values that solves such shooting problems quickly and accurately. Numerical experiments indicate that this new proportional secting technique converges rapidly to the solution. Run-time measurements indicate a speedup factor between 4 and 10 compared to the standard bisection method.
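The proportional secting rule is the paper's own contribution; the sketch below shows only the baseline it is compared against, a bisection-based shooting loop, with an ordinary first-order ODE standing in for the Caputo-fractional solver (only the inner solver would change). The right-hand side and bracket are hypothetical.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import bisect

def terminal_value(x0, f, T):
    """Solve the initial value problem from x(0) = x0 and return x(T)."""
    sol = solve_ivp(f, (0.0, T), [x0], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

f = lambda t, x: -x + np.sin(t)   # hypothetical right-hand side
T, xT = 2.0, 0.3                  # terminal condition x(T) = xT

# Baseline bisection shooting: root of x0 |-> x(T; x0) - xT on a bracket.
# The paper's proportional secting replaces this root search and is
# reported to be 4-10x faster.
x0 = bisect(lambda c: terminal_value(c, f, T) - xT, -10.0, 10.0, xtol=1e-10)
```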

Partial differential equations (PDEs) with uncertain or random inputs have been considered in many studies of uncertainty quantification. In forward uncertainty quantification, one is interested in analyzing the stochastic response of the PDE subject to input uncertainty, which usually involves solving high-dimensional integrals of the PDE output over a sequence of stochastic variables. In practical computations, one typically needs to discretize the problem in several ways: approximating an infinite-dimensional input random field with a finite-dimensional random field, spatial discretization of the PDE using, e.g., finite elements, and approximating high-dimensional integrals using cubatures such as quasi-Monte Carlo methods. In this paper, we focus on the error resulting from dimension truncation of an input random field. We show how Taylor series can be used to derive theoretical dimension truncation rates for a wide class of problems and we provide a simple checklist of conditions that a parametric mathematical model needs to satisfy in order for our dimension truncation error bound to hold. Some of the novel features of our approach include that our results are applicable to non-affine parametric operator equations, dimensionally-truncated conforming finite element discretized solutions of parametric PDEs, and even compositions of PDE solutions with smooth nonlinear quantities of interest. As a specific application of our method, we derive an improved dimension truncation error bound for elliptic PDEs with lognormally parameterized diffusion coefficients. Numerical examples support our theoretical findings.
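Dimension truncation can be illustrated without a PDE solve: the toy functional below, with hypothetical decaying coefficients $\psi_j = j^{-\theta}$, mimics a lognormally parameterized quantity of interest, and comparing its $s$-term truncations against a higher-dimensional reference by Monte Carlo exhibits the kind of truncation error the paper bounds.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy lognormal-type functional G(y) = exp(sum_j psi_j y_j); truncating
# the sum to its first s terms mimics truncating an input random field.
theta, s_max, n = 2.0, 32, 100_000
psi = np.arange(1, s_max + 1, dtype=float) ** (-theta)
y = rng.standard_normal((n, s_max))

reference = np.exp(y @ psi).mean()              # "full"-dimensional reference
for s in (2, 4, 8, 16):
    truncated = np.exp(y[:, :s] @ psi[:s]).mean()
    print(f"s = {s:2d}, truncation error = {abs(truncated - reference):.2e}")
```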

It is desirable for statistical models to detect signals of interest independently of their position. If the data is generated by some smooth process, this additional structure should be taken into account. We introduce a new class of neural networks that are shift invariant and preserve smoothness of the data: functional neural networks (FNNs). For this, we use methods from functional data analysis (FDA) to extend multi-layer perceptrons and convolutional neural networks to functional data. We propose different model architectures, show that the models outperform a benchmark model from FDA in terms of accuracy and successfully use FNNs to classify electroencephalography (EEG) data.
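A minimal sketch of one functional layer follows, assuming inputs are functions observed on a shared grid; the basis choice, layer sizes, and class name are illustrative and not the paper's architecture.

```python
import numpy as np

class FunctionalLayer:
    """One functional layer: computes tanh(<x, beta_k> + b_k) for weight
    functions beta_k expanded in a fixed smooth (here Fourier sine) basis;
    the inner product is approximated by quadrature on the grid."""
    def __init__(self, grid, n_basis=8, n_out=4, seed=0):
        rng = np.random.default_rng(seed)
        self.grid = grid
        ks = np.arange(1, n_basis + 1)
        self.basis = np.sin(np.pi * np.outer(grid, ks))           # (len(grid), n_basis)
        self.coef = 0.1 * rng.standard_normal((n_basis, n_out))   # learnable
        self.bias = np.zeros(n_out)                               # learnable

    def __call__(self, x):
        w = self.basis @ self.coef              # weight functions on the grid
        dt = self.grid[1] - self.grid[0]
        return np.tanh(x @ w * dt + self.bias)  # quadrature of <x, beta_k>

grid = np.linspace(0.0, 1.0, 100)
layer = FunctionalLayer(grid)
out = layer(np.random.default_rng(1).standard_normal((32, 100)))  # (32, 4)
```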

We propose a new model to address the overlooked problem of node clustering in simple hypergraphs. Simple hypergraphs are suitable when a node may not appear multiple times in the same hyperedge, such as in co-authorship datasets. Our model assumes the existence of latent node groups and hyperedges are conditionally independent given these groups. We first establish the generic identifiability of the model parameters. We then develop a variational approximation Expectation-Maximization algorithm for parameter inference and node clustering, and derive a statistical criterion for model selection. To illustrate the performance of our R package HyperSBM, we compare it with other node clustering methods using synthetic data generated from the model, as well as from a line clustering experiment and a co-authorship dataset. As a by-product, our synthetic experiments demonstrate that the detectability thresholds for non-uniform sparse hypergraphs cannot be deduced from the uniform case.
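A minimal generator for this model class (simple hypergraphs, latent node groups, hyperedges conditionally independent given the groups) can be written in a few lines; the group-dependent probabilities below are illustrative and not the parameterization used by HyperSBM.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)

# Nodes get latent groups; each candidate hyperedge (here of size 2 or 3)
# appears independently with a probability depending only on its members'
# groups. "Simple" means a node cannot be repeated within a hyperedge,
# which the use of combinations() enforces.
n, Q = 30, 2
z = rng.integers(0, Q, size=n)        # latent group of each node
p_same, p_diff = 0.15, 0.02           # hypothetical within/across-group rates

edges = []
for size in (2, 3):
    for nodes in combinations(range(n), size):
        p = p_same if len({z[v] for v in nodes}) == 1 else p_diff
        if rng.random() < p:
            edges.append(nodes)
```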

Linear statistics of point processes yield Monte Carlo estimators of integrals. While the simplest approach relies on a homogeneous Poisson point process, more regularly spread point processes, such as scrambled low-discrepancy sequences or determinantal point processes, can yield Monte Carlo estimators with fast-decaying mean square error. Following the intuition that more regular configurations result in lower integration error, we introduce the repulsion operator, which reduces clustering by slightly pushing the points of a configuration away from each other. Our main theoretical result is that applying the repulsion operator to a homogeneous Poisson point process yields an unbiased Monte Carlo estimator with lower variance than under the original point process. On the computational side, the evaluation of our estimator is only quadratic in the number of integrand evaluations and can be easily parallelized without any communication across tasks. We illustrate our variance reduction result with numerical experiments and compare it to popular Monte Carlo methods. Finally, we numerically investigate a few open questions on the repulsion operator. In particular, the experiments suggest that the variance reduction also holds when the operator is applied to other motion-invariant point processes.
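The sketch below shows the linear-statistic estimator under a homogeneous Poisson point process, together with an illustrative nearest-neighbour push that stands in for the paper's repulsion operator (which is defined differently); it is only meant to make the quadratic cost and the parallel structure concrete.

```python
import numpy as np

rng = np.random.default_rng(3)

# Unbiased linear-statistic estimator on [0,1]^2: for a homogeneous
# Poisson process of intensity rho, (1/rho) * sum_i f(x_i) estimates
# the integral of f.
rho = 500.0
pts = rng.random((rng.poisson(rho), 2))
f = lambda x: np.cos(2.0 * np.pi * x[:, 0]) * x[:, 1] + 1.0
estimate_poisson = f(pts).sum() / rho

# Illustrative stand-in for the repulsion operator: nudge each point a
# small step away from its nearest neighbour. Cost is quadratic in the
# number of points and embarrassingly parallel, as quoted above.
n = len(pts)
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
np.fill_diagonal(d, np.inf)
nn = d.argmin(axis=1)
unit = (pts - pts[nn]) / d[np.arange(n), nn][:, None]
pushed = (pts + (0.1 / rho) * unit) % 1.0      # wrap on the torus
estimate_repelled = f(pushed).sum() / rho
```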

We consider an unknown multivariate function representing a system, such as a complex numerical simulator, that takes both deterministic and uncertain inputs. Our objective is to estimate the set of deterministic inputs leading to outputs whose probability (with respect to the distribution of the uncertain inputs) of belonging to a given set is less than a given threshold. This problem, which we call Quantile Set Inversion (QSI), occurs for instance in robust (reliability-based) optimization, when looking for the set of solutions that satisfy the constraints with sufficiently large probability. To solve the QSI problem, we propose a Bayesian strategy based on Gaussian process modeling and the Stepwise Uncertainty Reduction (SUR) principle, which sequentially chooses the points at which the function should be evaluated so as to efficiently approximate the set of interest. We illustrate the performance and practical interest of the proposed SUR strategy through several numerical experiments.
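To make the QSI target concrete, the brute-force sketch below estimates, for each deterministic input on a grid, the probability that a hypothetical simulator's output falls in the critical set, and thresholds it; the paper's contribution is to replace this exhaustive loop with a GP surrogate driven by SUR.

```python
import numpy as np

rng = np.random.default_rng(4)

g = lambda x, u: x**2 + u                   # hypothetical simulator
crit = lambda y: y > 1.0                    # critical output set {y > 1}
threshold = 0.1

xs = np.linspace(-2.0, 2.0, 81)             # grid of deterministic inputs
u = rng.normal(0.0, 0.5, size=10_000)       # samples of the uncertain input
probs = np.array([crit(g(x, u)).mean() for x in xs])
qsi_set = xs[probs < threshold]             # estimated QSI set
```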
