
Semi-implicit spectral deferred correction (SDC) methods provide a systematic approach to construct time integration methods of arbitrarily high order for nonlinear evolution equations including conservation laws. They converge towards $A$- or even $L$-stable collocation methods, but are often not sufficiently robust themselves. In this paper, a family of SDC methods inspired by an implicit formulation of the Lax-Wendroff method is developed. Compared to fully implicit approaches, the methods have the advantage that they only require the solution of positive definite or semi-definite linear systems. Numerical evidence suggests that the proposed semi-implicit SDC methods with Radau points are $L$-stable up to order 11 and require very little diffusion for orders 13 and 15. The excellent stability and accuracy of these methods are confirmed by numerical experiments with 1D conservation problems, including the convection-diffusion, Burgers, Euler and Navier-Stokes equations.
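The core SDC idea, iteratively correcting a provisional solution with low-order implicit sweeps against a high-order quadrature of the residual, can be sketched for the scalar test equation $y' = \lambda y$. The toy below uses equispaced nodes and a plain implicit-Euler sweep rather than the paper's Radau points and Lax-Wendroff-inspired semi-implicit splitting; all parameter choices are illustrative.

```python
import numpy as np

# Toy SDC sketch for y' = lam*y, y(0) = 1, on a single step [0, dt].
# Equispaced substep nodes and implicit-Euler correction sweeps stand in
# for the paper's Radau points and semi-implicit splitting (assumptions).
lam, dt, M = -2.0, 0.5, 4
t = np.linspace(0.0, dt, M)

# S[m, j] = integral of the j-th Lagrange basis polynomial over [t[m], t[m+1]]
S = np.zeros((M - 1, M))
for j in range(M):
    e = np.zeros(M); e[j] = 1.0
    coefs = np.polynomial.polynomial.polyfit(t, e, M - 1)
    P = np.polynomial.Polynomial(coefs).integ()
    S[:, j] = P(t[1:]) - P(t[:-1])

y = np.ones(M)            # provisional solution: constant initial guess
errs = []
for sweep in range(6):
    f_old = lam * y
    y_new = y.copy()
    for m in range(M - 1):
        h = t[m + 1] - t[m]
        # implicit-Euler correction against the spectral quadrature S
        rhs = y_new[m] + S[m] @ f_old - h * f_old[m + 1]
        y_new[m + 1] = rhs / (1.0 - h * lam)
    y = y_new
    errs.append(abs(y[-1] - np.exp(lam * dt)))
```

Each sweep raises the formal order by one until the accuracy of the underlying collocation method is reached, which is why the error sequence `errs` decreases rapidly.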


Electromagnetic stimulation probes and modulates the neural systems that control movement. Key to understanding their effects is the muscle recruitment curve, which maps evoked potential size against stimulation intensity. Current methods to estimate curve parameters require large samples; however, obtaining these is often impractical due to experimental constraints. Here, we present a hierarchical Bayesian framework that accounts for small samples, handles outliers, simulates high-fidelity data, and returns a posterior distribution over curve parameters that quantify estimation uncertainty. It uses a rectified-logistic function that estimates motor threshold and outperforms conventionally used sigmoidal alternatives in predictive performance, as demonstrated through cross-validation. In simulations, our method outperforms non-hierarchical models by reducing threshold estimation error on sparse data and requires fewer participants to detect shifts in threshold compared to frequentist testing. We present two common use cases involving electrical and electromagnetic stimulation data and provide an open-source library for Python, called hbMEP, for diverse applications.
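One way to realize a recruitment curve with an explicit motor threshold is to rectify a logistic at its midpoint, so the response sits exactly at baseline below threshold and grows sigmoidally above it. The parameterization below is a plausible illustration, not necessarily the one implemented in hbMEP; all names are ours.

```python
import numpy as np

def rectified_logistic(x, baseline, threshold, slope, amplitude):
    """Hypothetical recruitment curve: flat at `baseline` below the motor
    threshold, logistic growth toward `baseline + amplitude` above it."""
    s = 1.0 / (1.0 + np.exp(-slope * (x - threshold)))
    return baseline + amplitude * np.maximum(0.0, 2.0 * s - 1.0)

x = np.linspace(0.0, 100.0, 201)       # stimulation intensities
y = rectified_logistic(x, baseline=0.05, threshold=40.0, slope=0.2,
                       amplitude=2.0)  # evoked potential sizes
```

Unlike an ordinary sigmoid, this curve has a genuine threshold parameter: the response is identically `baseline` for all intensities at or below 40, which is what makes the motor threshold directly estimable.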

Recent methods in modeling spatial extreme events have focused on utilizing parametric max-stable processes and their underlying dependence structure. In this work, we provide a unified approach for analyzing spatial extremes with little available data by estimating the distribution of model parameters or the spatial dependence directly. By employing recent developments in generative neural networks, we predict a full sample-based distribution, allowing for direct assessment of uncertainty regarding model parameters or other parameter-dependent functionals. We validate our method by fitting several simulated max-stable processes, showing that the approach is highly accurate in both parameter estimation and uncertainty quantification. Additional robustness checks highlight the generalization and extrapolation capabilities of the model, while an application to precipitation extremes across Western Germany demonstrates the usability of our approach in real-world scenarios.

The homogenization procedure developed here is conducted on a laminate with periodic space-time modulation on the fine scale: at leading order, this modulation creates convection in the low-wavelength regime if both parameters are modulated. However, if only one parameter is modulated, which is more realistic, this convective term disappears and one recovers a standard diffusion equation with effective homogeneous parameters; this does not describe the non-reciprocity and the propagation of the field observed from exact dispersion diagrams. This inconsistency is corrected here by considering second-order homogenization which results in a non-reciprocal propagation term that is proved to be non-zero for any laminate and verified via numerical simulation. The same methodology is also applied to the case when the density is modulated in the heat equation, leading therefore to a corrective advective term which cancels out non-reciprocity at the leading order but not at the second order.

A common method for estimating the Hessian operator from random samples on a low-dimensional manifold involves locally fitting a quadratic polynomial. Although widely used, it is unclear if this estimator introduces bias, especially in complex manifolds with boundaries and nonuniform sampling. Rigorous theoretical guarantees of its asymptotic behavior have been lacking. We show that, under mild conditions, this estimator asymptotically converges to the Hessian operator, with nonuniform sampling and curvature effects proving negligible, even near boundaries. Our analysis framework simplifies the intensive computations required for direct analysis.
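The estimator in question amounts to a least-squares fit of a quadratic polynomial to nearby samples, from which the Hessian is read off the second-order coefficients. In the flat special case (a function on $\mathbb{R}^2$, so curvature and boundary effects vanish), the procedure reduces to the sketch below; variable names and the test function are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(p):                     # test function with constant Hessian [[2,1],[1,4]]
    x, y = p[..., 0], p[..., 1]
    return x**2 + x*y + 2*y**2

def local_hessian(points, values, center):
    """Estimate the Hessian of f at `center` by least-squares fitting a
    quadratic polynomial to nearby samples."""
    d = points - center
    x, y = d[:, 0], d[:, 1]
    # design matrix for f ~ c0 + g.d + (1/2) d^T H d
    A = np.column_stack([np.ones_like(x), x, y, 0.5*x**2, x*y, 0.5*y**2])
    c, *_ = np.linalg.lstsq(A, values, rcond=None)
    return np.array([[c[3], c[4]], [c[4], c[5]]])

pts = rng.uniform(-0.2, 0.2, size=(200, 2))   # local (nonuniform) sample
H = local_hessian(pts, f(pts), np.zeros(2))
```

Because the test function is exactly quadratic, the fit recovers the Hessian up to floating-point error; the paper's contribution is to show that on a curved manifold with boundary, the analogous estimator still converges asymptotically.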

We present a Bayesian method for multivariate changepoint detection that allows for simultaneous inference on the location of a changepoint and the coefficients of a logistic regression model for distinguishing pre-changepoint data from post-changepoint data. In contrast to many methods for multivariate changepoint detection, the proposed method is applicable to data of mixed type and avoids strict assumptions regarding the distribution of the data and the nature of the change. The regression coefficients provide an interpretable description of a potentially complex change. For posterior inference, the model admits a simple Gibbs sampling algorithm based on P\'olya-gamma data augmentation. We establish conditions under which the proposed method is guaranteed to recover the true underlying changepoint. As a testing ground for our method, we consider the problem of detecting topological changes in time series of images. We demonstrate that our proposed method $\mathtt{bclr}$, combined with a topological feature embedding, performs well on both simulated and real image data. The method also successfully recovers the location and nature of changes in more traditional changepoint tasks.
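A non-Bayesian caricature of the model's core idea, pairing each candidate changepoint with a logistic regression that separates pre- from post-changepoint data and scoring candidates by the fitted likelihood, fits in a few lines. This point-estimate sketch omits the priors and the P\'olya-gamma Gibbs sampler of $\mathtt{bclr}$; the data and all names are ours.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 2-D series with a mean shift at t = 60
n, tau_true = 100, 60
X = rng.normal(size=(n, 2))
X[tau_true:] += 3.0

def fit_loglik(X, y, iters=300, lr=0.1):
    """Logistic regression by gradient ascent; returns max log-likelihood."""
    Z = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(Z.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Z @ w))
        w += lr * Z.T @ (y - p) / len(y)
    p = np.clip(1.0 / (1.0 + np.exp(-Z @ w)), 1e-12, 1 - 1e-12)
    return np.sum(y * np.log(p) + (1 - y) * np.log1p(-p))

# Profile over candidate changepoints: labels flip at each candidate tau
cands = list(range(10, 90))
scores = [fit_loglik(X, (np.arange(n) >= t).astype(float)) for t in cands]
tau_hat = cands[int(np.argmax(scores))]
```

In the full Bayesian model, the changepoint and the regression coefficients are sampled jointly, so the posterior also quantifies uncertainty about where and how the distribution changed.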

We introduce a method based on Gaussian process regression to identify discrete variational principles from observed solutions of a field theory. The method is based on the data-based identification of a discrete Lagrangian density. It is a geometric machine learning technique in the sense that the variational structure of the true field theory is reflected in the data-driven model by design. We provide a rigorous convergence statement for the method. The proof circumvents challenges posed by the ambiguity of discrete Lagrangian densities in the inverse problem of variational calculus. Moreover, our method can be used to quantify model uncertainty in the equations of motion and any linear observable of the discrete field theory. This is illustrated with the examples of the discrete wave equation and the discrete Schr\"odinger equation. The article constitutes an extension of our previous article arXiv:2404.19626 on the data-driven identification of (discrete) Lagrangians for variational dynamics from an ODE setting to the setting of discrete PDEs.
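The regression backbone of such methods is standard Gaussian process regression; the variational structure enters through what is being regressed (a discrete Lagrangian density). The sketch below shows only the generic GP layer, with a squared-exponential kernel and jitter, on a 1-D toy target; nothing here is specific to the paper's discrete-Lagrangian setup, and all parameter choices are ours.

```python
import numpy as np

rng = np.random.default_rng(2)

def rbf(a, b, ell=0.5):
    """Squared-exponential kernel k(a, b) = exp(-|a-b|^2 / (2*ell^2))."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

# Noisy observations of a smooth target
x = np.sort(rng.uniform(0.0, 2 * np.pi, 30))
y = np.sin(x) + 0.01 * rng.normal(size=x.size)

# GP posterior mean and pointwise variance at test locations
xs = np.linspace(0.0, 2 * np.pi, 100)
K = rbf(x, x) + 1e-4 * np.eye(x.size)      # kernel matrix + noise/jitter
Ks = rbf(xs, x)
mean = Ks @ np.linalg.solve(K, y)
var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
```

The posterior variance `var` is what makes uncertainty quantification for linear observables possible: linear functionals of a GP posterior remain Gaussian with computable variance.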

We formulate and analyze a multiscale method for an elliptic problem with an oscillatory coefficient based on a skeletal (hybrid) formulation. More precisely, we employ hybrid discontinuous Galerkin approaches and combine them with the localized orthogonal decomposition methodology to obtain a coarse-scale skeletal method that effectively includes fine-scale information. This work is the first step in reliably merging hybrid skeletal formulations and localized orthogonal decomposition to unite the advantages of both strategies. Numerical experiments are presented to illustrate the theoretical findings.

In relational verification, judicious alignment of computational steps facilitates proof of relations between programs using simple relational assertions. Relational Hoare logics (RHL) provide compositional rules that embody various alignments of executions. Seemingly more flexible alignments can be expressed in terms of product automata based on program transition relations. A single degenerate alignment rule (self-composition), atop a complete Hoare logic, comprises an RHL for $\forall\forall$ properties that is complete in the ordinary logical sense (Cook'78). The notion of alignment completeness was previously proposed as a more satisfactory measure, and some rules were shown to be alignment complete with respect to a few ad hoc forms of alignment automata. This paper proves alignment completeness with respect to a general class of $\forall\forall$ alignment automata, for an RHL comprising standard rules together with a rule of semantics-preserving rewrites based on Kleene algebra with tests. A new logic for $\forall\exists$ properties is introduced and shown to be alignment complete. The $\forall\forall$ and $\forall\exists$ automata are shown to be semantically complete. Thus the logics are both complete in the ordinary sense. Recent work by D'Osualdo et al. highlights the importance of completeness relative to assumptions (which we term entailment completeness), and presents $\forall\forall$ examples seemingly beyond the scope of RHLs. Additional rules enable these examples to be proved in our RHL, shedding light on the open problem of entailment completeness.

Shape-restricted inferences have exhibited empirical success in various applications with survival data. However, certain works fall short in providing a rigorous theoretical justification and an easy-to-use variance estimator with theoretical guarantees. Motivated by Deng et al. (2023), this paper delves into an additive and shape-restricted partially linear Cox model for right-censored data, where each additive component satisfies a specific shape restriction, encompassing monotonic increasing/decreasing and convexity/concavity. We systematically investigate the consistency and convergence rates of the shape-restricted maximum partial likelihood estimator (SMPLE) of all the underlying parameters. We further establish the asymptotic normality and semiparametric efficiency of the SMPLE for the linear covariate shift. To estimate the asymptotic variance, we propose an innovative data-splitting variance estimation method that boasts exceptional versatility and broad applicability. Our simulation results and an analysis of the Rotterdam Breast Cancer dataset demonstrate that the SMPLE has comparable performance with the maximum likelihood estimator under the Cox model when the Cox model is correct, and outperforms the latter and Huang (1999)'s method when the Cox model is violated or the hazard is nonsmooth. Meanwhile, the proposed variance estimation method usually leads to reliable interval estimates based on the SMPLE and its competitors.

Ethics-based auditing (EBA) is a structured process whereby an entity's past or present behaviour is assessed for consistency with moral principles or norms. Recently, EBA has attracted much attention as a governance mechanism that may bridge the gap between principles and practice in AI ethics. However, important aspects of EBA (such as the feasibility and effectiveness of different auditing procedures) have yet to be substantiated by empirical research. In this article, we address this knowledge gap by providing insights from a longitudinal industry case study. Over 12 months, we observed and analysed the internal activities of AstraZeneca, a biopharmaceutical company, as it prepared for and underwent an ethics-based AI audit. While previous literature concerning EBA has focused on proposing evaluation metrics or visualisation techniques, our findings suggest that the main difficulties large multinational organisations face when conducting EBA mirror classical governance challenges. These include ensuring harmonised standards across decentralised organisations, demarcating the scope of the audit, driving internal communication and change management, and measuring actual outcomes. The case study presented in this article contributes to the existing literature by providing a detailed description of the organisational context in which EBA procedures must be integrated to be feasible and effective.
