
This paper focuses on the problem of testing the null hypothesis that the regression functions of several populations are equal under a general nonparametric homoscedastic regression model. It is well known that linear kernel regression estimators are sensitive to atypical responses. These distorted estimates will influence the test statistic constructed from them, so the conclusions obtained when testing equality of several regression functions may also be affected. In recent years, the use of testing procedures based on empirical characteristic functions has shown good practical properties. For that reason, to provide more reliable inferences, we construct a test statistic that combines characteristic functions and residuals obtained from a robust smoother under the null hypothesis. The asymptotic distribution of the test statistic is studied under the null hypothesis and under root-$n$ contiguous alternatives. A Monte Carlo study is performed to compare the finite-sample behaviour of the proposed test with the classical one obtained using local averages. The reported numerical experiments show the advantage of the proposed methodology over the one based on Nadaraya-Watson estimators for finite samples. An illustration on a real data set is also provided and allows us to investigate the sensitivity of the $p$-value to the bandwidth selection.
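
The abstract gives no implementation, but the construction lends itself to a compact sketch. Below is a minimal, illustrative Python version, assuming a kernel-weighted local median as the robust smoother and a Cramér-von Mises-type contrast of empirical characteristic functions; the paper's actual estimator, weights and calibration differ.

```python
import numpy as np

def local_median(x_eval, x, y, h):
    """Kernel-weighted local median: a simple robust smoother standing in
    for the robust fit of the paper (whose exact estimator differs)."""
    out = np.empty(len(x_eval))
    order = np.argsort(y)
    for i, x0 in enumerate(x_eval):
        w = np.exp(-0.5 * ((x[order] - x0) / h) ** 2)
        cdf = np.cumsum(w) / w.sum()
        out[i] = y[order][np.searchsorted(cdf, 0.5)]
    return out

def ecf_statistic(samples, h=0.25, t=np.linspace(-5, 5, 101)):
    """Contrast the empirical characteristic functions (ECFs) of residuals
    from a pooled robust fit; large values speak against H0 that all
    populations share one regression function."""
    x_all = np.concatenate([x for x, _ in samples])
    y_all = np.concatenate([y for _, y in samples])
    res_all = y_all - local_median(x_all, x_all, y_all, h)
    ecf_pool = np.exp(1j * np.outer(t, res_all)).mean(axis=1)
    stat = 0.0
    for x, y in samples:
        res = y - local_median(x, x_all, y_all, h)
        ecf = np.exp(1j * np.outer(t, res)).mean(axis=1)
        stat += len(y) * np.trapz(np.abs(ecf - ecf_pool) ** 2, t)
    return stat
```

Critical values for such a statistic would in practice come from a bootstrap or permutation scheme, which is also where the sensitivity to the bandwidth h mentioned in the abstract can be examined.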

Related content

Testing of hypotheses is a well-studied topic in mathematical statistics. Recently, this issue has also been addressed in the context of Inverse Problems, where the quantity of interest is not directly accessible but only after the inversion of a (potentially) ill-posed operator. In this study, we propose a regularized approach to hypothesis testing in Inverse Problems in the sense that the underlying estimators (or test statistics) are allowed to be biased. Under mild source-condition-type assumptions, we derive a family of tests with prescribed level $\alpha$ and subsequently analyze how to choose the test with maximal power out of this family. As one major result, we prove that regularized testing is always at least as good as (classical) unregularized testing. Furthermore, using tools from convex optimization, we provide an adaptive test by maximizing the power functional, which then outperforms previous unregularized tests in numerical simulations by several orders of magnitude.
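
As a rough illustration of the regularized-testing idea (not the paper's construction), the toy Python sketch below tests H0: f = 0 in a discretized linear model y = A f + noise, using a Tikhonov-regularized, hence biased, estimator; the regularization strength alpha_reg, the Gaussian noise model and the Monte Carlo calibration are all assumptions made for this example.

```python
import numpy as np

def regularized_test(y, A, alpha_reg, sigma, level=0.05, n_mc=2000, seed=0):
    """Toy regularized test of H0: f = 0 in the model y = A f + sigma*noise.
    The statistic uses a Tikhonov-regularized (biased) reconstruction of f;
    its null distribution is calibrated by Monte Carlo to keep the level."""
    rng = np.random.default_rng(seed)
    n, p = A.shape
    R = np.linalg.solve(A.T @ A + alpha_reg * np.eye(p), A.T)  # regularized inverse
    stat = np.linalg.norm(R @ y) ** 2
    # Under H0 the data are pure noise; simulate the statistic's null law
    null = np.array([np.linalg.norm(R @ (sigma * rng.standard_normal(n))) ** 2
                     for _ in range(n_mc)])
    return stat > np.quantile(null, 1 - level)  # True = reject H0
```

Setting alpha_reg = 0 recovers an unregularized test, so comparing rejection rates across alpha_reg values mimics, in miniature, the paper's comparison of regularized against unregularized power.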

Over the last decade, approximating functions in infinite dimensions from samples has gained increasing attention in computational science and engineering, especially in computational uncertainty quantification. This is primarily due to the relevance of functions that are solutions to parametric differential equations in various fields, e.g., chemistry, economics, engineering, and physics. While acquiring accurate and reliable approximations of such functions is inherently difficult, current benchmark methods exploit the fact that such functions often belong to certain classes of holomorphic functions to get algebraic convergence rates in infinite dimensions with respect to the number of (potentially adaptive) samples $m$. Our work focuses on providing theoretical approximation guarantees for the class of $(\boldsymbol{b},\varepsilon)$-holomorphic functions, demonstrating that these algebraic rates are the best possible for Banach-valued functions in infinite dimensions. We establish lower bounds using a reduction to a discrete problem in combination with the theory of $m$-widths, Gelfand widths and Kolmogorov widths. We study two cases, known and unknown anisotropy, in which the relative importance of the variables is known and unknown, respectively. A key conclusion of our paper is that in the latter setting, approximation from finite samples is impossible without some inherent ordering of the variables, even if the samples are chosen adaptively. Finally, in both cases, we demonstrate near-optimal, non-adaptive (random) sampling and recovery strategies that achieve rates close to the lower bounds.
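
To make the known-anisotropy setting concrete, here is a toy Python experiment. The target f(y) = 1/(1 - Σ b_j y_j), the i.i.d. uniform (non-adaptive) samples, and the crude anisotropic monomial index set are all assumptions of this example, standing in for the polynomial spaces used in the sparse-approximation literature.

```python
import numpy as np
from itertools import product

# Toy anisotropic, holomorphic-type target on [-1, 1]^d:
# f(y) = 1 / (1 - sum_j b_j y_j), with rapidly decaying b_j (the anisotropy).
rng = np.random.default_rng(0)
d = 4
b = 0.4 ** np.arange(1, d + 1)        # "known anisotropy": b_1 > b_2 > ...
f = lambda Y: 1.0 / (1.0 - Y @ b)     # well defined since sum_j b_j < 1

m = 300                               # non-adaptive (random) samples
Y = rng.uniform(-1, 1, size=(m, d))

# Least squares onto an anisotropic monomial index set: more degrees of
# freedom for the more important variables.
degs = [4, 3, 2, 1]
index_set = list(product(*(range(k + 1) for k in degs)))
V = np.array([[np.prod(y ** np.array(nu)) for nu in index_set] for y in Y])
coef, *_ = np.linalg.lstsq(V, f(Y), rcond=None)

Y_test = rng.uniform(-1, 1, size=(1000, d))
V_test = np.array([[np.prod(y ** np.array(nu)) for nu in index_set] for y in Y_test])
print("max test error:", np.abs(V_test @ coef - f(Y_test)).max())
```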

In this paper we show that using implicative algebras one can produce models of set theory generalizing Heyting/Boolean-valued models and realizability models of (I)ZF, both in intuitionistic and classical logic. This has as consequence that any topos which is obtained from a Set-based tripos as the result of the tripos-to-topos construction hosts a model of intuitionistic or classical set theory, provided a large enough strongly inaccessible cardinal exists.

Engineers are often faced with the decision to select the most appropriate model for simulating the behavior of engineered systems, among a candidate set of models. Experimental monitoring data can generate significant value by supporting engineers in making such decisions. Such data can be leveraged within a Bayesian model updating process, enabling the uncertainty-aware calibration of any candidate model. The model selection task can subsequently be cast into a problem of decision-making under uncertainty, where one seeks to select the model that yields an optimal balance between the reward associated with model precision, in terms of recovering target Quantities of Interest (QoI), and the cost of each model, in terms of complexity and compute time. In this work, we examine the model selection task by means of Bayesian decision theory, through the prism of the availability of models of varying refinement, and thus varying levels of fidelity. In doing so, we offer an exemplary application of this framework to the IMAC-MVUQ Round-Robin Challenge. Numerical investigations show various outcomes of model selection depending on the target QoI.
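
A minimal Python sketch of the decision-theoretic selection step is given below. The model names, evidence values, costs and the linear utility are invented for illustration, not taken from the IMAC-MVUQ challenge data.

```python
import numpy as np

# Toy Bayesian decision-theoretic model selection: pick the model that
# maximizes expected utility = accuracy reward minus model cost.
models = ["low_fidelity", "mid_fidelity", "high_fidelity"]
log_evidence = np.array([-12.4, -10.1, -9.8])   # from Bayesian model updating
cost = np.array([0.1, 1.0, 10.0])               # complexity / compute time
lam = 0.05                                      # reward-vs-cost trade-off

post = np.exp(log_evidence - log_evidence.max())
post /= post.sum()                              # posterior model probabilities
utility = post - lam * cost                     # a simple linear utility
best = models[int(np.argmax(utility))]
print(dict(zip(models, np.round(utility, 3))), "->", best)
```

With this utility the high-fidelity model wins only while its extra cost stays below the probability mass it gains, which is exactly the balance the abstract describes.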

The accuracy of solving partial differential equations (PDEs) on coarse grids is greatly affected by the choice of discretization schemes. In this work, we propose to learn time integration schemes based on neural networks that satisfy three distinct sets of mathematical constraints: unconstrained, semi-constrained with the root condition, and fully constrained with both root and consistency conditions. We focus on learning 3-step linear multistep methods, which we subsequently apply to solve three model PDEs: the one-dimensional heat equation, the one-dimensional wave equation, and the one-dimensional Burgers' equation. The results show that the prediction error of the learned fully constrained scheme is close to that of the Runge-Kutta method and the Adams-Bashforth method. Compared to the traditional methods, the learned unconstrained and semi-constrained schemes significantly reduce the prediction error on coarse grids. On a grid that is 4 times coarser than the reference grid, the mean square error shows a reduction of up to an order of magnitude for some of the heat equation cases, and a substantial improvement in phase prediction for the wave equation. On a 32 times coarser grid, the mean square error for the Burgers' equation can be reduced by up to 35%-40%.
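
The two constraint sets are easy to state concretely. The Python sketch below (an illustration, not the paper's training code) checks consistency (Σ α_j = 0 and Σ j·α_j = Σ β_j) and the root condition (all roots of ρ(z) = Σ α_j z^j in the closed unit disk, with boundary roots simple) for a k-step method, using Adams-Bashforth 3 as a check case.

```python
import numpy as np

def check_lmm(alpha, beta, tol=1e-9):
    """Check consistency and the root condition for a k-step linear multistep
    method  sum_j alpha_j y_{n+j} = h * sum_j beta_j f(t_{n+j}, y_{n+j})."""
    alpha, beta = np.asarray(alpha, float), np.asarray(beta, float)
    j = np.arange(len(alpha))
    consistent = (abs(alpha.sum()) < tol
                  and abs((j * alpha).sum() - beta.sum()) < tol)
    roots = np.roots(alpha[::-1])        # roots of rho(z) = sum_j alpha_j z^j
    boundary = roots[np.isclose(np.abs(roots), 1.0, atol=1e-8)]
    simple = all((np.isclose(boundary, r, atol=1e-8)).sum() == 1
                 for r in boundary)      # boundary roots must be simple
    root_ok = bool(np.all(np.abs(roots) <= 1.0 + 1e-8) and simple)
    return consistent, root_ok

# Adams-Bashforth 3: y_{n+3} = y_{n+2} + h*(23 f_{n+2} - 16 f_{n+1} + 5 f_n)/12
print(check_lmm(alpha=[0, 0, -1, 1], beta=[5/12, -16/12, 23/12, 0]))  # (True, True)
```

A "semi-constrained" learned scheme in the abstract's sense would keep the second check satisfied while letting the consistency residuals float as trainable slack.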

We deal with McKean-Vlasov and Boltzmann-type jump equations. This means that the coefficients of the stochastic equation depend on the law of the solution, and the equation is driven by a Poisson point measure whose intensity measure also depends on the law of the solution. In [3], Alfonsi and Bally proved that under suitable conditions the solution $X_t$ of such an equation exists and is unique. They also proved that $X_t$ is the probabilistic interpretation of an analytical weak equation, and that the Euler scheme $X_t^{\mathcal{P}}$ of this equation converges to $X_t$ in Wasserstein distance. In this paper, under more restrictive assumptions, we show that the Euler scheme $X_t^{\mathcal{P}}$ converges to $X_t$ in total variation distance and that $X_t$ has a smooth density (which is a function solving the analytical weak equation). On the other hand, in view of simulation, we use a truncated Euler scheme $X^{\mathcal{P},M}_t$ which has a finite number of jumps in any compact interval. We prove that $X^{\mathcal{P},M}_{t}$ also converges to $X_t$ in total variation distance. Finally, we give an algorithm, based on a particle system associated with $X^{\mathcal{P},M}_t$, to approximate the density of the law of $X_t$. Complete estimates of the error are obtained.
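
As a toy illustration of the particle-system idea (not the paper's scheme), the Python sketch below runs a truncated Euler scheme for an assumed McKean-Vlasov dynamic whose drift depends on the empirical mean; the drift, jump law and truncation level are stand-ins chosen for brevity.

```python
import numpy as np

def particle_euler(n=2000, T=1.0, steps=200, lam=2.0, jump_cut=5.0, seed=0):
    """Toy interacting-particle (truncated) Euler scheme for a McKean-Vlasov
    jump SDE: the drift depends on the empirical mean, standing in for the
    law of X_t, and jumps arrive from a Poisson clock with truncated sizes,
    so each path has finitely many jumps on any compact interval."""
    rng = np.random.default_rng(seed)
    dt = T / steps
    x = rng.standard_normal(n)
    for _ in range(steps):
        drift = -(x - x.mean())                 # b(x, law) = -(x - E[X_t])
        n_jumps = rng.poisson(lam * dt, n)      # Poisson point measure
        sizes = np.clip(rng.standard_normal(n), -jump_cut, jump_cut)
        x = x + drift * dt + sizes * n_jumps
    return x    # empirical samples of the law of X_T

samples = particle_euler()
```

A kernel density estimate over the returned particles then stands in for the density of the law of $X_T$, without the total-variation error control established in the paper.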

We introduce time-ordered multibody interactions to describe complex systems manifesting temporal as well as multibody dependencies. First, we show how the dynamics of multivariate Markov chains can be decomposed into ensembles of time-ordered multibody interactions. Then, we present an algorithm to extract those interactions from data capturing the system-level dynamics of node states, and a measure to characterize the complexity of interaction ensembles. Finally, we experimentally validate the robustness of our algorithm against statistical errors and its efficiency at inferring parsimonious interaction ensembles.
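
A small synthetic example conveys what a genuinely multibody, time-ordered dependency looks like. In the Python sketch below (a toy, not the paper's extraction algorithm), node 0 follows a noisy XOR of nodes 1 and 2 at the previous step; the estimated conditional transition table cannot be reproduced by pairwise terms alone.

```python
import numpy as np
from itertools import product

def conditional_table(traj, target, sources):
    """Estimate P(x_target(t+1) = 1 | x_sources(t)) from a binary state
    trajectory traj of shape (T, n_nodes)."""
    prev, nxt = traj[:-1], traj[1:]
    table = {}
    for pattern in product([0, 1], repeat=len(sources)):
        mask = np.all(prev[:, sources] == pattern, axis=1)
        if mask.any():
            table[pattern] = nxt[mask, target].mean()
    return table

# Synthetic dynamics: node 0 follows a noisy XOR of nodes 1 and 2, a
# dependency that no combination of pairwise terms can reproduce.
rng = np.random.default_rng(1)
traj = rng.integers(0, 2, size=(5000, 3))
flip = rng.random(4999) < 0.1
traj[1:, 0] = (traj[:-1, 1] ^ traj[:-1, 2]) ^ flip
print(conditional_table(traj, target=0, sources=[1, 2]))
# about {(0,0): 0.1, (0,1): 0.9, (1,0): 0.9, (1,1): 0.1}
```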

Linear logic has provided new perspectives on proof theory, denotational semantics and the study of programming languages. One of its main successes is proof-nets, canonical representations of proofs that lie at the intersection between logic and graph theory. In the case of the minimalist proof-system of multiplicative linear logic without units (MLL), these two aspects are completely fused: proof-nets for this system are graphs satisfying a correctness criterion that can be fully expressed in the language of graphs. For more expressive logical systems (containing logical constants, quantifiers and exponential modalities), this is not completely the case. The purely graphical approach of proof-nets deprives them of the sequential structure that is crucial to represent the order in which arguments are presented, which is necessary for these extensions. Rebuilding this order of presentation, that is, sequentializing the graph, is thus a requirement for a graph to be logical. The presentation and study of the artifacts ensuring that sequentialization can be done, such as boxes or jumps, are an integral part of research on linear logic. Jumps, extensively studied by Faggian and di Giamberardino, can express intermediate degrees of sequentialization between a sequent calculus proof and a fully desequentialized proof-net. We propose to analyze the logical strength of jumps by internalizing them in an extension of MLL where axioms on a specific formula, the jumping formula, introduce constraints on the possible sequentializations. The jumping formula needs to be treated non-linearly, which we do either axiomatically, or by embedding it in a very controlled fragment of multiplicative-exponential linear logic, uncovering the exponential logic of sequentialization.
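
For the MLL case mentioned above, the purely graphical correctness criterion can be made concrete. The Python sketch below implements a Danos-Regnier-style switching test on a simplified graph encoding (the data representation is an assumption of this example): for every switching, where each par node keeps exactly one premise edge, the resulting graph must be a tree.

```python
from itertools import product

def dr_correct(nodes, fixed_edges, par_premises):
    """Danos-Regnier test for an MLL proof structure as a pure graph check:
    for every switching (each par node keeps exactly one premise edge),
    the resulting undirected graph must be a tree: connected and acyclic."""
    for choice in product(*par_premises.values()):
        edges = list(fixed_edges) + list(choice)
        parent = {v: v for v in nodes}           # union-find over the vertices
        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]    # path halving
                v = parent[v]
            return v
        components = len(nodes)
        for a, b in edges:
            ra, rb = find(a), find(b)
            if ra != rb:
                parent[ra] = rb
                components -= 1
        # connected with |V| - 1 edges <=> tree (connected and acyclic)
        if components != 1 or len(edges) != len(nodes) - 1:
            return False
    return True

# Identity net for the conclusion A^perp par A: one axiom link, one par node.
print(dr_correct(["a", "na", "par"],
                 [("a", "na")],                             # axiom link
                 {"par": (("a", "par"), ("na", "par"))}))   # True: correct net
```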

Physics-informed neural network (PINN) based solution methods for differential equations have recently shown success in a variety of scientific computing applications. Several authors have reported difficulties, however, when using PINNs to solve equations with multiscale features. The objective of the present work is to illustrate and explain the difficulty of using standard PINNs for the particular case of divergence-form elliptic partial differential equations (PDEs) with oscillatory coefficients in the differential operator. We show that if the coefficient $a^{\epsilon}(x)$ in the elliptic operator is of the form $a(x/\epsilon)$ for a 1-periodic coercive function $a(\cdot)$, then the Frobenius norm of the neural tangent kernel (NTK) matrix associated to the loss function grows as $1/\epsilon^2$. This implies that as the separation of scales in the problem increases, training the neural network with gradient-descent-based methods to achieve an accurate approximation of the solution to the PDE becomes increasingly difficult. Numerical examples illustrate the stiffness of the optimization problem.
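
The source of this scaling can be seen from a one-dimensional expansion by the chain rule, a heuristic consistent with the NTK result rather than a substitute for the paper's proof:

```latex
% Expanding the divergence-form operator for a^\epsilon(x) = a(x/\epsilon):
\[
  -\,\partial_x\big( a(x/\epsilon)\,\partial_x u(x) \big)
  \;=\;
  -\,\frac{1}{\epsilon}\, a'(x/\epsilon)\,\partial_x u(x)
  \;-\; a(x/\epsilon)\,\partial_{xx} u(x).
\]
% The PDE residual entering the PINN loss therefore carries an O(1/epsilon)
% term; since NTK entries are built from products of residual gradients,
% this is consistent with the reported O(1/epsilon^2) growth of the norm.
```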

We derive information-theoretic generalization bounds for supervised learning algorithms based on the information contained in predictions rather than in the output of the training algorithm. These bounds improve over the existing information-theoretic bounds, are applicable to a wider range of algorithms, and solve two key challenges: (a) they give meaningful results for deterministic algorithms and (b) they are significantly easier to estimate. We show experimentally that the proposed bounds closely follow the generalization gap in practical scenarios for deep learning.
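
As a loose illustration of why prediction-based quantities are easy to estimate, the Python sketch below computes a plug-in mutual information between a discrete "prediction" and a binary selection variable; the Bernoulli toy data and the discretization are assumptions of this example, not the paper's estimator.

```python
import numpy as np
from collections import Counter

def plugin_mi(pairs):
    """Plug-in estimate of the mutual information I(X; Y), in nats,
    for a list of discrete (x, y) pairs."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * np.log(c * n / (px[x] * py[y]))
               for (x, y), c in pxy.items())

# Toy use: Y is which of two candidate examples entered the training set,
# X is the model's binary prediction on a held-out point. Small I(X;Y)
# means predictions reveal little about the sample selection, which is the
# quantity that drives prediction-based generalization bounds.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 10000)
x = np.where(rng.random(10000) < 0.4, 1 - y, y)      # noisy "prediction"
print(plugin_mi(list(zip(x.tolist(), y.tolist()))))  # about 0.02 nats
```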
