In this paper, we deal with sequential testing of multiple hypotheses. In the general scheme of construction of optimal tests based on backward induction, we propose a modification which provides a simplified (generally speaking, suboptimal) version of the optimal test, for any particular criterion of optimization. We call this the DBC version (the one with Dropped Backward Control) of the optimal test. In particular, for the case of two simple hypotheses, dropping backward control in the Bayesian test produces the classical sequential probability ratio test (SPRT). Similarly, dropping backward control in the modified Kiefer-Weiss solutions produces Lorden's 2-SPRTs. In the case of more than two hypotheses, we obtain in this way new classes of sequential multi-hypothesis tests, and investigate their properties. The efficiency of the DBC-tests is evaluated with respect to the optimal Bayesian multi-hypothesis test and with respect to the matrix sequential probability ratio test (MSPRT) of Armitage. In a multi-hypothesis variant of the Kiefer-Weiss problem for binomial proportions, the performance of the DBC-test is numerically compared with that of the exact solution. In a model of normal observations with a linear trend, the performance of the DBC-test is numerically compared with that of the MSPRT. Some other numerical examples are presented. In all the cases the proposed tests exhibit a very high efficiency with respect to the optimal tests (more than 99.3\% when sampling from Bernoulli populations) and/or with respect to the MSPRT (even outperforming the latter in some scenarios).
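
As a concrete point of reference, the SPRT recovered by dropping backward control in the two-hypothesis Bayesian case is easy to state in code. The sketch below assumes Bernoulli observations and uses Wald's approximate thresholds; the parameters p0, p1 and the error levels are illustrative choices, not tied to the paper's examples.

```python
import math
import random

def sprt(observations, p0, p1, alpha=0.05, beta=0.05):
    """Classical SPRT for H0: p = p0 vs H1: p = p1 (Bernoulli data).

    Uses Wald's approximate thresholds A = (1-beta)/alpha and
    B = beta/(1-alpha).  Returns (decision, samples_used); the decision
    is None if the data run out before a boundary is crossed.
    """
    log_A = math.log((1 - beta) / alpha)   # accept H1 at or above this
    log_B = math.log(beta / (1 - alpha))   # accept H0 at or below this
    llr = 0.0
    for n, x in enumerate(observations, start=1):
        # Log-likelihood ratio increment for one Bernoulli observation.
        llr += math.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
        if llr >= log_A:
            return "H1", n
        if llr <= log_B:
            return "H0", n
    return None, len(observations)

random.seed(0)
data = [1 if random.random() < 0.7 else 0 for _ in range(1000)]
decision, n_used = sprt(data, p0=0.3, p1=0.7)
```

With these thresholds the two error probabilities are approximately bounded by alpha and beta, and sampling typically stops after only a handful of observations when the hypotheses are well separated.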

We present a novel and mathematically transparent approach to function approximation and the training of large, high-dimensional neural networks, based on the approximate least-squares solution of associated Fredholm integral equations of the first kind by Ritz-Galerkin discretization, Tikhonov regularization and tensor-train methods. Practical application to supervised learning problems of regression and classification type confirms that the resulting algorithms are competitive with state-of-the-art neural network-based methods.
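
The core linear-algebra step behind this approach can be illustrated in a few lines: discretize a Fredholm integral equation of the first kind by quadrature and solve the resulting ill-conditioned system by Tikhonov-regularized least squares. The Gaussian kernel, grid size and regularization parameter below are illustrative choices, not those of the paper, and no tensor-train compression is used here.

```python
import numpy as np

# Discretize a Fredholm integral equation of the first kind,
#   \int_0^1 K(s, t) f(t) dt = g(s),
# on a uniform grid with the midpoint rule, then solve it by
# Tikhonov-regularized least squares.
n = 200
t = (np.arange(n) + 0.5) / n                        # quadrature nodes
w = 1.0 / n                                         # midpoint weights
K = np.exp(-50.0 * (t[:, None] - t[None, :]) ** 2)  # smooth kernel (illustrative)
A = K * w                                           # quadrature matrix: g = A f
f_true = np.sin(2 * np.pi * t)
g = A @ f_true                                      # noiseless data

lam = 1e-6  # Tikhonov parameter (hand-picked for this toy problem)
# Solve min ||A f - g||^2 + lam ||f||^2 via the normal equations.
f_hat = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ g)

rel_err = np.linalg.norm(f_hat - f_true) / np.linalg.norm(f_true)
```

Because the kernel is smooth, the unregularized system is severely ill-conditioned; the regularization term filters the tiny singular values while leaving the smooth solution essentially untouched.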

We derive a general lower bound for the generalized Hamming weights of nested matrix-product codes, with a particular emphasis on the cases with two and three constituent codes. We also provide an upper bound which is reminiscent of the bounds used for the minimum distance of matrix-product codes. When the constituent codes are two Reed-Solomon codes, we obtain an explicit formula for the generalized Hamming weights of the resulting matrix-product code. We also treat the non-nested case when there are two constituent codes.
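
Since the first generalized Hamming weight of a code is its minimum distance, a small brute-force experiment illustrates the objects involved. The sketch below builds the matrix-product code [C1 C2]·A with A = [[1, 1], [0, 1]] (the classical (u, u+v) construction) from two hypothetical binary constituent codes and checks the familiar bound d = min(2·d(C1), d(C2)).

```python
import itertools
import numpy as np

# Constituent codes over GF(2) (illustrative choices):
# C1 = [4,3,2] even-weight code, C2 = [4,1,4] repetition code.
G1 = np.array([[1, 0, 0, 1],
               [0, 1, 0, 1],
               [0, 0, 1, 1]], dtype=int)
G2 = np.array([[1, 1, 1, 1]], dtype=int)

def codewords(G):
    """All codewords generated by G over GF(2)."""
    k = G.shape[0]
    return [np.mod(np.array(m) @ G, 2)
            for m in itertools.product([0, 1], repeat=k)]

C1, C2 = codewords(G1), codewords(G2)
# Matrix-product codewords of length 8: (c1, c1 + c2) over GF(2),
# i.e. [C1 C2] * A with A = [[1, 1], [0, 1]].
C = [np.concatenate([c1, np.mod(c1 + c2, 2)]) for c1 in C1 for c2 in C2]

# For a linear code, the minimum distance (= first generalized Hamming
# weight) equals the minimum nonzero codeword weight.
d_min = min(int(c.sum()) for c in C if c.any())
# Expected: min(2 * d(C1), d(C2)) = min(2 * 2, 4) = 4.
```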

In this paper, preconditioning the saddle point problem arising from the elliptic boundary optimal control problem with mixed boundary conditions is considered. A block triangular preconditioning method is proposed based on permutations of the saddle point problem and approximations of the corresponding Schur complement. The spectral properties of the preconditioned matrix are analyzed. Numerical experiments are conducted to demonstrate the effectiveness of the proposed preconditioning method.
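
A toy version of this idea, assuming a generic saddle-point system rather than the specific optimal-control discretization: a block triangular preconditioner is built from the (1,1) block and a Schur complement and applied inside GMRES. Here the exact Schur complement is used, so preconditioned GMRES converges in a couple of iterations; in practice one substitutes a cheap approximation, as the abstract describes.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(1)
n, m = 40, 10
# Toy saddle-point system [[A, B^T], [B, 0]] [x; y] = rhs.
A = np.diag(rng.uniform(1.0, 2.0, n))          # SPD (1,1) block
B = rng.standard_normal((m, n))
K = np.block([[A, B.T], [B, np.zeros((m, m))]])
rhs = rng.standard_normal(n + m)

# Block triangular preconditioner P = [[A, B^T], [0, -S]] with the
# exact Schur complement S = B A^{-1} B^T.
S = B @ np.linalg.solve(A, B.T)

def apply_Pinv(r):
    """Apply P^{-1} by back-substitution on the two blocks."""
    r1, r2 = r[:n], r[n:]
    y = np.linalg.solve(-S, r2)
    x = np.linalg.solve(A, r1 - B.T @ y)
    return np.concatenate([x, y])

P_inv = LinearOperator((n + m, n + m), matvec=apply_Pinv)

iters = 0
def count(_):
    global iters
    iters += 1

sol, info = gmres(K, rhs, M=P_inv, callback=count, callback_type="pr_norm")
residual = np.linalg.norm(K @ sol - rhs) / np.linalg.norm(rhs)
```

With the exact Schur complement, the preconditioned matrix has minimal polynomial of degree two, so GMRES terminates after at most two iterations in exact arithmetic; this is the benchmark against which Schur-complement approximations are judged.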

We propose a numerical method to solve parameter-dependent hyperbolic partial differential equations (PDEs) with a moment approach, based on previous work by Marx et al. (2020). This approach relies on a very weak notion of solution of nonlinear equations, namely parametric entropy measure-valued (MV) solutions, satisfying linear equations in the space of Borel measures. The infinite-dimensional linear problem is approximated by a hierarchy of convex, finite-dimensional, semidefinite programming problems, called Lasserre's hierarchy. This gives us a sequence of approximations of the moments of the occupation measure associated with the parametric entropy MV solution, which is proved to converge. Finally, several post-processing steps can be carried out on this approximate moment sequence. In particular, the graph of the solution can be reconstructed from an optimization of the Christoffel-Darboux kernel associated with the approximate measure, which is a powerful approximation tool able to capture a large class of irregular functions. Also, for uncertainty quantification problems, several quantities of interest can be estimated, sometimes directly, such as the expectation of smooth functionals of the solutions. The performance of our approach is evaluated through numerical experiments on the inviscid Burgers equation with parametrised initial conditions or parametrised flux function.
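
The graph-recovery step via the Christoffel-Darboux kernel can be sketched for a measure whose moments are known in closed form. Below, the occupation measure is the pushforward of Lebesgue measure on [0,1] under x -> (x, x^2), standing in for the approximate moments that the Lasserre hierarchy would deliver; the degree and the regularization of the moment matrix are illustrative choices.

```python
import numpy as np

# Monomials x^a y^b of total degree <= 4.
deg = 4
monos = [(a, b) for a in range(deg + 1) for b in range(deg + 1 - a)]

# Moment matrix of the measure supported on the graph {(x, x^2)}:
# M[(a1,b1),(a2,b2)] = \int_0^1 x^(a1+a2) (x^2)^(b1+b2) dx
#                    = 1 / (a1 + a2 + 2*(b1 + b2) + 1).
M = np.array([[1.0 / (a1 + a2 + 2 * (b1 + b2) + 1)
               for (a2, b2) in monos] for (a1, b1) in monos])
# M is singular (polynomials vanishing on the graph are in its kernel),
# so invert a slightly regularized version.
Minv = np.linalg.inv(M + 1e-8 * np.eye(len(monos)))

def recover_y(x, grid):
    """Pick the y minimizing v(x,y)^T M^{-1} v(x,y), i.e. maximizing
    the Christoffel function along the vertical line at x."""
    vals = []
    for y in grid:
        v = np.array([x ** a * y ** b for (a, b) in monos])
        vals.append(v @ Minv @ v)
    return grid[int(np.argmin(vals))]

ygrid = np.linspace(0.0, 1.0, 201)
xs = np.linspace(0.1, 0.9, 9)
max_err = max(abs(recover_y(x, ygrid) - x ** 2) for x in xs)
```

Points off the graph pick up a huge penalty from the (regularized) kernel directions of the moment matrix, so the minimizer along each vertical line snaps onto the graph of the solution.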

This study proposes a unified theory and statistical learning approach for traffic conflict detection, addressing the long-standing call for a consistent and comprehensive methodology to evaluate the collision risk that emerges in road user interactions. The proposed theory assumes a context-dependent probabilistic collision risk and frames conflict detection as estimating the risk by statistical learning from observed proximities and contextual variables. Three primary tasks are integrated: representing interaction context from selected observables, inferring proximity distributions in different contexts, and applying extreme value theory to relate conflict intensity with conflict probability. As a result, this methodology is adaptable to various road users and interaction scenarios, enhancing its applicability without the need for pre-labelled conflict data. Demonstration experiments are executed using real-world trajectory data, with the unified metric trained on lane-changing interactions on German highways and applied to near-crash events from the 100-Car Naturalistic Driving Study in the U.S. The experiments demonstrate the methodology's ability to provide effective collision warnings, generalise across different datasets and traffic environments, cover a broad range of conflicts, and deliver a long-tailed distribution of conflict intensity. This study contributes to traffic safety by offering a consistent and explainable methodology for conflict detection applicable across various scenarios. Its societal implications include enhanced safety evaluations of traffic infrastructures, more effective collision warning systems for autonomous and driving assistance systems, and a deeper understanding of road user behaviour in different traffic conditions, contributing to a potential reduction in accident rates and improving overall traffic safety.
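
The extreme value theory step, relating observed proximities to conflict probability, can be sketched with a standard peaks-over-threshold fit. The data below are synthetic time-to-collision values and all thresholds are illustrative; scipy's generalized Pareto fit stands in for the inference machinery of the paper.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(7)
# Hypothetical proximity observations, e.g. time-to-collision (seconds):
# small values indicate danger, so we study lower-tail exceedances by
# negating the observations (the usual peaks-over-threshold trick).
ttc = rng.lognormal(mean=1.2, sigma=0.5, size=5000)

neg = -ttc
u = np.quantile(neg, 0.95)            # threshold: the 5% most dangerous
exceed = neg[neg > u] - u             # exceedances over the threshold

# Fit a generalized Pareto distribution to the exceedances (loc fixed at 0).
shape, loc, scale = genpareto.fit(exceed, floc=0)

# Estimated probability that proximity drops below a critical level,
# e.g. TTC < 1 s, combining the empirical threshold exceedance rate
# with the fitted GPD tail.
crit = -1.0
p_exceed_u = np.mean(neg > u)
p_conflict = p_exceed_u * genpareto.sf(crit - u, shape, loc=0, scale=scale)
```

In the paper's framework the proximity distribution, and hence this tail fit, is context-dependent; here a single unconditional fit is shown for clarity.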

We develop a nonparametric test for deciding whether volatility of an asset follows a standard semimartingale process, with paths of finite quadratic variation, or a rough process with paths of infinite quadratic variation. The test utilizes the fact that volatility is rough if and only if volatility increments are negatively autocorrelated at high frequencies. It is based on the sample autocovariance of increments of spot volatility estimates computed from high-frequency asset return data. By showing a feasible CLT for this statistic under the null hypothesis of semimartingale volatility paths, we construct a test with fixed asymptotic size and an asymptotic power equal to one. The test is derived under very general conditions for the data-generating process. In particular, it is robust to jumps with arbitrary activity and to the presence of market microstructure noise. In an application of the test to SPY high-frequency data, we find evidence for rough volatility.
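
The building block of the test, the sample autocovariance of volatility increments, is simple to compute. The sketch below contrasts a Brownian (semimartingale-style) path, whose increment autocovariance is near zero, with a crude antipersistent MA(1)-based path mimicking the negative high-frequency autocorrelation of rough volatility; both paths are synthetic stand-ins for spot volatility estimates.

```python
import numpy as np

def increment_autocov(v, lag=1):
    """Sample autocovariance of the increments of a (spot volatility)
    series v: the building block of the test statistic."""
    dv = np.diff(v)
    dv = dv - dv.mean()
    return np.mean(dv[:-lag] * dv[lag:])

rng = np.random.default_rng(3)
n = 100_000
# Semimartingale-style path: Brownian increments are independent,
# so the increment autocovariance should be near zero.
bm = np.cumsum(rng.standard_normal(n)) / np.sqrt(n)

# A crude antipersistent path: increments follow an MA(1) with negative
# serial correlation, mimicking rough (H < 1/2) behaviour.
z = rng.standard_normal(n + 1)
rough = np.cumsum(z[1:] - 0.6 * z[:-1]) / np.sqrt(n)

ac_bm = increment_autocov(bm)
ac_rough = increment_autocov(rough)
```

The actual test replaces these synthetic paths with spot volatility estimates from high-frequency returns and studentizes the statistic via the CLT established in the paper.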

This paper introduces a comprehensive framework to adjust a discrete test statistic for improving its hypothesis testing procedure. The adjustment minimizes the Wasserstein distance to a null-approximating continuous distribution, tackling some fundamental challenges inherent in combining statistical significances derived from discrete distributions. The related theory justifies Lancaster's mid-p and mean-value chi-squared statistics for Fisher's combination as special cases. However, in order to counter the conservative nature of Lancaster's testing procedures, we propose an updated null-approximating distribution. It is achieved by further minimizing the Wasserstein distance to the adjusted statistics within a proper distribution family. Specifically, in the context of Fisher's combination, we propose an optimal gamma distribution as a substitute for the traditionally used chi-squared distribution. This new approach yields an asymptotically consistent test that significantly improves type I error control and enhances statistical power.
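
Lancaster's mid-p adjustment mentioned above has a simple closed form: replace P(X >= x) by P(X > x) + 0.5*P(X = x). The sketch below verifies the property that motivates it, namely that mid-p values have null mean exactly 1/2 while ordinary discrete p-values have mean above 1/2, and combines p-values Fisher-style; the binomial null is an illustrative choice.

```python
import math
from scipy.stats import binom, chi2

def midp_binom_upper(x, n, p):
    """Lancaster's mid-p for an upper-tailed Binomial(n, p) test."""
    return binom.sf(x, n, p) + 0.5 * binom.pmf(x, n, p)

def fisher_combine(pvals):
    """Fisher's combination: T = -2 sum log p_i, referred to chi2(2k)."""
    T = -2.0 * sum(math.log(p) for p in pvals)
    return chi2.sf(T, 2 * len(pvals))

# Under the null, mid-p values have mean exactly 1/2; ordinary discrete
# p-values P(X >= x) have mean strictly above 1/2, which is what makes
# naive combination conservative.  Check both for Binomial(10, 0.3):
n, p = 10, 0.3
mean_midp = sum(binom.pmf(x, n, p) * midp_binom_upper(x, n, p)
                for x in range(n + 1))
mean_ordinary = sum(binom.pmf(x, n, p)
                    * (binom.sf(x, n, p) + binom.pmf(x, n, p))
                    for x in range(n + 1))

combined = fisher_combine([0.2, 0.8, 0.5])
```

The paper goes one step further: instead of referring the combination of adjusted statistics to the chi-squared distribution, it calibrates against a Wasserstein-optimal gamma distribution, which restores type I error control lost to discreteness.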

Despite the wide usage of parametric point processes in theory and applications, a sound goodness-of-fit procedure to test whether a given parametric model is appropriate for data coming from a self-exciting point process has been missing in the literature. In this work, we establish a bootstrap-based goodness-of-fit test which empirically works for all kinds of self-exciting point processes (and even beyond). In an infill-asymptotic setting we also prove its asymptotic consistency, albeit only in the particular case that the underlying point process is inhomogeneous Poisson.
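
A stripped-down version of the bootstrap scheme, with a homogeneous Poisson null in place of a fitted self-exciting model: given the event count, a homogeneous Poisson process has i.i.d. uniform event times, so a Kolmogorov-Smirnov statistic against uniformity with a parametric-bootstrap null distribution gives a simple goodness-of-fit test. All models and parameters here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

def ks_uniform(times, T):
    """KS distance between event times and the uniform law on [0, T]."""
    t = np.sort(times) / T
    n = len(t)
    ecdf_hi = np.arange(1, n + 1) / n
    ecdf_lo = np.arange(0, n) / n
    return max(np.max(ecdf_hi - t), np.max(t - ecdf_lo))

def bootstrap_pvalue(times, T, n_boot=500):
    """Parametric bootstrap: refit the null model (here a homogeneous
    Poisson with rate N/T), simulate, recompute the statistic."""
    stat = ks_uniform(times, T)
    lam_hat = len(times) / T
    boot = []
    for _ in range(n_boot):
        n_ev = rng.poisson(lam_hat * T)
        sim = rng.uniform(0, T, size=max(n_ev, 1))
        boot.append(ks_uniform(sim, T))
    return np.mean(np.array(boot) >= stat)

T = 100.0
good = rng.uniform(0, T, size=200)                     # Poisson-like data
clustered = np.repeat(rng.uniform(0, T, size=5), 40) \
            + rng.normal(0, 0.1, size=200)             # strongly clustered
p_good = bootstrap_pvalue(good, T)
p_clustered = bootstrap_pvalue(clustered, T)
```

For a self-exciting null, the fit step would estimate the conditional intensity and the simulation step would generate paths from the fitted model; the bootstrap logic is otherwise unchanged.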

Motivated by the application of saddlepoint approximations to resampling-based statistical tests, we prove that a Lugannani-Rice style approximation for conditional tail probabilities of averages of conditionally independent random variables has vanishing relative error. We also provide a general condition on the existence and uniqueness of the solution to the corresponding saddlepoint equation. The results are valid under a broad class of distributions involving no restrictions on the smoothness of the distribution function. The derived saddlepoint approximation formula can be directly applied to resampling-based hypothesis tests, including bootstrap, sign-flipping and conditional randomization tests. Our results extend and connect several classical saddlepoint approximation results. On the way to proving our main results, we prove a new conditional Berry-Esseen inequality for the sum of conditionally independent random variables, which may be of independent interest.
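
For orientation, the unconditional Lugannani-Rice formula for a binomial sum (an average of i.i.d. Bernoulli variables) can be written down directly. The lattice-adjusted form below, with u = (1 - e^{-t}) sqrt(n K''(t)), is a textbook variant for integer-valued sums, not the conditional version developed in the paper.

```python
import math
from scipy.stats import binom, norm

def lr_tail_binom(k, n, p):
    """Lattice-adjusted Lugannani-Rice approximation to P(X >= k) for
    X ~ Binomial(n, p).  Assumes 0 < k/n < 1 and k/n != p (so the
    saddlepoint is nonzero)."""
    x = k / n
    # Saddlepoint t solves K'(t) = x for K(t) = log(1 - p + p*e^t).
    t = math.log(x * (1 - p) / (p * (1 - x)))
    K = math.log(1 - p + p * math.exp(t))
    K2 = x * (1 - x)                        # K''(t) at the saddlepoint
    w = math.copysign(math.sqrt(2 * n * (t * x - K)), t)
    u = (1 - math.exp(-t)) * math.sqrt(n * K2)   # lattice adjustment
    return norm.sf(w) + norm.pdf(w) * (1 / u - 1 / w)

n, p, k = 50, 0.3, 25
approx = lr_tail_binom(k, n, p)
exact = binom.sf(k - 1, n, p)               # exact P(X >= k)
rel_err = abs(approx - exact) / exact
```

Even this bare-bones version tracks a tail probability of order 10^-3 to within a few percent; the paper's contribution is to establish vanishing relative error for the conditional analogue under weak assumptions.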

We introduce the saddlepoint approximation-based conditional randomization test (spaCRT), a novel conditional independence test that effectively balances statistical accuracy and computational efficiency, inspired by applications to single-cell CRISPR screens. Resampling-based methods like the distilled conditional randomization test (dCRT) offer statistical precision but at a high computational cost. The spaCRT leverages a saddlepoint approximation to the resampling distribution of the dCRT test statistic, achieving very similar finite-sample statistical performance with significantly reduced computational demands. We prove that the spaCRT p-value approximates the dCRT p-value with vanishing relative error, and that these two tests are asymptotically equivalent. Through extensive simulations and real data analysis, we demonstrate that the spaCRT controls Type-I error and maintains high power, outperforming other asymptotic and resampling-based tests. Our method is particularly well-suited for large-scale single-cell CRISPR screen analyses, facilitating the efficient and accurate assessment of perturbation-gene associations.
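
The resampling scheme that the spaCRT approximates can be sketched as follows: a minimal dCRT-style test of X independent of Y given Z that distills both variables through regressions on Z and resamples X | Z from a fitted Gaussian model. The linear models, the residual-covariance statistic and all parameters are illustrative simplifications of the actual dCRT.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 500
Z = rng.standard_normal(n)
X = 0.5 * Z + rng.standard_normal(n)              # X depends on Z only
Y = 0.8 * Z + 0.4 * X + rng.standard_normal(n)    # Y depends on Z and X

def dcrt_pvalue(X, Y, Z, n_resample=400):
    """dCRT sketch: regress X and Y on Z ("distillation"), use the mean
    product of residuals as the statistic, and resample X | Z from the
    fitted Gaussian model to build the null distribution."""
    bx = np.polyfit(Z, X, 1)
    by = np.polyfit(Z, Y, 1)
    rx = X - np.polyval(bx, Z)
    ry = Y - np.polyval(by, Z)
    stat = abs(np.mean(rx * ry))
    sigma = rx.std()
    null = []
    for _ in range(n_resample):
        Xs = np.polyval(bx, Z) + sigma * rng.standard_normal(len(Z))
        rxs = Xs - np.polyval(np.polyfit(Z, Xs, 1), Z)
        null.append(abs(np.mean(rxs * ry)))
    return (1 + sum(s >= stat for s in null)) / (1 + n_resample)

p_dep = dcrt_pvalue(X, Y, Z)   # X and Y are dependent given Z here
```

The spaCRT replaces the resampling loop, the expensive part at single-cell CRISPR scale, with a saddlepoint approximation to the null distribution of the statistic, which is what makes it fast.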