
Existing powerful tests of goodness-of-fit are all based on sorted uniforms and, consequently, can suffer from the confounded effect of different locations and various signal frequencies in the deviations of the distributions under the alternative hypothesis from those under the null. This paper proposes circularly symmetric tests that are obtained by circularizing reweighted Anderson-Darling tests, with the focus on the circularized versions of the Anderson-Darling and Zhang test statistics. Two specific types of circularization are considered: one is obtained by taking the average of the corresponding so-called scan test statistics, and the other by using the maximum. To a certain extent, this circularization technique effectively eliminates the location effect and allows the weights to focus on the various signal frequencies. A limited but arguably convincing simulation study of finite-sample performance demonstrates that the circularized Zhang method outperforms the circularized Anderson-Darling and that the circularized tests outperform their parent methods. Large-sample theoretical results are also obtained for the average type of circularization. The results show that the circularized Anderson-Darling and circularized Zhang statistics both have asymptotic distributions given by a weighted sum of infinitely many independent squared standard normal random variables. In addition, the kernel matrices and functions are circulant; as a result, asymptotic approximations can be computed efficiently via the fast Fourier transform.
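
As a concrete illustration of the circularization idea, the sketch below computes a classical Anderson-Darling statistic and combines it over circular shifts of the data by averaging or maximizing, matching the two types of circularization described above. It is a minimal sketch under assumed conventions: the shift grid (`n_shifts` equally spaced offsets) and the unweighted Anderson-Darling statistic stand in for the paper's reweighted scan statistics.

```python
import numpy as np

def anderson_darling(u):
    # Classical A^2 statistic for a sample assumed Uniform(0,1) under H0
    u = np.sort(np.clip(u, 1e-12, 1 - 1e-12))  # guard the logs at 0 and 1
    n = len(u)
    i = np.arange(1, n + 1)
    return -n - np.mean((2 * i - 1) * (np.log(u) + np.log1p(-u[::-1])))

def circularized_ad(u, n_shifts=64, combine="mean"):
    # Scan over circular shifts c -> (u + c) mod 1 and combine the resulting
    # statistics by averaging (average type) or maximizing (maximum type)
    shifts = np.arange(n_shifts) / n_shifts
    stats = np.array([anderson_darling((np.asarray(u) + c) % 1.0) for c in shifts])
    return stats.mean() if combine == "mean" else stats.max()
```

Because shifting the sample modulo one moves any localized deviation around the circle, averaging (or maximizing) over shifts removes the dependence of power on where the deviation sits, which is the location effect the circularization is designed to eliminate.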

Related content

Clinical trials often involve the assessment of multiple endpoints to comprehensively evaluate the efficacy and safety of interventions. In this work, we consider a global nonparametric testing procedure based on multivariate ranks for the analysis of multiple endpoints in clinical trials. Unlike existing approaches that rely on pairwise comparisons for each individual endpoint, the proposed method directly incorporates the multivariate ranks of the observations. By considering the joint ranking of all endpoints, the proposed approach provides robustness against the diverse data distributions and censoring mechanisms commonly encountered in clinical trials. Through extensive simulations, we demonstrate the superior performance of the multivariate rank-based approach in controlling type I error and achieving higher power compared to existing rank-based methods. The simulations illustrate the advantages of leveraging multivariate ranks and highlight the robustness of the approach in various settings. The proposed method offers an effective tool for the analysis of multiple endpoints in clinical trials, enhancing the reliability and efficiency of outcome evaluations.
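
One concrete way to act directly on multivariate ranks, rather than on endpoint-wise comparisons, is to use spatial ranks. The sketch below is an illustration under that assumption only: the paper's exact rank construction and its treatment of censoring are not reproduced here, and the permutation null is a generic stand-in.

```python
import numpy as np

def spatial_ranks(X):
    # Multivariate spatial rank of each row: R_i = (1/n) sum_j (x_i - x_j)/||x_i - x_j||
    diff = X[:, None, :] - X[None, :, :]
    norms = np.linalg.norm(diff, axis=2)
    np.fill_diagonal(norms, 1.0)                 # avoid 0/0 on the diagonal
    U = diff / norms[:, :, None]
    idx = np.arange(len(X))
    U[idx, idx, :] = 0.0                         # convention: S(0) = 0
    return U.mean(axis=1)

def global_stat(X, labels):
    # Squared norm of the between-group difference in mean spatial ranks
    R = spatial_ranks(X)
    d = R[labels == 1].mean(axis=0) - R[labels == 0].mean(axis=0)
    return float(d @ d)

def permutation_pvalue(X, labels, B=2000, seed=0):
    rng = np.random.default_rng(seed)
    obs = global_stat(X, labels)
    hits = sum(global_stat(X, rng.permutation(labels)) >= obs for _ in range(B))
    return (1 + hits) / (B + 1)
```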

While several inferential methods exist for analyzing functional data in factorial designs, there is a lack of statistical tests that (i) are valid in general designs, (ii) hold under non-restrictive assumptions on the data-generating process, and (iii) allow for coherent post-hoc analyses. In particular, most existing methods assume Gaussianity or equal covariance functions across groups (homoscedasticity) and are only applicable to specific study designs that do not allow for the evaluation of interactions. Moreover, all available strategies are designed only for testing global hypotheses and do not directly allow a more in-depth analysis of multiple local hypotheses. To address the first two problems (i)-(ii), we propose flexible integral-type test statistics that are applicable in general factorial designs under minimal assumptions on the data-generating process. In particular, we postulate neither homoscedasticity nor Gaussianity. To approximate the statistics' null distribution, we adopt a resampling approach and validate it methodologically. Finally, we use our flexible testing framework to (iii) infer several local null hypotheses simultaneously. To allow for powerful data analysis, we thereby take the complex dependencies of the different local test statistics into account. In extensive simulations we confirm that the new methods are flexibly applicable. Two illustrative data analyses complete our study. The new testing procedures are implemented in the R package multiFANOVA, which will be available on CRAN soon.
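
For intuition, a minimal two-group special case of an integral-type statistic is sketched below; the general factorial case in the paper works with contrast matrices and treats several local hypotheses jointly, which this sketch does not attempt. The group-wise bootstrap of centered curves emulates the null without assuming Gaussianity or equal covariance functions.

```python
import numpy as np

def integral_stat(Y1, Y2):
    # n * integral over [0,1] of (mean1(t) - mean2(t))^2, approximated on a grid
    diff = Y1.mean(axis=0) - Y2.mean(axis=0)
    return (len(Y1) + len(Y2)) * np.mean(diff ** 2)

def bootstrap_pvalue(Y1, Y2, B=1000, seed=0):
    rng = np.random.default_rng(seed)
    obs = integral_stat(Y1, Y2)
    C1, C2 = Y1 - Y1.mean(axis=0), Y2 - Y2.mean(axis=0)   # centered: emulates H0
    hits = 0
    for _ in range(B):
        b1 = C1[rng.integers(0, len(C1), len(C1))]        # resample within groups
        b2 = C2[rng.integers(0, len(C2), len(C2))]
        hits += integral_stat(b1, b2) >= obs
    return (1 + hits) / (B + 1)
```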

Nonparametric tests for functional data are a challenging class of tests to work with because of the potentially high-dimensional nature of functional data. One of the main challenges in considering rank-based tests, like the Mann-Whitney or Wilcoxon rank sum tests (MWW), is that the unit of observation is a curve. Thus any rank-based test must consider ways of ranking curves. While several procedures, including depth-based methods, have recently been used to create scores for rank-based tests, these scores are not constructed under the null and often introduce additional variability that is not controlled for. We therefore reconsider the problem of rank-based tests for functional data and develop an alternative approach that incorporates the null hypothesis throughout. Our approach first ranks realizations from the curves at each time point, summarizes the ranks for each subject using a sufficient statistic we derive, and finally re-ranks the sufficient statistics in a procedure we refer to as a doubly ranked test. As we demonstrate, doubly ranked tests are more powerful while maintaining ideal type I error in the two-sample, MWW setting. We also extend our framework to more than two samples, developing a Kruskal-Wallis test for functional data that exhibits good test characteristics as well. Finally, we illustrate the use of doubly ranked tests in functional data contexts from material science, climatology, and public health policy.
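
The three-step structure lends itself to a compact sketch. Note one loud assumption: the per-subject summary below is the average rank across time points, used only as a plausible stand-in for the sufficient statistic derived in the paper.

```python
import numpy as np
from scipy.stats import rankdata, mannwhitneyu

def doubly_ranked_mww(Y1, Y2):
    # Y1, Y2: (n_i, T) arrays of curves evaluated on a common time grid
    Y = np.vstack([Y1, Y2])
    ranks = np.apply_along_axis(rankdata, 0, Y)  # step 1: rank subjects at each t
    summary = ranks.mean(axis=1)                 # step 2: per-subject summary
                                                 # (average rank; assumed here, the
                                                 # paper derives a sufficient statistic)
    s1, s2 = summary[:len(Y1)], summary[len(Y1):]
    return mannwhitneyu(s1, s2, alternative="two-sided")  # step 3: re-rank and test
```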

In this article we propose two finite element schemes for the Navier-Stokes equations, based on a reformulation that involves differential operators from the de Rham sequence and an advection operator with explicit skew-symmetry in weak form. Our first scheme is obtained by discretizing this formulation with conforming FEEC (Finite Element Exterior Calculus) spaces: it preserves the pointwise divergence-free constraint on the velocity, its total momentum and its energy, in addition to being pressure robust. Following the broken-FEEC approach, our second scheme uses fully discontinuous spaces and local conforming projections to define the discrete differential operators. It preserves the same invariants, up to a dissipation of energy that stabilizes numerical discontinuities. For both schemes we use a midpoint time discretization that preserves these invariants at the fully discrete level, and we analyse its well-posedness in terms of a CFL condition. Numerical test cases performed with spline finite elements allow us to verify the high-order accuracy of the resulting numerical methods, as well as their ability to handle general boundary conditions.
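
The energy-conservation property of the midpoint rule can be seen already in a linear toy problem: for u' = A u with A skew-symmetric (mimicking the skew-symmetric weak-form advection operator), one implicit midpoint step is a Cayley transform and preserves ||u||^2 exactly. This is only a finite-dimensional analogue, not the paper's FEEC discretization.

```python
import numpy as np

def implicit_midpoint_step(u, dt, A):
    # Solve (I - dt/2 A) u_new = (I + dt/2 A) u: the update matrix is a Cayley
    # transform, orthogonal whenever A is skew-symmetric
    I = np.eye(len(u))
    return np.linalg.solve(I - 0.5 * dt * A, (I + 0.5 * dt * A) @ u)

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))
A = M - M.T                          # skew-symmetric "advection" operator
u = rng.standard_normal(4)
e0 = u @ u                           # discrete energy
for _ in range(1000):
    u = implicit_midpoint_step(u, 0.01, A)
print(abs(u @ u - e0))               # conserved to machine precision
```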

Some aspects of the ELectrical EXplicit (ELEX) scheme for using explicit integration schemes in circuit simulation are discussed. It is pointed out that the parallel resistor approach, presented earlier to address singular matrix issues arising in the ELEX scheme, is not adequately robust for incorporation in a general-purpose simulator for power electronic circuits. New topology-aware approaches, which are more robust and efficient compared to the parallel resistor approach, are presented. Several circuit examples are considered to illustrate the new approaches.

We propose a model to flexibly estimate joint tail properties by exploiting the convergence of an appropriately scaled point cloud onto a compact limit set. Characteristics of the shape of the limit set correspond to key tail dependence properties. We directly model the shape of the limit set using B\'ezier splines, which allow flexible and parsimonious specification of shapes in two dimensions. We then fit the B\'ezier splines to data in pseudo-polar coordinates using Markov chain Monte Carlo, utilizing a limiting approximation to the conditional likelihood of the radii given angles. By imposing appropriate constraints on the parameters of the B\'ezier splines, we guarantee that each posterior sample is a valid limit set boundary, allowing direct posterior analysis of any quantity derived from the shape of the curve. Furthermore, we obtain interpretable inference on the asymptotic dependence class by using mixture priors with point masses on the corner of the unit box. Finally, we apply our model to bivariate datasets of extremes of variables related to fire risk and air pollution.
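
To make the geometry concrete, the sketch below evaluates a single Bézier segment by de Casteljau's algorithm; the control points are hypothetical and chosen only so the curve connects the two axes inside the unit box, as a candidate limit-set boundary. The paper's construction joins several such segments under validity constraints, which this sketch does not reproduce.

```python
import numpy as np

def bezier_point(control_pts, t):
    # de Casteljau's algorithm: repeated linear interpolation of control points
    pts = np.array(control_pts, dtype=float)
    while len(pts) > 1:
        pts = (1 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

# Hypothetical control points: endpoints pinned to the axes of the unit box,
# interior point pulled toward (1, 1) for stronger tail dependence
ctrl = [(1.0, 0.0), (0.8, 0.8), (0.0, 1.0)]
boundary = np.array([bezier_point(ctrl, t) for t in np.linspace(0.0, 1.0, 50)])
```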

The equilibrium configuration of a plasma in an axially symmetric reactor is described mathematically by a free boundary problem associated with the celebrated Grad--Shafranov equation. The presence of uncertainty in the model parameters introduces the need to quantify the variability in the predictions. This is often done by computing a large number of model solutions on a computational grid for an ensemble of parameter values and then obtaining estimates for the statistical properties of solutions. In this study, we explore the savings that can be obtained using multilevel Monte Carlo methods, which reduce costs by performing the bulk of the computations on a sequence of spatial grids that are coarser than the one that would typically be used for a simple Monte Carlo simulation. We examine this approach using both a set of uniformly refined grids and a set of adaptively refined grids guided by a discrete error estimator. Numerical experiments show that multilevel methods dramatically reduce the cost of simulation, with cost reductions typically by a factor of 60 or more and possibly as large as 200. Adaptive gridding results in more accurate computation of geometric quantities such as x-points associated with the model.
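
The telescoping structure of the multilevel estimator can be sketched generically: the expectation on the finest level is written as the coarsest-level expectation plus a sum of level corrections, each estimated with its own (typically decreasing) sample size. The level sampler below is a toy 1D quadrature problem, not the Grad-Shafranov solver; the key point it illustrates is that fine and coarse evaluations at a level must share the same random input.

```python
import numpy as np

def mlmc_estimate(sample_level, n_samples):
    # E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}], each term with its own sample size
    return sum(sample_level(l, n).mean() for l, n in enumerate(n_samples))

def sample_level(l, n, seed=0):
    # Toy quantity of interest: a 1D integral via midpoint quadrature on 2^(l+1)
    # cells, with a random parameter theta shared by the fine and coarse grids
    rng = np.random.default_rng(seed + l)
    theta = rng.normal(0.0, 0.1, (n, 1))
    def quad(m):
        x = (np.arange(m) + 0.5) / m
        return np.mean(np.exp(-(x[None, :] - theta) ** 2), axis=1)
    return quad(2 ** (l + 1)) if l == 0 else quad(2 ** (l + 1)) - quad(2 ** l)

print(mlmc_estimate(sample_level, [400, 100, 25]))   # most samples on coarse grids
```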

We use the lens of weak signal asymptotics to study a class of sequentially randomized experiments, including those that arise in solving multi-armed bandit problems. In an experiment with $n$ time steps, we let the mean reward gaps between actions scale to the order $1/\sqrt{n}$ so as to preserve the difficulty of the learning task as $n$ grows. In this regime, we show that the sample paths of a class of sequentially randomized experiments -- adapted to this scaling regime and with arm selection probabilities that vary continuously with state -- converge weakly to a diffusion limit, given as the solution to a stochastic differential equation. The diffusion limit enables us to derive refined, instance-specific characterizations of the stochastic dynamics, and to obtain several insights into the regret and belief evolution of a number of sequential experiments, including Thompson sampling (but not UCB, which does not satisfy our continuity assumption). We show that all sequential experiments whose randomization probabilities have a Lipschitz-continuous dependence on the observed data suffer from sub-optimal regret performance when the reward gaps are relatively large. Conversely, we find that a version of Thompson sampling with an asymptotically uninformative prior variance achieves near-optimal instance-specific regret scaling, including with large reward gaps, but these good regret properties come at the cost of highly unstable posterior beliefs.
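
A minimal simulation of the weak-signal regime is easy to set up: hold the learning problem's difficulty fixed by letting the gap shrink as $c/\sqrt{n}$ while the horizon grows. The sketch below runs two-armed Gaussian Thompson sampling under that scaling; the prior and reward variances are illustrative choices, not the paper's exact specification.

```python
import numpy as np

def thompson_two_arm(n, gap_const=1.0, prior_var=1.0, seed=0):
    # Mean reward gap scales as c / sqrt(n): the weak-signal regime in which
    # the sample paths admit a diffusion limit
    rng = np.random.default_rng(seed)
    means = np.array([0.0, gap_const / np.sqrt(n)])
    post_mean, post_prec = np.zeros(2), np.ones(2) / prior_var
    regret = 0.0
    for _ in range(n):
        draw = rng.normal(post_mean, 1.0 / np.sqrt(post_prec))
        a = int(np.argmax(draw))                # posterior sampling of the arm
        r = rng.normal(means[a], 1.0)           # unit-variance reward
        post_mean[a] = (post_prec[a] * post_mean[a] + r) / (post_prec[a] + 1.0)
        post_prec[a] += 1.0                     # conjugate Gaussian update
        regret += means.max() - means[a]
    return regret
```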

Within the next decade the Laser Interferometer Space Antenna (LISA) is due to be launched, providing the opportunity to extract physics from stellar objects and systems, such as \textit{Extreme Mass Ratio Inspirals} (EMRIs), otherwise undetectable to ground-based interferometers and Pulsar Timing Arrays (PTAs). Unlike previous sources detected by the currently available observational methods, these sources can \textit{only} be simulated using an accurate computation of the gravitational self-force. Whereas the field has seen outstanding progress in the frequency domain, metric reconstruction and self-force calculations are still an open challenge in the time domain. Such computations would not only further corroborate frequency-domain calculations and models, but also allow for full self-consistent evolution of the orbit under the effect of the self-force. Given that we have \textit{a priori} information about the local structure of the discontinuity at the particle, we show how to construct discontinuous spatial and temporal discretisations by operating on discontinuous Lagrange and Hermite interpolation formulae, and hence recover higher-order accuracy. In this work we demonstrate how this technique, in conjunction with a well-suited gauge choice (hyperboloidal slicing) and numerical methods (discontinuous collocation with time-symmetric integration), provides a relatively simple method-of-lines numerical algorithm for the problem. This is the first of a series of papers studying the behaviour of a point particle prescribed to move on a circular geodesic in Schwarzschild spacetime in the \textit{time domain}. Here we describe the numerical machinery necessary for these computations and show that our approach is not only capable of highly accurate radiation flux measurements but is also suitable for evaluating the necessary field and its derivatives at the particle limit.
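
The core interpolation trick admits a compact 1D illustration. Given the jumps $[\partial_x^k u]$ at the particle location $x_p$, nodal values taken from the opposite side of $x_p$ are corrected by the Taylor expansion of the jump before a standard Lagrange interpolant is formed, restoring polynomial accuracy across the discontinuity. This is a minimal sketch of the discontinuous-collocation idea only, without the hyperboloidal slicing or time-symmetric integration of the full scheme.

```python
import math
import numpy as np

def discontinuous_lagrange_eval(x_nodes, u_nodes, x_eval, x_p, jumps):
    # jumps[k] = u^{(k)}(x_p+) - u^{(k)}(x_p-); interpolate the branch of u
    # containing x_eval by correcting nodes that lie across the particle
    u = np.array(u_nodes, dtype=float)
    side = -1.0 if x_eval < x_p else 1.0
    for j, xj in enumerate(x_nodes):
        if (xj - x_p) * (x_eval - x_p) < 0:      # node on the opposite side of x_p
            g = sum(J * (xj - x_p) ** k / math.factorial(k)
                    for k, J in enumerate(jumps))
            u[j] += side * g                     # shift node onto x_eval's branch
    # standard Lagrange evaluation on the corrected nodal values
    val = 0.0
    for j, xj in enumerate(x_nodes):
        L = 1.0
        for m, xm in enumerate(x_nodes):
            if m != j:
                L *= (x_eval - xm) / (xj - xm)
        val += u[j] * L
    return val
```

For a unit step at x_p = 0 (jumps = [1.0]) sampled on nodes straddling the particle, the corrected interpolant reproduces 0 to the left and 1 to the right of x_p to machine precision, whereas naive Lagrange interpolation oscillates.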

Substantial progress has been made recently on developing provably accurate and efficient algorithms for low-rank matrix factorization via nonconvex optimization. While conventional wisdom often takes a dim view of nonconvex optimization algorithms due to their susceptibility to spurious local minima, simple iterative methods such as gradient descent have been remarkably successful in practice. The theoretical footings, however, had been largely lacking until recently. In this tutorial-style overview, we highlight the important role of statistical models in enabling efficient nonconvex optimization with performance guarantees. We review two contrasting approaches: (1) two-stage algorithms, which consist of a tailored initialization step followed by successive refinement; and (2) global landscape analysis and initialization-free algorithms. Several canonical matrix factorization problems are discussed, including but not limited to matrix sensing, phase retrieval, matrix completion, blind deconvolution, robust principal component analysis, phase synchronization, and joint alignment. Special care is taken to illustrate the key technical insights underlying their analyses. This article serves as a testament that the integrated consideration of optimization and statistics leads to fruitful research findings.
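
As one canonical instance of the two-stage recipe, the sketch below recovers a rank-one matrix from Gaussian linear measurements: a spectral initialization from the measurement-weighted average of the sensing matrices, followed by plain gradient descent on the nonconvex least-squares loss. Problem sizes and the step size are illustrative choices, not prescriptions from the article.

```python
import numpy as np

def two_stage_rank1_sensing(A, y, steps=500, eta=0.05):
    # Recover x (up to sign) from y_i = <A_i, x x^T>, with A of shape (m, n, n)
    m = len(y)
    Y = np.tensordot(y, A, axes=1) / m            # (1/m) sum_i y_i A_i
    w, V = np.linalg.eigh((Y + Y.T) / 2)
    x = np.sqrt(abs(w[-1])) * V[:, -1]            # stage 1: spectral initialization
    for _ in range(steps):
        r = np.einsum('mij,i,j->m', A, x, x) - y  # residuals <A_i, x x^T> - y_i
        G = np.tensordot(r, A, axes=1) / m
        x = x - eta * (G + G.T) @ x               # stage 2: gradient descent step
    return x

rng = np.random.default_rng(0)
n, m = 20, 400
x_star = rng.standard_normal(n)
x_star /= np.linalg.norm(x_star)
A = rng.standard_normal((m, n, n))
y = np.einsum('mij,i,j->m', A, x_star, x_star)
x_hat = two_stage_rank1_sensing(A, y)
print(min(np.linalg.norm(x_hat - x_star), np.linalg.norm(x_hat + x_star)))
```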
