
Testing the equality of mean vectors across $g$ different groups plays an important role in many scientific fields. In regular frameworks, likelihood-based statistics under the normality assumption offer a general solution to this task. However, the accuracy of standard asymptotic results is not reliable when the dimension $p$ of the data is large relative to the sample size $n_i$ of each group. We propose here an exact directional test for the equality of $g$ normal mean vectors with identical unknown covariance matrix, provided that $\sum_{i=1}^g n_i \ge p+g+1$. In the case of two groups ($g=2$), the directional test is equivalent to Hotelling's $T^2$ test. In the more general situation where the $g$ independent groups may have different unknown covariance matrices, exactness no longer holds, but simulation studies show that the directional test is more accurate than the most commonly used likelihood-based solutions. The robustness of the directional approach and its competitors under deviations from multivariate normality is also investigated numerically.
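
Since the directional test coincides with Hotelling's $T^2$ when $g=2$, the following minimal Python sketch of the two-sample $T^2$ statistic, with its exact $F$-transformation under normality and a common covariance, may help fix ideas; the function name and the test data are illustrative only.

```python
import numpy as np
from scipy import stats

def hotelling_t2(X1, X2):
    """Two-sample Hotelling's T^2 test with pooled covariance.

    X1: (n1, p) array, X2: (n2, p) array; assumes a common covariance.
    Returns the T^2 statistic and the exact p-value via the F distribution.
    """
    n1, p = X1.shape
    n2 = X2.shape[0]
    d = X1.mean(axis=0) - X2.mean(axis=0)
    # Pooled sample covariance (needs n1 + n2 - 2 >= p for invertibility).
    S = ((n1 - 1) * np.cov(X1, rowvar=False)
         + (n2 - 1) * np.cov(X2, rowvar=False)) / (n1 + n2 - 2)
    t2 = (n1 * n2) / (n1 + n2) * d @ np.linalg.solve(S, d)
    # Exact null distribution: scaled F(p, n1 + n2 - p - 1).
    f = (n1 + n2 - p - 1) / (p * (n1 + n2 - 2)) * t2
    pval = stats.f.sf(f, p, n1 + n2 - p - 1)
    return t2, pval

rng = np.random.default_rng(0)
X1 = rng.standard_normal((30, 5))
X2 = rng.standard_normal((40, 5)) + 0.3  # shifted mean, same covariance
print(hotelling_t2(X1, X2))
```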

Related content

In contemporary problems involving genetic or neuroimaging data, thousands of hypotheses need to be tested. Due to their high power and finite-sample guarantees on type I error under weak assumptions, Monte Carlo permutation tests are often considered the gold standard in these settings. However, the enormous computational effort required for (thousands of) permutation tests is a major burden. Recently, Fischer and Ramdas (2024) constructed a permutation test for a single hypothesis in which the permutations are drawn sequentially, one by one, and the testing process can be stopped at any point without inflating the type I error. They showed that the number of permutations can be reduced substantially (under both the null and the alternative) while the power remains similar. We show how their approach can be modified to make it suitable for a broad class of multiple testing procedures. In particular, we discuss its use with the Benjamini-Hochberg procedure and illustrate the application on a large dataset.
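
For context, here is a minimal Python sketch of the standard fixed-budget Monte Carlo permutation p-value combined with the Benjamini-Hochberg step-up procedure; it does not implement the sequential stopping rule of Fischer and Ramdas (2024), only the baseline pipeline that their approach accelerates. All names and parameters are illustrative.

```python
import numpy as np

def perm_pvalue(x, y, n_perm=999, rng=None):
    """Monte Carlo permutation p-value for a two-sample mean difference.

    Uses the standard (B + 1)-denominator estimate, which is valid in
    finite samples; this fixed-B version is the baseline that sequential
    permutation testing seeks to speed up.
    """
    rng = rng or np.random.default_rng()
    pooled = np.concatenate([x, y])
    obs = abs(x.mean() - y.mean())
    exceed = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        exceed += abs(perm[:len(x)].mean() - perm[len(x):].mean()) >= obs
    return (exceed + 1) / (n_perm + 1)

def benjamini_hochberg(pvals, alpha=0.1):
    """Return the indices of hypotheses rejected by the BH step-up rule."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    below = p[order] <= alpha * np.arange(1, m + 1) / m
    if not below.any():
        return np.array([], dtype=int)
    k = np.max(np.nonzero(below)[0])  # largest i with p_(i) <= alpha*i/m
    return order[:k + 1]
```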

We investigate parameter estimation and prediction for two forms of the stochastic SIR model driven by small L\'{e}vy noise with time-dependent periodic transmission. We present consistency and rate-of-convergence results for the least-squares estimators, and include simulation studies based on projected gradient descent.
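
A hedged sketch of the estimation idea, using a deterministic Euler-discretized SIR skeleton with a periodic transmission rate and projected gradient descent on a least-squares loss; the parameterization $\beta(t)=b_0(1+b_1\cos(2\pi t))$, the step sizes, and the box constraints are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def sir_path(theta, S0, I0, T, dt):
    """Euler-discretized SIR skeleton with periodic transmission
    beta(t) = b0 * (1 + b1 * cos(2*pi*t)); theta = (b0, b1, gamma)."""
    b0, b1, gamma = theta
    n = int(T / dt)
    S, I = np.empty(n + 1), np.empty(n + 1)
    S[0], I[0] = S0, I0
    for k in range(n):
        beta = b0 * (1.0 + b1 * np.cos(2 * np.pi * k * dt))
        S[k + 1] = S[k] - dt * beta * S[k] * I[k]
        I[k + 1] = I[k] + dt * (beta * S[k] * I[k] - gamma * I[k])
    return S, I

def loss(theta, obs_I, S0, I0, T, dt):
    """Least-squares discrepancy between model path and observations."""
    return np.sum((sir_path(theta, S0, I0, T, dt)[1] - obs_I) ** 2)

def projected_gd(theta0, bounds, step, n_iter, *args):
    """Projected gradient descent with forward-difference gradients;
    the projection clips each coordinate to its feasible interval."""
    theta = np.array(theta0, dtype=float)
    lo, hi = np.array(bounds).T
    for _ in range(n_iter):
        f0, grad = loss(theta, *args), np.zeros_like(theta)
        for j in range(len(theta)):          # crude numerical gradient
            e = np.zeros_like(theta)
            e[j] = 1e-6
            grad[j] = (loss(theta + e, *args) - f0) / 1e-6
        theta = np.clip(theta - step * grad, lo, hi)  # step, then project
    return theta

# Illustrative usage: recover parameters from a lightly perturbed path.
rng = np.random.default_rng(0)
obs = sir_path((0.5, 0.3, 0.2), 0.99, 0.01, 10.0, 0.01)[1]
obs = obs + 1e-3 * rng.standard_normal(obs.shape)
print(projected_gd((0.3, 0.1, 0.1), [(0.01, 2.0)] * 3, 1e-4, 500,
                   obs, 0.99, 0.01, 10.0, 0.01))
```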

Hoeffding's inequality is a fundamental tool widely applied in probability theory, statistics, and machine learning. In this paper, we establish Hoeffding's inequalities specifically tailored to an irreducible and positive recurrent continuous-time Markov chain (CTMC) on a countable state space with invariant probability distribution ${\pi}$ and an $\mathcal{L}^{2}(\pi)$-spectral gap ${\lambda}(Q)$. More precisely, for a function $g:E\to [a,b]$ with mean $\pi(g)$, and given $t,\varepsilon>0$, we derive the inequality \[ \mathbb{P}_{\pi}\left(\frac{1}{t} \int_{0}^{t} g\left(X_{s}\right)\mathrm{d}s-\pi (g) \geq \varepsilon \right) \leq \exp\left\{-\frac{{\lambda}(Q)t\varepsilon^2}{(b-a)^2} \right\}, \] which can be viewed as a generalization of Hoeffding's inequality for discrete-time Markov chains (DTMCs) presented in [J. Fan et al., J. Mach. Learn. Res., 22 (2022), pp. 6185-6219] to the realm of CTMCs. The key to obtaining this inequality is the use of skeleton-chain techniques and augmented truncation approximations. Furthermore, we also discuss Hoeffding's inequality for a jump process on a general state space.
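
The bound can be checked empirically on a small example. The sketch below simulates a reversible two-state CTMC with generator $Q=\begin{pmatrix}-q_{01} & q_{01}\\ q_{10} & -q_{10}\end{pmatrix}$, whose invariant distribution is $\pi=(q_{10},q_{01})/(q_{01}+q_{10})$ and whose $\mathcal{L}^2(\pi)$-spectral gap is $\lambda(Q)=q_{01}+q_{10}$, and compares the empirical tail probability of the time average with the stated exponential bound; all numerical settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
q01, q10 = 1.0, 2.0                      # jump rates 0 -> 1 and 1 -> 0
pi = np.array([q10, q01]) / (q01 + q10)  # invariant distribution
gap = q01 + q10                          # L^2(pi)-spectral gap of this generator
g = np.array([0.0, 1.0])                 # g takes values in [a, b] = [0, 1]
t_end, eps, n_rep = 20.0, 0.1, 2000

def time_average(rng):
    """Simulate the chain started from pi up to t_end and return
    (1/t) * integral_0^t g(X_s) ds."""
    state = rng.choice(2, p=pi)
    t, integral = 0.0, 0.0
    while t < t_end:
        rate = q01 if state == 0 else q10
        hold = min(rng.exponential(1.0 / rate), t_end - t)
        integral += g[state] * hold
        t += hold
        state = 1 - state
    return integral / t_end

avgs = np.array([time_average(rng) for _ in range(n_rep)])
emp_tail = np.mean(avgs - pi @ g >= eps)
bound = np.exp(-gap * t_end * eps**2 / (g.max() - g.min()) ** 2)
print(f"empirical tail {emp_tail:.4f} <= bound {bound:.4f}")
```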

This work proposes a Bayesian rule based on a mixture of a point mass at zero and the logistic distribution to perform wavelet shrinkage in nonparametric regression models with stationary errors (with short- or long-memory behavior). The proposal is assessed through Monte Carlo experiments and illustrated with real data. Simulation studies indicate that the precision of the estimates decreases as the amount of correlation increases. However, for a given sample size and correlated error structure, the rule's performance as the signal-to-noise ratio decreases is almost the same as its performance under independent and identically distributed errors. Further, we find that the proposal outperforms the standard soft thresholding rule with the universal policy in most of the underlying functions, sample sizes, and signal-to-noise ratio scenarios considered.
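
For reference, the benchmark mentioned above (soft thresholding with the universal threshold of Donoho and Johnstone) can be sketched in a few lines with PyWavelets; the wavelet choice, the MAD noise estimate, and the test signal are illustrative, and the Bayesian logistic-mixture rule itself is not reproduced here.

```python
import numpy as np
import pywt

def universal_soft_threshold(y, wavelet="db4", level=None):
    """Classical wavelet denoising baseline: soft thresholding with the
    universal threshold sigma * sqrt(2 * log n), with sigma estimated
    from the finest-scale coefficients via the median absolute deviation."""
    n = len(y)
    coeffs = pywt.wavedec(y, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745  # robust noise estimate
    thresh = sigma * np.sqrt(2.0 * np.log(n))
    denoised = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft")
                              for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[:n]

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 512)
signal = np.sin(4 * np.pi * x) * (x > 0.3)   # a bumpy test function
y = signal + 0.2 * rng.standard_normal(512)  # i.i.d. noise for simplicity
estimate = universal_soft_threshold(y)
```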

The characterization of the solution set for a class of algebraic Riccati inequalities is studied. This class arises in the passivity analysis of linear time-invariant control systems. Eigenvalue perturbation theory for the Hamiltonian matrix associated with the Riccati inequality is used to analyze the extremal points of the solution set.
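
As a small illustration of the object under study, the sketch below assembles the Hamiltonian matrix commonly associated with an algebraic Riccati (in)equality, $H=\begin{pmatrix}A & -BR^{-1}B^{\top}\\ -Q & -A^{\top}\end{pmatrix}$, whose spectrum is symmetric about the imaginary axis; the matrices here are random placeholders, not the structured ones of the passivity setting.

```python
import numpy as np

# Hamiltonian matrix for A^T X + X A + Q - X B R^{-1} B^T X <= 0.
# Its eigenvalues come in pairs (lambda, -conj(lambda)), the structure
# exploited by Hamiltonian eigenvalue perturbation theory.
rng = np.random.default_rng(0)
n, m = 4, 2
A = rng.standard_normal((n, n)) - 2 * np.eye(n)  # a stable-ish A
B = rng.standard_normal((n, m))
Q = np.eye(n)                                    # Q = Q^T >= 0
R = np.eye(m)                                    # R = R^T > 0

H = np.block([[A, -B @ np.linalg.solve(R, B.T)],
              [-Q, -A.T]])
print(np.sort_complex(np.linalg.eigvals(H)))  # symmetric about imag. axis
```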

We consider an expected-value ranking and selection (R&S) problem where the simulation outputs of all $k$ solutions depend on a common parameter whose uncertainty can be modeled by a distribution. We define the most probable best (MPB) to be the solution that has the largest probability of being optimal with respect to the distribution, and design an efficient sequential sampling algorithm to learn the MPB when the parameter has finite support. We derive the large-deviations rate of the probability of falsely selecting the MPB and formulate an optimal computing budget allocation problem to find the rate-maximizing static sampling ratios. The problem is then relaxed to obtain a set of optimality conditions that are interpretable and computationally efficient to verify. We devise a series of algorithms that replace the unknown means in the optimality conditions with their estimates, and prove that the algorithms' sampling ratios achieve the conditions as the simulation budget increases. Furthermore, we show that the empirical performance of the algorithms can be significantly improved by adopting kernel ridge regression for mean estimation while achieving the same asymptotic convergence results. The algorithms are benchmarked against a state-of-the-art contextual R&S algorithm and demonstrated to have superior empirical performance.
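
A toy sketch of the selection map defining the MPB under a finite support: conditional means $\mu_i(\theta_j)$ are estimated by naive equal allocation, and the MPB is the solution maximizing the $w$-probability of being the conditional optimum. The paper's contribution is the rate-optimal sequential allocation, which this sketch does not implement; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
k, m = 5, 4                              # k solutions, m support points
w = np.array([0.4, 0.3, 0.2, 0.1])       # weights on the support of theta
true_mean = rng.standard_normal((k, m))  # conditional means mu_i(theta_j)

def mpb(means):
    """Solution maximizing the w-probability of being the conditional
    optimum (here: largest conditional mean at each theta_j)."""
    best = np.argmax(means, axis=0)      # conditional optimum per theta_j
    return np.argmax([w[best == i].sum() for i in range(k)])

# Naive equal-allocation estimate of the MPB from n replications per pair.
n = 200
samples = true_mean[..., None] + rng.standard_normal((k, m, n))
print(mpb(true_mean), mpb(samples.mean(axis=2)))  # true vs. estimated MPB
```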

We show that solutions to the popular convex matrix LASSO problem (nuclear-norm--penalized linear least-squares) have low rank under similar assumptions as required by classical low-rank matrix sensing error bounds. Although the purpose of the nuclear norm penalty is to promote low solution rank, a proof has not yet (to our knowledge) been provided outside very specific circumstances. Furthermore, we show that this result has significant theoretical consequences for nonconvex rank-constrained optimization approaches. Specifically, we show that if (a) the ground truth matrix has low rank, (b) the (linear) measurement operator has the matrix restricted isometry property (RIP), and (c) the measurement error is small enough relative to the nuclear norm penalty, then the (unique) LASSO solution has rank (approximately) bounded by that of the ground truth. From this, we show (a) that a low-rank--projected proximal gradient descent algorithm will converge linearly to the LASSO solution from any initialization, and (b) that the nonconvex landscape of the low-rank Burer-Monteiro--factored problem formulation is benign in the sense that all second-order critical points are globally optimal and yield the LASSO solution.
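
A minimal sketch of the low-rank-projected proximal gradient iteration discussed above, for $\min_X \tfrac12\sum_i(\langle A_i,X\rangle-y_i)^2+\lambda\|X\|_*$: the proximal step is singular value soft thresholding, followed by a hard projection onto rank $\le r$. The Gaussian measurement ensemble, step size, and penalty level are illustrative assumptions.

```python
import numpy as np

def svt(Z, tau, r=None):
    """Singular value soft thresholding (prox of tau * nuclear norm),
    optionally followed by a hard projection onto rank <= r."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    s = np.maximum(s - tau, 0.0)
    if r is not None:
        s[r:] = 0.0  # low-rank projection step
    return (U * s) @ Vt

def matrix_lasso_pgd(As, y, shape, lam, r, step, n_iter=500):
    """Low-rank-projected proximal gradient descent for the matrix
    LASSO with measurements <A_i, X> (a sketch, not a tuned solver)."""
    X = np.zeros(shape)
    for _ in range(n_iter):
        resid = np.einsum("kij,ij->k", As, X) - y
        grad = np.einsum("k,kij->ij", resid, As)  # adjoint of measurements
        X = svt(X - step * grad, step * lam, r)
    return X

rng = np.random.default_rng(0)
n1, n2, rank, n_meas = 10, 8, 2, 200
Xstar = rng.standard_normal((n1, rank)) @ rng.standard_normal((rank, n2))
As = rng.standard_normal((n_meas, n1, n2)) / np.sqrt(n_meas)
y = np.einsum("kij,ij->k", As, Xstar)
Xhat = matrix_lasso_pgd(As, y, (n1, n2), lam=0.01, r=rank, step=0.5)
print(np.linalg.norm(Xhat - Xstar) / np.linalg.norm(Xstar))
```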

We propose a theoretically justified and practically applicable slice-sampling-based Markov chain Monte Carlo (MCMC) method for approximate sampling from probability measures on Riemannian manifolds. The latter naturally arise as posterior distributions in Bayesian inference of matrix-valued parameters, for example belonging to either the Stiefel or the Grassmann manifold. Our method, called geodesic slice sampling, is reversible with respect to the distribution of interest, and generalizes hit-and-run slice sampling on $\mathbb{R}^{d}$ to Riemannian manifolds by using geodesics instead of straight lines. Through extensive numerical experiments on both synthetic and real data, we demonstrate the robustness of our sampler's performance compared to other MCMC methods for manifold-valued distributions. In particular, we illustrate its remarkable ability to cope with anisotropic target densities, without using gradient information or preconditioning.
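
An idealized sketch of the idea on the simplest manifold, the unit sphere $S^{d-1}$, where geodesics are great circles $\gamma(t)=\cos(t)\,x+\sin(t)\,v$: the slice is explored by the usual shrinkage procedure on the angle. This is only a special case under simplifying assumptions; the paper treats general Riemannian manifolds, and the target below is an illustrative von Mises-Fisher-type density.

```python
import numpy as np

def geodesic_slice_sample(log_p, x0, n_samples, rng=None):
    """Geodesic slice sampler on the unit sphere S^{d-1} (a sketch)."""
    rng = rng or np.random.default_rng()
    x = x0 / np.linalg.norm(x0)
    out = [x]
    for _ in range(n_samples):
        # Uniform tangent direction: project noise onto the tangent space.
        v = rng.standard_normal(x.shape)
        v -= (v @ x) * x
        v /= np.linalg.norm(v)
        geo = lambda t: np.cos(t) * x + np.sin(t) * v  # great circle, geo(0)=x
        log_u = log_p(x) + np.log(rng.uniform())       # slice level
        # Shrinkage over one full period [theta - 2*pi, theta] of the circle:
        theta = rng.uniform(0.0, 2 * np.pi)
        lo, hi = theta - 2 * np.pi, theta
        while True:
            t = rng.uniform(lo, hi)
            y = geo(t)
            if log_p(y) > log_u:   # point on the slice: accept
                x = y
                break
            if t < 0:              # shrink towards geo(0) = current point
                lo = t
            else:
                hi = t
        out.append(x)
    return np.array(out)

# Example: von Mises-Fisher-type target on S^2 concentrated at the pole.
mu = np.array([0.0, 0.0, 1.0])
samples = geodesic_slice_sample(lambda z: 10.0 * (z @ mu),
                                np.array([1.0, 0.0, 0.0]), 1000)
```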

This work presents GAL{\AE}XI as a novel, energy-efficient flow solver for the simulation of compressible flows on unstructured meshes, leveraging the parallel computing power of modern Graphics Processing Units (GPUs). GAL{\AE}XI implements the high-order Discontinuous Galerkin Spectral Element Method (DGSEM), using shock capturing with a finite-volume subcell approach to ensure the stability of the high-order scheme near shocks. This work provides details on the general code design, the parallelization strategy, and the implementation approach for the compute kernels, with a focus on the element-local mappings between volume and surface data arising from the unstructured mesh. GAL{\AE}XI exhibits excellent strong scaling properties up to 1024 GPUs if each GPU is assigned a minimum of one million degrees of freedom. To verify the implementation, a convergence study is performed that recovers the theoretical order of convergence of the implemented numerical schemes. Moreover, the solver is validated using the incompressible and compressible formulations of the Taylor-Green vortex at Mach numbers of 0.1 and 1.25, respectively. A mesh convergence study shows that the results converge to the high-fidelity reference solution and match the original CPU implementation. Finally, GAL{\AE}XI is applied to a large-scale wall-resolved large eddy simulation of a linear cascade of the NASA Rotor 37. Here, the supersonic region and shocks at the leading edge are captured accurately and robustly by the implemented shock-capturing approach. It is demonstrated that GAL{\AE}XI requires less than half the energy of the reference CPU implementation to carry out this simulation. This renders GAL{\AE}XI a potent tool for accurate and efficient simulations of compressible flows in the realm of exascale computing and the associated new HPC architectures.

This is a preliminary work. Overdamped Langevin dynamics are reversible stochastic differential equations commonly used to sample probability measures in high-dimensional spaces, such as those appearing in computational statistical physics and Bayesian inference. By varying the diffusion coefficient, there are in fact infinitely many reversible overdamped Langevin dynamics that preserve the target probability measure at hand. This suggests optimizing the diffusion coefficient in order to increase the convergence rate of the dynamics, as measured by the spectral gap of the generator associated with the stochastic differential equation. We analytically study this problem here, obtaining in particular necessary conditions on the optimal diffusion coefficient. We also derive an explicit expression for the optimal diffusion in a homogenized limit. Numerical results, relying both on discretizations of the spectral gap problem and on Monte Carlo simulations of the stochastic dynamics, demonstrate the improved sampling quality arising from an appropriate choice of the diffusion coefficient.
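
In one dimension the family of reversible dynamics in question can be written explicitly: for a diffusion coefficient $D(x)>0$, the SDE $\mathrm{d}X_t=\bigl(-D(X_t)V'(X_t)+D'(X_t)\bigr)\,\mathrm{d}t+\sqrt{2D(X_t)}\,\mathrm{d}W_t$ leaves $\pi\propto e^{-V}$ invariant for any such $D$. The Euler-Maruyama sketch below illustrates this; the potential and the (non-optimized) diffusion coefficient are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

dV = lambda x: 4.0 * x * (x**2 - 1.0)   # gradient of V(x) = (x^2 - 1)^2
D = lambda x: 1.0 + 0.5 * np.tanh(x)    # a smooth, strictly positive diffusion
dD = lambda x: 0.5 / np.cosh(x) ** 2    # its derivative

dt, n_steps = 1e-3, 200_000
x, traj = 0.0, np.empty(n_steps)
for k in range(n_steps):
    # The correction term dD keeps pi ~ exp(-V) invariant for this D.
    drift = -D(x) * dV(x) + dD(x)
    x += drift * dt + np.sqrt(2.0 * D(x) * dt) * rng.standard_normal()
    traj[k] = x
# A histogram of traj approximates exp(-V)/Z for any admissible D;
# the choice of D affects only the speed of convergence (the spectral gap).
```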
