
We study when low coordinate degree functions (LCDF) -- linear combinations of functions depending on small subsets of entries of a vector -- can hypothesis test between high-dimensional probability measures. These functions are a generalization, proposed in Hopkins' 2018 thesis but seldom studied since, of low degree polynomials (LDP), a class widely used in recent literature as a proxy for all efficient algorithms for tasks in statistics and optimization. Instead of the orthogonal polynomial decompositions used in LDP calculations, our analysis of LCDF is based on the Efron-Stein or ANOVA decomposition, making it much more broadly applicable. By way of illustration, we prove channel universality for the success of LCDF in testing for the presence of sufficiently "dilute" random signals through noisy channels: the efficacy of LCDF depends on the channel only through the scalar Fisher information for a class of channels including nearly arbitrary additive i.i.d. noise and nearly arbitrary exponential families. As applications, we extend lower bounds against LDP for spiked matrix and tensor models under additive Gaussian noise to lower bounds against LCDF under general noisy channels. We also give a simple and unified treatment of the effect of censoring models by erasing observations at random and of quantizing models by taking the sign of the observations. These results are the first computational lower bounds against any large class of algorithms for all of these models when the channel is not one of a few special cases, and thereby give the first substantial evidence for the universality of several statistical-to-computational gaps.
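
For orientation, the decomposition underlying the LCDF analysis is the standard Efron-Stein (ANOVA) decomposition; the following display records the textbook definitions (background material, not this paper's new results). For $f \in L^2$ of a product measure on $\mathbb{R}^N$,
\[
  f(x) \;=\; \sum_{S \subseteq [N]} f_S(x),
  \qquad
  f_S(x) \;=\; \sum_{T \subseteq S} (-1)^{|S|-|T|}\,
  \mathbb{E}\bigl[f(X) \,\big|\, X_T = x_T\bigr],
\]
where each $f_S$ depends only on the coordinates indexed by $S$ and distinct summands are orthogonal in $L^2$. The coordinate degree of $f$ is $\max\{|S| : f_S \neq 0\}$, and LCDF of degree at most $D$ are precisely the $L^2$ span of functions depending on at most $D$ coordinates, with no polynomial structure required of the $f_S$.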

Related Content

We propose new linear combinations of compositions of a basic second-order scheme with appropriately chosen coefficients to construct higher order numerical integrators for differential equations. They can be considered as a generalization of extrapolation methods and multi-product expansions. A general analysis is provided and new methods up to order 8 are built and tested. The new approach is shown to reduce the latency problem when implemented in a parallel environment and leads to schemes that are significantly more efficient than standard extrapolation when the linear combination is delayed by a number of steps.
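
As a concrete toy instance of such linear combinations (the simplest multi-product expansion, not the new order-8 schemes of the paper): for a time-symmetric second-order scheme $S_h$, whose error expansion contains only even powers of $h$, the combination $\tfrac{4}{3}S_{h/2}^2 - \tfrac{1}{3}S_h$ is fourth-order accurate. A minimal Python sketch on the harmonic oscillator, with all parameters chosen arbitrarily:

import numpy as np

def verlet(q, p, h):
    # One Stormer-Verlet step for the harmonic oscillator H = (p^2 + q^2)/2;
    # time-symmetric and second order, so its error expansion has only even powers of h.
    p -= 0.5 * h * q
    q += h * p
    p -= 0.5 * h * q
    return q, p

def mpe4(q, p, h):
    # Fourth-order step as a linear combination of compositions: (4*S_{h/2}^2 - S_h)/3.
    q1, p1 = verlet(*verlet(q, p, 0.5 * h), 0.5 * h)
    q2, p2 = verlet(q, p, h)
    return (4.0 * q1 - q2) / 3.0, (4.0 * p1 - p2) / 3.0

for h in (0.1, 0.05, 0.025):
    q, p = 1.0, 0.0
    for _ in range(round(1.0 / h)):
        q, p = mpe4(q, p, h)
    print(f"h = {h:5.3f}   error at t=1: {abs(q - np.cos(1.0)):.3e}")  # shrinks ~ h^4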

This paper introduces a second-order method for solving general elliptic partial differential equations (PDEs) on irregular domains using GPU acceleration, based on Ying's kernel-free boundary integral (KFBI) method. The method addresses limitations imposed by CFL conditions in explicit schemes and accuracy issues in fully implicit schemes for the Laplacian operator. To overcome these challenges, the paper employs a series of second-order temporal discretization schemes and splits the Laplacian operator into explicit and implicit components. Specifically, the Crank-Nicolson method discretizes the heat equation in the temporal dimension, an implicit scheme is used for the wave equation, and the Schrödinger equation is treated with the Strang splitting method. By discretizing the temporal dimension implicitly, the heat, wave, and Schrödinger equations are transformed into sequences of elliptic equations. The Laplacian operator on the right-hand side of each elliptic equation is obtained from the numerical scheme rather than being discretized and corrected by the five-point difference method. A Cartesian grid-based KFBI method is employed to solve the resulting elliptic equations. GPU acceleration, achieved through a parallel Cartesian grid solver, enhances computational efficiency by exploiting a high degree of parallelism. Numerical results demonstrate that the proposed method achieves second-order accuracy for the heat, wave, and Schrödinger equations. Furthermore, the GPU-accelerated solvers for the three types of time-dependent equations exhibit a speedup of 30 times compared to CPU-based solvers.
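
To illustrate only the time-discretization idea in isolation (a regular 1D domain with arbitrarily chosen grid and step sizes; no KFBI machinery, no GPU): Crank-Nicolson turns each step of the heat equation into one elliptic solve of the form (I - dt/2 L) u^{n+1} = (I + dt/2 L) u^n.

import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import spsolve

# Interior grid points on (0, 1) with homogeneous Dirichlet boundary conditions.
nx, dt, nsteps = 199, 1e-3, 100
dx = 1.0 / (nx + 1)
x = np.linspace(dx, 1.0 - dx, nx)

L = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(nx, nx)) / dx**2  # discrete Laplacian
I = identity(nx)
A = (I - 0.5 * dt * L).tocsc()  # elliptic operator, solved once per time step
B = (I + 0.5 * dt * L).tocsc()

u = np.sin(np.pi * x)  # exact solution decays like exp(-pi^2 t)
for _ in range(nsteps):
    u = spsolve(A, B @ u)

err = np.abs(u - np.exp(-np.pi**2 * dt * nsteps) * np.sin(np.pi * x)).max()
print(f"max error after {nsteps} steps: {err:.2e}")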

Many analyses of multivariate data focus on evaluating the dependence between two sets of variables, rather than the dependence among individual variables within each set. Canonical correlation analysis (CCA) is a classical data analysis technique that estimates parameters describing the dependence between such sets. However, inference procedures based on traditional CCA rely on the assumption that all variables are jointly normally distributed. We present a semiparametric approach to CCA in which the multivariate margins of each variable set may be arbitrary, but the dependence between variable sets is described by a parametric model that provides low-dimensional summaries of dependence. While maximum likelihood estimation in the proposed model is intractable, we propose two estimation strategies: one using a pseudolikelihood for the model and one using a Markov chain Monte Carlo (MCMC) algorithm that provides Bayesian estimates and confidence regions for the between-set dependence parameters. The MCMC algorithm is derived from a multirank likelihood function, which uses only part of the information in the observed data in exchange for being free of assumptions about the multivariate margins. We apply the proposed Bayesian inference procedure to Brazilian climate data and monthly stock returns from the materials and communications market sectors.
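
For reference, classical CCA (the fully Gaussian baseline the paper relaxes) reduces to an SVD of the whitened cross-covariance. A minimal sketch on synthetic data sharing one latent factor; the data-generating choices are arbitrary and none of the paper's multirank or MCMC machinery is reproduced here:

import numpy as np

rng = np.random.default_rng(0)
n, p, q = 500, 4, 3
Z = rng.standard_normal((n, 1))                      # shared latent factor
X = Z @ rng.standard_normal((1, p)) + rng.standard_normal((n, p))
Y = Z @ rng.standard_normal((1, q)) + rng.standard_normal((n, q))

Xc, Yc = X - X.mean(0), Y - Y.mean(0)
Sxx, Syy, Sxy = Xc.T @ Xc / n, Yc.T @ Yc / n, Xc.T @ Yc / n

# Whiten each block via Cholesky factors, then read the canonical
# correlations off the singular values of the whitened cross-covariance.
Wx = np.linalg.inv(np.linalg.cholesky(Sxx))
Wy = np.linalg.inv(np.linalg.cholesky(Syy))
_, rho, _ = np.linalg.svd(Wx @ Sxy @ Wy.T)
print("canonical correlations:", np.round(rho, 3))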

Test-negative designs are widely used for post-market evaluation of vaccine effectiveness, particularly in cases where randomization is not feasible. Differing from classical test-negative designs where only healthcare-seekers with symptoms are included, recent test-negative designs have involved individuals with various reasons for testing, especially in an outbreak setting. While including these data can increase sample size and hence improve precision, concerns have been raised about whether they introduce bias into the current framework of test-negative designs, thereby demanding a formal statistical examination of this modified design. In this article, using statistical derivations, causal graphs, and numerical simulations, we show that the standard odds ratio estimator may be biased if various reasons for testing are not accounted for. To eliminate this bias, we identify three categories of reasons for testing, including symptoms, disease-unrelated reasons, and case contact tracing, and characterize associated statistical properties and estimands. Based on our characterization, we show how to consistently estimate each estimand via stratification. Furthermore, we describe when these estimands correspond to the same vaccine effectiveness parameter, and, when appropriate, propose a stratified estimator that can incorporate multiple reasons for testing and improve precision. The performance of our proposed method is demonstrated through simulation studies.
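
A schematic of the stratification idea (hypothetical counts and a generic Mantel-Haenszel summary purely for illustration; this is not the paper's exact estimator or data): compute the odds ratio within each reason-for-testing stratum, report VE = 1 - OR per stratum, and pool across strata when the estimands agree.

# Hypothetical 2x2 counts per stratum:
# (vaccinated cases, vaccinated controls, unvaccinated cases, unvaccinated controls)
strata = {
    "symptoms":        (30, 120, 90, 100),
    "contact_tracing": (12,  60, 35,  55),
}

num = den = 0.0
for name, (a, b, c, d) in strata.items():
    n = a + b + c + d
    or_s = (a * d) / (b * c)          # stratum-specific odds ratio
    print(f"{name:16s} OR = {or_s:.2f}   VE = {1 - or_s:.1%}")
    num += a * d / n                  # Mantel-Haenszel accumulation
    den += b * c / n
print(f"Mantel-Haenszel  OR = {num / den:.2f}   VE = {1 - num / den:.1%}")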

Equilibrated fluid-solid-growth (FSGe) is a fast, open source, three-dimensional (3D) computational platform for simulating interactions between instantaneous hemodynamics and long-term vessel wall adaptation through growth and remodeling (G&R). Such models are crucial for capturing adaptations in health and disease and following clinical interventions. In traditional G&R models, this feedback is modeled through highly simplified fluid models, neglecting local variations in blood pressure and wall shear stress (WSS). FSGe overcomes these inherent limitations by strongly coupling the 3D Navier-Stokes equations for blood flow with a 3D equilibrated constrained mixture model (CMMe) for vascular tissue G&R. CMMe allows one to predict long-term evolved mechanobiological equilibria from an original homeostatic state at a computational cost equivalent to that of a standard hyperelastic material model. In illustrative computational examples, we focus on the development of a stable aortic aneurysm in a mouse model to highlight key differences in growth patterns and fluid-solid feedback between FSGe and solid-only G&R models. We show that FSGe is especially important in blood vessels with asymmetric stimuli. Simulation results reveal greater local variation in fluid-derived WSS than in intramural stress (IMS). Thus, differences between FSGe and G&R models become more pronounced with the growing influence of WSS relative to pressure. Future applications in highly localized disease processes, such as for lesion formation in atherosclerosis, can now include spatial and temporal variations of WSS.
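
The feedback loop itself can be caricatured in a few lines (a deliberately crude 0D stand-in of my own, with made-up parameter values, not FSGe): Poiseuille wall shear stress tau = 4*mu*Q/(pi*r^3) drives slow radius adaptation toward a homeostatic target tau_h.

import numpy as np

mu, Q, tau_h = 4e-3, 1e-6, 1.5   # viscosity [Pa s], flow rate [m^3/s], target WSS [Pa]
r, k, dt = 0.8e-3, 0.05, 1.0     # initial radius [m], growth gain, G&R time step

for _ in range(200):
    tau = 4 * mu * Q / (np.pi * r**3)       # "fluid solve": instantaneous WSS
    r += dt * k * r * (tau / tau_h - 1.0)   # "G&R update": grow/shrink toward homeostasis

r_eq = (4 * mu * Q / (np.pi * tau_h)) ** (1 / 3)
print(f"relaxed radius {r * 1e3:.3f} mm vs analytic equilibrium {r_eq * 1e3:.3f} mm")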

In this paper, we present an eigenvalue algorithm for block Hessenberg matrices based on ideas from non-commutative integrable systems and matrix-valued orthogonal polynomials. We introduce adjacent families of matrix-valued $\theta$-deformed bi-orthogonal polynomials and derive the corresponding discrete non-commutative hungry Toda lattice from discrete spectral transformations for the polynomials. It is shown that this discrete system can be used as a pre-processing algorithm for block Hessenberg matrices. In addition, convergence analysis and numerical examples of the algorithm are presented.
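
For background (standard material, not the paper's $\theta$-deformed scheme): the classical bridge between integrable lattices and eigenvalue algorithms identifies the time-one Toda flow with a step of the QR algorithm. A minimal unshifted QR iteration on a random symmetric matrix reduced to Hessenberg (here tridiagonal) form:

import numpy as np
from scipy.linalg import hessenberg

rng = np.random.default_rng(1)
B = rng.standard_normal((6, 6))
A = B + B.T          # symmetric, so the eigenvalues are real
H = hessenberg(A)    # similarity reduction; tridiagonal in the symmetric case

for _ in range(1000):
    Q, R = np.linalg.qr(H)
    H = R @ Q        # similarity transform; Hessenberg structure is preserved

print("QR-iteration diagonal:", np.round(np.sort(np.diag(H)), 3))
print("reference eigenvalues:", np.round(np.sort(np.linalg.eigvalsh(A)), 3))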

Machine learning-based reliability analysis methods have shown great advances in computational efficiency and accuracy. Recently, many efficient learning strategies have been proposed to enhance computational performance, but few of them explore the theoretically optimal learning strategy. In this article, we propose several theorems that facilitate such exploration. Specifically, the cases that consider and that neglect the correlations among candidate design samples are both elaborated. Moreover, we prove that the well-known U learning function can be reformulated as the optimal learning function for the case neglecting the Kriging correlation. In addition, the theoretically optimal learning strategy for the sequential enrichment of multiple training samples is explored mathematically through the Bayesian estimate with corresponding loss functions. Simulation results show that the optimal learning strategy considering the Kriging correlation outperforms both the strategy neglecting it and other state-of-the-art learning functions from the literature in terms of reducing the number of performance function evaluations. However, its implementation requires very large computational resources.
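
For context, the U learning function reformulated by the article is the standard AK-MCS criterion: given the Kriging posterior mean mu(x) and standard deviation sigma(x) at candidate samples, select the candidate minimizing U(x) = |mu(x)| / sigma(x), i.e., the one whose sign of the performance function is most in doubt. A sketch with placeholder posterior values (random stand-ins, not a fitted Kriging model):

import numpy as np

rng = np.random.default_rng(2)
mu = rng.normal(0.0, 2.0, size=1000)       # posterior means at candidate samples
sigma = rng.uniform(0.1, 1.0, size=1000)   # posterior std devs at candidate samples

U = np.abs(mu) / sigma                     # U(x) = |mu(x)| / sigma(x)
best = np.argmin(U)                        # smallest U = most ambiguous sign of g(x)
print(f"next sample: index {best}, U = {U[best]:.3f}")
# Common stopping rule: min U >= 2, i.e. misclassification probability <= Phi(-2).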

This work presents GAL{\AE}XI as a novel, energy-efficient flow solver for the simulation of compressible flows on unstructured meshes, leveraging the parallel computing power of modern Graphics Processing Units (GPUs). GAL{\AE}XI implements the high-order Discontinuous Galerkin Spectral Element Method (DGSEM), using shock capturing with a finite-volume subcell approach to ensure the stability of the high-order scheme near shocks. This work provides details on the general code design, the parallelization strategy, and the implementation approach for the compute kernels, with a focus on the element-local mappings between volume and surface data arising from the unstructured mesh. GAL{\AE}XI exhibits excellent strong scaling properties up to 1024 GPUs if each GPU is assigned a minimum of one million degrees of freedom. To verify the implementation, a convergence study is performed that recovers the theoretical order of convergence of the implemented numerical schemes. Moreover, the solver is validated using both the incompressible and compressible formulations of the Taylor-Green vortex at Mach numbers of 0.1 and 1.25, respectively. A mesh convergence study shows that the results converge to the high-fidelity reference solution and match the original CPU implementation. Finally, GAL{\AE}XI is applied to a large-scale wall-resolved large eddy simulation of a linear cascade of the NASA Rotor 37. Here, the supersonic region and shocks at the leading edge are captured accurately and robustly by the implemented shock-capturing approach. GAL{\AE}XI is demonstrated to require less than half the energy of the reference CPU implementation to carry out this simulation. This renders GAL{\AE}XI a potent tool for accurate and efficient simulations of compressible flows in the realm of exascale computing and the associated new HPC architectures.
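
One common form of DG/FV subcell shock capturing (a generic convex-blending caricature with placeholder data of my own, not GAL{\AE}XI's actual kernels) blends the high-order DG update with a dissipative FV subcell update element by element, steered by a troubled-cell indicator:

import numpy as np

rng = np.random.default_rng(3)
n_elems = 8
du_dg = rng.standard_normal(n_elems)   # placeholder high-order DG updates
du_fv = rng.standard_normal(n_elems)   # placeholder robust FV subcell updates

alpha = np.zeros(n_elems)              # troubled-cell indicator in [0, 1], per element
alpha[3] = 0.8                         # e.g., element 3 flagged as near a shock

du_blend = (1.0 - alpha) * du_dg + alpha * du_fv   # stable near shocks,
print(np.round(du_blend, 3))                       # high order where alpha = 0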

In a Jacobi--Davidson (JD) type method for singular value decomposition (SVD) problems, called JDSVD, a large symmetric and generally indefinite correction equation is approximately solved iteratively at each outer iteration, which constitutes the inner iterations and dominates the overall efficiency of JDSVD. In this paper, a convergence analysis is made on the minimal residual (MINRES) method for the correction equation. Motivated by the results obtained, a preconditioned correction equation is derived that extracts useful information from current searching subspaces to construct effective preconditioners for the correction equation and is proved to retain the same convergence of outer iterations of JDSVD. The resulting method is called inner preconditioned JDSVD (IPJDSVD) method. Convergence results show that MINRES for the preconditioned correction equation can converge much faster when there is a cluster of singular values closest to a given target, so that IPJDSVD is more efficient than JDSVD. A new thick-restart IPJDSVD algorithm with deflation and purgation is proposed that simultaneously accelerates the outer and inner convergence of the standard thick-restart JDSVD and computes several singular triplets of a large matrix. Numerical experiments justify the theory and illustrate the considerable superiority of IPJDSVD to JDSVD.
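
The flavor of the inner iteration (a generic symmetric indefinite example of my own, not JDSVD's specific correction equation or preconditioner): MINRES with a positive definite preconditioner can collapse the iteration count when the preconditioned spectrum clusters.

import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import minres, LinearOperator

n = 2000
d = np.linspace(-1.0, 4.0, n)   # symmetric indefinite diagonal test matrix
A = diags(d)
b = np.ones(n)

count = [0]
callback = lambda xk: count.__setitem__(0, count[0] + 1)
x, info = minres(A, b, callback=callback)
print("unpreconditioned MINRES iterations:", count[0])

# Absolute-value preconditioner: the preconditioned spectrum is exactly {-1, +1},
# so MINRES terminates in two iterations.
M = LinearOperator((n, n), matvec=lambda v: v / np.abs(d))
count[0] = 0
x, info = minres(A, b, M=M, callback=callback)
print("preconditioned MINRES iterations:  ", count[0])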

We introduce the generalized alternating direction implicit iteration (GADI) method for solving large sparse complex symmetric linear systems and prove its convergence properties; numerical results demonstrate the effectiveness of the algorithm. Furthermore, as an application of the GADI method to complex symmetric linear systems, we use the flattening operator and properties of the Kronecker product to solve Lyapunov and Riccati equations with complex coefficients. In solving the Riccati equation, we combine inner and outer iterations: the Newton method first reduces the Riccati equation to a Lyapunov equation, and the GADI method is then applied to solve it. Finally, we provide a convergence analysis of the method and corresponding numerical results.
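
The flattening step mentioned above, in isolation (a small dense toy with arbitrary data; the GADI iteration itself is not reproduced here): with column-major vectorization, vec(AXB) = (B^T kron A) vec(X), so the Lyapunov equation A X + X A^H + Q = 0 becomes a single linear system.

import numpy as np

rng = np.random.default_rng(4)
n = 5
A = -3.0 * np.eye(n) + rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Q = np.eye(n)

I = np.eye(n)
K = np.kron(I, A) + np.kron(A.conj(), I)   # flattening of X -> A X + X A^H
x = np.linalg.solve(K, -Q.reshape(-1, order="F"))
X = x.reshape(n, n, order="F")

print("residual:", np.linalg.norm(A @ X + X @ A.conj().T + Q))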
