
We propose a novel test procedure for comparing mean functions across two groups within the reproducing kernel Hilbert space (RKHS) framework. Our proposed method is adept at handling sparsely and irregularly sampled functional data whose observation times are random for each subject. Conventional approaches, which are built upon functional principal components analysis, usually assume a homogeneous covariance structure across groups; justifying this assumption in real-world scenarios, however, can be challenging. To eliminate the need for a homogeneous covariance structure, we first develop a functional Bahadur representation for the mean estimator under the RKHS framework; this representation naturally leads to the desired pointwise limiting distributions. Moreover, we establish weak convergence for the mean estimator, which allows us to construct a test statistic for the mean difference. Our method is easily implementable and outperforms some conventional tests in controlling type I errors across various settings. We demonstrate the finite-sample performance of our approach through extensive simulations and two real-world applications.
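
As a rough illustration of the estimation step, the sketch below pools each group's sparse observations, smooths them with kernel ridge regression in an RKHS, and compares the two estimated mean functions. The Gaussian kernel, ridge penalty, bandwidth, and sup-norm statistic are all illustrative assumptions; the abstract does not specify the paper's kernel, penalty, or test statistic.

```python
import numpy as np

def rkhs_mean(times, values, lam=1e-2, bw=0.1):
    """Kernel ridge regression estimate of a mean function from pooled
    sparse observations (a generic RKHS smoother, not necessarily the
    paper's exact estimator)."""
    t = np.concatenate(times)          # pool all subjects' observation times
    y = np.concatenate(values)
    K = np.exp(-(t[:, None] - t[None, :])**2 / (2 * bw**2))  # Gaussian kernel
    alpha = np.linalg.solve(K + lam * len(t) * np.eye(len(t)), y)
    return lambda s: np.exp(-(np.asarray(s)[:, None] - t[None, :])**2
                            / (2 * bw**2)) @ alpha

# Toy sparse/irregular data: each subject is observed at a few random times.
rng = np.random.default_rng(0)
def simulate(n, shift=0.0):
    times = [np.sort(rng.uniform(0, 1, rng.integers(3, 8))) for _ in range(n)]
    values = [np.sin(2*np.pi*ti) + shift + rng.normal(0, .3, ti.size) for ti in times]
    return times, values

mu1 = rkhs_mean(*simulate(100))
mu2 = rkhs_mean(*simulate(100, shift=0.2))
grid = np.linspace(0, 1, 200)
print(np.max(np.abs(mu1(grid) - mu2(grid))))  # sup-norm mean difference
```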

Related content

The integer autoregressive (INAR) model is one of the most commonly used models in nonnegative integer-valued time series analysis and is a counterpart to the traditional autoregressive model for continuous-valued time series. To guarantee the integer-valued nature, the binomial thinning operator, or more generally the generalized Steutel and van Harn operator, is used to define the INAR model. However, the distributions of the counting sequences used in these operators have so far been chosen according to the analyst's preference, without statistical verification. In this paper, we propose a test based on the mean-variance relationships of the distributions of the counting sequences and the disturbance process to check whether the operator is reasonable. We show that our proposed test has asymptotically correct size and is consistent. Numerical simulations are carried out to evaluate the finite sample performance of our test. As a real data application, we apply our test to the monthly number of anorexia cases in animals submitted to animal health laboratories in New Zealand, and we conclude that the binomial thinning operator is not appropriate.
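
For concreteness, the following minimal simulation of an INAR(1) process under binomial thinning shows the structure the test scrutinizes; the parameter values and Poisson innovations are illustrative choices, not the paper's setup.

```python
import numpy as np

def simulate_inar1(n, alpha=0.5, lam=1.0, seed=0):
    """Simulate X_t = alpha ∘ X_{t-1} + eps_t, where '∘' is binomial
    thinning (each of the X_{t-1} counts survives independently with
    probability alpha) and eps_t is a Poisson(lam) disturbance."""
    rng = np.random.default_rng(seed)
    x = np.empty(n, dtype=int)
    x[0] = rng.poisson(lam / (1 - alpha))       # start near the stationary mean
    for t in range(1, n):
        survivors = rng.binomial(x[t - 1], alpha)   # binomial thinning step
        x[t] = survivors + rng.poisson(lam)         # add the disturbance
    return x

x = simulate_inar1(5000)
# Binomial thinning fixes the conditional variance at alpha*(1-alpha)*X_{t-1};
# mean-variance relationships of this kind are what the proposed test checks.
print(x.mean(), x.var())
```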

This work is motivated by a longitudinal data set on HIV CD4+ T cell counts from Livingstone district, Zambia. The corresponding histogram plots indicate a lack of symmetry in the marginal distributions, and the pairwise scatter plots show non-elliptical dependence patterns. The standard linear mixed model for longitudinal data fails to capture these features, so it seems appropriate to consider a more general framework for modeling such data. In this article, we consider generalized linear mixed models (GLMM) for the marginals (e.g., a gamma mixed model), and the temporal dependency of the repeated measurements is modeled by the copula corresponding to some skew-elliptical distribution (such as skew-normal or skew-t). Our proposed class of copula-based mixed models simultaneously takes into account asymmetry, between-subject variability, and non-standard temporal dependence, and hence can be considered an extension of the standard linear mixed model based on multivariate normality. We estimate the model parameters using the IFM (inference functions for margins) method and describe how to obtain standard errors of the parameter estimates. We investigate the finite sample performance of our procedure with extensive simulation studies involving skewed and symmetric marginal distributions and several choices of the copula. We finally apply our models to the HIV data set and report the findings.
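
The construction can be illustrated with a simpler stand-in: the sketch below generates repeated measurements with skewed gamma marginals and AR(1)-type temporal dependence through a Gaussian copula. The paper uses skew-normal/skew-t copulas; the Gaussian copula and all parameter values here are simplifying assumptions.

```python
import numpy as np
from scipy import stats

def simulate_copula_gamma(n_subj, n_times, rho=0.6, shape=2.0, scale=1.0, seed=0):
    """Gamma marginals + Gaussian copula with AR(1) correlation: a toy
    version of the copula-based mixed model construction."""
    rng = np.random.default_rng(seed)
    lags = np.abs(np.subtract.outer(np.arange(n_times), np.arange(n_times)))
    corr = rho ** lags                          # AR(1) correlation matrix
    z = rng.multivariate_normal(np.zeros(n_times), corr, size=n_subj)
    u = stats.norm.cdf(z)                       # copula scale: uniform marginals
    return stats.gamma.ppf(u, a=shape, scale=scale)  # transform to Gamma marginals

y = simulate_copula_gamma(200, 5)
print(y.shape, y.mean())   # skewed marginals with temporal dependence
```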

We propose a new numerical domain decomposition method for solving elliptic equations on compact Riemannian manifolds. One advantage of this method is its ability to bypass the need for global triangulations or grids on the manifolds. Additionally, it features a highly parallel iterative scheme. To verify its efficacy, we conduct numerical experiments on some $4$-dimensional manifolds, both with and without boundary.
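
To convey the flavor of the parallel iterative scheme, here is a classical overlapping Schwarz iteration on a flat one-dimensional toy problem. This is only a stand-in: the paper's method operates on Riemannian manifolds without global grids, which this sketch does not attempt.

```python
import numpy as np

# Overlapping Schwarz iteration for -u'' = f on [0,1], u(0)=u(1)=0.
# Each subdomain solve is independent, hence parallelizable.
n = 101
h = 1.0 / (n - 1)
x = np.linspace(0, 1, n)
f = np.pi**2 * np.sin(np.pi * x)            # exact solution: sin(pi x)
u = np.zeros(n)

def local_solve(u, lo, hi):
    """Solve the Dirichlet subproblem on grid slice [lo, hi], with the
    current values u[lo], u[hi] frozen as boundary data."""
    m = hi - lo - 1
    A = (np.diag(2*np.ones(m)) - np.diag(np.ones(m-1), 1)
         - np.diag(np.ones(m-1), -1)) / h**2
    b = f[lo+1:hi].copy()
    b[0] += u[lo] / h**2
    b[-1] += u[hi] / h**2
    u[lo+1:hi] = np.linalg.solve(A, b)

for _ in range(50):                          # Schwarz sweeps
    local_solve(u, 0, 60)                    # subdomain 1: [0, 0.6]
    local_solve(u, 40, 100)                  # subdomain 2: [0.4, 1]

print(np.max(np.abs(u - np.sin(np.pi * x))))  # discretization-level error
```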

High-order Hadamard-form entropy stable multidimensional summation-by-parts discretizations of the Euler and compressible Navier-Stokes equations are considerably more expensive than the standard divergence-form discretization. In search of a more efficient entropy stable scheme, we extend the entropy-split method for implementation on unstructured grids and investigate its properties. The main ingredients of the scheme are Harten's entropy functions, diagonal-$ \mathsf{E} $ summation-by-parts operators with a diagonal norm matrix, and entropy conservative simultaneous approximation terms (SATs). We show that the scheme is high-order accurate and entropy conservative on periodic curvilinear unstructured grids for the Euler equations. An entropy stable matrix-type interface dissipation operator is constructed, which can be added to the SATs to obtain an entropy stable semi-discretization. Fully discrete entropy conservation is achieved using a relaxation Runge-Kutta method. Entropy stable viscous SATs, applicable to both the Hadamard-form and entropy-split schemes, are developed for the compressible Navier-Stokes equations. In the absence of heat fluxes, the entropy-split scheme is entropy stable for the compressible Navier-Stokes equations. Local conservation in the vicinity of discontinuities is enforced using an entropy stable hybrid scheme. Several numerical problems involving both smooth and discontinuous solutions are investigated to support the theoretical results. Computational cost comparison studies suggest that the entropy-split scheme offers substantial efficiency benefits relative to Hadamard-form multidimensional SBP-SAT discretizations.
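
The basic object underlying such schemes can be shown in a few lines: the classical second-order diagonal-norm SBP first-derivative operator satisfies $Q + Q^T = \mathsf{E} = \mathrm{diag}(-1, 0, \dots, 0, 1)$, a minimal instance of the diagonal-$\mathsf{E}$, diagonal-norm operators the abstract refers to (the paper's operators are high-order and multidimensional; this one-dimensional example is only illustrative).

```python
import numpy as np

# Second-order diagonal-norm SBP operator D = H^{-1} Q on a 1-D grid.
n, h = 11, 0.1
H = h * np.diag([0.5] + [1.0]*(n-2) + [0.5])   # diagonal norm (quadrature) matrix
Q = 0.5 * (np.eye(n, k=1) - np.eye(n, k=-1))   # skew part: central differences
Q[0, 0], Q[-1, -1] = -0.5, 0.5                 # boundary closures
D = np.linalg.solve(H, Q)

# SBP property: Q + Q^T = E, so u^T H (D v) + (D u)^T H v = u^T E v,
# a discrete analogue of integration by parts.
E = Q + Q.T
print(np.allclose(E, np.diag([-1.] + [0.]*(n-2) + [1.])))  # True

x = np.linspace(0, 1, n)
print(np.max(np.abs(D @ x - 1.0)))   # exact first derivative of a linear function
```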

The human cerebral cortex has many bumps and grooves called gyri and sulci. Even though there is high inter-individual consistency for the main cortical folds, this is not the case when we examine the exact shapes and details of the folding patterns. Because of this complexity, characterizing the cortical folding variability and relating it to subjects' behavioral characteristics or pathologies is still an open scientific problem. Classical approaches include labeling a few specific patterns, either manually or semi-automatically, based on geometric distances, but the recent availability of MRI datasets of tens of thousands of subjects makes modern deep-learning techniques particularly attractive. Here, we build a self-supervised deep-learning model to detect folding patterns in the cingulate region. We train a contrastive self-supervised model (SimCLR) on both the Human Connectome Project (1101 subjects) and UKBioBank (21070 subjects) datasets with topology-based augmentations on the cortical skeletons, which are topological objects that capture the shape of the folds. We explore several backbone architectures (convolutional network, DenseNet, and PointNet) for SimCLR. For evaluation and testing, we perform a linear classification task on a database manually labeled for the presence of the "double-parallel" folding pattern in the cingulate region, which is related to schizophrenia characteristics. The best model, giving a test AUC of 0.76, is a convolutional network with 6 layers, a 10-dimensional latent space, a linear projection head, and the branch-clipping augmentation. This is the first time that a self-supervised deep-learning model has been applied to cortical skeletons on such a large dataset and quantitatively evaluated. We can now envisage the next step: applying it to other brain regions to detect other biomarkers.
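
At the core of SimCLR training is the NT-Xent contrastive loss, sketched below in NumPy: two augmented views of the same skeleton form a positive pair, and all other samples in the batch serve as negatives. This is the standard SimCLR loss, not the authors' code; batch size and temperature are illustrative.

```python
import numpy as np

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent loss. z1[i] and z2[i] are projection-head outputs for two
    augmented views (e.g. branch-clipped skeletons) of the same subject."""
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # cosine similarities
    sim = z @ z.T / tau
    np.fill_diagonal(sim, -np.inf)                     # exclude self-pairs
    n = len(z1)
    pos = np.concatenate([np.arange(n, 2*n), np.arange(n)])  # index of positive
    logp = sim[np.arange(2*n), pos] - np.log(np.exp(sim).sum(axis=1))
    return -logp.mean()

rng = np.random.default_rng(0)
z1, z2 = rng.normal(size=(8, 10)), rng.normal(size=(8, 10))  # 10-d latent space
print(nt_xent(z1, z2))
```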

Using validated numerical methods, namely interval arithmetic and Taylor models, we propose a certified predictor-corrector loop for tracking zeros of polynomial systems with a parameter. We provide a Rust implementation which shows tremendous improvement over existing software for certified path tracking.
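
The underlying (uncertified) predictor-corrector loop is easy to sketch: an Euler step along the solution path followed by Newton correction. The interval-arithmetic and Taylor-model certification, which is the paper's contribution, is deliberately omitted here, and the example system is a made-up univariate one.

```python
def track(f, fx, ft, x, t0, t1, steps=100, newton_iters=3):
    """Euler-predictor / Newton-corrector loop tracking a zero of f(x, t)
    as t moves from t0 to t1 (no certification of any step)."""
    dt = (t1 - t0) / steps
    t = t0
    for _ in range(steps):
        x = x - dt * ft(x, t) / fx(x, t)   # predictor: follow dx/dt = -f_t / f_x
        t += dt
        for _ in range(newton_iters):      # corrector: Newton's method on f(., t)
            x = x - f(x, t) / fx(x, t)
    return x

# Track the root of f(x, t) = x^2 - t from (x, t) = (1, 1) to t = 4.
root = track(lambda x, t: x**2 - t,
             lambda x, t: 2*x,             # df/dx
             lambda x, t: -1.0,            # df/dt
             x=1.0, t0=1.0, t1=4.0)
print(root)   # ~2.0
```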

Data generation remains a bottleneck in training surrogate models to predict molecular properties. We demonstrate that multitask Gaussian process regression overcomes this limitation by leveraging both expensive and cheap data sources. In particular, we consider training sets constructed from coupled-cluster (CC) and density functional theory (DFT) data. We report that multitask surrogates can predict at CC-level accuracy with a reduction in data generation cost by over an order of magnitude. Of note, our approach allows the training set to include DFT data generated by a heterogeneous mix of exchange-correlation functionals without imposing any artificial hierarchy on functional accuracy. More generally, the multitask framework can accommodate a wider range of training set structures -- including full disparity between the different levels of fidelity -- than existing kernel approaches based on $\Delta$-learning, though we show that the accuracy of the two approaches can be similar. Consequently, multitask regression can be a tool for reducing data generation costs even further by opportunistically exploiting existing data sources.
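
One common form of multitask GP regression uses an intrinsic coregionalization (ICM) kernel, $K((x,i),(x',j)) = B_{ij}\,k(x,x')$, where $B$ couples the tasks. The sketch below treats task 0 as plentiful cheap (DFT-like) data and task 1 as scarce expensive (CC-like) data; the kernel, task covariance, and toy functions are all illustrative assumptions, not the paper's model.

```python
import numpy as np

k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :])**2)  # squared-exp kernel

f_cheap = lambda x: np.sin(x)                    # stand-in for DFT energies
f_exp   = lambda x: np.sin(x) + 0.3*np.cos(2*x)  # stand-in for CC energies

x0 = np.linspace(0, 6, 40); y0 = f_cheap(x0)     # many cheap points (task 0)
x1 = np.array([0.5, 2.5, 4.5]); y1 = f_exp(x1)   # few expensive points (task 1)

B = np.array([[1.0, 0.9], [0.9, 1.0]])           # task covariance (assumed)
X = np.concatenate([x0, x1])
task = np.r_[np.zeros(40, int), np.ones(3, int)]
K = B[np.ix_(task, task)] * k(X, X) + 1e-6 * np.eye(len(X))

xs = np.linspace(0, 6, 200)
Ks = B[np.ix_(np.ones(200, int), task)] * k(xs, X)  # cross-covariance to CC task
mean = Ks @ np.linalg.solve(K, np.concatenate([y0, y1]))
print(np.max(np.abs(mean - f_exp(xs))))  # CC-task prediction error over the grid
```

The point of the construction is that the cheap observations inform the expensive task through the off-diagonal entries of $B$, so only a handful of expensive points are needed.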

Among semiparametric regression models, partially linear additive models provide a useful tool to include additive nonparametric components as well as a parametric component when explaining the relationship between the response and a set of explanatory variables. This paper concerns such models under sparsity assumptions for the covariates included in the linear component. Sparse covariates are frequent in regression problems where the task of variable selection is usually of interest. As in other settings, outliers either in the residuals or in the covariates involved in the linear component have a harmful effect. To simultaneously achieve model selection for the parametric component and resistance to outliers, we combine preliminary robust estimators of the additive components with robust linear $MM$-regression estimators penalized, for instance, with SCAD on the coefficients of the parametric part. Under mild assumptions, consistency results and rates of convergence for the proposed estimators are derived. A Monte Carlo study is carried out to compare, under different models and contamination schemes, the performance of the robust proposal with its classical counterpart. The obtained results show the advantage of using the robust approach. Through the analysis of a real data set, we also illustrate the benefits of the proposed procedure.
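
The SCAD penalty of Fan and Li (2001), one concrete choice for the penalized parametric part, is linear near zero (like the lasso), tapers quadratically, and then flattens, so large coefficients are not over-shrunk. A direct implementation, with the conventional default $a = 3.7$:

```python
import numpy as np

def scad(theta, lam, a=3.7):
    """SCAD penalty: lam*|t| for |t| <= lam; a quadratic bridge for
    lam < |t| <= a*lam; constant lam^2*(a+1)/2 beyond a*lam."""
    t = np.abs(theta)
    quad = (2*a*lam*t - t**2 - lam**2) / (2*(a - 1))
    return np.where(t <= lam, lam * t,
           np.where(t <= a*lam, quad, lam**2 * (a + 1) / 2))

theta = np.linspace(-4, 4, 9)
print(scad(theta, lam=1.0))   # flat for |theta| > a*lam, linear near zero
```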

We study a category of probability spaces and measure-preserving Markov kernels up to almost sure equality. This category contains, among its isomorphisms, mod-zero isomorphisms of probability spaces. It also gives an isomorphism between the space of values of a random variable and the sigma-algebra that it generates on the outcome space, reflecting the standard mathematical practice of using the two interchangeably, for example when taking conditional expectations. We show that a number of constructions and results from classical probability theory, mostly involving notions of equilibrium, can be expressed and proven in terms of this category. In particular:
- Given a stochastic dynamical system acting on a standard Borel space, we show that the almost surely invariant sigma-algebra can be obtained as a limit and as a colimit;
- In the setting above, the almost surely invariant sigma-algebra gives rise, up to isomorphism of our category, to a standard Borel space;
- As a corollary, we give a categorical version of the ergodic decomposition theorem for stochastic actions;
- As an example, we show how de Finetti's theorem and the Hewitt-Savage and Kolmogorov zero-one laws fit in this limit-colimit picture.

This work uses the tools of categorical probability, in particular Markov categories, as well as the theory of dagger categories.
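
On finite spaces, the objects and morphisms of such a category are concrete: a probability space is a finite set with a distribution, a measure-preserving Markov kernel is a row-stochastic matrix whose pushforward sends the source distribution to the target one, and composition is matrix multiplication. This toy model (the paper works with general probability spaces up to almost sure equality, which this does not capture) is shown below.

```python
import numpy as np

p = np.array([0.2, 0.3, 0.5])            # distribution on a 3-point space X
F = np.array([[1.0, 0.0],
              [0.5, 0.5],
              [0.0, 1.0]])               # Markov kernel X -> Y (row-stochastic)
q = p @ F                                # pushforward: the induced distribution on Y
G = np.array([[0.6, 0.4],
              [0.3, 0.7]])               # a second kernel Y -> Y

# Composition is matrix multiplication, and pushforward is functorial:
print(np.allclose(p @ (F @ G), q @ G))       # True
print(np.allclose((F @ G).sum(axis=1), 1.0)) # composites remain stochastic
```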

Time series and extreme value analyses are two statistical approaches usually applied to study hydrological data. Classical techniques, such as ARIMA models (in the case of mean flow predictions), and parametric generalised extreme value (GEV) fits and nonparametric extreme value methods (in the case of extreme value theory), have usually been employed in this context. In this paper, nonparametric functional data methods are used to perform mean monthly flow predictions and extreme value analysis, which are important for flood risk management. These are powerful tools that take advantage of both the functional nature of the data under consideration and the flexibility of nonparametric methods, providing more reliable results. Therefore, they can be useful to prevent damage caused by floods and to reduce the likelihood and/or the impact of floods in a specific location. The nonparametric functional approaches are applied to flow samples of two rivers in the U.S. In this way, monthly mean flow is predicted and flow quantiles in the extreme value framework are estimated using the proposed methods. Results show that the nonparametric functional techniques work satisfactorily, generally outperforming classical parametric and nonparametric estimators in both settings.
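
A standard building block of such methods is the functional Nadaraya-Watson estimator, which predicts a scalar response from a functional covariate by kernel-weighting training curves according to their distance to the new curve. The sketch below is a generic version with an L2 distance and Gaussian kernel on synthetic data, not the paper's exact estimator or river data.

```python
import numpy as np

def fnw_predict(X_train, y_train, x_new, h=1.0):
    """Functional Nadaraya-Watson: predict a scalar (e.g. next month's
    mean flow) from a discretized curve (e.g. the previous month's daily
    flows), weighting training curves by kernelized L2 distance."""
    d = np.sqrt(((X_train - x_new)**2).mean(axis=1))   # L2 curve distances
    w = np.exp(-0.5 * (d / h)**2)                      # Gaussian kernel weights
    return (w @ y_train) / w.sum()

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 30)
X = rng.normal(size=(50, 1)) * np.sin(2*np.pi*t) + rng.normal(0, .1, (50, 30))
y = X.mean(axis=1) + rng.normal(0, .05, 50)            # toy scalar response
print(fnw_predict(X, y, X[0], h=0.5), y[0])            # prediction vs. truth
```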
