
This article focuses on the study of lactating sows, where the main interest is the influence of temperature, measured throughout the day, on the lower quantiles of the daily feed intake. We outline a model framework and estimation methodology for quantile regression in scenarios with longitudinal data and functional covariates. The quantile regression model uses a time-varying regression coefficient function to quantify the association between covariates and the quantile level of interest, and it includes subject-specific intercepts to incorporate within-subject dependence. Estimation relies on spline representations of the unknown coefficient functions and can be carried out with existing software. We introduce bootstrap procedures for bias adjustment and computation of standard errors. Analysis of the lactation data indicates, among other things, that the influence of temperature increases over the lactation period.
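
The varying-coefficient quantile model can be sketched on simulated data: represent the coefficient function beta(t) with a small spline-type basis, multiply it into the covariate, and minimize the pinball (check) loss via the standard linear-programming formulation. The data-generating process, basis, and quantile level below are illustrative assumptions, and the subject-specific intercepts of the full model are omitted for brevity.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Illustrative data: time t within the period, covariate x (e.g. centred
# temperature), and a response whose slope on x grows over time.
n = 300
t = rng.uniform(0, 1, n)
x = rng.normal(0, 1, n)
y = (1.0 + 2.0 * t) * x + rng.normal(0, 1, n)

# Truncated-power basis standing in for the spline representation of beta(t).
knots = [0.25, 0.5, 0.75]
def basis(s):
    s = np.atleast_1d(s).astype(float)
    cols = [np.ones_like(s), s, s ** 2]
    cols += [np.clip(s - k, 0, None) ** 2 for k in knots]
    return np.column_stack(cols)

X = np.column_stack([np.ones(n), basis(t) * x[:, None]])  # intercept + beta(t)*x
p = X.shape[1]
tau = 0.25  # lower quantile of interest

# Quantile regression as a linear program:
#   min  tau * 1'u + (1 - tau) * 1'v   s.t.   X theta + u - v = y,  u, v >= 0
c = np.concatenate([np.zeros(p), tau * np.ones(n), (1 - tau) * np.ones(n)])
A_eq = np.hstack([X, np.eye(n), -np.eye(n)])
bounds = [(None, None)] * p + [(0, None)] * (2 * n)
theta = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs").x[:p]

# Estimated beta(t) early vs late in the period; it should increase.
beta_early, beta_late = basis([0.2, 0.8]) @ theta[1:]
print(round(beta_early, 2), round(beta_late, 2))
```

The LP formulation is exact for the check loss; in practice existing quantile-regression software would be used, as the abstract notes.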

Related content

This paper focuses on statistical modelling using additive Gaussian process (GP) models and their efficient implementation for large-scale spatio-temporal data with a multi-dimensional grid structure. To achieve this, we exploit the Kronecker product structure of the covariance kernel. While this method has gained popularity in the GP literature, the existing approach is limited to covariance kernels with a tensor product structure and does not allow flexible modelling and selection of interaction effects, which are an important component of spatio-temporal analysis. We extend the method to a more general class of additive GP models that accounts for main effects and selected interaction effects. Our approach allows for easy identification and interpretation of interaction effects. The proposed model is applied to the analysis of NO$_2$ concentrations during the COVID-19 lockdown in London. Our scalable method enables analysis of large-scale, hourly-recorded data collected from 59 different stations across the city, providing additional insights to findings from previous research using daily or weekly averaged data.
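
A minimal sketch of the Kronecker trick this method builds on, for a two-dimensional grid and a tensor-product squared-exponential kernel (grids, lengthscales, and the 1-D "space" coordinate are illustrative): a matrix-vector product with K = K1 ⊗ K2 can be computed without ever forming K.

```python
import numpy as np

# Separable squared-exponential kernels over a space grid s and a time grid t.
def k_se(a, b, ell):
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

s = np.linspace(0, 1, 40)   # e.g. a station coordinate (1-D for illustration)
t = np.linspace(0, 1, 50)   # e.g. an hourly time index
K1, K2 = k_se(s, s, 0.3), k_se(t, t, 0.1)

# For a tensor-product kernel the full covariance is K = K1 kron K2
# (2000 x 2000 here), but a matrix-vector product K v can use the identity
#   (K1 kron K2) vec(V) = vec(K1 V K2^T)
# (row-major flattening), which never forms K explicitly.
rng = np.random.default_rng(1)
v = rng.normal(size=len(s) * len(t))
V = v.reshape(len(s), len(t))       # rows indexed by s, columns by t

fast = (K1 @ V @ K2.T).ravel()      # O(n * (n1 + n2)) flops
slow = np.kron(K1, K2) @ v          # O(n^2) flops, materializes K

print(np.allclose(fast, slow))      # True
```

The paper's contribution is extending this beyond a single tensor-product kernel to additive models with selected interaction terms; the identity above is the computational core in either case.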

This paper presents a Bayesian reformulation of covariate-assisted principal (CAP) regression of Zhao et al. (2021), which aims to identify components in the covariance of the response signal that are associated with covariates in a regression framework. We introduce a geometric formulation and reparameterization of individual covariance matrices in their tangent space. By mapping the covariance matrices to the tangent space, we leverage Euclidean geometry to perform posterior inference. This approach enables joint estimation of all parameters and uncertainty quantification within a unified framework, fusing dimension reduction for covariance matrices with regression model estimation. We validate the proposed method through simulation studies and apply it to analyze associations between covariates and brain functional connectivity, utilizing data from the Human Connectome Project.
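
The tangent-space idea can be illustrated with the log-Euclidean map, one standard way of sending symmetric positive-definite covariance matrices into a Euclidean space of symmetric matrices (the paper's exact geometric reparameterization may differ; the 3×3 matrix below is a toy example):

```python
import numpy as np
from scipy.linalg import logm, expm

rng = np.random.default_rng(2)

# A toy 3x3 covariance (symmetric positive-definite) matrix.
A = rng.normal(size=(3, 3))
S = A @ A.T + 3 * np.eye(3)

# Log-Euclidean tangent-space map: the matrix logarithm takes the SPD cone to
# the flat space of symmetric matrices, where ordinary (Euclidean) regression
# and posterior machinery apply; expm maps back to the SPD cone.
L = logm(S).real
S_back = expm(L).real

print(np.allclose(S, S_back))   # round trip recovers S
print(np.allclose(L, L.T))      # the tangent vector is symmetric
```

Working in the tangent space removes the positive-definiteness constraint, which is what makes joint posterior inference over all covariance parameters tractable.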

Functional mixed models are widely useful for regression analysis with dependent functional data, including longitudinal functional data with scalar predictors. However, existing algorithms for Bayesian inference with these models only provide either scalable computing or accurate approximations to the posterior distribution, but not both. We introduce a new MCMC sampling strategy for highly efficient and fully Bayesian regression with longitudinal functional data. Using a novel blocking structure paired with an orthogonalized basis reparametrization, our algorithm jointly samples the fixed effects regression functions together with all subject- and replicate-specific random effects functions. Crucially, the joint sampler optimizes sampling efficiency for these key parameters while preserving computational scalability. Perhaps surprisingly, our new MCMC sampling algorithm even surpasses state-of-the-art algorithms for frequentist estimation and variational Bayes approximations for functional mixed models -- while also providing accurate posterior uncertainty quantification -- and is orders of magnitude faster than existing Gibbs samplers. Simulation studies show improved point estimation and interval coverage in nearly all simulation settings over competing approaches. We apply our method to a large physical activity dataset to study how various demographic and health factors associate with intraday activity.
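
One ingredient, the orthogonalized basis reparametrization, can be sketched in isolation (the monomial basis and evaluation grid are chosen purely for illustration): replacing a collinear basis B by the Q factor of its QR decomposition gives coefficients whose design is perfectly conditioned, which helps a blocked sampler mix well.

```python
import numpy as np

t = np.linspace(0, 1, 100)
B = np.column_stack([t ** j for j in range(5)])  # highly collinear monomials
Q, R = np.linalg.qr(B)                           # Q has orthonormal columns

# Any fit B @ gamma equals Q @ (R @ gamma): same column span, but after
# reparametrization the design has condition number 1.
print(f"cond(B) = {np.linalg.cond(B):.0f}, cond(Q) = {np.linalg.cond(Q):.2f}")
```

This is only one piece of the algorithm; the paper pairs it with a novel blocking structure to jointly sample fixed-effect and random-effect functions.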

In this paper, we develop an {\em epsilon admissible subsets} (EAS) model selection approach for performing group variable selection in the high-dimensional multivariate regression setting. This EAS strategy is designed to estimate a posterior-like, generalized fiducial distribution over a parsimonious class of models in the setting of correlated predictors and/or in the absence of a sparsity assumption. The effectiveness of our approach, to this end, is demonstrated empirically in simulation studies, and is compared to other state-of-the-art model/variable selection procedures. Furthermore, assuming a matrix-Normal linear model, we show that the EAS strategy achieves {\em strong model selection consistency} in the high-dimensional setting if there exists a sparse, true data-generating set of predictors. In contrast to Bayesian approaches for model selection, our generalized fiducial approach completely avoids the problem of simultaneously having to specify arbitrary prior distributions for model parameters and penalize model complexity; our approach allows for inference directly on the model complexity. Implementation of the method is illustrated through yeast data to identify significant cell-cycle regulating transcription factors.

Correlated data are ubiquitous in today's data-driven society. While regression models for analyzing means and variances of responses of interest are relatively well-developed, the development of such models for analyzing correlations is largely confined to longitudinal data, a special form of sequentially correlated data. This paper proposes a new method for the analysis of correlations to fully exploit the use of covariates for general correlated data. In a renewed analysis of the Classroom data, a highly unbalanced multilevel clustered dataset with within-class and within-school correlations, our method reveals informative insights into these structures not previously known. In another analysis of the malaria immune response data in Benin, a longitudinal study with time-dependent covariates where the exact times of the observations are not available, our approach again provides promising new results. At the heart of our approach is a new generalized z-transformation that converts correlation matrices, constrained to be positive definite, to vectors with unrestricted support, and is order-invariant. These two properties enable us to develop regression analysis incorporating covariates for the modelling of correlations via maximum likelihood.
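
The classical scalar antecedent of such a transformation is Fisher's z, which maps a single correlation from (-1, 1) onto the whole real line; the paper's generalized z-transformation extends this idea to entire correlation matrices while remaining order-invariant. A minimal sketch of the scalar case:

```python
import numpy as np

# Fisher's z-transformation: a bijection between (-1, 1) and the real line,
# so regression machinery can act on z without boundary constraints.
r = np.array([-0.9, 0.0, 0.5, 0.95])
z = np.arctanh(r)        # unrestricted support
r_back = np.tanh(z)      # inverse map
print(np.allclose(r, r_back))   # True
```

For matrices the analogous construction must additionally preserve positive definiteness of the inverse map, which is the nontrivial part the paper addresses.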

This paper investigates the efficiency of the K-fold cross-validation (CV) procedure and a debiased version thereof as a means of estimating the generalization risk of a learning algorithm. We work under the general assumption of uniform algorithmic stability. We show that the K-fold risk estimate may not be consistent under such general stability assumptions, by constructing non-vanishing lower bounds on the error in realistic contexts such as regularized empirical risk minimisation and stochastic gradient descent. We thus advocate the use of a debiased version of the K-fold estimate and prove an error bound with exponential tail decay for this version. Our result is applicable to the large class of uniformly stable algorithms, in contrast to earlier works that focus on specific tasks such as density estimation. We illustrate the relevance of the debiased K-fold CV on a simple model selection problem and demonstrate empirically the usefulness of the promoted approach on real-world classification and regression datasets.
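
The plain K-fold estimator under scrutiny can be sketched with ridge regression, a canonical uniformly stable algorithm (the data-generating model, penalty, and K below are illustrative; the paper's debiased variant then corrects this estimator's bias):

```python
import numpy as np

rng = np.random.default_rng(3)

# Linear model with Gaussian noise; the learner is ridge regression.
n, d = 200, 5
X = rng.normal(size=(n, d))
w = rng.normal(size=d)
y = X @ w + rng.normal(0, 0.5, n)

def ridge_fit(Xtr, ytr, lam=1.0):
    return np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(d), Xtr.T @ ytr)

def kfold_risk(X, y, K=5):
    """Plain K-fold CV estimate of the squared-error risk."""
    folds = np.array_split(np.arange(len(y)), K)
    losses = []
    for k in range(K):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(K) if j != k])
        w_hat = ridge_fit(X[train], y[train])
        losses.append(np.mean((y[test] - X[test] @ w_hat) ** 2))
    return np.mean(losses)

risk_cv = kfold_risk(X, y, K=5)

# Monte-Carlo "true" risk of the model trained on the full sample.
Xf = rng.normal(size=(5000, d))
yf = Xf @ w + rng.normal(0, 0.5, 5000)
risk_true = np.mean((yf - Xf @ ridge_fit(X, y)) ** 2)

print(round(risk_cv, 3), round(risk_true, 3))  # both should be near the 0.25 noise variance
```

In this benign setting the plain estimate is already close to the true risk; the paper's lower bounds show that under stability assumptions alone this need not hold, motivating the debiased version.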

A bootstrap procedure for constructing prediction bands for a stationary functional time series is proposed. The procedure exploits a general vector autoregressive representation of the time-reversed series of Fourier coefficients appearing in the Karhunen-Loève representation of the functional process. It generates backward-in-time functional replicates that adequately mimic the dependence structure of the underlying process in a model-free way and have the same conditionally fixed curves at the end of each functional pseudo-time series. The bootstrap prediction error distribution is then calculated as the difference between the model-free, bootstrap-generated future functional observations and the functional forecasts obtained from the model used for prediction. This allows the estimated prediction error distribution to account for the innovation and estimation errors associated with prediction, as well as possible errors due to model misspecification. We establish the asymptotic validity of the bootstrap procedure in estimating the conditional prediction error distribution of interest, and we also show that the procedure enables the construction of prediction bands that achieve (asymptotically) the desired coverage. Prediction bands based on a consistent estimation of the conditional distribution of the studentized prediction error process are also introduced. Such bands take the local uncertainty of prediction into account more appropriately. Through a simulation study and the analysis of two data sets, we demonstrate the capabilities and the good finite-sample performance of the proposed method.
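
A greatly simplified scalar analogue conveys the idea (an AR(1) series stands in for the functional process; the model, sample size, and bootstrap size are illustrative): bootstrap the one-step prediction error so that the band reflects both the innovation and the estimation error of the forecast.

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulate an AR(1) series and fit phi by least squares.
n, phi = 500, 0.6
x = np.zeros(n)
for s in range(1, n):
    x[s] = phi * x[s - 1] + rng.normal()
phi_hat = np.sum(x[1:] * x[:-1]) / np.sum(x[:-1] ** 2)
resid = x[1:] - phi_hat * x[:-1]
resid -= resid.mean()

B = 500
errors = np.empty(B)
for b in range(B):
    # Bootstrap future value conditional on the last observed point
    # (innovation error) ...
    x_future = phi_hat * x[-1] + rng.choice(resid)
    # ... and re-estimate phi on a bootstrap series (estimation error).
    e = rng.choice(resid, size=n)
    xb = np.zeros(n)
    for s in range(1, n):
        xb[s] = phi_hat * xb[s - 1] + e[s]
    phi_b = np.sum(xb[1:] * xb[:-1]) / np.sum(xb[:-1] ** 2)
    errors[b] = x_future - phi_b * x[-1]

lo, hi = np.quantile(errors, [0.025, 0.975])
band = (phi_hat * x[-1] + lo, phi_hat * x[-1] + hi)
print(np.round(band, 2))   # 95% prediction band for the next observation
```

The functional version replaces the scalar recursion with a vector autoregression on Karhunen-Loève scores and runs it backward in time so the replicates end at the observed curves.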

The debate over whether to keep daylight saving time has gained attention in recent years, with interest in understanding how the length of exposure to sunlight may affect health outcomes. In this study, we analyzed cancer incidence rates in counties located at different longitudinal positions within time zones and across time zone borders in the contiguous United States. Using both linear and spatial regression models, we found that differences in overall cancer rates are not significant within time zones or near time zone borders, which challenges previous research. Furthermore, we examined breast, liver, lung, and prostate cancer rates and found that only breast and liver cancers show an increase in incidence from the eastern border to the west within a time zone, while prostate cancer shows the opposite trend. Our study provides insights into the potential difference in human health outcomes associated with an additional hour of sunlight in the morning versus in the evening, which could inform the ongoing discussions about daylight saving time.

Data heterogeneity across clients is a key challenge in federated learning. Prior works address this by either aligning client and server models or using control variates to correct client model drift. Although these methods achieve fast convergence in convex or simple non-convex problems, their performance in over-parameterized models such as deep neural networks is lacking. In this paper, we first revisit the widely used FedAvg algorithm on a deep neural network to understand how data heterogeneity influences the gradient updates across the network layers. We observe that while the feature extraction layers are learned efficiently by FedAvg, the substantial diversity of the final classification layers across clients impedes performance. Motivated by this, we propose to correct model drift by variance reduction on the final layers only. We demonstrate that this significantly outperforms existing benchmarks at a similar or lower communication cost. We furthermore prove a convergence rate for our algorithm.
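
The proposal can be sketched on a toy two-layer linear model with full-batch local gradient steps (the architecture, data, learning rate, and SCAFFOLD-style control variates below are illustrative stand-ins, not the paper's exact method): the feature layer is aggregated by plain FedAvg, while the final layer's local steps are drift-corrected.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy model: prediction = (X @ W1) @ w2, squared-error loss.
def grads(W1, w2, X, y):
    r = (X @ W1) @ w2 - y
    return X.T @ np.outer(r, w2) / len(y), (X @ W1).T @ r / len(y)

# Two clients with heterogeneous (shifted) inputs but a shared target map.
def make_client(shift):
    X = rng.normal(size=(200, 3)) + shift
    return X, X @ np.array([1.0, -2.0, 0.5])

clients = [make_client(-1.0), make_client(1.0)]
W1 = 0.3 * rng.normal(size=(3, 4))
w2 = 0.3 * rng.normal(size=4)
c_server = np.zeros(4)
c_client = [np.zeros(4) for _ in clients]  # control variates, final layer only
lr, K = 0.05, 5                            # local step size and step count

for rnd in range(400):
    W1s, w2s, cs = [], [], []
    for i, (X, y) in enumerate(clients):
        Wl, wl = W1.copy(), w2.copy()
        for _ in range(K):
            gW1, gw2 = grads(Wl, wl, X, y)
            Wl -= lr * gW1                              # plain FedAvg layer
            wl -= lr * (gw2 - c_client[i] + c_server)   # corrected final layer
        # SCAFFOLD-style control-variate update from the net local movement.
        cs.append(c_client[i] - c_server + (w2 - wl) / (K * lr))
        W1s.append(Wl)
        w2s.append(wl)
    c_client = cs
    c_server = np.mean(cs, axis=0)
    W1, w2 = np.mean(W1s, axis=0), np.mean(w2s, axis=0)

loss = np.mean([np.mean(((X @ W1) @ w2 - y) ** 2) for X, y in clients])
print(round(loss, 4))
```

Restricting the correction to the final layer keeps the extra communication to a vector the size of the classifier, which is where the abstract locates the client drift.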

We present a pseudo-reversible normalizing flow method for efficiently generating samples of the state of a stochastic differential equation (SDE) with different initial distributions. The primary objective is to construct an accurate and efficient sampler that can be used as a surrogate model for computationally expensive numerical integration of SDEs, such as those employed in particle simulation. After training, the normalizing flow model can directly generate samples of the SDE's final state without simulating trajectories. Existing normalizing flows for SDEs depend on the initial distribution, meaning the model needs to be re-trained when the initial distribution changes. The main novelty of our normalizing flow model is that it can learn the conditional distribution of the state, i.e., the distribution of the final state conditional on any initial state, so that the model only needs to be trained once and can then handle various initial distributions. This feature can provide significant computational savings in studies of how the final state varies with the initial distribution. We provide a rigorous convergence analysis of the pseudo-reversible normalizing flow model to the target probability density function under the Kullback-Leibler divergence. Numerical experiments are provided to demonstrate the effectiveness of the proposed normalizing flow model.
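
The "train once, reuse for any initial distribution" point can be illustrated on an SDE whose conditional law is known in closed form, so an exact conditional sampler can stand in for the trained flow (the Ornstein-Uhlenbeck process and all settings below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)

# Ornstein-Uhlenbeck SDE dX = -X dt + dW on [0, T]; its exact conditional law
# is X_T | X_0 = x0 ~ N(x0 * exp(-T), (1 - exp(-2T)) / 2). The sampler below
# plays the role of the trained flow: a map (initial state, noise) -> final
# state, reusable for ANY initial distribution without re-simulation.
T = 1.0
mean_fac = np.exp(-T)
std = np.sqrt((1.0 - np.exp(-2.0 * T)) / 2.0)

def conditional_sampler(x0, z):
    return mean_fac * x0 + std * z  # stand-in for the learned conditional flow

# Push two very different initial distributions through the SAME sampler.
for x0 in (rng.normal(0.0, 1.0, 50_000), rng.uniform(2.0, 3.0, 50_000)):
    xT = conditional_sampler(x0, rng.normal(size=x0.size))
    # Brute-force check: Euler-Maruyama simulation of full trajectories.
    x, dt = x0.copy(), 0.01
    for _ in range(int(T / dt)):
        x += -x * dt + np.sqrt(dt) * rng.normal(size=x.size)
    print(round(xT.mean(), 2), round(x.mean(), 2))  # means should agree per case
```

A flow that only learned the marginal of X_T under one fixed initial distribution would fail the second case and require retraining, which is the limitation the paper removes.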
