A stepped wedge design is a unidirectional crossover design where clusters are randomized to distinct treatment sequences defined by calendar time. While model-based analysis of stepped wedge designs -- via linear mixed models or generalized estimating equations -- is standard practice to evaluate treatment effects accounting for clustering and adjusting for baseline covariates, formal results on their model-robustness properties remain unavailable. In this article, we study when a potentially misspecified multilevel model can offer consistent estimators for treatment effect estimands that are functions of calendar time and/or exposure time. We describe a super-population potential outcomes framework to define treatment effect estimands of interest in stepped wedge designs, and adapt linear mixed models and generalized estimating equations to achieve estimand-aligned inference. We prove a central result that, as long as the treatment effect structure is correctly specified in each working model, our treatment effect estimator is robust to arbitrary misspecification of all remaining model components. The theoretical results are illustrated via simulation experiments and re-analysis of a cardiovascular stepped wedge cluster randomized trial.
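As a rough illustration of the kind of estimand-aligned working model discussed above, the sketch below simulates a small stepped wedge trial and fits a linear mixed model with categorical calendar-time effects, exposure-time-specific treatment effects, and a cluster random intercept using statsmodels. The data-generating process, column names, and model specification are illustrative assumptions, not the authors' exact estimator.

```python
# Minimal sketch (assumptions flagged): simulate a small stepped wedge trial and
# fit a working linear mixed model with calendar-time and exposure-time effects.
# This illustrates the general modelling strategy, not the paper's estimator.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_clusters, n_periods, n_per = 12, 5, 20
rows = []
for c in range(n_clusters):
    start = 1 + c % (n_periods - 1)          # period in which cluster c crosses over
    b_c = rng.normal(0, 0.5)                 # cluster random intercept
    for t in range(n_periods):
        expo = max(0, t - start + 1)         # exposure time (0 = untreated)
        effect = 0.3 * min(expo, 3)          # true effect varies with exposure time
        for _ in range(n_per):
            y = 0.2 * t + b_c + effect + rng.normal()
            rows.append((c, t, expo, y))
df = pd.DataFrame(rows, columns=["cluster", "period", "exposure", "y"])

# Working model: categorical calendar time, exposure-time-specific treatment
# effects, random intercept for cluster.
fit = smf.mixedlm("y ~ C(period) + C(exposure)", df, groups=df["cluster"]).fit(reml=True)
print(fit.params.filter(like="C(exposure)"))   # exposure-time-specific effects
```

Averaging the exposure-time-specific coefficients (for example, with equal weights) gives one way to summarize the treatment effect as a single number.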
Standard techniques such as leave-one-out cross-validation (LOOCV) might not be suitable for evaluating the predictive performance of models incorporating structured random effects. In such cases, the correlation between the training and test sets could have a notable impact on the model's prediction error. To overcome this issue, an automatic group construction procedure for leave-group-out cross-validation (LGOCV) has recently emerged as a valuable tool for enhancing predictive performance measurement in structured models. The purpose of this paper is (i) to compare LOOCV and LGOCV within structured models, emphasizing model selection and predictive performance, and (ii) to provide real data applications in spatial statistics using complex structured models fitted with INLA, showcasing the utility of the automatic LGOCV method. First, we briefly review the key aspects of the recently proposed LGOCV method for automatic group construction in latent Gaussian models. We also demonstrate the effectiveness of this method for selecting the model with the highest predictive performance by simulating extrapolation tasks in both temporal and spatial data analyses. Finally, we provide insights into the effectiveness of the LGOCV method in modelling complex structured data, encompassing spatio-temporal multivariate count data, spatial compositional data, and spatio-temporal geospatial data.
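To convey the basic idea behind LGOCV, the following generic Python sketch (scikit-learn, not the R-INLA implementation) constructs the held-out group for each test point as the point plus its temporal neighbours, so the score reflects prediction into unobserved periods rather than interpolation between correlated neighbours. The group radius and the data-generating process are illustrative choices.

```python
# Generic sketch of the LGOCV idea (not the R-INLA implementation): for each test
# point in a temporally structured dataset, remove a whole neighbourhood around it
# from the training set before predicting.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(1)
t = np.arange(300.0)
y = np.sin(t / 20) + rng.normal(0, 0.3, t.size)      # temporally structured signal
X = t.reshape(-1, 1)

def cv_error(group_radius, k=5):
    errs = []
    for i in range(t.size):
        grp = np.abs(t - t[i]) <= group_radius       # the left-out group
        train = ~grp
        model = KNeighborsRegressor(n_neighbors=k).fit(X[train], y[train])
        errs.append((y[i] - model.predict(X[i:i + 1])[0]) ** 2)
    return float(np.mean(errs))

print("LOOCV (radius 0):", cv_error(0))    # optimistic: neighbours stay in training
print("LGOCV (radius 10):", cv_error(10))  # closer to an extrapolation error
```

The automatic procedure reviewed in the paper chooses such groups from the model's own correlation structure rather than from a fixed radius; the sketch only shows why grouping changes the measured error.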
The innovative application of precise geospatial vegetation forecasting holds immense potential across diverse sectors, including agriculture, forestry, humanitarian aid, and carbon accounting. To leverage the vast availability of satellite imagery for this task, various works have applied deep neural networks for predicting multispectral images in photorealistic quality. However, the important area of vegetation dynamics has not been thoroughly explored. Our study breaks new ground by introducing GreenEarthNet, the first dataset specifically designed for high-resolution vegetation forecasting, and Contextformer, a novel deep learning approach for predicting vegetation greenness from Sentinel-2 satellite images at fine resolution across Europe. Our multi-modal transformer model Contextformer leverages spatial context through a vision backbone and predicts the temporal dynamics on local context patches incorporating meteorological time series in a parameter-efficient manner. The GreenEarthNet dataset features a learned cloud mask and an appropriate evaluation scheme for vegetation modeling. It also maintains compatibility with the existing satellite imagery forecasting dataset EarthNet2021, enabling cross-dataset model comparisons. Our extensive qualitative and quantitative analyses reveal that our methods outperform a broad range of baseline techniques. This includes surpassing previous state-of-the-art models on EarthNet2021, as well as adapted models from time series forecasting and video prediction. To the best of our knowledge, this work presents the first models for continental-scale vegetation modeling at fine resolution that are able to capture anomalies beyond the seasonal cycle, thereby paving the way for predicting vegetation health and behaviour in response to climate variability and extremes.
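The following PyTorch sketch is a deliberately schematic stand-in for the kind of multi-modal fusion described above: a small convolutional "vision backbone" embeds a local image patch, meteorological time steps are embedded as tokens, and a transformer encoder mixes them before a per-step greenness prediction. All shapes, dimensions, and module choices are invented for illustration and are not the released Contextformer architecture.

```python
# Schematic sketch (not the released Contextformer code): fuse a local Sentinel-2
# context-patch embedding with meteorological time-series tokens in a transformer
# encoder and predict a greenness index for each future step.
import torch
import torch.nn as nn

class TinyVegForecaster(nn.Module):
    def __init__(self, bands=4, meteo_vars=5, d_model=64, horizon=10):
        super().__init__()
        self.patch_embed = nn.Sequential(            # stand-in "vision backbone"
            nn.Conv2d(bands, d_model, kernel_size=3, padding=1),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.meteo_embed = nn.Linear(meteo_vars, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, 1)            # greenness (e.g. NDVI) per step
        self.horizon = horizon

    def forward(self, patch, meteo):
        # patch: (B, bands, H, W); meteo: (B, T, meteo_vars), last `horizon`
        # meteorological tokens are assumed to correspond to the forecast steps.
        ctx = self.patch_embed(patch).unsqueeze(1)               # (B, 1, d_model)
        tokens = torch.cat([ctx, self.meteo_embed(meteo)], dim=1)
        enc = self.encoder(tokens)                               # (B, 1 + T, d_model)
        return self.head(enc[:, -self.horizon:, :]).squeeze(-1)  # (B, horizon)

model = TinyVegForecaster()
out = model(torch.randn(2, 4, 32, 32), torch.randn(2, 30, 5))
print(out.shape)   # torch.Size([2, 10])
```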
Charts, figures, and text derived from data play an important role in decision making, from data-driven policy development to day-to-day choices informed by online articles. Making sense of, or fact-checking, outputs means understanding how they relate to the underlying data. Even for domain experts with access to the source code and data sets, this poses a significant challenge. In this paper we introduce a new program analysis framework which supports interactive exploration of fine-grained I/O relationships directly through computed outputs, making use of dynamic dependence graphs. Our main contribution is a novel notion in data provenance which we call related inputs, a relation of mutual relevance or "cognacy" which arises between inputs when they contribute to common features of the output. Queries of this form allow readers to ask questions like "What outputs use this data element, and what other data elements are used along with it?". We show how Jonsson and Tarski's concept of conjugate operators on Boolean algebras appropriately characterises the notion of cognacy in a dependence graph, and give a procedure for computing related inputs over such a graph.
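The related-inputs query can be illustrated on a toy dependence graph. In the sketch below, edges point from data elements to the output features they contribute to; two inputs are "cognate" when they feed a common feature. The graph, names, and query are invented for illustration and do not reproduce the paper's analysis framework.

```python
# Toy sketch of "related inputs" over a dependence graph: a forward step collects
# the output features a set of inputs contributes to, and a backward step collects
# the inputs contributing to a set of features.
deps = {
    # input data element -> set of output features it contributes to
    "temp_2019": {"chart.bar3", "caption"},
    "temp_2020": {"chart.bar4", "caption"},
    "baseline":  {"chart.bar3", "chart.bar4"},
    "footnote":  {"credits"},
}

def outputs_of(inputs):
    """Forward step: output features any of the given inputs contribute to."""
    return set().union(*(deps[i] for i in inputs))

def related_inputs(features):
    """Backward step: inputs contributing to any of the given output features."""
    return {i for i, outs in deps.items() if outs & features}

# "What other data elements are used along with temp_2019?"
cognates = related_inputs(outputs_of({"temp_2019"})) - {"temp_2019"}
print(cognates)   # {'baseline', 'temp_2020'}
```

The forward and backward maps form the conjugate pair alluded to in the abstract: the forward image of a set of inputs meets a set of features exactly when some input lies in the backward image of those features.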
Several mixed-effects models for longitudinal data have been proposed to accommodate the non-linearity of late-life cognitive trajectories and assess the putative influence of covariates on this non-linearity. No prior research provides a side-by-side examination of these models to offer guidance on their proper application and interpretation. In this work, we examined five statistical approaches previously used to answer research questions related to non-linear changes in cognitive aging: the linear mixed model (LMM) with a quadratic term, LMM with splines, the functional mixed model, the piecewise linear mixed model, and the sigmoidal mixed model. We first theoretically describe the models. Next, using data from two prospective cohorts with annual cognitive testing, we compared the interpretation of the models by investigating associations of education with cognitive change before death. Lastly, we performed a simulation study to empirically evaluate the models and provide practical recommendations. Except for the LMM with a quadratic term, the fit of all models was generally adequate to capture non-linearity of cognitive change and models were relatively robust. Although spline-based models have no interpretable non-linearity parameters, their convergence was easier to achieve, and they allow graphical interpretation. In contrast, piecewise and sigmoidal models, with interpretable non-linear parameters, may require more data to achieve convergence.
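A minimal sketch of two of the five approaches, on synthetic data, is given below: an LMM with a quadratic term in time and an LMM with a B-spline basis, both fitted with statsmodels. Variable names and the data-generating process are illustrative only and do not correspond to the cohort data analysed in the paper.

```python
# Minimal sketch: LMM with a quadratic time term vs. LMM with splines, fitted by
# maximum likelihood so the log-likelihoods are comparable across fixed effects.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
rows = []
for pid in range(150):
    b = rng.normal(0, 0.7)                        # person-specific intercept
    edu = rng.integers(8, 21)                     # years of education
    for yrs in np.arange(-10, 0.5, 1.0):          # years before death
        cog = b + 0.02 * edu - 0.05 * yrs - 0.015 * yrs**2 + rng.normal(0, 0.3)
        rows.append((pid, yrs, edu, cog))
df = pd.DataFrame(rows, columns=["id", "time", "educ", "cog"])

quad = smf.mixedlm("cog ~ time + I(time**2) + educ + educ:time",
                   df, groups=df["id"]).fit(reml=False)
spln = smf.mixedlm("cog ~ bs(time, df=4) + educ + educ:time",
                   df, groups=df["id"]).fit(reml=False)
print("quadratic llf:", round(quad.llf, 1), " spline llf:", round(spln.llf, 1))
```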
Chemical and biochemical reactions can exhibit surprisingly different behaviours, from multiple steady-state solutions to oscillatory solutions and chaotic dynamics. Such behaviour has been of great interest to researchers for many decades. The Briggs-Rauscher, Belousov-Zhabotinskii and Bray-Liebhafsky reactions, for which periodic variations in concentrations can be visualized by changes in colour, are experimental examples of oscillating behaviour in chemical systems. These types of systems are modelled by a system of partial differential equations coupled by a nonlinearity. However, analysing the patterns, one may suspect that the dynamics are generated by only a finite number of spatial Fourier modes. In fluid dynamics, it is shown that for large times, the solution is determined by a finite number of spatial Fourier modes, called determining modes. In this article, we first introduce the concept of determining modes and show that, indeed, a finite number of spatial Fourier modes is sufficient to characterise the dynamics. In particular, we analyse the exact number of determining modes of $u$ and $v$, where the couple $(u,v)$ solves the following stochastic system \begin{equation*} \partial_t{u}(t) = r_1\Delta u(t) -\alpha_1u(t)- \gamma_1u(t)v^2(t) + f(1 - u(t)) + g(t),\quad \partial_t{v}(t) = r_2\Delta v(t) -\alpha_2v(t) + \gamma_2 u(t)v^2(t) + h(t),\quad u(0) = u_0,\;v(0) = v_0, \end{equation*} where $r_1,r_2,\gamma_1,\gamma_2>0$, $\alpha_1,\alpha_2 \ge 0$ and $g,h$ are time-dependent mappings specified later.
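For orientation, the block below restates the standard (deterministic, Foias–Prodi style) definition of determining modes in the notation of the system above; the stochastic setting of the article would replace the plain limits with pathwise or mean-square versions, so this is only an informal restatement, not the article's precise statement.

```latex
% Sketch of the standard notion of determining modes, adapted to the system above.
% Let $P_N$ denote the projection onto the span of the first $N$ spatial Fourier
% modes. The first $N$ modes are called \emph{determining} if, for any two
% solutions $(u,v)$ and $(\tilde u,\tilde v)$ driven by the same forcing,
\begin{equation*}
  \lim_{t\to\infty}\big\|P_N\big(u(t)-\tilde u(t),\,v(t)-\tilde v(t)\big)\big\| = 0
  \quad\Longrightarrow\quad
  \lim_{t\to\infty}\big\|\big(u(t)-\tilde u(t),\,v(t)-\tilde v(t)\big)\big\| = 0,
\end{equation*}
% i.e. the long-time behaviour of the full solution is already fixed by the
% long-time behaviour of finitely many Fourier modes.
```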
We consider the reliable implementation of an adaptive high-order unfitted finite element method on Cartesian meshes for solving elliptic interface problems with geometrically curved singularities. We extend our previous work on the reliable cell merging algorithm for smooth interfaces to automatically generate the induced mesh for piecewise smooth interfaces. An $hp$ a posteriori error estimate is derived for a new unfitted finite element method whose finite element functions are conforming in each subdomain. Numerical examples illustrate the competitive performance of the method.
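To indicate where such an $hp$ a posteriori estimate is used, the sketch below implements only the marking step of the generic SOLVE–ESTIMATE–MARK–REFINE loop (Dörfler, or bulk, marking). The unfitted discretisation, the cell merging algorithm, and the estimator itself are specific to the paper and are not reproduced here; the error indicators are made-up numbers.

```python
# Generic adaptive-loop sketch: given per-cell error indicators eta_K from an
# a posteriori estimate, select a minimal set of cells to refine whose squared
# indicators cover a fraction theta of the total (Dörfler marking).
import numpy as np

def dorfler_mark(eta, theta=0.5):
    """Indices of cells, largest indicators first, whose squared indicators
    sum to at least theta times the total squared indicator."""
    order = np.argsort(eta**2)[::-1]
    cumulative = np.cumsum(eta[order]**2)
    k = int(np.searchsorted(cumulative, theta * cumulative[-1])) + 1
    return order[:k]

eta = np.array([0.02, 0.4, 0.05, 0.3, 0.01, 0.15])   # per-cell error indicators
print(dorfler_mark(eta, theta=0.5))                   # cells to refine next
```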
As large models evolve, performance evaluation is needed to assess their capabilities and to ensure safety before practical application. However, current model evaluations mainly rely on specific tasks and datasets, lacking a unified framework for assessing the multidimensional intelligence of large models. In this perspective, we advocate for a comprehensive framework of cognitive science-inspired artificial general intelligence (AGI) tests, aimed at fulfilling the testing needs of large models with enhanced capabilities. The cognitive science-inspired AGI tests encompass the full spectrum of intelligence facets, including crystallized intelligence, fluid intelligence, social intelligence, and embodied intelligence. To assess the multidimensional intelligence of large models, the AGI tests consist of a battery of well-designed cognitive tests adapted from human intelligence tests, which are then naturally encapsulated within an immersive virtual community. We propose increasing the complexity of AGI testing tasks in step with advancements in large models, and we emphasize the need for careful interpretation of test results to avoid false negatives and false positives. We believe that cognitive science-inspired AGI tests will effectively guide the targeted improvement of large models in specific dimensions of intelligence and accelerate the integration of large models into human society.
Many production processes require the cooperation of various resources. Especially when using expensive machines, their utilization plays a decisive role in efficient production. In agricultural production or civil construction processes, e.g., harvesting or road building, the machines are typically mobile, and synchronization of different machine types is required to perform operations. In addition, the productivity of one type often depends on the availability of another type. In this paper, we consider two types of vehicles, called primary and support vehicles. Primary vehicles perform operations and are assisted by at least one support vehicle, with more support vehicles resulting in faster service times for primary vehicles. We call this practical problem the vehicle routing and scheduling problem with support vehicle-dependent service times and introduce two mixed-integer linear programming models. The first represents each support vehicle individually with binary decision variables, while the second considers the cumulative flow of support vehicles with integer decision variables. Furthermore, the models are defined on a graph that allows easy transformation into multiple variants. These variants are based on allowing or prohibiting switching support vehicles between primary vehicles and splitting services among primary vehicles. We show in our extensive computational experiments that: i) the integer representation of support vehicles is superior to the binary representation, ii) the benefit of additional vehicles is subject to saturation effects and depends on the ratio of support to primary vehicles, and iii) switching and splitting lead to problems that are more difficult to solve, but also result in better solutions with higher primary vehicle utilization.
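The modelling choice between the two representations can be illustrated on a drastically simplified allocation problem. The PuLP sketch below ignores routing entirely and only contrasts one binary variable per individual support vehicle (model A) with a single integer "flow" variable per operation (model B); the operation names, duration table, and fleet size are invented and this is not the paper's formulation.

```python
# Toy sketch (PuLP): two representations of support-vehicle allocation when the
# service time of an operation depends on how many support vehicles assist it.
import pulp

ops = ["op1", "op2", "op3"]
S = 4                                        # available support vehicles
# service time of each operation when assisted by k = 1, 2, 3 support vehicles
dur = {"op1": {1: 9, 2: 5, 3: 4}, "op2": {1: 6, 2: 4, 3: 3}, "op3": {1: 8, 2: 5, 3: 4}}
levels = [1, 2, 3]

def solve(integer_flow):
    m = pulp.LpProblem("support_allocation", pulp.LpMinimize)
    # y[j][k] = 1 if operation j is served with exactly k support vehicles
    y = pulp.LpVariable.dicts("y", (ops, levels), cat="Binary")
    for j in ops:                                    # pick exactly one level
        m += pulp.lpSum(y[j][k] for k in levels) == 1
    if integer_flow:                                 # model B: integer flow per op
        n = pulp.LpVariable.dicts("n", ops, lowBound=1, upBound=S, cat="Integer")
        for j in ops:
            m += n[j] == pulp.lpSum(k * y[j][k] for k in levels)
        m += pulp.lpSum(n[j] for j in ops) <= S      # supports used concurrently
    else:                                            # model A: binary per support
        x = pulp.LpVariable.dicts("x", (range(S), ops), cat="Binary")
        for s in range(S):                           # each support helps one op
            m += pulp.lpSum(x[s][j] for j in ops) <= 1
        for j in ops:
            m += pulp.lpSum(x[s][j] for s in range(S)) == pulp.lpSum(
                k * y[j][k] for k in levels)
    m += pulp.lpSum(dur[j][k] * y[j][k] for j in ops for k in levels)  # objective
    m.solve(pulp.PULP_CBC_CMD(msg=False))
    return pulp.value(m.objective)

print(solve(integer_flow=True), solve(integer_flow=False))   # same optimum
```

Both models reach the same optimum on this toy instance; the integer-flow version simply needs far fewer variables, which is the intuition behind finding i) above.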
The use of hyperspectral imaging to investigate food samples has grown due to the improved performance and lower cost of spectroscopy instrumentation. Food engineers use hyperspectral images to classify the type and quality of a food sample, typically using classification methods. In order to train these methods, every pixel in each training image needs to be labelled. Typically, computationally cheap threshold-based approaches are used to label the pixels, and classification methods are trained based on those labels. However, threshold-based approaches are subjective and cannot be generalized across hyperspectral images taken in different conditions and of different foods. Here, a consensus-constrained parsimonious Gaussian mixture model (ccPGMM) is proposed to label pixels in hyperspectral images using a model-based clustering approach. The ccPGMM utilizes available information on the labels of a small number of pixels and the relationship between those pixels and neighbouring pixels as constraints when clustering the rest of the pixels in the image. A latent variable model is used to represent the high-dimensional data in terms of a small number of underlying latent factors. To ensure computational feasibility, a consensus clustering approach is employed, where the data are divided into multiple randomly selected subsets of variables and constrained clustering is applied to each data subset; the clustering results are then consolidated across all data subsets to provide a consensus clustering solution. The ccPGMM approach is applied to simulated datasets and real hyperspectral images of three types of puffed cereal: corn, rice, and wheat. Improved clustering performance and computational efficiency are demonstrated when compared to other current state-of-the-art approaches.
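The consensus step can be sketched generically as follows: split the spectral variables into random subsets, cluster each subset with a Gaussian mixture whose means are seeded from the few labelled pixels (a crude stand-in for the constraints), and merge the runs through a co-association matrix. This scikit-learn sketch on synthetic data is only an illustration of that general recipe, not the ccPGMM model itself; it assumes scikit-learn >= 1.2.

```python
# Schematic consensus-clustering sketch (not ccPGMM): random variable subsets,
# label-seeded Gaussian mixtures, and a co-association consensus.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(3)
n, p, G = 600, 60, 3
true = rng.integers(0, G, n)
X = rng.normal(0, 1, (n, p)) + true[:, None] * 1.5        # synthetic "pixels"
labelled = rng.choice(n, 30, replace=False)               # few labelled pixels

co = np.zeros((n, n))
n_subsets, subset_size = 8, 15
for _ in range(n_subsets):
    cols = rng.choice(p, subset_size, replace=False)
    Xs = X[:, cols]
    # seed component means from the labelled pixels (soft form of the constraints)
    means = np.vstack([Xs[labelled[true[labelled] == g]].mean(axis=0) for g in range(G)])
    z = GaussianMixture(n_components=G, means_init=means, random_state=0).fit_predict(Xs)
    co += (z[:, None] == z[None, :])                      # co-association counts
co /= n_subsets

consensus = AgglomerativeClustering(n_clusters=G, metric="precomputed",
                                    linkage="average").fit_predict(1 - co)
print("ARI vs truth:", round(adjusted_rand_score(true, consensus), 3))
```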
Researchers in many fields endeavor to estimate treatment effects by regressing outcome data (Y) on a treatment (D) and observed confounders (X). Even absent unobserved confounding, the regression coefficient on the treatment reports a weighted average of strata-specific treatment effects (Angrist, 1998). Where heterogeneous treatment effects cannot be ruled out, the resulting coefficient is thus not generally equal to the average treatment effect (ATE), and is unlikely to be the quantity of direct scientific or policy interest. The difference between the coefficient and the ATE has led researchers to propose various interpretational, bounding, and diagnostic aids (Humphreys, 2009; Aronow and Samii, 2016; Sloczynski, 2022; Chattopadhyay and Zubizarreta, 2023). We note that the linear regression of Y on D and X can be misspecified when the treatment effect is heterogeneous in X. The "weights of regression", for which we provide a new (more general) expression, simply characterize how the OLS coefficient will depart from the ATE under the misspecification resulting from unmodeled treatment effect heterogeneity. Consequently, a natural alternative to suffering these weights is to address the misspecification that gives rise to them. For investigators committed to linear approaches, we propose relying on the slightly weaker assumption that the potential outcomes are linear in X. Numerous well-known estimators are unbiased for the ATE under this assumption, namely regression-imputation/g-computation/T-learner, regression with an interaction of the treatment and covariates (Lin, 2013), and balancing weights. Each of these approaches avoids the apparent weighting problem of the misspecified linear regression, at an efficiency cost that will be small when there are few covariates relative to sample size. We demonstrate these lessons using simulations in observational and experimental settings.
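The core point can be reproduced in a few lines of simulation. In the sketch below (numpy/statsmodels, with an invented data-generating process), treatment probabilities and treatment effects both vary with a binary X, so the additive OLS coefficient on D is a variance-weighted average that falls well below the ATE, while the interacted (Lin 2013) regression and regression imputation (g-computation) recover it.

```python
# Simulation sketch: OLS coefficient on D vs. ATE under heterogeneous effects.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 200_000
X = rng.binomial(1, 0.5, n)
p = np.where(X == 1, 0.9, 0.5)                 # treatment probability depends on X
D = rng.binomial(1, p)
tau = np.where(X == 1, 2.0, 0.0)               # heterogeneous effect; ATE = 1.0
Y = 1.0 + 0.5 * X + tau * D + rng.normal(0, 1, n)

# (1) Additive OLS: coefficient on D is a variance-weighted average, not the ATE.
ols = sm.OLS(Y, sm.add_constant(np.column_stack([D, X]))).fit()
print("OLS coef on D:", round(ols.params[1], 3))        # well below the ATE of 1.0

# (2) Lin-style interacted regression: demean X and interact with D.
Xc = X - X.mean()
lin = sm.OLS(Y, sm.add_constant(np.column_stack([D, Xc, D * Xc]))).fit()
print("interacted coef on D:", round(lin.params[1], 3))

# (3) Regression imputation / g-computation: fit within arms, average predictions.
mu1 = sm.OLS(Y[D == 1], sm.add_constant(X[D == 1])).fit()
mu0 = sm.OLS(Y[D == 0], sm.add_constant(X[D == 0])).fit()
gcomp = (mu1.predict(sm.add_constant(X)) - mu0.predict(sm.add_constant(X))).mean()
print("g-computation ATE:", round(gcomp, 3))
```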