This survey is concerned with the power of random information for approximation in the (deterministic) worst-case setting, with special emphasis on information consisting of functionals drawn independently and identically distributed (iid) at random from a class of admissible information functionals. We present a general result based on a weighted least squares method and derive consequences for special cases. Improvements are available if the information is "Gaussian" or if we consider iid function values for Sobolev spaces. We include open questions to guide future research on the power of random information in the context of information-based complexity.
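To make the setting concrete, here is a minimal sketch (not the survey's general construction) of approximation from iid random information: function values at uniformly sampled points are fitted by weighted least squares in a small polynomial space. The target function, sample size, basis, and constant weights are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target function to approximate on [0, 1] (illustrative choice).
f = lambda x: np.sin(2 * np.pi * x) + 0.5 * x

# iid random information: n function values at uniformly sampled points.
n, m = 200, 8                      # number of samples, dimension of the space
x = rng.uniform(0.0, 1.0, size=n)  # iid sampling nodes
y = f(x)                           # observed information L_i(f) = f(x_i)

# Weighted least squares in the polynomial space of degree < m; for the
# uniform sampling density the natural weights w_i = 1 / density(x_i) are constant.
w = np.ones(n)
V = np.vander(x, m, increasing=True)
coef, *_ = np.linalg.lstsq(np.sqrt(w)[:, None] * V, np.sqrt(w) * y, rcond=None)

# Worst-case-style (sup-norm) error on a fine grid.
t = np.linspace(0.0, 1.0, 1000)
err = np.max(np.abs(f(t) - np.vander(t, m, increasing=True) @ coef))
print(f"sup-norm error of the weighted least squares approximant: {err:.3e}")
```

In the survey, the weights and the sampling distribution are chosen to control the worst-case error over an entire function class; the snippet only illustrates the mechanics of the fit.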
Several mixed-effects models for longitudinal data have been proposed to accommodate the non-linearity of late-life cognitive trajectories and to assess the putative influence of covariates on them. No prior research provides a side-by-side examination of these models to offer guidance on their proper application and interpretation. In this work, we examined five statistical approaches previously used to answer research questions related to non-linear changes in cognitive aging: the linear mixed model (LMM) with a quadratic term, the LMM with splines, the functional mixed model, the piecewise linear mixed model, and the sigmoidal mixed model. We first describe the models theoretically. Next, using data from two prospective cohorts with annual cognitive testing, we compared the interpretation of the models by investigating the association of education with cognitive change before death. Lastly, we performed a simulation study to empirically evaluate the models and provide practical recommendations. Except for the LMM with a quadratic term, the fit of all models was generally adequate to capture the non-linearity of cognitive change, and the models were relatively robust. Although spline-based models have no interpretable non-linearity parameters, their convergence was easier to achieve and they allow graphical interpretation. In contrast, the piecewise and sigmoidal models, with interpretable non-linear parameters, may require more data to achieve convergence.
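For readers looking for a starting point, the following hedged sketch fits the first two approaches (LMM with a quadratic term and LMM with splines) using statsmodels; the synthetic data frame and its column names (id, time in years before death, educ, cognition), as well as the random-intercept-only specification, are illustrative and far simpler than the models compared in the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic long-format data: one row per (participant, annual visit).
rng = np.random.default_rng(1)
n_id, n_visits = 100, 8
df = pd.DataFrame({
    "id": np.repeat(np.arange(n_id), n_visits),
    "time": np.tile(np.arange(-(n_visits - 1), 1), n_id).astype(float),  # years before death
})
df["educ"] = np.repeat(rng.normal(14, 3, n_id), n_visits)
u = np.repeat(rng.normal(0, 0.5, n_id), n_visits)                        # random intercepts
df["cognition"] = u - 0.05 * df["time"]**2 + 0.02 * df["educ"] + rng.normal(0, 0.3, len(df))

# 1) LMM with a quadratic term in time; education modifies both time terms.
m_quad = smf.mixedlm("cognition ~ (time + I(time**2)) * educ",
                     df, groups=df["id"]).fit()

# 2) LMM with a B-spline basis in time (3 df); education as a modifier.
m_spline = smf.mixedlm("cognition ~ bs(time, df=3) * educ",
                       df, groups=df["id"]).fit()

print(m_quad.summary())
print(m_spline.summary())
```

In practice one would add random slopes (via re_formula) and compare fits using information criteria and graphical checks, as done in the paper.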
For the stochastic heat equation with multiplicative noise we consider the problem of estimating the diffusivity parameter in front of the Laplace operator. Based on local observations in space, we first study an estimator that was originally derived for additive noise. A stable central limit theorem shows that this estimator is consistent and asymptotically mixed normal. By taking the quadratic variation into account, we propose two new estimators. Their limiting distributions exhibit a smaller (conditional) variance, and the latter estimator also works for vanishing noise levels. The proofs are based on local approximation results to overcome the intricate nonlinearities and on a stable central limit theorem for stochastic integrals with respect to cylindrical Brownian motion. Simulation results illustrate the theoretical findings.
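To fix ideas, the model class is, schematically, $dX(t) = \vartheta\,\Delta X(t)\,dt + \sigma(X(t))\,dW(t)$ with diffusivity $\vartheta$. The following sketch simulates a one-dimensional example on $(0,1)$ with zero Dirichlet boundary conditions by finite differences and Euler-Maruyama, purely to illustrate the object being observed; the discretization, noise coefficient, and parameter values are illustrative, and the snippet does not implement the paper's estimators.

```python
import numpy as np

rng = np.random.default_rng(0)

# dX = theta * X_xx dt + sigma(X) dW on (0, 1), zero Dirichlet boundary,
# driven by (discretized) space-time white noise.
theta = 0.1                        # diffusivity (the parameter of interest)
sigma = lambda x: 0.2 * x          # multiplicative noise coefficient
N, M, T = 200, 20000, 1.0          # space intervals, time steps, final time
dx, dt = 1.0 / N, T / M            # note theta * dt / dx**2 = 0.2 <= 0.5 (stable)

x = np.linspace(0.0, 1.0, N + 1)
X = np.sin(np.pi * x)              # initial condition

lap = np.zeros_like(X)
for _ in range(M):
    lap[1:-1] = (X[2:] - 2 * X[1:-1] + X[:-2]) / dx**2
    dW = rng.normal(0.0, np.sqrt(dt / dx), size=X.shape)   # white-noise increment per cell
    X = X + theta * lap * dt + sigma(X) * dW
    X[0] = X[-1] = 0.0             # enforce boundary conditions

print("solution range at time T:", X.min(), X.max())
```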
The scale function plays a central role in the fluctuation theory of Lévy processes, particularly in addressing exit problems. However, it is defined only through its Laplace transform and therefore lacks an explicit representation in general. This paper introduces a novel series representation of the scale function, employing Laguerre polynomials to construct a uniformly convergent approximating sequence. Additionally, we develop statistical inference based on certain discrete observations, presenting estimators of the scale function that are asymptotically normal.
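For reference, recall the standard definition underlying the abstract: for a spectrally negative Lévy process with Laplace exponent $\psi$, the $q$-scale function $W^{(q)}$ is the increasing function on $[0,\infty)$ characterized by its Laplace transform,
\[
\int_0^\infty e^{-\beta x}\, W^{(q)}(x)\, dx \;=\; \frac{1}{\psi(\beta) - q},
\qquad \beta > \Phi(q),
\]
where $\Phi(q)$ denotes the right inverse of $\psi$. It is this implicit definition that the Laguerre series representation makes explicit and amenable to estimation.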
Missing data often result in undesirable bias and loss of efficiency. These problems become substantial when the response mechanism is nonignorable, that is, when the response model depends on unobserved variables. Managing nonignorable nonresponse requires estimating the joint distribution of the unobserved variables and the response indicators. However, model misspecification and identification issues prevent robust estimation even when this joint distribution is modelled carefully. In this study, we model the distribution of the observed data and derive sufficient conditions for model identifiability, assuming a logistic regression model as the response mechanism and generalised linear models as the main outcome models of interest. More importantly, the derived sufficient conditions are testable with the observed data and do not require instrumental variables, which are often assumed to guarantee model identifiability but cannot, in practice, be determined beforehand. To analyse missing data, we propose a new imputation method that incorporates verifiable identifiability using only the observed data. Furthermore, we present the performance of the proposed estimators in numerical studies and apply the proposed method to two real data sets: exit polls from the 19th South Korean election and public data from the Korean Survey of Household Finances and Living Conditions.
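As a toy illustration of what "nonignorable" means here (a hedged sketch, not the proposed method), the snippet below generates an outcome from a simple linear model and deletes observations through a logistic response mechanism that depends on the unobserved outcome itself; as a consequence, a complete-case analysis is biased. All variable names and parameter values are hypothetical.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000

# Outcome model: y = 1 + 0.5 * x + noise.
x = rng.normal(size=n)
y = 1.0 + 0.5 * x + rng.normal(size=n)

# Nonignorable response mechanism: P(observed) depends on the unobserved y.
p_obs = 1.0 / (1.0 + np.exp(-(-0.5 + 1.5 * y)))
observed = rng.binomial(1, p_obs).astype(bool)

# Complete-case analysis is biased because missingness depends on y itself.
full = sm.OLS(y, sm.add_constant(x)).fit()
cc = sm.OLS(y[observed], sm.add_constant(x[observed])).fit()
print("full-data slope:     %.3f" % full.params[1])
print("complete-case slope: %.3f" % cc.params[1])
```

The proposed approach instead models the observed-data distribution jointly with the response indicator and imputes under identifiability conditions that can be checked from the observed data alone.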
Social-ecological systems (SES) research aims to understand the nature of social-ecological phenomena, to find effective ways to foster or manage the conditions under which desirable phenomena, such as sustainable resource use, occur, and to change conditions or reduce the negative consequences of undesirable phenomena, such as poverty traps. Challenges such as these are often addressed using dynamical systems models (DSM) or agent-based models (ABM). Both modeling approaches have strengths and weaknesses. DSM are praised for their analytical tractability and their efficient exploration of asymptotic dynamics and bifurcations, which are enabled by the reduced number and heterogeneity of system components. ABM allow the representation of heterogeneity, agency, learning and interactions of diverse agents within SES, but this comes at a price, such as inefficiency in exploring asymptotic dynamics or bifurcations. In this paper we combine DSM and ABM to leverage the strengths of each modeling technique and gain deeper insight into the dynamics of a system. We start with an ABM and research questions that the ABM was not able to answer. Using results of the ABM analysis as inputs, we create a DSM. Stability and bifurcation analysis of the DSM gives partial answers to the research questions and directs attention to where additional detail is needed. This informs further ABM analysis, prevents burdening the ABM with less important details, and reveals new insights about the system dynamics. The iterative process and dialogue between the ABM and the DSM lead to more complete answers to the research questions and surpass the insights provided by each of the models separately. We illustrate the procedure with the example of the emergence of poverty traps in an agricultural system with endogenously driven innovation.
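To illustrate why the DSM side is well suited to asymptotic and bifurcation questions, the sketch below sweeps a parameter of a deliberately simple one-dimensional dynamical system (logistic growth with a saturating loss term) and tracks its interior equilibria and their stability by root finding; the model, grid, and parameter range are placeholders, not the poverty-trap DSM developed in the paper.

```python
import numpy as np
from scipy.optimize import brentq

# Placeholder 1-D dynamical system: logistic growth minus a saturating loss.
# dx/dt = f(x; h) = x * (1 - x) - h * x / (0.2 + x)
def f(x, h):
    return x * (1.0 - x) - h * x / (0.2 + x)

def equilibria(h, grid=np.linspace(1e-6, 1.5, 400)):
    """Locate interior equilibria via sign changes and classify their stability."""
    vals = f(grid, h)
    roots = []
    for a, b, fa, fb in zip(grid[:-1], grid[1:], vals[:-1], vals[1:]):
        if fa * fb < 0:
            r = brentq(f, a, b, args=(h,))
            slope = (f(r + 1e-6, h) - f(r - 1e-6, h)) / 2e-6
            roots.append((r, slope < 0))          # (location, is_stable)
    return roots

# Cheap bifurcation scan: how the set of stable states changes with h.
for h in np.linspace(0.0, 0.5, 11):
    stable = [round(r, 3) for r, s in equilibria(h) if s]
    print(f"h = {h:.2f}  stable interior equilibria: {stable}")
```

For intermediate values of the loss parameter this toy system is bistable, with a productive state coexisting with collapse toward zero; that is precisely the kind of trap structure that is cheap to map with a DSM and expensive to uncover by simulating an ABM alone.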
Estimating parameters from data is a fundamental problem in physics, customarily done by minimizing a loss function between a model and observed statistics. In scattering-based analysis, researchers often employ their domain expertise to select a specific range of wavevectors for analysis, a choice that can vary from case to case. We introduce another paradigm, termed ab initio uncertainty quantification (AIUQ), that defines a probabilistic generative model from the beginning of data processing and propagates the uncertainty through to parameter estimation. As an illustrative example, we demonstrate this approach with differential dynamic microscopy (DDM), which extracts dynamical information through Fourier analysis over a selected range of wavevectors. We first show that DDM is equivalent to fitting a temporal variogram in reciprocal space using a latent factor model as the generative model. We then derive the maximum marginal likelihood estimator, which optimally weighs information at all wavevectors, thereby eliminating the need to select a wavevector range. Furthermore, we substantially reduce the computational cost by utilizing the generalized Schur algorithm for Toeplitz covariances without approximation. Simulation studies validate that AIUQ significantly improves estimation accuracy and enables model selection with automated analysis. The utility of AIUQ is also demonstrated in three distinct sets of experiments: first in an isotropic Newtonian fluid, pushing the limits of optically dense systems compared with multiple particle tracking; next in a system undergoing a sol-gel transition, automating the determination of the gelling point and critical exponent; and lastly in discerning the anisotropic diffusive behavior of colloids in a liquid crystal. These outcomes collectively underscore AIUQ's versatility in capturing system dynamics in an efficient and automated manner.
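For orientation, the quantity DDM starts from is the image structure function $D(q,\Delta t) = \langle\, |\hat I(q, t+\Delta t) - \hat I(q, t)|^2\,\rangle$, averaged over time and, for isotropic systems, over wavevector direction. The sketch below computes it for a synthetic image stack with NumPy; the synthetic movie and the azimuthal binning are illustrative, and the selection-free weighting that AIUQ introduces happens in the subsequent likelihood-based fitting step, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic image stack: T frames of size N x N (placeholder for a microscopy movie).
T, N = 64, 128
frames = np.cumsum(rng.normal(size=(T, N, N)), axis=0)

def structure_function(frames, lag):
    """D(q, lag): mean squared modulus of the 2-D FFT of frame differences."""
    diff = frames[lag:] - frames[:-lag]
    return np.mean(np.abs(np.fft.fft2(diff, axes=(1, 2))) ** 2, axis=0)

def radial_average(img):
    """Azimuthal average over the wavevector magnitude |q|."""
    n = img.shape[0]
    q = np.fft.fftfreq(n) * n
    qmag = np.sqrt(q[:, None] ** 2 + q[None, :] ** 2)
    bins = np.arange(0.5, n // 2)
    idx = np.digitize(qmag.ravel(), bins)
    return np.array([img.ravel()[idx == i].mean() for i in range(1, len(bins))])

# D(q, lag) for a few lag times, azimuthally averaged over q.
for lag in (1, 4, 16):
    Dq = radial_average(structure_function(frames, lag))
    print(f"lag = {lag:2d}  D(q) at low / high q: {Dq[0]:.1f} / {Dq[-1]:.1f}")
```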
Marginal structural models have been widely used in causal inference to estimate mean outcomes under either static treatment regimes or a prespecified set of treatment decision rules. This approach requires imposing a working model for the mean outcome given a sequence of treatments and, possibly, baseline covariates. In this paper, we introduce a dynamic marginal structural model that can be used to estimate an optimal decision rule within a class of parametric rules. Specifically, we estimate the mean outcome as a function of the parameters indexing the class of decision rules, referred to as a regimen-response curve. In general, misspecification of the working model may lead to a biased estimate with questionable causal interpretability. To mitigate this issue, we leverage risk to assess the "goodness of fit" of the imposed working model. We consider the counterfactual risk as our target parameter and derive inverse probability weighting and canonical gradients to map it to the observed data. We provide asymptotic properties of the resulting risk estimators, considering both fixed and data-dependent target parameters. We show that the inverse probability weighting estimator can be efficient and asymptotically linear when the weight functions are estimated using a sieve-based estimator. The proposed method is applied to the LS1 study to estimate a regimen-response curve for patients with Parkinson's disease.
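As a schematic single-time-point analogue (the paper's setting involves sequences of treatments, canonical gradients, and sieve-estimated weights), the snippet below traces an inverse probability weighted estimate of the regimen-response curve $\theta \mapsto E[Y^{d_\theta}]$ for the parametric rule class $d_\theta(X) = 1\{X > \theta\}$ on simulated data; the data-generating process, rule class, and logistic propensity model are all illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20000

# Simulated observational data with confounding by X; treatment helps iff X > 0.3.
X = rng.normal(size=n)
A = rng.binomial(1, 1 / (1 + np.exp(-0.5 * X)))
Y = 1.0 + A * (X - 0.3) + 0.5 * X + rng.normal(size=n)

# Estimated propensity score from a logistic working model.
ps = LogisticRegression().fit(X[:, None], A).predict_proba(X[:, None])[:, 1]

def ipw_value(theta):
    """IPW estimate of E[Y^{d_theta}] for the rule d_theta(X) = 1{X > theta}."""
    d = (X > theta).astype(int)
    p_follow = np.where(d == 1, ps, 1 - ps)       # P(A = d_theta(X) | X)
    return np.mean((A == d) / p_follow * Y)

# Regimen-response curve over a grid of rule parameters.
for theta in np.linspace(-1.0, 1.0, 9):
    print(f"theta = {theta:+.2f}  estimated value: {ipw_value(theta):.3f}")
```

Maximizing this curve over $\theta$ approximately recovers the best rule in the class; the paper's contribution is to equip such curves, together with a risk-based goodness-of-fit assessment of the working model, with valid asymptotics.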
Effect modification occurs when the effect of a treatment on an outcome varies with the levels of other covariates, known as effect modifiers. Modeling these effect differences is important for etiological understanding and for optimizing treatment. Structural nested mean models (SNMMs) are useful causal models for estimating the potentially heterogeneous effect of a time-varying exposure on the mean of an outcome in the presence of time-varying confounding. A data-driven approach for selecting the effect modifiers of an exposure may be necessary if these effect modifiers are unknown a priori and need to be identified. Although variable selection techniques are available for estimating conditional average treatment effects using marginal structural models, or for estimating optimal dynamic treatment regimens, all of these methods consider an outcome measured at a single point in time. In the context of an SNMM for repeated outcomes, we propose a doubly robust penalized G-estimator of the causal effect of a time-varying exposure with simultaneous selection of effect modifiers. We prove the oracle property of our estimator and conduct a simulation study to evaluate its finite-sample performance and verify its double-robustness property. Our work is motivated by and applied to the study of hemodiafiltration for treating patients with end-stage renal disease at the Centre Hospitalier de l'Université de Montréal, where we use the proposed method to investigate the effect heterogeneity of the dialysis facility on repeated session-specific hemodiafiltration outcomes.
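As a single-time-point sketch of the idea, and not the paper's doubly robust penalized G-estimator for repeated outcomes, the snippet below estimates a linear blip $\gamma(X;\psi) = A(\psi_0 + X^\top\psi)$ by a penalized residual-on-residual regression, a simple relative of G-estimation, using a lasso penalty to select effect modifiers; the data-generating process, nuisance models, and penalty level are illustrative, and cross-fitting is omitted for brevity.

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
n, p = 5000, 10

# Simulated data: only the first two covariates truly modify the exposure effect.
X = rng.normal(size=(n, p))
A = rng.binomial(1, 1 / (1 + np.exp(-0.5 * X[:, 0])))          # confounded exposure
blip = 1.0 + 0.8 * X[:, 0] - 0.6 * X[:, 1]                     # true effect modification
Y = X[:, 0] + 0.5 * X[:, 2] + A * blip + rng.normal(size=n)

# Nuisance working models for the propensity and the conditional outcome mean.
e_hat = LogisticRegression().fit(X, A).predict_proba(X)[:, 1]
m_hat = LinearRegression().fit(X, Y).predict(X)

# Penalized residual-on-residual regression: the lasso selects effect modifiers of A.
Z = (A - e_hat)[:, None] * np.column_stack([np.ones(n), X])
fit = Lasso(alpha=0.01, fit_intercept=False).fit(Z, Y - m_hat)
print("estimated blip coefficients (intercept, X1, ..., Xp):")
print(np.round(fit.coef_, 2))
```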
We consider the numerical behavior of the fixed-stress splitting method for coupled poromechanics as undrained regimes are approached. We explain that pressure stability is related to the splitting error of the scheme, rather than to the fact that the discrete saddle-point matrix never appears in the fixed-stress approach. This observation reconciles previous results regarding the pressure stability of the splitting method. Using examples of compositional poromechanics with application to geological CO$_2$ sequestration, we show that solutions obtained with the fixed-stress scheme using a low-order finite element-finite volume discretization that is not inherently inf-sup stable can exhibit the same pressure oscillations as the corresponding fully implicit scheme. Moreover, pressure-jump stabilization can effectively remove these spurious oscillations in the fixed-stress setting, while also improving the efficiency of the scheme in terms of the number of iterations required to reach convergence at each time step.
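For readers less familiar with the scheme: in its simplest single-phase Biot form (a schematic, not the compositional formulation used in the paper), the fixed-stress split alternates a flow solve and a mechanics solve. With $S$ the storage coefficient, $\alpha$ the Biot coefficient, and $K_{dr}$ the drained bulk modulus, iteration $k$ first solves the flow problem
\[
\partial_t\!\Big[\Big(S + \tfrac{\alpha^2}{K_{dr}}\Big)\, p^{k+1}\Big]
- \nabla\cdot\Big(\tfrac{\kappa}{\mu_f}\,\nabla p^{k+1}\Big)
= q - \alpha\,\partial_t \nabla\cdot u^{k} + \tfrac{\alpha^2}{K_{dr}}\,\partial_t p^{k},
\]
and then the mechanics problem $-\nabla\cdot\big(2\mu\,\varepsilon(u^{k+1}) + \lambda(\nabla\cdot u^{k+1})\,I - \alpha\, p^{k+1} I\big) = f$. The stabilization term $\tfrac{\alpha^2}{K_{dr}}\,\partial_t(p^{k+1} - p^{k})$ vanishes at convergence; it is the associated splitting error, rather than the absence of the saddle-point system, that the paper connects to pressure stability.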
We hypothesize that, due to the greedy nature of learning in multi-modal deep neural networks, these models tend to rely on just one modality while under-fitting the other modalities. Such behavior is counter-intuitive and hurts the models' generalization, as we observe empirically. To estimate a model's dependence on each modality, we compute the gain in accuracy obtained when the model has access to that modality in addition to another one. We refer to this gain as the conditional utilization rate. In our experiments, we consistently observe an imbalance in conditional utilization rates between modalities across multiple tasks and architectures. Since the conditional utilization rate cannot be computed efficiently during training, we introduce a proxy based on the pace at which the model learns from each modality, which we refer to as the conditional learning speed. We propose an algorithm that balances the conditional learning speeds between modalities during training and demonstrate that it indeed addresses the issue of greedy learning. The proposed algorithm improves the model's generalization on three datasets: Colored MNIST, Princeton ModelNet40, and NVIDIA Dynamic Hand Gesture.
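As a hedged sketch of how the accuracy gain behind the conditional utilization rate can be probed (the paper's exact protocol for removing a modality may differ; zeroing out one input at evaluation time is simply one crude approximation), consider a toy two-branch late-fusion model in PyTorch; the architecture, data, and masking strategy are illustrative.

```python
import torch
import torch.nn as nn

class LateFusion(nn.Module):
    """Toy two-branch late-fusion classifier over modalities (x1, x2)."""
    def __init__(self, d1, d2, n_classes):
        super().__init__()
        self.b1 = nn.Sequential(nn.Linear(d1, 32), nn.ReLU())
        self.b2 = nn.Sequential(nn.Linear(d2, 32), nn.ReLU())
        self.head = nn.Linear(64, n_classes)

    def forward(self, x1, x2):
        return self.head(torch.cat([self.b1(x1), self.b2(x2)], dim=-1))

@torch.no_grad()
def accuracy(model, x1, x2, y):
    return (model(x1, x2).argmax(dim=-1) == y).float().mean().item()

# Synthetic evaluation batch (stand-in for a validation set) and an untrained
# model (stand-in for a trained checkpoint).
torch.manual_seed(0)
x1, x2 = torch.randn(256, 16), torch.randn(256, 8)
y = torch.randint(0, 4, (256,))
model = LateFusion(16, 8, 4)

# Gain in accuracy from modality 1 given modality 2, approximated by masking modality 1.
acc_both = accuracy(model, x1, x2, y)
acc_only2 = accuracy(model, torch.zeros_like(x1), x2, y)
print(f"gain from modality 1 given modality 2: {acc_both - acc_only2:+.3f}")
```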