Accelerated life tests (ALTs) are used to infer the lifetime characteristics of highly reliable products. In particular, step-stress ALTs increase the stress level to which the units under test are subjected at certain pre-fixed times, thus accelerating product wear and inducing failure. In some cases, due to cost constraints or the nature of the product, continuous monitoring of the devices is infeasible and the units are instead inspected for failures at particular inspection time points. In such setups, the ALT response is interval-censored. Furthermore, when a test unit fails, there is often more than one fatal cause of failure; these are known as competing risks. In this paper, we assume that all competing risks are independent and follow an exponential distribution with a scale parameter depending on the stress level. Under this setup, we present a family of robust minimum density power divergence estimators (MDPDEs), which includes the classical maximum likelihood estimator as a particular case. We derive asymptotic and robustness properties of the MDPDE, showing its consistency for large samples. Based on these MDPDEs, we develop estimates of the lifetime characteristics of the product as well as of cause-specific lifetime characteristics. Direct, transformed, and bootstrap confidence intervals for the mean lifetime to failure, the reliability at a mission time, and distribution quantiles are proposed, and their performance is empirically compared through simulations. Moreover, the performance of the MDPDE family is examined through an extensive numerical study, and the methods of inference discussed here are illustrated with a real-data example involving electronic devices.
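For orientation, the density power divergence objective underlying these estimators takes the following standard form in the continuous i.i.d. case (the interval-censored, multi-stress setting of the paper replaces densities with cell probabilities): for a tuning parameter $\alpha > 0$, the minimum density power divergence estimator minimizes
\[
H_{n,\alpha}(\theta) \;=\; \int f_\theta^{\,1+\alpha}(x)\,dx \;-\; \Big(1+\tfrac{1}{\alpha}\Big)\,\frac{1}{n}\sum_{i=1}^{n} f_\theta^{\,\alpha}(X_i),
\]
and letting $\alpha \to 0$ recovers the maximum likelihood estimator.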
Despite many years of research into the merits and trade-offs of various model selection criteria, obtaining robust results that elucidate the behavior of cross-validation remains a challenging endeavor. In this paper, we highlight the inherent limitations of cross-validation when it is employed to discern the structure of a Gaussian graphical model. We provide finite-sample bounds on the probability that the Lasso estimator of the neighborhood of a node within a Gaussian graphical model, tuned using a prediction oracle, misidentifies the neighborhood. Our results apply to both undirected graphs and directed acyclic graphs, encompassing general, sparse covariance structures. To support the theoretical findings, we empirically investigate this inconsistency by contrasting cross-validation with other commonly used information criteria in an extensive simulation study. Since many algorithms designed to learn the structure of graphical models require hyperparameter selection, precise calibration of these hyperparameters is paramount for accurately estimating the underlying structure, and our observations shed light on this widely recognized practical challenge.
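As a concrete illustration of the procedure analyzed above (a sketch in Python, not the authors' code; names are ours), neighborhood selection for a node tunes the Lasso penalty by cross-validation and reads off the nonzero coefficients:

    import numpy as np
    from sklearn.linear_model import LassoCV

    def estimate_neighborhood(X, j, cv=5):
        """Estimate the neighborhood of node j in a Gaussian graphical model by
        regressing X[:, j] on the remaining columns with a CV-tuned Lasso."""
        y = X[:, j]
        others = np.delete(np.arange(X.shape[1]), j)
        fit = LassoCV(cv=cv).fit(X[:, others], y)
        return others[np.flatnonzero(fit.coef_)]  # indices of the estimated neighbors

The paper's point is that tuning the penalty for prediction, as cross-validation does here, can misidentify this neighborhood with non-negligible probability.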
Multi-modal Magnetic Resonance Imaging (MRI) offers complementary diagnostic information, but some modalities are limited by long scanning times. To accelerate the whole acquisition process, reconstructing one modality from highly undersampled k-space data with the help of another fully-sampled reference modality is an efficient solution. However, the misalignment between modalities, which is common in clinical practice, can negatively affect reconstruction quality. Existing deep learning-based methods that account for inter-modality misalignment perform better, but still share two main limitations: (1) the spatial alignment task is not adaptively integrated with the reconstruction process, resulting in insufficient complementarity between the two tasks; (2) the entire framework has weak interpretability. In this paper, we construct a novel Deep Unfolding Network with Spatial Alignment, termed DUN-SA, to appropriately embed the spatial alignment task into the reconstruction process. Concretely, we derive a novel joint alignment-reconstruction model with a specially designed cross-modal spatial alignment term. By relaxing the model into cross-modal spatial alignment and multi-modal reconstruction subproblems, we propose an effective algorithm that solves them alternately. We then unfold the iterative steps of the proposed algorithm and design corresponding network modules to build DUN-SA with interpretability. Through end-to-end training, we effectively compensate for spatial misalignment using only the reconstruction loss, and utilize the progressively aligned reference modality to provide an inter-modality prior that improves the reconstruction of the target modality. Comprehensive experiments on three real datasets demonstrate that our method exhibits superior reconstruction performance compared to state-of-the-art methods.
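Schematically, a joint alignment-reconstruction model of the kind unfolded here can be written as (our own generic notation, not the exact model of the paper)
\[
\min_{x,\,\phi}\;\tfrac{1}{2}\,\|MFx - y\|_2^2 \;+\; \lambda\,R\big(x,\;\mathcal{T}_\phi(x_{\mathrm{ref}})\big) \;+\; \mu\,S(\phi),
\]
where $x$ is the target image, $y$ the undersampled k-space data, $M$ the sampling mask, $F$ the Fourier operator, $\mathcal{T}_\phi$ warps the fully-sampled reference $x_{\mathrm{ref}}$ by a deformation field $\phi$, $R$ is a cross-modal prior, and $S$ regularizes the deformation; alternating minimization over $\phi$ and $x$ gives the two subproblems whose iterations are unrolled into network stages.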
In recent years, the Adaptive Antoulas–Anderson (AAA) algorithm has established itself as the method of choice for solving rational approximation problems. Data-driven Model Order Reduction (MOR) of large-scale Linear Time-Invariant (LTI) systems is one of the many applications in which this algorithm has proven successful, since it typically generates reduced-order models (ROMs) efficiently and in an automated way. Despite its effectiveness and numerical reliability, the classical AAA algorithm is not guaranteed to return a ROM that retains the structural features of the underlying dynamical system, such as the stability of the dynamics. In this paper, we propose a novel algebraic characterization of the stability of ROMs whose transfer functions obey the AAA barycentric structure. We use this characterization to formulate a set of convex constraints on the free coefficients of the AAA model that, whenever satisfied, guarantee by construction the asymptotic stability of the resulting ROM. We suggest how to embed such constraints within the AAA optimization routine, and we experimentally validate the effectiveness of the resulting algorithm, named stabAAA, on a set of relevant MOR applications.
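For reference, the barycentric structure in question is the standard AAA form
\[
H(s) \;=\; \frac{\displaystyle\sum_{k=1}^{r}\frac{w_k\,h_k}{s-\lambda_k}}{\displaystyle\sum_{k=1}^{r}\frac{w_k}{s-\lambda_k}},
\]
where $\lambda_k$ are the support points, $h_k$ the corresponding transfer-function samples, and $w_k$ the free barycentric weights on which the proposed convex stability constraints act.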
Permutation tests are widely recognized as robust alternatives to tests based on normal theory, and random permutation tests have frequently been employed to assess the significance of variables in linear models. Despite their widespread use, existing random permutation tests lack finite-sample, assumption-free guarantees for controlling the type I error of partial correlation tests. To address this ongoing challenge, we develop a conformal test via permutation-augmented regressions, which we refer to as PALMRT. PALMRT not only achieves power competitive with conventional methods but also reliably controls the type I error at no more than $2\alpha$ for any targeted level $\alpha$, for arbitrary fixed designs and error distributions; we confirm this through extensive simulations. Compared to the cyclic permutation test (CPT) and the residual permutation test (RPT), which also offer theoretical guarantees, PALMRT neither compromises as much on power nor imposes stringent requirements on the sample size, making it suitable for diverse biomedical applications. We further illustrate the differences in a long-Covid study, where PALMRT validates key findings previously identified using the t-test after correction for multiple comparisons, while both CPT and RPT suffer from a drastic loss of power and fail to identify any discoveries. We endorse PALMRT as a robust and practical hypothesis test for scientific research because of its superior error control, power preservation, and simplicity. An R package for PALMRT is available at \url{//github.com/LeyingGuan/PairedRegression}.
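For context, a classical residual-permutation (Freedman-Lane) test of a single coefficient looks as follows (an illustrative Python sketch of the baseline idea only, not the PALMRT construction; the statistic is deliberately left unstudentized for brevity):

    import numpy as np

    def freedman_lane_pvalue(y, x, Z, n_perm=2000, seed=None):
        """Permutation p-value for H0: beta_x = 0 in the model y ~ x + Z,
        permuting residuals of the reduced (nuisance-only) fit."""
        rng = np.random.default_rng(seed)
        Z1 = np.column_stack([np.ones(len(y)), Z])      # intercept + nuisance covariates
        beta_red, *_ = np.linalg.lstsq(Z1, y, rcond=None)
        fitted, resid = Z1 @ beta_red, y - Z1 @ beta_red
        X_full = np.column_stack([Z1, x])               # full design including x

        def stat(yy):                                   # |coefficient of x| as test statistic
            b, *_ = np.linalg.lstsq(X_full, yy, rcond=None)
            return abs(b[-1])

        obs = stat(y)
        perm = [stat(fitted + rng.permutation(resid)) for _ in range(n_perm)]
        return (1 + sum(p >= obs for p in perm)) / (n_perm + 1)

Unlike such classical constructions, PALMRT is designed to guarantee finite-sample type I error control at no more than $2\alpha$ for arbitrary fixed designs and error distributions.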
Phase field models are gradient flows whose energy naturally dissipates in time. Many numerical schemes have been studied with the aim of preserving this property. In this paper we consider a well-known method, the exponential integrator (EI) method. In the literature, several works have studied EI schemes for various phase field models and proved energy dissipation by requiring either a strong Lipschitz condition on the nonlinear source term or certain $L^\infty$ bounds on the numerical solutions (a maximum principle). However, for phase field models such as the (non-local) Cahn-Hilliard equation, no maximum principle holds, and as a result the analysis of EI schemes for such models has remained open for a long time. In this paper we give a systematic approach to applying EI-type schemes to such models by solving the Cahn-Hilliard equation with a first-order EI scheme and proving energy dissipation. Second-order EI schemes can be handled similarly, and we leave their discussion to a subsequent paper. To the best of our knowledge, this is the first work to handle phase field models without assuming any strong Lipschitz condition or $L^\infty$ boundedness. Furthermore, we analyze the $L^2$ error and present numerical simulations to demonstrate the dynamics.
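To fix ideas, writing the Cahn-Hilliard equation in the semilinear form $u_t = \mathcal{L}u + \mathcal{N}(u)$, with for instance $\mathcal{L} = -\varepsilon^2\Delta^2$ and $\mathcal{N}(u) = \Delta F'(u)$ (a schematic splitting in our notation; the precise splitting and stabilization used in the paper may differ), a first-order exponential integrator step with time step $\tau$ reads
\[
u^{n+1} \;=\; e^{\tau\mathcal{L}}\,u^{n} \;+\; \tau\,\varphi_1(\tau\mathcal{L})\,\mathcal{N}(u^{n}),
\qquad \varphi_1(z) = \frac{e^{z}-1}{z},
\]
and the contribution here is to prove energy dissipation for such a step without a global Lipschitz assumption on $\mathcal{N}$ or an $L^\infty$ bound on $u^{n}$.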
Regulatory authorities recommend the use of permutation or randomization tests so that the type-I error rate is not inflated when covariate-adaptive randomization is applied in randomized clinical trials. For non-inferiority and equivalence trials, this paper derives adjusted confidence intervals using permutation and randomization methods, thus keeping the type-I error much closer to the pre-specified nominal significance level. We consider three types of outcome variable for the adjusted confidence intervals, namely normal, binary, and time-to-event outcomes. For normal variables, we show that the adjusted confidence interval maintains the nominal significance level. However, we highlight a unique theoretical challenge for non-inferiority and equivalence trials: for binary and time-to-event variables, the nominal significance level may not be maintained when the model parameters are estimated by models that diverge from the data-generating model under the null hypothesis. To clarify these features, we present simulation results and evaluate the performance of the adjusted confidence intervals.
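One standard route to such adjusted intervals is test inversion (a sketch in our notation; the paper's exact construction may differ): for the shifted null hypothesis $H_0(\delta)$ that the treatment effect equals $\delta$, with permutation or randomization p-value $p(\delta)$, the interval is
\[
\mathrm{CI}_{1-\alpha} \;=\; \{\,\delta : p(\delta) > \alpha\,\},
\]
and non-inferiority or equivalence is then assessed by comparing this interval with the pre-specified margin.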
Marginal structural models (MSMs) are often used to estimate the causal effects of treatments on survival time outcomes from observational data when time-dependent confounding may be present. They can be fitted using, for example, inverse probability of treatment weighting (IPTW). It is important to evaluate the performance of statistical methods in different scenarios, and simulation studies are a key tool for such evaluations. In such studies, it is common to generate data so that the model of interest is correctly specified, but this is not always straightforward when the model of interest is a model for potential outcomes, as an MSM is. Methods have been proposed for simulating from MSMs for a survival outcome, but they impose restrictions on the data-generating mechanism. Here we propose a method that overcomes these restrictions. The MSM can be a marginal structural logistic model for a discrete survival time or a Cox or additive hazards MSM for a continuous survival time. The hazard of the potential survival time can be conditional on baseline covariates, and the treatment variable can be discrete or continuous. We illustrate the use of the proposed simulation algorithm with a brief simulation study that compares the coverage of confidence intervals, calculated in two different ways, for causal effect estimates obtained by fitting an MSM via IPTW.
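As one concrete instance of the models covered (standard notation, our own symbols), a Cox MSM for the potential survival time $T^{a}$ under treatment level $a$, conditional on baseline covariates $V$, specifies the hazard
\[
\lambda_{T^{a}}(t \mid V) \;=\; \lambda_0(t)\,\exp\!\big(\beta a + \gamma^{\top} V\big),
\]
and the simulation algorithm must generate observed data whose potential outcomes satisfy exactly this marginal structural form.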
We propose a simple multivariate normality test based on the Kac-Bernstein characterization, which can be conducted using existing statistical independence tests applied to sums and differences of data samples. We also conduct an empirical investigation, which reveals that for high-dimensional data the proposed approach may be more efficient than alternative ones. The accompanying code repository is available at \url{//shorturl.at/rtuy5}.
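A minimal sketch of the resulting procedure (our own illustration in Python; independence_test is a hypothetical stand-in for whichever multivariate independence test is plugged in):

    import numpy as np

    def kac_bernstein_normality_pvalue(X, independence_test, seed=None):
        """Normality test via the Kac-Bernstein characterization: split the sample
        into two disjoint halves, pair them up, and test whether their sums and
        differences are independent; dependence indicates non-normality.
        X: (n, d) data array; independence_test(A, B) -> p-value (user-supplied)."""
        rng = np.random.default_rng(seed)
        idx = rng.permutation(len(X))
        half = len(X) // 2
        A, B = X[idx[:half]], X[idx[half:2 * half]]
        return independence_test(A + B, A - B)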
Many complex tasks and environments can be decomposed into simpler, independent parts. Discovering such underlying compositional structure has the potential to expedite adaptation and enable compositional generalization. Despite progress, our most powerful systems struggle to compose flexibly. While most of these systems are monolithic, modularity promises to capture the compositional nature of many tasks. However, it is unclear under which circumstances modular systems discover this hidden compositional structure. To shed light on this question, we study a teacher-student setting with a modular teacher, where we have full control over the composition of the ground-truth modules. This allows us to relate the problem of compositional generalization to that of identifying the underlying modules. We show theoretically that identification up to linear transformation, purely from demonstrations, is possible in hypernetworks without having to learn an exponential number of module combinations. While our theory assumes the infinite-data limit, in an extensive empirical study we demonstrate how meta-learning from finite data can discover modular solutions that generalize compositionally in modular, but not monolithic, architectures. We further show that our insights translate beyond the teacher-student setting, and demonstrate that in tasks with compositional preferences and tasks with compositional goals, hypernetworks can discover modular policies that generalize compositionally.
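To make the setting concrete, a minimal sketch of a linear hypernetwork with a modular teacher/student flavour (a numpy illustration under our own simplifications, not the paper's architecture):

    import numpy as np

    class LinearHypernetwork:
        """Task weights are a linear combination of per-module parameter matrices,
        selected by a task's module-combination code z."""
        def __init__(self, n_modules, d_in, d_out, seed=None):
            rng = np.random.default_rng(seed)
            self.modules = rng.normal(size=(n_modules, d_out, d_in))  # one weight matrix per module

        def weights(self, z):
            # z: (n_modules,) composition code, e.g. a sparse mixture of modules
            return np.tensordot(z, self.modules, axes=1)              # -> (d_out, d_in)

        def forward(self, z, x):
            return self.weights(z) @ x                                # task-conditioned linear map

Identification then asks whether a student of this form, trained only on demonstrations from a modular teacher, recovers the teacher's modules up to a linear transformation.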
In large-scale systems, there are fundamental challenges when centralised techniques are used for task allocation: the number of interactions is limited by resource constraints such as those on computation, storage, and network communication. Scalability can be increased by implementing the system as a distributed task-allocation system, sharing tasks across many agents; however, this in turn increases the resource cost of communication and synchronisation, which is itself difficult to scale. In this paper we present four algorithms to address these problems. In combination, these algorithms enable each agent to improve its task allocation strategy through reinforcement learning, while adjusting how much it explores the system in response to how optimal it believes its current strategy to be, given its past experience. We focus on distributed agent systems in which resource usage limits constrain the agents' behaviours, restricting them to local rather than system-wide knowledge. We evaluate these algorithms in a simulated environment in which agents are given a task composed of multiple subtasks that must be allocated to other agents, with differing capabilities, which then carry out those tasks. We also simulate real-life system effects such as networking instability. Our solution is shown to solve the task allocation problem to within 6.7% of the theoretical optimum for the system configurations considered. It provides 5x better performance recovery than approaches without knowledge retention when system connectivity is impacted, and is tested on systems of up to 100 agents with less than a 9% impact on the algorithms' performance.
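As a schematic of the exploration idea described above (our own simplified Python sketch, not the paper's four algorithms): an agent keeps value estimates for candidate allocation targets and scales its exploration rate by how little evidence underlies its current strategy.

    import numpy as np

    class TaskAllocationAgent:
        """Illustrative confidence-scaled epsilon-greedy allocation."""
        def __init__(self, n_targets, lr=0.1, seed=None):
            self.q = np.zeros(n_targets)       # estimated value of allocating to each target agent
            self.counts = np.zeros(n_targets)  # how often each target has been tried
            self.lr = lr
            self.rng = np.random.default_rng(seed)

        def epsilon(self):
            # explore more while the value estimates rest on few observations
            return 1.0 / (1.0 + self.counts.sum() / len(self.q))

        def choose(self):
            if self.rng.random() < self.epsilon():
                return int(self.rng.integers(len(self.q)))
            return int(np.argmax(self.q))

        def update(self, target, reward):
            self.counts[target] += 1
            self.q[target] += self.lr * (reward - self.q[target])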