Linear mixed models are commonly used in analyzing stepped-wedge cluster randomized trials (SW-CRTs). A key consideration in analyzing an SW-CRT is accounting for the potentially complex correlation structure, which can be achieved by specifying a random-effects structure. Common random-effects structures for an SW-CRT include the random intercept, random cluster-by-period, and discrete-time decay structures. Recently, more complex structures, such as the random intervention structure, have been proposed. In practice, specifying appropriate random effects can be challenging. Robust variance estimators (RVEs) may be applied to linear mixed models to provide consistent estimators of the standard errors of fixed-effect parameters in the presence of random-effects misspecification. However, there has been no empirical investigation of RVEs for SW-CRTs. In this paper, we first review five RVEs (both standard and small-sample bias-corrected) that are available for linear mixed models. We then describe a comprehensive simulation study examining the performance of these RVEs for SW-CRTs with a continuous outcome under different data generators. For each data generator, we investigate whether using an RVE with either the random intercept model or the random cluster-by-period model suffices to provide valid statistical inference for fixed-effect parameters when these working models are misspecified. Our results indicate that the random intercept and random cluster-by-period models with RVEs performed similarly. The CR3 estimator, coupled with a degrees-of-freedom correction equal to the number of clusters minus two, consistently gave the best coverage, but could be slightly anti-conservative when the number of clusters was below 16. We summarize the implications of our results for the linear mixed model analysis of SW-CRTs in practice.
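As a minimal sketch of the setting, the snippet below simulates an SW-CRT under a random-intercept data generator and fits the random-intercept working model with statsmodels' MixedLM. The design constants (12 clusters, 5 periods, 10 subjects per cluster-period) and effect sizes are illustrative assumptions; the RVEs studied in the paper would replace the model-based standard error reported here.

```python
# Minimal sketch (illustrative constants, not the paper's setup): a
# stepped-wedge trial with a standard staggered rollout, analyzed with
# the random-intercept working model.  The printed SE is model-based;
# the paper's RVEs (e.g., CR3 with df correction) would replace it.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
I, T, m = 12, 5, 10                              # clusters, periods, subjects/cell
step = np.repeat(np.arange(1, T), I // (T - 1))  # period at which each cluster crosses over

rows = []
for i in range(I):
    b_i = rng.normal(0, 0.5)                     # random cluster intercept
    for t in range(T):
        x = int(t >= step[i])                    # treatment indicator
        for _ in range(m):
            y = 1.0 + 0.2 * t + 0.3 * x + b_i + rng.normal(0, 1.0)
            rows.append((i, t, x, y))
df = pd.DataFrame(rows, columns=["cluster", "period", "trt", "y"])

fit = smf.mixedlm("y ~ C(period) + trt", df, groups="cluster").fit()
print(fit.params["trt"], fit.bse["trt"])         # estimate and model-based SE
```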
Sequential models, such as Recurrent Neural Networks and Neural Ordinary Differential Equations, have long suffered from slow training due to their inherently sequential nature. This bottleneck has persisted for many years because sequential models were widely thought to be impossible to parallelize. We challenge this long-held belief with a parallel algorithm that accelerates GPU evaluation of sequential models by up to three orders of magnitude without compromising output accuracy. The algorithm requires no special structure in the sequential model's architecture, making it applicable to a wide range of architectures. With our method, training sequential models can be more than 10 times faster than with the conventional sequential approach, without any meaningful difference in the training results. Leveraging this accelerated training, we discovered the efficacy of the Gated Recurrent Unit on a long time-series classification problem with 17k time samples. By overcoming the training bottleneck, our work serves as a first step toward unlocking the potential of non-linear sequential models for long-sequence problems.
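The paper's algorithm is not reproduced here; the toy sketch below only illustrates the underlying idea of recasting sequential evaluation as a parallelizable fixed-point iteration over the whole trajectory (a plain Jacobi-style sweep in NumPy, with a contractive cell of our own choosing).

```python
# Toy illustration (not the paper's algorithm): evaluate the recurrence
# h[t] = f(h[t-1], x[t]) by updating *all* time steps at once and
# iterating to a fixed point, rather than stepping through t.  The cell
# here is contractive, so the sweeps converge.
import numpy as np

def f(h_prev, x):                        # simple contractive cell (ours)
    return np.tanh(0.5 * h_prev + x)

def parallel_eval(x, h0, iters=50, tol=1e-10):
    T = len(x)
    h = np.zeros(T)                      # initial guess for whole trajectory
    for _ in range(iters):               # each sweep updates all t in parallel
        h_prev = np.concatenate(([h0], h[:-1]))
        h_new = f(h_prev, x)
        done = np.max(np.abs(h_new - h)) < tol
        h = h_new
        if done:
            break
    return h

x = np.random.default_rng(1).normal(size=200)
h_seq = [0.0]
for xt in x:                             # reference sequential evaluation
    h_seq.append(f(h_seq[-1], xt))
assert np.allclose(parallel_eval(x, 0.0), h_seq[1:], atol=1e-8)
```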
Mixed Hamming packings are considered: the maximal cardinality of a mixed code with a given minimum Hamming distance between codewords is addressed via mixed integer programming models. Adopting the concept of a contact graph from classical continuous sphere packing problems, a reduction technique for the models is introduced that enables their efficient solution. Several best known upper bounds are improved, and some of them are shown to be sharp.
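A toy version of this kind of MIP formulation is sketched below: our own maximum-independent-set model over a tiny mixed alphabet, not the paper's (reduced) models, using PuLP with its bundled CBC solver.

```python
# Toy MIP sketch (our formulation, not the paper's): the maximum number
# of codewords over the mixed alphabet Z_2 x Z_3 x Z_3 with pairwise
# Hamming distance >= d, as a maximum independent set in the
# "distance < d" conflict graph.  Requires `pip install pulp`.
import itertools
import pulp

alphabets, d = [2, 3, 3], 2
words = list(itertools.product(*(range(q) for q in alphabets)))
dist = lambda u, v: sum(a != b for a, b in zip(u, v))

prob = pulp.LpProblem("mixed_hamming_packing", pulp.LpMaximize)
x = {w: pulp.LpVariable(f"x{i}", cat="Binary") for i, w in enumerate(words)}
prob += pulp.lpSum(x.values())           # maximize cardinality
for u, v in itertools.combinations(words, 2):
    if dist(u, v) < d:                   # conflicting pair: pick at most one
        prob += x[u] + x[v] <= 1
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("max codewords:", int(pulp.value(prob.objective)))
```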
In this paper, we address the problem of modeling data with periodic autoregressive (PAR) time series with additive noise. In most cases, data are processed under a noise-free model (i.e., without additive noise), which is not a realistic assumption in practice. The first two steps of PAR model identification are order selection and period estimation, so these issues are our main focus. Finally, the model should be validated, so we propose a procedure for analyzing the residuals, which are treated here as multidimensional vectors. Both order and period selection, as well as model validation, are addressed using the characteristic function (CF) of the residual series. The CF is used to obtain the probability density function, which is then utilized in the information criterion and in testing the distribution of the residuals. A procedure for estimating the coefficients is also needed to complete the PAR model analysis; however, this issue is only mentioned here, as it is a separate task under parallel investigation. The presented methodology can be regarded as a general framework for analyzing data with periodically non-stationary characteristics disturbed by finite-variance external noise. The original contributions are the selection of the optimal model order, the identification of the period, and the analysis of the residuals. These findings were inspired by our previous work on machine condition monitoring using PAR modeling.
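The basic ingredient of the CF-based criteria above is the empirical characteristic function of the residual series; a minimal sketch of our own (with Gaussian stand-in residuals) is given below as a crude residual check.

```python
# Minimal sketch (ours): the empirical characteristic function (ECF) of
# a residual series, compared against the CF of a fitted Gaussian.
# The residuals here are synthetic stand-ins for PAR residuals.
import numpy as np

def ecf(residuals, t):
    """phi_hat(t) = mean(exp(i * t * r)), evaluated on a grid t."""
    return np.exp(1j * np.outer(t, residuals)).mean(axis=1)

rng = np.random.default_rng(2)
r = rng.normal(0.0, 1.0, size=500)       # stand-in residual series
t = np.linspace(-3, 3, 121)
phi_hat = ecf(r, t)
phi_gauss = np.exp(1j * t * r.mean() - 0.5 * (t * r.std()) ** 2)
print("max CF discrepancy:", np.abs(phi_hat - phi_gauss).max())
```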
The presence of faulty or underactuated manipulators can disrupt end-effector formation keeping in a team of manipulators. Focusing on two-link planar manipulators, we investigate this end-effector formation-keeping problem for a mix of fully- and under-actuated manipulators with flexible joints. The underactuated manipulators may be active-passive (AP) manipulators, passive-active (PA) manipulators, or a combination thereof. We propose distributed control laws for the different types of manipulators to achieve and maintain the desired formation shape of the end-effectors. This is achieved by assigning virtual springs to the end-effectors of the fully-actuated manipulators and to the virtual end-effectors of the under-actuated ones. We further study the set of all desired and reachable shapes for the networked manipulators' end-effectors. Finally, we validate our analysis via numerical simulations.
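The sketch below is our own simplified illustration of the virtual-spring idea for a single rigid-joint, fully-actuated two-link arm (Jacobian-transpose torques); the paper's distributed laws additionally handle flexible joints and AP/PA underactuation, which this toy omits.

```python
# Illustrative sketch (ours, simplified): a virtual spring pulls a rigid
# two-link planar arm's end-effector toward its assigned formation
# position via Jacobian-transpose torques tau = J^T F.
import numpy as np

l1 = l2 = 1.0
def fk(q):                               # end-effector position
    return np.array([l1*np.cos(q[0]) + l2*np.cos(q[0]+q[1]),
                     l1*np.sin(q[0]) + l2*np.sin(q[0]+q[1])])
def jac(q):                              # 2x2 geometric Jacobian
    s1, s12 = np.sin(q[0]), np.sin(q[0]+q[1])
    c1, c12 = np.cos(q[0]), np.cos(q[0]+q[1])
    return np.array([[-l1*s1 - l2*s12, -l2*s12],
                     [ l1*c1 + l2*c12,  l2*c12]])

q, dq = np.array([0.3, 0.8]), np.zeros(2)
p_des, k, damp, dt = np.array([1.2, 0.9]), 5.0, 2.0, 1e-3
for _ in range(20000):                   # unit-inertia joint dynamics
    F = k * (p_des - fk(q))              # virtual spring force
    tau = jac(q).T @ F - damp * dq       # spring torque plus damping
    dq += dt * tau
    q += dt * dq
print(np.linalg.norm(fk(q) - p_des))     # ~0: assigned point reached
```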
B-spline-based hidden Markov models (HMMs) employ B-splines to specify the emission distributions, offering a more flexible approach to modelling data than conventional parametric HMMs. We introduce a Bayesian framework for inference that enables the simultaneous estimation of all unknown model parameters, including the number of states. A parsimonious knot configuration for the B-splines is identified using a trans-dimensional Markov chain sampling algorithm, while model selection regarding the number of states can be performed based on the marginal likelihood within a parallel sampling framework. Through extensive simulation studies, we demonstrate the superiority of our methodology over alternative approaches, as well as its robustness and scalability. We illustrate the explorative use of our methods on animal activity data from whitetip sharks. The flexibility of our Bayesian approach also facilitates the incorporation of more realistic assumptions, which we demonstrate by developing a novel hierarchical conditional HMM to analyse human activity for circadian and sleep modelling.
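The trans-dimensional sampler is beyond a snippet, but the likelihood evaluation underlying it is not; below is a minimal sketch of our own of the scaled forward algorithm, with the emission densities left as generic callables (in the model above they would be B-spline-based densities).

```python
# Minimal sketch (ours): HMM log-likelihood via the forward algorithm
# in log space.  `emission_pdfs` stands in for the B-spline-based
# emission densities of the model above; Gaussians are used here.
import numpy as np
from scipy.stats import norm

def hmm_loglik(obs, log_pi, log_A, emission_pdfs):
    K = len(log_pi)
    log_B = np.array([[np.log(emission_pdfs[k](x)) for k in range(K)]
                      for x in obs])                 # T x K emission logs
    alpha = log_pi + log_B[0]
    for t in range(1, len(obs)):                     # forward recursion
        m = alpha.max()
        alpha = m + np.log(np.exp(alpha - m) @ np.exp(log_A)) + log_B[t]
    return np.logaddexp.reduce(alpha)

pdfs = [norm(-2, 1).pdf, norm(2, 1).pdf]             # stand-in emissions
log_pi = np.log([0.5, 0.5])
log_A = np.log([[0.9, 0.1], [0.2, 0.8]])             # row-stochastic transitions
obs = np.concatenate([norm(-2, 1).rvs(50, random_state=3),
                      norm(2, 1).rvs(50, random_state=4)])
print(hmm_loglik(obs, log_pi, log_A, pdfs))
```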
We consider the problem of testing the fit of a sample of items from many categories to the uniform distribution over the categories. As the class of alternative hypotheses, we consider the removal of an $\ell_p$ ball of radius $\epsilon$ around the uniform rate sequence, for $p \leq 2$. When the number of samples $n$ and the number of categories $N$ go to infinity while $\epsilon$ goes to zero, the minimax risk $R_\epsilon^*$ of testing based on the sample's histogram (number of absent categories, singletons, collisions, ...) asymptotes to $2\Phi(-n N^{2-2/p} \epsilon^2/\sqrt{8N})$, where $\Phi(x)$ is the normal CDF. This characterization allows the many estimators previously proposed for this problem to be compared at the level of constants, rather than at the level of the rates of convergence of their risks. The minimax test relies mostly on collisions when $n/N$ is small, but otherwise behaves like the chi-squared test. Empirical studies over a range of problem parameters show that this approximation is accurate in finite samples and that our test is significantly better than the chi-squared test or a test that uses only collisions. Our analysis relies on the asymptotic normality of the histogram ordinates, the equivalence between the minimax setting and a Bayesian setting, and a characterization of the least favorable prior via the reduction of a multi-dimensional optimization problem to a one-dimensional one.
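For concreteness, the sketch below (ours) computes the two classical histogram-based statistics mentioned above, collisions and chi-squared, and numerically evaluates the displayed asymptotic risk formula at illustrative parameter values.

```python
# Sketch (ours): histogram-based test statistics and a numerical
# evaluation of the asymptotic minimax risk
# 2 * Phi(-n * N^(2 - 2/p) * eps^2 / sqrt(8N)) from the abstract.
import numpy as np
from scipy.stats import norm

def histogram_stats(sample, N):
    counts = np.bincount(sample, minlength=N)
    collisions = (counts * (counts - 1) // 2).sum()   # same-bin pairs
    expected = len(sample) / N
    chisq = ((counts - expected) ** 2 / expected).sum()
    return collisions, chisq

n, N, p, eps = 5000, 10000, 2.0, 0.002                # illustrative values
risk = 2 * norm.cdf(-n * N ** (2 - 2 / p) * eps ** 2 / np.sqrt(8 * N))
print("asymptotic minimax risk:", risk)

rng = np.random.default_rng(5)
print(histogram_stats(rng.integers(0, N, size=n), N))  # uniform sample
```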
Brittle solids are often toughened by adding a second-phase material. This practice often results in composites with material heterogeneities on the mesoscale: large compared to the scale of the process zone but small compared to that of the application. The specific configuration (both geometrical and mechanical) of this mesoscale heterogeneity is generally recognized as important in determining crack propagation and, consequently, the (effective) toughness of the composite. Here, we systematically investigate how dynamic crack propagation is affected by mesoscale heterogeneities taking the form of an array of inclusions. Using a variational phase-field approach, we compute the apparent crack speed and fracture energy dissipation rate to compare crack propagation under Mode-I loading across different configurations of these inclusions. When the volume fraction of inclusions is fixed, matching the inclusion size to the size of the K-dominance zone gives rise to the best toughening outcome. Conversely, when the volume fraction of inclusions is varied, a lower volume fraction can lead to a better toughening outcome if and only if the inclusion size approaches the size of the K-dominance zone from above. Since the size of the K-dominance zone can be estimated \textit{a priori} given an understanding of the application scenario and material availability, we can, in principle, exploit this estimate to design a material's mesoscale heterogeneity that optimally balances the tradeoff between strength and toughness. This paves the way for realizing functional (meta-)materials against crack propagation in extreme environments.
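As a purely hypothetical post-processing sketch (ours, not the paper's pipeline), the quantities compared above can be extracted from crack-tip position and dissipated-energy histories as follows; the arrays here are made-up stand-ins for phase-field output.

```python
# Hypothetical post-processing sketch (ours): given crack-tip position
# a(t) and dissipated fracture energy E(t) from a phase-field run, the
# apparent crack speed is da/dt and the energy dissipation rate per
# unit crack advance is dE/da.  All data below are synthetic.
import numpy as np

t = np.linspace(0.0, 1.0, 201)           # stand-in simulation times
a = 0.1 + 0.8 * t ** 1.2                 # made-up crack-tip position
E = 2.5 * a                              # made-up dissipated energy

v = np.gradient(a, t)                    # apparent crack speed
G = np.gradient(E, t) / np.clip(v, 1e-12, None)  # dissipation per unit advance
print(v.mean(), G.mean())                # G recovers the constant 2.5
```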
Temporal analysis of products (TAP) reactors enable experiments that probe numerous kinetic processes within a single set of experimental data through variations in pulse intensity, delay, or temperature. Selecting additional TAP experiments often involves an arbitrary choice of reaction conditions or reliance on chemical intuition. To make experiment selection in TAP more robust, we explore the efficacy of model-based design of experiments (MBDoE) for precision in TAP reactor kinetic modeling. We successfully applied this approach to a case study of synthetic oxidative propane dehydrogenation (OPDH) involving pulses of propane and oxygen. We found that the experiments identified as optimal by MBDoE for precision generally reduced parameter uncertainties to a greater degree than alternative experiments. The performance of MBDoE for model divergence was also explored for OPDH, with the relevant active sites (catalyst structure) unknown. An experiment that maximized the divergence between the three proposed mechanisms was identified and led to clear mechanism discrimination; however, re-optimization of the kinetic parameters eliminated the ability to discriminate. The findings yield insight into the prospects and limitations of MBDoE for TAP and other transient kinetic experiments.
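The generic form of the MBDoE-for-precision criterion is easy to state; the sketch below (ours, with random stand-in sensitivity matrices) selects among candidate experiments by D-optimality of the accumulated Fisher information.

```python
# Sketch (ours): generic MBDoE-for-precision selection.  Each candidate
# experiment contributes a sensitivity matrix S = d(model output)/d(theta);
# under Gaussian noise its Fisher information is S^T S / sigma^2, and the
# D-optimal choice maximizes the determinant of the accumulated FIM.
import numpy as np

def d_optimal_choice(sensitivities, sigma2=1.0, fim_prior=None):
    """sensitivities: list of (n_obs x n_params) arrays, one per candidate."""
    n_par = sensitivities[0].shape[1]
    fim0 = np.zeros((n_par, n_par)) if fim_prior is None else fim_prior
    scores = [np.linalg.slogdet(fim0 + S.T @ S / sigma2)[1]
              for S in sensitivities]    # log det of updated FIM
    return int(np.argmax(scores))

rng = np.random.default_rng(6)
candidates = [rng.normal(size=(50, 4)) * s for s in (0.5, 1.0, 2.0)]
print("best candidate:", d_optimal_choice(candidates))  # most informative one
```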
We present a joint Speech and Language Model (SLM), a multitask, multilingual, dual-modal model that takes advantage of pretrained foundational speech and language models. SLM freezes the pretrained foundation models to maximally preserve their capabilities, training only a simple adapter comprising just 1\% (156M) of the foundation models' parameters. This adaptation not only enables SLM to achieve strong performance on conventional tasks such as speech recognition (ASR) and speech translation (AST), but also introduces the novel capability of zero-shot instruction following for more diverse tasks: given a speech input and a text instruction, SLM is able to perform unseen generation tasks, including contextual-biasing ASR using real-time context, dialog generation, speech continuation, and question answering. Our approach demonstrates that the representational gap between pretrained speech and language models may be narrower than one would expect and can be bridged by a simple adaptation mechanism. As a result, SLM is not only efficient to train but also inherits the strong capabilities already acquired by foundation models of different modalities.
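The general shape of this adapter pattern is sketched below in PyTorch; the module names, sizes, and stand-in encoder/LM are our own placeholders, not SLM's actual components.

```python
# Hypothetical sketch (ours) of the adapter pattern described above:
# both pretrained models are frozen, and only a small bridge mapping
# speech-encoder features into the LM's embedding space is trained.
# `speech_encoder` and `language_model` are toy stand-ins, not SLM's.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    def __init__(self, d_speech, d_lm):
        super().__init__()
        self.bridge = nn.Sequential(nn.Linear(d_speech, d_lm),
                                    nn.GELU(), nn.Linear(d_lm, d_lm))
    def forward(self, speech_feats):      # (batch, frames, d_speech)
        return self.bridge(speech_feats)  # pseudo text embeddings

speech_encoder = nn.GRU(80, 256, batch_first=True)   # stand-in encoder
language_model = nn.Linear(512, 1000)                # stand-in LM head
for m in (speech_encoder, language_model):
    m.requires_grad_(False)              # freeze foundation models

adapter = Adapter(256, 512)              # only these params are trained
opt = torch.optim.AdamW(adapter.parameters(), lr=1e-4)
feats, _ = speech_encoder(torch.randn(2, 100, 80))
logits = language_model(adapter(feats))
print(logits.shape, sum(p.numel() for p in adapter.parameters()))
```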
Transient errors arising from the dynamic NISQ noise landscape are challenging to comprehend and are especially detrimental to applications that are iterative and/or long-running; their timely mitigation is therefore important for quantum advantage in real-world applications. The most prominent examples of iterative, long-running quantum applications are variational quantum algorithms (VQAs). In each iteration, the VQA's classical optimizer evaluates circuit candidates on an objective function and picks the best circuits toward achieving the application's target. Noise fluctuations can have a significant transient impact on the objective-function estimates of VQA iterations and tuning candidates. This can severely affect VQA tuning and, by extension, its accuracy and convergence. This paper proposes QISMET: Quantum Iteration Skipping to Mitigate Error Transients, which navigates the dynamic noise landscape of VQAs. QISMET actively avoids instances of highly fluctuating noise that are predicted to have a significant transient error impact on specific VQA iterations. To achieve this, QISMET estimates the transient error in VQA iterations and designs a controller to keep the VQA tuning faithful to the transient-free scenario. In doing so, QISMET efficiently mitigates a large portion of the transient noise impact on VQAs, improving fidelity by 1.3-3x over a traditional VQA baseline and by 1.6-2.4x over alternative approaches, across different applications and machines. Further, to analyze the effects of transients in detail, this work also builds transient noise models for target VQA applications from observations of real-machine transients; these are then integrated with the Qiskit simulator.
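As a schematic sketch of the iteration-skipping idea (ours, not QISMET's actual controller or transient predictor), the toy loop below tracks a reference observable and skips optimizer updates whenever its drift from a calibrated value signals a noise transient.

```python
# Schematic sketch (ours, not QISMET's implementation): skip VQA
# optimizer updates whenever a reference observable drifts from its
# calibrated value, signalling a noise transient.  All quantum parts
# are replaced by classical stand-ins.
import numpy as np

def run_vqa(objective, reference, calib_ref, steps=100, tol=0.05, lr=0.1):
    rng = np.random.default_rng(7)
    theta = rng.normal(size=4)
    for _ in range(steps):
        if abs(reference() - calib_ref) > tol:
            continue                     # transient detected: skip update
        g = rng.normal(size=4)           # random probe direction
        delta = (objective(theta + 0.01 * g) - objective(theta)) / 0.01
        theta -= lr * delta * g          # SPSA-style gradient step
    return theta

# Toy plumbing: noisy quadratic objective, bursty reference observable.
state = {"t": 0}
def reference():
    state["t"] += 1
    return 1.0 + (0.2 if (state["t"] // 10) % 2 else 0.0)  # periodic transient
objective = lambda th: float(np.sum(th ** 2))
print(np.linalg.norm(run_vqa(objective, reference, calib_ref=1.0)))
```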