Stepped wedge cluster-randomized trial (CRT) designs randomize clusters of individuals to intervention sequences, ensuring that every cluster transitions from control to the intervention under study by the end of the study period. The analysis of stepped wedge CRTs is usually more complex than that of parallel-arm CRTs due to potential secular trends that result in changing intra-cluster and period-cluster correlations over time. A further challenge in the analysis of closed-cohort stepped wedge CRTs, which follow the groups of individuals enrolled in each period longitudinally, is the occurrence of dropout. This is particularly problematic in studies of individuals at high risk of mortality, where death causes non-ignorable missing outcomes. If not appropriately addressed, missing outcomes from death erode statistical power, at best, and bias treatment effect estimates, at worst. Joint longitudinal-survival models can accommodate informative dropout and missingness patterns in longitudinal studies. Specifically, within this framework the dropout process is modeled directly via a time-to-event submodel together with the longitudinal outcome of interest, and the two submodels are linked through a variety of possible association structures. This work extends linear mixed-effects models by jointly modeling the dropout process to accommodate informative missing outcome data in closed-cohort stepped wedge CRTs. We focus on constant intervention and general time-on-treatment effect parametrizations for the longitudinal submodel and study the performance of the proposed methodology using Monte Carlo simulation under several data-generating scenarios. We illustrate the joint modeling methodology in practice by reanalyzing the `Frail Older Adults: Care in Transition' (ACT) trial, a stepped wedge CRT comparing a multifaceted geriatric care model with usual care in the Netherlands.
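A minimal sketch of such a shared-random-effect data-generating mechanism (illustrative only; the exponential dropout hazard and all parameter values are hypothetical choices, not those of the trial or the paper):

```python
# Sketch: closed-cohort stepped wedge CRT with informative dropout linked to the
# longitudinal outcome through a shared subject-level random effect.
# All parameters below are hypothetical illustrations.
import numpy as np

rng = np.random.default_rng(0)
n_clusters, n_periods, n_subj = 8, 5, 20           # e.g. 4 sequences x 2 clusters
steps = np.repeat(np.arange(1, n_periods), 2)       # period in which each cluster crosses over
delta, beta_t, sigma_c, sigma_b, sigma_e = 0.5, 0.2, 0.3, 0.7, 1.0

records = []
for c in range(n_clusters):
    u_c = rng.normal(0, sigma_c)                    # cluster random intercept
    for i in range(n_subj):
        b_i = rng.normal(0, sigma_b)                # subject random intercept (shared with dropout)
        lam = 0.1 * np.exp(0.8 * b_i)               # dropout hazard increases with b_i (informative)
        t_drop = rng.exponential(1.0 / lam)
        for j in range(n_periods):
            if j > t_drop:                          # subject has died / dropped out
                break
            treated = int(j >= steps[c])            # constant intervention-effect parametrization
            y = beta_t * j + delta * treated + u_c + b_i + rng.normal(0, sigma_e)
            records.append((c, i, j, treated, y))

print(f"{len(records)} observed outcomes out of {n_clusters * n_subj * n_periods} planned")
```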
The simulation of many complex phenomena in engineering and science requires solving expensive, high-dimensional systems of partial differential equations (PDEs). To circumvent this, reduced-order models (ROMs) have been developed to speed up computations. However, when the governing equations are unknown or only partially known, ROMs typically lack interpretability and reliability of the predicted solutions. In this work we present a data-driven, non-intrusive framework for building ROMs where the latent variables and dynamics are identified in an interpretable manner and uncertainty is quantified. Starting from a limited amount of high-dimensional, noisy data, the proposed framework constructs an efficient ROM by leveraging variational autoencoders for dimensionality reduction along with a newly introduced, variational version of sparse identification of nonlinear dynamics (SINDy), which we refer to as Variational Identification of Nonlinear Dynamics (VINDy). In detail, the method consists of Variational Encoding of Noisy Inputs (VENI) to identify the distribution of reduced coordinates. Simultaneously, we learn the distribution of the coefficients of a pre-determined set of candidate functions by VINDy. Once trained offline, the identified model can be queried for new parameter instances and new initial conditions to compute the corresponding solutions over the full time range. The probabilistic setup enables uncertainty quantification, as the online testing consists of Variational Inference naturally providing Certainty Intervals (VICI). In this work we showcase the effectiveness of the newly proposed VINDy method in identifying an interpretable and accurate dynamical system for the R\"ossler system with different noise intensities and sources. Then the performance of the overall method - named VENI, VINDy, VICI - is tested on PDE benchmarks including structural mechanics and fluid dynamics.
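As a point of reference, a minimal sketch of the deterministic SINDy baseline that VINDy generalizes (not the variational formulation itself; the candidate library, threshold and noise level are arbitrary choices), applied to the R\"ossler system:

```python
# Sketch: recover Rössler dynamics from noisy trajectory data with
# sequentially thresholded least squares (classic SINDy, not VINDy).
import numpy as np
from scipy.integrate import solve_ivp

def rossler(t, s, a=0.2, b=0.2, c=5.7):
    x, y, z = s
    return [-y - z, x + a * y, b + z * (x - c)]

t = np.linspace(0, 50, 5000)
sol = solve_ivp(rossler, (0, 50), [1.0, 1.0, 1.0], t_eval=t)
X = sol.y.T + 0.001 * np.random.default_rng(0).normal(size=sol.y.T.shape)
dX = np.gradient(X, t, axis=0)                       # numerical time derivatives

# Candidate library: [1, x, y, z, x^2, xy, xz, y^2, yz, z^2]
x, y, z = X.T
Theta = np.column_stack([np.ones_like(x), x, y, z, x*x, x*y, x*z, y*y, y*z, z*z])

Xi = np.linalg.lstsq(Theta, dX, rcond=None)[0]       # initial dense fit
for _ in range(10):                                  # thresholded refinement
    Xi[np.abs(Xi) < 0.05] = 0.0
    for k in range(3):
        idx = np.abs(Xi[:, k]) > 0
        Xi[idx, k] = np.linalg.lstsq(Theta[:, idx], dX[:, k], rcond=None)[0]

print(np.round(Xi, 2))   # nonzero pattern should roughly match the Rössler right-hand side
```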
Many state-of-the-art models trained on long-range sequences, for example S4, S5 or LRU, are made of sequential blocks combining State-Space Models (SSMs) with neural networks. In this paper we provide a PAC bound that holds for these kinds of architectures with stable SSM blocks and does not depend on the length of the input sequence. Imposing stability of the SSM blocks is standard practice in the literature, and it is known to help performance. Our results provide a theoretical justification for the use of stable SSM blocks, as the proposed PAC bound decreases as the degree of stability of the SSM blocks increases.
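For illustration, a small sketch of a stable, diagonally parametrized SSM block in the style of LRU-type models (an assumption-laden toy, not the paper's construction); the eigenvalue magnitudes are kept strictly below one by construction, which is the stability property the bound exploits:

```python
# Sketch: a diagonal discrete-time SSM block whose state matrix is stable
# by parametrization, i.e. all eigenvalue magnitudes lie in (0, 1).
import numpy as np

rng = np.random.default_rng(0)
d_state, d_in, seq_len = 16, 4, 200

nu = rng.normal(size=d_state)                        # unconstrained parameters
radius = np.exp(-np.exp(nu))                         # maps to (0, 1): degree of stability
theta = rng.uniform(0, 2 * np.pi, size=d_state)
A = radius * np.exp(1j * theta)                      # diagonal, |A_k| < 1 by construction
B = rng.normal(size=(d_state, d_in)) + 1j * rng.normal(size=(d_state, d_in))
C = rng.normal(size=(d_in, d_state)) + 1j * rng.normal(size=(d_in, d_state))

u = rng.normal(size=(seq_len, d_in))                 # input sequence
x = np.zeros(d_state, dtype=complex)
ys = []
for t in range(seq_len):
    x = A * x + B @ u[t]                             # x_{t+1} = A x_t + B u_t
    ys.append((C @ x).real)                          # y_t = Re(C x_t)

print("max |eigenvalue| =", np.abs(A).max())         # < 1, independent of sequence length
```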
Non-conformance decision-making processes in the high-precision manufacturing of engineering structures are often delayed by the numerical simulations needed to analyze defective parts and assemblies. Interfaces between the parts of an assembly can only be simulated by modeling contact. Thus, efficient parametric ROMs are necessary for performing contact mechanics simulations in near real-time scenarios. Typical strategies for reducing the cost of contact models use low-rank approximations, assuming the existence of a low-dimensional subspace for the displacement and a low-dimensional non-negative subcone for the contact pressure. However, the contact pressure is local in nature, as the position of contact can vary with parameters such as loading or geometry. The adequacy of low-rank approximations for contact mechanics is investigated and alternative routes based on sparse regression techniques are explored. It is shown that this local nature leads to a loss of linear separability of the contact pressure, thereby limiting the accuracy of low-rank methods. The applicability of the low-rank assumption to contact pressure is analyzed using three different criteria: compactness, generalization and specificity. Subsequently, the use of over-complete dictionaries with a large number of snapshots is investigated to mitigate the inseparability issues. Two strategies are devised: a greedy active-set method, in which the dictionary elements are selected greedily, and a convex hull approximation method that eliminates the need to explicitly enforce non-penetration constraints in convex problems. Lastly, Dynamic Time Warping is studied as a possible non-linear interpolation method that permits exploring the non-linear manifold, synthesising snapshots not computed in the training set at low complexity and thereby reducing the offline costs.
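A toy sketch of the convex-hull idea on synthetic pressure profiles (not the authors' implementation; the snapshot model and solver choice are illustrative assumptions): a convex combination of non-negative training snapshots is automatically non-negative, so no explicit non-penetration constraint is needed:

```python
# Sketch: approximate an unseen, localized contact-pressure snapshot as a
# convex combination of training snapshots (dictionary columns).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_dofs, n_train = 200, 15
grid = np.linspace(0.0, 1.0, n_dofs)

def pressure(center):                # synthetic localized contact-pressure profile
    return np.maximum(0.0, 1.0 - 40.0 * (grid - center) ** 2)

D = np.column_stack([pressure(c) for c in np.linspace(0.3, 0.7, n_train)])  # dictionary
p_new = pressure(0.52)                                                       # unseen case

def residual(w):
    return 0.5 * np.sum((D @ w - p_new) ** 2)

w0 = np.full(n_train, 1.0 / n_train)
res = minimize(residual, w0, method="SLSQP",
               bounds=[(0.0, 1.0)] * n_train,
               constraints=[{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}])

p_hat = D @ res.x
print("relative error:", np.linalg.norm(p_hat - p_new) / np.linalg.norm(p_new))
print("min of reconstruction (stays >= 0):", p_hat.min())
```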
This article provides a reduced-order modelling framework for turbulent compressible flows discretized with finite volume approaches. The basic idea behind this work is the construction of a reduced-order model capable of providing solutions that closely match the high-fidelity flow fields. Full-order solutions are often obtained with segregated solvers, in which the solution variables are solved for one after another, employing slightly modified conservation laws so that they can be decoupled and solved one at a time. Classical reduction architectures, on the contrary, rely on the Galerkin projection of the complete Navier-Stokes system all at once, causing a mild discrepancy with respect to the high-order solutions. This article relies on segregated reduced-order algorithms for the resolution of turbulent and compressible flows in the presence of physical and geometrical parameters. At the full-order level, turbulence is modeled using an eddy viscosity approach. Since a variety of turbulence models exist for approximating this supplementary viscosity, one of the aims of this work is to provide a reduced-order model that is independent of this selection. This goal is reached by applying hybrid methods, in which the Navier-Stokes equations are projected in a standard way while the viscosity field is approximated by data-driven interpolation methods or by evaluating a properly trained neural network. By exploiting these expedients, it is possible to predict accurate solutions with respect to the full-order problems characterized by high Reynolds numbers and elevated Mach numbers.
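A minimal sketch of the data-driven viscosity ingredient under simplifying assumptions (synthetic one-dimensional fields rather than CFD data): POD of eddy-viscosity snapshots followed by RBF interpolation of the modal coefficients over a parameter:

```python
# Sketch: POD via thin SVD of "eddy viscosity" snapshots, then RBF
# interpolation of the modal coefficients as a function of a parameter mu.
import numpy as np
from scipy.interpolate import RBFInterpolator

x = np.linspace(0.0, 1.0, 300)
mus = np.linspace(1.0, 2.0, 12)[:, None]                      # training parameters

# Synthetic parametrized viscosity fields (placeholder for RANS output)
S = np.column_stack([np.exp(-mu * x) + 0.1 * np.sin(5 * mu * x) for mu in mus.ravel()])

U, s, _ = np.linalg.svd(S, full_matrices=False)               # POD modes
r = np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 0.9999) + 1
coeffs = U[:, :r].T @ S                                       # modal coefficients, shape (r, n_train)

interp = RBFInterpolator(mus, coeffs.T)                       # coefficients as a function of mu
mu_test = np.array([[1.37]])
nu_t_rom = U[:, :r] @ interp(mu_test).ravel()                 # reconstructed viscosity field
nu_t_ref = np.exp(-1.37 * x) + 0.1 * np.sin(5 * 1.37 * x)
print("r =", r, " rel. error =", np.linalg.norm(nu_t_rom - nu_t_ref) / np.linalg.norm(nu_t_ref))
```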
Identifiability of statistical models is a key notion in unsupervised representation learning. Recent work on nonlinear independent component analysis (ICA) employs auxiliary data and has established conditions for identifiability. This paper proposes a statistical model of two latent vectors with a single auxiliary variable, generalizing nonlinear ICA, and establishes various identifiability conditions. Unlike previous work, the two latent vectors in the proposed model can have arbitrary dimensions, and this property enables us to reveal an insightful dimensionality relation among the two latent vectors and the auxiliary data in the identifiability conditions. Furthermore, surprisingly, we prove that the indeterminacies of the proposed model are the same as those of \emph{linear} ICA under certain conditions: the elements of the latent vectors can be recovered up to permutation and scaling. Next, we apply the identifiability theory to a statistical model for graph data. As a result, one of the identifiability conditions carries an appealing implication: identifiability of the statistical model may depend on the maximum value of the link weights in the graph data. We then propose a practical method for identifiable graph embedding. Finally, we numerically demonstrate that the proposed method recovers the latent vectors well and that model identifiability clearly depends on the maximum value of the link weights, which supports the implication of our theoretical results.
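A generic numerical check of this kind of indeterminacy (not the paper's estimator): recovered latents are compared with the ground truth up to permutation and sign/scale via an assignment on absolute correlations:

```python
# Sketch: verify that recovered latents match the truth up to permutation
# and scaling, i.e. the linear-ICA-like indeterminacy discussed above.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
n, d = 2000, 4
Z = rng.laplace(size=(n, d))                       # ground-truth latent components
perm = rng.permutation(d)
scales = rng.uniform(0.5, 2.0, size=d) * rng.choice([-1, 1], size=d)
Z_hat = Z[:, perm] * scales                        # "recovered" latents: permuted and rescaled

C = np.abs(np.corrcoef(Z.T, Z_hat.T)[:d, d:])      # |correlation| between true and recovered
row, col = linear_sum_assignment(-C)               # best one-to-one matching
print("mean matched |corr| =", C[row, col].mean()) # ~1.0 => identified up to permutation/scale
```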
We present a GPU implementation of vertex-patch smoothers for higher-order finite element methods in two and three dimensions. Our analysis shows that they are memory bound not with respect to GPU DRAM, but with respect to the on-chip scratchpad memory. The multigrid operations are optimized through localization and reorganized local operations in on-chip memory, achieving minimal global data transfer and a conflict-free memory access pattern. Performance tests demonstrate that the optimized kernel is at least 2 times faster than a straightforward implementation for the Poisson problem, across various polynomial degrees in 2D and 3D, achieving up to 36% of peak performance in both single and double precision on an Nvidia A100 GPU.
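A CPU/NumPy sketch of the local-solve pattern behind such patch smoothers (a loose 1D analogue of a vertex patch; the paper's contribution is the GPU realization in on-chip memory, which this does not reproduce):

```python
# Sketch: damped additive overlapping-patch smoother on a 1D Poisson matrix.
# Each "patch" is the set of DoFs around one interior vertex; in practice
# such sweeps are used as the smoother inside a multigrid cycle.
import numpy as np

n = 63                                               # interior unknowns on the unit interval
h = 1.0 / (n + 1)
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
b = np.ones(n)
u = np.zeros(n)

omega = 0.5                                          # damping for the additive smoother
for sweep in range(10):
    r = b - A @ u
    du = np.zeros(n)
    for j in range(1, n - 1):                        # one overlapping patch per interior vertex
        idx = [j - 1, j, j + 1]                      # DoFs belonging to the patch
        du[idx] += np.linalg.solve(A[np.ix_(idx, idx)], r[idx])
    u += omega * du

print("relative residual after 10 sweeps:", np.linalg.norm(b - A @ u) / np.linalg.norm(b))
```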
This paper investigates the diffusion behavior of particles moving in stochastic flows under a structure-preserving scheme. We compute the effective diffusivity for normally diffusive random flows and establish the power law between the spatial and temporal variables for cases exhibiting anomalous diffusion. Taking a Lagrangian approach, we separate the corresponding stochastic differential equations (SDEs) into sub-problems and construct a one-step structure-preserving method to solve them. Then, via modified equation systems, we provide a convergence analysis for the calculation of the effective diffusivity and compare the structure-preserving scheme with the Euler-Maruyama scheme. We also provide an error estimate for the structure-preserving scheme in calculating the power law for a series of super-diffusive random flows. Finally, we compute the effective diffusivity and illustrate the anomalous diffusion phenomena for a series of 2D and 3D random fields.
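A sketch of the Lagrangian Monte Carlo setup using the Euler-Maruyama baseline (not the structure-preserving integrator of the paper; the flow, noise level and time horizon are arbitrary choices): the effective diffusivity is estimated from the long-time mean-squared displacement of tracers in a 2D cellular flow:

```python
# Sketch: estimate the effective diffusivity of passive tracers in a 2D
# Taylor-Green (cellular) flow via Euler-Maruyama time stepping.
import numpy as np

rng = np.random.default_rng(0)
D0, dt, n_steps, n_part = 0.1, 1e-2, 20000, 2000

X = rng.uniform(0, 2 * np.pi, size=(n_part, 2))     # initial positions
X0 = X.copy()

def velocity(X):
    x, y = X[:, 0], X[:, 1]
    return np.column_stack([np.sin(x) * np.cos(y), -np.cos(x) * np.sin(y)])

for _ in range(n_steps):
    X = X + velocity(X) * dt + np.sqrt(2 * D0 * dt) * rng.normal(size=X.shape)

T = n_steps * dt
msd = np.mean(np.sum((X - X0) ** 2, axis=1))
print("effective diffusivity estimate:", msd / (2 * 2 * T))  # MSD / (2 d T) with d = 2
```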
The accuracy of finite element solutions is closely tied to mesh quality. In particular, geometrically nonlinear problems involving large and strongly localized deformations often result in prohibitively large element distortions. In this work, we propose a novel mesh regularization approach that restores a non-distorted, high-quality mesh in an adaptive manner without the need for expensive re-meshing procedures. The core idea of this approach lies in the definition of a finite element distortion potential that accounts for contributions from different distortion modes such as the skewness and aspect ratio of the elements. The regularized mesh is found by minimizing this potential. Moreover, based on the concept of spatial localization functions, the method allows tailored requirements on mesh resolution and quality to be specified for regions with strongly localized mechanical deformation and mesh distortion. In addition, while existing mesh regularization schemes often keep the boundary nodes of the discretization fixed, we propose a mesh-sliding algorithm based on variationally consistent mortar methods that allows for unrestricted tangential motion of nodes along the problem boundary. Especially for problems involving significant surface deformation (e.g., frictional contact), this approach allows for improved mesh relaxation compared to schemes with fixed boundary nodes. To transfer data such as tensor-valued history variables of the material model from the old (distorted) to the new (regularized) mesh, a structure-preserving invariant interpolation scheme for second-order tensors is employed, which has been proposed in our previous work and is designed to preserve important mechanical properties of tensor-valued data such as objectivity and positive definiteness...
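A toy sketch of the distortion-potential idea (not the paper's potential or discretization): a classical triangle shape measure is minimized over one free interior node of a small patch with fixed boundary nodes:

```python
# Sketch: mesh regularization as minimization of a distortion potential,
# here the classical shape measure (sum of squared edge lengths)/(4*sqrt(3)*area),
# which equals 1 for an equilateral triangle.
import numpy as np
from scipy.optimize import minimize

corners = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])  # fixed boundary nodes

def tri_distortion(a, b, c):
    edges = np.array([b - a, c - b, a - c])
    area = 0.5 * abs((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))
    return np.sum(edges ** 2) / (4.0 * np.sqrt(3.0) * area)

def potential(p):                     # sum of element distortions in the patch
    return sum(tri_distortion(corners[i], corners[(i + 1) % 4], p) for i in range(4))

p0 = np.array([0.9, 0.15])            # badly distorted interior node
res = minimize(potential, p0, method="Nelder-Mead")
print("regularized node position:", np.round(res.x, 4))  # moves towards the patch center
```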
The dot-product attention mechanism, originally designed for natural language processing (NLP) tasks, is a cornerstone of modern Transformers. It adeptly captures semantic relationships between word pairs in sentences by computing a similarity overlap between queries and keys. In this work, we explore the suitability of Transformers, focusing on their attention mechanisms, in the specific domain of parametrizing variational wave functions to approximate ground states of quantum many-body spin Hamiltonians. Specifically, we perform numerical simulations of the two-dimensional $J_1$-$J_2$ Heisenberg model, a common benchmark in the field of quantum many-body systems on a lattice. By comparing the performance of standard attention mechanisms with a simplified version that excludes queries and keys, relying solely on positions, we achieve competitive results while reducing the computational cost and parameter usage. Furthermore, by analyzing the attention maps generated by standard attention mechanisms, we show that the attention weights become effectively input-independent at the end of the optimization. We support the numerical results with analytical calculations, providing physical insight into why queries and keys should, in principle, be omitted from the attention mechanism when studying large systems. Interestingly, the same arguments can be extended to the NLP domain in the limit of long input sentences.
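A generic illustration of the comparison (not the paper's wave-function ansatz; shapes and initialization are arbitrary): standard dot-product attention versus a simplified variant whose weights depend only on the pair of positions and are therefore input-independent:

```python
# Sketch: standard attention computes weights from queries and keys;
# the simplified variant replaces them with a fixed positional matrix.
import numpy as np

rng = np.random.default_rng(0)
L, d = 16, 8                                         # number of sites/tokens, embedding dim
X = rng.normal(size=(L, d))                          # input embeddings

def softmax(A, axis=-1):
    A = A - A.max(axis=axis, keepdims=True)
    E = np.exp(A)
    return E / E.sum(axis=axis, keepdims=True)

# Standard attention: weights computed from queries and keys
Wq, Wk, Wv = (rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(3))
Q, K, V = X @ Wq, X @ Wk, X @ Wv
out_standard = softmax(Q @ K.T / np.sqrt(d)) @ V

# Simplified attention: a learned positional matrix replaces softmax(QK^T);
# the weights no longer depend on the input X, only on the pair of positions.
W_pos = softmax(rng.normal(size=(L, L)))
out_simplified = W_pos @ (X @ Wv)

print(out_standard.shape, out_simplified.shape)      # both (L, d)
```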
Distributionally robust optimization has emerged as an attractive way to train robust machine learning models, capturing data uncertainty and distribution shifts. Recent statistical analyses have proved that robust models built from Wasserstein ambiguity sets have favorable generalization guarantees, breaking the curse of dimensionality. However, these results are obtained in specific cases, at the cost of approximations, or under assumptions difficult to verify in practice. In contrast, we establish, in this article, exact generalization guarantees that cover all practical cases, including any transport cost function and any loss function, potentially non-convex and nonsmooth. For instance, our result applies to deep learning, without requiring restrictive assumptions. We achieve this result through a novel proof technique that combines arguments from nonsmooth analysis with classical concentration results. Our approach is general enough to extend to the recent versions of Wasserstein/Sinkhorn distributionally robust problems that involve (double) regularizations.
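For reference, the Wasserstein distributionally robust problem discussed here, together with its standard dual reformulation (stated only to fix notation and holding under mild regularity conditions; this is background, not the article's new guarantees):
\[
\min_{\theta}\ \sup_{Q:\, W_c(Q,\widehat{P}_n)\le \rho} \mathbb{E}_{\xi\sim Q}\!\left[\ell_\theta(\xi)\right]
\;=\;
\min_{\theta}\ \inf_{\lambda\ge 0}\left\{\lambda\rho+\frac{1}{n}\sum_{i=1}^{n}\sup_{\xi}\big[\ell_\theta(\xi)-\lambda\,c(\xi,\xi_i)\big]\right\},
\]
where $\widehat{P}_n$ is the empirical distribution of the training samples $\xi_i$, $c$ is the transport cost inducing the optimal-transport discrepancy $W_c$, $\rho$ is the ambiguity radius, and $\ell_\theta$ is the (possibly non-convex, nonsmooth) loss.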