Marginal structural models are a popular method for estimating causal effects in the presence of time-varying exposures. Despite their popularity, no scalable non-parametric estimators exist for marginal structural models with multi-valued and time-varying treatments. In this paper, we use machine learning together with recent developments in semiparametric efficiency theory for longitudinal studies to propose such an estimator. The proposed estimator is based on a study of the non-parametric identifying functional, including first-order von Mises expansions as well as the efficient influence function and the efficiency bound. We show conditions under which the proposed estimator is efficient, asymptotically normal, and sequentially doubly robust in the sense that it is consistent if, for each time point, either the outcome or the treatment mechanism is consistently estimated. We perform a simulation study to illustrate the properties of the estimator, and present results from our motivating application, a COVID-19 dataset used to study the impact of mobility on the cumulative number of observed cases.
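As a rough illustration of the double robustness idea in the simplest point-treatment case (not the paper's sequentially doubly robust estimator for time-varying treatments), an AIPW-style estimate of a saturated marginal structural model E[Y(a)] = beta_0 + beta_1 * a with a binary treatment might look as follows; the learners and function names are assumptions.

```python
# Minimal sketch of a doubly robust (AIPW) estimate for a point-treatment
# MSM with binary treatment. It is consistent if EITHER the propensity
# model g or the outcome model m is correct, mirroring (in one time point)
# the sequential double robustness discussed in the abstract.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor

def aipw_msm(Y, A, X):
    """Y: outcomes, A: binary treatment (0/1), X: confounders (n x p)."""
    # Treatment mechanism (propensity score), fit with machine learning.
    g = GradientBoostingClassifier().fit(X, A).predict_proba(X)[:, 1]
    # Outcome regression fit on (X, A), then evaluated at both arms.
    m = GradientBoostingRegressor().fit(np.column_stack([X, A]), Y)
    m1 = m.predict(np.column_stack([X, np.ones_like(A)]))
    m0 = m.predict(np.column_stack([X, np.zeros_like(A)]))
    # AIPW pseudo-outcomes for each counterfactual mean.
    psi1 = m1 + A / g * (Y - m1)
    psi0 = m0 + (1 - A) / (1 - g) * (Y - m0)
    return psi0.mean(), psi1.mean() - psi0.mean()  # (beta_0, beta_1)
```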
We develop both first- and second-order numerical optimization methods to solve non-smooth optimization problems featuring a shared sparsity penalty, constrained by differential equations with uncertainty. To alleviate the curse of dimensionality we use tensor product approximations. To handle the non-smoothness of the objective function we introduce a smoothed version of the shared sparsity objective. We consider both a benchmark elliptic PDE constraint and a more realistic topology optimization problem. We demonstrate that the error converges linearly in the number of iterations and in the smoothing parameter, and faster than algebraically in the number of degrees of freedom, which comprises the number of quadrature points in one variable and the tensor ranks. Moreover, in the topology optimization problem, the smoothed shared sparsity penalty actually reduces the tensor ranks compared to the unpenalised solution. This enables us to find a sparse high-resolution design under high-dimensional uncertainty.
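The smoothed shared-sparsity idea can be pictured with a minimal sketch: controls are grouped per spatial node across uncertainty samples, so a whole group is driven to zero jointly, and a smoothing parameter eps makes the group norm differentiable at zero. The quadrature weights and the exact functional below are assumptions, not the paper's formulation.

```python
# Hedged sketch of a smoothed shared-sparsity (group-L1) penalty:
# u[k, i] is the control at uncertainty sample k and spatial node i.
import numpy as np

def smoothed_shared_sparsity(u, w, eps):
    """u: (n_samples, n_nodes) controls, w: quadrature weights over samples."""
    # Weighted L2 norm (squared) across the uncertain parameter, per node ...
    group_norm_sq = np.einsum("k,ki->i", w, u**2)
    # ... smoothed so the penalty and its gradient stay well-defined
    # where an entire group vanishes (group_norm == 0).
    return np.sum(np.sqrt(group_norm_sq + eps**2) - eps)
```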
We propose a novel, highly efficient, second-order accurate, long-time unconditionally stable numerical scheme for a class of finite-dimensional nonlinear models that are of importance in geophysical fluid dynamics. The scheme is highly efficient in the sense that only a (fixed) symmetric positive definite linear problem (with varying right-hand sides) is involved at each time step. The solutions to the scheme are uniformly bounded for all time. We show that the scheme is able to capture the long-time dynamics of the underlying geophysical model, with the global attractors as well as the invariant measures of the scheme converging to those of the original model as the step size approaches zero. In our numerical experiments, we take an indirect approach, using long-term statistics to approximate the invariant measures. Our results suggest that the convergence rate of the long-term statistics, as a function of terminal time, is approximately first order in the Jensen-Shannon metric and half order in the L1 metric. This implies that very long simulations are needed to capture even a few significant digits of the long-term statistics (the climate) correctly. Nevertheless, the second-order scheme's performance remains superior to that of the first-order one, requiring significantly less time to reach a small neighborhood of statistical equilibrium for a given step size.
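For concreteness, here is a minimal sketch of how the two distances between approximate invariant measures could be computed from histograms of long trajectories; the bin choices and variable names are illustrative assumptions.

```python
# Compare long-term statistics of two simulations via histograms:
# the Jensen-Shannon metric (square root of the JS divergence) and an
# L1 (total-variation-type) metric.
import numpy as np

def js_metric(p, q):
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0            # wherever a > 0, m > 0 as well
        return np.sum(a[mask] * np.log(a[mask] / b[mask]))
    return np.sqrt(0.5 * kl(p, m) + 0.5 * kl(q, m))

def l1_metric(p, q):
    return np.sum(np.abs(p / p.sum() - q / q.sum()))

# Usage: histogram two long trajectories on common bins, then compare, e.g.
# p, _ = np.histogram(traj_scheme, bins=edges)
# q, _ = np.histogram(traj_reference, bins=edges)
```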
This paper proposes a topology optimization method for non-thermal and thermal fluid problems using the Lattice Kinetic Scheme (LKS). LKS, which is derived from the Lattice Boltzmann Method (LBM), requires only macroscopic values, such as fluid velocity and pressure, whereas LBM requires velocity distribution functions, thereby reducing memory requirements. The proposed method computes design sensitivities based on the adjoint variable method, and the adjoint equation is solved in the same manner as LKS; thus, we refer to it as the Adjoint Lattice Kinetic Scheme (ALKS). A key contribution of this method is the proposed approximate treatment of boundary conditions for the adjoint equation, which are challenging to impose directly due to the characteristics of LKS boundary conditions. We demonstrate numerical examples for steady and unsteady problems involving non-thermal and thermal fluids; the results are physically meaningful and consistent with previous research, exhibiting similar trends in parameter dependencies, such as on the Reynolds number. Furthermore, the proposed method reduces memory usage by up to 75% compared to the conventional LBM in an unsteady thermal fluid problem.
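To illustrate the storage pattern that makes LKS memory-efficient, here is a hedged sketch on a D2Q9 lattice: only the macroscopic fields (rho, u) are kept between time steps, and the distributions are reconstructed from a local equilibrium when needed. This uses the standard D2Q9 equilibrium with periodic streaming only; the actual LKS adds a velocity-gradient correction term to the equilibrium that sets the viscosity, and the adjoint scheme (ALKS) is not reproduced here.

```python
# Sketch of the LKS storage idea: no distribution functions persist
# between steps, only (rho, u), cutting the per-node storage from the
# q = 9 distributions down to the 3 macroscopic values.
import numpy as np

c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])      # D2Q9 velocities
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)            # D2Q9 weights

def equilibrium(rho, u):
    """rho: (nx, ny), u: (nx, ny, 2) -> f_eq: (9, nx, ny)."""
    cu = np.einsum("qd,xyd->qxy", c, u)                 # c_q . u
    usq = np.einsum("xyd,xyd->xy", u, u)                # |u|^2
    return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def step(rho, u):
    f = equilibrium(rho, u)          # reconstruct distributions on the fly
    for q in range(9):               # stream (periodic, for brevity)
        f[q] = np.roll(f[q], shift=tuple(c[q]), axis=(0, 1))
    rho = f.sum(axis=0)              # recover macroscopic values and
    u = np.einsum("qxy,qd->xyd", f, c) / rho[..., None]  # discard f
    return rho, u
```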
This paper presents a convolution tensor decomposition based model reduction method for solving the Allen-Cahn equation. The Allen-Cahn equation is usually used to characterize phase separation or the motion of anti-phase boundaries in materials. Computing its solution is time-consuming when high-resolution meshes and long-time integration are involved. To resolve these issues, the convolution tensor decomposition method is developed, in conjunction with a stabilized semi-implicit scheme for time integration. The development enables a powerful computational framework for high-resolution solutions of Allen-Cahn problems, and allows relatively large time increments without violating the discrete energy law. To further improve the efficiency and robustness of the method, an adaptive algorithm is also proposed. Numerical examples confirm the efficiency of the method in both 2D and 3D problems. Orders-of-magnitude speedups were obtained with the method for high-resolution problems, compared to the finite element method. The proposed computational framework opens numerous opportunities for simulating complex microstructure formation in materials on large-volume high-resolution meshes at a deeply reduced computational cost.
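A minimal sketch of a stabilized semi-implicit step (without the paper's convolution tensor decomposition) for u_t = eps^2 * Lap(u) - (u^3 - u) on a periodic grid, solved in Fourier space; the stabilization constant S is what permits larger time increments without violating the discrete energy law.

```python
# One stabilized semi-implicit time step for the Allen-Cahn equation,
# treating the Laplacian implicitly and the nonlinearity explicitly,
# with an extra stabilization term S*(u^{n+1} - u^n).
import numpy as np

def allen_cahn_step(u, dt, eps, S):
    n = u.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=1.0 / n)   # wavenumbers on [0, 1)^2
    k2 = k[:, None]**2 + k[None, :]**2
    rhs = (1/dt + S) * u - (u**3 - u)              # explicit nonlinear part
    u_hat = np.fft.fft2(rhs) / (1/dt + S + eps**2 * k2)
    return np.real(np.fft.ifft2(u_hat))
```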
In real-world data, information is stored in extremely large feature vectors. These variables are typically correlated due to complex interactions involving many features simultaneously. Such correlations qualitatively correspond to semantic roles and are naturally recognized by both the human brain and artificial neural networks. This recognition enables, for instance, the prediction of missing parts of an image or text based on their context. We present a method to detect these correlations in high-dimensional data represented as binary numbers. We estimate the binary intrinsic dimension of a dataset, which quantifies the minimum number of independent coordinates needed to describe the data and therefore serves as a proxy for semantic complexity. The proposed algorithm is largely insensitive to the so-called curse of dimensionality and can therefore be used in big data analysis. We test this approach by identifying phase transitions in model magnetic systems, and we then apply it to the detection of semantic correlations in images and text inside deep neural networks.
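As a hedged stand-in for the binary intrinsic dimension (the paper's exact estimator is not reproduced here), one could adapt the TWO-NN ratio estimator to Hamming distances; ties, which are common for binary vectors, are simply filtered out in this sketch.

```python
# Illustrative TWO-NN-style intrinsic dimension estimate for binary data,
# using Hamming distances; fine for moderate n (pairwise distances are
# materialized as an (n, n) array).
import numpy as np

def twonn_binary(X):
    """X: (n, p) array of 0/1 features -> scalar dimension estimate."""
    D = (X[:, None, :] != X[None, :, :]).sum(-1)   # pairwise Hamming distances
    np.fill_diagonal(D, np.iinfo(D.dtype).max)     # exclude self-distances
    r = np.sort(D, axis=1)[:, :2]                  # two nearest neighbours
    ok = (r[:, 0] > 0) & (r[:, 1] > r[:, 0])       # drop degenerate ties
    mu = r[ok, 1] / r[ok, 0]                       # distance ratios
    return ok.sum() / np.log(mu).sum()             # TWO-NN MLE
```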
We propose a tamed-adaptive Milstein scheme for stochastic differential equations whose coefficients have first-order derivatives that are locally H\"older continuous of order $\alpha$. We show that the scheme converges in the $L_2$-norm with rate $(1+\alpha)/2$ on both the finite interval $[0, T]$ and the infinite interval $(0, +\infty)$, under certain growth conditions on the coefficients.
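For orientation, a single step of a plain tamed Milstein scheme for a scalar SDE dX = a(X) dt + b(X) dW might look as follows; the paper's specific taming and adaptive step-size rule are not reproduced, so this is a sketch of the basic ingredients only.

```python
# One tamed Milstein step: the drift is tamed to prevent blow-up under
# superlinear growth, and the Milstein correction uses b'(x).
import numpy as np

def tamed_milstein_step(x, h, a, b, db, rng):
    """a, b: SDE coefficients; db: derivative of b; h: current step size."""
    dW = rng.normal(scale=np.sqrt(h))
    drift = a(x) * h / (1 + h * abs(a(x)))          # tamed drift increment
    milstein = 0.5 * b(x) * db(x) * (dW**2 - h)     # Milstein correction
    return x + drift + b(x) * dW + milstein

# Example use with constant h (an adaptive rule would shrink h where the
# coefficients are large):
rng = np.random.default_rng(0)
x, h = 1.0, 1e-3
for _ in range(1000):
    x = tamed_milstein_step(x, h, lambda y: y - y**3,
                            lambda y: 0.5 * y, lambda y: 0.5, rng)
```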
We extend prior work comparing linear multilevel models (MLM) and fixed effect (FE) models to the generalized linear model (GLM) setting, where the coefficient on a treatment variable is of primary interest. This leads to three key insights. (i) As in the linear setting, MLM can be thought of as a regularized form of FE, which explains why MLM can show large biases in its treatment coefficient estimates when group-level confounding is present. Unlike the linear setting, however, there is no exact equivalence between MLM and regularized FE coefficient estimates in GLMs. (ii) We study a generalization of "bias-corrected MLM" (bcMLM) to the GLM setting. Neither FE nor bcMLM entirely solves MLM's bias problem in GLMs, but bcMLM tends to show less bias than FE. (iii) As in the linear setting, MLM's default standard errors can misstate the true intragroup dependence structure in the GLM setting, which can lead to downwardly biased standard errors; a cluster bootstrap is a more agnostic alternative. Ultimately, for non-linear GLMs, we recommend bcMLM for estimating the treatment coefficient, and a cluster bootstrap for standard errors and confidence intervals. If a bootstrap is not computationally feasible, then we recommend FE with cluster-robust standard errors.
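A minimal sketch of the recommended cluster bootstrap for a GLM treatment coefficient, resampling whole groups with replacement and taking percentile intervals; the column names ("y", "treat", "group"), the binomial family, and the number of replicates are illustrative assumptions.

```python
# Cluster bootstrap confidence interval for a GLM treatment coefficient:
# resample clusters (not rows), refit, and take percentile quantiles.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def cluster_bootstrap_ci(df, formula="y ~ treat", group="group",
                         n_boot=999, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    groups = df[group].unique()
    coefs = []
    for _ in range(n_boot):
        sampled = rng.choice(groups, size=len(groups), replace=True)
        boot = pd.concat([df[df[group] == g] for g in sampled])
        fit = smf.glm(formula, data=boot, family=sm.families.Binomial()).fit()
        coefs.append(fit.params["treat"])
    return np.quantile(coefs, [alpha / 2, 1 - alpha / 2])
```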
Many models require integrals of high-dimensional functions, for instance to obtain marginal likelihoods. Such integrals may be intractable or too expensive to compute numerically. Instead, we can use the Laplace approximation (LA). The LA is exact if the function is proportional to a normal density; its effectiveness therefore depends on the function's true shape. Here, we propose the use of the probabilistic numerical framework to develop a diagnostic for the LA and its underlying shape assumptions, modelling the function and its integral as a Gaussian process and devising a "test" by conditioning on a finite number of function values. The test is decidedly non-asymptotic and is not intended as a full substitute for numerical integration; rather, it is simply intended to test the feasibility of the assumptions underpinning the LA with as little computation as possible. We discuss approaches to optimize and design the test, apply it to known sample functions, and highlight the challenges of high dimensions.
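For reference, a minimal one-dimensional Laplace approximation, the object the proposed diagnostic is meant to vet; the GP-based test itself is not shown, and the finite-difference curvature is an implementation shortcut.

```python
# Laplace approximation of I = \int exp(log_f(x)) dx: find the mode,
# match a Gaussian to the curvature of log_f there.
import numpy as np
from scipy.optimize import minimize_scalar

def laplace_approx(log_f, h=1e-4):
    """Approximate the integral of exp(log_f) over the real line."""
    m = minimize_scalar(lambda x: -log_f(x)).x     # mode of the integrand
    # Second derivative of log_f at the mode by central differences.
    curv = (log_f(m + h) - 2 * log_f(m) + log_f(m - h)) / h**2
    return np.exp(log_f(m)) * np.sqrt(2 * np.pi / (-curv))

# Exact when the function is proportional to a normal density, e.g.
# laplace_approx(lambda x: -0.5 * x**2) recovers sqrt(2 * pi).
```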
We introduce an algebraic concept of a frame for abstract conditional independence (CI) models, together with the basic operations with respect to which such a frame should be closed: copying and marginalization. Three standard examples of such frames are (discrete) probabilistic CI structures, semi-graphoids, and structural semi-graphoids. We concentrate on those frames which are closed under the operation of set-theoretical intersection because, for these, the respective families of CI models are lattices. This allows one to apply results from lattice theory and formal concept analysis to describe such families in terms of implications among CI statements. The central concept of this paper is that of self-adhesivity, defined in algebraic terms, which is a combinatorial reflection of the self-adhesivity concept studied earlier in the context of polymatroids and information theory. The generalization also leads to a self-adhesivity operator defined on the hyper-level of CI frames. We answer some of the questions related to this approach and raise other open questions. The core of the paper is computational: the combinatorial approach to computation might overcome some memory and space limitations of software packages based on polyhedral geometry, in particular if SAT solvers are utilized. We characterize some basic CI families over 4 variables in terms of canonical implications among CI statements. We apply our method in an information-theoretical context to the task of entropic region demarcation over 5 variables.
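One small ingredient of such frames can be made concrete: closing a finite set of CI statements (A ⊥ B | C) under the semi-graphoid axioms (symmetry, decomposition, weak union, contraction). This sketch is far short of the paper's lattice-theoretic and SAT-based machinery; the frozenset representation is an assumption.

```python
# Close a set of CI statements, stored as triples (A, B, C) of frozensets
# with A, B, C pairwise disjoint, under the semi-graphoid axioms.
from itertools import product

def semigraphoid_closure(stmts):
    closed = set(stmts)
    changed = True
    while changed:
        changed = False
        new = set()
        for (A, B, C) in closed:
            new.add((B, A, C))                      # symmetry
            for b in B:
                Bb = B - {b}
                if Bb:
                    new.add((A, Bb, C))             # decomposition
                    new.add((A, Bb, C | {b}))       # weak union
        for (A, B, C1), (A2, D, C2) in product(closed, closed):
            # contraction: (A⊥B|C∪D) and (A⊥D|C) imply (A⊥B∪D|C)
            if A == A2 and D and C1 == C2 | D:
                new.add((A, B | D, C2))
        if not new <= closed:
            closed |= new
            changed = True
    return closed

# Example: the closure of {(a ⊥ b | c), (a ⊥ c | {})} contains (a ⊥ bc | {}).
stmts = {(frozenset("a"), frozenset("b"), frozenset("c")),
         (frozenset("a"), frozenset("c"), frozenset())}
```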
We extend the shifted boundary method (SBM) to the simulation of incompressible fluid flow using immersed octree meshes. Previous work on the SBM for fluid flow primarily utilized two- or three-dimensional unstructured tetrahedral grids. Recently, octree grids have become an essential component of immersed CFD solvers, yet the SBM had not been extended to them; this work addresses that gap and the associated computational challenges. We leverage an optimal (approximate) surrogate boundary constructed efficiently on incomplete and adaptive octree meshes. The resulting framework enables the simulation of the incompressible Navier-Stokes equations in complex geometries without requiring boundary-fitted grids. Simulations of benchmark tests in two and three dimensions demonstrate that the Octree-SBM framework is a robust, accurate, and efficient approach to simulating fluid dynamics problems with complex geometries.
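The core shift can be sketched in a few lines: a Dirichlet datum g given at a true boundary point is transferred to a surrogate boundary point via a truncated Taylor expansion, u(x_tilde) + grad(u) . d = g(x) with d = x - x_tilde. Names are illustrative, and the octree construction of the surrogate boundary is not shown.

```python
# Shifted boundary idea: enforce at the surrogate node the value that makes
# the first-order Taylor extrapolation of u match g at the true boundary.
import numpy as np

def shifted_dirichlet_value(g, x_true, x_surrogate, grad_u):
    """Value to impose at the surrogate node so u matches g at x_true."""
    d = np.asarray(x_true) - np.asarray(x_surrogate)   # shift vector
    return g(x_true) - np.dot(grad_u, d)
```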