We present a manifold-based autoencoder method for learning nonlinear dynamics in time, notably partial differential equations (PDEs), in which the latent manifold evolves according to Ricci flow. This is accomplished by simulating Ricci flow in a physics-informed setting and matching manifold quantities so that Ricci flow is empirically achieved. With our methodology, the manifold is learned as part of the training procedure, so well-suited geometries may be discerned, while the evolution simultaneously induces a more accommodating latent representation than static methods. We evaluate our method on a range of numerical experiments with PDEs exhibiting desirable characteristics such as periodicity and randomness, and report errors in both in-distribution and extrapolation scenarios.
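For reference, Ricci flow evolves a Riemannian metric $g$ by its Ricci curvature; a minimal statement of the flow that the latent manifold is assumed to follow is
\begin{equation*}
  \partial_t g_{ij} = -2\,R_{ij}(g), \qquad g(0) = g_0,
\end{equation*}
where $R_{ij}$ denotes the Ricci curvature of $g(t)$. How this flow is discretized and matched within the physics-informed training loss is not specified in the abstract.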
This work considers the nodal finite element approximation of peridynamics, in which the nodal displacements satisfy the peridynamics equation at each mesh node. For the nonlinear bond-based peridynamics model, it is shown that, under suitable assumptions on the exact solution, the discrete solution obtained from the central-in-time and nodal finite element discretization converges to the exact solution in the $L^2$ norm at the rate $C_1 \Delta t + C_2 h^2/\epsilon^2$. Here, $\Delta t$, $h$, and $\epsilon$ are the time step size, the mesh size, and the size of the horizon (the nonlocal length scale), respectively. The constants $C_1$ and $C_2$ are independent of $h$ and $\Delta t$ and depend on norms of the exact solution. Several numerical examples involving a pre-crack, a void, and a notch are considered, and the efficacy of the proposed nodal finite element discretization is analyzed.
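As a minimal illustration of the stated rate, the sketch below estimates an observed spatial convergence order from $L^2$ errors at two mesh sizes, assuming the time step and horizon $\epsilon$ are held fixed; the error values are placeholders, not results from the paper.

```python
import numpy as np

# Sketch of an empirical convergence check for the spatial rate O(h^2/eps^2):
# with the time step and horizon eps held fixed, halving h should roughly
# quarter the L2 error. The error values below are placeholders.
def observed_order(h_coarse, h_fine, err_coarse, err_fine):
    """Estimate p such that error ~ C * h**p."""
    return np.log(err_coarse / err_fine) / np.log(h_coarse / h_fine)

print(observed_order(h_coarse=0.02, h_fine=0.01,
                     err_coarse=4.0e-4, err_fine=1.0e-4))   # -> 2.0
```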
Mendelian randomization is an instrumental variable method that utilizes genetic information to investigate the causal effect of a modifiable exposure on an outcome. In most cases, the exposure changes over time. Understanding the time-varying causal effect of the exposure can yield detailed insights into mechanistic effects and the potential impact of public health interventions. Recently, a growing number of Mendelian randomization studies have attempted to explore time-varying causal effects. However, the proposed approaches oversimplify temporal information and rely on overly restrictive structural assumptions, limiting their reliability in addressing time-varying causal problems. This paper proposes a novel approach to estimating time-varying effects through continuous-time modelling, combining functional principal component analysis with weak-instrument-robust techniques. Our method effectively utilizes the available data without making strong structural assumptions and can be applied in general settings where exposure measurements occur at different time points for different individuals. We demonstrate through simulations that our proposed method performs well in estimating time-varying effects and provides reliable inference results when the time-varying effect form is correctly specified. The method could theoretically be used to estimate arbitrarily complex time-varying effects; however, there is a trade-off between model complexity and instrument strength, and estimating complex time-varying effects requires unrealistically strong instruments. We illustrate the application of this method in a case study examining the time-varying effect of systolic blood pressure on urea levels.
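As a rough illustration of the continuous-time modelling idea, the sketch below represents a time-varying effect $\beta(t)$ by a small number of basis functions standing in for functional principal components estimated from the exposure data; the basis choice and coefficients are hypothetical and only indicate how such an expansion is evaluated.

```python
import numpy as np

# Illustrative sketch: a time-varying effect beta(t) expanded in a small basis,
# standing in for functional principal components estimated from data.
# The basis and coefficients below are hypothetical.
def beta(t, coeffs):
    K = len(coeffs)
    basis = np.column_stack(
        [np.ones_like(t)] + [np.cos(k * np.pi * t) for k in range(1, K)]
    )
    return basis @ coeffs

t = np.linspace(0.0, 1.0, 101)                        # rescaled follow-up time
effect = beta(t, coeffs=np.array([0.3, -0.1, 0.05]))  # hypothetical coefficients
```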
In this study, the impact of turbulent diffusion on the mixing of biochemical reaction models is explored by implementing and validating different models. An in-house codebase called CHAD (Coupled Hydrodynamics and Anaerobic Digestion) is extended to incorporate turbulent diffusion and is validated against OpenFOAM results for 2D Rayleigh-Taylor instability and lid-driven cavity simulations. The models are then tested for applications in anaerobic digestion, a widely used wastewater treatment method. The findings demonstrate that the implemented models accurately capture turbulent diffusion when provided with an accurate flow field. Specifically, chemical turbulent diffusion has only a minor effect on the biochemical reactions within the anaerobic digestion tank, whereas thermal turbulent diffusion significantly influences mixing. By successfully implementing turbulent diffusion models in CHAD, its capabilities for more accurate anaerobic digestion simulations are enhanced, aiding the design and operation of anaerobic digestion reactors in real-world wastewater treatment applications.
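For context, a common way to model the turbulent diffusion of a scalar $c$ is the gradient-diffusion closure
\begin{equation*}
  \mathbf{J}_t = -D_t \nabla c, \qquad D_t = \frac{\nu_t}{\mathrm{Sc}_t},
\end{equation*}
where $\nu_t$ is the turbulent (eddy) viscosity and $\mathrm{Sc}_t$ a turbulent Schmidt number, with a turbulent Prandtl number playing the analogous role for thermal diffusion. The abstract does not state which closure CHAD implements, so this is only a representative form.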
Methods for upwinding the potential vorticity in a compatible finite element discretisation of the rotating shallow water equations are studied. These include the well-known anticipated potential vorticity method (APVM), the streamline upwind Petrov-Galerkin (SUPG) method, and a recent approach where the trial functions are evaluated downstream within the reference element. In all cases the upwinding scheme conserves both potential vorticity and energy, since the antisymmetric structure of the equations is preserved. The APVM leads to a symmetric definite correction to the potential enstrophy that is dissipative and inconsistent, resulting in a turbulent state where the potential enstrophy is more strongly damped than for the other schemes. While the SUPG scheme is widely known to be consistent, since it modifies the test functions only, the downwinded trial function formulation results in the advection of downwind corrections. Results of the SUPG and downwinded trial function schemes are very similar in terms of both potential enstrophy conservation and turbulent spectra. The main difference between these schemes lies in the energy conservation and residual errors. If just two nonlinear iterations are applied, the energy conservation errors are improved for the downwinded trial function formulation, reflecting a smaller residual error than for the SUPG scheme. We also present formulations by which potential enstrophy is exactly integrated at each time level. Results using these formulations are observed to be stable in the absence of dissipation, despite the uncontrolled aliasing of grid-scale turbulence. Using such a formulation together with the APVM, with a coefficient $\mathcal{O}(100)$ times smaller than its regular value, leads to turbulent spectra that are greatly improved at the grid scale over the SUPG and downwinded trial function formulations with unstable potential enstrophy errors.
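For reference, one standard form of the anticipated potential vorticity correction replaces the potential vorticity $q$ in the flux by an upwinded value
\begin{equation*}
  \tilde{q} = q - \tau\,\mathbf{u}\cdot\nabla q,
\end{equation*}
where $\tau$ is an upwinding time scale, often taken proportional to the time step. The precise coefficient used here, and the corresponding SUPG and downwinded trial function modifications, are detailed in the paper rather than the abstract.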
We propose a numerical method to solve parameter-dependent hyperbolic partial differential equations (PDEs) with a moment approach, based on previous work by Marx et al. (2020). This approach relies on a very weak notion of solution of nonlinear equations, namely parametric entropy measure-valued (MV) solutions, which satisfy linear equations in the space of Borel measures. The infinite-dimensional linear problem is approximated by a hierarchy of convex, finite-dimensional semidefinite programming problems, called Lasserre's hierarchy. This yields a sequence of approximations of the moments of the occupation measure associated with the parametric entropy MV solution, which is proved to converge. Finally, several post-processing steps can be performed on this approximate moment sequence. In particular, the graph of the solution can be reconstructed by optimizing the Christoffel-Darboux kernel associated with the approximate measure, a powerful approximation tool able to capture a large class of irregular functions. Moreover, for uncertainty quantification problems, several quantities of interest can be estimated, sometimes directly, such as the expectation of smooth functionals of the solutions. The performance of our approach is evaluated through numerical experiments on the inviscid Burgers equation with parametrised initial conditions or parametrised flux function.
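As a small illustration of the post-processing step, the sketch below evaluates the inverse Christoffel function $q(x,y) = v(x,y)^\top M^{-1} v(x,y)$ built from a moment matrix $M$ of monomials; small values of $q$ flag points near the support of the measure, which is how the solution graph can be located. The moment matrix here is a placeholder rather than the output of Lasserre's hierarchy.

```python
import numpy as np

# Sketch of the Christoffel-Darboux post-processing: evaluate the inverse
# Christoffel function q(x, y) = v(x, y)^T M^{-1} v(x, y) for monomials v up
# to total degree d. The moment matrix below is a placeholder.
def monomials(x, y, d):
    return np.array([x**i * y**j for i in range(d + 1) for j in range(d + 1 - i)])

def inverse_christoffel(x, y, M_inv, d):
    v = monomials(x, y, d)
    return v @ M_inv @ v

d = 2
n = len(monomials(0.0, 0.0, d))
M_inv = np.eye(n)                 # placeholder for the inverse moment matrix
print(inverse_christoffel(0.1, -0.3, M_inv, d))
```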
Several mixed-effects models for longitudinal data have been proposed to accommodate the non-linearity of late-life cognitive trajectories and to assess the putative influence of covariates on it. No prior research provides a side-by-side examination of these models to offer guidance on their proper application and interpretation. In this work, we examined five statistical approaches previously used to answer research questions related to non-linear changes in cognitive aging: the linear mixed model (LMM) with a quadratic term, the LMM with splines, the functional mixed model, the piecewise linear mixed model, and the sigmoidal mixed model. We first describe the models theoretically. Next, using data from two prospective cohorts with annual cognitive testing, we compared the interpretation of the models by investigating the association of education with cognitive change before death. Lastly, we performed a simulation study to empirically evaluate the models and provide practical recommendations. Except for the LMM with a quadratic term, the fit of all models was generally adequate to capture the non-linearity of cognitive change, and the models were relatively robust. Although spline-based models have no interpretable non-linearity parameters, their convergence was easier to achieve and they allow graphical interpretation. In contrast, piecewise and sigmoidal models, with interpretable non-linear parameters, may require more data to achieve convergence.
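As a concrete example of the first of these models, the sketch below fits an LMM with a quadratic time term using statsmodels on synthetic data; the variable names and simulated values are placeholders and do not correspond to the cohorts analysed in the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Sketch of the LMM with a quadratic time term, one of the five models
# compared; the data generated below are synthetic placeholders.
rng = np.random.default_rng(0)
n_subj, n_visits = 50, 8
df = pd.DataFrame({
    "id": np.repeat(np.arange(n_subj), n_visits),
    "time": np.tile(np.arange(n_visits, dtype=float), n_subj),
    "educ": np.repeat(rng.integers(8, 21, n_subj).astype(float), n_visits),
})
df["cog"] = (0.2 * df["educ"] - 0.1 * df["time"] - 0.02 * df["time"] ** 2
             + rng.normal(0.0, 0.5, len(df)))

# Fixed effects: time, time^2, education, education-by-time interaction;
# random intercept and slope per participant.
model = smf.mixedlm("cog ~ time + I(time ** 2) + educ + educ:time",
                    df, groups=df["id"], re_formula="~time")
print(model.fit().summary())
```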
Chemical and biochemical reactions can exhibit surprisingly different behaviours, ranging from multiple steady-state solutions to oscillatory solutions and chaotic behaviour. Such behaviour has been of great interest to researchers for many decades. The Briggs-Rauscher, Belousov-Zhabotinskii and Bray-Liebhafsky reactions, for which periodic variations in concentrations can be visualized by changes in colour, are experimental examples of oscillating behaviour in chemical systems. These types of systems are modelled by a system of partial differential equations coupled by a nonlinearity. However, analysing the patterns, one may suspect that the dynamics are generated by only a finite number of spatial Fourier modes. In fluid dynamics, it has been shown that for large times the solution is determined by a finite number of spatial Fourier modes, called determining modes. In this article, we first introduce the concept of determining modes and show that it is indeed sufficient to characterise the dynamics by only a finite number of spatial Fourier modes. In particular, we analyse the exact number of determining modes of $u$ and $v$, where the pair $(u,v)$ solves the following stochastic system \begin{equation*} \partial_t{u}(t) = r_1\Delta u(t) -\alpha_1u(t)- \gamma_1u(t)v^2(t) + f(1 - u(t)) + g(t),\quad \partial_t{v}(t) = r_2\Delta v(t) -\alpha_2v(t) + \gamma_2 u(t)v^2(t) + h(t),\quad u(0) = u_0,\;v(0) = v_0, \end{equation*} where $r_1,r_2,\gamma_1,\gamma_2>0$, $\alpha_1,\alpha_2 \ge 0$, and $g,h$ are time-dependent mappings specified later.
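The deterministic part of this system can be simulated with a straightforward explicit finite-difference scheme, as in the sketch below for one spatial dimension with periodic boundaries; the parameter values, forcings, and initial data are illustrative only and are not taken from the article.

```python
import numpy as np

# Explicit finite-difference sketch of the deterministic part of the system on
# [0, 1] with periodic boundaries; all parameters and initial data are
# illustrative placeholders.
r1, r2 = 2e-5, 1e-5          # diffusivities
g1, g2 = 1.0, 1.0            # gamma_1, gamma_2
a1, a2 = 0.0, 0.1            # alpha_1, alpha_2
f = 0.04                     # feed rate
N = 256
dx, dt = 1.0 / N, 0.25
x = np.linspace(0.0, 1.0, N, endpoint=False)
u = np.ones(N)
v = 0.25 * np.exp(-((x - 0.5) ** 2) / 0.01)

def lap(w):
    """Periodic 1D Laplacian with second-order central differences."""
    return (np.roll(w, 1) - 2.0 * w + np.roll(w, -1)) / dx**2

for _ in range(4000):
    g_t, h_t = 0.0, 0.0      # external forcings g(t), h(t), set to zero here
    du = r1 * lap(u) - a1 * u - g1 * u * v**2 + f * (1.0 - u) + g_t
    dv = r2 * lap(v) - a2 * v + g2 * u * v**2 + h_t
    u, v = u + dt * du, v + dt * dv
```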
Recently, deep learning (DL)-based methods have been proposed for the computational reduction of gadolinium-based contrast agents (GBCAs) to mitigate adverse side effects while preserving diagnostic value. Currently, the two main challenges for these approaches are the accurate prediction of contrast enhancement and the synthesis of realistic images. In this work, we address both challenges by utilizing the contrast signal encoded in the subtraction images of pre-contrast and post-contrast image pairs. To avoid synthesizing any noise or artifacts and to focus solely on contrast signal extraction and enhancement from low-dose subtraction images, we train our DL model using noise-free standard-dose subtraction images as targets. As a result, our model predicts the contrast enhancement signal only, thereby enabling the synthesis of images beyond the standard dose. Furthermore, we adapt the embedding idea of recent diffusion-based models to condition our model on physical parameters affecting the contrast enhancement behavior. We demonstrate the effectiveness of our approach on synthetic and real datasets using various scanners, field strengths, and contrast agents.
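A conceptual sketch of the described training setup is given below: a network maps a low-dose subtraction image to a noise-free standard-dose subtraction image, so that only the enhancement signal is predicted. The tiny architecture, loss, and tensors are placeholders; the paper's actual model and its conditioning on physical parameters are not reproduced here.

```python
import torch
import torch.nn as nn

# Conceptual sketch: predict the contrast enhancement signal from a low-dose
# subtraction image, supervised by a noise-free standard-dose subtraction
# image. Architecture and data are placeholders.
model = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, kernel_size=3, padding=1),
)
loss_fn = nn.L1Loss()

low_dose_sub = torch.randn(4, 1, 128, 128)   # post_low - pre (placeholder batch)
standard_sub = torch.randn(4, 1, 128, 128)   # noise-free target (placeholder)

pred_enhancement = model(low_dose_sub)
loss = loss_fn(pred_enhancement, standard_sub)
loss.backward()
```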
A component-splitting method is proposed to improve convergence characteristics for the implicit time integration of compressible multicomponent reactive flows. The characteristic decomposition of the flux Jacobian of the multicomponent Navier-Stokes equations yields a large sparse eigensystem, presenting challenges of slow convergence and high computational cost for implicit methods. To address this issue, the component-splitting method segregates the implicit operator into two parts: one for the flow equations (density/momentum/energy) and the other for the component equations. The implicit operator of each part employs flux-vector splitting based on its respective spectral radius to achieve accelerated convergence. This approach improves the computational efficiency of the implicit iteration, mitigating the quadratic increase in time cost with the number of species. Two consistency corrections are developed to reduce the error introduced by the component splitting and to ensure the numerical consistency of the mass fractions. Importantly, the impact of the component-splitting method on accuracy is minimal as the residual approaches convergence. The accuracy, efficiency, and robustness of the component-splitting method are thoroughly investigated and compared with a coupled implicit scheme through several numerical cases involving thermo-chemical nonequilibrium hypersonic flows. The results demonstrate that the component-splitting method decreases the number of iteration steps required for convergence of the residual and the wall heat flux, decreases the computation time per iteration step, and drives the residual to a lower magnitude. The acceleration efficiency is enhanced with increases in the CFL number and the number of species.
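As a minimal illustration of flux-vector splitting based on spectral radii, the sketch below forms $A^{\pm} = \tfrac{1}{2}(A \pm \rho(A) I)$ separately for a flow block and a species block; the Jacobians are random placeholders, and the actual implicit operators and consistency corrections of the method are not reproduced here.

```python
import numpy as np

# Illustration of spectral-radius-based flux-vector splitting,
# A± = (A ± rho(A) I) / 2, applied separately to a flow block and a species
# block as in a component-split implicit operator. Jacobians are placeholders.
def split(A):
    rho = np.max(np.abs(np.linalg.eigvals(A)))   # spectral radius
    I = np.eye(A.shape[0])
    return 0.5 * (A + rho * I), 0.5 * (A - rho * I)

rng = np.random.default_rng(0)
A_flow = rng.random((5, 5))        # density/momentum/energy block (placeholder)
A_species = rng.random((10, 10))   # species block (placeholder)
Ap_flow, Am_flow = split(A_flow)
Ap_spec, Am_spec = split(A_species)
```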
This study addresses a class of mixed-integer linear programming (MILP) problems that involve uncertainty in the objective function parameters. The parameters are assumed to form a random vector whose probability distribution can only be observed through a finite training data set. Unlike most related studies in the literature, we also consider uncertainty in the underlying data set. The data uncertainty is described by a set of linear constraints for each random sample, and the uncertainty in the distribution (for a fixed realization of the data) is defined using a type-1 Wasserstein ball centered at the empirical distribution of the data. The overall problem is formulated as a three-level distributionally robust optimization (DRO) problem. First, we prove that the three-level problem admits a single-level MILP reformulation if the class of loss functions is restricted to biaffine functions. Second, we show that for several particular forms of data uncertainty, the outlined problem can be solved reasonably fast by leveraging the nominal MILP problem. Finally, we conduct a computational study in which the out-of-sample performance of our model and the computational complexity of the proposed MILP reformulation are explored numerically for several application domains.
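For intuition about the Wasserstein ambiguity set, the sketch below evaluates the worst-case expected loss over a type-1 Wasserstein ball for an affine loss with unconstrained support, which admits a simple closed form (empirical mean plus radius times the dual norm of the loss gradient). This standard special case is not the paper's biaffine, three-level formulation with data uncertainty; the data are placeholders.

```python
import numpy as np

# Worst-case expected loss over a type-1 Wasserstein ball of radius eps around
# the empirical distribution, for an affine loss l(xi) = c @ xi + d with
# unconstrained support: empirical mean + eps * ||c||. Data are placeholders.
def worst_case_affine(samples, c, d, eps):
    empirical = np.mean(samples @ c + d)
    return empirical + eps * np.linalg.norm(c)   # 2-norm transport cost assumed

xi = np.random.default_rng(1).normal(size=(100, 3))   # training samples
print(worst_case_affine(xi, c=np.array([1.0, -2.0, 0.5]), d=0.0, eps=0.1))
```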