Understanding fluid movement in multi-pored materials is vital for energy security and physiology. For instance, shale (a geological material) and bone (a biological material) exhibit multiple pore networks. Double porosity/permeability models provide a mechanics-based approach to describe hydrodynamics in such porous materials. However, current theoretical results primarily address the steady-state response, and their counterparts in the transient regime are still wanting. The primary aim of this paper is to fill this knowledge gap. We present three principal properties -- with rigorous mathematical arguments -- that the solutions under the double porosity/permeability model satisfy in the transient regime: backward-in-time uniqueness, reciprocity, and a variational principle. We employ the ``energy method'' -- exploiting the physical total kinetic energy of the flowing fluid -- to establish the first property, and Cauchy-Riemann convolutions to prove the next two. The results reported in this paper -- which qualitatively describe the dynamics of fluid flow in double-pored media -- have (a) theoretical significance, (b) practical applications, and (c) considerable pedagogical value. In particular, these results will help practitioners and computational scientists check the accuracy of numerical simulators. The backward-in-time uniqueness lays a firm theoretical foundation for pursuing inverse problems in which one recovers the prescribed initial conditions from data on the solution at a later instant.
This work considers the nodal finite element approximation of peridynamics, in which the nodal displacements satisfy the peridynamics equation at each mesh node. For the nonlinear bond-based peridynamics model, it is shown that, under suitable assumptions on the exact solution, the discrete solution associated with the central-in-time and nodal finite element discretization converges to the exact solution in the $L^2$ norm at the rate $C_1 \Delta t + C_2 h^2/\epsilon^2$. Here, $\Delta t$, $h$, and $\epsilon$ are the time step size, the mesh size, and the size of the horizon (the nonlocal length scale), respectively. The constants $C_1$ and $C_2$ are independent of $h$ and $\Delta t$ and depend on norms of the exact solution. Several numerical examples involving a pre-crack, a void, and a notch are considered, and the efficacy of the proposed nodal finite element discretization is analyzed.
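The stated error bound lends itself to a standard refinement study. The sketch below uses synthetic error values (constants and errors chosen arbitrarily, standing in for measured $L^2$ errors) to show how the observed spatial order is extracted from successively halved meshes:

```python
import numpy as np

# The abstract's error model: e(h, dt) ~ C1*dt + C2*h^2/eps^2.
# Constants and "measured" errors below are synthetic, for illustration only.
eps = 0.1
C1, C2 = 0.5, 2.0
dt = 1e-6                                   # small, so the spatial term dominates
hs = np.array([1/8, 1/16, 1/32, 1/64])      # successively halved mesh sizes
errors = C1 * dt + C2 * hs**2 / eps**2      # stand-ins for measured L2 errors

# Observed order between consecutive refinements: log2(e_h / e_{h/2});
# it should approach 2, matching the h^2 term in the bound.
orders = np.log2(errors[:-1] / errors[1:])
```

When the time step is refined jointly with the mesh, the observed order instead reflects whichever term of the bound dominates.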
Mendelian randomization is an instrumental variable method that utilizes genetic information to investigate the causal effect of a modifiable exposure on an outcome. In most cases, the exposure changes over time. Understanding the time-varying causal effect of the exposure can yield detailed insights into mechanistic effects and the potential impact of public health interventions. Recently, a growing number of Mendelian randomization studies have attempted to explore time-varying causal effects. However, the proposed approaches oversimplify temporal information and rely on overly restrictive structural assumptions, limiting their reliability in addressing time-varying causal problems. This paper proposes a novel approach to estimating time-varying effects through continuous-time modelling, combining functional principal component analysis with weak-instrument-robust techniques. Our method effectively utilizes available data without making strong structural assumptions and can be applied in general settings where the exposure measurements occur at different timepoints for different individuals. We demonstrate through simulations that our proposed method performs well in estimating time-varying effects and provides reliable inference results when the time-varying effect form is correctly specified. The method could theoretically be used to estimate arbitrarily complex time-varying effects; however, there is a trade-off between model complexity and instrument strength, and estimating complex time-varying effects requires instruments that are unrealistically strong. We illustrate the application of this method in a case study examining the time-varying effects of systolic blood pressure on urea levels.
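The FPCA building block of such an approach can be sketched as follows; the trajectories, basis functions, and noise level below are simulated assumptions (and, unlike the paper's setting, the grid is common to all individuals):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated exposure trajectories on a common time grid: two smooth modes
# plus noise (illustrative data only).
t = np.linspace(0, 1, 50)
n = 300
true_scores = rng.normal(size=(n, 2)) * np.array([2.0, 1.0])
basis = np.vstack([np.sin(np.pi * t), np.cos(np.pi * t)])
X = true_scores @ basis + 0.1 * rng.normal(size=(n, t.size))

# FPCA: eigendecomposition of the discretized sample covariance operator.
Xc = X - X.mean(axis=0)
C = Xc.T @ Xc / n
evals, evecs = np.linalg.eigh(C)
order = np.argsort(evals)[::-1]           # sort eigenpairs by decreasing variance
evals, evecs = evals[order], evecs[:, order]

k = 2
pc_scores = Xc @ evecs[:, :k]             # per-individual FPC scores, the
                                          # low-dimensional summaries fed into
                                          # downstream causal-effect estimation
explained = evals[:k].sum() / evals.sum()
```

The per-individual scores replace the full trajectory, so the time-varying effect is parameterized by a small number of coefficients.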
Functional Differential Equations (FDEs) play a fundamental role in many areas of mathematical physics, including fluid dynamics (Hopf characteristic functional equation), quantum field theory (Schwinger-Dyson equation), and statistical physics. Despite their significance, computing solutions to FDEs remains a longstanding challenge in mathematical physics. In this paper, we address this challenge by introducing new approximation theory and high-performance computational algorithms designed for solving FDEs on tensor manifolds. Our approach involves approximating FDEs by high-dimensional partial differential equations (PDEs) and then solving such high-dimensional PDEs on a low-rank tensor manifold, leveraging high-performance parallel tensor algorithms. The effectiveness of the proposed approach is demonstrated through its application to the Burgers-Hopf FDE, which governs the characteristic functional of the stochastic solution to the Burgers equation evolving from a random initial state.
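In two dimensions the low-rank "tensor manifold" idea reduces to truncated SVD. A minimal sketch (the kernel and rank are illustrative assumptions) shows how a smooth discretized two-variable function is compressed to low rank with small error:

```python
import numpy as np

# A smooth two-variable function discretized on a grid has rapidly decaying
# singular values, so a rank-r truncated SVD (the 2-D analogue of a low-rank
# tensor decomposition) approximates it to high accuracy.
x = np.linspace(0, 1, 128)
F = np.exp(-np.subtract.outer(x, x) ** 2)   # smooth Gaussian kernel F(x, y)

U, s, Vt = np.linalg.svd(F)
r = 10
F_r = (U[:, :r] * s[:r]) @ Vt[:r]           # best rank-r approximation in
                                            # the Frobenius norm

rel_err = np.linalg.norm(F - F_r) / np.linalg.norm(F)
```

In higher dimensions the same compression is carried out with hierarchical tensor formats rather than a single SVD.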
In this article, we propose a finite volume discretization of a one-dimensional nonlinear reaction kinetic model proposed in [Neumann, Schmeiser, Kinet. Relat. Models 2016], which describes a two-species recombination-generation process. Specifically, we establish the long-time convergence of approximate solutions towards equilibrium, at an exponential rate. The study is based on an adaptation, to a discretization of the linearized problem, of the $L^2$ hypocoercivity method introduced in [Dolbeault, Mouhot, Schmeiser, 2015]. From this, we deduce a local result for the discrete nonlinear problem. As in the continuous framework, this result requires the establishment of a maximum principle, which necessitates the use of monotone numerical fluxes.
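The role of a monotone numerical flux in securing a discrete maximum principle can be illustrated on the simplest example, 1-D periodic advection with an upwind flux; the scheme and data below are illustrative and not the paper's reaction kinetic model:

```python
import numpy as np

# Upwind (monotone) flux for 1-D periodic advection: under the CFL condition
# each update is a convex combination of neighbouring cell values, so cell
# averages stay within their initial bounds -- a discrete maximum principle.
N, a = 200, 1.0
dx = 1.0 / N
dt = 0.4 * dx / a                       # CFL number 0.4 < 1
xc = (np.arange(N) + 0.5) * dx
u = np.where((xc > 0.3) & (xc < 0.6), 1.0, 0.0)   # discontinuous initial data
mass0, umin0, umax0 = u.sum(), u.min(), u.max()

for _ in range(300):
    flux = a * u                        # upwind flux for a > 0
    u = u - dt / dx * (flux - np.roll(flux, 1))
```

The update is conservative (total mass is unchanged) while the solution never leaves the interval spanned by the initial data, which is exactly the property the nonlinear analysis relies on.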
Belnap-Dunn logic, also known as the logic of First-Degree Entailment, is a logic that can serve as the underlying logic of theories that are inconsistent or incomplete. For various reasons, different expansions of Belnap-Dunn logic with non-classical connectives have been studied. This paper investigates whether those expansions are interdefinable with an expansion whose connectives include only classical connectives. This is worth knowing because it is difficult to say how closely a logic with non-classical connectives is related to classical logic. The notion of interdefinability of logics used here is based on a general notion of definability of a connective in a logic that seems to have been forgotten.
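For readers unfamiliar with the four-valued setting, a small sketch of the standard Belnap-Dunn semantics (the textbook encoding, shown here only for illustration) checks two familiar laws over all four values:

```python
# Belnap-Dunn truth values encoded as pairs (told-true, told-false):
# T = (1, 0), B = (1, 1), N = (0, 0), F = (0, 1), with the standard
# connectives defined componentwise.
VALUES = {"T": (1, 0), "B": (1, 1), "N": (0, 0), "F": (0, 1)}

def neg(a):
    t, f = a
    return (f, t)                     # negation swaps told-true / told-false

def conj(a, b):                       # true iff both true, false iff either false
    return (min(a[0], b[0]), max(a[1], b[1]))

def disj(a, b):                       # dual of conjunction
    return (max(a[0], b[0]), min(a[1], b[1]))

double_negation = all(neg(neg(a)) == a for a in VALUES.values())
de_morgan = all(neg(conj(a, b)) == disj(neg(a), neg(b))
                for a in VALUES.values() for b in VALUES.values())
```

Note that the value B (both told-true and told-false) is a fixed point of negation, which is what lets the logic tolerate inconsistent theories.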
The minimum covariance determinant (MCD) estimator is a popular method for robustly estimating the mean and covariance of multivariate data. We extend the MCD to the setting where the observations are matrices rather than vectors and introduce the matrix minimum covariance determinant (MMCD) estimators for robust parameter estimation. These estimators possess equivariance properties, achieve a high breakdown point, and are consistent under elliptical matrix-variate distributions. We have also developed an efficient algorithm with convergence guarantees to compute the MMCD estimators. Using the MMCD estimators, we can compute robust Mahalanobis distances that can be used for outlier detection. These distances can be decomposed into outlyingness contributions from each cell, row, or column of a matrix-variate observation using Shapley values, a concept for outlier explanation recently introduced in the multivariate setting. Simulations and examples reveal the excellent properties and usefulness of the robust estimators.
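The classical vector-valued MCD that the MMCD generalizes can be sketched via C-steps in a few lines of NumPy; the data, subset size, and number of restarts below are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 trivariate observations, 15 of them shifted outliers.
n, p = 200, 3
X = rng.normal(size=(n, p))
X[:15] += 6.0
h = (n + p + 1) // 2                 # subset size giving maximal breakdown

def mahal_sq(mu, S):
    D = X - mu
    return np.einsum('ij,jk,ik->i', D, np.linalg.inv(S), D)

def c_steps(idx, n_iter=30):
    # C-step: refit on the h observations with smallest Mahalanobis distance;
    # each step cannot increase det(S), so the iteration converges.
    mu, S = X[idx].mean(axis=0), np.cov(X[idx].T)
    for _ in range(n_iter):
        new = np.argsort(mahal_sq(mu, S))[:h]
        if np.array_equal(np.sort(new), np.sort(idx)):
            break
        idx = new
        mu, S = X[idx].mean(axis=0), np.cov(X[idx].T)
    return np.linalg.det(S), mu, S

best = None
for _ in range(10):                  # a few random elemental starts
    start = rng.choice(n, size=p + 1, replace=False)
    mu0 = X[start].mean(axis=0)
    S0 = np.cov(X[start].T) + 1e-6 * np.eye(p)   # ridge for invertibility
    cand = c_steps(np.argsort(mahal_sq(mu0, S0))[:h])
    if best is None or cand[0] < best[0]:
        best = cand

_, mu_mcd, S_mcd = best
d2 = mahal_sq(mu_mcd, S_mcd)         # robust distances flag the outliers
```

The MMCD replaces the vector mean and covariance with a matrix-variate mean and separable row/column covariances, but the concentration idea is the same.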
Several mixed-effects models for longitudinal data have been proposed to accommodate the non-linearity of late-life cognitive trajectories and assess the putative influence of covariates on these trajectories. No prior research provides a side-by-side examination of these models to offer guidance on their proper application and interpretation. In this work, we examined five statistical approaches previously used to answer research questions related to non-linear changes in cognitive aging: the linear mixed model (LMM) with a quadratic term, the LMM with splines, the functional mixed model, the piecewise linear mixed model, and the sigmoidal mixed model. We first theoretically describe the models. Next, using data from two prospective cohorts with annual cognitive testing, we compared the interpretation of the models by investigating associations of education with cognitive change before death. Lastly, we performed a simulation study to empirically evaluate the models and provide practical recommendations. Except for the LMM-quadratic, the fit of all models was generally adequate to capture the non-linearity of cognitive change, and the models were relatively robust. Although spline-based models have no interpretable non-linearity parameters, their convergence was easier to achieve and they allow graphical interpretation. In contrast, the piecewise and sigmoidal models, with interpretable non-linear parameters, may require more data to achieve convergence.
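The piecewise linear model can be sketched in its simplest fixed-effects form: a change-point trajectory fitted by least squares with a grid search over the knot. All numbers below are simulated assumptions (the models compared in the paper additionally include random effects):

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated "cognitive decline" with a change point 4 years before death:
# gentle slope before the knot, steeper decline after, plus noise.
t = np.linspace(-10, 0, 200)              # years before death
true_knot = -4.0
y = 0.1 * t + np.where(t > true_knot, -0.8 * (t - true_knot), 0.0)
y = y + 0.05 * rng.normal(size=t.size)

def sse(knot):
    # design: intercept, pre-knot slope, post-knot slope change
    Xd = np.column_stack([np.ones_like(t), t, np.maximum(t - knot, 0.0)])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    r = y - Xd @ beta
    return r @ r

knots = np.linspace(-8.0, -1.0, 71)
best_knot = knots[np.argmin([sse(k) for k in knots])]
```

The estimated knot and the two slopes are exactly the "interpretable non-linear parameters" referred to above; in the mixed-effects version the knot is harder to estimate, which is why more data are needed for convergence.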
With the increasing availability of large-scale datasets, computational power, and tools like automatic differentiation and expressive neural network architectures, sequential data are now often treated in a data-driven way, with a dynamical model trained from observation data. While neural networks are often seen as uninterpretable black-box architectures, they can still benefit from physical priors on the data and from mathematical knowledge. In this paper, we use a neural network architecture that leverages the long-known Koopman operator theory to embed dynamical systems in latent spaces where their dynamics can be described linearly, enabling a number of appealing features. We introduce methods that enable training such a model for long-term continuous reconstruction, even in difficult contexts where the data come in irregularly sampled time series. The potential for self-supervised learning is also demonstrated, as we show the promising use of trained dynamical models as priors for variational data assimilation techniques, with applications to, e.g., time-series interpolation and forecasting.
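The linear-latent-dynamics idea can be illustrated on a toy instance: snapshots of a linear system (a plane rotation, standing in for learned latent states) are used to fit a linear propagator by least squares, in the spirit of dynamic mode decomposition, and the propagator is rolled forward for long-horizon reconstruction. The system and horizon are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Ground-truth linear latent dynamics: a plane rotation by 0.1 rad per step.
theta = 0.1
A_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])

x = rng.normal(size=2)
traj = [x]
for _ in range(100):
    traj.append(A_true @ traj[-1])
traj = np.array(traj)

X, Y = traj[:-1].T, traj[1:].T            # snapshot pairs (x_k, x_{k+1})
A_fit = Y @ np.linalg.pinv(X)             # least-squares linear propagator

pred = traj[0]
for _ in range(100):                      # 100-step forecast from the start
    pred = A_fit @ pred
err = np.linalg.norm(pred - traj[-1])
```

In the neural setting, an encoder maps observations into such a latent space and the linear operator there is learned jointly, which is what makes continuous-time prediction and assimilation tractable.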
Chemical and biochemical reactions can exhibit surprisingly different behaviours, from multiple steady-state solutions to oscillatory solutions and chaotic behaviour. Such behaviour has been of great interest to researchers for many decades. The Briggs-Rauscher, Belousov-Zhabotinskii and Bray-Liebhafsky reactions, for which periodic variations in concentrations can be visualized by changes in colour, are experimental examples of oscillating behaviour in chemical systems. These types of systems are modelled by a system of partial differential equations coupled by a nonlinearity. However, analysing the patterns, one may suspect that the dynamics are generated by only a finite number of spatial Fourier modes. In fluid dynamics, it has been shown that for large times the solution is determined by a finite number of spatial Fourier modes, called determining modes. In this article, we first introduce the concept of determining modes and show that it is indeed sufficient to characterise the dynamics by only a finite number of spatial Fourier modes. In particular, we analyse the exact number of determining modes of $u$ and $v$, where the couple $(u,v)$ solves the following stochastic system \begin{equation*} \partial_t{u}(t) = r_1\Delta u(t) -\alpha_1u(t)- \gamma_1u(t)v^2(t) + f(1 - u(t)) + g(t),\quad \partial_t{v}(t) = r_2\Delta v(t) -\alpha_2v(t) + \gamma_2 u(t)v^2(t) + h(t),\quad u(0) = u_0,\;v(0) = v_0, \end{equation*} where $r_1,r_2,\gamma_1,\gamma_2>0$, $\alpha_1,\alpha_2 \ge 0$, and $g,h$ are time-dependent mappings specified later.
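A deterministic, 1-D explicit finite-difference sketch of the system above (a Gray-Scott-type model) shows the pattern-forming dynamics numerically; here $g = h = 0$ and every parameter value is chosen for illustration, not taken from the article:

```python
import numpy as np

# Explicit Euler / central differences for the reaction-diffusion system
# du/dt = r1*Lap(u) - alpha1*u - gamma1*u*v^2 + f*(1 - u)
# dv/dt = r2*Lap(v) - alpha2*v + gamma2*u*v^2
# on a periodic 1-D grid; all parameters are illustrative assumptions.
r1, r2 = 2e-5, 1e-5          # diffusion coefficients
alpha1, alpha2 = 0.0, 0.062  # linear decay rates
gamma1, gamma2 = 1.0, 1.0    # reaction strengths
f = 0.035                    # feed rate

N = 256
dx = 1.0 / N
dt = 0.25                    # r1*dt/dx^2 ~ 0.33 < 0.5: diffusion-stable

u = np.ones(N)
v = np.zeros(N)
v[N // 2 - 8 : N // 2 + 8] = 0.5          # local seed for the pattern

def lap(w):                               # periodic discrete Laplacian
    return (np.roll(w, 1) + np.roll(w, -1) - 2.0 * w) / dx**2

for _ in range(2000):
    uvv = u * v**2
    u = u + dt * (r1 * lap(u) - alpha1 * u - gamma1 * uvv + f * (1.0 - u))
    v = v + dt * (r2 * lap(v) - alpha2 * v + gamma2 * uvv)
```

Truncating the fields to their lowest spatial Fourier modes before each step and observing unchanged long-time behaviour is the numerical counterpart of the determining-modes statement analysed in the article.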
The goal of explainable Artificial Intelligence (XAI) is to generate human-interpretable explanations, but there are no computationally precise theories of how humans interpret AI-generated explanations. The lack of theory means that validation of XAI must be done empirically, on a case-by-case basis, which prevents systematic theory-building in XAI. We propose a psychological theory of how humans draw conclusions from saliency maps, the most common form of XAI explanation, which for the first time allows for precise prediction of explainee inference conditioned on explanation. Our theory posits that, absent an explanation, humans expect the AI to make decisions similar to their own, and that they interpret an explanation by comparison to the explanations they themselves would give. Comparison is formalized via Shepard's universal law of generalization in a similarity space, a classic theory from cognitive science. A pre-registered user study on AI image classifications with saliency map explanations demonstrates that our theory quantitatively matches participants' predictions of the AI.
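A toy rendering of the comparison mechanism: predicted agreement with the AI decays with the distance between the AI's saliency explanation and the explanation the human would give, following Shepard's law $s(x, y) = \exp(-d(x, y))$. The saliency vectors below are hypothetical, not the paper's fitted model:

```python
import numpy as np

# Shepard's universal law of generalization: similarity decays
# exponentially with distance in a psychological similarity space.
def shepard_similarity(x, y):
    return np.exp(-np.linalg.norm(np.asarray(x) - np.asarray(y)))

human = np.array([0.8, 0.1, 0.1])       # human's own saliency over 3 regions
ai_close = np.array([0.7, 0.2, 0.1])    # AI attends where the human would
ai_far = np.array([0.1, 0.1, 0.8])      # AI attends elsewhere

s_close = shepard_similarity(human, ai_close)
s_far = shepard_similarity(human, ai_far)
```

The explanation closer to the human's own receives higher similarity, and hence, under the theory, a higher predicted probability that the explainee endorses the AI's decision.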