A new sparse semiparametric model is proposed, which incorporates the influence of two functional random variables on a scalar response in a flexible and interpretable manner. One of the functional covariates is included through a single-index structure, while the other is included linearly through the high-dimensional vector formed by its discretised observations. For this model, two new algorithms are presented for selecting the relevant variables in the linear part and estimating the model; both procedures exploit the functional origin of the linear covariates. Finite-sample experiments demonstrate the scope of application of both algorithms: the first is a fast procedure that, without loss of predictive ability, avoids the substantial computational time required by standard variable selection methods when estimating this model, while the second completes the set of relevant linear covariates provided by the first, thus improving its predictive efficiency. Both procedures are supported theoretically by asymptotic results. A real data application illustrates the applicability of the presented methodology from a predictive perspective, in terms of the interpretability of the outputs and the low computational cost.
It is shown that the maximum likelihood estimator for mixtures of elliptically symmetric distributions consistently estimates its population version, where the underlying distribution $P$ is nonparametric and does not necessarily belong to the class of mixtures on which the estimator is based. In the situation where $P$ is a mixture of sufficiently well separated but nonparametric distributions, it is shown that the components of the population version of the estimator correspond to the well separated components of $P$. This provides some theoretical justification for the use of such estimators for cluster analysis in case $P$ has well separated subpopulations, even if these subpopulations differ from what the mixture model assumes.
This work considers the nodal finite element approximation of peridynamics, in which the nodal displacements satisfy the peridynamics equation at each mesh node. For the nonlinear bond-based peridynamics model, it is shown that, under suitable assumptions on the exact solution, the discrete solution associated with the central-in-time and nodal finite element discretization converges to the exact solution in the $L^2$ norm at the rate $C_1 \Delta t + C_2 h^2/\epsilon^2$. Here, $\Delta t$, $h$, and $\epsilon$ are the time step size, the mesh size, and the horizon (nonlocal length scale), respectively. The constants $C_1$ and $C_2$ are independent of $h$ and $\Delta t$ and depend on norms of the exact solution. Several numerical examples involving a pre-crack, a void, and a notch are considered, and the efficacy of the proposed nodal finite element discretization is analyzed.
It is known that the singular values of idempotent matrices are either zero or greater than or equal to one \cite{HouC63}. We state exactly how many singular values are greater than one, equal to one, and equal to zero. Moreover, we derive a singular value decomposition of idempotent matrices which reveals a tight relationship between their left and right singular vectors. The same idea is used to extend a result on the singular values of involutory matrices presented in \cite{FasH20}.
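The cited property is easy to observe numerically. The following sketch (a hypothetical example, not taken from the paper) builds an oblique projector $P = xy^T$ with $y^Tx = 1$, which is idempotent; its nonzero singular value is $\|x\|\,\|y\| \ge 1$ by the Cauchy-Schwarz inequality, and all remaining singular values are zero.

```python
import numpy as np

# Hypothetical example: an oblique (non-orthogonal) projector P = x y^T
# with y^T x = 1 is idempotent (P @ P == P).
x = np.array([1.0, 2.0, 0.0])
y = np.array([1.0, 0.0, 3.0])          # y @ x == 1
P = np.outer(x, y)
assert np.allclose(P @ P, P)           # idempotency check

# Singular values: one equals ||x|| * ||y|| = sqrt(50) >= 1, the rest are zero.
s = np.linalg.svd(P, compute_uv=False)
```

Here rank-one structure makes the count explicit: one singular value greater than one and two equal to zero, consistent with the statement above.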
In this study, the impact of turbulent diffusion on mixing in biochemical reaction models is explored by implementing and validating different models. An in-house codebase called CHAD (Coupled Hydrodynamics and Anaerobic Digestion) is extended to incorporate turbulent diffusion, and the implementation is validated against OpenFOAM results for 2D Rayleigh-Taylor instability and lid-driven cavity simulations. The models are then tested in an application to anaerobic digestion, a widely used wastewater treatment method. The findings demonstrate that the implemented models accurately capture turbulent diffusion when provided with an accurate flow field. Specifically, chemical turbulent diffusion has only a minor effect on the biochemical reactions within the anaerobic digestion tank, while thermal turbulent diffusion significantly influences mixing. Successfully implementing turbulent diffusion models in CHAD enhances its capabilities for more accurate anaerobic digestion simulations, aiding the design and operation of anaerobic digestion reactors in real-world wastewater treatment applications.
We propose and analyze a novel approach to construct structure-preserving approximations of the Poisson-Nernst-Planck equations, focusing on the positivity-preserving and mass conservation properties. The strategy combines a standard time-marching step with a projection (or correction) step that enforces the desired physical constraints (positivity and mass conservation). Based on the $L^2$ projection, we construct a second-order Crank-Nicolson type finite difference scheme which is linear (apart from the very efficient $L^2$ projection step), positivity preserving, and mass conserving. Rigorous error estimates in the $L^2$ norm are established, showing second-order accuracy in both space and time. Alternative choices of projection, e.g. the $H^1$ projection, are also discussed. Numerical examples are presented to verify the theoretical results and demonstrate the efficiency of the proposed method.
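To illustrate the kind of correction step involved, here is a minimal sketch, under simplifying assumptions not stated in the abstract (a uniform 1D grid with lumped mass): the $L^2$ projection of cell values onto the set $\{c \ge 0,\ h\sum_i c_i = \text{mass}\}$ reduces, via the KKT conditions, to a shift-and-clip $c_i^* = \max(c_i + \lambda, 0)$ with a scalar multiplier $\lambda$ found by bisection. This is not the paper's scheme, only a generic instance of such a projection.

```python
import numpy as np

def l2_projection(c, mass, h, iters=60):
    """L2 projection of cell values c onto {c >= 0, h * sum(c) = mass}.
    KKT conditions give c*_i = max(c_i + lam, 0); the monotone function
    lam -> sum(max(c + lam, 0)) is inverted by bisection."""
    target = mass / h
    f = lambda lam: np.maximum(c + lam, 0.0).sum() - target
    lo, hi = -c.max(), target - min(c.min(), 0.0)   # f(lo) < 0 <= f(hi)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0.0 else (lo, mid)
    return np.maximum(c + 0.5 * (lo + hi), 0.0)

# Example: a time step produced a small negative concentration value.
c = np.array([-0.02, 0.35, 0.41, 0.28])
proj = l2_projection(c, mass=0.1, h=0.1)   # target cell sum: mass/h = 1.0
# proj is nonnegative and conserves the discrete mass h * sum(proj)
```

Because the projection only shifts and clips values, it is cheap relative to the linear solve of the time-marching step, which is consistent with the abstract's remark that the projection part is very efficient.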
We propose a numerical method for solving parameter-dependent hyperbolic partial differential equations (PDEs) with a moment approach, building on previous work by Marx et al. (2020). This approach relies on a very weak notion of solution of nonlinear equations, namely parametric entropy measure-valued (MV) solutions, which satisfy linear equations in the space of Borel measures. The infinite-dimensional linear problem is approximated by a hierarchy of convex, finite-dimensional semidefinite programming problems, called Lasserre's hierarchy. This yields a sequence of approximations of the moments of the occupation measure associated with the parametric entropy MV solution, which is proved to converge. Several post-processing steps can then be performed on this approximate moment sequence. In particular, the graph of the solution can be reconstructed by optimizing the Christoffel-Darboux kernel associated with the approximate measure, a powerful approximation tool able to capture a large class of irregular functions. Moreover, for uncertainty quantification problems, several quantities of interest can be estimated, sometimes directly, such as the expectation of smooth functionals of the solutions. The performance of our approach is evaluated through numerical experiments on the inviscid Burgers equation with parametrised initial conditions or parametrised flux function.
Several mixed-effects models for longitudinal data have been proposed to accommodate the non-linearity of late-life cognitive trajectories and to assess the putative influence of covariates on them. No prior research provides a side-by-side examination of these models to offer guidance on their proper application and interpretation. In this work, we examined five statistical approaches previously used to answer research questions related to non-linear changes in cognitive aging: the linear mixed model (LMM) with a quadratic term, the LMM with splines, the functional mixed model, the piecewise linear mixed model, and the sigmoidal mixed model. We first describe the models theoretically. Next, using data from two prospective cohorts with annual cognitive testing, we compared the interpretation of the models by investigating associations of education with cognitive change before death. Lastly, we performed a simulation study to evaluate the models empirically and to provide practical recommendations. Except for the LMM with a quadratic term, all models generally fit adequately to capture the non-linearity of cognitive change and were relatively robust. Although spline-based models have no interpretable non-linearity parameters, their convergence was easier to achieve and they allow graphical interpretation. In contrast, piecewise and sigmoidal models, with interpretable non-linear parameters, may require more data to achieve convergence.
Multiphysics simulations frequently require transferring solution fields between subproblems with non-matching spatial discretizations, typically using interpolation techniques. Standard methods usually measure the closeness between points by means of the Euclidean distance, which does not account for curvature, cuts, cavities or other non-trivial geometrical or topological features of the domain. This may lead to spurious oscillations in the interpolant in proximity to these features. To overcome this issue, we propose a modification of rescaled localized radial basis function (RL-RBF) interpolation that accounts for the geometry of the interpolation domain, yielding conformity and fidelity to geometrical and topological features. The proposed method, referred to as RL-RBF-G, relies on measuring the geodesic distance between data points. RL-RBF-G removes the spurious oscillations appearing in the RL-RBF interpolant, resulting in increased accuracy in domains with complex geometries. We demonstrate the effectiveness of RL-RBF-G interpolation through a convergence study in an idealized setting. Furthermore, we discuss the algorithmic aspects and the implementation of RL-RBF-G interpolation in a distributed-memory parallel framework, and present the results of a strong scalability test yielding nearly ideal results. Finally, we show the effectiveness of RL-RBF-G interpolation in multiphysics simulations by considering an application to a whole-heart cardiac electromechanics model.
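The key ingredient, the geodesic distance, can be approximated on a point cloud without any assumption on the paper's actual implementation. The sketch below (names and parameters are illustrative, not from the paper) connects nearby points into a proximity graph and runs a shortest-path search; on a horseshoe-shaped set of points, the two tips are close in Euclidean distance but far apart along the domain, which is exactly the situation where Euclidean-based interpolation couples points it should not.

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path
from scipy.spatial.distance import cdist

def geodesic_distances(pts, connect_radius):
    """Approximate pairwise geodesic distances by shortest paths on a
    proximity graph: edges connect points closer than connect_radius,
    weighted by their Euclidean distance."""
    d = cdist(pts, pts)
    graph = np.where(d <= connect_radius, d, 0.0)  # zeros = missing edges
    return shortest_path(graph, directed=False)

# Demo: points along a horseshoe (an open arc of the unit circle).
theta = np.linspace(0.0, 1.75 * np.pi, 50)
pts = np.column_stack([np.cos(theta), np.sin(theta)])
geo = geodesic_distances(pts, connect_radius=0.2)
euclid = np.linalg.norm(pts[0] - pts[-1])
# geo[0, -1] (path along the arc) is several times larger than euclid
```

Replacing the Euclidean distance with such a geodesic approximation inside the localized RBF weights is the general idea behind making the interpolant conform to cuts and cavities.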
Classical-quantum hybrid algorithms have recently garnered significant attention; they combine quantum and classical computing protocols to obtain readout from quantum circuits of interest. Recent progress by Lubasch et al. (2019) provides readout for solutions to the Schrödinger and inviscid Burgers equations by making use of a new variational quantum algorithm (VQA) that determines the ground state of a cost function expressed with a superposition of expectation values and variational parameters. In the following, we analyze additional computational prospects in which the VQA can reliably produce solutions to other PDEs that are comparable to solutions previously obtained classically, characterized here with noiseless quantum simulations. To determine the range of nonlinearities that the algorithm can process for other IVPs, we study several PDEs, beginning with the Navier-Stokes equations and progressing to other equations underlying physical phenomena ranging from electromagnetism and gravitation to wave propagation, through simulations of the Einstein, Boussinesq-type, Lin-Tsien, Camassa-Holm, Drinfeld-Sokolov-Wilson (DSW), and Hunter-Saxton equations. To formulate the optimization routines that the VQA undergoes for numerical approximation of solutions obtained as readout from quantum circuits, cost functions corresponding to each PDE are provided in the supplementary section, after which simulation results from hundreds of ZGR-QFT ansätze are generated.
This study addresses a class of mixed-integer linear programming (MILP) problems that involve uncertainty in the objective function parameters. The parameters are assumed to form a random vector whose probability distribution can only be observed through a finite training data set. Unlike most related studies in the literature, we also consider uncertainty in the underlying data set. The data uncertainty is described by a set of linear constraints for each random sample, and the uncertainty in the distribution (for a fixed realization of the data) is defined via a type-1 Wasserstein ball centered at the empirical distribution of the data. The overall problem is formulated as a three-level distributionally robust optimization (DRO) problem. First, we prove that the three-level problem admits a single-level MILP reformulation if the class of loss functions is restricted to biaffine functions. Second, we show that for several particular forms of data uncertainty, the outlined problem can be solved reasonably fast by leveraging the nominal MILP problem. Finally, we conduct a computational study in which the out-of-sample performance of our model and the computational complexity of the proposed MILP reformulation are explored numerically for several application domains.