We prove the following type of discrete entropy monotonicity for isotropic log-concave sums of independent identically distributed random vectors $X_1,\dots,X_{n+1}$ on $\mathbb{Z}^d$: $$ H(X_1+\cdots+X_{n+1}) \geq H(X_1+\cdots+X_{n}) + \frac{d}{2}\log{\Bigl(\frac{n+1}{n}\Bigr)} +o(1), $$ where $o(1)$ vanishes as $H(X_1) \to \infty$. Moreover, for the $o(1)$-term we obtain a rate of convergence $O\bigl(H(X_1)\,e^{-\frac{1}{d}H(X_1)}\bigr)$, where the implied constants depend on $d$ and $n$. This generalizes to $\mathbb{Z}^d$ the one-dimensional result of the second named author (2023). As in dimension one, our strategy is to establish that the discrete entropy $H(X_1+\cdots+X_{n})$ is close to the differential (continuous) entropy $h(X_1+U_1+\cdots+X_{n}+U_{n})$, where $U_1,\dots, U_n$ are independent and identically distributed uniform random vectors on $[0,1]^d$, and then to apply the theorem of Artstein, Ball, Barthe and Naor (2004) on the monotonicity of differential entropy. However, in dimension $d\ge2$, more involved tools from convex geometry are needed, because a suitable position is required. We show that for a log-concave function on $\mathbb{R}^d$ in isotropic position, its integral, its barycenter and its covariance matrix are close to their discrete counterparts. One of our technical tools is a discrete analogue of the upper bound on the isotropic constant of a log-concave function, which generalizes a result of Bobkov, Marsiglietti and Melbourne (2022) and may be of independent interest.
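For intuition in the simplest case $d=1$, the inequality can be checked numerically: a discretized Gaussian on $\mathbb{Z}$ is log-concave, and the entropy increments of its self-convolutions approach $\frac{1}{2}\log\frac{n+1}{n}$ as $H(X_1)$ grows. The sketch below is purely illustrative; the choice of distribution and width is ours.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (in nats) of a probability vector, ignoring zeros."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# A log-concave distribution on Z: a discretized Gaussian (width chosen
# for illustration; larger sigma means larger H(X_1) and a smaller o(1)).
sigma = 20.0
k = np.arange(-400, 401)
p = np.exp(-k**2 / (2 * sigma**2))
p /= p.sum()

conv = p.copy()                      # law of X_1
H_prev = entropy(conv)
for n in range(1, 5):
    conv = np.convolve(conv, p)      # law of X_1 + ... + X_{n+1}
    H_next = entropy(conv)
    # observed increment vs. the predicted (d = 1) gain 0.5*log((n+1)/n)
    print(n, H_next - H_prev, 0.5 * np.log((n + 1) / n))
    H_prev = H_next
```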
The central path problem is a variation of the single facility location problem. The aim is to find, in a given connected graph $G$, a path $P$ minimizing its eccentricity, which is the maximal distance from $P$ to any vertex of $G$. The path eccentricity of $G$ is the minimal eccentricity achievable over all paths in $G$. In this article we consider the path eccentricity of the class of $k$-AT-free graphs. These are graphs in which any set of three vertices contains a pair for which every path between them uses at least one vertex of the closed neighborhood at distance $k$ of the third. We prove that such graphs have path eccentricity bounded by $k$. Moreover, we answer a question of G\'omez and Guti\'errez asking whether there is a relation between path eccentricity and the consecutive ones property. The latter is the property of a binary matrix to admit a permutation of the rows placing the 1's consecutively in each column. It was already known that graphs whose adjacency matrices have the consecutive ones property have path eccentricity at most 1, and that the same remains true when the augmented adjacency matrix (with ones on the diagonal) has the consecutive ones property. We generalize these results as follows: we study graphs whose adjacency matrices can be made to satisfy the consecutive ones property after changing some values on the diagonal, and show that those graphs have path eccentricity at most 2 by showing that they are 2-AT-free.
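As a concrete companion to the definition, path eccentricity can be computed by brute force on small graphs: enumerate simple paths and take the minimum over paths of the maximum distance from any vertex to the path. The sketch below (function name ours, exponential in the graph size, for illustration only) uses networkx.

```python
import itertools
import networkx as nx

def path_eccentricity(G):
    """Brute-force path eccentricity: min over simple paths P of
    max_v dist(v, P). Exponential in |V|; small graphs only."""
    dist = dict(nx.all_pairs_shortest_path_length(G))
    nodes = list(G.nodes)
    best = float("inf")
    for s, t in itertools.combinations_with_replacement(nodes, 2):
        paths = nx.all_simple_paths(G, s, t) if s != t else iter([[s]])
        for P in paths:
            ecc = max(min(dist[v][u] for u in P) for v in nodes)
            best = min(best, ecc)
    return best

# A star K_{1,3}: a path through two leaves leaves the third at distance 1.
print(path_eccentricity(nx.star_graph(3)))  # -> 1
```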
The discretization of non-local operators, e.g., solution operators of partial differential equations or integral operators, leads to large densely populated matrices. $\mathcal{H}^2$-matrices take advantage of local low-rank structures in these matrices to provide an efficient data-sparse approximation that allows us to handle large matrices efficiently, e.g., to reduce the storage requirements to $\mathcal{O}(n k)$ for an $n\times n$ matrix with local rank $k$, and to reduce the complexity of the matrix-vector multiplication to $\mathcal{O}(n k)$ operations. In order to perform more advanced operations, e.g., to construct efficient preconditioners or evaluate matrix functions, we require algorithms that take $\mathcal{H}^2$-matrices as input and approximate the result again by $\mathcal{H}^2$-matrices, ideally with controllable accuracy. In this manuscript, we introduce an algorithm that approximates the product of two $\mathcal{H}^2$-matrices and guarantees block-relative error estimates for the submatrices of the result. It uses specialized tree structures to represent the exact product in an intermediate step, thereby allowing us to apply mathematically rigorous error control strategies.
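The notion of a block-relative error estimate can be illustrated in miniature: truncate the SVD of a submatrix so that the spectral error stays below a prescribed fraction of the block's own norm. The toy sketch below is not the paper's algorithm (which operates on $\mathcal{H}^2$ tree structures); it only shows the truncation criterion on an admissible kernel block.

```python
import numpy as np

def truncate_block(A, eps):
    """Truncated SVD of a submatrix with a block-relative spectral bound:
    ||A - A_k||_2 <= eps * ||A||_2 (first discarded singular value)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    if s[0] == 0.0:
        return A
    k = int(np.sum(s > eps * s[0]))  # keep sigma_i > eps * sigma_1
    return U[:, :k] * s[:k] @ Vt[:k, :]

# An admissible kernel block (well-separated clusters) is numerically low rank.
x = np.linspace(0.0, 1.0, 200)
y = np.linspace(3.0, 4.0, 200)
A = 1.0 / (x[:, None] - y[None, :])
Ak = truncate_block(A, 1e-6)
print(np.linalg.norm(A - Ak, 2) / np.linalg.norm(A, 2))  # <= 1e-6
```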
We propose CAPGrasp, an $\mathbb{R}^3\times \mathrm{SO}(2)$-equivariant 6-DoF continuous approach-constrained generative grasp sampler. It includes a novel learning strategy for training CAPGrasp that eliminates the need to curate massive conditionally labeled datasets, and a constrained grasp refinement technique that improves grasp poses while respecting the grasp approach directional constraints. Experimental results demonstrate that CAPGrasp is more than three times as sample efficient as unconstrained grasp samplers while achieving up to a 38% improvement in grasp success rate. CAPGrasp also achieves 4-10% higher grasp success rates than constrained but noncontinuous grasp samplers. Overall, CAPGrasp is a sample-efficient solution when grasps must originate from specific directions, such as grasping in confined spaces.
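The directional constraint can be pictured as a cone around a desired approach axis; a refinement step must keep a grasp's approach vector inside that cone. The sketch below is a hypothetical illustration of such a projection only; CAPGrasp's actual refinement operates on full 6-DoF poses through the learned sampler.

```python
import numpy as np

def project_to_cone(a, axis, max_angle):
    """Pull a unit approach vector a back into the cone of directions
    within max_angle of axis (hypothetical form of the constraint)."""
    axis = axis / np.linalg.norm(axis)
    a = a / np.linalg.norm(a)
    ang = np.arccos(np.clip(a @ axis, -1.0, 1.0))
    if ang <= max_angle:
        return a                      # already satisfies the constraint
    perp = a - (a @ axis) * axis      # component of a orthogonal to axis
    perp /= np.linalg.norm(perp)
    return np.cos(max_angle) * axis + np.sin(max_angle) * perp

# Example: clamp an approach direction to within 30 degrees of vertical.
print(project_to_cone(np.array([1.0, 0.0, 0.2]), np.array([0.0, 0.0, 1.0]),
                      np.deg2rad(30.0)))
```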
We propose and analyze a novel approach to construct structure-preserving approximations for the Poisson-Nernst-Planck equations, focusing on the positivity-preserving and mass conservation properties. The strategy consists of a standard time-marching step combined with a projection (or correction) step that enforces the desired physical constraints (positivity and mass conservation). Based on the $L^2$ projection, we construct a second order Crank-Nicolson type finite difference scheme which is linear (excluding the very efficient $L^2$ projection part), positivity preserving and mass conserving. Rigorous error estimates in the $L^2$ norm are established, which are second order accurate in both space and time. Other choices of projection, e.g. the $H^1$ projection, are also discussed. Numerical examples are presented to verify the theoretical results and demonstrate the efficiency of the proposed method.
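The flavor of the correction step can be conveyed with a toy version: the Euclidean ($l^2$) projection of a vector of cell values onto the constraint set $\{c \ge 0,\ \sum_i c_i = m\}$, computable by the classical sorting algorithm for simplex projection. This is only a stand-in for the scheme's discrete $L^2$ projection on the finite-difference grid.

```python
import numpy as np

def project_positive_mass(c, m):
    """l2 projection of cell values c onto {c >= 0, sum(c) = m}, via the
    classical sorting algorithm for simplex projection."""
    u = np.sort(c)[::-1]                     # values in decreasing order
    css = np.cumsum(u)
    ks = np.arange(1, len(c) + 1)
    k = ks[u + (m - css) / ks > 0].max()     # largest feasible support size
    tau = (css[k - 1] - m) / k               # Lagrange multiplier
    return np.maximum(c - tau, 0.0)

# Repair a concentration vector that a time step made slightly negative.
c = np.array([0.30, -0.05, 0.41, 0.34])
print(project_positive_mass(c, c.sum()))     # nonnegative, same total mass
```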
The minimum covariance determinant (MCD) estimator is a popular method for robustly estimating the mean and covariance of multivariate data. We extend the MCD to the setting where the observations are matrices rather than vectors and introduce the matrix minimum covariance determinant (MMCD) estimators for robust parameter estimation. These estimators possess equivariance properties, achieve a high breakdown point, and are consistent under elliptical matrix-variate distributions. We also develop an efficient algorithm with convergence guarantees to compute the MMCD estimators. Using the MMCD estimators, we can compute robust Mahalanobis distances that can be used for outlier detection. These distances can be decomposed into outlyingness contributions from each cell, row, or column of a matrix-variate observation using Shapley values, a concept for outlier explanation recently introduced in the multivariate setting. Simulations and examples reveal the excellent properties and usefulness of the robust estimators.
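For context, the classical vector MCD is computed by concentration (C-)steps, the building block that the MMCD algorithm extends to matrix-variate observations (the matrix case, with row and column covariances, is detailed in the paper). A minimal sketch of the vector case:

```python
import numpy as np

def mcd_c_steps(X, h, n_iter=100, seed=0):
    """Concentration (C-)steps for the classical vector MCD: iteratively
    refit mean/covariance on the h points with smallest Mahalanobis
    distance; the determinant of S decreases at each step."""
    rng = np.random.default_rng(seed)
    H = rng.choice(len(X), size=h, replace=False)
    for _ in range(n_iter):
        mu = X[H].mean(axis=0)
        S = np.cov(X[H], rowvar=False)
        d2 = np.einsum('ij,jk,ik->i', X - mu, np.linalg.inv(S), X - mu)
        H_new = np.argsort(d2)[:h]
        if set(H_new) == set(H):
            break
        H = H_new
    return mu, S, np.sqrt(d2)                # robust distances for detection

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
X[:20] += 8.0                                # 10% shifted outliers
mu, S, dist = mcd_c_steps(X, h=150)
print(np.sum(dist[:20] > np.quantile(dist, 0.9)))  # outliers flagged
```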
In order to compute the Fourier transform of a function $f$ on the real line numerically, one samples $f$ on a grid and then takes the discrete Fourier transform. We derive exact error estimates for this procedure in terms of the decay and smoothness of $f$. The analysis provides a new recipe for relating the number of samples, the sampling interval, and the grid size.
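The procedure can be demonstrated on a Gaussian, whose Fourier transform is known in closed form: sample on a grid of $n$ points covering an interval of length $L$, take the DFT, and correct scale and phase. The parameter choices below are illustrative.

```python
import numpy as np

n, L = 1024, 20.0                    # samples and truncation interval [-L/2, L/2)
dx = L / n
x = -L / 2 + dx * np.arange(n)
f = np.exp(-np.pi * x**2)            # Fourier transform equals e^{-pi xi^2}

xi = np.fft.fftfreq(n, d=dx)         # DFT frequency grid
# Riemann sum for F(xi) = int f(x) e^{-2 pi i x xi} dx; the phase factor
# accounts for the grid starting at -L/2 rather than 0.
F = dx * np.exp(1j * np.pi * L * xi) * np.fft.fft(f)

print(np.max(np.abs(F - np.exp(-np.pi * xi**2))))  # truncation + aliasing error
```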
Several mixed-effects models for longitudinal data have been proposed to accommodate the non-linearity of late-life cognitive trajectories and to assess the putative influence of covariates on them. No prior research provides a side-by-side examination of these models to offer guidance on their proper application and interpretation. In this work, we examine five statistical approaches previously used to answer research questions related to non-linear changes in cognitive aging: the linear mixed model (LMM) with a quadratic term, the LMM with splines, the functional mixed model, the piecewise linear mixed model, and the sigmoidal mixed model. We first describe the models theoretically. Next, using data from two prospective cohorts with annual cognitive testing, we compare the interpretation of the models by investigating associations of education with cognitive change before death. Lastly, we perform a simulation study to empirically evaluate the models and provide practical recommendations. Except for the LMM with a quadratic term, the fit of all models was generally adequate to capture the non-linearity of cognitive change, and the models were relatively robust. Although spline-based models have no interpretable non-linearity parameters, their convergence was easier to achieve, and they allow graphical interpretation. In contrast, piecewise and sigmoidal models, with interpretable non-linear parameters, may require more data to achieve convergence.
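To fix ideas, the first of the five approaches, the LMM with a quadratic term, can be fitted in a few lines; the sketch below uses statsmodels on synthetic long-format data (variable names and the data-generating process are ours, not the cohorts').

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic long-format data standing in for cohort data: one row per
# (participant, visit) with a cognition score, years before death
# (negative), and education. Names and coefficients are illustrative.
rng = np.random.default_rng(0)
rows = []
for pid in range(200):
    b0, b1 = rng.normal(0.0, 0.5), rng.normal(-0.10, 0.05)
    educ = rng.integers(8, 21)
    for t in np.arange(-10.0, 0.0):
        cog = b0 + b1 * t - 0.02 * t**2 + 0.03 * educ + rng.normal(0.0, 0.2)
        rows.append((pid, t, educ, cog))
df = pd.DataFrame(rows, columns=["id", "time", "educ", "cog"])

# LMM with a quadratic term: fixed linear/quadratic time effects and their
# interactions with education; random intercept and slope per participant.
fit = smf.mixedlm("cog ~ time + I(time**2) + educ + educ:time + educ:I(time**2)",
                  data=df, groups=df["id"], re_formula="~time").fit()
print(fit.summary())
```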
Multiphysics simulations frequently require transferring solution fields between subproblems with non-matching spatial discretizations, typically using interpolation techniques. Standard methods are usually based on measuring the closeness between points by means of the Euclidean distance, which does not account for curvature, cuts, cavities, or other non-trivial geometrical or topological features of the domain. This may lead to spurious oscillations in the interpolant in proximity to these features. To overcome this issue, we propose a modification of rescaled localized radial basis function (RL-RBF) interpolation that accounts for the geometry of the interpolation domain, thereby ensuring conformity and fidelity to its geometrical and topological features. The proposed method, referred to as RL-RBF-G, relies on measuring the geodesic distance between data points. RL-RBF-G removes the spurious oscillations appearing in the RL-RBF interpolant, resulting in increased accuracy in domains with complex geometries. We demonstrate the effectiveness of RL-RBF-G interpolation through a convergence study in an idealized setting. Furthermore, we discuss the algorithmic aspects and the implementation of RL-RBF-G interpolation in a distributed-memory parallel framework, and present the results of a strong scalability test yielding nearly ideal results. Finally, we show the effectiveness of RL-RBF-G interpolation in multiphysics simulations by considering an application to a whole-heart cardiac electromechanics model.
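The core idea, replacing the Euclidean distance by a geodesic one, can be sketched as follows: compute graph distances along the mesh and feed them into a standard RBF system. This omits the rescaling and localization of RL-RBF(-G), and Gaussian kernels evaluated on geodesic distances need not yield a positive definite system, so the sketch is an illustration only.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import shortest_path

def geodesic_rbf(pts, edges, values, data_idx, query_idx, eps=5.0):
    """RBF interpolation with geodesic (graph) distances along a mesh in
    place of Euclidean ones. Omits RL-RBF's rescaling/localization, and
    the resulting system is not guaranteed positive definite."""
    n = len(pts)
    w = np.linalg.norm(pts[edges[:, 0]] - pts[edges[:, 1]], axis=1)
    A = csr_matrix((w, (edges[:, 0], edges[:, 1])), shape=(n, n))
    D = shortest_path(A, directed=False)      # all-pairs geodesic distances
    phi = lambda r: np.exp(-(eps * r) ** 2)   # Gaussian RBF
    coeff = np.linalg.solve(phi(D[np.ix_(data_idx, data_idx)]),
                            values[data_idx])
    return phi(D[np.ix_(query_idx, data_idx)]) @ coeff
```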
Chemical and biochemical reactions can exhibit a surprisingly wide range of behaviours, from multiple steady-state solutions to oscillatory solutions and chaotic dynamics. Such behaviour has been of great interest to researchers for many decades. The Briggs-Rauscher, Belousov-Zhabotinskii and Bray-Liebhafsky reactions, for which periodic variations in concentrations can be visualized by changes in colour, are experimental examples of oscillating behaviour in chemical systems. These types of systems are modelled by a system of partial differential equations coupled by a nonlinearity. However, analysing the patterns, one may suspect that the dynamics are generated by only a finite number of spatial Fourier modes. In fluid dynamics, it has been shown that for large times the solution is determined by a finite number of spatial Fourier modes, called the determining modes. In this article, we first introduce the concept of determining modes and show that, indeed, it is sufficient to characterise the dynamics by only a finite number of spatial Fourier modes. In particular, we analyse the exact number of determining modes of $u$ and $v$, where the couple $(u,v)$ solves the following stochastic system
\begin{equation*}
\begin{aligned}
\partial_t u(t) &= r_1\Delta u(t) - \alpha_1 u(t) - \gamma_1 u(t)v^2(t) + f(1 - u(t)) + g(t),\\
\partial_t v(t) &= r_2\Delta v(t) - \alpha_2 v(t) + \gamma_2 u(t)v^2(t) + h(t),\\
u(0) &= u_0, \qquad v(0) = v_0,
\end{aligned}
\end{equation*}
where $r_1,r_2,\gamma_1,\gamma_2>0$, $\alpha_1,\alpha_2 \ge 0$ and $g,h$ are time-dependent mappings specified later.
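For illustration, the deterministic, one-dimensional, noise-free case ($g = h = 0$; the reaction terms are then those of the Gray-Scott model) can be simulated pseudo-spectrally to watch the energy concentrate in the low Fourier modes, which is the picture behind determining modes. All parameter values below are illustrative.

```python
import numpy as np

# Deterministic 1D version of the system above (g = h = 0), solved
# semi-implicitly: diffusion in Fourier space, reaction explicit.
N, Lx = 256, 2 * np.pi
r1, r2, a1, a2, g1, g2, f = 0.1, 0.05, 0.0, 0.1, 1.0, 1.0, 0.04
k = 2 * np.pi * np.fft.fftfreq(N, d=Lx / N)
x = np.linspace(0.0, Lx, N, endpoint=False)
u = 1 - 0.5 * np.exp(-5 * (x - np.pi) ** 2)   # localized initial perturbation
v = 0.25 * np.exp(-5 * (x - np.pi) ** 2)
dt = 0.01
for _ in range(20000):
    Ru = -a1 * u - g1 * u * v**2 + f * (1 - u)   # explicit reaction terms
    Rv = -a2 * v + g2 * u * v**2
    u = np.fft.ifft(np.fft.fft(u + dt * Ru) / (1 + dt * r1 * k**2)).real
    v = np.fft.ifft(np.fft.fft(v + dt * Rv) / (1 + dt * r2 * k**2)).real
print(np.abs(np.fft.fft(u))[:8] / N)   # energy sits in the lowest modes
```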
Classical-quantum hybrid algorithms, which combine quantum and classical computing protocols to obtain readout from quantum circuits of interest, have recently garnered significant attention. Recent progress due to Lubasch et al. (2019) provides readout for solutions to the Schr\"odinger and inviscid Burgers equations by making use of a new variational quantum algorithm (VQA) which determines the ground state of a cost function expressed with a superposition of expectation values and variational parameters. In the following, we analyze additional computational prospects in which the VQA can reliably produce solutions to other PDEs that are comparable to solutions previously realized classically, characterized through noiseless quantum simulations. To determine the range of nonlinearities that the algorithm can process for other IVPs, we study several PDEs, beginning with the Navier-Stokes equations and progressing to other equations underlying physical phenomena ranging from electromagnetism and gravitation to wave propagation, through simulations of the Einstein, Boussinesq-type, Lin-Tsien, Camassa-Holm, Drinfeld-Sokolov-Wilson (DSW), and Hunter-Saxton equations. To formulate the optimization routines that the VQA undergoes for numerical approximations of solutions obtained as readout from quantum circuits, cost functions corresponding to each PDE are provided in the supplementary section, after which simulation results from hundreds of ZGR-QFT ans\"atze are generated.
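Schematically, the VQA loop minimizes a cost of the form $\langle\psi(\theta)|C|\psi(\theta)\rangle$ over circuit parameters $\theta$. The toy classical emulation below, with a two-qubit product ansatz and a 1D Laplacian stencil as $C$ (neither the ZGR-QFT ansatz nor the paper's cost functions), shows the structure of such an optimization routine.

```python
import numpy as np
from scipy.optimize import minimize

# Toy cost operator: a 1D Laplacian stencil on a 4-point grid.
C = 2 * np.eye(4) - np.eye(4, k=1) - np.eye(4, k=-1)

def state(theta):
    """|psi(theta)> = (Ry(t0) x Ry(t1)) |00>, a minimal variational ansatz."""
    def ry(t):
        return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                         [np.sin(t / 2),  np.cos(t / 2)]])
    return np.kron(ry(theta[0]), ry(theta[1])) @ np.array([1.0, 0.0, 0.0, 0.0])

cost = lambda th: state(th) @ C @ state(th)      # <psi(theta)|C|psi(theta)>
res = minimize(cost, x0=np.array([0.3, 0.3]), method="Nelder-Mead")
print(res.x, res.fun)   # parameters approximating the ground state of C
```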