Neurodegenerative diseases have a significant global impact, affecting millions of individuals worldwide. Some of them, known as proteinopathies, are characterized by the accumulation and propagation of toxic proteins, known as prions. Alzheimer's and Parkinson's diseases are relevant examples of proteinopathies. Mathematical models of prion dynamics play a crucial role in understanding disease progression and could help guide potential interventions. This article focuses on the heterodimer model: a system of two partial differential equations describing the evolution of healthy and misfolded proteins. In particular, we propose a space discretization based on a Discontinuous Galerkin method on polygonal/polyhedral grids, which provides flexibility in handling meshes of complex brain geometries. For the semi-discrete formulation, we prove stability and a-priori error estimates. Next, we adopt a $\vartheta$-method scheme for time discretization. Convergence tests are performed to confirm the theoretical bounds and the ability of the method to approximate travelling-wave solutions. The proposed scheme is also tested to simulate the spread of $\alpha$-synuclein in a realistic test case of Parkinson's disease, on a two-dimensional sagittal brain section geometry reconstructed from medical images.
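For reference, a common form of the heterodimer system (the notation here is generic; the diffusion tensor $\mathbf{D}$ and the reaction coefficients $k_0$, $k_1$, $\tilde{k}_1$, $k_{12}$ may be parametrized differently in the article) couples the healthy concentration $c$ and the misfolded concentration $q$ as
\[
\frac{\partial c}{\partial t} = \nabla\cdot(\mathbf{D}\nabla c) + k_0 - k_1 c - k_{12}\,c\,q,
\qquad
\frac{\partial q}{\partial t} = \nabla\cdot(\mathbf{D}\nabla q) - \tilde{k}_1 q + k_{12}\,c\,q.
\]
For the semi-discrete DG system written as $M\dot{\mathbf{u}} = F(\mathbf{u})$, the $\vartheta$-method advances the solution by
\[
M\,\frac{\mathbf{u}^{n+1}-\mathbf{u}^{n}}{\Delta t} = \vartheta\,F(\mathbf{u}^{n+1}) + (1-\vartheta)\,F(\mathbf{u}^{n}),
\]
recovering the forward Euler, Crank-Nicolson and backward Euler schemes for $\vartheta = 0$, $1/2$ and $1$, respectively.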
Ensuring that refugees and asylum seekers thrive (e.g., find employment) in their host countries is a profound humanitarian goal, and a primary driver of employment is the geographic location within a host country to which the refugee or asylum seeker is assigned. Recent research has proposed and implemented algorithms that assign refugees and asylum seekers to geographic locations in a manner that maximizes the average employment across all arriving refugees. While these algorithms can have a substantial overall positive impact, using data from two industry collaborators we show that their impact can vary widely across key subgroups based on country of origin, age, or educational background. Thus motivated, we develop a simple and interpretable framework for incorporating group fairness into the dynamic refugee assignment problem. In particular, the framework can flexibly incorporate many existing and future definitions of group fairness from the literature (e.g., maxmin, randomized, and proportionally-optimized within-group). Equipped with our framework, we propose two bid-price algorithms that maximize overall employment while simultaneously yielding provable group fairness guarantees. Through extensive numerical experiments using various definitions of group fairness and real-world data from the U.S. and the Netherlands, we show that our algorithms can yield substantial improvements in group fairness compared to an offline benchmark without fairness constraints, with only small relative decreases ($\approx$ 1%-5%) in global performance.
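As a rough illustration of the bid-price idea in this setting (a minimal sketch, not the authors' algorithms: the employment scores, dual prices, and capacities below are hypothetical placeholders), an arriving case is sent to the feasible location with the largest predicted employment gain net of the location's bid price:

```python
import numpy as np

def bid_price_assign(emp_scores, dual_prices, capacity_left):
    """Assign one arrival to a location via a generic bid-price rule.

    emp_scores    : array of predicted employment probabilities, one per location
    dual_prices   : array of bid prices (dual values of the capacity constraints)
    capacity_left : array of remaining slots per location
    Returns the index of the chosen location, or None if no capacity remains.
    """
    feasible = capacity_left > 0
    if not feasible.any():
        return None
    # Net benefit of each location: predicted employment minus its bid price.
    net = np.where(feasible, emp_scores - dual_prices, -np.inf)
    return int(np.argmax(net))

# Toy usage: 3 locations, one arriving case (all numbers are made up).
scores = np.array([0.42, 0.35, 0.50])
duals = np.array([0.10, 0.02, 0.30])
caps = np.array([5, 0, 2])
print(bid_price_assign(scores, duals, caps))  # -> 0
```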
In recent years a great deal of attention has been paid to discretizations of the incompressible Stokes equations that exactly preserve the incompressibility constraint. These are of substantial interest because they are pressure-robust, i.e., the error estimates for the velocity do not depend on the error in the pressure. Similar considerations arise in nearly incompressible linear elastic solids. Conforming discretizations with this property are now well understood in two dimensions, but remain poorly understood in three dimensions. In this work we state two conjectures on this subject. The first is that the Scott-Vogelius element pair is inf-sup stable on uniform meshes for velocity degree $k \ge 4$; the best result available in the literature is for $k \ge 6$. The second is that there exists a stable space decomposition of the kernel of the divergence for $k \ge 5$. We present numerical evidence supporting our conjectures.
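For context, the conjectured inf-sup stability of the Scott-Vogelius pair (continuous piecewise polynomial velocities of degree $k$ paired with discontinuous pressures of degree $k-1$) means that there exists $\beta > 0$, independent of the mesh size, such that
\[
\inf_{0 \ne q_h \in Q_h}\;\sup_{0 \ne \mathbf{v}_h \in V_h}\;
\frac{\int_\Omega q_h\,(\nabla\cdot\mathbf{v}_h)\,\mathrm{d}x}{\|\mathbf{v}_h\|_{H^1(\Omega)}\,\|q_h\|_{L^2(\Omega)}} \;\ge\; \beta;
\]
the precise mesh assumptions are those stated in the paper.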
Recently, there has been a growing interest in the relationships between unrooted and rooted phylogenetic networks. In this context, a natural question to ask is whether an unrooted phylogenetic network U can be oriented as a rooted phylogenetic network such that the latter satisfies certain structural properties. In a recent preprint, Bulteau et al. claim that it is computationally hard to decide if U has a funneled (resp. funneled tree-child) orientation when the internal vertices of U have degree at most 5. Unfortunately, the proof of their funneled tree-child result appears to be incorrect. In this paper, we present a corrected proof and show that hardness remains for other popular classes of rooted phylogenetic networks, such as funneled normal and funneled reticulation-visible networks. Additionally, our results hold regardless of whether U is rooted at an existing vertex or by subdividing an edge with the root.
The use of propensity score (PS) methods has become ubiquitous in causal inference. At the heart of these methods is the positivity assumption. Violation of the positivity assumption leads to the presence of extreme PS weights when estimating average causal effects of interest, such as the average treatment effect (ATE) or the average treatment effect on the treated (ATT), which invalidates the related statistical inference. To circumvent this issue, trimming or truncating the extreme estimated PSs has been widely used. However, these methods require that we specify a priori a threshold, and sometimes an additional smoothing parameter. While there are a number of methods dealing with the lack of positivity when estimating the ATE, surprisingly much less effort has been devoted to the same issue for the ATT. In this paper, we first review widely used methods, such as trimming and truncation, in the context of the ATT. We emphasize the underlying intuition behind these methods to better understand their applications and highlight their main limitations. Then, we argue that the current methods simply target estimands that are scaled versions of the ATT (and thus move the goalpost to a different target of interest), and we make explicit the scale and the corresponding target populations. We further propose a PS weight-based alternative for the average causal effect on the treated, called the overlap weighted average treatment effect on the treated (OWATT). The appeal of our proposed method lies in its ability to obtain similar or even better results than trimming and truncation while relaxing the need to choose a threshold (or a smoothing parameter) a priori. The performance of the proposed method is illustrated via a series of Monte Carlo simulations and a data analysis on racial disparities in health care expenditures.
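For orientation, these are the standard ingredients (not the paper's new estimand): with treatment indicator $z$ and propensity score $e(x) = \Pr(Z = 1 \mid X = x)$, the usual ATT weights are
\[
w_{\mathrm{ATT}}(x, z) = z + (1-z)\,\frac{e(x)}{1-e(x)},
\]
trimming discards units with $\hat{e}(x) \notin [\alpha, 1-\alpha]$ for a pre-specified threshold $\alpha$, and truncation caps the estimated score, $\hat{e}(x) \mapsto \min\{\max\{\hat{e}(x), \alpha\}, 1-\alpha\}$; both therefore require choosing $\alpha$ in advance, which is precisely the requirement the OWATT estimand is designed to avoid.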
We study the problem of testing whether the missing values of a potentially high-dimensional dataset are Missing Completely at Random (MCAR). We relax the problem of testing MCAR to the problem of testing the compatibility of a sequence of covariance matrices, motivated by the fact that this procedure is feasible when the dimension grows with the sample size. Tests of compatibility can be used to test the feasibility of positive semi-definite matrix completion problems with noisy observations, and thus our results may be of independent interest. Our first contributions are to define a natural measure of the incompatibility of a sequence of correlation matrices, which can be characterised as the optimal value of a Semi-definite Programming (SDP) problem, and to establish a key duality result allowing its practical computation and interpretation. By studying the concentration properties of the natural plug-in estimator of this measure, we introduce novel hypothesis tests that we prove have power against all distributions with incompatible covariance matrices. The choice of critical values for our tests relies on a new concentration inequality for the Pearson sample correlation matrix, which may be of wider interest. By considering key examples of missingness structures, we demonstrate that our procedures are minimax rate optimal in certain cases. We further validate our methodology with numerical simulations that provide evidence of validity and power, even when data are heavy-tailed.
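To make the connection with matrix completion concrete, here is a minimal sketch (illustrative only, using cvxpy; the function name and the toy data are assumptions, and the paper's incompatibility measure and tests go well beyond this plain feasibility check) of testing whether a partially observed correlation matrix admits a positive semi-definite completion:

```python
import cvxpy as cp
import numpy as np

def psd_completable(observed, mask):
    """Check whether the observed entries of a symmetric matrix admit a PSD completion.

    observed : (d, d) array with values on the observed entries (1s on the diagonal)
    mask     : (d, d) boolean array, True where the entry is observed
    """
    d = observed.shape[0]
    X = cp.Variable((d, d), PSD=True)          # the completed matrix, constrained PSD
    constraints = [X[i, j] == observed[i, j]
                   for i in range(d) for j in range(d) if mask[i, j]]
    prob = cp.Problem(cp.Minimize(0), constraints)   # pure feasibility problem
    prob.solve()
    return prob.status in (cp.OPTIMAL, cp.OPTIMAL_INACCURATE)

# Toy example: a 3x3 correlation pattern with one unobserved pair.
obs = np.array([[1.0, 0.9, 0.0],
                [0.9, 1.0, 0.9],
                [0.0, 0.9, 1.0]])
msk = np.array([[True, True, False],
                [True, True, True],
                [False, True, True]])
print(psd_completable(obs, msk))  # True: any value of X[0,2] in [0.62, 1] works
```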
Stabbing Planes (also known as Branch and Cut) is a proof system introduced very recently which, informally speaking, extends the DPLL method by branching on integer linear inequalities instead of single variables. The techniques known so far to prove size and depth lower bounds for Stabbing Planes are generalizations of those used for the Cutting Planes proof system. Size lower bounds are established by monotone circuit arguments, while depth lower bounds are obtained via communication complexity and protection. As such, these bounds apply to lifted versions of combinatorial statements. Rank lower bounds for Cutting Planes are also obtained by geometric arguments called protection lemmas. In this work we introduce two new geometric approaches to prove size/depth lower bounds in Stabbing Planes that work for any formula: (1) the antichain method, relying on Sperner's Theorem, and (2) the covering method, which uses results on essential coverings of the Boolean cube by linear polynomials, which in turn rely on Alon's combinatorial Nullstellensatz. We demonstrate their use on classes of combinatorial principles such as the Pigeonhole principle, the Tseitin contradictions and the Linear Ordering Principle. By the first method we prove almost linear size lower bounds and optimal logarithmic depth lower bounds for the Pigeonhole principle, and analogous lower bounds for the Tseitin contradictions over the complete graph and for the Linear Ordering Principle. By the covering method we obtain a superlinear size lower bound and a logarithmic depth lower bound for Stabbing Planes proofs of Tseitin contradictions over a grid graph.
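For the reader's convenience, the combinatorial fact behind the antichain method is Sperner's Theorem: every antichain in the Boolean lattice of subsets of an $n$-element set has size at most
\[
\binom{n}{\lfloor n/2 \rfloor},
\]
with equality attained by the middle layer(s) of the lattice.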
Monotone missing data is a common problem in data analysis. However, imputation combined with dimensionality reduction can be computationally expensive, especially with the increasing size of datasets. To address this issue, we propose a Blockwise principal component analysis Imputation (BPI) framework for dimensionality reduction and imputation of monotone missing data. The framework conducts Principal Component Analysis (PCA) on the observed part of each monotone block of the data and then performs imputation on the merged principal components using a chosen imputation technique. BPI can work with various imputation techniques and can significantly reduce imputation time compared to conducting dimensionality reduction after imputation. This makes it a practical and efficient approach for large datasets with monotone missing data. Our experiments validate the improvement in speed. In addition, our experiments also show that while applying MICE imputation directly to the missing data may not yield convergence, applying BPI with MICE may lead to convergence.
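A minimal sketch of the blockwise idea (illustrative only: the function name and the use of scikit-learn's PCA with mean imputation as the final step are assumptions; the paper also pairs BPI with techniques such as MICE):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.impute import SimpleImputer

def blockwise_pca_impute(blocks, n_components=2):
    """Blockwise PCA followed by imputation for monotone missing data (sketch).

    blocks : list of (n, d_g) arrays sharing the same rows; in a monotone pattern,
             each block is fully observed on its first rows and missing afterwards.
    Returns an imputed, reduced representation of shape (n, n_components * len(blocks)).
    """
    reduced = []
    for X in blocks:
        observed = ~np.isnan(X).any(axis=1)            # rows fully observed in this block
        pca = PCA(n_components=n_components).fit(X[observed])
        scores = np.full((X.shape[0], n_components), np.nan)
        scores[observed] = pca.transform(X[observed])  # project only the observed rows
        reduced.append(scores)
    merged = np.hstack(reduced)                        # merge the principal components
    return SimpleImputer(strategy="mean").fit_transform(merged)

# Toy usage: two blocks with a monotone pattern (second block missing on the last rows).
rng = np.random.default_rng(0)
b1 = rng.normal(size=(10, 5))
b2 = rng.normal(size=(10, 4)); b2[6:] = np.nan
print(blockwise_pca_impute([b1, b2]).shape)  # (10, 4)
```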
We show how to learn discrete field theories from observational data of fields on a space-time lattice. For this, we train a neural network model of a discrete Lagrangian density such that the discrete Euler--Lagrange equations are consistent with the given training data. We thus obtain a structure-preserving machine learning architecture. Lagrangian densities are not uniquely defined by the solutions of a field theory. We introduce a technique to derive regularisers for the training process which optimise the numerical regularity of the discrete field theory. Minimisation of the regularisers guarantees that, close to the training data, the discrete field theory is robust and efficient when used in numerical simulations. Further, we show how to identify structurally simple solutions of the underlying continuous field theory, such as travelling waves. This is possible even when travelling waves are not present in the training data. This is in contrast to approaches based on data-driven model order reduction, which struggle to identify suitable latent spaces containing structurally simple solutions when these are not present in the training data. The ideas are demonstrated on examples based on the wave equation and the Schr\"odinger equation.
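In a typical discrete field-theory setting (the notation here is generic and not necessarily that of the paper), the learned density $L_d$ acts on the corner values of each space-time lattice cell, the discrete action is the sum over cells, and the discrete Euler--Lagrange equations require stationarity of the action at every interior node:
\[
S_d(u) = \sum_{i,j} L_d\bigl(u_{i,j},\,u_{i+1,j},\,u_{i,j+1},\,u_{i+1,j+1}\bigr),
\qquad
\frac{\partial S_d}{\partial u_{i,j}}(u) = 0 \quad \text{for all interior nodes } (i,j).
\]
The training loss then penalizes these residuals evaluated on the observed field data, so that the data are (approximately) solutions of the learned discrete theory.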
Supercomputers have revolutionized how industries and scientific fields process large amounts of data. These machines group hundreds or thousands of computing nodes working together to execute time-consuming programs that require a large amount of computational resources. Over the years, supercomputers have expanded to include new and diverse technologies, which makes them heterogeneous. However, executing a program in a heterogeneous environment requires attention to a specific source of performance degradation: load imbalance. In this research, we address the challenges associated with load imbalance when scheduling many homogeneous tasks in a heterogeneous environment. To this end, we introduce the concept of adaptive asynchronous work-stealing. This approach collects information about the nodes and utilizes it to improve work-stealing decisions, such as victim selection and task offloading. Additionally, the proposed approach eliminates the need for extra threads to communicate information, thereby reducing the overhead of implementing a fully asynchronous approach. Our experimental results demonstrate a performance improvement of approximately 10.1\% compared to other conventional and state-of-the-art implementations.
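As a sketch of what information-guided victim selection can look like (purely illustrative: the node metrics, weighting rule, and names below are assumptions, not the scheduler proposed in the paper):

```python
import random

def pick_victim(nodes, self_id):
    """Pick a node to steal from, biased by queue length and measured throughput.

    nodes : dict mapping node id -> {"queue": pending task count, "speed": relative throughput}
    """
    candidates = {nid: info for nid, info in nodes.items()
                  if nid != self_id and info["queue"] > 0}
    if not candidates:
        return None
    # Prefer nodes with long queues and low throughput: they benefit most from offloading.
    weights = [info["queue"] / info["speed"] for info in candidates.values()]
    return random.choices(list(candidates), weights=weights, k=1)[0]

# Toy usage with hypothetical node metrics.
cluster = {
    "cpu-0": {"queue": 40, "speed": 1.0},
    "cpu-1": {"queue": 5,  "speed": 1.0},
    "gpu-0": {"queue": 2,  "speed": 4.0},
}
print(pick_victim(cluster, self_id="gpu-0"))  # most likely "cpu-0"
```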
In this work we investigate a 1D evolution equation involving a divergence-form operator in which the diffusion coefficient inside the divergence changes sign, as in models for metamaterials. We focus on the construction of a fundamental solution for the evolution equation, which does not proceed as in the case of standard parabolic PDEs, since the associated second-order operator is not elliptic. We show that a spectral representation of the semigroup associated to the equation can be derived, which leads to a first expression of the fundamental solution. We also derive a probabilistic representation in terms of a pseudo Skew Brownian Motion (SBM). This construction generalizes the one derived from the killed SBM when the diffusion coefficient is piecewise constant but remains positive. We show that the pseudo SBM can be approximated by a rescaled pseudo asymmetric random walk, which allows us to derive several numerical schemes for the resolution of the PDE, and we report the associated numerical test results.
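To fix ideas, the classical random-walk approximation of a standard skew Brownian motion with skewness parameter $\beta$ steps symmetrically away from the interface and asymmetrically at it; the pseudo SBM and the schemes derived in the paper generalize this construction. A minimal sketch:

```python
import numpy as np

def skew_random_walk(n_steps, beta, seed=None):
    """Asymmetric random walk whose diffusive rescaling approximates a skew Brownian motion.

    Away from 0 the walk is symmetric; at 0 it steps right with probability (1 + beta) / 2.
    Returns the rescaled path on a unit time horizon.
    """
    rng = np.random.default_rng(seed)
    path = np.zeros(n_steps + 1)
    pos = 0
    for k in range(n_steps):
        p_right = 0.5 * (1.0 + beta) if pos == 0 else 0.5
        pos += 1 if rng.random() < p_right else -1
        path[k + 1] = pos
    return path / np.sqrt(n_steps)   # diffusive space rescaling

# Toy usage: terminal value of one approximate SBM path with skewness 0.3.
print(skew_random_walk(10_000, beta=0.3)[-1])
```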