Although the process variables of epoxy resins alter their mechanical properties, visually identifying the characteristic features of X-ray images of samples of these materials is challenging. To facilitate the identification, we approximate the magnitude of the gradient of the intensity field of the X-ray images of different kinds of epoxy resins and then use deep learning to discover the most representative features of the transformed images. In this solution of the inverse problem of finding characteristic features that discriminate samples of heterogeneous materials, we use the singular vectors obtained from the singular value decomposition of all the channels of the feature maps of the early layers in a convolutional neural network. While the most strongly activated channel gives a visual representation of the characteristic features, these are often not robust enough in practical settings. The left singular vectors of the matrix decomposition of the feature maps, on the other hand, barely change when variables such as the capacity of the network or the network architecture change. We report high classification accuracy and robust characteristic features.
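To make the two ingredients concrete, the following Python sketch (assumptions: Sobel filters as the gradient approximation, a feature map already extracted from an early convolutional layer, and a pixels-by-channels arrangement of that map; none of these details are specified in the abstract) shows the gradient-magnitude preprocessing and the SVD whose left singular vectors serve as the robust descriptors.

```python
# Illustrative sketch (not the authors' code): gradient-magnitude preprocessing
# followed by an SVD over the channels of an early-layer feature map.
import numpy as np
from scipy import ndimage

def gradient_magnitude(image):
    """Approximate |grad I| of an X-ray intensity field with Sobel filters."""
    gx = ndimage.sobel(image, axis=0)
    gy = ndimage.sobel(image, axis=1)
    return np.hypot(gx, gy)

def feature_map_singular_images(feature_map, k=3):
    """Left singular vectors of a (pixels x channels) feature-map matrix.

    feature_map: array of shape (C, H, W) from an early convolutional layer.
    Arranging the matrix as (H*W) x C is an assumption made here so that each
    left singular vector can be reshaped into an H x W "characteristic
    feature" image; these vectors are the descriptors that change little with
    network capacity or architecture.
    """
    C, H, W = feature_map.shape
    M = feature_map.reshape(C, H * W).T          # (H*W, C)
    U, S, _ = np.linalg.svd(M, full_matrices=False)
    return U[:, :k].T.reshape(k, H, W), S[:k]
```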
Epilepsy is a clinical neurological disorder characterized by recurrent and spontaneous seizures consisting of abnormal high-frequency electrical activity in the brain. In this condition, the transmembrane potential dynamics are characterized by rapid and sharp wavefronts traveling along the heterogeneous and anisotropic conduction pathways of the brain. This work employs the monodomain model, coupled with specific neuronal ionic models characterizing ion concentration dynamics, to mathematically describe brain tissue electrophysiology in grey and white matter at the organ scale. This multiscale model is discretized in space with the high-order discontinuous Galerkin method on polygonal and polyhedral grids (PolyDG) and advanced in time with a Crank-Nicolson scheme. This, on the one hand, ensures efficient and accurate simulation of the high-frequency electrical activity responsible for epileptic seizures and, on the other hand, keeps the computational costs reasonably low through a suitable combination of high-order approximations and agglomerated polytopal meshes. We numerically investigate synthetic test cases on a two-dimensional heterogeneous square domain discretized with a polygonal grid, and on a two-dimensional brainstem in a sagittal plane with an agglomerated polygonal grid that takes full advantage of the flexibility of the PolyDG approximation of the semidiscrete formulation. Finally, we provide a theoretical analysis of stability and an a priori convergence analysis for a simplified mathematical problem.
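As a rough illustration of the time discretization, the sketch below applies one Crank-Nicolson step to a generic semidiscrete system M dV/dt = -K V - M I_ion(V); treating the ionic current explicitly is an assumption made here for brevity and is not necessarily the coupling used in the paper, nor is this the PolyDG solver itself.

```python
# Minimal sketch, assuming the semidiscrete form M dV/dt = -K V - M I_ion(V):
# Crank-Nicolson on the diffusion term, explicit treatment of the ionic term.
import numpy as np

def crank_nicolson_step(V, M, K, I_ion, dt):
    """Advance the potential V by one step of size dt.

    Solves (M + dt/2 K) V_new = (M - dt/2 K) V - dt * M @ I_ion(V).
    """
    A = M + 0.5 * dt * K                       # left-hand operator
    b = (M - 0.5 * dt * K) @ V - dt * (M @ I_ion(V))
    return np.linalg.solve(A, b)
```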
Map matching is a common preprocessing step for analysing vehicle trajectories. In the theory community, the most popular approach for map matching is to compute a path on the road network that is the most spatially similar to the trajectory, where spatial similarity is measured using the Fr\'echet distance. A shortcoming of existing map matching algorithms under the Fr\'echet distance is that every time a trajectory is matched, the entire road network needs to be reprocessed from scratch. An open problem is whether one can preprocess the road network into a data structure, so that map matching queries can be answered in sublinear time. In this paper, we investigate map matching queries under the Fr\'echet distance. We provide a negative result for geometric planar graphs. We show that, unless SETH fails, there is no data structure that can be constructed in polynomial time that answers map matching queries in $O((pq)^{1-\delta})$ query time for any $\delta > 0$, where $p$ and $q$ are the complexities of the geometric planar graph and the query trajectory, respectively. We provide a positive result for realistic input graphs, which we regard as the main result of this paper. We show that for $c$-packed graphs, one can construct a data structure of $\tilde O(cp)$ size that can answer $(1+\varepsilon)$-approximate map matching queries in $\tilde O(c^4 q \log^4 p)$ time, where $\tilde O(\cdot)$ hides lower-order factors and dependence on $\varepsilon$.
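For readers unfamiliar with the similarity measure, the sketch below computes the discrete Fréchet distance between two polygonal curves; it is only the simpler, discrete counterpart of the continuous Fréchet distance that the results above are stated for, and it plays no role in the data structure itself.

```python
# Illustrative only: discrete Fréchet distance between a query trajectory P
# and one candidate path Q, both given as lists of (x, y) points.
from functools import lru_cache
from math import dist

def discrete_frechet(P, Q):
    # Simple recursive dynamic program; fine for the short curves used here.
    @lru_cache(maxsize=None)
    def c(i, j):
        d = dist(P[i], Q[j])
        if i == 0 and j == 0:
            return d
        if i == 0:
            return max(c(0, j - 1), d)
        if j == 0:
            return max(c(i - 1, 0), d)
        return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d)

    return c(len(P) - 1, len(Q) - 1)
```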
During multiple testing, researchers often adjust their alpha level to control the familywise error rate for a statistical inference about a joint union alternative hypothesis (e.g., "H1 or H2"). However, in some cases, they do not make this inference. Instead, they make separate inferences about each of the individual hypotheses that comprise the joint hypothesis (e.g., H1 and H2). For example, a researcher might use a Bonferroni correction to adjust their alpha level from the conventional level of 0.050 to 0.025 when testing H1 and H2, find a significant result for H1 (p < 0.025) and not for H2 (p > 0.025), and so claim support for H1 and not for H2. However, these separate individual inferences do not require an alpha adjustment. Only a statistical inference about the union alternative hypothesis "H1 or H2" requires an alpha adjustment because it is based on "at least one" significant result among the two tests, and so it depends on the familywise error rate. When a researcher corrects their alpha level during multiple testing but does not make an inference about the union alternative hypothesis, their correction is redundant. In the present article, I discuss this redundant correction problem, including its reduction in statistical power for tests of individual hypotheses and its potential causes vis-\`a-vis error rate confusions and the alpha adjustment ritual. I also provide three illustrations of redundant corrections from recent psychology studies. I conclude that redundant corrections represent a symptom of statisticism, and I call for a more nuanced inference-based approach to multiple testing corrections.
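The arithmetic of the example can be spelled out in a few lines; the p-values below are hypothetical and serve only to contrast the two kinds of inference.

```python
# Worked illustration: the Bonferroni-adjusted alpha is needed for the union
# inference ("H1 or H2"), not for two separate individual inferences.
alpha = 0.050
m = 2                          # number of tests
alpha_bonf = alpha / m         # 0.025, controls the familywise error rate

p1, p2 = 0.030, 0.200          # hypothetical p-values for H1 and H2

# Separate individual inferences: each test uses the unadjusted alpha.
print("H1 supported (individual):", p1 < alpha)   # True
print("H2 supported (individual):", p2 < alpha)   # False

# Union inference "H1 or H2": "at least one significant" needs the adjustment.
print("union supported:", (p1 < alpha_bonf) or (p2 < alpha_bonf))   # False
```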
We study the sharp interface limit of the stochastic Cahn-Hilliard equation with cubic double-well potential and additive space-time white noise $\epsilon^{\sigma}\dot{W}$, where $\epsilon>0$ is an interfacial width parameter. We prove that, for a sufficiently large scaling constant $\sigma >0$, the stochastic Cahn-Hilliard equation converges to the deterministic Mullins-Sekerka/Hele-Shaw problem as $\epsilon\rightarrow 0$. The convergence is shown in suitable fractional Sobolev norms as well as in the $L^p$-norm for $p\in (2, 4]$ in spatial dimension $d=2,3$. This generalizes the existing result for the space-time white noise to dimension $d=3$ and improves the existing results for smooth noise, which were so far limited to $p\in \left(2, \frac{2d+8}{d+2}\right]$ in spatial dimension $d=2,3$. As a byproduct of the analysis of the stochastic problem with space-time white noise, we identify minimal regularity requirements on the noise which allow convergence to the sharp interface limit in the $\mathbb{H}^1$-norm and also provide improved convergence estimates for the sharp interface limit of the deterministic problem.
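For orientation, a commonly written scaled form of the equation under study, with the cubic nonlinearity $f(u)=u^3-u$ arising from the double-well potential, is
\[
\partial_t u = \Delta\big(-\epsilon\,\Delta u + \epsilon^{-1} f(u)\big) + \epsilon^{\sigma}\dot W,
\qquad f(u)=F'(u),\quad F(u)=\tfrac14\,(u^2-1)^2,
\]
where the precise scaling of the interfacial terms may differ from the convention adopted in the paper.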
We consider the posets of equivalence relations on finite sets under the standard embedding ordering and under the consecutive embedding ordering. In the latter case, the relations are also assumed to have an underlying linear order, which governs consecutive embeddings. For each poset we ask the well quasi-order and atomicity decidability questions: Given finitely many equivalence relations $\rho_1,\dots,\rho_k$, is the downward closed set Av$(\rho_1,\dots,\rho_k)$ consisting of all equivalence relations which do not contain any of $\rho_1,\dots,\rho_k$: (a) well-quasi-ordered, meaning that it contains no infinite antichains? and (b) atomic, meaning that it is not a union of two proper downward closed subsets, or, equivalently, that it satisfies the joint embedding property?
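A brute-force Python sketch (practical only for very small relations, and covering only the standard embedding order; the consecutive order would additionally require the injection to be order-preserving onto a contiguous interval) makes the containment and avoidance notions concrete.

```python
# Illustrative sketch: equivalence relations on {0,...,n-1} encoded as tuples
# of block ids, a brute-force embedding test, and membership in Av(...).
from itertools import permutations

def related(rho, i, j):
    return rho[i] == rho[j]                    # same block id => equivalent

def embeds(rho, sigma):
    """Does rho (on m points) embed into sigma (on n points)?"""
    m, n = len(rho), len(sigma)
    for f in permutations(range(n), m):        # injective maps [m] -> [n]
        if all(related(rho, i, j) == related(sigma, f[i], f[j])
               for i in range(m) for j in range(i + 1, m)):
            return True
    return False

def in_avoidance_class(sigma, forbidden):
    """sigma lies in Av(forbidden) iff it contains none of the forbidden relations."""
    return not any(embeds(rho, sigma) for rho in forbidden)
```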
The reconstruction of cortical surfaces is a prerequisite for quantitative analyses of the cerebral cortex in magnetic resonance imaging (MRI). Existing segmentation-based methods separate the surface registration from the surface extraction, which is computationally inefficient and prone to distortions. We introduce Vox2Cortex-Flow (V2C-Flow), a deep mesh-deformation technique that learns a deformation field from a brain template to the cortical surfaces of an MRI scan. To this end, we present a geometric neural network that models the deformation-describing ordinary differential equation in a continuous manner. The network architecture comprises convolutional and graph-convolutional layers, which allows it to work with images and meshes at the same time. V2C-Flow is not only very fast, requiring less than two seconds to infer all four cortical surfaces, but it also establishes vertex-wise correspondences to the template during reconstruction. In addition, V2C-Flow is the first approach for cortex reconstruction that models white matter and pial surfaces jointly, therefore avoiding intersections between them. Our comprehensive experiments on internal and external test data demonstrate that V2C-Flow results in cortical surfaces that are state-of-the-art in terms of accuracy. Moreover, we show that the established correspondences are more consistent than in FreeSurfer and that they can be directly utilized for cortex parcellation and group analyses of cortical thickness.
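The sketch below is schematic, not the V2C-Flow implementation: it shows how a learned velocity field (here a hypothetical `velocity_net`, standing in for the paper's convolutional/graph-convolutional network) can be integrated with explicit Euler steps to deform template vertices while preserving their identity, which is what yields vertex-wise correspondences.

```python
# Schematic sketch: integrating a learned deformation ODE
#   dx/dt = v_theta(x, t; image)
# over template mesh vertices with explicit Euler steps.
import numpy as np

def deform_template(vertices, velocity_net, image_features, n_steps=10, T=1.0):
    """vertices: (V, 3) template coordinates; returns deformed coordinates.

    Vertex identity is preserved across steps, so every output vertex
    corresponds to the same template vertex it started from.
    """
    x = vertices.copy()
    dt = T / n_steps
    for step in range(n_steps):
        t = step * dt
        v = velocity_net(x, t, image_features)   # (V, 3) predicted velocities
        x = x + dt * v                            # explicit Euler update
    return x
```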
Although Regge finite element functions are not continuous, useful generalizations of nonlinear derivatives, such as the curvature, can be defined using them. This paper is devoted to studying the convergence of the finite element lifting of a generalized (distributional) Gauss curvature defined using a metric tensor in the Regge finite element space. Specifically, we investigate the interplay between the polynomial degree of the curvature lifting by Lagrange elements and the degree of the metric tensor in the Regge finite element space. Previously, a superconvergence result, in which the convergence rate is one order higher than expected, was obtained when the metric is the canonical Regge interpolant of the exact metric. In this work, we show that an even higher order can be obtained if the degree of the curvature lifting is reduced by one polynomial degree and if at least linear Regge elements are used. These improved convergence rates are confirmed by numerical examples.
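For orientation, one commonly used distributional definition of the Gauss curvature of a Regge metric $g_h$ (written here only as a sketch; the paper's precise definition and its lifting may differ) combines elementwise curvature, jumps of geodesic curvature across edges, and angle defects at vertices:
\[
\langle K_{g_h}, \varphi\rangle
= \sum_{T}\int_{T} K(g_h)\,\varphi\,\mathrm{d}a
+ \sum_{E}\int_{E} [\![\kappa_g(g_h)]\!]\,\varphi\,\mathrm{d}s
+ \sum_{V}\Theta_V(g_h)\,\varphi(V),
\]
where $\Theta_V$ denotes the angle defect at vertex $V$ and $\varphi$ is a test function.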
This work proposes a novel variational approximation of partial differential equations on moving geometries determined by explicit boundary representations. The benefit of the proposed formulation is the ability to handle large displacements of explicitly represented domain boundaries without generating body-fitted meshes or resorting to remeshing techniques. For the space discretization, we use a background mesh and an unfitted method that relies on integration on cut cells only. We perform this intersection by using clipping algorithms. To deal with the mesh movement, we pull back the equations to a reference configuration (the spatial mesh at the initial time slab times the time interval) that is constant in time. This way, the geometrical intersection algorithm is only required in 3D, another key property of the proposed scheme. At the end of the time slab, we compute the deformed mesh, intersect the deformed boundary with the background mesh, and consider an exact transfer operator between meshes to compute jump terms in the time discontinuous Galerkin integration. The transfer is also computed using geometrical intersection algorithms. We demonstrate the applicability of the method to fluid problems around rotating (2D and 3D) geometries described by oriented boundary meshes. We also provide a set of numerical experiments that show the optimal convergence of the method.
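The clipping step has a simple 2D analogue that conveys the idea; the sketch below (Sutherland-Hodgman clipping against a convex polygon, in Python) is illustrative only and is not the 3D intersection machinery used in the paper.

```python
# Illustrative 2D analogue of the cut-cell clipping step:
# Sutherland-Hodgman clipping of a background cell against a convex polygon.
def clip_polygon(subject, clip):
    """Clip `subject` (list of (x, y)) against convex polygon `clip` (CCW)."""
    def inside(p, a, b):
        # p lies on the left of (or on) the directed clip edge a -> b
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0]) >= 0

    def intersect(p, q, a, b):
        # Intersection of segment p-q with the infinite line through a-b
        x1, y1, x2, y2 = p[0], p[1], q[0], q[1]
        x3, y3, x4, y4 = a[0], a[1], b[0], b[1]
        den = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
        t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / den
        return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

    output = list(subject)
    for a, b in zip(clip, clip[1:] + clip[:1]):
        if not output:
            break
        polygon, output = output, []
        for p, q in zip(polygon, polygon[1:] + polygon[:1]):
            if inside(q, a, b):
                if not inside(p, a, b):
                    output.append(intersect(p, q, a, b))
                output.append(q)
            elif inside(p, a, b):
                output.append(intersect(p, q, a, b))
    return output
```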
A crucial challenge for solving problems in conflict research is in leveraging the semi-supervised nature of the data that arise. Observed response data such as counts of battle deaths over time indicate latent processes of interest such as intensity and duration of conflicts, but defining and labeling instances of these unobserved processes requires nuance and entails imprecision. The availability of such labels, however, would make it possible to study the effect of intervention-related predictors - such as ceasefires - directly on conflict dynamics (e.g., latent intensity) rather than through an intermediate proxy like observed counts of battle deaths. Motivated by this problem and the new availability of the ETH-PRIO Civil Conflict Ceasefires data set, we propose a Bayesian autoregressive (AR) hidden Markov model (HMM) framework as a sufficiently flexible machine learning approach for semi-supervised regime labeling with uncertainty quantification. We motivate our approach by illustrating the way it can be used to study the role that ceasefires play in shaping conflict dynamics. This ceasefires data set is the first systematic and globally comprehensive data on ceasefires, and our work is the first to analyze this new data and to explore the effect of ceasefires on conflict dynamics in a comprehensive and cross-country manner.
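To fix ideas, the sketch below implements only the forward (likelihood) recursion of a K-regime hidden Markov model with Gaussian AR(1) emissions; the parameters are hypothetical, the Bayesian inference layer of the paper is not reproduced, and count data such as battle deaths would typically call for a count likelihood rather than a Gaussian one.

```python
# Minimal sketch of the likelihood machinery inside an AR(1) hidden Markov
# model: regime-specific Gaussian AR(1) emissions plus the forward recursion.
import numpy as np
from scipy.stats import norm

def forward_loglik(y, pi0, P, phi, mu, sigma):
    """y: observations (T,); pi0: initial regime probs (K,); P: transitions (K, K);
    phi, mu, sigma: AR coefficient, level, and noise scale per regime (K,)."""
    T = len(y)
    log_alpha = np.log(pi0) + norm.logpdf(y[0], mu, sigma)
    for t in range(1, T):
        emit = norm.logpdf(y[t], mu + phi * (y[t - 1] - mu), sigma)  # AR(1) mean
        trans = log_alpha[:, None] + np.log(P)                        # (K, K)
        # log-sum-exp over previous regimes, done stably with logaddexp.reduce
        log_alpha = emit + np.logaddexp.reduce(trans, axis=0)
    return np.logaddexp.reduce(log_alpha)
```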
Graph representation learning for hypergraphs can be used to extract patterns among higher-order interactions that are critically important in many real world problems. Current approaches designed for hypergraphs, however, are unable to handle different types of hypergraphs and are typically not generic for various learning tasks. Indeed, models that can predict variable-sized heterogeneous hyperedges have not been available. Here we develop a new self-attention based graph neural network called Hyper-SAGNN applicable to homogeneous and heterogeneous hypergraphs with variable hyperedge sizes. We perform extensive evaluations on multiple datasets, including four benchmark network datasets and two single-cell Hi-C datasets in genomics. We demonstrate that Hyper-SAGNN significantly outperforms the state-of-the-art methods on traditional tasks while also achieving strong performance on a new task called outsider identification. Hyper-SAGNN will be useful for graph representation learning to uncover complex higher-order interactions in different applications.
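The scoring idea can be caricatured in a few lines of numpy (hypothetical weight matrices, a single attention head, and no training loop; this is not the authors' Hyper-SAGNN implementation): each node of a variable-sized candidate hyperedge receives a "dynamic" embedding from attention over the other nodes, which is compared with its "static" embedding to produce a per-node score, and the scores are averaged.

```python
# Much-simplified caricature of a self-attention hyperedge scorer.
# Assumes a candidate hyperedge with at least two nodes.
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def hyperedge_score(X, Wq, Wk, Wv, Ws, w_out):
    """X: (k, d) embeddings of the k nodes in the candidate hyperedge."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    att = softmax(Q @ K.T / np.sqrt(K.shape[1]))    # (k, k) attention weights
    np.fill_diagonal(att, 0.0)                       # attend to the *other* nodes
    att = att / att.sum(axis=1, keepdims=True)
    dynamic = np.tanh(att @ V)                       # dynamic embeddings
    static = np.tanh(X @ Ws)                         # static, position-wise embeddings
    per_node = (dynamic - static) ** 2 @ w_out       # (k,) per-node logits
    return float(1.0 / (1.0 + np.exp(-per_node.mean())))   # hyperedge probability
```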