In four-dimensional scanning transmission electron microscopy (4D STEM) a focused beam is scanned over a specimen and a diffraction pattern is recorded at each position using a pixelated detector. During the experiment, it must be ensured that the scan coordinate system of the beam is correctly calibrated relative to the detector coordinate system. Various simplified and approximate models are used, both implicitly and explicitly, for understanding and analyzing the recorded data, requiring translation between the physical reality of the instrument and the abstractions used in data interpretation. Here, we introduce a calibration method in which interactive live data processing, in combination with a digital twin, is used to match a set of models and their parameters with the action of a real-world instrument.
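As a minimal illustration of matching a simplified model to instrument behaviour (not the authors' digital-twin workflow), the sketch below fits an affine model that maps scan coordinates to the apparent shift of the diffraction pattern on the detector. The rotation, scale, offset, and noise level are hypothetical values chosen only for the example.

```python
# Minimal sketch (not the paper's implementation): fit an affine model mapping scan
# coordinates to diffraction-pattern shifts on the detector by linear least squares.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "ground truth" of the instrument: rotation, scale and offset between
# the scan grid and the detector, plus measurement noise.
theta, scale, offset = np.deg2rad(12.0), 1.7, np.array([3.0, -1.5])
A_true = scale * np.array([[np.cos(theta), -np.sin(theta)],
                           [np.sin(theta),  np.cos(theta)]])

scan_xy = np.stack(np.meshgrid(np.arange(16), np.arange(16)), -1).reshape(-1, 2).astype(float)
detector_shift = scan_xy @ A_true.T + offset + 0.05 * rng.standard_normal((len(scan_xy), 2))

# Fit the affine parameters: [x, y, 1] @ P ~ shift.
design = np.hstack([scan_xy, np.ones((len(scan_xy), 1))])
P, *_ = np.linalg.lstsq(design, detector_shift, rcond=None)
A_fit, offset_fit = P[:2].T, P[2]

print("fitted rotation (deg):", np.degrees(np.arctan2(A_fit[1, 0], A_fit[0, 0])))
print("fitted scale:", np.linalg.norm(A_fit[:, 0]))
```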
In contemporary problems involving genetic or neuroimaging data, thousands of hypotheses need to be tested. Due to their high power and finite-sample guarantees on the type I error under weak assumptions, Monte-Carlo permutation tests are often considered the gold standard for these settings. However, the enormous computational effort required for (thousands of) permutation tests is a major burden. Recently, Fischer and Ramdas (2024) constructed a permutation test for a single hypothesis in which the permutations are drawn sequentially one-by-one and the testing process can be stopped at any point without inflating the type I error. They showed that the number of permutations can be substantially reduced (under the null and the alternative) while the power remains similar. We show how their approach can be modified to make it suitable for a broad class of multiple testing procedures. In particular, we discuss its use with the Benjamini-Hochberg procedure and illustrate the application on a large dataset.
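For orientation only, the sketch below combines a classical Besag-Clifford-style sequentially stopped Monte-Carlo permutation p-value (not the Fischer-Ramdas construction studied in the paper) with the Benjamini-Hochberg procedure; the data, sample sizes, and stopping parameter are hypothetical.

```python
# Illustrative sketch: sequentially stopped permutation p-values fed into Benjamini-Hochberg.
import numpy as np

rng = np.random.default_rng(1)

def sequential_perm_pvalue(x, y, max_perms=1000, h=20):
    """Two-sample mean-difference test; stop once h permuted statistics exceed the observed one."""
    obs = abs(x.mean() - y.mean())
    pooled = np.concatenate([x, y])
    exceed = 0
    for b in range(1, max_perms + 1):
        perm = rng.permutation(pooled)
        stat = abs(perm[:len(x)].mean() - perm[len(x):].mean())
        exceed += stat >= obs
        if exceed >= h:                      # early stop: the p-value is clearly not small
            return exceed / b
    return (exceed + 1) / (max_perms + 1)

def benjamini_hochberg(pvals, q=0.1):
    """Return the indices rejected by the BH procedure at FDR level q."""
    p = np.asarray(pvals)
    order = np.argsort(p)
    thresh = q * np.arange(1, len(p) + 1) / len(p)
    below = p[order] <= thresh
    k = below.nonzero()[0].max() + 1 if below.any() else 0
    return set(order[:k])

# Hypothetical data: 50 hypotheses, the first 10 with a true mean shift.
pvals = []
for i in range(50):
    shift = 1.0 if i < 10 else 0.0
    pvals.append(sequential_perm_pvalue(rng.normal(shift, 1, 30), rng.normal(0, 1, 30)))
print("rejected:", sorted(benjamini_hochberg(pvals, q=0.1)))
```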
We describe a simple deterministic near-linear time approximation scheme for uncapacitated minimum cost flow in undirected graphs with real edge weights, a problem also known as transshipment. Specifically, our algorithm takes as input a (connected) undirected graph $G = (V, E)$, vertex demands $b \in \mathbb{R}^V$ such that $\sum_{v \in V} b(v) = 0$, positive edge costs $c \in \mathbb{R}_{>0}^E$, and a parameter $\varepsilon > 0$. In $O(\varepsilon^{-2} m \log^{O(1)} n)$ time, it returns a flow $f$ such that the net flow out of each vertex is equal to the vertex's demand and the cost of the flow is within a $(1 + \varepsilon)$ factor of optimal. Our algorithm is combinatorial and has no running time dependency on the demands or edge costs. With the exception of a recent result presented at STOC 2022 for polynomially bounded edge weights, all almost- and near-linear time approximation schemes for transshipment have relied on randomization to embed the problem instance into low-dimensional space. Our algorithm instead deterministically approximates the cost of routing decisions that would be made if the input were subject to a random tree embedding. To avoid computing the $\Omega(n^2)$ vertex-vertex distances that an approximation of this kind suggests, we also limit the available routing decisions using distances explicitly stored in the well-known Thorup-Zwick distance oracle.
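The sketch below shows the basic primitive that tree-embedding-based transshipment schemes build on, not the paper's algorithm: routing all demands along a single spanning tree, where the flow on each tree edge is forced and equals the net demand of the subtree below it. The function names, sign convention, and toy instance are illustrative assumptions.

```python
# Minimal sketch: route demands along one spanning tree and report the resulting cost.
import collections

def tree_transshipment(n, tree_edges, costs, b):
    """tree_edges: spanning tree on vertices 0..n-1; costs: dict keyed by (u, v);
    b: demands (net flow out of each vertex) summing to zero."""
    adj = collections.defaultdict(list)
    for u, v in tree_edges:
        adj[u].append(v)
        adj[v].append(u)
    parent, order = {0: None}, [0]
    for u in order:                        # BFS orders vertices away from root 0
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                order.append(v)
    supply = dict(enumerate(b))            # net supply still inside each subtree
    flow, total_cost = {}, 0.0
    for v in reversed(order[1:]):          # leaves first
        u = parent[v]
        f = supply[v]                      # net flow from v up to parent u (negative = downward)
        flow[(v, u)] = f
        total_cost += abs(f) * costs.get((v, u), costs.get((u, v)))
        supply[u] += f
    return flow, total_cost

# Tiny example: a path 0-1-2 where vertex 0 sends one unit to vertex 2 (cost 1 + 2 = 3).
print(tree_transshipment(3, [(0, 1), (1, 2)], {(0, 1): 1.0, (1, 2): 2.0}, [1.0, 0.0, -1.0]))
```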
The rise of electron microscopy has expanded our ability to acquire nanometer- and atomically resolved images of complex materials. The resulting vast datasets are typically analyzed by human operators, an intrinsically challenging process due to the multiple possible analysis steps and the corresponding need to build and optimize complex analysis workflows. We present a methodology, based on the concept of a Reward Function coupled with Bayesian Optimization, to optimize image analysis workflows dynamically. The Reward Function is engineered to closely align with the experimental objectives and broader context and is quantifiable upon completion of the analysis. Here, cross-sectional high-angle annular dark-field (HAADF) images of ion-irradiated $(Y, Dy)Ba_2Cu_3O_{7-\delta}$ thin films were used as a model system. The reward functions were constructed from the expected materials density and atomic spacings and used to drive multi-objective optimization of the classical Laplacian-of-Gaussian (LoG) method. These results can be benchmarked against deep convolutional neural network (DCNN) segmentation; the optimized LoG* compares favorably against the DCNN in the presence of additional noise. We further extend the reward function approach towards the identification of partially disordered regions, creating a physics-driven reward function and an action space of high-dimensional clustering. We posit that, with a correct definition, the reward function approach allows real-time optimization of complex analysis workflows at much higher speeds and lower computational costs than classical DCNN-based inference, ensuring the attainment of results that are both precise and aligned with the human-defined objectives.
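A minimal sketch of the idea (not the authors' code): score Laplacian-of-Gaussian blob detection with a physics-motivated reward, here the closeness of the detected column count to the count expected from a known lattice, and tune a LoG parameter against it. The synthetic image and a plain grid search standing in for the Bayesian optimizer are assumptions for illustration.

```python
# Illustrative sketch: a reward function driving the tuning of the LoG detection threshold.
import numpy as np
from skimage.feature import blob_log

rng = np.random.default_rng(2)

# Synthetic "HAADF-like" image: a 10x10 square lattice of Gaussian columns plus noise.
n_side, pitch, size = 10, 20, 220
yy, xx = np.mgrid[0:size, 0:size]
image = np.zeros((size, size))
for i in range(n_side):
    for j in range(n_side):
        cy, cx = 15 + i * pitch, 15 + j * pitch
        image += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * 3.0 ** 2))
image += 0.3 * rng.standard_normal(image.shape)
expected_count = n_side * n_side

def reward(threshold):
    """Reward = closeness of the detected column count to the count expected from the lattice."""
    blobs = blob_log(image, min_sigma=2, max_sigma=6, num_sigma=5, threshold=threshold)
    return -abs(len(blobs) - expected_count)

thresholds = np.linspace(0.05, 0.6, 12)
best = max(thresholds, key=reward)
print("best LoG threshold:", best, "reward:", reward(best))
```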
Formulating dynamical models for physical phenomena is essential for understanding the interplay between the different mechanisms and predicting the evolution of physical states. However, a dynamical model alone is often insufficient to address these fundamental tasks, as it suffers from model errors and uncertainties. One common remedy is to rely on data assimilation, where the state estimate is updated with observations of the true system. Ensemble filters sequentially assimilate observations by updating a set of samples over time. They operate in two steps: a forecast step that propagates each sample through the dynamical model and an analysis step that updates the samples with incoming observations. For accurate and robust predictions of dynamical systems, discrete solutions must preserve their critical invariants. While modern numerical solvers satisfy these invariants, existing invariant-preserving analysis steps are limited to Gaussian settings and are often not compatible with classical regularization techniques of ensemble filters, e.g., inflation and covariance tapering. The present work focuses on preserving linear invariants, such as mass, stoichiometric balance of chemical species, and electrical charges. Using tools from measure transport theory (Spantini et al., 2022, SIAM Review), we introduce a generic class of nonlinear ensemble filters that automatically preserve desired linear invariants in non-Gaussian filtering problems. By specializing this framework to the Gaussian setting, we recover a constrained formulation of the Kalman filter. Then, we show how to combine existing regularization techniques for the ensemble Kalman filter (Evensen, 1994, J. Geophys. Res.) with the preservation of the linear invariants. Finally, we assess the benefits of preserving linear invariants for the ensemble Kalman filter and nonlinear ensemble filters.
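For concreteness, the sketch below is not the transport-map construction of the paper but a simple projection-based variant of the same idea: a stochastic ensemble Kalman analysis step followed by an orthogonal projection of each updated member back onto the affine subspace defined by a linear invariant $c^\top x = m$ (here, conservation of total "mass"). The state dimension, ensemble size, and observation operator are hypothetical.

```python
# Minimal sketch: EnKF analysis step plus projection onto a linear-invariant subspace.
import numpy as np

rng = np.random.default_rng(3)
d, n_ens, obs_sd = 8, 40, 0.1

c = np.ones(d)                                 # invariant: sum of the state entries
H = np.zeros((1, d))
H[0, 0] = 1.0                                  # observe the first state component
x_true = rng.normal(size=d)
y = H @ x_true + obs_sd * rng.normal(size=1)

X = x_true + 0.5 * rng.normal(size=(n_ens, d))             # forecast ensemble (rows)
X -= (X.sum(axis=1, keepdims=True) - x_true.sum()) / d     # enforce the invariant initially
m0 = X.sum(axis=1)                                          # invariant values (all equal)

# Stochastic EnKF update with perturbed observations.
Xc = X - X.mean(axis=0)
P = Xc.T @ Xc / (n_ens - 1)
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + obs_sd ** 2 * np.eye(1))
Y_pert = y + obs_sd * rng.normal(size=(n_ens, 1))
Xa = X + (Y_pert - X @ H.T) @ K.T

# Project each analysis member back onto {x : c^T x = m0}.
Xa -= np.outer((Xa @ c - m0) / (c @ c), c)
print("max invariant violation after projection:", np.abs(Xa @ c - m0).max())
```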
Generalized cross-validation (GCV) is a widely used method for estimating the squared out-of-sample prediction risk that employs a scalar degrees-of-freedom adjustment (in a multiplicative sense) to the squared training error. In this paper, we examine the consistency of GCV for estimating the prediction risk of arbitrary ensembles of penalized least-squares estimators. We show that GCV is inconsistent for any finite ensemble of size greater than one. Towards repairing this shortcoming, we identify a correction that involves an additional scalar adjustment (in an additive sense) based on the degrees-of-freedom-adjusted training errors from each ensemble component. The proposed estimator (termed CGCV) maintains the computational advantages of GCV and requires no sample splitting, model refitting, or out-of-bag risk estimation. The estimator stems from a finer inspection of the ensemble risk decomposition and two intermediate risk estimators for the components in this decomposition. We provide a non-asymptotic analysis of CGCV and the two intermediate risk estimators for ensembles of convex penalized estimators under Gaussian features and a linear response model. Furthermore, in the special case of ridge regression, we extend the analysis to general feature and response distributions using random matrix theory, which establishes model-free uniform consistency of CGCV.
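As background (not the CGCV estimator itself), the sketch below computes standard GCV for a single ridge estimator, which is the quantity whose ensemble analogue the paper corrects: the training error is inflated by $(1 - \mathrm{df}/n)^{-2}$, with $\mathrm{df}$ the trace of the ridge hat matrix. The simulated dimensions, penalty, and noise level are arbitrary.

```python
# Background sketch: GCV for ridge regression compared against a Monte-Carlo test risk.
import numpy as np

rng = np.random.default_rng(4)
n, p, lam, sigma = 200, 50, 5.0, 1.0
X = rng.standard_normal((n, p))
beta = rng.standard_normal(p) / np.sqrt(p)
y = X @ beta + sigma * rng.standard_normal(n)

G = X.T @ X + lam * np.eye(p)
beta_hat = np.linalg.solve(G, X.T @ y)
df = np.trace(X @ np.linalg.solve(G, X.T))          # effective degrees of freedom
train_err = np.mean((y - X @ beta_hat) ** 2)
gcv = train_err / (1 - df / n) ** 2

# Monte-Carlo estimate of the out-of-sample prediction risk for comparison.
X_test = rng.standard_normal((10000, p))
y_test = X_test @ beta + sigma * rng.standard_normal(10000)
print("GCV:", gcv, "  test risk:", np.mean((y_test - X_test @ beta_hat) ** 2))
```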
A direct solver is introduced for solving overdetermined linear systems involving nonuniform discrete Fourier transform matrices. Such matrices can be transformed into a Cauchy-like form that has hierarchical low-rank structure. The rank structure of this matrix is explained, and it is shown that the ranks of the relevant submatrices grow only logarithmically with the number of columns of the matrix. A fast rank-structured hierarchical approximation method based on this analysis is developed, along with a hierarchical least-squares solver for these and related systems. The result is a direct method for inverting nonuniform discrete transforms with a complexity that is nearly linear with respect to the degrees of freedom in the problem. The solver is benchmarked against various iterative and direct solvers in the setting of inverting the one-dimensional type-II (or forward) transform, for a range of condition numbers and problem sizes (up to $4\times 10^6$ by $2\times 10^6$). These experiments demonstrate that the method is especially useful for large ill-conditioned problems with multiple right-hand sides.
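To fix notation, the dense reference sketch below (O(mn) memory, not the rank-structured solver of the paper) builds the type-II NUDFT matrix $A_{jk} = e^{-2\pi i x_j k}$ at nonuniform points $x_j$ and solves the overdetermined system $Ac \approx f$ by ordinary least squares; the problem sizes and noise level are illustrative.

```python
# Dense reference sketch for inverting the one-dimensional type-II nonuniform DFT.
import numpy as np

rng = np.random.default_rng(5)
m, n = 400, 200                       # m nonuniform samples, n Fourier modes (m >= n)
x = np.sort(rng.random(m))            # nonuniform sample locations in [0, 1)
k = np.arange(n)

A = np.exp(-2j * np.pi * np.outer(x, k))
c_true = rng.standard_normal(n) + 1j * rng.standard_normal(n)
f = A @ c_true + 1e-8 * (rng.standard_normal(m) + 1j * rng.standard_normal(m))

c_hat, *_ = np.linalg.lstsq(A, f, rcond=None)
print("relative error:", np.linalg.norm(c_hat - c_true) / np.linalg.norm(c_true))
```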
The complexity of the list homomorphism problem for signed graphs appears difficult to classify. Existing results focus on special classes of signed graphs, such as trees and reflexive signed graphs. Irreflexive signed graphs are in a certain sense the heart of the problem, as noted in a recent paper of Kim and Siggers. We focus on a special class of irreflexive signed graphs, namely those in which the unicoloured edges form a spanning path or cycle, which we call separable signed graphs. We classify the complexity of list homomorphisms to these separable signed graphs; we believe that these signed graphs will play an important role in the general resolution of the irreflexive case. We also relate our results to a conjecture of Kim and Siggers concerning the special case of semi-balanced irreflexive signed graphs; we have proved the conjecture in another paper, and the present results add structural information to that topic.
For a general class of nonlinear port-Hamiltonian systems we develop a high-order time discretization scheme with certain structure preservation properties. The possibly infinite-dimensional system under consideration possesses a Hamiltonian function, which represents an energy in the system and is conserved or dissipated along solutions. The numerical scheme is energy-consistent in the sense that the Hamiltonian of the approximate solutions at time grid points behaves accordingly. This structure preservation property is achieved by a specific design of a continuous Petrov-Galerkin (cPG) method in time. It coincides with standard cPG methods in special cases, in which the latter are energy-consistent. Examples of port-Hamiltonian ODEs and PDEs are presented to illustrate the framework. In numerical experiments the energy consistency is verified and the convergence behavior is investigated.
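As a minimal sketch of energy consistency (not the paper's general cPG scheme), the implicit midpoint rule, the lowest-order Gauss/cPG-type method, is applied below to a linear dissipative port-Hamiltonian ODE $\dot{x} = (J - R) Q x$ with quadratic Hamiltonian $H(x) = \tfrac12 x^\top Q x$. For this system the discrete energy balance $H(x_{n+1}) - H(x_n) = -h\,(Qx_{\mathrm{mid}})^\top R\,(Qx_{\mathrm{mid}}) \le 0$ holds exactly; the matrices and step size are illustrative.

```python
# Minimal sketch: implicit midpoint rule on a linear dissipative port-Hamiltonian ODE.
import numpy as np

J = np.array([[0.0, 1.0], [-1.0, 0.0]])      # skew-symmetric structure matrix
R = np.diag([0.0, 0.1])                      # positive semidefinite dissipation
Q = np.diag([2.0, 1.0])                      # Hamiltonian H(x) = x^T Q x / 2
A = (J - R) @ Q
h, x = 0.05, np.array([1.0, 0.0])

def H(z):
    return 0.5 * z @ Q @ z

energies = [H(x)]
I = np.eye(2)
for _ in range(400):
    # Solve (I - h/2 A) x_new = (I + h/2 A) x  (implicit midpoint for a linear system).
    x = np.linalg.solve(I - 0.5 * h * A, (I + 0.5 * h * A) @ x)
    energies.append(H(x))

print("Hamiltonian non-increasing:", bool(np.all(np.diff(energies) <= 1e-12)))
```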
The demagnetization field in micromagnetism is given as the gradient of a potential which solves a partial differential equation (PDE) posed in $\mathbb{R}^d$. In its most general form, this PDE is supplied with a continuity condition on the boundary of the magnetic domain, and the equation includes a discontinuity in the gradient of the potential across the boundary. Typical numerical algorithms to solve this problem rely on the representation of the potential via the Green's function, where volume and boundary integral terms need to be accurately approximated. From a computational point of view, the volume integral dominates the computational cost and can be difficult to approximate due to the singularities of the Green's function. In this article, we propose a hybrid model, where the overall potential can be approximated by solving two uncoupled PDEs posed in bounded domains, whereby the boundary conditions of one of the PDEs are obtained by a low-cost boundary integral. Moreover, we provide a convergence analysis of the method under two separate theoretical settings: periodic magnetisation and high-frequency magnetisation. Numerical examples are given to verify the convergence rates.
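For context, the sketch below evaluates the Green's-function representation that the hybrid model seeks to avoid computing in full: the magnetostatic scalar potential $\phi(x) = \frac{1}{4\pi}\big[-\int_\Omega \nabla\!\cdot\! M(y)/|x-y|\,dV + \oint_{\partial\Omega} M(y)\cdot n(y)/|x-y|\,dS\big]$ of a magnetization on the unit cube, approximated by a midpoint rule at a point outside the cube. The magnetization $M = (0, 0, z^2)$ and the grid size are hypothetical choices; the nested volume loop illustrates the dominant cost.

```python
# Illustrative sketch: naive quadrature of the volume and surface terms of the potential.
import numpy as np

N = 24                                    # quadrature points per direction
h = 1.0 / N
centers = (np.arange(N) + 0.5) * h

def M(y):                                 # hypothetical magnetization, M = (0, 0, z^2)
    return np.array([0.0, 0.0, y[2] ** 2])

def divM(y):                              # its divergence, d/dz (z^2) = 2 z
    return 2.0 * y[2]

def potential(x):
    phi = 0.0
    # Volume term: -div M / |x - y| over the cube (O(N^3) kernel evaluations per target).
    for yx in centers:
        for yy in centers:
            for yz in centers:
                y = np.array([yx, yy, yz])
                phi -= divM(y) / np.linalg.norm(x - y) * h ** 3
    # Surface term: M.n / |x - y|; only the faces z = 0 and z = 1 contribute for this M.
    for z, nz in [(0.0, -1.0), (1.0, 1.0)]:
        for yx in centers:
            for yy in centers:
                y = np.array([yx, yy, z])
                phi += M(y)[2] * nz / np.linalg.norm(x - y) * h ** 2
    return phi / (4.0 * np.pi)

print("potential at (2, 0.5, 0.5):", potential(np.array([2.0, 0.5, 0.5])))
```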
The use of three extractors, fed by linear feedback shift registers (LFSRs), for generating pseudo-random bit streams is investigated. Specifically, three configurations are studied: a standard LFSR combined with a von Neumann extractor; a modified LFSR, extended by the all-zero state, combined with an output logic that translates every three bits from the LFSR into up to two output bits; and a run extraction that compresses runs of the input bit stream into single output bits. The latter two use the bits of the primary bit stream more efficiently; the last one reaches an efficiency of 50\%. Compared to other generator logics, the three extractors investigated are less performant in terms of their cryptographic strength. However, the focus of this report is on the quality of the pseudo-random bit stream in comparison to truly random bits, and on the efficiency of converting bits of the primary stream from the LFSR into valid output bits, while fulfilling only a minimum cryptographic strength beyond that of the pure LFSR.
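A minimal sketch of the first configuration (not the report's exact implementation): a Fibonacci LFSR feeding a von Neumann extractor, which emits one bit per unequal non-overlapping pair and therefore uses roughly a quarter of the primary bits. The 16-bit register, the taps for $x^{16} + x^{14} + x^{13} + x^{11} + 1$, and the seed are illustrative choices.

```python
# Minimal sketch: Fibonacci LFSR followed by a von Neumann extractor.
def lfsr_bits(seed, taps=(15, 13, 12, 10), nbits=16, count=10000):
    """Yield `count` output bits from a Fibonacci LFSR with the given tap positions."""
    state = seed & ((1 << nbits) - 1)
    for _ in range(count):
        bit = 0
        for t in taps:
            bit ^= (state >> t) & 1
        state = ((state << 1) | bit) & ((1 << nbits) - 1)
        yield state & 1

def von_neumann(bits):
    """Consume bits in non-overlapping pairs; emit 0 for '01', 1 for '10', drop '00'/'11'."""
    it = iter(bits)
    for a, b in zip(it, it):
        if a != b:
            yield a

raw = list(lfsr_bits(seed=0xACE1))
extracted = list(von_neumann(raw))
print("raw bits:", len(raw), "extracted bits:", len(extracted),
      "efficiency:", len(extracted) / len(raw))
```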