The Immersed Boundary (IB) method of Peskin (J. Comput. Phys., 1977) is useful for problems involving fluid-structure interactions or complex geometries. By making use of a regular Cartesian grid that is independent of the geometry, the IB framework yields a robust numerical scheme that can efficiently handle immersed deformable structures. Additionally, the IB method has been adapted to problems with prescribed motion and other PDEs with given boundary data. IB methods for these problems traditionally involve penalty forces which only approximately satisfy boundary conditions, or they are formulated as constraint problems. In the latter approach, one must find the unknown forces by solving an equation that corresponds to a poorly conditioned first-kind integral equation. This operation can require a large number of iterations of a Krylov method, and since a time-dependent problem requires this solve at each time step, this method can be prohibitively inefficient without preconditioning. In this work, we introduce a new, well-conditioned IB formulation for boundary value problems, which we call the Immersed Boundary Double Layer (IBDL) method. We present the method as it applies to Poisson and Helmholtz problems to demonstrate its efficiency over the original constraint method. In this double layer formulation, the equation for the unknown boundary distribution corresponds to a well-conditioned second-kind integral equation that can be solved efficiently with a small number of iterations of a Krylov method. Furthermore, the iteration count is independent of both the mesh size and immersed boundary point spacing. The method converges away from the boundary, and when combined with a local interpolation, it converges in the entire PDE domain. Additionally, while the original constraint method applies only to Dirichlet problems, the IBDL formulation can also be used for Neumann conditions.
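For context, the conditioning contrast described above mirrors classical potential theory for the interior Dirichlet problem: loosely speaking, the constraint IB formulation plays the role of a single-layer (first-kind) representation, while the IBDL formulation plays the role of a double-layer (second-kind) one. A schematic illustration for the Laplace equation with free-space Green's function $G$ (not the paper's exact discrete operators; the sign of the identity term depends on the normal orientation and on whether the interior or exterior problem is posed) is
\[
\int_\Gamma G(x,y)\,\sigma(y)\,ds_y = g(x) \quad\text{(first kind)},
\qquad
-\tfrac12\,\mu(x) + \int_\Gamma \frac{\partial G}{\partial n_y}(x,y)\,\mu(y)\,ds_y = g(x) \quad\text{(second kind)},
\]
for $x \in \Gamma$. The identity term in the second-kind equation is what keeps the discretized system well conditioned and the Krylov iteration counts essentially independent of the grid and boundary point spacing.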
White matter bundle segmentation is a cornerstone of modern tractography for studying the brain's structural connectivity in domains such as neurological disorders, neurosurgery, and aging. In this study, we present FIESTA (FIbEr Segmentation in Tractography using Autoencoders), a reliable and robust, fully automated, and easily semi-automatically calibrated pipeline based on deep autoencoders that can dissect and fully populate white matter bundles. This pipeline builds upon previous works that demonstrated how autoencoders can be used successfully for streamline filtering, bundle segmentation, and streamline generation in tractography. Our proposed method improves bundle segmentation coverage by recovering hard-to-track bundles through generative sampling in the latent space, seeded from both the subject and atlas bundles. A latent space of streamlines is learned using autoencoder-based modeling combined with contrastive learning. Using an atlas of bundles in standard (MNI) space, our proposed method segments new tractograms using the autoencoder latent distance between each tractogram streamline and its closest neighbor bundle in the atlas. Intra-subject bundle reliability is improved by recovering hard-to-track streamlines, using the autoencoder to generate new streamlines that increase the spatial coverage of each bundle while remaining anatomically correct. Results show that our method is more reliable than state-of-the-art automated virtual dissection methods such as RecoBundles, RecoBundlesX, TractSeg, White Matter Analysis, and XTRACT. Our framework allows for the transition from one anatomical bundle definition to another with marginal calibration effort. Overall, these results show that our framework improves the practicality and usability of current state-of-the-art bundle segmentation frameworks.
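As a rough illustration of the latent-distance segmentation step described above, the following Python sketch assigns each streamline to the bundle of its nearest atlas neighbor in the latent space and discards streamlines that are too far from every bundle. The names `encoder`, `atlas_latent`, `atlas_labels`, and `threshold` are hypothetical placeholders, not part of the FIESTA code base.

```python
import numpy as np

def assign_streamlines(encoder, streamlines, atlas_latent, atlas_labels, threshold):
    """Hypothetical sketch: label each streamline by the bundle of its nearest
    atlas streamline in the autoencoder latent space, or reject it if too far away."""
    # Encode the (resampled) streamlines into the learned latent space.
    z = encoder(np.asarray(streamlines))                      # (n_streamlines, latent_dim)
    # Euclidean distances between every streamline and every atlas streamline code.
    d = np.linalg.norm(z[:, None, :] - atlas_latent[None, :, :], axis=-1)
    nearest = d.argmin(axis=1)                                # closest atlas streamline
    labels = np.asarray(atlas_labels)[nearest]                # its bundle label (assumed integer)
    labels[d.min(axis=1) > threshold] = -1                    # -1 means "no bundle"
    return labels
```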
We present new Dirichlet-Neumann and Neumann-Dirichlet algorithms with a time domain decomposition applied to unconstrained parabolic optimal control problems. After a spatial semi-discretization, we use the Lagrange multiplier approach to derive a coupled forward-backward optimality system, which can then be solved using a time domain decomposition. Due to the forward-backward structure of the optimality system, three variants can be found for the Dirichlet-Neumann and Neumann-Dirichlet algorithms. We analyze their convergence behavior and determine the optimal relaxation parameter for each algorithm. Our analysis reveals that the most natural algorithms are actually only good smoothers, and there are better choices which lead to efficient solvers. We illustrate our analysis with numerical experiments.
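As a point of reference, for a standard tracking-type problem after spatial semi-discretization, the coupled forward-backward optimality system referred to above takes the following form (a model setting; the paper's precise formulation may differ in details and sign conventions):
\[
\min_u\ \frac12\int_0^T \|y(t)-\hat y(t)\|^2\,dt + \frac{\nu}{2}\int_0^T \|u(t)\|^2\,dt
\quad\text{subject to}\quad y'(t) = A\,y(t) + u(t),\quad y(0)=y_0.
\]
Eliminating the control via $u=-\lambda/\nu$, the first-order optimality conditions read
\[
y' = A\,y - \tfrac{1}{\nu}\,\lambda,\quad y(0)=y_0;
\qquad
-\lambda' = A^{\mathsf T}\lambda + (y-\hat y),\quad \lambda(T)=0.
\]
The state equation runs forward in time and the adjoint equation backward, and it is this coupled structure on which the time domain decomposition and the Dirichlet-Neumann/Neumann-Dirichlet transmission conditions act.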
We study the power of randomness in the Number-on-Forehead (NOF) model in communication complexity. We construct an explicit 3-player function $f:[N]^3 \to \{0,1\}$ such that: (i) there exists a randomized NOF protocol computing it that sends a constant number of bits; but (ii) any deterministic or nondeterministic NOF protocol computing it requires sending about $(\log N)^{1/3}$ bits. This exponentially improves upon the previously best-known such separation. At the core of our proof is an extension of a recent result of the first and third authors on sets of integers without 3-term arithmetic progressions to a non-arithmetic setting.
We introduce a new class of absorbing boundary conditions (ABCs) for the Helmholtz equation. The proposed ABCs can be derived from a simple class of perfectly matched layers with $L$ discrete layers, discretized with $Q_N$ Lagrange finite elements in conjunction with an $N$-point Gauss-Legendre reduced integration rule. The proposed ABCs are classified by a tuple $(L,N)$ and achieve a reflection error of order $O(R^{2LN})$ for some $R<1$. The new ABCs generalise the perfectly matched discrete layers proposed by Guddati and Lim [Int. J. Numer. Meth. Engng 66 (6) (2006) 949-977], including them as the type $(L,1)$. An analysis of the proposed ABCs is performed, motivated by the work of Ainsworth [J. Comput. Phys. 198 (1) (2004) 106-130]. The new ABCs simplify numerical implementations of the Helmholtz problem with ABCs when $Q_N$ finite elements are used in the physical domain. Moreover, the analysis presented in this work provides further insight and may aid the development of ABCs in related areas.
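For background, discrete-layer absorbing conditions of this kind are typically obtained from the complex coordinate stretching that defines a perfectly matched layer. In one dimension, with wavenumber $k$ and absorption profile $\sigma \ge 0$ supported in the layer (a generic illustration, not the specific class used in the paper),
\[
\tilde{x} = x + \frac{i}{k}\int_0^x \sigma(s)\,ds
\qquad\Longrightarrow\qquad
e^{ik\tilde{x}} = e^{ikx}\,e^{-\int_0^x \sigma(s)\,ds},
\]
so outgoing waves enter the layer without reflection and decay exponentially inside it; discretizing such a layer with $L$ elements and reduced integration then yields discrete absorbing conditions of the type classified by $(L,N)$ above.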
Replication studies are increasingly conducted to assess the credibility of scientific findings. Most of these replication attempts target studies with a superiority design, and there is a lack of methodology regarding the analysis of replication studies with alternative types of designs, such as equivalence. In order to fill this gap, we propose two approaches, the two-trials rule and the sceptical TOST procedure, adapted from methods used in superiority settings. Both methods have the same overall Type-I error rate, but the sceptical TOST procedure allows replication success even for non-significant original or replication studies. This leads to a larger project power and other differences in relevant operating characteristics. Both methods can be used for sample size calculation of the replication study, based on the results from the original one. The two methods are applied to data from the Reproducibility Project: Cancer Biology.
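To make the equivalence setting concrete, the following Python sketch implements the standard TOST p-value for an equivalence margin and a two-trials-rule style criterion that requires both the original and the replication study to establish equivalence at level alpha. It is a minimal illustration under a normal approximation; the sceptical TOST procedure and the paper's exact calibration are not shown.

```python
from scipy.stats import norm

def tost_p(estimate, se, margin):
    """TOST p-value for equivalence: H0: |theta| >= margin vs H1: |theta| < margin,
    assuming an approximately normal estimate with standard error se."""
    p_lower = norm.sf((estimate + margin) / se)    # one-sided test of theta <= -margin
    p_upper = norm.cdf((estimate - margin) / se)   # one-sided test of theta >= +margin
    return max(p_lower, p_upper)

def two_trials_equivalence(original, replication, margin, alpha=0.05):
    """Two-trials-rule style criterion: success only if both studies are equivalent at level alpha.
    'original' and 'replication' are (estimate, standard_error) pairs."""
    return all(tost_p(est, se, margin) <= alpha for est, se in (original, replication))
```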
We consider a linear implicit-explicit (IMEX) time discretization of the Cahn-Hilliard equation with a source term, endowed with Dirichlet boundary conditions. For every sufficiently small time step, we build an exponential attractor of the discrete-in-time dynamical system associated with the discretization. We prove that, as the time step tends to 0, this attractor converges in the symmetric Hausdorff distance to an exponential attractor of the continuous-in-time dynamical system associated with the PDE. We also prove that the fractal dimension of the exponential attractor (and consequently, of the global attractor) is bounded by a constant independent of the time step. The results also apply to the classical Cahn-Hilliard equation with Neumann boundary conditions.
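For orientation, one common linear IMEX time discretization of the Cahn-Hilliard equation with a source term $g$ treats the stiff linear fourth-order part implicitly and the nonlinearity explicitly; a representative first-order scheme (the scheme analyzed in the paper may include a stabilization term and differs in details) is
\[
\frac{u^{n+1}-u^n}{\tau} = \Delta\bigl(-\varepsilon^2 \Delta u^{n+1} + f(u^n)\bigr) + g^{n+1},
\]
where $f$ is the derivative of the double-well potential and $\tau$ the time step. Each step then requires only one linear solve, which makes such schemes attractive for the long-time computations underlying attractor results.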
The Sinc approximation applied to double-exponentially decaying functions is referred to as the DE-Sinc approximation. Because of its high efficiency, this method has been used in various applications. In the Sinc approximation, the mesh size and truncation numbers should be selected optimally to achieve the best performance. However, the standard selection formula is only "near-optimal," because the optimal mesh size cannot be expressed in terms of elementary functions of the truncation numbers. In this study, we propose two improved selection formulas. The first is based on the concept of an earlier study that resulted in a better selection formula for the double-exponential formula; it performs slightly better than the standard formula but is still not optimal. For the second selection formula, we introduce a new parameter to obtain a truly optimal selection formula. We provide explicit error bounds for both selection formulas. Numerical comparisons show that the first formula gives a better error bound than the standard formula, and the second formula gives a much better error bound than both the standard and the first formulas.
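For concreteness, the following Python sketch evaluates the DE-Sinc approximation of a function $f$ on $(-1,1)$ for given mesh size $h$ and truncation numbers $M, N$; choosing $h$, $M$, and $N$ together is exactly the selection problem the paper addresses, so no particular selection formula is hard-coded here.

```python
import numpy as np

def de_sinc_approx(f, x, h, M, N):
    """Hedged sketch of the DE-Sinc approximation of f on (-1, 1):
    sample f at the double-exponentially transformed Sinc points psi(k*h) and
    combine the samples with shifted sinc basis functions evaluated at psi^{-1}(x)."""
    x = np.asarray(x, dtype=float)                     # evaluation points in (-1, 1)
    k = np.arange(-M, N + 1)
    nodes = np.tanh(0.5 * np.pi * np.sinh(k * h))      # psi(kh), with psi(t) = tanh((pi/2) sinh t)
    t = np.arcsinh(2.0 / np.pi * np.arctanh(x))        # psi^{-1}(x)
    basis = np.sinc((t[..., None] - k * h) / h)        # S(k, h)(t) = sinc((t - kh)/h)
    return basis @ f(nodes)
```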
The purpose of this work is to develop a framework to calibrate signed datasets so as to be consistent with specified marginals by suitably extending the Schr\"odinger-Fortet-Sinkhorn paradigm. Specifically, we seek to revise sign-indefinite multi-dimensional arrays in a way that the updated values agree with specified marginals. Our approach follows the rationale in Schr\"odinger's problem, aimed at updating a "prior" probability measure to agree with marginal distributions. The celebrated Sinkhorn's algorithm (established earlier by R.\ Fortet) that solves Schr\"odinger's problem found early applications in calibrating contingency tables in statistics and, more recently, multi-marginal problems in machine learning and optimal transport. Herein, we postulate a sign-indefinite prior in the form of a multi-dimensional array, and propose an optimization problem to suitably update this prior to ensure consistency with given marginals. The resulting algorithm generalizes the Sinkhorn algorithm in that it amounts to iterative scaling of the entries of the array along different coordinate directions. The scaling is multiplicative but also, in contrast to Sinkhorn, inverse-multiplicative depending on the sign of the entries. Our algorithm reduces to the classical Sinkhorn algorithm when the entries of the prior are positive.
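For reference, the classical two-marginal Sinkhorn iteration to which the proposed algorithm reduces for a strictly positive prior is sketched below; the sign-indefinite case, where negative entries are rescaled inverse-multiplicatively, is the paper's contribution and is not reproduced here.

```python
import numpy as np

def sinkhorn(prior, row_marginal, col_marginal, n_iter=1000, tol=1e-10):
    """Classical Sinkhorn scaling of a strictly positive 2-D array: alternately rescale
    rows and columns until the row and column sums match the prescribed marginals
    (which must have equal total mass)."""
    P = np.array(prior, dtype=float)
    r = np.asarray(row_marginal, dtype=float)
    c = np.asarray(col_marginal, dtype=float)
    for _ in range(n_iter):
        P *= (r / P.sum(axis=1))[:, None]   # scale rows to match the row marginal
        P *= (c / P.sum(axis=0))[None, :]   # scale columns to match the column marginal
        if np.allclose(P.sum(axis=1), r, atol=tol):
            break
    return P
```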
Differential geometric approaches are ubiquitous in several fields of mathematics, physics and engineering, and their discretizations enable the development of network-based mathematical and computational frameworks, which are essential for large-scale data science. The Forman-Ricci curvature (FRC), a statistical measure based on Riemannian geometry and designed for networks, is known for its high capacity for extracting geometric information from complex networks. However, extracting information from dense networks is still challenging due to the combinatorial explosion of high-order network structures. Motivated by this challenge, we sought a set-theoretic representation theory for high-order network cells and FRC, as well as their associated concepts and properties, which together provide an alternative and efficient formulation for computing high-order FRC in complex networks. We provide pseudo-code, a software implementation coined FastForman, and a benchmark comparison with alternative implementations. Crucially, our representation theory reveals previous computational bottlenecks and also accelerates the computation of FRC. As a consequence, our findings open new research possibilities in complex systems where higher-order geometric computations are required.
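As a small illustration of edge-level FRC (not the paper's set-theoretic high-order formulation or the FastForman implementation), the sketch below computes, for an unweighted graph, the combinatorial Forman-Ricci curvature of each edge and, optionally, the commonly used augmented variant that accounts for triangles as 2-cells.

```python
import networkx as nx

def forman_ricci_edges(G, include_triangles=True):
    """Forman-Ricci curvature of edges in an unweighted, undirected graph:
    F(e) = 4 - deg(u) - deg(v), plus 3 per incident triangle in the augmented variant."""
    curvature = {}
    for u, v in G.edges():
        f = 4 - G.degree(u) - G.degree(v)
        if include_triangles:
            f += 3 * len(set(G[u]) & set(G[v]))   # common neighbours = triangles on (u, v)
        curvature[(u, v)] = f
    return curvature

# Example: curvature of the edges of a small random graph.
print(forman_ricci_edges(nx.erdos_renyi_graph(10, 0.4, seed=1)))
```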
We review how to simulate continuous determinantal point processes (DPPs), improve the current simulation algorithms in several important special cases, and detail how certain types of conditional simulation can be carried out. Importantly, we show how to speed up the simulation of the widely used Fourier-based projection DPPs, which arise as approximations of more general DPPs. The algorithms are implemented and published as open-source software.
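To illustrate the sequential-conditioning structure that samplers for projection DPPs share, here is a hedged sketch for the finite (discrete) case with kernel $K = VV^{\mathsf T}$, where $V$ has orthonormal columns; the paper's continuous, Fourier-based algorithms are more involved and are not reproduced here.

```python
import numpy as np

def sample_projection_dpp(V, rng=None):
    """Exact sampling of a finite projection DPP with kernel K = V @ V.T
    (V has orthonormal columns), via the standard sequential algorithm:
    pick a point with probability proportional to the squared row norms,
    then restrict the spanned subspace to the orthogonal complement of that point."""
    rng = np.random.default_rng() if rng is None else rng
    V = np.array(V, dtype=float)
    sample = []
    while V.shape[1] > 0:
        probs = (V ** 2).sum(axis=1)
        i = rng.choice(len(probs), p=probs / probs.sum())
        sample.append(int(i))
        j = int(np.argmax(np.abs(V[i])))             # pivot column with V[i, j] != 0
        V = V - np.outer(V[:, j] / V[i, j], V[i])    # zero out row i in every column
        V = np.delete(V, j, axis=1)                  # drop the now-redundant pivot column
        if V.shape[1] > 0:
            V, _ = np.linalg.qr(V)                   # re-orthonormalise the remaining columns
    return sample

# Example: a rank-5 projection DPP on 100 points.
Q, _ = np.linalg.qr(np.random.default_rng(0).standard_normal((100, 5)))
print(sample_projection_dpp(Q))
```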