The present article proposes a partitioned Dirichlet-Neumann algorithm that addresses the unique challenges arising from a novel mixed-dimensional coupling of very slender fibers embedded in fluid flow, realized through a regularized mortar-type finite element discretization. The fibers are modeled via one-dimensional (1D) partial differential equations based on geometrically exact nonlinear beam theory, while the flow is described by the three-dimensional (3D) incompressible Navier-Stokes equations. The resulting truly mixed-dimensional 1D-3D coupling scheme constitutes a novel approximate model and numerical strategy that necessitates specifically tailored solution schemes to ensure an accurate and efficient computational treatment. In particular, we present a strongly coupled partitioned solution algorithm based on a Quasi-Newton method for applications involving fibers with high slenderness ratios, which typically pose a challenge with regard to the well-known added-mass effect. The influence of the employed algorithmic and numerical parameters, namely the acceleration technique, the constraint regularization parameter, and the choice of shape functions, on the efficiency and the results of the solution procedure is studied through appropriate examples. Finally, the convergence of the two-way coupled mixed-dimensional problem solution under uniform mesh refinement is demonstrated, a comparison to a 3D reference solution is performed, and the method's capability to capture flow phenomena at large geometric scale separation is illustrated by the example of a submersed vegetation canopy.
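To make the structure of such a strongly coupled partitioned scheme concrete, the following minimal Python sketch shows a generic Dirichlet-Neumann fixed-point iteration for one time step, accelerated here by simple Aitken dynamic relaxation rather than the Quasi-Newton method of the article; the calls solve_fluid and solve_structure are placeholders, and the regularized mortar coupling is not reproduced.

    import numpy as np

    def coupled_step(solve_fluid, solve_structure, d0, tol=1e-8, max_iter=50):
        """One time step of a generic strongly coupled Dirichlet-Neumann iteration.

        solve_fluid(d)     -> interface traction for a prescribed interface displacement d
        solve_structure(t) -> interface displacement for a prescribed interface traction t
        d0                 -> initial guess for the interface displacement
        """
        d = d0.copy()
        omega, r_prev = 0.5, None            # initial relaxation factor
        for _ in range(max_iter):
            t = solve_fluid(d)               # Dirichlet solve: impose d, return traction
            d_new = solve_structure(t)       # Neumann solve: impose t, return displacement
            r = d_new - d                    # interface residual
            if np.linalg.norm(r) < tol:
                return d_new
            if r_prev is not None:           # Aitken update of the relaxation factor
                dr = r - r_prev
                omega = -omega * np.dot(r_prev, dr) / np.dot(dr, dr)
            d, r_prev = d + omega * r, r     # relaxed interface update
        return d

A Quasi-Newton variant, as employed in the article, would replace the scalar relaxation by a low-rank approximation of the inverse interface Jacobian built from previous residual and displacement differences.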
A generalization of Passing-Bablok regression is proposed for comparing multiple measurement methods simultaneously. Possible applications include assay migration studies and interlaboratory trials. When only two methods are compared, the proposed estimator reduces to the usual Passing-Bablok estimator. It is close in spirit to reduced major axis regression, which is, however, not robust. To obtain a robust estimator, the major axis is replaced by the (hyper-)spherical median axis. The technique has been applied to compare SARS-CoV-2 serological tests, bilirubin measurements in neonates, and an in vitro diagnostic test across different instruments, sample preparations, and reagent lots. In addition, plots similar to the well-known Bland-Altman plots have been developed to represent the variance structure.
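For orientation, the classical two-method Passing-Bablok slope, to which the proposal reduces, is (neglecting ties and the offset that corrects for pairwise slopes below $-1$) a median of pairwise slopes, with the intercept following as a median of residuals:
$$ \hat{\beta} \;=\; \operatorname*{med}_{i<j} \frac{y_j - y_i}{x_j - x_i}, \qquad \hat{\alpha} \;=\; \operatorname*{med}_{i} \left( y_i - \hat{\beta}\, x_i \right). $$
The multi-method generalization replaces this pairwise construction by the (hyper-)spherical median axis described above.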
In this note we consider the approximation of the Greeks Delta and Gamma of American-style options through the numerical solution of time-dependent partial differential complementarity problems (PDCPs). This approach is very attractive as it can yield accurate approximations to these Greeks at essentially no additional computational cost during the numerical solution of the PDCP for the pertinent option value function. For the temporal discretization, the Crank-Nicolson method is arguably the most popular method in computational finance. It is well known, however, that this method can have an undesirable convergence behaviour in the approximation of the Greeks Delta and Gamma for American-style options, even when backward Euler damping (Rannacher smoothing) is employed. In this note we study, for the temporal discretization, an interesting family of diagonally implicit Runge-Kutta (DIRK) methods together with the two-stage Lobatto IIIC method. Through ample numerical experiments for one- and two-asset American-style options, it is shown that these methods can yield a regular second-order convergence behaviour for the option value as well as for the Greeks Delta and Gamma. A mutual comparison reveals that the DIRK method with suitably chosen parameter $\theta$ is preferable.
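For reference, the two-stage Lobatto IIIC method mentioned above is the L-stable, second-order implicit Runge-Kutta method with Butcher tableau
$$ \begin{array}{c|cc} 0 & \tfrac{1}{2} & -\tfrac{1}{2} \\ 1 & \tfrac{1}{2} & \tfrac{1}{2} \\ \hline & \tfrac{1}{2} & \tfrac{1}{2} \end{array} $$
while the DIRK family depends on a parameter $\theta$ whose precise tableau is specified in the note itself.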
We present a tensor train (TT) based algorithm designed for sampling from a target distribution and employ the TT approximation to capture the high-dimensional probability density evolution of overdamped Langevin dynamics. This involves utilizing the regularized Wasserstein proximal operator, which admits a simple kernel integration formulation, namely a softmax form of the traditional proximal operator. The integration, performed over $\mathbb{R}^d$, poses a challenge in practical scenarios, so that the algorithm is implementable only with the aid of the TT approximation. In the specific context of Gaussian distributions, we rigorously establish the unbiasedness and linear convergence of our sampling algorithm towards the target distribution. To assess the effectiveness of our proposed methods, we apply them in numerical examples to various scenarios, including Gaussian families, Gaussian mixtures, bimodal distributions, and Bayesian inverse problems. The sampling algorithm exhibits superior accuracy and faster convergence when compared to classical Langevin dynamics-type sampling algorithms.
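Schematically, and with the precise constants (which depend on the step size $T$ and the regularization) omitted, such a softmax-type kernel formula for the regularized Wasserstein proximal of a potential $V$ takes the form
$$ x^{+} \;=\; \frac{\displaystyle\int_{\mathbb{R}^d} y\, \exp\!\left(-\tfrac{1}{2T}\left(V(y) + \tfrac{\lVert x - y\rVert^2}{2}\right)\right) dy}{\displaystyle\int_{\mathbb{R}^d} \exp\!\left(-\tfrac{1}{2T}\left(V(y) + \tfrac{\lVert x - y\rVert^2}{2}\right)\right) dy}, $$
which makes explicit why a $d$-dimensional integral must be evaluated at every step; this is the quantity that the TT approximation renders tractable. The exact formula used in the paper may differ in its scaling.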
For appropriate Gaussian processes, as a corollary of the majorizing measure theorem, Michel Talagrand (1987) proved that the event that the supremum is significantly larger than its expectation can be covered by a set of half-spaces whose sum of measures is small. We prove a conjecture of Talagrand that is the analog of this result in the Bernoulli-$p$ setting, and answer a question of Talagrand on the analogous result for general positive empirical processes.
We consider Maxwell eigenvalue problems on uncertain shapes with perfectly conducting TESLA cavities being the driving example. Due to the shape uncertainty, the resulting eigenvalues and eigenmodes are also uncertain and it is well known that the eigenvalues may exhibit crossings or bifurcations under perturbation. We discuss how the shape uncertainties can be modelled using the domain mapping approach and how the deformation mapping can be expressed as coefficients in Maxwell's equations. Using derivatives of these coefficients and derivatives of the eigenpairs, we follow a perturbation approach to compute approximations of mean and covariance of the eigenpairs. For small perturbations, these approximations are faster and more accurate than Monte Carlo or similar sampling-based strategies. Numerical experiments for a three-dimensional 9-cell TESLA cavity are presented.
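As a generic illustration of such a perturbation approach, assume an eigenvalue is parametrized as $\lambda(y)$ by centered shape parameters $y_1,\dots,y_M$ (a first-order sketch; the expansion of the eigenpairs used in the paper may include higher-order terms). Then
$$ \lambda(y) \approx \lambda(0) + \sum_{i=1}^{M} \partial_{y_i}\lambda(0)\, y_i, \qquad \mathbb{E}[\lambda] \approx \lambda(0), \qquad \operatorname{Var}[\lambda] \approx \sum_{i,j=1}^{M} \partial_{y_i}\lambda(0)\, \partial_{y_j}\lambda(0)\, \mathbb{E}[y_i y_j], $$
so only eigenvalue (and, analogously, eigenvector) derivatives with respect to the shape parameters are required, rather than the repeated solves of a sampling-based strategy.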
Recently, a family of unconventional integrators for ODEs with polynomial vector fields was proposed, based on the polarization of vector fields. The simplest instance is the by now famous Kahan discretization for quadratic vector fields. All these integrators seem to possess remarkable conservation properties. In particular, it has been proved that, when the underlying ODE is Hamiltonian, its polarization discretization possesses an integral of motion and an invariant volume form. In this note, we propose a new algebraic approach to the derivation of integrals of motion for polarization discretizations.
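For reference, for a quadratic vector field $\dot{x} = f(x)$ and step size $h$, Kahan's discretization can be written as
$$ \frac{x_{n+1} - x_n}{h} \;=\; 2\, f\!\left(\frac{x_n + x_{n+1}}{2}\right) - \frac{1}{2} f(x_n) - \frac{1}{2} f(x_{n+1}), $$
which is linearly implicit, since the right-hand side is affine in $x_{n+1}$ for fixed $x_n$; the polarization integrators generalize this construction to vector fields of higher polynomial degree.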
We show that the known list-decoding algorithms for univariate multiplicity and folded Reed-Solomon (FRS) codes can be made to run in nearly-linear time. This yields, to our knowledge, the first known family of codes that can be decoded in nearly-linear time, even as they approach the list-decoding capacity. Univariate multiplicity codes and FRS codes are natural variants of Reed-Solomon codes that were discovered and studied for their applications to list-decoding. It is known that for every $\epsilon >0$, and rate $R \in (0,1)$, there exist explicit families of these codes that have rate $R$ and can be list-decoded from a $(1-R-\epsilon)$ fraction of errors with constant list size in polynomial time (Guruswami & Wang (IEEE Trans. Inform. Theory, 2013) and Kopparty, Ron-Zewi, Saraf & Wootters (SIAM J. Comput., 2023)). In this work, we present randomized algorithms that perform the above tasks in nearly-linear time. Our algorithms have two main components. The first builds upon the lattice-based approach of Alekhnovich (IEEE Trans. Inform. Theory, 2005), who designed a nearly-linear time list-decoding algorithm for Reed-Solomon codes approaching the Johnson radius. As part of the second component, we design nearly-linear time algorithms for two natural algebraic problems. The first algorithm solves linear differential equations of the form $Q\left(x, f(x), \frac{df}{dx}, \dots,\frac{d^m f}{dx^m}\right) \equiv 0$ where $Q$ has the form $Q(x,y_0,\dots,y_m) = \tilde{Q}(x) + \sum_{i = 0}^m Q_i(x)\cdot y_i$. The second solves functional equations of the form $Q\left(x, f(x), f(\gamma x), \dots,f(\gamma^m x)\right) \equiv 0$ where $\gamma$ is a high-order field element. These algorithms can be viewed as generalizations of the classical algorithms of Sieveking (Computing, 1972) and Kung (Numer. Math., 1974) for computing the modular inverse of a power series, and might be of independent interest.
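For context, the classical Sieveking-Kung procedure mentioned above computes the inverse of a power series $f$ with $f(0) \neq 0$ by a Newton iteration: starting from $g_0 = f(0)^{-1}$, one repeatedly sets
$$ g_{k+1} \;\equiv\; g_k \left(2 - f g_k\right) \pmod{x^{2^{k+1}}}, $$
which doubles the number of correct coefficients in each step and, with fast polynomial multiplication, runs in nearly-linear time in the truncation order. The two algorithms above can be viewed as generalizations of this scheme to the stated differential and functional equations.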
The Boundary Element Method (BEM) is implemented using piecewise linear elements to solve the two-dimensional Dirichlet problem for Laplace's equation posed on a disk. A benefit of the BEM over many other numerical solution techniques is that discretization occurs only on the boundary, i.e., the complete domain does not need to be discretized, which reduces both computational time and cost. The algorithm's performance is illustrated through sample test problems with known solutions. The exact solution and the BEM numerical solution are compared, and an error analysis of the results is performed.
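As a reminder of the underlying formulation (written in one common sign convention, with $G(x,y) = -\tfrac{1}{2\pi}\ln|x-y|$ the two-dimensional fundamental solution and $n$ the outward unit normal), Green's representation formula for a harmonic function $u$ on the domain $\Omega$ with boundary $\Gamma$ reads
$$ u(x) \;=\; \int_{\Gamma} \left( G(x,y)\, \frac{\partial u}{\partial n}(y) \;-\; \frac{\partial G}{\partial n_y}(x,y)\, u(y) \right) ds_y, \qquad x \in \Omega, $$
so, with $u$ prescribed on $\Gamma$ by the Dirichlet data, only the boundary flux $\partial u/\partial n$ remains to be determined, here approximated with piecewise linear elements.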
We present the new Orthogonal Polynomials Approximation Algorithm (OPAA), a parallelizable algorithm that estimates probability distributions using a functional analytic approach: first, it finds a smooth functional estimate of the probability distribution, whether or not it is normalized; second, it provides an estimate of the normalizing weight; and third, it proposes a new computation scheme to obtain these estimates. A core component of OPAA is a transform of the square root of the joint distribution into a functional space of our construct. Through this transform, the evidence is equated with the squared $L^2$ norm of the transformed function. Hence, the evidence can be estimated by the sum of squares of the transform coefficients. The computations can be parallelized and completed in one pass. OPAA can be applied broadly to the estimation of probability density functions. In Bayesian problems, it can be applied to estimating the normalizing weight of the posterior, also known as the evidence, serving as an alternative to existing optimization-based methods.
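Concretely, if the (possibly unnormalized) density $p$ is represented through its square root expanded in an orthonormal basis $\{\phi_k\}$ as $\sqrt{p} = \sum_k c_k \phi_k$, then, by Parseval's identity,
$$ Z \;=\; \int p(x)\, dx \;=\; \left\| \sqrt{p} \right\|_{L^2}^2 \;=\; \sum_{k} c_k^2, $$
so the normalizing weight is obtained by accumulating the squared transform coefficients, each of which can be computed independently, which is what makes the one-pass, parallel evaluation possible.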
Comparisons of frequency distributions often invoke the concept of shift to describe directional changes in properties such as the mean. In the present study, we sought to define shift as a property in and of itself. Specifically, we define distributional shift (DS) as the concentration of frequencies away from the discrete class having the greatest value (e.g., the right-most bin of a histogram). We derive a measure of DS using the normalized sum of exponentiated cumulative frequencies. We then define relative distributional shift (RDS) as the difference in DS between two distributions, revealing the magnitude and direction by which one distribution is concentrated toward lesser or greater discrete classes relative to another. We find that RDS is closely related to popular measures that, while based on the comparison of frequency distributions, do not explicitly consider shift. While RDS provides a useful complement to other comparative measures, DS allows shift to be quantified as a property of individual distributions, similar in concept to a statistical moment.
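The following minimal Python sketch illustrates one way to compute DS and RDS from histogram counts, under the assumption that the exponentiation squares each cumulative relative frequency and that the normalization divides by the number of classes; the exact exponent and normalization used in the study may differ.

    import numpy as np

    def distributional_shift(freq, exponent=2):
        """Illustrative DS: normalized sum of exponentiated cumulative frequencies."""
        p = np.asarray(freq, dtype=float)
        p = p / p.sum()                          # relative frequencies per ordered class
        cum = np.cumsum(p)                       # cumulative frequencies, left to right
        return np.sum(cum ** exponent) / len(p)  # normalize by the number of classes

    def relative_distributional_shift(freq_a, freq_b, exponent=2):
        """RDS: difference in DS between two distributions over the same classes."""
        return distributional_shift(freq_a, exponent) - distributional_shift(freq_b, exponent)

With this convention, a distribution concentrated in the lower (left-most) classes accumulates large cumulative frequencies early and therefore has a larger DS, consistent with the verbal definition above.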