In this paper, we first study projections onto the set of unit dual quaternions and the set of dual quaternion vectors with unit norms. We then propose a power method for computing the dominant eigenvalue of a dual quaternion Hermitian matrix, and establish its convergence and convergence rate under mild conditions. Building on these results, we reformulate the simultaneous localization and mapping (SLAM) problem as a rank-one dual quaternion completion problem and propose a two-block coordinate descent method to solve it. One block subproblem reduces to computing the best rank-one approximation of a dual quaternion Hermitian matrix, which can be obtained by the power method; the other block has a closed-form solution. Numerical experiments are presented to show the efficiency of the proposed power method.
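As a point of reference for the algorithmic skeleton, a minimal power iteration for the dominant eigenpair of an ordinary complex Hermitian matrix is sketched below; the dual quaternion arithmetic and the unit-norm projections studied in the paper are not included, and the test matrix is a hypothetical stand-in.

```python
import numpy as np

def power_method(A, tol=1e-10, max_iter=1000, seed=0):
    """Power iteration for the dominant eigenpair of a Hermitian matrix A.

    Only the classical skeleton is shown; the paper's method replaces the
    complex arithmetic below with dual quaternion arithmetic and projects the
    iterates onto dual quaternion vectors with unit norms.
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[0]) + 1j * rng.standard_normal(A.shape[0])
    x /= np.linalg.norm(x)
    lam = 0.0
    for _ in range(max_iter):
        y = A @ x
        lam_new = np.real(np.vdot(x, y))      # Rayleigh quotient, real for Hermitian A
        x = y / np.linalg.norm(y)
        if abs(lam_new - lam) < tol:
            break
        lam = lam_new
    return lam, x

# Toy check on a random Hermitian matrix.
rng = np.random.default_rng(1)
B = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
A = (B + B.conj().T) / 2
lam, _ = power_method(A)
evals = np.linalg.eigvalsh(A)
print(lam, evals[np.argmax(np.abs(evals))])   # the two values should agree
```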
Score-based generative models are a popular class of generative modelling techniques relying on stochastic differential equations (SDEs). From their inception, it was realized that generation is also possible using ordinary differential equations (ODEs) rather than SDEs, which led to the probability flow ODE approach and denoising diffusion implicit models. Flow matching methods have recently further extended these ODE-based approaches and approximate a flow between two arbitrary probability distributions. Previous work derived bounds on the approximation error of diffusion models under the stochastic sampling regime, given assumptions on the $L^2$ loss. We present error bounds for the flow matching procedure under fully deterministic sampling, assuming an $L^2$ bound on the approximation error and a certain regularity condition on the data distributions.
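To illustrate what fully deterministic sampling means in this setting, the sketch below integrates a flow ODE with the Euler method. The vector field used is the exact affine field transporting $N(0,1)$ to $N(m, s^2)$ along straight paths; it is chosen here only as a checkable stand-in for a learned flow matching model, and the values of $m$ and $s$ are arbitrary.

```python
import numpy as np

def sample_ode(v, x0, n_steps=200):
    """Deterministic generation: Euler integration of dx/dt = v(x, t), t in [0, 1]."""
    x = np.array(x0, dtype=float)
    dt = 1.0 / n_steps
    for k in range(n_steps):
        x = x + dt * v(x, k * dt)
    return x

# Stand-in for a learned flow matching field: the exact field transporting N(0, 1)
# to N(m, s^2) along straight paths x_t = (1 - t + t*s) x_0 + t*m, which gives
# v(x, t) = m + (s - 1)(x - t*m) / (1 - t + t*s).
m, s = 2.0, 0.5
v = lambda x, t: m + (s - 1.0) * (x - t * m) / (1.0 - t + t * s)

rng = np.random.default_rng(0)
x0 = rng.standard_normal(10000)        # samples from the source N(0, 1)
x1 = sample_ode(v, x0)                 # deterministic push-forward of each sample
print(x1.mean(), x1.std())             # approximately m = 2.0 and s = 0.5
```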
In this paper, several row and column orthogonal projection methods are proposed for solving the matrix equation $AXB=C$, where the matrices $A$ and $B$ may be full rank or rank deficient and the equation may be consistent or inconsistent. These are iterative methods that require no matrix-matrix multiplication. It is proved theoretically that these methods converge to the solution or the least-squares solution of the matrix equation. Numerical results show that these methods are more efficient than iterative methods involving matrix multiplication for high-dimensional matrices.
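As an illustration of a row-action projection step for $AXB=C$ (a generic Kaczmarz-type sketch, not necessarily one of the exact schemes proposed in the paper), the code below projects the iterate onto the hyperplane defined by a single row of $A$ and a single column of $B$, so no matrix-matrix product is ever formed; the consistent toy problem is hypothetical.

```python
import numpy as np

def kaczmarz_axb(A, B, C, n_iter=20000, seed=0):
    """Kaczmarz-type projection iteration for AXB = C (consistent case).

    Each step projects X onto {X : a_i X b_j = c_ij} for one row a_i of A and
    one column b_j of B, using only matrix-vector and outer products.
    """
    rng = np.random.default_rng(seed)
    m, p = A.shape
    q, n = B.shape
    X = np.zeros((p, q))
    for _ in range(n_iter):
        i, j = rng.integers(m), rng.integers(n)
        a, b = A[i, :], B[:, j]
        r = C[i, j] - a @ X @ b
        X += (r / ((a @ a) * (b @ b))) * np.outer(a, b)
    return X

# Consistent toy problem: C = A X_true B.
rng = np.random.default_rng(1)
A, B = rng.standard_normal((20, 6)), rng.standard_normal((5, 15))
X_true = rng.standard_normal((6, 5))
C = A @ X_true @ B
X = kaczmarz_axb(A, B, C)
print(np.linalg.norm(A @ X @ B - C) / np.linalg.norm(C))   # small relative residual
```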
Efficient numerical solvers for partial differential equations empower science and engineering. One commonly employed numerical solver is the preconditioned conjugate gradient (PCG) algorithm, which can solve large systems to a given precision level. One challenge in PCG solvers is the selection of preconditioners, as different problem-dependent systems can benefit from different preconditioners. We present a new method to introduce \emph{inductive bias} into the preconditioned conjugate gradient algorithm. Given a system matrix and a set of solution vectors arising from an underlying distribution, we train a graph neural network to obtain an approximate decomposition of the system matrix, to be used as a preconditioner in the context of PCG solvers. We conduct extensive experiments to demonstrate the efficacy and generalizability of our proposed approach in solving various 2D and 3D linear second-order PDEs.
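The sketch below shows the PCG iteration with the preconditioner application left as a pluggable callable, which is the slot a learned approximate decomposition would fill; here a simple Jacobi (diagonal) preconditioner and a random SPD system stand in for the GNN output and the PDE discretization.

```python
import numpy as np

def pcg(A, b, apply_Minv, tol=1e-8, max_iter=500):
    """Preconditioned conjugate gradient with a pluggable preconditioner.

    apply_Minv(r) applies an approximation of A^{-1} to the residual; a learned
    (e.g., GNN-predicted) factorization would be used here in the paper's setting.
    """
    x = np.zeros_like(b)
    r = b - A @ x
    z = apply_Minv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = apply_Minv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# SPD test system with a Jacobi (diagonal) preconditioner as a stand-in.
rng = np.random.default_rng(0)
G = rng.standard_normal((200, 200))
A = G @ G.T + 200 * np.eye(200)
b = rng.standard_normal(200)
d = np.diag(A)
x = pcg(A, b, lambda r: r / d)
print(np.linalg.norm(A @ x - b))   # residual near the tolerance
```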
We propose Riemannian Flow Matching (RFM), a simple yet powerful framework for training continuous normalizing flows on manifolds. Existing methods for generative modeling on manifolds either require expensive simulation, are inherently unable to scale to high dimensions, or use approximations for limiting quantities that result in biased training objectives. Riemannian Flow Matching bypasses these limitations and offers several advantages over previous approaches: it is simulation-free on simple geometries, does not require divergence computation, and computes its target vector field in closed-form. The key ingredient behind RFM is the construction of a relatively simple premetric for defining target vector fields, which encompasses the existing Euclidean case. To extend to general geometries, we rely on the use of spectral decompositions to efficiently compute premetrics on the fly. Our method achieves state-of-the-art performance on real-world non-Euclidean datasets, and we demonstrate tractable training on general geometries, including triangular meshes with highly non-trivial curvature and boundaries.
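As a concrete instance of a simulation-free, closed-form target on a simple geometry, the sketch below computes the geodesic interpolant and the corresponding conditional target vector field on the unit sphere, using the standard sphere exponential and logarithm maps; the general premetric construction and the spectral-decomposition machinery for general geometries are not shown.

```python
import numpy as np

def sphere_exp(x, v):
    """Exponential map on the unit sphere: follow the geodesic from x with tangent v."""
    nv = np.linalg.norm(v)
    if nv < 1e-12:
        return x
    return np.cos(nv) * x + np.sin(nv) * v / nv

def sphere_log(x, y):
    """Logarithm map on the unit sphere: tangent vector at x pointing towards y."""
    c = np.clip(x @ y, -1.0, 1.0)
    theta = np.arccos(c)
    if theta < 1e-12:
        return np.zeros_like(x)
    return theta / np.sin(theta) * (y - c * x)

def geodesic_interpolant_and_target(x0, x1, t):
    """Conditional flow and closed-form target field along geodesic paths:
    x_t = exp_{x0}(t log_{x0}(x1)),  u_t(x_t | x1) = log_{x_t}(x1) / (1 - t)."""
    xt = sphere_exp(x0, t * sphere_log(x0, x1))
    ut = sphere_log(xt, x1) / (1.0 - t)
    return xt, ut

# Quick check: the target speed equals the (constant) geodesic speed d(x0, x1).
rng = np.random.default_rng(0)
x0, x1 = rng.standard_normal(3), rng.standard_normal(3)
x0, x1 = x0 / np.linalg.norm(x0), x1 / np.linalg.norm(x1)
xt, ut = geodesic_interpolant_and_target(x0, x1, 0.3)
print(np.linalg.norm(ut), np.arccos(np.clip(x0 @ x1, -1, 1)))   # equal values
```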
It has been shown that Maximum Satisfiability (MaxSAT) problem instances can be solved effectively by partitioning the set of soft clauses into several disjoint sets. The partitioning methods can be based on clause weights (e.g., stratification) or on graph representations of the formula. Afterwards, a merge procedure is applied to guarantee that an optimal solution is found. This paper proposes a new framework called UpMax that decouples the partitioning procedure from the MaxSAT solving algorithms. As a result, new partitioning procedures can be defined independently of the MaxSAT algorithm to be used. Moreover, this decoupling also allows users who build new MaxSAT formulas to propose partition schemes based on knowledge of the problem to be solved. We illustrate this approach on several problems and show that partitioning has a large impact on the performance of unsatisfiability-based MaxSAT algorithms.
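As a small illustration of a weight-based partitioning procedure of the kind such a framework can plug in (this is generic stratification, not UpMax's own code), the sketch below groups soft clauses by weight and orders the partitions from heaviest to lightest.

```python
from collections import defaultdict

def stratify(soft_clauses):
    """Weight-based (stratification) partitioning of soft clauses.

    soft_clauses is a list of (clause, weight) pairs, e.g. ([1, -2], 5).
    Partitions are returned from the highest weight to the lowest, the order in
    which an unsatisfiability-based solver would typically consider them.
    """
    buckets = defaultdict(list)
    for clause, weight in soft_clauses:
        buckets[weight].append(clause)
    return [buckets[w] for w in sorted(buckets, reverse=True)]

soft = [([1, -2], 5), ([2, 3], 5), ([-1], 1), ([3], 2)]
print(stratify(soft))   # [[[1, -2], [2, 3]], [[3]], [[-1]]]
```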
A novel algorithm for the computation of the quadratic numerical range is presented and exemplified, yielding much better results in less time than the random vector sampling method. Furthermore, a bound is given on the probability that the random vector sampling method produces a point outside a neighborhood of the expectation value, in dependence on the norm and size of the matrix.
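For reference, the baseline random vector sampling method reads as follows: for random unit vectors $x$ and $y$ in the two components of the block decomposition, one collects the eigenvalues of the $2\times 2$ compressions of the block matrix. The block sizes and sample count below are arbitrary choices for illustration.

```python
import numpy as np

def qnr_samples(M, n1, n_samples=2000, seed=0):
    """Random vector sampling approximation of the quadratic numerical range.

    For unit vectors x, y in the two components of the decomposition, collect
    the eigenvalues of the 2x2 compression [[x*Ax, x*By], [y*Cx, y*Dy]].
    """
    rng = np.random.default_rng(seed)
    n = M.shape[0]
    A, B = M[:n1, :n1], M[:n1, n1:]
    C, D = M[n1:, :n1], M[n1:, n1:]
    pts = []
    for _ in range(n_samples):
        x = rng.standard_normal(n1) + 1j * rng.standard_normal(n1)
        y = rng.standard_normal(n - n1) + 1j * rng.standard_normal(n - n1)
        x /= np.linalg.norm(x)
        y /= np.linalg.norm(y)
        m = np.array([[x.conj() @ A @ x, x.conj() @ B @ y],
                      [y.conj() @ C @ x, y.conj() @ D @ y]])
        pts.extend(np.linalg.eigvals(m))
    return np.array(pts)

M = np.random.default_rng(1).standard_normal((8, 8))
pts = qnr_samples(M, n1=4)
print(pts.real.min(), pts.real.max())   # rough extent of the sampled point cloud
```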
This paper introduces the application of the weak Galerkin (WG) finite element method to solve the Steklov eigenvalue problem, focusing on obtaining lower bounds of the eigenvalues. The nonconforming finite element space of the weak Galerkin finite element method is the key to obtaining lower bounds of the eigenvalues. Arbitrary high-order lower bound estimates are given, and guaranteed lower bounds of the eigenvalues are also discussed. Numerical results demonstrate the accuracy and the lower bound property of the numerical scheme.
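For context, the Steklov eigenvalue problem in its classical form, with the eigenvalue appearing in the boundary condition, and its variational formulation can be stated as follows (a standard statement, not specific to the WG discretization of the paper):

$$
\Delta u = 0 \ \text{in } \Omega, \qquad \frac{\partial u}{\partial n} = \lambda\, u \ \text{on } \partial\Omega;
\qquad
\text{find } (\lambda, u): \int_\Omega \nabla u \cdot \nabla v \, dx \;=\; \lambda \int_{\partial\Omega} u\, v \, ds \quad \text{for all admissible } v .
$$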
Generative flow networks (GFlowNets) are amortized variational inference algorithms that are trained to sample from unnormalized target distributions over compositional objects. A key limitation of GFlowNets so far has been that they are restricted to discrete spaces. We present a theory for generalized GFlowNets, which encompasses both existing discrete GFlowNets and ones with continuous or hybrid state spaces, and perform experiments with two goals in mind. First, we illustrate critical points of the theory and the importance of various assumptions. Second, we empirically demonstrate how observations about discrete GFlowNets transfer to the continuous case and show strong results compared to non-GFlowNet baselines on several previously studied tasks. This work greatly widens the perspectives for the application of GFlowNets in probabilistic inference and various modeling settings.
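The sketch below evaluates the trajectory balance objective, one common GFlowNet training loss, for a single hypothetical trajectory; in continuous or hybrid state spaces the per-step forward and backward probabilities become transition densities, but the form of the objective is essentially the same.

```python
import numpy as np

def trajectory_balance_loss(log_Z, log_pf, log_pb, log_reward):
    """Trajectory balance objective for one sampled trajectory.

    log_pf, log_pb: per-step forward/backward log-probabilities (or log-densities
    in the continuous case) along the trajectory; log_Z is the learned log
    partition function and log_reward is log R(x) of the terminal object.
    """
    return (log_Z + np.sum(log_pf) - log_reward - np.sum(log_pb)) ** 2

# Hypothetical 3-step trajectory ending in an object with reward R(x) = e^{-1.2}.
print(trajectory_balance_loss(0.5, np.log([0.4, 0.5, 0.9]),
                              np.log([1.0, 0.5, 0.3]), -1.2))
```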
Consider the Toeplitz matrix $T_n(f)$ generated by the symbol $f(\theta)=\hat{f}_r e^{\mathbf{i}r\theta}+\hat{f}_0+\hat{f}_{-s} e^{-\mathbf{i}s\theta}$, where $\hat{f}_r, \hat{f}_0, \hat{f}_{-s} \in \mathbb{C}$ and $0<r<n,~0<s<n$. For $r=s=1$ we have the classical tridiagonal Toeplitz matrices, for which the eigenvalues and eigenvectors are known. Similarly, the eigendecompositions are known for $1<r=s$, when the generated matrices are ``symmetrically sparse tridiagonal''. In the current paper we study the eigenvalues of $T_n(f)$ for $1\leq r<s$, where the generated matrices are ``non-symmetrically sparse tridiagonal''. We propose an algorithm which constructs one or two ad hoc matrices, smaller than $T_n(f)$, whose eigenvalues suffice to determine the full spectrum of $T_n(f)$. The algorithm is explained by means of a conjecture, in support of which examples and numerical experiments are reported, which also clarify the presentation. Open problems are briefly discussed.
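The sketch below builds the banded Toeplitz matrix $T_n(f)$ for such a symbol and checks it numerically against the classical closed-form spectrum in the tridiagonal case $r=s=1$; the coefficients are arbitrary illustrative values, and the $1\le r<s$ instance is constructed only to show the matrix structure, not the paper's algorithm.

```python
import numpy as np

def toeplitz_rs(n, fr, f0, fms, r, s):
    """Banded Toeplitz matrix T_n(f) for f(t) = fr e^{irt} + f0 + fms e^{-ist}.

    With the convention T[j, k] = f_hat_{j-k}, the coefficient fr sits on the
    r-th subdiagonal and fms on the s-th superdiagonal.
    """
    T = f0 * np.eye(n, dtype=complex)
    T += fr * np.eye(n, k=-r, dtype=complex)
    T += fms * np.eye(n, k=s, dtype=complex)
    return T

# Sanity check against the classical tridiagonal case r = s = 1, whose spectrum is
# f0 + 2 sqrt(fr * fms) cos(k pi / (n + 1)), k = 1, ..., n.
n, fr, f0, fms = 8, 2.0, 1.0, 0.5
T = toeplitz_rs(n, fr, f0, fms, 1, 1)
exact = f0 + 2 * np.sqrt(fr * fms) * np.cos(np.arange(1, n + 1) * np.pi / (n + 1))
print(np.allclose(np.sort(np.linalg.eigvals(T).real), np.sort(exact)))   # True

# A "non-symmetrically sparse tridiagonal" instance with 1 <= r < s.
T_rs = toeplitz_rs(40, fr, f0, fms, r=1, s=3)
```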
Deep learning based deformable medical image registration methods have emerged as a strong alternative to classical iterative registration methods. Since image registration is in general an ill-defined problem, the usefulness of the inductive biases of symmetry, inverse consistency, and topology preservation has been widely accepted by the research community. However, while many deep learning registration methods enforce these properties via loss functions, no prior deep learning registration method fulfills all of these properties by construction. Here, we propose a novel multi-resolution registration architecture which is symmetric, inverse consistent, and topology preserving by construction. We also develop an implicit layer for memory-efficient inversion of the deformation fields. The proposed method achieves state-of-the-art registration accuracy on two datasets.
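As a toy illustration of how symmetry and inverse consistency can hold by construction (one standard stationary-velocity-field construction, not necessarily the architecture proposed here), the 1D sketch below antisymmetrizes two hypothetical network outputs and exponentiates the result with scaling and squaring, so that the forward and backward deformations are inverses of each other up to discretization error.

```python
import numpy as np

def exp_svf(v, x, K=6):
    """Scaling-and-squaring exponential of a stationary velocity field (1D toy).

    Returns a displacement field d such that x + d(x) approximates the time-1 flow of v.
    """
    d = v / 2 ** K
    for _ in range(K):
        d = d + np.interp(x + d, x, d)   # compose the half-deformation with itself
    return d

def symmetric_displacements(u_ab, u_ba, x):
    """Inverse consistent pair of deformations by construction.

    Antisymmetrize the predicted velocities, v = (u_ab - u_ba) / 2, and take the
    forward/backward deformations as exp(v) and exp(-v), for any outputs u_ab, u_ba.
    """
    v = 0.5 * (u_ab - u_ba)
    return exp_svf(v, x), exp_svf(-v, x)

# Toy check of inverse consistency: composing forward and backward is near identity.
x = np.linspace(0.0, 1.0, 201)
u_ab = 0.05 * np.sin(2 * np.pi * x)      # stand-ins for network predictions
u_ba = 0.03 * np.cos(2 * np.pi * x)
d_fwd, d_bwd = symmetric_displacements(u_ab, u_ba, x)
residual = d_fwd + np.interp(x + d_fwd, x, d_bwd)   # displacement of backward o forward
print(np.max(np.abs(residual)))                     # close to zero
```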