Navigating dynamic environments requires the robot to generate collision-free trajectories and actively avoid moving obstacles. Most previous works design path planning algorithms based on a single map representation, such as a geometric, occupancy, or ESDF map. Although these methods have shown success in static environments, the limitations of a single map representation prevent them from reliably handling static and dynamic obstacles simultaneously. To address this problem, this paper proposes a gradient-based B-spline trajectory optimization algorithm that utilizes the robot's onboard vision. Depth vision enables the robot to track and represent dynamic objects geometrically based on the voxel map. The proposed optimization first adopts the circle-based guide-point algorithm to approximate the costs and gradients for avoiding static obstacles. Then, with the vision-detected moving objects, our receding-horizon distance field is applied simultaneously to prevent dynamic collisions. Finally, an iterative re-guide strategy is used to generate the collision-free trajectory. Simulation and physical experiments demonstrate that our method runs in real time and navigates dynamic environments safely. Our software is available on GitHub as an open-source package.
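As a rough illustration of the kind of objective such a gradient-based optimizer evaluates, the sketch below penalizes B-spline control points that come too close to occupied static voxels and to predicted positions of tracked moving obstacles. It is a minimal, hypothetical example (obstacle data, clearance radius, and weights are placeholders), not the authors' circle-based guide-point or receding-horizon distance field implementation.

import numpy as np

def collision_cost(ctrl_pts, static_obs, dyn_obs_traj, clearance=0.5, w_s=1.0, w_d=1.0):
    """Hinge-type penalty on B-spline control points (placeholder for the real cost).

    ctrl_pts     : (N, 3) control points of the trajectory spline
    static_obs   : (M, 3) occupied voxel centers
    dyn_obs_traj : (N, K, 3) predicted positions of K moving obstacles per control point
    """
    cost = 0.0
    for i, p in enumerate(ctrl_pts):
        # static term: quadratic penalty inside the clearance radius
        d_s = np.linalg.norm(static_obs - p, axis=1)
        cost += w_s * np.sum(np.maximum(clearance - d_s, 0.0) ** 2)
        # dynamic term: same penalty against predicted obstacle positions
        d_d = np.linalg.norm(dyn_obs_traj[i] - p, axis=1)
        cost += w_d * np.sum(np.maximum(clearance - d_d, 0.0) ** 2)
    return cost

def collision_grad(ctrl_pts, *args, eps=1e-4, **kwargs):
    """Finite-difference gradient, standing in for the analytic gradients used in the paper."""
    grad = np.zeros_like(ctrl_pts)
    base = collision_cost(ctrl_pts, *args, **kwargs)
    for idx in np.ndindex(*ctrl_pts.shape):
        pts = ctrl_pts.copy()
        pts[idx] += eps
        grad[idx] = (collision_cost(pts, *args, **kwargs) - base) / eps
    return grad

In a full planner this penalty would be combined with smoothness and feasibility terms and minimized over the control points by a gradient-based solver.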
We propose a numerical algorithm for the computation of multi-marginal optimal transport (MMOT) problems involving general measures that are not necessarily discrete. By developing a relaxation scheme in which marginal constraints are replaced by finitely many linear constraints and by proving a specifically tailored duality result for this setting, we approximate the MMOT problem by a linear semi-infinite optimization problem. Moreover, we are able to recover a feasible and approximately optimal solution of the MMOT problem, and its sub-optimality can be controlled to be arbitrarily close to 0 under mild conditions. The developed relaxation scheme leads to a numerical algorithm which can compute a feasible approximate optimizer of the MMOT problem whose theoretical sub-optimality can be chosen to be arbitrarily small. Besides the approximate optimizer, the algorithm is also able to compute both an upper bound and a lower bound on the optimal value of the MMOT problem. The difference between the computed bounds provides an explicit upper bound on the sub-optimality of the computed approximate optimizer. Through a numerical example, we demonstrate that the proposed algorithm is capable of computing a high-quality solution of an MMOT problem involving as many as 50 marginals, along with an explicit estimate of its sub-optimality that is much less conservative than the theoretical estimate.
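For orientation, a standard formulation of the MMOT problem and the type of relaxation described above can be written as follows; the test functions $g_{i,k}$ are schematic and only indicate how each marginal constraint is replaced by finitely many linear constraints.

\begin{align*}
\text{(MMOT)}\qquad & \inf_{\gamma \in \Gamma(\mu_1,\dots,\mu_N)} \int_{X_1\times\cdots\times X_N} c(x_1,\dots,x_N)\,\mathrm{d}\gamma(x_1,\dots,x_N),\\
\text{(relaxed)}\qquad & \inf_{\substack{\gamma \ge 0,\ \gamma(X_1\times\cdots\times X_N)=1}} \int c\,\mathrm{d}\gamma \quad\text{s.t.}\quad \int g_{i,k}(x_i)\,\mathrm{d}\gamma = \int g_{i,k}\,\mathrm{d}\mu_i,\quad i=1,\dots,N,\ k=1,\dots,m_i,
\end{align*}

where $\Gamma(\mu_1,\dots,\mu_N)$ denotes the set of couplings with the prescribed marginals. Since every $\gamma$ feasible for the MMOT problem satisfies the relaxed constraints, the relaxed optimal value is a lower bound on the MMOT optimal value, which is consistent with the two-sided bounds computed by the algorithm.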
We analyze the Schr\"odingerisation method for quantum simulation of a general class of non-unitary dynamics with inhomogeneous source terms. The Schr\"odingerisation technique, introduced in \cite{JLY22a,JLY23}, transforms any linear ordinary and partial differential equations with non-unitary dynamics into a system under unitary dynamics via a warped phase transition that maps the equations into a higher dimension, making them suitable for quantum simulation. This technique can also be applied to these equations with inhomogeneous terms modeling source or forcing terms or boundary and interface conditions, and discrete dynamical systems such as iterative methods in numerical linear algebra, through extra equations in the system. Difficulty airses with the presense of inhomogeneous terms since it can change the stability of the original system. In this paper, we systematically study--both theoretically and numerically--the important issue of recovering the original variables from the Schr\"odingerized equations, even when the evolution operator contains unstable modes. We show that even with unstable modes, one can still construct a stable scheme, yet to recover the original variable one needs to use suitable data in the extended space. We analyze and compare both the discrete and continuous Fourier transforms used in the extended dimension, and derive corresponding error estimates, which allows one to use the more appropriate transform for specific equations. We also provide a smoother initialization for the Schrod\"odingerized system to gain higher order accuracy in the extended space. We homogenize the inhomogeneous terms with a stretch transformation, making it easier to recover the original variable. Our recovering technique also provides a simple and generic framework to solve general ill-posed problems in a computationally stable way.
We introduce novel finite element schemes for curve diffusion and elastic flow in arbitrary codimension. The schemes are based on a variational form of a system that includes a specifically chosen tangential motion. We derive optimal $L^2$- and $H^1$-error bounds for continuous-in-time semidiscrete finite element approximations that use piecewise linear elements. In addition, we consider fully discrete schemes and, in the case of curve diffusion, prove unconditional stability. Finally, we present several numerical simulations, including convergence experiments that confirm the derived error bounds. The presented simulations suggest that the tangential motion leads to equidistribution in practice.
The macro-element variant of the hybridized discontinuous Galerkin (HDG) method combines advantages of continuous and discontinuous finite element discretizations. In this paper, we investigate the performance of the macro-element HDG method for the analysis of compressible flow problems at moderate Reynolds numbers. To efficiently handle the corresponding large systems of equations, we explore several strategies at the solver level. On the one hand, we devise a second-layer static condensation approach that reduces the size of the local system matrix in each macro-element and hence the factorization time of the local solver. On the other hand, we employ a multi-level preconditioner based on the FGMRES solver for the global system that integrates well within a matrix-free implementation. In addition, we use a standard diagonally implicit Runge-Kutta scheme for time integration. We test the matrix-free macro-element HDG method on compressible flow benchmarks, including Couette flow, flow past a sphere, and the Taylor-Green vortex. Our results show that, unlike standard HDG, the macro-element HDG method can operate efficiently at moderate polynomial degrees, as the local computational load can be flexibly increased via mesh refinement within a macro-element. They also show that, owing to the balance of local and global operations, the reduction in degrees of freedom, and the reduction of the global problem size and of the number of iterations required for its solution, the macro-element HDG method can be a competitive option for the analysis of compressible flow problems.
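A second-layer static condensation of the kind described above amounts to a Schur-complement elimination of the unknowns interior to each macro-element. The sketch below shows this elimination for a generic dense block system using numpy; it is illustrative only and ignores the matrix-free and HDG-specific structure exploited in the paper.

import numpy as np

def condense(A_ii, A_ib, A_bi, A_bb, f_i, f_b):
    """Eliminate interior dofs: return the condensed (trace/boundary) system."""
    A_ii_inv_Aib = np.linalg.solve(A_ii, A_ib)
    A_ii_inv_fi = np.linalg.solve(A_ii, f_i)
    S = A_bb - A_bi @ A_ii_inv_Aib   # Schur complement
    g = f_b - A_bi @ A_ii_inv_fi     # condensed right-hand side
    return S, g

def recover_interior(A_ii, A_ib, f_i, u_b):
    """Back-substitute the interior dofs once the trace unknowns are known."""
    return np.linalg.solve(A_ii, f_i - A_ib @ u_b)

Performing this elimination per macro-element shrinks the local factorizations while the global solve only involves the condensed trace unknowns.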
Immersed boundary methods are high-order accurate computational tools used to model geometrically complex problems in computational mechanics. While traditional finite element methods require the construction of high-quality boundary-fitted meshes, immersed boundary methods instead embed the computational domain in a background grid. Interpolation-based immersed boundary methods augment existing finite element software to non-invasively implement immersed boundary capabilities through extraction. Extraction interpolates the background basis as a linear combination of Lagrange polynomials defined on a foreground mesh, creating an interpolated basis that can be easily integrated by existing methods. This work extends the interpolation-based immersed boundary method to multi-material and multi-physics problems. Beginning from level-set descriptions of domain geometries, Heaviside enrichment is implemented to accommodate discontinuities in state variable fields across material interfaces. Adaptive refinement with truncated hierarchical B-splines is used to both improve interface geometry representations and resolve large solution gradients near interfaces. Multi-physics problems typically involve coupled fields where each field has unique discretization requirements. This work presents a novel discretization method for coupled problems through the application of extraction, using a single foreground mesh for all fields. Numerical examples illustrate optimal convergence rates for this method in both 2D and 3D, for heat conduction, linear elasticity, and a coupled thermo-mechanical problem. The utility of this method is demonstrated through image-based analysis of a composite sample, where in addition to circumventing typical meshing difficulties, this method reduces the required degrees of freedom compared to classical boundary-fitted finite element methods.
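The extraction step can be summarized schematically as follows, where $B_i$ are background (e.g., spline) basis functions, $\ell_j$ are Lagrange polynomials on the foreground mesh with nodes $x_j$, and the notation is ours rather than the paper's:
\[
B_i(x) \;\approx\; \sum_j B_i(x_j)\,\ell_j(x),
\qquad
u^h(x) = \sum_i c_i B_i(x) \;\approx\; \sum_j \Bigl(\sum_i M_{ji}\,c_i\Bigr)\ell_j(x),
\qquad M_{ji} = B_i(x_j),
\]
so element-level quantities can be assembled on the foreground Lagrange mesh by existing finite element routines and mapped back to the background degrees of freedom through the extraction matrix $M$.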
We prove explicit uniform two-sided bounds for the phase functions of Bessel functions and of their derivatives. As a consequence, we obtain new enclosures for the zeros of Bessel functions and their derivatives in terms of inverse values of some elementary functions. These bounds are valid, with a few exceptions, for all zeros and all Bessel functions with non-negative indices. We provide numerical evidence showing that our bounds either improve or closely match the best previously known ones.
We propose a novel algorithm for the support estimation of partially known Gaussian graphical models that incorporates prior information about the underlying graph. In contrast to classical approaches that provide a point estimate based on a maximum likelihood or a maximum a posteriori criterion using (simple) priors on the precision matrix, we consider a prior on the graph and rely on annealed Langevin diffusion to generate samples from the posterior distribution. Since the Langevin sampler requires access to the score function of the underlying graph prior, we use graph neural networks to effectively estimate the score from a graph dataset (either available beforehand or generated from a known distribution). Numerical experiments demonstrate the benefits of our approach.
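As a rough sketch of the sampling loop described above, annealed Langevin dynamics iterates Langevin updates across a decreasing sequence of noise levels, using a learned score network in place of the true score. The function names and hyperparameters below are placeholders, and any projection of samples back to valid graph supports is omitted.

import numpy as np

def annealed_langevin(score_net, x0, sigmas, steps_per_level=50, eps0=1e-4, rng=None):
    """Generic annealed Langevin sampler.

    score_net(x, sigma) is assumed to approximate the score of the noised target,
    here the posterior over (a continuous relaxation of) the graph adjacency.
    sigmas is a decreasing sequence of noise levels.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = x0.copy()
    for sigma in sigmas:
        eps = eps0 * (sigma / sigmas[-1]) ** 2   # common step-size schedule
        for _ in range(steps_per_level):
            z = rng.standard_normal(x.shape)
            x = x + 0.5 * eps * score_net(x, sigma) + np.sqrt(eps) * z
    return x

In the proposed approach the score network would be a graph neural network trained on a graph dataset, and the resulting samples are used to estimate the support of the precision matrix.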
Mendelian randomization uses genetic variants as instrumental variables to make causal inferences about the effects of modifiable risk factors on diseases from observational data. One of the major challenges in Mendelian randomization is that many genetic variants are only modestly or even weakly associated with the risk factor of interest, a setting known as many weak instruments. Many existing methods, such as the popular inverse-variance weighted (IVW) method, can be biased when the instrument strength is weak. To address this issue, the debiased IVW (dIVW) estimator, which is shown to be robust to many weak instruments, was recently proposed. However, this estimator still has non-ignorable bias when the effective sample size is small. In this paper, we propose a modified debiased IVW (mdIVW) estimator obtained by multiplying the original dIVW estimator by a modification factor. After this simple correction, we show that the bias of the mdIVW estimator converges to zero at a faster rate than that of the dIVW estimator under some regularity conditions. Moreover, the mdIVW estimator has smaller variance than the dIVW estimator. We further extend the proposed method to account for instrumental variable selection and balanced horizontal pleiotropy. We demonstrate the improvement of the mdIVW estimator over the dIVW estimator through extensive simulation studies and real data analysis.
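For context, with summary statistics $(\hat\gamma_j,\sigma_{Xj})$ for the variant-exposure associations and $(\hat\Gamma_j,\sigma_{Yj})$ for the variant-outcome associations, the IVW and dIVW estimators are commonly written as
\[
\hat\beta_{\mathrm{IVW}}
 = \frac{\sum_j \sigma_{Yj}^{-2}\,\hat\gamma_j \hat\Gamma_j}
        {\sum_j \sigma_{Yj}^{-2}\,\hat\gamma_j^{2}},
\qquad
\hat\beta_{\mathrm{dIVW}}
 = \frac{\sum_j \sigma_{Yj}^{-2}\,\hat\gamma_j \hat\Gamma_j}
        {\sum_j \sigma_{Yj}^{-2}\,\bigl(\hat\gamma_j^{2}-\sigma_{Xj}^{2}\bigr)},
\qquad
\hat\beta_{\mathrm{mdIVW}} = c\,\hat\beta_{\mathrm{dIVW}},
\]
where $c$ denotes the modification factor; its explicit form is specified in the paper and is only indicated abstractly here.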
We study the problem of selecting $k$ experiments from a larger candidate pool, where the goal is to maximize the mutual information (MI) between the selected subset and the underlying parameters. Finding the exact solution to this combinatorial optimization problem is computationally costly, not only because of the complexity of the combinatorial search but also because of the difficulty of evaluating MI in nonlinear/non-Gaussian settings. We propose greedy approaches based on new computationally inexpensive lower bounds for MI, constructed via log-Sobolev inequalities. We demonstrate that our method outperforms random selection strategies, Gaussian approximations, and nested Monte Carlo (NMC) estimators of MI in various settings, including optimal design for nonlinear models with non-additive noise.
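The greedy strategy described above can be sketched generically as follows: starting from the empty set, repeatedly add the candidate experiment that most increases a supplied lower bound on the mutual information. The oracle mi_lower_bound is a placeholder for the log-Sobolev-based bounds developed in the paper.

def greedy_select(candidates, k, mi_lower_bound):
    """Greedily pick k experiments maximizing a lower bound on MI.

    candidates     : iterable of candidate experiment identifiers
    mi_lower_bound : callable mapping a tuple of selected experiments to a float
    """
    selected = []
    remaining = list(candidates)
    for _ in range(k):
        best = max(remaining, key=lambda c: mi_lower_bound(tuple(selected + [c])))
        selected.append(best)
        remaining.remove(best)
    return selected

The computational savings come from the bound evaluations being much cheaper than nested Monte Carlo estimates of the MI itself.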
Mass lumping techniques are commonly employed in explicit time integration schemes for problems in structural dynamics: they both avoid solving costly linear systems with the consistent mass matrix and increase the critical time step. In isogeometric analysis, the critical time step is constrained by so-called "outlier" frequencies, representing the inaccurate high-frequency part of the spectrum. Removing or dampening these high frequencies is paramount for fast explicit solution techniques. In this work, we propose robust mass lumping and outlier removal techniques for nontrivial geometries, including multipatch and trimmed geometries. Our lumping strategies provably do not deteriorate (and often improve) the CFL condition of the original problem, and they are combined with deflation techniques to remove persistent outlier frequencies. Numerical experiments reveal the advantages of the method, especially for simulations covering large time spans, where they may halve the number of iterations with little or no effect on the numerical solution.
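As background, standard row-sum lumping and the usual bound on the critical time step of the central difference scheme (not the specific lumping strategies proposed here) read
\[
(M_L)_{ii} = \sum_j M_{ij},\qquad (M_L)_{ij} = 0\ \ (i\ne j),
\qquad
\Delta t_{\mathrm{crit}} = \frac{2}{\omega_{\max}},
\]
where $\omega_{\max}$ is the largest frequency of the generalized eigenvalue problem $K\varphi = \omega^2 M_L \varphi$. Outlier frequencies in isogeometric discretizations inflate $\omega_{\max}$ and hence shrink $\Delta t_{\mathrm{crit}}$, which is why their removal matters for explicit time integration.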