Classical model order reduction (MOR) for parametric problems may become computationally inefficient due to the large size of the required projection bases, especially for problems with slowly decaying Kolmogorov n-widths. Additionally, the dynamical system may possess a Hamiltonian structure that should be preserved during the reduction. In this work, we address both aspects by proposing a corresponding dictionary-based, online-adaptive MOR approach. The method requires dictionaries for the state variable, the non-linearities, and the discrete empirical interpolation (DEIM) points. During the online simulation, local basis extensions and simplifications are performed in an online-efficient way, i.e., the runtime complexity of the basis modifications and of the online simulation of the reduced models does not depend on the full state dimension. Experiments on a linear wave equation and a non-linear Sine-Gordon example demonstrate the efficiency of the approach.
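For concreteness, here is a minimal Python sketch of classical DEIM point selection, the standard building block behind the DEIM-point dictionaries mentioned above; the dictionary-based, online-adaptive machinery itself is not reproduced, and the snapshot data and basis size are illustrative assumptions.

```python
import numpy as np

def deim_points(U):
    """Greedy DEIM point selection from an orthonormal basis U (n x m)
    of nonlinearity snapshots. Returns interpolation indices p such that
    the interpolation matrix U[p, :] is well conditioned."""
    n, m = U.shape
    p = [int(np.argmax(np.abs(U[:, 0])))]
    for j in range(1, m):
        # Residual of the j-th basis vector w.r.t. interpolation on current points
        c = np.linalg.solve(U[p, :j], U[p, j])
        r = U[:, j] - U[:, :j] @ c
        p.append(int(np.argmax(np.abs(r))))
    return np.array(p)

# Toy usage: basis from snapshots of a parametrized nonlinearity
rng = np.random.default_rng(0)
snapshots = np.sin(np.outer(np.linspace(0, 1, 200), rng.uniform(1, 10, 30)))
U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
print(deim_points(U[:, :5]))
```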
Multi-agent systems can be extremely efficient when working concurrently and collaboratively, e.g., for transportation, maintenance, and search and rescue. Coordinating such teams often involves two aspects: (i) selecting appropriate sub-teams for different tasks; (ii) designing collaborative control strategies to execute these tasks. The former aspect can be combinatorial w.r.t. the team size, while the latter requires optimization over joint state spaces under geometric and dynamic constraints. Existing work often tackles one aspect by assuming the other is given, ignoring their close interdependence. This work formulates such problems as combinatorial-hybrid optimizations (CHO), in which both the discrete modes of collaboration and the continuous control parameters are optimized simultaneously and iteratively. The proposed framework consists of two interleaved layers: the dynamic formation of task coalitions and the hybrid optimization of collaborative behaviors. The overall feasibility and costs of different coalitions performing various tasks are approximated at different granularities to improve computational efficiency. Finally, a Nash-stable strategy for both task assignment and execution is derived with provable guarantees on feasibility and quality. Two non-trivial applications, collaborative transportation and dynamic capture, are studied against several baselines.
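To illustrate the Nash-stability notion in the assignment layer, here is a toy sketch of best-response iteration in a congestion-style coalition game; the cost model (a per-agent base cost plus a coalition-size penalty) is a hypothetical stand-in for the paper's multi-granularity feasibility/cost approximations, not its actual formulation.

```python
import numpy as np

def nash_stable_assignment(base_cost, congestion=0.5, max_iters=100):
    """Best-response dynamics for a task-assignment game. base_cost[i, t] is
    agent i's standalone cost of joining task t; adding a congestion term
    that grows with coalition size makes this an exact potential game, so
    unilateral-improvement moves terminate at a Nash-stable assignment."""
    n_agents, n_tasks = base_cost.shape
    assign = np.zeros(n_agents, dtype=int)  # start with everyone on task 0
    for _ in range(max_iters):
        moved = False
        for i in range(n_agents):
            sizes = np.bincount(assign, minlength=n_tasks)
            sizes[assign[i]] -= 1                      # exclude agent i itself
            costs = base_cost[i] + congestion * sizes  # cost grows with coalition size
            best = int(np.argmin(costs))
            if costs[best] + 1e-12 < costs[assign[i]]:  # profitable unilateral switch
                assign[i] = best
                moved = True
        if not moved:
            return assign  # no profitable deviation remains: Nash-stable
    return assign

print(nash_stable_assignment(np.array([[3., 1.], [2., 5.], [4., 4.]])))
```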
PDE solutions are numerically represented by basis functions. Classical methods employ pre-defined bases that encode minimal desired properties of the PDE, which naturally causes redundant computation. What are the best bases to numerically represent PDE solutions? From the analytical perspective, the Kolmogorov $n$-width is a popular criterion for selecting representative basis functions. From the Bayesian computation perspective, the concept of optimality selects the modes that, when known, minimize the variance of the conditional distribution of the rest of the solution. We show that these two definitions of optimality are equivalent. Numerically, both criteria reduce to computing a singular value decomposition (SVD), a procedure that can be made numerically efficient through randomized sampling. We demonstrate computationally the effectiveness of the basis functions so obtained on several linear and nonlinear problems. In all cases, the optimal accuracy is achieved with a small set of basis functions.
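Since both optimality criteria reduce to an SVD accelerated by randomized sampling, a short sketch of the standard randomized SVD (Halko-Martinsson-Tropp style) conveys the computational core; the snapshot matrix and target rank below are illustrative.

```python
import numpy as np

def randomized_svd(A, k, oversample=10, n_iter=2, seed=0):
    """Randomized range finder followed by a small exact SVD."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Omega = rng.standard_normal((n, k + oversample))  # random test matrix
    Y = A @ Omega
    for _ in range(n_iter):            # power iterations sharpen the spectrum
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)             # orthonormal basis for the range of A
    B = Q.T @ A                        # small (k + oversample) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return Q @ Ub[:, :k], s[:k], Vt[:k]

# Usage: leading modes of a (noisy, near-low-rank) snapshot matrix
rng = np.random.default_rng(1)
A = np.outer(np.ones(500), np.ones(300)) + 0.01 * rng.standard_normal((500, 300))
U, s, Vt = randomized_svd(A, k=5)
print(s)  # one dominant mode, the rest at noise level
```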
Post-market safety surveillance is an integral part of mass vaccination programs. Typically relying on sequential analysis of real-world health data as they accrue, safety surveillance is challenged by the difficulty of sequential multiple testing and by biases induced by residual confounding. The current standard approach, based on the maximized sequential probability ratio test (MaxSPRT), fails to satisfactorily address these practical challenges and remains a rigid framework that requires pre-specification of the surveillance schedule. We develop an alternative Bayesian surveillance procedure that addresses both challenges within a more flexible framework. We adopt a joint statistical modeling approach to sequentially estimate the effect of vaccine exposure on the adverse event of interest and correct for estimation bias by simultaneously analyzing a large set of negative control outcomes through a Bayesian hierarchical model. We then compute the posterior probability of the alternative hypothesis via Markov chain Monte Carlo sampling and use it for sequential detection of safety signals. Through an empirical evaluation using six US observational healthcare databases covering more than 360 million patients, we benchmark the proposed procedure against MaxSPRT on testing errors and estimation accuracy, under two epidemiological designs: the historical comparator and the self-controlled case series. We demonstrate that our procedure substantially reduces Type 1 error rates, maintains high statistical power, delivers fast signal detection, and provides considerably more accurate estimation. In an effort to promote open science, we present all empirical results in an R ShinyApp and provide a full implementation of our method in the R package EvidenceSynthesis.
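The sequential decision rule can be pictured with a deliberately simplified sketch: a single-parameter Poisson model and random-walk Metropolis in place of the paper's hierarchical, bias-corrected model, with a posterior-probability threshold for signaling; all counts and thresholds below are made up.

```python
import numpy as np

def posterior_prob_signal(events, expected, n_samples=20000, seed=0):
    """Toy check: posterior P(log rate ratio > 0) for observed vs expected
    adverse-event counts under a Poisson model, via random-walk Metropolis.
    A hypothetical stand-in for the paper's hierarchical model."""
    rng = np.random.default_rng(seed)
    def log_post(beta):  # Poisson likelihood plus a N(0, 2^2) prior on beta
        return events * beta - expected * np.exp(beta) - beta**2 / 8.0
    beta, samples = 0.0, []
    for _ in range(n_samples):
        prop = beta + 0.1 * rng.standard_normal()
        if np.log(rng.uniform()) < log_post(prop) - log_post(beta):
            beta = prop
        samples.append(beta)
    samples = np.array(samples[n_samples // 2:])  # discard burn-in
    return float(np.mean(samples > 0.0))

# Declare a signal once the posterior probability crosses a threshold, e.g. 0.95
for week, (y, e) in enumerate([(12, 10.0), (30, 21.0), (55, 33.0)], start=1):
    p = posterior_prob_signal(y, e)
    print(f"week {week}: P(signal) = {p:.3f}", "-> signal" if p > 0.95 else "")
```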
Accurate and efficient estimation of rare-event probabilities is of significant importance, since the occurrence of such events often has widespread impact. This work focuses on precisely quantifying these probabilities, as frequently encountered in the reliability analysis of complex engineering systems, through a framework termed Approximate Sampling Target with Post-processing Adjustment (ASTPA), here integrated with and supported by gradient-based Hamiltonian Markov chain Monte Carlo (HMCMC) methods. The developed techniques are applicable from low- to high-dimensional stochastic spaces; the basic idea is to construct a relevant target distribution by weighting the original random-variable space through a one-dimensional output likelihood model based on the limit-state function. To sample from this target distribution, we exploit HMCMC algorithms, a family of MCMC methods that employs physical system dynamics, rather than solely a proposal probability distribution, to generate distant sequential samples, and we develop a new Quasi-Newton mass-preconditioned HMCMC scheme (QNp-HMCMC) that is particularly efficient and suitable for high-dimensional spaces. To eventually compute the rare-event probability, an original post-sampling step is devised, using an inverse importance sampling procedure based on the already obtained samples. The statistical properties of the estimator are analyzed, and the performance of the proposed methodology is examined in detail and compared against Subset Simulation in a series of challenging low- and high-dimensional problems.
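A stripped-down sketch of the ASTPA pipeline conveys the two stages: HMC sampling of a smoothed weighted target, then an inverse-importance-sampling ratio. This uses plain HMC rather than the paper's QNp-HMCMC, and the toy limit state, smoothing width, and tuning constants are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm
from scipy.special import logsumexp

def astpa_like(g, dim, sigma=1.0, n=3000, eps=0.1, L=20, seed=0):
    """(1) HMC samples the weighted target pi(x) ∝ Phi(-g(x)/sigma) N(x; 0, I);
    (2) an inverse-importance-sampling ratio recovers p_F = P(g(X) <= 0)."""
    rng = np.random.default_rng(seed)

    def log_pi(x):
        return norm.logcdf(-g(x) / sigma) - 0.5 * x @ x

    def grad_log_pi(x, h=1e-5):  # finite differences keep the sketch generic
        f0 = log_pi(x)
        return np.array([(log_pi(x + h * e) - f0) / h for e in np.eye(dim)])

    x, chain = np.zeros(dim), []
    for _ in range(n):
        p0 = rng.standard_normal(dim)
        xn, p = x.copy(), p0 + 0.5 * eps * grad_log_pi(x)
        for _ in range(L):           # leapfrog integration of the Hamiltonian
            xn = xn + eps * p
            p = p + eps * grad_log_pi(xn)
        p = p - 0.5 * eps * grad_log_pi(xn)
        if np.log(rng.uniform()) < (log_pi(xn) - 0.5 * p @ p) - (log_pi(x) - 0.5 * p0 @ p0):
            x = xn                   # Metropolis step corrects discretization error
        chain.append(x.copy())
    X = np.array(chain[n // 2:])     # discard burn-in
    log_w = -norm.logcdf(-np.array([g(xi) for xi in X]) / sigma)  # N/pi_un = 1/Phi
    fail = np.array([g(xi) <= 0.0 for xi in X])
    return float(np.exp(logsumexp(log_w[fail]) - logsumexp(log_w)))

# Toy limit state: failure when the sum of 10 standard normals exceeds 9
# (exact p_F = Phi(-9/sqrt(10)) ≈ 2.2e-3)
print(astpa_like(lambda x: 9.0 - x.sum(), dim=10))
```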
Estimating the rigid transformation between two LiDAR scans from putative 3D correspondences is a typical point cloud registration paradigm. Current 3D feature matching approaches commonly produce numerous outlier correspondences, making outlier-robust registration techniques indispensable. Many recent studies have adopted the branch-and-bound (BnB) optimization framework to solve the correspondence-based point cloud registration problem globally and deterministically. Nonetheless, BnB-based methods are time-consuming when searching the entire 6-dimensional parameter space, since their computational complexity is exponential in the dimension of the solution domain. To enhance efficiency, existing works attempt to decouple the original 6-degree-of-freedom (DOF) problem into two 3-DOF sub-problems, thereby reducing the dimension of the parameter space. In contrast, our proposed approach introduces a novel pose decoupling strategy based on residual projections, effectively decomposing the raw problem into three 2-DOF rotation search sub-problems. We then employ a novel BnB-based search method to solve these sub-problems, achieving efficient and deterministic registration. Furthermore, our method can be adapted to the challenging problem of simultaneous pose and correspondence registration (SPCR). Through extensive experiments on synthetic and real-world datasets, we demonstrate that the proposed method outperforms state-of-the-art methods in terms of efficiency while ensuring robustness.
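The flavor of a low-DOF BnB rotation search can be shown on a toy 1-DOF planar analogue of the 2-DOF sub-problems: consensus maximization over an angle interval with a worst-case-motion upper bound; the threshold and data are synthetic, not the paper's residual-projection formulation.

```python
import heapq
import numpy as np

def rot2d(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def bnb_rotation_1d(src, dst, eps=0.05, tol=1e-4):
    """Toy 1-DOF BnB consensus maximization: find the planar rotation angle
    aligning the most correspondences src_i -> dst_i within eps. The interval
    upper bound relaxes the threshold by the worst-case arc motion r_i * half."""
    r = np.linalg.norm(src, axis=1)

    def resid(theta):
        return np.linalg.norm(src @ rot2d(theta).T - dst, axis=1)

    def upper(lo, hi):  # valid bound: a point moves at most r_i * half-width
        half = 0.5 * (hi - lo)
        return int(np.sum(resid(0.5 * (lo + hi)) <= eps + r * half))

    best_theta, best = 0.0, int(np.sum(resid(0.0) <= eps))
    heap = [(-upper(-np.pi, np.pi), -np.pi, np.pi)]
    while heap:
        neg_ub, lo, hi = heapq.heappop(heap)
        if -neg_ub <= best:
            break                     # incumbent is provably optimal
        mid = 0.5 * (lo + hi)
        val = int(np.sum(resid(mid) <= eps))
        if val > best:
            best, best_theta = val, mid
        if hi - lo > tol:             # branch: split the interval in two
            heapq.heappush(heap, (-upper(lo, mid), lo, mid))
            heapq.heappush(heap, (-upper(mid, hi), mid, hi))
    return best_theta, best

# Toy usage: 30 inliers under a 0.7 rad rotation plus 20 outliers
rng = np.random.default_rng(0)
src = rng.uniform(-1, 1, (50, 2))
dst = src @ rot2d(0.7).T
dst[30:] += rng.uniform(-1, 1, (20, 2))
print(bnb_rotation_1d(src, dst))
```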
Humans exhibit complex motions that vary depending on the task they are performing, the interactions they engage in, and subject-specific preferences. Forecasting future poses from the history of previous motions is therefore a challenging task. This paper presents an auxiliary-memory-powered deep neural network framework for improved modelling of historical knowledge. Specifically, we disentangle subject-specific, task-specific, and other auxiliary information from the observed pose sequences and utilise these factorised features to query the memory. A novel multi-head knowledge retrieval scheme leverages these factorised feature embeddings to perform multiple querying operations over the historical observations captured within the auxiliary memory. Moreover, our proposed dynamic masking strategy renders this feature disentanglement process adaptive. Two novel loss functions are introduced to encourage diversity within the auxiliary memory while ensuring the stability of the memory contents, so that it can locate and store salient information that aids the long-term prediction of future motion, irrespective of data imbalances or the diversity of the input data distribution. With extensive experiments conducted on two public benchmarks, Human3.6M and CMU-Mocap, we demonstrate that these design choices collectively allow the proposed approach to outperform the current state-of-the-art methods by significant margins: $>$ 17\% on the Human3.6M dataset and $>$ 9\% on the CMU-Mocap dataset.
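A minimal sketch of multi-head retrieval over an auxiliary memory, where factorised query embeddings attend over stored keys; the shapes, head count, and the plain NumPy formulation are illustrative, not the paper's architecture.

```python
import numpy as np

def multi_head_memory_read(queries, mem_keys, mem_vals, n_heads=4):
    """Each head attends over its own slice of the factorised query (e.g.
    subject- or task-specific embedding) against the memory keys, and returns
    a weighted sum of the corresponding memory-value slices."""
    d = queries.shape[-1]
    dh = d // n_heads
    out = []
    for h in range(n_heads):
        sl = slice(h * dh, (h + 1) * dh)
        scores = queries[:, sl] @ mem_keys[:, sl].T / np.sqrt(dh)  # (Q, M)
        attn = np.exp(scores - scores.max(axis=1, keepdims=True))
        attn /= attn.sum(axis=1, keepdims=True)                    # row-wise softmax
        out.append(attn @ mem_vals[:, sl])                         # (Q, dh)
    return np.concatenate(out, axis=1)                             # (Q, d)

rng = np.random.default_rng(0)
q = rng.standard_normal((3, 64))       # 3 factorised queries
K = rng.standard_normal((100, 64))     # 100 memory slots: keys
V = rng.standard_normal((100, 64))     # 100 memory slots: values
print(multi_head_memory_read(q, K, V).shape)  # (3, 64)
```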
We develop an autonomous navigation algorithm for a robot operating in two-dimensional environments containing obstacles of arbitrary non-convex shapes, which can be in close proximity to each other, as long as there exists at least one safe path connecting the initial and target locations. The proposed navigation approach relies on a hybrid feedback that guarantees asymptotic stability of the target location while ensuring forward invariance of the obstacle-free workspace. The proposed hybrid feedback controller guarantees Zeno-free switching between the move-to-target mode and the obstacle-avoidance mode based on the proximity of the robot to the obstacle-occupied workspace. An instrumental transformation that (virtually) reshapes the non-convex obstacles, in a non-conservative manner, is introduced to facilitate the design of the obstacle-avoidance strategy. Finally, we provide an algorithmic procedure for the sensor-based implementation of the proposed hybrid controller and validate its effectiveness via numerical simulations.
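The Zeno-free mode switching can be illustrated with a hysteresis band between two proximity thresholds; the control laws, gains, and thresholds below are hypothetical placeholders, not the paper's hybrid feedback design.

```python
import numpy as np

def hybrid_nav_step(x, target, nearest_obstacle, mode, k=1.0,
                    d_switch=0.5, d_release=1.0):
    """One step of hysteresis-based switching: d_switch < d_release creates a
    band in which the active mode is kept, ruling out chattering (Zeno behavior).
    Returns the control input and the (possibly updated) mode."""
    d_obs = np.linalg.norm(x - nearest_obstacle)
    if mode == "move-to-target" and d_obs < d_switch:
        mode = "avoid"                                 # entered the danger zone
    elif mode == "avoid" and d_obs > d_release:
        mode = "move-to-target"                        # safely clear of the obstacle

    if mode == "move-to-target":
        u = -k * (x - target)                          # drive straight at the target
    else:
        away = (x - nearest_obstacle) / max(d_obs, 1e-9)
        tangent = np.array([-away[1], away[0]])        # slide around the obstacle
        u = k * tangent + k * max(0.0, d_switch - d_obs) * away  # plus repulsion
    return u, mode

x, mode = np.array([0.0, 0.0]), "move-to-target"
u, mode = hybrid_nav_step(x, np.array([5.0, 0.0]), np.array([2.0, 0.1]), mode)
print(u, mode)
```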
Generative modeling has recently undergone remarkable advancements, primarily propelled by the transformative impact of Diffusion Probabilistic Models (DPMs). The impressive capability of these models, however, often entails significant computational overhead during both training and inference. To tackle this challenge, we present Diff-Pruning, an efficient compression method tailored for learning lightweight diffusion models from pre-existing ones, without the need for extensive re-training. The essence of Diff-Pruning is encapsulated in a Taylor expansion over pruned timesteps, a process that disregards non-contributory diffusion steps and ensembles informative gradients to identify important weights. Our empirical assessment, undertaken across four diverse datasets, highlights two primary benefits of the proposed method: 1) Efficiency: it enables an approximately 50% reduction in FLOPs at a mere 10% to 20% of the original training expenditure; 2) Consistency: the pruned diffusion models inherently preserve generative behavior congruent with their pre-trained progenitors. Code is available at \url{https://github.com/VainF/Diff-Pruning}.
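A hedged sketch of a first-order Taylor pruning criterion restricted to a subset of timesteps; the function names, the placeholder loss, and the timestep filter are illustrative assumptions rather than Diff-Pruning's actual API.

```python
import torch
import torch.nn as nn

def taylor_scores(model, loss_fn, batches, kept_timesteps):
    """Accumulate the first-order Taylor importance |w * dL/dw| only over an
    informative subset of diffusion timesteps; weights with low accumulated
    score are candidates for pruning."""
    scores = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    for x, t in batches:
        if int(t) not in kept_timesteps:
            continue                                   # drop non-contributory steps
        model.zero_grad()
        loss_fn(model, x, t).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                scores[n] += (p.detach() * p.grad).abs()  # first-order Taylor term
    return scores

# Toy usage: a stand-in "denoiser" scored on two timesteps out of three
net = nn.Linear(8, 8)
loss = lambda m, x, t: ((m(x) - x) ** 2).mean()        # placeholder denoising loss
data = [(torch.randn(4, 8), t) for t in (0, 1, 2)]
s = taylor_scores(net, loss, data, kept_timesteps={1, 2})
print({n: v.mean().item() for n, v in s.items()})
```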
Interpolation-based methods are well-established and effective approaches for the efficient generation of accurate reduced-order surrogate models. Common challenges for such methods are the automatic selection of good, or even optimal, interpolation points and the appropriate size of the reduced-order model. An approach that addresses the first problem for linear, unstructured systems is the Iterative Rational Krylov Algorithm (IRKA), which computes optimal interpolation points through iterative updates by solving linear eigenvalue problems. In the case of preserving internal system structures, however, optimal interpolation points are unknown, and heuristics based on nonlinear eigenvalue problems yield numbers of potential interpolation points that typically exceed the reasonable size of reduced-order systems. In this work, we propose a projection-based iterative interpolation method inspired by IRKA for generally structured systems, which adaptively computes near-optimal interpolation points as well as an appropriate size for the reduced-order system. Additionally, the iterative updates of the interpolation points can be chosen such that the reduced-order model provides an accurate approximation in specified frequency ranges of interest. For such applications, our new approach outperforms established methods in terms of accuracy and computational effort, as we show in numerical examples with systems of different structures.
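For reference, here is a compact sketch of textbook IRKA for a SISO system, the unstructured baseline the above generalizes; restricting the demo to a symmetric stable matrix keeps all shifts real, a simplification for illustration only.

```python
import numpy as np

def irka_siso(A, b, c, r, n_iter=100, tol=1e-8, seed=0):
    """Textbook IRKA for x' = Ax + bu, y = cx: iterate the interpolation
    points toward the mirrored reduced poles (the H2-optimality fixed point)."""
    n = A.shape[0]
    shifts = np.abs(np.random.default_rng(seed).standard_normal(r)) + 1.0
    for _ in range(n_iter):
        # Rational Krylov bases at the current interpolation points
        V = np.column_stack([np.linalg.solve(s * np.eye(n) - A, b) for s in shifts])
        W = np.column_stack([np.linalg.solve(s * np.eye(n) - A.T, c) for s in shifts])
        V, _ = np.linalg.qr(V.real)
        W, _ = np.linalg.qr(W.real)
        Ar = np.linalg.solve(W.T @ V, W.T @ A @ V)    # Petrov-Galerkin projection
        new_shifts = np.sort(-np.linalg.eigvals(Ar))  # mirror the reduced poles
        if np.max(np.abs(new_shifts - np.sort(shifts))) < tol:
            break
        shifts = new_shifts
    return Ar, shifts

# Demo on a symmetric stable system (real poles, hence real shifts)
rng = np.random.default_rng(1)
M = rng.standard_normal((50, 50))
A = -(M @ M.T) - 50 * np.eye(50)
b, c = rng.standard_normal((50, 1)), rng.standard_normal((50, 1))
Ar, sigma = irka_siso(A, b, c, r=4)
print(np.round(sigma.real, 3))
```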
Lax-Wendroff Flux Reconstruction (LWFR) is a single-stage, high-order, quadrature-free method for solving hyperbolic conservation laws. We develop a subcell-based limiter by blending LWFR with a lower-order scheme, either a first-order finite volume scheme or the MUSCL-Hancock scheme. While blending with a lower-order scheme helps to control oscillations, it may not guarantee admissibility of the discrete solution, e.g., positivity of quantities like density and pressure. By exploiting the subcell structure and the admissibility of the lower-order schemes, we devise a strategy that ensures the blended scheme is admissibility preserving for the mean values, and we then use a scaling limiter to obtain admissibility of the polynomial solution. For the MUSCL-Hancock scheme on non-cell-centered subcells, we develop a slope limiter, time-step restrictions, and a suitable blending of higher-order fluxes that together ensure admissibility of the lower-order updates and hence of the cell averages. By using the MUSCL-Hancock scheme on subcells and Gauss-Legendre points in the flux reconstruction, we improve small-scale resolution compared to the subcell-based RKDG blending scheme with a first-order finite volume method and Gauss-Legendre-Lobatto points. We demonstrate the performance of our scheme on the compressible Euler equations, showcasing its ability to handle shocks and preserve small-scale structures.
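The scaling step can be illustrated with a Zhang-Shu-type limiter that pulls nodal values toward an admissible cell mean; the nodal values and quadrature weights below are illustrative, and only a scalar positivity constraint is shown.

```python
import numpy as np

def scaling_limiter(node_vals, weights, eps=1e-13):
    """Scale the polynomial's nodal values toward the cell mean so that the
    minimum stays above eps (e.g. positive density), preserving the mean
    exactly; assumes the mean itself is already admissible (mean > eps)."""
    mean = np.dot(weights, node_vals) / np.sum(weights)
    mn = node_vals.min()
    if mn >= eps:
        return node_vals                       # already admissible
    theta = min(1.0, (mean - eps) / (mean - mn))
    return mean + theta * (node_vals - mean)   # shrink toward the mean

# Usage: density values at solution nodes, quadrature weights on the cell
rho = np.array([1.2, 0.4, -0.05, 0.9])
w = np.array([1.0, 2.0, 2.0, 1.0]) / 6.0
rho_lim = scaling_limiter(rho, w)
# The limited values are positive and the cell mean is unchanged
print(rho_lim, np.dot(w, rho) / w.sum(), np.dot(w, rho_lim) / w.sum())
```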