The realization of a standard Adaptive Finite Element Method (AFEM) preserves mesh conformity by performing a completion step in the refinement loop: in addition to the elements marked for refinement because of their contribution to the global error estimator, further elements are refined for purely geometric reasons. The new perspective opened by the introduction of Virtual Element Methods (VEM) allows elements with hanging nodes to be viewed as polygons with aligned edges, carrying virtual functions together with standard polynomial functions. The potential advantage is that every activated degree of freedom is motivated by error reduction, not merely by geometry. This point of view underlies the paper [L. Beirão da Veiga et al., Adaptive VEM: stabilization-free a posteriori error analysis and contraction property, SIAM Journal on Numerical Analysis, vol. 61, 2023], devoted to the convergence analysis of an adaptive VEM generated by successive newest-vertex bisections of triangular elements without completion, in the lowest-order case (polynomial degree $k=1$). The purpose of this paper is to extend these results to VEMs of order $k>1$ built on triangular meshes. The problem at hand is a variable-coefficient, second-order self-adjoint elliptic equation with Dirichlet boundary conditions; the data of the problem are assumed to be piecewise polynomials of degree $k-1$. By extending the concept of the global index of a hanging node, and under an admissibility assumption on the mesh, we derive a stabilization-free a posteriori error estimator. It is the sum of residual-type terms and certain virtual inconsistency terms (which vanish for $k=1$). We define an adaptive VEM of order $k$ based on this estimator, and we prove its convergence by establishing a contraction result for a linear combination of the (squared) energy norm of the error, the residual estimator, and the virtual inconsistency estimator.
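As a schematic illustration of the kind of contraction statement meant here (the precise constants and estimator definitions are those of the paper; the symbols below are placeholders), one step of the adaptive loop satisfies, for some $\rho \in (0,1)$ and weights $\gamma_1, \gamma_2 > 0$,
\[
\|u - u_{\ell+1}\|_a^2 + \gamma_1\,\eta_{\ell+1}^2 + \gamma_2\,\Psi_{\ell+1}^2
\;\le\; \rho \left( \|u - u_\ell\|_a^2 + \gamma_1\,\eta_\ell^2 + \gamma_2\,\Psi_\ell^2 \right),
\]
where $\|\cdot\|_a$ is the energy norm, $\eta_\ell$ the residual estimator, and $\Psi_\ell$ the virtual inconsistency estimator on the $\ell$-th mesh; for $k=1$ the term $\Psi_\ell$ vanishes.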
We propose an automated nonlinear model reduction and mesh adaptation framework for the rapid and reliable solution of parameterized advection-dominated problems, with emphasis on compressible flows. The key features of our approach are threefold: (i) a metric-based mesh adaptation technique to generate an accurate mesh for a range of parameters, (ii) a general (i.e., independent of the underlying equations) registration procedure for the computation of a mapping $\Phi$ that tracks moving features of the solution field, and (iii) a hyper-reduced least-squares Petrov-Galerkin reduced-order model for the rapid and reliable estimation of the mapped solution. We discuss a general paradigm -- which mimics the refinement loop considered in mesh adaptation -- to simultaneously construct the high-fidelity and the reduced-order approximations, and we discuss actionable strategies to accelerate the offline phase. We present extensive numerical investigations for a quasi-1D nozzle problem and for a two-dimensional inviscid flow past a Gaussian bump to illustrate the many features of the methodology and to assess its performance for problems with discontinuous solutions.
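As a schematic illustration of the registration step (the notation here is ours, for exposition), the mapping $\Phi$ is built so that the mapped field
\[
\tilde{u}(x;\mu) \;:=\; u\big(\Phi(x;\mu);\mu\big)
\]
has its sharp features (shocks, contact discontinuities) at approximately parameter-independent locations in the reference domain, which makes $\tilde{u}$ far more amenable to linear model reduction than the unmapped field $u$.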
Linear regression and classification models with repeated functional data are considered. For each statistical unit in the sample, a real-valued parameter is observed over time under several different conditions. Two regression models based on fusion penalties are presented. The first is a generalization of the variable fusion model based on the 1-nearest neighbor. The second, called the group fusion lasso, assumes a grouping structure on the conditions and enforces homogeneity among the regression coefficient functions within each group. A finite-sample simulation study and an application to EEG data are presented.
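As a schematic form of such a fusion-penalized criterion (the notation is illustrative; the exact functional model and norms are those defined in the paper), with coefficient functions $\beta_1,\dots,\beta_J$ attached to the $J$ conditions and groups $g \in \mathcal{G}$,
\[
\min_{\beta_1,\dots,\beta_J} \; \sum_{i=1}^{n} \ell\Big( y_i,\; \sum_{j=1}^{J} \int_{\mathcal{T}} x_{ij}(t)\,\beta_j(t)\,dt \Big)
\;+\; \lambda \sum_{g \in \mathcal{G}} \; \sum_{j \sim j' \in g} \big\| \beta_j - \beta_{j'} \big\|,
\]
where $j \sim j'$ restricts the fusion terms to neighboring conditions (the 1-nearest-neighbor structure of the variable fusion model) and the grouping $\mathcal{G}$ encourages homogeneity of the coefficient functions within each group.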
In this work, we address parametric non-stationary fluid dynamics problems within a model order reduction setting based on domain decomposition. Starting from the domain decomposition approach, we derive an optimal control problem, for which we present a convergence analysis. The snapshots for the high-fidelity model are obtained with a Finite Element discretisation, and the model order reduction is then carried out both in time and in the physical parameters, with a standard POD-Galerkin projection. We test the proposed methodology on two fluid dynamics benchmarks: the non-stationary backward-facing step and the lid-driven cavity flow. Finally, also in view of future work, we compare the intrusive POD-Galerkin approach with a non-intrusive approach based on Neural Networks.
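As a schematic illustration of the domain-decomposition-as-optimal-control idea (the symbols below are ours, for exposition), the domain is split into subdomains $\Omega_1, \Omega_2$ with interface $\Gamma$, and one minimizes the interface mismatch
\[
\min_{c} \; J(u_1, u_2) \;=\; \frac{1}{2} \int_{0}^{T}\!\!\int_{\Gamma} \big| u_1 - u_2 \big|^2 \, d\Gamma\, dt,
\]
subject to the flow equations holding in each subdomain, with the control $c$ acting through the interface data; the reduced-order model is then built for this coupled optimality system.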
We study the numerical solution of a Cahn-Hilliard/Allen-Cahn system with strong coupling through state- and gradient-dependent non-diagonal mobility matrices. A fully discrete approximation scheme in space and time is proposed which preserves the underlying gradient flow structure and leads to dissipation of the free energy on the discrete level. Existence and uniqueness of the discrete solution are established, and relative energy estimates are used to prove optimal convergence rates in space and time under minimal smoothness assumptions. Numerical tests are presented to illustrate the theoretical results and to demonstrate the viability of the proposed methods.
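As a schematic form of the underlying structure (notation illustrative), the system is a gradient flow
\[
\partial_t \begin{pmatrix} u \\ v \end{pmatrix}
\;=\; - \,\mathbf{M}(u, \nabla u) \begin{pmatrix} \delta \mathcal{E} / \delta u \\ \delta \mathcal{E} / \delta v \end{pmatrix},
\]
for a free energy $\mathcal{E}$ and a non-diagonal, state- and gradient-dependent mobility matrix $\mathbf{M}$; the discrete scheme is designed so that $\mathcal{E}(u_h^{n+1}, v_h^{n+1}) \le \mathcal{E}(u_h^{n}, v_h^{n})$ holds at every time step, mimicking the energy dissipation of the continuous problem.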
Navigating automated driving systems (ADSs) through complex driving environments is difficult, and predicting the driving behavior of surrounding human-driven vehicles (HDVs) is a critical component of an ADS. This paper proposes an enhanced motion-planning approach for an ADS in a highway-merging scenario. The proposed approach predicts both the driving behavior and the long-term trajectories of surrounding HDVs, couples the two predictions in a hierarchical model, and feeds the result into the motion planner of the ADS to improve driving safety.
We consider the problem of computing a sparse binary representation of an image: given an image and an overcomplete, non-orthonormal basis, we aim to find a sparse binary vector indicating the minimal set of basis vectors that, when added together, best reconstruct the given input. We formulate this problem with an $L_2$ loss on the reconstruction error and an $L_0$ (or, equivalently for binary vectors, an $L_1$) penalty enforcing sparsity. This yields a so-called Quadratic Unconstrained Binary Optimization (QUBO) problem, whose solution is in general NP-hard to find. The contribution of this work is twofold. First, we present a method for unsupervised and unnormalized dictionary feature learning that best matches the data at a desired sparsity level. Second, we solve the binary sparse coding problem on the Loihi 1 neuromorphic chip, using stochastic networks of neurons to traverse the non-convex energy landscape, and benchmark the solutions against the classical simulated annealing heuristic. We demonstrate that neuromorphic computing is suitable for sampling low-energy solutions of binary sparse coding QUBO models; although Loihi 1 can sample very sparse solutions, the implementation needs improvement to become competitive with simulated annealing.
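As a minimal sketch of the QUBO construction and of the classical baseline (function names and parameters are ours, for illustration only): expanding $\|x - Db\|_2^2 + \lambda \|b\|_0$ and using $b_i^2 = b_i$ for binary $b$ folds all linear terms onto the diagonal of the QUBO matrix, after which a single-bit-flip annealer can sample low-energy configurations.
\begin{verbatim}
import numpy as np

def sparse_coding_qubo(D, x, lam):
    """QUBO matrix Q with E(b) = b^T Q b (constant x^T x dropped),
    matching ||x - D b||^2 + lam * ||b||_0 for binary b (b_i^2 = b_i)."""
    G = D.T @ D                                  # quadratic couplings
    lin = np.diag(G) - 2.0 * (D.T @ x) + lam     # folded linear terms
    Q = G.copy()
    np.fill_diagonal(Q, lin)
    return Q

def simulated_annealing(Q, n_steps=20000, t0=2.0, t1=0.01, seed=0):
    """Single-bit-flip annealer for E(b) = b^T Q b, geometric cooling."""
    rng = np.random.default_rng(seed)
    n = Q.shape[0]
    b = rng.integers(0, 2, size=n)
    for step in range(n_steps):
        t = t0 * (t1 / t0) ** (step / n_steps)
        i = rng.integers(n)
        # energy change of flipping bit i (Q symmetric)
        dE = (1 - 2 * b[i]) * (Q[i, i] + 2 * (Q[i] @ b - Q[i, i] * b[i]))
        if dE <= 0 or rng.random() < np.exp(-dE / t):
            b[i] ^= 1
    return b
\end{verbatim}
Here simulated annealing plays exactly the role it has in the paper: the classical heuristic against which the neuromorphic sampler is benchmarked.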
We present Surjective Sequential Neural Likelihood (SSNL) estimation, a novel method for simulation-based inference in models where the evaluation of the likelihood function is not tractable and only a simulator that can generate synthetic data is available. SSNL fits a dimensionality-reducing surjective normalizing flow model and uses it as a surrogate likelihood function, which allows for conventional Bayesian inference using either Markov chain Monte Carlo methods or variational inference. By embedding the data in a low-dimensional space, SSNL resolves several issues that previous likelihood-based methods face when applied to high-dimensional data sets that, for instance, contain non-informative data dimensions or lie along a lower-dimensional manifold. We evaluate SSNL on a wide variety of experiments and show that it generally outperforms contemporary methods for simulation-based inference, for instance on a challenging real-world example from astrophysics that models the magnetic field strength of the sun using a solar dynamo model.
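As a schematic statement of how the surrogate is used (notation illustrative): once the surjective flow $q_\phi(x \mid \theta)$ has been fitted on simulated pairs $(\theta, x)$, inference targets the approximate posterior
\[
p(\theta \mid x_o) \;\propto\; p(\theta)\, q_\phi(x_o \mid \theta),
\]
which can be sampled with standard MCMC or approximated variationally, since $q_\phi$ is cheap to evaluate while the true likelihood is not.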
Velocity limits (VL) have been widely adopted in many variants of particle swarm optimization (PSO) to prevent particles from searching outside the solution space. Several adaptive VL strategies have been introduced with which the performance of PSO can be improved. However, the existing adaptive VL strategies adjust the VL based solely on the iteration count, leading to unsatisfactory optimization results because of the incompatibility between the VL and the current searching state of the particles. To deal with this problem, a novel PSO variant with a state-based adaptive velocity limit strategy (PSO-SAVL) is proposed. In PSO-SAVL, the VL is adaptively adjusted based on evolutionary state estimation (ESE): a high VL is set for the global searching state and a low VL for the local searching state. In addition, limit-handling strategies are modified and adopted to improve the capability of avoiding local optima. The good performance of PSO-SAVL is experimentally validated on a wide range of benchmark functions with 50 dimensions, and its satisfactory scalability to high-dimensional and large-scale problems is also verified. Moreover, the merits of the individual strategies in PSO-SAVL are verified in experiments. A sensitivity analysis of the relevant hyper-parameters in the state-based adaptive VL strategy is conducted, and insights into how to select these hyper-parameters are also discussed.
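As a minimal sketch of the idea (the evolutionary-state signal below is a simplified stand-in based on swarm spread, not the ESE procedure of the paper; all names and constants are ours, for illustration):
\begin{verbatim}
import numpy as np

def pso_savl(f, dim=50, n=30, iters=500, lb=-10.0, ub=10.0, seed=0):
    """PSO with a state-based adaptive velocity limit (illustrative)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lb, ub, (n, dim))
    v = np.zeros((n, dim))
    pbest = x.copy()
    pval = np.apply_along_axis(f, 1, x)
    g = pbest[pval.argmin()].copy()
    for _ in range(iters):
        # crude evolutionary-state proxy: normalized mean swarm spread
        spread = np.linalg.norm(x - x.mean(axis=0), axis=1).mean() / (ub - lb)
        state = np.clip(spread, 0.0, 1.0)       # ~1: global, ~0: local search
        vl = (0.05 + 0.45 * state) * (ub - lb)  # high VL exploring, low exploiting
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = 0.729 * v + 1.494 * r1 * (pbest - x) + 1.494 * r2 * (g - x)
        v = np.clip(v, -vl, vl)                 # state-based velocity limit
        x = np.clip(x + v, lb, ub)
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()
\end{verbatim}
For example, pso_savl(lambda z: float(np.sum(z**2))) minimizes the 50-dimensional sphere function; the essential mechanism is that the clipping bound vl is driven by the estimated searching state rather than by the iteration count.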
Nonlinear extensions of the active subspaces method have brought remarkable results for dimension reduction in parameter space and for response surface design. We further develop a kernel-based nonlinear method. In particular, we introduce it within a broader mathematical framework that also encompasses the reduction in parameter space of multivariate objective functions. The implementation is thoroughly discussed and tested on more challenging benchmarks than those already present in the literature, for which dimension reduction with active subspaces already produces good results. Finally, we show a complete pipeline for the design of response surfaces with the new methodology in the context of a parametric CFD application solved with the Discontinuous Galerkin method.
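For context, the classical (linear) active subspace of a scalar function $f$ with inputs distributed according to a density $\rho$ is spanned by the leading eigenvectors of
\[
C \;=\; \int \nabla f(x)\, \nabla f(x)^{\top} \rho(x)\, dx \;=\; W \Lambda W^{\top};
\]
the kernel-based method discussed here replaces this linear construction with a nonlinear, feature-space analogue, and the broader framework extends the reduction to multivariate (vector-valued) objective functions.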
In recent years, object detection has experienced impressive progress. Despite these improvements, there is still a significant performance gap between the detection of small and large objects. We analyze the current state-of-the-art model, Mask-RCNN, on a challenging dataset, MS COCO, and show that the overlap between small ground-truth objects and the predicted anchors is much lower than the expected IoU threshold. We conjecture this is due to two factors: (1) only a few images contain small objects, and (2) even within the images that do contain them, small objects do not appear often enough. We therefore propose to oversample images with small objects and to augment each of these images by copy-pasting small objects many times. This allows us to trade off the quality of the detector on large objects against that on small objects. We evaluate different pasting augmentation strategies, and ultimately we achieve a 9.7\% relative improvement on instance segmentation and 7.1\% on object detection of small objects, compared to the current state-of-the-art method on MS COCO.
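As a minimal sketch of the pasting step only (oversampling of images containing small objects is a separate, dataset-level operation; the function name, parameters, and the overlap-rejection rule are ours, for illustration):
\begin{verbatim}
import numpy as np

def copy_paste_small_objects(image, mask, n_copies=3, seed=0):
    """Paste one small object's pixels (binary instance mask) at random
    locations that do not overlap existing objects. Returns the
    augmented image and the updated boolean mask."""
    rng = np.random.default_rng(seed)
    h, w = mask.shape
    ys, xs = np.nonzero(mask)
    top0, left0 = ys.min(), xs.min()
    oh, ow = ys.max() - top0 + 1, xs.max() - left0 + 1
    patch = image[top0:top0 + oh, left0:left0 + ow].copy()
    patch_mask = mask[top0:top0 + oh, left0:left0 + ow]
    out_img, out_mask = image.copy(), mask.copy()
    for _ in range(n_copies):
        top = rng.integers(0, h - oh + 1)
        left = rng.integers(0, w - ow + 1)
        region = out_mask[top:top + oh, left:left + ow]
        if region[patch_mask].any():  # reject overlapping placements
            continue
        out_img[top:top + oh, left:left + ow][patch_mask] = patch[patch_mask]
        out_mask[top:top + oh, left:left + ow] |= patch_mask
    return out_img, out_mask
\end{verbatim}
Each accepted paste adds one more instance of the small object to the image and its annotation mask, which is what increases the number of anchors that can match small ground-truth boxes during training.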