We detail how to use Newton's method for distortion-based curved $r$-adaption to a discrete high-order metric field while matching a target geometry. Specifically, we combine two terms: a distortion measuring the deviation from the target metric; and a penalty term measuring the deviation from the target boundary. For this combination, we consider four ingredients. First, to represent the metric field, we detail a log-Euclidean high-order metric interpolation on a curved (straight-edged) mesh. Second, for this metric interpolation, we detail the first and second derivatives in physical coordinates. Third, to represent the domain boundaries, we propose an implicit representation for 2D and 3D NURBS models. Fourth, for this implicit representation, we obtain the first and second derivatives. The derivatives of the metric interpolation and the implicit representation allow minimizing the objective function with Newton's method. For this second-order minimization, the resulting meshes simultaneously match the curved features of the target metric and boundary. Matching the metric and the geometry using second-order optimization is an unprecedented capability in curved (straight-edged) $r$-adaption. This capability will be critical in global and cavity-based curved (straight-edged) high-order mesh adaption.
In recent years, the use of expressive surface visualizations for representing vascular structures has gained significant attention. These visualizations provide a comprehensive understanding of complex anatomical structures and are crucial for treatment planning and medical education. However, to aid decision-making, physicians require visualizations that depict anatomical structures and their spatial relationships accurately and in a clearly perceivable manner. This work extends a previous paper and presents a thorough examination of common techniques for encoding distance information on 3D vessel surfaces, together with an implementation of these visualizations. A Unity environment and detailed implementation instructions for sixteen different visualizations are provided. These visualizations can be classified into four categories: fundamental, surface-based, auxiliary, and illustrative. Furthermore, this extension includes tools to generate endpoint locations for vascular models. Overall, this framework serves as a valuable resource for researchers in the field of vascular surface visualization by reducing the barrier to entry and promoting further research in this area. By providing an implementation of various visualizations, this paper aims to aid the development of accurate and effective visual representations of vascular structures for treatment planning and medical education.
In this paper, we propose two new algorithms for maximum-likelihood estimation (MLE) of high-dimensional sparse covariance matrices. Unlike most state-of-the-art methods, which either use regularization techniques or penalize the likelihood to impose sparsity, we solve the MLE problem based on an estimated covariance graph. More specifically, we propose a two-stage procedure: in the first stage, we determine the sparsity pattern of the target covariance matrix (in other words, the marginal independence structure of the covariance graph under a Gaussian graphical model) using the multiple hypothesis testing method of false discovery rate (FDR) control, and in the second stage we either use a block coordinate descent approach to estimate the non-zero values or a proximal distance approach that penalizes the distance between the estimated covariance graph and the target covariance matrix. Each of the two resulting methods has its own advantage: the coordinate descent approach does not require tuning any hyper-parameters, whereas the proximal distance approach is computationally fast but requires careful tuning of the penalty parameter. Both methods are effective even when the number of observed samples is smaller than the dimension of the data. For performance evaluation, we test the proposed methods on both simulated and real-world data and show that they provide more accurate estimates of the sparse covariance matrix than two state-of-the-art methods.
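The first stage of the two-stage procedure described above can be sketched as follows. This is a simplified illustration, not the authors' implementation: it tests pairwise sample correlations with Fisher's z-transform and applies Benjamini-Hochberg FDR control to recover the sparsity pattern (support) of the covariance matrix.

```python
import numpy as np
from scipy import stats

def fdr_sparsity_pattern(X, q=0.05):
    """Simplified stage-one sketch: estimate the support of a sparse
    covariance matrix by multiple hypothesis testing with
    Benjamini-Hochberg FDR control on pairwise sample correlations.

    X : (n, p) data matrix, n samples of a p-dimensional vector.
    q : target false discovery rate.
    Returns a boolean (p, p) support matrix.
    """
    n, p = X.shape
    R = np.corrcoef(X, rowvar=False)
    iu = np.triu_indices(p, k=1)
    # Fisher z-transform: under H0 (zero correlation), z ~ N(0, 1/(n-3))
    z = np.arctanh(np.clip(R[iu], -0.999, 0.999)) * np.sqrt(n - 3)
    pvals = 2 * stats.norm.sf(np.abs(z))
    # Benjamini-Hochberg step-up procedure
    m = len(pvals)
    order = np.argsort(pvals)
    below = pvals[order] <= q * np.arange(1, m + 1) / m
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    # Support: diagonal always included, plus significant off-diagonals
    S = np.eye(p, dtype=bool)
    S[iu] = reject
    S |= S.T
    return S
```

Stage two would then restrict the MLE to this support, e.g. by block coordinate descent over the non-zero entries.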
We are interested in the discretisation of a drift-diffusion system in the framework of hybrid finite volume (HFV) methods on general polygonal/polyhedral meshes. The system under study is composed of two anisotropic and nonlinear convection-diffusion equations with nonsymmetric tensors, coupled with a Poisson equation, and describes, in particular, semiconductor devices immersed in a magnetic field. We introduce a new scheme based on an entropy-dissipation relation and prove that the scheme admits solutions with values in admissible sets; in particular, the computed densities remain positive. Moreover, we show that the discrete solutions to the scheme converge exponentially fast in time towards the associated discrete thermal equilibrium. Several numerical tests confirm our theoretical results. To the best of our knowledge, this scheme is the first able to discretise anisotropic drift-diffusion systems while preserving the bounds on the densities.
Port-Hamiltonian (PH) systems provide a framework for the modeling, analysis, and control of complex dynamical systems, where the complexity may result from multi-physical couplings, non-trivial domains, and diverse nonlinearities. A major benefit of the PH representation is the explicit formulation of power interfaces, so-called ports, which allow for a power-preserving interconnection of subsystems to compose flexible multibody systems in a modular way. In this work, we present a PH representation of geometrically exact strings with nonlinear material behaviour. Furthermore, using structure-preserving discretization techniques, a corresponding finite-dimensional PH state-space model is developed. Applying mixed finite elements, the semi-discrete model retains the PH structure and the ports (pairs of velocities and forces) at the discrete level. Moreover, discrete derivatives are used to obtain an energy-consistent time-stepping method. The numerical properties of the newly devised model are investigated in a representative example. The developed PH state-space model can be used for structure-preserving simulation and model order reduction as well as for feedforward and feedback control design.
ROI extraction is an active but challenging task in remote sensing because of complicated landforms, complex boundaries, and the need for annotations. Weakly supervised learning (WSL) aims to learn a mapping from an input image to a pixel-wise prediction using only image-wise labels, which can dramatically decrease labeling cost. However, due to the imprecision of such labels, the accuracy and time consumption of WSL methods are relatively unsatisfactory. In this paper, we propose a two-step ROI extraction method based on contrastive learning. First, we integrate multiscale Grad-CAM to obtain pseudo pixel-wise annotations with well-delineated boundaries. Then, to reduce the impact of misjudgments in the pseudo annotations, we construct a contrastive learning strategy that encourages features inside the ROI to be as close to each other as possible while separating background features from foreground features. Comprehensive experiments demonstrate the superiority of our proposal. Code is available at //github.com/HE-Lingfeng/ROI-Extraction
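The contrastive strategy sketched in the abstract can be illustrated with a minimal loss function. The formulation below, which pulls ROI features toward their centroid and pushes background features at least a margin away from it, is a plausible stand-in for this kind of objective, not the paper's exact loss:

```python
import numpy as np

def roi_contrastive_loss(feats, mask, margin=1.0):
    """Illustrative sketch (not the paper's exact loss): pull pixel
    features inside the ROI toward their centroid, and push background
    features at least `margin` away from that centroid.

    feats : (N, d) array of per-pixel feature vectors.
    mask  : (N,) boolean array, True for pixels inside the ROI.
    """
    fg = feats[mask]          # features of pixels inside the ROI
    bg = feats[~mask]         # background features
    center = fg.mean(axis=0)  # ROI feature centroid
    # Compactness term: mean squared distance of ROI features to centroid
    pull = np.mean(np.sum((fg - center) ** 2, axis=1))
    # Separation term: hinge on distance of background features to centroid
    d_bg = np.linalg.norm(bg - center, axis=1)
    push = np.mean(np.maximum(0.0, margin - d_bg) ** 2)
    return pull + push
```

In practice such a loss would be applied to deep features under the pseudo annotations from the first step, so that compact foreground clusters and well-separated backgrounds correct some of the pseudo-label noise.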
We present an $\ell^2_2+\ell_1$-regularized discrete least squares approximation over general regions under the assumptions of hyperinterpolation, which we name hybrid hyperinterpolation. Hybrid hyperinterpolation uses both a soft-thresholding operator and a filter function to shrink the Fourier coefficients of a given continuous function, approximated with respect to some orthonormal basis by a high-order quadrature rule; it is thus a combination of lasso and filtered hyperinterpolation, and inherits features of both for dealing with noisy data once the regularization parameter and filter function are well chosen. We not only provide a theoretical analysis of the $L_2$ errors of hybrid hyperinterpolation for approximating continuous functions with and without noise, but also decompose the $L_2$ errors into three exactly computed terms with the aid of an a priori regularization parameter choice rule. This rule, which makes full use of the hyperinterpolation coefficients to choose the regularization parameter, reveals that the $L_2$ errors of hybrid hyperinterpolation first decline sharply and then increase slowly as the sparsity of the coefficients ranges from one to large values. Numerical examples show the enhanced performance of hybrid hyperinterpolation as the regularization parameter and the noise vary. The theoretical $L_2$ error bounds are verified in numerical examples on the interval, the unit disk, the unit sphere, the unit cube, and the union of disks.
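The coefficient-shrinking mechanism described above can be sketched in one dimension. The example below works on $[-1,1]$ with the orthonormal Chebyshev basis, approximates the Fourier coefficients by Gauss-Chebyshev quadrature, and then applies both a filter function and the soft-thresholding operator. The particular filter (a cosine taper on the upper half of the degrees) is an illustrative choice, not necessarily the one used in the paper:

```python
import numpy as np

def soft_threshold(a, lam):
    # Soft-thresholding operator: S_lam(a) = sign(a) * max(|a| - lam, 0)
    return np.sign(a) * np.maximum(np.abs(a) - lam, 0.0)

def hybrid_coefficients_1d(f, L, lam, n_quad=64):
    """Illustrative 1D sketch of hybrid hyperinterpolation coefficients
    on [-1, 1] with the Chebyshev weight 1/sqrt(1-x^2)."""
    # Gauss-Chebyshev nodes and (equal) weights
    k = np.arange(1, n_quad + 1)
    x = np.cos((2 * k - 1) * np.pi / (2 * n_quad))
    w = np.pi / n_quad
    degs = np.arange(L + 1)
    # Orthonormal Chebyshev basis: T_0/sqrt(pi), T_l*sqrt(2/pi) for l >= 1
    T = np.cos(np.outer(degs, np.arccos(x)))
    scale = np.where(degs == 0, 1 / np.sqrt(np.pi), np.sqrt(2 / np.pi))
    coeffs = w * (T * scale[:, None]) @ f(x)
    # Filter h: 1 on degrees <= L/2, cosine decay to 0 at degree L
    h = np.where(degs <= L / 2, 1.0,
                 np.cos(np.pi * (degs / L - 0.5)) ** 2)
    # Hybrid shrinkage: filter combined with soft thresholding
    return h * soft_threshold(coeffs, lam)
```

With `lam = 0` and a trivial filter this reduces to plain hyperinterpolation; increasing `lam` zeroes out small (noise-dominated) coefficients, which is the lasso-style behaviour the abstract refers to.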
A code of length $n$ is said to be (combinatorially) $(\rho,L)$-list decodable if the Hamming ball of radius $\rho n$ around any vector in the ambient space contains at most $L$ codewords. We study a recently introduced class of higher order MDS codes, which are closely related (via duality) to codes that achieve a generalized Singleton bound for list decodability. For some $\ell\geq 1$, higher order MDS codes of length $n$, dimension $k$, and order $\ell$ are denoted as $(n,k)$-MDS($\ell$) codes. We present a number of results on the structure of these codes, identifying the `extendability' of their parameters in various scenarios. Specifically, for some parameter regimes, we identify conditions under which $(n_1,k_1)$-MDS($\ell_1$) codes can be obtained from $(n_2,k_2)$-MDS($\ell_2$) codes via various techniques. We believe that these results will aid efficient constructions of higher order MDS codes. We also obtain a new upper bound on the field size required for the existence of such codes, which arguably improves over the best known existing bound in some parameter regimes.
Inferring causal structures from time series data is the central interest of many scientific inquiries. A major barrier to such inference is the problem of subsampling, i.e., the frequency of measurements is much lower than that of causal influence. To overcome this problem, numerous model-based and model-free methods have been proposed, yet they are either limited to the linear case or fail to establish identifiability. In this work, we propose a model-free algorithm that can identify the entire causal structure from subsampled time series, without any parametric constraint. The idea is that the challenge of subsampling arises mainly from \emph{unobserved} time steps and therefore should be handled with tools designed for unobserved variables. Among these tools, we find the proxy variable approach a particularly good fit, in the sense that the proxy of an unobserved variable is naturally the same variable at an observed time step. Following this intuition, we establish comprehensive structural identifiability results. Our method is constraint-based and requires no regularity assumptions beyond common continuity and differentiability. These theoretical advantages are reflected in the experimental results.
This dataset contains 10,000 fluid flow and heat transfer simulations in U-bend shapes. Each shape is described by 28 design parameters, which are processed with the help of computational fluid dynamics methods. The dataset provides a comprehensive benchmark for investigating various problems and methods in the field of design optimization. For these investigations, supervised, semi-supervised, and unsupervised deep learning approaches can be employed. One unique feature of this dataset is that each shape can be represented by three distinct data types: design-parameter and objective combinations; 2D images of the geometry and of the solution variables of the numerical simulation at five different resolutions; and a representation using the cell values of the numerical mesh. This third representation enables deep learning approaches to take the specific data structure of numerical simulations into account. The source code and the container used to generate the data are published as part of this work.
This work aims to provide an engagement decision support tool for Beyond Visual Range (BVR) air combat in the context of Defensive Counter Air (DCA) missions. In BVR air combat, the engagement decision refers to the choice of the moment at which the pilot engages a target by assuming an offensive stance and executing the corresponding maneuvers. To model this decision, we use the Brazilian Air Force's Aerospace Simulation Environment (\textit{Ambiente de Simula\c{c}\~ao Aeroespacial - ASA} in Portuguese), which generated 3,729 constructive simulations lasting 12 minutes each, with a total of 10,316 engagements. We analyzed all samples using an operational metric called the DCA index, which represents, based on the experience of subject matter experts, the degree of success in this type of mission. This metric considers the distances between aircraft of the same team and of the opposite team, the position of the Combat Air Patrol point, and the number of missiles used. By defining the engagement state right before the engagement starts and averaging the DCA index throughout the engagement, we create a supervised learning model to determine the quality of a new engagement. An algorithm based on decision trees, implemented with the XGBoost library, provides a regression model that predicts the DCA index with a coefficient of determination close to 0.8 and a root mean square error of 0.05, furnishing parameters that help the BVR pilot decide whether or not to engage. Thus, using data obtained through simulations, this work contributes a machine-learning-based decision support system for BVR air combat.
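The supervised-learning setup described above can be sketched as follows. The paper uses the XGBoost library on simulation data from ASA; in this self-contained illustration, scikit-learn's gradient-boosted trees regressor stands in for XGBoost, and the features and the DCA-index target are synthetic stand-ins loosely modeled on the engagement-state variables named in the abstract:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import r2_score, mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Synthetic engagement-state features (illustrative stand-ins):
X = np.column_stack([
    rng.uniform(0, 100, n),   # e.g. distance to nearest opponent (km)
    rng.uniform(0, 50, n),    # e.g. distance to the CAP point (km)
    rng.integers(0, 4, n),    # e.g. missiles already used
])
# Synthetic stand-in for the DCA index, clipped to [0, 1]
y = np.clip(0.5 + 0.004 * X[:, 0] - 0.006 * X[:, 1]
            - 0.05 * X[:, 2] + rng.normal(0, 0.03, n), 0, 1)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
# Gradient-boosted decision trees for regression on the DCA index
model = GradientBoostingRegressor(n_estimators=200, max_depth=3,
                                  random_state=0)
model.fit(X_tr, y_tr)
pred = model.predict(X_te)
r2 = r2_score(y_te, pred)                        # coefficient of determination
rmse = mean_squared_error(y_te, pred) ** 0.5     # root mean square error
```

At decision time, the model would score a candidate engagement from its current state vector, giving the pilot a predicted DCA index before committing to the engagement.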