Point clouds arise from acquisition processes in various scenarios, such as reverse engineering, rapid prototyping, or cultural preservation. To run simulations on the acquired data, e.g., via finite element methods, a mesh has to be created from it. In this paper, a meshing algorithm for point clouds is presented, which is based on a sphere covering of the underlying surface. The algorithm provides a mesh that is close to uniform in terms of the edge lengths and angles of its triangles. Additionally, theoretical results guarantee the output to be manifold, given suitable input and parameter choices. We present both the underlying theory, which provides suitable parameter bounds, and experiments showing that our algorithm can compete with widely used competitors in terms of output quality and timings.
Collaborative filtering (CF) is a widely employed technique that predicts user preferences based on past interactions. Negative sampling plays a vital role in training CF-based models with implicit feedback. In this paper, we propose a novel perspective based on the sampling area to revisit existing sampling methods. We point out that current sampling methods mainly focus on Point-wise or Line-wise sampling, lacking flexibility and leaving a significant portion of the hard sampling area unexplored. To address this limitation, we propose Dimension Independent Mixup for Hard Negative Sampling (DINS), which is the first Area-wise sampling method for training CF-based models. DINS comprises three modules: Hard Boundary Definition, Dimension Independent Mixup, and Multi-hop Pooling. Experiments with real-world datasets on both matrix factorization and graph-based models demonstrate that DINS outperforms other negative sampling methods, establishing its effectiveness and superiority. Our work contributes a new perspective, introduces Area-wise sampling, and presents DINS as a novel approach that achieves state-of-the-art performance for negative sampling. Our implementations are available in PyTorch.
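As a concrete illustration of the area-wise idea, the following is a minimal PyTorch sketch of such a sampler; the function name, the candidate-set size, and the exact mixing rule are our own assumptions rather than the released DINS implementation, and the Multi-hop Pooling module is only indicated by a comment.

```python
import torch

def dins_style_negative(user_emb, pos_emb, cand_neg_emb):
    """Hedged sketch of area-wise hard negative synthesis (not the authors' code).

    user_emb:     (d,)   user embedding
    pos_emb:      (d,)   positive item embedding
    cand_neg_emb: (M, d) candidate negative item embeddings
    Returns a synthetic hard negative embedding of shape (d,).
    """
    # 1) Hard boundary definition: the candidate scored highest by the user
    #    delimits the "hard" region.
    scores = cand_neg_emb @ user_emb                  # (M,)
    hard = cand_neg_emb[scores.argmax()]              # (d,)

    # 2) Dimension-independent mixup: one mixing weight per embedding
    #    dimension (rather than a single scalar), so the synthetic sample
    #    can lie anywhere in the axis-aligned area spanned by the positive
    #    and the hard negative, not just on the line between them.
    alpha = torch.rand_like(hard)                     # (d,)
    synthetic = alpha * pos_emb + (1.0 - alpha) * hard

    # 3) Multi-hop pooling (placeholder): in a graph-based model one would
    #    additionally pool candidates from multi-hop neighborhoods before
    #    steps 1-2; omitted here for brevity.
    return synthetic


if __name__ == "__main__":
    torch.manual_seed(0)
    d, M = 64, 16
    neg = dins_style_negative(torch.randn(d), torch.randn(d), torch.randn(M, d))
    print(neg.shape)  # torch.Size([64])
```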
In this paper, we propose and analyze an efficient preconditioning method, based on the reconstructed discontinuous approximation method, for the elliptic problem. We reconstruct a high-order piecewise polynomial space in which arbitrary order can be achieved with only one degree of freedom per element. This space can be directly used with the symmetric/nonsymmetric interior penalty discontinuous Galerkin method. Compared with the standard DG method, this approach offers a clear advantage in the efficiency of the approximation. In addition, we establish a norm equivalence result between the reconstructed high-order space and the piecewise constant space. This property further allows us to construct an optimal preconditioner from the piecewise constant space. The upper bound on the condition number of the preconditioned symmetric/nonsymmetric system is shown to be independent of the mesh size. Numerical experiments are provided to demonstrate the validity of the theory and the efficiency of the proposed method.
The goal of this paper is to understand how exponential-time approximation algorithms can be obtained from existing polynomial-time approximation algorithms, existing parameterized exact algorithms, and existing parameterized approximation algorithms. More formally, we consider a monotone subset minimization problem over a universe of size $n$ (e.g., Vertex Cover or Feedback Vertex Set). We have access to an algorithm that finds an $\alpha$-approximate solution in time $c^k \cdot n^{O(1)}$ if a solution of size $k$ exists (and more generally, an extension algorithm that can approximate in a similar way if a set can be extended to a solution with $k$ further elements). Our goal is to obtain a $d^n \cdot n^{O(1)}$ time $\beta$-approximation algorithm for the problem with $d$ as small as possible. That is, for every fixed $\alpha,c,\beta \geq 1$, we would like to determine the smallest possible $d$ that can be achieved in a model where our problem-specific knowledge is limited to checking the feasibility of a solution and invoking the $\alpha$-approximate extension algorithm. Our results completely resolve this question: (1) For every fixed $\alpha,c,\beta \geq 1$, a simple algorithm (``approximate monotone local search'') achieves the optimum value of $d$. (2) Given $\alpha,c,\beta \geq 1$, we can efficiently compute the optimum $d$ up to any precision $\varepsilon > 0$. Earlier work presented algorithms (but no lower bounds) for the special case $\alpha = \beta = 1$ [Fomin et al., J. ACM 2019] and for the special case $\alpha = \beta > 1$ [Esmer et al., ESA 2022]. Our work generalizes these results and in particular confirms that the earlier algorithms are optimal in these special cases.
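To illustrate the model of computation, the sketch below shows the control flow of an approximate-monotone-local-search scheme under the stated assumptions: a black-box $\alpha$-approximate extension oracle and a guessed optimum size $k$; the function and parameter names are ours, not the paper's.

```python
import random

def approximate_monotone_local_search(universe, extend, k, t, num_trials):
    """Hedged sketch of the control flow of approximate monotone local search.

    universe:   list of elements
    extend:     black-box extension oracle; extend(X, budget) either returns a
                feasible solution containing X obtained by adding (approximately)
                at most `budget` further elements, or None -- this models the
                alpha-approximate c^k-time extension algorithm assumed above
    k:          guessed size of an optimal solution
    t:          number of elements sampled before each oracle call
    num_trials: number of random restarts
    """
    best = None
    for _ in range(num_trials):
        # Guess t elements of some optimal solution uniformly at random ...
        X = set(random.sample(universe, t))
        # ... and ask the oracle to extend them with at most k - t further elements.
        sol = extend(X, k - t)
        if sol is not None and (best is None or len(sol) < len(best)):
            best = sol
    return best
```

Each trial succeeds only if the random sample happens to lie inside a suitable solution, so balancing the sample size $t$ and the number of trials against the cost of each oracle call is precisely the trade-off behind the optimal base $d$ analyzed in the paper.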
The focus of this article is on shape and topology optimization of transient vibroacoustic problems. The main contribution is a transient problem formulation that enables optimization over wide ranges of frequencies with complex signals, which are often of interest in industry. The work employs time domain methods to realize wide-band optimization in the frequency domain. To this end, the objective function is defined in the frequency domain, where the frequency response of the system is obtained through a fast Fourier transform (FFT) algorithm applied to the transient response of the system. The work utilizes a parametric level set approach to implicitly define the geometry, in which the zero level describes the interface between acoustic and structural domains. A cut element method is used to capture the geometry on a fixed background mesh through a special integration scheme that accurately resolves the interface. This allows for accurate solutions to strongly coupled vibroacoustic systems without having to re-mesh at each design update. The present work relies on efficient gradient-based optimizers, where the discrete adjoint method is used to calculate the sensitivities of the objective and constraint functions. A thorough explanation of the consistent sensitivity calculation is given, including the FFT operation needed to define the objective function in the frequency domain. Finally, the developed framework is applied to various vibroacoustic filter designs, and the optimization results are verified using commercial finite element software with a steady-state time-harmonic formulation.
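As a minimal illustration of how such a frequency-domain objective can be assembled from a transient simulation, the NumPy sketch below evaluates a band-limited spectral objective from a sampled time signal; the function name, the specific objective (mean squared spectral magnitude over a band), and the toy signal are assumptions for illustration only, and the adjoint sensitivity part is omitted.

```python
import numpy as np

def frequency_band_objective(p_t, dt, f_lo, f_hi):
    """Hedged sketch: frequency-domain objective built from a transient signal.

    p_t:        sampled transient response (e.g. pressure at an output point)
    dt:         time step of the transient simulation
    f_lo, f_hi: frequency band of interest [Hz]
    Returns the mean squared spectral magnitude in the band, a simple stand-in
    for the kind of wide-band objective described above.
    """
    P = np.fft.rfft(p_t)                       # one-sided spectrum of the response
    freqs = np.fft.rfftfreq(len(p_t), d=dt)    # corresponding frequencies
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return np.mean(np.abs(P[band]) ** 2)

# toy usage: broadband chirp sampled at 10 kHz for 1 s
dt = 1e-4
t = np.arange(0.0, 1.0, dt)
signal = np.sin(2 * np.pi * (50.0 + 200.0 * t) * t)
print(frequency_band_objective(signal, dt, f_lo=100.0, f_hi=300.0))
```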
We present a finite element scheme for fractional diffusion problems with varying diffusivity and fractional order. We consider a symmetric integral form of these nonlocal equations defined on general geometries and in arbitrary bounded domains. A number of challenges are encountered when discretizing these equations. The first comes from the heterogeneous kernel singularity in the fractional integral operator. The second comes from the dense discrete operator with its quadratic growth in memory footprint and arithmetic operations. An additional challenge comes from the need to handle volume conditions, the generalization of classical local boundary conditions to the nonlocal setting. Satisfying these conditions requires that the effect of the whole domain, including both the interior and exterior regions, can be computed at every interior point in the discretization. Performed directly, this would result in quadratic complexity. To address these challenges, we propose a strategy that decomposes the stiffness matrix into three components. The first is a sparse matrix that handles the singular near-field separately and is computed by adapting singular quadrature techniques available for the homogeneous case to the case of spatially variable order. The second component handles the remaining smooth part of the near-field as well as the far field and is approximated by a hierarchical $\mathcal{H}^{2}$ matrix that maintains linear complexity in storage and operations. The third component handles the effect of the global mesh at every node and is written as a weighted mass matrix whose density is computed by a fast multipole-type method. The resulting algorithm therefore has overall linear space and time complexity. An analysis of the consistency of the stiffness matrix is provided, and numerical experiments are conducted to illustrate the convergence and performance of the proposed algorithm.
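In notation introduced here only for illustration, the three-component decomposition described above can be summarized as
\[
A \;\approx\; A^{\mathrm{sing}}_{\mathrm{near}} \;+\; A_{\mathcal{H}^2} \;+\; M_{\rho},
\]
where $A^{\mathrm{sing}}_{\mathrm{near}}$ is the sparse matrix treating the singular near-field, $A_{\mathcal{H}^2}$ is the hierarchical matrix approximating the smooth near-field and the far field, and $M_{\rho}$ is the weighted mass matrix whose density $\rho$ accounts for the effect of the global mesh, including the exterior region, at every node.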
We explore two differentiable deep declarative layers, namely least squares on sphere (LESS) and implicit eigen decomposition (IED), for learning principal matrix features (PMaF). These can be used to represent data features with a low-dimensional vector containing the dominant information of a high-dimensional matrix. We first solve the problems with iterative optimization in the forward pass and then backpropagate through the solution via implicit gradients under a bi-level optimization framework. In particular, adaptive descent steps with a backtracking line search method and descent decay in the tangent space are studied to improve the forward-pass efficiency of LESS. Meanwhile, we exploit structures in the data to greatly reduce the computational complexity in the backward pass of LESS and IED. Empirically, we demonstrate the superiority of our layers over off-the-shelf baselines by comparing solution optimality and computational requirements.
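For intuition, here is a hedged NumPy sketch of a LESS-style forward pass, i.e., least squares constrained to the unit sphere solved by gradient steps projected onto the tangent space with a backtracking line search; the names, tolerances, and toy data are ours, and the implicit-gradient backward pass of the actual layer is not shown.

```python
import numpy as np

def least_squares_on_sphere(A, b, x0=None, max_iter=200, tol=1e-8):
    """Hedged sketch of a LESS-style forward pass:
    minimize ||A x - b||^2 subject to ||x|| = 1,
    via tangent-space gradient steps with backtracking line search."""
    m, n = A.shape
    x = np.ones(n) / np.sqrt(n) if x0 is None else x0 / np.linalg.norm(x0)
    f = lambda v: 0.5 * np.linalg.norm(A @ v - b) ** 2
    for _ in range(max_iter):
        grad = A.T @ (A @ x - b)
        # project the Euclidean gradient onto the tangent space of the sphere at x
        rgrad = grad - (grad @ x) * x
        if np.linalg.norm(rgrad) < tol:
            break
        # backtracking line search (Armijo condition on the retracted point)
        step = 1.0
        while step > 1e-12:
            x_new = x - step * rgrad
            x_new /= np.linalg.norm(x_new)          # retract back onto the sphere
            if f(x_new) <= f(x) - 1e-4 * step * (rgrad @ rgrad):
                break
            step *= 0.5
        x = x_new
    return x

# toy usage
rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 5)), rng.standard_normal(20)
x = least_squares_on_sphere(A, b)
print(np.linalg.norm(x))  # ~1.0
```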
We consider the Sobolev embedding operator $E_s : H^s(\Omega) \to L_2(\Omega)$ and its role in the solution of inverse problems. In particular, we collect various properties and investigate different characterizations of its adjoint operator $E_s^*$, which is a common component in both iterative and variational regularization methods. These include variational representations and connections to boundary value problems, Fourier and wavelet representations, as well as connections to spatial filters. Moreover, we consider characterizations in terms of Fourier series, singular value decompositions and frame decompositions, as well as representations in finite-dimensional settings. While many of these results are already known to researchers from different fields, a detailed and general overview or reference work containing rigorous mathematical proofs is still missing. Hence, in this paper we aim to fill this gap by collecting, introducing and generalizing a large number of characterizations of $E_s^*$ and by discussing their use in regularization methods for solving inverse problems. The resulting compilation can serve both as a reference and as a useful guide for efficient numerical implementation in practice.
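For orientation, here is a hedged sketch of the simplest such characterization, assuming the full-space (or periodic) setting with the Bessel-potential inner product $\langle u, v \rangle_{H^s} = \int (1+|\xi|^2)^s \, \hat{u}(\xi)\,\overline{\hat{v}(\xi)} \, d\xi$; the bounded-domain case treated in the paper requires additional care. The defining relation of the adjoint gives
\[
\langle E_s^* f, v\rangle_{H^s} = \langle f, E_s v\rangle_{L_2} \ \text{ for all } v \in H^s
\quad\Longrightarrow\quad
\widehat{E_s^* f}(\xi) = \bigl(1+|\xi|^2\bigr)^{-s}\,\hat{f}(\xi),
\]
so that $E_s^* = (\mathrm{Id}-\Delta)^{-s}$ acts as a low-pass spatial filter; for $s=1$, computing $u = E_1^* f$ amounts to solving $u - \Delta u = f$, which is the link to the variational and boundary value problem characterizations mentioned above.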
Discontinuities and delayed terms are encountered in the governing equations of a large class of problems, ranging from physics and engineering to medicine and economics. These systems cannot be properly modelled and simulated with standard Ordinary Differential Equations (ODEs), or any data-driven approximation of them, including Neural Ordinary Differential Equations (NODEs). To circumvent this issue, latent variables are typically introduced to solve the dynamics of the system in a higher-dimensional space and obtain the solution as a projection onto the original space. However, this solution lacks physical interpretability. In contrast, Delay Differential Equations (DDEs) and their data-driven, approximated counterparts naturally appear as good candidates to characterize such complicated systems. In this work we revisit the recently proposed Neural DDE by introducing the Neural State-Dependent DDE (SDDDE), a general and flexible framework featuring multiple and state-dependent delays. The developed framework is auto-differentiable and runs efficiently on multiple backends. We show that our method is competitive and outperforms other continuous-class models on a wide variety of delayed dynamical systems.
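To fix ideas, the following is a minimal NumPy sketch of a fixed-step solver for a state-dependent DDE of the form $x'(t) = f\bigl(t, x(t), x(t-\tau(x(t)))\bigr)$ with history $x(t) = h(t)$ for $t \le t_0$. In a Neural SDDDE, $f$ (and possibly $\tau$) would be neural networks and the loop would be differentiated through; here they are plain callables, and the integrator, names, and toy example are our own assumptions.

```python
import numpy as np

def integrate_sddde(f, delay, history, t0, t1, dt):
    """Hedged sketch: explicit Euler for a state-dependent DDE, with linear
    interpolation of the stored trajectory to evaluate delayed states."""
    ts = [t0]
    xs = [np.atleast_1d(history(t0))]

    def x_at(t):
        # look up a past state: prescribed history before t0, interpolation afterwards
        if t <= t0:
            return np.atleast_1d(history(t))
        return np.array([np.interp(t, ts, [x[i] for x in xs])
                         for i in range(xs[0].size)])

    t, x = t0, xs[0]
    while t < t1:
        tau = float(delay(x))                 # state-dependent delay
        x_delayed = x_at(t - tau)
        x = x + dt * np.atleast_1d(f(t, x, x_delayed))
        t += dt
        ts.append(t)
        xs.append(x)
    return np.array(ts), np.array(xs)

# toy usage: x'(t) = -x(t - tau(x)) with a state-dependent delay
ts, xs = integrate_sddde(
    f=lambda t, x, xd: -xd,
    delay=lambda x: 1.0 + 0.1 * np.tanh(x[0]),
    history=lambda t: np.array([1.0]),
    t0=0.0, t1=5.0, dt=0.01,
)
print(xs[-1])
```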
Many real-world decision-making tasks require learning causal relationships between a set of variables. Traditional causal discovery methods, however, require that all variables are observed, which is often not feasible in practical scenarios. Without additional assumptions about the unobserved variables, it is not possible to recover any causal relationships from observational data. Fortunately, in many applied settings, additional structure among the confounders can be expected. In particular, pervasive confounding is commonly encountered and has been utilized for consistent causal estimation in linear causal models. In this paper, we present a provably consistent method to estimate causal relationships in the non-linear, pervasive confounding setting. The core of our procedure relies on the ability to estimate the confounding variation through a simple spectral decomposition of the observed data matrix. We derive a DAG score function based on this insight, prove its consistency in recovering a correct ordering of the DAG, and empirically compare it to previous approaches. We demonstrate improved performance on both simulated and real datasets by explicitly accounting for both confounders and non-linear effects.
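As an illustration of the spectral step, the NumPy sketch below removes the leading principal directions of the observed data matrix as an estimate of the pervasive confounding variation; the function name, the choice of rank, and the toy data are assumptions for illustration, and the DAG score function built on top of this step is not shown.

```python
import numpy as np

def deconfound(X, r):
    """Hedged sketch: estimate and remove pervasive confounding variation as the
    top-r spectral components of the observed data matrix X (n samples x p variables)."""
    X = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    # zero out the leading r singular values, which are attributed to
    # confounders affecting many variables at once
    S_trunc = S.copy()
    S_trunc[:r] = 0.0
    return U @ np.diag(S_trunc) @ Vt

# toy usage: 500 samples, 10 variables, one pervasive confounder
rng = np.random.default_rng(0)
h = rng.standard_normal((500, 1))                       # hidden confounder
X = h @ rng.standard_normal((1, 10)) + 0.5 * rng.standard_normal((500, 10))
X_deconf = deconfound(X, r=1)
print(np.linalg.norm(X_deconf) / np.linalg.norm(X))     # much of the variance is removed
```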
When is heterogeneity in the composition of an autonomous robotic team beneficial and when is it detrimental? We investigate and answer this question in the context of a minimally viable model that examines the role of heterogeneous speeds in perimeter defense problems, where defenders share a total allocated speed budget. We consider two distinct problem settings and develop strategies based on dynamic programming and on local interaction rules. We present a theoretical analysis of both approaches and our results are extensively validated using simulations. Interestingly, our results demonstrate that the viability of heterogeneous teams depends on the amount of information available to the defenders. Moreover, our results suggest a universality property: across a wide range of problem parameters the optimal ratio of the speeds of the defenders remains nearly constant.