In this paper, we study the problem of maximizing $k$-submodular functions subject to a knapsack constraint. For monotone objective functions, we present a greedy algorithm with approximation ratio $\frac{1}{2}(1-e^{-2})\approx 0.432$. For the non-monotone case, we are the first to consider the knapsack-constrained problem and provide a greedy-type combinatorial algorithm with approximation ratio $\frac{1}{3}(1-e^{-3})\approx 0.317$.
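As a minimal illustration of the kind of density-greedy scheme analyzed in such results (a sketch only; the paper's exact procedure, e.g. any partial enumeration it performs, may differ, and `f`, `items`, `cost`, `budget` are placeholder names):

```python
def greedy_k_submodular(f, items, k, cost, budget):
    """f maps a partial assignment {item: slot} to a real value; slots are 0..k-1."""
    solution, spent = {}, 0.0
    base = f(solution)
    while True:
        # Pick the feasible (item, slot) pair with the best marginal gain per unit cost.
        best, best_ratio = None, 0.0
        for i in items:
            if i in solution or spent + cost[i] > budget:
                continue
            for slot in range(k):
                ratio = (f({**solution, i: slot}) - base) / cost[i]
                if ratio > best_ratio:
                    best, best_ratio = (i, slot), ratio
        if best is None:
            break
        i, slot = best
        solution[i] = slot
        spent += cost[i]
        base = f(solution)
    # Compare with the best feasible single-item assignment, a standard
    # safeguard in knapsack-constrained greedy analyses.
    singles = [{i: s} for i in items for s in range(k) if cost[i] <= budget]
    return max(singles + [solution], key=f)
```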
In this work, we propose two criteria for linear codes obtained from the Plotkin sum construction to be symplectic self-orthogonal (SO) and linear complementary dual (LCD), respectively. As specific constructions, several classes of symplectic SO codes with good parameters, including symplectic maximum distance separable codes, are derived via $\ell$-intersection pairs of linear codes and generalized Reed-Muller codes. Symplectic LCD codes are also constructed from general linear codes. Furthermore, we obtain some binary symplectic LCD codes, which are equivalent to quaternary trace Hermitian additive complementary dual codes and outperform the best-known quaternary Hermitian LCD codes reported in the literature. In addition, we prove that the symplectic SO and LCD codes obtained in these ways are asymptotically good.
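For concreteness, here is a small sketch of the symplectic notions involved, assuming the standard symplectic form on $\mathbb{F}_2^{2n}$; the LCD test is the Massey-style Gram-matrix criterion, stated for a full-row-rank generator matrix, and is not the paper's criteria for Plotkin sums:

```python
import numpy as np

def symplectic_gram(G):
    """Gram matrix G Omega G^T over GF(2) for the standard symplectic form
    <u, v> = sum_i (u_i v_{n+i} + u_{n+i} v_i) on F_2^{2n}."""
    n = G.shape[1] // 2
    Omega = np.zeros((2 * n, 2 * n), dtype=int)
    Omega[:n, n:] = np.eye(n, dtype=int)
    Omega[n:, :n] = np.eye(n, dtype=int)
    return (G @ Omega @ G.T) % 2

def is_symplectic_so(G):
    # Self-orthogonal iff every pair of generators is symplectically orthogonal.
    return not symplectic_gram(G).any()

def rank_gf2(M):
    """Row rank of a 0/1 matrix over GF(2) by Gaussian elimination."""
    M, r = M.copy() % 2, 0
    for c in range(M.shape[1]):
        pivot = next((i for i in range(r, M.shape[0]) if M[i, c]), None)
        if pivot is None:
            continue
        M[[r, pivot]] = M[[pivot, r]]
        for i in range(M.shape[0]):
            if i != r and M[i, c]:
                M[i] ^= M[r]
        r += 1
    return r

def is_symplectic_lcd(G):
    # Massey-style criterion (G assumed full row rank): the code is symplectic
    # LCD iff its symplectic Gram matrix is nonsingular over GF(2).
    H = symplectic_gram(G)
    return rank_gf2(H) == H.shape[0]
```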
In the Maximum Independent Set of Objects problem, we are given an $n$-vertex planar graph $G$ and a family $\mathcal{D}$ of $N$ objects, where each object is a connected subgraph of $G$. The task is to find a subfamily $\mathcal{F} \subseteq \mathcal{D}$ of maximum cardinality that consists of pairwise disjoint objects. This problem is $\mathsf{NP}$-hard and is equivalent to the problem of finding the maximum number of pairwise disjoint polygons in a given family of polygons in the plane. As shown by Adamaszek et al. (J. ACM '19), the problem admits a \emph{quasi-polynomial time approximation scheme} (QPTAS): a $(1-\varepsilon)$-approximation algorithm whose running time is bounded by $2^{\mathrm{poly}(\log(N),1/\varepsilon)} \cdot n^{\mathcal{O}(1)}$. Nevertheless, to the best of our knowledge, in the polynomial-time regime only the trivial $\mathcal{O}(N)$-approximation is known for the problem in full generality. In the restricted setting where the objects are pseudolines in the plane, Fox and Pach (SODA '11) gave an $N^{\varepsilon}$-approximation algorithm with running time $N^{2^{\tilde{\mathcal{O}}(1/\varepsilon)}}$, for any $\varepsilon>0$. In this work, we present an $\text{OPT}^{\varepsilon}$-approximation algorithm for the problem that runs in time $N^{\tilde{\mathcal{O}}(1/\varepsilon^2)} n^{\mathcal{O}(1)}$, for any $\varepsilon>0$, thus improving upon the result of Fox and Pach both in terms of generality and in terms of the running time. Our approach combines the methodology of Voronoi separators, introduced by Marx and Pilipczuk (TALG '22), with a new analysis of the approximation factor.
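To fix the problem interface (illustration only; this is merely a heuristic of the trivial kind mentioned above, with no nontrivial approximation guarantee, and objects are modeled as vertex sets):

```python
def greedy_disjoint(objects):
    """Pick objects in order of size, keeping those disjoint from all chosen so far."""
    chosen, used = [], set()
    for obj in sorted(objects, key=len):   # prefer small objects
        if used.isdisjoint(obj):
            chosen.append(obj)
            used |= obj
    return chosen
```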
A new numerical domain decomposition method is proposed for solving elliptic equations on compact Riemannian manifolds. The advantage of this method is that it avoids global triangulations or grids on the manifold. Our method is numerically tested on some $4$-dimensional manifolds such as the unit sphere $S^{4}$, the complex projective space $\mathbb{CP}^{2}$ and the product manifold $S^{2} \times S^{2}$.
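A one-dimensional toy version of the overlapping domain-decomposition idea (alternating Schwarz on two intervals: solve local problems and exchange interface data); the paper's method works chart-by-chart on a manifold, which this sketch does not attempt:

```python
import numpy as np

def solve_local(f, a, b, ua, ub, m=50):
    """Finite-difference solve of -u'' = f on [a, b] with Dirichlet data ua, ub."""
    x = np.linspace(a, b, m)
    h = x[1] - x[0]
    A = (np.diag(2 * np.ones(m - 2)) - np.diag(np.ones(m - 3), 1)
         - np.diag(np.ones(m - 3), -1)) / h**2
    rhs = f(x[1:-1]).astype(float)
    rhs[0] += ua / h**2                       # fold boundary data into the rhs
    rhs[-1] += ub / h**2
    u = np.empty(m)
    u[0], u[-1] = ua, ub
    u[1:-1] = np.linalg.solve(A, rhs)
    return x, u

f = lambda x: np.pi**2 * np.sin(np.pi * x)    # exact solution: sin(pi x)
u_mid_left, u_mid_right = 0.0, 0.0            # initial interface guesses
for _ in range(20):                           # alternate between the two patches
    x1, u1 = solve_local(f, 0.0, 0.6, 0.0, u_mid_right)   # patch [0, 0.6]
    u_mid_left = np.interp(0.4, x1, u1)
    x2, u2 = solve_local(f, 0.4, 1.0, u_mid_left, 0.0)    # patch [0.4, 1]
    u_mid_right = np.interp(0.6, x2, u2)
print(abs(u_mid_left - np.sin(np.pi * 0.4)))  # interface error after the sweeps
```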
We show how to express intuitionistic Zermelo set theory in deduction modulo (i.e. by replacing its axioms by rewrite rules) in such a way that the corresponding notion of proof enjoys the normalization property. To do so, we first rephrase set theory as a theory of pointed graphs (following a paradigm due to P. Aczel) by interpreting set-theoretic equality as bisimilarity, and show that in this setting, Zermelo's axioms can be decomposed into graph-theoretic primitives that can be turned into rewrite rules. We then show that the theory we obtain in deduction modulo is a conservative extension of (a minor extension of) Zermelo set theory. Finally, we prove the normalization of the intuitionistic fragment of the theory.
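To illustrate the graph-theoretic reading of equality, here is a sketch of the standard greatest-fixpoint computation of bisimilarity for finite pointed graphs (this illustrates the notion used in place of extensional equality, not the rewrite system itself):

```python
def bisimilar(g1, p1, g2, p2):
    """g1, g2: adjacency dicts (node -> set of children); p1, p2: roots.
    Computes the greatest bisimulation between the two graphs by
    repeatedly discarding pairs that fail the one-step matching condition."""
    R = {(a, b) for a in g1 for b in g2}
    changed = True
    while changed:
        changed = False
        for (a, b) in list(R):
            ok = (all(any((x, y) in R for y in g2[b]) for x in g1[a]) and
                  all(any((x, y) in R for x in g1[a]) for y in g2[b]))
            if not ok:
                R.discard((a, b))
                changed = True
    return (p1, p2) in R

# The set {{}, {{}}} encoded two ways: sharing the empty set, or duplicating it.
g1 = {'0': set(), 's': {'0'}, 'v': {'0', 's'}}
g2 = {'a': set(), 'b': set(), 'c': {'a'}, 'w': {'b', 'c'}}
print(bisimilar(g1, 'v', g2, 'w'))   # True: the two pointed graphs denote the same set
```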
A convincing feature of least-squares finite element methods is the built-in a posteriori error estimator for any conforming discretization. In order to generalize this property to discontinuous finite element ansatz functions, this paper introduces a least-squares principle on piecewise Sobolev functions for the solution of the Poisson model problem in 2D with mixed boundary conditions. It allows for fairly general discretizations, including standard piecewise polynomial ansatz spaces on triangular and polygonal meshes. The presented scheme enforces the interelement continuity of the piecewise polynomials by additional least-squares residuals. A side condition on the normal jumps of the flux variable requires a vanishing integral mean and enables a natural weighting of the jump in the least-squares functional in terms of the mesh size. This avoids the over-penalization, and the accompanying additional regularity assumptions on the exact solution, that are usually present in the literature on discontinuous LSFEM. The proof of the built-in a posteriori error estimation for the over-penalized scheme is presented as well. All results in this paper are robust with respect to the size of the domain; this is guaranteed by a suitable weighting of the residuals in the least-squares functional. Numerical experiments exhibit optimal convergence rates of the adaptive mesh-refining algorithm for various polynomial degrees.
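For orientation, one natural shape of such a discontinuous least-squares functional for $-\Delta u = f$ with flux variable $\sigma = \nabla u$ is (schematic notation and weights ours, not necessarily the paper's): $$\mathrm{LS}(f; u_h, \sigma_h) = \|f + \operatorname{div}_{\mathrm{pw}} \sigma_h\|_{L^2(\Omega)}^2 + \|\sigma_h - \nabla_{\mathrm{pw}} u_h\|_{L^2(\Omega)}^2 + \sum_{E \in \mathcal{E}} h_E^{-1}\,\|[u_h]\|_{L^2(E)}^2 + \sum_{E \in \mathcal{E}} h_E\,\|[\sigma_h \cdot \nu_E]\|_{L^2(E)}^2,$$ where the mesh-size weights $h_E^{-1}$ and $h_E$ mimic the $H^{1/2}(E)$ and $H^{-1/2}(E)$ trace scalings of the respective jumps, and a side condition such as $\int_E [\sigma_h \cdot \nu_E]\,\mathrm{d}s = 0$ is what permits the $h_E$-weighting without over-penalization.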
Based on a new Taylor-like formula, we derive an improved interpolation error estimate in $W^{1,p}$. We compare it with the classical error estimates based on the standard Taylor formula, and also with the corresponding interpolation error estimate derived from the mean value theorem. We then assess the gain in accuracy this formula provides, which leads to a significant reduction in finite element computation costs.
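For reference, the classical result being refined here is the standard interpolation estimate (schematically, with $\Pi_h$ the Lagrange interpolant on a mesh of size $h$ and $C$ the constant inherited from the Taylor remainder): $$\|u - \Pi_h u\|_{W^{1,p}(\Omega)} \le C\, h\, |u|_{W^{2,p}(\Omega)}.$$ The improvement concerns the constant $C$: a sharper remainder in the Taylor-like formula shrinks $C$, which translates into coarser admissible meshes for a prescribed accuracy and hence cheaper finite element computations.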
We construct a graph with $n$ vertices where the smoothed runtime of the 3-FLIP algorithm for the 3-Opt Local Max-Cut problem can be as large as $2^{\Omega(\sqrt{n})}$. This provides the first example where a local search algorithm for the Max-Cut problem can fail to be efficient in the framework of smoothed analysis. We also give a new construction of graphs where the runtime of the FLIP algorithm for the Local Max-Cut problem is $2^{\Omega(n)}$ for any pivot rule. This graph is much smaller and has a simpler structure than previous constructions.
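For reference, the FLIP dynamics in question (a minimal sketch; the lower bounds concern how many improving moves such a loop can make, and 3-FLIP additionally allows flipping up to three vertices at once, which is not shown here):

```python
def flip_max_cut(weights, side):
    """weights: dict {(u, v): w} on edges; side: dict vertex -> 0/1 (the cut).
    Repeatedly move a single vertex across the cut while that improves the cut weight."""
    def gain(v):
        # Change in cut weight if v switches sides: edges to same-side
        # neighbors enter the cut (+w), edges to other-side neighbors leave it (-w).
        g = 0.0
        for (a, b), w in weights.items():
            if v in (a, b):
                u = b if a == v else a
                g += w if side[u] == side[v] else -w
        return g

    steps, improved = 0, True
    while improved:
        improved = False
        for v in side:                  # an arbitrary pivot rule
            if gain(v) > 0:
                side[v] ^= 1
                steps += 1
                improved = True
    return side, steps
```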
We study least-squares trace regression when the parameter is the sum of an $r$-low-rank matrix and an $s$-sparse matrix and a fraction $\epsilon$ of the labels is corrupted. For subgaussian distributions, we highlight three design properties. The first, termed $\mathsf{PP}$, handles the additive decomposition and follows from a product process inequality. The second, termed $\mathsf{IP}$, handles both label contamination and the additive decomposition; it follows from Chevet's inequality. The third, termed $\mathsf{MP}$, handles the interaction between the design and feature-dependent noise; it follows from a multiplier process inequality. Jointly, these properties entail the near-optimality of a tractable estimator with respect to the effective dimensions $d_{\mathrm{eff},r}$ and $d_{\mathrm{eff},s}$ of the low-rank and sparse components, $\epsilon$, and the failure probability $\delta$. This rate has the form $$ \mathsf{r}(n,d_{\mathrm{eff},r}) + \mathsf{r}(n,d_{\mathrm{eff},s}) + \sqrt{(1+\log(1/\delta))/n} + \epsilon\log(1/\epsilon). $$ Here, $\mathsf{r}(n,d_{\mathrm{eff},r})+\mathsf{r}(n,d_{\mathrm{eff},s})$ is the optimal uncontaminated rate, independent of $\delta$. Our estimator is adaptive to $(s,r,\epsilon,\delta)$ and, for a fixed absolute constant $c>0$, attains the stated rate with probability $1-\delta$ uniformly over all $\delta\ge\exp(-cn)$. Setting aside the matrix decomposition, our analysis also entails optimal bounds for a robust estimator adapted to the noise variance. Finally, we consider robust matrix completion. We highlight a new property of this problem: one can robustly and optimally estimate the incomplete matrix regardless of the \emph{magnitude of the corruption}. Our estimators are based on ``sorted'' versions of Huber's loss. We present simulations matching the theory; in particular, they reveal the superiority of the ``sorted'' Huber loss over the classical Huber loss.
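To convey the mechanics of a "sorted" Huber loss (a sketch; the pairing convention and the calibration of the threshold sequence to $(\epsilon,\delta)$ are the technical heart of such results and are only illustrated, not reproduced, here):

```python
import numpy as np

def huber(r, tau):
    """Classical Huber loss: quadratic for |r| <= tau, linear beyond."""
    a = np.abs(r)
    return np.where(a <= tau, 0.5 * r ** 2, tau * (a - 0.5 * tau))

def sorted_huber_loss(residuals, taus):
    """'Sorted' variant: the i-th largest |residual| is matched with the
    i-th entry of the monotone threshold sequence `taus`."""
    order = np.argsort(-np.abs(residuals))     # descending magnitudes
    return np.sum(huber(residuals[order], taus))

# Illustrative choice: ascending thresholds, so the largest residuals meet
# the smallest tau and are handled in the robust linear regime.
res = np.array([5.0, -0.2, 0.1, -4.0, 0.3])
taus = np.linspace(0.5, 2.0, res.size)
print(sorted_huber_loss(res, taus))
```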
We construct and analyze finite element approximations of the Einstein tensor in dimension $N \ge 3$. We focus on the setting where a smooth Riemannian metric tensor $g$ on a polyhedral domain $\Omega \subset \mathbb{R}^N$ has been approximated by a piecewise polynomial metric $g_h$ on a simplicial triangulation $\mathcal{T}$ of $\Omega$ having maximum element diameter $h$. We assume that $g_h$ possesses single-valued tangential-tangential components on every codimension-1 simplex in $\mathcal{T}$. Such a metric is not classically differentiable in general, but it turns out that one can still attribute meaning to its Einstein curvature in a distributional sense. We study the convergence of the distributional Einstein curvature of $g_h$ to the Einstein curvature of $g$ under refinement of the triangulation. We show that in the $H^{-2}(\Omega)$-norm, this convergence takes place at a rate of $O(h^{r+1})$ when $g_h$ is an optimal-order interpolant of $g$ that is piecewise polynomial of degree $r \ge 1$. We provide numerical evidence to support this claim.
Causal representation learning algorithms discover lower-dimensional representations of data that admit a decipherable interpretation of cause and effect. Since achieving such interpretable representations is challenging, many causal representation learning algorithms rely on prior information, such as (linear) structural causal models, interventional data, or weak supervision. Unfortunately, in exploratory causal representation learning, such prior information may not be available or warranted. Alternatively, scientific datasets often have multiple modalities or physics-based constraints, and the use of such scientific, multimodal data has been shown to improve disentanglement in fully unsupervised settings. Consequently, we introduce a causal representation learning algorithm (causalPIMA) that can use multimodal data and known physics to discover important features with causal relationships. Our algorithm uses a new differentiable parametrization to learn a directed acyclic graph (DAG) together with the latent space of a variational autoencoder in an end-to-end differentiable framework via a single, tractable evidence lower bound loss function. We place a Gaussian mixture prior on the latent space and identify each of the mixture components with an outcome of the DAG nodes; this novel identification enables feature discovery with causal relationships. Tested on a synthetic dataset and a scientific dataset, our results demonstrate the capability of learning an interpretable causal structure while simultaneously discovering key features in a fully unsupervised setting.
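To illustrate what a differentiable DAG parametrization must accomplish, here is the well-known NOTEARS-style acyclicity function $h(A) = \operatorname{tr}(e^{A \circ A}) - d$, which vanishes exactly when the weighted adjacency matrix $A$ encodes a DAG (shown for illustration only; causalPIMA's own parametrization is distinct):

```python
import numpy as np
from scipy.linalg import expm

def acyclicity(A):
    """NOTEARS acyclicity score: tr(exp(A * A)) - d, zero iff A is a DAG.
    A * A is the elementwise (Hadamard) square, keeping entries nonnegative."""
    d = A.shape[0]
    return np.trace(expm(A * A)) - d

A_dag = np.array([[0.0, 1.3, 0.0],
                  [0.0, 0.0, 0.7],
                  [0.0, 0.0, 0.0]])          # upper-triangular: acyclic
A_cyc = A_dag + np.array([[0.0, 0.0, 0.0],
                          [0.0, 0.0, 0.0],
                          [0.9, 0.0, 0.0]])  # closes a 3-cycle
print(acyclicity(A_dag))   # ~0.0: no penalty
print(acyclicity(A_cyc))   # > 0: penalized during training
```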