We study Clustered Planarity with Linear Saturators, the problem of augmenting an $n$-vertex planar graph whose vertices are partitioned into independent sets (called clusters) with paths, one for each cluster, that connect all the vertices in each cluster while maintaining planarity. We show that the problem can be solved in time $2^{O(n)}$ in both the variable-embedding and fixed-embedding cases. Moreover, we show that it can be solved in subexponential time $2^{O(\sqrt{n}\log n)}$ in the fixed-embedding case if, additionally, the input graph is connected. The latter time complexity is tight under the Exponential-Time Hypothesis. We also show that $n$ can be replaced with the vertex cover number of the input graph by providing a linear (resp. polynomial) kernel for the variable-embedding (resp. fixed-embedding) case; these results contrast with the NP-hardness of the problem on graphs of bounded treewidth (and even on trees). Finally, we complement known lower bounds for the problem by showing that Clustered Planarity with Linear Saturators is NP-hard even when the number of clusters is at most $3$, thus excluding the algorithmic use of the number of clusters as a parameter.
The immersed interface method (IIM) for models of fluid flow and fluid-structure interaction imposes jump conditions that capture stress discontinuities generated by forces concentrated along immersed boundaries. Most prior work using the IIM for fluid dynamic applications has focused on smooth interfaces, but boundaries with sharp features such as corners and edges can appear in practical analyses, particularly for engineered structures. The present study builds on our earlier work integrating finite element-type representations of interface geometries with the IIM. Initial realizations of this approach used a continuous Galerkin (CG) finite element discretization for the boundary, but, as we show herein, such approaches generate large errors near sharp geometrical features. To overcome this difficulty, this study introduces an IIM approach that uses a discontinuous Galerkin (DG) representation of the jump conditions. Numerical examples explore the impact of the interface representation on accuracy for both smooth and sharp boundaries, focusing on flows interacting with fixed interface configurations. We demonstrate that the DG approach provides accuracy comparable to the CG method for smooth cases. Further, we identify a time step size restriction for the CG representation that is directly related to the sharpness of the geometry. In contrast, the time step size restrictions imposed by DG representations are demonstrated to be insensitive to the presence of sharp features.
Artificial Intelligence (AI) research often aims to develop models that generalize reliably across complex datasets, yet this remains challenging in fields where data is scarce, intricate, or inaccessible. This paper introduces a novel approach that leverages three generative models of varying complexity to synthesize one of the most demanding types of structured data: malicious network traffic. Our approach uniquely transforms numerical data into text, reframing data generation as a language modeling task, which not only enhances data regularization but also significantly improves generalization and the quality of the synthetic data. Extensive statistical analyses demonstrate that our method surpasses state-of-the-art generative models in producing high-fidelity synthetic data. Additionally, we conduct a comprehensive study on synthetic data applications, effectiveness, and evaluation strategies, offering valuable insights into its role across various domains. Our code and pre-trained models are openly accessible on GitHub, enabling further exploration and application of our methodology. Index Terms: data synthesis, machine learning, traffic generation, privacy-preserving data, generative models.
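To illustrate the numeric-to-text reframing described above, here is a minimal sketch of how tabular flow records could be serialized into token strings for a language model and recovered afterwards. The field names, formatting, and example record are illustrative assumptions, not the paper's exact encoding.

```python
# Hypothetical serialization of a network-flow record as text, so that
# synthetic-traffic generation can be treated as a language modeling task.
def flow_to_text(flow: dict) -> str:
    # Encode each field as "key=value" tokens in a deterministic order.
    return " ".join(f"{key}={value}" for key, value in sorted(flow.items()))

def text_to_flow(line: str) -> dict:
    # Inverse mapping: recover a tabular record from generated text.
    # Values come back as strings and would need type casting downstream.
    return dict(token.split("=", 1) for token in line.split())

record = {"duration": 1.2, "src_port": 443, "dst_port": 51012,
          "bytes": 5120, "label": "ddos"}
encoded = flow_to_text(record)
assert text_to_flow(encoded)["label"] == "ddos"
```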
In the logical framework introduced by Grohe and Tur\'an (TOCS 2004) for Boolean classification problems, the instances to classify are tuples from a logical structure, and Boolean classifiers are described by parametric models based on logical formulas. This is a specific scenario for supervised passive learning, where classifiers are to be learned from labelled examples. Existing results in this scenario focus on Boolean classification. This paper presents learnability results beyond Boolean classification. We focus on multiclass classification problems where the task is to assign input tuples to arbitrary integers. To represent such integer-valued classifiers, we use aggregate queries specified in an extension of first-order logic with counting terms called FOC1. Our main result shows that, given a database of polylogarithmic degree, we can build in quasi-linear time an index structure that makes it possible to learn FOC1-definable integer-valued classifiers in time polylogarithmic in the size of the database and polynomial in the number of training examples.
Logistic regression, the Support Vector Machine (SVM), and least squares are well-studied methods in the statistics and computer science communities, with various practical applications. High-dimensional data arriving in real time makes it essential to design online learning algorithms that produce sparse solutions. The seminal work of \hyperlink{cite.langford2009sparse}{Langford, Li, and Zhang (2009)} developed a method to obtain sparsity via truncated gradient descent, showing a near-optimal online regret bound. Based on this method, we develop a quantum sparse online learning algorithm for logistic regression, the SVM, and least squares. Given efficient quantum access to the inputs, we show that a quadratic speedup in the time complexity with respect to the dimension of the problem is achievable, while maintaining a regret of $O(1/\sqrt{T})$, where $T$ is the number of iterations.
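As background for the classical building block, the following is a minimal sketch of truncated gradient descent for online logistic regression in the spirit of Langford, Li, and Zhang (2009). The step size, truncation parameters, and update schedule are illustrative assumptions, and the quantum subroutines of the present paper are not reproduced here.

```python
import numpy as np

def truncate(w, alpha, theta):
    """Shrink coordinates with |w_j| <= theta toward zero by alpha (truncated gradient)."""
    shrunk = np.sign(w) * np.maximum(np.abs(w) - alpha, 0.0)
    return np.where(np.abs(w) <= theta, shrunk, w)

def truncated_gradient_logistic(stream, dim, eta=0.1, alpha=0.01, theta=0.5):
    """Online logistic regression over a stream of (x, y) pairs with y in {-1, +1}."""
    w = np.zeros(dim)
    for x, y in stream:
        margin = y * np.dot(w, x)
        grad = -y * x / (1.0 + np.exp(margin))     # gradient of the logistic loss
        w = truncate(w - eta * grad, eta * alpha, theta)  # gradient step + truncation
    return w
```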
The spectral transformation Lanczos method for the sparse symmetric definite generalized eigenvalue problem for matrices $A$ and $B$ is an iterative method that addresses the case of a semidefinite or ill-conditioned $B$ by working with a shifted and inverted formulation of the problem. This paper proposes the same approach for dense problems and shows that, with a shift chosen in accordance with certain constraints, the algorithm can conditionally ensure that every computed shifted and inverted eigenvalue is close to the exact shifted and inverted eigenvalue of a pair of matrices close to $A$ and $B$. Under the same assumptions on the shift, the analysis of the algorithm for the shifted and inverted problem leads to useful error bounds for the original problem, including a bound showing how a single shift of moderate size in a scaled sense can be chosen so that every computed generalized eigenvalue corresponds to a generalized eigenvalue of a pair of matrices close to $A$ and $B$. The computed generalized eigenvectors give a relative residual that depends on the distance between the corresponding generalized eigenvalue and the shift. If the shift is of moderate size, then relative residuals are small for generalized eigenvalues that are not much larger than the shift. Larger shifts give small relative residuals for generalized eigenvalues that are not much larger or smaller than the shift.
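For concreteness, the shifted and inverted formulation replaces the pencil $(A,B)$ by $(A-\sigma B)^{-1}B$, whose eigenvalues $\mu = 1/(\lambda-\sigma)$ recover the original generalized eigenvalues via $\lambda = \sigma + 1/\mu$. The sketch below illustrates this reformulation on a small dense pair; the shift-selection constraints analyzed in the paper are not implemented, and the value of $\sigma$ and the test matrices are placeholders.

```python
import numpy as np

def shift_invert_eigs(A, B, sigma):
    """Generalized eigenvalues of A x = lambda B x via the shifted and inverted pencil."""
    C = np.linalg.solve(A - sigma * B, B)   # (A - sigma*B)^{-1} B
    mu = np.linalg.eigvals(C)               # mu = 1 / (lambda - sigma)
    return sigma + 1.0 / mu                 # recover lambda

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M + M.T                                  # symmetric A
B = np.diag([1.0, 1.0, 1.0, 1e-8, 1e-8])     # definite but ill-conditioned B
print(np.sort(shift_invert_eigs(A, B, sigma=1.0).real))
```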
We study the problem of assigning items to agents so as to maximize the \emph{weighted} Nash Social Welfare (NSW) under submodular valuations. The best-known result for the problem is an $O(nw_{\max})$-approximation due to Garg, Husic, Li, Vega, and Vondrak~\cite{GHL23}, where $w_{\max}$ is the maximum weight over all agents. Obtaining a constant-factor approximation algorithm is an open problem in the field that has recently attracted considerable attention. We give the first such algorithm for the problem, thus resolving the open problem in the affirmative. Our algorithm is based on the natural Configuration LP for the problem, which was introduced recently by Feng and Li~\cite{FL24} for the additive valuation case. Our rounding algorithm is similar to that of Li~\cite{Li25}, developed for the unrelated machine scheduling problem of minimizing weighted completion time. Roughly speaking, we designate the largest item in each configuration as a large item and the remaining items as small items, so every agent gets precisely one fractional large item in the Configuration LP solution. Using the rounding algorithm of \cite{Li25}, we can ensure that in the obtained solution every agent gets precisely one large item, and the assignments of small items are negatively correlated.
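As a toy illustration of the large/small designation just described (not the full rounding procedure, which additionally enforces negative correlation among small items), the snippet below marks the most valuable item of a configuration as large and the rest as small; the item names and values are hypothetical.

```python
# Designate the single most valuable item of a configuration as "large"
# and the remaining items as "small" (toy version of the designation step).
def split_configuration(items, value):
    large = max(items, key=value)
    small = [it for it in items if it != large]
    return large, small

config = ["a", "b", "c"]
value = {"a": 5.0, "b": 2.0, "c": 1.5}.get
print(split_configuration(config, value))   # ('a', ['b', 'c'])
```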
Graph diffusion, which iteratively propagates real-valued substances over a graph, is used in numerous graph- and network-based applications. However, releasing diffusion vectors may reveal sensitive linking information in the data, such as transaction information in financial network data, and protecting the privacy of graph data is challenging due to its interconnected nature. This work proposes a novel graph diffusion framework with edge-level differential privacy guarantees based on noisy diffusion iterates. The algorithm injects Laplace noise at each diffusion iteration and adopts a degree-based thresholding function to mitigate the high sensitivity induced by low-degree nodes. Our privacy loss analysis is based on Privacy Amplification by Iteration (PABI), and, to the best of our knowledge, this is the first effort to analyze PABI with Laplace noise and provide relevant applications. We also introduce a novel Infinity-Wasserstein distance tracking method, which tightens the analysis of privacy leakage and makes PABI more applicable in practice. We evaluate this framework by applying it to Personalized PageRank computation for ranking tasks. Experiments on real-world network data demonstrate the superiority of our method under stringent privacy conditions.
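A hedged sketch of the noisy-iterate idea, using Personalized PageRank as in the evaluation: Laplace noise is injected at every diffusion step, and low-degree coordinates are suppressed by a simple degree cutoff. The noise calibration and the exact thresholding function from the paper are not reproduced; `scale` and `min_degree` are illustrative parameters.

```python
import numpy as np

def noisy_ppr(adj, seed, alpha=0.15, iters=20, scale=1e-3, min_degree=5):
    """Personalized PageRank with per-iteration Laplace noise and a crude degree threshold."""
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    P = adj / np.maximum(deg[:, None], 1)          # row-stochastic transition matrix
    s = np.zeros(n); s[seed] = 1.0                 # personalization vector
    x = s.copy()
    for _ in range(iters):
        x = alpha * s + (1 - alpha) * (P.T @ x)    # standard PPR power iteration
        x = x + np.random.laplace(0.0, scale, n)   # Laplace noise injected each iteration
        x = np.where(deg >= min_degree, x, 0.0)    # degree-based suppression of low-degree nodes
    return x
```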
We formulate and analyze interior penalty discontinuous Galerkin methods for coupled elliptic PDEs modeling excitable tissue, represented by intracellular and extracellular domains sharing a common interface. The PDEs are coupled through a dynamic boundary condition, posed on the interface, that relates the normal gradients of the solutions to the time derivative of their jump. This system is referred to as the Extracellular-Membrane-Intracellular (EMI) model or the cell-by-cell model. Due to the dynamic nature of the interface condition and the presence of corner singularities, the analysis of discontinuous Galerkin methods is non-standard. We prove the existence and uniqueness of solutions by reformulating the problem as one posed on the membrane. Convergence is shown by utilizing face-to-element lifting operators and notions of weak consistency suitable for solutions with low spatial regularity. Further, we present parameter-robust preconditioned iterative solvers. Numerical examples in idealized geometries demonstrate our theoretical findings, and simulations with multiple cells illustrate the robustness of the method.
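For reference, a standard strong form of the cell-by-cell (EMI) system with such a dynamic interface condition reads as follows; the paper's precise scaling, boundary data, and ionic current model may differ.
\[
\begin{aligned}
-\nabla\cdot(\sigma_i\nabla u_i) &= 0 && \text{in } \Omega_i, \\
-\nabla\cdot(\sigma_e\nabla u_e) &= 0 && \text{in } \Omega_e, \\
\sigma_e\nabla u_e\cdot\mathbf{n}_e &= -\sigma_i\nabla u_i\cdot\mathbf{n}_i =: I_m && \text{on } \Gamma, \\
C_m\,\partial_t(u_i-u_e) &= I_m - I_{\mathrm{ion}}(u_i-u_e) && \text{on } \Gamma,
\end{aligned}
\]
where $\Omega_i$ and $\Omega_e$ are the intracellular and extracellular domains, $\Gamma$ is their common interface (the membrane), $\mathbf{n}_i$ and $\mathbf{n}_e$ are outward unit normals, $\sigma_i,\sigma_e$ are conductivities, $C_m$ is the membrane capacitance, and $I_{\mathrm{ion}}$ is the ionic current.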
We study a fundamental problem in Computational Geometry, the planar two-center problem. In this problem, the input is a set $S$ of $n$ points in the plane, and the goal is to find two smallest congruent disks whose union contains all points of $S$. A longstanding open problem has been to obtain an $O(n\log n)$-time algorithm for planar two-center, matching the $\Omega(n\log n)$ lower bound given by Eppstein [SODA'97]. Towards this goal, researchers have made considerable efforts over the decades. The previous best algorithm, given by Wang [SoCG'20], solves the problem in $O(n\log^2 n)$ time. In this paper, we present an $O(n\log n)$-time (deterministic) algorithm for planar two-center, which completely resolves this open problem.
Incompleteness is a common problem for existing knowledge graphs (KGs), and KG completion, which aims to predict missing links between entities, is challenging. Most existing KG completion methods consider only the direct relations between nodes and ignore relation paths, which contain useful information for link prediction. Recently, a few methods have taken relation paths into consideration but pay little attention to the order of relations within a path, which is important for reasoning. In addition, these path-based models tend to ignore nonlinear contributions of path features to link prediction. To address these problems, we propose a novel KG completion method named OPTransE. Instead of embedding both entities of a relation into the same latent space as in previous methods, we project the head entity and the tail entity of each relation into different spaces to preserve the order of relations in the path. Meanwhile, we adopt a pooling strategy to extract nonlinear and complex features of different paths to further improve link prediction performance. Experimental results on two benchmark datasets show that the proposed OPTransE model outperforms state-of-the-art methods.
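A hedged sketch of the order-preserving idea described above (not the exact OPTransE scoring function or its pooling layer): the head and tail of each relation are projected by different matrices before a TransE-style translation, so composing relations along a path becomes order-sensitive. The dimensions, random initialization, and scoring form below are illustrative assumptions.

```python
import numpy as np

dim = 32
rng = np.random.default_rng(0)

def relation():
    """One relation: a translation vector plus separate head-side and tail-side projections."""
    return {"r": rng.standard_normal(dim),
            "Wh": rng.standard_normal((dim, dim)),   # projection applied to the head entity
            "Wt": rng.standard_normal((dim, dim))}   # different projection for the tail entity

def score(h, rel, t):
    """TransE-style score with order-aware projections: ||Wh h + r - Wt t||."""
    return np.linalg.norm(rel["Wh"] @ h + rel["r"] - rel["Wt"] @ t)

h, t = rng.standard_normal(dim), rng.standard_normal(dim)
print(score(h, relation(), t))
```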