We consider the classic Correlation Clustering problem: given a complete graph whose edges are labelled either $+$ or $-$, the goal is to find a partition of the vertices that minimizes the number of $+$ edges across parts plus the number of $-$ edges within parts. Recently, Cohen-Addad, Lee and Newman [CLN22] presented a 1.994-approximation algorithm for the problem using the Sherali-Adams hierarchy, hence breaking through the integrality gap of 2 for the classic linear program and improving upon the 2.06-approximation of Chawla, Makarychev, Schramm and Yaroslavtsev [CMSY15]. We significantly improve the state of the art by providing a 1.73-approximation for the problem. Our approach introduces a preclustering of Correlation Clustering instances that allows us to essentially ignore the error arising from the {\em correlated rounding} used by [CLN22]. This additional power simplifies the previous algorithm and analysis. More importantly, it enables a new {\em set-based rounding} that complements the previous roundings. A combination of these two rounding algorithms yields the improved bound.
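For concreteness, the objective being approximated can be evaluated directly from this definition. The following minimal Python sketch (function names and the toy instance are ours) simply counts the disagreements of a given partition; it is not the approximation algorithm described above.

```python
import itertools

def cc_cost(n, plus_edges, partition):
    """Correlation Clustering objective: '+' edges across parts
    plus '-' edges within parts (complete graph on n vertices)."""
    part_of = {v: i for i, part in enumerate(partition) for v in part}
    cost = 0
    for u, v in itertools.combinations(range(n), 2):
        same = part_of[u] == part_of[v]
        is_plus = (u, v) in plus_edges or (v, u) in plus_edges
        if is_plus and not same:      # '+' edge cut by the partition
            cost += 1
        elif not is_plus and same:    # '-' edge kept inside a part
            cost += 1
    return cost

# Toy instance: a '+' triangle {0,1,2} with a pendant '+' edge to vertex 3.
plus = {(0, 1), (1, 2), (0, 2), (2, 3)}
print(cc_cost(4, plus, [{0, 1, 2}, {3}]))  # cost 1: only the '+' edge (2,3) is cut
```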
Equivalence checking is used to verify whether two programs produce equivalent outputs when given equivalent inputs. Research in this field has mainly focused on improving equivalence checking accuracy and runtime performance. However, for program pairs that cannot be proven to be either equivalent or non-equivalent, existing approaches only report a classification result of "unknown", which provides no information regarding the programs' non-/equivalence. In this paper, we introduce PASDA, our partition-based semantic differencing approach with best effort classification of undecided cases. While PASDA aims to formally prove non-/equivalence of analyzed program pairs using a variant of differential symbolic execution, its main novelty lies in its handling of cases for which no formal non-/equivalence proof can be found. For such cases, PASDA provides a best effort equivalence classification based on a set of classification heuristics. We evaluated PASDA on an existing benchmark consisting of 141 non-/equivalent program pairs. PASDA correctly classified 61-74% of these cases at timeouts ranging from 10 seconds to 3600 seconds, achieving equivalence checking accuracies 3-7% higher than the best results of three existing tools. Furthermore, PASDA's best effort classifications were correct for 70-75% of equivalent and 55-85% of non-equivalent cases across the different timeouts.
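To make the underlying task concrete, the toy Python sketch below pairs two variants of the same function and runs a bounded differential test over sampled inputs. This only illustrates the notion of equivalence and one crude fallback when no proof is available; it is not PASDA's symbolic execution or its classification heuristics, and all names are ours.

```python
# Toy program pair: a recursive and an iterative variant of the same function.
def sum_to_v1(n: int) -> int:
    return 0 if n <= 0 else n + sum_to_v1(n - 1)

def sum_to_v2(n: int) -> int:
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def bounded_differential_test(p1, p2, inputs):
    """Crude fallback check: report a counterexample if one exists among the
    sampled inputs, otherwise report that no difference was observed."""
    for x in inputs:
        if p1(x) != p2(x):
            return ("NEQ", x)
    return ("no difference observed on sampled inputs", None)

print(bounded_differential_test(sum_to_v1, sum_to_v2, range(0, 50)))
```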
Over the last decades, the family of $\alpha$-stable distributions has proven useful for modelling in telecommunication systems. In radar applications in particular, fast and accurate estimation of the parameters of the amplitude density function is very important. In this work, the maximum likelihood estimator (MLE) is proposed for the parameters of the amplitude distribution. To do this, the amplitude data are \emph{projected} onto the horizontal and vertical axes using two simple transformations. It is proved that the \emph{projected} data follow a zero-location symmetric $\alpha$-stable distribution, for which the MLE can be computed quite fast. The average of the MLEs computed from the two \emph{projections} is taken as the estimator of the parameters of the amplitude distribution. The performance of the proposed \emph{projection} method is demonstrated through a simulation study and the analysis of two sets of real radar data.
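A rough Python sketch of the idea follows. The interpretation of the two "projections" as multiplying the amplitudes by the cosine and sine of independent uniform angles is our assumption, and scipy's generic (slow) numerical MLE is only a stand-in for the fast MLE developed in the paper.

```python
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(0)
alpha, n = 1.7, 500

# Simulate amplitudes of an isotropic (sub-Gaussian) alpha-stable pair via the
# standard A^{1/2} * Gaussian construction (scale normalization glossed over).
a = levy_stable.rvs(alpha / 2, 1.0, size=n, random_state=rng)  # positive (alpha/2)-stable
r = np.sqrt(a) * np.hypot(rng.standard_normal(n), rng.standard_normal(n))

# Assumed reading of the two "projections": multiply the amplitudes by cos/sin of
# independent uniform angles; for isotropic data this yields zero-location
# symmetric alpha-stable samples.
theta = rng.uniform(0.0, 2.0 * np.pi, n)
proj_h, proj_v = r * np.cos(theta), r * np.sin(theta)

# Fit a symmetric (beta = 0), zero-location stable law to each projection and
# average the estimates (slow with scipy's generic fit; illustration only).
fits = [levy_stable.fit(p, fbeta=0.0, floc=0.0) for p in (proj_h, proj_v)]
print("estimated alpha:", np.mean([f[0] for f in fits]))
```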
Flexible-position multiple-input multiple-output (FLP-MIMO), realized for instance by fluid antennas and movable antennas, is a promising technology for future wireless communications: because the positions of antennas at the transceiver and reflector can be dynamically optimized to achieve better channel conditions, it can provide high spectral efficiency (SE) and energy efficiency (EE) gains with fewer antennas. In this article, we introduce the fundamentals of FLP-MIMO systems, including hardware design, structure design, and potential applications. We demonstrate that FLP-MIMO, using fewer flexible antennas, can match the channel hardening achieved by a large number of fixed antennas. We then analyze the SE-EE relationship for FLP-MIMO and fixed-position MIMO. Furthermore, we design the optimal trajectory of flexible antennas to maximize the system sum SE or total EE under a fixed travel distance for each antenna. Finally, several important research directions for FLP-MIMO communications are presented to facilitate further investigation.
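The following toy Python sketch illustrates why antenna position matters: a single flexible receive antenna slides along a line in a multipath (sum-of-plane-waves) channel, and the SE is evaluated over candidate positions. The channel model, parameter values, and grid search are our simplifications, not the system model or optimization of the article.

```python
import numpy as np

rng = np.random.default_rng(1)

wavelength = 0.1                      # metres (3 GHz carrier), toy value
paths = 5
gains = (rng.standard_normal(paths) + 1j * rng.standard_normal(paths)) / np.sqrt(2 * paths)
angles = rng.uniform(-np.pi / 2, np.pi / 2, paths)   # angles of arrival
snr = 10.0                            # linear SNR

def channel(x):
    """Narrowband channel seen at antenna position x (sum of plane waves)."""
    return np.sum(gains * np.exp(-2j * np.pi * x * np.sin(angles) / wavelength))

# Grid-search the antenna position over a few wavelengths of travel.
positions = np.linspace(0.0, 5 * wavelength, 501)
se = np.array([np.log2(1.0 + snr * abs(channel(x)) ** 2) for x in positions])
best = positions[np.argmax(se)]
print(f"fixed-position SE: {se[0]:.2f} bit/s/Hz, "
      f"best flexible-position SE: {se.max():.2f} bit/s/Hz at x = {best:.3f} m")
```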
We show that the \textsc{Maximum Weight Independent Set} problem (\textsc{MWIS}) can be solved in quasi-polynomial time on $H$-free graphs (graphs excluding a fixed graph $H$ as an induced subgraph) for every $H$ whose every connected component is a path or a subdivided claw (i.e., a tree with at most three leaves). This completes the dichotomy of the complexity of \textsc{MWIS} in $\mathcal{F}$-free graphs for any finite set $\mathcal{F}$ of graphs into NP-hard cases and cases solvable in quasi-polynomial time, and corroborates the conjecture that the cases not known to be NP-hard are actually polynomial-time solvable. The key graph-theoretic ingredient in our result is as follows. Fix an integer $t \geq 1$. Let $S_{t,t,t}$ be the graph created from three paths on $t$ edges by identifying one endpoint of each path into a single vertex. We show that, given a graph $G$, one can in polynomial time find either an induced $S_{t,t,t}$ in $G$, or a balanced separator consisting of $O(\log |V(G)|)$ vertex neighborhoods in $G$, or an extended strip decomposition of $G$ (a decomposition almost as useful for recursion for \textsc{MWIS} as a partition into connected components) with each particle of weight multiplicatively smaller than the weight of $G$. This is a strengthening of a result of Majewski et al.\ [ICALP~2022] which provided such an extended strip decomposition only after the deletion of $O(\log |V(G)|)$ vertex neighborhoods. To reach the final result, we employ an involved branching strategy that relies on the structural lemma presented above.
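As a small sanity check of the definition of $S_{t,t,t}$ (not part of the algorithm above), the following Python sketch builds the graph with networkx: three paths on $t$ edges glued at one shared endpoint, so $S_{1,1,1}$ is the claw.

```python
import networkx as nx

def s_ttt(t: int) -> nx.Graph:
    """Build S_{t,t,t}: three paths with t edges each, identified at one endpoint."""
    g = nx.Graph()
    center = "c"
    for branch in range(3):
        prev = center
        for i in range(1, t + 1):
            node = (branch, i)
            g.add_edge(prev, node)
            prev = node
    return g

g = s_ttt(2)
print(g.number_of_nodes(), g.number_of_edges())   # 7 nodes, 6 edges
print(sorted(d for _, d in g.degree()))           # three leaves of degree 1, center of degree 3
```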
We resolve a long-standing conjecture about the optimal codebook structure of codes in $n$-dimensional Euclidean space that consist of $n+1$ codewords subject to a codeword energy constraint, where optimality is measured by the average decoding error probability. The conjecture states that optimal codebooks are formed by the $n+1$ vertices of a regular simplex (the $n$-dimensional generalization of a regular tetrahedron) inscribed in the unit sphere. We provide a self-contained proof of this conjecture that hinges on symmetry arguments and leverages a relaxation approach consisting of jointly optimizing the codebook and the decision regions, rather than the codeword locations alone.
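For illustration, the conjectured optimal codebook is easy to construct explicitly: $n+1$ unit vectors in $\mathbb{R}^n$ with pairwise inner products $-1/n$. The Python sketch below (a standard construction, not taken from the paper) builds it by centring and normalising the standard basis of $\mathbb{R}^{n+1}$.

```python
import numpy as np

def simplex_codebook(n: int) -> np.ndarray:
    """n+1 unit-energy codewords forming a regular simplex (returned as rows).

    Take the n+1 standard basis vectors of R^{n+1}, subtract their centroid and
    normalise; the result lies in an n-dimensional subspace and has pairwise
    inner products exactly -1/n.
    """
    e = np.eye(n + 1)
    centered = e - e.mean(axis=0)
    codewords = centered / np.linalg.norm(centered, axis=1, keepdims=True)
    # Express the codewords in an orthonormal basis of the subspace they span,
    # so they live in R^n rather than R^{n+1}.
    basis = np.linalg.svd(centered, full_matrices=False)[2][:n]
    return codewords @ basis.T

c = simplex_codebook(3)          # 4 codewords in R^3: a regular tetrahedron
print(np.round(c @ c.T, 6))      # Gram matrix: 1 on the diagonal, -1/3 off it
```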
The history of the seemingly simple problem of straight line fitting in the presence of both $x$ and $y$ errors has been fraught with misadventure, with statistically ad hoc and poorly tested methods abounding in the literature. The problem stems from the emergence of latent variables describing the "true" values of the independent variables, the priors on which have a significant impact on the regression result. By analytic calculation of maximum a posteriori values and biases, and comprehensive numerical mock tests, we assess the quality of possible priors. In the presence of intrinsic scatter, the only prior that we find to give reliably unbiased results in general is a mixture of one or more Gaussians with means and variances determined as part of the inference. We find that a single Gaussian is typically sufficient and dub this model Marginalised Normal Regression (MNR). We illustrate the necessity for MNR by comparing it to alternative methods on an important linear relation in cosmology, and extend it to nonlinear regression and an arbitrary covariance matrix linking $x$ and $y$. We publicly release a Python/Jax implementation of MNR and its Gaussian mixture model extension that is coupled to Hamiltonian Monte Carlo for efficient sampling, which we call ROXY (Regression and Optimisation with X and Y errors).
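As a minimal sketch of what the single-Gaussian MNR likelihood looks like under our reading of the abstract (a Gaussian prior $x_{\rm true} \sim \mathcal{N}(\mu, w^2)$, a line $y = m x_{\rm true} + c$ with intrinsic scatter, and known measurement errors on $x$ and $y$), the latent $x_{\rm true}$ can be marginalised analytically, leaving a bivariate Gaussian for the observed pair. Parameter names are ours; the released ROXY code is the authoritative implementation.

```python
import numpy as np

def mnr_neg_log_likelihood(params, x_obs, y_obs, sig_x, sig_y):
    """Negative log-likelihood of a single-Gaussian MNR-style model (our sketch).

    x_true ~ N(mu, w^2); y = m * x_true + c plus intrinsic scatter sig_int;
    x_obs, y_obs are noisy measurements with known errors sig_x, sig_y.
    Marginalising x_true makes (x_obs, y_obs) jointly Gaussian with the
    moments computed below.
    """
    m, c, sig_int, mu, w = params
    var_x = w**2 + sig_x**2
    var_y = m**2 * w**2 + sig_int**2 + sig_y**2
    cov_xy = m * w**2
    det = var_x * var_y - cov_xy**2
    dx, dy = x_obs - mu, y_obs - (m * mu + c)
    quad = (var_y * dx**2 - 2 * cov_xy * dx * dy + var_x * dy**2) / det
    return 0.5 * np.sum(quad + np.log(det)) + x_obs.size * np.log(2 * np.pi)

# Toy evaluation on simulated data (true m = 2, c = 1, intrinsic scatter 0.4).
rng = np.random.default_rng(0)
x_true = rng.normal(0.0, 2.0, 200)
x_obs = x_true + rng.normal(0.0, 0.3, 200)
y_obs = 2.0 * x_true + 1.0 + rng.normal(0.0, 0.4, 200) + rng.normal(0.0, 0.3, 200)
print(mnr_neg_log_likelihood((2.0, 1.0, 0.4, 0.0, 2.0), x_obs, y_obs, 0.3, 0.3))
```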
In this paper, we consider the following two problems: (i) Deletion Blocker($\alpha$), where we are given an undirected graph $G=(V,E)$ and two integers $k,d\geq 1$ and ask whether there exists a subset of vertices $S\subseteq V$ with $|S|\leq k$ such that $\alpha(G-S) \leq \alpha(G)-d$, that is, the independence number of $G$ decreases by at least $d$ after the vertices of $S$ have been removed; (ii) Transversal($\alpha$), where we are given an undirected graph $G=(V,E)$ and two integers $k,d\geq 1$ and ask whether there exists a subset of vertices $S\subseteq V$ with $|S|\leq k$ such that $|I\cap S| \geq d$ for every maximum independent set $I$. We show that both problems are polynomial-time solvable in the class of co-comparability graphs by reducing them to the well-known Vertex Cut problem. Our results generalize a result of [Chang et al., Maximum clique transversals, Lecture Notes in Computer Science 2204, pp. 32-43, WG 2001] and a recent result of [Hoang et al., Assistance and interdiction problems on interval graphs, Discrete Applied Mathematics 340, pp. 153-170, 2023].
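To make the Deletion Blocker($\alpha$) condition concrete, the brute-force Python sketch below (ours, exponential-time, for tiny examples only) checks whether a given set $S$ decreases the independence number by at least $d$; it illustrates the definition, not the polynomial-time algorithm of the paper.

```python
from itertools import combinations

def independence_number(vertices, edges):
    """alpha(G) by brute force (exponential; for tiny illustrative graphs only)."""
    vs = list(vertices)
    for r in range(len(vs), 0, -1):
        for cand in combinations(vs, r):
            s = set(cand)
            if all(not (u in s and v in s) for u, v in edges):
                return r
    return 0

def is_deletion_blocker(vertices, edges, s, d):
    """Check the Deletion Blocker(alpha) condition: alpha(G - S) <= alpha(G) - d."""
    rest = set(vertices) - set(s)
    sub_edges = [(u, v) for u, v in edges if u in rest and v in rest]
    return independence_number(rest, sub_edges) <= independence_number(vertices, edges) - d

# P4 (path a-b-c-d): alpha = 2. Deleting {b, c} leaves two isolated vertices
# (alpha still 2), while deleting {a, d} leaves the edge b-c (alpha drops to 1).
V, E = "abcd", [("a", "b"), ("b", "c"), ("c", "d")]
print(is_deletion_blocker(V, E, {"b", "c"}, 1))   # False
print(is_deletion_blocker(V, E, {"a", "d"}, 1))   # True
```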
Given a graph G and a query vertex q, the topic of community search (CS), which aims to retrieve a dense subgraph of G containing q, has gained much attention. Most existing works focus on undirected graphs, overlooking the rich information carried by edge directions. Recently, the problem of community search over directed graphs (the CSD problem) has been studied; it asks for a connected subgraph containing q in which the in-degree and out-degree of every vertex are at least k and l, respectively. However, existing solutions are inefficient, especially on large graphs. To tackle this issue, we propose in this paper a novel index called D-Forest, which allows a CSD query to be answered in optimal time. We further propose efficient index construction methods. Extensive experiments on six real large graphs show that our index-based query algorithm is up to two orders of magnitude faster than existing solutions.
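The Python sketch below is a naive baseline consistent with the problem definition, not the D-Forest index: it peels vertices whose in-degree falls below k or out-degree below l and returns the (weakly) connected component of q in what remains. Reading "connected" as weakly connected is our assumption.

```python
from collections import defaultdict, deque

def csd_baseline(edges, q, k, l):
    """Naive CSD baseline: peel vertices with in-degree < k or out-degree < l,
    then return the weakly connected component of q (None if q is peeled)."""
    out_nbrs, in_nbrs, nodes = defaultdict(set), defaultdict(set), set()
    for u, v in edges:
        out_nbrs[u].add(v); in_nbrs[v].add(u); nodes.update((u, v))
    changed = True
    while changed:
        changed = False
        for v in list(nodes):
            if len(in_nbrs[v] & nodes) < k or len(out_nbrs[v] & nodes) < l:
                nodes.discard(v); changed = True
    if q not in nodes:
        return None
    comp, queue = {q}, deque([q])
    while queue:
        v = queue.popleft()
        for w in (out_nbrs[v] | in_nbrs[v]) & nodes:
            if w not in comp:
                comp.add(w); queue.append(w)
    return comp

# Toy directed graph: a 3-cycle (in/out degree 1 everywhere) plus a dangling edge.
edges = [("a", "b"), ("b", "c"), ("c", "a"), ("c", "d")]
print(csd_baseline(edges, "a", 1, 1))   # {'a', 'b', 'c'}; 'd' is peeled (out-degree 0)
```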
The canonical approach to video-and-language learning (e.g., video question answering) dictates that a neural model learn from densely extracted offline video features produced by vision models and text features produced by language models. These feature extractors are trained independently, usually on tasks different from the target domains, rendering the fixed features sub-optimal for downstream tasks. Moreover, due to the high computational overhead of dense video features, it is often difficult (or infeasible) to plug the feature extractors directly into existing approaches for easy fine-tuning. To remedy this dilemma, we propose ClipBERT, a generic framework that enables affordable end-to-end learning for video-and-language tasks by employing sparse sampling: only a single clip or a few sparsely sampled short clips from a video are used at each training step. Experiments on text-to-video retrieval and video question answering across six datasets demonstrate that ClipBERT outperforms (or is on par with) existing methods that exploit full-length videos, suggesting that end-to-end learning with just a few sparsely sampled clips is often more accurate than using densely extracted offline features from full-length videos, proving the proverbial less-is-more principle. The videos in these datasets come from considerably different domains and lengths, ranging from 3-second generic-domain GIF videos to 180-second YouTube human activity videos, showing the generalization ability of our approach. Comprehensive ablation studies and thorough analyses are provided to dissect the factors that lead to this success. Our code is publicly available at //github.com/jayleicn/ClipBERT
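The Python sketch below paraphrases the sparse-sampling idea (it is not the released ClipBERT code, and all names and defaults are ours): at each training step, only a few short clips are drawn from the video and their frame indices are returned for decoding and end-to-end training.

```python
import numpy as np

def sample_sparse_clips(num_frames, num_clips=2, clip_len=8, rng=None):
    """Sparse sampling sketch: pick a few short clips at random temporal positions
    and return their frame indices. At test time one would instead spread the
    clips evenly over the video and aggregate the predictions."""
    rng = rng or np.random.default_rng()
    starts = rng.integers(0, max(1, num_frames - clip_len), size=num_clips)
    return [np.arange(s, s + clip_len) for s in sorted(starts)]

# A 180-second, 30 fps video has 5400 frames; a training step sees only 2 x 8 = 16 of them.
clips = sample_sparse_clips(num_frames=5400, num_clips=2, clip_len=8,
                            rng=np.random.default_rng(0))
print([c[[0, -1]] for c in clips])   # first and last frame index of each sampled clip
```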
Most object recognition approaches predominantly focus on learning discriminative visual patterns while overlooking the holistic object structure. Though important, structure modeling usually requires significant manual annotations and is therefore labor-intensive. In this paper, we propose to "look into object" (explicitly yet intrinsically model the object structure) by incorporating self-supervisions into the traditional framework. We show that the recognition backbone can be substantially enhanced for more robust representation learning, without any extra annotation cost or loss of inference speed. Specifically, we first propose an object-extent learning module that localizes the object according to the visual patterns shared among instances of the same category. We then design a spatial context learning module that models the internal structure of the object by predicting the relative positions within the extent. These two modules can easily be plugged into any backbone network during training and detached at inference time. Extensive experiments show that our look-into-object approach (LIO) achieves large performance gains on a number of benchmarks, including generic object recognition (ImageNet) and fine-grained object recognition tasks (CUB, Cars, Aircraft). We also show that this learning paradigm is highly generalizable to other tasks such as object detection and segmentation (MS COCO). Project page: //github.com/JDAI-CV/LIO.
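The PyTorch sketch below is a toy reading of the spatial-context idea, not the released LIO code: from a backbone feature map it predicts each cell's offset relative to a reference cell, with targets derived from the grid itself so that no extra labels are needed. The module choice, reference selection, and loss are our illustrative assumptions.

```python
import torch
import torch.nn as nn

class RelativePositionHead(nn.Module):
    """Toy relative-position prediction head (illustrative, names are ours)."""
    def __init__(self, channels):
        super().__init__()
        self.head = nn.Conv2d(2 * channels, 2, kernel_size=1)  # predicts (dy, dx)

    def forward(self, feat):                       # feat: (B, C, H, W)
        b, c, h, w = feat.shape
        # Reference cell: the spatially most responsive location per image.
        resp = feat.mean(1).flatten(1)             # (B, H*W)
        idx = resp.argmax(1)                       # (B,)
        ref = feat.flatten(2)[torch.arange(b), :, idx]          # (B, C)
        ref = ref[:, :, None, None].expand(-1, -1, h, w)
        pred = self.head(torch.cat([feat, ref], dim=1))         # (B, 2, H, W)
        # Self-supervised targets: normalised offsets of every cell from the reference.
        ys = torch.arange(h, dtype=feat.dtype).view(1, h, 1).expand(b, h, w)
        xs = torch.arange(w, dtype=feat.dtype).view(1, 1, w).expand(b, h, w)
        ry, rx = (idx // w).to(feat.dtype), (idx % w).to(feat.dtype)
        target = torch.stack([(ys - ry[:, None, None]) / h,
                              (xs - rx[:, None, None]) / w], dim=1)
        return nn.functional.smooth_l1_loss(pred, target)

loss = RelativePositionHead(64)(torch.randn(2, 64, 7, 7))
print(loss.item())
```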