Object detection on visible (RGB) and infrared (IR) images, as an emerging solution for robust around-the-clock applications, has received extensive attention in recent years. With the help of IR images, object detectors become more reliable and robust in practical applications by exploiting combined RGB-IR information. However, existing methods still suffer from modality miscalibration and fusion imprecision. Since transformers have a powerful capability to model pairwise correlations between different features, in this paper we propose a novel Calibrated and Complementary Transformer, called $\mathrm{C}^2$Former, to address these two problems simultaneously. In $\mathrm{C}^2$Former, we design an Inter-modality Cross-Attention (ICA) module that obtains calibrated and complementary features by learning the cross-attention relationship between the RGB and IR modalities. To reduce the computational cost of computing global attention in ICA, an Adaptive Feature Sampling (AFS) module is introduced to decrease the dimension of the feature maps. Because $\mathrm{C}^2$Former operates in the feature domain, it can be embedded into existing RGB-IR object detectors via the backbone network. We therefore construct one single-stage and one two-stage object detector, each incorporating our $\mathrm{C}^2$Former, to evaluate its effectiveness and versatility. Extensive experiments on the DroneVehicle and KAIST RGB-IR datasets verify that our method fully utilizes the complementary RGB-IR information and achieves robust detection results. The code is available at //github.com/yuanmaoxun/Calibrated-and-Complementary-Transformer-for-RGB-Infrared-Object-Detection.git.
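To make the ICA idea concrete, here is a minimal numpy sketch of one direction of inter-modality cross-attention (RGB queries attending to IR keys and values). All names, shapes, and the residual fusion are illustrative assumptions, not the paper's implementation, which also includes the AFS sampling step omitted here.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def inter_modality_cross_attention(rgb, ir, Wq, Wk, Wv):
    """One direction of ICA-style fusion: RGB tokens attend to IR tokens.

    rgb: (N, d) flattened RGB feature tokens
    ir:  (M, d) flattened IR feature tokens
    Wq, Wk, Wv: (d, d) learned projections (random here for illustration)
    """
    q = rgb @ Wq                                     # queries from one modality
    k, v = ir @ Wk, ir @ Wv                          # keys/values from the other
    attn = softmax(q @ k.T / np.sqrt(q.shape[-1]))   # (N, M) cross-modal affinities
    return rgb + attn @ v                            # residual complementary fusion

rng = np.random.default_rng(0)
d = 64
rgb, ir = rng.normal(size=(196, d)), rng.normal(size=(196, d))
Wq, Wk, Wv = (rng.normal(scale=d**-0.5, size=(d, d)) for _ in range(3))
print(inter_modality_cross_attention(rgb, ir, Wq, Wk, Wv).shape)  # (196, 64)
```

Note that the $N \times M$ attention map is precisely what motivates AFS: computing it globally is quadratic in the number of tokens, so sampling fewer tokens directly shrinks this matrix.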
Color image completion is a challenging problem in computer vision, but recent research has shown that quaternion representations of color images perform well in many areas. These representations treat the color image as a whole and effectively exploit the coupling information between the three color channels. Consequently, low-rank quaternion matrix completion (LRQMC) algorithms have gained significant attention. We propose a method based on the quaternion QR decomposition (QQR) and the quaternion $L_{2,1}$-norm, called QLNM-QQR. This new approach reduces computational complexity by avoiding the quaternion singular value decomposition (QSVD) of large quaternion matrices. We also present two improvements to QLNM-QQR: an enhanced version, IRQLNM-QQR, that uses iteratively reweighted quaternion $L_{2,1}$-norm minimization, and a method, QLNM-QQR-SR, that integrates sparse regularization. Our experiments on natural color images and color medical images show that IRQLNM-QQR outperforms QLNM-QQR and that the proposed QLNM-QQR-SR method is superior to several state-of-the-art methods.
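For reference, the quaternion $L_{2,1}$-norm is, by direct analogy with the real-matrix case (notation ours), the sum of the $\ell_2$-norms of the columns, with the quaternion modulus replacing the absolute value:

$$\|\dot{\mathbf{Q}}\|_{2,1} \;=\; \sum_{j=1}^{n} \Bigg( \sum_{i=1}^{m} |\dot{q}_{ij}|^2 \Bigg)^{1/2}, \qquad |\dot{q}| = \sqrt{a^2 + b^2 + c^2 + d^2} \ \text{ for } \ \dot{q} = a + b\,\mathtt{i} + c\,\mathtt{j} + d\,\mathtt{k}.$$

Iteratively reweighted minimization, as in IRQLNM-QQR, typically replaces each column norm with a weighted version whose weights are updated from the previous iterate, giving a closer surrogate of the rank function than a single convex pass.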
3D dense captioning requires a model to translate its understanding of an input 3D scene into several captions associated with different object regions. Existing methods adopt a sophisticated "detect-then-describe" pipeline, which builds explicit relation modules on top of a 3D detector with numerous hand-crafted components. While these methods have achieved initial success, the cascade pipeline tends to accumulate errors due to duplicated and inaccurate box estimations and cluttered 3D scenes. In this paper, we first propose Vote2Cap-DETR, a simple yet effective transformer framework that decouples caption generation from object localization through parallel decoding. Moreover, we argue that object localization and description generation require different levels of scene understanding, which can be challenging for a shared set of queries to capture. To this end, we propose an advanced version, Vote2Cap-DETR++, which decouples the queries into localization and caption queries to capture task-specific features. Additionally, we introduce an iterative spatial refinement strategy for the vote queries, yielding faster convergence and better localization performance, and we inject additional spatial information into the caption head for more accurate descriptions. Without bells and whistles, extensive experiments on two commonly used datasets, ScanRefer and Nr3D, demonstrate that Vote2Cap-DETR and Vote2Cap-DETR++ surpass conventional "detect-then-describe" methods by a large margin. Code will be made available at //github.com/ch3cook-fdu/Vote2Cap-DETR.
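As a rough illustration of the decoupled-query idea (a toy sketch, not the paper's architecture: the heads, dimensions, vote-query refinement, and spatial injection are all simplified away), two learned query sets can share one transformer decoder and feed task-specific heads:

```python
import torch
import torch.nn as nn

class DecoupledQueryDecoder(nn.Module):
    """Toy sketch: separate localization and caption queries, one shared decoder."""
    def __init__(self, d_model=256, nhead=8, num_layers=4, n_queries=256):
        super().__init__()
        layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers)
        self.loc_queries = nn.Parameter(torch.randn(n_queries, d_model))
        self.cap_queries = nn.Parameter(torch.randn(n_queries, d_model))
        self.box_head = nn.Linear(d_model, 6)    # hypothetical: box center + size
        self.cap_head = nn.Linear(d_model, 512)  # hypothetical caption features

    def forward(self, scene_tokens):             # (B, N, d) encoded scene features
        B = scene_tokens.size(0)
        q = torch.cat([self.loc_queries, self.cap_queries], dim=0)
        q = q.unsqueeze(0).expand(B, -1, -1)
        out = self.decoder(q, scene_tokens)       # parallel decoding, no cascade
        loc, cap = out.chunk(2, dim=1)            # split back into the two tasks
        return self.box_head(loc), self.cap_head(cap)

model = DecoupledQueryDecoder()
boxes, caps = model(torch.randn(2, 1024, 256))
print(boxes.shape, caps.shape)  # torch.Size([2, 256, 6]) torch.Size([2, 256, 512])
```

Because both query sets are decoded in parallel against the same scene tokens, there is no detect-then-describe cascade for box errors to accumulate through.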
We show that interactive protocols between a prover and a verifier, a well-known tool of complexity theory, can be used in practice to certify the correctness of automated reasoning tools. Theoretically, interactive protocols exist for all $\textsf{PSPACE}$ problems. The verifier of a protocol checks the prover's answer to a problem instance in probabilistic polynomial time, with polynomially many bits of communication, and with exponentially small probability of error. (The prover may need exponential time.) Existing interactive protocols are not used in practice because their provers rely on naive algorithms that are inefficient even for small instances and incompatible with practical implementations of automated reasoning. We bridge the gap between theory and practice by means of an interactive protocol whose prover uses BDDs. We consider the problem of counting the number of assignments to a QBF instance ($\#\textrm{CP}$), which has a natural BDD-based algorithm. We give an interactive protocol for $\#\textrm{CP}$ whose prover is implemented on top of an extended BDD library. The prover incurs only a linear overhead in computation time over the natural algorithm. We have implemented our protocol in $\textsf{blic}$, a certifying tool for $\#\textrm{CP}$. Experiments on standard QBF benchmarks show that $\textsf{blic}$ is competitive with state-of-the-art QBF solvers. The run time of the verifier is negligible. While the loss of absolute certainty may be a concern, the error probability in our experiments is at most $10^{-10}$ and reduces to $10^{-10k}$ by repeating the verification $k$ times.
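The final amplification claim is just independence of repeated runs: if a single verification accepts a wrong answer with probability at most $\varepsilon = 10^{-10}$, then

$$\Pr[\text{all } k \text{ independent verifications accept a wrong answer}] \;\le\; \varepsilon^{k} \;=\; \left(10^{-10}\right)^{k} \;=\; 10^{-10k}.$$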
In this paper, we propose a rate-splitting design and characterize the sum degrees-of-freedom (sum-DoF) of the $K$-user multiple-input single-output (MISO) broadcast channel with mixed channel state information at the transmitter (CSIT) and order-$(K-1)$ messages, where mixed CSIT refers to delayed plus imperfect current CSIT, and an order-$(K-1)$ message is a message desired by $K-1$ users simultaneously. For the sum-DoF lower bound, we propose a rate-splitting scheme embedded with retrospective interference alignment. In addition, we establish a matching sum-DoF upper bound via genie-aided signaling and an extremal inequality. In contrast to existing results for $K=2$, our results show that for $K>2$ the sum-DoF saturates with CSIT quality once certain CSIT quality thresholds are met.
The study of nonplanar drawings of graphs with restricted crossing configurations is a well-established topic in graph drawing, often referred to as beyond-planar graph drawing. One of the most studied types of drawings in this area are the $k$-planar drawings $(k \geq 1)$, in which each edge is crossed at most $k$ times. We generalize $k$-planar drawings by introducing the new family of min-$k$-planar drawings. In a min-$k$-planar drawing, edges can cross an arbitrary number of times, but for any two crossing edges, at least one of the two must have no more than $k$ crossings. We prove a general upper bound on the number of edges of min-$k$-planar drawings, a finer upper bound for $k=3$, and tight upper bounds for $k=1,2$. We also study the inclusion relations between min-$k$-planar graphs (i.e., graphs admitting min-$k$-planar drawings) and $k$-planar graphs.
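In symbols (notation ours), writing $\mathrm{cr}_\Gamma(e)$ for the number of crossings of edge $e$ in a drawing $\Gamma$, the drawing $\Gamma$ is min-$k$-planar when

$$\min\{\mathrm{cr}_\Gamma(e_1),\, \mathrm{cr}_\Gamma(e_2)\} \;\le\; k \qquad \text{for every pair of edges } e_1, e_2 \text{ that cross in } \Gamma,$$

whereas $k$-planarity demands the stronger condition $\mathrm{cr}_\Gamma(e) \le k$ for every edge; hence every $k$-planar drawing is trivially min-$k$-planar.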
Previous statistical approaches to hierarchical clustering for social network analysis all construct an "ultrametric" hierarchy. While the ultrametricity assumption has been discussed and studied in the phylogenetics literature, it has not yet been acknowledged in the social network literature. We show that non-ultrametric structure in the network introduces significant instabilities in existing top-down recovery algorithms. To address this issue, we introduce an instability diagnostic plot and use it to examine a collection of empirical networks; these networks appear to violate the ultrametric assumption. We propose a deceptively simple yet general class of probabilistic models, called $\mathbb{T}$-Stochastic Graphs, which impose no topological restrictions on the latent hierarchy. To illustrate this model, we propose six alternative forms of hierarchical network models and then show that all six are equivalent to the $\mathbb{T}$-Stochastic Graph model. These alternative models motivate a novel approach to hierarchical clustering that combines spectral techniques with the well-known Neighbor-Joining algorithm from phylogenetic reconstruction. We prove this spectral approach is statistically consistent.
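For readers unfamiliar with Neighbor-Joining, the heart of the algorithm is the Saitou-Nei $Q$-criterion for choosing which pair of leaves to merge next. The sketch below shows only that selection step on a toy tree metric; how the paper couples this with spectral estimates of the distances is its own contribution and is not reproduced here.

```python
import numpy as np

def nj_pick_pair(D):
    """One Neighbor-Joining step: pick the pair to merge via the Q-criterion.

    D: (n, n) symmetric distance matrix with zero diagonal.
    Returns indices (i, j) minimizing Q(i, j) = (n-2) D[i,j] - r[i] - r[j],
    where r[i] = sum_k D[i,k]. This is the standard Saitou-Nei criterion.
    """
    n = D.shape[0]
    r = D.sum(axis=1)
    Q = (n - 2) * D - r[:, None] - r[None, :]
    np.fill_diagonal(Q, np.inf)                 # never merge a leaf with itself
    i, j = np.unravel_index(np.argmin(Q), Q.shape)
    return (i, j) if i < j else (j, i)

# Distances realized by a small tree: leaves {0,1} and {2,3} are sibling pairs.
D = np.array([[0, 2, 7, 7],
              [2, 0, 7, 7],
              [7, 7, 0, 2],
              [7, 7, 2, 0]], float)
print(nj_pick_pair(D))  # (0, 1) -- a true sibling pair, as NJ guarantees on tree metrics
```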
The Longest Common Substring (LCS) and Longest Palindromic Substring (LPS) are classical problems in computer science, representing fundamental challenges in string processing. Both problems can be solved in linear time in a classical model of computation, by means of very similar algorithms, both relying on suffix trees. Very recently, two sublinear algorithms for LCS and LPS in the quantum query model were presented by Le Gall and Seddighin~\cite{GallS23}, requiring $\tilde{\mathcal{O}}(n^{5/6})$ and $\tilde{\mathcal{O}}(\sqrt{n})$ queries, respectively. However, while the query model is fascinating from a theoretical standpoint, its practical applicability is limited when it comes to designing algorithms for execution on real hardware. In this paper we present, for the first time, $\tilde{\mathcal{O}}(\sqrt{n})$ quantum algorithms for both LCS and LPS in the circuit model of computation. Our solutions are simpler than previous ones and can be readily translated into quantum procedures. We also present actual implementations of the two algorithms as quantum circuits running in $\mathcal{O}(\sqrt{n}\log^5(n))$ and $\mathcal{O}(\sqrt{n}\log^4(n))$ time, respectively.
Querying cohesive subgraphs on temporal graphs (e.g., social networks, financial networks) under various conditions has attracted intensive research interest recently. In this paper, we study a novel Temporal $(k,\mathcal{X})$-Core Query (TXCQ) that extends the fundamental Temporal $k$-Core Query (TCQ) proposed in our conference paper by optimizing or constraining an arbitrary metric $\mathcal{X}$ of the $k$-core, such as size, engagement, interaction frequency, time span, burstiness, or periodicity. Our objective is to address specific TXCQ instances with conditions on different $\mathcal{X}$ in a unified algorithmic framework that guarantees scalability. To that end, this journal paper proposes a taxonomy of measurements $\mathcal{X}(\cdot)$ and achieves our objective with a two-phase framework whenever $\mathcal{X}(\cdot)$ is time-insensitive or time-monotonic. Specifically, Phase 1 leverages the TCQ query processing algorithm to induce all distinct $k$-cores within a given time range, while also locating the "time zones" in which the cores emerge. Phase 2 then conducts fast local search and $\mathcal{X}$ evaluation in each time zone, exploiting the time insensitivity or monotonicity of $\mathcal{X}(\cdot)$. By introducing two insightful concepts, the tightest time interval and the loosest time interval, which bound the time zones, redundant core induction and unnecessary $\mathcal{X}$ evaluation within a zone can be reduced dramatically. Our experimental results demonstrate that TXCQ can be addressed as efficiently as TCQ, which achieves state-of-the-art performance, using a general algorithmic framework that leaves $\mathcal{X}(\cdot)$ as a user-defined function.
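Since everything in TXCQ is built on top of $k$-cores, a minimal sketch of the classical peeling computation may help fix ideas; the temporal projection that TCQ/TXCQ perform before peeling (restricting edges to a time window) is deliberately omitted here.

```python
from collections import defaultdict

def k_core(edges, k):
    """Peel vertices of degree < k until the k-core remains.

    edges: iterable of undirected (u, v) pairs. Returns the vertex set of
    the k-core (possibly empty). In a temporal setting, `edges` would first
    be restricted to a query time window; that step is omitted here.
    """
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    queue = [v for v in adj if len(adj[v]) < k]
    while queue:
        v = queue.pop()
        for u in adj.pop(v, ()):          # remove v; update its neighbors
            adj[u].discard(v)
            if len(adj[u]) < k:
                queue.append(u)
    return set(adj)

edges = [(1, 2), (2, 3), (1, 3), (3, 4)]  # triangle {1,2,3} plus a pendant vertex
print(k_core(edges, 2))                   # {1, 2, 3}
```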
Low-power event-based analog front-ends (AFEs) are a crucial component for building efficient end-to-end neuromorphic processing systems for edge computing. Although several neuromorphic chips have been developed to implement spiking neural networks (SNNs) and solve a wide range of sensory processing tasks, only a few general-purpose analog front-end devices exist that can convert analog sensory signals into spikes and be interfaced with neuromorphic processors. In this work, we present a novel, highly configurable analog front-end chip, denoted SPAIC (signal-to-spike converter for analog AI computation), that offers general-purpose, dual-mode analog signal-to-spike encoding with delta modulation and pulse-frequency modulation, with tunable frequency bands. The ASIC, designed in a 180 nm process, encodes a wide variety of signals spanning four orders of magnitude in frequency and provides an event-based output compatible with existing neuromorphic processors. We validated the functionality of the ASIC and present initial silicon measurement results characterizing the basic building blocks of the chip.
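As a software analogy for one of the two encoding modes (a hedged sketch only; the actual AFE does this with analog circuits, and the chip's threshold and reference handling may differ), send-on-delta encoding emits a signed event whenever the input moves one threshold step away from the last reference level:

```python
import numpy as np

def delta_modulation_events(signal, threshold):
    """Send-on-delta encoding: emit UP/DOWN events when the signal moves
    more than `threshold` from the last event's reference level.

    Returns a list of (sample_index, polarity) events, polarity in {+1, -1}.
    """
    ref = signal[0]
    events = []
    for i, x in enumerate(signal):
        while x - ref >= threshold:      # signal rose a full step: UP event
            ref += threshold
            events.append((i, +1))
        while ref - x >= threshold:      # signal fell a full step: DOWN event
            ref -= threshold
            events.append((i, -1))
    return events

t = np.linspace(0, 1, 1000)
sine = np.sin(2 * np.pi * 5 * t)         # 5 Hz test tone
ev = delta_modulation_events(sine, threshold=0.1)
print(len(ev), ev[:3])
```

Pulse-frequency modulation, the chip's other mode, would instead emit events at a rate proportional to the signal amplitude rather than to its changes.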
Click-through rate (CTR) prediction plays a critical role in recommender systems and online advertising. The data used in these applications are multi-field categorical data, where each feature belongs to one field. Field information has proven important, and several works incorporate fields into their models. In this paper, we propose a novel approach to model field information effectively and efficiently. The proposed approach is a direct improvement of FwFM and is named Field-matrixed Factorization Machines (FmFM, or $FM^2$). We also propose a new interpretation of FM and FwFM within the FmFM framework and compare it with FFM. Besides pruning cross terms, our model supports field-specific, variable-dimension embedding vectors, which acts as a form of soft pruning. We also propose an efficient way to minimize these dimensions while preserving model performance. The FmFM model can be further optimized by caching intermediate vectors, so that a prediction takes only thousands of floating-point operations (FLOPs). Our experimental results show that FmFM outperforms the more complex FFM, and its performance is comparable to that of DNN models that require far more FLOPs at runtime.
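The core interaction can be written as $\langle v_i M_{F(i),F(j)}, v_j \rangle$ for feature embeddings $v_i, v_j$ and one learned matrix per field pair. Below is a small numpy sketch under the simplifying assumption of one active feature per field, with the linear terms and bias omitted; all names are illustrative, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(1)
n_fields, dim = 3, 8
# One active (one-hot) feature per field; v[f] is that feature's embedding.
v = [rng.normal(size=dim) for f in range(n_fields)]
# A learned matrix per field pair transforms embeddings between field spaces.
M = {(f1, f2): rng.normal(scale=dim**-0.5, size=(dim, dim))
     for f1 in range(n_fields) for f2 in range(f1 + 1, n_fields)}

def fmfm_interactions(v, M):
    """Sum of field-matrixed pairwise interactions: <v_i M_{F(i),F(j)}, v_j>."""
    score = 0.0
    for (f1, f2), mat in M.items():
        score += (v[f1] @ mat) @ v[f2]   # project into the partner field, then dot
    return score

print(fmfm_interactions(v, M))
```

On this reading, fixing every $M$ to the identity recovers a plain FM, and fixing $M$ to a scalar multiple of the identity recovers FwFM's per-field-pair weighting, which is the unified interpretation the abstract alludes to.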