We design an additive approximation scheme for estimating the cost of the min-weight bipartite matching problem: given a bipartite graph with non-negative edge costs and $\varepsilon > 0$, our algorithm estimates the cost of matching all but an $O(\varepsilon)$-fraction of the vertices in truly subquadratic time $O(n^{2-\delta(\varepsilon)})$. Our algorithm has a natural interpretation for computing the Earth Mover's Distance (EMD), up to an $\varepsilon$-additive approximation. Notably, we make no assumptions about the underlying metric (more generally, the costs do not have to satisfy the triangle inequality). Note that, compared to the size of the instance (an arbitrary $n \times n$ cost matrix), our algorithm runs in {\em sublinear} time. Our algorithm can approximate a slightly more general problem: max-cardinality bipartite matching with a knapsack constraint, where the goal is to maximize the number of vertices that can be matched subject to a total cost budget $B$.
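To make the two objectives concrete, the following is a compact formalization under our reading of the abstract, where $M$ ranges over matchings of the given bipartite graph and $c(e)$ denotes the non-negative edge costs:
\[
  \mathrm{cost}_{\varepsilon}(G) \;=\; \min_{M} \Big\{ \textstyle\sum_{e \in M} c(e) \;:\; M \text{ leaves at most an } O(\varepsilon)\text{-fraction of the vertices unmatched} \Big\},
\]
\[
  \mathrm{match}_{B}(G) \;=\; \max_{M} \Big\{ |M| \;:\; \textstyle\sum_{e \in M} c(e) \le B \Big\}.
\]
The first quantity is the relaxed matching cost estimated additively by the algorithm; the second is the knapsack-constrained variant.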
Measurement-based quantum computing (MBQC) is a promising quantum computing paradigm that performs computation through ``one-way'' measurements on entangled qubits. It is widely used in photonic quantum computing (PQC), where the computation is carried out on photonic cluster states (i.e., a 2-D mesh of entangled photons). In MBQC-based PQC, the cluster state depth (i.e., the length of the sequence of one-way measurements) required to execute a quantum circuit plays an important role in the overall execution time and error. It is therefore important to reduce the cluster state depth. In this paper, we propose FMCC, a compilation framework that employs dynamic programming to efficiently minimize the cluster state depth. Experimental results on five representative quantum algorithms show that FMCC achieves 53.6%, 60.6%, and 60.0% average depth reductions for small, medium, and large qubit counts, respectively, compared to state-of-the-art MBQC compilation frameworks.
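As a toy illustration of the depth objective only (not the FMCC algorithm), assume we are given a dependency DAG over measurements and that any set of mutually independent measurements can be performed in one layer; the minimum achievable depth is then the longest dependency chain, computable by a simple dynamic program:

```python
from functools import lru_cache

def min_depth(deps):
    # deps[i] lists the measurements that must precede measurement i
    # (an assumed input format for this sketch).
    @lru_cache(maxsize=None)
    def depth(i):
        return 1 + max((depth(j) for j in deps[i]), default=0)
    return max(depth(i) for i in range(len(deps)))

# Example: 0 and 1 are independent; 2 depends on both; 3 depends on 2.
print(min_depth({0: (), 1: (), 2: (0, 1), 3: (2,)}))  # -> 3
```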
Existing out-of-distribution (OOD) detection methods have shown great success on balanced datasets but become ineffective in long-tailed recognition (LTR) scenarios, where 1) OOD samples are often wrongly classified into head classes and/or 2) tail-class samples are treated as OOD samples. To address these issues, current studies fit a prior distribution of auxiliary/pseudo OOD data to the long-tailed in-distribution (ID) data. However, it is difficult to obtain such an accurate prior distribution given that real OOD samples are unknown and the ID classes are heavily imbalanced in LTR. A straightforward solution that avoids the requirement of this prior is to learn an outlier class to encapsulate the OOD samples. The main challenge is then to tackle the aforementioned confusion between OOD samples and head/tail-class samples when learning the outlier class. To this end, we introduce a novel calibrated outlier class learning (COCL) approach, in which 1) a debiased large margin learning method is introduced in the outlier class learning to distinguish OOD samples from both head and tail classes in the representation space and 2) an outlier-class-aware logit calibration method is defined to enhance the long-tailed classification confidence. Extensive empirical results on three popular benchmarks, CIFAR10-LT, CIFAR100-LT, and ImageNet-LT, demonstrate that COCL substantially outperforms state-of-the-art OOD detection methods in LTR while also improving the classification accuracy on ID data. Code is available at //github.com/mala-lab/COCL.
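A minimal sketch of the kind of outlier-class-aware, prior-based logit calibration described above (not the paper's exact formulation; the class priors, the pseudo-prior for the outlier class, and the temperature tau are all assumptions):

```python
import numpy as np

def calibrated_probs(logits, class_priors, tau=1.0):
    """logits: (N, K+1) scores, last column = outlier class;
    class_priors: (K+1,) ID class frequencies plus a pseudo-prior for the outlier class."""
    adjusted = logits - tau * np.log(class_priors)   # counteract head-class bias
    adjusted -= adjusted.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(adjusted)
    return p / p.sum(axis=1, keepdims=True)

# OOD score = probability mass assigned to the outlier class:
# ood_score = calibrated_probs(logits, priors)[:, -1]
```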
This paper introduces and characterizes a new family of continuous probability distributions applicable to norm distributions in three-dimensional random spaces, specifically for the Euclidean norm of three Gaussian random variables with non-zero means. The distribution is supported on the semi-infinite range $[0,\infty)$ and is notable for its computational tractability. Building on this foundation, we also introduce a separate family of continuous probability distributions suitable for power distributions in three-dimensional random spaces. Although previously unknown, these distributions are attractive for numerous applications, some of which are discussed in this work.
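For context (this is the classical special case, not the new family introduced in the paper): if the three Gaussian components are independent with means $\mu_1,\mu_2,\mu_3$ and common unit variance, their Euclidean norm follows a non-central chi distribution with three degrees of freedom and non-centrality $\lambda=\sqrt{\mu_1^2+\mu_2^2+\mu_3^2}$, whose density is
\[
  f(x;\lambda) \;=\; \frac{x}{\lambda\sqrt{2\pi}}\left[e^{-(x-\lambda)^2/2}-e^{-(x+\lambda)^2/2}\right], \qquad x \ge 0 .
\]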
Debugging physical computing projects provides a rich context to understand cross-disciplinary problem solving that integrates multiple domains of computing and engineering. Yet understanding and assessing students' learning of debugging remains a challenge, particularly in understudied areas such as physical computing, since finding and fixing hardware and software bugs is a deeply contextual practice. In this paper we draw on the rich history of clinical interviews to develop and pilot "failure artifact scenarios" in order to study changes in students' approaches to debugging and troubleshooting electronic textiles (e-textiles). We applied this clinical interview protocol before and after an eight-week-long e-textiles unit. We analyzed pre/post clinical interviews from 18 students at four different schools. The analysis revealed that students improved in identifying bugs with greater specificity and across domains, as well as in considering multiple causes for bugs. We discuss implications for developing tools to assess students' debugging abilities through contextualized debugging scenarios in physical computing.
Graph Neural Networks (GNNs) have demonstrated their significance by effectively modeling complex interrelationships within graph-structured data. To enhance the credibility and robustness of GNNs, it is crucial to strengthen their ability to capture causal relationships. However, despite recent advances that strengthen GNNs from a causal learning perspective, an in-depth analysis specifically targeting the causal modeling capability of GNNs is still lacking. To comprehensively analyze various GNN models from a causal learning perspective, we construct a synthetic dataset with known and controllable causal relationships between data and labels, and we further justify the soundness of the generated data on theoretical grounds. Drawing on insights from analyses conducted with this dataset, we introduce a lightweight and highly adaptable GNN module designed to strengthen GNNs' causal learning capabilities across a diverse range of tasks. Through experiments on both synthetic and real-world datasets, we empirically validate the effectiveness of the proposed module.
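A minimal sketch of the kind of synthetic dataset described above (purely illustrative; the generator, the feature layout, and the spurious-correlation rate are assumptions, not the paper's construction):

```python
import numpy as np

def make_graph(rng, n_nodes=10, edge_prob=0.2, spurious_rate=0.9):
    # Random undirected graph without self-loops.
    adj = np.triu((rng.random((n_nodes, n_nodes)) < edge_prob).astype(float), 1)
    adj = adj + adj.T
    causal = rng.normal(size=(n_nodes, 1))
    label = int(causal.mean() > 0)                         # label caused only by the causal feature
    sign = 1.0 if rng.random() < spurious_rate else -1.0   # controllable spurious correlation
    spurious = sign * (2 * label - 1) + rng.normal(scale=0.1, size=(n_nodes, 1))
    x = np.concatenate([causal, spurious], axis=1)         # node features: [causal, spurious]
    return adj, x, label

rng = np.random.default_rng(0)
dataset = [make_graph(rng) for _ in range(100)]
```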
Low-precision fine-tuning of language models has gained prominence as a cost-effective and energy-efficient approach to deploying large-scale models in various applications. However, this approach is susceptible to outlier values in the activations. Outlier activation values can degrade the performance of low-precision fine-tuning because they inflate the scaling factor and thus make smaller values harder to represent. This paper investigates techniques for mitigating outlier activations in low-precision integer fine-tuning of language models. Our proposed approach represents the outlier activation values in 8-bit integers instead of floating-point (FP16) values. Using integers for the outlier values lets us apply operator tiling and thereby avoid 16-bit integer matrix multiplication. We provide theoretical analysis and supporting experiments to demonstrate the effectiveness of our approach in improving the robustness and performance of low-precision fine-tuned language models.
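A hedged sketch of the general idea of separating outlier channels while keeping all tiles in int8 (not the paper's kernel; the threshold, the per-tile scaling, and the NumPy emulation of integer matmul are assumptions):

```python
import numpy as np

def quantize_int8(x):
    scale = np.abs(x).max() / 127.0 + 1e-12
    return np.clip(np.round(x / scale), -127, 127).astype(np.int8), scale

def tiled_int8_matmul(A, W, outlier_thresh=6.0):
    """A: (N, D) activations, W: (D, M) weights, both float arrays."""
    out_cols = np.abs(A).max(axis=0) > outlier_thresh      # outlier channels
    acc = np.zeros((A.shape[0], W.shape[1]), dtype=np.float32)
    for cols in (~out_cols, out_cols):                     # one int8 tile per channel group
        if not cols.any():
            continue
        qa, sa = quantize_int8(A[:, cols])
        qw, sw = quantize_int8(W[cols, :])
        acc += (qa.astype(np.int32) @ qw.astype(np.int32)) * (sa * sw)
    return acc
```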
In approval-based committee (ABC) voting, the goal is to choose a subset of predefined size of the candidates based on the voters' approval preferences over the candidates. While this problem has attracted significant attention in recent years, the incentives for voters to participate in an election for a given ABC voting rule have been neglected so far. This paper is thus the first to explicitly study this property, typically called participation, for ABC voting rules. In particular, we show that all ABC scoring rules even satisfy group participation, whereas most sequential rules severely fail participation. We furthermore explore several escape routes to the impossibility for sequential ABC voting rules: we prove for many sequential rules that (i) they satisfy participation on laminar profiles, (ii) voters who approve none of the elected candidates cannot benefit by abstaining, and (iii) it is NP-hard for a voter to decide whether she benefits from abstaining.
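As a toy illustration of the participation question (not tied to any particular rule studied in the paper), the sketch below uses plain approval voting, an ABC scoring rule, and brute force to check whether a voter would end up with more approved committee members by abstaining; the lexicographic tie-breaking and the "number of approved winners" preference extension are assumptions:

```python
from itertools import combinations

def av_committee(profile, candidates, k):
    # AV committee of size k: maximize total approvals, ties broken lexicographically.
    return set(max(combinations(sorted(candidates), k),
                   key=lambda W: sum(len(set(W) & ballot) for ballot in profile)))

def benefits_from_abstaining(profile, candidates, k, i):
    ballot = profile[i]
    with_voter = av_committee(profile, candidates, k)
    without_voter = av_committee(profile[:i] + profile[i + 1:], candidates, k)
    return len(without_voter & ballot) > len(with_voter & ballot)

profile = [{"a", "b"}, {"a"}, {"c"}, {"b", "c"}]
print(benefits_from_abstaining(profile, {"a", "b", "c"}, k=2, i=0))  # False for this profile
```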
We propose a Perspective-n-Point (PnP) algorithm for a camera constrained to two-dimensional movement (applicable, for instance, to many wheeled robotics platforms). Leveraging this assumption allows performance improvements over 3D PnP algorithms due to the reduced dimensionality of the search space. It also reduces the incidence of ambiguous pose estimates (as, in most cases, the spurious solutions fall outside the plane of movement). Our algorithm finds an approximate solution using geometric criteria and refines its prediction iteratively. We compare this algorithm to existing 3D PnP algorithms for both general and coplanar point configurations.
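A minimal sketch of what a planar-motion pose refinement could look like (not the paper's algorithm; the yaw-only parameterization, the least-squares refinement, and the placeholder inputs K, points_3d, points_2d are assumptions):

```python
import numpy as np
from scipy.optimize import least_squares

def pose_matrix(x, z, theta):
    # Planar motion: translation in the x-z plane, rotation about the y axis.
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
    return R, np.array([x, 0.0, z])

def reprojection_residuals(params, K, points_3d, points_2d):
    R, t = pose_matrix(*params)
    cam = points_3d @ R.T + t                 # world -> camera frame
    proj = cam @ K.T
    proj = proj[:, :2] / proj[:, 2:3]         # perspective divide
    return (proj - points_2d).ravel()

def planar_pnp(K, points_3d, points_2d, init=(0.0, 0.0, 0.0)):
    sol = least_squares(reprojection_residuals, init, args=(K, points_3d, points_2d))
    return pose_matrix(*sol.x)                # refined (R, t) with 3 degrees of freedom
```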
The existence of representative datasets is a prerequisite for many successful artificial intelligence and machine learning models. However, the subsequent application of these models often involves scenarios that are inadequately represented in the data used for training. The reasons for this are manifold and range from time and cost constraints to ethical considerations. As a consequence, the reliable use of these models, especially in safety-critical applications, is a major challenge. Leveraging additional, already existing sources of knowledge is key to overcoming the limitations of purely data-driven approaches and, ultimately, to increasing the generalization capability of these models. Furthermore, predictions that conform with knowledge are crucial for making trustworthy and safe decisions even in underrepresented scenarios. This work provides an overview of existing techniques and methods in the literature that combine data-based models with existing knowledge. The identified approaches are structured according to the categories of integration, extraction, and conformity. Special attention is given to applications in the field of autonomous driving.
We introduce a generic framework that reduces the computational cost of object detection while retaining accuracy for scenarios where objects of varied sizes appear in high-resolution images. Detection progresses in a coarse-to-fine manner, first on a down-sampled version of the image and then on a sequence of higher-resolution regions identified as likely to improve the detection accuracy. Built upon reinforcement learning, our approach consists of a model (R-net) that uses coarse detection results to predict the potential accuracy gain of analyzing a region at a higher resolution and another model (Q-net) that sequentially selects regions to zoom in on. Experiments on the Caltech Pedestrians dataset show that our approach reduces the number of processed pixels by over 50% without a drop in detection accuracy. The merits of our approach become more significant on a high-resolution test set collected from the YFCC100M dataset, where our approach maintains high detection performance while reducing the number of processed pixels by about 70% and the detection time by over 50%.