In 1982, Papadimitriou and Yannakakis introduced the Exact Matching problem: given a graph $G$ whose edges are colored red and blue and an integer $k$, decide whether $G$ has a perfect matching with exactly $k$ red edges. Even though a randomized polynomial-time algorithm for this problem was quickly found a few years later, it is still unknown today whether a deterministic polynomial-time algorithm exists. This makes the Exact Matching problem an important candidate for testing the RP=P hypothesis. In this paper we focus on approximating Exact Matching. While there is a simple deterministic polynomial-time algorithm that computes an almost perfect matching with exactly $k$ red edges, little work has focused on computing perfect matchings with almost $k$ red edges. In fact, such an algorithm for bipartite graphs, running in deterministic polynomial time, was published only recently (STACS'23). It outputs a perfect matching with $k'$ red edges under the guarantee that $0.5k \leq k' \leq 1.5k$. In the present paper we aim to approximate the number of red edges without exceeding the limit of $k$ red edges. We construct a deterministic polynomial-time algorithm that, on bipartite graphs, computes a perfect matching with $k'$ red edges such that $k/3 \leq k' \leq k$.
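To make the problem definition concrete, here is a minimal brute-force sketch (exponential in the graph size, and in no way the paper's algorithm) that searches a small bipartite instance for a perfect matching with exactly $k$ red edges; the encoding of vertices and edges is our own.

```python
from itertools import permutations

def exact_matching_brute_force(n, red, blue, k):
    """Brute-force search for a perfect matching with exactly k red edges
    in a bipartite graph with left and right parts {0..n-1}.

    red, blue: sets of (left, right) edge pairs. For illustrating the
    problem definition only; exponential in n.
    """
    edges = red | blue
    for perm in permutations(range(n)):          # assign a right vertex to each left vertex
        matching = [(i, perm[i]) for i in range(n)]
        if all(e in edges for e in matching):    # is it a perfect matching of G?
            if sum(e in red for e in matching) == k:
                return matching
    return None

# A 4-cycle with one red and three blue edges admits a perfect
# matching with k = 1 red edge:
print(exact_matching_brute_force(2, {(0, 0)}, {(0, 1), (1, 0), (1, 1)}, 1))
```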
There is a growing interest in using pose estimation algorithms for video-based assessment of bradykinesia in Parkinson's Disease (PD) to facilitate remote disease assessment and monitoring. However, the accuracy of pose estimation algorithms on videos from video streaming services during Telehealth appointments has not been studied. In this study, we used seven off-the-shelf hand pose estimation models to estimate the movement of the thumb and index fingers in videos of the finger-tapping (FT) test recorded from Healthy Controls (HC) and participants with PD under two different conditions: streaming (videos recorded during a live Zoom meeting) and on-device (videos recorded locally with high-quality cameras). The accuracy and reliability of the models were estimated by comparing the models' output with manually obtained results. Three of the seven models demonstrated good accuracy for on-device recordings, and the accuracy decreased significantly for streaming recordings. We observed a negative correlation between movement speed and the models' accuracy for the streaming recordings. Additionally, we evaluated the reliability of ten movement features related to bradykinesia extracted from video recordings of PD patients performing the FT test. While most of the features demonstrated excellent reliability for on-device recordings, most demonstrated only poor to moderate reliability for streaming recordings. Our findings highlight the limitations of pose estimation algorithms applied to video recordings obtained during Telehealth visits, and demonstrate that on-device recordings can be used for automatic video-based assessment of bradykinesia in PD.
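As an illustration of the per-frame signal such pipelines extract, here is a minimal sketch using MediaPipe Hands (one representative off-the-shelf estimator; the study compared seven models, and its actual feature-extraction code is not reproduced here) to track the thumb-index fingertip distance across a finger-tapping video:

```python
import cv2
import mediapipe as mp

THUMB_TIP, INDEX_TIP = 4, 8  # MediaPipe hand-landmark indices for the two fingertips

def fingertip_distance_series(video_path):
    """Per-frame thumb-index distance, the raw signal behind FT features
    such as tapping speed and amplitude."""
    hands = mp.solutions.hands.Hands(static_image_mode=False, max_num_hands=1)
    cap = cv2.VideoCapture(video_path)
    series = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if result.multi_hand_landmarks:
            lm = result.multi_hand_landmarks[0].landmark
            dist = ((lm[THUMB_TIP].x - lm[INDEX_TIP].x) ** 2 +
                    (lm[THUMB_TIP].y - lm[INDEX_TIP].y) ** 2) ** 0.5
            series.append(dist)       # in normalized image coordinates
        else:
            series.append(None)       # detection failure on this frame
    cap.release()
    hands.close()
    return series
```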
The Euler Elastica (EE) model with surface curvature can generate artifact-free results compared with the traditional total variation regularization model in image processing. However, the strong nonlinearity and singularity introduced by the curvature term make it challenging to design fast and stable algorithms for the EE model. In this paper, we propose a new, fast, hybrid alternating minimization (HALM) algorithm for the EE model based on a bilinear decomposition of the gradient of the underlying image, and we prove the global convergence of the minimizing sequence generated by the algorithm under mild conditions. The HALM algorithm comprises three sub-minimization problems, each of which is either solved in closed form or approximated by fast solvers, making the new algorithm highly accurate and efficient. We also discuss the extension of the HALM strategy to general curvature-based variational models, especially those with a Lipschitz smooth functional of the curvature. A host of numerical experiments shows that the new algorithm produces good results with much-improved efficiency compared to other state-of-the-art algorithms for the EE model. As one of the benchmarks, we show that the average running time of the HALM algorithm is at most one-quarter of that of the fast operator-splitting-based Deng-Glowinski-Tai algorithm.
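For context, the Euler Elastica denoising energy is commonly written as follows (our notation; the paper's precise formulation, including the bilinear decomposition, may differ):

$$\min_{u} \int_{\Omega}\big(a + b\,\kappa^{2}\big)\,|\nabla u|\,dx \;+\; \frac{\lambda}{2}\int_{\Omega}(u-f)^{2}\,dx, \qquad \kappa = \nabla\cdot\frac{\nabla u}{|\nabla u|},$$

where $f$ is the observed image, $\kappa$ is the curvature of the level lines of $u$, and $a, b, \lambda > 0$ are weights. The $\kappa^{2}$ factor inside the length term is the source of the nonlinearity and singularity mentioned above; setting $b = 0$ recovers total variation regularization.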
The Natural Language Processing (NLP) community has used crowdsourcing techniques to create benchmark datasets such as the General Language Understanding Evaluation (GLUE) benchmark for training modern Language Models (LMs) such as BERT. GLUE tasks measure reliability using inter-annotator agreement metrics such as Cohen's Kappa. However, the reliability aspect of LMs themselves has often been overlooked. To counter this problem, we explore a knowledge-guided LM ensembling approach that leverages reinforcement learning to integrate knowledge from ConceptNet and Wikipedia as knowledge graph embeddings. This approach mimics human annotators resorting to external knowledge to compensate for information deficits in the datasets. Across nine GLUE datasets, our research shows that ensembling strengthens reliability and accuracy scores, outperforming the state of the art.
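The abstract does not spell out the ensembling mechanics, so the following is a heavily simplified, hypothetical sketch of the general idea: per-example mixture weights over base LMs are produced from knowledge-graph (KG) embedding features of the input. The paper learns such a policy with reinforcement learning; here the policy is a fixed linear map purely for illustration, and all names are ours.

```python
import numpy as np

def kg_guided_ensemble(lm_probs, kg_features, policy_weights):
    """lm_probs: (n_models, n_classes) class probabilities from each base LM.
    kg_features: (d,) KG-embedding features of the input (e.g., ConceptNet).
    policy_weights: (n_models, d) linear stand-in for the learned policy.
    """
    scores = policy_weights @ kg_features                 # one score per model
    w = np.exp(scores - scores.max())
    w /= w.sum()                                          # softmax mixture weights
    return w @ lm_probs                                   # knowledge-weighted prediction

probs = np.array([[0.7, 0.3], [0.4, 0.6], [0.55, 0.45]])  # three toy LMs, two classes
print(kg_guided_ensemble(probs, np.ones(8), np.random.randn(3, 8)))
```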
We consider the Shortest Odd Path problem: given an undirected graph $G$, a weight function on its edges, and two vertices $s$ and $t$ in $G$, find an $(s,t)$-path of odd length that has minimum weight among all such paths. For the case when the weight function is conservative, i.e., when every cycle has non-negative total weight, the complexity of the Shortest Odd Path problem had been open for 20 years and was recently shown to be NP-hard. We give a polynomial-time algorithm for the special case when the weight function is conservative and the set $E^-$ of negative-weight edges forms a single tree. Our algorithm exploits the strong connection between Shortest Odd Path and the problem of finding two internally vertex-disjoint paths between two terminals in an undirected edge-weighted graph. It also relies on solving an intermediate problem variant called Shortest Parity-Constrained Odd Path, in which certain edges carry parity constraints on their position along the path. We also present two FPT algorithms for Shortest Odd Path in graphs with conservative weight functions. The first is parameterized by $|E^-|$, the number of negative-weight edges, or more generally by the maximum size of a matching in the subgraph of $G$ spanned by $E^-$. The second is parameterized by the treewidth of $G$.
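To illustrate the parity-state idea behind tracking odd length, here is a standard sketch: Dijkstra on the layered graph of states $(v, \text{parity})$. Note the caveats: with non-negative weights it computes a minimum-weight odd $(s,t)$-walk, which may repeat vertices; the paper's problem, a simple odd path under conservative (possibly negative) weights, is much harder and is not solved by this sketch.

```python
import heapq

def shortest_odd_walk(adj, s, t):
    """Minimum-weight odd (s, t)-walk for non-negative weights, via Dijkstra
    on states (vertex, parity of edges used so far).
    adj: {v: [(u, weight), ...]} for an undirected graph."""
    dist = {(s, 0): 0}
    heap = [(0, s, 0)]
    while heap:
        d, v, p = heapq.heappop(heap)
        if (v, p) == (t, 1):                 # reached t with odd edge count
            return d
        if d > dist.get((v, p), float("inf")):
            continue                          # stale heap entry
        for u, w in adj[v]:
            nd, np_ = d + w, 1 - p            # every edge flips the parity
            if nd < dist.get((u, np_), float("inf")):
                dist[(u, np_)] = nd
                heapq.heappush(heap, (nd, u, np_))
    return None

graph = {0: [(1, 2), (2, 1)], 1: [(0, 2), (2, 1)], 2: [(0, 1), (1, 1)]}
print(shortest_odd_walk(graph, 0, 1))  # 2: the direct edge, an odd (length-1) walk
```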
The Kadison-Singer Conjecture, as proved by Marcus, Spielman, and Srivastava (MSS) [Ann. Math. 182, 327-350 (2015)], has been informally thought of as a strengthening of Batson, Spielman, and Srivastava's theorem that every undirected graph has a linear-sized spectral sparsifier [SICOMP 41, 1704-1721 (2012)]. We formalize this intuition by using a corollary of the MSS result to show that every undirected, weighted graph admits a spectral sparsifier whose number of edges is linear in its number of vertices. The proof consists of two steps. First, following a suggestion of Srivastava [Asia Pac. Math. Newsl. 3, 15-20 (2013)], we show the result in the special case of graphs with bounded leverage scores by repeatedly applying the MSS corollary to partition the graph, while maintaining an appropriate bound on the leverage scores of each subgraph. Then, we extend to the general case by constructing a recursive algorithm that repeatedly (i) splits edges with high leverage scores into multiple parallel edges and (ii) uses the bounded-leverage-score case to sparsify the resulting graph.
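Since both steps of the proof hinge on leverage scores, here is a short sketch (our own, using a dense pseudoinverse for clarity rather than efficiency) of what they are: the leverage score of an edge $e = (u,v)$ with weight $w_e$ is $\tau_e = w_e\, b_e^\top L^{+} b_e$, its weighted effective resistance.

```python
import numpy as np

def leverage_scores(n, edges):
    """Leverage scores tau_e = w_e * b_e^T L^+ b_e of a weighted graph.
    edges: list of (u, v, weight) on vertices 0..n-1."""
    L = np.zeros((n, n))                     # weighted graph Laplacian
    for u, v, w in edges:
        L[u, u] += w; L[v, v] += w
        L[u, v] -= w; L[v, u] -= w
    Lp = np.linalg.pinv(L)                   # Moore-Penrose pseudoinverse
    taus = []
    for u, v, w in edges:
        b = np.zeros(n); b[u], b[v] = 1.0, -1.0
        taus.append(w * b @ Lp @ b)          # weighted effective resistance
    return taus                              # sums to n - 1 on a connected graph

# Unit-weight triangle: every edge has leverage score 2/3.
print(leverage_scores(3, [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 1.0)]))
```

Splitting an edge into $m$ parallel copies of weight $w_e/m$ divides its leverage score by $m$, which is exactly how step (i) reduces to the bounded-leverage-score case.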
We introduce ChatSQC, an innovative chatbot system that combines the power of OpenAI's Large Language Models (LLMs) with a specific knowledge base in Statistical Quality Control (SQC). Our research focuses on enhancing LLMs using specific SQC references, shedding light on how data preprocessing parameters and LLM selection impact the quality of generated responses. By illustrating this process, we hope to motivate wider community engagement to refine LLM design and output appraisal techniques. We also highlight potential research opportunities within the SQC domain that can be facilitated by leveraging ChatSQC, thereby broadening the application spectrum of SQC. A primary goal of our work is to equip practitioners with a tool capable of generating precise SQC-related responses, thereby democratizing access to advanced SQC knowledge. To continuously improve ChatSQC, we ask the SQC community to provide feedback, highlight potential issues, request additional features, and/or contribute via pull requests through our public GitHub repository. Additionally, the team will continue to explore adding supplementary reference material that would further improve the contextual understanding of the chatbot. Overall, ChatSQC serves as a testament to the transformative potential of AI within SQC, and we hope it will spur further advancements in the integration of AI in this field.
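The abstract does not detail ChatSQC's internals, but grounding an LLM in a reference corpus is typically done via retrieval augmentation. The following is a minimal retrieval-augmented sketch using the OpenAI Python client; the model names are placeholders, and the chunking/retrieval choices here are assumptions, not the authors' pipeline.

```python
from openai import OpenAI
import numpy as np

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def embed(texts, model="text-embedding-3-small"):
    """Embed a list of SQC reference chunks (or a query) into vectors."""
    resp = client.embeddings.create(model=model, input=texts)
    return np.array([d.embedding for d in resp.data])

def answer(question, chunks, chunk_vecs, model="gpt-4o-mini", top_k=3):
    """Retrieve the top_k most similar reference chunks, then answer
    strictly from that retrieved context."""
    q = embed([question])[0]
    sims = chunk_vecs @ q / (np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(q))
    context = "\n\n".join(chunks[i] for i in np.argsort(sims)[-top_k:])
    resp = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system",
             "content": "Answer using only the provided SQC reference excerpts:\n" + context},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content
```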
Bidirectional Encoder Representations from Transformers (BERT) has shown marvelous improvements across various NLP tasks. Recently, an upgraded version of BERT was released with Whole Word Masking (WWM), which mitigates the drawback of masking only part of a word's WordPiece tokens when pre-training BERT. In this technical report, we adapt whole word masking to Chinese text, masking whole words instead of individual Chinese characters, which poses an additional challenge for the Masked Language Model (MLM) pre-training task. The model was trained on the latest Chinese Wikipedia dump. We aim to provide easy extensibility and better performance for Chinese BERT without changing the neural architecture or even the hyper-parameters. The model is verified on various NLP tasks, from sentence level to document level, including sentiment classification (ChnSentiCorp, Sina Weibo), named entity recognition (People's Daily, MSRA-NER), natural language inference (XNLI), sentence pair matching (LCQMC, BQ Corpus), and machine reading comprehension (CMRC 2018, DRCD, CAIL RC). Experimental results on these datasets show that whole word masking brings another significant gain. Moreover, we also examine the effectiveness of the Chinese pre-trained models BERT, ERNIE, and BERT-wwm. We release the pre-trained models (both TensorFlow and PyTorch) on GitHub: https://github.com/ymcui/Chinese-BERT-wwm
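To show the core idea, here is a toy sketch of whole word masking for Chinese: when a word is selected for masking, all of its tokens are masked together, never a single character in isolation. The word spans here are hard-coded stand-ins for the output of a Chinese word segmenter, and the sampling scheme is simplified relative to BERT's actual 80/10/10 masking recipe.

```python
import random

def whole_word_mask(tokens, word_spans, mask_prob=0.15, mask="[MASK]"):
    """Whole Word Masking sketch: if a word is chosen, mask ALL of its
    tokens. word_spans lists (start, end) token indices per word, as a
    Chinese word segmenter would produce."""
    out = list(tokens)
    for start, end in word_spans:
        if random.random() < mask_prob:
            for i in range(start, end):
                out[i] = mask          # the whole word is masked, not one character
    return out

# "使用 语言 模型" segmented into three two-character words:
pieces = ["使", "用", "语", "言", "模", "型"]
spans = [(0, 2), (2, 4), (4, 6)]
print(whole_word_mask(pieces, spans, mask_prob=0.5))
```

Under character-level masking, the model could often recover a masked character from the other half of the same word; masking the whole word removes that shortcut, which is the "additional challenge" noted above.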
This work addresses the novel and challenging problem of estimating the full 3D hand shape and pose from a single RGB image. Most current methods for 3D hand analysis from monocular RGB images focus only on estimating the 3D locations of hand keypoints, which cannot fully express the 3D shape of the hand. In contrast, we propose a Graph Convolutional Neural Network (Graph CNN) based method to reconstruct a full 3D mesh of the hand surface that contains richer information about both 3D hand shape and pose. To train the networks with full supervision, we create a large-scale synthetic dataset containing both ground-truth 3D meshes and 3D poses. When fine-tuning the networks on real-world datasets without 3D ground truth, we propose a weakly-supervised approach that leverages the depth map as weak supervision during training. Through extensive evaluations on our proposed new datasets and two public datasets, we show that our method produces accurate and plausible 3D hand meshes, and achieves superior 3D hand pose estimation accuracy compared with state-of-the-art methods.
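For readers unfamiliar with graph convolutions on meshes, here is a generic sketch of the building block such a network stacks: features attached to mesh vertices are aggregated over a normalized mesh adjacency and then linearly mixed, $H' = \sigma(\hat{A} H W)$. This is a simplified stand-in; the paper's network and its spectral filters may differ in detail, and the adjacency below is a toy placeholder.

```python
import torch

class GraphConv(torch.nn.Module):
    """Basic graph convolution H' = relu(A_hat @ H @ W) on mesh-vertex features."""
    def __init__(self, a_hat, in_dim, out_dim):
        super().__init__()
        self.register_buffer("a_hat", a_hat)        # (V, V) normalized mesh adjacency
        self.lin = torch.nn.Linear(in_dim, out_dim)

    def forward(self, h):                            # h: (batch, V, in_dim)
        return torch.relu(self.lin(self.a_hat @ h))  # aggregate neighbors, then mix features

# Toy "mesh" with 4 vertices; the final layer of such a stack would
# regress per-vertex 3D coordinates of the hand surface.
a_hat = torch.eye(4) * 0.5 + 0.5 / 4                 # placeholder row-stochastic adjacency
layer = GraphConv(a_hat, in_dim=16, out_dim=3)
print(layer(torch.randn(2, 4, 16)).shape)            # torch.Size([2, 4, 3])
```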
Deep Convolutional Neural Networks (CNNs) are a special type of Neural Networks, which have shown state-of-the-art results on various competitive benchmarks. The powerful learning ability of deep CNNs is largely achieved with the use of multiple non-linear feature extraction stages that can automatically learn hierarchical representations from the data. The availability of large amounts of data and improvements in hardware processing units have accelerated research in CNNs, and very interesting deep CNN architectures have recently been reported. The recent race in deep CNN architectures for achieving high performance on challenging benchmarks has shown that innovative architectural ideas, as well as parameter optimization, can improve CNN performance on various vision-related tasks. In this regard, different ideas in CNN design have been explored, such as the use of different activation and loss functions, parameter optimization, regularization, and restructuring of processing units. However, the major improvement in representational capacity has been achieved by restructuring the processing units. In particular, the idea of using a block rather than a layer as the structural unit is gaining substantial appreciation. This survey thus focuses on the intrinsic taxonomy present in recently reported CNN architectures and, consequently, classifies the recent innovations in CNN architectures into seven categories, based on spatial exploitation, depth, multi-path, width, feature-map exploitation, channel boosting, and attention. Additionally, it covers the fundamentals of CNN components and sheds light on the current challenges and applications of CNNs.
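As one concrete instance of the "block as a structural unit" idea surveyed above, here is a minimal residual block in the style of ResNet (our sketch, not any specific architecture from the survey): two convolutions plus an identity skip, reused as the repeating unit of a deep network.

```python
import torch

class ResidualBlock(torch.nn.Module):
    """A block-level structural unit: conv-BN-ReLU-conv-BN with an identity
    skip connection, stacked many times to build a deep CNN."""
    def __init__(self, channels):
        super().__init__()
        self.body = torch.nn.Sequential(
            torch.nn.Conv2d(channels, channels, 3, padding=1),
            torch.nn.BatchNorm2d(channels),
            torch.nn.ReLU(inplace=True),
            torch.nn.Conv2d(channels, channels, 3, padding=1),
            torch.nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))  # the identity skip keeps depth trainable

x = torch.randn(1, 32, 56, 56)
print(ResidualBlock(32)(x).shape)  # torch.Size([1, 32, 56, 56])
```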
Visual Question Answering (VQA) models have so far struggled to count objects in natural images. We identify the soft attention used in these models as a fundamental cause of this problem. To circumvent it, we propose a neural network component that allows robust counting from object proposals. Experiments on a toy task show the effectiveness of this component, and we obtain state-of-the-art accuracy on the number category of the VQA v2 dataset without negatively affecting other categories, even outperforming ensemble models with our single model. On a difficult balanced-pair metric, the component yields a substantial 6.6% improvement in counting over a strong baseline.
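A tiny numeric sketch (ours, with toy scores and features) of why soft attention loses count information: the normalized weights always sum to 1, so the attended feature average looks the same whether one or four proposals depict the queried object.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

scores_one  = np.array([5.0, 0.0, 0.0, 0.0])   # one proposal matches the question
scores_four = np.array([5.0, 5.0, 5.0, 5.0])   # four proposals match
feats_one   = np.array([1.0, 0.0, 0.0, 0.0])   # 1 = "is the queried object"
feats_four  = np.array([1.0, 1.0, 1.0, 1.0])

print(softmax(scores_one)  @ feats_one)   # ~0.98
print(softmax(scores_four) @ feats_four)  # 1.0 -- nearly indistinguishable from one object
```

This is what the proposed counting component works around by operating on the unnormalized object proposals instead of the softmax-averaged features.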