We investigate pseudopolynomial-time algorithms for Bounded Knapsack and Bounded Subset Sum. Recent years have seen a growing interest in settling their fine-grained complexity with respect to various parameters. For Bounded Knapsack, the number of items $n$ and the maximum item weight $w_{\max}$ are two of the most natural parameters and have been studied extensively in the literature. The previous best running time in terms of $n$ and $w_{\max}$ is $O(n + w^3_{\max})$ [Polak, Rohwedder, Wegrzycki '21], and there is a conditional lower bound of $(n + w_{\max})^{2-o(1)}$ based on the $(\min,+)$-convolution hypothesis [Cygan, Mucha, Wegrzycki, Wlodarczyk '17]. We narrow the gap significantly by proposing an $\tilde{O}(n + w^{12/5}_{\max})$-time algorithm. Note that in the regime where $w_{\max} \approx n$, our algorithm runs in $\tilde{O}(n^{12/5})$ time, while all previous algorithms require $\Omega(n^3)$ time in the worst case. For Bounded Subset Sum, we give two algorithms running in $\tilde{O}(nw_{\max})$ and $\tilde{O}(n + w^{3/2}_{\max})$ time, respectively. These results match the current best running times for 0-1 Subset Sum. Prior to our work, the best running times (in terms of $n$ and $w_{\max}$) for Bounded Subset Sum were $\tilde{O}(n + w^{5/3}_{\max})$ [Polak, Rohwedder, Wegrzycki '21] and $\tilde{O}(n + \mu_{\max}^{1/2}w_{\max}^{3/2})$ [implied by Bringmann '19 and Bringmann, Wellnitz '21], where $\mu_{\max}$ denotes the maximum multiplicity of item weights.
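For context only, the following is a minimal sketch of the textbook pseudopolynomial baseline for Bounded Subset Sum (binary splitting of multiplicities followed by a 0-1 reachability table), not the algorithms proposed in this work; the weights, multiplicities, and target below are illustrative.

```python
# Baseline sketch (textbook dynamic program, NOT the algorithm of this paper):
# Bounded Subset Sum with weights w[i], multiplicities mu[i], and target t.
# Binary splitting turns each bounded item into O(log mu[i]) 0-1 items,
# after which a standard reachability DP over sums 0..t applies.

def bounded_subset_sum(weights, multiplicities, t):
    # Expand each item into chunks of sizes 1, 2, 4, ... (binary splitting).
    expanded = []
    for w, mu in zip(weights, multiplicities):
        k = 1
        while mu > 0:
            take = min(k, mu)
            expanded.append(w * take)
            mu -= take
            k *= 2

    # reachable[s] is True iff some sub-multiset sums to exactly s.
    reachable = [False] * (t + 1)
    reachable[0] = True
    for w in expanded:
        for s in range(t, w - 1, -1):  # backwards: each expanded item used at most once
            if reachable[s - w]:
                reachable[s] = True
    return reachable[t]

# Illustrative usage: 3 + 5 + 5 = 13.
print(bounded_subset_sum([3, 5], [4, 2], 13))  # True
```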
Visual Inertial Odometry (VIO) is an essential component of modern Augmented Reality (AR) applications. However, VIO only tracks the relative pose of the device, leading to drift over time. Absolute pose estimation methods infer the device's absolute pose, but their accuracy depends on the input quality. This paper introduces VIO-APR, a new framework for markerless mobile AR that combines an absolute pose regressor (APR) with a local VIO tracking system. VIO-APR uses VIO to assess the reliability of the APR and the APR to identify and compensate for VIO drift. This feedback loop results in more accurate positioning and more stable AR experiences. To evaluate VIO-APR, we created a dataset that combines camera images with the output of ARKit's VIO system for six indoor and outdoor scenes of various scales. On this dataset, VIO-APR improves the median accuracy of popular APRs by up to 36\% in position and 29\% in orientation, increases the percentage of frames in the high ($0.25$ m, $2^{\circ}$) accuracy level by up to 112\%, and greatly reduces the percentage of frames predicted below the low ($5$ m, $10^\circ$) accuracy level. We implement VIO-APR in a mobile AR application built with Unity to demonstrate its capabilities, where it yields noticeably more accurate localization and a more stable overall experience.
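To make the feedback loop concrete, here is a rough, hypothetical sketch of the idea; the position-only pose representation, stub functions, and consistency threshold are our own illustrative assumptions, not the actual VIO-APR implementation.

```python
import numpy as np

# Hypothetical sketch of a VIO/APR feedback loop (illustrative assumptions only).

def vio_delta(frame):
    """Relative translation reported by the VIO system since the previous frame (stub)."""
    return np.asarray(frame["vio_delta"], dtype=float)

def apr_predict(frame):
    """Absolute position predicted by the absolute pose regressor (stub)."""
    return np.asarray(frame["apr_position"], dtype=float)

def fuse(frames, start, tolerance=0.5):
    position = np.asarray(start, dtype=float)    # current fused estimate
    for frame in frames:
        position = position + vio_delta(frame)   # drift-prone VIO update
        apr = apr_predict(frame)
        # VIO assesses APR reliability: accept the APR fix only if it is
        # consistent with the locally tracked motion.
        if np.linalg.norm(apr - position) < tolerance:
            position = apr                        # APR compensates accumulated VIO drift
        yield position

frames = [{"vio_delta": [0.1, 0.0], "apr_position": [0.12, 0.01]},
          {"vio_delta": [0.1, 0.0], "apr_position": [5.0, 5.0]}]  # second APR fix rejected
print(list(fuse(frames, start=[0.0, 0.0])))
```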
The application of Physics-Informed Neural Networks (PINNs) is investigated for the first time in solving the one-dimensional countercurrent spontaneous imbibition (COUCSI) problem at both early and late time (i.e., before and after the imbibition front meets the no-flow boundary). We introduce change of variables as a technique for improving the performance of PINNs. We formulated the COUCSI problem in three equivalent forms by changing the independent variables. The first describes saturation as a function of normalized position $X$ and time $T$; the second as a function of $X$ and $Y=T^{0.5}$; and the third as a sole function of $Z=X/T^{0.5}$ (valid only at early time). The PINN model was generated using a feed-forward neural network and trained by minimizing a weighted loss function comprising the physics-informed loss term and terms corresponding to the initial and boundary conditions. All three formulations could closely approximate the correct solutions, with water saturation mean absolute errors of around 0.019 and 0.009 for the XT and XY formulations, respectively, and 0.012 for the Z formulation at early time. The Z formulation perfectly captured the self-similarity of the system at early time, a property captured less well by the XT and XY formulations. The total variation of saturation was preserved in the Z formulation, and was better preserved by the XY formulation than by the XT formulation. Redefining the problem in terms of the physics-inspired variables reduced the non-linearity of the problem and allowed higher solution accuracies, a higher degree of loss-landscape convexity, fewer required collocation points, smaller network sizes, and more computationally efficient solutions.
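For clarity, the three equivalent formulations referred to above can be written as follows, writing $S$ for the water saturation (our notation, restating the abstract):
\begin{align*}
  \text{XT formulation:} \quad & S = S(X, T), \\
  \text{XY formulation:} \quad & S = S(X, Y), \qquad Y = T^{0.5}, \\
  \text{Z formulation:}  \quad & S = S(Z), \qquad\;\;\; Z = X\,T^{-0.5} \quad \text{(early time only).}
\end{align*}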
Despite their impressive performance in a wide range of NLP tasks, Large Language Models (LLMs) have been reported to encode worrying levels of gender bias. Prior work has proposed debiasing methods that require human-labelled examples, data augmentation, and fine-tuning of the LLMs, which are computationally costly. Moreover, one might not even have access to the internal parameters needed for debiasing, as is the case with commercially available LLMs such as GPT-4. To address this challenge, we propose bias suppression, a novel alternative to debiasing that does not require access to model parameters. We show that text-based preambles, generated from manually designed templates covering counterfactual statements, can effectively suppress gender biases in LLMs. Moreover, we find that descriptive sentences for occupations can further suppress gender biases. Interestingly, we find that bias suppression has a minimal adverse effect on downstream task performance, while effectively mitigating the gender biases.
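As an illustration of the idea (the preamble wording below is a hypothetical example of ours, not one of the paper's templates), bias suppression amounts to prepending a counterfactual preamble to the query without touching any model parameters:

```python
# Hypothetical illustration of bias suppression via a text preamble.
# The preamble text is our own example, not a template from the paper.

PREAMBLE = ("Note: any occupation can be held by a person of any gender. "
            "For example, a nurse can be a man and an engineer can be a woman.")

def suppress_bias(query: str) -> str:
    """Prepend the counterfactual preamble; no access to model parameters is needed."""
    return f"{PREAMBLE}\n\n{query}"

prompt = suppress_bias("The doctor told the nurse that she would be late. Who was late?")
print(prompt)  # this prompt would then be sent to the LLM as-is
```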
In this work, we study minimum data rate tracking of a dynamical system under a neuromorphic event-based sensing paradigm. We begin by bridging the gap between continuous-time (CT) system dynamics and causal rate-distortion theory from information theory. We motivate the use of non-singular source codes to quantify bitrates in event-based sampling schemes. This permits an analysis of minimum-bitrate event-based tracking using tools already established in the control and information theory literature. We derive novel, nontrivial lower bounds for event-based sensing and compare them with the performance of well-known schemes from the established literature.
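For readers unfamiliar with the underlying machinery, the classical (non-causal, single-letter) rate-distortion function that this line of work builds on is the textbook quantity
\[
R(D) \;=\; \min_{\,p(\hat{x}\mid x)\,:\;\mathbb{E}[d(X,\hat{X})]\,\le\, D}\; I(X;\hat{X}),
\]
i.e., the least mutual information achievable subject to an expected distortion constraint; the causal, continuous-time variant analyzed in the paper refines this standard definition.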
The recent Facebook rebranding to Meta has drawn renewed attention to the metaverse. Technology giants, amongst others, are increasingly embracing the vision and opportunities of a hybrid social experience that mixes physical and virtual interactions. As the metaverse gains traction, it is expected that everyday objects may soon connect more closely with virtual elements. However, discovering this "hidden" virtual world will be a crucial first step to interacting with it. In this paper, we address the problem of connecting physical objects with their virtual counterparts, especially through connections built upon visual markers. We propose a unified recognition framework that guides approaches to identifying metaverse access points. We illustrate the use of the framework through experimental studies under different conditions, in which an interactive and visually attractive decoration pattern, an Artcode, is used to enable the connection. This paper will be of interest to, amongst others, researchers working in Interaction Design or Augmented Reality who are seeking techniques or guidelines for augmenting physical objects in an unobtrusive, complementary manner.
Recent artificial intelligence (AI) systems have reached milestones in "grand challenges" ranging from Go to protein folding. The capability to retrieve medical knowledge, reason over it, and answer medical questions comparably to physicians has long been viewed as one such grand challenge. Large language models (LLMs) have catalyzed significant progress in medical question answering; Med-PaLM was the first model to exceed a "passing" score on US Medical Licensing Examination (USMLE)-style questions, with a score of 67.2% on the MedQA dataset. However, this and other prior work suggested significant room for improvement, especially when models' answers were compared to clinicians' answers. Here we present Med-PaLM 2, which bridges these gaps by leveraging a combination of base LLM improvements (PaLM 2), medical domain finetuning, and prompting strategies, including a novel ensemble refinement approach. Med-PaLM 2 scored up to 86.5% on the MedQA dataset, improving upon Med-PaLM by over 19% and setting a new state of the art. We also observed performance approaching or exceeding the state of the art on the MedMCQA, PubMedQA, and MMLU clinical topics datasets. We performed detailed human evaluations on long-form questions along multiple axes relevant to clinical applications. In a pairwise comparative ranking of 1066 consumer medical questions, physicians preferred Med-PaLM 2 answers to those produced by physicians on eight of nine axes pertaining to clinical utility (p < 0.001). We also observed significant improvements compared to Med-PaLM on every evaluation axis (p < 0.001) on newly introduced datasets of 240 long-form "adversarial" questions designed to probe LLM limitations. While further studies are necessary to validate the efficacy of these models in real-world settings, these results highlight rapid progress towards physician-level performance in medical question answering.
Text classification is one of the most essential and fundamental problems in Natural Language Processing. While many recent text classification models apply sequential deep learning techniques, graph neural network-based models can directly deal with complex structured text data and exploit global information. Many real text classification applications can naturally be cast as a graph that captures word, document, and corpus-level features. In this survey, we bring the coverage of methods up to 2023, including both corpus-level and document-level graph neural networks. We discuss each of these methods in detail, covering the graph construction mechanisms and the graph-based learning process. Beyond the technical survey, we examine open issues and future directions for text classification with graph neural networks. We also cover datasets, evaluation metrics, and experiment design, and present a summary of published performance on publicly available benchmarks. Finally, we present a comprehensive comparison of the different techniques and identify the pros and cons of various evaluation metrics.
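As a concrete illustration of corpus-level graph construction (a minimal sketch in the spirit of TextGCN-style methods; the weighting choices are simplified assumptions, since real systems typically use PMI over sliding windows for word-word edges):

```python
import math
from collections import Counter
from itertools import combinations

# Simplified corpus-level graph: word-document edges weighted by TF-IDF,
# word-word edges by raw co-occurrence counts within documents.

docs = [["graph", "neural", "network"],
        ["text", "classification", "graph"],
        ["neural", "text", "model"]]

df = Counter(w for d in docs for w in set(d))          # document frequency per word

edges = {}
for i, doc in enumerate(docs):
    tf = Counter(doc)
    for w, c in tf.items():                            # word-document edges (TF-IDF)
        idf = math.log(len(docs) / df[w])
        edges[(f"doc{i}", w)] = (c / len(doc)) * idf
    for u, v in combinations(sorted(set(doc)), 2):     # word-word edges (co-occurrence)
        edges[(u, v)] = edges.get((u, v), 0) + 1

for edge, weight in sorted(edges.items()):
    print(edge, round(weight, 3))
```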
Autonomous driving has achieved significant milestones in research and development over the last decade. There is increasing interest in the field, as the deployment of self-operating vehicles on roads promises safer and more ecologically friendly transportation systems. With the rise of computationally powerful artificial intelligence (AI) techniques, autonomous vehicles can sense their environment with high precision, make safe real-time decisions, and operate more reliably without human intervention. However, intelligent decision-making in autonomous cars is not generally understandable by humans in the current state of the art, and this deficiency hinders the technology's social acceptance. Hence, aside from making safe real-time decisions, the AI systems of autonomous vehicles also need to explain how these decisions are constructed in order to be regulatory-compliant across many jurisdictions. Our study provides a comprehensive overview of developing explainable artificial intelligence (XAI) approaches for autonomous vehicles. In particular, we make the following contributions. First, we provide a thorough overview of the present gaps with respect to explanations in the state-of-the-art autonomous vehicle industry. Second, we present a taxonomy of explanations and explanation receivers in this field. Third, we propose a framework for the architecture of end-to-end autonomous driving systems and justify the role of XAI in both debugging and regulating such systems. Finally, as future research directions, we provide a field guide on XAI approaches for autonomous driving that can improve operational safety and transparency towards achieving public approval by regulators, manufacturers, and all engaged stakeholders.
Deep neural networks (DNNs) are successful in many computer vision tasks. However, the most accurate DNNs require millions of parameters and operations, making them energy-, computation-, and memory-intensive. This impedes the deployment of large DNNs in low-power devices with limited compute resources. Recent research improves DNN models by reducing the memory requirement, energy consumption, and number of operations without significantly decreasing the accuracy. This paper surveys the progress of low-power deep learning and computer vision, specifically with regard to inference, and discusses methods for compacting and accelerating DNN models. The techniques can be divided into four major categories: (1) parameter quantization and pruning, (2) compressed convolutional filters and matrix factorization, (3) network architecture search, and (4) knowledge distillation. We analyze the accuracy, advantages, and disadvantages of the techniques in each category, along with potential solutions to their problems. We also discuss new evaluation metrics as a guideline for future research.
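As a toy illustration of category (1), the sketch below applies magnitude-based pruning and uniform 8-bit quantization to a single weight matrix in NumPy; this is a simplified assumption of how such methods operate, not a substitute for framework-level tooling such as quantization-aware training.

```python
import numpy as np

# Toy sketch of parameter pruning and quantization on one weight matrix.

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4)).astype(np.float32)

# Pruning: zero out the 50% of weights with the smallest magnitude.
threshold = np.quantile(np.abs(W), 0.5)
W_pruned = np.where(np.abs(W) >= threshold, W, 0.0)

# Quantization: map the remaining weights to int8 with a single linear scale.
scale = np.abs(W_pruned).max() / 127.0
W_int8 = np.round(W_pruned / scale).astype(np.int8)
W_dequant = W_int8.astype(np.float32) * scale          # values used at inference time

print("sparsity:", np.mean(W_pruned == 0))
print("max quantization error:", np.abs(W_pruned - W_dequant).max())
```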
Inspired by the recent development of artificial satellites, remote sensing images have attracted extensive attention. Recently, noticeable progress has been made in scene classification and target detection. However, it is still not clear how to describe remote sensing image content with accurate and concise sentences. In this paper, we investigate how to describe remote sensing images with accurate and flexible sentences. First, annotation instructions are presented to better describe remote sensing images, taking into account their special characteristics. Second, in order to exhaustively exploit the contents of remote sensing images, a large-scale aerial image data set is constructed for remote sensing image captioning. Finally, a comprehensive review is presented on the proposed data set to fully advance the task of remote sensing image captioning. Extensive experiments on the proposed data set demonstrate that the content of remote sensing images can be completely described by generated language descriptions. The data set is available at //github.com/2051/RSICD_optimal