In this paper, we propose an online-matching-based model to tackle two fundamental issues, matching and pricing, that arise in a wide range of real-world gig platforms, including ride-hailing (matching riders and drivers), crowdsourcing markets (pairing workers and tasks), and online recommendations (offering items to customers). Our model assumes that the arrival distributions of dynamic agents (e.g., riders, workers, and buyers) are accessible in advance and can change over time, a setting we refer to as \emph{Known Heterogeneous Distributions} (KHD). We initiate variance analysis for online matching algorithms under KHD. Unlike the popular competitive-ratio (CR) metric, the variance of online algorithms' performance is rarely studied due to inherent technical challenges, even though it is closely linked to robustness. We focus on two natural parameterized sampling policies, denoted by $\mathsf{ATT}(\gamma)$ and $\mathsf{SAMP}(\gamma)$, which serve as foundational building blocks in online algorithm design. We offer rigorous CR and variance analyses for both policies. Specifically, we show that $\mathsf{ATT}(\gamma)$ with $\gamma \in [0,1/2]$ achieves a CR of $\gamma$ and a variance of $\gamma \cdot (1-\gamma) \cdot B$ on the total number of matches, where $B$ is the total matching capacity. In contrast, $\mathsf{SAMP}(\gamma)$ with $\gamma \in [0,1]$ achieves a CR of $\gamma (1-\gamma)$ and a variance of $\bar{\gamma} (1-\bar{\gamma})\cdot B$, where $\bar{\gamma}=\min(\gamma,1/2)$. All CR and variance analyses are tight and hold regardless of the benchmark. As a byproduct, we prove that $\mathsf{ATT}(\gamma=1/2)$ achieves an optimal CR of $1/2$.
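The variance expression $\gamma(1-\gamma)B$ matches the variance of a $\mathrm{Binomial}(B,\gamma)$ random variable, which suggests a simple intuition: if each of the $B$ units of matching capacity were filled independently with probability $\gamma$, the match count would be exactly binomial. The sketch below checks this binomial intuition empirically; it is a toy model we introduce only for orientation, not the paper's actual arrival process or policy definitions.

\begin{verbatim}
import random

def simulate_toy_policy(B, gamma, trials=100_000):
    """Toy model: each of B capacity units is matched independently
    with probability gamma, so matches ~ Binomial(B, gamma)."""
    counts = [sum(random.random() < gamma for _ in range(B))
              for _ in range(trials)]
    mean = sum(counts) / trials
    var = sum((c - mean) ** 2 for c in counts) / trials
    return mean, var

mean, var = simulate_toy_policy(B=50, gamma=0.5)
print(mean, var)  # approx. gamma*B = 25 and gamma*(1-gamma)*B = 12.5
\end{verbatim}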
In this paper, we investigate the conditions under which link analysis algorithms prevent minority groups from reaching high ranking slots. We find that the most common link-based algorithms using centrality metrics, such as PageRank and HITS, can reproduce and even amplify bias against minority groups in networks. Yet their behavior differs: on the one hand, we empirically show that PageRank mirrors the degree distribution for most ranking positions and can equalize the representation of minorities among the top-ranked nodes; on the other hand, through a novel theoretical analysis supported by empirical results, we find that HITS amplifies pre-existing bias in homophilic networks. We identify the root cause of bias amplification in HITS as the level of homophily present in the network, which we model through an evolving network model with two communities. We illustrate our theoretical analysis on both synthetic and real datasets and present directions for future work.
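To make the comparison concrete, the following sketch scores a small homophilic two-community graph with PageRank and HITS and reports the minority share of the top-ranked nodes. The stochastic block model, the 80/20 split, and the edge probabilities are illustrative choices of ours, not the evolving network model analyzed in the paper.

\begin{verbatim}
import networkx as nx

sizes = [80, 20]                        # majority, minority
probs = [[0.10, 0.01], [0.01, 0.10]]    # homophily: within-group >> between-group
G = nx.stochastic_block_model(sizes, probs, seed=42)

pagerank = nx.pagerank(G)
hubs, authorities = nx.hits(G)

def minority_share_topk(scores, k=10):
    top = sorted(scores, key=scores.get, reverse=True)[:k]
    return sum(1 for v in top if v >= sizes[0]) / k  # nodes 80..99 are minority

print("top-10 minority share, PageRank:", minority_share_topk(pagerank))
print("top-10 minority share, HITS:", minority_share_topk(authorities))
\end{verbatim}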
In this work, we introduce DeepIPC, an end-to-end model for autonomous driving that integrates perception and control tasks. Unlike traditional models that handle these tasks separately, DeepIPC combines a perception module, which processes RGBD images for semantic segmentation and generates bird's eye view (BEV) mappings, with a controller module that uses these insights along with GNSS and angular speed measurements to predict navigational waypoints. This integration allows DeepIPC to translate complex environmental data into actionable driving commands. Our comprehensive evaluation demonstrates DeepIPC's superior drivability and multi-task efficiency across diverse real-world scenarios, setting a new benchmark for end-to-end autonomous driving systems while using a leaner model architecture. The experimental results underscore DeepIPC's potential to significantly enhance autonomous vehicular navigation. For further insights and replication, we will make our code and datasets available at https://github.com/oskarnatan/DeepIPC.
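The two-module split described above can be pictured as follows. This is a schematic sketch based only on the description in the abstract; the layer shapes, channel counts, and fusion scheme are our own placeholders, not DeepIPC's actual architecture (see the repository for that).

\begin{verbatim}
import torch
import torch.nn as nn

class Perception(nn.Module):
    """Encodes RGBD input; emits segmentation logits and a BEV map."""
    def __init__(self, n_seg_classes=23):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.seg_head = nn.Conv2d(64, n_seg_classes, 1)
        self.bev_head = nn.Conv2d(64, 1, 1)

    def forward(self, rgbd):                       # rgbd: (B, 4, H, W)
        feats = self.encoder(rgbd)
        return self.seg_head(feats), self.bev_head(feats), feats

class Controller(nn.Module):
    """Fuses perception features with GNSS + angular speed; predicts waypoints."""
    def __init__(self, feat_dim=64, meas_dim=3, hidden=64):
        super().__init__()
        self.gru = nn.GRUCell(feat_dim + meas_dim, hidden)
        self.waypoint_head = nn.Linear(hidden, 2)  # next (x, y) waypoint

    def forward(self, feats, measurements, h):
        pooled = feats.mean(dim=(2, 3))            # global average pool
        h = self.gru(torch.cat([pooled, measurements], dim=1), h)
        return self.waypoint_head(h), h
\end{verbatim}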
Recommendation algorithms play a pivotal role in shaping our media choices, which makes it crucial to understand their long-term impact on user behavior. These algorithms are often linked to two critical outcomes: homogenization, wherein users consume similar content despite disparate underlying preferences, and the filter bubble effect, wherein individuals with differing preferences consume only content aligned with their own preferences (with little overlap with other users). Prior research assumes a trade-off between homogenization and filter bubble effects and then shows that personalized recommendations mitigate filter bubbles by fostering homogenization. However, this assumed trade-off prevents prior work from developing a more nuanced view of how recommendation systems may independently affect homogenization and filter bubbles. We develop more refined definitions of homogenization and the filter bubble effect by decomposing them into two key metrics: how different the average consumption is between users (inter-user diversity) and how varied an individual's consumption is (intra-user diversity). We then use a novel agent-based simulation framework that enables a holistic view of the impact of recommendation systems on homogenization and filter bubble effects. Our simulations show that traditional recommendation algorithms (based on past behavior) mainly reduce filter bubbles by affecting inter-user diversity without significantly impacting intra-user diversity. Building on these findings, we introduce two new recommendation algorithms that take a more nuanced approach by accounting for both types of diversity.
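One plausible way to operationalize the two metrics, assuming items are represented as embedding vectors, is sketched below; the metric names come from the abstract, while the distance-based formulas are our own and the paper's exact definitions may differ.

\begin{verbatim}
import numpy as np

def intra_user_diversity(histories):
    """Mean pairwise item distance within each user's consumption."""
    vals = []
    for H in histories:                  # H: (n_items, d) embeddings for one user
        d = np.linalg.norm(H[:, None] - H[None, :], axis=-1)
        vals.append(d[np.triu_indices(len(H), k=1)].mean())
    return float(np.mean(vals))

def inter_user_diversity(histories):
    """Mean pairwise distance between users' average consumption vectors."""
    means = np.stack([H.mean(axis=0) for H in histories])
    d = np.linalg.norm(means[:, None] - means[None, :], axis=-1)
    return float(d[np.triu_indices(len(means), k=1)].mean())

rng = np.random.default_rng(0)
histories = [rng.normal(u, 1.0, size=(50, 8)) for u in range(4)]
print(intra_user_diversity(histories), inter_user_diversity(histories))
\end{verbatim}

Under this reading, a filter bubble corresponds to high inter-user diversity with low intra-user diversity, while homogenization corresponds to low inter-user diversity.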
In this paper, we study a new problem, Film Removal (FR), which attempts to remove the interference of wrinkled transparent films and reconstruct the original information under the film for industrial recognition systems. We first physically model the imaging of industrial materials covered by the film. Since the specular highlight from the film can be effectively recorded by a polarization camera, we build a practical dataset with polarization information containing paired data with and without the transparent film. We aim to remove the film's interference (specular highlights and other degradations) with an end-to-end framework. To locate the specular highlight, we use an angle estimation network to find the polarization angle that minimizes the specular highlight. The image with the minimized specular highlight then serves as a prior for the reconstruction network. Based on this prior and the polarized images, the reconstruction network can decouple all degradations caused by the film. Extensive experiments show that our framework achieves SOTA performance in both image reconstruction and industrial downstream tasks. Our code will be released at \url{https://github.com/jqtangust/FilmRemoval}.
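For intuition about the polarization prior: the four captures of a division-of-focal-plane polarization camera (0/45/90/135 degrees) determine the image at any analyzer angle through the Stokes parameters, so a minimal-highlight angle can in principle be found by search. The sketch below does this with a grid search and a crude percentile-based highlight score; the paper instead learns the angle with a network, and the score function here is our own assumption.

\begin{verbatim}
import numpy as np

def stokes(i0, i45, i90, i135):
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90                        # 0/90 linear polarization component
    s2 = i45 - i135                      # 45/135 linear polarization component
    return s0, s1, s2

def image_at_angle(s0, s1, s2, theta):
    """Synthesize the image seen through an analyzer at angle theta."""
    return 0.5 * (s0 + s1 * np.cos(2 * theta) + s2 * np.sin(2 * theta))

def min_highlight_angle(i0, i45, i90, i135, n_angles=180):
    s0, s1, s2 = stokes(i0, i45, i90, i135)
    thetas = np.linspace(0.0, np.pi, n_angles, endpoint=False)
    # crude highlight score: 99th-percentile pixel intensity
    scores = [np.percentile(image_at_angle(s0, s1, s2, t), 99) for t in thetas]
    return thetas[int(np.argmin(scores))]
\end{verbatim}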
In this paper, we propose new techniques for solving geometric optimization problems involving interpoint distances of a point set in the plane. Given a set $P$ of $n$ points in the plane and an integer $1 \leq k \leq \binom{n}{2}$, the distance selection problem is to find the $k$-th smallest interpoint distance among all pairs of points of $P$. The previously best deterministic algorithm solves the problem in $O(n^{4/3} \log^2 n)$ time [Katz and Sharir, SIAM J. Comput. 1997 and SoCG 1993]. In this paper, we improve their algorithm to $O(n^{4/3} \log n)$ time. Using similar techniques, we also give improved algorithms for both the two-sided and the one-sided discrete Fr\'{e}chet distance with shortcuts problems for two point sets in the plane. For the two-sided problem (resp., one-sided problem), we improve the previous work [Avraham, Filtser, Kaplan, Katz, and Sharir, ACM Trans. Algorithms 2015 and SoCG 2014] by a factor of roughly $\log^2(m+n)$ (resp., $(m+n)^{\epsilon}$), where $m$ and $n$ are the sizes of the two input point sets, respectively. Other problems whose solutions can be improved by our techniques include the reverse shortest path problems for unit-disk graphs. Our techniques are quite general, and we believe they will find many other applications in the future.
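For reference, distance selection is trivial in $O(n^2 \log n)$ time by sorting all pairwise distances; the point of the paper is to beat this quadratic barrier. A naive baseline (not the paper's algorithm) is still useful as a correctness oracle:

\begin{verbatim}
import math
from itertools import combinations

def kth_smallest_distance(points, k):
    """Naive O(n^2 log n) distance selection: sort all pair distances."""
    dists = sorted(math.dist(p, q) for p, q in combinations(points, 2))
    return dists[k - 1]            # k is 1-indexed, 1 <= k <= n(n-1)/2

pts = [(0, 0), (3, 0), (0, 4), (3, 4)]
print(kth_smallest_distance(pts, 1))   # 3.0, the closest-pair distance
\end{verbatim}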
This paper presents a comprehensive survey of research on form understanding in the context of scanned documents. We delve into recent advancements and breakthroughs in the field, highlighting the significance of language models and transformers in solving this challenging task. Our research methodology involves an in-depth analysis of popular document and form understanding approaches and trends over the last decade, enabling us to offer valuable insights into the evolution of this domain. Focusing on cutting-edge models, we showcase how transformers have propelled the field forward, revolutionizing form understanding techniques. Our exploration includes an extensive examination of state-of-the-art language models designed to effectively tackle the complexities of noisy scanned documents. Furthermore, we present an overview of the latest and most relevant datasets, which serve as essential benchmarks for evaluating the performance of the selected models. By comparing and contrasting the capabilities of these models, we aim to provide researchers and practitioners with useful guidance in choosing the most suitable solutions for their specific form understanding tasks.
In this article, we study the relationship between notions of depth for sequences, namely Bennett's notions of strong and weak depth, and deep $\Pi^0_1$ classes, introduced by the authors and motivated by previous work of Levin. As the first main result of the study, we show that every member of a deep $\Pi^0_1$ class is order-deep, a property that implies strong depth. From this result, we obtain new examples of strongly deep sequences based on properties studied in computability theory and algorithmic randomness. We further show that not every strongly deep sequence is a member of a deep $\Pi^0_1$ class. For the second main result, we show that the collection of strongly deep sequences is negligible, i.e., the probability that a randomly chosen oracle computes a strongly deep sequence is 0, a property also shared by every deep $\Pi^0_1$ class. Finally, we show that variants of strong depth, given in terms of a priori complexity and monotone complexity, are equivalent to weak depth.
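For orientation, one common formulation of Bennett's strong depth (the paper also considers variants based on a priori and monotone complexity, so its exact version may differ) says that a sequence $X$ is strongly deep if, for every computable time bound $t$ and every constant $c$,
\[
K^{t}(X \upharpoonright n) - K(X \upharpoonright n) \geq c \quad \text{for all but finitely many } n,
\]
where $K$ is prefix-free Kolmogorov complexity and $K^{t}$ its $t$-time-bounded version. Intuitively, deep sequences are compressible in principle but not within any computable time bound.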
Current studies on human locomotion focus mainly on solid-ground walking conditions. In this paper, we present a biomechanical comparison of human walking locomotion on solid ground and sand. We collect a novel dataset containing 3-dimensional motion and biomechanical data from 20 able-bodied adults for locomotion on solid ground and sand. We describe the data collection methods and report the sensor data along with the kinematic and kinetic profiles of joint biomechanics, followed by a comprehensive analysis of human gait and joint stiffness profiles. The kinematic and kinetic analysis reveals that walking on sand produces different ground reaction forces and joint torque profiles than walking on solid ground. These gait differences indicate that humans adapt their motion control strategies to yielding terrain such as sand. The dataset also provides a source of locomotion data for researchers studying human activity recognition and assistive devices for walking on different terrains.
Reasoning, a crucial ability for complex problem-solving, plays a pivotal role in various real-world settings such as negotiation, medical diagnosis, and criminal investigation, and it serves as a fundamental methodology in the field of Artificial General Intelligence (AGI). With the ongoing development of foundation models, e.g., Large Language Models (LLMs), there is growing interest in exploring their abilities on reasoning tasks. In this paper, we introduce seminal foundation models proposed or adaptable for reasoning, highlighting the latest advancements in various reasoning tasks, methods, and benchmarks. We then delve into potential future directions for understanding the emergence of reasoning abilities in foundation models. We also discuss the relevance of multimodal learning, autonomous agents, and superalignment in the context of reasoning. By discussing these future research directions, we hope to inspire researchers to explore this field, stimulate further advancements in reasoning with foundation models, and contribute to the development of AGI.
In this paper, we propose a novel Feature Decomposition and Reconstruction Learning (FDRL) method for effective facial expression recognition. We view expression information as the combination of shared information (expression similarities) across different expressions and unique information (expression-specific variations) for each expression. More specifically, FDRL consists of two crucial networks: a Feature Decomposition Network (FDN) and a Feature Reconstruction Network (FRN). FDN first decomposes the basic features extracted from a backbone network into a set of facial action-aware latent features to model expression similarities. FRN then captures the intra-feature and inter-feature relationships among the latent features to characterize expression-specific variations, and reconstructs the expression feature. To this end, FRN comprises two modules: an intra-feature relation modeling module and an inter-feature relation modeling module. Experimental results on both in-the-lab databases (including CK+, MMI, and Oulu-CASIA) and in-the-wild databases (including RAF-DB and SFEW) show that the proposed FDRL method consistently achieves higher recognition accuracy than several state-of-the-art methods. This clearly highlights the benefit of feature decomposition and reconstruction for classifying expressions.
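A schematic reading of the FDN/FRN pipeline is sketched below. The latent count, dimensions, and the attention-based inter-feature module are our own placeholders chosen to illustrate the decompose-weigh-relate-reconstruct flow, not the paper's exact design.

\begin{verbatim}
import torch
import torch.nn as nn

class FDRLSketch(nn.Module):
    def __init__(self, feat_dim=512, n_latents=9, latent_dim=64, n_classes=7):
        super().__init__()
        # FDN: decompose backbone features into facial action-aware latents
        self.decompose = nn.ModuleList(
            nn.Linear(feat_dim, latent_dim) for _ in range(n_latents))
        # FRN: weigh each latent (intra) and relate latents to each other (inter)
        self.intra_weight = nn.Sequential(nn.Linear(latent_dim, 1), nn.Sigmoid())
        self.inter = nn.MultiheadAttention(latent_dim, num_heads=4,
                                           batch_first=True)
        self.classifier = nn.Linear(latent_dim, n_classes)

    def forward(self, x):                      # x: (B, feat_dim) backbone features
        latents = torch.stack([f(x) for f in self.decompose], dim=1)  # (B, L, D)
        latents = latents * self.intra_weight(latents)   # intra-feature importance
        related, _ = self.inter(latents, latents, latents)  # inter-feature relations
        expression_feat = related.mean(dim=1)  # reconstructed expression feature
        return self.classifier(expression_feat)
\end{verbatim}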