In the MAXSAT problem, we are given a set $V$ of $m$ variables and a collection $C$ of $n$ clauses over $V$. We seek a truth assignment that maximizes the number of satisfied clauses. This problem is $\textit{NP}$-hard even for its restricted version, the 2-MAXSAT problem, in which every clause contains at most 2 literals. In this paper, we discuss an efficient algorithm to solve this problem. Its worst-case time complexity is bounded by $O(n^2m^3(\log_2 nm)^{\log_2 nm})$. This shows that the 2-MAXSAT problem can be solved in polynomial time.
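To make the problem statement concrete, the following is a minimal brute-force 2-MAXSAT evaluator in Python; it illustrates only the problem definition, not the algorithm proposed in the paper:
\begin{verbatim}
from itertools import product

def max_sat_2(num_vars, clauses):
    """Brute-force 2-MAXSAT: clauses are tuples of nonzero ints,
    where literal +i (resp. -i) means variable i is true (resp. false)."""
    best = 0
    for assignment in product([False, True], repeat=num_vars):
        satisfied = sum(
            any(assignment[abs(l) - 1] == (l > 0) for l in clause)
            for clause in clauses
        )
        best = max(best, satisfied)
    return best

# (x1 or x2), (not x1 or x2), (not x2): at most 2 of the 3 clauses
# can be satisfied simultaneously.
print(max_sat_2(2, [(1, 2), (-1, 2), (-2,)]))  # -> 2
\end{verbatim}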
The reachability problem for vector addition systems with states (VASS) has been shown to be \textsc{Ackermann}-complete. For every $k\geq 3$, no completeness result is yet known for the $k$-dimensional VASS reachability problem. It is shown in this paper that the $3$-dimensional VASS reachability problem is in \textsc{Tower}, improving upon the current best upper bound $\mathbf{F}_7$ established by Leroux and Schmitz in 2019.
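For intuition, here is a minimal Python sketch of VASS reachability via bounded breadth-first search over configurations; since the counters are unbounded in general, capping them makes this only an illustration for small instances, not the paper's technique:
\begin{verbatim}
from collections import deque

def vass_reach_bounded(transitions, src, dst, bound):
    """Bounded BFS over VASS configurations (state, counter tuple).
    transitions: list of (state, delta_vector, next_state) triples.
    Counters must stay nonnegative; each counter is capped at `bound`,
    so False only means "not reachable within this bound"."""
    seen = {src}
    queue = deque([src])
    while queue:
        state, counters = queue.popleft()
        if (state, counters) == dst:
            return True
        for q, delta, q2 in transitions:
            if q != state:
                continue
            new = tuple(c + d for c, d in zip(counters, delta))
            if all(0 <= c <= bound for c in new) and (q2, new) not in seen:
                seen.add((q2, new))
                queue.append((q2, new))
    return False

# A 1-state, 2-counter VASS that moves counter 0 into counter 1.
trans = [('a', (-1, 1), 'a')]
print(vass_reach_bounded(trans, ('a', (3, 0)), ('a', (0, 3)), bound=10))  # True
\end{verbatim}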
The Bin Packing Problem is a classic problem with wide industrial applicability. In fact, the efficient packing of items into bins is one of the toughest challenges for many logistics companies and is a critical issue for reducing storage costs or improving vehicle space allocation. In this work, we build on our previously published quantum-classical framework, Q4RealBPP, and elaborate on solving real-world-oriented instances of the Bin Packing Problem. To this end, this paper focuses on the following characteristics: i) the existence of heterogeneous bins, ii) the extension of the framework to solve not only three-dimensional, but also one- and two-dimensional instances of the problem, iii) requirements for item-bin associations, and iv) delivery priorities. All of these features are tested in this paper, demonstrating the ability of Q4RealBPP to solve real-world-oriented instances.
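For readers unfamiliar with the underlying combinatorial problem, the following is a minimal Python sketch of the classical first-fit-decreasing heuristic for the one-dimensional case; it illustrates the problem only and is unrelated to the quantum-classical solver inside Q4RealBPP:
\begin{verbatim}
def first_fit_decreasing(items, capacity):
    """Classical FFD heuristic for 1-D bin packing: sort items by
    decreasing size, place each in the first bin with enough room."""
    bins = []        # remaining capacity of each open bin
    assignment = []  # bin index per item, in sorted order
    for size in sorted(items, reverse=True):
        for i, room in enumerate(bins):
            if size <= room:
                bins[i] -= size
                assignment.append(i)
                break
        else:
            bins.append(capacity - size)
            assignment.append(len(bins) - 1)
    return len(bins), assignment

print(first_fit_decreasing([4, 8, 1, 4, 2, 1], capacity=10))  # 2 bins
\end{verbatim}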
Given a matrix $M\in \mathbb{R}^{m\times n}$, the low-rank matrix completion problem asks us to find a rank-$k$ approximation of $M$ as $UV^\top$ for $U\in \mathbb{R}^{m\times k}$ and $V\in \mathbb{R}^{n\times k}$ by observing only a few entries, specified by a set $\Omega\subseteq [m]\times [n]$. In particular, we examine an approach that is widely used in practice: the alternating minimization framework. Jain, Netrapalli and Sanghavi~\cite{jns13} showed that if $M$ has incoherent rows and columns, then alternating minimization provably recovers the matrix $M$ from a number of observed entries that is nearly linear in $n$. While the sample complexity has been subsequently improved~\cite{glz17}, alternating minimization steps are required to be computed exactly. This hinders the development of more efficient algorithms and fails to depict the practical implementation of alternating minimization, where the updates are usually performed approximately in favor of efficiency. In this paper, we take a major step towards a more efficient and error-robust alternating minimization framework. To this end, we develop an analytical framework for alternating minimization that can tolerate a moderate amount of error caused by approximate updates. Moreover, our algorithm runs in time $\widetilde O(|\Omega| k)$, which is nearly linear in the time needed to verify the solution, while preserving the sample complexity. This improves upon all previously known alternating minimization approaches, which require $\widetilde O(|\Omega| k^2)$ time.
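The following toy Python sketch shows the exact alternating minimization loop referred to above, with each step solved as a row-wise least-squares problem over the observed entries; the paper's contribution, tolerating approximate updates, is not implemented here:
\begin{verbatim}
import numpy as np

def altmin_complete(M, mask, k, iters=50, seed=0):
    """Toy alternating minimization for matrix completion.
    mask: boolean array, True where M is observed. Each half-step
    solves a least-squares problem restricted to observed entries."""
    rng = np.random.default_rng(seed)
    m, n = M.shape
    U = rng.standard_normal((m, k))
    V = rng.standard_normal((n, k))
    for _ in range(iters):
        for i in range(m):                 # update U with V fixed
            obs = mask[i]
            if obs.any():
                U[i], *_ = np.linalg.lstsq(V[obs], M[i, obs], rcond=None)
        for j in range(n):                 # update V with U fixed
            obs = mask[:, j]
            if obs.any():
                V[j], *_ = np.linalg.lstsq(U[obs], M[obs, j], rcond=None)
    return U, V

# Rank-1 ground truth with roughly 60% of entries observed.
rng = np.random.default_rng(1)
M = np.outer(rng.standard_normal(20), rng.standard_normal(15))
mask = rng.random(M.shape) < 0.6
U, V = altmin_complete(M, mask, k=1)
print(np.linalg.norm(U @ V.T - M) / np.linalg.norm(M))  # near zero
\end{verbatim}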
I study a general revenue management problem in which $ n $ customers arrive sequentially over $ n $ periods, and you must dynamically decide which of them to satisfy. Satisfying the period-$ t $ customer yields utility $ u_{t} \in \mathbb{R}_{+} $ and decreases your inventory holdings by $ A_{t} \in \mathbb{R}_{+}^{M} $. The customer vectors, $ (u_{t}, A_{t}')' $, are i.i.d., with $ u_{t} $ drawn from a finite-mean continuous distribution and $ A_{t} $ drawn from a bounded discrete or continuous distribution. I study this system's regret, which is the additional utility you could obtain if you did not have to make decisions on the fly. I show that if your initial inventory endowment scales linearly with $ n $, then your expected regret is $ \Theta(\log(n)) $ as $ n \rightarrow \infty $. I provide a simple policy that achieves this $ \Theta(\log(n)) $ regret rate. Finally, I extend this result to Arlotto and Gurvich's (2019) multisecretary problem with uniformly distributed secretary valuations.
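As a concrete (and hypothetical) illustration of the setup, not of the paper's policy, the Python simulation below compares a static-threshold policy against the hindsight optimum for a single unit-consumption resource with uniform utilities; a static threshold typically incurs regret growing faster than $\log(n)$, which is exactly the gap an adaptive policy must close:
\begin{verbatim}
import numpy as np

def regret_threshold(n, frac=0.5, trials=200, seed=0):
    """Monte Carlo regret of a static-threshold policy: inventory
    frac*n, unit consumption per accepted customer, u_t ~ U(0,1).
    The offline benchmark serves the top-valued customers in hindsight."""
    rng = np.random.default_rng(seed)
    b = int(frac * n)
    regrets = []
    for _ in range(trials):
        u = rng.random(n)
        offline = np.sort(u)[-b:].sum()      # hindsight optimum
        inv, online = b, 0.0
        for ut in u:                         # accept above a fixed quantile
            if inv > 0 and ut > 1 - frac:
                inv -= 1
                online += ut
        regrets.append(offline - online)
    return np.mean(regrets)

for n in (100, 1000, 10000):
    print(n, round(regret_threshold(n), 2))
\end{verbatim}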
Let $G$ be a finite group given as input by its multiplication table. For a subset $S$ of $G$ and an element $g\in G$, the Cayley Group Membership Problem (denoted CGM) is to check whether $g$ belongs to the subgroup generated by $S$. While this problem is easily seen to be in polynomial time, pinpointing its parallel complexity has been of research interest over the years. In this paper we further explore the parallel complexity of the abelian CGM problem, with a focus on the dynamic setting: the generating set $S$ changes with insertions and deletions, and the goal is to maintain a data structure that supports efficient membership queries to the subgroup $\langle S\rangle$. We obtain the following results: 1. We first consider the more general problem of Monoid Membership. When $G$ is a commutative monoid, we give a deterministic dynamic constant-time parallel algorithm for membership testing that supports $O(1)$ insertions and deletions in each step. 2. Building on the previous result, we show that there is a dynamic randomized constant-time parallel algorithm for abelian CGM that supports polylogarithmically many insertions/deletions to $S$ in each step. 3. If the number of insertions/deletions is at most $O(\log n/\log\log n)$, then we obtain a deterministic dynamic constant-time parallel algorithm for the problem. 4. We obtain analogous results for dynamic abelian Group Isomorphism.
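As background, the static (non-dynamic) membership test is a simple closure computation over the multiplication table; the Python sketch below illustrates it (the dynamic parallel data structures of this paper are substantially more involved):
\begin{verbatim}
def cgm(mult, S, g, identity=0):
    """Cayley Group Membership from a multiplication table:
    mult[a][b] is the product of elements a and b (as indices).
    Computes the subgroup <S> by closure, then tests g in <S>.
    In a finite group, closing {identity} under right-multiplication
    by the generators yields the full generated subgroup."""
    sub = {identity}
    frontier = [identity]
    while frontier:
        x = frontier.pop()
        for s in S:
            y = mult[x][s]
            if y not in sub:
                sub.add(y)
                frontier.append(y)
    return g in sub

# Z_6 under addition: <{2}> = {0, 2, 4}.
mult = [[(a + b) % 6 for b in range(6)] for a in range(6)]
print(cgm(mult, S=[2], g=4), cgm(mult, S=[2], g=3))  # True False
\end{verbatim}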
Given $n$ observations from two balanced classes, consider the task of labeling an additional $m$ inputs that are known to all belong to \emph{one} of the two classes. Special cases of this problem are well known: with complete knowledge of the class distributions ($n=\infty$) the problem is solved optimally by the likelihood-ratio test; when $m=1$ it corresponds to binary classification; and when $m\approx n$ it is equivalent to two-sample testing. The intermediate settings occur in the field of likelihood-free inference, where labeled samples are obtained by running forward simulations and the unlabeled sample is collected experimentally. In recent work it was discovered that there is a fundamental trade-off between $m$ and $n$: increasing the size $m$ of the unlabeled sample reduces the amount $n$ of training/simulation data needed. In this work we (a) introduce a generalization where unlabeled samples come from a mixture of the two classes -- a case often encountered in practice; (b) study the minimax sample complexity for non-parametric classes of densities under \textit{maximum mean discrepancy} (MMD) separation; and (c) investigate the empirical performance of kernels parameterized by neural networks on two tasks: detection of the Higgs boson and detection of planted DDPM-generated images amidst CIFAR-10 images. For both problems we confirm the existence of the theoretically predicted asymmetric $m$ vs $n$ trade-off.
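For reference, the separation measure in (b) is the kernel MMD; below is a small Python sketch of the standard unbiased estimator of squared MMD with a Gaussian kernel (in (c), this fixed kernel would be replaced by one parameterized by a neural network):
\begin{verbatim}
import numpy as np

def gauss_kernel(X, Y, sigma=1.0):
    """Gaussian kernel matrix k(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mmd2_unbiased(X, Y, sigma=1.0):
    """Standard unbiased estimator of squared MMD between samples X, Y."""
    n, m = len(X), len(Y)
    Kxx = gauss_kernel(X, X, sigma)
    Kyy = gauss_kernel(Y, Y, sigma)
    Kxy = gauss_kernel(X, Y, sigma)
    return ((Kxx.sum() - np.trace(Kxx)) / (n * (n - 1))
            + (Kyy.sum() - np.trace(Kyy)) / (m * (m - 1))
            - 2.0 * Kxy.mean())

rng = np.random.default_rng(0)
X0 = rng.normal(0.0, 1.0, size=(400, 2))
X1 = rng.normal(0.0, 1.0, size=(400, 2))  # same distribution as X0
Y  = rng.normal(0.5, 1.0, size=(400, 2))  # mean-shifted distribution
print(mmd2_unbiased(X0, X1), mmd2_unbiased(X0, Y))  # ~0 vs. clearly positive
\end{verbatim}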
We study the problem of allocating divisible resources among $n$ agents, hopefully in a fair and efficient manner. In the presence of strategic agents, additional incentive guarantees are also necessary, and the problem of designing fair and efficient mechanisms becomes much less tractable. While the maximum Nash welfare (MNW) mechanism has been shown to provide desirable fairness and efficiency guarantees as well as other intuitive properties, no incentive property is known for it. We show the surprising result that, when agents have piecewise constant value density functions, the incentive ratio of the MNW mechanism for cake cutting is $2$, where the incentive ratio of a mechanism is defined as the ratio between the largest possible utility that an agent can gain by manipulation and his utility under honest behavior. Remarkably, this result holds even without the free disposal assumption, which is hard to get rid of in the design of truthful cake cutting mechanisms. We also show that the MNW mechanism is group strategyproof when agents have piecewise uniform value density functions. Moreover, we show that, for cake cutting, the Partial Allocation (PA) mechanism proposed by Cole et al., which is truthful and $1/e$-MNW for homogeneous divisible items, has an incentive ratio in $[e^{1/e}, e]$ and, when randomization is allowed, can be made truthful in expectation. Given the two alternatives for a trade-off between incentive ratio and Nash welfare provided by the MNW and PA mechanisms, we establish an interpolation between them for both cake cutting and homogeneous divisible items. Finally, we study the existence of fair mechanisms with a low incentive ratio in the connected pieces setting. We show that any envy-free cake cutting mechanism with the connected pieces constraint has an incentive ratio of at least $\Omega(n)$.
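For homogeneous divisible items with additive utilities, the MNW allocation is the solution of the Eisenberg--Gale convex program; the following is a minimal numerical sketch using a generic solver, illustrating only the objective, not any mechanism from this paper:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def mnw_allocation(V):
    """Max Nash welfare for homogeneous divisible items with additive
    utilities (Eisenberg-Gale program): V[i, j] = agent i's value for
    item j; returns x[i, j] = fraction of item j given to agent i."""
    n, m = V.shape
    def neg_log_nash(x_flat):
        x = x_flat.reshape(n, m)
        util = (x * V).sum(axis=1)
        return -np.log(np.clip(util, 1e-12, None)).sum()
    # each item must be fully allocated
    cons = [{"type": "eq",
             "fun": lambda x, j=j: x.reshape(n, m)[:, j].sum() - 1}
            for j in range(m)]
    x0 = np.full(n * m, 1.0 / n)
    res = minimize(neg_log_nash, x0, bounds=[(0, 1)] * (n * m),
                   constraints=cons, method="SLSQP")
    return res.x.reshape(n, m)

V = np.array([[2.0, 1.0], [1.0, 2.0]])
print(mnw_allocation(V).round(3))  # each agent gets the item they value more
\end{verbatim}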
Chain-of-Thought (CoT) prompting has shown promising performance on various reasoning tasks. Recently, Self-Consistency \citep{wang2023selfconsistency} proposed sampling a diverse set of reasoning chains that may lead to different answers and selecting the answer that receives the most votes. In this paper, we propose a novel method that uses backward reasoning to verify candidate answers. We mask a token in the question with ${\bf x}$ and ask the LLM to predict the masked token when a candidate answer is provided by \textit{a simple template}, i.e., ``\textit{\textbf{If we know the answer of the above question is \{a candidate answer\}, what is the value of unknown variable ${\bf x}$?}}'' Intuitively, the LLM is expected to predict the masked token successfully if the provided candidate answer is correct. We further propose FOBAR, which combines forward and backward reasoning to estimate the probability of candidate answers. We conduct extensive experiments on six datasets and three LLMs. Experimental results demonstrate that FOBAR achieves state-of-the-art performance on various reasoning benchmarks.
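The Python sketch below illustrates how forward (self-consistency) votes and backward verification might be combined in the spirit of FOBAR; the \texttt{llm} interface, the mock model, and the combination rule are all placeholders and simplifications, not the authors' exact estimator:
\begin{verbatim}
from collections import Counter

def fobar_style_answer(llm, question, masked_question, masked_value, n=16):
    """Combine forward and backward reasoning to pick an answer.
    `llm(prompt, n)` is a hypothetical interface returning n sampled
    final answers. Forward: self-consistency voting. Backward: given
    each candidate answer, count how often the model recovers the
    masked token x (whose true value `masked_value` we know)."""
    forward_votes = Counter(llm(question, n))
    scores = {}
    for cand, votes in forward_votes.items():
        back_prompt = (masked_question
                       + "\nIf we know the answer of the above question is "
                       + f"{cand}, what is the value of unknown variable x?")
        hits = sum(pred == masked_value for pred in llm(back_prompt, n))
        scores[cand] = (votes / n) * ((hits + 1) / (n + 2))  # simplified
    return max(scores, key=scores.get)

# Toy mock of the LLM interface, for demonstration only.
def mock_llm(prompt, n):
    if "unknown variable x" in prompt:
        return ["7"] * n                  # backward: recovers the mask
    return ["12"] * (n - 1) + ["15"]      # forward: majority answer 12

q = "Tom has 7 apples and buys 5 more. How many does he have?"
mq = "Tom has x apples and buys 5 more. How many does he have?"
print(fobar_style_answer(mock_llm, q, mq, "7", n=8))  # -> 12
\end{verbatim}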
Deep Learning (DL) is vulnerable to out-of-distribution and adversarial examples, which result in incorrect outputs. To make DL more robust, several post-hoc anomaly detection techniques that detect (and discard) these anomalous samples have been proposed in the recent past. This survey provides a structured and comprehensive overview of the research on anomaly detection for DL-based applications. We provide a taxonomy of existing techniques based on their underlying assumptions and adopted approaches. We discuss the techniques in each category and describe the relative strengths and weaknesses of the approaches. Our goal in this survey is to make the techniques in the different categories easier to understand for readers working on this topic. Finally, we highlight the unsolved research challenges in applying anomaly detection techniques to DL systems and present some high-impact directions for future research.
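As a concrete example of the post-hoc techniques surveyed, a common confidence-based baseline scores a sample by one minus its maximum softmax probability; a minimal Python sketch:
\begin{verbatim}
import numpy as np

def msp_anomaly_scores(logits):
    """Maximum-softmax-probability baseline: a low top-class confidence
    flags a sample as anomalous. logits: (batch, classes) array."""
    z = logits - logits.max(axis=1, keepdims=True)       # numerical stability
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return 1.0 - probs.max(axis=1)                       # higher = more anomalous

logits = np.array([[8.0, 0.5, 0.2],    # confident -> likely in-distribution
                   [1.1, 1.0, 0.9]])   # diffuse   -> likely anomalous
print(msp_anomaly_scores(logits).round(3))
\end{verbatim}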
The problem of Multiple Object Tracking (MOT) consists of following the trajectories of different objects in a sequence, usually a video. In recent years, with the rise of Deep Learning, the algorithms that provide a solution to this problem have benefited from the representational power of deep models. This paper provides a comprehensive survey of works that employ Deep Learning models to solve the task of MOT on single-camera videos. Four main steps in MOT algorithms are identified, and an in-depth review of how Deep Learning has been employed at each of these stages is presented. A complete experimental comparison of the presented works on the three MOTChallenge datasets is also provided, identifying a number of similarities among the top-performing methods and suggesting some possible future research directions.
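As an illustration of the association step reviewed in such pipelines, a common baseline matches existing tracks to new detections with the Hungarian algorithm over an IoU affinity matrix; a minimal Python sketch (assuming SciPy is available):
\begin{verbatim}
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def associate(tracks, detections, min_iou=0.3):
    """Hungarian matching of tracks to detections using IoU affinity;
    pairs below min_iou are left unmatched."""
    cost = np.array([[1 - iou(t, d) for d in detections] for t in tracks])
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if 1 - cost[r, c] >= min_iou]

tracks = [(0, 0, 10, 10), (20, 20, 30, 30)]
dets = [(21, 19, 31, 29), (1, 1, 11, 11)]
print(associate(tracks, dets))  # [(0, 1), (1, 0)]
\end{verbatim}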