We discuss how to implement backjumping (or intelligent backtracking) in Prolog by using the built-ins throw/1 and catch/3. We show that this is impossible in the general case, contrary to the claim that ``backjumping is exception handling''. We provide two solutions. One works for binary programs; in the general case it imposes a restriction on where backjumping may originate. The other restricts the class of backjump targets. We also discuss implementing backjumping by means of backtracking and the Prolog database. Additionally, we explain the semantics of Prolog exception handling in the presence of coroutining.
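To make the idea concrete, here is a minimal, hypothetical Python sketch of backjumping realized as exception handling: a failure raises a `Jump` naming the choice point to blame, and every level re-raises jumps not addressed to it. The `culprit_level` oracle and the list-of-domains encoding are illustrative assumptions, not the paper's construction; indeed, the paper's point is that a direct transliteration into Prolog's catch/3, which prunes choice points, does not work in general.

```python
class Jump(Exception):
    """Like throw/1: 'fail all the way back to choice point `target`'."""
    def __init__(self, target):
        self.target = target

def search(level, partial, domains, culprit_level):
    """Depth-first search with backjumping; domains[i] lists candidate values.
    A Jump(-1) escaping the top-level call signals overall failure."""
    if level == len(domains):
        return partial                                 # a complete solution
    for value in domains[level]:
        try:
            blame = culprit_level(partial + [value])   # None, or a level to blame
            if blame is not None:
                raise Jump(blame)
            return search(level + 1, partial + [value], domains, culprit_level)
        except Jump as jump:
            if jump.target < level:    # not addressed to us: keep unwinding,
                raise                  # like catch/3 re-throwing the ball
            # jump.target == level: resume here with the next value
    raise Jump(level - 1)              # exhausted: plain backtracking step
```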
Virtual reality (VR) telepresence applications and the so-called "metaverse" promise to be the next major medium of interaction with the internet. However, with numerous recent studies showing the ease with which VR users can be profiled, deanonymized, and have their data harvested, metaverse platforms carry all the privacy risks of the current internet, and more, while at present offering none of the defensive privacy tools we are accustomed to using on the web. To remedy this, we present the first known method of implementing an "incognito mode" for VR. Our technique leverages local differential privacy to quantifiably obscure sensitive user data attributes, with a focus on intelligently adding noise when and where it is needed most so as to maximize privacy while minimizing usability impact. Moreover, our system can flexibly adapt to the unique needs of each metaverse application to further optimize this trade-off. We implement our solution as a universal Unity (C#) plugin that we then evaluate using several popular VR applications. Upon faithfully replicating the best-known VR privacy attack studies, we show a significant degradation of attacker capabilities when our proposed solution is in use.
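As a flavor of the underlying primitive, the following is a minimal Python sketch of a bounded Laplace release, the standard local-differential-privacy mechanism for a scalar attribute. The attribute, its bounds, and the epsilon value are illustrative assumptions; the actual plugin is written in C# and adapts the noise per application.

```python
import numpy as np

def ldp_release(value, lo, hi, epsilon, rng=None):
    """Release a bounded scalar attribute (e.g. a user's height in metres)
    under epsilon-local differential privacy via the Laplace mechanism."""
    rng = rng or np.random.default_rng()
    value = float(np.clip(value, lo, hi))      # enforce the promised bounds
    sensitivity = hi - lo                      # any two inputs differ by <= this
    noisy = value + rng.laplace(0.0, sensitivity / epsilon)
    return float(np.clip(noisy, lo, hi))       # clipping is post-processing: DP-safe

# e.g. releasing a height of 1.75 m within [1.4, 2.2] at epsilon = 1
print(ldp_release(1.75, 1.4, 2.2, epsilon=1.0))
```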
In the Euclidean $k$-TSP (resp. Euclidean $k$-MST), we are given $n$ points in the $d$-dimensional Euclidean space (for any fixed constant $d\geq 2$) and a positive integer $k$; the goal is to find a shortest tour visiting at least $k$ points (resp. a minimum-weight tree spanning at least $k$ points). We give approximation schemes for both Euclidean $k$-TSP and Euclidean $k$-MST in time $2^{O(1/\varepsilon^{d-1})}\cdot n \cdot(\log n)^{d\cdot 4^{d}}$. This improves the running time of the previous approximation schemes due to Arora [J. ACM 1998] and Mitchell [SICOMP 1999]. Our algorithms can be derandomized by increasing the running time by a factor of $O(n^d)$. In addition, our algorithm for Euclidean $k$-TSP is Gap-ETH tight, given the matching Gap-ETH lower bound due to Kisfaludi-Bak, Nederlof, and W\k{e}grzycki [FOCS 2021].
We consider functional data in which an underlying smooth curve is contaminated not only by errors but also by irregular spikes. We propose an approach that combines regularized spline smoothing with an Expectation-Maximization (EM) algorithm, allowing one both to identify spikes and to estimate the smooth component. Under some assumptions on the error distribution, we prove consistency of the EM estimates. Next, we demonstrate through simulations the performance of our proposal on finite samples and its robustness to violations of these assumptions. Finally, we apply our proposal to data on an annual heatwave index in the US and on weekly electricity consumption in Ireland. In both datasets, we are able to characterize the underlying smooth trends and to pinpoint irregular/extreme behaviors.
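A schematic Python sketch of one plausible reading of the procedure, assuming a two-component Gaussian mixture for the errors (a narrow regular component and a wide spike component); the paper's actual model, penalty, and update equations may differ.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def em_spike_spline(t, y, n_iter=50, s=None):
    """Alternate an E-step (posterior spike probabilities of the residuals)
    with an M-step (weighted spline refit and variance updates).
    t must be strictly increasing."""
    p, sig2, tau2 = 0.05, 0.1 * np.var(y), 10.0 * np.var(y)
    f = UnivariateSpline(t, y, s=s)(t)            # initial smooth fit
    w_spike = np.zeros_like(y)
    for _ in range(n_iter):
        r = y - f
        # E-step: which residuals look like spikes (the wide component)?
        d_reg = (1 - p) * np.exp(-r**2 / (2 * sig2)) / np.sqrt(sig2)
        d_spk = p * np.exp(-r**2 / (2 * tau2)) / np.sqrt(tau2)
        w_spike = d_spk / (d_reg + d_spk)
        # M-step: refit, downweighting likely spikes; update mixture parameters
        f = UnivariateSpline(t, y, w=1 - w_spike + 1e-8, s=s)(t)
        r = y - f
        p = w_spike.mean()
        sig2 = np.sum((1 - w_spike) * r**2) / np.sum(1 - w_spike)
        tau2 = np.sum(w_spike * r**2) / max(w_spike.sum(), 1e-8)
    return f, w_spike            # smooth component and per-point spike scores
```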
Recently, several studies on the fair allocation of indivisible goods have noted a connection between a purely combinatorial problem called the Rainbow Cycle problem and the fairness notion known as $\efx$: assuming that the rainbow cycle number for parameter $d$ (i.e., $\rainbow(d)$) is $O(d^\beta \log^\gamma d)$, one can find a $(1-\epsilon)$-$\efx$ allocation with $O_{\epsilon}(n^{\frac{\beta}{\beta+1}}\log^{\frac{\gamma}{\beta +1}} n)$ discarded goods \cite{chaudhury2021improving}. The best upper bound on $\rainbow(d)$ has been improved in a series of works to $O(d^4)$ \cite{chaudhury2021improving}, $O(d^{2+o(1)})$ \cite{berendsohn2022fixed}, and finally to $O(d^2)$ \cite{Akrami2022}.\footnote{We refer to the note at the end of the introduction for a short discussion on the result of \cite{Akrami2022}.} Also, a simple observation gives $\rainbow(d) \in \Omega(d)$ \cite{chaudhury2021improving}. In this paper, we introduce another problem in extremal combinatorics. For a parameter $\ell$, we define the rainbow path degree, denoted by $\ech(\ell)$. We show that any lower bound on $\ech(\ell)$ yields an upper bound on $\rainbow(d)$. Next, we prove that $\ech(\ell) \in \Omega(\ell^2/\log n)$, which yields an almost tight upper bound of $\rainbow(d) \in O(d \log d)$. This in turn proves the existence of a $(1-\epsilon)$-$\efx$ allocation with $O_{\epsilon}(\sqrt{n \log n})$ discarded goods. In addition, for the special case of the Rainbow Cycle problem in which the edges in each part form a permutation, we improve the upper bound to $\rainbow(d) \leq 2d-4$; we leverage $\ech(\ell)$ to achieve this bound. We conjecture that the exact value of $\ech(\ell)$ is $\lfloor \frac{\ell^2}{2} \rfloor - 1$, and we provide experiments that support this conjecture. Assuming this conjecture is correct, we have $\rainbow(d) \in \Theta(d)$.
We study the fundamental problem of fairly allocating a set of indivisible goods among $n$ agents with additive valuations, using the desirable fairness notion of maximin share (MMS). MMS is the most popular share-based notion: an agent finds an allocation fair to her if she receives goods worth at least her MMS value, and an allocation is called MMS if all agents receive at least their MMS value. However, since MMS allocations need not exist when $n>2$, a series of works established the existence of approximate MMS allocations, with the current best factor being $\frac{3}{4} + O(\frac{1}{n})$. Recent work by Akrami et al. exposed the limitations of existing approaches and proved that they cannot improve this factor to $3/4 + \Omega(1)$. In this paper, we bypass these barriers and show the existence of $(\frac{3}{4} + \frac{3}{3836})$-MMS allocations by developing new reduction rules and analysis techniques.
The last decade has seen many attempts to generalise the definition of modes, or MAP estimators, of a probability distribution $\mu$ on a space $X$ to the case that $\mu$ has no continuous Lebesgue density, and in particular to infinite-dimensional Banach and Hilbert spaces $X$. This paper examines the properties of and connections among these definitions. We construct a systematic taxonomy -- or `periodic table' -- of modes that includes the established notions as well as large hitherto-unexplored classes. We establish implications between these definitions and provide counterexamples to distinguish them. We also distinguish those definitions that are merely `grammatically correct' from those that are `meaningful' in the sense of satisfying certain `common-sense' axioms for a mode, among them the correct handling of discrete measures and those with continuous Lebesgue densities. However, despite there being 17 such `meaningful' definitions of mode, we show that none of them satisfy the `merging property', under which the modes of $\mu|_{A}$, $\mu|_{B}$ and $\mu|_{A \cup B}$ enjoy a straightforward relationship for well-separated positive-mass events $A,B \subseteq X$.
The edit distance is a fundamental measure of sequence similarity, defined as the minimum number of character insertions, deletions, and substitutions needed to transform one string into the other. Given two strings of length at most $n$, simple dynamic programming computes their edit distance exactly in $O(n^2)$ time, which is also the best possible (up to subpolynomial factors) assuming the Strong Exponential Time Hypothesis (SETH). The last few decades have seen tremendous progress in edit distance approximation, where the runtime has been brought down to subquadratic, near-linear, and even sublinear at the cost of approximation. In this paper, we study the dynamic edit distance problem, where the strings change dynamically as characters are substituted, inserted, or deleted over time. Each update may occur at any location of either string. The goal is to maintain the (exact or approximate) edit distance of such dynamic strings while minimizing the update time. The exact edit distance can be maintained in $\tilde{O}(n)$ time per update (Charalampopoulos, Kociumaka, Mozes; 2020), which is again tight assuming SETH. Unfortunately, even with the unprecedented progress in static edit distance approximation, strikingly little is known about dynamic edit distance approximation. Using off-the-shelf tools, it is possible to achieve an $O(n^{c})$-approximation with $n^{0.5-c+o(1)}$ update time for any constant $c\in [0,\frac16]$; improving upon this trade-off remains open. The contribution of this work is a dynamic $n^{o(1)}$-approximation algorithm with amortized expected update time $n^{o(1)}$. In other words, we bring the product of the approximation ratio and the update time down to $n^{o(1)}$. Our solution utilizes the elegant precision sampling tree framework for edit distance approximation (Andoni, Krauthgamer, Onak; 2010).
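For reference, the quadratic static baseline mentioned above is the textbook dynamic program; this is the quantity the dynamic algorithms maintain under updates, not the paper's method.

```python
def edit_distance(a: str, b: str) -> int:
    """Textbook O(|a|*|b|)-time, O(|b|)-space DP for unit-cost edit distance."""
    prev = list(range(len(b) + 1))          # distances a[:0] -> b[:j]
    for i, ca in enumerate(a, 1):
        curr = [i]                          # distance a[:i] -> b[:0]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # delete ca
                            curr[j - 1] + 1,             # insert cb
                            prev[j - 1] + (ca != cb)))   # substitute or match
        prev = curr
    return prev[-1]

assert edit_distance("kitten", "sitting") == 3
```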
Choice modeling is at the core of many problems in economics, operations, and marketing. In this paper, we propose a fundamental characterization of choice functions that encompasses a wide variety of extant choice models. We demonstrate how nonparametric estimators such as neural networks can easily approximate such functionals and overcome the curse of dimensionality inherent in the nonparametric estimation of choice functions. We demonstrate through extensive simulations that our proposed functionals can flexibly capture underlying consumer behavior in a completely data-driven fashion and outperform traditional parametric models. As demand settings often exhibit endogenous features, we extend our framework to incorporate estimation under endogeneity. Further, we describe a formal inference procedure for constructing valid confidence intervals on objects of interest such as price elasticity. Finally, to assess the practical applicability of our estimator, we use the real-world dataset of Berry, Levinsohn, and Pakes (1995). Our empirical analysis confirms that the estimator produces realistic own- and cross-price elasticities consistent with those reported in the existing literature.
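One simple instance of a learnable choice functional, sketched in PyTorch: a shared MLP maps each offered item's features to a latent utility, and a softmax over the offered assortment yields choice probabilities. The architecture, dimensions, and names here are illustrative assumptions, not the paper's specification.

```python
import torch
import torch.nn as nn

class NeuralChoice(nn.Module):
    """Toy neural choice model: utilities from item features, softmax over
    the offered assortment (essentially a neural multinomial logit)."""
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.utility = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, items: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # items: (batch, n_items, n_features); mask: 1 where the item is offered
        u = self.utility(items).squeeze(-1)            # (batch, n_items)
        u = u.masked_fill(mask == 0, float('-inf'))    # exclude unoffered items
        return torch.softmax(u, dim=-1)                # choice probabilities
```

Training would minimize the negative log-likelihood of the observed choices over offered assortments.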
In this paper, we consider algorithms for edge-coloring multigraphs $G$ of bounded maximum degree, i.e., $\Delta(G) = O(1)$. Shannon's theorem states that any multigraph of maximum degree $\Delta$ can be properly edge-colored with $\lfloor 3\Delta/2\rfloor$ colors. Our main results are algorithms for computing such colorings. We design deterministic and randomized sequential algorithms with running times $O(n\log n)$ and $O(n)$, respectively. This is the first improvement over the $O(n^2)$ algorithm in Shannon's original paper, and our randomized algorithm is optimal up to constant factors. We also develop distributed algorithms in the $\mathsf{LOCAL}$ model of computation, namely deterministic and randomized $\mathsf{LOCAL}$ algorithms with running times $\tilde O(\log^5 n)$ and $O(\log^2 n)$, respectively. The deterministic sequential algorithm is a simplified extension of earlier work of Gabow et al. on edge-coloring simple graphs. The other algorithms apply the entropy compression method in a manner similar to recent work of Bernshteyn and the author, in which algorithms for Vizing's theorem on simple graphs are designed; we also extend those results to Vizing's theorem for multigraphs.
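For contrast with these results, a simple greedy pass already properly edge-colors a multigraph with at most $2\Delta-1$ colors, since each edge conflicts with at most $2(\Delta-1)$ others; Shannon's $\lfloor 3\Delta/2\rfloor$ bound, which the algorithms above achieve, requires genuine recoloring work. A minimal Python sketch of the greedy baseline:

```python
from collections import defaultdict

def greedy_edge_coloring(edges, max_degree):
    """Properly color the edges of a multigraph with at most 2*Delta - 1 colors.
    `edges` is a list of (u, v) pairs; repeats encode parallel edges."""
    used = defaultdict(set)              # vertex -> colors on incident edges
    palette = range(2 * max_degree - 1)
    coloring = {}
    for idx, (u, v) in enumerate(edges):
        # at most 2*(Delta - 1) colors are forbidden, so one is always free
        c = next(c for c in palette if c not in used[u] and c not in used[v])
        coloring[idx] = c
        used[u].add(c)
        used[v].add(c)
    return coloring                      # edge index -> color

# e.g. a triangle with one doubled edge (Delta = 3): at most 5 colors
print(greedy_edge_coloring([(0, 1), (0, 1), (1, 2), (2, 0)], max_degree=3))
```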
With the rapid increase of large-scale, real-world datasets, it becomes critical to address the problem of long-tailed data distribution (i.e., a few classes account for most of the data, while most classes are under-represented). Existing solutions typically adopt class re-balancing strategies such as re-sampling and re-weighting based on the number of observations for each class. In this work, we argue that as the number of samples increases, the additional benefit of a newly added data point will diminish. We introduce a novel theoretical framework to measure data overlap by associating with each sample a small neighboring region rather than a single point. The effective number of samples is defined as the volume of samples and can be calculated by a simple formula $(1-\beta^{n})/(1-\beta)$, where $n$ is the number of samples and $\beta \in [0,1)$ is a hyperparameter. We design a re-weighting scheme that uses the effective number of samples for each class to re-balance the loss, thereby yielding a class-balanced loss. Comprehensive experiments are conducted on artificially induced long-tailed CIFAR datasets and large-scale datasets including ImageNet and iNaturalist. Our results show that when trained with the proposed class-balanced loss, the network is able to achieve significant performance gains on long-tailed datasets.
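To illustrate the formula, here is a minimal Python sketch of the resulting re-weighting; the normalization to sum to the number of classes is one common convention, assumed here.

```python
import numpy as np

def class_balanced_weights(samples_per_class, beta=0.999):
    """Per-class weights from the effective number E_n = (1 - beta**n) / (1 - beta).
    beta -> 0 recovers uniform weights (no re-balancing);
    beta -> 1 recovers inverse class-frequency weighting."""
    n = np.asarray(samples_per_class, dtype=float)
    effective_num = (1.0 - np.power(beta, n)) / (1.0 - beta)
    weights = 1.0 / effective_num
    return weights * len(n) / weights.sum()   # normalize to sum to #classes

# e.g. a long-tailed split: the rarest class receives the largest weight
print(class_balanced_weights([5000, 500, 50]))
```

These weights would then scale the per-sample loss (e.g. cross-entropy) according to each sample's class.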