We present a parameterized dichotomy for the \textsc{$k$-Sparsest Cut} problem in both its weighted and unweighted versions. In particular, we show that the weighted \textsc{$k$-Sparsest Cut} problem is NP-hard for every $k\geq 3$, even on graphs with bounded vertex cover number. Moreover, the unweighted \textsc{$k$-Sparsest Cut} problem is W[1]-hard when parameterized by the combination of tree-depth, feedback vertex set number, and $k$. On the positive side, we show that the unweighted \textsc{$k$-Sparsest Cut} problem is FPT when parameterized by the vertex cover number and $k$, and that for every fixed $k$ it is FPT with respect to treewidth. Finally, we show that the generalized \textsc{$k$-Small-Set Expansion} problem is FPT when parameterized by $k$ and the maximum degree of the graph combined, though it is W[1]-hard for each of these parameters separately.
Constraint satisfaction problems form a nicely behaved class of problems that lends itself to complexity classification results. From the point of view of parameterized complexity, a natural task is to classify the parameterized complexity of MinCSP problems parameterized by the number of unsatisfied constraints. In other words, we ask whether we can delete at most $k$ constraints, where $k$ is the parameter, to get a satisfiable instance. In this work, we take a step towards classifying the parameterized complexity for an important infinite-domain CSP: Allen's interval algebra (IA). This CSP has closed intervals with rational endpoints as domain values and employs a set $A$ of 13 basic comparison relations such as ``precedes'' or ``during'' for relating intervals. IA is a highly influential and well-studied formalism within AI and qualitative reasoning that has numerous applications in, for instance, planning, natural language processing and molecular biology. We provide an FPT vs. W[1]-hard dichotomy for MinCSP$(\Gamma)$ for all $\Gamma \subseteq A$. IA is sometimes extended with unions of the relations in $A$ or first-order definable relations over $A$, but extending our results to these cases would require first solving the parameterized complexity of Directed Symmetric Multicut, which is a notorious open problem. Already in this limited setting, we uncover connections to new variants of graph cut and separation problems. This includes hardness proofs for simultaneous cuts or feedback arc set problems in directed graphs, as well as new tractable cases with algorithms based on the recently introduced flow augmentation technique. Given the intractability of MinCSP$(A)$ in general, we then consider (parameterized) approximation algorithms and present a factor-$2$ fpt-approximation algorithm.
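For readers unfamiliar with IA, the following self-contained Python sketch (illustrative only, not code from the paper) classifies which of the 13 basic relations holds between two non-degenerate closed intervals with rational endpoints; the relation names follow Allen's standard definitions.

```python
from fractions import Fraction

def allen_relation(a, b):
    """Return the basic Allen relation holding between non-degenerate
    closed intervals a = (a1, a2) and b = (b1, b2) with a1 < a2, b1 < b2."""
    a1, a2 = a
    b1, b2 = b
    if a2 < b1: return "precedes"
    if a1 > b2: return "preceded by"
    if a2 == b1: return "meets"
    if a1 == b2: return "met by"
    if a1 == b1 and a2 == b2: return "equals"
    if a1 == b1: return "starts" if a2 < b2 else "started by"
    if a2 == b2: return "finishes" if a1 > b1 else "finished by"
    if b1 < a1 and a2 < b2: return "during"
    if a1 < b1 and b2 < a2: return "contains"
    # Remaining case: the intervals properly overlap.
    return "overlaps" if a1 < b1 else "overlapped by"

print(allen_relation((Fraction(0), Fraction(1)), (Fraction(1), Fraction(3))))  # meets
print(allen_relation((1, 2), (0, 4)))  # during
```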
In the prophet inequality problem, a gambler faces a sequence of items arriving online with values drawn independently from known distributions. On seeing an item, the gambler must choose whether to accept its value as her reward and quit the game, or reject it and continue. The gambler's aim is to maximize her expected reward relative to the expected maximum of the values of all items. Since the seventies, a tight bound of 1/2 has been known for this competitive ratio in the setting where the items arrive in an adversarial order (Krengel and Sucheston, 1977, 1978). However, the optimal ratio remains unknown both in the order selection setting, where the gambler selects the arrival order, and in prophet secretary, where the items arrive in a random order. Moreover, it was not even known whether a separation exists between the two settings. In this paper, we show that the power of order selection allows the gambler to guarantee a strictly better competitive ratio than if the items arrive randomly. For the order selection setting, we identify an instance for which Peng and Tang's (FOCS'22) state-of-the-art algorithm performs no better than their claimed competitive ratio of (approximately) 0.7251, illustrating the need for an improved approach. We therefore extend their design into a more general algorithm design framework and use it to beat their ratio, designing a 0.7258-competitive algorithm. For the random order setting, we improve upon Correa, Saona and Ziliotto's (SODA'19) 0.732-hardness result to show a hardness of 0.7254 for general algorithms, even in the setting where the gambler knows the arrival order beforehand, thus establishing a separation between the order selection and random order settings.
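As background for the competitive ratios discussed above, here is a minimal Monte Carlo sketch of the classical fixed-threshold rule behind the 1/2 bound: accept the first value exceeding the median of the distribution of the maximum. The uniform instance is purely illustrative, and this is not one of the algorithms analyzed in the paper.

```python
import random

# Toy Monte Carlo check of the classical 1/2 guarantee: accept the
# first value exceeding the median of the distribution of max_i X_i.
def simulate(n=5, trials=200_000, seed=0):
    rng = random.Random(seed)
    # Median of the maximum of n i.i.d. U[0,1] values:
    # P(max <= t) = t^n = 1/2  =>  t = 0.5 ** (1/n).
    tau = 0.5 ** (1 / n)
    reward = opt = 0.0
    for _ in range(trials):
        xs = [rng.random() for _ in range(n)]
        opt += max(xs)
        reward += next((x for x in xs if x >= tau), 0.0)
    return reward / opt  # empirical competitive ratio

print(simulate())  # roughly 0.56 on this instance; the worst-case guarantee is 1/2
```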
In the open online dial-a-ride problem, a single server has to deliver transportation requests appearing over time in some metric space, with the objective of minimizing the completion time. We improve on the best known upper bounds on the competitive ratio on general metric spaces and on the half-line, for both the preemptive and non-preemptive version of the problem. We achieve this by revisiting the algorithm $\textsc{Lazy}$, recently suggested in [WAOA, 2022], and giving an improved and tight analysis. More precisely, we show that it has competitive ratio $2.457$ on general metric spaces and $2.366$ on the half-line. This is the first upper bound that beats the known lower bounds of 2.5 holding for schedule-based algorithms as well as for the natural $\textsc{Replan}$ algorithm.
We present a comprehensive study of discrete morphological symmetries of dynamical systems, which are commonly observed in biological and artificial locomoting systems such as legged, swimming, and flying animals/robots/virtual characters. These symmetries arise from the presence of one or more planes/axes of symmetry in the system's morphology, resulting in harmonious duplication and distribution of body parts. Significantly, we characterize how morphological symmetries extend to symmetries in the system's dynamics, in optimal control policies, and in all proprioceptive and exteroceptive measurements related to the evolution of the system's dynamics. In the context of data-driven methods, symmetry represents an inductive bias that justifies the use of data augmentation or symmetric function approximators. To exploit this bias, we present a theoretical and practical framework for identifying the system's morphological symmetry group $\mathcal{G}$ and characterizing the symmetries in proprioceptive and exteroceptive data measurements. We then exploit these symmetries using data augmentation and $\mathcal{G}$-equivariant neural networks. Our experiments on both synthetic and real-world applications provide empirical evidence of the advantages of exploiting these symmetries, including improved sample efficiency, enhanced generalization, and a reduction in the number of trainable parameters.
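Here is a minimal sketch of the data-augmentation side of such a pipeline, assuming the symmetry group is given as finite sets of representation matrices acting on inputs and targets; the toy reflection below is illustrative and is not the paper's identification procedure or API.

```python
import numpy as np

# Minimal sketch of symmetry-based data augmentation: given the
# representation matrices (rho_in[g], rho_out[g]) of a finite group G
# acting on inputs and targets, every sample (x, y) yields one
# augmented sample per group element.  All matrices are illustrative.
def augment(X, Y, rho_in, rho_out):
    Xs = [X @ g.T for g in rho_in]   # transform inputs by each g
    Ys = [Y @ g.T for g in rho_out]  # transform targets consistently
    return np.concatenate(Xs), np.concatenate(Ys)

# Toy reflection symmetry of a 2-joint "robot": swap left/right joints
# and flip their signs (a sagittal-plane reflection, for illustration).
refl = np.array([[0.0, -1.0], [-1.0, 0.0]])
rho = [np.eye(2), refl]  # G = {e, refl}; refl is an involution

X = np.random.default_rng(0).normal(size=(4, 2))
Xa, Ya = augment(X, X.copy(), rho, rho)
print(Xa.shape)  # (8, 2): original samples plus their reflections
```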
Non-stationary multi-armed bandit (NS-MAB) problems have recently received significant attention. NS-MAB problems are typically modelled in two scenarios: abruptly changing, where reward distributions remain constant for a certain period and change at unknown time steps, and smoothly changing, where reward distributions evolve smoothly according to unknown dynamics. In this paper, we propose Discounted Thompson Sampling (DS-TS) with Gaussian priors to address both non-stationary settings. Our algorithm adapts passively to changes by incorporating a discount factor into Thompson Sampling. The DS-TS method had previously been validated experimentally, but an analysis of its regret upper bound was lacking. Under mild assumptions, we show that DS-TS with Gaussian priors achieves a nearly optimal regret bound of order $\tilde{O}(\sqrt{TB_T})$ in the abruptly changing setting and $\tilde{O}(T^{\beta})$ in the smoothly changing setting, where $T$ is the number of time steps, $B_T$ is the number of breakpoints, $\beta$ is a parameter of the smoothly changing environment, and $\tilde{O}$ hides factors independent of $T$ as well as logarithmic terms. Furthermore, empirical comparisons between DS-TS and other non-stationary bandit algorithms demonstrate its competitive performance. In particular, when prior knowledge of the maximum expected reward is available, DS-TS can outperform state-of-the-art algorithms.
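The following is a minimal sketch of the discounting mechanism, assuming Gaussian posterior updates on exponentially discounted sufficient statistics; the unit-information prior, the hyperparameters, and the toy instance are illustrative rather than the paper's exact specification.

```python
import numpy as np

# Sketch of discounted Thompson Sampling with Gaussian priors: each
# arm's sufficient statistics are multiplied by a discount factor
# gamma every round, so old observations fade and the posterior can
# track a drifting reward distribution.
def ds_ts(rewards, gamma=0.95, sigma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    T, K = rewards.shape
    s = np.zeros(K)   # discounted sum of observed rewards per arm
    n = np.zeros(K)   # discounted number of pulls per arm
    total = 0.0
    for t in range(T):
        # Posterior for each arm's mean: N(s/(n+1), sigma^2/(n+1))
        # (unit-information Gaussian prior; an assumption of this sketch).
        samples = rng.normal(s / (n + 1), sigma / np.sqrt(n + 1))
        a = int(np.argmax(samples))
        r = rewards[t, a]
        total += r
        s *= gamma; n *= gamma   # passive forgetting
        s[a] += r; n[a] += 1.0
    return total

# Abruptly changing toy instance: the best arm switches halfway.
T, K = 2000, 2
means = np.zeros((T, K)); means[: T // 2, 0] = 1.0; means[T // 2 :, 1] = 1.0
rewards = means + np.random.default_rng(1).normal(scale=0.5, size=(T, K))
print(ds_ts(rewards))
```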
We study the parameterized complexity of winner determination for three prevalent $k$-committee selection rules, namely minimax approval voting (MAV), proportional approval voting (PAV), and Chamberlin-Courant approval voting (CCAV). These problems are known to be computationally hard. Although they have been studied from the parameterized complexity point of view with respect to several natural parameters, many of these parameterizations turned out to be W[1]-hard or W[2]-hard. Aiming to obtain a richer landscape of fixed-parameter algorithms, we revisit these problems by considering further natural single parameters, combined parameters, and structural parameters.
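For concreteness, the three rules admit compact score definitions, sketched below together with brute-force winner determination, which is exponential in general and is exactly what motivates the parameterized analysis; the instance is illustrative.

```python
from itertools import combinations

# Scores of a committee W under the three rules (standard definitions).
# approvals: list of voter approval sets.
def mav(approvals, W):
    # Hamming distance between approval set and committee, worst voter.
    return max(len(A ^ W) for A in approvals)   # to be minimized

def pav(approvals, W):
    # Each voter contributes the harmonic number of |A & W|.
    return sum(sum(1 / j for j in range(1, len(A & W) + 1))
               for A in approvals)              # to be maximized

def ccav(approvals, W):
    # A voter is represented if W contains an approved candidate.
    return sum(1 for A in approvals if A & W)   # to be maximized

# Brute force over all size-k committees (use maximize=False for MAV).
def best_committee(approvals, candidates, k, score, maximize=True):
    pick = max if maximize else min
    return pick(map(frozenset, combinations(candidates, k)),
                key=lambda W: score(approvals, W))

voters = [{1, 2}, {1, 3}, {2, 3}, {4}]
print(best_committee(voters, {1, 2, 3, 4}, 2, pav))
```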
A pure quantum state of $n$ parties associated with the Hilbert space $\mathbb{C}^{d_1}\otimes \mathbb{C}^{d_2}\otimes\cdots\otimes \mathbb{C}^{d_n}$ is called $k$-uniform if all of its reductions to $k$ parties are maximally mixed. The $n$-partite system is called homogeneous if the local dimensions satisfy $d_1=d_2=\cdots=d_n$, and heterogeneous if the local dimensions are not all equal. $k$-uniform states play an important role in quantum information theory. Much progress has been made in characterizing and constructing $k$-uniform states in homogeneous systems. However, the study of entanglement in heterogeneous systems is much more challenging than in the homogeneous case, and very few results are known for $k$-uniform states in heterogeneous systems with $k>3$. We present two general methods to construct $k$-uniform states in heterogeneous systems for general $k$. The first construction is derived from error-correcting codes, by establishing a connection between irredundant mixed orthogonal arrays and error-correcting codes; it produces many new $k$-uniform states in which the local dimension of each subsystem can be a prime power. The second construction is derived from a matrix $H$ such that $H_{A\times \bar{A}}+H^T_{\bar{A}\times A}$ has full rank for every row index set $A$ of size $k$. This matrix construction allows more flexible choices of the local dimensions: they can be arbitrary integers (not necessarily prime powers), subject to some constraints. Our constructions imply that for any positive integer $k$, one can construct $k$-uniform states of heterogeneous systems in many different Hilbert spaces.
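To make the rank condition of the second construction concrete computationally, here is a small sketch that checks it, assuming arithmetic over a prime field $GF(p)$ (the actual constructions allow more general local dimensions); the helper names and the test matrix are illustrative.

```python
import numpy as np
from itertools import combinations

# Row-reduce an integer matrix over GF(p) and return its rank.
def rank_gfp(M, p):
    M = M.copy() % p
    r = 0
    for c in range(M.shape[1]):
        piv = next((i for i in range(r, M.shape[0]) if M[i, c]), None)
        if piv is None:
            continue
        M[[r, piv]] = M[[piv, r]]
        M[r] = (M[r] * pow(int(M[r, c]), -1, p)) % p  # normalize pivot
        for i in range(M.shape[0]):
            if i != r and M[i, c]:
                M[i] = (M[i] - M[i, c] * M[r]) % p    # eliminate column c
        r += 1
    return r

# Check that H[A, comp(A)] + H[comp(A), A].T has full rank (i.e. rank k,
# assuming k <= n - k) for every row index set A of size k.
def satisfies_condition(H, k, p):
    n = H.shape[0]
    for A in combinations(range(n), k):
        Ac = [j for j in range(n) if j not in A]
        M = (H[np.ix_(A, Ac)] + H[np.ix_(Ac, A)].T) % p
        if rank_gfp(M, p) < k:
            return False
    return True

H = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
print(satisfies_condition(H, 1, 3))  # True for this toy matrix
```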
Gaussian boson sampling, a computational model widely believed to admit quantum supremacy, has already been experimentally demonstrated and is claimed to surpass the classical simulation capabilities of even the most powerful supercomputers today. However, whether the current approach, limited by photon loss and noise in such experiments, prescribes a scalable path to quantum advantage is an open question. To understand the effect of photon loss on the scalability of Gaussian boson sampling, we analytically derive the asymptotic scaling of the operator entanglement entropy, which is related to the simulation complexity. As a result, we observe that efficient tensor network simulations are likely possible when the number of surviving photons scales as $N_\text{out}\propto\sqrt{N}$ with the number of input photons $N$. We numerically verify this result using a tensor network algorithm with $U(1)$ symmetry and, using hardware acceleration, overcome previous challenges posed by the large local Hilbert space dimensions in Gaussian boson sampling. Additionally, we observe that increasing the photon number through larger squeezing does not significantly increase the entanglement entropy. Finally, we numerically determine the bond dimension necessary for fixed-accuracy simulations, providing more direct evidence for the complexity of tensor network simulations.
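The operator entanglement entropy itself is derived analytically in the paper; as a generic illustration of the kind of quantity that controls tensor network simulability, the sketch below computes the von Neumann entropy of a bipartition from the Schmidt (singular) values of a pure state. The random state is purely illustrative.

```python
import numpy as np

# Von Neumann entanglement entropy of a pure state across a bipartition,
# computed from its Schmidt (singular) values.  The bond dimension a
# tensor network needs grows roughly like exp(S), which is why entropy
# scaling governs simulability.
def entanglement_entropy(psi, dim_left, dim_right):
    s = np.linalg.svd(psi.reshape(dim_left, dim_right), compute_uv=False)
    p = s**2 / np.sum(s**2)        # Schmidt coefficients
    p = p[p > 1e-15]               # drop numerically zero weights
    return float(-np.sum(p * np.log(p)))

rng = np.random.default_rng(0)
psi = rng.normal(size=16) + 1j * rng.normal(size=16)
print(entanglement_entropy(psi / np.linalg.norm(psi), 4, 4))
```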
We present an effective framework for improving the breakdown point of robust regression algorithms. Robust regression has attracted widespread attention due to the ubiquity of outliers, which can significantly distort estimation results. However, many existing robust least-squares regression algorithms suffer from a low breakdown point, as they become stuck around local optima under severe attacks. Expanding on previous work, we propose a novel framework that enhances the breakdown point of such algorithms by inserting a prior distribution at each iteration step and adjusting this prior according to historical information. We apply the framework to a specific algorithm and derive the consistent robust regression algorithm with iterative local search (CORALS). We describe the relationship between CORALS and momentum gradient descent and give a detailed proof of the theoretical convergence of CORALS. Finally, we demonstrate that the breakdown point of CORALS is indeed higher than that of the algorithm from which it is derived. Applying the proposed framework to other robust algorithms, we show that the improved algorithms achieve better results than the originals, demonstrating the effectiveness of the framework.
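CORALS itself is specified in the paper; to make the stated connection concrete, the sketch below shows standard momentum gradient descent on an illustrative least-squares objective, where the velocity term aggregates historical information across iterations. It is not the CORALS algorithm.

```python
import numpy as np

# Standard momentum gradient descent on a least-squares objective,
# shown only to make the CORALS/momentum connection concrete.
# Step sizes and data are illustrative.
def momentum_gd(X, y, lr=0.01, mu=0.9, iters=500):
    w = np.zeros(X.shape[1])
    v = np.zeros_like(w)
    for _ in range(iters):
        grad = X.T @ (X @ w - y) / len(y)
        v = mu * v - lr * grad     # velocity accumulates past gradients
        w = w + v                  # update thus depends on history
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=200)
print(momentum_gd(X, y))  # close to w_true
```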
Markov switching models have enjoyed increasing success in time series analysis due to their ability to capture unobserved discrete states in the dynamics of the variables under study. This is generally achieved through inference on the states derived from the so-called Hamilton filter. One of the open problems in this framework is the identification of the number of states, which is generally fixed a priori: classical tests cannot be applied because of nuisance parameters that are present only under the alternative hypothesis. In this work we show, by Monte Carlo simulation, that fuzzy clustering is able to reproduce the parametric state inference derived from the Hamilton filter, and that the indices typically used in clustering to determine the number of groups can be used to identify the number of states in this framework. The procedure is very simple to apply, as it is performed nonparametrically, independently of the data generating process, and the indices we use are available in most statistical packages. An application to real data completes the analysis.
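The following is a minimal sketch of the model-selection idea, using a hand-rolled fuzzy c-means and the partition coefficient as the validity index (the indices considered in the paper may differ); the data and settings are illustrative.

```python
import numpy as np

# Fuzzy c-means: alternate between updating centers and memberships.
def fcm(X, c, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))        # memberships (n, c)
    for _ in range(iters):
        W = U**m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))                # standard FCM update
        U /= U.sum(axis=1, keepdims=True)
    return U

# Partition coefficient: 1/c for maximally fuzzy, 1 for crisp clusters.
def partition_coefficient(U):
    return float(np.mean(np.sum(U**2, axis=1)))

# Toy series with two regimes; the index should favor c = 2.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1, 300), rng.normal(4, 1, 300)])[:, None]
for c in (2, 3, 4):
    print(c, partition_coefficient(fcm(x, c)))
```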