We provide a new approximation algorithm for the Red-Blue Set Cover problem and give a new hardness result. Our approximation algorithm achieves an $\tilde O(m^{1/3})$-approximation, improving on the $\tilde O(m^{1/2})$-approximation due to Elkin and Peleg (where $m$ is the number of sets). Additionally, we provide a nearly approximation-preserving reduction from Min $k$-Union to Red-Blue Set Cover that gives an $\tilde\Omega(m^{1/4 - \varepsilon})$ hardness under the Dense-vs-Random conjecture.
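To make the objective concrete, here is a small brute-force sketch of the Red-Blue Set Cover objective; it is only the exponential-time baseline, not the algorithm from the paper, and the function name and toy instance are ours for illustration.

```python
# Brute-force baseline for Red-Blue Set Cover (exponential in m; NOT the
# paper's algorithm): choose a subfamily of sets covering every blue element
# while minimising the number of red elements touched.
from itertools import combinations

def red_blue_set_cover(sets, blue, red):
    """sets: list of frozensets over blue | red; returns (cost, chosen set indices)."""
    blue, red = set(blue), set(red)
    best = (float("inf"), None)
    for r in range(1, len(sets) + 1):
        for choice in combinations(range(len(sets)), r):
            covered = set().union(*(sets[i] for i in choice))
            if blue <= covered:                      # all blue elements are covered
                cost = len(covered & red)            # number of red elements touched
                best = min(best, (cost, choice))
    return best

# toy instance: 'b*' elements are blue, 'r*' elements are red
sets = [frozenset({"b1", "r1"}), frozenset({"b2", "r1"}),
        frozenset({"b1", "b2", "r2", "r3"})]
print(red_blue_set_cover(sets, blue={"b1", "b2"}, red={"r1", "r2", "r3"}))  # (1, (0, 1))
```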
We consider a multi-process remote estimation system observing $K$ independent Ornstein-Uhlenbeck processes. In this system, a shared sensor samples the $K$ processes so that the long-term average sum mean square error (MSE) is minimized. The sensor operates under a total sampling frequency constraint $f_{\max}$. The samples from all processes incur random processing delays in a shared queue and are then transmitted over a channel with erasure probability $\epsilon$. We study two variants of the problem: first, when samples are scheduled according to a Maximum-Age-First (MAF) policy and the receiver provides erasure status feedback; and second, when samples are scheduled according to a Round-Robin (RR) policy and there is no erasure status feedback from the receiver. Aided by structural results on the optimal solution, we show that the optimal sampling policy for both settings, under some conditions, is a \emph{threshold policy}. We characterize the optimal threshold and the corresponding optimal long-term average sum MSE as a function of $K$, $f_{\max}$, $\epsilon$, and the statistical properties of the observed processes. Our results show that, with exponentially distributed service times, the optimal threshold $\tau^*$ increases as the number of processes $K$ increases, for both settings. Additionally, we show that the optimal threshold is an \emph{increasing} function of $\epsilon$ in the case of \emph{available} erasure status feedback, while it exhibits the \emph{opposite behavior}, i.e., $\tau^*$ is a \emph{decreasing} function of $\epsilon$, in the case of \emph{absent} erasure status feedback.
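For intuition about the threshold rule, the sketch below simulates a single OU process with a threshold sampling policy, exponential service times, and channel erasures under instantaneous feedback; the parameter values and the time discretisation are illustrative choices of ours, not the paper's $K$-process model or its closed-form analysis.

```python
# A minimal simulation sketch of a threshold sampling policy for a single
# Ornstein-Uhlenbeck process with exponential service times and channel
# erasures, assuming instantaneous erasure feedback. All parameter values
# (theta, sigma, mu, eps, tau) are illustrative; this is not the paper's
# K-process model or its closed-form analysis.
import numpy as np

rng = np.random.default_rng(0)
theta, sigma = 1.0, 1.0           # OU dynamics: dX = -theta*X dt + sigma dW
mu, eps, tau = 2.0, 0.2, 0.5      # service rate, erasure probability, threshold
dt, T = 1e-3, 200.0

x = 0.0                           # true process state
last_val, last_time = 0.0, 0.0    # value/time of the last *delivered* sample
inflight = None                   # (sample value, sample time, delivery time)
sq_err_sum, n_steps, t = 0.0, 0, 0.0

while t < T:
    # Euler-Maruyama step of the OU process
    x += -theta * x * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    t += dt

    # receiver-side MMSE estimate decays toward the (zero) mean
    x_hat = last_val * np.exp(-theta * (t - last_time))
    sq_err_sum += (x - x_hat) ** 2
    n_steps += 1

    if inflight is not None and t >= inflight[2]:     # sample leaves the queue
        if rng.random() > eps:                        # delivered successfully
            last_val, last_time = inflight[0], inflight[1]
        inflight = None                               # erasure is known via feedback
    if inflight is None and t - last_time >= tau:     # threshold rule: take a sample
        inflight = (x, t, t + rng.exponential(1.0 / mu))

print("empirical long-term MSE:", sq_err_sum / n_steps)
```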
The Euler characteristic transform (ECT) is a signature from topological data analysis (TDA) which summarises shapes embedded in Euclidean space. Compared with other TDA methods, the ECT is fast to compute and it is a sufficient statistic for a broad class of shapes. However, small perturbations of a shape can lead to large distortions in its ECT. In this paper, we propose a new metric on compact one-dimensional shapes and prove that the ECT is stable with respect to this metric. Crucially, our result uses curvature, rather than the size of a triangulation of an underlying shape, to control stability. We further construct a computationally tractable statistical estimator of the ECT based on the theory of Gaussian processes. We use our stability result to prove that our estimator is consistent on shapes perturbed by independent ambient noise; i.e., the estimator converges to the true ECT as the sample size increases.
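As a concrete illustration of the ECT on a one-dimensional shape, the sketch below evaluates the sublevel-set Euler characteristic of an embedded graph along a chosen direction; the fixed grid of heights and the helper `ect` are ours for illustration, not the Gaussian-process estimator proposed in the paper.

```python
# A minimal sketch of the Euler characteristic transform for an embedded graph
# (vertices + edges in R^2), using the sublevel-set filtration in direction v:
# ECT(v, t) = #{vertices with <x, v> <= t} - #{edges whose endpoints both satisfy it}.
# This is an illustrative discretisation, not the statistical estimator in the paper.
import numpy as np

def ect(vertices, edges, direction, heights):
    """vertices: (n, 2) array; edges: list of (i, j) index pairs; direction: unit vector."""
    h = vertices @ direction                    # height of each vertex along the direction
    chi = []
    for t in heights:
        v_in = h <= t
        n_vert = int(v_in.sum())
        n_edge = sum(1 for i, j in edges if v_in[i] and v_in[j])
        chi.append(n_vert - n_edge)
    return np.array(chi)

# toy shape: the boundary of a triangle (a closed loop, so chi = 0 once fully included)
verts = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])
edges = [(0, 1), (1, 2), (2, 0)]
print(ect(verts, edges, direction=np.array([0.0, 1.0]), heights=np.linspace(-0.5, 1.5, 5)))
```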
The chase procedure is a fundamental algorithmic tool in databases that allows us to reason with constraints, such as existential rules, and has a plethora of applications. It takes as input a database and a set of constraints, and iteratively completes the database as dictated by the constraints. A key challenge, though, is that it may not terminate, which leads to the problem of checking whether it terminates for a given database and set of constraints. In this work, we focus on the semi-oblivious version of the chase, which is well-suited for practical implementations, and on linear existential rules, a central class of constraints with several applications. In this setting, there is a mature body of theoretical work that provides syntactic characterizations of when the chase terminates, algorithms for checking chase termination, precise complexity results, and worst-case optimal bounds on the size of the chase result (whenever it is finite). Our main objective is to experimentally evaluate the existing chase termination algorithms with the aim of understanding which input parameters affect their performance, clarifying whether they can be used in practice, and revealing their performance limitations.
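For readers unfamiliar with the semi-oblivious chase, the toy sketch below runs it on linear (single-body-atom) rules, firing each rule at most once per frontier assignment and capping the number of rounds since termination is not guaranteed; the encoding of facts and rules is a simplification of ours, not one of the evaluated implementations.

```python
# A toy run of the semi-oblivious chase for *linear* existential rules
# (single-atom bodies): a rule fires at most once per assignment of its
# frontier variables, existential variables get fresh labelled nulls, and we
# cap the number of rounds because the chase may not terminate.
from itertools import count

fresh = count()

def semi_oblivious_chase(db, rules, max_rounds=10):
    """db: set of facts like ('R', 'a', 'b'); rules: list of (body_atom, head_atoms),
    e.g. (('R', 'x', 'y'), [('R', 'y', 'z')]) where 'z' is existential."""
    db = set(db)
    applied = set()                                   # (rule index, frontier assignment)
    for _ in range(max_rounds):
        new_facts = set()
        for ridx, (body, heads) in enumerate(rules):
            pred, *bvars = body
            for fact in db:
                if fact[0] != pred or len(fact) != len(body):
                    continue
                sub = dict(zip(bvars, fact[1:]))      # homomorphism body -> fact
                frontier = tuple(sorted((v, c) for v, c in sub.items()
                                        if any(v in h[1:] for h in heads)))
                if (ridx, frontier) in applied:       # semi-oblivious: fire once per frontier
                    continue
                applied.add((ridx, frontier))
                nulls = {}                            # fresh null per existential variable
                for hpred, *hvars in heads:
                    args = tuple(sub[v] if v in sub
                                 else nulls.setdefault(v, f"_n{next(fresh)}")
                                 for v in hvars)
                    new_facts.add((hpred, *args))
        if new_facts <= db:
            return db, True                           # chase terminated
        db |= new_facts
    return db, False                                  # gave up: possibly non-terminating

# R(x, y) -> exists z. R(y, z): the semi-oblivious chase does not terminate here
print(semi_oblivious_chase({("R", "a", "b")}, [(("R", "x", "y"), [("R", "y", "z")])]))
```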
Random quantum circuits have been utilized in the contexts of quantum supremacy demonstrations, variational quantum algorithms for chemistry and machine learning, and black-hole information. The ability of random circuits to approximate random unitaries has consequences for their complexity, expressibility, and trainability. To study this property of random circuits, we develop numerical protocols for estimating the frame potential, the distance between a given ensemble and the exact Haar randomness. Our tensor-network-based algorithm has polynomial complexity for shallow circuits and performs well with CPU and GPU parallelism. We study (1) local and parallel random circuits, to verify the linear growth in complexity stated by the Brown-Susskind conjecture, and (2) hardware-efficient ans\"atze, to shed light on their expressibility and the barren plateau problem in the context of variational algorithms. Our work shows that large-scale tensor network simulations could provide important hints toward open problems in quantum information science.
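As a point of reference for the frame potential, the sketch below estimates it by direct Monte Carlo over small Haar-random unitaries and compares against the exact Haar value $k!$; this brute-force approach is ours for illustration and is unrelated to the paper's tensor-network protocol, which is what makes large circuits feasible.

```python
# A direct Monte Carlo sketch of the k-th frame potential
#   F_k(E) = E_{U,V ~ E} |Tr(U^dagger V)|^(2k),
# evaluated here for small Haar-random unitaries (QR of a complex Gaussian)
# rather than via the paper's tensor-network protocol. For the Haar ensemble
# with d >= k the exact value is k!.
import numpy as np

rng = np.random.default_rng(1)

def haar_unitary(d):
    z = (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))      # fix column phases for Haar measure

def frame_potential(sampler, k, n_samples=2000):
    vals = []
    for _ in range(n_samples):
        u, v = sampler(), sampler()
        vals.append(np.abs(np.trace(u.conj().T @ v)) ** (2 * k))
    return np.mean(vals)

d, k = 8, 2
print(frame_potential(lambda: haar_unitary(d), k))     # should be close to k! = 2
```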
We study the complexity of high-dimensional approximation in the $L_2$-norm when different classes of information are available; we compare the power of function evaluations with the power of arbitrary continuous linear measurements. Here, we discuss the situation when the number of linear measurements required to achieve an error $\varepsilon \in (0,1)$ in dimension $d\in\mathbb{N}$ depends only poly-logarithmically on $\varepsilon^{-1}$. This corresponds to an exponential order of convergence of the approximation error, which often happens in applications. However, this does not mean that the high-dimensional approximation problem is easy; the main difficulty usually lies in the dependence on the dimension $d$. We determine to what extent the required amount of information changes if we allow only function evaluations instead of arbitrary linear information. It turns out that in this case we lose very little, and we can even restrict ourselves to linear algorithms. In particular, several notions of tractability hold simultaneously for both types of available information.
Most recent 6D object pose estimation methods first use object detection to obtain 2D bounding boxes before actually regressing the pose. However, the general object detection methods they use are ill-suited to handle cluttered scenes, thus providing poor initialization for the subsequent pose network. To address this, we propose a rigidity-aware detection method that exploits the fact that, in 6D pose estimation, the target objects are rigid. This lets us introduce an approach to sampling positive object regions from the entire visible object area during training, instead of naively drawing samples around the bounding box center, where the object might be occluded. As such, every visible object part can contribute to the final bounding box prediction, yielding better detection robustness. Key to the success of our approach is a visibility map, which we propose to build using a minimum barrier distance between every pixel in the bounding box and the box boundary. Our results on seven challenging 6D pose estimation datasets show that our method outperforms general detection frameworks by a large margin. Furthermore, combined with a pose regression network, we obtain state-of-the-art pose estimation results on the challenging BOP benchmark.
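To illustrate the minimum barrier distance underlying the visibility map, the sketch below runs a Dijkstra-style approximation from every pixel of a box to the box boundary; the seeding, 4-connectivity, and toy image are our simplifications, not the paper's exact construction.

```python
# A Dijkstra-style sketch that approximates the minimum barrier distance (MBD)
# from every pixel of a box to the box boundary: the cost of a path is
# max(intensity) - min(intensity) along it, and boundary pixels are the seeds.
# This is an illustrative approximation, not the paper's exact construction.
import heapq
import numpy as np

def mbd_to_boundary(img):
    h, w = img.shape
    dist = np.full((h, w), np.inf)
    lo = img.astype(float)             # running min along the current best path
    hi = img.astype(float)             # running max along the current best path
    pq = []
    for y in range(h):
        for x in range(w):
            if y in (0, h - 1) or x in (0, w - 1):    # seeds: the box boundary
                dist[y, x] = 0.0
                heapq.heappush(pq, (0.0, y, x))
    while pq:
        d, y, x = heapq.heappop(pq)
        if d > dist[y, x]:
            continue                                  # stale queue entry
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                new_hi = max(hi[y, x], img[ny, nx])
                new_lo = min(lo[y, x], img[ny, nx])
                nd = new_hi - new_lo
                if nd < dist[ny, nx]:
                    dist[ny, nx], hi[ny, nx], lo[ny, nx] = nd, new_hi, new_lo
                    heapq.heappush(pq, (nd, ny, nx))
    return dist          # low values ~ background, high values ~ likely object interior

box = np.array([[0, 0, 0, 0, 0],
                [0, 9, 9, 9, 0],
                [0, 9, 9, 9, 0],
                [0, 0, 0, 0, 0]], dtype=float)
print(mbd_to_boundary(box))
```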
Estimating long-term causal effects based on short-term surrogates is a significant but challenging problem in many real-world applications, e.g., marketing and medicine. Despite their success in certain domains, most existing methods estimate causal effects in an idealistic and simplistic way, ignoring the causal structure among short-term outcomes and treating all of them as surrogates. However, such methods cannot be well applied to real-world scenarios, in which the partially observed surrogates are mixed with their proxies among the short-term outcomes. To this end, we develop a flexible method, Laser, to estimate long-term causal effects in the more realistic situation where the surrogates are observed or have observed proxies. Given the indistinguishability between the surrogates and proxies, we utilize an identifiable variational auto-encoder (iVAE) to recover the whole set of valid surrogates from all the surrogate candidates, without the need to distinguish the observed surrogates from the proxies of latent surrogates. With the help of the recovered surrogates, we further devise an unbiased estimator of long-term causal effects. Extensive experimental results on real-world and semi-synthetic datasets demonstrate the effectiveness of our proposed method.
The aim of this work is to study the dual and the algebraic dual of an evaluation code using standard monomials and indicator functions. We show that the dual of an evaluation code is the evaluation code of the algebraic dual. We develop an algorithm for computing a basis for the algebraic dual. Let $C_1$ and $C_2$ be linear codes spanned by standard monomials. We give a combinatorial condition for the monomial equivalence of $C_1$ and the dual $C_2^\perp$. Moreover, we give an explicit description of a generator matrix of $C_2^\perp$ in terms of that of $C_1$ and coefficients of indicator functions. For Reed--Muller-type codes we give a duality criterion in terms of the v-number and the Hilbert function of a vanishing ideal. As an application, we provide an explicit duality for Reed--Muller-type codes corresponding to Gorenstein ideals. In addition, when the evaluation code is monomial and the set of evaluation points is a degenerate affine space, we classify when the dual is a monomial code.
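As a small worked example of dualising an evaluation code, the sketch below builds a generator matrix by evaluating monomials on an affine point set over a prime field and computes the dual as a null space mod $p$; this generic linear-algebra recipe is ours for illustration and does not use the standard monomials and indicator functions that are central to the paper.

```python
# A small sketch computing the dual C^perp of an evaluation code over a prime
# field F_p: evaluate monomials at a point set to get a generator matrix, then
# take its null space mod p. Illustrative only; not the paper's construction.
import numpy as np

def nullspace_mod_p(G, p):
    """Rows of the returned matrix generate {x : G x^T = 0 (mod p)}."""
    A = np.array(G, dtype=int) % p
    m, n = A.shape
    pivots, row = [], 0
    for col in range(n):
        piv = next((r for r in range(row, m) if A[r, col]), None)
        if piv is None:
            continue
        A[[row, piv]] = A[[piv, row]]                       # swap pivot row into place
        A[row] = (A[row] * pow(int(A[row, col]), -1, p)) % p
        for r in range(m):
            if r != row and A[r, col]:
                A[r] = (A[r] - A[r, col] * A[row]) % p
        pivots.append(col)
        row += 1
    free = [c for c in range(n) if c not in pivots]
    basis = []
    for f in free:                                          # one basis vector per free column
        v = np.zeros(n, dtype=int)
        v[f] = 1
        for r, c in enumerate(pivots):
            v[c] = (-A[r, f]) % p
        basis.append(v)
    return np.array(basis)

p = 3
points = [(a, b) for a in range(p) for b in range(p)]       # the affine space F_3^2
monomials = [(0, 0), (1, 0), (0, 1)]                        # 1, x, y: a degree-1 RM-type code
G = np.array([[pow(a, i, p) * pow(b, j, p) % p for (a, b) in points]
              for (i, j) in monomials])
H = nullspace_mod_p(G, p)                                   # generator matrix of the dual
print(H.shape)                                              # (6, 9): dual dimension is 9 - 3
print((G @ H.T) % p)                                        # all zeros: H generates C^perp
```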
In fully dynamic clustering problems, a clustering of a given data set in a metric space must be maintained while it is modified through insertions and deletions of individual points. In this paper, we resolve the complexity of fully dynamic $k$-center clustering against both adaptive and oblivious adversaries. Against oblivious adversaries, we present the first algorithm for fully dynamic $k$-center in an arbitrary metric space that maintains an optimal $(2+\epsilon)$-approximation in $O(k \cdot \mathrm{polylog}(n,\Delta))$ amortized update time. Here, $n$ is an upper bound on the number of active points at any time, and $\Delta$ is the aspect ratio of the metric space. Previously, the best known amortized update time was $O(k^2\cdot \mathrm{polylog}(n,\Delta))$, due to Chan, Gourqin, and Sozio (2018). Moreover, we demonstrate that our runtime is optimal up to $\mathrm{polylog}(n,\Delta)$ factors. In fact, we prove that even offline algorithms for $k$-clustering tasks in arbitrary metric spaces, including $k$-medians, $k$-means, and $k$-center, must make at least $\Omega(n k)$ distance queries to achieve any non-trivial approximation factor. This implies a lower bound of $\Omega(k)$ on the amortized update time, which holds even in the insertions-only setting. We also show deterministic lower and upper bounds for adaptive adversaries, demonstrate that an update time sublinear in $k$ is possible against oblivious adversaries for metric spaces that admit locality-sensitive hash functions (LSH), and give the first fully dynamic $O(1)$-approximation algorithms for the closely related $k$-sum-of-radii and $k$-sum-of-diameters problems.
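For context on the $(2+\epsilon)$ guarantee, the sketch below implements the classic static 2-approximation for $k$-center (Gonzalez's farthest-first traversal); it is only the offline baseline, not the fully dynamic algorithm described above, and the toy data set is ours.

```python
# The classic static 2-approximation for k-center (Gonzalez's farthest-first
# traversal), shown only as background for the (2 + eps) guarantee above;
# this is NOT the fully dynamic algorithm.
import numpy as np

def k_center_farthest_first(points, k, seed=0):
    """points: (n, d) array; returns the chosen center indices and the covering radius."""
    rng = np.random.default_rng(seed)
    centers = [int(rng.integers(len(points)))]             # arbitrary first center
    d = np.linalg.norm(points - points[centers[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(d))                            # farthest point from current centers
        centers.append(nxt)
        d = np.minimum(d, np.linalg.norm(points - points[nxt], axis=1))
    return centers, float(d.max())                         # covering radius is at most 2 * OPT

rng = np.random.default_rng(1)
pts = np.vstack([rng.normal(c, 0.1, size=(50, 2)) for c in [(0, 0), (5, 5), (0, 5)]])
print(k_center_farthest_first(pts, k=3))
```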
We review Quasi Maximum Likelihood estimation of factor models for high-dimensional panels of time series. We consider two cases: (1) estimation when no dynamic model for the factors is specified (Bai and Li, 2016); (2) estimation based on the Kalman smoother and the Expectation Maximization algorithm, which allows us to model the factor dynamics explicitly (Doz et al., 2012). Our interest is in approximate factor models, i.e., we allow the idiosyncratic components to be mildly cross-sectionally, as well as serially, correlated. Although such a setting apparently makes estimation harder, we show that, in fact, factor models do not suffer from the curse of dimensionality; instead, they enjoy a blessing-of-dimensionality property. In particular, we show that if the cross-sectional dimension of the data, $N$, grows to infinity, then: (i) identification of the model is still possible, and (ii) the mis-specification error due to the use of an exact factor model log-likelihood vanishes. Moreover, if we also let the sample size, $T$, grow to infinity, we can consistently estimate all parameters of the model and make inference. The same is true for estimation of the latent factors, which can be carried out by weighted least squares, linear projection, or Kalman filtering/smoothing. We also compare the presented approaches with Principal Component analysis and with the classical, fixed-$N$, exact Maximum Likelihood approach. We conclude with a discussion of the efficiency of the considered estimators.
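As a quick reference point for the Principal Component benchmark mentioned at the end, the sketch below extracts factors and loadings from a simulated approximate factor model by eigen-decomposition; the sample sizes, noise level, and helper `pc_factors` are illustrative choices of ours, not the QML or EM-Kalman estimators under review.

```python
# The Principal Component benchmark mentioned at the end of the abstract
# (NOT the QML or EM-Kalman estimators): extract r factors and loadings from
# a T x N panel by eigen-decomposition of X X' / (T N).
import numpy as np

def pc_factors(X, r):
    """X: (T, N) demeaned panel; returns (T, r) factors and (N, r) loadings."""
    T, N = X.shape
    eigval, eigvec = np.linalg.eigh(X @ X.T / (T * N))
    F = np.sqrt(T) * eigvec[:, ::-1][:, :r]    # top-r eigenvectors, normalised so F'F / T = I_r
    L = X.T @ F / T                            # loadings by least squares
    return F, L

# toy approximate factor model: 2 factors plus idiosyncratic noise
rng = np.random.default_rng(0)
T, N, r = 200, 100, 2
F0 = rng.standard_normal((T, r))
L0 = rng.standard_normal((N, r))
X = F0 @ L0.T + 0.5 * rng.standard_normal((T, N))

F, L = pc_factors(X - X.mean(0), r)
print(np.corrcoef((F @ L.T).ravel(), (F0 @ L0.T).ravel())[0, 1])   # close to 1
```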