In Gapped String Indexing, the goal is to compactly represent a string $S$ of length $n$ such that, given queries consisting of two strings $P_1$ and $P_2$, called patterns, and an integer interval $[\alpha, \beta]$, called the gap range, we can quickly find occurrences of $P_1$ and $P_2$ in $S$ whose distance lies in $[\alpha, \beta]$. Due to the many applications of this fundamental problem in computational biology and elsewhere, there is a large body of work on restricted or parameterised variants of the problem. For the general problem statement, however, no improvements upon the trivial $\mathcal{O}(n)$-space $\mathcal{O}(n)$-query time or $\Omega(n^2)$-space $\mathcal{\tilde{O}}(|P_1| + |P_2| + \mathrm{occ})$-query time solutions have been known so far. We break this barrier, obtaining interesting trade-offs with polynomially subquadratic space and polynomially sublinear query time. In particular, we show that, for every $0\leq \delta \leq 1$, there is a data structure for Gapped String Indexing with either $\mathcal{\tilde{O}}(n^{2-\delta/3})$ or $\mathcal{\tilde{O}}(n^{3-2\delta})$ space and $\mathcal{\tilde{O}}(|P_1| + |P_2| + n^{\delta}\cdot (\mathrm{occ}+1))$ query time, where $\mathrm{occ}$ is the number of reported occurrences. As a new fundamental tool towards our main result, we introduce the Shifted Set Intersection problem: preprocess a collection of sets $S_1, \ldots, S_k$ of integers such that, given queries consisting of three integers $i,j,s$, we can quickly output YES if and only if there exist $a \in S_i$ and $b \in S_j$ with $a+s = b$. We start by showing that the Shifted Set Intersection problem is equivalent to the indexing variant of 3SUM (3SUM Indexing) [Golovnev et al., STOC 2020]. Via several steps of reduction, we then show that the Gapped String Indexing problem reduces to polylogarithmically many instances of the Shifted Set Intersection problem.
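To make the query semantics of Shifted Set Intersection concrete, the following sketch stores each set in a hash set and answers a query by scanning the smaller of the two sets. This is only a naive illustration of the problem statement, not the data structure developed in the paper; the class and variable names are ours.

```python
# Naive illustration of the Shifted Set Intersection query: preprocess
# sets S_1, ..., S_k and, given (i, j, s), report whether some a in S_i
# and b in S_j satisfy a + s = b.

class ShiftedSetIntersection:
    def __init__(self, sets):
        # Store each set as a hash set for O(1) membership tests.
        self.sets = [set(S) for S in sets]

    def query(self, i, j, s):
        # Scan the smaller side; takes O(min(|S_i|, |S_j|)) time per query.
        Si, Sj = self.sets[i], self.sets[j]
        if len(Si) <= len(Sj):
            return any(a + s in Sj for a in Si)
        return any(b - s in Si for b in Sj)


if __name__ == "__main__":
    ssi = ShiftedSetIntersection([[1, 4, 9], [3, 11]])
    print(ssi.query(0, 1, 2))   # True: 1 + 2 = 3 (and 9 + 2 = 11)
    print(ssi.query(0, 1, 5))   # False
```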
Simulations of large-scale dynamical systems require expensive computations. Low-dimensional parametrization of high-dimensional states, such as Proper Orthogonal Decomposition (POD), can lessen this burden by providing a certain compromise between accuracy and model complexity. However, for very low-dimensional parametrizations (for example, for controller design), linear methods like POD reach their natural limits, so that nonlinear approaches become the methods of choice. In this work, we propose a convolutional autoencoder (CAE) consisting of a nonlinear encoder and an affine linear decoder, and we consider combinations with k-means clustering for improved encoding performance. The proposed set of methods is compared to the standard POD approach in two cylinder-wake scenarios modeled by the incompressible Navier-Stokes equations.
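As a rough illustration of the proposed architecture (a nonlinear convolutional encoder followed by an affine linear decoder), consider the following PyTorch sketch. The layer sizes, channel counts, and activations are illustrative placeholders rather than the configuration used in the paper, and the k-means clustering step is not shown.

```python
import torch
import torch.nn as nn

class CAE(nn.Module):
    """Convolutional autoencoder with a nonlinear encoder and an affine
    linear decoder; all layer sizes below are illustrative placeholders."""

    def __init__(self, n_state, latent_dim=3):
        # n_state: dimension of the flattened state snapshot; for this
        # illustrative sizing it should be divisible by 4.
        super().__init__()
        # Nonlinear encoder: strided 1D convolutions with ELU activations.
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=5, stride=2, padding=2), nn.ELU(),
            nn.Conv1d(8, 16, kernel_size=5, stride=2, padding=2), nn.ELU(),
            nn.Flatten(),
            nn.Linear(16 * (n_state // 4), latent_dim),
        )
        # Affine linear decoder: a single affine map back to the full state.
        self.decoder = nn.Linear(latent_dim, n_state)

    def forward(self, x):
        # x has shape (batch, n_state); add a channel dimension for Conv1d.
        rho = self.encoder(x.unsqueeze(1))      # low-dimensional code
        return self.decoder(rho), rho
```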
Nonparametric estimation for semilinear SPDEs, namely stochastic reaction-diffusion equations in one space dimension, is studied. We consider observations of the solution field on a discrete grid in time and space with infill asymptotics in both coordinates. Firstly, we derive a nonparametric estimator for the reaction function of the underlying equation. The estimator is chosen from a finite-dimensional function space based on a least squares criterion. Oracle inequalities provide conditions under which the estimator achieves the usual nonparametric rate of convergence. Adaptivity is provided via model selection. Secondly, we show that the asymptotic properties of realized quadratic variation based estimators for the diffusivity and volatility carry over from linear SPDEs. In particular, we obtain a rate-optimal joint estimator of the two parameters. The result relies on our precise analysis of the H\"older regularity of the solution process and its nonlinear component, which may be of independent interest. Both steps of the calibration can be carried out simultaneously without prior knowledge of the parameters.
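To fix ideas, a sieve least-squares criterion of the kind referred to above can be written as follows; the notation (the candidate space $\mathcal{F}_m$, the grid points $(t_i, y_j)$, and the local responses $Z_{ij}$ built from increments of the observed field) is purely illustrative and not taken from the paper:
\[
\hat{f} \;\in\; \operatorname*{arg\,min}_{f \in \mathcal{F}_m} \; \sum_{i,j} \bigl( Z_{ij} - f\bigl(X_{t_i}(y_j)\bigr) \bigr)^2 ,
\]
where $X_{t_i}(y_j)$ denotes the observed solution field at the grid point $(t_i, y_j)$, and the model dimension $m$ is chosen adaptively via model selection.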
In statistics, independent, identically distributed random samples do not carry a natural ordering, and their statistics are typically invariant with respect to permutations of their order. Thus, an $n$-sample in a space $M$ can be considered as an element of the quotient space of $M^n$ modulo the permutation group. The present paper takes this definition of sample space and the related concept of orbit types as a starting point for developing a geometric perspective on statistics. We aim to derive a general mathematical setting for studying the behavior of empirical and population means in spaces ranging from smooth Riemannian manifolds to general stratified spaces. We fully describe the orbifold and path-metric structure of the sample space when $M$ is a manifold or path-metric space, respectively. These results are non-trivial even when $M$ is Euclidean. We show that the infinite sample space exists in a Gromov-Hausdorff type sense and coincides with the Wasserstein space of probability distributions on $M$. We exhibit Fr\'echet means and $k$-means as metric projections onto 1-skeleta or $k$-skeleta in Wasserstein space, and we define a new and more general notion of polymeans. This geometric characterization via metric projections applies equally to sample and population means, and we use it to establish asymptotic properties of polymeans such as consistency and asymptotic normality.
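For context, the Fr\'echet mean referred to above is the standard metric-space generalization of the arithmetic mean; the display below recalls its usual sample definition (the more general polymeans introduced in the paper are not reproduced here):
\[
\operatorname{FM}(x_1,\dots,x_n) \;\in\; \operatorname*{arg\,min}_{y \in M} \; \sum_{i=1}^{n} d(y, x_i)^2 ,
\]
where $d$ is the metric on $M$; the minimizer need not be unique in general.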
During the past decades, evolutionary computation (EC) has demonstrated promising potential in solving various complex optimization problems of relatively small scales. Nowadays, however, ongoing developments in modern science and engineering are posing increasingly severe scalability challenges to the conventional EC paradigm. As problem scales increase, on the one hand, the encoding spaces (i.e., the dimensions of the decision vectors) become intrinsically larger; on the other hand, EC algorithms often require growing numbers of function evaluations (and possibly larger population sizes as well) to work properly. Meeting these emerging challenges requires not only delicate algorithm design but, more importantly, a high-performance computing framework. Hence, we develop a distributed GPU-accelerated algorithm library -- EvoX. First, we propose a generalized workflow for implementing general EC algorithms. Second, we design a scalable computing framework for running EC algorithms on distributed GPU devices. Third, we provide user-friendly interfaces to both researchers and practitioners for benchmark studies as well as extended real-world applications. Empirically, we assess the scalability of EvoX via a series of benchmark experiments with problem dimensions/population sizes up to millions. Moreover, we demonstrate the ease of use of EvoX by applying it to reinforcement learning tasks on OpenAI Gym. To the best of our knowledge, this is the first library supporting distributed GPU computing in the EC literature. The code of EvoX is available at //github.com/EMI-Group/EvoX.
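The toy loop below illustrates the kind of generalized EC workflow such a library exposes (sample a population, evaluate it on a problem, update the algorithm state). It is NOT the EvoX API; all names in it (RandomSearch, sphere, ask/tell) are placeholders of our own, and no GPU or distribution logic is shown.

```python
import numpy as np

def sphere(x):
    # Toy benchmark objective: sum of squares, minimized at the origin.
    return np.sum(x * x, axis=-1)

class RandomSearch:
    """Minimal 'algorithm' obeying a generic ask/tell workflow."""
    def __init__(self, dim, pop_size, seed=0):
        self.dim, self.pop_size = dim, pop_size
        self.rng = np.random.default_rng(seed)
        self.best_x, self.best_f = None, np.inf

    def ask(self):
        # Sample a candidate population around the current best solution.
        center = self.best_x if self.best_x is not None else np.zeros(self.dim)
        return center + self.rng.normal(scale=0.1, size=(self.pop_size, self.dim))

    def tell(self, pop, fitness):
        # Keep the best candidate seen so far.
        i = int(np.argmin(fitness))
        if fitness[i] < self.best_f:
            self.best_x, self.best_f = pop[i], fitness[i]

if __name__ == "__main__":
    algo = RandomSearch(dim=1000, pop_size=64)
    for _ in range(100):
        pop = algo.ask()           # 1. sample a population
        fit = sphere(pop)          # 2. evaluate it on the problem
        algo.tell(pop, fit)        # 3. update the algorithm state
    print(algo.best_f)
```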
A series-parallel matrix is a binary matrix that can be obtained from an empty matrix by successively adjoining rows or columns that are parallel to an existing row/column or have at most one 1-entry. Equivalently, series-parallel matrices are representation matrices of graphic matroids of series-parallel graphs, which can be recognized in linear time. We propose an algorithm that, for an m-by-n matrix A with k nonzeros, determines in expected $\mathcal{O}(m + n + k)$ time whether A is series-parallel, or returns a minimal non-series-parallel submatrix of A. We complement the algorithm with an efficient implementation and report on computational results.
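The definition above suggests a straightforward, though far from linear-time, recognition procedure: repeatedly delete a row or column that duplicates another one or has at most one 1-entry, and accept if the matrix becomes empty. The sketch below implements this naive reduction under the assumption that the order of deletions does not matter, mirroring the confluence of the classical series-parallel graph reductions; it is meant only to illustrate the definition, not to reproduce the paper's expected linear-time algorithm.

```python
# Naive reduction check for series-parallel binary matrices, based on the
# definition: delete rows/columns that duplicate ("are parallel to") another
# row/column or contain at most one 1-entry, until nothing is removable.
# Assumes the deletion order is irrelevant; polynomial, not linear, time.

def is_series_parallel(rows):
    rows = [tuple(r) for r in rows]
    while rows and rows[0]:
        n = len(rows[0])
        cols = [tuple(r[j] for r in rows) for j in range(n)]

        def removable(vectors):
            # Index of a vector with <= one 1-entry or equal to an earlier one.
            seen = set()
            for idx, v in enumerate(vectors):
                if sum(v) <= 1 or v in seen:
                    return idx
                seen.add(v)
            return None

        i = removable(rows)
        if i is not None:
            rows = rows[:i] + rows[i + 1:]
            continue
        j = removable(cols)
        if j is not None:
            rows = [r[:j] + r[j + 1:] for r in rows]
            continue
        return False  # no removable row or column remains
    return True


if __name__ == "__main__":
    print(is_series_parallel([[1, 1, 0], [1, 1, 1], [0, 0, 1]]))  # True
    print(is_series_parallel([[1, 1, 0], [0, 1, 1], [1, 0, 1]]))  # False
```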
Causal inference on populations embedded in social networks poses technical challenges, since the typical no-interference assumption may no longer hold. For instance, in the context of social research, the outcome of a study unit will likely be affected by an intervention or treatment received by close neighbors. While inverse probability-of-treatment weighted (IPW) estimators have been developed for this setting, they are often highly inefficient. In this work, we assume that the network is a union of disjoint components and propose doubly robust (DR) estimators that combine models for the treatment and the outcome and that are consistent and asymptotically normal if either model is correctly specified. We present empirical results that illustrate the DR property and the efficiency gain of DR over IPW estimators when both the outcome and treatment models are correctly specified. Simulations are conducted for networks with equal and unequal component sizes and for outcome data with and without a multilevel structure. We apply these methods in an illustrative analysis using the Add Health network, examining the direct and indirect impact of maternal college education on adolescent school performance.
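For readers less familiar with doubly robust estimation, the sketch below shows the classical (no-interference) augmented IPW combination of an outcome model and a treatment (propensity) model; the network estimators developed in the paper additionally account for neighbors' treatments within components, which is not reproduced here.

```python
import numpy as np

def aipw_mean(Y, A, e_hat, m1_hat):
    """Classical (no-interference) augmented IPW estimator of E[Y(1)],
    shown only to illustrate how outcome and treatment models combine.

    Y      : observed outcomes
    A      : binary treatment indicators
    e_hat  : fitted propensity scores  P(A = 1 | covariates)
    m1_hat : fitted outcome regression E[Y | A = 1, covariates]
    """
    Y, A, e_hat, m1_hat = map(np.asarray, (Y, A, e_hat, m1_hat))
    # Outcome-model prediction plus an inverse-probability-weighted residual
    # correction: consistent if either e_hat or m1_hat is correctly specified.
    return np.mean(m1_hat + A * (Y - m1_hat) / e_hat)
```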
Constant-function market makers (CFMMs), such as Uniswap, are automated exchanges offering trades among a set of assets. We study their technical relationship to another class of automated market makers, cost-function prediction markets. We first introduce axioms for market makers and show that CFMMs with concave potential functions characterize "good" market makers according to these axioms. We then show that every such CFMM on $n$ assets is equivalent to a cost-function prediction market for events with $n$ outcomes. Our construction directly converts a CFMM into a prediction market and vice versa. Conceptually, our results show that desirable market-making axioms are equivalent to desirable information-elicitation axioms, i.e., markets are good at facilitating trade if and only if they are good at revealing beliefs. For example, we show that every CFMM implicitly defines a \emph{proper scoring rule} for eliciting beliefs; the scoring rule for Uniswap is unusual, but known. From a technical standpoint, our results show how tools for prediction markets and CFMMs can interoperate. We illustrate this interoperability by showing how liquidity strategies from both literatures transfer to the other, yielding new market designs.
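Two textbook members of these classes may help fix ideas: the constant-product CFMM used by Uniswap and the logarithmic market scoring rule (LMSR), a standard cost-function prediction market. The sketch below shows only these well-known special cases (ignoring fees); the paper's general conversion between CFMMs and cost-function markets is not reproduced.

```python
import math

def uniswap_buy(reserves, asset_out, amount_out):
    """Constant-product CFMM (Uniswap): trades keep x * y constant.
    Returns how much of the other asset a trader must pay in order to
    withdraw `amount_out` of `asset_out` from the two reserves (no fees)."""
    x, y = reserves
    k = x * y
    if asset_out == 0:
        return k / (x - amount_out) - y   # amount of asset 1 to pay in
    return k / (y - amount_out) - x       # amount of asset 0 to pay in

def lmsr_cost(q, b=1.0):
    """Cost function of the logarithmic market scoring rule (LMSR), a
    standard cost-function prediction market: C(q) = b * log(sum_i exp(q_i/b)).
    The price of outcome i is the partial derivative dC/dq_i."""
    m = max(qi / b for qi in q)           # shift for numerical stability
    return b * (m + math.log(sum(math.exp(qi / b - m) for qi in q)))
```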
The computational study of equilibria involving constraints on players' strategies has been largely neglected. However, in real-world applications, players are usually subject to constraints ruling out the feasibility of some of their strategies, such as, e.g., safety requirements and budget caps. Computational studies on constrained versions of the Nash equilibrium have led to some results under very stringent assumptions, while finding constrained versions of the correlated equilibrium (CE) is still unexplored. In this paper, we introduce and computationally characterize constrained Phi-equilibria -- a more general notion than constrained CEs -- in normal-form games. We show that computing such equilibria is in general computationally intractable and that the set of such equilibria may not be convex, providing a sharp divide with unconstrained CEs. Nevertheless, we provide a polynomial-time algorithm for computing a constrained (approximate) Phi-equilibrium maximizing a given linear function when either the number of constraints or the number of players' actions is fixed. Moreover, in the special case in which a player's constraints do not depend on other players' strategies, we show that an exact, function-maximizing equilibrium can be computed in polynomial time, while one (approximate) equilibrium can be found with an efficient decentralized no-regret learning algorithm.
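For context, the display below recalls the standard (unconstrained) definition of a Phi-equilibrium, which the paper extends with constraints on the players' strategies: a distribution $x$ over joint action profiles is a Phi-equilibrium if no player $i$ can gain by applying any transformation $\phi$ from their deviation set $\Phi_i$ to the recommended action; taking $\Phi_i$ to be all swap functions recovers correlated equilibria. The constrained notion introduced in the paper is not reproduced here.
\[
\sum_{a} x(a)\, u_i(a) \;\ge\; \sum_{a} x(a)\, u_i\bigl(\phi(a_i), a_{-i}\bigr)
\qquad \text{for every player } i \text{ and every } \phi \in \Phi_i ,
\]
where $u_i$ is player $i$'s utility and $a = (a_i, a_{-i})$ ranges over joint action profiles.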
HTTP Adaptive Streaming (HAS) is nowadays a popular solution for multimedia delivery. The novelty of HAS lies in the possibility of continuously adapting the streaming session to current network conditions, facilitated by Adaptive Bitrate (ABR) algorithms. Various popular streaming and Video on Demand services such as Netflix, Amazon Prime Video, and Twitch use this method. Given this broad consumer base, ABR algorithms are continuously improved to increase user satisfaction. Insights for these improvements are gathered, among other sources, within the research area of Quality of Experience (QoE). Within this field, various researchers have dedicated their work to identifying potential impairments and testing their impact on viewers' QoE. Two frequently discussed visual impairments influencing QoE are stalling events and quality switches. So far, it has commonly been assumed that stalling events have the worst impact on QoE. This paper challenges this belief and reviews this assumption by comparing stalling events with multiple quality switches and with high-amplitude quality switches. Two subjective studies were conducted: in the first, participants received a monetary incentive, while the second was carried out with volunteers. The statistical analysis demonstrated that stalling events do not result in the worst degradation of QoE. These findings suggest that a reevaluation of the effect of stalling events in QoE research is needed; they may therefore inform further research and improve current adaptation strategies in ABR algorithms.
Gaussian elimination with partial pivoting (GEPP) remains the most common method to solve dense linear systems. Each GEPP step uses a row transposition pivot movement, if needed, to ensure that the leading pivot entry is maximal in magnitude in the leading column of the remaining untriangularized subsystem. We use theoretical and numerical approaches to study how often this pivot movement is needed. We provide full distributional descriptions of the number of pivot movements needed by GEPP on particular Haar random ensembles, and we compare these models to other common transformations from randomized numerical linear algebra. Additionally, we introduce new random ensembles with fixed pivot movement counts and fixed sparsity, $\alpha$. Experiments estimating the empirical spectral density (ESD) of these random ensembles lead to a new conjecture on a universality class of random matrices with fixed sparsity whose scaled ESD converges to a measure on the complex unit disk that depends on $\alpha$ and interpolates between the uniform measure on the unit disk and the Dirac measure at the origin.
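The counting experiment at the heart of the abstract is easy to state in code: run GEPP and record how many elimination steps require a row swap. The NumPy sketch below is our own illustration (assuming a square, nonsingular input), not the paper's code.

```python
import numpy as np

def gepp_pivot_movements(A):
    """Run Gaussian elimination with partial pivoting on a square,
    nonsingular matrix and count how many steps need a row transposition
    ('pivot movement')."""
    A = np.array(A, dtype=float)   # work on a copy
    n = A.shape[0]
    swaps = 0
    for k in range(n - 1):
        # Row holding the max-magnitude entry in the leading column of the
        # remaining untriangularized subsystem.
        p = k + int(np.argmax(np.abs(A[k:, k])))
        if p != k:
            A[[k, p]] = A[[p, k]]   # pivot movement: row transposition
            swaps += 1
        # Standard elimination update of the trailing submatrix.
        A[k + 1:, k] /= A[k, k]
        A[k + 1:, k + 1:] -= np.outer(A[k + 1:, k], A[k, k + 1:])
    return swaps

# Example: count pivot movements for a Gaussian test matrix.
rng = np.random.default_rng(0)
print(gepp_pivot_movements(rng.standard_normal((200, 200))))
```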