We consider the simultaneously fast and in-place computation of the Euclidean polynomial modular remainder $R(X) \equiv A(X) \mod B(X)$ with $A$ and $B$ of respective degrees $n$ and $m\le n$. Fast algorithms for this usually come at the expense of (potentially large) extra temporary space. To remain in-place, a further issue is to avoid storing the whole quotient $Q(X)$ such that $A=BQ+R$. If the multiplication of two polynomials of degree $k$ can be performed with $M(k)$ operations and $O(k)$ extra space, and if the input space of $A$ or $B$ may be used for intermediate computations, provided $A$ and $B$ are restored to their initial states once the remainder computation completes, we here propose an in-place algorithm (that is, with its extra required space reduced to $O(1)$ only) using at most $O(n/m\,M(m)\log(m))$ arithmetic operations if $M(m)$ is quasi-linear, or $O(n/m\,M(m))$ otherwise. We also propose variants that compute -- still in-place and with the same kind of complexity bounds -- the over-place remainder $A(X) \equiv A(X) \mod B(X)$, the accumulated remainder $R(X) += A(X) \mod B(X)$ and the accumulated modular multiplication $R(X) += A(X)C(X) \mod B(X)$. To achieve this, we develop techniques for Toeplitz matrix operations whose output is also part of the input. Fast and in-place accumulating versions are obtained for the latter, and thus for convolutions, and then used for polynomial remaindering. This is realized via further reductions to accumulated polynomial multiplication, for which fast in-place algorithms have recently been developed.
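To make the over-place setting concrete, here is a minimal sketch of a quadratic-time schoolbook remainder that overwrites $A$ with $A \mod B$ and never stores the quotient $Q$: each quotient coefficient is used immediately and discarded. This is only an illustration of the space constraint, not the fast in-place algorithm of the paper; the function name and list representation are ours.

```python
def overplace_remainder(A, B):
    """Over-place schoolbook remainder: A(X) <- A(X) mod B(X).

    A and B are coefficient lists (A[i] is the coefficient of X^i);
    B[-1] must be invertible.  Quadratic-time illustration only, not
    the fast in-place algorithm of the paper, but it never stores the
    quotient Q(X): each quotient coefficient is consumed on the spot.
    """
    n, m = len(A) - 1, len(B) - 1
    for i in range(n, m - 1, -1):
        q = A[i] / B[m]          # next quotient coefficient
        A[i] = 0                 # leading term of A is eliminated ...
        for j in range(m):       # ... subtract q * X^(i-m) * B(X)
            A[i - m + j] -= q * B[j]
    del A[m:]                    # the remainder has degree < m
    return A
```

Note that $B$ is left untouched, while $A$ is destroyed by design (this is the over-place variant $A(X) \equiv A(X) \mod B(X)$, not the three-operand one).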
Suppose that one can construct a valid $(1-\delta)$-confidence interval (CI) for each of $K$ parameters of potential interest. If a data analyst uses an arbitrary data-dependent criterion to select some subset $S$ of parameters, then the aforementioned CIs for the selected parameters are no longer valid due to selection bias. We design a new method to adjust the intervals in order to control the false coverage rate (FCR). The main established method is the "BY procedure" by Benjamini and Yekutieli (JASA, 2005). The BY guarantees require certain restrictions on the selection criterion and on the dependence between the CIs. We propose a new simple method which, in contrast, is valid under any dependence structure between the original CIs, and any (unknown) selection criterion, but which only applies to a special, yet broad, class of CIs that we call e-CIs. To elaborate, our procedure simply reports $(1-\delta|S|/K)$-CIs for the selected parameters, and we prove that it controls the FCR at $\delta$ for confidence intervals that implicitly invert e-values; examples include those constructed via supermartingale methods, via universal inference, or via Chernoff-style bounds, among others. Our method, which we call the e-BY procedure, is admissible, and recovers the BY procedure as a special case via a particular calibrator. Our work also has implications for post-selection inference in sequential settings, since it applies at stopping times, to continuously-monitored confidence sequences, and under bandit sampling. We demonstrate the efficacy of our procedure using numerical simulations and real A/B testing data from Twitter.
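The reporting rule itself is one line: replace the per-interval level $\delta$ by $\delta|S|/K$. Below is a small sketch, using a Chernoff-style sub-Gaussian CI as the underlying e-CI (the abstract lists Chernoff-style bounds among the e-CI examples); the function names and the specific bound are our illustrative choices, not the paper's code.

```python
import math

def chernoff_ci(xbar, n, sigma, alpha):
    """(1-alpha)-CI for a sub-Gaussian mean from a Chernoff-style bound:
    P(|xbar - mu| >= t) <= 2 exp(-n t^2 / (2 sigma^2)).
    CIs of this form implicitly invert e-values, so the e-BY
    adjustment below applies to them."""
    half = sigma * math.sqrt(2.0 * math.log(2.0 / alpha) / n)
    return (xbar - half, xbar + half)

def e_by(xbars, n, sigma, delta, selected):
    """e-BY reporting rule: for the selected indices, report
    (1 - delta*|S|/K)-CIs.  For e-CIs this controls the FCR at delta
    under any dependence and any (unknown) selection criterion."""
    K, S = len(xbars), len(selected)
    level = delta * S / K      # adjusted per-interval miscoverage
    return {i: chernoff_ci(xbars[i], n, sigma, level) for i in selected}
```

The selection rule can be anything data-dependent, e.g. `selected = [i for i, x in enumerate(xbars) if x > 1.5]`; the adjustment depends on it only through $|S|$.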
New geometric methods for fast evaluation of derivatives of polynomial and rational B\'{e}zier curves are proposed. They apply an algorithm for evaluating polynomial or rational B\'{e}zier curves, which was recently given by the authors. Numerical tests show that the new approach is more efficient than the methods which use the famous de Casteljau algorithm. The algorithms work well even for high-order derivatives of rational B\'{e}zier curves of high degrees.
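For reference, the classical baseline the new methods are compared against can be sketched in a few lines: evaluate the degree-$(n-1)$ hodograph, whose control points are $n(b_{i+1}-b_i)$, with de Casteljau's algorithm. This is the standard approach, not the authors' new geometric method; scalar control points are used for brevity.

```python
def de_casteljau(ctrl, t):
    """Evaluate a polynomial Bezier curve at t by de Casteljau's algorithm."""
    pts = list(ctrl)
    while len(pts) > 1:
        pts = [(1 - t) * p + t * q for p, q in zip(pts, pts[1:])]
    return pts[0]

def bezier_derivative(ctrl, t):
    """First derivative of a degree-n Bezier curve: a degree-(n-1)
    Bezier curve with control points n*(b_{i+1} - b_i), evaluated
    by de Casteljau."""
    n = len(ctrl) - 1
    diffs = [n * (q - p) for p, q in zip(ctrl, ctrl[1:])]
    return de_casteljau(diffs, t)
```

Each de Casteljau evaluation costs $O(n^2)$ operations, which is what makes repeated high-order derivative evaluation expensive and motivates faster schemes.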
We describe a `discretize-then-relax' strategy to globally minimize integral functionals over functions $u$ in a Sobolev space subject to Dirichlet boundary conditions. The strategy applies whenever the integral functional depends polynomially on $u$ and its derivatives, even if it is nonconvex. The `discretize' step uses a bounded finite element scheme to approximate the integral minimization problem with a convergent hierarchy of polynomial optimization problems over a compact feasible set, indexed by the decreasing size $h$ of the finite element mesh. The `relax' step employs sparse moment-sum-of-squares relaxations to approximate each polynomial optimization problem with a hierarchy of convex semidefinite programs, indexed by an increasing relaxation order $\omega$. We prove that, as $\omega\to\infty$ and $h\to 0$, solutions of such semidefinite programs provide approximate minimizers that converge in a suitable sense (including in certain $L^p$ norms) to the global minimizer of the original integral functional if it is unique. We also report computational experiments showing that our numerical strategy works well even when technical conditions required by our theoretical analysis are not satisfied.
We explore the theoretical possibility of learning $d$-dimensional targets with $W$-parameter models by gradient flow (GF) when $W<d$. Our main result shows that if the targets are described by a particular $d$-dimensional probability distribution, then there exist models with as few as two parameters that can learn the targets with arbitrarily high success probability. On the other hand, we show that for $W<d$ there is necessarily a large subset of GF-non-learnable targets. In particular, the set of learnable targets is not dense in $\mathbb R^d$, and any subset of $\mathbb R^d$ homeomorphic to the $W$-dimensional sphere contains non-learnable targets. Finally, we observe that the model in our main theorem on almost guaranteed two-parameter learning is constructed using a hierarchical procedure and as a result is not expressible by a single elementary function. We show that this limitation is essential in the sense that such learnability can be ruled out for a large class of elementary functions.
We consider the additive version of the matrix denoising problem, where a random symmetric matrix $S$ of size $n$ has to be inferred from the observation of $Y=S+Z$, with $Z$ an independent random matrix modeling a noise. For prior distributions of $S$ and $Z$ that are invariant under conjugation by orthogonal matrices we determine, using results from first and second order free probability theory, the Bayes-optimal (in terms of the mean square error) polynomial estimators of degree at most $D$, asymptotically in $n$, and show that as $D$ increases they converge towards the estimator introduced by Bun, Allez, Bouchaud and Potters in [IEEE Transactions on Information Theory {\bf 62}, 7475 (2016)]. We conjecture that this optimality holds beyond strictly orthogonally invariant priors, and provide partial evidence of this universality phenomenon when $S$ is an arbitrary Wishart matrix and $Z$ is drawn from the Gaussian Orthogonal Ensemble, a case motivated by the related extensive rank matrix factorization problem.
We give a new lower bound for the minimal dispersion of a point set in the unit cube and its inverse function in the high dimension regime. This is done by considering only a very small class of test boxes, which allows us to reduce bounding the dispersion to a problem in extremal set theory. Specifically, we translate a lower bound on the size of $r$-cover-free families to a lower bound on the inverse of the minimal dispersion of a point set. The lower bound we obtain matches the recently obtained upper bound on the minimal dispersion up to logarithmic terms.
Let $1<t<n$ be integers, where $t$ is a divisor of $n$. An R-$q^t$-partially scattered polynomial is a $\mathbb F_q$-linearized polynomial $f$ in $\mathbb F_{q^n}[X]$ that satisfies the condition that for all $x,y\in\mathbb F_{q^n}^*$ such that $x/y\in\mathbb F_{q^t}$, if $f(x)/x=f(y)/y$, then $x/y\in\mathbb F_q$; $f$ is called scattered if this implication holds for all $x,y\in\mathbb F_{q^n}^*$. Two polynomials in $\mathbb F_{q^n}[X]$ are said to be equivalent if their graphs are in the same orbit under the action of the group $\Gamma L(2,q^n)$. For $n>8$ only three families of scattered polynomials in $\mathbb F_{q^n}[X]$ are known: $(i)$~monomials of pseudoregulus type, $(ii)$~binomials of Lunardon-Polverino type, and $(iii)$~a family of quadrinomials defined in [1,10] and extended in [8,13]. In this paper we prove that the polynomial $\varphi_{m,q^J}=X^{q^{J(t-1)}}+X^{q^{J(2t-1)}}+m(X^{q^J}-X^{q^{J(t+1)}})\in\mathbb F_{q^{2t}}[X]$, $q$ odd, $t\ge3$ is R-$q^t$-partially scattered for every value of $m\in\mathbb F_{q^t}^*$ and $J$ coprime with $2t$. Moreover, for every $t>4$ and $q>5$ there exist values of $m$ for which $\varphi_{m,q}$ is scattered and new with respect to the polynomials mentioned in $(i)$, $(ii)$ and $(iii)$ above. The related linear sets are of $\Gamma L$-class at least two.
A constraint satisfaction problem (CSP), $\textsf{Max-CSP}(\mathcal{F})$, is specified by a finite set of constraints $\mathcal{F} \subseteq \{[q]^k \to \{0,1\}\}$ for positive integers $q$ and $k$. An instance of the problem on $n$ variables is given by $m$ applications of constraints from $\mathcal{F}$ to subsequences of the $n$ variables, and the goal is to find an assignment to the variables that satisfies the maximum number of constraints. In the $(\gamma,\beta)$-approximation version of the problem for parameters $0 \leq \beta < \gamma \leq 1$, the goal is to distinguish instances where at least $\gamma$ fraction of the constraints can be satisfied from instances where at most $\beta$ fraction of the constraints can be satisfied. In this work we consider the approximability of this problem in the context of sketching algorithms and give a dichotomy result. Specifically, for every family $\mathcal{F}$ and every $\beta < \gamma$, we show that either a linear sketching algorithm solves the problem in polylogarithmic space, or the problem is not solvable by any sketching algorithm in $o(\sqrt{n})$ space. In particular, we give non-trivial approximation algorithms using polylogarithmic space for infinitely many constraint satisfaction problems. We also extend previously known lower bounds for general streaming algorithms to a wide variety of problems, and in particular the case of $q=k=2$, where we get a dichotomy, and the case when the satisfying assignments of the constraints of $\mathcal{F}$ support a distribution on $[q]^k$ with uniform marginals. Prior to this work, other than sporadic examples, the only systematic classes of CSPs that were analyzed considered the setting of Boolean variables $q=2$, binary constraints $k=2$, singleton families $|\mathcal{F}|=1$ and only considered the setting where constraints are placed on literals rather than variables.
We consider the problem of the exact computation of the marginal eigenvalue distributions in the Laguerre and Jacobi $\beta$ ensembles. In the case $\beta=1$ this is a question of long standing in the mathematical statistics literature. A recursive procedure to accomplish this task is given for $\beta$ a positive integer, and the parameter $\lambda_1$ a non-negative integer. This case is special due to a finite basis of elementary functions, with coefficients which are polynomials. In the Laguerre case with $\beta = 1$ and $\lambda_1 + 1/2$ a non-negative integer some evidence is given of there again being a finite basis, now consisting of elementary functions and the error function multiplied by elementary functions. Moreover, from this the corresponding distributions in the fixed trace case permit a finite basis of power functions, as is also true for $\lambda_1$ a non-negative integer. The fixed trace case in this setting is relevant to quantum information theory and quantum transport problems, allowing particularly the exact determination of Landauer conductance distributions in a previously intractable parameter regime. Our findings also aid in analyzing zeros of the generating function for specific gap probabilities, supporting the validity of an associated large $N$ local central limit theorem.
A uniform $k$-{\sc dag} generalizes the uniform random recursive tree by picking $k$ parents uniformly at random from the existing nodes. It starts with $k$ ``roots''. Each of the $k$ roots is assigned a bit. These bits are propagated through a noisy channel: each parent's bit is flipped with probability $p$, and a majority vote is taken. When all nodes have received their bits, the $k$-{\sc dag} is shown without identifying the roots. The goal is to estimate the majority bit among the roots. We identify the threshold for $p$, as a function of $k$, below which the majority rule among all nodes yields an error $c+o(1)$ with $c<1/2$. Above the threshold, the majority rule errs with probability $1/2+o(1)$.
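The model and the all-nodes majority estimator are easy to simulate. The following toy Monte Carlo sketch (our own; parents are drawn with replacement for simplicity, and $k$ is taken odd so majority votes have no ties) grows the $k$-{\sc dag}, propagates root bits through the noisy channel, and compares the global majority to the root majority.

```python
import random

def kdag_majority(n, k, p, rng):
    """Grow a uniform k-DAG on n nodes (k odd), propagate the k root
    bits through a binary symmetric channel with flip probability p,
    each node taking the majority vote of its k (noisy) parent bits.
    Returns (true root-majority bit, majority bit over all nodes).
    Toy sketch: parents are sampled with replacement."""
    bits = [rng.randint(0, 1) for _ in range(k)]        # the k roots
    root_majority = int(sum(bits) > k / 2)
    for v in range(k, n):
        parents = [rng.randrange(v) for _ in range(k)]  # uniform over existing nodes
        votes = sum(bits[u] ^ (rng.random() < p) for u in parents)
        bits.append(int(votes > k / 2))
    return root_majority, int(sum(bits) > n / 2)
```

With $p=0$ (noiseless channel) the global majority agrees with the root majority on most runs, while for $p$ near $1/2$ the estimate degrades to a coin flip, which is the phase transition the abstract quantifies.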