We show how to find all $k$ marked elements in a list of size $N$ using the optimal number $O(\sqrt{N k})$ of quantum queries and only a polylogarithmic overhead in the gate complexity, in the setting where one has a small quantum memory. Previous algorithms either incurred a factor $k$ overhead in the gate complexity, or had an extra factor $\log(k)$ in the query complexity. We then consider the problem of finding a multiplicative $\delta$-approximation of $s = \sum_{i=1}^N v_i$ where $v=(v_i) \in [0,1]^N$, given quantum query access to a binary description of $v$. We give an algorithm that does so, with probability at least $1-\rho$, using $O(\sqrt{N \log(1/\rho) / \delta})$ quantum queries (under mild assumptions on $\rho$). This quadratically improves the dependence on $1/\delta$ and $\log(1/\rho)$ compared to a straightforward application of amplitude estimation. To obtain the improved $\log(1/\rho)$ dependence we use the first result.
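For orientation, the stated quadratic improvement can be read side by side as follows (this is only a paraphrase of the claim above; the precise constants and assumptions are in the paper). A straightforward use of amplitude estimation, with the success probability boosted to $1-\rho$ by taking a median of independent runs, costs on the order of
\[
\frac{\sqrt{N}\,\log(1/\rho)}{\delta}
\qquad\text{queries, versus the claimed}\qquad
\sqrt{\frac{N\,\log(1/\rho)}{\delta}},
\]
so both the $1/\delta$ and the $\log(1/\rho)$ factors move under the square root.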
Suppose that $S \subseteq [n]^2$ contains no three points of the form $(x,y), (x,y+\delta), (x+\delta,y')$, where $\delta \neq 0$. How big can $S$ be? Trivially, $n \le |S| \le n^2$. Slight improvements on these bounds are obtained from Shkredov's upper bound for the corners problem [Shk06], which shows that $|S| \le O(n^2/(\log \log n)^c)$ for some small $c > 0$, and a construction due to Petrov [Pet23], which shows that $|S| \ge \Omega(n \log n/\sqrt{\log \log n})$. Could it be that for all $\varepsilon > 0$, $|S| \le O(n^{1+\varepsilon})$? We show that if so, this would rule out obtaining $\omega = 2$ using a large family of abelian groups in the group-theoretic framework of Cohn, Kleinberg, Szegedy and Umans [CU03,CKSU05] (which is known to capture the best bounds on $\omega$ to date), for which no barriers are currently known. Furthermore, an upper bound of $O(n^{4/3 - \varepsilon})$ for any fixed $\varepsilon > 0$ would rule out a conjectured approach to obtain $\omega = 2$ of [CKSU05]. Along the way, we encounter several problems that have much stronger constraints and that would already have these implications.
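For readers who want to experiment with small $n$, the forbidden configuration can be checked by brute force; the sketch below is a hypothetical helper (not from the paper) that tests whether a given $S \subseteq [n]^2$ avoids all patterns $(x,y), (x,y+\delta), (x+\delta,y')$ with $\delta \neq 0$.

```python
# Brute-force sketch (hypothetical helper, not from the paper): check whether
# S inside [n]^2 avoids the pattern (x, y), (x, y + d), (x + d, y2) with d != 0.
def avoids_pattern(S, n):
    """Return True if S contains no triple (x, y), (x, y + d), (x + d, y2) with d != 0."""
    S = set(S)
    for (x, y) in S:
        for d in range(-(n - 1), n):
            if d == 0:
                continue
            if (x, y + d) in S and any((x + d, y2) in S for y2 in range(1, n + 1)):
                return False
    return True

# Tiny usage example on [3]^2: a single column is pattern-free, since the third
# point of any forbidden triple would have to lie in a different column.
print(avoids_pattern({(1, 1), (1, 2), (1, 3)}, 3))  # True
```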
Let $\{\Lambda_n=\{\lambda_{1,n},\ldots,\lambda_{d_n,n}\}\}_n$ be a sequence of finite multisets of real numbers such that $d_n\to\infty$ as $n\to\infty$, and let $f:\Omega\subset\mathbb R^d\to\mathbb R$ be a Lebesgue measurable function defined on a domain $\Omega$ with $0<\mu_d(\Omega)<\infty$, where $\mu_d$ is the Lebesgue measure in $\mathbb R^d$. We say that $\{\Lambda_n\}_n$ has an asymptotic distribution described by $f$, and we write $\{\Lambda_n\}_n\sim f$, if \[ \lim_{n\to\infty}\frac1{d_n}\sum_{i=1}^{d_n}F(\lambda_{i,n})=\frac1{\mu_d(\Omega)}\int_\Omega F(f({\boldsymbol x})){\rm d}{\boldsymbol x}\qquad\qquad(*) \] for every continuous function $F$ with bounded support. If $\Lambda_n$ is the spectrum of a matrix $A_n$, we say that $\{A_n\}_n$ has an asymptotic spectral distribution described by $f$ and we write $\{A_n\}_n\sim_\lambda f$. In the case where $d=1$, $\Omega$~is a bounded interval, $\Lambda_n\subseteq f(\Omega)$ for all $n$, and $f$ satisfies suitable conditions, Bogoya, B\"ottcher, Grudsky, and Maximenko proved that the asymptotic distribution (*) implies the uniform convergence to $0$ of the difference between the properly sorted vector $[\lambda_{1,n},\ldots,\lambda_{d_n,n}]$ and the vector of samples $[f(x_{1,n}),\ldots,f(x_{d_n,n})]$, i.e., \[ \lim_{n\to\infty}\,\max_{i=1,\ldots,d_n}|f(x_{i,n})-\lambda_{\tau_n(i),n}|=0, \qquad\qquad(**) \] where $x_{1,n},\ldots,x_{d_n,n}$ is a uniform grid in $\Omega$ and $\tau_n$ is the sorting permutation. We extend this result to the case where $d\ge1$ and $\Omega$ is a Peano--Jordan measurable set (i.e., a bounded set with $\mu_d(\partial\Omega)=0$). See the rest of the abstract in the manuscript.
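As a minimal numerical illustration of (**) in the classical one-dimensional Toeplitz setting (our toy example, not taken from the manuscript), let $A_n$ be the second-difference matrix, whose symbol is $f(\theta)=2-2\cos\theta$ on $\Omega=(0,\pi)$; for this matrix the sorted eigenvalues match the samples of $f$ on a uniform grid up to rounding error.

```python
# Minimal numerical sketch of (**) in the classical d = 1 Toeplitz case:
# A_n = tridiag(-1, 2, -1) has symbol f(theta) = 2 - 2*cos(theta) on Omega = (0, pi),
# and the sorted eigenvalues are uniformly close to samples of f on a uniform grid.
import numpy as np

n = 200
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
eigs = np.sort(np.linalg.eigvalsh(A))           # sorted eigenvalues lambda_{tau_n(i), n}
grid = np.pi * np.arange(1, n + 1) / (n + 1)    # uniform grid x_{i,n} in (0, pi)
samples = 2 - 2 * np.cos(grid)                  # f(x_{i,n}), already increasing
print(np.max(np.abs(samples - eigs)))           # ~ machine precision for this matrix
```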
In 1-equation URANS models of turbulence, the eddy viscosity is given by $\nu_{T}=0.55l(x,t)\sqrt{k(x,t)}$. The length scale $l$ must be pre-specified and $k(x,t)$ is determined by solving a nonlinear partial differential equation. We show that in interesting cases the spatial mean of $k(x,t)$ satisfies a simple ordinary differential equation. Using its solution in $\nu_{T}$ results in a 1/2-equation model. This model has attractive analytic properties. Further, in comparative tests in 2d and 3d, the velocity statistics produced by the 1/2-equation model are comparable to those of the full 1-equation model.
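A minimal sketch of the quantity the 1/2-equation model evaluates, assuming a prescribed length scale $l$ and a sampled field $k(x,t)$ (the ordinary differential equation satisfied by the spatial mean of $k$ is derived in the paper and is not reproduced here):

```python
# Minimal sketch (not the paper's model): form the eddy viscosity
# nu_T = 0.55 * l * sqrt(k) from the spatial mean of k(x, t) instead of the
# pointwise field, which is what the 1/2-equation model does.
import numpy as np

def eddy_viscosity_half_equation(k_field, l):
    """k_field: array of k(x, t) values on a uniform grid; l: prescribed length scale."""
    k_mean = np.mean(k_field)            # spatial mean of k at the current time
    return 0.55 * l * np.sqrt(k_mean)    # nu_T = 0.55 * l * sqrt(<k>)

# Usage: a synthetic k field with mean 0.04 and l = 0.1 gives nu_T = 0.55*0.1*0.2 = 0.011.
print(eddy_viscosity_half_equation(np.full(64, 0.04), 0.1))
```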
We prove that if $X,Y$ are positive, independent, non-Dirac random variables and if for $\alpha,\beta\ge 0$, $\alpha\neq \beta$, $$ \psi_{\alpha,\beta}(x,y)=\left(y\,\tfrac{1+\beta(x+y)}{1+\alpha x+\beta y},\;x\,\tfrac{1+\alpha(x+y)}{1+\alpha x+\beta y}\right), $$ then the random variables $U$ and $V$ defined by $(U,V)=\psi_{\alpha,\beta}(X,Y)$ are independent if and only if $X$ and $Y$ follow Kummer distributions with suitably related parameters. In other words, any invariant measure for a lattice recursion model governed by $\psi_{\alpha,\beta}$ in the scheme introduced by Croydon and Sasada in \cite{CS2020} is necessarily a product measure with Kummer marginals. The result extends earlier characterizations of Kummer and gamma laws by independence of $$ U=\tfrac{Y}{1+X}\quad\mbox{and}\quad V= X\left(1+\tfrac{Y}{1+X}\right), $$ which corresponds to the case of $\psi_{1,0}$. We also show that this independence property of Kummer laws covers, as limiting cases, several independence models known in the literature: the Lukacs, the Kummer-Gamma, the Matsumoto-Yor, and the discrete Korteweg-de Vries ones.
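A minimal sketch of the map $\psi_{\alpha,\beta}$ as defined above, together with a check that $\psi_{1,0}$ reduces to the earlier transformation (sampling from the Kummer laws themselves is not shown):

```python
# Minimal sketch of the map psi_{alpha,beta} from the statement above.
def psi(alpha, beta, x, y):
    """Return (u, v) = psi_{alpha,beta}(x, y) for positive x, y."""
    denom = 1 + alpha * x + beta * y
    u = y * (1 + beta * (x + y)) / denom
    v = x * (1 + alpha * (x + y)) / denom
    return u, v

# The special case psi_{1,0} reduces to U = Y/(1+X), V = X*(1 + Y/(1+X)):
x, y = 2.0, 3.0
print(psi(1.0, 0.0, x, y))                     # (1.0, 4.0) for this input
print(y / (1 + x), x * (1 + y / (1 + x)))      # matches
```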
We study $L_2$-approximation problems $\text{APP}_d$ in the worst case setting in the weighted Korobov spaces $H_{d,\boldsymbol{\alpha},\boldsymbol{\gamma}}$ with parameter sequences $\boldsymbol{\gamma}=\{\gamma_j\}$ and $\boldsymbol{\alpha}=\{\alpha_j\}$ of positive real numbers satisfying $1\ge \gamma_1\ge \gamma_2\ge \cdots\ge 0$ and $\frac12<\alpha_1\le \alpha_2\le \cdots$. We consider the minimal worst case error $e(n,\text{APP}_d)$ of algorithms for the $d$-variate problem that use $n$ arbitrary continuous linear functionals. We study polynomial convergence of the minimal worst case error, which means that $e(n,\text{APP}_d)$ converges to zero polynomially fast with increasing $n$. We recall the notions of polynomial, strongly polynomial, weak, and $(t_1,t_2)$-weak tractability. In particular, polynomial tractability means that the number of arbitrary continuous linear functionals needed to achieve accuracy $\varepsilon$ is polynomial in $d$ and $\varepsilon^{-1}$. We obtain that the matching necessary and sufficient condition on the sequences $\boldsymbol{\gamma}$ and $\boldsymbol{\alpha}$ for strongly polynomial or polynomial tractability is $$\delta:=\liminf_{j\to\infty}\frac{\ln \gamma_j^{-1}}{\ln j}>0,$$ and the exponent of strongly polynomial tractability is $$p^{\text{str}}=2\max\Big\{\frac{1}{\delta},\,\frac{1}{2\alpha_1}\Big\}.$$
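As a numerical sanity check with hypothetical example weights (not taken from the paper), $\gamma_j=j^{-s}$ gives $\delta=s$, and the exponent formula can then be evaluated directly:

```python
# Sanity-check sketch with hypothetical weights gamma_j = j**(-s) (not from the paper):
# delta = liminf_j ln(1/gamma_j)/ln(j) equals s, and the exponent of strong polynomial
# tractability is p_str = 2*max(1/delta, 1/(2*alpha_1)).
import numpy as np

s, alpha_1 = 2.0, 1.0
j = np.arange(2, 10**5)
delta_est = np.min(np.log(j**s) / np.log(j))   # equals s up to rounding for this choice
p_str = 2 * max(1 / delta_est, 1 / (2 * alpha_1))
print(delta_est, p_str)                        # approximately 2.0 and 1.0
```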
We present difference schemes for stochastic transport equations with low-regularity velocity fields. We establish $L^2$ stability and convergence of the difference approximations under conditions that are less strict than those required for deterministic transport equations. The $L^2$ estimate, crucial for the analysis, is obtained through a discrete duality argument and a comprehensive examination of a class of backward parabolic difference schemes.
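For orientation only, a standard first-order upwind discretization of the deterministic transport equation $u_t+b(x)u_x=0$ is sketched below; the paper's schemes for the stochastic equation with low-regularity velocity, and the discrete duality argument behind the $L^2$ estimate, are not reproduced here.

```python
# Standard upwind sketch for the deterministic transport equation u_t + b(x) u_x = 0,
# shown only for orientation; it is not the paper's scheme for the stochastic equation.
import numpy as np

def upwind_step(u, b, dt, dx):
    """One explicit upwind step on a periodic grid."""
    up = np.roll(u, -1)   # u_{j+1}
    um = np.roll(u, 1)    # u_{j-1}
    # one-sided difference chosen according to the sign of b at each node
    dudx = np.where(b > 0, (u - um) / dx, (up - u) / dx)
    return u - dt * b * dudx

# Usage: advect a Gaussian bump with constant velocity b = 1 on [0, 1), CFL number 0.5.
N = 200; dx = 1.0 / N; dt = 0.5 * dx
x = np.arange(N) * dx
u = np.exp(-200 * (x - 0.5) ** 2)
b = np.ones(N)
for _ in range(100):
    u = upwind_step(u, b, dt, dx)
```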
This short note shows the superconvergence of an $H(\mathrm{grad}\,\mathrm{curl})$-nonconforming brick element, very recently introduced in [17], for the quad-curl problem. The supercloseness result relies on suitable modifications of both the interpolation and the discrete formulation, leading to an $O(h^2)$ superclose order in the discrete $H(\mathrm{grad}\,\mathrm{curl})$ norm. Moreover, we propose a suitable postprocessing method to obtain global superconvergence. Numerical results verify our theory.
Challenges with data in the big-data era include: (i) the dimension $p$ is often larger than the sample size $n$; and (ii) outliers or contaminated points are frequently hidden and difficult to detect. Challenge (i) renders most conventional methods inapplicable and has therefore attracted tremendous attention from the statistics, computer science, and bio-medical communities; numerous penalized regression methods have been introduced as modern tools for analyzing high-dimensional data. Challenge (ii), by contrast, has received disproportionately little attention. Penalized regression methods perform their primary task very well and are often expected to handle challenge (ii) at the same time. As revealed in this article, however, most of them can break down in the presence of a single outlier (or a single adversarially contaminated point). The article systematically examines leading penalized regression methods in the literature in terms of their robustness, provides a quantitative assessment, and shows that most of them can indeed break down under a single outlier. Consequently, a novel robust penalized regression method based on the least sum of squares of depth trimmed residuals is proposed and studied carefully. Experiments with simulated and real data reveal that the newly proposed method can outperform some leading competitors in estimation and prediction accuracy in the cases considered.
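As a hedged illustration of the flavor of such an objective (plain residual-magnitude trimming is used here; the paper trims residuals by depth, which is not reproduced), one can sum the squares of the $h$ best-fitting residuals and add a penalty:

```python
# Hedged sketch only: an L1-penalized objective that sums the squares of the h smallest
# residuals in magnitude. The paper trims residuals by *depth*; plain magnitude trimming
# is used here purely for illustration.
import numpy as np

def trimmed_penalized_loss(beta, X, y, lam, h):
    """Sum of the h smallest squared residuals plus an L1 penalty on beta."""
    r2 = (y - X @ beta) ** 2
    kept = np.sort(r2)[:h]                 # keep the h best-fitting observations
    return kept.sum() + lam * np.abs(beta).sum()

# Usage: compare the loss at the true coefficients with the loss at zero on synthetic
# data containing a single gross outlier; the outlier is trimmed away at the true fit.
rng = np.random.default_rng(0)
n, p = 50, 10
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:3] = 1.0
y = X @ beta_true + 0.1 * rng.standard_normal(n)
y[0] += 100.0                              # a single adversarially contaminated point
h = int(0.9 * n)
print(trimmed_penalized_loss(beta_true, X, y, lam=0.1, h=h))   # small
print(trimmed_penalized_loss(np.zeros(p), X, y, lam=0.1, h=h)) # much larger
```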
Given a matrix-valued function $\mathcal{F}(\lambda)=\sum_{i=1}^d f_i(\lambda) A_i$, with complex matrices $A_i$ and $f_i(\lambda)$ analytic functions for $i=1,\ldots,d$, we discuss a method for the numerical approximation of the distance to singularity for $\mathcal{F}(\lambda)$. The closest singular matrix-valued function $\widetilde{\mathcal{F}}(\lambda)$, with respect to the Frobenius norm, is approximated by an iterative method. The singularity condition on the matrix-valued function is translated into a numerical constraint for a suitable minimization problem. Unlike the case of matrix polynomials, in the general setting of matrix-valued functions the main issue is that the function $\det ( \widetilde{\mathcal{F}}(\lambda) )$ may have an infinite number of roots. A key feature of the numerical method is that it can be extended to different structures, such as sparsity patterns induced by the matrix coefficients.
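A toy sketch (not the paper's method) of why $\det(\widetilde{\mathcal{F}}(\lambda))$ can have infinitely many roots once the $f_i$ are general analytic functions rather than polynomials: with the hypothetical coefficients below, the determinant is $\sin\lambda$.

```python
# Toy sketch with hypothetical data (not the paper's method): assemble
# F(lambda) = sum_i f_i(lambda) * A_i and sample det(F(lambda)) on a grid.
# Here det F(lambda) = sin(lambda), which has infinitely many roots.
import numpy as np

A1 = np.array([[1.0, 0.0], [0.0, 0.0]])
A2 = np.array([[0.0, 0.0], [0.0, 1.0]])
f = [lambda lam: 1.0, lambda lam: np.sin(lam)]

def F(lam):
    return f[0](lam) * A1 + f[1](lam) * A2

grid = np.linspace(0.0, 20.0, 7)
print([float(np.linalg.det(F(lam))) for lam in grid])
```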
We consider the minimal thermodynamic cost of an individual computation, where a single input $x$ is mapped to a single output $y$. In prior work, Zurek proposed that this cost was given by $K(x\vert y)$, the conditional Kolmogorov complexity of $x$ given $y$ (up to an additive constant which does not depend on $x$ or $y$). However, this result was derived from an informal argument, applied only to deterministic computations, and had an arbitrary dependence on the choice of protocol (via the additive constant). Here we use stochastic thermodynamics to derive a generalized version of Zurek's bound from a rigorous Hamiltonian formulation. Our bound applies to all quantum and classical processes, whether noisy or deterministic, and it explicitly captures the dependence on the protocol. We show that $K(x\vert y)$ is a minimal cost of mapping $x$ to $y$ that must be paid using some combination of heat, noise, and protocol complexity, implying a tradeoff between these three resources. Our result is a kind of "algorithmic fluctuation theorem" with implications for the relationship between the Second Law and the Physical Church-Turing thesis.