We study the fundamental challenge of exhibiting explicit functions that have small correlation with low-degree polynomials over $\mathbb{F}_{2}$. Our main contributions are the following.
1. In STOC 2020, CHHLZ introduced a new technique for proving correlation bounds and used it to establish new correlation bounds for low-degree polynomials. They conjectured that their technique generalizes to higher-degree polynomials as well. We give a counterexample to their conjecture; in fact, we rule out even weaker parameters, showing that what they prove is essentially the best possible.
2. We propose a new approach for proving correlation bounds with the central "mod functions", consisting of two steps: (I) showing that the polynomials that maximize correlation are symmetric, and (II) showing that symmetric polynomials have small correlation. Although analogous statements fail in related settings in the literature, we conjecture that (I) is true, and we argue that this approach is not affected by existing "barrier results".
3. We prove our conjecture for quadratic polynomials. Specifically, we determine the maximum possible correlation between quadratic polynomials modulo 2 and the functions $(x_{1},\dots,x_{n})\to z^{\sum x_{i}}$ for any $z$ on the complex unit circle, and show that it is achieved by symmetric polynomials. To obtain our results we develop a new proof technique: we express correlation in terms of directional derivatives and analyze it by slowly restricting the direction.
4. We make partial progress on the conjecture for cubic polynomials; in particular, we prove tight correlation bounds for cubic polynomials whose degree-3 part is symmetric.
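For orientation, here is a standard way to make these notions concrete (notation ours; the paper's normalization may differ). The correlation of a polynomial $p$ over $\mathbb{F}_2$ with the mod function determined by $z$ is
\[ C_z(p) \;=\; \Big| \mathbb{E}_{x \in \{0,1\}^n}\big[ (-1)^{p(x)}\, z^{x_1 + \cdots + x_n} \big] \Big|, \]
and squaring brings in the directional derivatives $D_y p(x) = p(x \oplus y) + p(x)$ that drive the proof technique:
\[ C_z(p)^2 \;=\; \mathbb{E}_{y}\, \mathbb{E}_{x}\big[ (-1)^{D_y p(x)}\, z^{|x \oplus y|}\, \overline{z}^{\,|x|} \big], \qquad |x| = \textstyle\sum_i x_i. \]
Since $D_y p$ has degree one less than $p$, the derivative of a quadratic is affine in $x$, which is what makes restricting the direction $y$ an effective way to analyze the right-hand side.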
This paper investigates the estimation of the interaction function for a class of McKean-Vlasov stochastic differential equations. The estimation is based on observations of the associated particle system at time $T$, in the asymptotic regime where both the time horizon $T$ and the number of particles $N$ tend to infinity. Under the assumption that the interaction function has exponentially decaying tails, our proposed method achieves polynomial rates of convergence for the resulting estimator. In addition, we conduct a thorough analysis of the transform of the associated invariant density as a complex function, which provides essential insights for our main results.
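As a point of reference, a representative model in this class (illustrative only; the paper's exact assumptions, including any confinement term, may differ) is
\[ dX_t \;=\; -\big(W' * \mu_t\big)(X_t)\, dt + dB_t, \qquad \mu_t = \mathrm{Law}(X_t), \]
observed through its interacting particle system
\[ dX_t^{i,N} \;=\; -\frac{1}{N} \sum_{j=1}^{N} W'\big(X_t^{i,N} - X_t^{j,N}\big)\, dt + dB_t^{i}, \qquad i = 1, \dots, N, \]
where $W$ is the interaction function to be estimated from the observed particles.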
We propose and study a framework for quantifying the importance of the choices of parameter values to the result of a query over a database. These parameters occur as constants in logical queries, such as conjunctive queries. In our framework, the importance of a parameter is its SHAP score, a popular instantiation of the game-theoretic Shapley value for measuring the importance of feature values in machine learning models. We make the case for this score by explaining the intuition behind SHAP and by showing that we arrive at the same score via two different, apparently opposing, approaches to quantifying the contribution of a parameter. Applying the SHAP score requires two components in addition to the query and the database: (a) a probability distribution over the combinations of parameter values, and (b) a utility function that measures the similarity between the result for the original parameters and the result for hypothetical parameters. The main question addressed in the paper is the complexity of computing the SHAP score for different distributions and similarity measures. We first address the case of probabilistically independent parameters. As one would expect, the problem is hard for fragments of queries that are hard to evaluate, but it remains hard even for the fragment of acyclic conjunctive queries. In some cases, though, one can efficiently enumerate all relevant parameter combinations, and then the SHAP score can be computed in polynomial time under reasonably general conditions. Also tractable is the case of full acyclic conjunctive queries for certain (natural) similarity functions. We extend our results to conjunctive queries with inequalities between variables and parameters. Finally, we discuss a simple approximation technique for the case of correlated parameters.
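For concreteness, the SHAP score of a parameter $p$ in a set $P$ of parameters is the Shapley value
\[ \mathrm{SHAP}(p) \;=\; \sum_{S \subseteq P \setminus \{p\}} \frac{|S|!\,\big(|P|-|S|-1\big)!}{|P|!}\, \Big( v\big(S \cup \{p\}\big) - v(S) \Big), \]
where, in the setting sketched above, $v(S)$ is the expected utility of the query result when the parameters in $S$ are fixed to their actual values and the remaining parameters are drawn from the given probability distribution.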
It is shown how mixed finite element methods for symmetric positive definite eigenvalue problems related to partial differential operators can provide guaranteed lower eigenvalue bounds. The method is based on a classical compatibility condition (inclusion of kernels) of the mixed scheme and on local constants related to compact embeddings, which are often known explicitly. Applications include scalar second-order elliptic operators, linear elasticity, and the Steklov eigenvalue problem.
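For orientation, guaranteed lower bounds in this literature often take the archetypal form
\[ \lambda \;\geq\; \frac{\lambda_h}{1 + \kappa^2\, \lambda_h}, \]
where $\lambda_h$ is the computed discrete eigenvalue and $\kappa$ is an explicit constant stemming from a compact-embedding estimate (this form is familiar, e.g., from nonconforming discretizations); the precise statements for the mixed schemes, and the role of the inclusion-of-kernels condition, are as developed in the paper and may differ from this sketch.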
The use of alternative operations in differential cryptanalysis, and of alternative notions of differentials, has lately been receiving increasing attention. Recently, Civino et al. designed a block cipher that is secure with respect to classical differential cryptanalysis performed using XOR-differentials, but weaker against an attack based on an alternative difference operation acting on the first s-box of the block. We extend this result to parallel alternative operations, i.e., operations acting on each s-box of the block. First, we recall the mathematical framework needed to define and use such operations. We then perform differential experiments against a toy cipher and compare the effectiveness of the resulting attack with that of the attack based on XOR-differentials.
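To fix ideas (notation ours), classical differential cryptanalysis of an s-box $S$ counts, for an input difference $a$ and an output difference $b$,
\[ \delta_S(a, b) \;=\; \#\{\, x : S(x \oplus a) \oplus S(x) = b \,\}, \]
and the alternative attacks replace $\oplus$ by another operation $\circ$ that still endows the state space with an elementary abelian group structure (as in the framework recalled in the paper), giving
\[ \delta_S^{\circ}(a, b) \;=\; \#\{\, x : S(x \circ a) \circ S(x) = b \,\}. \]
In the parallel setting studied here, $\circ$ is applied coordinate-wise to every s-box of the block rather than only to the first.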
In policy learning, the goal is typically to optimize a primary performance metric, but other, subsidiary metrics often also warrant attention. This paper presents two strategies for evaluating these subsidiary metrics under a policy that is optimal for the primary one. The first relies on a novel margin condition that facilitates Wald-type inference; under this and other regularity conditions, we show that the one-step corrected estimator is efficient. Despite its utility, the margin condition places strong restrictions on how the subsidiary metric behaves for nearly optimal policies, which may not hold in practice. We therefore introduce an alternative, two-stage strategy that does not require a margin condition: the first stage constructs a set of candidate policies, and the second builds a uniform confidence interval over this set. We provide numerical simulations to evaluate the performance of these methods in different scenarios.
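In generic form (notation ours, abstracting away the specifics of the policy-learning problem), the one-step corrected estimator of a target functional $\psi(P_0)$ is
\[ \hat{\psi}_{\mathrm{os}} \;=\; \psi(\hat{P}_n) \;+\; \frac{1}{n} \sum_{i=1}^{n} \hat{\varphi}(O_i), \]
where $O_1, \dots, O_n$ are the observations, $\hat{P}_n$ is an initial estimate of the data-generating distribution, and $\hat{\varphi}$ estimates the efficient influence function of $\psi$ at $\hat{P}_n$. Roughly, margin conditions of this kind bound the probability that the choice between actions is nearly tied, which is what renders the target regular enough for Wald-type inference.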
This paper studies the numerical approximation of McKean-Vlasov stochastic differential equations driven by L\'evy processes. We propose a tamed-adaptive Euler-Maruyama scheme and study its strong convergence, in both finite and infinite time horizons, when applied to classes of L\'evy-driven McKean-Vlasov stochastic differential equations whose drift and diffusion coefficients are non-globally Lipschitz continuous and super-linearly growing.
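For illustration (a generic taming device with notation ours; the paper's scheme and its adaptive step-size rule are more refined), one step of a tamed Euler-Maruyama scheme, with the law approximated by an interacting particle system, reads
\[ X_{n+1}^{i} \;=\; X_n^{i} + \frac{b\big(X_n^{i}, \hat{\mu}_n\big)}{1 + h_n \big| b\big(X_n^{i}, \hat{\mu}_n\big) \big|}\, h_n + \sigma\big(X_n^{i}, \hat{\mu}_n\big)\, \Delta L_n^{i}, \]
where $\hat{\mu}_n$ is the empirical measure of the particles, $h_n$ is the (adaptively chosen) step size, and $\Delta L_n^{i}$ is the increment of the driving L\'evy process; the taming factor in the denominator prevents the super-linearly growing drift from causing divergence of the scheme.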
The separate tasks of denoising, least squares expectation, and manifold learning can often be posed in a common setting of finding the conditional expectations arising from a product of two random variables. This paper focuses on this more general problem and describes an operator-theoretic approach to estimating the conditional expectation. Kernel integral operators are used as a compactification tool to set up the estimation problem as a linear inverse problem in a reproducing kernel Hilbert space. This equation is shown to have solutions that allow numerical approximation, thus guaranteeing the convergence of data-driven implementations. The overall technique is easy to implement, and its successful application to some real-world problems is also demonstrated.
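Schematically (notation ours), a kernel integral operator
\[ (K f)(x) \;=\; \int k(x, x')\, f(x')\, d\mu(x') \]
compactifies the problem: the sought conditional expectation is characterized as the solution $f$ of a linear equation $K f = g$ in a reproducing kernel Hilbert space, where $K$ and $g$ can be estimated from samples, so that standard regularization theory for linear inverse problems yields convergent data-driven approximations.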
We introduce a simple initialization of Maubach's bisection routine for adaptive mesh refinement that applies to any conforming initial triangulation. Maubach's routine with this initialization generates meshes that preserve shape regularity and satisfy the closure estimate needed for optimal convergence of adaptive schemes. Our approach allows existing implementations to be used without modification.
A new nonparametric estimator for Toeplitz covariance matrices is proposed. This estimator is based on a data transformation that translates the problem of Toeplitz covariance matrix estimation to the problem of mean estimation in an approximate Gaussian regression. The resulting Toeplitz covariance matrix estimator is positive definite by construction, fully data-driven and computationally very fast. Moreover, this estimator is shown to be minimax optimal under the spectral norm for a large class of Toeplitz matrices. These results are readily extended to estimation of inverses of Toeplitz covariance matrices. Also, an alternative version of the Whittle likelihood for the spectral density based on the Discrete Cosine Transform (DCT) is proposed. The method is implemented in the R package vstdct that accompanies the paper.
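For context, the classical Whittle likelihood that the proposed DCT variant modifies approximates, for a stationary Gaussian series with spectral density $f$,
\[ -2 \log L_W \;\approx\; \sum_{j} \Big( \log f(\omega_j) + \frac{I_n(\omega_j)}{f(\omega_j)} \Big), \]
where $I_n$ is the periodogram evaluated at the Fourier frequencies $\omega_j = 2\pi j / n$; the alternative version replaces the underlying Discrete Fourier Transform with the DCT.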
The Levin method is a well-known technique for evaluating oscillatory integrals, which operates by solving a certain ordinary differential equation in order to construct an antiderivative of the integrand. It was long believed that this approach suffers from "low-frequency breakdown," meaning that the accuracy of the calculated value of the integral deteriorates when the integrand is only slowly oscillating. Recently presented experimental evidence, however, suggests that if a Chebyshev spectral method is used to discretize the differential equation and the resulting linear system is solved via a truncated singular value decomposition, then no low-frequency breakdown occurs. Here, we provide a proof that this is the case, and our proof applies not only when the integrand is slowly oscillating, but even in the case of stationary points. Our result puts adaptive schemes based on the Levin method on a firm theoretical foundation and accounts for their behavior in the presence of stationary points. We go on to point out that by combining an adaptive Levin scheme with phase function methods for ordinary differential equations, a large class of oscillatory integrals involving special functions, including products of such functions and the compositions of such functions with slowly-varying functions, can be easily evaluated without the need for symbolic computations. Finally, we present the results of numerical experiments which illustrate the consequences of our analysis and demonstrate the properties of the adaptive Levin method.
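Concretely, for an integral $\int_a^b f(t)\, e^{i \omega g(t)}\, dt$, the Levin method solves the ordinary differential equation
\[ p'(t) + i \omega g'(t)\, p(t) \;=\; f(t), \]
so that $p(t)\, e^{i \omega g(t)}$ is an antiderivative of the integrand and the integral equals $p(b)\, e^{i \omega g(b)} - p(a)\, e^{i \omega g(a)}$; low-frequency breakdown would correspond to a loss of accuracy in this procedure as $\omega \to 0$, and stationary points are the points where $g'$ vanishes.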