Information inequalities appear in many database applications such as query output size bounds, query containment, and implication between data dependencies. Recently, Khamis et al. proposed to study the algorithmic aspects of information inequalities, including the information inequality problem: decide whether a linear inequality over entropies of random variables is valid. While the decidability of this problem is a major open question, applications often involve only inequalities that adhere to specific syntactic forms linked to useful semantic invariance properties. This paper studies the information inequality problem in different syntactic and semantic scenarios that arise from database applications. Focusing on the boundary between tractability and intractability, we show that the information inequality problem is coNP-complete if restricted to normal polymatroids, and in polynomial time if relaxed to monotone functions. We also examine syntactic restrictions related to query output size bounds, and provide an alternative proof, through monotone functions, for the polynomial-time computability of the entropic bound over simple sets of degree constraints.
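For orientation, here is a standard example (not taken from the paper) of the kind of statement the information inequality problem asks us to decide: the Shannon-type (submodularity) inequality

\[
h(X,Y) + h(Y,Z) \;\ge\; h(X,Y,Z) + h(Y),
\]

which is valid for the entropies of any three random variables $X, Y, Z$ and is equivalent to the nonnegativity of the conditional mutual information $I(X;Z \mid Y)$.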
We describe a quantum algorithm based on an interior point method for solving a linear program with $n$ inequality constraints on $d$ variables. The algorithm explicitly returns a feasible solution that is $\varepsilon$-close to optimal, and runs in time $\sqrt{n}\, \mathrm{poly}(d,\log(n),\log(1/\varepsilon))$, which is sublinear for tall linear programs (i.e., $n \gg d$). Our algorithm speeds up the Newton step in the state-of-the-art interior point method of Lee and Sidford [FOCS '14]. This requires us to efficiently approximate the Hessian and gradient of the barrier function, and these approximations are our main contributions. To approximate the Hessian, we describe a quantum algorithm for the spectral approximation of $A^T A$ for a tall matrix $A \in \mathbb R^{n \times d}$. The algorithm uses leverage score sampling in combination with Grover search, and returns a $\delta$-approximation by making $O(\sqrt{nd}/\delta)$ row queries to $A$. This generalizes an earlier quantum speedup for graph sparsification by Apers and de Wolf [FOCS '20]. To approximate the gradient, we use a recent quantum algorithm for multivariate mean estimation by Cornelissen, Hamoudi and Jerbi [STOC '22]. While a naive implementation introduces a dependence on the condition number of the Hessian, we avoid this by preconditioning our random variable using our quantum algorithm for spectral approximation.
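As a rough illustration of the classical technique that the Hessian approximation builds on, the sketch below performs leverage-score row sampling for spectral approximation of $A^T A$ with plain NumPy; the quantum speedup via Grover search is not reproduced, and the function name, the oversampling constant, and the toy data are illustrative assumptions.

```python
import numpy as np

def spectral_approximation(A, delta, rng=np.random.default_rng(0)):
    """Classical leverage-score row sampling: return B with few rows such that
    B^T B approximates A^T A to relative spectral accuracy ~delta (w.h.p.)."""
    n, d = A.shape
    Q, _ = np.linalg.qr(A)                      # orthonormal basis of col(A)
    lev = np.sum(Q**2, axis=1)                  # leverage score of row i: ||Q[i, :]||^2
    c = 8.0 * np.log(d)                         # illustrative oversampling constant
    p = np.minimum(1.0, c * lev / delta**2)     # row-keeping probabilities
    keep = rng.random(n) < p
    return A[keep] / np.sqrt(p[keep])[:, None]  # rescale so that E[B^T B] = A^T A

# Toy usage: compare A^T A with its sampled approximation.
A = np.random.default_rng(1).normal(size=(20000, 10))
B = spectral_approximation(A, delta=0.3)
err = np.linalg.norm(B.T @ B - A.T @ A, 2) / np.linalg.norm(A.T @ A, 2)
print(B.shape[0], "rows kept, relative spectral error", round(err, 3))
```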
This paper addresses the fundamental task of estimating covariance matrix functions for high-dimensional functional data and functional time series. We consider two functional factor structures, encompassing either functional factors with scalar loadings or scalar factors with functional loadings, and postulate functional sparsity on the covariance of the idiosyncratic errors after taking out the common unobserved factors. To facilitate estimation, we rely on the spiked matrix model and its functional generalization, and derive novel asymptotic identifiability results, based on which we develop the DIGIT and FPOET estimators under the two functional factor models, respectively. Both estimators involve performing an associated eigenanalysis to estimate the covariance of the common components, followed by adaptive functional thresholding applied to the residual covariance. We also develop functional information criteria for model selection. The convergence rates of the estimated factors, loadings, and conditional sparse covariance matrix functions, under various functional matrix norms, are established for the DIGIT and FPOET estimators, respectively. Numerical studies, including extensive simulations and two real data applications on mortality rates and functional portfolio allocation, are conducted to examine the finite-sample performance of the proposed methodology.
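The sketch below illustrates, in the scalar (non-functional) setting and purely for intuition, the POET-style decomposition that both estimators build on: a low-rank common component obtained from the leading eigenpairs of the sample covariance, plus a thresholded residual covariance. It is not the paper's DIGIT or FPOET estimator; the function name, threshold rule, and toy data are assumptions.

```python
import numpy as np

def poet_like_covariance(X, K, tau):
    """Low-rank part (K leading eigenpairs of the sample covariance) plus
    soft-thresholded residual covariance, in the spirit of POET."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / n                              # p x p sample covariance
    evals, evecs = np.linalg.eigh(S)               # ascending eigenvalues
    idx = np.argsort(evals)[::-1][:K]              # indices of the K largest
    low_rank = (evecs[:, idx] * evals[idx]) @ evecs[:, idx].T
    R = S - low_rank                               # residual (idiosyncratic) part
    off = R - np.diag(np.diag(R))
    off = np.sign(off) * np.maximum(np.abs(off) - tau, 0.0)   # soft threshold
    return low_rank + np.diag(np.diag(R)) + off    # keep the diagonal intact

# Toy usage with data generated from a 3-factor model.
rng = np.random.default_rng(0)
n, p, K = 500, 50, 3
loadings = rng.normal(size=(p, K))
factors = rng.normal(size=(n, K))
X = factors @ loadings.T + rng.normal(scale=0.5, size=(n, p))
Sigma_hat = poet_like_covariance(X, K=K, tau=0.05)
print(Sigma_hat.shape)
```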
We propose a new framework for the simultaneous inference of monotone and smoothly time-varying functions under complex temporal dynamics, utilizing monotone rearrangement and nonparametric estimation. We capitalize on the Gaussian approximation for the nonparametric monotone estimator and construct asymptotically correct simultaneous confidence bands (SCBs) by carefully designed bootstrap methods. We investigate two general and practical scenarios. The first is the simultaneous inference of monotone smooth trends from moderately high-dimensional time series; the proposed algorithm has been employed for the joint inference of temperature curves from multiple areas. Notably, most existing methods are designed for a single monotone smooth trend. In that setting, our proposed SCB empirically exhibits the narrowest width among existing approaches while maintaining the nominal confidence level, and has been used for testing several hypotheses tailored to global warming. The second scenario involves simultaneous inference of monotone and smoothly time-varying regression coefficients in time-varying coefficient linear models. The proposed algorithm has been utilized for testing the impact of sunshine duration on temperature, which is believed to be increasing due to the increasingly severe greenhouse effect. The validity of the proposed methods is justified theoretically as well as by extensive simulations.
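A minimal sketch of the monotone rearrangement step, assuming a trend observed with noise on an equally spaced grid: smooth the observations with a kernel estimator, then sort the fitted values, which yields the increasing rearrangement. The Gaussian approximation and the bootstrap SCB construction are not reproduced, and the smoother and bandwidth below are illustrative.

```python
import numpy as np

def kernel_smooth(t_grid, t_obs, y_obs, bandwidth):
    """Local-constant (Nadaraya-Watson) smoother, a simple stand-in for the
    nonparametric trend estimator, evaluated on t_grid."""
    fitted = np.empty_like(t_grid)
    for j, t in enumerate(t_grid):
        w = np.exp(-0.5 * ((t_obs - t) / bandwidth) ** 2)   # Gaussian kernel weights
        fitted[j] = np.sum(w * y_obs) / np.sum(w)
    return fitted

def monotone_rearrangement(fitted):
    """On an equally spaced grid, sorting the fitted values gives the
    increasing rearrangement of the estimated trend."""
    return np.sort(fitted)

# Toy usage: a noisy increasing trend.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 300)
y = np.sqrt(t) + 0.1 * rng.standard_normal(t.size)
trend = kernel_smooth(t, t, y, bandwidth=0.05)
trend_mono = monotone_rearrangement(trend)     # same values, now monotone
print(bool(np.all(np.diff(trend_mono) >= 0)))
```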
In prediction settings where data are collected over time, it is often of interest to understand both the importance of variables for predicting the response at each time point and their importance summarized over the time series. Building on recent advances in estimation and inference for variable importance measures, we define summaries of variable importance trajectories. These measures can be estimated, and the same approaches for inference can be applied, regardless of the choice of algorithm(s) used to estimate the prediction function. We propose a nonparametric efficient estimation and inference procedure, as well as a null hypothesis testing procedure, that are valid even when complex machine learning tools are used for prediction. Through simulations, we demonstrate that our proposed procedures have good operating characteristics, and we illustrate their use by investigating the longitudinal importance of risk factors for suicide attempt.
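The following hypothetical sketch conveys the general idea only: at each time point, the importance of a variable is taken as the drop in out-of-sample $R^2$ when that variable is excluded, and the trajectory is summarized by its time average. The paper's nonparametric efficient estimators and inference procedures are not reproduced; the learner, the data split, and all names are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

def importance_trajectory(X_by_time, y_by_time, var_idx):
    """Per-time-point importance of variable `var_idx`: drop in held-out R^2
    when the variable is removed from the predictors (crude plug-in measure)."""
    traj = []
    for X, y in zip(X_by_time, y_by_time):
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        full = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)
        red = RandomForestRegressor(n_estimators=100, random_state=0).fit(
            np.delete(X_tr, var_idx, axis=1), y_tr)
        traj.append(r2_score(y_te, full.predict(X_te)) -
                    r2_score(y_te, red.predict(np.delete(X_te, var_idx, axis=1))))
    return np.array(traj)

# A summary of the trajectory over the time series, e.g. its time average:
# traj = importance_trajectory(X_by_time, y_by_time, var_idx=0)
# summary = traj.mean()
```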
We consider the estimation of the cumulative hazard function, and equivalently the distribution function, with censored data under a setup that preserves the privacy of the survival database. This is achieved through an $\alpha$-locally differentially private mechanism for the failure indicators and by proposing a nonparametric kernel estimator for the cumulative hazard function that remains consistent under the privatization. Under mild conditions, we also prove lower bounds for the minimax rates of convergence and show that the estimator is minimax optimal under a well-chosen bandwidth.
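A minimal sketch, under the assumption that the privatization acts on the failure indicators via randomized response with privacy parameter $\alpha$ (flip probability $1/(1+e^{\alpha})$), together with the standard unbiased inversion that a downstream kernel cumulative-hazard estimator could consume; the paper's estimator and minimax analysis are not reproduced.

```python
import numpy as np

def privatize_indicators(delta, alpha, rng=np.random.default_rng(0)):
    """Randomized response: keep each failure indicator with probability
    e^alpha / (1 + e^alpha), otherwise flip it; this satisfies alpha-LDP."""
    p_keep = np.exp(alpha) / (1.0 + np.exp(alpha))
    flip = rng.random(delta.shape) >= p_keep
    return np.where(flip, 1 - delta, delta)

def debias_indicators(z, alpha):
    """Unbiased inversion: with p = e^alpha/(1+e^alpha) we have
    E[z] = (2p - 1) * delta + (1 - p), so (z - (1 - p)) / (2p - 1) is unbiased."""
    p = np.exp(alpha) / (1.0 + np.exp(alpha))
    return (z - (1.0 - p)) / (2.0 * p - 1.0)

# Toy usage: true indicators, their private release, and a debiased version
# that could be plugged into a kernel-type cumulative hazard estimator.
delta = np.random.default_rng(1).integers(0, 2, size=1000)
z = privatize_indicators(delta, alpha=1.0)
delta_tilde = debias_indicators(z, alpha=1.0)
print(round(delta.mean(), 3), round(delta_tilde.mean(), 3))
```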
We propose to approximate a (possibly discontinuous) multivariate function $f(x)$ on a compact set by the partial minimizer $\arg\min_y p(x, y)$ of an appropriate polynomial $p$ whose construction can be cast in a univariate sum of squares (SOS) framework, resulting in a highly structured convex semidefinite program. In a number of non-trivial cases (e.g. when $f$ is a piecewise polynomial) we prove that the approximation is exact with a low-degree polynomial $p$. Our approach has three distinguishing features: (i) It is mesh-free and does not require knowledge of the discontinuity locations. (ii) It is model-free in the sense that we only assume that the function to be approximated is available through samples (point evaluations). (iii) The size of the semidefinite program is independent of the ambient dimension and depends linearly on the number of samples. We also analyze the sample complexity of the approach, proving a generalization error bound in a probabilistic setting. This allows for a comparison with machine learning approaches.
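A toy numerical illustration (not the paper's SOS construction) of why partial minimizers can capture discontinuities: over the compact interval $y \in [-1,1]$, the partial minimizer of the smooth degree-two polynomial $p(x,y) = -xy$ is the sign function.

```python
import numpy as np

# Toy example: over the compact set y in [-1, 1], the partial minimizer of the
# smooth polynomial p(x, y) = -x*y is sign(x), a discontinuous function of x.
# (A grid search stands in for the minimization; this is not the paper's
# SOS-based construction of p.)
y_grid = np.linspace(-1.0, 1.0, 2001)

def partial_minimizer(x):
    values = -x * y_grid                 # p(x, y) on the y-grid
    return y_grid[np.argmin(values)]     # arg min_y p(x, y)

xs = np.linspace(-1.0, 1.0, 9)
print([partial_minimizer(x) for x in xs])   # roughly the sign function
```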
We propose a method to modify a polygonal mesh in order to fit the zero-isoline of a level set function, by extending a standard body-fitted strategy to a tessellation with arbitrarily-shaped elements. The novel level set-fitted approach, in combination with a Discontinuous Galerkin finite element approximation, provides an ideal setting to model physical problems characterized by embedded or evolving complex geometries, since it avoids any mesh post-processing to recover grid quality. The proposed methodology is first assessed on the linear elasticity equation, by verifying the approximation capability of the level set-fitted approach when dealing with configurations with heterogeneous material properties. Subsequently, we combine the level set-fitted methodology with a minimum compliance topology optimization technique, in order to deliver optimized layouts exhibiting crisp boundaries and reliable mechanical performance. An extensive numerical test campaign confirms the effectiveness of the proposed method.
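A minimal sketch of the geometric core of such a fitting strategy, assuming a 2D polygonal mesh given by vertex coordinates and an edge list: every edge whose endpoints carry level set values of opposite sign is intersected with the zero-isoline by linear interpolation, and the resulting point can be inserted as a new vertex or used to snap an existing one. The element reshaping and the Discontinuous Galerkin discretization of the paper are not reproduced; all names are illustrative.

```python
import numpy as np

def zero_crossing(p0, p1, phi0, phi1):
    """Point where the zero-isoline cuts edge (p0, p1), by linear interpolation
    of the level set values phi0, phi1 (assumed of opposite sign)."""
    t = phi0 / (phi0 - phi1)
    return (1.0 - t) * p0 + t * p1

def edge_crossings(vertices, edges, phi):
    """For every mesh edge cut by the zero-isoline, return the crossing point,
    which can be inserted as a new vertex or used to snap an endpoint."""
    cuts = {}
    for (i, j) in edges:
        if phi[i] * phi[j] < 0.0:        # sign change: the isoline crosses this edge
            cuts[(i, j)] = zero_crossing(vertices[i], vertices[j], phi[i], phi[j])
    return cuts

# Toy usage: a unit square element cut by the level set phi(x, y) = x - 0.3.
vertices = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
phi = vertices[:, 0] - 0.3
print(edge_crossings(vertices, edges, phi))   # crossings on the two horizontal edges
```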
Power functions with low $c$-differential uniformity have been widely studied, not only because of their strong resistance to multiplicative differential attacks, but also because of their low implementation cost in hardware. Furthermore, the $c$-differential spectrum of a function gives a more precise characterization of its $c$-differential properties. Let $f(x)=x^{\frac{p^n+3}{2}}$ be a power function over the finite field $\mathbb{F}_{p^{n}}$, where $p\neq3$ is an odd prime and $n$ is a positive integer. In this paper, for all primes $p\neq3$, by investigating certain character sums related to elliptic curves and computing the number of solutions of a system of equations over $\mathbb{F}_{p^{n}}$, we explicitly determine the $(-1)$-differential spectrum of $f$ with a unified approach. We show that if $p^n \equiv 3 \pmod 4$, then $f$ is a differentially $(-1,3)$-uniform function, except for $p^n\in\{7,19,23\}$, where $f$ is an APcN function, and that if $p^n \equiv 1 \pmod 4$, the $(-1)$-differential uniformity of $f$ equals $4$. In addition, an upper bound on the $c$-differential uniformity of $f$ is also given.
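For reference, the standard definitions from the literature (not restated in this abstract) are

\[
{}_cD_a f(x) = f(x+a) - c\,f(x),
\qquad
{}_c\Delta_f = \max\big\{\,\#\{x \in \mathbb{F}_{p^n} : {}_cD_a f(x) = b\} \;:\; a, b \in \mathbb{F}_{p^n},\ a \neq 0 \text{ if } c = 1\,\big\},
\]

so that $f$ is differentially $(c,\delta)$-uniform when ${}_c\Delta_f \le \delta$, the case ${}_c\Delta_f = 2$ corresponds to an APcN function, and the $c$-differential spectrum records how often each possible number of solutions occurs.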
We study Whitney-type estimates for the approximation of convex functions in the uniform norm on various convex multivariate domains, paying particular attention to the dependence of the involved constants on the dimension and the geometry of the domain.
We propose a new variable selection procedure for a functional linear model with multiple scalar responses and multiple functional predictors. The method is based on basis expansions of the functional predictors and coefficients, which lead to a multivariate linear regression model. We then introduce a criterion by means of which the variable selection problem reduces to that of estimating a suitable set. This set is estimated by applying appropriate penalizations to estimates of the criterion, leading to our proposal. A simulation study is presented to investigate the effectiveness of the proposed approach and to compare it with existing methods.
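A rough sketch of the reduction in the first step only, under illustrative assumptions (a Fourier-type basis, a single scalar response, and an off-the-shelf lasso as a stand-in penalty): each functional predictor is projected onto a finite basis, the resulting scores form the design of a multivariate linear regression, and a penalized fit then suggests which predictors to keep. The selection criterion and the set-estimation procedure of the paper are not reproduced.

```python
import numpy as np
from sklearn.linear_model import Lasso

def basis_scores(X_curves, t_grid, n_basis=5):
    """Project each functional predictor (curves observed on t_grid) onto a
    small Fourier-type basis; the scores act as ordinary scalar covariates."""
    basis = [np.ones_like(t_grid)]
    k = 1
    while len(basis) < n_basis:
        basis.append(np.sin(2 * np.pi * k * t_grid))
        basis.append(np.cos(2 * np.pi * k * t_grid))
        k += 1
    B = np.stack(basis[:n_basis], axis=1)          # (T, n_basis)
    dt = t_grid[1] - t_grid[0]                     # uniform grid assumed
    return X_curves @ B * dt                       # approximate inner products

# Toy usage: two functional predictors, one scalar response.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 101)
X1 = rng.normal(size=(200, t.size))
X2 = rng.normal(size=(200, t.size))
Z = np.hstack([basis_scores(X1, t), basis_scores(X2, t)])   # multivariate design
Z = (Z - Z.mean(axis=0)) / Z.std(axis=0)                    # standardize the scores
y = Z[:, 0] - 0.5 * Z[:, 1] + 0.1 * rng.normal(size=200)
fit = Lasso(alpha=0.01).fit(Z, y)                 # stand-in penalized estimate
print(np.round(fit.coef_, 2))                     # nonzero blocks suggest selected predictors
```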