The absolute value equations (AVE) problem is the algebraic problem of solving $Ax+|x|=b$. So far, most research has focused on methods for solving AVEs; we instead address the problem itself by analysing properties of the AVE and the corresponding solution set. In particular, we investigate topological properties of the solution set, such as convexity, boundedness, connectedness, or whether it consists of finitely many solutions. Further, we address problems related to nonnegativity of solutions, such as solvability or unique solvability. The AVE can be formulated by means of different optimization problems, and in this regard we are interested in how the solutions of the AVE relate to the optima, Karush-Kuhn-Tucker points and feasible solutions of these optimization problems. We characterize the matrix classes associated with the above-mentioned properties and inspect the computational complexity of the recognition problem; some of the classes are polynomially recognizable, while others are proved to be NP-hard. For the intractable cases, we propose various sufficient conditions. We also pose new challenging problems that arose during our investigation.
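For concreteness, the following is a minimal sketch (not part of the above analysis) of a small AVE instance solved by the simple fixed-point iteration $x_{k+1}=A^{-1}(b-|x_k|)$, which is known to converge when all singular values of $A$ exceed one; the matrix and right-hand side below are arbitrary.

```python
# Minimal illustration of the AVE problem Ax + |x| = b (not the paper's subject,
# which is the structure of the solution set): a simple fixed-point iteration
# x_{k+1} = A^{-1}(b - |x_k|); the data below are arbitrary.
import numpy as np

A = np.array([[4.0, 1.0],
              [0.0, 3.0]])
b = np.array([6.0, 2.0])

x = np.zeros(2)
for _ in range(100):
    x_new = np.linalg.solve(A, b - np.abs(x))
    if np.linalg.norm(x_new - x) < 1e-12:
        x = x_new
        break
    x = x_new

print("candidate solution x =", x)
print("residual Ax + |x| - b =", A @ x + np.abs(x) - b)
```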
The study of Markov processes and broadcasting on trees has deep connections to a variety of areas including statistical physics, graphical models, phylogenetic reconstruction, Markov Chain Monte Carlo, and community detection in random graphs. Notably, the celebrated Belief Propagation (BP) algorithm achieves Bayes-optimal performance for the reconstruction problem of predicting the value of the Markov process at the root of the tree from its values at the leaves. Recently, the analysis of low-degree polynomials has emerged as a valuable tool for predicting computational-to-statistical gaps. In this work, we investigate the performance of low-degree polynomials for the reconstruction problem on trees. Perhaps surprisingly, we show that there are simple tree models with $N$ leaves and bounded arity where (1) nontrivial reconstruction of the root value is possible with a simple polynomial time algorithm and with robustness to noise, but not with any polynomial of degree $N^{c}$ for $c > 0$ a constant depending only on the arity, and (2) when the tree is unknown and given multiple samples with correlated root assignments, nontrivial reconstruction of the root value is possible with a simple Statistical Query algorithm but not with any polynomial of degree $N^c$. These results clarify some of the limitations of low-degree polynomials vs. polynomial time algorithms for Bayesian estimation problems. They also complement recent work of Moitra, Mossel, and Sandon who studied the circuit complexity of Belief Propagation. As a consequence of our main result, we show that for some $c' > 0$ depending only on the arity, $\exp(N^{c'})$ many samples are needed for RBF kernel regression to obtain nontrivial correlation with the true regression function (BP). We pose related open questions about low-degree polynomials and the Kesten-Stigum threshold.
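As context, the following is a minimal sketch (not the construction studied above) of exact belief propagation for binary broadcasting on a small tree, where each child copies its parent with probability $1-\epsilon$ and is flipped otherwise; the tree shape, flip probability and leaf values are arbitrary.

```python
# Minimal sketch of exact BP for binary broadcasting on a tree: compute the
# Bayes-optimal posterior of the root given the leaf values.  Not the paper's
# hard instances; eps, the tree and the leaf values are arbitrary.
import numpy as np

eps = 0.1
M = np.array([[1 - eps, eps],      # channel: rows = parent state, cols = child state
              [eps, 1 - eps]])

def likelihood(node, children, leaf_value):
    """Return [P(leaves below node | node=0), P(leaves below node | node=1)]."""
    if not children[node]:                      # leaf: point mass on the observed value
        L = np.zeros(2)
        L[leaf_value[node]] = 1.0
        return L
    L = np.ones(2)
    for c in children[node]:
        Lc = likelihood(c, children, leaf_value)
        L *= M @ Lc                             # marginalise over the child's state
    return L

# toy binary tree: root 0 with children 1, 2; leaves 3..6
children = {0: [1, 2], 1: [3, 4], 2: [5, 6], 3: [], 4: [], 5: [], 6: []}
leaf_value = {3: 0, 4: 0, 5: 1, 6: 0}

L_root = likelihood(0, children, leaf_value)
posterior = L_root / L_root.sum()               # uniform prior on the root
print("P(root = 0 | leaves), P(root = 1 | leaves) =", posterior)
```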
The common methods of spectral analysis for multivariate ($n$-dimensional) time series, like the discrete Fourier transform (FT) or the wavelet transform, are based on Fourier series to decompose discrete data into a set of trigonometric model components, e.g. amplitude and phase. Applied to discrete data on a finite range, (time-discrete) FT exhibits several limitations caused by the orthogonality mismatch of the trigonometric basis functions on a finite interval. Moreover, in the general situation of non-equidistant or fragmented sampling, FT-based methods cause significant errors in the parameter estimation. Therefore, the classical Lomb-Scargle method (LSM), which is not based on Fourier series, was developed as a statistical tool for one-dimensional data to circumvent the inconsistent and erroneous parameter estimation of FT. The present work extends LSM to $n$-dimensional data sets by redefining the shift parameter $\tau$ so as to maintain orthogonality of the trigonometric basis. An analytical derivation shows that $n$-D LSM extends the traditional 1D case, preserving all the statistical benefits such as the improved noise rejection. We also derive the parameter confidence intervals for LSM and compare them with those of FT. Applications to ideal test data and to experimental data illustrate and support the proposed method.
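For reference, a minimal sketch of the classical one-dimensional LSM on non-equidistantly sampled data, using SciPy's implementation; the $n$-dimensional extension proposed here is not shown, and the test signal is arbitrary.

```python
# Classical 1-D Lomb-Scargle periodogram on irregularly sampled data (SciPy's
# implementation of the 1-D case; the n-D extension is not shown).
# Signal parameters below are arbitrary test values.
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 10.0, 200))        # non-equidistant sampling times
y = 2.0 * np.sin(2 * np.pi * 1.3 * t + 0.4) + 0.5 * rng.standard_normal(t.size)

freqs = np.linspace(0.1, 3.0, 500)              # trial frequencies in Hz
pgram = lombscargle(t, y - y.mean(), 2 * np.pi * freqs)

print("estimated frequency [Hz]:", freqs[np.argmax(pgram)])
```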
We consider the problem of finding the matching map between two sets of $d$-dimensional noisy feature-vectors. The distinctive feature of our setting is that we do not assume that all the vectors of the first set have a corresponding vector in the second set. If $n$ and $m$ are the sizes of these two sets, we assume that the matching map to be recovered is defined on a subset of unknown cardinality $k^*\le \min(n,m)$. We show that, in the high-dimensional setting, if the signal-to-noise ratio is larger than $5(d\log(4nm/\alpha))^{1/4}$, then the true matching map can be recovered with probability $1-\alpha$. Interestingly, this threshold does not depend on $k^*$ and is the same as the one obtained in prior work for the case $k^* = \min(n,m)$. The procedure for which the aforementioned property is proved is obtained by a data-driven selection among candidate mappings $\{\hat\pi_k:k\in[\min(n,m)]\}$. Each $\hat\pi_k$ minimizes the sum of squared distances between two sets of size $k$. The resulting optimization problem can be formulated as a minimum-cost flow problem and thus solved efficiently. Finally, we report the results of numerical experiments on both synthetic and real-world data that illustrate our theoretical results and provide further insight into the properties of the algorithms studied in this work.
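As an illustration of the matching step, the sketch below solves the least-sum-of-squares matching between two small point clouds as an assignment problem for the case $k=\min(n,m)$; the data-driven selection of $k$ described above is not shown, and all sizes and noise levels are arbitrary.

```python
# Minimal sketch of the k = min(n, m) case: least-sum-of-squares matching between
# two point clouds solved as an assignment problem (the data-driven choice of k
# is not shown).  Sizes, dimension and noise level are arbitrary.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

rng = np.random.default_rng(1)
X = rng.standard_normal((8, 5))                 # n = 8 feature vectors in R^5
sigma = 0.05
Y = X[rng.permutation(8)[:6]] + sigma * rng.standard_normal((6, 5))  # m = 6 noisy matches

cost = cdist(X, Y, metric="sqeuclidean")        # squared distances ||X_i - Y_j||^2
rows, cols = linear_sum_assignment(cost)        # min-cost matching of size min(n, m)
print(list(zip(rows, cols)), "total cost:", cost[rows, cols].sum())
```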
We consider a high-dimensional random constrained optimization problem in which a set of binary variables is subjected to a linear system of equations. The cost function is a simple linear cost measuring the Hamming distance with respect to a reference configuration. Despite its apparent simplicity, this problem exhibits a rich phenomenology. We show that different situations arise depending on the random ensemble of linear systems. When each variable is involved in at most two linear constraints, we show that the problem can be partially solved analytically; in particular, we show that upon convergence, the zero-temperature limit of the cavity equations returns the optimal solution. We then study the geometrical properties of more general random ensembles. In particular, we observe a range of constraint densities at which the system enters a glassy phase where the cost function has many minima. Interestingly, the algorithmic performance is sensitive only to another phase transition affecting the structure of configurations allowed by the linear constraints. We also extend our results to variables belonging to $\text{GF}(q)$, the Galois field of order $q$. We show that increasing the value of $q$ allows a better optimum to be achieved, which is confirmed by the predictions of the replica-symmetric cavity method.
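For concreteness, a toy brute-force sketch of the optimization problem itself (not the cavity method): minimize the Hamming distance to a reference configuration over binary vectors satisfying a random linear system over $\text{GF}(2)$; the sizes and seed are arbitrary.

```python
# Toy brute-force illustration of the problem (not the cavity method): minimise the
# Hamming distance to a reference configuration over binary vectors satisfying a
# random linear system A x = b over GF(2).  Sizes and seed are arbitrary.
import itertools
import numpy as np

rng = np.random.default_rng(2)
n, m = 10, 4
A = rng.integers(0, 2, size=(m, n))
x_ref = rng.integers(0, 2, size=n)              # reference configuration
b = A @ rng.integers(0, 2, size=n) % 2          # guarantees the system is feasible

best = None
for bits in itertools.product([0, 1], repeat=n):
    x = np.array(bits)
    if np.all(A @ x % 2 == b):                  # x satisfies the linear constraints
        d = int(np.sum(x != x_ref))             # Hamming distance (the cost)
        if best is None or d < best[0]:
            best = (d, x)

print("minimum Hamming distance:", best[0], "argmin:", best[1])
```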
The objective of this paper is to investigate a new numerical method for approximating the self-diffusion matrix of a tagged particle process defined on a grid. While standard numerical methods make use of long-time averages of empirical means of deviations of some stochastic processes, and are thus subject to statistical noise, we propose here a tensor method that computes an approximation of the solution of a high-dimensional quadratic optimization problem, which yields a numerical approximation of the self-diffusion matrix. The tensor method relies on an iterative scheme that builds low-rank approximations of the quantity of interest, together with a carefully tuned variance reduction method used to evaluate the various terms arising in the functional to minimize. In particular, we numerically observe that this approach is much less subject to statistical noise than classical approaches.
Given a stream of Bernoulli random variables, consider the problem of estimating the mean of the random variable to within a specified relative error with a specified probability of failure. Until now, the Gamma Bernoulli Approximation Scheme (GBAS) was the method that accomplished this goal using the smallest average number of samples. In this work, a new method is introduced that is faster when the mean is bounded away from zero. The method uses a two-stage procedure together with some simple inequalities to obtain rigorous bounds on the error probability.
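For context, the sketch below follows our reading of the standard GBAS estimator from the literature (the details are an assumption, and the faster two-stage method introduced here is not shown): each Bernoulli draw contributes an Exp(1) variable, so that after $k$ successes the accumulated sum $R$ is Gamma-distributed with rate $p$, and $(k-1)/R$ estimates $p$ with a known relative-error distribution.

```python
# Hedged sketch of the GBAS estimator as we understand it from the literature
# (an assumption; not this paper's new two-stage method): each Bernoulli draw adds
# an Exp(1) variable to R, and after k successes R ~ Gamma(k, rate p), so
# (k - 1) / R estimates p with a known relative-error distribution.
import numpy as np

def gbas(draw_bernoulli, k, rng):
    """Estimate the mean p of draw_bernoulli() using k successes."""
    R = 0.0
    successes = 0
    while successes < k:
        R += rng.exponential(1.0)               # one Exp(1) per Bernoulli draw
        if draw_bernoulli():
            successes += 1
    return (k - 1) / R

rng = np.random.default_rng(3)
p_true = 0.3
estimate = gbas(lambda: rng.random() < p_true, k=1000, rng=rng)
print("estimate:", estimate, "true:", p_true)
```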
Polynomial Networks (PNs) have recently demonstrated promising performance on face and image recognition. However, the robustness of PNs is unclear, and obtaining certificates thus becomes imperative for enabling their adoption in real-world applications. Existing verification algorithms for ReLU neural networks (NNs) based on classical branch and bound (BaB) techniques cannot be trivially applied to PN verification. In this work, we devise a new bounding method, equipped with BaB for global convergence guarantees, called Verification of Polynomial Networks, or VPN for short. One key insight is that we obtain much tighter bounds than the interval bound propagation (IBP) and DeepT-Fast [Bonaert et al., 2021] baselines. This enables sound and complete PN verification with empirical validation on the MNIST, CIFAR10 and STL10 datasets. We believe our method is of independent interest for NN verification. The source code is publicly available at //github.com/megaelius/PNVerification.
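To illustrate the kind of baseline such bounds are compared against, the sketch below applies plain interval bound propagation to a tiny polynomial layer (an affine map followed by an elementwise square); this is a generic IBP illustration, not the VPN method, and all numbers are arbitrary.

```python
# Generic interval-bound-propagation (IBP) sketch for a tiny polynomial layer
# (affine map followed by an elementwise square); not the VPN method, shown only
# to illustrate the baseline style of bound.  All values are arbitrary.
import numpy as np

def ibp_affine(l, u, W, b):
    """Interval bounds of W x + b for x in [l, u] (elementwise)."""
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    return W_pos @ l + W_neg @ u + b, W_pos @ u + W_neg @ l + b

def ibp_square(l, u):
    """Interval bounds of x**2 for x in [l, u] (elementwise)."""
    lower = np.where((l <= 0) & (u >= 0), 0.0, np.minimum(l**2, u**2))
    return lower, np.maximum(l**2, u**2)

W = np.array([[1.0, -2.0], [0.5, 1.5]])
b = np.array([0.1, -0.3])
x0, eps = np.array([0.2, -0.1]), 0.05           # input box: an eps-ball around x0

l, u = ibp_affine(x0 - eps, x0 + eps, W, b)
l, u = ibp_square(l, u)
print("output bounds:", l, u)
```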
The independent set polynomial is important in many areas. For every integer $\Delta\geq 2$, the Shearer threshold is the value $\lambda^*(\Delta)=(\Delta-1)^{\Delta-1}/\Delta^{\Delta}$. It is known that for $\lambda < - \lambda^*(\Delta)$, there are graphs~$G$ with maximum degree~$\Delta$ whose independent set polynomial, evaluated at~$\lambda$, is at most~$0$. Also, there are no such graphs for any $\lambda > -\lambda^*(\Delta)$. This paper is motivated by the computational problem of approximating the independent set polynomial when $\lambda < - \lambda^*(\Delta)$. The key issue in complexity bounds for this problem is "implementation". Informally, an implementation of a real number $\lambda'$ is a graph whose hard-core partition function, evaluated at~$\lambda$, simulates a vertex-weight of~$\lambda'$ in the sense that $\lambda'$ is the ratio between the contribution to the partition function from independent sets containing a certain vertex and the contribution from independent sets that do not contain that vertex. Implementations are the cornerstone of intractability results for the problem of approximately evaluating the independent set polynomial. Our main result is that, for any $\lambda < - \lambda^*(\Delta)$, it is possible to implement a set of values that is dense over the reals. The result is tight in the sense that it is not possible to implement a set of values that is dense over the reals for any $\lambda > -\lambda^*(\Delta)$. Our result has already been used in a paper with Bezáková (STOC 2018) to show that it is \#P-hard to approximate the evaluation of the independent set polynomial on graphs of degree at most~$\Delta$ at any value $\lambda<-\lambda^*(\Delta)$. In the appendix, we give an additional, incomparable inapproximability result (strengthening the inapproximability bound to an exponential factor, but weakening the hardness to NP-hardness).
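For concreteness, a toy sketch (not the construction of the paper) computing the value implemented by a small graph: the ratio between the contribution to the hard-core partition function from independent sets containing a chosen vertex and the contribution from those not containing it. The graph, activity and vertex below are arbitrary.

```python
# Toy illustration of an "implementation": for a small graph and activity lam,
# compute the ratio between the contributions to the hard-core partition function
# from independent sets containing a chosen vertex v and from those not containing v.
import itertools

def implemented_value(edges, n, lam, v):
    with_v = without_v = 0.0
    for subset in itertools.chain.from_iterable(
            itertools.combinations(range(n), k) for k in range(n + 1)):
        S = set(subset)
        if any(a in S and b in S for a, b in edges):
            continue                            # not an independent set
        weight = lam ** len(S)
        if v in S:
            with_v += weight
        else:
            without_v += weight
    return with_v / without_v

# 4-cycle with vertices 0..3, a negative activity, terminal vertex v = 0
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print("lambda' =", implemented_value(edges, n=4, lam=-0.2, v=0))
```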
One-class classification (OCC) is the problem of deciding whether an observed sample belongs to a target class or not. We consider the problem of learning an OCC model when the dataset available at the learning stage contains only samples from the target class. We aim at obtaining a classifier that performs like the generalized likelihood ratio test (GLRT), a well-known and provably optimal (under specific assumptions) classifier when the statistics of the target class are available. To this end, we consider both the multilayer perceptron neural network (NN) and the support vector machine (SVM) models. They are trained as two-class classifiers using an artificial dataset for the alternative class, obtained by generating random samples uniformly over the domain of the target-class dataset. We prove that, under suitable assumptions, the models converge (with a large dataset) to the GLRT. Moreover, we show that the one-class least squares SVM (OCLSSVM) at convergence performs like the GLRT, with a suitable transformation function. Lastly, we compare the obtained solutions with the autoencoder (AE) classifier, which in general does not provide the GLRT.
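A minimal sketch of the training recipe described above, assuming a uniform artificial alternative class drawn over the bounding box of the target data; the choice of classifier, sample sizes and distributions is arbitrary.

```python
# Minimal sketch of the training recipe: draw artificial "alternative-class" samples
# uniformly over the bounding box of the target-class data and train a standard
# two-class classifier on the combined set.  All choices below are arbitrary.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(4)
X_target = rng.multivariate_normal([1.0, -1.0], [[1.0, 0.3], [0.3, 0.5]], size=500)

lo, hi = X_target.min(axis=0), X_target.max(axis=0)
X_art = rng.uniform(lo, hi, size=(500, 2))      # uniform samples over the data domain

X = np.vstack([X_target, X_art])
y = np.concatenate([np.ones(500), np.zeros(500)])

clf = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0).fit(X, y)
print("P(target class) for a test point:", clf.predict_proba([[1.0, -1.0]])[0, 1])
```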
We study the rank of sub-matrices arising out of kernel functions $F(\pmb{x},\pmb{y}): \mathbb{R}^d \times \mathbb{R}^d \mapsto \mathbb{R}$, where $\pmb{x},\pmb{y} \in \mathbb{R}^d$ and $F(\pmb{x},\pmb{y})$ is smooth everywhere except along the diagonal $\pmb{x}=\pmb{y}$. Such kernel functions are frequently encountered in a wide range of applications such as $N$-body problems, Green's functions, integral equations, geostatistics, kriging, Gaussian processes, etc. One of the challenges in dealing with these kernel functions is that the corresponding matrix is large and dense, and thereby the computational cost of matrix operations is high. In this article, we prove new theorems bounding the numerical rank of sub-matrices arising out of these kernel functions. Under reasonably mild assumptions, we prove that certain sub-matrices are rank-deficient in finite precision. This rank depends on the dimension of the ambient space and on the type of interaction between the hyper-cubes containing the corresponding sets of particles. This rank structure can be leveraged to reduce the computational cost of certain matrix operations such as matrix-vector products, solving linear systems, etc. We also present numerical results on the growth of the rank of certain sub-matrices in $1$D, $2$D, $3$D and $4$D, which, not surprisingly, agree with the theoretical results.
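A small numerical illustration of the phenomenon discussed above, assuming the kernel $1/|x-y|$ in 1D and two well-separated clusters of points: the corresponding off-diagonal block has low numerical rank at a fixed precision. Cluster locations, sizes and tolerance are arbitrary.

```python
# Small numerical illustration: the off-diagonal block of a kernel matrix built from
# 1/|x - y| in 1D, for two well-separated clusters of points, has low numerical rank
# at a given precision.  Cluster locations, sizes and tolerance are arbitrary.
import numpy as np

def numerical_rank(M, tol=1e-12):
    s = np.linalg.svd(M, compute_uv=False)
    return int(np.sum(s > tol * s[0]))

rng = np.random.default_rng(5)
x = np.sort(rng.uniform(0.0, 1.0, 200))         # cluster of source points
y = np.sort(rng.uniform(2.0, 3.0, 200))         # well-separated cluster of targets

K = 1.0 / np.abs(x[:, None] - y[None, :])       # kernel block, smooth since x != y
print("block size:", K.shape, "numerical rank:", numerical_rank(K))
```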