We consider the problem of inference for projection parameters in linear regression with increasing dimensions. This problem has been studied under a variety of assumptions in the literature. The classical asymptotic normality result for the least squares estimator of the projection parameter only holds when the dimension $d$ of the covariates is of smaller order than $n^{1/2}$, where $n$ is the sample size. Traditional sandwich estimator-based Wald intervals are asymptotically valid in this regime. In this work, we propose a bias correction for the least squares estimator and prove the asymptotic normality of the resulting debiased estimator as long as $d = o(n^{2/3})$, with an explicit bound on the rate of convergence to normality. To perform asymptotically valid inference, we leverage recent methods that do not require an estimator of the variance. We also discuss how our techniques can be generalized to increase the allowable range of $d$ even further.
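For concreteness, here is a minimal NumPy/SciPy sketch of the classical sandwich-based Wald intervals mentioned above; the paper's bias correction and variance-free inference procedure are specific to the paper and are not reproduced here.

```python
import numpy as np
from scipy.stats import norm

def sandwich_wald_ci(X, y, level=0.95):
    """Classical Wald confidence intervals for the projection parameter,
    based on the Huber-White sandwich variance estimator (the regime in
    which these are asymptotically valid is d = o(sqrt(n)))."""
    n, d = X.shape
    beta = np.linalg.lstsq(X, y, rcond=None)[0]    # least squares estimate
    r = y - X @ beta                               # residuals
    bread = np.linalg.inv(X.T @ X)
    meat = X.T @ (X * r[:, None] ** 2)             # sum_i r_i^2 x_i x_i^T
    V = bread @ meat @ bread                       # sandwich covariance
    se = np.sqrt(np.diag(V))
    z = norm.ppf(0.5 + level / 2)
    return beta - z * se, beta + z * se
```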
We present an algorithm for the solution of Sylvester equations with a right-hand side of low rank. The method is based on projection onto a block rational Krylov subspace, with two key contributions relative to the state of the art. First, we show how to maintain the last pole equal to infinity throughout the iteration by means of pole reordering, which allows for a cheap evaluation of the true residual at every step. Second, we extend the convergence analysis in [Beckermann B., An error analysis for rational Galerkin projection applied to the Sylvester equation, SINUM, 2011] to the block case. This extension allows us to link the convergence to the problem of minimizing the norm of a small rational matrix over the spectra or fields of values of the involved matrices, in contrast with the non-block case, where the minimization problem is scalar rather than matrix-valued. Replacing the norm of the objective function with an easier-to-evaluate function yields several adaptive pole selection strategies, providing a theoretical grounding for known heuristics as well as effective novel techniques.
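As a point of reference, the following sketch shows a Galerkin projection solver for $AX + XB = UV^\top$ using plain (polynomial) block Krylov bases; the paper's method instead uses block *rational* Krylov spaces with adaptively selected poles, keeping the last pole at infinity so the true residual is cheap to evaluate. The function names and structure here are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import qr, solve_sylvester

def block_krylov_basis(M, W, m):
    """Orthonormal basis of the block Krylov space K_m(M, W)
    (assumes the generated blocks remain full rank)."""
    Q, _ = qr(W, mode='economic')
    blocks = [Q]
    for _ in range(m - 1):
        W = M @ Q
        for P in blocks:                 # block Gram-Schmidt
            W = W - P @ (P.T @ W)
        Q, _ = qr(W, mode='economic')
        blocks.append(Q)
    return np.hstack(blocks)

def sylvester_galerkin(A, B, U, V, m):
    """Galerkin sketch for A X + X B = U V^T with low-rank right-hand
    side. Returns factors (Uk, Y, Vk) with X ~ Uk @ Y @ Vk.T."""
    Uk = block_krylov_basis(A, U, m)     # column space from K_m(A, U)
    Vk = block_krylov_basis(B.T, V, m)   # row space from K_m(B^T, V)
    Ap = Uk.T @ A @ Uk                   # projected coefficients
    Bp = Vk.T @ B @ Vk
    Cp = (Uk.T @ U) @ (V.T @ Vk)         # projected right-hand side
    Y = solve_sylvester(Ap, Bp, Cp)      # small dense Sylvester solve
    return Uk, Y, Vk
```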
The study of nonplanar drawings of graphs with restricted crossing configurations is a well-established topic in graph drawing, often referred to as beyond-planar graph drawing. Among the most studied types of drawings in this area are the $k$-planar drawings $(k \geq 1)$, in which each edge is crossed at most $k$ times. We generalize $k$-planar drawings by introducing the new family of min-$k$-planar drawings. In a min-$k$-planar drawing, edges can cross an arbitrary number of times, but for any two crossing edges, at least one of the two must have no more than $k$ crossings. We prove a general upper bound on the number of edges of min-$k$-planar drawings, a finer upper bound for $k=3$, and tight upper bounds for $k=1,2$. We also study the inclusion relations between min-$k$-planar graphs (i.e., graphs admitting min-$k$-planar drawings) and $k$-planar graphs.
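To make the definition concrete, here is a small Python sketch (the helper name is hypothetical, not from the paper) that checks the min-$k$-planar condition for a drawing given as a list of its crossing pairs:

```python
from collections import Counter

def is_min_k_planar(crossings, k):
    """crossings: one (e, f) pair per crossing point of edges e and f.
    A drawing is min-k-planar iff for every pair of crossing edges,
    at least one of the two has at most k crossings in total."""
    count = Counter()
    for e, f in crossings:
        count[e] += 1
        count[f] += 1
    return all(count[e] <= k or count[f] <= k for e, f in crossings)

# Edge 'a' crosses 'b', 'c', 'd' once each: not 1-planar ('a' has 3
# crossings), but min-1-planar ('b', 'c', 'd' each have only 1).
print(is_min_k_planar([('a', 'b'), ('a', 'c'), ('a', 'd')], k=1))  # True
```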
Preconditioning is essential in iterative methods for solving linear systems of equations. We study a nonclassical matrix condition number, the $\omega$-condition number, in the context of optimal conditioning for low-rank updates of positive definite matrices. For a positive definite matrix, this condition measure is the ratio of the arithmetic and geometric means of the eigenvalues. In particular, we concentrate on linear systems with low-rank updates of positive definite matrices that are close to singular; such systems arise in nonsmooth Newton methods using generalized Jacobians. We derive an explicit formula for the optimal $\omega$-preconditioned update in this framework. Whereas evaluating or estimating the classical condition number $\kappa$ can be expensive, we show that the $\omega$-condition number can be evaluated exactly following a Cholesky or LU factorization, and that it estimates the actual conditioning of a linear system significantly better. Moreover, our empirical results show a significant decrease in the number of iterations required to reach a requested residual accuracy in an iterative method, confirming the efficacy of the $\omega$-condition number compared to the classical condition number.
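Since the abstract gives both characterizations, here is a short NumPy sketch of the $\omega$-condition number, computed once from the eigenvalues and once from a Cholesky factorization (the cheap evaluation referred to above); the function names are illustrative.

```python
import numpy as np

def omega_cond(A):
    """omega-condition number of a positive definite matrix:
    arithmetic mean of the eigenvalues over their geometric mean."""
    lam = np.linalg.eigvalsh(A)
    return lam.mean() / np.exp(np.log(lam).mean())

def omega_cond_chol(A):
    """Same quantity without an eigendecomposition: the geometric mean
    of the eigenvalues is det(A)^(1/n), and det(A) is the squared
    product of the Cholesky diagonal."""
    n = A.shape[0]
    L = np.linalg.cholesky(A)
    log_geo_mean = 2.0 * np.log(np.diag(L)).sum() / n
    return (np.trace(A) / n) / np.exp(log_geo_mean)
```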
The Krasnosel'skii-Mann (KM) algorithm is the most fundamental iterative scheme for finding a fixed point of an averaged operator in the framework of a real Hilbert space, and it lies at the heart of numerous numerical algorithms for solving monotone inclusions and convex optimization problems. We enhance the Krasnosel'skii-Mann algorithm with Nesterov's momentum updates and show that the resulting numerical method exhibits a convergence rate of $o(1/k)$ for the fixed point residual while preserving the weak convergence of the iterates to a fixed point of the operator. Numerical experiments illustrate the superiority of the resulting Fast KM algorithm over various fixed point iterative schemes, as well as its oscillatory behavior, which is characteristic of Nesterov's momentum optimization algorithms.
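A minimal sketch of such a momentum-enhanced KM iteration is given below; the extrapolation schedule $\beta_k = k/(k+\alpha)$ is an assumption in the spirit of Nesterov's method, not necessarily the exact schedule analyzed in the paper.

```python
import numpy as np

def fast_km(T, x0, lam=0.5, alpha=3.0, iters=1000, tol=1e-10):
    """Krasnosel'skii-Mann iteration with Nesterov-type momentum for an
    averaged operator T. Plain KM: x_{k+1} = (1-lam) x_k + lam T(x_k);
    here the KM step is applied at an extrapolated point y_k."""
    x_prev = x = np.asarray(x0, dtype=float)
    for k in range(1, iters + 1):
        beta = k / (k + alpha)                       # assumed momentum schedule
        y = x + beta * (x - x_prev)                  # Nesterov extrapolation
        x_prev, x = x, (1 - lam) * y + lam * T(y)    # relaxed KM step at y
        if np.linalg.norm(x - x_prev) < tol:         # proxy for the residual
            break
    return x
```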
We consider a federated data analytics problem in which a server coordinates the collaborative data analysis of multiple users with privacy concerns and limited communication capability. The commonly adopted compression schemes introduce information loss into local data while improving communication efficiency, and it remains an open question whether such discrete-valued mechanisms provide any privacy protection. In this paper, we study the local differential privacy guarantees of discrete-valued mechanisms with finite output space through the lens of $f$-differential privacy (DP). More specifically, we advance the existing literature by deriving tight $f$-DP guarantees in closed form for a variety of discrete-valued mechanisms, including the binomial noise and binomial mechanisms proposed for privacy preservation and the sign-based methods proposed for data compression. We further investigate the amplification of privacy via sparsification and propose a ternary stochastic compressor. By leveraging compression for privacy amplification, we improve upon existing methods by removing the dependency of accuracy (in terms of mean squared error) on communication cost in the popular use case of distributed mean estimation, thereby breaking the three-way tradeoff between privacy, communication, and accuracy. Finally, we discuss the Byzantine resilience of the proposed mechanism and its application in federated learning.
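The exact compressor and its $f$-DP calibration are specified in the paper; as a generic illustration, the sketch below quantizes each coordinate to $\{-1, 0, +1\}$ unbiasedly, which is the basic ternary stochastic compression idea.

```python
import numpy as np

def ternary_compress(x, rng=None):
    """Generic ternary stochastic compressor sketch: each coordinate of
    x (assumed scaled into [-1, 1]) becomes sign(x_i) with probability
    |x_i| and 0 otherwise, so E[out_i] = x_i. The privacy-calibrated
    probabilities used in the paper are omitted here."""
    rng = rng or np.random.default_rng()
    x = np.clip(np.asarray(x, dtype=float), -1.0, 1.0)
    keep = rng.random(x.shape) < np.abs(x)
    return np.sign(x) * keep
```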
This paper presents a Multiple Kernel Learning (MKL) framework for the Support Vector Machine (SVM) with the $(0, 1)$ loss function. Some first-order optimality conditions are given and then exploited to develop a fast ADMM solver for the resulting nonconvex and nonsmooth optimization problem. Extensive numerical experiments on synthetic and real datasets show that the performance of our MKL-$L_{0/1}$-SVM is comparable with that of the leading approach SimpleMKL developed by Rakotomamonjy, Bach, Canu, and Grandvalet [Journal of Machine Learning Research, vol. 9, pp. 2491-2521, 2008].
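For readers unfamiliar with MKL, its core ingredient is a learned convex combination of base kernels; the sketch below forms such a combined Gram matrix for given weights (learning the weights jointly with the classifier is what SimpleMKL and the proposed ADMM solver do).

```python
import numpy as np

def combined_kernel(kernels, weights):
    """MKL combined kernel K = sum_m d_m K_m, with d_m >= 0 and
    sum_m d_m = 1. kernels: list of (n, n) Gram matrices."""
    d = np.asarray(weights, dtype=float)
    assert np.all(d >= 0) and np.isclose(d.sum(), 1.0)
    return sum(w * K for w, K in zip(d, kernels))
```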
The Sinc approximation applied to double-exponentially decaying functions is referred to as the DE-Sinc approximation. Because of its high efficiency, this method has been used in various applications. In the Sinc approximation, the mesh size and truncation numbers should be selected optimally to achieve the best performance. However, the standard selection formula is only "near-optimal," because the optimal mesh size cannot be expressed in terms of elementary functions of the truncation numbers. In this study, we propose two improved selection formulas. The first is based on a concept from earlier research that yielded a better selection formula for the double-exponential formula; it performs slightly better than the standard formula but is still not optimal. For the second selection formula, we introduce a new parameter that yields a truly optimal selection. We provide explicit error bounds for both selection formulas. Numerical comparisons show that the first formula gives a better error bound than the standard formula, and the second formula gives a much better error bound than both the standard and the first formulas.
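For orientation, a NumPy sketch of the DE-Sinc approximation on $(-1,1)$ follows, with the standard double-exponential transform $\psi(t) = \tanh(\tfrac{\pi}{2}\sinh t)$; the selection of the mesh size $h$ and truncation number $N$, which is the subject of the paper, is left as an input here.

```python
import numpy as np

def de_sinc_approx(f, x, N, h):
    """DE-Sinc approximation of f on (-1, 1):
    f(x) ~ sum_{k=-N}^{N} f(psi(k h)) * sinc((psi^{-1}(x) - k h) / h),
    where psi(t) = tanh((pi/2) sinh(t)) and np.sinc is the normalized
    sinc function sin(pi u)/(pi u)."""
    psi = lambda t: np.tanh(0.5 * np.pi * np.sinh(t))
    psi_inv = lambda u: np.arcsinh(np.arctanh(u) / (0.5 * np.pi))
    k = np.arange(-N, N + 1)
    t = psi_inv(np.asarray(x, dtype=float))[..., None]
    return np.sum(f(psi(k * h)) * np.sinc((t - k * h) / h), axis=-1)
```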
Identification and analysis of symmetrical patterns in the natural world have led to significant discoveries across various scientific fields, such as the formulation of gravitational laws in physics and advancements in the study of chemical structures. In this paper, we focus on exploiting Euclidean symmetries inherent in certain cooperative multi-agent reinforcement learning (MARL) problems and prevalent in many applications. We begin by formally characterizing a subclass of Markov games with a general notion of symmetries that admits the existence of symmetric optimal values and policies. Motivated by these properties, we design neural network architectures with symmetric constraints embedded as an inductive bias for multi-agent actor-critic methods. This inductive bias results in superior performance on various cooperative MARL benchmarks and impressive generalization capabilities, such as zero-shot learning and transfer learning, in unseen scenarios with repeated symmetric patterns. The code is available at https://github.com/dchen48/E3AC.
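The symmetry property being exploited can be stated operationally: an optimal policy should commute with the group action, $\pi(g \cdot s) = g \cdot \pi(s)$. The following diagnostic sketch (illustrative only, not the paper's architecture) measures how far a 2-D policy is from satisfying this under planar rotations.

```python
import numpy as np

def rotation_equivariance_gap(policy, states, angles):
    """Worst-case violation of pi(g s) = g pi(s) over sampled rotations.
    policy: maps (n, 2) agent positions to (n, 2) actions."""
    worst = 0.0
    for theta in angles:
        c, s = np.cos(theta), np.sin(theta)
        g = np.array([[c, -s], [s, c]])     # planar rotation matrix
        lhs = policy(states @ g.T)          # act on rotated observations
        rhs = policy(states) @ g.T          # rotate the original actions
        worst = max(worst, np.abs(lhs - rhs).max())
    return worst
```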
In this article, we study the inconsistency of systems of $\min$-$\rightarrow$ fuzzy relational equations. We give analytical formulas for computing the Chebyshev distances $\nabla = \inf_{d \in \mathcal{D}} \Vert \beta - d \Vert$ associated with systems of $\min$-$\rightarrow$ fuzzy relational equations of the form $\Gamma \Box_{\rightarrow}^{\min} x = \beta$, where $\rightarrow$ is a residual implicator among the G\"odel implication $\rightarrow_G$, the Goguen implication $\rightarrow_{GG}$, and \L{}ukasiewicz's implication $\rightarrow_L$, and $\mathcal{D}$ is the set of right-hand sides of consistent systems defined with the same matrix $\Gamma$. The main preliminary result that allows us to obtain these formulas is that the Chebyshev distance $\nabla$ is the lower bound of the solutions of a vector inequality, whichever residual implicator is used. Finally, we show that, in the case of $\min$-$\rightarrow_{G}$ systems, the Chebyshev distance $\nabla$ may only be an infimum (not attained), while it is always a minimum for $\min$-$\rightarrow_{GG}$ and $\min$-$\rightarrow_{L}$ systems.
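The three residual implicators are standard; as an illustration (not the paper's analytical formulas), the sketch below evaluates $\Gamma \Box_{\rightarrow}^{\min} x$ and brute-forces the Chebyshev distance on a grid for tiny systems.

```python
import numpy as np
from itertools import product

# Standard residual implicators (arrays broadcast elementwise).
def godel(a, b):        return np.where(a <= b, 1.0, b)
def goguen(a, b):       return np.where(a <= b, 1.0, b / np.maximum(a, 1e-12))
def lukasiewicz(a, b):  return np.minimum(1.0, 1.0 - a + b)

def min_arrow(Gamma, x, impl):
    """(Gamma box^min_-> x)_i = min_j impl(Gamma_ij, x_j)."""
    return impl(Gamma, x[None, :]).min(axis=1)

def chebyshev_distance(Gamma, beta, impl, grid=np.linspace(0, 1, 21)):
    """Brute-force nabla = inf_x || beta - Gamma box x ||_inf over a
    grid; feasible only for very small systems, unlike the paper's
    closed-form formulas."""
    best = np.inf
    for x in product(grid, repeat=Gamma.shape[1]):
        out = min_arrow(Gamma, np.array(x), impl)
        best = min(best, np.abs(beta - out).max())
    return best
```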
Click-through rate (CTR) prediction plays a critical role in recommender systems and online advertising. The data used in these applications are multi-field categorical data, where each feature belongs to one field. Field information has proven to be important, and several works take fields into account in their models. In this paper, we propose a novel approach to model the field information effectively and efficiently. The proposed approach is a direct improvement of FwFM and is named Field-matrixed Factorization Machines (FmFM, or $FM^2$). We also propose a new interpretation of FM and FwFM within the FmFM framework and compare it with FFM. Besides pruning the cross terms, our model supports field-specific variable dimensions of embedding vectors, which acts as soft pruning. We also propose an efficient way to minimize the dimensions while preserving model performance. The FmFM model can be optimized further by caching the intermediate vectors, after which it takes only thousands of floating-point operations (FLOPs) to make a prediction. Our experimental results show that it can outperform FFM, which is more complex, and its performance is comparable to that of DNN models that require many more FLOPs at runtime.
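The defining FmFM interaction inserts a learned field-pair matrix between the two embedding vectors; here is a minimal sketch (assuming, for simplicity, a single embedding dimension $d$ for all fields, whereas the model also supports field-specific dimensions).

```python
import numpy as np

def fmfm_score(v, fields, M):
    """FmFM pairwise interaction: sum_{i<j} <v_i M[f_i, f_j], v_j>.
    v: (n, d) embeddings of the active features; fields: (n,) field ids;
    M: (F, F, d, d) field-matrix tensor. FwFM is the special case where
    each M[f, g] is a scalar multiple of the identity."""
    n = len(fields)
    score = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            score += v[i] @ M[fields[i], fields[j]] @ v[j]
    return score
```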