We derive minimax testing errors in a distributed framework where the data is split over multiple machines and their communication to a central machine is limited to $b$ bits. We investigate both the $d$- and infinite-dimensional signal detection problems under Gaussian white noise. We also derive distributed testing algorithms attaining the theoretical lower bounds. Our results show that distributed testing is subject to fundamentally different phenomena that are not observed in distributed estimation. Among our findings, we show that testing protocols that have access to shared randomness can perform strictly better in some regimes than those that do not. We also observe that consistent nonparametric distributed testing is always possible, even with as little as $1$ bit of communication, and that the corresponding test outperforms the best local test using only the information available at a single machine. Furthermore, we derive adaptive nonparametric distributed testing strategies and the corresponding theoretical lower bounds.
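To make the $1$-bit claim concrete, the sketch below (our illustration under assumed names and thresholds, not the paper's protocol) has each of $m$ machines test its local $d$-dimensional Gaussian sample and transmit a single bit; the central machine rejects when the number of flags is improbably large under the null.

```python
import numpy as np
from scipy.stats import chi2, binom

def one_bit_distributed_test(local_samples, alpha=0.05):
    """Hypothetical 1-bit protocol: each machine sends one bit saying
    whether its local chi-squared statistic exceeds a level-alpha
    threshold; the center rejects H0 when the flag count is unlikely
    under Binomial(m, alpha)."""
    m = len(local_samples)
    flags = 0
    for X in local_samples:                     # X: (n_i, d) data on machine i
        n, d = X.shape
        stat = n * np.sum(X.mean(axis=0) ** 2)  # ~ chi2(d) under H0: mean = 0
        flags += int(stat > chi2.ppf(1 - alpha, df=d))
    # P(Binomial(m, alpha) >= flags); reject when this is <= alpha
    return binom.sf(flags - 1, m, alpha) <= alpha
```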
Zhang (2019) presented a general estimation approach based on the Gaussian distribution for general parametric models in which the likelihood of the data is difficult to obtain or unknown but the mean and variance-covariance matrix are known. Castilla and Zografos (2021) extended the method to density power divergence-based estimators, which are more robust than the likelihood-based Gaussian estimator against data contamination. In this paper we introduce the restricted minimum density power divergence Gaussian estimator (MDPDGE) and study its main asymptotic properties. We also examine its robustness through an influence function analysis. Restricted estimators are required in many practical situations, in particular when testing composite null hypotheses, and here they provide estimators that obey the inherent restrictions of the underlying distribution. Further, we derive robust Rao-type test statistics based on the MDPDGE for testing simple null hypotheses, and we deduce explicit expressions for some important distributions. Finally, we empirically evaluate the efficiency and robustness of the method through a simulation study.
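For background, the unrestricted minimum density power divergence estimator of Basu et al. (1998), on which the MDPDGE builds, minimizes the empirical objective

$$H_n(\theta) = \int f_\theta^{1+\alpha}(x)\,dx - \Bigl(1 + \frac{1}{\alpha}\Bigr) \frac{1}{n}\sum_{i=1}^n f_\theta^{\alpha}(X_i), \qquad \alpha > 0,$$

with $\alpha \to 0$ recovering the maximum likelihood estimator; the restricted version minimizes the same objective subject to $\theta \in \Theta_0$. This display is standard background notation, not a formula quoted from the paper.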
In group sequential analysis, data is collected and analyzed in batches until pre-defined stopping criteria are met. Inference in the parametric setup typically relies on the limiting asymptotic multivariate normality of the repeatedly computed maximum likelihood estimators (MLEs), a result first rigorously proved by Jennison and Turnbull (1997) under general regularity conditions. In this work, using Stein's method we provide optimal-order, non-asymptotic bounds, in a smooth test function distance, between the joint distribution of the group sequential MLEs and the appropriate normal distribution under the same conditions. Our results assume independent observations but allow heterogeneous (i.e., non-identically distributed) data. We examine how the resulting bounds simplify when the data comes from an exponential family. Finally, we present a general result relating the multivariate Kolmogorov distance to smooth test function distance which, in addition to extending our results to the former metric, may be of independent interest.
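For readers unfamiliar with the metric, a smooth test function distance between random vectors $W$ and $Z$ typically takes the standard form

$$d_{\mathcal{H}}(W, Z) = \sup_{h \in \mathcal{H}} \bigl| \mathbb{E}\,h(W) - \mathbb{E}\,h(Z) \bigr|,$$

where $\mathcal{H}$ is a class of test functions with bounded derivatives up to some fixed order; this definition is supplied here as background, not quoted from the paper.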
Prevalent deep learning models suffer from significant over-confidence under distribution shifts. In this paper, we propose Density-Softmax, a single deterministic approach for uncertainty estimation that combines a density function with the softmax layer. By using the likelihood of the latent representation, our approach produces more uncertain predictions when test samples are distant from the training samples. Theoretically, we prove that Density-Softmax is distance aware, meaning that its associated uncertainty metrics are monotonic functions of distance metrics; this has been shown to be a necessary condition for a neural network to produce high-quality uncertainty estimates. Empirically, our method enjoys computational efficiency similar to that of the standard softmax on shifted CIFAR-10, CIFAR-100, and ImageNet datasets across modern deep learning architectures. Notably, Density-Softmax uses 4 times fewer parameters than Deep Ensembles and has 6 times lower latency than Rank-1 Bayesian Neural Networks, while obtaining competitive predictive performance and lower calibration errors under distribution shifts.
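One plausible reading of the construction, sketched below under our own assumptions (the paper's exact density model and scaling may differ), fits a density model on the training latents and rescales the logits by a normalized likelihood, so that predictions flatten toward uniform far from the training data.

```python
import torch
import torch.nn.functional as F

def density_softmax(logits, log_density, log_density_max):
    """Hypothetical sketch: weight logits by a likelihood ratio in (0, 1]
    from a density model fit on training latents. As the test latent moves
    away from the training data, the ratio -> 0 and the class probabilities
    -> uniform, i.e., maximal predictive uncertainty."""
    w = torch.exp(log_density - log_density_max).clamp(max=1.0)
    return F.softmax(w.unsqueeze(-1) * logits, dim=-1)
```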
This letter proposes an extrinsic calibration approach for a pair consisting of a monocular camera and a prism-spinning solid-state LiDAR. The unique characteristics of the point cloud produced by the flower-like scanning pattern are first disclosed as vacant points, a type of outlier lying between the foreground target and background objects. Unlike existing methods that use only depth-continuous measurements, we use depth-discontinuous measurements to retain more valid features and to efficiently remove vacant points. The larger number of detected 3D corners thus contains more robust a priori information than usual, which, together with the 2D corners detected in the overlapping camera view and constrained by the proposed circularity and rectangularity rules, produces accurate extrinsic estimates. The algorithm is evaluated in real field experiments using both qualitative and quantitative performance criteria, and is found to be superior to existing algorithms. The code is available on GitHub.
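Once the 3D LiDAR corners are matched to 2D image corners, the extrinsic estimate itself reduces to a standard perspective-n-point solve; the snippet below is a generic OpenCV illustration (not the authors' released code; all names are ours).

```python
import numpy as np
import cv2

def estimate_extrinsics(corners_3d, corners_2d, K, dist_coeffs):
    """corners_3d: (N, 3) LiDAR-frame corners after vacant-point removal;
    corners_2d: (N, 2) matched image corners; K: 3x3 camera intrinsics."""
    ok, rvec, tvec = cv2.solvePnP(
        corners_3d.astype(np.float64), corners_2d.astype(np.float64),
        K, dist_coeffs, flags=cv2.SOLVEPNP_ITERATIVE)
    R, _ = cv2.Rodrigues(rvec)  # 3x3 rotation, LiDAR frame -> camera frame
    return R, tvec              # extrinsic rotation and translation
```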
Federated learning provides a privacy-aware learning framework by enabling participants to jointly train models without exposing their private data. However, federated learning has exhibited vulnerabilities to Byzantine attacks, in which the adversary aims to destroy the convergence and performance of the global model. Meanwhile, we observe that most existing robust AGgregation Rules (AGRs) fail to prevent the aggregated gradient from deviating from the optimal gradient (the average of the honest gradients) in the non-IID setting. We attribute the failure of these AGRs to two newly proposed concepts: identification failure and integrity failure. Identification failure mainly stems from the exacerbated curse of dimensionality in the non-IID setting. Integrity failure is a combined result of a conservative filtering strategy and gradient heterogeneity. To address both failures, we propose GAIN, a gradient decomposition scheme that helps adapt existing robust algorithms to heterogeneous datasets. We also provide convergence analysis for integrating existing robust AGRs into GAIN. Experiments on various real-world datasets verify the efficacy of the proposed GAIN.
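To make identification failure concrete, the toy experiment below (our illustration, not the paper's GAIN scheme) shows that even without attackers, a coordinate-wise robust aggregate such as the coordinate-wise median drifts from the honest average in $\ell_2$, with the gap growing on the order of $\sqrt{d}$ under heterogeneous honest gradients.

```python
import numpy as np

rng = np.random.default_rng(0)
n_honest, dims = 50, [100, 1_000, 10_000]
for d in dims:
    # heterogeneous (non-IID) honest gradients: per-coordinate means differ
    honest = rng.normal(loc=rng.normal(0.0, 1.0, size=d), scale=1.0,
                        size=(n_honest, d))
    optimal = honest.mean(axis=0)            # the "optimal" gradient
    cw_median = np.median(honest, axis=0)    # a conservative robust AGR
    print(d, np.linalg.norm(cw_median - optimal))  # grows roughly like sqrt(d)
```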
We prove a convergence theorem for U-statistics of degree two, where the data dimension $d$ is allowed to scale with the sample size $n$. We find that the limiting distribution of a U-statistic undergoes a phase transition from the non-degenerate Gaussian limit to the degenerate limit, regardless of its degeneracy and depending only on a moment ratio. A surprising consequence is that a non-degenerate U-statistic in high dimensions can have a non-Gaussian limit with a larger variance and an asymmetric distribution. Our bounds are valid for any finite $n$ and $d$, independent of the individual eigenvalues of the underlying function, and dimension-independent under a mild assumption. As an application, we apply our theory to two popular kernel-based distribution tests, MMD and KSD, whose high-dimensional performance has been challenging to study. In a simple empirical setting, our results correctly predict how the test power at a fixed threshold scales with $d$ and the bandwidth.
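For reference, a degree-two U-statistic with symmetric kernel $h$ has the standard form

$$U_n = \binom{n}{2}^{-1} \sum_{1 \le i < j \le n} h(X_i, X_j),$$

and the quadratic-time MMD and KSD statistics are of exactly this type; the display is background notation rather than a formula from the paper.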
Integrated sensing and communication improves system design by combining sensing and communication functions for increased efficiency, accuracy, and cost savings. Optimal integration requires understanding the trade-off between sensing and communication, which can be difficult due to the lack of unified performance metrics. In this paper, an information-theoretic approach is used to design the system with a unified metric. A sensing rate is introduced to measure the amount of information obtained by a pulse-Doppler radar system. An approximation and a lower bound of the sensing rate are obtained in closed form. Using both the derived sensing rate and the communication rate, the optimal bandwidth allocation strategy is found for maximizing the weighted sum of the spectral efficiencies for sensing and communication. The simulation results confirm the validity of the approximation and the effectiveness of the proposed bandwidth allocation.
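A natural formalization of the allocation step (our schematic; the paper's exact constraint set and weighting may differ) splits the total bandwidth $B$ between sensing and communication to maximize the weighted sum of spectral efficiencies,

$$\max_{B_s + B_c \le B,\; B_s, B_c \ge 0} \; w\, \eta_s(B_s) + (1 - w)\, \eta_c(B_c), \qquad w \in [0, 1],$$

where $\eta_s$ and $\eta_c$ denote the sensing and communication spectral efficiencies as functions of their allocated bandwidths.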
Federated bilevel optimization has attracted increasing attention due to emerging machine learning and communication applications. The biggest challenge lies in computing the gradient of the upper-level objective function (i.e., the hypergradient) in the federated setting, due to the nonlinear and distributed construction of a series of global Hessian matrices. In this paper, we propose a novel communication-efficient federated hypergradient estimator via aggregated iterative differentiation (AggITD). AggITD is simple to implement and significantly reduces the communication cost by conducting the federated hypergradient estimation and the lower-level optimization simultaneously. We show that the proposed AggITD-based algorithm achieves the same sample complexity as existing approximate implicit differentiation (AID)-based approaches, with far fewer communication rounds in the presence of data heterogeneity. Our results also shed light on the advantage of ITD over AID in federated/distributed hypergradient estimation. This differs from the non-distributed bilevel optimization setting, where ITD is less efficient than AID. Our extensive experiments demonstrate the effectiveness and communication efficiency of the proposed method.
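As background on ITD (a standard derivation, not the paper's federated estimator): with inner updates $y_{k+1} = y_k - \eta \nabla_y g(x, y_k)$ for $k = 0, \dots, K-1$, back-propagating $f(x, y_K)$ through the unrolled trajectory yields the hypergradient estimate

$$\widehat{\nabla}\Phi(x) = \nabla_x f(x, y_K) - \eta \sum_{k=0}^{K-1} \nabla^2_{xy} g(x, y_k) \prod_{j=k+1}^{K-1} \bigl( I - \eta\, \nabla^2_{yy} g(x, y_j) \bigr) \nabla_y f(x, y_K);$$

per the abstract, AggITD's contribution is to aggregate this estimate across clients while the lower-level optimization is still running, rather than in separate communication rounds.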
We study distributed algorithms for finding a Nash equilibrium (NE) in a class of non-cooperative convex games under partial information. Specifically, each agent has access only to its own smooth local cost function and can receive information from its neighbors in a time-varying directed communication network. To this end, we propose a distributed gradient play algorithm that computes a NE by utilizing local information exchange among the players. In this algorithm, every agent performs a gradient step to minimize its own cost function while sharing and retrieving information locally among its neighbors. Existing methods impose strong assumptions, such as balancedness of the mixing matrices and global knowledge of the network communication structure, including the Perron-Frobenius eigenvector of the adjacency matrix and other graph connectivity constants. In contrast, our approach relies only on the reasonable and widely used assumption of row-stochasticity of the mixing matrices. We analyze the algorithm for time-varying directed graphs and prove its convergence to the NE when the agents' cost functions are strongly convex and have Lipschitz continuous gradients. Numerical simulations of a Nash-Cournot game illustrate the efficacy of the proposed algorithm.
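Schematically (our notation, not taken from the paper), each agent $i$ maintains a local estimate $\hat{x}_i^k$ of the joint action profile, mixes neighbors' estimates with a row-stochastic matrix $A(k) = [a_{ij}(k)]$, and takes a gradient step in its own decision variable:

$$\hat{x}_i^{k+1} = \sum_{j=1}^{N} a_{ij}(k)\, \hat{x}_j^{k}, \qquad x_i^{k+1} = \hat{x}_{i,i}^{k+1} - \alpha\, \nabla_{x_i} J_i\bigl(\hat{x}_i^{k+1}\bigr),$$

where $\hat{x}_{i,i}$ denotes agent $i$'s own component of its estimate and $J_i$ its cost function.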
The aim of this work is to develop a fully distributed algorithmic framework for training graph convolutional networks (GCNs). The proposed method is able to exploit the meaningful relational structure of the input data, which is collected by a set of agents that communicate over a sparse network topology. After formulating the centralized GCN training problem, we first show how to perform inference in a distributed scenario where the underlying data graph is split among different agents. Then, we propose a distributed gradient descent procedure to solve the GCN training problem. The resulting model distributes computation along three lines: during inference, during back-propagation, and during optimization. Convergence to stationary solutions of the GCN training problem is also established under mild conditions. Finally, we propose an optimization criterion to design the communication topology between agents so that it matches the graph describing the data relationships. A wide set of numerical results validates our proposal. To the best of our knowledge, this is the first work combining graph convolutional neural networks with distributed optimization.
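For reference, the centralized layer being distributed is the standard GCN propagation rule of Kipf and Welling (2017),

$$X^{(\ell+1)} = \sigma\bigl(\hat{A}\, X^{(\ell)} W^{(\ell)}\bigr), \qquad \hat{A} = \tilde{D}^{-1/2} (A + I)\, \tilde{D}^{-1/2},$$

where $A$ is the data-graph adjacency matrix and $\tilde{D}$ the degree matrix of $A + I$; splitting the data graph among agents partitions the rows of $\hat{A} X^{(\ell)}$, which is consistent with the abstract's three lines of distributed computation: inference, back-propagation, and optimization.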