Spectral methods have myriad applications in high-dimensional statistics and data science, and while previous works have primarily focused on $\ell_2$ or $\ell_{2,\infty}$ eigenvector and singular vector perturbation theory, in many settings these analyses fall short of providing the fine-grained guarantees required for various inferential tasks. In this paper we study statistical inference for linear functions of eigenvectors and principal components, with a particular emphasis on the setting where gaps between eigenvalues may be extremely small relative to the corresponding spiked eigenvalue, a regime that has often been neglected in the literature. It has previously been established that linear functions of eigenvectors and principal components incur a non-negligible bias, so in this work we provide Berry-Esseen bounds for empirical linear forms and their debiased counterparts in the matrix denoising model and the spiked principal component analysis model, respectively, both under Gaussian noise. Next, we propose data-driven estimators for the appropriate bias and variance quantities, resulting in approximately valid confidence intervals, and we demonstrate our theoretical results through numerical simulations. We further apply our results to obtain distributional theory and confidence intervals for eigenvector entries, for which debiasing is not necessary. Crucially, our proposed confidence intervals and bias-correction procedures can all be computed directly from data without sample splitting and are asymptotically valid under minimal assumptions on the eigengap and signal strength. Furthermore, our Berry-Esseen bounds clearly reflect the effects of both signal strength and eigenvalue closeness on the estimation and inference tasks.
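To make the matrix denoising setting concrete, here is a minimal NumPy sketch, entirely our own illustration: the planted signal strength, the choice of functional $a$, and all variable names are assumptions, and the paper's debiasing and variance estimators are not reproduced. It simulates a rank-one spike observed under symmetric Gaussian noise and evaluates the plug-in linear form $a^\top \hat{u}$ whose bias and fluctuations the Berry-Esseen bounds above characterize.

```python
# Minimal simulation of the matrix denoising model: M = lambda * u u^T + W,
# with W symmetric Gaussian noise. Illustrative only; the paper's
# bias-correction and variance estimators are not implemented here.
import numpy as np

rng = np.random.default_rng(0)
n = 500
u = rng.standard_normal(n)
u /= np.linalg.norm(u)                       # planted unit eigenvector
lam = 8.0 * np.sqrt(n)                       # assumed spike strength (well above the noise)
G = rng.standard_normal((n, n))
W = (G + G.T) / np.sqrt(2)                   # symmetric Gaussian noise matrix
M = lam * np.outer(u, u) + W                 # observed matrix

vals, vecs = np.linalg.eigh(M)               # eigenvalues in ascending order
u_hat = vecs[:, -1]                          # leading empirical eigenvector
u_hat = u_hat * np.sign(u_hat @ u)           # resolve the sign ambiguity

a = np.zeros(n); a[0] = 1.0                  # example functional: first coordinate
print("plug-in linear form a^T u_hat:", a @ u_hat)
print("target             a^T u    :", a @ u)
```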
The vast majority of ASR research uses corpora in which both the training and test data have been pre-segmented into utterances. In most real-world ASR use cases, however, test audio is not segmented, leading to a mismatch between inference-time conditions and models trained on segmented utterances. In this paper, we re-release three standard ASR corpora (TED-LIUM 3, GigaSpeech, and VoxPopuli-en) with updated transcriptions and alignments to enable their use for long-form ASR research. We use these reconstituted corpora to study the train-test mismatch problem for transducers and attention-based encoder-decoders (AEDs), confirming that AEDs are more susceptible to this issue. Finally, we benchmark a simple long-form training approach for these models, showing its efficacy for model robustness under this domain shift.
Machine learning (ML) methods offer a wide range of configurable hyperparameters that have a significant influence on their performance. While accuracy is a commonly used performance objective, in many settings it is not sufficient: optimizing ML models with respect to multiple objectives such as accuracy, confidence, fairness, calibration, privacy, latency, and memory consumption is becoming crucial. Hyperparameter optimization, the systematic search over these hyperparameters, is already challenging for a single objective and even more so for multiple objectives. In addition, differences in objective scales, evaluation failures, and the presence of outlier values in objectives make the problem harder still. We propose a multi-objective Bayesian optimization (MoBO) algorithm that addresses these problems through uniform objective normalization and randomized weights in scalarization. We increase the efficiency of our approach by imposing constraints on the objectives to avoid exploring unnecessary configurations (e.g., those with insufficient accuracy). Finally, we leverage an approach to parallelize MoBO, which results in a 5x speed-up when using 16x more workers.
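As a sketch of the two ingredients named in the abstract, uniform objective normalization and randomized scalarization weights, the following snippet shows one plausible reading; the function names and the rank-based normalizer are our assumptions, not the authors' released implementation.

```python
# Hedged sketch: rank-based ("uniform") normalization of each objective,
# followed by scalarization with random convex weights. Each BO iteration
# would redraw the weights so the search covers different trade-offs.
import numpy as np

def uniform_normalize(Y):
    # Map each objective column to [0, 1] via its empirical ranks, which is
    # insensitive to differing scales and to outlier objective values.
    ranks = np.argsort(np.argsort(Y, axis=0), axis=0)
    return ranks / (len(Y) - 1)

def random_scalarization(Y, rng):
    w = rng.dirichlet(np.ones(Y.shape[1]))   # random weights on the simplex
    return uniform_normalize(Y) @ w          # single target for the surrogate model

rng = np.random.default_rng(0)
Y = np.column_stack([rng.lognormal(size=20),  # e.g., latency (heavy-tailed scale)
                     rng.uniform(size=20)])   # e.g., 1 - accuracy
targets = random_scalarization(Y, rng)        # fit a single-objective surrogate to these
```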
The membership and threshold problems for recurrence sequences are fundamental open decision problems in automated verification. The former asks whether a chosen target is an element of a sequence, whilst the latter asks whether every term in a sequence is bounded from below by a given value. A rational-valued sequence $\langle u_n \rangle_n$ is hypergeometric if it satisfies a first-order linear recurrence of the form $p(n)u_{n+1} = q(n)u_{n}$ with polynomial coefficients $p,q\in\mathbb{Z}[x]$. In this note we establish decidability results for the aforementioned problems for restricted classes of hypergeometric sequences. For example, we establish decidability under the assumption that the polynomial coefficients $p,q\in\mathbb{Z}[x]$ are monic and split over an imaginary quadratic extension of $\mathbb{Q}$. We also establish conditional decidability results: subject to Schanuel's conjecture, the problems are decidable when the irreducible factors of the monic polynomial coefficients $p,q\in\mathbb{Z}[x]$ are either linear or quadratic.
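For illustration (our own example, not taken from the note), the recurrence $p(n)u_{n+1} = q(n)u_n$ can be unrolled exactly with rational arithmetic, which gives a naive finite check, but of course no decision procedure, for membership and threshold properties:

```python
# Unroll a hypergeometric sequence p(n) u_{n+1} = q(n) u_n exactly.
# A finite prefix can only refute membership/threshold claims, never
# prove them; deciding them in general is the subject of the note above.
from fractions import Fraction

def hypergeometric_terms(p, q, u0, N):
    """Yield u_0, ..., u_N, assuming p(n) != 0 on the range."""
    u = Fraction(u0)
    for n in range(N + 1):
        yield u
        u = u * Fraction(q(n), p(n))

# Example: p(x) = x + 2, q(x) = x + 1 gives u_n = u_0 / (n + 1).
terms = list(hypergeometric_terms(lambda n: n + 2, lambda n: n + 1, 1, 10))
print(terms[:5])                              # [1, 1/2, 1/3, 1/4, 1/5]
print(Fraction(1, 4) in terms)                # membership holds on this prefix
print(all(t >= 0 for t in terms))             # bounded below by 0 on this prefix
```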
Recent research efforts on semantic communication have mostly treated accuracy as the main objective when optimizing goal-oriented communication systems. However, these approaches introduce a paradox: the accuracy of artificial intelligence (AI) tasks should naturally emerge through training rather than being dictated by network constraints. Acknowledging this dilemma, this work introduces an approach that leverages rate-distortion theory to analyze the distortions induced by communication and semantic compression, and thereby their effect on the learning process. Specifically, we examine the distribution shift between the original data and the distorted data, and assess its impact on the AI model's performance. Building upon this analysis, we can preemptively estimate the empirical accuracy of AI tasks, making the goal-oriented semantic communication problem tractable. To achieve this objective, we present the theoretical foundation of our approach, accompanied by simulations and experiments that demonstrate its effectiveness. The experimental results indicate that our proposed method enables accurate AI task performance while adhering to network constraints, establishing it as a valuable contribution to the field of signal processing. Furthermore, this work advances research in goal-oriented semantic communication and highlights the significance of data-driven approaches in optimizing the performance of intelligent systems.
Most autonomous navigation systems assume wheeled robots are rigid bodies and their 2D planar workspaces can be divided into free spaces and obstacles. However, recent wheeled mobility research, showing that wheeled platforms have the potential to move over vertically challenging terrain (e.g., rocky outcroppings, rugged boulders, and fallen tree trunks), invalidates both assumptions. Navigating off-road vehicle chassis with long suspension travel and low tire pressure in places where the boundary between obstacles and free spaces is blurry requires precise 3D modeling of the interaction between the chassis and the terrain, which is complicated by suspension and tire deformation, varying tire-terrain friction, vehicle weight distribution and momentum, etc. In this paper, we present a learning approach to model wheeled mobility in terms of vehicle-terrain forward dynamics and to plan feasible, stable, and efficient motion to drive over vertically challenging terrain without rolling over or getting stuck. We present physical experiments on two wheeled robots and show that planning with our learned model can achieve up to a 60% improvement in navigation success rate and a 46% reduction in unstable chassis roll and pitch angles.
This paper presents a comprehensive study of the influence of DEM type and spatial resolution on the accuracy of flood inundation prediction. The research employs a state-of-the-art deep learning method using a 1D convolutional neural network (CNN). The CNN-based method takes training input data in the form of synthetic hydrographs, along with target data represented by water depths obtained using a 2D hydrodynamic model, LISFLOOD-FP. The performance of the trained CNN models is then evaluated and compared against an observed flood event. The study examines the use of digital surface models (DSMs) and digital terrain models (DTMs) derived from LIDAR-based 1m data, at resolutions of 15 and 30 meters. The proposed methodology is implemented and evaluated at a well-established benchmark location in Carlisle, UK. The paper also discusses the applicability of the methodology to the challenges encountered in data-scarce flood-prone regions, exemplified by Pakistan. The study found that the DTM performs better than the DSM at lower resolutions. Using a 30m DTM improved flood depth prediction accuracy by about 21% during the peak stage, and increasing the resolution to 15m improved the RMSE and overlap index by at least 50% and 20%, respectively, across all flood phases. The study demonstrates that while coarser resolution may impact the accuracy of the CNN model, it remains a viable option for rapid flood prediction compared to hydrodynamic modeling approaches.
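The following PyTorch sketch illustrates the kind of 1D CNN described above, mapping a synthetic hydrograph (a discharge time series) to per-cell water depths; the layer sizes, sequence length, and output dimension are placeholder assumptions rather than the paper's architecture.

```python
# Illustrative 1D CNN: hydrograph in, gridded water depths out.
# All hyperparameters here are assumptions for the sketch.
import torch
import torch.nn as nn

class HydrographCNN(nn.Module):
    def __init__(self, n_cells=1024):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(8),
        )
        self.head = nn.Linear(64 * 8, n_cells)  # one water depth per DEM cell

    def forward(self, x):                        # x: (batch, 1, time_steps)
        return self.head(self.features(x).flatten(1))

model = HydrographCNN()
depths = model(torch.randn(4, 1, 96))            # -> (4, 1024) predicted depths
```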
Deep neural networks have revolutionized many machine learning tasks in power systems, ranging from pattern recognition to signal processing. The data in these tasks are typically represented in Euclidean domains. Nevertheless, there is an increasing number of applications in power systems where data are collected from non-Euclidean domains and represented as graph-structured data with high-dimensional features and interdependency among nodes. The complexity of graph-structured data has brought significant challenges to existing deep neural networks defined in Euclidean domains. Recently, many studies on extending deep neural networks to graph-structured data in power systems have emerged. In this paper, we present a comprehensive overview of graph neural networks (GNNs) in power systems. Specifically, several classical paradigms of GNN structures (e.g., graph convolutional networks, graph recurrent neural networks, graph attention networks, graph generative networks, spatial-temporal graph convolutional networks, and hybrid forms of GNNs) are summarized, and key applications in power systems, such as fault diagnosis, power prediction, power flow calculation, and data generation, are reviewed in detail. Furthermore, the main issues and research trends regarding the application of GNNs in power systems are discussed.
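To make the notion of a GNN layer concrete, here is the standard graph convolutional propagation rule in NumPy, shown generically; the random graph and feature sizes are placeholders and are not tied to any particular power-system task in the survey.

```python
# One GCN layer: H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W).
import numpy as np

def gcn_layer(A, H, W):
    A_hat = A + np.eye(len(A))                    # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)        # ReLU nonlinearity

rng = np.random.default_rng(0)
A = np.triu((rng.random((6, 6)) < 0.4).astype(float), 1)
A = A + A.T                                       # e.g., a small bus-connectivity graph
H = rng.standard_normal((6, 4))                   # e.g., per-bus measurements
W = rng.standard_normal((4, 8))                   # learnable weights
print(gcn_layer(A, H, W).shape)                   # (6, 8)
```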
Sampling methods (e.g., node-wise, layer-wise, or subgraph sampling) have become an indispensable strategy for speeding up the training of large-scale Graph Neural Networks (GNNs). However, existing sampling methods are mostly based on graph structural information and ignore the dynamics of optimization, which leads to high variance in estimating the stochastic gradients. The high-variance issue can be very pronounced in extremely large graphs, where it results in slow convergence and poor generalization. In this paper, we theoretically analyze the variance of sampling methods and show that, due to the composite structure of the empirical risk, the variance of any sampling method can be decomposed into \textit{embedding approximation variance} in the forward stage and \textit{stochastic gradient variance} in the backward stage, and that mitigating both types of variance is necessary to obtain a faster convergence rate. We propose a decoupled variance reduction strategy that employs (approximate) gradient information to adaptively sample nodes with minimal variance, and explicitly reduces the variance introduced by embedding approximation. We show theoretically and empirically that the proposed method, even with smaller mini-batch sizes, enjoys a faster convergence rate and achieves better generalization than existing methods.
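To illustrate why gradient information helps, here is a toy of the general principle only, not the paper's decoupled estimator (which additionally corrects embedding approximation): sampling nodes with probability proportional to their (approximate) gradient norms yields the minimum-variance unbiased importance-sampling estimator, as the small experiment below shows.

```python
# Toy importance-sampling experiment: estimating the sum of per-node
# gradient norms from a mini-batch. Sampling proportional to the norms
# drives the estimator's variance to (near) zero.
import numpy as np

rng = np.random.default_rng(0)
n, batch, trials = 10_000, 256, 200
g = rng.lognormal(sigma=2.0, size=n)              # stand-in per-node gradient norms

def estimate_sum(p):
    idx = rng.choice(n, size=batch, p=p)
    return (g[idx] / (batch * p[idx])).sum()      # unbiased for g.sum() under any p

uniform = np.full(n, 1.0 / n)
adaptive = g / g.sum()                            # probabilities proportional to norms
print(np.std([estimate_sum(uniform) for _ in range(trials)]))   # large spread
print(np.std([estimate_sum(adaptive) for _ in range(trials)]))  # essentially zero
```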
The notion of uncertainty is of major importance in machine learning and constitutes a key element of machine learning methodology. In line with the statistical tradition, uncertainty has long been perceived as almost synonymous with standard probability and probabilistic predictions. Yet, due to the steadily increasing relevance of machine learning for practical applications and related issues such as safety requirements, new problems and challenges have recently been identified by machine learning scholars, and these problems may call for new methodological developments. In particular, this includes the importance of distinguishing between (at least) two different types of uncertainty, often referred to as aleatoric and epistemic. In this paper, we provide an introduction to the topic of uncertainty in machine learning as well as an overview of attempts so far at handling uncertainty in general and formalizing this distinction in particular.
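One widely used formalization of this distinction, included here as an illustration of the kind of construction such overviews cover rather than as the paper's own proposal, decomposes an ensemble's predictive entropy into an aleatoric part (expected member entropy) and an epistemic part (mutual information):

```python
# Entropy decomposition over an ensemble: total = aleatoric + epistemic.
import numpy as np

def entropy(p, axis=-1):
    return -(p * np.log(np.clip(p, 1e-12, None))).sum(axis=axis)

rng = np.random.default_rng(0)
member_probs = rng.dirichlet(np.ones(3), size=5)  # 5 ensemble members, 3 classes
total = entropy(member_probs.mean(axis=0))        # entropy of the mean prediction
aleatoric = entropy(member_probs).mean()          # mean per-member entropy
epistemic = total - aleatoric                     # mutual information, always >= 0
print(total, aleatoric, epistemic)
```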
We introduce a multi-task setup of identifying and classifying entities, relations, and coreference clusters in scientific articles. We create SciERC, a dataset that includes annotations for all three tasks, and develop a unified framework called Scientific Information Extractor (SciIE) with shared span representations. The multi-task setup reduces cascading errors between tasks and leverages cross-sentence relations through coreference links. Experiments show that our multi-task model outperforms previous models in scientific information extraction without using any domain-specific features. We further show that the framework supports construction of a scientific knowledge graph, which we use to analyze information in scientific literature.