Bayesian inference and kernel methods are well established in machine learning. The neural network Gaussian process in particular provides a framework for investigating neural networks in the limit of infinitely wide hidden layers by using kernel and inference methods. Here we build upon this limit and provide a field-theoretic formalism which covers the generalization properties of infinitely wide networks. We systematically compute generalization properties of linear, non-linear, and deep non-linear networks for kernel matrices with heterogeneous entries. In contrast to currently employed spectral methods, we derive the generalization properties from the statistical properties of the input, elucidating the interplay of input dimensionality, size of the training data set, and variability of the data. We show that data variability leads to a non-Gaussian action reminiscent of a ($\varphi^3+\varphi^4$)-theory. Applying our formalism to a synthetic task and to MNIST, we obtain a homogeneous kernel matrix approximation for the learning curve as well as corrections due to data variability; these allow us to estimate the generalization properties and yield exact results for the bounds of the learning curves in the case of infinitely many training data points.
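As a point of reference for the infinite-width limit this abstract builds on, the sketch below computes a deep ReLU NNGP kernel via the standard arc-cosine recursion and uses it for Gaussian-process regression on toy data. The depth, variances, and synthetic data are illustrative assumptions; this is the kernel-inference baseline, not the field-theoretic formalism itself.

```python
import numpy as np

def relu_nngp(X1, X2, depth=3, sw2=1.0, sb2=0.0):
    """NNGP kernel of a fully connected ReLU network (arc-cosine recursion)."""
    K12 = sb2 + sw2 * X1 @ X2.T / X1.shape[1]
    K11 = sb2 + sw2 * np.sum(X1 ** 2, axis=1) / X1.shape[1]
    K22 = sb2 + sw2 * np.sum(X2 ** 2, axis=1) / X2.shape[1]
    for _ in range(depth):
        norm = np.sqrt(np.outer(K11, K22))
        theta = np.arccos(np.clip(K12 / norm, -1.0, 1.0))
        K12 = sb2 + sw2 / (2 * np.pi) * norm * (np.sin(theta) + (np.pi - theta) * np.cos(theta))
        K11 = sb2 + sw2 / 2 * K11   # diagonal entries have theta = 0
        K22 = sb2 + sw2 / 2 * K22
    return K12

def gp_predict(Xtr, ytr, Xte, noise=1e-3, depth=3):
    """Posterior mean of the NNGP: the infinite-width network's prediction."""
    Ktt = relu_nngp(Xtr, Xtr, depth)
    Kst = relu_nngp(Xte, Xtr, depth)
    alpha = np.linalg.solve(Ktt + noise * np.eye(len(Xtr)), ytr)
    return Kst @ alpha

# toy synthetic task: label given by the sign of one input coordinate
rng = np.random.default_rng(0)
Xtr, Xte = rng.standard_normal((50, 10)), rng.standard_normal((200, 10))
ytr, yte = np.sign(Xtr[:, 0]), np.sign(Xte[:, 0])
pred = gp_predict(Xtr, ytr, Xte)
print("test error:", np.mean(np.sign(pred) != yte))
```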
Quantum kernel methods are a candidate for quantum speed-ups in supervised machine learning. The number of quantum measurements N required for a reasonable kernel estimate is a critical resource, both from complexity considerations and because of the constraints of near-term quantum hardware. We emphasize that for classification tasks the aim is reliable classification and not precise kernel evaluation, and demonstrate that the former is far more resource efficient. Furthermore, we show that classification accuracy is not a suitable performance metric in the presence of noise, and we motivate a new metric that characterizes the reliability of classification. We then obtain a bound for N which ensures, with high probability, that classification errors over a dataset are bounded by the margin errors of an idealized quantum kernel classifier. Using chance constraint programming and the subgaussian bounds of quantum kernel distributions, we derive several Shot-frugal and Robust (ShofaR) programs starting from the primal formulation of the Support Vector Machine. This significantly reduces the number of quantum measurements needed and is robust to noise by construction. Our strategy is applicable to uncertainty in quantum kernels arising from any source of unbiased noise.
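To make the shot-budget trade-off concrete, here is a minimal sketch in which each kernel entry is estimated from a finite number of binary measurement outcomes and then passed to a precomputed-kernel SVM. A classical RBF kernel stands in for an exact quantum fidelity kernel, and this is only an illustration of how shot noise enters the classifier, not the ShofaR chance-constrained program.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)

def exact_kernel(X1, X2, gamma=0.5):
    # stand-in for an exact quantum fidelity kernel: entries lie in (0, 1]
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def shot_noisy(K, shots):
    # each entry estimated as the mean of `shots` binary measurement outcomes;
    # the estimate is unbiased but may be slightly indefinite, which is part of
    # the robustness problem discussed above
    return rng.binomial(shots, K) / shots

X = rng.standard_normal((80, 2))
y = np.sign(X[:, 0] * X[:, 1])
K_exact = exact_kernel(X, X)

for shots in (10, 100, 1000):
    clf = SVC(kernel="precomputed", C=1.0).fit(shot_noisy(K_exact, shots), y)
    print(shots, clf.score(shot_noisy(K_exact, shots), y))
```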
Iterative minimization algorithms appear in various areas including machine learning, neural networks, and information theory. The em algorithm is one of the best-known iterative minimization algorithms in machine learning, and the Arimoto-Blahut algorithm is a typical iterative algorithm in information theory. However, these two topics have long been studied separately. In this paper, we generalize an algorithm that was recently proposed in the context of the Arimoto-Blahut algorithm. We then show various convergence theorems, one of which covers the case when each iterative step is performed only approximately. We also apply this algorithm to the target problem of the em algorithm and propose an improvement of it. In addition, we apply it to various other problems in information theory.
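For reference, a minimal sketch of the classical Arimoto-Blahut iteration for channel capacity, the algorithm family that the paper generalizes; the binary symmetric channel example is an illustrative assumption, and this is the textbook form rather than the generalized algorithm.

```python
import numpy as np

def arimoto_blahut(W, n_iter=500, tol=1e-12):
    """Channel capacity max_r I(r, W) for a discrete channel W[x, y] = p(y|x)."""
    nx = W.shape[0]
    r = np.full(nx, 1.0 / nx)                    # input distribution, start uniform
    for _ in range(n_iter):
        q = r[:, None] * W                       # joint p(x, y)
        q /= q.sum(axis=0, keepdims=True)        # posterior q(x|y)
        log_r = (W * np.log(np.clip(q, 1e-300, None))).sum(axis=1)
        r_new = np.exp(log_r - log_r.max())
        r_new /= r_new.sum()
        if np.abs(r_new - r).max() < tol:
            r = r_new
            break
        r = r_new
    p_y = r @ W
    ratio = np.clip(W / p_y[None, :], 1e-300, None)
    capacity = np.sum(r[:, None] * W * np.log(ratio))   # in nats
    return capacity, r

# binary symmetric channel with crossover 0.1: capacity = 1 - H(0.1) bits
W = np.array([[0.9, 0.1], [0.1, 0.9]])
C, r = arimoto_blahut(W)
print(C / np.log(2), r)
```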
Detecting differences in gene expression is an important part of single-cell RNA sequencing experiments, and many statistical methods have been developed for this aim. Most differential expression analyses focus on comparing expression between two groups (e.g., treatment vs. control), but there is increasing interest in multi-condition differential expression analyses in which expression is measured in many conditions, and the aim is to accurately detect and estimate expression differences in all conditions. We show that directly modeling single-cell RNA-seq counts in all conditions simultaneously, while also inferring how expression differences are shared across conditions, leads to greatly improved performance for detecting and estimating expression differences compared to existing methods. We illustrate the potential of this new approach by analyzing data from a single-cell experiment studying the effects of cytokine stimulation on gene expression. We call our new method "Poisson multivariate adaptive shrinkage"; it is implemented in an R package available online at https://github.com/stephenslab/poisson.mash.alpha.
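As a baseline illustration of the multi-condition setting (not the Poisson multivariate adaptive shrinkage method itself, which additionally shares information across conditions and genes), the sketch below runs a per-gene Poisson likelihood-ratio test of a shared rate against condition-specific rates on simulated counts.

```python
import numpy as np
from scipy.stats import chi2

def poisson_lrt(counts):
    """LRT for one gene: a single Poisson rate in all conditions vs. one rate per
    condition. counts has shape (n_conditions, n_cells_per_condition)."""
    rate0 = counts.mean()                        # null: shared rate
    rates1 = counts.mean(axis=1, keepdims=True)  # alternative: per-condition rates
    ll0 = np.sum(counts * np.log(rate0) - rate0)       # log(k!) terms cancel in the LRT
    ll1 = np.sum(counts * np.log(rates1) - rates1)
    stat = 2 * (ll1 - ll0)
    return chi2.sf(stat, df=counts.shape[0] - 1)

rng = np.random.default_rng(2)
null_gene = rng.poisson(2.0, size=(5, 100))                   # no expression difference
de_gene = rng.poisson([[2.0]] * 4 + [[3.0]], size=(5, 100))   # condition 5 is shifted
print(poisson_lrt(null_gene), poisson_lrt(de_gene))
```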
We prove the convergence of a meshfree method for solving the elliptic Monge-Ampere equation with Dirichlet boundary conditions on a bounded domain. An L2 error estimate is obtained based on kernel-based trial spaces generated by compactly supported radial basis functions. We obtain the convergence result when the testing discretization is finer than the trial discretization. The convergence rate depends on the regularity of the solution, the smoothness of the computational domain, and the approximation properties of the scaled kernel-based spaces. The presented convergence theory covers a wide range of kernel-based trial spaces, including stationary and non-stationary approximation. An extension to non-Dirichlet boundary conditions will appear in a forthcoming paper.
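To illustrate the kind of kernel-based trial space the analysis covers, here is a small interpolation sketch using Wendland's compactly supported C^2 radial function on scattered centres in the unit square. It shows only the trial-space construction; the Monge-Ampere collocation scheme and its error analysis are not reproduced here, and the test function and scale parameter are illustrative choices.

```python
import numpy as np

def wendland_c2(r):
    """Wendland's compactly supported C^2 radial function (1 - r)_+^4 (4r + 1)."""
    return np.where(r < 1.0, (1.0 - r) ** 4 * (4.0 * r + 1.0), 0.0)

def kernel_matrix(X1, X2, support=0.5):
    r = np.linalg.norm(X1[:, None, :] - X2[None, :, :], axis=-1) / support
    return wendland_c2(r)

# trial space spanned by translates of the scaled kernel at scattered centres
rng = np.random.default_rng(3)
X = rng.uniform(0.0, 1.0, size=(200, 2))
u = np.exp(0.5 * (X ** 2).sum(axis=1))       # a smooth strictly convex test function
coef = np.linalg.solve(kernel_matrix(X, X) + 1e-10 * np.eye(len(X)), u)

Xtest = rng.uniform(0.0, 1.0, size=(500, 2))
err = np.abs(kernel_matrix(Xtest, X) @ coef - np.exp(0.5 * (Xtest ** 2).sum(axis=1)))
print("max interpolation error:", err.max())
```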
We study operator - or noncommutative - variants of constraint satisfaction problems (CSPs). These higher-dimensional variants are a core topic of investigation in quantum information, where they arise as nonlocal games and entangled multiprover interactive proof systems (MIP*). The idea of higher-dimensional relaxations of CSPs is also important in the classical literature. For example, since the celebrated work of Goemans and Williamson on Max-Cut, higher-dimensional vector relaxations have been central to the design of approximation algorithms for classical CSPs. We introduce a framework for designing approximation algorithms for noncommutative CSPs. Prior to this work, Max-$2$-Lin$(k)$ was the only family of noncommutative CSPs known to be efficiently solvable. This work is the first to establish approximation ratios for a broader class of noncommutative CSPs. In the study of classical CSPs, $k$-ary decision variables are often represented by $k$-th roots of unity, which generalise to the noncommutative setting as order-$k$ unitary operators. In our framework, using representation theory, we develop a way of constructing unitary solutions from SDP relaxations, extending the pioneering work of Tsirelson on XOR games. Then, we introduce a novel rounding scheme to transform these solutions to order-$k$ unitaries. Our main technical innovation here is a theorem guaranteeing that, for any set of unitary operators, there exists a set of order-$k$ unitaries that closely mimics it. As an integral part of the rounding scheme, we prove a random matrix theory result that characterises the distribution of the relative angles between eigenvalues of random unitaries using tools from free probability.
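For the classical background the abstract refers to, the sketch below implements the Goemans-Williamson pipeline for Max-Cut: solve the vector (SDP) relaxation, recover vectors from the positive semidefinite solution, and round with a random hyperplane. It uses cvxpy, and it is the commutative analogue only, not the paper's noncommutative framework or its order-$k$ rounding scheme.

```python
import numpy as np
import cvxpy as cp

def goemans_williamson_maxcut(edges, n, trials=50, seed=0):
    """Classical GW: SDP relaxation of Max-Cut followed by random-hyperplane rounding."""
    X = cp.Variable((n, n), symmetric=True)
    constraints = [X >> 0, cp.diag(X) == 1]
    objective = cp.Maximize(sum((1 - X[i, j]) / 2 for i, j in edges))
    cp.Problem(objective, constraints).solve()

    # vectors v_i with <v_i, v_j> = X_ij, via eigendecomposition of the PSD solution
    w, V = np.linalg.eigh(X.value)
    vecs = V * np.sqrt(np.clip(w, 0, None))

    rng = np.random.default_rng(seed)
    best = 0
    for _ in range(trials):
        signs = np.sign(vecs @ rng.standard_normal(n))   # random hyperplane
        best = max(best, sum(signs[i] != signs[j] for i, j in edges))
    return best

edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]   # 5-cycle; the optimum cut is 4
print(goemans_williamson_maxcut(edges, n=5))
```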
Speech emotion recognition (SER) systems aim to recognize human emotional states during human-computer interaction. Most existing SER systems are trained on utterance-level labels. However, not all frames in an utterance have affective states consistent with the utterance-level label, which makes it difficult for the model to distinguish the true emotion of the audio and leads to poor performance. To address this problem, we propose a frame-level emotional state alignment method for SER. First, we fine-tune the HuBERT model to obtain a SER system using task-adaptive pretraining (TAPT), and extract embeddings from its transformer layers to form frame-level pseudo-emotion labels via clustering. Then, the pseudo labels are used to pretrain HuBERT, so that each frame-level output of HuBERT carries corresponding emotional information. Finally, we fine-tune the pretrained HuBERT for SER by adding an attention layer on top of it, which focuses on those frames that are more emotionally consistent with the utterance-level label. Experimental results on IEMOCAP indicate that our proposed method outperforms state-of-the-art (SOTA) methods.
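A minimal sketch of an attention layer placed on top of frame-level embeddings, assuming 768-dimensional features (as produced by HuBERT-base) and four emotion classes; the scoring network and classifier head are illustrative choices rather than the exact architecture of the paper.

```python
import torch
import torch.nn as nn

class AttentivePoolingClassifier(nn.Module):
    """Weights each frame embedding, then classifies the pooled utterance vector."""
    def __init__(self, dim=768, n_classes=4):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, 1))
        self.head = nn.Linear(dim, n_classes)

    def forward(self, frames, mask=None):
        # frames: (batch, n_frames, dim) frame-level embeddings, e.g. HuBERT outputs
        scores = self.score(frames).squeeze(-1)           # (batch, n_frames)
        if mask is not None:                              # ignore padded frames
            scores = scores.masked_fill(~mask, float("-inf"))
        weights = scores.softmax(dim=-1)                  # attention over frames
        pooled = (weights.unsqueeze(-1) * frames).sum(1)  # (batch, dim)
        return self.head(pooled)

# toy usage with random "frame embeddings"
model = AttentivePoolingClassifier()
logits = model(torch.randn(2, 120, 768))
print(logits.shape)   # torch.Size([2, 4])
```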
Most state-of-the-art machine learning techniques revolve around the optimisation of loss functions. Defining appropriate loss functions is therefore critical to successfully solving problems in this field. We present a survey of the most commonly used loss functions for a wide range of applications, divided into classification, regression, ranking, sample generation, and energy-based modelling. Overall, we introduce 33 different loss functions and organise them into an intuitive taxonomy. Each loss function is given a theoretical backing, and we describe where it is best used. This survey aims to provide a reference of the most essential loss functions for both beginner and advanced machine learning practitioners.
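To give a flavour of the surveyed material, three of the most common losses (squared loss for regression, binary cross-entropy for classification, and the hinge loss used by margin-based classifiers) can be written in a few lines; the toy inputs are illustrative only.

```python
import numpy as np

def mse(y_true, y_pred):
    """Squared (L2) loss for regression."""
    return np.mean((y_true - y_pred) ** 2)

def binary_cross_entropy(y_true, p_pred, eps=1e-12):
    """Log loss for binary classification; y_true in {0, 1}, p_pred in (0, 1)."""
    p = np.clip(p_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

def hinge(y_true, score):
    """Margin loss used by SVMs; y_true in {-1, +1}, score is the raw model output."""
    return np.mean(np.maximum(0.0, 1.0 - y_true * score))

y = np.array([1, 0, 1, 1])
p = np.array([0.9, 0.2, 0.6, 0.4])
print(mse(y, p), binary_cross_entropy(y, p), hinge(2 * y - 1, 2 * p - 1))
```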
In large-scale systems there are fundamental challenges when centralised techniques are used for task allocation. The number of interactions is limited by resource constraints such as those on computation, storage, and network communication. We can increase scalability by implementing the system as a distributed task-allocation system, sharing tasks across many agents. However, this also increases the resource cost of communications and synchronisation, and is difficult to scale. In this paper we present four algorithms to solve these problems. In combination, these algorithms enable each agent to improve its task allocation strategy through reinforcement learning, while adjusting how much it explores the system in response to how optimal it believes its current strategy is, given its past experience. We focus on distributed agent systems where the agents' behaviours are constrained by resource usage limits, limiting agents to local rather than system-wide knowledge. We evaluate these algorithms in a simulated environment where agents are given a task composed of multiple subtasks that must be allocated to other agents with differing capabilities, which then carry out those tasks. We also simulate real-life system effects such as networking instability. Our solution is shown to solve the task allocation problem to within 6.7% of the theoretical optimum for the system configurations considered. It provides 5x better performance recovery than approaches without knowledge retention when system connectivity is impacted, and is tested on systems of up to 100 agents with less than a 9% impact on the algorithms' performance.
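A minimal sketch of one ingredient described above, under assumed simplifications: a single agent learns the value of delegating a subtask to each peer and scales its exploration rate with how much its recent value estimates are still changing. This is an illustration of the idea only, not the paper's four algorithms or its simulation environment.

```python
import numpy as np

class AllocatorAgent:
    """Learns the expected reward of delegating a subtask to each peer agent.
    Exploration shrinks as the agent's recent value estimates stabilise."""
    def __init__(self, n_peers, lr=0.1):
        self.q = np.zeros(n_peers)       # estimated value of each peer
        self.lr = lr
        self.recent_errors = []          # simple proxy for confidence in the strategy

    def epsilon(self):
        if not self.recent_errors:
            return 1.0
        return float(np.clip(4 * np.mean(self.recent_errors[-20:]), 0.05, 1.0))

    def choose(self, rng):
        if rng.random() < self.epsilon():
            return int(rng.integers(len(self.q)))   # explore
        return int(np.argmax(self.q))               # exploit current strategy

    def update(self, peer, reward):
        delta = reward - self.q[peer]
        self.q[peer] += self.lr * delta
        self.recent_errors.append(abs(delta))

rng = np.random.default_rng(4)
agent, true_skill = AllocatorAgent(n_peers=5), np.array([0.2, 0.5, 0.9, 0.4, 0.1])
for _ in range(500):
    peer = agent.choose(rng)
    agent.update(peer, rng.normal(true_skill[peer], 0.05))
print(np.argmax(agent.q), round(agent.epsilon(), 3))
```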
The remarkable practical success of deep learning has revealed some major surprises from a theoretical perspective. In particular, simple gradient methods easily find near-optimal solutions to non-convex optimization problems, and despite giving a near-perfect fit to training data without any explicit effort to control model complexity, these methods exhibit excellent predictive accuracy. We conjecture that specific principles underlie these phenomena: that overparametrization allows gradient methods to find interpolating solutions, that these methods implicitly impose regularization, and that overparametrization leads to benign overfitting. We survey recent theoretical progress that provides examples illustrating these principles in simpler settings. We first review classical uniform convergence results and why they fall short of explaining aspects of the behavior of deep learning methods. We give examples of implicit regularization in simple settings, where gradient methods lead to minimal norm functions that perfectly fit the training data. Then we review prediction methods that exhibit benign overfitting, focusing on regression problems with quadratic loss. For these methods, we can decompose the prediction rule into a simple component that is useful for prediction and a spiky component that is useful for overfitting but, in a favorable setting, does not harm prediction accuracy. We focus specifically on the linear regime for neural networks, where the network can be approximated by a linear model. In this regime, we demonstrate the success of gradient flow, and we consider benign overfitting with two-layer networks, giving an exact asymptotic analysis that precisely demonstrates the impact of overparametrization. We conclude by highlighting the key challenges that arise in extending these insights to realistic deep learning settings.
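The implicit-regularization claim about minimal norm interpolation can be checked directly in the overparametrized linear setting: gradient descent on the squared loss, started from zero, fits the training data perfectly and coincides with the minimum-norm (pseudoinverse) interpolant. The dimensions, step size, and random data below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
n, d = 30, 300                      # overparametrized: more features than samples
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

# gradient descent on the squared loss, initialized at zero
w = np.zeros(d)
for _ in range(20000):
    w -= 1e-3 * X.T @ (X @ w - y) / n

w_min_norm = np.linalg.pinv(X) @ y  # minimum-norm interpolating solution

print(np.max(np.abs(X @ w - y)))        # ~0: perfect fit to the training data
print(np.linalg.norm(w - w_min_norm))   # ~0: GD converges to the min-norm interpolant
```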
Deep learning constitutes a recent, modern technique for image processing and data analysis, with promising results and large potential. As deep learning has been successfully applied in various domains, it has recently also entered the domain of agriculture. In this paper, we perform a survey of 40 research efforts that employ deep learning techniques applied to various agricultural and food production challenges. We examine the particular agricultural problems under study, the specific models and frameworks employed, the sources, nature, and pre-processing of the data used, and the overall performance achieved according to the metrics used in each work under study. Moreover, we study comparisons of deep learning with other existing popular techniques, with respect to differences in classification or regression performance. Our findings indicate that deep learning provides high accuracy, outperforming existing commonly used image processing techniques.