Many studies have analyzed working memory (WM) using electroencephalography (EEG). However, little is known about how resting-state (RS) brain neurodynamics change across the stages of the WM process. Here, we identified frequency-specific power and information-flow patterns in three RS EEG recordings: before WM encoding, after WM encoding, and after WM retrieval. Our results demonstrated differences in power and information flow across the RS recordings in the delta (1-3.5 Hz), alpha (8-13.5 Hz), and beta (14-29.5 Hz) bands. In particular, there was a marked increase in the alpha band after WM retrieval. In addition, we assessed the association between the significant RS EEG characteristics and WM performance; interestingly, correlations were found only in the alpha band. These results suggest that RS EEG measured at different stages of the WM process reflects meaningful variability in the brain mechanisms underlying cognitive function and is related to WM performance.
We consider a generalization of the classical 100 prisoners problem and its variant involving empty boxes, in which the winning probability for a team depends on the number of attempts as well as on the number of winners. We call this the unconstrained 100 prisoners problem. After introducing the three main classes of strategies, we define a variety of `hybrid' strategies and quantify their winning efficiency. Whenever analytic results are not available, we make use of Monte Carlo simulations to estimate the winning probabilities with high accuracy. Based on the results obtained, we conjecture that all strategies, except for the strategy maximizing the winning probability of the classical (constrained) problem, converge to the random strategy under weak conditions on the number of players or empty boxes. We conclude by commenting on possible applications of our results to understanding processes of information retrieval, such as ``memory'' in living organisms.
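To make the comparison of strategy classes concrete, below is a minimal Monte Carlo sketch (not the paper's code) contrasting the cycle-following strategy, which maximizes the winning probability of the classical constrained problem, with the purely random strategy in the standard setting of 100 prisoners and 50 attempts; all function names and parameters are illustrative.

```python
import random

def cycle_strategy_wins(n_prisoners=100, n_attempts=50):
    """One trial of the classical problem with the cycle-following strategy.
    Returns True if every prisoner finds their own number."""
    boxes = list(range(n_prisoners))
    random.shuffle(boxes)                      # box i contains number boxes[i]
    for prisoner in range(n_prisoners):
        box = prisoner                         # start at the box carrying one's own label
        for _ in range(n_attempts):
            if boxes[box] == prisoner:
                break
            box = boxes[box]                   # follow the permutation cycle
        else:
            return False                       # this prisoner ran out of attempts
    return True

def random_strategy_wins(n_prisoners=100, n_attempts=50):
    """One trial where each prisoner opens boxes uniformly at random."""
    boxes = list(range(n_prisoners))
    random.shuffle(boxes)
    for prisoner in range(n_prisoners):
        opened = random.sample(range(n_prisoners), n_attempts)
        if prisoner not in (boxes[b] for b in opened):
            return False
    return True

def estimate(win_fn, n_trials=20_000):
    """Monte Carlo estimate of the team winning probability."""
    return sum(win_fn() for _ in range(n_trials)) / n_trials

if __name__ == "__main__":
    print("cycle-following:", estimate(cycle_strategy_wins))   # ~0.31 (1 - ln 2)
    print("random:         ", estimate(random_strategy_wins))  # ~(1/2)^100, effectively 0
```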
Massive samples of event sequence data occur in various domains, including e-commerce, healthcare, and finance. Inference for such data poses two main challenges, computational and methodological. The amount of available data and the length of event sequences per client are typically large, so long-term modelling is required. Moreover, the data are often sparse and non-uniform in time, making classic approaches for time series processing inapplicable. Existing solutions for such cases include recurrent and transformer architectures. To handle continuous time, these models are typically augmented with specific parametric intensity functions defined at each moment; due to their parametric nature, such intensities can represent only a limited class of event sequences. We propose the COTIC method, based on a continuous convolutional neural network suited to events that occur non-uniformly in time. In COTIC, dilations and a multi-layer architecture efficiently handle dependencies between events. Furthermore, the model provides general intensity dynamics in continuous time, including the self-excitement encountered in practice. The COTIC model outperforms existing approaches on the majority of the considered datasets, producing embeddings of an event sequence that can be used to solve downstream tasks, e.g., predicting the next event type and return time. The code of the proposed method can be found in the GitHub repository (//github.com/VladislavZh/COTIC).
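To illustrate what a continuous convolution over non-uniformly spaced events can look like, here is a minimal PyTorch sketch in which the convolution kernel is a small network over time lags; this is an illustrative reconstruction under our own assumptions, not the COTIC implementation, and all names and dimensions are placeholders.

```python
import torch
import torch.nn as nn

class ContinuousConv1d(nn.Module):
    """A minimal continuous-time causal convolution: the kernel is a small MLP
    over time lags, so events need not lie on a regular grid."""

    def __init__(self, in_dim, out_dim, kernel_size=5, dilation=1):
        super().__init__()
        self.kernel_size, self.dilation = kernel_size, dilation
        self.in_dim, self.out_dim = in_dim, out_dim
        # maps a scalar time lag to an (in_dim x out_dim) mixing matrix
        self.kernel = nn.Sequential(
            nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, in_dim * out_dim)
        )

    def forward(self, times, feats):
        # times: (batch, seq), feats: (batch, seq, in_dim)
        B, T, _ = feats.shape
        out = feats.new_zeros(B, T, self.out_dim)
        for k in range(1, self.kernel_size + 1):
            shift = k * self.dilation
            if shift >= T:
                break
            lag = (times[:, shift:] - times[:, :-shift]).unsqueeze(-1)   # (B, T-shift, 1)
            w = self.kernel(lag).view(B, T - shift, self.in_dim, self.out_dim)
            out[:, shift:] += torch.einsum("bti,btio->bto", feats[:, :-shift], w)
        return out

# usage: irregular timestamps and per-event features
times = torch.cumsum(torch.rand(2, 50), dim=1)   # non-uniform event times
feats = torch.randn(2, 50, 8)
layer = ContinuousConv1d(in_dim=8, out_dim=16, kernel_size=4, dilation=2)
print(layer(times, feats).shape)                 # torch.Size([2, 50, 16])
```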
The change-plane Cox model is a popular tool for the subgroup analysis of survival data. Despite the rich literature on this model, there has been limited investigation into the asymptotic properties of the estimators of the finite-dimensional parameter. In particular, the convergence rate, let alone the asymptotic distribution, has not been fully characterized for the general model in which classification is based on multiple covariates. To bridge this theoretical gap, this study proposes a maximum smoothed partial likelihood estimator and establishes the following asymptotic properties. First, it shows that the convergence rate for the classification parameter can be arbitrarily close to 1/n up to a logarithmic factor under a certain condition on the covariates and the choice of the tuning parameter. Given this convergence rate result, it also establishes the asymptotic normality of the estimator of the regression parameter.
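For readers unfamiliar with the model, the following LaTeX sketch records the hazard form typically assumed in change-plane Cox models together with the smoothing idea behind a smoothed partial likelihood; the notation (X, Z, W, \alpha, \beta, \theta, K, h) is illustrative and not taken from the paper.

```latex
% Hazard with a change plane: subgroup membership is determined by the linear
% boundary  \theta^{\top} W \ge 0  in the classification covariates W.
\lambda(t \mid X, Z, W)
  = \lambda_0(t)\,
    \exp\!\big\{ \alpha^{\top} X
      + (\beta^{\top} Z)\,\mathbf{1}\{\theta^{\top} W \ge 0\} \big\}.

% Smoothed estimation: the indicator is replaced by a smooth kernel K with
% bandwidth h (the tuning parameter), and the usual Cox partial likelihood is
% maximized with
%   \mathbf{1}\{\theta^{\top} W \ge 0\} \;\approx\; K\!\big(\theta^{\top} W / h\big).
```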
In the Internet-of-Things (IoT), massive amounts of sensitive and confidential information are transmitted wirelessly, making security a serious concern. This is particularly true when technologies such as non-orthogonal multiple access (NOMA) are used, which make it possible for users to access each other's data. This paper studies secure communications in multiuser NOMA downlink systems, where each user is potentially an eavesdropper. Resource allocation is formulated to maximize the sum secrecy rate while satisfying the users' data requirements and the power constraint. We solve this non-trivial, mixed-integer non-linear programming problem by decomposing it into power allocation, which admits a closed-form solution, and user pairing, which is solved efficiently using linear programming relaxation and the barrier algorithm. These subproblems are solved iteratively until convergence, and the convergence rate is rigorously analyzed. Simulations demonstrate that our approach significantly outperforms existing alternatives in both sum secrecy rate and computational complexity.
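As a rough illustration of the user-pairing subproblem, the sketch below formulates pairing as a relaxed linear program solved with SciPy's interior-point (barrier-type) solver, assuming the per-pair secrecy rates have already been obtained from the closed-form power allocation; the rates here are random placeholders and the code is not the paper's implementation.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n_users = 6                                      # assume an even number of users
pairs = list(itertools.combinations(range(n_users), 2))
rates = rng.uniform(0.1, 2.0, size=len(pairs))   # placeholder secrecy rate per candidate pair

# Constraint: each user must belong to exactly one pair.
A_eq = np.zeros((n_users, len(pairs)))
for p, (i, j) in enumerate(pairs):
    A_eq[i, p] = A_eq[j, p] = 1.0
b_eq = np.ones(n_users)

# Relax the binary pairing variables to [0, 1] and solve with an
# interior-point (barrier-type) LP solver; maximization becomes minimizing -rates.
res = linprog(c=-rates, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 1)] * len(pairs), method="highs-ipm")

# Note: for general (non-bipartite) pairing the relaxation can be fractional,
# in which case a rounding/repair step would be needed.
chosen = [pairs[p] for p in np.flatnonzero(res.x > 0.5)]
print("selected pairs:", chosen, " relaxed optimum:", -res.fun)
```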
We study distributed algorithms for finding a Nash equilibrium (NE) in a class of non-cooperative convex games under partial information. Specifically, each agent has access only to its own smooth local cost function and can receive information from its neighbors in a time-varying directed communication network. To this end, we propose a distributed gradient play algorithm to compute an NE by utilizing local information exchange among the players. In this algorithm, every agent performs a gradient step to minimize its own cost function while sharing and retrieving information locally among its neighbors. Existing methods impose strong assumptions, such as balancedness of the mixing matrices and global knowledge of the network communication structure, including the Perron-Frobenius eigenvector of the adjacency matrix and other graph connectivity constants. In contrast, our approach relies only on the reasonable and widely used assumption of row-stochasticity of the mixing matrices. We analyze the algorithm for time-varying directed graphs and prove its convergence to the NE when the agents' cost functions are strongly convex and have Lipschitz continuous gradients. Numerical simulations are performed for a Nash-Cournot game to illustrate the efficacy of the proposed algorithm.
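The following is a minimal NumPy sketch of distributed gradient play on a Nash-Cournot game with a row-stochastic mixing matrix over a directed ring; the game parameters, graph, and step size are illustrative choices rather than the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5                                     # number of firms / agents
a, b = 10.0, 1.0                          # inverse demand: price = a - b * total quantity
c = rng.uniform(1.0, 3.0, size=N)         # marginal production costs

def grad_i(i, x):
    """Gradient of firm i's cost  c[i]*x[i] - x[i]*(a - b*x.sum())  w.r.t. x[i]."""
    return c[i] - a + b * x.sum() + b * x[i]

# Row-stochastic mixing matrix for a directed ring with self-loops.
W = np.zeros((N, N))
for i in range(N):
    W[i, i] = W[i, (i - 1) % N] = 0.5

X = np.zeros((N, N))                      # X[i] = agent i's estimate of the full action profile
alpha = 0.05                              # step size

for _ in range(5000):
    X = W @ X                             # mix estimates received from in-neighbours
    for i in range(N):
        X[i, i] -= alpha * grad_i(i, X[i])    # gradient step on the agent's own action only

# Closed-form Cournot-Nash quantities for comparison.
S = (N * a - c.sum()) / (b * (N + 1))
q_star = (a - c) / b - S
print("learned: ", np.round(np.diag(X), 3))
print("exact NE:", np.round(q_star, 3))
```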
Deep Neural Networks (DNNs) are widely used for their ability to effectively approximate large classes of functions. This flexibility, however, makes the strict enforcement of constraints on DNNs an open problem. Here we present a framework that, under mild assumptions, allows the exact enforcement of constraints on parameterized sets of functions such as DNNs. Instead of imposing ``soft'' constraints via additional terms in the loss, we restrict (a subset of) the DNN parameters to a submanifold on which the constraints are satisfied exactly throughout the entire training procedure. We focus on constraints that are outside the scope of equivariant networks used in Geometric Deep Learning. As a major example of the framework, we restrict the filters of a Convolutional Neural Network (CNN) to be wavelets, and apply these wavelet networks to the task of contour prediction in the medical domain.
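To illustrate the reparameterization idea of keeping (a subset of) parameters on a constraint submanifold throughout training, here is a minimal PyTorch sketch using a simpler constraint than the paper's wavelet construction: each convolutional filter is forced to be exactly zero-mean and unit-norm (a discrete analogue of the wavelet admissibility condition) for every value of the unconstrained parameters.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConstrainedConv1d(nn.Module):
    """1D convolution whose filters are reparameterized so that each filter is
    exactly zero-mean and unit-norm at every step of training."""

    def __init__(self, in_ch, out_ch, kernel_size):
        super().__init__()
        self.raw = nn.Parameter(torch.randn(out_ch, in_ch, kernel_size))
        self.pad = kernel_size // 2

    def filters(self):
        w = self.raw - self.raw.mean(dim=-1, keepdim=True)        # zero mean along the kernel
        norm = torch.linalg.vector_norm(w, dim=(-2, -1), keepdim=True)
        return w / (norm + 1e-8)                                  # unit norm per output filter

    def forward(self, x):
        return F.conv1d(x, self.filters(), padding=self.pad)

layer = ConstrainedConv1d(1, 4, kernel_size=9)
x = torch.randn(2, 1, 64)
y = layer(x)                                    # constraints hold for any value of layer.raw
print(y.shape, layer.filters().mean(dim=-1))    # per-filter means are (numerically) zero
```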
The success of pre-trained contextualized representations has prompted researchers to analyze them for the presence of linguistic information. Indeed, it is natural to assume that these pre-trained representations do encode some level of linguistic knowledge as they have brought about large empirical improvements on a wide variety of NLP tasks, which suggests they are learning true linguistic generalization. In this work, we focus on intrinsic probing, an analysis technique where the goal is not only to identify whether a representation encodes a linguistic attribute but also to pinpoint where this attribute is encoded. We propose a novel latent-variable formulation for constructing intrinsic probes and derive a tractable variational approximation to the log-likelihood. Our results show that our model is versatile and yields tighter mutual information estimates than two intrinsic probes previously proposed in the literature. Finally, we find empirical evidence that pre-trained representations develop a cross-lingually entangled notion of morphosyntax.
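As a simplified illustration of the mutual-information framing behind intrinsic probing, the sketch below scores subsets of representation dimensions by a classifier-based estimate of a variational lower bound on mutual information with a synthetic attribute; this is a plain dimension-subset probe under our own assumptions, not the paper's latent-variable formulation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

rng = np.random.default_rng(0)
n, d = 2000, 16
labels = rng.integers(0, 2, size=n)              # a binary linguistic attribute (synthetic)
reps = rng.normal(size=(n, d))
reps[:, 3] += 1.5 * labels                       # pretend the attribute lives mostly in dim 3

def mi_lower_bound(dims):
    """H(Y) - CE(Y | reps[:, dims]) in nats: an estimate of a variational lower
    bound on mutual information (computed in-sample here for brevity)."""
    p = labels.mean()
    h_y = -(p * np.log(p) + (1 - p) * np.log(1 - p))
    clf = LogisticRegression(max_iter=1000).fit(reps[:, dims], labels)
    ce = log_loss(labels, clf.predict_proba(reps[:, dims]))
    return h_y - ce

for dims in ([0, 1], [3], [3, 7, 11]):
    print(dims, round(mi_lower_bound(dims), 3))   # subsets containing dim 3 score higher
```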
When analysing multiple time series that may be subject to changepoints, it is sometimes possible to specify a priori, by means of a graph, which pairs of time series are likely to be impacted by simultaneous changepoints. This article proposes an informative prior for changepoints which encodes the information contained in the graph, inducing a changepoint model for multiple time series that borrows strength across clusters of connected time series to detect weak signals for synchronous changepoints. The graphical model for changepoints is further extended to allow dependence between nearby but not necessarily synchronous changepoints across neighbouring time series in the graph. A novel reversible jump Markov chain Monte Carlo (MCMC) algorithm making use of auxiliary variables is proposed to sample from the graphical changepoint model. The merit of the proposed approach is demonstrated through a changepoint analysis of computer network authentication logs from Los Alamos National Laboratory (LANL), showing an improvement in detecting weak signals for network intrusions across users linked by network connectivity, whilst limiting the number of false alerts.
As soon as abstract mathematical computations were adapted to computation on digital computers, the problem of efficient representation, manipulation, and communication of the numerical values in those computations arose. Strongly related to the problem of numerical representation is the problem of quantization: in what manner should a set of continuous real-valued numbers be distributed over a fixed discrete set of numbers to minimize the number of bits required and also to maximize the accuracy of the attendant computations? This perennial problem of quantization is particularly relevant whenever memory and/or computational resources are severely restricted, and it has come to the forefront in recent years due to the remarkable performance of Neural Network models in computer vision, natural language processing, and related areas. Moving from floating-point representations to low-precision fixed integer values represented in four bits or fewer holds the potential to reduce the memory footprint and latency by a factor of 16x; in fact, reductions of 4x to 8x are often realized in practice in these applications. Thus, it is not surprising that quantization has emerged recently as an important and very active sub-area of research in the efficient implementation of computations associated with Neural Networks. In this article, we survey approaches to the problem of quantizing the numerical values in deep Neural Network computations, covering the advantages and disadvantages of current methods. With this survey and its organization, we hope to have presented a useful snapshot of the current research in quantization for Neural Networks and to have provided an organization that eases the evaluation of future research in this area.
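As a concrete reference point for the methods surveyed, the following is a minimal sketch of uniform affine quantization, the basic scheme underlying many low-precision approaches: real values are mapped to b-bit integers via a scale and zero-point and then dequantized; the function names and the bit-width settings are illustrative.

```python
import numpy as np

def quantize(x, bits=4):
    """Uniform affine (asymmetric) quantization of a float array to b-bit integers."""
    qmin, qmax = 0, 2**bits - 1
    scale = (x.max() - x.min()) / (qmax - qmin)
    zero_point = int(round(qmin - x.min() / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.int32)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Map the integers back to real values for computation or error analysis."""
    return scale * (q.astype(np.float32) - zero_point)

x = np.random.default_rng(0).normal(size=1000).astype(np.float32)
for bits in (8, 4, 2):
    q, s, z = quantize(x, bits)
    err = np.abs(x - dequantize(q, s, z)).mean()
    print(f"{bits}-bit: mean abs error = {err:.4f}")   # accuracy vs. bit-width trade-off
```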
Generative adversarial networks (GANs) have been extensively studied in the past few years. Arguably their most significant impact has been in the area of computer vision, where great advances have been made in challenges such as plausible image generation, image-to-image translation, facial attribute manipulation, and similar domains. Despite the significant successes achieved to date, applying GANs to real-world problems still poses significant challenges, three of which we focus on here: (1) the generation of high-quality images, (2) diversity of image generation, and (3) stable training. Focusing on the degree to which popular GAN technologies have made progress against these challenges, we provide a detailed review of the state of the art in GAN-related research in the published scientific literature. We further structure this review through a convenient taxonomy we have adopted, based on variations in GAN architectures and loss functions. While several reviews of GANs have been presented to date, none have considered the status of this field in terms of progress towards addressing practical challenges relevant to computer vision. Accordingly, we review and critically discuss the most popular architecture-variant and loss-variant GANs for tackling these challenges. Our objective is to provide an overview as well as a critical analysis of the status of GAN research in terms of relevant progress towards important computer vision application requirements. In doing so, we also discuss the most compelling applications in computer vision in which GANs have demonstrated considerable success, along with some suggestions for future research directions. Code related to the GAN variants studied in this work is summarized at //github.com/sheqi/GAN_Review.