We propose a covariance stationarity test for an otherwise dependent and possibly globally non-stationary time series. We work in the setting of Jin, Wang and Wang (2015), who exploit Walsh (1923) functions (global square waves) in order to compare sub-sample covariances with their full-sample counterpart. They impose strict stationarity under the null, consider only linear processes under either hypothesis, and exploit linearity in order to obtain a parametric estimator of an inverted high-dimensional asymptotic covariance matrix. Conversely, we allow for linear or nonlinear processes with possibly non-iid innovations. This is important in macroeconomics and finance, where nonlinear feedback and random volatility occur in many settings. We completely sidestep asymptotic covariance matrix estimation and inversion by bootstrapping a max-correlation difference statistic, where the maximum is taken over the correlation lag h and the Walsh-function-generated sub-sample counter k (the number of systematic samples). We achieve a higher feasible rate of increase for the maximum lag and counter H and K, and in the supplemental material we present a data-driven method for selecting H and K. Of particular note, our test is capable of detecting breaks in variance and distant, or very mild, deviations from stationarity.
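As a rough illustration of the idea (not the authors' exact statistic), the sketch below contrasts sub-sample autocorrelations against the full-sample ones and takes the maximum over lags and split counts; contiguous blocks stand in for the Walsh-generated systematic samples, and the dependent-bootstrap calibration is omitted:

```python
import numpy as np

def autocorr(x, h):
    """Sample autocorrelation of x at lag h >= 1."""
    xc = np.asarray(x, dtype=float) - np.mean(x)
    return np.dot(xc[:-h], xc[h:]) / np.dot(xc, xc)

def max_corr_diff(x, H=5, K=3):
    """Max absolute difference between sub-sample and full-sample
    autocorrelations over lags h <= H and split counts 2..K+1.
    Contiguous blocks stand in for the Walsh systematic samples."""
    x = np.asarray(x, dtype=float)
    stat = 0.0
    for h in range(1, H + 1):
        full = autocorr(x, h)
        for k in range(2, K + 2):
            for block in np.array_split(x, k):
                stat = max(stat, abs(autocorr(block, h) - full))
    return stat

rng = np.random.default_rng(0)
stationary = rng.standard_normal(2000)
ar = np.empty(1000)
ar[0] = rng.standard_normal()
for t in range(1, 1000):
    ar[t] = 0.8 * ar[t - 1] + rng.standard_normal()
broken = np.concatenate([ar, rng.standard_normal(1000)])  # persistence break at midpoint
s_stat, s_break = max_corr_diff(stationary), max_corr_diff(broken)
```

On a series with a mid-sample break in persistence the statistic is markedly larger than on a comparable stationary series; in the actual test its null distribution would be approximated by bootstrap rather than compared directly.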
We consider the problem of comparing several samples of stochastic processes with respect to their second-order structure, and of describing the main modes of variation in this second-order structure, if present. These tasks can be seen as an Analysis of Variance (ANOVA) and a Principal Component Analysis (PCA) of covariance operators, respectively. They arise naturally in functional data analysis, where several populations are to be contrasted relative to the nature of their dispersion around their means, rather than relative to the means themselves. We contribute a novel approach based on optimal (multi)transport, where each covariance is identified with a centred Gaussian process of corresponding covariance. By constructing the optimal simultaneous coupling of these Gaussian processes, we contrast the (linear) maps that achieve it with the identity, with respect to a norm-induced distance. The resulting test statistic, calibrated by permutation, is seen to distinctly outperform the state of the art, and to furnish considerable power even under local alternatives. This effect is genuinely functional, and is related to the potential for perfect discrimination in infinite dimensions. In the event of a rejection of the null hypothesis stipulating equality, a geometric interpretation of the transport maps allows us to construct a (tangent space) PCA revealing the main modes of variation. As a necessary step in developing our methodology, we prove results on the existence and boundedness of optimal multitransport maps, which are of independent interest in the theory of transport of Gaussian processes. The transportation ANOVA and PCA are illustrated on a variety of simulated and real examples.
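For two centred Gaussian measures with covariances A and B, the optimal transport map is the linear map T = A^{-1/2}(A^{1/2} B A^{1/2})^{1/2} A^{-1/2}, and a natural test statistic is its distance from the identity. A minimal finite-dimensional sketch (the permutation calibration and the simultaneous multi-sample coupling are omitted):

```python
import numpy as np

def psd_sqrt(M):
    """Symmetric PSD square root via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.T

def transport_map(A, B, eps=1e-10):
    """Optimal transport map from N(0, A) to N(0, B):
    T = A^{-1/2} (A^{1/2} B A^{1/2})^{1/2} A^{-1/2}."""
    Ah = psd_sqrt(A)
    Ah_inv = np.linalg.inv(Ah + eps * np.eye(len(A)))   # eps guards near-singular A
    return Ah_inv @ psd_sqrt(Ah @ B @ Ah) @ Ah_inv

def deviation_from_identity(A, B):
    """Test statistic: Frobenius distance of the transport map from identity."""
    return np.linalg.norm(transport_map(A, B) - np.eye(len(A)))

dev_equal = deviation_from_identity(np.eye(3), np.eye(3))       # ~0 under the null
dev_scaled = deviation_from_identity(np.eye(3), 4 * np.eye(3))  # T = 2I -> sqrt(3)
```

Equal covariances give a (numerically) zero deviation, while B = 4A yields T = 2I and Frobenius deviation sqrt(d); in practice the statistic would be recomputed over permuted group labels to obtain a p-value.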
Fiber metal laminates (FML) are of high interest for lightweight structures as they combine the advantageous material properties of metals and fiber-reinforced polymers. However, low-velocity impacts can lead to complex internal damage, and structural health monitoring with guided ultrasonic waves (GUW) is a methodology to identify such damage. Numerical simulations form the basis for the corresponding investigations, but experimental validation of dispersion diagrams over a wide frequency range is hardly found in the literature. In this work the dispersion relation of GUWs is experimentally determined for an FML made of carbon fiber-reinforced polymer and steel. For this purpose, multi-frequency excitation signals are used to generate GUWs and the resulting wave field is measured via laser scanning vibrometry. The data are processed by means of a non-uniform discrete 2D Fourier transform and analyzed in the frequency-wavenumber domain. The experimental data are in excellent agreement with data from a numerical solution of the analytical framework. In conclusion, this work presents a highly automatable method to experimentally determine dispersion diagrams of GUWs in FML over large frequency ranges with high accuracy.
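On a regular measurement grid the frequency-wavenumber processing reduces to a standard 2D FFT (the non-uniform discrete transform used in the paper additionally handles irregular scan points). A synthetic single-mode sketch with assumed grid parameters:

```python
import numpy as np

# Assumed synthetic grid: 128 scan points at 1 mm pitch, 256 samples at 1 us.
nx, nt, dx, dt = 128, 256, 1e-3, 1e-6
x = np.arange(nx) * dx
t = np.arange(nt) * dt
f0 = 125e3                       # single-mode tone, chosen to lie on the FFT grid
k0 = 2 * np.pi * 125.0           # wavenumber in rad/m -> phase velocity 1000 m/s
u = np.sin(k0 * x[:, None] - 2 * np.pi * f0 * t[None, :])   # traveling wave field

# Frequency-wavenumber spectrum via a regular-grid 2D FFT
U = np.abs(np.fft.fftshift(np.fft.fft2(u)))
k_axis = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(nx, dx))
f_axis = np.fft.fftshift(np.fft.fftfreq(nt, dt))
i, j = np.unravel_index(np.argmax(U), U.shape)
k_peak, f_peak = abs(k_axis[i]), abs(f_axis[j])   # one point of the dispersion diagram
```

The spectral peak recovers the wavenumber-frequency pair of the propagating wave, from which the phase velocity (here 1000 m/s by construction) follows; sweeping excitation frequencies traces out the full dispersion diagram.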
An implicit variable-step BDF2 scheme is established for solving the space-fractional Cahn-Hilliard equation, involving the fractional Laplacian, derived from a gradient flow in the negative-order Sobolev space $H^{-\alpha}$, $\alpha\in(0,1)$. The Fourier pseudo-spectral method is applied for the spatial approximation. The proposed scheme inherits the energy dissipation law in the form of a modified discrete energy under a sufficient restriction on the time-step ratios. The convergence of the fully discrete scheme is rigorously established using a newly proved discrete embedding-type convolution inequality dealing with the fractional Laplacian. Besides, the mass conservation and the unique solvability are also theoretically guaranteed. Numerical experiments are carried out to verify the accuracy and the energy dissipation law for various interface widths. In particular, the multiple-time-scale evolution of the solution is captured by an adaptive time-stepping strategy in short-to-long time simulations.
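The key spatial ingredient, the fractional Laplacian under the Fourier pseudo-spectral method, acts diagonally in frequency space through the symbol |k|^{2α}. A one-dimensional periodic sketch (the variable-step BDF2 time discretization itself is not shown):

```python
import numpy as np

def frac_laplacian_1d(u, alpha, L=2 * np.pi):
    """Apply (-Delta)^alpha to samples of a periodic function via the
    Fourier symbol |k|^(2*alpha)."""
    n = len(u)
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)   # integer wavenumbers on [0, 2*pi)
    return np.real(np.fft.ifft(np.abs(k) ** (2 * alpha) * np.fft.fft(u)))

x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
out = frac_laplacian_1d(np.sin(2 * x), alpha=0.5)   # expect |2|^(2*0.5) * sin(2x)
```

On the eigenfunction sin(2x), the operator returns 2^{2α} sin(2x), which the sketch reproduces to machine precision; the pseudo-spectral scheme exploits exactly this diagonalization at every implicit solve.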
Results on the spectral behavior of random matrices as the dimension increases are applied to the problem of detecting the number of sources impinging on an array of sensors. A common strategy to solve this problem is to estimate the multiplicity of the smallest eigenvalue of the spatial covariance matrix $R$ of the sensed data from the sample covariance matrix $\widehat{R}$. Existing approaches, such as that based on information theoretic criteria, rely on the closeness of the noise eigenvalues of $\widehat R$ to each other and, therefore, the sample size has to be quite large when the number of sources is large in order to obtain a good estimate. The analysis presented in this report focuses on the splitting of the spectrum of $\widehat{R}$ into noise and signal eigenvalues. It is shown that, when the number of sensors is large, the number of signals can be estimated with a sample size considerably less than that required by previous approaches. The practical significance of the main result is that detection can be achieved with a number of samples comparable to the number of sensors in large dimensional array processing.
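A minimal sketch of the spectrum-splitting idea with assumed parameters: eigenvalues of the sample covariance that exceed the Marchenko-Pastur bulk edge $\sigma^2(1+\sqrt{d/n})^2$ are counted as signals. The known noise level, the random steering matrix, and the 10% safety margin for edge fluctuations are all simplifying assumptions of this sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n, sigma2, q = 50, 200, 1.0, 3           # sensors, snapshots, noise power, true sources
A = rng.standard_normal((d, q))             # stand-in for the array steering matrix
S = rng.standard_normal((q, n))             # source signals
X = A @ S + np.sqrt(sigma2) * rng.standard_normal((d, n))
Rhat = X @ X.T / n                          # sample spatial covariance

eig = np.linalg.eigvalsh(Rhat)
edge = sigma2 * (1 + np.sqrt(d / n)) ** 2   # Marchenko-Pastur upper bulk edge
q_hat = int(np.sum(eig > 1.1 * edge))       # 10% margin for finite-n edge fluctuations
```

Here n is only four times d, i.e. a number of snapshots comparable to the number of sensors, which is the regime the report argues classical information-theoretic criteria handle poorly.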
In this paper we derive a Large Deviation Principle (LDP) for inhomogeneous U/V-statistics of general order. Using this, we derive LDPs for two types of statistics: random multilinear forms, and the number of monochromatic copies of a subgraph. We show that the corresponding rate functions in these cases can be expressed as a variational problem over a suitable space of functions. We use the tools developed to study Gibbs measures with the corresponding Hamiltonians, which include tensor generalizations of both the Ising (with non-compact base measure) and Potts models. For these Gibbs measures, we establish scaling limits of the log normalizing constants, and weak laws in the weak* topology, both of possible independent interest.
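One of the two statistics, the number of monochromatic copies of a subgraph, is easy to make concrete; a brute-force sketch for monochromatic triangles (the LDP concerns the tails of this count under random graphs and uniform colorings, which the sketch does not touch):

```python
import numpy as np
from itertools import combinations

def monochromatic_triangles(adj, colors):
    """Count triangles of the graph whose three vertices share a color."""
    n = len(colors)
    total = 0
    for i, j, k in combinations(range(n), 3):
        if adj[i, j] and adj[j, k] and adj[i, k] \
                and colors[i] == colors[j] == colors[k]:
            total += 1
    return total

K4 = np.ones((4, 4)) - np.eye(4)                        # complete graph on 4 vertices
mono_same = monochromatic_triangles(K4, [0, 0, 0, 0])   # all 4 triangles monochromatic
mono_diff = monochromatic_triangles(K4, [0, 1, 2, 3])   # no repeated colors -> none
```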
We propose a new wavelet-based method for density estimation when the data are size-biased. More specifically, we consider a power of the density of interest, where this power exceeds 1/2. Warped wavelet bases are employed, where warping is attained by some continuous cumulative distribution function. A special case is the conventional orthonormal wavelet estimation, where the warping distribution is the standard continuous uniform. We show that both linear and nonlinear wavelet estimators are consistent, with optimal and/or near-optimal rates. Monte Carlo simulations are performed to compare four special settings which are easy to interpret in practice. An application with a real dataset on fatal traffic accidents involving alcohol illustrates the method. We observe that warped bases provide more flexible and superior estimates for both simulated and real data. Moreover, we find that estimating the power of a density (for instance, its square root) further improves the results.
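A stripped-down sketch of the warping idea with a Haar (father-wavelet only) linear estimator: estimate the density of U = G(X) on [0,1], then map back through the warping CDF G. The size-biasing correction, the density-power trick, and genuine wavelet thresholding are all omitted here:

```python
import numpy as np

def haar_linear_density(u, j):
    """Linear Haar (father-wavelet) density estimate on [0,1] at level j:
    equivalent to a histogram on 2**j dyadic bins."""
    edges = np.linspace(0.0, 1.0, 2 ** j + 1)
    counts, _ = np.histogram(u, bins=edges)
    return counts / (len(u) * np.diff(edges))

def warped_density_estimate(x, G, Gprime, j=4):
    """Estimate f by estimating the density h of U = G(X) with Haar
    wavelets, then mapping back: f_hat(t) = h(G(t)) * G'(t)."""
    heights = haar_linear_density(G(np.asarray(x)), j)
    def f_hat(t):
        u = np.clip(G(np.asarray(t)), 0.0, 1.0 - 1e-12)
        return heights[(u * 2 ** j).astype(int)] * Gprime(np.asarray(t))
    return f_hat

rng = np.random.default_rng(3)
x_data = rng.exponential(1.0, size=20000)
G = lambda t: 1.0 - np.exp(-t)          # warping CDF (here, the true CDF)
Gp = lambda t: np.exp(-t)
f_hat = warped_density_estimate(x_data, G, Gp, j=4)
val = float(f_hat(np.array([1.0]))[0])  # true density at 1 is exp(-1)
```

When the warping CDF matches the data well, the warped density is nearly uniform and very few coefficients are needed, which is the source of the flexibility the abstract reports.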
The ability to quickly and accurately identify covariate shift at test time is a critical and often overlooked component of safe machine learning systems deployed in high-risk domains. While methods exist for detecting when predictions should not be made on out-of-distribution test examples, identifying distributional level differences between training and test time can help determine when a model should be removed from the deployment setting and retrained. In this work, we define harmful covariate shift (HCS) as a change in distribution that may weaken the generalization of a predictive model. To detect HCS, we use the discordance between an ensemble of classifiers trained to agree on training data and disagree on test data. We derive a loss function for training this ensemble and show that the disagreement rate and entropy represent powerful discriminative statistics for HCS. Empirically, we demonstrate the ability of our method to detect harmful covariate shift with statistical certainty on a variety of high-dimensional datasets. Across numerous domains and modalities, we show state-of-the-art performance compared to existing methods, particularly when the number of observed test samples is small.
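The two discriminative statistics are simple functions of the ensemble's votes on a test batch; a self-contained sketch (training the agree-on-train/disagree-on-test ensemble with the derived loss is the part not shown):

```python
import numpy as np

def disagreement_stats(votes):
    """votes: (m, n) array of predicted labels from m ensemble members
    on n test points. Returns the mean pairwise disagreement rate and
    the mean entropy of the per-point vote distribution."""
    m, n = votes.shape
    dis, pairs = 0.0, 0
    for a in range(m):
        for b in range(a + 1, m):
            dis += np.mean(votes[a] != votes[b])
            pairs += 1
    ent = np.zeros(n)
    for c in np.unique(votes):
        p = np.mean(votes == c, axis=0)
        nz = p > 0
        ent[nz] -= p[nz] * np.log(p[nz])
    return dis / pairs, float(np.mean(ent))

votes_agree = np.zeros((4, 10), dtype=int)                        # consensus batch
votes_split = np.array([[0] * 10, [0] * 10, [1] * 10, [1] * 10])  # 2-vs-2 disagreement
d0, e0 = disagreement_stats(votes_agree)
d1, e1 = disagreement_stats(votes_split)
```

Under the null (no shift) a well-trained ensemble concurs and both statistics stay near zero; under harmful shift the members are free to disagree, pushing both statistics up, which is what the detection test thresholds.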
We develop a data-driven optimal shrinkage algorithm for matrix denoising in the presence of high-dimensional noise with a separable covariance structure; that is, the noise is colored and dependent. The algorithm, coined extended OptShrink (eOptShrink), involves a novel imputation and rank estimation procedure and does not require estimating the separable covariance structure of the noise. On the theoretical side, we study the asymptotic behavior of the singular values and singular vectors of the random matrix associated with the noisy data, including the sticking property of the non-outlier singular values and the delocalization of the non-outlier singular vectors, with convergence rates. We apply these results to establish guarantees for the imputation, the rank estimation and the eOptShrink algorithm with a convergence rate. On the application side, in addition to a series of numerical simulations with comparisons to various state-of-the-art optimal shrinkage algorithms, we apply eOptShrink to extract the fetal electrocardiogram from the single-channel trans-abdominal maternal electrocardiogram.
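As a simplified white-noise stand-in for the shrinkage step (eOptShrink additionally handles the separable colored noise and applies an optimal, not hard, shrinker), singular values below the noise bulk edge σ(√m+√n) can simply be truncated:

```python
import numpy as np

def hard_threshold_denoise(Y, sigma=1.0):
    """Zero out singular values below the white-noise bulk edge
    sigma * (sqrt(m) + sqrt(n)), with a 10% margin for edge fluctuations."""
    m, n = Y.shape
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    keep = s > 1.1 * sigma * (np.sqrt(m) + np.sqrt(n))
    return (U[:, keep] * s[keep]) @ Vt[keep], int(keep.sum())

rng = np.random.default_rng(2)
m, n = 100, 200
unit = lambda w: w / np.linalg.norm(w)
X = (100 * np.outer(unit(rng.standard_normal(m)), unit(rng.standard_normal(n)))
     + 80 * np.outer(unit(rng.standard_normal(m)), unit(rng.standard_normal(n))))
Y = X + rng.standard_normal((m, n))          # unit-variance white noise
Xhat, rank_hat = hard_threshold_denoise(Y)   # denoised matrix and estimated rank
```

On a rank-2 signal buried in unit-variance noise, the truncation recovers the rank and reduces the Frobenius error relative to the raw observation; the optimal shrinkers the paper compares against improve further on this by attenuating, rather than keeping, the outlier singular values.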
Residual networks (ResNets) have displayed impressive results in pattern recognition and, recently, have garnered considerable theoretical interest due to a perceived link with neural ordinary differential equations (neural ODEs). This link relies on the convergence of network weights to a smooth function as the number of layers increases. We investigate the properties of weights trained by stochastic gradient descent and their scaling with network depth through detailed numerical experiments. We observe the existence of scaling regimes markedly different from those assumed in neural ODE literature. Depending on certain features of the network architecture, such as the smoothness of the activation function, one may obtain an alternative ODE limit, a stochastic differential equation or neither of these. These findings cast doubts on the validity of the neural ODE model as an adequate asymptotic description of deep ResNets and point to an alternative class of differential equations as a better description of the deep network limit.
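The link under scrutiny is easiest to see in the regime where it does hold: residual updates scaled by 1/depth are exactly explicit-Euler steps of an ODE. A toy sketch with a fixed residual map (trained weights, the paper's subject, need not follow this scaling):

```python
import numpy as np

def resnet_forward(x0, L, f):
    """Depth-L residual network with shared residual map f and 1/L scaling:
    x_{k+1} = x_k + f(x_k)/L, i.e. an explicit Euler scheme for dx/dt = f(x)."""
    x = x0
    for _ in range(L):
        x = x + f(x) / L
    return x

f = lambda x: -x                 # residual map; the ODE flow is x0 * exp(-t)
ode_limit = np.exp(-1.0)         # ODE solution at t = 1 with x0 = 1
errs = [abs(resnet_forward(1.0, L, f) - ode_limit) for L in (10, 100, 1000)]
```

The discrepancy with the ODE flow map shrinks as depth grows, which is the convergence the neural ODE literature assumes; the paper's experiments probe whether weights obtained by stochastic gradient descent actually scale this way.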
Recent years have witnessed significant advances in technologies and services in modern network applications, including smart grid management, wireless communication, cybersecurity and multi-agent autonomous systems. Considering the heterogeneous nature of networked entities, emerging network applications call for game-theoretic models and learning-based approaches in order to create distributed network intelligence that responds to uncertainties and disruptions in a dynamic or adversarial environment. This paper articulates the confluence of networks, games and learning, which establishes a theoretical underpinning for understanding multi-agent decision-making over networks. We provide a selective overview of game-theoretic learning algorithms within the framework of stochastic approximation theory, and associated applications in some representative contexts of modern network systems, such as next-generation wireless communication networks, the smart grid and distributed machine learning. In addition to existing research on game-theoretic learning over networks, we highlight several new angles and research endeavors on learning in games related to recent developments in artificial intelligence; some of these extrapolate from our own research interests. The overall objective of the paper is to provide the reader with a clear picture of the strengths and challenges of adopting game-theoretic learning methods within the context of network systems, and further to identify fruitful future research directions on both theoretical and applied studies.
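A canonical example of game-theoretic learning in the stochastic-approximation mold is fictitious play, in which each agent best-responds to the opponent's empirical action frequencies. A sketch on zero-sum matching pennies (illustrative only; not any specific algorithm surveyed in the paper):

```python
import numpy as np

A = np.array([[1.0, -1.0], [-1.0, 1.0]])   # matching pennies; column player gets -A

def fictitious_play(A, T=20000):
    """Each player best-responds to the opponent's empirical action
    frequencies, a classic stochastic-approximation-style learning rule."""
    counts1, counts2 = np.ones(2), np.ones(2)   # uniform initial beliefs
    for _ in range(T):
        a1 = int(np.argmax(A @ (counts2 / counts2.sum())))    # row best response
        a2 = int(np.argmax(-(counts1 / counts1.sum()) @ A))   # column best response
        counts1[a1] += 1
        counts2[a2] += 1
    return counts1 / counts1.sum(), counts2 / counts2.sum()

f1, f2 = fictitious_play(A)   # empirical mixed strategies of the two players
```

In two-player zero-sum games the empirical frequencies of fictitious play converge to a Nash equilibrium (Robinson, 1951), here the uniform mixture (1/2, 1/2), illustrating how simple distributed learning rules can reach game-theoretic solutions without central coordination.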