We consider a strictly stationary random field on the two-dimensional integer lattice with regularly varying marginal and finite-dimensional distributions. Exploiting the regular variation, we define the spatial extremogram, which takes into account only the largest values in the random field. This extremogram is a spatial autocovariance function. We define the corresponding extremal spectral density and its estimator, the extremal periodogram. Based on the extremal periodogram, we consider the Whittle estimator for suitable classes of parametric random fields, including the Brown-Resnick random field and regularly varying max-moving averages.
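As an illustration of the thresholding idea behind the extremogram (not the autocovariance-type estimator studied in the paper), the following sketch computes an empirical conditional-exceedance extremogram on a 2D lattice; the quantile level `q` and the Pareto toy field are assumptions for the example.

```python
import numpy as np

def empirical_extremogram(field, lags, q=0.95):
    """Empirical spatial extremogram on a 2D lattice.

    For each lag h = (h1, h2), estimates P(X(s + h) > u | X(s) > u),
    where u is the empirical q-quantile of the observed field.
    """
    u = np.quantile(field, q)
    exceed = field > u
    n1, n2 = field.shape
    out = {}
    for h1, h2 in lags:
        # Overlapping index ranges for the pair (s, s + h).
        a = exceed[max(0, -h1):n1 - max(0, h1), max(0, -h2):n2 - max(0, h2)]
        b = exceed[max(0, h1):n1 + min(0, h1), max(0, h2):n2 + min(0, h2)]
        denom = a.sum()
        out[(h1, h2)] = (a & b).sum() / denom if denom > 0 else np.nan
    return out

# Example: i.i.d. heavy-tailed noise; the extremogram should be roughly
# (1 - q) at all nonzero lags, reflecting the absence of extremal dependence.
rng = np.random.default_rng(0)
X = rng.pareto(2.0, size=(200, 200))
print(empirical_extremogram(X, [(0, 1), (1, 0), (1, 1), (5, 5)]))
```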
We consider the problem of constructing minimax rate-optimal estimators for a doubly robust nonparametric functional that has seen applications across the causal inference and conditional independence testing literatures. Minimax rate-optimal estimators for such functionals are typically constructed through higher-order bias corrections of plug-in and one-step estimators and, in turn, depend on estimators of nuisance functions. In this paper, we consider the parallel question of the optimality or sub-optimality of plug-in and one-step bias-corrected estimators for this specific doubly robust functional. Specifically, we verify that by using undersmoothing and sample-splitting techniques when constructing the nuisance function estimators, one can achieve minimax rates of convergence in all H\"older smoothness classes of the nuisance functions (i.e., the propensity score and outcome regression), provided that the marginal density of the covariates is sufficiently regular. Additionally, by establishing suitable lower bounds for these classes of estimators, we demonstrate the necessity of undersmoothing the nuisance function estimators to obtain minimax optimal rates of convergence.
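For concreteness, a common example of such a doubly robust functional is the expected conditional covariance $\psi = E[(A - \pi(X))(Y - \mu(X))]$ with $\pi(X) = E[A \mid X]$ and $\mu(X) = E[Y \mid X]$. The sketch below shows a cross-fitted first-order estimator of this functional; the choice of functional, the k-nearest-neighbour nuisance estimators, and the use of a small `n_neighbors` as a stand-in for undersmoothing are assumptions for illustration, not the paper's construction.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.neighbors import KNeighborsRegressor

def cross_fit_expected_cond_cov(X, A, Y, n_neighbors=10, n_splits=2, seed=0):
    """Cross-fitted first-order estimator of psi = E[(A - pi(X)) * (Y - mu(X))],
    with pi(x) = E[A | X = x] and mu(x) = E[Y | X = x].

    Nuisances are fit on held-out folds (sample splitting); a small
    n_neighbors plays the role of an undersmoothed nuisance estimator.
    """
    n = len(Y)
    psi_terms = np.empty(n)
    for train, test in KFold(n_splits=n_splits, shuffle=True, random_state=seed).split(X):
        pi_hat = KNeighborsRegressor(n_neighbors=n_neighbors).fit(X[train], A[train])
        mu_hat = KNeighborsRegressor(n_neighbors=n_neighbors).fit(X[train], Y[train])
        psi_terms[test] = (A[test] - pi_hat.predict(X[test])) * (Y[test] - mu_hat.predict(X[test]))
    return psi_terms.mean(), psi_terms.std(ddof=1) / np.sqrt(n)

# Toy check: A and Y conditionally independent given X, so psi should be near 0.
rng = np.random.default_rng(1)
X = rng.uniform(size=(2000, 1))
A = np.sin(3 * X[:, 0]) + rng.normal(scale=0.5, size=2000)
Y = np.cos(3 * X[:, 0]) + rng.normal(scale=0.5, size=2000)
print(cross_fit_expected_cond_cov(X, A, Y))
```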
In many investigations, the primary outcome of interest is difficult or expensive to collect. Examples include long-term health effects of medical interventions, measurements requiring expensive testing or follow-up, and outcomes only measurable on small panels, as in marketing. This reduces the effective sample size for estimating the average treatment effect (ATE). However, there is often an abundance of observations on surrogate outcomes not of primary interest, such as short-term health effects or online-ad click-through. We study the role of such surrogate observations in the efficient estimation of treatment effects. To quantify their value, we derive the semiparametric efficiency bounds for ATE estimation with and without surrogates, as well as in several intermediary settings. The differences between these bounds characterize the efficiency gains from optimally leveraging surrogates. We study two regimes: when the number of surrogate observations is comparable to the number of primary-outcome observations, and when the former dominates the latter. We take an agnostic missing-data approach that circumvents the strong surrogate conditions previously assumed. To realize these efficiency gains, we develop efficient ATE estimation and inference based on flexible machine-learning estimates of the nuisance functions appearing in the influence functions we derive. We empirically demonstrate the gains by studying the long-term earnings effect of job training.
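As a baseline without surrogates, the standard influence-function-based, cross-fitted AIPW estimator of the ATE looks as follows; the gradient-boosting nuisance estimators and the trimming level are assumptions for the sketch, and the surrogate-leveraging estimators developed in the paper are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor
from sklearn.model_selection import KFold

def aipw_ate(X, T, Y, n_splits=2, seed=0):
    """Cross-fitted AIPW (doubly robust) estimator of the ATE E[Y(1) - Y(0)],
    using ML estimates of the propensity e(x) = P(T=1 | X=x) and the
    outcome regressions m_t(x) = E[Y | T=t, X=x]."""
    n = len(Y)
    phi = np.empty(n)
    for tr, te in KFold(n_splits=n_splits, shuffle=True, random_state=seed).split(X):
        e = GradientBoostingClassifier().fit(X[tr], T[tr]).predict_proba(X[te])[:, 1]
        e = np.clip(e, 0.01, 0.99)  # trim extreme propensities
        m1 = GradientBoostingRegressor().fit(X[tr][T[tr] == 1], Y[tr][T[tr] == 1]).predict(X[te])
        m0 = GradientBoostingRegressor().fit(X[tr][T[tr] == 0], Y[tr][T[tr] == 0]).predict(X[te])
        phi[te] = (m1 - m0
                   + T[te] * (Y[te] - m1) / e
                   - (1 - T[te]) * (Y[te] - m0) / (1 - e))
    return phi.mean(), phi.std(ddof=1) / np.sqrt(n)

# Toy example with a known ATE of 1.
rng = np.random.default_rng(0)
n = 4000
X = rng.normal(size=(n, 3))
T = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))
Y = X[:, 1] + T * 1.0 + rng.normal(size=n)
print(aipw_ate(X, T, Y))
```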
Extremely large-scale MIMO (XL-MIMO) is a promising technique for future 6G communications. The sharp increase in the number of antennas causes the electromagnetic propagation to change from far-field to near-field. Due to the near-field effect, exhaustive near-field beam training over all angles and distances incurs very high overhead. An improved fast near-field beam training scheme based on time-delay beamforming can significantly reduce this overhead, but it suffers from very high hardware cost and energy consumption caused by the extra time-delay circuits. In this paper, we propose a near-field two-dimensional (2D) hierarchical beam training scheme that reduces the overhead without extra hardware circuits. Specifically, we first formulate the near-field codeword design problem for any required resolution with different angle and distance coverages. Next, we propose a Gerchberg-Saxton (GS)-based algorithm to obtain the unconstrained codeword under the ideal fully digital architecture. Based on the unconstrained codeword, an iterative optimization algorithm is then proposed to acquire the practical codeword under the more practical hybrid digital-analog architecture. Finally, with the help of the resulting multi-resolution codebooks, we propose a near-field 2D hierarchical beam training scheme that significantly reduces the training overhead, which is verified by extensive simulation results.
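A minimal sketch of a GS-style alternating-projection codeword design under the ideal fully digital architecture is given below; the spherical-wave steering model, the sampling grid, and the target sector are assumptions for the example, and the hybrid-architecture refinement step is not included.

```python
import numpy as np

def nearfield_steering(N, d, lam, theta, r):
    """Spherical-wave near-field steering vector for an N-element ULA
    with spacing d, wavelength lam, angle theta (rad), and distance r (m)."""
    pos = (np.arange(N) - (N - 1) / 2) * d
    dist = np.sqrt(r**2 + pos**2 - 2 * r * pos * np.sin(theta))
    return np.exp(-2j * np.pi * (dist - r) / lam) / np.sqrt(N)

def gs_codeword(A, target_mag, n_iter=100, seed=0):
    """Gerchberg-Saxton-style alternating projections for an unconstrained
    (fully digital) codeword w such that |A.conj().T @ w| approximates target_mag.

    A          : (N, G) matrix whose columns are steering vectors on an
                 angle-distance grid.
    target_mag : (G,) desired beam-pattern magnitude over the grid.
    """
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(A.shape[0]) + 1j * rng.standard_normal(A.shape[0])
    A_pinv = np.linalg.pinv(A.conj().T)            # least-squares projector
    for _ in range(n_iter):
        g = A.conj().T @ w                         # current beam pattern
        g = target_mag * np.exp(1j * np.angle(g))  # impose desired magnitude
        w = A_pinv @ g                             # project back to codeword space
    return w / np.linalg.norm(w)

# Example: 256-element ULA at half-wavelength spacing; target a wide
# low-resolution beam covering angles in [-10, 10] degrees at 10 m.
N, lam = 256, 0.01
angles = np.deg2rad(np.linspace(-60, 60, 512))
A = np.stack([nearfield_steering(N, lam / 2, lam, th, 10.0) for th in angles], axis=1)
target = (np.abs(np.rad2deg(angles)) <= 10).astype(float)
w = gs_codeword(A, target)
```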
We study the problem of estimating latent population flows from aggregated count data. This problem arises when individual trajectories are not available due to privacy concerns or limited measurement fidelity. Instead, aggregated observations are collected at discrete time points and used to estimate the population flows among states. Most related studies tackle the problem by learning the transition parameters of a time-homogeneous Markov process. However, most real-world population flows are influenced by various uncertainties such as traffic jams and weather conditions, so a time-homogeneous Markov model is often a poor approximation of the much more complex dynamics. To circumvent this difficulty, we resort to a multi-marginal optimal transport (MOT) formulation that naturally represents aggregated observations through constrained marginals and encodes time-dependent transition matrices through the cost functions. In particular, we propose to estimate the transition flows from aggregated data by learning the cost functions of the MOT framework, which enables us to capture time-varying dynamic patterns. Experiments demonstrate that the proposed algorithms estimate several real-world transition flows more accurately than related methods.
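As a simplified building block (two consecutive marginals and a fixed rather than learned cost), an entropy-regularized OT plan between aggregated counts can be computed with Sinkhorn iterations as sketched below; the cost matrix and regularization level are assumptions for the example.

```python
import numpy as np

def sinkhorn_flow(mu, nu, C, eps=0.1, n_iter=500):
    """Entropy-regularized OT plan between two aggregated marginals mu and nu
    (counts over states at consecutive time points) with cost matrix C.
    Scaling the returned plan by the total count gives an estimated
    transition flow between states."""
    mu = mu / mu.sum()
    nu = nu / nu.sum()
    K = np.exp(-C / eps)
    u = np.ones_like(mu)
    for _ in range(n_iter):
        v = nu / (K.T @ u)
        u = mu / (K @ v)
    return u[:, None] * K * v[None, :]

# Toy example: 3 states, cost = squared distance between state labels.
states = np.arange(3)
C = (states[:, None] - states[None, :]) ** 2
mu = np.array([50., 30., 20.])
nu = np.array([30., 40., 30.])
print(sinkhorn_flow(mu, nu, C).round(3))
```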
We consider the problem of estimating a multivariate function $f_0$ of bounded variation (BV), from noisy observations $y_i = f_0(x_i) + z_i$ made at random design points $x_i \in \mathbb{R}^d$, $i=1,\ldots,n$. We study an estimator that forms the Voronoi diagram of the design points, and then solves an optimization problem that regularizes according to a certain discrete notion of total variation (TV): the sum of weighted absolute differences of parameters $\theta_i,\theta_j$ (which estimate the function values $f_0(x_i),f_0(x_j)$) at all neighboring cells $i,j$ in the Voronoi diagram. This is seen to be equivalent to a variational optimization problem that regularizes according to the usual continuum (measure-theoretic) notion of TV, once we restrict the domain to functions that are piecewise constant over the Voronoi diagram. The regression estimator under consideration hence performs (shrunken) local averaging over adaptively formed unions of Voronoi cells, and we refer to it as the Voronoigram, following the ideas in Koenker (2005), and drawing inspiration from Tukey's regressogram (Tukey, 1961). Our contributions in this paper span both the conceptual and theoretical frontiers: we discuss some of the unique properties of the Voronoigram in comparison to TV-regularized estimators that use other graph-based discretizations; we derive the asymptotic limit of the Voronoi TV functional; and we prove that the Voronoigram is minimax rate optimal (up to log factors) for estimating BV functions that are essentially bounded.
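A minimal sketch of the graph-TV optimization over the Voronoi adjacency structure is given below, using Delaunay neighbors (dual to shared Voronoi faces), unit edge weights instead of the geometric weights described above, and cvxpy as a generic solver; it is an illustration, not the paper's implementation.

```python
import numpy as np
import cvxpy as cp
from scipy.spatial import Delaunay

def voronoi_tv_denoise(x, y, lam=1.0):
    """Graph-TV estimate over the Voronoi adjacency structure:
        min_theta  0.5 * ||y - theta||^2 + lam * sum_{(i,j) in E} |theta_i - theta_j|,
    where E contains pairs of design points whose Voronoi cells share a face
    (equivalently, edges of the Delaunay triangulation). Unit edge weights
    are used here for simplicity."""
    tri = Delaunay(x)
    edges = set()
    for simplex in tri.simplices:
        for a in range(len(simplex)):
            for b in range(a + 1, len(simplex)):
                edges.add((min(simplex[a], simplex[b]), max(simplex[a], simplex[b])))
    edges = np.array(sorted(edges))
    theta = cp.Variable(len(y))
    tv = cp.sum(cp.abs(theta[edges[:, 0]] - theta[edges[:, 1]]))
    cp.Problem(cp.Minimize(0.5 * cp.sum_squares(y - theta) + lam * tv)).solve()
    return theta.value

# Example: noisy piecewise-constant signal over random design points in [0,1]^2.
rng = np.random.default_rng(0)
x = rng.uniform(size=(300, 2))
y = (x[:, 0] > 0.5).astype(float) + rng.normal(scale=0.3, size=300)
theta_hat = voronoi_tv_denoise(x, y, lam=0.5)
```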
Deep learning-based 3D human pose estimation performs best when trained on large amounts of labeled data, making combined learning from many datasets an important research direction. One obstacle to this endeavor is that different datasets provide different skeleton formats, i.e., they do not label the same set of anatomical landmarks. There is little prior research on how to best supervise one model with such discrepant labels. We show that simply using separate output heads for different skeletons results in inconsistent depth estimates and insufficient information sharing across skeletons. As a remedy, we propose a novel affine-combining autoencoder (ACAE) method to perform dimensionality reduction on the number of landmarks. The discovered latent 3D points capture the redundancy among skeletons, enabling enhanced information sharing when used for consistency regularization. Our approach scales to an extreme multi-dataset regime, in which we use 28 3D human pose datasets to supervise one model, which outperforms prior work on a range of benchmarks, including the challenging 3D Poses in the Wild (3DPW) dataset. Our code and models are available for research purposes.
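A minimal PyTorch sketch of the affine-combining idea (latent points as affine combinations of joints, with weights summing to one) is given below; the sum-to-one reparameterization and the toy training step are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class AffineCombiningAutoencoder(nn.Module):
    """Sketch of an affine-combining autoencoder over 3D landmarks: the encoder
    maps J joints to L latent points as affine combinations (weights sum to 1,
    so the mapping is translation-equivariant), and the decoder maps the latent
    points back to joints the same way."""

    def __init__(self, n_joints: int, n_latent: int):
        super().__init__()
        self.enc_raw = nn.Parameter(torch.randn(n_latent, n_joints) * 0.01)
        self.dec_raw = nn.Parameter(torch.randn(n_joints, n_latent) * 0.01)

    @staticmethod
    def _affine(raw):
        # Shift each row so its entries sum to exactly 1 (affine combination).
        return raw - raw.mean(dim=1, keepdim=True) + 1.0 / raw.shape[1]

    def forward(self, joints):                              # joints: (batch, J, 3)
        latent = self._affine(self.enc_raw) @ joints        # (batch, L, 3)
        recon = self._affine(self.dec_raw) @ latent         # (batch, J, 3)
        return recon, latent

# Example: train to reconstruct poses; the reconstructions can then serve as
# consistency targets across skeleton formats.
model = AffineCombiningAutoencoder(n_joints=24, n_latent=16)
poses = torch.randn(8, 24, 3)
recon, latent = model(poses)
loss = torch.mean((recon - poses) ** 2)
loss.backward()
```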
Fluid antenna systems (FAS) are an emerging technology that promises a significant diversity gain even in the smallest spaces. Motivated by the groundbreaking potential of liquid antennas, researchers in the wireless communication community are investigating a novel antenna system in which a single antenna can freely switch positions along a small linear space to pick the strongest received signal. However, the FAS positions do not necessarily follow the long-standing rule of separating antennas by at least half the radiation wavelength. Previous work in the literature parameterized the channels of the FAS ports simply enough to provide a single-integral expression for the probability of outage and various insights on the achievable performance. Nevertheless, this channel model may not accurately capture the correlation between the ports given by Jakes' model. This work builds on the state of the art and accurately approximates the FAS channel while maintaining analytical tractability. The approximation is performed in two stages: the first stage considerably reduces the number of multi-fold integrals in the probability-of-outage expression, while the second stage provides a single-integral representation of the FAS probability of outage. Further, the performance of this innovative technology is investigated under a less idealized correlation model. Numerical results validate our approximations of the FAS channel model and demonstrate a limited performance gain under realistic assumptions. Moreover, our work opens the door for future research to investigate scenarios in which the FAS provides a performance gain compared to current multiple-antenna solutions.
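For intuition, a Monte Carlo simulation of the port-selection outage probability under Jakes-type spatial correlation (rather than the analytical approximations derived in the paper) might look as follows; the aperture, SNR, and threshold values are assumptions for the example.

```python
import numpy as np
from scipy.special import j0

def fas_outage_mc(N=20, W=1.0, snr_db=10.0, threshold_db=5.0, n_mc=100_000, seed=0):
    """Monte Carlo outage probability of a fluid antenna that selects the best
    of N ports spread over an aperture of W wavelengths, with spatially
    correlated Rayleigh fading (Jakes correlation J0(2*pi*d/lambda))."""
    # Port separations in wavelengths and the Jakes correlation matrix.
    d = np.abs(np.subtract.outer(np.arange(N), np.arange(N))) * W / (N - 1)
    R = j0(2 * np.pi * d)
    L = np.linalg.cholesky(R + 1e-9 * np.eye(N))   # jitter for numerical safety
    rng = np.random.default_rng(seed)
    z = (rng.standard_normal((n_mc, N)) + 1j * rng.standard_normal((n_mc, N))) / np.sqrt(2)
    h = z @ L.T                                    # correlated Rayleigh fading
    snr = 10 ** (snr_db / 10) * np.max(np.abs(h) ** 2, axis=1)
    return np.mean(snr < 10 ** (threshold_db / 10))

print(fas_outage_mc())
```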
This work develops non-asymptotic theory for estimating the long-run variance matrix and its inverse, the so-called precision matrix, for high-dimensional time series under general assumptions on the dependence structure, including long-range dependence. The estimation involves shrinkage techniques, namely thresholded and penalized versions of the classical multivariate local Whittle estimator. The results ensure consistent estimation in a double asymptotic regime where the number of component time series is allowed to grow with the sample size as long as the true model parameters are sparse. The key technical result is a concentration inequality for the local Whittle estimator of the long-run variance matrix around the true model parameters. In particular, it simultaneously handles the estimation of the memory parameters that enter the underlying model. Novel algorithms for the considered procedures are proposed, and a simulation study and a data application are also provided.
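In the short-memory special case (memory parameters set to zero, unlike the general setting of the paper), a thresholded averaged-periodogram estimate of the long-run variance matrix can be sketched as follows; the bandwidth and threshold choices are assumptions for the example.

```python
import numpy as np

def thresholded_lrv(X, m=None, tau=0.0):
    """Simple thresholded long-run variance estimate for an (n, p) time series X
    in the short-memory case: 2*pi times the average of the periodogram matrices
    I(lambda_j) = w(lambda_j) w(lambda_j)^* over the m lowest Fourier frequencies,
    with soft-thresholding of the off-diagonal entries.
    (The local Whittle estimator of the paper additionally estimates the memory
    parameters; they are taken to be zero here.)"""
    n, p = X.shape
    if m is None:
        m = int(n ** 0.6)
    Xc = X - X.mean(axis=0)
    W = np.fft.fft(Xc, axis=0) / np.sqrt(2 * np.pi * n)    # DFT at Fourier frequencies
    I = np.einsum('jk,jl->jkl', W[1:m + 1], W[1:m + 1].conj())
    G = 2 * np.pi * I.mean(axis=0).real
    off = ~np.eye(p, dtype=bool)
    G[off] = np.sign(G[off]) * np.maximum(np.abs(G[off]) - tau, 0.0)  # soft threshold
    return G

# Example: p-dimensional AR(1)-type series with diagonal dynamics.
rng = np.random.default_rng(0)
n, p = 2000, 10
X = rng.standard_normal((n, p))
for t in range(1, n):
    X[t] += 0.4 * X[t - 1]
print(np.round(thresholded_lrv(X, tau=0.05), 2))
```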
The class of $\alpha$-stable distributions is widely used in various applications, especially for modelling heavy-tailed data. Although $\alpha$-stable distributions have been used in practice for many years, new methods for identification, testing, and estimation are still being refined and new approaches are being proposed. The continued development of new statistical methods is driven by the low efficiency of existing algorithms, especially when the underlying sample is small or the underlying distribution is close to Gaussian. In this paper we propose a new estimation algorithm for the stability index of samples from the symmetric $\alpha$-stable distribution. The proposed approach is based on the quantile conditional variance ratio. We study the statistical properties of the proposed estimation procedure and show empirically that our methodology often outperforms other commonly used estimation algorithms. Moreover, we show that our statistic extracts unique sample characteristics that can be combined with other methods to refine existing methodologies via ensemble methods. Although our focus is on the symmetric $\alpha$-stable case, we demonstrate that the considered statistic is insensitive to changes in the skewness parameter, so our method can also be used in a more general framework. For completeness, we also show how to apply our method to real data from plasma physics.
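The following sketch illustrates a quantile-conditional-variance-ratio statistic with brute-force Monte Carlo calibration over a grid of $\alpha$ values; the particular quantile bands, the grid, and the calibration scheme are assumptions for illustration and do not reproduce the paper's estimator.

```python
import numpy as np
from scipy.stats import levy_stable

def qcv_ratio(x, inner=(0.25, 0.75), outer=(0.05, 0.95)):
    """Quantile-conditional-variance ratio: sample variance of the observations
    between the inner quantiles divided by the variance of those between the
    outer quantiles. Heavier tails (smaller alpha) push the ratio down."""
    lo_i, hi_i = np.quantile(x, inner)
    lo_o, hi_o = np.quantile(x, outer)
    return np.var(x[(x >= lo_i) & (x <= hi_i)]) / np.var(x[(x >= lo_o) & (x <= hi_o)])

def estimate_alpha(x, alphas=np.linspace(1.2, 2.0, 17), n_mc=10_000, seed=0):
    """Brute-force calibration: match the observed ratio against Monte Carlo
    reference values computed from symmetric alpha-stable samples."""
    rng = np.random.default_rng(seed)
    ref = np.array([qcv_ratio(levy_stable.rvs(a, 0.0, size=n_mc, random_state=rng))
                    for a in alphas])
    return alphas[np.argmin(np.abs(ref - qcv_ratio(x)))]

# Example: recover alpha from a symmetric alpha-stable sample with alpha = 1.7.
x = levy_stable.rvs(1.7, 0.0, size=5000, random_state=42)
print(estimate_alpha(x))
```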
Image segmentation is considered one of the critical tasks in hyperspectral remote sensing image processing. Recently, convolutional neural networks (CNNs) have established themselves as powerful models for segmentation and classification by demonstrating excellent performance. The use of a graphical model such as a conditional random field (CRF) further helps capture contextual information and thus improves segmentation performance. In this paper, we propose a method to segment hyperspectral images by considering both spectral and spatial information via a combined framework consisting of a CNN and a CRF. We use multiple spectral cubes to learn deep features with the CNN, and then formulate a deep CRF with CNN-based unary and pairwise potential functions to effectively extract the semantic correlations between patches consisting of three-dimensional data cubes. Effective piecewise training is applied in order to avoid computationally expensive iterative CRF inference. Furthermore, we introduce a deep deconvolution network that refines the segmentation masks. We also introduce a new dataset and evaluate our proposed method on it, along with several widely adopted benchmark datasets, to assess the effectiveness of our method. By comparing our results with those of several state-of-the-art models, we show the promising potential of our method.
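A minimal PyTorch sketch of a CNN operating on spectral cubes (whose class scores could supply CRF unary potentials) is given below; the layer sizes and patch dimensions are assumptions, and the deep CRF, piecewise training, and deconvolution network are not sketched.

```python
import torch
import torch.nn as nn

class SpectralCubeCNN(nn.Module):
    """Small 3D CNN producing per-patch class scores from a hyperspectral cube
    patch of shape (bands, height, width). The scores could serve as unary
    potentials of a CRF; the pairwise terms and CRF inference are omitted."""

    def __init__(self, n_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=(7, 3, 3), padding=(3, 1, 1)),
            nn.ReLU(),
            nn.MaxPool3d((2, 1, 1)),               # pool along the spectral axis
            nn.Conv3d(8, 16, kernel_size=(7, 3, 3), padding=(3, 1, 1)),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(16, n_classes)

    def forward(self, x):                          # x: (batch, 1, bands, h, w)
        return self.classifier(self.features(x).flatten(1))

# Example: 9x9 spatial patches with 103 spectral bands.
model = SpectralCubeCNN(n_classes=9)
patches = torch.randn(4, 1, 103, 9, 9)
print(model(patches).shape)                        # torch.Size([4, 9])
```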