We provide new algorithms for two tasks relating to heterogeneous tabular datasets: clustering and synthetic data generation. Tabular datasets typically consist of heterogeneous data types (numerical, ordinal, categorical) in columns, but may also have hidden cluster structure in their rows: for example, they may be drawn from heterogeneous (geographical, socioeconomic, methodological) sources, such that the outcome variable they describe (such as the presence of a disease) may depend not only on the other variables but on the cluster context. Moreover, sharing of biomedical data is often hindered by patient confidentiality laws, and there is current interest in algorithms to generate synthetic tabular data from real data, for example via deep learning. We demonstrate a novel EM-based clustering algorithm, MMM (``Madras Mixture Model''), that outperforms standard algorithms in determining clusters in synthetic heterogeneous data, and recovers structure in real data. Based on this, we demonstrate a synthetic tabular data generation algorithm, MMMsynth, that pre-clusters the input data and generates cluster-wise synthetic data assuming cluster-specific data distributions for the input columns. We benchmark this algorithm by testing the performance of standard ML algorithms when they are trained on synthetic data and tested on real published datasets. Our synthetic data generation algorithm outperforms other tabular-data generators in the literature, and approaches the performance of training purely with real data.
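As an illustration of the EM machinery underlying such clustering (not the MMM algorithm itself, which handles heterogeneous column types), a minimal one-dimensional Gaussian-mixture EM might look like:

```python
import numpy as np

def em_gmm_1d(x, k, n_iter=100):
    """Minimal EM for a 1-D Gaussian mixture (illustrative only;
    MMM itself handles mixed numerical/ordinal/categorical columns)."""
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)   # spread-out initial means
    sigma = np.full(k, x.std())
    pi = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point
        dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) \
               / (sigma * np.sqrt(2 * np.pi))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return pi, mu, sigma

# two well-separated synthetic clusters are recovered accurately
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1, 500), rng.normal(10, 1, 500)])
pi, mu, sigma = em_gmm_1d(x, k=2)
```

The pre-clustering step of MMMsynth plays an analogous role: once cluster memberships and per-cluster distributions are estimated, synthetic rows can be sampled cluster by cluster.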
We study hypothesis testing under communication constraints, where each sample is quantized before being revealed to a statistician. Without communication constraints, it is well known that the sample complexity of simple binary hypothesis testing is characterized by the Hellinger distance between the distributions. We show that the sample complexity of simple binary hypothesis testing under communication constraints is at most a logarithmic factor larger than in the unconstrained setting, and that this bound is tight. We develop a polynomial-time algorithm that achieves the aforementioned sample complexity. Our framework extends to robust hypothesis testing, where the distributions are corrupted in the total variation distance. Our proofs rely on a new reverse data processing inequality and a reverse Markov inequality, which may be of independent interest. For simple $M$-ary hypothesis testing, the sample complexity in the absence of communication constraints has a logarithmic dependence on $M$. We show that communication constraints can cause an exponential blow-up leading to $\Omega(M)$ sample complexity even for adaptive algorithms.
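The role of the Hellinger distance can be made concrete: in the unconstrained setting, the sample complexity of simple binary testing scales as the inverse of the squared Hellinger distance. A small illustrative computation (the two distributions are chosen arbitrarily):

```python
import numpy as np

def hellinger_sq(p, q):
    """Squared Hellinger distance between two discrete distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return 0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)

# unconstrained simple binary testing needs Theta(1/H^2) samples
p = [0.5, 0.5]
q = [0.6, 0.4]
h2 = hellinger_sq(p, q)
n_unconstrained = 1.0 / h2   # order-of-magnitude sample complexity
```

Under communication constraints, the paper shows this quantity can grow by at most a logarithmic factor in the binary case.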
Spatial data can come in a variety of different forms, but two of the most common generating models for such observations are random fields and point processes. Whilst it is known that spectral analysis can unify these two different data forms, specific methodology for the related estimation is yet to be developed. In this paper, we solve this problem by extending multitaper estimation to estimate the spectral density matrix function for multivariate spatial data, where the processes can be any combination of point processes and random fields. We discuss finite-sample and asymptotic theory for the proposed estimators, as well as specific details of the implementation, including how to perform estimation on non-rectangular domains and the correct implementation of multitapering for processes sampled in different ways, e.g. continuously vs. on a regular grid.
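For intuition, the one-dimensional time-series analogue of multitaper estimation averages periodograms computed with orthogonal Slepian (DPSS) tapers; the spatial, multivariate estimators of the paper generalise this idea. A sketch using SciPy's DPSS windows:

```python
import numpy as np
from scipy.signal.windows import dpss

def multitaper_psd(x, nw=4.0, k=7, fs=1.0):
    """Average the periodograms of a series tapered by k orthogonal
    DPSS (Slepian) tapers; a 1-D time-series sketch of the multitaper
    idea, not the paper's spatial estimators."""
    n = len(x)
    tapers = dpss(n, nw, Kmax=k)                       # shape (k, n)
    eigenspectra = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
    psd = eigenspectra.mean(axis=0) / fs               # average over tapers
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, psd
```

Averaging over orthogonal tapers reduces the variance of the raw periodogram at the cost of a controlled bandwidth (set by nw), which is the trade-off the paper's spatial multitaper estimators also manage.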
Biomechanical and orthopaedic studies frequently encounter complex datasets that encompass both circular and linear variables. In most cases the circular and linear variables are (i) considered in isolation, with the dependency between variables neglected, and (ii) analysed with the cyclicity of the circular variables disregarded, resulting in erroneous decision making. Given the inherent characteristics of circular variables, it is imperative to adopt methods that integrate directional statistics to achieve precise modelling. This paper is motivated by the modelling of biomechanical data, i.e., fracture displacements, which are used as a measure in external fixator comparisons. We focus on a data set, based on an Ilizarov ring fixator, comprising six variables. A modelling framework applicable to the 6D joint distribution of circular-linear data based on vine copulas is proposed. The pair-copula decomposition concept of vine copulas represents the dependence structure as a combination of circular-linear, circular-circular and linear-linear pairs modelled by their respective copulas. This framework allows us to assess the dependencies in the joint distribution as well as account for the cyclicity of the circular variables. Thus, a new approach is provided for the accurate modelling of the mechanical behaviour of Ilizarov ring fixators and other data of this nature.
Inferring causation from time series data is of scientific interest in different disciplines, particularly in neural connectomics. While different approaches exist in the literature with parametric modeling assumptions, we focus on a non-parametric model for time series satisfying a Markovian structural causal model with stationary distribution and without concurrent effects. We show that the model structure can be used to its advantage to obtain an elegant algorithm for causal inference from time series based on conditional dependence tests, coined the Causal Inference in Time Series (CITS) algorithm. We describe Pearson's partial correlation and the Hilbert-Schmidt independence criterion as candidates for such conditional dependence tests, to be used in CITS in the Gaussian and non-Gaussian settings, respectively. We prove a mathematical guarantee that the CITS algorithm recovers the true causal graph, under standard mixing conditions on the underlying time series. We also conduct a comparative evaluation of the performance of CITS against existing methodologies on simulated datasets. We then describe the utility of the methodology in neural connectomics -- in inferring causal functional connectivity from time series of neural activity -- and demonstrate its application to a real neurobiological dataset of electrophysiological recordings from the mouse visual cortex recorded by Neuropixel probes.
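The Pearson partial-correlation test mentioned above can be sketched as follows: regress both variables on the conditioning set and correlate the residuals (illustrative only; CITS applies such tests to lagged variables of the time series):

```python
import numpy as np
from scipy import stats

def partial_corr_test(x, y, z):
    """Test X independent of Y given Z via Pearson partial correlation:
    regress X and Y on Z, then correlate the residuals. One candidate
    conditional dependence test for the Gaussian setting (a sketch)."""
    z = np.column_stack([np.ones(len(z)), z])          # add intercept
    rx = x - z @ np.linalg.lstsq(z, x, rcond=None)[0]  # residual of X on Z
    ry = y - z @ np.linalg.lstsq(z, y, rcond=None)[0]  # residual of Y on Z
    r, _ = stats.pearsonr(rx, ry)
    # t-test on the partial correlation, adjusting dof for |Z|
    n, k = len(x), z.shape[1] - 1
    t = r * np.sqrt((n - k - 2) / (1 - r ** 2))
    p = 2 * stats.t.sf(abs(t), df=n - k - 2)
    return r, p
```

For non-Gaussian data, the same role is played by a kernel-based conditional dependence test in place of the residual correlation.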
Discovering underlying structures of causal relations from observational studies poses a great challenge in scientific research where randomized trials or intervention-based studies are infeasible. This challenge pertains to the lack of knowledge of pre-specified roles of cause and effect in observational studies. Leveraging Shannon's seminal work on information theory, we propose a new conceptual framework of asymmetry in which any causal link between a putative cause and effect is captured by unequal information flows from one variable to the other. We present an entropy-based asymmetry coefficient that not only enables us to assess whether one variable is a stronger predictor of the other, but also detects an imprint of the underlying causal relation in observational studies. Our causal discovery analytics naturally accommodate low-dimensional confounders. The proposed methodology relies on scalable non-parametric density estimation using the fast Fourier transform, making the resulting estimation method many times faster than classical bandwidth-based density estimation while maintaining comparable mean integrated squared error rates. We investigate key asymptotic properties of our methodology and utilize a data-splitting and cross-fitting technique to facilitate inference for the direction of causal relations. We illustrate the performance of our methodology through simulation studies and real data examples.
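The entropy-based comparison can be illustrated crudely: conditional entropies of the form H(A|B) = H(A,B) - H(B) are compared in both directions between the two variables. The histogram-based sketch below is a discretized stand-in for the paper's FFT-based kernel density estimates and is purely illustrative:

```python
import numpy as np

def cond_entropy(a, b, bins=20):
    """H(A|B) = H(A,B) - H(B) from a 2-D histogram (a crude discretized
    stand-in for the paper's FFT-based density estimation; a sketch)."""
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    p_ab = joint / joint.sum()
    p_b = p_ab.sum(axis=0)                    # marginal of the conditioner
    h_ab = -np.sum(p_ab[p_ab > 0] * np.log(p_ab[p_ab > 0]))
    h_b = -np.sum(p_b[p_b > 0] * np.log(p_b[p_b > 0]))
    return h_ab - h_b

# compare residual entropies in both directions for a putative X -> Y;
# the sign of the difference hints at the asymmetry the paper exploits
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 20000)
y = x ** 3 + 0.05 * rng.normal(size=20000)
asym = cond_entropy(y, x) - cond_entropy(x, y)
```

The paper's asymmetry coefficient formalises this comparison with proper non-parametric density estimates and inference via data splitting and cross-fitting.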
A class of (block) rational Krylov subspace based projection methods for solving the large-scale continuous-time algebraic Riccati equation (CARE) $0 = \mathcal{R}(X) := A^HX + XA + C^HC - XBB^HX$, with a large, sparse $A$ and with $B$ and $C$ of full low rank, is proposed. The CARE is projected onto a block rational Krylov subspace $\mathcal{K}_j$ spanned by blocks of the form $(A^H + s_kI)^{-1}C^H$ for shifts $s_k, k = 1, \ldots, j.$ The considered projections need not be orthogonal and are built from the matrices appearing in the block rational Arnoldi decomposition associated with $\mathcal{K}_j.$ The resulting projected Riccati equation is solved for the small square Hermitian $Y_j.$ Then the Hermitian low-rank approximation $X_j = Z_jY_jZ_j^H$ to $X$ is set up, where the columns of $Z_j$ span $\mathcal{K}_j.$ The residual norm $\|\mathcal{R}(X_j)\|_F$ can be computed efficiently via the norm of a readily available $2p \times 2p$ matrix. We suggest reducing the rank of the approximate solution $X_j$ even further by truncating small eigenvalues from $X_j.$ This truncated approximate solution can be interpreted as the solution of the Riccati residual projected to a subspace of $\mathcal{K}_j,$ which gives us a way to efficiently evaluate the norm of the resulting residual. Numerical examples are presented.
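For small dense problems the Riccati residual of a low-rank approximation can be evaluated directly (the paper instead obtains this norm from a readily available $2p \times 2p$ matrix, which avoids forming $X_j$). A naive sketch:

```python
import numpy as np

def care_residual_norm(A, B, C, Z, Y):
    """Frobenius norm of R(X) = A^H X + X A + C^H C - X B B^H X for the
    low-rank approximation X = Z Y Z^H. Dense, naive evaluation for
    illustration; not the paper's efficient 2p x 2p computation."""
    X = Z @ Y @ Z.conj().T
    R = A.conj().T @ X + X @ A + C.conj().T @ C - X @ B @ B.conj().T @ X
    return np.linalg.norm(R, "fro")
```

In the scalar case $A = -1$, $B = C = 1$, the CARE reduces to $1 - 2x - x^2 = 0$ with positive solution $x = \sqrt{2} - 1$, which gives a quick sanity check of the formula.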
By approximating posterior distributions with weighted samples, particle filters (PFs) provide an efficient mechanism for solving non-linear sequential state estimation problems. While the effectiveness of particle filters has been recognised in various applications, their performance relies on the knowledge of dynamic models and measurement models, as well as the construction of effective proposal distributions. An emerging trend involves constructing components of particle filters using neural networks and optimising them by gradient descent, and such data-adaptive particle filtering approaches are often called differentiable particle filters. Due to the expressiveness of neural networks, differentiable particle filters are a promising computational tool for performing inference on sequential data in complex, high-dimensional tasks, such as vision-based robot localisation. In this paper, we review recent advances in differentiable particle filters and their applications. We place special emphasis on different design choices for key components of differentiable particle filters, including dynamic models, measurement models, proposal distributions, optimisation objectives, and differentiable resampling techniques.
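For reference, the classical bootstrap particle filter whose components differentiable particle filters replace with learned neural networks can be written in a few lines. The model below (a 1-D random-walk state observed with Gaussian noise) is an illustrative assumption, not drawn from the paper:

```python
import numpy as np

def bootstrap_pf(ys, n_particles=1000, q=0.1, r=0.5, seed=0):
    """Bootstrap particle filter for a 1-D random-walk state with
    Gaussian observation noise (a sketch). The dynamic model is used
    as the proposal; weights come from the measurement likelihood."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, n_particles)            # initial particles
    means = []
    for y in ys:
        x = x + rng.normal(0.0, q, n_particles)      # dynamic model / proposal
        logw = -0.5 * ((y - x) / r) ** 2             # measurement log-likelihood
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(np.sum(w * x))                  # posterior-mean estimate
        idx = rng.choice(n_particles, n_particles, p=w)  # multinomial resampling
        x = x[idx]
    return np.array(means)
```

Differentiable particle filters parameterise the dynamic model, measurement model and proposal with neural networks and replace the hard multinomial resampling step with a differentiable surrogate so that the whole loop can be trained by gradient descent.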
We present a novel deep learning method for estimating time-dependent parameters in Markov processes through discrete sampling. Departing from conventional machine learning, our approach reframes parameter approximation as an optimization problem using the maximum likelihood approach. Experimental validation focuses on parameter estimation in multivariate regression and stochastic differential equations (SDEs). Theoretical results show that, under specific conditions, the solution of the true SDE is close to that of the SDE whose parameters are approximated by our neural network. Our work contributes to SDE-based model parameter estimation, offering a versatile tool for diverse fields.
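The maximum-likelihood formulation can be illustrated with an Euler-Maruyama discretization: increments of dX = theta*dt + sigma*dW are Gaussian with mean theta*dt and variance sigma^2*dt. The sketch below fits a constant drift by gradient descent on the negative log-likelihood; it is a simplified stand-in for the paper's approach, which replaces the constant parameter with a time-dependent neural network:

```python
import numpy as np

# simulate dX = theta*dt + sigma*dW with a constant true drift
rng = np.random.default_rng(0)
dt, sigma, theta_true = 0.01, 0.2, 1.5
dw = rng.normal(0.0, np.sqrt(dt), 5000)
x = np.concatenate([[0.0], np.cumsum(theta_true * dt + sigma * dw)])

# Euler-Maruyama likelihood: increments ~ N(theta*dt, sigma^2*dt);
# gradient descent on the negative log-likelihood recovers the drift
dx = np.diff(x)
theta = 0.0
for _ in range(200):
    grad = -np.sum(dx - theta * dt) / sigma ** 2   # d(negloglik)/d(theta)
    theta -= 1e-4 * grad
```

With a neural network theta(t) in place of the constant, the same discretized likelihood becomes the training objective and the gradient step is taken with respect to the network weights.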
We derive information-theoretic generalization bounds for supervised learning algorithms based on the information contained in predictions rather than in the output of the training algorithm. These bounds improve over the existing information-theoretic bounds, are applicable to a wider range of algorithms, and solve two key challenges: (a) they give meaningful results for deterministic algorithms and (b) they are significantly easier to estimate. We show experimentally that the proposed bounds closely follow the generalization gap in practical scenarios for deep learning.
Hashing has been widely used in approximate nearest neighbour search for large-scale database retrieval, owing to its computational and storage efficiency. Deep hashing, which devises convolutional neural network architectures to exploit and extract the semantic information or features of images, has received increasing attention recently. In this survey, several deep supervised hashing methods for image retrieval are evaluated, and I identify three main directions for deep supervised hashing methods, with several comments made at the end. Moreover, to break through the bottleneck of existing hashing methods, I propose a Shadow Recurrent Hashing (SRH) method as an attempt. Specifically, I devise a CNN architecture to extract the semantic features of images and design a loss function that encourages similar images to be projected close to each other. To this end, I propose a concept: the shadow of the CNN output. During the optimization process, the CNN output and its shadow guide each other so as to approach the optimal solution as closely as possible. Several experiments on the CIFAR-10 dataset show the satisfactory performance of SRH.
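A loss that encourages similar images to be projected close together is commonly realised as a pairwise contrastive loss on the real-valued codes before binarization. The sketch below is a generic version of such a loss, not the proposed SRH loss:

```python
import numpy as np

def pairwise_hash_loss(h, s, margin=2.0):
    """Contrastive pairwise loss on real-valued codes h (n x bits):
    pulls pairs with s[i, j] = 1 together and pushes pairs with
    s[i, j] = 0 at least `margin` apart. Generic sketch, not SRH."""
    d2 = ((h[:, None, :] - h[None, :, :]) ** 2).sum(axis=-1)  # squared distances
    hinge = np.maximum(margin - np.sqrt(d2), 0.0) ** 2        # dissimilar-pair term
    return np.mean(s * d2 + (1 - s) * hinge)
```

In deep supervised hashing, a loss of this kind is minimized over the CNN parameters, and the binary codes are obtained by taking the sign of the learned real-valued outputs.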