Herein, we propose a Spearman rank correlation based screening procedure for ultrahigh-dimensional data with a censored response. The proposed method is model-free, specifying no regression form for the predictors or the response variable, and is robust under unknown monotone transformations of the response and the predictors. Sure-screening and rank-consistency properties are established under mild regularity conditions. Simulation studies demonstrate that the new screening method performs well in the presence of heavy-tailed distributions, strongly dependent predictors, or outliers, and offers superior performance over existing nonparametric screening procedures. In particular, the new screening method still works well when the response variable is observed under a high censoring rate. An illustrative example is provided.
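A minimal sketch of plain rank-correlation screening, ignoring the censoring adjustment the paper develops; the generating model, the data sizes, and the cutoff `top_d` below are hypothetical.

```python
import numpy as np
from scipy.stats import rankdata

def spearman_screen(X, y, top_d):
    """Rank the predictors by |Spearman correlation| with the response and keep
    the top_d of them. This only illustrates plain rank-correlation screening,
    not the censoring-adjusted procedure of the paper."""
    ry = rankdata(y)
    scores = np.array([abs(np.corrcoef(rankdata(X[:, j]), ry)[0, 1])
                       for j in range(X.shape[1])])
    return np.argsort(scores)[::-1][:top_d], scores

# toy usage: 200 observations, 1000 predictors, only the first three informative
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1000))
y = np.exp(X[:, 0] + 0.5 * X[:, 1] - X[:, 2]) + rng.normal(size=200)
selected, _ = spearman_screen(X, y, top_d=20)
print(sorted(selected[:5]))
```

Because only ranks enter the score, the selection is unchanged by any monotone transformation applied to the response or to a predictor, which is the robustness property the abstract refers to.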
We present an immersed boundary method to simulate the creeping motion of a rigid particle in a fluid described by the Stokes equations, discretized with a finite element strategy on unfitted meshes, called Phi-FEM, which describes the solid through a level-set function. One advantage of our method is the use of standard finite element spaces and classical integration tools, while maintaining optimal convergence (proved theoretically in the H1 norm for the velocity and the L2 norm for the pressure; observed numerically also in the L2 norm for the velocity).
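A minimal sketch of the level-set description of the solid on an unfitted background grid, assuming a circular particle; it illustrates only the geometric ingredient, not the Phi-FEM discretization of the Stokes equations, and the grid and particle parameters are hypothetical.

```python
import numpy as np

# The rigid particle is a disk of radius r centred at c; phi is its signed
# distance function (phi < 0 inside the solid, phi > 0 in the fluid).
c, r = np.array([0.5, 0.5]), 0.2

def phi(x, y):
    return np.hypot(x - c[0], y - c[1]) - r

# Background Cartesian grid that does not fit the particle boundary.
n = 64
xs, ys = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")
vals = phi(xs, ys)

solid_nodes = vals < 0                      # grid nodes covered by the particle
corners = np.stack([np.sign(vals[:-1, :-1]), np.sign(vals[1:, :-1]),
                    np.sign(vals[:-1, 1:]), np.sign(vals[1:, 1:])])
cut_cells = (corners.min(axis=0) < 0) & (corners.max(axis=0) > 0)  # cells crossed by the boundary
print("solid nodes:", int(solid_nodes.sum()), "cut cells:", int(cut_cells.sum()))
```

The sign of the level-set function is all the unfitted method needs to locate the fluid region, the solid region, and the cells cut by the particle boundary.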
In recent years, graphical multiple testing procedures have gained popularity due to their generality and ease of interpretation. In contemporary research, online error control is often required, where an error criterion, such as the familywise error rate (FWER) or the false discovery rate (FDR), must remain under control while testing an a priori unbounded sequence of hypotheses. Although the classical graphical procedure can be extended to the online setting, previous work has shown that it leads to low power, and other approaches, such as Adaptive-Discard (ADDIS) procedures, are preferred instead. In this paper, we introduce an ADDIS-Graph with FWER control and its extension to the FDR setting. These graphical ADDIS procedures combine the good interpretability of graphical procedures with the high online power of ADDIS procedures. Moreover, they can be adapted to a local dependence structure and an asynchronous testing setup, leading to power improvements over the current state-of-the-art methods. Consequently, the proposed methods are useful for a wide range of applications, including innovative complex trial designs, such as platform trials, and large-scale test designs, such as the evaluation of A/B tests for marketing research.
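For illustration, a minimal online-FWER sketch based on plain alpha-spending (online Bonferroni); it is not the ADDIS-Graph procedure of the paper, which additionally recycles the significance levels of discarded and conservative p-values along a graph. The p-value stream and the spending sequence below are hypothetical.

```python
import math

def alpha_spending_levels(alpha=0.05):
    """Yield individual levels alpha_i with sum_i alpha_i = alpha (online Bonferroni):
    rejecting H_i whenever p_i <= alpha_i controls the FWER at level alpha for an
    a priori unbounded stream of hypotheses."""
    i = 0
    while True:
        i += 1
        yield alpha * 6.0 / (math.pi ** 2 * i ** 2)   # uses sum_i 1/i^2 = pi^2/6

levels = alpha_spending_levels(alpha=0.05)
p_values = [0.0001, 0.2, 0.003, 0.5]                  # hypothetical online stream
for p, a_i in zip(p_values, levels):
    print(f"p={p:<7} alpha_i={a_i:.5f} -> {'reject' if p <= a_i else 'accept'}")
```

The low power of such naive spending, whose levels shrink quickly, is precisely what adaptive and discarding rules such as ADDIS-type procedures are designed to overcome.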
In the sequential decision making setting, an agent aims to achieve systematic generalization over a large, possibly infinite, set of environments. Such environments are modeled as discrete Markov decision processes with both states and actions represented through a feature vector. The underlying structure of the environments allows the transition dynamics to be factored into two components: one that is environment-specific and another that is shared. Consider, as an example, a set of environments that share the laws of motion. In this setting, the agent can collect a finite number of reward-free interactions from a subset of these environments. The agent must then be able to approximately solve any planning task defined over any environment in the original set, relying only on the above interactions. Can we design a provably efficient algorithm that achieves this ambitious goal of systematic generalization? In this paper, we give a partially positive answer to this question. First, we provide a tractable formulation of systematic generalization by employing a causal viewpoint. Then, under specific structural assumptions, we provide a simple learning algorithm that guarantees any desired planning error up to an unavoidable sub-optimality term, while showcasing a polynomial sample complexity.
Dirichlet Process mixture models (DPMMs) in combination with Gaussian kernels have been an important modeling tool for numerous data domains arising from the biological, physical, and social sciences. However, this versatility in applications does not extend to strong theoretical guarantees for the underlying parameter estimates, for which only a logarithmic rate is achieved. In this work, we (re)introduce and investigate a metric, named the Orlicz-Wasserstein distance, in the study of the Bayesian contraction behavior for the parameters. We show that, despite the overall slow convergence guarantees for all the parameters, posterior contraction for parameters happens at almost polynomial rates in outlier regions of the parameter space. Our theoretical results provide new insight into the convergence behavior of parameters arising from various settings of hierarchical Bayesian nonparametric models. In addition, we provide an algorithm to compute the metric by leveraging Sinkhorn divergences and validate our findings through a simulation study.
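A minimal Sinkhorn sketch for an entropic optimal-transport divergence between two discrete measures on the real line; the paper's Orlicz-Wasserstein metric and its Sinkhorn-based computation are more involved, and the measures, cost, and regularization strength below are toy assumptions.

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.05, n_iter=500):
    """Entropic-regularized OT cost between discrete measures a, b with cost matrix C."""
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(n_iter):          # alternating scaling (Sinkhorn) iterations
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]  # (approximately) optimal transport plan
    return np.sum(P * C)

def sinkhorn_divergence(a, b, x, y, eps=0.05):
    """Debiased divergence S(a,b) = OT(a,b) - (OT(a,a) + OT(b,b)) / 2."""
    C = lambda u, v: np.abs(u[:, None] - v[None, :]) ** 2
    return (sinkhorn(a, b, C(x, y), eps)
            - 0.5 * (sinkhorn(a, a, C(x, x), eps) + sinkhorn(b, b, C(y, y), eps)))

# toy usage on two empirical measures supported on the real line
x = np.array([0.0, 1.0, 2.0]); a = np.ones(3) / 3
y = np.array([0.5, 1.5]);      b = np.ones(2) / 2
print(sinkhorn_divergence(a, b, x, y))
```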
High-dimensional feature selection is a central problem in a variety of application domains such as machine learning, image analysis, and genomics. In this paper, we propose graph-based tests as a useful basis for feature selection. We describe an algorithm for selecting informative features in high-dimensional data, where each observation comes from one of $K$ different distributions. Our algorithm can be applied in a completely nonparametric setup without any distributional assumptions on the data, and it aims to output those features that contribute the most to the overall distributional variation. At the heart of our method is the recursive application of distribution-free graph-based tests on subsets of the feature set, located at different depths of a hierarchical clustering tree constructed from the data. Our algorithm recovers all truly contributing features with high probability, while ensuring optimal control of false discoveries. Finally, we show the superior performance of our method over existing ones on synthetic data, and also demonstrate its utility on two real-life datasets from the domains of climate change and single cell transcriptomics.
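A minimal sketch of the two building blocks, assuming an MST-based edge-count two-sample statistic as the distribution-free graph-based test and SciPy's average-linkage clustering for the feature tree; the recursive tree traversal and the false-discovery control of the full algorithm are not reproduced, and the data below are synthetic.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree

def edge_count_test(X1, X2, n_perm=500, seed=0):
    """Permutation p-value of a Friedman-Rafsky-style statistic: the number of
    MST edges joining observations from different samples (few such edges
    indicates a distributional difference)."""
    rng = np.random.default_rng(seed)
    X = np.vstack([X1, X2])
    labels = np.r_[np.zeros(len(X1)), np.ones(len(X2))]
    mst = minimum_spanning_tree(squareform(pdist(X))).tocoo()
    edges = list(zip(mst.row, mst.col))
    stat = lambda lab: sum(lab[i] != lab[j] for i, j in edges)
    obs = stat(labels)
    perm = [stat(rng.permutation(labels)) for _ in range(n_perm)]
    return np.mean([s <= obs for s in perm])

# two samples differing only in feature 0, plus a hierarchical tree over features
rng = np.random.default_rng(1)
X1 = rng.normal(size=(40, 6)); X2 = rng.normal(size=(40, 6)); X2[:, 0] += 1.5
tree = linkage(np.vstack([X1, X2]).T, method="average")   # clustering of the 6 features
print("feature groups at depth 1:", fcluster(tree, t=2, criterion="maxclust"))
print("p-value on feature 0 alone:", edge_count_test(X1[:, [0]], X2[:, [0]]))
```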
We develop a novel, general and computationally efficient framework, called Divide and Conquer Dynamic Programming (DCDP), for localizing change points in time series data with high-dimensional features. DCDP deploys a class of greedy algorithms that are applicable to a broad variety of high-dimensional statistical models and enjoy almost linear computational complexity. We investigate the performance of DCDP in three commonly studied change point settings in high dimensions: the mean model, the Gaussian graphical model, and the linear regression model. In all three cases, we derive non-asymptotic bounds for the accuracy of the DCDP change point estimators. We demonstrate that the DCDP procedures consistently estimate the change points with sharp, and in some cases optimal, rates while incurring significantly smaller computational costs than the best available algorithms. Our findings are supported by extensive numerical experiments on both synthetic and real data.
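A minimal sketch of the classical optimal-partitioning dynamic program for the univariate mean model, i.e., the backbone that a divide-and-conquer scheme like DCDP accelerates; it is not the DCDP algorithm itself, and the penalty choice and simulated series below are hypothetical.

```python
import numpy as np

def dp_change_points(y, penalty):
    """Dynamic program minimizing the within-segment sum of squared errors plus
    `penalty` per change point, for a univariate mean-shift model."""
    n = len(y)
    cs, cs2 = np.r_[0, np.cumsum(y)], np.r_[0, np.cumsum(y ** 2)]
    def seg_cost(s, t):                         # SSE of y[s:t] around its mean
        m = cs[t] - cs[s]
        return cs2[t] - cs2[s] - m * m / (t - s)
    best = np.full(n + 1, np.inf); best[0] = -penalty
    prev = np.zeros(n + 1, dtype=int)
    for t in range(1, n + 1):
        costs = [best[s] + seg_cost(s, t) + penalty for s in range(t)]
        prev[t] = int(np.argmin(costs)); best[t] = costs[prev[t]]
    cps, t = [], n                               # backtrack the change points
    while t > 0:
        if prev[t] > 0: cps.append(prev[t])
        t = prev[t]
    return sorted(cps)

rng = np.random.default_rng(0)
y = np.r_[rng.normal(0, 1, 100), rng.normal(3, 1, 100), rng.normal(-1, 1, 100)]
print(dp_change_points(y, penalty=3 * np.log(len(y))))   # expected near [100, 200]
```

The quadratic-in-n inner loop of this exact DP is the computational bottleneck that motivates restricting the divide step to a coarse grid of candidate split points.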
Risk-sensitive reinforcement learning (RL) has become a popular tool to control the risk of uncertain outcomes and ensure reliable performance in various sequential decision-making problems. While policy gradient methods have been developed for risk-sensitive RL, it remains unclear whether these methods enjoy the same global convergence guarantees as in the risk-neutral case. In this paper, we consider a class of dynamic time-consistent risk measures, called Expected Conditional Risk Measures (ECRMs), and derive policy gradient updates for ECRM-based objective functions. Under both constrained direct parameterization and unconstrained softmax parameterization, we establish the global convergence of the corresponding risk-averse policy gradient algorithms. We further test a risk-averse variant of the REINFORCE algorithm on a stochastic Cliffwalk environment to demonstrate the efficacy of our algorithm and the importance of risk control.
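A minimal risk-neutral REINFORCE sketch with softmax parameterization on a toy bandit, showing only the policy gradient update that risk-averse (ECRM-based) variants build on; the bandit, step size, and horizon below are hypothetical, and the risk-averse objective of the paper is not implemented.

```python
import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([0.2, 0.5, 0.8])   # expected reward of each of 3 arms
theta = np.zeros(3)                      # softmax policy parameters
lr = 0.1

for t in range(2000):
    pi = np.exp(theta - theta.max()); pi /= pi.sum()   # softmax policy
    a = rng.choice(3, p=pi)
    reward = rng.normal(true_means[a], 1.0)
    grad_log_pi = -pi; grad_log_pi[a] += 1.0           # d/d theta of log pi(a)
    theta += lr * reward * grad_log_pi                 # REINFORCE update

print("learned policy:", np.round(np.exp(theta) / np.exp(theta).sum(), 3))
```

A risk-averse variant would replace the raw reward signal with a gradient of the chosen risk measure of the return; the softmax parameterization and score-function update shown here stay the same.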
Hyperdimensional computing (HDC) uses binary vectors of high dimension to perform classification. Due to its simplicity and massive parallelism, HDC can be highly energy-efficient and well-suited for resource-constrained platforms. However, in trading off orthogonality with efficiency, hypervectors may use tens of thousands of dimensions. In this paper, we examine whether such high dimensions are necessary. In particular, we give a detailed theoretical analysis of the relationship among hypervector dimension, accuracy, and orthogonality. The main conclusion of this study is that a much lower dimension, typically less than 100, can achieve similar or even higher classification accuracy than state-of-the-art HDC models. Based on this insight, we propose a suite of novel techniques to build HDC models that use binary hypervectors of dimensions orders of magnitude smaller than those found in state-of-the-art HDC models, yet yield equivalent or even improved accuracy and efficiency. For image classification, we achieved an HDC accuracy of 96.88\% with a dimension of only 32 on the MNIST dataset. We further explore our methods on more complex datasets such as CIFAR-10 and show the limits of HDC.
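A minimal HDC classification sketch with bipolar hypervectors of small dimension (D = 64) on synthetic data, not MNIST; the encoding scheme and data generator below are illustrative assumptions rather than the techniques proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
D, n_features, n_classes = 64, 100, 3
ids = rng.choice([-1, 1], size=(n_features, D))      # random feature ID hypervectors

def encode(x):
    """Bundle the ID hypervectors of the active features into one hypervector."""
    return np.sign(x @ ids + 1e-9)

def sample(c):
    """Synthetic binary sample: class c activates its own block of features."""
    x = (rng.random(n_features) < 0.05).astype(float)
    x[c * 30:(c + 1) * 30] += (rng.random(30) < 0.4)
    return np.minimum(x, 1)

train = [(sample(c), c) for c in range(n_classes) for _ in range(50)]
protos = np.zeros((n_classes, D))
for x, c in train:                                   # class prototypes by bundling
    protos[c] += encode(x)
protos = np.sign(protos + 1e-9)

test = [(sample(c), c) for c in range(n_classes) for _ in range(50)]
acc = np.mean([np.argmax(protos @ encode(x)) == c for x, c in test])
print("accuracy at D =", D, ":", acc)
```

Rerunning the sketch for a range of D values gives a quick empirical feel for the dimension-accuracy trade-off that the paper analyzes theoretically.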
With the advancements in connected devices, a huge amount of real-time data is being generated. Efficient storage, transmission, and analysis of this real-time big data are important, as it serves a number of purposes ranging from decision making to fault prediction. Alongside this, real-time big data has stringent utility and privacy requirements, so its handling strategies must be chosen meticulously. One optimal way to store and transmit data with lossless compression is Huffman coding, which compresses the data into a variable-length binary stream. Similarly, to protect the privacy of such big data, differential privacy is widely used nowadays; it perturbs the data on the basis of a privacy budget and sensitivity. Traditional differential privacy mechanisms do provide privacy guarantees; however, real-time data cannot be treated as an ordinary set of records, because it usually has underlying patterns and cycles that can be linked to a specific individual's private information and can lead to severe privacy leakages (e.g., analysing smart metering data can reveal an individual's daily routine). Thus, it is equally important to develop a privacy preservation model that preserves privacy on the basis of occurrences and patterns in the data. In this paper, we design a novel Huff-DP mechanism, which selects the optimal privacy budget on the basis of the privacy requirement of each specific record. To further enhance budget determination, we propose static, sine, and fuzzy logic based decision algorithms. The experimental evaluations show that our proposed Huff-DP mechanism provides effective privacy protection while reducing the privacy budget computational cost.
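A minimal sketch of the two building blocks, Huffman coding and the Laplace mechanism; the frequency-dependent budget selection of Huff-DP (static, sine, and fuzzy logic based) is not reproduced, and the smart-meter readings, sensitivity, and epsilon below are hypothetical.

```python
import heapq
import numpy as np

def huffman_code(freq):
    """Build a Huffman code (symbol -> bitstring) from symbol frequencies."""
    heap = [[f, i, {s: ""}] for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, i2, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, [f1 + f2, i2, merged])
    return heap[0][2]

def laplace_mechanism(value, sensitivity, epsilon, rng=np.random.default_rng()):
    """Standard Laplace mechanism: add Laplace(0, sensitivity/epsilon) noise."""
    return value + rng.laplace(0.0, sensitivity / epsilon)

readings = [3, 3, 5, 3, 7, 5, 3, 9]                  # hypothetical smart-meter stream
code = huffman_code({v: readings.count(v) for v in set(readings)})
noisy = [round(laplace_mechanism(v, sensitivity=1.0, epsilon=0.5), 2) for v in readings]
print("code:", code)
print("compressed:", "".join(code[v] for v in readings))
print("noisy readings:", noisy)
```

A pattern-aware mechanism in the spirit of Huff-DP would vary `epsilon` per record according to how frequently (and hence how identifiably) its value occurs, rather than using a single fixed budget as above.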
In recent years, Face Image Quality Assessment (FIQA) has become an indispensable part of the face recognition system, guaranteeing stable and reliable recognition performance in unconstrained scenarios. For this purpose, an FIQA method should consider both the intrinsic properties and the recognizability of the face image. Most previous works estimate the sample-wise embedding uncertainty or pair-wise similarity as the quality score, which uses only partial intra-class information. However, these methods ignore the valuable inter-class information, which is essential for estimating the recognizability of a face image. In this work, we argue that a high-quality face image should be similar to its intra-class samples and dissimilar to its inter-class samples. Thus, we propose a novel unsupervised FIQA method that incorporates Similarity Distribution Distance for Face Image Quality Assessment (SDD-FIQA). Our method generates quality pseudo-labels by calculating the Wasserstein Distance (WD) between the intra-class and inter-class similarity distributions. With these quality pseudo-labels, we train a regression network for quality prediction. Extensive experiments on benchmark datasets demonstrate that the proposed SDD-FIQA surpasses the state-of-the-art methods by an impressive margin. Meanwhile, our method shows good generalization across different recognition systems.
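A minimal sketch of the pseudo-label idea using SciPy's one-dimensional Wasserstein distance between cosine-similarity distributions; the embeddings below are random stand-ins for a recognition model's output, and the exact pseudo-label generation scheme of SDD-FIQA may differ in its details.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def quality_pseudo_label(emb, intra_embs, inter_embs):
    """Pseudo quality score for one face embedding: the Wasserstein distance
    between its intra-class and inter-class cosine-similarity distributions.
    A larger distance (intra similarities well separated from inter similarities)
    indicates a more recognizable, hence higher-quality, image."""
    cos = lambda A: A @ emb / (np.linalg.norm(A, axis=1) * np.linalg.norm(emb))
    return wasserstein_distance(cos(intra_embs), cos(inter_embs))

# toy usage with random unit embeddings standing in for a recognition model
rng = np.random.default_rng(0)
emb = rng.normal(size=128); emb /= np.linalg.norm(emb)
intra = emb + 0.3 * rng.normal(size=(50, 128))     # same identity: close to emb
inter = rng.normal(size=(200, 128))                # other identities
print("pseudo quality label:", round(quality_pseudo_label(emb, intra, inter), 3))
```

Scores of this kind, computed over a labeled face dataset, would serve as regression targets for the quality-prediction network described in the abstract.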