Dynamic structural causal models (SCMs) are a powerful framework for reasoning about direct effects in dynamic systems, where a direct effect measures how a change in one variable affects another variable while all other variables are held constant. The causal relations in a dynamic structural causal model can be qualitatively represented with a full-time causal graph. Assuming linearity and causal sufficiency, and given the full-time causal graph, the direct causal effect is always identifiable and can be estimated from data by adjusting on any set of variables given by the so-called single-door criterion. However, in many applications such a graph is not available for various reasons, yet experts nevertheless have access to an abstraction of the full-time causal graph, the summary causal graph, which represents causal relations between time series while omitting temporal information. This paper presents a complete identifiability result which characterizes all cases for which the direct effect is graphically identifiable from a summary causal graph, and gives two sound finite adjustment sets that can be used to estimate the direct effect whenever it is identifiable.
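As a concrete illustration of the single-door adjustment invoked above, the following is the standard statement for linear SCMs (the notation is ours, not the paper's): if a set $Z$ satisfies the single-door criterion relative to $(X, Y)$, the direct effect of $X$ on $Y$ is read off a single regression.

```latex
% Single-door adjustment in a linear SCM (standard result; notation ours):
% if Z satisfies the single-door criterion relative to (X, Y), the direct
% effect \alpha_{XY} of X on Y is the coefficient of X in the linear
% regression of Y on (X, Z).
\[
  Y = \alpha_{XY}\, X + \beta^{\top} Z + \varepsilon,
  \qquad
  \alpha_{XY} = \frac{\partial}{\partial x}\, \mathbb{E}\left[\, Y \mid x, z \,\right].
\]
```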
Recent advancements in evaluating matrix-exponential functions have opened the door to the practical use of exponential time-integration methods in numerical weather prediction (NWP). The success of exponential methods in shallow water simulations has led to the question of whether they can be beneficial in a 3D atmospheric model. In this paper, we take a first step by evaluating the behavior of exponential time-integration methods in the Navy's compressible deep-atmosphere nonhydrostatic global model (NEPTUNE: Navy Environmental Prediction sysTem Utilizing a Nonhydrostatic Engine). Simulations are conducted on a set of idealized test cases designed to assess key features of a nonhydrostatic model and demonstrate that exponential integrators capture the desired large- and small-scale traits, yielding results comparable to those found in the literature. We propose a new upper-boundary absorbing layer that is independent of the reference state and is shown to be effective in both idealized and real-data simulations. A real-data forecast using an exponential method with full physics is presented, providing a positive outlook for using exponential integrators in NWP.
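For readers unfamiliar with exponential time integration, a minimal sketch of the idea follows (the generic exponential Euler scheme for a semilinear right-hand side; this is an illustration, not NEPTUNE's specific integrator or splitting):

```latex
% Exponential Euler step for u'(t) = A u(t) + N(u(t)) with step size \Delta t
% (generic illustration only):
\[
  u_{n+1} = e^{\Delta t\, A}\, u_n
            + \Delta t\, \varphi_1(\Delta t\, A)\, N(u_n),
  \qquad
  \varphi_1(z) = \frac{e^{z} - 1}{z}.
\]
% Efficient evaluation of the matrix functions e^{\Delta t A} and
% \varphi_1(\Delta t A) acting on vectors is what makes such schemes
% practical at NWP problem sizes.
```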
Nonparametric varying coefficient (NVC) models are useful for modeling time-varying effects on responses that are measured repeatedly for the same subjects. When the number of covariates is moderate or large, it is desirable to perform variable selection from the varying coefficient functions. However, existing methods for variable selection in NVC models either fail to account for within-subject correlations or require the practitioner to specify a parametric form for the correlation structure. In this paper, we introduce the nonparametric varying coefficient spike-and-slab lasso (NVC-SSL) for Bayesian high-dimensional NVC models. Through the introduction of functional random effects, our method allows for flexible modeling of within-subject correlations without needing to specify a parametric covariance function. We further propose several scalable optimization and Markov chain Monte Carlo (MCMC) algorithms. For variable selection, we propose an Expectation Conditional Maximization (ECM) algorithm to rapidly obtain maximum a posteriori (MAP) estimates. Our ECM algorithm scales linearly in the total number of observations $N$ and the number of covariates $p$. For uncertainty quantification, we introduce an approximate MCMC algorithm that also scales linearly in both $N$ and $p$. We demonstrate the scalability, variable selection performance, and inferential capabilities of our method through simulations and a real data application. These algorithms are implemented in the publicly available R package NVCSSL on the Comprehensive R Archive Network.
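A generic NVC model with functional random effects of the kind described above may help fix ideas (the notation here is assumed for illustration and is not taken from the paper):

```latex
% For subject i = 1, ..., n observed at times t_{i1}, ..., t_{i m_i}:
\[
  y_i(t_{ij}) = \sum_{k=1}^{p} x_{ik}(t_{ij})\, \beta_k(t_{ij})
                + u_i(t_{ij}) + \varepsilon_{ij},
\]
% where the \beta_k(\cdot) are smooth time-varying coefficient functions
% targeted by variable selection, u_i(\cdot) is a subject-level functional
% random effect capturing within-subject correlation, and \varepsilon_{ij}
% is measurement noise.
```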
The proliferation of data generation has spurred advancements in functional data analysis. With the ability to analyze multiple variables simultaneously, the demand for working with multivariate functional data has increased. This study proposes a novel formulation of the epigraph and hypograph indexes, as well as their generalized expressions, specifically tailored for the multivariate functional context. These definitions take into account the interrelations between components. Furthermore, the proposed indexes are employed to cluster multivariate functional data. In the clustering process, the indexes are applied to the data as well as to their first and second derivatives. This generates a reduced-dimension dataset from the original multivariate functional data, enabling the application of well-established multivariate clustering techniques that have been extensively studied in the literature. The methodology has been tested on simulated and real datasets, with comparative analyses against state-of-the-art methods to assess its performance.
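For reference, a minimal sketch of the classical univariate epigraph and hypograph indexes that the paper generalizes to the multivariate setting (one common definition on a discretized grid; the multivariate, component-aware indexes proposed in the paper are not reproduced here):

```python
import numpy as np

def epigraph_index(curves):
    """Epigraph index (EI) for a sample of discretized curves.

    curves: array of shape (n_curves, n_points), each row a curve evaluated
    on a common grid.  EI of curve i = proportion of curves in the sample
    lying entirely on or above curve i (curve i itself included).
    """
    above = (curves[None, :, :] >= curves[:, None, :]).all(axis=2)
    return above.mean(axis=1)

def hypograph_index(curves):
    """Hypograph index (HI): proportion of curves entirely on or below curve i."""
    below = (curves[None, :, :] <= curves[:, None, :]).all(axis=2)
    return below.mean(axis=1)

# Toy usage: indexes of noisy sine curves, usable as low-dimensional
# clustering features (the paper also applies the indexes to derivatives).
rng = np.random.default_rng(0)
grid = np.linspace(0.0, 1.0, 100)
sample = np.sin(2 * np.pi * grid) + rng.normal(0.0, 0.1, size=(50, 100))
features = np.column_stack([epigraph_index(sample), hypograph_index(sample)])
```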
Some early color photographic processes based on special color screen filters pose specific challenges in their digitization and digital presentation. Those challenges include dynamic range, resolution, and the difficulty of stitching geometrically repeating patterns. We describe a novel method used to digitize the collection of early color photographs at the National Geographic Society which makes use of a custom open-source software tool to analyze and precisely stitch regular color screen processes.
The possibility of dynamically modifying the computational load of neural models at inference time is crucial for on-device processing, where computational power is limited and time-varying. Established approaches for neural model compression exist, but they provide architecturally static models. In this paper, we investigate the use of early-exit architectures, which rely on intermediate exit branches, for large-vocabulary speech recognition. This allows for the development of dynamic models that adjust their computational cost to the available resources and to the recognition performance. Unlike previous works, in addition to using pre-trained backbones we also train the model from scratch with an early-exit architecture. Experiments on public datasets show that early-exit architectures trained from scratch not only preserve performance when fewer encoder layers are used, but also improve task accuracy compared to single-exit models and pre-trained models. Additionally, we investigate an exit selection strategy based on posterior probabilities as an alternative to frame-based entropy.
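A minimal sketch of the kind of posterior-based exit selection described above (the confidence measure, array shapes, and threshold are illustrative assumptions, not the paper's exact strategy):

```python
import numpy as np

def select_exit(exit_posteriors, threshold=0.9):
    """Pick the earliest exit whose average max posterior exceeds a threshold.

    exit_posteriors: list with one array per exit branch, ordered from the
    earliest (cheapest) to the last exit; each array has shape
    (n_frames, n_tokens) with rows summing to 1.
    Returns the index of the selected exit (the last exit if none qualifies).
    """
    for k, post in enumerate(exit_posteriors):
        confidence = post.max(axis=1).mean()  # average per-frame max posterior
        if confidence >= threshold:
            return k
    return len(exit_posteriors) - 1

def average_frame_entropy(post):
    """Frame-averaged entropy, the alternative criterion mentioned above."""
    eps = 1e-12
    return float((-(post * np.log(post + eps)).sum(axis=1)).mean())
```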
We present a nonparametric graphical model. Our model uses an undirected graph that represents conditional independence relations among general random variables, defined through the conditional dependence coefficient of Azadkia and Chatterjee (2021). The set of edges of the graph is defined as $E=\{(i,j):R_{i,j}\neq 0\}$, where $R_{i,j}$ is the conditional dependence coefficient for $X_i$ and $X_j$ given $(X_1,\ldots,X_p) \backslash \{X_{i},X_{j}\}$. We propose a two-step graph structure learning procedure: first, we compute the matrix of sample conditional dependence coefficients $\widehat{R_{i,j}}$; next, for some prespecified threshold $\lambda>0$, we include an edge $\{i,j\}$ if $\left|\widehat{R_{i,j}} \right| \geq \lambda$. The graph recovery procedure has been evaluated on artificial and real datasets. We also apply a slight modification of our graph recovery procedure to learn partial correlation graphs for elliptical distributions.
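A minimal sketch of the two-step selection procedure, assuming a user-supplied estimator `codec(y, x, Z)` of the Azadkia-Chatterjee conditional dependence coefficient (the estimator itself is a hypothetical placeholder here, and the coefficient is treated as symmetric for simplicity):

```python
import numpy as np

def learn_graph(X, codec, lam):
    """Two-step graph recovery: estimate R_ij for every pair, then threshold.

    X:     (n_samples, p) data matrix.
    codec: callable codec(y, x, Z) -> float estimating the conditional
           dependence coefficient of y and x given Z (hypothetical placeholder
           for an existing implementation of the Azadkia-Chatterjee estimator).
    lam:   threshold; an edge {i, j} is kept when |R_hat_ij| >= lam.
    """
    n, p = X.shape
    R = np.zeros((p, p))
    for i in range(p):
        for j in range(i + 1, p):
            rest = np.delete(np.arange(p), [i, j])
            R[i, j] = R[j, i] = codec(X[:, i], X[:, j], X[:, rest])
    edges = {(i, j) for i in range(p) for j in range(i + 1, p)
             if abs(R[i, j]) >= lam}
    return R, edges
```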
Leverage score sampling is crucial to the design of randomized algorithms for large-scale matrix problems, yet the computation of leverage scores is a bottleneck in many applications. In this paper, we propose a quantum algorithm to accelerate this useful method. The speedup is at least quadratic and can be exponential for well-conditioned matrices. We also prove quantum lower bounds, which suggest that our quantum algorithm is close to optimal. As an application, we propose a new quantum algorithm for ridge regression problems with vector solution outputs. It achieves polynomial speedups over the best known classical algorithm. In the process, we give an improved randomized algorithm for ridge regression.
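For context, a sketch of exact classical leverage scores and of leverage score sampling, i.e., the quantity and routine the quantum algorithm is designed to accelerate (this is the standard classical computation, not the proposed quantum algorithm):

```python
import numpy as np

def leverage_scores(A):
    """Exact leverage scores of a tall matrix A (n x d, n >= d).

    The i-th leverage score is the squared Euclidean norm of the i-th row of
    an orthonormal basis U of the column space of A.
    """
    U, _, _ = np.linalg.svd(A, full_matrices=False)
    return (U ** 2).sum(axis=1)

def leverage_sample(A, k, seed=None):
    """Sample k rows of A with probabilities proportional to leverage scores,
    rescaled in the usual way so the sketch is unbiased for A^T A."""
    rng = np.random.default_rng(seed)
    scores = leverage_scores(A)
    probs = scores / scores.sum()
    idx = rng.choice(A.shape[0], size=k, replace=True, p=probs)
    return A[idx] / np.sqrt(k * probs[idx, None])
```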
A change point detection (CPD) framework assisted by a predictive machine learning model, called "Predict and Compare", is introduced and characterised in relation to other state-of-the-art online CPD routines, which it outperforms in terms of false positive rate and out-of-control average run length. The method focuses on improving standard methods from sequential analysis, such as the CUSUM rule, with respect to these quality measures. This is achieved by replacing the typically used trend estimation functionals, such as the running mean, with more sophisticated predictive models (Predict step), and comparing their prognoses with the actual data (Compare step). The two models used in the Predict step are the ARIMA model and the LSTM recurrent neural network. However, the framework is formulated in general terms, so as to allow the use of prediction or comparison methods other than those tested here. The power of the method is demonstrated in a tribological case study, in which change points separating the run-in, steady-state, and divergent wear phases are detected in the regime of very few false positives.
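A minimal sketch of the Predict-and-Compare idea, using a running-mean predictor and a two-sided CUSUM on the prediction residuals (the window, reference value k, and threshold h are illustrative; the ARIMA or LSTM predictors from the paper could be plugged into the Predict step in place of the running mean):

```python
import numpy as np

def predict_and_compare(x, window=20, k=0.5, h=5.0):
    """Toy Predict-and-Compare change point detector.

    Predict step: forecast x[t] with the mean of the previous `window` points.
    Compare step: two-sided CUSUM on the standardized residuals with reference
    value k and decision threshold h.  Returns the indices of alarms.
    """
    x = np.asarray(x, dtype=float)
    pred = np.array([x[t - window:t].mean() for t in range(window, len(x))])
    resid = x[window:] - pred
    sigma = resid[:window].std() + 1e-12  # rough scale from an early, assumed in-control stretch
    s_pos = s_neg = 0.0
    alarms = []
    for t, z in enumerate(resid / sigma):
        s_pos = max(0.0, s_pos + z - k)
        s_neg = max(0.0, s_neg - z - k)
        if s_pos > h or s_neg > h:
            alarms.append(t + window)   # index in the original series
            s_pos = s_neg = 0.0         # restart after an alarm
    return alarms
```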
Unmeasured confounding presents a common challenge in observational studies, potentially making standard causal parameters unidentifiable without additional assumptions. Given the increasing availability of diverse data sources, exploiting data linkage offers a potential solution to mitigate unmeasured confounding within a primary study of interest. However, this approach often introduces selection bias, as data linkage is feasible only for a subset of the study population. To address this concern, we explore three nonparametric identification strategies under the assumption that a unit's inclusion in the linked cohort is determined solely by the observed confounders, while acknowledging that the ignorability assumption may depend on some partially unobserved covariates. The existence of multiple identification strategies motivates the development of estimators that effectively capture distinct components of the observed data distribution. Appropriately combining these estimators yields triply robust estimators for the average treatment effect. These estimators remain consistent if at least one of the three distinct parts of the observed data law is correctly specified. Moreover, they are locally efficient if all the models are correctly specified. We evaluate the proposed estimators using simulation studies and real data analysis.
Hashing has been widely used in approximate nearest neighbor search for large-scale database retrieval because of its computational and storage efficiency. Deep hashing, which uses convolutional neural network architectures to extract the semantic features of images, has received increasing attention recently. In this survey, several deep supervised hashing methods for image retrieval are evaluated, and I identify three main directions for deep supervised hashing methods, with several comments made at the end. Moreover, to break through the bottleneck of existing hashing methods, I propose a Shadow Recurrent Hashing (SRH) method as an attempt. Specifically, I devise a CNN architecture to extract the semantic features of images and design a loss function that encourages similar images to be projected close to each other. To this end, I propose the concept of a shadow of the CNN output. During the optimization process, the CNN output and its shadow guide each other so as to approach the optimal solution as closely as possible. Several experiments on the CIFAR-10 dataset show the satisfactory performance of SRH.
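As background, a generic pairwise similarity-preserving hashing objective of the kind alluded to above (a contrastive form with a quantization term; this is a common baseline loss, not the proposed SRH loss or its shadow mechanism):

```python
import numpy as np

def pairwise_hash_loss(codes, sim, margin=2.0, quant_weight=0.1):
    """Generic pairwise loss for learning binary-like codes.

    codes: (n, b) real-valued network outputs, later binarized with sign().
    sim:   (n, n) 0/1 matrix, sim[i, j] = 1 when images i and j are similar.
    Similar pairs are pulled together, dissimilar pairs are pushed apart up
    to a margin, and a quantization term pulls outputs toward {-1, +1}.
    """
    d = np.linalg.norm(codes[:, None, :] - codes[None, :, :], axis=2)
    pull = sim * d ** 2
    push = (1 - sim) * np.maximum(0.0, margin - d) ** 2
    quant = np.abs(np.abs(codes) - 1.0).mean()
    return pull.mean() + push.mean() + quant_weight * quant
```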