Motivated by the problem of inferring the graph structure of functional connectivity networks from multi-level functional magnetic resonance imaging data, we develop a valid inference framework for high-dimensional graphical models that accounts for group-level heterogeneity. We introduce a neighborhood-based method to learn the graph structure and reframe the problem as that of inferring fixed effect parameters in a doubly high-dimensional linear mixed model. Specifically, we propose a LASSO-based estimator and a de-biased LASSO-based inference framework for the fixed effect parameters in the doubly high-dimensional linear mixed model, leveraging random matrix theory to deal with challenges induced by the identical fixed and random effect design matrices arising in our setting. Moreover, we introduce consistent estimators for the variance components to identify subject-specific edges in the inferred graph. To illustrate the generality of the proposed approach, we also adapt our method to account for serial correlation by learning heterogeneous graphs in the setting of a vector autoregressive model. We demonstrate the performance of the proposed framework using real data and benchmark simulation studies.
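For intuition, here is a minimal sketch of the neighborhood-based idea underlying the graph-learning step: each node is regressed on all others with the LASSO, and nonzero coefficients define edges. This is a generic Meinshausen-Bühlmann-style illustration only; the paper's mixed-effect modeling, de-biasing, and variance-component estimation are omitted, and `alpha` is an illustrative tuning parameter.

```python
# Sketch of neighborhood-based graph selection: regress each node on the
# others with the LASSO and connect node j to the nodes with nonzero
# coefficients. The mixed-effect and de-biasing machinery is omitted here.
import numpy as np
from sklearn.linear_model import Lasso

def neighborhood_graph(X, alpha=0.1):
    """X: (n_samples, p) data matrix; returns a symmetric boolean adjacency."""
    n, p = X.shape
    adj = np.zeros((p, p), dtype=bool)
    for j in range(p):
        others = np.delete(np.arange(p), j)
        coef = Lasso(alpha=alpha).fit(X[:, others], X[:, j]).coef_
        adj[j, others] = coef != 0
    return adj | adj.T  # symmetrize with the "OR" rule

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
print(neighborhood_graph(X).sum())  # number of selected (directed) edges
```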
This work addresses the problem of simulating Gaussian random fields that are continuously indexed over a class of metric graphs, termed graphs with Euclidean edges, which are more general and flexible than linear networks. We introduce three general algorithms that can reconstruct a wide spectrum of random fields whose covariance function depends on a specific metric, called the resistance metric, proposed in recent literature. The algorithms are applied to a synthetic case study consisting of a street network. They prove to be fast and accurate in that they reproduce the target covariance function and provide random fields whose finite-dimensional distributions are approximately Gaussian.
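As a point of reference, the following sketch shows generic covariance-based simulation of a Gaussian random field at fixed locations via a Cholesky factorization. It is not one of the paper's three algorithms, and an exponential covariance in Euclidean distance stands in for a covariance in the resistance metric.

```python
# Generic sketch: simulate a Gaussian random field at fixed locations by
# factorizing the covariance matrix. An exponential covariance in Euclidean
# distance is used for illustration in place of the resistance metric.
import numpy as np

def simulate_grf(points, cov_fn, n_draws=1, jitter=1e-10):
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    C = cov_fn(d) + jitter * np.eye(len(points))  # jitter for numerical stability
    L = np.linalg.cholesky(C)
    z = np.random.default_rng(0).standard_normal((len(points), n_draws))
    return L @ z  # each column is one Gaussian field realization

pts = np.random.default_rng(1).uniform(size=(100, 2))
fields = simulate_grf(pts, cov_fn=lambda d: np.exp(-3.0 * d), n_draws=5)
print(fields.shape)  # (100, 5)
```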
ZX-diagrams are a powerful graphical language for describing quantum processes, with applications in fundamental quantum mechanics, quantum circuit optimization, tensor network simulation, and many other areas. The utility of ZX-diagrams relies on a set of local transformation rules that can be applied to them without changing the underlying quantum process they describe. These rules can be exploited to optimize the structure of ZX-diagrams for a range of applications. However, finding an optimal sequence of transformation rules is generally an open problem. In this work, we bring together ZX-diagrams with reinforcement learning, a machine learning technique designed to discover an optimal sequence of actions in a decision-making problem, and show that a trained reinforcement learning agent can significantly outperform other optimization techniques such as a greedy strategy or simulated annealing. The use of graph neural networks to encode the policy of the agent enables generalization to diagrams much larger than those seen during training.
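To make the baseline concrete, here is a minimal sketch of simulated annealing over a sequence of rewrites, with `propose` and `cost` as hypothetical placeholders for applying one local ZX transformation rule and scoring a diagram (e.g., counting nodes).

```python
# Sketch of the simulated-annealing baseline: repeatedly propose a random
# local transformation and accept it with the Metropolis rule. `propose`
# and `cost` are problem-specific placeholders.
import math, random

def simulated_annealing(diagram, propose, cost, steps=10_000, t0=1.0):
    current, best = diagram, diagram
    for k in range(steps):
        t = t0 * (1 - k / steps) + 1e-9      # linear cooling schedule
        candidate = propose(current)         # apply one random rewrite rule
        delta = cost(candidate) - cost(current)
        if delta < 0 or random.random() < math.exp(-delta / t):
            current = candidate              # accept improving or lucky moves
        if cost(current) < cost(best):
            best = current
    return best
```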
Low-rank matrix approximation plays a ubiquitous role in applications such as image processing, signal processing, and data analysis. Recently, randomized algorithms for low-rank matrix approximation have gained widespread adoption due to their speed, accuracy, and robustness, particularly in their improved implementation on modern computer architectures. Existing low-rank approximation algorithms often require prior knowledge of the rank of the matrix, which is typically unknown. To address this bottleneck, we propose a low-rank approximation algorithm termed efficient orthogonal decomposition with automatic basis extraction (EOD-ABE), tailored for the scenario where the rank of the matrix is unknown. Notably, we introduce a randomized algorithm to automatically extract the basis that reveals the rank. The efficacy of the proposed algorithms is theoretically and numerically validated, demonstrating superior speed, accuracy, and robustness compared to existing methods. Furthermore, we apply the algorithms to image reconstruction, achieving remarkable results.
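A minimal sketch of the rank-adaptive idea: grow a randomized basis blockwise until the approximation residual falls below a tolerance, so no rank needs to be specified up front. This follows the standard adaptive randomized range-finder recipe and is not the paper's EOD-ABE algorithm itself.

```python
# Sketch of rank-adaptive randomized low-rank approximation: enlarge a random
# projection basis until the residual norm drops below a tolerance.
import numpy as np

def adaptive_lowrank(A, tol=1e-6, block=8, max_rank=None):
    m, n = A.shape
    max_rank = max_rank or min(m, n)
    rng = np.random.default_rng(0)
    Q = np.empty((m, 0))
    while Q.shape[1] < max_rank:
        Y = A @ rng.standard_normal((n, block))      # random sample of the range
        Y -= Q @ (Q.T @ Y)                           # project out current basis
        Qi, _ = np.linalg.qr(Y)
        Q = np.hstack([Q, Qi])
        if np.linalg.norm(A - Q @ (Q.T @ A)) < tol:  # residual reveals the rank
            break
    B = Q.T @ A
    return Q, B  # A ≈ Q @ B; the numerical rank is at most Q.shape[1]

A = np.random.default_rng(1).standard_normal((100, 5)) @ \
    np.random.default_rng(2).standard_normal((5, 80))   # exact rank 5
Q, B = adaptive_lowrank(A)
print(Q.shape[1], np.linalg.norm(A - Q @ B))
```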
In the framework of solid mechanics, the task of deriving material parameters from experimental data has recently re-emerged with the progress in full-field measurement capabilities and the renewed advances of machine learning. In this context, new methods such as the virtual fields method and physics-informed neural networks have been developed as alternatives to the already established least-squares and finite element-based approaches. Moreover, model discovery problems are starting to emerge and can also be addressed in a parameter estimation framework. These developments call for a new unified perspective, one that covers both traditional parameter estimation methods and novel approaches in which the state variables or the model structure itself are inferred as well. Adopting concepts discussed in the inverse problems community, we distinguish between all-at-once and reduced approaches. With this general framework, we are able to structure a large portion of the literature on parameter estimation in computational mechanics, and we can identify combinations that have not yet been addressed, two of which are proposed in this paper. We also discuss statistical approaches to quantify the uncertainty related to the estimated parameters, and we propose a novel two-step procedure for the identification of complex material models based on both frequentist and Bayesian principles. Finally, we illustrate and compare several of the aforementioned methods with mechanical benchmarks based on synthetic and real data.
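As a concrete instance of the classical reduced approach, the following sketch eliminates the state with a forward model and fits only the material parameters by least squares. The one-dimensional stress response, the parameter names, and the synthetic data are all illustrative assumptions, not a model from the paper.

```python
# Sketch of the reduced approach: a forward model eliminates the state, and
# only the material parameters are optimized by nonlinear least squares.
import numpy as np
from scipy.optimize import least_squares

def forward_model(params, strains):
    E, b = params                    # hypothetical stiffness and nonlinearity
    return E * strains + b * strains**3

strains = np.linspace(0.0, 0.1, 50)
true_params = np.array([210.0, 500.0])
rng = np.random.default_rng(0)
measured = forward_model(true_params, strains) + rng.normal(0, 0.05, 50)

def residual(p):
    return forward_model(p, strains) - measured   # misfit to "experimental" data

fit = least_squares(residual, x0=np.array([100.0, 0.0]))
print(fit.x)  # recovered parameters, close to true_params
```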
Sampling trajectories from a distribution and then ranking them with a specified cost function is a common approach in autonomous driving. Typically, the sampling distribution is hand-crafted (e.g., a Gaussian or a grid). Recently, there have been efforts to learn the sampling distribution through generative models such as the conditional variational autoencoder (CVAE). However, these approaches fail to capture the multi-modality of driving behaviour due to the Gaussian latent prior of the CVAE. Thus, in this paper, we re-imagine distribution learning through a vector quantized variational autoencoder (VQ-VAE), whose discrete latent space is well equipped to capture a multi-modal sampling distribution. The VQ-VAE is trained with demonstration data of optimal trajectories. We further propose a differentiable optimization-based safety filter to minimally correct the VQ-VAE-sampled trajectories to ensure collision avoidance. We use backpropagation through the optimization layers in a self-supervised learning setup to learn good initializations and optimal parameters for the safety filter. We perform extensive comparisons with a state-of-the-art CVAE-based baseline in dense and aggressive traffic scenarios and show a reduction of up to 12 times in collision rate while remaining competitive in driving speed.
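For illustration, here is a minimal sketch of the vector-quantization step at the heart of a VQ-VAE: each encoder output is snapped to its nearest codebook vector, yielding the discrete latent space that supports multi-modal sampling. The codebook size and dimensions are arbitrary, and the trajectory decoder and safety filter are not shown.

```python
# Sketch of VQ-VAE vector quantization: snap each encoder output to its
# nearest codebook vector, producing discrete latent codes.
import numpy as np

def quantize(z, codebook):
    """z: (n, d) encoder outputs; codebook: (K, d). Returns indices and vectors."""
    d2 = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (n, K) distances
    idx = d2.argmin(axis=1)
    return idx, codebook[idx]

rng = np.random.default_rng(0)
codebook = rng.standard_normal((64, 16))   # K=64 discrete codes of dim 16
z = rng.standard_normal((8, 16))           # encoder outputs for 8 samples
idx, zq = quantize(z, codebook)
print(idx)  # discrete codes; sampling these indices yields multi-modal draws
```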
In the context of imitation learning applied to dexterous robotic hands, the high complexity of the systems makes learning complex manipulation tasks challenging. However, the numerous datasets depicting human hands performing a variety of tasks can provide valuable knowledge about human hand motion. We propose a method that leverages multiple large-scale task-agnostic datasets to obtain latent representations that effectively encode motion subtrajectories, which we incorporate into a transformer-based behavior cloning method. Our results demonstrate that employing these latent representations yields enhanced performance compared to conventional behavior cloning methods, particularly in resilience to errors and noise in perception and proprioception. Furthermore, the proposed approach relies solely on human demonstrations, eliminating the need for teleoperation and thereby accelerating the data acquisition process. Accurate inverse kinematics for fingertip retargeting ensures precise transfer from human hand data to the robot, facilitating effective learning and deployment of manipulation policies. Finally, the trained policies have been successfully transferred to a real-world 23-DoF robotic system.
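For orientation, the sketch below fixes the basic behavior cloning objective: a policy network regresses expert actions from observations. A plain MLP on synthetic tensors stands in for the paper's transformer conditioned on latent subtrajectory representations; all shapes are illustrative.

```python
# Minimal behavior-cloning sketch: supervised regression of expert actions.
import torch
from torch import nn

obs_dim, act_dim = 32, 23                 # e.g., a 23-DoF hand (illustrative)
policy = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                       nn.Linear(128, act_dim))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

obs = torch.randn(1024, obs_dim)          # stand-ins for demonstration data
expert_actions = torch.randn(1024, act_dim)

for _ in range(100):
    loss = nn.functional.mse_loss(policy(obs), expert_actions)
    opt.zero_grad(); loss.backward(); opt.step()
print(loss.item())
```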
Recently, the performance of monocular depth estimation (MDE) has been significantly boosted by the integration of transformer models. However, transformer models are usually computationally expensive, and their effectiveness in lightweight models is limited compared to convolutions. This limitation hinders their deployment on resource-limited devices. In this paper, we propose a cross-architecture knowledge distillation method for MDE, dubbed DisDepth, to enhance efficient CNN models with the supervision of state-of-the-art transformer models. Concretely, we first build a simple framework of convolution-based MDE, which is then enhanced with a novel local-global convolution module to capture both local and global information in the image. To effectively distill valuable information from the transformer teacher and bridge the gap between convolution and transformer features, we introduce a method to acclimate the teacher with a ghost decoder. The ghost decoder is a copy of the student's decoder, and adapting the teacher with the ghost decoder aligns its features to be student-friendly while preserving their original performance. Furthermore, we propose an attentive knowledge distillation loss that adaptively identifies features valuable for depth estimation. This loss guides the student to focus more on attentive regions, improving its performance. Extensive experiments on the KITTI and NYU Depth V2 datasets demonstrate the effectiveness of DisDepth. Our method achieves significant improvements on various efficient backbones, showcasing its potential for efficient monocular depth estimation.
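To fix ideas, here is a hedged sketch of an attention-weighted feature-distillation loss: per-location weights focus the student on regions deemed informative. Deriving the weights from teacher feature energy is an assumption made for illustration, not the paper's exact formulation.

```python
# Sketch of an attention-weighted feature-distillation loss: spatial weights
# (here from teacher feature energy, an illustrative choice) emphasize the
# regions most informative for depth.
import torch

def attentive_kd_loss(student_feat, teacher_feat, eps=1e-6):
    """Both features: (B, C, H, W), already projected to a common channel dim."""
    energy = teacher_feat.pow(2).mean(dim=1, keepdim=True)     # (B, 1, H, W)
    w = energy / (energy.sum(dim=(2, 3), keepdim=True) + eps)  # spatial weights
    return (w * (student_feat - teacher_feat).pow(2)).sum(dim=(2, 3)).mean()

s = torch.randn(2, 64, 24, 80, requires_grad=True)  # student features
t = torch.randn(2, 64, 24, 80)                      # teacher features
print(attentive_kd_loss(s, t).item())
```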
Qini curves have emerged as an attractive and popular approach for evaluating the benefit of data-driven targeting rules for treatment allocation. We propose a generalization of the Qini curve to multiple costly treatment arms that quantifies the value of optimally selecting among both units and treatment arms at different budget levels. We develop an efficient algorithm for computing these curves and propose bootstrap-based confidence intervals that are exact in large samples for any point on the curve. These confidence intervals can be used to conduct hypothesis tests comparing the value of treatment targeting using an optimal combination of arms with that of using just a subset of arms, or with a non-targeting assignment rule that ignores covariates, at different budget levels. We demonstrate the statistical performance of the method in a simulation experiment and in an application to treatment targeting for election turnout.
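As a generic illustration of the interval construction, the sketch below computes a percentile-bootstrap confidence interval for a statistic at a fixed budget point; the paper's intervals carry a large-sample exactness guarantee that this plain recipe does not claim.

```python
# Sketch of a percentile-bootstrap confidence interval for a scalar statistic,
# e.g., the targeting gain at one budget level.
import numpy as np

def bootstrap_ci(data, statistic, n_boot=2000, level=0.95, seed=0):
    rng = np.random.default_rng(seed)
    n = len(data)
    stats = np.array([statistic(data[rng.integers(0, n, n)])  # resample with replacement
                      for _ in range(n_boot)])
    lo, hi = np.quantile(stats, [(1 - level) / 2, (1 + level) / 2])
    return lo, hi

# Illustrative data: per-unit gains from treating the targeted units.
outcomes = np.random.default_rng(1).normal(0.3, 1.0, size=500)
print(bootstrap_ci(outcomes, statistic=np.mean))
```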
Transformer architectures have facilitated the development of large-scale and general-purpose sequence models for prediction tasks in natural language processing and computer vision, e.g., GPT-3 and the Swin Transformer. Although originally designed for prediction problems, it is natural to ask whether they are suitable for sequential decision-making and reinforcement learning problems, which are typically beset by long-standing issues involving sample efficiency, credit assignment, and partial observability. In recent years, sequence models, especially the Transformer, have attracted increasing interest in the reinforcement learning community, spawning numerous approaches with notable effectiveness and generalizability. This survey presents a comprehensive overview of recent works aimed at solving sequential decision-making tasks with sequence models such as the Transformer, discussing the connection between sequential decision-making and sequence modeling and categorizing these works based on the way they utilize the Transformer. Moreover, this paper puts forth various potential avenues for future research intended to improve the effectiveness of large sequence models for sequential decision-making, encompassing theoretical foundations, network architectures, algorithms, and efficient training systems. This article has been accepted by Frontiers of Computer Science; this is an early version, and the most up-to-date version can be found at //journal.hep.com.cn/fcs/EN/10.1007/s11704-023-2689-5
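To illustrate the connection between decision-making and sequence modeling, the sketch below shows the common trajectory-to-token recasting (as popularized by the Decision Transformer): returns-to-go, states, and actions are interleaved into one stream that a sequence model can fit autoregressively. Shapes and values are illustrative.

```python
# Sketch of recasting an RL trajectory as a token sequence: interleave
# per-timestep returns-to-go, states, and actions for autoregressive modeling.
import numpy as np

def to_token_stream(returns_to_go, states, actions):
    """Interleave per-timestep (R_t, s_t, a_t) into one sequence of tokens."""
    stream = []
    for R, s, a in zip(returns_to_go, states, actions):
        stream.extend([("return", R), ("state", s), ("action", a)])
    return stream

rewards = np.array([1.0, 0.0, 2.0])
rtg = rewards[::-1].cumsum()[::-1]  # returns-to-go: R_t = sum of rewards from t on
tokens = to_token_stream(rtg, states=["s0", "s1", "s2"], actions=[0, 1, 0])
print(tokens[:4])
```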
We introduce a generic framework that reduces the computational cost of object detection while retaining accuracy in scenarios where objects of varied sizes appear in high-resolution images. Detection progresses in a coarse-to-fine manner, first on a down-sampled version of the image and then on a sequence of higher-resolution regions identified as likely to improve detection accuracy. Built upon reinforcement learning, our approach consists of a model (R-net) that uses coarse detection results to predict the potential accuracy gain of analyzing a region at a higher resolution, and another model (Q-net) that sequentially selects regions to zoom in on. Experiments on the Caltech Pedestrians dataset show that our approach reduces the number of processed pixels by over 50% without a drop in detection accuracy. The merits of our approach become more significant on a high-resolution test set collected from the YFCC100M dataset, where it maintains high detection performance while reducing the number of processed pixels by about 70% and the detection time by over 50%.
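Schematically, the detection loop can be sketched as follows, with `detect`, `gain_map`, `pick_region`, `downsample`, and `crop` as hypothetical stand-ins for the base detector, R-net, Q-net, and image operations; the pixel-budget stopping rule is an illustrative simplification.

```python
# Sketch of the coarse-to-fine loop: detect on a downsampled image, then
# sequentially zoom into the regions with the highest predicted accuracy gain.
def coarse_to_fine_detect(image, detect, gain_map, pick_region,
                          downsample, crop, pixel_budget):
    coarse = downsample(image)
    detections = list(detect(coarse))        # cheap coarse pass
    spent = coarse.size                      # pixels processed so far
    while spent < pixel_budget:
        region = pick_region(gain_map(image, detections))  # R-net, then Q-net
        if region is None:                   # no region worth zooming into
            break
        patch = crop(image, region)
        detections.extend(detect(patch))     # refine with a fine pass
        spent += patch.size
    return detections
```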