The measurements performed by particle physics experiments must account for the imperfect response of the detectors used to observe the interactions. One approach, unfolding, statistically adjusts the experimental data for detector effects. Recently, generative machine learning models have shown promise for performing unbinned unfolding in a high number of dimensions. However, all current generative approaches are limited to unfolding a fixed set of observables, making them unable to perform full-event unfolding in the variable-dimensional environment of collider data. A novel modification to the variational latent diffusion model (VLD) approach to generative unfolding is presented, which allows for unfolding of high- and variable-dimensional feature spaces. The performance of this method is evaluated in the context of semi-leptonic top quark pair production at the Large Hadron Collider.
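To make the variable-dimensional setup concrete, here is a minimal sketch (not the paper's VLD architecture; the module shapes, noise schedule, and all names are illustrative assumptions) that conditions a set-denoising transformer on detector-level features and uses padding masks so events of different particle multiplicity share one model:

```python
import torch
import torch.nn as nn

class MaskedDenoiser(nn.Module):
    """Toy conditional denoiser over padded, variable-length particle sets."""
    def __init__(self, feat_dim=4, d_model=64, nhead=4):
        super().__init__()
        self.embed = nn.Linear(feat_dim + 1, d_model)   # +1 for the diffusion time
        self.cond = nn.Linear(feat_dim, d_model)        # detector-level conditioning
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.out = nn.Linear(d_model, feat_dim)

    def forward(self, x_noisy, t, det, pad_mask):
        # x_noisy: (B, N, F) padded truth-level features; pad_mask: (B, N), True = padding
        tt = t.expand(-1, x_noisy.size(1)).unsqueeze(-1)
        h = self.embed(torch.cat([x_noisy, tt], dim=-1))
        h = h + self.cond(det).unsqueeze(1)             # broadcast event-level condition
        h = self.encoder(h, src_key_padding_mask=pad_mask)
        return self.out(h)

# One training step: predict the noise added to the truth-level set.
B, N, F = 8, 6, 4
model = MaskedDenoiser(F)
x0 = torch.randn(B, N, F)                 # truth-level particles (padded to length N)
det = torch.randn(B, F)                   # summary of the detector-level event
pad = torch.zeros(B, N, dtype=torch.bool) # mark absent particles True in real data
t = torch.rand(B, 1)
eps = torch.randn_like(x0)
x_t = (1 - t).sqrt().unsqueeze(-1) * x0 + t.sqrt().unsqueeze(-1) * eps
loss = (model(x_t, t, det, pad) - eps)[~pad].pow(2).mean()
loss.backward()
```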
We consider pure-exploration problems in the context of stochastic sequential adaptive experiments with a finite set of alternatives. The central objective is to answer a query regarding the alternatives with high confidence while minimizing measurement efforts. One canonical example is identifying the best-performing alternative, a problem known as ranking and selection in simulation or best-arm identification in machine learning. We formulate the problem complexity measure as a maximin optimization problem for the static fixed-budget, fixed-confidence, and posterior convergence rate settings. By incorporating dual variables directly into the analysis, we derive necessary and sufficient conditions for an allocation's optimality. The introduction of dual variables allows us to sidestep the combinatorial complexity that arises when considering only primal variables. These optimality conditions enable the extension of the top-two algorithm design principle to more general pure-exploration problems. Moreover, our analysis yields a straightforward and effective information-directed selection rule that adaptively chooses from a candidate set based on the informational value of the candidates. We demonstrate the broad range of contexts in which our design principle can be implemented. In particular, when combined with information-directed selection, top-two Thompson sampling achieves asymptotic optimality in Gaussian best-arm identification, resolving a notable open question in the pure-exploration literature. Our algorithm attains optimality in $\varepsilon$-best-arm identification (or ranking and selection with a probability of good selection guarantee) and thresholding bandits. Our results provide a general principle for adapting Thompson sampling to pure-exploration problems. Numerical experiments highlight the efficiency of our proposed algorithms compared to existing methods.
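As an illustration of the algorithmic template, here is a minimal sketch of top-two Thompson sampling for Gaussian best-arm identification; the tie-breaking rule below is only a simple variance-reduction proxy standing in for the paper's information-directed selection, and all constants are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
K, sigma, T = 5, 1.0, 2000
true_means = rng.normal(0.0, 1.0, K)
n = np.ones(K)                           # one forced pull per arm
s = rng.normal(true_means, sigma)        # running reward sums

for _ in range(T):
    mu, var = s / n, sigma**2 / n
    leader = int(np.argmax(rng.normal(mu, np.sqrt(var))))   # posterior sample
    challenger = leader
    while challenger == leader:          # resample until the top arm differs
        challenger = int(np.argmax(rng.normal(mu, np.sqrt(var))))
    # Illustrative information-directed tie-break: pull whichever candidate's
    # extra sample shrinks its posterior variance the most.
    gain = [sigma**2 / (n[i] * (n[i] + 1)) for i in (leader, challenger)]
    arm = (leader, challenger)[int(np.argmax(gain))]
    n[arm] += 1
    s[arm] += rng.normal(true_means[arm], sigma)

print("guess:", int(np.argmax(s / n)), "truth:", int(np.argmax(true_means)))
```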
Vehicle telematics provides granular data for dynamic driving risk assessment, but current methods often rely on aggregated metrics (e.g., harsh braking counts) and do not fully exploit the rich time-series structure of telematics data. In this paper, we introduce a flexible framework using a continuous-time hidden Markov model (CTHMM) to model and analyze trip-level telematics data. Unlike existing methods, the CTHMM models raw time-series data without predefined thresholds on harsh driving events or assumptions about accident probabilities. Moreover, our analysis is based solely on telematics data, requiring no traditional covariates such as driver or vehicle characteristics. Through unsupervised anomaly detection based on pseudo-residuals, we identify deviations from normal driving patterns -- defined as the prevalent behaviour observed in a driver's history or across the population -- which are linked to accident risk. Validated on both controlled and real-world datasets, the CTHMM effectively detects abnormal driving behaviour and trips with increased accident likelihood. In the real-data analysis, higher anomaly levels in longitudinal and lateral accelerations consistently correlate with greater accident risk, with classification models using this information achieving ROC-AUC values as high as 0.86 for trip-level analysis and 0.78 for distinguishing drivers with claims. Furthermore, the methodology reveals significant behavioural differences between drivers with and without claims, offering valuable insights for insurance applications, accident analysis, and prevention.
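The anomaly-detection step can be illustrated with a simplified discrete-time analogue (not the paper's CTHMM; the transition matrix, emission parameters, and clipping below are assumptions): pseudo-residuals push each observation through its one-step-ahead forecast CDF, so points that are unlikely under normal driving yield large $|z_t|$:

```python
import numpy as np
from scipy.stats import norm

A = np.array([[0.95, 0.05], [0.10, 0.90]])             # state transition matrix
mus, sds = np.array([0.0, 2.5]), np.array([0.5, 1.0])  # per-state Gaussian emissions
pi = np.array([0.9, 0.1])                              # state dist. before first obs.

def pseudo_residuals(x):
    alpha, z = pi.copy(), []
    for xt in x:
        pred = alpha @ A                                # predictive state distribution
        cdf = float(pred @ norm.cdf(xt, mus, sds))      # mixture forecast CDF
        z.append(norm.ppf(np.clip(cdf, 1e-12, 1 - 1e-12)))
        post = pred * norm.pdf(xt, mus, sds)            # filtering update
        alpha = post / post.sum()
    return np.array(z)

x = np.concatenate([np.random.default_rng(1).normal(0, 0.5, 200), [6.0]])
print("max |pseudo-residual|:", np.abs(pseudo_residuals(x)).max())  # spike stands out
```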
Preconditioning techniques are crucial for enhancing the efficiency of solving large-scale linear equation systems that arise from partial differential equation (PDE) discretization. These techniques, such as Incomplete Cholesky factorization (IC) and data-driven neural network methods, accelerate the convergence of iterative solvers like Conjugate Gradient (CG) by approximating the original matrices. This paper introduces a novel approach that integrates Graph Neural Networks (GNNs) with traditional IC, addressing the shortcomings of direct generation methods based on GNNs and achieving significant improvements in computational efficiency and scalability. Experimental results demonstrate an average reduction in iteration counts of 24.8% compared to IC and a two-order-of-magnitude increase in training scale compared to previous methods. The approach was validated on a three-dimensional static structural analysis using finite element methods, with training sparse matrices of up to 5 million dimensions and inference scales of up to 10 million. Furthermore, the approach demonstrates robust generalization capabilities across scales, facilitating the effective acceleration of CG solvers for large-scale linear equations using small-scale data on modest hardware. The method's robustness and scalability make it a practical solution for computational science.
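For context, the baseline being accelerated looks roughly like the following: CG preconditioned with an incomplete factorization. SciPy ships ILU (spilu) rather than IC, so it serves here as a stand-in, and the paper's GNN would replace or correct this factorization step; this is illustrative code, not the authors' implementation:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 10_000
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csc")  # 1D Poisson matrix
b = np.ones(n)

ilu = spla.spilu(A, drop_tol=1e-3, fill_factor=2)       # ILU stand-in for IC
M = spla.LinearOperator(A.shape, matvec=ilu.solve)      # applies M^{-1} r per CG step

iters = []
x, info = spla.cg(A, b, M=M, rtol=1e-8, callback=lambda xk: iters.append(1))  # SciPy >= 1.12
print(info, len(iters), np.linalg.norm(A @ x - b))      # 0 iff converged
```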
Offline reinforcement learning (RL) aims to find an optimal policy for Markov decision processes (MDPs) using a pre-collected dataset. In this work, we revisit the linear programming (LP) reformulation of Markov decision processes for offline RL, with the goal of developing algorithms with optimal $O(1/\sqrt{n})$ sample complexity, where $n$ is the sample size, under partial data coverage and general function approximation, and with favorable computational tractability. To this end, we derive new \emph{error bounds} for both the dual and primal-dual formulations of the LP, and incorporate them properly as \emph{constraints} in the LP reformulation. We then show that, under a completeness-type assumption, $O(1/\sqrt{n})$ sample complexity can be achieved under the standard single-policy coverage assumption, when one properly \emph{relaxes} the occupancy validity constraint in the LP. This framework can readily handle both infinite-horizon discounted and average-reward MDPs, in both general function approximation and tabular cases. The instantiation to the tabular case achieves either state-of-the-art or the first sample complexities of offline RL in these settings. To further remove any completeness-type assumption, we then introduce a proper \emph{lower-bound constraint} in the LP, and a variant of the standard single-policy coverage assumption. Such an algorithm leads to an $O(1/\sqrt{n})$ sample complexity with dependence on the \emph{value-function gap}, with only realizability assumptions. Our properly constrained LP framework advances the existing results in several aspects, in relaxing certain assumptions and achieving the optimal $O(1/\sqrt{n})$ sample complexity, with simple analyses. We hope our results bring new insights into the use of LP formulations and the equivalent primal-dual minimax optimization for offline RL, through the error-bound induced constraints.
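For reference, the occupancy-measure LP for an infinite-horizon discounted MDP that this framework builds on reads (a standard formulation; the paper's error-bound constraints are imposed on top of it, and the equality below is the occupancy validity constraint that gets relaxed):

$$\max_{\mu \ge 0} \; \sum_{s,a} \mu(s,a)\, r(s,a) \quad \text{s.t.} \quad \sum_{a} \mu(s,a) \;=\; (1-\gamma)\,\rho(s) + \gamma \sum_{s',a'} P(s \mid s',a')\, \mu(s',a') \quad \forall s,$$

where $\mu$ is the normalized discounted occupancy measure, $\rho$ the initial state distribution, and $\gamma$ the discount factor.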
Multigrid methods, despite being asymptotically optimal algorithms, depend on the careful selection of their individual components for efficiency. Moreover, they are mostly restricted to standard cycle types like V-, F-, and W-cycles. We use grammar rules to generate arbitrary-shaped cycles, wherein the smoothers and their relaxation weights are chosen independently at each step within the cycle. We call this a flexible multigrid cycle. These flexible cycles are used in Algebraic Multigrid (AMG) methods and optimized using genetic programming. The flexible AMG methods are implemented in the hypre software library, and the programs are optimized separately for two cases: a standalone AMG solver for a 3D anisotropic problem and an AMG preconditioner with conjugate gradient for a multiphysics code. We observe that the optimized flexible cycles provide higher efficiency and better performance than the standard cycle types.
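The generation step can be sketched with a toy grammar whose derivations are arbitrary-shaped cycles, each step carrying its own smoother and relaxation weight; a genetic-programming loop would evolve and score such derivations in hypre, but here one is merely sampled at random (the grammar, smoother names, and probabilities are illustrative):

```python
import random

# cycle  -> step cycle | coarse cycle | ""
# step   -> ("smooth", smoother, weight)
# coarse -> ("coarsen", cycle, "interp")   # recurse on the coarser level
SMOOTHERS = ["jacobi", "gauss-seidel", "sor"]

def sample_cycle(level, max_level, rng, p_stop=0.3):
    ops = []
    while rng.random() > p_stop:
        if level < max_level and rng.random() < 0.4:
            ops.append(("coarsen", sample_cycle(level + 1, max_level, rng), "interp"))
        else:
            ops.append(("smooth", rng.choice(SMOOTHERS), round(rng.uniform(0.5, 1.5), 2)))
    return ops

rng = random.Random(7)
print(sample_cycle(0, 2, rng))   # e.g. [('smooth', 'sor', 1.13), ('coarsen', [...], 'interp')]
```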
A canonical desideratum for prediction problems is that performance guarantees should hold not just on average over the population, but also for meaningful subpopulations within the overall population. But what constitutes a meaningful subpopulation? In this work, we take the perspective that relevant subpopulations should be defined with respect to the clusters that naturally emerge from the distribution of individuals for which predictions are being made. In this view, a population refers to a mixture model whose components constitute the relevant subpopulations. We suggest two formalisms for capturing per-subgroup guarantees: first, by attributing each individual to the component from which they were most likely drawn, given their features; and second, by attributing each individual to all components in proportion to their relative likelihood of having been drawn from each component. Using online calibration as a case study, we study a multi-objective algorithm that provides guarantees for each of these formalisms by handling all plausible underlying subpopulation structures simultaneously, achieving an $O(T^{1/2})$ rate even when the subpopulations are not well-separated. In comparison, the more natural cluster-then-predict approach, which first recovers the structure of the subpopulations and then makes predictions, suffers from an $O(T^{2/3})$ rate and requires the subpopulations to be separable. Along the way, we prove that providing per-subgroup calibration guarantees for underlying clusters can be easier than learning the clusters: separation between median subgroup features is required for the latter but not the former.
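Concretely, for a mixture $\mathcal{D} = \sum_k w_k \mathcal{D}_k$ with component densities $p_k$, the two attribution rules can be written as (our formalization of the description above):

$$k^*(x) = \arg\max_k \; w_k\, p_k(x), \qquad \lambda_k(x) = \frac{w_k\, p_k(x)}{\sum_j w_j\, p_j(x)},$$

where the first rule attributes $x$ to its most likely component and the second attributes it fractionally according to the posterior over components.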
Binaural speech enhancement (BSE) aims to jointly improve the speech quality and intelligibility of noisy signals received by hearing devices and to preserve the spatial cues of the target for natural listening. Existing methods often suffer from a trade-off between noise reduction (NR) capacity and spatial cue preservation (SCP) accuracy, as well as high computational demands in complex acoustic scenes. In this work, we present a learning-based lightweight binaural complex convolutional network (LBCCN), which excels in NR by filtering only the low-frequency bands and keeping the rest unchanged. Additionally, our approach explicitly incorporates the estimation of the interchannel relative acoustic transfer function to ensure spatial cue fidelity and speech clarity. Results show that the proposed LBCCN achieves NR performance comparable to state-of-the-art methods under various noise conditions, but with a much lower computational cost and better SCP. The reproducible code and audio examples are available at https://github.com/jywanng/LBCCN.
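Our reading of the band-splitting idea, as a rough sketch rather than the released LBCCN code: enhance only the low-frequency STFT bands with a (placeholder) complex-valued network and pass the remaining bands through unchanged; the cutoff bin, shapes, and the identity "network" are assumptions:

```python
import torch

def lowband_enhance(stft, net, cutoff_bin):
    # stft: (B, C, F, T) complex spectrogram from the two ears
    low, high = stft[:, :, :cutoff_bin], stft[:, :, cutoff_bin:]
    low_ri = torch.view_as_real(low)              # complex -> (..., 2) real pairs
    enhanced = torch.view_as_complex(net(low_ri)) # enhance low bands only
    return torch.cat([enhanced, high], dim=2)     # high bands untouched

net = torch.nn.Sequential(torch.nn.Identity())    # placeholder for the complex CNN
x = torch.randn(2, 2, 257, 100, dtype=torch.complex64)
print(lowband_enhance(x, net, cutoff_bin=64).shape)  # torch.Size([2, 2, 257, 100])
```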
Robotic wire arc additive manufacturing has been widely adopted due to its high deposition rates and large print volume relative to other metal additive manufacturing processes. For complex geometries, printing with variable height within layers offers the advantage of producing overhangs without the need for support material or geometric decomposition. This approach has been demonstrated for steel using precomputed robot speed profiles to achieve consistent geometric quality. In contrast, aluminum exhibits a bead geometry that is tightly coupled to the temperature of the previous layer, resulting in significant changes to the height of the deposited material at different points in the part. This paper presents a closed-loop approach to correcting for variations in the height of the deposited material between layers. We use an IR camera mounted on a separate robot to track the welding flame and estimate the height of deposited material. The robot velocity profile is then updated to account for the error in the previous layer and the nominal planned height profile while factoring in process and system constraints. Implementation of this framework showed significant improvement over the open-loop case and demonstrated robustness to inaccurate model parameters.
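The correction step can be sketched under a simple assumed model in which bead height varies inversely with torch speed at fixed wire feed ($h \approx k/v$); the clipping bounds and speed limits below are illustrative, not the paper's identified parameters:

```python
import numpy as np

def update_speeds(v_prev, h_target, h_measured, v_min=2.0, v_max=15.0):
    # Command height for the next layer = nominal plus the last layer's error.
    h_cmd = h_target + (h_target - h_measured)
    h_cmd = np.clip(h_cmd, 0.1 * h_target, 2.0 * h_target)  # keep the model sane
    v_new = v_prev * h_measured / h_cmd   # since h is proportional to 1/v here
    return np.clip(v_new, v_min, v_max)   # respect process/system constraints

v = np.full(5, 8.0)                              # mm/s per path segment
h_meas = np.array([1.9, 2.1, 2.4, 2.0, 1.6])     # mm, estimated from the IR camera
print(update_speeds(v, h_target=2.0, h_measured=h_meas))
```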
The accurate and interpretable prediction of future events in time-series data often requires capturing representative patterns (also referred to as states) underpinning the observed data. To this end, most existing studies focus on the representation and recognition of states, but ignore the changing transitional relations among them. In this paper, we present the evolutionary state graph, a dynamic graph structure designed to systematically represent the evolving relations (edges) among states (nodes) along time. We analyze dynamic graphs constructed from time-series data and show that changes in the graph structures (e.g., edges connecting certain state nodes) can inform the occurrences of events (i.e., time-series fluctuations). Inspired by this, we propose a novel graph neural network model, the Evolutionary State Graph Network (EvoNet), to encode the evolutionary state graph for accurate and interpretable time-series event prediction. Specifically, EvoNet models both the node-level (state-to-state) and graph-level (segment-to-segment) propagation, and captures the node-graph (state-to-segment) interactions over time. Experimental results on five real-world datasets show that our approach not only achieves clear improvements over 11 baselines, but also provides insights for explaining its event predictions.
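Constructing the evolutionary state graph can be sketched as follows (our reading of the abstract; the segment length, state count, and window size are illustrative): segments are assigned to discrete states and, per time window, an adjacency matrix counts transitions between the states of consecutive segments, over which EvoNet would then run message passing:

```python
import numpy as np
from sklearn.cluster import KMeans

def evolutionary_state_graph(series, seg_len=20, n_states=4, win=5):
    segs = series[: len(series) // seg_len * seg_len].reshape(-1, seg_len)
    states = KMeans(n_clusters=n_states, n_init=10, random_state=0).fit_predict(segs)
    graphs = []
    for t in range(len(states) - win):
        A = np.zeros((n_states, n_states))
        for a, b in zip(states[t : t + win], states[t + 1 : t + win + 1]):
            A[a, b] += 1                  # edge weight = observed state transition
        graphs.append(A)
    return states, graphs                 # one graph per sliding window

x = np.sin(np.linspace(0, 60, 2000)) + 0.1 * np.random.default_rng(0).normal(size=2000)
states, graphs = evolutionary_state_graph(x)
print(len(graphs), graphs[0])
```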
Multi-relation question answering is a challenging task, as it requires elaborate analysis of the question and reasoning over multiple fact triples in a knowledge base. In this paper, we present the Interpretable Reasoning Network, a novel model that employs an interpretable, hop-by-hop reasoning process for question answering. The model dynamically decides which part of an input question should be analyzed at each hop; predicts a relation that corresponds to the current parsed results; utilizes the predicted relation to update the question representation and the state of the reasoning process; and then drives the next-hop reasoning. Experiments show that our model yields state-of-the-art results on two datasets. More interestingly, the model can offer traceable and observable intermediate predictions for reasoning analysis and failure diagnosis.
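A compact sketch of the hop-by-hop loop described above (module shapes, hop count, and the hard relation choice are illustrative simplifications, not the authors' released model): at each hop the model attends to part of the question, predicts a relation, and uses that relation to update its reasoning state.

```python
import torch
import torch.nn as nn

class HopReasoner(nn.Module):
    def __init__(self, d=64, n_relations=50, n_hops=3):
        super().__init__()
        self.n_hops = n_hops
        self.attn = nn.Linear(2 * d, 1)
        self.rel_scorer = nn.Linear(d, n_relations)
        self.rel_emb = nn.Embedding(n_relations, d)
        self.state_update = nn.GRUCell(d, d)

    def forward(self, q_words):                 # (B, L, d) question word encodings
        B, L, d = q_words.shape
        state = q_words.mean(1)                 # initial reasoning state
        hops = []
        for _ in range(self.n_hops):
            # Decide which part of the question to analyze at this hop.
            a = self.attn(torch.cat([q_words, state.unsqueeze(1).expand(-1, L, -1)], -1))
            focus = (a.softmax(1) * q_words).sum(1)
            rel_logits = self.rel_scorer(focus)          # predict the current relation
            rel = self.rel_emb(rel_logits.argmax(-1))    # hard choice, for the sketch
            state = self.state_update(rel, state)        # drive next-hop reasoning
            hops.append(rel_logits)
        return hops                              # traceable per-hop predictions

out = HopReasoner()(torch.randn(2, 12, 64))
print([h.shape for h in out])  # one (2, 50) relation score tensor per hop
```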