The Causal Roadmap outlines a systematic approach to asking and answering cause-and-effect questions: define the quantity of interest, evaluate needed assumptions, conduct statistical estimation, and carefully interpret results. To protect research integrity, it is essential that the algorithm for statistical estimation and inference be pre-specified prior to conducting any effectiveness analyses. However, it is often unclear which algorithm will perform optimally in the real-data application. Instead, there is a temptation to simply implement one's favorite algorithm -- recycling prior code or relying on the default settings of a computing package. Here, we call for the use of simulations that realistically reflect the application, including key characteristics such as strong confounding and dependent or missing outcomes, to objectively compare candidate estimators and facilitate full specification of the Statistical Analysis Plan. Such simulations are informed by the Causal Roadmap and conducted after data collection but prior to effect estimation. We illustrate with two worked examples. First, in an observational longitudinal study, outcome-blind simulations are used to inform nuisance-parameter estimation and variance estimation for longitudinal targeted minimum loss-based estimation (TMLE). Second, in a cluster randomized trial with missing outcomes, treatment-blind simulations are used to examine Type-I error control in Two-Stage TMLE. In both examples, realistic simulations empower us to pre-specify an estimation approach that is expected to have strong finite-sample performance, and they also yield quality-controlled computing code for the actual analysis. Together, these steps help to improve the rigor and reproducibility of our research.
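To make the comparison concrete, here is a minimal Python sketch of an estimator-comparison simulation in the spirit described above. Everything in it is an assumption for illustration: the confounded data-generating process, the two candidate estimators (an unadjusted difference in means and a simple regression-adjusted estimator, standing in for the TMLE variants in the worked examples), and all parameter values.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n, true_effect=0.0):
    # Hypothetical data-generating process with strong confounding.
    w = rng.normal(size=n)                          # confounder
    a = rng.binomial(1, 1 / (1 + np.exp(-2 * w)))   # treatment depends strongly on w
    y = true_effect * a + 2 * w + rng.normal(size=n)
    return w, a, y

def unadjusted(w, a, y):
    # Candidate 1: raw difference in means (ignores confounding).
    return y[a == 1].mean() - y[a == 0].mean()

def adjusted(w, a, y):
    # Candidate 2: coefficient on treatment in a linear outcome model.
    X = np.column_stack([np.ones_like(w), a, w])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

results = {"unadjusted": [], "adjusted": []}
for _ in range(1000):                               # repeat over simulated datasets
    w, a, y = simulate(n=500, true_effect=0.0)
    results["unadjusted"].append(unadjusted(w, a, y))
    results["adjusted"].append(adjusted(w, a, y))

for name, est in results.items():
    est = np.asarray(est)
    print(f"{name}: bias = {est.mean():+.3f}, sd = {est.std():.3f}")
```

Because the true effect is set to zero, the printed bias directly shows which candidate a pre-specified analysis should avoid; the same template extends to checking confidence-interval coverage and Type-I error.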
In this study, we investigate the construction of quantum CSS duadic codes with dimensions greater than one. We introduce a method for extending smaller splittings of quantum duadic codes to create larger, potentially degenerate quantum duadic codes. Furthermore, we present a technique for computing or bounding the minimum distances of quantum codes constructed through this approach. Additionally, we introduce quantum CSS triadic codes, a family of quantum codes with a rate of at least $\frac{1}{3}$.
The Neville algorithm calculates interpolating functions recursively. Upon a judicious choice of the abscissas used for the interpolation (and extrapolation), this algorithm leads to a method for convergence acceleration. For example, one can use the Neville algorithm to successively eliminate inverse powers of the upper summation limit from the partial sums of a given, slowly convergent input series. Here, we show that, for a particular choice of the abscissas used for the extrapolation, one can replace the recursive Neville scheme by a simple one-step transformation, while also obtaining access to subleading terms of the transformed series after convergence acceleration. The matrix-based, unified formulas allow one to estimate the rate of convergence of the partial sums of the input series to their limit. In particular, Bethe logarithms for hydrogen are calculated to 100 decimal digits. Generalizations of the method to series whose remainder terms can be expanded in inverse factorial series, or to series with half-integer powers, are also discussed.
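As a concrete illustration of the recursive scheme (not of the one-step transformation derived in the paper), the following Python sketch applies the Neville recursion, evaluated at $x = 0$, to the partial sums of $\sum_{k} 1/k^2$ with abscissas $x_n = 1/n$; the example series and the number of terms are chosen purely for the demonstration.

```python
import numpy as np

N = 12
n = np.arange(1, N + 1)
s = np.cumsum(1.0 / n**2)   # partial sums of zeta(2); remainder expands in powers of 1/n
x = 1.0 / n                 # abscissas: extrapolating to x = 0 means n -> infinity

# Neville recursion evaluated at x = 0; each sweep eliminates one more
# inverse power of the upper summation limit from the remainder.
t = s.copy()
for k in range(1, N):
    for i in range(N - k):
        t[i] = (x[i] * t[i + 1] - x[i + k] * t[i]) / (x[i] - x[i + k])

print(t[0])          # accelerated estimate of the limit
print(np.pi**2 / 6)  # exact value, for comparison
```

With only twelve terms, the extrapolated value agrees with $\pi^2/6$ to many more digits than the raw partial sum, which is still off in the second decimal place.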
In-situ sensing, in conjunction with learning models, presents a unique opportunity to address persistent defect issues in Additive Manufacturing (AM) processes. However, this integration introduces significant data-privacy concerns, such as data leakage, sensor-data compromise, and model inversion attacks that can reveal critical details about part design, material composition, and machine parameters. Differential Privacy (DP) models, which inject noise into data under mathematical guarantees, offer a nuanced balance between data utility and privacy by obscuring traces of sensing data. However, the noise introduced into learning models, which often function as black boxes, makes it hard to predict how a given noise level will impact model accuracy. This study introduces the Differential Privacy-HyperDimensional computing (DP-HD) framework, which leverages the explainability of the vector-symbolic paradigm to predict the impact of noise on the accuracy of in-situ monitoring, safeguarding sensitive data while maintaining operational efficiency. Experimental results on real-world high-speed melt-pool data from AM, used to detect overhang anomalies, demonstrate that DP-HD achieves superior operational efficiency, prediction accuracy, and robust privacy protection, outperforming state-of-the-art Machine Learning (ML) models. For example, at the same level of privacy protection (a privacy budget of 1), our model achieved an accuracy of 94.43%, surpassing traditional models such as ResNet50 (52.30%), GoogLeNet (23.85%), AlexNet (55.78%), DenseNet201 (69.13%), and EfficientNet B2 (40.81%). Notably, DP-HD maintains high performance under the substantial noise additions needed to enhance privacy, unlike current models, which suffer significant accuracy declines under tight privacy constraints.
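For readers unfamiliar with how a privacy budget translates into noise, the following Python sketch shows the standard Laplace mechanism, a basic DP building block. It is not the DP-HD framework itself; the query value, its sensitivity, and the budgets are assumptions for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def laplace_mechanism(value, sensitivity, epsilon):
    # Epsilon-DP release of a numeric query with given L1 sensitivity:
    # a smaller privacy budget epsilon forces a larger noise scale.
    return value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

true_stat = 0.75  # e.g., a statistic computed from melt-pool sensing data (assumed)
for eps in (0.1, 1.0, 10.0):
    released = laplace_mechanism(true_stat, sensitivity=1.0, epsilon=eps)
    print(f"epsilon = {eps:>4}: released value = {released:.3f}")
```

The privacy budget of 1 quoted above corresponds to $\epsilon = 1$ here; for a black-box learner, the accuracy cost of that noise is hard to anticipate, which is precisely the gap the explainable DP-HD framework targets.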
In recent years, deep reinforcement learning (DRL) approaches have generated highly successful controllers for a myriad of complex domains. However, the opaque nature of these models limits their applicability in aerospace systems and other safety-critical domains, in which a single mistake can have dire consequences. In this paper, we present novel advancements in both the training and verification of DRL controllers, which can help ensure their safe behavior. We showcase a design-for-verification approach utilizing k-induction and demonstrate its use in verifying liveness properties. In addition, we give a brief overview of neural Lyapunov Barrier certificates and summarize their capabilities on a case study. Finally, we describe several other novel reachability-based approaches which, although they did not provide the guarantees of interest here, could be effective for verifying other DRL systems and may be of further interest to the community.
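For orientation, k-induction establishes that a property $P$ is invariant for a transition system with initial condition $I$ and transition relation $T$ by discharging two checks; this is the standard formulation, stated here for context rather than as the paper's exact encoding:

\[
\underbrace{\; I(s_0) \wedge \bigwedge_{i=0}^{k-2} T(s_i, s_{i+1}) \;\Rightarrow\; \bigwedge_{i=0}^{k-1} P(s_i) \;}_{\text{base case}}
\qquad\text{and}\qquad
\underbrace{\; \bigwedge_{i=0}^{k-1} \big( P(s_i) \wedge T(s_i, s_{i+1}) \big) \;\Rightarrow\; P(s_k) \;}_{\text{inductive step}} .
\]

Increasing $k$ strengthens the induction hypothesis, which is what makes properties provable that plain ($k = 1$) induction cannot handle.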
This study explores how sentence types affect the Lombard effect and intelligibility enhancement, focusing on comparisons between natural and grid sentences. Using the Lombard Chinese-TIMIT (LCT) corpus and the Enhanced MAndarin Lombard Grid (EMALG) corpus, we analyze changes in phonetic and acoustic features across different noise levels. Our results show that grid sentences produce more pronounced Lombard effects than natural sentences. We then develop and test a normal-to-Lombard conversion model, trained separately on the LCT and EMALG corpora. Subjective and objective evaluations show that natural sentences are better at maintaining speech quality during intelligibility enhancement, whereas grid sentences provide greater intelligibility due to the more pronounced Lombard effect. This study provides a valuable perspective on enhancing speech communication in noisy environments.
Classification of different object surface material types can play a significant role in the decision-making algorithms of mobile robots and autonomous vehicles. RGB-based scene-level semantic segmentation has been well-addressed in the literature. However, improving material recognition using the depth modality and integrating it with SLAM algorithms for 3D semantic mapping could unlock new benefits in the robotics perception pipeline. To this end, we propose a complementarity-aware deep learning approach for RGB-D-based material classification built on top of an object-oriented pipeline. The approach further integrates the ORB-SLAM2 method for 3D scene mapping with multiscale clustering of the detected material semantics in the point-cloud map generated by the visual SLAM algorithm. Extensive experimental results with existing public datasets and newly contributed real-world robot datasets demonstrate a significant improvement in material classification and 3D clustering accuracy compared to state-of-the-art approaches for 3D semantic scene mapping.
The goal of this paper is to investigate the complexity of gradient algorithms when learning sparse functions (juntas). We introduce a type of Statistical Query ($\mathsf{SQ}$), which we call a Differentiable Learning Query ($\mathsf{DLQ}$), to model gradient queries on a specified loss with respect to an arbitrary model. We provide a tight characterization of the query complexity of $\mathsf{DLQ}$ for learning the support of a sparse function over generic product distributions. This complexity crucially depends on the loss function. For the squared loss, $\mathsf{DLQ}$ matches the complexity of Correlation Statistical Queries $(\mathsf{CSQ})$ -- potentially much worse than $\mathsf{SQ}$. But for other simple loss functions, including the $\ell_1$ loss, $\mathsf{DLQ}$ always achieves the same complexity as $\mathsf{SQ}$. We also provide evidence that $\mathsf{DLQ}$ can indeed capture learning with (stochastic) gradient descent, by showing that it correctly describes the complexity of learning with a two-layer neural network in the mean-field regime with linear scaling.
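To fix intuition, the three query classes can be contrasted by the form of expectation an oracle answers; the notation below is ours, inferred from the abstract, with a generic model $f_\theta$ and loss $\ell$:

\[
\mathsf{SQ}:\; \mathbb{E}\big[\phi(x, y)\big], \qquad
\mathsf{CSQ}:\; \mathbb{E}\big[y\, \phi(x)\big], \qquad
\mathsf{DLQ}:\; \mathbb{E}\big[\nabla_\theta\, \ell\big(f_\theta(x), y\big)\big].
\]

The squared-loss case is then plausible at a glance: $\nabla_\theta \tfrac{1}{2}(f_\theta(x) - y)^2 = (f_\theta(x) - y)\,\nabla_\theta f_\theta(x)$ depends on $y$ only linearly, so such gradients can only probe correlations, exactly the $\mathsf{CSQ}$ form.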
Hardware trojan detection methods based on machine learning (ML) techniques mainly identify suspected circuits but cannot explain how the decision was reached. An explainable methodology and architecture are introduced, based on existing hardware trojan detection features. Results are provided for explaining digital hardware trojans within a netlist using Trust-Hub trojan benchmarks.
Quantum low-density parity-check (qLDPC) codes offer a promising route to scalable fault-tolerant quantum computation with constant overhead. Recent advancements have shown that qLDPC codes can outperform the quantum memory capability of surface codes even with near-term hardware. The question of how to implement logical gates fault-tolerantly for these codes is still open. We present new examples of high-rate bivariate bicycle (BB) codes with enhanced symmetry properties. These codes feature explicit nice bases of logical operators (similar to toric codes) and support fold-transversal Clifford gates without overhead. As examples, we construct $[[98,6,12]]$ and $[[162, 8, 12]]$ BB codes which admit interesting fault-tolerant Clifford gates. Our work also lays the mathematical foundations for explicit bases of logical operators and fold-transversal gates in quantum two-block and group algebra codes, which might be of independent interest.
This contribution examines the capabilities of the Python ecosystem for solving nonlinear energy minimization problems, with a particular focus on transitioning from traditional MATLAB methods to Python's advanced computational tools, such as automatic differentiation. We demonstrate Python's streamlined approach to minimizing nonlinear energies by analyzing three benchmark problems: the p-Laplacian, the Ginzburg-Landau model, and Neo-Hookean hyperelasticity. The approach requires only the energy functional itself, making it a simple and efficient way to solve this class of problems. For large-scale problems, the Python implementation runs about ten times faster than its MATLAB counterpart. Our findings highlight Python's efficiency and ease of use in scientific computing, establishing it as a preferable choice for implementing sophisticated mathematical models and accelerating the development of numerical simulations.
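As a minimal sketch of this workflow (under an assumed discretization and assumed parameters, not the paper's benchmark code), the following minimizes a one-dimensional p-Laplacian energy by supplying only the energy functional: JAX differentiates it automatically and SciPy performs the minimization.

```python
import numpy as np
import jax
import jax.numpy as jnp
from scipy.optimize import minimize

jax.config.update("jax_enable_x64", True)  # double precision for the optimizer

p, N = 3.0, 100
h = 1.0 / N
f = 1.0  # constant right-hand side, assumed for the demo

def energy(u_inner):
    # Discrete 1D p-Laplacian energy J(u) = sum h*(|u'|^p / p - f*u), u(0)=u(1)=0.
    u = jnp.concatenate([jnp.zeros(1), u_inner, jnp.zeros(1)])  # Dirichlet BCs
    du = jnp.diff(u) / h
    return h * jnp.sum(jnp.abs(du) ** p / p) - h * f * jnp.sum(u_inner)

grad = jax.jit(jax.grad(energy))  # exact gradient, no hand-coded derivatives

res = minimize(
    lambda v: float(energy(jnp.asarray(v))),
    np.zeros(N - 1),
    jac=lambda v: np.asarray(grad(jnp.asarray(v))),
    method="L-BFGS-B",
)
print(res.fun, res.success)
```

Swapping in the Ginzburg-Landau or Neo-Hookean model only changes the body of `energy`; the differentiation and minimization machinery stays untouched.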