Despite the growing availability of sensing and data in general, we remain unable to fully characterise many in-service engineering systems and structures from a purely data-driven approach. The vast data and resources available to capture human activity are unmatched in our engineered world, and, even in cases where the data could be referred to as ``big,'' they will rarely hold information across operational windows or life spans. This paper pursues the combination of machine learning technology and physics-based reasoning to enhance our ability to make predictive models with limited data. By explicitly linking the physics-based view of stochastic processes with a data-based regression approach, a spectrum of possible Gaussian process models is introduced, enabling the incorporation of different levels of expert knowledge of a system. Examples illustrate how these approaches can significantly reduce reliance on data collection whilst also increasing the interpretability of the model, another important consideration in this context.
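A minimal, illustrative sketch of one point on such a spectrum (not the authors' implementation): Gaussian process regression whose prior mean comes from a simple physics model, with the data correcting only the residual. The oscillator-style "physics" model and all numbers are hypothetical.

```python
import numpy as np

def rbf_kernel(x1, x2, length=1.0, var=1.0):
    d = x1[:, None] - x2[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

def physics_mean(x, k=2.0):
    # hypothetical expert model: static deflection of a linear spring, u = F / k
    return x / k

# synthetic observations of a (mildly nonlinear) true system
rng = np.random.default_rng(0)
x_train = np.linspace(0.0, 3.0, 8)
y_train = x_train / 2.0 + 0.1 * x_train**2 + 0.05 * rng.standard_normal(8)
x_test = np.linspace(0.0, 4.0, 50)
noise = 0.05**2

# GP regression on the residual between the data and the physics-based mean
K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
K_s = rbf_kernel(x_test, x_train)
alpha = np.linalg.solve(K, y_train - physics_mean(x_train))
mean_pred = physics_mean(x_test) + K_s @ alpha
cov_pred = rbf_kernel(x_test, x_test) - K_s @ np.linalg.solve(K, K_s.T)
std_pred = np.sqrt(np.clip(np.diag(cov_pred), 0.0, None))
```

Because the physics-based mean carries the extrapolation, the GP needs far fewer observations than a purely data-driven fit, and the learned residual remains directly interpretable.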
Stress prediction in porous materials and structures is challenging due to the high computational cost associated with direct numerical simulations. Convolutional Neural Network (CNN) based architectures have recently been proposed as surrogates to approximate and extrapolate the solution of such multiscale simulations. These methodologies are usually limited to 2D problems due to the high computational cost of 3D voxel-based CNNs. We propose a novel geometric learning approach based on a Graph Neural Network (GNN) that efficiently deals with three-dimensional problems by performing convolutions over 2D surfaces only. Following our previous developments using a pixel-based CNN, we train the GNN to automatically add local fine-scale stress corrections to an inexpensively computed coarse stress prediction in the porous structure of interest. Our method is Bayesian and generates densities of stress fields, from which credible intervals may be extracted. As a second scientific contribution, we propose to improve the extrapolation ability of our network by deploying a strategy of online physics-based corrections. Specifically, we condition the posterior predictions of our probabilistic model to satisfy partial equilibrium at the microscale, at the inference stage. This is done using an Ensemble Kalman algorithm to ensure tractability of the Bayesian conditioning operation. We show that this innovative methodology allows us to alleviate the effect of undesirable biases observed in the outputs of the uncorrected GNN, and improves the accuracy of the predictions in general.
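A schematic sketch of the conditioning step only, assuming a generic linear "equilibrium residual" operator; the GNN, the constraint operator, and all dimensions below are placeholders, not the paper's actual discretisation.

```python
import numpy as np

def enkf_constrain(ensemble, constraint_op, obs_noise=1e-3, rng=None):
    """Condition an ensemble of stress-field samples (n_members x n_dof) on the
    pseudo-observation 'constraint_op @ field = 0' (a stand-in for a discrete
    equilibrium residual), via a stochastic Ensemble Kalman update."""
    rng = rng or np.random.default_rng(1)
    n = ensemble.shape[0]
    A = ensemble - ensemble.mean(axis=0)              # ensemble anomalies
    HA = A @ constraint_op.T                          # anomalies in constraint space
    S = HA.T @ HA / (n - 1) + obs_noise * np.eye(constraint_op.shape[0])
    K = (A.T @ HA / (n - 1)) @ np.linalg.inv(S)       # Kalman gain
    residual = ensemble @ constraint_op.T             # current violation of equilibrium
    perturbed_obs = np.sqrt(obs_noise) * rng.standard_normal(residual.shape)
    return ensemble + (perturbed_obs - residual) @ K.T

# toy example: 50 posterior samples of a 10-dof stress vector, pushed towards
# the null space of a hypothetical 4x10 equilibrium operator
rng = np.random.default_rng(0)
samples = rng.normal(size=(50, 10))
B = rng.normal(size=(4, 10))
constrained = enkf_constrain(samples, B)
print(np.abs(samples @ B.T).mean(), np.abs(constrained @ B.T).mean())
```

The update stays tractable because it only requires ensemble covariances and a small matrix inverse in constraint space, never an explicit posterior over the full stress field.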
Quantum computing has recently emerged as a transformative technology. Yet, its promised advantages rely on efficiently translating quantum operations into viable physical realizations. In this work, we use generative machine learning models, specifically denoising diffusion models (DMs), to facilitate this transformation. Leveraging text-conditioning, we steer the model to produce desired quantum operations within gate-based quantum circuits. Notably, DMs allow us to sidestep, during training, the exponential overhead inherent in the classical simulation of quantum dynamics -- a consistent bottleneck in preceding ML techniques. We demonstrate the model's capabilities across two tasks: entanglement generation and unitary compilation. The model excels at generating new circuits and supports typical DM extensions such as masking and editing to, for instance, align the circuit generation to the constraints of the targeted quantum device. Given their flexibility and generalization abilities, we envision DMs as pivotal in quantum circuit synthesis, enhancing not only practical applications but also insights into theoretical quantum computation.
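For orientation, a generic denoising-diffusion ancestral sampling loop of the kind such a model would use at inference time; the text-conditioned denoiser here is an untrained placeholder, and decoding the sample into a gate sequence is omitted, so this is a structural sketch rather than the paper's pipeline.

```python
import numpy as np

def ddpm_sample(denoiser, cond, shape, betas, rng):
    """Generic DDPM ancestral sampling. 'denoiser(x_t, t, cond)' stands in for a
    text-conditioned noise-prediction network."""
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)
    x = rng.standard_normal(shape)                    # start from pure noise
    for t in reversed(range(len(betas))):
        eps_hat = denoiser(x, t, cond)
        coef = betas[t] / np.sqrt(1.0 - alpha_bar[t])
        x = (x - coef * eps_hat) / np.sqrt(alphas[t]) # posterior mean of x_{t-1}
        if t > 0:
            x += np.sqrt(betas[t]) * rng.standard_normal(shape)
    return x

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 100)
dummy_denoiser = lambda x, t, cond: np.zeros_like(x)  # placeholder network
sample = ddpm_sample(dummy_denoiser, cond="entangle qubits 0 and 1",
                     shape=(8, 8), betas=betas, rng=rng)
```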
Optical computing systems can provide high-speed and low-energy data processing but face deficiencies in computationally demanding training and simulation-to-reality gap. We propose a model-free solution for lightweight in situ optimization of optical computing systems based on the score gradient estimation algorithm. This approach treats the system as a black box and back-propagates loss directly to the optical weights' probabilistic distributions, hence circumventing the need for computation-heavy and biased system simulation. We demonstrate a superior classification accuracy on the MNIST and FMNIST datasets through experiments on a single-layer diffractive optical computing system. Furthermore, we show its potential for image-free and high-speed cell analysis. The inherent simplicity of our proposed method, combined with its low demand for computational resources, expedites the transition of optical computing from laboratory demonstrations to real-world applications.
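A minimal sketch of the score-function (REINFORCE-style) gradient estimator on which such in situ optimization rests, assuming a Gaussian distribution over the weights; the "optical system" below is a toy black-box loss, not a real device interface.

```python
import numpy as np

def score_gradient_step(mu, log_sigma, blackbox_loss, lr=0.05, n_samples=32, rng=None):
    """One update of a Gaussian distribution over weights using the score-function
    gradient: the system is only queried as a black box, so no differentiable
    simulator is needed."""
    rng = rng or np.random.default_rng()
    sigma = np.exp(log_sigma)
    eps = rng.standard_normal((n_samples,) + mu.shape)
    samples = mu + sigma * eps                        # candidate weight settings
    losses = np.array([blackbox_loss(w) for w in samples])
    adv = (losses - losses.mean()).reshape(-1, *([1] * mu.ndim))  # baseline for variance reduction
    # Gaussian scores: d log p / d mu = eps / sigma, d log p / d log_sigma = eps**2 - 1
    grad_mu = (adv * eps / sigma).mean(axis=0)
    grad_log_sigma = (adv * (eps**2 - 1.0)).mean(axis=0)
    return mu - lr * grad_mu, log_sigma - lr * grad_log_sigma

# toy "optical system": loss is the distance of a hidden forward response to a target
rng = np.random.default_rng(0)
target = rng.normal(size=16)
loss = lambda w: float(np.sum((np.tanh(w) - target) ** 2))
mu, log_sigma = np.zeros(16), np.zeros(16)
for _ in range(200):
    mu, log_sigma = score_gradient_step(mu, log_sigma, loss, rng=rng)
print("final loss:", loss(mu))
```

Only forward queries of the physical system are required, which is what keeps the method lightweight and free of simulation bias.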
In this work, we propose a global model selection criterion to estimate the graph of conditional dependencies of a random vector based on a finite sample. By global criterion, we mean optimizing a function over the entire set of possible graphs, eliminating the need to estimate the individual neighborhoods and subsequently combine them to estimate the graph. We prove the almost sure convergence of the graph estimator. This convergence holds provided the data is a realization of a multivariate stochastic process that satisfies a mixing condition. To the best of our knowledge, these are the first results to show the consistency of a model selection criterion for Markov random fields on graphs under non-independent data.
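A toy illustration of the "global" idea under strong simplifying assumptions (Gaussian data, a pseudo-likelihood score, brute-force enumeration of graphs on four nodes); the scoring function is a stand-in, not the paper's criterion.

```python
import itertools
import numpy as np

def graph_score(X, edges, penalty):
    """Global criterion: Gaussian pseudo-log-likelihood of the whole graph
    (each node regressed on its neighbours) minus a complexity penalty."""
    n, p = X.shape
    ll = 0.0
    for v in range(p):
        nb = [u for u in range(p) if (min(u, v), max(u, v)) in edges]
        if nb:
            beta, *_ = np.linalg.lstsq(X[:, nb], X[:, v], rcond=None)
            resid = X[:, v] - X[:, nb] @ beta
        else:
            resid = X[:, v]
        ll += -0.5 * n * np.log(resid.var() + 1e-12)
    return ll - penalty * len(edges) * np.log(n)

# toy data with chain dependence 0 - 1 - 2 - 3
rng = np.random.default_rng(0)
n, p = 500, 4
X = np.zeros((n, p))
X[:, 0] = rng.normal(size=n)
for j in range(1, p):
    X[:, j] = 0.8 * X[:, j - 1] + rng.normal(size=n)

# optimise the criterion over the entire set of graphs, not neighbourhood by neighbourhood
all_pairs = list(itertools.combinations(range(p), 2))
best = max(
    (frozenset(c) for r in range(len(all_pairs) + 1)
     for c in itertools.combinations(all_pairs, r)),
    key=lambda edges: graph_score(X, edges, penalty=1.0),
)
print(sorted(best))   # expected: [(0, 1), (1, 2), (2, 3)]
```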
To improve the predictability of complex computational models in experimentally unknown domains, we propose a Bayesian statistical machine learning framework utilizing the Dirichlet distribution that combines the results of several imperfect models. This framework can be viewed as an extension of Bayesian stacking. To illustrate the method, we study the ability of Bayesian model averaging and mixing techniques to mine nuclear masses. We show that the global and local mixtures of models reach excellent performance on both prediction accuracy and uncertainty quantification and are preferable to classical Bayesian model averaging. Additionally, our statistical analysis indicates that improving model predictions through mixing, rather than mixing corrected models, leads to more robust extrapolations.
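A crude sketch of Dirichlet-weighted model mixing under simplifying assumptions (Gaussian predictive densities, importance sampling from the Dirichlet prior); the two "imperfect models" and all numbers are synthetic and only illustrate how posterior weight concentrates on the better model.

```python
import numpy as np
from scipy.stats import norm

def mix_posterior_weights(pred_means, pred_sds, y, alpha, n_draws=20000, rng=None):
    """Draw candidate weight vectors from a Dirichlet prior and reweight each by
    the likelihood of the data under the corresponding mixture of model predictions."""
    rng = rng or np.random.default_rng(0)
    K = pred_means.shape[0]
    w = rng.dirichlet(alpha * np.ones(K), size=n_draws)          # prior draws of weights
    comp = norm.pdf(y[None, None, :], loc=pred_means[None], scale=pred_sds[None])
    log_lik = np.log((w[:, :, None] * comp).sum(axis=1)).sum(axis=1)
    imp = np.exp(log_lik - log_lik.max())
    imp /= imp.sum()
    return (imp[:, None] * w).sum(axis=0)                        # posterior mean weights

# toy setting: two imperfect models predicting the same quantity with different biases
rng = np.random.default_rng(1)
truth = np.linspace(0, 1, 30)
pred_means = np.stack([truth + 0.05, truth - 0.30])   # model 1 is closer to the truth
pred_sds = np.full_like(pred_means, 0.1)
y = truth + 0.05 * rng.standard_normal(30)
print(mix_posterior_weights(pred_means, pred_sds, y, alpha=1.0))  # weight on model 1 dominates
```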
The Causal Roadmap outlines a systematic approach to our research endeavors: define the quantity of interest, evaluate the needed assumptions, conduct statistical estimation, and carefully interpret the results. At the estimation step, it is essential that the estimation algorithm be chosen thoughtfully for its theoretical properties and expected performance. Simulations can help researchers gain a better understanding of an estimator's statistical performance under conditions unique to the real-data application. This in turn can inform the rigorous pre-specification of a Statistical Analysis Plan (SAP), not only stating the estimand (e.g., G-computation formula), the estimator (e.g., targeted minimum loss-based estimation [TMLE]), and adjustment variables, but also the implementation of the estimator -- including nuisance parameter estimation and the approach for variance estimation. Doing so helps ensure valid inference (e.g., 95% confidence intervals with appropriate coverage). Failing to pre-specify estimation can lead to data dredging and inflated Type-I error rates.
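To make the role of such simulations concrete, here is a toy simulation study of the kind an SAP might pre-specify: a simple G-computation estimator (a linear outcome model, not TMLE) is checked for bias and 95% confidence interval coverage under a hypothetical confounded data-generating process.

```python
import numpy as np

def simulate_trial(n, rng):
    """Toy data-generating process with confounding: W affects both A and Y."""
    W = rng.normal(size=n)
    A = rng.binomial(1, 1 / (1 + np.exp(-W)))      # treatment depends on W
    Y = 1.0 * A + 2.0 * W + rng.normal(size=n)     # true ATE = 1.0
    return W, A, Y

def gcomp_ols(W, A, Y):
    """G-computation with a correctly specified linear outcome model: the ATE is the
    coefficient on A; a 95% CI follows from its OLS standard error."""
    X = np.column_stack([np.ones_like(W), A, W])
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ beta
    sigma2 = resid @ resid / (len(Y) - X.shape[1])
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1], (beta[1] - 1.96 * se, beta[1] + 1.96 * se)

# repeated-sampling check of bias and coverage, as might be reported in an SAP appendix
rng = np.random.default_rng(0)
estimates, covered = [], 0
for _ in range(1000):
    W, A, Y = simulate_trial(500, rng)
    ate, (lo, hi) = gcomp_ols(W, A, Y)
    estimates.append(ate)
    covered += lo <= 1.0 <= hi
print("mean estimate:", np.mean(estimates), "95% CI coverage:", covered / 1000)
```

Running such a study before touching the real data is what allows the estimator, its nuisance models, and its variance estimator to be fixed in advance.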
In recent years, research involving human participants has been critical to advances in artificial intelligence (AI) and machine learning (ML), particularly in the areas of conversational, human-compatible, and cooperative AI. For example, around 12% and 6% of publications at recent AAAI and NeurIPS conferences, respectively, indicate the collection of original human data. Yet AI and ML researchers lack guidelines for ethical, transparent research practices with human participants. Fewer than one out of every four of these AAAI and NeurIPS papers provide details of ethical review, the collection of informed consent, or participant compensation. This paper aims to bridge this gap by exploring normative similarities and differences between AI research and related fields that involve human participants. Though psychology, human-computer interaction, and other adjacent fields offer historic lessons and helpful insights, AI research raises several specific concerns, namely participatory design, crowdsourced dataset development, and an expansive role of corporations, that necessitate a contextual ethics framework. To address these concerns, this paper outlines a set of guidelines for ethical and transparent practice with human participants in AI and ML research. These guidelines can be found in Section 4 on pp. 4–7.
In this paper, we examine the effectiveness of several multi-armed bandit algorithms when used as a trust system to select agents to delegate tasks to. In contrast to existing work, we allow for recursive delegation to occur. That is, a task delegated to one agent can be delegated onwards by that agent, with further delegation possible until some agent finally executes the task. We show that modifications to the standard multi-armed bandit algorithms can provide improvements in performance in such recursive delegation settings.
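A toy sketch of the recursive setting using plain UCB1 (not the paper's specific modifications): each agent runs its own bandit over "execute myself" versus "delegate onwards", and a delegated task may be passed down a chain before it is executed. Agent names and skill levels are hypothetical.

```python
import math
import random

class Agent:
    """Each agent keeps a UCB1 bandit over its possible delegates (including itself).
    Choosing itself means executing the task; otherwise the task is passed on
    recursively until some agent executes it."""
    def __init__(self, name, skill, delegates=()):
        self.name, self.skill = name, skill
        self.arms = [self] + list(delegates)
        self.counts = [0] * len(self.arms)
        self.rewards = [0.0] * len(self.arms)

    def handle(self, depth=0, max_depth=5):
        if depth >= max_depth or len(self.arms) == 1:
            return float(random.random() < self.skill)        # execute the task
        if 0 in self.counts:                                  # play each arm once first
            i = self.counts.index(0)
        else:
            total = sum(self.counts)
            i = max(range(len(self.arms)),
                    key=lambda j: self.rewards[j] / self.counts[j]
                    + math.sqrt(2 * math.log(total) / self.counts[j]))
        arm = self.arms[i]
        outcome = (float(random.random() < self.skill) if arm is self
                   else arm.handle(depth + 1, max_depth))
        self.counts[i] += 1
        self.rewards[i] += outcome
        return outcome

random.seed(0)
expert = Agent("expert", skill=0.9)
middle = Agent("middle", skill=0.4, delegates=[expert])
root = Agent("root", skill=0.2, delegates=[middle])
print(sum(root.handle() for _ in range(2000)) / 2000)   # success rate rises as trust is learned
```

The subtlety the paper targets is visible even here: the reward the root observes for "delegate to middle" depends on how well middle has learned its own delegation choices, which is what motivates modifying the standard algorithms.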
Ising solvers offer a promising physics-based approach to tackle the challenging class of combinatorial optimization problems. However, typical solvers operate in a quadratic energy space, having only pair-wise coupling elements, which already dominate area and energy. We show that such quadratization can cause severe problems: increased dimensionality, a rugged search landscape, and misalignment with the original objective function. Here, we design and quantify a higher-order Hopfield optimization solver, implemented in 28 nm CMOS technology with memristive couplings for lower-area and lower-energy computation. We combine algorithmic and circuit analysis to show quantitative advantages over quadratic Ising machines (IMs), yielding 48x and 72x reductions in time-to-solution (TTS) and energy-to-solution (ETS), respectively, for Boolean satisfiability problems of 150 variables, with favorable scaling.
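A software-only toy of the algorithmic idea (not the CMOS/memristive hardware): annealed, asynchronous Hopfield-style dynamics run directly on a higher-order energy in which each SAT clause is a k-body term, so no quadratization to pairwise couplings is needed. The instance and schedule are illustrative.

```python
import numpy as np

def unsatisfied(clauses, x):
    """Higher-order objective: a clause contributes 1 only when all of its
    literals are false (a k-body term, kept higher-order on purpose)."""
    return sum(all(x[v] != s for v, s in clause) for clause in clauses)

def hopfield_solve(clauses, n_vars, steps=3000, t0=1.5, rng=None):
    """Annealed single-variable flips on the higher-order energy, accepting
    uphill moves with a temperature-dependent probability."""
    rng = rng or np.random.default_rng(0)
    x = rng.integers(0, 2, n_vars).astype(bool)
    e = unsatisfied(clauses, x)
    for step in range(steps):
        t = t0 * (1.0 - step / steps) + 1e-3
        i = rng.integers(n_vars)
        x[i] ^= True
        e_new = unsatisfied(clauses, x)
        if e_new <= e or rng.random() < np.exp(-(e_new - e) / t):
            e = e_new
        else:
            x[i] ^= True            # reject the flip
        if e == 0:
            return x, step
    return x, steps

# tiny 3-SAT instance: clauses are lists of (variable, sign) literals
clauses = [[(0, True), (1, False), (2, True)],
           [(0, False), (2, False), (3, True)],
           [(1, True), (2, True), (3, False)],
           [(0, True), (3, True), (1, True)]]
assignment, steps = hopfield_solve(clauses, n_vars=4)
print("satisfied" if unsatisfied(clauses, assignment) == 0 else "unsatisfied", "after", steps, "flips")
```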
Novel categories are commonly defined as those unobserved during training but present during testing. However, partially labelled training datasets can contain unlabelled training samples that belong to novel categories, meaning these can be present in both training and testing. This research is the first to generalise between what we call observed-novel and unobserved-novel categories within a new learning policy called open-set learning with augmented category by exploiting unlabelled data (Open-LACU). After surveying existing learning policies, we introduce Open-LACU as a unified policy of positive and unlabelled learning, semi-supervised learning, and open-set recognition. Subsequently, we develop the first Open-LACU model using an algorithmic training process drawn from these research fields. The proposed Open-LACU classifier achieves state-of-the-art and first-of-its-kind results.