Despite advances in high-performance computing and modern numerical algorithms, the cost of kinetic plasma simulations remains prohibitive for multi-query applications. In this work, we develop data-driven reduced-order models (ROMs) for collisionless electrostatic plasma dynamics governed by the kinetic Vlasov-Poisson equation. Our ROM approach projects the equation onto a linear subspace spanned by the leading proper orthogonal decomposition (POD) modes. We introduce an efficient tensorial method that updates the nonlinear term using a precomputed third-order tensor. To capture multiscale behavior with a minimal number of POD modes, we decompose the solution into multiple time windows using a physical-time indicator and build a temporally local ROM in each window. Applied to 1D-1V simulations, specifically the benchmark two-stream instability case, our time-windowed reduced-order model (TW-ROM) with the tensorial approach solves the equation approximately 280 times faster than Eulerian simulations while maintaining a maximum relative error of 4% on the training data and 13% on the testing data.
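A minimal NumPy sketch of the tensorial POD-Galerkin idea in the abstract above. The names `snapshots`, `L`, and `B` are illustrative assumptions (a toy linear operator and a componentwise quadratic term), not the paper's Vlasov-Poisson discretization.

```python
import numpy as np

# Offline stage: build the POD basis and precompute the third-order tensor.
rng = np.random.default_rng(0)
N, M, r = 200, 50, 8                      # full dimension, snapshots, ROM size
snapshots = rng.standard_normal((N, M))   # stand-in for full-order states
L = -np.eye(N)                            # stand-in linear operator

def B(u, v):
    # Stand-in quadratic nonlinearity (componentwise product as a toy case).
    return u * v

U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
Phi = U[:, :r]                            # leading POD modes, (N, r)

L_r = Phi.T @ L @ Phi                     # reduced linear operator, (r, r)
# Precomputed tensor T[i, j, k] = phi_i^T B(phi_j, phi_k), shape (r, r, r).
T = np.einsum('ni,njk->ijk', Phi, B(Phi[:, :, None], Phi[:, None, :]))

# Online stage: each right-hand side evaluation costs O(r^3), independent of N.
def rom_rhs(a):
    return L_r @ a + np.einsum('ijk,j,k->i', T, a, a)

print(rom_rhs(np.ones(r)).shape)          # (r,)
```

The point of precomputing $T$ is that the online cost no longer depends on the full dimension $N$, which is where the reported speedup over the Eulerian solver comes from.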
One of the most promising applications of quantum computers is to simulate quantum mechanical systems and to deliver an advantage over classical computation by leveraging their inherent quantum behaviour. In this work, we present a new approach to noise-tolerant Hamiltonian simulation for ground state energy estimation that also overcomes the stochastic limitations faced by most of its counterparts. The algorithm is based on an adaptive set of fuzzy bisection searches that estimate the ground state energy digit by digit, up to any target precision. It builds upon the Quantum Eigenvalue Transformation of Unitary Matrices (QETU) algorithm and delivers good approximations in simulations with depolarizing noise probability up to $10^{-3}$, particularly for the Transverse-Field Ising Model (TFIM). We ran simulations with different system Hamiltonians, system sizes, and time-evolution encoding methods in IBM Qiskit; we present the key results and compare the performance with other existing methods.
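The QETU circuit itself is beyond a short sketch, but the fuzzy bisection idea can be illustrated with a stand-in noisy comparator in Python. The value `E0`, the accuracy `p_correct`, and the majority-vote scheme below are assumptions for illustration, not the paper's algorithm.

```python
import random

random.seed(0)
E0 = 0.7183                        # hypothetical true ground state energy

def noisy_below(threshold, p_correct=0.9):
    # Stand-in for the QETU-based test "is E0 below threshold?", which
    # answers correctly with probability p_correct (depolarizing-style noise).
    truth = E0 < threshold
    return truth if random.random() < p_correct else not truth

def fuzzy_bisection(lo, hi, digits, votes=41):
    # Majority-voted bisection: each halving step queries the noisy
    # comparator 'votes' times, so the interval is steered correctly
    # with high probability despite individual errors.
    while hi - lo > 10 ** (-digits):
        mid = 0.5 * (lo + hi)
        if sum(noisy_below(mid) for _ in range(votes)) > votes // 2:
            hi = mid               # majority says E0 < mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

print(round(fuzzy_bisection(0.0, 1.0, digits=4), 4))   # close to 0.7183
```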
In unitary property testing, a quantum algorithm, also known as a tester, is given query access to a black-box unitary and has to decide whether it satisfies some property. We propose a new technique for proving lower bounds on the quantum query complexity of unitary property testing and related problems, which utilises the connection between unitary property testing and unitary channel discrimination. The main advantage of this technique is that all obtained lower bounds hold for any $\mathsf{C}$-tester with $\mathsf{C} \subseteq \mathsf{QMA}(\mathrm{poly}(n))/\mathsf{qpoly}$, showing that even having access to both (unentangled) quantum proofs and advice does not help for many unitary problems. We apply our technique to prove lower bounds for problems such as quantum phase estimation, the entanglement entropy problem, quantum Gibbs sampling, and more, removing all logarithmic factors in the lower bounds obtained by the sample-to-query lifting theorem of Wang and Zhang (2023). As a direct corollary, we show that there exists a quantum oracle relative to which $\mathsf{QMA}(\mathrm{poly}(n))/\mathsf{qpoly} \not\supset \mathsf{SBQP}$.
Self-adaptation is a crucial feature of autonomous systems that must cope with uncertainties in, e.g., their environment and their internal state. Self-adaptive systems are often modelled as two-layered systems, with a managed subsystem handling the domain concerns and a managing subsystem implementing the adaptation logic. We consider a case study of a self-adaptive robotic system, more concretely an autonomous underwater vehicle (AUV) used for pipeline inspection. In this paper, we model and analyse it with the feature-aware probabilistic model checker ProFeat. The functionalities of the AUV are modelled in a feature model, capturing the AUV's variability. This allows us to model the managed subsystem of the AUV as a family of systems, where each family member corresponds to a valid feature configuration of the AUV. The managing subsystem of the AUV is modelled as a control layer capable of dynamically switching between such valid feature configurations, depending on both environmental and internal conditions. We use this model to analyse probabilistic reward and safety properties for the AUV.
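The ProFeat model is not reproduced here, but the two-layer structure the abstract describes can be sketched in Python. The feature names, conditions, and switching rules below are invented for illustration only.

```python
from dataclasses import dataclass

VALID_CONFIGS = [                      # the "family" of feature configurations
    {"high_thrust", "full_sensing"},
    {"high_thrust", "degraded_sensing"},
    {"low_thrust", "full_sensing"},
]

@dataclass
class Status:
    battery: float      # internal condition
    turbidity: float    # environmental condition

def managing_subsystem(status: Status) -> set:
    # Adaptation logic: pick a valid feature configuration for the managed
    # subsystem based on internal and environmental conditions.
    if status.battery < 0.2:
        wanted = {"low_thrust", "full_sensing"}
    elif status.turbidity > 0.8:
        wanted = {"high_thrust", "degraded_sensing"}
    else:
        wanted = {"high_thrust", "full_sensing"}
    assert wanted in VALID_CONFIGS     # only switch to valid family members
    return wanted

print(managing_subsystem(Status(battery=0.1, turbidity=0.3)))
```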
We provide a quantitative assessment of welfare in the classical model of risk-sharing and exchange under uncertainty. We prove three kinds of results. First, that in an equilibrium allocation, the scope for improving individual welfare by a given margin (an $\varepsilon$-improvement) vanishes as the number of states increases. Second, that the scope for a change in aggregate resources that may be distributed to enhance individual welfare by a given margin also vanishes. Equivalently: in an inefficient allocation, for a given level of resource sub-optimality (as measured by the coefficient of resource under-utilization), the possibilities for enhancing welfare by perturbing aggregate resources decrease exponentially to zero with the number of states. Finally, we consider efficient risk-sharing in standard models of uncertainty aversion with multiple priors, and show that, in an inefficient allocation, certain sets of priors shrink with the size of the state space.
Physics-based computer models that rely on numerical solution of the governing equations generally cannot make rapid predictions, which in turn limits their application in the clinic. To address this issue, we developed a physics-informed neural network (PINN) model that encodes the physics of a closed-loop blood circulation system embedding a left ventricle (LV). The PINN model is trained to satisfy a system of ordinary differential equations (ODEs) associated with a lumped-parameter description of the circulatory system. The model predictions have a maximum error of less than 5% when compared to those obtained by solving the ODEs numerically. An inverse modeling approach using the PINN model is also developed to rapidly estimate model parameters (in $\sim$3 minutes) from single-beat LV pressure and volume waveforms. Using synthetic LV pressure and volume waveforms generated by the PINN model with different model parameter values, we show that the inverse modeling approach can recover the corresponding ground truth values, which suggests that the model parameters are identifiable. The PINN inverse modeling approach is then applied to estimate LV contractility, indexed by the end-systolic elastance $E_{es}$, using waveforms acquired from 11 swine models, including waveforms acquired before and after administration of dobutamine (an inotropic agent) in 3 animals. The estimated $E_{es}$ is about 58% to 284% higher for the data acquired with dobutamine than for those without, which implies that this approach can be used to assess LV contractility from single-beat measurements. The PINN inverse modeling approach can thus potentially be used in the clinic to simultaneously estimate LV contractility and other physiological parameters from single-beat measurements.
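A minimal PyTorch sketch of the PINN idea above, assuming a toy two-state linear ODE in place of the paper's closed-loop circulation model: the network maps time to the state and is trained so that its output satisfies the ODE residual and the initial condition.

```python
import torch

torch.manual_seed(0)
A = torch.tensor([[0.0, 1.0], [-1.0, -0.1]])   # stand-in 2-state system matrix
x0 = torch.tensor([1.0, 0.0])                  # initial state

net = torch.nn.Sequential(
    torch.nn.Linear(1, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 2),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    # Random collocation times in [0, 10]; the ODE residual is evaluated here.
    t = (10.0 * torch.rand(256, 1)).requires_grad_(True)
    x = net(t)
    # dx/dt via automatic differentiation, one state component at a time.
    dxdt = torch.stack(
        [torch.autograd.grad(x[:, i].sum(), t, create_graph=True)[0][:, 0]
         for i in range(2)], dim=1)
    ode_residual = dxdt - x @ A.T                    # enforce dx/dt = A x
    ic_residual = net(torch.zeros(1, 1))[0] - x0     # enforce x(0) = x0
    loss = ode_residual.pow(2).mean() + ic_residual.pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```

Once trained, the network is a differentiable surrogate for the forward model, which is what makes the fast gradient-based inverse parameter estimation described above possible.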
This study aims to provide a comprehensive assessment of single-objective and multi-objective optimisation algorithms for the design of an elbow-type draft tube, and to introduce a computationally efficient optimisation workflow. The proposed workflow leverages deep neural network surrogates trained on data obtained from numerical simulations; the surrogates allow novel designs to be evaluated faster and more flexibly. Success history-based adaptive differential evolution with linear reduction (L-SHADE) and the multi-objective evolutionary algorithm based on decomposition (MOEA/D) were identified as the best-performing algorithms and used to determine the influence of the individual objectives in the single-objective optimisation and their combined impact on the draft tube design in the multi-objective optimisation. The results of the single-objective algorithm are consistent with those of the multi-objective algorithm when the objectives are considered separately. The multi-objective approach, however, should typically be preferred, especially when the surrogates are computationally inexpensive. A multi-criteria decision analysis method was used to select the optimal multi-objective result, which improves the pressure recovery factor by 1.5% and the drag coefficient by 17%. The difference between the surrogate predictions and the numerical results is less than 0.5% for the pressure recovery factor and 3% for the drag coefficient. As the demand for renewable energy continues to increase, data-driven optimisation workflows such as the one discussed in this study will only grow in relevance, especially in the context of global sustainability efforts.
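A sketch of the surrogate-assisted loop described above, using scikit-learn's MLPRegressor as the neural surrogate and SciPy's differential_evolution as a stand-in for the success-history-based adaptive variant; the synthetic quadratic objective replaces the CFD data.

```python
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.neural_network import MLPRegressor

# Train a neural surrogate on sampled "simulation" data, then optimise on
# the cheap surrogate instead of running the expensive solver in the loop.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(500, 4))                        # design parameters
y = np.sum(X**2, axis=1) + 0.01 * rng.standard_normal(500)   # stand-in output

surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                         random_state=0).fit(X, y)

def objective(x):
    # Each surrogate call replaces an expensive CFD evaluation.
    return float(surrogate.predict(x.reshape(1, -1))[0])

result = differential_evolution(objective, bounds=[(-1, 1)] * 4, seed=0,
                                maxiter=200, tol=1e-8)
print(result.x, result.fun)   # optimum should lie near the origin
```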
Higher-order artificial neurons, whose outputs are computed by applying an activation function to a higher-order multinomial function of the inputs, have been considered in the past but did not gain acceptance due to the extra parameters and computational cost. However, higher-order neurons have significantly greater learning capabilities, since their decision boundaries can be complex surfaces instead of just hyperplanes. The boundary of a single quadratic neuron can be a general hyper-quadric surface, allowing it to learn many nonlinearly separable datasets. Since quadratic forms can be represented by symmetric matrices, only $\frac{n(n+1)}{2}$ additional parameters are needed instead of $n^2$. A quadratic logistic regression model is first presented. Solutions to the XOR problem with a single quadratic neuron are considered. The complete vectorized equations for both forward and backward propagation in feedforward networks composed of quadratic neurons are derived. A reduced-parameter quadratic neural network model with just $n$ additional parameters per neuron, which provides a compromise between learning ability and computational cost, is presented. Comparisons on benchmark classification datasets are used to demonstrate that a final layer of quadratic neurons enables networks to achieve higher accuracy with significantly fewer hidden-layer neurons. In particular, this paper shows that any dataset composed of $\mathcal{C}$ bounded clusters can be separated with only a single layer of $\mathcal{C}$ quadratic neurons.
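A PyTorch sketch of a quadratic-neuron layer and the XOR example mentioned above. The training hyperparameters are assumptions, and this is an illustration of the idea rather than the paper's own implementation.

```python
import torch

torch.manual_seed(0)

class QuadraticLayer(torch.nn.Module):
    # Each unit computes x^T Q x + w.x + b. Since x^T Q x depends only on the
    # symmetric part of Q, only n(n+1)/2 quadratic parameters per neuron are
    # effectively free; the paper's reduced variant cuts this to n per neuron.
    def __init__(self, n_in, n_out):
        super().__init__()
        self.linear = torch.nn.Linear(n_in, n_out)
        self.Q = torch.nn.Parameter(0.1 * torch.randn(n_out, n_in, n_in))

    def forward(self, x):
        quad = torch.einsum('bi,oij,bj->bo', x, self.Q, x)
        return self.linear(x) + quad

# XOR with a single quadratic neuron: the decision boundary is a quadric
# (the cross term x1*x2 does the work), so no hidden layer is needed.
X = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = torch.tensor([0., 1., 1., 0.])
model = QuadraticLayer(2, 1)
opt = torch.optim.Adam(model.parameters(), lr=0.1)
loss_fn = torch.nn.BCEWithLogitsLoss()
for _ in range(1000):
    loss = loss_fn(model(X)[:, 0], y)
    opt.zero_grad(); loss.backward(); opt.step()
print((model(X)[:, 0] > 0).int())   # expected: tensor([0, 1, 1, 0], ...)
```

Note that a purely diagonal quadratic term cannot separate XOR on $\{0,1\}$ inputs (there $x_i^2 = x_i$, so the model collapses to an affine one); the cross term supplied by the full symmetric form is what makes the single-neuron solution possible.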
Mesh degeneration is a bottleneck for fluid-structure interaction (FSI) simulations and for shape optimization via the method of mappings. In both cases, an appropriate mesh motion technique is required. The choice is typically based on heuristics, e.g., the solution operators of partial differential equations (PDEs) such as the Laplace or biharmonic equation. The latter in particular shows good numerical performance for large displacements but is expensive. Moreover, from a continuous perspective, the choice of mesh motion technique is to a certain extent arbitrary and has no influence on the physically relevant quantities. Therefore, we consider approaches inspired by machine learning. We present a hybrid PDE-NN approach, in which the neural network (NN) serves as a parameterization of a coefficient in a second-order nonlinear PDE. We ensure the existence of solutions of the nonlinear PDE through the choice of the neural network architecture. Moreover, we present an approach in which a neural network corrects the harmonic extension such that the boundary displacement is not changed. To avoid technical difficulties in coupling finite element and machine learning software, we work with a splitting of the monolithic FSI system into three smaller subsystems, which allows the mesh motion equation to be solved in a separate step. We assess the quality of the learned mesh motion technique by applying it to an FSI benchmark problem. In addition, we discuss the generalizability and computational cost of the learned extension operators.
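A sketch of the boundary-preserving correction described above, under the assumption that the learned correction is multiplied by a function vanishing on the boundary; the harmonic solve and the distance function below are placeholders, not the paper's finite element implementation.

```python
import torch

torch.manual_seed(0)

# Mesh displacement u = u_harmonic + d(x) * NN(x), where d vanishes on the
# boundary, so the prescribed boundary displacement is left untouched.
net = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(), torch.nn.Linear(32, 2))

def harmonic_extension(x):
    # Placeholder for the PDE solve (e.g. a finite element Laplace solve).
    return torch.zeros_like(x)

def boundary_distance(x):
    # Distance-like function on the unit square, zero on the boundary.
    return (x[:, 0] * (1 - x[:, 0]) * x[:, 1] * (1 - x[:, 1])).unsqueeze(1)

def corrected_extension(x):
    return harmonic_extension(x) + boundary_distance(x) * net(x)

# On boundary points the correction vanishes by construction.
x_bnd = torch.tensor([[0.0, 0.3], [1.0, 0.7], [0.5, 0.0]])
assert torch.allclose(corrected_extension(x_bnd), harmonic_extension(x_bnd))
```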
The existence of representative datasets is a prerequisite for many successful artificial intelligence and machine learning models. However, the subsequent application of these models often involves scenarios that are inadequately represented in the data used for training. The reasons for this are manifold and range from time and cost constraints to ethical considerations. As a consequence, the reliable use of these models, especially in safety-critical applications, is a major challenge. Leveraging additional, already existing sources of knowledge is key to overcoming the limitations of purely data-driven approaches and, eventually, to increasing the generalization capability of these models. Furthermore, predictions that conform with knowledge are crucial for making trustworthy and safe decisions, even in underrepresented scenarios. This work provides an overview of existing techniques and methods in the literature that combine data-based models with existing knowledge. The identified approaches are structured according to the categories of integration, extraction, and conformity. Special attention is given to applications in the field of autonomous driving.
As soon as abstract mathematical computations were adapted to computation on digital computers, the problem of efficient representation, manipulation, and communication of the numerical values in those computations arose. Strongly related to the problem of numerical representation is the problem of quantization: in what manner should a set of continuous real-valued numbers be distributed over a fixed discrete set of numbers to minimize the number of bits required and also to maximize the accuracy of the attendant computations? This perennial problem of quantization is particularly relevant whenever memory and/or computational resources are severely restricted, and it has come to the forefront in recent years due to the remarkable performance of Neural Network models in computer vision, natural language processing, and related areas. Moving from floating-point representations to low-precision fixed integer values represented in four bits or less holds the potential to reduce the memory footprint and latency by a factor of 16x; and, in fact, reductions of 4x to 8x are often realized in practice in these applications. Thus, it is not surprising that quantization has emerged recently as an important and very active sub-area of research in the efficient implementation of computations associated with Neural Networks. In this article, we survey approaches to the problem of quantizing the numerical values in deep Neural Network computations, covering the advantages/disadvantages of current methods. With this survey and its organization, we hope to have presented a useful snapshot of the current research in quantization for Neural Networks and to have given an intelligent organization to ease the evaluation of future research in this area.
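As a concrete anchor for the survey above, here is a minimal sketch of uniform affine quantization, the basic primitive behind low-precision integer representations; the 4-bit setting and min/max calibration are illustrative choices, not a recommendation from the survey.

```python
import numpy as np

def quantize(x, bits=4):
    # Map float values to b-bit integers via a scale and zero-point
    # calibrated from the observed min/max range.
    qmin, qmax = 0, 2 ** bits - 1
    scale = float(x.max() - x.min()) / (qmax - qmin)
    zero_point = int(round(qmin - float(x.min()) / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    # Recover approximate float values to measure the rounding error.
    return scale * (q.astype(np.float32) - zero_point)

rng = np.random.default_rng(0)
x = rng.standard_normal(1000).astype(np.float32)
q, s, z = quantize(x, bits=4)
err = np.abs(x - dequantize(q, s, z)).max()
print(f"max abs error at 4 bits: {err:.4f}")   # on the order of the step size
```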