The vehicle routing problem with time windows (VRPTW) is a classic optimization problem that arises in many areas, such as logistics and transportation. The goal of the VRPTW is to find minimum-cost routes for a fleet of vehicles visiting a set of destinations, each within its prescribed time window. In recent years, there has been growing interest in using variational quantum algorithms (VQAs) to find approximate solutions to problems that can be formulated as quadratic unconstrained binary optimization (QUBO) problems. In this work, we formulate the VRPTW as a QUBO and apply a variational quantum approach using the encoding scheme we proposed earlier in [1], which drastically reduces the number of qubits required. We evaluate our approach on a set of VRPTW instances ranging from 11 to 3964 routes, constructed with data provided by researchers from ExxonMobil. We compare our solutions with those of standard full-encoding approaches, for which the maximum problem size feasible in the NISQ era is on the order of 20-30 routes. We run our algorithms on simulators as well as on cloud quantum hardware provided by IBMQ, AWS (Rigetti), and IonQ, and benchmark the results against one another. We show that our approach finds approximate solutions to the VRPTW that are comparable to those found by quantum algorithms using the full encoding. Our results suggest that our encoding scheme is a promising way to drastically reduce the number of qubits required to find good approximate solutions to industry-scale optimization problems.
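As a toy illustration of how a routing problem can be cast as a QUBO, the sketch below builds a set-partitioning QUBO over candidate routes and solves it by brute force. The routes, costs, and penalty weight are invented for the example; they are not the paper's instances or its qubit-reducing encoding.

```python
import itertools
import numpy as np

# Toy set-partitioning QUBO for route selection (illustrative only;
# routes, costs, and the penalty weight A are made-up assumptions).
routes = {0: {1, 2}, 1: {3}, 2: {1}, 3: {2, 3}}   # route -> customers covered
cost = {0: 5.0, 1: 2.0, 2: 3.5, 3: 4.0}           # travel cost per route
customers = {1, 2, 3}
A = 10.0  # penalty enforcing that each customer is visited exactly once

n = len(routes)
Q = np.zeros((n, n))
for r in routes:
    Q[r, r] += cost[r]
# For binaries, (sum_{r covers i} x_r - 1)^2 = -sum_r x_r + 2 sum_{r<s} x_r x_s + const
for i in customers:
    covering = [r for r in routes if i in routes[r]]
    for r in covering:
        Q[r, r] -= A
    for r, s in itertools.combinations(covering, 2):
        Q[r, s] += 2 * A

def qubo_energy(x):
    x = np.asarray(x)
    return float(x @ Q @ x)

# Brute-force over all 2^n bitstrings (feasible only at toy sizes;
# a VQA would sample this same energy landscape instead).
best = min(itertools.product([0, 1], repeat=n), key=qubo_energy)
```

The minimizer selects routes 0 and 1, which cover every customer exactly once at the lowest total cost; infeasible selections are pushed up in energy by the penalty terms.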
Stochastic inversion problems typically arise when one seeks to quantify the uncertainty affecting the inputs of computer models. They consist of estimating input distributions from noisy, observable outputs, and such problems are increasingly examined in Bayesian contexts where the targeted inputs are affected by stochastic uncertainties. In this regard, a stochastic input can be qualified as meaningful if it explains most of the output uncertainty. While such inverse problems are characterized by identifiability conditions, "signal-to-noise" constraints that formalize this meaningfulness should be accounted for within the definition of the model, prior to inference. This article investigates the possibility of forcing a solution to be meaningful in the context of parametric uncertainty quantification, using the tools of global sensitivity analysis and information theory (variance, entropy, Fisher information). These forcings mainly take the form of constraints on the input covariance and can be made explicit for linear or linearizable models. Simulated experiments indicate that, when injected into the modeling process, these constraints can limit the influence of measurement or process noise on the estimation of the input distribution, and point toward future extensions to a fully non-linear framework, for example through the use of linear Gaussian mixtures.
Multiphysics processes in fractured porous media constitute a research field of importance for several subsurface applications and have received considerable attention over the last decade. The dynamics are characterised by strong couplings between processes as well as interaction between the processes and the structure of the fractured medium itself. The rich range of behavior calls for explorative mathematical modelling, such as experimentation with constitutive laws and novel coupling concepts between physical processes. Moreover, efficient simulation of the strong couplings between multiphysics processes and geological structures requires the development of tailored numerical methods. We present a modelling framework and its implementation in the open-source simulation toolbox PorePy, which is designed for rapid prototyping of multiphysics processes in fractured porous media. PorePy uses a mixed-dimensional representation of the fracture geometry and generally applies fully implicit couplings between processes. The code design follows the paradigms of modularity and differentiable programming, which together allow great flexibility in experimenting with governing equations while requiring minimal changes to the code base. Code integrity is supported by a multilevel testing framework that ensures reliability. We present our modelling framework within the context of thermo-poroelasticity in deformable fractured porous media, illustrating the close relation between the governing equations and the source code. We furthermore discuss the design of the testing framework and present simulations showcasing the extendibility of PorePy, as well as the type of results that can be produced by mixed-dimensional simulation tools.
Time-dependent gravity data from satellite missions like GRACE-FO reveal mass redistribution in the Earth system at various time scales: long-term climate-change signals, inter-annual phenomena like El Niño, seasonal mass transports, and transients, e.g. due to earthquakes. This contemporary issue leads to a classical inverse problem: the gravitational potential has to be modelled on the Earth's surface from measurements in space, which is known as the downward continuation problem. It is therefore important to further develop the mathematical methods for such inverse problems. To this end, the (Learning) Inverse Problem Matching Pursuits ((L)IPMPs) have been developed within the last decade. Their unique feature is the combination of local and global trial functions in the approximate solution of an inverse problem such as the downward continuation of the gravitational potential. In this way, they harmonize the ideas of a traditional spherical harmonic ansatz and the radial basis function approach. Previous publications on these methods provided proofs of concept. Here, we consider the methods in high-dimensional experimental settings with more than 500 000 grid points, which yields a resolution of 20 km at best, on a realistic satellite geometry. We also explain the changes to the methods that had to be made to handle such a large amount of data. The corresponding code (updated for big-data use) is available at https://doi.org/10.5281/zenodo.8223771 under the licence CC BY-NC-SA 3.0 Germany.
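The ill-posedness of downward continuation can be made concrete with the classical degree-wise attenuation of potential fields: a signal observed at satellite altitude h is damped per spherical-harmonic degree n by (R/(R+h))^(n+1), so continuing it back down to the surface amplifies data errors exponentially in n. The radii below are rough assumed values, not the paper's setup.

```python
import numpy as np

# Degree-wise amplification incurred when downward-continuing a potential
# field from satellite altitude to the Earth's surface. R and h are
# rough illustrative values (km), not mission-specific constants.
R, h = 6371.0, 450.0                      # Earth radius and orbit height
n = np.arange(0, 201)                     # spherical-harmonic degrees
amplification = ((R + h) / R) ** (n + 1)  # growth factor of data errors
```

The exponential growth of `amplification` with degree is precisely what makes downward continuation ill-posed and motivates regularized approximation methods such as the (L)IPMPs.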
We present the Continuous Empirical Cubature Method (CECM), a novel algorithm for empirically devising efficient integration rules. The CECM aims to improve on existing cubature methods by producing rules that are close to optimal, featuring far fewer points than the number of functions to integrate. The CECM consists of a two-stage strategy. First, a point-selection strategy is applied to obtain an initial approximation to the cubature rule, featuring as many points as functions to integrate. The second stage is a sparsification strategy in which, alongside the indices and corresponding weights, the spatial coordinates of the points are also treated as design variables. The positions of the initially selected points are changed to drive their associated weights to zero, and in this way the minimum number of points is achieved. Although originally conceived within the framework of hyper-reduced order models (HROMs), we present the method's formulation in terms of generic vector-valued functions, thereby accentuating its versatility across various problem domains. To demonstrate the extensive applicability of the method, we conduct numerical validations using univariate and multivariate Lagrange polynomials. In these cases, we show the method's capacity to retrieve the optimal Gaussian rule. We also assess the method on an arbitrary exponential-sinusoidal function in a 3D domain, and finally consider an application of the method to the hyper-reduction of a multiscale finite element model, showcasing notable computational performance gains. A secondary contribution of the current paper is the Sequential Randomized SVD (SRSVD) approach for computing the Singular Value Decomposition (SVD) in a column-partitioned format. The SRSVD is particularly advantageous when matrix sizes approach memory limitations.
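The idea behind the first stage, selecting a small subset of points with non-negative weights that integrate a given family of functions exactly, can be sketched with a non-negative least-squares fit of grid weights to exact moments. This illustrates the general empirical-cubature principle only; it is not the CECM algorithm, and the monomial family and candidate grid are assumptions for the example.

```python
import numpy as np
from scipy.optimize import nnls

# Illustrative point selection in the spirit of empirical cubature:
# choose non-negative weights on a candidate grid so that a family of
# functions (here monomials on [0, 1]) is integrated exactly. NNLS
# naturally returns a sparse weight vector, i.e. a small cubature rule.
candidates = np.linspace(0.0, 1.0, 201)         # candidate integration points
degrees = np.arange(4)                          # integrate 1, x, x^2, x^3
A = candidates[None, :] ** degrees[:, None]     # (functions x points) matrix
b = 1.0 / (degrees + 1.0)                       # exact moments on [0, 1]

w, residual = nnls(A, b)                        # sparse non-negative weights
selected = np.flatnonzero(w > 1e-12)            # surviving cubature points
rule = list(zip(candidates[selected], w[selected]))
```

The CECM goes further than this sketch: its second stage also moves the point coordinates continuously to drive additional weights to zero.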
In this study, we consider a class of non-autonomous time-fractional partial advection-diffusion-reaction (TF-ADR) equations with a Caputo-type fractional derivative. To obtain the numerical solution of the model problem, we apply the non-symmetric interior penalty Galerkin (NIPG) method in space on a uniform mesh and the L1-scheme in time on a graded mesh. It is demonstrated that the computed solution is discretely stable. Superconvergent error estimates for the proposed method are obtained in the discrete energy norm. We also apply the proposed method to semilinear problems after linearizing them by Newton's method. The theoretical results are verified through numerical experiments.
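The temporal discretization can be illustrated in isolation: the sketch below applies the L1 scheme on a graded mesh to the Caputo derivative of u(t) = t², for which the exact derivative is known in closed form. The fractional order, mesh size, and grading exponent are arbitrary choices for the example, not the paper's parameters.

```python
import math
import numpy as np

# L1 approximation of the Caputo derivative on a graded mesh, checked
# against the exact Caputo derivative of u(t) = t^2. This sketches only
# the time discretization, not the paper's NIPG-in-space scheme.
alpha, T, N, r = 0.5, 1.0, 200, 2.0     # order, horizon, steps, grading exponent
t = T * (np.arange(N + 1) / N) ** r     # graded mesh, clustered near t = 0
u = t ** 2
# Exact Caputo derivative of t^2 at time T: 2 T^(2-alpha) / Gamma(3-alpha)
exact = 2.0 * T ** (2 - alpha) / math.gamma(3 - alpha)

# L1 scheme: replace u by its piecewise-linear interpolant inside the
# fractional integral, then integrate the kernel (t_n - s)^(-alpha) exactly.
acc = 0.0
tn = t[-1]
for k in range(N):
    slope = (u[k + 1] - u[k]) / (t[k + 1] - t[k])
    acc += slope * ((tn - t[k]) ** (1 - alpha) - (tn - t[k + 1]) ** (1 - alpha))
approx = acc / math.gamma(2 - alpha)
```

Grading the mesh toward t = 0 compensates for the weak singularity that solutions of time-fractional problems typically exhibit at the initial time.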
In this work, we solve inverse problems of nonlinear Schr\"{o}dinger equations that can be formulated as a learning process of a special convolutional neural network. Instead of attempting to approximate functions in the inverse problems, we embed a library as a low-dimensional manifold in the network so that the unknowns reduce to a few scalars. The nonlinear Schr\"{o}dinger equation (NLSE) is $i\frac{\partial \psi}{\partial t}-\beta\frac{\partial^2 \psi}{\partial x^2}+\gamma|\psi|^2\psi+V(x)\psi=0,$ where the wave function $\psi(x,t)$ is the solution of the forward problem and the potential $V(x)$ is the quantity of interest of the inverse problem. The main contributions of this work come from two aspects. First, we construct a special neural network directly from the Schr\"{o}dinger equation, motivated by a splitting method. The physics behind the construction enhances the explainability of the neural network. Second, we use a library-search algorithm to project the solution space of the inverse problem onto a lower-dimensional space. Seeking the solution in a reduced approximation space can be traced back to compressed sensing theory. The motivation here is to alleviate the training burden of estimating functions: with a well-chosen library, one can greatly simplify the training process. A brief analysis is given, focusing on the well-posedness of some of the inverse problems mentioned and on the convergence of the neural network approximation. To show the effectiveness of the proposed method, we explore some representative problems, including simple equations and a coupled system. The results corroborate the theoretical analysis. In the future, we plan to further explore manifold learning to enhance the approximation power of the library-search algorithm.
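The splitting idea underlying the network construction can be sketched with a classical split-step (Strang splitting) integrator for the NLSE above: the linear dispersive part is advanced exactly in Fourier space, and the nonlinear-plus-potential part is a pointwise phase rotation. The harmonic potential, Gaussian initial state, and all parameters are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Split-step (Strang splitting) integrator for
#   i u_t - beta u_xx + gamma |u|^2 u + V u = 0,
# i.e. i u_t = beta u_xx - (gamma |u|^2 + V) u. Each substep is a
# unit-modulus multiplication, so the L2 mass is conserved exactly.
n, L = 256, 20.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
beta, gamma, dt, steps = 1.0, 1.0, 1e-3, 200
V = 0.5 * x ** 2                       # assumed harmonic potential
psi = np.exp(-x ** 2).astype(complex)  # assumed Gaussian initial state

# Linear substep u_t = -i beta u_xx becomes multiplication by
# exp(i beta k^2 dt) in Fourier space; use half steps for Strang order 2.
half_linear = np.exp(1j * beta * k ** 2 * dt / 2)
for _ in range(steps):
    psi = np.fft.ifft(half_linear * np.fft.fft(psi))         # half linear step
    psi *= np.exp(1j * (gamma * np.abs(psi) ** 2 + V) * dt)  # nonlinear + potential
    psi = np.fft.ifft(half_linear * np.fft.fft(psi))         # half linear step
```

Unrolling such splitting steps over time is what yields the layered, convolution-like structure that the network construction exploits.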
Particle methods based on evolving the spatial derivatives of the solution were originally introduced to simulate reaction-diffusion processes, inspired by vortex methods for the Navier--Stokes equations. Such methods, referred to as gradient random walk methods, were extensively studied in the '90s and have several interesting features: they are grid free, they automatically adapt to the solution by concentrating elements where the gradient is large, and they significantly reduce the variance of the standard random walk approach. In this work, we revive these ideas by showing how to generalize the approach to a larger class of partial differential equations, including hyperbolic systems of conservation laws. To achieve this goal, we first extend the classical Monte Carlo method to relaxation approximations of systems of conservation laws, and subsequently consider a novel particle dynamics based on the spatial derivatives of the solution. The methodology, combined with an asymptotic-preserving splitting discretization, yields a new class of gradient-based Monte Carlo methods for hyperbolic systems of conservation laws. Several results in one spatial dimension for scalar equations and systems of conservation laws show that the new methods are very promising and yield remarkable improvements over standard Monte Carlo approaches, both in terms of variance reduction and in describing the shock structure.
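For context, the baseline standard random walk approach for a pure diffusion u_t = D u_xx fits in a few lines: particles are sampled from the initial profile, take independent Gaussian increments, and moments of the solution are estimated from the ensemble. This is the classical method whose variance gradient-based approaches aim to reduce; all parameters are illustrative.

```python
import numpy as np

# Standard random-walk Monte Carlo for the heat equation u_t = D u_xx:
# the solution is represented by particles performing Brownian increments.
# This is the baseline that gradient random walk methods improve upon.
rng = np.random.default_rng(0)
D, dt, steps, n_particles = 0.5, 1e-2, 100, 200_000
x = rng.normal(0.0, 1.0, n_particles)  # samples from a Gaussian initial profile
for _ in range(steps):
    x += rng.normal(0.0, np.sqrt(2 * D * dt), n_particles)
# After time t = steps * dt = 1, the density is Gaussian with
# variance 1 + 2 * D * t = 2; ensemble moments estimate it with O(1/sqrt(N)) noise.
```

Gradient random walk methods instead attach particles to the spatial derivative of the solution, which concentrates the computational elements where the solution varies and sharply reduces this sampling noise.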
The power of Clifford, or geometric, algebra lies in its ability to represent geometric operations in a concise and elegant manner. Clifford algebras provide natural generalizations of complex numbers, dual numbers, and quaternions into non-commutative multivectors. This paper demonstrates an algorithm for computing the inverses of such numbers in a non-degenerate Clifford algebra of arbitrary dimension. The algorithm is a variation of the Faddeev-LeVerrier-Souriau algorithm and is implemented in the open-source computer algebra system Maxima. Symbolic and numerical examples in different Clifford algebras are presented.
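In its classical matrix form, the Faddeev-LeVerrier recursion that the paper adapts to multivectors can be sketched as follows. This is a plain-matrix Python illustration, not the Maxima implementation for Clifford numbers, where the trace is replaced by a suitable scalar-part extraction.

```python
import numpy as np

# Faddeev-LeVerrier recursion: builds the characteristic polynomial
# coefficients c_k and auxiliary matrices M_k, from which the inverse
# follows by Cayley-Hamilton. Classical matrix version for illustration.
def flv_inverse(A):
    n = A.shape[0]
    M = np.eye(n)                      # M_1 = I
    for k in range(1, n):
        AM = A @ M
        c_k = -np.trace(AM) / k        # characteristic polynomial coefficient
        M = AM + c_k * np.eye(n)       # M_{k+1} = A M_k + c_k I
    c_n = -np.trace(A @ M) / n         # c_n = (-1)^n det(A)
    return -M / c_n                    # Cayley-Hamilton: A^{-1} = -M_n / c_n
```

The recursion uses only multiplication, addition, and trace-like operations, which is exactly why it transfers from matrices to non-commutative multivectors in a non-degenerate Clifford algebra.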
In recent years, the concept of introducing physics into machine learning has become widely popular. Most physics-inclusive ML techniques, however, are still limited to a single geometry or a set of parametrizable geometries. Thus, a new model must be trained for each new geometry, even one that is only slightly modified. With this work we introduce a technique for learning approximate solutions to the steady-state Navier--Stokes equations in varying geometries without the need for parametrization. The technique is based on a combination of a U-Net-like CNN and well-established discretization methods from the field of finite differences. The results of our physics-aware CNN are compared to a state-of-the-art data-based approach. Additionally, we show how our approach performs when combined with the data-based approach.
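A minimal example of such a finite-difference building block: the 5-point Laplacian stencil applied as a fixed (non-learned) convolution, the kind of classical discretization a physics-aware CNN can embed alongside learned filters. The grid and test field are assumptions for the illustration, not the paper's flow configuration.

```python
import numpy as np

# 5-point finite-difference Laplacian applied as a fixed convolution
# stencil, checked on u = x^2 + y^2, whose Laplacian is exactly 4.
h = 0.1
xs = np.arange(0.0, 1.0 + h / 2, h)
X, Y = np.meshgrid(xs, xs, indexing="ij")
u = X ** 2 + Y ** 2

# Stencil (u_E + u_W + u_N + u_S - 4 u_C) / h^2 on interior points;
# in a CNN this is a 3x3 convolution kernel with frozen weights.
lap = (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:]
       - 4.0 * u[1:-1, 1:-1]) / h ** 2
```

Because the stencil is exact for quadratics, the discrete Laplacian reproduces the analytic value 4 on every interior point; in a physics-aware network, residuals of such discrete operators can serve as a geometry-independent training signal.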
This paper investigates learning operators for identifying weak solutions to the Navier-Stokes equations. Our objective is to establish a connection between the initial data as input and the weak solution as output. To achieve this, we employ a combination of deep learning methods and a compactness argument to derive learning operators for weak solutions for any large initial data in 2D, and for low-dimensional initial data in 3D. Additionally, we utilize the universal approximation theorem to derive a lower bound on the number of sensors required to achieve accurate identification of weak solutions. Our results demonstrate the potential of deep learning techniques for addressing challenges in fluid mechanics, particularly in identifying weak solutions to the Navier-Stokes equations.