In this paper, we follow the physics-guided modeling approach and integrate a neural differential equation network into the physical structure of a vehicle single-track model. By relying on the kinematic relations of the single-track ordinary differential equations (ODEs), a small neural network and few training samples suffice to substantially improve the model accuracy compared with a purely physics-based vehicle single-track model. More precisely, the sum of squared errors is reduced by 68% in the considered scenario. In addition, we demonstrate that the prediction capabilities of the physics-guided neural ODE model are superior to those of a purely black-box neural differential equation approach.
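As a rough illustration of the hybrid structure (a sketch under assumed state and input conventions, not the authors' exact architecture), the following embeds a small residual network into a kinematic single-track ODE; the wheelbase, state layout, and network size are all hypothetical choices.

```python
import torch
import torch.nn as nn

class PhysicsGuidedSingleTrack(nn.Module):
    """Kinematic single-track (bicycle) ODE augmented by a residual network.

    State x = (X, Y, psi, v): position, yaw angle, speed.
    Input u = (delta, a): steering angle and acceleration.
    The network only corrects the physics-based derivatives, so the
    kinematic structure of the model is preserved.
    """

    def __init__(self, wheelbase=2.7, hidden=16):
        super().__init__()
        self.L = wheelbase
        # Small residual network: few parameters suffice because the
        # physics term already captures the dominant behaviour.
        self.residual = nn.Sequential(
            nn.Linear(6, hidden), nn.Tanh(), nn.Linear(hidden, 4)
        )

    def forward(self, x, u):
        X, Y, psi, v = x.unbind(-1)
        delta, a = u.unbind(-1)
        # Kinematic single-track relations.
        dX = v * torch.cos(psi)
        dY = v * torch.sin(psi)
        dpsi = v / self.L * torch.tan(delta)
        dv = a
        phys = torch.stack([dX, dY, dpsi, dv], dim=-1)
        # Learned correction of the unmodelled dynamics.
        return phys + self.residual(torch.cat([x, u], dim=-1))

def euler_rollout(model, x0, controls, dt=0.01):
    """Integrate the hybrid ODE with explicit Euler steps."""
    x, traj = x0, [x0]
    for u in controls:
        x = x + dt * model(x, u)
        traj.append(x)
    return torch.stack(traj)
```

Because the physics term carries the dominant behaviour, the residual network can stay small and be trained from few samples, which is the effect the abstract reports.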
This paper studies the $p$-biharmonic equation on graphs, which arises in point cloud processing and can be interpreted as a natural extension of the graph $p$-Laplacian from the hypergraph perspective. We investigate the asymptotic behavior of the solution on random geometric graphs as the number of data points goes to infinity, and show that the continuum limit is an appropriately weighted $p$-biharmonic equation with homogeneous Neumann boundary conditions. The result relies on uniform $L^p$ estimates for solutions and gradients of nonlocal and graph Poisson equations; $L^\infty$ estimates of solutions are also obtained as a byproduct.
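In common graph-Laplacian notation, the setting can be sketched as follows; the kernel, scaling, and boundary operators below follow standard random geometric graph conventions and are assumptions, not the paper's exact normalization.

```latex
% Graph Laplacian on n samples with bandwidth eps_n (standard scaling):
\[
  \Delta_n u(x_i) \;=\; \frac{1}{n\,\varepsilon_n^{\,d+2}}
    \sum_{j=1}^{n} \eta\!\Big(\frac{|x_i - x_j|}{\varepsilon_n}\Big)
    \big(u(x_j) - u(x_i)\big).
\]
% Graph p-biharmonic problem:
\[
  \Delta_n\big(|\Delta_n u|^{p-2}\,\Delta_n u\big) \;=\; f
  \quad \text{on } \{x_1,\dots,x_n\},
\]
% Formal continuum limit as n -> infinity (rho = sampling density,
% Delta_rho = rho-weighted Laplacian), with homogeneous Neumann conditions:
\[
  \Delta_\rho\big(|\Delta_\rho u|^{p-2}\,\Delta_\rho u\big) = f
  \ \text{in } \Omega, \qquad
  \partial_\nu u = \partial_\nu\big(|\Delta_\rho u|^{p-2}\Delta_\rho u\big) = 0
  \ \text{on } \partial\Omega.
\]
```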
In this article, we investigate the quantification of uncertainties associated with the electrical properties of graphene nanoribbons. The approach is suited to understanding the effects of missing information caused by the difficulty of fixing some material parameters, such as the band gap, and the strength of the applied electric field. In particular, we focus on extending particle Galerkin methods for kinetic equations to the semiclassical Boltzmann equation for charge transport in graphene nanoribbons with uncertainties. To this end, we develop an efficient particle scheme that allows us to parallelize the computation; after a suitable generalization of the scheme to the case of random inputs, we present a Galerkin reformulation of the particle dynamics, obtained by means of a generalized polynomial chaos approach, which allows the reconstruction of the kinetic distribution. As a consequence, the proposed particle-based scheme preserves the physical properties and the positivity of the distribution function even in the presence of complex scattering in the electron transport equation. The impact of the uncertainty in the band gap and the applied field on the electrical current is analyzed.
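The following minimal sketch illustrates the generalized polynomial chaos (gPC) Galerkin reformulation of particle dynamics for an assumed one-dimensional, collisionless free-flight model with a uniformly distributed uncertainty; it ignores the scattering operator entirely, and all names and dimensions are illustrative.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, Legendre

def gpc_particle_step(V, field, dt, M=4):
    """One Galerkin step of the free-flight dynamics dv/dt = -E(z),
    with uncertainty z uniformly distributed on [-1, 1].

    V     : (N, M) array; row i holds the gPC coefficients of particle
            i's velocity in the normalized Legendre basis.
    field : callable z -> electric field strength E(z).
    """
    # Gauss-Legendre quadrature for the Galerkin projection.
    nodes, weights = leggauss(2 * M)
    # Normalized Legendre basis phi_k = sqrt(2k+1) P_k, orthonormal
    # w.r.t. the uniform density 1/2 on [-1, 1].
    phi = np.stack([Legendre.basis(k)(nodes) * np.sqrt(2 * k + 1)
                    for k in range(M)])
    # Projection of the forcing: f_k = 0.5 * sum_q w_q E(z_q) phi_k(z_q).
    f = 0.5 * (weights * field(nodes)) @ phi.T
    # The forcing is particle-independent here, so every particle's
    # stochastic coefficients receive the same deterministic update.
    return V - dt * f[None, :]
```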
This paper investigates the intricate connection between visual perception and the mathematical modelling of neural activity in the primary visual cortex (V1). The focus is on modelling the visual MacKay effect [MacKay, Nature 1957]. While bifurcation theory has been a prominent mathematical approach for addressing issues in neuroscience, especially in describing spontaneous pattern formation in V1 due to parameter changes, it faces challenges in scenarios with localized sensory inputs. This is evident, for instance, in MacKay's psychophysical experiments, where the redundancy of visual stimulus information results in irregular shapes, making bifurcation theory and multi-scale analysis less effective. To address this, we adopt a mathematical viewpoint based on the input-output controllability of an Amari-type neural fields model. In this framework, the sensory input is treated as a control function: a cortical representation, via the retino-cortical map, of the visual stimulus that captures its distinct features, including the highly localized information at the center of MacKay's funnel pattern "MacKay rays". From a control theory point of view, we discuss the exact controllability property of the Amari-type equation for both linear and nonlinear response functions. For modelling the visual MacKay effect, we adjust the parameter representing intra-neuron connectivity to ensure that, in the absence of sensory input, cortical activity exponentially stabilizes to the stationary state. We then perform quantitative and qualitative studies to demonstrate that the model captures all the essential features of the induced after-image reported by MacKay.
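A minimal one-dimensional sketch of an Amari-type neural field with the sensory input acting as a control function, assuming a Mexican-hat connectivity kernel and explicit Euler time stepping; both choices are illustrative and not the paper's setting.

```python
import numpy as np

def amari_step(a, I, w_kernel, dx, dt, mu=0.9, f=np.tanh):
    """One explicit Euler step of the Amari-type field
    da/dt = -a + mu * (w * f(a)) + I,
    where the sensory input I plays the role of the control.
    In the MacKay-effect setting, mu is tuned so that activity
    decays exponentially to the stationary state when I = 0."""
    conv = dx * np.convolve(w_kernel, f(a), mode="same")
    return a + dt * (-a + mu * conv + I)

# Illustrative usage: localized input at the center of the field.
x = np.linspace(-10.0, 10.0, 401)
dx = x[1] - x[0]
w = np.exp(-x**2) - 0.5 * np.exp(-x**2 / 4)  # Mexican-hat connectivity
a = np.zeros_like(x)
I = np.exp(-x**2 / 0.1)                      # highly localized stimulus
for _ in range(1000):
    a = amari_step(a, I, w, dx, dt=0.01)
```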
Motivated by the widespread use of approximate derivatives in machine learning and optimization, we study inexact subgradient methods with non-vanishing additive errors and step sizes. In the nonconvex semialgebraic setting, under boundedness assumptions, we prove that the method provides points that eventually fluctuate close to the critical set at a distance proportional to $\epsilon^\rho$, where $\epsilon$ is the error in subgradient evaluation and $\rho$ relates to the geometry of the problem. In the convex setting, we provide complexity results for the averaged values. We also obtain byproducts of independent interest, such as descent-like lemmas for nonsmooth nonconvex problems and some results on the limit of affine interpolants of differential inclusions.
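A minimal sketch of the iteration under study, with a constant step size and an additive error of norm at most $\epsilon$ injected into each subgradient evaluation; the random error model here is purely for illustration, as the analysis covers arbitrary bounded errors.

```python
import numpy as np

def inexact_subgradient(oracle, x0, step, eps, n_iter, rng=None):
    """Subgradient method with a non-vanishing additive error:
    each subgradient from the oracle is perturbed by a vector of
    norm at most eps, and a constant step size is used throughout."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, dtype=float)
    iterates = [x.copy()]
    for _ in range(n_iter):
        g = oracle(x)                      # exact subgradient
        e = rng.standard_normal(x.shape)   # additive error, ||e|| <= eps
        e *= eps / max(np.linalg.norm(e), 1e-12)
        x = x - step * (g + e)
        iterates.append(x.copy())
    return np.array(iterates)
```

The paper's result concerns the long-run behaviour of such iterates: they eventually fluctuate near the critical set at a distance of order $\epsilon^\rho$.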
A critical issue in approximating solutions of ordinary differential equations using neural networks is the exact satisfaction of the boundary or initial conditions. For this purpose, neural forms have been introduced, i.e., functional expressions that depend on neural networks and, by design, satisfy the prescribed conditions exactly. Expanding upon prior progress, the present work makes three distinct contributions. First, it presents a novel formalism for crafting optimized neural forms. Second, it outlines a method for establishing an upper bound on the absolute deviation from the exact solution. Third, it introduces a technique for converting problems with Neumann or Robin conditions into equivalent problems with parametric Dirichlet conditions. The proposed optimized neural forms were numerically tested on a set of diverse problems, encompassing first-order and second-order ordinary differential equations, as well as first-order systems. Stiff and delay differential equations were also considered. The obtained solutions were compared against solutions obtained via Runge-Kutta methods and exact solutions wherever available. The reported results and analysis verify that, in addition to the exact satisfaction of the boundary or initial conditions, optimized neural forms provide closed-form solutions of superior interpolation capability and controllable overall accuracy.
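As a sketch of the basic construction (the paper's optimized neural forms are more elaborate), the classical neural form for an initial value problem multiplies the network output by a factor vanishing at $t_0$, so the initial condition holds exactly by design; the architecture below is an arbitrary choice.

```python
import torch
import torch.nn as nn

class NeuralFormIVP(nn.Module):
    """Neural form for the IVP y'(t) = f(t, y), y(t0) = y0:
    psi(t) = y0 + (t - t0) * N(t) satisfies the initial condition
    exactly for any network N, so training only needs to minimize
    the ODE residual."""

    def __init__(self, t0, y0, hidden=32):
        super().__init__()
        self.t0, self.y0 = t0, y0
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.Tanh(), nn.Linear(hidden, 1)
        )

    def forward(self, t):
        return self.y0 + (t - self.t0) * self.net(t)

def ode_residual_loss(psi, f, t):
    """Mean squared ODE residual |psi'(t) - f(t, psi(t))|^2 via autograd;
    t is a column tensor of collocation points."""
    t = t.requires_grad_(True)
    y = psi(t)
    dy = torch.autograd.grad(y, t, torch.ones_like(y), create_graph=True)[0]
    return ((dy - f(t, y)) ** 2).mean()
```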
In this paper, to address optimization problems on a compact matrix manifold, we introduce a novel algorithmic framework called the Transformed Gradient Projection (TGP) algorithm, based on the projection onto this compact matrix manifold. Compared with existing algorithms, the key innovation of our approach lies in the use of a new class of search directions and various stepsizes, including the Armijo, nonmonotone Armijo, and fixed stepsizes, to guide the selection of the next iterate. Our framework offers flexibility by encompassing the classical gradient projection algorithms as special cases and intersecting with the class of retraction-based line-search algorithms. Notably, our focus is on the Stiefel and Grassmann manifolds, where many existing algorithms in the literature can be seen as specific instances within our proposed framework, which also induces several new special cases. We then conduct a thorough exploration of the convergence properties of these algorithms under various search directions and stepsizes. To achieve this, we extensively analyze the geometric properties of the projection onto compact matrix manifolds, allowing us to extend classical inequalities related to retractions from the literature. Building upon these insights, we establish the weak convergence, convergence rate, and global convergence of TGP algorithms under three distinct stepsizes. When the compact matrix manifold is the Stiefel or Grassmann manifold, our convergence results either encompass or surpass those found in the literature. Finally, through a series of numerical experiments, we observe that the TGP algorithms, owing to their increased flexibility in choosing search directions, outperform classical gradient projection and retraction-based line-search algorithms in several scenarios.
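A bare-bones sketch of gradient projection on the Stiefel manifold with an Armijo backtracking stepsize; it uses the plain Euclidean gradient as the search direction, whereas the TGP framework's distinguishing feature is a broader class of transformed directions. Function names, tolerances, and constants are hypothetical.

```python
import numpy as np

def stiefel_projection(Y):
    """Projection onto the Stiefel manifold St(n, p): the nearest
    matrix with orthonormal columns is the polar factor U V^T of Y."""
    U, _, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ Vt

def gradient_projection_armijo(f, grad, X, beta=0.5, sigma=1e-4, n_iter=100):
    """Gradient projection along the projection arc with the classical
    Armijo rule f(X_new) <= f(X) - sigma * <G, X - X_new>."""
    for _ in range(n_iter):
        G = grad(X)
        t, fX = 1.0, f(X)
        while True:
            X_new = stiefel_projection(X - t * G)
            if f(X_new) <= fX - sigma * np.sum(G * (X - X_new)) or t < 1e-12:
                break
            t *= beta
        X = X_new
    return X
```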
Domain decomposition provides an effective way to tackle the difficulty of physics-informed neural networks (PINNs), which struggle to accurately and efficiently solve partial differential equations (PDEs) over the whole domain; however, the lack of efficient tools for handling the interfaces between adjacent sub-domains heavily hinders training and can even lead to discontinuities in the learned solutions. In this paper, we propose a symmetry-group-based domain decomposition strategy that enhances the PINN for solving forward and inverse problems of PDEs possessing a Lie symmetry group. Specifically, for the forward problem, we first deploy the symmetry group to generate dividing lines with known solution information, which can be adjusted flexibly and are used to divide the whole training domain into a finite number of non-overlapping sub-domains; we then use the PINN and symmetry-enhanced PINN methods to learn the solution in each sub-domain, and finally stitch the pieces into the overall solution of the PDE. For the inverse problem, we first let the symmetry group act on the data of the initial and boundary conditions to generate labeled data in the interior of the domain, and then recover the undetermined parameters as well as the solution by training the neural networks in a single sub-domain. Consequently, the proposed method can predict high-accuracy solutions of PDEs on which the vanilla PINN fails over the whole domain and the extended physics-informed neural network fails on the same sub-domains. Numerical results for the Korteweg-de Vries equation with a translation symmetry and the nonlinear viscous fluid equation with a scaling symmetry show that the accuracy of the learned solutions is largely improved.
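As a loose illustration of the data-generation idea for the inverse problem, labeled boundary or initial data can be transported along group orbits into the interior. This is a sketch only: it assumes the target solution is invariant under the one-parameter group being used, a compatibility requirement the paper treats carefully.

```python
import numpy as np

def extend_by_symmetry(points, values, group_action, eps_grid):
    """Generate labeled interior data from boundary/initial data by
    pushing each labeled point along a one-parameter Lie symmetry.

    group_action(x, t, u, eps) returns the transformed (x, t, u).
    Example: a travelling-wave solution of the KdV equation with
    speed c is invariant under the combined translation
        (x, t, u) -> (x + c * eps, t + eps, u),
    so data on the initial line t = 0 is transported into t > 0.
    """
    new_pts, new_vals = [], []
    for (x, t), u in zip(points, values):
        for eps in eps_grid:
            xs, ts, us = group_action(x, t, u, eps)
            new_pts.append((xs, ts))
            new_vals.append(us)
    return np.array(new_pts), np.array(new_vals)
```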
In this paper, we provide a theoretical analysis of a data-free operator learning method based on the classical finite element approximation, called the finite element operator network (FEONet). We first establish the convergence of this method for general second-order linear elliptic PDEs with respect to the parameters of the neural network approximation; in this regard, we address the role of the condition number of the finite element matrix in the convergence of the method. Secondly, we derive an explicit error estimate for the self-adjoint case; for this, we investigate regularity properties of the solution in certain function classes suitable for neural network approximation, verifying a sufficient condition for the solution to have the desired regularity. Finally, we conduct numerical experiments that support the theoretical findings, confirming the role of the condition number of the finite element matrix in the overall convergence.
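A schematic sketch of the data-free training principle, with assumed interfaces rather than the FEONet authors' code: a network maps the discretized PDE data to finite element coefficients and is trained on the residual of the assembled linear system, so the condition number of the finite element matrix directly shapes the optimization landscape.

```python
import torch
import torch.nn as nn

class FEONetSketch(nn.Module):
    """Maps a discretized source term f to a vector of finite element
    coefficients; trained without solution data."""

    def __init__(self, n_dofs, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_dofs, hidden), nn.ReLU(), nn.Linear(hidden, n_dofs)
        )

    def forward(self, f):
        return self.net(f)

def fem_residual_loss(model, A, B, f_batch):
    """Mean squared residual ||A u_theta - B f||^2, where A is the
    assembled stiffness matrix and B the load operator. The
    conditioning of A enters the loss directly, consistent with the
    role the analysis assigns to its condition number."""
    u = model(f_batch)
    r = u @ A.T - f_batch @ B.T
    return (r ** 2).sum(dim=1).mean()
```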
Quantum hypothesis testing (QHT) has been traditionally studied from the information-theoretic perspective, wherein one is interested in the optimal decay rate of error probabilities as a function of the number of samples of an unknown state. In this paper, we study the sample complexity of QHT, wherein the goal is to determine the minimum number of samples needed to reach a desired error probability. By making use of the wealth of knowledge that already exists in the literature on QHT, we characterize the sample complexity of binary QHT in the symmetric and asymmetric settings, and we provide bounds on the sample complexity of multiple QHT. In more detail, we prove that the sample complexity of symmetric binary QHT depends logarithmically on the inverse error probability and inversely on the negative logarithm of the fidelity. As a counterpart of the quantum Stein's lemma, we also find that the sample complexity of asymmetric binary QHT depends logarithmically on the inverse type II error probability and inversely on the quantum relative entropy. We then provide lower and upper bounds on the sample complexity of multiple QHT; improving these bounds remains an intriguing open question. The final part of our paper outlines and reviews how the sample complexity of QHT is relevant to a broad swathe of research areas and can enhance understanding of many fundamental concepts, including quantum algorithms for simulation and search, quantum learning and classification, and foundations of quantum mechanics. As such, we view our paper as an invitation to researchers coming from different communities to study and contribute to the problem of sample complexity of QHT, and we outline a number of open directions for future research.
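In symbols, and suppressing constants, the stated scalings read as follows; the precise dependence on the fixed type I error constraint in the asymmetric case is omitted here.

```latex
% eps = error probability (symmetric case) or type II error
% probability (asymmetric case, at fixed type I error);
% F = fidelity, D = quantum relative entropy.
\[
  n^{*}_{\mathrm{sym}}(\rho,\sigma,\varepsilon)
    \;=\; \Theta\!\left(\frac{\log(1/\varepsilon)}
          {-\log F(\rho,\sigma)}\right),
  \qquad
  n^{*}_{\mathrm{asym}}(\rho,\sigma,\varepsilon)
    \;=\; \Theta\!\left(\frac{\log(1/\varepsilon)}
          {D(\rho\,\Vert\,\sigma)}\right).
\]
```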
In this paper, we propose a novel multiscale model reduction strategy tailored to the Poisson equation in heterogeneous perforated domains. The numerical simulation of this intricate problem is impeded by its multiscale characteristics, necessitating an exceptionally fine mesh to adequately capture all relevant details. To overcome the challenges inherent in the multiscale nature of the perforations, we introduce a coarse space constructed using the Constraint Energy Minimizing Generalized Multiscale Finite Element Method (CEM-GMsFEM). This involves constructing basis functions through a sequence of local energy minimization problems over eigenspaces containing localized information pertaining to the heterogeneities. Through our analysis, we demonstrate that the oversampling layers depend on the local eigenvalues, and hence on the local geometry as well. Additionally, we provide numerical examples to illustrate the efficacy of the proposed scheme.
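Schematically, in standard CEM-GMsFEM notation (an assumed formulation, not the paper's exact statement), each basis function solves a local constrained energy minimization over an oversampled region.

```latex
% psi_j^(i): multiscale basis function attached to eigenfunction j on
% coarse block K_i; K_i^+ is an oversampled region around K_i;
% phi_l^(k) are eigenfunctions of local spectral problems; s is the
% auxiliary inner product.
\[
  \psi_{j}^{(i)} \;=\; \operatorname*{arg\,min}
  \Big\{ \, a(\psi,\psi) \;:\; \psi \in V(K_i^{+}),\;
    s\big(\psi,\phi_{l}^{(k)}\big) = \delta_{jl}\,\delta_{ik}
    \ \text{for all } k,l \, \Big\},
\]
% The analysis links the number of oversampling layers in K_i^+ to
% the local eigenvalues, and hence to the local geometry of the
% perforations.
```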