This paper develops an adaptive neural network basis method for the numerical solution of second-order semilinear partial differential equations (PDEs) with low-regularity solutions in two and three dimensions. The method combines basis functions drawn from a class of shallow neural networks and their multi-scale analogues, a residual-based strategy from adaptive methods, and a non-overlapping domain decomposition. First, guided by the solution residual, we partition the domain $\Omega$ into $K+1$ non-overlapping subdomains $\{\Omega_k\}_{k=0}^K$, such that the exact solution is smooth on $\Omega_0$ and of low regularity on each $\Omega_k$ ($1\le k\le K$). Second, the low-regularity solution on each subdomain $\Omega_k$ ($1\le k\le K$) is approximated by a neural network at a different scale, while the smooth solution on $\Omega_0$ is approximated by the initialized neural network. Third, we determine the unknown coefficients by solving the linear least squares problems directly, or the nonlinear least squares problem via the Gauss-Newton method. The proposed method extends naturally to the multi-level case. Finally, we apply the adaptive method to several peak problems in two and three dimensions to demonstrate its computational efficiency.
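The Gauss-Newton iteration used for nonlinear least squares problems of this kind can be sketched generically as follows; this is an illustrative implementation on a hypothetical curve-fitting residual, not the paper's PDE setup.

```python
import numpy as np

def gauss_newton(residual, jacobian, c0, tol=1e-10, max_iter=50):
    """Gauss-Newton iteration for min_c ||residual(c)||^2."""
    c = np.asarray(c0, dtype=float)
    for _ in range(max_iter):
        r = residual(c)
        J = jacobian(c)
        # Solve the linearized least squares problem J dc ~ -r.
        dc, *_ = np.linalg.lstsq(J, -r, rcond=None)
        c = c + dc
        if np.linalg.norm(dc) < tol:
            break
    return c

# Hypothetical toy model: fit y = exp(a*x) + b at sample points.
x = np.linspace(0.0, 1.0, 20)
y = np.exp(1.3 * x) + 0.5

def residual(c):
    a, b = c
    return np.exp(a * x) + b - y

def jacobian(c):
    a, b = c
    return np.column_stack([x * np.exp(a * x), np.ones_like(x)])

c = gauss_newton(residual, jacobian, [1.0, 0.0])
```

In the zero-residual case, as here, the Gauss-Newton step converges quadratically near the solution.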
This paper presents a fast and robust numerical method for reconstructing point-like sources in the time-harmonic Maxwell's equations given Cauchy data at a fixed frequency. This is an electromagnetic inverse source problem with broad applications, such as antenna synthesis and design, medical imaging, and pollution source tracing. We introduce new imaging functions and a computational algorithm to determine the number of point sources, their locations, and the associated moment vectors, even when these vectors have notably different magnitudes. The number of sources and their locations are estimated from significant peaks of the imaging functions, and the moment vectors are computed via simple explicit formulas. We investigate the theoretical properties and stability of the imaging functions, where the main challenge lies in analyzing the behavior of the dot products between the columns of the imaginary part of the Green's tensor and the unknown moment vectors. Additionally, we extend our method to reconstruct small-volume sources using an asymptotic expansion of their radiated electric field. We provide numerical examples in three dimensions to demonstrate the performance of our method.
A new variant of the GMRES method is presented for solving linear systems with the same matrix and subsequently obtained multiple right-hand sides. The new method retains the following properties of the classical GMRES algorithm. The bases of both the search space and its image are kept orthonormal, which increases the robustness of the method. Moreover, there is no need to store both bases, since they are represented efficiently within a common basis. In addition, our method is theoretically equivalent to the GCR method extended to the case of multiple right-hand sides, but is more numerically robust and requires less memory. The main result of the paper is a mechanism for adding an arbitrary direction vector to the search space, which can easily be adapted to flexible GMRES or GMRES with deflated restarting.
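The underlying idea of reusing an orthonormal set of search directions across subsequently obtained right-hand sides can be illustrated with a minimal GCR-style recycling loop; this is a generic sketch on hypothetical data, not the paper's new GMRES variant.

```python
import numpy as np

def gcr_recycle(A, rhs_list, tol=1e-10, max_iter=200):
    """Solve A x = b for several right-hand sides in sequence,
    reusing the accumulated search directions (GCR-style sketch)."""
    n = A.shape[0]
    P, AP = [], []   # search directions and their images under A (kept orthonormal)
    sols = []
    for b in rhs_list:
        x = np.zeros(n)
        r = b.copy()
        # First, project the new residual onto the directions gathered so far.
        for p, ap in zip(P, AP):
            alpha = ap @ r
            x += alpha * p
            r -= alpha * ap
        it = 0
        while np.linalg.norm(r) > tol * np.linalg.norm(b) and it < max_iter:
            p = r.copy()
            ap = A @ p
            # Orthonormalize A p against previous images (modified Gram-Schmidt).
            for q, aq in zip(P, AP):
                h = aq @ ap
                ap -= h * aq
                p -= h * q
            nrm = np.linalg.norm(ap)
            ap /= nrm
            p /= nrm
            P.append(p)
            AP.append(ap)
            alpha = ap @ r
            x += alpha * p
            r -= alpha * ap
            it += 1
        sols.append(x)
    return sols

# Demo on a small symmetric positive definite system (hypothetical data).
rng = np.random.default_rng(0)
n = 30
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)
b1, b2 = rng.standard_normal(n), rng.standard_normal(n)
x1, x2 = gcr_recycle(A, [b1, b2])
```

The second solve starts from the projection onto the recycled space, so it typically needs far fewer new directions than the first.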
This study presents a novel representation learning model tailored for dynamic networks, which describe the continuously evolving relationships among individuals within a population. The problem is cast as a dimension reduction task in functional data analysis. With dynamic networks represented as matrix-valued functions, our objective is to map this functional data into a set of vector-valued functions in a lower-dimensional learning space. This space, defined as a metric functional space, allows for the calculation of norms and inner products. By constructing this learning space, we address (i) attribute learning, (ii) community detection, and (iii) link prediction and recovery for individual nodes in the dynamic network. Our model also accommodates asymmetric low-dimensional representations, enabling the separate study of nodes' regulatory and receiving roles. Crucially, the learning method accounts for the time dependency of networks, ensuring that representations are continuous over time. The functional learning space we define naturally spans the time frame of the dynamic networks, facilitating both the inference of network links at specific time points and the reconstruction of the entire network structure without direct observation. We validate our approach through simulation studies and real-world applications. In simulations, we compare our method's link-prediction performance to existing approaches under various data corruption scenarios. For real-world applications, we examine a dynamic social network replicated across six ant populations, demonstrating that our low-dimensional learning space effectively captures interactions, the roles of individual ants, and the social evolution of the network. Our findings align with existing knowledge of ant colony behavior.
This study addresses the reduced biquaternion equality-constrained least squares (RBLSE) problem. We develop algebraic techniques to derive both complex and real solutions to the RBLSE problem by utilizing the complex and real forms of reduced biquaternion matrices. Additionally, we conduct a perturbation analysis for the RBLSE problem and establish an upper bound on the relative forward error of these solutions. Numerical examples illustrate the effectiveness of the proposed approaches and verify the accuracy of the established upper bound on the relative forward errors.
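The real-matrix equality-constrained least squares subproblem that such complex and real forms reduce to can be solved by the standard null-space method; the sketch below is a generic illustration with hypothetical data, not the paper's algebraic technique.

```python
import numpy as np

def equality_constrained_lsq(A, b, B, d):
    """Null-space method for min ||A x - b|| subject to B x = d."""
    # Particular solution of the constraint and a basis of its null space.
    x_p, *_ = np.linalg.lstsq(B, d, rcond=None)
    _, s, Vt = np.linalg.svd(B)
    rank = int(np.sum(s > 1e-12 * s[0]))
    N = Vt[rank:].T                       # columns span null(B)
    # Solve the reduced unconstrained problem for the free component.
    y, *_ = np.linalg.lstsq(A @ N, b - A @ x_p, rcond=None)
    return x_p + N @ y

# Demo with hypothetical data: one linear equality constraint.
rng = np.random.default_rng(1)
A = rng.standard_normal((6, 4))
b = rng.standard_normal(6)
B = np.array([[1.0, 1.0, 0.0, 0.0]])
d = np.array([1.0])
x = equality_constrained_lsq(A, b, B, d)
```

The result can be cross-checked against the KKT system $\begin{pmatrix} A^\top A & B^\top \\ B & 0 \end{pmatrix} \begin{pmatrix} x \\ \lambda \end{pmatrix} = \begin{pmatrix} A^\top b \\ d \end{pmatrix}$, which has the same unique minimizer when $A$ restricted to the null space of $B$ has full column rank.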
We propose and analyse a boundary-preserving numerical scheme for the weak approximation of some stochastic partial differential equations (SPDEs) with bounded state space. We impose regularity assumptions on the drift and diffusion coefficients only locally on the state space. In particular, the drift and diffusion coefficients may be non-globally Lipschitz continuous and superlinearly growing. The scheme consists of a finite difference discretisation in space and a Lie--Trotter splitting, followed by exact simulation and exact integration, in time. We prove weak convergence of the scheme of optimal order 1/4 for globally Lipschitz continuous test functions by proving strong convergence towards a strong solution driven by a different noise process. Boundary-preservation follows from the exact simulation and exact integration of the Lie--Trotter substeps. Numerical experiments confirm the theoretical results and demonstrate the effectiveness of the proposed Lie--Trotter-Exact (LTE) scheme compared to existing methods for SPDEs.
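The Lie--Trotter idea of composing exactly solvable substeps can be illustrated on a scalar toy SDE, geometric Brownian motion, where both split flows are exact and the composition preserves positivity; this hypothetical example is only a sketch of the splitting mechanism, not the paper's SPDE scheme.

```python
import numpy as np

def lie_trotter_gbm(x0, mu, sigma, T, n_steps, rng):
    """Lie-Trotter splitting for dX = mu X dt + sigma X dW:
    alternate the exact deterministic flow with the exact noise flow.
    Each factor is positive, so positivity of X is preserved."""
    dt = T / n_steps
    x = x0
    w = 0.0  # accumulated Brownian path, for comparison with the exact solution
    for _ in range(n_steps):
        x *= np.exp(mu * dt)                            # exact integration of dX = mu X dt
        dw = rng.normal(0.0, np.sqrt(dt))
        w += dw
        x *= np.exp(sigma * dw - 0.5 * sigma**2 * dt)   # exact simulation of dX = sigma X dW
    return x, w

# Demo with hypothetical parameters.
rng = np.random.default_rng(42)
x_T, w_T = lie_trotter_gbm(1.0, 0.05, 0.2, 1.0, 64, rng)
```

For this toy model the two sub-flows commute, so the splitting reproduces the exact solution $X_T = X_0 \exp((\mu - \sigma^2/2)T + \sigma W_T)$ up to round-off; for the SPDEs considered in the paper the splitting instead incurs the stated order-1/4 weak error.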
Regularization is a critical technique for ensuring well-posedness when solving inverse problems with incomplete measurement data. Traditionally, the regularization term is designed based on prior knowledge of the unknown signal's characteristics, such as sparsity or smoothness. Inhomogeneous regularization, which incorporates a spatially varying exponent $p$ in the standard $\ell_p$-norm-based framework, has been used to recover signals with spatially varying features. This study introduces weighted inhomogeneous regularization, an extension of the standard approach that incorporates a novel exponent design and spatially varying weights. The proposed exponent design mitigates misclassification when distinct characteristics are spatially close, while the weights address challenges in recovering regions with small-scale features that are inadequately captured by traditional $\ell_p$-norm regularization. Numerical experiments, including synthetic image reconstruction and the recovery of sea ice data from incomplete wave measurements, demonstrate the effectiveness of the proposed method.
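A weighted inhomogeneous penalty of the form $R(u) = \sum_i w_i |u_i|^{p_i}$, with a spatially varying exponent and weights, can be sketched as follows; the indicator-driven exponent design shown here is a hypothetical illustration, not the paper's construction.

```python
import numpy as np

def weighted_inhomogeneous_penalty(u, p, w):
    """R(u) = sum_i w_i |u_i|**p_i with spatially varying exponent p_i
    and weight w_i (sketch of the regularization term)."""
    return float(np.sum(w * np.abs(u) ** p))

# Hypothetical design: p close to 1 (sparsity-promoting) where a feature
# indicator is large, p close to 2 (smoothness-promoting) elsewhere, with
# weights emphasizing small-scale feature regions.
indicator = np.array([0.9, 0.1, 0.0, 0.8])
p = 2.0 - indicator          # exponents in [1, 2]
w = 1.0 + indicator          # larger weight on feature regions
u = np.array([0.5, -1.0, 2.0, 0.1])
R = weighted_inhomogeneous_penalty(u, p, w)
```

With $p_i = 1$ the penalty acts like an $\ell_1$ term favoring sparse, piecewise features, while $p_i = 2$ recovers the smooth Tikhonov-type behavior; the weights rescale how strongly each region is penalized.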
Parameter inference is essential when interpreting observational data using mathematical models. Standard inference methods for differential equation models typically rely on obtaining repeated numerical solutions of the differential equation(s). Recent results have explored how numerical truncation error can have major, detrimental, and sometimes hidden impacts on likelihood-based inference by introducing false local maxima into the log-likelihood function. We present a straightforward approach for inference that eliminates the need for solving the underlying differential equations, thereby completely avoiding the impact of truncation error. Open-access Jupyter notebooks, available on GitHub, allow others to implement this method for a broad class of widely-used models to interpret biological data.
In this contribution we study the formal ability of a multiple-relaxation-times lattice Boltzmann scheme to approximate the isothermal and thermal compressible Navier--Stokes equations with a single particle distribution. More precisely, we consider a total of 12 classical square-lattice Boltzmann schemes with prescribed sets of conserved and nonconserved moments. The question is to determine the algebraic expressions of the equilibrium functions for the nonconserved moments and the relaxation parameters associated with each scheme. We compare the fluid equations with the result of the Taylor expansion method at second-order accuracy for two-dimensional examples with a maximum of 17 velocities and three-dimensional schemes with at most 33 velocities. In some cases, it is not possible to fit the physical model exactly. For several examples, we adjust the Navier--Stokes equations and propose nontrivial expressions for the equilibria.
Large-scale eigenvalue problems arise in various fields of science and engineering and demand computationally efficient solutions. In this study, we investigate subspace approximation for parametric linear eigenvalue problems, aiming to mitigate the computational burden associated with high-fidelity systems. We provide general error estimates under non-simple eigenvalue conditions, establishing the theoretical foundations for our methodology. Numerical examples, ranging from one-dimensional to three-dimensional setups, demonstrate the efficacy of the reduced basis method in handling parametric variations in boundary conditions and coefficient fields, achieving significant computational savings while maintaining high accuracy and making it a promising tool for practical applications in large-scale eigenvalue computations.
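The basic reduced basis idea for parametric eigenvalue problems, projecting onto the span of eigenvector snapshots collected at training parameters, can be sketched as follows; the affine-parametric family below is a hypothetical example, not from the paper.

```python
import numpy as np

def reduced_basis_eig(A_of_mu, mu_train, mu_test, n_modes=3):
    """Reduced basis sketch for a symmetric parametric eigenproblem
    A(mu) x = lambda x: collect a few lowest eigenvectors at training
    parameters, orthonormalize them, and solve the small projected problem."""
    snapshots = []
    for mu in mu_train:
        _, vecs = np.linalg.eigh(A_of_mu(mu))
        snapshots.append(vecs[:, :n_modes])          # lowest modes as snapshots
    V, _ = np.linalg.qr(np.hstack(snapshots))        # orthonormal reduced basis
    A = A_of_mu(mu_test)
    return np.linalg.eigvalsh(V.T @ A @ V)[0]        # reduced smallest eigenvalue

# Hypothetical affine-parametric family A(mu) = A0 + mu * A1.
n = 20
A0 = np.diag(np.arange(1.0, n + 1.0))
A1 = np.diag(np.linspace(2.0, 1.0, n))
A_of_mu = lambda mu: A0 + mu * A1
lam_rb = reduced_basis_eig(A_of_mu, mu_train=[0.0, 1.0, 2.0], mu_test=1.5)
```

By the Rayleigh quotient characterization, the reduced smallest eigenvalue is always an upper bound on the true one, and it is accurate whenever the true eigenvector at the test parameter is well captured by the snapshot span.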
We derive information-theoretic generalization bounds for supervised learning algorithms based on the information contained in predictions rather than in the output of the training algorithm. These bounds improve over the existing information-theoretic bounds, are applicable to a wider range of algorithms, and solve two key challenges: (a) they give meaningful results for deterministic algorithms and (b) they are significantly easier to estimate. We show experimentally that the proposed bounds closely follow the generalization gap in practical scenarios for deep learning.