In this paper, we present a deep surrogate model for learning the Green's function associated with the reaction-diffusion operator on a rectangular domain. A U-Net architecture is utilized to effectively capture the mapping from the source to the solution of the target partial differential equations (PDEs). To enable efficient training of the model without relying on labeled data, we propose a novel loss function that draws inspiration from traditional numerical methods for solving PDEs. Furthermore, a hard-encoding mechanism is employed to ensure that the predicted Green's function exactly satisfies the boundary conditions. Based on the Green's function learned by the trained surrogate model, a fast solver is developed for the corresponding PDEs with different sources and boundary conditions. Various numerical examples are provided to demonstrate the effectiveness of the proposed model.
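To make the role of the learned Green's function concrete: once $G(x,y)$ is known, the fast solve reduces to a quadrature $u(x) \approx \sum_j G(x, y_j) f(y_j) \Delta y$. A minimal sketch, using the known 1D Poisson Green's function as an illustrative stand-in for a learned 2D reaction-diffusion kernel (all function names here are our own, not the paper's):

```python
import numpy as np

def greens_function(x, y):
    # Green's function of -u'' = f on [0,1] with u(0) = u(1) = 0,
    # standing in for a learned reaction-diffusion kernel
    return np.where(x <= y, x * (1.0 - y), y * (1.0 - x))

def solve_with_green(G, f, n=200):
    # midpoint-rule quadrature: u(x) ~ sum_j G(x, y_j) f(y_j) * h
    h = 1.0 / n
    y = (np.arange(n) + 0.5) * h
    return lambda x: np.sum(G(x, y) * f(y)) * h

u = solve_with_green(greens_function, lambda y: np.ones_like(y))
# exact solution of -u'' = 1, u(0) = u(1) = 0 is u(x) = x(1 - x)/2
```

Replacing `greens_function` with the network's prediction turns a single forward pass of the model into a solver for arbitrary sources $f$, which is the point of the fast solver described above.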
Adversarial robustness and generalization are both crucial properties of reliable machine learning models. In this paper, we study these properties in the context of quantum machine learning based on Lipschitz bounds. We derive tailored, parameter-dependent Lipschitz bounds for quantum models with trainable encoding, showing that the norm of the data encoding has a crucial impact on the robustness against perturbations of the input data. Further, we derive a bound on the generalization error which explicitly depends on the parameters of the data encoding. Our theoretical findings give rise to a practical strategy for training robust and generalizable quantum models: regularizing the Lipschitz bound in the cost function. Moreover, we show that, for fixed and non-trainable encodings, as frequently employed in quantum machine learning, the Lipschitz bound cannot be influenced by tuning the parameters. Thus, trainable encodings are crucial for systematically adapting robustness and generalization during training. With numerical results, we demonstrate that Lipschitz-bound regularization indeed leads to substantially more robust and generalizable quantum models.
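The regularization strategy can be illustrated with a classical Fourier-type surrogate, which mirrors the function class realized by quantum models with trainable encoding; the model, names, and bound below are our illustrative assumptions, not the paper's circuit or its exact bound:

```python
import numpy as np

def model(params, x):
    # Fourier-type surrogate f(x) = sum_k a_k cos(w_k x + b_k);
    # the w_k play the role of trainable data-encoding weights
    a, w, b = params
    return np.cos(np.outer(x, w) + b) @ a

def lipschitz_bound(params):
    # |f'(x)| <= sum_k |a_k| |w_k|: the bound grows with the encoding norm
    a, w, _ = params
    return np.sum(np.abs(a) * np.abs(w))

def regularized_loss(params, x, y, lam=0.1):
    # task loss plus a penalty on the Lipschitz bound, so training trades
    # fit against robustness/generalization as described above
    residual = model(params, x) - y
    return np.mean(residual**2) + lam * lipschitz_bound(params)
```

With a fixed, non-trainable `w`, the penalty term is constant and training cannot shrink the bound, which is the abstract's point about trainable encodings.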
In this paper, we consider the numerical solution of a nonlinear Schr\"odinger equation with a spatial random potential. A randomly shifted quasi-Monte Carlo (QMC) lattice rule, combined with time-splitting pseudospectral discretization, is applied and analyzed. The nonlinearity in the equation makes it difficult to estimate the regularity of the solution in the random space. Using the technique of weighted Sobolev spaces, we identify suitable weights and show the existence of a QMC rule that converges optimally, at an almost-linear rate independent of the dimension. The full error estimate of the scheme is established. We present numerical results to verify the accuracy and investigate the wave propagation.
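A randomly shifted rank-1 lattice rule is straightforward to sketch: points $x_i = \mathrm{frac}(i\,z/n + \Delta)$ with a uniform random shift $\Delta$, averaged over several independent shifts. The generating vector and integrand below are illustrative choices of ours, not the optimized vector or the PDE setting of the paper:

```python
import numpy as np

def shifted_lattice_points(z, n, shift):
    # rank-1 lattice: x_i = frac(i * z / n + shift), i = 0, ..., n-1
    i = np.arange(n)[:, None]
    return np.mod(i * np.asarray(z)[None, :] / n + shift, 1.0)

def randomized_qmc(f, z, n, n_shifts=8, rng=None):
    # average over independent uniform shifts: unbiased, and the spread
    # of the per-shift estimates gives a practical error indicator
    rng = np.random.default_rng(rng)
    ests = [f(shifted_lattice_points(z, n, rng.random(len(z)))).mean()
            for _ in range(n_shifts)]
    return float(np.mean(ests)), float(np.std(ests))

# example: integrate prod_j (1 + (x_j - 0.5)) over [0,1]^3 (exact value 1)
z = [1, 17, 31]  # illustrative generating vector, not an optimized one
est, spread = randomized_qmc(lambda x: np.prod(1.0 + (x - 0.5), axis=1),
                             z, n=257, rng=0)
```

In the paper's setting, `f` would be a quantity of interest of the time-splitting pseudospectral solve evaluated at a sample of the random potential.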
In this paper, we study the stability of commonly used filtration functions in topological data analysis under small perturbations of the underlying nonrandom point cloud. Relying on these stability results, we then develop a test procedure to detect and determine structural breaks in a sequence of topological data objects obtained from weakly dependent data. The proposed method applies, for instance, to statistics of persistence diagrams of $\mathbb{R}^d$-valued Bernoulli shift systems under the \v{C}ech or Vietoris-Rips filtration.
In the present work, we introduce a novel approach to enhance the precision of reduced-order models by exploiting a multi-fidelity perspective and DeepONets. Reduced models provide a real-time numerical approximation by simplifying the original model. The error introduced by such an operation is usually neglected, sacrificed in exchange for fast computation. We propose to couple model reduction with machine-learning-based residual learning, such that the above-mentioned error can be learned by a neural network and inferred for new predictions. We emphasize that the framework maximizes the exploitation of high-fidelity information, using it both for building the reduced-order model and for learning the residual. In this work, we explore the integration of proper orthogonal decomposition (POD), and gappy POD for sensor data, with the recent DeepONet architecture. Numerical investigations for a parametric benchmark function and a nonlinear parametric Navier-Stokes problem are presented.
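The split between the POD-based reduced model and the residual to be learned can be sketched with a plain SVD. The analytic snapshot family below is our toy example; in the framework described above, the residual field would be fitted by a DeepONet rather than merely computed:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 100)
params = np.linspace(0.5, 2.0, 20)
# snapshot matrix: each column is a "high-fidelity" solution (toy analytic family)
snapshots = np.stack([np.sin(np.pi * mu * x) for mu in params], axis=1)

# POD basis = leading left singular vectors of the snapshot matrix
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
r = 3
basis = U[:, :r]

def rom_project(u_high):
    # reduced-order approximation: orthogonal projection onto the POD subspace
    return basis @ (basis.T @ u_high)

u_true = np.sin(np.pi * 1.3 * x)   # solution at an unseen parameter value
u_rom = rom_project(u_true)
residual = u_true - u_rom          # the field a DeepONet would learn to predict
```

The same high-fidelity snapshots thus serve twice: once to build `basis`, and once to supply training targets for the residual model.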
Learning distance functions between complex objects, such as the Wasserstein distance to compare point sets, is a common goal in machine learning applications. However, functions on such complex objects (e.g., point sets and graphs) are often required to be invariant to a wide variety of group actions, e.g., permutations or rigid transformations. Therefore, continuous and symmetric product functions (such as distance functions) on such complex objects must also be invariant to the product of such group actions. We call these functions symmetric and factor-wise group invariant (SFGI functions, for short). In this paper, we first present a general neural network architecture for approximating SFGI functions. The main contribution of this paper is to combine this general architecture with a sketching idea, developing a specific and efficient neural network that can approximate the $p$-th Wasserstein distance between point sets. Importantly, the required model complexity is independent of the sizes of the input point sets. On the theoretical front, to the best of our knowledge, this is the first result showing that there exists a neural network capable of approximating the Wasserstein distance with bounded model complexity. Our work provides an interesting integration of sketching ideas for geometric problems with universal approximation of symmetric functions. On the empirical front, we present a range of results showing that our newly proposed neural network architecture performs comparably to or better than other models (including a SOTA Siamese-autoencoder-based approach). In particular, our neural network generalizes significantly better and trains much faster than the SOTA Siamese AE. Finally, this line of investigation could be useful in exploring effective neural network design for solving a broad range of geometric optimization problems (e.g., $k$-means in a metric space).
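The sketching idea, compressing a variable-size point set into a fixed-size, permutation-invariant vector before comparison, can be illustrated with a classical sliced-Wasserstein-style sketch; this is our stand-in for intuition, not the paper's SFGI network:

```python
import numpy as np

def sketch(points, directions, m=16):
    # fixed-size, permutation-invariant sketch: project the set onto each
    # direction and keep m quantiles; output size is independent of len(points)
    proj = points @ directions.T                      # (n_points, n_dirs)
    qs = np.linspace(0.0, 1.0, m)
    return np.stack([np.quantile(proj[:, j], qs)
                     for j in range(directions.shape[0])])

def sliced_w2(A, B, n_dirs=64, m=16, rng=None):
    # compare the two sketches in Euclidean norm: a sliced-Wasserstein-style
    # proxy for the 2-Wasserstein distance between the point sets
    rng = np.random.default_rng(rng)
    dirs = rng.normal(size=(n_dirs, A.shape[1]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    sa, sb = sketch(A, dirs, m), sketch(B, dirs, m)
    return float(np.sqrt(np.mean((sa - sb) ** 2)))
```

Because the sketch depends only on quantiles of projections, reordering the points of either set leaves the output unchanged, and the comparison cost depends on `n_dirs * m`, not on the set sizes.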
In this study, we present a precise anisotropic interpolation error estimate for the Morley finite element method (FEM) and apply it to fourth-order elliptic equations. The analysis does not impose a shape-regularity condition on the mesh; therefore, anisotropic meshes can be used. The main contribution of this study is a new proof for the consistency term, which enables us to obtain an anisotropic consistency error estimate. The core idea of the proof is to use the relationship between the Raviart--Thomas and Morley finite element spaces. Our results show optimal convergence rates and imply that the modified Morley FEM may be effective in controlling the error.
Diffusion models have become a main paradigm for synthetic data generation in many subfields of modern machine learning, including computer vision, language modeling, and speech synthesis. In this paper, we leverage the power of diffusion models to generate synthetic tabular data. The heterogeneous features of tabular data have been a main obstacle to tabular data synthesis, and we tackle this problem by employing an auto-encoder architecture. When compared with state-of-the-art tabular synthesizers, the synthetic tables produced by our model show high statistical fidelity to the real data and perform well in downstream tasks measuring machine learning utility. We conducted experiments over $15$ publicly available datasets. Notably, our model adeptly captures the correlations among features, which has been a long-standing challenge in tabular data synthesis. Our code is available at //github.com/UCLA-Trustworthy-AI-Lab/AutoDiffusion.
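The role of the auto-encoder front end, turning heterogeneous rows into a single continuous vector, can be sketched with a plain one-hot-plus-standardization encoder; this is a hypothetical preprocessing step of our own for intuition, not the paper's architecture:

```python
import numpy as np

def encode_table(numeric, categorical, categories):
    # numeric: (n, p) float columns; categorical: (n,) labels drawn from
    # `categories`. Standardize the numeric part and one-hot the categorical
    # part so each row becomes one continuous vector that an auto-encoder
    # (and hence a diffusion model in its latent space) can operate on.
    numeric = np.asarray(numeric, dtype=float)
    mu, sigma = numeric.mean(axis=0), numeric.std(axis=0) + 1e-8
    z = (numeric - mu) / sigma
    onehot = (np.asarray(categorical)[:, None]
              == np.asarray(categories)[None, :]).astype(float)
    return np.concatenate([z, onehot], axis=1)
```

Mapping mixed discrete/continuous columns into one continuous representation is precisely what lets a single generative model treat the whole row uniformly.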
In this study, we consider the numerical solution of the Neumann initial boundary value problem for the wave equation in 2D domains. Employing the Laguerre transform with respect to the temporal variable, we effectively transform this problem into a series of Neumann elliptic problems. The development of a fundamental sequence for these elliptic equations provides us with the means to introduce modified double layer potentials. Consequently, we are able to derive a sequence of boundary hypersingular integral equations as a result of this transformation. To discretize the system of equations, we apply the Maue transform and implement the Nystr\"om method with trigonometric quadrature techniques. To demonstrate the practical utility of our approach, we provide numerical examples.
In this paper, we propose a new formulation and a suitable finite element method for the steady coupling of viscous flow in deformable porous media using divergence-conforming filtration fluxes. The proposed method is based on the use of parameter-weighted spaces, which allows for a more accurate and robust analysis of the continuous and discrete problems. Furthermore, we conduct a solvability analysis of the proposed method and derive optimal error estimates in appropriate norms. These error estimates are shown to be robust in the case of large Lam\'e parameters and small permeability and storativity coefficients. To illustrate the effectiveness of the proposed method, we provide a few representative numerical examples, including convergence verification, simulation of poroelastic channel flow, and tests of the robustness of block-diagonal preconditioners with respect to model parameters.
We derive information-theoretic generalization bounds for supervised learning algorithms based on the information contained in predictions rather than in the output of the training algorithm. These bounds improve over the existing information-theoretic bounds, are applicable to a wider range of algorithms, and solve two key challenges: (a) they give meaningful results for deterministic algorithms and (b) they are significantly easier to estimate. We show experimentally that the proposed bounds closely follow the generalization gap in practical scenarios for deep learning.